diff --git a/contrib/machine-learning/IntroToCNNs.md b/contrib/machine-learning/IntroToCNNs.md
index 802cfdb..f832980 100644
--- a/contrib/machine-learning/IntroToCNNs.md
+++ b/contrib/machine-learning/IntroToCNNs.md
@@ -25,22 +25,6 @@
-
-Table 1 Heading 1 | Table 1 Heading 2
-
-|Table 1| Middle | Table 2|
-|--|--|--|
-|a| not b|and c |
-
-|b|1|2|3|
-|--|--|--|--|
-|a|s|d|f|
-
 ## Introduction
 
 Convolutional Neural Networks (CNNs) are a specialized type of artificial neural network designed primarily for processing structured grid data like images. CNNs are particularly powerful for tasks involving image recognition, classification, and computer vision. They have revolutionized these fields, outperforming traditional neural networks by leveraging their unique architecture to capture spatial hierarchies in images.
@@ -79,35 +63,8 @@ The convolutional layer is the core building block of a CNN. The layer's parameters consist of a set of learnable filters (kernels).
 
 #### Input Shape
 The dimensions of the input image, including the number of channels (e.g., 3 for RGB images & 1 for Grayscale images).
-1 and 0        9
-
-10111          01110
-10101          01010
-10101          01110
-10101          00010
-10101          00010
-10101          00010
-10111          00010
+<img src="assets/cnn-input_shape.png" />
+
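For readers who want to follow along in code, the 7x5 binary grid for the digit '9' can be written out directly (a NumPy sketch; `digit_nine` is an illustrative name, not one used by the tutorial's code):

```python
import numpy as np

# 7x5 binary image of a handwritten '9', as in the example grid:
# '1' marks the presence of ink, '0' the absence of ink.
digit_nine = np.array([
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 1, 0],
])

print(digit_nine.shape)  # (7, 5): height 7, width 5
```

A real grayscale input would also carry an explicit channel dimension, e.g. shape (7, 5, 1).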
 
 The input matrix represents a simplified binary image of handwritten digits,
@@ -121,39 +78,8 @@ where '1' denotes the presence of ink and '0' represents the absence of ink.
 
 #### Strides
 The step size with which the filter moves across the input image.
-stride = 3     stride = 2
-
-011 - 10       01 - 110
-010 - 10       01 - 010
-01110          01110
-00010          00 - 010
-000 - 10       00 - 010
-000 - 10       00010
-00010          00010
+<img src="assets/cnn-strides.png" />
+
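The stride behaviour can be sanity-checked with the standard output-size formula, floor((input - filter) / stride) + 1 (a sketch; `conv_output_size` is an illustrative helper, not part of the tutorial's CNN class):

```python
def conv_output_size(input_size, filter_size, stride):
    # Number of valid filter positions along one dimension.
    return (input_size - filter_size) // stride + 1

# 7x5 input, 3x3 filter:
print(conv_output_size(7, 3, 1))  # 5 positions vertically with stride 1
print(conv_output_size(7, 3, 2))  # 3 positions vertically with stride 2
print(conv_output_size(5, 3, 2))  # 2 positions horizontally with stride 2
```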
 
 This visualization will help you understand how the filter (kernel) moves across the input matrix with stride values of 3 and 2.
 
@@ -168,33 +94,8 @@ The step size with which the filter moves across the input image.
 
 #### Padding
 Determines whether the output size is the same as the input size ('same') or reduced ('valid').
-padding='same'     padding='valid'
-
-0000000            01110
-0011100            01010
-0010100            01110
-0011100            00010
-0000100            00010
-0000100            00010
-0000100            00010
-0000100
-0000000
+<img src="assets/cnn-padding.png" />
+
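For a 3x3 filter with stride 1, 'same' padding amounts to one ring of zeros around the input, which `np.pad` expresses directly (a sketch; `digit` is the 7x5 example grid):

```python
import numpy as np

digit = np.array([
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 1, 0],
])

# 'same' padding for a 3x3 filter, stride 1: one border of zeros.
padded = np.pad(digit, pad_width=1, mode="constant", constant_values=0)
print(padded.shape)  # (9, 7), matching the 9x7 padded grid in the example
# 'valid' padding adds nothing, so the input stays (7, 5).
```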
 
 `Same` padding is preferred in earlier layers to preserve spatial and edge information, as it can help the network learn more detailed features.
 
@@ -205,31 +106,8 @@ Determines whether the output size is the same as the input size ('same') or reduced ('valid').
 
 #### Filters
 Small matrices that slide over the input data to extract features.
-closed loop    vertical line    both diagonals
-
-111            010              101
-101            010              010
-111            010              101
+<img src="assets/cnn-filters.png" />
+
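The three example filters can be written out as small NumPy matrices (a sketch; the variable names are illustrative):

```python
import numpy as np

# The three 3x3 filters from the example, written out explicitly.
closed_loop = np.array([[1, 1, 1],
                        [1, 0, 1],
                        [1, 1, 1]])

vertical_line = np.array([[0, 1, 0],
                          [0, 1, 0],
                          [0, 1, 0]])

both_diagonals = np.array([[1, 0, 1],
                           [0, 1, 0],
                           [1, 0, 1]])

for name, f in [("closed loop", closed_loop),
                ("vertical line", vertical_line),
                ("both diagonals", both_diagonals)]:
    print(name, f.shape)  # each filter is 3x3
```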
 
 The first filter aims to detect closed loops within the input image, which is highly relevant for recognizing digits with circular or oval shapes, such as '0', '6', '8', or '9'.
 
@@ -242,37 +120,8 @@ Small matrices that slide over the input data to extract features.
 
 #### Output
 A set of feature maps that represent the presence of different features in the input.
-('valid', 1)    ('same', 1)    ('valid', 2)
-
-404             22422          44
-25-3            34843          23
-25-3            22533          02
-032             12533
-032             00323
-                00323
-                00212
+<img src="assets/cnn-ouputs.png" />
+
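A minimal single-channel 'valid' cross-correlation sketch reproduces the output shapes described here (plain NumPy; a simplified stand-in for the convolve logic in the tutorial's CNN class, with illustrative names):

```python
import numpy as np

def convolve2d_valid(image, kernel, stride=1):
    # Slide the kernel over every valid position and sum the products.
    kh, kw = kernel.shape
    out_h = (image.shape[0] - kh) // stride + 1
    out_w = (image.shape[1] - kw) // stride + 1
    out = np.zeros((out_h, out_w), dtype=image.dtype)
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i * stride:i * stride + kh,
                          j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

digit = np.array([[0, 1, 1, 1, 0],
                  [0, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0],
                  [0, 0, 0, 1, 0],
                  [0, 0, 0, 1, 0],
                  [0, 0, 0, 1, 0],
                  [0, 0, 0, 1, 0]])
closed_loop = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])

print(convolve2d_valid(digit, closed_loop, stride=1).shape)  # (5, 3)
print(convolve2d_valid(digit, closed_loop, stride=2).shape)  # (3, 2)
```

The exact values in the example feature maps depend on the filter weights used; only the shapes are asserted here.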
 
 With no padding and a stride of 1, the 3x3 filter moves one step at a time across the 7x5 input matrix. The filter can only move within the original boundaries of the input, resulting in a smaller 5x3 output matrix. This configuration is useful when you want to reduce the spatial dimensions of the feature map while preserving the exact spatial relationships between features.
 
@@ -289,69 +138,29 @@ Pooling layers reduce the dimensionality of each feature map while retaining the most important information.
 
 - **Pooling Size:** The size of the pooling window (e.g., 2x2).
 - **Strides:** The step size for the pooling operation.
 - **Output:** A reduced feature map highlighting the most important features.
-((2,2), 1)    ((3,3), 2)
-
-4884          88
-4884          55
-2553          33
-2553
-0333
-0333
+<img src="assets/cnn-pooling.png" />
+
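The pooled values can be verified with a small max-pooling sketch (plain NumPy; `max_pool2d` and `fmap` are illustrative names, and `fmap` is the 7x5 ('same', 1) convolution output from the earlier example):

```python
import numpy as np

def max_pool2d(feature_map, pool_size=(2, 2), stride=1):
    # Take the maximum over each pooling window.
    ph, pw = pool_size
    out_h = (feature_map.shape[0] - ph) // stride + 1
    out_w = (feature_map.shape[1] - pw) // stride + 1
    out = np.zeros((out_h, out_w), dtype=feature_map.dtype)
    for i in range(out_h):
        for j in range(out_w):
            window = feature_map[i * stride:i * stride + ph,
                                 j * stride:j * stride + pw]
            out[i, j] = window.max()
    return out

# ('same', 1) convolution output from the example.
fmap = np.array([[2, 2, 4, 2, 2],
                 [3, 4, 8, 4, 3],
                 [2, 2, 5, 3, 3],
                 [1, 2, 5, 3, 3],
                 [0, 0, 3, 2, 3],
                 [0, 0, 3, 2, 3],
                 [0, 0, 2, 1, 2]])

print(max_pool2d(fmap, (2, 2), stride=1))  # 6x4 map; first row [4 8 8 4]
print(max_pool2d(fmap, (3, 3), stride=2))  # 3x2 map: [[8 8], [5 5], [3 3]]
```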
 
 - The high values (8) indicate that the "closed loop" filter found a strong match in those regions.
 - The first matrix, of size 6x4, is a downsampled version of the input.
 - The second matrix, of size 3x2, is the result of more aggressive downsampling.
 
 ### Flatten Layer
 The flatten layer converts the 2D matrix data to a 1D vector, which can be fed into a fully connected (dense) layer.
 
 - **Input Shape:** The 2D feature maps from the previous layer.
 - **Output:** A 1D vector that represents the same data in a flattened format.
-After max pooling (with kernel size = 3 and stride = 1)
-
-888888555555333
+<img src="assets/cnn-flattened.png" />
+
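The flattening step is a one-liner in NumPy (a sketch; `pooled` is the 5x3 max-pooled map from the example, whose row-major flattening gives the 8/5/3 sequence shown):

```python
import numpy as np

# 5x3 output of max pooling with kernel size 3 and stride 1.
pooled = np.array([[8, 8, 8],
                   [8, 8, 8],
                   [5, 5, 5],
                   [5, 5, 5],
                   [3, 3, 3]])

# Row-major flattening turns the 2D map into a single feature vector.
flattened = pooled.flatten()
print(flattened)        # [8 8 8 8 8 8 5 5 5 5 5 5 3 3 3]
print(flattened.shape)  # (15,)
```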
 ### Dropout Layer
 Dropout is a regularization technique to prevent overfitting in neural networks by randomly setting a fraction of input units to zero at each update during training time.
 
 - **Input Shape:** The data from the previous layer.
 - **Dropout Rate:** The fraction of units to drop (e.g., 0.5 for 50% dropout).
 - **Output:** The same shape as the input, with some units set to zero.
-dropout rate = 0.3
-
-880800 505055 033
+<img src="assets/cnn-dropout.png" />
+
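Dropout can be sketched as an elementwise mask over the flattened vector (illustrative names; the seeded generator only makes the run repeatable). Like the tutorial's version, this sketch does not rescale the surviving units; common "inverted dropout" implementations additionally divide by (1 - rate):

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded only for repeatability

def dropout(vector, rate=0.3):
    # Keep each unit with probability (1 - rate); dropped units become 0.
    mask = rng.random(vector.shape) >= rate
    return vector * mask

flattened = np.array([8, 8, 8, 8, 8, 8, 5, 5, 5, 5, 5, 5, 3, 3, 3])
dropped = dropout(flattened, rate=0.3)
print(dropped)  # each entry is either 0 or its original value
```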
 The updated 0 values represent the dropped units.
 
@@ -402,9 +211,8 @@ class CNN:
         # Output dimensions
         conv_height = (height - filter_size[0]) // strides[0] + 1
         conv_width = (width - filter_size[1]) // strides[1] + 1
-        output_matrix = np.zeros((conv_height, conv_width, channels))
-
+
         # Convolution Operation
         for i in range(0, height - filter_size[0] + 1, strides[0]):
             for j in range(0, width - filter_size[1] + 1, strides[1]):
@@ -443,7 +251,7 @@ class CNN:
         return input_matrix * dropout_mask
 ```
 
-Run the below command to generate output, based on random input and filter matrices.
+Run the command below to generate output from random input and filter matrices of the given sizes.
 
 ```python
 input_shape = (5, 5)
@@ -470,4 +278,4 @@ dropout_output = cnn_model.dropout(flattened_output, dropout_rate=0.3)
 print("\nDropout Output:\n", dropout_output)
 ```
 
-Feel free to play around with the parameters!
+Feel free to play around with the parameters!
\ No newline at end of file
diff --git a/contrib/machine-learning/assets/cnn-dropout.png b/contrib/machine-learning/assets/cnn-dropout.png
new file mode 100644
index 0000000..9cb18f9
Binary files /dev/null and b/contrib/machine-learning/assets/cnn-dropout.png differ
diff --git a/contrib/machine-learning/assets/cnn-filters.png b/contrib/machine-learning/assets/cnn-filters.png
new file mode 100644
index 0000000..463ca60
Binary files /dev/null and b/contrib/machine-learning/assets/cnn-filters.png differ
diff --git a/contrib/machine-learning/assets/cnn-flattened.png b/contrib/machine-learning/assets/cnn-flattened.png
new file mode 100644
index 0000000..2d1ca6f
Binary files /dev/null and b/contrib/machine-learning/assets/cnn-flattened.png differ
diff --git a/contrib/machine-learning/assets/cnn-input_shape.png b/contrib/machine-learning/assets/cnn-input_shape.png
new file mode 100644
index 0000000..34379f1
Binary files /dev/null and b/contrib/machine-learning/assets/cnn-input_shape.png differ
diff --git a/contrib/machine-learning/assets/cnn-ouputs.png b/contrib/machine-learning/assets/cnn-ouputs.png
new file mode 100644
index 0000000..2797226
Binary files /dev/null and b/contrib/machine-learning/assets/cnn-ouputs.png differ
diff --git a/contrib/machine-learning/assets/cnn-padding.png b/contrib/machine-learning/assets/cnn-padding.png
new file mode 100644
index 0000000..a441b2b
Binary files /dev/null and b/contrib/machine-learning/assets/cnn-padding.png differ
diff --git a/contrib/machine-learning/assets/cnn-pooling.png b/contrib/machine-learning/assets/cnn-pooling.png
new file mode 100644
index 0000000..c3ada5c
Binary files /dev/null and b/contrib/machine-learning/assets/cnn-pooling.png differ
diff --git a/contrib/machine-learning/assets/cnn-strides.png b/contrib/machine-learning/assets/cnn-strides.png
new file mode 100644
index 0000000..26339a9
Binary files /dev/null and b/contrib/machine-learning/assets/cnn-strides.png differ