CIFAR-10 Image Classification with numpy only
An example of image classification on the CIFAR-10 dataset with a Convolutional Neural Network.
Test online here
Content
A short description of the contents. The full code can be found inside the course via the link above:
- CIFAR-10 Image Classification with numpy only
- Loading batches of CIFAR-10 dataset
- Plotting examples of images from CIFAR-10 dataset
- Preprocessing loaded CIFAR-10 dataset
- Saving and Loading serialized models
- Functions for dealing with CNN layers
- Naive Forward Pass for Convolutional layer
- Naive Backward Pass for Convolutional layer
- Naive Forward Pass for Max Pooling layer
- Naive Backward Pass for Max Pooling layer
- Forward Pass for Affine layer
- Backward Pass for Affine layer
- Forward Pass for ReLU layer
- Backward Pass for ReLU layer
- Softmax Classification loss
- Creating Classifier - model of CNN
- Initializing new Network
- Evaluating loss for training ConvNet1
- Calculating scores for predicting ConvNet1
- Functions for Optimization
- Creating Solver Class
- _Reset
- _Step
- Checking Accuracy
- Train
- Overfitting Small Data
- Training Results
- Full Codes
CIFAR-10 Image Classification with numpy only
In this example we'll test a CNN for image classification on the CIFAR-10 dataset.
The following standard and most common parameters can be used and tested:
Parameter | Options |
---|---|
Weights Initialization | HE Normal |
Weights Update Policy | Vanilla SGD, Momentum SGD, RMSProp, Adam |
Activation Functions | ReLU, Sigmoid |
Regularization | L2, Dropout |
Pooling | Max, Average |
Loss Functions | Softmax, SVM |
Abbreviations:
- Vanilla SGD - Vanilla Stochastic Gradient Descent
- Momentum SGD - Stochastic Gradient Descent with Momentum
- RMSProp - Root Mean Square Propagation
- Adam - Adaptive Moment Estimation
- SVM - Support Vector Machine
For the current example, the following architecture will be used:
Input
–> Conv
–> ReLU
–> Pool
–> Affine
–> ReLU
–> Affine
–> Softmax
For the current example, the following parameters will be used:
Parameter | Value |
---|---|
Weights Initialization | HE Normal |
Weights Update Policy | Vanilla SGD |
Activation Functions | ReLU |
Regularization | L2 |
Pooling | Max |
Loss Functions | Softmax |
Loading batches of CIFAR-10 dataset
The first step is to prepare the data from the CIFAR-10 dataset.
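As a sketch of this step (not the repository's exact implementation), a CIFAR-10 batch file can be unpickled and its flat rows reshaped into images; the key names `data` and `labels` are those used by the official CIFAR-10 pickle files:

```python
import pickle

import numpy as np


def load_cifar10_batch(file_path):
    """Load one CIFAR-10 batch file and reshape images to (N, 3, 32, 32)."""
    with open(file_path, 'rb') as f:
        # CIFAR-10 batches were pickled with Python 2, hence the encoding
        batch = pickle.load(f, encoding='latin1')
    # Each row of 'data' is a flat vector of 3072 values: 1024 red,
    # then 1024 green, then 1024 blue, for a 32x32 image
    x = batch['data'].reshape(-1, 3, 32, 32).astype(np.float64)
    y = np.array(batch['labels'])
    return x, y
```

The five training batches (`data_batch_1` ... `data_batch_5`) would then be loaded in a loop and concatenated with `np.concatenate`.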
Plotting examples of images from CIFAR-10 dataset
After all batches are loaded and concatenated together, it is possible to show examples of the training images.
The result can be seen in the image below.
Preprocessing loaded CIFAR-10 dataset
Next, we create a function for preprocessing the CIFAR-10 datasets for further use in the classifier:
- Normalizing data by dividing by 255.0 (optional, up to the researcher)
- Normalizing data by subtracting the mean image and dividing by the standard deviation (optional, up to the researcher)
- Transposing every dataset to make channels come first
- Returning the result as a dictionary
As a result, the datasets will have the following shapes:
x_train: (49000, 3, 32, 32)
y_train: (49000,)
x_validation: (1000, 3, 32, 32)
y_validation: (1000,)
x_test: (1000, 3, 32, 32)
y_test: (1000,)
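The preprocessing steps above can be sketched as follows (a sketch, not the repository's exact code; it assumes images arrive as `(N, 32, 32, 3)` and applies the optional mean-image subtraction):

```python
import numpy as np


def preprocess_cifar10(x_train, y_train, x_test, y_test,
                       n_train=49000, n_validation=1000, n_test=1000):
    """Split, normalize and transpose CIFAR-10 arrays of shape (N, 32, 32, 3)."""
    # Carve a validation split off the end of the training data
    x_val = x_train[n_train:n_train + n_validation]
    y_val = y_train[n_train:n_train + n_validation]
    x_train, y_train = x_train[:n_train], y_train[:n_train]
    x_test, y_test = x_test[:n_test], y_test[:n_test]

    # Normalize to [0, 1] by dividing by 255.0
    x_train = x_train.astype(np.float64) / 255.0
    x_val = x_val.astype(np.float64) / 255.0
    x_test = x_test.astype(np.float64) / 255.0

    # Optional (up to the researcher): subtract the mean training image
    mean_image = x_train.mean(axis=0)
    x_train -= mean_image
    x_val -= mean_image
    x_test -= mean_image

    # Transpose every dataset so channels come first:
    # (N, 32, 32, 3) -> (N, 3, 32, 32)
    x_train = x_train.transpose(0, 3, 1, 2)
    x_val = x_val.transpose(0, 3, 1, 2)
    x_test = x_test.transpose(0, 3, 1, 2)

    return {'x_train': x_train, 'y_train': y_train,
            'x_validation': x_val, 'y_validation': y_val,
            'x_test': x_test, 'y_test': y_test}
```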
Saving and Loading serialized models
Saving the loaded, prepared and preprocessed CIFAR-10 datasets into a pickle file.
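A minimal sketch of serializing the prepared dataset dictionary and loading it back (function names here are illustrative, not the repository's):

```python
import pickle


def save_data(data, file_path):
    """Serialize the prepared dataset dictionary into a pickle file."""
    with open(file_path, 'wb') as f:
        pickle.dump(data, f)


def load_data(file_path):
    """Load the dataset dictionary back from a pickle file."""
    with open(file_path, 'rb') as f:
        return pickle.load(f)
```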
Functions for dealing with CNN layers
Creating functions for CNN layers:
- Naive Forward Pass for Convolutional layer
- Naive Backward Pass for Convolutional layer
- Naive Forward Pass for Max Pooling layer
- Naive Backward Pass for Max Pooling layer
- Forward Pass for Affine layer
- Backward Pass for Affine layer
- Forward Pass for ReLU layer
- Backward Pass for ReLU layer
- Softmax Classification loss
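As one representative example of these layer functions, the naive forward pass for the convolutional layer can be sketched with plain nested loops (a sketch under assumed conventions, not the repository's exact implementation):

```python
import numpy as np


def conv_forward_naive(x, w, b, stride=1, pad=1):
    """Naive forward pass for a convolutional layer.

    x: input of shape (N, C, H, W)
    w: filters of shape (F, C, HH, WW)
    b: biases of shape (F,)
    """
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    # Zero-pad the spatial dimensions only
    x_padded = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)),
                      mode='constant')
    H_out = 1 + (H + 2 * pad - HH) // stride
    W_out = 1 + (W + 2 * pad - WW) // stride
    out = np.zeros((N, F, H_out, W_out))
    for n in range(N):              # every image
        for f in range(F):          # every filter
            for i in range(H_out):  # every output row
                for j in range(W_out):  # every output column
                    window = x_padded[n, :,
                                      i * stride:i * stride + HH,
                                      j * stride:j * stride + WW]
                    out[n, f, i, j] = np.sum(window * w[f]) + b[f]
    return out
```

The four nested loops make the pass "naive": each output value is an explicit dot product between one filter and one input window, which is easy to verify but slow compared to vectorized implementations.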
Creating Classifier - model of CNN
Creating model of CNN Classifier:
- Creating class for ConvNet1
- Initializing new Network
- Evaluating loss for training ConvNet1
- Calculating scores for predicting ConvNet1
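Since the parameters table specifies HE Normal weight initialization, the network's weights could be initialized as in the sketch below (the layer shapes shown are illustrative only, not ConvNet1's actual dimensions):

```python
import numpy as np


def he_normal(shape, fan_in, rng=None):
    """HE Normal initialization: zero-mean Gaussian, std = sqrt(2 / fan_in)."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=shape)


# Conv filters (F, C, HH, WW): each output unit sees C * HH * WW inputs
w_conv = he_normal((32, 3, 7, 7), fan_in=3 * 7 * 7)
# Affine weights (D, H): each output unit sees D inputs
w_affine = he_normal((500, 10), fan_in=500)
```

Scaling by `sqrt(2 / fan_in)` keeps the variance of activations roughly constant across ReLU layers, which is why HE Normal pairs naturally with the ReLU activations used here.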
Defining Functions for Optimization
Different types of optimization rules are used to update the parameters of the model.
Vanilla SGD updating method
The rule for updating parameters is as follows: w = w - learning_rate * dw
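The vanilla SGD rule can be sketched as a one-line numpy update (the function name is illustrative):

```python
import numpy as np


def vanilla_sgd_update(w, dw, learning_rate=1e-2):
    """Vanilla SGD: step the weights against the gradient direction."""
    return w - learning_rate * dw


# Example: a small step shrinks each weight toward lower loss
w = np.array([1.0, 2.0])
dw = np.array([0.5, -0.5])
w = vanilla_sgd_update(w, dw, learning_rate=0.1)
```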
Creating Solver Class
Creating a Solver class for training classification models and for predicting:
- Creating and Initializing class for Solver
- Creating ‘reset’ function for defining variables for optimization
- Creating function ‘step’ for making single gradient update
- Creating function for checking accuracy of the model on the current provided data
- Creating function for training the model
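The single gradient update performed by the 'step' function can be sketched as follows, assuming a hypothetical model interface with a `params` dictionary and a `loss(x, y)` method returning the loss and a matching gradients dictionary:

```python
import numpy as np


def train_step(model, x_train, y_train, batch_size, learning_rate, rng):
    """One solver step: sample a minibatch, get loss and gradients
    from the model, and apply a vanilla SGD update to every parameter."""
    idx = rng.choice(x_train.shape[0], batch_size, replace=False)
    loss, grads = model.loss(x_train[idx], y_train[idx])
    for name in model.params:
        model.params[name] -= learning_rate * grads[name]
    return loss
```

The training loop then just repeats this step, periodically calling the accuracy check on the validation split to track progress.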
Overfitting Small Data
Training Results
The training process of Model #1 with 50 000 iterations is shown in the figure below:
The initialized and trained filters of the ConvNet layer are shown in the figure below:
The training process for the filters of the ConvNet layer is shown in the figure below:
MIT License
Copyright (c) 2018 Valentyn N Sichkar
github.com/sichkar-valentyn
Reference to:
Valentyn N Sichkar. Neural Networks for computer vision in autonomous vehicles and robotics // GitHub platform. DOI: 10.5281/zenodo.1317904