CIFAR 10 Image Classification with PyTorch

CIFAR 10 is a dataset of 60,000 small color images divided into 10 classes. In this blog post, we’ll be using PyTorch to train a convolutional neural network to classify these images.

Introduction to CIFAR 10 and PyTorch

CIFAR 10 is a widely used dataset for image classification. The dataset contains 50,000 training images and 10,000 testing images. The images are of size 32×32 and are labeled with one of ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.

PyTorch is a deep learning framework for Python that enables easy and rapid development of neural network models. In this tutorial, we will use PyTorch to build a simple convolutional neural network for classifying images from the CIFAR 10 dataset.

Preparing the Dataset

In order to train a deep learning model to classify images, we first need a dataset of images to train and test our model on. The CIFAR 10 dataset is a popular dataset for image classification which contains 60,000 color images in 10 classes, with 6000 images per class. The images are 32×32 pixels in size and are labeled with one of the following classes:

– airplane
– automobile
– bird
– cat
– deer
– dog
– frog
– horse
– ship
– truck

Building the Model

In this section, we’ll build and train our image classification model. We’ll be using a convolutional neural network (CNN), which is a type of neural network that is particularly well-suited for image classification tasks.

First, we’ll need to specify the model architecture. We’ll be using a relatively simple CNN with three convolutional layers followed by two fully-connected layers.
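The exact layer sizes aren’t given in this post, so the ones below are an assumption. A minimal sketch of such a model, with each convolution followed by ReLU and 2×2 max pooling so the 32×32 input shrinks to 4×4 before the fully-connected layers, might look like:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Three conv layers; padding=1 preserves spatial size, and the
        # 2x2 pooling in forward() halves it: 32x32 -> 16x16 -> 8x8 -> 4x4
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
        # Two fully-connected layers on the flattened 64*4*4 features
        self.fc1 = nn.Linear(64 * 4 * 4, 256)
        self.fc2 = nn.Linear(256, num_classes)

    def forward(self, x):
        out = F.max_pool2d(F.relu(self.conv1(x)), 2)
        out = F.max_pool2d(F.relu(self.conv2(out)), 2)
        out = F.max_pool2d(F.relu(self.conv3(out)), 2)
        out = out.view(out.size(0), -1)  # flatten to (batch, 1024)
        out = F.relu(self.fc1(out))
        return F.log_softmax(self.fc2(out), dim=1)

model = CNN()
print(model(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 10])
```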


Next, we’ll define the forward pass for our model. This is the part of the code where we actually perform the computations that our model needs to do in order to make predictions. In PyTorch, this is encapsulated in the `forward` method of our `CNN` class.

def forward(self, x):
    # Convolutional feature extractor: ReLU and 2x2 max pooling after each conv
    out = F.max_pool2d(F.relu(self.conv1(x)), 2)
    out = F.max_pool2d(F.relu(self.conv2(out)), 2)
    out = F.max_pool2d(F.relu(self.conv3(out)), 2)

    # Flatten the feature maps into one vector per image
    out = out.view(out.size(0), -1)

    # Fully-connected classifier head
    out = F.relu(self.fc1(out))
    out = self.fc2(out)

    # Log-probabilities over the 10 classes
    return F.log_softmax(out, dim=1)

Training the Model

In this section, we’ll train the model using PyTorch. First, we need to specify the parameters for training:

– The number of epochs: how many times the model passes over the full training set.
– The learning rate: how large a step the optimizer takes when adjusting the weights.
– The batch size: the number of images processed in each training step.

Then, we’ll define our loss function and optimizer. The loss function measures how far the model’s predictions are from the training labels, and the optimizer adjusts the model’s weights to minimize that loss.

After that, we can start training! We’ll loop through each epoch, and in each epoch we’ll loop through each batch of images. For each batch, we’ll pass the images and labels to the model and run optimization. Then we’ll check the loss and accuracy on our validation set to see how well the model is doing.
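A minimal sketch of that loop, assuming the model ends in `log_softmax` (so `NLLLoss` is the matching criterion) and using random tensors in place of the real CIFAR 10 loader — the hyperparameter values here are illustrative, not tuned:

```python
import torch
import torch.nn as nn

# Hyperparameters (illustrative values)
epochs, lr, batch_size = 2, 0.01, 8

# A stand-in linear model and random batch in place of the CNN and CIFAR-10
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10), nn.LogSoftmax(dim=1))
images = torch.randn(batch_size, 3, 32, 32)
labels = torch.randint(0, 10, (batch_size,))

criterion = nn.NLLLoss()  # matches log_softmax outputs
optimizer = torch.optim.SGD(model.parameters(), lr=lr)

for epoch in range(epochs):
    # In the real loop this would iterate over the training DataLoader
    optimizer.zero_grad()             # clear gradients from the previous step
    output = model(images)            # forward pass
    loss = criterion(output, labels)  # compare predictions to labels
    loss.backward()                   # backpropagate
    optimizer.step()                  # update the weights
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```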

Evaluating the Model

Once the model is trained, we need to evaluate it on the test set. For this, we’ll use an `evaluate` function that takes the test dataloader, the model, and a loss function, and returns the average loss and accuracy over all test images.

We can call this function like so:

loss, accuracy = evaluate(test_loader, model, criterion)
print('Test Loss: {:.4f}'.format(loss))
print('Test Accuracy: {:.4f}'.format(accuracy))

This will print out the average loss and accuracy over all test images.
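Since the `evaluate` function itself isn’t shown in this post, here is a minimal version consistent with that call signature, demonstrated on random tensors standing in for the real test set:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def evaluate(loader, model, criterion):
    """Return average loss and accuracy over all batches in `loader`."""
    model.eval()
    total_loss, correct, count = 0.0, 0, 0
    with torch.no_grad():  # no gradients needed for evaluation
        for images, labels in loader:
            output = model(images)
            total_loss += criterion(output, labels).item() * labels.size(0)
            correct += (output.argmax(dim=1) == labels).sum().item()
            count += labels.size(0)
    return total_loss / count, correct / count

# Demo on synthetic data standing in for the CIFAR-10 test set
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10), nn.LogSoftmax(dim=1))
data = TensorDataset(torch.randn(16, 3, 32, 32), torch.randint(0, 10, (16,)))
loss, accuracy = evaluate(DataLoader(data, batch_size=8), model, nn.NLLLoss())
print(f"loss {loss:.4f}, accuracy {accuracy:.2%}")
```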

Saving and Loading the Model

Saving and loading in PyTorch typically goes through the model’s `state_dict` (the path below is a placeholder for wherever you want the weights file):

1. Save the model’s learned weights:, PATH)
2. Re-create the model, then load the weights into it:
model = TheModelClass(*args, **kwargs)

Predictions on the Test Set

Now that our model is trained, we can make predictions on the test set. We load the test set with torchvision’s datasets.CIFAR10 class (passing train=False), then wrap it in a DataLoader that we can iterate over to get the images and labels of the test set.

We’ll also need to specify transforms that normalize the images in the same way that we did for the training set. We can do this by creating a transforms.Compose object with transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) as one of its transforms:

testset = datasets.CIFAR10(root='./data', train=False, download=True, transform=transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]))
test_loader =, batch_size=4, shuffle=False)

Visualizing the Predictions

In this section, we will visualize some of the images that our model was able to correctly label, as well as some it got wrong. We will also see what sort of features the model is picking up on to make these predictions by visualizing the Class Activation Maps.

First, we’ll grab a batch of images and labels from the test set:

dataiter = iter(test_loader)
images, labels = next(dataiter)

Then we run the batch through the model and take the top-1 (highest-scoring) class for each image, so that each prediction lines up with its true label:

output = model(images)
_, top1 = torch.max(output, 1)
top1 = top1.tolist()  # convert to a plain list for indexing into class names
print("Predicted:   ", " ".join("%5s" % classes[x] for x in top1))
print("True labels: ", " ".join("%5s" % classes[x] for x in labels))
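To actually display the images, the normalization has to be undone first. A sketch, assuming the 0.5 mean/std normalization used above and a random batch standing in for the real images:

```python
import torch
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, just writes a file
import matplotlib.pyplot as plt

# A random batch standing in for `images` from the test loader,
# already normalized to [-1, 1]
images = torch.rand(4, 3, 32, 32) * 2 - 1

# Invert Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)): back to [0, 1]
unnormalized = images * 0.5 + 0.5

fig, axes = plt.subplots(1, 4, figsize=(8, 2))
for ax, img in zip(axes, unnormalized):
    # matplotlib expects (H, W, C); PyTorch stores (C, H, W)
    ax.imshow(img.permute(1, 2, 0).numpy())
    ax.axis("off")
fig.savefig("predictions.png")
```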


We’ve now trained a convolutional neural network to accurately classify images from the CIFAR 10 dataset using PyTorch. We’ve seen how to build a convolutional layer, add activation functions, and build a network that can achieve over 80% accuracy on the test set.
