Case Study

CIFAR-10 Image Classification (NetA–NetD)

2025
  • Python
  • PyTorch
  • Deep Learning
  • Neural Networks

Built and trained several PyTorch neural networks (NetA–NetD) on CIFAR-10 to classify 32×32 color images into 10 classes, comparing fully connected and convolutional models trained on GPU.

Problem & Motivation:

Design PyTorch neural networks that meet the required validation accuracy on CIFAR-10, and build an understanding of how architecture choices affect image classification performance.

Data & Approach:

  • Loaded the CIFAR-10 train/validation sets with torchvision, applied the given tensor/normalize transforms, and used DataLoader batches on GPU.
  • Implemented NetA and NetB as fully connected networks that flatten each image and pass it through one or two hidden layers with ReLU.
  • Implemented NetC as a small convolutional network with a conv → ReLU → max-pool block, then a fully connected layer to 10 outputs.
  • Designed NetD with two convolutional layers and two fully connected layers, following the assignment rules.
  • Trained all nets with cross-entropy loss, the Adam optimizer, and the provided train/accuracy/plot_history helpers.
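The loading and batching step above can be sketched as follows. To keep the example self-contained it uses a random stand-in for the CIFAR-10 tensors rather than downloading the dataset (in the real project this came from `torchvision.datasets.CIFAR10`), and the normalization statistics shown are the commonly used CIFAR-10 values, not necessarily the assignment's exact transforms:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Commonly used CIFAR-10 channel statistics; the assignment's exact
# normalization values are an assumption here.
CIFAR10_MEAN = (0.4914, 0.4822, 0.4465)
CIFAR10_STD = (0.2470, 0.2435, 0.2616)

def normalize(batch: torch.Tensor) -> torch.Tensor:
    """Normalize a batch of 3x32x32 images channel-wise."""
    mean = torch.tensor(CIFAR10_MEAN).view(1, 3, 1, 1)
    std = torch.tensor(CIFAR10_STD).view(1, 3, 1, 1)
    return (batch - mean) / std

# Stand-in for the CIFAR-10 training set: random images in [0, 1]
# with labels in 0..9, so this sketch runs without downloading data.
images = torch.rand(256, 3, 32, 32)
labels = torch.randint(0, 10, (256,))
loader = DataLoader(TensorDataset(normalize(images), labels),
                    batch_size=64, shuffle=True)

# Move each batch onto the GPU when one is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
for xb, yb in loader:
    xb, yb = xb.to(device), yb.to(device)
    break  # one batch is enough for the demonstration
```

With the real dataset, the same `DataLoader` call wraps the torchvision dataset directly and the normalization lives in a `transforms.Compose` pipeline.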
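A minimal sketch of the NetC-style architecture and one training step with Adam and cross-entropy might look like the following. The channel count, kernel size, and learning rate are illustrative assumptions, and a random CIFAR-10-shaped batch stands in for real data:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NetC(nn.Module):
    """Sketch of the small conv net: conv -> ReLU -> max-pool -> linear.
    Channel count and kernel size are illustrative assumptions."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)  # 3x32x32 -> 16x32x32
        self.pool = nn.MaxPool2d(2)                             # 16x32x32 -> 16x16x16
        self.fc = nn.Linear(16 * 16 * 16, 10)                   # flatten -> 10 class scores

    def forward(self, x):
        x = self.pool(F.relu(self.conv(x)))
        return self.fc(x.flatten(1))

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = NetC().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a synthetic CIFAR-10-shaped batch.
xb = torch.randn(8, 3, 32, 32, device=device)
yb = torch.randint(0, 10, (8,), device=device)
optimizer.zero_grad()
logits = model(xb)          # raw class scores, shape (8, 10)
loss = criterion(logits, yb)
loss.backward()
optimizer.step()
```

NetD extends this pattern with a second conv block and a second fully connected layer; the training step is unchanged.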

Results:

  • NetA and NetB reached validation accuracies in the low 50% range, establishing a baseline for what simple fully connected networks can achieve on CIFAR-10.
  • NetC improved validation accuracy to the mid-60% range, and NetD reached a best validation accuracy of about 71%.
  • Training curves sat above validation curves in accuracy, indicating some overfitting, but the gains from adding convolutional layers were still clear.
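Comparing training and validation curves depends on measuring accuracy on both splits each epoch. A minimal version of such an accuracy helper (the exact signature of the provided helper is an assumption) could look like this, demonstrated here on random data with an untrained linear classifier:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

@torch.no_grad()
def accuracy(model, loader, device):
    """Fraction of correctly classified examples over a DataLoader."""
    model.eval()
    correct = total = 0
    for xb, yb in loader:
        preds = model(xb.to(device)).argmax(dim=1)  # predicted class per example
        correct += (preds == yb.to(device)).sum().item()
        total += yb.size(0)
    return correct / total

# Demo: an untrained linear classifier on random flattened "images"
# should land near chance level (~10% for 10 classes).
device = torch.device("cpu")
data = TensorDataset(torch.randn(128, 3 * 32 * 32), torch.randint(0, 10, (128,)))
acc = accuracy(nn.Linear(3 * 32 * 32, 10), DataLoader(data, batch_size=32), device)
```

Running this on the training and validation loaders after each epoch yields the two curves whose gap signals overfitting.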

Limitations:

Hyperparameters were not tuned in depth, and no regularization beyond what the assignment required was explored.