A deep learning project that focuses on building and evaluating an Autoencoder Neural Network for image reconstruction using the MNIST handwritten digits dataset.
Click below to open the notebook directly in Google Colab:
This project demonstrates how an Autoencoder model can learn compressed representations of image data and reconstruct the original images.
The workflow includes:
- Loading the MNIST dataset
- Data preprocessing and normalization
- Designing the encoder–decoder architecture
- Training and validating the model
- Visualizing loss curves
- Comparing original and reconstructed images
The primary goal is to build a model capable of efficient data compression and accurate image reconstruction.
- Understand the working of Autoencoders (Encoder + Decoder)
- Compress 28×28 images into a lower-dimensional latent space
- Reconstruct images from compressed representations
- Evaluate reconstruction performance
- Visualize and interpret output quality
The dataset used is the MNIST Handwritten Digits Dataset.
| Feature | Description |
|---|---|
| Images | 28×28 grayscale images |
| Classes | Digits (0–9) |
| Training Samples | 60,000 |
| Test Samples | 10,000 |
The following preprocessing steps were applied:
- Pixel values scaled to the range 0–1
- Images reshaped into vectors (784) or maintained in 28×28 format
- Used predefined MNIST training and test datasets
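The preprocessing steps above can be sketched as follows. A synthetic array stands in for the output of `keras.datasets.mnist.load_data()` so the snippet runs without downloading the dataset; the shapes match the real training split.

```python
import numpy as np

# Synthetic stand-in for the MNIST training images returned by
# keras.datasets.mnist.load_data(); shape and dtype match the real data.
rng = np.random.default_rng(0)
x_train = rng.integers(0, 256, size=(60000, 28, 28), dtype=np.uint8)

# Scale pixel values from 0-255 down to the 0-1 range.
x_train = x_train.astype("float32") / 255.0

# Flatten each 28x28 image into a 784-dimensional vector
# for the fully connected encoder.
x_train = x_train.reshape(len(x_train), 784)

print(x_train.shape)  # (60000, 784)
```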
A fully connected Autoencoder was implemented using TensorFlow / Keras.
**Encoder**
- Dense layers to compress the input data
- Generates a compact latent representation

**Decoder**
- Dense layers to reconstruct the image
- Output layer uses sigmoid activation

**Loss Function**
- Binary Crossentropy / Mean Squared Error
The model was trained using the training dataset and evaluated on the test dataset.
- Optimizer: Adam
- Epochs: (as defined in notebook)
- Batch Size: (as defined in notebook)
- Training Loss
- Validation Loss
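The training setup can be sketched as follows. A tiny random dataset and a small model stand in for the real MNIST arrays so the snippet runs quickly; the epoch count and batch size here are placeholders for the values defined in the notebook.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Tiny synthetic stand-in data; the notebook trains on the
# full normalized MNIST train/test arrays.
rng = np.random.default_rng(0)
x_train = rng.random((256, 784), dtype=np.float32)
x_test = rng.random((64, 784), dtype=np.float32)

autoencoder = keras.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(784, activation="sigmoid"),
])
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# An autoencoder is trained to reproduce its input, so the
# targets are the inputs themselves.
history = autoencoder.fit(
    x_train, x_train,
    epochs=2,               # placeholder; the notebook defines the real count
    batch_size=64,          # placeholder as well
    validation_data=(x_test, x_test),
    verbose=0,
)
print(sorted(history.history))  # ['loss', 'val_loss']
```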
- Displays encoder and decoder architecture
- Visual comparison of training vs validation loss
- Helps identify learning trends and overfitting
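Plotting the two curves can be sketched like this. The loss values below are illustrative numbers; in the notebook they come from `history.history` after training.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

# Illustrative loss values; in the notebook these come from
# history.history["loss"] and history.history["val_loss"].
loss = [0.25, 0.18, 0.14, 0.12, 0.11]
val_loss = [0.24, 0.19, 0.16, 0.15, 0.145]

plt.plot(loss, label="Training Loss")
plt.plot(val_loss, label="Validation Loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plt.savefig("loss_curves.png")
```

A widening gap between the two curves on such a plot is the usual sign of overfitting.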
Model performance is evaluated by comparing:
- Original images
- Reconstructed images

Observations:

- Reconstructed images closely resemble the original digits
- Slight blurring occurs due to compression
- Key visual features are preserved effectively
The autoencoder successfully learned meaningful compressed representations of the MNIST dataset. Both training and validation loss decreased steadily, indicating stable learning.
Although reconstructed images may lose some fine details, the overall structure and digit shapes are preserved. This demonstrates the effectiveness of autoencoders in dimensionality reduction and feature learning.
| Tool | Purpose |
|---|---|
| Python | Programming language |
| TensorFlow / Keras | Deep learning framework |
| NumPy | Numerical computation |
| Matplotlib | Visualization |
| Scikit-learn | Preprocessing |
| Google Colab | Development environment |
autoencoder-mnist/
│
├── DL_Assignment_3_AutoEncoders.ipynb
├── README.md
└── DL Assignment 3 - Auto Encoders.pdf
- Click the Google Colab button above
- If running locally, install the dependencies: pip install tensorflow numpy matplotlib scikit-learn
- Execute all cells step-by-step
- Train the model
- Visualize reconstruction results
This project was developed as part of a Deep Learning assignment, demonstrating the implementation of an Autoencoder Neural Network for image reconstruction. It covers data preprocessing, model design, training, evaluation, and visualization.
- Uses a basic fully connected architecture instead of CNN-based models
- Reconstruction may appear slightly blurred due to compression
- Limited to the grayscale MNIST dataset
- Not suitable for complex or high-resolution images
- Minimal hyperparameter tuning
- Limited generalization to other datasets
- Implement Convolutional Autoencoders (CNN-based)
- Explore Variational Autoencoders (VAE)
- Apply to complex datasets like CIFAR-10
- Perform advanced hyperparameter tuning
- Build a Denoising Autoencoder
- Deploy using Streamlit / Flask
Name: Laya Mary Joy
Organization: Entri Elevate
Date: March 28, 2026
I would like to thank Entri Elevate for their valuable guidance and support throughout this project.