
PRACTICAL NO:-13

Name : Shinde Jotiram Navanath


Class : T.E. AI&DS
Roll No. : 25

TITLE:- MNIST Handwritten Character Detection using PyTorch, Keras and TensorFlow

PROGRAM:-

import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

transform = transforms.ToTensor()
train_loader = DataLoader(
    datasets.MNIST('./data', train=True, download=True, transform=transform),
    batch_size=32, shuffle=True)
test_loader = DataLoader(
    datasets.MNIST('./data', train=False, download=True, transform=transform),
    batch_size=32)

class SimpleCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 8, 3)        # 1x28x28 input -> 8x26x26 feature maps
        self.pool = nn.MaxPool2d(2, 2)         # 8x26x26 -> 8x13x13
        self.fc1 = nn.Linear(8 * 13 * 13, 10)  # flatten to 10 class scores

    def forward(self, x):
        x = self.pool(torch.relu(self.conv1(x)))
        x = x.view(-1, 8 * 13 * 13)
        return self.fc1(x)

model = SimpleCNN()
loss_fn = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

for epoch in range(3):
    for images, labels in train_loader:
        preds = model(images)
        loss = loss_fn(preds, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"Epoch {epoch+1} done")
Epoch 1 done
Epoch 2 done
Epoch 3 done

correct, total = 0, 0
with torch.no_grad():
    for images, labels in test_loader:
        preds = model(images)
        correct += (preds.argmax(1) == labels).sum().item()
        total += labels.size(0)

print(f"Accuracy: {100 * correct / total:.2f}%")

Accuracy: 97.69%
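
The title also names Keras and TensorFlow. Below is a minimal sketch (not part of the program above, assuming TensorFlow 2.x with its bundled tf.keras API) of the same one-convolution, one-dense-layer model trained on the same MNIST data:

import tensorflow as tf

# Load MNIST, add a channel dimension, and scale pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

# Same architecture as the PyTorch model: one conv layer, pooling, one dense layer
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10)          # raw logits, like the PyTorch fc1 layer
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=3, batch_size=32)
test_loss, test_acc = model.evaluate(x_test, y_test)
print(f"Accuracy: {100 * test_acc:.2f}%")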
