
Final Year Practical File

On
DEEP LEARNING PRACTICAL FILE

Submitted in Partial Fulfillment of the Requirements for the Award of


Bachelor of Technology
in
Computer Science and Engineering

Submitted by

VIVEK SHARMA (2100640100120)

HINDUSTAN COLLEGE OF SCIENCE AND TECHNOLOGY, FARAH, MATHURA
ACADEMIC SESSION 2024-2025
Deep Learning Lab Experiments

AIM - Describe the statistical measures of the dataset and represent the relationships between fields of the dataset using different types of graphs. Perform a train-test split operation on the dataset.

CODE USED :
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split

# Load dataset
df = pd.read_csv('/content/Churn_Modelling.csv')

# Display the first few rows
print(df.head())

# EDA: summary statistics
print(df.describe())

# Missing values
print(df.isnull().sum())

# Data visualization
sns.pairplot(df)
plt.show()

# Train-test split: 'Exited' is the churn target column (confirm with df.columns)
X = df.drop('Exited', axis=1)
y = df['Exited']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
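
A quick sanity check (an addition, not part of the original file) to confirm the 80/20 split:

# Verify the split shapes (8000 train / 2000 test rows for this 10,000-row dataset)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)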

OUTPUT
AIM: Use different types of encoding methods to convert the categorical data fields of the dataset into numeric form.

CODE USED:
# Basic statistics
# Select only numeric columns for calculating the mean
numeric_df = df.select_dtypes(include=['number'])
print(numeric_df.mean())

# Median and standard deviation for numeric columns
print(numeric_df.median())
print(numeric_df.std())

# Correlation heatmap for numeric columns
sns.heatmap(numeric_df.corr(), annot=True, cmap='coolwarm')
plt.show()

# Pair plot: pass selected numeric columns via 'vars' to avoid issues
# with non-numeric data
sns.pairplot(df, vars=['CreditScore', 'Age', 'Balance', 'EstimatedSalary'])
plt.show()

OUTPUT:

Mean:
RowNumber 5.000500e+03
CustomerId 1.569094e+07
CreditScore 6.505288e+02
Age 3.892180e+01
Tenure 5.012800e+00
Balance 7.648589e+04
NumOfProducts 1.530200e+00
HasCrCard 7.055000e-01
IsActiveMember 5.151000e-01
EstimatedSalary 1.000902e+05
Exited 2.037000e-01
dtype: float64

Median:
RowNumber 5.000500e+03
CustomerId 1.569074e+07
CreditScore 6.520000e+02
Age 3.700000e+01
Tenure 5.000000e+00
Balance 9.719854e+04
NumOfProducts 1.000000e+00
HasCrCard 1.000000e+00
IsActiveMember 1.000000e+00
EstimatedSalary 1.001939e+05
Exited 0.000000e+00
dtype: float64

Standard deviation:
RowNumber 2886.895680
CustomerId 71936.186123
CreditScore 96.653299
Age 10.487806
Tenure 2.892174
Balance 62397.405202
NumOfProducts 0.581654
HasCrCard 0.455840
IsActiveMember 0.499797
EstimatedSalary 57510.492818
Exited 0.402769
dtype: float64
AIM: Implement a Perceptron model to classify the AND gate logic function, and implement a Perceptron model to solve the OR gate classification problem.

CODE USED :

import pandas as pd
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

# 'Geography' is the categorical column encoded here; substitute your own column name as needed
categorical_column_name = 'Geography'

# Label Encoding
le = LabelEncoder()
df['encoded_' + categorical_column_name] = le.fit_transform(df[categorical_column_name])

# One-hot encoding
df = pd.get_dummies(df, columns=[categorical_column_name])
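
As a hedged aside (an addition, not in the original file), 'Gender' is another categorical column in this dataset that can be encoded the same way; drop_first=True avoids a redundant dummy column:

# One-hot encode 'Gender' as well, dropping the first level
df = pd.get_dummies(df, columns=['Gender'], drop_first=True)
print(df.filter(like='Gender').head())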

from sklearn.linear_model import Perceptron

# AND gate data
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 0, 1]

# Perceptron model
model = Perceptron(max_iter=1000)
model.fit(X, y)

# Prediction
predictions = model.predict(X)
print(predictions)

[0 0 0 1]

# OR gate data
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 1]

# Perceptron model
model = Perceptron(max_iter=1000)
model.fit(X, y)

# Prediction
predictions = model.predict(X)
print(predictions)

[0 1 1 1]
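
A small follow-up sketch (an addition, using scikit-learn's documented coef_ and intercept_ attributes) to inspect the decision boundary the OR-gate perceptron learned:

# Weights and bias of the trained perceptron
print('weights:', model.coef_, 'bias:', model.intercept_)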
AIM: Implement a deep learning model for the classification problem of bank customers churning, using the bank churn dataset attached with this list of experiments.

CODE USED:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.model_selection import train_test_split
from sklearn.compose import ColumnTransformer
import pandas as pd

# Load dataset
df = pd.read_csv('/content/Churn_Modelling.csv')

# Verify the column names before selecting the target
print(df.columns)

# Preprocessing: 'Exited' is the column recording customer churn
# (RowNumber, CustomerId and Surname are identifiers; dropping them as well would be sensible)
X = df.drop('Exited', axis=1)
y = df['Exited']

# Separate numeric and categorical features
numeric_features = X.select_dtypes(include=['number']).columns
categorical_features = X.select_dtypes(include=['object']).columns

# Apply StandardScaler to numeric features and OneHotEncoder to categorical features
preprocessor = ColumnTransformer(
    transformers=[
        ('num', StandardScaler(), numeric_features),
        ('cat', OneHotEncoder(handle_unknown='ignore', sparse_output=False), categorical_features),
    ])

X_processed = preprocessor.fit_transform(X) # Apply the transformations

# Train-test split
X_train, X_test, y_train, y_test = train_test_split(X_processed, y, test_size=0.2, random_state=42)

# Define model
# Adjust input_dim to match the number of features after preprocessing
model = Sequential()
model.add(Dense(64, input_dim=X_train.shape[1], activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# Compile and train
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=10, batch_size=32)

# Evaluate
score = model.evaluate(X_test, y_test)
print(f'Accuracy: {score[1]}')
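
A minimal follow-up sketch (an addition, assuming the conventional 0.5 threshold) to turn the sigmoid outputs into class labels:

import numpy as np

# Probabilities for the first five test customers, rounded to 0/1 (1 = churn)
probs = model.predict(X_test[:5])
print(np.round(probs).astype(int).ravel())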
OUTPUT:

AIM:- Implement a basic Artificial Neural Network (ANN) with a Multilayer Perceptron (MLP) architecture to solve a classification problem on the Iris dataset. Implement PCA for feature extraction, apply it to a dataset to reduce its dimensionality, train a deep learning model on the extracted features, and evaluate the performance.

CODE USED:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder

# Load Iris dataset
iris = load_iris()
X = iris.data
y = iris.target

# Encode labels
le = LabelEncoder()
y_encoded = le.fit_transform(y)

# Train-test split
X_train, X_test, y_train, y_test = train_test_split(X, y_encoded, test_size=0.2, random_state=42)

# Define model
model = Sequential()
model.add(Dense(64, input_dim=X_train.shape[1], activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(3, activation='softmax'))

# Compile and train
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=10, batch_size=32)

# Evaluate
score = model.evaluate(X_test, y_test)
print(f'Accuracy: {score[1]}')
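
A minimal sketch (an addition) mapping the softmax outputs back to species names using load_iris's target_names:

# Predicted class index -> species name for the first five test samples
pred_classes = model.predict(X_test).argmax(axis=1)
print(iris.target_names[pred_classes][:5])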

OUTPUT:-

AIM:- Demonstrate the vanishing gradient problem by training a deep neural network: compare networks using different activation functions (sigmoid vs. ReLU) and visualize the gradient flow to understand the vanishing gradient effect. Implement a Perceptron model for a classification problem: upload a dataset, perform the EDA and preprocessing operations, and classify the target variable.

CODE USED:-

from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline

# Load dataset
df = pd.read_csv('/content/Churn_Modelling.csv')

# Check the actual column names in the DataFrame
print(df.columns)

# Preprocessing: 'Exited' is the target variable in this dataset
X = df.drop('Exited', axis=1)
y = df['Exited']

# Separate numeric and categorical features
numeric_features = X.select_dtypes(include=['number']).columns
categorical_features = X.select_dtypes(include=['object']).columns

# Create a pipeline to handle numeric features
numeric_transformer = Pipeline(steps=[
    ('scaler', StandardScaler())
])

# Use 'drop' directly in the ColumnTransformer for categorical features
preprocessor = ColumnTransformer(
    transformers=[
        ('num', numeric_transformer, numeric_features),
        ('cat', 'drop', categorical_features)  # directly drop categorical features
    ])

# Apply the preprocessing pipeline
X_processed = preprocessor.fit_transform(X)

# PCA for dimensionality reduction
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X_processed)

# Train-test split
X_train, X_test, y_train, y_test = train_test_split(X_pca, y, test_size=0.2, random_state=42)

# Model training
model = RandomForestClassifier()
model.fit(X_train, y_train)

# Evaluate
score = model.score(X_test, y_test)
print(f'Accuracy: {score}')
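
A short check (an addition, using PCA's explained_variance_ratio_ attribute) on how much variance the two principal components actually retain before trusting the downstream accuracy:

print(pca.explained_variance_ratio_)        # variance share per component
print(pca.explained_variance_ratio_.sum())  # total variance retained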

OUTPUT:-
AIM:- Implement a CNN model to classify the target variable in the dataset. Implement the Chi-Square test for feature selection, apply it to a dataset (e.g., the Iris dataset), train a deep learning model (e.g., a neural network) using the selected features, and evaluate its performance.

CODE USED:-
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
import numpy as np

# Generate dummy data
X = np.random.rand(1000, 10)
y = np.random.randint(2, size=1000)

# Define models
def create_model(activation_func):
    model = Sequential()
    model.add(Dense(64, input_dim=10, activation=activation_func))
    model.add(Dense(32, activation=activation_func))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model

# Train models with different activations
model_sigmoid = create_model('sigmoid')
model_relu = create_model('relu')

model_sigmoid.fit(X, y, epochs=10, batch_size=32)
model_relu.fit(X, y, epochs=10, batch_size=32)
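
The AIM asks to visualize the gradient flow; a minimal sketch (an addition, not from the original file) that compares the mean absolute kernel gradient per layer using tf.GradientTape:

import matplotlib.pyplot as plt

def layer_gradient_norms(model):
    # Mean absolute gradient of each Dense kernel for one forward/backward pass
    with tf.GradientTape() as tape:
        preds = model(X, training=True)
        loss = tf.keras.losses.binary_crossentropy(
            y.reshape(-1, 1).astype('float32'), preds)
    grads = tape.gradient(loss, model.trainable_variables)
    return [tf.reduce_mean(tf.abs(g)).numpy()
            for g, v in zip(grads, model.trainable_variables)
            if 'kernel' in v.name]

# Smaller gradients in the early layers of the sigmoid model indicate vanishing gradients
plt.plot(layer_gradient_norms(model_sigmoid), marker='o', label='sigmoid')
plt.plot(layer_gradient_norms(model_relu), marker='o', label='relu')
plt.xlabel('Layer index')
plt.ylabel('Mean |gradient|')
plt.legend()
plt.show()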

OUTPUT:-
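
The AIM above also calls for Chi-Square feature selection, which the code does not cover; a minimal sketch (an addition, using scikit-learn's SelectKBest with chi2 on the Iris data):

from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, chi2

iris = load_iris()
selector = SelectKBest(chi2, k=2)  # keep the 2 highest-scoring features
X_selected = selector.fit_transform(iris.data, iris.target)
print(selector.scores_)            # chi-square score per feature
print(X_selected.shape)            # (150, 2)

X_selected could then be fed to the Iris MLP defined earlier in this file.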
AIM:- Implement a basic RNN architecture using PyTorch, train the model on a sequential dataset, and evaluate and visualize the model's performance. Perform sentiment analysis on a movie review dataset using an RNN, and evaluate and visualize the model's performance.

CODE USED:-
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
import numpy as np
from sklearn.linear_model import Perceptron
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import pandas as pd

# Load the churn dataset used in the earlier experiments
df = pd.read_csv('/content/Churn_Modelling.csv')

# Preprocessing: 'Exited' is the target column in this dataset
X = df.drop('Exited', axis=1)
y = df['Exited']

# Select only numeric features for scaling
numeric_features = X.select_dtypes(include=['number']).columns
X_numeric = X[numeric_features] # Create a DataFrame with only numeric features
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X_numeric) # Scale only the numeric features

# Convert the scaled numeric features back to a DataFrame
X_scaled = pd.DataFrame(X_scaled, columns=numeric_features, index=X.index)

# If you need to include the categorical features, you can concatenate them back
# X_final = pd.concat([X_scaled, X.drop(columns=numeric_features)], axis=1)

# Train-test split
# Use X_scaled (or X_final if you included categorical features) for splitting
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.2, random_state=42)

# Perceptron Model
model = Perceptron(max_iter=1000)
model.fit(X_train, y_train)

# Evaluation
score = model.score(X_test, y_test)
print(f'Accuracy: {score}')
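
A hedged follow-up sketch (an addition): scikit-learn's classification_report gives a fuller picture than raw accuracy on these imbalanced churn labels (roughly 20% positive, per the statistics above):

from sklearn.metrics import classification_report

# Per-class precision, recall and F1 for the perceptron
print(classification_report(y_test, model.predict(X_test)))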

OUTPUT:-

AIM:- Implement residual blocks and a ResNet architecture in PyTorch, train a ResNet model on a real-world dataset, and evaluate its performance. Implement an LSTM architecture using PyTorch, train the LSTM on a sequence prediction task, and evaluate and visualize the model's performance.

CODE USED :-
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.datasets import mnist

# Load dataset (MNIST as an example)
(X_train, y_train), (X_test, y_test) = mnist.load_data()

# Preprocessing
X_train = X_train.reshape(-1, 28, 28, 1).astype('float32') / 255.0
X_test = X_test.reshape(-1, 28, 28, 1).astype('float32') / 255.0

# Define CNN model
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(10, activation='softmax'))

# Compile and train
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=5, batch_size=32)

# Evaluate
score = model.evaluate(X_test, y_test)
print(f'Accuracy: {score[1]}')
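
A minimal sketch (an addition) comparing the model's first few predictions with the true labels:

# Predicted vs. actual digits for the first five test images
preds = model.predict(X_test[:5]).argmax(axis=1)
print('predicted:', preds, 'actual:', y_test[:5])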

OUTPUT:-
