DEEP LEARNING PRACTICAL FILE
Submitted by
AIM: Describe the statistical measures of the dataset and represent the relationships between the fields of the dataset using different types of graphs. Perform a train-test split operation on the dataset.
CODE USED:
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
# Load dataset
df = pd.read_csv('/content/Churn_Modelling.csv')
# Missing values
print(df.isnull().sum())
# Data visualization
sns.pairplot(df)
plt.show()
# Train-test split
# 'Exited' is the churn target column in this dataset (confirm with df.columns)
X = df.drop('Exited', axis=1)
y = df['Exited']
y = df['Exited']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
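The statistical-measures part of this AIM can be covered with pandas' built-in summaries; a small sketch using the same df and the imports above (the correlation heatmap is an illustrative extra graph) could be:

# Summary statistics (count, mean, std, min, quartiles, max) for the numeric columns
print(df.describe())

# Correlation between numeric fields, shown as a heatmap
numeric_df = df.select_dtypes(include=['number'])
sns.heatmap(numeric_df.corr(), annot=True, fmt='.2f')
plt.show()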
OUTPUT:
AIM: Use different types of encoding methods to convert the categorical data fields of the dataset into numeric form.
CODE USED:
# Basic statistics: mean, median and standard deviation of the numeric columns
numeric_df = df.select_dtypes(include=['number'])
print(numeric_df.mean())
print(numeric_df.median())
print(numeric_df.std())
# Pair plot
# Restrict the pair plot to specific numeric columns via 'vars' to avoid issues with non-numeric data
sns.pairplot(df, vars=['CreditScore', 'Age', 'Balance', 'EstimatedSalary'])
plt.show()
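The encoding step that this AIM asks for is not shown above; a minimal sketch, assuming the standard Churn_Modelling.csv categorical fields 'Gender' and 'Geography', could be:

import pandas as pd
from sklearn.preprocessing import LabelEncoder

# Label encoding: map each category of 'Gender' to an integer (e.g. Female/Male -> 0/1)
le = LabelEncoder()
df['Gender_encoded'] = le.fit_transform(df['Gender'])

# One-hot encoding: one binary column per category of 'Geography'
df = pd.get_dummies(df, columns=['Geography'], prefix='Geo')
print(df.filter(like='Geo').head())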
OUTPUT:
RowNumber 5.000500e+03
CustomerId 1.569094e+07
CreditScore 6.505288e+02
Age 3.892180e+01
Tenure 5.012800e+00
Balance 7.648589e+04
NumOfProducts 1.530200e+00
HasCrCard 7.055000e-01
IsActiveMember 5.151000e-01
EstimatedSalary 1.000902e+05
Exited 2.037000e-01
dtype: float64
RowNumber 5.000500e+03
CustomerId 1.569074e+07
CreditScore 6.520000e+02
Age 3.700000e+01
Tenure 5.000000e+00
Balance 9.719854e+04
NumOfProducts 1.000000e+00
HasCrCard 1.000000e+00
IsActiveMember 1.000000e+00
EstimatedSalary 1.001939e+05
Exited 0.000000e+00
dtype: float64
RowNumber 2886.895680
CustomerId 71936.186123
CreditScore 96.653299
Age 10.487806
Tenure 2.892174
Balance 62397.405202
NumOfProducts 0.581654
HasCrCard 0.455840
IsActiveMember 0.499797
EstimatedSalary 57510.492818
Exited 0.402769
dtype: float64
AIM: Implement a Perceptron model to classify the AND gate logic function. Implement a Perceptron
model to solve the OR gate classification problem.
CODE USED:
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.linear_model import Perceptron

# Label encoding example on a categorical column of the churn dataset, e.g. 'Gender' (assumes df loaded as above)
le = LabelEncoder()
df['encoded_Gender'] = le.fit_transform(df['Gender'])

# AND gate data
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 0, 1]

# Perceptron model
model = Perceptron(max_iter=1000)
model.fit(X, y)

# Prediction
predictions = model.predict(X)
print(predictions)
[0 0 0 1]
# OR gate data
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 1]
# Perceptron model
model = Perceptron(max_iter=1000)
model.fit(X, y)
# Prediction
predictions = model.predict(X)
print(predictions)
[0 1 1 1]
AIM: Implement a deep learning model for the classification problem of bank customers churning, using the bank's churn dataset attached with this list of experiments.
CODE USED:
import pandas as pd
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Load dataset
df = pd.read_csv('/content/Churn_Modelling.csv')

# Preprocessing: 'Exited' is the churn target; drop identifier columns and one-hot encode the categorical features
y = df['Exited']
X = df.drop(['Exited', 'RowNumber', 'CustomerId', 'Surname'], axis=1)
X_processed = pd.get_dummies(X, columns=['Geography', 'Gender'], drop_first=True).astype('float32')

# Train-test split
X_train, X_test, y_train, y_test = train_test_split(X_processed, y, test_size=0.2, random_state=42)

# Define model (input_dim matches the number of features after preprocessing)
model = Sequential()
model.add(Dense(64, input_dim=X_train.shape[1], activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# Compile and train
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=10, batch_size=32, verbose=0)

# Evaluate
score = model.evaluate(X_test, y_test)
print(f'Accuracy: {score[1]}')
OUTPUT:
AIM:- Implement a basic Artificial Neural Network (ANN) using a Multilayer Perceptron (MLP) architecture to solve a classification problem on the Iris dataset. Implement PCA for feature extraction and apply PCA to a dataset to reduce its dimensionality. Train a deep learning model on the extracted features and evaluate the performance.
CODE USED:
from sklearn.datasets import load_iris
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Load the Iris dataset
X, y = load_iris(return_X_y=True)
# Encode labels
le = LabelEncoder()
y_encoded = le.fit_transform(y)
# Train-test split
X_train, X_test, y_train, y_test = train_test_split(X, y_encoded, test_size=0.2, random_state=42)
# Define, compile and train the MLP
model = Sequential()
model.add(Dense(64, input_dim=X_train.shape[1], activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(3, activation='softmax'))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=50, verbose=0)
print(f'Test accuracy: {model.evaluate(X_test, y_test, verbose=0)[1]}')
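The PCA part of this AIM is not shown above; one possible sketch that reuses the same Iris split as above (the choice of 2 components is illustrative) could be:

from sklearn.decomposition import PCA
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Fit PCA on the training features only, then project both splits
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train)
X_test_pca = pca.transform(X_test)
print('Explained variance ratio:', pca.explained_variance_ratio_)

# Train the same style of MLP on the extracted (reduced) features
pca_model = Sequential()
pca_model.add(Dense(32, input_dim=2, activation='relu'))
pca_model.add(Dense(3, activation='softmax'))
pca_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
pca_model.fit(X_train_pca, y_train, epochs=50, verbose=0)
print(f'Accuracy on PCA features: {pca_model.evaluate(X_test_pca, y_test, verbose=0)[1]}')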
OUTPUT:-
AIM:- Demonstrate the vanishing gradient problem by training a deep neural network. Compare networks using different activation functions (Sigmoid vs ReLU). Visualize the gradient flow to understand the vanishing gradient effect. Implement a Perceptron model for a classification problem: upload a dataset, perform EDA and preprocessing operations, and classify the target variable.
CODE USED:-
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

# Load dataset
df = pd.read_csv('/content/Churn_Modelling.csv')

# Preprocessing: 'Exited' is the target variable; keep only the numeric features here
X = df.drop(['Exited', 'RowNumber', 'CustomerId', 'Surname', 'Geography', 'Gender'], axis=1)
y = df['Exited']

# Reduce dimensionality with PCA (5 components chosen for illustration)
pca = PCA(n_components=5)
X_pca = pca.fit_transform(X)

# Train-test split
X_train, X_test, y_train, y_test = train_test_split(X_pca, y, test_size=0.2, random_state=42)

# Model training
model = RandomForestClassifier()
model.fit(X_train, y_train)

# Evaluate
score = model.score(X_test, y_test)
print(f'Accuracy: {score}')
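The vanishing-gradient demonstration itself (Sigmoid vs ReLU, plus the gradient-flow visualization) is not shown above; one possible sketch uses a deliberately deep untrained network and random toy data (the depth, width and data are illustrative assumptions) and compares per-layer gradient norms:

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
import matplotlib.pyplot as plt

def deep_model(activation, depth=10, width=32, input_dim=10):
    # A deliberately deep stack of small Dense layers
    model = Sequential([Dense(width, activation=activation, input_dim=input_dim)])
    for _ in range(depth - 1):
        model.add(Dense(width, activation=activation))
    model.add(Dense(1, activation='sigmoid'))
    return model

# Random toy data is enough to inspect gradient magnitudes at initialization
X_demo = np.random.rand(256, 10).astype('float32')
y_demo = np.random.randint(0, 2, size=(256, 1)).astype('float32')
loss_fn = tf.keras.losses.BinaryCrossentropy()

def layer_gradient_norms(model):
    with tf.GradientTape() as tape:
        loss = loss_fn(y_demo, model(X_demo, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    # One norm per kernel (biases skipped) so we can see how gradients shrink with depth
    return [tf.norm(g).numpy() for g, v in zip(grads, model.trainable_variables)
            if 'kernel' in v.name]

for act in ['sigmoid', 'relu']:
    norms = layer_gradient_norms(deep_model(act))
    plt.plot(norms, marker='o', label=act)
    print(act, 'first-layer gradient norm:', norms[0])

plt.xlabel('Layer index (input -> output)')
plt.ylabel('Gradient L2 norm')
plt.legend()
plt.show()

With sigmoid activations, the norms in the earliest layers are typically orders of magnitude smaller than with ReLU, which is the vanishing-gradient effect the AIM asks to visualize.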
OUTPUT:-
AIM:- Implement a CNN model to classify the target variable in the dataset. Implement the Chi-Square test for feature selection and apply it to a dataset (e.g., the Iris dataset or any other dataset). Train a deep learning model (e.g., a neural network) using the selected features and evaluate its performance.
CODE USED:-
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
import numpy as np

# Define a model builder so the same architecture can be compared with different activations
def create_model(activation_func):
    model = Sequential()
    model.add(Dense(64, input_dim=10, activation=activation_func))
    model.add(Dense(32, activation=activation_func))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model
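The Chi-Square feature-selection part of this AIM is not shown above; an illustrative sketch on the Iris dataset (keeping k=2 features is an arbitrary choice for demonstration) could be:

from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# The chi-square test requires non-negative features; the Iris measurements satisfy this
X, y = load_iris(return_X_y=True)
selector = SelectKBest(score_func=chi2, k=2)
X_selected = selector.fit_transform(X, y)
print('Chi-square scores:', selector.scores_)

# Train a small network on the 2 selected features and evaluate it
X_train, X_test, y_train, y_test = train_test_split(X_selected, y, test_size=0.2, random_state=42)
chi_model = Sequential()
chi_model.add(Dense(16, input_dim=2, activation='relu'))
chi_model.add(Dense(3, activation='softmax'))
chi_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
chi_model.fit(X_train, y_train, epochs=50, verbose=0)
print(f'Accuracy on selected features: {chi_model.evaluate(X_test, y_test, verbose=0)[1]}')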
OUTPUT:-
AIM:- Implement a basic RNN architecture using PyTorch. Train the model on a sequential dataset. Evaluate and visualize the model performance. Perform sentiment analysis on a movie review dataset using an RNN. Evaluate and visualize the model performance.
CODE USED:-
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
import numpy as np
from sklearn.linear_model import Perceptron
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import pandas as pd

# Load dataset (replace the path with your own dataset file if needed)
df = pd.read_csv('/content/Churn_Modelling.csv')

# Preprocessing: 'Exited' is the target column for this dataset
X = df.drop(['Exited', 'RowNumber', 'CustomerId', 'Surname'], axis=1)
y = df['Exited']

# Scale the numeric features; the categorical columns are left out here
numeric_features = X.select_dtypes(include=['number']).columns
scaler = StandardScaler()
X_scaled = pd.DataFrame(scaler.fit_transform(X[numeric_features]), columns=numeric_features)
# If the categorical features are also needed, encode them and concatenate:
# X_final = pd.concat([X_scaled, pd.get_dummies(X.drop(columns=numeric_features))], axis=1)

# Train-test split on the scaled features
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.2, random_state=42)
# Perceptron Model
model = Perceptron(max_iter=1000)
model.fit(X_train, y_train)
# Evaluation
score = model.score(X_test, y_test)
print(f'Accuracy: {score}')
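The basic RNN in PyTorch that this AIM asks for is not shown above; a small sketch on a toy sine-wave sequence task (the sequence length, hidden size and training settings are illustrative) could be:

import torch
import torch.nn as nn
import matplotlib.pyplot as plt

# Toy sequential data: predict the next value of a sine wave from the previous 20 values
t = torch.linspace(0, 60, 1200)
wave = torch.sin(t)
seq_len = 20
X_seq = torch.stack([wave[i:i + seq_len] for i in range(len(wave) - seq_len)]).unsqueeze(-1)
y_seq = wave[seq_len:].unsqueeze(-1)

class BasicRNN(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, 1)

    def forward(self, x):
        out, _ = self.rnn(x)           # out: (batch, seq_len, hidden)
        return self.fc(out[:, -1, :])  # use the last time step

model = BasicRNN()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

# Train on the full toy sequence set
for epoch in range(100):
    optimizer.zero_grad()
    loss = criterion(model(X_seq), y_seq)
    loss.backward()
    optimizer.step()

# Visualize predictions against the true sequence
with torch.no_grad():
    preds = model(X_seq).squeeze()
plt.plot(y_seq.squeeze(), label='true')
plt.plot(preds, label='predicted')
plt.legend()
plt.show()
print(f'Final MSE: {loss.item():.4f}')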
OUTPUT:-
AIM:- Implement residual blocks and a ResNet architecture in PyTorch. Train a ResNet model on a real-world dataset. Evaluate the performance of the ResNet model. Implement an LSTM architecture using PyTorch. Train the LSTM on a sequence prediction task. Evaluate and visualize the model's performance.
CODE USED :-
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.datasets import mnist
# Load and preprocess MNIST
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(-1, 28, 28, 1).astype('float32') / 255.0
X_test = X_test.reshape(-1, 28, 28, 1).astype('float32') / 255.0
# Define, compile and train a small CNN
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=1, batch_size=128, verbose=0)
# Evaluate
score = model.evaluate(X_test, y_test)
print(f'Accuracy: {score[1]}')
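The residual-block part of this AIM is not shown above; a minimal PyTorch sketch of the idea, sized for 28x28 grayscale images such as MNIST (the channel counts and block count are illustrative), could be:

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A basic ResNet-style block: two 3x3 convolutions plus a skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)  # skip connection

class TinyResNet(nn.Module):
    """A minimal ResNet-like classifier for 1-channel 28x28 images."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU())
        self.blocks = nn.Sequential(ResidualBlock(16), ResidualBlock(16))
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(16, num_classes)

    def forward(self, x):
        x = self.blocks(self.stem(x))
        x = self.pool(x).flatten(1)
        return self.fc(x)

# Quick shape check with a random batch
model = TinyResNet()
print(model(torch.randn(8, 1, 28, 28)).shape)  # torch.Size([8, 10])

An LSTM for the sequence-prediction part of this AIM would follow the same pattern as the earlier RNN sketch, with nn.LSTM in place of nn.RNN.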
OUTPUT:-