DL PYTH Keras
Keras
• The word "Keras" (κέρας) has its roots in Greek and means "horn."
• In this context, it's used metaphorically to signify a focus on simplicity, modularity, and
extensibility in the design of the library.
• Keras was developed by François Chollet and first released in March 2015.
Sigmoid Activation Function
import numpy as np
import matplotlib.pyplot as plt

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

x = np.linspace(-5, 5, 100)
y = sigmoid(x)

plt.plot(x, y, label='Sigmoid')
plt.title('Sigmoid Activation Function')
plt.xlabel('Input')
plt.ylabel('Output')
plt.legend()
plt.show()
Hyperbolic Tangent (tanh) Activation Function
• Range: (-1, 1)
• Similar to the sigmoid, but with an output range between -1 and 1. Often used in hidden layers of neural networks.
def tanh(x):
    return np.tanh(x)
def relu(x):
    return np.maximum(0, x)
def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)
y_tanh = tanh(x)
y_relu = relu(x)
y_leaky_relu = leaky_relu(x)
• SLP: A single-layer perceptron can only learn linear decision boundaries, which
means it can only separate classes that are linearly separable.
• It cannot capture nonlinear relationships between input features and target variables.
• MLP: Multi-layer perceptrons with more layers (hidden layers) and nonlinear
activation functions have the ability to learn complex, nonlinear decision boundaries.
• They can capture intricate patterns and relationships in the data, allowing for more
accurate and flexible modeling.
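As a sketch of this expressiveness gap, the hand-wired two-layer network below computes XOR, a function no single-layer perceptron can represent because XOR is not linearly separable. The weights and thresholds here are illustrative hand-set choices, not learned values.

```python
import numpy as np

def step(z):
    # Heaviside step activation: 1 if z >= 0, else 0
    return (z >= 0).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# Hidden layer: one unit computes OR, the other AND (hand-set weights)
h_or  = step(X @ np.array([1, 1]) - 0.5)   # fires if x1 + x2 >= 0.5
h_and = step(X @ np.array([1, 1]) - 1.5)   # fires only if both inputs are 1

# Output unit: OR AND (NOT AND) = XOR
y = step(h_or - h_and - 0.5)
print(y)  # [0 1 1 0]
```

No single linear threshold on the raw inputs can produce `[0 1 1 0]`; the hidden layer's intermediate features are what make the classes separable.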
Example
• In an SLP, we would directly connect the input features (age, weight,
height, BMI) to the output layer, without any hidden layers.
• The SLP would learn a linear decision boundary in the input feature
space to separate individuals with and without DM.
• If the relationship between input features and DM is linearly
separable, the SLP may achieve decent performance. However, if the
relationship is nonlinear or complex, the SLP may struggle to capture
it effectively.
• For instance, younger individuals with higher BMI and weight may be at
higher risk of DM, but this relationship may not be strictly linear.
• In this case, an SLP may struggle to capture the nonlinear relationship
between input features and DM, as it can only learn linear decision
boundaries.
• It may underperform and fail to accurately classify individuals with
and without DM.
DL Representation Benefit
• In an MLP with more layers, we would add one or more hidden layers
between the input and output layers.
• Each hidden layer in the MLP would consist of multiple neurons with
nonlinear activation functions (e.g., ReLU).
• The MLP would learn hierarchical representations of features, where each
hidden layer extracts increasingly abstract and complex features from the
input data.
• With more layers and nonlinear activation functions, the MLP can capture
nonlinear relationships and complex patterns in the data, enabling it to
achieve better performance, especially on tasks with nonlinear decision
boundaries.
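To make the layered structure concrete, here is a minimal NumPy sketch of a forward pass through an MLP with two ReLU hidden layers and a sigmoid output. The layer sizes and random weights are illustrative assumptions, not a trained model.

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

rng = np.random.default_rng(42)
X = rng.normal(size=(8, 4))          # 8 samples, 4 features (e.g., age, weight, height, BMI)

# Illustrative randomly initialized weights for two hidden layers and one output unit
W1, b1 = rng.normal(size=(4, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 1)),  np.zeros(1)

h1 = relu(X @ W1 + b1)               # first hidden layer: low-level feature combinations
h2 = relu(h1 @ W2 + b2)              # second hidden layer: more abstract features
out = 1 / (1 + np.exp(-(h2 @ W3 + b3)))  # sigmoid output: probability of the positive class
print(out.shape)  # (8, 1)
```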
Challenges of SLP vs DL: Expressiveness
• Underfitting can also occur when the model is not trained for a
sufficient number of iterations (epochs) or when the training dataset is
too small or not representative of the true data distribution.
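Underfitting from insufficient model capacity can be sketched with a quick NumPy experiment: a straight-line fit (an underfit model) cannot track a quadratic trend, while a degree-2 fit can. The data here is synthetic and purely illustrative.

```python
import numpy as np

x = np.linspace(-1, 1, 50)
y = x ** 2                                   # true (nonlinear) relationship

lin  = np.polyval(np.polyfit(x, y, 1), x)    # degree-1 model: underfits
quad = np.polyval(np.polyfit(x, y, 2), x)    # degree-2 model: matches the data

mse_lin  = np.mean((y - lin) ** 2)
mse_quad = np.mean((y - quad) ** 2)
print(mse_lin > mse_quad)  # True: the linear model leaves large residual error
```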
import pandas as pd
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

data = pd.read_csv('path/to/your/sample_data.txt')
# Assuming 'Age' and 'BMI' are the input features and 'Target' is the target variable
X = data[['Age', 'BMI']]
y = data['Target']
model = Sequential()
model.add(Dense(units=1, activation='sigmoid', input_dim=2))  # sigmoid output for binary classification; two input features (Age and BMI)
Deep Learning
Sequential Model
• keras.models.Sequential()
• Used to create a linear stack of layers
• Appropriate for a plain stack of layers where each layer has exactly one input tensor and one output tensor
• Not suitable when:
• The model has multiple inputs or multiple outputs
Dense Layer
• keras.layers.Dense()
• A fully connected layer
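Putting Sequential() and Dense() together, a minimal sketch might look like the following; the input size, hidden-layer width, optimizer, and loss are illustrative choices, not prescribed by the slides.

```python
import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# A small fully connected network: 4 inputs -> 8 hidden units -> 1 output
model = Sequential([
    Input(shape=(4,)),
    Dense(units=8, activation='relu'),     # hidden layer
    Dense(units=1, activation='sigmoid'),  # binary output
])
model.compile(optimizer='adam', loss='binary_crossentropy')

# Shape check on a dummy batch of two samples
out = model(np.zeros((2, 4), dtype='float32'))
print(out.shape)  # (2, 1)
```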
Convolutional Layer
• keras.layers.Conv2D()
• Used for convolution operations.
• create a convolutional layer for 2D spatial convolution over images
• Mainly used for image classification, object detection, and image
segmentation
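A minimal Conv2D sketch (the filter count and kernel size are arbitrary illustrative choices): with the default 'valid' padding, a 3x3 convolution shrinks a 28x28 single-channel image to 26x26.

```python
import numpy as np
from tensorflow.keras.layers import Conv2D

# 8 filters of size 3x3; no padding by default, so spatial size shrinks by kernel_size - 1
layer = Conv2D(filters=8, kernel_size=(3, 3), activation='relu')

images = np.zeros((1, 28, 28, 1), dtype='float32')  # one 28x28 grayscale image
features = layer(images)
print(features.shape)  # (1, 26, 26, 8)
```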
Convolution Additional Details
• Convolution involves combining two functions to produce a third
function
• Input Image: This represents the original image or input signal that
you want to process.
• Kernel or Filter: This is a small matrix of weights (also known as a
filter or kernel) that is used for the convolution operation.
Convolution Example
Input Image:      Kernel:
[1 2 3]           [0 1]
[4 5 6]           [1 0]
[7 8 9]
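The sliding-window operation on the image and kernel above can be sketched in NumPy. (Strictly, this loop computes cross-correlation, which is what deep learning libraries implement under the name "convolution"; for this 180°-symmetric kernel the two coincide.)

```python
import numpy as np

image  = np.array([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]])
kernel = np.array([[0, 1],
                   [1, 0]])

kh, kw = kernel.shape
out_h, out_w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
out = np.zeros((out_h, out_w), dtype=int)

# Slide the kernel over every 2x2 patch and sum the elementwise products
for i in range(out_h):
    for j in range(out_w):
        out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)

print(out)
# [[ 6  8]
#  [12 14]]
```

For example, the top-left output is 1*0 + 2*1 + 4*1 + 5*0 = 6.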