Building CNN Model - Formatted Paper
Volume 4 Issue 1
ABSTRACT
Recent times have seen a rising trend in the area of automation. One of the most prominent topics in this field is self-driving cars, which represent the culmination of artificial intelligence, machine learning, deep learning, the Internet of Things and many other in-demand domains and technologies that have improved significantly in the last decade. These cars rely on complex artificial intelligence and machine learning algorithms. This paper explains how to build an architecture for autonomous cars in the Udacity simulator, shows how the different parameters of a CNN model can be tuned to reach an acceptable accuracy, and presents the outcomes in simple, understandable language. The research uses a simulation environment both to generate the dataset and to test the model.
not clear, so in this layer a kernel is taken which compresses the image such that no data loss occurs. By doing this, the effect of any problem with the input is suppressed.
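Assuming this paragraph refers to the convolutional layer, a single such layer can be sketched in Keras as below; the 5x5 kernel, 24 filters and stride of 2 are assumed values used only for illustration, not parameters taken from this paper:

from tensorflow.keras import layers

# A single convolutional layer: a 5x5 kernel slides over the image and
# produces 24 feature maps. All hyper-parameters here are assumed values
# used only to illustrate the idea of a kernel-based layer.
conv = layers.Conv2D(filters=24, kernel_size=(5, 5), strides=(2, 2),
                     activation='relu')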
Pooling Layer
This layer down-samples the features it receives. Simply put, it reduces the number of parameters or dimensions of the input feature, reducing its spatial size and hence acting as a noise suppressant.
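As a small illustration, a max-pooling layer that halves the spatial size of its input could be written in Keras as follows; the 2x2 pool size is an assumed, typical choice:

from tensorflow.keras import layers

# Max pooling keeps only the strongest response in each 2x2 window,
# halving the spatial dimensions and discarding small, noisy activations.
pool = layers.MaxPooling2D(pool_size=(2, 2))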
Activation Function
A biological neuron, which is the inspiration behind the neural network, is triggered only at a certain threshold. This motivates the activation function in neural networks: output is sent to the next layer only after a certain value is reached, introducing non-linearity into the model.
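A common concrete example is the ReLU activation, which forwards a value only when it is above zero; the paper does not name a specific activation here, so ReLU is used purely as an illustration:

import numpy as np

def relu(x):
    # The neuron "fires" only when its input exceeds the threshold of zero;
    # everything below the threshold is suppressed.
    return np.maximum(0.0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5, 3.0])))  # negatives become 0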
Output Layer
The output layer consists of a few fully-connected neural layers. A fully-connected layer is one where every neuron receives input from all the neurons of the previous layer. This allows the model to learn all possible non-linear combinations of high-level features.
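As an illustration, such a fully-connected head could be expressed in Keras as below; the layer widths and the single regression output (e.g. a steering angle) are assumptions made for this sketch:

from tensorflow.keras import layers, models

# Fully-connected head: every neuron receives input from every neuron of
# the previous layer. Layer widths and the single output are assumed here.
head = models.Sequential([
    layers.Dense(100, activation='relu'),
    layers.Dense(50, activation='relu'),
    layers.Dense(1)  # e.g. one steering-angle value
])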
UDACITY SIMULATOR
Udacity built a virtual environment for generating datasets for self-driving cars. This virtual environment can also be used to test a trained machine learning model. The project is open source, hence any enthusiast is allowed to access and modify the simulator as per their requirements. It is developed using Unity's video game development IDE. Its resolution (as in Figure 2), controls and tracks can be changed as per the user's wish.

The environment has two modes and two tracks to work upon, as seen in Fig. 3. In the training mode, the user records the movement of the car while steering it with the controls. At every frame, three pictures are taken from the left, right and center cameras mounted on top of the car, and an additional csv file, shown in Fig. 4, is generated that not only stores the paths of the images but also provides some additional data related to the frame: the steering angle, throttle, brake and speed of the car at that instant [7]. This dataset can be generated on either of the two tracks. After the model is built, the second track can be used to test the model in a real-time simulated environment; since the model has not seen this track before, it also validates whether or not the model overfits the training data.
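One way to load the log file described above in Python is sketched below; the file name driving_log.csv and the assumption that the file has no header row follow the simulator's usual output and are not stated explicitly in the paper:

import pandas as pd

# Assumed column order: paths of the three camera images followed by the
# driving measurements described above; the file is assumed to have no header.
columns = ['center', 'left', 'right', 'steering', 'throttle', 'brake', 'speed']
log = pd.read_csv('driving_log.csv', names=columns)

print(log[['steering', 'throttle', 'brake', 'speed']].describe())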
Fig. 5: Base Architecture
For model B the input shape is also changed to 66x200, which is the default size of the images generated by the Udacity simulator.

For model C we use multiple convolutional layers and dense layers. The number of features in the third architecture is 64, with an input shape of 66x200. Before the dense layers, there is a flatten layer to convert the 2D array to a 1D array. We also insert normalization, pooling and dropout layers for better generalization. This architectural design can be seen below.
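Given the details above (64 feature maps in the final convolution, a 66x200 input, a flatten layer before the dense layers, plus normalization, pooling and dropout), a minimal Keras sketch of such an architecture could look like the following; the exact filter progression, kernel sizes and dense-layer widths are assumptions for illustration:

from tensorflow.keras import layers, models

# Illustrative sketch of model C: several convolutional layers ending with
# 64 feature maps, a 66x200x3 input, normalization, pooling and dropout for
# better generalization, then a flatten layer feeding dense layers.
model = models.Sequential([
    layers.Input(shape=(66, 200, 3)),
    layers.BatchNormalization(),
    layers.Conv2D(24, (5, 5), strides=(2, 2), activation='relu'),
    layers.Conv2D(36, (5, 5), strides=(2, 2), activation='relu'),
    layers.Conv2D(48, (3, 3), activation='relu'),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.5),
    layers.Flatten(),
    layers.Dense(100, activation='relu'),
    layers.Dense(50, activation='relu'),
    layers.Dense(1)  # steering-angle output, assumed for this sketch
])
model.compile(optimizer='adam', loss='mse')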