
Image Search Engine Using Deep Learning

1P Ganesh, 2C Radhika, 3P Kiranmai Yadav
1Assistant Professor, Department of Information Technology, Bhoj Reddy Engineering College for Women, Hyderabad, India
2,3Student, Department of Information Technology, Bhoj Reddy Engineering College for Women, Hyderabad, India

Abstract: This paper presents an image search engine built using deep learning. Automation is everybody’s dream, whether it is to ease their effort or to make machines learn how to do something. In this paper we present a way to search a local disk for similar images when an input image is supplied. We use a CNN for feature extraction and Flask for implementing the program’s functionality in a local webpage. The model extracts features as NumPy arrays, stores them locally, and searches for similar images when an input is applied.

Keywords: convolutional neural network, feature vector, artificial intelligence.

I. INTRODUCTION:

With the rise of users having access to high-quality cameras, whether from their phones or from an external camera, and to high-speed internet that lets them search for images, text, or videos and download them right away, there is a growing need to filter, sort, and organise these images. Doing this automatically saves a great deal of the time otherwise spent searching for images manually; manual filtering is possible, but it becomes impractical when dealing with huge datasets, so we need a machine that can sort large datasets. In this paper we propose a deep learning model that is capable of returning similar images when an input image is applied. A pretrained CNN model known as VGG16 is used: the CNN extracts features, and Flask is used to deploy the model to a local webpage. The images that are returned should be similar to the input [2]. Most previous methods are based on extracting pixel information, manually saving all the data, and searching through the entire dataset; this can be replaced by deep learning for better feature extraction in order to get similar images as output. We design a single model that takes an input image, extracts its features, and saves the extracted features in the database as NumPy arrays under the same name as the image. The project is divided into three parts: first, an offline file which preprocesses the data; second, a feature extractor file which can be imported whenever we need to perform feature extraction; and third, a web application file which describes how the webpage works. The paper proposes a model which returns images similar to the input. The dataset used in this project contains 8000 images of different types of flowers. We use VGG16, which consists of 16 hidden layers, for feature extraction. Flask is used to deploy the model on the webpage and acts as an interface between the user and the programme.

II. APPROACH:

In this paper we propose a model which combines a CNN and Flask. The CNN is used for image preprocessing, also called encoding, and Flask is used to deploy the model on the local webpage. All these neural networks can be represented mathematically for better understanding and easier analysis.
A. Convolutional Neural Network:

Convolutional neural networks are best suited for image processing and visual analysis. They are used because an image can be easily manipulated by convolving the image data with a filter. A CNN contains multiple layers, and the layers after the convolutional ones are connected as a multilayer neural network. The design of a CNN allows it to take a 2D image as input. The output is obtained by using multiple layers and weights, just as in an ordinary neural network, convolving one layer into the next and applying various pooling techniques to obtain the feature vector. In this model we used VGG16, a deep learning CNN model that consists of 16 hidden layers. It is a pretrained model, trained on more than a million images, and here it is used for feature extraction, which can be further utilized for training.

Preprocessing using CNN:

Preprocessing means that the input data is converted into a form that the computer can understand, which eases training. It improves performance by manipulating the image so that it can be easily understood by the machine. As VGG16 accepts only 224 x 224 images, it is important to first resize the images so that they can be fed into VGG16. This reduction in size also enhances performance, since there is less data to work on.

Filtering:

In CNNs, the main application of convolution is filtering. In filtering, the input image vectors are multiplied with the filter to get the modified output we need. There are several types of filters, such as sharpening, grayscale, and blur.

Feature extraction:

Just like the filters above, CNN filters are used to extract features from an image. Compared with the usual filters, CNN filters do not have predefined values; their values are determined during training. This lets the model build filters of its own, which can result in filters that humans would never design manually. A 2D convolution filter is commonly used and is referred to as Conv2D. This filter adds up its weighted inputs and produces a single output value from each position in the image.
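As a concrete illustration of this pipeline, below is a minimal sketch of a VGG16-based feature extractor in Keras. The class name, the choice of the fc1 layer as the feature output, and the L2 normalisation step are illustrative assumptions; the paper only specifies that VGG16 is used for feature extraction with its classification layer removed and that the features are normalised and stored as NumPy arrays.

# Minimal sketch of a VGG16 feature extractor (assumed API: tensorflow.keras).
# The class name, the fc1 layer choice and the L2 normalisation are assumptions.
import numpy as np
from PIL import Image
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.models import Model


class FeatureExtractor:
    def __init__(self):
        base = VGG16(weights="imagenet")            # pretrained on ImageNet
        # Drop the classification head; keep the 4096-d fc1 activations as features.
        self.model = Model(inputs=base.input, outputs=base.get_layer("fc1").output)

    def extract(self, img: Image.Image) -> np.ndarray:
        img = img.resize((224, 224)).convert("RGB")  # VGG16 expects 224 x 224 RGB input
        x = np.expand_dims(np.asarray(img, dtype="float32"), axis=0)
        x = preprocess_input(x)                      # channel-wise mean subtraction
        feature = self.model.predict(x)[0]
        return feature / np.linalg.norm(feature)     # L2-normalise the feature vector

The returned 4096-dimensional vector is what gets stored and compared; normalising it makes the later distance comparison independent of feature magnitude.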

B. Flask:

Flask is a Python-based micro web framework. It is referred to as a microframework because it does not necessitate the use of any specific tools or libraries [2]. It does not have a database abstraction layer, form validation, or any other components that rely on third-party libraries to do typical tasks. Extensions, however, can be used to add application functionality as if it were built into Flask itself. Object-relational mappers, form validation, upload handling, different open authentication protocols, and other framework-related tools all have extensions.
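To make the "microframework" point concrete, a complete Flask application can be just a few lines: one application object, one route, one call to run the development server. This is a generic sketch rather than the paper's Server.py, which is described in the methodology section below.

# A complete, minimal Flask application: one import, one route, one run call.
from flask import Flask

app = Flask(__name__)


@app.route("/")
def index():
    return "Image search engine is running."


if __name__ == "__main__":
    app.run(debug=True)   # serves the page locally, e.g. at http://127.0.0.1:5000/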
III. METHODOLOGY:

We divided the project into three parts:

1. Offline.py (saves features into NumPy arrays)
2. Feature extractor.py (contains VGG16 for preprocessing)
3. Server.py (contains Flask for web interactions)

Offline.py:
• This Python file consists of the code used to get the images from the local directory.
• It initialises the Features folder, which contains all the features as NumPy arrays.
• It extracts the features by calling the feature extractor class.

Feature extractor.py:
• This file consists of the code required for preprocessing the images.
• Here we use VGG16, a pretrained CNN model, for feature extraction, eliminating the classification layer from the model.
• The model imports its weights from ImageNet.
• We apply normalization and then return NumPy arrays.
• The file converts the image information into NumPy arrays that are stored locally.
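A minimal sketch of what such an offline indexing script could look like is given below. The folder names (static/img, static/feature) and the feature_extractor module name are assumptions for illustration; the paper only specifies that each feature vector is saved locally as a NumPy array under the same name as its image.

# Offline.py-style sketch: extract a feature vector for every image in a local
# folder and save it as a .npy file with the same stem as the image.
# Folder names and the feature_extractor module name are illustrative assumptions.
from pathlib import Path

import numpy as np
from PIL import Image

from feature_extractor import FeatureExtractor  # class sketched above (hypothetical module name)

if __name__ == "__main__":
    fe = FeatureExtractor()
    image_dir = Path("./static/img")
    feature_dir = Path("./static/feature")
    feature_dir.mkdir(parents=True, exist_ok=True)

    for img_path in sorted(image_dir.glob("*.jpg")):
        feature = fe.extract(Image.open(img_path))
        np.save(feature_dir / (img_path.stem + ".npy"), feature)  # same name as the image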

Server.py:
• This file contains the code for the web implementation of our model.
• We use the Flask web framework for interfacing between the model and the user.
• We make use of the feature extractor class to obtain the features of the user's input image.
• This file is responsible for hosting the web page locally and displaying the output on the webpage.

IV. RESULT:

A. Dataset:
The dataset consists of 8000 images of various types of flowers.

B. Results:
The model returns the first 30 closest images, ranked by a score based on the distance between the images.
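The ranking behind these results can be expressed as a short function over the stored feature vectors. The use of Euclidean distance, the helper name and the folder layout are assumptions for illustration; in the paper's setup, Server.py would call something like this from its Flask route after extracting the query image's features.

# Sketch of the retrieval step: compare a query feature vector against all
# stored .npy features and return the 30 closest images.
# Euclidean distance and the folder layout are illustrative assumptions.
from pathlib import Path

import numpy as np


def search(query_feature: np.ndarray, feature_dir: str = "./static/feature", top_k: int = 30):
    paths, features = [], []
    for f in sorted(Path(feature_dir).glob("*.npy")):
        paths.append(f.stem)               # stem matches the image file name
        features.append(np.load(f))
    features = np.stack(features)          # shape: (num_images, feature_dim)

    # Smaller Euclidean distance to the query means a more similar image.
    distances = np.linalg.norm(features - query_feature, axis=1)
    ranked = np.argsort(distances)[:top_k]
    return [(paths[i], float(distances[i])) for i in ranked]

If the stored features are L2-normalised, as in the extractor sketched earlier, ranking by Euclidean distance is equivalent to ranking by cosine similarity.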
REFERENCES:

[1] Y. Liu, D. Zhang, G. Lu, and W.-Y. Ma, “A survey of content-based image retrieval with high-level semantics,” Pattern Recognition, vol. 40, no. 1, pp. 262–282, 2007.

[2] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.

[3] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1–9.

[4] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2818–2826.

[5] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.

[6] A. Babenko, A. Slesarev, A. Chigorin, and V. Lempitsky, “Neural codes for image retrieval,” in European Conference on Computer Vision. Springer, 2014, pp. 584–599.

[7] R. Xia, Y. Pan, H. Lai, C. Liu, and S. Yan, “Supervised hashing for image retrieval via image representation learning,” in AAAI, vol. 1, 2014, p. 2.

[8] K. Lin, H.-F. Yang, J.-H. Hsiao, and C.-S. Chen, “Deep learning of binary hash codes for fast image retrieval,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2015, pp. 27–35.

[9] J.-C. Chen and C.-F. Liu, “Visual-based deep learning for clothing from large database,” in Proceedings of the ASE BigData & SocialInformatics 2015. ACM, 2015, p. 42.

[10] N. Khosla and V. Venkataraman, “Building image-based shoe search using convolutional neural networks,” CS231n Course Project Reports, 2015.

[11] A. Iliukovich-Strakovskaia, A. Dral, and E. Dral, “Using pre-trained models for fine-grained image classification in fashion field,” 2016.

[12] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell, “Decaf: A deep convolutional activation feature for generic visual recognition,” in ICML, vol. 32, 2014, pp. 647–655.

[13] G. Shrivakshan, C. Chandrasekar et al., “A comparison of various edge detection techniques used in image processing,” IJCSI International Journal of Computer Science Issues, vol. 9, no. 5, pp. 272–276, 2012.

[14] A. Maurya and R. Tiwari, “A novel method of image restoration by using different types of filtering techniques,” International Journal of Engineering Science and Innovative Technology (IJESIT), vol. 3, 2014.

[15] R. Kandwal, A. Kumar, and S. Bhargava, “Review: existing image segmentation techniques,” International Journal of Advanced Research in Computer Science and Software Engineering, vol. 4, no. 4, 2014.

[16] K. Roy and J. Mukherjee, “Image similarity measure using color histogram, color coherence vector, and sobel method,” International Journal of Science and Research (IJSR), vol. 2, no. 1, pp. 538–543, 2013.

[17] J. Shlens, “Train your own image classifier with Inception in TensorFlow,” https://research.googleblog.com/2016/03/train-your-ownimageclassifier-with.html, 2016.

[18] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun, “Overfeat: Integrated recognition, localization and detection using convolutional networks,” arXiv preprint arXiv:1312.6229, 2013.

[19] P. Wu, S. C. Hoi, H. Xia, P. Zhao, D. Wang, and C. Miao, “Online multimodal deep similarity learning with application to image retrieval,” in Proceedings of the 21st ACM International Conference on Multimedia. ACM, 2013, pp. 153–162.

[20] S. Liu, Z. Song, G. Liu, C. Xu, H. Lu, and S. Yan, “Street-to-shop: Cross-scenario clothing retrieval via parts alignment and auxiliary set,” in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. IEEE, 2012, pp. 3330–3337.

[21] K. Yamaguchi, M. H. Kiapour, L. E. Ortiz, and T. L. Berg, “Retrieving similar styles to parse clothing,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 5, pp. 1028–1040, 2015.

[22] J. Wan, P. Wu, S. C. Hoi, P. Zhao, X. Gao, D. Wang, Y. Zhang, and J. Li, “Online learning to rank for content-based image retrieval,” 2015.