Research-Paper
1,3,4 Graduate, SCOPE, Vellore Institute of Technology, Vellore, Tamil Nadu, India
2 Graduate, SCOPE, Vellore Institute of Technology, Chennai, Tamil Nadu, India
---------------------------------------------------------------------***---------------------------------------------------------------------
Abstract - Authorisation for the Covid-19 vaccination drive in India currently involves an Aadhaar card or another type of valid identification. It is possible that people have forgotten their Aadhaar or lost their legitimate documents as a result of the lockdown. This is where a facial recognition system comes in as a solution. The goal of this paper is to create a deep learning model and integrate it into a system that allows individuals in India to register for Covid vaccinations and then log in to reserve their slots using a real-time face recognition approach. Presently, some form of valid ID proof is required for verification in the Covid-19 immunization initiative. Since such identification techniques require touch and a frontline worker, it is preferable to make the process contactless and quick. By making the operation contactless, facial recognition will help alleviate the concern of unintentional infections at vaccination centers. For the procedure to work, eligible persons will need to register for Covid-19 immunization using the Co-WIN platform on the Aarogya Setu app. During registration, users can link their mobile numbers. When users who have chosen to validate their identity using Aadhaar information come to a vaccination booth, the facial recognition system will verify them immediately. The system proposed in this paper achieved 98.34% accuracy in a real-time setting, which makes it effective enough to replace the existing manual methods.

Key Words: Covid-19, CNN, Feature Extraction, Facial landmarks, Image processing, Hash encoding, Data Augmentation.

1. INTRODUCTION

The Covid-19 outbreak – and the contemporary world it has revealed – has thrown many technologies and lives into disarray. In what has become a severe failing, contact biometric systems may be not only outmoded but also potentially lethal, given the risk of virus transmission. It is vital to vaccinate the whole population against the SARS-CoV-2 virus, as vaccination will be the most effective strategy for limiting the pandemic. This is a significant problem, since a safe and effective vaccine must first be developed before it can be produced, distributed, and given to the susceptible population promptly. Developing a COVID-19 vaccine that is both effective and safe is difficult, and operations, transportation, and production of the vaccine pose further challenges, especially in developing countries, where cold storage at the point of injection is necessary to assure its efficacy and usefulness.

Looking through the history of face recognition, it can be seen that it has been the subject of numerous research papers [1-6]. Human face image processing has been a popular and exciting study topic for many years. Because human features are very detailed, numerous issues have piqued researchers' attention and have been widely investigated. In recent years, a variety of feature extraction and pattern categorization approaches have been developed. Surveillance, facial recognition, video indexing, and market surveys are just a few of the sectors where this study topic holds a lot of promise. Some challenges remain in this setting, such as pose variation, lighting, and changes in facial expression, as discussed in references [7-14]. The obvious benefit of utilizing facial recognition in a vaccination campaign is that it reduces the number of contact points between people across the circuit. Face recognition with cameras located at a distance eliminates the need for people to repeatedly touch the same fingerprint authenticator, which has been a major driver of Covid-19's global spread. One of the problem statements explored here is de-duplication: given the limited number of vaccine doses available, it is the government's responsibility to deliver them in a timely and organized manner so that each person receives the appropriate amount. Another problem statement with a high likelihood of being observed at a vaccination location is the violation of COVID-19 preventative procedures, such as wearing suitable masks and maintaining social distance. It is unlikely that keeping track of everyone can be done manually, and overcrowding can occur at times, requiring attention for the sake of protection and better management. Hence, more complicated facial characteristics can be extracted using deep learning, as referenced in papers [15-27].

2. LITERATURE SURVEY

About paper [28], Joint face detection & alignment using multitask cascaded convolutional networks (Kaipeng Zhang, Zhanpeng Zhang, Zhifeng Li)

In this paper, the system employs a three-network cascade configuration: first, the image is rescaled to a variety of sizes (known as an image pyramid); then the first model (Proposal Network or P-Net) proposes candidate facial regions, the second model (Refine Network or R-Net) filters these candidates, and the third model (Output Network or O-Net) produces the final detections and landmarks. MTCNN is the most
© 2022, IRJET | Impact Factor value: 7.529 | ISO 9001:2008 Certified Journal | Page 674
International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 09 Issue: 04 | Apr 2022 www.irjet.net p-ISSN: 2395-0072
precise and produces the most accurate findings. It detects five major landmark points in addition to detecting the face. The system first employs a convolutional network to gather candidate frames and their bounding-box regression vectors, and then applies non-maximum suppression (NMS) to merge strongly overlapping candidates. The survivors are then passed to a second CNN, which filters out a large number of false outcomes and calibrates the bounding boxes.

About paper [29], Facial expression recognition using feature additive pooling and progressive fine-tuning of CNN (Yu, C. and Lee, S. W., 2018)

According to this paper, a VGGface-based facial live expression detection network is used, which is the VGG16 variant trained on the VGGface dataset. To accommodate the task, the VGGface network's last fully connected (FC) layer with 2622 channels is substituted by an FC layer with six or seven channels. To solve the challenge of picking the proper layers to be retrained, they presented a network architecture built of convolution block branches from five VGGface models that were trained on facial characteristics data with varied fine-tuning degrees. The five Conv5 blocks in the five VGGface variants and three FC layers are added element by element. As demonstrated in this paper, they developed a CNN algorithm for facial emotion detection based on a progressive fine-tuning methodology and an adaptive feature pooling procedure. The results show that the proposed methodology outperforms state-of-the-art techniques, with better performance when the information distributions between training and test data change considerably.

About paper [30], Deep Face Recognition for Biometric authentication (Maheen Zulfiqar, Fatima Syed, Muhammad Jaleed Khan, Khurram Khurshid in July 2019)

In this paper, they introduced a face recognition system based on a CNN that detects faces in an input picture using Viola-Jones object detection and identifies characteristics from the detected faces using a pre-trained CNN for recognition. A vast library of subject facial photographs is built, which is then augmented to increase the number of photos per subject and to include diverse light and noise conditions. For deep facial recognition, an ideal pre-trained CNN model and a set of hyperparameters are selected empirically. The usefulness of deep facial recognition in automated biometric assessment systems is demonstrated in promising testing findings, with an overall accuracy of 98.76 percent.

3. RESEARCH GAP

As previously reported, de-duplication is one of the problem statements discussed. It is currently achieved with the aid of an Aadhaar. It is possible that people have left their Aadhaar cards behind or lost them as a result of the lockdown; this might well be the case for people in rural India. While E-Aadhaar cards exist, this poses a problem when it comes to children, as they do not have Aadhaar cards. Another option is to use biometrics, but this requires physical contact, which may result in the virus spreading. As a result, a solution with minimal physical contact will be beneficial. An iris-based authentication system cannot be used in this case because one of the symptoms of Covid is that the eyes might turn red and swollen, and there the system can fail.

4. SUGGESTED APPROACH

This is where the facial recognition model comes in as a workaround. The system is designed so that when a user comes to the portal, they get two options: if already registered, they can log in and book their slots, and if new, they can register. If the user goes to register, they enter an email ID, and an image is captured from a live video stream. After loading the captured photo, the proposed model measures various facial features, referred to as landmarks or nodal points, as discussed in the papers referenced in [31-35], including the distance between the eyes, the width of the nose, the distance from the forehead to the chin area, etc. Under ideal conditions, it is possible to gather up to 80 distinct parameters. This measured data is then converted into a signature that reflects the distinctive facial identity, and the data.pickle file stores this encoded signature. Similarly, when the user visits the login page, the algorithm confirms the user's identity by encoding the newly acquired image into a facial signature and comparing it to the hashes of known faces in the pickle file to see if there is a match. The goal of this research is to improve the efficiency of the Covid-19 vaccine administration procedure. This is achieved by combining several Machine Learning and Artificial Intelligence models into a system that the government or a commercial organization may use.

5. METHODOLOGY

Deep Neural Networks -

A deep neural network is a system that employs several layers of neural networks. Nodes are used to derive high-level functions from input data, transforming the data into progressively more abstract representations.

Convolution Neural Network (CNN) -

A Convolutional Neural Network (ConvNet/CNN) is a well-known deep learning algorithm that takes a photo as input and assigns importance (learnable weights and biases) to different aspects of the photo, which are later used to distinguish between photos. A convolutional layer requires far less pre-processing than other classification techniques. Primitive approaches need hand-engineering of filter classes, while with ConvNets,
recursively until the number of potential areas is reduced to the given count.

Pre-processing: Pre-processing techniques like resizing, normalization, and augmentation were applied. Augmentation helped increase the existing dataset significantly. More images were produced using the ImageDataGenerator from Keras: variations were produced by shifting the images horizontally and vertically and by rotating them. This step provided more varied data instead of simple linear data and thus helped train the proposed model with even higher accuracy.

Feature Extraction: The relevant and important features (having high variance) are extracted.

Dataset: As the system is used, the captured images trigger the pipeline and are automatically stored in the database as encoded hashes and, in parallel, sent to the model for retraining purposes. Initially, the Labeled Faces in the Wild (LFW) dataset is used to feed sample faces to the model. This dataset contains face photographs designed for studying the problem of unconstrained face recognition and is considered a public benchmark for face verification, also known as pair matching. The dataset is 173 MB in size and consists of over 13,000 images of 5,749 unique faces collected from the web. The model is therefore trained on this LFW dataset along with the new images that are added to the database every time a user registers in the app. The proposed CNN model uses the "binary cross-entropy" (log-loss) function to locate responses in the hidden layers for the pictures in this paper. Initially, a dataset of 13,000 pictures is utilized. This is then fed into a multi-column CNN with the preferred loss function. For each input picture, a conditioned algorithm is built to forecast filters with rankings. The correctness of the input set is determined by the assessment subset.

Fig -1: Layers in the proposed CNN model

The model is trained using the Keras Sequential API. This allows us to gradually add more elements to the proposed model. The layers employed in the proposed model are shown in Figure 1. The first layer embedded in the model is Conv2D, which consists of 100 unique filters with a 3x3 dimension, followed by the 'ReLU' activation function in the first step. If the input is positive, the Rectified Linear Unit (ReLU) function outputs it directly; otherwise it returns zero. The input scale for all photos created and evaluated for this model is set to 150x150x3.

MaxPooling2D is utilized in the second layer with a pool size of 2x2. Following that is another Conv2D layer, again with 100 unique filters of the same dimensions (3x3) and the previously used ReLU activation function, followed by another 2-dimensional MaxPooling layer with a pool size of 2x2. A Flatten layer is added later to blend the above-mentioned layers into one single 1-dimensional layer. In this stage, a cascade mechanism is learned from a huge number of positive and negative pictures; afterward, it is utilized to find objects in other pictures. The face detection cascade classifier was employed in this experiment: a model pre-trained on frontal facial traits is used to recognize faces in dynamic conditions, i.e., in real time.

There are two stages to this method's UI development: A) setting up a Flask server for the deep learning model mentioned in reference [22], and B) creating a React login app. The built-in TypeScript framework connects with the deep learning model via the Flask API. It sends an image of a person to the model, which subsequently reveals the individual's identity. The web app needs a model that can recognize faces in photos and convert facial information into a 128-dimensional vector that the system will use for facial recognition or authentication later. This is done by constructing a test database keyed by user IDs. The recognition module uses the picture to encode and create a 128-dimensional vector that the app stores as a value in a database record, with the identified person as the key. The system automatically triggers a feature that searches the database for the individual in the picture: it computes the distance between the new picture's encoding and every database entry. If the minimal distance is larger than the upper threshold, no entry is matched; otherwise, the identity with the least distance is returned. Finally, the Flask server performs two functions: 1) if the user is not already registered, add them to the server list; 2) determine the person's identity from the user's inputs, with appropriate exceptions.
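The layer stack described above can be sketched with the Keras Sequential API. This is a minimal reconstruction from the text, not the authors' exact code; in particular, the final Dense sigmoid head is an assumption inferred from the stated binary cross-entropy loss, since the paper names only the Conv2D, MaxPooling2D, and Flatten layers.

```python
from tensorflow.keras import layers, models

# Reconstruction of the described stack: 150x150x3 input, two blocks of
# Conv2D(100 filters, 3x3) + ReLU, each followed by 2x2 max pooling, then
# Flatten. The Dense sigmoid head is an assumption matching the stated
# binary cross-entropy loss.
model = models.Sequential([
    layers.Input(shape=(150, 150, 3)),
    layers.Conv2D(100, (3, 3), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Conv2D(100, (3, 3), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Flatten(),                       # 36 * 36 * 100 = 129600 features
    layers.Dense(1, activation="sigmoid"),  # assumed binary output head
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

Tracing the shapes confirms the dimensions given in the text: each valid 3x3 convolution shrinks the spatial size by 2 and each 2x2 pooling halves it, so the feature map goes 150 → 148 → 74 → 72 → 36 before flattening.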
6. ALGORITHM
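The registration and login flow described in sections 4 and 7 can be sketched as follows. This is a hedged reconstruction, not the authors' code: the facial signature is assumed to arrive as a 128-dimensional vector from some landmark encoder (the paper does not name the library), and the 0.6 distance threshold is an illustrative value, not one reported in the paper.

```python
import pickle
import numpy as np

# Known faces: {user_id: 128-dimensional facial signature}, persisted in
# data.pickle as described in the text.
def load_signatures(path="data.pickle"):
    try:
        with open(path, "rb") as f:
            return pickle.load(f)
    except FileNotFoundError:
        return {}

def save_signatures(db, path="data.pickle"):
    with open(path, "wb") as f:
        pickle.dump(db, f)

def register(db, user_id, signature):
    """Store a new user's facial signature (a 128-d vector)."""
    db[user_id] = np.asarray(signature, dtype=float)

def login(db, signature, threshold=0.6):
    """Return the user_id whose stored signature is nearest to the newly
    captured one, or None if the minimal distance exceeds the threshold."""
    if not db:
        return None
    signature = np.asarray(signature, dtype=float)
    distances = {uid: np.linalg.norm(vec - signature)
                 for uid, vec in db.items()}
    best = min(distances, key=distances.get)
    return best if distances[best] <= threshold else None
```

In the full system, `register` would be called from the Flask endpoint after the encoder produces the vector, and `login` implements the minimum-distance check with an upper limit exactly as the surrounding text describes.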
Now when the user is registered and tries to log in, the algorithm verifies the user's face by encoding the newly captured image into a facial signature and comparing it with the existing hashes of known faces in the data.pickle file to see whether there is a match.

Fig -8: "Successfully logged in" template rendered

7. RESULT

The training of the CNN model is an iterative process. After a new user is registered successfully, their facial landmarks are extracted and, in parallel, sent to the dataset for retraining of the model in real time. The model has its custom dataset on which it performs deep learning algorithms and helps authorize beneficiaries in no time. The proposed model resulted in an accuracy of 98.34%, as shown in Figure 9. The system can correctly identify the users on which the network has been trained. The graph between training loss/accuracy and the number of epochs is shown in Figure 10.

Fig -9: Face recognition model accuracy for 10 epochs

8. CONCLUSION

Given the complexity of the challenge and the necessity for the model to execute accurately and effectively, the application of software engineering best practices was extremely beneficial. Software engineering artifacts helped guide the model's evolution and keep track of its development. The built-in mechanism is set up so that users may come in and register for the Covid vaccination and then reserve their slots at a later time. Validation for the Covid-19 immunization campaign presently requires an Aadhaar card or a similar form of acceptable identification. Such identification methods necessitate physical contact, and it is conceivable that people have forgotten their Aadhaar numbers or misplaced their legal papers as a result of the shutdown; this is where a facial recognition system comes in handy. Different machine learning models may be explored in future investigations. For facial recognition, advanced Convolutional Neural Network (CNN) techniques are quite popular and accurate. Because CNNs require a substantial quantity of training data, they have struggled with emotion identification tasks. Using face landmarks might expand the size of a training dataset to the point where a CNN may be used.

9. REFERENCES

[1] S. G. Bhele and V. H. Mankar, "A Review Paper on Face Recognition Techniques," Int. J. Adv. Res. Comput. Eng. Technol., vol. 1, no. 8, pp. 2278–1323, 2012.

[4] W. Zhao et al., "Face Recognition: A Literature Survey," ACM Comput. Surv., vol. 35, no. 4, pp. 399–458, 2003.
[5] K. Delac, Recent Advances in Face Recognition. 2008.

[6] A. S. Tolba, A. H. El-Baz, and A. A. El-Harby, "Face Recognition: A Literature Review," Int. J. Signal Process., vol. 2, no. 2, pp. 88–103, 2006.

[7] C. Geng and X. Jiang, "Face recognition using SIFT features," in Proceedings - International Conference on Image Processing, ICIP, pp. 3313–3316, 2009.

[8] S. J. Wang, J. Yang, N. Zhang, and C. G. Zhou, "Tensor Discriminant Color Space for Face Recognition," IEEE Trans. Image Process., vol. 20, no. 9, pp. 2490–2501, 2011.

[9] S. N. Borade, R. R. Deshmukh, and S. Ramu, "Face recognition using a fusion of PCA and LDA: Borda count approach," in 24th Mediterranean Conference on Control and Automation, MED 2016, pp. 1164–1167, 2016.

[10] M. A. Turk and A. P. Pentland, "Face Recognition Using Eigenfaces," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 72–86, 1991.

[11] . . Simon, "Improved RGB-D-T based face recognition," IET Biometrics, vol. 5, no. 4, pp. 297–303, Dec. 2016.

[12] O. Déniz, G. Bueno, J. Salido, and F. De La Torre, "Face recognition using Histograms of Oriented Gradients," Pattern Recognit. Lett., vol. 32, no. 12, pp. 1598–1603, 2011.

[13] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, "Robust face recognition via sparse representation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 2, pp. 210–227, 2009.

[14] C. Zhou, L. Wang, Q. Zhang, and X. Wei, "Face recognition based on PCA image reconstruction and LDA," Optik - Int. J. Light Electron Opt., vol. 124, no. 22, pp. 5599–5603, 2013.

[18] Z. Zhang, P. Luo, C. C. Loy, and X. Tang, "Learning Deep Representation for Face Alignment with Auxiliary Attributes," IEEE Trans. Pattern Anal. Mach. Intell., vol. 38, no. 5, pp. 918–930, 2016.

[19] G. B. Huang, H. Lee, and E. Learned-Miller, "Learning hierarchical representations for face verification with convolutional deep belief networks," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2518–2525, 2012.

[20] S. Lawrence, C. L. Giles, A. C. Tsoi, and A. D. Back, "Face recognition: a convolutional neural-network approach," IEEE Trans. Neural Networks, vol. 8, no. 1, pp. 98–113, 1997.

[21] O. M. Parkhi, A. Vedaldi, and A. Zisserman, "Deep Face Recognition," in Proceedings of the British Machine Vision Conference 2015, pp. 41.1–41.12, 2015.

[22] Z. P. Fu, Y. N. Zhang, and H. Y. Hou, "Survey of deep learning in face recognition," in IEEE International Conference on Orange Technologies, ICOT 2014, pp. 5–8, 2014.

[23] X. Chen, B. Xiao, C. Wang, X. Cai, Z. Lv, and Y. Shi, "Modular hierarchical feature learning with deep neural networks for face verification," in Image Processing (ICIP), 2013 20th IEEE International Conference on, pp. 3690–3694, 2013.

[24] Y. Sun, D. Liang, X. Wang, and X. Tang, "DeepID3: Face Recognition with Very Deep Neural Networks," CVPR, pp. 2–6, 2015.

[25] G. Hu, "When Face Recognition Meets with Deep Learning: An Evaluation of Convolutional Neural Networks for Face Recognition," 2015 IEEE Int. Conf. Comput. Vis. Workshops, pp. 384–392, 2015.

[26] C. Ding and D. Tao, "Robust Face Recognition via Multimodal Deep Face Representation," IEEE Trans. Multimed., vol. 17, no. 11, pp. 2049–2058, 2015.

[27] A. Bharati, R. Singh, M. Vatsa, and K. W. Bowyer, "Detecting Facial Retouching Using Supervised Deep Learning," IEEE Trans. Inf. Forensics Secur., vol. 11, no. 9, pp. 1903–1913, 2016.

[28] K. Zhang, Z. Zhang, Z. Li, and Y. Qiao, "Joint face detection and alignment using multitask cascaded convolutional networks," IEEE Signal Processing Letters, vol. 23, no. 10, pp. 1499–1503, 2016.

[29] C. Yu and S. W. Lee, "Facial expression recognition using feature additive pooling and progressive fine-tuning of CNN," IEEE Access, vol. 7, p. 93594, 2018.

[30] M. Zulfiqar, F. Syed, M. J. Khan, and K. Khurshid, "Deep Face Recognition for Biometric Authentication," 2019 International Conference on Electrical, Communication, and Computer Engineering (ICECCE), 2019, pp. 1–6, doi: 10.1109/ICECCE47252.2019.8940725.

[31] Z. Mortezaie and H. Hassanpour, "A Survey on Age-Invariant Face Recognition Methods," Jordanian

[32] M. Day, "Exploiting Facial Landmarks for Emotion Recognition in the Wild," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.

[33] J. Kumari, R. Rajesh, and K. M. Pooja, "Facial Expression Recognition: A Survey," in Proceedings of IEEE Translation and Pattern Analysis Machine Intelligence Conference, 2015.