Deepi Pro
A Project report submitted in partial fulfilment of the requirements for the award of the
degree of
BACHELOR OF TECHNOLOGY
IN
COMPUTER SCIENCE & ENGINEERING
OF
DR. A.P.J. ABDUL KALAM TECHNICAL UNIVERSITY
LUCKNOW (U.P.)
Submitted by
DECLARATION
Date: _________________
Signature: ____________________
Name: Kartikey Singh Rajpoot
Roll no: 2000350100013
Signature: ___________________
Name: Navneet Mishra
Roll No. 2000350100016
Signature: _______________________
Name: Deependra Chaurasiya
Roll No. 2000350100006
CERTIFICATE
Signature: __________________________
CERTIFICATE OF APPROVAL
We are very much thankful to the Director and Management, Babu Banarasi Das Institute
of Technology, Duhai Ghaziabad (Uttar Pradesh) for their encouragement and cooperation
to carry out this work.
We express our thanks to all teaching faculty of Department of CSE, whose suggestions
during reviews helped us in accomplishment of our project. We would like to thank all
non-teaching staff and lab assistant of the Department of Computer Science &
Engineering, Babu Banarasi Das Institute of Technology Duhai Ghaziabad (Uttar
Pradesh) for providing great assistance in accomplishment of our project.
STUDENTS
CONCLUSION
REFERENCES
ABSTRACT
In colleges, universities, organizations, schools, and offices, taking attendance is one of the most
important tasks that must be done on a daily basis. The majority of the time, it is done manually,
such as by calling by name or by roll number. The main goal of this project is to create a Face
Recognition-based attendance system that will turn this manual process into an automated one.
This project meets the requirements for bringing modernization to the way attendance is handled,
as well as the criteria for time management. The device is installed in the classroom, where each
student's information, such as name, roll number, class, section, and photographs, is used to train
the system. The images are extracted using OpenCV. Before the start of the corresponding class,
the student can approach the machine, which will begin taking pictures and comparing them with
the trained dataset. A Logitech C270 web camera and an NVIDIA Jetson Nano Developer Kit were
used in this project as the camera and processing board. The image is processed as follows: first,
faces are detected using a Haar cascade classifier; then faces are recognized using the LBPH (Local
Binary Pattern Histogram) algorithm; the histogram data is checked against the established dataset,
and the device automatically marks attendance. An Excel sheet is generated and updated every hour
with the information from the respective class instructor.
Keywords: Face Detection, Face Recognition, HaarCascade classifier, NVIDIA Jetson Nano
CHAPTER 1
Introduction
1.2 Background:
Face recognition is used in criminal investigations for tracking suspects, missing
children and drug activities (Robert Silk, 2017). Apart from that, Facebook,
a popular social networking website, implements face recognition to
allow users to tag their friends in photos for entertainment purposes
(Sidney Fussell, 2018). Furthermore, Intel allows users to use
face recognition to get access to their online accounts (Reichert, C., 2017).
Apple allows users to unlock their mobile phone, the iPhone X, by using face
recognition (deAgonia, M., 2017).
A class with a large number of students might find it difficult to have an
attendance sheet passed around the room. Thus, a face recognition
attendance system is proposed to replace the manual signing of the
presence of students, which is burdensome and distracts students who must
sign for their attendance. Furthermore, the face recognition based
automated student attendance system is able to overcome the problem of
fraudulent sign-ins, and lecturers do not have to count the number of
students several times to ensure their presence.
The paper by Zhao, W. et al. (2003) lists the difficulties of facial
identification. One of these difficulties is the distinction
between known and unknown images. In addition, the paper by Pooja G.R. et
al. (2010) found that the training process for a face recognition student attendance
system is slow and time-consuming. Furthermore, the paper by Priyanka
Wagh et al. (2015) mentioned that varying lighting and head poses are often the
problems that degrade the performance of face recognition based student
attendance systems.
Hence, there is a need to develop a real-time student attendance system,
which means the identification process must be completed within defined time constraints
to prevent omission. The features extracted from facial images, which represent the
identity of the students, have to be consistent under changes in background,
illumination, pose and expression. High accuracy and fast computation time are
the evaluation points of the performance.
Expected achievements in order to fulfill the objectives are:
CHAPTER-2
LITERATURE REVIEW
2.2 Students Attendance System:
Digital image processing is the processing of digital images by a digital
computer. Digital image processing techniques are motivated by three
major applications:
Digital image processing involves the following basic tasks:
● Image Acquisition – an imaging sensor and the capability to digitize the signal
produced by the sensor.
● Preprocessing – enhances image quality: filtering, contrast enhancement, etc.
● Segmentation – partitions an input image into its constituent parts or objects.
● Description/Feature Selection – extracts a description of image objects suitable
for further computer processing.
● Recognition and Interpretation – assigns a label to an object based on the
information provided by its descriptors; interpretation assigns meaning to a set of
labelled objects.
● Knowledge Base – helps with efficient processing as well as inter-module
cooperation.
Face Detection
Face detection is the process of identifying and locating all the faces present
in a single image or video, regardless of their position, scale, orientation, age and
expression. Furthermore, the detection should be independent of extraneous
illumination conditions and of the image and video content [5].
A face detector has to tell whether an image of arbitrary size contains a
human face and, if so, where it is. Face detection can be performed based on
several cues: skin color (for faces in color images and videos), motion (for faces in
videos), facial/head shape, facial appearance, or a combination of these
parameters. Most face detection algorithms are appearance-based and do not use
other cues. An input image is scanned at all possible locations and scales by a
sub-window. Face detection is posed as classifying the pattern in the sub-window
as either a face or a non-face. The face/non-face classifier is learned from face and
non-face training examples using statistical learning methods [9]. Most modern
algorithms are based on the Viola–Jones object detection framework, which is
built on Haar cascades.
Face Detection Method            Advantages                                Disadvantages
Viola–Jones Algorithm            1. High detection speed.                  1. Long training time.
                                 2. High accuracy.                         2. Limited head pose.
                                                                           3. Not able to detect dark faces.
Local Binary Pattern Histogram   1. Simple computation.                    1. Only used for binary and grey images.
                                 2. High tolerance against monotonic       2. Overall performance is inaccurate
                                    illumination changes.                     compared to the Viola–Jones algorithm.
AdaBoost Algorithm               Does not need any prior knowledge         The result depends highly on the training
                                 about face structure.                     data and is affected by weak classifiers.

Table 2.2: Advantages & Disadvantages of Face Detection Methods
Figure 2.3: Integral Image
The Local Binary Pattern (LBP) operator was first described in 1994 and has since been found to be a
powerful feature for texture classification. It has further been determined that
when LBP is combined with the histograms of oriented gradients (HOG) descriptor,
detection performance improves considerably on some datasets. Using
LBP combined with histograms, we can represent face images with a simple
data vector.
● Radius: the radius is used to build the circular local binary pattern
and represents the radius around the central pixel. It is usually set to
1.
● Neighbors: the number of sample points to build the circular local
binary pattern. Keep in mind: the more sample points you include,
the higher the computational cost. It is usually set to 8.
● Grid X: the number of cells in the horizontal direction. The more
cells, the finer the grid, the higher the dimensionality of the resulting
feature vector. It is usually set to 8.
● Grid Y: the number of cells in the vertical direction. The more cells,
the finer the grid, the higher the dimensionality of the resulting
feature vector. It is usually set to 8.
2. Training the Algorithm: First, we need to train the algorithm. To do so,
we need to use a dataset with the facial images of the people we want to
recognize. We need to also set an ID (it may be a number or the name of
the person) for each image, so the algorithm will use this information to
recognize an input image and give you an output. Images of the same
person must have the same ID. With the training set already constructed,
let’s see the LBPH computational steps.
3. Applying the LBP operation: The first computational step of the LBPH
is to create an intermediate image that describes the original image in a
better way, by highlighting the facial characteristics. To do so, the
algorithm uses a concept of a sliding window, based on the parameters
radius and neighbors.
Based on the image above, let’s break it into several small steps
so we can understand it easily:
● Then, we convert this binary value to a decimal value and set it as the central
value of the matrix, which is actually a pixel from the original image.
● At the end of this procedure (the LBP procedure), we have a new image which
better represents the characteristics of the original image.
If a sampling point falls between pixels, its value can be estimated by bilinear
interpolation, which uses the values of the 4 nearest pixels (2x2) to estimate the
value of the new data point.
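The LBP operation above can be sketched in plain Python for the simplest case (radius 1, 8 neighbours, no bilinear interpolation):

```python
def lbp_pixel(img, r, c):
    """LBP code of the pixel at (r, c): threshold the 8 neighbours against the
    centre (1 if neighbour >= centre) and read the bits clockwise from the
    top-left corner as one byte."""
    center = img[r][c]
    # clockwise neighbour offsets, starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << (7 - bit)
    return code

def lbp_image(img):
    """Apply the operator to every interior pixel of a 2-D list of intensities."""
    h, w = len(img), len(img[0])
    return [[lbp_pixel(img, r, c) for c in range(1, w - 1)]
            for r in range(1, h - 1)]
```

The resulting codes are what the grid cells are later histogrammed over.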
Based on the image above, we can extract the histogram of each region as follows:
● So the algorithm output is the ID from the image with the closest histogram.
The algorithm should also return the calculated distance, which can be used
as a ‘confidence’ measurement.
● We can then use a threshold and the ‘confidence’ to automatically estimate
whether the algorithm has correctly recognized the image. We can assume that
the algorithm has successfully recognized a face if the confidence is lower
than the defined threshold.
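The matching step can be sketched as a nearest-histogram search (Euclidean distance is used here; the threshold value is an assumption to be tuned on real data):

```python
import math

def euclidean(h1, h2):
    """Distance between two concatenated LBP histograms."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

def recognize(query_hist, known):
    """known: {student_id: histogram}. Return (best_id, distance)."""
    best_id, best_d = None, float("inf")
    for sid, hist in known.items():
        d = euclidean(query_hist, hist)
        if d < best_d:
            best_id, best_d = sid, d
    return best_id, best_d

THRESHOLD = 50.0   # assumed cut-off; tune on a validation set

def is_match(distance):
    # lower distance = higher confidence; accept only below the threshold
    return distance < THRESHOLD
```

The returned distance is the ‘confidence’ measurement described above.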
CHAPTER-3
MODEL IMPLEMENTATION AND ANALYSIS
3.1 INTRODUCTION:
Face detection involves separating image windows into two classes: one
containing faces, and one containing the background (clutter). It is difficult because,
although commonalities exist between faces, they can vary considerably in age,
skin colour and facial expression. The problem is further complicated by differing
lighting conditions, image qualities and geometries, as well as the possibility of
partial occlusion and disguise. An ideal face detector would therefore be able to
detect the presence of any face under any set of lighting conditions, upon any
background. The face detection task can be broken down into two steps. The first
step is a classification task that takes some arbitrary image as input and outputs a
binary value of yes or no, indicating whether any faces are present in the
image. The second step is the face localization task, which takes an image as
input and outputs the location of any face or faces within that image as a
bounding box (x, y, width, height). After taking the picture, the system
compares it with the pictures in its database and gives the most closely related
result.
We will use the NVIDIA Jetson Nano Developer Kit, a Logitech C270 HD Webcam
and the OpenCV platform, and will do the coding in the Python language.
Figure 3.1: Model Implement
The main component used in the implementation approach is the open source
computer vision library (OpenCV). One of OpenCV's goals is to provide a
simple-to-use computer vision infrastructure that helps people build fairly sophisticated
vision applications quickly. The OpenCV library contains over 500 functions that span
many areas of vision, and it is the primary technology behind this face recognition project.
The user stands in front of the camera, keeping a minimum distance of 50 cm, and his
image is taken as input. The frontal face is extracted from the image, then
converted to grayscale and stored. The Principal Component Analysis (PCA)
algorithm is performed on the images and the eigenvalues are stored in an xml file.
When a user requests recognition, the frontal face is extracted from the video
frame captured through the camera. The eigenvalue is re-calculated for the test face
and matched against the stored data for the closest neighbour.
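The PCA-based matching described above can be sketched with NumPy (eigenfaces computed via SVD of the centred data; array sizes and labels here are illustrative, and a real system would persist the projections, e.g. to an xml file):

```python
import numpy as np

def fit_pca(faces, n_components=2):
    """faces: (N, D) matrix of flattened grayscale face images.
    Returns the mean face and the top principal axes (eigenfaces)."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centred data yields the eigenvectors of the covariance matrix
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, axes):
    """Weight (eigenvalue) vector of one face in the eigenface basis."""
    return axes @ (face - mean)

def nearest(face, mean, axes, gallery, labels):
    """Match a test face to the closest stored face by projected distance."""
    q = project(face, mean, axes)
    dists = [np.linalg.norm(q - project(g, mean, axes)) for g in gallery]
    return labels[int(np.argmin(dists))]
```

The closest-neighbour search over stored projections is what answers a recognition request.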
1. OpenCV: We used the OpenCV 3 dependency for Python 3. OpenCV is a library
in which many image processing functions are available, and it is very
useful for image processing; one can often get the expected outcome
with very little code. The library is cross-platform and free for use
under the open-source BSD license. Examples of some supported functions are
given below:
● Derivation: Gradient/Laplacian computing, contours delimitation
● Hough transforms: lines, segments, circles, and geometrical shapes
detection
● Histograms: computing, equalization, and object localization with back
projection algorithm
● Segmentation: thresholding, distance transform, foreground/background
detection, watershed segmentation
Fig 3.2: Installing OpenCV
We copied this script, placed it in a directory on our Jetson Nano and saved it.
Then, through the terminal, we made the script executable and ran it.
3.3.2 Hardware Implementation:
It’s simpler than ever to get started! Just insert a microSD card with the system image,
boot the developer kit, and begin using the same NVIDIA JetPack SDK used across the
entire NVIDIA Jetson™ family of products. JetPack is compatible with NVIDIA's
world-leading AI platform for training and deploying AI software, reducing complexity
and effort for developers.
Specifications:
The developer kit uses a microSD card as boot device and for main storage. It’s important
to have a card that’s fast and large enough for your projects; the minimum requirement is a
32GB UHS-1 card.
So we used a 64 GB microSD card.
Camera: 1x MIPI CSI-2 connector
Display: HDMI
USB: 1x USB 3.0 Type A, 2x USB 2.0 Type A, USB 2.0 Micro-B
Before utilizing it, we have to configure our NVIDIA Jetson Nano Board for Computer
Vision and Deep Learning with TensorFlow, Keras, TensorRT, and OpenCV.
The NVIDIA Jetson Nano packs 472 GFLOPS of computational horsepower.

Figure 3.4: The first step to configure your NVIDIA Jetson Nano for computer vision and deep learning is
to download the Jetpack SD card image.

While your Nano SD image is downloading, go ahead and download and install balenaEtcher, a disk
image flashing tool:
Figure 3.5: Download and install balenaEtcher for your OS. You will use it to flash your Nano image to a
microSD card.
Once both (1) your Nano Jetpack image is downloaded, and (2) balenaEtcher is installed,
you are ready to flash the image to a microSD.
Insert the microSD into the card reader, and then plug the card reader into a USB port on
your computer. From there, fire up balenaEtcher and proceed to flash.
Figure 3.6: Flashing NVIDIA’s Jetpack image to a microSD card with balenaEtcher is one of the first steps
for configuring your Nano for computer vision and deep learning.
When flashing has successfully completed, you are ready to move on to Step #2.
Step #2: Boot your Jetson Nano with the microSD and connect to a network
● Insert your microSD into your Jetson Nano as shown in Figure 3.7:
Figure 3.7: To insert your Jetpack-flashed microSD, find the microSD slot as shown by the red circle in
the image. Insert your microSD until it clicks into place.
From there, connect your screen, keyboard, mouse, and network interface.
Finally, apply power. Insert the power plug of your power adapter into your Jetson Nano
(use the J48 jumper if you are using a 20W barrel plug supply).
Figure 3.8: Use the icon near the top right corner of your screen to configure networking settings on your
NVIDIA Jetson Nano. You will need internet access to download and install computer vision and deep
learning software.
Once you see your NVIDIA + Ubuntu 18.04 desktop, you should configure your wired or
wireless network settings as needed using the icon in the menubar, as shown in Figure 3.8.
When you have confirmed that you have internet access on your NVIDIA Jetson Nano,
you can move on to the next step.
1. Option 1: Open a terminal on the Nano desktop, and assume that you’ll perform
all steps from here forward using the keyboard and mouse connected to your
Nano
2. Option 2: Initiate an SSH connection from a different computer so that we can
remotely configure our NVIDIA Jetson Nano for computer vision and deep
learning
For Option 1, open up the application launcher, and select the terminal app. You may
wish to right click it in the left menu and lock it to the launcher, since you will likely use
it often.
You may now continue to Step #4 while keeping the terminal open to enter commands.
For Option 2, you must first determine the username and IP address of your Jetson Nano.
On your Nano, fire up a terminal from the application launcher, and enter the following
commands at the prompt:
$ whoami
nvidia
$ ifconfig
en0: flags=8863 mtu 1500
    options=400
    ether 8c:85:90:4f:b4:41
    inet6 fe80::14d6:a9f6:15f8:401%en0 prefixlen 64 secured scopeid 0x8
    inet6 2600:100f:b0de:1c32:4f6:6dc0:6b95:12 prefixlen 64 autoconf secured
    inet6 2600:100f:b0de:1c32:a7:4e69:5322:7173 prefixlen 64 autoconf temporary
    inet 192.168.1.4 netmask 0xffffff00 broadcast 192.168.1.255 nd6 options=201
    media: autoselect
    status: active
Grab your IP address. Then, on a separate computer, such as your laptop/desktop, initiate
an SSH connection as follows:
$ ssh [email protected]
Notice how I’ve entered the username and IP address of the Jetson Nano in my command
to remotely connect.
Step #4: Update your system and remove programs to save space
In this step, we will remove programs we don’t need and update our system. First, let’s set
our Nano to use maximum power capacity:
$ sudo nvpmodel -m 0
$ sudo jetson_clocks
The nvpmodel command handles two power options for your Jetson Nano: (1) 5W is
mode 1 and (2) 10W is mode 0. The default is the higher wattage mode, but it is always
best to force the mode before running the jetson_clocks command.
After you have set your Nano for maximum power, go ahead and remove LibreOffice —
it consumes lots of space, and we won’t need it for computer vision and deep learning:
$ sudo apt-get update && sudo apt-get upgrade
Step #5: Install OpenCV system-level dependencies and other development dependencies
Let's now install OpenCV dependencies on our system, beginning with the tools needed to
build and compile OpenCV with parallelism:
Lastly, we’ll install Video4Linux (V4L) so that we can work with USB webcams and
install a library for FireWire cameras:
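A representative set of install commands for these dependencies on Ubuntu 18.04 might look like the following (package names are indicative and can vary with the JetPack release):

$ sudo apt-get install build-essential cmake git pkg-config
$ sudo apt-get install libjpeg-dev libpng-dev libtiff-dev
$ sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
$ sudo apt-get install libgtk-3-dev libatlas-base-dev gfortran
$ sudo apt-get install libdc1394-22-dev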
Step #6: Set up Python virtual environments on your Jetson Nano
Figure 3.9: Each Python virtual environment you create on your NVIDIA Jetson Nano is separate and
independent from the others.
I can’t stress this enough: Python virtual environments are a best practice when both
developing and deploying Python software projects.
Virtual environments allow for isolated installs of different Python packages. When you
use them, you could have one version of a Python library in one environment and another
version in a separate, sequestered environment.
In the remainder of this tutorial, we’ll create one such virtual environment; however, you
can create multiple environments for your needs after you complete this Step #6. Be sure
to read the RealPython guide on virtual environments if you aren’t familiar with them.
First, we’ll install the de facto Python package management tool, pip:
$ wget https://bootstrap.pypa.io/get-pip.py
$ sudo python3 get-pip.py
$ rm get-pip.py
And then we’ll install my favorite tools for managing virtual
environments, virtualenv and virtualenvwrapper:
The virtualenvwrapper tool is not fully installed until you add information to your bash
profile. Go ahead and open up your ~/.bashrc with the nano editor:
$ nano ~/.bashrc
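The lines to append are typically the standard virtualenvwrapper configuration (the exact paths are assumptions that depend on where pip installed the scripts):

export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/local/bin/virtualenvwrapper.sh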
Save and exit the file using the keyboard shortcuts shown at the bottom of the nano editor,
and then load the bash profile to finish the virtualenvwrapper installation:
$ source ~/.bashrc
Figure 3.10: Terminal output from the virtualenvwrapper setup installation indicates that there are no errors.
We now have a virtual environment management system in place so we can create computer vision and
deep learning virtual environments on our NVIDIA Jetson Nano.
So long as you don’t encounter any error messages, both virtualenv and
virtualenvwrapper are now ready for you to create and destroy virtual environments as
needed in Step #7.
This step is dead simple once you’ve installed virtualenv and virtualenvwrapper in the
previous step. The virtualenvwrapper tool provides the following commands to work with
virtual environments:
● mkvirtualenv : Creates a new virtual environment
● lsvirtualenv : Lists the virtual environments on your system
● rmvirtualenv : Removes a virtual environment
● workon : Activates a virtual environment
● deactivate : Exits the virtual environment, taking you back to your system environment
Assuming Step #6 went smoothly, let’s create a Python virtual environment on our Nano:
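With virtualenvwrapper in place, the environment can be created with a single command (the -p flag selects the Python interpreter; the name py3cv4 matches the text below):

$ mkvirtualenv py3cv4 -p python3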
I've named the virtual environment py3cv4, indicating that we will use Python 3 and
OpenCV 4. You can name yours whatever you'd like depending on your project and
software needs, or even your own creativity. When your environment is ready, your bash
prompt will be preceded by (py3cv4). If your prompt is not preceded by the name of your
virtual environment, you can use the workon command at any time as follows:
$ workon py3cv4
Figure 3.11: Ensure that your bash prompt begins with your virtual environment name for the remainder of
this tutorial on configuring your NVIDIA Jetson Nano for deep learning and computer vision.
For the remaining steps, you must be “in” the py3cv4 virtual environment.
3.3.2.2 Webcam:
Specifications:
• The Logitech C270 Web Camera (960-000694) is supported by the NVIDIA Jetson Nano
Developer Kit.
• The C270 HD Webcam gives you sharp, smooth conference calls (720p/30fps) in
a widescreen format. Automatic light correction shows you in lifelike, natural
colors.
• It is suitable for use with the NVIDIA Jetson Nano and NVIDIA Jetson
Xavier NX Development Kits.
Face Detection:
Start capturing images through the web camera of the client side:
Begin:
● Calculate the eigenvalue of the captured face image and compare it with the
eigenvalues of the existing faces in the database.
● If the eigenvalue does not match an existing one, save the new face image
information to the face database (xml file).
● If the eigenvalue matches an existing one, the recognition step is performed.
End
Face Recognition:
Using the PCA algorithm, the following steps are followed for face recognition:
Begin:
● Find the face information of the matched face image in the database.
● Update the log table with the corresponding face image and system time, which
completes the attendance for an individual student.
End
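The log-table update above can be sketched with Python's csv module (the file layout and column order here are assumptions):

```python
import csv
import os
from datetime import datetime

def mark_attendance(name, roll_no, path="Attendance/attendance.csv"):
    """Append one row per recognized student with the current date and time."""
    stamp = datetime.now()
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["Roll No", "Name", "Date", "Time"])  # header once
        writer.writerow([roll_no, name,
                         stamp.strftime("%Y-%m-%d"), stamp.strftime("%H:%M:%S")])
```

Appending rather than rewriting keeps earlier rows intact when several students are recognized in one session.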
This section presents the results of the experiment conducted to capture the face
into a grayscale image of 50x50 pixels.
Figure 3.13 : Dataset sample
CHAPTER-4
ALGORITHM IMPLEMENTATION
All our code is written in the Python language. First, here is our project directory
structure and files.
FRASJN
| [Attendance]
| [ImagesUnknown]
| [StudentDetails]
| [TrainingImage]
| [Traininglabel]
| main.py
| automail.py
| CaptureImage.py
| check_camera.py
| haarcascade_frontalface_default.xml
| recognize.py
| requirements.txt
Note: The names inside square brackets [“folder name”] indicate it is a folder.
[Attendance] => It contains all the attendance sheets saved after taking attendance.
[ImagesUnknown] => Unknown images are placed inside this folder to avoid false positives.
[StudentDetails] => Here we place Studentdetails.csv file to use while recognizing faces.
[Trainingimage] => After capturing the dataset of a student, all his/her images are stored here.
4.1.1 main.py
All the work is done here: detect the faces, recognize them, and take attendance.
4.1.2 automail.py
In this project we added an extra feature called auto mail, which can automatically send
the attendance file to a specified email address. The auto mail code is given below.
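A minimal sketch of such an auto mail feature using Python's standard smtplib and email modules (the addresses, server and credentials are placeholders, not the project's actual values):

```python
import os
import smtplib
from email.message import EmailMessage

def build_attendance_mail(csv_path, sender, receiver):
    """Build a mail message with the attendance sheet attached."""
    msg = EmailMessage()
    msg["Subject"] = "Attendance Report"
    msg["From"] = sender
    msg["To"] = receiver
    msg.set_content("Please find the attendance sheet attached.")
    with open(csv_path, "rb") as f:
        msg.add_attachment(f.read(), maintype="application",
                           subtype="octet-stream",
                           filename=os.path.basename(csv_path))
    return msg

def send_mail(msg, password):
    """Send via Gmail's SMTP server (an app-specific password is required)."""
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
        server.login(msg["From"], password)
        server.send_message(msg)
```

Separating message construction from sending makes the attachment logic testable without network access.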
4.1.3 Capture_Image.py
This Capture_Image.py will collect the dataset of a student and add his/her name
to the StudentsDetails.csv file.
4.1.4 checkcamera.py
4.1.5 Train_Image.py
All the images in the TrainingImage folder are accessed here, and a model is
created using this Train_Image.py file.
4.1.6 Recognize.py
When this Recognize.py file is executed, the camera is opened and the system
recognizes the students present in the Students.csv file; those who are present are
marked automatically, and the attendance is saved in the Attendance folder with the
date and time.
4.1.7 requirements.txt
This file lists all the required packages to be installed before executing the code.
We can use the above commands, or run a single command with the
requirements.txt file (pip install -r requirements.txt):
opencv-contrib-
4.2 Sample Images:
CHAPTER-6
PERFORMANCE ANALYSIS
6.1 Introduction:
6.2 Analysis:
6.3 Flow Chart:
We designed a system comprising two modules. The first
module (face detector) is a mobile component, which is basically a camera
application that captures student faces and stores them in a file using computer
vision face detection algorithms and face extraction techniques. The second
module is a desktop application that performs face recognition on the captured
images (faces) in the file, marks the student register, and then stores the results
in a database for future analysis.
CONCLUSION
Face recognition systems are part of facial image processing applications, and their
significance as a research area has increased recently. Implementations of such systems
include crime prevention, video surveillance, person verification, and similar security activities.
A face recognition system can also be deployed in universities. The Face
Recognition Based Attendance System has been envisioned for the purpose of reducing
the errors that occur in the traditional (manual) attendance-taking system. The aim is to
automate attendance and build a system that is useful to organizations such as institutes.
It is an efficient and accurate method of attendance in the office environment that can replace
the old manual methods. The method is secure, reliable and available for use.
The proposed algorithm is capable of detecting multiple faces, and the performance of the
system shows acceptably good results.
REFERENCES
[1] A Brief History of Facial Recognition, NEC, New Zealand, 26 May 2020. [Online]. Available:
https://www.nec.co.nz/market-leadership/publications-media/a-brief-history-of-facialrecognition/
[2] Face Detection, TechTarget Network, Corinne Bernstein, Feb. 2020. [Online]. Available:
https://searchenterpriseai.techtarget.com/definition/face-detection
[3] Paul Viola and Michael Jones, "Rapid Object Detection using a Boosted Cascade of Simple
Features", Accepted Conference on Computer Vision and Pattern Recognition, 2001.
[4] Face Detection with Haar Cascade, Towards Data Science, Girija Shankar Behera, India,
Dec 24, 2020. [Online]. Available: https://towardsdatascience.com/face-detection-with-haar-cascade-727f68dafd08
[5] Face Recognition: Understanding LBPH Algorithm, Towards Data Science, Kelvin Salton do
Prado, Nov 11, 2017. [Online]. Available: https://towardsdatascience.com/face-recognition-how-lbph-works-90ec258c3d6b
[6] What is Facial Recognition and How Sinister is it, The Guardian, Ian Sample, July 2019.
[Online]. Available: https://www.theguardian.com/technology/2019/jul/29/what-is-facial-recognition-and-how-sinister-is-it
[7] Kushsairy Kadir, Mohd Khairi Kamaruddin, Haidawati Nasir, Sairul I. Safie, Zulkifli Abdul
Kadir Bakti, "A comparative study between LBP and Haar-like features for Face Detection using
OpenCV", 4th International Conference on Engineering Technology and Technopreneurship
(ICE2T), DOI: 10.1109/ICE2T.2014.7006273, 12 January 2015.
[8] Senthamizh Selvi. R, D. Sivakumar, Sandhya. J.S, Siva Sowmiya. S, Ramya. S, Kanaga Suba
Raja. S, "Face Recognition Using Haar-Cascade Classifier for Criminal Identification",
International Journal of Recent Technology and Engineering (IJRTE), vol. 7, issue 6S5,
ISSN: 2277-3878, April 2019.
[9] Robinson-Riegler, G., & Robinson-Riegler, B. (2008). Cognitive Psychology: Applying the
Science of the Mind. Boston, Pearson/Allyn and Bacon.
[10] Margaret Rouse, What is Facial Recognition? - Definition from WhatIs.com, 2012. [Online].
Available: http://whatis.techtarget.com/definition/facial-recognition
[11] Robert Silk, Biometrics: Facial recognition tech coming to an airport near you, Travel
Weekly, 2017. [Online]. Available: https://www.travelweekly.com/Travel-News/Airline-