
FALCON-EYE OBJECT TRACKING SYSTEM

A PROJECT REPORT

Submitted by

Jainish Parmar (20CP030)

In partial fulfillment for the award of the degree of


B. TECH. in COMPUTER ENGINEERING

4CP33: Full Semester External Project (FSEP)

BIRLA VISHVAKARMA MAHAVIDYALAYA


(ENGINEERING COLLEGE)
(An Autonomous Institution)
VALLABH VIDYANAGAR

Affiliated to

GUJARAT TECHNOLOGICAL UNIVERSITY, AHMEDABAD

Academic Year: 2023 – 2024

BVM ENGINEERING COLLEGE, VALLABH VIDYANAGAR-388120

APPROVAL SHEET

The project work entitled “Falcon Eye: An Object Tracking System (Jetson Nano)”
carried out by Jainish Parmar (20CP030) is approved for submission in the
course 4CP33, Full Semester External Project, in partial fulfillment of the
requirements for the award of the degree of B. Tech. in Computer Engineering.

Date:

Place:

Signatures of Examiners:

(Names and Affiliations)

CERTIFICATE

This is to certify that the Project Work embodied in this report titled “Falcon
Eye: An Object Tracking System (Jetson Nano)” was carried out by Jainish Parmar
(20CP030) under the course 4CP33, Full Semester External Project, in partial
fulfillment of the requirements for the award of the degree of B. Tech. in
Computer Engineering. The following are the student's supervisors:

Date:

Place:

Shri Dhrupesh S Shah
SCI/ENGR, SEDA-EOSDIG-SFSD
SAC-ISRO, Ahmedabad

Prof. Prashant B. Swadas
Associate Professor
Computer Engineering Department, BVM Engineering College

Prof. Mahasweta Jayantbhai Joshi
Associate Professor
Computer Engineering Department, BVM Engineering College

Dr. Darshak G Thakore


Prof. & Head,
Computer Engineering Department,
BVM Engineering College

COMPUTER ENGINEERING DEPARTMENT, BVM ENGINEERING COLLEGE, VALLABH


VIDYANAGAR – 388120

CERTIFICATE

DECLARATION OF ORIGINALITY

I hereby certify that I am the sole author of this report under the course 4CP33 (Full Semester
External Project) and that neither any part thereof nor the whole of the report has been submitted
for a degree to any other University or Institution.

I certify that, to the best of my knowledge, the current report does not infringe upon anyone’s
copyright nor does it violate any proprietary rights and that any ideas, techniques, quotations or
any other material from the work of other people included in my report, published or otherwise,
are fully acknowledged in accordance with the standard referencing practices. Furthermore, to
the extent that I have included copyrighted material that surpasses the boundary of fair dealing
within the meaning of the Indian Copyright (Amendment) Act 2012, I certify that I have
obtained a written permission from the copyright owner(s) to include such material(s) in the
current report and have included copies of such copyright clearances to the appendix of this
report.

I declare that this is a true copy of the report, including any final revisions, as approved by
the report review committee.

I have checked the write-up of the present report using an anti-plagiarism database, and it is
within the permissible limit. However, should any complaint pertaining to plagiarism arise at
any time in the future, I am solely responsible for it. I also understand that, in case of such
complaints of plagiarism, as per the University Grants Commission (UGC) norms, the University
can even revoke the degree conferred on a student who submits a plagiarized report.

Date:

Institute Code: 007

Institute Name: Birla Vishvakarma Mahavidyalaya (BVM) Engineering College

Jainish Parmar

20CP030

ACKNOWLEDGEMENT

I would like to take this occasion to express my gratitude to everyone who has supported and
helped me throughout this period.

Firstly, I am grateful to the Computer Engineering Department at BVM Engineering College and
the Space Application Center (SAC-ISRO), Ahmedabad, for giving me the opportunity of a Full
Semester External Project.

I would like to convey my deepest gratitude to my project guides, Prof. Prashant B. Swadas and
Prof. Mahasweta Jayantbhai Joshi, and my industry guide, Shri Dhrupesh S. Shah, for their kind
support, continuous supervision, and the valuable knowledge they imparted to me.

Finally, I would like to express my gratitude to my friends and fellow classmates for their
support and inspiration. I am sincerely appreciative of the unwavering support of my family,
and I am grateful to have been given this opportunity.

Jainish Parmar

20CP030

Plagiarism Report

Abstract

Artificial Intelligence (AI) and Robotics have emerged as synergistic disciplines, reshaping
industries, economies, and societal landscapes. This report presents an application that
integrates AI and Robotics, focusing on the advancements, challenges, and implications of this
convergence.

Beginning with the problem statement and the proposed solution, the document introduces the
Jetson Nano, a compact and powerful computing platform that has gained prominence for its
ability to integrate artificial intelligence (AI) capabilities with robotics applications. It
then discusses the hardware specifications, including the GPU-accelerated computing power,
multiple camera inputs, and GPIO pins for hardware integration.

Furthermore, the report describes the tools and technologies used in this application,
including the Jetson Nano, servo motors, the PCA9685 driver, and a camera, along with
technologies such as Python, computer vision, and a deep-learning face recognition algorithm.

Additionally, the report explains the design and modeling of the application and its
algorithms. It discusses the algorithms and UML diagrams, including the use case diagram, the
flowchart of the application, and others.

Lastly, the report covers the implementation of the application. It first covers the
configuration of the Jetson Nano and of the external hardware connected to it. It then covers
the implementation of the face detection, face recognition, and tracking algorithms, followed
by their integration to control the robot.

Table of Contents

APPROVAL SHEET II

CERTIFICATE III

CERTIFICATE IV

DECLARATION OF ORIGINALITY V

ACKNOWLEDGEMENT VI

Plagiarism Report VII

Abstract VIII

Table of Contents IX

List of Figures XI

List of Symbols, Abbreviations, and Nomenclature XII

1. Introduction 13

1.1 Introduction 13

1.2 Project Objectives & Modules 13

1.3 Motivation 13

2. Related Work and Background 14

2.1 Requirement Analysis 14

2.1.1 Functional Requirements 14

2.1.2 Additional Functional Requirements 15

2.1.3 Proposed Solution 16

2.1.4 Hardware Requirements 16

2.1.5 Software Requirements 16

2.2 Tools and Technologies Used 17

3. Modeling and Design 25

3.1 Relations to database 25

3.2 UML Diagrams 25

4. Implementation 30

4.1 System-Configuration Module 30

4.2 Prepare Dataset Module 34

4.3 Face Detection and Tracking Module 35

4.4 Face Recognition and Tracking Module 36

5. Conclusion and Future Scope 38

5.1 Conclusion 38

5.2 Future Scope 38

References

List of Figures

Fig 2.1 Jetson-nano developer kit 16

Fig 2.2 Servo motor, PCA9685, camera & pan-tilt bracket system 17

Fig 2.3 Python3 & OpenCV logo 18

Fig 2.4 Feature Extraction 18

Fig 2.5 Circuit-python logo 19

Fig 3.1 E-R diagram 20

Fig 3.2 Use case Diagram 21

Fig 3.3 Flowchart of Model Training 22

Fig 3.4 Design of Face-recognition Algorithm 23

Fig 3.5 Flowchart of Falcon-Eye 24

Fig 3.6 Pan-Tilt Tracking Algorithm 25

Fig 3.7 DFD-0 Level 25

Fig 4.1 System Components 26

Fig 4.2 Jetson-nano Configuration 27

Fig 4.3 Pin-configuration between pca9685 and jetson nano gpio pin 27

Fig 4.4 Servo Motor configuration with Pca9685 28

Fig 4.5 Attachment of all components 29

Fig 4.6 Preparing Dataset 30

Fig 4.7 File Structure of Dataset 30

Fig 4.8 Face detection & Tracking 31

Fig 4.9 Model Training 32

Fig 4.10 Face Recognition & Tracking 33

List of Symbols, Abbreviations and Nomenclature

SAC – Space Application Center


ISRO – Indian Space Research Organization
SC ENG. – Scientists Engineer
CPU - Central Processing Unit
GPU - Graphics Processing Unit
AI – Artificial Intelligence
ML – Machine Learning
ER – Entity Relationship
DFD – Data Flow Diagram
GB – Giga Bytes

Chapter 1: Introduction

This chapter gives an overall introduction to this project: the project itself, its
objectives, and the motivation behind it.

1.1 Introduction:

This project is an artificial intelligence and robotics based application developed at the
Space Application Center (SAC-ISRO), Ahmedabad, as part of research on a compact yet powerful
computing device, the NVIDIA Jetson Nano Developer Kit. The project integrates various
artificial intelligence algorithms with robotics. Further details are covered in the upcoming
chapters.

1.2 Project Objectives

The objective of the project is to develop a robust system for face following and tracking using
the Jetson Nano platform. Leveraging the power of the Jetson Nano's GPU-accelerated
computing capabilities, the project aims to create a real-time, efficient solution for detecting and
tracking human faces in various environments.

The primary goal is to implement computer vision algorithms to accurately identify and localize
faces within a video stream captured by a camera connected to the Jetson Nano. This involves
exploring state-of-the-art techniques such as deep learning-based object detection and tracking
algorithms like Haar cascades, HOG+SVM, or more advanced methods like convolutional neural
networks (CNNs) and deep learning-based trackers.

Furthermore, the project seeks to optimize the performance of the face detection and tracking
algorithms to ensure real-time processing on the Jetson Nano's hardware. This optimization may
involve model quantization, algorithmic optimizations, and parallel processing techniques to
fully utilize the hardware's capabilities while minimizing computational overhead.

Ultimately, the completed system will allow for seamless and accurate tracking of faces within
the camera's field of view, enabling applications such as automated camera control, surveillance
systems, human-computer interaction, and more.

1.3 Motivation

The driving force behind my project is deeply rooted in the ever-growing need for advanced
surveillance, more engaging human-computer interactions, and automation systems that truly
make a difference in people's lives. I'm passionate about crafting a reliable face following and
tracking solution on the Jetson Nano platform because I believe it can address these evolving
needs with a touch of humanity, efficiency, and precision. My goal is to empower applications
ranging from smart security systems to friendly interactive robots, enriching experiences and
enhancing safety measures in our daily lives. And by making this technology accessible and
affordable, I hope to bring its benefits to everyone, from enthusiasts to professionals, fostering a
more inclusive and connected future.

Chapter 2: Related Work and Background

This chapter describes the tools and technologies used to build this application.
Overall, it is the tools-and-technology exploration part of the report.

2.1 Requirement Analysis:

This part contains the overall analysis, covering the functional as well as non-functional
requirements of the system, as given below:

2.1.1 Functional Requirements:

In order to acquire information for the requirement analysis of the project, I discussed it
with my guides and, in addition, studied various research papers. The functional requirements
of the project are as follows:

1. Face Detection & Recognition: The camera system should be able to accurately detect
and localize human faces in real-time, regardless of factors such as lighting conditions,
facial expressions, and occlusions.

2. Tracking Capability: The camera system should have the ability to track and follow the
detected face as it moves within the camera's field of view. It should be capable of
smoothly adjusting the camera orientation and position to keep the face centered in the
frame.

3. Robustness: The face-following camera should be robust against various environmental
conditions and challenges, including changes in lighting, background clutter, occlusions,
and sudden movements of the subject.

4. Real-Time Processing: The system should perform face detection and tracking tasks in
real-time, with minimal latency, to ensure smooth and responsive performance.

5. Adaptability: The camera system should be adaptable to different scenarios and
environments, such as indoor and outdoor settings, varying distances between the camera
and the subject, and different types of facial appearances.

6. Accuracy: The system should accurately follow the movements of the detected face
without losing track or jittering. It should maintain a consistent and smooth tracking
performance to provide a high-quality user experience.

7. User Interaction: The camera system may include features for user interaction, such as
the ability to select specific faces to track, adjust tracking parameters, or temporarily
disable tracking when needed.

8. Integration: The face-following camera should be compatible with existing hardware and
software systems, allowing seamless integration into various applications such as
robotics, video conferencing, or security surveillance.

9. Security and Privacy: The camera system should prioritize the security and privacy of
individuals by adhering to best practices for data protection, encryption, and user consent
when capturing and processing facial data.

10. Documentation and Support: The manufacturer should provide comprehensive
documentation, tutorials, and technical support to assist users in setting up, configuring,
and troubleshooting the face-following camera system.

2.1.2 Additional Functional Requirements

The additional requirements of the project are as follows:

1. Performance:

 Response Time: The camera system should have low latency, ensuring that it can
quickly and accurately track faces without noticeable delays.

 Throughput: The system should be capable of processing multiple frames per
second to maintain smooth tracking performance even in dynamic environments.

2. Reliability:

 Availability: The camera system should be highly available, minimizing
downtime and ensuring continuous operation.

 Error Handling: The system should have robust error handling mechanisms to
gracefully handle exceptions and recover from failures.

3. Scalability:

 Scalability: The camera system should be scalable to accommodate a varying
number of tracked faces and support the addition of multiple cameras for broader
coverage.

 Performance Scalability: The system should maintain consistent performance
levels as the number of tracked faces or the complexity of the scene increases.
2.1.3 Proposed Solution

The proposed solution that fulfills all requirements and objectives of the project is to
develop an object tracking system using the Jetson Nano and external hardware devices, along
with artificial intelligence algorithms and computer vision. The algorithms and implementation
are discussed in the upcoming chapters.

Modules:

 System Configuration Module
 Tracking Algorithm Module
 Face Recognition Module
 Prepare dataset
 Model Training
 Face Recognition
 Integration of Tracking Algorithm with Face Recognition Module.

2.1.4 Hardware Requirements

 Jetson Nano Developer Kit (2 GB/4 GB)
 2 standard servo motors (MG995)
 Pan-tilt bracket servo mount kit
 PCA9685
 Camera
 LCD display, keyboard, CPU
 Jumper wires, HDMI cable, 64 GB or larger SD card

2.1.5 Software Requirements

 Linux/Ubuntu operating system
 Python 3.6 or above
 OpenCV, face-recognition, dlib, CircuitPython, and other libraries
 balenaEtcher
 SD Card Formatter
 Jetson Nano SD card image
 Code-OSS

2.2 Tools and Technologies Used:

1. Jetson Nano

The Jetson Nano is a compact yet powerful single-board computer designed by NVIDIA
specifically for running artificial intelligence (AI) workloads at the edge. This section
provides a concise overview of the Jetson Nano's key features, applications, and impact in the
field of edge computing.

The Jetson Nano boasts a quad-core ARM Cortex-A57 CPU and an NVIDIA Maxwell GPU with
128 CUDA cores, offering significant computational power in a small form factor. It supports
various AI frameworks and libraries, including TensorFlow, PyTorch, and OpenCV, enabling
developers to deploy complex AI models directly onto the device.

One of the most compelling aspects of the Jetson Nano is its suitability for a wide range of edge
AI applications. From autonomous drones and robots to intelligent surveillance systems and
smart IoT devices, the Jetson Nano empowers developers to implement AI-driven functionalities
directly on the device, without relying on cloud connectivity.

Moreover, the Jetson Nano's low power consumption and thermal efficiency make it an ideal
solution for embedded systems where energy efficiency is paramount. Its GPIO pins and
peripheral interfaces further enhance its versatility, enabling seamless integration with
sensors, cameras, and other hardware components.

Fig 2.1 Jetson Nano Developer Kit


Source: https://developer.nvidia.com/embedded/learn/get-started-jetson-nano-devkit

Fig 2.2 Jetson Nano architecture
Source: https://liliputing.com/wp-content/uploads/2019/03/jetson-nano_06.jpg

2. Standard Servo Motor – (MG995)

The Tower Pro MG995 Standard Servo Motor is a widely used electromechanical device in
robotics and hobbyist projects. This servo motor is known for its affordability, durability, and
versatility, making it a popular choice among makers, engineers, and hobbyists alike.

Key features of the MG995 servo motor include:

1. High Torque: The MG995 servo motor is capable of delivering high torque, making it
suitable for applications requiring precise control and manipulation of objects.

2. Metal Gears: It is equipped with metal gears, which enhance its durability and reliability,
allowing it to withstand heavy loads and continuous use.

3. Wide Operating Range: The MG995 servo motor typically operates within a wide range
of rotational angles, typically around 180 degrees, providing flexibility in various robotic
and mechanical designs.

4. PWM Control: It utilizes pulse width modulation (PWM) signals to control its position,
speed, and direction, allowing for smooth and precise movement.

5. Standard Size: The MG995 servo motor follows a standard size and mounting pattern,
making it compatible with a wide range of servo mounts, brackets, and accessories
available in the market.

Applications of the MG995 servo motor span across robotics, RC vehicles, animatronics, camera
gimbals, and various other mechanical systems where precise motion control is required.

Fig 2.2 Servo motor and its specifications
Source: https://cf.shopee.ph/file/9190ea1a8c863ed944ee838e15f40c64

3. Pan – Tilt Servo Bracket Set

The Pan-Tilt Servo Bracket Set is a mounting solution designed for positioning and controlling
devices such as cameras, sensors, or other components in two axes - pan (horizontal rotation) and
tilt (vertical rotation).

Key features of the Pan-Tilt Servo Bracket Set include:

1. Dual-Axis Control: The bracket set allows for simultaneous movement in both the
horizontal and vertical directions, providing a wide range of motion for the mounted
device.

2. Compatibility: It is compatible with standard servo motors, making it easy to integrate
with various robotic or DIY projects.

3. Adjustability: The bracket set typically offers adjustable mounting holes and angles,
allowing for precise positioning and alignment of the mounted device.

4. Sturdy Construction: Constructed from durable materials such as aluminium or plastic,
the bracket set provides stability and support for the mounted device.

5. Easy Installation: The bracket set is designed for easy assembly and installation, often
requiring minimal tools and hardware.

In summary, the Pan-Tilt Servo Bracket Set offers a convenient and flexible solution for
achieving controlled movement in two axes, enabling users to customize and optimize the
orientation of devices in various applications.

Fig 2.3 Pan-Tilt Servo Mount Bracket Set
Source : https://pan-tilt-servo-bracket-set-mg995.jpg

4. PCA – 9685

The PCA9685 is a popular component used for controlling multiple PWM (Pulse Width
Modulation) signals simultaneously. It serves as a PWM driver or controller, capable of driving
up to 16 independent PWM channels. Here's a brief overview:

1. PWM Generation: The PCA9685 generates PWM signals with high precision and
stability, making it ideal for controlling LEDs, servo motors, and other devices that
require variable control over their intensity or position.

2. I2C Interface: It communicates with the host microcontroller or computer through the
I2C (Inter-Integrated Circuit) protocol, allowing for easy integration into various projects.
This interface enables multiple PCA9685 modules to be daisy-chained together for
controlling a larger number of PWM channels.

3. 16-Channel Output: The PCA9685 features 16 output channels, each capable of driving a
PWM signal independently. This allows for controlling a wide range of devices or
components with individualized control.

4. Frequency and Duty Cycle Control: Users can adjust the frequency and duty cycle of the
PWM signals generated by the PCA9685, providing flexibility in meeting the
requirements of different applications.

5. Wide Voltage Range: It operates over a wide voltage range, typically from 2.3V to 5.5V,
making it compatible with a variety of microcontroller and electronic systems.

The PCA9685 is commonly used in robotics, automation, lighting control, and other projects
where precise and synchronized PWM control is required. Its ease of use, versatility, and
reliability make it a popular choice among hobbyists, makers, and professional engineers alike.

Fig 2.4 PCA – 9685 Chip


Source : https://makershop.ie/image/cache/catalog/p/00099/002-1024x768.png
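Driving a servo through the PCA9685 comes down to mapping a target angle to a pulse width, and then to one of the chip's 4096 duty steps at the chosen PWM frequency. The sketch below is an illustrative calculation, not code from the report; the 500-2500 microsecond pulse limits are typical hobby-servo assumptions and may need tuning for a motor such as the MG995.

```python
def angle_to_duty(angle, freq_hz=50, min_us=500, max_us=2500, resolution=4096):
    """Map a servo angle (0-180 degrees) to a 12-bit PCA9685 duty value.

    The pulse-width limits are assumed typical values, not chip or motor
    specifications; tune them for the servo actually in use.
    """
    if not 0 <= angle <= 180:
        raise ValueError("angle must be within 0-180 degrees")
    period_us = 1_000_000 / freq_hz                      # 20,000 us at 50 Hz
    pulse_us = min_us + (max_us - min_us) * angle / 180  # linear angle-to-pulse map
    return round(pulse_us / period_us * (resolution - 1))
```

Under these assumptions, at 50 Hz the angles 0, 90, and 180 degrees map to duty values of roughly 102, 307, and 512 out of 4095.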

5. Logitech Webcam

Logitech webcams are renowned for their quality, reliability, and versatility, offering a range of
features suitable for various purposes, from video conferencing to content creation. Here's a brief
overview:

1. High-Quality Imaging: Logitech webcams are known for their high-resolution imaging
capabilities, providing crisp and clear video output. Many models support HD (720p) or
Full HD (1080p) resolution, ensuring sharp visuals for both video calls and recordings.

2. Plug-and-Play Convenience: Logitech webcams are designed for easy setup and use,
typically requiring no additional drivers or software installation. They connect to
computers via USB and are compatible with a wide range of operating systems, including
Windows, macOS, and Linux.

3. Adjustable Mounting: Many Logitech webcam models feature adjustable mounting
mechanisms, allowing users to securely attach the webcam to various surfaces such as
monitors, laptops, or tripods. This flexibility enables users to find the optimal angle and
position for their video setup.

Fig 2.5 Logitech Webcam(1080P)
Source: https://www.bhphotovideo.com/images/Logitech 832460.jpg

6. Python3 & OpenCV

Python 3 is a powerful, high-level programming language known for its simplicity, readability,
and versatility. It offers a wide range of features and libraries, making it suitable for various
applications, including web development, data analysis, machine learning, artificial intelligence,
and automation. Python 3 emphasizes code readability and encourages a clean and concise
coding style through the use of indentation. It supports object-oriented, procedural, and
functional programming paradigms, providing flexibility for developers with different coding
preferences. Python 3's extensive standard library includes modules for tasks such as file I/O,
networking, and data manipulation, reducing the need for external dependencies. Its
interpreter-based nature allows for interactive and rapid development, facilitating prototyping and
experimentation. Python's vibrant community and extensive documentation make it easy for
beginners to get started and for experienced developers to find resources and support. Python 3 is
cross-platform, running seamlessly on major operating systems like Windows, macOS, and
Linux, ensuring portability and compatibility across different environments. With its robust
ecosystem and user-friendly syntax, Python 3 continues to be one of the most popular
programming languages for both beginners and experienced developers alike.

OpenCV, short for Open Source Computer Vision Library, is an open-source computer vision
and machine learning software library. It provides a wide range of tools and algorithms for
real-time computer vision applications, including image processing, object detection, facial
recognition, and more. OpenCV supports multiple programming languages, including C++,
Python, and Java, making it accessible to a broad community of developers. It offers
high-performance implementations of common computer vision tasks, optimized for various
hardware platforms, including CPUs, GPUs, and specialized processors. OpenCV's modular structure and
extensive documentation make it easy to use and integrate into projects of all sizes, from
hobbyist experiments to large-scale commercial applications. Whether you're a beginner
exploring the basics of computer vision or an expert developing advanced algorithms, OpenCV
provides the tools and resources needed to bring your vision-based projects to life.

Fig 2.6 Python3 and OpenCV logo


Source : https://opencv-python-logo.png

7. Face Recognition Library (1.3.0)

The face-recognition library version 1.3.0 is a popular Python package for face detection,
recognition, and manipulation tasks. Here's a brief overview:

1. Facial Detection: The library utilizes advanced algorithms to detect faces within images
or video streams accurately. It can identify faces even in complex scenes with multiple
individuals.

2. Facial Recognition: With its facial recognition capabilities, the library can match detected
faces to known faces stored in a database or provided as reference images. This feature is
useful for applications such as biometric authentication and access control.

3. Face Landmarks: face-recognition can detect and locate facial landmarks such as eyes,
nose, mouth, and chin. This information is valuable for tasks like facial expression
analysis and augmented reality applications.

4. Cross-Platform Compatibility: The library is compatible with various operating systems,
including Windows, macOS, and Linux. It offers flexibility for developers working on
different platforms.

5. Ease of Use: face-recognition provides a simple and intuitive API, making it easy for
developers to integrate facial recognition capabilities into their Python applications. It
abstracts away complex algorithms, allowing users to focus on building their
applications.

6. Performance: The library is optimized for performance, leveraging efficient algorithms
and data structures to achieve fast and accurate face detection and recognition. It can
process images and video streams in real-time, suitable for applications requiring low
latency.

Overall, the face-recognition library version 1.3.0 is a valuable tool for developers seeking to
incorporate facial detection and recognition functionality into their Python projects. Its ease of
use, accuracy, and performance make it a preferred choice for a wide range of applications, from
security systems to entertainment and beyond.

Fig 2.7 Face Detection & Recognition Feature Extraction


Source: https://learnopencv.com/wp-content/uploads/2023/05/deepface.png

8. Adafruit-CircuitPython-ServoKit

The Adafruit CircuitPython ServoKit library is a Python library specifically designed for
controlling servo motors using Adafruit's Servo Kit hardware. Here's a concise overview:

1. Servo Motor Control: The library enables easy control of multiple servo motors
simultaneously, offering precise positioning and movement control.

2. Adafruit Servo Kit Compatibility: It is tailored to work seamlessly with Adafruit's Servo
Kit hardware, providing an intuitive interface for interacting with servo motors connected
to the kit.

3. Pythonic API: The library offers a Pythonic programming interface, simplifying servo
motor control for developers familiar with the Python programming language.

4. PWM Control: Under the hood, the library utilizes Pulse Width Modulation (PWM)
signals to control the position and speed of servo motors, allowing for smooth and
accurate movement.

5. Ease of Use: Adafruit prioritizes user experience, ensuring that the ServoKit library is
easy to install, configure, and use, even for beginners or those new to electronics and
robotics.

6. Community and Documentation: Adafruit provides comprehensive documentation and
examples for the ServoKit library, along with an active community forum where users
can seek assistance, share projects, and collaborate with fellow enthusiasts.

In summary, the Adafruit-CircuitPython-ServoKit library offers a user-friendly and efficient
solution for controlling servo motors with Adafruit's Servo Kit hardware, empowering makers,
hobbyists, and educators to incorporate precise motion control into their projects with ease.

Fig 2.8 Circuit-python Logo


Source: https://blog.makerdiary.com/introducing-circuitpython/cover.png

Chapter 3: Modeling and Design

This chapter contains diagrams that describe the planning and modeling of the project. It
presents the algorithm design along with a flowchart, data-flow diagram, and timeline chart,
which give information about the flow of the project's working.

3.1 Relations to database:

Currently, this system runs in a local environment, so it stores data on local storage (in the
system's memory). A database will be required when the system runs on an online server.

Fig 3.1 E-R diagram

3.2 UML Diagrams:

This section contains various diagrams used for the design of this system as given below:

Use Case diagram:

A use case diagram illustrates a system's dynamic behavior. To encapsulate the system's
functionality, it includes use cases, actors, and their interactions. The actors in the given
diagram (Fig 3.2) represent the users; in this project, the actor is the User. The diagram is
a graphical representation of how users will interact with the system and which
functionalities of the system they are authorized to use.

Fig 3.2 Use Case Diagram

Flowchart Of Model Training:

The following diagram depicts the steps in the model-training process, from the beginning until training is complete.

Fig 3.3 Flowchart of Model Training

Design Of Face-Recognition Algorithm:

The following diagram depicts the design and working of the face-recognition algorithm. First, the model tries to detect a face in the current frame (input through the camera). Once it detects a face, the model computes its encoding and compares it with all stored encodings. The model calculates the Euclidean distance to each stored encoding and finds the minimum. Based on that minimum distance, the model assigns a label to the detected face.

Fig 3.4 Design of Face-recognition Algorithm
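The matching step described above can be sketched as follows. The face_recognition library produces 128-dimensional encodings, but the logic works for any numeric vectors; the 0.6 tolerance below is that library's commonly used default, adopted here as an assumption.

```python
import math

def match_face(encoding, known_encodings, labels, tolerance=0.6):
    """Return the label of the nearest stored encoding by Euclidean distance,
    or 'Unknown' if even the nearest one is beyond the tolerance."""
    best_label, best_dist = "Unknown", float("inf")
    for known, label in zip(known_encodings, labels):
        dist = math.dist(encoding, known)  # Euclidean distance
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= tolerance else "Unknown"
```

A face whose nearest stored encoding lies within the tolerance inherits that encoding's label; anything farther away is reported as "Unknown".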

Flowchart of System:

The following diagram depicts the flowchart of the Falcon-Eye object tracking system. First, the system is set to track one person by taking that person's name; it then checks whether the model has been trained on this person. If yes, the system initializes the face-recognition module. Only if the recognized person's label matches the input label does the system initialize the servo tracking module; otherwise, the system keeps finding and recognizing faces.

Fig 3.5 Flowchart of Falcon-Eye object tracking system
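The decision logic in the flowchart can be sketched as one cycle of a control loop. Here `recognize` and `track` are hypothetical stand-ins for the face-recognition and servo modules, not the project's actual function names.

```python
def run_tracking_cycle(target, trained_labels, recognize, track):
    """One decision cycle: track only when the recognized label
    matches the requested target person."""
    if target not in trained_labels:
        return "model not trained for this person"
    label = recognize()          # label of the face in the current frame
    if label == target:
        track()                  # move the pan-tilt servos toward the face
        return "tracking"
    return "searching"           # keep finding and recognizing faces
```

In the real system this cycle runs once per camera frame, so an untracked person in the frame simply leaves the servos idle.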

Design Of Pan-Tilt Movement:

The following diagram depicts the pan and tilt movement of the camera. Whenever a face is detected, the model calculates the distance between the center point of the frame and the center of the rectangle drawn around the detected face. Based on that distance, the model decides whether to move the servos left, right, up, or down.

Fig 3.6 Pan-Tilt Tracking Algorithm
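The direction decision can be sketched from the offset between the two centre points. The dead-band value below is an illustrative assumption to avoid servo jitter; note that image y-coordinates grow downward, so a positive vertical offset means the face is below the frame centre.

```python
def pan_tilt_direction(frame_center, face_center, deadband=20):
    """Decide pan/tilt direction from the pixel offset between the frame
    centre and the centre of the detected face rectangle.
    Offsets smaller than the dead-band (in pixels) are ignored."""
    dx = face_center[0] - frame_center[0]
    dy = face_center[1] - frame_center[1]
    pan = "right" if dx > deadband else "left" if dx < -deadband else "hold"
    tilt = "down" if dy > deadband else "up" if dy < -deadband else "hold"
    return pan, tilt
```

For a 640x480 frame the frame centre is (320, 240); a face centred at (400, 240) produces a "right" pan command and no tilt.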

Data Flow Diagram:

The classic visual representation of how information moves through a system is known as a data flow diagram (DFD). The data flow diagram shown in the figure below represents the flow of data in the system.

Fig 3.7 DFD-0 Level

Chapter 4: Implementation

This chapter demonstrates the implementation and results of the project. It also describes the modules included in the project and, lastly, contains snapshots of the working system.

4.1 System Configuration Module

This module describes how the hardware components are configured with each other and with the system: how the Jetson Nano board is connected to the PCA9685 in order to control the servos, and how the servos are mounted to provide pan and tilt movement.

Fig 4.1 System Components

The figure above depicts all the hardware components required for system configuration: a pair of standard servo motors, a PCA9685 driver board, a pan-tilt servo bracket set, the Jetson Nano board, a camera, and connectors such as jumper wires.

Fig 4.2 Jetson-nano Configuration

The figure above depicts the Jetson Nano setup. The Jetson Nano SDK image is flashed to a 64 GB SD card, which is then inserted into the Jetson Nano to boot it, along with a 5 V barrel-jack power supply, an HDMI cable to attach a monitor, and USB/Type-C connectors for the mouse, keyboard, and camera.

Fig 4.3 Pin-configuration between pca9685 and jetson nano gpio pin

Pin configuration

Jetson Nano GPIO pin    PCA9685 pin
Pin-2 (5 V)             V+
Pin-1 (3.3 V)           VCC
Pin-3 (SDA)             SDA
Pin-5 (SCL)             SCL
Pin-6 (GND)             GND

The figure and table above depict the pin configuration between the Jetson Nano GPIO header and the PCA9685.

Fig 4.4 Servo Motor configuration with Pca9685

The figure above depicts the pair of servo motors and the PCA9685. Connect the pan servo to PCA9685 channel 1 (of the 16 servo output channels) and the tilt servo to channel 3. Connect each servo's signal pin to the PWM pin, its ground pin to GND, and its power pin to V+; in a nutshell, connect all servo and PCA9685 output pins according to the color scheme.

Fig 4.5 attachment of all components

The figure above shows the system after complete configuration. On the left, the reader can see how the servos and the pan-tilt mounting bracket are connected to each other; on the right, the image shows the complete system configuration.

4.2 Prepare-Dataset Module

This module describes how to prepare the dataset: how to capture images through the camera and how to organize them in a file structure.

Fig 4.6 Preparing Dataset


The figure above depicts how the camera captures and stores images. The system captures only those images in which it can detect a face. The number of images depends on a threshold value set by the developer.
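The capture loop described above can be sketched as follows. Here `frames` stands in for the camera stream and `has_face` for the Haar-cascade face detector; both names are illustrative placeholders, not the project's actual identifiers.

```python
def collect_face_images(frames, has_face, threshold=50):
    """Keep only frames in which a face is detected, stopping once the
    developer-set threshold of stored images is reached."""
    kept = []
    for frame in frames:
        if len(kept) >= threshold:
            break                    # dataset for this person is complete
        if has_face(frame):
            kept.append(frame)       # in the real system: write to disk
    return kept
```

Frames without a detectable face are simply skipped, so the stored dataset contains only usable training images.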

Fig 4.7 File Structure of Dataset


The figure above depicts the file structure of the dataset. The reader can see how subdirectories are created to organize the data.
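A dataset organized as one subdirectory per person (dataset/&lt;label&gt;/&lt;images&gt;, as in Fig 4.7) can be enumerated with a short sketch like the following; the layout assumption matches the figure, while the function name is illustrative.

```python
import os

def list_dataset_labels(dataset_dir):
    """Collect person labels from a dataset laid out as one
    subdirectory per person: dataset/<label>/<image files>."""
    return sorted(
        entry for entry in os.listdir(dataset_dir)
        if os.path.isdir(os.path.join(dataset_dir, entry))
    )
```

Stray files in the dataset root are ignored; only subdirectory names become labels.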

4.3 Face Detection & Tracking Module

This module focuses primarily on the tracking algorithm, its working, and its testing. It describes how the system detects a face and tracks it continuously, and discusses some edge cases.

Fig 4.8 Face detection & Tracking

The figure above depicts the face detection and tracking module. It calculates the parameters shown in the image, and based on these parameters the system decides in which direction, and by how many degrees, the pan and tilt servos should rotate.
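One common way to turn the measured pixel offset into a servo command is a small proportional step clamped to the 0–180 degree range of a standard servo. The gain value below is an illustrative assumption, not the project's tuned constant.

```python
def update_servo_angle(current, error, gain=0.02, lo=0, hi=180):
    """Nudge a servo angle toward the target by a fraction of the pixel
    error, clamped to the servo's physical 0-180 degree range."""
    return max(lo, min(hi, current + gain * error))
```

A 100-pixel offset therefore moves the servo by 2 degrees per frame, which keeps the motion smooth; the clamp prevents commands beyond the servo's travel.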

4.4 Face Recognition & Tracking Module

This module describes the face_recognition algorithm, the training module, and the tracking module. It also describes how the model is trained on the dataset, how it then recognizes faces, how the system tracks them, and some edge cases.

Fig 4.9 Model Training

The figures above depict the file structure of the dataset and how the model processes it, calculates encodings, and creates a mapping between labels and encodings. The algorithm has been discussed in Chapter 3.
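The label-to-encodings mapping built during training can be sketched as follows. Here `encode` stands in for face_recognition's per-image encoding step; in this sketch it can be any function that returns a feature vector for an image.

```python
def build_encoding_index(images_by_label, encode):
    """Map each person's label to the encodings computed from
    that person's images (the 'training' step of this project)."""
    index = {}
    for label, images in images_by_label.items():
        index[label] = [encode(img) for img in images]
    return index
```

At recognition time, a new face's encoding is compared against every entry in this index to find the nearest labeled encoding.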

Fig 4.10 Face Recognition & Tracking.

The figures above depict how the model performs face recognition and tracks a particular person among a group of other people. The algorithm has been discussed in Chapter 3.

Chapter 5. Conclusion and Future Scope

This chapter contains a conclusion and the future scope of the project.

5.1 Conclusion
The Falcon-Eye object tracking system is a complete package for face detection, face recognition, and tracking. The project fulfills all the objectives described in Chapter 1. It runs smoothly in a real environment with minimal latency, detecting, recognizing, and tracking faces. The project is built using the face_recognition library, which reports an accuracy of 99.38% on the Labeled Faces in the Wild benchmark. However, the accuracy of the model also depends on many factors, such as the size and quality of the training data and the quality of the camera.

5.2 Future Scope


The Falcon-Eye object tracking system is developed using OpenCV and CircuitPython. The whole project could be redeveloped using TensorFlow to make computation and processing faster. As of now, the model is trained on a small dataset; in the future, the training dataset can be enlarged to increase the accuracy of the face-recognition model. Apart from this, the system is currently only able to track between 0 and 180 degrees; this could be extended using continuous-rotation servo motors (0 to 360 degrees) along with some changes to the control system. The Jetson Nano comes with 40 GPIO pins, of which this project currently uses only five, so more hardware can be connected in the future to add functionality; for example, a developer could connect a JetBot or rover and control it according to the object's movement.

