
AUTONOMOUS BEE WITH 3D IMAGE CAPTURE
THROUGH REMOTE SENSING TECHNOLOGY

19MT8901/ PROJECT PHASE II REPORT

Submitted by

NITHIN AKASH. M B (REG. NO. 20109828)

SWETHA. A (REG. NO. 20109069)

RAJAPPA. V (REG. NO. 20109830)

NAVEEN KUMAR. P (REG. NO. 20109827)

in partial fulfilment for the award of the degree

of
BACHELOR OF ENGINEERING

in

MECHATRONICS ENGINEERING

HINDUSTHAN COLLEGE OF ENGINEERING AND TECHNOLOGY


Approved by AICTE, New Delhi, Accredited by NBA and ‘A++’ Grade by NAAC
(An Autonomous Institution, Affiliated to Anna University, Chennai)
COIMBATORE - 641032

APRIL - 2024
HINDUSTHAN COLLEGE OF ENGINEERING AND TECHNOLOGY
Approved by AICTE, New Delhi, Accredited by NBA and ‘A++’ Grade by NAAC
(An Autonomous Institution, Affiliated to Anna University, Chennai)
COIMBATORE – 641032

BONAFIDE CERTIFICATE

Certified that this project report “AUTONOMOUS BEE WITH 3D

IMAGE CAPTURE THROUGH REMOTE SENSING TECHNOLOGY” is the

bonafide work of “Nithin Akash M B (20109828), Swetha A (20109069), Rajappa V

(20109830), Naveen Kumar P (20109827)”, who carried out the project work

under my supervision.

SUPERVISOR
Mr. K. KESAVARAJ, M.E.
ASSISTANT PROFESSOR
DEPARTMENT OF MECHATRONICS ENGINEERING,
HINDUSTHAN COLLEGE OF ENGINEERING AND TECHNOLOGY,
COIMBATORE - 641032.

HEAD OF THE DEPARTMENT
Dr. P.T. SARAVANAKUMAR, M.B.A., Ph.D.
PROFESSOR & HEAD
DEPARTMENT OF MECHATRONICS ENGINEERING,
HINDUSTHAN COLLEGE OF ENGINEERING AND TECHNOLOGY,
COIMBATORE - 641032.

Submitted for the Anna University Examination Project Viva Voce held on _____________

INTERNAL EXAMINER EXTERNAL EXAMINER

ACKNOWLEDGEMENT

We express our sincere thanks to Almighty God, the guiding light of our life for

giving us the potential and courage to complete this project successfully.

We extend our sincere thanks to the Managing Trustee of HINDUSTHAN

EDUCATIONAL AND CHARITABLE TRUST, Mrs. SARASUWATHI

KHANNAIYAN, for providing the essential infrastructure.

We would like to express our gratitude to Dr. J. JAYA, M.Tech., Ph.D.,

Principal, for the encouragement and the facilities provided to complete this project

successfully and for strengthening our ray of hope.

We are highly indebted to Dr. P.T. SARAVANAKUMAR, M.B.A., Ph.D.,

Professor and Head, for the suggestions that have been valuable for our project's

development and improvement.

We would like to extend grateful and special thanks to our project guide,

Mr. K. KESAVARAJ, M.E., Assistant Professor, for his guidance and constructive

criticism.

We wholeheartedly thank our faculty members and our parents, who spend their

today for our tomorrow. We thank our friends and all the good-hearted people who

helped us complete our project.

ABSTRACT

The main objective of the project is to create a miniature spy bee equipped with image
capture and 3D conversion capabilities. This multidisciplinary project combines
various hardware components, such as a miniature camera module, microcontroller-
based navigation sensors, communication modules, propulsion systems, and a power
supply, all integrated into a robust frame. On the software side, the project entails the
development of image capture and processing algorithms, control and navigation
algorithms, 3D reconstruction algorithms, and a communication protocol for data
transmission. Additionally, a user interface for monitoring and control can be
included. The working idea involves the spy bee autonomously capturing images,
processing them, and transmitting the data, with the ultimate goal of creating 3D
reconstructions from the captured imagery.

Keywords: 2D image capture, 3D conversion, face detection, image processing, day and night time, Lidar, ROS.

TABLE OF CONTENTS

CHAPTER NO   TITLE

             ABSTRACT
             LIST OF TABLES
             LIST OF FIGURES
1            INTRODUCTION
             1.1 Project Definition
             1.2 Project Objective
             1.3 Working Principle
             1.4 Existing Method
2            LITERATURE REVIEW
3            METHODOLOGY
             3.1 Scope
             3.2 Process of Product Development
             3.3 Construction
             3.4 Operation
4            DESIGN
             4.1 Different Views of Bee and Its Dimensions
5            SELECTION AND SPECIFICATION OF MATERIALS
6            COMPONENT DESCRIPTION
7            BLOCK DIAGRAM
             7.1 Working of Lidar
8            PROTOTYPING
             8.1 3D Printing
9            ADVANTAGES, APPLICATIONS & FUTURE SCOPE
             9.1 Advantages
             9.2 Applications
             9.3 Future Scope
10           CONCLUSION
11           RESULT AND DISCUSSION
             REFERENCES
             APPENDIX 1
             APPENDIX 2

LIST OF TABLES

S.NO   TITLE
1      Material Selection
LIST OF FIGURES

S.NO   FIG NO      TITLE
1      Fig. 3.1    Methodology to Develop Spy Bee
2      Fig. 4.1    Top View
3      Fig. 4.2    Isometric View
4      Fig. 4.3    Dimensions of Spy Bee
5      Fig. 6.1.1  LD19 Lidar
6      Fig. 6.2.1  N50 DC Micro Motor
7      Fig. 6.3.1  LoRa-E5 Mini Transmitter and Receiver
8      Fig. 6.4.1  Camera Module
9      Fig. 6.5.1  Arduino Uno
10     Fig. 7.1    Working of Lidar
CHAPTER 1
INTRODUCTION

In the ever-evolving landscape of technological advancements, the convergence of


robotics and surveillance has paved the way for innovative solutions in various fields.
One such pioneering venture is the creation of a miniature spy bee, a discreet and agile
device designed for autonomous image capture and 3D reconstruction. This
multidisciplinary project seamlessly integrates cutting-edge hardware components,
including a miniature camera module, microcontroller-based navigation sensors,
communication modules, propulsion systems, and a robust frame, to achieve a
compact yet highly functional spy bee.
The hardware aspects of this project involve a careful fusion of disparate technologies,
each serving a crucial role in the bee's operation. The miniature camera module acts as
the bee's eyes, capturing images with precision and detail. Microcontroller-based navigation
sensors facilitate autonomous movement, ensuring the spy bee navigates its
surroundings adeptly. Communication modules enable seamless data transmission,
allowing the bee to relay captured imagery to an external source. Propulsion systems
drive the bee through its environment, while a robust frame ensures durability and
resilience in various conditions. A sophisticated power supply system sustains the spy
bee's operations, emphasizing longevity and autonomy.
Complementing the hardware components, the software architecture of this project is
equally ambitious. The development encompasses intricate image capture and
processing algorithms, allowing the spy bee to autonomously analyze its
surroundings. Control and navigation algorithms enable precise movements, ensuring

the bee's ability to navigate complex environments. Advanced 3D reconstruction
algorithms are employed to convert captured images into detailed three-dimensional
models. A specialized communication protocol facilitates efficient data transmission,
forming a crucial link between the spy bee and its operator. Additionally, the project
may incorporate a user interface for monitoring and control, offering users a seamless
means of interacting with and overseeing the spy bee's activities.
The envisioned functionality of the spy bee is to autonomously capture images,
process them in real-time, and transmit the data, ultimately culminating in the creation
of detailed 3D reconstructions. Such a capability holds immense potential for
applications ranging from surveillance and reconnaissance to environmental
monitoring and disaster response.

1.1 PROJECT DEFINITION


The project revolves around the conception and realization of a miniature spy bee, a
compact robotic entity designed to execute discreet image capture and 3D
reconstruction tasks. This involves the integration of various hardware components
and the development of sophisticated software algorithms, collectively culminating in
a versatile and autonomous spy bee.

1.2 PROJECT OBJECTIVE


The primary objective of this multidisciplinary project is to engineer a miniature spy
bee capable of autonomously navigating its surroundings, capturing high-resolution
images, processing them in real-time, and transmitting the data. The ultimate goal is to
leverage these capabilities for 3D reconstruction, thereby providing a novel and
efficient solution for surveillance and reconnaissance applications.

1.3 WORKING PRINCIPLE
The spy bee operates on a multifaceted working principle that combines hardware and
software intricacies. On the hardware front, the integration of a miniature camera
module, microcontroller-based navigation sensors, communication modules,
propulsion systems, and a reliable power supply forms the backbone. Simultaneously,
sophisticated software algorithms govern image capture, processing, control,
navigation, and communication, orchestrating the spy bee's autonomous functionality.

1.4 EXISTING METHOD


Autonomous robots have been used in a variety of fields, from industry and
hospitality to healthcare environments, with light detection and ranging (Lidar)
sensors as their main components. The application of Lidar in robotic navigation
systems has a huge impact on predicting environmental conditions in real time,
because Lidar has very high accuracy, particularly for short-range measurements in
room mapping. The Robot Operating System (ROS) is used to perform mapping with
Lidar sensors, considering that in recent years the use of ROS has become
increasingly widespread in robot development.
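
As a concrete illustration of this Lidar-plus-ROS workflow, the minimal sketch below subscribes to a laser-scan topic and converts each beam into a 2D Cartesian point, the basic step underlying room mapping. It assumes a ROS 1 installation with rospy and a Lidar driver publishing sensor_msgs/LaserScan on /scan; the topic and node names are illustrative assumptions, not part of this project's codebase.

import math

import rospy
from sensor_msgs.msg import LaserScan

def scan_callback(scan):
    # Convert each valid beam (range, bearing) into an (x, y) point.
    points = []
    for i, r in enumerate(scan.ranges):
        if scan.range_min < r < scan.range_max:  # discard invalid returns
            theta = scan.angle_min + i * scan.angle_increment
            points.append((r * math.cos(theta), r * math.sin(theta)))
    rospy.loginfo("valid points in this sweep: %d", len(points))

rospy.init_node("lidar_listener")
rospy.Subscriber("/scan", LaserScan, scan_callback)
rospy.spin()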

CHAPTER 2
LITERATURE REVIEW

The integration of advanced technologies in the field of robotics has led to the
development of innovative solutions for various applications, ranging from healthcare
to surveillance. In recent years, there has been a growing interest in the creation of
miniature robotic agents capable of navigating complex environments discreetly. This
literature survey explores the multidisciplinary realm of a unique project involving the
design and implementation of a miniature spy bee, equipped with image capture and
3D conversion capabilities.

 “3D people surveillance on range data sequences of a rotating Lidar” by
Csaba Benedek: A 3D surveillance framework for detecting and tracking
multiple moving pedestrians in point clouds obtained by a rotating multi-beam
(RMB) Lidar system, focusing on the specific challenges raised by the selected
range sensor. The paper first proposes an efficient foreground segmentation
model, which uses a spatial foreground filter to decrease artifacts of angle
quantization and background motion.

 “Surveillance system based on Flash Lidar” by Bharat Lohani, Siju Chacko,
Suddhasheel Ghosh and Sandeep Sasidharan: This paper presents a system
design along with a set of algorithms for surveillance using a Flash Lidar
camera. However, owing to the absence of a Flash Lidar camera, the dataset
was emulated using a terrestrial laser scanner. The paper successfully
demonstrates the use of the proposed algorithms for detecting and
reconstructing an intruder, as well as plotting its motion.

 “Imaging Lidar system for night vision and surveillance application” by
P. Sudhakar, K. Anitha Sheela and M. Satyanarayana: The Night Vision
Imaging Lidar System (NVIL) presented in this paper showcases a robust and
innovative solution for imaging targets with exceptional recognition capability
in both day and night conditions. The constructional features of the NVIL,
including a compact Nd:YAG pulsed laser system and a unique
transmitter-receiver configuration, contribute to its effectiveness in achieving at
least 90% recognition capability up to a range of 2 km during nighttime
operations and up to 4 km in daylight.

 “A dense background representation method for traffic surveillance based


on roadside Lidar” by Yingji Xia, Zhe Sun, Andre Tok and Stephen Ritchie:
Traffic surveillance systems can provide rich information on real-time traffic
conditions. Video-based systems suffer from illumination limitations in
continuous outdoor surveillance applications due to their sensitivity to varying
ambient lighting conditions. Hence, a Dense Background Representation
Method (DBRM) for traffic surveillance based on stationary roadside Lidar was
proposed in this paper.

 “Mapping of Quaternary Geomorphology in Southern Dalarna: a Lidar


Study” by Carl Malmborg: This study demonstrates the effectiveness of Lidar
in mapping glacial landforms, offering a detailed portrayal of the southern
Dalarna region's glacial legacy. The results contribute to the broader
understanding of the Quaternary period and provide valuable data for future
research and comparisons with earlier works, advancing our comprehension of
landscape evolution in the wake of glacial dynamics.

 “How UAV Lidar Imaging Can Locate and Map Minefield Features” by
Katherine James, Gert Riemersma and Pedro Pacheco: This work demonstrates
how valuable Lidar data can be in providing evidence that is not obtainable by
other means, offering advantages over RGB and TIR imagery when looking for
minefield features hidden by vegetation.

 “Methodology of real-time 3D point cloud mapping with UAV Lidar” by
Levent Candan and Elif Kaçar: In this study, a real-time Lidar measurement
system was established. The designed system shortens the processes of
target-based flight planning and post-flight data processing. With this real-time
Lidar mapping system, developed in the laboratory, data is obtained instantly.
The simulation system produces a 3D point cloud, and the data is stored in a
database for later analysis. The X3D scene used in the study to produce 3D
point data provides an infrastructure for AI- and ML-based systems for
identifying urban objects in systems containing big data such as Lidar.

 “Real-Time Escape Route Generation in Low Visibility Environments


using Reinforcement Learning” by Hari Srikanth: This work considers the
relevant search-and-rescue use case of autonomous robot systems traversing a
burning building. In addition to discussing the efficacy of current low-visibility
perception systems, it proposes a novel perception system that makes use of a
multi-agent mapping array.

 “RP Lidar-Based Mapping in Development of a Health Service Assisting
Robot in Covid-19 Pandemic” by Zulfan Jauhar Maknuny, Sami Fauzan
Ramadhan, Arjon Turnip and Erwin Sitompul: A health-service assisting robot
with a ROS-based automated navigation system equipped with Lidar sensors
successfully performed mapping and ran its autonomous navigation system.
The highest mapping accuracy reached 95.983% at a robot speed of 0.2 m/s.
However, the best mapping speed for a healthcare setup is 0.5 m/s, with an
accuracy of 95.939%, because it yields nearly the same accuracy as at 0.2 m/s
while mapping faster.

 “Linear Feature-Based Image/Lidar Integration for a Stockpile
Monitoring and Reporting Technology” by Seyyed Meghdad Hasheminasab,
Tian Zhou and Ayman Habib: In this study, a new image/Lidar integration
using linear features is proposed for deriving accurate registration and system
calibration parameters for a smart stockpile monitoring and reporting system.
The proposed strategy is implemented in three main steps: initial registration of
Lidar scans using planar features, calibration/registration parameter refinement
using conjugate image/Lidar lines, and point cloud colorization.

CHAPTER 3
METHODOLOGY

Fig. 3.1 Methodology to Develop Spy Bee

3.1 SCOPE
The scope of the miniature spy bee project encompasses the development of a highly
specialized robotic system with autonomous capabilities for discreet image capture
and 3D reconstruction. This extends to applications in surveillance, reconnaissance,
and environmental monitoring, where the compact size and agility of the spy bee offer
a unique advantage. The project's scope also includes the integration of advanced
hardware components and the implementation of sophisticated software algorithms to
achieve optimal performance.

3.2 PROCESS OF PRODUCT DEVELOPMENT


The development process follows a systematic and iterative approach, involving the
integration of hardware and software components.
 Conceptualization: Define the project objectives, requirements, and constraints.
Develop a conceptual design that outlines the key features and functionalities of
the spy bee.
 Hardware Integration: Select, integrate, and optimize the hardware components,
ensuring compatibility, miniaturization, and power efficiency. Rigorously test
the integrated hardware to address any issues.
 Software Development: Implement image capture and processing algorithms,
control and navigation algorithms, 3D reconstruction algorithms, and a
communication protocol. Develop a user interface for monitoring and control.
Test and refine the software components iteratively.
 Testing and Optimization: Conduct comprehensive testing of the integrated
system, addressing any hardware or software issues. Optimize the system for
performance, reliability, and energy efficiency.

 User Interface Implementation: Design and implement a user interface for real-
time monitoring and control. Ensure a user-friendly experience for operators
overseeing the spy bee's activities.
 Autonomous Operation Validation: Validate the spy bee's autonomous
operation through simulated and real-world scenarios. Assess its ability to
capture, process, and transmit data while navigating environments, with a focus
on 3D reconstruction capabilities.
 Iterative Refinement: Gather feedback from testing and real-world operation to
iteratively refine and enhance the spy bee's performance. Address any identified
challenges or limitations.

3.3 CONSTRUCTION
The construction phase involves physically assembling the spy bee with the integrated
hardware components. This includes the assembly of the frame, incorporation of the
miniature camera module, installation of navigation sensors, integration of
communication modules, attachment of propulsion systems, and implementation of
the power supply. Construction also entails ensuring the structural integrity and
durability of the spy bee to withstand diverse environmental conditions.

3.4 OPERATION
The operational phase involves deploying the miniature spy bee for its intended tasks.
This includes initiating autonomous missions, monitoring its activities through the
user interface, and assessing its performance in real-world scenarios. The spy bee
should demonstrate the ability to navigate environments, capture images, process data,
and transmit information for 3D reconstruction. Continuous operation allows for the
validation of the spy bee's effectiveness and the identification of areas for further
improvement and refinement.

CHAPTER 4
DESIGN
4.1 DIFFERENT VIEWS OF BEE AND ITS DIMENSIONS

Fig. 4.1 Top View
Fig. 4.2 Isometric View

Fig. 4.3 Dimensions of Spy Bee

CHAPTER 5
SELECTION AND SPECIFICATION OF MATERIALS

S.No   Description          Specification
1      LD19 Lidar           LD Robot DTOF laser ranging sensor, 360° omni-directional Lidar, UART bus
2      N50 DC micro motor   DC motor rated at 3.7 V and 27500 RPM
3      LoRa-E5 Mini         AI Thinker transceiver module
4      Camera module        OV7670 VGA camera module, 8-bit RGB data
5      Arduino Uno          32-bit development board with 12 x 8 LED matrix

Table 1 Material Selection

CHAPTER 6
COMPONENT DESCRIPTION

6.1 LD19 LIDAR


LD19 is mainly composed of a laser ranging core, a wireless transmission unit, a
wireless communication unit, an angle measuring unit, a motor driving unit, and a
mechanical housing. The LD19 ranging core adopts DTOF technology and measures
4500 times each second. In operation, LD19 emits an infrared laser forward; the laser
is reflected back to the single-photon receiving unit after encountering the target
object. This yields both the emission and reception times of the laser, and the gap
between them is the time of flight. Combined with the speed of light, the distance can
be calculated. After obtaining the distance data, LD19 combines it with the angle
value obtained from the angle measurement unit to compose the point cloud data,
then transmits the point cloud to an external interface via wireless communication.
Meanwhile, the external interface provides a PWM signal that allows the motor
driving unit to drive the motor. Once the external control unit reads the rotational
speed, it regulates the motor to the specified speed through closed-loop PID control to
ensure LD19 works stably.

Fig. 6.1.1 LD19 Lidar
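
To make the DTOF arithmetic concrete, the short sketch below converts a round-trip time of flight into a distance and pairs it with an angle reading to form one Cartesian point of the cloud. The time and angle values are illustrative assumptions, not LD19 register values.

import math

C = 299_792_458.0  # speed of light, m/s

def tof_to_distance(tof_seconds):
    # The pulse travels to the target and back, so halve the round-trip path.
    return C * tof_seconds / 2.0

def polar_to_cartesian(distance_m, angle_deg):
    theta = math.radians(angle_deg)
    return (distance_m * math.cos(theta), distance_m * math.sin(theta))

tof = 33.4e-9  # assumed round trip of ~33.4 ns, i.e. a target roughly 5 m away
d = tof_to_distance(tof)
x, y = polar_to_cartesian(d, 45.0)
print(f"distance: {d:.2f} m, point: ({x:.2f}, {y:.2f}) m")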

6.2 N50 DC MICRO MOTOR
The micro motor converts electrical energy into mechanical energy. The rotor of the
micro motor is driven by the current; different rotor currents produce different
magnetic poles.

6.2.1 Specification
Model: N50; Size: 10 x 12 x 25 mm.
RPM: 21000 RPM; Voltage: 3.7 V - 7.4 V; Shaft diameter: 1 mm.
Propeller length: 55 mm.
Package includes 2 x N50 motors and 2 x 55 mm propellers.

Fig. 6.2.1 N50 DC Micro Motor

6.3 LORA E5 MINI


LoRa is a wireless modulation technique derived from Chirp Spread Spectrum (CSS)
technology. It encodes information on radio waves using chirp pulses, similar to the
way dolphins and bats communicate.
Supporting the LoRaWAN protocol and global frequency bands, the LoRa-E5 Mini is
able to achieve a transmission range of up to 10 km in open areas with ultra-low
power consumption.

Fig. 6.3.1 LoRa-E5 Mini Transmitter and Receiver
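
Because chirp spread spectrum trades data rate for range, the raw bit rate at the settings used in Appendix 1 (SF12 with 125 kHz bandwidth) is very low. The sketch below evaluates the generic CSS bit-rate formula for those settings; the 4/5 coding rate is a common default and an assumption here, not a value taken from this project's configuration.

def lora_bitrate(sf, bw_hz, coding_rate=4 / 5):
    # SF bits per symbol, bw / 2**SF symbols per second, scaled by the coding rate.
    return sf * (bw_hz / 2 ** sf) * coding_rate

print(f"{lora_bitrate(12, 125_000):.0f} bit/s")  # roughly 293 bit/s

At a few hundred bits per second, such a link suits telemetry and low-rate sensor data far better than raw image transfer, which is consistent with processing imagery on board before transmission.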

6.4 CAMERA MODULE


 The CMOS OV7670 camera module is a low-voltage, highly sensitive, small-size
CMOS image sensor. This module provides complete single-chip VGA camera
functionality and an image processor within a tiny footprint package.
 This OV7670 camera module outputs full-frame and sub-sampled 8-bit images
in a variety of formats, controlled through the SCCB (Serial Camera Control
Bus) interface.

Fig. 6.4.1 Camera Module

6.5 ARDUINO UNO
 Arduino Uno is a low-cost, flexible, and easy-to-use programmable open-source
microcontroller board that can be integrated into a variety of electronic projects.
 This board can be interfaced with other Arduino boards, Arduino shields, and
Raspberry Pi boards, and can control relays, LEDs, servos, and motors as
outputs.
 The Arduino Uno comes with a USB interface, 6 analog input pins, and 14
digital I/O ports that are used to connect with external electronic circuits.

Fig. 6.5.1 Arduino Uno

CHAPTER 7
BLOCK DIAGRAM

7.1 WORKING OF LIDAR

Fig. 7.1 Working of Lidar

CHAPTER 8
PROTOTYPING

8.1 3D PRINTING
3D printing, or additive manufacturing, is a process of making three-dimensional solid
objects from a digital file. The creation of a 3D-printed object is achieved using
additive processes, in which an object is created by laying down successive layers of
material until the object is complete. Each of these layers can be seen as a thinly
sliced cross-section of the object. 3D printing is the opposite of subtractive
manufacturing, which cuts out or hollows out a piece of metal or plastic with, for
instance, a milling machine. 3D printing makes it possible to produce complex shapes
using less material than traditional manufacturing methods.

CHAPTER 9
ADVANTAGES, APPLICATIONS & FUTURE SCOPE

9.1 ADVANTAGES
The development of a miniature spy bee offers several distinct advantages, making it a
unique and valuable technology in various fields:
 Stealth and Discreetness: The small size and inconspicuous nature of the spy
bee make it an ideal tool for discreet surveillance and reconnaissance
operations.
 Agility and Maneuverability: The compact design allows the spy bee to
navigate through confined spaces and intricate environments with agility,
reaching areas inaccessible to larger surveillance systems.
 Autonomous Operation: The spy bee's ability to operate autonomously reduces
the need for constant human intervention, making it efficient for prolonged
surveillance missions.

9.2 APPLICATIONS
The miniature spy bee has potential applications in various domains, contributing to
advancements in technology and problem-solving:
 Surveillance and Reconnaissance: The spy bee can be deployed for covert
surveillance missions, gathering critical information in sensitive or hard-to-
reach areas.
 Environmental Monitoring: Utilizing its compact form, the spy bee can monitor
and assess environmental conditions, aiding in tasks such as pollution tracking
and wildlife observation.

 Search and Rescue Operations: The agility of the spy bee makes it suitable for
search and rescue missions in disaster-stricken areas, enhancing the efficiency
of locating survivors or assessing damage.
 Industrial Inspections: In industrial settings, the spy bee can navigate complex
structures to inspect and assess equipment, infrastructure, or hazardous
environments.
 Scientific Research: The spy bee's capabilities can be harnessed for scientific
research, such as studying ecosystems, biodiversity, and climate patterns.

9.3 FUTURE SCOPE


The miniature spy bee presents exciting possibilities for future advancements and
applications:
 Integration of Advanced Sensors: Future iterations could incorporate advanced
sensors, such as thermal imaging or gas sensors, expanding the spy bee's
capabilities for specific applications.
 Swarm Intelligence: Research into swarm intelligence could lead to the
development of multiple spy bees working collaboratively, enhancing coverage
and efficiency in data collection.
 Machine Learning and AI Integration: Incorporating machine learning and
artificial intelligence algorithms could enable the spy bee to adapt and learn
from its surroundings, improving decision-making in dynamic environments.
 Extended Battery Life: Advancements in power supply technology may lead to
longer-lasting batteries, allowing the spy bee to operate for extended durations
without recharging.

CHAPTER 10
CONCLUSION

In conclusion, the development of the miniature spy bee represents a remarkable


fusion of cutting-edge technology, interdisciplinary expertise, and innovative
problem-solving. Through a meticulous process of hardware integration, software
development, and iterative refinement, this project has given rise to a compact robotic
entity with the potential to revolutionize surveillance, reconnaissance, and
environmental monitoring. The advantages inherent in the miniature spy bee,
including its stealth, agility, and autonomy, position it as a versatile tool capable of
navigating complex environments and gathering critical data in real-time. The
applications of this technology span across diverse fields, from security and industrial
inspections to environmental monitoring and search and rescue operations. Its ability
to operate autonomously, coupled with the real-time transmission of data, underscores
its relevance in scenarios where immediate insights and actionable information are
paramount. Looking toward the future, the miniature spy bee opens avenues for
further innovation. The integration of advanced sensors, exploration of swarm
intelligence, incorporation of machine learning and AI, and advancements in power
supply technology present exciting possibilities for enhancing its capabilities. As
technology continues to evolve, the spy bee could become an even more integral
component in addressing complex challenges across various domains.

CHAPTER 11
RESULT AND DISCUSSION

The implementation of the system equipped with image capture and 3D conversion
capabilities has yielded significant advancements in the field of autonomous aerial
surveillance and imaging. This multidisciplinary project seamlessly integrates various
hardware components, including a miniature camera module, microcontroller-based
navigation sensors, communication modules, propulsion systems, and a robust power
supply, within a compact and efficient frame.

On the software front, the project has necessitated the development of sophisticated
algorithms to facilitate image capture and processing, control and navigation, 3D
reconstruction, and data transmission. These algorithms work in tandem to ensure the
spy bee's autonomy in capturing high-resolution images, processing them in real-time,
and transmitting the data efficiently.

One of the notable achievements of this project is the successful integration of image
capture and processing algorithms, enabling the spy bee to identify and capture
relevant imagery autonomously. The utilization of advanced control and navigation
algorithms ensures precise maneuverability, allowing the spy bee to navigate complex
environments with ease while optimizing image capture.

Moreover, the implementation of 3D reconstruction algorithms marks a significant


milestone, as it enables the generation of three-dimensional models from the captured
imagery. This capability enhances the utility of the system by providing spatial
information, facilitating better analysis and visualization of the captured data.
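
The report does not fix a particular reconstruction pipeline, so the sketch below shows one common approach consistent with this description: computing a disparity map from a rectified stereo image pair with OpenCV and reprojecting it into 3D points. The file names and the reprojection matrix Q are placeholders; in practice Q comes from stereo calibration of the camera rig.

import cv2
import numpy as np

# Placeholder inputs: a calibrated, rectified stereo pair is assumed.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point

# Q is the 4x4 reprojection matrix from stereo calibration; identity is a stand-in only.
Q = np.eye(4, dtype=np.float32)
points_3d = cv2.reprojectImageTo3D(disparity, Q)  # H x W x 3 array of (X, Y, Z) coordinates
print(points_3d.shape)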

The communication protocol developed for data transmission ensures reliable and
secure transfer of information between the spy bee and the base station. This robust
communication framework is essential for real-time monitoring, control, and retrieval
of captured data.

Additionally, the inclusion of a user interface for monitoring and control enhances the
usability of the system, enabling operators to oversee mission progress, adjust
parameters, and analyze captured data effectively.

Overall, the successful implementation of this system signifies a significant


advancement in autonomous aerial surveillance and imaging technology. The spy
bee's ability to autonomously capture images, process them in real-time, and generate
3D reconstructions opens up new possibilities for applications such as environmental
monitoring, infrastructure inspection, and disaster management. Further refinements
and optimizations to the hardware and software components promise even greater
capabilities and potential for future iterations of the system.

REFERENCES

1. Csaba Benedek, “3D people surveillance on range data sequences of a rotating
   Lidar”.
2. Bharat Lohani, Siju Chacko, Suddhasheel Ghosh and Sandeep Sasidharan,
   “Surveillance system based on Flash Lidar”.
3. P. Sudhakar, K. Anitha Sheela and M. Satyanarayana, “Imaging Lidar system for
   night vision and surveillance application”.
4. Yingji Xia, Zhe Sun, Andre Tok and Stephen Ritchie, “A dense background
   representation method for traffic surveillance based on roadside Lidar”.
5. Carl Malmborg, “Mapping of Quaternary Geomorphology in Southern Dalarna: a
   Lidar Study”.
6. Katherine James, Gert Riemersma and Pedro Pacheco, “How UAV Lidar Imaging
   Can Locate and Map Minefield Features”.
7. Levent Candan and Elif Kaçar, “Methodology of real-time 3D point cloud mapping
   with UAV Lidar”.
8. Hari Srikanth, “Real-Time Escape Route Generation in Low Visibility
   Environments using Reinforcement Learning”.
9. Zulfan Jauhar Maknuny, Sami Fauzan Ramadhan, Arjon Turnip and Erwin
   Sitompul, “RP Lidar-Based Mapping in Development of a Health Service Assisting
   Robot in Covid-19 Pandemic”.
10. Seyyed Meghdad Hasheminasab, Tian Zhou and Ayman Habib, “Linear
    Feature-Based Image/Lidar Integration for a Stockpile Monitoring and Reporting
    Technology”.

APPENDIX 1
SOURCE CODE

FACE DETECTION ON THE CAPTURED CAMERA STREAM (PYTHON, OPENCV)

import cv2

# Load the pretrained Haar cascade that ships with OpenCV.
face_classifier = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
video_capture = cv2.VideoCapture(0)  # index 0: the first video device on the host

def detect_bounding_box(vid):
    gray_image = cv2.cvtColor(vid, cv2.COLOR_BGR2GRAY)
    # scaleFactor=1.1, minNeighbors=5: standard Haar-cascade detection settings
    faces = face_classifier.detectMultiScale(gray_image, 1.1, 5, minSize=(40, 40))
    for (x, y, w, h) in faces:
        cv2.rectangle(vid, (x, y), (x + w, y + h), (0, 255, 0), 4)
    return faces

while True:
    result, video_frame = video_capture.read()  # read one frame from the stream
    if result is False:
        break  # terminate the loop if the frame is not read successfully

    faces = detect_bounding_box(video_frame)  # draw boxes on any detected faces

    # display the processed frame in a window named "My Face Detection Project"
    cv2.imshow("My Face Detection Project", video_frame)

    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

video_capture.release()
cv2.destroyAllWindows()
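
Note that this script reads from the first video device exposed to the host (index 0) and closes when 'q' is pressed; how the OV7670 stream reaches the host is outside the scope of this listing, so a standard webcam-style device is assumed here.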

RECEIVER

#include <Arduino.h>
#include <SoftwareSerial.h>

// NOTE: pins 0 and 1 are shared with the hardware UART on an Uno;
// move the module to other pins if USB serial logging is required.
SoftwareSerial e5(0, 1); // RX, TX

static char recv_buf[512];
static bool is_exist = false;

// Format and send an AT command to the E5, then wait up to timeout_ms for p_ack.
static int at_send_check_response(const char *p_ack, int timeout_ms, const char *p_cmd, ...)
{
    char cmd_buf[256];
    int ch = 0;
    int index = 0;
    unsigned long startMillis = 0;
    va_list args;
    memset(recv_buf, 0, sizeof(recv_buf));
    va_start(args, p_cmd);
    vsnprintf(cmd_buf, sizeof(cmd_buf), p_cmd, args); // expand printf-style arguments
    va_end(args);
    e5.print(cmd_buf);
    Serial.print(cmd_buf);
    delay(200);
    startMillis = millis();

    if (p_ack == NULL)
    {
        return 0;
    }
    do
    {
        while (e5.available() > 0)
        {
            ch = e5.read();
            if (index < (int)sizeof(recv_buf) - 1) // guard against buffer overflow
                recv_buf[index++] = ch;
            Serial.print((char)ch);
            delay(2);
        }
        if (strstr(recv_buf, p_ack) != NULL)
        {
            return 1;
        }
    } while (millis() - startMillis < (unsigned long)timeout_ms);

    return 0;
}

// Parse a received test-mode packet and report its RSSI and SNR.
static int recv_prase(void)
{
    char ch;
    int index = 0;
    memset(recv_buf, 0, sizeof(recv_buf));
    while (e5.available() > 0)
    {
        ch = e5.read();
        if (index < (int)sizeof(recv_buf) - 1)
            recv_buf[index++] = ch;
        Serial.print((char)ch);
        delay(2);
    }

    if (index)
    {
        char *p_start = NULL;
        int rssi = 0;
        int snr = 0;
        p_start = strstr(recv_buf, "RSSI:");
        if (p_start && (1 == sscanf(p_start, "RSSI:%d,", &rssi)))
        {
            Serial.print("rssi: ");
            Serial.println(rssi);
        }
        p_start = strstr(recv_buf, "SNR:");
        if (p_start && (1 == sscanf(p_start, "SNR:%d", &snr)))
        {
            Serial.print("snr: ");
            Serial.println(snr);
        }
        return 1;
    }
    return 0;
}

static int node_recv(uint32_t timeout_ms)
{
    at_send_check_response("+TEST: RXLRPKT", 1000, "AT+TEST=RXLRPKT\r\n");
    unsigned long startMillis = millis();
    do
    {
        if (recv_prase())
        {
            return 1;
        }
    } while (millis() - startMillis < timeout_ms);
    return 0;
}

void setup(void)
{
    Serial.begin(115200);
    // while (!Serial);

    e5.begin(9600);
    Serial.print("Receiver\r\n");

    if (at_send_check_response("+AT: OK", 100, "AT\r\n"))
    {
        is_exist = true;
        at_send_check_response("+MODE: TEST", 1000, "AT+MODE=TEST\r\n");
        at_send_check_response("+TEST: RFCFG", 1000,
                               "AT+TEST=RFCFG,866,SF12,125,12,15,14,ON,OFF,OFF\r\n");
        delay(500);
    }
    else
    {
        is_exist = false;
        Serial.print("No E5 module found.\r\n");
    }
}

void loop(void)
{
    if (is_exist)
    {
        node_recv(2000);
    }
}
TRANSMITTER

#include <Arduino.h>
#include <SoftwareSerial.h>

// NOTE: D1/D2 are board-specific pin names (e.g. Wemos D1 mini / Seeed XIAO);
// substitute the pin numbers used on the actual board.
SoftwareSerial e5(D1, D2); // RX, TX

static char recv_buf[512];
int counter = 0;

// Send a preformatted AT command to the E5, then wait up to timeout_ms for p_ack.
static int at_send_check_response(const char *p_ack, int timeout_ms, const char *p_cmd)
{
    int ch;
    int index = 0;
    unsigned long startMillis = 0;
    memset(recv_buf, 0, sizeof(recv_buf));
    e5.print(p_cmd);
    Serial.print(p_cmd);
    delay(200);
    startMillis = millis();

    if (p_ack == NULL)
        return 0;

    do
    {
        while (e5.available() > 0)
        {
            ch = e5.read();
            if (index < (int)sizeof(recv_buf) - 1) // guard against buffer overflow
                recv_buf[index++] = ch;
            Serial.print((char)ch);
            delay(2);
        }

        if (strstr(recv_buf, p_ack) != NULL)
            return 1;

    } while (millis() - startMillis < (unsigned long)timeout_ms);

    Serial.println();
    return 0;
}

void setup(void)
{
    Serial.begin(9600);

    e5.begin(9600);
    Serial.print("E5 LOCAL TEST\r\n");
    at_send_check_response("+TEST: RFCFG", 1000,
                           "AT+TEST=RFCFG,866,SF12,125,12,15,14,ON,OFF,OFF\r\n");
    delay(200);
}

void loop(void)
{
    char cmd[128];
    counter = counter + 1;

    // Transmit the counter value as a LoRa test packet and wait for "TX DONE".
    sprintf(cmd, "AT+TEST=TXLRPKT,\"%d\"\r\n", counter);
    at_send_check_response("TX DONE", 5000, cmd);
}
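
In this point-to-point test-mode setup, both modules are configured with matching radio settings (866 MHz, SF12, 125 kHz bandwidth). The transmitter issues AT+TEST=TXLRPKT with an incrementing counter as the payload and waits for "TX DONE", while the receiver, after AT+TEST=RXLRPKT, prints each received packet together with its RSSI and SNR values.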

APPENDIX 2

