
School of Computer Science

University of Petroleum & Energy Studies, Dehradun

Final Project Report

on

Object Detection using YOLO & Python

Team members:
Unmukt Kumar – R2142201250
Utkarsh Kumar – R2142201669
Vaibhav Dimri – R2140001716
Vansh Jaiswal – R2142201704
Vansh Nayak – R2142201273
Vanya Maheshwari – R2142201278

Guided by:
Mr. Sumit Shukla

Industry Mentor:
Mr. Sumit Shukla
Table of Contents

1. Background
1.1 Aim
1.2 Technologies
1.3 Hardware Architecture
1.4 Software Architecture
2. System
2.1 Requirements
2.1.1 Functional requirements
2.1.2 User requirements
2.1.3 Environmental requirements
2.2 Design and Architecture
2.3 Implementation
2.4 Testing
2.4.1 Test Plan Objectives
2.4.2 Data Entry
2.4.3 Security
2.4.4 Test Strategy
2.4.5 System Test
2.4.6 Performance Test
2.4.7 Security Test
2.4.8 Basic Test
2.4.9 Stress and Volume Test
2.4.10 Recovery Test
2.4.11 Documentation Test
2.4.12 User Acceptance Test
2.4.13 System Acceptance Test
2.5 Customer testing
2.6 Evaluation
2.6.1 Table 1: Performance
2.6.2 Static Code Analysis
2.6.3 Wireshark
2.6.4 Test of Main Function
3. Snapshots of the Project
4. Conclusions
5. Further development or research

Executive Summary
The project involves the development of a sophisticated web application that utilizes YOLO (You Only Look Once),
a cutting-edge deep learning framework, to perform real-time object detection on images and provide accurate
vehicle position tracking. The primary goal of this application is to offer users an intuitive and efficient tool to
identify and monitor vehicles within images, enabling a wide range of applications including traffic management,
surveillance, and more.
Key Features:
1. Real-time Object Detection: The application employs YOLO, a state-of-the-art object detection algorithm, to
identify and classify vehicles within images. This allows for quick and accurate detection of vehicles,
enhancing the usability of the application.
2. Web-based Interface: The application features a user-friendly web interface developed using the Flask micro
web framework. Flask's simplicity and versatility make it an ideal choice for creating responsive and
interactive web applications.
3. Session Management: Flask's session management capabilities ensure a seamless user experience by
maintaining user state across multiple interactions. Detected objects and their positions are securely stored and
retrieved using session-based data management techniques.
4. Security Measures: Security is a top priority. The application employs robust security measures, including
data encryption during transmission, user authentication and authorization, input validation to prevent attacks,
and secure session management to prevent unauthorized access.
5. Performance Optimization: The application is designed with performance in mind. Efficient backend
processes and optimized database queries ensure rapid response times for user queries. Caching mechanisms
are implemented to minimize redundant computations and enhance response speed.
6. Scalability: The architecture of the application is built to handle increased user loads. Horizontal scalability is
achieved by adding resources as needed, allowing the application to accommodate a growing user base.
7. Object Position Visualization: Detected vehicle positions are accurately marked on the input images. This
feature provides users with a clear visual representation of where the vehicles are located within the scene.
Value Proposition:
Our web application bridges the gap between advanced object detection techniques and user-friendly interaction. It
empowers users to effortlessly analyze images and obtain real-time vehicle positions, facilitating data-driven
decision-making and improving operational efficiency across various industries.
Future Developments:
As the project evolves, we plan to enhance the application by incorporating additional features such as:
• Historical Tracking: Implement a feature that allows users to track the movement of vehicles over time by
analyzing a sequence of images.
• Mobile Compatibility: Develop a mobile version of the application to facilitate on-the-go access and usage.
• Cloud Integration: Integrate cloud services for seamless scalability and storage, enabling the application to
handle larger datasets.
In conclusion, our Object Detection and Vehicle Position Tracking Web Application combines state-of-the-art object
detection algorithms, secure session management, and user-friendly design to provide an efficient tool for identifying
and tracking vehicles within images. By addressing real-world challenges related to security, performance, and user
experience, our application aims to deliver actionable insights and operational improvements for various industries.

1. Background
1.1 Aim
The aim of this project is to leverage YOLO's advanced object detection capabilities to accurately identify
and track vehicles within images. Focusing on security, performance, and user experience, the project
develops a robust system that securely manages session data, optimizes data processing, and visualizes
vehicle positions. The goal is a high-performance solution that enhances decision-making and efficiency
across diverse industries, while laying the groundwork for future enhancements such as multi-image
processing, historical tracking, and cloud integration.
1.2 Technologies
In this project, we are leveraging a combination of cutting-edge technologies to develop a sophisticated
system for object detection and vehicle position tracking. The selected technologies play pivotal roles in
ensuring accurate detection, efficient processing, secure data management, and user-friendly interaction.
Here's a detailed explanation of the technologies we're using:
1. YOLO (You Only Look Once):
YOLO is a state-of-the-art deep learning algorithm for real-time object detection. Its unique approach
divides an image into a grid and predicts bounding boxes and class probabilities directly. YOLO's speed
and accuracy make it an ideal choice for detecting and classifying vehicles within images efficiently.
2. TensorFlow and Keras:
TensorFlow, an open-source deep learning framework, and Keras, a high-level neural network API,
provide the backbone for implementing YOLO-based object detection. TensorFlow's GPU acceleration
capabilities enhance the speed and efficiency of model inference.
3. Flask and Session Management:
Flask, the micro web framework behind the application's web interface, provides built-in session
management for secure storage of user-related data between different interactions with the application.
It ensures that detected object positions and other session-specific information are maintained and
accessible as needed.
4. Caching:
Caching mechanisms, such as those provided by Flask-Caching, enhance application performance. By
storing frequently used data in memory, caching minimizes redundant computations and accelerates
response times for user queries.
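The sketch below shows how these two pieces could fit together in a Flask route. The route name, the cache
settings, and the run_yolo() helper are illustrative assumptions, not code taken from the project.

from flask import Flask, request, session, jsonify
from flask_caching import Cache

app = Flask(__name__)
app.secret_key = "change-me"  # required for signed session cookies
cache = Cache(app, config={"CACHE_TYPE": "SimpleCache", "CACHE_DEFAULT_TIMEOUT": 300})

@app.route("/detect", methods=["POST"])
def detect_route():
    image_bytes = request.files["image"].read()
    key = str(hash(image_bytes))           # naive cache key for identical uploads
    positions = cache.get(key)
    if positions is None:
        positions = run_yolo(image_bytes)  # hypothetical detection helper
        cache.set(key, positions)          # cache the result for repeat queries
    session["last_positions"] = positions  # persisted across this user's requests
    return jsonify(positions)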
By integrating these technologies, we're creating a well-rounded system that effectively combines state-of-
the-art object detection, user interaction, data security, and optimization techniques. This approach results in
a powerful tool that addresses real-world challenges and meets the needs of diverse industries seeking
accurate object detection and vehicle position tracking capabilities.
1.3 Hardware Architecture
The hardware architecture for the object detection and vehicle position tracking system involves components
that work collaboratively to efficiently process images, perform real-time object detection, and provide
accurate vehicle position tracking. The primary focus is on optimizing the inference process of the YOLO
model for object detection. Here's an overview of the hardware architecture:
1. Central Processing Unit (CPU):
The CPU serves as the central component responsible for managing overall system operations. It
coordinates the execution of different tasks, including handling incoming requests, managing sessions,
and orchestrating image processing.
2. Graphics Processing Unit (GPU):
The GPU is a key hardware component that accelerates the inference process of the YOLO model. Deep
learning algorithms, such as YOLO, involve intensive matrix operations that can be parallelized
efficiently using GPUs. GPU acceleration significantly speeds up the object detection process, enabling
real-time or near-real-time performance.
3. Memory (RAM):
Ample RAM is crucial for efficiently storing and manipulating large image data and the intermediate
results of the object detection process. The YOLO model's weights and configurations are loaded into
memory, and intermediate feature maps are computed during inference.

4. Storage:
Storage is necessary for storing the YOLO model weights, configuration files, input images, and any
cached or session-related data. Fast storage technologies, such as SSDs, are beneficial for quick retrieval
of model files and data, contributing to faster application startup times.
5. Networking Components:
Networking components enable communication between the user's browser and the application. This
includes sending image data to the server for processing and receiving the processed image with
annotated vehicle positions. A stable and high-speed network connection is essential for efficient data
transfer.
6. Future Scalability Considerations:
While not directly shown in the hardware architecture, considerations for scalability should be made. As
the user base grows, load balancing strategies and additional servers with similar hardware specifications
can be introduced to distribute the processing load and ensure optimal performance.
The hardware architecture's key factor is the GPU's role in accelerating the YOLO model's inference process.
This enables the system to perform real-time object detection and provide accurate vehicle position tracking,
enhancing the user experience and enabling applications across various domains, including traffic
management, surveillance, and more.
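As a quick sanity check of this setup, GPU availability can be verified before the model is loaded. This is a
generic TensorFlow check, not code from the project:

import tensorflow as tf

# List GPUs visible to TensorFlow; an empty list means CPU-only inference.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print(f"{len(gpus)} GPU(s) available; YOLO inference can be accelerated.")
else:
    print("No GPU found; inference will fall back to the CPU.")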
1.4 Software Architecture
The software architecture for the object detection and vehicle position tracking system is designed to
efficiently process images, perform real-time object detection using YOLO, and accurately track vehicle
positions. Here's an overview of the software components and their interactions:
1. Input Data Management:
Raw images captured by cameras or uploaded by users are the input data for the system. Images are
preprocessed to ensure they are in the appropriate format and size for the YOLO model's input (a
preprocessing sketch appears at the end of this section).
2. YOLO Model:
The YOLO model, pretrained on the COCO dataset, performs object detection on the input images. The
model identifies vehicles within the images, providing bounding box coordinates and class probabilities.
3. Inference Engine:
The inference engine takes the preprocessed images and feeds them into the YOLO model for detection.
It processes the model's output to extract vehicle positions and classification scores.
4. Position Extraction:
Vehicle positions are extracted from the YOLO output, including the coordinates of bounding boxes
around vehicles. These positions are then used to mark the vehicles' locations on the images.
5. Caching:
Caching mechanisms can be implemented to store frequently processed images and their detected
vehicle positions. Cached results can be quickly retrieved for identical or similar image inputs, reducing
redundant computations.
6. Output Generation:
The system generates annotated images, where detected vehicles are visually marked with bounding
boxes. Vehicle positions, along with their corresponding image IDs, can be stored in a database for
future reference.
7. Scalability Considerations:
The architecture can be designed to support horizontal scalability by deploying multiple instances of the
system. Load balancing strategies ensure that incoming image processing requests are distributed across
available instances.
The software architecture's core is the integration of the YOLO model, inference engine, and position
extraction components. This combination enables real-time and accurate vehicle detection and position
tracking. Additionally, optional components like caching, databases, and APIs offer flexibility for future
enhancements and use cases.
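As a minimal sketch of the preprocessing step referenced above, the snippet below uses OpenCV's DNN
utilities. The 416x416 input size and the file name are assumptions, not values specified in the report:

import cv2

image = cv2.imread("input.jpg")  # raw input image (path is illustrative)
blob = cv2.dnn.blobFromImage(
    image,
    scalefactor=1 / 255.0,  # normalize pixel values to [0, 1]
    size=(416, 416),        # resize to the model's expected input size
    swapRB=True,            # OpenCV loads BGR; YOLO expects RGB
    crop=False,
)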

2. System
2.1 Requirements
2.1.1 Functional requirements
1. Object Detection: The system must be capable of performing object detection using the YOLO model
on input images. It should accurately detect and classify vehicles within the images.
2. Position Tracking: The system should extract the positions of detected vehicles and their bounding
box coordinates. The accuracy of vehicle position extraction should meet or exceed a defined
confidence threshold.
3. Annotated Output Generation: The system must generate annotated images that display the original
input images with bounding boxes around detected vehicles. The annotated images should clearly
visualize the identified vehicle positions.
4. Caching: If implemented, the system should cache processed images and position data for improved
performance during similar queries. Cached data should be retrievable quickly to reduce redundant
processing.
2.1.2 User requirements
1. Ease of Use: Users should find the system's interface intuitive and easy to navigate. Object detection
and vehicle position tracking should be accessible through user-friendly interactions.
2. Real-Time Processing: The system's response time for object detection and position tracking should
be quick, providing real-time or near-real-time results.
3. Accurate Vehicle Position Information: Users expect accurate and reliable vehicle position tracking
results for decision-making and analysis.
4. Visualization: Users should be able to visualize detected vehicle positions through annotated images.
Annotated images should clearly indicate the locations of detected vehicles.
2.1.3 Environmental requirements
1. Hardware: The system should run on standard hardware configurations, including CPUs and GPUs
suitable for deep learning tasks. Optional GPU acceleration should be supported for faster object
detection.
2. Software: The system's software components, including the YOLO model and libraries, must be
compatible with the chosen programming language and environment.
3. Network Connectivity: The system should have access to stable network connectivity, especially for
remote image input or potential cloud integration.
4. Security: If user authentication and optional database integration are implemented, security
mechanisms should be in place to protect sensitive data.
5. Scalability (if required): If future scalability is anticipated, the architecture should be designed to
accommodate increased user loads and potentially distributed deployment.
2.2 Design and Architecture
The design and architecture of the object detection and vehicle position tracking system are crucial for
achieving accurate and efficient results. The system's core focus is on leveraging the YOLO model for real-
time object detection and precise vehicle position tracking. Here's a detailed breakdown of the design and
architecture:
1. High-Level Components:
• Input Module: Handles input data, which includes raw images from cameras or user uploads.
• Inference Engine: Orchestrates the YOLO model for object detection and processes its output.
• Position Tracker: Extracts vehicle positions from the YOLO output and generates position
information.
• Output Generator: Creates annotated images with marked vehicle positions for visualization.
• Optional Caching Layer: Stores processed images and position data for caching to improve
performance.
• Optional Database: Stores historical data and vehicle positions for tracking over time.

2. Detailed Architecture:
• Input Processing: Raw images are received through the input module. Preprocessing prepares
images for the YOLO model, resizing them to the required input size.
• YOLO Model: The YOLO model, pretrained on the COCO dataset, is loaded and utilized for object
detection. The model is fed with preprocessed images to predict vehicle positions and classes.
• Inference and Position Extraction: The inference engine processes YOLO's output, extracting
bounding box coordinates and class probabilities. The position tracker then derives precise vehicle
positions from the bounding boxes.
• Caching: Processed images and position data can be cached to expedite future queries with similar
inputs. Cached data can be retrieved quickly, reducing redundant processing.
• Output Generation: The output generator overlays the original images with annotated bounding
boxes for identified vehicles. Annotated images are ready for visualization and further analysis.
3. Scalability and Extensibility:
The architecture can be deployed across multiple instances for scalability and load distribution. APIs can
be added to provide external access to detection and tracking functionalities.
4. Flow of Data:
• Raw images are received and preprocessed.
• Preprocessed images are fed into the YOLO model for object detection.
• The model outputs bounding box coordinates and class probabilities.
• Position tracker extracts vehicle positions from the bounding boxes.
• Processed images and position data are optionally cached.
• Annotated images are generated with marked vehicle positions.
• Historical data can be stored in the optional database for tracking.
The architecture's strength lies in its YOLO-driven object detection, which ensures real-time and accurate
vehicle position tracking. Flexibility is provided by optional components like caching and databases,
enabling optimization and historical tracking. The design promotes scalability and future enhancements,
making it a powerful tool for various industries.
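The detection and position-extraction stages of this flow can be sketched with OpenCV's DNN module. The file
paths assume the standard YOLOv3 release files (yolov3.cfg, yolov3.weights), and the thresholds are
illustrative defaults, not values taken from the project:

import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
layer_names = net.getUnconnectedOutLayersNames()

def detect_vehicles(image, conf_threshold=0.5, nms_threshold=0.4):
    h, w = image.shape[:2]
    # Preprocess: normalize and resize into a network-ready blob.
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(layer_names)  # raw YOLO predictions

    boxes, confidences, class_ids = [], [], []
    for output in outputs:
        for det in output:  # det = [cx, cy, w, h, objectness, class scores...]
            scores = det[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if confidence < conf_threshold:
                continue  # drop low-confidence predictions
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(confidence)
            class_ids.append(class_id)

    # Non-maximum suppression removes overlapping duplicates of the same object.
    keep = cv2.dnn.NMSBoxes(boxes, confidences, conf_threshold, nms_threshold)
    return [(boxes[i], class_ids[i], confidences[i]) for i in np.array(keep).flatten()]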
2.3 Implementation
1. Input Module:
The input module is responsible for receiving and preparing raw images for processing. This involves
resizing images to match the YOLO model's input size, which enhances the accuracy of object detection.
2. Inference Engine:
The inference engine loads the pre-trained YOLO model and processes images to perform object
detection. The engine converts images into a suitable format (blobs) for the model. The YOLO model
predicts bounding box coordinates and class probabilities for detected objects.
3. Position Tracker:
The position tracker processes the outputs from the YOLO model. It analyzes the model's predictions,
extracts bounding box information, and identifies the positions of vehicles within the image. A confidence
threshold is used to filter out less confident predictions.
4. Output Generator:
The output generator creates annotated images by overlaying bounding boxes on the original image. The
bounding boxes visually highlight the positions of detected vehicles. This annotated image serves as a
visual representation of the detection results.
The workflow enables real-time object detection and accurate tracking of vehicle positions within images. The
process involves preprocessing, utilizing a pre-trained model, analyzing model outputs, visualizing results,
and optionally storing data for further analysis. The design is versatile, making it suitable for various
applications, including traffic management, surveillance, and more.
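Continuing the detection sketch from section 2.2, the output generator step can be sketched as below. The
coco.names label file is the standard YOLOv3 release file, and the set of vehicle class names is an
assumption based on the COCO labels:

import cv2

VEHICLE_CLASSES = {"bicycle", "car", "motorbike", "bus", "truck"}

with open("coco.names") as f:
    class_names = [line.strip() for line in f]

def annotate(image, detections):
    # detections: list of (box, class_id, confidence) from detect_vehicles()
    for (x, y, bw, bh), class_id, conf in detections:
        label = class_names[class_id]
        if label not in VEHICLE_CLASSES:
            continue  # keep only vehicle detections
        cv2.rectangle(image, (x, y), (x + bw, y + bh), (0, 255, 0), 2)
        cv2.putText(image, f"{label} {conf:.2f}", (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)
    return image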
2.4 Testing
2.4.1 Test Plan Objectives
• To verify the functionality of the object detection system.
• To ensure that the system can detect objects accurately and reliably.
• To identify any defects in the system and report them to the developers.
• To evaluate the performance of the system under different conditions.

2.4.2 Data Entry
• The list of objects that the system should be able to detect –
["person", "bicycle", "car", "motorbike", "aeroplane", "bus", "train", "truck", "boat", "traffic
light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse",
"sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie",
"suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove",
"skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife",
"spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog",
"pizza", "donut", "cake", "chair", "sofa", "pottedplant", "bed", "diningtable", "toilet",
"tvmonitor", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven",
"toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier",
"toothbrush"]
• The video used to test the system – cycles.mp4
• The expected results for each test case –

2.4.3 Security
• The test plan is kept confidential.
• Only authorized personnel are allowed to access the test plan.
• The test plan is protected from unauthorized modification.
2.4.4 Test Strategy
• A combination of unit tests, integration tests, and system tests is used to test the object
detection system.
• Unit tests are used to test the individual components of the system.
• Integration tests are used to test how the individual components interact with each other.
• System tests are used to test the entire system under different conditions.
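As an illustration of the unit-test level, the sketch below exercises the detect_vehicles() helper sketched
in section 2.2. The function name and the fixture image are assumptions, and the tests are meant to be run
with pytest:

import cv2
import numpy as np

def test_detects_vehicle_in_known_image():
    image = cv2.imread("tests/car_sample.jpg")  # fixture image known to contain one car
    assert image is not None, "test fixture missing"
    detections = detect_vehicles(image)
    labels = {class_names[cid] for _, cid, _ in detections}
    assert "car" in labels  # the known vehicle must be found

def test_empty_scene_yields_no_vehicles():
    blank = np.zeros((416, 416, 3), dtype=np.uint8)  # a black image contains nothing
    assert detect_vehicles(blank) == []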
2.4.5 System Test
• The system test is conducted using a variety of videos.
• The system is tested for its ability to detect objects of different sizes, shapes, and colors.
• The system is also tested for its ability to detect objects in different positions and orientations.
2.4.6 Performance Test
• The performance test is conducted to measure the speed and accuracy of the object detection
system.
• The system is tested on a variety of hardware platforms and with different image sizes.
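A minimal timing sketch for this measurement is shown below. It reports average inference speed in frames
per second over the test video named in section 2.4.2, reusing the detect_vehicles() helper sketched in
section 2.2:

import time
import cv2

def measure_fps(video_path, max_frames=100):
    cap = cv2.VideoCapture(video_path)
    frames, start = 0, time.perf_counter()
    while frames < max_frames:
        ok, frame = cap.read()
        if not ok:
            break  # end of the video
        detect_vehicles(frame)  # the timed inference call
        frames += 1
    cap.release()
    elapsed = time.perf_counter() - start
    return frames / elapsed if elapsed > 0 else 0.0

print(f"{measure_fps('cycles.mp4'):.1f} FPS")  # cycles.mp4 is the test video from 2.4.2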

2.4.7 Security Test
• The security test is conducted to ensure that the object detection system is secure from
unauthorized access and modification.
• The system is tested for its vulnerability to common hacking techniques.
2.4.8 Basic Test
• The basic test is conducted to test the basic functionality of the object detection system.
• This includes testing the ability of the system to load images, detect objects, and display the
results.
2.4.9 Stress and Volume Test
• The stress and volume test is conducted to test the performance of the object detection system
under heavy load.
• This includes testing the system with a large number of images or videos.
2.4.10 Recovery Test
• The recovery test is conducted to test the ability of the object detection system to recover from
failures.
• This includes testing the system with unexpected errors or crashes.
2.4.11 Documentation Test
• The documentation test is conducted to test the completeness and accuracy of the
documentation for the object detection system.
• This includes testing the user manual, installation guide, and API documentation.
2.4.12 User Acceptance Test
• The user acceptance test is conducted to get feedback from users on the usability of the object
detection system.
• This includes testing the system with a variety of users and tasks.
2.4.13 System Acceptance Test
• The system acceptance test is conducted to verify that the object detection system meets the
requirements of the stakeholders.
• This includes testing the system with the actual data that will be used in production.
2.5 Customer testing
Customer testing is an important part of the development process for any object detection project. Careful
planning and execution of customer testing help ensure that the system meets the needs of its users and is
ready for deployment. Some additional guidelines for conducting customer testing:
• Involve the customers in the development process as early as possible. This helps clarify their
needs and expectations and ensures that the system is designed to meet their requirements.
• Use a variety of testing methods, such as user interviews, usability testing, and performance
testing, to build a comprehensive picture of the system's strengths and weaknesses.
• Be open to feedback from the customers; it is an opportunity to learn from them and improve the
system.
• Communicate the test results to the customers clearly and concisely, so they can understand the
system's performance and make informed decisions about its use.

2.6 Evaluation
2.6.1 Table 1: Performance
Metric              Value
Top-1 accuracy      76.5%
Top-5 accuracy      93.3%
Number of epochs    160
Learning rate       0.1 (initial), 10^-3 (fine-tuning)
Weight decay        0.0005
Momentum            0.9
2.6.2 Static Code Analysis
A static code analysis is a technique that can be used to identify potential errors and
vulnerabilities in code without actually running the code. This can be done by analyzing the
code's structure and syntax. In the case of YOLO, a static code analysis could be used to identify
potential errors in the code, such as typos, incorrect variable names, and logical errors. It could
also be used to identify potential vulnerabilities in the code, such as buffer overflows and SQL
injection attacks.
2.6.3 WireShark
Wireshark is a network protocol analyzer that can be used to capture and analyze network traffic.
This can be used to identify potential security threats, such as data leaks and denial-of-service
attacks.
In the case of YOLO, Wireshark could be used to capture and analyze the network traffic that is
generated when the code is run. This could be used to identify potential security threats, such as
data leaks or denial-of-service attacks.
2.6.4 Test of Main Function
The main function is the entry point for a program. It is responsible for initializing the program
and calling the other functions that need to be executed. The main function of YOLO should be
tested to ensure that it is working properly. This can be done by creating a test case that exercises
all of the functionality of the main function.
The test case should be designed to cover all possible scenarios, such as the following:
• The code should be able to load the ImageNet 1000-class classification dataset.
• The code should be able to train the classifier.
• The code should be able to classify images.
The test case should also be designed to be robust, so that it can detect any errors in the main
function.

3. Snapshots of the Project

Fig 3.1: Code-1

Fig 3.2: Code-2

Fig 3.3: Output-1

Fig 3.4: Output-2

4. Conclusions
In the proposed system, the drawbacks of earlier approaches such as R-CNN, Faster R-CNN, and tracker-based
methods like CSRT are overcome by using the YOLO algorithm. YOLO is markedly more efficient than earlier
deep learning approaches to object detection, improving both the speed and accuracy of detection and
tracking. One such approach is to use a convolutional neural network (CNN) to detect objects in each frame
of the video stream; the CNN can be trained on a large dataset of labeled images to learn the features most
important for object detection. Once objects are detected, an object tracking algorithm can track them
across frames and estimate their positions. One such algorithm is correlation-based tracking, which can
merge the detection results of each divided sub-image with inference results at different times,
significantly reducing the amount of computation required for detection and tracking. In conclusion, by
combining deep learning and computer vision techniques, we can design an efficient algorithm that detects
objects and their positions in real-time video streams. The proposed approach can be used in applications
such as video surveillance, automated driving, and intelligent robotics.

5. Further development or research


To further develop the deep learning algorithm for object detection and tracking in real-time video streams, we can
explore various techniques such as multi-object tracking and object re-identification. Multi-object tracking can be
used to track multiple objects simultaneously, which is useful in scenarios where there are multiple objects in the
video stream. Object re-identification can be used to track objects across multiple cameras, which is useful in
scenarios where the object moves out of the field of view of one camera and enters another camera’s field of view.
Another area of research is semi-supervised learning, which can train deep learning models with limited
labeled data and thus significantly reduce the labeling effort required. In addition, real-time object
detection can be explored on edge devices such as smartphones and IoT hardware, enabling detection and
tracking on low-power devices. These research areas can help us design more efficient and accurate
algorithms for object detection and tracking in real-time video streams.
In the future, the system can be made more reliable and less complex by substituting more advanced
techniques, allowing it to be used in various real-time applications such as crime investigation,
manufacturing, and space research.


