YoloV4 Based Object Detection For Blind Stick
ISSN No:-2456-2165
Abstract:- Blind people face many problems interacting with their immediate surroundings. The aim of this paper is to offer a device that helps blind people navigate and sense obstacles. We propose a working model: a walking stick with a built-in ultrasonic sensor and a microcontroller system. Detection and tracking algorithms are described in terms of extracting features from images and videos for safety and surveillance applications. Well-known object detection algorithms include You Only Look Once (YOLO), Region-based Convolutional Neural Networks (RCNN), and Faster RCNN (F-RCNN). RCNN has higher accuracy than the other algorithms, but YOLO surpasses it when speed matters more than accuracy.

Keywords:- YOLOv4, Raspberry Pi, RCNN, Blind Stick, Object Detection.

I. INTRODUCTION

Need and definition of an ML-based blind stick: the eye is the most significant part of the body, and vision is how we obtain most environmental information. Blindness is a condition in which a person is unable to see and detect what is happening in his/her surroundings, which may lead to various problems that cannot be solved by medical means. Many people have severe vision impairment that prevents them from travelling independently. These blind people should have access to a range of tools that help them travel independently on their path. One of the oldest tools for blind people is the walking stick, also known as the white cane. It proved very useful in the past, but it has significant limitations. The rapid growth of modern technology has introduced better systems, such as smart guided sticks that can provide intelligent navigation to a blind person. Computer vision is one of the most visceral parts of computer science. An Artificial Intelligence based smart guide stick, furnished with image detection technology, captures images of the front and rear and uses Machine Learning to process them. India, the most populated country in the world, is home to roughly 20 percent of the world's blind or visually impaired people. A small rectangular box containing a Raspberry Pi, a Bluetooth speaker, and a power bank is designed to be fitted to a cane that is typically about 55 inches long. A Raspberry Pi Model 4 with 4 GB of RAM is used. The YOLOv4 algorithm performs object recognition, and a Bluetooth speaker module is integrated to warn the user of any obstructions in the road. During navigation, the power bank serves as the Raspberry Pi's power source.

II. LITERATURE SURVEY

This proposed method uses the Arduino UNO as a controller. Detection is accomplished by sensing all obstacles in front of the user. [6] An ultrasonic sensor detects obstacles at a range of up to four meters, and an infrared sensor detects nearer obstacles in front of the blind person.

A stick with a built-in ultrasonic sensor and a microcontroller system. [9] The ultrasonic sensor detects obstacles using ultrasonic waves. On sensing an obstacle, the sensor passes the data to the microcontroller, which processes it and calculates whether the obstacle is close enough to warrant a warning.

Design and implementation of an ultrasonic-sensor-based walking stick for visually impaired persons. [3] An HC-SR04 ultrasonic sensor module detects obstacles in the path of the blind person, and a buzzer alerts the person. The proposed system is implemented using a PIC 16F877A microcontroller.

Another published project used ultrasonic, infrared, and water sensors to detect any objects within 4 meters very quickly. [10]

The stick is integrated with various sensors, such as an ultrasonic sensor and a water sensor, along with GPS-GSM and RF modules and a microcontroller. [1] This paper focuses on deep learning.
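The surveyed designs share one core computation: converting the ultrasonic echo time reported by the sensor into an obstacle distance, then deciding whether to warn the user. A minimal sketch of that step (the 343 m/s speed of sound and the 100 cm warning threshold are illustrative assumptions, not values taken from the cited papers):

```python
def echo_to_distance_cm(pulse_duration_s, speed_of_sound_m_s=343.0):
    """Convert an ultrasonic echo pulse duration to a distance in cm.

    The echo pulse covers the round trip to the obstacle and back,
    so the one-way distance is half the total path travelled.
    """
    return (pulse_duration_s * speed_of_sound_m_s * 100.0) / 2.0

def obstacle_close(pulse_duration_s, threshold_cm=100.0):
    """Decide whether an obstacle is near enough to warn the user."""
    return echo_to_distance_cm(pulse_duration_s) <= threshold_cm
```

On the actual stick, the pulse duration would come from timing the sensor's echo pin on the microcontroller; here it is passed in directly so the arithmetic can be shown in isolation.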
YOLOv4

YOLOv4 is an important improvement over YOLOv3: a new architecture in the Backbone and modifications in the Neck improved the mAP (mean Average Precision) by 10 percent and the FPS (frames per second) by 12 percent. In addition, it has become easier to train this neural network on a single GPU. To reach higher precision, YOLOv4 uses a more complex and deeper network via Dense Blocks.
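A YOLO-family detector emits many overlapping candidate boxes per object; before the system can announce a single object name, duplicates are pruned by non-maximum suppression based on intersection-over-union, the same overlap measure that underlies the mAP metric mentioned above. A minimal pure-Python sketch of that post-processing step (the box format and the 0.5 threshold are illustrative assumptions, not the paper's exact pipeline):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box, drop boxes that overlap it, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```

In practice the detector's own framework performs this step, but the sketch shows why two strongly overlapping car detections collapse into a single announced object.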
Google Colab

Colab, or 'Colaboratory', allows you to write and execute Python in your browser, with zero configuration required, free access to GPUs, and easy sharing. Whether you are a student, a data scientist, or an AI researcher, Colab can make your work easier.

gTTS (Google Text-to-Speech):
A Python library and CLI tool to interface with Google Translate's text-to-speech API. It writes spoken MP3 data to a file or to stdout and features flexible pre-processing and tokenizing.

LABELME:
LabelMe is a Python-based open-source image polygonal annotation tool that can be used for manually annotating images for object detection, segmentation, and classification. The tool is a lightweight graphical application with an intuitive user interface. With LabelMe you can create polygons, rectangles, circles, lines, points, or line strips.

VI. RESULT

The system detected automobile vehicles and at the same time provided audio output through the Bluetooth headphones. A number of objects were input through the camera module, and the system correctly identified objects in the surrounding environment. Some of the results are shown in the following figures.

Fig 3: Car is identified which is captured by Pi camera.

Fig 4: Car is identified which is captured by Pi camera.

VII. CONCLUSION

The proposed system is implemented using a Raspberry Pi, with a camera and a speaker interfaced to it. YOLOv4 is used in the proposed system to identify objects in the surrounding environment. After identifying an object, the system produces audio naming that object. This system can be deployed widely to give blind people ease and privacy in daily life. To aid manufacturing and industry in harsh conditions, it is also expected to be used in industrial settings where visibility is reduced, such as coal mines and sea bottoms. The aim of the study is to improve the independence of persons with visual impairment: by effectively using the proposed system and its audio feedback, a person with visual impairment may be able to overcome diverse risks. The camera of the device detects objects in the surroundings and gives output in audio format, thus helping visually impaired people to 'see through the ears'.

REFERENCES

[1]. M. P. Agrawal and A. R. Gupta, "Smart Stick for the Blind and Visually Impaired People," 2018 Second International Conference on Inventive Communication and Computational Technologies (ICICCT), Coimbatore, India, 2018, pp. 542-545, doi: 10.1109/ICICCT.2018.8473344.
[2]. Ali, Uruba, Hoorain Javed, Rekham Khan, Fouzia Jabeen, and Noreen Akbar, "Intelligent stick for blind friends," International Robotics and Automation Journal 4, no. 1 (2018).
[3]. N. Dey, A. Paul, P. Ghosh, C. Mukherjee, R. De and S. Dey, "Ultrasonic Sensor Based Smart Blind Stick," 2018 International Conference on Current Trends towards Converging Technologies (ICCTCT), Coimbatore, India, 2018, pp. 1-4, doi: 10.1109/ICCTCT.2018.8551067.
[4]. Balu N Ilag and Yogesh Athave, "A design review of smart stick for the blind equipped with obstacle detection and identification using artificial intelligence," International Journal of Computer Applications, 182:55-60, 04 2019.