Conference Paper · June 2021
DOI: 10.1109/ICICT52195.2021.9568446


2021 International Conference on Communication & Information Technology - ICICT2021– Basra- IRAQ

Design and Implementation of Voice Controlled Robotic ARM


Rashed S. Kanash
Department of Computer Engineering, Shahid Chamran University of Ahvaz, Ahvaz, Iran
[email protected]

Seyed E. Alavi
Department of Computer Engineering, Shahid Chamran University of Ahvaz, Ahvaz, Iran
[email protected]

Ali A. Abed
Department of Computer Engineering, University of Basra, Basra, Iraq
[email protected]

ABSTRACT—Modern technologies have required the labor market to provide robots that work in many fields, including industrial, medical, military, and agricultural [3]. In this project we design a robotic arm for areas such as oil fields that are dangerous to operators, sites containing inert gases, or environmentally polluted areas. The aim of the design is to make the robot arm operate fully by voice, that is, to respond to commands issued by the operator. There are many methods that can make a robot work without manual control, but each design has its own practical character, and the sound-based approach makes the robot easier and more efficient at performing accurate tasks in dangerous places. The robotic arm is composed of a group of units (arm, Arduino controller, NRF sensor, and voice module). The arm contains a main base, three rotary joints, and an end-effector, in which the rotational motion is provided by servo motors. The NRF sensor serves as a wireless remote control over distances of up to one kilometer and can work inside and outside buildings. The arm is programmed by training the voice sensor on the voice commands (hello, right-left, up-down, forward-reverse, catch-open).

Keywords- Arduino UNO, Robotic arm, NRF 24L01 wireless, Voice module sensor.

I. INTRODUCTION

The robotic arm has entered many fields, including industry, medical devices, and military sites, where it has become almost an essential requirement that it be able to perform a wide range of tasks automatically with the help of supervisors in the control room [1]. Robots come in many types designed according to the location and work environment; a robot is an electromechanical machine that can be controlled by a computer and by electronic control units that can be easily programmed. Other types of robots are autonomous or semi-autonomous and are controlled remotely. Robots are widely used in factories, military and medical sites, banks, and locations dangerous to humans. The robotic arm is the most commonly used because its operation is similar to a human hand. It is inherently programmable, and its movement can be controlled through the combined action of each joint it is attached to, so the angle of each joint can be set according to the required movement and response time to work with utmost precision. The robotic arm is designed to be manually or self-programmable with mechanical components and electronic and electrical circuits. In general, it is a robot that can perceive a range of voice commands under various conditions and make decisions about the course of action based on its design and programming. Current robots are equipped with sensory devices such as the Elechouse V3 voice recognition sensor, which is an advanced type of sound system that simulates parts of the arm and allows it to perform tasks more efficiently [2]. Robots are not just machines [4], but an indispensable part of large industries and hospitals; they are used in the manufacture of auto parts, in laboratories, and at military sites, for example in removing explosives, and in almost all kinds of heavy-duty operations. Robotic arms come in different designs, consisting of five or six servo motors according to the type of movement and required angles, and can perform complex tasks such as welding and cutting. Moreover, they can operate in hazardous environments and areas inaccessible to workers or operators. The main objective of this project is to help industries reduce errors, work with high accuracy and speed, reduce labour costs, and achieve better results. Several types of robots have various designs depending on the actual need, and a robot can be useful for people with different abilities in performing their daily tasks [5]. There are several methods of wireless control, such as keyboard control, gesture control, and vision-based techniques that place cameras on the robot, but these are time-consuming and require a lot of technical expertise and field knowledge. Voice control is smooth and helps the user control the arm easily, with a high level of proficiency from the robot programming language, which makes the operation of the arm flexible in terms of use [6].

II. RELATED WORK

In [6], the authors designed a wheelchair controlled by voice through embedded systems. This chair serves the physically handicapped by providing a voice system instead of manual control: the voice commands are predefined in the Arabic language, and the word recognition system depends on the loudspeaker associated with the sound sensor (IWRS) to give highly accurate voice results commensurate with the patient's situation. A voice controller was used with the HM2007 speech processor, as it contains a buffer memory, a processor, and a set of digital outputs to send digital signals according to the voice command. The overall mechanical work adopts two 14A/24V/200W DC motors with an engagement/disengagement clutch and speed reduction parts with integrated lockout control.

In [7], the authors designed a simple technology to recognize isolated Arabic words to control robots. The extracted data is stored in a mat file by the Mel Frequency Cepstral Coefficients (MFCC) algorithm, and data analysis is done with MATLAB software. The addition of a new, non-linear feature matches this technology with speech recognition that achieves the lowest error rates and rapid response to voice commands in the field of robotics.

In [8], the authors designed a robotic arm that is also based on an Arduino controller driven by voice commands. The

978-1-6654-3914-5/21 /$31.00 ©2021 IEEE 284

Authorized licensed use limited to: SWANSEA UNIVERSITY. Downloaded on November 23,2022 at 15:34:20 UTC from IEEE Xplore. Restrictions apply.
robot detects the target through sonar technology before a voice command is issued to it. Voice commands are sent through smartphones via Bluetooth over a short distance. The authors in [9] designed an automatic voice-controlled car driven through a smartphone that sends commands via Bluetooth, with an application that determines the movement of each servo motor; a night camera was added to see the required targets, along with an ultrasound-based obstacle detector to protect the robotic vehicle from obstacles on the road. Other authors [10] designed an Arduino-controlled robotic vehicle with six servo motors, four for vehicle movement and two for arm grip, controlled via an RF sensor. The robotic arm has two degrees of freedom of movement. This vehicle follows a linear path drawn in a pre-determined color, detected through a color sensor.

In [11], the authors designed a voice control system for 6-DOF (Degree of Freedom) assistive robots for patients with special needs, using a Jaco2 Kinova tabletop robot with a built-in wireless microphone and speaker. They showed that there is a difference between the manual joystick and the voice commands sent to the wheelchair. These robots are known to provide an interface between the user and the computer-based application, allowing for seamless spoken interaction with the application. Voice recognition gives crystal-clear production and output to the user or operator. Our paper deals with the fact that the difference between two sounds (especially speech) may slightly affect the output; the robotic arm has therefore been assigned a security password so that it can be controlled by the authorized operator only and no other person can operate the arm. We also offer a simple and advanced package of very inexpensive electronics that allows the robotic arm to be controlled very efficiently through voice recognition.

Fig. 1. The general block diagram of the robotic arm

III. PROPOSED SYSTEM DESIGN

The robotic arm is designed using several hardware components, including an Arduino UNO programmed with the Arduino IDE. The operation of the robot depends entirely on the servo motors, the sound sensor, and the feeding sources, in addition to the wireless NRF sensor that transmits signals over long distances, as shown in Fig. 1. This task is accomplished with the Arduino control board. The remote control is equipped with feed sources, and the physical components are attached to the body of the robotic arm. The Arduino controller processes the signals received from the sound sensor and converts them into the necessary digital pulses, which are sent to the servo motors; each servo motor then responds according to the signal sent to the arm.

Fig. 2. Hardware Implementation of the 3-DOF robotic arm

III. DESIGNING THE MANIPULATOR BODY

Figure 2 shows the overall hardware implementation of the proposed system. The design procedure for this system is detailed in the following subsections.

A. END-EFFECTOR SELECTION (HAND)

The fingers of the arm's hand posed the main problem that we faced in this project in terms of programming, as they are one of the important aspects of robotic arm applications. The main and most complex parts of the programming code are the control of the opening and closing angle and the speed of response, since the hand differs from the rest of the parts of the arm in terms of work and tasks. There are robotic fingers that can be controlled pneumatically or hydraulically [12], but in our project we used control by digital signal, where an electrical signal is sent by the Arduino controller to each servo motor linked to the arm according to the program code. Fig. 3 shows the mechanical connection with the servo motor.

Fig. 3. End-Effector fingers of the arm

The flow chart of the designing process is presented in Fig. 4.
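The command path just described (voice sensor, NRF link, Arduino, servo pulses) can be sketched as a small dispatch table that turns a recognized word into a joint movement. This is an illustrative model only: the joint assignments and the 10-degree step per command are assumptions, since the paper does not publish its control code.

```cpp
#include <algorithm>
#include <cassert>
#include <string>

// Joint indices for the 3-DOF arm plus gripper described above.
enum Joint { BASE = 0, SHOULDER = 1, ELBOW = 2, GRIPPER = 3, NJOINTS = 4 };

struct Move { Joint joint; int delta; };

// Map a recognized word to a joint and a signed step. The 10-degree step
// and the joint assignments are illustrative, not taken from the paper.
bool commandToMove(const std::string& word, Move& out) {
    if (word == "left")    { out = {BASE,     -10}; return true; }
    if (word == "right")   { out = {BASE,     +10}; return true; }
    if (word == "up")      { out = {SHOULDER, +10}; return true; }
    if (word == "down")    { out = {SHOULDER, -10}; return true; }
    if (word == "forward") { out = {ELBOW,    +10}; return true; }
    if (word == "revers")  { out = {ELBOW,    -10}; return true; }
    if (word == "catch")   { out = {GRIPPER,  +90}; return true; }
    if (word == "open")    { out = {GRIPPER,  -90}; return true; }
    return false;  // not a motion command (e.g. "hello", "goodbye")
}

// Apply the move to the current joint angles, clamped to the servo range.
void applyMove(int angles[NJOINTS], const Move& m) {
    angles[m.joint] = std::clamp(angles[m.joint] + m.delta, 0, 180);
}
```

On the real hardware each updated angle would be written to the corresponding servo; here the table only makes the word-to-joint mapping explicit.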
Fig. 4. Flow chart of the designing process

B. ARDUINO MICROCONTROLLER

The Arduino Uno shown in Fig. 5 is a medium-sized, adaptable, and breadboard-friendly microcontroller board created by Arduino.cc, based on the Microchip ATmega328P. Equipped with 14 digital and 6 analog pins, it has an operating voltage of 5 V. With 32 KB of flash memory and 1 KB of EEPROM, it is the most widely used microcontroller for small projects.

Fig. 5. Arduino Uno board

C. SERVO MOTOR

The MG996R servo motor shown in Fig. 6 is widely used in high-tech devices and in industrial applications such as automation technology. As a standalone electrical device, it rotates machine parts very efficiently and with great accuracy. The output shaft of this motor can be moved to a given angle: it can rotate 180° (90° in both directions). It works just like standard machines but is smaller in size, which makes it suitable for many graduation projects. Servomotors are mainly used in home electronics, robotics, cars, and many other devices.

Fig. 6. MG996R servo motor

D. THE VOICE MODULE

The voice module shown in Fig. 7 is one of the main components of this project, as it works on the principle of transmitting sequential audio data when connected to an Arduino board. It consists of the SC57X series digital signal processor (DSP), a Harvard single-chip architectural supercomputer. It comes with the ability to control the Cortex-A5 system, which provides good performance for complex applications that require modern and advanced algorithms. In this module we find most of the procedure work and the responsible master program that controls the arm.

Fig. 7. The voice module

E. NRF24L01 WIRELESS MODULE

The NRF24L01 is a wireless transceiver: every unit can both transmit and receive data. As it operates at a frequency of 2.4 GHz, which is within the ISM band, it can be used in most engineering applications. When operated efficiently the unit can cover 100 meters, which gives it multiple options and makes it suitable for all wireless remote-controlled projects. The unit works at a voltage of 3.3 V and can be easily used with 3.3-volt or 5-volt systems and with ready-made controllers such as the Arduino Uno. Each unit can select among 125 channels, and each unit can communicate with 6 other units. Thus, it is possible to have several wireless units communicating with each other in a specific area. Two NRF wireless sensors are used: one connected to the Arduino controller attached to the arm, and the second on the controller unit, providing wireless, remote communication between the arm and the voice sensor module. Its connection with the Arduino is shown in Fig. 8.
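The 180° travel described in the servo subsection is produced by varying the width of a PWM pulse. A minimal sketch of that angle-to-pulse conversion, assuming the 544-2400 µs defaults of the Arduino Servo library (the paper does not give its exact calibration):

```cpp
#include <cassert>

// Typical hobby-servo timing: a 50 Hz PWM frame whose high time encodes
// the target angle. 544 and 2400 us are the Arduino Servo library default
// endpoints; treat them as assumptions, not values from the paper.
constexpr int kMinPulseUs = 544;
constexpr int kMaxPulseUs = 2400;

// Convert a target angle in degrees to the pulse width in microseconds,
// clamping to the MG996R's roughly 180-degree travel.
int angleToPulseUs(int deg) {
    if (deg < 0) deg = 0;
    if (deg > 180) deg = 180;
    return kMinPulseUs + (kMaxPulseUs - kMinPulseUs) * deg / 180;
}
```

On the Arduino itself, `Servo::write(deg)` performs this conversion internally; the function above just makes the mapping visible.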
Fig. 8. NRF24L01 Wireless Module

TABLE I. ACCESS PORT SETTINGS

Parameter       | Value
Baud rate       | 9600 bits/second
Parity bit      | None
Stop bit        | 0
Send format     | Hex
Receive format  | Char

IV. SOFTWARE IMPLEMENTATION

In this project, we suggest most of the procedure work and develop the responsible master program that controls the arm. The entire procedure is divided into two parts.

A. TRAINING THE VOICE MODULE

In the Arduino UNO program, a library is added so that the voice recognition sensor can be trained. The following steps are followed:
1. Connect the voice module sensor to the Arduino Uno through the USB port of the laptop, with the settings shown in Table I.
2. Open the Arduino IDE environment.
3. Via the voice sensor library, open the sensor-training window.
4. Write the following instructions:
   - Train: for example, "train 15", then say "hello" into the microphone of the voice sensor when the LED signal appears, and repeat the process twice to confirm the word.
   - Load: "load 1 2 3 4 5 6 7" to conduct the test process for the recorded voices.
   - Clear: deletes the recorded voices for re-recording.
The sensor used in the project can hold approximately forty voice records.

Fig. 9. Voice sensor training with Arduino UNO

Figure 9 presents the circuit simulation for connecting the Arduino and the voice module. The simulation is made using the Fritzing software. The TX and RX pins of the Arduino are connected to the TXD and RXD pins of the voice module to establish serial communication, and the Arduino is connected to a laptop or a PC.

B. IDENTIFICATION PHASE

This is the stage in which voice commands are examined after completing the registration of all groups, as shown in Fig. 10.

Fig. 10. Identification of commands

After completing the recording process of the voice prompts, each group is imported separately and each command is tested. To test the commands, the user speaks to send voice commands through the microphone. If a command is correct, the UI (User Interface) shows the serial number of its tag; for example, "hello" carries tag 13, so when the user sends that audio prompt through the microphone, the user interface should show the result 13. This means that the command recording and storage have been completed successfully. The DSP reduces the impact of noise on the recording process to a large extent: it uses a high-frequency filter that removes noise and estimates the signals to compensate for attenuation caused by noise and other sounds.
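The train/identify cycle above can be modeled as a small tag store: training binds a word to a tag number, and identification reports that number back (as in the "hello" maps to 13 example). The interface below is illustrative only; Table I fixes just the serial-link settings, not the module's command set.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>

// Minimal model of the voice module's train/identify behavior. Tag numbers
// are arbitrary here; the real module assigns them during training.
class VoiceStore {
    std::map<std::string, uint8_t> tags_;  // trained word -> stored tag
public:
    // Record a word under a tag number (the "train" step).
    void train(const std::string& word, uint8_t tag) { tags_[word] = tag; }

    // Return the tag the UI would display, or -1 if the word is unknown
    // (the "identification phase" check).
    int identify(const std::string& word) const {
        auto it = tags_.find(word);
        return it == tags_.end() ? -1 : it->second;
    }
};
```

For example, after `train("hello", 13)`, a successful identification of "hello" reports 13, mirroring the UI behavior described above.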
V. RESULTS AND DISCUSSION

In this project, fourteen words have been selected, arranged in five groups that control nine phonemic signals: "goodbye", "hand", "move", "base", "hello", "up", "down", "right", "left", "stop", "catch", "open", "revers". They define how this robotic arm works, as summarized in Table II. The accuracy of operation depends on various factors such as noise, the environment, and the work site. To evaluate the performance of the voice recognition system provided for commanding the robotic arm, the recorded instructions are divided into two groups: a group for training the voice sensor and a group for testing the recording.

Each of the fourteen words (thirteen plus the volume key) is trained fifteen times to create a sufficient number of vectors within the SRAM of the speech system for the different states of the experimental work; therefore, we create many templates. In test mode, each of the fourteen words is tested at least 40 times at system time. Test results are taken for four different speakers. The measured recognition rates and real-time response times are summarized in Table III.

TABLE II. VOICE COMMAND ACTIONS

Group | Voice Command | Condition
1     | Hello         | Security key to enter all groups
2     | Base          | Left, Right
3     | Move          | Forward, Revers, Up, Down
4     | Hand          | Catch, Open
5     | Good-by       | Close all groups and return to the starting point

TABLE III. RESPONSE TIME AND RECOGNITION RATE

Command  | Recognition rate (successful recordings / total attempts) | Response time (seconds)
Left     | 89 % | 3 s
Right    | 89 % | 3 s
Forward  | 90 % | 4 s
Revers   | 90 % | 4 s
Up       | 90 % | 4 s
Down     | 90 % | 4 s
Stop     | 92 % | 4 s
Open     | 92 % | 3 s
Catch    | 92 % | 4 s
Release  | 92 % | 3 s

VI. THE ALGORITHM FOR MOTION CONTROL

The flow chart in Fig. 11 lists the algorithm of operation of the arm, starting from the first word, "hello", which prepares the robot. This word is the main key and a source of protection for the arm against unauthorized persons. Four servomotors were used, programmed to correspond to the movement of each joint attached to the arm according to the voice command directed to it. The arm movement can be controlled ("right-left", "up-down", "forward-revers", "catch-open"), where the movement of each servo motor is separate from the others in its action and angles according to the design of the robot. After all the voice commands have been applied in a successful operation of the robot, the voice command "goodbye" is issued to exit and return the robotic arm to its normal position.

VII. CONCLUSIONS

The design of an effective speech recognition system for isolated voice commands to control a robotic arm serving a large group of workers in dangerous places is the focus of this paper. Creating five groups means that each group has isolated, pre-defined prompts in the database that are tagged and partitioned to match the test words at runtime instead of storing the entire speech signal, saving memory space and making the matching process very fast. The processing units (speech group and microcontrollers) are connected directly to the robotic arm in one package, which makes the design an intelligent and fully independent arm that can be controlled remotely at up to one kilometer thanks to the NRF sensor, which carries the voice transmission wirelessly. The voice sensor has been tested to demonstrate its performance in generating precise arm movement.

Future work can add other sensors to increase the benefit, such as temperature and inert-gas sensors in factories and industrial workshops, or moisture sensors in agricultural sites.

REFERENCES
[1] P. Naik, Adwait, and Abraham, "Arduino based Voice-controlled Robotic Arm," arXiv preprint arXiv:2001.01222, 2020.
[2] K. Harish et al., "Pick and place robotic arm using Arduino," International Journal of Science, Engineering and Technology Research (IJSETR), 6(12), 1568-1573, 2017.
[3] D. Diwakar, A. Choudhary, A. Singh, and N. Fatima, "Voice Controlled Robotic Vehicle," 2019.
[4] B. Jolad, M. Arora, R. Ganu, and C. Bhatia, "Voice-controlled robotic vehicle," 2017.
[5] A. Paswan, A. K. Gautam, and B. Vimal, "Voice-controlled robotic vehicle," 2014.
[6] A. A. Abed, "Design of Voice Controlled Smart Wheelchair," International Journal of Computer Applications, 131(1), 32-38, 2015.
[7] A. A. Abed and A. A. Jasim, "Design and implementation of a wireless voice-controlled mobile robot," Al-Qadisiyah Journal for Engineering Sciences, 9(2), 135-147, 2016.
[8] P. H. Pande, P. V. Saklecha, P. D. Pawar, and A. N. Shire, "Arduino Based Robot for Pick and Place Application," International Journal of Electronics, Communication and Soft Computing Science & Engineering (IJECSCSE), 23-27, 2018.
[9] M. V. Chikhale, M. R. Gharat, M. S. Gogate, and M. R. Amireddy, "A voice-controlled robotic system using Arduino microcontroller," International Journal of New Technology and Research, 3(4), 2017.
[10] K. Mostaque and B. Karmakar, "Low-Cost Arduino Based Voice Controlled Pick and Drop Service with Movable Robotic Arm," European Journal of Engineering Research and Science, 1(5),
29-33, 2016.
[11] T. B. Pulikottil et al., "A voice control system for assistive robotic arms: Preliminary usability tests on patients," 2018 7th IEEE International Conference on Biomedical Robotics and Biomechatronics (Biorob), IEEE, 2018.
[12] V. K. Sharma, K. Saluja, V. Mollyn, and P. Biswas, "Eye Gaze Controlled Robotic Arm for Persons with Severe Speech and Motor Impairment," in ACM Symposium on Eye Tracking Research and Applications, pp. 1-9, 2020.

Fig. 11. Flow chart for motion control of the voice-controlled Robotic Arm
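As a closing sketch, the "hello"/"goodbye" session logic of the motion-control algorithm can be modeled as a tiny state machine: "hello" unlocks the arm, "goodbye" homes the joints and locks it again, and motion words received while locked are ignored. The 90-degree rest pose and the example joint steps are assumptions, not values from the paper.

```cpp
#include <cassert>
#include <string>

// Illustrative model of the security-key flow: the arm only accepts motion
// commands between "hello" (unlock) and "goodbye" (home and lock).
struct ArmSession {
    static const int kJoints = 4;
    int angles[kJoints] = {90, 90, 90, 90};  // assumed rest pose
    bool active = false;                     // unlocked by "hello"

    // Returns true if the command changed the arm state.
    bool handle(const std::string& word) {
        if (word == "hello") { active = true; return true; }
        if (!active) return false;           // locked: ignore everything else
        if (word == "goodbye") {             // home all joints, then lock
            for (int i = 0; i < kJoints; ++i) angles[i] = 90;
            active = false;
            return true;
        }
        // Two example motion words; the full set would mirror Table II.
        if (word == "up")   { angles[1] += 10; return true; }
        if (word == "down") { angles[1] -= 10; return true; }
        return false;                        // unrecognized word
    }
};
```

The lock-by-default behavior is what gives the arm its protection against unauthorized operators, as described in Section VI.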