Multitasking Voice-Controlled Robot
MSA
October University for Modern Science & Arts
Faculty of Engineering
Department of Mechatronics System Engineering
A Graduation Project
Submitted in Partial Fulfillment of B.Sc. Degree
Requirements in Mechatronics System Engineering
(Part II)
Prepared By
Supervised By
Spring 2024
Abstract
This study illustrates the concept of using voice-recognition technology to command
and manage robots, with the aim of handling workplace tasks with ease and creativity.
The design utilizes several techniques, such as avoiding obstacles, navigating to a
specific point via line following, and carrying objects and placing them in the
desired spot with a robotic arm, among other capabilities.
Accordingly, we built a robot that recognizes a wide variety of spoken phrases; with
the right materials and design it carries tools for several field jobs, is programmed
to perform the instructions well, and serves as a prototype to inspire real field upgrades.
Acknowledgement
Praise be to Allah, the Most Gracious. Thanks to our professors and supervisors, we were
able to complete this project with their extensive knowledge and professional experience
in a variety of sectors. This endeavor would not have been possible without their help
and guidance.
Special thanks to Dr. Mohamed Shibat for his support, patience, thoughtfulness,
and for guiding us through the whole project. We also want to thank our families and
friends for supporting us and for tolerating our tempers when things got harsh. We have
come a long way and still have a way to go, so this is to thank them and ask
them to keep supporting us for the rest of the road.
Thanks to our university, MSA, and all the faculty members for their constant support
and follow-up; they work continually to improve the courses and programs.
Finally, we owe a debt of gratitude to the engineering team and our colleagues for their
invaluable assistance.
Table of Contents
Abstract...............................................................................................................................i
Acknowledgement..............................................................................................................i
Table of Contents.............................................................................................................iii
List of Figures...................................................................................................................iv
List of Tables....................................................................................................................vi
List of Abbreviations......................................................................................................viii
Activity Work Contribution.............................................................................................ix
Report Writing Contribution..........................................................................................viii
Graduation Project II – Time Plan.....................................................................................x
Chapter I Introduction...............................................................................1
1.1 Background of the Study........................................................................1
1.2 Problem Definition..................................................................................2
1.3 Project Aim and Objectives....................................................................2
Chapter II Literature Background............................................................3
2.1 Preface.....................................................................................................................3
2.2 Manipulator Design................................................................................................3
2.3 Motor Selection.......................................................................................................5
2.4 Sensor Integration...................................................................................................5
2.5 Microcontroller and Motor Driver..........................................................................6
2.6 Control algorithms..................................................................................................7
Chapter III Mechanical Design................................................................11
3.1 Preface...................................................................................................................11
3.2 Material Selection.................................................................................................11
3.3 Design and Specifications.....................................................................................12
3.4 Mecanum Wheels Modelling................................................................................13
3.5 Robotic Arm Modelling........................................................................................16
Chapter IV Electric Circuit Design.........................................................24
4.1 Motors...................................................................................................................24
4.2 Sensor Fusion........................................................................................................27
4.3 Modules.................................................................................................................29
4.4 Wiring Diagram....................................................................................34
Chapter V Control Algorithms................................................................36
5.1 Preface...................................................................................................................36
5.2 Speech-Recognition...............................................................................37
5.3 Line Following......................................................................................................38
Chapter VI Implementation Process.....................................39
6.1 Preface...................................................................................................................39
6.2 Manufacturing process..........................................................................................39
6.3 Assembly...............................................................................................................40
List of Figures
List of Tables
List of Abbreviations
AI Artificial Intelligence
DOF Degrees of Freedom
PWM Pulse Width Modulation
I/O Input and Output
CAD Computer Aided Design
IR Infrared
JARVIS Just A Rather Very Intelligent System
GPIO General Purpose Input/Output
ANN Artificial Neural Network
ASTM American Society for Testing and Materials
WMR Wheeled Mobile Robot
DH Denavit-Hartenberg
ROS Robot Operating System
GUI Graphical User Interface
SLAM Simultaneous Localization and Mapping
RNN Recurrent Neural Network
CTC Connectionist Temporal Classification
GRU Gated Recurrent Unit
IMU Inertial Measurement Unit
LiDAR Light Detection and Ranging
Week | Meetings | Work Done | Students
01 | Dr. Mohamed Shibat | Proposal | Mohamed Ahmed Abdelhay; Ali Mohamed Ali
02 | Dr. Mohamed Shibat | Literature Review Part 1 | Mohamed Ahmed Abdelhay; Ali Mohamed Ali
03 | Dr. Mohamed Shibat | Literature Review Part 2 | Mohamed Ahmed Abdelhay; Ali Mohamed Ali
04 | Dr. Mohamed Shibat | Mechanical Design | Mohamed Ahmed Abdelhay; Ali Mohamed Ali
05 | Dr. Mohamed Shibat | Stress Analysis of Mechanical Design | Mohamed Ahmed Abdelhay; Ali Mohamed Ali
12-14 | Dr. Mohamed Shibat | Control algorithm implementation | Mohamed Ahmed Abdelhay; Ali Mohamed Ali
14-18 | Finals | Simulation | Mohamed Ahmed Abdelhay; Ali Mohamed Ali
Chapter V Control Algorithm
5.1 Preface Mohamed Abdelhay
5.2 Voice-recognition Mohamed Abdelhay
Chapter VI Implementation Process
6.1 Preface Mohamed Abdelhay
6.2 Manufacturing Process Mohamed Abdelhay
6.3 Assembly Mohamed Abdelhay
Chapter VII Control Algorithm
7.1 Preface Mohamed Abdelhay
7.2 Analysis Mohamed Abdelhay
Chapter VIII Testing and Results
8.3 Obstacle Avoidance Mohamed Abdelhay
8.4 Navigation and Path Planning Mohamed Abdelhay
Chapter IX Conclusions and Future Plan
9.1 Conclusions Ali Mohamed
9.2 Future work plan Ali Mohamed
[Gantt chart: the months October through January, each divided into Weeks 1-4,
scheduling the tasks: Introduction, Literature Review, Mechanical Design, Electric
Circuits, Control algorithm, Results and Discussion, Conclusion and Future Work,
Book, and Report Writing.]
Chapter I
Introduction
1.1 Background of the study:
Workers in many fields often need low- and high-level assistance, whether to
contribute to the task at hand or to wrap up other tasks and so spare time for
other things. Hence, for decades mobile robots have been commonly utilized on
construction sites, in warehouses, and in industrial plants, not to mention in
material-handling applications, which are gaining popularity.
That is why it was necessary to develop precise methods of controlling and
directing robotic devices in multiple ways. This is where voice-controlled robots
come into the light: voice is humans' most common mode of communication, and almost
every exchange is carried out using speech signals. Sounds and speech can be
converted into electrical signals; voice-recognition techniques then convert these
voice signals into text or into instructions for a device or computer. Such a
system can be used to control the robot and generate speech acknowledgment, and
speech-controlled robots comprehend and carry out the required actions using
hundreds of speech instructions.
1.2 Problem definition:
Managing a work field's tasks and objects with versatility and speed is
recommended to organize the environment for better production, using the simplest
possible way of controlling the robot to get the job done.
The aim is to design, implement, and test a versatile mobile robot capable of
navigating and performing several tasks by recognizing human spoken commands, so
the job gets done smoothly and with more creativity. This can be achieved by the following:
Fig. 2.1: Lynxmotion Lynx 6 (Kabir et al., 2011)
According to Y. Choi et al. (2008), the gripper has force-torque sensors, which
improve its ability to precisely grasp and manipulate objects, and the manipulator is
mounted on a vertical lift to extend its vertical reach. Additionally, it locates and
engages with objects in its surroundings using its sensors and cameras. It advances in
the direction of a predetermined 3D point, locates surrounding objects with the laser
range finder, and then releases its gripper to manipulate them. Should it be unable to
successfully grasp an object the first time, it will try up to four times before giving up.
Deciding how many degrees of freedom (DOF) are needed for a given application is a
critical part of manipulator design. The authors utilize a small five-degree-of-freedom
robotic arm; finding a DOF count that balances precision and flexibility has become a
common goal in manipulator design.
The choice of actuators is another aspect of the manipulator's design. Because of their
high torque-to-weight ratio and precise control via pulse-width modulation (PWM),
servo motors are widely used. The servos in the reference are the HS-422 and HS-81. These
motors are frequently used by manipulators because of their small size and PWM
control compatibility.
Inverse kinematics is often used in manipulator design for voice-controlled systems to
determine the joint angles needed to achieve a given end-effector position and
orientation. According to Kabir et al. (2011), joint angles are calculated using an
inverse kinematics method based on desired locations.
When it comes to comprehending how robotic arms move, kinematics analysis is
essential because it sheds light on the connections between joints, links, and the location
and orientation of the end-effector. The paper by Tahseen F. Abaas et al. (2020) that
describes the kinematics analysis of a 5-DOF robotic arm employing forward and
inverse kinematics is the main subject of this review. A MATLAB GUI program is used
to illustrate how they integrated the geometric approach and the DH method for both
forward and inverse kinematics analysis.
Fig. 2.2. The coordinate frame of the robotic arm (Tahseen F. Abaas et al. (2020))
2.3 Motor selection for robotics
Saleh Ahmed et al. (2021) highlight the fundamental importance of proper motor
sizing in robot design. Various studies emphasize the need to calculate the required
motor torque and speed to achieve the desired robot performance, and it is recognized
that inadequate motor sizing can result in diminished efficiency and stability and in
increased cost of robot platforms.
DC motors are commonly utilized in robotics because of their controllability and
suitability for different applications. The authors discussed the selection of a
particular DC motor (GM25-370CA) based on the calculated motor torque and
rotational-speed requirements. Researchers typically consider factors such as motor
type, efficiency, power output, and compatibility with the robot's planned tasks.
Furthermore, as outlined by Dr. Ali Abed et al. (2015), the two servo motors used in
the robot's chassis are essential for controlling its motion. The L298 motor driver, a
dual H-bridge driver, plays a crucial role in regulating the speed and direction of
these motors. Research on motor drivers in robotic applications underscores their
significance in motor control, providing a dependable means of translating electrical
signals into mechanical motion (User's Guide, 2012). In our case, however, we would
rather use a microcontroller with the motor driver connected to it.
As highlighted by Ms. Sarada et al. (2023), the Arduino ATmega328/P microcontroller
serves as the central processing unit of their voice-controlled robot thanks to its
flexibility and ease of use in robotics applications. Researchers often leverage
Arduino for its extensive I/O capabilities, programmability, and compatibility with
various sensors and modules.
Additionally, the integration of hardware components in robotics is a fundamental
aspect of system design. The reference highlights the use of the Raspberry Pi 3 as the
core hardware component for controlling the robotic arm. The Raspberry Pi's small size,
cost-effectiveness, and processing capabilities have made it a popular choice in the field
of robotics (Daniel and et al. (2018)). While the authors focus on this specific
implementation, some others have numerous examples of Raspberry Pi-based robotic
systems. such object recognition and tracking, showcasing the platform's versatility.
Efficient control algorithms are vital for ensuring precise and accurate robotic-arm
movements. It was also mentioned that the controller module issues low-level
instructions to drive the arm's servo motors; various control strategies are employed
in robotic systems (Daniel et al., 2018).
Raspberry pi was also used by Altayeb & Al-Ghraiah (2022). The authors explained
that employing Raspberry Pi in a voice-controlled robot presents a multitude of
advantages, revolutionizing the way robots interact with their environments. The
compact yet powerful nature of Raspberry Pi allows for a seamless integration of
sophisticated voice recognition and processing capabilities. Its low-cost, open-source
architecture makes it an accessible platform for developers and hobbyists alike,
fostering innovation and customization in the realm of robotics. Raspberry Pi's GPIO
(General Purpose Input/Output) pins enable the connection of various sensors and
actuators, enhancing the robot's responsiveness to voice commands.
Fig. 2.5. Suggested overall software system (Dr. Ali Abed and et al. (2015))
Using ANNs is another strategy that we will most likely use in our project. Artificial
Neural Networks (ANNs) are a supervised machine-learning algorithm frequently used to
recognize patterns in signals and images. In their paper, Rendyansyah et al. (2022)
described how an ANN is made up of an input layer, a hidden layer, and an output
layer. During training, the network's weights are adjusted after each comparison with
the training data until the machine-learning outcome matches the intended output.
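To make the layered structure concrete, the following minimal Python sketch (our own illustration, not taken from Rendyansyah et al.) runs the forward pass of a tiny network with one hidden layer whose weights have been hand-picked to reproduce the classic XOR pattern; in a real ANN these weights would be learned by the training loop described above rather than fixed.

```python
import math

def sigmoid(z):
    """Logistic activation squashing any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def ann_forward(x1, x2):
    """Forward pass: input layer -> 2 hidden neurons -> 1 output neuron.
    The weights are hand-picked so the network reproduces XOR."""
    h1 = sigmoid(20 * (x1 + x2) - 10)   # fires when at least one input is 1
    h2 = sigmoid(20 * (x1 + x2) - 30)   # fires only when both inputs are 1
    return sigmoid(20 * h1 - 20 * h2 - 10)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(ann_forward(a, b)))  # -> 0, 1, 1, 0 (XOR)
```

Training would replace the hard-coded constants by repeatedly nudging each weight in the direction that reduces the output error, which is exactly the comparison-and-adjustment loop the paragraph above describes.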
In addition to strength, modulus of elasticity, and shear modulus, the density and
section properties are needed as input values in the design process. These properties
have been determined for Finnish plywood by VTT (Technical Research Centre of
Finland) in cooperation with the plywood producers.
The chassis is designed to withstand the weight of the object carried by the robotic
arm as well as the weight of the components. It consists of two main parts, an upper
part and a lower part. The base part has space to hold the electrical components as
well as the motors from the bottom. The top part of the chassis is supported on the
bottom part by side walls fastened together with a press-fit technique plus screws
and nuts; it is also supported on a column fastened to the base, under the center of
the robotic-arm base, to give it extra strength. The figure shows the 3D model of
the robot. The dimensions of the 3D model are 500x300x60 mm, excluding the robotic
arm's length when fully extended, which will be described in detail in the
robotic-arm section.
The parts of the chassis are made of 3 mm thick stainless-steel sheet, cut and
machined to the design of each part provided in the appendices. The following
table summarizes the specifications of the chassis.
In the analysis stage, ASTM A240 material is used and the applied force is 500 N,
roughly twice the actual force the robot will be exposed to. The body is subjected
to forces in different orientations: the weight of the body and components acting
vertically downwards on the chassis, and the components of the lifted object's force
resolved along the axes of the robotic arm. Each part is analyzed separately.
Fig. 3.3. Stress analysis of the top part
A newer wheel used in wheeled mobile robots (WMRs) is the so-called omnidirectional
mecanum wheel. It consists of a hub and a number of rollers installed on the
circumference of the hub. The number of rollers varies with the design; all are free
to rotate around their own axes, each of which makes an angle of 45° with the axis
of rotation of the wheel itself.
Fig. 3.9. Model of a mecanum wheel with graphically marked angle
(Hendzel & Rykała, 2017)
The wheeled mobile robot consists of a chassis, four mecanum wheels and four
independently controlled direct-current motors. The mecanum wheels are in turn
rigidly placed on the gearboxes fixed to the shafts of the motors. The 3D model of
the mentioned WMR is presented in the figure.
Fig. 3.11. Analytical model of WMR with mecanum wheels (Hendzel & Rykała, 2017)
The description of kinematics of the WMR in the case under consideration can also be
presented in the form of the following equations.
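Those kinematic relations can be sketched in a few lines of Python. The function below implements the standard inverse kinematics of a four-mecanum-wheel WMR, mapping a desired body velocity to the four wheel angular velocities; the symbols lx, ly (half the wheelbase and half the track) and r (wheel radius) follow the usual convention, and the sample values are illustrative, not our robot's measured dimensions.

```python
def mecanum_wheel_speeds(vx, vy, wz, lx, ly, r):
    """Inverse kinematics of a 4-mecanum-wheel robot.

    vx, vy: desired body velocity [m/s] (x forward, y to the left)
    wz:     desired yaw rate [rad/s]
    lx, ly: half wheelbase / half track [m]; r: wheel radius [m]
    Returns wheel angular velocities [rad/s] in the order
    (front-left, front-right, rear-left, rear-right).
    """
    k = lx + ly
    w_fl = (vx - vy - k * wz) / r
    w_fr = (vx + vy + k * wz) / r
    w_rl = (vx + vy - k * wz) / r
    w_rr = (vx - vy + k * wz) / r
    return w_fl, w_fr, w_rl, w_rr

# Pure forward motion: all four wheels spin at the same rate.
print(mecanum_wheel_speeds(0.5, 0.0, 0.0, lx=0.15, ly=0.25, r=0.03))
```

Setting only vy nonzero makes diagonally opposite wheels spin together with opposite signs on each side, which is exactly the roller arrangement that lets the platform strafe sideways.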
As for the robotic arm, as previously mentioned the design is a 5-DOF robotic arm
with a two-finger gripper as the end effector, to grab small to medium objects and to
provide flexibility and good end-effector orientation. A servo motor is attached to
each link and connected to the main microcontroller, and the arm is made from
durable 3D-printed plastic, which serves the purpose.
As for the waist, it carries the first arm link; it is installed on the base with a
1 mm clearance so it can rotate freely. Note that it also has a slot behind the
vertical hole for the servo that is attached to the arm to control it.
There are two arm links, to cover a good gripping distance; each could rotate 180
degrees freely were it not for the collision that would occur due to the arm's
position on the vehicle.
Fig. 3.15. Link drawing in SolidWorks
Now for the end effector, the gripper: two small gears mesh with each other, one of
them driven by a small servo attached to the gripper base. Each gear holds a
gripping finger, and the pair opens and closes through 90 degrees.
Furthermore, kinematics analysis is an essential concept in robotics. Forward and
inverse kinematics serve distinct purposes in understanding and controlling the
movement of robotic systems, and both are needed for designing, programming, and
controlling those systems, enabling them to perform a wide range of tasks in diverse
applications.
The next section is for reviewing kinematics analysis modelling, using both standard
equations and MATLAB software to illustrate it more.
The Denavit-Hartenberg (DH) parameter method is one of the most widely used methods
in forward kinematics; it represents the relationship between the joint coordinates
of two consecutive links, as shown in the next table.
Table 2.2. DH parameters of kinematics
Fig. 3.17. Representation model of robotic arm in MATLAB software using Robotics
Toolbox
Using transformation matrices, these parameters may be utilized to characterize the
relative location and orientation of the links.
For the Denavit-Hartenberg parameters of the 5-DOF manipulator, where
$C_i=\cos q_i$, $S_i=\sin q_i$, $C_{\alpha i}=\cos\alpha_i$, $S_{\alpha i}=\sin\alpha_i$,
the general link transformation is

$$T_i=\begin{bmatrix}
C_i & -C_{\alpha i}S_i & S_{\alpha i}S_i & a_iC_i\\
S_i & C_{\alpha i}C_i & -S_{\alpha i}C_i & a_iS_i\\
0 & S_{\alpha i} & C_{\alpha i} & d_i\\
0 & 0 & 0 & 1
\end{bmatrix}$$

$$T_1=\begin{bmatrix}
C_1 & 0 & -S_1 & 0\\
S_1 & 0 & C_1 & 0\\
0 & -1 & 0 & d_1\\
0 & 0 & 0 & 1
\end{bmatrix},\quad
T_2=\begin{bmatrix}
C_2 & -S_2 & 0 & a_2C_2\\
S_2 & C_2 & 0 & a_2S_2\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{bmatrix}$$

$$T_3=\begin{bmatrix}
C_3 & -S_3 & 0 & a_3C_3\\
S_3 & C_3 & 0 & a_3S_3\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{bmatrix},\quad
T_4=\begin{bmatrix}
C_4 & 0 & S_4 & 0\\
S_4 & 0 & -C_4 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 1
\end{bmatrix}$$

$$T_5=\begin{bmatrix}
C_5 & -S_5 & 0 & 0\\
S_5 & C_5 & 0 & 0\\
0 & 0 & 1 & d_5\\
0 & 0 & 0 & 1
\end{bmatrix}$$

From all the previous transformation matrices, the end-effector transformation matrix
is $T_e = T_1T_2T_3T_4T_5$, where $C_{23}=\cos(q_2+q_3)$,
$C_{234}=\cos(q_2+q_3+q_4)$, and similarly for $S$:

$$T_e=\begin{bmatrix}
C_1C_{234}C_5-S_1S_5 & -C_1C_{234}S_5-S_1C_5 & C_1S_{234} & C_1(a_2C_2+a_3C_{23}+d_5S_{234})\\
S_1C_{234}C_5+C_1S_5 & -S_1C_{234}S_5+C_1C_5 & S_1S_{234} & S_1(a_2C_2+a_3C_{23}+d_5S_{234})\\
-S_{234}C_5 & S_{234}S_5 & C_{234} & d_1-a_2S_2-a_3S_{23}+d_5C_{234}\\
0 & 0 & 0 & 1
\end{bmatrix}$$
The solution of the inverse kinematics problem reduces to a search for the arguments
$q_1, q_2, q_3, q_4, q_5$ based on the gripper's position and orientation. These six
equations in five unknowns may have no solution; however, denoting the 4th-column
elements of the $T_e$ matrix by $P_{5x}$, $P_{5y}$, $P_{5z}$ respectively, the first
joint angle can be written as

$$q_1=\tan^{-1}\!\left(\frac{P_{5y}}{P_{5x}}\right)$$

Note that this solution assumes $P_{5x}>0$ and $-\pi/2 < q_1 < \pi/2$.
By representing all this on MATLAB using Graphical User Interface (GUI) tool, the
program's input, as demonstrated, was the robotic arm's end-effector's location; its
outputs, on the other hand, were the arm's joints' angles and positions. Additionally, a
3D simulation of the robotic arm's motion was displayed in the GUI.
Fig. 3.19. The GUI program calculates forward and inverse kinematics variables.
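The same forward-kinematics chain can be sketched outside MATLAB. The short Python program below builds each DH link transform and multiplies them in order, using the DH rows implied by T1 through T5 above; the numeric link lengths are placeholders for illustration, not the manufactured arm's dimensions.

```python
import math

def dh_matrix(theta, d, a, alpha):
    """Single Denavit-Hartenberg link transform as a 4x4 nested list."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ct, -ca * st,  sa * st, a * ct],
            [st,  ca * ct, -sa * ct, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0]]

def mat_mul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(q, d1, a2, a3, d5):
    """End-effector pose of the 5-DOF arm; q holds the five joint angles."""
    # DH rows (theta, d, a, alpha) matching T1..T5 in the text.
    rows = [(q[0], d1, 0.0, -math.pi / 2),
            (q[1], 0.0, a2, 0.0),
            (q[2], 0.0, a3, 0.0),
            (q[3], 0.0, 0.0, math.pi / 2),
            (q[4], d5, 0.0, 0.0)]
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for row in rows:
        T = mat_mul(T, dh_matrix(*row))
    return T

# At the zero pose the gripper sits at x = a2 + a3, z = d1 + d5.
T = forward_kinematics([0.0] * 5, d1=0.10, a2=0.12, a3=0.10, d5=0.06)
print(round(T[0][3], 3), round(T[1][3], 3), round(T[2][3], 3))

# First inverse-kinematics step: recover q1 from the position column.
Tq = forward_kinematics([0.5, 0.3, -0.2, 0.1, 0.0], 0.10, 0.12, 0.10, 0.06)
print(round(math.atan2(Tq[1][3], Tq[0][3]), 3))  # -> 0.5
```

The last two lines illustrate the q1 formula from the inverse-kinematics section: because the x and y position components share the factor (a2 C2 + a3 C23 + d5 S234), their ratio isolates tan(q1).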
Chapter IV
Electric Circuit Design
4.1 Motors
4.1.1 DC Motor
Motor sizing is an important procedure for finding the optimal motors for the
required tasks.
For the wheels, which carry the whole mobile robot, the silver metal-geared DC motor
will do the job, as it incorporates metal gears in its construction, enhancing the
motor's durability, strength, and resistance to wear and tear.
There are several versions with different specifications; the one used here operates
on 6 V, has a torque of 3.2 kg·cm, and a rotation speed of up to 133 RPM.
It is more robust and resistant to wear than plastic-geared alternatives, and its
higher strength and torque capability allow the motor to handle more substantial
loads and offer better precision and reliability, not to mention stable and smooth
rotation.
So, it is concluded that the motor is suitable for lifting the arm links and the
load it should carry, making it functional relative to the unit size.
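As a back-of-the-envelope check of that sizing claim, the sketch below (our own illustration; the mass, wheel radius, acceleration, and rolling-resistance figures are assumed, not measured) estimates the torque each of the four wheel motors must deliver and compares it with the 3.2 kg·cm rating.

```python
G = 9.81  # gravitational acceleration [m/s^2]

def required_wheel_torque_kgcm(total_mass_kg, wheel_radius_m,
                               accel_ms2, rolling_coeff=0.02, n_wheels=4):
    """Torque per wheel [kg.cm] to accelerate the robot on flat ground.

    Traction force = mass * (acceleration + rolling-resistance term),
    shared equally by the driven wheels; torque = force * wheel radius.
    """
    force_n = total_mass_kg * (accel_ms2 + rolling_coeff * G)
    torque_nm = force_n * wheel_radius_m / n_wheels
    return torque_nm * 100.0 / G  # convert N.m -> kg.cm

# Assumed: 3 kg robot, 3 cm wheel radius, 0.5 m/s^2 target acceleration.
t = required_wheel_torque_kgcm(3.0, 0.03, 0.5)
print(round(t, 3), "kg.cm per wheel, versus the 3.2 kg.cm rating")
```

Under these assumptions the demand is well under the rating, leaving a comfortable margin for slopes, carpet, and the added payload.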
4.2.2 IR Sensor
This infrared obstacle/object-detection sensor is quite simple to use. A
potentiometer is included to adjust the sensitivity. The output is a digital signal,
so it is easy to interface with any microcontroller such as the Maker UNO, the Mega,
or even the Raspberry Pi. Using infrared reflection, the sensor provides quick,
easy, non-contact obstacle detection. Because it relies on light reflection,
different surfaces yield varied detection results, and any infrared source can
interfere with the detection.
Fig. 4.5. IR sensor
At the front of the module are the infrared emitters and receivers. When an object
is in the sensor's path, it reflects the emitted infrared light, which is picked up
by the receiver and passed through an on-board comparator circuit. The output pin
goes logic LOW when the adjustable threshold is crossed, and the green LED
illuminates to signal detection. The sensitivity and detection range can be further
increased by rotating the on-board potentiometer clockwise. The module is compatible
with both 3.3 V and 5 V power input, making it well suited to obstacle avoidance.
4.3 Modules
Hence comes the role of the Arduino Mega2560 microcontroller, which is assigned here
to control the DC motors, sensors, and servos. The main reason for this choice is
that it has a large number of digital general-purpose input/output pins and analog
pins, which are needed for the many motors.
Fig. 4.7. Arduino Mega2560
The L298N motor driver module is a high-power driver for DC and stepper motors. It
consists of an L298 motor driver IC and a 78M05 5 V regulator. The module can
control up to four DC motors, or two DC motors with direction and speed control.
Fig. 4.8. L298N Motor Driver Module
It operates on 5-24 V at up to 2 A with a maximum power of 25 W; each motor channel
uses two direction input pins, an output pair, and an enable pin for PWM control.
In addition, it is provided with small heat sinks and over-temperature protection.
Since each driver will be attached to two DC motors, the four-motor design requires
two drivers.
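In terms of those pins, driving one channel amounts to setting the two direction inputs and a PWM duty on the enable pin. The hypothetical helper below maps a signed speed percentage to those three values; the IN1/IN2/EN naming follows the usual L298N convention, and the 8-bit duty range matches Arduino's analogWrite.

```python
def l298n_channel(speed_percent):
    """Map a signed speed (-100..100) to (IN1, IN2, 8-bit EN duty).

    Positive speed -> IN1 HIGH / IN2 LOW (forward),
    negative speed -> reversed polarity, zero -> coast.
    """
    s = max(-100, min(100, speed_percent))  # clamp out-of-range requests
    duty = int(255 * abs(s) / 100)          # PWM duty on the enable pin
    if s > 0:
        return 1, 0, duty
    if s < 0:
        return 0, 1, duty
    return 0, 0, 0

print(l298n_channel(50))    # -> (1, 0, 127)
print(l298n_channel(-100))  # -> (0, 1, 255)
```

A microcontroller sketch would write the first two values to the direction pins with digital writes and the third to the enable pin with a PWM write, one call per motor channel.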
4.3.3 Raspberry Pi 4
Now for the mind of the whole system, which coordinates all actions: the main
controller. The one used here is the Raspberry Pi 4 single-board computer, which
provides the following features that make it a good candidate for such a project.
With its overall performance, you can work on a Raspberry Pi 4 much as you would on
a standard desktop computer. Naturally, this depends on the operating system and
software in use, but for a small single-board computer it represents a major overall
power boost. The Raspberry Pi 4's sole disadvantage is that it runs much warmer than
its predecessors.
For RAM it offers options of 2 GB, 4 GB, or 8 GB of LPDDR4-3200 SDRAM.
As for USB and networking, it provides two USB 3.0 ports and two USB 2.0 ports.
While storage is still handled by an SD card, the team added USB 3.0 to make hooking
up SSDs faster.
The Raspberry Pi 4 B offers 802.11ac Wi-Fi, the same as the Raspberry Pi 3 B+, but
sports Bluetooth 5.0. To improve PCB routing, the Raspberry Pi team moved the
Gigabit Ethernet jack from the bottom right of the board to the top right; it is
also compatible with the Power over Ethernet HAT.
In addition, it has a 40-pin GPIO header for connecting various devices and
peripherals, and a USB-C power supply with support for 5 V/3 A power input.
Hence, an external audio card is added to the system; the one below fits the
requirement. It is an external 7.1-channel sound card connected via USB Type-A,
with a stereo 3.5 mm jack and a mono microphone input; it is compatible with the
USB 2.0 full-speed specification (12 Mbps) and is powered by the USB port by
default, so it does not require an external power adapter.
It also supports several common operating systems such as Windows, macOS, and
Linux, which will be used and explained in the next few sections.
Fig. 4.10. External USB 7.1 audio card
Paired with the audio card there is a wireless microphone to transfer voice data to
it; for this purpose the K35 microphone is used.
Its key feature is its 3.5 mm jack connector, which connects to the audio card and
is readily recognized. Other features, apart from being portable and wireless, are
worth mentioning: noise cancellation, a 2.40 GHz operating frequency, a transmission
distance (without walls) of up to 20 m, a 65 dB signal-to-noise ratio, and a
sensitivity of about 42 dB ± 2 dB.
4.3.5 Logic Level Converter
This chip solves the problem of connecting and sending data between 5 V logic-level
devices and 3.3 V logic devices such as the Raspberry Pi. It sits between the 5 V
side (such as an Arduino or a 5 V sensor) and the 3.3 V side, shifting the 5 V
signals down to 3.3 V so they can be fed safely to the 3.3 V device. The 74LVC245
can be used with digital signals and works well with SPI, serial, parallel-bus, and
other logic interfaces.
Looking at the IC schematic, we find that the bi-directional logic-level converter
is actually a very simple device. There is essentially one level-shifting circuit on
the board, repeated four times to create four level-shifting channels. The circuit
uses a single N-channel MOSFET and a couple of pull-up resistors to realize
bi-directional level shifting.
Fig.4.13. Logic level converter scheme (Bi-Directional Logic Level Converter Hookup
Guide - SparkFun Learn, n.d.)
4.4 Wiring Diagram
For this part of the electric design, the main goal is to handle power in the most
efficient way. Not all components operate at the same voltage, nor do they draw the
same current, so an efficient wiring diagram had to be designed.
Starting with the power, the main components are the batteries, Arduino, DC motors,
and the motor drivers. Those are the essential components to make the wheeled mobile
robot function.
Four batteries rated at 3.7 V each were used, connected in series to the motor
drivers as a nominal 14.8 V power input to drive the DC motors efficiently. The
motor drivers handle this voltage comfortably, and the DC motors draw around 0.4 A
at maximum speed and maximum load, which suits the efficiency of our circuit design.
The following figure illustrates the circuit design and wiring.
During circuit simulation, the four 3.7 V batteries did not allow the simulation to
run, because the voltage exceeded what the simulated circuit required, so we altered
the design in the simulation for it to work.
The servo motors operate on 3.8-5.0 V and draw 1.5-2.0 A depending on the load, so
handling power for the servo motors as well as the DC motors was a challenge.
For this purpose, only two of the four batteries were used to power the servo
motors, giving a total of 7.4-8.0 V; the two batteries are connected through voltage
regulators that step the voltage down to prevent any failure in servo operation.
Each motor is connected to its own regulator, except for the small SG90 motors: they
draw far lower current than the MG995, so operating two of them on one regulator
suffices.
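The battery arithmetic above can be summarized in a small, hardware-free sketch. The cell voltage and the 2.0 A MG995 figure are the values quoted in the text; the servo counts and the SG90 current are illustrative assumptions.

```python
CELL_V = 3.7  # nominal cell voltage quoted in the text [V]

def pack_voltage(n_cells):
    """Nominal voltage of n_cells connected in series."""
    return n_cells * CELL_V

def servo_current_budget(n_mg995, n_sg90, mg995_a=2.0, sg90_a=0.5):
    """Worst-case servo current [A] the regulators must supply together.

    2.0 A per MG995 is the upper figure quoted in the text; 0.5 A per
    SG90 is an assumed figure for the smaller servo.
    """
    return n_mg995 * mg995_a + n_sg90 * sg90_a

print(pack_voltage(4))             # drive pack  -> 14.8 V
print(pack_voltage(2))             # servo pack  -> 7.4 V
print(servo_current_budget(3, 2))  # assumed 3 MG995 + 2 SG90 -> 7.0 A
```

A budget like this makes the regulator allocation explicit: each MG995 needs its own regulator rated above its peak draw, while the two low-current SG90s can safely share one.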
The Arduino board needs 5 V to operate, although its input can handle up to 12 V,
so connecting the Arduino to the roughly 8 V coming from the two batteries poses no
problem.
That leaves a few components, such as the IR sensors and the ultrasonic sensor used
for the robotic-arm control technique, both of which draw Vcc from the Arduino
board. The microphone and the audio module operate on the voltage the Raspberry Pi
outputs. As for the Raspberry Pi itself, an external power supply such as a power
bank is used to power the board, because it has only one power source: the USB
Type-C port.
Fig. 4.15. Powering the Raspberry Pi (Start Up Your Raspberry Pi | Coding Projects for Kids and Teens, n.d.)
Chapter V
Control Algorithm
5.1 Preface
This chapter discusses each aspect of the control algorithm: the voice-recognition model, navigation, and the pick-up technique. As previously discussed, the robot must carry out its functions in a dynamic environment, but with fixed tasks and commands, and fixed coordinates to commute between. The following sections explain the control method for each task.
5.2 Speech-recognition
The speech-recognition concept in our project depends on the robot recognizing the voice commands given to it, using the Picovoice platform for on-device voice AI. Such systems build on machine-learning models, including large language models (LLMs), that can comprehend and generate human-language text: a model is fed enough examples to recognize and interpret human language or other complex data, working by analyzing massive data sets of language. The voice commands given to the robot are pre-trained so that the robot can recognize them later, even across different voices and accents.
A keyword spotter can also be used to create always-listening voice commands. The main benefit of always-listening commands over follow-on commands is user convenience, since the wake phrase does not have to be uttered first. By contrast, matching a stream of speech-to-text output against a list of phrases or keywords would yield dreadfully low accuracy while consuming an unnecessary amount of computing resources; for power-sensitive applications like wearable devices, the power requirements alone are a complete non-starter.
Fig. 5.1. STT model (Strategy Guide for Voice AI-powered Applications, n.d.; https://picovoice.ai/blog/a-strategy-guide-for-voice-applications/)
For this purpose, we gave the robot a name to make keyword detection easier; in our case the robot's name is MARVY. The robot is given a voice command using a wake-up keyword, so that it can differentiate between the words spoken before and after that keyword. In our case the keyword is "Wake-up MARVY". Everything said before the keyword is ignored by the robot, and after the keyword come the commands the robot is to execute. Other pre-trained commands are "Follow Line", "Pick Up", "Go Back", and "Stop". Behind each command is a script that guides the robot on what to do and how to do it, whether the code for the ultrasonic sensor to scan for an object to pick up or the script for the robotic arm to execute the movement successfully. The same concept applies to following the line and returning to the initial position.
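The wake-word-then-command behaviour described above can be sketched as a small dispatch routine. This is an illustrative sketch only: the phrase list matches the commands named in this section, but the function and action names are assumptions, and the real system relies on Picovoice's on-device models rather than plain string matching.

```python
# Illustrative sketch of the wake-word gating and command dispatch described
# above; real detection is done by Picovoice models, not string matching.

WAKE_PHRASE = "wake up marvy"

COMMANDS = {            # spoken phrase -> action label (names are assumptions)
    "follow line": "FOLLOW_LINE",
    "pick up": "PICK_UP",
    "go back": "GO_BACK",
    "stop": "STOP",
}

def dispatch(transcript):
    """Ignore everything before the wake phrase, then match a command after it."""
    text = transcript.lower().replace("-", " ")  # normalise "Wake-up MARVY"
    idx = text.find(WAKE_PHRASE)
    if idx == -1:
        return None  # wake phrase never spoken: the robot stays idle
    spoken_after = text[idx + len(WAKE_PHRASE):]
    for phrase, action in COMMANDS.items():
        if phrase in spoken_after:
            return action
    return None  # no known command followed the wake phrase
```

For example, an utterance containing "wake-up MARVY, follow line" yields the line-following action, while "follow line" spoken without the wake phrase is ignored.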
5.3 Line Following
For the robot to accomplish its task successfully, it needs to follow a line taped to the ground from point A (where it picks up the object) to point B (where it drops the object off). Of course, that is not always the case in such robotic applications; sometimes a robot relies on techniques such as autonomous navigation and localization using various sensors and complicated algorithms. In our project we chose to keep it simple and easy to deploy, as long as the robot carries out its function correctly.
The robot carries out the line-following technique using infra-red (IR) sensors fixed to the front of the wheeled mobile robot, so that it can detect the presence of the black line on the ground.
An IR sensor consists of an emitter and a receiver of IR waves: the emitter is simply an IR LED (Light-Emitting Diode) and the detector is an IR photodiode sensitive to the same wavelength that the LED emits. When IR light falls on the photodiode, its resistance and output voltage change in proportion to the magnitude of the IR light received. A black line is used because it absorbs the IR light rather than reflecting it back to the receiver, so the sensor reading over the line differs sharply from the reading over the lighter surrounding floor, and the robot detects the line. The robot can then adjust based on whether it is directly over the line or beside it, correcting its orientation to make sure it follows the line at all times.
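With two IR sensors straddling the line, the correction logic reduces to a handful of cases. The following is a minimal sketch of that decision; the sensor arguments and action names are invented for illustration (the real robot drives its motors through the Arduino):

```python
# Two-sensor line-following decision sketch. True means the sensor reads the
# black line (little IR reflected back); action names are illustrative only.

def steer(left_on_line, right_on_line):
    """Choose a drive action from the two IR readings."""
    if left_on_line and right_on_line:
        return "forward"     # centred over the line: keep going
    if left_on_line:
        return "turn_left"   # robot drifted right: steer back left
    if right_on_line:
        return "turn_right"  # robot drifted left: steer back right
    return "search"          # line lost: stop or sweep until reacquired
```

Adding a third centre sensor, or polling faster, refines the same idea without changing its structure.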
This method gives the robot durability and stability in comparison with other approaches, such as supporting the upper plate on spacers or vertical columns. The chassis parts are then fastened together with bolts and nuts, in addition to being already stable from the press-fit method.
6.3 Assembly
The next step is fixing the components in place. The position of each component depends on the circuit wiring and analysis explained above. Each component is fixed with bolts and nuts in pre-determined positions using pre-designed holes, from fixing the motor mounts and wheel mountings to fixing the electrical components such as the motor drivers and electronic boards.
The robotic arm is then assembled: each link is fastened to the link before it, and its motor is fastened in place. Each servo is then adjusted so that its zero position lies in the middle of the link's range of motion, which helps in the coding algorithm for the pick-and-place technique. The other aspect to watch when assembling the robotic arm is making sure its working area is free of blind spots, so we had to ensure that the surrounding area is clear of components and that the wires of each motor are not loose but tied together and fixed to the arm itself.
Fig. 6.5. Assembling robotic arm.
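Centring each servo's zero on the middle of its link's range simplifies the later arm code, because joint angles can be expressed symmetrically around zero. A minimal sketch of this mapping, assuming standard 0-180 degree hobby servos; the centre value and clamping are illustrative, not the project's actual calibration:

```python
# Sketch of mapping a symmetric joint angle (0 = mid-range) to a 0-180 degree
# hobby-servo command; the centre value is an assumption for illustration.

SERVO_CENTER = 90.0  # servo command corresponding to the link's mid-range

def joint_to_servo(joint_angle):
    """Convert a joint angle around mid-range to a clamped servo command."""
    command = SERVO_CENTER + joint_angle
    return max(0.0, min(180.0, command))  # clamp to the servo's physical range
```

With this convention, equal positive and negative joint angles move the link equally far either side of the pose set during assembly.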
Chapter VII
Testing and Results
Chapter VIII
Cost Analysis
8.1 Preface
In this chapter, the main target is a thorough analysis of the cost of our project. Especially in robotics applications, a project is usually loaded with components, whether sensors or boards, that help the robot perform its function. Taking into consideration every decision we made and the basis for it, we carried out this analysis to evaluate whether we took the optimal decisions given the resources we had, or whether we could have made better ones. For example, we could have chosen from a variety of engineering materials other than plywood for the chassis. This chapter illustrates the whole process.
8.2 Analysis
Starting work on the project, every decision was taken based on research made beforehand. One of the main aspects of this research is the cost of each component, alongside the component's capabilities and functions.
The first decision we had to make was choosing the material. As previously discussed in project report part I, the first choice was stainless-steel sheets. During manufacturing we found that stainless steel would be expensive, so we went with MDF instead, which we later decided to use only for testing and as a model because it was not as durable as we expected. Although increasing the thickness from 3 mm to 5 mm was discussed, we decided on the 6 mm Finnish plywood panel for its characteristics, as previously shown in chapter two.
Fig. 8.1. MDF chassis.
The second decision was based on the motor sizing we performed, using the calculated weight of the chassis and the arm. Choosing stainless steel would have added more weight to the robot, which would have required bigger motors and, in turn, higher cost.
After the motor selection, the robotic arm was designed and printed. Based on research into market availability, we found the best choice was to 3D-print the arm parts. The next step was choosing the basic components for the robot's functions, and even that took time and cost more than our research had suggested. We chose components, and during testing some were burnt and had to be replaced, while others turned out unsuitable for the application and had to be exchanged. For example, we first chose the Arduino Uno instead of the Mega, but then noticed it did not have enough GPIOs for the rest of the components. Also, some of the voltage regulators were burnt by excessive input voltage, after we discovered that the 3.7 V batteries actually output around 4 V each. After some changes to the power circuit, we finally got it right.
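The regulator failure above comes down to simple arithmetic: a nominally 3.7 V cell sits noticeably higher in practice, and the error multiplies across the series pack. A quick illustrative check, using the 4 V per-cell figure observed during testing:

```python
# Nominal vs. observed pack voltage for the four series cells; this is the
# arithmetic behind the burnt voltage regulators described above.

CELLS = 4
NOMINAL_CELL_V = 3.7   # rating printed on the battery
OBSERVED_CELL_V = 4.0  # what the cells actually delivered

nominal_pack_v = CELLS * NOMINAL_CELL_V    # 14.8 V assumed in the design
observed_pack_v = CELLS * OBSERVED_CELL_V  # 16.0 V actually applied
overshoot_v = observed_pack_v - nominal_pack_v  # 1.2 V extra across the pack
```

That extra margin is why component ratings should be checked against measured, not nominal, supply voltages.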
Another issue we faced was whether to choose expensive sensors with higher accuracies
or to choose cheaper ones and improvise in the coding stage or to tune them manually.
For example, we could have used a LIDAR sensor instead of Ultrasonic and IR sensors.
This would have cost more, but it would have also made it easier for us to work with.
During the process of building the robot, many enhancements had to be made to one part or another, which is a mandatory part of the process. Another example is the fabrication of a suitable motor coupling for the wheels. First we tried to make the coupling that came with the wheels work, but it did not go as expected. We then chose to buy a metal coupling; unfortunately, it was not suitable for the encoder disk that had to be mounted on the motor shaft. That led us to making our own coupling design, and even that was a hassle in itself. More than one design was made, and through trial and error we finally reached the optimal design.
Table 8.1 below shows a detailed cost analysis of all the components and materials used in building the robot.
From the work done in this project, some concepts may be pursued in future versions:
Improve the materials used: for better durability and a more cohesive structure, metallic materials ought to be cut and used; they might be a little heavier, but more stable and efficient. The recommended materials for such a robot vary depending on the application and environment, but common choices are aluminum alloy and stainless steel.
Utilize ROS and LiDAR for voice-controlled navigation and mapping: to create a more natural and effective human-robot interaction, LiDAR uses laser pulses and measures their reflections to provide accurate distance measurements, and the environment is mapped in great detail from this data. Furthermore, robot control and navigation become more effective and user-friendly when voice recognition and LiDAR are integrated into ROS. This technique creates a robotic system that is effective, approachable, and flexible by combining the natural interface of voice commands with the accuracy of LiDAR sensing; it increases the effectiveness of user interaction, boosts navigation accuracy, and expands the range of applications for robots.
Add more gadgets with more functionality: as mentioned, such a project has many possibilities for different jobs, like firefighting, scanning for defects or mines, or even painting. With the right equipment designed and attached to it, the robot can be reprogrammed, adding more functions to the microcontroller and mapping them to triggering voice lines.
Consider using an AI model with a camera: AI models and image processing have become a major approach in many fields, robotics in particular. In projects like this, an effective and user-friendly method of controlling and navigating robots is provided by integrating voice recognition with AI-based camera models in ROS; the camera is important for mapping and navigation, since it allows preset locations to be saved via a built AI model and adapts to different locations. This method makes use of sophisticated computer-vision techniques for dynamic navigation and detailed environmental understanding, as well as natural-language processing for simple interaction. It offers improved adaptability, versatility, and usability, making it appropriate for a broad range of uses.
References
[1] Ahmad, S. (2021). On the Design and Fabrication of a Voice-controlled Mobile
Robot. International Conference on Informatics in Control, Automation and
Robotics (ICINCO 2021) (p. 6). Dubai: Mechatronics Engineering Technology,
Higher Colleges of Technology.
[2] Daniel Jeswin Nallathambi, A. T. (2018). Voice Controlled Robotic
Manipulator with Decision Tree and Contour Identification Techniques.
International Conference on Intelligent Systems (IS) (p. 4). Chennai, India:
Department. of Computer Science and Engineering SSN College of Engineering.
[3] Dr. Ali Ahmed Abed, D. A. (2016). DESIGN AND IMPLEMENTATION OF
WIRELESS VOICE CONTROLLED MOBILE ROBOT. Al-Qadisiyah Journal
For Engineering Sciences.
[4] Humayun Kabir, A. U. (2011). DEVELOPMENT OF A VOICE
CONTROLLED ROBOTIC ARM. Proceedings of the Conference on
Engineering Research, Innovation and Education 2011. Bangladesh: Chittagong
University of Engineering & Technology.
[5] Maruthi Naik R K, C. R. (2022). Fire Fighting Robot with a Voice.
International Journal of Engineering Research & Technology .
[6] Ms. B. Sarada, M. H. (2023). Voice Controlled Robot using Bluetooth.
International Journal of Engineering Research & Technology.
[7] Ms. Dipti Damodar Patil, P. U. (2014). Design and Development of Voice/Telie
Operated Intelligent Mobile Robot. Int. Journal of Engineering Research and
Applications .
[8] Svitlana Maksymova, R. M. (2017). Software for Voice Control Robot:
Example of. Open Access Library Journal.
[9] Abaas, T. F., Khleif, A. A., & Abbood, M. Q. (2020). Inverse Kinematics
Analysis and Simulation of a 5 DOF Robotic Arm using MATLAB.
Al-Khwarizmi Engineering Journal.
[10] Young Sang Choi, C. D. (2008). Laser Pointers and a Touch Screen: Intuitive
Interfaces for Autonomous Mobile Manipulation for the Motor Impaired.
[11] Microsoft Speech Application Programming Interface (API) and SDK, Version
5.1, Microsoft Corporation, http://www.microsoft.com/speech
[10] Iliukhin, V. N., Mitkovskii, K. B., Bizyanova, D. A., & Akopyan, A. A.
(2017). The Modeling of Inverse Kinematics for 5 DOF Manipulator.
Samara National Research University.
[12] Jamil, A. S., Hassan, N. F., & Obaida, T. H. (2023). Real-time voice recognition
and modification by convolutional neural network. ResearchGate.
https://www.researchgate.net/publication/373629934_Real-
time_Voice_Recognition_and_Modification_By_Convolutional_Neural_Networ
k
[13] Meng, Z., Liu, H., & Alfred, C. (2023). Optimizing voice recognition informatic
robots for effective communication in outpatient settings. Cureus.
https://doi.org/10.7759/cureus.44848
[14] Sharma, M. D., & Kumar, P. (2023). Application of artificial intelligence for
voice recognition. ResearchGate.
https://www.researchgate.net/publication/368543413_Application_of_Artificial
_Intelligence_for_Voice_Recognition/related
[15] Li, S., Liu, Y., Chen, Y., Feng, H., Shen, P., & Wu, Y. (2023). Voice Interaction
Recognition design in Real-Life Scenario Mobile robot applications. Applied
Sciences, 13(5), 3359. https://doi.org/10.3390/app13053359
[16] Rendyansyah, R., Prasetyo, A. P. P., & Sembiring, S. (2022). Voice command
recognition for movement control of a 4-DOF robot arm. Elkha, 14(2), 118.
https://doi.org/10.26418/elkha.v14i2.57556
[17] Kulkarni, M. (2023). Voice controlled robot. International Journal for Research
in Applied Science and Engineering Technology, 11(5), 2918–2922.
https://doi.org/10.22214/ijraset.2023.51864
[18] Qin, M., Kumar, R., Shabaz, M., Agal, S., Singh, P., & Ammini, A. C. (2023).
Broadcast speech recognition and control system based on Internet of Things
sensors for smart cities. Journal of Intelligent Systems, 32(1).
https://doi.org/10.1515/jisys-2023-0067
[19] Rashid, M. M., Ali, M. Y., & Sailan, S. (2023). Service robot using voice
recognition. Nucleation and Atmospheric Aerosols.
https://doi.org/10.1063/5.0112691
[20] Khoon, T. N., Sebastian, P., & Saman, A. B. S. (2012). Autonomous Fire
Fighting mobile platform. Procedia Engineering, 41, 1145–1153.
https://doi.org/10.1016/j.proeng.2012.07.294
[21] Naeem, B., Saeed-Ul-Hassan, & Yousuf, N. (2023). An AI based Voice
Controlled Humanoid Robot. Research Square (Research Square).
https://doi.org/10.21203/rs.3.rs-2424215/v1
[22] Luo, C. (2023). A voice recognition sensor and voice control system in an
intelligent toy robot system. Journal of Sensors, 2023, 1–8.
https://doi.org/10.1155/2023/4311745
[23] Li, D., Shi, X., & Dai, M. (2024). An improved path planning algorithm based
on A* algorithm. In Lecture notes in electrical engineering (pp. 187–196).
https://doi.org/10.1007/978-981-99-9239-3_19
[24] M, P. K. (2022). Speech Recognition Robot using Endpoint Detection
Algorithm. International Journal of Engineering and Management Research,
12(6), 206–210. https://doi.org/10.31033/ijemr.12.6.28
[25] Kumar, K. S. (2022). Voice controlled robot vehicle using Arduino.
International Journal for Research in Applied Science and Engineering
Technology, 10(6), 2786–2791. https://doi.org/10.22214/ijraset.2022.44458
[26] Kanode P, Kanade S (2020) Raspberry pi project-voice controlled robotic
assistant for senior citizens. Int Res J Eng Technol (IRJET) 7(10):1044–1049
[27] Zhang J, Wang M, Li CY, Li XM, Xu W (2012) Design of intelligent-tracking
car based on infrared photoelectric sensor and speech recognition technology. J
Chongqing Univ Technol (Nat Sci) 7.
[29] https://www.unifiedalloys.com/blog/stainless-grades-families
[30] https://www.upmet.com/sites/default/files/datasheets/316-316l.pdf
[31] https://www.fireextinguisheronline.com.au/blog/post/what-are-dry-chemical-
fire-extinguishers
[32] Abdullah, M. A., Mansor, M. R., Tahir, M. M., & Ngadiman, M. N. (2012).
Design, analysis and fabrication of chassis frame for UTEM Formula VarsityTM
race car.
[33] Liu, Z., Li, Z., Sun, Y., Liu, A., & Jing, S. (2022). Fusion of binocular vision,
2D lidar and IMU for outdoor localization and indoor planar mapping.
Measurement Science and Technology, 34(2), 025203.
https://doi.org/10.1088/1361-6501/ac9ed0
[34] Korendiy, V., Kachur, O., Boikiv, M., & Yaniv, O. (2023). Analysis of
kinematic characteristics of a mobile caterpillar robot with a SCARA-type
manipulator. ResearchGate. https://doi.org/10.23939/tt2023.02.056
[35] AlHaza, T., Alsadoon, A., Alhusinan, Z., Jarwali, M., & Alsaif, K. A. (2015).
New concept for indoor fire fighting robot. Procedia - Social and Behavioral
Sciences, 195, 2343–2352. https://doi.org/10.1016/j.sbspro.2015.06.191
[36] Hendzel, Z., & Rykała, Ł. (2017). Modelling of Dynamics of a Wheeled Mobile
Robot with Mecanum Wheels with the use of Lagrange Equations of the Second
Kind. International Journal of Applied Mechanics and Engineering, 22(1), 81–
99. https://doi.org/10.1515/ijame-2017-0005
[37] Amodei, D., Ananthanarayanan, S., Anubhai, R., & Zhu, Z. (2015). Deep
Speech 2: End-to-End speech recognition in English and Mandarin.
ResearchGate.
https://www.researchgate.net/publication/286513561_Deep_Speech_2_End-to-
End_Speech_Recognition_in_English_and_Mandarin
[38] Agbeyangi, A., Alashiri, O. A., Odiete, J., & Adenekan, O. (2020). An
autonomous obstacle avoidance robot using ultrasonic sensor. Journal of
Computer Science and Its Application, 27(1).
https://doi.org/10.4314/jcsia.v27i1.15
[39] Yang, X., Liu, H., & Zhang, S. (2023). Overview of path planning algorithms.
Recent Patents on Engineering, 18.
https://doi.org/10.2174/1872212118666230828150857
[40] Ata, A., & Myo, T. (2008). Optimal trajectory planning and obstacle avoidance
for flexible manipulators using generalized pattern search. ResearchGate.
https://www.researchgate.net/publication/255592185_Optimal_trajectory_planni
ng_and_obstacle_avoidance_for_flexible_manipulators_using_generalized_patt
ern_search
[41] Xu, X., Cao, Y., & Li, X. (2023). Improved DQN algorithm for path planning of
autonomous mobile robots. Research Square (Research Square).
https://doi.org/10.21203/rs.3.rs-3566583/v1
[42] Zhang, Han. (2023). Path planning for storage robots using the RRT algorithm.
Theoretical and Natural Science. 26. 279-285. 10.54254/2753-
8818/26/20241118.
Appendices
Appendix A: Plywood Mechanical Properties
Appendix B: Datasheets of Components
Geared DC motor
Different DC geared motor data:
Gearbox Length (mm)  Stage Number  Reduction Ratios                    Rated Torque (kg.cm)  Breaking Torque (kg.cm)
16.2                 4             20.4/1, 25/1, 28/1, 31/1, 35/1      1.0                   2.5
19.2                 6             128/1, 173/1, 187/1, 217/1          2.0                   5.0
20.7                 7             266/1, 319/1, 438/1, 467/1, 542/1   2.5                   6.0
The device includes 30 cm of color-coded wire leads with a 3 × 1 pin female header
connector with 0.1" pitch that is compatible with the majority of receivers.
Specifications
• Weight: 55 g
• Temperature range: 0 ºC – 55 ºC
HC-SR04 Ultrasonic Sensor Module
Weight: 9 g
IR Sensor Module
Weight: 7 g
Raspberry Pi 4 Model Pinout
L298N Dual H-Bridge Motor Driver Specifications
Max Current: 2 A
Assuming the mechanical design and the circuits have been assembled, the microcontrollers are programmed and integrated with one another, making the robot familiar with the phrases of our language so that it can recognize and analyze utterances and identify the programmed commands. It is preferable to improve the accuracy of the voice-recognition elements to increase efficiency and reduce potential errors.
Furthermore, we built a robot that recognizes a wide variety of phrases; with the right materials and design it carries tools for several field jobs, it is programmed to perform its instructions well, and it will serve as a prototype that inspires real field upgrades.
MSA
October University for Modern Science & Arts
Faculty of Engineering
Department of Mechatronics System Engineering
Supervised By
Spring 2024