Research Methodologies

fyp

Uploaded by

rilwanyusuf2004
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as XLSX, PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
5 views

Research Methodologies

fyp

Uploaded by

rilwanyusuf2004
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as XLSX, PDF, TXT or read online on Scribd
You are on page 1/ 11

S/N | Author | Research | Methodology

1 | Alon Poudel (2019) | A Comparative Study of Project Management System Web Applications Built on ASP | -
2 | - | Building an Automated Task Delegation Algorithm for Project Management and Deploying It as SaaS | -
3 | Fahim Ahmed (2021) | Project Management Software - A Project by United Commercial Bank Ltd. | Data source: website of UCBL, annual reports, Internet
4 | Yegoshyna G.A., Voronoy S.M. (2018) | Intellectualization of project management web services based on integration with natural language processing module | -
S/N | Research | Author(s)

1 | Advancements in Humanoid Robotics: Designing an Artificial Neural Network-based Speech Recognition Robot for Tactical Deployment | Sarika Shrivastava, Somendra Bannerjee, Mayank Srivastava, Saifullah Khalid, D. K. Nishad
2 | 4-degree-of-freedom voice-controlled robotic arm | Adekunle Taofeek, Elegbeji Wahab, Adeniyi Oluwabukunmi
3 | Design of a decentralized Internet-based control system for industrial robotic arms | Jin Zhang, Wenjun Meng, Yufeng Yin
4 | A review on Artificial intelligence, machine learning and deep learning in advanced robotics | Mohsen Soori, Behrooz Arezoo, Roza Dastres
5 | Modeling and Control of 2-DOF Robot Arm | Nasr M. Ghaleb and Ayman A. Aly
6 | Reinforcement Learning for Pick and Place Operations in Robotics: A Survey | Andrew Lobbezoo, Yanjun Qian and Hyock-Ju Kwon (2021)
7 | Design and Implementation of Pick and Place Robotic Arm | Ravikumar Mourya, Amit Shelke, Sourabh Satpute, Sushant Kakade, Manoj Botre
8 | Design & Kinematic Analysis of an Articulated Robotic Manipulator | Elias Eliot, B.B.V.L. Deepak, D.R. Parhi, and J. Srinivas (2012)
9 | Position Control Method For Pick And Place Robot Arm For Object Sorting System | Khin Moe Myint, Zaw Min Min Htun, Hla Myo Tun (2016)
10 | Smart Robot Arm Motion Using Computer Vision | Bilal İşçimen, Hüseyin Atasoy, Yakup Kutlu, Serdar Yıldırım, Esen Yıldırım (2015)
Methodology | Results | Remarks (continued for studies 1-7 above)

1. Methodology: A number of voice samples for five commands were recorded through a MATLAB GUI, and the distinguishing characteristics of the samples were extracted using techniques such as FFT and MFCC. Pre-processing included silence removal and feature extraction to improve the model's recognition accuracy. A feedforward ANN was trained with MATLAB's Deep Learning Toolbox, and the trained model was embedded in a humanoid robot prototype via MATLAB's Hardware Support Package for Arduino, which enabled command execution through a graphical user interface.
Results: The ANN-based speech recognition system achieved 87% accuracy in recognizing five commands using a diverse dataset from ten speakers.
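As a rough Python sketch of this kind of pipeline (not the authors' MATLAB implementation), the example below extracts MFCC features from recorded commands and trains a small feedforward classifier; the command list, file layout, and network size are assumptions.

```python
# Hypothetical sketch: MFCC features + feedforward network for command recognition.
# File paths, command set, and network size are illustrative assumptions.
import numpy as np
import librosa                                # audio loading and MFCC extraction
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

COMMANDS = ["forward", "backward", "left", "right", "stop"]   # assumed command set

def extract_features(wav_path, sr=16000, n_mfcc=13):
    """Load a voice sample and return a fixed-length MFCC feature vector."""
    y, sr = librosa.load(wav_path, sr=sr)
    y, _ = librosa.effects.trim(y, top_db=25)                 # crude silence removal
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)    # shape (n_mfcc, frames)
    return mfcc.mean(axis=1)                                  # average over time

def train_command_recognizer(samples):
    """samples: list of (wav_path, command_index) pairs collected per speaker."""
    X = np.array([extract_features(path) for path, _ in samples])
    y = np.array([label for _, label in samples])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
    model.fit(X_tr, y_tr)                                     # feedforward ANN training
    print("held-out accuracy:", model.score(X_te, y_te))
    return model
```

In a setup like the one described, the held-out accuracy printed at the end would play the role of the 87% figure reported above.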
2. Methodology: The voice-controlled robotic arm was designed in Autodesk Fusion 360 and analyzed with Denavit-Hartenberg (D-H) parameters for its kinematics. An Android app captured voice commands, converted them to text, and sent them to an Arduino over Bluetooth. The microcontroller processed these commands and generated the signals that drive the arm for pick-and-place tasks. The hardware included an Arduino Uno microcontroller, an HC-05 Bluetooth module, lithium-ion batteries, and a servo motor driver shield.
Results: The arm's performance was analyzed in MATLAB using the D-H parameters to model its kinematics and to calculate its reach, torque, and speed.
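For reference, a minimal forward-kinematics sketch built from a D-H table; the link lengths, offsets, and twists below are placeholders, not the dimensions of the arm described in the paper.

```python
# Minimal D-H forward kinematics sketch; the parameter table is a placeholder.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive frames (standard D-H convention)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_table):
    """Chain the per-joint transforms and return the end-effector pose."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Example: a 4-DOF arm with made-up link lengths (metres) and twists (radians).
dh_table = [(0.05, 0.00, np.pi / 2), (0.00, 0.12, 0.0),
            (0.00, 0.10, 0.0),       (0.00, 0.06, 0.0)]
pose = forward_kinematics([0.0, np.pi / 4, -np.pi / 6, 0.0], dh_table)
print("end-effector position:", pose[:3, 3])
```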

3. Methodology: The system used a two-level control architecture with an STM32F429 microcontroller-based robot controller and a Visual C++ human-machine interface on the upper computer. The hardware design included servo motor drivers, serial communication modules, and voltage regulation circuits. The software module handled data processing, motion control, and feedback through the controller, while the network layer tracked and fused sensor data using consistency, algebraic averaging, and geometric mean fusion algorithms.
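A toy illustration of the fusion step only (not the paper's exact algorithms): redundant readings are screened for consistency against the median and then combined by algebraic and geometric means; the tolerance and sample values are assumptions.

```python
# Illustrative sensor-fusion sketch: consistency screening plus algebraic and
# geometric mean combination. Threshold and readings are assumptions.
import numpy as np

def consistent_subset(readings, tolerance=0.05):
    """Keep readings within `tolerance` (relative) of the median."""
    readings = np.asarray(readings, dtype=float)
    median = np.median(readings)
    mask = np.abs(readings - median) <= tolerance * np.abs(median)
    return readings[mask]

def fuse(readings):
    kept = consistent_subset(readings)
    algebraic = kept.mean()                      # algebraic averaging
    geometric = np.exp(np.log(kept).mean())      # geometric mean (positive data)
    return algebraic, geometric

# Example: three redundant joint-position sensors, one of them drifting.
print(fuse([1.02, 0.98, 1.31]))
```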
4. Methodology: This review surveys the application of Artificial Intelligence, Machine Learning, and Deep Learning across previous studies in areas including object recognition, motion planning, control, and predictive maintenance. Convolutional neural networks (CNNs) are used for object detection and classification, reinforcement learning (RL) algorithms allow robots to perform motion planning and pathfinding, and deep reinforcement learning (DRL) approaches have been adopted for control tasks.
Results: Significant advancements in robotics are enabled by AI, ML, and DL, with improvements in properties such as precision, adaptability, and efficiency.

5. Methodology: Modeling of the robotic arm involved developing mathematical models, dynamic analysis, and control system design in MATLAB/Simulink. The forward kinematics of the arm were determined from a set of Denavit-Hartenberg (DH) parameters, which were used to derive the homogeneous transformation matrices between the frames assigned along the arm structure. A proportional-integral-derivative (PID) controller was designed for each joint and tuned by trial and error to achieve optimal performance. A permanent magnet DC (PMDC) motor was used for actuation, with a transfer function to simulate the actuator.
Results: Approximate mathematical models can be obtained and then simulated together with the designed control law, providing a more realistic validation of the system behavior and control performance.
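As a sketch of the joint-level control loop, the snippet below runs a discrete PID controller against a first-order motor model; the gains, motor constants, and setpoint are placeholders rather than the tuned values from the study.

```python
# Sketch of a discrete PID loop driving a first-order DC-motor joint model.
# Gains, motor constants, and setpoint are placeholders.
import numpy as np

def simulate_joint(kp=8.0, ki=2.0, kd=0.5, dt=0.001, t_end=2.0,
                   tau=0.15, gain=1.2, setpoint=1.0):
    """Return (time, angle) for a PID-controlled joint modelled as
    d(angle)/dt = (gain * voltage - angle) / tau."""
    n = int(t_end / dt)
    angle, integral, prev_err = 0.0, 0.0, setpoint
    history = np.zeros(n)
    for k in range(n):
        err = setpoint - angle
        integral += err * dt
        derivative = (err - prev_err) / dt
        voltage = kp * err + ki * integral + kd * derivative   # PID control law
        angle += dt * (gain * voltage - angle) / tau           # plant update (Euler)
        prev_err = err
        history[k] = angle
    return np.arange(n) * dt, history

t, theta = simulate_joint()
print("final angle:", theta[-1])
```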
6. Methodology: The survey covers reinforcement learning (RL) for robotic pick-and-place tasks, which use Markov Decision Processes (MDPs) as the underlying formulation for learning through repeated interaction. Methods such as dynamic programming, Actor-Critic, and Proximal Policy Optimization (PPO) are used to improve policies, and most training takes place in simulation environments such as MuJoCo and ODE. The survey reviews existing methodologies including policy optimization, reward shaping, imitation (apprenticeship) learning, and pose estimation for grasp selection.
Results: Reinforcement learning produces high accuracy, with algorithms such as Deep Deterministic Policy Gradient (DDPG) and Hindsight Experience Replay (HER) excelling in simulation.
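To make the MDP formulation concrete, here is a toy tabular Q-learning loop on a one-dimensional "move the gripper to the target" task; it stands in for the simulated pick-and-place environments discussed in the survey, and the states, rewards, and hyperparameters are illustrative assumptions.

```python
# Toy tabular Q-learning on a 1-D "move gripper to target" MDP.
# States, rewards, and hyperparameters are illustrative assumptions.
import numpy as np

N_STATES, TARGET = 6, 5          # gripper positions 0..5, object at position 5
ACTIONS = [-1, +1]               # move left / move right
rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, len(ACTIONS)))

alpha, gamma, epsilon = 0.1, 0.95, 0.1
for episode in range(500):
    s = 0
    for _ in range(50):
        a = rng.integers(len(ACTIONS)) if rng.random() < epsilon else int(Q[s].argmax())
        s_next = int(np.clip(s + ACTIONS[a], 0, N_STATES - 1))
        r = 1.0 if s_next == TARGET else -0.01          # small step cost (reward shaping)
        # Q-learning update: temporal-difference target uses the greedy next value
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
        if s == TARGET:
            break

print("greedy action per state:", Q.argmax(axis=1))     # expect mostly "move right"
```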

7. Methodology: The work involved the following steps:
- Mechanical design: a CAD model of the arm's main components was created in Creo.
- Actuation system: the arm is driven by four servo motors that control its degrees of freedom. An ATmega16 microcontroller handles control, and RoboAnalyzer software was used to solve the inverse kinematics (a simple analytic sketch follows below). The design favoured electrical DC servo actuators over hydraulic and pneumatic actuators, together with a continuous path controller, and the parts were fabricated from locally sourced materials.
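Since the paper relies on RoboAnalyzer for inverse kinematics, the snippet below shows the standard closed-form solution for a planar two-link arm as a stand-in; the link lengths and target point are placeholders.

```python
# Closed-form inverse kinematics for a planar 2-link arm, used here as a
# stand-in for the RoboAnalyzer-based solution; link lengths are placeholders.
import numpy as np

def planar_2link_ik(x, y, l1=0.12, l2=0.10, elbow_up=True):
    """Return joint angles (theta1, theta2) that place the wrist at (x, y)."""
    r2 = x * x + y * y
    cos_t2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(cos_t2) > 1.0:
        raise ValueError("target outside the arm's reachable workspace")
    t2 = np.arccos(cos_t2)
    if not elbow_up:
        t2 = -t2
    t1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(t2), l1 + l2 * np.cos(t2))
    return t1, t2

t1, t2 = planar_2link_ik(0.15, 0.08)
print("joint angles (rad):", round(t1, 3), round(t2, 3))
```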
