
UNIT V Robotics

Robotics: Introduction, Robot Hardware, Robotic Perception, Planning to Move, Planning Uncertain Movements, Moving, Robotic Software Architectures, Application Domains

Philosophical foundations: Weak AI, Strong AI, Ethics and Risks of AI, Agent Components,
Agent Architectures, Are we going in the right direction, What if AI does succeed.

PHILOSOPHICAL FOUNDATIONS
1. Strong AI: Can Machines Really Think?

Strong AI, also known as Artificial General Intelligence (AGI), refers to AI with general cognitive
abilities, capable of understanding, learning, and applying knowledge across diverse
tasks, similar to human intelligence.
A machine with Strong AI can perform tasks on its own, just like a human being.
This type of AI allows machines to solve problems in an unlimited domain.
It is an independent AI that can make decisions on its own.

Examples (Hypothetical):

1. Personal Assistants:
Fully autonomous AI capable of understanding emotions, managing schedules, and
making informed decisions.
2. Autonomous Robots:
Robots that can independently handle complex tasks such as caregiving or disaster
management.
3. Self-Driving Cars:
Vehicles that understand complex traffic scenarios and make decisions as humans would.
4. Intelligent Diagnosis: AI accurately diagnoses diseases by analyzing patient conditions,
symptoms, and history.
5. AI-Powered Education: Adaptive systems personalize learning based on individual styles
and needs.

2. Weak AI: Can Machines Act Intelligently?

Weak AI, also known as Narrow AI, is a type of artificial intelligence designed to perform a
specific task or set of tasks.
A machine with Weak AI can perform tasks but needs human intervention.
This type of AI allows machines to solve problems only in a limited domain.
It is dependent on humans and can only simulate human behavior.

Examples Explained:

1. Virtual Assistants:
Tools like Siri and Alexa respond to voice commands, manage schedules, or play music.
2. Image Recognition:
AI used in identifying faces or objects in photos (e.g., facial recognition in security).
3. Language Translation:
Google Translate or other AI-powered tools convert text from one language to another.
4. Chatbots:
Automated systems that provide customer support or answer queries online.
5. Recommendation Systems:
Netflix, Amazon, or Spotify suggest content based on user preferences and past behavior.

Ethics and Risks of Developing Artificial Intelligence


The development of Artificial Intelligence (AI) raises ethical concerns and potential risks that
must be carefully addressed to ensure AI benefits society without causing harm.

Why AI Ethics Matter:


1. AI replicates human intelligence and can amplify human biases.
2. The scale and complexity of AI systems make them difficult to comprehend.
3. There is an urgent need to address AI's risks as well as its benefits.
Ethical Challenges:
1. Explainability: Understanding AI decision-making.
2. Responsibility: Accountability for AI consequences.
3. Fairness: Avoiding biases in data and algorithms.
4. Misuse: Preventing harmful applications.

What are the 5 Pillars of AI Ethics?

1. Fairness & Non-discrimination: Avoid bias in AI decisions.


2. Transparency: Explain AI decision-making.
3. Data Protection: Handle personal data responsibly.
4. Explainability: Make AI decisions understandable.
5. Human Autonomy & Control: Ensure human oversight.

AI Code of Ethics

Policy: Developing regulations to ensure AI is used responsibly.


Education: Ensuring stakeholders understand AI's ethical implications.
Technology: Creating systems that detect and prevent unethical AI behavior, including
data manipulation or deep fakes.

AI Risks and Concerns

1. Job Loss to Automation: Robots and AI replacing human labor, causing widespread
unemployment.
2. Impact on Leisure Time: Excessive or insufficient leisure due to automation, leading to
societal boredom or stress.
3. Loss of Uniqueness: AI suggesting humans are like machines, undermining autonomy and
humanity.
4. Misuse of AI: Deployment in harmful areas like autonomous weapons or unethical
surveillance.
5. Lack of Accountability: Ambiguities in assigning liability for AI-driven errors or failures.
6. Existential Risk: AI surpassing human control, posing threats to human survival.

Agent Architectures in AI:


Agent architectures are blueprints for software agents and intelligent control systems. A
complete agent integrates both theoretical and practical elements using a hybrid
architecture, combining multiple decision-making components.

Theoretical Insights
1.Hybrid Architectures:

Hybrid systems integrate reflex and goal-oriented actions.
Boundaries between components evolve through compilation, which converts declarative
knowledge at the deliberative level into reflex-level actions for efficiency.

2.Deliberation Control:

Anytime Algorithms: Produce a result that improves incrementally and is usable whenever the
computation is interrupted.
Metareasoning: Optimizes how much computational effort to spend, using decision-theoretic principles.

3.Reflective Systems:

Use metareasoning to refine decision-making, balancing computational cost with benefits.
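
To make the idea of an anytime algorithm concrete, here is a minimal Python sketch (the estimation task, function name, and time budget are illustrative, not from the source): the result improves with every iteration, and a usable answer is available whenever deliberation is cut short.

import random
import time

def anytime_pi_estimate(budget_seconds):
    """Anytime Monte Carlo estimate of pi: the estimate improves with every
    sample, and a usable answer exists whenever the time budget runs out."""
    inside, total, estimate = 0, 0, None
    deadline = time.monotonic() + budget_seconds
    while time.monotonic() < deadline:
        x, y = random.random(), random.random()
        inside += (x * x + y * y) <= 1.0
        total += 1
        estimate = 4.0 * inside / total     # always ready if interrupted here
    return estimate, total

# A metareasoning layer could pick the budget by weighing the expected
# improvement in answer quality against the cost of further computation.
print(anytime_pi_estimate(0.05))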

Practical Components

Agent systems rely on hardware to interact with the environment through sensors and
actuators:

1. Sensors:
Sensors detect environmental changes and provide critical data to intelligent agents.
Accelerometers: Measure motion or orientation, used in devices like smartphones and
VR headsets.
Ultrasonic Sensors (SONAR): Emit ultrasonic waves to measure object distance,
commonly used in robotics and vehicle systems.
2. Actuators:
Enable the agent to take action based on its perceptions and decisions.
DC Motors: Enable simple, continuous motion, as seen in RC cars.
BLDC Motors: Power-efficient motors used in drones and electric vehicles.
Servo Motors: Allow precise position control, used in robotic arms.
Stepper Motors: Facilitate accurate stepwise motion, such as in clocks or 3D printers.

Agent Components
An AI agent interacts with its environment through the following key components:

1.Interaction with the Environment (Sensors and Actuators)


Sensors: Collect data from the environment, allowing the agent to perceive the world.
Actuators: Enable the agent to take action based on its perceptions and decisions.
Historical Challenge: Early AI systems depended on humans to supply input data and to
interpret the output.
2.Keeping Track of the State of the World

This is one of the core capabilities required for an intelligent agent. It requires both
perception and updating of internal representations.

3.Projecting and Selecting Future Actions

Knowledge Representation: The same principles apply to tracking the environment and
planning actions.
Goal: Plan a sequence of steps to achieve successful outcomes.

4.Utility as an Expression of Preferences

Rational Decision-Making: Decisions are based on maximizing expected utility.
Benefit: Helps prioritize actions and resolve conflicts between goals.
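
As a toy illustration of decision-making by maximizing expected utility (the actions, probabilities, and utility values below are invented for the example), a minimal Python sketch:

def expected_utility(outcomes):
    """Expected utility of one action: sum of probability * utility over outcomes."""
    return sum(p * u for p, u in outcomes)

def best_action(actions):
    """Choose the action whose expected utility is highest."""
    return max(actions, key=lambda name: expected_utility(actions[name]))

# Hypothetical choice for a delivery robot: each action maps to a list of
# (probability, utility) pairs over its possible outcomes.
actions = {
    "take_corridor": [(0.9, 10), (0.1, -5)],   # fast, but sometimes blocked
    "take_detour":   [(1.0, 6)],               # slower, but certain
}
print(best_action(actions))   # 'take_corridor' (expected utility 8.5 vs 6.0)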

5.Learning

Inductive Learning: AI agents improve through learning, where they adjust their functions
based on experiences or feedback.
Types of Learning:
Supervised Learning: Learning from labeled examples.
Unsupervised Learning: Identifying patterns in data without predefined labels.
Reinforcement Learning: Learning by trial and error through rewards and
punishments.
Role: Enhances the agent's ability to make better decisions and adapt to new situations
over time.
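
To illustrate learning by trial and error, here is a minimal tabular Q-learning sketch in Python (the two-state world, rewards, and parameters are made up for the example; this is not a complete agent):

import random
from collections import defaultdict

def q_update(Q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning update: nudge Q(s, a) toward
    reward + gamma * max over a' of Q(s', a')."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Tiny illustrative world: from "start", "go" reaches "goal" for +1 reward,
# while "wait" stays in "start" for no reward.
actions = ["go", "wait"]
Q = defaultdict(float)
for _ in range(500):
    a = random.choice(actions)                       # explore randomly
    if a == "go":
        q_update(Q, "start", a, 1.0, "goal", actions)
    else:
        q_update(Q, "start", a, 0.0, "start", actions)
print(Q[("start", "go")] > Q[("start", "wait")])     # True: "go" is learned to be better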

Are we going in the right direction


The rapid progress in AI presents both immense opportunities and profound challenges.
Whether we are on the "right path" depends on how well we address current challenges and
align AI development with human values.
Specifications for AI Development

To guide AI toward safe and effective applications, four theoretical frameworks emerge:

1.Perfect Rationality:

A perfectly rational agent maximizes expected utility at every moment, based on the
available information. However, achieving this is unrealistic due to the significant
computational time required in most environments.

2.Calculative Rationality:

A calculatively rational agent eventually computes the optimal decision through deliberation.
However, in many environments the right answer at the wrong time is of little value.
AI designers must balance decision quality against computation time, often without clear guidelines.

3.Bounded Rationality:

Herbert Simon introduced bounded rationality.
Human decision-making capacity is limited by:

1. Cognitive constraints

2. Problem complexity
Result: Humans "satisfice" (choose "good enough" solutions).

Limitation: It provides no formal framework for building intelligent agents, and "good enough" lacks a clear definition.

4.Bounded Optimality:

Bounded optimality balances decision quality with computational constraints, making it
a promising approach for AI development.
A boundedly optimal agent maximizes utility within its computational limits.

Advantages:
1. Achievable (unlike perfect rationality).

2. Offers a strong theoretical foundation for AI.

Future of Artificial Intelligence (If AI Succeeds)


AI success is when Artificial Intelligence systems seamlessly integrate into various sectors,
solve complex problems, and enhance human capabilities, achieving human-level or superior
performance with minimal errors and optimal efficiency.

Healthcare:
AI will revolutionize healthcare by enabling faster, more accurate diagnoses and drug
discovery. It will enhance patient engagement, streamline appointment scheduling, and
reduce errors. However, the challenge lies in ensuring AI adoption in daily clinical practices.
Cybersecurity:
AI will improve cybersecurity by monitoring security incidents, identifying the origin of cyber-
attacks using NLP, and automating rule-based tasks through RPA bots.

Transportation:
AI will contribute to autonomous vehicles and cockpit technology, improving pilot
performance and reducing stress. Challenges include the risk of over-dependence on
autonomous systems, especially in public transportation.

E-commerce:
AI will enhance the e-commerce sector through automated warehouses, personalized
shopping experiences, improved marketing, and chatbots for customer service.

Employment:
AI is transforming employment by automating resume screening, job search processes, and
interview evaluations. It’s expected that AI-enabled applications will play a larger role in
recruitment, including written tests and interviews.

Characteristics of AI Success:

1. Accuracy: Delivering precise and correct results.


2. Reliability: Consistently delivering results with minimal errors.
3. Scalability: Ability to handle increasing levels of complexity and data.
4. Adaptability: Ability to learn and adjust to new situations and environments.
5. Human-Centric Design: Focusing on enhancing human interaction and needs.

Robotics
Robotics is the branch of artificial intelligence that deals with the study of
creating intelligent and efficient robots.

What are Robots?


Robots are multifunctional, re-programmable, automatic industrial machines designed to
take over repetitive human tasks.
Examples of robot applications:

Automatic sweepers
Space exploration
Mine removal in war zones
Automated toys
Military operations

History of Robotics
The term "robot" was introduced by Czech writer Karel Čapek in his 1920 play Rossum's
Universal Robots (R.U.R.).
"Robotics" was coined by Isaac Asimov in the 1940s.

Asimov's Laws of Robotics

1. Zeroth Law: Robots must not harm humanity or allow it to come to harm.
2. First Law: Robots must not harm humans or allow harm through inaction, unless it
violates the Zeroth Law.
3. Second Law: Robots must obey human orders, except when conflicting with higher laws.
4. Third Law: Robots may protect themselves unless it conflicts with higher laws.

The First Industrial Robot: UNIMATE


In 1954, George Devol designed the first programmable robot and coined the term "Universal
Automation," later shortened to "Unimation." This became the name of the first robot company,
founded in 1962.

ROBOT HARDWARE

1. Mechanical Structure:
Body: Provides the robot's shape and houses other components.
Actuators: Devices (motors, hydraulic systems) that enable movement.
Joints and Links: Facilitate motion and flexibility in arms, legs, or appendages.
2. Electrical Components:
Power Source: Supplies energy (batteries, solar cells, or AC power).
Sensors: Collect data about the environment (e.g., cameras, proximity sensors).
3. Computational System:
Processor/Controller: Acts as the robot's brain, running software to process input and
control actions.
Memory: Stores programs and data for decision-making and tasks.
4. Communication Systems:
Interfaces (wired or wireless) enabling interaction with other machines, humans, or
networks.
5. End Effectors:
Tools or attachments (e.g., grippers, welding torches, or cameras) designed for
specific tasks.
6. Software/Programming:
Algorithms and instructions that determine how the robot operates and responds to
its environment.

These components work together to allow robots to perform a wide range of tasks efficiently.

Robotic Perception
Robotic perception is the ability of robots to collect, interpret, and understand
environmental and self-state information, enabling informed decision-making, adaptability,
and effective task execution.

Key Components of Robotic Perception


Sensors

Environmental Sensors: Capture data about the surroundings. Examples include cameras
(vision), lidar (distance), sonar (sound), and temperature sensors.

Proprioceptive Sensors: Provide internal information about the robot’s state, such as joint
angles, motor speeds, and battery levels.

Perceptual Models

Robots use models to interpret sensor data. These models can include:
Geometric Models: Represent shapes and distances.
Dynamic Models: Predict object motion and interaction outcomes.
Semantic Models: Assign meanings to objects or actions (e.g., identifying a "chair").

Machine Learning in Perception:

Machine learning helps when internal representations are not predefined:

Low-Dimensional Embedding: Reduces complex sensor data into simpler, meaningful forms.

Self-Supervised Learning: Robots gather labeled training data themselves, improving their
ability to classify and interpret sensor inputs.

Localization and Mapping


Localization involves determining the position of the robot and objects in its environment.
Accurate localization is crucial for navigation and interaction.
Pose in a 2D World:

The robot's pose is defined by a vector (x_t, y_t, θ_t), where x_t and y_t are Cartesian
coordinates and θ_t is the orientation.
Kinematic Model: Over a time step Δt, the robot updates its position by v_t Δt (linear
velocity times the time step) and its orientation by ω_t Δt (angular velocity times the time step).
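
A minimal Python sketch of this kinematic update (the variable names and example values are illustrative): the pose (x_t, y_t, θ_t) is advanced by the commanded linear velocity v_t and angular velocity ω_t over a time step Δt.

import math

def kinematic_update(x, y, theta, v, omega, dt):
    """Advance a 2D pose (x, y, theta) given linear velocity v [m/s],
    angular velocity omega [rad/s], and time step dt [s]."""
    x_new = x + v * dt * math.cos(theta)
    y_new = y + v * dt * math.sin(theta)
    theta_new = theta + omega * dt
    return x_new, y_new, theta_new

# Example: drive at 1 m/s while turning at 0.5 rad/s for one 0.1 s step.
print(kinematic_update(0.0, 0.0, 0.0, v=1.0, omega=0.5, dt=0.1))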

Sensor Models for Localization:

Range-Scan Sensors: Measure distances to landmarks.
Bayesian models compare the measured ranges with those predicted for a hypothesized pose,
estimating the likelihood of the robot's pose.
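
The following sketch illustrates the idea of such a sensor model (the landmark positions, noise level, and function names are assumptions): for a hypothesized pose it predicts the range to each known landmark and scores the measured ranges with a Gaussian likelihood.

import math

def pose_likelihood(pose, landmarks, measured_ranges, sigma=0.2):
    """Likelihood of the measured ranges given a hypothesized pose (x, y),
    assuming independent Gaussian noise with standard deviation sigma."""
    x, y = pose
    likelihood = 1.0
    for (lx, ly), z in zip(landmarks, measured_ranges):
        predicted = math.hypot(lx - x, ly - y)            # range this pose would imply
        likelihood *= (math.exp(-0.5 * ((z - predicted) / sigma) ** 2)
                       / (sigma * math.sqrt(2 * math.pi)))
    return likelihood

landmarks = [(0.0, 0.0), (4.0, 0.0)]
measured = [2.0, 2.0]                    # consistent with standing near (2, 0)
print(pose_likelihood((2.0, 0.0), landmarks, measured) >
      pose_likelihood((3.0, 1.0), landmarks, measured))   # True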

Robot Motion Planning

Motion Planning:
Motion planning involves breaking down a movement task into discrete motions that satisfy
constraints and optimize certain goals. This includes:

Point-to-Point Motion: Moving the robot or end effector to a target.


Compliant Motion: Moving while in physical contact with obstacles, such as a robot
manipulating objects.

Configuration Space:

Uses configuration space (location, orientation, joint angles) for path planning instead of
workspace coordinates.
Avoids having to deal explicitly with constraints such as the fixed distances between robot joints.

Cell Decomposition Methods:

Divides free space into cells to simplify path planning.
Grid decomposition is the simplest method, but it suffers from the "curse of
dimensionality" and from issues with "mixed" cells (cells that are partly free and partly occupied).
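
As a minimal illustration of planning on a grid decomposition (the grid, start, and goal are invented; a real planner would typically use A* with a heuristic), breadth-first search over free cells finds a shortest cell path:

from collections import deque

def grid_path(grid, start, goal):
    """Breadth-first search on an occupancy grid (0 = free, 1 = occupied).
    Returns a list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # walk back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and nxt not in came_from):
                came_from[nxt] = cell
                frontier.append(nxt)
    return None

grid = [[0, 0, 0],
        [1, 1, 0],        # a wall the path must go around
        [0, 0, 0]]
print(grid_path(grid, (0, 0), (2, 0)))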

Modified Cost Functions:


Potential fields guide the robot away from obstacles by adding obstacle-avoidance costs
to the path calculation.
This balances minimizing path length against staying clear of obstacles.
Post-planning smoothing improves control and produces smoother, more realistic motion.
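
A small sketch of such a modified cost function (the weights, influence radius, and example points are assumptions): the cost of a point combines attraction toward the goal with a repulsive term that grows near obstacles.

import math

def potential_cost(point, goal, obstacles, w_goal=1.0, w_obs=1.0, influence=2.0):
    """Potential-field-style cost: attraction toward the goal plus a repulsive
    penalty that grows as the point gets close to any obstacle."""
    px, py = point
    cost = w_goal * math.hypot(goal[0] - px, goal[1] - py)      # attraction term
    for ox, oy in obstacles:
        d = math.hypot(ox - px, oy - py)
        if d < influence:                                        # repulsion only nearby
            cost += w_obs * (1.0 / max(d, 1e-6) - 1.0 / influence)
    return cost

goal, obstacles = (5.0, 0.0), [(2.5, 0.0)]
# A point hugging the obstacle is penalized far more than one that detours around it.
print(potential_cost((2.6, 0.0), goal, obstacles))   # high cost (very close to obstacle)
print(potential_cost((2.6, 1.5), goal, obstacles))   # much lower cost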

PLANNING UNCERTAIN MOVEMENTS

Robots often use deterministic algorithms for planning, but uncertainty arises from sensor
noise or imperfect models. Online replanning allows robots to adjust paths based on updated
sensor data during execution. For more uncertain environments, the problem is modeled as a
Markov Decision Process (MDP) or a Partially Observable MDP (POMDP), where a robot
makes decisions based on both known and uncertain states.

Markov Decision Processes (MDPs):

MDPs model robot decisions under uncertainty when the state is fully observable.
Solutions involve creating an optimal policy to handle motion errors and decision-
making.
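
A minimal value-iteration sketch for such an MDP (the two-state world, transition probabilities, and rewards are invented for the example):

def value_iteration(states, actions, transition, reward, gamma=0.9, eps=1e-6):
    """Compute optimal state values and a greedy policy for a small MDP.
    transition(s, a) returns a list of (probability, next_state) pairs."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(reward(s, a) + gamma * sum(p * V[s2] for p, s2 in transition(s, a))
                       for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    policy = {s: max(actions, key=lambda a: reward(s, a) +
                     gamma * sum(p * V[s2] for p, s2 in transition(s, a)))
              for s in states}
    return V, policy

# Toy robot MDP: trying to move right sometimes slips back (motion error).
states, actions = ["left", "right"], ["go_right", "stay"]

def transition(s, a):
    if s == "left" and a == "go_right":
        return [(0.8, "right"), (0.2, "left")]
    return [(1.0, s)]

def reward(s, a):
    return 1.0 if (s == "left" and a == "go_right") else 0.0

V, policy = value_iteration(states, actions, transition, reward)
print(policy["left"])   # 'go_right': the optimal policy accepts the slip risk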

Partially Observable MDPs (POMDPs):

POMDPs handle uncertainty when the robot has incomplete knowledge of the
environment.
The robot uses a belief state (probability distribution) to make decisions based on both
known and uncertain states.
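
A minimal sketch of a belief-state update (the cells, observation, and sensor probabilities are invented): the belief over possible locations is reweighted by how well each location explains the observation, then renormalized.

def update_belief(belief, observation, sensor_model):
    """Bayes update of a discrete belief state:
    new_belief(s) is proportional to P(observation | s) * belief(s)."""
    unnormalized = {s: sensor_model[s][observation] * p for s, p in belief.items()}
    total = sum(unnormalized.values())
    return {s: v / total for s, v in unnormalized.items()}

# The robot is unsure whether it faces cell A, B, or C.
belief = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}
# Hypothetical sensor model: probability of each reading in each cell.
sensor_model = {"A": {"door": 0.8, "wall": 0.2},
                "B": {"door": 0.1, "wall": 0.9},
                "C": {"door": 0.1, "wall": 0.9}}
print(update_belief(belief, "door", sensor_model))   # {'A': 0.8, 'B': 0.1, 'C': 0.1}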

ROBOTIC SOFTWARE ARCHITECTURES


A robotic software architecture is a methodology for structuring algorithms, including the
tools and languages that support program development.
Hybrid architectures combine both reactive and deliberative techniques.

Subsumption Architecture (by Rodney Brooks):

Levels of Commands: Higher-level commands inhibit lower-level ones.


Fault Tolerance: If higher-level commands are missing, the robot can still operate at a
reduced capacity.
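
A minimal sketch of this inhibition scheme, following the description above (the behaviors and commands are invented): the highest layer that produces a command wins, and if higher layers stay silent or fail, the layers below keep the robot running at reduced capacity.

def wander(sensors):
    """Low-level reflex layer: always keep moving forward."""
    return "move_forward"

def avoid(sensors):
    """Higher layer: issues a command only when an obstacle is close;
    otherwise it stays silent and does not inhibit the layer below."""
    if sensors.get("obstacle_distance", float("inf")) < 0.5:
        return "turn_left"
    return None

def subsumption_step(sensors, layers):
    """The highest layer that produces a command inhibits the ones below it.
    If higher layers are silent (or missing), lower layers still act."""
    for layer in layers:                      # ordered from highest to lowest
        command = layer(sensors)
        if command is not None:
            return command
    return "stop"

layers = [avoid, wander]
print(subsumption_step({"obstacle_distance": 0.3}, layers))   # turn_left
print(subsumption_step({"obstacle_distance": 2.0}, layers))   # move_forward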

Three-Layer Architecture (Hybrid):

Reactive Layer: Provides low-level control with a tight sensor-action loop, typically
operating on a millisecond cycle.
Executive Layer: Acts as the bridge between the reactive and deliberative layers, handles
directives from the deliberative layer, and integrates sensor information for internal state
representation (decision cycle: 1 second).
Deliberative Layer: Generates global solutions for complex tasks using planning models,
often with a decision cycle of minutes.
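
A skeleton of the three loops running at very different rates (the tick counts, behaviors, and state fields are illustrative only):

def reactive_layer(state):
    """Millisecond-scale loop: tight sensor-action coupling."""
    state["command"] = "brake" if state["obstacle"] else state.get("directive", "idle")

def executive_layer(state):
    """Roughly one-second cycle: turn the current plan step into a directive."""
    plan = state.get("plan", [])
    state["directive"] = plan[0] if plan else "idle"

def deliberative_layer(state):
    """Minutes-scale cycle: (re)plan a global route."""
    state["plan"] = ["go_to_waypoint_1", "go_to_waypoint_2"]

state = {"obstacle": False}
for tick in range(3000):              # pretend one tick is one millisecond
    if tick % 60000 == 0:             # deliberative layer: about once a minute
        deliberative_layer(state)
    if tick % 1000 == 0:              # executive layer: about once a second
        executive_layer(state)
    reactive_layer(state)             # reactive layer: every tick
print(state["command"])               # go_to_waypoint_1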

Pipeline Architecture:

Executes multiple processes in parallel, similar to subsumption architecture.


Sensor Interface Layer: Collects raw data and passes it to the Perception Layer, which updates
the internal models.
Planning and Control Layer: Takes the plans and turns them into actual controls for the robot,
which are then communicated back to the vehicle through the vehicle interface layer.
Asynchronous Operations: Processes run concurrently (perception, planning, and
acting), resulting in a fast, robust system, similar to the human brain.
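
A small sketch of a pipeline with asynchronous stages connected by queues (the stage names and messages are invented): each stage consumes from its inbox and forwards results downstream, so perception and control run concurrently.

import queue
import threading

def stage(name, inbox, outbox, transform):
    """One pipeline stage: consume items from its inbox, transform them,
    and pass the results downstream. All stages run concurrently."""
    while True:
        item = inbox.get()
        if item is None:                 # shutdown signal
            if outbox is not None:
                outbox.put(None)         # propagate shutdown downstream
            break
        result = transform(item)
        if outbox is not None:
            outbox.put(result)
        else:
            print(name, "->", result)    # last stage: act on the result

raw, percepts = queue.Queue(), queue.Queue()
threads = [
    threading.Thread(target=stage,
                     args=("perception", raw, percepts, lambda scan: f"model({scan})")),
    threading.Thread(target=stage,
                     args=("control", percepts, None, lambda m: f"steer_using({m})")),
]
for t in threads:
    t.start()
for scan in ["scan_1", "scan_2"]:        # sensor interface layer pushes raw data
    raw.put(scan)
raw.put(None)                            # shut the pipeline down
for t in threads:
    t.join()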
