UNIT V Robotics
Philosophical foundations: Weak AI, Strong AI, Ethics and Risks of AI, Agent Components,
Agent Architectures, Are we going in the right direction, What if AI does succeed.
PHILOSOPHICAL FOUNDATIONS
1. Strong AI: Can Machines Really Think?
Strong AI, also known as Artificial General Intelligence (AGI), is AI with general cognitive
abilities, capable of understanding, learning, and applying knowledge across diverse
tasks, similar to human intelligence.
A machine can perform tasks on its own, just like human beings.
This type of AI allows machines to solve problems in an unlimited domain.
It is an independent AI which can take the decision on its own.
Examples (Hypothetical):
1. Personal Assistants:
Fully autonomous AI capable of understanding emotions, managing schedules, and
making informed decisions.
2. Autonomous Robots:
Robots that can independently handle complex tasks such as caregiving or disaster
management.
3. Self-Driving Cars:
Vehicles that understand complex traffic scenarios and make decisions as humans would.
4. Intelligent Diagnosis: AI accurately diagnoses diseases by analyzing patient conditions,
symptoms, and history.
5. AI-Powered Education: Adaptive systems personalize learning based on individual styles
and needs.
Weak AI, also known as Narrow AI, is a type of artificial intelligence designed to perform a
specific task or set of tasks.
A machine can perform tasks but needs human intervention.
This type of AI allows machines to solve problems in a limited domain.
It is dependent on humans and simulates human behavior rather than genuinely replicating it.
Examples Explained:
1. Virtual Assistants:
Tools like Siri and Alexa respond to voice commands, manage schedules, or play music.
2. Image Recognition:
AI used in identifying faces or objects in photos (e.g., facial recognition in security).
3. Language Translation:
Google Translate or other AI-powered tools convert text from one language to another.
4. Chatbots:
Automated systems that provide customer support or answer queries online.
5. Recommendation Systems:
Netflix, Amazon, or Spotify suggest content based on user preferences and past behavior.
Ethics and Risks of AI
1. Job Loss to Automation: Robots and AI replacing human labor, causing widespread
unemployment.
2. Impact on Leisure Time: Excessive or insufficient leisure due to automation, leading to
societal boredom or stress.
3. Loss of Uniqueness: AI suggesting humans are like machines, undermining autonomy and
humanity.
4. Misuse of AI: Deployment in harmful areas like autonomous weapons or unethical
surveillance.
5. Lack of Accountability: Ambiguities in assigning liability for AI-driven errors or failures.
6. Existential Risk: AI surpassing human control, posing threats to human survival.
Theoretical Insights
1. Hybrid Architectures: Combine reactive components (fast, low-level responses) with
deliberative components (slower, planning-based reasoning), giving the agent both
responsiveness and foresight.
2. Deliberation Control: Mechanisms such as anytime algorithms and metareasoning that
decide how much time the agent should spend deliberating before it must act.
3. Reflective Systems: Architectures that can reason about their own computations and
adjust their own deliberation strategies.
Practical Components
Agent systems rely on hardware to interact with the environment through sensors and
actuators:
1. Sensors:
Sensors detect environmental changes and provide critical data to intelligent agents.
Accelerometers: Measure motion or orientation, used in devices like smartphones and
VR headsets.
Ultrasonic Sensors (SONAR): Emit ultrasonic waves to measure object distance,
commonly used in robotics and vehicle systems.
2. Actuators:
Enable the agent to take action based on its perceptions and decisions.
DC Motors: Enable simple, continuous motion, as seen in RC cars.
BLDC Motors: Power-efficient motors used in drones and electric vehicles.
Servo Motors: Allow precise position control, used in robotic arms.
Stepper Motors: Facilitate accurate stepwise motion, such as in clocks or 3D printers.
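The ultrasonic (SONAR) ranging described above can be sketched in a few lines of Python. The function name and the 343 m/s speed of sound (air at roughly 20 °C) are illustrative assumptions, not part of any particular sensor's API:

```python
def ultrasonic_distance(echo_time_s, speed_of_sound=343.0):
    """Distance (in meters) from an ultrasonic sensor's echo time.

    The wave travels to the object and back, so the round-trip
    time is halved. 343 m/s is an assumed speed of sound in air.
    """
    return speed_of_sound * echo_time_s / 2.0

# An echo arriving after 10 ms corresponds to about 1.715 m.
print(ultrasonic_distance(0.010))
```

Real sensor drivers report the echo time; the same halved-round-trip arithmetic applies regardless of hardware.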
Agent Components
An AI agent interacts with its environment through the following key components:
4. Keeping Track of the World: This is one of the core capabilities required for an
intelligent agent. It requires both perception and updating of internal representations.
5. Learning
Inductive Learning: AI agents improve through learning, where they adjust their functions
based on experiences or feedback.
Types of Learning:
Supervised Learning: Learning from labeled examples.
Unsupervised Learning: Identifying patterns in data without predefined labels.
Reinforcement Learning: Learning by trial and error through rewards and
punishments.
Role: Enhances the agent's ability to make better decisions and adapt to new situations
over time.
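Supervised learning, the first type above, can be shown in miniature without any library: fitting a line to labeled examples by ordinary least squares. The function name and data are hypothetical, chosen only to illustrate "learning from labeled examples":

```python
def fit_line(xs, ys):
    """Supervised learning in miniature: fit y = a*x + b to the
    labeled examples (xs, ys) by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of (x, y) divided by variance of x.
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Labeled examples generated from y = 2x + 1; the learner recovers it.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # 2.0 1.0
```

Unsupervised and reinforcement learning differ only in the feedback: no labels at all, or delayed rewards instead of correct answers.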
Are We Going in the Right Direction?
To guide AI toward safe and effective applications, four theoretical frameworks emerge:
1. Perfect Rationality: The agent always acts so as to maximize its expected utility.
Limitation: computing the perfectly rational action takes time, so perfect rationality is
infeasible in complex environments.
2. Calculative Rationality: The agent's program will eventually compute the rational action,
but possibly long after the moment for action has passed.
3. Bounded Rationality (Herbert Simon): Real deliberation is limited by:
1. Cognitive constraints
2. Problem complexity
Result: Humans "satisfice" (choose "good enough" solutions).
Limitation: No formal framework for intelligent agents; "good enough" lacks clear definition.
4. Bounded Optimality: The agent program behaves as well as possible given the
computational resources of its machine.
Advantages:
1. Achievable (unlike perfect rationality).
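Simon's "satisficing" idea can be sketched directly: instead of searching every option for the maximum, accept the first one that clears an aspiration threshold. All names and numbers here are hypothetical illustrations:

```python
def satisfice(options, utility, threshold):
    """Herbert Simon's 'satisficing': return the first option whose
    utility meets an aspiration threshold, rather than exhaustively
    searching for the maximum (which perfect rationality demands)."""
    for option in options:
        if utility(option) >= threshold:
            return option
    return None  # nothing was "good enough"

# Hypothetical route choice: any route averaging >= 50 km/h is acceptable.
routes = ["scenic", "highway", "back_roads"]
speed = {"scenic": 40, "highway": 90, "back_roads": 55}.get
print(satisfice(routes, speed, threshold=50))  # highway
```

Note the framework's acknowledged weakness appears right in the code: the threshold is arbitrary, which is exactly the "no clear definition of good enough" limitation.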
What If AI Does Succeed?
Healthcare:
AI will revolutionize healthcare by enabling faster, more accurate diagnoses and drug
discovery. It will enhance patient engagement, streamline appointment scheduling, and
reduce errors. However, the challenge lies in ensuring AI adoption in daily clinical practices.
Cybersecurity:
AI will improve cybersecurity by monitoring security incidents, identifying the origin of
cyberattacks using NLP, and automating rule-based tasks through RPA bots.
Transportation:
AI will contribute to autonomous vehicles and cockpit technology, improving pilot
performance and reducing stress. Challenges include the risk of over-dependence on
autonomous systems, especially in public transportation.
E-commerce:
AI will enhance the e-commerce sector through automated warehouses, personalized
shopping experiences, improved marketing, and chatbots for customer service.
Employment:
AI is transforming employment by automating resume screening, job search processes, and
interview evaluations. It’s expected that AI-enabled applications will play a larger role in
recruitment, including written tests and interviews.
Characteristics of AI Success:
Robotics
Robotics is the branch of artificial intelligence that deals with the study, design, and use of
robots. Typical applications include:
Automatic sweepers
Space exploration
Mine removal in war zones
Automated toys
Military operations
History of Robotics
The term "robot" was introduced by Czech writer Karel Čapek in his 1920 play Rossum's
Universal Robots (R.U.R.).
"Robotics" was coined by Isaac Asimov in the 1940s, who also proposed his famous Laws of
Robotics:
1. Zeroth Law: Robots must not harm humanity or allow it to come to harm.
2. First Law: Robots must not harm humans or allow harm through inaction, unless it
violates the Zeroth Law.
3. Second Law: Robots must obey human orders, except when conflicting with higher laws.
4. Third Law: Robots may protect themselves unless it conflicts with higher laws.
ROBOT HARDWARE
1. Mechanical Structure:
Body: Provides the robot's shape and houses other components.
Actuators: Devices (motors, hydraulic systems) that enable movement.
Joints and Links: Facilitate motion and flexibility in arms, legs, or appendages.
2. Electrical Components:
Power Source: Supplies energy (batteries, solar cells, or AC power).
Sensors: Collect data about the environment (e.g., cameras, proximity sensors).
3. Computational System:
Processor/Controller: Acts as the robot's brain, running software to process input and
control actions.
Memory: Stores programs and data for decision-making and tasks.
4. Communication Systems:
Interfaces (wired or wireless) enabling interaction with other machines, humans, or
networks.
5. End Effectors:
Tools or attachments (e.g., grippers, welding torches, or cameras) designed for
specific tasks.
6. Software/Programming:
Algorithms and instructions that determine how the robot operates and responds to
its environment.
These components work together to allow robots to perform a wide range of tasks efficiently.
Robotic Perception
Robotic perception is the ability of robots to collect, interpret, and understand
environmental and self-state information, enabling informed decision-making, adaptability,
and effective task execution.
Environmental Sensors: Capture data about the surroundings. Examples include cameras
(vision), lidar (distance), sonar (sound), and temperature sensors.
Proprioceptive Sensors: Provide internal information about the robot’s state, such as joint
angles, motor speeds, and battery levels.
Perceptual Models
Robots use models to interpret sensor data. These models can include:
Geometric Models: Represent shapes and distances.
Dynamic Models: Predict object motion and interaction outcomes.
Semantic Models: Assign meanings to objects or actions (e.g., identifying a "chair").
Low-Dimensional Embedding: Reduces complex sensor data into simpler, meaningful forms.
Self-Supervised Learning: Robots gather labeled training data themselves, improving their
ability to classify and interpret sensor inputs.
The robot's pose is defined by a vector (xt, yt, θt), where xt and yt are Cartesian
coordinates and θt is the orientation (heading angle).
Kinematic Model: Over a time step Δt, a robot moving with speed vt and turning rate ωt
updates its pose as:
xt+1 = xt + vtΔt cos θt
yt+1 = yt + vtΔt sin θt
θt+1 = θt + ωtΔt
Motion Planning:
Motion planning involves breaking down a movement task into discrete motions that satisfy
constraints and optimize certain goals. This includes:
Configuration Space:
Uses configuration space (location, orientation, joint angles) for path planning instead of
workspace coordinates.
Helps avoid constraints like fixed distances between robot joints.
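A minimal sketch of planning over a discretized configuration space is breadth-first search on a grid with obstacle cells removed. This is an illustrative toy (the function name and grid are assumptions), not a full motion planner:

```python
from collections import deque

def grid_path(start, goal, obstacles, width, height):
    """Motion planning in miniature: BFS over a discretized
    configuration space (a grid), skipping obstacle cells."""
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:  # reconstruct the path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= nx < width and 0 <= ny < height
                    and nxt not in obstacles and nxt not in came_from):
                came_from[nxt] = cell
                frontier.append(nxt)
    return None  # goal unreachable

print(grid_path((0, 0), (2, 2), obstacles={(1, 0), (1, 1)}, width=3, height=3))
```

Real planners work the same way in spirit but search higher-dimensional configuration spaces (joint angles, orientation) and use algorithms such as A* or sampling-based methods.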
Robots often use deterministic algorithms for planning, but uncertainty arises from sensor
noise or imperfect models. Online replanning allows robots to adjust paths based on updated
sensor data during execution. For more uncertain environments, the problem is modeled as a
Markov Decision Process (MDP) or a Partially Observable MDP (POMDP), where a robot
makes decisions based on both known and uncertain states.
MDPs model robot decisions under uncertainty when the state is fully observable.
Solutions involve creating an optimal policy to handle motion errors and decision-making.
POMDPs handle uncertainty when the robot has incomplete knowledge of the
environment.
The robot uses a belief state (probability distribution) to make decisions based on both
known and uncertain states.
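The belief-state idea can be made concrete with a one-step Bayes update: multiply the prior belief over states by the likelihood of the observation in each state, then renormalize. The room names and probabilities below are hypothetical:

```python
def belief_update(belief, likelihood):
    """One observation step of a POMDP-style belief update:
    posterior(s) ∝ prior(s) * P(observation | s), renormalized."""
    posterior = {s: belief[s] * likelihood[s] for s in belief}
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}

# A robot unsure which of two rooms it is in; its sensor reading is
# twice as likely in room A as in room B (illustrative numbers).
belief = {"A": 0.5, "B": 0.5}
print(belief_update(belief, {"A": 0.8, "B": 0.4}))
# A becomes twice as probable as B: {'A': 0.666..., 'B': 0.333...}
```

A full POMDP solver would also account for state transitions between observations and choose actions that are good across the whole belief distribution.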
Three-Layer Architecture:
Reactive Layer: Provides low-level control with a tight sensor-action loop, typically
operating on a millisecond cycle.
Executive Layer: Acts as the bridge between the reactive and deliberative layers, handles
directives from the deliberative layer, and integrates sensor information for internal state
representation (decision cycle: 1 second).
Deliberative Layer: Generates global solutions for complex tasks using planning models,
often with a decision cycle of minutes.
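A skeleton of these three layers can be sketched as one class per timescale. Everything here (class name, waypoints, the one-meter safety limit) is an assumption for illustration, not a real robot framework:

```python
class ThreeLayerController:
    """Sketch of the three-layer architecture: each method stands in
    for a layer running on its own timescale; the faster reactive
    layer can override what the slower layers decided."""

    def reactive(self, distance_m, limit=1.0):
        # Millisecond loop: tight sensor-action coupling,
        # e.g. emergency stop when an obstacle is too close.
        return "STOP" if distance_m < limit else "CONTINUE"

    def executive(self, directive):
        # ~1 s loop: expand a deliberative directive (waypoints)
        # into concrete low-level commands.
        return [f"move_to {wp}" for wp in directive]

    def deliberative(self, goal):
        # Minutes-scale loop: global planning; here just a canned
        # waypoint plan standing in for a real planner.
        return [(0, 0), (1, 1), (2, 2)] if goal == "dock" else []
```

Usage mirrors the decision cycles: `deliberative` produces a plan occasionally, `executive` turns it into commands every second or so, and `reactive` vets every command against fresh sensor data.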
Pipeline Architecture: