ROBOTICS

MODULE 3
• There are several ways to communicate with a robot; three major approaches are discrete word recognition, teach and playback, and high-level programming languages.
• Teach and playback is typically accomplished by the following steps:
(1) leading the robot in slow motion using manual control through the entire assembly
task and recording the joint angles of the robot at appropriate locations in order to replay the
motion;
(2) editing and playing back the taught motion; and
(3) if the taught motion is correct, then the robot is run at an appropriate speed in a
repetitive mode.
• Leading the robot in slow motion can usually be achieved in several ways: using a joystick, a set of pushbuttons (one for each joint), or a master-slave manipulator system.

• The most commonly used system is a manual box with pushbuttons.

• With this method, the user moves the robot manually through the space, and presses a button to record any desired angular position of the
manipulator.

• The set of angular positions that are recorded forms the set-points of the trajectory that the manipulator has traversed.

• These position set-points are then interpolated by numerical methods, and the robot is "played back" along the smoothed trajectory.
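
A minimal sketch (not from the text) of how recorded joint-angle set-points might be interpolated into a smoothed playback trajectory. The joint values, time stamps, and the choice of a cubic spline are illustrative assumptions only.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Joint-angle set-points (radians) recorded at the taught locations:
# one row per recorded point, one column per joint.
setpoints = np.array([
    [0.00, -0.50, 1.20],
    [0.40, -0.30, 1.00],
    [0.80,  0.10, 0.60],
    [1.10,  0.40, 0.20],
])
t_record = np.linspace(0.0, 3.0, len(setpoints))   # assumed recording times (s)

# Fit a cubic spline through the set-points, then sample it densely to
# obtain the smoothed trajectory that is "played back".
spline = CubicSpline(t_record, setpoints, axis=0)
t_play = np.linspace(0.0, 3.0, 151)
trajectory = spline(t_play)        # 151 x 3 array of joint commands
```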

• In the edit-playback mode, the user can edit the recorded angular positions and make sure that the robot will not collide with obstacles
while completing the task.

• In the run mode, the robot will run repeatedly according to the edited and smoothed trajectory.
• If the task is changed, then the above three steps are repeated.

• The advantages of this method are that it requires only a relatively small memory space to record angular positions and it is simple to
learn.

• The main disadvantage is that it is difficult to utilize this method for integrating sensory feedback information into the control system.
• High-level programming languages provide a more general approach to solving the human-robot communication problem.

• The use of robots to perform assembly tasks requires high-level programming techniques because robot assembly usually relies on sensory feedback, and this type of unstructured interaction can only be handled by conditionally programmed methods.

Considerations which must be handled by any robot programming method:

• The objects to be manipulated by a robot are three-dimensional objects which have a variety of physical properties; robots operate in a
spatially complex environment;

• The description and representation of three-dimensional objects in a computer are imprecise; and

• Sensory information has to be monitored, manipulated, and properly utilized.

Current approaches to programming can be classified into two major categories:

• robot-oriented programming and object-oriented, or task-level programming.

• In robot-oriented programming, an assembly task is explicitly described as a sequence of robot motions.

• The robot is guided and controlled by the program throughout the entire task with each statement of the program roughly corresponding
to one action of the robot.

• Task-level programming describes the assembly task as a sequence of positional goals of the objects rather than the motion of the robot needed to achieve these goals, and hence no explicit robot motion is specified.
CHARACTERISTICS OF ROBOT-LEVEL LANGUAGES

We can easily recognize several key characteristics that are common to all robot-oriented languages by examining the steps
involved in developing a robot program.

Consider the task of inserting a bolt into a hole.

This requires moving the robot to the feeder, picking up the bolt, moving it to the beam and inserting the bolt into one of the
holes.

Typically, the steps taken to develop the program are:

1. The workspace is set up and the parts are fixed by the use of fixtures and feeders.
2. The location (orientation and position) of the parts (feeder, beam, etc.) and their features (beam_bore, bolt_grasp, etc.) are
defined using the data structures provided by the language.
3. The assembly task is partitioned into a sequence of actions such as moving the robot, grasping objects, and performing an
insertion.
4. Sensory commands are added to detect abnormal situations (such as inability to locate the bolt while grasping) and monitor
the progress of the assembly task.
5. The program is debugged and refined by repeating steps 2 to 4.
The important characteristics we recognized are:

position specification (step 2),
motion specification (step 3), and
sensing (step 4).

We use the languages AL (Mujtaba et al. [1982]) and AML (Taylor et al. [1983]) as examples.

AL has influenced the design of many robot-oriented languages and is still actively being developed. It provides a large
set of commands to handle the requirements of robot programming and it also supports high-level programming
features.

AML is currently available as a commercial product for the control of IBM's robots, and its approach differs from that of AL. Its design philosophy is to provide a system environment where different robot programming interfaces may be built. Thus, it has a rich set of primitives for robot operations and allows users to design high-level commands according to their particular needs.

These two languages represent the state of the art in robot-oriented programming languages.
Position Specification

In robot assembly the robot and the parts are generally confined to a well-defined workspace.

The parts are usually restricted by fixtures and feeders to minimize positional uncertainties.

Assembly from a set of randomly placed parts requires vision and is not yet a common practice in industry.

The most common approach used to describe the orientation and the position of the objects in the workspace is by coordinate
frames.

They are usually represented as 4 x 4 homogeneous transformation matrices.

A frame consists of a 3 x 3 submatrix (specifying the orientation) and a vector (specifying the position), which are defined with respect to some base frame.
Table 9.2 shows the AL and AML definitions for the three frames base, beam, and feeder shown in the figure.

The approach taken by AL is to provide predefined data structures for frames (FRAME), rotational matrices (ROT), and vectors (VECTOR), all of them in cartesian coordinates.

AML provides a general structure called an aggregate which allows the user to design his or her own data structures.

The AML frames defined in Table 9.2 are in cartesian coordinates, and the format is <vector, matrix>, where vector is an aggregate of three scalars representing position and matrix is an aggregate of three vectors representing orientation.

The first statement in AL means the establishment of the coordinate frame base, whose principal axes are parallel (nilrot implies no
rotation) to the principal axes of the reference frame and whose origin is at location (20, 0, 15) inches from the origin of the
reference frame.

The second statement in AL establishes the coordinate frame beam, whose principal axes are rotated 90° about the Z axis of the
reference frame, and whose origin is at location (20, 15, 0) inches from the origin of the reference frame.

The third statement has the same meaning as the first, except for location.

The meaning of the three statements in AML is exactly the same as for those in AL.
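
As a hedged illustration (not AL or AML code), two of the frames just discussed can be written as 4 x 4 homogeneous transforms in a few lines of Python; the helper names are arbitrary, and the feeder frame is omitted because only its differing location is mentioned.

```python
import numpy as np

def frame(R, p):
    """Assemble a 4 x 4 homogeneous transform from a 3 x 3 rotation R and a position p."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

def rot_z(deg):
    """Rotation matrix for a rotation of `deg` degrees about the Z axis."""
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# base: no rotation (nilrot), origin at (20, 0, 15) inches in the reference frame
base = frame(np.eye(3), [20, 0, 15])
# beam: rotated 90 degrees about Z, origin at (20, 15, 0) inches
beam = frame(rot_z(90), [20, 15, 0])
```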
A convenient way of referring to the features of an object is to define a frame (with respect to the object's base frame) for it.
An advantage of using homogeneous transformation matrices is that a frame relative to a base frame can be defined simply by post-multiplying the base frame by a transformation matrix.
Table 9.3 shows the AL and AML statements used to define the features T6, E, bolt_tip, bolt_grasp, and beam_bore with respect to their base frames.

AL provides a matrix multiplication operator (*) and a data structure TRANS (a transformation which consists of a rotation and a translation operation) to represent transformation matrices.

AML has no built-in matrix multiplication operator, but a system subroutine, DOT, is provided.

In order to illustrate the meaning of the statements in Table 9.3, the first AL statement means the establishment of the
coordinate frame T6, whose principal axes are rotated 180° about the X axis of the base coordinate frame, and whose origin
is at location (15, 0, 0) inches from the origin of the base coordinate frame.

The second statement establishes the coordinate frame E, whose principal axes are parallel (nilrot implies no rotation) to the principal axes of the T6 coordinate frame, and whose origin is at location (0, 0, 5) inches from the origin of the T6 coordinate frame.

Similar comments apply to the other three AL statements.

The meaning of the AML statements is the same as those for AL.
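
A short sketch of the post-multiplication idea, reusing the same kind of helpers as above. The numbers reproduce the T6 definition just described (180° about X, origin (15, 0, 0) in the base frame); everything else is an assumption for illustration, not AL syntax.

```python
import numpy as np

def frame(R, p):
    """4 x 4 homogeneous transform from a 3 x 3 rotation R and a position p."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

def rot_x(deg):
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

base = frame(np.eye(3), [20, 0, 15])        # base frame defined earlier
base_to_T6 = frame(rot_x(180), [15, 0, 0])  # relative transform for the feature T6
T6 = base @ base_to_T6                      # post-multiply the base frame by the transform
```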
Figure 9.2a shows the relationships between the frames we have defined in Tables 9.2 and 9.3.
Note that the frames defined for the arm are not needed for AL because AL uses an implicit frame to represent the position
of the end-effector and does not allow access to intermediate frames (T6, E).
Another way of acquiring the position and orientation of an object is by using the robot as a pointing device to gather the
information interactively.

POINTY (Grossman and Taylor [1978]), a system designed for AL, allows the user to lead the robot through the workspace (by hand or by a pendant) and, by pointing the hand (equipped with a special tool) to objects, it generates AL declarations similar to those shown in Tables 9.2 and 9.3.

This eliminates the need to measure the distances and angles between frames, which can be quite tedious.

Although coordinate frames are quite popular for representing robot configurations, they do have some limitations.

The natural way to represent robot configurations is in the joint-variable space rather than the cartesian space.

Since the inverse kinematics problem gives nonunique solutions, the robot's configuration is not uniquely determined given a
point in the cartesian space. As the number of features and objects increases, the relationships between coordinate frames
become complicated and difficult to manage.

Furthermore, the number of computations required also increases significantly.


MOTION SPECIFICATION

The most common operation in robot assembly is the pick-and-place operation.

It consists of moving the robot from an initial configuration to a grasping configuration, picking up an object, and moving
to a final configuration.

The motion is usually specified as a sequence of positional goals for the robot to attain.

• However, specifying only the initial and final configurations is not sufficient.

• The path is planned by the system without considering the objects in the workspace, so obstacles may be present on the planned path.

In order for the system to generate a collision-free path, the programmer must specify enough intermediate or via points on
the path.
For example, in Fig. 9.3, if a straight line motion were used from point A to point C, the robot would collide with the beam.
Thus, intermediate point B must be used to provide a safe path.

The positional goals can be specified either in the joint-variable space or in the cartesian space, depending on the language.

In AL, the motion is specified by using the MOVE command to indicate the destination frame the arm should move to.
Via points can be specified by using the keyword "VIA" followed by the frame of the via point (see Table 9.4).
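
The via-point idea can be sketched numerically as follows. The positions for A, B, and C are hypothetical stand-ins for the points in the figure, and a simple piecewise-linear interpolation stands in for whatever trajectory generator a real system would use.

```python
import numpy as np

A = np.array([25.0,  5.0, 10.0])   # assumed start position (inches)
B = np.array([25.0, 15.0, 14.0])   # via point chosen to clear the beam
C = np.array([20.0, 15.0,  1.0])   # assumed goal position near beam_bore

def piecewise_path(points, samples_per_segment=20):
    """Linearly interpolate between successive via points."""
    segments = []
    for p, q in zip(points[:-1], points[1:]):
        t = np.linspace(0.0, 1.0, samples_per_segment, endpoint=False)[:, None]
        segments.append(p + t * (q - p))
    segments.append(points[-1][None, :])
    return np.vstack(segments)

path = piecewise_path([A, B, C])   # the path passes through B instead of cutting A -> C
```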

AML allows the user to specify motion in the joint-variable space and the user can write his or her own routines to specify motions in the
cartesian space.

• Joints are specified by joint numbers (1 through 6) and the motion can be either relative or absolute (see Table 9.4).

One disadvantage of this type of specification is that the programmer must preplan the entire motion in order to select the intermediate points.

The resulting path may produce awkward and inefficient motions.

Describing a complex path as a sequence of points produces an unnecessarily long program.

As the robot's hand departs from its starting configuration or approaches its final configuration, physical constraints, such as an insertion, which
require the hand to travel along an axis, and environmental constraints, such as moving in a crowded area, may prohibit certain movement of the
robot.

The programmer must have control over various details of the motion such as speed, acceleration, deceleration, approach and departure directions
to produce a safe motion.
Instead of providing separate commands for these details, the usual approach is to treat them as constraints to be satisfied by the move command.

AL provides a keyword "WITH" to attach constraint clauses to the move command.

The constraints can be an approach vector, departure vector, or a time limit.

Table 9.5 shows the AL statements for moving the robot from bolt_grasp to A with departure direction along +Z of feeder and
time duration of 5 seconds (i.e., move slowly).

• In AML, aggregates of the form <speed, acceleration, deceleration> can be added to the MOVE statement to specify the speed, acceleration, and deceleration of the robot.
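
To make such a speed/acceleration/deceleration constraint concrete, the following sketch (values and function name are hypothetical, not part of AML) computes the phase durations of a trapezoidal velocity profile for a single move.

```python
def trapezoidal_times(distance, speed, accel, decel):
    """Return (t_accel, t_cruise, t_decel) for a trapezoidal velocity profile,
    falling back to a triangular profile when the move is too short to reach
    full speed."""
    d_acc = speed ** 2 / (2.0 * accel)      # distance covered while accelerating
    d_dec = speed ** 2 / (2.0 * decel)      # distance covered while decelerating
    if d_acc + d_dec <= distance:
        t_cruise = (distance - d_acc - d_dec) / speed
        return speed / accel, t_cruise, speed / decel
    # Move too short: peak velocity is where the two ramps meet.
    v_peak = (2.0 * distance * accel * decel / (accel + decel)) ** 0.5
    return v_peak / accel, 0.0, v_peak / decel

print(trapezoidal_times(distance=0.8, speed=0.5, accel=1.0, decel=1.0))
```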

In general, gripper motions have to be tailored according to the environment and the task.

Most languages provide simple commands on gripper motion so that sophisticated motions can be built using them.

For a two-fingered gripper, one can either move the fingers apart (open) or move them together (close).

• Both AML and AL use a predefined variable to indicate the gripper (bhand, corresponding to barm, for AL and GRIPPER for AML).

• Using the OPEN (for AL) and MOVE (for AML) primitives, the gripper can be programmed to move to a certain opening (see Table 9.5).
SENSING AND FLOW OF CONTROL

The location and the dimension of the objects in the workspace can be identified only to a certain degree of accuracy.

For the robot to perform tasks in the presence of these uncertainties, sensing must be performed.

The sensory information gathered also acts as a feedback from the environment, enabling the robot to examine and
verify the state of the assembly.

Sensing in robot programming can be classified into three types:


1. Position sensing is used to identify the current position of the robot.
This is usually done by encoders that measure the joint angles and compute the corresponding hand position in
the workspace.

2. Force and tactile sensing can be used to detect the presence of objects in the workspace.
Force sensing is used in compliant motion to provide feedback for force-controlled motions.
Tactile sensing can be used to detect slippage while grasping an object.

3. Vision is used to identify objects and provide a rough estimate of their position.
There is no general consensus on how to implement sensing commands, and each language has its own syntax.

AL provides primitives like FORCE(axis) and TORQUE(axis) for force sensing.

They can be specified as conditions, like FORCE(Z) > 3*ounces, in the control commands.

AML provides a primitive called MONITOR which can be specified in the motion commands to detect asynchronous events.

The programmer can specify the sensors to monitor and, when the sensors are triggered, the motion is halted (see Table
9.6). It also has position-sensing primitives like QPOSITION (joint numbers) which returns the current position of the
joints.

Most languages do not explicitly support vision, and the user has to provide modules to handle vision information.
One of the primary uses of sensory information is to initiate or terminate an action.

For example, a part arriving on a conveyor belt may trip an optical sensor and activate the robot to pick up the part, or an action may be terminated if an abnormal
condition has occurred.

• Table 9.6 illustrates the use of force sensing information to detect whether the hand has been positioned correctly above the hole.
The robot arm is moved downward slightly and, as it descends, the force exerted on the hand along the Z axis of the hand coordinate frame is returned by
FORCE(Z).

If the force exceeds 10 ounces, then this indicates that the hand missed the hole and the task is aborted.
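
The guarded move described above can be sketched as a simple loop; read_force_z() and move_down() are placeholders for whatever sensing and motion primitives the actual system provides, and the step size and depth are assumed values.

```python
FORCE_LIMIT_OZ = 10.0   # threshold from the example above
STEP_IN = 0.02          # hypothetical descent step (inches)
MAX_DEPTH_IN = 0.5      # hypothetical depth at which the hand is considered in the hole

def guarded_descent(read_force_z, move_down):
    """Return True if the hand enters the hole, False if the move is aborted."""
    travelled = 0.0
    while travelled < MAX_DEPTH_IN:
        if read_force_z() > FORCE_LIMIT_OZ:
            return False            # hand missed the hole: abort the task
        move_down(STEP_IN)
        travelled += STEP_IN
    return True
```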
The flow of a robot program is usually governed by the sensory information acquired.

Most languages provide the usual decision-making constructs like "if_then_ else_", "case_", "do_until_", and "while_do_" to control the flow of the program
under different conditions.

Certain tasks require the robot to comply with external constraints.

For example, insertion requires the hand to move along one direction only. Any sideward forces may generate unwanted friction which would impede the motion.

In order to perform this compliant motion, force sensing is needed.

Table 9.6 illustrates the use of AL's force sensing commands to perform the insertion task with compliance.

The compliant motion is indicated by quantifying the motion statement with the amount of force allowed in each direction of the hand coordinate frame.

In this case, forces are applied only along the Z axis of this frame.
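
A rough sketch of one control step of such a compliant insertion: push along Z while nulling any sideways forces sensed along X and Y. The sensor and command functions are placeholders and the gain is an invented value; this is an illustration of the idea, not the text's AL commands.

```python
PUSH_FORCE_OZ = 10.0     # desired contact force along the hand's Z axis (assumed)
GAIN = 0.001             # inches of lateral correction per ounce of sensed force (assumed)

def compliant_insert_step(read_force, command_increment):
    """One control step: read (fx, fy, fz), correct sideways, keep pushing down."""
    fx, fy, fz = read_force()
    dx = -GAIN * fx                              # move against the lateral force
    dy = -GAIN * fy
    dz = -0.01 if fz < PUSH_FORCE_OZ else 0.0    # descend until the contact force builds up
    command_increment(dx, dy, dz)
```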
CHARACTERISTICS OF TASK-LEVEL LANGUAGES

A completely different approach to robot programming is task-level programming.

The natural way to describe an assembly task is in terms of the objects being manipulated rather than by the robot motions.

Task-level languages make use of this fact and simplify the programming task.

A task-level programming system allows the user to describe the task in a high-level language (task specification); a task planner will then consult a database (world models) and transform the task specification into a robot-level program (robot program synthesis) that will accomplish the task.

We divide task planning into three phases: world modelling, task specification, and program synthesis.

It should be noted that these three phases are not completely independent; in fact, they are computationally related.

Figure 9.4 shows one possible architecture for the task planner.

The task specification is decomposed into a sequence of subtasks by the task decomposer, and information such as the initial state, final state, grasping position, operand specifications, and attachment relations is extracted.

The subtasks then pass through the subtask planner, which generates the required robot program.

The concept of task planning is quite similar to the idea of automatic program generation in artificial intelligence.

The user supplies the input-output requirements of a desired program, and the program generator then generates a program that will produce the desired input-output
behaviour.

Task-level programming, like automatic program generation, is still in the research stage, with many problems unsolved.
World Modelling

World modelling is required to describe the geometric and physical properties of the objects (including the robot) and to represent the state of the
assembly of objects in the workspace.

Geometric and Physical Models.

For the task planner to generate a robot program that performs a given task, it must have information about the objects and the robot itself.

These include the geometric and physical properties of the objects which can be represented by models.

A geometric model provides the spatial information (dimension, volume, shape) of the objects in the workspace.

Numerous techniques exist for modelling three-dimensional objects.

The most common approach is constructive solid geometry (CSG), where objects are defined as constructions or combinations, using regularized
set operations (such as union, intersection), of primitive objects (such as cube, cylinder).

The primitives can be represented in various ways:


1. A set of edges and points
2. A set of surfaces
3. Generalized cylinders
4. Cell decomposition
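
A toy sketch of the CSG idea (illustrative only): primitive solids combined with regularized set operations, here exercised through a simple point-membership test.

```python
class Cube:
    def __init__(self, side):
        self.h = side / 2.0
    def contains(self, x, y, z):
        return abs(x) <= self.h and abs(y) <= self.h and abs(z) <= self.h

class Cylinder:
    def __init__(self, radius, height):
        self.r, self.h = radius, height / 2.0
    def contains(self, x, y, z):
        return x * x + y * y <= self.r ** 2 and abs(z) <= self.h

class Union:
    def __init__(self, a, b):
        self.a, self.b = a, b
    def contains(self, x, y, z):
        return self.a.contains(x, y, z) or self.b.contains(x, y, z)

class Intersection:
    def __init__(self, a, b):
        self.a, self.b = a, b
    def contains(self, x, y, z):
        return self.a.contains(x, y, z) and self.b.contains(x, y, z)

# A block with a cylindrical boss could be modelled as a union of primitives.
part = Union(Cube(2.0), Cylinder(0.3, 4.0))
print(part.contains(0.0, 0.0, 1.5))   # True: inside the boss, outside the cube
```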
Representing World States.

The task planner must be able to simulate the assembly steps in order to generate the robot program.

Each assembly step can be succinctly represented by the current state of the world.

One way of representing these states is to use the configurations of all the objects in the workspace.
Task Specification

Task specification is done with a high-level language.

At the highest level one would like to have natural languages as the input, without having to give the assembly steps.

An entire task like building a water pump could then be specified by the command "build water pump."

However, this level of input is still quite far away.

Even omitting the assembly sequence is not yet possible.

The current approach is to use an input language with a well-defined syntax and semantics, where the assembly sequence
is given.
Robot Program Synthesis

The synthesis of a robot program from a task specification is one of the most important and most difficult phases of task planning.

The major steps in this phase are grasping planning, motion planning, and plan checking.

Before the task planner can perform the planning, it must first convert the symbolic task specification into a usable form.

One approach is to obtain configuration constraints from the symbolic relationships.

The RAPT interpreter extracts the symbolic relationships and forms a set of matrix equations with the constraint parameters of the objects as unknowns.

These equations are then solved symbolically by using a set of rewrite rules to simplify them.

The result obtained is a set of constraints on the configurations of each object that must be satisfied to perform the operation.

Grasping planning is probably the most important problem in task planning because the way the object is grasped affects all subsequent operations.

The way the robot can grasp an object is constrained by the geometry of the object being grasped and the presence of other objects in the workspace.
A usable grasping configuration is one that is reachable and stable.

The robot must be able to reach the object without colliding with other objects in the workspace and, once grasped, the object must be stable during subsequent motions of the robot.
ROBOT INTELLIGENCE & TASK PLANNING

A basic problem in robotics is planning motions to solve some prespecified task, and then controlling the robot as it executes
the commands necessary to achieve those actions.

Here, planning means deciding on a course of action before acting.

This action synthesis part of the robot problem can be solved by a problem-solving system that will achieve some stated goal,
given some initial situation.

A plan is thus a representation of a course of action for achieving the goal.


STATE SPACE SEARCH

One method for finding a solution to a problem is to try out various possible approaches until we happen to produce the desired solution.

Such an attempt involves essentially a trial-and-error search.

To discuss solution methods of this sort, it is helpful to introduce the notion of problem states and operators.

A problem state, or simply state, is a particular problem situation or configuration.

The set of all possible configurations is the space of problem states, or the state space.

An operator, when applied to a state, transforms the state into another state.

A solution to a problem is a sequence of operators that transforms an initial state into a goal state.

It is useful to imagine the space of states reachable from the initial state as a graph containing nodes corresponding to the states.

The nodes of the graph are linked together by arcs that correspond to the operators.

A solution to a problem could be obtained by a search process that first applies operators to the initial state to produce new states, then applies
operators to these, and so on until the goal state is produced.

Methods of organizing such a search for the goal state are most conveniently described in terms of a graph representation.
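
A compact sketch of this search process on a toy state space: operators are applied to the initial state, then to the resulting states, until a goal state is produced, and the operator sequence found is the solution. The graph and operators below are invented for illustration.

```python
from collections import deque

def search(initial, is_goal, operators):
    """Breadth-first search; operators(state) returns [(op_name, next_state), ...]."""
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, plan = frontier.popleft()
        if is_goal(state):
            return plan                          # sequence of operators = the solution
        for op, nxt in operators(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [op]))
    return None                                  # goal state is not reachable

# Toy state space: states are integers, operators add 1 or double.
ops = lambda s: [("add1", s + 1), ("double", 2 * s)]
print(search(1, lambda s: s == 10, ops))         # ['add1', 'double', 'add1', 'double']
```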
Graph-Search Techniques

For small graphs, a solution path from the initial state to a goal state can be easily obtained by inspection.

For a more complicated graph, a formal search process is needed to move through the state (problem) space until a path from an initial state to a goal state is found.

One way to describe the search process is to use production systems.

A production system consists of:
1. A database that contains the information relevant to the particular task. Depending on the application, this database may be
as simple as a small matrix of numbers or as complex as a large, relational indexed file structure.

2. A set of rules operating on the database. Each rule consists of a left side that determines the applicability of the rule or
precondition, and a right side that describes the action to be performed if the rule is applied. Application of the rule changes
the database.

3. A control strategy that specifies which rules should be applied and ceases computation when a termination condition on the
database is satisfied.
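
A minimal sketch (with a made-up task) of the production-system structure just listed: a database, a set of precondition/action rules, and a control strategy that applies the first applicable rule until the termination condition on the database holds.

```python
def run_production_system(database, rules, terminated, max_steps=100):
    """Apply rules to the database until the termination condition is satisfied."""
    for _ in range(max_steps):
        if terminated(database):
            return database
        for precondition, action in rules:       # control strategy: first applicable rule
            if precondition(database):
                database = action(database)
                break
        else:
            break                                # no rule applies
    return database

# Toy task: keep doubling a counter until it reaches at least 100.
rules = [(lambda db: db["x"] < 100, lambda db: {"x": db["x"] * 2})]
print(run_production_system({"x": 3}, rules, lambda db: db["x"] >= 100))   # {'x': 192}
```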
PROBLEM REDUCTION

Another approach to problem solving is problem reduction.

The main idea of this approach is to reason backward from the problem to be solved, establishing subproblems and sub-subproblems until, finally, the original problem is reduced to a set of trivial primitive problems whose solutions are obvious.

A problem-reduction operator transforms a problem description into a set of reduced or successor problem descriptions.
For a given problem description there may be many reduction operators that are applicable.

Each of these produces an alternative set of subproblems.

Some of the subproblems may not be solvable, however, so we may have to try several operators in order to produce a set whose members are all solvable.

Thus it again requires a search process.

The reduction of a problem to alternative sets of successor problems can be conveniently expressed by a graph-like structure.

Suppose problem A can be solved by solving all of its three subproblems B, C, and D; an AND arc will be marked on the incoming arcs of the nodes B, C,
and D.

The nodes B, C, and D are called AND nodes.


On the other hand, if problem B can be solved by solving any one of the subproblems E and F, an OR arc will be used.

These relationships can be shown by the AND/OR graph shown in Fig. 10.9.
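
The AND/OR evaluation can be sketched in a few lines. The graph mirrors the example above (A reduces to B AND C AND D; B reduces to E OR F); whether each leaf is a solvable primitive problem is an assumption made up for the illustration.

```python
graph = {
    "A": ("AND", ["B", "C", "D"]),
    "B": ("OR",  ["E", "F"]),
}
primitive_solvable = {"C": True, "D": True, "E": False, "F": True}   # assumed leaves

def solvable(node):
    """An AND node needs all successors solved; an OR node needs any one of them."""
    if node not in graph:                        # leaf: a primitive problem
        return primitive_solvable.get(node, False)
    kind, successors = graph[node]
    results = [solvable(s) for s in successors]
    return all(results) if kind == "AND" else any(results)

print(solvable("A"))   # True: C and D are solvable and B is solved through F
```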
ROBOT TASK PLANNING

A task planner would transform the task-level specifications into manipulator-level specifications.

To carry out this transformation, the task planner must have a description of the objects being manipulated, the task
environment, the robot carrying out the task, the initial state of the environment, and the desired final (goal) state.

The output of the task planner would be a robot program to achieve the desired final state when executed in the specified
initial state.

There are three phases in task planning: modelling, task specification, and manipulator program synthesis.

The world model for a task must contain the following information:
(1) geometric description of all objects and robots in the task environment;
(2) physical description of all objects;
(3) kinematic description of all linkages; and
(4) descriptions of robot and sensor characteristics.

Models of task states also must include the configurations of all objects and linkages in the world model.
Obstacle Avoidance
The most common robot motions are transfer movements for which the only constraint is that the robot and whatever it is carrying should not collide with objects in the
environment.

Therefore, an ability to plan motions that avoid obstacles is essential to a task planner.

Several obstacle avoidance algorithms have been proposed in different domains.

The algorithms for robot obstacle avoidance can be grouped into the following classes:
(1) hypothesize and test,
(2) penalty function, and
(3) explicit free space.

The hypothesize and test method was the earliest proposal for robot obstacle avoidance.

The basic method consists of three steps:

first, hypothesize a candidate path between the initial and final configuration of the robot manipulator;

second, test a selected set of configurations along the path for possible collisions;

third, if a possible collision is found, propose an avoidance motion by examining the obstacle(s) that would cause the collision.

The entire process is repeated for the modified motion.
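
The three steps can be sketched as a loop; straight_path(), sample(), collision_check(), and propose_detour() are placeholders for the geometric routines an actual planner would supply.

```python
def hypothesize_and_test(start, goal, straight_path, sample,
                         collision_check, propose_detour, max_iterations=20):
    """Propose a path, test sampled configurations, repair, and repeat."""
    path = straight_path(start, goal)                   # step 1: candidate path
    for _ in range(max_iterations):
        colliding = [q for q in sample(path) if collision_check(q)]   # step 2: test
        if not colliding:
            return path                                 # collision-free path found
        path = propose_detour(path, colliding)          # step 3: avoidance motion
    return None                                         # give up after too many repairs
```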

The main advantage of the hypothesize and test technique is its simplicity.

The method's basic computational operations are detecting potential collisions and modifying proposed paths to avoid collisions.
The first operation, detecting potential collisions, amounts to the ability to detect nonnull geometric
intersections between the manipulator and obstacle models.

The second operation, modifying a proposed path, can be very difficult.

Typical proposals for path modification rely on approximations of the obstacles, such as enclosing spheres.

These methods work fairly well when the obstacles are sparsely located so that they can be dealt with one at a
time.

When the space is cluttered, however, attempts to avoid a collision with one obstacle will typically lead to
another collision with a different obstacle.

Under such conditions, a more accurate detection of potential collisions could be accomplished by using the
information from vision and/or proximity sensors.
The second class of algorithms for obstacle avoidance is based on defining a penalty function on manipulator configurations that encodes
the presence of objects.

In general, the penalty is infinite for configurations that cause collisions and drops off sharply with distance from obstacles.

The total penalty function is computed by adding the penalties from individual obstacles and, possibly, adding a penalty term for
deviations from the shortest path.

At any configuration, we can compute the value of the penalty function and estimate its partial derivatives with respect to the
configuration parameters.

On the basis of this local information, the path search function must decide which sequence of configurations to follow.

The decision can be made so as to follow local minima in the penalty function.

These minima represent a compromise between increasing path length and approaching too close to obstacles.
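
A hedged two-dimensional sketch of the penalty-function approach for a point (circular) robot: each obstacle contributes a penalty that is effectively infinite inside the obstacle and falls off with distance, a second term penalizes deviation from the direct route to the goal, and the path search greedily follows local minima. Obstacle positions, weights, and step size are all invented for the illustration.

```python
import numpy as np

obstacles = [np.array([2.0, 1.0]), np.array([3.5, 2.5])]   # assumed obstacle centres
RADIUS, W_PATH = 0.5, 0.2                                   # assumed radius and path weight
goal = np.array([5.0, 3.0])

def penalty(q):
    """Sum of obstacle penalties plus a term for straying from the direct route."""
    total = W_PATH * np.linalg.norm(goal - q)
    for c in obstacles:
        d = np.linalg.norm(q - c)
        total += 1e6 if d < RADIUS else 1.0 / (d - RADIUS + 0.1)
    return total

def follow_local_minima(q, step=0.1, iters=200):
    """Greedy local search: repeatedly move to the neighbouring configuration
    with the lowest penalty (this can get trapped in a local minimum)."""
    path = [q]
    for _ in range(iters):
        neighbours = [q + step * np.array(d) for d in
                      [(1, 0), (-1, 0), (0, 1), (0, -1),
                       (1, 1), (-1, 1), (1, -1), (-1, -1)]]
        q = min(neighbours, key=penalty)
        path.append(q)
        if np.linalg.norm(q - goal) < step:
            break
    return path

route = follow_local_minima(np.array([0.0, 0.0]))
```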

The penalty function methods are attractive because they provide a relatively simple way of combining the constraints from multiple
objects.
This simplicity, however, is achieved only by assuming a circular or spherical robot; only in this case will the penalty function be a
simple transformation of the obstacle shape.
For more realistic robots, such as a two-link manipulator, the penalty function for an obstacle would have to be defined as a transformation of the configuration space obstacle.
Otherwise, motions of the robot that reduce the value of the penalty function will not necessarily be safe.
