Robotics - What Is Robotics?: Components of Robotic Systems
Roboticists develop man-made mechanical devices that can move by themselves, whose motion must be modelled, planned, sensed, actuated and controlled, and whose motion behaviour can be influenced by programming. Robots are called intelligent if they succeed in moving in safe interaction with an unstructured environment, while autonomously achieving their specified tasks. This definition implies that a device can only be called a robot if it contains a movable mechanism, influenced by sensing, planning, actuation and control components. It does not imply that a minimum number of these components must be implemented in software, or be changeable by the consumer who uses the device; for example, the motion behaviour may have been hard-wired into the device by the manufacturer.

So, the presented definition, as well as the rest of the material in this part of the WEBook, covers not just pure robotics or only intelligent robots, but rather the somewhat broader domain of robotics and automation. This includes "dumb" robots such as metal- and woodworking machines, intelligent washing machines, dishwashers, pool-cleaning robots, etc. These examples all have sensing, planning and control, but often not in individually separated components. For example, the sensing and planning behaviour of the pool-cleaning robot has been integrated into the mechanical design of the device, by the intelligence of the human developer.

Robotics is, to a very large extent, all about system integration: achieving a task with an actuated mechanical device, via an intelligent integration of components, many of which it shares with other domains, such as systems and control, computer science, character animation, machine design, computer vision, artificial intelligence, cognitive science, biomechanics, etc. In addition, the boundaries of robotics cannot be clearly defined, since its core ideas, concepts and algorithms are being applied in an ever-increasing number of external applications, and, vice versa, core technology from other domains (vision, biology, cognitive science or biomechanics, for example) is becoming a crucial component in more and more modern robotic systems.

This part of the WEBook makes an effort to define what exactly that above-mentioned core material of the robotics domain is, and to describe it in a consistent and motivated structure. Nevertheless, this chosen structure is only one of the many possible views that one can want to have on the robotics domain. In the same vein, the above-mentioned definition of robotics is not meant to be definitive or final, and it is only used as a rough framework to structure the various chapters of the WEBook. (A later phase in the WEBook development will allow different semantic views on the WEBook material.)

Components of robotic systems

This figure depicts the components that are part of all robotic systems. The purpose of this Section is to describe the semantics of the terminology used to classify the chapters in the WEBook: sensing, planning, modelling, control, etc. The real robot is some mechanical device (mechanism) that moves around in the environment, and, in doing so, physically interacts with this environment. This interaction involves the exchange of physical energy, in some form or another. Both the robot mechanism and the environment can be the cause of the physical interaction through Actuation, or experience the effect of the interaction, which can be measured through Sensing.
Robotics as an integrated system of control interacting with the physical world.

Sensing and actuation are the physical ports through which the Controller of the robot determines the interaction of its mechanical body with the physical world. As mentioned before, the controller can, at one extreme, consist of software only, while at the other extreme everything can be implemented in hardware. Within the Controller component, several sub-activities are often identified:

Modelling. The input-output relationships of all control components can (but need not) be derived from information that is stored in a model. This model can have many forms: analytical formulas, empirical look-up tables, fuzzy rules, neural networks, etc. The name model often gives rise to heated discussions among different research schools, and the WEBook is not interested in taking a stance in this debate: within the WEBook, model is to be understood with its minimal semantics: any information that is used to determine or influence the input-output relationships of components in the Controller. The other components discussed below can all have models inside. A System model can be used to tie multiple components together, but it is clear that not all robots use a System model. The Sensing model and Actuation model contain the information with which to transform raw physical data into task-dependent information for the controller, and vice versa.

Planning. This is the activity that predicts the outcome of potential actions, and selects the best one. Almost by definition, planning can only be done on the basis of some sort of model.

Regulation. This component processes the outputs of the sensing and planning components to generate an actuation setpoint. Again, this regulation activity may or may not rely on some sort of (system) model. The term control is often used instead of regulation, but it is impossible to clearly identify the domains that use one term or the other; the meaning used in the WEBook will be clear from the context.

Scales in robotic systems

The above component description of a robotic system is to be complemented by a scale description, i.e., the following system scales have a large influence on the specific content of the planning, sensing, modelling and
control components at one particular scale, and hence also on the corresponding sections of the WEBook.

Mechanical scale. The physical volume of the robot determines to a large extent the limits of what can be done with it. Roughly speaking, a large-scale robot (such as an autonomous container crane or a space shuttle) has different capabilities and control problems than a macro robot (such as an industrial robot arm), a desktop robot (such as the sumo robots popular with hobbyists), or milli-, micro- or nano-robots.

Spatial scale. There are large differences between robots that act in 1D, 2D, 3D, or 6D (three positions and three orientations).

Time scale. There are large differences between robots that must react within hours, seconds, milliseconds, or microseconds.

Power density scale. A robot must be actuated in order to move, but actuators need space as well as energy, so the ratio between both determines some capabilities of the robot.

System complexity scale. The complexity of a robot system increases with the number of interactions between independent sub-systems, and the control components must adapt to this complexity.

Computational complexity scale. Robot controllers inevitably run on real-world computing hardware, so they are constrained by the available number of computations, the available communication bandwidth, and the available memory storage.

Obviously, these scale parameters never apply completely independently to the same system. For example, a system that must react at a microsecond time scale cannot be of macro mechanical scale or involve a high number of communication interactions with sub-systems.

Background sensitivity

Finally, no description of even scientific material is ever fully objective or context-free, in the sense that it is very difficult for contributors to the WEBook to forget their background when writing their contribution. In this respect, robotics has, roughly speaking, two faces: (i) the mathematical and engineering face, which is quite standardized in the sense that a large consensus exists about the tools and theories to use (systems theory), and (ii) the AI face, which is rather poorly standardized, not because of a lack of interest or research efforts, but because of the inherent complexity of intelligent behaviour. The terminology and systems thinking of both backgrounds are significantly different, hence the WEBook will accommodate sections on the same material but written from various perspectives. This is not a bug, but a feature: having the different views in the context of the same WEBook can only lead to a better mutual understanding and respect.

Research in engineering robotics follows the bottom-up approach: existing and working systems are extended and made more versatile. Research in artificial intelligence robotics is top-down: assuming that a set of low-level primitives is available, how could one apply them in order to increase the intelligence of a system? The border between both approaches shifts continuously, as more and more intelligence is cast into algorithmic, system-theoretic form. For example, the response of a robot to sensor input was considered intelligent behaviour in the late seventies and even early eighties; hence, it belonged to AI. Later it was shown that many sensor-based tasks, such as surface following or visual tracking, could be formulated as control problems with algorithmic solutions. From then on, they did not belong to AI any more.
Most robots of today are nearly deaf and blind. Sensors can provide some limited feedback to the robot so it can do its job. Compared to the senses and abilities of even the simplest living things, robots have a very long way to go. A sensor sends information, in the form of electronic signals, back to the controller. Sensors also give the robot controller information about its surroundings, letting it know the exact position of the arm, or the state of the world around it. Sight, sound, touch, taste, and smell are the kinds of information we get from our world. Robots can be designed and programmed to get specific information that is beyond what our five senses can tell us. For instance, a robot sensor might "see" in the dark, detect tiny amounts of invisible radiation, or measure movement that is too small or fast for the human eye to see. Here are some things sensors are used for:
Physical Property        Technology
Contact                  Bump, Switch
Distance                 Ultrasound, Radar, Infra Red
Light Level              Photo Cells, Cameras
Sound Level              Microphones
Strain                   Strain Gauges
Rotation                 Encoders
Magnetism                Compasses
Smell                    Chemical
Temperature              Thermal, Infra Red
Inclination              Inclinometers, Gyroscope
Pressure                 Pressure Gauges
Altitude                 Altimeters
Sensors can be simple or complex, depending on how much information they need to provide. A switch is a simple on/off sensor used for turning the robot on and off. A human retina is a complex sensor that uses more than a hundred million photosensitive elements (rods and cones). Sensors provide information to the robot's brain, which can be treated in various ways. For example, we can simply react to the sensor output.
Levels of Processing
To figure out if the switch is open or closed, you need to measure the voltage across (or the current through) the circuit; that's electronics. Now let's say that you have a microphone and you want to recognize a voice and separate it from noise; that's signal processing. Now you have a camera, and you want to take the preprocessed image and figure out what the objects in it are, perhaps by comparing them to a large library of drawings; that's computation. Sensory data processing is a very complex thing to try to do, but the robot needs it in order to have a "brain". The brain has to have analog or digital processing capabilities, wires to connect everything, support electronics to go with the computer, and batteries to provide power for the whole thing, in order to process the sensory data. Perception requires the robot to have sensors (power and electronics), computation (more power and electronics), and connectors (to connect it all).
Switch Sensors
Switches are the simplest sensors of all. They work without processing, at the electronics (circuit) level. Their general underlying principle is that of an open vs. closed circuit. If a switch is open, no current can flow; if it is closed, current can flow and be detected. This simple principle can be (and is) used in a wide variety of ways:
contact sensors: detect when the sensor has contacted another object (e.g., triggers when a robot hits a wall or grabs an object; these can even be whiskers)
limit sensors: detect when a mechanism has moved to the end of its range
shaft encoder sensors: detect how many times a shaft turns by having a switch click (open/close) every time the shaft turns (e.g., triggers for each turn, allowing for counting rotations)
There are many common switches: button switches, mouse switches, keyboard keys, phone keys, and others. Depending on how a switch is wired, it can be normally open or normally closed; which to use depends on your robot's electronics, mechanics, and its task. The simplest yet extremely useful sensor for a robot is a "bump switch" that tells it when it has bumped into something, so it can back up and turn away. Even for such a simple idea, there are many different possible implementations.
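To make the bump-switch idea concrete, here is a minimal Python sketch (not from the original text); read_bump_switch(), drive() and turn() are hypothetical stand-ins for whatever robot API is actually available, simulated here so the example runs.

```python
import random
import time

# Hypothetical stand-ins for a real robot API; replace with actual hardware calls.
def read_bump_switch() -> bool:
    return random.random() < 0.1      # simulated: ~10% chance of a bump each poll

def drive(speed: float) -> None:
    print(f"drive at speed {speed:+.1f}")

def turn(degrees: float) -> None:
    print(f"turn {degrees} degrees")

def wander_with_bump_reflex(steps: int = 50) -> None:
    """Drive forward until the bump switch closes, then back up and turn away."""
    for _ in range(steps):
        if read_bump_switch():        # closed circuit -> contact detected
            drive(-0.3)               # back up briefly
            time.sleep(0.1)
            turn(45)                  # turn away from the obstacle
        else:
            drive(0.3)                # otherwise keep moving forward
        time.sleep(0.02)              # poll often enough not to miss contacts

if __name__ == "__main__":
    wander_with_bump_reflex()
```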
Light Sensors
Switches measure physical contact and light sensors measure the amount of light
impacting a photocell, which is basically a resistive sensor. The resistance of a photocell is low when it is brightly illuminated, i.e., when it is very light; it is high when it is dark. In that sense, a light sensor is really a "dark" sensor. In setting up a photocell sensor, you will end up using the equations we learned above, because you will need to deal with the relationship between the photocell's resistance and the resistance and voltage in your sensor circuit. Of course, since you will be building the electronics and writing the program to measure and use the output of the light sensor, you can always manipulate it to make it simpler and more intuitive. What surrounds a light sensor affects its properties. The sensor can be shielded and positioned in various ways. Multiple sensors can be arranged in useful configurations and isolated from each other with shields. Just like switches, light sensors can be used in many different ways.
Their position and directionality on a robot can make a great deal of difference and have a significant impact on what they measure.
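As a hedged illustration of the resistance/voltage relationship mentioned above, the following sketch assumes one common (but not the only) wiring: the photocell in a voltage divider with a fixed resistor, with the divider's output voltage read by the controller. The component values are made up for the example.

```python
def photocell_resistance(v_out: float, v_cc: float = 5.0, r_fixed: float = 10_000.0) -> float:
    """Infer the photocell's resistance from the divider's output voltage.

    Assumed circuit (one common arrangement, not the only one):
        Vcc -- photocell -- [v_out node] -- r_fixed -- GND
    so v_out = v_cc * r_fixed / (r_fixed + r_photo).
    """
    if not 0.0 < v_out < v_cc:
        raise ValueError("v_out must lie strictly between 0 and v_cc")
    return r_fixed * (v_cc / v_out - 1.0)

def is_dark(v_out: float, threshold_ohms: float = 50_000.0) -> bool:
    # High photocell resistance means little light, i.e. a "dark" reading.
    return photocell_resistance(v_out) > threshold_ohms

# Example: a bright scene gives a high output voltage (low photocell resistance).
print(photocell_resistance(4.0))   # ~2.5 kOhm -> bright
print(is_dark(0.5))                # True: ~90 kOhm -> dark
```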
Polarized light
"Normal" light emanating from a source is non-polarized, which means it travels at all orientations with respect to the horizon. However, if there is a polarizing filter in front of a light source, only the light waves of a given orientation of the filter will pass through. This is useful because now we can manipulate this remaining light with other filters; if we put it through another filter with the same characteristic plane, almost all of it will get through. But, if we use a perpendicular filter (one with a 90-degree relative characteristic angle), we will block all of the light. Polarized light can be used to make specialized sensors out of simple photocells; if you put a filter in front of a light source and the same or a different filter in front of a photocell, you can cleverly manipulate what and how much light you detect.
Potentiometers
These devices are very common for manual tuning; you have probably seen them in
some controls (such as volume and tone knobs on stereos). Typically called pots, they allow the user to manually adjust the resistance. The general idea is that the device consists of a movable tap that slides between two fixed ends of a resistive element. The resistance between the two ends is fixed, but the resistance between the movable tap and either end varies as the tap is moved. In robotics, pots are commonly used to sense and tune position for sliding and rotating mechanisms.
Biological Analogs

All of the sensors we described exist in biological systems: touch/contact sensors, for example, appear with much more precision and complexity in all species.
Light sensors, often paired with emitters, can be used for:
object presence detection
object distance detection
surface feature detection (finding/following markers/tape)
wall/boundary tracking
rotational shaft encoding (using encoder wheels with ridges or black & white color)
bar code decoding

Note, however, that light reflectivity depends on the color (and other properties) of a surface. A light surface will reflect light better than a dark one, and a black surface may not reflect it at all, thus appearing invisible to a light sensor. Therefore, it may be harder (less reliable) to detect darker objects this way than lighter ones. In the case of object distance, lighter objects that are farther away will seem closer than darker objects that are not as far away. This gives you an idea of how the physical world is partially observable: even though we have useful sensors, we do not have complete and completely accurate information. Another source of noise in light sensors is ambient light. The best thing to do is to subtract the ambient light level out of the sensor reading, in order to detect the actual change in the reflected light, not the ambient light. How is that done? By taking two (or more, for higher accuracy) readings of the detector, one with the emitter on and one with it off. The reading with the emitter off gives the ambient light level, which can then be subtracted from readings taken with the emitter on. This process is called sensor calibration. Of course, remember that ambient light levels can change, so the sensors may need to be calibrated repeatedly.
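A minimal sketch of the calibration procedure just described, assuming hypothetical read_detector() and set_emitter() functions for the sensor hardware; the simulated readings at the end are only for illustration.

```python
def calibrated_reading(read_detector, set_emitter, samples: int = 4) -> float:
    """Return the reflected-light component of a light-sensor reading.

    read_detector() and set_emitter(on) are hypothetical placeholders for the
    actual sensor API. The ambient level is measured with the emitter off and
    subtracted from the reading taken with the emitter on.
    """
    def average(on: bool) -> float:
        set_emitter(on)
        return sum(read_detector() for _ in range(samples)) / samples

    ambient = average(False)        # ambient light only
    total = average(True)           # ambient + reflected emitter light
    return total - ambient          # reflected light, with ambient removed

# Simulated example: ambient level around 30, reflected contribution around 12.
readings = iter([30, 31, 29, 30, 42, 43, 41, 42])
emitter_state = {"on": False}
print(calibrated_reading(lambda: next(readings), lambda on: emitter_state.update(on=on)))  # 12.0
```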
Break-beam Sensors
We already talked about the idea of break-beam sensors. In general, a break-beam sensor can be made from any compatible pair of light emitter and detector, for example:
an incandescent flashlight bulb and a photocell
red LEDs and visible-light-sensitive photo-transistors
infra-red (IR) emitters and detectors
Shaft Encoding
Shaft encoders measure the angular rotation of an axle, providing position and/or velocity information. For example, a speedometer measures how fast the wheels of a vehicle are turning, while an odometer measures the number of rotations of the wheels. In order to detect a complete or partial rotation, we have to somehow mark the turning element. This is usually done by attaching a round disk to the shaft and cutting notches into it. A light emitter and detector are placed on either side of the disk, so that as a notch passes between them, the light passes and is detected; where there is no notch in the disk, no light passes. If there is only one notch in the disk, then a rotation is detected as it happens. This is not a very good idea, since it allows only a low level of resolution for measuring speed: the smallest unit that can be measured is a full rotation. Besides, some rotations might be missed due to noise. Usually, many notches are cut into the disk, and the light pulses reaching the detector are counted. (You can see that it is important to have a fast sensor here, if the shaft turns very quickly.) An alternative to cutting notches in the disk is to paint the disk with black (absorbing, non-reflecting) and white (highly reflecting) wedges, and measure the reflectance; in this case, the emitter and the detector are on the same side of the disk. In either case, the output of the sensor is a wave function of the light intensity, which can then be processed to produce the speed by counting the peaks of the waves. Note that shaft encoding measures both position and rotational velocity: velocity is obtained by taking the difference between position readings over each time interval. Velocity tells us how fast a robot is moving, or whether it is moving at all. There are multiple ways to use this measure, for example:
use a passive wheel that is dragged by the robot (to measure forward progress)
We can combine the position and velocity information to do more sophisticated things, such as:
rotate by an exact amount
Note, however, that doing such things is quite difficult, because wheels tend to slip (effector noise and error) and slide, and there is usually some slop and backlash in the gearing mechanism. Shaft encoders can provide feedback to correct the errors, but having some error is unavoidable.
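A small sketch of turning notch counts into speed, assuming a hypothetical single-channel encoder (one pulse per notch per revolution) and a wheel of known radius; the numbers are illustrative.

```python
import math

def wheel_speed(pulse_count: int, notches_per_rev: int, interval_s: float,
                wheel_radius_m: float) -> tuple[float, float]:
    """Convert encoder pulses counted over an interval into rotational and linear speed."""
    revs = pulse_count / notches_per_rev          # revolutions during the interval
    rev_per_s = revs / interval_s                 # rotational velocity
    linear_m_per_s = rev_per_s * 2 * math.pi * wheel_radius_m
    return rev_per_s, linear_m_per_s

# 90 pulses in 0.5 s with a 30-notch disk and a 3 cm wheel:
print(wheel_speed(90, 30, 0.5, 0.03))   # (6.0 rev/s, ~1.13 m/s)
```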
In quadrature shaft encoding, two encoders are used, with their detectors offset so that their signals are out of phase; by comparing the two, the direction of rotation can be determined. When the shaft turns in one direction, a counter is incremented, and when it turns in the opposite direction, the counter is decremented, thus keeping track of the overall position. Other uses of quadrature shaft encoding are in robot arms with complex joints (such as rotary/ball joints; think of your knee or shoulder), and in Cartesian robots (and large printers), where an arm/rack moves back and forth along an axis/gear.
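A minimal sketch of quadrature decoding under the common two-channel, 90-degree-offset convention described above; the channel samples are supplied as plain (A, B) pairs rather than read from real hardware.

```python
# Map (previous A, previous B, current A, current B) -> counter increment.
_TRANSITIONS = {
    (0, 0, 0, 1): +1, (0, 1, 1, 1): +1, (1, 1, 1, 0): +1, (1, 0, 0, 0): +1,
    (0, 0, 1, 0): -1, (1, 0, 1, 1): -1, (1, 1, 0, 1): -1, (0, 1, 0, 0): -1,
}

def decode(samples):
    """Given a sequence of (A, B) samples, return the net position count."""
    position = 0
    prev_a, prev_b = samples[0]
    for a, b in samples[1:]:
        position += _TRANSITIONS.get((prev_a, prev_b, a, b), 0)  # 0 = no change / glitch
        prev_a, prev_b = a, b
    return position

# One full cycle in one direction (4 counts) followed by one step back:
print(decode([(0, 0), (0, 1), (1, 1), (1, 0), (0, 0), (1, 0)]))   # 3
```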
IR Communication
Modulated infra-red can be used as a serial line for transmitting messages. This is in fact how IR modems work. Two basic methods exist:
bit frames (sampled in the middle of each bit; assumes all bits take the same amount of time to transmit)
bit intervals (more common in commercial use; sampled at the falling edge, the duration of the interval between samples determines whether it's a 0 or a 1)
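A hedged sketch of the second (bit-interval) method, using illustrative timings rather than any specific commercial protocol: the gap between successive falling edges encodes each bit.

```python
def decode_bit_intervals(edge_times_us, short_us=600, long_us=1200, tolerance_us=200):
    """Decode a bit-interval style IR message.

    edge_times_us are timestamps (microseconds) of successive falling edges; the
    gap between consecutive edges encodes each bit: a short interval is a 0 and
    a long interval is a 1. The timings here are illustrative assumptions.
    """
    bits = []
    for prev, curr in zip(edge_times_us, edge_times_us[1:]):
        gap = curr - prev
        if abs(gap - short_us) <= tolerance_us:
            bits.append(0)
        elif abs(gap - long_us) <= tolerance_us:
            bits.append(1)
        else:
            raise ValueError(f"unrecognized interval: {gap} us")
    return bits

# Edges at 0, 600, 1800, 2400, 3600 us -> gaps 600, 1200, 600, 1200 -> bits 0 1 0 1
print(decode_bit_intervals([0, 600, 1800, 2400, 3600]))   # [0, 1, 0, 1]
```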
Ultrasound (sonar) sensing is used by bats instead of vision (which makes sense; they live in very dark caves where vision would be largely useless). Bat sonars are extremely sophisticated compared to artificial sonars; they involve numerous different frequencies, used for finding even the tiniest fast-flying prey, for avoiding hundreds of other bats, and for communicating in order to find mates.
Specular Reflection
A major disadvantage of ultrasound sensing is its susceptibility to specular reflection (reflection from the outer surface of the object). While the sonar sensing principle is based on the sound wave reflecting from surfaces and returning to the receiver, it is important to remember that the sound wave will not necessarily bounce off the surface and "come right back." In fact, the direction of reflection depends on the incident angle of the sound beam with the surface. The smaller the angle, the higher the probability that the sound will merely "graze" the surface and bounce off, thus not returning to the emitter, in turn generating a false long/far-away reading. This is called specular reflection because smooth surfaces, with specular properties, tend to aggravate the problem. Coarse surfaces produce more irregular reflections, some of which are more likely to return to the emitter. (For example, in our robotics lab on campus, we use sonar sensors, and we have lined one part of the test area with cardboard, because it has much better sonar reflectance properties than the very smooth wall behind it.) In summary, long sonar readings can be very inaccurate, as they may result from false rather than accurate reflections. This must be taken into account when programming robots, or a robot may produce very undesirable and unsafe behavior. For example, a robot approaching a wall at a steep angle may not see the wall at all, and collide with it! Nonetheless, sonar sensors have been successfully used for very sophisticated robotics applications, including terrain and indoor mapping, and remain a very popular sensor choice in mobile robotics. The first commercial ultrasonic sensor was produced by Polaroid, and used to automatically measure the distance to the nearest object (presumably the one being photographed). These simple Polaroid sensors still remain the most popular off-the-shelf sonars (they come with a processor board that deals with the analog electronics). Their standard properties include:
shortest distance return
Polaroid sensors can be combined into phased arrays to create more sophisticated and more accurate sensors. One can find ultrasound used in a variety of other applications; the best known one is ranging in submarines. The sonars there have much more focused, longer-range beams. Simpler and more mundane applications include automated "tape measures", height measures, burglar alarms, etc.
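A minimal sketch of the underlying time-of-flight calculation, assuming sound travels at roughly 343 m/s in air; real sonar boards (such as the Polaroid module mentioned above) handle this internally.

```python
SPEED_OF_SOUND_M_PER_S = 343.0   # in air at roughly room temperature

def sonar_distance_m(echo_time_s: float) -> float:
    """Distance to the nearest reflecting surface from a round-trip echo time.

    The ping travels out and back, so the one-way distance is half the
    round-trip time multiplied by the speed of sound.
    """
    return echo_time_s * SPEED_OF_SOUND_M_PER_S / 2.0

# A 6 ms echo corresponds to roughly one metre:
print(sonar_distance_m(0.006))   # ~1.03 m
```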
Machine Vision
So far, we have talked about relatively simple sensors; they were simple in terms of the processing of the information they returned. Now we turn to machine vision, i.e., to cameras as sensors. Cameras, of course, model biological eyes. Needless to say, all biological eyes are more complex than any camera we know today, but, as you will see, the cameras and the machine vision systems that process their perceptual information are not simple at all! In fact, machine vision is such a challenging topic that it has historically been a separate branch of Artificial Intelligence. The general principle of a camera is that light scattered from objects in the environment (together called the scene) goes through an opening (the "iris", in the simplest case a pin hole, in the more sophisticated case a lens) and impinges on what is called the image plane. In biological systems, the image plane is the retina, which is attached to numerous
rods and cones (photosensitive elements), which, in turn, are attached to nerves that perform so-called "early vision" and then pass information on throughout the brain to do "higher-level" vision processing. As we mentioned before, a very large percentage of the human (and other animal) brain is dedicated to visual processing, so this is a highly complex endeavor. In cameras, instead of photosensitive rods and cones with their rhodopsin, we use silver halides on photographic film, or silicon circuits in charge-coupled device (CCD) cameras. In all cases, some information about the incoming light (e.g., intensity, color) is detected by these photosensitive elements on the image plane.

In machine vision, the computer must make sense out of the information it gets on the image plane. If the camera is very simple, and uses a tiny pin hole, then some computation is required to compute the projection of the objects from the environment onto the image plane (note, they will be inverted). If a lens is involved (as in vertebrate eyes and real cameras), then more light can get in, but at the price of having to be focused: only objects within a particular range of distances from the lens will be in focus. This range of distances is called the camera's depth of field.

The image plane is usually subdivided into equal parts, called pixels, typically arranged in a rectangular grid. In a typical camera there are 512 by 512 pixels on the image plane (for comparison, there are 120 x 10^6 rods and 6 x 10^6 cones in the eye, arranged hexagonally). Let's call the projection on the image plane the image. The brightness of each pixel in the image is proportional to the amount of light directed toward the camera by the surface patch of the object that projects to that pixel. (This of course depends on the reflectance properties of the surface patch, the position and distribution of the light sources in the environment, and the amount of light reflected from other objects in the scene onto the surface patch.) As it turns out, the brightness of a patch depends on two kinds of reflections, one being specular (off the surface, as we saw before), and the other being diffuse (light that penetrates into the object, is absorbed, and then re-emitted). To correctly model light reflection, as well as reconstruct the scene, all these properties are necessary.

Let us suppose that we are dealing with a black and white camera with a 512 x 512 pixel image plane. Now we have an image, which is a collection of those pixels, each of which is an intensity between white and black. To find an object in that image (if there is one; we of course don't know a priori), the typical first step ("early vision") is to do edge detection, i.e., find all the edges. How do we recognize them? We define edges as curves in the image plane across which there is a significant change in brightness. A simple approach would be to look for sharp brightness changes by differentiating the image and looking for areas where the magnitude of the derivative is large. This almost works, but unfortunately it produces all sorts of spurious peaks, i.e., noise. Also, we cannot inherently distinguish changes in intensity due to shadows from those due to physical objects. But let's forget that for now and think about noise. How do we deal with noise? We do smoothing, i.e., we apply a mathematical procedure called convolution, which filters out the isolated peaks. Convolution, in effect, applies a filter to the image.
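As a rough illustration of "smooth, differentiate, threshold", here is a minimal numpy sketch (a box filter plus a gradient, not a production edge detector such as Canny); the threshold value is an arbitrary assumption.

```python
import numpy as np

def edge_map(image: np.ndarray, threshold: float = 0.25) -> np.ndarray:
    """Smooth an image with a 3x3 box filter, then mark large brightness gradients.

    `image` is a 2-D array of intensities in [0, 1]. Real systems use better
    filters (Gaussian smoothing, Sobel or Canny operators, several orientations).
    """
    padded = np.pad(image, 1, mode="edge")
    # 3x3 box blur: average each pixel with its eight neighbours.
    smooth = sum(padded[i:i + image.shape[0], j:j + image.shape[1]]
                 for i in range(3) for j in range(3)) / 9.0
    gy, gx = np.gradient(smooth)              # brightness derivatives
    magnitude = np.hypot(gx, gy)              # gradient magnitude
    return magnitude > threshold              # True where an edge is likely

# A dark square on a light background produces edges along the square's border.
img = np.full((32, 32), 0.9)
img[10:22, 10:22] = 0.1
print(edge_map(img).sum(), "edge pixels found")
```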
In fact, in order to find arbitrary edges in the image, we need to convolve the image with many filters with different orientations. Fortunately, the relatively complicated mathematics involved in edge detection has been well studied, and by now there are standard and preferred approaches to edge detection. Once we have edges, the next thing to do is to try to find objects among all those edges. Segmentation is the process of dividing up or organizing the image into parts that correspond to continuous objects. But how do we know which lines correspond to which objects, and what makes an object? There are several cues we can use to detect objects:
1. We can have stored models of line-drawings of objects (from many possible angles, and at many different possible scales!), and then compare those with all possible combinations of edges in the image. Notice that this is a very computationally intensive and expensive process. This general approach, which has been studied extensively, is called model-based vision.
2. We can take advantage of motion. If we look at an image at two consecutive
time-steps, and we move the camera in between, each continuous solid object (which obeys physical laws) will move as one, i.e., its brightness properties will be conserved. This gives us a hint for finding objects, by subtracting two images from each other. But notice that this also depends on knowing well how we moved the camera relative to the scene (direction, distance), and that nothing was moving in the scene at the time. This general approach, which has also been studied extensively, is called motion vision.
3. We can use stereo (i.e., binocular stereopsis: two eyes/cameras/points of view). Just like with motion vision above, but without having to actually move, we get two images, which we can subtract from each other, if we know what the disparity between them should be, i.e., if we know how the two cameras are organized/positioned relative to each other.
4. We can use texture. Patches that have uniform texture are consistent, and have almost identical brightness, so we can assume they come from the same object. By extracting those we can get a hint about which parts may belong to the same object in the scene.
5. We can also use shading and contours in a similar fashion. And there are many other methods, involving object shape, projective invariants, etc.
Note that all of the above strategies are employed in biological vision. It is hard to recognize unexpected or totally novel objects (because we don't have the models at all, or not at the ready). Movement helps catch our attention. Stereo, i.e., two eyes, is critical, and all carnivores use it (they have two eyes pointing in the same direction, unlike herbivores). The brain does an excellent job of quickly extracting the information we need from the scene. Machine vision has the same task of doing real-time vision, but this is, as we have seen, a very difficult task. Often, as an alternative to doing all of the steps above for object recognition, it is possible to simplify the vision problem in various ways:
1. Use color; look for specifically and uniquely colored objects, and recognize them that way (such as stop signs, for example).
2. Use a small image plane; instead of a full 512 x 512 pixel array, we can reduce our view to much less, for example just a line (that's called a linear CCD). Of course there is much less information in the image, but if we are clever, and know what to expect, we can process what we see quickly and usefully.
3. Use other, simpler and faster sensors, and combine those with vision. For example, IR cameras isolate people by body temperature, and grippers allow us to touch and move objects, after which we can be sure they exist.
4. Use information about the environment; if you know you will be driving on a road which has white lines, look specifically for those lines at the right places in the image. This is how the first, and still the fastest, road and highway robotic driving is done.
Those and many other clever techniques have to be employed when we consider how important it is to "see" in real time. Consider highway driving as an important and growing application of robotics and AI: everything is moving so quickly that the system must perceive and act in time to react protectively and safely, as well as intelligently. Now that you know how complex vision is, you can see why it was not used on the first robots, why it is still not used for all applications, and definitely not on simple robots. A robot can be extremely useful without vision, but some tasks demand it.
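A minimal sketch of simplification 1 above (looking for a uniquely colored object); the target color and tolerance are made-up values.

```python
import numpy as np

def find_colored_blob(rgb_image: np.ndarray, target=(255, 0, 0), tolerance=60):
    """Return the centroid (row, col) of pixels close to a target color, or None."""
    diff = np.abs(rgb_image.astype(int) - np.array(target)).sum(axis=2)
    mask = diff < tolerance
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# A mostly gray image with a red patch around (5, 12):
img = np.full((20, 20, 3), 128, dtype=np.uint8)
img[4:7, 11:14] = (250, 10, 10)
print(find_colored_blob(img))   # approximately (5.0, 12.0)
```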
As always, it is critical to think about the proper match between the robot's sensors and the task.
An actuator is the actual mechanism that enables the effector to execute an action. Actuators typically include electric motors, hydraulic or pneumatic cylinders, etc. The terms effector and actuator are often used interchangeably to mean "whatever makes the robot take an action." This is not really proper use: actuators and effectors are not the same thing, and we'll try to be more precise in this class. Most simple actuators control a single degree of freedom, i.e., a single motion (e.g., up-down, left-right, in-out, etc.). A motor shaft controls one rotational degree of freedom, for example. A sliding part on a plotter controls one translational degree of freedom. How many degrees of freedom (DOF) a robot has is going to be very important in determining how it can affect its world, and therefore how well, if at all, it can accomplish its task. Just as we said many times before that sensors must be matched to the robot's task, effectors must be well matched to the robot's task also. In general, a free body in space has 6 DOF: three for translation (x, y, z), and three for orientation/rotation (roll, pitch, and yaw). We'll come back to DOF in a bit. You need to know, for a given effector (and actuator/s), how many DOF are available to the robot, as well as how many total DOF any given robot has. If there is an actuator for every DOF, then all of the DOF are controllable. Usually not all DOF are controllable, which makes robot control harder. A car has 3 DOF: position (x, y) and orientation (theta). But only 2 DOF are controllable: driving (through the gas pedal and the forward-reverse gear) and steering (through the steering wheel). Since there are more DOF than are controllable, there are motions that cannot be done, like moving sideways (that's why parallel parking is hard). We need to make a distinction between what an actuator does (e.g., pushing the gas pedal) and what the robot does as a result (moving forward). A car can get to any 2D position, but it may have to follow a very complicated trajectory; parallel parking requires a discontinuous trajectory w.r.t. velocity, i.e., the car has to stop and go. When the number
of controllable DOF is equal to the total number of DOF on a robot, the robot is holonomic. If the number of controllable DOF is smaller than the total DOF, the robot is non-holonomic. If the number of controllable DOF is larger than the total DOF, the robot is redundant. A human arm has 7 DOF (3 in the shoulder, 1 in the elbow, 3 in the wrist), all of which can be controlled. A free object in 3D space (e.g., the hand, the finger tip) can have at most 6 DOF! So there are redundant ways of putting the hand at a particular position in 3D space. This is the core of why manipulation is very hard! There are two basic ways of using effectors:
to move the robot around => locomotion
to move other objects around => manipulation
These divide robotics into two mostly separate categories:
mobile robotics
manipulator robotics
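Returning to the DOF terminology above, here is a tiny sketch that restates the holonomic / non-holonomic / redundant definitions as code.

```python
def mobility_class(controllable_dof: int, total_dof: int) -> str:
    """Classify a mechanism using the DOF terminology defined above."""
    if controllable_dof == total_dof:
        return "holonomic"
    if controllable_dof < total_dof:
        return "non-holonomic"
    return "redundant"

print(mobility_class(2, 3))   # a car: "non-holonomic"
print(mobility_class(7, 6))   # a human arm positioning the hand: "redundant"
```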
Mobility end effectors are discussed in more detail in the mobility section of this web site. In contrast to locomotion, where the body of the robot is moved to get to a particular position and orientation, a manipulator typically moves itself in order to get its end effector (e.g., the hand, the finger, the fingertip) to the desired 3D position and orientation. So imagine having to touch a specific point in
3D space with the tip of your index finger; that's what a typical manipulator has to do. Of course, manipulators mostly need to grasp and move objects, but those tasks are extensions of the basic reaching described above. The challenge is to get there efficiently and safely. Because the end effector is attached to the whole arm, we have to worry about the whole arm: the arm must move so that it does not violate its own joint limits, and it must not hit itself, the rest of the robot, or any other obstacles in the environment. Thus, doing autonomous manipulation is very challenging. Manipulation was first used in tele-operation, where human operators would move artificial arms to handle hazardous materials. It turned out that it was quite difficult for human operators to learn how to tele-operate complicated arms (such as duplicates of human arms, with 7 DOF). One alternative today is to put the human arm into an exo-skeleton (see lecture 1), in order to make the control more direct. Using joysticks, for example, is much harder for high-DOF arms. Why is this so hard? Because, just as we saw with locomotion, there is typically no direct and obvious link between what the effector needs to do in physical space and what the actuator does to move it. In
general, the correspondence between actuator motion and the resulting effector motion is called kinematics. In order to control a manipulator, we have to know its kinematics (what is attached to what, how many joints there are, how many DOF for each joint, etc.). We can formalize all of this mathematically and get an equation which will tell us how to convert from, say, the angles of each of the joints to the Cartesian position of the end effector/point; this conversion is called computing the manipulator's forward kinematics. The reverse process, converting a Cartesian (x, y, z) position into a set of joint angles for the arm (thetas), is called inverse kinematics. Kinematics are the rules of what is attached to what, the body structure. Inverse kinematics is computationally intensive, and the problem is even harder if the manipulator (the arm) is redundant. Manipulation involves:
trajectory planning (over time)
inverse kinematics
inverse dynamics
dealing with redundancy
Manipulators are effectors. Joints connect parts of manipulators. The most common joint types are:
rotary (rotation around a fixed axis)
prismatic (linear movement)
These joints provide the DOF for an effector, so they are planned carefully. Robot manipulators can have one or more of each of those joints. Now recall that any free body has 6 DOF; that means in order to get the robot's end effector to an arbitrary position and orientation, the robot requires a minimum of 6 joints. As it turns out, the human arm (not counting the hand!) has 7 DOF. That's sufficient for reaching any point with the hand,
and it is also redundant, meaning that there are multiple ways in which any point can be reached. This is good news and bad news: having multiple solutions provides flexibility, but it also means there is a larger space to search through to find the best solution. Now consider end effectors. They can be simple pointers (i.e., a stick), simple 2D grippers, screwdrivers, attachments for tools (like welding guns, sprayers, etc.), or as complex as the human hand, with variable numbers of fingers and joints in the fingers. Problems like reaching and grasping in manipulation constitute entire subareas of robotics and AI. Issues include: finding grasp points (COG, friction, etc.); force/strength of grasp; compliance (e.g., in sliding, maintaining contact with a surface); and dynamic tasks (e.g., juggling, catching). Other types of manipulation, such as carefully controlling force, as in grasping fragile objects and maintaining contact with a surface (so-called compliant motion), are also being actively researched. Finally, dynamic manipulation tasks, such as juggling, throwing, catching, etc., are already being demonstrated on robot arms. Having talked about navigation and manipulation, think about what types of sensors (external and proprioceptive) would be useful for these general robotic tasks.
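To make the forward/inverse kinematics idea concrete, here is a minimal sketch for a hypothetical 2-link planar arm (far simpler than a 6- or 7-DOF manipulator); the link lengths are arbitrary.

```python
import math

def forward_kinematics(theta1, theta2, l1=1.0, l2=1.0):
    """End-effector (x, y) of a 2-link planar arm from its joint angles (radians)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def inverse_kinematics(x, y, l1=1.0, l2=1.0, elbow_up=True):
    """One closed-form solution for the joint angles that reach (x, y).

    A planar 2-link arm generally has two solutions (elbow-up and elbow-down),
    illustrating that inverse kinematics usually has more than one answer.
    """
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1:
        raise ValueError("target is out of reach")
    theta2 = math.acos(c2) * (1 if elbow_up else -1)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

target = (1.2, 0.8)
angles = inverse_kinematics(*target)
print(angles, forward_kinematics(*angles))   # FK of the IK answer recovers the target
```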
Proprioceptive sensors sense the robot's actuators (e.g., shaft encoders, joint angle sensors, etc.); they sense the robot's own movements. You can think of them as perceiving internal state instead of external state. External sensors are helpful but not necessary or as commonly used.
Actuators are powered by pneumatics (air pressure), hydraulics (fluid pressure), or motors (electric current). Most actuation uses electromagnetic motors and gears, but there have been frequent uses of other forms of actuation, including NiTinOL "muscle wires" and inexpensive Radio Control servos. To get a motor under computer control, different motor types and actuator types are used. Some of the motor types are Synchronous, Stepper, AC servo, Brushless DC servo, and Brushed DC servo. Radio Control servos for model airplanes, cars and other vehicles are light, rugged, cheap and fairly easy to interface. Some of the units can provide very high torque. A Radio Control servo can be controlled from a parallel port. With one of the PC's internal timers cranked up, it is possible to control eight servos from a common parallel port with nothing but a simple interrupt service routine and a cable. In fact, power can be pulled from the disk drive power connector, and the PC can run all the servos directly with no additional hardware. The only downside is that the PC wastes some processing power servicing the interrupt handler.
DC Motors
The most common actuator you will use (and the most common in mobile robotics in general) is the direct current (DC) motor. DC motors are simple, cheap, and easy to use, and they come in a great variety of sizes to accommodate different robots and tasks. DC motors convert electrical energy into mechanical energy. They consist of permanent magnets and loops of wire inside. When current is applied, the wire loops generate a magnetic field, which reacts against the outside field of the static magnets. The interaction of the fields produces the movement of the shaft/armature. Thus, electromagnetic energy becomes motion. As with any physical system, DC motors are not perfectly efficient, meaning that
the energy is not converted perfectly, without any waste. Some energy is wasted as heat generated by the friction of mechanical parts. Inefficiencies are minimized in well-designed (and more expensive) motors, whose efficiency can be brought above 90%, but cheap motors (such as the ones you may use) can be as low as 50%. (In case you think this is very inefficient, remember that other types of effectors, such as miniature electrostatic motors, may have much lower efficiencies still.) A motor requires a power source within its operating voltage, i.e., the recommended voltage range for best efficiency of the motor. Lower voltages will usually turn the motor (but provide less power). Higher voltages are more tricky: in some cases they can increase the power output, but almost always at the expense of the operating life of the motor. E.g., the more you rev
your car engine, the sooner it will die. When a constant voltage is applied, a DC motor draws current in an amount proportional to the work it is doing. For example, if a robot is pushing against a wall, it is drawing more current (and draining more of its batteries) than when it is moving freely in open space, the reason being the resistance to the motor's motion introduced by the wall. If the resistance is very high (i.e., the wall just won't move no matter how much the robot pushes against it), the motor draws the maximum amount of current, and stalls. This is the stall current of the motor: the most current it can draw at its specified voltage. Within a motor's operating current range, the more current is used, the more torque, or rotational force, is produced at the shaft. In general, the strength of the magnetic field generated in the wire loops is directly proportional to the applied current, and so is the torque produced at the shaft. Besides its stall current, a motor also has a stall torque: the amount of rotational force produced when the motor is stalled at its operating voltage. Finally, the amount of power a motor generates is the product of its shaft's rotational velocity and its torque. If there is no load on the shaft, i.e., the motor is spinning freely, then the rotational velocity is at its highest, but the torque is 0, since no mechanism is being driven by the motor; the output power is then 0 as well. In contrast, when the motor is stalled, it is producing its maximum torque, but the rotational velocity is 0, so the output power is again 0. Between free spinning and stalling, the motor does useful work, and the produced power has a characteristic parabolic relationship, with the motor producing the most power in the middle of its performance range. Most DC motors have unloaded speeds in the range of 3,000 to 9,000 RPM (revolutions per minute), or 50 to 150 RPS (revolutions per second). That puts them in the high-speed but low-torque category (compared to some other alternatives). For example, how often do you need to drive something very light that rotates very fast (besides a fan)? Yet that is what DC motors are naturally best at. In contrast, robots need to pull loads (i.e., move their bodies and manipulators, all of which have significant mass), thus requiring more torque and less speed. As a result, the performance of a DC motor typically needs to be adjusted in that direction, through the use of gears.
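A hedged sketch of the torque/speed/power relationship described above, using an idealized linear torque-speed model and a made-up small motor; real motors only approximate this.

```python
import math

def torque_at(speed_rpm: float, stall_torque_nm: float, free_speed_rpm: float) -> float:
    """Idealized DC-motor model: torque falls linearly from stall torque to zero at free speed."""
    return stall_torque_nm * (1.0 - speed_rpm / free_speed_rpm)

def output_power_w(speed_rpm: float, stall_torque_nm: float, free_speed_rpm: float) -> float:
    """Mechanical output power = torque x angular velocity (zero at stall and at free spinning)."""
    omega_rad_per_s = speed_rpm * 2 * math.pi / 60.0
    return torque_at(speed_rpm, stall_torque_nm, free_speed_rpm) * omega_rad_per_s

# Hypothetical small motor: 0.02 N*m stall torque, 6000 RPM unloaded speed.
for rpm in (0, 1500, 3000, 4500, 6000):
    print(rpm, "RPM ->", round(output_power_w(rpm, 0.02, 6000), 2), "W")
# Output power peaks in the middle of the range (3000 RPM here), as described above.
```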
Gearing
The force generated at the edge of a gear is equal to its torque divided by its radius (F = t / r), in the line tangential to its circumference. By combining gears with different radii, we can manipulate the amount of force/torque the mechanism generates. The relationship between the radii and the resulting torque is well defined, as follows. Suppose Gear1, with radius r1, turns with torque t1, generating a force of t1/r1 perpendicular to its circumference. If we mesh it with Gear2, with radius r2, which generates t2/r2, then t1/r1 = t2/r2. Solving for the torque generated by Gear2, we get t2 = t1 r2/r1. Intuitively, this means that the torque generated at the output gear is proportional to the torque on the input gear and the ratio of the two gears' radii. If r2 > r1, we get a bigger number; if r1 > r2, we get a smaller number. If the output gear is larger than the input gear, the torque increases; if the output gear is smaller than the input gear, the torque decreases. Besides the change in torque that takes place when gears are combined, there is also a corresponding change in speed. To measure speed we are interested in the circumference of the gear, C = 2 * pi * r. Simply put, if the circumference of Gear1 is twice that of Gear2, then Gear2 must turn twice for each full rotation of Gear1. If the output gear is larger than the input gear, the speed decreases; if the output gear is smaller than the input gear, the speed increases. In summary, when a small gear drives a large one, torque is increased and speed is decreased. Analogously, when a large gear drives a small one, torque is decreased and speed is increased. Thus, gears are used with DC motors (which we said are fast and low-torque) to trade off extra speed for additional torque. Gears are combined using their teeth. The number of teeth is not arbitrary, since it is the key means of achieving a proper reduction. Gear teeth require special design so that they mesh properly. Any looseness between meshing gears is called backlash: the ability of a mechanism to move back and forth within the teeth, without turning the whole gear. Reducing backlash requires tight meshing between the gear teeth, but that, in turn,
increases friction. As you can imagine, proper gear design and manufacturing is complicated. To achieve a "three to one" gear reduction (3:1), we apply power to a small gear (say one with 8 teeth) meshed with a large one (with 3 * 8 = 24 teeth). As a result, the large gear turns at one third of the speed and with three times the torque. Gears can be organized in series ("ganged"), in order to multiply their effect. For example, two 3:1 stages in series result in a 9:1 reduction, and three 3:1 stages in series produce a 27:1 reduction; this requires a clever arrangement of gears. This method of multiplying reduction is the underlying mechanism that makes DC motors useful and ubiquitous.
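A small sketch of ganged gear stages under an idealized, lossless model; the tooth counts and input values are illustrative.

```python
def gear_train(input_torque, input_speed_rpm, stages):
    """Propagate torque and speed through ganged gear stages.

    `stages` is a list of (driving_teeth, driven_teeth) pairs; each stage
    multiplies torque by driven/driving and divides speed by the same ratio
    (an idealized, lossless model).
    """
    torque, speed = input_torque, input_speed_rpm
    for driving_teeth, driven_teeth in stages:
        ratio = driven_teeth / driving_teeth
        torque *= ratio
        speed /= ratio
    return torque, speed

# Two 3:1 stages (an 8-tooth gear driving a 24-tooth gear, twice) give a 9:1 reduction:
print(gear_train(0.02, 6000, [(8, 24), (8, 24)]))   # (0.18 N*m, ~667 RPM)
```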
Servo Motors
It is sometimes necessary to be able to move a motor to a specific position. If you consider your basic DC motor, it is not built for this purpose. Motors that can turn to a specific position are called servo motors and are in fact constructed out of basic DC motors, by adding:
an electronic circuit that controls the motor's operation
Servos are used a great deal in toys, to adjust the steering of RC cars and the wing
position of RC airplanes. Since positioning of the shaft is what servo motors are all about, most have their movement reduced to 180 degrees. The motor is driven with a waveform that specifies the desired angular position of the shaft within that range. The waveform is given as a series of pulses within a pulse-width modulated signal. Thus, the width (i.e., length) of the pulse specifies the control value for the motor, i.e., how the shaft should turn. Therefore, the exact width/length of the pulse is critical, and cannot be sloppy; there are no milliseconds or even microseconds to be wasted here, or the motor will behave very badly, jitter, and go beyond its mechanical limit. This limit should be checked empirically, and avoided. In contrast, the duration between the pulses is not critical at all. It should be consistent, but there can be noise on the order of milliseconds without any problems for the motor. This is intuitive: when no pulse arrives, the motor does not move, so it simply stops. As long as the pulse gives the motor sufficient time to turn to the proper position, additional time does not hurt it.
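A minimal sketch of the angle-to-pulse-width mapping, assuming the typical hobby-servo range of roughly 1 to 2 ms; your servo's actual range and mechanical limits should be checked empirically, as noted above.

```python
def servo_pulse_width_us(angle_deg: float,
                         min_us: float = 1000.0, max_us: float = 2000.0) -> float:
    """Map a desired shaft angle (0 to 180 degrees) to a pulse width in microseconds.

    The 1.0 to 2.0 ms range is a common hobby-servo convention, used here as an
    assumption; check the servo's datasheet for its real range.
    """
    if not 0.0 <= angle_deg <= 180.0:
        raise ValueError("angle must be between 0 and 180 degrees")
    return min_us + (max_us - min_us) * angle_deg / 180.0

print(servo_pulse_width_us(0))     # 1000.0 us
print(servo_pulse_width_us(90))    # 1500.0 us  (center)
print(servo_pulse_width_us(180))   # 2000.0 us
```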
A common hack is to convert a servo motor into a continuous-rotation drive motor:
remove the mechanical limit (reverting to a plain DC motor shaft)
remove the pot position sensor (there is no need to report position)
apply 2 resistors to fool the servo into thinking it is fully turning
Work on novel actuator materials is ongoing, and changing often. Nickel-titanium alloys were first discovered by the Naval Ordnance Laboratory decades ago, and the material was termed NiTinOL. These materials have the intriguing property that they provide actuation when current is cycled through them: the material undergoes a phase change that is exhibited as force and motion in the wire. At room temperature, Muscle Wires are easily stretched by a small force. However, when conducting an electric current, the wire heats and changes to a much harder form that returns to the "unstretched" shape -- the wire shortens in length with a usable amount of force. Nitinol can be stretched by up to eight percent of its length and will recover fully, but only for a few cycles. However, when used in the three to five percent range, Muscle Wires can run for millions of cycles with very consistent and reliable performance.
Given that the robot arm's movement is appropriate to its application, that the arm's strength and rigidity meet the payload needs, and that the servo drives provide the necessary speed of response and resolution, a robot controller is required to manage the arm articulations, its End Effector, and the interface with the workplace. The simplest type of control, still widely used, is "record-playback," or "lead-through". An operator positions the arm articulations to the desired configurations. At each desired location the articulation encoder positions are recorded in memory. Step by step, an entire work-cycle sequence is recorded. Then, in playback mode, the sequence is observed and modified. As applications become more challenging, some jobs require continuous path control of an End Effector. For this, all articulations must be programmed at speeds appropriate to the particular task. This requires programming for the control of the robot. Robots today have controllers run by programs -- sets of instructions written in code. The program sets limits on what the robot can do. These requirements call into play sophisticated computer-based controllers and so-called robot languages. These languages permit a kind of robot control known as hierarchical control, in which decision making by the robot takes place on several levels. These levels are interconnected by feedback mechanisms that inform the next higher level of the status of previous actions. The advantage of a general-purpose robot arm is that it can be programmed to do many jobs. The disadvantage is that the programming tends to be a job for highly paid engineers. Even when a factory robot can perform a task more efficiently than a person, the job of programming it and setting up its workplace can be more trouble than it's worth. Commotion Systems, a new California firm, is developing easier ways to program robots using pre-designed software modules. For now, though, the job of "training" robots is still one of the main reasons that they are not used more. In the future, controllers with Artificial Intelligence could allow robots to think on their own, even program themselves. This could make robots more self-reliant and independent. Angelus Research has designed an intelligent motion controller for robots that mimics the brain's three-level structure, including instinctive, behavioral, and goal levels.
The controller, which can be used in unpredictable circumstances, uses a Motorola 68HC11 microprocessor.
Feedback control means comparing the current state of the system with the desired state, and acting so as to minimize the difference between the two: the error. In some systems, the only information available about the error is whether it is 0 or non-0, i.e., whether the current and desired states are the same. This is very little information to work with, but it is still a basis for control and can be exploited in interesting ways. Additional information about the error would be its magnitude, i.e., how "far" the current state is from the desired state. Finally, the last part of the error information is its direction, i.e., is the current state too close or too far from the desired state (in whatever space that may be). Control is easiest if we have frequent feedback providing the error's magnitude and direction. Notice that the behavior of a feedback system oscillates around the desired state. In the case of a thermostat, the temperature oscillates around the set point, the desired setting. Similarly, the robot's movement will oscillate around the desired state, which is the optimal distance from the wall. How can we decrease this oscillation? We can use a smoother/larger turning angle, and we can also use a range instead of a set-point distance as the goal state. Now what happens when you have sensor error in your system? What if your sensor incorrectly tells you that the robot is far from a wall, but in fact it is not? What about vice versa? How might you address these issues? Feedback control is also called closed loop control because it closes the loop between the input and the output, i.e., it provides the system with a measure of "progress."
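A minimal sketch of proportional feedback, with a toy one-line "plant" standing in for the robot and environment; the gain and set point are arbitrary.

```python
def p_controller_step(measured, desired, gain=0.5):
    """One step of proportional feedback: the output is proportional to the error."""
    error = desired - measured
    return gain * error                      # correction to apply this step

# Simulated wall-following: keep 0.5 m from the wall; the "robot" simply adds
# the correction to its distance each step (a toy plant model, for illustration).
distance = 1.0
for _ in range(8):
    distance += p_controller_step(distance, desired=0.5)
    print(round(distance, 3))
# The distance converges toward 0.5; too large a gain would make it oscillate.
```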
The alternative to closed loop control is open loop control. This type of control does not require the use of sensors, since state is not fed back into the system. Such systems can operate (perform repetitive, state-independent tasks) only if they are extremely well calibrated and their environment does not change in a way that affects their performance. We have talked about feedback control so far, but there is also
an important notion of feed forward control. In such a system, the controller determines set points and sub-goals for itself ahead of time, without looking at actual state data.
Reactive Control
Reactive control is based on a tight loop connecting the robot's sensors with its effectors. Purely reactive systems do not use any internal representations of the environment, and do not look ahead: they react to the current sensory information. Thus, reactive systems use a direct mapping between sensors and effectors, and minimal, if any, state information. They consist of collections of rules that map specific situations to specific actions. If a reactive system divides its perceptual world into mutually exclusive or unique situations, then only one of those situations can be triggered by any sensory input at any one time, and only one action will be activated as a result. This is the simplest form of a reactive control system. It is often too difficult to split up all possible situations this way, or it may require unnecessary encoding. Consider the case of multiple sensors: to have mutually-
exclusive sensory inputs, the controller must encode rules for all possible sensory combinations, and there are exponentially many of those. This is, in fact, the robot's entire sensory space (as defined earlier). This space then needs to be mapped to all possible actions (the action space), resulting in the complete control space for that robot. Although this mapping is done while the system is being designed, i.e., not at run-time, it can be very tedious, and it results in a large lookup table that takes space to encode and store on the robot, and can take time to search, unless some clever parallel lookup technique is used. In general, this complete mapping is not used in hand-designed reactive systems. Instead, specific situations trigger appropriate actions, and default actions are used to cover all other cases. Human designers can effectively reduce the sensory space to only the inputs/situations that matter, map those to the appropriate actions, and thus greatly simplify the control system. If the rules are not triggered by mutually-exclusive conditions, more than one rule can be triggered in parallel, resulting in two or more different actions being output by the system. Deciding among multiple actions or behaviors is called arbitration, and is in general a difficult problem. Arbitration can be done based on:
a fixed priority hierarchy (processes have pre-assigned priorities)
a dynamic hierarchy (process priorities change at run-time)
learning (process priorities may be initialized or not, and are learned at run-time, once or repeatedly/dynamically)
If a reactive system needs to support parallelism, i.e., the ability to execute multiple rules at once, the underlying programming language must have the ability to multi-task, i.e., execute several processes/pieces of code in parallel. The ability to multi-task is critical in reactive systems: if a system cannot monitor its sensors in parallel, but must go from one to another in sequence, it may miss some event, or at least the onset of an event, thus failing to react in time. Now that we understand the building blocks of a reactive system (reactive rules coupling sensors and effectors, i.e., situations and actions), we need to consider principled ways of organizing reactive controllers. We will start with the best-known reactive control architecture, the Subsumption Architecture, introduced by Rod Brooks at MIT in 1985.
In the Subsumption Architecture, the controller is built up incrementally out of layers, each implementing a complete, simple competence; layer 0 is the most basic (for example, avoiding obstacles). When a new layer, layer 1, is added, layer 0 continues to function, but may be influenced by layer 1, and so on up. If layer 1 fails, layer 0 is unaffected. When layer 1 is designed, layer 0 is taken into consideration and utilized, i.e., its existence is subsumed, hence the name of the architecture. Layer 1 can inhibit the outputs of layer 0 or suppress its inputs. Subsumption systems grow from the bottom up, and layers can keep being added, depending on the tasks of the robot. How exactly the layers are split up depends on the specifics of the robot, the environment, and the task. There is no strict recipe, but some solutions are better than others, and most are derived empirically. The inspiration behind the Subsumption Architecture is the evolutionary process, which introduces new competencies based on the existing ones. Complete creatures are not thrown out and new ones created from scratch; instead, solid, useful substrates are used to build up to more complex capabilities.
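Because a subsumption controller is, at heart, a set of prioritized reactive rules, the flavor of the idea can be sketched in a few lines of Python. This is only a toy illustration of layered, priority-based arbitration, with assumed sensor and action names; Brooks's original systems were built from networks of augmented finite state machines with explicit inhibition and suppression wires, not a simple loop like this.

    # Subsumption-style layering (sketch): layer 0 keeps working on its own;
    # layer 1, added later, takes over only when it has something to contribute.

    def layer0_avoid(sensors):
        # Basic competence: avoid obstacles.
        if sensors["front_distance"] < 0.3:
            return "turn_away"
        return "stop"

    def layer1_wander(sensors):
        # Higher competence: wander when the way is clear; return None to defer.
        if sensors["front_distance"] >= 0.3:
            return "drive_forward"
        return None

    def control(sensors):
        # Higher layers are consulted first; if they defer, lower layers act.
        for layer in (layer1_wander, layer0_avoid):
            action = layer(sensors)
            if action is not None:
                return action

    print(control({"front_distance": 1.0}))   # drive_forward
    print(control({"front_distance": 0.1}))   # turn_away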
Many robots resemble human arms, and have shoulders, elbows, wrists, and even fingers. This gives a robot many options for moving, and helps it do things in place of a human arm. In order to reach any possible point in space within its work envelope, a robot uses a total of 7 degrees of freedom. Each independent direction in which a joint can move gives the arm one degree of freedom; so a simple robot arm with 3 degrees of freedom could move in three ways: up and down, left and right, and forward and backward. Many of today's robots are designed to move with these 7 degrees of freedom. The human arm is an amazing design: it allows us to place our all-purpose end effector, the hand, where it is needed. Jointed-arm robots mimic the ability of human arms to be flexible, precise, and ready for a wide variety of tasks. The jointed-arm robot has six degrees of freedom, which enable it to perform jobs that require versatility and dexterity. The design of a jointed-arm robot is similar to a human arm, but not exactly the same.
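The degrees-of-freedom idea can be made concrete with a small forward-kinematics sketch for a planar arm with three revolute joints, i.e., three degrees of freedom in a plane. The link lengths are arbitrary example values.

    import math

    # Planar 3-DOF arm: each joint angle is one degree of freedom.
    # Forward kinematics: where does the end effector end up for given joint angles?

    def end_effector_position(joint_angles, link_lengths=(0.3, 0.25, 0.15)):
        x = y = 0.0
        total_angle = 0.0
        for angle, length in zip(joint_angles, link_lengths):
            total_angle += angle                 # joint angles accumulate along the chain
            x += length * math.cos(total_angle)
            y += length * math.sin(total_angle)
        return x, y

    # Example: all three joints at 30 degrees.
    print(end_effector_position([math.radians(30)] * 3))

A full spatial arm adds joints for the remaining position and orientation axes, which is what the jointed-arm robots described above provide.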
The term "artificial intelligence" is defined as systems that combine sophisticated hardware and software with elaborate databases and knowledge-based processing models to demonstrate characteristics of effective human decision making. The criteria for artificial systems include the following: 1) functional: the system must be capable of performing the function for which it has been designed; 2) able to manufacture: the system must be capable of being manufactured by existing manufacturing processes; 3) designable: the design of the system must be imaginable by designers working in their cultural context; and 4) marketable: the system must be perceived to serve some purpose well enough, when compared to competing approaches, to warrant its design and manufacture.
Robotics is one field within artificial intelligence. It involves mechanical, usually computer-controlled, devices that perform tasks requiring extreme precision, or work that is too tedious or hazardous for people. Traditional Robotics uses Artificial Intelligence planning techniques to program robot behaviors, and treats robots as technical devices that have to be developed and controlled by a human engineer. The Autonomous Robotics approach suggests that robots could develop and control themselves autonomously. These robots are able to adapt to both uncertain and incomplete information in constantly changing environments. This is possible by imitating the learning process of a single natural organism, or through Evolutionary Robotics, which applies selective reproduction to populations of robots and lets a simulated evolutionary process develop adaptive robots. The artificial intelligence concept of the "expert system" is highly developed. It describes the robot programmer's ability to anticipate situations and provide the robot with a set of "if-then" rules, for example: if encountering a stairwell, stop and retreat. The more sophisticated concept is to give the robot the ability to "learn" from experience. A robot equipped with a neural-network brain can sample its world at random: the robot is given some life-style goals, and, as it experiments, the actions resulting in success are reinforced in the brain. The result is a robot that devises its own rules. This approach appeals to researchers and the wider community because it parallels human learning in many ways. Artificial intelligence dramatically reduces or eliminates the risk to humans in many applications. Powerful artificial intelligence software helps to fully develop the high-precision machine capabilities of robots, often freeing them from direct human control and vastly improving their productivity. When a robot interacts with a richly populated and variable world, it uses its senses to gather data and then compares the sensory inputs with expectations that are embedded in its world model. The effectiveness of the robot is therefore limited by the accuracy with which its programming models the real world.
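The "reinforce what worked" idea can be sketched with a toy reward-weighted action chooser. This is not a neural network, just the simplest possible illustration of the principle; the action names and update factors are made up for the example.

    import random

    # Toy reinforcement of action "rules": actions that lead to success are chosen
    # more often over time, so the robot effectively devises its own rules.

    weights = {"go_left": 1.0, "go_right": 1.0, "back_up": 1.0}

    def choose_action():
        actions, w = zip(*weights.items())
        return random.choices(actions, weights=w, k=1)[0]

    def reinforce(action, success):
        # Strengthen actions that worked, weaken those that did not (floor at 0.1).
        weights[action] = max(0.1, weights[action] * (1.2 if success else 0.8))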
Industrial robots are rarely mobile; work is generally brought to the robot. A few industrial robots are mounted on tracks and are mobile within their work station. Service robots are virtually the only robots that travel autonomously. Research on robot mobility is extensive. The goal of this research is usually to help the robot navigate in unstructured environments while encountering unforeseen obstacles. Some projects raise the technical barriers by insisting that the locomotion involve walking, either on two appendages, like humans, or on many, like insects. Most projects, however, use wheels or tractor mechanisms. Many kinds of effectors and actuators can be used to move a robot around. Some categories are:
legs (for walking/crawling/climbing/jumping/hopping)
wheels (for rolling)
arms (for swinging/crawling/climbing)
flippers (for swimming)
Wheels
Wheels are the locomotion effector of choice. Wheeled robots (like almost all wheeled mechanical devices, such as cars) are built to be statically stable. It is important to remember that wheels can be constructed with as much variety and innovative flair as legs: wheels can vary in size and shape, and can consist of simple tires, complex tire patterns, tracks, or wheels within cylinders within other wheels spinning in different directions, to provide different types of locomotion properties. So wheels need not be simple, but typically they are, because simple wheels are quite efficient. Having wheels does not imply holonomicity: 2- and 4-wheeled robots are generally not holonomic. A popular and efficient design involves two differentially-steerable wheels and a passive caster. Differential steering means that the two (or more) wheels can be steered separately (individually) and thus differently. If one wheel can turn in one direction and the other in the opposite direction, the robot can spin in place; this is very helpful for following arbitrary trajectories. Tracks are also often used (e.g., on tanks).
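A minimal sketch of differential-steering kinematics shows why such a robot can spin in place: with equal and opposite wheel speeds, the forward velocity is zero while the turn rate is not. The wheel separation is an arbitrary example value.

    # Differential-drive kinematics: body velocity from the two wheel speeds.

    WHEEL_SEPARATION = 0.25   # meters between the two driven wheels (example value)

    def body_velocity(v_left, v_right, separation=WHEEL_SEPARATION):
        forward = (v_right + v_left) / 2.0            # m/s along the robot's heading
        turn_rate = (v_right - v_left) / separation   # rad/s; opposite speeds -> spin in place
        return forward, turn_rate

    print(body_velocity(-0.1, 0.1))   # (0.0, 0.8): pure rotation, no translation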
Legs
While most animals use legs to get around, legged locomotion is a very difficult robotic problem, especially compared to wheeled locomotion. First, any robot needs to be stable (i.e., not wobble and fall over easily). There are two kinds of stability: static and dynamic. A statically stable robot can stand still without falling over. This is a useful feature, but a difficult one to achieve: it requires that there be enough legs or wheels on the robot to provide sufficient static points of support. For example, people are not statically stable. In order to stand up, which appears effortless to us, we are actually using active control of our balance, through nerves and muscles and tendons. This balancing is largely unconscious, but it must be learned, which is why it takes babies a while to get it right, and why certain injuries can make it difficult or impossible.

With more legs, static stability becomes quite simple. In order to remain stable, the robot's center of gravity (COG) must fall within its polygon of support. This polygon is basically the projection of all of its support points onto the ground surface. In a two-legged robot, the polygon is really a line, and the COG cannot be kept stably over a point on that line to hold the robot upright. A three-legged robot, however, with its legs in a tripod organization and its body above, produces a stable polygon of support, and is thus statically stable. But what happens when a statically stable robot lifts a leg and tries to move? Does its COG stay within the polygon of support? It may or may not, depending on the geometry. For certain robot geometries, it is possible (with various numbers of legs) to always stay statically stable while walking. This is very safe, but it is also very slow and energy inefficient. A basic assumption of the statically stable gait is that the weight of a leg is negligible compared to that of the body, so that the total center of gravity (COG) of the robot is not affected by the leg swing. Based on this assumption, the conventional static gait is designed so as to keep the COG of the robot inside the support polygon, which is outlined by the supporting feet.

The alternative to static stability is dynamic stability, which allows a robot (or animal) to be stable while moving. For example, one-legged hopping robots are dynamically stable: they can hop in place or to various destinations without falling over, but they cannot stop and stay standing (this is an inverse-pendulum balancing problem). A statically stable robot can use dynamically-stable walking patterns, to be fast, or it can use statically stable walking. A simple way to think about this is by how many legs are up in the air during the robot's movement (i.e., its gait). Six legs are the most popular number, as they allow for a very stable walking gait, the tripod gait. If the same three legs move each time, this is called the alternating tripod gait; if the legs vary, it is called the ripple gait. A rectangular six-legged robot can lift three legs at a time to move forward, and still retain static stability. How does it do that? It uses the so-called alternating tripod gait, a biologically common walking pattern for 6 or more legs: one middle leg on one side and two non-adjacent legs on the other side of the body lift and move forward at the same time, while the other three legs remain on the ground and keep the robot statically stable. Roaches move this way, and can do so very quickly. Insects with more than 6 legs (e.g., centipedes and millipedes) use the ripple gait. However, when they run really fast, they switch gaits and actually become airborne (and thus not statically stable) for brief periods of time.
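The static-stability test described above (is the projected COG inside the support polygon?) is easy to write down. The sketch below uses a standard ray-casting point-in-polygon test; the foot coordinates are illustrative only.

    # Static stability check: is the projected COG inside the support polygon
    # formed by the supporting feet on the ground plane?

    def point_in_polygon(point, polygon):
        x, y = point
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            if (y1 > y) != (y2 > y):                              # edge crosses the ray's height
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside

    # Tripod of supporting feet (ground-plane x, y) and the projected COG.
    support_feet = [(0.0, 0.2), (-0.15, -0.1), (0.15, -0.1)]
    print(point_in_polygon((0.0, 0.0), support_feet))   # True: statically stable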
Statically stable walking is very energy inefficient. As an alternative, dynamic stability enables a robot to stay up while moving. This requires active control (i.e., solving the inverse-pendulum problem). Dynamic stability can allow for greater speed, but requires harder control. Balance and stability are very difficult problems in control and robotics, which is why, if you look at most existing robots, they will have wheels or plenty of legs (at least 6). Research robotics, of course, studies single-legged, two-legged, and other dynamically-stable robots, for various scientific and applied reasons. Wheels are more efficient than legs. They also do appear in nature, in certain bacteria, so the common myth that nature cannot make wheels is not well founded. However, evolution favors lateral symmetry, and legs are much easier to evolve, as is abundantly obvious: if you look at population sizes, insects are the most populous animals, and they have many more than two legs.
The Spider, a Legged Robot
In solving problems, the Spider is aided by the spring quality of its 1 mm steel-wire legs. Hold one of its feet fixed relative to the body and the mechanism keeps turning, the obstructed motor consuming less than 40 mA while the leg flexes. Let go and the leg springs back into shape. As I write this, the Spider is scrambling up and over my keyboard. Some of its feet get temporarily stuck between keys, springing loose again as others push down. It has no trouble whatsoever with this obstacle, nor with any of the others on my cluttered desk, even though it is still utterly brainless.
As the feet rise to a maximum of 2 cm off the floor, a cassette box is about the tallest vertical obstacle that the Spider is able to step onto. Another limitation is slope: when asked to sustain a climb angle of more than about 20 degrees, the Spider rolls over backwards. And even this fairly modest angle (extremely steep for a car, by the way) requires gait control, making sure that both rear legs do not lift at the same time. Improvements are certainly possible. Increasing the step size would require a longer body (more distance between the legs) and a different gear train. A better choice might be more legs, say 10 or 12 on a longer body, but with the same gear wheels. That would give better traction and climbing ability. And if a third motor is allowed, one might consider a horizontal hinge in the `backbone': make a gear shaft the center of a nice, tight hinge joint, and the drive train can function as before. Using the third motor and a suitable mechanism, the robot could then raise its front part to step onto a higher obstacle, somewhat like a caterpillar. But turning on the spot becomes difficult.
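The gait rule mentioned above (never lift both rear legs at the same time) is simple enough to state as a check; a minimal sketch with hypothetical leg names:

    # Sketch of the Spider's climbing-gait constraint: reject any phase of the gait
    # in which both rear legs are off the ground at the same time.

    def step_is_allowed(lifted_legs):
        return not {"rear_left", "rear_right"}.issubset(lifted_legs)

    print(step_is_allowed({"front_left", "rear_left"}))    # True
    print(step_is_allowed({"rear_left", "rear_right"}))    # False: would roll over on a slope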
Mobile Robots
Mobile robots are able to move around; usually they perform tasks such as searching areas. A prime example is the Mars Explorer, specifically designed to roam the Martian surface. Mobile robots are a great help in tasks such as searching a collapsed building for survivors. They are used for tasks where people cannot go, either because it is too dangerous or because people cannot reach the area that needs to be searched. Mobile robots can be divided into two categories:
Rolling Robots: Rolling robots have wheels to move around. These are the type of robots that can quickly and easily move around. However, they are only useful in flat areas; rocky terrain gives them a hard time. Flat terrain is their territory.
Walking Robots: Robots on legs are usually brought in when the terrain is rocky and difficult to enter with wheels. Legged robots have a hard time shifting their balance and keeping themselves from tumbling; that is why most robots with legs have at least four of them, and usually six or more. Even when they lift one or more legs they can still keep their balance. The development of legged robots is often modeled after insects or crawfish.
Stationary Robots
Robots are not only used to explore areas or imitate a human being. Most robots perform repetitive tasks without ever moving an inch, and most of them work in industrial settings. Especially dull and repetitive tasks are suitable for robots: a robot never grows tired, and will perform its duty day and night without ever complaining. When the tasks at hand are done, the robots are reprogrammed to perform other tasks.
Autonomous Robots
Autonomous robots are self-supporting, or in other words self-contained; in a way, they rely on their own "brains." An autonomous robot runs a program that gives it the ability to decide which action to perform depending on its surroundings. At times these robots even learn new behavior: they start out with a short routine and adapt it to be more successful at the task they perform, and the most successful routine is repeated, so that their behavior is shaped over time. Autonomous robots can learn to walk or to avoid obstacles they find in their way. Think of a six-legged robot: at first the legs move at random, but after a little while the robot adjusts its program and performs a pattern that enables it to move in a particular direction.
Remote-control Robots
An autonomous robot, despite its autonomy, is not a very clever or intelligent unit. Its memory and brain capacity are usually limited, so an autonomous robot can be compared to an insect in that respect. When a robot needs to perform more complicated yet undetermined tasks, an autonomous robot is not the right choice: complicated tasks are still best performed by human beings with real brainpower. Instead, a person can guide a robot by remote control, performing difficult and usually dangerous tasks without being at the spot where those tasks are carried out. To detonate a bomb, it is safer to send the robot into the danger area.
Virtual Robots
Virtual robots don't exist in real life. Virtual robots are just programs, building blocks of software inside a computer. A virtual robot can simulate a real robot or just perform a repetitive task. A special kind of virtual robot is one that searches the world wide web: the internet has countless robots crawling from site to site. These WebCrawlers collect information on websites and send it to the search engines. Another popular virtual robot is the chatterbot. These robots simulate conversations with users of the internet. One of the first chatterbots was ELIZA, and there are many more today.
BEAM Robots
BEAM is short for Biology, Electronics, Aesthetics and Mechanics. BEAM robots are made by hobbyists. BEAM robots can be simple and very suitable for starters.
Biology
Robots are often modeled after nature. A lot of BEAM robots look remarkably like insects, which are easy to build in mechanical form. Not only the mechanics are an inspiration: the limited behavior of insects can also be programmed easily into a small amount of memory and processing power.
Electronics
Like all robots, BEAM robots contain electronics; without electronic circuits the motors cannot be controlled. Many BEAM robots also use solar power as their main source of energy.
Aesthetics
A BEAM robot should look nice and attractive. BEAM robots are not just printed circuit boards with some parts attached, but have an appealing and original appearance.
Mechanics
In contrast with expensive big robots, BEAM robots are cheap and simple, built out of recycled material and running on solar energy.