Intelligent Robot Assisted
C.MARISELVI P.M.SINDHU FINAL ECE [email protected] SARDAR RAJA COLLEGE OF ENGINEERING, ALANGULAM
Abstract
The unprecedented number and scale of natural and human-induced disasters in the past decade have urged the emergency search and rescue community around the world to seek newer, more effective equipment to enhance its efficiency. Search and rescue work to date still relies on old techniques such as search dogs, camera-mounted probes, and other technology that has been in service for decades. Intelligent robots equipped with advanced sensors are attracting growing attention because they can search more efficiently. This paper presents the design and application of a distributed wireless sensor network prototyping system for tracking mobile search and rescue robots. The robotic system can navigate autonomously into rubble and search for living human body heat using its thermal array sensor. The wireless sensor network (ZigBee) tracks the location of the robot by analyzing signal strength. The design and development of the network and of the physical robot prototype are described in this paper.
Introduction
Humanitarian search and rescue operations are part of most large-scale emergency operations. Tele-operated robotic search and rescue systems, consisting of tethered mobile robots that can navigate deep into rubble to search for victims and transfer critical on-site data for rescuers to evaluate from a safe spot outside the disaster-affected area, have gained the interest of many emergency response institutions. Distributed wireless sensor networks, applied in many different fields including medical, civil, and environmental research, have demonstrated their value in conveying data over large areas with a high level of power efficiency, which is particularly suitable for tracking the location of search and rescue robots in a large search field. This work demonstrates the use of a distributed wireless sensor network to track search and rescue robots in an open field. The goal of the research is to develop a physical prototype to demonstrate the feasibility of the proposed application, which can help to acquire realistic data to use as simulation parameters in future search and rescue research. This paper begins with an introduction to humanitarian search and rescue and robotic search and rescue systems. Then the paper moves on to describe the basic specifications of the wireless sensor network system. An introduction to AIS (Artificial Immune Systems) and the implementation of GSCF in the mobile robot tracking prototyping system is also included in the second half of the paper. Future work is discussed at the end of the paper.
Humanitarian Search and Rescue
Natural and human-induced disasters in the past decade have claimed millions of lives and demolished an astronomical sum of assets around the world. Natural disasters such as the Oklahoma tornado outbreak in 1999, the Indian Ocean earthquake in 2004, and Hurricane Katrina and the Pakistan earthquake in 2005 all claimed deadly and costly tolls from the affected communities. Human-induced disasters, such as the civil war between the Ugandan government and the LRA (Lord's Resistance Army) that has dragged on for nearly two decades since 1987, the long-running Somali civil war since 1986, and the never-ending Palestinian conflict in Hebron and the Gaza Strip, have caused far more casualties than nature has ever claimed. Searching for and removing landmines during and after a war can reduce civilian casualties and soothe local tension. De-mining and defusing landmines after the settlement of a war is a humanitarian responsibility that warring parties should bear. However, to this day, uncleared minefields still scatter countries like Vietnam and Cambodia, claiming the lives of ill-fated civilians. Collapsed buildings are a common field environment for humanitarian search and rescue operations. Earthquakes, typhoons, tornados, weaponry destruction, and catastrophic explosions can all generate damaged buildings on a large scale. The use of heavy machinery is prohibited because it would destabilize the structure, risking the lives of rescuers and of victims buried in the rubble. The pulverized concrete, glass, furniture, and other debris should be removed only by hand (see Figure 1).
Figure 1 shows locals in the 2005 Pakistan earthquake attempting to search for survivors in a collapsed girls' college. The structure was in unstable condition; excavation and lifting machinery were prohibited from the site. Rescue specialists use trained search dogs, cameras, and listening devices to search for victims from above ground. Though search dogs are effective in finding humans underground, they are unable to provide a general description of the physical environment in which the victim is located. Camera-mounted probes can provide search specialists a visual image beyond the voids that dogs can navigate through; however, their effective range is no more than 4-6 meters along a straight line below the ground surface.
Design Principles
During field operations, existing robots deployed into rubble to search for victims, or into minefields to locate bombs, are often deterred by narrow passages. Smaller robots can often reach deeper into a rubble pile than larger ones; however, smaller robots are more likely to be reported lost if they drop into large openings. Since search and rescue robots are typically deployed to work in highly unstructured environments, damage to and loss of robots due to uncontrollable external factors should not be considered failures; instead, all losses should be treated as calculable risk and incorporated into normal operation cost. Following this line of thought, to minimize operation cost, small low-cost search and rescue robots that can be deployed in high volume are more suitable for search and rescue missions in unstructured environments than large sophisticated units. The search and rescue robot system discussed in this paper consists of an operation console and two autonomous robots. The robots are essentially two mechatronically loaded aluminum cases, each fitted with two tread belts driven by two separate gear motors. Each of these robots is equipped with a Thermal Array Sensor (TAS), a camera for transmitting visual images to the operator, a microphone for picking up sound under the rubble, an accelerometer to tell the orientation of the robot with respect to gravitational pull, a sonar range finder for obstacle avoidance, a high-intensity LED for lighting, a 6V rechargeable battery, a custom-built AIS control board designed for executing different AIS control algorithms on the physical prototype, and a ZigBee wireless network module for communication between the operator and the robots.
The physical prototype of the newly developed robot is shown in the accompanying figure; the battery pack on top of the robot serves as a scale to show the robot's dimensions. The control board provides two 2A on-board motor channels controlled by an L293D driver, a two-channel tri-color LED indicator for testing and debugging, and a smaller LED power indicator. A 5V I2C port that can support up to eight I2C-enabled instruments and an RS232 serial port for programming are also available. The prototype is equipped with a camera, the TAS, and a high-intensity LED encased in the aluminum case. Sonar is to be encased in the second prototype. The operation console consists of a mini-monitor for displaying video images obtained from the robots, a ZigBee wireless network module for communication, and a remote control unit for interfacing human inputs to the mechatronic system. The primary function of the robots is to navigate autonomously into rubble to search for living bodies using the TAS mounted at the front of the robot. The TAS is a thermopile array that detects infrared in the 2 μm to 22 μm range. The unit has eight thermopiles arranged in a row and can measure the temperature of 8 adjacent points simultaneously. These thermopiles are identical to those used in non-contact infrared thermometers, and can detect heat generated by a human body from 2 meters away regardless of lighting conditions. The robots can avoid obstacles and find passages under rubble autonomously using the sonar range finder, but the operator can, at any time, choose to control each robot individually using the remote controller with the assistance of the control console's mini-monitor. This alternative control scheme enables the human operator to help the robot solve navigation problems based on real-time visual images. Without this alternative control scheme, the robots would require many more onboard sensors and higher computational power to achieve comparable results, which in turn would add to the weight and cost of the robots. The robots are equipped with two separate communication channels to minimize power consumption. The A/V channel for audio and video uses
2.4 GHz transmission and is by default turned off to save power. When the robot detects an object with human-body-like temperature, or when it has difficulty navigating out of a trap, it generates a request for the operator to turn on the A/V channel and assist it to navigate or to determine whether the object is a human being. The data channel is for sharing data and transmitting commands between the robots and the operator. This is done through the second channel using the ZigBee communication modules (ZigBee 2007), custom-designed by Sengital (2007) (see Fig. 8). ZigBee is a low-power, short-distance wireless standard based on IEEE 802.15.4 that was developed to address the needs of wireless sensing and control applications. ZigBee supports many network topologies, including mesh. Mesh networking can extend the range of the network through routing, while self-healing increases the reliability of the network by re-routing a message in case of a node failure. This unique feature is highly desirable for search and rescue robots operating in unstructured environments. The ZigBee communication channel can also be turned off to save power, and can be woken wirelessly with a single command. In fact, it is programmed to stay in standby mode when it is not transmitting or receiving data.
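As a sketch of how the eight TAS readings described above might be screened for body heat, the following checks each thermopile pixel against a human-body temperature window. The temperature bounds and function name are illustrative assumptions, not the robot's actual firmware.

```python
# Sketch: screening an 8-element thermopile array (TAS) for human body heat.
# The temperature thresholds are illustrative assumptions.

HUMAN_TEMP_MIN_C = 30.0   # assumed lower bound for human skin temperature
HUMAN_TEMP_MAX_C = 40.0   # assumed upper bound

def detect_human(tas_readings):
    """Return indices of thermopile pixels whose temperature falls in the
    human-body range. tas_readings is a list of 8 temperatures in Celsius."""
    return [i for i, t in enumerate(tas_readings)
            if HUMAN_TEMP_MIN_C <= t <= HUMAN_TEMP_MAX_C]

# Example: pixels 3 and 4 see a warm object, the rest see ambient rubble.
readings = [18.2, 18.5, 19.0, 33.1, 34.0, 19.2, 18.8, 18.4]
print(detect_human(readings))  # -> [3, 4]
```

A real deployment would also track how the warm pixels move across the array as the robot pans, to distinguish a body from rubble heated by sunlight.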
Mechatronic architecture
Behavioral Design
To effectively achieve its designated function, each robot is instructed to behave in two distinct modes in response to external stimulation. These two modes govern the robot's actions in victim searching and in exception handling. When the robot is in search mode, it uses its sonar to identify open passages and navigates autonomously into the rubble to look for possible victims using its TAS. While in this mode, the robot shuts down all onboard devices that are not directly related to its objective, to conserve energy for navigation and exploration. In practice, the A/V system and the high-intensity LED for illumination are deactivated in search mode. When the robot identifies a possible victim based on data obtained from the TAS, or when the robot believes it is trapped in rubble, it switches to exception-handling mode to request operator assistance. In exception-handling mode, the robot first sends all data related to its current situation (i.e. the current set of data from the TAS and sonar) plus its current status (i.e. possible victim identified, or trapped) to the operation console. Then it shuts down all energy-consuming devices, puts the ZigBee communication module into standby mode, and waits for the operator's assistance. The human operator can reactivate the robot wirelessly by responding through the console. Once the robot is reactivated in exception-handling mode, it reinitiates the A/V device, the LED, the sonar, the TAS, and the motor controllers to help the human operator determine whether the object identified is a living human body. The human operator can also remotely control the robot to navigate out of a trap with the assistance of the video feedback. The robot switches back to search mode at the operator's command. External interruptions (operator commands or help requests) received through the ZigBee communication module can also cause the robot to enter exception-handling mode.
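The two behavioral modes above can be sketched as a small state machine. The event names and device flags are simplified assumptions drawn from the text, not the robot's actual control code.

```python
# Sketch of the search / exception-handling behavioral modes as a state
# machine. Event names and device flags are illustrative assumptions.

from enum import Enum, auto

class Mode(Enum):
    SEARCH = auto()              # autonomous exploration; A/V and LED off
    EXCEPTION_HANDLING = auto()  # waiting for / assisted by the operator

class Robot:
    def __init__(self):
        self.mode = Mode.SEARCH
        self.av_on = False
        self.led_on = False

    def on_event(self, event):
        if self.mode is Mode.SEARCH and event in (
                "victim_candidate", "trapped", "operator_interrupt"):
            # send situation data, then power down and wait for the operator
            self.av_on = self.led_on = False
            self.mode = Mode.EXCEPTION_HANDLING
        elif self.mode is Mode.EXCEPTION_HANDLING and event == "operator_reactivate":
            # operator woke the robot: bring up A/V, LED, sonar, TAS, motors
            self.av_on = self.led_on = True
        elif self.mode is Mode.EXCEPTION_HANDLING and event == "resume_search":
            self.av_on = self.led_on = False
            self.mode = Mode.SEARCH

r = Robot()
r.on_event("victim_candidate")
print(r.mode)  # -> Mode.EXCEPTION_HANDLING
```

Keeping the mode logic in one transition function makes it easy to verify that every power-hungry device is off in search mode.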
The ZigBee-based sensor network hardware employed is based on the Chipcon CC2431 development kit.
Reference nodes and blind nodes
Reference nodes are static nodes placed at known positions that can tell other nodes where they are on request. Reference nodes do not need the hardware for location detection and do not perform any calculations. Blind nodes, on the other hand, are programmed to collect signals from all reference nodes responding to their request, read out the respective RSSI values, feed the values into the location engine, and then read out the calculated position and send it to the control console. Since all location calculations are performed at each blind node, the algorithm is genuinely decentralized. This property reduces the amount of data transferred in the network, since only the calculated position is transferred, not the data used to perform the calculation. The system is therefore highly scalable. The ZigBee modules used embed an 8051 8-bit single-cycle processor, 128 KB of in-system programmable flash, and 8 KB of RAM, which adds up to roughly 8 times the performance of a standard 8051. This processing power allows a blind node to use up to 16 reference nodes to estimate its current position. In theory, signals from 3 reference nodes are the minimum needed to make a sensible estimation; the more reference node signals received, the more accurate the estimation. The algorithm used to estimate the locations of the blind nodes within the sensor network is straightforward. To estimate its current location, the blind node on the mobile robot broadcasts a specific signal to its surroundings. All reference nodes within range respond to the signal by sending a packet containing the reference node's relative coordinates. The algorithm uses Received Signal Strength Indicator (RSSI) values to estimate the distance from each reference node. Since the RSSI value decreases as distance increases, the blind node chooses the 8 nearest reference nodes by comparing RSSI values between all reference nodes in range. Based on the strength of these returned signals and the origin of each signal included in the packet, the position of the blind node can be estimated.
The sensor network built with the 12 ZigBee modules in the development kit has 9 modules programmed as reference nodes and 2 modules programmed as blind nodes. The 9 reference nodes were distributed around the laboratory to roughly resemble a square grid, as shown in Figure 4. The two blind nodes were installed on the two mobile robots. The last ZigBee module of the 12 was gallantly sacrificed to a short circuit during programming.
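The blind node's estimation step can be sketched as follows: convert each reference node's RSSI to a distance with a log-distance path-loss model, then take a distance-weighted centroid of the strongest responses. The path-loss constants and the centroid weighting are illustrative assumptions; the CC2431 location engine implements its own variant of this idea in hardware.

```python
# Sketch of RSSI-based position estimation at a blind node.
# rssi_at_1m and path_loss_exp are assumed calibration constants.

def rssi_to_distance(rssi, rssi_at_1m=-40.0, path_loss_exp=2.5):
    """Estimated distance in meters from a received signal strength in dBm,
    using a log-distance path-loss model."""
    return 10 ** ((rssi_at_1m - rssi) / (10 * path_loss_exp))

def estimate_position(responses, k=8):
    """responses: list of ((x, y), rssi) packets from reference nodes in
    range. Uses the k strongest responses, weighting each by 1/distance."""
    nearest = sorted(responses, key=lambda r: -r[1])[:k]
    weights = [1.0 / max(rssi_to_distance(rssi), 1e-6) for _, rssi in nearest]
    total = sum(weights)
    x = sum(w * pos[0] for (pos, _), w in zip(nearest, weights)) / total
    y = sum(w * pos[1] for (pos, _), w in zip(nearest, weights)) / total
    return x, y

# A blind node hearing two equally strong reference nodes is placed at
# their midpoint.
print(estimate_position([((0, 0), -50), ((10, 0), -50)]))  # -> (5.0, 0.0)
```

In practice the constants would be calibrated per environment, since the path-loss exponent changes drastically inside rubble.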
Advantages: covers a large area; low power consumption.
Disadvantages: susceptible to interference; slow response time; cannot detect slippage.
Distributed Wireless Sensor Network
The distributed wireless robot tracking system presented in this paper is based on the GSCF (General Suppression Control Framework) developed for controlling decentralized systems. For the wireless robot tracking system in this research, the primary objective is to continuously track the location of each robot by evaluating a collective set of feedbacks from multiple sources. These feedbacks include coordinates from the ZigBee communication module, motor encoders, and an electronic compass. The only system constraint to be incorporated into the system is the accuracy of the estimated robot locations. The low-cost ZigBee-based sensor network used here is suitable for tracking robots over a large area and for relaying information over long distances in an energy-efficient manner. However, position estimates obtained from RF-based systems are vulnerable to interference; therefore, additional referencing sensors are often desirable in more accurate applications. The solution for this particular application is to take advantage of the readily available motor encoders and electronic compasses installed in the robots to generate more reliable position estimates, though these sensors all exhibit inherent reliability issues of their own. Table 1 lists their advantages and disadvantages.
Table 1. Advantages and disadvantages of the three feedbacks used in the system.
Based on the strengths and weaknesses of each type of sensor listed above, RSSI is a more reliable source for quickly estimating the robot's position without accumulated error. Motor encoders are not reliable for long-distance tracking because slippage error accumulates; however, they are good for short-distance position tracking. Electronic compasses can be used to confirm the direction in which the robot is moving, which in turn can verify the accuracy of the coordinates produced using RSSI estimation. In general, the blind node on the mobile robot samples the surrounding reference nodes 10 times per estimation. Each set of 10 RSSI values returned per reference node is converted to distances. The highest and lowest readings of each set are removed, and the standard deviation of the remaining readings is computed to evaluate the reliability of the estimated distance. The estimated distance is more reliable if the standard deviation is low; otherwise, the reliability is low.
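The per-reference-node filtering described above can be sketched as: trim the highest and lowest of the 10 samples, then use the standard deviation of the rest as a reliability measure. The reliability threshold is an illustrative assumption.

```python
# Sketch of the 10-sample trimming and reliability check for one
# reference node. max_stdev is an assumed threshold.

import statistics

def filtered_distance(samples, max_stdev=0.5):
    """samples: 10 distance estimates (meters) for one reference node.
    Returns (mean_distance, reliable_flag)."""
    trimmed = sorted(samples)[1:-1]          # drop highest and lowest
    mean = statistics.mean(trimmed)
    stdev = statistics.stdev(trimmed)
    return mean, stdev <= max_stdev          # low stdev -> reliable

samples = [2.1, 2.0, 2.2, 1.9, 2.1, 2.0, 2.3, 2.1, 9.8, 2.0]  # one outlier
dist, reliable = filtered_distance(samples)
print(round(dist, 2), reliable)  # -> 2.1 True
```

The trimming step removes the single multipath outlier (9.8 m) before the mean is taken, so one corrupted sample does not skew the estimate.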
The distributed wireless robot tracking system under discussion has two additional sensor sources that influence the robot's behavior. The encoder tells the displacement of the robot by counting rotations made by the motor. The electronic compass reads the robot's heading at any instant with reference to the earth's magnetic field. Suppressor cells that have high sensitivity to changes in these sensors' readings are situated in the Suppression Modulator (Figure 5). Though there are only three types of sensor sources, there are six types of suppressor cells in the system. Table 2 lists their functions.
The summation cell SC5 is designed to compare the estimated travel distance from the encoder with that from the sensor network. For example, a mobile robot driving against an obstacle would report high counts on the encoder, but the estimated position reported by the sensor network would probably remain unchanged. This discrepancy between the estimates from the two sensors is reflected in the suppression index produced by SC5: the higher the discrepancy, the higher the suppression index. In short, SC5 fuses data for the Cell Differentiator to evaluate. The function of summation cell SC6 is similar to that of SC5, except that it considers an additional constraint.
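SC5's comparison can be sketched as a normalized discrepancy between the encoder displacement and the sensor-network displacement. The wheel geometry constants and the normalization are assumptions for illustration, not the paper's actual formulation.

```python
# Sketch of SC5's data fusion: encoder distance vs. network distance.
# Wheel constants and the [0, 1] normalization are assumptions.

import math

WHEEL_CIRCUMFERENCE_M = 0.15   # assumed tread-wheel circumference
COUNTS_PER_REV = 64            # assumed encoder resolution

def sc5_suppression(encoder_counts, pos_before, pos_after):
    """Suppression index in [0, 1]: 0 = estimates agree, 1 = total conflict."""
    encoder_dist = encoder_counts / COUNTS_PER_REV * WHEEL_CIRCUMFERENCE_M
    network_dist = math.dist(pos_before, pos_after)
    if encoder_dist == 0 and network_dist == 0:
        return 0.0
    # normalized discrepancy between the two displacement estimates
    return abs(encoder_dist - network_dist) / max(encoder_dist, network_dist)

# Robot pushing against an obstacle: the encoder reports 1.5 m of travel
# but the sensor network sees no movement -> maximal suppression.
print(sc5_suppression(640, (0.0, 0.0), (0.0, 0.0)))  # -> 1.0
```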
SC6 determines whether the readings obtained from the sensor network are reliable by comparing the estimated direction from the sensor network against the reading from the electronic compass. SC6 takes the initial and final estimated locations from the sensor network to trigonometrically estimate the direction in which the robot is moving, then compares this estimate against the electronic compass reading from SC3 to produce a suppression index that reflects the discrepancy: the higher the discrepancy, the higher the suppression index. The suppression indices from SC5 and SC6 are crucial for the Cell Differentiator to adopt the behavior that best fits the situation. The Cell Differentiator is responsible for integrating complex information from different sources into simple instructions and converting intricate problems into quantitative outputs. Its decision flow is as follows. The suppression indices from the suppressor cells have priority over all others; they are evaluated first to see whether the estimates based on the encoders, the sensor network, and the compasses comply with each other. If the suppression index is low, meaning the estimate from the sensor network agrees with the additional sources (encoder and compass), the Suppression Modulator will not react strongly. If the Affinity Index is also low, meaning the RSSI data is stable, the system behaves in tolerant mode. Otherwise, if either the suppression index or the affinity index is high, the system switches into aggressive mode.
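The decision flow above can be sketched as a simple two-threshold check: suppression first, then affinity; only when both are low does the system stay tolerant. The thresholds are illustrative assumptions.

```python
# Sketch of the Cell Differentiator's tolerant/aggressive decision.
# Threshold values are illustrative assumptions.

def choose_mode(suppression_index, affinity_index,
                suppression_threshold=0.3, affinity_threshold=0.3):
    """Return 'tolerant' when sensor estimates agree and the RSSI data is
    stable, otherwise 'aggressive'."""
    if suppression_index >= suppression_threshold:
        return "aggressive"   # encoder/compass disagree with the network
    if affinity_index >= affinity_threshold:
        return "aggressive"   # RSSI data is unstable
    return "tolerant"

print(choose_mode(0.1, 0.1))  # -> tolerant
print(choose_mode(0.1, 0.8))  # -> aggressive
```

Evaluating the suppression index before the affinity index mirrors the priority the text gives to cross-sensor agreement.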
Decision scheme in the Cell Differentiator. Since the Cell Differentiator in GSCF is only responsible for producing high-level behavioral instructions such as "sound the alarm", "stand fast", or "search for heat", there has to be a component that interprets these high-level instructions into low-level instructions for the mechanical controllers. This component is called the Cell Reactor. Since mechanical control schemes vary greatly between different operation platforms, GSCF delegates this work to the Cell Reactor, so the high-level design of the other components can remain platform independent.
The AIS-based distributed tracking system developed for the mobile search and rescue robots is being tested indoors in a laboratory, between tables, chairs, and miscellaneous obstacles. Within the environment there are uncontrolled RF interferences of different sorts, including Wi-Fi routers, mobile phones, activated RFID systems, Bluetooth devices (keyboard and mouse), and EMF from various mechanical devices. Despite the abundant sources of interference, the test environment is far from practical for what this system is designed for. Long-term work is to develop methods to evaluate the accuracy of the sensor-network-estimated position against the actual position in an obstructed environment, i.e. in rubble. This work would provide a base to compare and evaluate the results of different control and tracking algorithms. In addition, technologies and methods that can help set up the system quickly for emergency applications are another important area of work needed to make the system truly applicable.
Conclusion
Thanks to the rapid development of robotics and wireless technologies, human lives can now be saved from natural disasters such as earthquakes. Intelligent robots equipped with advanced sensors can come to the rescue of precious lives struggling under the rubble, and make the engineering community proud of its innovations. Implementing this idea would provide immense assistance in rescue operations and create a landmark in the field of electronics and robotics.
References
[1] Murphy, R. R. (2004). National Science Foundation Summer Field Institute for rescue robots for research and response. AI Magazine, vol. 25, no. 2, pp. 133-136.
[2] Snyder, R. (2005). Robots assist in search and rescue efforts at WTC. IEEE Robotics and Automation Magazine, vol. 8.
[3] Kobayashi, A. and Nakamura, K. (1983). Rescue Robot for Fire Hazards.
[4] Ko, A. W. Y. and Lau, Y. K. H. An immuno robotic system for search and rescue. Proceedings 6th.
[5] Ko, A., Lau, H. Y. K., and Lau, T. L. (2006). An immuno control framework for decentralized mechatronic control. Proc. 3rd International Conference on Artificial Immune Systems.