Human-Robot Interaction using a Multi-Touch Display
Amanda Courtemanche, Mark Micire, Holly Yanco
Department of Computer Science, University of Massachusetts Lowell
E-mail: {acourtem, mmicire, holly}@cs.uml.edu
Abstract

Recent advances in digital tabletop touch- and gesture-activated screens have allowed for small group collaboration. The newest generation of screens simultaneously supports multiple users, multiple contact points per user, and gesture recognition. To the authors' knowledge, this technology has never been applied to robot control. We envision that an interactive multi-touch screen display for robot control would improve human-robot interaction (HRI) and increase efficiency. To this end, we have adapted several interfaces for use on the MERL DiamondTouch. In this paper, we present preliminary findings and observations from user testing with one such interface.

1. Introduction

Mitsubishi Electric Research Lab (MERL) began development on the DiamondTouch screen in 2001. The screen has an array of antennas embedded below the laminated surface, which transmit a synchronized radio signal relative to the respective x and y coordinates. These signals are transmitted back to the DiamondTouch hardware via radio signal receivers, with which users must remain in contact. The use of multiple receivers allows for unique identification of individuals [1]. From these signals, the computer software can determine who is touching the interface, where, and at how many locations.
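To illustrate what the software layer receives, the following minimal sketch assumes a hypothetical event format of (user_id, x, y) contact tuples; the real DiamondTouch SDK's API is not reproduced here. It shows how simultaneous contacts could be grouped by user before gesture interpretation:

```python
from collections import defaultdict
from typing import Dict, Iterable, List, Tuple

# One contact report: (user_id, x, y). This event format is an assumption
# standing in for whatever the DiamondTouch driver actually delivers.
TouchEvent = Tuple[int, float, float]

def contacts_by_user(events: Iterable[TouchEvent]) -> Dict[int, List[Tuple[float, float]]]:
    """Group one frame of contact reports by the user who produced them."""
    touches: Dict[int, List[Tuple[float, float]]] = defaultdict(list)
    for user_id, x, y in events:
        touches[user_id].append((x, y))
    return dict(touches)

# Example frame: user 0 has two fingers down (a two-finger gesture);
# user 1 has one finger down elsewhere on the surface.
frame = [(0, 0.21, 0.40), (0, 0.25, 0.42), (1, 0.80, 0.15)]
print(contacts_by_user(frame))
# {0: [(0.21, 0.4), (0.25, 0.42)], 1: [(0.8, 0.15)]}
```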
MERL donated a DiamondTouch display to our lab in August 2006. Since that time, we have conducted a performance validation of the display, including Fitts' law and cursor positioning time [2], as shown in Figure 1. We have also prototyped a command-and-control interface with registered satellite images of Biloxi, Mississippi, both pre- and post-Hurricane Katrina.

Figure 1. A user interacting with the MERL DiamondTouch screen. The DiamondTouch was evaluated with respect to task completion time and accuracy. A standard mouse was used for comparison.

2. Research questions

Our research represents a significant paradigm shift for HRI developers. Most fielded robot operator control units (OCUs) use a combination of joysticks, switches, buttons, and on-screen menus to facilitate HRI. While these interfaces have proven themselves in the field, they are not directly portable to tabletop technology. This has led us to several research questions:

What is the added value of moving interfaces from mouse/keyboard/joystick control systems to a multi-touch system? As robots and sensors become more complex, their control interfaces may have outgrown such independent input systems as mice, keyboards, and joysticks. A multi-touch display eliminates these multiple input methods and removes the interaction abstraction between the input device and the display, providing a single input and output apparatus. Experiments using Fitts' law have shown that speed and efficiency may improve [2].

What changes need to be made to the interfaces to accommodate and exploit differences between classical input devices and multi-touch devices? The multi-touch display breaks classical paradigms for HRI, but it is also bound by user expectations. These expectations should be accommodated where needed, but we must also exploit the differences between the input methods.

What gestures, if any, should be used? To the authors' knowledge, no multi-touch tabletop gesture paradigms have been applied to mobile robot control. Gestures may provide enhanced usability above and beyond current input devices, opening an entirely new area of research for human-robot interaction.

3. Interface

We have investigated these questions by adapting an existing interface for urban search and rescue (USAR). This interface was developed in-house, using a flight-simulator-style joystick and keyboard to control an ATRV-JR, with an emphasis on information presentation and sensor fusion [3][4].

Figure 2. Screenshot of the original UML USAR interface. The multi-touch adaptation uses the video window (center) for camera control and the gray area (right) for robot control.

The multi-touch prototype uses the lower right-hand corner as a driving area and the center video window for camera control. The first gesture set uses the convention that a finger placed in the positive y quadrants of the driving area relates to forward motion, while negative y is reverse. The x-axis is used to rotate the robot chassis left and right. In this respect, the user drives by placing one finger on the coordinate system and "pushing" the robot in the direction he or she wants it to go, as sketched below.
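As a concrete illustration of this driving mapping, the sketch below assumes the driving area is normalized so its center is (0, 0) with both axes spanning [-1, 1], and that finger displacement maps linearly onto velocity commands; the velocity caps and scaling are illustrative, not taken from the paper:

```python
# Illustrative sketch of the driving-area mapping described above.
# Assumptions (not specified in the paper): the driving area is normalized
# with its center at (0, 0), both axes in [-1, 1], and displacement maps
# linearly onto velocity commands.

MAX_LINEAR = 0.5   # m/s, illustrative cap for an ATRV-JR-class robot
MAX_ANGULAR = 1.0  # rad/s, illustrative

def drive_command(x: float, y: float) -> tuple:
    """Map a finger position in the driving area to (linear, angular) velocity.

    Positive y drives forward, negative y reverses; the x-axis rotates
    the chassis (pushing right turns right).
    """
    linear = MAX_LINEAR * max(-1.0, min(1.0, y))
    angular = -MAX_ANGULAR * max(-1.0, min(1.0, x))  # positive x => clockwise
    return linear, angular

print(drive_command(0.0, 0.8))    # mostly forward
print(drive_command(-0.5, -0.3))  # reverse while turning left
```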
The camera is controlled in a similar manner. A finger placed in the camera view relates to the vertical and horizontal components of movement. The x-axis relates directly to pan: a left movement rotates the camera left, and a right movement rotates the camera right. The y-axis can operate in two modes. In the first mode, a positive y component indicates an upward camera tilt. This exploits the user's expectation that they are "pointing in the direction that they want to see." The second mode uses the opposite approach: an upward movement of the hand lowers the camera, giving the effect of "moving the picture to the area that the user wants to see."

These two styles correspond to our findings that some users prefer the flight-control style of camera movement, while others prefer a more direct approach [3]. It is currently unclear which of the positive or negative y-component methods is preferred.

We expect novice users who do not have previous experience controlling robots to learn much more quickly with this system. By removing the joystick abstraction for camera pan and tilt, the user now appears to directly manipulate the robot. Additionally, experienced users should see an increase in usability, since their single joystick hand is not overloaded with multiple controls for all robot functions. Rather, they can use one hand to control the robot and the other to control the camera mechanism.

Our early prototypes have already given empirical evidence that there are gestures humans find intuitive for such interaction. For example, zooming the camera by placing two fingers together on the board and pulling them apart feels natural. This "pre-wired" gesture set needs to be further studied and exploited by HRI developers. Both camera tilt modes and the pinch-to-zoom gesture are sketched below.
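The following sketch illustrates the two camera tilt modes and the pinch-to-zoom gesture. The normalized displacement coordinates, gain constants, and function names are our assumptions for illustration only:

```python
# Illustrative sketch of the camera mapping and pinch-to-zoom gesture
# described above. The gains and normalized coordinates are assumptions,
# not values from the paper.
import math

PAN_GAIN = 45.0   # degrees per unit of x displacement (illustrative)
TILT_GAIN = 30.0  # degrees per unit of y displacement (illustrative)

def camera_command(dx: float, dy: float, point_to_see: bool = True) -> tuple:
    """Map finger displacement in the video window to (pan, tilt) deltas.

    point_to_see=True: upward movement tilts the camera up ("pointing in
    the direction that they want to see"). False: upward movement lowers
    the camera ("moving the picture to the area that the user wants to see").
    """
    pan = PAN_GAIN * dx
    tilt = TILT_GAIN * dy if point_to_see else -TILT_GAIN * dy
    return pan, tilt

def zoom_factor(p1_start, p2_start, p1_end, p2_end) -> float:
    """Pinch zoom: ratio of final to initial distance between two fingers."""
    d0 = math.dist(p1_start, p2_start)
    d1 = math.dist(p1_end, p2_end)
    return d1 / d0 if d0 > 0 else 1.0

print(camera_command(0.2, 0.5))                      # pan right, tilt up
print(camera_command(0.2, 0.5, point_to_see=False))  # pan right, tilt down
print(zoom_factor((0.4, 0.5), (0.6, 0.5),
                  (0.3, 0.5), (0.7, 0.5)))           # 2.0 => zoom in
```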
4. Conclusions

After extensive user testing with this interface, we will create design guidelines for adapting HRI to multi-touch displays. This style guide will present best-in-breed practices for this unique application of multi-touch technology.

5. References

[1] Dietz, P. and Leigh, D. "DiamondTouch: A Multi-User Touch Technology." ACM Symposium on User Interface Software and Technology, November 2001, pp. 219-226.

[2] Micire, M., Schedlbauer, M., and Yanco, H. "Horizontal Selection: An Evaluation of a Digital Tabletop Input Device." 13th Americas Conference on Information Systems, Keystone, Colorado, August 9-12, 2007.

[3] Baker, M., Casey, R., Keyes, B., and Yanco, H. "Improved Interfaces for Human-Robot Interaction in Urban Search and Rescue." Proceedings of the IEEE Conference on Systems, Man and Cybernetics, October 2004.

[4] Yanco, H.A., Baker, M., Casey, R., Keyes, B., Thoren, P., Drury, J.L., Few, D., Nielsen, C., and Bruemmer, D. "Analysis of Human-Robot Interaction for Urban Search and Rescue." Proceedings of the IEEE International Workshop on Safety, Security and Rescue Robotics, National Institute of Standards and Technology, Gaithersburg, MD, August 22-24, 2006.