
DAvinCi: A Cloud Computing Framework for Service Robots

Conference Paper in Proceedings - IEEE International Conference on Robotics and Automation · June 2010
DOI: 10.1109/ROBOT.2010.5509469 · Source: IEEE Xplore



DAvinCi: A Cloud Computing Framework for Service Robots
Rajesh Arumugam†, Vikas Reddy Enti, Liu Bingbing, Wu Xiaojun, Krishnamoorthy Baskaran
Foong Foo Kong, A.Senthil Kumar, Kang Dee Meng, and Goh Wai Kit

Abstract— We propose DAvinCi, a software framework that provides the scalability and parallelism advantages of cloud computing for service robots in large environments. We have implemented such a system around a Hadoop cluster with ROS (Robot Operating System) as the messaging framework for our robotic ecosystem. We explore the possibilities of parallelizing some of the robotics algorithms as Map/Reduce tasks in Hadoop. We implemented the FastSLAM algorithm in Map/Reduce and show how significant performance gains in execution times to build a map of a large area can be achieved with even a very small eight-node Hadoop cluster. The global map can later be shared with other robots introduced in the environment via a Software as a Service (SaaS) model. This reduces the burden of exploration and map building for the new robot and minimizes its need for additional sensors. Our primary goal is to develop a cloud computing environment which provides a compute cluster built with commodity hardware, exposing a suite of robotic algorithms as a SaaS and sharing data co-operatively across the robotic ecosystem.

I. INTRODUCTION

Service robotics is forecasted to become a US$12b industry by the year 2015 [1]. There has also been an expressed need by the governments of Japan, Korea, and the EU [2] to develop robots for the home environment. Consequently, the amount of research being done in this area has increased substantially and has taken a few distinct design directions. One design approach has been the use of a single, human-like robot with abilities to manipulate the environment and perform multiple tasks. The second approach involves the use of multiple, simple task-specific robots to perform multiple tasks in the same environment. Our design approach fuses the two to create a hybrid team of distributed, networked and heterogeneous agents, with each agent having a unique ability or sensory perception. This is proposed as a solution for servicing large environments such as office buildings, airports and shopping malls.

A. Robots in Large Environments

A typical robot executes several primary tasks such as obstacle avoidance, vision processing, localization, path planning and environment mapping. Some of these tasks, such as vision processing and mapping, are computationally intensive, but given the increasing speeds of current processors they can be done on the onboard computers. However, these onboard computers require dedicated power supplies and good shock protection if a hard disk drive is used, and they are responsible for a large amount of the robot's power consumption. And in a heterogeneous robotic team serving a large environment, the presence of powerful onboard computers on every single robot is both cost prohibitive and unnecessary. Traditionally, in large environments, each robot would have to explore and build its own map. Without means to create a live, global map of the large environment, there is duplication of exploration effort and sensor information on the robots. Moreover, when a new robot is introduced to the same environment, it will again duplicate all the efforts of its predecessors in exploring the environment, making the system very inefficient.

The major aspect of our research involves the development of a software architecture that will enable the heterogeneous agents to share sensor data and also upload data to processing nodes for computationally intense algorithms. We have evaluated various frameworks such as Player/Stage, MS Robotics Studio and ROS and found each of them to be particularly capable in small environments. However, for large environments, these architectures need to be augmented, and we propose the DAvinCi (Distributed Agents with Collective Intelligence) framework to enable teams of heterogeneous robots to handle large environments.

The primary method of surpassing the challenges in large environments is to network the robots. Networked robots have certain challenges in the field of data sharing, co-operative perception and collective intelligence [3], which have been addressed by the DAvinCi framework.

The DAvinCi framework combines the distributed ROS architecture, the open source Hadoop Distributed File System (HDFS [4]) and the Hadoop Map/Reduce framework. The DAvinCi framework is still in its early stage of development and should be largely considered a work in progress. To our knowledge, it is the first system of its kind, and we consider it ideal for large scale environments with infrastructure. A brief introduction to cloud computing and its benefits is given in Section 2. The DAvinCi architecture is described in Section 3, along with descriptions of Hadoop and ROS in Sections 4 and 5, respectively. We then discuss a proof-of-concept implementation of grid-based FastSLAM as a Hadoop Map/Reduce task in Section 6 and present our preliminary results in Section 7. Finally, we conclude with a discussion of our future work and research direction towards completing DAvinCi.

All authors are with Data Storage Institute, A*STAR, Singapore. †Rajesh [email protected]
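The division of labor sketched above (robots stream sensor data to a shared backend, which stores it and serves a merged global map to newly introduced robots) can be illustrated with a toy, in-memory stand-in. Nothing here is DAvinCi code; the class and method names, and the cell-set map representation, are illustrative assumptions only.

```python
class SharedBackend:
    """Toy stand-in for the HDFS-backed map service: stores raw scans
    from all robots and serves a 'global map' merged from their uploads."""

    def __init__(self):
        self.scans = []  # raw sensor uploads from all robots

    def upload(self, robot_id, scan):
        # In DAvinCi, data collectors push robot sensor data into HDFS.
        self.scans.append((robot_id, scan))

    def global_map(self):
        # Real map building would run as backend Map/Reduce jobs;
        # here we simply merge all uploaded occupied cells.
        cells = set()
        for _, scan in self.scans:
            cells.update(scan)
        return cells

backend = SharedBackend()
backend.upload("pioneer1", {(0, 0), (0, 1)})  # robot 1 explored these cells
backend.upload("roomba2", {(5, 5)})           # robot 2 explored elsewhere
# A newly introduced robot gets the merged map without re-exploring:
merged = backend.global_map()
```

The point of the sketch is only the sharing pattern: exploration effort done once by any team member becomes available to every later robot as a service.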
II. CLOUD COMPUTING

Cloud computing is a paradigm shift in the way computing resources are used and applications delivered. These resources include servers, storage and the network infrastructure, along with the software applications. Cloud computing refers to providing these resources as a service over the Internet to the public or an organization [5]. There are three types of cloud services, and they differ in how the resources are made available. The first approach is to make the hardware infrastructure available as a service and is called Infrastructure as a Service (IaaS); Amazon's EC2/S3 is an example of such a service [6]. The second approach is to provide a platform (the OS along with the necessary software) over the hardware infrastructure. This is called Platform as a Service (PaaS); an example of this kind is the Google App Engine [7]. The third approach is to provide the application as a service along with the hardware infrastructure and is called Software as a Service (SaaS); examples include Google Docs, ZOHO and Salesforce.com.

The cloud environment has the following advantages:
1) It makes efficient use of the available computational resources and storage in a typical data center.
2) It exploits the parallelism inherent in using a large set of computational nodes.

Their relevance to robotics is described below. The papers [8], [9], [10] describe algorithms, techniques and approaches for a network of robots performing coordinated exploration and map building. Some of these approaches can be parallelized and refined by doing parts of the map building offline in a backend multiprocessor system, which will also have information from the other robots. The decision to explore a particular area can also be coordinated among the robots by a central system. In [8], the segmenting of the environment and the corresponding combining of the maps can be offloaded to a backend system. The fusion of visual processing of camera images with the laser ranger described in [11] is a computationally intensive task that can be offloaded to a backend multiprocessor system.

Many SLAM algorithms (e.g. FastSLAM) which use particle filters for state estimation (feature sets in maps) have conditional independence among the different particle paths [12], [13], [14] and the map features. These algorithms are candidates for parallel processing in a computational cloud, where the map for each particle can be estimated on separate processors, thus speeding up the whole procedure. We describe such an implementation in a later section of this paper. Beyond this, most robotic algorithms are inherently parallel computing tasks working on relatively independent data sets. Our platform therefore provides an ideal environment for executing such tasks.

The DAvinCi system is a PaaS which is designed to perform crucial secondary tasks such as global map building in a cloud computing environment.

III. DAVINCI ARCHITECTURE

For large environments, we propose a team structure where the sensors are distributed amongst the members such that some have very precise localization sensors, a few others have LIDARs, a few have image acquisition sensors, and all have the basic proprioceptive sensors, i.e. a low-cost, single-axis gyro and wheel encoders [15]. The robots are assumed to have at least an embedded controller with Wi-Fi connectivity, and the environment is expected to have a Wi-Fi infrastructure with a gateway linking the cloud service to the robots. By linking these robots and uploading their sensor information to a central controller, we can build a live global map of the environment and later provide sections of the map to robots on demand as a service. A similar approach can be used for other secondary tasks such as multi-modal map building, object recognition in the environment and segmentation of maps.

Currently our DAvinCi environment consists of Pioneer robots, Roombas, Rovios and the SRV-1. The ROS platform was used for sensor data collection and communication among the robot agents and clients. We make use of the Hadoop Distributed File System (HDFS) for data storage and the Hadoop Map/Reduce framework for the batch processing of sensor data and visual information.

Figure 1 shows the high level overview of our system and how it can be accessed over the cloud. The DAvinCi server is the access point for external entities (robots/human interface) accessing the cloud service. It also binds the robotic ecosystem to the backend Hadoop computational cluster. The ROS framework provides a standard form of communication and messaging across the robots and between the DAvinCi server and the robots. A standard set of algorithms (SLAM, global path planning, sensor fusion) is exposed as a cloud service, which can be accessed either over the intranet as in a private cloud or over the Internet with ROS messages wrapped in HTTP requests/responses.

Fig. 1. High level overview of DAvinCi.
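The paper does not specify the wire format for carrying ROS messages inside HTTP requests/responses. As a hedged sketch of the idea, a ROS-style message could be serialized to JSON and carried as an HTTP POST body; the field names (`topic`, `stamp`, `ranges`) and the framing are illustrative assumptions, not DAvinCi's actual API.

```python
import json

def wrap_ros_message(topic, payload):
    """Serialize a ROS-style message dict into an HTTP POST body (illustrative).

    Real ROS messages are typed structures; JSON is used here only as a
    stand-in wire format, since the paper does not specify one.
    """
    body = json.dumps({"topic": topic, "msg": payload})
    # An HTTP client would POST this body to the DAvinCi server, e.g. with
    # urllib.request.Request("http://<server>/ros", data=body.encode(), ...)
    return body

def unwrap_ros_message(body):
    """Recover the topic and message from an HTTP request body."""
    obj = json.loads(body)
    return obj["topic"], obj["msg"]

scan = {"stamp": 1234.5, "ranges": [1.2, 1.3, 0.9]}  # toy laser scan
body = wrap_ros_message("scan", scan)
topic, msg = unwrap_ros_message(body)
```

Such wrapping lets external entities reach the cloud service through ordinary HTTP gateways, at the cost of serialization overhead on large payloads such as maps and images (a limitation the authors note in their conclusions).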
Figure 2 shows a high level architecture of our system. At the bottom is our robotic ecosystem, which as explained before consists of Pioneer robots, the SRV-1, the Roomba and the Rovio. Some of these are equipped with onboard CPUs on which we run the ROS nodes with some of the existing drivers. Systems like the Roomba and the Rovio do not have the provision to run ROS nodes. For these types of robots the DAvinCi server runs proxy ROS nodes which collect sensor data or control the robot via interfaces like Zigbee, Bluetooth or normal Wi-Fi. Above the DAvinCi server is the HDFS cluster with the Map/Reduce framework for execution of various robotic algorithms. The DAvinCi server acts as the central point of communication for the robots as well as for the external agents that access the cloud services provided by the platform. The following sections describe the architecture components in detail.

Fig. 2. Architecture of the DAvinCi Cloud computing Platform.

A. DAvinCi Server

As shown in Figure 2, the DAvinCi server acts as a proxy and a service provider to the robots. It binds the robot ecosystem to the backend computation and storage cluster through ROS and HDFS. The DAvinCi server acts as the master node, which runs the ROS name service and maintains the list of publishers. ROS nodes on the robots query the server to subscribe and receive messages/data either from the HDFS backend or from other robots. Data from the HDFS is served using the ROS service model on the server. The server collects data from the robot ecosystem through the data collectors running as ROS subscribers or ROS recorders. This data is then pushed to the backend HDFS file system. The server triggers some of the backend Map/Reduce tasks to process the data, either through explicit requests from the robots or periodically for some of the tasks. For example, a robot surveying a new area can send a new batch of sensor data and trigger the map building task to refine parts of the already surveyed global map. The communication mode between the server and the robots will be standard Wi-Fi. A typical setup will have a Wi-Fi mesh network to make the connection available throughout a given area. For external entities the services are exposed by the DAvinCi server via the Internet over HTTP. The ROS messages or message bags in this case are wrapped in HTTP requests.

B. HDFS cluster

The HDFS cluster contains the computation nodes and the storage. In our setup we have a small eight-node cluster. Each node is an Intel quad-core server with 4GB of RAM. Currently we use only the internal storage of the servers. The HDFS file system runs on these nodes, and the Map/Reduce framework facilitates the execution of the various robotic algorithm tasks. These tasks are run in parallel across the cluster as Map/Reduce tasks, thereby reducing the execution times by several orders of magnitude.

Sensor data from the robots that was pushed by the DAvinCi server is available to the Map/Reduce tasks across the cluster through the HDFS file system.

IV. HADOOP AND THE MAP/REDUCE FRAMEWORK

Our platform uses the Hadoop Distributed File System (HDFS) [4] for storing data from the different robots. The data can come from sensors like laser scanners, from odometers, or as image/video streams from cameras. Hadoop is an open source software framework similar to Google's Map/Reduce framework [16]. It also provides a reliable, scalable and distributed computing platform. Hadoop is a Java based framework that supports data intensive distributed applications running on large clusters of computers. One of the main features of Hadoop is that it parallelizes data processing across many nodes (computers) in the cluster, speeding up large computations. Most of this processing occurs near the data or storage, so that I/O latencies over the network are reduced. The HDFS file system of Hadoop also takes care of splitting file data into manageable chunks or blocks and distributing them across multiple nodes for parallel processing. Figure 3 shows the components of HDFS, consisting of data nodes where the file chunks are stored. The name node provides information to the clients about how the file data is distributed across the data nodes.

One of the main functionalities of Hadoop is the Map/Reduce framework, which is described in [17]. While Hadoop's HDFS filesystem is for storing data, the Map/Reduce framework facilitates the execution of tasks that process the data. The Map/Reduce framework provides a mechanism for executing several computational tasks in parallel on multiple nodes over a huge data set. This reduces the processing or execution time of computationally intensive tasks by several orders of magnitude compared to running the same task on a single server.
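As a minimal illustration of the Map/Reduce model that Hadoop implements (not DAvinCi's actual job code, which the paper does not list), the following pure-Python sketch runs a mapper over input records, groups the intermediate keys as Hadoop's shuffle phase would, and merges each group with a reducer. The toy job counts laser-scan records per robot; the record format is an assumption for illustration.

```python
from collections import defaultdict

def mapper(record):
    """Map phase: emit (key, value) pairs from one input record.

    Toy job: emit one (robot_id, 1) pair per laser-scan record.
    """
    robot_id, scan = record
    yield robot_id, 1

def reducer(key, values):
    """Reduce phase: merge all values that share a key."""
    return key, sum(values)

def run_mapreduce(records):
    # Shuffle: group intermediate pairs by key, as Hadoop does
    # between the map and reduce phases.
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)
    # Reduce each group; on a real cluster these calls run on separate nodes.
    return dict(reducer(k, vs) for k, vs in groups.items())

records = [("pioneer1", [1.0, 1.2]), ("roomba2", [0.4]), ("pioneer1", [0.9])]
counts = run_mapreduce(records)
```

On a real Hadoop cluster the mapper and reducer would run as distributed tasks reading their input splits from HDFS; the sequential loop above only mirrors the data flow.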
Even though Hadoop has been primarily used for searching and indexing large volumes of text files, nowadays it is also used in other areas such as machine learning, analytics, natural language search and image processing. We have now found its potential application in robotics.

Figure 4 shows an overview of how Map/Reduce tasks get executed in Hadoop. The map tasks process an input list of key/value pairs, and the reduce tasks take care of merging the results of the map tasks. These tasks can run in parallel in a cluster; the framework takes care of scheduling them. Normal Hadoop systems use a single pass of Map/Reduce. Here, it is shown that tasks can be broken down into multiple passes of Map/Reduce jobs. Each of the map and reduce tasks can run on individual nodes of the cluster, working on different sets of the large data set.

Fig. 3. Hadoop High level Architecture.

Fig. 4. Hadoop Map/Reduce Framework.

The Hadoop cluster in our system provides the storage and computation resources required for the near real time parallel execution of batch algorithms. The algorithms run as a set of single pass or multiple pass Map/Reduce tasks.

V. ROS

The ROS platform [18] is used as the framework for our robotic environment. One of the attractive features of ROS is that it is a loosely coupled distributed platform. ROS provides a flexible modular communication mechanism for exchanging messages between nodes. Nodes are processes running on a robot; there can be different nodes running on a robot serving different purposes, such as collecting sensor data, controlling motors and running localization algorithms. The messaging system is based on either a loosely coupled publish/subscribe model or a service based model. This is shown in Figure 5. In the publish/subscribe model, nodes publish their messages on a named topic, and nodes which want to receive these messages subscribe to that topic. A master node exposes the publishers to the subscribers through a name service. An example of the ROS communication mechanism is illustrated in Figure 6, showing a node publishing range scans from a Hokuyo laser to a topic called 'scan'. The viewer node subscribes to this topic to receive and display the messages. The master node acts as the arbitrator of the nodes for publishing and subscribing to the various topics. Note that the actual data does not pass through the master node.

Fig. 5. ROS messaging mechanism (From [18]).

Fig. 6. Example ROS messaging scenario (From [18]).

We make use of the ROS messaging mechanism to send data from the robots to collectors, which in turn push it to the backend HDFS file system. The collectors run ROS recorders to record laser scans, odometer readings and camera data in ROS message bags. These are later processed by Map/Reduce tasks in the Hadoop cluster.

VI. IMPLEMENTATION OF GRID BASED FASTSLAM IN HADOOP

We adapted the grid based FastSLAM algorithm described in [14], which is shown in Algorithm VI.1. We parallelized the algorithm as Map/Reduce tasks over the particle trajectories and the map estimates.
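The per-particle decomposition described above can be sketched in pure Python as follows. This is a toy stand-in for the authors' Hadoop job, with deliberately trivial one-dimensional motion and measurement models: each map task advances one particle's pose, weight and map independently, and a single reduce task picks the particle with the highest weight. All function bodies here are illustrative assumptions, not the paper's implementation.

```python
def map_task(particle, control, measurement):
    """One Hadoop map task: advance a single particle independently.

    Toy models: the pose moves by the control input; the weight is higher
    when the predicted pose agrees with the measurement; the 'map' simply
    logs the pose trajectory in place of an occupancy grid update.
    """
    pose = particle["pose"] + control                 # odometry model (toy)
    weight = 1.0 / (1.0 + abs(measurement - pose))    # measurement correction (toy)
    grid = particle["map"] + [pose]                   # occupancy grid update (toy)
    return {"pose": pose, "weight": weight, "map": grid}

def reduce_task(particles):
    """Single reduce task: select the highest-weight particle's path and map."""
    return max(particles, key=lambda p: p["weight"])

particles = [{"pose": p0, "map": []} for p0 in (0.0, 0.4, 1.1)]
control, measurement = 1.0, 2.0
# Conceptually these map calls run on separate cluster nodes, one per particle,
# which is exactly the conditional independence the paper exploits.
updated = [map_task(p, control, measurement) for p in particles]
best = reduce_task(updated)  # the particle whose predicted pose is closest to 2.0
```

The structure, one map task per particle followed by a single arg-max reduce, is what makes the algorithm fit the Map/Reduce mold; the real job replaces the toy models with the odometry, measurement and occupancy-grid updates of Algorithm VI.1.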
Algorithm VI.1: FastSLAM (from [14])

    X̄_t = X_t = ∅
    for k = 1 to N do
        x_t^[k] = odometry_model(u_t, x_{t-1}^[k])
        w_t^[k] = measurement_model_correction(z_t, x_t^[k], m_{t-1}^[k])
        m_t^[k] = update_occupancy_grid(z_t, x_t^[k], m_{t-1}^[k])
        X̄_t = X̄_t + <x_t^[k], w_t^[k], m_t^[k]>
    for k = 1 to N do
        draw i with probability ∝ w_t^[i]
        add <x_t^[i], m_t^[i]> to X_t
    return <x_t^[i], m_t^[i]> with the maximum weight w_t^[i]

Each Hadoop map task corresponds to a particle k in the algorithm. The variables x_t^[k] and m_t^[k] are the state variables corresponding to the robot path (pose) and the global map at time t, respectively, for particle k. The variable w_t^[k] corresponds to the weight of a particular estimation of the robot path and map for particle k. This is obtained through the measurement model, which calculates the correlation between the range scan and the global map estimated in the previous time step. The algorithm returns the path and map <x_t^[i], m_t^[i]> having the maximum probability, i.e. the index i with the largest accumulated weight. We exploit the conditional independence of the mapping task for each of the particle paths x_t^[k] and the map features m_t^[k]. All the particle paths (1 to k) and global features m_t^[k] are estimated in parallel by several map tasks. A single reduce task over all the particles selects the particle path and map <x_t^[i], m_t^[i]> having the highest accumulated weight or probability. This is depicted in Algorithm VI.1.

Fig. 7. Implementation of FastSLAM in Map/Reduce Framework.

VII. MAP/REDUCE IMPLEMENTATION RESULTS OF FASTSLAM

Figure 8 shows the graph of the time taken for execution of the algorithm for different numbers of particles. A dataset published in [19] was used for the map estimation. The grid map dimensions used in the algorithm were 300x300 with a resolution of 10cm (900,000 cells). The algorithm was executed using a single-node, a two-node and finally an eight-node Hadoop cluster, respectively. The execution times were calculated for each of the cases for 1, 50 and 100 particles. It can be seen that the running time decreases by several orders of magnitude as we go from a single node to an eight-node system. We believe that the execution times for mapping a large region will reduce to the order of a few seconds if the number of nodes is increased further (say to 16 or more). This is acceptable for a service robot surveying a large area, where the surveying time is in the order of tens of minutes or more. The idea here is that even the batch mode of execution can be completed in short, acceptable times (on the order of a few seconds for mapping a large area) when a large number of nodes is used. In our case the Hadoop jobs can be triggered by map update requests from the robot itself while it is streaming the sensor data. It has been shown in [13] that the pose estimation noise reduces as the number of particles is increased. It is also shown in [13] that increasing the number of particles results in increased execution time of the algorithm for a given dataset. In our case this is handled by spreading the execution across several machines in the compute cluster. It is also clear that whereas it is easy to scale the cluster, it is not feasible to increase the computational capacity of the onboard system of a robot.

Fig. 8. Execution time of FastSLAM in Hadoop vs. number of nodes.

Figure 9 shows the map obtained from the data. It also shows that the pose noise reduces as the number of particles is increased to 100. The results show that a typical robotic algorithm can be implemented in a distributed system like Hadoop using commodity hardware and achieve acceptable execution times close to real time. Running the same on the onboard system of the robot might be time consuming, as the single-node result shows. Once we have accurate maps of such a large region, they can be shared across several of the other robots in the environment. Any new robot introduced into the environment can make use of the computed map. This is even more advantageous in cases where the robot itself might not have an onboard processor (e.g. a Roomba vacuum cleaner robot) and the DAvinCi server acting as a proxy can use the map for control and planning. Finally, as in any other cloud computing environment, the computational and storage resources are now shared across a network of robots. Thus we make efficient use of the computational and storage resources by exposing them as a cloud service to the robotic environment.

Fig. 9. Estimated map (black points) with the robot path (green curves) using (a) 1 particle and (b) 100 particles (raw sensor data), respectively.

VIII. CONCLUSIONS AND FUTURE WORK

In this paper a cloud computing architecture was proposed for service robots. The goal of this architecture is to offload data intensive and computationally intensive workloads from the onboard resources of the robots to a backend cluster system. Moreover, the backend system will be shared by multiple clients (robots) for performing computationally intensive tasks as well as for exchanging useful data (like estimated maps) that has already been processed. The open source Hadoop Map/Reduce framework was adopted to provide a platform which can perform computation in a cluster built on commodity hardware. A proof-of-concept adaptation of the grid based FastSLAM algorithm was implemented as a Hadoop Map/Reduce task. This task was run on a small eight-node Hadoop cluster and promising results were achieved. Even though the FastSLAM algorithm was adapted for the Map/Reduce task, one of our primary goals is to adapt sensor fusion algorithms, such as fusing visual data with laser range scans, which is much more intensive with regard to computation and data volume.

A limitation of our design could be that we do not consider the network latencies or delays inherent in any cloud environment. We might face difficulties in transferring ROS messages involving large data (like maps and images) between the DAvinCi server and the robots. This also requires that the communication channel be reliable most of the time during such transfers. We are working on improving the reliability and providing fail safe mechanisms for the communication between the DAvinCi server and the robots.

Our final goal is to expose a suite of robotic algorithms for SLAM, path planning and sensor fusion over the cloud. With the higher computational capacity of the backend cluster we can handle these tasks in an acceptable time period for service robots. Moreover, exposing these resources as a cloud service to the robots makes for efficient sharing of the available computational and storage resources. As a proof of concept, this architecture will be deployed as a private cloud test bed in our office building for testing in the near future.

REFERENCES

[1] ABI Research. Personal robots hit the consumer mainstream [online]. https://www.roboticstrends.com/personal_robotics/article/personal_robots_hit_the_consumer_mainstream, 2008.
[2] EARTO. EU to double its R&D investment in robotics [online]. https://www.earto.eu/nc/service/news/details/article/eu_to_double_its_rd_investment_in_robotics/, 2008.
[3] Alessandro Saffiotti and Pedro Lima. Two "hot issues" in cooperative robotics: Network robot systems, and formal models and methods of cooperation, 2008.
[4] Hadoop Distributed File System [online]. https://hadoop.apache.org/hdfs/, 2009.
[5] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M. Zaharia. Above the clouds: A Berkeley view of cloud computing [white paper]. https://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-28.pdf, 2009.
[6] Amazon EC2. Amazon Elastic Compute Cloud [online]. https://aws.amazon.com/ec2/, 2009.
[7] Google App Engine [online]. https://code.google.com/appengine/, 2009.
[8] K.M. Wurm, C. Stachniss, and W. Burgard. Coordinated multi-robot exploration using a segmentation of the environment. Proc. of Int. Conf. on Intelligent Robots and Systems, 2008.
[9] Dieter Fox. Distributed multi-robot exploration and mapping. In CRV '05: Proceedings of the 2nd Canadian Conference on Computer and Robot Vision, Washington, DC, USA, 2005. IEEE Computer Society.
[10] W. Burgard, M. Moors, C. Stachniss, and F.E. Schneider. Coordinated multi-robot exploration. IEEE Transactions on Robotics, 21(3):376–386, 2005.
[11] J.A. Castellanos, J. Neira, and J.D. Tardos. Multisensor fusion for simultaneous localization and map building. IEEE Transactions on Robotics and Automation, 17(6):908–914, 2001.
[12] M. Montemerlo and S. Thrun. Simultaneous localization and mapping with unknown data association using FastSLAM. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2003.
[13] M. Montemerlo, S. Thrun, D. Koller, and B. Wegbreit. FastSLAM 2.0: An improved particle filtering algorithm for simultaneous localization and mapping that provably converges. In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence (IJCAI), 2003.
[14] S. Thrun, W. Burgard, and D. Fox. Probabilistic Robotics. MIT Press, 2005.
[15] Vikas Reddy Enti, Rajesh Arumugam, Krishnamoorthy Baskaran, Bingbing Liu, Foo Kong Foong, Appadorai Senthil Kumar, Dee Meng Kang, Xiaojun Wu, and Wai Kit Goh. Tea table, come closer to me. In HRI '09: Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction, pages 325–326, New York, USA, 2009.
[16] Jeffrey Dean and Sanjay Ghemawat. MapReduce: Simplified data processing on large clusters. In OSDI '04: Sixth Symposium on Operating System Design and Implementation, pages 137–150, 2004.
[17] Hadoop Map/Reduce framework [online]. https://hadoop.apache.org/hdfs/, 2009.
[18] The Robot Operating System [online]. https://www.willowgarage.com/pages/software/ros-platform, 2009.
[19] Radish: The robotics data set repository [online]. https://radish.sourceforge.net/index.php, 2009.