
An Introduction to Swarm Robotics

Alcherio Martinoli
SNSF Professor in Computer and Communication Sciences, EPFL
Part-Time Visiting Associate in Mechanical Engineering, Caltech
Swarm-Intelligent Systems Group
École Polytechnique Fédérale de Lausanne
CH-1015 Lausanne, Switzerland
http://swis.epfl.ch/
[email protected]
Tutorial at ANTS-06, Bruxelles, September 4, 2006

Outline
- Background
  - Mobile robotics
  - Swarm intelligence
  - Swarm robotics
- Model-Based Analysis of SRS
  - Methodological framework
  - Examples
- Machine-Learning-Based Synthesis of SRS
  - Methodological framework
  - Combined method (model-based + machine-learning-based)
  - Examples
- From SRS to other Real-Time, Embedded Platforms
- Conclusion and Outlook

Background: Mobile Robotics

An Example of a Mobile Robot: Khepera (Mondada et al., 1993)

[Figure: the 5.5 cm Khepera robot, with its sensors, actuators, microcontrollers, and batteries]

Strengths: size and modularity!

Perception-to-Action Loop

[Figure: loop from Perception (sensors) through Computation to Action (actuators), closed through the Environment]

Computation can be:
- Reactive (e.g., a linear or nonlinear transform)
- Reactive + memory (e.g., filter, state variable)
- Deliberative (e.g., planning)

Autonomy in Mobile Robotics

[Figure: task complexity vs. autonomy; human-guided robotics (industry) at low autonomy, autonomous and swarm robotics (research) at high autonomy]

Different levels/degrees of autonomy:
- Energetic level
- Sensory, actuatorial, and computational level
- Decisional level

Background: Swarm-Intelligent Systems

Swarm Intelligence Definitions

Beni and Wang (1990):
- Used the term in the context of cellular automata (based on Fukuda's cellular-robot concept)
- Decentralized control, lack of synchronicity, simple and (quasi-)identical members, self-organization

Bonabeau, Dorigo, and Theraulaz (1999):
- "Any attempt to design algorithms or distributed problem-solving devices inspired by the collective behavior of social insect colonies and other animal societies"

Beni (2004):
- Intelligent swarm = a group of non-intelligent robots (machines) capable of universal computation
- Usual difficulties in defining the intelligence concept (unpredictable order from disorder, creativity)

Swarm-Intelligent Systems: Features

- Beyond bio-inspiration: combine natural principles with engineering knowledge and technologies
- Bio-inspiration: social insect societies; flocking and shoaling in vertebrates
- Unit coordination: fully distributed control (+ environmental template), individual autonomy, self-organization
- Communication: direct local communication (peer-to-peer); indirect communication through signs in the environment (stigmergy)
- Scalability
- Robustness: redundancy, balance of exploitation and exploration, individual simplicity
- Robustness vs. efficiency trade-off
- System cost effectiveness: individual simplicity, mass production

Current Tendencies

- IEEE SIS-05: self-organization, distributedness, parallelism, local communication mechanisms, and individual simplicity as invariants; more interdisciplinarity, more engineering, biology no longer the only reservoir of ideas
- ANTS-06 and IEEE SIS-06 follow the tendency; IEEE SIS-07 even more so

Background: Swarm Robotics

First Swarm-Robotics Demonstration Using Real Robots


(Beckers, Holland, and Deneubourg, 1994)

Swarm Robotics: A New Engineering Discipline?

- Why does it work? What are the principles?
- Is it a new paradigm or just an isolated experiment? If a paradigm, can we define it?
- Can we generalize these results to other tasks and experimental scenarios?
- How can we design an efficient and robust SR system? With which methods?
- How can we optimize an SR system?

Swarm Robotics Features

Dorigo & Sahin (2004):
- Relevant to the coordination of large numbers of robots
- The robotic system consists of relatively few homogeneous groups, with a large number of robots per group
- Robots have difficulties carrying out the task on their own, or at least their performance improves when working as a swarm
- Limited local sensing and communication abilities

Swarm Robotics [Selected/Pruned] Definitions

Beni (2004):
- The use of labels such as "swarm robotics" should not in principle be a function of the number of units used in the system. The principles underlying the multi-robot system coordination are the essential factor. The control architectures relevant to swarms are scalable, from a few units to thousands or millions of units, since they base their coordination on local interactions and self-organization.

Sahin, Spears, and Winfield (2006):
- Swarm robotics is the study of how large numbers of relatively simple physically embodied agents can be designed such that a desired collective behavior emerges from the local interactions among agents and between the agents and the environment. It is a novel approach to the coordination of large numbers of robots.

SWIS Mobile Robotic Fleet

- Moorebot II (24 cm): PC 104, XScale processor, Linux, WLAN 802.11; 4 robots available
- Khepera III (11 cm): XScale processor, Linux, WLAN 802.11, Bluetooth; 20 robots
- E-puck (6 cm): dsPIC, PICos, WLAN 802.15.4, Bluetooth; 100 robots
- Alice II (2 cm): PIC, no OS, WLAN 802.15.4, IR com; 40 robots

Size and modularity! Standards, com, and batt. changing!

SWIS Research Thrusts


- System engineering & integration (single node)
- Multi-level modeling, model-based methods
- Automatic (machine-learning-based) design & optimization

Model-Based Approach
(main focus: analysis)

Multi-Level Modeling Methodology


- Macroscopic: rate equations, mean-field approach, whole swarm
- Microscopic: multi-agent models, only the relevant robot features captured, 1 agent = 1 robot
- Realistic: intra-robot (e.g., S&A) and environment (e.g., physics) details reproduced faithfully
- Physical reality: info on controller, S&A, morphology, and environmental features

[Figure: abstraction increases and experimental time decreases going up from physical reality to the macroscopic level; common metrics connect all levels]

dNn(t)/dt = Σn' W(n|n', t) Nn'(t) - Σn' W(n'|n, t) Nn(t)

Originality and Differences with other Research Contributions


- The proposed multi-level modeling method specifically targets self-organized (miniature) collective systems (mainly artificial to date); it exploits robust control design techniques at the individual level (e.g., BB, ANN) and predicts collective performance through models
- Different from traditional modeling approaches in robotics for collective robotic systems, which start from unrealistic assumptions (noise-free operation, perfectly controllable trajectories, no communication delays, etc.) and then relax those assumptions, compensating with the best devices available and computationally intensive on-board algorithms
- Different from traditional modeling approaches in biology (and similarly in physics and chemistry) for insect/animal societies, which use as-simple-as-possible macroscopic models targeting a given scientific question, with free parameters fitted from macroscopic measurements, since microscopic information is often unavailable or inaccurate

Micro/Macro Modeling Assumptions


- Nonspatial metrics for swarm performance
- Environment and multi-agent system can be described as probabilistic FSMs; the state granularity of the description is arbitrarily established by the researcher as a function of the abstraction level and of the design/optimization interest
- Both multi-agent system and environment are (semi-)Markovian: the system's future state is a function of the current state (and possibly of the amount of time spent in it)
- Mean spatial distribution of agents is either not considered or assumed to be homogeneous, as if agents were randomly hopping around the arena (trajectories not considered, mean-field approach)

Microscopic Level
[Figure: the robotic system is modeled as N PFSMs (N = total # of agents), one per robot (R11 ... Rnm), grouped into castes 1 to n; the environment as Q PFSMs (Q = total # of objects, O11 ... Oqr), grouped into object types; robot and environment PFSMs are coupled through interactions (e.g., manipulation, sensing), with states such as Ss (search), Sa (avoidance), Se, Sd, Si]

Macroscopic Level (1)


[Figure: one PFSM per robotic caste (states Ss, Sa, Se, Si, Sd) and one per environment object type (states Sa, Sb), coupled through interactions]

- Average quantities: central-tendency prediction (1 run)
- Continuous quantities: +1 ODE per state for all robotic castes and object types (metric/task dependent!)
- -1 ODE for each state substituted with a conservation equation (e.g., total # of robots, total # of objects of type q, ...)

Macroscopic Level (2)


If the Markov properties are fulfilled, this is what we are looking for: the rate equation (time-continuous):

dNn(t)/dt = Σn' W(n|n', t) Nn'(t) - Σn' W(n'|n, t) Nn(t)
            [inflow]                [outflow]

- n, n' = states of the agents
- Nn = average # of robots in state n at time t
- W = transition rates (linear or nonlinear)

Time-discrete version (k = iteration index, T = time step, often left out):

Nn((k+1)T) = Nn(kT) + T Σn' W(n|n', kT) Nn'(kT) - T Σn' W(n'|n, kT) Nn(kT)
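As a sketch of how the time-discrete version can be iterated, consider a hypothetical two-state swarm (the rate matrix W below is illustrative, not taken from the tutorial's experiments):

```python
import numpy as np

def rate_step(N, W, T=1.0):
    """One update of the time-discrete rate equation.

    N[n]    : average number of robots in state n
    W[n, m] : transition rate from state m to state n (diagonal unused)
    inflow to n  = sum over n' of W[n, n'] * N[n']
    outflow of n = sum over n' of W[n', n] * N[n]
    """
    np.fill_diagonal(W, 0.0)          # self-transitions do not count
    inflow = W @ N
    outflow = W.sum(axis=0) * N
    return N + T * (inflow - outflow)

# Hypothetical two-state swarm: search <-> obstacle avoidance
W = np.array([[0.0, 0.05],            # avoidance -> search at rate 0.05
              [0.1, 0.0]])            # search -> avoidance at rate 0.1
N = np.array([10.0, 0.0])             # all 10 robots start searching
for _ in range(2000):
    N = rate_step(N, W, T=0.1)
# At steady state the flows balance: 0.1 * N_search = 0.05 * N_avoid
```

Note that the total number of robots is conserved by construction, since every inflow term appears as an outflow term of another state.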

Model Calibration - Theory


Goal: a calibration method achieving models with zero free parameters (gray-box approach):
- As cheap a calibration procedure as possible
- Models should not only explain but also have predictive power
- Parameters should match design choices as much as possible

Two types of parameters:
- Interaction times
- Encountering probabilities

Calibration procedures:
- Idea 1: run orthogonal experiments on a priori known local interactions (robot-to-robot, robot-to-environment) and use the resulting values for all types of interactions that occur
- Idea 2: use all a priori known information (e.g., geometry) without running experiments to get initial guesses, then fine-tune the parameters automatically on the target experiment with as cheap a calibration as possible (e.g., a fitting algorithm using a subset of the system)

Linear Example: Wandering and Obstacle Avoidance

A Simple Linear Model


Example: search (moving forwards) and obstacle avoidance

Nikolaus Correll 2006

A Simple Example

[Figure: three equivalent descriptions of search and obstacle avoidance: a deterministic robot flowchart (Start -> Search -> Obstacle? -> Avoidance -> Search), a probabilistic agent flowchart (transitions with probabilities pa, 1-pa, ps), and a PFSM/Markov chain with states Ss (search) and Sa (avoidance, duration Ta)]

Nonspatiality & microscopic characterization

Linear Model - Constant P Option

[Figure: two-state PFSM: Search -> Avoidance with probability pa; Avoidance -> Search with probability ps = 1/Ta]

Ns(k+1) = Ns(k) - pa Ns(k) + ps Na(k)
Na(k+1) = N0 - Ns(k+1)
Ns(0) = N0; Na(0) = 0

- Ta = mean obstacle avoidance duration
- pa = probability of moving to obstacle avoidance
- ps = probability of resuming search (ps = 1/Ta)
- Ns = average # of robots in search
- Na = average # of robots in obstacle avoidance
- N0 = # of robots used in the experiment
- k = 0, 1, ... (iteration index)
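The constant-P recursion can be checked in a few lines; the values of pa and Ta below are illustrative, not measured on a robot:

```python
# Linear model, constant-P option: the exit from avoidance is modeled as
# a geometric process with ps = 1/Ta (illustrative, uncalibrated values).
pa, Ta, N0 = 0.1, 5.0, 20
ps = 1.0 / Ta
Ns, Na = float(N0), 0.0
for k in range(2000):
    Ns = Ns - pa * Ns + ps * Na   # Ns(k+1) from the recursion
    Na = N0 - Ns                  # conservation of robots
# Fixed point: pa * Ns = ps * Na  =>  Ns* = N0 * ps / (pa + ps)
```

The iteration converges geometrically to the fixed point where the flow into avoidance balances the flow back to search.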

Linear Model - Time Out Option

[Figure: Search -> Avoidance with probability pa; Avoidance -> Search with probability 1 after a fixed duration Ta]

Ns(k+1) = Ns(k) - pa Ns(k) + pa Ns(k-Ta)
Na(k+1) = N0 - Ns(k+1)
Ns(0) = N0; Na(0) = 0; Ns(k) = Na(k) = 0 for all k < 0

- Ta = mean obstacle avoidance duration
- pa = probability of moving to obstacle avoidance
- Ns = average # of robots in search
- Na = average # of robots in obstacle avoidance
- N0 = # of robots used in the experiment
- k = 0, 1, ... (iteration index)
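The time-out recursion needs the delayed term Ns(k - Ta); a minimal sketch with illustrative values, where conservation fixes the steady state at Ns* = N0 / (1 + pa * Ta):

```python
# Linear model, time-out option: a robot that entered avoidance at
# iteration k - Ta returns to search deterministically at iteration k.
pa, Ta, N0 = 0.1, 5, 20                   # illustrative; Ta in iterations
Ns_hist = [float(N0)]                     # Ns(k); Ns(k) = 0 for all k < 0
for k in range(3000):
    delayed = Ns_hist[k - Ta] if k - Ta >= 0 else 0.0
    Ns_hist.append(Ns_hist[k] - pa * Ns_hist[k] + pa * delayed)
Ns = Ns_hist[-1]
Na = N0 - Ns
# Robots in avoidance are exactly those that entered during the last Ta
# iterations, so at steady state Na* = pa * Ta * Ns*.
```

Note that both options share the same steady state (since ps = 1/Ta), but the transients and their distributions differ, which is exactly what the calibration-practice slides below examine.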

Linear Model - Sample Results

[Figure: fraction of robots in avoidance, Na*/N0: realistic-to-microscopic comparison (different controllers, dynamic/static scenarios, allocentric/egocentric measures) and microscopic-to-macroscopic comparison (same robot density, but the wall surface becomes relatively smaller with bigger arenas)]

Nonlinear Example: Stick-Pulling

A Case Study: Stick-Pulling


- Physical set-up: 2-6 robots, 4 sticks, 40 cm radius arena
- Collaboration via indirect communication

[Figure: robot with IR reflective band, proximity sensors, and arm elevation sensor]

Systematic Experiments

Real robots
[Martinoli and Mondada, ISER, 1995] [Ijspeert et al., AR, 2001]

Realistic simulation

Experimental and Realistic Simulation Results


- Nrobots > Nsticks vs. Nrobots <= Nsticks
- Real robots (3 runs) and realistic simulations (10 runs)
- System bifurcation as a function of #robots/#sticks

Geometric Probabilities
- Aa = surface of the whole arena
- ps = As / Aa
- pr = Ar / Aa
- pR = pr (N0 - 1)
- pw = Aw / Aa
- pg1 = ps
- pg2 = Rg ps

From Reality to Abstraction

[Figure: from a deterministic robot flowchart, to a probabilistic agent flowchart, to a Markov chain (PFSM) with interaction modeling]

Full Macroscopic Model


For instance, for the average number of robots in searching mode:

Ns(k+1) = Ns(k) - [Γg1(k) + Γg2(k) + pw + pR] Ns(k)
        + Γg1(k-Tcga) γ(k; Ta) Ns(k-Tcga)
        + Γg2(k-Tca) Ns(k-Tca) + Γg2(k-Tcda) Ns(k-Tcda)
        + pw Ns(k-Ta) + pR Ns(k-Tia)

with time-varying coefficients (nonlinear coupling):

Γg1(k) = pg1 [M0 - Ng(k) - Nd(k)]
Γg2(k) = pg2 Ng(k)
γ(k; TSL) = Π (j = k-Tg-TSL ... k-TSL) [1 - pg2 Ns(j)]

- 6 states: 5 DEs + 1 conservation equation
- Ti, Ta, Td, Tc ≥ 0; Txyz = Tx + Ty + Tz
- TSL = shift-left duration

[Martinoli et al., IJRR, 2004]

Swarm Performance Metric


Collaboration rate: # of sticks pulled per time unit

C(k) = pg2 Ns(k-Tca) Ng(k-Tca)  : mean # of collaborations at iteration k

C̄(Te) = (1/Te) Σ (k = 0 ... Te) C(k)  : mean collaboration rate over Te

Sample Results
Webots (10 runs), microscopic (100 runs), macroscopic model (1 run)

Simplified Macroscopic Model (1)

Ti, Ta, Td, Tc << Tg  =>  set Ti = Ta = Td = Tc = 0

Simplified Macroscopic Model (2)


[Figure: two-state PFSM with Search and Grip states; unsuccessful grips time out back to search, successful grips (a second robot arrives) free the robots]
Nonlinear DE coupling through unit-to-unit interaction (in this case through the stick)!

Ns(k+1) = Ns(k) - pg1 [M0 - Ng(k)] Ns(k)
        + pg1 [M0 - Ng(k-Tg)] γ(k; 0) Ns(k-Tg)
        + pg2 Ng(k) Ns(k)
Ng(k+1) = N0 - Ns(k+1)
γ(k; 0) = Π (j = k-Tg ... k) [1 - pg2 Ns(j)]

Initial conditions and causality: Ns(0) = N0, Ng(0) = 0; Ns(k) = Ng(k) = 0 for all k < 0

- Ns = average # of robots in searching mode
- Ng = average # of robots in gripping mode
- N0 = # of robots used in the experiment
- M0 = # of sticks used in the experiment
- γ = fraction of robots that abandon pulling (no helper arrived within Tg)
- Te = maximal number of iterations
- k = 0, 1, ..., Te (iteration index)
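A sketch of iterating the simplified model while accumulating the collaboration rate C(k) = pg2 Ng(k) Ns(k); the parameter values below are illustrative placeholders, not calibrated geometric probabilities:

```python
# Simplified stick-pulling macroscopic model (illustrative parameters).
pg1, pg2, Tg = 0.01, 0.005, 50        # grip prob., collab. prob., time-out
N0, M0, K = 4, 4, 5000                # robots, sticks, iterations

def gamma(Ns_hist, k, Tg, pg2):
    """Probability that no second robot arrived during the grip window
    [k - Tg, k]; Ns(j) = 0 for j < 0 by causality."""
    prod = 1.0
    for j in range(k - Tg, k + 1):
        prod *= 1.0 - pg2 * (Ns_hist[j] if j >= 0 else 0.0)
    return prod

Ns_hist = [float(N0)]                 # Ns(0) = N0, so Ng(0) = 0
collab = 0.0
for k in range(K):
    ns, ng = Ns_hist[k], N0 - Ns_hist[k]
    ns_past = Ns_hist[k - Tg] if k - Tg >= 0 else 0.0
    ng_past = (N0 - Ns_hist[k - Tg]) if k - Tg >= 0 else 0.0
    c_k = pg2 * ng * ns               # successful collaborations at k
    ns_next = (ns
               - pg1 * (M0 - ng) * ns                        # start gripping
               + pg1 * (M0 - ng_past) * gamma(Ns_hist, k, Tg, pg2) * ns_past
               + c_k)                                        # grippers freed
    Ns_hist.append(ns_next)
    collab += c_k
rate = collab / K                     # mean collaboration rate over K steps
```

Sweeping Tg in such a loop reproduces the qualitative shape of the collaboration-rate curves discussed in the steady-state analysis below.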

Steady State Analysis (Simplified Model)


Steady-state analysis. It can be demonstrated that:

Tg^opt → ∞  for  N0/M0 ≥ 2/(1 + Rg)

with N0 = number of robots, M0 = number of sticks, and Rg = reduction factor of the approaching angle for collaboration.

Counterintuitive conclusion: an optimal (finite) Tg can exist also in scenarios with more robots than sticks if collaboration is very difficult (i.e., Rg very small)!

Verification of Analysis Conclusions (Full Model)

[Figure: 20 robots and 16 sticks: a finite optimal Tg appears in the full model]

Example: Rg = 1/10 (collaboration very difficult)

Optimal Gripping Time


Steady-state analysis: Tg^opt can be computed analytically in the simplified model (numerically approximated value) as a ratio of logarithms involving ln(1 - pg1 Rg), valid for

β < βc = 2/(1 + Rg),  with β = N0/M0 = robots-to-sticks ratio

Tg^opt can also be computed numerically by integrating the full-model ODEs or by solving the full-model steady-state equations.

[Lerman et al., Artificial Life Journal, 2001], [Martinoli et al., IJRR, 2004]

Journal Publications
Stick pulling:
- Li, Martinoli, Abu-Mostafa, Adaptive Behavior, 2004 -> learning + micro
- Martinoli, Easton, Agassounon, Int. J. of Robotics Research, 2004 -> real + realistic + micro + macro
- Lerman, Galstyan, Martinoli, Ijspeert, Artificial Life, 2001 -> realistic + macro
- Ijspeert, Martinoli, Billard, Gambardella, Autonomous Robots, 2001 -> real + realistic + micro

Object aggregation:
- Agassounon, Martinoli, Easton, Autonomous Robots, 2004 -> realistic + macro + activity regulation
- Martinoli, Ijspeert, Mondada, Robotics and Autonomous Systems -> real + realistic + micro

Some Limitations of the Current Methods

Model Calibration - Practice


[Figure: bin distribution of the interaction time Ta (mean Ta = 25 x 50 ms = 1.25 s): # of collisions vs. collision time, for the microscopic model (time-out and constant-P options) and the realistic simulation (distal and proximal controllers)]

Model Calibration - Practice


Encountering probability pa: example of the transition in space from search to obstacle avoidance (1 moving Alice, 1 dummy Alice, Webots measurements, egocentric)

[Figure: spatial transition maps for the distal controller (rule-based) and the proximal controller (Braitenberg, linear)]

Stochastic vs. Deterministic Models


Webots (10 runs), microscopic (100 runs), macroscopic model (1 run)

Spatial vs. Nonspatial Models


[Correll & Martinoli, DARS-04, ISER-04, ICRA-05, DARS-06, ISER-06, SYROCO-06]

Boundary coverage problem (case study: turbine inspection)

Spatial models required because of:
- the environmental template
- fast performance metrics (e.g., time to completion)
- a clustered dropping point for the robots
- networking connectivity
- algorithms with enhanced navigation

[Figure: unfolded turbine with blade geometry reproduced faithfully; plot of time to completion vs. number of robots (10-20)]

Machine-Learning-Based Approach
(main focus: synthesis)

Automatic Design and Optimization


- Evaluative & unsupervised learning: multi-agent (GA, PSO) or single-agent (in-line adaptive search, RL)
- Targeted to embedded control or to the whole system (e.g., hw-sw codesign, multi-objective)
- Enhanced noise resistance (e.g., aggregation criteria, statistical tests)
- Customization for distributed platforms (off-line and on-line learning; solutions to the credit assignment problem)
- Combined with one or more levels of simulation

Rationale for Combined Methods


- Applying a machine-learning method to the target system (hardware in the loop) might be expensive or not always feasible
- Any level of modeling allows us to consider certain parameters and leave out others; models, as expressions of an abstraction of reality, can be considered as filters
- Machine-learning techniques will explore the design parameters explicitly represented at a given level of abstraction
- Depending on the features of the hyperspace to be searched (size, continuity, noise, etc.), appropriate machine-learning techniques should be used (e.g., single-agent hill-climbing techniques vs. multi-agent techniques)

Learning to Avoid Obstacles by Shaping a Neural Network Controller using Genetic Algorithms

Evolving a Neural Controller


[Figure: Khepera with 8 proximity sensors S1-S8 feeding two motor neurons M1, M2 through synaptic weights wij; inhibitory and excitatory connections shown]

Oi = f(xi),   f(x) = 2/(1 + e^-x) - 1,   xi = Σ (j = 1 ... m) wij Ij + I0

- Oi = output of neuron Ni
- Ij = input j
- wij = synaptic weight
- f(x) = sigmoid transfer function of neuron N

Note: in our case we evolve the synaptic weights, but Hebbian rules for dynamic change of the weights, transfer-function parameters, etc. can also be evolved.
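A minimal sketch of the controller's forward pass; the 8-sensor, 2-motor layout follows the slides, while the random weights merely stand in for an evolved genome:

```python
import math
import random

def neuron_output(weights, inputs, bias):
    """Sigmoid neuron of the evolved controller:
    x = sum_j w_j * I_j + I_0,  f(x) = 2 / (1 + exp(-x)) - 1 in (-1, 1)."""
    x = sum(w * i for w, i in zip(weights, inputs)) + bias
    return 2.0 / (1.0 + math.exp(-x)) - 1.0

random.seed(0)
# Genome: for each of the 2 motor neurons, 8 synaptic weights + 1 bias.
genome = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(2)]
sensors = [0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.9, 0.1]   # proximity readings
motors = [neuron_output(g[:8], sensors, g[8]) for g in genome]
```

In an evolutionary run, the GA would operate directly on the flattened genome, with the fitness function of the next slide scoring each candidate.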

Evolving Obstacle Avoidance


(Floreano and Mondada 1996)

Defining performance (fitness function):

Φ = V (1 - √Δv) (1 - i)

- V = mean speed of the wheels, 0 ≤ V ≤ 1
- Δv = absolute algebraic difference between the wheel speeds, 0 ≤ Δv ≤ 1
- i = activation value of the sensor with the highest activity, 0 ≤ i ≤ 1

Note: fitness is accumulated over the evaluation span and normalized by the number of control loops (actions).
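The three fitness factors translate directly into code; a sketch for one control step, assuming both wheel speeds are normalized to [0, 1]:

```python
import math

def step_fitness(v_left, v_right, max_prox):
    """phi = V * (1 - sqrt(dv)) * (1 - i): rewards speed (V), straight
    motion (small dv), and staying away from obstacles (small i)."""
    V = 0.5 * (v_left + v_right)          # mean wheel speed, in [0, 1]
    dv = abs(v_left - v_right)            # differential speed, in [0, 1]
    return V * (1.0 - math.sqrt(dv)) * (1.0 - max_prox)

# Fast straight motion far from obstacles is optimal:
phi_best = step_fitness(1.0, 1.0, 0.0)    # = 1.0
phi_spin = step_fitness(1.0, 0.0, 0.0)    # = 0.0: spinning in place scores zero
phi_near = step_fitness(1.0, 1.0, 0.9)    # heavily penalized near an obstacle
```

The square root makes the straightness penalty strong even for small wheel-speed differences, which favors smooth trajectories.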

Evolving Robot Controllers

Note: the controller architecture can be of any type, but GA/PSO are worth using only when the number of parameters to be tuned is large.

Evolving Obstacle Avoidance


Evolved path

Fitness evolution

Evolved Obstacle Avoidance Behavior


Generation 100; on-line, off-board (PC-hosted) evolution

Note: the direction of motion is NOT encoded in the fitness function; the GA automatically discovers the asymmetry in the sensory system configuration (6 proximity sensors in the front, 2 in the back)

From Single to Multi-Unit Systems: Co-Learning in a Shared World

Evolution in Collective Scenarios

Collective case: fitness becomes noisy due to partial perception and independent parallel actions

Credit Assignment Problem


With limited communication, no communication at all, or only partial perception, it is difficult to attribute the collective performance to individual actions.

Co-Learning Collaborative Behavior


Three orthogonal axes to consider (extreme or balanced solutions are possible):
- individual vs. group fitness
- private (no parameter sharing) vs. public (parameter sharing) policies
- homogeneous vs. heterogeneous systems

[Figure: example with binary encoding of candidate solutions]

Co-Learning Competitive Behavior

[Figure: two co-adapting units/populations, each evaluated with its own fitness (f1, f2)]

Learning to Avoid Obstacles using Noise-Resistant Algorithms


(Example 1 of the Combined Method, realistic level with GA and PSO)

Noisy Optimization
- Multiple evaluations at the same point in the search space yield different results
- Depending on the optimization problem, the evaluation of a candidate solution can be more or less expensive in terms of time
- Noise causes decreased convergence speed and residual error
- Noisy optimization has seen little exploration in evolutionary algorithms, and very little in PSO

Key Ideas
- Better information about a candidate solution can be obtained by combining multiple noisy evaluations
- We could systematically evaluate each candidate solution a fixed number of times, but this is not smart from a computational point of view
- In particular for long evaluation spans, we want to dedicate more computational power/time to evaluating promising solutions and to eliminating "lucky" ones as quickly as possible; as a result, candidate solutions might have been evaluated a different number of times when they are compared
- In GA, good and robust candidate solutions survive over generations; in PSO, they survive in the individual memory
- Use aggregation functions over multiple evaluations, e.g., minimum and average
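A toy sketch of the aggregation idea (the fitness landscape and noise level below are made up): a single lucky evaluation can mis-rank candidates, while the minimum over re-evaluations is conservative and eliminates lucky winners:

```python
import random

def noisy_eval(x):
    """Stand-in for one expensive, noisy robot evaluation (hypothetical
    1-D landscape with true optimum at x = 0.3)."""
    return -(x - 0.3) ** 2 + random.gauss(0.0, 0.05)

def aggregated_eval(x, n=10, agg=min):
    """Combine n noisy evaluations; min keeps only what the candidate
    achieved every single time, so 'lucky' solutions cannot survive."""
    return agg(noisy_eval(x) for _ in range(n))

random.seed(1)
good = aggregated_eval(0.3)               # true value 0.00
bad = aggregated_eval(0.9)                # true value -0.36
# With aggregation, the ranking is restored with high probability.
```

Passing `agg=min` or a mean function selects between the conservative and the averaging aggregation criteria mentioned above.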

[Figures: outlines of the noise-resistant GA and PSO variants]

A Systematic Study on Obstacle Avoidance 3 Different Scenarios


[Figure: PSO, 50 iterations, scenario 3]

- Scenario 1: one robot learning obstacle avoidance
- Scenario 2: one robot learning obstacle avoidance, one robot running pre-evolved obstacle avoidance
- Scenario 3: two robots co-learning obstacle avoidance
- Idea: more robots means more noise (as perceived by an individual robot); no standard communication between the robots, but in scenario 3 information is shared through the population manager!


Results Best Controllers


Fair test: same number of evaluations of candidate solutions for all algorithms
(i.e., n generations/iterations of the standard versions are compared with n/2 of the noise-resistant ones)

Results Average of Final Population


Fair test: same setup as on the previous slide

Learning to Pull Sticks


(Example 2 of the Combined Method, microscopic level with in-line adaptive search)

Big Artillery such as GA/PSO is Not Always the Most Appropriate Solution

- Simple individual learning rules combined with collective flexibility can achieve extremely interesting results
- Simplicity and low computational cost mean possible embedding on simple, real robots

In-Line Adaptive Learning


(Li, Martinoli, Abu-Mostafa, 2001)

- GTP: gripping time parameter
- Δd: learning step; d: direction
- Underlying low-pass filter for measuring the performance

In-Line Adaptive Learning


Differences with gradient descent methods:
- Fixed rules for calculating the step increase/decrease: limited descent speed, no gradient computation; more conservative but more stable
- Randomness for getting out of local minima (no momentum)
- The underlying low-pass filter is part of the algorithm

Differences with reinforcement learning:
- No learning history considered (only the previous step)

Differences with basic in-line learning:
- Adaptive step: faster, with more stability at convergence
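A sketch of this algorithm family (our illustrative reading, not the exact published rule set): low-pass filter the noisy performance by averaging a few samples, expand the step on improvement, and contract it while randomizing the direction otherwise:

```python
import random

def filtered_perf(measure, x, n=5):
    """Low-pass filter: average n noisy performance samples at x."""
    return sum(measure(x) for _ in range(n)) / n

def inline_adaptive_search(measure, x0, step0=2.0, iters=100):
    x, step, d = x0, step0, 1
    perf = filtered_perf(measure, x)
    for _ in range(iters):
        cand = x + d * step
        perf_cand = filtered_perf(measure, cand)
        if perf_cand > perf:
            x, perf = cand, perf_cand
            step *= 1.5                   # fixed expansion rule, no gradient
        else:
            step = max(step * 0.5, 0.1)   # fixed contraction rule
            d = random.choice((-1, 1))    # randomness escapes local minima
    return x

random.seed(2)
# Hypothetical 1-D landscape: optimal gripping-time parameter at 30 s.
noisy = lambda g: -(g - 30.0) ** 2 + random.gauss(0.0, 1.0)
g_final = inline_adaptive_search(noisy, x0=5.0)
```

The whole state is one parameter value, a step size, and a direction, which is what makes such rules cheap enough to embed on simple, real robots.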

Enforcing Homogeneity

Sample Results Homogeneous System


[Figure: stick-pulling rate (1/min) vs. initial gripping time parameter (100-600 s) for 2-6 robots, with a short averaging window (high filter cut-off frequency) and a long averaging window (low filter cut-off frequency); systematic search (mean only) vs. learned (mean + std dev)]

Note: 1 parameter for the whole group!

Allowing Heterogeneity

Impact of Diversity on Performance


(Li, Martinoli, Abu-Mostafa, 2004)

Notes: global = group-level performance; local = individual-level performance

[Figure: stick-pulling rate ratio between heterogeneous (fully heterogeneous, global and local, and 2-caste global) groups and homogeneous groups AFTER learning, vs. number of robots (2-6); specialized teams reach ratios of roughly 1.1-1.3, homogeneous teams are the baseline at 1]

Diversity Metrics
(Balch 1998)

The entropy-based diversity measures introduced in AB-04 could be used for analyzing threshold distributions.

- Simple entropy: H = -Σ (i = 1 ... m) pi log2 pi
- Social entropy: integrates the simple entropy over the taxonomic-level parameter h

with pi = portion of the agents in cluster i, m clusters in total, h = taxonomic-level parameter
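The simple entropy is just Shannon's formula over the caste distribution; a quick sketch (the two-caste split is a made-up example):

```python
import math

def simple_entropy(p):
    """H = -sum_i p_i * log2(p_i), with p_i the fraction of agents in
    cluster (caste) i; zero-probability clusters contribute nothing."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0.0)

h_homogeneous = simple_entropy([1.0])          # one caste -> 0 bits
h_two_castes = simple_entropy([0.5, 0.5])      # maximal for m = 2 -> 1 bit
h_skewed = simple_entropy([0.9, 0.1])          # partial specialization, < 1 bit
```

Applied to the learned gripping-time parameters, the entropy quantifies how far a group has drifted from homogeneity.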

Specialization Metric
The specialization metric introduced in AB-04 could be used for analyzing the specialization arising from a variable-threshold division-of-labor algorithm.

S = specialization; D = social entropy; R = swarm performance

Note: this would be particularly useful when the number of tasks to be solved is not well defined, or when it is difficult to assess the task granularity a priori. In such cases the mapping between task granularity and caste granularity might not be trivial (one-to-one mapping? how many sub-tasks for a given main task? etc.; see the limited performance of a caste-based solution in the stick-pulling experiment).

Sample Results with the Standard Sticks

- 2 serial grips needed to get a stick out
- 4 sticks, 2-6 robots, 80 cm arena

Relative performance:
- Specialization more important for small teams
- Local performance > global performance
- Enforced caste: pays the price for odd team sizes

Diversity:
- Flat curves; difficult to tell whether diversity brings performance

Specialization:
- Specialization is higher with global performance when needed, and drops more quickly when not needed
- Enforcing a caste acts as a low-pass filter

Remarks on the Standard Set-Up Results


- When local and global performance are almost aligned (i.e., by doing well locally I do well globally), local performance achieves slightly better results, since there is no credit assignment problem
- Nevertheless, global performance is less noisy, so the part of diversity that increases performance is higher with global performance (specialization when needed)

From Robots to other Embedded, Distributed, Real-Time Systems

Embedded, Real-Time SI-Systems

[Figure: multi-robot systems at the center of a spectrum of systems: social insects, vertebrates, pedestrians, traffic systems, symbiotic societies, and networks of S&A]

Embedded, Real-Time SI-Systems: Common Features


- Real-world systems (noise, small heterogeneities, ...)
- From a few to millions of units (but not 10^23!)
- Embodiment, sensors, actuators, often mobility and energy limitations
- Local intelligence, behavioral rules, autonomous units
- Local interaction, communication (unit-to-unit, unit-to-environment)

Collaborative Decision in Sensor Networks


[Cianci et al., in preparation]

- Macroscopic: rate equations, mean-field approach, whole network
- Microscopic 2: nonspatial 1D Monte Carlo simulation, multi-agent models, 1 agent = 1 node
- Microscopic 1: spatial 2D Monte Carlo simulation, multi-agent models, 1 agent = 1 node
- Realistic: intra-node details and communication channel reproduced faithfully (Webots with an Omnet++ plugin)
- Physical reality: detailed info on sensor nodes available

[Figure: same abstraction hierarchy as for the robots; abstraction increases and experimental time decreases going up, with common metrics across levels]

Leurre: Mixed Insect-Robot Societies


http://leurre.ulb.ac.be/
[Correll et al., IROS-06; Artificial Life Journal, in preparation]

Nj(k+1) = Nj(k) + pr Ns(k) δ1j + pjoin(j-1) Nj-1(k) - pjoin(j) Nj(k) - pleave(j) Nj(k) + pleave(j+1) Nj+1(k)

- Macroscopic: rate equations, mean-field approach, whole swarm
- Microscopic: multi-agent models, 1 agent = 1 robot or cockroach; similar description for all nodes
- Realistic: intra-robot details and environment (e.g., shelter, arena) details reproduced faithfully; cockroaches: body volume + animation
- Physical reality: detailed info on robots; limited info on the physiology of the cockroaches, individual behavior measurable externally

[Figure: same abstraction hierarchy with common metrics across levels]
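A sketch in the spirit of such an aggregation-size rate equation, with mass-action joining so that individuals are conserved (all rates are hypothetical, not the calibrated cockroach/robot parameters):

```python
# N[j]: mean number of aggregates of size j (j = 1..J); Ns: mean number
# of individuals still searching. Hypothetical rates, Euler iteration.
J = 5
c_join, c_leave, p_r = 0.02, 0.01, 0.05
N = [0.0] * (J + 1)
Ns = 10.0                                 # 10 individuals in total
for _ in range(5000):
    dN = [0.0] * (J + 1)
    settle = p_r * Ns                     # searcher rests alone -> size 1
    dNs = -settle
    dN[1] += settle
    for j in range(1, J + 1):
        if j < J:
            join = c_join * N[j] * Ns     # searcher joins a size-j group
            dN[j] -= join; dN[j + 1] += join; dNs -= join
        leave = c_leave * j * N[j]        # one individual leaves the group
        if j == 1:
            dN[1] -= leave                # lone individual resumes search
        else:
            dN[j] -= leave; dN[j - 1] += leave
        dNs += leave
    Ns += dNs
    N = [n + d for n, d in zip(N, dN)]
individuals = Ns + sum(j * N[j] for j in range(1, J + 1))
```

Every join or leave event moves exactly one individual between search and an aggregate, so the total number of individuals is conserved by construction.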

Supra-Molecular Chemical System


[Mermoud et al., 2006, in preparation]

- Macroscopic 1: chemical equilibrium is completely defined by the equilibrium constants K of each reaction (law of mass action)
- Macroscopic 2: reaction kinetics describes how a reaction occurs and at which speed (differential equations)
- Microscopic 1: agent-based model, molecule geometry abstracted, 1 agent = 1 aggregate
- Microscopic 2: agent-based model, molecule 2D and 3D geometry captured, 1 agent = 1 aggregate
- Physical reality: microscopic (e.g., crystallography) and macroscopic (chemical reaction) measurements

SAILS: 3D Self-Assembling Blimps


http://www.mascarillons.org
[Nembrini et al., IEEE-SIS, GA, 2005]

- Macroscopic: rate equations, mean-field approach, whole swarm? (TBD)
- Microscopic: multi-agent models, 1 agent = 1 blimp; trajectory maintained, visualization with Webots
- Realistic: intra-robot details, environment simplified (no realistic fluid dynamics yet)
- Physical reality: detailed info on robots

Conclusions

Lessons Learned over 10 Years


1. Stress the methodological effort with computer and mathematical tools; exploit synergies among the three main research thrusts
2. Keep closing the loop between theory and experiments with simulation
3. Formally prove claims using simple models and show experimental excellence under realistic conditions; seek system dependability
4. Choose case studies that are relevant for applications
5. Focus on system design and use off-the-shelf components and platforms

Lessons Learned over 10 Years


6. Leverage all the technologies you can from other markets (OS, wireless com, S&A, batteries) and go beyond bio-inspiration
7. Team up with other research specialists and companies for specific problems and applications
8. Push towards miniaturization; probably key for non-military applications in swarm robotics
9. Consider forms of coordination other than self-organization (swarm intelligence is just one form of distributed intelligence)
10. Consider other artificial/natural platforms (e.g., static S&A networks, mixed societies, chemical systems, intelligent vehicles, 3D moving units)

Some Pointers for Swarm Robotics (1)


Events, in addition to ANTS, ICRA, IROS:
- IEEE SIS (2003, 2005, 2006, 2007)
- DARS (1992-, biennial)
- Swarm Robotics Workshop at SAB (2002, 2004)

Books:
- "Swarm Intelligence: From Natural to Artificial Systems", E. Bonabeau, M. Dorigo, and G. Theraulaz, Santa Fe Studies in the Sciences of Complexity, Oxford University Press, 1999
- T. Balch and L. E. Parker (Eds.), "Robot Teams: From Diversity to Polymorphism", Natick, MA: A K Peters, 2002

Journal special issues:
- Ant Robotics, 2001, Annals of Mathematics and Artificial Intelligence
- Swarm Robotics, 2004, Autonomous Robots

Some Pointers for Swarm Robotics (2)


Projects and further pointers, in addition to SWIS activities:
- Swarm-bots (next tutorial): http://www.swarm-bots.org/
- I-Swarm: http://www.i-swarm.org/
- Leurre: http://leurre.ulb.ac.be/index2.html
- BORG group at Georgia Tech: http://borg.cc.gatech.edu/
- Rus robotics group at MIT: http://groups.csail.mit.edu/drl/
- RESL at USC: http://www-robotics.usc.edu/~embedded/
- IASL at UWE: http://www.ias.uwe.ac.uk/
- Robotics at Essex: http://cswww.essex.ac.uk/essexrobotics/
- Race at Uni Tokyo: http://www.race.u-tokyo.ac.jp/index_e.html
- Fukuda's laboratory: http://www.mein.nagoya-u.ac.jp/
- Swarm robotics web page (by E. Sahin): http://swarm-robotics.org/
