51 Command and Control of Autonomous Unmanned Vehicles
David H. Scheidt
Contents
51.1 Introduction
51.2 Autonomous Unmanned Air Vehicles
51.2.1 The Case for Autonomous Systems
51.2.2 C2 Fundamentals
51.2.3 The Organic Persistent Intelligence Surveillance and Reconnaissance System
51.3 Conclusion
References
Abstract
D.H. Scheidt
Johns Hopkins University Applied Physics Laboratory, Laurel, MD, USA
e-mail: [email protected]
K.P. Valavanis, G.J. Vachtsevanos (eds.), Handbook of Unmanned Aerial Vehicles,
DOI 10.1007/978-90-481-9707-1 110,
Springer Science+Business Media Dordrecht 2015
51.1 Introduction
In the spring of 1940, the combined French, British, Dutch, and Belgian forces outnumbered their German counterparts in troops, mechanized equipment, tanks, fighter planes, and bombers. The German Me 109E fighter aircraft was roughly equivalent to the British Spitfire, and the French Char B1 tank was superior to the German Panzer III. In addition, the Allies were fighting on their home soil, which greatly simplified their logistics. Yet in less than 6 weeks, the Belgians, Dutch, and French surrendered to the Germans, and the British retreated across the English Channel. Even though the Allies had superior equipment and larger forces, they were defeated by the Germans, who employed Auftragstaktik, a command and control technique that enabled edge war fighters to coordinate directly on tactical decisions using modern communications equipment (in WWII, the radio). Allied forces were forbidden to use radio because it was not secure, so Allied maneuver decisions were made by generals at headquarters, based upon hand-couriered reports. German decisions were made on the fly by Panzer III commanders and Ju 87 (Stuka) pilots conversing over the radio. By the time the French commanders met to decide what to do about the German advance, General Erwin Rommel's and General Heinz Guderian's Panzers had travelled over 200 miles and reached the English Channel.
As demonstrated repeatedly in military history, including the German advance in 1940, the speed at which battlefield decisions are made can be a deciding factor in the battle. A process model that describes military command and control is the Observe, Orient, Decide, Act (OODA) loop described by Boyd (Fig. 51.1). Boyd shows that, in military engagements, the side that can get inside the opponent's OODA loop by more rapidly completing the OODA cycle has a distinct advantage. In their influential book Power to the Edge, Alberts and Hayes use the term agility to describe an organization's ability to rapidly respond to changing battlefield conditions. Modern warfare case studies, such as the Chechens against the Russians, and not-so-modern warfare, such as Napoleon at Ulm, indicate that agile organizations enjoy a decisive military advantage. Alberts points out that a common feature of agile organizations is the empowerment of frontline forces, referred to as edge war fighters. Commanders facilitate organizational agility by exercising command by intent, in which commanders provide abstract goals and objectives to edge war fighters, who then make independent decisions based upon these goals and their own battlefield awareness. This empowerment of edge war fighters shortens the OODA loop at the point of attack, providing the desired agility.
Fig. 51.1 The Observe, Orient, Decide, Act (OODA) loop, acting on the natural world
Fig. 51.2 An AeroVironment Raven being launched
Rush, of the National Geospatial-Intelligence Agency, projects that it would take an untenable 16,000 analysts to study the video footage from UAVs and other airborne surveillance systems. The intelligence, surveillance, and reconnaissance (ISR) capability represented by medium- and large-scale unmanned vehicles represents a tremendous potential for the edge war fighter, if only the information could be processed and distributed in time. For the edge war fighter to take advantage of the ISR capability represented by these assets, information relevant to that specific war fighter must be gleaned from the mass of information available and presented to the war fighter in a timely manner. This presents a challenge, as the crews analyzing UAV payload data (far fewer than Rush's 16,000 analysts) are not apprised of the changing tactical needs of all war fighters, nor do the war fighters have the access or the time required to select and retrieve data from UAV sources. Currently, operation centers are used to gather and disseminate information from persistent ISR assets. This centralized information management process introduces a delay between the observation and its transmission to the war fighter, which reduces force agility and operational effectiveness. While U.S. soldiers are empowered to operate on command by intent, their ISR systems are all too frequently centralized systems reminiscent of the French command structure. For U.S. forces to become a fully agile force, the ISR systems supporting U.S. soldiers must be as agile as the soldiers they support. Agile unmanned vehicle systems require that some decisions are made at the edge nodes; therefore, to become agile, unmanned vehicles must become autonomous.
51.2 Autonomous Unmanned Air Vehicles
UAVs currently in use are described as unmanned strictly because no human rides
inside the air vehicle; however, the manual labor required to operate an air vehicle
of autonomous tier 1 and tier 2 UAVs have been demonstrated outdoors at U.S.
government ranges (Scheidt et al. 2004; Kwon and Pack 2011; Tisdale et al. 2008).
Arguably the most mature autonomous UAV effort is the DARPA Heterogeneous Airborne Reconnaissance Team (HART) program, which provides automatic unmanned air vehicles with an autonomous tactical decision aid (TDA). In these systems, simple actions such as waypoint following are performed independently by each UAV, and complex decisions are performed by the TDA algorithm located on the pilot's computer. Prior to execution, the pilot reviews and approves (or disapproves and modifies) the plan. Proponents of HART correctly argue that the pilot is allowed to auto-approve plans, effectively making HART an autonomous system; however, the requirement to continually enable the pilot to review and approve all flight plans profoundly impacts the UAV system architecture and performance by delaying the exchange of information between the sensor and the UAV controller. The U.S. Army planned on partial fielding of HART in 2012 (Defense Systems staff 2012).
51.2.2 C2 Fundamentals
In Power to the Edge, command and control (C2) is defined as the common military term for the management of personnel and resources; the book also gives the formal definition of command found in the Joint Chiefs of Staff publication, which subsumes some portions of control within that definition. Viewed as a black box, the purpose of a C2 system is to use observations to produce decisions. C2, including UAV C2, involves the production and execution of decisions that, when executed, change the world in which the UAV is operating in ways that benefit the operator. That world (X) is described as a set of states, X = {x1, x2, ..., xn}, each of which represents a unique configuration of actors and attributes within the natural world in which the UAV operates. The command and control system can be viewed as a transfer function (f(xi) → xj) that produces a state change in the world. The quality of each state can be determined by applying mission-based fitness criteria to elements within the state. For example, if a UAV mission is to track a target, then states in which the target is within the field of view of the UAV's sensor are evaluated as of higher quality than those states in which the target is not seen by the UAV, and an effective C2 system would cause state transitions that are of high quality when compared to alternative transitions. C2 is a constant battle between chaos and order, with order being state transitions, designed to achieve mission goals, that are instigated by the C2 system, and chaos being unanticipated state transitions produced by adversaries, poorly coordinated teammates, or random acts.
C2 can be best understood as an information-theoretic problem. This is apropos
as both the situational awareness upon which decisions are based as well as the
decision products can be viewed as information-theoretic messages and both the C2
process and the natural world can be viewed as information transfer functions. For
an interesting, albeit somewhat off-topic, discussion on the information-theoretic
nature of physics, see Wheeler (1990).
The information content (S) of a message (m) is defined by Shannon (1948) as:

S(m) = log2(1/P(m))    (51.1)
The more improbable the message being received, the larger the information content. For example, barring a highly improbable change to celestial mechanics, a message stating "tonight it will be dark" has zero information content because the a priori probability that it would be dark is one. By comparison, "tonight it will rain" has positive information content because the a priori probability that it would rain during a given evening is less than one. Information content is measured in bits, which are real numbers. Note that the term "bit" is overloaded; information theory bits are not the same as computer science bits, which are integers.
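Equation (51.1) can be sketched directly in a few lines. The following Python illustration is ours, not the chapter's; the function name and example probabilities are assumptions chosen to mirror the rain/dark example above:

```python
import math

def information_content(p_message: float) -> float:
    """Shannon information content S(m) = log2(1/P(m)), in bits (Eq. 51.1)."""
    if not 0.0 < p_message <= 1.0:
        raise ValueError("P(m) must be in (0, 1]")
    return math.log2(1.0 / p_message)

# "Tonight it will be dark": a priori probability 1, so zero information.
print(information_content(1.0))   # 0.0
# "Tonight it will rain": an assumed P = 0.25 carries 2 bits.
print(information_content(0.25))  # 2.0
```

Note that the result is a real number of bits, consistent with the remark above that information-theory bits are not integer computer-science bits.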
The state space (p) of the world in which our UAVs operate is the number of bits required to uniquely identify all possible states in that world:

p = log2 |X|    (51.2)
Completely describing the world requires a message of length p. A communication
that completely describes the natural world would require a message whose length
would be effectively infinite as the natural world includes each blade of grass,
molecule of air, quantum states of ions within each atom, and so forth. Fortunately,
effective C2 of UAVs does not require such detailed knowledge, and effective
UAV control can be provided using artifacts that are abstract, limited, and while
potentially quite large in number, expressible in messages that are small enough to
be used within a modern computer network. In fact, military C2 systems routinely
express tactical worlds in finite languages such as the protocols used by the Global
Command and Control System and the Link16 network. In practice, the size of a UAV's world as represented by the C2 system can be dynamic, with the state space of the tactical world changing as artifacts enter, or leave, the operational area. That the complexity of a UAV's world, as represented by the state space of the current situation, is subject to change is important to understanding how to control UAVs.
Knowledge of the UAV's world is rarely complete, and UAVs are expected to operate in the presence of varying degrees of uncertainty. Uncertainty in tactical information can be produced by errors in sensor systems, gaps in sensor coverage, or approximation error, which is the difference between the actual ground truth and the coding scheme selected. For example, if the unit representation of a coding scheme used to represent linear position is 1 m, then an approximation error of 0.5 m is unavoidable. Consider the message (m) that encapsulates a C2 system's current situational awareness (SA). When S(m) < p, uncertainty exists and additional information is required to produce complete SA. Now consider a second message (m′) that contains all of the missing information required to reduce uncertainty to zero. The information content of m′ is the information entropy (H) of m, shown as
S(m′) = H(m)    (51.3)

H(m) = Σx P(x) log2(1/P(x))    (51.4)
For each bit of order produced by the C2 system, entropy is reduced by a bit.
Likewise, each bit of entropy produced by unanticipated change reduces order by
1 bit.
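The relationship between the state space p of Eq. (51.2) and the entropy H of Eqs. (51.3) and (51.4) can be made concrete with a small numeric sketch. The function names and the eight-state example are ours, chosen only for illustration:

```python
import math

def state_space_bits(num_states: int) -> float:
    """p = log2|X| (Eq. 51.2): bits needed to identify any single world state."""
    return math.log2(num_states)

def entropy_bits(probs) -> float:
    """H = sum of P(x) log2(1/P(x)) (Eq. 51.4): bits still missing from SA."""
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0.0)

# A world with 8 equally likely states needs p = 3 bits to describe.
p = state_space_bits(8)               # 3.0
# With no knowledge, entropy equals the full state space ...
h_unknown = entropy_bits([1/8] * 8)   # 3.0 bits of uncertainty
# ... while a message narrowing the world to 2 candidates leaves 1 bit.
h_partial = entropy_bits([0.5, 0.5])  # 1.0
```

The second call shows the S(m) < p case in the text: the message narrowing eight states to two supplies 2 bits of order, leaving 1 bit of entropy.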
As demonstrated by Guderian and Rommel, C2 systems are temporally sensitive. As time elapses, the information content of a message that describes a dynamic scene decreases in proportion to the unpredictable change in the scene. An example of this is an unexpected maneuver made by a target after a sensor observation was made but before the execution of a response to the observation. The loss of information over time is defined as entropic drag (δ), which is expressed mathematically by Scheidt and Schultz (2011) as:

δ(x, t, t0) = H(x(t) | x(t0)) / (t − t0)    (51.5)
Note that entropic drag takes a specific value for each state. Control of UAVs operating in an uncertain world in which multiple states are feasible requires a consideration of all admissible states. The measurement of entropic change that incorporates all feasible states is the normalized form of entropic drag (δnorm), the average entropic drag over all states within the system:

δnorm(t) = Σ∀xi H(xi(t) | xi(t0)) / ((t − t0) |X|)    (51.6)
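Equations (51.5) and (51.6) can be sketched as follows. This is an illustrative Python rendering, not the authors' implementation; the symbol δ, the function names, and the numeric example (10 s of unobserved target motion adding 4 bits of positional uncertainty) are assumptions:

```python
def entropic_drag(h_conditional: float, t: float, t0: float) -> float:
    """delta(x, t, t0) = H(x(t) | x(t0)) / (t - t0) (Eq. 51.5), in bits/second."""
    return h_conditional / (t - t0)

def normalized_entropic_drag(h_conditionals, t: float, t0: float) -> float:
    """delta_norm(t): average entropic drag over all feasible states (Eq. 51.6)."""
    return sum(h / ((t - t0) * len(h_conditionals)) for h in h_conditionals)

# If 10 s of unobserved motion adds 4 bits of uncertainty about one state,
# the drag on that state is 0.4 bits/s.
drag = entropic_drag(4.0, t=10.0, t0=0.0)                # 0.4
avg = normalized_entropic_drag([4.0, 2.0], t=10.0, t0=0.0)
```

The normalized form simply averages the per-state drags, so a mix of fast- and slow-changing states yields an intermediate value.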
UAVs acquire, process, and share information over a control infrastructure that includes onboard networking and processing, radio downlinks and uplinks, and offboard processing. When processing information, two key characteristics of the control infrastructure are latency (τ0), the unavoidable delay in processing an information packet regardless of packet size, and bandwidth (β), the rate in bits per second at which bits of information can be processed irrespective of the latency. The delay time (τ) required to process a message m can be viewed as:

τm = τ0 + sizem / β    (51.7)
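Equation (51.7) is a simple affine model of message delay. A minimal sketch, with the function name and the link numbers (1 Mbit image, 0.5 s latency, 250 kbit/s bandwidth) chosen by us for illustration:

```python
def delay_seconds(size_bits: float, latency_s: float, bandwidth_bps: float) -> float:
    """tau_m = tau_0 + size(m)/beta (Eq. 51.7): time to deliver message m."""
    return latency_s + size_bits / bandwidth_bps

# A 1 Mbit image over a 250 kbit/s link with 0.5 s latency takes 4.5 s,
# long enough for a fast-moving target to leave the sensor's field of view.
print(delay_seconds(1_000_000, latency_s=0.5, bandwidth_bps=250_000))  # 4.5
```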
Fig. 51.3 Uncertainty as a function of the resolution of the transmitted data (vertical axis: uncertainty; horizontal axis: resolution, 10 to 10^6)
Table 51.1 The complexity and rate of unpredictable change vary by mission class

Use case             Complexity, P(x)   Entropic drag, δ(t)
Strategic                               None
Operational          Low                Low
Focused tactical     Low                High
Multiunit tactical   High               High
provided from a UAV to the UAV operator. The local minimum in the plot represents the optimal amount of information that should be transmitted. UAV systems that transmit too much information, represented by high-resolution data on the left of the graph, reduce the net information provided because of the large loss of information across all data due to entropic drag. To further understand this relationship, let us examine bandwidth from Eq. (51.7) and its relationship to Eqs. (51.5) and (51.6) in more detail. This is accomplished by viewing canonical ISR missions and operating conditions through information theory and applying these views to differing forms of UAV control. Table 51.1 describes four general classes of ISR missions for UAVs and defines, in general terms, the complexity and entropic drag associated with those missions.
Recall that UAVs may be controlled using three general methods: tele-operation, automatic control, and autonomous control. Descriptions of these methods are:

Tele-operation The most common form of UAV control is tele-operation. Tele-operated UAVs use a ground-based pilot to fly the UAV using techniques that are identical to those of human-piloted aircraft. Tele-operated UAVs utilize low-level control features found in an airplane cockpit as well as onboard sensor data, presented on the pilot's ground station, that are duplicates of those found in the cockpit of a piloted aircraft. All decisions used to control tele-operated planes are made by the ground-based human pilot.

Automatic flight UAVs that contain autopilots are capable of automatic flight. Automatic UAVs use autopilots to assure stable, controlled flight. The pilots of automatic UAVs provide waypoint locations that direct the path of the UAV. In addition to flying to predefined waypoints, automatic UAVs may be preprogrammed to handle simple changes in operating conditions; however, management of complex or unanticipated changes is handled by a ground-based pilot.
Autonomous flight UAVs that contain an autopilot and an onboard, intelligent controller are capable of autonomous flight. Similar to automatic UAVs, autonomous UAVs provide stable flight between waypoints; however, unlike automatic UAVs, the intelligent controller onboard an autonomous UAV defines new waypoints in response to unanticipated changes in operational conditions.

Autonomous UAVs fundamentally change the relationship between the human and the UAV because autonomous UAVs, unlike tele-operated UAVs, automatic UAVs, and manned aircraft, do not require pilots. Autonomous UAVs do use human supervision; however, that supervision is performed at a higher level than the typical plane-pilot relationship. The relationship between an autonomous UAV and the supervising human resembles the relationship between a human pilot and an air traffic controller or, for Navy pilots, the Air Boss. Autonomous UAV operators supervise their UAVs by providing abstract, mission-level objectives as well as rules of engagement that are equivalent to the instructions provided to human pilots prior to a mission. During the mission, autonomous UAVs devise a course of action aligned with these instructions in response to the current situation. As conditions change during the course of a mission, an autonomous UAV constantly modifies its course of action in accordance with the mission-level objectives. Unlike automatic UAVs, autonomous UAVs respond to complex situations that were not explicitly considered during UAV design.
The different forms of UAV control demand different levels of communications.
Direct control of UAV control surfaces requires that tele-operated UAV pilots
use low-latency, high quality of service communications to command UAVs. For
tele-operated UAVs even small perturbations in service run the risk of loss of
control and catastrophic failure. Automatic UAVs are more forgiving, as the pilot
is only required to provide guidance at the waypoint level. The communications
requirement for automatic UAVs is determined by the rate of change of those
operational elements that dictate the mission pace. For example, if the UAV
is engaged in an ISR task to track a specific target, the UAV communications
infrastructure used to control the UAV must be capable of providing target track
data to the pilot prior to the target exiting the UAV's field of view. Depending upon
the nature of the target, the response time required for automatic UAVs can range
from sub-second intervals to minutes. Autonomous vehicles are the most forgiving
of communications outages and delays. In fact, autonomous vehicles are capable of
performing without communication to human operators during an entire mission.
The ability to function without continual human supervision changes operator and
designer perspectives on UAV communications from being a requirement to being
an opportunity. When autonomous UAVs can communicate, either to humans or
other vehicles, mission performance is improved by the sharing of information
and collaborating on decisions; however, when communications are not available,
autonomous UAVs are still capable of fulfilling the mission. A synopsis on the
communications availability, in terms of latency and bandwidth, for four different
classes of conditions is provided in Table 51.2.
When considering whether a UAV control decision should be made by a human operator or by a control processor onboard a UAV, three criteria should be considered: (1) what is the quality of decisions made by the human/machine, (2) what are the ethical and legal requirements for making decisions, and (3) what information is accessible to the human and to the machine? There are ethical and
legal advantages and disadvantages for both human and machine decision-making.
Arkin provides an excellent overview of the legal and ethical issues, concluding
that no consensus exists that would favor human control over machine control or
vice versa (Arkin 2009). Regarding the ability to make higher-quality decisions,
anecdotal evidence suggests that there exist problem sets for which humans provide
better quality decisions and problem sets for which intelligent control algorithms
provide better decisions. For example, few would dispute that the path-planning algorithms provided by MapQuest and Google find superior paths over complex road networks in times unmatched by humans. On the other hand, even the most sophisticated pattern recognition algorithms are incapable of matching small children in rapidly identifying and manipulating common household items in a cluttered environment. The neuroscience and psychology communities have long studied human cognitive abilities, and computer science, particularly the subfield of
complexity theory (Kolmogorov 1998), has been used to study the performance of
cognitive algorithms as a function of the problem space being addressed; however,
a comparative understanding between human and machine cognition (or even the
tools necessary to achieve this understanding) does not exist at this time.
In order to move the discussion of UAV C2 into a manageable space, two simplifying assumptions are asserted: first, decisions should be made using the maximum amount of information, and second, given equivalent information, it is preferred that decisions be made by a human. These simplifying assumptions allow us to focus on the availability of information as the primary driver for command and control. While it is somewhat disconcerting to ignore the quality of the decision-maker and ethical issues, our focus on information as the driving factor in C2 is consistent with the lessons learned from Guderian and Rommel earlier.
Table 51.2 Communication availability experienced by UAVs during a mission: depending upon the operational conditions, latency and bandwidth can vary greatly

Communications availability   Description                                   Latency (τ0)   Bandwidth (β)
Dedicated communications      Dedicated infrastructure whose access is      Very low       Extremely high
                              tightly controlled and not contested by
                              environmental or adversarial activities
Uncontested broadband         Standing infrastructure that is broadly       Very low       High
                              used and is not contested by environmental
                              or adversarial activities
Contested communications      Standing infrastructure that is broadly       Low            Low
                              used and is contested by environmental or
                              adversarial activities that produce
                              periodic outages and/or reduction in
                              service
Over the horizon              Operations that involve periodic movement     High           Moderate
                              in areas that are beyond communications
                              range or involve periodic, planned
                              communications blackouts
Having narrowed our focus to the information used to make a decision, three information-theoretic distinctions between using an operator and using an onboard computer to determine a course of action are identified. These distinctions are as follows: (a) the complexity of the scene that must be described to make a decision, which dictates the size of the packets that must be communicated and the run time of decision processes; (b) the entropic drag of the system being represented by the data, which dictates the time for which the information is valid; and (c) the communications delay time provided in Eq. (51.7), which defines the earliest time at which the decision could be made. These values may be combined to form an information value g(x) that defines the UAV control problem. The information value is defined as the product of the complexity of the UAV's world and the entropic drag of the world's unpredictable change, divided by the delay associated with communicating the world state to the decision-maker, s.t.:

g(x) = P(x) δ(tm) / τm(x)    (51.8)
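Equation (51.8) can be sketched numerically. The function name and all numbers below are illustrative assumptions, chosen only to show the qualitative trend the text describes: complex, fast-changing scenes behind slow links yield high g(x) and favor onboard (autonomous) decision-making, while simple, slow scenes over fast links favor the operator:

```python
def information_value(complexity_bits: float,
                      drag_bits_per_s: float,
                      delay_s: float) -> float:
    """g(x) = P(x) * delta(t_m) / tau_m(x) (Eq. 51.8)."""
    return complexity_bits * drag_bits_per_s / delay_s

# Multiunit tactical scene over a contested link (high complexity, high drag,
# long delay) versus a strategic scene over dedicated communications:
tactical = information_value(200.0, 2.0, 5.0)    # 80.0
strategic = information_value(50.0, 0.01, 0.1)   # roughly 5.0
```

The orders-of-magnitude gap, rather than the particular values, is the point: g(x) separates the mission/communications regimes enumerated in Table 51.3.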
Table 51.3 The appropriate conditions for using tele-operated, automatic, or autonomous UAV control are defined by the operational criteria and available communications

Dominant control     Dedicated         Uncontested      Contested        Over the
technique            communications    broadband        broadband        horizon
Strategic            Tele-operated     Tele-operated    Automatic        Automatic
Operational          Tele-operated     Tele-operated    Automatic        Autonomous
Focused tactical     Tele-operated     Tele-operated    Automatic        Autonomous
Multiunit tactical   Autonomous        Autonomous       Autonomous       Autonomous
defined earlier, operational scenarios that are appropriate for autonomous, automatic, and tele-operated UAV control are defined and enumerated in Table 51.3. As Table 51.3 indicates, there exist real-world circumstances in which UAV C2 should be tele-operated, automatic, or autonomous. Not surprisingly, those cases in which autonomous C2 is dominant are complex, dynamic situations, exactly the sort of situation faced by Guderian and Rommel. So, having identified that complex, constantly changing scenarios are best supported by autonomous UAVs, how might we develop such a system?
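Table 51.3 amounts to a lookup from mission class and communications availability to a dominant control technique. A direct transcription (the dictionary layout and key strings are our own choices):

```python
# Dominant control technique by mission class and communications availability,
# transcribed from Table 51.3.
DOMINANT_CONTROL = {
    "strategic":          {"dedicated": "tele-operated", "uncontested": "tele-operated",
                           "contested": "automatic",     "over-the-horizon": "automatic"},
    "operational":        {"dedicated": "tele-operated", "uncontested": "tele-operated",
                           "contested": "automatic",     "over-the-horizon": "autonomous"},
    "focused tactical":   {"dedicated": "tele-operated", "uncontested": "tele-operated",
                           "contested": "automatic",     "over-the-horizon": "autonomous"},
    "multiunit tactical": {"dedicated": "autonomous",    "uncontested": "autonomous",
                           "contested": "autonomous",    "over-the-horizon": "autonomous"},
}

def dominant_control(mission: str, comms: str) -> str:
    """Return the table's recommended control technique for a mission/comms pair."""
    return DOMINANT_CONTROL[mission][comms]

print(dominant_control("multiunit tactical", "dedicated"))  # autonomous
```

Encoding the table this way makes the pattern visible: multiunit tactical missions call for autonomy regardless of communications quality, while the other rows degrade from tele-operation toward autonomy as communications degrade.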
Fig. 51.4 OPISR's concept of operation allows UAVs of various sizes to communicate with each other, with users, and with commanders through an ad hoc, asynchronous cloud. The cloud communicates goals from users to vehicles and sensor observations from vehicles to users
that matches these queries is sent by the system to the handheld device. The handheld interface provides a map of the surrounding area that displays real-time tracks, detections, and imagery metadata. The imagery metadata describes, at a glance, the imagery available from the surrounding area. OPISR-enabled vehicles are autonomous; if the information required by the war fighter is not available at the time the query is made, OPISR unmanned vehicles autonomously relocate so that their sensors can obtain the required information. OPISR-enabled unmanned vehicles support multiple war fighters simultaneously, with vehicles self-organizing to define joint courses of action that satisfy the information requirements of all war fighters.
Because war fighters are required to operate in harsh, failure-prone conditions, OPISR was designed to be extremely robust and fault tolerant. OPISR's designers viewed communications opportunistically, designing the system to take advantage of communications channels when available but making sure to avoid any dependency on continual high quality of service communications. Accordingly, all OPISR devices are capable of operating independently, as standalone systems or as ad hoc coalitions of devices. When an OPISR device is capable of communicating with other devices, it will exchange information through networked communications and thereby improve the effectiveness of the system as a whole. However, if communications are unavailable, each device will continue to perform previously identified tasks. When multiple devices are operating in the same area, they will self-organize to efficiently perform whatever tasks war fighters have requested.

Fig. 51.5 OPISR's hardware architecture is based on a modular payload that can be fitted onto different types of unmanned vehicles
being serviced. While OPISR does not require traditional control and payload communications, OPISR devices do support these legacy capabilities. Because the OPISR nodes communicate over a separate channel, OPISR functionality may be provided in tandem with traditional control. This is in keeping with the OPISR dictum that OPISR is an entirely additive capability; unmanned vehicle owners lose no functionality by adding OPISR. Moreover, OPISR vehicles are responsive to commands from human operators and will, at any time, allow an authorized human operator to override OPISR processor decisions. Likewise, legacy consumers of information will still receive their analog data streams. Note that even when the OPISR processor is denied control by the UAV pilot, the OPISR system will continue to share information directly with edge war fighters as appropriate.
Fig. 51.6 Each OPISR node contains four major software components that manage information flow and decision-making
accomplished by storing the belief with the more recent time stamp. For example, one belief might posit that there is a target at grid [x, y] at time t0, and a second belief might posit that there is no target at grid [x, y] at time t1. More sophisticated conflict resolution algorithms are scheduled to be integrated into OPISR in 2012. The second TMS function is the efficient storage of information within the blackboard. When performing this task, the TMS caches the most relevant, timely information for rapid access, and when long-lived systems generate more data than can be managed within the system, the TMS removes less important information from the blackboard. For caching and removal, the importance of information is defined by age, proximity, uniqueness, and operational relevance.
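The timestamp rule for belief conflicts can be sketched in a few lines. This is not the OPISR TMS implementation; the class and field names are hypothetical, and the sketch covers only the "more recent time stamp wins" rule described above:

```python
from dataclasses import dataclass

@dataclass
class Belief:
    key: tuple       # e.g., ("target-at", x, y)
    value: object    # e.g., True / False
    timestamp: float

class TruthMaintenance:
    """Minimal sketch: conflicting beliefs resolved by keeping the newer one."""

    def __init__(self):
        self.blackboard = {}

    def assert_belief(self, belief: Belief) -> None:
        held = self.blackboard.get(belief.key)
        if held is None or belief.timestamp > held.timestamp:
            self.blackboard[belief.key] = belief  # newer report replaces older

tms = TruthMaintenance()
tms.assert_belief(Belief(("target-at", 3, 7), True, timestamp=100.0))   # t0
tms.assert_belief(Belief(("target-at", 3, 7), False, timestamp=140.0))  # t1 > t0
print(tms.blackboard[("target-at", 3, 7)].value)  # False (newer report wins)
```

A production TMS would add the caching and eviction policies (age, proximity, uniqueness, operational relevance) described in the text; the sketch deliberately omits them.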
Coordination between agents is asynchronous, unscheduled, and completely decentralized, as it must be: any centralized arbiter or scheduled communications would introduce dependencies that reduce the robustness and fault tolerance that are paramount in the OPISR design. Because agent communication is asynchronous and unscheduled, there is no guarantee that any two agents will have matching beliefs at any instant of time. Fortunately, the control algorithms used by OPISR are robust to belief inconsistencies. Cross-agent truth maintenance is designed to the same criteria as agent-agent communications: information exchanges between agents seek to maximize the consistency of the most important information but do not require absolute consistency between agent belief systems. Information exchange between agents is performed by the agent communications manager. When communications are established between agents, the respective agent communications managers (ACMs) facilitate an exchange of information between their respective blackboards. When limited bandwidth and/or brief exchanges limit the amount of information exchanged between agents, each ACM uses an interface control component to prioritize the information to be transmitted. Information is transmitted in priority order, with priority being determined by information class (beliefs being the most important, followed by metadata), goal association (e.g., if a war fighter has requested specific information, that information is given priority), timeliness, and uniqueness.
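The four-level priority ordering above maps naturally onto a lexicographic sort key. A sketch of how an ACM might order its outbound queue; the record fields and example items are our assumptions, not OPISR's interface:

```python
# Priority order from the text: information class first (beliefs before
# metadata), then goal association, then timeliness, then uniqueness.
CLASS_RANK = {"belief": 0, "metadata": 1}

def transmit_order(items):
    """Sort outbound items so the most important are transmitted first."""
    return sorted(
        items,
        key=lambda it: (
            CLASS_RANK[it["class"]],             # beliefs before metadata
            0 if it["goal_associated"] else 1,   # requested info first
            -it["timestamp"],                    # newer (timelier) first
            0 if it["unique"] else 1,            # unique info first
        ),
    )

queue = [
    {"id": "img-12", "class": "metadata", "goal_associated": False, "timestamp": 9.0, "unique": True},
    {"id": "trk-3",  "class": "belief",   "goal_associated": True,  "timestamp": 8.0, "unique": True},
    {"id": "trk-1",  "class": "belief",   "goal_associated": False, "timestamp": 9.5, "unique": False},
]
print([it["id"] for it in transmit_order(queue)])  # ['trk-3', 'trk-1', 'img-12']
```

When a link drops mid-exchange, whatever was sent first under this ordering is exactly the information the design deems most important, which is the point of prioritized transmission.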
approaches: namely, the tendency of vehicles to become stuck in local minima and the propensity to exhibit undesired oscillatory behavior. As implemented in OPISR, DCF is used to effect specific behaviors such as search, transit, or track, as well as behavioral selection. The DCF algorithm is encoded in the cSwarm software module. All unmanned vehicles in OPISR execute cSwarm. DCF behaviors specific to unique classes of vehicle are produced by tailoring the field formula, which is stored in a database within cSwarm. OPISR autonomous unmanned vehicles exhibit a variety of behaviors, including:
Searching contiguous areas defined by war fighters.
Searching linear networks such as roads.
Transiting to a waypoint.
Blue-force over-watch.
Target tracking.
Perimeter patrol.
• Information exchange infrastructure, in which unmanned vehicles maneuver to
form a network connection between an information source, such as an unattended
sensor, and war fighters that require information on the source. Note that the
war fighter is not required to specify this behavior; the war fighter need only
specify the information need, and the vehicle(s) utilizes this behavior as a means
to satisfy the need.
• Active diagnosis, in which vehicles reduce uncertain or incomplete observations
through their organic sensing capabilities. For example, a UAV with a sensor
capable of classifying targets will automatically move to and classify
unclassified targets being tracked by a cooperating radar.
In addition to the mission-level behaviors enumerated above, OPISR vehicles
exhibit certain attributes within all behaviors. These universal attributes are:
• Avoiding obstacles or user-defined out-of-bounds areas.
• Responding to direct human commands. OPISR unmanned vehicles are designed
to function autonomously in response to mission-level objectives; however, when
operators provide explicit flight instructions, OPISR vehicles always respond to
the human commands in preference to the autonomous commands.
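The field-based maneuvering and obstacle avoidance described above can be illustrated with a plain artificial-potential-field controller. The sketch below is a generic illustration with assumed gains and field shapes, not the actual cSwarm field formulas; it also reproduces the classic local-minimum stall that field-based methods such as DCF are designed to mitigate.

```python
import math

# Illustrative gains and field shapes (assumptions, not OPISR's DCF formulas).
K_ATT = 1.0    # attraction gain toward the goal
K_REP = 50.0   # repulsion gain away from obstacles
R_INF = 5.0    # obstacle influence radius

def step(pos, goal, obstacles, gain=0.05):
    """Take one gradient-descent step on the combined potential field."""
    # Attractive gradient: pulls the vehicle toward the goal.
    gx = K_ATT * (pos[0] - goal[0])
    gy = K_ATT * (pos[1] - goal[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0.0 < d < R_INF:
            # Repulsive gradient: grows sharply as the vehicle nears the obstacle.
            f = K_REP * (1.0 / d - 1.0 / R_INF) / d**3
            gx -= f * dx
            gy -= f * dy
    # Descend the combined field.
    return (pos[0] - gain * gx, pos[1] - gain * gy)
```

With no obstacles the vehicle converges smoothly on the goal; with an obstacle directly on the vehicle-goal line, the repulsive gradient overwhelms the attractive one and the vehicle is pushed backward rather than around, illustrating the local-minimum problem noted earlier.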
51.2.3.5 Experimentation
The current OPISR system is the culmination of a decade-long exploration in
autonomous unmanned vehicles. Experimentation with DCF began in 2002 as part
of an effort to investigate agent-based control of unmanned vehicles to support the
U.S. Army's Future Combat System. These early efforts focused predominantly on
cooperative search, the results of which are described by Chalmers (Scheidt et al.
2004). Since 2002, thousands of simulated engagements have been conducted with
DCF. These simulations have shown that DCF's computational load is independent
of the number of vehicles cooperating to solve a mission; 200-vehicle real-time
simulations have been run on a single Pentium-class processor. Simulations
of a variety of ISR missions have repeatedly shown that vehicle behavior is robust
to perturbations in the number of vehicles or the lay-down of those vehicles.
Fig. 51.8 OPISR vehicles from the 2011 Webster field demonstration including one (of four)
ScanEagles, two custom surface vehicles, one (of six) Procerus Unicorns, and an OceanServer
Iver2 undersea vehicle
Field experiments have demonstrated swarm communications (Bamberger et al. 2004) and simultaneous support for multiple end users (Stipes
et al. 2007). As successful as these experiments were, prior to 2011 the full
suite of OPISR capabilities described in this chapter had not been demonstrated
on a large, disparate set of vehicles. In September 2011 a multi-vehicle demonstration
consisting of 16 OPISR-enabled nodes, including 9 UAVs in support of 3 users,
was conducted at Webster Air Field in St. Inigoes, MD, and the surrounding
Chesapeake Bay.
Chesapeake Bay. The 2011 demonstration mixed air, ground, and sea ISR needs
with surveillance being conducted under the water, on the water, and on and over
land. The autonomous unmanned vehicles included four Boeing ScanEagles, six
Procerus Unicorns, a Segway RMP ground vehicle, two custom surface vehicles, and
an OceanServer Iver2 undersea vehicle. The air, surface, and undersea vehicles
are shown in Fig. 51.8. These vehicles used a wide range of payload sensors,
including EO, IR, radar, AIS, passive acoustic, side-scan sonar, and LIDAR, to
detect, classify, and track waterborne vehicles, land vehicles, dismounts, and
mine-like objects. ISR tasking was generated by three proxy operators, two of which were
on land (one mounted and one dismounted) and one of which was on the water.
The requested ISR tasks required the use of all of the vehicle behaviors previously
described.
The capabilities demonstrated by the OPISR UAVs at St. Inigoes included a
set of six enabling autonomous sub-mission capabilities:
1. Area search: When a user requests that one or more contiguous areas be
searched, the autonomous vehicles respond by searching those areas.
2. Road network search: When a user requests that one or more roads be
searched, the autonomous vehicles respond by searching those roads. This is
functionally identical to performing an area search over an area that is confined
to (or focused on) the roads of interest.
3. Overwatch: A convoy protection mode in which, when a user requests overwatch
protection, one or more vehicles circle the protected user. This is functionally
identical to tracking a moving target, except that the UAV sensors are directed
at the region surrounding the convoy rather than directly at a target.
4. Communication relay: When an area to be searched (see behavior #1) is farther
from the interested user than the vehicle-user communications range,
one or more vehicles autonomously form a communications chain to relay data
from the searched area to the user.
5. Obstacle avoidance: Obstacles known to a vehicle are avoided. Note
that obstacles may include physical obstacles detected by vehicle sensors (e.g.,
trees) and/or abstract obstacles provided by users, such as no-fly zones.
6. Behavior switching: Vehicles are capable of exhibiting multiple behaviors
depending upon current user goals and circumstances.
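The communication-relay behavior can be sketched geometrically. The snippet below computes the minimum number of relays, and evenly spaced positions for them, along a straight line between a data source and a user. The straight-line placement is an illustrative assumption; OPISR vehicles arrive at such chains emergently through their field-based behaviors rather than through explicit geometry like this.

```python
import math

def relay_positions(source, user, radio_range):
    """Place the minimum number of relay vehicles on the line between a
    data source and a user so that every hop is within radio range."""
    dist = math.hypot(user[0] - source[0], user[1] - source[1])
    hops = max(1, math.ceil(dist / radio_range))  # hops = relays + 1
    # Evenly space the relays at fractions i/hops along the source-user line.
    return [(source[0] + (user[0] - source[0]) * i / hops,
             source[1] + (user[1] - source[1]) * i / hops)
            for i in range(1, hops)]
```

When the user is already within radio range of the source, the function returns an empty list and no relay vehicles are needed.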
One highlight of the experiment was indirect access, via autonomous UAV
communications chains, to time-sensitive data from a remote camera sensor located
well outside the direct communication range of the C2 ground stations and their
radios. This communications link was formed fully autonomously, to the extent that
no specific command was given to the ScanEagle UAV to form a communications
chain. The ScanEagle was tasked to patrol the region, and as it became aware of
sensor data, it relayed that data to the user, who immediately had the sensor data
available on his display.
Another key OPISR feature demonstrated at St. Inigoes was the ability of an OPISR
UAV to coordinate on tasks that it cannot satisfy without recruiting other vehicle
types. Importantly, the experiment showed not only that one ground station could
command multiple heterogeneous vehicle platforms but also that commands from
multiple users at multiple ground stations could set the goals of any and all OPISR
components.
During experimentation, sensor and mission information was delivered to both
C2 user stations, both directly and via UAV communications chaining. This delivery
occurred both automatically and in response to specific user requests. All relevant
mission data was made available and reported to the C2 user, including the status
of assigned search missions, represented as fog of war over the map geo-display;
detections from the various UGS, the two USVs, and the UUV; and camera imagery
from the various platforms. Within the available flight time windows, over 65,000
images were collected by the various camera sensors, including onboard UAV
payload sensors, and relayed to the C2 stations, where they were available to any
notional war fighter user whose ground node was connected to cognizant portions
of the mesh network.
51.3
Conclusion
References
References
R.C. Arkin, Governing Lethal Behavior in Autonomous Robots (CRC, Boca Raton, 2009)
R. Bamberger, R.C. Hawthorne, O. Farrag, A communications architecture for a swarm of
small unmanned, autonomous air vehicles, in AUVSI's Unmanned Systems North America
Symposium, Anaheim, 3 Aug 2004
B. Bethke, M. Valenti, J. How, Experimental demonstration of UAV task assignment with
integrated health monitoring. IEEE Robot. Autom. Mag., Mar 2008
Defense Systems Staff, Army readies on-demand imagery tool for battlefield use. Defense
Systems, 1 June 2012
R.C. Hawthorne, T. Neighoff, D. Patrone, D. Scheidt, Dynamic world modeling in a swarm of
heterogeneous autonomous vehicles, in AUVSI Unmanned Systems North America, Aug 2004
A.N. Kolmogorov, On tables of random numbers. Theor. Comput. Sci. 207(2), 387-395 (1998)
V. Kumar, N. Michael, Opportunities and challenges with autonomous micro aerial vehicles. Int.
J. Robot. Res. 31(11), 1279-1291 (2012)
H. Kwon, D. Pack, Cooperative target localization by multiple unmanned aircraft systems using
sensor fusion quality. Optim. Lett. Spl. Issue Dyn. Inf. Syst. (Springer-Verlag) (2011)
M. Mamei, F. Zambonelli, L. Leonardi, Co-fields: a unifying approach to swarm intelligence, in
3rd International Workshop on Engineering Societies in the Agents World, Madrid (E), LNAI,
Sept 2002
H. Nii, Blackboard systems. AI Mag. 7(2), 38-53; 7(3), 82-106 (1986)
V.D. Parunak, Go to the ant: engineering principles from natural multi-agent systems. Ann. Oper.
Res. 76, 69-101 (1997)
D. Scheidt, M. Pekala, The impact of entropic drag on command and control, in Proceedings of
12th International Command and Control Research and Technology Symposium (ICCRTS),
Newport, 19-21 June 2007
D. Scheidt, K. Schultz, On optimizing command and control, in International Command and
Control Research Technology Symposium, Quebec City, June 2011
D. Scheidt, T. Neighoff, R. Bamberger, R. Chalmers, Cooperating unmanned vehicles, in AIAA 3rd
Unmanned Unlimited Technical Conference, Chicago, 20 Sept 2004