

Embodied Conversational Agent Avatars in Virtual Worlds: Making Today's Immersive Environments More Responsive to Participants

Jacquelyn Ford Morie, PhD, Eric Chance, Kip Haynes, Dinesh Rajpurohit
University of Southern California / Institute for Creative Technologies

Abstract:
Intelligent agents in the form of avatars in networked virtual worlds (VWs) are a new form of embodied conversational agent (ECA). They are still a topic of active research, but promise soon to rival the sophistication of virtual human agents developed on stand-alone platforms over the last decade. Such agents in today's VWs grew out of two lines of historical research: Virtual Reality and Artificial Intelligence. Their merger forms the basis for today's persistent 3D worlds occupied by intelligent characters serving a wide range of purposes. We believe ECA avatars will help to enable VWs to achieve a higher level of meaningful interaction by providing increased engagement and responsiveness within environments where people will interact with and even develop relationships with them.


1.0 Introduction

Embodied conversational agents (ECAs) are intelligent programs delivered through an embodied persona, typically a two- or three-dimensional graphic entity that can converse via text or speech with a human interactor, either on the web or via standalone code running on a computer (Cassell 2001). Among Artificially Intelligent (AI)-driven entities, ECAs not only have conversational interactions with humans, they are often able to remember information about those interactions from session to session (Bickmore and Cassell 2005). They incorporate techniques from Natural Language Processing and include a corpus of knowledge that can cover a single topic or span multiple ones (Churcher et al. 1997; Bos and Oka 2003; Bindiganavale et al. 2000). Some ECAs, often called Virtual Humans (VH), can include specialized reasoning and even emotions (Martin et al. 2005; Egges et al. 2004). The newest form of ECAs are those deployed within persistent virtual worlds, where they are embodied as 3D avatars, the same representations used by human participants. Since these virtual world AI entities share the same constraints and stylistic designs of avatar embodiment and interaction as the users of the virtual world, we have elected to call them embodied conversational agent avatars (ECAAs). Unlike their more well-known ECA counterparts, ECAAs must be able to adapt to an environment that changes over time, and recognize a wide range of users embodied by different forms of avatars. They must remain persistent in a world that is theoretically never turned off, yet be able to restart automatically should that world reboot for some reason.

Virtual human research has progressed rapidly over the last 15 years (see Rickel et al. 2002; Gratch et al. 2002; Swartout 2006; Becker-Asano and Wachsmuth 2010; Swartout 2010). Yet embodied conversational agent avatars are still in their infancy, with the first ones being implemented only around four years ago (Hill 2008). As the articles in this book show, making better and more functional bots is a prominent research topic. This chapter focuses primarily on intelligent agent avatars in virtual worlds, and especially the work being done at the University of Southern California's Institute for Creative Technologies (ICT), which has a large research effort in both VHs and ECAAs. ECAAs can be considered sophisticated conversational bots that look like other inhabitants of the world and interact meaningfully with humans within that space. Because of this, they can contribute to more robust scenarios in virtual worlds, covering a wide range of topics, from training to health.

Today's agent avatars in virtual worlds are the result of a merger of 3D virtual reality environments with interactive artificially intelligent agents. These two technologies started as separate lines of research, and in the last decade have come together to mutual advantage.
Virtual reality (VR) technology provides digitally fabricated spaces where we can experience that which we cannot in real life, whether we are barred from it through distance, temporal displacement or its imaginary nature. VR relies on building, through the use of computer graphics and specialized viewing systems, complete environments that our disembodied selves can traverse, as if we were really, truly there (McLellan 1996). The task of early VR researchers was to find ways to convince humans of the believability of these digital spaces built only with computer graphics tools, and no physical materials. Much of the research focused on how to bring the viewer inside that intangible world. Researchers designed displays that shut out signals from actual physical reality and replaced these with malleable and controllable computer graphics (Ellis 1994). Zeros and ones became digital sirens that fooled our minds by providing experiences that stimulated our neural circuits in ways very similar to actual reality. Unlike actual reality, however, it was a boundless, expandable frontier, limited only by the creator's imagination.
The other area of research was Artificial Intelligence (AI), which focused on making machines truly intelligent. Rather than creating spaces we could inhabit, the early AI community sought to capture the internal mechanisms of human thinking processes within the confines of a computer system (Nilsson 1995). The overarching concept here was to understand how the brain worked, and to then make a machine appear smart in a way that mimicked basic human intelligence. One trajectory of this research was to develop programs that could exhibit intelligence and interact with humans in a conversational manner. Early efforts concentrated on text-based queries and responses, with a human asking a question and the machine answering as if it were a perceptive, thinking entity. Weizenbaum's early program, Eliza, very nearly did the trick: more than one person was convinced his interactions were with a real person rather than a computer program (Weizenbaum 1966). But it was a thin disguise, and these early so-called "chat bots" began to evolve into more sophisticated systems through dedicated hard work and ongoing advances.
Each of these technologies, VR and AI, struck a chord with the general public, which began to believe that computers could do almost anything conceivable. Escape the real world? Download your brain into a computer? No problem! This led to unrealistic expectations, and researchers simply could not keep pace with the hype generated by public excitement fed by the overactive imaginations of the press, science fiction books and even film. A period of disillusionment set in (Mims 2010). However, researchers entering the second decade of the 21st century have moved beyond these issues and are forging ahead on the paths early visionaries trod only in their dreams.
Going beyond Eliza's model of a disembodied conversation with a computer program masquerading as a Rogerian psychotherapist, a key group of people realized that conversing with a visible character would enhance the interaction between human and machine (Catrambone et al. 2005). In the 1990s, these intelligent virtual characters began to take on graphical forms that could visually depict human-like actions and responses during conversational interactions. Unlike the more sophisticated depictions of computer-generated humans that were part of movies (for example, the film Final Fantasy in 2001), where each frame of a character's existence took hours to create, these AI virtual humans had to run in real time to support their interactive nature. This task was difficult given the capabilities of the computers of that time. Therefore, real time depictions were necessarily less about realism and more about behavioral believability.
Computer games were also quickly advancing during this time, and game makers adopted techniques from many domains, including VR and AI. As a real time interactive medium driven by a new generation of demanding players, these games pushed the envelope of realtime graphics while also incorporating some basic forms of intelligence into their systems. Most of these AI resources were allocated to the behaviors of non-player characters, including rudimentary player interaction and simple pathing algorithms (Stout 1996). However, a few AI characters had "starring" roles. The Sims, for example, while not a goal-driven game, stands out as a prime example of characters acting with complex human-like behaviors via scripted rule-based AI, decision trees and neural networks (Laird 2001). These characters were given basic intelligence, beliefs and goals commensurate with the needs of the game system. People developed strong associations with these virtual "humans" (Brooks 2002).
Another example of a game AI in a "starring" role is "The Creature" from the 2001 strategy game Black and White (developed by Lionhead Studios). It was widely acknowledged as the best example of character game AI for its time. With intelligence designed by Richard Evans, The Creature was programmed to do your godly bidding as your representative on a planet. The Creature learned from the way you interacted with, rewarded and punished it. It used an AI architecture called BDI (belief-desire-intention), which provided much of the same functionality used in The Sims. Black and White became an immensely popular game, due in part to its sophisticated character AI (Champandard 2007).
By the first decade of the 21st century, VR was supplanted by games in the popular imagination because these were much easier to access and did not need any specialized display equipment. Some games placed multiple players in connected, persistent, and easily accessible graphical worlds, such as World of Warcraft and Final Fantasy, and were therefore known as networked games. Parallel to these, open-ended virtual worlds emerged, which depended more on social interactions than quests or game mechanics. Though the first implementations happened much earlier,1 virtual worlds have become popular recently, with hundreds of existing worlds and millions of users (see current statistics from the marketing firm KZero Worldswide at http://www.kzero.co.uk/universe.php). Unlike VR environments, which are typically set up in a laboratory or clinician's office where one must be physically present, these virtual spaces need no special equipment to experience, and can be accessed from anywhere, on any standard computer connected to the Internet. These worlds are always running, and therefore they grow, change, and evolve as people use them, with the participants themselves often customizing the world for their own purposes. Placing ECAAs within them involves special challenges such as keeping them running in a persistent world and giving them functionality to interact with a wide and constantly changing variety of users.

1 The first graphical virtual world is widely acknowledged to be Lucasfilm's 1986 Habitat, developed by Chip Morningstar and Randall Farmer. It had all the functionality of today's VWs, with the one limitation that it used only 2D graphics (Morningstar and Farmer 1991).

2.0 Advancing Intelligent Virtual Human Research

2.1 Chatterbots
As noted previously, the earliest virtual agents were non-graphical conversational characters comprising computer programs that could typically "understand" text typed by a human. The program achieved this by matching key words to a database of responses. Such interactions were often limited in scope and lacked the richness of normal in-person communication. The common term given these autonomous software programs was chatterbots, chat bots or "bots" for short (Mauldin 1994).
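The keyword-to-response lookup these chatterbots performed can be sketched in a few lines. The fragment below is a minimal illustration in Python, with an invented keyword table; historical systems such as Eliza used more elaborate decomposition and reassembly rules.

```python
# A minimal keyword-matching chatterbot in the spirit of the early
# systems described above. The keyword table is invented for
# illustration; it is not drawn from any historical program.

RESPONSES = {
    "mother": "Tell me more about your family.",
    "computer": "Do machines worry you?",
    "hello": "Hello. What would you like to talk about?",
}
DEFAULT = "Please go on."

def reply(user_text: str) -> str:
    """Return the canned response for the first keyword found."""
    lowered = user_text.lower()
    for keyword, response in RESPONSES.items():
        if keyword in lowered:
            return response
    return DEFAULT  # no keyword matched; deflect and keep chatting

if __name__ == "__main__":
    print(reply("My mother bought me a computer."))  # family response
```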
2.2 Embodied Conversational Agents
The next advancement was to depict these interactive agents in some visual form. These "embodied conversational agents," or ECAs, did more than give the agent the appearance of a person: they set the stage for understanding that more complex requirements were needed to support the believability of an embodied agent. For example, a personality of sorts could be achieved by writing clever interactive dialog, but the visual depictions also needed to reflect this in some way. Each stage of development revealed a new understanding of the cues we take for granted in face-to-face communication. Much more work was needed in combining AI and character depictions to make a visual agent appear convincing.
Researcher Justine Cassell describes ECAs as "multimodal interfaces" that implement as many of the usual elements humans use for communication as possible. These can include speech, gestures and bodily movements, facial animation, and more complex aspects such as subtle body language and responses with embedded emotional overtones (Cassell and Vilhjálmsson 1999).
Research in ECAs started in earnest in the late 1990s. Several investigators and institutions took the lead in advancing the state of the art. In addition to Dr. Cassell and her colleagues (then at MIT), advanced work was being done by Joe Bates's team at Carnegie Mellon University, and at the Information Sciences Institute (ISI), part of the University of Southern California. The work at ISI brought to life the virtual character Steve, a pedagogical agent who could interactively teach you about the operation of a ship's control panel (see Figure 1).

Steve was aware of whether or not you were paying attention to his training, and would urge you back on task via a synthesized voice. Steve was one of the early forms of a pedagogical agent that actually possessed a 3D animated body (albeit without legs!), and this opened up new avenues of engagement with pupils using virtual training environments (Gratch et al. 2002).

Figure 1: Steve in the machine room of the ship.

2.3 Modeling Human Communication


The global research question was this: How many of the aspects of face-to-face human interaction can be simulated via ECAs, and what disciplines might inform the implementation of these affordances? Dr. Cassell advocated the study of human communication as a preliminary, necessary step to creating believable intelligent embodied agents. She identified the following basic functional requirements:

• The ability to recognize and respond to verbal and non-verbal input.
• The ability to generate verbal and non-verbal output.
• The ability to deal with conversational functions such as turn taking, feedback, and repair mechanisms.
• The ability to give signals that indicate the state of the conversation, as well as to contribute new propositions to the discourse (Cassell 2000).

In the 2001 AAAI Symposium, ISI researcher Jeff Rickel described the development of autonomous agents that can take on a variety of human roles as "a daunting task" that demanded integration of several core technologies, some in place and some requiring development. All, however, were focused on creating a common representation and exposition of task knowledge (Rickel 2001).
In 2000, several members of the virtual human research group at ISI moved to the newly formed USC Institute for Creative Technologies (ICT), which was tasked with building better and more immersive simulations for military training. A key aspect of this effort, the group realized, was to populate the 3D training environments with believable characters playing various roles to enhance their immersive and compelling aspects. This focus expanded upon the Steve project already developed at ISI, and enabled new collaborative areas of investigation, taking the research into increasingly advanced levels of sophistication.
2.4 A Multidisciplinary Approach
From its inception the ICT built virtual humans as a multidisciplinary endeavor, and therefore the characters not only began to interact more naturally with their real world counterparts, they did so within more complex virtual environments. According to the head of the virtual human research team, Dr. William Swartout, the goal was to create "virtual humans that support rich interactions with people [to] pave the way toward a new generation of interactive systems for entertainment and experiential learning" (Gratch et al. 2002: 37). This required integrating several core technologies as building blocks that worked together to form a more multifaceted complex agent. These were no mere chat bots: the research centered on creating agents that were visual, gestural, aware of their surroundings, and even able to exhibit emotions that could be changed on the fly. Over a decade of research and development at the ICT resulted in smart pedagogical agents that could help people develop skills in task analysis, negotiation, decision-making, and complex leadership. They were given sophisticated appraisal systems and embedded goals. They featured complex emotional modeling and could get angry, or refuse to go along with your plans. They integrated task representation and reasoning along with natural language dialogue. They appeared as convincing visual representations, with realistic behaviors including subtle movements like body language, idle behaviors, facial expressions, gaze models and the like. These virtual humans knew about their environment and the other persons within it (both real and virtual). They were programmed with the basic rules of face-to-face spoken dialogue. They could show nonverbal behaviors that people exhibit when they have established rapport. They could understand both text and spoken word, and even deal with off-topic remarks. In short, these virtual intelligent agents combined a broader range of capabilities than any other work being done at that time (Hill et al. 2003; Gratch et al. 2002; Swartout et al. 2006).
ICT's virtual human architecture includes a number of components, listed below, that support the agents' intelligent behaviors (Kenny et al. 2007). The simplest question-answer agents use the first three components; more complex agents can use all the components listed. A schematic sketch of how these components chain together follows the list.

• Speech recognition: parses the trainee's speech and produces a string of text as output.
• Natural Language Understanding: analyzes the word string produced by speech recognition and forms an internal semantic representation.
• Non-verbal behavior generation: takes the response output string and applies a set of rules to select gestures, postures and gazes for the virtual characters.
• Intelligent Agent: reasons about plans and generates actions. Simple Q&A agents use the NPC Editor, whereas complex agents are created using other cognitive architectures. The agents contain task models, a dialogue manager and a model of emotions.
• SmartBody: an in-house animation control system that uses Behavioral Markup Language to perform advanced control over the characters in the virtual environment by synchronizing speech output with gestures and other non-verbal behavior (Thiebaux et al. 2008).
• Real time graphics: a range of commercially available game engines are used for building the custom environments and as the foundation for real time rendering. As of this writing, Gamebryo and Unity are the most widely used engines at the ICT.
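Everything in the sketch below is a stand-in: the function names and data shapes are invented for illustration and do not reflect ICT's actual implementation; it only shows how the listed stages feed one another.

```python
from dataclasses import dataclass, field

# Schematic stand-in for the component pipeline listed above. All names
# and data shapes are invented; they do not mirror the ICT codebase.

@dataclass
class Turn:
    text: str = ""                # speech recognition output
    meaning: str = ""             # NLU semantic representation
    response: str = ""            # intelligent agent's chosen reply
    gestures: list = field(default_factory=list)  # non-verbal plan

def speech_recognition(audio: bytes) -> Turn:
    # A real recognizer decodes audio; here we pretend it yields text.
    return Turn(text="where is the control panel")

def natural_language_understanding(t: Turn) -> Turn:
    t.meaning = "request-location(control-panel)"  # toy semantic frame
    return t

def intelligent_agent(t: Turn) -> Turn:
    # A simple Q&A agent maps meaning to a reply; complex agents would
    # reason over task models, dialogue state and emotion here.
    t.response = "The control panel is on the upper deck."
    return t

def nonverbal_behavior_generation(t: Turn) -> Turn:
    # Rule-based selection of gestures to accompany the reply.
    t.gestures = ["point-up", "gaze-at-user"]
    return t

def render(t: Turn) -> None:
    # Speech/gesture synchronization and rendering would happen here;
    # we simply print the plan.
    print(t.response, t.gestures)

render(nonverbal_behavior_generation(intelligent_agent(
    natural_language_understanding(speech_recognition(b"")))))
```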

Figures 2 and 3: Typical ICT Virtual Human agents from 2007

ICT's newest virtual human project, SimCoach, provides three versions of a web-based intelligent coach to encourage people in need of help to take a first, anonymous step towards finding that help. It will assist veterans in locating medical advice from a variety of online and staffed resources. Three coach figures have been designed to appeal to military personnel: a retired Vietnam veteran (Figure 4), a female aviator and an active duty soldier. Special attention is being given to the facial expressions and non-verbal body movements to provide a high level of behavioral believability. The SimCoach can answer typed queries in real time, while the movements are procedurally triggered via SmartBody markups to the dialog. As the first ICT virtual human to be deployed via the Internet, SimCoach takes interactive ECAs to the next level (ICT 2010a).

Figure 4: The SimCoach Character, Bill Ford, from 2010

3.0 Convergence: Virtual Agent Avatars in Real-Time Virtual Worlds

As discussed above, the most advanced virtual human or intelligent agent applications have been achieved in custom-built environments designed for a specific purpose, providing a bounded framework to contain the extent of knowledge and interaction the character must provide. This is true even for the SimCoach. However, when the environment is more open, the domain more permeable, or the world in which the agent exists subject to ongoing change, it becomes more difficult to create intelligent characters that believably populate those spaces. This is the challenge faced when operating ECAAs within virtual worlds.
3.1 Challenges of Virtual Worlds
Networked, persistent VWs are not perfect. A person can enter a virtual space one time and find a group of interesting people interacting, chatting and doing things together. Another time the space might be devoid of people, leaving one to wonder what to do. Some spaces themselves can be confusing, with few clues regarding the purpose of their construction. Having to self-discover what to do in a space can lead to frustration. The task of discerning how to interact with the virtual world's affordances, control one's avatar, and navigate the space is often overwhelming to a first time user. The luckiest initiates learn from their friends, who take them "in hand" and see that they are mentored through all the basics. The unlucky ones may join the ranks of a rather large drop-out statistic. In 2007 Linden Lab, the company responsible for Second Life (SL), one of the most popular VWs today, reported that their drop-out rate for new users, those who logged in once but never again, was a rather shocking 90% (Nino 2007).
3.2 ECAAs as Solutions
Adding ECAAs to virtual worlds seems like one obvious solution to these issues because they have the same embodiment and interaction potential as real users. Such agents can serve as helpers, information resources, orientation aids and virtual tour guides. In addition, ECAAs may be employed indefinitely to maintain spaces created for a specific purpose, whereas employing live support staff for the same task may be untenable. This approach makes a great deal of sense given the world-wide base of VW users and the expansive nature of their spaces.
ECAAs can serve educational purposes as well. In fact, any of the purposes for which virtual humans or intelligent agents have been created can be duplicated within the virtual world with embodied agent avatars. However, in 2007, worlds like Second Life made surprisingly little use of any form of agents, or their simpler cousins, chat bots. They were not part of the offerings of Linden Lab, the company that created Second Life, whose focus was on building a platform that encouraged user-generated content such as buildings, clothes, furniture and the like: merchandise that could be used primarily for commerce.
The first SL avatar-based bots were used to model clothing in virtual shops. Visitors were instructed not to talk to those models; they were just there to show how the clothes would look on a 3D figure. So they were useful, but not intelligent and certainly not conversational. Other practical uses for bots were to monitor visitors within a space, using the built-in aspects of an avatar to gather information and the like. Less sanctioned uses included using them to increase the visitor traffic count and make an area appear to be more popular than it actually was (Nino 2009).
3.3 Using Virtual Worlds as ECAA Research Platforms
The ICT recognized a great opportunity for expanding its expertise in creating virtual humans to the virtual world domain. VW space seemed like an ideal platform for importing some or all of ICT's virtual human technology. Not only would the virtual world provide the underlying graphics and physics engine for little or no cost (until now we had used game platforms such as Gamebryo and Unreal Tournament), avatar agents could be designed with much less overhead (no need to build the 3D characters or animate them), allowing more focus on their intelligence and conversational attributes. The virtual world, especially Second Life, also offered intrinsic benefits of greater availability, affordability (it is free to users), an in-world scripting language for data gathering and other peripherals, persistent usage, and not having to bring participants into a research lab for interaction. It also provided a rather large pool of virtual world residents who could potentially interact with agents we might deploy. Even though its demise has been rumored, SL continues to be a very stable platform with tens of thousands of users at any given time (Papp 2010).
With these thoughts in mind, in 2008 we set about adapting some of the technology behind ICT's virtual humans to create ECAAs within Second Life. This was made possible, in part, by leveraging an open source software library then known as libsecondlife (now called libOpenMetaverse).2 This software enables communication with the servers that control the virtual world of Second Life (SL).
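At a high level, such a library lets a program behave as a full client: log an avatar into the grid, receive chat events, and send chat back. The sketch below shows only that event-driven shape; the GridClient class and its methods are hypothetical stand-ins invented for this illustration, not the actual libOpenMetaverse API (which is a C# library).

```python
# Hypothetical stand-ins for a client-protocol library. These class and
# method names are invented for illustration and are NOT the real
# libOpenMetaverse API.

class GridClient:
    def __init__(self):
        self.chat_handlers = []

    def login(self, first: str, last: str, password: str) -> None:
        print(f"logged in as {first} {last}")  # would open the session

    def on_chat(self, handler) -> None:
        self.chat_handlers.append(handler)     # register a callback

    def say(self, text: str) -> None:
        print(f"bot says: {text}")             # would send local chat

def handle_chat(client: GridClient, speaker: str, message: str) -> None:
    # A real ECAA would hand the message to its classifier here.
    client.say(f"Hello {speaker}. You said: {message}")

client = GridClient()
client.login("Info", "Clarity", "secret")
client.on_chat(handle_chat)

# Simulate an incoming chat event arriving from the server:
for handler in client.chat_handlers:
    handler(client, "Visitor", "What is this island for?")
```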
We had already built a large Iraqi Village in SL for a previous project, but the village seemed quite empty and dull when no one was around. We chose this location for a proof-of-concept exercise and filled it with simple avatar-based bots acting as villagers, who went about their daily activities in a scripted fashion. For example, a mother and her son would shop at the various stores in the marketplace, conversing with several bot shopkeepers. An old man would walk through his village, have tea served to him by a waitress bot, and then go to the mosque to pray. Child bots played in the schoolyard, and the shopkeepers even visited with each other. These were not interactive or intelligent agent avatars, just avatars scripted to perform as actors, but they did give the village a liveliness that was not often found in a Second Life environment.

2 http://openmetaverse.org/

Figure 5. Village bots going about their daily business.

3.4 ECAAs for Instruction and Training


In late 2008 a new project at the ICT was funded to build a virtual healing center for U.S. military veterans in Second Life, on a space called Chicoma Island. This space, consisting of four "sims" (a.k.a. simulators, or standardized large regions of virtual space in SL), would offer in-world activities that could help reduce stress, build resilience, and serve as a supplement to standard medical care. One of the activities added to the virtual space was a labyrinth, as these are often associated with contemplation and relaxation. We realized that people might not know how to walk a labyrinth, so the first agent avatar we created was a simple scripted character that sat near its entrance, and could be summoned to provide very brief non-denominational instructions. This character was a small step up from the village bot actors, in that it did interact, but only when summoned.

Figure 6. The Labyrinth Guide on Chicoma Island.

We moved on to a more elaborate implementation for our next Second Life agent. To prevent the widely acknowledged confusion that can occur when a human-driven avatar comes to a new place, we implemented a guide for our area. As soon as a new visitor was detected, this guide, an agent avatar named "Info Clarity," would appear near the visitor and welcome them to Chicoma Island via a text chat message. Info could answer any questions typed to him about the purpose of the island and what one could do there. The agent avatar could also take the visitor on a walking tour of the space or teleport them to specific destinations when requested. The challenge of building this navigational agent helped answer many questions regarding ECAAs.
The first question we needed to answer was whether the ICT framework for ECAs could be connected to avatar representations in Second Life. We solved this problem by implementing the connection to the NPC Editor (a statistical classifier that could parse the text and match it to appropriate answers) using OpenMetaverse, a set of reverse-engineered protocols allowing a bot to mimic the interaction of a client program with the SL server. In other words, our ECAA appeared to be an avatar like any human-driven avatar, logging into the sim and accessing all the standard avatar features as well as the AI capabilities we were providing. Soon we had a guide agent whose domain included all parts of Chicoma Island.
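The question-to-answer matching that a tool like the NPC Editor performs can be approximated with any text classifier. The sketch below uses bag-of-words cosine similarity as a simple stand-in; the NPC Editor itself uses a more sophisticated statistical model, and the question-answer pairs here are invented.

```python
import math
from collections import Counter

# Toy stand-in for a statistical question-answer classifier such as the
# NPC Editor. The Q-A pairs are invented examples; a deployed agent
# would have hundreds of authored responses.

QA_PAIRS = [
    ("what is this island for",
     "Chicoma Island is a healing space for veterans."),
    ("where is the labyrinth",
     "The labyrinth is just up the path; I can walk you there."),
    ("who are you", "I am the island guide."),
]
OFF_TOPIC = "Let's get back to the island. What would you like to see?"

def vectorize(text: str) -> Counter:
    return Counter(text.lower().replace("?", " ").replace(".", " ").split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def answer(question: str, threshold: float = 0.3) -> str:
    scored = [(cosine(vectorize(question), vectorize(q)), a)
              for q, a in QA_PAIRS]
    score, best = max(scored)
    # Deflect back on topic when no stored question matches well.
    return best if score >= threshold else OFF_TOPIC

print(answer("What is the purpose of this island?"))
```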
Other questions were answered as the bot stayed logged in and interacted with users in the world. We determined that the bot could stay running and stable for an extended period of time, it could handle more than one person asking questions, and it could respond to people who were not in proximity to it by communicating over Instant Messaging, rather than local chat. When touring a person around the four sims of the island (each being served by a separate CPU), we solved the problem of handling the disruptions and navigational discontinuities caused by crossing sim boundaries. We analyzed conversational logs between ECAA guides and visitors, and improved the range of topics and questions that could be addressed. As was our standard practice, we also added responses to off-topic questions designed to bring the visitor back on track.
Shortly after we started this project, our work came to the attention of a training arm of the DoD. They were building areas within two virtual worlds, and wanted to populate them with intelligent agents for various purposes. This project, Virtual Intelligent Guides for Online Realms, or VIGOR, resulted in a number of interesting instances of agent avatar technology in virtual world space.
The first ECAA we created for VIGOR was to play the role of a knowledgeable African diplomat in a virtual information center in Active Worlds. Active Worlds was a much older VW platform, with fewer tools available to access the internal workings of the system, but we produced a fairly simple conversational agent for our sponsors that could answer a range of questions about his African country.

Figure 7: Sudanese Official in Active Worlds.

We were also tasked with creating a guide for a public Army-oriented space they were setting up in Second Life. Building on the ideas present in the Chicoma Island guide, we created a sophisticated navigational and informational agent to tour people around the Army space, answer their questions and give them Army-themed virtual gifts. This guide went beyond the Island guide in several ways. Its navigational functions included being able to guide groups, and to know whether people were keeping up with it. It could not only answer questions, it could handle both local and messaged chat inquiries and even correct for spelling mistakes. If this guide ECAA did not know the answer to a specific question, it could send a message or an email to a live person who could send back an answer, which the bot would dutifully relay (Jan et al. 2009).
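That escalate-to-a-human fallback is a simple but useful pattern: queue the unanswerable question, acknowledge it in-world, and relay the operator's reply when it arrives. A minimal sketch, with invented function names standing in for whatever messaging transport the bot actually uses:

```python
import queue

# Minimal sketch of the escalate-to-a-human fallback described above.
# The transport here is just print(); a real bot would send an instant
# message or email and receive the reply asynchronously.

pending = queue.Queue()  # questions waiting on a live person

def escalate(visitor: str, question: str) -> None:
    print(f"[message to operator] {visitor} asked: {question}")
    pending.put((visitor, question))

def on_question(visitor: str, question: str, classifier_answer):
    if classifier_answer is None:       # classifier found no match
        escalate(visitor, question)
        return "Good question, let me check on that and get back to you."
    return classifier_answer

def on_operator_reply(reply: str) -> str:
    visitor, question = pending.get()
    # The bot dutifully relays the human-authored answer in-world.
    return f"{visitor}, about '{question}': {reply}"

print(on_question("Visitor", "What rifle do paratroopers carry?", None))
print(on_operator_reply("That detail is covered in the next exhibit."))
```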
The next task was to implement an embodied agent avatar that could tell people how to make a parachute jump in the virtual Army world. What made this request challenging was the specification to make him a "crusty old sergeant" who would bark out orders and get annoyed if you weren't doing things fast enough. We had only made agents in SL that used text chat, and typing is not an efficient way to convey "crustiness." Therefore we decided to give this one a recorded voice, with which he could speak to the participants. Standing at the entrance to a rustic jump shack, he would greet visitors saying: "So you wanna jump off my mountain, troop?" He'd then say: "Well ya better get on one of those parachutes back there, before someone puts you in a body bag!" motioning to a shelf of parachutes in the jump shack. The visitor could type certain responses. For instance, if he or she said "No" to the jumpmaster's original question, then the jumpmaster agent would simply wait for the next visitor. If the visitor took too long to get their chute on, he'd offer the exact steps to do it, and yell impatiently if they still took their time.
The jumpmaster agent underscored some of the challenges that AI researchers discovered when their characters became more human-like, and these challenges were exacerbated within the Second Life platform. While we had successfully created a jumpmaster that yelled at you, SL did not offer tools that allowed us to make facial expressions to visually support the behavior indicated by the vocal track. Second Life does offer rudimentary lip-synching for when avatars are using a microphone and voice functions in the world, but it is not very sophisticated. It works moderately well for ordinary conversation, but not for voices that are highly modulated, as in singing or yelling. However, it is possible to access approximately a dozen key frames of default facial expressions through the native scripting language, LSL. With no way to access any control points on the agent avatar's face, we instead used a script to rapidly trigger and stop these available key frames in custom sequences to produce the illusion that the intelligent virtual agent is speaking the phrases being heard. It was moderately successful, and an interesting surprise to those Second Life citizens who encountered our less than polite jumpmaster (Chance and Morie 2009).
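The workaround amounts to time-slicing the canned expression key frames against the recorded audio. The sketch below shows the idea in Python rather than LSL; the expression names and timings are invented placeholders, not actual LSL animation constants.

```python
import time

# Illustrative sketch of the key-frame trigger trick described above,
# written in Python rather than LSL. Expression names and durations are
# invented placeholders.

def play_expression(name: str, duration: float) -> None:
    print(f"trigger {name}")  # start one of the built-in key frames
    time.sleep(duration)      # hold it only briefly...
    print(f"stop {name}")     # ...then stop it before it fully plays

# A hand-authored sequence synchronized to one recorded phrase.
SEQUENCE = [
    ("open_mouth", 0.20),
    ("frown", 0.15),
    ("open_mouth", 0.25),
    ("anger", 0.30),
]

def speak_phrase(sequence) -> None:
    # Rapidly starting and stopping canned frames approximates mouth
    # movement matched to the audio the visitor hears.
    for name, duration in sequence:
        play_expression(name, duration)

speak_phrase(SEQUENCE)
```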

Figure 8: The Jumpmaster bot in Second Life

The next challenge for the VIGOR Project was to develop an agent that could give what is called a "staff ride," which is typically a visit to a battleground or location of interest with someone relaying the events that took place on various parts of the terrain. Other environmental conditions such as weather can also be used as part of the analysis of the events that transpired. Staff rides are valuable training mechanisms, and today are most often conveyed through PowerPoint presentations in a classroom rather than by a visit to an actual site of interest (Robertson 1987). Our staff ride guide was to tell the story of an incident during the Iraq war at a checkpoint along the road to the Baghdad Airport. In this situation a car taking an Italian journalist to the airport failed to stop at a temporary checkpoint, and was fired upon, resulting in casualties. The geography of that area was built in Active Worlds, and an ECAA was developed with knowledge of the event. The tour started in an information center where the initial details of the incident were conveyed to a group of human-driven avatars. The virtual staff ride guide then led the group to that checkpoint area, showing the terrain from several vantage points, such as from an overpass and from the soldiers' location. Unlike physical staff rides, where some of the details of the area might have changed, the Active Worlds environment could maintain the location of key items that were there during the original incident, such as the temporary barriers and even the journalist's car, for better understanding of how they played into the events.

The training, guide and informational ECAAs we created in Second Life were fairly successful, and served as excellent proofs of concept. Back on Chicoma Island we decided to make an intelligent agent avatar that was a more prominent part of the activities we were building. The Warriors' Journey Story Tower was one such activity, where a veteran could go to see and hear a story about a classic warrior figure from history, such as the Cheyenne Dog Soldier or a Samurai warrior. The stories, shown through a series of narrated panels along the spiral path of a tower structure, were designed to highlight guiding principles common to all soldiers, such as defending one's homeland, fighting with honor, protecting your comrades and returning from injury. We realized that four narrated image panels could only begin to tell the full story of these heroes, and that a conversational agent in the form of the warrior whose story was being told would be an ideal means to provide context, history and additional personal information about the character.
After seeing and hearing the story along the spiral path, a visitor to the tower reaches the topmost room, where the embodied conversational agent (whose avatar is designed to appear historically accurate) is situated, surrounded by a backdrop that represents where and when he lived. This ECAA finishes the story with additional narration, and gestures towards elements of the space around him as he does so. When he is done, text appears on the screen telling the visitor they can now talk to the warrior and ask him anything they want to know about his time, his life and the battles he has fought. The character currently has about 300 responses covering questions that could be asked about these areas of interest, as well as responses that address off-topic comments. As veterans visit the Story Tower experience, we use the logs of their conversations to add to the warrior's knowledge base. At the present time, a visitor can choose from two classic Warriors' Journeys, each with its own set of background narrative and conversational responses (Morie et al. 2010).
This activity uses the power of narrative to help reinforce a more positive attitude and a connection to the long history of soldiers dedicated to protecting their people. We are taking the next step with this work and making it possible for a veteran who has experienced these stories to author not only their own story but also their own story-telling agent for the Story Tower. Authoring a conversational agent with full embodiment and a knowledge base is not a simple task: there is no Virtual Human Toolkit available to help one through the process (although this is an active area of research at the ICT). This effort is just starting and promises to be a major advancement for ECAAs in Second Life.
The implementations discussed thus far are all individual examples of agent avatars that converse one-on-one with a human being. As part of another task in VIGOR, we wanted to see if we could make an entire scenario dependent on the interactions of many agents, including a virtual teammate that collaborates on all tasks and investigations. We chose to use the Iraqi Village described earlier as a stage for a checkpoint situation that would involve military personnel and a range of villagers, including children and a pregnant woman. While this was a proof-of-concept project for us, and not designed as an actual training exercise, it still included aspects a soldier might encounter during checkpoint duty. These included having to stop, detain and search people if they looked or acted suspicious, having to interrogate villagers when looking for a suspect, and dealing with cultural issues, like having to search a pregnant woman. A soldier ECAA acts as your partner at the checkpoint. A group of villagers are coming down the road, and it is your job to make sure the people going into the village actually belong or have legitimate business there. There are two main scenarios. The first is where the crowd creates a diversion that enables a child to sneak past the gate with either medical supplies for his grandfather or bomb-making materials (the scenario randomly selects one situation).
The second scenario involves a woman who appears to be wearing bulgy clothing and says she is going to visit an aunt in the village. These scenarios have randomized story outcomes, such as the woman either being secretly pregnant or simply trying to smuggle contraband. These divergent outcomes require that many of the characters be able to adjust their knowledge base to account for the scenario at hand. A scripted Heads Up Display (HUD) provides the actions and tools needed to maintain the checkpoint: the ability to stop an avatar, wand them, detain them and search their belongings. This HUD works for both the user and his virtual teammate. Each scenario may also require more information to be gained by walking through the village and asking questions of the local inhabitants. Here there are street vendors, shoppers, teachers and children who, as ECAAs, can offer information if asked. All these agents can be questioned (or pursued if they decide to flee) either by the user or the virtual teammate, who is also an ECAA (Jan et al. 2011).

Figure 9. Some of the many bots that are part of the Checkpoint Exercise in Second Life

4.0 Creating Meaning with ECAAs


While we have done extensive work in creating useful ECAAs in virtual worlds, there is much more to be done. For example, these intelligent characters need to be able to communicate better, especially in the area of gestures, behaviors, and natural body movements. However, this is also true even for the human-driven avatars in virtual worlds. Current communication methods, as provided by the makers of virtual worlds, require a person to choose gestures or animations from a limited list, which is far from intuitive. One needs to consider which gesture might match what one wishes to convey, and then actually trigger the gesture to have it played in the world. All this takes planning and time (Verhulsdonck and Morie 2009).
ECAAs also would be much more useful if they could understand a person's voice, as this has become a preferred means of communication over texting in most virtual worlds. While ICT's traditional virtual humans understand and respond to voice, implementing this functionality in a virtual world remains a challenge. In most virtual worlds people communicate via a Voice over IP solution using a variety of consumer microphones, and this is not adequate to properly train a voice recognition system. Using voice would also require more sophisticated control of an avatar's facial movements. However, much of the complexity that exists in ICT's advanced virtual human models is not available in the VW avatars that form the basis of the ECAAs. Ultimately, we would like to integrate these virtual world agents with our more advanced virtual human tools, such as SmartBody, the ICT behavioral markup system in development that is integrated with the intelligence of the virtual human system to procedurally generate appropriate movements synchronized with spoken audio and other control channels (Thiebaux and Marsella 2007).
While we have now made several ECAAs that serve a variety of purposes, the main goal driving our research is to have these agents help provide meaning for one's interactions in the virtual world. When we first began working on the veterans' healing center we consulted with an existing Second Life veterans' group that had set up its own space where its members could feel comfortable and welcome. They had built a hospitality center with resources ranging from GI Bill information to legal aid, a chapel, separate buildings for each of the services, a nightclub and more. The focal point of the space, however, was a recreation of the Tomb of the Unknown Soldier, a highly meaningful symbolic memorial for them. When we first approached them, they were rather wary, and reticent to share information or join our activities. However, as they began to learn more about our work, especially the intelligent agent avatars we had created, one of the members came to us with a request: could we make the ceremonial honor guard for the virtual Tomb of the Unknown Soldier? It had bothered them that their tomb stood unguarded, unlike its physical counterpart at Arlington Cemetery, which is watched over 24 hours a day, seven days a week. We, of course, said we would do this, thinking that, since the guard does not actually have to talk to anyone, it would be easy. The challenge, however, was to make sure the honor guard conformed in its actions to the precise ritualistic patrol the guard maintains during his watch. This walk, 21 steps with specific turnings and time specifications, was a very slow procedure, in accordance with the solemnity of its purpose. It was not simple to make a walk that prescribed and slow in Second Life, because it offers only two fixed speeds of movement. We devised a method that would animate an invisible object along the walkway very slowly, and then attached the Honor Guard bot to it while overriding its default animations and replacing them with customized ones that could be synchronized with the movement of the object. In this way, the actions were precise and at the correct pace.
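In effect, the trick decouples locomotion speed from the avatar's two built-in gaits by pacing an invisible carrier along the path at an arbitrary rate. A schematic sketch, with invented step lengths and timings standing in for the real choreography:

```python
import time

# Schematic sketch of the slow-walk technique described above: an
# invisible object creeps along the walkway with the Honor Guard
# avatar attached, while custom animations replace the default gait.
# Step length and timing below are invented placeholders.

STEPS = 21          # the ritual walk is 21 steps
STEP_LENGTH = 0.5   # meters per step (placeholder)
STEP_TIME = 1.0     # seconds per step, far slower than avatar gaits

def move_carrier(position: float) -> None:
    # Moving the carrier moves the attached avatar; a custom walk
    # animation plays in sync with each increment.
    print(f"carrier at {position:.2f} m; play custom walk frame")

def honor_guard_walk() -> None:
    position = 0.0
    for _ in range(STEPS):
        position += STEP_LENGTH
        move_carrier(position)
        time.sleep(STEP_TIME)   # pacing the carrier paces the avatar
    print("pause, turn, and repeat in the other direction")

honor_guard_walk()
```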

One big challenge was to ensure that the tomb guard agent stayed online all the time, which was more difficult than we thought it would be because the persistence of the virtual world is sometimes volatile. The overall world is indeed constantly on and available, but it happens that a particular sim, or island, may suddenly go offline for an unspecified amount of time or be restarted periodically by Linden Lab to install updates. An island's owner might also decide to intermittently reboot the sim to improve interaction and reduce lag, a common problem. We set up scripts to monitor the agent's status, and to relog it into the world if its sim went offline. However, if it tried to log in before that sim came back online, it would be automatically routed to some nearby sim. The monitor would then indicate it was back online, but we would soon hear from the veterans group that it was not in the right place (ICT 2010b).
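The fix for that failure mode is to verify not just that the agent is logged in, but that it actually landed back in its home region before reporting success. A minimal watchdog sketch, with invented status and login functions standing in for the client library:

```python
import time

# Minimal watchdog sketch for keeping a persistent agent on post. The
# status, relog and teleport calls are invented stand-ins, not a real
# client-library API.

HOME_REGION = "Chicoma Island"

def bot_status():
    # Would query the client; returns (online, current_region).
    return (False, None)

def relog():
    print("re-logging agent into the grid")
    # If the home sim is still down, the grid may route the avatar to
    # a neighboring region, so a successful login alone does not mean
    # the guard is back on post.
    return (True, "Some Nearby Sim")

def watchdog_tick() -> bool:
    online, region = bot_status()
    if not online:
        online, region = relog()
    if online and region != HOME_REGION:
        print(f"wrong region ({region}); requesting teleport home")
        region = HOME_REGION  # a real bot would teleport and re-check
    return online and region == HOME_REGION

while not watchdog_tick():
    time.sleep(60)  # retry until the agent is back in the right place
```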
The creation of the Honor Guard earned us a trusting relationship with the veterans group. It also serves as an excellent example of why such agents should be encouraged in Second Life. While some people might find ways to use agents or bots in objectionable ways, the potential of these agents for positive interaction outweighs arguments for their exclusion.
5.0 Conclusion
We foresee a very active future for ECAAs deployed in virtual worlds. While they are not yet as sophisticated as many of their virtual human counterparts in the research arena, they can still enrich virtual spaces, serve as much needed guides and docents, teach valuable lessons, provide symbolic presences, and fill a myriad of uses, many of which have yet to be imagined. The mercurial architectures of persistent, networked virtual worlds must be continually assessed both to determine what these agents can potentially achieve, and to plan for spaces that can support their newest uses. The advancement of virtual world ECAAs will only be achieved through diligent study and exploration of these worlds, because it is the limitations of current virtual worlds that restrict the agents' abilities to exhibit intelligence, emotions, and learning capacity similar to that of the more mature virtual human technologies. Virtual worlds are here to stay, and will be an increasingly active part of how we interact with one another in the future. We encourage more work in this field and look forward to the many changes such efforts will bring.

Acknowledgements:
Some of the projects described herein have been sponsored by the U.S. Army Research, Development, and Engineering Command (RDECOM). Statements and opinions expressed do not necessarily reflect the position or the policy of the United States Government, and no official endorsement should be inferred.

We would like to thank the many talented members of all the interdisciplinary teams that comprise the Virtual Human research group at the ICT for their dedicated work and contributions that enabled the adaptation of virtual human technology to virtual worlds. More about their work can be found at www.ict.usc.edu

References

Artstein R, Cannon J, Gandhe S, Gerten J, Leuski A, Traum D, Henderer J (2008) Coherence of off-topic responses for a virtual character. 26th Army Science Conference (Orlando, FL, December 1-4, 2008)
Becker-Asano C, Wachsmuth I (2010) Affective computing with primary and secondary emotions in a virtual human. Autonomous Agents and Multi-Agent Systems 20(1): 32-49
Bickmore T, Cassell J (2005) Social dialogue with embodied conversational agents. In: van Kuppevelt J, Dybkjaer L, Bernsen N (eds) Natural, Intelligent and Effective Interaction with Multimodal Dialogue Systems. Kluwer Academic, New York
Bindiganavale R, Schuler W, Allbeck JM, Badler NI, Joshi AK, Palmer M (2000) Dynamically altering agent behaviors using natural language instructions. In: Proceedings of the Fourth International Conference on Autonomous Agents, pp 293-300
Bos J, Oka T (2003) Building spoken dialogue systems for believable characters. In: Proceedings of the Seventh Workshop on the Semantics and Pragmatics of Dialogue, Diabruck
Brooks D (2002) Oversimulated suburbia. The New York Times Magazine, November 24, 2002. Available online at http://www.nytimes.com/2002/11/24/magazine/24SIMS.html Accessed June 12, 2011
Cassell J (2000) More than just a pretty face: embodied conversational interface agents. Communications of the ACM 43(4): 70-78
Cassell J (2001) Embodied conversational agents: representation and intelligence in user interfaces. AI Magazine 22(4): 67-83
Cassell J, Vilhjálmsson H (1999) Fully embodied conversational avatars: making communicative behaviors autonomous. Autonomous Agents and Multi-Agent Systems 2(1): 45-64
Catrambone R, Stasko J, Xiao J (2005) ECA as user interface paradigm: experimental findings within a framework for research. In: Ruttkay Z, Pelachaud C (eds) From Brows to Trust. Human-Computer Interaction Series, Volume 7(III): 239-267
Champandard AJ (2007) Top 10 most influential AI games. Published online at AIGameDev.com. Available at http://aigamedev.com/open/highlights/top-ai-games/ Accessed December 20, 2010
Chance E, Morie JF (2009) Method for custom facial animation and lip-sync in an unsupported environment, Second Life. In: Ruttkay Z, Kipp M, Nijholt A, Vilhjálmsson HH (eds) Proceedings of IVA 2009. Springer: 556-557
Churcher GE, Atwell ES, Souter C (1997) Dialogue management systems: a survey and overview. Technical report, University of Leeds
Egges A, Kshirsagar S, Magnenat-Thalmann N (2004) Generic personality and emotion simulation for conversational agents. Computer Animation and Virtual Worlds 15(1): 1-13
Ellis SR (1994) What are virtual environments? IEEE Computer Graphics and Applications 14(1): 17-22
Gratch J, Rickel J, André E, Badler N, Cassell J, Petajan E (2002) Creating interactive virtual humans: some assembly required. IEEE Intelligent Systems, July/August: 54-63
Hill R, Gratch J, Marsella S, Rickel J, Swartout W, Traum D (2003) Virtual humans in the Mission Rehearsal Exercise system. Künstliche Intelligenz (KI) (special issue on Embodied Conversational Agents) 4(03): 5-10
Hill M (2008) Second Life is frontier for AI research: intelligence tests use virtual world's controllable environment. MSNBC Online. Available at http://www.msnbc.msn.com/id/24668099/ns/technology_and_science-innovation/t/second-life-frontier-ai-research/ Accessed June 1, 2011
ICT (2010a) SimCoach project description. Available at www.ict.usc.edu/projects/simcoach Accessed December 15, 2010
ICT (2010b) http://projects.ict.usc.edu/force/cominghome/honorguard.php
Jan D, Roque A, Leuski A, Morie J, Traum D (2009) A virtual tour guide for virtual worlds. In: Ruttkay Z, Kipp M, Nijholt A, Vilhjálmsson HH (eds) Intelligent Virtual Agents, 9th International Conference, IVA 2009, Amsterdam, The Netherlands, September 14-16, 2009. Lecture Notes in Computer Science. Springer: 372-378
Jan D, Chance E, Rajpurohit D, DeVault D, Leuski A, Morie J, Traum D (2011) Checkpoint exercise: training with virtual actors in virtual worlds. In: Proceedings of Intelligent Virtual Agents, IVA 2011, Reykjavik, Iceland, September 15-17, 2011
Kallmann M, Marsella S (2005) Hierarchical motion controllers for real-time autonomous virtual humans. In: Proceedings of the 5th International Working Conference on Intelligent Virtual Agents (IVA), Kos, Greece: 243-265
Kenny P, Hartholt A, Gratch J, Traum D, Swartout W (2007) The more the merrier: multi-party negotiations with virtual humans. AAAI 2007 (Vancouver, British Columbia, Canada, July 2007)
Laird JE (2001) Using a computer game to develop advanced AI. IEEE Computer 34(7): 70-75
Lee J, Marsella S, Traum D, Gratch J, Lance B (2007) The Rickel gaze model: a window on the mind of a virtual human. In: Proceedings of the 7th International Conference on Intelligent Virtual Agents: 296-303
Lehman JF, Laird JE, Rosenbloom PS (2006) A gentle introduction to Soar: 2006 update. Available online at http://ai.eecs.umich.edu/soar/sitemaker/docs/misc/GentleIntroduction-2006.pdf Accessed December 15, 2010
Martin JC, Pelachaud C, Abrilian S, Devillers L, Lamolle M, Mancini M (2005) Levels of representation in the annotation of emotion for the specification of expressivity in ECAs. In: Proceedings of Intelligent Virtual Agents
Mauldin M (1994) ChatterBots, TinyMuds, and the Turing Test: entering the Loebner Prize competition. In: Proceedings of the Twelfth National Conference on Artificial Intelligence. AAAI Press
McLellan H (1996) Virtual realities. In: Jonassen DH (ed) Handbook of Research for Educational Communications and Technology. Macmillan, New York: 457-487
Mims C (2010) Whatever happened to … virtual reality? Technology Review, MIT Press. Available online at http://www.technologyreview.com/blog/mimssbits/25917/ Accessed May 14, 2011
Morie JF, Haynes K, Chance E (2010) Warriors' Journey: a path to healing through narrative exploration. In: Proceedings of the International Conference Series on Disability, Virtual Rehabilitation and Associated Technologies, and ArtAbilitation. International Society for Virtual Rehabilitation
Morningstar C, Farmer FR (1991) The lessons of Lucasfilm's Habitat. In: Benedikt M (ed) Cyberspace: First Steps. MIT Press, Cambridge, MA: 273-302
Nilsson NJ (1995) Eye on the prize. AI Magazine 16(2): 9-17
Nino T (2007) Peering inside – Second Life's user retention. Published online at Massively by Joystiq. Available at http://massively.joystiq.com/2007/12/23/peering-inside-second-lifes-user-retention/ Accessed June 20, 2011
Nino T (2009) Second Life traffic gaming: a chat with a bot-operator, and dire portents for Lucky Chairs. Published online at Massively by Joystiq. Available at http://massively.joystiq.com/2009/06/03/second-life-traffic-gaming-a-chat-with-a-bot-operator-and-dire/ Accessed December 15, 2010
Papp R (2010) Virtual worlds and social networking: reaching the millennials. Journal of Technology Research 2: 1-15
Rickel J, Marsella S, Gratch J, Hill R, Traum D, Swartout W (2002) Toward a new generation of virtual humans for interactive experiences. IEEE Intelligent Systems: 32-38
Rickel J (2001) Steve goes to Bosnia: towards a new generation of virtual humans for interactive experiences. AAAI Spring Symposium on Artificial Intelligence and Interactive Entertainment (Stanford University, CA, March 2001)
Robertson WG (1987) The Staff Ride. United States Army Center of Military History. Available at http://www.history.army.mil/StaffRide/Staffr/staffr.html Accessed January 12, 2009
Stout B (1996) Smart moves: intelligent pathfinding. Game Developer Magazine, October 1996
Swartout W (2006) Virtual humans. Twenty-First National Conference on Artificial Intelligence (AAAI-06), senior paper, Boston, MA
Swartout W, Gratch J, Hill R, Hovy E, Marsella S, Rickel J, Traum D (2006) Toward virtual humans. AI Magazine 27(1)
Swartout W (2010) Lessons learned from virtual humans. AI Magazine 31(1)
Thiebaux M, Marsella S (2007) SmartBody: behavior realization for embodied conversational agents. In: Proceedings of the 7th International Conference on Intelligent Virtual Agents (IVA), September 17-19, 2007, Paris, France
Thiebaux M, Marsella S, Marshall AN, Kallmann M (2008) SmartBody: behavior realization for embodied conversational agents. In: Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems, Volume 1
Traum D, Swartout W, Gratch J, Marsella S (2008) A virtual human dialogue model for non-team interaction. In: Dybkjær L, Minker W (eds) Recent Trends in Discourse and Dialogue. Springer: 45-67
Verhulsdonck G, Morie JF (2009) Virtual chironomia: developing standards for non-verbal communication in virtual worlds. Journal of Virtual Worlds Research 2(3). Available online at http://journals.tdl.org/jvwr/article/viewArticle/657 Accessed January 20, 2009
Weizenbaum J (1966) ELIZA – a computer program for the study of natural language communication between man and machine. Communications of the ACM 9: 36-45
