Abstract:
Intelligent agents in the form of avatars in networked virtual worlds (VWs) are a new form of embodied conversational agent (ECA). They are still a topic of active research, but they promise soon to rival the sophistication of the virtual human agents developed on stand-alone platforms over the last decade. Such agents in today's VWs grew out of two lines of historical research: Virtual Reality and Artificial Intelligence. Their merger forms the basis for today's persistent 3D worlds occupied by intelligent characters serving a wide range of purposes. We believe ECA avatars will help VWs achieve a higher level of meaningful interaction, providing increased engagement and responsiveness within environments where people will interact with, and even develop relationships with, these agents.
1.0 Introduction
Virtual human research has progressed rapidly over the last 15 years (see Rickel et al. 2002; Gratch et al. 2002; Swartout 2006; Becker-Asano and Wachsmuth 2010; Swartout 2010). Yet Embodied Conversational Agent Avatars (ECAAs) are still in their infancy, with the first ones implemented only around four years ago (Hill 2008). As the articles in this book show, making better and more functional bots is a prominent research topic. This chapter focuses primarily on intelligent agent avatars in virtual worlds, and especially the work being done at the University of Southern California's Institute for Creative Technologies (ICT), which has a large research effort in both virtual humans (VHs) and ECAAs. ECAAs can be considered sophisticated conversational bots that
look like other inhabitants of the world and interact meaningfully with humans
within that space. Because of this, they can contribute to more robust scenari-
os in virtual worlds, covering a wide range of topics, from training to health.
Today’s agent avatars in virtual worlds are the result of a merger of 3D virtual
reality environments with interactive artificially intelligent agents. These two
technologies started as separate lines of research, and in the last decade have
come together to mutual advantage.
Virtual reality (VR) technology provides digitally fabricated spaces where we
can experience that which we cannot in real life, whether we are barred from it
through distance, temporal displacement or its imaginary nature. VR relies on
building, through the use of computer graphics and specialized viewing sys-
tems, complete environments that our disembodied selves can traverse, as if
we were really, truly there (McLellan 1996). The task of early VR researchers
was to find ways to convince humans of the believability of these digital spac-
es built only with computer graphics tools, and no physical materials. Much
of the research focused on how to bring the viewer inside that intangible
world. Researchers designed displays that shut out signals from actual physi-
cal reality and replaced these with malleable and controllable computer
graphics (Ellis 1994). Zeros and ones became digital sirens that fooled our
minds by providing experiences that stimulated our neural circuits in ways
very similar to actual reality. Unlike actual reality, however, it was a bound-
less expandable frontier, limited only by the creator’s imagination.
The other line of research was Artificial Intelligence (AI), which focused on making machines truly intelligent. Rather than creating spaces we
could inhabit, the early AI community sought to capture the internal mecha-
nisms of human thinking processes within the confines of a computer system
(Nilsson 1995). The overarching concept here was to understand how the
brain worked, and to then make a machine appear smart in a way that mim-
icked basic human intelligence. One trajectory of this research was to develop
programs that could exhibit intelligence and interact with humans in a conver-
sational manner. Early efforts concentrated on text-based queries and re-
sponses, with a human asking a question and the machine answering as if it
was a perceptive thinking entity. Weizenbaum's early program, Eliza, very nearly did the trick: more than one user was convinced that their interactions were with a real person rather than a computer program (Weizenbaum 1966). But it was a thin disguise, and these early so-called "chat bots" began to evolve
into more sophisticated systems through dedicated hard work and ongoing ad-
vances.
Each of these technologies – VR and AI – struck a chord with the general pub-
lic, which began to believe that computers could do almost anything conceiva-
ble. Escape the real world? Download your brain into a computer? No prob-
lem! This led to unrealistic expectations and researchers simply could not
keep pace with the hype generated by public excitement fed by the overactive
imagination of the press, science fiction books and even film. A period of dis-
illusionment set in (Mims 2010). However, researchers entering the second decade of the 21st century have moved beyond these issues and are forging ahead on paths the early visionaries trod only in their dreams.
Going beyond Eliza’s model of a disembodied conversation with computer
programs masquerading as a Rogerian psychotherapist, a key group of people
realized that conversing with a visible character would enhance the interaction
between human and machine (Catrambone et al. 2005). In the 1990s, these in-
telligent virtual characters began to take on graphical forms that could visually
depict human-like actions and responses during conversational interactions.
Unlike the more sophisticated depictions of computer-generated humans that
were part of movies (for example, the film Final Fantasy in 2001) where each
frame of a character’s existence took hours to create, these AI virtual humans
had to run in real time to support their interactive nature. This task was diffi-
cult given the capabilities of the computers of that time. Therefore, real time
depictions were necessarily less about realism and more about behavioral be-
lievability.
Computer games were also quickly advancing during this time, and game
makers adopted techniques from many domains, including VR and AI. As a
real time interactive medium driven by a new generation of demanding play-
ers, these games pushed the envelope of realtime graphics while also incorpo-
rating some basic forms of intelligence into their systems. Most of these AI
resources were allocated to the behaviors of non-player characters, including
rudimentary player interaction and simple pathing algorithms (Stout 1996).
However, a few AI characters had "starring" roles. The Sims, for example, while not a goal-driven game, stands out as a prime example of characters acting with complex human-like behaviors via scripted rule-based AI, decision trees and neural networks (Laird 2001). These characters were given basic intelligence, beliefs and goals commensurate with the needs of the game system.
2.1 Chatterbots
As noted previously, the earliest virtual agents were non-graphical conversa-
tional characters comprising computer programs that could typically “under-
stand” text typed by a human. The program achieved this by matching key
words to a database of responses. Such interactions were often limited in
scope and lacked the richness of normal in-person communication. The com-
mon term given these autonomous software programs was chatterbots, chat
bots or “bots” for short (Mauldin 1994).
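The keyword-matching technique is simple enough to sketch in a few lines. The following Python fragment is a minimal illustration of the general approach, not a reconstruction of any particular historical bot; the keyword table and fallback line are invented for the example.

```python
# Minimal keyword-matching chatterbot in the Eliza tradition (illustrative only).
RESPONSES = {
    "mother": "Tell me more about your family.",
    "sad": "Why do you think you feel sad?",
    "hello": "Hello. What would you like to talk about?",
}
FALLBACK = "Please, go on."  # canned reply when no keyword matches

def reply(user_text: str) -> str:
    """Return the response for the first keyword found in the input, else a fallback."""
    lowered = user_text.lower()
    for keyword, response in RESPONSES.items():
        if keyword in lowered:
            return response
    return FALLBACK

if __name__ == "__main__":
    while True:
        line = input("> ")
        if line.strip().lower() in {"quit", "bye"}:
            break
        print(reply(line))
```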
2.2 Embodied Conversational Agents
The next advancement was to depict these interactive agents in some visual
form. These "embodied conversational agents," or ECAs, did more than put a face on the agent – they set the stage for understanding that embodiment brings its own, more complex requirements for believability.
For example, a personality of sorts could be achieved by writing clever inter-
active dialog, but the visual depictions also needed to reflect this in some way.
Each stage of development revealed a new understanding of the cues we take
for granted in face-to-face communication. Much more work was needed in
combining AI and character depictions to make a visual agent appear convinc-
ing.
Researcher Justine Cassell describes ECAs as “multimodal interfaces” that
implement as many of the usual elements humans use for communication as
possible. These can include speech, gestures and bodily movements, facial an-
imation, and more complex aspects such as subtle body language and respons-
es with embedded emotional overtones (Cassell and Vilhjálmsson 1999).
Research in ECAs started in earnest in the late 1990s. Several investigators
and institutions took the lead in advancing the state of the art. In addition to Dr. Cassell and her colleagues (then at MIT), advanced work was being done by Joe Bates's team at Carnegie Mellon University, and at the Information Sciences Institute (ISI), part of the University of Southern California. The work at ISI brought to life the virtual character Steve, a pedagogical agent that could interactively teach students the operation of a ship's control panel (see Figure 1).
Steve was aware of whether or not you were paying attention to his training,
and would urge you back on task via a synthesized voice. Steve was one of the
early forms of a pedagogical agent that actually possessed a 3D animated body
(albeit without legs!) and this opened up new avenues of engagement with pu-
pils using virtual training environments (Gratch et al. 2002).
In the 2001 AAAI Symposium, ISI researcher Jeff Rickel described the development of autonomous agents that could take on a variety of human roles. ICT's virtual humans could engage a trainee in spoken dialogue. They could show nonverbal behaviors that people exhibit
when they have established rapport. They could understand both text and spo-
ken word, and even deal with off-topic remarks. In short, these virtual intelli-
gent agents combined a broader range of capabilities than any other work be-
ing done at that time (Hill et al. 2003; Gratch et al. 2002; Swartout et al. 2006).
ICT's virtual human architecture includes a number of components, listed below, that support the agents' intelligent behaviors (Kenny et al. 2007). The simplest question-answer agents use the first three components; more complex agents can use all of them. (A schematic sketch of how the components chain together follows the list.)
Speech recognition: parses the trainee's speech and produces a string of text as output.

Natural language understanding: analyzes the word string produced by speech recognition and forms an internal semantic representation.

Non-verbal behavior generation: takes the response output string and applies a set of rules to select gestures, postures and gazes for the virtual characters.

Intelligent agent: reasons about plans and generates actions. Simple Q&A agents use the NPC Editor, whereas complex agents are created using other cognitive architectures. The agents contain task models, a dialogue manager and a model of emotions.

SmartBody: an in-house animation control system that uses Behavioral Markup Language to perform advanced control over the characters in the virtual environment by synchronizing speech output with gestures and other non-verbal behavior (Thiebaux et al. 2008).

Real-time graphics: a range of commercially available game engines are used for building the custom environments and as the foundation for real-time rendering. As of this writing, Gamebryo and Unity are the most widely used engines at the ICT.
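To make the division of labor concrete, here is a minimal Python sketch of how these components might chain together for a simple question-answer agent. The function names and stub data are placeholders of ours, not the actual module interfaces of the ICT architecture.

```python
# Schematic pipeline for a simple question-answer virtual human.
# Every function below is a stub standing in for a full component described
# above; none of these names are ICT's real module interfaces.

def speech_recognition(audio: bytes) -> str:
    """Parse the trainee's speech into a string of text (stubbed)."""
    return "where is the control panel"

def natural_language_understanding(text: str) -> dict:
    """Map the word string to an internal semantic representation."""
    return {"intent": "ask_location", "topic": "control panel"}

def intelligent_agent(semantics: dict) -> str:
    """Select a response; a simple Q&A agent can use a lookup table."""
    answers = {("ask_location", "control panel"): "The control panel is on the bridge."}
    return answers.get((semantics["intent"], semantics["topic"]), "I am not sure.")

def nonverbal_behavior_generation(response: str) -> list[str]:
    """Apply rules to select gestures, postures and gazes for the utterance."""
    return ["gaze_at_user", "beat_gesture"]

def render(response: str, behaviors: list[str]) -> None:
    """Hand speech plus synchronized behaviors to the animation/graphics layer."""
    print(f"SAY: {response}  WITH: {', '.join(behaviors)}")

# One pass through the pipeline:
text = speech_recognition(b"...")
semantics = natural_language_understanding(text)
response = intelligent_agent(semantics)
render(response, nonverbal_behavior_generation(response))
```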
As discussed above, the most advanced virtual human and intelligent agent applications have been achieved in custom-built environments designed for a
specific purpose, providing a bounded framework to contain the extent of
knowledge and interaction the character must provide. This is true even for
the SimCoach. However, when the environment is more open, the domain
more permeable, or the world in which the agent exists subject to ongoing
change, it becomes more difficult to create intelligent characters that believa-
bly populate those spaces. This is the challenge faced when operating ECAAs
within virtual worlds.
3.1 Challenges of Virtual Worlds
Networked, persistent VWs are not perfect. A person can enter a virtual space
one time and find a group of interesting people interacting, chatting and doing
things together. Another time the space might be devoid of people, leaving
one to wonder what to do. Some spaces are themselves confusing, offering few clues to the purpose of their construction. Having to self-discover
what to do in a space can lead to frustration. The task of discerning how to in-
teract with the virtual world’s affordances, control one’s avatar, and navigate
the space is often overwhelming to a first-time user. The luckiest initiates
learn from their friends, who take them “in hand” and see that they are men-
tored through all the basics. The unlucky ones may join the ranks of a rather
large drop-out statistic. In 2007 Linden Lab, the company responsible for Second Life (SL), one of the most popular VWs today, reported that the drop-out rate for new users – those who logged in once but never again – was a rather shocking 90% (Nino 2007).
3.2 ECAAs as Solutions
Adding ECAAs to virtual worlds seems like one obvious solution to these is-
sues because they have the same embodiment and interaction potential as real
users. Such agents can serve as helpers, information resources, orientation
aids and virtual tour guides. In addition, ECAAs may be employed indefinitely to maintain spaces created for a specific purpose, whereas keeping live support staff on hand for the same task may be untenable. This approach makes a great
deal of sense given the world-wide base of VW users and the expansive nature
of their spaces.
ECAAs can serve educational purposes as well. In fact, any of the purposes
for which virtual humans or intelligent agents have been created can be dupli-
cated within the virtual world with embodied agent avatars. However, in
2007, worlds like Second Life made surprisingly little use of any form of
agents, or their simpler cousins, chat bots. They were not part of Linden Lab's offerings; the company's focus was on building a platform that encouraged user-generated content such as buildings, clothes, furniture and the like – merchandise that could be used primarily for commerce.
The first SL avatar-based bots were used to model clothing in virtual shops. These models were not meant to be talked to; they were just there to show how the clothes would look on a 3D figure. So they were useful, but not intelligent and certainly not conversational. Other practical uses for bots included monitoring visitors within a space, using the built-in aspects of an avatar to gather information. Less sanctioned uses included inflating the visitor traffic count to make an area appear more popular than it actually was (Nino 2009).
3.3 Using Virtual Worlds as ECAA Research Platforms
The ICT recognized a great opportunity to extend its expertise in creating virtual humans to the virtual world domain. VW space seemed like an ideal platform for importing some or all of ICT's virtual human technology. Not only would the virtual world provide the underlying graphics and physics engine at little or no cost (until then we had used game platforms such as Gamebryo and Unreal Tournament), but avatar agents could be designed with much less overhead (no building the 3D characters or having to animate them), allowing more focus on their intelligence and conversational attributes. The virtual world, especially Second Life, also offered intrinsic benefits: greater availability, affordability (it is free to users), an in-world scripting language for data gathering and other peripherals, persistent usage, and no need to bring participants into a research lab for interaction. It also provided a rather large pool of virtual world residents who could potentially interact with any agents we might deploy. Even though its demise has been rumored, SL continues to be a very stable platform with tens of thousands of users logged in at any given time (Papp 2010).
With these thoughts in mind, in 2008 we set about adapting some of the tech-
nology behind ICT’s virtual humans to create ECAAs within Second Life.
This was made possible, in part, by leveraging an open-source software library then known as libsecondlife (now called libOpenMetaverse; http://openmetaverse.org/). This software enables communication with the servers that control the virtual world of Second Life.
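libOpenMetaverse itself is a .NET library, so purely to illustrate the overall shape of such a bot process, here is a Python-style sketch against a hypothetical client wrapper; BotClient and all of its methods are invented for this example and are not the library's real API.

```python
# Hypothetical avatar-bot client, sketched to show the general pattern:
# log an avatar in, register a chat handler, and pump events forever.
# BotClient is a stand-in, not libOpenMetaverse's actual interface.

class BotClient:
    def __init__(self):
        self.handlers = []

    def login(self, first: str, last: str, password: str, start_region: str):
        print(f"logging in {first} {last} at {start_region}")

    def on_chat(self, handler):
        self.handlers.append(handler)  # called for each chat message received

    def say(self, text: str):
        print(f"bot says: {text}")

    def run_forever(self):
        pass  # a real client would pump network events here

bot = BotClient()
bot.login("Guide", "Bot", "secret", start_region="Chicoma Island")

def greet(speaker: str, message: str) -> None:
    """Reply to greetings heard in local chat."""
    if "hello" in message.lower():
        bot.say(f"Hello, {speaker}! Ask me about the island.")

bot.on_chat(greet)
bot.run_forever()
```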
We had already built a large Iraqi Village in SL for a previous project, but the
village seemed quite empty and dull when no one was around. We chose this
location for a proof of concept exercise and filled it with simple avatar-based
bots acting as villagers, who went about their daily activities in a scripted fash-
ion. For example, a mother and her son would shop at the various stores in the
market place, conversing with several bot shopkeepers. An old man would
walk through his village, have tea served to him by a waitress bot, and then go
to the mosque to pray. Child bots played in the schoolyard, and the shopkeep-
ers even visited with each other. These were not interactive or intelligent agent
avatars, just avatars scripted to perform as actors, but they did give the village
a liveliness that was not often found in a Second Life environment.
Our next step was a conversational guide agent for Chicoma Island, our Second Life space created for veterans, to answer questions about the activities we were providing. Soon we had a guide agent whose domain included all parts of Chicoma Island.
Other questions were answered as the bot stayed logged in and interacted with
users in the world. We determined that the bot could stay running and stable for an extended period of time, that it could handle more than one person asking questions, and that it could respond to people who were not in proximity to it by communicating over Instant Messaging rather than local chat. When touring
a person around the four sims of the island (each being served by a separate
CPU), we solved the problem of handling the disruptions and navigational dis-
continuities caused by crossing sim boundaries. We analyzed conversational
logs between ECAA guides and visitors, and improved the range of topics and
questions that could be addressed. As was our standard practice, we also add-
ed responses to off-topic questions designed to bring the visitor back on track.
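The proximity behavior reduces to choosing a reply channel by distance. Below is a minimal sketch of that dispatch decision in Python; the 20-meter radius reflects Second Life's normal local-chat range, while the helper names are ours.

```python
import math

LOCAL_CHAT_RANGE = 20.0  # meters; Second Life's normal local-chat radius

def deliver(bot_pos, user_pos, send_local, send_im, text):
    """Reply in local chat if the user is close enough to hear, else fall back to IM."""
    if math.dist(bot_pos, user_pos) <= LOCAL_CHAT_RANGE:
        send_local(text)
    else:
        send_im(text)

# Example: this user is roughly 35 m away, so the reply goes out as an IM.
deliver((128, 128, 22), (150, 155, 22),
        send_local=lambda t: print("local:", t),
        send_im=lambda t: print("IM:", t),
        text="The gallery is just north of the landing point.")
```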
Shortly after we started this project, our work came to the attention of a train-
ing arm of the DoD. They were building areas within two virtual worlds, and
wanted to populate them with intelligent agents for various purposes. This
project – Virtual Intelligent Guides for Online Realms, or VIGOR – resulted in
a number of interesting instances of agent avatar technology in virtual world
space.
The first ECAA we created for VIGOR played the role of a knowledgeable African diplomat in a virtual information center in Active Worlds. Active Worlds was a much older VW platform, with fewer tools available to access the internal workings of the system, but we produced a fairly simple conversational agent for our sponsors that could answer a range of questions about the diplomat's African country.
We were also tasked with creating a guide for a public Army-oriented space
they were setting up in Second Life. Building on the ideas present in the Chi-
coma Island guide, we created a sophisticated navigational and informational
agent to tour people around the Army space, answer their questions and give
them Army-themed virtual gifts. This guide went beyond the Island guide in
several ways. Its navigational functions included being able to guide groups and to know whether people were keeping up with it. It could not only answer questions, it could handle both local chat and instant-message inquiries and even correct for spelling mistakes. If this guide ECAA did not know the answer to a specific question, it could send a message or an email to a live person who could send back an answer, which the bot would dutifully relay (Jan et al. 2009).
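That fallback behavior is essentially a relay queue: questions the agent cannot answer go out to a human operator, and the reply comes back tagged with the original asker. A minimal sketch under those assumptions follows; the names are invented, and the real system delivered questions by message or email (Jan et al. 2009).

```python
import queue

pending = queue.Queue()  # questions awaiting a human answer

def answer_or_escalate(user: str, question: str, knowledge: dict) -> str:
    """Answer from the knowledge base, or queue the question for a human."""
    if question in knowledge:
        return knowledge[question]
    pending.put((user, question))  # a real bot would message or email this out
    return "Good question. Let me check with someone and get back to you."

def relay_human_reply(user: str, reply_text: str, send_im) -> None:
    """Dutifully pass a human operator's answer back to the original asker."""
    send_im(user, reply_text)

kb = {"what is this place?": "This is the Army's public exhibit space."}
print(answer_or_escalate("Visitor1", "what is this place?", kb))
print(answer_or_escalate("Visitor1", "who designed this build?", kb))
relay_human_reply("Visitor1", "A human operator's answer would arrive here.",
                  send_im=lambda u, t: print(f"IM to {u}: {t}"))
```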
The next task was to implement an embodied agent avatar that could tell peo-
ple how to make a parachute jump in the virtual Army world. What made this
request challenging was the specification to make him a “crusty old sergeant”
who would bark out orders and get annoyed if you weren’t doing things fast
enough. We had only made agents in SL that used text chat, and typing is not
an efficient way to convey “crustiness.” Therefore we decided to give this one
a recorded voice, with which he could speak to the participants. Standing at
18
the entrance to a rustic jump shack, he would greet visitors saying: “So you
wanna jump off my mountain, troop?” He’d then say: “Well ya better get on
one of those parachutes back there, before someone puts you in a body bag!”
motioning to a shelf of parachutes in the jump shack. The visitor could type certain responses. For instance, if a visitor said "No" to the jumpmaster's original question, the agent would simply wait for the next visitor. If the person took too long to get their chute on, he'd spell out the exact steps, and yell impatiently if they still took their time.
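The jumpmaster's behavior can be read as a small timed dialogue loop: greet, wait for a chute, and escalate the prompts if the visitor stalls. Here is a sketch of that control flow; the states, timings and lines are illustrative rather than the deployed script.

```python
import time

PATIENCE = 30  # seconds before the sergeant starts barking (illustrative)

def jumpmaster(visitor_answers_no, chute_on, say):
    """Greet a visitor, then nag with increasing impatience until the chute is on."""
    say("So you wanna jump off my mountain, troop?")
    if visitor_answers_no():
        return  # simply wait for the next visitor
    say("Well ya better get on one of those parachutes back there!")
    start = time.monotonic()
    prompted = False
    while not chute_on():
        if time.monotonic() - start > PATIENCE and not prompted:
            say("Grab a chute, pull the straps tight, clip the buckle. MOVE!")
            prompted = True
        time.sleep(1)
    say("Alright, troop. Get to the jump platform.")

# Dry run with stubbed sensors:
jumpmaster(visitor_answers_no=lambda: False, chute_on=lambda: True, say=print)
```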
The jumpmaster agent underscored some of the challenges that AI researchers
discovered when their characters became more human-like, and these chal-
lenges were exacerbated within the Second Life platform. While we had suc-
cessfully created a jumpmaster that yelled at you, SL did not offer tools that
allowed us to make facial expressions to visually support the behavior indicat-
ed by the vocal track. Second Life does offer rudimentary lip-synching for avatars using the world's microphone and voice functions, but it is not very sophisticated: it works moderately well for ordinary conversation, but not for voices that are highly modulated, as in singing or yelling. It is, however, possible to access approximately a dozen key frames of default facial expressions through the native scripting language, LSL. With no way to access any control points on the agent avatar's face, we instead used a script to rapidly trigger and stop these available key frames in custom sequences, producing the illusion that the intelligent virtual agent was speaking the phrases being heard. It was moderately successful, and an interesting surprise to those
Second Life citizens who encountered our less than polite jumpmaster (Chance
and Morie 2009).
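The trick is timing: start a stock expression, hold it for a fraction of a second, stop it, and move on to the next, so the face appears to track the audio. A platform-neutral Python sketch of that sequencer is below; in Second Life the actual triggering was done from LSL, and the expression names and durations here are illustrative.

```python
import time

def play_keyframe_sequence(trigger, stop, sequence):
    """Rapidly start and stop stock facial keyframes to fake speech movement."""
    for expression, seconds in sequence:
        trigger(expression)  # start the built-in facial animation
        time.sleep(seconds)  # hold it only briefly
        stop(expression)     # cut it off before it fully plays out

# Illustrative sequence roughly paced to a short phrase:
play_keyframe_sequence(
    trigger=lambda e: print("start", e),
    stop=lambda e: print("stop ", e),
    sequence=[("open_mouth", 0.15), ("smile", 0.10), ("open_mouth", 0.12)],
)
```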
The next challenge for the VIGOR Project was to develop an agent that could
give what is called a “staff ride,” which is typically a visit to a battleground or
location of interest with someone relaying the events that took place on vari-
ous parts of the terrain. Other environmental conditions such as weather can
also be used as part of the analysis of the events that transpired. Staff rides are
valuable training mechanisms, and today are most often delivered through PowerPoint presentations in a classroom rather than by a visit to an actual site of interest (Robertson 1987). Our staff ride guide was to tell the story of an incident during the Iraq War at a checkpoint along the road to the Baghdad
Airport. In this situation a car taking an Italian journalist to the airport failed
to stop at a temporary checkpoint, and was fired upon, resulting in casualties.
The geography of that area was built in Active Worlds, and an ECAA was de-
veloped with knowledge of the event. The tour started in an information cen-
ter where the initial details of the incident were conveyed to a group of human-
driven avatars. The virtual staff ride guide then led the group to that check-
point area, showing the terrain from several vantage points such as from an
overpass and from the soldiers' location. Unlike a physical staff ride, where some details of the area might have changed, the Active Worlds environment could maintain the locations of key items present during the original incident, such as the temporary barriers and even the journalist's car, for a better understanding of how they played into the events.
The training, guide and informational ECAAs we created in Second Life were
fairly successful, and served as excellent proofs of concept. Back on Chicoma
Island we decided to make an intelligent agent avatar that was a more promi-
nent part of the activities we were building. The Warriors’ Journey Story
Tower was one such activity, where a veteran could go to see and hear a story
about a classic warrior figure from history such as the Cheyenne Dog Soldier,
or a Samurai Warrior. The stories, shown through a series of narrated panels
along the spiral path of a tower structure, were designed to highlight guiding
principles common to all soldiers, such as defending one's homeland, fighting with honor, protecting one's comrades and returning from injury. We realized
that four narrated image panels could only begin to tell the full story of these
heroes, and that a conversational agent in the form of the warrior whose story
was being told would be an ideal means to provide context, history and addi-
tional personal information about the character.
After seeing and hearing the story along the spiral path, a visitor to the tower
reaches the topmost room, where the embodied conversational agent (whose
avatar is designed to appear historically accurate) is situated, surrounded by a
backdrop that represents where and when he lived. This ECAA finishes the
story with additional narration, and gestures towards elements of the space
around him as he does so. When he is done, text appears on the screen telling
the visitor they can now talk to the warrior and ask him anything they want to
know about his time, his life and the battles he has fought. The character cur-
rently has about 300 responses covering questions that could be asked about
these areas of interest, as well as responses that address off topic comments.
As veterans visit the Story Tower experience, we use the logs of their conver-
sations to add to the warrior’s knowledge base. At the present time, a visitor
can choose from two classic Warriors' Journeys, each with its own background narrative and set of conversational responses (Morie et al. 2010).
This activity uses the power of narrative to help reinforce a more positive atti-
tude and a connection to the long history of soldiers dedicated to protecting
their people. We are taking the next step with this work and making it possi-
ble for a veteran who has experienced these stories to author not only their
own story but also their own story-telling agent for the Story Tower. Author-
ing a conversational agent with full embodiment and a knowledge base is not a
simple task: there is no Virtual Human Toolkit available to help one through
the process (although this is an active area of research at the ICT).

Figure 9. Some of the many bots that are part of the Checkpoint Exercise in Second Life
One big challenge was to ensure that the tomb guard agent stayed online all
the time, which was more difficult than we thought it would be because the
persistence of the virtual world is sometimes volatile. The overall world is in-
deed constantly on and available, but it happens that a particular sim, or island,
may suddenly go offline for an unspecified amount of time or be restarted pe-
riodically by Linden Lab to install updates. An island’s owner might also de-
cide to intermittently reboot the sim to improve interaction and reduce lag, a
common problem. We set up scripts to monitor the agent’s status, and to relog
it into the world if its sim went offline. However, if it tried to log in before
that sim came back online, it would be automatically routed to some nearby
sim. The monitor would then indicate it was back online, but we would soon
hear from the veterans group that it was not in the right place (ICT 2010b).
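The fix we converged on amounts to a watchdog loop that treats "logged in" and "in the right place" as two separate checks. A minimal sketch of that logic follows; the function parameters are placeholders for the platform calls involved, and the region name is invented.

```python
import time

HOME_REGION = "Memorial Island"  # illustrative region name
CHECK_INTERVAL = 60              # seconds between status checks

def watchdog(is_online, current_region, relog, teleport_home):
    """Keep the agent online AND in its home region, not merely logged in."""
    while True:
        if not is_online():
            relog()          # the home sim may still be down; relog and retry
        elif current_region() != HOME_REGION:
            teleport_home()  # the login was rerouted to a neighboring sim
        time.sleep(CHECK_INTERVAL)

# In deployment this would be called with real platform bindings, e.g.:
# watchdog(bot.is_online, bot.region_name, bot.relog, bot.teleport_home)
```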
The creation of the Honor Guard for the veterans earned us a trusting relationship with the group. It also serves as an excellent example of why such
agents should be encouraged in Second Life. While some people might find
ways to use agents or bots in objectionable ways, the potential of these agents
for positive interaction outweighs arguments for their exclusion.
5.0 Conclusion
We foresee a very active future for ECAAs deployed in virtual worlds. While
they are not yet as sophisticated as many of their virtual human counterparts in
the research arena, they can still enrich virtual spaces, serve as much needed
guides and docents, teach valuable lessons, provide symbolic presences, and
fill a myriad of uses – only some of which have yet to be imagined. The mer-
curial architectures of persistent, networked virtual worlds must be continually
assessed to both determine what these agents can potentially achieve, and to
plan for spaces that can support their newest uses. The advancement of virtual
world ECAAs will only be achieved through diligent study and exploration of
these worlds because it is the limitations of current virtual worlds that restrict
the agents’ abilities to exhibit intelligence, emotions, and learning capacity
similar to that of the more mature virtual human technologies. Virtual worlds
are here to stay, and will be an increasingly active part of how we interact with
one another in the future. We encourage more work in this field and look for-
ward to the many changes such efforts will bring.
Acknowledgements:
Some of the projects described herein have been sponsored by the U.S. Army Research, Development, and Engineering Command (RDECOM).
We would like to thank the many talented members of all the interdisciplinary teams that
comprise the Virtual Human research group at the ICT for their dedicated work and con-
tributions that enabled the adaptation of virtual human technology to virtual worlds.
More about their work can be found at www.ict.usc.edu.
References
http://www.msnbc.msn.com/id/24668099/ns/technology_and_science-innovation/t/second-life-frontier-ai-research/ Accessed 1 June 2011.
ICT (2010a) SimCoach Project Description. Available at
www.ict.usc.edu/projects/simcoach Accessed 15 December 2010.
ICT (2010b) http://projects.ict.usc.edu/force/cominghome/honorguard.php
Jan D, Roque A, Leuski A, Morie J, Traum D (2009) A Virtual Tour Guide for Virtual Worlds. In Proceedings of the 9th International Conference on Intelligent Virtual Agents, IVA 2009, Amsterdam, The Netherlands, September 14-16, 2009. Lecture Notes in Computer Science. Ruttkay Z, Kipp M, Nijholt A, Vilhjálmsson HH (eds). Springer: 372-378.
Jan D, Chance E, Rajpurohit D, DeVault D, Leuski A, Morie J, Traum D (2011) Checkpoint Exercise: Training with Virtual Actors in Virtual Worlds. In Proceedings of the 11th International Conference on Intelligent Virtual Agents, IVA 2011, Reykjavik, Iceland, September 15-17, 2011.
Kallmann M, Marsella S (2005) Hierarchical motion controllers for real-time
autonomous virtual humans. Proceedings of the 5th International working
conference on Intelligent Virtual Agents (IVA): 243–265. (Kos, Greece).
Kenny P, Hartholt A, Gratch J, Traum D, Swartout W (2007) The More the
Merrier: Multi-Party Negotiations with Virtual Humans. AAAI 2007 (Van-
couver, British Columbia, Canada, July 2007)
Laird JE (2001) Using a Computer Game to Develop Advanced AI. IEEE Computer 34(7): 70-75.
Lee J, Marsella S, Traum D, Gratch J, Lance B (2007) The Rickel Gaze Mod-
el: A Window on the Mind of a Virtual Human. Proceedings of the 7th In-
ternational Conference on Intelligent Virtual Agents: 296-303.
Lehman JF, Laird JE, Rosenbloom PS (2006) A Gentle Introduction to Soar: 2006 update. Available online at http://ai.eecs.umich.edu/soar/sitemaker/docs/misc/GentleIntroduction-2006.pdf Accessed 15 December 2010.
Martin JC, Pelachaud C, Abrilian S, Devillers L, Lamolle M, Mancini M (2005) Levels of representation in the annotation of emotion for the specification of expressivity in ECAs. In Proceedings of Intelligent Virtual Agents (2005).
Mauldin M (1994) ChatterBots, TinyMuds, and the Turing Test: Entering the Loebner Prize Competition. In Proceedings of the Twelfth National Conference on Artificial Intelligence, AAAI Press.
McLellan H (1996) Virtual realities. In Jonassen DH (ed) Handbook of research for educational communications and technology. Macmillan, New York: 457-487.
Mims C (2010) Whatever Happened to … Virtual Reality? Technology Review, MIT Press. Available online at http://www.technologyreview.com/blog/mimssbits/25917/ Accessed 14 May 2011.
Morie JF, Haynes E, Chance E (2010) Warriors' Journey: A Path to Healing
through Narrative Exploration. In Proceedings of the International Confer-
ence Series on Disability, Virtual Rehabilitation and Associated Technolo-
gies, and ArtAbilitation. International Society for Virtual Rehabilitation.
Morningstar C, Farmer FR (1991) The Lessons of Lucasfilm's Habitat. In Cyberspace: First Steps, Benedikt M (ed). MIT Press, Cambridge, MA: 273-302.
Nilsson NJ (1995) Eye on the Prize. AI Magazine 16(2): 9-17.
Nino T (2007) Peering Inside – Second Life's User Retention. Published online at Massively by Joystiq. Available at http://massively.joystiq.com/2007/12/23/peering-inside-second-lifes-user-retention/ Accessed 20 June 2011.
Nino T (2009) Second Life traffic gaming: A chat with a bot-operator, and dire portents for Lucky Chairs. Published online at Massively by Joystiq. Available at http://massively.joystiq.com/2009/06/03/second-life-traffic-gaming-a-chat-with-a-bot-operator-and-dire/ Accessed 15 December 2010.
Papp R (2010) Virtual worlds and social networking: reaching the millennials. Journal of Technology Research 2: 1-15.
Rickel J, Marsella S, Gratch J, Hill R, Traum D, Swartout W (2002) Toward a new generation of virtual humans for interactive experiences. IEEE Intelligent Systems 17(4): 32-38.