User Interface Design
Preface
Contributors
Chapter 1 Introduction: Bridging the Design Gap
Chapter 2 Bridging User Needs to Object Oriented GUI Prototype via Task Object Design
Chapter 3 Transforming Representations in User-Centered Design
Chapter 4 Model-Based User Interface Design: Successive Transformations of a Task/Object Model
Chapter 5 Lightweight Techniques to Encourage Innovative User Interface Design
Chapter 6 Interaction Design: Leaving the Engineering Perspective Behind
Chapter 7 Mind the Gap: Surviving the Dangers of User Interface Design
Chapter 8 Transforming User-Centered Analysis into User Interface: The Redesign of Complex Legacy Systems
PARTICIPANTS
Fourteen people with a wide variety of design experience participated in the
workshop. Including the organizers, three came from academia, ten from large
software development companies, and one operates her own consulting firm. The
participants included the following individuals:
Acknowledgments
I express my appreciation to the workshop participants for their willingness not
only to share their knowledge and experience in interface design at the workshop,
but especially for their efforts in writing the chapters that make up the substance
of this book. I regret that after his enthusiastic participation in the workshop,
Allan Risk was unable to complete a chapter for inclusion in the book. Likewise,
following his efforts in organizing the workshop, Ron Zeno was unfortunately
unable to contribute to the book.
I also want to thank our CRC publisher, Ron Powers, and his assistant, Cindy Carelli,
for their patience and flexibility in working with us to produce this volume.
Finally, I express my gratitude to Shannon Ford, who found us and was willing
to provide helpful feedback on the chapters, especially the introduction (Chapter
1).
The Editor
Larry Wood is a professor of cognitive psychology at Brigham Young University, who
has taught human-computer interaction and interface design courses and consulted
on design projects for 10 years. His research interests include all aspects of
user-centered design.
Contributors
Tom Dayton
Bellcore
Piscataway, New Jersey
Thomas M. Graefe
Chapter 1
Introduction: Bridging the Design Gap
Larry E. Wood
Brigham Young University, Provo, Utah
email: [email protected]
TABLE OF CONTENTS
1. Good Interface Design
2. The Gap: Or Then a Little Magic Happens
3. Bridging the Gap: Major Issues/Considerations
4. Individual Chapter Descriptions
4.1. Dayton, McFarland, and Kramer (Chapter 2)
4.2. Graefe (Chapter 3)
4.3. Ludolph (Chapter 4)
4.4. Monk (Chapter 5)
4.5. Nilsson and Ottersten (Chapter 6)
4.6. Rantzer (Chapter 7)
4.7. Rohlfs (Chapter 8)
4.8. Scholtz and Salvador (Chapter 9)
4.9. Simpson (Chapter 10)
4.10. Smith (Chapter 11)
5. Conclusion
6. References
providing feedback regarding actions performed, and preventing users from making
errors).
The same principles and guidelines outlined by Norman can also be applied to the
design of a software application, particularly its user interface, which is the
focus of this book. To be usable, a user interface must provide access to the
functions and features of an application in a way that reflects users' ways of
thinking about the tasks that the application will support. This requires
that the application not only support the necessary aspects of the users'
work, but also provide the means for them to interact with the application
in ways that are intuitive and natural. Great improvements in the effectiveness
of user interfaces have been made during the last 15 years, through (1) the improved
components and resources available in Graphical User Interfaces (GUIs), pioneered
by such systems as the Xerox Star, precursor to the Apple Macintosh desktop and
to Windows (Smith et al., 1982), and (2) the transition from system-centered
to user-centered design methods (Norman and Draper, 1986).
The Star and related GUI systems introduced new hardware resources and components,
while the user-centered design orientation focused design methods on the potential
users of an application. In essence, the new hardware and software resources
provided the building blocks of more usable computer applications, while the
user-centered orientation provided the impetus to develop methods to ensure that
the building blocks were used in ways that fit the users' way of thinking about
and performing their work. In this way an interface could be made more natural and
intuitive than had previously been the case.
Figure 1.1.
Considerable effort has been expended to document methods related to each of the
activities in Figure 1.1. In support of the activities for identifying users and
determining their support requirements, there are sources discussing methods for
gathering user information through field methods (e.g., Wixon and Ramey, 1996) and
formal task analysis methods (e.g., Johnson, 1992). Furthermore, there are sources
that emphasize the importance of representing work-related tasks via scenarios
(e.g., Carroll, 1995) and use cases (e.g., Constantine, 1995). For producing
potential designs, there are a variety of sources that provide guidelines regarding
the important characteristics of a usable interface (e.g., Fowler and Stanwick,
1995) and for producing design prototypes using both low- (e.g., Monk et al., 1993)
and high-fidelity methods (e.g., Hix and Schulman, 1991). Also, much has been written
about the methods for evaluating a user interface, once it has been produced, either
by expert review (e.g., Nielsen, 1993) or by formal testing with potential users
(e.g., Dumas and Redish, 1993).
As indicated above, while there are some excellent sources of information on user
interface design, none contains specific descriptions of how a designer transforms
the information gathered about users and their work into an effective user interface
design. This is indicated in Figure 1.1 by the question mark between User
Requirements and Interface Designs. Some might argue that is to be expected because
that process is a highly creative one and that creative processes are inexplicable
by their nature. While this may be true in a limited sense, designs don't really
appear as if by magic.¹ They are largely the result of thoughtful, conscious
processes, and the chapters in this volume represent an attempt to make more
explicit just how designers bridge the gap.
¹ For more on this topic, the interested reader is referred to the Creative Cognition approach (Ward, Finke, and Smith,
1995), which assumes that the same methods used to understand normal cognitive processes apply equally to creative ones.
Functionally, they are fictional but typical stories describing the users' work.
They also function as effective means for evaluating the design alternatives.
The four representations described above are preparatory to actually producing an
interface design and are intended to enable the designer to incrementally and
iteratively refine the understanding of the necessary aspects of the users' work
environment. Once that goal has been achieved, then the beginnings of a design are
formulated in terms of a dialogue model, which frequently consists of a set of rough
screen sketches and some indication of the flow among them. The dialogue model is
at the same level of abstraction as the WOD, the exception lists, and scenarios,
and thus can be verified against them. The verification might suggest a useful
metaphor for data representations or other presentation/manipulation aspects of
the final interface. Necessary constraints on the order of the work can be imposed
in the dialogue model, but should be restricted to only those that are necessary.
For Nilsson and Ottersten, the final phase of the bridging process begins with a
conceptual design, describing at a high level how the various parts of the users'
work fit together in a way that matches the users' mental model. These
conceptualizations are represented in rough sketches for further exploration in
design. In the second phase of design, a functional design is created by making
rough sketches of potential screen designs, also showing some preliminary
information about potential GUI controls. The controls are then used in user
evaluations with design guidelines based on principles of human perception and
cognition, which is part of Linn Data's AnvändarGestaltning method.
In exploratory design, new concepts are created through having potential users
consider user interaction scenarios (narrative descriptions of what people do and
experience as they attempt to make use of a new product). These new concepts are
then visualized through rough sketches and simple physical models, paper-based
storyboards, computer-based simulations with voice-over, and even scripted play
with live actors. Scenarios provide an effective method for brainstorming about
and for clarifying new product concepts. Once a potential set of viable concepts
is established by the design team, they are further refined in focus groups with
potential users. The final result of the exploratory stage is a set of high-level
user values, which encompass the intended customers' future needs, wants, and
goals.
The goal of the analysis and refinement stage is to verify the user values and to
define the new product attributes. In defining new-generation products, there is
a larger than usual discrepancy between the existing task model (use of current
products) and an enhanced task model (how the tasks will change, given the new
product). Bringing them together is accomplished using more explicit and detailed
scenarios to help users understand the implications and impact of the new product
on their work. These scenarios are often presented as videos of actors using the
new products, which helps convey more realism to potential users and helps to elicit
more useful feedback.
In the formal design stage, scenarios are used to choose, design, and verify the
conceptual model and the metaphors that will be used in the final product. The
scenarios are presented as low-fidelity (paper) prototypes to design and verify
the high-level dialogue model of the interface. High-fidelity prototypes (computer
simulations) are then used to further refine the interface details. In the Orbitor
project, a composite metaphor was chosen from the various types of tasks that were
being combined into the final product. At the highest level, an environment metaphor
was used (call environment, message center environment, and filing cabinet
environment). Within each environment, appropriate visual metaphors were used
(e.g., telephone and file folder icons) for specific tasks.
5. CONCLUSION
User interface design is a complex process, resulting in many different issues that
need to be considered and many questions that need to be answered. Some issues are
more pervasive than others and recur across different contexts. To push the bridging
analogy, it is important to use effective building methods, but it is equally
important to have the proper materials. Some of the issues most central to design
(as emphasized by the contributors to this volume) are scenarios and use-cases,
metaphor use, high-level dialogue (or interaction) design, high- and low-fidelity
prototype development, and the issue of object- vs. task-oriented dialogue style.
Those familiar with user interface design will note that none of these issues are
new or unique. Our attempt here is simply to make more explicit how they contribute
to the building of that elusive bridge. Various authors emphasize these issues to
a greater or lesser extent, as outlined in the descriptions of individual chapters
above. What is obvious from the various approaches described in this volume is that
there are many effective ways to build the bridge, each suited to particular
contexts and constraints. Our hope is that readers will be able to use them to suit
their own needs and circumstances as they also attempt to bridge that gap between
User Requirements and GUI design.
6. REFERENCES
Carroll, J. M., Scenario-Based Design: Envisioning Work and Technology in System
Development, John Wiley & Sons, New York, 1995.
Constantine, L. L., Essential modeling: use cases for user interfaces, Interactions,
ii.2, 34, 1995.
Dumas, J. S. and Redish, J. C., A Practical Guide to Usability Testing, Ablex,
Norwood, N.J., 1993.
Fowler, S. L. and Stanwick, V. R., The GUI Style Guide, AP Professional, Boston,
1995.
Hix, D. and Schulman, R. S., Human-computer interface development tools:
a methodology for their evaluation, Communications of the ACM, 34(3), 74-87, 1991.
Johnson, P., Human Computer Interaction: Psychology, Task Analysis, and Software
Engineering, McGraw-Hill, London, 1992.
Monk, A. F., Wright, P. C., Davenport, L. and Haber, J., Improving your
Human-Computer Interface: A Practical Technique, BCS Practitioner Series,
Prentice-Hall, 1993.
Muller, M. J., Tudor, L. G., Wildman, D. M., White, E. A., Root, R. W., Dayton,
T., Carr, B., Diekmann, B., and Dykstra-Erickson, E., Bifocal tools for scenarios
and representations in participatory activities with users, in Scenario-Based
Design for Human Computer Interaction, Carroll, J. M., Ed., John Wiley & Sons, New
York, 1995, 135-163.
Nielsen, J., Usability Engineering, Academic Press, Boston, 1993.
Norman, D. A. and Draper, S. W., Eds., User Centered System Design: New Perspectives
on Human-Computer Interaction, Erlbaum Associates, Hillsdale, N.J., 1986.
Norman, D., The Design of Everyday Things, Doubleday, New York, 1990.
Smith, D. C., Irby, C. H., Kimball, R. B., Verplank, W. H., and Harslem, E. F.,
Designing the Star user interface, Byte, 7(4), 242, 1982.
Ward, T.B., Finke, R.A., and Smith, S.M., Creativity and the Mind: Discovering the
Genius Within, Plenum Press, New York, 1995.
Wixon, D. and Ramey, J., Eds., Field Methods Casebook for Software Design, John
Wiley & Sons, New York, 1996.
Chapter 2
Bridging User Needs to Object Oriented GUI Prototype via
Task Object Design
Tom Dayton, Al McFarland, and Joseph Kramer
Bellcore, Piscataway, New Jersey
email: [email protected]
[email protected]
Table of Contents
Abstract
1. Introduction
1.1. A Typical Session
1.2. Broad Context of the Bridge
1.3. Overview of the Bridge's Explicit Steps
2. Pervasive Techniques and Orientations
2.1. Participatory Analysis, Design, and Assessment (PANDA)
2.2. Whitewater Approach
2.3. Room Setup
2.4. Low-Tech Materials
2.5. Gradually Increasing Investment
2.6. Breadth and Context First
2.7. Task Orientation
2.8. Consistent, Object-Oriented GUI Style
2.9. Multiplatform, Industry-Standard GUI Style
2.10. Design First for Less-Experienced Users
2.11. Facilitators
3. Explicit Steps
3.1. Part 1: Expressing User Requirements as Task Flows
3.1.1. Big Picture
3.1.2. Current Task Flows
3.1.3. Issues and Bottlenecks in Current Task Flows
3.1.4. Scoping the Current Task Flows
3.1.5. Blue Sky Task Flows
3.1.6. Scoping the Blue Sky Task Flows
3.1.7. Realistic and Desirable Task Flows
3.1.8. Scoping the Realistic and Desirable Task Flows
3.2. Part 2: Mapping Task Flows to Task Objects
3.2.1. Identities of Task Objects
3.2.2. Attributes of Task Objects
3.2.3. Actions on Task Objects
3.2.4. Containment Relations Among Task Objects
3.2.5. Usability Testing of the Task Objects Against the Task Flows
3.3. Part 3: Mapping Task Objects to GUI Objects
3.3.1. Identities of GUI Objects
3.3.2. Views of GUI Objects
3.3.3. Commands for GUI Objects
3.3.4. Usability Testing of GUI Objects Against the Task Flows
4. Conclusion
5. Acknowledgments
6. References
ABSTRACT
This chapter sketches out The Bridge, a comprehensive and integrated methodology
for quickly designing object-oriented (OO), multiplatform, graphical user
interfaces (GUIs) that definitely meet user needs. Part 1 of The Bridge turns user
needs into concrete user requirements represented as task flows. Part 2 uses the
Task Object Design (TOD) method to map the task flows into task objects. Part 3
completes the bridge by mapping the task objects into GUI objects such as windows.
Those three parts of the methodology are done back-to-back in a single, intense
session, with the same team of about five participants (notably including real users)
working at a small round table through several consecutive days. The methodology
is unusual in its tight integration not only of its explicit steps, but also of
several pervasive techniques and orientations such as Participatory Analysis,
Design, and Assessment (PANDA) methods that involve users and other stakeholders
as active collaborators. This chapter describes both the underlying portions and
the explicit steps of this bridge over the gap between user needs and GUI design.
1. INTRODUCTION
Traditionally, designing the fundamentals of OO GUIs to meet user needs has been
done seemingly by magic. There have been methods for the surrounding steps
gathering user requirements before, and polishing the fundamental design after.
However, there have been few if any systematic ways to step over the gap in between.
Some concrete examples: Once the users' task flows are designed, how does the
designer decide which of those flows' data elements are to be represented as entire
windows and which merely as object attributes within the client areas of those
windows? How should users navigate between windows? Style guidelines help decide
the exact appearance of a menu in a window, but how does the designer decide which
user actions need to be represented at all, which windows they should be in, and
whether to represent them as menu choices or buttons? This chapter describes how
to bridge that gap between user requirements and OO GUI design by using task
objects in a participatory methodology we call The Bridge. Arguments for the
The full methodology has not yet been fully described in any generally available publications.
The most complete description other than this chapter is handed out as notes when the methodology
is taught in all its breadth and detail, for a consulting fee. Portions of the methodology have
been taught at conferences such as UPA (Dayton and Kramer, 1995, July; Kramer, Dayton, and Heidelberg,
1996, July), OZCHI (McFarland and Dayton, 1995, November), CHI (Dayton, Kramer, McFarland, and
Heidelberg, 1996, April), APCHI (Kramer and Dayton, 1996, June), and HFES (Dayton, Kramer, and Bertus,
1996, September).
For example, sessions typically last 3 days but for large complicated projects can
last 7 days. To give you a flavor before we get into the details of those variations,
here is a short story of a typical session:
Monday begins with the two facilitators directing the five participants (expert
user, novice user, usability engineer, developer, system engineer) to the five
seats at a small round table. The facilitators briefly explain the session's goals
and approach, and lean into the table to start the group writing the Big Picture
index cards. Within a few minutes the participants themselves are writing task steps
on cards and arranging the cards into flows on the blank flip chart paper in the
table's communal center. After push-starting the group, the facilitators fade into
the background, floating around the edges of the table and becoming most noticeable
when introducing each step of the methodology. Within the first half day the five
participants have become a well-functioning team and feel that they are running
the session with only occasional help from the facilitators. The atmosphere is fun,
sometimes goofy, with periodic showers of paper as discarded index cards and sticky
notes are torn up and thrown over shoulders. At the end of Monday the walls are
covered with the task flows that were created on the table. Each flow is composed
of a set of index cards connected with sticky arrows, all attached with removable
tape to a piece of flip chart paper (see Figure 2.1 left side). Commentaries are
stuck on top of the task flows in a riot of fluorescent stickies of assorted sizes
and shapes.
By lunch time Tuesday the table holds an additional dozen cards, each having stuck
to its lower edge a sequence of blue, pink, and yellow sticky notes (top right of
Figure 2.1). The team designed those dozen task objects to represent the units
of information that are needed by users to do the task flows posted on the walls.
The floor is littered with an additional dozen cards that were discarded as the
team refined its notion of what information qualifies as a task object. The team
has verified that the set of task objects is usable for doing the tasks, by walking
through the task flows while checking that there exists a task object listing both
the data and the action needed for each task step.
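The walkthrough just described is essentially a coverage check: every task step must be supported by some task object that carries both the step's data (as an attribute) and the step's action. As a minimal sketch of that check, with all names and data structures being our own illustrations rather than part of The Bridge's published vocabulary:

```python
# Sketch of the verification walkthrough: for each (action, data) task step,
# confirm that some task object lists the data among its attributes and the
# action among its actions. Names here are illustrative assumptions.

def verify_task_objects(task_flows, task_objects):
    """Return the task steps not covered by any task object.

    task_flows   -- list of flows; each flow is a list of (action, data) steps
    task_objects -- dict: object name -> {"attributes": set, "actions": set}
    """
    uncovered = []
    for flow in task_flows:
        for action, data in flow:
            supported = any(
                data in obj["attributes"] and action in obj["actions"]
                for obj in task_objects.values()
            )
            if not supported:
                uncovered.append((action, data))
    return uncovered

# Example loosely based on the chapter's billing task.
objects = {
    "Customer": {"attributes": {"Billing Info", "Address"},
                 "actions": {"Print", "Edit"}},
}
flows = [[("Print", "Billing Info"), ("Edit", "Address")]]
print(verify_task_objects(flows, objects))  # -> [] (every step is covered)
```

An uncovered step surfacing from such a check is the cue to add an attribute or action sticky, or to reinstate a discarded task object.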
Up to now the facilitators have banished the GUI from the conversation, to keep
the participants focused on the abstract task and its units of information. However,
after lunch on Tuesday, the facilitators give to the participants photocopies of
GUI windows whose client areas and title bars are blank. Participants make a window
for each task object by hand drawing some of the attributes from the blue sticky
of that task object into the client area of a window (bottom right of Figure 2.1).
By the end of this second day only eight of the original task objects remain on
the table, the participants' notion of "objecthood" having been refined by their
act of translating task objects' attributes into window views.
Figure 2.1 The three types of artifacts resulting from the three consecutive parts
of The Bridge methodology.
Part 1 yields a set of hand-lettered index cards and removable sticky arrows
representing task flows. Part 2 extracts task objects from those flows and
represents them as hand-lettered index cards with removable sticky notes. Part 3
expresses each task object as a GUI object, usually as a window; the window contents
are hand drawn on photocopied empty windows.
Wednesday starts with participants drawing additional paper windows to represent
additional views for some objects' attributes. After the coffee break comes the
first GUI usability test. The facilitators use masking tape to delineate a computer
screen as a big empty rectangle on the table. A user points and clicks with a finger
within the rectangle, the other participants serving as the computer by adding and
removing paper windows as dictated by the user's actions. As the task flows on
the wall are executed in this way, the team discovers flaws in the paper prototype,
revises the prototype and the task objects, and resumes the test. At the end of
this third day the team does another usability test, this time with windows to which
the team has added menus and buttons as GUI expressions of the actions that were
listed on the task objects' pink stickies. There are only four objects left. The
window. The team would then immediately redesign and reprototype so that the
different data were displayed separately, such as in different views, different
notebook pages, or different Include settings. All of those activities would happen
within minutes, in the same Bridge session, without any resources being wasted in
writing up or coding the poor design.
The above is an illustration that The Bridge is not the type of user-centered design
process that just records whatever users say they want. Instead, The Bridge's
resulting design realistically can be developed, with usability compromised as
needed to get the GUI delivered on time. The Bridge has an advantage over many
competing approaches, in its provision of excellent support for making those
tradeoffs rationally and with minimum expenditure of resources on infeasible or
poorly usable designs. Not just users are present in the session; various management
camps are represented in the persons of the usability engineer, developer, system
engineer, and perhaps others. Those participants are selected partly for their
knowledge of management perspectives and for their power to represent management
in making at least rough commitments.
Another important support for management is that not all decisions need be made
finally during The Bridge session. The session outputs priorities on portions of
the task flows, on portions of the GUI, and on the issues that are documented during
the session. There is even a Blue Sky version of the task flows that is acknowledged
by everyone to be infeasible, but that is publicly documented as an ideal goal of
the project. All this information guides the development team during the months
or years after the session, as the team adjusts the design to suit the unfolding
practical realities of the software development project.
This chapter does not explain any of the methodologies in the above list other than
The Bridge. Before we delve into details of The Bridge, here is a brief overview
of its three major, explicit steps.
Part 1 of The Bridge produces one or several well-documented task flows from the
user perspective. The task flows concretely represent what the user wishes to
accomplish with the proposed software product. However, the task flows do not refer
to underlying system architecture or existing system representations of the data
that the user wishes to access or manipulate. For example, the task flows would
indicate that the user wants "to know the customer's total billable amount of
charges" rather than "to view The Billing Summary Screen." The task flow
representation is index cards taped to flip chart paper, with lines of flow
between the cards drawn on long, skinny, removable sticky notes
("stickies"). Each step in the task is represented as a noun and a verb written
on a card, such as "Print Customer's Bill."
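In data terms, such a paper task flow is simply an ordered sequence of verb-plus-noun cards joined by sticky arrows. A tiny sketch (the step wording is our own example, in the spirit of the chapter's billing task):

```python
# A Part 1 task flow reduced to data: an ordered list of (verb, noun) pairs,
# one pair per index card, with order standing in for the sticky arrows.
# The specific steps below are illustrative.

billing_flow = [
    ("Look Up", "Customer"),
    ("Review", "Billing Info"),
    ("Print", "Customer's Bill"),
]

# Each card reads as "Verb Noun", e.g., "Print Customer's Bill".
cards = [f"{verb} {noun}" for verb, noun in billing_flow]
print(" -> ".join(cards))
```

The nouns in these pairs are exactly what Part 2 harvests as candidate task objects.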
Part 2 (Task Object Design) bridges the task flows to the GUI design by creating
somewhat abstract task objects from the nouns embedded in the task flows that
came out of Part 1. Each task object's name is copied from a task flow step onto
an index card and thrown onto the table around which the group works. For example,
if one step in a task flow is "Print Customer's Bill," then "Bill" is written
on an index card that is thrown onto the table.
The mapping and verification process continues within Part 2 by noting each task
object's attributes on a sticky note attached to that object's index card. Many
attributes are copied from their mentions in the task flows, but others come from
the team's knowledge. This step eliminates many objects by discovering that their
data can be adequately represented merely as attributes of other objects. If a task
step is "Print Customer's Bill," then the "Customer" index card's sticky note
might get the attribute category name "Billing Info" written on it, allowing the
"Bill" index card to be thrown out. Then the actions that users take on that object
are listed on yet another sticky; if a task step is "Print Customer's Bill,"
then the "Customer" index card's sticky has the action "Print" written on it.
The most difficult step in Part 2 is creating a strict hierarchy of all the task
objects. The hierarchy is the mental model that the users feel best represents the
relationships among the task objects on the table. For example, in designing an
interface for managing a hotel, a user may want the "Hotel" object to contain a
"Customer" object. This relationship can be totally different from the data structure
being used by the developers and does not necessarily imply object-oriented
inheritance. The attributes that are not child objects are "properties." A
concrete example is that a "Closet" object will have the clothing it contains
as its child objects, and "Type of Material the Closet Is Made Of" as a property.
In the hotel example, a "Customer" object has multiple "Reservations" as child objects
and an address as a property. Each object's index card gets a sticky that is marked
with that object's parent and children.
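The index card with its three stickies amounts to a small record: an identity, a list of properties, a list of actions, and parent/child containment links. One way to sketch that record and the hotel example, purely as an illustration of the structure (the class and field names are ours, not The Bridge's):

```python
# Sketch of a Part 2 task object: identity, plain properties, user actions,
# and child objects forming a strict containment hierarchy. Attributes that
# are themselves task objects are children; the rest are properties.
# All names are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class TaskObject:
    name: str
    properties: list = field(default_factory=list)   # e.g., "Address"
    actions: list = field(default_factory=list)      # e.g., "Print"
    children: list = field(default_factory=list)     # contained TaskObjects

# The hotel example from the text: Hotel contains Customer; a Customer has
# Reservations as child objects and an address as a property.
reservation = TaskObject("Reservation", properties=["Dates", "Room Type"])
customer = TaskObject("Customer",
                      properties=["Address", "Billing Info"],
                      actions=["Print"],
                      children=[reservation])
hotel = TaskObject("Hotel", children=[customer])

# Strict hierarchy: every object except the root has exactly one parent.
print(hotel.children[0].name, "->", hotel.children[0].children[0].name)
```

Note that, as the text stresses, this containment hierarchy reflects the users' mental model, not the developers' data schema and not class inheritance.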
Part 3 translates the task objects, with their attributes, actions, and
hierarchical containment relations, into GUI objects. A GUI object is represented
as a window when open and as an icon, list row, or other GUI element when closed.
The window client area contents are GUI representations of the attributes listed
on the task object's attributes sticky. The form in which an attribute is represented
in a client area depends partly on whether that attribute is listed as a child object
or just a property. Each GUI object's windows are roughly prototyped in paper,
with enough detail to give an idea of the type of representation (e.g., graphical,
textual, list). Any object can be represented by multiple types of windows
(multiple views) that can be open simultaneously.
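The Part 3 mapping can be sketched as a simple transformation from task object to rough window description: the object's name becomes the window title, its actions become menu commands, and its attributes become client-area contents, rendered differently depending on whether each is a plain property or a child object. The rendering choices and names below are our own illustration, not a prescription from the methodology:

```python
# Sketch of the Part 3 mapping: a task object (here a plain dict) becomes a
# rough paper-window description. Plain properties might appear as simple
# fields; child objects as container views such as lists. Illustrative only.

def window_for(task_object):
    """Map a task object to a rough description of its primary window."""
    client_area = []
    for prop in task_object.get("properties", []):
        client_area.append(f"field: {prop}")          # simple text field
    for child in task_object.get("children", []):
        client_area.append(f"list view of: {child}")  # container of child objects
    return {"title": task_object["name"],
            "menu": task_object.get("actions", []),   # actions become commands
            "client_area": client_area}

customer = {"name": "Customer",
            "properties": ["Address"],
            "actions": ["Print"],
            "children": ["Reservation"]}
print(window_for(customer))
```

A second view of the same object would simply be another such description with a different selection and rendering of the attributes.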
What remains after the three-part Bridge is the filling in of design details. Those
details are important, of course. However, they are much less important than the
fundamental organization, appearance, and behavior that The Bridge does design.
The remaining details are most efficiently designed by the usability engineer
outside of a participatory session, though with consultation from users and others.
The output of the three-part Bridge methodology is a working paper prototype of
an object-oriented GUI's foundation, whose design has been driven by the user
requirements via the Task Object Design center span.
Every activity in the three-part Bridge involves, from start to finish at the small
round table, a team of representatives of the major stakeholders in the software
product's usability: users, usability engineer, system engineer, developer, and
perhaps others such as subject matter experts, trainers, documenters, and testers.
You should substitute whatever titles you use for the people who have those
responsibilities, but in this chapter here is what we mean by those titles:
Users are real potential users of the software product. Users are not just
people who think they know what users want (e.g., users' managers,
purchasing agents), nor people who were users once upon a time. If there are
several classes of users, for instance experts and novices, each class should
have a representative participating in the session. Users do act as designers
during the session. Like all the other participants, they are more expert
in some aspects of designing than in others, but the methodology facilitates
users acting to some degree as designers during all phases of the session.
Usability Engineers are the people primarily responsible for the
usability of the software and for all the activities that directly contribute
to usability. Other people also are involved, but the usability engineers
have formal responsibility and so initiate, coordinate, and complete the
activities:
Gathering preliminary data about the user, task, and work environment
populations and about the business needs that the worker-computer
combination is supposed to meet
Analyzing, documenting, and sometimes redesigning the users task flows
Designing the GUI
Prototyping the GUI
Designing, executing, and analyzing usability tests on the prototype,
then redesigning the GUI on the basis of those results
Writing and maintaining the user requirements formal document that drives
the GUI coding and testing
Helping the testers evaluate the final software product's conformance
to the formal requirements document
Helping the learning support people design the user support documentation,
on-line help, and training
If there is room at the table for only one usability engineer, that person
should be the one with the most knowledge and skill in the GUI style and the
Bridge methodology. The other usability engineers may observe from an
adjacent room. The project's usability engineers probably should not be
facilitators of that project's sessions, because they have a stake in the
project (e.g., a complex design will take more time to write up in the user
requirements document). Even if the usability engineers really can act
neutrally, the other participants may suspect them of bias and so not respect
them as facilitators.
System Engineers are the people responsible for the underlying system
requirements, such as the exact natures and sources of the data that the
underlying computer systems must provide to the GUI code layer. The system
engineers write a formal system requirements document that, together with
the user requirements document, drives the developers' work. Naturally, the
system engineers must work closely with the usability engineers to ensure
that the user requirements are feasible for the underlying systems, and
sometimes the system requirements document is combined with the user
requirements document. Ideally, The Bridge is used right at the project's
start, before any system engineering requirements have been written. That
allows the user needs to drive the software product's deep functionality
as well as its look and feel. However, even if the system requirements are
already set, a system engineer must participate in the session to help the
team cope with those constraints and to help invent workarounds. If there
is room at the table for only one system engineer, that person should be the
one most capable of estimating feasibilities and of committing system
engineers to work.
Developers are the people who write the code that provides the
functionality that is specified in the GUI and system requirements documents.
Sometimes there are separate developers for the GUI layer of code and for
the underlying back-end system code; the GUI coders are the most relevant
to the GUI design sessions, but the back-end developers also can make
excellent contributions. The Bridge should be used at the project's start,
before any back-end code, or contracts between the GUI and the back end, have
been written. Even if the GUI is being laid on top of a legacy system, the
GUI developers participating in the session can help the team by explaining
those underlying constraints as soon as they become relevant to the GUI
designing and by helping create GUI design workarounds. If there is room at
the table for only one developer, that should be the GUI developer most
capable of estimating feasibilities and of committing the developers to work.
All those participants work together at a single, small, round table, actively
collaborating in every activity. Of course, each participant contributes the most
from one area of expertise. Users, for example, are most expert in the task and
work environments, so are most active in designing the task flows and in doing the
usability testing. The usability engineer is the most active during designing of
the GUI per se. However, all the participants are quite capable of contributing
to some degree and from some perspective during every step of the methodology.
The Bridge facilitates everyone contributing to all the activities. One way is by
educating every participant in the other participants' areas of expertise. That
education is done mostly informally, sprinkled throughout the analysis, design,
and testing activities as the topics naturally arise. Another way is getting
participants to support each other by using their particular expertise to fill
in the information missing from the other participants' ideas. For example, a user
might compensate for the usability engineer's lack of task knowledge by designing
a window, writing some field names on a photocopied empty window. The user
might then complain that there is so much information in that window that the pieces
needed at any one time are hard to find. The usability engineer might instantly
respond, filling in the user's missing GUI design knowledge by suggesting
segmenting the window's contents into notebook pages. This symbiosis is no
different from the usability engineer relying on the developer and system engineer
for new ideas about how the underlying technology can help.
Getting even one representative of each of the stakeholder groups sometimes would
lead to more than the five or six session participants that we allow. In that case,
only the stakeholders most critical to the GUI per se sit at the table: three
users, one usability engineer, one system engineer, and one GUI developer. The other
stakeholders are allowed to silently observe, preferably from another room. There
are other techniques for handling too many stakeholders for a single session, such
as having successive design sessions with overlapping membership, and running two
tables at once with synchronization of the tables after each substep of the
methodology. With techniques such as those, The Bridge has been used successfully
in projects having more than 30 distinct user populations, and in projects having
over 100 staff (developers, system engineers, usability engineers, etc.). Such
advanced techniques are beyond the scope of this chapter.
The presence of representatives from all the stakeholder groups at the same table
allows rapid iterations of designing and testing, via two mechanisms of increased
efficiency:
Iterations are sped up. Time is not spent creating a design, writing it
up, and sending it to the developers and system engineers for review, only
to have the developers and system engineers reject it as infeasible. Instead,
a developer and system engineer are sitting right there at the table,
commenting on feasibility as soon as the design ideas pop up. Similarly,
usability engineers do not waste time detailing, documenting, creating a
computerized prototype, and arranging and running formal usability tests,
only to have users reject the fundamental design. Instead, users are sitting
right there at the table, giving feedback the instant the design ideas arise.
Iterations are eliminated. Rapid feedback on ideas generated by other
people is not the main benefit or point of participatory methods.
Participation means active collaboration of all the stakeholders in
generating the ideas. The initial ideas are better than they would have been
if the stakeholders were working isolated from each other, so the initial
iterations of creating really bad designs and testing them are just skipped.
PANDA improves the effectiveness of design as well as its efficiency. The ideas
generated by the team collaborating in real time often are far superior to the ideas
that could have been generated by the same people working isolated from each other
for any amount of time, even if they were communicating with each other
asynchronously or through narrow bandwidth media (Muller et al., 1993).
The great efficiency and effectiveness of the PANDA approach cannot be attained
merely by throwing this bunch of people together into a room. Standard-style
meetings do not encourage true collaboration. These stakeholders can actively
collaborate best in a fluid atmosphere that is barely on the organized side of
chaos: a whitewater model of analysis, design, and assessment.
Two ingredients are needed for whitewater: a medium that not just allows, but
encourages, a headlong pursuit, and strong but loose guidance to keep that energy
pointed toward the goal. The guidance is provided by two elements: the explicit
steps of the methodology and the session facilitators. (Facilitators are different
from participants; the facilitator role will be described later.) The medium is
provided by those same facilitators, by the composition of the participant group,
by the low-tech materials, and by the gathering of participants about a small round
table.
project. The short time available for the session makes premature work on details
dangerous, because that work could consume a substantial part of the session's
time, and could be pointless if later, higher-level work reveals as irrelevant the
entire area that was detailed. A more important danger of spending time on details
before context, and depth before breadth, is that the motivation of the participants
could suffer. That danger is especially great in one of these sessions, because
of the session's tenuous organization and high time pressure; participants can
easily become disoriented, frustrated, and dispirited. Skilled facilitators are
essential for skirting that danger.
OO GUIs provide a consistent and relatively small set of rules for users to follow
in creating, viewing, manipulating, and destroying GUI objects within and across
software products. OO GUIs provide a graphical environment that lets users
- see the relationships among GUI objects (e.g., the containment of a customer within a hotel is reflected as a Tom Dayton row in a list within the Big Al's Swank Joint window);
- change those relationships naturally and directly, by direct manipulation (e.g., move a customer from one room to another by dragging the Tom Dayton icon from the Room 101 window into the Room 106 window);
- change the way the GUI objects' contents are displayed (e.g., change the Big Al's Swank Joint window's appearance from a list of customers and rooms to a floor plan of the hotel).
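The three capabilities just listed can be sketched as a tiny object model. This is only an illustrative sketch, not part of the chapter; the class and method names (GuiObject, move_to, as_list) are invented for the example.

```python
# Illustrative sketch (invented names): a hotel GUI object model in which
# containment is explicit, relationships change by direct manipulation
# (move_to), and an object's contents can be rendered as a different view.
class GuiObject:
    def __init__(self, name):
        self.name = name
        self.children = []           # containment: what this object holds

    def add(self, obj):
        self.children.append(obj)

    def move_to(self, obj, target):  # direct manipulation, e.g., drag-and-drop
        self.children.remove(obj)
        target.add(obj)

    def as_list(self):               # one view: closed children as list rows
        return [child.name for child in self.children]

hotel = GuiObject("Big Al's Swank Joint")
room_101, room_106 = GuiObject("Room 101"), GuiObject("Room 106")
customer = GuiObject("Tom Dayton")
hotel.add(room_101)
hotel.add(room_106)
room_101.add(customer)

# Drag the Tom Dayton icon from the Room 101 window into the Room 106 window:
room_101.move_to(customer, room_106)
print(room_106.as_list())   # ['Tom Dayton']
```

The point of the sketch is that the user's operations map onto changes in a single containment structure, rather than onto steps in a fixed procedure.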
Consistency of GUI style, that is, of rules for the user viewing and manipulating
the GUI, is important to users regardless of the GUI's other style attributes
(Dayton, McFarland, and White, 1994; McFarland and Dayton, 1995). Consistency
within the single GUI being designed in this Bridge session allows users to do many
tasks once they have learned to do any task. Consistency of this GUI with
industry-wide style standards allows users to utilize their knowledge of other
software products in their use of this product and vice versa. Consistency of GUI
style also is important to our methodology, because the participants in our sessions
need learn only a small set of GUI style rules for designing and testing the entire
GUI.
Global optimization of the GUI to suit the user's entire job rather than any subset
of the job is an important property of OO GUI style and an important consequence
of our design methodology. The tradeoff is occasional local suboptimization: the
GUI's usability for some narrow task may not be as good as in some alternative
designs. The reason for global optimization is that the user's entire job is more
important than any individual task. That is not to say that all tasks are given
equal weight. The OO style is flexible enough to accommodate differences in task
frequency, difficulty, and importance, and our methodology actively seeks out and
accommodates such differences. In contrast, many procedural interfaces and the
methods to design them optimize locally at the expense of the entire job.
OO GUIs differ from procedural GUIs. Procedural GUIs are oriented around particular
procedures for doing tasks, with the users data elements shown only as support
for the procedures. Typically, procedural interfaces have one window for each step
in a procedure, with all the data elements relevant to that procedure being in that
window. If a data element is needed in several procedures, a procedural interface
represents the datum equivalently in each of those several windows. Consequently,
related data elements such as a customer's name and address will not be in the
same window, unless those elements must be dealt with in the same step of the
particular procedure that the GUI designers have chosen. A consequence of such
procedural organization is difficulty in doing the task any other way. Doing the
task in a different way often requires using the data elements in a different order
than how they are organized in the current procedural interface. Users must move
among many windows to get at the scattered individual data elements they need, but
users get few clues to guide their search; for example, they cannot just look for
the single window that contains all the attributes of a customer. In contrast
to such a problematic procedural GUI, a well-designed OO GUI matches the user's
mental model of the data elements' relations to each other; therefore, users can
be guided by their mental model of the data elements, regardless of the procedure
they are trying to execute.
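The contrast between the two organizations can be made concrete with a small sketch. This example is not from the chapter; the window and field names are invented for illustration.

```python
# Illustrative contrast (invented names): the same customer data organized
# procedurally (one window per step, fields scattered across windows) vs.
# object-orientedly (one Customer object holding all its attributes).

# Procedural GUI: each window holds only the fields its step needs.
procedural_windows = {
    "Make Reservation": ["customer_name", "arrival_date"],
    "Check In":         ["customer_name", "room_number"],
    "Check Out":        ["customer_name", "bill_total"],
}

# OO GUI: all attributes of a customer live together in one object/window.
oo_customer_window = ["customer_name", "arrival_date",
                      "room_number", "bill_total"]

# To see every fact about a customer, the procedural user must visit every
# window that happens to mention the customer:
windows_to_visit = [w for w, fields in procedural_windows.items()
                    if "customer_name" in fields]
print(len(windows_to_visit))   # 3 windows, vs. 1 Customer window in the OO GUI
```

Doing the task in a different order multiplies this cost in the procedural GUI, while in the OO GUI the single Customer window serves any procedure.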
We agree with the great many people in the usability engineering community who
contend that OO GUI style is the most natural and easy to learn user interface style
for the majority of computer users, tasks, and task situations. Some of the places
to read those contentions, along with principles and rationales of OO GUI style,
are Collins (1995), the CUA style guide (IBM, 1992), the Windows style guide
(Microsoft, 1995), and McFarland and Dayton (1995). The industry's degree and
extent of conviction in the superiority of the OO GUI style is evidenced by the
reliance on OO GUI style by most major GUI platform vendors (e.g., CUA and Windows
styles). Partly for that reason, the OO style allows for multiplatform GUI design,
as the next section explains.
for GUI design, and on guidelines derived from successful GUI design at Bellcore
and elsewhere. Designs following that multiplatform style are completely
compatible with all four platform-specific styles.
Our methodology produces designs compatible with those multiple platforms mostly
by
- developing an object model for all the data that users need for their tasks
- using the highest common denominator of the four GUI platform styles
- leaving until last the details on which the platforms diverge (e.g., exactly where to position command buttons in a dialogue box)
The multiplatform compatibility of the fundamental design guarantees that the
details can be expressed in any of the four platform-specific styles and that the
resulting detailed design can be easily translated into any of the other styles.
The industry-standard style allows this GUI to be immediately usable, at least in
basic ways, by users having even a little experience with other industry-standard
GUIs.
be available (e.g., using the GUI while talking on the phone and gesturing to someone
in your office). An example of an expert feature is showing objects in an indented
hierarchical list so users can directly view and manipulate closed grandchild and
great-grandchild objects without having to open their parents first (see the top
left window in Figure 2.6).
At the other end of the spectrum are features for rank novices who lack knowledge
of even the basics for using an industry-standard GUI. The best example is a wizard
that steps users through a procedure for doing a task. A wizard should be layered
on top of the OO design, so that users can escape the wizard at any time to directly
manipulate the OO GUI themselves.
Features for experts and rank novices are two kinds of the polish that should be
designed only at the end of Part 3 of The Bridge, so that the foundation design
for less-experienced users is not distorted. Deciding when to shift the session's
focus toward such design details is part of the skill of a good facilitator.
2.11. FACILITATORS
The facilitators are one of the sources of the loose but strong guidance that keeps
the session barreling along toward its goal instead of flying apart or bounding
off somewhere else. However, the facilitators are not leaders, at least not
explicitly. They try to guide subtly enough for the participants to feel that there
is no particular leader, that the group as a whole is leading. The most difficult
part of the facilitators' job is to simultaneously provide that strong guidance
and goad the team toward chaos. The facilitators must keep the session at the highly
productive but delicate balance between unproductive chaos and unproductive
rigidity.
As guides, the facilitators prompt the participants through the explicit steps in
the methodology by demonstrating, by diving in to start each activity themselves,
and sometimes by showing a few overhead transparencies. The facilitators act most
forcefully when reining the errant group back into the scope of the session and
when confronting the group with their limited time. (One of Tom's favorite methods
is to lean over the table while staring at his watch and muttering "tick, tick,
tick, ....") Even at their most forceful, the facilitators avoid explicitly leading
the group. Instead they act as the group's conscience, forcing the group to
acknowledge the hard reality of the need to go through all of the methodology's
steps, and the hard reality of the group's previously and publicly documented
goal-and-scope definitions and time allocations. In each incident of reining in,
the facilitators first force the group to acknowledge their previous consensus
about goal and scope and time, then to acknowledge that they are at this moment
pursuing a different goal, or are out of scope, or are almost out of time. Then
the facilitators force the group to either get back on track or to change its
consensus definition of goal, scope, or available time (e.g., extend the session
by a day). In any case, it is the group that must decide what to do. The group should
view the facilitators as fancy alarm clocks that are indispensable, but which are
unavoidably annoying on occasion.
For the group to trust and value the facilitators in that way, the group must see
the facilitators as impartial. "Impartial" means not just toward the project
itself, but also toward the project's politics and the session's participants.
Such impartiality is most thoroughly attained by having the facilitator be ignorant
of the project and its stakeholders. Ignorance is also helpful for preventing the
facilitators from getting sucked into the session: listening, seeing, thinking,
and then behaving like a participant. Without knowledge of the project's domain,
the facilitators can do little more than what they are supposed to do: float right
at the surface of the session, using as cues their observations of the social
interactions, facial expressions, long silences, too-long discussions, repeated
conversations, and other subtleties. But facilitating without knowledge of the
users' domain is difficult because those general social and other cues are the
only hints available.
As goads, the facilitators constantly recommend that the participants take their
best guesses at decisions and move along, instead of discussing anything for very
long. When the team slows after making reasonable progress in a step, the usual
facilitator response is to prod the team to go to the next step. If the team seems
to have invested too much ego in an idea that no one really likes, the facilitators
encourage the team to crumple up that piece of paper and throw it on the floor.
It can always be picked up and straightened out if the team decides it really likes
the idea after all.
Another, more neutral, role of the facilitators is as a lubricant for the session.
Facilitators help transcribe statements onto the low-tech materials. Facilitators
get the participants to switch seats around the table periodically to prevent
stagnation of the social relations, to equalize power relations around the table,
to break up side conversations, and to give people a different view of the ideas
on the table. When participants mutter something, the facilitators hand them pens
as a subtle suggestion to write their comments on the public materials.
Many of the facilitators' goals and activities are subtle. Their role most apparent
to the participants is facilitating the team's execution of the explicit steps
that complement the pervasive techniques and orientations we have described so far.
3. EXPLICIT STEPS
Having described some of the pervasive, easily undervalued parts of The Bridge
methodology, we turn to the explicit steps. The Task Object Design step is the center
span of the bridge from user needs to GUI, but you can't have a useful bridge without
entrance and exit spans as well.
Part 1: Expressing User Requirements as Task Flows
Goal: Translate user needs for the task into requirements that reflect
the task flows and that can be input to the next step.
Activities: Analyzing, documenting, redesigning, and usability testing
task flows.
Output: Scripts of what users will do with the new GUI. The format is an
index card for each step, sticky arrows between cards, and those materials
taped as flows onto pieces of flip chart paper.
Part 2: Mapping Task Flows to Task Objects
Goal: Map the user requirements output by Part 1 into task objects:
discrete units of information that users manipulate to do the task, with
specified behaviors and containment relations.
Activities: Task Object Design (discovering, designing, documenting,
and usability testing task objects).
Output: Each task object shown as an index card, with the object's attributes,
actions, and containments described on attached stickies.
Part 3: Mapping Task Objects to GUI Objects
Goal: Design a GUI guaranteed to be usable for executing the task.
Activities: Designing, documenting, paper prototyping, and usability
testing the GUI fundamentals by translating task objects from Part 2 into
GUI objects represented by GUI elements such as windows.
Output: Paper prototype, usability tested for goodness in doing the task
flows that were output from Part 1. Includes the fundamentals of the GUI
(window definitions and window commands) but not the fine details.
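The outputs of the three parts can be pictured as successively transformed data. The following sketch is not from the chapter; the task names and field names are invented, drawn loosely from the hotel example used below.

```python
# Illustrative sketch (invented data): the outputs of the three parts of
# The Bridge, shown as successively transformed structures.

# Part 1 output: a task flow as a script of step cards with trigger and result.
check_in_flow = {
    "trigger": "Customer arrives at desk",
    "steps":   ["Find reservation", "Assign room", "Hand over key"],
    "result":  "Customer checked in",
}

# Part 2 output: task objects extracted from the flows' nouns, each with
# attributes and containment relations.
task_objects = {
    "Hotel":       {"attributes": ["name"],
                    "contains": ["Room", "Customer"]},
    "Customer":    {"attributes": ["name", "bill info"],
                    "contains": ["Reservation"]},
    "Room":        {"attributes": ["number", "status"], "contains": []},
    "Reservation": {"attributes": ["dates"], "contains": []},
}

# Part 3 output: each task object mapped to a GUI object, e.g., a window.
windows = {obj: obj + " window" for obj in task_objects}
print(windows["Customer"])   # Customer window
```

The pipeline character of the methodology is visible here: the nouns of Part 1's flows become Part 2's objects, and Part 2's objects become Part 3's windows.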
Throughout the following descriptions of the three parts we will illustrate with
the coherent example of a GUI for hotel desk clerks to make reservations, check
in customers, and check out customers. Do not be confused by the incompleteness
of the example as it is described here; this chapter shows only the pieces needed
to illustrate key points. Figure 2.1 shows examples of the outputs of the three
parts of the methodology; in reality this GUI would have more task flows, more steps
in the task flows, more task objects, and more windows. All the figures in this
chapter diagram materials that are mostly hand lettered and drawn by the session
participants.
are the triggers to these users work and the steps that are done with the output
of these users work. The scope of this GUI design session is shown on the Big
Picture by circling the portion of this high level set of flows that is to be done
by these users with this about-to-be-designed GUI. The Big Picture is posted on
the wall to act as a scope map for the entire multiday session. If the session drifts
to an out-of-scope topic, the corresponding index card's position outside the
scope circle is a public reminder and indisputable evidence that the topic must
be abandoned or the scope redefined.
Figure 2.2 does not show a Big Picture, but the format shown there is the same:
hand-lettered index cards with hand-drawn sticky arrows. A Big Picture for the hotel
desk clerk GUI might include a single index card for each of the three tasks that
are the target of this GUI (Make Reservation, Check In, and Check Out),
with that set of three cards circled as being in scope. Index cards outside the
scope circle might include "List the Rooms Needing Maintenance" and "List the
Customers With Outstanding Bills."
3.1.2. CURRENT TASK FLOWS
The team now magnifies the circled portion of the Big Picture into a moderate level
of detail on new pieces of flip chart paper. As before, the team places index cards
on flip chart paper that rests on the small round table about which the team is
huddled. The flows being documented are the current task flows, not redesigned ones.
The focus is on describing the flows rather than critiquing them, but if problems
are mentioned they are written on hot pink "issues" removable stickies and placed
on the appropriate steps' index cards. As in the Big Picture, the flows must have
triggers, results, and process steps in between. Not much time should be spent
creating the Current task flows because the GUI will be based instead on the
redesigned flows that are the ultimate product of Part 1.
A desk clerk's Current flows might look much like the Realistic and
Desirable flows illustrated in Figure 2.2. Certainly there should be strong
resemblance of the triggers and results, since rarely can those be redesigned by
the team designing the GUI. Often, though, the current flows are more complicated
than the redesigned flows.
3.1.3. ISSUES AND BOTTLENECKS IN CURRENT TASK FLOWS
The completed, semi-detailed, Current flows now are critiqued by the team. Every
team member gets hot pink removable sticky notes on which to write their critiques,
just as throughout the entire methodology every team member has some of all the
materials; there is no single scribe. Each hot pink sticky gets only one issue or
bottleneck, and that sticky is placed on the index card representing the relevant
task step. Any kind of issue or bottleneck qualifies, not just overt problems; there
may be a task step that works just fine when considered in isolation, but that is
obviously a bottleneck when the entire task flow is considered. Issues are written
even if no one has a clue what the solutions might be. The focus of this step is
on documenting issues rather than solutions, but if solutions happen to be mentioned
they are written on square, blue, "possible solutions" stickies that are stuck
to the relevant hot pink "issue" stickies.
An issue with the desk clerks' current task flows might be the need to go to
different screens to see a customer's record for the different purposes of making
a reservation, checking in, and checking out. A separate issue (written on a
separate hot pink sticky) might exist within the check-out task; users might need
to go to one screen to print out the customer's bill and to a different screen
to unassign the room from that customer. An issue at check-in time might be the need
for users to go to one screen to find the particular room to give to the customer
and a different screen to make that assignment. Similarly, at check-out time the
users might have to go to one screen to unassign the room from the customer and to
another screen to increment the hotel's room availability tally.
3.1.4. SCOPING THE CURRENT TASK FLOWS
Users mark each hot pink sticky issue with its priority, for example, "High,"
"Medium," and "Low." It's okay for all issues to be the same priority. The
team then draws a circle around the portion of the Current flow that they will work
on during this session. They inform that scoping decision with the Current flows'
publicly visible evidence of the breadth and depth of the users' tasks and with
the arrangement and priorities of the hot pink issue stickies. In the best case,
all the high-priority hot pink stickies are clustered on a small portion of the
task flows, and the team has the power to throw most of the resources of the GUI
project into improving those steps. In the worst case, the hot pink stickies are
scattered evenly across the task flows, but at least that gives the team confidence
that it isn't wasting resources on an unimportant portion of the users' work.
The scoped Current flows now are posted on the wall for reference during the rest
of the session.
Perhaps desk clerks think the issue of going to different screens is low priority
when the different screens are for the different tasks of making a reservation,
checking in, and checking out, because desk clerks have a fair amount of time to
shift gears between those three tasks. However, they might give high priority to
the related issue of different screens within the check-in task and within the
check-out task because within those tasks the users are pressed for time.
software project's resources. These flows must be more detailed than any of the
flows produced so far, and like the other flows these must have triggers, process,
results, and one noun and verb per index card. This is the set of flows that will
be input to Part 2.
An example of a set of Realistic and Desirable flows is in Figure 2.2. There are
three tasks that a desk clerk must do with the GUI about to be designed: make
reservations, check in customers, and check out customers. Those three tasks have
different triggers and results, so they are laid out as separate flows. But, the
single GUI being designed will cover all three tasks.
3.1.8. SCOPING THE REALISTIC AND DESIRABLE TASK FLOWS
The team now gets another opportunity to change its mind about how much of the task
set can be handled during this multiday session. If the scope needs to be smaller
than what is shown in the Realistic and Desirable flows, the team draws a circle
around the portion that can be dealt with. However, the team must include all the
major data objects in the circled steps. Otherwise the GUI's object framework will
be difficult to extend later to accommodate the other data objects and so to
accommodate the other tasks' steps. If this circle differs from the scope circles
on the previous flows (Big Picture, Current, and Blue Sky), the team must redraw
the previous flows' circles to match.
The final output of Part 1 is the set of Realistic and Desirable task flows, though
the other task flows should be retained as context and as documentation of how the
final flows were produced. The final set of flows must be detailed, well grounded,
and well understood by the entire team because it is the input to the methodology's
Part 2 for extraction of the task objects.
are represented by the Realistic and Desirable task flows output by Part 1. This
step is quite simple and fast and is similar to many object-oriented methods.
Participants merely copy each noun from the task flows onto an index card and place
the resulting cards on the table. They do not write verbs, just nouns. Participants
must not write GUI object names such as "a scrolling list of customers," but only
abstract task object names such as "a set of customers." Each card also gets a
one-sentence definition written on it. The object must not be defined purely in
the abstract, such as "Hotel: building used to accommodate travelers." Instead,
the object must be defined as it relates to the GUI being designed, for example,
"Hotel: collection of rooms that customers rent." To save time, everyone at
the table does all this in parallel; then the group as a whole reviews the cards,
revises them until getting consensus, and throws out the duplicates.
Participants should not spend much time debating whether all these task objects
really qualify as full-fledged objects, but they should go ahead and throw out any
cards that represent objects outside of the GUI being designed. For example, "call
the bellhop" may be a step in a hotel check-in task flow, but if that is purely
a manual procedure having nothing to do with the GUI (the desk clerk yells to the
bellhop), then the Bellhop index card should be thrown out.
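The gist of this step (copy nouns onto cards, then discard duplicates and non-GUI objects) can be sketched as follows. The noun list here is invented for illustration.

```python
# Illustrative sketch (invented data): the Identities step as code. Nouns
# copied from the task flows become cards; duplicates and objects outside
# the GUI (like the manually summoned Bellhop) are thrown out.
nouns_from_flows = ["Hotel", "Room", "Customer", "Customer",
                    "Reservation", "Bellhop"]
outside_gui = {"Bellhop"}          # purely manual, no GUI involvement

cards = []
for noun in nouns_from_flows:
    if noun not in cards and noun not in outside_gui:
        cards.append(noun)
print(cards)   # ['Hotel', 'Room', 'Customer', 'Reservation']
```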
The output of this step would be just the topmost rectangles in Figure 2.3 the
index cards labeled Hotel, Room, Customer, and Reservation. Each
task object is really an object class, so the Customer task object represents the
class of all customers. The actual user interface would have separate object
instances such as the customers Tom Dayton and Joseph Kramer, but object instances
are not represented explicitly in The Bridge methodology until Part 3.
3.2.2. ATTRIBUTES OF TASK OBJECTS
Next the participants more fully describe each object by writing its attributes
on a blue sticky note that they attach to the bottom edge of that object's index
card (see Figure 2.3). There are two basic kinds of attributes each object can have:
child objects, which are themselves objects and so have their own task object index
cards; and properties, which are not objects. Child objects are what the object
is made up of, such as a hotel being made up of rooms and customers and a customer
being made up of reservations. Properties are what the object is, such as a hotel
having a name, a type (e.g., luxury), and a status (e.g., sold out). At this point
in the methodology, no distinction is made between those two types of attributes;
participants merely write all the attributes they can think of on the attributes
sticky. Some of the attributes come from what is written on the task flows, but
many come from the participants' knowledge of the domain.
Many of the index cards are discarded during this step, as the participants learn
more about these nominal objects and refine their notion of objecthood.
Participants should try to reduce the number of task objects during this step,
because users find it easier to deal with a few object classes each having lots
of attributes than with lots of object classes that are almost identical. Deciding
which units of data qualify as bona fide objects is not based on hard and fast rules
but on rules of thumb and the context of users doing their tasks. Objecthood is
necessarily a fuzzy concept, and not until Part 3 of the session do participants
get a good feel for it. Participants change their minds right through Part 3, usually
by continuing to reduce the number of objects, and that is perfectly all right.
Some rules of thumb:
If users' domain knowledge has them think of the unit of data as an object
(e.g., hotel, room, customer, reservation), then make it an object.
If users ever want to see only a few of the attributes of the unit, but
sometimes want to see all of them, then the unit should be an object so that
it can be presented in the GUI as both closed (e.g., a row in a list) and
open (a window).
An index card having no attributes listed on its attributes sticky means
that this object should instead be merely an attribute of other objects,
listed on their attributes stickies but not having its own index card.
If there might be several instances of the unit of data, and especially
if the number of instances that might exist is unknown but could be large,
then the unit of data might be an object.
If users want to create, delete, move, and copy the unit of information,
they are treating it like they would a physical object, so the unit might
be an object.
In Figure 2.3, the second rectangle from the top in each of the four collections
is an attributes sticky for that task object. One of the attributes listed on the
Customer object's sticky is Bill Info, which stands for an entire category
of attributes. That attribute category was added to the Customer only after the
participants realized that the attribute category served the task just as well as
a separate Bill object did. In the Identities step they had created a separate
Bill task object by copying Bill from the Print Customer's Bill task step
card in Figure 2.2. But in this Attributes step they realized that the phrasing
of the task step could be reinterpreted as Print the Bill Info attributes of the
Customer, making Customer instead of Bill the task object. Printing the bill would
then be done simply by printing the Customer, setting the Print action's parameters
to print only the bill information and in the bill format.
3.2.3. ACTIONS ON TASK OBJECTS
In this step, the participants record on pink sticky notes the actions that users
need to take on the objects (the third rectangle from the top in each of the
collections in Figure 2.3). These are actions done to an object, such as printing
it, rather than actions done by the object. The only actions of relevance here are
those actions done by users to the objects; actions done by the system are not the
focus here, though they can be added as parenthetical annotations to the user
actions. The starting point for discovering actions is the task flows, since each
step card in the task flows has a verb as well as a noun. In addition to actions
listed on the task flows, participants should consider standard actions such as
view, create, delete, copy, edit, save, run, and print. Users may write whatever
terms they like, such as "change" and "discard" instead of more
computer-standard terms such as "edit" and "delete." For now, these action
names need not match the style guide standards for menu choices and buttons, since
the names on these stickies will be translated into GUI style standard terms during
Part 3.
This Actions step of Part 2 is important, because it finishes gaining one of the
benefits of the OO style of GUI: using each window to represent a single data
object, with several actions easily taken on that data object. We and others contend
that this is easier for users than a typical procedural interface's separate screens
needed for doing different actions to the same data. View and edit actions
are common examples: In many procedural interfaces users must tell the computer
that they want to view before the computer will ask them what data are to be
viewed, and they must issue a separate edit command before the computer will
ask what data are to be edited. An OO GUI instead lets users find the data object
without requiring them to specify what they want to do to it. Users can see the
object's attributes, and if these users have permission to edit this object, they
can just start changing what they are already looking at. For example, different
steps called "Specify which customer to check in" and "Specify which customer
to check out" might be replaced with a single Find Customer step that is common
to the check-in and check-out tasks. Checking in a customer and checking out a
customer might then become just Check-In and Check-Out actions on the same
menu of that customer's window.
Figure 2.3 shows examples of actions on task objects. For instance, the task flow
step Create Customer (shown in Figure 2.2) would have already stimulated
participants to create a task object called Customer, and now the action
Create would be written on the actions sticky attached to the Customer task
object index card shown in Figure 2.3.
3.2.4. CONTAINMENT RELATIONS AMONG TASK OBJECTS
The goal of this step is to ease users' navigation through the software by making
the GUI represent the users' existing mental model. That is done by capturing the
users' natural mental model of the real-world relationships among the task objects
completely aside from any GUI, so that in Part 3 those relationships can be
translated into visual containment among the GUI objects. The input to this step
in Part 2 is the set of task object cards, each card having two stickies. The output
is that set with a third (yellow) sticky on each task object, showing all parent
and child (no grandchild) containment relationships of that object. See the bottom
rectangle of each collection in Figure 2.3 for examples. The task flows are not
especially relevant for this step because the goal is to capture the users' mental
model of the objects' relations to each other aside from any particular task.
Parent and child mean only the visual containment relation between objects,
not any kind of subclassing relation as is meant by those terms in object-oriented
programming. These containment relations are not tied closely to the dynamics of
the task flows, but are static relations. They are the intuitive hierarchies among
the objects when users think about the objects they use to do their tasks. When
participants design this containment hierarchy, they are designing the foundation
of the users' mental model of the GUI universe. For example, a car mechanic's
universe of cars has the car containing the engine and the engine containing the
pistons, regardless of the particular activities the mechanic does with those
objects:
Car
Engine
Piston
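The cards-and-stickies structure described above is itself a small data model. As an illustrative sketch only (the object and attribute names below come from the chapter's hotel and car examples, but the code itself is not part of The Bridge's materials), each index card can be modeled as a record carrying its three stickies:

```python
from dataclasses import dataclass, field

@dataclass
class TaskObjectCard:
    """One index card: the object class name plus its three stickies."""
    name: str
    attributes: list = field(default_factory=list)  # blue sticky: child objects and properties
    actions: list = field(default_factory=list)     # pink sticky: user actions on the object
    im_in: list = field(default_factory=list)       # yellow sticky, "I'm In" column (parents)
    in_me: list = field(default_factory=list)       # yellow sticky, "In Me" column (child objects)

# The mechanic's universe: Car contains Engine, Engine contains Piston.
car = TaskObjectCard("Car", attributes=["Engine", "Make", "Model"], in_me=["Engine"])
engine = TaskObjectCard("Engine", attributes=["Piston", "Displacement"],
                        im_in=["Car"], in_me=["Piston"])
piston = TaskObjectCard("Piston", attributes=["Diameter"], im_in=["Engine"])
```

Note how a child object (Engine) appears on both the attributes and containment stickies of its parent, while a plain property (Make) appears only on the attributes sticky.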
In this Containment step the distinction between child objects and properties is
made sharply, in contrast to the Attributes step which listed both types on the
attributes sticky. Child objects are attributes that are themselves treated as
objects within the context of their parent. Properties are attributes that are not
themselves treated as objects within the context of their parent. In this step the
participants examine all the attributes listed on the attributes sticky for each
task object and copy those attributes that are child objects onto the In Me
(right-hand) column of the containment sticky (see Figure 2.3). The child objects
now are listed on both the attributes and containment stickies, whereas the
properties are listed only on the attributes sticky. Then the I'm In (left-hand)
column is filled with the names of the task objects on whose containment stickies
this object is listed as a child.
Figure 2.3 shows the example of Hotel, Room, and Customer. Hotel is the ancestor
of all objects within this desk clerk's GUI, but Hotel itself lives within the
Desktop, the mother of all objects in this GUI platform, which fills the entire
computer screen and in which all the other windows and icons appear (e.g., the
Microsoft Windows desktop). Therefore, Hotel's third sticky has Desktop
written in its I'm In column. Only two of the attributes of Hotel have their
own task object cards (Room and Customer), and desk clerks do think of those two
objects as children of Hotel, so Room and Customer are written in Hotel's In Me
column.
Participants must check that the hierarchy is not tangled, that is, that each task object
has just one parent. In the polishing phase of the design process, multiple parents
may be allowed, but at this point in the methodology it is best to keep the hierarchy
simple. It is also important to check that objects do not contain their own parents
as their children.
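These two checks, one parent per object and no object containing its own ancestors, can be expressed as a simple validation. This is a hypothetical sketch, not part of the methodology's materials; representing the hierarchy as a mapping from each object to its single parent enforces the one-parent rule by construction, so only containment loops need detecting:

```python
def find_containment_loops(parent_of):
    """Report objects caught in a containment loop, given a hierarchy
    expressed as {object_name: parent_name, or None for the root}."""
    problems = []
    for obj in parent_of:
        seen = {obj}
        current = parent_of[obj]
        while current is not None:
            if current in seen:
                problems.append(obj)  # obj ultimately contains itself
                break
            seen.add(current)
            current = parent_of.get(current)  # unknown parents act as roots
    return problems

# The hotel example: Desktop is the root of the desk clerk's GUI universe.
hierarchy = {"Desktop": None, "Hotel": "Desktop",
             "Room": "Hotel", "Customer": "Hotel"}
tangled = {"Room": "Hotel", "Hotel": "Room"}  # each contains the other
```

Calling `find_containment_loops(hierarchy)` returns an empty list, while the tangled version flags both objects.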
Although the parent of an object must not be shown as a child object of that object,
it is perfectly legitimate to show the name of the parent as a property of the object.
For example, the Room object's attributes sticky may have Hotel written on
it as a property, but certainly Hotel will not be written in Room's In Me
column. Rather, Hotel will be written in Room's I'm In column. In general,
context determines whether an object is shown on another object's card as a child
object or just the name shown as a property. It's okay for an object to be a child
of one object while having its name shown as a mere property of another object.
One difference between child object and property in the GUI to be designed during
Part 3 is that a child object can be opened by double-clicking on its closed
representation. In contrast, object names shown as mere properties are just strings
of text; to open the named object, users must find its representation as a true
child in some other window.
There are ways to shortcut the single-parent restriction, but our methodology
protects participants from such complexities until the end of Part 3. Here in Part
2, every object usually should have a single parent, with all other names of that
object being mere properties. This produces a simple, strict hierarchy that is a
solid basis for the steps of the remainder of the design process. During the GUI
designing (i.e., Part 3), some of those object-names-as-properties may be turned
into links, or a multiple-parent model may be adopted. Those activities usually
should not be done here in Part 2.
Many task objects are converted to mere attributes of other objects during this
step; in other words, their index cards are thrown out after their attributes
stickies are moved to other objects. The team's bias should be to have very few
task objects (i.e., object classes) even if that requires each object to have lots
of attributes. A plethora of attributes in an object can later be managed in the
GUI representation of the object by bundling the attributes into frames, notebook
pages, Included subsets, and views, all of which are more easily accessible than
are entirely different objects.
3.2.5. USABILITY TESTING OF THE TASK OBJECTS AGAINST THE TASK FLOWS
You don't need a GUI to do a usability test! The team (which, as always, still
includes real users) now tests whether the set of task objects is usable for
executing the task flows it designed in Part 1. One person talks through each step
in the task flows, with the other participants checking the task object cards and
stickies for the presence of all the objects, attributes, and actions needed to
execute that task step. The team usually will discover that the task objects or
even task flows are incomplete or incorrect. If so, the team should change them;
this entire workshop methodology is an iterative process.
As of yet, any GUI is irrelevant. The team must do this usability test only at the
rather abstract level of task flows and task objects. This is a usability test,
but of the conceptual foundation of the GUI instead of the surface of the GUI.
Discovering usability problems at this early stage saves the resources you might
otherwise have spent on designing the surface of the GUI incorrectly. The earlier
in the design process that usability testing happens, the more leverage is gotten
on resources.
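The walkthrough just described amounts to a coverage check: every verb-noun task step must be backed by a task object card whose actions sticky lists that verb. A hypothetical sketch follows (the step, card, and action names are invented for illustration and are not from the book's materials):

```python
def audit_task_flow(steps, cards):
    """Check each (action, object) task-flow step against the task object
    cards; `cards` maps an object name to the set of actions on its
    actions sticky.  Returns a list of the problems found, if any."""
    gaps = []
    for action, obj in steps:
        if obj not in cards:
            gaps.append(f"no task object card for {obj}")
        elif action not in cards[obj]:
            gaps.append(f"{obj} card lacks the action {action}")
    return gaps

# Illustrative cards and flow for the hotel example:
cards = {"Customer": {"Create", "Find", "Check-In", "Check-Out", "Print"},
         "Room": {"View", "Assign"}}
flow = [("Find", "Customer"), ("Check-In", "Customer"), ("Assign", "Room")]
```

An empty result means the task objects can support every step of the flow; any reported gap sends the team back to revise the cards, or the flow itself, before moving to Part 3.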
The output of Part 2 is a set of task objects sufficient and usable for doing the
task, each task object documented as an index card with stickies as shown in Figure
2.3. This set of task objects is the input for Part 3, which maps those rather
abstract task objects into GUI objects.
It uses paper prototyping materials that reflect the desired style (e.g.,
photocopies of empty windows printed from a screen of the desired style).
For the few style guidelines that do require explanation during the
session, the methodology communicates each by brief lecturing with just a
few overheads, at just the time in the process when that particular guideline
is necessary.
It postpones designing of the details until after the participatory
session. It does this by skipping some of the details (e.g., some of the
standard buttons on secondary windows) and by letting participants draw some
components on the paper prototype in any way they like (e.g., put the buttons
either on the side or the bottom).
It includes style experts either as participants (usually the usability
engineer) or as facilitators.
Of course, many platform-specific characteristics must be specified during the
session, in order to draw even the basic pictures used as the paper prototype.
Therefore, one of the target GUI platform styles is chosen by the team for the
purpose of paper prototyping. If the GUI must also be produced in other styles,
then after the session the usability engineer copies the fundamental design while
changing the platform-specific details in minor ways to make them conform exactly
to the other styles.
Part 3 of the methodology designs only the fundamentals of the GUI per se. That
means the resulting window definitions include window types, window views, and
window commands (menu bar choices and control buttons), but not accelerator key
combinations and icons. Those and other details are filled in by the usability
engineer after the participatory session because the other participants usually
can contribute less to those activities, and a participatory session is not a
particularly efficient medium for doing that kind of detailed designing. Nor does
Part 3 of the methodology design user support such as documentation and on-line
help. Those things are important, but they need to be designed via methods that
fully focus on them.
There are several steps in Part 3; they involve designing, documenting, paper
prototyping, and usability testing the GUI fundamentals. As in the previous two
parts, no step is begun until the previous step is mostly complete, but there is
considerable small- and large-scale iteration among steps. Almost always, the
participants gain insights during this GUI designing part that prompt them to change
the task objects and even the task flows. Each step in Part 3 maps a different aspect
of the task objects to GUI objects, as exemplified by Figure 2.5. The following
sections briefly describe the steps that do those mappings.11
11. Readers wanting more GUI style descriptions as context for better understanding
these method descriptions should see McFarland and Dayton (1995), IBM (1992), or
Microsoft (1995).
The first step of Part 3 decides which task objects will be shown as closed objects
in the GUI. There are many possible closed representations, such as list row, icon,
and menu choice (for device objects). The decision on the
particular closed representation is not made in this first step of Part 3. This
step instead requires participants only to decide whether each GUI objects closed
representation can be opened into a window, and if so, whether that should be a
primary window or a secondary window. The bias should be toward primary windows,
largely because primaries can remain open despite the closing of the windows from
which they were opened. (This GUI style uses only Single-Document Interface style,
SDI, not Multi-Document Interface style, MDI.)
Participants document their decision to show an object as a window by paper clipping
a photocopied empty primary or secondary window to the task object card. They write
in the window's title bar the name of an object instance; in the example in Figure
2.5, the title is Tom Dayton because Tom Dayton is one instance of a Customer.
If the window has a menu bar, they name the leftmost menu with the name of the object
class; in Figure 2.5 the leftmost menu is called Customer because the window
as a whole represents a particular customer. Figure 2.5 actually shows more than
the result of this first step, since this step uses only one photocopied window
per task object and leaves the window's client area empty. The next step defines
additional windows.
3.3.2. VIEWS OF GUI OBJECTS
Then the team decides the basic appearance of each window's content, in other words,
the views of the GUI object. The mapping is from each task object's attributes
sticky into the corresponding windows' client areas (Figure 2.5). The team roughly
sketches the view appearances in the heretofore empty client areas of the window
photocopies and appends a view name to each title bar's object instance name.
Each view of an object is drawn on its own photocopied empty window, with all the
views for the same object having the same leftmost menu name and the same title
bar object instance name.
Views can be used to show different attributes, such as child objects in one view
and properties in another (e.g., a customer's reservations vs. a customer's
general information; see Figure 2.5); the containment sticky on the task object
helps by identifying which of the attributes are child objects. For instance, the
Room object might have one view showing a diagram of the room (see Figure 2.6) and
another view showing the room's furniture, equipment, and maintenance history.
Views can also be used to show the same attributes, but in different ways. An example
is having one view showing customers as textual rows in a list and another view
showing customers as pictures of the customers' faces.
Users can then use standard window operations such as sizing, layering, minimizing,
maximizing, and closing to manage the large quantity of information.
Users easily know how and where to open closed objects into windows because closed
objects are shown inside the objects that users naturally think of as their
containers (Figure 2.6). The Bridge methodology produces a GUI that reflects the
users' natural mental model of those containment relations, in contrast to many
traditional approaches that force users to change their mental model to match the
GUI.
3.3.3. COMMANDS FOR GUI OBJECTS
Then the menu bars and entire-window-level command buttons are designed by mapping
them from the actions sticky of each task object (Figure 2.5). The menus are
represented by preprinted, style guide standard, menu pull-downs. The leftmost menu
has a blank for its preprinted name so that the team can write in the names of object
classes to match the paper windows' hand-lettered leftmost menu names. For example,
if the actions sticky for the Customer task object says Print, then the team
leaves the standard choice Print in the preprinted leftmost menu that they have
labeled Customer. Preprinted menu choices that have no corresponding sticky
entries are crossed off. Actions on the sticky that have no corresponding preprinted
standard choice are written on the appropriate preprinted menus. To keep all the
materials together, the menus for each object are removable-taped to an index card
that is paper clipped to the object's windows and task object card.
Also in this step, participants design and paper prototype the major portions of
the transaction dialogue windows that support the commands most relevant to view
definition. The most important of these is the dialogue supporting the Include
action, which makes visible only the subset of information that a user wants to
see in a given window at the moment (i.e., it is an inverse filtering action). The
Include action allows the team to avoid designing a separate view for every subset
of information they might ever want to see. Some other dialogue windows that should
be designed at this point are those supporting the New, Sort, and Find actions.
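In spirit, Include is the inverse of filtering out: the user names what to keep rather than what to hide. A minimal sketch with invented attribute names (the real Include dialogue is of course a window, not a function call):

```python
def include(attributes, wanted):
    """Keep only the attribute names the user asked to see, preserving
    the object's original attribute order (an inverse filter)."""
    return [a for a in attributes if a in wanted]

customer_attributes = ["Name", "Address", "Phone", "Bill Info", "Reservations"]
# A view showing just enough to print a bill:
bill_view = include(customer_attributes, {"Name", "Bill Info"})
```

Because the user can compose any subset this way, the team avoids designing a separate fixed view for each subset of attributes.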
The team has now produced a paper prototype that contains the data needed by the
users to do their tasks, the menu bars and window-level buttons to manipulate the
data, and the navigational paths among the windows. The paper prototype is now ready
to be thoroughly usability tested by these very same participants in this very same
session.
3.3.4. USABILITY TESTING OF GUI OBJECTS AGAINST THE TASK FLOWS
Throughout Part 3, the team has done quick usability tests of the incomplete paper
prototype. But now that even the commands have been prototyped, the usability
testing can be more realistic and thorough. A rectangle representing the computer
screen is marked off with masking tape on the table top. One participant points
to and reads each step in the task flows. One user uses a pen as the mouse pointer
for clicking through the paper prototype to execute the task step being read. The
rest of the participants act as the computer, responding to the user's actions
by adding and removing the appropriate windows and pull-down menus from the screen.
Any time a problem is found, the team stops the test, changes the design and the
paper prototype, then restarts the test. Usually, the task objects and even the
task flows also are changed during this phase, in a rapidly iterative process. If
time allows during the session, the team now adds some polish and detail to the
paper prototype. For instance, they may add shortcuts for expert users and complete
more of the dialogue windows.
After all the documented task flows have been successfully executed with the paper
prototype, the team tries other tasks to test the flexibility of the design. Thanks
to the object-oriented style, often the design can handle tasks that were not
foreseen during the initial design and the design can easily be modified to
accommodate many other tasks.
4. CONCLUSION
What remains to be done after the three-part Bridge session is the filling in of
design details such as some of the dialogue boxes, icons, precise window layouts,
colors, and fonts. Those remaining details are most efficiently designed by the
usability engineer outside of a participatory session, though the usability
engineer must continue consulting with users and other team members. Those
remaining details are, of course, important, but they are much less important than
the fundamental organization, appearance, and behavior that The Bridge does design.
This chapter is not a complete description of The Bridge methodology. More extensive
descriptions are in the notes handed out during educational sessions of the
methodology, but a complete account will require its own book. In the meantime,
this chapter should at least give readers a general orientation, especially to the
critical Task Object Design center span of our bridge over the gap between user
needs and GUI prototype.
5. ACKNOWLEDGMENTS
Many people have contributed in varying ways and degrees to development of The
Bridge. Michael Muller was largely responsible for initially convincing us of the
value of participatory methods in general. Michael also was a prime originator of
the early versions of the CARD method (with Leslie Tudor, Tom Dayton, and Bob Root)
and the PICTIVE method that we took as the starting points for developing Parts
1 and 3, respectively. Michael is also a prolific contributor to the usability
lexicon, the PANDA acronym being his most recent. Bob Root added to CARD some
explicit steps that we expanded and elaborated into Part 1. Jim Berney was a key
to providing the various resources we needed for developing The Bridge. Much of
what we have provided is just the overall structure and glue that made an integrated,
end-to-end process out of component ideas that we adapted from many public sources,
including common object-oriented methodologies. Our cofacilitators and the
participants in Bridge sessions have provided not just valuable feedback but also
original ideas. We heartily thank all the above people for their participation in
the collaborative development of this collaborative methodology. We also thank
Larry Wood, Andrew Monk, and Sabine Rohlfs for comments on this manuscript.
6. REFERENCES
Collins, D., Designing Object-Oriented User Interfaces, Benjamin-Cummings, Menlo
Park, CA, 1995.
Dayton, T., Cultivated eclecticism as the normative approach to design, in Taking
Kramer, J. and Dayton, T., After the Task Flow: Participatory Design of Data
Centered, Multiplatform, Graphical User Interfaces, tutorial presented at APCHI
96, the annual meeting of the Asia Pacific Computer Human Interaction Group,
Singapore, June, 1996.
Kramer, J., Dayton, T., and Heidelberg, M., From Task Flow to GUI: A Participatory,
Data-Centered Approach, tutorial presented at UPA 96, the annual meeting of the
Usability Professionals Association, Copper Mountain, CO, July, 1996.
McFarland, A. and Dayton, T., A Participatory Methodology for Driving
Object-Oriented GUI Design from User Needs, tutorial presented at OZCHI 95, the
annual meeting of the Computer Human Interaction SIG of the Ergonomics Society of
Australia, Wollongong, Australia, November, 1995. Summary in OZCHI 95 Conference
Proceedings, 10-11.
McFarland, A., and Dayton, T. (with others), Design Guide for Multiplatform
Graphical User Interfaces (LP-R13, Issue 3), Piscataway, NJ, 1995. Bellcore (call
800-521-2673 from US and Canada, +1-908-699-5800 from elsewhere).
Microsoft, The Windows Guidelines for Software Design, Microsoft Press, Redmond,
WA, 1995.
Muller, M. J., Hallewell Haslwanter, J., and Dayton, T., Participatory practices
in the software lifecycle, in Handbook of Human-Computer Interaction, 2nd ed.,
Helander, M., Prabhu, P., and Landauer, T., Eds., Amsterdam, North-Holland, in
press.
Muller, M. J. and Kuhn, S., Eds., Participatory design [Special issue],
Communications of the ACM, 36(6), 1993.
Muller, M. J., Miller, D. S., Smith, J. G., Wildman, D. M., White, E. A., Dayton,
T., Root, R. W., and Salasoo, A., Assessing a groupware implementation of a manual
participatory design process. InterCHI 93 Adjunct Proceedings, 105-106, 1993.
Muller, M. J., Tudor, L. G., Wildman, D. M., White, E. A., Root, R. W., Dayton,
T., Carr, B., Diekmann, B., and Dykstra-Erickson, E., Bifocal tools for scenarios
and representations in participatory activities with users, in Scenario-Based
Design for Human-Computer Interaction, Carroll, J. M., Ed., John Wiley & Sons, New
York, 1995, 135-163.
Open Software Foundation, OSF/Motif Style Guide: Rev. 1.2., Prentice Hall,
Englewood Cliffs, NJ, 1993.
Virzi, R., What can you learn from a low-fidelity prototype?, Proceedings of HFES
89, 224-228, 1989.
X/Open Company Ltd., Application style checklist, in X/Open Company Ltd., CAE
Chapter 3
Transforming Representations in User-Centered Design
Thomas M. Graefe
Digital Equipment Corporation, Littleton, Massachusetts
email: [email protected]
TABLE OF CONTENTS
Abstract
1. Introduction
2. What is the Gap?
3. Representations in Design: Decisions and Transformation
3.1. Process Overview
3.2. Viewing the Managed Resources: The Operator's Window onto the World
3.2.1. Scenarios and Use Cases
3.2.2. Objects, Conceptual Models, Metaphors
3.2.3. Detailed Screen Design
3.2.4. Storyboards and User Interface Flow
3.2.5. Usability Goals
3.2.6. Prototype and Field Testing
4. Links Among System Representations
5. The Psychology of the Designer: Toward Useful and Usable Representations in Design
5.1. Decision Making in Design: Why Traditional Designers have Bad Intuitions about Usability
5.2. Heuristics and Metaheuristics: Guidance in Decision Making
5.2.1. Defining the Represented World: Start off on the Right Foot
5.2.2. Minimizing the Gap: Help the Magic Happen
5.2.3. The Right Tools for the Job: Using Converging Operations
6. Conclusion
7. Acknowledgments
8. References
ABSTRACT
This chapter describes the activity of bridging the gap between user-centered
analysis and a concrete design as a cognitive process of transforming
representations of information. Like other cognitive tasks, such as remembering
or problem solving, success in design hinges on the ability to code and recode
information effectively. A case study illustrates this process within the proposed
framework, and shows how the use of appropriate mediating representations
facilitates bridging the gap. Finally, it is argued that user-centered design can
be improved by attending to the representations that designers themselves use.
1. INTRODUCTION
Planning, installing, controlling, and repairing the infrastructure on which
society's pervasive Information Technology (IT) is based has engendered a new
software application domain. Loosely defined, this new industry comprises the
network and systems management applications used to support near real-time
monitoring and control of this widely distributed infrastructure. Any given IT
network is made up of many heterogeneous resources, each of which can be a complex
system itself. Moreover, the whole inevitably has emergent properties
unanticipated by the makers of the individual parts. This variability, and the
magnitude and technical content of the problem domain, creates a significant
mismatch between the complexity of the managed technology and the capabilities of
the people who manage it. User-centered design techniques must be used to simplify,
integrate, and infuse network and system management applications with purpose and
intelligence.
User-centered design prescribes involving the end users of a technology in the
definition of how the technology is applied within a problem domain. The diverse
techniques for bringing users into the development process all provide for
including user-based data to determine the makeup of systems. However, these
techniques are less clear on how the user-centered data are transformed into a
specific design. This chapter examines the question of how user-centered analysis
becomes a concrete design with two particular issues in mind: what is the nature
of the transformations a designer makes to move from the user-centered analysis
to concrete design, and how does the process of transformation relate to the
psychology of the designer? The next section describes a framework for discussing
these issues, after which an example is given. Finally, in light of this analysis
the last section looks at the psychology of the designer.
2. WHAT IS THE GAP?
Bridging the gap in design can be described as taking information captured in one
representation, transforming it, and expressing it in another. This ability to code and recode
information is basic to human cognition and has been the focus of much psychological
research. Psychological accounts of the efficacy of coding often rely on the concept
of representation whereby coding results in representations more amenable to
completing some particular cognitive task (e.g., Miller, 1956). Familiar
elementary school rhymes used to help recollect the number of days in each month,
or acronyms such as ROY G BIV for the colors of the spectrum, are simple
instances of coding. The psychological literature abounds with other examples,
including studies of chunking as a means to increase the amount of information that
can be retained, studies of mental rotation (Shepard and Metzler, 1971),
theoretical accounts of the structure of memory (e.g., Anderson, 1976), and
accounts of how choice of representation can be critical to success in problem
solving (e.g., Simon, 1981). In all cases a key part of the account given of human
performance has to do with the mediating role played by internal representation
or coding. Similarly, it is argued here that representation is key to defining the
gap bridged in design because this gap lies between representations used in the
design process.
One representational metatheory characterizes any discussion of representation as
necessarily a discussion of a representational system.1 A representational
system must specify:
1. What the represented world is
2. What the representing world is
3. What aspects of the represented world are being modeled
4. What aspects of the representing world do the modeling
5. What the correspondences are between the represented and representing worlds
Representation, p. 262. Norman (1993) also discusses the notions of represented and representing
worlds, but from the vantage of the end user rather than the designer.
The epistemological question of how the represented world is known, its correspondence with the
real world, and the objectivity of this knowledge is an interesting one and affects how the
representational system is defined. However, for this context it will be sidestepped with the caveat
that how user-centered analysis is captured has a significant impact on the design process, as will
be discussed later.
The methods used to gather user feedback changed as the rendering of the design
changed, but they consistently linked to the originating scenario and use-case data.
Figure 3.3 shows for this project where the scenarios provided content and structure
for other steps in the interface design and testing process.
Figure 3.3 Use cases and scenarios provide data for many design steps.
The use cases had corresponding usability goals, usually defined in measurable
terms such as time to complete an operation or number of errors. The use cases also
provided a high-level definition of the important system interactions, especially
since the use-case structure defined related operations. Finally, the use cases were
executed as test tasks, using storyboards, a prototype, and field test versions
of the product.
Thus, by combining an iterative design approach with the appropriate description
of the represented world, it is possible to shape the design and the design process
with end user information. The concrete design is derived from content of the
analysis and modified in accord with ongoing feedback. The process (e.g., setting
usability goals and test tasks and then testing via the storyboard walkthroughs
and prototype testing) is linked to the same analysis. The user-centered analysis
can be augmented as well during the design cycle (e.g., new use cases may emerge
as users note how their work is being changed by the application itself).
3.2. VIEWING THE MANAGED RESOURCES: THE OPERATOR'S WINDOW ONTO THE WORLD
In design a good deal of the work is detailed examination of the results of
user-centered analysis, extraction of key information, and laborious development
of correspondences between the data and the elements of the representing world.
The ability to recognize recurrent patterns in end user data and to manipulate the
possibilities of the representing world (e.g., knowing the grammar and style or
idioms) is developed with practice. The following subsections use the definition
of the main application window as a case study to elaborate on the process overview
provided above. Figure 3.4 gives an overview of this case study within the diagram
for a representational system. The four squares show the elements of the system
described earlier, but now labeled with the instances of information from this case
study. Each element is explained below.
There are several different types of end users of system management applications
who were observed at work in operations centers using existing tools and manual
procedures. In the case study scenario below, the end user is the operator, who
uses the application to watch the status of resources, gather first-level
information when there is a problem, and then usually notify an expert about
problems the operators cannot solve. The operators report to a manager, who also
has specific requirements for the application but does not typically use it day in
and day out to monitor (managers often set up and administer the application, as
well as set policies for the operators). The scenario is simply called "Monitoring" and was derived
from general and specific information from operators and their managers. The
comments below were descriptions given by end users and are accurate reflections
of observed work as well.
Managers
"Our operators need to be able to monitor thousands of systems from a
central location and detect a problem in any one."
"Our operators have specific areas of responsibility based on what
operating system technology they know: UNIX, NT, and VMS. We also have
people responsible for applications that can cut across operating systems.
Often they will need to see problems in other areas."
"We usually have some specific key systems that are particularly
important to the company, and we want to make sure these are highly visible."
Operators
"I need to be able to quickly focus on a critical event."
"I like to keep all my systems in view and watch for an icon to change
color."
"I can't be swamped by thousands of events."
"I can't afford to miss a single important event."
"Often I need to find all the events for a system and their status."
Such quotations are taken from an overall discussion and are meant to show how simple
narrative serves as a starting point for design. Even this small sample of data
outlines several critical (and sometimes conflicting) requirements the user
interface needs to fulfill. One obvious example is the ability to monitor many
objects at once while at the same time avoiding being swamped or creating a condition
where individual systems cannot readily be brought into focus. Also, users made
it clear that accurate detection of problems was critical, and discussion
surrounding this point suggested that, if not actual compensation, at least
customer satisfaction was strongly influenced by success in this regard.
As useful as the scenario data were, they lacked sufficient structure for all
aspects of design. Strength of the data in conveying the end user perspective was
also a weakness because it did not provide a systematic decomposition that could
be related into the overall software design process. Therefore, from the scenarios
Monitoring Managed Systems: Events provide information about the systems, and
are used as the basic indicators of system state change for operators. Access to
the state change information comes in two modes: looking for a change in the system
icon or some related icon (e.g., parent icon), or viewing the event log.
Viewing Managed Topology: Viewing managed topology starts when the operator
launches the application. The operator uses a graphic display of the managed
systems to watch for changes in the health of the systems. The topology includes
all managed systems and any superimposed hierarchy (e.g., grouping by type). The
health of each system is visually indicated.
Viewing Event Log: Viewing the event log starts as soon as the operator wants
to look at the text descriptions of the real-time event stream being received from
the managed systems. The event log is the default destination of all received events
and, unless otherwise altered, shows all events from all managed systems.
Figure 3.5 shows additional subflows of transactions for the Viewing Event Log
use case. "View event properties" and the four extension use cases capture the
different likely actions the operator might take in reaction to a specific event
chosen for viewing. The structure of the use cases defines a relationship
between transactions with the system and captures an abstract view of the flow of
these transactions.
The use cases themselves provide summaries across observations and specific
individual differences, and can then support and structure design as a useful
chunk of information. The scenario data (e.g., the quotations) are important
just because they are not abstracted: they capture the actual flavor of users'
understanding of their work and give evidence of users' schemas for
understanding their tasks.
3.2.2. Objects, Conceptual Models, Metaphors
Scenarios and use cases are the substance of the represented world. The information
they contain must be transformed to develop correspondences with the representing
world. Objects, conceptual models, and metaphors play a major role in this activity
and are derived from the scenarios and use-cases to facilitate design of specific
interface elements and flow. The objects defined here, although capable of having
a formal connotation within a full object-oriented system, can be thought of less
formally as the entities upon which the user acts. The user actions can be thought
of as services being requested of the object, but no formal requirement is made
in this regard. A conceptual model is a simplified rendering of how proposed objects
will behave and interact. The behavior in question is not just that of the graphical
user interface per se, but of the software functions supporting that interface.
One analogy here is between a blueprint and a finished house. The blueprint
describes how the components of the house will be put together so that the finished
structure is sound and can be used for the designed purpose. When you walk into
the house you do not see many of the elements contained in the blueprint, especially
if it was built correctly. However, it is possible to relate the finished appearance
and the internal structure of the house to the blueprint. Therefore, a conceptual
model is in a sense a user view of the architecture of the system, if such a view
were to be revealed to the user.3 A conceptual model helps the user interface designer
and the development team make consistent decisions across user visible objects of
a system and to relate these objects to the internal system view. This definition
is somewhat different from others in the literature because it emphasizes the
integration of the user view with system behavior, instead of focusing only on the
user's conceptual model. A metaphor effectively likens a model or an aspect of
the represented world to some other commonly understood object, behavior, or
characteristic of the world. The metaphor then provides both literal and behavioral
structure to the user interface. Conceptual models and metaphors should work
synergistically as system design centers.
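The chapter's informal notion of objects and services can be sketched in a few lines of Python. This is an illustration only; the names (ManagedSystem, report_event, view_events) are invented for this sketch and are not part of the chapter's design, which deliberately imposes no formal object-oriented requirement.

```python
# Hypothetical sketch: a user-visible object whose user actions are
# modeled as services requested of the object.
class ManagedSystem:
    def __init__(self, name):
        self.name = name
        self.status = "normal"   # health, visually indicated to the operator
        self.events = []

    def report_event(self, severity, text):
        """Service: record a state-change notification from an agent."""
        self.events.append((severity, text))
        if severity == "critical":
            self.status = "critical"

    def view_events(self):
        """Service: the operator asks to see this system's event list."""
        return list(self.events)

sys01 = ManagedSystem("unix01")
sys01.report_event("minor", "retry")
sys01.report_event("critical", "disk full")
```

The conceptual model and metaphor then govern how such objects are portrayed, while the internal architecture determines how their services are actually implemented.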
Such models were used in documentation, information support, and training on this project, where
Conceptual modeling is useful for ensuring that internal system designs are
projected selectively to the end user, or conversely, hidden as required. Figure
3.6 illustrates some aspects of an important conceptual model underlying the
Monitoring scenario.
Figure 3.6 Conceptual model for distributed architecture and consolidated user
view.
This figure shows six agents4 on the left, each of which can detect important changes
within their domain (typically a single computer). These agents are software
modules within monitored corporate servers. They send notifications of these
changes to other servers running the system management application, which collect
these events and store them, each in its individual log file. The large rectangle
into which the servers place events is the event log viewed by the operator as part
of the use case Viewing the event log. It is a consolidated repository of data
(i.e., all the log files) based on a subscription constructed by the GUI client
and sent to distributed event servers. Thus, the underlying software is responsible
for supporting this virtual view. The end user sees this single log in the list
view and only cares about the distributed sources if there is some problem making
the virtual log incomplete. In the contextual inquiries, users made a clear
distinction between the distributed elements they managed and their own requirement
to have the application support a centralized, logical view of that distributed
domain. Part of the user interface design problem was arriving at the correct user
view of the information, its use, and the end users' expectations. Part of the
overall system design problem for support of the Monitoring use case was ensuring
underlying software services were available to maintain and manage the virtual
view.5 The conceptual model is important because it makes the potential impact of
nominally internal design decisions explicit and thereby enables the designers
to address how such decisions might affect end users.
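The consolidated-log idea in this conceptual model can be caricatured in code. The sketch below is an assumption-laden illustration, not the project's implementation: each server keeps a time-ordered log, and the GUI client's subscription is modeled as a simple predicate applied to the merged stream.

```python
import heapq

# Events are (timestamp, system, severity, text) tuples. Each server's
# log is already sorted by timestamp, so a heap merge yields one
# time-ordered virtual log presented to the operator as a single list.
def consolidated_log(server_logs, subscription=lambda e: True):
    merged = heapq.merge(*server_logs)          # single ordered stream
    return [e for e in merged if subscription(e)]

log_a = [(1.0, "unix01", "major", "disk full"),
         (4.0, "unix01", "minor", "retry ok")]
log_b = [(2.0, "nt07", "critical", "service down")]

view = consolidated_log([log_a, log_b])          # default: all events
critical_only = consolidated_log([log_a, log_b],
                                 lambda e: e[2] == "critical")
```

The point of the sketch is the shape of the responsibility: the underlying software maintains the virtual view, and the end user sees one log regardless of how many distributed sources feed it.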
4
In this context Agent refers to an architecture component within a common management model
in which an agent communicates with and can act on instructions from a Manager application to effect
changes within the managed resource (object).
5
This indirection also affects the definition of the underlying system architecture and thereby
the structural elements or services made available to modules. A classic instance where system
internals are painfully revealed to the end user in many network management systems occurs within
the context of SNMP-based Management systems. Here typically the user must directly and in a
piecemeal fashion manipulate low-level data structures because a data model has been equated with
the user's view.
Table 3.1 summarizes some of the flow from the data gathered from users into
scenarios and use cases, through the generation of metaphors and models, to the
subsequent selection of specific user interface components.
Table 3.1 Example Abstractions Drawn from User Data
    Objects:  Systems, Events
    Model:    Log as consolidated list
    Metaphor: Focus on Critical Events
For example, in discussing monitoring, users often described their work by saying,
"I need the big picture most of the time, but then I need to be able to focus in
really quickly on a problem." Combined with the other information about operator
responsibilities (e.g., manager statements about central monitoring of many
systems), this user statement defines some basic structural issues for the user
interface display and navigation mechanisms. There is some form of representation
of the managed resources. These resources themselves have certain relationships
to one another and to the scope of the operator's interest. There are specific
notifications of problems, a need to determine what the problem is, and so on. These
characteristics constitute the user elements of the conceptual model for how users
want to use this application for monitoring. Further, the user's description
itself suggests a reasonable metaphor for the operation: that of a lens onto the
managed world with both wide-angle and zoom capabilities. Therefore, the general
requirements for the interface (expressed in Monitoring Managed Systems --> Viewing
Managed Topology, Viewing Event Log), the model of the system supporting these
requirements, and a metaphor for the operation of that interface can be linked to
the detail of the user-centered data.
More generically, the use cases themselves were mined to create a candidate pool
of user visible objects and operations. These are the nouns and verbs of the use-case
narrative. The user descriptions that underlay the use-cases were examined for
specific language and nuance, and together this information was used to create
candidate dialogs for display of the objects and menu commands capturing the actions.
Specific terms can derive from either standards (e.g., for dialog control) or domain
semantics.
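This mining pass can be caricatured in a few lines of Python. The word lists below are assumptions for illustration only; in practice the designer reads the narrative and judges each term, and no such lexicons existed on the project.

```python
import re

# Illustrative lexicons (assumed, not from the chapter): candidate
# operations are the verbs of the use-case narrative, candidate
# objects the nouns.
OPERATIONS = {"find", "evaluate", "view", "monitor", "filter"}
OBJECTS = {"routine", "documentation", "event", "system", "log"}

def mine(narrative):
    """Return (candidate objects, candidate operations) found in text."""
    words = {w.lower().rstrip("s") for w in re.findall(r"[A-Za-z]+", narrative)}
    return sorted(words & OBJECTS), sorted(words & OPERATIONS)

objs, ops = mine("Find candidate routines and evaluate each using its documentation")
```

The candidate objects become dialog content and the candidate operations become menu commands, subject to the standards and domain semantics noted above.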
In summary, objects are the nominal targets within a direct manipulation interface,
but how they are portrayed (their context and meaning within the concrete design)
is shaped by the mediating devices of the conceptual model and metaphor with
which they are aligned.
3.2.3. Detailed Screen Design
The objects, models, and metaphors generated from the use-cases were first
transformed within Windows 95 Graphical User Interface standards into a paper
prototype storyboard. As discussed above, detailed analysis provides a view into
the users world in terms of the relevant objects and operations. Models and
metaphors are abstractions helping to link the user data to the software system
design and implementation. Detailed screen design is the step where specific user
interface components must be chosen to express these metaphors and models as they
will be used to manipulate the target objects. Figure 3.7 is a copy of a paper
prototype screen used in the storyboard for the Monitoring use case. It shows the
main application window and is made up of three subwindows within an overall
Multiple Document Interface (MDI) application window. The three panes of the child
window are the tree view (upper left), the list view (upper right), and the map
or icon view (bottom panel). Table 3.2 lists these elements and some brief notes
about the rationale for their design.
Figure 3.7 Paper prototype of main window for operator view used within storyboard
walkthrough.
Table 3.2 GUI Components and Design Rationale
    Tree control
    Icon/map view
    List view
Viewing topology and viewing event lists were two main ways of monitoring, as
reflected in the use case analysis. The icon color status provided quick alerting
and the event list provided more information about a specific event (the rows in
the list view were also color coded for severity). To support both overall
monitoring of a large number of systems and quick focus (wide angle and zooming)
whenever any object in the tree or map view was selected the event view would be
automatically filtered to show only events for the selected object and its children.
In this design the metaphorical focus is implemented with explicit feedback from
direct manipulation of the tree, as well as with filtering to reduce the data. Other
features supported focus and alerting; for example, actions could be defined to
produce a pop-up message box when certain events were detected. Figure 3.8 shows a screen
capture of an early version of the actual running application.
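The wide-angle/zoom filtering described above can be sketched directly: selecting an object restricts the event list to that object and its children. The tree structure and field names below are assumptions for illustration, not the product's API.

```python
# When an object is selected in the tree or map view, the event list is
# filtered to the selected object and all of its descendants.
def subtree(node, children):
    """The selected node plus every descendant beneath it."""
    scope = {node}
    for child in children.get(node, ()):
        scope |= subtree(child, children)
    return scope

def filtered_events(events, selected, children):
    scope = subtree(selected, children)
    return [e for e in events if e["system"] in scope]

children = {"All": ["UNIX", "NT"], "UNIX": ["unix01", "unix02"], "NT": ["nt07"]}
events = [{"system": "unix01", "text": "disk full"},
          {"system": "nt07", "text": "service down"}]
```

Selecting "UNIX" zooms the event list to the UNIX subtree, while selecting "All" restores the wide-angle view of every managed system.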
For example, users pointed out a relationship among a group of objects they felt
was central to how they worked and suggested this relationship should be captured
(e.g., hierarchical organization in a tree view). Thus, two critical types of data
were gathered. First, initial usability data were gathered: where were users
confused? Where did they make errors in trying to walk through the design? Second,
the users could talk about how they would use the capabilities embodied in the design
and describe how it integrated with their work practice.
3.2.5. Usability Goals
It is a great help in design and testing to establish early on some goals for the
usability of the system. Usability goals provide measurable targets for the design
and give a development team a tangible expression of the desired system behavior.
Scenarios, as stories, can provide the context for defining tasks and goals. Two
example goals (drawn from a set of about 30 goals) are defined below, using the
same data from which other elements of the design have been motivated.
1. Goal: Detect Key system status change immediately, regardless of current
task.
Metric: Note key system change within 5 seconds of appearance in display.
CI Data: Besides monitoring overall network and system health, there
is often a need to focus monitoring on specific systems that are particularly
important to the company.
We want simultaneous roll-up and key system view.
Make critical events highly visible.
We would like a pop-up window on an event.
2. Goal: Detect and display high priority event for any one system out of
100 systems rapidly (30 seconds or less).
Metric: Measure time from occurrence of high-priority event to user
displaying event detail for that event.
CI Data: We need an easy way to get specific information quickly.
It would be useful to be able to see what actions the event has triggered
and perhaps start some others.
Make critical events highly visible.
These goals were tested with the prototype application, as described below.
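A measurable goal of this kind can be expressed directly as a check over observed timings. The sketch below is an illustration of the idea, not the project's test harness; times are in seconds and the function names are invented.

```python
# Goal 1, rendered as code: a key system status change should be noticed
# within 5 seconds of appearing in the display.
def goal_met(shown_at, noticed_at, limit=5.0):
    return (noticed_at - shown_at) <= limit

def pass_rate(observations, limit=5.0):
    """Fraction of (shown_at, noticed_at) trials meeting the goal."""
    return sum(goal_met(a, b, limit) for a, b in observations) / len(observations)
```

Expressing a goal this way forces the metric to be stated unambiguously before testing begins, which is what gives the development team a tangible target.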
3.2.6. Prototype and Field Testing
Besides the storyboard walkthroughs, two other steps were used to gain user feedback.
A Visual Basic prototype was created about the same time a first draft of the
storyboards was available. This prototype was tested with five operators in 1-hour
sessions. Test results provided many comments about how the linked navigation
needed to work, how status should be indicated, and how transition from monitoring
the choice of representational models. The next three sections describe three
groups of rules of thumb. These metaheuristics reflect the experience described
in this paper and echo the perceptions of authors within this volume, as well as
others.
5.2.1. Defining the Represented World: Start Off on the Right Foot
The thrust of this metaheuristic is that it is critical to obtain the information
needed to define the represented world and that this information needs to be cast
into forms amenable to the larger goal of user-centered design (a point argued as
well by Monk in this volume). In particular, the work on this project suggests the
following rules:
1. Make the represented world sharable: One of the strengths of the
scenario approach is that it retains the overall information and structure
of end user experience. This enables multiple diverse audiences to share this
representation and work with it throughout the course of the project. The
scenarios serve as the context for storyboard walkthroughs, usability goal
definition, and other design activities.
2. Preserve the end-user view in the represented world: Though usually
an intrinsic part of the scenario form, it is worthwhile to isolate this
particular idea. Specific words and end user experiences are extremely
important in selecting aspects of the represented world to be modeled and
for choosing details in the representing world. Essentially, key data must
be kept alive through the course of the design cycle.
3. Define the represented world at an appropriate granularity: One size
does not necessarily fit all for design. This project used both scenarios
and use cases as complementary forms for the represented world. Under other
circumstances, simply using scenarios may be adequate, and because they are
so robust they may be the preferred option if a single representation must
be chosen. Alternatively, other representations of end user work may work well
for specific tasks. This application was complex enough to require the
disciplined decomposition provided by use cases, while others may not require
it. In any event, it is useful to appreciate the relative strengths and
weaknesses of alternative representations as they are brought to bear during
a design cycle.
5.2.2. Minimizing the Gap: Help the Magic Happen
Part of the pleasure of design is watching the magic happen. Nevertheless, part
of the goal of a skilled designer or design team should be to create the conditions
for the magic to happen, and not to leave it to chance (hence a notion like
Systematic Creativity, Scholtz and Salvador, Chapter 9, this volume). One way
to accomplish this is to build a process that supports the coding and recoding of
information necessary for success.
1. Consciously create mediating abstractions: As work progresses, objects,
models, and metaphors should be the blueprint guiding the use of the bricks
and mortar determined by the user interface syntax. This principle is based
on the belief that it is difficult to go from user-centered data to an
implementation without some mediating ideas; thus, explicitly defining them
makes the process more efficient. In effect, part of solving the problem of
design is creating different representations.
2. Make iteration work for you: This and the next principle go together.
Often iteration is looked upon as the failure to get it right the first time.
Here the opposite is advocated: do not expect to get it right the first time.
Early progress can be measured by the number of questions asked, rather than
the number answered. Use rapid cycles in iterative design to test alternative
views and designs, knowing the gap is bridged incrementally. However, be
disciplined enough to get the information needed to resolve the questions
and refine the design by creating paper prototypes, quick high-fidelity
prototypes, and other approximations of the representing world.
3. Design is a bootstrap operation: If the answer were clear, the
design work would be finished. Expect early iterations to be fragmented and
imperfect. The goal of early iterations is to create mediating representations
that lead to the finished product, not to create the finished product itself.
5.2.3. The Right Tools for the Job: Using Converging Operations
Design of a system seems to require knowledge of all the system elements in order
to begin the design, otherwise how can you correctly design any one element? Part
of the answer to this dilemma lies in using multiple approaches to refine a design.
Using iterative design with a variety of techniques puts special demands on the
design process, and the next principles help guide the operations.
1. Use the strengths of different approximations of the representing world:
Pick tools (e.g., low-fidelity prototypes) so that early work is mutable
at low cost, and expect it to change. Use techniques such as storyboard
walkthroughs to validate broad issues of organization and modeling, because
they are capable of representing whole subflows of a user interface at one
time. Later in the process, focus on high-fidelity prototypes to look at
detailed issues in user interface structure.
2. Infuse the represented world into the entire design and development cycle:
In moving from one representation to another, and from one step in the
process to the next, it is helpful to have some thread of the represented
world that links design artifacts and process steps. The end user, via either
actual involvement or extension, is key. In this project, scenarios were used
6. CONCLUSION
This chapter has reviewed one example of how the gap between user-centered analysis
and concrete design can be bridged. Essentially, it suggested in both broad process
terms, and in detailed data analysis, that the gap is bridged by transforming
alternative representations for the end user's world within a representational
system. The process for creating this representational system can be facilitated
by systematically creating mediating representations that reveal and link the
represented world and the representing world. Placing the user at the heart of the
process corrects for inherent biases in the designer as they go about the cognitive
task of deciding what aspects of the represented world to model and how to model
them in the representing world. In this way the magic of design is grounded in
knowledge of the world and repeatedly tested with the people from whom that
knowledge was derived.
7. ACKNOWLEDGMENTS
I would like to thank Betsy Comstock, Peter Nilsson, Kevin Simpson, Colin Smith,
Dennis Wixon, and Larry Wood for comments on this chapter.
8. REFERENCES
Anderson, J. R., Language, Memory and Thought, Lawrence Erlbaum Associates,
Hillsdale, N.J., 1976.
Carroll, J. M., Scenario-Based Design, John Wiley & Sons, New York, 1995.
Jacobson, I., Ericsson, M., and Jacobson, A., The Object Advantage, Addison-Wesley,
Reading, MA, 1994.
Lakoff, G. and Johnson, M., Metaphors We Live By, The University of Chicago Press,
Chicago, 1980.
Landauer, T. K., The Trouble with Computers, MIT Press, Cambridge, 1995.
Chapter 4
Model-Based User Interface Design:
Successive Transformations of a Task/Object Model
ABSTRACT
There is a large gap between a set of requirements and a finished user interface,
a gap too large to be crossed in a single leap. This gap is most easily traversed
by using various sets of guidelines, based on patterns of human perception,
cognition, and activity, to transform background information first into an
essential model, then into a user's model, and finally into a user interface design.
Scenarios form the primary thread in this process, keeping the designer focused
on the users' activities. Rough sketches are used throughout this process, but
sketches cannot represent the dynamic nature of the interface in a comprehensive
manner. The designer must build interactive visual prototypes to better understand
the actual interactions, test the design before it is coded, and communicate the
finished design to developers and customers.
1. INTRODUCTION
The project is underway. There is a clear problem statement, the requirements have
been gathered, and the users have been interviewed. Now it is time to design the
user interface. How does the user interface designer transform an abstract list
of requirements into a concrete design?
The goals of a user interface are daunting: for the novice user it should be easy
to learn and recall; for the experienced user it should be flexible and efficient;
and it should extend every user's abilities, enabling them to accomplish tasks
that they otherwise could not. However, all too often the resulting application
is difficult to learn, tedious to use, and forces people to focus on the computers
and applications rather than on their work.
Why do so many user interface designs fail? Because the goals are diverse, the
complexity is great, and we may fail to apply our knowledge of how people communicate
and interact with the world around them. However, there is a way. By combining
complexity management techniques, guidelines on human interaction and
communication, a bit of insight and creativity, and a good development process,
it is possible to design user interfaces that meet these goals. This chapter
describes one way to bridge the gap between requirements and a completed design
using modeling to manage complexity and a series of transformations that take us
across the gap a step at a time. This process is illustrated using a portion of
an application builder as a case study.
As the design develops, users can provide valuable comments at all stages. They
can help the designer select a preferred interface from among several alternatives
and polish the rough edges as the design is refined. However, end-users should not
be asked to design the interface or to participate directly in design sessions.
They are intimately involved in the details of the current system and are only
vaguely aware of the viewpoints of other users. They do not have the designer's
knowledge of design, human interaction, and technology. Once they have participated
in creating a design, they are biased and their evaluations of subsequent designs
are suspect.
physical world, information processing, and social interaction; and guidelines for
high-level U/I design, visual and audio presentation, and platform/toolkit
specific layout and widget use. Figure 4.1 summarizes the design stages, models,
and transformation steps described in this chapter. The arrows indicate which
elements at each stage are transformed into elements at the next stage and the
information used to guide that transformation. Arrows within a stage indicate that
some elements help to define other elements at that level.
returns to the prior stage and other alternative transformations are tried. The
details of each stage are completed later as confidence in the trial design
increases.
the tasks and how are they dealt with? Describe any external artifacts and
their use within a scenario. What improvements have been suggested?
Even when designing for a new kind of application, there are usually real-life
examples of how people currently perform the tasks of interest; however, it may
be necessary to augment these with additional, made-up scenarios depicting new
tasks to be supported by the application.
scenarios. They should focus on the use cases, comparing them with the current
environment in order to identify missing tasks.
A use case is constructed by recasting the real-life scenario using the tasks and
objects of the essential model and discarding all references to how the activity
is currently performed:
Developers build applications, in part, by including selected routines from
libraries. They select the routines by finding a candidate set of routines and
evaluating each of them using its documentation.
Note that this use case does not include the information needs or specific problems.
This makes it easier to discern the underlying structure. The information needs
and problems are listed with the appropriate tasks and will be used later in the
process to evaluate the alternative metaphors and to ensure that the users' needs
are met. The information needs and problems in this simple example are:
a metaphor has been selected, the user model can usually be fleshed out quite rapidly.
The metaphor also provides a structure that can be easily extended to incorporate
new requirements as they arise.
For example, the physical world imposes limitations, such as that an object can only be
in one place at a time. A plausible extension would allow two geographically
separated people to see and work on the same object on their computer displays while
talking on the phone.
The preferred metaphor should be reviewed with the developers, illustrating the
primary user scenarios with a couple of line drawings or short storyboards. The
intent is to keep the developers informed, giving them adequate time to consider
implementation problems that might arise rather than surprising them later. It is
the developers who will bring the design to life.
At this point the basic user model has been defined by remapping objects in the
essential model and restating some of the use cases into the metaphor. The rest
of the user model can be fleshed out by restating the rest of the use cases, making
necessary adjustments to the metaphor, and generating the task tree (described
below).
The parts cabinet metaphor doesn't include any support for describing what the
parts are or how they are used, and the only mechanism for finding candidates is
by grouping similar parts in labeled drawers.
The catalog and data handbook metaphors each contain descriptive information. They
both also have an index that supports locating items using keywords. The data
handbook also has example use information while the catalog metaphor suggests that
it can be used to obtain the items it shows. Each of these metaphors could be
stretched a little to cover all these points. This example will develop only the
catalog metaphor, but it is preferable to pursue a few alternatives to better
highlight the strengths and weaknesses of the favored choice.
We check the fit by restating the objects and tasks of the essential model in terms
of the metaphor:
Tasks
Find candidate routines: turn to a section or look in the index.
Evaluate a routine: read an item's documentation.
Include a routine: copy an item to the application.
Objects
Item: an item in the catalog, a component (routine).
Catalog: a collection of components (library).
Description: a component's documentation.
Application: what we're building.
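The fit-check above amounts to a mapping from essential-model elements to metaphor elements, with gaps standing out as unmapped entries. A minimal sketch, assuming a simple dictionary representation (all names are illustrative, not the author's notation):

```python
# Hypothetical sketch: checking how well a candidate metaphor covers the
# essential model by recording an explicit mapping. Entries left as None
# mark essential-model elements the metaphor fails to express.

essential_tasks = ["find candidates", "evaluate routine", "include routine"]
essential_objects = ["routine", "library", "documentation", "application"]

catalog_metaphor = {
    # task -> how the metaphor expresses it
    "find candidates": "turn to a section or look in the index",
    "evaluate routine": "read an item's documentation",
    "include routine": "copy an item to the application",
    # object -> metaphor element
    "routine": "item in the catalog",
    "library": "the catalog itself",
    "documentation": "an item's description",
    "application": "what we're building",
}

def coverage(metaphor, elements):
    """Return the essential-model elements the metaphor fails to cover."""
    return [e for e in elements if metaphor.get(e) is None]

missing = coverage(catalog_metaphor, essential_tasks + essential_objects)
print(missing)  # an empty list means the metaphor covers every element
```

Running the same check over each alternative metaphor makes their relative strengths and weaknesses concrete before one is chosen.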
Now that a preliminary mapping between essential model and metaphor has been
developed, a user scenario can be generated by restating the primary use case:
Developers build applications by including components selected from a catalog. The
catalog is divided into sections that list similar components. The developer can
browse the components in a section, reading their descriptions that include
information about how they are used. If developers cannot find a desired component
in a section, they can look in the index for other terms that might describe it.
When the desired component is found it can be copied to the application.
The information needs and problems listed in the essential model should be addressed
if possible in the user scenarios using elements of the metaphor. In the scenario
above, sections in the catalog and the catalog index are used to address the problem
of finding candidate components, and product descriptions provide information
about the components.
Much of the tree can be extracted from the user scenarios (the use cases restated
in terms of the metaphor), but these scenarios typically illustrate only the
primary functionality of the application, often missing secondary but necessary
functionality, e.g., the setting of user preferences. The task tree, on the other
hand, ideally should include everything a user can do.
Generating the complete task tree can be tedious and sometimes difficult because
the lower levels may involve significant design effort. Typically it is sufficient
to enumerate all the high-level tasks and flesh out their subtasks as the design
for the primary tasks solidifies. Use the structure and patterns of the metaphor
and platform standards as guidelines in designing these parts of the task tree.
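One lightweight way to record such a task tree is as a nested structure whose leaves are the lowest-level tasks. A hypothetical sketch using the catalog example; the structure and helper function are illustrative, not the chapter's notation:

```python
# A minimal task tree, assuming a nested-dictionary representation.
# Task names follow the chapter's catalog example; "set user preferences"
# illustrates the secondary functionality the scenarios tend to miss.

task_tree = {
    "build application": {
        "find candidate components": {
            "browse a section": {},
            "search the index": {},
        },
        "evaluate a component": {
            "read its description": {},
        },
        "include a component": {
            "copy it to the application": {},
        },
        "set user preferences": {},  # secondary but necessary functionality
    }
}

def leaf_tasks(tree):
    """Enumerate the lowest-level tasks, the ones windows and controls must support."""
    leaves = []
    for name, subtasks in tree.items():
        if subtasks:
            leaves.extend(leaf_tasks(subtasks))
        else:
            leaves.append(name)
    return leaves

print(leaf_tasks(task_tree))
```

The high-level tasks can be enumerated first and the leaf level filled in later, as the chapter suggests, without changing the representation.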
[Table: the user model's objects (Section, Attribute, Candidates, Component, Description) and the operations defined on each.]
As mentioned above, it is likely that some of the objects and operations may not
occur in the metaphor. When possible, these should be cast as plausible extensions
to the metaphor. As these extensions are not part of the metaphor and prior user
experience, they must be clearly delineated and defined.
of how people perceive, think about, and interact with the real world. This
knowledge is used in all phases of the design process, from initial interview to
final usability studies and redesign. The author structures this knowledge for his
own work as a set of models used to guide the transformations as a design progresses
from step to step as shown in Figure 4.1. These models and the areas they affect
include:
Perceptual: grouping of visual elements; task layout; use of color,
sound, and animation.
Cognitive: response times; short-term memory; task flow.
Learning: on-line reference materials; use of prompting cues.
Interaction: use of direct manipulation vs. language; social patterns.
Work: purposeful activity; collaboration.
Perceptual, learning, and cognitive processing models suggest how information is
best presented so that people can recognize and utilize it most easily, while models
of work and interaction with the physical world and with other people provide
guidelines for direct manipulation and the use of social cues during collaboration.
These models provide a number of patterns that can be used to structure the overall
design, fill gaps in the platform-specific guidelines, design for nonstandard
environments, and anticipate the consequences of design decisions.
The design of the user interface proceeds in three overlapping phases based
primarily on the representations used in each phase: rough layout, focused on
transforming task flow into simple sketches of proximate window layouts;
interaction design, which transforms the task tree and objects into interactive
prototypes; and detailed design specification, which defines the final graphics,
terminology, menus, messages, and dialogues. Just as with the earlier design stages,
these phases can be pipelined, and the completed portions checked for usability
and communicated to the engineering and documentation teams as the design elements
firm up.
When the underlying metaphor has a strong visual appearance an alternative design
approach is to mimic the visuals and interaction elements of the metaphor. Use this
with care as it often leads to overly cute, cluttered graphics that are difficult to
visually parse and whose controls, such as tabs, are limited in comparison to the
graphical widgets in platform-specific tool kits. On the positive side, this type
of design often leads to a very distinctive visual appearance that can be quite
appealing to casual users, though it can also become tiresome when used day after
day.
The sketches of the layouts should follow applicable platform-specific guidelines
and show the different view areas, their essential content, and the controls, while
eliding unnecessary details. They can be shown to potential users and members of
the development team for comments and used in preliminary usability studies. The
quality of the comments however is limited because the study participants must use
significant mental effort to visualize the dynamic operation and feel suggested
by the static sketches, forcing a logical and intellectual consideration of an
interface intended to be used in an intuitive manner.
Readers are encouraged to develop their own sketches as the example evolves in the
following sections, or refer to Figure 4.3 which shows some of the alternative
layouts that were prototyped for this example.
Figure 4.3 Prototype of alternative layouts. (A) Tabs; (B) four views; (C)
overlaid views; and (D) web-browser.
if possible. Since the application construction window will likely be large, the
catalog window will fit best if it is tall and narrow.
Arranging the four views based on the scenario flow, the starting views,
"sections" and "query", are first, the "candidates" next, and the "data
sheet" last. Since the catalog window is tall and narrow, the views would be
arranged top-down. Initial sketches suggest that arranging four views vertically
will make them short so that they won't be able to display very much content. Since
the scenarios always begin with either the "sections" view or the "query" view,
but never both, they can be overlaid in the same space and a button added to switch
between them.
5.1.4. Scenario Flow
The scenario flow through the sketches is checked by drawing arrows to indicate
the sequence of user actions for a given scenario. Usually a single sketch is
sufficient to illustrate a single scenario, though a complex one involving many
subtasks may require a storyboard, i.e., a sequence of sketches showing how a window
changes as the scenario proceeds. As additional subtasks are added, secondary
windows can be created for tool palettes and non-modal dialogues as necessary; modal
dialogues should be avoided when possible in object-oriented designs.
primarily for the "watch me" style of scripting in which the user performs some
actions that the computer will later mimic. The difficulty with demonstration is
in showing all the alternative actions to be used in different situations.
Social interaction deals with the use of cues, signals outside the main
communication channel, that help to direct the dialogue. Without them the
interaction between people is much more difficult and understanding suffers. For
example, just as the listener nods his head or frowns to indicate his reaction to
what the speaker is saying, computers need to acknowledge user input with some sort
of feedback, otherwise the user is uncertain that the input was accepted. In
applications where the computer serves as a communication channel between two
people, the computer is most effective when it supports the transmission of the
common social cues that people would use if they were face-to-face, e.g., facial
expression and hand movement and position.
5.2.2. Reviewing Scenario Flow
The rough layouts should be reviewed with respect to these interaction patterns.
The interaction style should feel right for the task and the controls should support
the chosen style of interaction. Particular attention should be given to items not
covered by the platform guidelines, e.g., the use of transitional animation or the
support of social cues. The number of explicit actions that the user must make to
accomplish a given task should be minimized, particularly those that just shuffle
things around, e.g., opening and closing windows, scrolling, and view changes.
After the sketches have been revised, it is time to prototype the primary user
scenarios and any interactions that have been developed specifically for this
interface.
5.2.3. Visual Prototyping
Sketches cannot be used to adequately evaluate the dynamics of an interface, nor
can they be used to determine the exact sizes and placement of windows, views,
controls, and information. For this, the designer must abandon paper and pencil
and turn to the computer. While it is standard practice for designers to use drawing
and illustration programs to create screen images of the visual elements, it is
less common for a designer to create an interactive visual prototype though it takes
only a little more time and effort. A visual prototype mimics the intended
appearance, interaction, and feedback of the proposed application. The author finds
it to be a far more effective design aid than a plain graphics program, often
revealing missing details and rough areas in the interaction. It may be the only
way to accurately evaluate alternatives. Visual prototyping is also the most
effective way to validate the actual feel of the interaction, obtain
pre-development, high-quality comments from users, and communicate the design to
developers, marketing, and potential end-users.
Too much time and effort are needed to prototype an entire user interface so it
is best to do only the primary user scenarios and those interactions that the
designer has little personal experience with such as those developed specifically
for this application. If a platform-specific GUI builder will be used to develop
the application, it might provide a good prototyping environment though it will
be difficult to prototype invented interactions and views and to fake the data
needed to run the prototype. A GUI builder also enables a designer to contribute
directly to the implementation, insuring that the finished application matches the
design. When the design contains significant invention or when a GUI builder will
not be used in the implementation, a number of commercial tools can be used for
visual prototyping such as Macromedia's Director or Allegiant's SuperCard.
Ideally the prototyping application should run on the same platform as the project
application, but this is not necessary if the prototyping platform has adequate
performance and the same user input and output devices, e.g., mouse, tablet, display
resolution, etc., as the target platform.
Prototypes range from slide shows, which are quick to build but of limited use,
to tightly scripted, which can mimic task interaction quite realistically, to fully
interactive, which can support many arbitrary interaction sequences. The tightly
scripted prototypes generally provide the best information for the least time
investment while fully interactive prototypes are used to gain deeper experience
with more radical designs.
Since the purpose of the prototype is to gain information about sizes, layouts,
and interactions, the initial graphics used in the prototypes should be first
approximations created from screen dumps of the target platform to save time, and
revised later as the design solidifies. The next step is to create a slide show
sequence that shows the screen appearance before and after the completion of each
subtask. User actions, such as clicks and drags, are then added to initiate the
transitions from one slide to the next. Finally, interaction feedback is inserted
at the proper points to provide a fully realistic experience. A tightly scripted
prototype of a high-level task or two can usually be built in a couple of days.
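A tightly scripted prototype is, in effect, a small state machine: each slide plus a user action leads to the next slide. The sketch below is an assumption about that structure, not a feature of Director or SuperCard, and all names are illustrative:

```python
# A tightly scripted prototype modeled as a state machine: each
# (slide, user action) pair maps to the next slide in the script.
# Slide names follow the catalog example; the mapping is hypothetical.

script = {
    ("sections", "click section"): "candidates",
    ("candidates", "click component"): "description",
    ("description", "drag to application"): "application-updated",
}

def run(script, start, actions):
    """Replay a sequence of user actions; unscripted actions leave the slide unchanged."""
    slide = start
    for action in actions:
        slide = script.get((slide, action), slide)
    return slide

print(run(script, "sections", ["click section", "click component"]))  # description
```

Extending the script with more (slide, action) pairs is how feedback and secondary paths are added, which is why such prototypes give good information for little time investment.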
When the designer is satisfied with the prototype, it can be used to run low-cost
user studies, with revisions as needed between sessions. The prototypes are also
the best way to convey the design to the developers as they fully demonstrate the
designer's intentions with little chance of misinterpretation. If the prototype
will not run on the developers' platform or if the prototype is to be used as a
specification, a screen capture facility can be used to create a narrated,
platform-independent QuickTime movie that the developers can replay on their own
computer as many times as needed.
6. SUMMARY
There is a large gap between a set of requirements and a finished user interface,
too large to be crossed in one leap. This gap is most easily traversed by developing
two intermediate models: essential and user. Modeling strips away the irrelevant
details, freeing the designer to focus on the central issues. It is this focus on
the essential that often leads to significant insights and inspiration.
The design proceeds in four steps:
Scenarios form the primary thread in this process, keeping the designer focused
on the users' activities as the background scenarios are first extracted and then
transformed across these steps. Patterns of human perception, behavior, and
activity guide these transformations; metaphor is central to the essential
model-to-user's model transformation while an understanding of human perception,
cognition, and interaction is central to the user's model-to-interface design
transformation. Platform-specific guidelines also aid in the selection of
representations and controls used in the interface design.
Rough sketches are used throughout this process as an aid to visualizing the current
stage of the design, but sketches cannot represent the dynamic nature of the
interface in a comprehensive manner. The designer must build interactive visual
prototypes to better understand the actual interactions, to test the design before
it is coded and more difficult to change, and to communicate the finished design
to developers and customers.
Design is not a straight-line activity that moves smoothly from step to step. It
is messy, requiring the generation of many alternatives at each step and many
iterations as ideas are developed and tested, returning to users time and again
to check earlier decisions. This is what makes designing for people so much fun.
7. REFERENCES
Apple Computer, Inc., Apple Human Interface Guidelines, Addison-Wesley, Menlo
Park, CA, 1987.
Apple Computer, Inc., Newton User Interface Guidelines, Addison-Wesley, Menlo Park,
CA, 1996.
Constantine, L. L., Essential models, Interactions, 2(2), 34-46, April, 1995.
GO Corp., PenPoint User Interface Design Reference, Addison-Wesley, Menlo Park,
CA, 1992.
Holtzblatt, K. and Beyer, H., Making customer-centered design work for teams,
Chapter 5
Lightweight Techniques to Encourage Innovative User Interface Design
Andrew Monk
University of York, York, United Kingdom
email: [email protected]
TABLE OF CONTENTS
1. Introduction
2. The Design Context
2.1. Providing a Firm Platform to Bridge From: Getting the Representation Right
2.2. The Need for Lightweight Techniques
3. Representing Top-Level Concerns: The Rich Picture
3.1. What it is For
3.2. What it Looks Like
3.3. How to Do it
4. Representing Work as Objectives: The WOD
4.1. What it is For
4.2. What it Looks Like
4.3. How to Do it
5. Problems and Interruptions: The Exceptions List
5.1. What it is For
5.2. What it Looks Like
5.3. How to Do it
6. Illustrative Stories: Scenarios
6.1. What They are For
6.2. What They Look Like
6.3. How to Do it
7. Bridging the Gap: The Need for a Dialogue Model
8. Afterthoughts
9. References
1. INTRODUCTION
This chapter describes four representations for recording and reasoning about the
users' needs. It is argued that the gap between understanding the needs of the user
and an initial design can be minimized by using these informal techniques:
A rich picture identifying the major stakeholders and their concerns, to make sure the software fits the wider context of work
A work objective decomposition (WOD) representing the work to be supported as objectives rather than current procedures
A user exceptions list recording the problems and interruptions that may disturb that work
Scenarios, illustrative stories of use, generated from the WOD and the exceptions list
In each case there are sections describing what it is for, what it looks like and
how to do it. A further section describes how to use these representations to produce
an initial design.
Documentation, and the representations used within documents, have two important
functions in addition to those usually given of providing a record for management,
software maintenance, or whatever. The first is to communicate. Design is rarely
a solitary occupation. Members of a design team need to communicate and negotiate
their understanding of the design. In some organizations the "before the gap"
activities will be carried out by different individuals from the "after the gap"
activities, and so the findings of the former group have to be communicated to the
latter. Even in design teams where everyone takes part in all activities there is
still a lot of communication to be done. Different team members will have different
viewpoints and a common vocabulary, of the kind provided by a representation, makes
it possible to negotiate an agreed understanding. The second function of a document
is to facilitate reasoning. Writing down your conclusions makes it possible to
reflect on and reason about them. Has everything been considered? Representing
conclusions about one thing will remind you of other things yet unconsidered. Are
the conclusions drawn consistent? Writing down one conclusion may make you realize
that an earlier formulation about something else can be improved.
There are very good reasons then for documenting the design process. However, to
be really useful as tools for communication and reasoning, the representations used
need to be tailored to the context they are going to be used in. Different
representations have different properties. First, they capture different aspects
of what they are used to describe. For example, representing an organization in
terms of a reporting hierarchy would make explicit different aspects of
organizational structure than a sociogram recording who communicates with whom.
Let us say that by making some additional assumptions it is possible to infer one
representation from the other. Even if this is the case, so that one representation
implicitly contains the information in the other, it is still the case that one
representation may be most useful for one purpose and the other for another. We
say that each representation makes explicit different kinds of information.
Second, different representations have different cognitive properties (Green,
1989). For example, diagrammatic representations are generally more difficult to
create and change than textual representations but, if well designed, may be easier
to read. Third, different representations require different skills and effort from
the people using them. Many of the techniques devised by academics for software
engineers, for example, require such a large investment in training and effort
during use that they are unlikely to be justified in terms of the payback they produce
(Bellotti, 1988; Monk, Curry, and Wright, 1994).
and may serve a management function. Very large projects, e.g., military command
and control systems or off-the-shelf word processors, can only be coordinated
through the application of well-articulated and strictly adhered-to procedures.
This chapter, however, is not concerned with large projects. Rather, it is hoped
to offer something to the smaller low-profile design projects that make up so much
of the everyday work of software developers. These projects may only involve teams
of two or three people working for a few months or even weeks. The developers we
have in mind probably work for software houses or within user organizations. In
the latter case they might be part of a management information service or computer
department. Much of the software they use is bought in and most of their work is
to use the options provided to customize it for different subgroups within the
organization. For example, a particular department may request a Windows interface
onto the company database for some very specific data entry task that is then
implemented using Excel or Visual Basic.
The main difficulty faced by the small design team, in comparison with a large
project, is that the resources they have for recruiting people with special skills
or for learning new skills are very limited. Any technique they may apply must be
lightweight in the sense that it can be easily picked up and easily applied. If
a project has a total effort of a few man-weeks then the process of understanding
the users' needs can only be a few man-days and the effort required to learn
techniques for doing this must be even less. Nielsen (1993) draws an analogy with
shopping to illustrate this requirement for lightweight techniques. He describes
his techniques as "discount" techniques: techniques that cost less than their
"deluxe" versions but nevertheless do the job, cost being measured in terms of how
much effort it takes to learn and then use them.
The techniques described here fit Nielsen's requirements for discount techniques.
They are lightweight procedures that can be learned in a day or so and only take
man-days to apply. One of the reasons that this is possible is that they assume
that there is a well-specified and accessible user population doing some
well-defined tasks. The developers in one of these small projects can be very clear
about who their users are and the work to be supported. Potentially, it should be
easy for the developers to talk to these users and observe how they work. Compare
this situation with that faced by the designers of a word processor, for example.
Their users will all have very different skills and expectations; for example, some
may be scientists, others secretaries. The work to be supported will also vary a
great deal, potentially, from typing a memo to producing a book. As will be explained,
being able to specify a small user population and a small set of tasks to support
makes it easier to guarantee task fit, the most important attribute of a usable
system.
The representations described below were originally developed in a collaborative
research project. The partners were Data Logic, a software house; System Concepts,
a human factors consultancy; and the human-computer interaction research group at
the University of York. The work most relevant to this chapter is described in (Curry
et al., 1993). As a part of this project the techniques developed were applied in
a case study of a warehouse handling food products for a large group of stores in
the UK. This study will be used as an example in the sections that follow. The
techniques have also been taught to, and used by, several generations of
computer science students at York University and to this extent they are well tried
and tested.
The body of this chapter is divided into four sections, each of which describes
a different representation that may be used to think about some aspect of the users'
needs. In each case there is an introduction to the purpose of the representation,
an example of what it looks like, and some instructions on how to produce it. Finally
there is a section on how to use these representations when bridging the gap. The
representations described are: (1) a rich picture to capture top level concerns;
(2) a work objective decomposition (WOD) that serves a similar purpose to a
hierarchical task analysis (HTA) but is easier to produce and use; (3) a user
exceptions list of possible interruptions and mistakes, and (4) fictional scenarios
of use generated from the WOD and user exceptions list.
This is only possible if the developer has some understanding of the broad context
of the work being carried out. It also requires that all parties who may be affected
by the new system and procedures are consulted right at the start of the development
process. The rich picture is a representation that serves the purpose of identifying
these stakeholders, their concerns, and responsibilities. We have also found that
developing a rich picture is a very effective way of getting all the developers
in the design team up to speed with the aims of the design project.
as a reminder of figures peripheral to, but possibly critical in, the specific
operation one is designing. It is also a useful basis for identifying the
stakeholders who need to be consulted about the final design. It served as a compact
notation for recording and reasoning about the wider context of the job within the
design team and also for checking with our informants that we had understood what
was going on. For some purposes the drawing on its own may be a bit too compact
and we recommend that three short paragraphs are generated for each stakeholder
specifying, respectively, their responsibilities, an outline of their work, and
their concerns.
3.3. HOW TO DO IT
The information needed to construct a rich picture is obtained by talking to the
stakeholders identified. In most projects there will be a designated contact to
represent the users' needs. While you should start by talking to them, it is
important to talk to other people as well, particularly people who will end up
working with the system. It is a good idea to interview people in their place of
work. Here they will have access to documents, computer systems, etc. that can serve
as prompts and examples. You may like to tape record what they say and it is always
a good idea to go with a prepared set of questions to get you going (for practical
advice on interview technique see Clegg et al., 1988).
A rich picture is informal enough to be presented to the stakeholders themselves
so that your first stab at a rich picture can be taken back to your most useful
informants. The process of explaining it to them will often elicit important new
information and point up misunderstandings and errors of fact. Like all the other
representations described in this chapter a rich picture may go through several
iterations before the end of the design process.
The following steps should be taken to construct a rich picture. It is not necessary
to be good at drawing to do this, the crudest figures will suffice (as is clear
from Figure 5.1!). A drawing package has certain advantages given that you are going
to have to change and add to the drawing. It is probably a good idea to use a different
color for steps C and D.
A. In the middle of a large sheet of paper, draw a figure of some kind to
represent each of the kinds of people who will actually interact with the
system. These are the user roles ("actors" in Checkland's terms). There will
probably be more than one; for example, there may be computer operators who
enter data and managers who use summary displays.
B. Add further figures for people who might be affected by the introduction
of the system even though they don't have to operate the system themselves.
These nonuser roles ("other clients" and "owners" in Checkland's terms)
may include, among others: supervisors, supervisees, and peers of the user
roles; also customers, management, and the IT department who supply the
relevant hardware and software services.
C. Indicate the flow of work on the diagram with labeled arrows. Try to keep
this description at a fairly high level and avoid getting bogged down in too
much detail. For example, a customer may make an enquiry which is entered
into a database by an enquiry clerk and which then results in some action
by a repairman. A supervisor makes summary reports using the same database
and gives them in printed form to a manager who uses them in his reports to
the board of directors.
D. Indicate the major concerns of all the roles represented by writing a
short phrase next to each, e.g., management are often concerned with cutting
costs, operators with losing their jobs or being deskilled.
E. On a separate sheet list each of the roles defined as drawings on the
diagram and write for each a concise definition (not more than two or three
sentences) of:
1. Their responsibilities (people are responsible to someone for
something).
2. Their work (this is simply an amplification of the work flow arrows, step
C, for that role).
3. Their concerns (this is similarly an opportunity to explain and add to
the concerns indicated in step D).
Each of the steps A to E will suggest changes and additions you want to make to
what you recorded in the earlier steps; this is a good thing to do. You will almost
certainly have to redo the whole thing to make it presentable anyway. The diagram
and these descriptions should stand on their own but may be supplemented by a short
paragraph of explanation.
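The role records of step E could be captured in a small data structure alongside the drawing. A sketch under the assumption of a Python representation, with details drawn from the workflow example in steps C and D (the wording of each record is illustrative):

```python
# Recording step E of the rich picture: for each role, three concise
# definitions covering responsibilities, work, and concerns.
from dataclasses import dataclass

@dataclass
class Role:
    name: str
    is_user: bool          # does this role operate the system directly?
    responsibilities: str
    work: str
    concerns: str

roles = [
    Role("enquiry clerk", True,
         "responsible to a supervisor for handling customer enquiries",
         "enters customer enquiries into the database",
         "worried about being deskilled"),
    Role("manager", False,
         "responsible to the board for the operation",
         "uses printed summary reports in reports to the board",
         "concerned with cutting costs"),
]

# The stakeholders to consult about the final design: user and nonuser roles alike
print([r.name for r in roles])
```

Keeping the records in one place makes it easy to check that every figure on the diagram has its three paragraphs, and to revise them as the picture iterates.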
You will generally need to prepare a "before" and an "after" picture, i.e.,
a rich picture of the situation before the introduction of the computer system and
a rich picture of the situation after it has been introduced. The "before" picture
is needed when interviewing informants, to play back your understanding of the
situation. The "after" picture is the basis of the design and used in the next
stages. It should also serve to alert management in the user organization to the
broader implications of the design.
target user population in more detail. WOD stands for Work Objective Decomposition.
Alternative approaches to representing this information tend to be (1) very time
consuming and (2) focus the designer on what happens at the moment rather than the
opportunities for creative and innovative design. The advantages of the WOD over
these alternatives will be illustrated by representing the work of making a
cup of tea.
Most people when asked to describe this task will generate a numbered list such
as in Table 5.1. This does not easily code conditionals (what if the kettle is
already full?) or hierarchical structure (1 and 2 are part of the same subtask).
The commonest response to this, in human factors, is to use a Hierarchical Task
Analysis (HTA) as in Table 5.2. HTAs separate processes (the numbered items) from
control (the plan in italics). Designers with a systems background tend to get
deeply into specifying plans in great detail when this is really irrelevant to the
design. It is irrelevant as the purpose of design is to change (and improve) the
way the task is done. Focusing on processes has a similar unfortunate effect of
emphasizing what is done now rather than how it might be done better. A WOD describes
the task in terms of required states of the world, and avoids considering the
processes by which these are achieved and the order in which they are achieved.
This forces the designer to think clearly about the purpose of each process rather
than the interdependencies between them. An example is given in Table 5.3.
1. Fill kettle
2. Switch on kettle
3. Get out cups
4. Put tea bag in cup
5. Wait for water to boil
6. Pour water in cup
7. Get out milk
8. Remove tea bag
9. Add milk
10. Add sugar
11. Stir
water but in our experience it can lead to significant insights concerning how
procedures can be redesigned. Stretching this example a little, one can see how
formulating the objective "have tea bag and boiling water in cup" might lead one
to think of alternative methods of boiling the water, such as a heating element
placed directly in the cup. Without the restriction to write down the objective
of each step it is too easy simply to use the name given to the process in the
old procedure without really thinking about what that process is for.
4.3. HOW TO DO IT
Start from the rich picture. You may have to return to some of your informants to
check on details. Like the rich picture, the WOD should stand on its own and will
only need minimal supportive text. You will need to create a WOD for each user role.
The following instructions may be used. As with the rich picture you will probably
need to prepare before and after WODs, though sometimes the objectives will
remain the same even though the processes have changed.
A. A work objective describes a state of the world the user would like to
achieve. Examine the steps taken by the user and ask yourself why each step
is carried out, i.e., what is the objective of that part of the work. Make
a list of these states of the world.
B. Next organize the list into objectives and subobjectives. The top level
objectives will probably correspond to the processes identified in the rich
picture. A subobjective is an objective that has to be achieved in order to
achieve a top-level objective.
C. Examine the hierarchy for objectives not in your original list. It may
be useful to decompose some of the subobjectives into sub-subobjectives but
avoid deep hierarchical structures. You will often only need to go to
subobjectives; it should not be necessary to go further than
sub-subobjectives. Also, you can be quite selective in how deeply a top-level
objective is decomposed. Those that are to be supported by the computer system
will be decomposed in some detail, while those that are not will not. Notice
that the objectives given above are still fairly abstract and stop short of
specifying how the system will work.
D. Number the objectives for reference. Note this does not imply they have
to be carried out in a specific order. There is no need to consider the order
in which things are normally done or the logical constraints on the order
in which objectives have to be achieved at this stage. Indeed we would advise
against thinking about this at all at this stage as it may impede creative
solutions at the design stage.
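The outcome of steps A to D is a shallow numbered hierarchy of states of the world, which can be sketched as a nested structure. The objective texts below are illustrative, loosely based on the tea-making example, and are not the contents of Table 5.3.

```python
# A WOD is numbered objectives with optional subobjectives; the numbers are
# for reference only (step D) and deliberately encode no order of execution.
wod = {
    "1": ("Tea made", {
        "1.1": "Tea bag and boiling water in cup",
        "1.2": "Milk and sugar in cup",
    }),
    "2": ("Used equipment put away", {}),
}

def objectives(wod):
    """Flatten the hierarchy into (number, objective) pairs for reference."""
    for number, (text, subs) in wod.items():
        yield number, text
        yield from subs.items()

flat = dict(objectives(wod))
```

Note that nothing in the structure says when, or in what order, an objective is achieved; only what state of the world is required.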
Some developers with a mathematical training may find the above process arbitrary
and informal. It is. The WOD is not a formal notation and the method specified above
will result in different representations when it is applied by different people.
This is not a problem. Design is a creative craft and this analysis is a part of
that creative process. The representations described here are better thought of
as artists' materials than engineering tools. The representations make it possible
for the developer to create something new and useful by taking a new view on an
old problem.
5.3. HOW TO DO IT
Go back to your interviews with the users and note where they mention events that
were disruptive to the flow of work. Then go through the WOD and try to think of
all the things that could go wrong. Finally, gather together your thoughts under
the three headings described below.
A. List the application exceptions. These will include physical
breakdowns and correct behavior (e.g., an unacceptable ID when the user
is logging in to a system). Work out where these exceptions could occur, i.e.,
which objectives the user might be working on when they occur.
B. Problems/mistakes. List user exceptions due to the users making
mistakes or changing their minds. Take each subobjective to be computer
supported in turn. Ask yourself whether the user, having achieved that
objective, might want to undo it at some later stage and, if so, where such
a decision is likely to be taken, i.e., which objectives the user might be
working on at that point.
C. Interruptions. List user exceptions due to interruptions. Most people
interleave several tasks in their daily work. Ask yourself when they could
be interrupted and what implications this could have for design. The higher
level work objectives and the rich picture will suggest what these
interruptions are likely to be. Again work out where in the WOD these
interruptions could occur.
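Steps A to C produce, in effect, a cross-reference from each exception to the WOD objectives during which it can arise. A minimal machine-readable sketch, with hypothetical exception names and objective numbers:

```python
# Each exception carries its heading (A, B, or C above) and the numbers of
# the WOD objectives during which it could occur.
exceptions = {
    "power failure":       ("application", ["1.1", "1.2", "1.3"]),
    "wrong supplier code": ("problem/mistake", ["1.1"]),
    "priority delivery":   ("interruption", ["1.1", "1.2"]),
}

def exceptions_at(objective):
    """All exceptions that could occur while working on a given objective."""
    return [name for name, (_, objs) in exceptions.items() if objective in objs]
```

Walking the WOD objective by objective and asking `exceptions_at(...)` for each is one way to check that no objective has been overlooked.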
that you need to illustrate the new way of working, after the system has been
introduced.
Scenarios flesh out a WOD and exception list by including sequences of actions and
some detail. They contain examples of typical data associated with real use. It
is particularly valuable to attach samples of the actual documents used such as
delivery notes or invoices. The stories should highlight crucial sections of the
users' tasks. Scenarios are the first representation to be used in checking a design
and the best representation for describing the work to someone who has not seen
any of the representations before.
1. Driver Dave Hodges gives Jenny gate house pass 7492 and delivery note (see
sample documents)
2. Jenny enters gatehouse no. 7492, supplier code Smith34, product codes
and quantities (see sample delivery notes)
3. Jenny requests printing of tally cards (see sample documents) for 7492 and gives
them to warehouseman John
4. Warehouseman Mike comes in with tally card for 6541 with locations (see sample
documents)
5. Jenny recalls delivery 6541 to screen and enters locations
6. DMS prints store picking notes for cv49w to cv52z; Jenny gives these to
warehouseman George
7. Warehouseman John comes to window with tally card for 7492
8. Jenny recalls delivery 7492 to screen and checks quantities with John; OK
9. Jenny requests printing of COD for 7492 (see sample documents)
10. Jenny gives COD to driver Dave Hodges
11. Jenny sends delivery record for 7492 to DMS
Table 5.7 Scenario of Work with a Data Entry Mistake and Interruption when Entering
Data from the Delivery Note
Scenario 2: exceptions
1. Driver Dave Hodges gives Jenny gate house pass 7492 and delivery note
2. Jenny enters gatehouse no. 7492, supplier code Smith34, and half of
the product codes and quantities (see sample delivery notes)
3. Warehouseman Tim informs Jenny of priority cold storage delivery and hands
her gate house pass 7581 and delivery note from driver
4. Jenny enters 7581, supplier code Browns67, and the product codes and
quantities
5. Jenny requests printing of tally cards (see sample documents) for 7581 and gives
them to warehouseman Tim
6. Warehouseman Mike comes in with tally cards for 6541 with locations (see sample
documents)
7. Jenny recalls delivery 6541 to screen and enters locations
8. Jenny recalls partly entered delivery 7492 to screen and enters remaining
quantities and product codes
9. Jenny requests printing of tally cards (see sample documents) for 7492 and gives
them to warehouseman John
10. DMS prints store picking notes for cv49w to cv52z; Jenny gives these to
warehouseman George
11. Warehouseman John comes to window with tally card for 7492
12. Jenny recalls delivery 7492 to screen and checks quantities with John; one
product code was mistyped but the quantities correspond
13. Jenny changes product code and requests printing of COD for 7492
14. Jenny gives COD to driver Dave Hodges
15. Jenny sends delivery record for 7492 to DMS
Note that these scenarios are very selective in what they illustrate. There are
many ways an "ideal" scenario could have been generated from Table 5.4. Nevertheless,
the scenario illustrates well how the work proceeds. Similarly, only two exceptions
were selected from the list of 11 in Table 5.5. These were judged to be most important
as it was known that the existing system handled them very poorly.
6.3. HOW TO DO IT
It is useful to include sample documents and other data with a scenario, e.g.,
photocopies of delivery notes and invoices, printouts, and so on. Wherever possible
these should be real documents and data. If real data are not available it is still
useful to make something up and show it to users and ask how it could be made more
realistic.
There is no need to be exhaustive in this exercise. It is unnecessary to have more
than five or six scenarios in total. Also, there is generally no need to have
scenarios illustrating the situation before the system is implemented; the
intention is to illustrate what will happen with the new system. To generate the
scenarios
A. Use the WOD to write one or more best-case scenarios. Use the sample
documents and your knowledge from interviews to flesh out the extra detail.
Choose orders of events that you think will be good tests for the new system,
e.g., those that are at present reported to be difficult to deal with.
B. Select the most important exceptions from the exceptions list. These
might be exceptions that are reported as being most disruptive to work at
present or that you have other reasons for thinking will be difficult for
the new system. Another criterion to think about is the frequency with which
exceptions occur. Clearly, something that happens very infrequently would
not need to be considered unless it is likely to have very severe consequences.
Write these exception scenarios by adding to the best-case scenarios
described in A.
or move from one screen to another. The reason for starting with a high-level
description of the behavior of the system is that this is the same level of
abstraction as is provided by the WOD, exceptions list, and scenarios and these
representations can thus provide inspiration for this part of the design.
Returning to the warehouse case study, the first step was to decide which objectives
in the WOD (see Table 5.4) it would be most useful to support. "2. Goods allocated
to stores" and "3. Picking notes generated" are already well automated, as is "1.4
Delivery record sent to DMS for allocation to stores". Therefore, objectives 1.1,
1.2, and 1.3 were identified as the crucial work objectives to be supported. As
these objectives are essentially data entry tasks the next step was to specify a
data structure for storing the information to be entered. This took the form of
a list of the data fields associated with each delivery.
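A minimal sketch of such a per-delivery record, with field names inferred from the scenarios (gatehouse number, supplier code, product lines, storage locations); the actual field list used in the case study is not reproduced here.

```python
from dataclasses import dataclass, field

@dataclass
class Delivery:
    """The data fields associated with one delivery (objectives 1.1 to 1.3)."""
    gatehouse_no: str
    supplier_code: str = ""
    lines: dict = field(default_factory=dict)      # product code -> quantity
    locations: dict = field(default_factory=dict)  # product code -> location

d = Delivery("7492", "Smith34")
d.lines["cv49w"] = 10         # entered from the delivery note (objective 1.1)
d.locations["cv49w"] = "A17"  # entered from the tally card (objective 1.2)
```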
The first really creative step was to decide how these data fields should be
distributed across screens. It was decided that one basic screen, a form, would
suffice for all three tasks. This form would gradually be filled in as the objectives
were accomplished. The reasons for this were as follows:
1. It makes the dialogue model very simple: there is only one screen layout.
2. The commands needed would mainly be the standard commands for editing
a form and this would be familiar to the users who were all experienced Windows
users.
3. The additional commands needed to create a new blank form and two others
to hide or display existing ones could again be made similar to the commands
they already knew for creating, hiding, and revealing documents.
This central screen, the form, was sketched in pencil. This allowed various details
to be glossed over. Similarly, no commitment was made about how the commands,
hide_form, display_form, and make_new_form would be implemented. This lack of
commitment is important as it allows one to get the top level structure of the design
right. If the first design had been produced using a software prototyping tool or
user interface generator, the developer would have been forced to make decisions
about low-level detail, such as the names for fields, menu items, etc., before the
overall structure of the design had been worked out. These details can pre-empt
the top-level design and are also difficult to get right without the overall picture
presented by a more abstract dialogue model.
A good dialogue model constrains the order in which operations are carried out only
where absolutely necessary. It is dangerous to constrain the order of operations
according to what you as a designer consider to be "normal" or "logical". You
may have missed something in your analysis or the work situation can change. In
some applications there will be legal constraints on the order in which things are
done, while in others, constraints may be imposed by security considerations.
However, in office systems, order constraints are rarely this fundamental. As the
dialogue model is primarily concerned with such constraints, this is the time to
consider these issues.
This principle of minimizing constraints on the order in which things are done was
followed in the warehouse case study. Operators were allowed to change anything
up to the moment the data was sent to HQ (objective 1.4). Data cannot be changed
then because the consequences of making those changes would be very expensive. The
resulting dialogue model is thus very simple: any form can be accessed at any time
and any field on it changed, up to the point it is sent to DMS, whereupon it becomes
read only.
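This single constraint is easy to make executable. The sketch below is our own illustration, not the Action Simulator notation: every field of every form is freely editable, in any order, until the form is sent to DMS, at which point it becomes read-only.

```python
class DeliveryForm:
    """One form in the dialogue model: editable until sent to DMS."""

    def __init__(self, gatehouse_no):
        self.gatehouse_no = gatehouse_no
        self.fields = {}
        self.sent = False  # the only state the dialogue model constrains

    def edit(self, name, value):
        if self.sent:
            raise PermissionError("record already sent to DMS; read-only")
        self.fields[name] = value

    def send_to_dms(self):
        self.sent = True  # objective 1.4: the record becomes read-only

form = DeliveryForm("7492")
form.edit("supplier_code", "Smith34")  # any field, any order, any time
form.send_to_dms()
try:
    form.edit("supplier_code", "Browns67")  # now rejected
except PermissionError:
    pass
```

Stepping a scenario through such a model action by action is essentially the walkthrough-style checking of the dialogue model described in the next paragraph.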
Having produced an initial dialogue model, the next step is to check it against
the scenarios. This is done by going through each step in the scenario
simulating or walking through what the user would have to do. This will
probably identify some problems such as points where the user cannot access the
relevant commands or where access is unnecessarily laborious. When the dialogue
model has been modified to cope with these problems, more exhaustive checking should
be undertaken using the WOD and then the complete exceptions list. The reason for
working from the most simple (scenario) to most complex (WOD) is to make the process
more tractable. There may be some exceptions that would be just too expensive to
deal with. These should be noted to be discussed with the customer.
The scenarios for the warehouse case study were checked against the dialogue model,
then the exceptions, and WOD. Because it was so simple and unconstrained no serious
problems were detected. Most dialogue models are more complex than this and the
process of checking them against scenarios, WOD, etc. is thus more complex. There
are software tools for producing dialogue models. The author has developed Action
Simulator (Monk and Curry, 1994). This allows a designer to build simple models of
the high level behavior of the user interface that can be executed to give a dynamic
view of the design. StateMate is another tool that allows modeling using Harel
diagrams (Harel, 1987). There are also multi-media tools such as MacroMind Director
that can be used to make prototype user interfaces that behave without requiring
the designer to fill in low-level details.
After all this work considering the needs of the user, filling in the missing detail
needed to turn the dialogue model into a full design should be very straightforward.
A style guide for the platform being used, such as the CUA guidelines (IBM, 1991)
or the Apple Human Interface guidelines (Apple Computer Inc., 1987) should be
followed at this stage. The full design may take the form of a paper simulation
or a bare-bones simulation using an interface generator. There may still be a few
improvements that can be made in the wording of items, layout, and graphic design
to make the screens communicate what is required to the users. These improvements
are best identified using a technique that brings representative users in contact
with the design such as Co-operative Evaluation (Monk et al., 1993).
8. AFTERTHOUGHTS
The techniques described here do not require extensive training to use and the
effort required to use them has been minimized by using informal, common-sense
representations. Despite their informality the representations can help a design
team to communicate and to reason about the needs of users. They will result in
designs that take account of many usability problems that might arise if no analysis
of user needs had been carried out. However, they do not guarantee the exhaustive
identification of all such problems. This may be more closely approached by formal
techniques though these will certainly require more effort to apply and learn.
It is also important to recognize that these representations were developed for
use in a very specific design context. It is assumed (1) that the number of people
developing and using the representations is small so that all will have a great
deal of shared background knowledge, (2) that the users of the system can be readily
identified, and (3) that the development team will have access to them.
Assumption (1) makes it possible to use concise representations, even though they
lack a formal syntax or semantics. The analysis for a typical project may be
expressed in a few pages. This is only possible because of this assumed shared
understanding within a small design team. Assumptions (2) and (3) are also essential
for the ease of use of these techniques. Getting designers to see their designs
from the point of view of a user is the central point of user-centered design. The
more often that developers can get together with users, see them operating their
systems, and understand their work, the better the designs that will result. While
this is the normal modus operandi in many companies, in many others there are
obstacles to it. Developers may feel threatened by the process, management may not
be confident in the ability of developers and there may be political objections
from other groups. These problems are more often imagined than real, and
once developers and management have seen the advantages of techniques such as those
described above, they quickly come around (Haber and Davenport, 1991).
While the techniques described are tailored to a specific design context and so
make these important assumptions, the reader may see elements of the approach that
can be adapted to other contexts. More generally, it is hoped that this chapter
has illustrated the value of developing representations for reasoning about and
recording users' needs, and that the form those representations take can either facilitate
or hinder the development of the understanding needed for effective design.
9. REFERENCES
Apple Computer, Inc., Human Interface Guidelines: The Apple Desktop Interface,
Addison-Wesley, Reading, Massachusetts, 1987.
Bellotti, V., Implications of current design practice for the use of HCI techniques,
Proceedings of HCI'88: People and Computers V, Jones, D. M. and Winder, R., Eds.,
University of Manchester (September 5-9), Cambridge University Press, 13-34,
1988.
Campbell, R. L., Categorizing scenarios: a Quixotic Quest?, ACM SIGCHI Bulletin,
24, 16-17, 1992.
Checkland, P., Systems Thinking, Systems Practice, John Wiley & Sons, Chichester,
1981.
Clegg, C., Warr, P., Green, T., Monk, A., Kemp, N., Allison, G., and Lansdale, M.,
People and Computers: How to Evaluate your Company's New Technology, Ellis Horwood,
1988.
Curry, M. B., Monk, A. F., Choudhury, K., Seaton, P., and Stewart, T. F. M.,
Enriching HTA using exceptions and scenarios, InterCHI'93, Bridges Between
Worlds, Adjunct Proceedings, Ashlund, S., Mullet, K., Henderson, A., Hollnagel,
E., and White, T., Eds., ACM Press, Amsterdam, 45-46, 1993.
Gould, J. D., Boies, S. J., Levy, S., Richards, J. T., and Schoonard, J., The 1984
Olympic message system: a test of behavioural principles of system design,
Communications of the ACM, 30, 758-769, 1987.
Green, T. R. G., Cognitive dimensions of notations, Proceedings of the HCI'89
Conference, People and Computers V, Sutcliffe, A. and Macaulay, L., Eds., Cambridge
University Press, Cambridge, 443-460, 1989.
Haber, J. and Davenport, L., Proposing usability testing to management - an "It
works therefore it's truth" approach, Human Factors in Computing Systems:
Reaching Through Technology, CHI'91 Conference Proceedings, Robertson, S. P.,
Olson, G. M., and Olson, J. S., Eds., ACM Press, Amsterdam, 498, 1991.
Harel, D., Statecharts: a visual formalism for complex systems, Science of Computer
Programming, 8, 231-274, 1987.
IBM, Common User Access (CUA). Systems Application Architecture, Basic and Advanced
Interface Design Guides, IBM technical publications, 1991.
Monk, A. F. and Curry, M. B., Discount dialogue modelling with Action Simulator,
HCI'94 Proceedings: People and Computers IX, Cambridge University Press, Cambridge,
327-338, 1994.
Monk, A. F., Curry, M. B., and Wright, P. C., Why industry doesn't use the wonderful
notations we researchers have given them to reason about their design, in
User-Centred Requirements for Software Engineering, Gilmore, D. J., Winder, R. L.,
and Detienne, F., Eds., Springer-Verlag, Berlin, 195188, 1994.
Monk, A. F., Wright, P., Haber, J., and Davenport, L., Improving your Human-Computer
Interface: A Practical Technique, Prentice-Hall, BCS Practitioner Series, Hemel
Hempstead, 1993.
Nielsen, J., Usability Engineering, Academic Press, New York, 1993.
Whiteside, J., Bennett, J., and Holtzblatt, K., Usability engineering: our
experience and evolution, in Handbook of Human-Computer Interaction, Helander, M.,
Ed., North-Holland, New York, 791-817, 1988.
Chapter 6
Interaction Design: Leaving the Engineering Perspective
Behind
Peter Nilsson and Ingrid Ottersten
Linné Data, Frölunda, Sweden
email: [email protected]
[email protected]
TABLE OF CONTENTS
Abstract
1. A Design Story
2. Bridging the Gap
2.1. Generation of Ideas
2.2. Reflecting on Ideas
2.3. Making a Design Decision
2.4. An Example of Bridging the Gap
3. Design Context
3.1. Context and Requirements Analysis
3.2. Design
3.2.1. The Design Space
3.2.1.1. Customer, the Organization
3.2.1.2. User Values and Goals
3.2.1.3. Material
3.2.1.4. Context of Use (Situation)
3.2.2. The Design Process
3.2.2.1. Conceptual Design
3.2.2.2. Functional Design
3.2.2.3. Graphical Design
3.2.2.4. Working in the Design Process
3.3. Evaluation
4. Bubbling Technique
5. Our Values and Considerations
6. Conclusions
7. Acknowledgments
8. References
ABSTRACT
In this chapter we will address the intricate issues that arise while performing
interaction design. Our view of design is that it is a mental process. Therefore,
1. A DESIGN STORY
I (Peter) just received an assignment to perform the interaction design in a
software development project. I have engaged in some informal discussions with the
project leader and the Context and Requirements (C & R) analyst. These discussions
have focused on the customer's ways of expressing themselves, the assignment as
a whole, and the customer's organization.
My first step is to team up with a co-designer. When I began as an interaction
designer, it was difficult to convince project leaders that I should work with a
co-designer, because they were convinced that I would burn both time and money.
Fortunately, I had the opportunity to try co-designing on one project because one
of my colleagues was training to be an interaction designer. Therefore, the managers
were willing to assign her to work with me to gain some experience. The result was
that we produced three good design alternatives in half the time required for me
to produce one design alternative on the preceding project. Following that success,
the value of co-designers working together has never been questioned by our project
managers.
On one project I worked with Linna, a colleague who usually works with multimedia
productions, because the system was a product aimed at the home market, at schools,
and at libraries. Linna was able to spend 1½ days per week for the following 3
weeks, helping me out with the interaction design. We agreed to spend most of our
time together sketching, discussing, and hopefully having a wonderful time. We
searched for a room that suited us both, equipped with a whiteboard that we could
have exclusively for our use. During the time I spent working independent of Linna,
I reflected on the design at hand and discussed its technical aspects with other
members of the project team.
To acquire an understanding of the design space as a whole, Linna and I began our
work by reading the report produced by the C & R analyst. This helped us understand
the business goals of the system, situations where the system would be used, the
characteristics of users, and the overall character of the system and technical
limitations. We were aware that we would make incorrect assumptions while reading,
but design is a process of continuous learning, where those initial errors would
be corrected.
While reading the report, I actually started the process of designing. I made some
sketches on paper, and wrote down some of the most important things, from a design
perspective, that I found in the report. I also noted design issues related to them.
After reading the C & R report, I had a basic understanding of the design space,
but I also had many questions. My next activity was a 2-hour discussion with the
C & R analyst to gain an even richer understanding of the design space. My intention
at this stage was to understand how the potential users of the system think about
the tasks which the system will support, which goals they have, and what their
professional concerns are. In this case, I found that the two most critical goals
for the users were (1) "avoiding losing face" and (2) "having the potential to
generate decision alternatives quickly". This understanding helps me to appreciate
the perspective of a potential user as I reflect on alternative designs.
Some designers use detailed descriptions of user tasks, but I have never found them
critical in my work. In the design of systems for use in offices, homes, and public
places, I find concentrated descriptions of users, situations, and usability goals
to be sufficient. I do strive to understand the situation as a whole, rather than
being concerned with details.
Although I now have a preliminary understanding of the design space, I'm aware
that I still don't have a coherent picture of how things fit together. Therefore,
I feel overwhelmed by the difficulty of the task. This feeling was particularly
prevalent during the early part of my career, but now I am filled with anticipation
because I am confident that some hard work and creative thinking will lead to
significant progress on the design. It is a feeling not unlike what I felt as a
child at Christmas just before opening the presents.
It is now time for creativity and reflection. Linna and I use our conception of
the design space and our experience with similar design problems to begin generating
ideas. As we work together, ideas begin to flow and we reflect on them as we go.
My method for capturing the results of this activity is to draw interaction sketches
(not screen shots) on a whiteboard. For this I use a visual language with arrows.
The specific technique is not as important as being able to visually represent the
ideas. Even more important is the ability to clarify the ideas for myself and my
co-designer. Presenting ideas visually helps us reflect on the design ideas to
evaluate them and to generate new ones.
When working with a whiteboard, I quickly shift from idea generation to reflection
and back again. For this I use techniques such as Bubbling (described in a later
section), but I never carry it out in detail or for very long, and I only record
it if I find something critical for the design. For me, recording design paths,
suggestions, alternatives, and decisions are vital, but must not hinder my
progress.
When designing, I generate conceptual, functional, and graphical suggestions
simultaneously. This is a result of reflecting on ideas from all aspects in the
design space. However, I am careful to never lose focus on the design space as a
whole. At the conceptual and functional level of design, I try to generate
directives for the graphical design and leave the graphic design details until later.
When I then do the graphic design, I follow those directives as I attempt
to generate an effective graphical representation of the users' mental model.
Linna was very surprised when, after some hours of intense, productive work,
I took a step back and said, "Well, this is quite good, but I don't know if I
can defend this design." At first, she thought I was joking, but as we discussed
my comment, we came to two interesting conclusions. One is that a designer must
be able to justify every design decision, whether it pertains to a detail or to
the design as a whole. If someone questions a part of a design or proposes something
that hasn't been considered, a designer should carefully evaluate it. Therefore,
one of the core abilities of designers is to be critical of their own designs. The
other thing Linna and I concluded is that designers should be aware of their
particular styles of designing if they are to improve them.
Because I can never be certain of how effective a design is, I conduct user-driven
evaluations to determine how well the design matches the way the users think and
act. In this case, we found that our conceptual foundation didn't match the users'
way of thinking about their tasks. As a result, we abandoned that particular design
in favor of others.
Also, invariably I will think of an excellent design idea after the project is
completed. Even though it may be too late to incorporate this idea on the recently
completed project, these reflections are helpful for future projects. Linna also
reminded me that there is no single correct or ultimate solution to a design space.
On the contrary, there are many ideas that are never even considered.
The user may be frustrated by the delay when switching between tabs.
3. DESIGN CONTEXT
As interaction designers we have found that two key questions need to be answered
before design can begin: (1) "What are the desired effects of this system?" and
(2) "What attributes are needed for the system to produce those effects?" We are
convinced that the C & R analysis must be performed prior to the start of the
interaction design (see Ottersten and Bengtsson, 1996; Nilsson and Lachonius, 1996).
Otherwise, those questions will surface later, during the system development
process. Therefore, we have developed a method that ensures that these key
requirements are defined before interaction design begins.
Interaction design and C & R analysis are two of the activities in what we call
external design (see Figure 6.3). The other activities are:
Information modeling
Design evaluation
Technical writing
User training and support
We use the term internal design when discussing activities contributing to a good
technical realization of the system. The contents of external design are contained
in our two methods for usability work, AnvändarGestaltning and VISA.
The C & R analysis can also include interviewing and observing users while they
are conducting their work, following a structure that ensures the descriptions of:
Work tasks and their structure.
Work load.
General characteristics of users (e.g., age, sex, education).
Users' experience with computers, the work to be performed, and the
organization.
Users' motivation and values and their cultural and social environment.
Existing support systems, either manual or automated.
User goals and mental models.
The results of the interviews are described in a report delivered to the customer,
the users, and the interaction designers. Distributing the report allows both the
users and the customer to be involved in the design process and provides them an
opportunity to react to any misunderstandings. C & R analysis, when interviewing
and observing users, involves activities similar to those of Contextual Inquiry
and Contextual Design (Holtzblatt and Beyer, 1993). Some differences are that in
our C & R analysis:
(Usually) only two people from each user group are selected.
Very few formal modeling techniques are used; mostly just written
and spoken language.
The same people conduct all the interviews and write the report.
3.2. DESIGN
3.2.1. THE DESIGN SPACE
The design space is created by the boundaries of the design (see Figure 6.4). These
boundaries are the customer, the user, the material, and the context of use
(situation). In addition, there are always some practical and political
constraints. For us, building information systems for customers on a contract basis
always means that the time and funds available for usability work are limited.
3.2.1.1. Customer, the Organization
The customer's intent is one limitation on the design space. The customer is,
however, very seldom a single person. Furthermore, customers belong to an
organization, and they share the organization's values and mentality. This
is seldom something conscious or explicit and therefore something the customer
usually cannot reflect upon without some assistance. Our experience shows that
customers have great difficulty explaining their concerns and wishes. We have
developed a technique to explore possibilities with them and to help them express
these needs.
models of tasks. On the contrary, we suspect that detailed modeling could interfere
with a designer's being innovative. This arises because the external designer is
trained to think in terms of such human characteristics as cognitive workload,
stress, and learnability.
3.2.1.3. Material
Designers are limited by the materials with which they have to work. We have found
that in many system development projects the material limitations are not explicit.
By "material" we mean all hardware and operating system limitations as well as
any functionality related to the internal structure of the system, such as
performance, adaptability, security, and maintainability.
3.2.1.4. Context of Use (Situation)
The physical and psychological context in which the system will be used is very
important. When the external designer considers situational issues (e.g., time of
use, placement of users, frequency of use, personal integrity, and ethics),
important design information is obtained.
3.2.2. THE DESIGN PROCESS
It is very important for the designer to focus on the right things at the right
times in the design process. For example, the manner in which users will perform
their tasks in the system (flow) must be considered prior to considering the exact
placement of buttons on the screens. This may seem obvious, but our experience has
shown that design work often begins with a heavy focus on graphical design. This
may be an unintentional, but natural consequence of using GUI prototyping tools
in early stages in the design process.
To help the designer focus on the right things at the right time, our design process
is divided into three phases as shown in Figure 6.5 (see also Nilsson and Lachonius,
1996). With this approach it is possible to ensure the usability of the design and
to avoid being concerned about the need to change the concept of the design after
doing the graphical design.
3.2.2.1. Conceptual Design
Design work should begin with conceptual design. At the conceptual stage, it is
important to ask questions such as: What are the components that users work with
(from their point of view)? and How do those components combine to fit the
design space? The work in the conceptual phase is preferably done on a whiteboard
or with pen and paper. The result should be recorded in rough sketch form. Its
purpose is to communicate the conceptual design of the future system and should
reflect the users' mental model.
These are, from the users' point of view, the components upon which the system should
be built. Note that the arrow, "build journey," embodies the fundamental
concept of the system: to build journeys from possible resources. In this case,
one single sketch visualizes the entire fundamental concept of the design.
These sketches cannot be evaluated directly by a user trying to perform a task,
because they are much more abstract than actual screen layouts. Therefore, it is
often necessary to proceed to functional design (described in the following section)
to evaluate the conceptual design and verify that the system reflects the users'
mental model in an effective way. It is in the conceptual phase that the designer
should begin thinking about how to provide the user with the desired emotional tone
of the system. An effective way of doing that is to use the "Bubbling" technique
described later.
getting caught up in graphical details, for example, during conceptual design. When
we were creating the conceptual idea above, we generated both functional
and graphical ideas. The functional idea was the use of tabs, which we recorded.
Later, we decided to use the idea because it allowed us to maintain a great deal
of information in one window. The graphical idea was to place the "Sell" button
at the top left of the screen. We also recorded this idea and later decided not
to use it, because it didn't fit naturally into the flow of how the user read the
screen. "Sell" is the last action taken by users and is always performed after
they scan the journey list to ensure its accuracy. Therefore, we placed it at the
bottom right-hand corner of the left area.
3.3. EVALUATION
There are basically two types of evaluations: expert evaluation and user-driven
evaluation. We conduct both of these evaluations with paper prototypes as well as
computer prototypes. The expert evaluation is a method involving an expert in design
guidelines (Skevik, 1994) which are based on principles of perception and cognition.
The so-called user-driven evaluation is a method based on cooperative evaluation
with users (Wright and Monk, 1991). After years of performing these evaluations,
we have developed a specific set of work roles for performing the evaluations and
a question format for evaluating the design prototype afterwards.
We have also found it critical to formulate specific goals for the evaluations.
For an evaluation conducted at a very early stage, the most important thing is to
determine if the user can understand the conceptual idea behind the system. Later
on, it is more important to determine if the user can perform a specific task,
perhaps with a particular time constraint.
Following the use of our techniques, it is possible to determine if the system is
appropriate for the users. This is our major goal for all evaluations performed,
ranging from the first paper prototypes to the final delivery test. Some users are
involved in the evaluations throughout the course of the project and some are
involved only in selected evaluations.
4. BUBBLING TECHNIQUE
We have developed a technique called "Bubbling" that allows a quick launch into
the design process (Nilsson and Lachonius, 1996). Bubbling makes use of a person's
ability for quick associative thinking. All that is needed is a pen and paper. One
begins by placing a key issue from the design space in a bubble in the middle of
the paper as shown in Figure 6.8. That key issue can be a desired emotional response
(from users) to the system, a user goal, a user category, a business goal, a task,
or an information entity. For example, perhaps a goal is for the users to perceive
the system as supportive.
Figure 6.10
until they are evaluated by users in the appropriate circumstances. This is a fact
rarely understood by software designers and project leaders. Even customers tend
to be insensitive to user issues. At LinnData, we conducted a survey to assess
the level of knowledge and interest in usability work among potential customers.
Results showed that the major reason the customers have little interest in usability
work is that they rarely, if ever, have the organizational responsibility for the
people who are going to use the system.
There exist many techniques for generating a design. However, interaction design
is more than just creating an artifact that appeals to users. Interaction design is
the very core of external design (i.e., producing an altered reality for people
and contributing to the realization of an organization's goals). The difficult
part in interaction design is focusing on human needs. It is the myriad of human
considerations that separates interaction design from software design in general.
However, we do not agree with those who claim interaction design is an art form.
On the other hand, it seems obvious that some aspects of interaction design could
never be formalized.
We emphasize that design is a continuous learning experience. In a specific project,
the designer learns more and more about the design space, making the design process
iterative. Over time, a designers knowledge is organized into larger chunks that
consist of complex design issues and potential designs. The kind of learning that
best describes interaction design is entertainment (i.e., the need for a
first-person feeling and the urge to avoid interruption from the outside world or
from thoughts that hinder the process) (Norman, 1993). This is consistent with our
experience that interaction design is a social activity and is best done by people
working in pairs (Winograd, 1996).
A computer system inevitably reflects a mental model of the way the users are
believed to perceive the system's structure. The design should be carefully
crafted to reflect the users' actual mental model. We use the term "computer
interface" to emphasize the perspective of human beings, not computers
(Grudin, 1993).
6. CONCLUSIONS
We find effective external design, in general, and interaction design, in
particular, to be a challenging and engaging activity. When developing computer
systems for humans, it is the human-computer interface that embodies all
opportunities, values, information, knowledge, and feelings that the user will
experience when using the system. We claim that there are many issues to be
considered when performing interaction design and bridging the gap. We have
called this the design space.
We are constantly striving to find new and better ways to collect the data needed
for design. One difficulty lies in gaining access to all the implicit knowledge
and the self-evident matters that exist in the minds of customers and users. This
means that the design process can never be a simple transformation, because the
customer and the users are never able to express all their needs in detail at the
beginning of the design process. Therefore, the matter of design is to be able to
collect enough facts to begin a discussion of the proposed design. We perform
interaction design in this manner to ensure that important design decisions are
made in cooperation with the customer and the users. This is important because those
decisions affect costs in both the short and the long term, and they affect users'
everyday activities.
From the customers' and the users' points of view, it is critical that all work
concerning external design is focused on satisfying their needs, rather than
satisfying a formal system specification. We have been using our methods for
external design with users for all types of applications (e.g., public information
systems, multimedia production systems, production-planning systems, products for
homes, and for World Wide Web access). It is important that the method used for
project management can accommodate a process of continuous learning and revision
of the original system specification.
There is a great challenge in the future uses of computers. We strongly believe
that the area of interaction design will benefit from expertise in areas such as
architecture and commercial advertising, where knowledge about how to meet human
emotional needs is taken seriously. We also hope that the area of interaction
design will develop better means to respond to cultural and social values.
7. ACKNOWLEDGMENTS
The work described herein has benefited from the contributions of several
colleagues in many different ways. In particular the authors would like to
acknowledge the efforts of Anna Skevik, who founded the usability program at
LinnData in 1989.
8. REFERENCES
Ehn, P., Scandinavian design: on participation and skill, in Usability: Turning
Technologies into Tools, Adler, P. and Winograd, T., Eds., Oxford University Press,
New York, 1992, 99-132.
Floyd, C., Mehl, V-M., Reisin, F-M., Schmidt, G., and Wolf, G., Out of Scandinavia:
alternative approaches to software design and system development, Human-Computer
Interaction, 4, 253-350, 1989.
Grudin, J., Interface: an evolving concept, Communications of the ACM, 36, 110-119,
1993.
Holtzblatt, K. and Beyer, H., Making customer-centered design work for teams,
Communications of the ACM, 36, 93-103, 1993.
Nilsson, P. and Lachonius, J., Internal LinnData Handbook for interaction design
using the VISA method Ver. 1.0, 1996.
Norman, D. A., Things that Make us Smart, Addison-Wesley, Reading, MA, 1993.
Ottersten, I. and Bengtsson, B., Internal LinnData Handbook for context and
requirements analysis using the VISA method, Ver. 1.0, 1996.
Ottersten, I. and Granson, H., Objektorienterad utveckling med COOL-metoden
(Object-Oriented Development with the COOL Method), Studentlitteratur, Lund, 1993.
Skevik, A., Internal LinnData Handbook for graphical and textual interface design
using the AnvändarGestaltning method, Ver. 2.1, 1994.
Stolterman, E., The Hidden Rationale of Design Work, Umeå University, Umeå,
Sweden, 1991.
Winograd, T., Bringing Design to Software, Addison-Wesley, Reading, MA, 1996.
Wright, P. and Monk, A., Co-operative evaluation. The York Manual, University of
York, York, UK, 1991.
Chapter 7
Mind the Gap: Surviving the Dangers of User Interface Design
Martin Rantzer
Systems Engineering Lab, Ericsson Radio Systems, Linköping, Sweden
email: [email protected]
Table of Contents
Abstract
1. Introduction
2. Overview of the Delta Method
2.1 What is Special about Delta?
3. The Case Study: TSS 2000
4. Before Crossing the Gap
4.1. System Definition
4.2. User Profiling
4.3. Task Analysis
4.4. Design Preparations
4.5. Usability Requirements
5. Bridging the Gap
5.1. Conceptual Design
5.1.1. The Workshop
5.1.2. Participants in the Workshop
5.1.3. The Design Room
5.1.4. Walk-Through of the Background Material
5.1.5. User Environment Diagrams
5.1.6. Conceptual Design in TSS 2000
5.1.7. The Result
5.2. User Interface Design
5.2.1. Opening Screen Main Window
5.2.2. Choosing a Metaphor
5.2.3. Transforming the Conceptual Model into a User Interface Design
6. After the Bridge
6.1. Usability Tests
6.2. Computer Prototype
7. Conclusions
8. Acknowledgments
9. Background to the Delta Method
10. References
ABSTRACT
The Delta Method is a systematic approach to usability engineering that is used
within the Ericsson Corporation. The method has successfully supported usability
work in a number of projects and continued to evolve during the process. We have
managed to introduce usability engineering as a way to establish a solid platform
of user data and to extend that platform to the other side of the design gap.
Based on a real-life case study we describe and exemplify the background information
processes necessary to bridge the design gap. The central activities during the
user interface design are two design workshops. During the first workshop we
structure the services of the system into a conceptual model and during the second
we transform the implementation-independent structure into a paper prototype that
reflects the future user interface.
The case study shows that a method such as Delta does not stifle the creativity
that is needed to design good user interfaces. It helps to structure the work so
that the combined force and creativity of a usability team can be used efficiently
during the process.
1. INTRODUCTION
As usability engineering is making its way into the mainstream of software
development at Ericsson, we face the problem of integrating usability activities
with the design process and putting the results of the usability work to good use
in the design of products. This difficult integration often becomes acute when we
try to bridge the gap, transforming the user information into an effective user
interface design. To bridge this gap we use a usability engineering method called
Delta. In order to build an adequate bridge, the scope of the method spans from
user and task analysis to conceptual modeling and to the design of user-interface
prototypes.
The face of the telecommunications market is changing rapidly, and usability has
become a very important factor in attracting new customers. Following the
deregulation, our traditional customers, the old, large, and skilled telephone
administrations, are forced to change their operation to meet the competition from
the new operators. They need powerful and flexible systems to support their
experienced and skilled personnel in order to stay ahead of the competition. To
continue to supply these customers with our products, our systems must evolve to
support the new situation.
We also want to attract the new telecom operators who want to "turn the key" on
the new system and start earning money. They want a system that is up and running
within minutes after delivery, that keeps on running with no down time, and that
can be managed by unskilled personnel. In this case we have to provide most of the
relevant processes rather than being able to adapt existing ones within an
organization. The business processes and the general technical requirements are
the same in both cases, but the new context poses dramatically different usability
requirements on the user interface of the system.
The typical context for our usability engineering efforts is large, complex
technical computer systems within the telecom industry. These are often technical
support systems used to manage the installation, operation, and maintenance of large
telecom networks. We have used the Delta Method during the development of new
systems and redevelopment of existing ones. The methodology is sufficiently
scalable to be used in small projects and it is also applicable outside the telecom
domain. It is more doubtful that it would support experimental prototyping as
described by Smith in Chapter 11.
This chapter presents the Delta Method and how it was used in the design of the
next generation of a test and simulation tool for telecom equipment. As always in
case studies the context and conditions are unique, but we believe that many of
the problems are universal and that our experience can be used to bridge the design
gap more easily.
studies of users' interaction with prototypes. The tight integration of the method
into the existing system development process raises the usability requirements to
the same level as the technical and functional requirements. They are no longer
optional or add-ons to the requirement specification. The Delta Method is
summarized in the following inset.
The method is designed to be used by system developers and technical communicators
with limited formal knowledge of usability work. Trained usability engineers and
human factors specialists are still very scarce in Swedish software development
companies, a situation that has led us to adopt on-the-job training. We supply a
method that produces good results even if the work is performed by workers not
trained as usability specialists. Selected system developers and technical
communicators take part in basic usability training and then receive support from
an experienced usability engineer during their first projects.
System definition
Define the scope of the proposed system and identify preliminary user
categories and system services.
User profiling
Create detailed user profiles for each relevant user category.
Task analysis
Identify and record all relevant user activities of the current work
practice.
Design preparations
Restructure and reinvent the current work tasks.
Usability requirements
Define usability requirements that the prototype and the finished system must
meet.
Conceptual design
Create a conceptual model of the system services that support all future work
tasks.
Prototyping
Design prototypes, using paper or computer, that reflect the conceptual
model.
Usability tests
Observe representative users testing the prototype. Evaluate how well the
system meets the requirements and redesign the prototype if necessary.
UI implementation
Support the system developers with user interface development skills during
the implementation of the applications.
Other sources of information regarding the Delta Method are the Delta Method
Handbook (Ericsson Infocom, 1994) and The Delta Method: A Way to Introduce
Usability (Rantzer, 1996).
she often has both training and experience in how to explain complicated technical
issues to inexperienced users.
The usability engineer will plan and lead the design of the user interface and ensure
that it fulfills predefined usability requirements. This is accomplished through
usability testing and evaluation of user interface prototypes. The usability
engineer has formal training and/or experience in usability engineering or human
factors. He or she will work together with the system designer and the technical
communicator in the usability team.
Traditionally the user interface has been defined as the graphical layout of the
program, as it is seen on the computer screen. The Delta Method expands this concept
of user interface to have a wider meaning. According to the Delta Method a user
interface consists of three equally important parts:
The services offered to the user by the system (i.e., the functions
directly supporting user tasks).
The actual graphical presentation of the services on the screen
(traditionally referred to as the user interface).
The enabling information (e.g., the user documentation) needed by the user
in order to be able to use the system efficiently.
All three aspects of the user interface are taken into consideration when the
requirements on the system are defined. The process of defining the system
functionality is influenced by how it is to be presented to the users and the
knowledge and information that is needed to use the functions. Since the enabling
information is a part of the user interface, it is important that the technical
communicators and the system designers cooperate during the design process. This
ensures that the different parts of the user interface are designed in a consistent
way and that the services offer a comprehensive view of the system. In other words,
working according to the Delta method means that system designers and technical
communicators work side-by-side during analysis and design, with the aim of
preserving the interests of the users.
the same time to find out what's for dinner. TSS 2000 offers the possibility to
generate a realistic traffic load and simulate how the calls are handed over between
base stations along the track.
The goal of the TSS 2000 was to integrate the capabilities of two existing test
tools and also offer additional services. The focus of the project was to merge
the tools and rework them as necessary. It was believed that poor usability was
a dominant factor in many of the user complaints about the old products. Some of
the more technical problems would be addressed in the new system, but there was
a need to address the usability problems in a more systematic way. During the spring
of 1996 a group of system developers and technical communicators initiated a study
to improve the usability of the new tool.
The purpose of the usability study was to produce
An on-line, evolutionary prototype.
A test specification for the user interface.
A design specification that contains measurable requirements for both the
user interface and the user documentation.
Guidelines and recommendations for a conversion from Open Look to
CDE/Motif.
Although usability had been identified as an important aspect of the
new product, substantial lobbying and internal marketing were still required to get
the usability study going. The effort to design and implement the functionality
of the system was massive and, compared to the task of getting the system up and
running, usability was considered a minor problem. Once the study was approved
we faced some restrictions:
"Only a new UI": The study was limited to improving the user interface
of the tool. At the start of the project there was very little understanding
of the need for an open-ended usability study.
"Just port it": The old test tools ran on PC and UNIX platforms using a
character-based user interface and the Open Look look and feel. Ericsson
is in the process of migrating all development and test tools to Motif/CDE.
The need for moving TSS 2000 to Motif/CDE was used as an argument for taking
the opportunity to make a complete redesign of the user interface.
"Ready by yesterday": The time schedule for the usability study was very
tight. Everything that was not considered to be critical to the study was
left for later, and every activity that we performed had to be trimmed to
fit the deadlines. This has surely affected the quality of the study, but
we simply had to produce the best result possible with the given limitations.
This case study is far from the schoolbook example of usability engineering; it
is more a case of "usability shooting from the hip." The initial time plan for the
usability study, roughly 3 months, was very short. In this time we had to perform
all the necessary usability activities, and there was little background
information that could be reused to help us shorten the time for the study. This
meant that we had to cut every corner and constantly publish reassuring results
in order to secure more time and resources.
Due to the limited resources, design decisions sometimes had to be based on results
that were a bit incomplete or unfinished, but in those cases we had to trust our
intuitions. A method for usability engineering has to survive in the real world,
adapting to the changing characteristics of the projects. To us, usability is
engineering in practice; we will never design the ultimate user interface, but
we always strive to produce the best possible result with the given resources.
Many times the industrial usability engineer has neither time nor interest to study
exactly how unusable a product is. The mere fact that the users have problems is
enough for considering a redesign.
System definition
User profiling
Task analysis
Design preparations
Usability requirements
things from the viewpoint of the organization and to find out how the computer system
fits into the company's business.
During the system definition the design group and customer representatives perform
a rough analysis of the proposed system on an abstract level. The customer
representatives are typically upper or middle managers, marketing people, and
technical or organizational specialists. The intention is primarily to set a scope
of the project, and users are not present during this activity. The purpose is to
gather the customer requirements and set the stage for future, more user-centered
work. The information is documented as a System Vision, a mix of the customer's
expectations, concrete requirements, and preliminary user categories and system
services.
The requirements on the system tend to differ a great deal, depending on whether
information is gathered from the customers or the users of the system. The customers
emphasize qualities such as low price, fast delivery and long life, whereas the
users want a system that is fast, easy to learn, and effectively supports their
work tasks. These differences in requirements are seldom observed in traditional
systems design, and most often the users are completely cut off from the design
work. The aim of the Delta method is to make sure that both parties are involved.
The usability team makes a rough draft of the categories of system users and their
requirements on the system to create an Information Matrix. Preliminary user
categories are placed on one axis and important work tasks on the other. The
intersection describes what requirements a user category has on a specific task.
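As a rough illustration of the Information Matrix idea described above, the structure can be sketched as a two-axis mapping from user categories and work tasks to requirements. All category, task, and requirement names below are hypothetical examples of our own, not data from the study:

```python
# Sketch of an Information Matrix: preliminary user categories on one axis,
# work tasks on the other, with each intersection holding that category's
# requirements for the task. All labels here are illustrative assumptions.

categories = ["system tester", "function tester"]
tasks = ["configure simulation", "run traffic load", "analyze results"]

# matrix[category][task] -> list of requirement statements
matrix = {c: {t: [] for t in tasks} for c in categories}

def add_requirement(category: str, task: str, requirement: str) -> None:
    """Record a requirement at the intersection of a user category and a task."""
    matrix[category][task].append(requirement)

add_requirement("function tester", "run traffic load",
                "start a predefined load scenario in one step")
add_requirement("system tester", "analyze results",
                "export raw measurement data for offline analysis")

# Walking the matrix gives a per-category view of the requirements.
for category in categories:
    for task in tasks:
        for req in matrix[category][task]:
            print(f"{category} / {task}: {req}")
```

Empty cells are as informative as filled ones: they show which category/task combinations still lack gathered requirements.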
There was no formal system definition activity in the TSS 2000 usability study.
Much of the work that is normally performed during the system definition had been
conducted as internal usability marketing activities. This included seeking
management support for the work, finding relevant background information, and
planning the scope of the project. Therefore, when the study was approved there
was neither a pressing need nor adequate time for a full scale system definition,
although the scope and objectives of the TSS 2000 usability study were described
in a manner similar to a system vision.
get the big picture of the user categories. It is important to verify the results
and add more details to the user profiles during the user interviews.
The questionnaire used in the case study was based on a general user questionnaire
that was adapted to the target group. It included two parts: general characteristics
such as age, gender, and level of education; and a domain-specific part with
questions regarding experience with certain work tasks and test tools. We also asked
for a rating of the current tools and documentation. An excerpt of a user profile
follows:
User Profile: Tester
Work experience
The turnover for people working as testers is high. Testing is often used as a way
to introduce new personnel into the organization. By performing the tests they learn
all parts of the system. After some time they often move to other parts of the
development organization.
The questionnaire showed that 20% of the testers had been working in that position
for less than a year, 60% between one and three years, and 20% were
veterans with more than five years' experience.
On the question of how long they expected to stay on as testers, 40% expected they
would move on within one to three years' time, 40% expected to stay three to five
years, and 20% planned to stay more than five years.
Work Situation
The testers work very independently and have a quite hectic work situation. A
summary of the questionnaire shows the following characteristics:
Teamwork: to some extent
Individual work: quite a lot
Enabling information: possibly at hand; hardly sufficient; hardly sufficient;
not sufficient
Existing Tools
Most testers had no experience with the current version of the product. One who
had some experience rated one of the tools as being supportive and easy to learn.
The other tools received low ratings.
None of the testers found the language in the manuals or the user interface hard
to understand.
The users were also asked to describe their work tasks on a keyword level. These
general work descriptions were used to prepare the task analysis and as a basis
for selecting representative users to interview. The questionnaire was sent to
approximately 100 intended users, of whom 70 percent completed and returned it.
To induce the users to help us, we promised them a small gift (a pen with the TSS
2000 logo) for returning the questionnaire on time.
The analysis of the questionnaire identified three user categories: test program
developer, tester (as shown in the inset above), and support person. The user
categories were created based on their work title, their description of their work,
the tools they were using, and knowledge of the customer organization. Further
analysis, after performing a number of user interviews, led us to eliminate one
of the user categories, support person. The category of tester was split into
two separate categories: system tester and function tester. The work tasks of these
two categories were found to have different characteristics, which imposed some
conflicting requirements on the system.
interviewer's understanding of the work climate, but usually there is little time
to study this in detail.
During the interview, the user describes his or her work situation and tasks. The
interviewers will try to capture the tasks in an activity graph (see Figure 7.2),
which describes the user's tasks and environment. The activity graph method of
taking notes is described in the Swedish book Verksamhetsutveckla datorsystem
(Enterprise Development of Computer Systems) by Goldkuhl (1993). To verify that
the information in the graphs is correct and complete, the interviewers "play
back" the description of the tasks to the user. This means that the interviewer
describes the work tasks to the user as they are described in the activity graphs.
It allows the user to listen and concentrate on correcting the description and
adding missing information.
Eight future users of TSS 2000 were selected for interviews based on the
questionnaire or after being suggested by key contacts within the customer
organization. The users represented all the categories and the user characteristics
found by the questionnaire. Each interview lasted approximately 2 hours and
resulted in activity graphs, updated user profiles and general notes on the design
of the future system.
The user interviews of the TSS 2000 project were conducted at the users' workplace.
The actual interviews were conducted in an ordinary conference room, but the visit
also included a guided tour of the facilities and an opportunity to observe the
users performing their tasks. During the tour we also met and talked to other users
who were not scheduled for interviews; they, too, were very eager to tell us
about their situation. The users taking part in the interviews had no problem
mastering the notation and most of them actively took part in the drawing of the
activity graphs. The graphs were a good way for the users to structure their story,
and it helped them to recall the information needed to perform the tasks. It was
also helpful in keeping the users on track. Some users were primarily interested
in giving their opinion on the shortcomings of the current system and their wishes
for improvement. Focusing the user on the graph helped to structure and control
the interview. The activity graphs drawn in cooperation with the users were later
merged into supergraphs representing a consolidated description of all the tasks
of a user category.
The usability team also created general descriptions of the work tasks based on
the interviews and questionnaires. The tasks of the System and Function Testers
are described below:
Work Descriptions:
The System Tester is handed a fairly complex test case from the Test Program
Developer, with 5 to 8 test programs and a parameter set that is to be tuned during
test execution. The system tests are executed over a long period of time, during
which different parameters such as traffic load are constantly changed.
The Function Tester writes the test programs, the test specification and the test
instruction himself. The test programs are usually written from scratch. The
function test programs are written to test a specific function in the telecom
equipment. The Function Tester is not interested in observing the test tool while
the test is running unless something goes wrong during the test set-up.
It is important to focus on the work tasks of the users and not to become involved
in a discussion about the shortcomings of the current system. The expectations of
the users can sometimes lead to difficulties. During a customer satisfaction survey
in Japan, the users and customers reported a number of problems to the interviewer,
expecting them to be fixed in the next release. Since this was not the focus of
the survey, the problems remained in the next release, much to the annoyance of
the Japanese customers.
The usability work reported so far has focused primarily on the current state of
the workplace. The results reflect the users' current requirements and needs and the
shortcomings of the existing systems, but offer only vague ideas or hints of how the
new system should support the work.
We now start the work of carefully restructuring the current work tasks and adding
new tasks. The graphs are transformed into scenarios and/or new activity graphs
that reflect the new way of performing the tasks. The activity graphs are often
used as the starting point when writing the scenarios, but the scenarios also
reflect the characteristics of the physical environment of the users. That is
something that cannot easily be expressed in the graphs. The scenarios are also
easier for outsiders to understand, and they are often presented to the users and
customer in order to verify the services of the new system. Just as the scenarios
are created to describe the future work tasks, the user profiles should be updated
to define the skills and characteristics the future users will need.
One of the major mistakes of the case study was the decision to try to save time
by not writing scenarios as part of the design preparations. The following inset
contains a reconstruction of a potential scenario. The scenario illustrates how
new tasks (in bold) are integrated into the current work.
Scenario for a Function Tester:
The function tester Tom enters the test lab one Tuesday morning. He is a bit late
since he stayed late last night scheduling a "break & destroy" test. He had estimated
that the test program would crash the system when the load reached about 10 000
simultaneous calls. He therefore specified a number of alternative tests that
should be run at such a breakdown. Just as he suspected, the test had come to a halt
halfway through the normal path sometime during the night. With TSS 2000 it is not
difficult for Tom to analyze the log containing the relevant parameters and to
recreate the exact situation of the breakdown. He fills in the preformatted trouble
report and attaches the relevant part of the log file and mails it to whom it may
concern.
The usability team also developed design recommendations to capture needs and
requirements from the user and task analysis that had not surfaced through work
with the scenarios. Two examples are:
Problem: Generally the users do not work as Function Testers for more than a year
or two. The personnel at some test sites consist of almost no regular testers, only
consultants.
Recommendation: The initial impression of using the test tool must be positive.
It must be simple to learn and use; otherwise, the testers will be reluctant to
use it. It is also important to provide documentation and training courses that
are adapted to inexperienced testers.
Problem: Many Function Testers are newcomers who do not have the skill or the
time to develop extensive test programs. The time between when they get their
assignment and when the testing is expected to be finished is very short.
Some of the usability requirements were added later, and some of the initial ones
evolved during the prototyping and usability tests.
Usability Requirements:
Test case: You have been enrolled in project Gizmo as a function tester. A number
of test scripts have been prepared by a colleague. Use TSS 2000 to start the test
script Alpha and increase the traffic load until the number of dropped calls
reaches 100.
Relevance: 90% of the users are required not to invoke more than one irrelevant
command.
Efficiency: 90% of the users are required to complete the task in less than two
minutes.
Attitude: All users are required to prefer the new system over the present one.
Learnability: The time spent using the documentation shall be less than 1 minute.
Availability: 95% of the users are required to find relevant help in less than 20
seconds.
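Measurable requirements such as these can be checked mechanically against logged usability test sessions. The sketch below is only an illustration of that idea; the session fields, the data, and the evaluation function are all assumptions, not part of the Delta Method or the TSS 2000 project:

```python
# Hedged sketch: evaluating measurable usability requirements against
# logged test sessions. Field names and the sample data are illustrative.

def evaluate(sessions):
    """Return a dict mapping each requirement to pass/fail."""
    n = len(sessions)
    results = {}
    # Relevance: 90% of users invoke at most one irrelevant command.
    ok = sum(1 for s in sessions if s["irrelevant_commands"] <= 1)
    results["relevance"] = ok / n >= 0.90
    # Efficiency: 90% of users complete the task in under two minutes.
    ok = sum(1 for s in sessions if s["task_seconds"] < 120)
    results["efficiency"] = ok / n >= 0.90
    # Attitude: all users prefer the new system over the present one.
    results["attitude"] = all(s["prefers_new"] for s in sessions)
    # Learnability: time spent in the documentation under one minute.
    results["learnability"] = all(s["doc_seconds"] < 60 for s in sessions)
    # Availability: 95% of users find relevant help in under 20 seconds.
    ok = sum(1 for s in sessions if s["help_seconds"] < 20)
    results["availability"] = ok / n >= 0.95
    return results

# Illustrative data from two hypothetical test sessions.
sessions = [
    {"irrelevant_commands": 0, "task_seconds": 95, "prefers_new": True,
     "doc_seconds": 30, "help_seconds": 12},
    {"irrelevant_commands": 1, "task_seconds": 110, "prefers_new": True,
     "doc_seconds": 45, "help_seconds": 18},
]
print(evaluate(sessions))
```

Expressing the requirements this way makes the pass/fail criteria explicit before testing begins, which is the point of stating them as percentages and time limits in the first place.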
The concept list is a collection of user terms and their meanings. It is continually updated during the study.
The team at the workshop consisted of:
A moderator
A system designer
A technical communicator
A GUI-prototyper
A system architect
A system tester (of TSS 2000)
The workshop is led by a moderator who is responsible for the flow of the work,
for effective management of discussions, and for keeping the group focused on the
work process. In the case of the TSS 2000 workshop, the moderator had limited
knowledge of the previous products and the work domain of the users. This proved
very productive because it kept discussions from lapsing into unnecessary details.
It also forced the participants to describe the tasks and problems of the users
in a manner that made it easier to form a conceptual model.
The usability study was jointly led by a system developer and a technical
communicator, both of whom were experienced in their respective fields and
knowledgeable about the Delta Method. The technical communicator had developed
the user manuals for some of the previous tools and was therefore aware of the many
difficulties involved in explaining the use of the old system. She also provided most
of the suggestions on how the user information should be divided among the on-line
help, on-line documentation, and the user guide. The system developer had the best
knowledge of the current users and their tasks because of having documented the
results of the task analyses and the activity graphs earlier.
The GUI-prototyper had not taken part in the user and task analysis and therefore
needed to quickly understand the most important aspects of the earlier work so as
to effectively implement the computer prototype later in the study. During the later
stages of the conceptual design, he could also offer advice on how different
concepts could be represented graphically.
The system architect had been one of the driving forces during the development of
the current versions of the test tools. He was extremely knowledgeable on the design
of the current test tools and on all the technical aspects of the equipment the
users test, such as the operating system and communication protocols.
The system tester had long experience of testing and debugging the current version
of one of the test tools. Several of these tests were performed at the customers'
work sites in cooperation with the users of the system, so he had developed a good
understanding of the characteristics of the users' tasks and their workplace.
5.1.3. The Design Room
The design room acts as a repository for all usability information that is gathered
during the course of the project. We strongly recommend the Design Room approach
suggested by Karat and Bennett (1991). In some projects, including this one, we have
experienced problems enforcing the "sanctuary" of the design room. The reasons
have varied from "no vacancies" to "I need the relevant information available
at my desk."
In the usability study of TSS 2000 we did not use a permanent Design Room for the
usability work. Instead the workshop was held at a small conference hotel away from
the city. This arrangement allowed us to focus on the work without being disturbed.
The walls of the conference room were used for posting the background information,
and new information was constantly being added to the walls.
The moderator was constantly on his feet, leading the discussion and moving between
the different walls. All conclusions were first sketched on the white board; when
the concept started to stabilize it was summarized on a flip chart; finally, when
the subject was closed, the paper was moved to the appropriate wall. This way
of working allowed the concepts to gradually mature and then be added to the facts
and assumptions concerning the users and their tasks or to the emerging design
described in the User Environment Diagram.
5.1.4. Walk-Through of the Background Material
When all the background material had been posted on the appropriate walls, the team
conducted a walk-through of the information. The walk-through during the TSS 2000
workshop developed into an interview where the moderator interviewed the
participants, walking through all the background information that was visible on
the walls. This gave all the participants a basic understanding of the facts, with
more details available from the system developer and the technical communicator
on request. The other participants offered their reflections on the data as they
were presented, and relevant comments were added to the user profiles and activity
graphs.
The walk-through has several purposes:
It is an effective way to introduce the material to participants who have
not been part of the user and task analysis.
are easier to verify with the users than are some forms of graphical work
description.
5.1.5. User Environment Diagrams
The goal of User Environment Diagrams (UEDs) is to structure the services and
objects of the system in a way that supports the users' tasks without imposing a
strict task order. The structure is implementation independent, even if in reality
the choice of platform and "look and feel" is often obvious or not under the
control of the design team.
A focus area describes what system services and objects a user requires to solve
a defined task. The services might be simple, possibly corresponding to a single
function in the user interface, or to a complex set of functions, where there is
a need to decompose functions into new focus areas with additional but less complex
services and objects. An example from the case study can be found in Figure 7.3.
It is important to be flexible and to keep an open mind when identifying candidates for the focus
areas. There is not a one-to-one mapping between an activity in the activity graphs
and a focus area in the user environment diagram. Several activities might be
intimately connected and might need to be merged to form a focus area. On the other
hand, an activity might be too complex for a single focus area and might need to
be decomposed to identify relevant services and objects.
Some system services might already have been identified in the information matrix
as services essential to the success of the system. These services are often
described on a very general level and need to be analyzed to identify the extent
to which they are already covered by the activity graphs. If they are vital, they
should already be accounted for. If they are not present, the customer might have
a very different view of what services the system should offer!
Another potential source of system services are scenarios describing the future
work practice. The scenarios and the activity graphs reflect the same reality, but
scenarios often offer more details and also include more of the potential
difficulties and exceptions that the system must be able to accommodate. A simple
and straightforward way of analyzing the scenarios is to search for verbs as
candidate services and nouns as possible objects.
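This verb/noun scan can be mechanized in a rough way. The sketch below uses a tiny hand-made word list rather than a real part-of-speech tagger; both the lexicon and the sample sentence (taken from the usability test case above) are assumptions for illustration only:

```python
# Hedged sketch: scanning scenario text for candidate services (verbs)
# and candidate user objects (nouns). A real study would use a proper
# part-of-speech tagger; the tiny lexicon here is purely illustrative.

VERBS = {"start", "schedule", "analyze", "attach", "mail", "increase"}
NOUNS = {"test", "script", "log", "report", "load", "call", "calls"}

def candidates(scenario):
    words = [w.strip(".,").lower() for w in scenario.split()]
    services = sorted({w for w in words if w in VERBS})
    objects = sorted({w for w in words if w in NOUNS})
    return services, objects

sentence = ("Start the test script Alpha and increase the traffic load "
            "until the number of dropped calls reaches 100.")
services, objects = candidates(sentence)
print("candidate services:", services)
print("candidate objects:", objects)
```

The output lists are only candidates; as the text notes, the design team still has to judge which verbs really are system services and which nouns deserve to become user objects.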
The process of finding possible user objects is tightly connected with identifying
system services. A service usually operates on an object and, if a service is
identified, the related object(s) are usually close by. The information objects
in the activity graphs are often prime candidates for becoming objects or focus
areas for managing the information objects (data entry, database management, etc.).
Another source of information is the concept list, which defines the users'
meanings of all terms. It is used to identify potential objects and to ensure that
services and objects have appropriate names.
In most cases it is easy to describe the flow between the focus areas. They might
be identified from the connections in the activity graphs or be quite obvious, for
example, when a focus area has been decomposed into subareas. The situation can
get more complex if several focus areas use the same system services or even the
same subareas. This is an indication that perhaps further investigation is needed,
possibly resulting in restructuring of the focus areas. There is no rule that
prohibits this multiple use of services, but our experience is that the resulting
user interface often becomes more complex. It is therefore best to resolve the
matter during the conceptual design.
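The structure described above, focus areas holding services and objects, optionally decomposed into subareas and connected by flows, can be captured in a small data model. The sketch below is one possible representation; the area names, services, and the `shared_services` helper are illustrative assumptions, not part of the Delta Method:

```python
# Hedged sketch: a minimal data model for a User Environment Diagram.
# Focus areas hold services and user objects, may be decomposed into
# subareas, and are connected by directed flows. Names are illustrative.

from dataclasses import dataclass, field

@dataclass
class FocusArea:
    name: str
    services: list = field(default_factory=list)
    objects: list = field(default_factory=list)
    subareas: list = field(default_factory=list)   # decomposed FocusAreas

@dataclass
class UED:
    areas: dict = field(default_factory=dict)
    flows: list = field(default_factory=list)      # (from_area, to_area)

    def add(self, area):
        self.areas[area.name] = area

    def connect(self, src, dst):
        self.flows.append((src, dst))

    def shared_services(self):
        """Services used by more than one focus area; as noted above,
        such sharing often signals a need to restructure the areas."""
        seen = {}
        for area in self.areas.values():
            for s in area.services:
                seen.setdefault(s, set()).add(area.name)
        return {s: names for s, names in seen.items() if len(names) > 1}

ued = UED()
ued.add(FocusArea("Manage test cases", ["create", "copy"], ["test case"]))
ued.add(FocusArea("Run test", ["start", "stop", "copy"], ["log", "load"]))
ued.connect("Manage test cases", "Run test")
print(ued.shared_services())
```

A query like `shared_services` makes the multiple-use warning from the paragraph above easy to check mechanically once the diagram has been recorded.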
5.1.6. Conceptual Design in TSS 2000
The conceptual design of the case study was highly interactive, involving the entire
usability team. The moderator selected a central activity from the activity graphs
and led the discussion regarding services and objects that would be relevant to
the user when performing the activity. The session began with the design team
suggesting the services and objects to include in a focus area. The suggestions
were written on colored notes and added to the focus area on the white-board. Whether
or not the group found it easier to identify the services or objects first was
dependent on the problem domain. In practice, it did not really matter because the
two sets were interdependent. If a service was identified first, it was usually
easy to identify related objects and vice versa.
All system services (yellow notes) and user objects (white notes) were organized
on the white-board under a suitable title for the focus area. If a service was
complex it was decomposed into subareas, each of which consisted of more detailed
services and objects. The flow between the focus areas was indicated by arrows on
the white-board.
Once a focus area or group of areas seemed to be stable (i.e., no additional services
or objects could be found and no system service was judged to be overly complex),
it was copied from the white-board to a flip-chart and then posted on a wall. The
white-board allowed the freedom to quickly add or revise services, subareas and
objects. Moving the information onto paper was a natural way to accomplish a sense
of closure to the work process, giving the structure greater stability, but still
allowing further comments or revisions. This process allowed us to maintain some
structure in the design process and to avoid excessive debate on issues. The
moderator determined when the level of detail was sufficient and when it was time
to consider a new focus area.
When an activity graph had been fully analyzed and converted into a number of focus
areas, the scenarios were used to verify that the service structure described by
the UED was correct and complete. Since we lacked explicit user scenarios in the
case study, we used the user profiles and the activity graphs to create hypothetical
scenarios. Because the activity graphs were themselves the basis for the focus
areas, testing the diagrams against scenarios derived from the same graphs may seem
circular. Surprisingly, the hypothetical scenarios nevertheless revealed several
problems and inconsistencies in the design and were an excellent substitute for
real scenarios, given our situation.
5.1.7. The Result
The mood of the design team after the workshop was a feeling of urgency mixed with
weariness regarding the application being developed. We still managed to muster
up enough enthusiasm to "publish" the results of the workshop. All material from
the workshop was put on a wall in the hallway back at the office. We arranged a
walk-through of the user environment diagrams for other developers, relevant
management, and knowledgeable customer representatives. We felt it was important
to continually communicate the status of the project, and it was an excellent
opportunity to get buy-in and reactions from managers before the design. It was
also important to quickly establish a common view of the outcome of the workshop,
which could serve as a baseline for future discussions.
Posting results in the hallway also proved to have drawbacks. When there was a need
for closed discussion, all material had to be moved from the hallway into a
conference room. When the meeting was finished, all the material had to be returned
to the hallway. After a time the material was simply kept in rolls and piles between
meetings.
5.2.1. Opening Screen Main Window
One of the first and most important tasks of the prototype design is to find an
effective way to introduce the users to the system. The main window is usually the
first thing the users come in contact with, and it is a window that will continue
to be the basis for their work. We basically look at three different aspects when
designing the main window of the application:
Vital tasks: those that are vital for the user to perform well, even
under heavy stress. Example: A network supervisor continually monitors the
traffic in a network. If something goes wrong (e.g., someone cuts a cable),
it is important that the user be able to assess the situation quickly and
reroute the traffic.
Frequent tasks: those tasks that users spend a majority of their time
performing need to be done effectively, and the system should offer
alternative ways of performing them. Example: Directory inquiries, or any
similar data query application, need to offer good support for the limited
number of tasks that the users perform during each call.
Navigational aid: the users need to understand quickly and easily what
the application is capable of doing and how they should go about accomplishing
their tasks. Example: Information kiosks, which need to be easy to learn and
use without instructions.
It is desirable to design a user interface that supports all these aspects well.
In most cases this is not possible, and the characteristics of the users and their
work tasks should be used to determine how to optimize the user interface for the
most important aspects.
In the main window of TSS 2000, we focused on providing navigational aid by
introducing the central concept or metaphor of a "test case." The system was to
be used by several categories of users. All of them had a number of important (but
not vital) tasks to perform, and none of the tasks were highly repetitive. With
good navigational aid, each category of users would find it easy to find and work
with their section of the application. We also wanted to offer good support for
the frequently performed tasks, but these tasks were better suited to their own
windows, depending on the different categories of users and their work.
The metaphor also supported the testers when they reused test programs and data
from old test cases or from each other. Reuse was not supported by the current tool.
Instead it involved a very cumbersome procedure of making a complete copy of the
database in use and then removing the irrelevant parts.
Previous experience outside this project provides a good example of how the choice
of an appropriate metaphor is very important since it will affect many design
decisions. During a preliminary study of what was to become one of the existing
test tools, we used a prototype to help customers visualize the suggested
capabilities of the system. The developers of the prototype had very limited
knowledge of the domain and no usability activities had been performed. The target
group for the prototype was in fact the customers and not the end users. The
prototype could therefore show a very simplified view of the two difficult areas
that the tool needed to support: running the tests and presenting the test results.
The metaphor used to present the information during a test run was conceived from
the layout of an audio mixer used to record music in studios. The channels of the
mixer represented the eight test ports of the test tool. The status of the test
ports was presented at run-time as "VU-meters," and test parameters could be
changed dynamically by turning "knobs" on the mixer board. The execution of the
test could be controlled by "play" and "stop" buttons on each channel. The
metaphor supported a very simple and attractive visualization of the test execution,
but it failed to support several important aspects of the task. Instead of making
the system more usable, the metaphor became a liability and constricted the use
of the tool.
The metaphor failed to capture the fact that most test cases would run for several
hours, usually during the night. There would not be any tester to review the test
results on the screen or to change the parameters during the test run. Furthermore,
it did not include support for presenting or analyzing logs or other historical
data that could reveal what had happened during the night. The easy operation of
the tests and the instant visual feedback looked very impressive to the customers
when they used the system for only a few minutes during the prototype demonstration,
but it did not support the work of the real users.
5.2.3. Transforming the Conceptual Model into a User Interface Design
The work of transforming the UEDs into user interface elements is a combination
of straightforward mapping of the focus areas to window elements and sparks of
innovation. In most cases the focus areas translate into a child window or a dialogue
box with the services as menu items and with the attributes of the objects as GUI
widgets (e.g., fields, check boxes, radio buttons, etc.).
It is often possible to use the metaphor or the content of the main window as a
starting point for translating a group of focus areas representing a complete task
into interface components. The relevant objects in a focus area are either visible
in the main window or there should be a service to make them visible. The service
of selecting an object is often implemented as a mouse operation or as the result
of searching or filtering.
The services connected to the selected object should now be available as menu items
or through some form of direct manipulation such as a double-click or as an item
on a tool bar. A typical result would be a change of focus to a new area in the
UED and the opening of a dialogue box or child window in the user interface. The
window would display the objects of the new focus area such as lists, fields, or
other means of data entry. The services might be accessible through menu items,
buttons, or keyboard entry.
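The straightforward part of this mapping can even be stated as a rule table: services become menu items, and object attributes suggest widget types. The sketch below is an illustrative rule of thumb under that assumption; the focus area, its attribute types, and the widget rules are all hypothetical, not the method's own mapping:

```python
# Hedged sketch: mechanically mapping a focus area to user interface
# elements. Services become menu items; attribute types suggest GUI
# widgets. The focus area and the rules are illustrative assumptions.

WIDGET_RULES = {
    "text": "text field",
    "boolean": "check box",
    "choice": "radio buttons",
    "number": "spin box",
}

def to_dialogue(focus_area):
    """Translate one focus area into a dialogue-box description."""
    dialogue = {
        "title": focus_area["name"],
        "menu": list(focus_area["services"]),
        "widgets": [],
    }
    for obj, attrs in focus_area["objects"].items():
        for attr, kind in attrs.items():
            widget = WIDGET_RULES.get(kind, "text field")
            dialogue["widgets"].append((f"{obj}.{attr}", widget))
    return dialogue

area = {
    "name": "Edit test program",
    "services": ["save", "check syntax"],
    "objects": {"test program": {"name": "text", "active": "boolean"}},
}
print(to_dialogue(area))
```

The interesting design work, of course, lies in knowing when to override such rules, which is exactly the "sparks of innovation" the next paragraph describes.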
Sparks of innovation are needed to know when to depart from this general approach
and to search for similarities between areas and services that can be combined into
more complex windows or dialogue boxes. A typical example of merging two focus areas
into the same window or dialogue box is the creation of an object and the editing
of its contents. Initially two separate areas may result from the users having
different foci during the conceptual design. However, when areas contain basically
the same objects and services, they can often be merged. An example of the conversion
of a focus area into a dialogue box is shown in Figure 7.5.
Figure 7.6 An early version of a paper prototype. Note the mouse/pointer that the
users used to manipulate the interface.
The main window was split into a new main window containing the test case tree,
a separate child window for information, and several additional child windows. The
child windows contained the services and objects to solve specific tasks, such as
writing a test program or controlling the execution of a test. The split was
generated by the users' need to have information relating to several focus areas
available at the same time. A later version of the user interface with these windows
can be seen in Figure 7.7.
We also developed a prototype of the user documentation for the system. In the
initial prototype, it consisted only of a table of contents with an alphabetic index
on the back. When the users needed help they turned to the manual and selected what
they thought to be a suitable entry. One of the designers then acted as a "talking
manual," giving the user the kind of advice they would find in the finished
documentation. We tried to avoid making the talking manual overly interactive, thus
avoiding excessive user support. We were interested in what specific enabling
information the user needed to solve the tasks.
interview. During these interviews, the users were encouraged to talk freely both
about the paper prototype and the tasks that were not yet supported by the tool.
7. CONCLUSIONS
The way of working that we have described does not eliminate the "magic" or
creativity needed to bridge the design gap, but usability engineering enables us
to structure the work and make better use of the skills in the design team. We believe
that one part of the magic is the learning process that the individual designer
and the usability team take part in during analysis and design. The joint discovery
of the user information and the subsequent discussions within the team enable them
to manage more complex information and to develop solutions that encompass more
aspects of the work tasks.
One of the initial restrictions of the study was that the scope was limited to
"improving the user interface" of the tool. There was very little understanding
of the need for an open-ended usability study. This turned out to be a smaller
problem than one might expect. During the early phases of the usability work (user
and task analysis), this constraint did not become an issue. During the conceptual
design and prototyping we had gained enough credibility to be able to suggest
changes that would affect the underpinnings of the system. Having one of the system
architects in the usability group allowed us to suggest solutions that were
inexpensive and easy to implement, but which improved the usability of the system
considerably. Still, the unwillingness of project managers to allow us to design
from scratch affected what services we chose to support and how we wanted to
implement them in the prototype.
We believe that the need for a general usability questionnaire will diminish or
change over time. As the Delta Method is used as an integrated part of system
development, we will be able to build a repository of user information that can
be reused in later studies. This will allow us to focus on other issues such as
internationalization and localization of the user interface.
Producing effective activity graphs and scenarios has proved to be very important
to the success of the usability work. They are very good ways to capture and
visualize the tasks that the system is to support and an excellent platform for
modeling a suitable user interface.
The central usability activities presented in this chapter are the two workshops
to produce the conceptual model and the user interface design. Concentrating the
work in a limited time and space (a few days in a design room) provides a number
of benefits:
The design room acts as a repository for all relevant information and
reduces the disturbances from the outside world.
There is a foreseeable end to the workshop that enables intense work,
followed by a period of contemplation.
The mix of skills allows for different views on a problem or design idea
and it gives the team a broad knowledge base. Including people from later
stages in the project (e.g., testers) also introduces a measure of concurrent
engineering into the work.
An outside moderator can concentrate on keeping the design process going,
and can reduce low-level discussions.
The case study has provided us with valuable experience that has led to improvements
of the Delta Method. Some of these have already been applied in subsequent projects,
improving both the process and the results. One example is the work process of the
workshops, which has improved considerably. Discussing possible metaphors during
time-outs was introduced
as a way to cool down heated design discussions, but it also proved to be an effective
way to interleave high- and low-level design activities. During the walk-through
of the background information, we now also identify one or two core activities for
each activity graph. These activities are the first ones to be transformed into
focus areas. This helps us address the most important, and often most difficult,
parts first and forces the remainder of the design to conform. In later projects
we have also added time (20 to 30 minutes) for the team members to work individually
on transforming the activities into focus areas. The parallel designs are then
presented on a white-board and merged into one solution, retaining the best ideas
from all designs.
The most tangible result from the conceptual design phase is a structure of system
services described with UEDs. Since it is based on the activity graphs, it generally
reflects the order in which work tasks are carried out. However, the focus areas
also include the user objects. This creates a more object-oriented user interface
with greater freedom of choice of the order in which to perform tasks. So far the
user interfaces have been object-oriented rather than structured, but a heavier
focus on the system services would allow for a more structured interface.
The second workshop, turning the model into a user interface paper prototype, has
become much more structured as a result of the case study, and we have moved our
posting of intermediate results from the hallway into a design room. The level of
detail in the paper prototype has varied considerably across projects and also
between different parts of the same prototype. We are still working to find the
proper balance.
One of the most important conclusions from the study is that a method such as Delta
does not stifle the creativity that is needed to design good user interfaces.
Instead it helps to structure the work so that the combined force and creativity
of a usability team can be used efficiently during the design of the user interface.
8. ACKNOWLEDGMENTS
I would like to acknowledge the members of the unofficial Delta Group at Ericsson
Radio Systems in Linköping, with special thanks to Åsa Bäckström, Åsa Dahl, and
Cecilia Bretzner. Thanks also to Jonas Löwgren of Linköping University, Susan Dray
of Dray & Associates, and the members of ZeLab, the Systems Engineering Lab at
Ericsson, who have made valuable comments on this chapter.
9. REFERENCES
Ericsson Infocom, The Delta Method Handbook, Internal Ericsson Document, 1994.
Goldkuhl, G., Verksamhetsutveckla datasystem, Intention AB, Linköping, Sweden, 1993.
Gould, J., Designing usable systems, in Handbook of Human-Computer
Interaction, Helander, M., Ed., Elsevier, Amsterdam, 1988, 757-789.
Holtzblatt, K. and Beyer, H., Making customer-centered design work for teams,
Communications of the ACM, 36(10):93-103, 1993.
Karat, J. and Bennett, J.L., Using scenarios in design meetings: a case study
example, in Taking Software Design Seriously: Practical Techniques for
Human-Computer Interface Design, Karat, J., Ed., Harcourt Brace Jovanovich, San
Diego, 1991, 63-94.
Löwgren, J., Human-Computer Interaction: What Every System Developer Should Know,
Studentlitteratur, Lund, Sweden, 1993.
Rantzer, M., The Delta Method: A Way to Introduce Usability, in Field Methods
Casebook for Software Design, Wixon, D. and Ramey, J., Eds., Wiley Computer
Publishing, New York, 1996, 91-112.
Chapter 8
Transforming User-Centered Analysis into User Interface: The
Redesign of Complex Legacy Systems
Sabine Rohlfs
IF Interface Consulting Ltd., Ottawa, Canada
TABLE OF CONTENTS
1. Introduction
1.1. Redesigning Complex Legacy Systems: Characteristics of Projects
1.2. Overview
2. Usability Project Planning: Planning the Construction of the Bridge
2.1. Structure for User Involvement
2.2. Required Usability Engineering Skill Set
2.3. Usability Engineering Tool Kit
2.4. Usability Engineering Room
2.5. Usability Engineering Project Plan
3. Usability Analysis: Laying the Foundation of the Bridge
3.1. Business Objectives for the Application
3.2. Definition and Description of Current and New User Classes
3.3. Current Task Definitions
3.4. Malfunction Analysis of Current System
3.5. System Requirements, Functional Requirements, and Functional Specifications
3.6. Training and Documentation Strategy
3.7. Hardware/Software Environment
4. Usability Design: Building the Bridge
4.1. New Task Definitions
4.2. Usability Performance Objectives
4.3. Building a Model for the User
4.3.1. Design and Use of Metaphors
4.3.2. Task vs. Object Orientation in the User Interface
4.3.3. Designing the User Interface Horizontally First: Opening Screen and Navigation Principles
4.3.4. Vertical User Interface Design Second: Design for Each Task
4.4. Detailed Design
4.5. Tips for Managing Design Iterations
5. Usability Deliverables: Documenting the User Interface Design
5.1. User Interface Architecture
5.2. Application Style Guide
5.3. User Interface Specification
6. Conclusions
7. Acknowledgments
8. References
1. INTRODUCTION
This chapter describes a pragmatic approach to building a bridge from user/system
requirements to a reasonably polished user interface design. Managing the building
process efficiently and effectively is of critical importance in bridge building
and, therefore, usability project management issues are addressed throughout. The
goal is to provide heuristics for scoping and estimating usability project tasks,
the content of usability deliverables within a widely used framework for
information systems architecture, and suggestions for managing user involvement.
The emphasis here is on practical suggestions and solutions, because this reflects
the author's experience developing and implementing such user interface designs.
The bridge from user/system requirements to a reasonably polished user interface
design is built by carrying out a sequence of well-defined project tasks. For each
task, building the bridge requires a carefully thought-out project methodology,
approach, and plan, as well as appropriate tools, methods, and people. It also
requires metrics to measure the quality of the results at each step.
The scalability of methods and their appropriate selection is a significant
challenge, yet it is crucial to success. The author's experience has been that
there is no one-size-fits-all set of user interface design methods, guidelines,
and techniques applicable to all projects. Rather, each project requires
a custom-tailored set of methods, guidelines, and techniques. When deciding which
ones to apply and how to custom-tailor them, several factors shape the decision:
Product for sale vs. in-house application.
Product/application upgrade vs. new product/application (i.e., no manual or computerized system
to accomplish the user tasks exists; this has never been done before).
Degree of novelty of the technology (e.g., Graphical User Interfaces (GUIs) are tried and proven,
whereas an interface combining a small display and voice recognition technology is relatively new,
with many unanswered questions and few guidelines for design).
The availability of real vs. surrogate users.
The nature of the user interface assignment: firefighting ("we have major usability problems, the
project is already over budget, and the deadline for roll-out is x weeks/months from today") vs. fire
prevention ("we want to integrate usability into the project right from the start").
Size of the project in terms of function points, budget, staff, and number of stakeholders.
The budget, staff/consultants, and time available for the user interface work.
This chapter presents a pragmatic approach to redesigning the user interface for
so-called legacy systems (by contrast, design for novelty products is addressed
in the chapters of this volume by Smith, Chapter 11, and by Scholtz and Salvador,
Chapter 9).
1.2. OVERVIEW
Section 2 describes the necessary usability planning and infrastructure in terms
of user involvement, skill sets, tool kits, specially equipped rooms, and the
project plan. Section 3 describes the necessary work/task analysis process and
results from which to begin user interface design, i.e., the foundation for the
bridge. In a usability engineering project plan, this is usually called the
usability analysis phase. Section 4 describes the actual design process, i.e., the
building of the bridge. In a usability engineering project plan, this is usually
called the usability design phase. It includes iterative design, prototyping, and
testing. Formal usability evaluations with the coded application are conducted in
the usability evaluation phase, which is not addressed here. Table 8.1 provides
an overview of usability project phases and tasks and their corresponding sections.
Table 8.1 Overview of Usability Project Phases and Tasks and Their Corresponding
Sections

Usability project planning: planning the construction of the bridge
    Structure for user involvement (Section 2.1)
    Required usability engineering skill set (Section 2.2)
    Usability engineering tool kit (Section 2.3)
    Usability engineering room (Section 2.4)
    Usability engineering project plan (Section 2.5)
Usability analysis: laying the foundation of the bridge
    Business objectives for the application (Section 3.1)
    Definition and description of current and new user classes (Section 3.2)
    Current task definitions (Section 3.3)
    Malfunction analysis of current system (Section 3.4)
    System requirements, functional requirements, and functional specifications (Section 3.5)
    Training and documentation strategy (Section 3.6)
    Hardware/software environment (Section 3.7)
Usability design: building the bridge
    New task definitions (Section 4.1)
    Usability performance objectives (Section 4.2)
    Design and use of metaphors (Section 4.3.1)
    Task vs. object orientation in the user interface (Section 4.3.2)
    Horizontal user interface design: opening screen and navigation principles (Section 4.3.3)
    Vertical user interface design: design for each task (Section 4.3.4)
    Detail design and usability acceptance (Section 4.4)
In each section, the heuristics for the choice of methods and approaches will be
provided for fire prevention and firefighting assignments. In fire prevention
assignments, usability is integrated into the overall project plan right from the
start, allowing for a reasonably polished user interface design in an orderly
fashion. By contrast, the usability work done in firefighting assignments is aimed
at quickly addressing the most glaring usability problems within the confines of
project schedule, resources, and budget. Usually there is not much opportunity for
exploring user interface design alternatives. It is by no means a comprehensive
usability effort, and many shortcuts are taken, often discounting even "discount
usability" (Nielsen, 1989). Extra effort put into the same project task in fire
prevention means higher-quality information for decision making and more
exploration of design alternatives. In fire prevention assignments, there is a
higher level of quality and a higher level of confidence in the resulting user
interface design. Some remarks on critical success factors for building a solid
bridge (Section 6) conclude the discussion.
Throughout the chapter, a fictitious project for redesigning a complex legacy
system will be used to illustrate some of the issues, though these illustrations
are not intended as a comprehensive case study. The project takes place at the XYZ
Finance & Insurance Company, a fictitious company that offers a wide range of
financial and insurance services to its customers through a network of branch
offices and sales representatives, who visit customers in their homes.
able to provide (when the users are co-designing instead of doing their regular
job, someone has to fill in for them). The intensity of work on a User Committee
tends to cycle from several days or weeks of continuous effort to periods of little
or no activity. Typically, staff members serving on a User Committee are seconded
to the project for 2 to 5 days every week, depending on the amount and the nature
of the work to be done.
In addition, real or surrogate users are involved in iterative prototyping and
usability testing. These users must represent the larger user community in terms
of regional differences, specialist knowledge and requirements, and new employees
or transfer employees.
A Usability Steering Committee consists of the usability engineering project
manager (usually without decision-making and voting power), three to five senior
managers from the business area affected by the project, as well as senior managers
from the training, policies and procedures, and information technology departments.
The Steering Committee meets as required, approximately every 2 weeks. The mandate
of the Steering Committee is to:
Review and approve all usability deliverables.
Review and approve designs before they become part of a deliverable. In
particular, the Steering Committee chooses one of the proposed user interface
design alternatives in situations where a business issue masquerades as a
user interface issue (see Section 4.3.2).
Resolve issues requiring senior management approval. Typical examples are
policy issues, modifications to business processes, and changes in
organizational responsibility and accountability.
Make decisions on project direction. These are directly associated with
the project and business objectives. Typical examples are decisions on
usability and usefulness of system functions, determining appropriate levels
of involvement of participants in decision making beyond the Usability
Steering Committee, and assessing the cost and benefit of alternative
designs.
It is possible that the views and recommendations of the User Committee do not agree
with the views and recommendations of the Usability Steering Committee. In that
case, the views and recommendations of the Usability Steering Committee prevail.
The usability project manager may sometimes find him- or herself in the role of
a mediator between the two committees, but must respect the authority of the
Steering Committee at all times. This often requires considerable diplomatic skill.
The usability project manager must be a member of the overall Project Management
Committee, which consists of the managers of the teams involved, e.g., the software
development team, the database team, the network team, etc. Both the Usability
Steering Committee and the Project Management Committee report to the Project
Steering Committee, which is comprised of senior executives from the business area
affected by or involved in the project.
The User Committee and the Usability Steering Committee are required for both fire
prevention and firefighting assignments. However, in a firefighting assignment,
it may not be possible to establish the committees formally. In this case, the
usability project manager must secure the full-time or at least part-time
involvement of two to three subject matter experts (i.e., users with a wide range
of task experience and some supervisory experience) to function as a User Committee.
The usability project manager must also identify key senior managers who, as a group,
function as a Usability Steering Committee. Without this kind of minimal user and
management support, a firefighting usability assignment is unlikely to succeed.
prototyping method up to and including the first pass of detailed design for the
development of most applications or products. Only after this stage is it worthwhile
to build computer-based prototypes for polishing the user interface design. A wise
choice of the computer-based prototyping tool will allow the polished prototype
to be carried over into development.
familiarity with the task at hand (familiar vs. undergoing major changes in terms
of business process re-engineering vs. completely new tasks). For example, at the
XYZ Finance & Insurance Company a new user class consisted of users who would be
conducting a limited range of transactions with customers over the phone. These
users were familiar with conducting the transactions face-to-face with a customer,
but they were not familiar with conducting them over the phone. The definition of
new user classes also includes patterns of staff turnover. This will yield the ratio
of experienced to novice users throughout the lifetime of the application.
Usability effort and money can then be allocated to learnability vs. efficient use.
The number of users per class with a 1-3-5 year projection and the characteristics
of each class are documented in a report and posted on flip charts in the usability
engineering room. For fire prevention assignments, a more thorough analysis is
conducted to include actual numbers and input from the User Departments, Human
Resources, and Training Departments. The user class descriptions are more
comprehensive and are matched to employee classifications. For firefighting
assignments, user classes are identified in a JAD session with the User and Steering
Committees, including estimates for the total number of users in different classes.
User class descriptions are in point form, approximately half a page per user class.
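The turnover-based ratio of novice to experienced users mentioned above lends itself to a quick back-of-the-envelope calculation. The sketch below is illustrative only: the user counts, turnover rate, and three-month time-to-proficiency are invented assumptions, not figures from the chapter.

```python
# Rough sketch: estimating what fraction of users are novices at any
# given time, from annual staff turnover. All numbers are hypothetical.

def novice_share(total_users: int, annual_turnover: float,
                 months_to_proficiency: int = 3) -> float:
    """Fraction of users who are novices at any given time, assuming
    replacements arrive evenly throughout the year."""
    new_hires_per_year = total_users * annual_turnover
    novices_at_any_time = new_hires_per_year * months_to_proficiency / 12
    return novices_at_any_time / total_users

# 200 branch users, 20% annual turnover, 3 months to proficiency:
print(f"{novice_share(200, 0.20):.0%} of users are novices at any time")
```

A low novice share argues for spending the usability budget on efficiency of use; a high share argues for learnability.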
provides an important part of the foundation for the design process: it allows the
user interface designers to:
Understand the users' current work environment and the idiosyncrasies
of the business processes and policies that are often overlooked in
high-level work redesign.
Maintain those aspects of the current tasks that work well and are,
therefore, not changed in the revised application (the "if it ain't broke,
don't fix it" principle).
Understand the users' (mental) model of the current application, i.e.,
the available objects and actions, the metaphor(s) through which they are
expressed, and the sequence in which to use them to accomplish a task (see
Section 4.3). This model should be preserved in the revised application to
the degree possible in order to minimize learning when migrating to the
re-designed version.
Appreciate how big the leap will be for the users from the old to the new
application. This information is very useful for making decisions on the
training and documentation strategy (see Section 3.6).
In fire prevention assignments, the current task definitions are documented in a
report. In firefighting assignments, the current task definitions are not developed;
rather, the User Committee provides this information (with prompting) during the
new task definition effort (see Section 4.1).
used codes from memory and used them efficiently for quick customer service. Novice
users required the use of a reference booklet for several weeks, until they
memorized the frequently used codes. Users felt that having to use a reference
booklet in the presence of a customer did not project a professional image.
Malfunction analysis is essential for application redesign so that usability
problems will not be repeated. When carrying out a malfunction analysis, it is also
very useful to identify those parts or features of the application that users deem
particularly useful so that they can be preserved where appropriate.
Malfunctions are rated according to their severity and classified as critical,
serious, or minor. The severity of a usability problem is a combination of three
factors:
Frequency: how common is the problem?
Impact: how easy or difficult will it be for the users to overcome the
problem?
Persistence: is it a one-time problem that users can overcome once they
know about it or will users repeatedly be bothered by the problem?
These three components are combined into a single severity rating as an overall
assessment of each malfunction. The ratings (described below) are provided to
assist in prioritizing improvements and decision making:
Critical problems make successful task completion impossible,
unacceptably long, or are fraught with significant (costly or embarrassing)
errors or they have a detrimental effect on how an organization conducts its
business.
Serious problems have a major impact on the users' performance and/or
the operation of the system overall, but do not prevent users from eventually
performing the task.
Minor problems do not greatly affect the users' job performance, but when
taken as a group, negatively affect the users' perceptions.
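One way to make the combination of the three factors explicit is to score each on a small scale and map the total to a rating. This is a hedged sketch: the chapter prescribes only the critical/serious/minor outcome, while the 1-to-3 scales and the thresholds below are invented for illustration.

```python
# Sketch of a severity rating: each factor (frequency, impact,
# persistence) is scored 1 (low) to 3 (high); the combined score is
# mapped to the chapter's critical/serious/minor classification.
# Scales and thresholds are hypothetical, not from the chapter.

def severity(frequency: int, impact: int, persistence: int) -> str:
    for factor in (frequency, impact, persistence):
        if not 1 <= factor <= 3:
            raise ValueError("each factor must be scored 1, 2, or 3")
    score = frequency + impact + persistence  # ranges from 3 to 9
    if score >= 8:
        return "critical"
    if score >= 6:
        return "serious"
    return "minor"

print(severity(3, 3, 2))  # frequent, hard to overcome, recurring
print(severity(1, 1, 1))  # rare, easy to overcome, one-time
```

In practice the scoring and thresholds would be calibrated with the User Committee rather than fixed in code.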
In many cases, it is difficult to separate the causes of malfunctions and to rate
them only on an individual basis, as submalfunctions often also contribute to, or
are part of the cause of, these malfunctions. Therefore, ratings are given to
malfunctions which may also include less severe submalfunctions, though these
submalfunctions are not rated individually. For example, malfunctions with
navigation may be rated as critical. However, these also include related
submalfunctions, such as problems with the wording of menu choices which, taken
on their own, may have been rated as only minor. Critical malfunctions must be
addressed in the redesign; serious malfunctions should be addressed in the
redesign.
may include functionality that is out of project scope (the scope is defined through
the system/application model). Inadvertently exceeding project scope by designing
the user interface without an understanding of the system/application model may
create user expectations that cannot be satisfied, leaving the users disappointed.
It has been the author's experience that managing user expectations is one of the
key concerns of senior management and a main obstacle to securing user involvement.
Demonstrating an understanding of and respect for the project scope by
understanding the system/application model is an effective way of addressing these
concerns. If the usability engineers base the user interface design on the
system/application model that is used by all other teams on the project, the
usability engineers will significantly enhance their credibility with the software
developers.
There are also projects where development is fast-tracked, going from high-level
system requirements directly to abbreviated requirements analysis/software design
and coding, skipping the detailed definition of functional requirements and
business processes. In such situations, the usability engineer has to take on the
role of the systems analyst in order to define the functional requirements and
business processes to a level of detail from which new user work flow and task design
can begin.
Except for the definition of usability performance objectives, there are several
tightly coupled design/prototype/evaluate iterations within each task, and there
are also some iterations between these tasks. For example, discoveries in detailed
design may lead to modifications of the user's model or the new user task
definitions, causing some rework in these earlier project tasks and possibly
causing a revision of the usability performance objectives for these tasks. A
frequent source of such partial rework is the idiosyncrasies of exception and
error handling conditions in business processes. Another frequent source for rework
is a change in project scope. For example, development of a part of the functionality
may be moved into later releases of an application in order to meet the original
roll-out deadline. If the usability design process addresses only the reduced
functionality, then moving to full functionality may involve extensive user
interface design rework to accommodate the full functionality in the user interface.
This, in turn, increases the software development cost for later releases.
In most legacy system redesigns, users will quickly migrate from being novice users
to becoming efficient users for many years to come (see Section 3.2). Therefore,
it is recommended that designs first reflect efficiency of use and then address
learning issues.
will be called the system model here. The system model represents the designer's
understanding of the application and the users' work.
When working with an interactive computer system, users form a (mental) model of
the objects and actions available to them including how the system works, i.e.,
which objects and actions to use, and in what sequence, in order to accomplish a
task (Norman, 1986; Preece et al., 1994). This (mental) model will be referred to
here as the user's model.
When users begin to work with a system for the first time, they try to match the
objects and actions they know from their work/tasks to the objects and actions they
see on the screen, i.e., they try to match their (mental) model with the system's
model. As part of their model, users determine how to navigate within the system:
where to find objects and actions and how to activate them. If there is a good match
between the system's and the user's models, then the system is intuitive and users
will be able to complete a task on the first attempt with little help, and they
will quickly progress to mastering the use of the system. If, on the other hand,
the system's and user's models do not match well, users will have difficulty
completing a task on the first attempt and will require frequent help.
The user's model of the current application should be preserved in the system model
of the revised application to the greatest degree possible in order to minimize
learning when migrating to the redesigned version. The user's model of the current
application is discovered as part of the current task definitions (see Section 3.3),
and the malfunction analysis of the current application identifies problems with
the user's current model. Measuring against usability performance objectives
provides reliable data about the quality of the new system's model.
This section describes the four project tasks involved in building a system model
appropriate for the users:
Design and use of metaphors (Section 4.3.1).
Design decisions on task vs. object orientation in the system's model
(Section 4.3.2).
Horizontal user interface design first: design of the opening screen,
navigation principles, and principles for displaying objects (Section
4.3.3).
Vertical user interface design second: design for each task (Section
4.3.4).
Some tips for managing design iterations are provided in Section 4.5, and some
suggestions for documenting the design are contained in Section 5.
Within each of these four project tasks, there are several tightly coupled
design/prototype/evaluate iterations, and there are also some iterations between
them. For example, discoveries in individual task design may lead to modifications
of the opening screen or the navigation techniques, causing some rework in these
earlier project tasks. A frequent source of such partial rework is the omission
of objects and functions that allow users to manage all their tasks in
an efficient manner, e.g., to look up the status of an application, or to mark the
progress of certain aspects of their work.
4.3.1. Design and Use of Metaphors
Frequently, an object in a system's model or even the entire system's model is
described by a metaphor (i.e., an analogy to the real world) such as a desktop,
a receipt, a list, or a shop floor (Carroll et al., 1988; Preece et al., 1994).
The metaphor may be represented through words on the screen, e.g., appropriate
screen/field/widget titles, or the metaphor may be represented pictorially, e.g.,
through icons or graphics. For example, an alphabetical list of customer names can
be represented through words by a drop-down list box or pictorially by the graphical
representation of a Rolodex or a box of index cards. In each case, the underlying
object is an alphabetical list of customer names, but the metaphor chosen to
represent the object is different (note that a list is also a metaphor...).
When designing metaphors, it is important to separate the metaphor from its verbal
or pictorial representation on the screen. The method used to represent the metaphor
on the screen (i.e., through words or pictures or a combination of both) is
influenced by the underlying technology and the time available for development.
The author's experience has been that, for the redesign of legacy systems, the
detailed graphic representation of metaphors (folders with tabs, Rolodexes, index
cards, etc.) is less important than identifying the appropriate objects themselves
and the grouping of objects according to steps in a task. In usability evaluations,
it was found that users rarely noticed the detailed graphical representations, i.e.,
the graphical representations did not help the users find the correct object for
a step in a task. When the objects were poorly structured, the detailed graphical
representations actually interfered with task performance, and when they were
brought to the users' attention, the representations did not improve users'
understanding. Users thought of groups and sequences of screens as representing
a task and its associated objects and actions.
4.3.2. Task vs. Object Orientation in the User Interface
At the beginning of dialogue design, it is necessary to determine whether the system
should provide a task-oriented dialogue (i.e., the user is guided by the system
through the tasks to be accomplished) or an object-oriented dialogue (i.e., the
user knows which objects to work with and selects them in the appropriate order).
the usability engineering room, and the new task definitions are annotated to
indicate where a sequence of steps must be adhered to. In fire prevention
assignments, sufficient resources should be allocated to this project task. In
firefighting assignments, only sample definitions for key tasks are annotated.
4.3.3. Designing the User Interface Horizontally First: Opening Screen and
Navigation Principles
The foundation for designing the opening screen and navigation principles is a
database, mapping relationships among user objects, functions, user classes, new
task definitions, and frequency of task performance. The author's experience has
been that such a database is invaluable for making user interface design decisions
about foregrounding/backgrounding objects and functions at the interaction level,
as well as detail design level of the user interface. Generally, information or
cues to frequently performed tasks will be in the foreground, at higher levels of
a menu hierarchy, or on special "hot keys"/"fast paths". On the other hand,
information or cues to less frequently performed tasks will be placed in lower
levels of a menu hierarchy or be presented in such a way that they are always
accompanied by an explanation of when to invoke this task. Note that the
foregrounding/backgrounding of tasks does not necessarily imply a task-oriented
user interface.
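The kind of mapping database described above can be sketched as a simple set of records. In the sketch below, the task names, user classes, objects, and the frequency threshold are invented for illustration; they are not taken from the chapter's XYZ example.

```python
# Hypothetical sketch of the design database: each record links a task
# to its user classes, user objects, and frequency of performance.

tasks = [
    {"task": "identify customer", "user_classes": {"teller", "advisor"},
     "objects": {"customer"}, "times_per_day": 40},
    {"task": "open savings account", "user_classes": {"teller"},
     "objects": {"customer", "account"}, "times_per_day": 6},
    {"task": "review insurance portfolio", "user_classes": {"advisor"},
     "objects": {"customer", "policy"}, "times_per_day": 1},
]

def foreground_tasks(user_class: str, threshold: int = 5) -> list:
    """Tasks performed frequently enough to be foregrounded (top menu
    levels, hot keys); the rest are backgrounded with explanations."""
    return [t["task"] for t in tasks
            if user_class in t["user_classes"]
            and t["times_per_day"] >= threshold]

print(foreground_tasks("teller"))
print(foreground_tasks("advisor"))
```

Queries like this make the foreground/background decisions traceable back to the task and user class data rather than to designers' intuition alone.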
The starting point for navigation design is the database of user objects, functions,
user classes, new task definitions, and frequency of task performance. It is
documented in the User Interface Architecture (see Section 5). Clusters in the user
class and new task definition matrix suggest initial groupings for the navigation
design, and the new task definitions themselves provide the initial flow. The first
round of navigation design is aimed at ensuring that all tasks and user classes
are covered by the high-level navigation design and that users are able to find
and start key tasks.
The following example from the XYZ Finance & Insurance Company is intended to
illustrate some of the points raised above. In the first round of navigation design,
the clusters in the user class and new task definition/frequency matrix indicated
that some user classes performed a fairly narrow range of advisory-type tasks
several times during the day. Other user classes performed a wide range of tasks,
with a core set of transaction-oriented tasks being performed many times during
the day and a much wider range of advisory-type tasks being performed infrequently,
i.e., once or twice a day.
A comparison of steps in all tasks revealed that both advisory- and transaction-type
tasks had a common first step: the identification of a customer. As it was one of
the clients business objectives to provide a more personal service to customers,
it was decided at the very outset that the customer should be identified on the
The rough designs are documented in the User Interface Architecture (see Section
5) and posted in the usability engineering room. In a fire prevention assignment,
rough designs are prepared for each task. In a firefighting assignment, rough
designs are prepared only for key tasks.
architecture (Sowa and Zachman, 1992), the user interface architecture (called a
Human Interface Architecture in Sowa and Zachman, 1992) expresses the model of the
information system from the user's view. By comparison, a data model expresses
the data view, and a distributed systems architecture expresses the network view
of the application. The user interface architecture document should contain:
Key information from the usability analysis phase and its implications
for the user interface design:
Business objectives for the application.
A brief definition of new user classes and the implications from the
comparison to existing user classes.
Salient points from the current task definitions.
A summary of the malfunction analysis of the current system.
A summary of training and documentation strategies and their implications
for user interface design.
A brief description of the hardware/software environment and its
implications for user interface design.
New task definitions.
Usability performance objectives.
A system model, including metaphors.
The degree of task vs. object orientation.
Implications from the analysis of the database of user objects, functions,
user classes, new task definitions, and frequency of task performance.
The opening screen (rough sketch) and navigation principles.
The rough designs for each task.
Each section of the user interface architecture document should contain the design
rationale: a brief description of which alternatives were examined and accepted
or rejected, and why (e.g., based on test results from sessions with users or
because of technology constraints or opportunities). Just like a
distributed systems architecture, a user interface architecture is written over
a period of time as the tasks during the user interface design phase are being
completed.
The audiences of the user interface architecture include the software engineers
who will be building the application and those who will work on fixing usability
problems or extending the functionality of the redesigned application after initial
roll-out. Usually, these are software engineers who were not part of the initial
redesign effort and therefore are not familiar with the history and rationale of
the user interface design.
In fire prevention assignments, the user interface architecture is part of the set
of architecture documents prepared during the application design phase. In
firefighting assignments, no such document usually exists, and there is typically
no time to prepare one. However, where time and budget permit, the findings from
a firefighting assignment should be documented in point form in a reduced user
interface architecture document or they should be included in the application style
guide.
In terms of the framework for information systems architecture (Sowa and Zachman,
1992), the user interface specification would be part of the technology model.
The user interface specification expresses the technology model from the user's
view (called the Human/Technology Interface in
Sowa and Zachman, 1992). A user interface specification contains all the screens
for all the tasks except for error conditions and help, which are standardized in
the application style guide.
The user interface specification is geared toward the software engineers who will
be building the application and those who will work on fixing usability problems
or extending the functionality of the redesigned application after initial roll-out.
The user interface specification is also important for the quality assurance
specialists who will be checking the software for conformance to the user interface
specifications. Software engineers working in testing often find the user interface
specification useful for developing test cases.
In fire prevention assignments, the user interface specification is prepared during
the detailed user interface design phase. In firefighting assignments, there is
usually no time to prepare the user interface specification or it must be reduced
to a sample specification. It has been the author's experience that, when usability
resources are scarce in a firefighting assignment, time is best spent working with
the software developers, reviewing the screens they have developed and coaching
and advising them on good user interface design, rather than writing user interface
specifications.
6. CONCLUSIONS
It is appropriate to recognize that, regardless of the methods and techniques
employed, the success of a usability engineering program depends to a large extent
on the willingness of developers and senior management to accept and implement
usability recommendations. Usability engineering is more often than not about
making or recommending trade-off decisions between development, usability, and
project issues. These decisions are business decisions (though they at first appear
to be usability decisions). Usability engineers are not usually the final decision
makers for high-profile issues, but can provide the information necessary for
making informed choices. The recommended approach is to build a business case which
supports the argument in favor of usability. A solid business case will often
demonstrate the value of usability efforts to project managers and senior
management, opening the door for the effective employment of usability engineering
methods and techniques.
7. ACKNOWLEDGMENTS
The author would like to thank Diane McKerlie of DMA Consulting Inc. for her
excellent contribution to this chapter both as a reviewer and fellow firefighter.
Many of the ideas and methods presented in this chapter have been fine tuned in
consulting assignments we worked on together. Her support, her sense of humor and
the chocolate cookies she brings when the going gets tough are much appreciated.
The author would like to thank the reviewers Tom Graefe, Kevin Simpson, Ron Zeno,
and Tom Dayton for their detailed comments and thoughtful observations. They
contributed much to clarifying the descriptions of methods and heuristics. The
author would like to thank Marie-Louise Liebe-Harkort for her careful editing and
encouragement. Finally, the author would like to thank Larry Wood, Ron Zeno, and
the workshop participants for the opportunity to compare ideas and experiences and
learn from each other at the workshop.
8. REFERENCES
Carroll, J. M., Mack, R. L., and Kellogg, W. A., Interface metaphors and user
interface design, in Handbook of Human-Computer Interaction, Helander, M., Ed.,
North-Holland, Amsterdam, 1988.
Jacobson, I., Ericsson, M., and Jacobson, A., The Object Advantage, Addison-Wesley,
New York, 1994.
Molich, R. and Nielsen, J., Improving a human-computer dialogue, Communications
of the ACM, 33(3), 338-348, 1990.
Monk, A., Wright, P., Haber, J., and Davenport, L., Improving Your Human-Computer
Interface: A Practical Technique, Prentice Hall, New York, 1993.
Nielsen, J., Usability engineering at a discount, in Designing and Using
Human-Computer Interfaces and Knowledge Based Systems, Salvendy, G. and Smith, M.
J., Eds., Elsevier, Amsterdam, 1989.
Nielsen, J., Finding usability problems through heuristic evaluation, in Human
Factors in Computing Systems, CHI '92 Conference Proceedings, Bauersfeld, P.,
Bennett, J., and Lynch, G., Eds., ACM Press, New York, 1992.
Norman, D. A., Cognitive engineering, in User-Centered System Design, Norman, D.
A. and Draper, S., Eds., Lawrence Erlbaum Associates, Hillsdale, NJ, 1986.
Paulk, M. C., Curtis, B., Chrissis, M. B., and Weber, C. V., Capability maturity
model, version 1.1, IEEE Software, 10(4), 18-27, July 1993.
Preece, J., Rogers, Y., Sharp, H., Benyon, D., Holland, S., and Carey, T.,
Human-Computer Interaction, Addison-Wesley, Reading, MA, 1994.
Sowa, J. F. and Zachman, J. A., Extending and formalizing the framework for
information systems architecture, IBM Systems Journal, 31(3), 1992.
Chapter 9
Systematic Creativity: A Bridge for the Gaps in the Software
Development Process
Jean Scholtz
National Institute of Standards and Technology, Gaithersburg, Maryland
email: [email protected]
Tony Salvador
Intel Corporation, Hillsboro, Oregon
email: [email protected]
TABLE OF CONTENTS
Abstract
1. The Problems
1.1. Producing Requirements for Novel Products
1.2. Minimizing Time to Market
1.3. Customers and Users
1.4. Communicating with Team Members
1.5. Integrating Requirements with Usability Testing
1.6. A Method that can be Started at any Stage of Product Development
1.7. Summary of our Needs
2. The Gaps
3. Systematic Creativity
3.1. The Framework
3.1.1. Product Goals and Objectives
3.1.2. User Goals, Objectives, and Tasks
3.1.3. Facilitators and Obstacles for Tasks and Goals
3.1.4. Actions and Objects Derived from Marketing Input and Human Factors Input
3.1.5. New User Tasks and Subtasks
4. The Interview Process
4.1. How the Interview is Conducted
4.2. Analyzing the Data
5. Using Systematic Creativity
5.1. The Basic Product
5.2. Product Definition
5.2.1. Product Definition for CNN@Work
5.3. Product Design
5.3.1. Product Design For CNN@Work
5.4. Product Implementation
5.4.1. Product Implementation For CNN@Work
ABSTRACT
This chapter describes a method that was developed and used by the authors while
they were co-managers of the Human Factors Services group within the Personal
Conferencing Division at Intel Corporation. The method described here was developed
to help bridge several gaps we saw occurring in the process of incorporating user
information into the definition, design, implementation, and evaluation of new
software products. The method was targeted for use in developing new products based
on new technology and therefore, not currently available. Our primary goal was to
ensure that early adopters of these new products could see immediate, as well as
potential, benefits of this new technology. Achieving this goal would facilitate
the acceptance of this technology in the marketplace. The largest gap we saw was
transforming user information into product design, both in terms of required
functionality and in providing the necessary support for this functionality via
the user interface. Additionally, we saw smaller gaps during the definition and
development process that also needed to be bridged. In essence, what we wanted to
create was a framework that could be used at every step during design, development,
and testing to make decisions based on the original user information. In this
chapter, we'll describe the framework we created and how we use
the methodology in theory, along with examples from one of the systems we worked
on. This product was a second version of an earlier product, but with some radically
different functionality added.
1. THE PROBLEMS
One goal at Intel is to make the personal computer the communication device of choice.
Intel's software development efforts focus on producing technology and products
to enhance the home and professional lives of the general public. As human factors
engineers, we were faced with the usual problems, as well as having to produce user
requirements for products that users did not currently have and, possibly, did not
even foresee.
We needed a way to quickly identify the most critical usability problems to fix,
thereby helping to achieve a shorter time to market. Solutions to these problems
needed to be produced within the context of the integrated set of requirements.
We needed a way to capture and structure the input from marketing, engineering,
and human factors to make it accessible and useful to the team on a daily basis.
2. THE GAPS
Early human factors work was relatively new to researchers and developers at Intel
at the time we developed our methodology. In assessing our needs, we identified
gaps between user input and decision making in the following stages of product
development:
1. Product definition
2. Product design
3. Product implementation
While the first gap was of major importance to us, we realized that smaller gaps
existed at other points in the current process. We had no way to make sure that
smaller decisions made (with or without our input) during development were not
negatively affecting decisions we had made during product definition and design.
We had no way to tie the usability decisions we made back to the original definition
and design criteria. We decided that a more comprehensive solution was needed. We
needed to develop a vehicle that could be used as the basis for communication and
decision making throughout the entire software lifecycle.
3. SYSTEMATIC CREATIVITY
The Systematic Creativity process was developed by Dr. Anthony Salvador and Dr.
Jean Scholtz (Salvador and Scholtz, 1996). The name, Systematic Creativity, stems
from the fact that design is creative. It should be creative. It must be creative
in order to achieve a competitive lead. However, that creativity needs to have a
systematic basis. The process used in product definition, design, and
implementation needs to be systematic to achieve a time to market advantage.
Additionally, all decisions made during the software process need to be made in
the context of the integrated requirements.
Customer goals and objectives are collected by marketing through customer visits,
surveys, focus groups, etc. Customer goals
usually are in terms of different types of networks, servers, platforms, and
operating systems that the product should run on. Other goals are related to
robustness, bandwidth, and compatibility with other applications currently in
place. Objectives are not usually obtained from customers.
In addition, marketing contributes goals they have for the product from market
research. Examples of these goals are that the product must include the
functionality of a previous version, must match the functionality of another
product currently on the market, or must be compatible with a specified list of
operating systems. Marketing also produces certain requirements of their own. A goal such
as being able to collect information through registration of the software is an
example of this type of input. An objective for this could be to collect information
as to which users would be interested in other products being developed.
3.1.2. User Goals, Objectives, and Tasks
User goals, objectives, and tasks are now acquired by the human factors team, using
an interview process (described in the next section). We are primarily interested
in the goals and objectives for use in product definition. We use this input, along
with the marketing input on product goals and objectives, as the basis for product
definition and design. We also collect the tasks that the users carry out to achieve
their goals. Depending on the scope of an envisioned product, we may do one set
of interviews or two. If we are envisioning a large product or one that supports
many different types of users, we may do an initial set of interviews to collect
goals, objectives, and prioritization of goals. Then, integrating our user input
with the marketing input, we can agree on the functionality essential for the product.
We would then do another set of interviews to collect task information before we
start the design phase.
Human factors engineers may also collect information about customer tasks. If
customers express goals of installing, monitoring, or regulating use of the
envisioned product, it may be necessary to provide another interface to access such
functionality. In this case, the human factors staff interviews system
administrators about tasks they currently do in other systems to support such goals.
3.1.3. Facilitators and Obstacles for Tasks and Goals
There are two more data classes not shown on the diagram in Figure 9.1 that are
also collected: facilitators and obstacles. Facilitators and obstacles are
associated with tasks, that is, what makes it easy or difficult for users in their
current work environment to achieve their goals and objectives. Figure 9.2 shows
a one-to-one relationship between tasks and facilitators and obstacles. As with
goals, objectives, and tasks, facilitators and obstacles may also have many-to-one
relationships. We also evaluate the new tasks in relationship to the
original user goals and objectives or to the marketing goals and objectives.
During the interview, the interview team needs to interact with each other. The person
conducting the interview will, at times, ask the person structuring the data if
there are points that need elaboration. If any are identified, the interviewer then
pursues this direction until the necessary information is obtained.
We do not have a list of questions that we ask the user. The team member doing the
interview must listen very carefully to what the user says and construct each
successive question based on previous input. However, we do learn from previous
interviews we conduct. Therefore, if a previous interview has uncovered some
surprising information, we will probe in that direction with future users to see
if that information is validated. If, during an interview, we uncover some
information that we haven't heard previously, we'll probe in depth in that
direction.
Facilitators and obstacles are often volunteered by the users. When they describe
how they do something, they often include phrases such as "we try to do this but
it's not always easy because...". If facilitators and obstacles are not
volunteered, we do ask "Is that easy to do?" or "Is that difficult to do?"
We use the same two people to conduct all the interviews if possible. This is just
to minimize the mind sharing needed between interviewers during analysis. If the time or
number of interviews prohibit this, we use several teams and switch around so that
all possible pairs are formed to do interviews. Frequent discussions are then
encouraged among the interviewers. Ongoing analysis is encouraged, as is frequent
viewing of the structured data, while teams are in the interview phase.
In the best situation the interview team is able to analyze one interview before
conducting the next. If a complete analysis is not possible, the team has a short
discussion after each interview to summarize any new information, to identify any
conflicts with previous information, and to note any information that verifies what
they have already collected. This produces several possible areas to probe in
succeeding interviews. However, this has to be evaluated with respect to the
overview information supplied by the next user. If any of these areas appear to
be within that users job description, then probing is possible. However, the
interviewer needs to determine whether other information supplied by the user is
more interesting.
We also switch the roles for the interview team members. We usually try to switch
every other interview. We feel that this helps us to maintain better balance. The
team member structuring the data gets a better sense of the user's tasks than does
the team member conducting the interview. The interviewer has to listen for and
recognize new information and probe in that direction if anything new is identified.
It is important to note that all our interviews are conducted at the user's
workplace. In addition to conducting the interview, we make a careful observation
of the user's office. Often we find items there that we ask about. For example,
a user may have a large whiteboard schedule on his wall. We would ask (if the user
doesn't mention it) what role this object plays in his daily tasks. Users sometimes
open up file cabinets and explain how they store information or take us on short
tours to explain how a certain process flows within their organization.
We take audio tape recorders along if companies will permit us to tape our interviews.
This allows us to listen to prior interviews to obtain or clarify information we
may have missed. We also ask for permission to take a still photo of the people
we interview. Taking a photo back to incorporate with the structured data helps
to make the information more real to the product team.
The interviewing process is not an easy one. Both members of the team must be skilled
interviewers. They need to know how to listen and quickly interpret what the user
says and how to probe for information rather than just asking a set of predetermined
questions. Structuring the data is also difficult as it needs to be done quickly.
We take breaks at times and assess how we're doing with the data. The interviewer
cannot keep track of all the information and needs feedback from the team member
structuring the data about points that were not adequately covered.
collected for product definition. The number of users will depend on the homogeneity
of the target market.
As we collect the data, we structure it, using an outline form of a word processor.
We use the different levels in the outline to represent the different data classes
we collect. The first level is the objective. The next level lists goals that support
this objective. At the next level, goals are expanded to show the tasks needed to
support the goal, followed by subtasks. Facilitators and obstacles are expanded
under the tasks. The next level contains actions and objects needed for the task
or subtask. At this point, we move the data to spreadsheets. We sort the data in
different ways, producing a sheet for each view. We have a sheet sorted by objectives,
another sheet sorted by goals, etc. We use a numbering system to tie each data class
back to the original objective and goal. We also note whether the objective and
goals came from human factors input or from marketing input. Using this method we can easily
ask questions during design. For example, we can see which actions are performed
on the same objects, or we can view all the goals that a task supports.
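The numbering and sorting scheme described above can be sketched as follows. The IDs, task names, actions, and objects below are hypothetical, not the actual project data:

```python
# A minimal sketch of the structured data. Each row ties a task/action back to
# its objective and goal via hierarchical IDs, and records whether it came from
# human factors (HF) or marketing (MKT) input. All names here are invented.
rows = [
    # (objective_id, goal_id, task, action, object, source)
    ("O1", "O1.G1", "capture news", "define", "filter", "HF"),
    ("O1", "O1.G1", "capture news", "refine", "filter", "HF"),
    ("O1", "O1.G2", "review news",  "view",   "story",  "HF"),
    ("O1", "O1.G2", "review news",  "save",   "story",  "MKT"),
]

# One "sheet" per sort key: here, a view sorted by goal.
by_goal = sorted(rows, key=lambda r: r[1])

# Which actions are performed on the same object?
actions_on = {}
for _, _, _, action, obj, _ in rows:
    actions_on.setdefault(obj, set()).add(action)

print(sorted(actions_on["filter"]))  # ['define', 'refine']
```

Because every row carries its objective and goal IDs, any view produced by sorting can still be traced back to the original user or marketing input.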
As always, we asked users to tell us about their jobs. After collecting the overview
information, we started probing for the different responsibilities that might
include using information from the general news.
We found that most people spent little time sitting at their desks and, therefore,
capturing news was a higher priority than watching news live. In fact, one obstacle
that people described was not being able to watch broadcast news at a certain time
due to other commitments. Our interviewees also mentioned the difficulty involved
in catching up on the news after a business trip. They often gave up on this task
as there simply wasn't time to review past news. We discovered that general news
was not, in most cases, time critical. Time critical information was supplied by
more specialized sources. People did not need to know about general news the minute
it was captured.
We found that the type of news that people watched for their jobs did not change
frequently. For example, people in advertising typically had several fields of
expertise that did not change. They wouldn't suddenly switch from working in the
food division to working in the automobile division. This implied to us that people
would be more likely to spend a little more time constructing and refining filters
for capturing the news they needed as these filters would be used for a long period
of time.
The biggest obstacle for users was having to go to many different sources to look
for news and finding the time to do so. Finding the news, given a source and time
to look, was a relatively easy task. Users either scanned for particular sections
in magazines or newspapers, listened or watched news broadcasts at a certain time,
or just scanned for particular words in news publications that were signals of
possible articles they should read. Users were pretty certain that they didn't
miss any news of interest, assuming they had the time to look.
We had considered putting together a filing system within the program to allow
people to categorize the captured news in ways that facilitated retrieval. However,
we found that people used unique filing systems. Some were based on categories,
storing everything about a certain category together. Some used dates to organize
their files. The majority of users wanted to integrate information from many sources
and many also wanted to use hard copy for filing. We found that people wanted to
share portions of the news they captured with others in and outside of their company.
This led us to prioritize exporting news stories to existing file managers, rather
than developing a complex filing mechanism within the product.
In summary, our initial work allowed the product team to prioritize the needed
functionality. We knew that we could eliminate developing an elaborate filing
system and that users were more interested in capturing information than in
watching it live. We knew that users were not particularly interested in being
alerted when news was captured but that they wanted to immediately see how much
had been captured whenever they went to look. We also knew that we had to make the
process of defining filters to capture the news easy and give users enough feedback
to refine these filters. As the type of information people looked for did not change
frequently, we were encouraged that they could see the benefits of taking time to
explicitly define filters that could be used for an extended period of time.
Requirements work produced these user goals (among others):
Selectively filter text and video stories on keywords and text.
Refine filters based on feedback.
Obtain feedback on the number of stories captured (to get information
useful in refining filters).
View captured news and save stories in any filing system (the ability to
export data).
Share news with others in organization.
Capture news that occurs at a particular time.
Produce basic filters easily and quickly.
View news in the background while doing other PC work.
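The first and third goals above could be sketched as a minimal keyword filter with capture feedback. The story records, matching rule, and function name below are hypothetical; this is not the product's actual filter design:

```python
# Hypothetical incoming stories; real stories arrived over the CNN feed.
stories = [
    {"title": "Auto sales rise", "text": "Car makers report growth."},
    {"title": "Food prices fall", "text": "Grocery costs dropped."},
]

def capture(stories, keywords):
    """Return stories whose title or text mentions any of the keywords."""
    hits = []
    for s in stories:
        haystack = (s["title"] + " " + s["text"]).lower()
        if any(k.lower() in haystack for k in keywords):
            hits.append(s)
    return hits

captured = capture(stories, ["car", "automobile"])
# Feedback on how many stories the filter captured, to help users refine it.
print(f"{len(captured)} of {len(stories)} stories captured")  # 1 of 2 stories captured
```

The count reported at the end is the kind of feedback the goals call for: enough information for a user to judge whether a filter is too broad or too narrow and refine it.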
Once we've decided upon a particular design direction or metaphor, we need to work
on the precise representation for the actions and objects (note that we've already
factored in feasibility about these representations when selecting design
possibilities) and we need to design the new user tasks and subtasks. We use the
prioritization of goals to determine the amount of visibility and access for tasks
and subtasks. We use the facilitators and obstacles in the users current
environments to evaluate the new possibilities for tasks and subtasks. At times,
automating tasks results in adding more obstacles than in the current user tasks.
We must, therefore, make sure that users can see reduced obstacles in later tasks
and that these reduced obstacles will produce a sufficient reason to use the product.
As were designing these tasks and subtasks, we continually bring in users to test
out our designs. We use a combination of high- and low-fidelity prototypes depending
on the type of information we need to get from a user. If we're only interested
in whether they can find the path needed for doing a task, we will probably use
paper screens with paper pull-down menus. If we need to evaluate task details, we
will produce a high-fidelity prototype of at least that portion of the product.
We use general design guidelines and platform-style guidelines as design work
progresses. We do not have corporate style guides for new products as most of these
products have very novel interfaces. We do keep track of other products being
developed and try to be consistent with designs already used for similar
functionality. In some cases, we use this opportunity to learn from any earlier
design mistakes.
As our design progresses, new tasks, and hence new actions and objects, are added.
These must all be evaluated with respect to the goodness of the fit within the
proposed design and with respect to the goals that they support. When tradeoffs
need to be made, we use goal prioritization to help our user-centered decision
making. The most important user goals must be streamlined at the cost of lesser
goals. The Systematic Creativity framework allows us to easily check on
prioritization of goals.
At the more detailed levels of design, actions are given initiators and feedback
is defined to show the results of an action on an object. An initiator defines how
users will perform an action. For example, a user could select a menu item, select
an icon in a toolbar, or the action could be automatic. Does the same initiator
work for all objects on which this action is performed? Feedback is needed to
indicate to the user the results of the action. Is the same feedback appropriate
for that action on all objects? Our framework is continually updated as new actions
and objects are added. We can then look to see which actions are used on each object
by simply sorting spreadsheets as needed. This helps us make decisions about
initiators and feedback at a global level. That is, will changing an initiator for
one action on one object work for all other objects on which this action is performed?
We can ensure that the design takes into consideration sets of actions and sets
of objects. This helps us design both terminology and visual feedback by taking
sets of objects and actions into consideration.
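The global check described above (does each action keep the same initiator on every object it applies to?) can be sketched as follows. The action, object, and initiator names are hypothetical examples, not the product's actual design table:

```python
# Each row records which initiator triggers an action on an object.
# All names here are invented for illustration.
design = [
    # (action, object, initiator)
    ("save", "text_story",  "toolbar_icon"),
    ("save", "video_story", "toolbar_icon"),
    ("view", "text_story",  "double_click"),
    ("view", "video_story", "view_button"),
]

# Group initiators by action, the equivalent of sorting the spreadsheet.
initiators = {}
for action, obj, initiator in design:
    initiators.setdefault(action, set()).add(initiator)

# Flag any action whose initiator differs across objects.
for action, inits in initiators.items():
    if len(inits) > 1:
        print(f"inconsistent initiators for '{action}': {sorted(inits)}")
```

Run against this sample table, the check flags "view" because it uses a different initiator on text stories than on video stories, exactly the kind of global inconsistency the sorted spreadsheet views are meant to expose.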
5.3.1. Product Design for CNN@Work
The user goals included:
Selectively filter text and video stories on keywords and text.
Refine filters based on feedback.
Obtain feedback on the number of stories captured (to get information
useful in refining filters).
View captured news and save stories in any filing system (the ability to
export data).
Share news with others in organization.
Capture news that occurs at a particular time.
Produce basic filters easily and quickly.
View news in the background while doing other PC work.
Marketing goals included:
Display that text stories are available.
Give customers (not users) the ability to monitor the system.
Customers have the ability to create an internal channel used for
distribution of company talks, news, and training.
Give users information about what types of information (text, audio, video)
are available on each channel.
Engineering informed us that not all channels from CNN would have video, text, and
audio information. Also, customers defining their own channels could choose which
types of information would be broadcast. Therefore, we identified a new goal: users
should be able to quickly determine what type of information would be broadcast
on each channel.
We had identified actions and objects that needed to be represented in our interface.
The objects identified, along with a subset of actions on them, included:
CNN@Work used a TV metaphor and allowed the user to change channels, change volume,
adjust the picture, record information, and bring up the stock view. Figure 9.3
illustrates the original interface.
Figure 9.3
A window similar to this was used for the opening window in CNN@Work,
Version 1.
The thought among the product team was that we should use the same metaphor for
the main window of the new version of CNN@Work and display the availability of text
stories. We brought in users to try out versions of this using paper prototypes,
as well as very simple prototypes built on top of the original product. We were
not successful in portraying this functionality to users. Users were certain that
they could watch video using this application, but they were completely unaware
that text stories were also available. Users had no idea they could set up any sort
of individualized capture mechanism to save video stories that came across while
they were away from their desk. They knew that they could do timed recording in
the first version but had no idea that they could do recording based on the content
of video stories. Most of the efforts to try to get this information across
concentrated on menu items and on icons in the toolbar. Due to the limited space
we had for the interface, this was a necessity. One goal supported by the original
product was to watch the news live in the background while working on another task.
This meant that the interface had to be reduced to a small size that still allowed
users to see the video.
Given that capturing news stories and viewing text stories were high priority goals
for the users, we felt that a drastic redesign was needed. We decided to abandon
the TV metaphor and make text stories and capturing news the focus of the interface.
As viewers had prioritized capturing news above watching TV we felt we needed to
make sure they could easily find this type of functionality in the interface. Figure
9.4 shows a design for the main window which is similar to the final design.
Figure 9.4 Opening window for CNN@Work, Version 2 showing text story and capture
capability.
We used several different ways in the final design to display that text stories
were available and that news could be captured. We displayed the active channel
and noted what type of information was available on that channel. Recall that
corporations using CNN@Work could choose to have their own channels, and that CNN
had several channels: HeadLine News, CNN, and CNN FN (the financial news channel).
Of these, only HeadLine News would currently carry text stories. Local corporate
channels might only have text announcements on them and occasionally might
broadcast a speech by one of the corporate officers. We didn't want users to be
confused about which services were available on each channel. This also gave us
the opportunity to display the different types of information available. We
discovered during design testing that users were confused about what channel or
channels filters would monitor. We needed to convey that filters only applied to
the currently selected channel. We used the label "active channel" to convey this
message to users.
We gave the users a window displaying story titles that were currently available.
We could cache stories for a certain period of time before they would be replaced
by new stories coming in. We presented the story names in a scrolling list, including
the present video story and the next video story to play. Recall that since many
of the actions for text stories were the same as those for video stories, we wanted
to treat those objects in the same fashion. From the main window, users could view
either kind of story (by double clicking or by selecting a story title and pressing
a view button that was added to the final window design), set a filter based on
a story title for either video or text, and save the story directly to the inbox.
The one exception was the upcoming video story. We used the method of appending
"next to play" and "playing next" to these story titles to alert the user to
this exception, while still portraying that this list was a list of stories that
were available for a limited amount of time.
We wanted to assist users in defining filters for capturing news stories as we found
this was a high-priority goal. We found that users currently found news easily.
They scanned newspapers searching for keywords or for company names.
They listened to news for the same types of keywords. They looked in certain sections
of the paper or tuned into certain broadcasts that they knew would provide the
information they needed. During initial design testing, we experimented with
different ways of having users describe filters to capture the news and found (not
surprisingly) that using Boolean expressions was a difficult task. In our case,
the problem was more severe as the news stories were available for viewing and
capturing only temporarily. News that is not captured is replaced by new stories,
thus making it impossible for the user to tell what, if anything, he or she has
missed. We needed to provide feedback on the number of stories transmitted during
a given time. We put in a count of the number of stories that had appeared in the
story window since the user had been logged in. Users could also view captured
stories by the name of the filter that had captured this story. Thus, if a user
found that several hundred stories had appeared and that his/her filter had failed
to capture any stories, he/she might suspect that the filter needed to be revised.
We put an "autofilter" button on the opening window. Not only did this serve to
alert users to the filtering functionality, it provided easy access to an example
of a filter. All a user had to do was to select a story displayed in the story window
and press "autofilter". The autofilter window (shown in Figure 9.5) then opened
to display a filter which would capture a story just like the one the user had
selected. This filter could be saved as is, or it could be modified slightly by the
user to capture more or fewer stories about this particular topic.
Figure 9.5 The autofilter window that appears when a story is selected and the
autofilter command is given.
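The autofilter mechanism can be sketched in code. This is purely an illustration: the chapter does not describe the CNN@Work data model, so the `Story` and `Filter` types, their fields, and the matching rule are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical story and filter types; field names are illustrative assumptions,
# not the real CNN@Work schema.
@dataclass
class Story:
    title: str
    industry: str
    subject: str
    keywords: list

@dataclass
class Filter:
    industry: str
    subject: str
    keywords: list

def autofilter(story: Story) -> Filter:
    """Derive a starting filter that would capture stories like the selected one."""
    return Filter(industry=story.industry,
                  subject=story.subject,
                  keywords=list(story.keywords))

def matches(f: Filter, story: Story) -> bool:
    """Assumed semantics: classifications must agree and any keyword must appear."""
    return (story.industry == f.industry
            and story.subject == f.subject
            and any(k in story.keywords for k in f.keywords))

s = Story("FAA funds jet research", "government", "aviation", ["jet propulsion"])
f = autofilter(s)
print(matches(f, s))  # the seed story is always captured by its own autofilter
```

The point of the design is visible in the sketch: the user never writes a Boolean expression from scratch; a concrete story seeds a filter that can then be broadened or narrowed.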
The filter creation window (shown in Figure 9.6) provided several aids to users.
Keywords were provided and were divided into different categories. We worked with
CNN to ensure that these same keywords would be used for classifying and capturing
stories. As users entered keywords and supplied the proper AND or OR connectors,
a summary window gave feedback in natural language about what the filter would
capture. For example, a response might say "this filter will find stories with
the industry classification of government and the subject classification of
aviation and the key words of jet propulsion." In addition, we predefined some
filters that could be used as is or could be refined slightly according to users'
requirements. For example, "high tech," "defense," and "environmental
regulations" could serve as predefined filters.
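The natural-language feedback could work along these lines. A minimal sketch, assuming a filter is a set of optional classification fields plus keywords (the actual field names and grammar rules are invented here):

```python
def summarize(industry=None, subject=None, keywords=None):
    """Render a conjunctive (AND) filter as a natural-language summary.
    The schema (industry, subject, keywords) is an assumption for illustration."""
    parts = []
    if industry:
        parts.append(f"the industry classification of {industry}")
    if subject:
        parts.append(f"the subject classification of {subject}")
    if keywords:
        parts.append("the key words of " + " or ".join(keywords))
    return "This filter will find stories with " + " and ".join(parts) + "."

print(summarize(industry="government", subject="aviation",
                keywords=["jet propulsion"]))
# This filter will find stories with the industry classification of government
# and the subject classification of aviation and the key words of jet propulsion.
```

Echoing the filter back in plain English gives users immediate feedback on whether their AND/OR connectors express what they intended.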
We also provided users with the ability to select a given filter and modify it to
create a new filter.
looking to see the effect on any other parts of the application. Perhaps an initiator
for an action is changed to be more useful for a particular object. If the developer
were looking at the global level, he or she would make sure that this change would
be a positive one for all objects on which this action is defined. While several
small, less-than-optimal decisions rarely affect the overall usability of the
product, many such decisions can have a negative impact. It is impossible (and most
likely not even safe) for human factors engineers to stand over the shoulder of
every developer and check every decision made or to do usability testing on every
decision. We needed a way to convey information to developers so that they could make
implementation decisions within the global framework. We feel that the Systematic
Creativity framework facilitates this communication. Developers can use the
framework to see the effect that a decision will have on the entire product. If
this decision affects a high priority goal or many goals, then a human factors expert
should definitely be called in. If the decision is at a lower level and affects
very low-priority goals, the developer can use the framework to help evaluate his
or her decision.
5.4.1. Product Implementation for CNN@Work
At this point in the development of our process of Systematic Creativity, we were
still working on the best way to format information in the spreadsheets and, given
the time restrictions we were under, we decided that we needed to share the
information quickly with the product team. We did this by having the team as a whole
do a walkthrough of several of the critical tasks, step by step. We used overheads
of the interface as it currently existed. We put up the main window and asked the
team members to write down what they would do to achieve a particular task. We asked,
after each choice in the process, how many had selected the correct choice and which,
if any, other choices had been selected. Then we went on to the next step. We
continued this process for the subset of tasks we had selected. Team members quickly
got a different view of the interface when they looked at the tasks users were
expected to do in order to accomplish the ultimate goal. This process succeeded
in convincing team members of the value added by viewing the system from the users'
perspective. In fact, we would recommend that product revision teams start by doing
a walkthrough of the previous product to quickly understand the extent of the
changes that should be made and to start off with the users' view in mind.
We were working closely with the developers during product implementation. This
was a relatively small project and the head developers had been convinced of our
added value during our design work. Therefore, major decisions during
implementation were not made without consulting one of the human factors staff on
the team.
The first failure is catastrophic but should not happen if we have done sufficient
design testing. Questions of supporting goals should be identified during our
design iterations and incorporated into the product. However, if our involvement
with the product occurs late in the cycle, this may be a possibility. The development
team will need to decide if the lack of support for certain goals still results
in a viable product.
Failure to sufficiently reduce the obstacles associated with the tasks is also a
very serious error. Again, this should have been identified during design testing.
Producing a more efficient user task can be a large redesign problem that can
seriously affect the product schedule. The Systematic Creativity framework can be
used to identify the importance of this task which can help the team members decide
if the time for redesign needs to be taken.
Failures 3 and 4 are major failures and again should have been detected earlier
in the design iteration. If little up-front work has been done, these failures will
be detected at this time. Using the framework we can see what goals and tasks need
the particular action or object. We can then evaluate the seriousness of this
failure based on the priority of the affected goals and the difficulty of adding
the needed visibility, action, or object.
Failures 5 and 6 are the least severe, at least from the standpoint of how much
redesign needs to be done to correct them. Having the Systematic Creativity
framework in place, the team can locate an individual action or object and see how
changes made in the context of that particular task will affect other tasks using
that particular action or object. The tasks in which the action or object is used
can be traced back to user goals, thus allowing the team to make the corrections
in a more global fashion, rather than making a change and subsequently discovering
effects in other portions of the interface.
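The traceability the framework provides can be illustrated with a toy data structure. The shape (actions used by tasks, tasks serving prioritized goals) mirrors the text; the concrete entries below are invented for illustration only.

```python
# A minimal traceability table: actions -> tasks -> goals (with priorities).
# All entries are hypothetical examples, not the real CNN@Work framework data.
TASKS_USING_ACTION = {
    "save_story": ["capture a breaking story", "archive a filter hit"],
}
GOAL_OF_TASK = {
    "capture a breaking story": ("capture news while away", 1),   # priority 1 = highest
    "archive a filter hit": ("review captured stories later", 2),
}

def affected_goals(action):
    """Trace an action back to every goal whose tasks depend on it,
    most important goals first."""
    return sorted({GOAL_OF_TASK[t] for t in TASKS_USING_ACTION.get(action, [])},
                  key=lambda g: g[1])

for goal, priority in affected_goals("save_story"):
    print(priority, goal)
```

Before changing an action locally, a developer can run this kind of trace to see every goal the change touches, and escalate to a human factors expert when a high-priority goal appears in the result.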
5.5.1. Usability Testing of CNN@Work
It is always difficult to determine when design testing stops and usability testing
begins. We tend to think of usability testing as more formal testing later in product
implementation. In usability testing we concentrate on testing that our usability
goals have been verified. In defining the usability goals for this product, we
specified percentages of users that should be able to accomplish basic tasks during
their first encounter with the system. We also specified what percentage of users
should rate each task within a specified range we had defined to represent
"perceived ease of use." Specifying these usability goals is an extremely
difficult task, especially when the product or the functionality is new. We were
able to use some information from usability tests on the first version to produce
specifications for previously supported functionality. We based the specifications
for new functionality on the priorities of user goals in relationship to priorities
of previously supported goals. More important than having exact specifications
in the requirements document, we now had agreement from the team that these
requirements were as real as functionality requirements.
Basic tasks for this product included:
Changing channels.
Setting up a simple filter from scratch.
Identifying where a story was captured and retrieving that story.
Viewing a story currently available.
Viewing a story currently available and storing it.
Viewing a story currently available and creating a filter based on this
story.
A usability test determined if our product met the specified usability goals we
had established. The usability test focused on the same tasks that we concentrated
on during design, those that were of the highest priority to users.
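Usability goals of this form ("at least N% of first-time users can complete task T") are directly checkable against test results. A sketch with invented targets and numbers, since the chapter gives no actual figures:

```python
# Hypothetical usability targets and test results; the real specifications
# for CNN@Work are not given in the chapter.
goals = {  # task -> minimum fraction of first-time users who must succeed
    "change channel": 0.90,
    "set up a simple filter from scratch": 0.70,
}
results = {  # task -> (successes, participants)
    "change channel": (9, 10),
    "set up a simple filter from scratch": (6, 10),
}

def verify(goals, results):
    """Return, per task, whether the measured success rate meets the goal."""
    report = {}
    for task, target in goals.items():
        successes, participants = results[task]
        report[task] = (successes / participants) >= target
    return report

print(verify(goals, results))
```

Writing the goals as data makes the point the authors stress: these requirements become "as real as functionality requirements," because the product verifiably passes or fails each one.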
Even when the point at which we become involved in the up-front work is less than
ideal, we feel that this process gives us a way to deal more effectively than most
with the situation.
Suppose we are involved at the design phase, but not early enough to do any
interviewing and requirements work with a user population. We still have the
customer goals from marketing to use. In this situation, we produce, along with
marketing and the development team, what we believe to be valid user goals. Then
our early low fidelity prototypes reflect these goals. As we bring in users to assess
the prototypes, we collect two types of information: what functionality they think
the prototypes reflect and how this functionality fits with their work goals. We
collect information about the validity of our goal assumptions along with
information about the proposed design. This may result in more iterations than
normal if we discover that our goal assumptions are incorrect.
Suppose that we aren't consulted until the implementation is being done. Producing
a usability test plan essentially constitutes putting together assumptions about
user goals and goal priorities. It is essential that the development team be
consulted and agree upon the test plan. Are these the goals that they think this
product supports? Again, as testing is done, questions need to be asked about the
appropriateness of the goals, as well as measuring the problems users have in
completing certain tasks. The problem with finding this information out at this
time is that it is often too late in the production cycle to add more functionality.
We have a sound basis for recommending a starting place for a revision or even
slipping the release schedule for the current product if the problem is serious
enough. If work starts during implementation, we would probably expand the
framework beyond goals, which would be the basis for our testing scenarios. If we
find particular problems in a given task, then we might do some probing with
usability test participants to determine how they currently do these tasks.
a limitation for us, as we have considerable difficulty finding team members who
are able to take the time to accompany us on these interviews. It would, however,
be nice to give product team members more of a firsthand look at the users rather
than seeing impersonal text. One way of doing this would be to incorporate audio
or video clips attached to different data classes that illustrate issues users told
us about during data collection. Rather than bringing the team to the users, we
would like to use intranet technologies to bring the users to the team by providing
a multimedia framework. An intranet site could also be used to generate and
collect discussion about particular points. A framework diagram containing hot
spots that team members could use to view current discussions or to contribute ideas
would facilitate input from the team and would promote more of a shared view of
the product.
9. SUMMARY
Systematic Creativity facilitates team communication and decision making.
Providing a way to lay out all the issues and see conflicts of goals and priorities
of goals is a great facilitator in product definition. It gives all team members
the same basis of information to start from and enables user-centered decision
making. The same is true of the design process. Creativity is not stifled but it
is evaluated by assessing all the information and noting how all the needed goals
are supported. Having such a framework produces better product design as all goals
are taken into account when the initial brainstorming is done. Having the
information in text form, rather than visual form, has proven to be a good idea.
Too often we find ourselves arguing about the form of the visual rather than the
basic ideas of what functionality it supports. Having only text at the
beginning of product definition allows us to concentrate on the real issues: what
functionality do we have to support, and what can we support well?
We feel that this method has great promise in creating new products as well as
revising existing products. The ability to generate and view data about user
requirements systematically allows us to do a more complete product definition and
design. Decisions about what to include in the product and how to include particular
features can be made knowing what it is that users really want to do.
10. ACKNOWLEDGMENTS
This work was done by the authors along with other human factors engineers at Intel
Corporation. We thank Doug Sorensen and James Newbery for their work in information
gathering, information displaying, and their devotion to using this method for
design and design testing. Michael Mateas worked on the development of the CNN@Work
product and the screen shots in this chapter are from a simulation written by Michael.
Thanks also to the user interface developers on the CNN@Work team for their support
of our work.
11. REFERENCES
Holtzblatt, K. and Jones, S., Contextual inquiry: a participatory technique for
system design, in Participatory Design: Principles and Practices, Schuler, D. and
Namioka, A., Eds., Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1993,
177–210.
Mateas, M., Salvador, T., Scholtz, J., and Sorenson, D., Engineering ethnography
in the home, in Common Ground, CHI '96 Conference Companion, Tauber, M., Ed., ACM
Press, New York, 1996, 283–284.
Moore, G.A., Crossing the Chasm, Harper Collins, New South Wales, 1991.
Mueller, M., PICTIVE: democratizing the dynamics of the design session, in
Participatory Design: Principles and Practices, Schuler, D. and Namioka, A., Eds.,
Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1993, 211–238.
Salvador, A.C. and Scholtz, J.C., Systematic creativity: a methodology
for integrating user, market and engineering requirements for product definition,
design and usability testing, in Engineering for Human-Computer Interaction, Bass,
L.J. and Unger, C., Eds., Chapman & Hall, London, 1996, 307–329.
CHAPTER 10
The UI War Room and Design Prism: A User Interface Design
Approach from Multiple Perspectives
Kevin T. Simpson
Financial Models Company, Mississauga, Canada
email: [email protected]
Table of Contents
Abstract
1. Introduction
1.1. Why this Chapter
1.2. Design Context
2. The User Interface War Room as a Metaphor for Interface Design
2.1. Design Context: The UI War Room
2.2. User Requests
2.3. User Objects
2.3.1. Recording the Origins of Objects
2.3.2. Organizing the Objects
2.4. The Whiteboard
2.4.1. Too Few Cooks'll Spoil the Broth
2.4.2. User Involvement in Design
2.4.3. Existing Interfaces as a Source of Ideas
2.4.4. Flirting with Scenario-Based Design
2.5. Prototypes
2.5.1. Choosing between Idea Sketches
2.5.2. Concurrent Top-Down Iteration
2.5.3. Separating out Mutually Exclusive Prototype Systems
2.5.4. Matching User Requests to Developed Features
3. Designing from Multiple Perspectives: The Design Prism
3.1. Design Context: The Design Prism
3.2. Extract User Elements
3.3. Define Interrelations
3.4. Prototype Each Perspective
3.4.1. Idea Sketching from each Perspective
3.4.2. Screen Design using Design Guides
3.5. Consolidate Perspectives
4. Some Suggestions for Making Simultaneous Use of the War Room and Design Prism Methods
5. Benefits, Limitations, and Difficult Questions
6. Acknowledgments
7. References
ABSTRACT
This chapter focuses on two practical design approaches to bridging the gap: the
User Interface War Room method and the Design Prism method. The walls of the user
interface war room served as a metaphor used in redesigning the interface of a
computer-aided software engineering (CASE) tool for telecommunications research.
To begin, user requests (derived from task analysis interviews with software
developers) were posted on the first wall of the war room. The English text
describing these requests was grammatically parsed to yield user objects (and the
functions which acted on them). These user objects were organized into hierarchical
relationships and posted on the second wall. Design ideas were then generated in
a creative process that drew ideas from team members, users, existing software,
and scenario data. This led to the development of several mutually exclusive
prototype systems, which could then be verified against the original user requests
to make sure that concrete designs matched user requirements.
In the Design Prism work, which involved redesigning the interface for a power plant,
the user objects and functions of the war room were subdivided into four categories
of user elements: Information, Objects, Goals, and Actions. Relationships between
these user elements were captured in a table of relations. Prototype ideas were
then sketched from the perspective of each category (e.g., from a user Goals
perspective first). These idea sketches were fleshed out into actual screen designs,
with the help of plant-specific design guides for coding information and for
displaying graphical objects. Finally, the alternative designs from each of the
four perspectives were consolidated to yield either a single "best-of" prototype
or several distinct alternatives that could then be evaluated with users.
1. INTRODUCTION
tools that can be used under particular circumstances. I encourage the reader to
try out the methods used in this chapter and to try combining them with other methods
from this book and from personal experience. Each interface design project is unique
and different circumstances call for different measures.
One of the ideas which has taken form in my methods is the exploration of multiple
perspectives... multiple ways of looking at and approaching design. In this spirit,
I present two different techniques that I have found to be useful in practice:
the User Interface War Room and the Design Prism. I would ask you to consider these
two methods separately to begin with, since they were developed under different
circumstances. After I've introduced and explained each of them, I will give some
suggestions on how they might be used in conjunction, based on my own future
intentions to explore.
Though we were fortunate enough to have the use of a dedicated UI war room, the
methods described here do not require one.
Figure 10.1 shows a picture of the room, to give an idea of its layout. As you can
see, this is your standard four-walled room, with floor and door. Each of the walls
in the room was designated for a particular step in the design process.[2] At this
stage of the cycle, we had already extracted key user requests from our task analysis
interviews and documented them in an overall user task analysis report. These user
requests were prioritized and placed (one user request per sheet) on the
left-hand wall of our UI war room. On the back wall we pasted user objects. These
came directly from the interviews. They were things like "module," "architecture
diagram," "key element," and "layer of software," words that represented some
sort of concrete conceptual item to users, in their language (our users were
software developers). We took these objects and arranged them on the wall to show
hierarchies and interrelations. Then we would flesh out some creative ideas on the
whiteboard (usually working with a partner) and post the more refined ideas on the
right-hand wall. We constructed and refined higher-level, top-down views of the
system, while at the same time developing the sketches that came from bottom-up
styles of idea generation. In the end, we could check the resultant paper prototypes
against the user requests and user objects, to make sure that the prototypes
reflected the user needs and information we had gathered.
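The "grammatical parsing" of request text into user objects (nouns) and functions (verbs) can be approximated without a real parser. A deliberately crude sketch: the verb list and stopword list are hand-made assumptions, and real work would use proper part-of-speech tagging and human judgment.

```python
import re

# Crude approximation of parsing user requests: a small hand-made verb list
# stands in for real grammatical analysis; remaining content words become
# candidate user objects. Purely an illustrative sketch.
VERBS = {"see", "view", "determine", "show", "navigate", "compare"}
STOPWORDS = {"the", "a", "an", "of", "to", "and", "or", "for", "they",
             "want", "would", "like", "where", "when", "is", "it", "its"}

def extract(request: str):
    """Split a request sentence into candidate functions and user objects."""
    words = re.findall(r"[a-z]+", request.lower())
    functions = [w for w in words if w in VERBS]
    objects = [w for w in words if w not in VERBS and w not in STOPWORDS]
    return functions, objects

fns, objs = extract("They want to see tables, fields, and pointers to other tables")
print(fns)   # ['see']
print(objs)  # noisy candidate objects, to be filtered by a human designer
```

The output still needs human curation (note the noise words that survive), which matches the chapter's process: the extracted objects were reviewed, arranged, and validated by the team before being posted on the wall.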
[2] The reader might find it interesting to compare the war room we set up here to the room described
by Karat and Bennett (1991) in their four-walls design environment, which I have since come
across. The walls of their room were designated for Scenarios, Design Constraints, Open Questions,
and Design Sketches.
Figure 10.1
The team, as a whole, had relative freedom to pursue desired directions in both
design and implementation. This meant that we had the luxury of working on
high-level design for the interface as we wanted it to look some 2 years down the
road, to give us a well-thought-out migration path. The product design can be
characterized by:
Fire-preventing, rather than fire-fighting.
Providing a graphical view of an existing software system where
existing code, the basis of design, was structured hierarchically.
Lots of physical space available (a roomful).
Several developers indicated that they would like to see graphical representation
of data structures, for both global and local data. They want to see tables, fields,
pointers to other tables (type resolution), where those are initialized, and to
what. They also want to see initial values of parameters and identifiers, and
where/when they are assigned. For any given data store, they also want to be able
to determine who reads it, who writes it, and where it's allocated.
We prioritized the user requests, using team consensus, and stuck them up on the
left-hand wall of the war room. This was an informal process. We made the assumption
that all of the key user requests summarized in the task analysis report were of
equal value to users, and allowed the team to prioritize the items through
discussion and voting. A more formal method may have ensured a closer match between
user priorities and our priorities, but this method helped provide team focus and
commitment, which are not to be taken lightly in a real-world context.
portion of the assembled user objects. However, if it seems that too many user
objects are not being used (particularly if they are objects of high validity),
it might be a cause to rethink the interface.[3]
Table 10.1 Assessing the Validity of User Objects in the Interface

Source of user object                        Validity  Comments
Specific user task analysis interview        High
(e.g., UTA: Cindy Lee)
Replication from previous releases           Medium    Caution required; replication from
                                                       previous releases is generally desirable
                                                       if there have been no customer complaints
Object derived (stolen? liberated?)          Low       Be sure to test these objects and
from other software                                    parts of the interface with users
[3] It could also just mean that the scope of your task analysis was particularly broad.
Rebecca Mahony is now working to understand 6 new modules of code. She relies heavily
on her "buddy" as a resource, since he originally developed some of this code.
But the code has evolved and now the original few modules interact with many others.
Rebecca often has to contact the group heads who "own" these modules, because
the module and procedure headers (which are supposed to describe them) are so out
of date. Otherwise, she is stuck reading sections upon sections of code. The code
library lists the names of group heads. Rebecca would really like to see parameters
into and out of a procedure, as well as the structure and flow of data that is
accessed by procedures. Neither of these is provided in the text-based
cross-reference tool she currently uses.
Figure 10.2
of the "resources" branch. However, the "data" branch seems more closely
related to the "module" branch than "resources" would. For this reason, I chose
the horizontal position I did to capture the horizontal affinity.
One day in the UI war room a fellow team member was looking over the prototypes
we had developed and began to ask me some questions about them. The conversation
gradually turned to the current implementation of a piece of our tool. He was not
fond of the way it worked and had some ideas about how it might function more
effectively. He was, however, having some trouble getting the ideas across to me
in words. As this was the case, I quickly persuaded him to draw them. He ended up
(after some poking and prodding) illustrating what he meant on a piece of paper,
which was then placed on the wall for general discussion... and to trigger our
memories when the time came to further develop that feature. This poking and
prodding I have playfully labeled the "Spanish Inquisition Method," and it becomes
another tool for gathering ideas from people-sources.
2.4.2. User Involvement in Design
Some interface components that are developed in the war room may have come directly
from suggestions or drawings made by users. These may include graphical, navigation,
and layout ideas. It should come as no surprise that some of the most lucid design
solutions are offered by users.
One strategy we used to elicit design ideas from users, during the task analysis
interviews, was to ask them for drawings that they had made in the course of their
work (or we copied down what was currently drawn on their own whiteboards). In many
ways, these drawings give a good picture of how users visualize and understand
information.
We also involved users throughout the task analysis process itself (of course),
and in usability evaluations of prototypes at various stages. We did not, however,
explore any truly participatory design work with users.
2.4.3. Existing Interfaces as a Source of Ideas
Existing interfaces are a valuable source of information and ideas that should not
be overlooked. Interface solutions may come from other tools within your product,
from tools outside (related or unrelated), from old ideas "on the shelf," and
from past designs ("don't fix it if it ain't broke").
Studying other applications that users currently employ in their day-to-day work
is also important in ensuring that your own tool will fit within their work
environment.
2.4.4. Flirting with Scenario-Based Design
During our task analysis, we had the good fortune of working very closely with one
user, with whom we conducted an observational interview. We watched and listened
as he reviewed, in the space of an hour, how he had solved a software problem report
over 2 weeks' time. The task information we recorded from this interview was
heavily detailed.
I took the scenario for this one user problem and used it to drive the design directly.
I designed a depth-first prototype (using very rough sketches) of an interface,
based on the information we had gathered in the observational interview. I followed
his process exactly, and simply mocked up part of a screen at each step that would
fulfill what I perceived to be his needs during that step. I didn't worry about
how full a screen was, because I was only intent on prototyping the part that our
user was concerned with at that moment. I wasn't worried about originality at this
point either, so I often took my drawings from other interfaces I had seen or from
my own previous prototypes. This method kept me in a very creative, fast-flowing
frame of mind. Every "part-of-a-screen" I designed would be labeled with an arrow
pointing to the "part-of-a-screen" that came next in our user's task. Sometimes I
would just label a screen with a title and leave it at that. Other times I might
have an option box here and a few lines of code there or maybe just a link to another
tool (or a picture of a telephone so that he could call his buddy for help).
Not everything I drew was going to end up on a computer, let alone in our software.
I ended up constructing a series of fairly illegible and incohesive diagrams...but
this did not matter. The main significance of the effort lay in evoking the frame
of mind of an individual user at his task and in helping me to generate creative
new ideas that could be expanded upon later, during the more slow-and-steady periods
of prototyping. The development of these screens also helped to give me a stronger
sense of the order and sequencing necessary for our interface. It gave me a very
definite method of approach: that of translating our subject's verbal
information into a prototype intended to capture all of the information he had given
me. This forced me to be thorough in a new way: depth-first rather than
breadth-first.
I was later able to transform some of the parts-of-screens I had created into more
concrete prototyping ideas. In Figure 10.3, I provide an abstract view of the
interface pieces I constructed (on the left) and how I might put these into an actual
prototype (on the right). In this example, the piece numbered "2" was used more
than once in the user's task. This meant that he was doing roughly the same thing
at two different stages (though there might have been small variations in context),
and needed the same sort of information or control in both spots. To support him
in this, I decided to place sketch number "2" at the top of his screen as a
beneath, on two separate screens. The intent was to make his most-used task
available from any screen. In ways such as this, I experimented with synthesizing
the scenario sketches I had created into meaningful prototypes.
Figure 10.3
We would have required scenario information from many more users if we had wanted
to take the scenario-based design further and come up with representative user
interface designs on this basis. Instead, the scenario mostly just helped me to
generate new ideas, to step out of my current perspective, and to verify that our
existing design attempts were on the right track.
2.5. PROTOTYPES
Most ideas would go through at least one iteration before they were tangible enough
that we would feel comfortable posting them, autographed, on the Prototype Wall.
We later found it possible to separate out mutually exclusive prototyping ideas
that could not exist in the same interface. We developed these separate alternatives
further in turn or in parallel as desired. If you consider the Whiteboard
as the brainstorming stage, then the Prototype Wall represents the stage where
as-yet-uncriticized ideas from brainstorming are evaluated and synthesized and are
organized into a cohesive whole to meet strategic needs.
2.5.1. Choosing between Idea Sketches
Earlier, on the User Object Wall, I had placed the word "module" above
"procedure" to represent the relationship between these two concepts (Figure
10.2). The next part involved mapping these relationships to the interface
prototypes themselves (using the whiteboard). A number of representations of the
relationship could have resulted. We could have indented procedures in a list
directly beneath each module name (Design 1, in Figure 10.4), or, we could have
used a box (labeled with the module name) to encompass all of the procedures within
a module (Design 2). There was no deterministic process for choosing which idea
sketch would make it into a prototype. For the most part, we used informal criteria
to help us decide which of the proposed methods to use. Screen real estate was of
concern, for instance, so one of the more convincing strategies was to list
procedures beneath a module name and to make the module names collapsible (Design
3; similar to Macintosh file directories). The user could then decide whether or
not they needed to see the procedures within that module.
Figure 10.4
By combining the Data and Definition pieces, we had cut the number of categories in
half. We were left with Calls, Data/Definition, Runtime, and Structure as our major,
structural features (see Figure 10.5).
Figure 10.5
Later, as we did some more bottom-up prototyping, we found that it made sense to
reorganize the high-level structure we had created. We now combined the Calls and
Data pieces, instead of the Data and Definition pieces which were combined earlier.
This changed the high-level categories to Cross-reference, Definition, Runtime,
and Structure. Iterate and iterate again.
It was important to keep re-defining our interface at the top as well as at the
bottom. For instance, if we had prototyped a Cross-reference part of the tool,
without explicitly defining at the top-level that we were combining the Data and
Calls, then the front end of our prototype would no longer have been in synch with
the individual prototypes for each feature.
2.5.3. Separating out Mutually-Exclusive Prototype Systems
It is important that alternatives be explored in prototyping, for at least a couple
of reasons. First, more than one prototype should be tested with users: It is too
easy for a user to quickly say, given a single prototype, "Sure, that looks good,"
leaving the designer with little valuable feedback. Second, many good ideas will
come up during prototyping sessions, from a variety of sources. Not all of these
ideas can possibly fit together into a cohesive whole. Some ideas will conflict
with others.
We had a number of team members working on the interface. Most of the work in the
early stages of the war room was being done by myself and by one of the software
developers on the team: myself, because it was my responsibility as user interface
designer, and the software developer because he had the time and the interest. To
this point, the two of us had spent most of our time working together on a single
prototype system (the one in Figure 10.5), with our view narrowed to the completion
of this one framework.4
It should be noted, though, that many of our spur-of-the-moment, right-then-and-there ideas didn't
fit within our main framework and were left sitting around like leftovers waiting for the right
high-level prototype to pick them up.
Other members of the team were also encouraged to use the war room and to post their
ideas there. Some had been working in parallel on their own high-level views of
the system. Feature ideas had been stewing throughout the previous release cycle,
to the point where team members had even coded fairly sophisticated computer
prototypes of potential features. These prototypes were considered valid and
valuable contributions to the war room but they were given no more weight than
rough paper sketches.
Once a fair number of prototyping ideas were up on the wall, we were able to pick
through the pieces of interfaces, organizing and rearranging them to come up with
the smallest number of mutually exclusive systems possible. Some of the pieces
represented different (or overlapping) ideas for one feature and, therefore, were
unlikely to appear in the same prototype system. Other sketches could fit in any
of the alternative systems, so the distribution of these pieces became an executive
decision based on best fit. Still other pieces might end up in all of the alternative
systems, if no other ideas were generated for that particular feature.
We ended up with six different prototype systems where each alternative system
could not be fully meshed with any of the others. Even so, pieces of systems could
still be moved around, from one system to the other. As each of the systems was
developed in more detail, the number of alternative systems could grow or contract
as appropriate.
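One way to picture the sorting we did by hand is as a conflict-partitioning problem: place each sketch in the first system containing nothing it conflicts with, and open a new system only when forced to. The sketch below is only a hypothetical illustration of that bookkeeping (the sketch names and conflicts are invented); we never automated this step.

```python
def partition(sketches, conflicts):
    """Greedily place each sketch into the first prototype system that
    contains nothing it conflicts with, opening new systems as needed."""
    systems = []  # each system is a list of mutually compatible sketches
    for sketch in sketches:
        for system in systems:
            if not any((sketch, other) in conflicts or (other, sketch) in conflicts
                       for other in system):
                system.append(sketch)
                break
        else:
            systems.append([sketch])
    return systems

# Hypothetical example: two competing list designs cannot coexist.
conflicts = {("indented list", "collapsible list"),
             ("boxed modules", "collapsible list")}
systems = partition(["indented list", "collapsible list", "boxed modules"],
                    conflicts)
```

Note that a greedy pass does not guarantee the true minimum number of systems; like our manual shuffling of paper, it just keeps the count small while pieces continue to move around.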
2.5.4. Matching User Requests to Developed Features
Once we had identified and developed a paper prototype of a system (however rough),
we could then take individual user-request and user-object sheets and paste them
on the prototype. In this way, we verified that important features were being
covered and that the interface was built on user information, rather than on
intuition alone. Simple as this may sound, pasting the sheets on the prototypes
represented the completion of this part of our journey, for all we meant to do was
to make our way across the room. Later, of course, user evaluation and iteration
would occur. However, for the time being, the UI War Room process had given us some
large and practical stepping stones to get from user information to first-draft
concrete designs.
Another approach to preventing the loss of design alternatives is to carefully record unpursued
alternatives and design decisions. While this is an important process (and I recommend it), it
doesn't help with the actual generation of design alternatives themselves. In this respect, another
alternate strategy would be to simply have more team focus in design. This might only succeed, however,
if team members were to work in isolation, since the same problem of losing alternatives is likely
to occur for teams, as well as for individuals (i.e., groupthink).
I use the analogy of a prism to illustrate the design approach I have taken. Just
as a prism splits light into a visible spectrum of color, so similarly I take raw
user data and split it into distinct elements. The types of elements (or design
perspectives) are user Information, Objects, Goals, and Actions. I create a table
showing which elements in each perspective are related to which others. I then draw
up rough sketches from each perspective (e.g., from the perspective of user Goals
first). Finally, I consolidate these different views of the same problem. Depending
on the nature of the task, this may result in a single "best-of" prototype or
it may result in several distinct alternatives. These activities are illustrated
in Figure 10.6.
Figure 10.6
To extract the different user elements from the task analysis, I perform a variation
of the noun-verb parse used in object-oriented design (Pressman, 1991, pp. 404-408).
Given an English description, the nouns in the sentence fall into either the
Information or Objects category, and the verbs fall into the Goals or Actions
category. Then, it is a relatively intuitive process of dividing concrete
objects from information and user goals from actions. (In keeping
with the prism analogy, I happen to use different colored highlighters to classify
each of these element types, by color, as I read through the text). The process
is illustrated in Figure 10.7.
Figure 10.7
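The parsing step above can be sketched in code. This is only an illustration of the bookkeeping, not a real natural-language parser: the lexicon below is hypothetical, and in practice the classification is done by a person reading the task analysis with colored highlighters.

```python
# Hypothetical lexicon: nouns split into Information vs. Objects,
# verbs split into Goals vs. Actions.
LEXICON = {
    "pressure": "Information",
    "tank": "Objects",
    "valve": "Objects",
    "monitor": "Goals",
    "open": "Actions",
    "close": "Actions",
}

def classify(text):
    """Bucket each known word of a task description into one of the
    four design perspectives; unknown words are left unclassified."""
    buckets = {"Information": [], "Objects": [], "Goals": [], "Actions": []}
    for word in text.lower().replace(",", " ").replace(".", " ").split():
        category = LEXICON.get(word)
        if category:
            buckets[category].append(word)
    return buckets

result = classify("Open the valve to monitor the tank pressure.")
```

The intuitive judgment calls (is "pressure" information or an object?) are exactly what the lexicon hides; a person makes those calls as they read.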
I have found not only that these particular categories can be used in a mutually
exclusive way, but that they also tend to be exhaustive (covering all of the
important pieces of information from the task analysis). I find, also, that this
categorization forces me to think about user elements that have not been explicitly
defined in the task analysis (e.g., an Action is mentioned in the task analysis,
but there is no mention of the Information needed for the user to confirm that it
had the desired effect).
As the designer, you may find that Information, Objects, Goals, and Actions are
not appropriate for the work you are doing. Though I have found these categorical
distinctions to be helpful in my own work, the results may be different in another
design context. I encourage you to try out your own user element classification
and perspectives from which to design. The same principles of design should still
apply, regardless of the categorization used.
Once having divided task analysis data into the four categories, I then take each
category individually and approach design from the perspective of that category.
I sometimes use brackets after an element in the Table of Relations to further define
the nature of the relationship or the options available. For example, I added the
options "open | closed | partially open", in brackets, after the "status of
valve" Information element in Table 10.3.
Table 10.3 Sample Entries in the Table of Relations, for each Perspective

User Element under Consideration: Pressure is dangerously high (information)
  Related Information: Current pressure; Tank level; At what level did it become dangerous?
  Related Objects: Pressure release valve; Tank
  Related Goals: Return pressure to normal level; Control tank level
  Related Actions: Decrease pressure

User Element under Consideration: Pneumatic valve (object)
  Related Information: Status of valve (open | closed | partially open)
  Related Objects: Tank
  Related Goals: Control tank level
  Related Actions: Open; Close

User Element under Consideration: Control tank level (goal)
  Related Information: Tank level
  Related Objects: Tank; Tank 1; Tank 2; Liquid 1; Liquid 2
  Related Actions: Increase; Decrease

User Element under Consideration: Start agitator (action)
  Related Information: Is agitator already started?
  Related Objects: Agitator
  Related Goals: Keep solute dissolved
In most cases, the related elements listed in Table 10.3 will already have been
identified earlier in the process. For instance, "tank level" was already listed
under the Information elements available, so when it came time to define relations
for the Goal "control tank level", it was just a matter of noting that the two
are related. In other cases, though, I identify new user elements while filling
in the table. For instance, while completing the Information column, I realized
that if the pressure is dangerously high, then the user may want to know: "How
long has it been dangerously high?" and "What is the cut-off level between
dangerous and normal?".
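The Table of Relations lends itself naturally to a simple data structure. The sketch below models one entry from Table 10.3; the function name and schema are my own illustrative choices, not part of the method as practiced on paper.

```python
PERSPECTIVES = ("Information", "Objects", "Goals", "Actions")

# One entry from Table 10.3, keyed by the user element under consideration.
relations = {
    "Start agitator (action)": {
        "Information": ["Is agitator already started?"],
        "Objects": ["Agitator"],
        "Goals": ["Keep solute dissolved"],
        "Actions": [],
    },
}

def related(element, perspective):
    """Look up the related elements of one perspective for a user element."""
    assert perspective in PERSPECTIVES
    return relations.get(element, {}).get(perspective, [])
```

An empty list, such as the Actions cell here, is itself informative: it can flag a user element whose relations in some perspective were never made explicit in the task analysis.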
Organizing the user elements in the table will sometimes also bring to light
elements that were hidden or assumed. If, for instance, blank space is being used
to convey information to the user, then this should be stated explicitly in the
design model. Some information will inevitably be conveyed by location, whether
this is a simple left-to-right hierarchy of importance, or some more complex form
of coding. Reviewing one's prototypes from this point of view, to see what
information is being conveyed by location, or color, or size, or shape, can be
instructive.6 It may also highlight instances in which the information might better
be conveyed in another manner.
A further example of this can be found in Collura et al. (1989). They specified in a design that
what they referred to as "hot" information would appear at the bottom of the screen, while "cold"
information (such as status information or summaries) would appear at the top. Making this
distinction between types of information is important, but a distinction such as this comes only
by stepping back and explicitly identifying user information, if not from past experience.
The Table of Relations may appear to be a bit lengthy, but keep in mind that each
table is completed for only one task at a time. It would, inarguably, be tedious
to draw up a table for all user elements of an entire interface at one sitting.
Instead, I draw up the table one task at time, then immediately move on to sketching
ideas from each of the perspectives on that task. Only then do I start drawing up
the Table of Relations for the next task. I find that this approach helps me to
concentrate on one particular perspective at a time, and allows me to focus on and
design separate alternatives in parallel.
Figure 10.8
I find that this method of design allows me to free myself from the constraints
of a particular perspective (by forcing me to focus on four different perspectives),
while I generate design ideas and solutions. Note, however, that some of the
interface elements will be repeated across the sketches. Though I am designing from
the perspective of one particular element at a time, I still put other element types
into the interface as necessary, as long as they are within the focus of my
perspective (e.g., in Figure 10.8 I use a tank object in the Action perspective
because it falls within the focus of the design for that perspective). The user
elements from other perspectives that do show up in the prototypes for the current
perspective should be those elements that appeared in the Related Elements
column of the Table of Relations (Table 10.3).
At this point, I tend to use only rough sketches of the design elements that are
laid out as I would like them to appear in the interface. In the example, for instance
(Figure 10.8), I presented a generic tank without measurement units or level
markings. Similarly, if I needed to represent trend information in the interface,
I might use a generic trend graph, in which the axes would be labeled but no scale
given, and a generic line or curve would be drawn on the graph to represent the
type of relationship.
3.4.2. Screen Design using Design Guides
Having drawn up some idea sketches from each perspective, I then want to resolve
in detail how the objects will appear on the screen. I have again found it useful
to draw up a table (such as the one shown in Table 10.4) for each task.
In Table 10.4, the Recommended Display Object for tank level (vertical gauge)
would be defined precisely (and pictured) in a Graphical Display Objects design
guide. This design guide illustrates and specifies, in detail, the properties of
graphical objects to be used in the interface. Its purpose is to maintain
consistency across the system and to reduce the need to redefine and redraw objects
that have already been created. If an object wasn't in the Graphical Display
Objects design guide, then the design guide would need to be updated to include
the new object. For objects that are used only once or twice, however, it may not
be worth the effort of including them in a design guide. In our power plant design,
we drew up formal design guides for the use of all designers across the plant (who
included both human factors specialists and software developers). In a smaller
operation, something less formal would do.
Table 10.4 Detailed Design Documentation from an Information Perspective

Information Required         | Type                      | Resolution                                   | Recommended Display Object (or Information Coding)
Tank level                   | Variable w/ precise value | 0-400 cm                                     | Vertical gauge
Pressure is dangerously high | Binary                    | Is dangerously high, is not dangerously high | Annunciation (Level 2)
Filter method being used     | Discrete                  |                                              | Highlight (using a box)
Plant state                  | Discrete                  |                                              | Annunciation, state diagram
You may notice that the Recommended Display Object for "filter method being used"
is actually a type of Information Coding (rather than a graphical object).
Guidelines on types of information coding to be used would be found in the
Information Coding design guide. This design guide presents recommendations for
the use of visual and auditory coding in an interface. Visual coding techniques
include color, shape, magnitude, positioning, texture, flashing, and alphanumerics.
Though I developed such a design guide specific to our power plant design
environment, more general guidance is abundant in the Human-Computer Interaction
literature (e.g., Banks and Weimer, 1992).
Using a table such as Table 10.4 encourages documentation of the design process.
Others can now refer to the tables I've created (in coordination with the design
guides) to find out why I chose a particular display object. In some cases, I would
give a choice of two or more recommended display objects.
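In spirit, a design guide acts as a lookup table from information type to recommended display object. The sketch below mirrors a few rows of Table 10.4; the dictionary keys and function are illustrative assumptions, not the actual format of our design guides.

```python
# Hypothetical Graphical Display Objects design guide as a lookup table.
DESIGN_GUIDE = {
    "variable with precise value": "vertical gauge",
    "binary": "annunciation",
    "discrete": "highlight (using a box)",
}

def recommend(information_type):
    """Return the recommended display object for an information type,
    or None if the design guide needs a new entry."""
    return DESIGN_GUIDE.get(information_type.lower())
```

A None result corresponds to the case discussed above: the object is not yet in the guide, so either the guide gets updated or the object is judged too rare to be worth the effort.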
In the idea-sketching phase, I arranged the location of rough user elements on a
prototype. Now I am able to fill in the details with specific graphical objects
and information coding choices, to create what I could legitimately call a screen
design. This could be done in the context of higher-fidelity computer prototyping,
if desired.
Figure 10.9
3.4.3. Consolidating Perspectives
At the same time as I was designing for individual tasks, other members of our human
factors group were working from the top-level, using their own methods to design
the interface from above.7 We were taking simultaneous top-down and bottom-up
approaches. There is a danger in this of having to redesign large parts of the
interface if they don't mesh. However, an element of risk often accompanies design
attempts, and this particular risk is accompanied by at least two positive features.
First, this way of doing things can easily be more time and resource efficient.
Two groups can work from different directions at the same time. Second, it again
supports the multiple-perspectives approach. If we did come up with different
solutions (from top and bottom) that did not mesh, then in the process we would
have provided ourselves with a new set of alternatives to test with our users. For
the sake of conservatism, communication between the two groups can help offset the
risks.
I have not yet tried designing the overall, high-level structure of an interface using the Design
Prism method, but there is no reason why the method couldn't be used for this purpose. For instance,
the overall structure of a power plant control room could be designed from the perspective of the
Information that an operator needs to run it. This might lead to an annunciation-centered control
room. On the other hand, if the overview were designed again from the perspective of Actions, this
might lead to an input-based control room. Consolidate the two and you get a typical modern control
room.
The consolidation stage is an important time to test alternatives with real users
of your system. End users are the ones who can best tell you which design works
for them. They can help you to decide which perspective, or combination of
perspectives, your final design should take.
4. SOME SUGGESTIONS FOR MAKING SIMULTANEOUS USE OF THE WAR ROOM AND DESIGN
PRISM METHODS
So far I have presented the Design Prism and the UI War Room as entirely separate
entities. This form of presentation is as much for historical reasons as anything
else, since these two approaches were grounded firmly in each of two separate design
contexts. In the future, I plan to experiment with ideas for making simultaneous
use of both methods. For now, I will just speculate on how I think this might be
done. I leave the interpretation rather loose. It should be read in that spirit
and in the spirit of maintaining multiple approaches/perspectives in design.
Let us start with user requests, as we did in the first stage of the original UI
War Room. Paste these up on the wall. Prioritize the requests by team consensus
or by another method if you prefer (e.g., you might want your priorities to reflect
proportionately the number of users who requested an item, either explicitly or
implicitly).
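If you do want the priorities to reflect how many users asked for each item, the tally is straightforward. The sketch below assumes a hypothetical request log with one entry per user mention.

```python
from collections import Counter

def prioritize(requests):
    """Order user requests by how many users asked for each one,
    most-requested first (a proxy for the team-consensus ranking)."""
    counts = Counter(requests)
    return [item for item, _ in counts.most_common()]

# Hypothetical request log: one entry per user mention.
order = prioritize(["search", "undo", "search", "print", "search", "undo"])
```

Requests gathered implicitly (inferred from observed difficulties rather than stated outright) would simply be added to the same log before counting.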
Instead of extracting and organizing user objects, as we did in the war room, this
time parse the detailed task analysis data to yield Information, Objects, Goals,
and Actions. This changes the User Object Wall into a Perspectives Wall.
Make up your own perspectives if you want (but don't use too many or they'll become
difficult to manage). Assign a color to each perspective, and write out every user
element (Information, Object, Goal, or Action) onto its appropriately colored piece
of paper. Organize these user element sheets on the wall creating four separate
hierarchies.
Create a high-level overview of the system from each of the perspectives, taking
the organization of user elements on the wall as the basis for your design. Rearrange
the elements to see how a different high-level view might change things. The view
(or views) you create will be continually refined as more ideas come in from
bottom-up idea generation processes.
Meanwhile, proceed on the whiteboard. Start bottom-up prototyping. Take one of the
most important user requests, or take an individual user task, and gather together
the user elements for that task/request. Focus on one color at a time. Lay out all
the elements of that color on a table or a wall. Map out everything you think the
user needs to understand that group of elements, continually adding elements of
other colors to make the picture clearer. Make up new elements if you find that
there is something missing (but don't forget to make sure that these elements get
posted back up on the Perspectives Wall, initialed with your name). Arrange the
pieces of paper hierarchically or in sequence (or a bit of both) to show
relationships between them. Write any additional information you need on or
in-between these pieces of paper (use yellow stickies if it helps) to further define
these relationships.
Get other people in the room with you. Use team members: bounce ideas off of each
other. Get users to come in and help you out. If this isn't possible, then use
ideas you gathered from them when you went to see them during the task analysis.
You don't have to stick with their ideas if they don't work. Look at previous
versions of your own software. What was good about them? What didn't users complain
about? Have a peek at competitive products. Take ideas from software that has
nothing to do with your own. If you're stuck, try out something you've seen in
another chapter of this book to help get yourself unstuck. Ask a colleague for an
idea. Put on a funny hat.
Take the sketches you have created and plug them into your high-level overviews.
If they don't fit, then start a new high-level prototype. Keep iterating these
top-down views to maintain the smallest (or most sensible) number of mutually
exclusive prototype systems you can.
Once you have some of the main user requests covered, you are going to want to test
out your ideas with users. However, at the moment you only have rough idea sketches,
not concrete screen designs. Maybe you have the outline of a table, and possibly
some of its headings, but nothing inside the table. You might want to test your
crude sketches with a user or two (in fact, it's probably a good idea), but at
some point you are going to want to refine them a little. Here's where the Design
Guides come in.
Build up a Graphical Display Objects design guide. Start with a tentative guide
early on in the process. Leave it rough, as sketches. Continue to iterate this design
guide until the objects in it are well defined. Check it out with users, in isolation
from the interface. What makes sense to them? Fill in your rough idea sketches with
objects from this design guide, and continue to feed new objects back into the guide.
Start to capture design rules that you are following, so that your colleagues can
refer to them later on. Look up human factors research relevant to your users'
environment and write up some Information Coding guidelines for your interface.
Refer to the look-and-feel style guide for your interface platform. Apply your
knowledge of Human-Computer Interaction. Record the decisions you make and why you
are making them.
In the introductory chapter of the book Taking Software Design Seriously, David
Wroblewski considers the construction of human-computer interfaces as a craft. In
keeping with this notion, perhaps then apprenticeship is one of the best methods
for teaching this craft. I still firmly believe that we need to try to push ourselves
to create well-defined and contextualized tools that can be used by beginners (and
experts) as stepping stones toward the other side of the gap. However, I also
consider myself fortunate to have started my own career in an apprenticeship-type
position where I could supplement the structure provided by a few walls and a
handful of articles with somebody else's magic.
6. ACKNOWLEDGMENTS
I'd like to acknowledge the help of Tim Dudley, my mentor from Bell-Northern
Research (now Nortel). As senior user interface designer on our team, he was
responsible for procuring and directing the setup of our war room and originating
the overall process. I'd also like to thank Tom Carey, at the University of Guelph,
for reviewing drafts of this document and for suggesting new ways of approaching
the topic that I had not previously seen. The idea for Table 10.4, "Detailed Design
Documentation from an Information Perspective", came from Mark Feher, at Atomic
Energy of Canada Limited.
7. REFERENCES
Anderson, J. R., Cognitive Psychology and its Implications, 3rd ed., W. H.
Freeman and Company, New York, 1990.
Banks, W. W., Jr. and Weimer, J., Effective Computer Display Design,
Prentice-Hall, Englewood Cliffs, New Jersey, 1992.
Cohen, L., Quality function deployment: an application perspective from
Digital Equipment Corporation, National Productivity Review, 7(3), 197-208,
1988.
Collura, T. F., Jacobs, E. C., Burgess, R. C., and Klem, G. H., User-interface
design for a clinical neurophysiological intensive monitoring system, 1989.
Kieras, D. E., Towards a practical GOMS model methodology for user interface
design, in Handbook of Human-Computer Interaction, Helander, M., Ed.,
Elsevier Science Publishers (North-Holland), New York, 1988, 135-157.
MacLean, A., Young, R., Bellotti, V., and Moran, T., Questions, options, and
criteria: elements of design space analysis, Human-Computer Interaction, 6,
201-250, 1991.
Pressman, R. S., Software Engineering: A Practitioner's Approach, 3rd ed.,
McGraw-Hill, New York, 1991.
Wroblewski, D. A., The construction of human-computer interfaces considered
as a craft, in Taking Software Design Seriously: Practical Techniques for
Human-Computer Interface Design, Karat, J., Ed., Academic Press, New York,
1991, ch. 1.
Chapter 11
Transforming User-Centered Analysis into User Interface: The
Design of New-Generation Products
Colin D. Smith
Nortel Technology (Northern Telecom), Ottawa, Ontario, Canada
email: [email protected]
Table of Contents
Abstract
1. Introduction
1.1. Example of a New-Generation Product
2. Exploratory Design Stage
2.1. Visualization and Ideation
2.2. Scenarios
2.2.1. Role of Scenarios in the Exploratory Stage
2.3. Product Concept Hypotheses
2.4. Output of the Exploratory Design Stage
3. Concept Refinement and Analysis Stage
3.1. Limitations of Task Analysis for New-Generation Product Concepts
3.2. Antecedent Products: Task Analysis of the Current Work Model
3.3. New Generation Products: User Values
3.4. Output of Concept Refinement and Analysis Stage
4. Formal Design Stage
4.1. Role of Scenarios in the Formal Design Stage
4.2. Role of Prototypes in the Formal Design Stage
4.2.1. Low-Fidelity Paper Prototypes
4.2.2. High-Fidelity Interactive Simulations
4.3. The Conceptual Model
4.3.1. Task Model Dialog Design and Detail Design
4.3.2. Metaphors
4.3.3. Hierarchy of Control and Metaphors
4.3.4. Animation: Extending the Range of a Metaphor
4.4. Structured vs. Unstructured Interface
4.5. Heterogeneous User Group
5. Managing and Documenting the Iterative Design Process
5.1. Potential Problem with Iterative Process
5.2. Decision Criteria for Translating User Information into a User Interface
5.3. Design Rationale and Design Intent Documentation
6. Summary
7. Acknowledgments
8. References
ABSTRACT
The challenge of "Bridging the Design Gap" is examined in a case study of the
design of a new wireless personal communication product. This chapter discusses
applicable design methods, their relative strengths and weaknesses, how much
information is needed before proceeding, and how to get that information.
A multidisciplinary team used an iterative design approach to (1) explore, (2)
discover, (3) define, (4) design, and (5) evaluate new product opportunities. A
three-stage design transformation process is recommended: Exploratory, Refinement
and Analysis, and Formal Design. Scenarios are a key device used to bridge the stages.
User feedback is solicited to guide the evolution of the design in all stages.
The purpose of the Exploratory Design Stage is to identify and conceptualize
potential new high-value products and services that will satisfy key user needs
not being met by today's solutions. The goal of the Refinement and Analysis Stage
is to verify the key user values and define the attributes required of a successful
product. The Formal Design Stage is characterized by the design of the user's
conceptual model, which is communicated in the interface through dialog design as
well as the use of metaphors. At this stage, low- and high-fidelity prototypes are
built and subjected to usability testing. High-fidelity prototypes are also used
to capture the design intent and communicate it to product implementation partners.
1. INTRODUCTION
The methods described in this chapter apply to the design of exploratory
new-generation products. The techniques used for the design of new-generation
products are different from those used to upgrade an existing product, add a new
product to an existing market (e.g., designing another spreadsheet application)
or add a new product to extend an existing suite of products. The characteristics
of new-generation projects include:
No defined product direction given to the design team at the beginning of the project (the team both
defines the problem space and the product requirements).
No clear understanding of user requirements.
No clear definition of who will use the product.
Involves new or not-yet-existing hardware and software technology.
Constantly evolving product features.
No comparable existing product to benchmark against.
These characteristics have a number of consequences for the design process. The
Gap between analysis of user requirements and the design of a new generation product
is far too great to be bridged in a single transformation. With new-generation
products, the target market, the user values and tasks, and the technologies are
all typically unknown or poorly understood. Thus, the usual starting stage of
understanding user requirements is preceded by an Exploratory Design Stage. The
combined Exploratory and Refinement and Analysis Stages are much longer relative
to the other types of projects, and the level of early user involvement is much
higher.
In the design process described in this chapter, there is not a single large Gap
to be bridged between analysis and design. Design occurs before analysis (in the
Exploratory Design Stage), during analysis (in the Concept Refinement and Analysis
Stage), and after analysis (in the Formal Design Stage). However, the three stages
are not cleanly separated; rather there is a gradual shift of emphasis from one
stage to another, with some overlap between the stages. Different techniques are
used as iterative stepping stones, to move the design through the three stages (see
Figure 11.1). This chapter discusses the techniques, when they are most useful,
their relative strengths and weaknesses, how much information is needed before
proceeding, and how to get that information.
Figure 11.1
Figure 11.2
The Exploratory Design Stage is characterized by confusion and unease within the
design team. Nothing is settled. Although the design team has a set of constraints,
many are often no more than accidents of circumstance. Typically, the design team
will "... have some indications of appropriate directions or application areas which
may have been provided by upper management or distilled from the corporate
zeitgeist..." (Erickson, 1995, p. 43).
The Exploratory Stage may be pursued by a variety of groups or individuals,
including a marketing group, a product manager, an executive, or an entire design
team. In the design of the Orbitor, the Exploratory Stage was done by a
multi-disciplinary design team which included User Interface Designers, User Needs
Assessment Professionals (Cognitive Psychologists), Industrial Designers, and
Mechanical Engineers in partnership with an Advanced Hardware/Software Technology
Group and a Marketing Group.
The emphasis at this phase is on the creation of a large number of diverse sketches
illustrating new product value and functionality without any attempt at critical
analysis. The use of analysis techniques is avoided because they can overly
constrain the creative thinking of the design team. Also, little importance is
attached to the first conceptual sketches of a design; what matters is
articulating and communicating ideas.
The exploratory UI design process is similar to the classic ideation techniques
used by architects, industrial designers, and graphic designers. Many of the
concept-generating techniques perfected in these professions are directly
transferable to the design of new-generation user interfaces. For example, using
the ideation process a designer will typically sketch many rough thumbnail concept
drawings which reflect a wide variety of different ideas. The better thumbnail
concept-sketches are redrawn at a larger scale, and then continuously refined using
trace paper overlays. The act of sketching in itself is a design and learning tool
(see also Hanks, 1977; Porter, 1979; Ching, 1979).
2.2. SCENARIOS
With new-generation products, scenarios are a key device used to bridge the
Exploratory, Refinement and Analysis, and Formal Design Stages. Scenario
techniques have been applied in many different ways, and indeed the term
scenario itself has many interpretations (see Carroll, 1995). This chapter will
focus on user interaction scenarios, which are narrative descriptions of what
people do and experience as they try to make use of a product and its applications.
One of the characteristics of new-generation products is that potential users can
have difficulty understanding the proposed new product or articulating what they
would want the new product to do. Unlike lists of features, scenarios provide
potential users with a feel for how the product would perform in a specific situation
or context. Scenarios include a concrete description of the user's activities,
focusing on a particular instance of use. Scenarios are a user-centric perspective
on task/application interaction: what happens, how it happens, and why it happens.
Particularly with consumer products, the scenario should encompass and present a
totally integrated package for evaluation by the user. This package can include
the interface, the physical product, the context of use, and the various users.
The user-centric perspective is important in that the user's model and the
designer's model are usually different. Spool (1996) uses the example of Walt
Disney World's Haunted House: "The designers decide what ghosts, lighting,
screams and general spookiness the guests would experience... If the guests come
out of the ride thinking that the 180 watt surround sound speaker system really
added to the ride, then the ride itself has failed."
Scenario-based concepts are designed to provoke reaction and feedback from users. The concepts are not
designed to represent a final product; rather they are designed to represent
alternative design solutions (see Figure 11.3).
Figure 11.3
The subjects should be chosen to represent a sample of the target market population.
"Lead users," "power users," or "early adopters" (e.g., real estate agents in
the case of pagers and cellular telephones) should be included in the research.
It can be very helpful to study less proficient potential users because they will
give the UI designer insights into what design concepts are workable for a
heterogeneous user group.
The goal of the Refinement and Analysis Stage of development is to verify the user values and define the product
attributes.
Task Analysis (columns: Function, Associated Element, Location of Use, User
Priority/Frequency)

Incoming call/message:
    Call alert
    Adjust ringer volume during alert
    LINK softkey or feature-programmed softkey
    Terminate calls and exit the user from any task (End)
    Enter extra numbers for IVR systems (Dialpad)
    Switch to headset (Headset jack)
    Switch to handsfree (Handsfree)
    Adjust speaker volume (Volume switch)
    Mute
    Dial out Directory items while offhook (TBD)

Using radio link (general):
    Out of range warning
    Battery low warning

Initiating a call:
    Dial a call, active dial or predial (Dialpad, Talk, flap, or DIAL softkey)
    Dial from Directory
    Dial back a caller from the Messages List
    Redial a number

Note: User may be interacting with the device with or without a headset.
Scenarios work well here because they outline a specific task with specific goals. Based upon customer feedback,
the key customer values and the required product attributes become known to the
design team.
Storyboard scenarios can be developed into video-based scenarios. Video is useful
to seamlessly and realistically convey new product concepts to end users, possibly
resulting in better user feedback.
Video can be a particularly effective technique because:
The designer is not constrained by prototyping software.
The designer does not have to resolve all of the user interface
details.
Futuristic technology can be easily conveyed (Tognazzini, 1994) (e.g.,
natural voice recognition).
Concepts are more immersive and thus more readily understood by most
users.
A video is the equivalent of an architect's scale building model. It is understood
at some level by all the clients, and captures the overall look and feel of the
building without necessitating any premature detailed design work.
In the Orbitor project, storyboards and a video were created using scenarios to
illustrate key capabilities. The scenarios depicted the product in context and
permitted a comparison with the use of existing products (to give users a baseline
for evaluation). Concepts were also communicated to users with physical models.
With consumer electronic products it is important to assess both the display-based
interface and the physical interface together in the user's environment. As Alan
Kay noted over 28 years ago in his visionary design of the Dynabook laptop computer,
"even the computer's weight is very much part of the user interface" (Davidson,
1993).
Figure 11.4
From the user feedback to scenario-based concepts, the team can also extract some
of the following:
In addition to this information, other factors are equally if not more important
in understanding how users do real work. The design team might also consider
the following:
Understanding any additional constraints in the environment in which the
work is done (e.g., office, home, car).
Noting if this task is done while doing another related or unrelated task;
e.g., driving a car, working on computer, writing, participating in a
business meeting.
Identifying other people who may be involved in completing the task.
Identifying who initiates which task.
Understanding what defines closure of the task for the user.
Understanding significant ergonomic issues; e.g., is this a one-handed
or two-handed task? Is this task done while sitting at a desk, standing,
or walking?
Table 11.3 Example of Detailed Attributes from the Orbitor Project
A substantial amount of user interface design work will already have been completed prior to the Formal
Design Stage, in both the Exploratory and Refinement and Analysis Stages. As well,
when the design team reaches the Formal Design Stage, they will have internalized
much of what is required to deliver a successful final product (this advocates
having a single design team participating in and completing all of the stages of
the design process).
The discovery of the human limitations of multitasking early in the Formal Design
Stage was highly beneficial.
Paper prototyping was also used to explore critical ergonomic issues (text
legibility, size of icons and touch targets) associated with a small display
interface.
Figure 11.5
Paper prototype.
Informal usability testing was conducted with local users (often fellow employees)
on an ongoing basis. This was a way of receiving fast and inexpensive feedback to
support the ongoing iterative design process. Overall, scenario-storyboarding and
paper prototyping were the most useful and widely used techniques for rapid design
of new-generation products such as Orbitor.
4.2.2. High-Fidelity Interactive Simulations
As the design team begins to focus on the optimized and preferred user interface,
the paper prototypes are transferred to an interactive computer simulation. Some
of these simulation applications include Macromedia Director, SuperCard, and
Visual Basic. A high-fidelity interactive simulation is used to integrate all of
the disparate elements of the paper prototype. During the transition stage between
paper and a computer simulation, it is fastest to revisit problem areas of the user
interface simulation and first refine them using paper prototyping. This takes
advantage of the speed of paper, while capturing only the output in a more finished
form.
Figure 11.6
An interactive simulation is further used for formal usability testing with end
users. The information received from usability test sessions is a key element in
the design decision process, and ongoing usability feedback is required for
efficient iterative design. The subject of detailed usability testing is outside
the focus of this chapter (see Nielsen, 1993; Wiklund, 1994; Bauersfeld, 1994; Hix
and Hartson, 1993).
In the Orbitor project, seven formal usability test sessions were conducted with
over 50 subjects in total, to refine the various elements of the interface. The
interactive and realistic nature of the simulation served to convincingly
demonstrate the strengths and weaknesses of the Orbitor interface as it evolved.
The implications of new developments in the simulation could be quickly grasped
both by users who had seen previous iterations and those encountering the simulation
for the first time. The simulation also proved an effective tool for communicating
the intricacies of the user interface to managers, partners, and potential
customers.
Specifying more complex interaction behavior may require the use of more sophisticated tools, such as flow charts, state transition
diagrams, or an on-line dialogue design and management tool.
With user interfaces for standard operating systems (Macintosh, Windows, etc.) the
designer can refer to style guides to help with the detailed design. With
new-generation products such as Orbitor, the detailed design comprises a significant
proportion of the new look and feel of the product. It also constitutes a major
amount of the design effort. For this type of project, graphic and visual
interaction designers should be part of the design team from the Exploratory Stage
onward.
4.3.2. Metaphors
A metaphor is used to map an object or action in an interface to something else
the user might already understand. Metaphors should communicate the user's
conceptual model of the new interface by expressing it in terms of other objects
or actions with which the user is already familiar.
Metaphors should be selected for their appropriateness to the target market and
they should also be matched to the experiences and capabilities of typical users.
All of the metaphors used in an interface should also be unified with respect to
each other (i.e., they should follow a common theme). For new-generation products,
the goal is to build upon and extend users' experience with existing products.
Metaphors, particularly visual metaphors (e.g., graphical icons), are useful for
improving the initial discoverability of an interface for new users. While the user
interface concept being conveyed may be either an object or an action, it is easier
to communicate a metaphor that is related to real-world physical objects. The real
value of a graphical metaphor is not that the user intuitively understands it from
prior experience with the mechanical real-world equivalent, but that it is both
simple to discover and memorable. Well-designed graphical icons can make use of
recognition memory.
One of the problems with overly explicit metaphors is that they become constrained
by the properties of the physical real-world object. Thus, a major functional gap
evolves between the capabilities of the application and the more limited
capabilities of real-world objects. Some designers have shown that users do not
perceive explicit graphic metaphors and that they will perform the same with or
without them (Spool, 1996).
A well-designed metaphor should strongly encourage exploration, discovery, and
learning without penalty (e.g., including the ability to undo mistakes and try
again). For products which will be used on an ongoing basis, the interface should
be designed to facilitate the transformation of new users into expert users (who
do not require explicit graphical metaphors).
Using a brainstorming technique (see Michalko, 1991) with a team of designers is
a fast way of developing a range of metaphors. For new generation products,
antecedent products (studied in the Refinement and Analysis Stage) can be a good
source of metaphors. As computers become ubiquitous, established ideas in the
desktop computing environment can also be a source of metaphors.
4.3.3. Hierarchy of Control and Metaphors
The interface design should be based upon an understanding of which are the highest
priority features and of the order in which a user would want to move through the
steps of any given task. The interface should be organized so as to maximize the
discoverability and perceived ease of use of high priority features. Perceived ease
of use is defined to include a minimum number of key presses, a minimum number of
dialogue screens, and a minimum amount of time and effort required to complete a
task and achieve a goal (this is verified in usability testing).
The location of a feature is determined by its importance to the user and the
frequency of access, as verified in the Refinement and Analysis Stage. Frequently
used features are allocated to the top level of the interface. In the case of the
Orbitor interface, priority is given to the most important time-critical task:
real-time voice communication. Highly used features are either located on the first
screen or on hard keys that can be accessed directly at any time (e.g., navigation
arrow keys). Less-used features are located on-screen, one layer down (e.g.,
telephone configuration controls were located in a folder within the filing cabinet
environment; see Figure 11.7).
Figure 11.7
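The allocation rule just described, where importance and frequency of use determine a feature's place in the hierarchy, can be sketched as a small function. This is a minimal illustration only: the feature names, priority values, and level labels below are hypothetical, not the actual Orbitor data.

```python
# Sketch of the feature-allocation rule: time-critical features go on hard
# keys, frequently used high-priority features surface on the first screen,
# and everything else sinks one layer down. Names and thresholds are
# illustrative assumptions, not the real Orbitor task-analysis data.

def allocate_feature(priority: str, frequency: str, time_critical: bool) -> str:
    """Return the interface level where a feature should live."""
    if time_critical:
        return "hard key"          # reachable directly at any time
    if priority == "high" and frequency == "high":
        return "first screen"      # visible without any navigation
    return "one layer down"        # e.g., a folder in the filing cabinet

# Hypothetical entries in the style of the task-analysis worksheet.
features = {
    "answer incoming call": ("high", "high", True),
    "redial a number":      ("high", "med",  False),
    "ringer configuration": ("low",  "low",  False),
}

for name, (prio, freq, critical) in features.items():
    print(f"{name}: {allocate_feature(prio, freq, critical)}")
```

The point of the sketch is that placement is a deterministic consequence of the priority and frequency data gathered in the Refinement and Analysis Stage, not an aesthetic choice made during layout.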
The first screen should support the most important frequent task. With the Orbitor,
a system of spatial, object-based metaphors was used to create an understandable
hierarchy of primary and secondary foci of attention. The size of an object, the
location, and the level of detail were all used to communicate the hierarchy. The
most important objects were large, highly detailed, and located in the foreground.
Smaller, less-detailed objects in the background were still easily accessible,
despite the visual cue indicating their reduced importance. Also, within any screen
there was generally a top-to-bottom, left-to-right hierarchy (building upon the
users' experience with the typical layout of newspapers, books, and other
paper-based graphical information).
At the highest level, an environment metaphor was used to logically organize the
various features into three groups (Call Environment, Message Center Environment,
and Filing Cabinet Environment). This metaphor was chosen to make it explicit to
the user that there were three distinct modes in the interface. For example, the
Call Environment was used only for real-time voice or note communication. The
Message Center Environment was used only to view stored voice messages, ink messages,
or text messages. We wanted the user to view these not as abstract or arbitrary
modes, but as spatially distinct places (environments): "I know I am in the Message
Center because I see messages and they look graphically distinct from objects in
the Call Environment or the Filing Cabinet Environment." Within each of the
environments, the interface was modeless, i.e., any input from the user had the
same consistent response anywhere within that environment. The user could also
navigate at any time between any of the three environments without any destructive
effects.
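The two properties described above, modeless behavior within an environment and non-destructive navigation between environments, can be sketched as a minimal state machine. The environment names follow the chapter; the `pending_work` field and the handler logic are assumptions added purely for illustration.

```python
# Sketch of the three-environment model. Navigation between environments is
# always allowed and never discards user state (non-destructive); within an
# environment, input handling does not depend on any hidden sub-mode.
# The class and field names are hypothetical.

ENVIRONMENTS = {"Call", "Message Center", "Filing Cabinet"}

class Interface:
    def __init__(self):
        self.environment = "Call"
        self.pending_work = []     # state that must survive navigation

    def navigate(self, target: str) -> None:
        if target not in ENVIRONMENTS:
            raise ValueError(f"unknown environment: {target}")
        # Switching environments only changes where the user is looking;
        # it never destroys in-progress work.
        self.environment = target

ui = Interface()
ui.pending_work.append("half-written note")
ui.navigate("Message Center")
ui.navigate("Call")
assert ui.pending_work == ["half-written note"]  # nothing was lost
```

The design choice this captures is that "mode" is expressed spatially (which place the user is in) rather than behaviorally (how the same key behaves differently), so moving between places carries no penalty.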
At the next level down within each environment an object metaphor was used; all
communications links were represented by graphical objects. For example, a voice
call between two people was represented by a single "call object" in the
foreground. This call object had text to identify the caller's name and
telephone number, an icon to identify the location (home, office, cellular, other),
and an icon to represent the type of link (e.g., a voice call on Hold, an active
voice call, message, etc.).
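A call object of this kind can be sketched as a simple record. The chapter does not specify any underlying data structure, so the field names and example values below are hypothetical illustrations of the information the object carries.

```python
# Sketch of a "call object": a graphical object carrying the caller's name
# and number, a location icon, and a link-type icon. Field names and values
# are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class CallObject:
    caller_name: str
    caller_number: str
    location: str      # e.g., "home", "office", "cellular", "other"
    link_type: str     # e.g., "active voice call", "voice call on hold", "message"

call = CallObject("A. Reader", "555-0100", "office", "active voice call")
print(f"{call.caller_name} ({call.location}): {call.link_type}")
```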
4.3.4. Animation: Extending the Range of a Metaphor
Animation is a technique that can be used to extend the useful range of a small
and simple set of icons. For example, a static telephone icon could indicate a free
line, while an animated version of the same icon could signal an incoming call.
Animation can also be used to provide visual cues to the internal structure of the
interface, as well as providing feedback to the user as to what is happening. In
this way, users can discover the function and location of various objects from the
interface itself. With the Orbitor, for example, if a user did not answer an incoming
call because she was busy writing a note, the incoming call object would be
automatically shrunk and moved to the Message Center. At a later time the user would
know exactly where to go to look for a message left by the caller.
Figure 11.8
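Both uses of animation described above, a single icon whose state determines whether it animates and the automatic relocation of an unanswered call to the Message Center, can be sketched as follows. The function names and the string-based representation are assumptions for illustration only.

```python
# Sketch of the two animation uses: (1) one telephone icon rendered
# statically or animated depending on line state, and (2) an unanswered
# incoming call shrunk and filed in the Message Center so the user later
# knows where to look. All names are hypothetical.

def phone_icon(line_state: str) -> str:
    # The same icon animates only when a call is coming in.
    if line_state == "incoming":
        return "phone (ringing animation)"
    return "phone (static)"

def on_call_unanswered(call: dict, message_center: list) -> None:
    call["size"] = "small"        # shrink the call object...
    message_center.append(call)   # ...and move it to the Message Center

message_center = []
on_call_unanswered({"caller": "B. Caller", "size": "large"}, message_center)
assert message_center[0]["size"] == "small"
print(phone_icon("incoming"))
```

The visible motion of the object into the Message Center is what teaches the user the structure of the interface; the relocation itself is just a state change plus an animated transition.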
One of the key project management challenges is to keep the iterative process on
track and moving toward a more usable design. For an iterative design process to
work within the constraints of the product development cycle, two conditions must
be met. First, the initial design must be reasonably close to the final product
design. Second, successive iterations must converge on the final improved product
at acceptable speed to meet project delivery deadlines.
Table 11.4 Example of Hierarchical Decision-Making Process
Level 1. Does the new design meet the list of user values and detailed product
attributes?
Level 2. Is the user's conceptual model clearly communicated in the design?
Are the metaphors appropriate to the user and to the task?
Does the flow of the dialogue in the user interface match the user's task
flow?
Level 3. Does the interface follow basic principles and guidelines for good
UI design?
Some of these guidelines, as suggested by Norman (1988) and Nielsen (1993), include:
Provide meaningful feedback to the user's actions
Provide visual cues (affordances) to the user about the function of an
object
Show an understandable relationship between a control and its function
(mapping)
Use a simple and natural style of dialogue and interaction
Provide a consistent method of interaction for the user
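The hierarchical review above can be sketched as a sequence of gates: a design change is checked against the levels in order and fails at the first level whose questions are not all satisfied. The questions are paraphrased from the text; the pass/fail representation is a hypothetical illustration.

```python
# Sketch of the hierarchical decision-making process. A proposed design is
# evaluated level by level; a failure at a higher level makes lower-level
# polish irrelevant. The dict-of-booleans representation is an assumption.

LEVELS = [
    ("Level 1", ["meets user values and product attributes"]),
    ("Level 2", ["conceptual model clearly communicated",
                 "metaphors appropriate to user and task",
                 "dialogue flow matches the user's task flow"]),
    ("Level 3", ["follows basic UI principles and guidelines"]),
]

def review(design_checks: dict) -> str:
    """Return the first level at which the design fails, or 'pass'."""
    for level, questions in LEVELS:
        if not all(design_checks.get(q, False) for q in questions):
            return f"rework at {level}"
    return "pass"

checks = {q: True for _, qs in LEVELS for q in qs}
assert review(checks) == "pass"
checks["metaphors appropriate to user and task"] = False
assert review(checks) == "rework at Level 2"
```

Ordering the gates this way keeps iteration convergent: disputes about feedback wording (Level 3) are never allowed to preempt a failure to meet the user values (Level 1).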
The key advantage to this method is that the user interface can be rapidly iterated
without having to update a separate design specification document. Also, it is
relatively easy to ensure consistency between the user interface design intent and
the interface of the final product as it is being coded (these may progress
concurrently). This will considerably shorten the development time for the product.
6. SUMMARY
The methods described in this chapter apply to the design of new-generation products.
They are illustrated by a case study of a new wireless personal communication
product called Orbitor. The Orbitor team had to determine the set of key user values
and relevant product attributes and integrate these into a superior small, mobile
communicator (see Figure 11.9). The list of potential features and services was
long, but there were many constraints. The small display size, the need to use the
product while mobile, and the limitations of multitasking identified in
scenario-creation sessions meant the Orbitor could not be all things to all people.
Figure 11.9
7. ACKNOWLEDGMENTS
The development of the Orbitor user interface was the joint effort of numerous
individuals within the Corporate Design Group at Nortel Technology. In particular,
the author wishes to acknowledge the effort of the other members of the UI design
team: Brian Beaton for visual interaction design and Bruce Stalkie for simulation
development. The author also wishes to thank Jeff Fairless, Des Ryan, Mike Atyeo,
Gord Hopkins, Arnold Campbell, Peter Trussler, and John Tyson for the contributions
they made to this paper.
8. REFERENCES
Bauersfeld, P., Software By Design, M&T Books, New York, 1994.
Carroll, J.M., Ed., Scenario Based Design, John Wiley & Sons, Toronto, 1995.
Ching, F.D.K., Architecture: Form, Space and Order, Van Nostrand Reinhold, New York,
1979.
Collins, D., Designing Object-Oriented User Interfaces, Benjamin/Cummings,
Redwood City, CA, 1995.
Cooper, A., About Face: The Essentials of User Interface Design, IDG Books, Foster
City, CA, 1995.
Davidson, C., The man who made computers personal, New Scientist, 138, 30-36, 1993.
Erickson, T., Notes on design practices: stories and prototypes as catalysts for
communication, in Scenario Based Design, Carroll, J., Ed., John Wiley & Sons,
Toronto, 1995, 37-58.
Hanks, K. and Belliston, L., Draw: A Visual Approach to Thinking, Learning and
Communicating, William Kaufmann, Los Altos, CA, 1977.
Hix, D. and Hartson, R. H., Developing User Interfaces: Ensuring Usability Through
Product and Process, John Wiley & Sons, Toronto, 1993.
Laurel, B., Ed., The Art of Human-Computer Interface Design, Addison-Wesley, Menlo
Park, CA, 1990.
Michalko, M., Thinkertoys: A Handbook of Business Creativity for the '90s, Ten
Speed Press, Berkeley, CA, 1991.
Nielsen, J., Usability Engineering, Academic Press, Boston, MA, 1993.
Norman, D. A., The Design of Everyday Things, Basic Books, New York, 1988.
Porter, T., How Architects Visualize, Van Nostrand Reinhold, New York, 1979.
Potts, C., Using schematic scenarios to understand user needs, Proceedings of
Designing Interactive Systems (DIS '95), Ann Arbor, MI, August 23-25, 1995,
247-256.
Spool, J. M., Users do not see metaphors, discussion in the comp.human-factors
Usenet group, March 1996.
Tognazzini, B., The Starfire video prototype project: a case history, Proceedings
of CHI '94, Addison-Wesley, Reading, MA, 1994, 99-105.
Wiklund, M. E., Usability in Practice, Academic Press, New York, 1994.
Winograd, T., Ed., Bringing Design to Software, Addison-Wesley, Menlo Park, CA, 1996.
Zetie, C., Practical User Interface Design, McGraw-Hill, London, 1995.