Chapter One: Introduction To Human Computer Interaction: Software Engineering Department, Wolkite University
1.1. Introduction
Human-Computer Interaction (HCI) was previously known as man-machine studies or man-machine interaction. It deals with the design, execution, and assessment of computer systems and related phenomena that are intended for human use.
HCI applies in any discipline where there is a possibility of computer installation; several areas in which it is of particular importance, such as business systems, safety-critical systems, and accessibility, are discussed later in this chapter.
The world’s leading organization in HCI is ACM SIGCHI, the Association for Computing Machinery’s Special Interest Group on Computer-Human Interaction. SIGCHI defines computer science to be the core discipline of HCI. In India, HCI emerged as an interaction proposal, mostly based in the field of design.
1.2. Definition
HCI (human-computer interaction) is the study of how people interact with computers and to what
extent computers are or are not developed for successful interaction with human beings. A
significant number of major corporations and academic institutions now study HCI. Historically
and with some exceptions, computer system developers have not paid much attention to computer
ease-of-use. Many computer users today would argue that computer makers are still not paying
enough attention to making their products "user-friendly." However, computer system developers might argue that computers are extremely complex products to design and make, and that the demand for the services that computers can provide has always outpaced the demand for ease of use.
One important HCI factor is that different users form different conceptions or mental models about
their interactions and have different ways of learning and keeping knowledge and skills (different
"cognitive styles" as in, for example, "left-brained" and "right-brained" people). In addition,
cultural and national differences play a part. Another consideration in studying or designing HCI
is that user interface technology changes rapidly, offering new interaction possibilities to which
previous research findings may not apply. Finally, user preferences change as they gradually
master new interfaces.
1.3. Goals
The term Human Computer Interaction (HCI) was adopted in the mid-1980s as a means of
describing this new field of study. This term acknowledged that the focus of interest was broader
than just the design of the interface and was concerned with all those aspects that relate to the
interaction between users and computers.
A basic goal of HCI is to improve the interactions between users and computers by making
computers more usable and receptive to the user's needs.
The goals of HCI are to produce usable and safe systems, as well as functional systems. These
goals can be summarized as ‘to develop or improve the safety, utility, effectiveness, efficiency and
usability of systems that include computers’ (Interacting with computers, 1989). In this context
the term ‘system’ derives from systems theory and it refers not just to the hardware and software
but to the entire environment, be it an organization of people at work, at home, or engaged in leisure pursuits, that uses or is affected by the computer technology in question. Utility refers to the
functionality of a system or, in other words, the things it can do. Improving effectiveness and
efficiency are self-evident and ubiquitous objectives. The promotion of safety in relation to
computer systems is of paramount importance in the design of safety-critical systems. Usability, a
key concept in HCI, is concerned with making systems easy to learn and easy to use. Poorly designed computer systems can be extremely annoying to users.
Part of the process of understanding users’ needs, with respect to designing an interactive system to support them, is to be clear about your primary objective. Is it to design a very efficient system that will allow users to be highly productive in their work, or is it to design a system that will be challenging and motivating so that it supports effective learning, or is it something else? We call these top-level concerns usability goals and user experience goals. The two differ in terms of how they are operationalized, i.e., how they can be met and through what means. Usability goals are concerned with meeting specific usability criteria (e.g., efficiency), whereas user experience goals are largely concerned with explicating the quality of the user experience (e.g., to be aesthetically pleasing).
A further goal is to design systems that minimize the barrier between the human's cognitive model of what they want to accomplish and the computer's understanding of the user's task.
To recap, usability is generally regarded as ensuring that interactive products are easy to learn, effective to use, and enjoyable from the user's perspective. It involves optimizing the interactions people have with interactive products so that they can carry out their activities at work, at school, and in their everyday lives. More specifically, usability is broken down into the following goals:
Effectiveness: A very general goal, it refers to how good a system is at doing what it is supposed to do.
Efficiency: It refers to the way a system supports users in carrying out their tasks.
Safety: It involves protecting the users from dangerous conditions and undesirable situations.
In relation to the first ergonomic aspect, it refers to the external conditions where people work. For example, where there are hazardous conditions, such as X-ray machines or chemical plants, operators should be able to interact with and control computer-based systems remotely. The second aspect refers to helping any kind of user, in any kind of situation, avoid the danger of carrying out unwanted actions accidentally. It also refers to the perceived fears users might have of the consequences of making errors and how this affects their behavior. Making computer-based systems safer in this sense involves:
Preventing the user from making serious errors by reducing the risk of wrong keys or buttons being mistakenly activated (an example is not placing the quit or delete-file command right next to the save command on a menu), and
Providing users with various means of recovery should they make errors. Safe interactive systems should engender confidence and allow users the opportunity to explore the interface to carry out new operations.
Other safety mechanisms include undo facilities and confirmatory dialog boxes that give users another chance to consider their intentions (a well-known example, used in email applications, is the appearance of a dialog box after the user has highlighted the messages to be deleted, asking: "Are you sure you want to delete all these messages?"), as sketched below.
Utility: It refers to the extent to which the system provides the right kind of functionality so that users can do what they need or want to do. An example of a system with high utility is an accounting software package that provides a powerful computational tool accountants can use to work out tax returns. An example of a system with low utility is a software drawing tool that does not allow users to draw freehand but forces them to use a mouse to create their drawings using only polygon shapes.
Learnability: It refers to how easy a system is to learn to use. It is well known that people do not like spending a long time learning how to use a system. They want to get started straight away and become competent at carrying out tasks without too much effort. This is especially so for interactive products intended for everyday use (for example, interactive TV, email) and those used only infrequently (for example, video conferencing). To a certain extent, people are prepared to spend longer learning more complex systems that provide a wider range of functionality (for example, web authoring tools, word processors). In these situations, CD-ROM and online tutorials can help by providing interactive, step-by-step material with hands-on exercises. However, many people find these tedious and often difficult to relate to the tasks they want to accomplish. A key concern is determining how much time users are prepared to spend learning a system. There seems little point in developing a range of functionality if the majority of users are unable or not prepared to spend time learning how to use it.
Memorability: It refers to how easy a system is to remember how to use, once learned. This is
especially important for interactive systems that are used infrequently. If users haven’t used a
system or an operation for a few months or longer, they should be able to remember or at least
rapidly be reminded how to use it. Users shouldn’t have to keep relearning how to carry out tasks.
Unfortunately, this tends to happen when the operations required to be learned are obscure,
illogical, or poorly sequenced. Users need to be helped to remember how to do tasks. There are
many ways of designing the interaction to support this. For example, users can be helped to
remember the sequence of operations at different stages of a task through meaningful icons,
command names, and menu options. Also, structuring options and icons so they are placed in
relevant categories of options (for example, placing all the drawing tools in the same place on the
screen) can help the user remember where to look to find a particular tool at a given stage of a
task.
The realization that new technologies are offering increasing opportunities for supporting people in their everyday lives has led researchers and practitioners to consider further goals. The emergence of technologies (for example, virtual reality, the web, mobile computing) in a diversity of application areas (e.g., entertainment, education, the home, public areas) has brought about a much wider set of concerns. As well as focusing primarily on improving efficiency and productivity at work, interaction design is increasingly concerning itself with creating systems that are:
Satisfying
Enjoyable
Fun
Entertaining
Helpful
Motivating
Aesthetically pleasing
Supportive of creativity
Rewarding
Emotionally fulfilling
The goals of designing interactive products to be fun, enjoyable, pleasurable, aesthetically pleasing, and so on are concerned primarily with the user experience. By this we mean what the interaction with the system feels like to the users. This involves explicating the nature of the user experience in subjective terms. For example, a new software package for children to create their own music may be designed with the primary objectives of being fun and entertaining. Hence, user experience goals differ from the more objective usability goals in that they are concerned with how users experience an interactive product from their perspective, rather than assessing how useful or productive a system is from its own perspective. Recognizing and understanding the trade-offs between usability and user experience goals is important. In particular, this enables designers to become aware of the consequences of pursuing different combinations of them in relation to fulfilling different users' needs. Obviously, not all of the usability goals and user experience goals apply to every interactive product being developed. Some combinations will also be incompatible. For example, it may not be possible or desirable to design a process control system that is both safe and fun.
HCI is very important because it is fundamental to making products more successful, safe, useful, and functional; in the long run, it also makes them more pleasurable for the user. Hence, it is important to have someone with HCI-focused skills involved in all phases of any product or system development. HCI also helps to prevent products or projects from going wrong or failing completely.
“As interaction designers, we need to remember that it is not about the interface, it’s about what
people want to do. To come up with great designs, you need to know who those people are and
what they are really trying to accomplish.” ~ Cordell Ratzlaff, Designing Interactions, 2007.
HCI is extremely important when designing clear intuitive systems which will be usable for people
with a varied range of abilities and expertise, and who have not completed any formal training.
HCI takes advantage of our everyday knowledge of the world to make software and devices more
understandable and usable for everyone. For example, using a graphic of a miniature folder in a
computer’s interface helps the user understand the purpose of the folder, as everyone has
experience with real paper folders in their everyday lives. Ultimately, if a system is well designed
with HCI techniques, the user should not even have to think about the intricacies of how to use the
system. Interaction should be clear, intuitive, and natural.
Today, technologies permeate every aspect of our daily lives. Even if a person does not directly own or use a computer, their life is affected in some way by computing. ATMs, train ticket vending machines, and hot drinks dispensing machines are just a few examples of computer interfaces a person may come into contact with daily without needing to own a personal computer. HCI is an important factor when designing any of these systems or interfaces. Regardless of whether an interface is for an ATM or a desktop computer, HCI principles should be consulted and considered to ensure the creation of a safe, usable, and efficient interface.
HCI is an important consideration for any business that uses technology or computers in its everyday operations. Well-designed, usable systems ensure that staff are not frustrated during their work and, as a result, are more content and productive. HCI is especially important in the design of safety-critical systems, such as those found in power plants or air traffic control centers. Design errors in these situations can have serious consequences, possibly resulting in the deaths of many people.
1.4.3. Accessibility
HCI is a key consideration when designing systems that are not only usable, but also accessible to
people with disabilities. The core philosophy of HCI is to provide safe, usable, and efficient
systems to everyone, and this includes those with different sets of abilities and different ranges of
expertise and knowledge. Any system properly designed with HCI user-centered techniques and
principles will also be maximally accessible to those with disabilities.
Good use of HCI principles and techniques is not only important for the end user, but also is a very
high priority for software development companies. If a software product is unusable and causes
frustration, no person will use the program by choice, and as a result sales will be negatively affected.
Today, very few computer users actually read the manual accompanying the software, if one exists.
Only very specialized and advanced programs require training and an extensive manual. Computer
users expect to understand the main functionality of an average program within a few minutes of
interacting with it. HCI provides designers with the principles, techniques, and tools necessary to
design effective interfaces that are obvious and easy to use, and do not require training.
The user interface (UI) is the point of human-computer interaction and communication in a device.
This can include display screens, keyboards, a mouse and the appearance of a desktop. It is also
the way through which a user interacts with an application or a website. The growing dependence
of many businesses on web applications and mobile applications has led many companies to place
increased priority on UI in an effort to improve the user's overall experience.
Examples of user interfaces include:
computer mouse
remote control
virtual reality
ATMs
speedometer
the old iPod click wheel
Websites such as Airbnb, Dropbox and Virgin America display strong user interface design. Sites
like these have created pleasant, easily operable, user-centered designs (UCD) that focus on the
user and their needs.
The UI is often talked about in conjunction with user experience (UX), which may include the
aesthetic appearance of the device, response time and the content that is presented to the user
within the context of the user interface. Both terms fall under the concept of human-computer
interaction (HCI), which is the field of study focusing on the creation of computer technology and
the interaction between humans and all forms of IT design. Specifically, HCI studies areas such as
UCD, UI design and UX design.
An increasing focus on creating an optimized user experience has led some to carve out careers as
UI and UX experts. Certain languages, such as HTML and CSS, have been geared toward making
it easier to create a strong user interface and experience.
A well-designed interface and screen are terribly important to our users. It is their window to view
the capabilities of the system. It is also the vehicle through which many critical tasks are presented.
These tasks often have a direct impact on an organization's relations with its customers, and its
profitability.
A screen's layout and appearance affect a person in a variety of ways. If they are confusing and
inefficient, people will have greater difficulty in doing their jobs and will make more mistakes.
Poor design may even chase some people away from a system permanently. It can also lead to
aggravation, frustration, and increased stress.
The need for people to communicate with each other has existed since we first walked upon this
planet. The lowest and most common level of communication modes we share are movements and
gestures. Movements and gestures are language independent, that is, they permit people who do
not speak the same language to deal with one another.
The next higher level, in terms of universality and complexity, is spoken language. Most people
can speak one language, some two or more. A spoken language is a very efficient mode of
communication if both parties to the communication understand it.
At the third and highest level of complexity is written language. While most people speak, not all can write. But for those who can, writing is still nowhere near as efficient a means of communication as speaking. In modern times, we have the typewriter, another step upward in communication complexity. Significantly fewer people type than write. (While a practiced typist can find typing faster and more efficient than handwriting, the unskilled may not find this the case.) Spoken language, however, is still more efficient than typing, regardless of typing skill level.
Through its first few decades, a computer's ability to deal with human communication was
inversely related to what was easy for people to do:
The computer demanded rigid, typed input through a keyboard; people responded slowly
using this device and with varying degrees of skill.
The human-computer dialog reflected the computer's preferences, consisting of one style
or a combination of styles using keyboards, commonly referred to as Command Language,
Question and Answer, Menu selection, Function Key Selection, and Form Fill-In.
Throughout the computer's history, designers have been developing, with varying degrees of
success, other human-computer interaction methods that utilize more general, widespread, and
easier-to-learn capabilities: voice and handwriting.
Systems that recognize human speech and handwriting now exist, although they still lack
the universality and richness of typed input.
Graphical User Interfaces (GUIs) were born in this high-tech era in response to users’ demands for computers that are easy to use and understand. Those in the Windows series are currently the most widely known and used GUI applications. Compared with traditional DOS and UNIX systems, GUI applications have been better received by the lay public for their visibility, ease of use, intuitive operation, and so on. The user interface (UI) is typically employed for two purposes: displaying information and acquiring information. By clicking on one or more command buttons, the user interacts with a set of pre-written code for a specific UI, which allows the user to dictate program flow in an event-driven manner. For instance, a user may key in some data, select any of a number of menu options, or click the mouse somewhere in the application window. These user actions are all regarded as events and typically result in particular program code in the UI being executed. How program execution proceeds is therefore determined by the sequence of user actions. A UI can vary in style; single document interface (SDI) and multiple document interface (MDI) are the most common types, with SDI being the more popular. The sketch below illustrates this event-driven flow.
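The following is an assumed, minimal illustration of event-driven program flow, written in Python with the standard tkinter toolkit; the window, buttons, and handler names are hypothetical. Each button click is an event that causes the associated handler code to run, so the order of execution is dictated by the user rather than by a fixed program sequence.

```python
# Minimal event-driven UI sketch (assumed example): the user's actions decide
# which handler code runs and in what order.
import tkinter as tk

root = tk.Tk()
root.title("Event-driven demo")

status = tk.Label(root, text="Waiting for input...")
status.pack()


def on_display() -> None:
    # Runs only when the user clicks the "Display" button.
    status.config(text="Displaying information")


def on_acquire() -> None:
    # Runs only when the user clicks the "Acquire" button.
    status.config(text="Acquiring information")


tk.Button(root, text="Display", command=on_display).pack(side=tk.LEFT)
tk.Button(root, text="Acquire", command=on_acquire).pack(side=tk.LEFT)

# The event loop waits for user actions and dispatches them to the handlers.
root.mainloop()
```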
A GUI (graphical user interface) is a system of interactive visual components for computer
software. A GUI displays objects that convey information, and represent actions that can be taken
by the user. The objects change color, size, or visibility when the user interacts with them.
GUI objects include icons, cursors, and buttons. These graphical elements are sometimes enhanced
with sounds, or visual effects like transparency and drop shadows.
The GUI was first developed at Xerox PARC by Alan Kay and a group of other researchers during the 1970s, building on earlier work by Douglas Engelbart; Xerox introduced the Star, the first commercial GUI workstation, in 1981. Apple later introduced the Lisa computer with a GUI on January 19, 1983.
A GUI uses windows, icons, and menus to carry out commands, such as opening, deleting, and
moving files. Although a GUI operating system is primarily navigated using a mouse, a keyboard
can also be used via keyboard shortcuts or the arrow keys.
As an example, if you wanted to open a program on a GUI system, you would move the mouse pointer to the program's icon and double-click it, as in the sketch below.
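A hedged illustration of that double-click interaction, again using Python's tkinter, follows; the "icon" here is simply a label, and launch_program is a hypothetical stand-in for starting a real application.

```python
# Sketch of opening a program by double-clicking its icon (assumed example).
import tkinter as tk

root = tk.Tk()


def launch_program(event: tk.Event) -> None:
    # Stand-in for actually starting the application.
    print("Launching:", event.widget.cget("text"))


icon = tk.Label(root, text="Text Editor", relief=tk.RAISED, padx=20, pady=20)
icon.pack()
# "<Double-Button-1>" fires when the left mouse button is double-clicked.
icon.bind("<Double-Button-1>", launch_program)
root.mainloop()
```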
Unlike a command-line operating system or CUI, like Unix or MS-DOS, GUI operating systems
are much easier to learn and use because commands do not need to be memorized. Additionally,
users do not need to know any programming languages. Because of their ease of use and more
modern appearance, GUI operating systems have come to dominate today's market.
Examples of GUI operating systems include:
Microsoft Windows
Apple System 7 and macOS
Chrome OS
Linux variants like Ubuntu using a GUI interface.
Not all operating systems are graphical, however. Early command-line operating systems like MS-DOS, and even some versions of Linux today, have no GUI interface.
Examples of GUI interfaces and applications include:
GNOME
KDE
Any Microsoft program, including Word, Excel, and Outlook.
Internet browsers, such as Internet Explorer, Chrome, and Firefox.
A pointing device, such as the mouse, is used to interact with nearly all aspects of the GUI. More
modern (and mobile) devices also utilize a touch screen. However, as stated in previous sections,
it is also possible to navigate a GUI using a keyboard.
A mouse is not strictly required: nearly all GUI interfaces, including Microsoft Windows, have options for navigating the interface with a keyboard only, as in the sketch below.
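As a small, assumed example of such keyboard navigation, the tkinter sketch below makes the same Open command reachable either by a mouse click or by a Ctrl+O shortcut.

```python
# Sketch of a command reachable by mouse or keyboard (assumed example).
import tkinter as tk

root = tk.Tk()
label = tk.Label(root, text="Press Ctrl+O or click the button")
label.pack()


def open_file(event=None) -> None:
    # Same handler serves both the button and the keyboard shortcut.
    label.config(text="Open command invoked")


tk.Button(root, text="Open", command=open_file).pack()
root.bind("<Control-o>", open_file)  # keyboard alternative to the mouse
root.mainloop()
```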
While developers have been designing screens since a cathode ray tube display was first attached
to a computer, more widespread interest in the application of good design principles to screens did
not begin to emerge until the early 1970s, when IBM introduced its 3270 cathode ray tube text-based terminal.
A 1970s screen often resembled the one pictured in the accompanying figure. It usually consisted of many fields (more than are illustrated here) with very cryptic and often unintelligible captions.
It was visually cluttered, and often possessed a command field that challenged the user to
remember what had to be keyed into it. Ambiguous messages often required referral to a manual
to interpret. Effectively using this kind of screen required a great deal of practice and patience.
Most early screens were monochromatic, typically presenting green text on black backgrounds. At
the turn of the decade guidelines for text-based screen design were finally made widely available
and many screens began to take on a much less cluttered look through concepts such as grouping
and alignment of elements, as illustrated in the figure below.
User memory was supported by providing clear and meaningful field captions and by listing
commands on the screen, and enabling them to be applied, through function keys. Messages also
became clearer. These screens were not entirely clutter-free, however. Instructions and reminders
to the user had to be inscribed on the screen in the form of prompts or completion aids such as the
codes PR and Sc. Not all 1980s screens looked like this, however. In the 1980s, 1970s-type screens
were still being designed, and many still reside in systems today.
The advent of graphics yielded another milestone in the evolution of screen design, as illustrated in the figure above.
While some basic "design principles did not change, groupings and alignment, for example,
borders were made available to visually enhance groupings, and buttons and menus for
implementing commands replaced function keys.
Multiple properties of elements were also provided, including many different font sizes and styles,
line thicknesses, and colors. The entry field was supplemented by a multitude of other kinds of
controls, including list boxes, drop-down combination boxes, spin boxes, and so forth. These new
controls were much more effective in supporting a person's memory, now simply allowing for
selection from a list instead of requiring a remembered key entry. Completion aids disappeared
from screens, replaced by one of the new listing controls. Screens could also be simplified, the
much more powerful computers being able to quickly present a new screen. In the 1990s, our
knowledge concerning what makes effective screen design continued to expand. Coupled with
ever-improving technology, the result was even greater improvements in the user-computer screen
interface as the new century dawned.
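As a brief, assumed illustration of this shift from remembered key entry to selection from a list, the tkinter sketch below replaces a cryptic completion-aid code (such as the PR and SC codes mentioned earlier) with a read-only drop-down combination box; the field name and values are hypothetical.

```python
# Sketch of recognition over recall: pick a value from a list instead of
# typing a remembered code (assumed example).
import tkinter as tk
from tkinter import ttk

root = tk.Tk()
tk.Label(root, text="Payment method:").pack()

# Instead of keying a remembered code, the user recognises a meaningful value.
method = ttk.Combobox(
    root,
    values=["Purchase order", "Standing charge", "Credit card"],
    state="readonly",
)
method.current(0)
method.pack()
root.mainloop()
```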
1.13.1. Advantages
1.13.2. Disadvantages
Direct manipulation is a style of Human-Machine Interaction (HMI) design which features a natural representation of task objects and actions, promoting the notion of people performing a task themselves (directly), not through an intermediary like a computer. Virtual Reality can be viewed as a field which can draw upon the principles of direct manipulation for Human-Computer Interaction (HCI) design, or as an example or extension of direct manipulation itself. In VR, not only can task objects and actions be naturally represented, but the task environment can be naturally represented as well.
Direct manipulation is a topic in the interdisciplinary field of HCI. Computer science, psychology,
linguistics, graphic design, and art all contribute to this field. The computer science foundation
areas of computer architecture and operating systems provide you with an understanding of the
machines upon which human computer interface styles such as direct manipulation are
implemented. This understanding allows you to determine capabilities and limitations of computer
platforms, providing boundaries for realistic HCI designs. For instance, though productive in a
visionary sense, many purported VR HCI concepts are not practical on today's computer systems.
As with any computer application development, direct manipulation interface development
benefits greatly from the computer science foundation areas of algorithms and programming
languages. The specialized field of computer graphics can also make a key contribution.
A favorite example to elucidate direct manipulation principles in contrast with the intermediary
style of interaction (e.g. traditional keyboard based, command driven interfaces), is travel in a car.
With direct manipulation, you drive the car by manipulating the steering wheel and pedals. The
car responds immediately to your actions, and these responses are immediately evident. If you are
making a mistake such as turning too sharply, you can quickly recognize this and perform a
corrective measure. With an intermediary style of interaction, you sit in the backseat of the car giving a stranger directions. Further, imagine the stranger possessing poor interpersonal skills and
having a limited vocabulary. You've lost the feel for the road and you don't have a direct view of
where you are going. Worse yet, you have to rely on a stranger who, if they don't receive explicit
directions using particular phrases in a fixed order, idles in the middle of the road or takes you to
unfamiliar places from which you don't know the way out.
In relation to interactive computer systems, a direct manipulation interface possesses several key
characteristics. As mentioned earlier, a visual representation of objects and actions is presented to
a person in contrast to traditional command line languages. Further, the visual representation
usually takes the form of a metaphor related to the actual task being performed. For instance, computer files and directories may be represented as documents and file cabinets in a desktop publishing system. The use of metaphors allows a person to tap their analogical reasoning power when
determining what actions to take when executing a task on the computer. For example, this
property is drawn upon heavily by desktop metaphors in their handling of windows like sheets of
paper on a desk. With direct manipulation, actions are rapid, incremental, and reversible with
results being immediately visible. This enhances the impression that the person is performing the task and is in control, not that the computer is responding to requests while the person waits powerlessly, wondering if the computer is doing the job correctly.
Given a thoughtful design and strong implementation, an interactive system employing direct
manipulation principles can realize many benefits. Psychology literature cites the strengths of
visual representations in terms of learning speed and retention. Direct manipulation harnesses these
strengths resulting in systems whose operation is easy to learn and use and difficult to forget.
Because complex syntax does not have to be remembered and analogical reasoning can be used, fewer errors are made. When errors are made, they are easily corrected through reversible actions.
Reversible actions also foster exploration because the fear of breaking something has been
diminished. Also, a person can gain confidence and mastery because they are in control and
because the system responses are predictable and immediate.
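A minimal sketch of the reversible-action idea is given below in Python; the MoveShape action and Editor history are hypothetical, but they show how each action can carry its own undo, so exploration stays safe and errors are easily corrected.

```python
# Sketch of rapid, incremental, reversible actions (assumed example):
# every action knows how to undo itself.
from dataclasses import dataclass, field
from typing import List


@dataclass
class MoveShape:
    shape: dict
    dx: int
    dy: int

    def do(self) -> None:
        self.shape["x"] += self.dx
        self.shape["y"] += self.dy

    def undo(self) -> None:
        self.shape["x"] -= self.dx
        self.shape["y"] -= self.dy


@dataclass
class Editor:
    history: List[MoveShape] = field(default_factory=list)

    def perform(self, action: MoveShape) -> None:
        action.do()            # the change is applied immediately
        self.history.append(action)

    def undo_last(self) -> None:
        if self.history:       # reversible: the last action can be taken back
            self.history.pop().undo()


shape = {"x": 0, "y": 0}
editor = Editor()
editor.perform(MoveShape(shape, dx=10, dy=5))
print(shape)        # {'x': 10, 'y': 5}
editor.undo_last()
print(shape)        # {'x': 0, 'y': 0}
```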
Because these benefits of direct manipulation are also desired in VR systems, direct manipulation
principles should be drawn from when designing VR systems especially in the use of VR's special
input devices. For instance, when using a data glove, a person should be able to select actions
rapidly and easily by pointing and gesturing. Gestures should be natural and intuitive in the
particular virtual environment. Actions should be represented visually and be incremental,
immediate, and reversible to give a person the impression of acting directly in an environment. If
voice recognition is employed, care must be taken to assist it with visual cues and complement it
with hand gestures. Otherwise, the recreation of a complex command syntax minus the keyboard
is a lurking danger.
Advantages:
Disadvantages:
Indirect manipulation:
In practice, direct manipulation of all screen objects and actions may not be feasible because of
the following:
A graphical system possesses a set of defining concepts. Included are sophisticated visual presentation, pick-and-click interaction, a restricted set of interface options, visualization, object orientation, extensive use of a person's recognition memory, and concurrent performance of functions.
Visual presentation is the visual aspect of the interface. It is what people see on the screen. The
sophistication of a graphical system permits displaying lines, including drawings and icons. It also
permits the displaying of a variety of character fonts, including different sizes and styles.
The meaningful interface elements visually presented to the user in a graphical system include
windows (primary, secondary, or dialog boxes), menus (menu bar, pulldown, pop-up, cascading),
icons to represent objects such as programs or files, assorted screen-based controls (text boxes, list
boxes, combination boxes, settings, scroll bars, and buttons), and a mouse pointer and cursor. The
objective is to reflect visually on the screen the real world of the user as realistically, meaningfully,
simply, and clearly as possible.
Identifying a proposed action is commonly referred to as pick, and the signal to perform the action as click. The primary mechanism for performing this pick-and-click is most often the mouse and its buttons, and the secondary mechanism for performing these selection actions is the keyboard. The
user moves the mouse pointer to the relevant element (pick) and the action is signaled (click).
Pointing allows rapid selection and feedback. The hand and mind seem to work smoothly and
efficiently together.
The array of alternatives available to the user is what is presented on the screen or what may be
retrieved through what is presented on the screen, nothing less, and nothing more. This concept
fostered the acronym WYSIWYG (What You See is What You Get).
1.16.4. Visualization
Visualization is a cognitive process that allows people to understand information that is difficult
to perceive, because it is either too voluminous or too abstract.
The goal is not necessarily to reproduce a realistic graphical image, but to produce one that conveys
the most relevant information. Effective visualizations can facilitate mental insights, increase
productivity, and foster faster and more accurate use of data.
A graphical system consists of objects and actions. Objects are what people see on the screen as a
single unit.
Objects can be composed of sub objects. For example, an object may be a document and its sub
objects may be a paragraph, sentence, word, and letter.
Objects are divided into three meaningful classes: data objects, which present information; container objects, which hold other objects; and device objects, which represent physical objects in the real world.
Objects can exist within the context of other objects, and one object may affect the way another
object appears or behaves. These relationships are called collections, constraints, composites, and
containers. A collection might be the result of a query or a multiple selection of objects. Operations
can be applied to a collection of objects.
A constraint is a stronger object relationship. Changing an object in a set affects some other object
in the set. A document being organized into pages is an example of a constraint. A composite exists
when the relationship between objects becomes so significant that the aggregation itself can be
identified as an object. Examples include a range of cells organized into a spreadsheet, or a
collection of words organized into a paragraph. A container is an object in which other objects
exist. Examples include text in a document or documents in a folder.
A container often influences the behavior of its content. It may add or suppress certain properties
or operations of objects placed within it, control access to its content, or control access to kinds of
objects it will accept. These relationships help define an object's type. Similar traits and behaviors
exist in objects of the same object type.
Properties are the unique characteristics of an object. Properties help to describe an object and can
be changed by users.
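The rough Python sketch below (not taken from any particular system) models some of these concepts: a container object (Folder) holds data objects (Documents), each document is composed of sub-objects (paragraphs) and has user-changeable properties, and the container controls which objects it will accept.

```python
# Sketch of objects, sub-objects, properties, and a container relationship
# (assumed example).
from dataclasses import dataclass, field
from typing import List


@dataclass
class Document:
    name: str
    paragraphs: List[str] = field(default_factory=list)                    # sub-objects
    properties: dict = field(default_factory=lambda: {"read_only": False})  # user-changeable


@dataclass
class Folder:
    name: str
    contents: List[Document] = field(default_factory=list)  # container relationship

    def add(self, doc: Document) -> None:
        # The container may control which kinds of objects it accepts.
        if doc.properties.get("read_only"):
            raise ValueError("This folder does not accept read-only documents")
        self.contents.append(doc)


folder = Folder("Reports")
folder.add(Document("Q1 summary", paragraphs=["Sales grew."]))
print([doc.name for doc in folder.contents])   # ['Q1 summary']
```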
Actions
People take actions on objects. They manipulate objects in specific ways (commands) or modify
the properties of objects (property or attribute specification).
Views: Views are ways of looking at an object’s information. IBM’s SAA (Systems Application
Architecture) CUA (Common User Access) describes four kinds of views: composed, contents,
settings, and help.
Continuous visibility of objects and actions helps to eliminate the "out of sight, out of mind" problem.
Graphical systems may do two or more things at one time. Multiple programs may run simultaneously, and background tasks may be processed through cooperative or preemptive multitasking. Data may also be transferred between programs; it may be temporarily stored on a "clipboard" for later transfer or be automatically swapped between programs.
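As a short, assumed example of clipboard-based data transfer, the tkinter sketch below places text on the system clipboard, where it becomes available for pasting into other programs.

```python
# Sketch of transferring data between programs via the clipboard (assumed example).
import tkinter as tk

root = tk.Tk()
root.withdraw()                  # no window is needed for this demonstration

root.clipboard_clear()
root.clipboard_append("Figures for the Q1 report")   # "copy"
root.update()                    # make the clipboard content available
print(root.clipboard_get())      # "paste": other programs can read it too
```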
The expansion of the World Wide Web since the early 1990s has been truly amazing. Once simply
a communication medium for scientists and researchers, its many and pervasive tentacles have
spread deeply into businesses, organizations, and homes around the world.
Unlike earlier text-based and GUI systems that were developed and nurtured in an organization's
Data Processing and Information Systems groups, the Web's roots were sown in a market-driven
society thirsting for convenience and information. Web interface design is essentially the design
of navigation and the presentation of information. It is about content, not data.
Proper interface design is largely a matter of properly balancing the structure and relationships of
menus, content, and other linked documents or graphics. The design goal is to build a hierarchy of
menus and pages that feels natural, is well structured, is easy to use, and is truthful. The Web is a
navigation environment where people move between pages of information, not an application
environment. It is also a graphically rich environment.
Web interface design is difficult for a number of reasons. First, its underlying design language,
HTML, was never intended for creating screens to be used by the general population. Its scope of
users was expected to be technical. HTML was limited in objects and interaction styles and did
not provide a means for presenting information in the most effective way for people.
Next, browser navigation retreated to the pre-GUI era. This era was characterized by a "command"
field whose contents had to be learned, and a navigational organization and structure that lay
hidden beneath a mostly dark and blank screen. GUIs eliminated the absolute necessity for a
command field, providing menus related to the task and the current contextual situation.
Browser navigation is mostly confined to a "Back" and "Forward" concept, but "back-to where"
and "forward-to where" is often unremembered or unknown. Web interface design is also more
difficult because the main issues concern information Architecture and task flow, neither of which
is easy to standardize.
It is more difficult because of the availability of the various types of multimedia, and the desire of
many designers to use something simply because it is available. It is more difficult because users
are ill defined, and the user's tools are so variable in nature. The ultimate goal is a Web that feels natural, is well structured, and is easy to use.
While the introduction of the graphical user interface revolutionized the user interface, the Web
has revolutionized computing.
It allows millions of people scattered across the globe to communicate, access information,
publish, and be heard. It allows people to control much of the display and the rendering of Web
pages. Aspects such as typography and colors can be changed, graphics turned off, and decisions
made whether or not to transmit certain data over non secure channels or whether to accept or
refuse cookies.
Web usage has reflected this popularity. The number of Internet hosts has risen dramatically:
in 1990, 300,000;
in 1992 hosts exceeded one million.
Commercialization of the Internet saw even greater expansion of the growth rate. In 1993,
Internet traffic was expanding at a 341,634 percent annual growth rate. In 1996, there were
nearly 10 million hosts online and 40 million connected people (PBS Timeline).
User control has had some decided disadvantages for some Web site owners as well. Users have
become much more discerning about good design. Slow download times, confusing navigation,
confusing page organization, disturbing animation, or other undesirable site features often result in user abandonment of the site for others with a more agreeable interface. People are quick to vote
with their mouse, and these warnings should not go unheeded.
Devices: In GUI design, the characteristics of interface devices such as monitors and modems are well defined, and design variations tend to be restricted. (In GUI design, the difference in screen area between a laptop and a high-end workstation is a factor of six; in Web page design this difference may be as high as 100.) In GUI design, the layout of a screen will look exactly as specified; the look of a Web page will be greatly influenced by both the hardware and software.
User focus: GUI systems are about well-defined applications and data, about transactions and processes. Web use is most often characterized by browsing and visual scanning of information to find what is needed.
Data/information: GUI data is typically created and used by known and trusted sources, people
in the user’s organization or reputable and reliable companies and organizations. Web content is
usually highly variable in organization, and the privacy of the information is often suspect.
User tasks: GUI system users install, configure, personalize, start, use, and upgrade programs. Web users do things like linking to sites, browsing or reading pages, filling out forms, registering for services, participating in transactions, and downloading and saving things.
User’s conceptual space: In a GUI system, a user’s access to data is constrained, and little opportunity for meaningful organization of personal information exists.
Presentation elements: In GUI systems, presentation elements are generally standardized as a result of the toolkits and style guides used, and elements are presented on screens exactly as specified by the designer. Web systems possess two components: the browser and the page. Within a page itself, however, any combination of text, images, audio, video, and animation may exist, and the user can change the look of a page by modifying its properties.
Navigation: GUI users navigate through structured menus, lists, trees, dialogs, and wizards. For Web users, navigation is a significant and highly visible concept with few constraints.
Context: In GUI systems, navigation paths are restricted, and multiple overlapping windows may be presented. Web pages are single entities with almost unlimited navigation paths; contextual clues become limited or are hard to find.
Interaction: GUI interactions consist of such activities as clicking menu choices, pressing buttons, selecting choices from lists, keying data, and cutting, copying, or pasting. The basic Web interaction is a single click. This click can cause extreme changes in context.
Response time: Compared to the Web, response times with a GUI system are fairly stable; on the Web, they can vary widely.
Visual style: In GUI systems, the visual style is typically prescribed and constrained by the toolkit, and little opportunity exists for screen personalization. In Web page design, a more artistic, individual, and unrestricted presentation style is allowed and encouraged.
System capability: A GUI system is limited only in proportion to the capability of the hardware and the sophistication of the software. The Web is more constrained, being limited by constraints imposed by the hardware, browser, and software.
Task efficiency and consistency: In GUI system design, an attempt is made to be consistent both within applications and across applications. In Web page design, the heavy emphasis on graphics, a lack of design standards, and the desire of Web sites to establish their own identities result in very little consistency across sites.
User assistance: User assistance is an integral part of most GUI system applications. On the Web, what little help is available is built into the page.
Integration: A primary goal of most GUI applications is the seamless integration of all pieces. On the Web, interoperability between sites is almost nonexistent.
Security: In GUI systems, security and data access can be tightly controlled. On the Web, attempts to create a more trustworthy appearance are being made through the use of security levels and passwords to assure users that the Web is a secure environment.
An interface must really be just an extension of a person. This means that the system and its
software must reflect a person's capabilities and respond to his or her specific needs. It should be
useful, accomplishing some business objectives faster and more efficiently than the previously
used method or tool did. It must also be easy to learn, for people want to do, not learn to do.
Finally, the system must be easy and fun to use, evoking a sense of pleasure and accomplishment
not tedium and frustration. The interface itself should serve as both a connector and a separator. A
connector in that it ties the user to the power of the computer, and a separator in that it minimizes
the possibility of the participants damaging one another.
While the damage the user inflicts on the computer tends to be physical (a frustrated pounding of
the keyboard), the damage caused by the computer is more psychological.
Throughout the history of the human-computer interface, various researchers and writers have
attempted to define a set of general principles of interface design. What follows is a compilation
of these principles. They reflect not only what we know today, but also what we think we know
today. Many are based on research, others on the collective thinking of behaviorists working with
user interfaces. These principles will continue to evolve, expand, and be refined as our experience
with GUIs and the Web increases.
The design of the Xerox STAR was guided by a set of principles that evolved over its lengthy
development process. These principles established the foundation for graphical interfaces.
Displayed objects that are selectable and manipulable must be created. A design challenge is to invent a set of displayable objects that are represented meaningfully and appropriately for the intended application. It must be clear that these objects can be selected, and how to select them must be self-evident. When they have been selected should also be obvious, because it should be clear that the selected object will be the focus of the next action. Standalone icons easily fulfilled this requirement. The handles for windows were placed in the borders.
Visual order and viewer focus:
Attention must be drawn, at the proper time, to the important and relevant elements of the display.
Effective visual contrast between various components of the screen is used to achieve this goal.
Animation is also used to draw attention, as is sound.
Feedback must also be provided to the user. Since the pointer is usually the focus of viewer
attention, it is a useful mechanism for providing this feedback (by changing shapes).
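A small, assumed example of this kind of pointer feedback is sketched below in tkinter: the cursor changes to a "watch" shape while a simulated slow operation runs, signalling that the system is busy.

```python
# Sketch of feedback through pointer shape changes (assumed example).
import tkinter as tk

root = tk.Tk()


def slow_operation() -> None:
    root.config(cursor="watch")    # feedback: the system is working
    root.update_idletasks()        # make the cursor change visible now
    root.after(2000, finish)       # pretend the work takes two seconds


def finish() -> None:
    root.config(cursor="")         # restore the normal pointer


tk.Button(root, text="Run slow task", command=slow_operation).pack(padx=40, pady=40)
root.mainloop()
```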
Revealed structure: The distance between one's intention and the effect must be minimized.
Most often, the distance between intention and effect is lengthened as system power increases. The
relationship between intention and effect must be tightened and made as apparent as possible to the
the user. The underlying structure is often revealed during the selection process.
Consistency: Consistency aids learning. Consistency is provided in such areas as element location,
grammar, font shapes, styles, and sizes, selection indicators, and contrast and emphasis techniques.
Appropriate effect or emotional impact: The interface must provide the appropriate emotional
effect for the product and its market. Is it a corporate, professional, and secure business system?
Should it reflect the fantasy, wizardry, and bad puns of computer games?
A match with the medium: The interface must also reflect the capabilities of the device on which
it will be displayed. Quality of screen images will be greatly affected by a device's resolution and
color-generation capabilities.
The design goals in creating a user interface are described below. They are fundamental to the
design and implementation of all effective interfaces, including GUI and Web ones. These
principles are general characteristics of the interface, and they apply to all aspects. The compilation
is presented alphabetically, and the ordering is not intended to imply degree of importance.
Provide visual appeal by following these presentation and graphic design principles:
1.23.2. Clarity
Visual elements
Functions
Metaphors
Words and Text
1.23.3. Compatibility
The user
The task and job
The Product
Adopt the User’s Perspective
1.23.4. Configurability
1.23.5. Comprehensibility
A system should be easily learned and understood: A user should know the following:
What to look at
What to do
When to do it
Where to do it
Why to do it
How to do it
The flow of actions, responses, visual presentations, and information should be in a sensible order
that is easy to recollect and place in context.
1.23.6. Consistency
A system should look, act, and operate the same throughout. Similar components should:
The same action should always yield the same result. The function of elements should not change.
The position of standard elements should not change.
1.23.7. Control
The context maintained must be from the perspective of the user. The means to achieve goals
should be flexible and compatible with the user's skills, experiences, habits, and preferences. Avoid
modes, since they constrain the actions available to the user. Permit the user to customize aspects of the interface, while always providing a proper set of defaults.
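A brief sketch of this customize-with-defaults idea is given below in Python; the DEFAULTS dictionary and preference names are hypothetical, but the pattern shows user customizations layered over a proper set of defaults.

```python
# Sketch of user configurability backed by sensible defaults (assumed example).
DEFAULTS = {"font_size": 12, "theme": "light", "confirm_on_delete": True}


def effective_settings(user_preferences: dict) -> dict:
    """Merge the user's customizations over the default settings."""
    settings = dict(DEFAULTS)          # start from the defaults
    settings.update(user_preferences)  # apply only what the user changed
    return settings


print(effective_settings({"theme": "dark"}))
# {'font_size': 12, 'theme': 'dark', 'confirm_on_delete': True}
```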
1.23.8. Directness
1.23.9. Flexibility
A system must be sensitive to the differing needs of its users, enabling a level and type of
performance based upon:
1.23.10. Efficiency
Transitions between various system controls should flow easily and freely.
Navigation paths should be as short as possible.
1.23.11. Familiarity
Employ familiar concepts and use a language that is familiar to the user.
Keep the interface natural, mimicking the user's behavior patterns.
Use real-world metaphors.
1.23.12. Forgiveness
1.23.13. Predictability
The user should be able to anticipate the natural progression of each task.
1.23.14. Recovery
1.23.15. Responsiveness
The system must rapidly respond to the user's requests. Provide immediate acknowledgment for all user actions:
Visual
Textual
Auditory
1.23.16. Transparency
Permit the user to focus on the task or job, without concern for the mechanics of the interface.
Workings and reminders of workings inside the computer should be invisible to the user.
1.23.17. Simplicity