
Seven Research Contributions in HCI

Jacob O. Wobbrock
The Information School | DUB Group
University of Washington
Seattle, WA USA 98195
[email protected]
ABSTRACT
Research in human-computer interaction (HCI) addresses both technological and human-behavioral concerns. It follows that the contributions made in HCI are usually separately familiar to engineering, design, or the social sciences, but rarely brought together under one roof. The seven research contribution types covered here are (1) empirical, (2) artifact, (3) methodological, (4) theoretical, (5) benchmark / dataset, (6) survey, and (7) opinion. Of course, some research articles make more than one type of contribution. The goal of this paper is to give researchers insight into the contribution types found in HCI papers, and to provide examples for further reading. I do not claim that the chosen examples are the “best of breed;” rather, they are examples with which I am familiar and that I feel illustrate a given contribution.

Author Keywords
Contributions, methods, research, science, invention.

ACM Classification Keywords
H5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous.

Copyright © 2012 Jacob O. Wobbrock
Last updated: Nov. 6, 2015

1. EMPIRICAL CONTRIBUTIONS
Empirical research contributions consist of new findings based on systematically gathered data. Empirical contributions may be quantitative or qualitative (or mixed), and usually follow from scientific studies of various kinds (e.g., laboratory, field, ethnographic, etc.). In HCI, the purpose of empirical contributions is to reveal formerly unknown insights about human behavior in relation to information or technology. Empirical research methods commonly used in HCI include formal experiments, field experiments, field studies, interviews, focus groups, surveys, usability tests, case studies, diary studies, ethnography, contextual inquiry, experience sampling, and automated data collection (e.g., sensing, logging).

How Empirical Contributions Are Evaluated
Empirical contributions are considered trustworthy when the methods that produce them are executed with rigor and precision. “The devil is in the details” in empirical work. Identifiable confounds and biases must be avoided in studies of all types. If methods are sound and findings important, empirical contributions should be judged favorably.

Examples of Empirical Contributions
Bragdon, A., Nelson, E., Li, Y. and Hinckley, K. (2011). Experimental analysis of touch-screen gesture designs in mobile environments. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '11). Vancouver, British Columbia (May 7-12, 2011). New York: ACM Press, 403-412.

Burke, M., Kraut, R. and Williams, D. (2010). Social use of computer-mediated communication by adults on the autism spectrum. Proceedings of the ACM Conference on Computer Supported Cooperative Work (CSCW '10). Savannah, Georgia (February 6-10, 2010). New York: ACM Press, 425-434.

Casiez, G., Vogel, D., Balakrishnan, R. and Cockburn, A. (2008). The impact of control-display gain on user performance in pointing tasks. Human-Computer Interaction 23 (3), 215-250.

Chilana, P.K., Wobbrock, J.O. and Ko, A.J. (2010). Understanding usability practices in complex domains. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '10). Atlanta, Georgia (April 10-15, 2010). New York: ACM Press, 2337-2346.

Clarkson, E., Clawson, J., Lyons, K. and Starner, T. (2005). An empirical study of typing rates on mini-QWERTY keyboards. Extended Abstracts of the ACM Conference on Human Factors in Computing Systems (CHI '05). Portland, Oregon (April 2-7, 2005). New York: ACM Press, 1288-1291.

Czerwinski, M., Horvitz, E. and Wilhite, S. (2004). A diary study of task switching and interruptions. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '04). Vienna, Austria (April 24-29, 2004). New York: ACM Press, 175-182.

Dawe, M. (2006). Desperately seeking simplicity: How young adults with cognitive disabilities and their families adopt assistive technologies. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '06). Montréal, Québec (April 22-27, 2006). New York: ACM Press, 1143-1152.

Findlater, L., Wobbrock, J.O. and Wigdor, D. (2011). Typing on flat glass: Examining ten-finger expert typing patterns on touch surfaces. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '11). Vancouver, British Columbia (May 7-12, 2011). New York: ACM Press, 2453-2462.

Grudin, J.T. (1984). Error patterns in skilled and novice transcription typing. In Cognitive Aspects of Skilled Typewriting, W. E. Cooper (ed.). New York: Springer-Verlag, 121-143.

Hwang, F., Keates, S., Langdon, P. and Clarkson, P.J. (2004). Mouse movements of motion-impaired users: A submovement analysis. Proceedings of the ACM SIGACCESS Conference on
Computers and Accessibility (ASSETS '04). Atlanta, Georgia (October 18-20, 2004). New York: ACM Press, 102-109.

Kurtenbach, G. and Buxton, W. (1994). User learning and performance with marking menus. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '94). Boston, Massachusetts (April 24-28, 1994). New York: ACM Press, 258-264.

Lee, S. and Zhai, S. (2009). The performance of touch screen soft buttons. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '09). Boston, Massachusetts (April 4-9, 2009). New York: ACM Press, 309-318.

Patel, K., Fogarty, J., Landay, J.A. and Harrison, B. (2008). Examining difficulties software developers encounter in the adoption of statistical machine learning. Proceedings of the 23rd AAAI Conference on Artificial Intelligence (AAAI '08). Chicago, Illinois (July 13-17, 2008). Menlo Park, California: AAAI Press, 1563-1566.

Poltrock, S.E. and Grudin, J. (1994). Organizational obstacles to interface design and development: Two participant-observer studies. ACM Transactions on Computer-Human Interaction 1 (1), 52-80.

Shinohara, K. and Wobbrock, J.O. (2011). In the shadow of misperception: Assistive technology use and social interactions. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '11). Vancouver, British Columbia (May 7-12, 2011). New York: ACM Press, 705-714.

Wobbrock, J.O. and Gajos, K.Z. (2007). A comparison of area pointing and goal crossing for people with and without motor impairments. Proceedings of the ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '07). Tempe, Arizona (October 15-17, 2007). New York: ACM Press, 3-10.

2. ARTIFACT CONTRIBUTIONS
Artifact contributions in HCI describe inventions, which include new systems, architectures, tools, techniques, or designs that reveal new opportunities, enable new outcomes, facilitate new insights or explorations, or impel us to consider new possible futures. Artifact contributions are, by definition, dependent upon never-before-seen inventions that are instantiated as prototypes, sketches, mockups, demos, or other envisionments, and are often but not always at least partially functional. Artifacts tend to be one of three types: systems, techniques, or designs.

Novel systems, including architectures, tools, and toolkits, provide new knowledge by showing how to accomplish new things formerly impossible, or how to accomplish formerly possible things more easily (e.g., Dixon, Gajos, Greenberg, Myers, Patel, Wobbrock).

Novel interaction techniques provide new ways of inputting information or controlling systems, usually striving to be reusable across myriad platforms or situations (e.g., Baudisch, Grossman, Kristensson).

Novel designs may be prototypes, sketches, mockups, demos, or other envisionments whose purpose is to convey or motivate new possible futures (e.g., Kane, Schwesig, Wigdor). With new designs, form is the priority over function.

How Artifact Contributions Are Evaluated
Artifact contributions are often accompanied by empirical evaluations, but they do not have to be. New systems, architectures, tools, and toolkits are often evaluated in a holistic fashion on the basis of what they make possible and how they do so. Interaction techniques, on the other hand, are almost always evaluated precisely and quantitatively, as human performance is central to understanding the merits of most interaction techniques. New designs, in general, are evaluated according to how compelling, how richly painted, and how well informed their vision is. Designs are often presented as results of competing tradeoffs resolved by sound theoretical, conceptual, or empirical means. Designs that are deeply implemented may also be considered systems and evaluated as such.

Examples of Artifact Contributions
Baudisch, P., Sinclair, M. and Wilson, A. (2006). Soap: A pointing device that works in mid-air. Proceedings of the ACM Symposium on User Interface Software and Technology (UIST '06). Montreux, Switzerland (October 15-18, 2006). New York: ACM Press, 43-46.

Dixon, M. and Fogarty, J.A. (2010). Prefab: Implementing advanced behaviors using pixel-based reverse engineering of interface structure. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '10). Atlanta, Georgia (April 10-15, 2010). New York: ACM Press, 1525-1534.

Gajos, K.Z., Weld, D.S. and Wobbrock, J.O. (2010). Automatically generating personalized user interfaces with SUPPLE. Artificial Intelligence 174 (12-13), 910-950.

Greenberg, S. and Fitchett, C. (2001). Phidgets: Easy development of physical interfaces through physical widgets. Proceedings of the ACM Symposium on User Interface Software and Technology (UIST '01). Orlando, Florida (November 11-14, 2001). New York: ACM Press, 209-218.

Grossman, T. and Balakrishnan, R. (2005). The Bubble Cursor: Enhancing target acquisition by dynamic resizing of the cursor's activation area. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '05). Portland, Oregon (April 2-7, 2005). New York: ACM Press, 281-290.

Kane, S.K., Avrahami, D., Wobbrock, J.O., Harrison, B., Rea, A., Philipose, M. and LaMarca, A. (2009). Bonfire: A nomadic system for hybrid laptop-tabletop interaction. Proceedings of the ACM Symposium on User Interface Software and Technology (UIST '09). Victoria, British Columbia (October 4-7, 2009). New York: ACM Press, 129-138.

Kristensson, P.-O. and Zhai, S. (2004). SHARK2: A large vocabulary shorthand writing system for pen-based computers. Proceedings of the ACM Symposium on User Interface Software and Technology (UIST '04). Santa Fe, New Mexico (October 24-27, 2004). New York: ACM Press, 43-52.

Myers, B.A., McDaniel, R.G., Miller, R.C., Ferrency, A.S., Faulring, A., Kyle, B.D., Mickish, A., Klimovitski, A. and Doane, P. (1997). The Amulet environment: New models for effective user interface software development. IEEE Transactions on Software Engineering 23 (6), 347-365.
Patel, S.N., Gupta, S. and Reynolds, M.S. (2010). The design and evaluation of an end-user-deployable, whole house, contactless power consumption sensor. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '10). Atlanta, Georgia (April 10-15, 2010). New York: ACM Press, 2471-2480.

Schwesig, C., Poupyrev, I. and Mori, E. (2004). Gummi: A bendable computer. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '04). Vienna, Austria (April 24-29, 2004). New York: ACM Press, 263-270.

Wigdor, D., Forlines, C., Baudisch, P., Barnwell, J. and Shen, C. (2007). LucidTouch: A see-through mobile device. Proceedings of the ACM Symposium on User Interface Software and Technology (UIST '07). Newport, Rhode Island (October 7-10, 2007). New York: ACM Press, 269-278.

Wobbrock, J.O., Wilson, A.D. and Li, Y. (2007). Gestures without libraries, toolkits or training: A $1 recognizer for user interface prototypes. Proceedings of the ACM Symposium on User Interface Software and Technology (UIST '07). Newport, Rhode Island (October 7-10, 2007). New York: ACM Press, 159-168.

3. METHODOLOGICAL CONTRIBUTIONS
Methodological research contributions add to or refine the methods by which researchers or practitioners carry out their work. Research methods enable scientists to make new discoveries. Practitioner methods enable designers and engineers to apply their skills to greater effect. Entirely new methods of either sort are infrequent; method variations are more common.

How Methodological Contributions Are Evaluated
Methodological contributions are evaluated largely on the basis of the utility of the new or improved method. Demonstrating the utility of a method often requires empirical validation. Such validation may be formal in nature (e.g., an experiment in which one of two groups uses the new method, while the other group uses an extant de facto method), or a case study (e.g., where the method is applied in a particular setting and outcomes are analyzed and reported). The goal of validating a methodological contribution is to convince readers that the new method or method improvement is useful, valid, and reliable for its intended purpose. As the method is to be used by others, it must be described well enough to be employed by researchers or practitioners, including warnings of its pitfalls and shortcomings.

Examples of Methodological Contributions
Blomberg, J., Giacomi, J., Mosher, A. and Swenton-Wall, P. (1993). Ethnographic field methods and their relation to design. In Participatory Design: Principles and Practices, D. Schuler and A. Namioka (eds.). Hillsdale, New Jersey: Lawrence Erlbaum, 123-155.

Consolvo, S. and Walker, M. (2003). Using the Experience Sampling method to evaluate ubicomp applications. IEEE Pervasive Computing 2 (2), 24-31.

Guiard, Y. (2009). The problem of consistency in the design of Fitts' law experiments: Consider either target distance and width or movement form and scale. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '09). Boston, Massachusetts (April 4-9, 2009). New York: ACM Press, 1809-1818.

Holtzblatt, K. and Jones, S. (1993). Contextual Inquiry: A participatory technique for system design. In Participatory Design: Principles and Practices, D. Schuler and A. Namioka (eds.). Hillsdale, New Jersey: Lawrence Erlbaum, 177-210.

Kjeldskov, J. and Stage, J. (2004). New techniques for usability evaluation of mobile systems. International Journal of Human-Computer Studies 60 (5-6), 599-620.

Palen, L. and Salzman, M. (2002). Voice-mail diary studies for naturalistic data capture under mobile conditions. Proceedings of the ACM Conference on Computer Supported Cooperative Work (CSCW '02). New Orleans, Louisiana (November 16-20, 2002). New York: ACM Press, 87-95.

Price, K.J. and Sears, A. (2009). The development and evaluation of performance-based functional assessment: A methodology for the measurement of physical capabilities. ACM Transactions on Accessible Computing 2 (2), 10:1-10:31.

Soukoreff, R.W. and MacKenzie, I.S. (2003). Metrics for text entry research: An evaluation of MSD and KSPC, and a new unified error metric. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '03). Ft. Lauderdale, Florida (April 5-10, 2003). New York: ACM Press, 113-120.

Soukoreff, R.W. and MacKenzie, I.S. (2004). Towards a standard for pointing device evaluation, perspectives on 27 years of Fitts' law research in HCI. International Journal of Human-Computer Studies 61 (6), 751-789.

Wobbrock, J.O., Aung, H.H., Rothrock, B. and Myers, B.A. (2005). Maximizing the guessability of symbolic input. Extended Abstracts of the ACM Conference on Human Factors in Computing Systems (CHI '05). Portland, Oregon (April 2-7, 2005). New York: ACM Press, 1869-1872.

Wobbrock, J.O., Morris, M.R. and Wilson, A.D. (2009). User-defined gestures for surface computing. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '09). Boston, Massachusetts (April 4-9, 2009). New York: ACM Press, 1083-1092.

Wobbrock, J.O., Findlater, L., Gergle, D. and Higgins, J.J. (2011). The Aligned Rank Transform for nonparametric factorial analyses using only ANOVA procedures. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '11). Vancouver, British Columbia (May 7-12, 2011). New York: ACM Press, 143-146.

4. THEORETICAL CONTRIBUTIONS
Theoretical contributions consist of new or improved concepts, definitions, models, principles, or frameworks. These thought-vehicles may be quantitative or qualitative in nature, and structured so as to be useful in the pursuit of future knowledge. Theories are built over time, and in some fields (e.g., psychology, physics), after repeated validation, theories may attain the status of laws. Theories are both descriptive and predictive in nature; that is, they reveal the essential features of what is (descriptive) while accurately foretelling what will be (predictive). Theories must be explanatory in nature. They must not only state that a relationship holds, but also why it holds the way it does. Scientific theories must also be falsifiable; they must assert something
that may or may not be true. If a theory cannot be falsified even in principle, it is not a scientific theory. Theoretical contributions significantly advance our understanding of phenomena by providing inherently reusable constructs and ways of thinking about phenomena of interest.

How Theoretical Contributions Are Evaluated
Theoretical contributions are evaluated based on their novelty, importance, descriptive power, and predictive power. A theory that accounts well for observed data from a specific situation but has no ability to generalize to a new situation is inherently limited. Such a theory may be “over-fit” to the observed data. Conversely, a theory that is so broad it can account for anything probably does not contain any real descriptive power. It lacks specifics and is “under-fit.” For these and other reasons, theory validation is almost always accompanied by empirical work, although such work occasionally precedes and gives rise to theory.

Examples of Theoretical Contributions
Bellotti, V., Back, M., Edwards, W.K., Grinter, R.E., Henderson, A. and Lopes, C. (2002). Making sense of sensing systems: Five questions for designers and researchers. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '02). Minneapolis, Minnesota. New York: ACM Press, 415-422.

Buxton, W. (1990). A three-state model of graphical input. Proceedings of the IFIP TC13 Third Int'l Conference on Human-Computer Interaction (INTERACT '90). Cambridge, England (August 27-31, 1990). Amsterdam, The Netherlands: North-Holland, 449-456.

Cao, X. and Zhai, S. (2007). Modeling human performance of pen stroke gestures. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '07). San Jose, California (April 28-May 3, 2007). New York: ACM Press, 1495-1504.

Card, S.K., Mackinlay, J.D. and Robertson, G. (1990). The design space of input devices. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '90). Seattle, Washington (April 1-5, 1990). New York: ACM Press, 117-124.

Guiard, Y. (1987). Asymmetric division of labor in human skilled bimanual action: The kinematic chain as a model. Journal of Motor Behavior 19 (4), 486-517.

MacKenzie, I.S. (1992). Fitts' law as a research and design tool in human-computer interaction. Human-Computer Interaction 7 (1), 91-139.

Schön, D.A. (1992). Designing as reflective conversation with the materials of a design situation. Knowledge-Based Systems 5 (1), 3-14.

Wobbrock, J.O., Cutrell, E., Harada, S. and MacKenzie, I.S. (2008). An error model for pointing based on Fitts' law. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '08). Florence, Italy (April 5-10, 2008). New York: ACM Press, 1613-1622.

Zhai, S., Kong, J. and Ren, X. (2004). Speed-accuracy tradeoff in Fitts' law tasks—on the equivalency of actual and nominal pointing precision. International Journal of Human-Computer Studies 61 (6), 823-856.

5. BENCHMARK / DATASET CONTRIBUTIONS
Benchmarks or datasets are infrequent contributions in HCI, but they do occur. A benchmark or dataset contribution provides a new and useful corpus, often accompanied by an analysis of its characteristics, for the benefit of the research community. Benchmarks are offered along with standard tests to facilitate cross-project comparisons. Datasets enable evaluations of new algorithms, systems, or methods against common data repositories. Benchmark or dataset contributions are more common in the artificial intelligence, algorithms, operating systems, and database communities, to name a few.

How Benchmark / Dataset Contributions Are Evaluated
A benchmark or dataset contribution is judged favorably to the extent that it supplies the research community with a much-needed corpus against which to test future innovations. Also, benchmarks or datasets should be accompanied by explanations of how the benchmark was created or how the data was gathered, in what ways it is (or is not) representative, and common procedures to employ with it. Often, benchmarks or datasets are published with new tools that enable researchers to work with the new corpus. Where new methods or tools are released with new data, benchmark or dataset contributions may be part of methodological or artifact contributions as well.

Examples of Benchmark / Dataset Contributions
Hse, H. and Newton, A.R. (2003). Sketched symbol recognition using Zernike moments. Technical Memorandum UCB/ERL M03/49, Electronics Research Lab, Department of EECS, University of California, Berkeley.

Llorens, D., Prat, F., Marzal, A., Vilar, J.M., Castro, M.J., Amengual, J.C., Barrachina, S., Castellanos, A., España, S., Gómez, J.A., Gorbe, J., Gordo, A., Palazón, V., Peris, G., Ramos-Garijo, R. and Zamora, F. (2008). The UJIpenchars database: A pen-based database of isolated handwritten characters. Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC '08). Marrakech, Morocco (May 28-30, 2008). Paris, France: European Language Resources Association, 2647-2651.

MacKenzie, I.S. and Soukoreff, R.W. (2003). Phrase sets for evaluating text entry techniques. Extended Abstracts of the ACM Conference on Human Factors in Computing Systems (CHI '03). Ft. Lauderdale, Florida (April 5-10, 2003). New York: ACM Press, 754-755.

Myers, B. et al. (1997). Using benchmarks to teach and evaluate user interface tools. Available at http://www.cs.cmu.edu/~amulet/papers/benchmarks.pdf

Paek, T. and Hsu, B.-J.P. (2011). Sampling representative phrase sets for text entry experiments: A procedure and public resource. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '11). Vancouver, British Columbia (May 7-12, 2011). New York: ACM Press, 2477-2480.

Plaisant, C., Fekete, J.-D. and Grinstein, G. (2008). Promoting insight-based evaluation of visualizations: From contest to benchmark repository. IEEE Transactions on Visualization and Computer Graphics 14 (1), 120-134.
Willems, D., Niels, R., van Gerven, M. and Vuurpijl, L. (2009). Iconic and multi-stroke gesture recognition. Pattern Recognition 42 (12), 3303-3312.

6. SURVEY CONTRIBUTIONS
Survey contributions are attempts to review and synthesize work done in a research field with the goal of exposing trends, themes, and gaps. Survey contributions take a step back (and often a step up), organizing the literature on a particular topic and reflecting on what it means. Often, survey contributions are conducted after a topic has reached a certain level of maturity. It is not uncommon for surveys to be over fifty pages in length, with references numbering in the hundreds. The journal ACM Computing Surveys is exclusively devoted to publishing survey contributions in computing. In HCI, the journal Foundations and Trends in HCI regularly publishes survey contributions.

How Survey Contributions Are Evaluated
To be effective, survey contributions must not be mere laundry lists of prior work. Rather, they must review and synthesize this work, extracting emergent themes and trends, and identifying gaps where new opportunities lie. Surveys are judged on their completeness, depth, organization, maturity, synthesis, and fairness. Surveys are also judged favorably to the extent that they uncover promising new areas for future work.

As an example, consider the stated scope of ACM Computing Surveys: “[To] present new specialties and help practitioners and researchers stay abreast of all areas in the rapidly evolving field of computing. Computing Surveys focuses on integrating and adding understanding to the existing literature. [It] does not publish ‘new’ research. Instead, [it] focuses on integrat[ing] the existing literature and put[ting] its results in context. [S]urveys … must develop a framework or overall view of an area that integrates the existing literature. Frequently, such a framework exposes topics that need additional research. Basically, a [survey] article answers the questions, ‘What is currently known about this area, and what does it mean to researchers and practitioners?’ It should supply the basic knowledge to enable new researchers to enter the area, current researchers to continue developments, and practitioners to apply the results.”

Examples of Survey Contributions
Balakrishnan, R. (2004). “Beating” Fitts’ law: Virtual enhancements for pointing facilitation. International Journal of Human-Computer Studies 61 (6), 857-874.

Holden, M.K. (2005). Virtual environments for motor rehabilitation: Review. CyberPsychology and Behavior 8 (3), 187-211.

Johnson, G., Gross, M.D., Hong, J. and Do, E.Y.-L. (2009). Computational support for sketching in design: A review. Foundations and Trends in Human-Computer Interaction 2 (1), 1-93.

MacKenzie, I.S. and Soukoreff, R.W. (2002). Text entry for mobile computing: Models and methods, theory and practice. Human-Computer Interaction 17 (2), 147-198.

Plamondon, R. and Srihari, S.N. (2000). On-line and off-line handwriting recognition: A comprehensive survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (1), 63-84.

Sawilowsky, S.S. (1990). Nonparametric tests of interaction in experimental design. Review of Educational Research 60 (1), 91-126.

Shaer, O. and Hornecker, E. (2009). Tangible user interfaces: Past, present and future directions. Foundations and Trends in Human-Computer Interaction 3 (1-2), 1-137.

Welford, A.T. (1960). The measurement of sensory-motor performance: Survey and reappraisal of twelve years' progress. Ergonomics 3 (3), 189-230.

7. OPINION CONTRIBUTIONS
Papers making opinion contributions seek to change the minds of readers through persuasion. Although the term “opinion” might suggest a less-than-scientific effort, in fact, opinion contributions, to be persuasive, must draw upon many of the above contribution types to advance their case, especially empirical results. Opinion contributions are considered a separate contribution type not because they lack scientific bases, but because of their goal, which is to persuade rather than to just inform. Along with persuasion, the goal of opinion contributions is to compel discussion, reflection, and even dissension or a change of course for the field. Opinion articles advance a specific point of view more overtly than articles from other contribution types.

How Opinion Contributions Are Evaluated
Opinion contributions are evaluated on the credibility and use of their supporting evidence and examples, on their fair consideration of alternate perspectives, and on the strength of their articulated position. Opinion contributions should focus on topics of interest to a broad community, and should therefore have widespread appeal. Often, opinion contributions appear in semi-scholarly venues such as ACM Interactions to reach a wide audience.

Examples of Opinion Contributions
Bannon, L. (2011). Reimagining HCI: Toward a more human-centered perspective. Interactions 18 (4), 50-57.

Bernstein, M.S., Ackerman, M.S., Chi, E.H. and Miller, R.C. (2011). The trouble with social computing systems research. Extended Abstracts of the ACM Conference on Human Factors in Computing Systems (CHI '11). Vancouver, British Columbia (May 7-12, 2011). New York: ACM Press, 389-398.

Dourish, P. (2006). Implications for design. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '06). Montréal, Québec (April 22-27, 2006). New York: ACM Press, 541-550.

Greenberg, S. and Buxton, B. (2008). Usability evaluation considered harmful (some of the time). Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '08). Florence, Italy (April 5-10, 2008). New York: ACM Press, 111-120.

Harper, S. (2007). Is there design for all? Universal Access in the Information Society 6 (1), 111-113.
Newell, A. and Card, S.K. (1985). The prospects for psychological science in human-computer interaction. Human-Computer Interaction 1 (3), 209-242.

Norman, D.A. (1999). Affordance, conventions, and design. Interactions 6 (3), 38-43.

Norman, D.A. (2006). Logic versus usage: The case for activity-centered design. Interactions 13 (6), 45, 63.

Olsen, D. (2007). Evaluating user interface systems research. Proceedings of the ACM Symposium on User Interface Software and Technology (UIST '07). Newport, Rhode Island (October 7-10, 2007). New York: ACM Press, 251-258.

Shneiderman, B. (2000). Universal usability. Communications of the ACM 43 (5), 84-91.

Taylor, A. (2015). After interaction. Interactions 22 (5), 48-53.

ACKNOWLEDGEMENTS
I thank Scott E. Hudson for our numerous discussions of “activities of discovery” and “activities of invention.”
