Developmental Research for Information and Technology

Reported By: Raymond A. Ramirez

Developmental Research

In the context of engineering and technology:

The systematic study of design, development and evaluation processes with the aim of establishing an
empirical basis for the creation of instructional and non-instructional products and tools and new or
enhanced models that govern their development.

A second definition, within engineering and technology, holds that developmental research is the
systematic study of designing, developing, and evaluating instructional programs, processes, and
products that must meet criteria of internal consistency and effectiveness.

Types of Developmental Research

Developmental research is particularly important in the field of instructional technology.

 The first and most common type of developmental research involves situations in which the
product-development process is analyzed and described, and the final product is evaluated.

 A second type of developmental research focuses more on the impact of the product on the
learner or the organization.

 A third type of study is oriented toward a general analysis of design, development, or evaluation
processes as a whole or as components.

Purpose:

The general purposes of developmental research have been described as knowledge production,
understanding, and prediction. Within this framework, developmental research has particular emphases,
which vary in the extent to which the conclusions are generalizable or contextually specific.

We purposely use the term design and development throughout this topic because together the terms
have a broad meaning, especially in the research context. The focus of a design and development study
can be on front-end analysis, planning, production, and/or evaluation. This approach can also center on
the design and development of products and tools, or on the development, validation, and use of design
and development models. In essence, this is the study of design and development processes, as opposed
to performing them.
Defining a Research Problem

Identifying a research problem and related questions is the first step in planning any empirical study.
This can be difficult, especially for those who are planning their first study or for those searching for a
new research agenda. This difficulty is further compounded by the search for important problems and
questions. Research problems in design and development should address important questions that
contribute to our knowledge base and to the improvement of our practice.

Guidelines in Selecting a Research Problem

Other authors suggest some practical guidelines for identifying a research problem. Patten (2002), for
example, recommends that novice researchers should start by identifying a few broad problem areas of
interest (such as distance education, performance analysis, or rapid prototyping) and then evaluate each
area by asking, “Is the problem area in the mainstream of the field?” “Is there a substantial body of
literature in the area?” “Is the problem timely?”
Gall, Gall, and Borg (2003) advise that research
problems should be based on factors such as (a) significance (Is it important?), (b) feasibility (Do you
have the resources and expertise necessary to study it?) and (c) benefit (Is it directly related to your
professional goals?). Tuckman (1999) indicates that a good research problem is clearly and
unambiguously stated in question form; is testable by empirical, data-based methods; and does not
represent a moral or ethical position.
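As an illustration only, the selection criteria above can be read as a simple screening checklist. The sketch below is our own construction: the field names paraphrase the cited criteria from Gall, Gall, and Borg (2003) and Tuckman (1999), and the example problems are invented.

```python
# Hypothetical sketch: screening candidate research problems against the
# selection criteria summarized above. The scoring rule is illustrative,
# not part of any cited source.
from dataclasses import dataclass

@dataclass
class CandidateProblem:
    statement: str            # stated in question form (Tuckman, 1999)
    is_significant: bool      # Gall et al.: is it important?
    is_feasible: bool         # do you have the resources and expertise?
    benefits_goals: bool      # related to your professional goals?
    is_testable: bool         # answerable with empirical, data-based methods
    is_value_judgment: bool   # represents a moral/ethical position (disqualifier)

def passes_screening(p: CandidateProblem) -> bool:
    """A problem passes only if it meets every positive criterion and is
    not merely a moral or ethical position."""
    return (p.is_significant and p.is_feasible and p.benefits_goals
            and p.is_testable and not p.is_value_judgment)

good = CandidateProblem(
    "Does rapid prototyping reduce design-cycle time in corporate ID projects?",
    True, True, True, True, False)
bad = CandidateProblem(
    "Should distance education replace classroom teaching?",
    True, True, True, False, True)   # a position, not an empirical question

print(passes_screening(good))  # True
print(passes_screening(bad))   # False
```

The point of the sketch is simply that Tuckman's final criterion acts as a veto: a question that encodes a moral position fails regardless of its other merits.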

Sources of Developmental Research Problems


Problems in the workplace:

For those involved in design and development research, the problems are often found by listening to
practitioners. The workplace is a primary source of research problems. An astute researcher can
identify many problems and questions by observing how design and development is done in a particular
setting, by discussing ID practices with designers, or by reflecting on his or her own practice. Not only
are the experiences and concerns of those in various workplace settings important stimuli to this type of
research, but actual projects themselves can also serve as the focus of design and development
research. Thus, the research is rooted in the objectives and the complexities of practice. The problems
are explicitly defined and the solutions are interpreted in light of the workplace's contextual details and
characteristics.

What are the problems that are found routinely on the job?
Certainly, there are many. However, to be noticed as a researchable workplace problem and to
be considered critical, a situation typically needs to be:
 Recurring and common to many settings.
 Viewed as basically solvable.
 Reflective of broad areas of current interest in the field.

Problems Related to Emerging Technologies

Technology permeates our personal lives as well as the design and development profession.
According to Milrad, Spector, and Davidsen (2000):

Technology changes. Technology changes what we do and what we can do. People change on account of
technology. Technology in support of learning and instruction is no different. Instructional technology
changes what teachers and learners do and can do. Today, many of the “cutting edge” research topics
relate to the development and use of emerging technologies. Technology has always served as an
impetus to design and development research with formal inquiry typically following the initial practical
exploration and experimentation with new technologies.

Problems Related to Design and Development Theory


In keeping with this position, the evolution of design and development as a discipline requires that
empirical evidence be collected to serve as the foundations of this science and of our theory.

Characteristics of theory-based problems.

The design and development theory base is growing and becoming more diverse as the discipline
expands. While the practice of instructional design itself rests upon many types of theory (principally
systems theory, and theories of learning, instruction, and communication), instructional design theory
tends to relate to ID models and processes, designer decision-making, and emerging areas in which ID
principles and theories are being applied. These are the sources, in general, of theory based design and
development research problems. Within each of these theory clusters, more specific issues and
problems emerge.

Representative theory problems.

How has this problem-identification process actually worked with some completed projects that were
rooted in theory problems? Recently, there have been some model
validation studies that follow the tenets of design and development research. Tracey (2002) constructed
and validated an instructional systems design model that incorporated Gardner's notion of multiple
intelligences. This study encompassed both an internal and external validation of the model. The
external validation took place in a natural training setting. Tessmer, McCann, and Ludvigsen (1999) empirically
tested the validity of a new model of needs assessment used to reassess existing training programs to
determine if training excesses and deficiencies exist. This study included a partial internal validation
and a formal external validation, again taking place in a “real” work environment. These two studies
represent alternative approaches to model research. One addressed a model of the entire design and
development process and the other examined one part of the process.

Focusing the Research Problem

Once an important problem has been identified, the next task is to focus the problem in such a way that
the research effort can lead to specific new knowledge for the field. This focusing process gives the
study a “design and development twist,” and narrows it so that new findings can be attributed to a
particular aspect of the complex design and development process.

While there are some unique focusing strategies used in design and development research, as with all
research, the focusing process ultimately results in a set of research questions that reflect the critical
components of the problem selected for study. This involves a process of transforming a general topic
into specific questions that frame the study. In this process, the research topic is narrowed. Figure 2-1
summarizes this narrowing process.

Transforming Research Problems into Research Questions


Once an area of interest and the problem have been identified using the techniques discussed earlier,
the narrowing task has just begun. This process involves determining the various
components of the problem. If you are going to study the use of automated design tools, for instance,
what are the critical parts of the problem? Should we concentrate on the type of content typically
addressed using these tools? What is the expertise of the designer using the tool? What are the
attitudes of the designer toward automation? What are the available resources? The knowledge needed
to identify the important parts of the problem situation comes primarily from your command of the
literature, your familiarity with the problem, and your familiarity with the research context.

Defining the Parameters of the Study

The problem definition stage also includes establishing the parameters of the particular research
project. Establishing these parameters is a part of focusing the study.

General parameter decisions. One must make some standard decisions when determining the
parameters of any design and development study.
These include:

 Will all phases of the design and development cycle be addressed, or will the research
concentrate on just one particular aspect of the process?
 Will the research be conducted while design and development activities are occurring or will
retrospective data be collected on an instructional program or performance intervention that
was previously designed and developed? Or will one consider both options?

Other parameter decisions, however, vary depending upon whether one is conducting research on a
product, tool, or model.

Parameters of research on product and tool design and development.
Much design and development research is conducted during the design of a product or tool. When
establishing the parameters of this type of study, the following considerations should be addressed:

 What will be the scope of the study? Will it address the analysis of learning needs and goals?
Will it address the planning and production of interventions, materials, and activities? Will it
address the tryout and revision of these materials and activities?
 Will evaluation data be collected and reported? Will formative, summative, and/or confirmative
evaluation be conducted?
 Will enroute and outcome measures be used? Will student-student, student-teacher, or student
technology interactions be investigated? Will reaction, learning, performance, or return on
investment be measured?

The parameters of research on design and development models.

Research may involve constructing and validating unique design models and processes, as well as
identifying those conditions that facilitate their successful use. When one is establishing the parameters
of this type of design and development study, the following decisions must be made:

 Will the study address model development, model implementation, or model validation, or a
combination of these phases?
 Will the study encompass data from one design and development project or many?
 Will the design and development tasks being studied be situation specific, or will the tasks
encompass a variety of design settings?
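Taken together, these parameter decisions amount to a small study configuration. The sketch below is purely illustrative; the class, the enum, and the example values are our own shorthand, not taken from the source.

```python
# Illustrative sketch: recording the parameter decisions for a model study
# as an explicit configuration object. Field names mirror the decision
# points listed above; everything else is invented for demonstration.
from dataclasses import dataclass
from enum import Enum

class Phase(Enum):
    DEVELOPMENT = "model development"
    IMPLEMENTATION = "model implementation"
    VALIDATION = "model validation"

@dataclass
class ModelStudyParameters:
    phases: list              # which phases the study addresses (may combine)
    projects_studied: int     # data from one design project, or from many
    situation_specific: bool  # tasks tied to one setting vs. many design settings

# Example: a single-project study covering development plus validation,
# conducted in one specific design setting.
params = ModelStudyParameters(
    phases=[Phase.DEVELOPMENT, Phase.VALIDATION],
    projects_studied=1,
    situation_specific=True)

print(len(params.phases))  # 2
```

Writing the decisions down this explicitly makes it obvious when a parameter has been left unresolved before data collection begins.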

The Review of Related Literature

The Role of Related Literature in Problem Identification

The literature contains original ideas and concepts that form the collective body of prior work in a field
(Tuckman, 1999). It provides a knowledge base for a researcher seeking important problems and
questions. For example, an article published in a journal aimed at practitioners may stimulate an idea
related to the use of design and development in the workplace, or a paper presented at a conference
may suggest possible questions about the design of technology-based instruction. The discussion section
of research reports is a good source for potential problems and questions as they often give “directions
for future research.”

The literature also offers information to help a researcher refine his or her problem statement and
questions. According to Gall, Gall, and Borg (2003), you should conduct a thorough study of the
literature after identifying a potential research problem to answer the following questions:
 Has research on this problem been previously conducted?
 If so, what has been learned?
 What can this study contribute to what is already known?
 Is the research problem significant or are there more important problems that should be
addressed?

Furthermore, the literature can suggest ideas for research procedures, data sources, and instruments.
For example, a well-written research article on the validity of an ID model might recommend the range
of participants or data collection tools to include in another validation study.

Sources for a Literature Review in Design and Development


Researchers should consult a variety of sources for literature in their field of study. Patten (2002)
suggests several general sources to identify broad problem areas including textbooks, review and
reference publications, “signature” publications of major professional associations, and journals that
specialize in research reviews. These are always good places to find ideas for research. However, what
sources are available to someone who is interested in conducting design and development research?

Traditional sources of literature.

Journals that publish articles on research, theory, development, and utilization of instructional design
and technology (IDT) provide a primary source of literature. They contain original ideas and material
that can be used in a literature review. According to Klein and Rushby (2007), journals in our field focus
on a wide variety of topics, including distance education, instructional development, multimedia, and
performance improvement. These authors describe 75 journals in IDT and related fields.

Other traditional sources of literature are books and journals that publish review and synthesis
articles.
Dissertations. While journals and books provide the foundation for a literature review, some of
them may not have the most current or “cutting edge” information. It is not unusual for an author to
submit a manuscript for publication, only to see it take two or more years before it’s reviewed, revised, and
finally published. This process of “refereed publication” helps to ensure the quality of scholarship in a
field. It also increases the length of time it takes before a new, emerging process or technology is
exposed to a field.
Recently completed dissertations provide a good source of up-to-date information on workplace
issues, emerging technologies, innovative tools, and promising ID models. A well-written dissertation
also provides a comprehensive review of literature on a topic (but we caution you not to rely on
secondary sources alone). Furthermore, a well-constructed dissertation may include complete data sets
that often must be left out of subsequent publications since journals do not provide enough space for
detailed documentation of design and development research data (Richey & Klein, 2005).


Conference papers. Papers presented at conferences of professional associations also offer
information on new ideas and trends in a field. However, many conference papers are not subjected to a
rigorous peer-review process. Klein and Rushby (2007) point out, “as an educated consumer of
information, you should critically analyze the content of what you read regardless of where it has been
published” (p. 262). In many cases, full-text conference papers are not distributed at the presentation
itself, and often those papers that are distributed do not include empirical data. One way to obtain a
complete paper is to send a message to the author requesting a copy of the paper and any others the
author has written on the topic. We also suggest that you examine the contents of a convention
program for topics and papers of interest, even if you didn’t attend the conference. Many learned
societies now post searchable conference programs on their Web sites.

Documents from Work Settings

We previously identified the workplace as a primary source of design and development research
problems. Correspondingly, documents from work settings can contribute to the knowledge base of the
researcher who is constructing design and development research
questions. They can also be used as data sources to answer these research questions. For example,
needs assessment reports, detailed design documents, memoranda that summarize meetings with
clients, and evaluation reports can be reviewed to search for important issues and problems. While the
proprietary nature of such documents and artifacts can pose a special challenge for researchers, many
researchers have secured similar documents with pledges to maintain anonymity and confidentiality.

Design and Development Research Methodology

A standard question of novice researchers is “What’s the hardest part of doing research?” One realistic
reply is “Whatever you’re working on at the time.” Problem definition is a complex task, but that should
be accomplished by now. The next major quandary is determining what methods and strategies you will
use to produce meaningful data and insightful conclusions—in other words, constructing the research
design. In design and development research, as with other types of research, this, too, can be a difficult
undertaking.
A good research design makes it possible for you to answer the questions posed by the research
problem or test the hypothesized solutions to the problem. A good design makes it possible to
determine how the findings can be applied in other situations. A faulty research design, on the other
hand, produces data that is suspect, which ultimately leads to unsubstantiated conclusions that have
little likelihood of being used in practice situations. Poor research designs make the research process a
waste of time.

Nature of Research Design


Similar to designing instruction and other types of interventions, designing research is a planning
process. Research designs have been called the blue-prints that guide researchers throughout their
projects (Frankfort-Nachmias & Nachmias, 2000). A research design establishes the general framework
of a study, addressing each phase of the investigative process. However, research designs are not rigid
prescriptions for completing a study. Expert researchers design their studies and then implement these
designs with flexibility as they respond to situations that arise as their projects progress.
Research designs vary to a great extent depending upon whether the study has a quantitative or
qualitative orientation. Nonetheless, there are some general concerns that should be considered when
designing any research project. These include:

 Establishing the validity of the final conclusions.


 Establishing conditions that make causal inferences and assertions plausible.
 Facilitating generalization and interpretation.
 Anticipating problems that may arise in the course of conducting the research.

Components of a Research Design:

Researchers struggle with a myriad of problems when constructing their research designs. Their
solutions to these problems depend upon both technical and intuitive analyses. Typically, a
comprehensive research design describes the decisions that have been made with respect to:
 The types of observations that will be required to answer the research questions or test the
hypotheses; that is, the data collection that will be necessary.
 The strategies that will be used to make these observations; that is, the research methods
that will be employed.
 Variables that are central to the study and variables that should be controlled.
 The participants in the research project.
 Instrumentation and measurement of variables.
 Data analysis.
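The decision list above can be pictured as a structured record that is complete only when every component has been filled in. This is an illustrative sketch; the class and the example entries are invented for demonstration.

```python
# A minimal sketch (our own construction) of the six decisions a
# comprehensive research design documents, per the list above.
from dataclasses import dataclass

@dataclass
class ResearchDesign:
    observations: str    # data collection needed to answer the questions
    methods: str         # strategies used to make those observations
    variables: dict      # {"central": [...], "controlled": [...]}
    participants: str    # who takes part in the project
    instrumentation: str # how variables will be measured
    analysis: str        # how the data will be analyzed

    def is_complete(self) -> bool:
        # A design is usable as a blueprint only once every decision
        # has actually been recorded.
        return all([self.observations, self.methods, self.variables,
                    self.participants, self.instrumentation, self.analysis])

# Invented example: a small qualitative tool-use study.
design = ResearchDesign(
    observations="designer think-aloud protocols during prototyping",
    methods="case study with document analysis",
    variables={"central": ["design decisions"], "controlled": ["tool used"]},
    participants="six practicing instructional designers",
    instrumentation="interview guide; activity log",
    analysis="qualitative coding of transcripts")

print(design.is_complete())  # True
```

The record is a blueprint in exactly the Frankfort-Nachmias and Nachmias sense quoted below: it guides the project but can still be revised as the study unfolds.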

Product and Tool Research: Methods and Strategies

Many design and development studies focus on a specific product or program. Frequently, this type of
research examines the entire design and development process from analyses to evaluation. However,
some of this research concentrates only on one or two phases of design and development. Furthermore,
researchers have recently examined the development and use of tools that can be used to assist
designers and developers or support the teaching/learning process. In studies of products, programs,
and tools, there is a tendency to combine the tasks of doing design and development and studying it.

STRATEGIES OF PRODUCT DEVELOPMENT RESEARCH

A Representative Mixed Methods Case Study

The classic product development study is descriptive research using case study methods. Many of these
studies describe the entire lifespan of the product development process in detail (see Russell, 1990 or
Shellnut, Knowlton, & Savage, 1999). These projects provide an extensive description of design and
development, as well as pertinent technological details. If you were planning a product development
study, it would be useful to examine reports of similar efforts found in both dissertations (such as the
Russell study) and journal articles (such as the Shellnut et al. study).

Visser et al.’s (2002) research culminated in an instructional product that was designed to be used in
conjunction with other distance education materials. Specifically, they designed, developed, and tested
a technique for creating motivational messages. This approach has been called the Motivational
Messages Support System (Visser, 1998). The product attends to learners’ motivational requirements
and, in turn, is intended to reduce the dropout and non-completion rates of distance education
programs. The research design of Visser et al. provides for a systematic process of data collection that
results first in a prototype of the final product. Then, throughout the study, the design allows for
continued “development and the improvement of the product (the motivational message), focused on
the process and assessed the validity, the practicality and the effectiveness of the product” (Visser,
1998, p. 17). The entire product development effort was an empirical process.

A Representative Multiple Qualitative Methods Study: Corry, Frick, and Hansen (1997)

This type of project-specific procedural detail is typical of product development studies. The research
design here is a blend of data-collection techniques and the design and development procedures
themselves. Often studies will also provide details of exactly how the product is developed, although it
was not done in this particular research report.

STRATEGIES OF PROGRAM DEVELOPMENT RESEARCH


While much product development research focuses on specific instructional materials, other research
has a broader concern and addresses the development of the entire program.
While many of the design and development principles are used in both product and program design,
techniques often vary when one deals with large bodies of content.

A Representative Program Evaluation Study

Sullivan, Ice, and Niedermeyer’s (2000) study is representative of program development research that
focuses on the impact of an instructional program rather than the design and development procedures
per se. Practitioners can learn a great deal from research of this type, even though instructional design
(ID) processes themselves are not highlighted.

STRATEGIES FOR RESEARCH ON DESIGN AND DEVELOPMENT PHASES


However, not all design and development research pertains to a complete project. There is a large body
of work that describes only specific phases of a design and development effort. This research
demonstrates a wide variety of approaches to each phase. For example, Link and Cherow-O’Leary
(1990) describe research only on the needs assessment phase that uses surveys, polls, in-school testing,
and focus groups. Currently, research on design and development phases is more likely to speak to data
gathering phases of the ID process: needs assessment, and formative and summative evaluation. The
literature is less robust with respect to newer types of evaluation, such as confirmative evaluation, or
design phases, such as content sequencing and strategy selection.

A Representative Mixed Methods Study of Formative Evaluation

Fischer, Savenye, and Sullivan’s (2002) research addresses formative evaluation and pertains to a
computer-based course on an online financial and purchasing system. Typical of most formative
evaluation endeavors, its basic purpose was to verify program effectiveness and identify necessary
course revisions. The formative evaluation, however, was designed with concern for process-cost
effectiveness, even as it assumed a comprehensive orientation. The formative evaluation occurred in
three stages in the design and development effort. These were:
 An expert review of content and user interface design during the development phase.
 One-on-one evaluations of the instruction prior to tryout.
 A full-scale tryout.

A Representative Multiple Quantitative Methods Study of Integrated Evaluation

Teachout, Sego, and Ford’s (1997/1998) research describes a method for combining three different
approaches to summative evaluation of instruction: measuring training effectiveness, training efficiency,
and transfer of training. First, Teachout et al. had to operationally define each variable and identify their
indicators.

The uniqueness of this approach to evaluation is not simply the distinctive measures, but the manner in
which the data are combined. The training efficiency, effectiveness, and transfer data are linked and
their relationships highlighted in tables. In summary, the researchers found that “tasks that are
overtrained are performed more frequently on the job than the other tasks and are performed to a
higher level of effectiveness” (p. 177). These evaluation results provide insights that can easily be
directed toward further course development; in this manner, the approach is similar to using formative
evaluation data.
Teachout et al. were engaged in exploratory design and development research. Their methodologies
are all quantitative— primarily survey strategies combined with quantitative analyses of course content
and an array of statistical techniques.
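The per-task linkage Teachout et al. perform can be sketched with invented data. The figures below are ours, not the study's; they only illustrate joining efficiency, frequency, and effectiveness records on the task.

```python
# Invented illustrative data (not Teachout et al.'s actual figures) showing
# the kind of per-task linkage the study performs: training efficiency
# (over/under-training), on-the-job frequency, and effectiveness, joined
# on the task identifier.
efficiency    = {"task_a": "overtrained", "task_b": "matched", "task_c": "undertrained"}
frequency     = {"task_a": 14, "task_b": 6, "task_c": 2}      # uses per month on the job
effectiveness = {"task_a": 4.6, "task_b": 3.9, "task_c": 3.1}  # 5-point rating

# Link the three data sets per task, as the study's tables do.
linked = {t: (efficiency[t], frequency[t], effectiveness[t]) for t in efficiency}

overtrained = [t for t, (e, _, _) in linked.items() if e == "overtrained"]
others = [t for t in linked if t not in overtrained]

# With these toy numbers, the overtrained task is performed more often and
# more effectively than the others, mirroring the pattern the authors report.
more_frequent = all(frequency[o] > frequency[t] for o in overtrained for t in others)
print(more_frequent)  # True
```

The design insight is in the join itself: no single data set shows the relationship until the three measures are brought together per task.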

STRATEGIES FOR RESEARCH ON TOOL DEVELOPMENT AND USE

Recently, some design and development researchers have been concentrating on studying the
development and use of tools, rather than on products or programs. These tools either make design and
development itself easier, or support the teaching/learning process.

Tool development research uses many of the same methods and strategies as does product
development research. It relies greatly on case study methodologies and evaluation techniques.

We will examine two representative tool studies. Chou and Sun’s (1996) study exemplifies research on
the development of an instructional support system, and Nieveen and van den Akker’s (1999) study
reports on the design and evaluation of a computer system that supports designers.

A Representative Tool Development Case Study: Chou and Sun (1996)

This case study follows the phases of the traditional ISD model: analysis and design, development, and
evaluation.

This case study documents an ISD process that is intertwined with a variety of research methods.
Chou and Sun document the use of surveys, literature reviews, expert reviews, field observations, and
in-depth interviews. The bulk of these methods were qualitative. However, the case as a whole included
elements of exploratory (e.g., the literature reviews and field observations), descriptive (e.g., expert
reviews and in-depth interviews), and explanatory (e.g., analyses of system attributes and student
behaviors) research.

A Representative Tool Use Study: Nieveen and van den Akker (1999)

This particular study concentrates on the use of the tool; it seeks to assess not only the tool’s
effectiveness, but its practicality as well.
Like many other examples of design and development research, this tool-use study used a mixed
methods design. In Phase 1 the methods employed were survey research techniques and content
analysis.

SUMMARY OF PRODUCT AND TOOL RESEARCH DESIGNS

We have now analyzed seven examples of methods and strategies used in product and tool research.
Hopefully, these will stimulate your thinking as you design similar studies of your own. As you think
about the designs, note the many ways in which these researchers have dealt with the standard
concerns of any research design: establishing validity, facilitating causal inferences, facilitating
generalizations and interpretations, and anticipating and avoiding problems. Each study has its own
unique way of dealing with these concerns.

The table below highlights many of the ways that these concerns have been addressed within the seven
representative studies we have just examined.


Products and tools are deemed successful only if the research can produce data that provides
evidence of noteworthy changes in learner knowledge, attitudes, and behavior. Causal effects attributed
to the product may be identified using these measures. In addition, product and tool studies can
determine relationships between the product’s usability, practicality, cost effectiveness, and design
characteristics.

Model Research: Methods and Strategies

We have now covered the methods and strategies for research on products and tools. In addition to these
types of studies, research can focus on the development, validation, or use of a model.
Some studies address more than one of these concerns. These studies are the most generalized form of
design and development research. As with product and tool research, their primary goal is the
production of new knowledge; in this case, the knowledge is often in the form of a new or an enhanced
design model. This type of research highlights either comprehensive models or particular design
techniques or processes.

STRATEGIES OF MODEL DEVELOPMENT RESEARCH


In spite of the widespread use of models in the field of instructional design, there is a paucity of
research on model formation. Dick’s (1997) discussion of the initial formation of the influential Dick
and Carey model shows model construction as a process of applying a diverse body of research and
thinking of the times to the task of creating instructional products.

A Representative Multiple Qualitative Methods Study

The Jones and Richey (2000) research produced a revised rapid prototyping ID model that describes
designer tasks performed, the concurrent processing of those tasks, and the nature of customer
involvement.

Data were obtained from a natural design and development work setting. Designers and
clients from two contrasting projects participated in the study. The projects varied in terms of size,
product, and industry. One produced paper-based instructional materials; the other produced electronic
based materials. Both projects had been completed at the time of the research.

A Representative Mixed Method Study

Spector, Muraida, and Marlino (1992) used a mixed-method design to produce a cognitively oriented model
for designing computer-based instruction (CBI) based upon simulated design and development activities.

A key objective of the model was to describe the shifts between analytic and intuitive thinking that
occurred during the ID process.

Data pertaining to each of these components were collected from 16 designers. Prior to the design task,
participants completed a biographical profile inventory. Designers then had 30 hours to complete a
specified ID task and keep a log of their observations about the development software they had been
provided. In addition, external observers recorded the designers’ reactions to the task and questions.
Design time was tracked in the development software. When the lessons were completed, they
underwent a peer review and the designers were given a structured exit interview.

The primary methods used in this model development research were survey, field observations, and a
content analysis of the logs written during the design task.

STRATEGIES OF MODEL VALIDATION RESEARCH


In contrast to the gaps in the model construction literature, there is more literature focused on the
systematic validation of ID models.

Richey (2005) describes five different approaches to validation,
including three methods of internal validation (expert review, usability documentation, and component
investigation), and two methods of external validation (field evaluation and controlled testing).

A Representative Expert Review Study: Weston, McAlpine, and Bordonaro (1995)

Weston, McAlpine, and Bordonaro (1995) constructed and validated a model directed toward the
formative evaluation phase of the ID process. Their model emphasized four components of data
collection and revision: participants, roles, methods, and situations. Their validation procedures utilized
a type of expert review. As with other types of model research using expert review, this study sought to
determine if there were data to support the components of their proposed model.

A Representative Usability Documentation Study

Another way of validating an ID model is by studying the ease with which the model can be used by
designers and developers. It is a type of internal model validation that is often a part of a larger design
and development research project.

The Forsyth (1998) study is a good example of how usability documentation can be embedded into a
model construction and validation effort. In this case, Forsyth constructed an ISD model that would be
appropriate for community-based train-the-trainer programs. This model encompassed the
development and use of needs assessment and contextual analysis instruments, specification of
content, development of instruments to self-check prerequisite skills, development of instructional
materials (including a participant’s guide and audiovisual aids), and evaluation instruments.

A Representative Investigation of Component Variables

The internal validation studies we have discussed thus far pertain to ID procedural models. However,
there is a genre of research that pertains to conceptual models, models that identify critical variables
and the relationships between them. This research is concerned with those factors that are critical to the
teaching/learning process and should therefore be addressed in an ID model.

The Quiñones et al. research was concerned with transfer of training and examined the relationships
between individual characteristics, transfer environment characteristics, and the opportunity to
perform. The participants in the research were 118 graduates of a U.S. Air Force training program and
their supervisors. The design made use of basic survey methodologies.

A Representative Field-Evaluation Study: Taylor and Ellis (1991)

Design and development models can also be validated by systematically studying the effects of the
products that have been created as a result of their use. Taylor and Ellis (1991) exemplify this line of
external validation research when they evaluated classroom training to determine how effective the ISD
model was as applied in programs of the U.S. Navy. Taylor and Ellis selected 100 courses that were
representative of the training available for enlisted personnel. After interviews with course instructors
and managers, representative one-week samples of each course were selected. Objectives, test items,
and classroom presentations related to these smaller samples were evaluated using a six-step process.
Previous research had established the reliability and validity of this evaluation system.

A Representative Controlled Testing Study

Design models can also be externally validated by establishing experiments that isolate the effects of the
given ID model as compared to the use of another model or approach. This is the object of controlled
testing validation, a form of explanatory research. Tracey (2002) is one example of this type of
validation effort.
Tracey compared the use of the Dick and Carey (1996) model with an ISD model enhanced with a
consideration of multiple intelligences. This served as an external validation of the newly constructed
Multiple Intelligences (MI) Design Model. She established two design teams, each with two novice
designers. One team worked with the Dick and Carey model, and the second used the MI model. Both
teams were instructed to design a two-hour, instructor-led, classroom-based, team-building course for a
non-profit organization. The teams each received (a) materials regarding the organization, (b) written
content on team building, (c) audience, environment, and gap analysis information, and (d) an ISD
model. One team received the validated MI Design Model and the other the Dick and Carey model.

STRATEGIES OF MODEL USE RESEARCH


Design and development research that focuses on model use is typically characterized as being either
exploratory or descriptive.

Exploratory research addresses ID processes as they occur naturally and


intuitively in a variety of settings. Examples include Le Maistre’s (1998) study of expert formative
evaluation performance and Visscher-Voerman and Gustafson’s (2004) study of alternative design
paradigms. Descriptive research, on the other hand, tends to concentrate on the use of a particular
model, such as in Roytek’s (2000) study of rapid prototyping techniques. These studies also represent
the three major lines of model use research: studies of the conditions impacting model use, designer
decision-making research, and designer expertise and characteristics research. Although these three
studies used different research designs, the designs all tend to be more qualitative than quantitative.

Selecting Participants and Settings

At this point in the design and development research process, you have identified an important problem
to study, formulated research questions related to a product, tool or model, and designed the study so
you
can make valid and generalizable conclusions. Now it’s time to select who will participate in your
research project and determine where the study will be conducted.

SELECTING THE SETTING OF THE STUDY

Design and development research is typically context-bound, and the nature of the conditions in which
people work is typically critical. Consequently, we are going to place nearly as much emphasis on the
setting of the study as we will on the people participating in the study.

The range of general settings in which education and training takes place today is broad—and growing.
The traditional view of education as only being formal courses and programs in schools and colleges is
obsolete. ID applications are now made extensively in business and industrial settings, healthcare
organizations, and community and government agencies. Within each of these general environments,
there are many more specific types of settings.

When you select a setting for your research, or when you select people to participate in your research
who come from a particular work setting, you are shaping and providing further structure to your
research design. You are providing the context in which your research questions will be answered. The
setting typically houses the problems on which your research is focused. The setting is filled with
elements that can ultimately account for the findings of your research. Setting is critical!

SELECTING THE PARTICIPANTS OF THE STUDY

Participants involved in design and development research may or may not be selected because of their
association with a particular organization; however, they are almost always selected because of their
particular role in the design and development process. Here we will examine the kinds of participants
that are typically selected for these studies and the various methods researchers use to identify them.

Types of Participants

Even though persons in a given role can participate in many disparate design and development research
projects, different participant patterns are seen in product and tool studies, as well as model studies.
Studies documenting and analyzing the development of an instructional product would obviously
include those persons involved in designing the actual product—designers, developers, perhaps clients
and perhaps evaluators.

Participants in design and development research tend to focus on designers and


developers, rather than learners and instructors. This is a key distinction between design and
development research and traditional teaching-learning research. Even so, there are a wide variety of
people serving in many roles that play critical parts of design and development research.

In some design and development studies, the participants are organizations themselves. Types of
organizations may be selected to provide a systematic distribution of setting constraints within a
particular study.

Sampling Participants

Like other educational researchers, a scholar who conducts design and development research typically
selects a sample of participants from a population of interest following prescribed quantitative or
qualitative sampling techniques. These techniques help establish the validity of the ultimate
conclusions and facilitate the generalizability and interpretations that can be made. Three common
sampling approaches follow.
Random sampling – used to meet the population validity requirements of a study. Every member of a given
population has an equal chance of being chosen for the study.

Purposeful sampling – participants and settings are chosen according to the needs of the study.

Convenience sampling – selection of participants based on who is easily available and easy to
observe.
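As a rough illustration (a hypothetical sketch in Python; the population and its fields are invented), the three approaches can be contrasted in code:

```python
import random

# Hypothetical population of designers, each tagged with years of experience
# and whether they are easy for the researcher to reach.
population = [
    {"id": i, "years_experience": i % 10, "easy_to_reach": i % 3 == 0}
    for i in range(100)
]

# Random sampling: every member has an equal chance of selection.
random_sample = random.sample(population, k=10)

# Purposeful sampling: members chosen to fit the needs of the study,
# e.g. only highly experienced designers.
purposeful_sample = [p for p in population if p["years_experience"] >= 8][:10]

# Convenience sampling: whoever is easiest to reach and observe.
convenience_sample = [p for p in population if p["easy_to_reach"]][:10]

print(len(random_sample), len(purposeful_sample), len(convenience_sample))
```

Note how only the random sample gives every member a known chance of inclusion; the other two trade population validity for fit or ease of access.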

Collecting Data in Design and Development Research

While it may seem that the bulk of planning your design and development study should be complete,
another major task lies in front of you: planning for data collection. Clearly, your data collection plans
were started as you were devising the research design of your study; however, as with so many things,
the activity seems to expand as you near the actual task.

What kind of data will you be collecting?

Profile Data
Projects can be profiled in terms of their scope, the resources available to the project, and the nature
of the particular product to be produced. Of particular importance are records of the time, monies,
facilities, equipment, and personnel that had been allocated to the project. Key product data include
descriptions of its scope, content, delivery mechanisms, and intended use.

Researchers routinely collect demographic and profile data from project participants. Design and
development research is no different; however, the type of data typically collected may vary from that
which is collected in other research projects.

Context Data

By now you’ve probably noticed that context is as critical a part of many design and development
studies as it is critical to ID projects themselves. At least three different contexts have major
implications for design and development research: (a) the environment in which the design and
development takes place, (b) the environment in which the intervention is implemented, and (c) the
performance environment in which skills and knowledge are applied. Each of these contexts varies
widely.

In-progress project data – these kinds of data are collected within an ongoing project; time sheets are one
example. It is important to note the value of in-progress data, as it differs very much from end-results
data.
Try-out data – evaluation data focused more on describing the design and development system and
its major players. It describes how a system has succeeded or failed.

Data collection instruments

Work logs – the most common type of data collection instrument used for “in-progress” projects.

They can be used for collecting data that relate to both current and past projects, although they are
most commonly used for “in-progress” projects. Their most typical use is with designers and developers;
however, there are some studies that also use work logs to collect client and instructor data. Work logs
typically document the precise nature of tasks and decisions made during the various design and
development phases, time expended, tools used, and reactions to the process.
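A work log entry of this kind could be modeled as a simple record (a hypothetical sketch; the field names mirror the kinds of data listed above and are not from any particular study):

```python
from dataclasses import dataclass


@dataclass
class WorkLogEntry:
    """One designer's record of a single work session."""
    project_phase: str   # e.g. "design", "development", "evaluation"
    task: str            # precise nature of the task performed
    decisions: list      # decisions made during the session
    minutes_spent: int   # time expended
    tools_used: list     # tools used
    reactions: str = ""  # reactions to the process


entry = WorkLogEntry(
    project_phase="design",
    task="drafted storyboard for module 2",
    decisions=["dropped audio narration"],
    minutes_spent=90,
    tools_used=["storyboard template"],
    reactions="client feedback arrived late",
)
```

Collecting entries in this structured form makes it straightforward to total time by phase or trace decisions across a project.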

Surveys and questionnaires – the most common type of data collection instrument used for other kinds of
research (both quantitative and qualitative).
These tools are used for a very wide range of research functions. They can
be used to collect data such as participant demographics, attitudes of designers and learners, and
evaluation information.

Interview protocols – rely on face-to-face interviews for data collection, and also make use of surveys
and questionnaires. They capture more of the information provided by interviewees.

In many respects, the interview protocol is simply an open-ended questionnaire. The protocol is
especially critical with long interviews, as is typical in many design and development studies. It
establishes controls on the interview process, ensuring that all content is covered and that similar
prompts are used with all participants.

Observation guides – these give researchers guidance on what to take note of while the subject of the
research is working, or while a certain phenomenon is about to happen.

Design and development researchers at times collect data by direct observation. Typically, this involves
observing one or more of the following:
 Designers and developers as they work.
 Instructors employing instructional products or tools in their teaching.
 Learners using a newly produced instructional product.
 On-the-job performance of persons who have participated in a particular intervention.

TECHNOLOGY BASED DATA COLLECTION STRATEGIES


Currently, more and more data collection is being facilitated by technology. The computer is no longer
simply a data analysis tool. It now creates possibilities for collecting accurate time data and product
usability data. It provides opportunities to capture and store designer work data. It provides ways of
gathering survey data in a speedy and cost-effective manner. Here we will examine the role of:
 Web-based data collection.
 Software-based data collection.
 Laboratory-based data collection.

Web-based Data Collection

Web-based surveys are rapidly becoming the norm in many areas of research. There is a wide variety of
low-cost software that formats survey instruments for delivery over the Internet. Thus, it is possible to
easily deal with participant samples from a wide geographical range, and from a broad range of work
settings, if these populations can be easily connected to the Internet. In addition, these programs
automatically create data files that are directly compatible with the major data analysis software
programs.
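For instance (a minimal sketch; the file contents and field names are invented), a web survey tool’s export is typically a delimited file that analysis software, or a few lines of code, can read directly:

```python
import csv
import io

# Hypothetical export from a web survey tool: one row per respondent.
raw_export = io.StringIO(
    "respondent_id,role,years_experience,satisfaction\n"
    "1,designer,5,4\n"
    "2,developer,2,3\n"
    "3,designer,9,5\n"
)

rows = list(csv.DictReader(raw_export))

# The file is immediately usable for analysis, e.g. a quick mean.
scores = [int(r["satisfaction"]) for r in rows]
print(sum(scores) / len(scores))  # 4.0
```

Because every respondent becomes one tidy row, the same file can be loaded unchanged into statistical packages for further analysis.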

Software-based Data Collection

Many design and development research projects rely on the ability of software programs to collect user
data, as well as to perform their main functions. These capabilities are proving invaluable for studying
all types of user behavior, whether the users are designers, developers, or learners.
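One common pattern (a hypothetical sketch, not drawn from any specific tool) is for the software to record time-stamped user actions as a side effect of performing its main function:

```python
import time

usage_log = []  # collected alongside the program's main function


def log_action(user_id, action):
    """Record a time-stamped user action for later analysis."""
    usage_log.append({"user": user_id, "action": action, "at": time.time()})


def open_lesson(user_id, lesson):
    # Log the event, then carry out the tool's real work.
    log_action(user_id, f"open:{lesson}")
    # ... the lesson would actually be displayed here ...


open_lesson("designer-1", "module-2")
open_lesson("learner-7", "module-2")

print(len(usage_log))  # 2
```

Logs of this kind give the researcher accurate time and usage data for designers, developers, or learners without any extra effort from the users themselves.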

Laboratory-based Data Collection

Some researchers using advanced technologies to collect data are working in a laboratory environment.
For the most part, design and development research laboratories are found in universities and large
research centers. These laboratories can vary widely in terms of levels of sophistication, and
correspondingly, they can vary widely in terms of the financial resources required to build and maintain
the facility.

Interpreting Design and Development Findings


Research is about creating knowledge, and as such the goals of any research project are not simply to
collect data but to derive meaning from the data. The process of extracting meaning from data is fraught
with dangers of logic fallacies, personal biases, and professional blinders. It is also a process that can
be filled with excitement and promise.

Data derived from design and development research can usually be used to add information to the
knowledge base, lead to new research, and establish new foundations or theories. The
interpretation of findings only begins when you understand their roles in the field.
Generally, the findings from design and development studies can determine the efficiency of a
certain design and development scheme, can help to determine what tools and products are best to
use, and can establish models best suited for other research in the field.

It is also important to note that successful research lays the foundation for new theories
that can be applicable to other research. Understanding the validity of these new theories can help
greatly to support or debunk other theories in the field. However, given the nature of design and
development research, especially in tool and product development, the findings can be very specific and
context-bound.

THE CONTRIBUTIONS OF DESIGN AND DEVELOPMENT RESEARCH

Fundamentally, we conduct research in an effort to expand the field’s knowledge base, which will, in
turn, impact practice. This is not a simple process, but one which ideally involves being aware of the
field’s current literature, having insights into the demands of the workplace, and having the foresight to
envision new research that will facilitate disciplinary and professional progress. While few of us may be
able to attain all of these goals, we can be mindful of them when interpreting the findings of design and
development studies—both product and tool research and model research. The findings of both of
these types of research can be understood in terms of how they:

 Expand the knowledge base.


 Lead to new research.
 Establish the foundations of new theory.

INTERPRETING PRODUCT AND TOOL RESEARCH FINDINGS

The conclusions emanating from research can take many forms. They can be principles confirmed by
statistical analyses and generalized to a larger population. They can be cause-and-effect determinations.
They can be themes or patterns that emerge across cases. Or they can be lessons that have been
learned from specific projects, as is characteristic of product and tool studies. These are the unique
contributions of this type of design and development research.

Here we will explore the nature of a wide variety of conclusions that come from product and tool
studies. Researchers typically have two major areas of interest when they study the design and
development of products and tools. They are usually interested in either:
 Product and tool design and development processes.
 Product and tool use.

Some Lessons Learned in Interpreting Product and Tool Research Findings

 Product and Tool Design and Development Processes


 Product and Tool Use

INTERPRETING MODEL RESEARCH FINDINGS

Model research generates conclusions that (unlike product and tool research) are more generalized and
less context-specific. They are directed toward general principles which are applicable to a wide range of
design and development projects. They all pertain to design and development models, rather than to
products, programs, or tools.
The conclusions of model research tend to be heuristics and broadly applicable principles. The
“lessons learned” terminology is seldom used in relation to model research due to its more generalized
nature. We will now explore the conclusions of each of the three facets of model research.

 Understanding Model Development Findings


 Understanding Model Validation Findings
 Understanding Model Use Findings

INTERPRETATION ISSUES

When researchers deal with data from natural work environments, they often encounter complications
not faced by those working in laboratories or simulated environments. We have discussed this issue
with respect to data collection, but there are also ramifications for data interpretation. Here we will
discuss two particular issues:
