Instrument Development

The document discusses the complexities of measuring attitudes and the development of questionnaires, emphasizing the importance of constructs and operational definitions in research. It outlines the process of item generation, development, and scaling for assessing constructs like interpersonal conflict in workplaces. Additionally, it highlights the significance of pretesting and validating scales to ensure content adequacy and reliability in measurement.

The Measurement of Attitudes & Development of Questionnaires
Dr. Sajid Bashir,
Associate Professor / Head of the Department
Attitudes
Evaluative statements (+ or -) about people,
events and objects.
Difficult to measure because:
– Cannot be seen
– The A-B (attitude-behavior) relationship
– Cognitive Dissonance Theory
– Theory of Planned Behavior
How do we know if someone has a
positive attitude towards ice cream?
Indicators of Attitudes
• Behavior (She eats it)
• Affective reaction (She likes eating it)
• Self-Report (She tells us she likes it)
• Peer-Report (Her mom tells us)
• Physiological Measures (heart rate)
How to do it
• Identify a Construct
– Not yet Measured
– Not properly measured
Concept
• An abstraction encompassing observed
events; a word that represents the similarities
or common aspects of objects or events that
are otherwise quite different from one
another.
• The purpose of a concept is to simplify
thinking by including a number of events (or
the common aspects of otherwise diverse
things) under one general heading (Ary
1985).
• Chair, dog, tree, liquid, a doughnut, etc…
Construct
Constructs are the “highest-level abstractions” of
complicated objects and events, created by combining
concepts and less complex constructs. – used to account
for observed regularities and relationships, and to
summarize observations and explanations (Ary 1985).
• A concept with added meaning of having been deliberately
and consciously invented or seriously adopted for a
special scientific purpose.
1) it enters into theoretical schemes and is theoretically related
in various ways to other constructs.
2) it is defined and specified so that it may be observed or
measured (Kerlinger 1986).
Construct
• Scientists measure things in three classes: direct
observables, indirect observables (not
experienced or observed first hand), and
constructs.
• Constructs are defined as theoretical creations that are based on
observations but cannot be observed either directly or indirectly
(Kaplan 1964).

• Motivation, visual acuity, justice, problem-solving ability, …not a doughnut, …but hunger.
Operational Definition
• It gives meaning to a concept or construct by
specifying the operations that must be performed in
order to measure or manipulate the concept, as the
data collected during research is in terms of
observable events (Ary 1985).
• It defines or gives meaning to a variable by spelling
out what the investigator must do to measure it
(Kerlinger 1986).
• “Operational definitions are essential to research
because they permit investigators to measure
abstract concepts and constructs and permit scientists
to move from the level of constructs and theory
to the level of observation” (Ary 1985).
Operational Definition

• Two Types of Operational Definitions:
• Measured Operational Definition: operations by which investigators may measure a concept.
• Experimental Operational Definition: steps taken by a researcher to produce certain experimental conditions.
Operational Definition

• Examples of an Operational Definition:
• Measured Operational Definition: an actual value (score) from a test or questionnaire the researchers would develop to measure “hunger.”
• Experimental Operational Definition: a manipulated scenario to produce the condition of “hunger” (such as preventing the subject from consuming anything for x number of hours).
Variable
• Characteristics or attributes of an object,
individual or organization that can be
measured or observed, and that varies
among those objects or individuals being
studied (Creswell 2002).
• They possess values and levels (the
dimensions on which they vary) (Sommery
1997).
• “The concepts that are of interest in a study
become the variables for investigation” (Ary 1985).
Different Kinds of Variables
• Dichotomous: Two-valued variables.
Example: Sex (male/female)
• Polytomous: Multiple values for
variables. Example: Religion
(Catholicism, Islam, Judaism,
Hinduism, Buddhism, etc…)
• Continuous: A variable that takes on an
infinite number of values within a
range. Example: Height & Weight
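These distinctions matter in practice when coding data for analysis. As a minimal sketch with hypothetical data, the three kinds of variables map naturally onto column types in a pandas DataFrame:

```python
import pandas as pd

# Hypothetical respondents illustrating the three kinds of variables.
df = pd.DataFrame({
    "sex": ["male", "female", "female", "male"],                   # dichotomous
    "religion": ["Islam", "Catholicism", "Hinduism", "Buddhism"],  # polytomous
    "height_cm": [172.5, 160.2, 158.7, 181.0],                     # continuous
})

# Dichotomous and polytomous variables are nominal: store them as
# unordered categories so numeric statistics are not computed on them.
df["sex"] = df["sex"].astype("category")
df["religion"] = df["religion"].astype("category")

print(df.dtypes)               # category, category, float64
print(df["height_cm"].mean())  # a mean is meaningful only for the continuous column
```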
More Kinds of Variables

• Independent: The variable manipulated by the experimenter (also: experimental, predictor, manipulated, antecedent, treatment).
• Active: Any variable that is manipulated by the researcher.
• Attribute: Any variable that cannot be manipulated by the researcher. For example, all human characteristics are attribute variables: intelligence, sex, socioeconomic status, etc.
• Dependent: The dependent variable is the phenomenon that is the object of study and investigation (also: outcome, response, criterion, effect).
More Kinds of Variables
• Categorical: Referred to as nominal measurements.
One creates ‘categories,’ and classifies all variables
that fall under this definition without rank order. All
variables under the same category are considered of
equal value, and not differentiated.
• Latent: An unobserved ‘entity.’
• Control: An independent variable that is measured in
a study because it potentially influences the
dependent variable. It is a more clearly defined
independent variable, included in an attempt to eliminate all bias
in regard to its effects on the dependent variable.
(Keeps the study in check.)
Concepts, Construct
• A concept is a verbal abstraction drawn from
observation of a number of specific cases
• A theoretical definition explains what is
meant by the concept.
• Operational definitions translate the verbal
concepts into corresponding variables which
can be measured. Operational definitions can
be either: measured, or experimental.
• Also, a variable can be either measured (e.g.,
surveys) or manipulated (e.g., experiment).
Concepts, Construct
• A construct serves the same function as a concept, but it
is more abstract.
• It is not characterized by a direct link between the
abstraction and its observed manifestations.
• For instance, “source credibility” is a construct which has
been used in studying persuasion.
• This term can be used in the same way as a concept,
but we should recognize that we cannot directly observe
different levels of source credibility in individuals.
• However, we can observe the various parts which make
up the construct individually, and then combine them to
get some overall summary.
Concepts, Construct
• Constructs are built from the logical combination
of a number of more observable concepts. In the
case of source credibility, we could define the
construct as the combination of the concepts of
expertise, objectivity, and status.
• Each of these concepts can be more directly
observed in an individual.
• We might also consider some of these terms to be
constructs themselves, and break them down into
combinations of still more concrete concepts.
An Example:
Social Stress Theory
• Construct
– Social Stressors
• Operationalization
– Stress caused by social-environmental conditions.
– Can be applied in different contexts.
– The workplace is a social context.
– Interpersonal conflict (variable) at the workplace is an important
social stressor (Keenan & Newton, 1985).
– Interpersonal conflict causes anger, annoyance, and
frustration (Narayanan et al. 1999a,b).
– How do we measure interpersonal conflict in organizations?
– We need a scale.
Scale Development
• Item Generation and Development
– Literature Review
• Interpersonal conflict has been linked to:
– Depression symptoms
– Negative emotional state
– Workplace attitudes
– Turnover intentions
– OCB/CWB
– Social support
– Trust
Item Generation
• The scale development process begins with the creation of items to assess
a construct under examination.
• This process can be conducted inductively, by generating items first, from
which scales are then derived, or deductively, beginning with a theoretical
definition from which items are then generated.
• Both of these approaches have been used by behavioral researchers and
the decision must be made about which is most appropriate in a particular
situation.
• The inductive approach is usually used when exploring an unfamiliar
phenomenon where little theory may exist. Experts on the subject are
typically asked to provide descriptions of their feelings about their
organizations or to describe some aspect of behavior. Responses are then
classified into a number of categories by content analysis based on key
words or themes. From these categorized responses, items are then
derived.
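As an illustration of that last classification step, here is a minimal sketch of keyword-based content analysis; the themes and key words are hypothetical placeholders, and a real analysis would refine them against the responses:

```python
from collections import defaultdict

# Hypothetical themes and key words derived from expert descriptions.
THEMES = {
    "relationship conflict": ["rude", "personality", "dislike", "insult"],
    "task process conflict": ["procedure", "deadline", "method"],
    "task outcome conflict": ["goal", "decision", "priority"],
}

def classify(response: str) -> str:
    """Assign a free-text response to the first theme whose key words match."""
    text = response.lower()
    for theme, keywords in THEMES.items():
        if any(kw in text for kw in keywords):
            return theme
    return "other"  # unmatched responses go back to the analysts

responses = [
    "My coworker was rude to me in front of the team.",
    "We argued about the procedure for closing out projects.",
]
categories = defaultdict(list)
for r in responses:
    categories[classify(r)].append(r)
print(dict(categories))  # categorized responses, from which items are derived
```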
Item Generation
• Deductive scale development uses a theoretical
definition of a construct which is then used as a
guide for the creation of items (Schwab, 1980).
• This approach requires an understanding of the
relevant literature and of the phenomenon to be
investigated and helps to ensure content
adequacy in the final scales. In most situations
where some theory exists, the deductive
approach would be most appropriate.
Item Development
There are a number of basic guidelines that should be followed to
ensure that the items are properly constructed. Some of the most
important and often overlooked practices will be presented.
• Items should address only a single issue; “double-barreled” items such
as “My employees are dedicated and hardworking” may represent two
constructs and result in confusion on the part of the respondents.
• It is also important to keep all items consistent in terms of perspective,
being sure not to mix items that assess behaviors with items that
assess affective responses to or outcomes of behaviors (Harrison and
McLaughlin, 1993). As an example, in the examination of supervisory
behavior, “My supervisor treats me fairly” should not be included in a
scale with the outcome “I feel committed to my supervisor.”
Item Development
• Statements should be simple and as short as possible
and the language used should be familiar to target
respondents.
• Negatively-worded or reverse-scored items should be
used with caution as a few of these items randomly
interspersed within a measure can have a detrimental
effect on its psychometric properties (Harrison and
McLaughlin, 1991).
• Items must be understood by the respondent as
intended by the researcher if meaningful responses are
to be obtained.
Number of Items
• There are no specific rules about the number of items to be retained but some helpful
heuristics exist.
• A measure needs to be internally consistent and be parsimonious, comprised of the
minimum number of items that adequately assess the domain of interest
(Thurstone,1947).
• Adequate internal consistency reliability can be obtained with four or five items per
scale (Harvey, Billings and Nilan, 1985; Hinkin and Schriesheim, 1989).
• Keeping a measure short is an effective means of minimizing response biases caused
by boredom or fatigue (Schmitt and Stults, 1985).
• Additional items also demand more time in both the development and administration of
a measure (Carmines and Zeller, 1979).
• These issues would suggest that a quality scale comprised of four to six items could be
developed for most constructs or conceptual dimensions.
• It should be anticipated that approximately one-half of the new items will be retained for
use in the final scales, so at least twice as many items should be generated as will be
needed for the final scales. Once the scale has been developed, it is time to pretest it
for the content adequacy of the items.
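Internal consistency is conventionally summarized with coefficient alpha. A minimal sketch on hypothetical data, computing alpha directly from its textbook definition:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses from six people to a five-item Likert scale (1-5).
scores = np.array([
    [4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4],
    [3, 3, 3, 4, 3],
    [1, 2, 1, 2, 2],
    [4, 4, 5, 4, 4],
])
# Prints a value near 1 for these strongly correlated toy items;
# .70 is the benchmark usually cited for adequate reliability.
print(round(cronbach_alpha(scores), 3))
```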
Content Adequacy Assessment
• An often overlooked yet necessary step in the scale development
process is pretesting items for content adequacy.
• In many instances researchers have invested substantial time and
effort in collecting large data sets only to find that an important
measure is flawed. Assuring content adequacy prior to final
questionnaire development provides support for construct validity as
it allows the deletion of items that may be conceptually inconsistent.
Several content assessment methods have been described in the
research methods literature (cf., Nunnally, 1978).
• One common method requires respondents to categorize or sort
items based on their similarity to construct definitions. This can be
conducted using experts in a content domain.
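Agreement in such a sorting task is often quantified with a chance-corrected index such as Cohen's kappa. A minimal sketch with hypothetical judgments, assuming scikit-learn is available:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical sort: two judges assign each of ten draft items to the
# construct definition ("A" or "B") it matches best.
judge_1 = ["A", "A", "B", "B", "A", "B", "A", "A", "B", "B"]
judge_2 = ["A", "A", "B", "A", "A", "B", "A", "B", "B", "B"]

# Kappa corrects raw percent agreement for agreement expected by chance.
kappa = cohen_kappa_score(judge_1, judge_2)
print(round(kappa, 2))

# Items the judges disagree on are conceptually ambiguous and become
# candidates for revision or deletion before the final questionnaire.
```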
Focus Group
(40-60 employees)
• Semi-structured group interviews
– 1. What do you think defines conflict in the workplace?
– 2. On a scale of 1 to 5, where 1 is never and 5 is every day, how often do
you encounter conflict in the workplace?
– 3. Tell me about an instance when you experienced conflict at
work. Has anyone else experienced a similar situation?
– 4. What are some sources of conflict in your workplace?
– 5. Do you think that your reaction to conflict varies depending on
the source of the conflict?
– 6. On a scale of 1 to 5, where 1 is never and 5 is every time, how
often do you think your conflict has been resolved?
– 7. On a scale of 1 to 5, where 1 is never and 5 is always, how
often do you think conflict can be seen as being positive?
Focus Group
• 8. Can you give me some examples of times when conflict was
positive?
• 9. Can you now give me some examples of times when conflict was
negative?
• 10. Do you think that others in your department experience the
same amount of conflict that you do?
• 11. Would you say that you are encountering more conflict in your
current job than in a past job?
• 12. Do you think conflict in your workplace varies as a function of
time (e.g., day, month, or year)?
• 13. On a scale of 1 to 5, where 1 is never and 5 is always, how often do you
think that conflict is a consequence of your emotional state?
• 14. How often do you think that incivility precedes conflict? For
example, someone being rude to you.
• 15. Do you think conflict is associated with gender or race?
Focus Group
At the end of each focus group, participants were asked to write down two
critical incidents of conflict in their workplace. For each critical incident,
participants were asked to describe what led up to the incident and the
context in which it occurred, exactly what happened during the conflict, the
perceived consequences of the conflict, and whether or not the
consequences were within the control of the employee.

A total of 117 critical incidents regarding the experience of interpersonal
conflict at work were collected.

Three research assistants sorted the incidents into categories, including
task outcome conflict, task process conflict, relationship conflict, non-task
organizational conflict, and other.

Consensus was reached among raters for any incident on which they did
not initially agree.
Item Development
Literature + Focus group
• The literature suggests there are four types of
conflict:
– task outcome conflict
– task process conflict
– Relationship conflict
– non-task organizational conflict
Item Scaling

• As previously mentioned, Likert scales are the most
commonly used in survey research using questionnaires
(Cook et al., 1981; Schmitt and Klimoski, 1991).
• Likert scales include several “points” along a continuum that
define various amounts or levels of the measured attribute or
variable (e.g., agreement, frequency, importance, etc.).
• Measures with five- or seven-point scales have been shown
to create variance that is necessary for examining the
relationships among items and scales and create adequate
coefficient alpha (internal consistency) reliability estimates
(Lissitz and Green, 1975).
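A minimal sketch of how Likert responses might be coded for analysis, including reverse-scoring of a negatively worded item; the items and labels here are hypothetical:

```python
import pandas as pd

# Map the five agreement labels to integer scores.
LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

df = pd.DataFrame({
    "q1_supervisor_fair":   ["agree", "strongly agree", "disagree"],
    "q2_supervisor_unfair": ["disagree", "strongly disagree", "agree"],  # negatively worded
})

scored = df.apply(lambda col: col.map(LIKERT))

# Reverse-score the negatively worded item so a high score means the same
# thing on every item: reversed = (scale max + scale min) - raw = 6 - raw.
scored["q2_supervisor_unfair"] = 6 - scored["q2_supervisor_unfair"]

scored["scale_mean"] = scored.mean(axis=1)
print(scored)
```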
Scale Refinement
• Experts/students sorted the items into four categories based on the operational definitions.
• Category headings were not shown; instead, definitions were given to match against the questions.
• Items were deleted by consensus (14 deleted).
• Remaining items were sorted as per the following definitions.
Pretesting- Sample 1
• A 137-item scale was sent to a pilot sample.
• Items were divided as per the four types.
• Respondents had to have experience of and knowledge about conflict.
• An appropriate response scale was used.
Scale Validation - Sample 2
• Items with low factor loadings were deleted.
• A revised questionnaire was sent to a new sample.
• CFA was conducted.
FACTOR ANALYSIS

. . . analyzes the structure of the interrelationships among a large number of variables to determine a set of common underlying dimensions (factors).
FACTOR ANALYSIS
The original idea dates back to the early 1900s, and it is generally acknowledged
that the English psychologist Charles Spearman first applied early forms of this
approach to study the structure of human abilities. Spearman (1904) proposed
that an individual’s ability scores were manifestations of a general ability (called
general intelligence) and other specific abilities, such as verbal or numerical
abilities. The general and specific factors combined to produce the ability
performance. This idea was labeled the two-factor theory of human abilities.
However, as more researchers became interested in this approach (e.g.,
Thurstone, 1935), the theory was extended to accommodate more factors and the
corresponding analytic method was referred to as factor analysis.
Understanding Factor Analysis
• Factor analysis is commonly used in:
– Data reduction
– Scale development
– The evaluation of the psychometric quality of a
measure, and
– The assessment of the dimensionality of a set
of variables.
Two Types of Factor Analysis
1. Exploratory Factor Analysis (EFA) = is used
to discover the factor structure of a
construct and examine its reliability. It is
data driven.

2. Confirmatory Factor Analysis (CFA) = is
used to confirm the fit of the hypothesized
factor structure to the observed (sample)
data. It is theory driven.
Two Types of Factor Analysis
• In general terms, factor analysis is a modeling approach for studying hypothetical
constructs by using a variety of observable proxies or indicators of them that can be
directly measured. The analysis is considered exploratory, also referred to as
exploratory factor analysis (EFA), when the concern is with determining how many
factors, or latent constructs, are needed to explain well the relationships among a
given set of observed measures.
• Alternatively, the analysis is confirmatory, formally referred to as confirmatory factor
analysis (CFA), when a preexisting structure of the relationships among the measures
is being quantified and tested. Thus, unlike EFA, CFA is not concerned with
discovering a factor structure, but with confirming and examining the details of an
assumed factor structure.
• In order to confirm a specific factor structure, one must have some initial idea about
its composition. In this respect, CFA is considered to be a general modeling approach
that is designed to test hypotheses about a factor structure, when the factor number
and interpretation in terms of indicators are given in advance. Hence, in CFA (a) the
theory comes first, (b) the model is then derived from it, and finally (c) the model is
tested for consistency with the observed data.
An Exploratory Factor Model
EFA Features
• The potential number of factors ranges from
one up to the number of observed variables
• All of the observed variables in EFA are
allowed to correlate with every factor
• An EFA solution usually requires rotation to
make the factors more interpretable. Rotation
changes the correlations between the factors
and the indicators so the pattern of values is
more distinct
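A minimal EFA sketch on simulated data. Note that scikit-learn's FactorAnalysis offers only orthogonal rotations such as varimax; the oblique (oblimin-style) solution reported in Table 1 below would need a package such as factor_analyzer:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 200 respondents on 8 items driven by 2 latent factors.
latent = rng.normal(size=(200, 2))
true_loadings = np.array([[0.8, 0.0], [0.7, 0.1], [0.9, 0.0], [0.8, 0.1],
                          [0.0, 0.8], [0.1, 0.7], [0.0, 0.9], [0.1, 0.8]])
items = latent @ true_loadings.T + 0.4 * rng.normal(size=(200, 8))

# Extract two factors and rotate for interpretability.
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
print(np.round(fa.components_.T, 2))  # rotated loadings: items x factors

# A common refinement rule: retain only items whose largest loading is strong.
max_loading = np.abs(fa.components_.T).max(axis=1)
print("retain items:", np.where(max_loading >= 0.40)[0])
```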
A Confirmatory Factor Model
EFA vs CFA
• Both types of analysis try to reproduce the
observed relationships among a set of indicators
with a smaller set of latent variables.
• EFA is data driven and used to determine the
number of factors and which observed variables
are indicators of each latent variable.
• In EFA all the observed variables are
standardized and the correlation matrix is
analyzed.
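By contrast, a CFA begins from an explicit specification of the assumed structure. A hedged sketch using the third-party semopy package and its lavaan-style syntax; the factor names, indicator names, and data file here are hypothetical:

```python
import pandas as pd
import semopy  # third-party SEM package; API as documented by the project

# Hypothesized two-factor structure, written down before seeing the data:
# each indicator loads on exactly one latent factor.
desc = """
TaskConflict =~ tc1 + tc2 + tc3
RelConflict  =~ rc1 + rc2 + rc3
"""

data = pd.read_csv("conflict_items.csv")  # hypothetical item-level responses

model = semopy.Model(desc)
model.fit(data)
print(semopy.calc_stats(model))  # chi-square and other fit indices
```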
Model Fit in CFA
• Chi-squared test
The chi-squared test indicates the difference between the observed and
expected covariance matrices. Values closer to zero indicate a better fit:
a smaller difference between the expected and observed covariance
matrices. Chi-squared statistics can also be used to directly compare the
fit of nested models to the data. One difficulty with the chi-squared test
of model fit, however, is that researchers may fail to reject an
inappropriate model with small sample sizes and may reject an
appropriate model with large sample sizes. As a result, other measures
of fit have been developed.
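A minimal sketch of the nested-model comparison mentioned above: the difference in chi-squared values between a constrained model and a freer model is itself chi-squared distributed, with degrees of freedom equal to the difference in model df. The fit values here are hypothetical:

```python
from scipy.stats import chi2

# Hypothetical fit results: the constrained model is nested in the free one.
chisq_constrained, df_constrained = 210.4, 90
chisq_free, df_free = 180.2, 85

delta_chisq = chisq_constrained - chisq_free   # 30.2
delta_df = df_constrained - df_free            # 5

# Survival function gives the p-value for the difference test.
p = chi2.sf(delta_chisq, delta_df)
print(f"delta chi-square = {delta_chisq:.1f}, df = {delta_df}, p = {p:.4f}")
# A significant result favors the less constrained model.
```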
Table 1: An oblique five-factor pattern solution (N = 170)
[Slides not reproduced: factor patterns; confirmatory factor analysis with covariances between exogenous latent traits; non-recursive model; comparison of fit indices]
Validity/Reliability
[Slides on validity and reliability not reproduced]
