Lecture 5 - Research Design and Measurement

This document discusses research design and measurement. It outlines different research design strategies like experiments, surveys, case studies, and mixed methods. It also discusses measurement, including defining concepts, developing scales, and levels of measurement from nominal to ratio. The key points are that research design provides plans and procedures for research, measurement assigns numbers to represent concepts, and scales are tools used to associate qualitative constructs with quantitative values.
RESEARCH DESIGN AND MEASUREMENT
Christian A. Hesse (Ph.D.)
OUTLINE OF PRESENTATION
• Purpose of the research design
• Design Strategies
• Multiple methods choices—combining
qualitative and quantitative techniques and
procedures
• Measurement
• Credibility of research findings
RESEARCH DESIGN AND METHODS
A Research Design provides the plans and procedures for research that span a number of decisions, from broad assumptions to detailed methods of data collection and analysis.

A Research Method is simply a technique for collecting data. The choice of research method reflects decisions about the type of instruments or techniques to be used.
IMPORTANCE OF RESEARCH DESIGN
A good research design should help the
researcher to anticipate what it will take to
complete the research justifiably and credibly.
RESEARCH DESIGN BASED ON PURPOSE
The classification of research design based on PURPOSE is mostly threefold:
– Exploratory
– Descriptive
– Explanatory
RESEARCH DESIGN BASED ON METHOD
The classification of research design based on METHOD is threefold:
– Quantitative
– Qualitative
– Mixed
THE RESEARCH ONION
METHODOLOGICAL CHOICES
MIXED METHODS RESEARCH DESIGN
MULTIPLE METHODS CHOICES
• Single research study may use quantitative
and qualitative techniques and procedures in
combination as well as use primary and
secondary data
• Mixed methods approach is the general term
for when both quantitative and qualitative
data collection techniques and analysis
procedures are used in a research design
REASONS FOR CHOOSING MIXED
METHODS
• Triangulation
• Complementarity
• Generality
• Aid interpretation
• Study different aspects
• Solving a puzzle when the results of the initial method cannot be satisfactorily explained
RESEARCH STRATEGIES
Each strategy can be used for exploratory,
descriptive and explanatory research (Yin 2003).
– Experiment
– Survey
– Case study
– Action research
– Grounded theory
– Ethnography
– Archival research.
EXPERIMENT
• Experiment is a form of research that owes much to
the natural sciences, although it features strongly in
much social science research, particularly psychology
• The simplest experiments are concerned with whether
there is a link between two variables.
• More complex experiments also consider the size of
the change and the relative importance of two or more
independent variables
• Experiments therefore tend to be used in exploratory
and explanatory research to answer ‘how’ and ‘why’
questions
EXPERIMENT
SURVEY
• The survey strategy is usually associated with the deductive
approach and is most frequently used to answer WHO,
WHAT, WHERE, HOW MUCH AND HOW MANY questions.

• Tends to be used for exploratory and descriptive research

• Often obtained by using a questionnaire administered to a sample

• The survey strategy allows you to collect quantitative data which you can
analyze quantitatively using descriptive and inferential statistics

• The data collected can be used to suggest possible reasons for particular
relationships between variables and to produce models of these
relationships
CASE STUDY
• Case study is ‘a strategy for doing research which involves
an empirical investigation of a particular contemporary
phenomenon within its real life context using multiple
sources of evidence’ (Robson, 2002:178)
• Within a case study, the boundaries between the
phenomenon being studied and the context within which it
is being studied are not clearly evident
• The case study strategy is particularly useful in eliciting answers to the question ‘why?’ as well as the ‘what?’ and ‘how?’ questions, although ‘what?’ and ‘how?’ questions tend to be more the concern of the survey strategy.
• The case study strategy is most often used in explanatory
and exploratory research
ACTION RESEARCH
• Concerned with the resolution of organizational issues
• Involvement of practitioners in a collaborative partnership
with researchers or external consultants
• The researcher is part of the organization within which the
research is taking place
• The findings of action research should have implications
beyond the immediate project or should inform other
contexts
• Action research differs from other research strategies
because of its explicit focus on action, in particular
promoting change within the organization
GROUNDED THEORY
• Grounded theory is often thought of as coming very close to the inductive approach
• Aims at building a theory
• Data collection starts without the formation of an
initial theoretical framework
• Theory is developed from data generated by a series
of observations.
• Data collected leads to the generation of predictions
which are then tested in further observations that
may confirm, or otherwise, the predictions
ETHNOGRAPHY
• Ethnography is rooted firmly in the inductive approach and emanates from the field of anthropology
• The purpose is to describe and explain the social world the research subjects inhabit in the way in which they would describe and explain it
• The researcher becomes immersed in the social setting being studied, often for an extended period of time
• The research process needs to be flexible and responsive to change, since the researcher will constantly be developing new patterns of thought about what is being observed
• Ethnography is naturalistic: the phenomenon is studied within the context in which it occurs
ARCHIVAL
• Archival research makes use of
administrative records and documents
as the principal source of data.
• An archival research strategy allows
research questions which focus upon
the past and changes over time to be
answered, be they exploratory,
descriptive or explanatory
MEASUREMENT
DEFINING MEASUREMENT
In the social and behavioural sciences, it is not unusual for a researcher to engage participants or respondents in a way that will help him or her to ascertain and describe the respondents' feelings, attitudes, opinions, and evaluations in some measurable form.

The process of assigning numbers to various attributes of people, objects or concepts is known as measurement, and this is our primary concern in this lecture.
GIGO (garbage in garbage out)
CONCEPT AND MEASUREMENT
A concept is a mental abstraction or idea formed by the
perception of some phenomena.
Examples of concepts in business include job
satisfaction, job commitment, brand awareness, brand
loyalty, service quality, image, risk, channel conflict,
empathy, and so on.
Measurement involves assigning numbers to a
phenomenon according to certain rules that reflect the
characteristics of the phenomenon being measured.
MEASUREMENT PROCESS
The measurement process involves specifying the variables that serve as proxies for the concepts (constructs).

A proxy is a variable that represents a single component of a larger concept; taken together, several proxies (indicator variables) are said to measure a concept.
SCALE
Scaling is the branch of measurement that involves the construction of an instrument that associates qualitative constructs with quantitative metric units.

A scale is a measurement tool that can be discrete or continuous. Discrete scales measure only direction, but continuous scales measure both direction and intensity.

In many ways, scaling remains one of the most arcane and misunderstood aspects of social research measurement. And it attempts one of the most difficult of research tasks: measuring abstract concepts.
STEPS IN DEVELOPING A SCALE
1. Definition of the concept(s) to be measured.
2. Identification of the components of the concept.
3. Specification of a sample of observable and measurable
items to represent the components.
4. Selection of the appropriate scales to measure the items.
5. Combination of the items into a composite scale to measure
the concept.
6. Administration of the scale to a sample to assess respondent understanding.
7. Revision of the scale as needed.
OBJECTS, PROPERTIES AND INDICATORS
Measuring object or properties
✓ In fact, we do not measure objects or phenomena,
rather we measure specific properties of the object or
phenomena:
❖ e.g. A medical doctor may be interested in
measuring properties such as height, weight or blood
pressure.
✓ To map such properties, we use indicators, that is the
scores obtained by using our operational definitions
for example responses to a questionnaire (See fig. on
next slide).
OBJECTS, PROPERTIES AND INDICATORS
LEVELS (SCALES) OF MEASUREMENT
❑Nominal level (scale): This is the lowest level of
measurement. At this level numbers are used to
classify objects or observations.
– e.g. It is possible to classify a population into females (1)
and males (0).
❑Ordinal level (scale): Some variables are not only
classifiable, but also exhibit some kind of
relationship, allowing for rank order.
– e.g. If we do not know the exact distance between, for example, A and B, but only that B is greater than A, we can construct a scale such as the following. In this case, B is more satisfied than A, but we cannot say how much more satisfied.

e.g.  Very dissatisfied  -3 |____A________B____| 3  Very satisfied
LEVELS (SCALES) OF MEASUREMENT
❑Interval Level
✓ When we know the exact distance between observations and this distance is constant, an interval level of measurement has been achieved. This means that differences can be compared.
• e.g. The temperature rises from 80°C to 100°C; this difference of 20 degrees can be compared with any other 20-degree difference on the scale.
❑ Ratio Level
✓ The ratio scale differs from the interval scale in that it has a true zero point, so with a ratio scale, comparison of the absolute magnitude of numbers is legitimate.
• e.g. A person weighing 200 pounds is said to be twice as heavy as one weighing 100 pounds.
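The interval/ratio distinction can be checked numerically. The following is a minimal Python sketch (the readings are illustrative): converting Celsius to Kelvin, which has a true zero, shows why ratios of Celsius readings are not meaningful, while ratios of weights are.

```python
# Interval vs. ratio scales: a numeric sketch with illustrative values.
# Celsius is an interval scale: differences are meaningful, ratios are not,
# because 0 °C is an arbitrary zero point.
def celsius_to_kelvin(c):
    """Convert to Kelvin, the ratio-scale counterpart with a true zero."""
    return c + 273.15

diff_c = 100 - 80          # a 20-degree difference is meaningful
naive_ratio = 100 / 80     # 1.25, but this ratio is NOT meaningful
true_ratio = celsius_to_kelvin(100) / celsius_to_kelvin(80)  # about 1.057

# Weight is a ratio scale: 200 lb really is twice 100 lb.
weight_ratio = 200 / 100   # legitimate comparison of absolute magnitude
```

Note how the "ratio" of 100°C to 80°C changes once the arbitrary zero is removed, which is exactly why such ratios are illegitimate on an interval scale.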
SCALES OF MEASUREMENT
TYPES OF SCALES
Metric Scales
▪ Summated Ratings
▪ Numerical
▪ Semantic Differential
▪ Graphic Ratings
Nonmetric Scales
▪ Categorical
▪ Rank Order
▪ Sorting
▪ Constant Sum
▪ Paired Comparison
SUMMATED RATING SCALE
The final score for the respondent on the scale is the sum of their ratings for all of the items (this is why it is sometimes called a "summated" scale). There could be several statements that relate to a single concept, such as opinions about a company or product. When you sum the scales for all the statements, it is referred to as a summated rating scale.

When you use the scale individually, it is referred to as a Likert scale.

e.g. "Italian Restaurant has a wide variety of menu choices."

Strongly   Disagree   Neither Agree   Agree   Strongly
Disagree              nor Disagree            Agree
   1          2             3           4        5
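The scoring rule above can be sketched in a few lines of Python; the item statements and ratings here are hypothetical, not taken from the lecture:

```python
# Summated rating scale sketch: each item is rated 1-5
# (Strongly Disagree = 1 ... Strongly Agree = 5).
responses = {
    "wide variety of menu choices": 4,       # hypothetical ratings
    "food is served promptly": 5,
    "prices are reasonable": 3,
}

# The respondent's score on the construct is the sum of the item ratings.
summated_score = sum(responses.values())
max_score = 5 * len(responses)   # highest possible score on this scale
```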
NUMERICAL SCALE
This scale has a number rather than a verbal description as the response option.

e.g. Using a 10-point scale, where 1 is "not at all important" and 10 is "very important", how important is ________ in your decision to do business with a particular vendor?
SEMANTIC DIFFERENTIAL SCALE
It is another approach to measuring attitudes. It uses 5 or 7 point
scales depending on the level of precision desired and the
education level of the target population.
The distinguishing feature of this is that it uses bipolar end points
with the intermediate points typically numbered. The end points
are usually chosen to describe individuals, objects or events with
opposite adjectives or adverbs.
e.g. “My supervisor is . . . . “
Courteous ___ ___ ___ ___ ___ Discourteous
Friendly ___ ___ ___ ___ ___ Unfriendly
Helpful ___ ___ ___ ___ ___ Unhelpful
Supportive ___ ___ ___ ___ ___ Hostile
Competent ___ ___ ___ ___ ___ Incompetent
Honest ___ ___ ___ ___ ___ Dishonest
Enthusiastic ___ ___ ___ ___ ___ Unenthusiastic
GRAPHIC RATING
It is one that provides measurement on a continuum in
the form of a line with anchors that are numbered and
named. The respondent gives their opinion by placing a
mark on the line. Sometimes the midpoint is labeled
and other times it is not.
On a scale from 0 to 10 how would you rate the
atmosphere of Samuel’s Greek Cuisine restaurant?
Indicate by placing an “X” at the appropriate place on
the line.
Poor OK Excellent
|_______________|_______________|
0 5 10
CATEGORICAL
It is a nominally measured opinion scale that has two or more response categories. When there are more categories, the researcher can be more precise in measuring a particular concept. It is often used to measure categories such as age, gender or education.
How satisfied are you with your current job?
[ ] Very Satisfied
[ ] Somewhat Satisfied
[ ] Neither Satisfied nor Dissatisfied
[ ] Somewhat Dissatisfied
[ ] Very Dissatisfied
How interested are you in learning more about the benefits that
are offered with this health plan?
[ ] Very Interested
[ ] Somewhat Interested
[ ] Not Very Interested
RANK ORDER
e.g. “Please rank the five attributes listed below
on a scale from ‘1’ (the most important) to ‘5’
(the least important) in searching for a job.”
Job Attributes Ranking
Pay
Benefits
Co-workers
Flexible Scheduling of Work Hours
Working Conditions
SORTING
This type of scaling approach asks
respondents to indicate their beliefs or
opinions by arranging objects (items) on
the basis of perceived similarity,
preference, or some other attribute.
CONSTANT SUM
“Please allocate 100 points across the
following four attributes to indicate their
relative importance.”
Attributes Score
On-Time Delivery
Price
Tracking Capability
Invoice Accuracy
Sum 100
PAIRED COMPARISON
Below you will find ten pairs of attributes that have been identified as being important when choosing a restaurant. For each pair, mark the attribute you feel is more important to you in choosing a restaurant to dine at.

Pairs    Attribute 1     Attribute 2
Pair 1 Food Quality Atmosphere
Pair 2 Food Quality Prices
Pair 3 Food Quality Service
Pair 4 Food Quality Cleanliness
Pair 5 Atmosphere Prices
Pair 6 Atmosphere Service
Pair 7 Atmosphere Cleanliness
Pair 8 Prices Service
Pair 9 Prices Cleanliness
Pair 10 Service Cleanliness
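One common way to turn paired-comparison responses into a rank order is to count how often each attribute is chosen. A minimal Python sketch, using hypothetical choices for the ten pairs above:

```python
from collections import Counter

attributes = ["Food Quality", "Atmosphere", "Prices", "Service", "Cleanliness"]

# Each tuple is (pair presented, attribute the respondent chose).
# The choices below are invented for illustration.
choices = [
    (("Food Quality", "Atmosphere"), "Food Quality"),
    (("Food Quality", "Prices"), "Food Quality"),
    (("Food Quality", "Service"), "Food Quality"),
    (("Food Quality", "Cleanliness"), "Food Quality"),
    (("Atmosphere", "Prices"), "Prices"),
    (("Atmosphere", "Service"), "Service"),
    (("Atmosphere", "Cleanliness"), "Cleanliness"),
    (("Prices", "Service"), "Service"),
    (("Prices", "Cleanliness"), "Prices"),
    (("Service", "Cleanliness"), "Service"),
]

# Start every attribute at zero wins so never-chosen attributes still appear.
wins = Counter({a: 0 for a in attributes})
wins.update(chosen for _pair, chosen in choices)

# Rank attributes by number of wins (ties broken alphabetically here).
ranking = sorted(wins, key=lambda a: (-wins[a], a))
```

For this respondent, Food Quality wins all four of its pairs and tops the ranking, while Atmosphere never wins and ranks last.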
CREDIBILITY OF RESEARCH FINDINGS
• Reliability and Validity – Related to quantitative
Research
• Reliability refers to the extent to which your data
collection techniques or analysis procedures will
yield consistent findings
– Will the measures yield the same results on other
occasions?
– Will similar observations be reached by other
observers?
– Is there transparency in how sense was made from
the raw data?
TYPES OF RELIABILITY TESTS
Three types:
1. Test-retest reliability
2. Alternative forms reliability
3. Internal consistency reliability
TEST-RETEST RELIABILITY
Test-retest reliability is obtained through repeated measurement of the same respondent or group of respondents using the same measurement device and under similar conditions. The results are compared to determine how similar they are. If they are similar (similarity is typically assessed with a correlation coefficient), we say the measure has high test-retest reliability.
Problems
Taking the test the first time may influence respondents' answers the second time they take it. Also, situational factors such as how one feels on a particular day may influence how respondents answer the questions, and something may change in the time between repeated administrations of the test.
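The correlation coefficient used here can be computed directly. A small Python sketch of Pearson's r, applied to hypothetical scores for five respondents measured on two occasions (the numbers are invented for illustration):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scale scores for five respondents, measured twice.
time1 = [10, 12, 15, 18, 20]
time2 = [11, 12, 14, 19, 21]

r = pearson_r(time1, time2)   # close to 1.0 => high test-retest reliability
```

A value of r near 1 indicates that respondents kept roughly the same relative standing across the two administrations, which is the essence of test-retest reliability.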
ALTERNATIVE FORMS RELIABILITY
Alternative forms reliability can be used to reduce
some of the above problems. To assess this type of
reliability the researcher develops two equivalent
forms of the construct.

The same respondents are measured at two


different times using equivalent alternative
constructs.

The measure of reliability is the correlation


between the responses to the two versions of the
construct
INTERNAL CONSISTENCY RELIABILITY
This type of reliability is used to assess a summated scale where several statements are summed to form a total score for a construct.

There are two types of internal reliability tests:
1. Split-half reliability: to determine the split-half reliability, the researcher divides the scale items in half and correlates the two sets of items. A high correlation between the two halves indicates high reliability.
2. Coefficient alpha: also referred to as Cronbach's alpha. To obtain the coefficient alpha, you calculate the average of all possible split-half combinations. Coefficient alpha ranges from 0 to 1. We use statistical packages to compute the coefficient alpha.
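Cronbach's alpha can also be computed directly from its standard variance formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal Python sketch with hypothetical item scores (three items answered by four respondents):

```python
def variance(xs):
    """Sample variance (divides by n - 1)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: list of k lists, each holding one item's scores across respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical 3-item summated scale, four respondents, scores 1-5.
items = [
    [4, 5, 3, 4],
    [4, 5, 2, 4],
    [5, 5, 3, 3],
]
alpha = cronbach_alpha(items)  # between 0 and 1; higher => more consistent
```

In practice a statistical package would be used, as the slide notes; this sketch only makes the formula concrete.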
CREDIBILITY OF RESEARCH FINDINGS
• Validity is concerned with whether the findings are
really about what they appear to be about.
Types of validity
– Measurement (or construct) validity – do measures reflect
concepts?
– Internal validity – are causal relations between variables
real?
– External validity – sometimes referred to as
generalisability. Can results be generalized beyond the
research setting?
– Ecological validity – are findings applicable to everyday
life?
ALTERNATIVE CRITERIA IN QUALITATIVE
RESEARCH
Trustworthiness (Lincoln and Guba, 1985):
– Credibility, parallels internal validity - i.e. how believable are the
findings?
– Transferability, parallels external validity - i.e. do the findings
apply to other contexts?
– Dependability, parallels reliability - i.e. are the findings likely to
apply at other times?
– Confirmability, parallels objectivity - i.e. has the investigator
allowed his or her values to intrude to a high degree?

Relevance (Hammersley, 1992):
– Importance of a topic in its field
– Contribution to the literature in that field
IMPROVING MEASUREMENTS
✓Elaborate the conceptual definitions
✓ Develop operational definitions
(measurement).
✓ Correct and redefine measurement.
✓ Pre-test the measures for their reliability
✓ Use the final measurement instrument
