PR2-CH4
PRACTICAL RESEARCH 2
Second Semester, SY 2023-2024
Lesson 1: Measurement and Types of Data in Quantitative Research (pp. 101-137)
WHY MEASURE?
o Quantitative research – requires researchers to examine precisely the concepts and issues that are the
focus of their studies
o Concepts – categories for the organization of ideas and observations (Bulmer 1984)
Represent “a label that we give to elements of the social world that seem to
have common features and that strike us as significant.” (Bryman 2008, p. 143)
E.g., Political Science – politics, power, state, government, nation, ideology, and
democracy
o Challenge for social science researchers – many of the phenomena that they study
involve abstract concepts that are not easily observable
E.g., the political science terms above require you to specify clearly what you
mean when you use any of these concepts when doing quantitative research
o IF A CONCEPT IS TO BE USED IN QUANTITATIVE RESEARCH, THE CONCEPT WILL HAVE
TO BE MEASURED.
2 tasks in the process of measurement - distinct but interrelated processes
1. Conceptualization
2. Operationalization
o 1. Conceptualization – “the process through which we specify what we mean when we
use particular terms (concepts) in research” (Babbie 2010, p. 126)
How? Don’t rely on Merriam-Webster dictionary!
Examine the social theories and prior research relevant to the concept -> RRL
Concepts – can be defined, approached, and explained in many ways; your task
is to specify how you are going to use them in your study
Variable – a specific dimension or aspect of the concept
o 2. Operationalization – “the process through which we specify what we will observe”
that will indicate to us the presence or absence of the concept that we are studying
(Babbie 2010, p. 126)
This moves our understanding of a concept from its abstract (theoretical) form
to its concrete (empirical/observable and measurable) form.
It includes the creation or formulation of an indicator or a set of indicators that
will stand for and measure the concept or any of the dimensions of the concept,
the variable or variables
Measure – “refers to things that can be relatively unambiguously
counted,” e.g., income, age, number of children, number of years spent
in school (Bryman 2008, p. 145)
o Measures are quantities
Indicator – “less directly quantifiable” and “something that is devised or
already exists and that is employed as though it were a measure of a
concept”
o Indicators are indirect measures of a concept
o Ways to devise indicators (Bryman):
Through a question (or series of questions) that is part
of a structured interview schedule or self-completion
questionnaire (e.g., attitude -> job satisfaction, social
situation -> poverty, behavior -> leisure pursuits)
Through the recording of individuals’ behavior using a
structured observation schedule (e.g., pupil behavior in
a classroom)
Through official statistics, e.g., the use of crime statistics to
measure criminal behavior
Through an examination of mass media content through
content analysis, e.g., to determine changes in the
salience of an issue, e.g., AIDS
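To make the idea of operationalization concrete, here is a minimal Python sketch of how a set of Likert-type questionnaire items could be combined into a single indicator score for a concept such as job satisfaction. The item names, answers, and the five-item averaging rule are invented assumptions for illustration, not a prescribed instrument.

```python
# Hypothetical sketch: turning questionnaire answers into an indicator score.
# Assumes five Likert-type items (1 = strongly disagree ... 5 = strongly agree)
# that are meant to stand for the concept "job satisfaction".

respondent_answers = {
    "enjoys_daily_tasks": 4,
    "feels_fairly_paid": 3,
    "would_recommend_employer": 5,
    "feels_supported_by_manager": 2,
    "sees_career_growth": 4,
}

def job_satisfaction_index(answers: dict[str, int]) -> float:
    """Average the item scores so the index stays on the original 1-5 scale."""
    return sum(answers.values()) / len(answers)

print(job_satisfaction_index(respondent_answers))  # 3.6
```

Averaging rather than summing keeps the index on the original 1-5 response scale, which makes the resulting scores easier to interpret.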
o SUMMARY: Why measure? 3 Roles:
1. Allows us to delineate fine differences between people in terms of the
characteristics in question.
2. Gives us a consistent device or yardstick for making distinctions.
3. Provides the basis for more precise estimates of the degree of relationship
between concepts.
LEVELS OF MEASUREMENT
o Levels of measurement refer to “the values that an indicator can take” but “say nothing about the
indicator itself” (Bhattacherjee 2012, p. 45)
o Variables – have categories of some sort, providing different types of information
Discrete – each separate category represents a different status
Continuous – a number represents a quantity that can be described in terms of
order, spread between the numbers, and/or relative amounts (Engel and Schutt
2012, p. 89)
o 4 Levels of Measurement
1. Nominal – not quantitative
2. Ordinal – quantitative
3. Interval – quantitative
4. Ratio – quantitative
o 2 important implications of knowing your variable’s level of measurement
1. You can better understand the differences of your cases on a variable,
allowing you to understand more fully what you are actually measuring.
2. You can identify the type of statistics or the type of statistical analysis that can
be applied to the variable.
o 1. Nominal Level of Measurement
“Identifies variables whose values have no mathematical interpretation; they
vary in kind or quality, but not in amount.” (Engel and Schutt 2012, p. 89)
E.g., gender – male or female; a variable with only two categories is
known as a dichotomous variable
There is no order implied in the attributes of the nominal variable; no one
response is somehow higher or lower than another.
The attributes that you use to categorize cases must be mutually exclusive and
exhaustive; when your variable meets these two requirements, every case
corresponds to one and only one attribute (Engel and Schutt 2012, pp. 90-91)
Mutually exclusive – every case can have only one attribute
Exhaustive – every case can be classified into one of the
categories
E.g., marital status – single, married, divorced, separated, widowed
“other/s” – a residual category added so that the attribute of every
possible case is captured, keeping the categories exhaustive
o 2. Ordinal Level of Measurement
“The numbers assigned to cases specify only the order of the cases, permitting
greater than and less than distinctions.” (Engel and Schutt 2012, p. 89)
Like nominal variables, these must be mutually exclusive and
exhaustive.
E.g., level of education – high school, college, post-graduate
Here, what matters is that you rank-order the categories; the distance between
categories does not matter (Babbie 2010, p. 138)
o 3. Interval Level of Measurement
The variable values “represent fixed measurement units but have no absolute,
or fixed, zero point.” (Engel and Schutt 2012, p. 91)
Also mutually exclusive, exhaustive, and ordered, AND the gaps between
the numbers are meaningful
E.g., temperature and IQ tests
Here, we can have 3 sets of data (Babbie 2010, p. 139):
1. People are different from one another on this variable
2. One person is more than another person on this variable
3. We can say “how much” more the person is than the other person
o 4. Ratio Level of Measurement
Represents fixed measuring units and an absolute zero point (zero means
absolutely no amount of whatever the variable indicates).
Ratio numbers can be added and subtracted; because the numbers
begin at an absolute zero point, they can be multiplied and divided (so
ratios can be formed between the numbers).
E.g., ages – can be represented by values ranging from 0 years (or some
fraction of a year) to 120 or more.
The numbers also are mutually exclusive, are exhaustive, have an order,
and have equal gaps. (Engel and Schutt 2012, p. 92)
Ratio measures – have all the characteristics of interval measures BUT the
former have an absolute zero – a true point where zero really means the
absence of the attribute.
E.g., age, income, length of residency, number of organizations, and
number of times you attend Church
o Types of Variables (Bryman 2008, p. 322 and Black 1999, p. 52)
1. Ratio – the distances between the categories are identical across the range,
and there is an absolute zero point that has meaning – zero indicates that nothing of the trait is there
E.g., achievement test scores
2. Interval – the distances between the categories are identical across the range
but there is no zero point where the trait does not exist
E.g., IQ scores
3. Ordinal – categories can be rank-ordered but the distances between the
categories are not equal across the range
E.g., social class, opinions, job position
4. Nominal – categories cannot be rank-ordered and have name value only;
must have at least 2 categories; binary – a subject either belongs or does not
belong
E.g., gender, profession, school attended, country, etc.
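As a rough illustration of the four levels, the short Python sketch below uses invented values and shows which summary operations are meaningful at each level; the variable names, values, and the rank ordering for the ordinal example are assumptions made only for this example.

```python
# Minimal sketch (hypothetical data) of the four levels of measurement
# and the summary statistics that make sense at each level.
from statistics import mean, median_low, mode

nominal  = ["male", "female", "female", "male", "female"]          # categories only
ordinal  = ["high school", "college", "college", "post-graduate"]   # ranked categories
interval = [36.5, 37.0, 38.2, 36.8]                                 # equal gaps, no true zero
ratio    = [0, 2, 2, 5]                                             # true zero point (0 = none)

rank = ["high school", "college", "post-graduate"]

print(mode(nominal))                                 # mode is the only sensible "average"
print(median_low(sorted(ordinal, key=rank.index)))   # order matters, gaps do not
print(mean(interval))                                # differences are meaningful
print(ratio[3] / ratio[1])                           # "2.5 times as much" needs a true zero
```

Statistics that are valid at a lower level (e.g., the mode) remain usable at higher levels, but not the other way around.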
o Types of Validity Tests
1. Face validity – done by asking the experts of the field if the concept’s measure
captures what the concept is or what the concept means.
2. Concurrent validity – requires using a criterion on which cases (e.g., people)
are known to differ and that is relevant to the concept in question.
E.g., absenteeism used as a criterion for validating a measure of job satisfaction
3. Predictive validity – a researcher uses a future criterion measure rather than
a contemporary one, as in concurrent validity
4. Construct validity – a researcher is encouraged to deduce hypotheses from a
theory that is relevant to the concept
E.g., examining the link between job satisfaction and job routine
o Satisfied – less likely to work on routine jobs
o Unsatisfied – more likely to work on routine jobs
5. Convergent validity – done by comparing a new measure to measures of the
same concept developed through other methods
E.g., measure how much time managers spend on various activities
Potential Problem - it is not necessarily easy to establish which of the
two measures represents the more accurate picture or a definitive, valid
measure.
o 3 sources of unreliability
1. The observer’s subjectivity
2. Asking ambiguous questions
3. Asking about issues that respondents are not very familiar with or do not care about
o How to create reliable measures
1. Use questionnaires (objective) rather than observations (subjective)
2. Ask questions on issues that the respondents know
3. Avoid ambiguous measures
Simplify the wordings in your indicators
o Types of Reliability Tests or Techniques (a single code sketch after this list illustrates all four)
1. Inter-rater reliability – aka. Inter-observer reliability; a measure of
consistency between two or more independent raters (observers) of the same
construct (concept).
Raters check off which category each observation falls in, and the
percentage of agreement between the raters is an estimate of inter-
rater reliability.
2. Test-retest reliability - a measure of consistency between two measurements
(tests) of the same construct administered to the same sample at two different
points in time.
If the observations have not changed substantially between the two
sets, then the measure is reliable.
3. Split-half reliability – a measure of consistency between two halves of a
construct measure.
If you have a 10-item measure of a given construct, randomly split the
10 items into 2 sets of 5 (unequal halves are allowed only if the total number of items is odd)
Administer the entire instrument to a sample of respondents
Calculate the total score of each half for each respondent
The correlation between the total scores for each half is a measure of
split-half reliability.
4. Internal consistency reliability – a measure of consistency between different
items of the same construct
If a multiple-item construct measure is administered to respondents,
the extent to which respondents rate those items in a similar manner is
a reflection of internal consistency.
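The following Python sketch, with invented scores throughout, illustrates all four reliability checks in one place: percentage agreement for inter-rater reliability, a correlation between two administrations for test-retest, a correlation between two halves of the items for split-half, and Cronbach's alpha for internal consistency. The formula for alpha is the standard one, but the data and variable names are assumptions made for the example.

```python
# Minimal sketch (invented numbers throughout) of the four reliability checks above.
from statistics import correlation, variance   # statistics.correlation requires Python 3.10+

# 1. Inter-rater reliability: percentage agreement between two raters.
rater_a = ["on-task", "off-task", "on-task", "on-task", "off-task", "on-task"]
rater_b = ["on-task", "off-task", "on-task", "off-task", "off-task", "on-task"]
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

# 2. Test-retest reliability: correlate total scores from two administrations.
time_1 = [12, 15, 9, 20, 17, 11]
time_2 = [13, 14, 10, 19, 18, 11]
test_retest = correlation(time_1, time_2)

# 3. Split-half reliability: correlate the totals of the two random halves of the items.
first_half  = [6, 8, 4, 10, 9, 5]
second_half = [6, 7, 5, 10, 8, 6]
split_half = correlation(first_half, second_half)

# 4. Internal consistency (Cronbach's alpha): rows = respondents, columns = items.
ratings = [[4, 5, 4, 4], [2, 2, 3, 2], [5, 4, 5, 5], [3, 3, 3, 4], [4, 4, 5, 4]]
k = len(ratings[0])
item_vars = [variance([row[i] for row in ratings]) for i in range(k)]
total_var = variance([sum(row) for row in ratings])
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)

print(round(agreement, 2), round(test_retest, 2), round(split_half, 2), round(alpha, 2))
```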
o Random error – the error that can be attributed to a set of unknown and uncontrollable
external factors that randomly influence some observations but not others
E.g., a respondent’s mood (good or bad) at the time the measure is administered
o Systematic error – an error that is introduced by factors that systematically affect all
observations of a construct across an entire sample.
E.g., a financial crisis impacted the performance of financial firms
disproportionately more than other types of firms
SAMPLING
o 3 steps in the sampling process:
1. Defining the target population – using a unit of analysis, e.g., person, group,
organization, country, object, etc.
2. Choosing a sampling frame – which is the list from which you can draw your
sample
3. Choosing a sample from the sampling frame using a well-defined sampling
technique which are:
A. Probability (random) sampling
B. Nonprobability sampling
o Types of Probability Sampling Techniques
1. Simple Random – from whole population; highly representative if all subjects
participate; not possible without complete list of population members
2. Stratified Random – from identifiable groups (strata), subgroups, etc.; can
ensure that specific groups are represented, even proportionally, in the sample;
this requires greater effort and the strata must be clearly defined
3. Cluster – random samples of successive clusters of subjects until small groups
are chosen as units; possible to select randomly when no single list of
population members exists but local lists do; clusters at each level should be
equivalent, but some natural clusters are not equivalent on essential characteristics
4. Stage – combination of cluster (randomly selecting clusters) and random or
stratified random sampling; can make up a probability sample by randomizing at
each stage and within groups; but it is complex and combines the limitations of cluster and
stratified random sampling
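A minimal Python sketch of the first two techniques, using an invented sampling frame of 100 students; the strata, sample sizes, and the proportional allocation (6 and 4) are assumptions made only for illustration.

```python
# Hypothetical sketch of simple random vs. stratified random sampling from a
# sampling frame (here, an invented list of students with their year level).
import random

frame = [{"name": f"student_{i}", "year": year}
         for i, year in enumerate(["grade 11"] * 60 + ["grade 12"] * 40)]

# Simple random sampling: every member of the frame has an equal chance.
simple_sample = random.sample(frame, k=10)

# Stratified random sampling: draw proportionally from each stratum (year level).
strata = {"grade 11": [s for s in frame if s["year"] == "grade 11"],
          "grade 12": [s for s in frame if s["year"] == "grade 12"]}
stratified_sample = (random.sample(strata["grade 11"], k=6) +
                     random.sample(strata["grade 12"], k=4))

print(len(simple_sample), len(stratified_sample))   # 10 10
```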
SURVEY RESEARCH
o A research method involving the use of standardized questionnaires or interviews to
collect data about people and their preferences, thoughts, and behaviors in a systematic
manner.
o Strengths:
1. Excellent vehicle for measuring a wide variety of unobservable data
2. Ideally suited for remotely collecting data about a population that is too large
to observe directly
3. Questionnaire surveys are preferred by some respondents due to their
unobtrusive nature
4. Interviews may be the only way of reaching certain population groups
5. Large sample surveys may allow detection of small effects even while
analyzing multiple variables.
6. Survey research is economical in terms of researcher time, effort, and cost compared with other methods
o 2 ways of conducting surveys
1. Questionnaire surveys (written) – may be mail-in, group-administered, or
online surveys
2. Interview surveys (verbal) – may be personal, telephone, or focus group
interviews
EXPERIMENT
o One of the most rigorous of all research designs
o 3 key features:
1. One or more independent variables are manipulated by the researcher (as
treatments)
2. Subjects are randomly assigned to different treatment levels (random
assignment)
3. The effects of the treatments on outcomes (dependent variables) are observed and measured
o This is best suited for explanatory research (rather than for descriptive or exploratory
research) – where the goal of the study is to examine cause-and-effect relationships and
where the independent variables can be manipulated or controlled
o 2 Possible Settings
1. Laboratory experiments – tend to be high in internal validity but low in
external validity (generalizability)
The artificial (laboratory) setting in which the study is conducted may
not reflect the real world
2. Field experiments – high in both internal and external validity; e.g., done in a
real organization
But these are relatively rare because of the difficulties in manipulating
treatments and controlling extraneous effects in a field setting
o 2 Broad Categories of Experimental Research – both require treatment manipulation
1. True experimental designs – require random assignment
2. Quasi-experimental designs – doesn’t require random assignment
o Basic concepts in experimental research
Treatment and control groups
Treatment group – a treatment, an experimental stimulus, is
administered to them (many treatments can be given)
Control group – they are not given the stimulus
MEASURE OF SUCCESS – if the treatment group rates more favorably on
outcome variables than the control group
Treatment manipulation
This helps control for the “cause” in the cause-effect relationships.
MEASURE OF SUCCESS – the validity of experimental research depends
on how well the treatment was manipulated
Checked in 2 steps:
o 1. Pretest – measurements done before the treatment
o 2. Posttest – measurements done after the treatment
Random Selection and assignment
Random selection – process of randomly drawing a sample from a
population or a sampling frame
o More closely related to external validity (generalizability) of
findings
Random assignment – a process of randomly assigning subjects to
experimental or control groups
o Related to internal validity since it is related to design
Experimental research – can use either or both
Quasi-experimental research – doesn’t use neither
o Strength
Its internal validity (causality), which is due to its ability to link cause and effect
through treatment manipulation while controlling for the spurious effects of
extraneous variables.
o Weaknesses
1. Without theories, the hypotheses being tested tend to be ad hoc, possibly
illogical, and meaningless
2. Many measurement instruments are not tested for reliability and validity
3. Many experimental studies use inappropriate research designs; these lack
internal validity and are highly suspect
4. The treatments (tasks) used in experimental research may be diverse,
incomparable, and inconsistent across studies and sometimes inappropriate for
the subject population
OFFICIAL STATISTICS
o Advantages
1. The data have already been collected
2. The problem of reactivity will be much less pronounced than when data are
collected by interview or questionnaire
3. Precisely because the data are compiled over many years, it is possible to
chart trends over time and perhaps to relate these to wider social changes
4. There is the prospect as well of cross-cultural analysis since the official
statistics from different nations can be compared for a specific area of activity
5. Unobtrusive measure or method – “any method of observation that directly
removes the observer from the set of interactions… being studied.”
CONTENT ANALYSIS
o “An approach to the analysis of documents and texts that seeks to quantify content in
terms of predetermined categories and in a systematic and replicable manner”
o Quantitative Model (Deductive)
1. Research question and hypothesis
2. Conceptualization – what variables are used and how they will be defined
3. Operational measures – aimed at gaining internal validity and face validity
3a. unit of analysis
3b. measurement
Categories should be exhaustive and mutually exclusive, and may be set a priori
4. Coding
5. Sampling – random sampling a subset of content
6. Reliability
Can use two coders for intercoder reliability, or a computer program for
validation
7. If reliability was determined by hand (step 6) then apply a statistical check
8. Tabulation and representation
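As a toy illustration of the coding and tabulation steps, the Python sketch below counts how many (invented) news articles mention each predetermined category; the keyword lists stand in for a real, carefully constructed coding scheme.

```python
# Sketch of the coding step in quantitative content analysis: each document is
# the unit of analysis, and predetermined (a priori) categories are counted.
# The articles and the keyword lists are invented for illustration.
articles = [
    "New AIDS awareness campaign launched as health budget debated",
    "Election season: parties trade accusations over health spending",
    "AIDS research funding rises after public pressure",
]

categories = {
    "health_issue": ["aids", "health"],
    "politics": ["election", "parties", "budget"],
}

counts = {name: 0 for name in categories}
for text in articles:
    lowered = text.lower()
    for name, keywords in categories.items():
        if any(word in lowered for word in keywords):
            counts[name] += 1

print(counts)   # how many articles mention each predetermined category
```

A real coding scheme would define mutually exclusive and exhaustive categories per coding unit; this toy version only counts mentions of each theme.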
o Its Contributions to Social Science
1. Quantitative researchers have shown the value of the method in gaining
“hard data.”
2. Examining patterns and themes within the objects produced in a given
culture, analyzing preexisting data to expose and unravel macro processes, and
presenting findings in easily readable charts and tables have been
significant in getting public attention.
3. Became the standard method for analyzing the role of mass-produced texts in
the socialization process.
4. Helps shape social policy by calling attention to systematic inequalities in
need of change
STATISTICS
o Defined in the following ways:
1. The overall domain concerned with the mathematical treatment of variability.
2. A subset of the domain, theoretical or mathematical statistics – concerned with the use of
mathematical principles and probability theory in the development and testing
of methodology for the treatment of variability.
3. A subset of the domain, applied statistics – the use of already developed and accepted
statistical methodology as an aid in the research effort
4. The specific procedures developed by statisticians for the mathematical
treatment of variability.
o The textbook’s definition – statistics is concerned with (1) the application of accepted
statistical procedures as an aid in the research effort; and (2) the specific statistical
procedures used in modern research.
o 2 Levels of Statistics
1. Descriptive
2. Inferential
1. DESCRIPTIVE STATISTICS
o Definition – refers to “statistically describing, aggregating, and presenting the constructs of
interest or associations between these constructs.”
The Researcher – “stops after looking at a finite data set to numerically
delineate some variable or to portray, explain, or predict variability.”
Goal – treatment of variability in the case of samples
o 2 Types of Analysis
A. Univariate – analysis of a single variable; a set of statistical techniques that
can describe the general properties of one variable; it includes:
i. Frequency distribution – a summary of the frequency (or percentages) of individual
values or ranges of values for that variable
ii. Central tendency – an estimate of the center of a distribution of
values
o mean – average
o median – middle
o mode – most frequent
iii. Dispersion – the way values are spread around the central tendency
B. Bivariate – examines how 2 variables are related to each other
i. Bivariate correlation (aka. “correlation”) – most common bivariate
statistic; “the statistical procedure associated with the correlation
model when 2 variables are involved”
ii. Cross-tabulation (aka. “cross-tab” or “contingency table”) – describes
the frequency of all combinations of 2 or more nominal or categorical
variables.
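A short Python/pandas sketch of these univariate and bivariate descriptive statistics on an invented dataset; the column names and values are assumptions made for illustration.

```python
# Sketch of univariate and bivariate descriptive statistics on an invented dataset.
import pandas as pd

df = pd.DataFrame({
    "sex": ["F", "M", "F", "F", "M", "M", "F", "M"],
    "employed": ["yes", "yes", "no", "yes", "no", "yes", "yes", "no"],
    "age": [21, 35, 28, 45, 31, 52, 26, 39],
    "income": [18, 42, 25, 60, 30, 75, 22, 48],   # in thousands
})

# Univariate: frequency distribution, central tendency, dispersion
print(df["sex"].value_counts(normalize=True))   # frequency distribution (proportions)
print(df["age"].mean(), df["age"].median())     # central tendency
print(df["age"].std())                          # dispersion around the mean

# Bivariate: correlation and cross-tabulation
print(df["age"].corr(df["income"]))             # bivariate (Pearson) correlation
print(pd.crosstab(df["sex"], df["employed"]))   # contingency table of two nominal variables
```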
2. INFERENTIAL STATISTICS
o Definition – “the statistical testing of hypotheses (theory testing)” or “the statistical
procedures that are used to reach conclusions about associations between variables”
The Researcher – attempts to see how far she can go in generalizing statistical
findings to groups other than those examined
Goal – test the validity of generalizations from samples to populations
o Hypothesis testing – you test generalizations from samples to populations
o Hypothesis – “states something about whether what is true of the sample is also true of
the population”
2 Forms of hypothesis
i. null form
ii. alternative form
o A. Two-group comparison – one of the simplest inferential analyses; “done by
comparing posttest outcomes of treatment and control group subjects in a randomized
posttest only control design”
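A minimal sketch of a two-group comparison using an independent-samples t-test from SciPy on invented posttest scores; the scores and group sizes are assumptions for the example.

```python
# Sketch of a two-group comparison: posttest scores of treatment vs. control
# subjects (invented numbers) compared with an independent-samples t-test.
from scipy import stats

treatment_posttest = [78, 85, 82, 90, 74, 88, 81, 86]
control_posttest   = [70, 75, 72, 80, 68, 77, 73, 71]

result = stats.ttest_ind(treatment_posttest, control_posttest)
print(round(result.statistic, 2), round(result.pvalue, 4))  # small p-value -> groups likely differ
```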
o B. Factor analysis – a data reduction technique that is used to statistically aggregate a
larger number of observed measures (items) into a smaller set of unobserved (latent)
variables, called factors, based on their underlying bivariate correlation patterns.
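A small sketch of factor analysis as data reduction, using scikit-learn on simulated data in which six observed items are generated from two latent factors; the factor labels ("satisfaction", "workload"), the simulated loadings, and the sample size are invented for illustration.

```python
# Sketch of factor analysis as data reduction: six observed items (simulated
# survey responses) are summarized by two latent factors.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 200
factor_1 = rng.normal(size=n)          # latent "satisfaction"
factor_2 = rng.normal(size=n)          # latent "workload"
items = np.column_stack([
    factor_1 + rng.normal(scale=0.3, size=n),   # items 1-3 load on factor 1
    factor_1 + rng.normal(scale=0.3, size=n),
    factor_1 + rng.normal(scale=0.3, size=n),
    factor_2 + rng.normal(scale=0.3, size=n),   # items 4-6 load on factor 2
    factor_2 + rng.normal(scale=0.3, size=n),
    factor_2 + rng.normal(scale=0.3, size=n),
])

fa = FactorAnalysis(n_components=2).fit(items)
print(fa.components_.round(2))   # loadings: which items belong to which factor
```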
o C. Discriminant analysis – a classificatory technique that aims to place a given
observation in one of several nominal categories based on a linear combination of
predictor variables.
o D. Logistic regression or logit model – a general linear model (GLM) in which the
outcome variable is binary (0 or 1) and is presumed to follow a logistic distribution.
o E. Probit regression/model – is a GLM in which the outcome variable can vary between
0 and 1 (or can assume discrete values 0 and 1) and is presumed to follow a standard
normal distribution.
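The sketch below fits both a logit and a probit model with statsmodels on a simulated binary outcome (whether a student passed, as a function of invented study hours); the data-generating rule and the variable names are assumptions used only to produce example data.

```python
# Sketch of logistic (logit) and probit regression on a simulated binary outcome.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
study_hours = rng.uniform(0, 10, size=n)
passed = (rng.uniform(size=n) < 1 / (1 + np.exp(-(study_hours - 5)))).astype(int)

X = sm.add_constant(study_hours)               # intercept + one predictor
logit_fit  = sm.Logit(passed, X).fit(disp=0)   # binary outcome, logistic distribution
probit_fit = sm.Probit(passed, X).fit(disp=0)  # binary outcome, standard normal distribution

print(logit_fit.params.round(2))
print(probit_fit.params.round(2))
```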
o F. Path analysis – a multivariate GLM technique for analyzing directional relationships
among a set of variables.
o G. Time series analysis – a technique for analyzing time series data or variables that
continually change over time