Chapter 11 (Sample and Sampling)(Encrypted) (1)
SAMPLING
SAMPLING FRAME
A sampling frame is a listing of all elements of a population. The researcher has to prepare a sampling frame by listing all members of the accessible population.
QUALITIES OF A GOOD SAMPLING FRAME
SAMPLE ELIGIBILITY CRITERIA
INCLUSION CRITERIA
Attributes of subjects that are
essential for their selection to
participate.
Inclusion criteria remove the
influence of specific confounding
variables.
EXCLUSION CRITERIA
Characteristics or responses that
require the removal of subjects from
the study.
DETERMINANTS OF SAMPLE ELIGIBILITY
CRITERIA
PURPOSES OF SAMPLING
CRITERIA FOR GOOD SAMPLING
HOMOGENEOUS
• The selected sample should be homogeneous, showing no differences when
compared with the population.
TRUE REPRESENTATIVE
• The selected sample should have the same characteristics as the original
population from which it was selected.
MUTUALLY EXCLUSIVE
• The individual items composing the sample should be independent from each other.
STEPS IN SAMPLING PROCESS
1. IDENTIFY THE TARGET POPULATION (POPULATION OF INTEREST)
The target population refers to the group of individuals or objects to which
researchers are interested in generalizing their findings. Researchers need
to identify the population of interest based on the research topic.
2. IDENTIFY THE ACCESSIBLE POPULATION (SOURCE POPULATION)
The accessible population is the group of individuals or objects from
which the sample might be taken. A well-defined population reduces the
probability of including participants who are not suitable for the
research objective.
3. SELECT A SAMPLING FRAME
It is the list of all units in a study population from which the sample is
taken. For example, a researcher may take the toddlers of three playschools
as the sampling frame for a study.
6. DETERMINE THE SAMPLE SIZE
The sample size is simply the number of units in the sample. Sample
size determination depends on many factors such as time, cost, and
facility. In general, larger samples are better, but they also require more
resources. Follow the principle of optimum sample size.
7. EXECUTE THE SAMPLING PLAN
Once all the above steps are completed, the researcher can use that
information to select the sample for the research study.
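The steps above can be sketched in code. This is a minimal illustration, not part of the chapter: the playschool frame, the confidence level, and the margin of error are hypothetical, and Cochran's formula with a finite-population correction is one common way to determine sample size before executing a simple random sampling plan.

```python
import math
import random

def cochran_sample_size(p=0.5, e=0.05, z=1.96):
    """Sample size for estimating a proportion (Cochran's formula):
    n = z^2 * p * (1 - p) / e^2, rounded up."""
    return math.ceil(z**2 * p * (1 - p) / e**2)

# Hypothetical sampling frame: toddlers enrolled in three playschools.
frame = [f"toddler_{i:03d}" for i in range(1, 301)]

n = cochran_sample_size()  # 385 assuming a very large population
# Finite population correction, since the frame has only 300 units:
n_adj = math.ceil(n / (1 + (n - 1) / len(frame)))

random.seed(42)  # fixed seed so the selection is reproducible
sample = random.sample(frame, n_adj)  # simple random sample, no repeats
```

Note how the correction shrinks the required sample considerably when the accessible population is small, in line with the principle of optimum sample size.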
Types of Sampling Techniques
Stability
• It is the extent to which similar results are obtained on two separate occasions (test-retest
reliability).
Equivalence
• Different observers score a behavior or event using the same instrument (inter-rater
reliability).
Internal consistency
• Extent that items measure the same trait and nothing else.
TYPES OF RELIABILITY MEASURE
RELIABILITY
• Stability: 1. Test-retest
• Equivalence: 1. Parallel forms 2. Inter-rater (Kappa)
• Internal consistency: 1. Cronbach's alpha 2. Split-half 3. KR-20
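As an illustrative sketch of the internal-consistency column above, Cronbach's alpha can be computed from item-level scores. The item scores below are hypothetical; the formula is the standard one, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per item, same respondents in each list.
    Uses population variance throughout, so the ratio is consistent."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    item_var = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical data: 3 scale items rated by 5 respondents.
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [5, 3, 4, 2, 4],
]
alpha = cronbach_alpha(items)  # roughly 0.90 for these made-up scores
```

Values closer to 1 suggest the items measure the same trait; a common rule of thumb treats 0.70 or higher as acceptable.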
STABILITY MEASURES OF
RELIABILITY
It is the extent to which similar results
are obtained on two separate occasions.
The reliability estimate focuses on the
instrument's susceptibility to
extraneous influences over time, such
as participant fatigue.
Test-Retest Reliability It is determined by
administering a test at two different points in
time to the same individual and determining the
correlation or strength of association of the two
sets of scores. The reliability coefficient is an index
of the magnitude of a test's reliability that can be
computed using correlation coefficient statistics.
The possible values of the correlation coefficient
range from -1 through 0 to +1 (negative to
zero, and zero to positive, correlation).
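The test-retest procedure described above amounts to computing a Pearson correlation between the two administrations. A minimal sketch, with hypothetical scale scores for six participants tested two weeks apart:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical scores: same instrument, same people, two occasions.
time1 = [12, 15, 11, 18, 14, 16]
time2 = [13, 14, 10, 19, 15, 17]
r = pearson_r(time1, time2)  # a value near +1 indicates stable scores
```

A coefficient near +1 supports stability; values near 0 suggest the instrument is susceptible to extraneous influences between occasions.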
EQUIVALENCE MEASURES OF RELIABILITY
Inter-rater reliability
Two raters are used with one rating instrument.
Intra-rater reliability
Intra-rater reliability uses one rater to rate the
same instrument or observation twice.
a. Inter-Rater Reliability (also called inter-observer agreement)
With inter-rater reliability, two or more trained raters are asked to
independently rate the same subject or event at the same time using the
same instrument or plan. The scores obtained are then used to compute
a percentage of agreement between the raters. If the rating instrument is
reliable, the scores obtained by the two raters should be comparable.
The researcher looks for a reliability assessment score of 70% agreement
or higher. Percentage of agreement tends to overestimate reliability
because of the possibility of chance agreement; when the score is on the
low side of acceptability, the researcher should be cautious in assuming
reliability.
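A short sketch of the two statistics discussed here, using hypothetical yes/no ratings from two observers. Cohen's kappa, the chance-corrected index mentioned in the table of reliability measures, adjusts the raw percentage of agreement for the chance agreement the text warns about.

```python
from collections import Counter

def percent_agreement(a, b):
    """Proportion of cases on which the two raters gave the same rating."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement corrected for chance: kappa = (po - pe) / (1 - pe)."""
    po = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    n = len(a)
    # Expected chance agreement from each rater's marginal frequencies.
    pe = sum(ca[c] * cb[c] for c in ca) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical ratings of 10 observed events by two trained raters.
rater1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
rater2 = ["yes", "no", "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]
po = percent_agreement(rater1, rater2)   # 0.8, above the 70% guideline
kappa = cohens_kappa(rater1, rater2)
```

Here 80% raw agreement meets the 70% guideline, yet kappa is noticeably lower once chance agreement is removed, which illustrates why percentage of agreement can overestimate reliability.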
b. Intra-Rater Reliability
Intra-rater reliability relies on one rater to rate a subject or event twice.
For example, videotape of a mother-infant interaction could be viewed
and scored by the same person on two separate occasions. Again,
percentage of agreement is the most common statistical procedure used
to assess reliability.
FACTORS INFLUENCING RELIABILITY
• Time interval between testing
• Variation within the testing situation
• Conditions under which measurements were obtained