PPNCKH (Slide Thư Viện)

The document outlines the role and importance of research, defining it as a process of discovering new knowledge while distinguishing it from unethical practices like plagiarism and data falsification. It details the research process, including hypothesis formulation, variable types, sampling methods, and the significance of reliability and validity in measurement. Additionally, it emphasizes ethical standards in research and provides guidance on selecting research problems and reviewing literature.


Chapter 1: The role and importance of research

I. What research is:


 Research is a process through which new knowledge is discovered.
 A theory helps us to organize this new information into a coherent body, a set
of related ideas that explain events that have occurred and predict events that
may happen.

II. What research isn’t:


 Looking for something important when it simply isn’t to be found
 Plagiarizing other people’s work
 Falsifying data to prove a point
 Misrepresenting information
 Misleading participants

III. A model of scientific inquiry:

1. Asking the question
2. Identifying the important factors
3. Formulating a hypothesis
4. Collecting relevant information
5. Testing the hypothesis
6. Working with the hypothesis
7. Reconsidering the theory
8. Asking new questions
IV. Different types of research:
1. Non-experimental research:
 Descriptive research
 Correlational research
 Qualitative research
2. Experimental research:
 True experimental research
 Quasi-experimental research
***Research design “cheat” sheet
Chapter 2: The research process: Coming to terms
I. From problem to solution:
Research problems are educational issues, controversies, or concerns studied by
researchers.

II. Variable:
 A characteristic or attribute of an individual or an organization that:
 Can be measured or observed by the researcher;
 Varies among individuals or organizations studied.
E.g: hair color, height, weight, age, number of words
 Types of variables:
 A dependent variable represents the measure that reflects the outcomes of a
research study.
 An independent variable (treatment variable) represents the treatments or
conditions that the researcher has either direct or indirect control over to test
their effects on a particular outcome.

CAUSE → OUTCOME
Independent variable (and its variable levels) → Dependent variable
 A control variable is a variable that has a potential influence on the
dependent variable; consequently, the influence must be removed or
controlled.
 An extraneous variable is a variable that has an unpredictable impact upon the
dependent variable.
 A moderator variable is a variable that is related to the variables of interest
(such as the dependent and independent variables) and can affect, or mask, the true
relationship between the independent and dependent variables.

III. Hypotheses:
 They are predictions about the expected relationships among variables.
 Null hypothesis: a prediction that in the general population, no relationship or
no significant difference exists between groups on a variable
E.g: There is no linguistic difference in students’ writing performance
corresponding to the two teaching methods.
 Non-directional hypothesis: a prediction about differences, but the researcher
cannot specify the exact form of the differences (e.g., higher, lower, more, less)
E.g: There is a difference between the two groups.
 Directional hypothesis: a prediction about the expected/potential outcome
(based on prior literature).
E.g: Scores will be higher for Group 1 than for Group 2.

***Differences between the null hypothesis and the research hypothesis:
1. The null hypothesis states that there is no relationship between variables (an
equality), whereas the research hypothesis states that there is a relationship (an
inequality).
2. Null hypotheses always refer to the population, whereas research hypotheses
always refer to the sample.
3. Because the entire population cannot be directly tested (again, it is impractical,
uneconomical, and often impossible), you can never really say that there is
actually no difference between groups (or an inequality) on a specified
dependent variable (if you accept the null hypothesis).
4. Null hypotheses are always stated using Greek symbols, whereas research
hypotheses are always stated using Roman symbols.
5. Because you cannot directly test the null hypothesis (remember that you rarely
will have access to the total population), it is an implied hypothesis. The
research hypothesis, on the other hand, is explicit.

IV. What makes a GOOD hypothesis?


 Is stated in declarative form, not as a question
 Posits an expected relationship between variables
 Reflects the theory or literature upon which it is based
 Is brief and to the point
 Is testable

V. Samples & Populations:


 Given the constraints of limited time and limited research funds, the best
strategy is to take a portion of a larger group of participants and do the research
with that smaller group
 Sample: the smaller group selected from a population
 Population: a larger group of participants
 Samples should represent the population as much as possible.
 The results based on the sample can be generalized to the population.
(generalizability)
Chapter 3: Selecting a problem & reviewing the research
I. Starting steps:
 Select an area of interest
 Formulate a research question & a formal hypothesis
 Review the literature

 Use the results of previous studies to fine-tune your research ideas and
hypothesis.
 Ongoing review of the literature & changing ideas about the relationships
between the variables.

II. Selecting a problem:


Select a problem which genuinely interests you

III. Defining your interests:


 Personal experiences and firsthand knowledge more often than not can be the
catalyst for starting research.
 Using ideas from your mentor or instructor will probably make you very current
with whatever is happening in your field.
 Look for a research question that reflects the next step in the research process.
Perhaps A, B, and C have already been done, and D is next in line.
 Come up with a research question
IV. Ideas, ideas, ideas!
 Go to that body of literature & start reading!
 Keywords are words or phrases that capture the most important aspects of a
paper.

V. From idea to research question to hypothesis:


A well-written hypothesis:
 Is stated in declarative form
 Posits a relationship between variables
 Reflects a theory or body of literature upon which it is based
 Is brief and to the point
 Is testable

VI. Reviewing the literature:


The review of literature provides a framework for the research proposal.
VII. Reading and evaluating research:
 Research articles take all kinds of shapes and forms, but their primary purpose is
to inform and educate the reader.
 Criteria for judging a research study:
1. Review of previous research
2. Problems and purpose
3. Hypothesis
4. Method
5. Sample
6. Results and discussion
7. References
8. General comments about the report

VIII. Using electronic tools in your research activities:


 Searching online
 The great search engines
 Word order and repetition

IX. Using bibliographic database programs:


Although one of the most tedious, time-consuming parts of creating the research
document is tracking and dealing with bibliographic references, there are now
several software programs that can greatly reduce the necessary time and
effort: EndNote, ProCite, and Biblioscape.

X. Using the internet: beyond searches


Research activities and the internet
Using Mailing Lists or Listservs

XI. Writing a literature review:


 Read other literature reviews
 Create a unified theme
 Use a system to organize your materials
 Work from an outline
 Build bridges between the different areas you review

XII. Basic principles of ethical research:


 Protection from Harm
 Maintenance of Privacy
 Coercion
 Informed Consent
 Confidentiality
 Debriefing
 Sharing benefits

XIII. Ensuring high ethical standards:


 Do a computer simulation in which data are constructed and subjected to the
effects of various treatments.
 When the treatment is deemed harmful, do not give up. Rather, try to locate a
population that has already been exposed to the harmful effects of some
variable.
 Always secure informed consent.
 When possible, publish all reports using group data rather than individual data.
 Use a small, well-informed sample until you can expand the sample size and the
ambitiousness of the project. Also, be sure to check with your institutional
review board.
 Ask your colleagues to review your proposal, especially your experimental
procedures, before you begin.
 Consult your institutional review board.
Chapter 4: Sampling & Generalizability
I. Populations & Samples
 Population: a group of potential participants to whom a researcher wants to
generalize the results of a study.
 Sample: a subset of a population.
 Generalization can often be the key to a successful study.
II. Probability & Non-probability sampling:
 Probability sampling: the likelihood of any one member of the population being
selected is known.
 Non-probability sampling: the likelihood of selecting any one member from the
population is not known.
E.g: 1,000 seniors out of a known total of 4,500 students (the likelihood of selection is known).
 In contrast, when a researcher doesn’t know the population of students who are enrolled, the likelihood of selection is not known.
1. Probability sampling:
 Simple random sampling: each member of the population has an equal and
independent chance of being selected to be part of the sample.
 Table of Random Numbers
 Computer to Generate Random Samples
 Systematic sampling (see the sketch after this list):
 Size of step (k) = population size / sample size
 Random starting point
 Select every kth name
 Stratified sampling: to ensure the profile of sample matches the profile of the
population; reflecting the true proportion in the population of individuals with
certain characteristics
 Cluster sampling: selecting intact groups (clusters) that occur together, rather than individuals
E.g: the police in each district of a large city
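
The selection rules above can be illustrated in a few lines of Python. A minimal sketch, assuming a hypothetical roster of 1,000 students, a desired sample of 100, and two invented strata (none of this comes from the slides):

import random

population = [f"student_{i}" for i in range(1, 1001)]   # hypothetical roster of 1,000 students
sample_size = 100

# Simple random sampling: every member has an equal, independent chance of selection.
simple_random_sample = random.sample(population, sample_size)

# Systematic sampling: step size k = population size / sample size,
# then take every kth name from a random starting point.
k = len(population) // sample_size        # step size (10 here)
start = random.randrange(k)               # random starting point within the first step
systematic_sample = population[start::k]

# Stratified sampling: sample within each stratum in proportion to its share of the
# population, so the sample profile matches the population profile.
strata = {"freshmen": population[:400], "seniors": population[400:]}   # invented strata
stratified_sample = []
for label, members in strata.items():
    n_stratum = round(sample_size * len(members) / len(population))
    stratified_sample += random.sample(members, n_stratum)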
2. Non-probability sampling:
 Convenience sampling: naturally formed groups (e.g: classroom, organization,
family)
 Easy? Yes
 Random? No
 Representative? Perhaps, but to some extent
 Quota sampling: participants are selected non-randomly until a predetermined number (the quota) is reached.
E.g: 10 males and 10 females, because the distribution of males and females in the
population is approximately a 50/50 split.
III. Sample size:
 Too small a sample: not representative of the population
 Too large a sample: more than is realistically needed (overkill)
 Rule of thumb: n ≈ 30
IV. Sampling error:
 Sampling error is the lack of fit between the sample and the population
 Reducing sampling error is the goal of any sampling technique
Chapter 5: Measurement, Reliability, and Validity
I. The measurement process:
 “I really like the way he presented that material.”  Informal judgment.
 The “assignment of numerals to objects or events according to rules.” (Stevens,
1951)

II. Levels of measurement:


1. Nominal level variables:
 Are categorical in nature.
 Differ in quality rather than quantity
E.g: hair color (blonde, red, or black,...); political affiliation (republican, democrat,
or independent); gender (males, females); research sites (HCM, Hanoi,..)
2. Ordinal level variables:
 Can be ordered along some type of continuum.
 Are rankings of various outcomes
 Spacing between rankings is uneven
E.g: grades (high distinction, first class honor, second class honors, pass); rankings
3. Interval level variables:
 Allow one to determine the difference between points along the same type of
continuum.
 Have equidistant points along some underlying continuum.
E.g: temperature (the difference between 10° and 20° equals the difference between 30° and 40°);
scores
4. Ratio level variables:
 Have equal intervals
 Have an absolute zero (one possible value is zero; the actual absence of
variable/trait).
 Answers are twice, three times,…as much
E.g: temperature (on a Kelvin scale); scores; ages
III. Continuous vs. Discrete variables:
 A continuous variable: can assume any value along some underlying continuum.
E.g: height is a continuous variable in that one can measure height as 64.3 inches
or 64.31 inches or 67.000324 inches
 A discrete or categorical variable: one with values that can be placed only into
categories that have definite boundaries.

IV. Importance of Reliability & Validity:


“Respected levels of reliability and validity are hallmarks of good measurement
practices.”
1. Reliability:
 Any observed test score consists of both a true score component and an error component.
 Reliability occurs when a test measures the same thing more than once and
results in the same outcomes.

 Observed score: the score that is recorded or observed


 True score: a perfect reflection of the true value of a variable, given no other
internal or external influences. A true score is assumed to accurately (and
theoretically) reflect the true value.
 Error score: all of those factors that cause the true score and the observed score
to differ
 Unreliability: both trait and method errors contribute to the unreliability of tests.
 Method error: the difference between true and observed scores resulting from
the testing situation.
 Trait error: the difference between the true and observed scores resulting from
characteristics of the person taking the test

Reliability = True Score / (True Score + Error Score)

 The closer a test or measurement instrument can get to the true score, the more
reliable that instrument is.
 As the error score gets smaller, the degree of reliability increases and
approaches 1.
 If there were no error at all, the reliability would be exactly 1, because the
true score would equal the observed score.
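
A small numeric illustration of this ratio (the true-score and error-score values below are made up) shows how reliability approaches 1 as the error component shrinks:

def reliability(true_score, error_score):
    # Reliability = True Score / (True Score + Error Score)
    return true_score / (true_score + error_score)

# Hold the true-score component at 50 while the error component shrinks.
for error in (50, 25, 10, 0):
    print(error, round(reliability(50, error), 2))
# prints 0.5, 0.67, 0.83, and finally 1.0 (no error: observed score equals true score)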
a. Increasing reliability:
 Increase the number of items or observations.
 Eliminate items that are unclear.
 Standardize the conditions under which the test is taken.
 Moderate the degree of difficulty of the tests.
 Minimize the effects of external events.
 Standardize instructions.
 Maintain consistent scoring procedures.
b. Types of reliability:
 Test-retest reliability: examines consistency over time.
 Parallel-forms reliability: examines consistency between forms.
 Inter-rater reliability: a measure of the consistency from rater to rater
 Internal consistency: examines the uni-dimensional nature of a set of items.
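
For internal consistency, one index that is often reported is Cronbach's alpha. The sketch below is one possible way to compute it and is not taken from the slides; the 5-respondent, 4-item response matrix is invented:

def cronbach_alpha(scores):
    # scores: one list of item scores per respondent (respondents x items)
    k = len(scores[0])                      # number of items
    def variance(values):                   # sample variance (n - 1 in the denominator)
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / (len(values) - 1)
    item_vars = [variance([resp[i] for resp in scores]) for i in range(k)]
    total_var = variance([sum(resp) for resp in scores])
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

responses = [[4, 5, 4, 4],      # hypothetical Likert-type answers
             [2, 3, 2, 3],
             [5, 5, 4, 5],
             [3, 3, 3, 2],
             [4, 4, 5, 4]]
print(round(cronbach_alpha(responses), 2))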
2. Validity:
 ~ truthfulness, accuracy, authenticity, genuineness, and soundness
 That the test or instrument you are using actually measures what you need to
have measured
 Types of validity: content validity, criterion validity (concurrent and predictive), and construct validity

3. The relationship between reliability and validity:


A test can be reliable but not valid, but a test cannot be valid without first being
reliable.
 Closing (and very important) thoughts
Chapter 6: Methods of measuring behavior
I. Tests & their development
 The purpose of a test is to measure the nature and the extent of individual
differences.
 A good test should be able to differentiate people from one another reliably
based on their true scores

II. Why use tests?


 Tests help researchers determine the outcome of an experiment.
 Tests can be used as diagnostic and screening tools, where they can provide
insight into an individual’s strengths and weaknesses
 Tests assist in placement.
 Tests assist in selection.
III. Types of tests:
1. Achievement tests:
 They are used to assess expertise in a content area.
 Used to measure learning as the outcome
 Used to measure the effectiveness of the instruction that accompanied the
learning.
E.g: school districts sometimes use students’ scores on achievement tests to
evaluate teacher effectiveness.
 Different types of achievement tests:
 Standardized tests: administered with a standard set of instructions and
scoring procedures (e.g: TOEIC, TOEFL, IELTS)
 Researcher/Teacher-made tests: designed for a much more specific purpose,
like a course or a treatment.
 Norm-referenced tests: compare an individual’s test performance to the test
performance of other individuals.
 Criterion-referenced tests: compare an individual’s test performance to a
specific criterion or level of performance.
2. Multiple-choice achievement items:
 The stem of a multiple-choice item should be written as clearly as possible to
reduce method error.
 Multiple-choice items have clear advantages and disadvantages.
 Tests can take many different forms depending on their design and intended
purpose.
 Item analysis results in a difficulty and discrimination index for each item on a
test, not for the test as a whole
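
A minimal sketch of such an item analysis, using invented 0/1 item scores and the common upper-third/lower-third approach to discrimination (none of the numbers come from the slides):

def item_analysis(scores):
    # scores: one list of 0/1 item scores per student, ordered from highest to lowest total score
    n = len(scores)
    third = n // 3
    upper, lower = scores[:third], scores[-third:]     # top and bottom thirds by total score
    results = []
    for i in range(len(scores[0])):
        difficulty = sum(s[i] for s in scores) / n     # proportion answering the item correctly
        discrimination = (sum(s[i] for s in upper) - sum(s[i] for s in lower)) / third
        results.append((round(difficulty, 2), round(discrimination, 2)))
    return results

# 6 students (already ordered by total score) answering 3 items
scores = [[1, 1, 1], [1, 1, 0], [1, 0, 1], [1, 0, 0], [0, 1, 0], [0, 0, 0]]
print(item_analysis(scores))   # one (difficulty, discrimination) pair per item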
3. Attitude tests:
 To assess an individual’s feelings about an object, person, or event.
 Attitude tests (sometimes called scales) are used to know how someone feels
about a particular thing.
E.g: customers’ preference for or feelings about a brand of microwave popcorn.
 Thurstone scales come very close to measuring at the interval level.
E.g: I believe the church is the greatest institution in America today.
I believe in religion, but I seldom go to church.
 Likert scales are the most popular type of attitude assessment scale.
4. Personality tests:
 Assess stable individual behavior patterns
 Projective tests & structured tests
5. Observational techniques:
 To record behavior without interference
E.g: studying play behavior among children with disabilities and those without
disabilities.
 Techniques for recording behavior

6. Questionnaires:
 Save time because individuals can complete them without any direct assistance
or intervention from the researcher
 By using snail mail or email, you can survey a broad geographical area.
 They are cheaper (even with increased postage costs) than one-on-one
interviews.
 People may be more willing to be truthful because their anonymity is virtually
guaranteed.
Chapter 7: Data collection & Descriptive statistics
I. Four steps of data collection:

1. The construction of a data collection form used to organize the data you collect
2. The designation of the coding strategy used to represent data on the data collection form
3. The collection of the actual data
4. Entry of the data onto the data collection form

II. Ten commandments of data collection:


1. Go through the (sometimes tedious) process of getting permission from your
institutional review board (IRB) to collect your data.
2. Begin thinking about the type of data you will have to collect to answer your
question.
3. Think about where you will be obtaining the data.
4. Make sure that the data collection form you are using is clear and easy to use
5. Make a duplicate copy of the data file and keep it in a separate location
6. Do not rely on other people to collect or transfer your data unless you
personally have trained them and are confident that they understand the data
collection process as well as you do
7. Plan a detailed schedule of when and where you will be collecting your data.
8. Cultivate possible sources for your participant pool
9. Try to follow up on subjects who missed their testing session or interview
10.Never discard original data
III. Descriptive statistics:
 Distribution of scores: histogram
 Compare different distributions of scores
 Measures of central tendency:
 Mean: average:
 Add all the scores in the group to obtain a total.
 Divide the total of all the scores by the number of observations

X̄ = (X₁ + X₂ + X₃ + ⋯ + Xₙ) / n = (Σ Xᵢ) / n

 Median: the score or the point in a distribution above which one-half of the
scores lie:
 Order the scores from lowest to highest.
 Count the number of scores.
 Select the middle score as the median (spreadsheet function: =MEDIAN(range))
 Mode: the score that occurs most frequently (spreadsheet function: =MODE(range))
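
The same three measures can be checked against Python's statistics module; the scores below are invented for illustration:

import statistics

scores = [7, 9, 9, 10, 6, 8, 9, 5]            # hypothetical set of scores

mean = sum(scores) / len(scores)               # add all scores, divide by the number of observations
median = statistics.median(scores)             # middle score after ordering from lowest to highest
mode = statistics.mode(scores)                 # score that occurs most frequently

print(mean, median, mode)                      # 7.875  8.5  9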

IV. Measures of variability:


 Variability is the degree of spread or dispersion that characterizes a group of
scores, and it is the degree to which a set of scores differs from some measure
of central tendency, most often the mean.
 Measures of variability:
 Range
 Standard deviation
 The range is the difference between the highest and the lowest scores in a
distribution (highest – lowest)
 The standard deviation is the average amount that each of the individual scores
varies from the mean of the set of scores.
 The larger the standard deviation, the more variable the set of scores

s = √[ Σ(X − X̄)² / (n − 1) ]

where:
s = the standard deviation
Σ = the summation of a set of scores
X = an individual score
X̄ = the mean of all the scores
n = the number of observations
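
Applied step by step to the same invented scores used above, the formula looks like this (statistics.stdev uses the same n − 1 divisor, so the two values should match):

import statistics

scores = [7, 9, 9, 10, 6, 8, 9, 5]                        # hypothetical scores
mean = sum(scores) / len(scores)

# Sum the squared deviations from the mean, divide by n - 1, then take the square root.
s = (sum((x - mean) ** 2 for x in scores) / (len(scores) - 1)) ** 0.5

print(round(s, 3), round(statistics.stdev(scores), 3))    # both print the same value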
V. Understanding distributions:
***The Normal (Bell-Shaped) Curve
 The mean, the median, and the mode
are all the same value (represented by
the point at which the vertical line
crosses the X-axis).
 It is symmetrical about its midpoint,
which means that the left and right
halves of the curve are mirror
images.
VI. Standard scores:
 They are scores that have the same reference points and the same standard
deviation.
 Z score is the result of dividing the amount that an individual score deviates
from the mean by the standard deviation
z = (X − X̄) / s

where:
z = the standard score
X = the individual score
X̄ = the mean of the group of scores to which X belongs
s = the standard deviation of the group of scores to which X belongs
 Standard scores allow for the comparison of scores from different distributions,
which enables accurate and straightforward comparisons.
 Z scores not only are essential for comparing raw scores from different
distributions but they are also associated with a particular likelihood that a raw
score will appear in a distribution.
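
A brief sketch of the z-score formula, comparing raw scores from two invented distributions (a 75 on a "hard" test and an 85 on an "easy" one):

import statistics

def z_score(x, group_scores):
    # z = (X - mean of the group) / standard deviation of the group
    return (x - statistics.mean(group_scores)) / statistics.stdev(group_scores)

hard_test = [60, 65, 70, 62, 68, 75, 58, 66]   # hypothetical scores on a hard test
easy_test = [88, 90, 85, 92, 87, 91, 89, 93]   # hypothetical scores on an easy test

print(round(z_score(75, hard_test), 2))        # positive: well above that group's mean
print(round(z_score(85, easy_test), 2))        # negative: below that group's mean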
Chapter 8: Non-experimental research: Qualitative methods
I. Qualitative research:
 Is social or behavioral science research
 Explores the processes that underlie human behavior
 Uses exploratory techniques such as interviews, surveys, case studies, and other
relatively personal techniques.

II. Research sources


Research sources are where you obtain the information you need to make your
argument.

Research sources include: documentation, archival records, physical artifacts,
direct observation, participant observation, and focus groups.
1. Documentation:
 Documentation that is composed and released either internally or for public
consumption can provide a wealth of information.
 Documents also serve to confirm or contradict information gathered through
other means.
2. Archival records:
 Give the researcher descriptive data about the composition of an organization.
E.g: organizational charts and budgets which help track change in the organization
being studied
Which employees have (not) been promoted in recent years?
3. Physical artifacts:
Physical objects or elements that are open to interpretation
E.g: Is information technology policy effective in high schools/a company in
HCMC/Binh Duong Province?
Does a company have a creative culture? (Such as furniture & office layout,
dress norms, etc.)
4. Participant observation:
 Requires the researcher:
 To be an active participant in the social network being studied
 Maintain sufficient objectivity
 Provides some terrific and very useful information.
E.g: a member of the Peace Corps studies a company’s activities and provides a
personal perspective.
5. Focus group:
 A participant group is interviewed by a moderator/researcher
 Strengths: In a relatively short period of time:
 Gather information
 Generate insight
 Determine how group members reach decisions
 Encourage group interaction
6. Case studies:
 Investigate an individual or an institution in a unique setting or situation in as
intense and as detailed a manner as possible
 Take a long time to complete but can yield a great deal of detail and insight
***Advantages and Disadvantages:

Advantages:
 Enable a very close examination and scrutiny and the collection of a great deal
of detailed data
 Encourage the use of several different techniques to get the necessary
information
 Get a richer account of what is occurring
 Suggest directions for further study, rather than testing hypotheses

Disadvantages:
 Can be time-consuming
 Reflect only one reality
 Lose breadth in exchange for depth
 Fail to study cause-and-effect relationships
 Limited generalizability of the findings

III. Ethnographies:
 An ethnography is geared toward exploring a culture.
 Key characteristics:
1) Holistic perspective
2) Naturalistic orientation
3) Prolonged field activity
4) Preconceived ideas

IV. Historical research


 Understanding the past can lend significant understanding to the future. Take
that history course!
 Steps in Historical research:
1) Define a topic or a problem
2) Formulate a hypothesis, which often is expressed as a question
3) Utilize a variety of sources to gather data
4) Evidence needs to be evaluated for its authenticity as well as for its accuracy
5) Data need to be synthesized or integrated to provide a coherent body of
information
6) Interpret the results in light of the argument you originally made
 The limitations of historical research:
 Results will likely be limited in their generalizability
 Ignore other types of data presented by history
 Historical research is often a long and arduous task that can require
hundreds, if not thousands, of hours of poring over documents
 Other less rigorous (but more comprehensive) criteria are used to evaluate
measurement tools.
Chapter 9: Pre- and True experimental research methods
I. Experimental research method:
 Experimental method tests for the presence of a distinct cause and effect,
establishing a causal relationship
 A does cause B to happen or that A does not cause B to happen

Independent variable (treatment variable) → AFFECTS / CAUSES → Dependent variable (outcome)

II. Experimental research design:


1. Assumptions:
 Two groups were equivalent from the start of the experiment
 Any observed difference at the end of the experiment must be due to the
treatment
 Understanding the causal relationships between variables
 Three general categories:
 Pre-experimental designs
 Quasi-experimental/causal-comparative designs
 True experimental designs
2. Randomness:
 Randomness ~ an equal and independent chance of being selected
 Including 3 steps to ensure true randomness
1) Select subjects randomly from a population to form a sample
2) Assign subjects randomly to different groups
3) Assign the treatment randomly to groups

Randomness present?
 Pre-experimental design: NO
 Quasi-experimental design: NO
 True experimental design: YES
III. Pre-experimental designs:
1. One-shot case study designs:

1. Participants are assigned to ONE group

2. A TREATMENT is administered

3. A POSTTEST is administered

 Shortcomings of one-shot case study designs:


 No randomization
 No cause-and-effect relationship
 Factors and their effects are unknown
 One-group pretest posttest designs:
 Comparisons between the pretest & posttest scores
 The observed difference between pre- & posttest may not be due to the
treatment.

1. Participants are assigned to ONE group

2. A PRETEST is administered

3. A TREATMENT is administered

4. A POSTTEST is administered
IV. True experimental designs:
 Randomness
 Control group
 Stronger argument for a cause-and-effect relationship
 There are 3 types:
1) A pretest posttest control group design
2) A posttest-only control group design
3) The Solomon four-group design
 A pretest posttest control group design

Step 1: Random assignment to a CONTROL group | Random assignment to an EXPERIMENTAL group
Step 2: A PRETEST is administered to both groups
Step 3: NO TREATMENT is administered to the control group | A TREATMENT is administered to the experimental group
Step 4: A POSTTEST is administered to both groups


 Assumptions:
 The subjects in both groups are equivalent at the beginning of the
experiment.
 Any differences observed at the end of the experiment must be due to the
treatment.
 The design can be applied to more than 2 groups
 A posttest-only control group design

Step 1: Random assignment of participants to a CONTROL group | Random assignment of participants to an EXPERIMENTAL group
Step 2: NO TREATMENT is administered to the control group | A TREATMENT is administered to the experimental group
Step 3: A POSTTEST is administered to both groups


 Assumption: no need for a pretest (because of random assignment)
 Disadvantage: the groups might not be equivalent at the start (even with random
assignment)
 Solomon four group design

Random assignment to four groups:
Experimental group I: Pretest → Treatment → Posttest
Control group I: Pretest → Posttest
Experimental group II: Treatment → Posttest
Control group II: Posttest only

 Many types of comparisons to determine which factors might be responsible for
certain types of outcomes
 Pre- & posttest comparison
 Control group & experimental groups comparison
 Time-consuming in performing lots of testing

V. Experimental designs: Internal & External validity


 Internal validity: ~the quality of an experimental design such that the results
obtained are attributed to the manipulation of the treatment (i.e., the independent
variable)
 If there are several different explanations for the outcomes of an experiment,
the experiment does not have internal validity
 External validity: ~the quality of an experimental design such that the results
can be generalized from the original sample to another sample and then, by
extension, to the population from which the sample originated.

VI. Threats to internal validity:


1. History: ~uncontrolled events can occur outside of the experiment that
might affect its outcome
2. Maturation: ~changes over time, often caused by biological or psychological
forces, might overshadow the result of a treatment
3. Selection: ~without randomness, a systematic bias might make the
participating groups different from each other
4. Testing: ~the pretest affects posttest performance
5. Instrumentation: ~changes in the instrument or in the scoring procedure
6. Regression: ~the unreliability of the test and the measurement error, which
place scores more toward the extremes than they probably belong
7. Mortality: ~participants move, refuse to participate any further, or become
unavailable

VII. Threats to external validity:


1. Multiple treatment interference: ~a set of subjects might receive an
unintended treatment.
2. Reactive arrangements: ~the participants received special attention from
the researchers
3. Pretest sensitization: ~pretests can inform people about what is to come
and thus affect their subsequent scores
