
CLINICAL PSYCHOLOGY I
Fall 2022
Aslıhan Koyuncu, M.Sc.

Chapter 4
Research Methods in Clinical Psychology
• There are often logical inconsistencies in the way that people process information and make decisions.

• That is why we cannot simply rely on common sense as a guide to appropriate decision-making.
What would happen if I told you not to think about a "pink elephant"?
• Example: the evolution of the treatment of obsessive-compulsive disorder (OCD).

• In the 1970s and 1980s → a thought-stopping component was included in the treatment.

• The client would yell "Stop" or make a loud noise whenever unwanted, intrusive thoughts occurred.

• Research indicated that trying not to think about something often has a paradoxical effect: it increases the persistence of intrusive thoughts!

• Clinical psychology practice must change when evidence shows that a theoretically sound intervention does not work.
How do we decide what to study?

What are the possible sources of research ideas?
Generating Research Hypotheses
• There are many possible sources of research ideas, including personal experience, professional experience, and knowledge of the scientific literature.

• Our thinking is always influenced by the type of theory we hold about human behaviour.

• Using a formal theory to generate a research idea → deductive process

• Deriving an idea from repeated observations of everyday events → inductive process

• The inductive process is influenced by the researcher's informal theories, including his or her theoretical orientation and general world view.

• These informal theories also influence the way the researcher interprets the data he or she obtains from the completed research study.
Generating Research Hypotheses
• There are a number of steps to ensure that the hypothesis is properly formulated and tested:

1. Conducting a systematic search of the published research on the phenomenon of interest.

2. Formalizing ideas (operational definition) so that they can be tested in a scientific manner.

3. Considering the extent to which the research idea may be based on cultural assumptions that may limit the applicability or relevance of the planned research (American Psychological Association, 2003a).

4. Considering ethical issues in the testing of the idea.

5. Drawing together all the results of the previous steps to sketch out the study procedures.
"Increased anxiety is associated with more errors in social interactions."
Ethics in Research
• These principles underline that attention to the welfare of research participants (and animal subjects) and honesty in the presentation of research findings are overarching themes to which psychologists must attend.

• Once researchers have published the results of their studies, they also have an ethical obligation to share their data with other researchers.
Ethics in Research
• Participants may be vulnerable
due to their psychological
distress and/or may be
receiving psychological services
as part of the research.

• A consent form provides an assurance that a research participant is fully aware of the possible benefits and risks of research involvement.
Research Design
• All designs have advantages and disadvantages.

• Some designs are better than others in their capacity to control certain threats to research validity.

• No single study can answer all of the important questions in a research area.

• Research must be seen as cumulative, with each study contributing to the knowledge base of an area.

• Researchers should remain cautious about study results until the study is replicated, preferably by a different group of researchers.
Research Design
• Threats to validity:

o Internal validity: the degree to which a causal relationship between the independent and dependent variables can be established.
Internal Validity
Research Design
• Maturation and regression to the mean → extending the period of time over which the person is assessed and the frequency with which the assessments occur.

• Changing criteria or definitions of the problems/symptoms (i.e., instrumentation) → the same measures can be used at each assessment.

• The measures should be standardized and well established.

• The possibility that observed changes are due to extra-treatment events → defining the nature of the therapeutic intervention and precisely noting when it occurred.
Research Design
• Threats to validity:

o External validity: the generalizability of findings to other populations and settings.


External Validity
Research Design
Case Studies

• involve a detailed presentation of an individual patient, couple, or family illustrating some new or rare observation or treatment innovation.

• make preliminary connections between events, behaviours, and symptoms that have not been addressed in extant research.

• are an initial testing ground for innovative assessment or intervention strategies

• but they do not allow for the rigorous testing of hypotheses.

• Weakness: most threats to internal validity cannot be adequately addressed; alternative explanations cannot be ruled out with this design.
Research Design
Single Case Designs
• an approach to the empirical study of a process that tracks a single unit (e.g., person, family, class, school, company) in depth over time.

• Example: a single-case design for a small group of patients with a tic. After observing the patients and establishing the number of tics per hour, the researcher would then conduct an intervention and watch what happens over time.

• A number of statistical tests can be used to determine if statistically significant changes occurred (see the sketch below).
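A minimal sketch of what such a test might look like in Python, assuming hypothetical hourly tic counts and using a simple Mann-Whitney U comparison of the A and B phases for illustration (specialized single-case analyses also handle trend and autocorrelation, which this ignores):

```python
# Hypothetical A-B single-case data: tics per hour during baseline (A) and after the
# intervention (B). The Mann-Whitney U test here is purely illustrative.
from scipy.stats import mannwhitneyu

baseline_a = [14, 16, 15, 17, 15, 16, 14, 18]   # tics per hour, baseline phase
treatment_b = [12, 10, 9, 8, 7, 7, 6, 5]        # tics per hour, intervention phase

stat, p_value = mannwhitneyu(baseline_a, treatment_b, alternative="greater")
print(f"U = {stat:.1f}, p = {p_value:.4f}")     # a small p suggests fewer tics in phase B
```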
Research Design
Single Case Designs
• The A period represents the level of symptoms prior to the intervention (also known as the baseline), and the B period represents the level of symptoms following the intervention.

• On its own, an A-B design cannot rule out the threat of history to the validity of the study.
Research Design
Single Case Designs
• To strengthen the inference, conduct a small series of A-B designs using the same intervention with a number of individuals who present with similar problems.

• If the data for three or four cases are collected sequentially and the symptom levels consistently appear to change following the intervention, then the possibility that the intervention was responsible for the change is very strong.
Research Design
Correlational Designs
• the most commonly used research designs in clinical psychology.

• examine the association between variables.

• the most common error → making causal statements about associations in the data.
Why can't we say anything about causality when we use a correlational design?
Research Design
Correlational Designs
• Direction-of-causation problem: we do not know which variable is causing which.

• Third-variable problem: an unmeasured third variable may be affecting both of the variables measured (see the sketch below).
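A minimal simulation sketch of the third-variable problem, with hypothetical variable names and numbers: an unmeasured common cause produces a sizable correlation between two variables that do not influence each other at all.

```python
# A simulated unmeasured third variable (e.g., general negative affectivity) drives both
# measured variables, creating a correlation without any direct causal link between them.
import numpy as np

rng = np.random.default_rng(0)
third_variable = rng.normal(size=1000)                              # unmeasured common cause
anxiety = 0.7 * third_variable + rng.normal(scale=0.7, size=1000)
social_errors = 0.7 * third_variable + rng.normal(scale=0.7, size=1000)

r = np.corrcoef(anxiety, social_errors)[0, 1]
print(f"r = {r:.2f}")   # clearly non-zero, even though neither variable causes the other
```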
"Anxiety about speaking in public and performance are negatively correlated. Therefore, high anxiety must cause low performance."

TRUE or FALSE?
Research Design
Correlational Designs
• The use of experimental manipulation and random assignment to conditions is absent in correlational designs.

o Correlational designs can be used to examine the underlying structure of a measure or a set of measures → factor analysis.

o Factor analysis is often used in the development of a measure to determine which items contribute meaningfully to the test.

o It can reveal which items "work" in assessing the construct the test was designed to evaluate.

o It can also be used to determine the conceptual dimensions that underlie a set of tests.
Research Design
Correlational Designs
• Example: a researcher may have data from participants who completed measures on a range of variables such as anger, anxiety, loneliness, shyness, and dysphoria.

By using factor analysis, the researcher can determine whether these measures all assess distinct constructs or whether they are better understood as tapping into a single, broad construct often labelled general distress or negative affectivity.
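A minimal sketch of an exploratory factor analysis on simulated questionnaire scores, using scikit-learn's FactorAnalysis; the measure names mirror the example above, and the data and loadings are hypothetical.

```python
# Five simulated measures (anger, anxiety, loneliness, shyness, dysphoria) are all driven by
# one latent "general distress" factor; a one-factor solution should show similar loadings.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
latent_distress = rng.normal(size=(500, 1))
loadings = np.array([0.8, 0.7, 0.6, 0.7, 0.8])
scores = latent_distress * loadings + rng.normal(scale=0.6, size=(500, 5))

fa = FactorAnalysis(n_components=1).fit(scores)
print(np.round(fa.components_, 2))   # broadly similar loadings point to a single broad construct
```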
Research Design
Correlational Designs
• Two basic forms:

o Exploratory factor analysis is used when the researcher has no prior hypotheses about the structure of the data → it provides the evidence for the underlying factor structure in the data.

o Confirmatory factor analysis is used to test a specific hypothesis regarding the nature of the factor structure → the researcher specifies what the factor structure should be and how each variable or test item contributes to this structure.
Research Design
Correlational Designs
• A moderator variable influences the strength of the relation between an independent variable and a dependent variable.

• Example: the relation between the experience of stressful life events and psychological distress may be moderated by the type of coping strategies used.

[Diagram: the type of coping strategies moderates the path from the experience of stressful events to psychological distress.]

• A moderator MUST NOT be a causal result of the independent variable.

• Moderators are used to enhance the researcher's ability to predict as much variance as possible (see the regression sketch below).
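A minimal sketch, with simulated data and hypothetical variable names, of how moderation is commonly examined: a regression in which the stress-by-coping interaction term carries the moderation effect.

```python
# If coping moderates the stress-distress relation, the stress:coping interaction term
# in the regression should be statistically significant.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 400
stress = rng.normal(size=n)
coping = rng.normal(size=n)                                   # e.g., use of adaptive coping
distress = 0.6 * stress - 0.4 * stress * coping + rng.normal(scale=0.8, size=n)

df = pd.DataFrame({"stress": stress, "coping": coping, "distress": distress})
model = smf.ols("distress ~ stress * coping", data=df).fit()
print(model.summary().tables[1])                              # look at the stress:coping row
```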
Research Design
Correlational Designs
• A mediator variable accounts for the relation between one variable and another.

• Example: the relation between parental psychopathology and child adjustment may be due, partially or entirely, to the quality of the relationship between parent and child.

[Diagram: parental psychopathology → the quality of the parent-child relationship → child adjustment.]

• A mediator MUST be a causal result of the independent variable and a causal antecedent of the dependent variable.

• Mediators are used to explicate the conceptual link among variables (see the sketch below).
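A minimal sketch of one traditional way to examine mediation (in the Baron and Kenny tradition), with simulated data and hypothetical variable names: the association between parental psychopathology and child adjustment should shrink once the proposed mediator, relationship quality, is added to the model. (A full mediation analysis would also test the indirect effect, e.g., with bootstrapping.)

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400
parent_psychopathology = rng.normal(size=n)
relationship_quality = -0.6 * parent_psychopathology + rng.normal(scale=0.8, size=n)
child_adjustment = 0.7 * relationship_quality + rng.normal(scale=0.8, size=n)
df = pd.DataFrame(dict(pp=parent_psychopathology, rq=relationship_quality, ca=child_adjustment))

total_effect = smf.ols("ca ~ pp", data=df).fit()        # effect of parental psychopathology alone
direct_effect = smf.ols("ca ~ pp + rq", data=df).fit()  # effect after adding the mediator
print(round(total_effect.params["pp"], 2), round(direct_effect.params["pp"], 2))
# the coefficient for pp shrinks toward zero once relationship quality is in the model
```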


Research Design
Quasi-Experimental Designs
• involve some form of manipulation by the researcher

• but do not involve random assignment to experimental conditions, because in many situations it is simply not ethical or feasible to randomly assign participants to conditions.

• involve the comparison of two previously established groups of participants.

• Example: one group receives the intervention, the other does not.

• Weakness: the effect of the independent variable on the dependent variable may be confounded with extraneous influences (i.e., a confounding variable).

• Example: the two groups may differ substantially before the intervention, thereby confounding the results.
Research Design
Experimental Designs
• are typically known as randomized controlled trials (random assignment of participants into one of two or more treatment conditions).

• involve both random assignment to condition and experimental manipulation (see the sketch below).

• However, results may be confounded by unplanned variability in the manner in which the manipulation occurred.

• Example: therapists who are supposed to be providing the same treatment may differ in how closely they follow the treatment protocol.

• provide the best protection against threats to internal validity.

Selecting Research Participants and Measures
Selecting the Sample

• Since it is rarely possible to obtain data from all members of a population, researchers select a small group of population members that reflects important characteristics of the population.

• Biased sample: does not reflect the characteristics of the larger population (e.g., only psychology students, when the population is all Çankaya students).
Selecting Research Participants and Measures
Selecting the Sample
• Representative sample: a small group that accurately reflects the larger population (all Çankaya students).
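A minimal sketch of one way to work toward a representative sample: simple random sampling from a (hypothetical) list of every member of the population.

```python
# Draw 200 students at random from a hypothetical roster of all students.
import random

population = [f"student_{i}" for i in range(1, 5001)]   # hypothetical full roster
random.seed(7)
sample = random.sample(population, 200)                 # each student is equally likely to be chosen
print(sample[:5])
```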
Research Design
Setting the Sample Size
• Without a sufficient number of participants, a study will not have the statistical
power needed to detect the very effect it was designed to examine.

• Many tools, developed using the statistical work of Jacob Cohen, are available to assist researchers (e.g., G*Power); see the sketch below.

https://www.youtube.com/watch?v=veaEDIM13Ks

Kang, H. (2021). Sample size determination and power analysis using the G*Power software. Journal of Educational Evaluation for Health Professions, 18, 17. https://doi.org/10.3352/jeehp.2021.18.17
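A minimal sketch of the kind of a priori power analysis G*Power performs, here done with statsmodels: solving for the per-group sample size of an independent-samples t test under assumed values (medium effect d = 0.5, alpha = .05, power = .80).

```python
# Solve for the number of participants per group needed to detect a medium effect.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(round(n_per_group))   # roughly 64 participants per group under these assumptions
```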
Research Design
Measurement Options and the Importance of Psychometric
Properties
• A multitude of measurement options are available to clinical psychologists.

• No option is necessarily the best for all types of studies.

• The strengths and limitations of a measurement option (or a research design or a sampling strategy) must be carefully considered.

• In many studies, multiple measures of each variable are selected to make sure that the variable of interest has been fully or adequately measured in the study.
Measurement Options
Research Design
Measurement Options and the Importance of Psychometric
Properties
• The psychometric properties of a measurement strategy have a dramatic effect on the outcome of a study.

• Reliability (the degree of consistency in the measurement) and validity (the degree to which the construct of interest is accurately measured) both affect the quality of a study and the likelihood that a hypothesis is tested appropriately (see the reliability sketch below).

• In some instances, researchers may choose to develop a measurement tool specifically for the study.
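A minimal sketch of one widely used reliability index, Cronbach's alpha, computed directly from simulated item scores; the items and numbers are hypothetical.

```python
# Six simulated items tapping a single construct; alpha = k/(k-1) * (1 - sum of item
# variances / variance of the total score).
import numpy as np

rng = np.random.default_rng(4)
true_score = rng.normal(size=(300, 1))
items = true_score + rng.normal(scale=0.8, size=(300, 6))

k = items.shape[1]
sum_item_variances = items.var(axis=0, ddof=1).sum()
total_score_variance = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - sum_item_variances / total_score_variance)
print(f"Cronbach's alpha = {alpha:.2f}")   # higher values indicate more consistent measurement
```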
Analyzing Data
• Once data are collected, the researcher must conduct data analyses to determine the extent to which his or her research hypotheses have been supported.

• Guidelines on statistical methods and how to report research findings are available to assist a researcher in making these important decisions (e.g., APA Publications and Communications Board Working Group on Journal Article Reporting Standards, 2008).

• There are many threats to the statistical conclusion validity of a study → aspects of the data analysis that influence the validity of the conclusions drawn about the results of the research study.

• Careful attention to these threats during the design of a study can increase the likelihood of accurately detecting an effect in the study.
Analyzing Data
Analyzing Data
Statistical and Clinical Significance

• Researchers rely on statistical tests to determine the outcome of a study and the degree to which a research hypothesis was supported.

• Knowing that two groups differ in a statistically significant manner in their scores on a particular measure does not provide information about whether the difference is a meaningful one.

• Statistical significance is necessary but not sufficient to fully evaluate the results of a study (see the sketch below).

• Clinical significance → the degree to which the intervention has had a meaningful impact on the functioning of the treated participants.
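A minimal simulated sketch of the distinction, with hypothetical symptom scores: with very large samples, a difference far too small to be clinically meaningful can still be statistically significant.

```python
# Two groups whose true means differ by only about 0.05 standard deviations.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(5)
treated = rng.normal(loc=29.5, scale=10, size=20000)
control = rng.normal(loc=30.0, scale=10, size=20000)

t, p = ttest_ind(treated, control)
pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
d = (control.mean() - treated.mean()) / pooled_sd
print(f"p = {p:.4g}, d = {d:.2f}")   # p is typically well below .05 even though d is negligible
```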
Research Synthesis
Systematic Reviews

• Reviews that synthesize findings on research addressing clinically important questions have been conducted for many years.

• To address the issue of bias in the selection of articles, a systematic review involves the use of a systematic and explicit set of methods to identify, select, and critically appraise research studies.

• The researcher describes the method used to select articles for a review in sufficient detail so that another person could follow the same steps and locate the same articles.
Research Synthesis
Systematic Reviews
There are five main steps in conducting a systematic review:

1) Determining clear, unambiguous questions that will guide the literature search (e.g., “How
does parental divorce affect the psychosocial functioning of young children?”)

2) Conducting an extensive electronic search typically using more than one electronic
database (e.g., PsycINFO, PubMed).

3) Making decisions about the minimal quality required for an identified study to be included
in the review (e.g., only studies with a sample size above a specific number; only studies
that used psychometrically sound measures).

4) Summarizing the results reported in the included studies

5) Interpreting the results, bearing in mind the limitations associated with the research
included in the review (e.g., sampling considerations, potential for publication bias).
Research Synthesis
Meta-Analysis
• is a quantitative form of research review used to make a general statement about the findings in a research field.

• Following the steps employed in systematic reviews, researchers attempt to obtain all relevant studies to include in their analyses.

• Data are then extracted from these studies and subjected to statistical analysis.

• The "participants" in a meta-analysis are research studies rather than individuals.

• combines the results of prior research using a common metric called an effect size (Borenstein, Hedges, Higgins, & Rothstein, 2009).
Research Synthesis
Meta-Analysis
• Effect sizes can be calculated for almost all types of research designs and statistical analyses.

o For correlational analyses, the correlation coefficient is typically used as the effect size.

o For analyses involving differences among groups, the effect size is obtained by calculating the difference between the means of two groups (e.g., the treatment and no-treatment groups) and then dividing by the standard deviation of either one of the groups or the pooled sample of both groups (see the worked sketch below).
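A worked sketch of the between-groups effect size just described, using hypothetical group summary statistics and the pooled standard deviation.

```python
# Cohen's d: difference between group means divided by the pooled standard deviation.
import math

mean_treatment, sd_treatment, n_treatment = 18.0, 6.0, 40
mean_control, sd_control, n_control = 24.0, 7.0, 40

pooled_sd = math.sqrt(((n_treatment - 1) * sd_treatment**2 + (n_control - 1) * sd_control**2)
                      / (n_treatment + n_control - 2))
d = (mean_control - mean_treatment) / pooled_sd
print(f"d = {d:.2f}")   # about 0.92: treated participants score roughly 0.9 SD lower on symptoms
```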
Research Synthesis
Meta-Analysis
• Advantages:

o Statistical analyses (not the authors' impressions) guide the conclusions drawn about a research topic.

o The number of research participants is dramatically increased.

o It can address the issue of publication bias.

• improves the generalizability of the conclusions drawn on the basis of the literature.
Thank you!
Any Questions?
