QR Notes
Searching again and again is, literally, "re-search". Research is defined as a human activity based on intellectual application in the investigation of matter. The primary purpose of research is discovering, interpreting, and developing methods and systems for the advancement of human knowledge on a wide variety of scientific matters of our world and the universe.
Types of Research
Descriptive Research
• It is used to answer questions of who, what, when, where, and how associated with
a particular research question or problem.
Basic Research
Basic research, also called pure research or fundamental research, aims to improve scientific theories for better understanding or prediction of natural or other phenomena.
Qualitative Research
▪ It provides insights into the problem or helps to develop ideas or hypotheses for potential quantitative research.
Quantitative Research
• Is used to quantify the problem by way of generating numerical data that can be
transformed into usable statistics.
• Is a structured way of collecting and analyzing data obtained from different sources.
Longitudinal Research
▪ Is a method in which data is gathered for the same subjects repeatedly over a
period of time.
▪ In a longitudinal study, subjects are followed over time with continuous or repeated monitoring of risk factors.
▪ Longitudinal research is used to study individuals at different stages in their lives.
▪ Cohort Study: Involves selecting a group based on a specific event such as birth,
geographic location.
Cross-sectional Research
Cross sectional research is a study in which subjects of different ages are compared at
the same time. It is often used in developmental psychology, but also utilized in many
other areas including social science, education and other branches of science.
Action Research
▪ Is a disciplined process of inquiry conducted by and for those taking the action.
Ethnographic Research
Ethnographic research is a qualitative method where researchers observe and/or
interact with a study's participants in their real-life environment. Ethnography was
popularized by anthropology, but is used across a wide range of social sciences.
Experimental Research
▪ Is a scientific approach, where a set of variables are kept constant while the other
set of variables are being measured as the subject of experiment.
▪ Maintain control over all factors that may affect the result of an experiment.
Exploratory Research
Exploratory research is the process of investigating a problem that has not been
studied or thoroughly investigated in the past. Exploratory type of research is usually
conducted to have a better understanding of the existing problem, but usually doesn't
lead to a conclusive result.
• It is conducted for a problem that has not yet been clearly studied, and is intended to establish priorities.
Phenomenological Research
▪ Phenomenological research aims to describe the lived experience of a phenomenon as it is experienced by the participants themselves.
Grounded Theory Research
Grounded theory (GT) is a systematic methodology in the social sciences involving the construction of theories through methodical gathering and analysis of data.
This research methodology uses inductive reasoning, in contrast to the hypothetical-deductive model of the scientific method.
▪ It enables researchers to seek out and conceptualize the latent social patterns and structures of an area.
QUALITATIVE RESEARCH
Qualitative research may include field notes, i.e., notes regarding the behaviors and actions of people and other events happening in the situation where data are collected. These may include methodological documentation; analytic documentation reflecting the researcher's thought processes during data analysis; and documentation of personal responses to capture the investigator's role and reactions as the study progresses.
Qualitative Research is primarily exploratory research. It is used to gain an understanding
of underlying reasons, opinions, and motivations. It provides insights into the problem or
helps to develop ideas or hypotheses for potential quantitative research.
→ Qualitative research methods usually collect data at the site where the participants are experiencing the issues or problems. These are real-time data, and participants are rarely taken out of their geographic locations to collect information.
→ Since it is a more communicative method, people can build their trust in the researcher, and the information thus obtained is raw and unadulterated.
2. Focus groups: A focus group is also one of the commonly used qualitative research
methods, used in data collection. A focus group usually includes a limited number
of respondents (6-10) from within your target market.
3. Ethnographic research: Ethnographic research is the most in-depth observational method that studies people in their naturally occurring environment.
This method requires the researchers to adapt to the target audiences' environments, which could be anywhere from an organization to a city or any remote location. Here geographical constraints can be an issue while collecting data.
4. Case study research: The case study method has evolved over the past few years and developed into a valuable qualitative research method. As the name suggests, it is used for explaining an organization or an entity.
This type of research method is used within a number of areas like education, social sciences and similar fields. This method may look difficult to operate; however, it is one of the simplest ways of conducting research, as it involves a deep dive and thorough understanding of the data collection methods and drawing inferences from the data.
5. Record keeping: This method makes use of already existing reliable documents and similar sources of information as the data source. This data can be used in new research. It is similar to going to a library: there one can go over books and other reference material to collect relevant data that can likely be used in the research.
Survey Research
Survey Research is defined as the process of conducting research using surveys that
are sent to survey respondents. The data collected from surveys is then statistically
analyzed to draw meaningful research conclusions.
Content analysis
Content analysis is a research technique used to make replicable and valid inferences by
interpreting and coding textual material. By systematically evaluating texts (e.g.,
documents, oral communication, and graphics), qualitative data can be converted into
quantitative data.
Quantitative Research
In quantitative research, the data are collected and presented in the form of numbers: average scores for different groups on some task, percentages of people who do one thing or another, graphs and tables of data, and so on. Qualitative research, on the other hand, is concerned with qualitative phenomena, i.e. phenomena relating to or involving quality or kind.
Experimental research
Experimental research is any research conducted with a scientific approach, where a set
of variables are kept constant while the other set of variables are being measured as the
subject of experiment. Experimental research is one of the founding
quantitative research methods.
Experimental research designs are the primary approach used to investigate causal
(cause/effect) relationships and to study the relationship between one variable and
another. This is a traditional type of research that is quantitative in nature.
→ Static-group Comparison
Meta-analysis
Qualitative Research L2
Here we will discuss the characteristics of qualitative research and the methods involved in qualitative research.
Qualitative Research
Meaning
Qualitative research may include field notes, i.e., notes regarding the behaviors and actions of people and other events happening in the situation where data are collected. These may include methodological documentation; analytic documentation reflecting the researcher's thought processes during data analysis; and documentation of personal responses to capture the investigator's role and reactions as the study progresses.
Definitions:
→ As defined by Leshan (2012) this is a method of qualitative data analysis where
qualitative datasets are analyzed without coding.
→ Qualitative research is empirical research where the data are not in the form of
numbers (Punch, 1998, p. 4)
→ This type of research method works towards solving complex issues by breaking them down into meaningful inferences that are easily readable and understood by all.
→ Since it is a more communicative method, people can build their trust in the researcher, and the information thus obtained is raw and unadulterated.
→ Aims at discovering the underlying motives and desires, using in-depth interviews for the purpose.
→ It provides insights into the problem or helps to develop ideas or hypotheses for potential quantitative research.
2. Focus groups: A focus group is also one of the commonly used qualitative research
methods, used in data collection. A focus group usually includes a limited number
of respondents (6-10) from within your target market.
3. Ethnographic research: Ethnographic research is the most in-depth observational
method that studies people in their naturally occurring environment.
This method requires the researchers to adapt to the target audiences’ environments
which could be anywhere from an organization to a city or any remote location. Here
geographical constraints can be an issue while collecting data.
4. Case study research: The case study method has evolved over the past few years and developed into a valuable qualitative research method. As the name suggests, it is used for explaining an organization or an entity.
This type of research method is used within a number of areas like education, social sciences and similar fields. This method may look difficult to operate; however, it is one of the simplest ways of conducting research, as it involves a deep dive and thorough understanding of the data collection methods and drawing inferences from the data.
5. Record keeping: This method makes use of already existing reliable documents and similar sources of information as the data source. This data can be used in new research. It is similar to going to a library: there one can go over books and other reference material to collect relevant data that can likely be used in the research.
7. Grounded theory: involves the construction of hypotheses and theories through the collection and analysis of data. Grounded theory involves the application of inductive
reasoning. The methodology contrasts with the hypothetico-deductive model used in
traditional scientific research. A study based on grounded theory is likely to begin with a
question, or even just with the collection of qualitative data. As researchers review the
data collected, ideas or concepts become apparent to the researchers. These
ideas/concepts are said to "emerge" from the data. The researchers tag those
ideas/concepts with codes that succinctly summarize the ideas/concepts. As more data
are collected, and re-reviewed, codes can be grouped into higher-level concepts, and
then into categories. These categories may become the basis of a hypothesis or a new
theory.
10. Triangulation refers to the use of multiple methods or data sources in qualitative
research to develop a comprehensive understanding of phenomena (Patton, 1999). It is a
qualitative research strategy to test validity through the convergence of information
from different sources. Denzin (1978) and Patton (1999) identified four types of
triangulation: (a) method triangulation, (b) investigator triangulation, (c) theory
triangulation, and (d) data source triangulation.
Ethics in Research
Determining Risk
INFORMED CONSENT
Researchers and participants enter into a social contract, often using an informed
consent procedure.
Researchers are ethically obligated to describe the research procedures clearly,
identify any aspects of the study that might influence individuals’ willingness to
participate, and answer any questions participants have about the research.
Research participants must be allowed to withdraw their consent at any time
without penalties.
Individuals must not be pressured to participate in research.
DEBRIEFING
Quantitative Research
Meaning
In quantitative research, the data are collected and presented in the form of numbers: average scores for different groups on some task, percentages of people who do one thing or another, graphs and tables of data, and so on. Qualitative research, on the other hand, is concerned with qualitative phenomena, i.e. phenomena relating to or involving quality or kind.
Definitions:
According to Matthews & Ross (2010), quantitative research methods are basically applied to the collection of data that is structured and which can be represented numerically.
Liu (2008) said that quantitative research methods are typically adopted because they are scientific methods and provide immediate results.
Berg (2004) argued that quantitative research is usually given more respect and acceptance, reflecting the tendency of the general public to regard science highly, as it uses scientific methods and implies precision.
Characteristics:
• Is used to quantify the problem by way of generating numerical data that can be
transformed into usable statistics.
• Is a structured way of collecting and analyzing data obtained from different sources.
→ Static-group Comparison
Meta-analysis
Parametric Tests L5
To understand when and how to apply parametric tests in research.
Parametric Tests
Meaning
Parametric tests are those that make assumptions about the parameters of the
population distribution from which the sample is drawn. This is often the assumption
that the population data are normally distributed. Non-parametric tests are “distribution-
free” and, as such, can be used for non-Normal variables.
Basic Assumptions:
Parametric tests can perform well with skewed and non-normal distributions, provided the sample size is large enough.
Parametric tests can perform well when the spread of each group is different, provided the analysis is adjusted for unequal variances (e.g., Welch's test).
Types:
Z test and t test
Z-tests are statistical calculations that can be used to compare a population mean to a sample mean. T-tests are calculations used to test a hypothesis; they are most useful when we need to determine if there is a statistically significant difference between two independent sample groups.
→ Used to test the significant difference between means (a minimal example follows below)
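A minimal sketch of a one-sample z-test in Python (the sample mean, population mean, population standard deviation, and sample size below are assumed, illustrative values only):

# One-sample z-test: is the sample mean different from the population mean?
# Assumes the population standard deviation is known (illustrative values).
import math
from scipy.stats import norm

sample_mean = 104.2   # mean of the sample (assumed)
pop_mean = 100.0      # known population mean (assumed)
pop_sd = 15.0         # known population standard deviation (assumed)
n = 50                # sample size (assumed)

z = (sample_mean - pop_mean) / (pop_sd / math.sqrt(n))
p_two_tailed = 2 * norm.sf(abs(z))   # two-tailed p-value
print(f"z = {z:.3f}, p = {p_two_tailed:.4f}")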
F test
F-tests are named after its test statistic, F, which was named in honor of Sir Ronald Fisher.
The F-statistic is simply a ratio of two variances.
→ An F-test is any statistical test in which the test statistic has an F-distribution under
the null hypothesis
Pearson r
→ Pearson's r (the product-moment correlation coefficient) is a parametric measure of the strength and direction of the linear relationship between two continuous variables.
t test
The Independent Samples t Test compares the means of two independent groups in
order to determine whether there is statistical evidence that the associated population
means are significantly different. The Independent Samples t Test is a parametric test.
This test is also known as: Independent t Test.
→ Is used when you want to compare the means of a normally distributed interval
dependent variable for two independent groups.
→ The independent samples t-test compares two independent groups of observations
or measurements on a single characteristic.
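A minimal sketch of the Independent Samples t Test using Python's SciPy library, with made-up scores for two illustrative groups:

# Independent samples t-test: compares the means of two independent groups.
from scipy import stats

group_a = [23, 25, 28, 30, 27, 26, 24]   # illustrative scores (assumed data)
group_b = [31, 29, 33, 35, 30, 32, 34]

t_stat, p_value = stats.ttest_ind(group_a, group_b)   # assumes equal variances
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# stats.ttest_ind(group_a, group_b, equal_var=False) gives Welch's t-test
# when the two groups have unequal variances.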
ANOVA
The one-way analysis of variance (ANOVA) is used to determine whether there are any
statistically significant differences between the means of two or more independent
(unrelated) groups (although you tend to only see it used when there are a minimum of
three, rather than two groups).
→ Sample independence – that each sample has been drawn independently of the
other samples.
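A minimal sketch of a one-way ANOVA in SciPy, using assumed scores for three independent groups:

# One-way ANOVA: tests whether the means of three or more independent groups differ.
from scipy import stats

method_1 = [85, 90, 88, 75, 95]   # illustrative scores for three groups (assumed)
method_2 = [70, 65, 80, 72, 68]
method_3 = [90, 92, 94, 88, 91]

f_stat, p_value = stats.f_oneway(method_1, method_2, method_3)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
# A significant F only says at least one mean differs; post-hoc tests
# (e.g., Tukey's HSD) identify which pairs of groups differ.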
Non-Parametric Tests
Meaning
Non parametric tests are used when your data isn't normal. Therefore the key is to figure
out if you have normally distributed data. For example, you could look at the distribution
of your data. If your data is approximately normal, then you can
use parametric statistical tests.
Basic Assumptions:
The study is better represented by the median.
You have ordinal data, ranked data, or outliers that you can’t remove.
Types:
Chi-square Test
→ The Chi Square statistic is commonly used for testing relationships between
categorical variables.
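A minimal sketch of a chi-square test of independence in SciPy, with an assumed 2 x 2 contingency table of counts:

# Chi-square test of independence between two categorical variables.
from scipy.stats import chi2_contingency

observed = [[30, 10],   # illustrative counts (assumed): rows = gender,
            [20, 40]]   # columns = preference for product A vs product B

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p_value:.4f}")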
Mann-Whitney U Test
The Mann Whitney U test, sometimes called the Mann Whitney Wilcoxon Test or the
Wilcoxon Rank Sum Test, is used to test whether two samples are likely to derive from
the same population (i.e., that the two populations have the same shape).
→ You have one dependent variable that is measured at the continuous or ordinal level.
→ You have one independent variable that consists of two categorical, independent
groups (i.e., a dichotomous variable).
→ For two levels, consider using the Mann Whitney U Test instead
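A minimal sketch of the Mann-Whitney U test in SciPy, with assumed scores for two independent groups:

# Mann-Whitney U test: compares two independent groups on an ordinal or
# non-normal continuous variable.
from scipy.stats import mannwhitneyu

group_a = [12, 15, 14, 10, 18, 20, 11]   # illustrative scores (assumed)
group_b = [22, 25, 19, 24, 28, 21, 26]

u_stat, p_value = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")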
Rank-difference Methods
→ This method is applied to the ordinal set of numbers, which can be arranged in
order.
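The rank-difference method is commonly identified with Spearman's rank-order correlation; a minimal SciPy sketch with assumed rankings from two judges:

# Spearman rank-order (rank-difference) correlation between two sets of ranks.
from scipy.stats import spearmanr

judge_1 = [1, 2, 3, 4, 5, 6]   # illustrative rankings by two judges (assumed)
judge_2 = [2, 1, 4, 3, 6, 5]

rho, p_value = spearmanr(judge_1, judge_2)
print(f"rho = {rho:.3f}, p = {p_value:.4f}")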
Coefficient of Concordance
→ The coefficient of concordance (W) is a measure of correlation among more than two sets of rankings of events, objects, or individuals.
Median test
→ The median test is used to see if two groups (not necessarily of the same size) come from the same population.
Kruskal-Wallis H Test
→ The Kruskal-Wallis H test (sometimes also called the "one-way ANOVA on ranks") is a
rank-based nonparametric test that can be used to determine if there are statistically
significant differences between two or more groups of an independent variable on a
continuous or ordinal dependent variable.
→ The test is more commonly used when you have three or more levels.
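A minimal sketch of the Kruskal-Wallis H test in SciPy, with assumed scores for three independent groups:

# Kruskal-Wallis H test: non-parametric comparison of three or more
# independent groups on an ordinal or non-normal continuous variable.
from scipy.stats import kruskal

group_1 = [7, 8, 6, 9, 7]    # illustrative scores (assumed)
group_2 = [5, 4, 6, 5, 3]
group_3 = [9, 10, 8, 9, 11]

h_stat, p_value = kruskal(group_1, group_2, group_3)
print(f"H = {h_stat:.3f}, p = {p_value:.4f}")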
Wilcoxon Test
The Wilcoxon test is a nonparametric statistical test that compares two paired groups, and comes in two versions: the Rank Sum test and the Signed Rank test. The goal of the test is to determine if two or more sets of pairs are different from one another in a statistically significant manner.
→ The Wilcoxon test, which refers to either the Rank Sum test or the Signed Rank test,
is a nonparametric statistical test that compares two paired groups.
→ The test essentially calculates the difference between each set of pairs and analyzes
these differences.
→ compare two related samples, matched samples, or repeated measurements on a
single sample to assess whether their population mean ranks differ (i.e. it is a paired
difference test).
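A minimal sketch of the Wilcoxon signed-rank test in SciPy, with assumed before/after scores for the same subjects:

# Wilcoxon signed-rank test: compares two related (paired) measurements,
# e.g., the same subjects before and after a treatment.
from scipy.stats import wilcoxon

before = [72, 80, 65, 90, 78, 85, 70]   # illustrative paired scores (assumed)
after  = [75, 82, 70, 88, 84, 89, 76]

w_stat, p_value = wilcoxon(before, after)
print(f"W = {w_stat:.1f}, p = {p_value:.4f}")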
Friedman test
The Friedman test is the non-parametric alternative to the one-way ANOVA with
repeated measures. It is used to test for differences between groups when the
dependent variable being measured is ordinal.
→ The Friedman test is the non-parametric alternative to the one-way ANOVA with
repeated measures.
→ The Friedman test is a nonparametric test that compares three or more matched or
paired groups.
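A minimal sketch of the Friedman test in SciPy, with assumed ratings from the same subjects under three conditions:

# Friedman test: non-parametric alternative to repeated-measures ANOVA,
# comparing the same subjects under three or more conditions.
from scipy.stats import friedmanchisquare

condition_1 = [4, 5, 3, 4, 5, 4]   # illustrative ratings (assumed)
condition_2 = [2, 3, 2, 3, 2, 3]
condition_3 = [5, 5, 4, 5, 4, 5]

chi2_stat, p_value = friedmanchisquare(condition_1, condition_2, condition_3)
print(f"chi-square = {chi2_stat:.3f}, p = {p_value:.4f}")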
Regression
Regression analysis is a powerful statistical method that allows you to examine the
relationship between two or more variables of interest. While there are many types
of regression analysis, at their core they all examine the influence of one or more
independent variables on a dependent variable.
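A minimal sketch of simple linear regression in SciPy, with assumed data on hours studied and exam scores:

# Simple linear regression: how one independent variable predicts a dependent variable.
from scipy.stats import linregress

hours_studied = [1, 2, 3, 4, 5, 6, 7, 8]         # illustrative data (assumed)
exam_score    = [52, 55, 61, 64, 70, 74, 79, 83]

result = linregress(hours_studied, exam_score)
print(f"slope = {result.slope:.2f}, intercept = {result.intercept:.2f}")
print(f"r = {result.rvalue:.3f}, p = {result.pvalue:.4f}")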
Research Design
• Simply stated, it is the framework, a blueprint for the research study which guides
the collection and analysis of data.
• The research design, depending upon the needs of the researcher may be a very
detailed statement or only furnish the minimum information required for planning the
research project.
Definitions:
Kerlinger (1986) defines research design as “the plan and structure of investigation so
conceived as to obtain answers to research questions.“
Rosenthal and Rosnow (1991) define design as a "blueprint that provides the scientist
with a detailed outline or plan for the collection and analysis of data."
A good design is often characterized by adjectives like flexible, appropriate, efficient, economical, and so on.
Generally, the design which minimizes bias and maximizes the reliability of the data
collected and analysed is considered a good design.
The design which gives the smallest experimental error is supposed to be the best
design in many investigations.
A research design appropriate for a particular research problem, usually involves the
consideration of the following factors:
Within-Group Design
Is a type of experimental design in which all participants are exposed to every
treatment or condition. The term "treatment" is used to describe the different
levels of the independent variable, the variable that's controlled by the
experimenter.
Assumptions:
• In this design the subjects are tested within the group
• Pre-tests and post-tests can be adopted in this design
• Randomization of the subjects can be followed.
• Helps to determine the homogeneity of the group.
• It is a good design for small-group studies; as the group size increases this design becomes very difficult to adopt.
One-shot pre-experimental design
A type of pre-experimental design where a single group of test units is exposed to
an experimental treatment and a single measurement is taken afterwards. It only
measures the post-test results and does not use a control group.
Assumptions:
• Treatment is given to a single group and its effect is noted through the
observation
• No pre-testing is done in this design
• The design is very rarely used as there is no control over the extraneous
variable
• Principle of randomization is not used in this design
• Design: X O
• O = observation
• X = treatment
Static Group Design
In this design of experiments, a between-group design is an experiment that has
two or more groups of subjects each being tested by a different testing factor
simultaneously.
Assumptions:
• In static group design, two intact groups are taken and only one group is
given treatment.
• Whereas the other serves as control group
• After the treatment is over both the groups are tested
• There is no pre-testing for both the groups.
• There is no check on the initial conditions of both the intact groups.
Group 1 X O1
Group 2 O2
Post-test Random Group Design
A type of true experimental design where test units are randomly allocated to an
experimental group and a control group. The experimental group is exposed to a
treatment and both groups are measured afterwards.
Assumptions:
• This design is similar to the static group design, except that the investigator uses randomization before the treatment.
• The whole lot of subjects is divided into two groups in random order.
• One of the groups is given treatment whereas the other group serves as control.
• This design takes care of internal threats to validity.
R X O1
R    O2
• ANOVA and t test can be used
Pre-test-Post-test Randomized Group Design
A pretest post-test design is an experiment where measurements are taken both
before and after a treatment. The design means that you are able to see the
effects of some type of treatment on a group. Pretest post-test designs may be
quasi-experimental, which means that participants are not assigned randomly.
Assumptions:
• In this design, two random samples are drawn and pre-tested on the criterion variable.
• One of the samples is given the treatment, and observations are obtained on the experimental and control variables.
• The analysis of covariance is used for testing the hypothesis.
• Randomization is done in this design.
• The internal validity is strong.
R O1 X O2
R O3    O4
Solomon Four Group Design
What is the basic difference between the classical experimental design and the Solomon four-group design? The Solomon four-group design repeats the classical design but adds groups that are not pretested.
Assumptions:
• This design is an extension of the pre-test-post-test design.
• In this design, four groups are randomly selected by the researcher.
• Two groups act as experimental groups whereas two groups serve as control groups.
• The first experimental group is pre-tested (O1) and, after the treatment is over, post-tested (O2).
• The first control group is pre-tested (O3) and post-tested (O4) without receiving the treatment.
• The second experimental group is given the treatment (X) without pre-testing and, after the treatment is over, it is post-tested (O5).
• The second control group is neither given any treatment nor pre-tested; once the treatment is over in the experimental groups, this control group is post-tested (O6).
R O1 X O2
R O3 O4
R X O5
R O6
Assumptions:
• After the treatment is over, the observations are obtained to compare the three groups.
R X1 O1
R X2 O2
R X3 O3
Assumptions:
• Randomized block design is a better-managed design than the completely randomized design.
• In this design the subjects are divided into homogeneous groups; this is done to ensure blocking.
X1 O1 O1 O1
X2 O2 O2 O2
X3 O3 O3 O3
Factorial Design
Factorial design involves having more than one independent variable, or factor, in a
study. Factorial designs allow researchers to look at how multiple factors affect a
dependent variable, both independently and together. Factorial design studies are
named for the number of levels of the factors.
Assumptions:
• In a factorial design, one factor has n levels and another factor has m levels.
• Thus n × m treatment combinations are required to be investigated, and the design is known as an n × m design.
              Factor A
              A1        A2
Factor B  B1  A1B1      A2B1
          B2  A1B2      A2B2
Assumptions:
• These are not true experimental designs, as complete randomization is not possible.
• In this design the same subjects are tested over a period of time at different intervals.
• The effects at the different intervals can be known, which helps in improving the techniques.
Correlational Design
Assumptions:
• This design uses a mixed approach of qualitative and quantitative methods
• Nested design is a research design in which levels of one factor (say, Factor B) are
hierarchically subsumed under (or nested within) levels of another factor (say, Factor A).
As a result, assessing the complete combination of A and B levels is not possible in
a nested design.
Cohort Design
Cohort studies are a type of medical research used to investigate the causes of disease
and to establish links between risk factors and health outcomes.
Strengths
Weaknesses
· Prone to confounding.
Ex Post Facto Design
An ex post facto research design is a method in which groups with qualities that already exist are compared on some dependent variable. Also known as "after the fact" research, an ex post facto design is considered quasi-experimental because the subjects are not randomly assigned - they are grouped based on a particular characteristic or trait. Although differing groups are analyzed and compared with regard to independent and dependent variables, it is not a true experiment because it lacks random assignment. The assignment of subjects to different groups is based on whichever variable is of interest to the researchers.
Review of Literature
Meaning:
A good literature review is not simply a list describing or summarizing several articles; it is discursive prose which proceeds to a conclusion by reason and argument. A good literature review shows signs of synthesis and understanding of the topic.
A literature review is a search and evaluation of the available literature in your given subject or chosen topic area. It documents the state of the art with respect to the subject or topic you are writing about.
The literature review surveys scholarly articles, books, and other sources relevant to a
particular area of research. The review should enumerate, describe, summarize,
objectively evaluate and clarify this previous research. It should give a theoretical base
for the research and help you (the author) determine the nature of your research. The
literature review acknowledges the work of previous researchers, and in so doing,
assures the reader that your work has been well conceived. It is assumed that by mentioning a previous work in the field of study, the author has read, evaluated, and assimilated that work into the work at hand.
A literature review may consist of simply a summary of key sources, but in the social
sciences, a literature review usually has an organizational pattern and combines both
summary and synthesis, often within specific conceptual categories. A summary is a
recap of the important information of the source, but a synthesis is a re-organization, or
a reshuffling, of that information in a way that informs how you are planning to
investigate a research problem. The analytical features of a literature review might:
Give a new interpretation of old material or combine new with old interpretations,
Depending on the situation, evaluate the sources and advise the reader on the
most pertinent or relevant research, or
Usually in the conclusion of a literature review, identify where gaps exist in how a
problem has been researched to date.
Initially we can say that a review of the literature is important because without it, you will
not acquire a comprehensive understanding of the topic, of what has already been done
on it.
According to Jackson (1980), effective literature reviews should do the following (any of
these may be more or less helpful for your own purposes):
If you are writing the literature review section of a dissertation or research paper, you will
search for literature related to your research problem and questions.
If you are writing a literature review as a stand-alone assignment, you will have to
choose a focus and develop a central question to direct your search. Unlike a dissertation
research question, this question has to be answerable without collecting original data.
You should be able to answer it based only on a review of existing publications.
Make sure the sources you use are credible, and make sure you read any landmark
studies and major theories in your field of research.
You can find out how many times an article has been cited on Google Scholar – a high
citation count means the article has been influential in the field, and should certainly be
included in your literature review.
The scope of your review will depend on your topic and discipline: in the sciences you
usually only review recent literature, but in the humanities you might take a long
historical perspective (for example, to trace how a concept has changed in meaning over
time).
This step will help you work out the structure of your literature review and (if applicable)
show how your own research will contribute to existing knowledge.
Depending on the length of your literature review, you can combine several of these
strategies (for example, your overall structure might be thematic, but each theme is
discussed chronologically).
Chronological
The simplest approach is to trace the development of the topic over time. However, if
you choose this strategy, be careful to avoid simply listing and summarizing sources in
order.
Try to analyze patterns, turning points and key debates that have shaped the direction of
the field. Give your interpretation of how and why certain developments occurred.
Thematic
If you have found some recurring central themes, you can organize your literature review
into subsections that address different aspects of the topic.
For example, if you are reviewing literature about inequalities in migrant health
outcomes, key themes might include healthcare policy, language barriers, cultural
attitudes, legal status, and economic access.
Methodological
If you draw your sources from different disciplines or fields that use a variety of research
methods, you might want to compare the results and conclusions that emerge from
different approaches. For example:
Theoretical
A literature review is often the foundation for a theoretical framework. You can use it to
discuss various theories, models, and definitions of key concepts.
You might argue for the relevance of a specific theoretical approach, or combine various
theoretical concepts to create a framework for your research.
SAMPLING L11
To understand the need and importance of sampling for conducting Research.
Sampling
Meaning
Sampling is a process used in statistical analysis in which a predetermined number of
observations are taken from a larger population. The methodology used to sample from
a larger population depends on the type of analysis being performed, but it may include
simple random sampling or systematic sampling.
SAMPLING FUNDAMENTALS
Finite population: one whose size is limited and whose members can be counted. Example: all primary teachers, college teachers, university students, all housewives, etc.
Infinite population: one whose size is unlimited and whose members cannot be counted. Example: fishes in a river.
When we work out certain measures such as mean, median, mode or the like from samples, they are called statistic(s), for they describe the characteristics of a sample. But when such measures describe the characteristics of a population, they are known as parameter(s). For instance, the population mean (μ) is a parameter, whereas the sample mean is a statistic.
⮊ Sampling unit may be a geographical one such as state, district, village, etc., or a
construction unit such as house, flat, etc., or it may be a social unit such as family, club,
school, etc., or it may be an individual. The researcher will have to decide one or more of
such units that he has to select for his study.
5. Source list: It is also known as ‘sampling frame’ from which sample is to be drawn. It
contains the names of all items of a universe (in case of finite universe only).
⮊ If source list is not available, researcher has to prepare it. Such a list should be
comprehensive, correct, reliable and appropriate. It is extremely important for the source
list to be as representative of the population as possible.
6. Budgetary constraint: Cost considerations, from practical point of view, have a major
impact upon decisions relating to not only the size of the sample but also to the type of
sample. This fact can even lead to the use of a non-probability sample.
CHARACTERISTICS OF A GOOD SAMPLE DESIGN
(b) Sample design must be such which results in a small sampling error.
(c) Sample design must be viable in the context of funds available for the research study.
(d) Sample design must be such so that systematic bias can be controlled in a better
way.
(e) Sample should be such that the results of the sample study can be applied, in
general, for the universe with a reasonable level of confidence.
(ii) Number of classes proposed: If many class-groups (groups and sub-groups) are to be
formed, a large sample would be required because a small sample might not be able to
give a reasonable number of items in each class-group.
(iii) Nature of study: If items are to be intensively and continuously studied, the sample
should be small. For a general survey the size of the sample should be large, but a small
sample is considered appropriate in technical surveys.
(iv) Type of sampling: Sampling technique plays an important part in determining the
size of the sample. A small random sample is apt to be much superior to a larger but
badly selected sample.
1. Sampling can save time and money. A sample study is usually less expensive than a
census study and produces results at a relatively faster speed.
2. Sampling may enable more accurate measurements for a sample study is generally
conducted by trained and experienced investigators.
3. Sampling remains the only way when population contains infinitely many members.
4. Sampling remains the only choice when a test involves the destruction of the item
under study.
5. Sampling usually enables to estimate the sampling errors and, thus, assists in
obtaining information concerning some characteristic of the population.
Data collection is a systematic method of collecting and measuring data gathered from
different sources of information in order to provide answers to relevant questions. An
accurate evaluation of collected data can help researchers predict future phenomena
and trends.
Data collection can be classified into two, namely: primary and secondary data. Primary
data are raw data i.e. fresh and are collected for the first time. Secondary data, on the
other hand, are data that were previously collected and tested.
• The system of data collection is based on the type of study being conducted.
Depending on the researcher’s research plan and design, there are several ways data can
be collected.
• The most commonly used methods are: published literature sources, surveys (email
and mail), interviews (telephone, face-to-face or focus group), observations, documents
and records, and experiments.
1. Literature sources
• This involves the collection of data from already published text available in the
public domain. Literature sources can include: textbooks, government or private
companies’ reports, newspapers, magazines, online published papers and articles.
2. Surveys
• There are several ways by which this information can be collected. Most notable
ways are: web-based questionnaire and paper-based questionnaire (printed form). The
results of this method of data collection are generally easy to analyse.
3. Interviews
• Interview is a qualitative method of data collection whose results are based on
intensive engagement with respondents about a particular study. Usually, interviews are
used in order to collect in-depth responses from the professionals being interviewed.
4. Observations
• For instance, an organization may want to understand why there are lots of negative reviews and complaints from customers about its products or services. In this case, the organization will look into records of its products or services and recorded interactions of employees with customers.
6. Experiments
• In experimental research, data are mostly collected based on the cause and effect of
the two variables being studied. This type of research are common among medical
researchers, and it uses quantitative research approach.
• Quantitative methods are cheaper to apply and they can be applied within shorter
duration of time compared to qualitative methods. Moreover, due to a high level of
standardisation of quantitative methods, it is easy to make comparisons of findings.
• Secondary data is a type of data that has already been published in books,
newspapers, magazines, journals, online portals etc. There is an abundance of data
available in these sources about your research area in business studies, almost regardless
of the nature of the research area. Therefore, application of appropriate set of criteria to
select secondary data to be used in the study plays an important role in terms of
increasing the levels of research validity and reliability.
• These criteria include, but are not limited to, the date of publication, the credentials of the author, the reliability of the source, the quality of discussions, the depth of analyses, and the extent of the contribution of the text to the development of the research area.
Types of Sampling
Probability Sampling
→ Each element or individual in the population must have an equal chance of being included in the sample
Simple Random Sampling
→ Each and every individual of the population has an equal chance of being included
Example: to select 10 out of 40 students, write their roll numbers on chits and draw 10 chits at random
Stratified Sampling
→ The population is divided into two or more strata, which may be based on criteria such as age or gender
→ These divided populations are called subpopulations; they do not overlap and together constitute the whole population
Proportionate
• Allocation is based on the known characteristics of the population
Disproportionate
• Samples can be assigned even if all characteristics are equally present
• Sampling continues until the desired size of the sample is achieved
→ When larger geographical areas are to be covered this method becomes easier
Purposive (Judgment) Sampling
• The sample is handpicked, typically to be representative of the population
• The researcher makes prior judgments for selecting the elements of the sample
Quota sampling
→ Similar to proportionate stratified sampling, but here the elements are not randomly selected, whereas in proportionate stratified sampling they are
→ Can guarantee the inclusion of individuals from different strata of the population
Snowball Sampling
→ Snowball sampling becomes difficult when the sample size grows beyond about 100
Saturation Sampling
→ The investigator selects the individuals having similar traits in the population, such as doctors or engineers
Dense sampling
Double Sampling
→ As the name implies, a sample is drawn again from the same sample
Example: from a sample of 1,000, 300 are first selected and studied; later the remaining 700 are drawn and studied
Systematic Sampling
Example: every 7th phone number from the phone directory (yellow pages)
→ Equal chances may not be feasible for including all the elements from the population.
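A minimal Python sketch contrasting simple random sampling with systematic sampling, using an assumed population of 40 roll numbers:

# Simple random sampling vs. systematic sampling (illustrative population).
import numpy as np

population = np.arange(1, 41)         # e.g., roll numbers 1-40 (assumed)
rng = np.random.default_rng(seed=7)   # fixed seed so the sketch is reproducible

# Simple random sampling: every element has an equal chance of selection.
srs = rng.choice(population, size=10, replace=False)

# Systematic sampling: pick a random start, then take every k-th element.
k = len(population) // 10             # sampling interval
start = rng.integers(0, k)
systematic = population[start::k]

print("Simple random sample:", sorted(srs))
print("Systematic sample:   ", list(systematic))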
Diagram Showing Sampling Methods
⮊ Computerized Methods
⮊ Sociometry
Fishbowl
Procedure
1. Select a Topic
Almost any topic is suitable for a Fishbowl discussion. The most effective prompts
(questions or texts) do not have one right answer or interpretation, but rather
allow for multiple perspectives and opinions. The Fishbowl strategy is excellent
for discussing dilemmas, for example.
2. Set Up the Room
A Fishbowl discussion requires a circle of chairs (“the fishbowl”) and enough room
around the circle for the remaining students to observe what is happening in the
“fishbowl.” Sometimes teachers place enough chairs for half of the students in the
class to sit in the fishbowl, while other times teachers limit the chairs further.
Typically, six to 12 chairs allows for a range of perspectives while still giving each
student an opportunity to speak. The observing students often stand around the
fishbowl.
3. Prepare for the Discussion
Like many structured conversations, Fishbowl discussions are most effective when
students have had a few minutes to prepare ideas and questions in advance.
There are many ways to structure a Fishbowl discussion. Sometimes teachers have half
the class sit in the fishbowl for ten to 15 minutes before announcing “Switch,” at which
point the listeners enter the fishbowl and the speakers become the audience. Another
common Fishbowl discussion format is the “tap” system, where students on the outside
of the fishbowl gently tap a student on the inside, indicating that they should switch
roles.
5. Debrief
After the discussion, you can ask students to reflect on how they think the
discussion went and what they learned from it. Students can also evaluate their
performance as listeners and as participants. They could also provide suggestions
for how to improve the quality of discussion in the future. These reflections can
be in writing, or they can be structured as a small- or large-group conversation.
1. Let’s assume that we have a population of 185 students and each student has
been assigned a number from 1 to 185. Suppose we wish to sample 5 students
(although we would normally sample more, we will use 5 for this example).
2. Since we have a population of 185 and 185 is a three digit number, we need to
use the first three digits of the numbers listed on the chart.
3. We close our eyes and randomly point to a spot on the chart. For this example,
we will assume that we selected 20631 in the first column.
4. We interpret that number as 206 (first three digits). Since we don’t have a
member of our population with that number, we go down to the next number
899 (89990). Once again we don’t have someone with that number, so we
continue at the top of the next column. As we work down the column, we find
that the first number to match our population is 100 (actually 10005 on the chart).
Student number 100 would be in our sample. Continuing down the chart, we see
that the other four subjects in our sample would be students 049, 082, 153, and
164.
5. Researchers use different techniques with these tables. Some researchers read
across the table using given sets (in our examples three digit sets). For our class,
we will use the technique I have described.
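The same kind of draw (5 students out of 185) can be made with a computer instead of a printed random number table; a minimal sketch:

# Randomly selecting 5 students from a population numbered 1-185.
import numpy as np

rng = np.random.default_rng()                      # random number generator
students = np.arange(1, 186)                       # students numbered 1 to 185
sample = rng.choice(students, size=5, replace=False)
print("Selected students:", sorted(sample))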
SAMPLING ERRORS
⮊ Sampling error occurs because researchers draw different subjects from the same population. Subjects have individual differences.
⮊ It is the deviation of the selected sample from the true characteristics, behaviors,
figures of the entire population.
⮊ A greater sample size will have a smaller standard error, because the sample is closer to the actual population itself.
1). Minimize the selection of the bias through random sampling: Random sampling is a
systematic approach for selecting a sample.
2). Increase the sample size: by the square-root formula (SE = σ/√n), the standard error is cut in half if the sample size is quadrupled (see the sketch below).
4). Sampling bias: a consistent error that arises due to the sample selection.
5). Replicate the study by taking the same measurement repeatedly, use more than one
group or multiple studies.
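A small sketch of the square-root relationship between sample size and standard error (SE = σ/√n), using an assumed population standard deviation:

# Standard error of the mean: SE = sigma / sqrt(n).
# Quadrupling the sample size halves the standard error.
import math

sigma = 12.0                  # assumed population standard deviation
for n in (25, 100, 400):
    se = sigma / math.sqrt(n)
    print(f"n = {n:4d}  ->  SE = {se:.2f}")
# Output: SE = 2.40, 1.20, 0.60 - each quadrupling of n cuts SE in half.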
HYPOTHESIS L14
To understand the hypothesis in order to formulate it for research.
HYPOTHESIS
TYPES OF HYPOTHESIS
☞ Simple Hypothesis
☞ Complex Hypothesis
☞ Empirical Hypothesis
☞ Logical Hypothesis
☞ Statistical Hypothesis
Simple hypothesis
Complex Hypothesis
Example: Overweight adults who 1) value longevity and 2) seek happiness are more likely
than other adults to 1) lose their excess weight and 2) feel a more regular sense of joy.
Empirical Hypothesis
An empirical hypothesis, or working hypothesis, comes to life when a theory is being put
to the test, using observation and experiment. It's no longer just an idea or notion. It's
actually going through some trial and error, and perhaps changing around those
independent variables.
Example: Roses watered with liquid Vitamin B grow faster than roses watered with liquid
Vitamin E. (Here, trial and error is leading to a series of findings.)
Null Hypothesis
A null hypothesis (H0) exists when a researcher believes there is no relationship between
the two variables, or there is a lack of information to state a scientific hypothesis. This is
something to attempt to disprove or discredit.
There is no significant change in my health during the times when I drink green tea only
or lemon tea only.
Alternative Hypothesis
Alternative hypothesis (H1) enters the scene. In an attempt to disprove a null hypothesis,
researchers will seek to discover an alternative hypothesis.
→ My health improves during the times when I drink green tea only, as opposed to
lemon tea only.
Logical Hypothesis
→ Cacti experience more successful growth rates than tulips on Mars. (Until we're able
to test plant growth in Mars' ground for an extended period of time, the evidence for
this claim will be limited and the hypothesis will only remain logical.)
Statistical Hypothesis
→ If you wanted to conduct a study on the life expectancy of Adivasi communities, you
would want to examine every single resident of Adivasi communities. This is not
practical. Therefore, you would conduct your research using a statistical hypothesis, or a
sample of the Adivasi communities’ population.
VARIABLES
A variable in research simply refers to a person, place, thing, or phenomenon that you
are trying to measure in some way. The best way to understand the difference between a
dependent and independent variable is that the meaning of each is implied by what the
words tell us about the variable you are using.
→ Variables are those attributes of events, objects, things and beings which can be measured.
TYPES OF VARIABLES
Qualitative variables
→ are variables that can be placed into distinct categories, according to some
characteristic or attribute.
For example, if subjects are classified according to gender (male or female), then the
variable gender is qualitative. Other examples of qualitative variables are religious
preference and geographic locations.
Quantitative variables
→ Quantitative variables are numerical and can be ordered or ranked. For example, the
variable age is numerical, and people can be ranked in order according to the value of
their ages.
Other examples of quantitative variables are heights, weights, and body temperatures.
→ By manipulating context
Levels of Measurement
Nature of Measurement
→ Rules are the procedures used to transform attributes (qualities) into numbers
→ Measurement is always concerned with certain attributes or features of the object.
3.1. a + b = b + a: in the process of addition, the order of the numbers makes no difference (commutativity).
3.2. If a = p and b = q, then a + b = p + q: adding identical numbers yields identical sums.
3.3. (a + b) + c = a + (b + c): in the process of addition, the order in which objects or numbers are combined makes no difference (associativity).
Scale of Measurement
Nominal Scale
In nominal measurement the numerical values just “name” the attribute uniquely. No
ordering of the cases is implied. For example, jersey numbers in basketball are measures
at the nominal level. A player with number 30 is not more of anything than a player with
number 15, and is certainly not twice whatever number 15 is.
Characteristics:
→ The nominal scale is the lowest form of measurement
→ The nominal scale is used to name, identify or classify persons, objects, groups, etc.
→ Classifications are an example of the nominal scale of measurement
→ In a nominal scale, members of any two groups are never equivalent, but all members of any one group are equivalent
Examples: Hindu, Christian, Muslim, Sikh
Girls or Boys
Rural or Urban
→ Permissible statistical operations are counting (frequency), percentage, proportion, mode, and the coefficient of contingency; arithmetic operations such as addition, subtraction, multiplication and division are not meaningful for nominal data
→ The drawback of the nominal scale is that it is the most elementary and crude form of measurement.
Ordinal Scale of Measurement
In ordinal measurement the attributes can be rank-ordered, but the distances between the attributes do not have any meaning (for example, class ranks).
Interval Scale of Measurement
In interval measurement the distance between attributes does have meaning. For example, when we measure temperature (in Fahrenheit), the distance from 30-40 is the same as the distance from 70-80. The interval between values is interpretable.
Characteristics:
→ The zero point does not indicate the real absence of the property being measured
Ratio Scale of Measurement
In ratio measurement there is always an absolute zero that is meaningful. This means that you can construct a meaningful fraction (or ratio) with a ratio variable.
Characteristics:
Examples:
Magnitude:
Equal intervals:
Absolute Zero:
An absolute zero is said to exist when nothing of the property being measured exists
Functions of Measurement
→ In selection
→ In Classification
→ In Research
Problems of Measurement
1. Indirectness of Measurement
2. Incomplete of Measurement
3. Relativity of Measurement
4. Errors in Measurement
Scaling Methods
⮊ Scaling methods are procedures through which stimuli or individuals are sorted according to some known attributes or characteristics
⮊ Absolute threshold (Reiz Limen, RL) refers to the minimal stimulus value which produces a response 50% of the time
Method of Limits
→ The stimulus is presented in two modes: an increasing mode and a decreasing mode
→ In each trial, one standard and one variable stimulus are presented to the subject
→ In this method the subject is provided with a standard and a comparable stimulus; he/she is required to adjust the comparable stimulus until it matches the standard stimulus
→ The difference between the standard and the point of subjective equality gives the constant error (constant error = PSE − standard stimulus)
→ If the PSE is smaller, it is understood that the subject is underestimating the stimulus (negative sign)
→ If the PSE is larger, it is understood that the subject is overestimating the stimulus (positive sign)
Weber's Law
→ The relationship between the size of the standard stimulus and the size of JND is
technically known as Weber Law
→ “The law states that for a given stimulus dimension the DL bears a constant ratio to
the dimension (standard stimulus) at which DL was measured.”
ΔR / R = K, i.e., DL / standard stimulus = constant (the Weber fraction)
Fechner's Law
→ Psychological sensation is the sum of all the JND steps above its origin (the absolute threshold); this leads to Fechner's law, S = K log R
→ Example:
If the Weber fraction is 1/4 and one stimulus value is 20 units, the JND at that stimulus should be 1/4 of 20, i.e., (0.25 × 20) = 5 units
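A small sketch of Weber's law (DL / standard stimulus = K), using the assumed Weber fraction of 1/4 from the example above:

# Weber's law: the just noticeable difference (JND, or DL) is a constant
# fraction K of the standard stimulus R, i.e. DL / R = K.
K = 0.25                       # assumed Weber fraction (1/4, as in the example)

for standard in (20, 40, 80):
    jnd = K * standard         # size of the JND at this standard stimulus
    print(f"standard = {standard:3d}  ->  JND = {jnd:.1f}")
# For a standard of 20 units the JND is 0.25 * 20 = 5 units.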
Psychological Tests
Psychological tests are used to assess a variety of mental abilities and attributes,
including achievement and ability, personality, and neurological functioning.
Characteristics of a Test
Assessment, on the other hand, is more comprehensive and wider; it includes the entire process of compiling and synthesizing information.
Test Construction
Item Writing
Characteristics of Item Writing
Norms Development
⮊ When the raw scores are compared to the norms, a scientific meaning emerges
Types of Norms
Reliability
The term reliability in psychological research refers to the consistency of a research study
or measuring test. For example, if a person weighs themselves during the course of a day
they would expect to see a similar reading. Scales which measured weight differently
each time would be of little use.
Characteristics:
⮊ Test Re-test Reliability: Is a single form of test administered twice on the same
sample with a reasonable time gap.
Split-half Reliability
Inter-rater Reliability
⮊ Used to assess the degree to which different raters give consistent estimates of the
same phenomenon
Whenever you use humans as a part of your measurement procedure, you have to worry
about whether the results you get are reliable or consistent. People are notorious for
their inconsistency. We are easily distracted. We get tired of doing repetitive tasks.
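A minimal sketch of two common reliability estimates in Python, using assumed illustrative scores:

# Test-retest reliability: correlate scores from the same test given twice.
from scipy.stats import pearsonr

time_1 = [20, 25, 30, 28, 22, 27, 35, 31]   # illustrative scores (assumed)
time_2 = [22, 24, 31, 27, 23, 28, 34, 30]
r_test_retest, _ = pearsonr(time_1, time_2)

# Split-half reliability: correlate odd-item and even-item half scores,
# then apply the Spearman-Brown correction for full test length.
odd_half  = [10, 12, 15, 14, 11, 13, 18, 16]
even_half = [10, 13, 15, 14, 11, 14, 17, 15]
r_half, _ = pearsonr(odd_half, even_half)
r_split_half = (2 * r_half) / (1 + r_half)   # Spearman-Brown formula

print(f"test-retest r = {r_test_retest:.3f}")
print(f"split-half reliability = {r_split_half:.3f}")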
Case Study
A case study is an in-depth study of one person, group, or event. Much of Freud's
work and theories were developed through the use of individual case studies. Some
great examples of case studies in psychology include Anna O, Phineas Gage, and Genie.
• In a case study, nearly every aspect of the subject's life and history is analyzed to
seek patterns and causes of behavior.
• The hope is that learning gained from studying one case can be generalized to
many others. Unfortunately, case studies tend to be highly subjective and it is
sometimes difficult to generalize results to a larger population.
• One of the greatest advantages of a case study is that it allows researchers to
investigate things that are often difficult to impossible to replicate in a lab. The case
study of Genie, for example, allowed researchers to study whether language could be
taught even after critical periods for language development had been missed.
• In Genie's case, her horrific abuse had denied her the opportunity to learn a
language at critical points in her development. This is clearly not something that
researchers could ethically replicate, but conducting a case study on Genie allowed
researchers the chance to study otherwise impossible to reproduce phenomena.
Types
There are a few different types of case studies that psychologists and other researchers
might utilize:
• Descriptive case studies: These involve starting with a descriptive theory. The
subjects are then observed and the information gathered is compared to the pre-
existing theory.
• Explanatory case studies: These are often used to do causal investigations. In other
words, researchers are interested in looking at factors that may have actually caused
certain things to occur.
• Exploratory case studies: These are sometimes used as a prelude to further, more
in-depth research. This allows researchers to gather more information before developing
their research questions and hypotheses.
• Instrumental case studies: These occur when the individual or group allows
researchers to understand more than what is initially obvious to observers.
• Intrinsic case studies: This type of case study is when the researcher has a personal interest in the case. Jean Piaget's observations of his own children are good examples of how an intrinsic case study can contribute to the development of a psychological theory.
• There are also different methods that can be used to conduct a case study,
including prospective and retrospective case study methods.
• Prospective case study methods are those in which an individual or group of people
is observed in order to determine outcomes. For example, a group of individuals might
be watched over an extended period of time to observe the progression of a particular
disease.
• There are a number of different sources and methods that researchers can use to
gather information about an individual or group. The six major sources that have been
identified by researchers are:
• Archival records: Census records, survey records, and name lists are examples of
archival records.
• Direct observation: This strategy involves observing the subject, often in a natural
setting. While an individual observer is sometimes used, it is more common to utilize a
group of observers.
• Documents: Letters, newspaper articles, administrative records, etc are the types of
documents often used as sources.
• Interviews: Interviews are one of the most important methods for gathering information in case studies. An interview can involve structured survey-type questions or more open-ended questions.
• Physical artifacts: Tools, objects, instruments and other artifacts are often observed
during a direct observation of the subject.
Interview Methods
Interview is one of the popular methods of research data collection. The term ‘interview’ can be split into two parts, ‘inter’ and ‘view’. The essence of an interview is that one mind tries to read the other: the interviewer tries to assess the interviewee in terms of the aspects studied or issues analyzed.
• To elicit information.
• It has definite values for diagnosis of emotional problems and for therapeutic
treatments.
• It is one of the major bases upon which counseling procedures are carried out.
• There are different types of interviews used in research data collection. An interview is either structured or unstructured, depending upon whether a formal questionnaire has been formulated and the questions asked in a prearranged order or not. An interview is also either direct or indirect, depending on whether the purposes of the questions asked are plainly stated or intentionally disguised. Cross-classifying these two characteristics provides four different types of interviews. That is, an interview may be structured-direct, unstructured-direct, structured-indirect, or unstructured-indirect.
Structured-Direct Interview:
• The usual type of interview conducted during a consumer survey to obtain
descriptive information is one using a formal questionnaire consisting of non-disguised
questions, a questionnaire designed to “get the facts”. If the marketing research manager of a television set manufacturer wants to find out how many and what kinds of people
prefer various styles of television cabinets, for example, he may have a set of questions
drawn up that asks for these facts directly. Assuming that personal interviewing is being
used, each interviewer will be instructed to ask the questions in the order given on the
questionnaire and to ask only those questions. The resulting interviews will be
structured-direct in nature.
Unstructured-Direct Interview:
• Focus-Group Interviews: Perhaps the best-known and most widely used type of
indirect interview is the one conducted with a focus group. A focus-group interview is
one in which a group of people jointly participate in an unstructured-indirect interview.
The group, usually consisting of 8 to 12 people, is generally selected purposively to
include persons who have a common background or similar buying or use experience
that relates to the problem to be researched. The interviewer, moderator, as he or she is
more often called, attempts to focus the discussion on the problem areas in a relaxed,
nondirected manner. The objective is to foster involvement and interaction among the group members during the interview that will lead to spontaneous discussion and the disclosure of attitudes, opinions, and information on present or prospective buying and use behavior.
• The Personal Interview: As the name implies, the personal interview consists of an
interviewer asking questions of one or more respondents in a face-to-face situation. The
interviewer’s role is to get in touch with the respondent(s), ask the desired questions,
and to record the answers obtained. The recording of the information obtained may be
done either during or after the interview. In either case, it is a part of the interviewer’s
responsibility to ensure that the content of the answers is clear and unambiguous and
that it has been recorded correctly.
Observation Method
• The versatility of the method makes it an indispensable primary source of data and
a supplement to other methods.
Directness
• The main advantage of observation is its directness. We can collect data at the time they occur. The observer does not have to ask people about their behavior or rely on reports from others.
• He or she can simply watch as individuals act and speak. While survey respondents may have a hazy or lapsed memory of events that occurred in the distant past, the observer is studying events as they occur.
Natural environment
• Whereas other data collection techniques introduce artificiality into the research
environment, data collected in an observation study describe the observed phenomena
as they occur in their natural settings.
Longitudinal analysis
Non-verbal behavior
Lack of control
• Despite the advantage of the natural environment, the observation study has little control over extraneous variables that may affect the data.
• The presence of a stranger (the observer) and the error involved in human
observation and the recording of data, which may remain out of control of the
observer, are likely to bias the observations to a great extent.
Difficulties in quantification
• Because observational studies are generally conducted in-depth, with data that are
often subjective and difficult to quantify, the sample size is usually kept at a minimum.
• Also, the in-depth nature of observation studies generally requires that they be conducted over a more extended period than the survey method or experiments. This feature tends to limit the size of the sample.
• This technique can generate either quantitative or qualitative data but tends to be
used more for small-scale exploratory studies than for large-scale quantitative studies.
This is because it usually requires
Technique of Observation
• When an observation study is conducted with either of the first two approaches described below, we call it a non-participant observation study.
• Participant observation: The observer takes part in the situation he or she observes.
Direct observation
• Direct observation refers to the situation when the observer remains physically
present and personally monitors what takes place.
• This approach is very flexible because it allows the observer to react to and report
subtle aspects of events as they occur.
• During the act of observation, the observer is free to change the focus of
observation, concentrate on unexpected events, or even change the place of observation
if the situation demands.
Indirect observation
• Indirect observation occurs when the recording of events is done by mechanical, photographic, or electronic means rather than by an observer who is physically present.
• For example, a special camera may be set in a department store to study customers’ or employees’ movements.
• The second approach of observation concerns whether the presence of the observer is known (overt) or unknown (covert) to the subjects. In an overt study, the observer remains visible to the subjects, and the subjects are aware that they are being observed.
• In a covert study, on the other hand, subjects are unaware that they are being
observed.
• The major problem with the overt study is that it may be reactive. That is, it may
make the subjects ill at ease and cause them to act differently than they would if they
were not being observed.
• The covert study uses a concealment approach where the observers shield
themselves from the object of their observations.
• Often technical means are used, such as one-way mirrors, hidden cameras, or
microphones.
• This method reduces the risk of observer bias but brings up a question of ethical
issues in the sense that hidden observation is a form of spying.
Participant observation
• The third approach of data collection in natural settings is through participant
observation, which refers to an observation in which an observer gains firsthand
knowledge by being in and around the social setting that is being investigated.
• With this method, the observer joins in the daily life of the group or organization he
is studying.
• He watches what happens to the members of the community and how they behave,
and he also engages in conversations with them to find out their reactions to and
interpretations of the events that have occurred.
• Prolonged and personal interaction with the subjects of the research is the prime
advantage of participant observation.
• Extended contact with the subjects helps them feel comfortable in the participant
observer’s presence. The observer’s task is to place himself in the best position for
getting a complete and unbiased picture of the life of the community, which he is
observing.
• To ensure this, the observer needs to learn the language, habits, work patterns, leisure activities, and other aspects of their daily life. In participatory research, the researcher assumes either a complete participant role or a participant-as-observer role.
Controlled Observation
• Rather than writing a detailed description of all behavior observed, it is often easier
to code behavior according to a previously agreed scale using a behavior schedule (i.e.
conducting a structured observation).
The researcher systematically classifies the behavior they observe into distinct categories.
Coding might involve numbers or letters to describe a characteristic, or use of a scale to
measure behavior intensity. The categories on the schedule are coded so that the data
collected can be easily counted and turned into statistics.
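To make the idea of a coded behaviour schedule concrete, here is a minimal Python sketch (the category codes and observation records are purely hypothetical, not taken from these notes) showing how coded observations from a structured observation can be tallied into simple counts and percentages.

```python
# A minimal sketch of turning a coded behaviour schedule into counts.
# The codes below are hypothetical: "A" = aggressive act,
# "C" = cooperative act, "W" = withdrawal.
from collections import Counter

# Each entry is the code assigned to one observed behaviour
# during a structured (controlled) observation session.
observed_codes = ["C", "A", "C", "W", "C", "A", "C", "C", "W", "C"]

counts = Counter(observed_codes)   # tally each category
total = sum(counts.values())

for code, n in counts.most_common():
    print(f"{code}: {n} occurrences ({100 * n / total:.0f}%)")
```

Once behaviour is reduced to counts per category in this way, the data can be summarized, compared across observation sessions, or fed into standard statistical tests.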
Role Play
Role playing was developed by Jacob Moreno, a Viennese psychologist who contended
that people could gain more from acting out their problems than from talking about
them. This method requires a protagonist (the client whose problems are being acted
out); auxiliary egos (group members who assume the roles of other people in the
protagonist's life); an audience (other group members who observe and react to the
drama); and a director (the therapist). The protagonist selects an event from his or her
life and provides the information necessary for it to be reenacted. Although every detail
of the event cannot be reproduced, the reenactment can be effective if it captures the
essence of the original experience. The group members who serve as auxiliary egos
impersonate significant people from the protagonist's past or present, following the
protagonist's instructions as closely as possible. Techniques used in the reenactment
may include role reversal, doubling, mirror technique, future projection, and dream work.
• A variation on the theme of role playing is called Fixed Role Therapy. In fixed role
therapy you act as though you have certain characteristics that you aspire to have, but
don't currently have. For a period of time set by yourself, you pretend to have these
desired characteristics as you go about your life and interact with people. For example, if
you are a shy person, you act as though you are more outgoing. The purpose of fixed
role therapy is not to help you develop a fake personality, but rather to allow you the
experience (and practice) of living your life from another perspective which you would
normally never consider. The artificiality of the task tends to free people up to take it on.
Though they might not be able to be outgoing on their own, they are able to do it when
it is prescribed play acting. Having acted out such a fake fixed role, people then have the
experience they need to integrate desirable aspects of that role into their normal selves.
In other words, having play acted at being outgoing, people now know how to be more
outgoing within their own personalities and feel more comfortable doing so.
Here we will understand how to conduct meta-analysis and its usefulness for conducting research.
Here we will also understand the approach of Thematic Analysis for Qualitative Research.
Meta-analysis
• Meta-analysis is the statistical procedure for combining data from multiple studies.
When the treatment effect (or effect size) is consistent from one study to the next, meta-
analysis can be used to identify this common effect.
Characteristics:
• The validity of the meta-analysis depends on the quality of the systematic review on
which it is based.
• Good meta-analyses aim for complete coverage of all relevant studies, look for
the presence of heterogeneity, and explore the robustness of the main findings using
sensitivity analysis.
• The precision with which the size of any effect can be estimated depends to a large
extent on the number of patients studied.
Other Characteristics
• Location of studies
• Quality assessment
• Heterogeneity
A clinical research question is identified and a hypothesis proposed. The likely clinical
significance is explained and the study design and analytical plan are justified.
• Once studies are selected for inclusion in the meta-analysis, summary data or
outcomes are extracted from each study.
• In addition, sample sizes and measures of data variability for both intervention and
control groups are required.
• Depending on the study and the research question, outcome measures could
include numerical measures or categorical measures.
• Having assembled all the necessary data, the fourth step is to calculate appropriate
summary measures from each study for further analysis.
• These measures are usually called Effect Sizes and represent the difference in
average scores between intervention and control groups. For example, the difference in
change in blood pressure between study participants who used drug X compared with
participants who used a placebo.
• Since units of measurement typically vary across included studies, they usually need
to be ‘standardized’ in order to produce comparable estimates of this effect. When
different outcome measures are used, such as when researchers use different tests,
standardization is imperative.
• The final stage is to select and apply an appropriate model to compare Effect
Sizes across different studies.
• The most common models used are Fixed Effects and Random Effects models. Fixed
Effects models are based on the ‘assumption that every study is evaluating a common
treatment effect’.
• This means that the assumption is that all studies would estimate the same Effect
Size were it not for different levels of sample variability across different studies.
• In contrast, the Random Effects model ‘assumes that the true treatment effects in the individual studies may be different from each other’ and attempts to allow for this additional source of inter-study variation in Effect Sizes. Whether this latter source of variability is likely to be important is often assessed within the meta-analysis by testing for ‘heterogeneity’ (a simple worked sketch of these calculations follows this list).
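As a rough illustration of the steps above (extracting summary data, standardizing them into Effect Sizes, and pooling them under a Fixed Effects model with a heterogeneity check), here is a minimal Python sketch. The study numbers are invented for illustration only, and the formulas used (Cohen's d with its approximate variance, inverse-variance weighting, Cochran's Q) are one common way of doing this, not necessarily the exact procedure any particular meta-analysis would follow.

```python
# Minimal fixed-effect meta-analysis sketch with made-up summary data.
import math

# Hypothetical per-study summaries: mean, sd, n for intervention (1) and control (2).
studies = [
    {"m1": 12.0, "s1": 4.0, "n1": 40, "m2": 10.0, "s2": 4.2, "n2": 38},
    {"m1": 11.5, "s1": 3.8, "n1": 55, "m2": 10.4, "s2": 3.9, "n2": 60},
    {"m1": 13.1, "s1": 4.5, "n1": 30, "m2": 10.9, "s2": 4.1, "n2": 32},
]

def cohens_d(st):
    """Standardized mean difference (Effect Size) and its approximate variance."""
    pooled_sd = math.sqrt(((st["n1"] - 1) * st["s1"] ** 2 +
                           (st["n2"] - 1) * st["s2"] ** 2) /
                          (st["n1"] + st["n2"] - 2))
    d = (st["m1"] - st["m2"]) / pooled_sd
    var = (st["n1"] + st["n2"]) / (st["n1"] * st["n2"]) + \
          d ** 2 / (2 * (st["n1"] + st["n2"]))
    return d, var

effects = [cohens_d(s) for s in studies]
weights = [1 / v for _, v in effects]          # inverse-variance weights

# Fixed-effect pooled estimate: weighted average of the study Effect Sizes.
pooled = sum(w * d for (d, _), w in zip(effects, weights)) / sum(weights)

# Cochran's Q: a simple test statistic for heterogeneity across studies.
q = sum(w * (d - pooled) ** 2 for (d, _), w in zip(effects, weights))

print(f"Pooled effect size (fixed effect): {pooled:.3f}")
print(f"Cochran's Q = {q:.2f} on {len(studies) - 1} df (heterogeneity check)")
```

A large Q relative to its degrees of freedom would suggest that the studies are not all estimating the same underlying effect, which is when a Random Effects model is usually preferred.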
Advantages
Disadvantages
• Not all studies provide adequate data for inclusion and analysis
Limitations
• Was the quality of the individual studies assessed using an appropriate checklist of criteria?
• Was the search strategy comprehensive and likely to avoid bias in the studies identified for inclusion?
Thematic Analysis
• Briefly, thematic analysis (TA) is a popular method for analysing qualitative data in
many disciplines and fields, and can be applied in lots of different ways, to lots of
different datasets, to address lots of different research questions!
• Although TA is often presented as a method for, or about, psychology, that is not the case. The method has been widely used across the social, behavioural and more applied (clinical, health, education, etc.) sciences.
• One of the advantages of (our reflexive version of) TA is that it’s theoretically-
flexible. This means it can be used within different frameworks, to answer quite different
types of research question.
• It also suits questions relating to the construction of meaning, such as ‘How is race
constructed in workplace diversity training?’
• There are different ways TA can be approached – within our reflexive approach all
variations are possible:
• An inductive way – coding and theme development are directed by the content of
the data;
• A semantic way – coding and theme development reflect the explicit content of the
data;
• A latent way – coding and theme development report concepts and assumptions
underpinning the data;
• Although these phases are sequential, and each builds on the previous, analysis is
typically a recursive process, with movement back and forth between different phases.
These are not rules to follow rigidly, but rather a series of conceptual and practice-oriented ‘tools’ that guide the analysis to facilitate a rigorous process of data interrogation and engagement. With more experience (and smaller datasets), the analytic
process can blur some of these phases together.
• Familiarisation with the data | This phase involves reading and re-reading the
data, to become immersed and intimately familiar with its content.
• Coding | This phase involves generating succinct labels (codes!) that identify important features of the data that might be relevant to answering the research question. It involves coding the entire dataset, and after that, collating all the codes and all relevant data extracts together for later stages of analysis (a small illustrative sketch of this collation step follows this list).
• Generating initial themes | This phase involves examining the codes and collated
data to identify significant broader patterns of meaning (potential themes). It then
involves collating data relevant to each candidate theme, so that you can work with the
data and review the viability of each candidate theme.
• Reviewing themes | This phase involves checking the candidate themes against the
dataset, to determine that they tell a convincing story of the data, and one that answers
the research question. In this phase, themes are typically refined, which sometimes
involves them being split, combined, or discarded. In our TA approach, themes are defined as patterns of shared meaning underpinned by a central concept or idea.
• Defining and naming themes | This phase involves developing a detailed analysis
of each theme, working out the scope and focus of each theme, determining the ‘story’
of each. It also involves deciding on an informative name for each theme.
• Writing up | This final phase involves weaving together the analytic narrative and
data extracts, and contextualising the analysis in relation to existing literature.
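To illustrate the collation mentioned in the Coding and Generating initial themes phases, here is a minimal Python sketch. The extracts, codes, and candidate themes are entirely hypothetical, and the grouping of codes into themes is written out by hand precisely because, in TA, that grouping is an analytic judgement made by the researcher rather than an automatic computation.

```python
# Minimal sketch of collating coded extracts and grouping codes into
# candidate themes. All labels and extracts below are hypothetical.
from collections import defaultdict

# (data extract, code label) pairs produced during the coding phase.
coded_extracts = [
    ("I never know who to ask for help", "uncertainty about support"),
    ("My manager checks in every week", "regular contact with manager"),
    ("Nobody explained the promotion process", "uncertainty about support"),
    ("Team meetings feel like a safe space", "feeling safe to speak"),
]

# Collate all extracts under each code.
extracts_by_code = defaultdict(list)
for extract, code in coded_extracts:
    extracts_by_code[code].append(extract)

# Candidate themes: a researcher-made mapping from theme to codes,
# shown here purely for illustration.
candidate_themes = {
    "Navigating organisational support": ["uncertainty about support",
                                          "regular contact with manager"],
    "Psychological safety": ["feeling safe to speak"],
}

for theme, codes in candidate_themes.items():
    print(f"Theme: {theme}")
    for code in codes:
        for extract in extracts_by_code[code]:
            print(f"  [{code}] {extract}")
```

Keeping codes and their extracts collated in this way makes the later reviewing phase easier, since each candidate theme can be checked directly against the data that supposedly supports it.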
Research Design
• The function of a research design is to ensure that the requisite data, in accordance with the problem at hand, are collected accurately and economically.
• Simply stated, it is the framework, a blueprint for the research study which guides
the collection and analysis of data.
• The research design, depending upon the needs of the researcher may be a very
detailed statement or only furnish the minimum information required for planning the
research project.
Definitions:
Kerlinger (1986) defines research design as “the plan and structure of investigation so
conceived as to obtain answers to research questions.”
Rosenthal and Rosnow (1991) define design as a "blueprint that provides the scientist
with a detailed outline or plan for the collection and analysis of data."
• A good research design is important because it facilitates the smooth sailing of the various research operations, thereby making research as efficient as possible and yielding maximal information with minimal expenditure of effort, time and money.
• Research design stands for advance planning of the methods to be adopted for
collecting the relevant data and the techniques to be used in their analysis, keeping in
view the objective of the research and the availability of staff, time and money.
• Even so, the need for a well-thought-out research design is at times not realised by many, and the problem is not given the importance it deserves. As a result, many research studies do not serve the purpose for which they are undertaken; in fact, they may even give misleading conclusions.
• The design helps the researcher to organize his ideas in a form whereby it will be
possible for him to look for flaws and inadequacies. Such a design can even be given to
others for their comments and critical evaluation. In the absence of such a course of
action, it will be difficult for the critic to provide a comprehensive review of the proposed
study.
• Preparation of the research design should be done with great care as any error in it
may upset the entire project. Research design, in fact, has a great bearing on the
reliability of the results arrived at and as such constitutes the firm foundation of the
entire edifice of the research work.
A good design is often characterized by adjectives like flexible, appropriate, efficient, economical, and so on. Generally, the design which minimizes bias and maximizes the
reliability of the data collected and analysed is considered a good design. The design
which gives the smallest experimental error is supposed to be the best design in many
investigations.
A research design appropriate for a particular research problem usually involves consideration of several factors, such as the means of obtaining information, the objective of the study, the nature of the problem, and the availability of time and money.
Although the research works and studies differ in their form and kind, they all still meet
on the common ground of scientific methods employed by them. Hence, scientific
research is expected to satisfy the following criteria:
• The aim of the research should be clearly mentioned, along with the use of common concepts.
• The procedures used in the research should be adequately described, in order to permit another researcher to repeat the research for further advancement, while maintaining the continuity of what has already been done.
• The research's procedural design should be carefully planned to obtain results that are as objective as possible.
• The flaws in the procedural design should be sincerely reported by the researcher to correctly estimate their effects upon the findings.
• The data analysis should be adequate to reveal its significance.
• The methods used during the analysis should be appropriate.
• The reliability and validity of the concerned data should be checked carefully.
• The conclusions should be confined and limited to only those data which are justified and adequately provided by the research.
• In case the researcher is experienced and has a good reputation in the field of research, greater confidence in the research is warranted.