
RESEARCH METHODOLOGY

Dr. P. Suganda Devi


Professor,
Department of Commerce and
Business Management
Chaitanya (Deemed to be University)
Hyderabad
Unit I Introduction
Unit II Measurement Techniques
Unit III Parametric and Non-Parametric Tests
Unit IV Report Writing
Today's Session

Meaning, Objectives & Scope of Research
Types of Research, Research Process & Research Design
Hypothesis
Sample
Data types and Tools
Methods of Data Collection
Measurement and Scaling
Meaning of Research
• Research can be described as a systematic and
organized effort to investigate a specific problem
encountered in the work setting that needs a
solution.
– It comprises a series of steps designed and executed, with
the goal of finding answers to the issues that are of
concern to the manager in the work environment.
• The entire process by which we attempt to solve problems is
called research.
• Research thus encompasses the processes of inquiry,
investigation, examination, and experimentation.
These processes have to be carried out
systematically, diligently, critically, objectively, and
logically.
Definition
• Business research is an organized, systematic,
data-based, critical, objective, scientific
inquiry or investigation into a specific
problem, undertaken with the purpose of
finding answers or solutions to it.
– In essence, research provides the needed
information that guides managers to make
informed decisions to successfully deal with
problems.
• According to Francis G. Cornell
• “To be sure, the best research is that which is
reliable, verifiable and exhaustive, so that it
provides information in which we have
confidence.
– The main point here is that research is, literally
speaking, a kind of human behaviour, an activity in
which people engage.
• By this definition all intelligent human behaviour
involves some research.”
• P.M. Cook has given a very comprehensive and functional
definition of the term research: “Research is an honest,
exhaustive, intelligent searching for facts and their
meanings or implications with reference to a given
problem.
• The product or findings of a given piece of research should be an
authentic, verifiable contribution to knowledge in the field studied.”
 It is an honest and exhaustive process.
 The facts are studied with understanding.
 The facts are discovered in the light of the problem.
 Research is problem-centered.
 The findings are valid and verifiable.
 Research work should contribute new knowledge in that
field.
Objectives of Research
• To Gain Familiarity with a Phenomenon or to
Achieve New Insights into It: This is often termed
as exploratory or formulative research studies.
– This type of research is typically undertaken when a
problem is not well understood or is new.
• To Portray Accurately the Characteristics of a
Particular Individual, Situation, or a Group: This
is known as descriptive research studies.
– Descriptive research aims to describe the current
status of a phenomenon and does not necessarily
seek to establish cause-effect relationships.
Objectives of Research Cont…..
• To Determine the Frequency with Which Something
Occurs or with Which It Is Associated with
Something Else: This involves diagnostic research
studies.
– Diagnostic research attempts to understand underlying
causes of a phenomenon by identifying the frequency and
association with other variables.
• To Test a Hypothesis of a Causal Relationship
Between Variables: This is referred to as hypothesis-
testing research studies.
– Such studies aim to test the validity of hypotheses about
the relationships between variables, often through
experiments or other controlled methods.
Thus research objectives include:
– Exploration
– Description
– Explanation
– Prediction
– Development
– Evaluation
– Comparison
– Theory Building
– Problem Solving
– Documentation
– Advancement of Knowledge
Scope of Research
Research for Knowledge: Expanding the horizons of
knowledge in various fields.
– Understanding phenomena, behaviors, and relationships.
Research for Planning and Policy: Informing planning
and policy-making processes.
– Providing empirical data to guide decisions and actions.
Research for Decision Making and problem solving:
Supporting decision-makers with data-driven insights.
– Facilitating better management and organizational
strategies.
Research for Theory Development: Developing new
theories or refining existing ones.
– Contributing to the theoretical foundations of various disciplines.
Scope of Research Cont…
Research for Prediction and Control: Predicting future trends
and events.
– Developing control mechanisms to manage or influence outcomes.
Research for Evaluation: Assessing the effectiveness of
programs, policies, and interventions.
– Providing feedback for improvement and accountability.
Research for Documentation:
Documenting historical, social, economic, and cultural
phenomena.
– Creating comprehensive records for future reference.
Interdisciplinary Research: Encouraging collaboration across
various fields and disciplines.
– Addressing complex problems that require a multidisciplinary
approach.
Types of Research
1. Descriptive vs. Analytical
2. Applied vs. Fundamental
3. Quantitative vs. Qualitative
4. Conceptual vs. Empirical
5. Other Types of Research
Descriptive vs. Analytical
• Descriptive research includes surveys and fact-
finding enquiries of different kinds. The major
purpose of descriptive research is description of
the state of affairs as it exists at present.
– In social science and business research we quite often
use the term Ex post facto research for descriptive
research studies.
• The methods of research utilized in descriptive
research are survey methods of all kinds, including
comparative and correlational methods.
• In analytical research, on the other hand, the
researcher has to use facts or information already
available, and analyze these to make a critical
evaluation of the material.
Applied vs. Fundamental
• Research can either be applied (or action)
research or fundamental (or basic or pure)
research.
• Applied research aims at finding a solution
for an immediate problem facing a society or
an industrial/business organization, whereas
fundamental research is mainly concerned
with generalizations and with the formulation
of a theory
Quantitative vs. Qualitative
Quantitative research is based on the
measurement of quantity or amount. It is
applicable to phenomena that can be
expressed in terms of quantity.
Qualitative research, on the other hand, is
concerned with qualitative phenomenon, i.e.,
phenomena relating to or involving quality or
kind. For instance, when we are interested in
investigating the reasons for human behavior
(i.e., why people think or do certain things), we
are dealing with qualitative research.
Conceptual vs. Empirical
Conceptual research is that related to some
abstract idea(s) or theory. It is generally used by
philosophers and thinkers to develop new
concepts or to reinterpret existing ones.
On the other hand, empirical research relies on
experience or observation alone, often without
due regard for system and theory. It is data-
based research, coming up with conclusions
which are capable of being verified by
observation or experiment.
– We can also call it experimental research.
Some Other Types of Research
a. From the point of view of time, we can think
of research either as one-time research or
longitudinal research. In the former case the
research is confined to a single time-period,
whereas in the latter case the research is
carried on over several time-periods.
b. Research can be field-setting research or
laboratory research or simulation research,
depending upon the environment in which it is
to be carried out.
c. Clinical or diagnostic research follows case-
study methods or in-depth approaches to
reach the basic causal relations. Such studies
usually go deep into the causes of things or
events that interest us, using very small
samples and very deep probing data gathering
devices.
d. Historical research is that which utilizes
historical sources like documents, remains,
etc. to study events or ideas of the past,
including the philosophy of persons and
groups at any remote point of time.
Research as a Process
Research Design
• “A research design is the arrangement of
conditions for collection and analysis of data
in a manner that aims to combine relevance to
the research purpose with economy in
procedure.”
– Decisions regarding what, where, when, how
much, by what means concerning an inquiry or a
research study constitute a research design.
Types of research design
• Action Research Design - The essentials of action
research design follow a characteristic cycle
whereby initially an exploratory stance is adopted,
an understanding of the problem is developed,
and plans are made.
– Then the intervention is carried out (the "action" in
Action Research) during which time, pertinent
observations are collected in various forms.
• The new interventional strategies are carried out, and this
cyclic process repeats, continuing until a sufficient
understanding of (or a valid implementation solution for) the
problem is achieved
Case Study Design
• A case study is an in-depth study of a particular
research problem rather than a sweeping
statistical survey or comprehensive comparative
inquiry.
• It is often used to narrow down a very broad field
of research into one or a few easily researchable
examples.
– The case study research design is also useful for testing
whether a specific theory and model actually applies to
phenomena in the real world. It is a useful design when
not much is known about an issue or phenomenon.
Causal Design
• Causality studies may be thought of as
understanding a phenomenon in terms of
conditional statements in the form, “If X, then
Y.”
– This type of research is used to measure what
impact a specific change will have on existing
norms and assumptions.
• Most social scientists seek causal explanations
that reflect tests of hypotheses.
Cohort Design & Cross sectional Design
• Often used in the medical sciences, a cohort study
generally refers to a study conducted over a period
of time involving members of a population which
the subject or representative member comes from,
and who are united by some commonality or
similarity.
• Cross-sectional research designs have three
distinctive features: no time dimension; a reliance
on existing differences rather than change following
intervention; and, groups are selected based on
existing differences rather than random allocation.
Descriptive Design
• Descriptive research designs help provide answers
to the questions of who, what, when, where, and
how associated with a particular research
problem;
– A descriptive study cannot conclusively ascertain
answers to why.
• Descriptive research is used to obtain information
concerning the current status of the phenomena
and to describe "what exists" with respect to
variables or conditions in a situation
Experimental Design
• The classic experimental design specifies an
experimental group and a control group.
– The independent variable is administered to the
experimental group and not to the control group,
and both groups are measured on the same
dependent variable.
• Subsequent experimental designs use more
groups and more measurements over longer
periods.
Exploratory Design
• An exploratory design is conducted about a
research problem when there are few or no
earlier studies to refer to or rely upon to
predict an outcome.
• The focus is on gaining insights and familiarity
for later investigation or undertaken when
research problems are in a preliminary stage
of investigation.
Historical Design
• The purpose of a historical research design is
to collect, verify, and synthesize evidence from
the past to establish facts that defend or
refute a hypothesis.
• It uses secondary sources and a variety of
primary documentary evidence, such as
diaries, official records, reports, archives, and
non-textual information [maps, pictures, audio
and visual recordings].
Longitudinal Design
• A longitudinal study follows the same sample
over time and makes repeated observations.
• For example, with longitudinal surveys, the
same group of people is interviewed at regular
intervals, enabling researchers to track
changes over time and to relate them to
variables that might explain why the changes
occur.
– It is a type of observational study sometimes
referred to as a panel study.
Meta-Analysis Design
• Meta-analysis is an analytical methodology
designed to systematically evaluate and
summarize the results from a number of
individual studies, thereby, increasing the
overall sample size and the ability of the
researcher to study effects of interest.
– The purpose is to not simply summarize existing
knowledge, but to develop a new understanding
of a research problem using synoptic reasoning.
Mixed-Method Design
• Mixed methods research represents more of
an approach to examining a research problem
than a methodology; it combines the methods
described above within a single study.
Hypothesis
• The word hypothesis consists of two words:
Hypo + thesis = Hypothesis
• ‘Hypo’ means tentative or subject to
verification, and ‘Thesis’ means a statement about
the solution of a problem.
• “A hypothesis is a tentative generalisation the
validity of which remains to be tested.
• In its most elementary stage the hypothesis may
be any hunch, guess, imaginative idea which
becomes the basis for further investigation.”
Some basic characteristics of a Hypothesis
Hypothesis must possess the following characteristics:
• Hypothesis should be clear and precise. If the
hypothesis is not clear and precise, the inferences
drawn on its basis cannot be taken as reliable.
• Hypothesis should be capable of being tested.
• Hypothesis should state the relationship between
variables.
• Hypothesis should be limited in scope and must
be specific.
• Hypothesis should be stated as far as possible in
the simplest terms.
Kinds of Hypothesis
Declarative Statement
A hypothesis may be developed as a declarative
statement which provides an anticipated relationship
or difference between variables.
H : There is a significant relationship between
demographic factors and ……………..

Directional Hypothesis
A hypothesis may be directional which connotes an
expected direction in the relationship or difference
between variables.
H : There is a positive relationship between …
Non-Directional Hypothesis
A hypothesis may be stated in the null form,
which is an assertion that no relationship or
no difference exists between or among the
variables. The following is an example of the
null form of a hypothesis:
H0: There is no significant relationship between
intelligence and achievement of students.
Variables in a Hypothesis
A hypothesis is made testable by providing
operational definitions for the terms or variables of
the hypothesis.
Variables - There are five types of variables.
(i) Independent variable
(ii) Dependent variable
(iii) Moderator variable
(iv) Control variable
(v) Intervening variable
The Independent Variable
• The independent variable is that factor which is
measured, manipulated, or selected by the
experimenter to determine its relationship to an
observed phenomenon.
– If a researcher is studying the relationship between
two variables X and Y, and X is the independent
variable, then X affects the other variable Y.
The Dependent Variable:
•The dependent variable is that factor which is
observed and measured to determine the effect
of the independent variables.
–It is the variable that will change as a result of
variations in the independent variable.
• It is considered dependent because its value
depends upon the value of the independent
variable.
The Moderator Variable
• The term moderator variable describes a special
type of independent variable selected for study
to determine if it affects the relationship between
the primary independent variable and the
dependent variable.
• The moderator variable is defined as that factor
which is measured, manipulated or selected by
the experimenter to discover whether it modifies
the relationship of the independent variable to an
observed phenomenon.
– Example: gender
Control Variable
• All the variables in a situation cannot be studied at
the same time; some must be neutralized to
guarantee that they will not have a differential or
moderating effect on the relationship between the
independent and dependent variables.
– These variables whose effects must be neutralized or
controlled are known as control variables.
• They are defined as those factors which are
controlled by experimenter to cancel out or
neutralize any effect they might otherwise have on
the observed phenomena.
– While the effects of the control variables are neutralized,
the effects of the moderator variables are studied.
Intervening Variable
• An intervening variable is that factor which
affects the observed phenomenon but cannot
be seen, measured, or manipulated; its
effect must be inferred from the effects of the
independent and moderator variables on the
observed phenomenon.
– Attitudes, learning processes, habits and interests
function as intervening variables.
• After selecting the independent and
dependent variables the researcher must
decide which variables are to be included as
moderator variables and which are to be
excluded or hold constant as Control variables.
• The researcher must also decide how to treat the
other variables (besides the independent variable)
that might affect the dependent variable.
Sample and types of samples
• All items in any field of inquiry constitute a ‘Universe’
or ‘Population.’
– A complete enumeration of all items in the ‘population’ is
known as a census inquiry.
• When field studies are undertaken in practical life,
considerations of time and cost lead to a selection of
respondents i.e., selection of only a few items.
– The respondents selected should be as representative of
the total population as possible in order to produce a
miniature cross-section.
• The selected respondents constitute what is
technically called a ‘sample’ and the selection process
is called ‘sampling technique.’
Types of sampling methods

• Probability sampling and
• Non-probability sampling
Probability Sampling
• A method of sampling which gives the
probability that our sample is representative
of the population is known as probability
sampling.
• Probability sampling is also known as ‘random
sampling’ or ‘chance sampling’. Under this
sampling design, every item of the universe
has an equal chance of inclusion in the
sample.
Non-probability Sampling

• Also known as non-parametric sampling.
• Non-probability sampling is that sampling
procedure which does not afford any basis
for estimating the probability that each item
in the population has of being included in
the sample.
• Non-probability sampling is also known by
different names such as deliberate sampling,
purposive sampling and judgment sampling.
Characteristics of Probability Sampling
• In probability sampling we make inferences from
the sample to the population.
• In probability sampling every individual of the
population has equal probability to be taken
into the sample.
• Probability sample may be representative of the
population.
• The observations (data) of the probability
sample are used for the inferential purpose.
Characteristics of Non-probability Sampling

• There is no well-defined idea of the population in
non-probability sampling.
• The probability of selecting any individual is not
known.
• The observations of non-probability sample
are not used for generalization purpose.
• Non-parametric or non-inferential statistics
are used in non probability sample.
Types or Techniques of Probability Sampling
There are a number of techniques of taking a
probability sample, but here only six important
techniques have been discussed:
• Simple random sampling.
• Systematic sampling.
• Stratified sampling.
• Multiple or Double sampling.
• Multi-stage sampling.
• Cluster sampling.
Types of Non-probability Sample
There are the following four types of non-
probability sample:
• Incidental or accidental sample.
• Purposive sample.
• Quota sample.
• Judgment sample
Simple Random Sampling

• A simple random sample is one in which each
element of the population has an equal and
independent chance of being included in the
sample:
(a) Tossing a coin.
(b) Throwing a die.
(c) Lottery method.
(d) Blindfolded method.
(e) Using a random number table.
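A minimal Python sketch (the population list and sample size are illustrative assumptions) of drawing a simple random sample, where every element has an equal and independent chance of selection:

```python
import random

population = list(range(1, 101))  # illustrative sampling frame of 100 numbered units

# Draw a simple random sample of size 10 without replacement:
# every unit has an equal chance of being included.
sample = random.sample(population, k=10)
print(sample)
```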
Systematic Sampling
• Systematic sampling is an improvement over
simple random sampling. This method requires
complete information about the population:
there should be a list of all the individuals of the
population arranged in some systematic way.
Now we decide the size of the sample.
Let sample size = n
and population size = N
Now we select every (N/n)th individual from the list,
and thus we have the desired size of sample,
which is known as a systematic sample.
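A minimal sketch of the systematic selection rule above (the ordered frame is an illustrative assumption), picking a random start and then every k-th unit where k = N/n:

```python
import random

population = list(range(1, 101))  # illustrative ordered frame of N = 100 units
n = 10                            # desired sample size
k = len(population) // n          # sampling interval k = N/n

# Choose a random starting point within the first interval,
# then take every k-th unit from the list.
start = random.randrange(k)
systematic_sample = population[start::k][:n]
print(systematic_sample)
```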
Stratified Sampling

• The researcher divides the population into strata
on the basis of some characteristics and from
each of these smaller homogeneous groups
(strata) draws at random a predetermined
number of units.
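A minimal sketch of stratified sampling under the assumption that the frame records each unit's stratum (the two-stratum frame and per-stratum sample size are illustrative):

```python
import random
from collections import defaultdict

# Illustrative frame: (unit_id, stratum) pairs, e.g. employees tagged by department.
frame = [(i, "A" if i % 3 else "B") for i in range(1, 31)]

# Group the population into homogeneous strata.
strata = defaultdict(list)
for unit, stratum in frame:
    strata[stratum].append(unit)

# Draw a predetermined number of units at random from each stratum.
per_stratum = 3
stratified_sample = {s: random.sample(units, per_stratum) for s, units in strata.items()}
print(stratified_sample)
```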
Multi-Stage Sampling

• This sample is more comprehensive and
representative of the population.
• Stages of a population are usually available
within a group or population, whenever
stratification is done by the researcher.
• Individuals are selected at different stages to
constitute the multi-stage sample.
Cluster Sampling
• Selecting an intact group as a whole is known
as cluster sampling. In cluster sampling the
sample units contain groups of elements
(clusters) instead of individual members or
items in the population.
Non-probability Sampling Techniques

• Judgement Sampling
– This involves the selection of a group from the
population on the basis of available information
thought to be representative of the total
population, or the selection of a group by
intuition on the basis of criteria deemed to be
self-evident.
Purposive Sampling
• The purposive sampling is selected by some
arbitrary method because it is known to be
representative of the total population, or it is
known that it will produce well matched
groups.
• The idea is to pick out the sample in relation
to some criteria which are considered
important for the particular study.
Quota Sampling
• The population is classified into several
categories: on the basis of judgement or
assumption or the previous knowledge, the
proportion of population falling into each
category is decided.
• Thereafter a quota of cases to be drawn is
fixed and the observer is allowed to sample as
he likes. Quota sampling is very arbitrary and
likely to figure in Municipal surveys.
Data and Sources of Data Collection

• Data means observations or evidence. Scientific
educational research requires data gathered by
means of standardized research tools or
self-designed instruments.
• The data can be classified into two broad
categories:
– Qualitative data or attributes.
– Quantitative data or variables
• 1. Qualitative Data or Attributes: The
characteristics or traits to which a numerical
value cannot be assigned are called
attributes, e.g. motivation, confidence,
honesty, integrity, etc.
• 2. Quantitative Data or Variables: The
characteristics or traits to which a numerical
value can be assigned are called variables,
e.g. achievement, intelligence, aptitude,
height, weight, etc.
Types of data source
• There are two sources of data viz., primary
and secondary.
– The primary data are those which are collected
afresh and for the first time, and thus happen to
be original in character.
– The secondary data, on the other hand, are those
which have already been collected by someone
else and which have already been passed through
the statistical process.
Tools and techniques of Primary data collection

• Observation Method
– Under the observation method, the information is sought by way
of investigator’s own direct observation without asking from the
respondent.
• Interview method
– The interview method of collecting data involves presentation of
oral-verbal stimuli and reply in terms of oral-verbal responses.
• Focused interview is meant to focus attention on the given experience of
the respondent and its effects.
• The clinical interview is concerned with broad underlying feelings or
motivations or with the course of individual’s life experience
Questionnaires
• Questionnaire method
– A questionnaire consists of a number of questions printed or typed
in a definite order on a form or set of forms. The questionnaire is
either mailed to respondents, or responses are collected by the
researcher or through an enumerator.
Tools & Techniques of collecting secondary data
• Publicly Available Sources: Secondary data can be collected from
publicly available sources such as government agencies,
international organizations, research institutions, and academic
journals.
• Online Databases: Online databases provide access to a wide
range of secondary data sources across different fields and
disciplines
• Library Research: Libraries house a wealth of secondary data
sources, including books, journals, newspapers, periodicals,
government publications, and archival records
• Surveys and Questionnaires: Secondary data can be collected
from surveys and questionnaires conducted by other researchers,
organizations, or government agencies.
• Commercial Sources: Commercial sources such as market
research firms, data brokers, and business intelligence providers
also supply secondary data.
Measurement and Scales
• Measurement is the process of assigning
numbers or labels to objects, events, or their
attributes according to specific rules. It
involves determining the quantity or quality of
variables in a systematic and standardized
way.
SCALES
• A scale is a tool or mechanism by which individuals
are distinguished as to how they differ from one
another on the variables of interest to our study.
• There are four basic types of scales: nominal,
ordinal, interval, and ratio. The degree of
sophistication to which the scales are fine-tuned
increases progressively as we move from the
nominal to the ratio scale.
– That is, information on the variables can be obtained in
greater detail when we employ an interval or a ratio
scale than with the other two scales.
Types of Scale
Nominal Scale
• Characteristics: Categorizes data without any order or
ranking. Each category is unique and has no
quantitative value.
• Example: Gender (male, female, other), blood type
(A, B, AB, O).
Ordinal Scale
• Characteristics: Categorizes data with a meaningful
order or ranking but without consistent intervals
between categories.
• Example: Satisfaction level (very satisfied, satisfied,
neutral, dissatisfied, very dissatisfied).
Interval Scale
• Characteristics: Ordered categories with equal intervals
between them, but no true zero point.
• Example: Temperature in Celsius or Fahrenheit, IQ scores.
Ratio Scale
• Characteristics: Ordered categories with equal intervals and
a true zero point, allowing for the calculation of ratios.
• Example: Height, weight, time, income

– A true zero point refers to the absence of the quantity being
measured and is a feature of ratio scales in research. It means
that a value of zero on the scale indicates a complete lack of the
attribute being measured.
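A minimal sketch (using pandas; the column contents are illustrative assumptions) of how data at the four measurement levels can be represented, with nominal and ordinal data held as categoricals and interval/ratio data as plain numbers:

```python
import pandas as pd

# Nominal: categories with no inherent order (e.g. blood type).
blood_type = pd.Categorical(["A", "O", "AB", "B", "O"],
                            categories=["A", "B", "AB", "O"])

# Ordinal: categories with a meaningful order but no fixed interval.
satisfaction = pd.Categorical(
    ["satisfied", "neutral", "very satisfied"],
    categories=["very dissatisfied", "dissatisfied", "neutral",
                "satisfied", "very satisfied"],
    ordered=True,
)

# Interval/ratio data are simply numeric; ratio data have a true zero,
# so statements such as "twice as heavy" are meaningful.
weight_kg = pd.Series([60.0, 75.5, 82.3])

print(blood_type)
print(satisfaction.codes)   # ordinal ranks encoded as integers
print(weight_kg.mean())
```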
A Review of Scale
• The four scales that can be applied to the measurement of
variables are the nominal, ordinal, interval, and ratio scales.
– The nominal scale highlights the differences by classifying objects or
persons into groups, and provides the least amount of information on
the variable.
– The ordinal scale provides some additional information by rank-ordering
the categories of the nominal scale.
– The interval scale not only ranks, but also provides us with information
on the magnitude of the differences in the variable.
– The ratio scale indicates not only the magnitude of the differences but
also their proportion.
• As we move from the nominal to the ratio scale, we obtain
progressively increasing precision in quantifying the data, and
greater flexibility in using more powerful statistical tests.
– Hence, whenever possible and appropriate, a more powerful rather
than a less powerful scale should be used to measure the variables of
interest
Rating Scales & Ranking Scales
• Now that we know the four different types of scales
that can be used to measure the operationally
defined dimensions and elements of a variable, it is
necessary to examine the methods of scaling.
• There are two main categories of attitudinal scales
(not to be confused with the four different types of
scales)—the rating scale and the ranking scale.
– Rating scales have several response categories and are
used to elicit responses with regard to the object, event,
or person studied.
– Ranking scales, on the other hand, make comparisons
between or among objects, events, or persons and elicit
the preferred choices and ranking among them.
List of Rating Scales
• The following rating scales are often used in
organizational research:
– Dichotomous scale
– Category scale
– Likert scale
– Numerical scale
– Semantic differential scale
– Itemized rating scale
– Fixed or constant sum rating scale
– Stapel scale
– Graphic rating scale
– Consensus scale
• Dichotomous Scale
– The dichotomous scale is used to elicit a Yes or No answer.
• Category scale
– The category scale uses multiple items to elicit a single
response out of several options given
• Likert Scale
– The Likert scale is designed to examine how strongly
subjects agree or disagree with statements on a 5-point
scale
• Semantic Differential Scale
– Measures the meaning respondents assign to a concept
using bipolar adjectives. A scale with opposite adjectives
at each end (e.g., good-bad, happy-sad).
RANKING SCALES
• Ranking scales are used to tap preferences between two or
among more objects or items.
• For instance, let us say there are four product lines and the
manager seeks information that would help decide which
product line should get the most attention.
• Let us also assume that 35% of the respondents choose the
first product, 25% the second, and 20% choose each of
products three and four as being of importance to them.
• The manager cannot then conclude that the first product is
the most preferred since 65% of the respondents did not
choose that product.
• Alternative methods used are the paired comparisons, forced
choice, and the comparative scale.
• Simple Ranking Scale
– Description: Respondents rank items in order of
preference or importance.
– Format: A list of items that respondents arrange in
order from highest to lowest.
– Example: Rank your favorite fruits from 1 to 5 (Apple,
Banana, Orange, Grapes, Pineapple).
• Paired Comparison Scale
– Description: Respondents choose their preferred item
from a series of pairs.
– Format: Each item is compared with every other item
in pairs.
– Example: Given pairs of fruits (Apple vs. Banana, Apple
vs. Orange, etc.), select the preferred fruit in each pair
(see the sketch after this list).
• Forced Ranking Scale
– Description: Similar to simple ranking but forces respondents
to place items in a specific order without ties.
– Format: A list where respondents must assign a unique rank to
each item.
– Example: Rank the following job benefits in order of
importance (Salary, Health Insurance, Vacation Time,
Retirement Plan, Flexible Hours).
• Comparative Scale
– Description: Respondents compare items against a common
standard or against each other.
– Format: Items are ranked based on how they compare to a
benchmark or each other.
– Example: Rank the following cars based on fuel efficiency
compared to a standard benchmark (Car A, Car B, Car C, Car D).
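As referenced under the paired comparison scale, here is a minimal sketch (the items are illustrative) that generates every pair to be presented to a respondent; n items yield n(n-1)/2 comparisons:

```python
from itertools import combinations

items = ["Apple", "Banana", "Orange", "Grapes"]  # illustrative items to be compared

# A paired comparison scale presents every item against every other item,
# giving n * (n - 1) / 2 pairs for n items (here 4 * 3 / 2 = 6 pairs).
pairs = list(combinations(items, 2))
for a, b in pairs:
    print(f"{a} vs. {b}")
print("Number of comparisons:", len(pairs))
```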
Reliability of a Scale
• The reliability of a measure indicates the
extent to which it is without bias (error free)
and hence ensures consistent measurement
across time and across the various items in the
instrument.
– In other words, the reliability of a measure is an
indication of the stability and consistency with
which the instrument measures the concept and
helps to assess the ‘goodness’ of a measure.
Reliability cont….

• Internal Consistency:
– Description: Measures the consistency of results
across items within a test.
– Example: Using Cronbach’s alpha to assess the
consistency of responses to a set of survey
questions intended to measure the same
construct.
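A minimal sketch of Cronbach's alpha computed directly from its standard formula (the function name and the illustrative response matrix are assumptions, not part of the source):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of scores."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative 5-point Likert responses: 6 respondents x 4 items.
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])
print(round(cronbach_alpha(responses), 3))
```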
Validity

• Validity refers to the extent to which an
instrument measures what it is intended to
measure. A valid instrument accurately
reflects the concept it is supposed to measure.
• Content Validity:
– Description: Ensures that the instrument covers
the entire range of the concept being measured.
– Example: A test on mathematical skills should
cover all relevant areas, such as algebra,
geometry, and calculus.
Types of validity cont….
• Construct Validity:
– Description: Evaluates whether the instrument
truly measures the theoretical construct it claims to
measure.
– Example: A depression scale should accurately
measure the construct of depression and not
anxiety or stress.
• Criterion-Related Validity:
– Description: Assesses how well one measure
predicts an outcome based on another measure.
– Example: A new job aptitude test should correlate
well with job performance.
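A minimal sketch (the scores are illustrative assumptions) of examining criterion-related validity through the correlation between a new measure and the criterion it should predict:

```python
import numpy as np

# Illustrative scores: a new aptitude test vs. later job performance ratings.
aptitude_test   = [55, 62, 70, 48, 80, 66, 74, 59]
job_performance = [3.1, 3.4, 4.0, 2.8, 4.5, 3.6, 4.2, 3.0]

# Criterion-related validity is commonly summarized by the validity
# coefficient: the correlation between the measure and the criterion.
r = np.corrcoef(aptitude_test, job_performance)[0, 1]
print(f"Validity coefficient r = {r:.2f}")
```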
GETTING DATA READY FOR ANALYSIS
• After data are obtained through
questionnaires, interviews, observation, or
through secondary sources, they need to be
edited. The blank responses, if any, have to be
handled in some way, the data coded, and a
categorization scheme has to be set up. The
data will then have to be keyed in, and some
software program used to analyze them.
• Editing Data
– Data have to be edited, especially when they relate to responses to
open-ended questions of interviews and questionnaires, or
unstructured observations.
• Handling Blank Responses
– One way to handle a blank response to an interval-scaled item with a
midpoint would be to assign the midpoint in the scale as the response
to that particular item.
• Coding
– use a coding sheet first to transcribe the data from the questionnaire
and then key in the data.
• Categorization
– it is useful to set up a scheme for categorizing the variables such that
the several items measuring a concept are all grouped together.
Responses to some of the negatively worded questions have also to be
reversed so that all answers are in the same direction.
• Entering Data
– Raw data can be entered through any software program, for
instance the SPSS Data Editor or a spreadsheet package.
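A minimal pandas sketch of the data-preparation steps above; the column names, the 5-point scale, and the choice of item3 as a negatively worded item are illustrative assumptions:

```python
import numpy as np
import pandas as pd

# Illustrative responses to three 5-point interval-scaled items; NaN marks blanks.
data = pd.DataFrame({
    "item1": [4, np.nan, 5, 2],
    "item2": [3, 4, np.nan, 3],
    "item3": [5, 4, 4, np.nan],
})

# Handle blank responses: assign the scale midpoint (3 on a 1-5 scale).
data = data.fillna(3)

# Reverse-code a negatively worded item so all answers run in the same direction.
data["item3"] = 6 - data["item3"]
print(data)
```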
DATA ANALYSIS
• In data analysis we have three objectives:
– Getting a feel for the data,
– Testing the goodness of data, and
– Testing the hypotheses developed for the research.
• The feel for the data will give preliminary ideas of how
good the scales are, how well the coding and entering of
data have been done, and so on.
• Testing the goodness of data can be accomplished by
submitting the data for factor analysis, obtaining
Cronbach's alpha, and so on.
• Hypothesis testing is achieved by testing each of the
hypotheses using the relevant statistical test. The results
of these tests will determine whether or not the
hypotheses are substantiated.
Null and Alternate Hypotheses
• The null hypothesis is a proposition that states
a definitive, exact relationship between two
variables.
– In general, the null statement is expressed as no
(significant) relationship between two variables or
no (significant) difference between two groups.
– The alternate hypothesis, which is the opposite of
the null, is a statement expressing a relationship
between two variables or indicating differences
between groups.
Steps in Hypothesis testing
– State the null and the alternate hypotheses.
– Choose the appropriate statistical test depending on whether
the data collected are parametric or nonparametric
– Determine the level of significance desired (p = .05, or more, or
less).
– See if the output results from computer analysis indicate that
the significance level is met.
• P value rules
– If the p value is < the significance level (0.05), reject the null
hypothesis and accept the alternative hypothesis.
– If the p value is > the significance level (0.05), accept the null
hypothesis and reject the alternative hypothesis.
• Every test in SPSS reports a p value, and the same rule applies:
– If p < significance level, accept the alternative hypothesis.
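A minimal sketch of the decision rule above, assuming parametric data from two independent groups and an independent-samples t-test (the scores are illustrative):

```python
from scipy import stats

# Illustrative scores for two independent groups.
group_a = [72, 85, 78, 90, 88, 76, 81]
group_b = [68, 74, 70, 73, 65, 71, 69]

# H0: there is no significant difference between the two group means.
t_stat, p_value = stats.ttest_ind(group_a, group_b)

alpha = 0.05
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject H0 and accept the alternative hypothesis")
else:
    print(f"p = {p_value:.4f} >= {alpha}: retain (accept) the null hypothesis")
```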
THE RESEARCH REPORT
The Research Proposal
– The research proposal states the problem to be
investigated, the methodology to be used, the duration
of the study, and its cost. The researcher submits it to
the sponsor, who approves it and issues a letter of
authorization to proceed with the study.
The Research Report
– The results of the study and the recommendations to
solve the problem are effectively communicated to the
sponsor through a research report, so that the
suggestions made are accepted and implemented.
Guidelines for Preparing a Good Research Report

Preparing a good research report involves several key
steps and considerations.
1. Understand the Purpose and Audience
– Purpose: Clearly define the objective of your research
report. Are you informing, persuading, analyzing, or
presenting findings?
– Audience: Identify who will read your report and tailor
the content to their knowledge level and interests.
2. Choose a Clear and Concise Title
– Relevance: Ensure the title accurately reflects the
content and scope of the research.
– Clarity: Use precise language that is easy to understand.
3. Structure Your Report

A well-structured report typically includes the
following sections:
a. Abstract
– Summary: Provide a brief overview of the research,
including the main findings and conclusions.
– Length: Keep it concise (usually 150-250 words).
b. Introduction
– Background: Provide context for the research topic.
– Research Problem: Clearly state the problem or
question your research addresses.
– Objectives: Outline the goals and scope of the study.
Structure cont….
c. Literature Review
– Previous Research: Summarize relevant studies and highlight their
contributions and gaps.
– Theoretical Framework: Explain the theories or models that
underpin your research.
d. Methodology
– Research Design: Describe the research approach (e.g., qualitative,
quantitative, mixed methods).
– Data Collection: Explain how data was gathered (e.g., surveys,
experiments, observations).
– Analysis: Detail the methods used to analyze the data.
– Ethics: Mention any ethical considerations and how they were
addressed.
Structure cont….

e. Results
– Findings: Present the data and main results of the
study.
– Visuals: Use tables, graphs, and charts to illustrate key
points.
– Clarity: Ensure the results are clear and logically
organized.
f. Discussion
– Interpretation: Explain the significance of the findings.
– Implications: Discuss the implications for theory,
practice, or future research.
– Limitations: Acknowledge any limitations of the study.
Structure cont…
g. Conclusion
– Summary: Recap the main findings and contributions of the
research.
– Recommendations: Suggest practical applications or areas for
further research.
h. References
– Citations: List all sources cited in the report, following a consistent
citation style (e.g., APA, MLA, Chicago).
– Completeness: Ensure all references are complete and accurate.
i. Appendices
– Supplementary Material: Include any additional material (e.g.,
raw data, questionnaires) that supports the report.
Cont…..
4. Writing Style and Language
– Clarity: Use clear and straightforward language.
– Precision: Be precise in your descriptions and explanations.
– Consistency: Maintain a consistent tone and style
throughout the report.
– Grammar and Spelling: Proofread to eliminate errors.
5. Visual Presentation
– Layout: Use a clean, professional layout with headings,
subheadings, and bullet points.
– Graphics: Ensure all graphics are high-quality and clearly
labeled.
– Formatting: Follow any specific formatting guidelines
provided (e.g., font size, margins).
6. Review and Revise
– Feedback: Seek feedback from peers or advisors.
– Revisions: Make necessary revisions based on
feedback.
– Proofreading: Conduct a final proofread to ensure
accuracy and completeness.
7. Ethical Considerations
– Plagiarism: Ensure all sources are properly credited to
avoid plagiarism.
– Confidentiality: Respect the confidentiality and
privacy of any participants involved in the research.
• End of the Session!