
Advanced Research Methodology

CHAPTER 1: Research Methodology

1.1. Introduction

Research methodology refers to the procedures that will be used to collect data and draw conclusions. It is a methodical, well-thought-out strategy for addressing a research problem. A research methodology describes the steps the researcher takes to produce credible and valid findings that are relevant to the study's stated goals. It covers all aspects of data collection and analysis, including what data will be gathered and where it will be gathered from.

1.1.1. Meaning of Research

Although there are some commonalities among the many definitions of research, there does not appear to be a single, comprehensive definition accepted by everyone involved in it.

Basically, research is the process of looking for something. The standard description of research refers to "the systematic investigation of a problem by means of a selected strategy," which entails selecting a methodology, developing research hypotheses, choosing or creating techniques and approaches, collecting information, analyzing the data, interpreting the results, and finally providing a solution.

Gathering and analyzing evidence for the purpose of furthering one's knowledge of a subject requires a special focus on minimizing the potential for bias and inaccuracy; these activities are defined by taking any biases into consideration and adjusting for them. A study project might be an extension of earlier studies in the same area. Research could also involve recreating parts of, or the whole of, an earlier project in order to assess the reliability of equipment, techniques, or experiments.

1.1.2. Characteristics of Research

 In order to collect reliable data, good researchers use a systematic methodology. When conducting experiments or collecting data, researchers have a responsibility to operate ethically and within a set of guidelines.
 The evaluation is grounded in logic and makes use of both deductive and inductive approaches.
 Information and data collected in real time are based on empirical research conducted in the field.
 All obtained data is thoroughly examined to rule out the possibility of any unexpected results.
 It opens up a channel for further inquiry: additional avenues of study may be uncovered with the use of already collected data.
 It takes an analytical approach, using all available facts, to remove any room for interpretive uncertainty.
 Few things are more important in research than precision. The data should be verified as reliable; for instance, data collection in a laboratory setting is conducted under strict conditions. The precision of an experiment is determined by the precision of the tools employed, the precision of their measurements, and the precision of the experiment's ultimate conclusion.

1.1.3. Objectives of Research

The objective of research, regardless of discipline, is to learn more about the topic at hand through the use of scientific techniques. Although every study has its own unique objectives, scientific inquiry may generally be classified into four categories.

 Exploration and Explorative Research
 Description and Descriptive Research
 Causal Explanation and Causal Research
 Prediction and Predictive Research

1. Exploration and Explorative Research

The act of discovering something about the world that has not been studied before is called exploration. That is to say, an exploratory investigation defines and categorizes unanticipated issues. The purpose of an exploratory investigation is to learn more about a topic or uncover previously unknown information. When researchers don't have a good sense of what challenges they will face, exploration can help them find solutions. The goals of exploratory investigation are to:

 Develop easier-to-comprehend notions;
 Determine the order of importance for a set of potential solutions;
 Constrain variables to their operational meanings;
 Establish working hypotheses and fine-tune study goals;
 Enhance the approach used, and adjust the study's framework as necessary.

Exploration is accomplished by studies classified as exploratory. Once researchers are satisfied that they have defined the main aspects of the research topic, the study is considered complete and no more exploration is necessary.

2. Description and Descriptive Research

The process of acquiring data is fundamental to many research projects; this data gathering is what is meant by description. In-depth descriptions of people, places, and things are what descriptive research is all about.

Descriptive research is an effort to provide an accurate account of real-world occurrences. When a great deal of background information already exists, researchers may proceed with confidence. The goals of descriptive research include answering "who," "what," "when," "where," and "how" questions. Such investigations may involve gathering data and generating the distribution of the total number of occurrences of a specific event or feature, referred to as research variables.

It is possible for a descriptive study to encompass the examination of the connection between several variables, each of which is individually examined. A correlational study is a kind of research that attempts to establish a link between, or a co-relation of, two or more variables. A descriptive study might provide answers to the following categories of questions:

 Who are the individuals who engage in criminal activity in the city, and what defines them? How old are they? Are they middle-aged? Poor? Muslim? Educated?
 Who may be the best target market for this innovative product? Male or female? Urban or rural?
 Do women in rural areas get married at a younger age than their city-dwelling counterparts?
 Is there any correlation between an applicant's level of experience and their starting salary?
Descriptive research is useful for describing what happened, but it is limited in its ability to explain what led up to the observed phenomenon. Therefore, one cannot establish a causal link between two variables using descriptive data alone.

There is less of an emphasis on reliability and validity in descriptive studies. Descriptive studies can include anything that can be measured and analyzed, but there are always limitations to them. It is essential that research affect the world around us. Descriptive research may be used to determine, for instance, which diseases are more prevalent in a certain population. Readers of such a study, however, will also want to know what caused this to occur and how to stop the spread of the illness, so that more individuals can live long, healthy lives. That requires the researcher to investigate the circumstance in question from a causal standpoint and provide a causal explanation.

3. Causal Explanation and Causal Research

A good explanation sheds light on the reasoning and process behind an event. Explanatory research goes beyond mere description to investigate potential links among variables. It elucidates the driving force behind the phenomena discovered in descriptive research.

For example, when a researcher discovers that neighborhoods with larger family sizes experience more child mortality, or that smoking is connected with lung disease, one is doing descriptive analysis. Research that aims to clarify why something is the way it is, and to identify the factors that contribute to its existence, is called explanatory or causal study. The researcher develops hypotheses or ideas to explain the circumstances that led to observed phenomena. Consider the following cases, which illustrate the need for causal research:

 Why do some individuals choose a life of crime? Can this be attributed to the current economic downturn or to an absence of parental guidance?
 Will people be interested in buying a product only because it comes in a different package? Is it possible that an appealing commercial may convince them to try something new?
 Why have stock prices on the stock market dropped more dramatically than ever before? Is it because of the excessive availability of new shares, or because of the International Monetary Fund's (IMF) warnings about the risks financial institutions pose to the stock market?

4. Prediction and Predictive Research

Once one has a good enough explanation for something, one can use it to predict when and how often it will happen. The connection between explanation and prediction has been discussed at length, although its exact nature is unclear.

Some people think that explaining something and predicting it are the same activity, with the only difference being that explanation happens after something has occurred, whereas prediction happens before. Another view holds that explaining and foreseeing are two entirely separate activities. One may safely set this controversy aside and instead note that a theory's capacity to explain and foresee events goes well beyond after-the-fact explanations.

1.1.4. Types of Research

Research may be broken down into the following categories:

1. On the Basis of Application

There are two broad categories of research when broken down by their practical applications:

i. Basic/Pure/Fundamental Research

Pure research is frequently referred to as "basic research" or "fundamental research." It is the first step in a study project. Its authors want to use their findings to provide explanations for observed phenomena. Instead of focusing on applications or on putting hypotheses and ideas to the test, this kind of study aims to learn as much as possible about the topic at hand, without concern for how to solve a presenting issue or how the findings could be used in the real world. Designing a study model of adolescent reading behavior, for instance, serves no purpose beyond expanding understanding of the topic. The following are all examples of basic research:

 Discovery

Whenever the goal of a study is discovery, the research looks to empirical facts to provide novel explanations or suggestions for the problem being studied.

 Invention

The primary goal of fundamental study may be the development of novel approaches and procedures.

 Reflection

Different organizational and social settings are examined by the researchers to test the validity of the ideas, models, and methods developed.

ii. Applied/Practical/Need-Based/Action-Based Research

"Practical research," "need-based research," and "action-based research" all describe aspects of applied research. In contrast to fundamental research, which expands comprehension of the world for its own sake, applied research seeks to address a real-world problem using established ideas and methodologies. Fundamentally, it entails making use of the theoretical frameworks established by fundamental studies. It is an effort to address the issues that plague today's corporations, communities, and governments. The goal of applied research is to identify and then eliminate all kinds of real-world and societal issues.

2. On the Basis of Objectives

These aims provide a framework for the following types of research:

i. Exploratory/Formulative Research

The term "formulative research" may be used interchangeably with "exploratory research." The overarching goal of this kind of investigation is to uncover hitherto undefined facts or occurrences. Through the process of establishing and evaluating hypotheses, researchers undertaking exploratory studies want to learn more about a topic and create novel ideas and theories. Whenever a concept is either too broad or too narrow, it becomes challenging to construct a testable hypothesis. Due to this, it is necessary to do exploratory research in order to gather data that may be used to build hypotheses and guide future experiments. Exploratory studies aid researchers in determining the most appropriate approaches, methodologies, and data-gathering strategies for their specific research questions.
ii. Descriptive/Statistical Research

Descriptive research refers to study that aims to shed light on the unique qualities of the population being studied. Its guiding theoretical framework is "reflective thinking," which involves a critical examination of a research project's aims and presuppositions. Research that focuses on description seeks to address fundamental questions, including "who," "what," "when," "where," and "how," in relation to a given phenomenon or scenario. It may be applied to any topic that can be measured quantitatively. Frequency distributions, mean values, and correlation coefficients are typically derived from the data in such studies. It may be more efficient to first perform surveys before diving into the descriptive study.
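
As a small illustration of the quantities named above, the sketch below computes a frequency distribution and a mean from a hypothetical set of survey responses (the numbers are invented for illustration):

```python
from collections import Counter
from statistics import mean

# Hypothetical survey data: ages of 12 respondents
ages = [23, 31, 23, 45, 31, 23, 52, 31, 45, 23, 31, 52]

freq = Counter(ages)   # frequency distribution of each observed age
avg = mean(ages)       # mean value of the sample

print("Frequency distribution:", dict(freq))
print("Mean age:", round(avg, 1))
```

Summaries like these describe the sample but, as the section stresses, they say nothing by themselves about why the distribution looks the way it does.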

iii. Experimental/Causal/Explanatory Research

Experimental research, often known as "causal research" or "explanatory research," aims to isolate the variables that contribute to an outcome. The model measures changes in the dependent variable that result from changes in the independent variable. It is standard practice in experimental research to compare the results obtained from two groups. A study's "experimental group" refers to those participants who receive the intervention, whereas the "control group" consists of those who do not. The effectiveness of the treatment is evaluated by contrasting the results from the experimental group with those obtained from the control group; the outcomes of the treatment are thereby determined. The results of experimental studies are not always simple and straightforward because of the influence of extraneous variables. Thus, it is important to hold the control group constant while making adjustments to the experimental group in order to get reliable findings from the experiment.
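
The control-versus-experimental comparison described above can be sketched as follows. The scores are hypothetical, and Welch's t statistic is used here simply as one common way to contrast the two group means:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical post-intervention test scores
experimental = [78, 85, 82, 90, 88, 84]  # participants who received the treatment
control = [70, 74, 69, 77, 72, 75]       # participants who did not

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    var_a, var_b = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(var_a / len(a) + var_b / len(b))

diff = mean(experimental) - mean(control)
t = welch_t(experimental, control)
print(f"mean difference = {diff:.1f}, t = {t:.2f}")
```

A t statistic that is large relative to the appropriate critical value suggests the group difference is unlikely to be due to chance alone; a full analysis would also report degrees of freedom and a p-value.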

3. On the Basis of Extent of Theory

Two distinct forms of study may be distinguished depending on the level of theory employed:

i. Theoretical Research

The goal of theoretical study is to expand understanding by uncovering previously unknown concepts and theories that can be explained by, and contribute to, current understanding. It relies for the most part on secondary rather than primary data, with an emphasis on exploring rather than verifying ideas and models. Although theoretical study offers numerous upsides, it has also received much criticism throughout its history. The lack of a testing component within theoretical research is the source of many of the criticisms leveled against it. There are those in the academic community who argue that theoretical work is not legitimate since it does not require proof to be formulated. However, this argument does not stand up: the process of conceptualization is essential to any kind of academic inquiry. The goal of theoretical inquiry is to learn more about pre-existing ideas so that new insights may be added to the current pool of knowledge.

ii. Empirical Research

Empirical research relies heavily on empirical data. This sort of study gains understanding through observing or experiencing something. The primary data used in such a study are gathered, analyzed, and put to the test in order to support or refute certain assumptions. There are two main methods for doing empirical research: qualitative and quantitative. For instance, large-scale studies devoted to health concerns often rely on empirical research methods. Empirical studies rely on direct observation and quantitative analysis as opposed to theoretical assumptions. Their goal is to generate original insights through amassing first-hand information. Therefore, the fundamental distinction between theoretical and empirical studies is that in the former the researcher draws conclusions on the basis of the prior literature, while in the latter they go out and gather data to put the theory through its paces.

4. On the Basis of Methodology

Research may be broken down into two categories, depending on the techniques used:

i. Qualitative Research

Qualitative research methods are used to learn about and understand human behavior. Qualitative work is often the first step toward quantitative studies: whenever there is a demand for fresh ideas and hypotheses, qualitative research is conducted so that they may later be examined and analyzed using quantitative methods, through which findings can then be generalized. The primary purpose of qualitative research is to examine and gain in-depth understanding of a certain behavior via the collection of novel data using a number of methods. Qualitative research helps one understand how people in the study population interpret their surroundings, how those surroundings influence their behavior, and so on.

ii. Quantitative Research

Quantitative research stands in contrast to qualitative research. It is a scientific method that analyzes data using statistical measures to draw conclusions about a study's results. Quantitative methods are used in many fields of study, including the social and natural sciences, education, and others. The goal of quantitative research is to systematically generate and test hypotheses with the use of statistical and mathematical approaches. For instance, studies may be compared to see how different gun buyback programs affect crime rates.

5. Other Types

There are various forms of research besides the aforementioned main ones:

i. Evaluation Research

The term "evaluation" may have a wide range of meanings depending on the context of the study, the methods used, and the conclusions sought. A common definition is the measurement and assessment of a condition in order to provide constructive feedback. There are usually criteria used to make the assessment. Advantages, efficiency, longevity, usefulness, and other similar factors are often considered while assessing the worth of a certain item, and the assessment process often places a premium on an item's practical uses. The purpose of evaluation is to learn more about something by judging it against a set of standards. In a nutshell, it is an exercise in comparative analysis, wherein the means by which the study's stated aims have been achieved are analyzed for areas of potential improvement. An evaluation is summative if it draws conclusions that indicate the study's limitations.

ii. Action/Participatory Research

Action research, commonly referred to as "participatory research," is an approach to problem-solving in which members of a team or organization actively work to find solutions to existing problems. In this method, members of a company's staff work together on research that will ultimately lead to positive transformation. Large organizations also engage in action research in order to improve the methods and procedures they use in their daily work. As a method of inquiry, action research is structured, collaborative, and critical. It makes an effort to link problem-solving techniques with scholarly study in order to anticipate potential shifts in an organization.
iii. Historical Research

Historical research involves systematic methods of gathering, evaluating, and interpreting relevant historical material. Such research is conducted to learn from the past by gaining insight into the events that took place and the patterns that emerged as a result. The study of the past may aid in the creation and improvement of many contemporary methods and practices. Although qualitative methods predominate in historical studies, quantitative techniques are nevertheless occasionally applied. The goal of historical study is to uncover the influences of the past on present-day events. Investigating the past may reveal important organizational techniques that can still be used now.

iv. Ex-Post-Facto Research:

The goal of ex-post-facto study is to identify and understand the conditions that led to an observed outcome. This kind of study identifies the causes of the effects and then applies them to other contexts with the same characteristics. It takes place after an event or phenomenon has ended. Ex-post-facto research is the scientific study of both dependent and independent factors. Because the occurrence has already happened, the researcher has no direct influence over the factors that led to the observed consequences. Consequently, inferences about the relationships between the variables are drawn deductively.

1.1.5. Research Approaches

Approaches to research may be thought of as the set of practices and schemes that govern the study as a whole. The procedures for gathering data, analyzing it, and drawing conclusions are determined by the research strategy chosen; every step of the investigation adheres to it. Research objectives, previous research experience, and the target demographic all play a role in determining the most appropriate approach to use.

The research approach is a predetermined order of operations, beginning with overarching hypotheses and progressing to minutely specified procedures for gathering and analyzing data. Consequently, it depends on the specifics of the study's research topic. There are two main categories for how researchers approach their studies:

 Methodology of information gathering
 Methodology used in making inferences from data

The three main research approaches are:

 Qualitative,
 Quantitative,
 Mixed methods.

1. Quantitative research

The positivist and postpositivist epistemological frameworks are often used to describe quantitative studies. Such work often entails amassing and translating information into a numerical format for use in statistical computations and the drawing of conclusions.

The goal of quantitative research is to empirically test hypotheses by analyzing the interrelationships of several variables. These variables may then be quantified via the use of instruments and subjected to statistical analysis.

There is a predetermined format for the final written report, comprising an introduction, literature and theory, methodology, results, and discussion.

Researchers who undertake this kind of study assume that hypotheses can be tested deductively, that safeguards can be built in to prevent bias, that alternative explanations can be controlled for, and that the results can be generalized and replicated.

2. Qualitative research

Qualitative research is most often linked to the social constructivist model, which emphasizes that reality is itself socially constructed. It involves recording people's thoughts, deeds, and feelings, even their seemingly contradictory attitudes, actions, and sentiments, in order to analyze and understand them more deeply. The goal of such research is to better comprehend individual experiences rather than to collect data that can be extrapolated to a wider population.
Qualitative research serves to uncover the significance that people and communities attach to a human or social issue. The method of study consists of the following steps: the emergence of questions and procedures, the collection of data in the participants' setting, the inductive development of analysis from specifics to concepts, and the researcher's evaluation of the information gathered. The final copy of the report might be organized in a variety of ways. Researchers who adopt this approach value inductive reasoning, personal interpretation, and an emphasis on accurately depicting a situation's nuanced complexity.

3. Mixed methods

Mixed-methods research proceeds from a pragmatic perspective. Instead of getting bogged down in philosophical discussions about which technique is superior, scientists who take a pragmatic approach simply choose the one that seems to work best for their particular research challenge.

Since this is the case, pragmatic researchers allow themselves the flexibility to employ either qualitative or quantitative processes, strategies, and approaches. They understand that every strategy has constraints and that these methods may complement one another.

The term "mixed methods research" refers to a methodology of inquiry that combines both qualitative and quantitative data collection, analysis, and integration via the use of separate but complementary designs informed by philosophical presumptions and conceptual frameworks. It is based on the premise that a more comprehensive knowledge of a study subject may be attained via the integration of quantitative and qualitative techniques.

1.1.6. Significance of Research

The importance of research can hardly be overstated, since it aids both in solving problems and in increasing knowledge of the world. It is crucial in expanding horizons and creating cutting-edge tools.

Research plays a crucial role in academia, since it allows scholars to build on prior knowledge and further enhance their comprehension of the world. The public should care about it because it has the potential to enhance the way people live by providing solutions to challenges.

The numerous subfields of academic inquiry each have their own unique value. The goal of basic research is to increase understanding of the world, whereas the goal of applied research is to apply that understanding to the solution of specific issues. Medical advances depend on clinical research, whereas advances in the knowledge of human behavior may be aided by studies conducted in the social sciences.

 Significance of Research in the Development of New Technologies

The advancement of technology is impossible without research. Research involves systematically investigating a topic with the goal of gaining new knowledge or confirming the accuracy of previously held beliefs. Essentially, it is the act of questioning and investigating something to learn more.

New discoveries and innovations in knowledge would not be possible without research. The wheel and fire, two technologies that most of us probably never gave much thought to, were really the product of extensive study. Furthermore, people cannot address issues or gain insight into their environment without research.

It is therefore quite evident that research is crucial for the growth of understanding and the introduction of innovative technology. It is useful for learning about the world and figuring out how to fix issues.

 Significance of Research in the Academy

The significance of research has long been acknowledged by the academic world. Research is crucial in the academic community since it helps move each field forward. Comprehension of the world beyond what can be directly seen would be severely hampered without investigation: it is only via research that researchers are able to investigate events which are too small, too large, too fast, or too slow for direct observation.

The creation of cutting-edge technology also relies heavily on research. In many cases, researchers use a step-by-step approach, with each subsequent study expanding on the foundation laid by its predecessors. Research builds on itself over time, leading to discoveries and advancements. Scientific study is essential to the academy's goal of expanding human understanding and enhancing human flourishing. That is why colleges and other places of higher education and research play such a crucial role in driving economic and social development.

 Significance of Research in the Public Arena

It is impossible to overstate the value of research in the public sphere. It is crucial for educating the general public on important matters and generating novel approaches to solving urgent concerns. Research enables us to comprehend intricate problems and gives a basis for wise decision-making.

Much of the discussion concerning research also occurs in the public sphere. It is not uncommon for people to disagree on the best ways to conduct studies and the most pressing issues to investigate. Research should be directed toward the most urgent societal concerns, so these discussions are vital.

The importance of research in enhancing the standard of living for every member of society cannot be overstated. Research is crucial for solving current problems and educating the public about crucial subjects, and its impact on the public sphere should not be underestimated.

1.1.7. Research Methods versus Methodology

Here are some key distinctions that help set research methodology apart from research methods:

 Research methods may be thought of as the series of steps a researcher follows while conducting an investigation. Research methodology, by contrast, is the systematic framework within which those procedures are applied to a research topic.
 Research methods are the means by which a research strategy is carried out. Research methodology, in contrast, is the science of conducting research in the most effective way possible.
 Research methods include things like conducting tests, questionnaires, interviews, and other similar activities. The focus of research methodology, in contrast, is on developing the skills needed to carry out experiments, tests, and surveys appropriately.
 Research methods encompass a wide range of inquiry strategies, whereas research methodology takes a holistic approach geared towards the study's overall goal.
 The goal of any research method is to unearth a workable solution to the issue at hand; the goal of research methodology is to identify the methods best suited to finding those answers.

1.1.8. Research and Scientific Method

In science, research simply refers to the process of collecting and evaluating data in order to draw conclusions. Due to their specialized understanding, scientists can come off as eccentric to the general public. It is true that they have specialized expertise, but what makes them scientists is their rigorous, tested, and generally replicable approach to solving problems. Research that does not adhere to scientific procedures is bound to fail, since scientific inquiry and analysis can easily debunk the veracity of its findings.

The scientific approach may be summed up as a
way of thinking about problems and their answers;
it is straightforward and easy to grasp. Scientific
methods adhere strictly to the principle of cause
and effect, and are thus methodical, logical, and
sequential in their arrangement. Research is an
in-depth examination of a topic from a variety of
angles in order to draw fresh conclusions or
uncover previously unknown information. The
scientific method simply refers to a particular
approach to performing studies. Since any person
of average intelligence can do research effectively
using standard scientific procedures, such
procedures are an essential component of every
study. Even when employing scientific
methodologies, conducting a study still requires
logical thought and keen observation skills.
A researcher who is confident in the validity of the
research and its findings will not hide the scientific
methods employed. There are fundamentally
three stages to each research project:

 Recognizing and analyzing the issue


 Methodologies for exploring questions that
need an explanation
 Measurement, analysis, and testing

If there's any uncertainty regarding the veracity of
the findings, it may be essential to repeat this
process many times, and this is where scientific
techniques prove very helpful. They guarantee
that experiments can be repeated with the same
results every time. In addition, these procedures
allow any reader to attempt to reproduce the
approach on their own and reach the findings
described at the completion of every investigation.

1.1.9.Research Process

A researcher follows a research process, or a
series of procedures, in order to guarantee that
the inquiry is thorough and accurate from start to
finish. Adhering to the research process ensures
that the researcher covers all bases and presents
the material gathered in an organized and
convincing manner.

1. Research Process Steps

The research process comprises a sequence of
steps that have to be taken in a methodical fashion
if the researcher intends to provide useful
information for the project and zero in on the right
subject matter. Knowing and sticking to the phases
of the research process is essential for producing
reliable results. In order to make the research
more manageable, consider the following steps:

Step 1: Identify the Problem

The first stage in doing research is identifying a
problem or developing an issue. A well-articulated
research question will serve as a roadmap for the
whole study, from goal-setting through
methodology selection. There are many ways to
learn more about a subject and develop an
in-depth comprehension of it. For example:

 A preliminary survey
 Interviews with a small group of people
 Case studies
 Observational survey

Step 2: Evaluate the Literature

The research process would be incomplete without
a careful review of the applicable studies. This aids
the researcher in zeroing in on the specifics of
the issue at hand. The next step for a researcher
after discovering a problem is to gather additional
information about it.

This phase entails supplying context about the
trouble spot. It educates the researcher about the
methods used in past studies and their findings. By
reviewing the prior work of other people,
researchers can check that their own findings are
consistent with it. A thorough review like this
introduces them to a wider range of information
and guides them through the research process
with more ease.

Step 3: Create Hypotheses

Once a study subject has been refined and defined,
the following stage is to formulate a testable
hypothesis. The hypothesis specifies the logical
connections between the different factors.
Researchers need to know a lot about their topic of
study in order to formulate a working hypothesis.

When developing a hypothesis, it is crucial that
researchers keep the study issue in mind. The
development of guiding ideas helps researchers
channel their energies and maintain focus on their
goals.

Step 4: The Research Design

The design of a study is its blueprint for reaching
its goals and resolving its issues. Specifically, it
describes where to look for the necessary data.
The purpose of this step is to aid decision-making
by guiding the development of research plans that
may be used to test hypotheses and answer
research questions.

The plan for the study is made with efficiency in
mind, so that as few resources as possible are
spent on gathering the data. This strategy may be
broken down into four parts:

 Exploration and Surveys


 Data Analysis
 Experiment
 Observation

Step 5: Describe Population

The focus of most studies is on a particular
department, set of buildings, or method of
applying a technological tool to a firm. The word
"population" is used in scientific contexts to
describe this group. The study population is
selected with consideration given to the research
question and objectives.

Let's say a researcher has an interest in learning
more about a certain section of the local
population. In such a circumstance, the study's
subjects may be narrowed down by age, gender,
sexual orientation, country of origin, or ethnicity.
The last stage in research design is to define the
population and the sample from which the study
will draw its findings.
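
The relationship between a population and the sample drawn from it can be sketched in a few lines of Python. The population list, sample size, and random seed below are hypothetical, chosen purely for illustration:

```python
import random

# Hypothetical study population: all 500 employees of a firm,
# identified here only by an ID string.
population = [f"employee-{i}" for i in range(1, 501)]

# Draw a simple random sample of 50 subjects without replacement.
# A fixed seed makes the draw reproducible, mirroring the
# replicability that scientific procedure demands.
random.seed(42)
sample = random.sample(population, k=50)

print(len(population))                  # 500
print(len(sample))                      # 50
print(len(set(sample)) == len(sample))  # True: no subject drawn twice
```

Because `random.sample` draws without replacement, every subject appears at most once, which is what most survey designs assume.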

Step 6: Data Collection

In order to get the information or expertise needed
to address the research question, data gathering is
crucial. All studies gather information, either
from pre-existing sources or from the subjects
themselves. In either case, the researcher must
plan the data collection process carefully.

Possible primary sources include the following:

 Experiment
 Observation
 Questionnaire
 Interview
The following are examples of secondary
categories of data:

 Literature survey
 An approach based on library resources
 Official, unofficial reports

Step 7: Data Analysis

The researcher prepares for data analysis as part
of the study design process. The analysis of data
comes after the data collection phase. In this
stage, the data is analyzed in light of the method
used. The results of the study are discussed and
presented.

There are many interconnected steps in data
analysis, including the creation of categories, the
application of those categories to the raw data
through coding and tabulation, and the drawing of
statistical inferences. Several statistical
techniques are at the researcher's disposal for
analyzing the collected data.
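
The category-coding and tabulation steps described above can be sketched in Python. The raw responses and the numeric codes are hypothetical, invented only to show the mechanics:

```python
from collections import Counter

# Hypothetical raw interview responses on job satisfaction.
raw_responses = [
    "very satisfied", "satisfied", "unsatisfied",
    "very satisfied", "neutral", "satisfied", "satisfied",
]

# Step 1: establish categories and assign each a numeric code.
codes = {"very satisfied": 1, "satisfied": 2,
         "neutral": 3, "unsatisfied": 4}

# Step 2: apply the codes to the raw data (coding).
coded = [codes[r] for r in raw_responses]

# Step 3: tabulate - count how often each code occurs.
table = Counter(coded)

for code in sorted(table):
    print(code, table[code])
```

The resulting frequency table is the simplest form of tabulation; statistical inference would then operate on such tables.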

Step 8: The Report-writing

Following these procedures, the researcher is
responsible for writing up a comprehensive report
of the results. The report has to be meticulously
written with these things in mind:

 The Layout: The title, date,
acknowledgements, and introduction should all
appear on the opening page of the report, along
with a brief prologue. After the table of
contents, any graphs, tables, or charts should
be listed.
 Introduction: The introduction must outline
the objectives and methodology of the study.
The nature and constraints of the investigation
should be outlined there.
 Summary of Findings: After the introduction,
there will be a brief, layman's overview of the
report's outcomes and suggestions. If the
results are extensive, they must be
summarised.
 Principal Report: The primary report should
include a coherent main body that is clearly
divided into parts.
 Conclusion: The researcher must restate the
findings in the conclusion, linking them back to
the objectives set out in the introduction. This
is the final outcome of the study.

1.1.10. Criteria of Good Research

The following are some of the most crucial
requirements for a successful study:

 Clearly Defined Objectives

A research project's goals need to be specified. If
the goals of the study are crystal clear,
researchers will have a much easier time staying
on track. It aids researchers in identifying the
kinds of information needed for effective study.

 Ethically Conducted

In order to get reliable results from their studies,
researchers must act ethically. To keep things
open and honest with participants, the study
findings and the limiting variables should be
thoroughly reviewed, explained, and recorded.
There shouldn't be any tampering with the data to
make the results fit. Proper documentation of the
study's findings and evidence-based inferences are
prerequisites for a credible research paper.

 Flexibility

Research entails re-evaluating the data until the
right conclusions are reached. This can be
achieved, but only when the research strategy is
adaptable. There must always be room to either
add new, substantial data or modify the present
data to better suit the situation at hand.

 Reliability

The term "reliability" describes how well results
from studies, methods, and instruments can be
replicated. The consistency between studies is an
indicator of a study's dependability. The findings
of a study are considered credible if they hold up
when applied to several, independently collected
samples from the same population using the same
methods and circumstances. The impact of an
English composition class on a cohort's final
grades, for instance, may be the subject of a
research study. If comparable research conducted
with a different set of students yields the same
results, then those findings may be trusted.
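
One common way to quantify this sort of consistency is a test-retest check: administer the same instrument twice to the same group and correlate the two sets of scores. Below is a minimal Python sketch using plain-library arithmetic and hypothetical grades; it illustrates the idea rather than prescribing a procedure:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical final grades from two administrations of the same
# English-composition assessment to the same five students.
first_run  = [72, 85, 90, 64, 78]
second_run = [70, 88, 91, 66, 75]

r = pearson_r(first_run, second_run)
print(round(r, 3))  # a value close to 1.0 indicates consistent scores
```

A coefficient near 1.0 suggests the instrument measures consistently; a low coefficient would cast doubt on the reliability of the grades.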

 Validity
Research validity refers to how widely its findings
may be applied. It refers to how well the research
tool or method works for answering the research
question: the precision with which the instrument
measures the problem is evaluated. It is a
criterion for evaluating the practical significance
of the study. The validity of a study finding,
premise, or hypothesis is the foundation for
determining its veracity. Maintaining the validity
of studies requires precise concept definitions.

 Accuracy

Accurate research may be identified by the
correlation between the research procedure, the
equipment used, and the results obtained. It
ensures that the right methods are chosen for the
study being conducted. For instance, if a study
were to be conducted on people with mental
illnesses, observation might be the most effective
method of data collection, since they aren't always
able to complete questionnaires or provide
accurate answers in an interview.

 Credibility of Sources

To ensure the validity of the findings, reliable
information must be used in the study. Even
though a researcher may save time by using
secondary data, doing so can come at the expense
of credibility, since secondary data are sometimes
modified and, as a result, drawing conclusions
based only on them can be problematic. When
doing research, it is preferable to utilize primary
sources wherever available. Secondary data may
be utilized in place of primary data up to a certain
threshold. However, relying only on secondary
sources can undermine the reliability of a study.

 Generalisable Results

Generalizability is a measure of how well a study's
findings may be extrapolated to the population at
large. Research requires selecting a representative
subset of a larger population, so that the sample,
and hence the results of the study, are
representative of the population of interest. The
results of a study are said to be generalizable if
they may be applied to additional samples drawn
from the same or a comparable population.

1.1.11. Problems Encountered by Researchers in India

There are a number of issues confronting
researchers in India, especially those who are
doing empirical studies. The following are
examples of a few of the most pressing issues:

 Researchers within the nation face a significant
barrier due to a lack of specialized education in
research methodology. There is a dearth of
qualified researchers in the field. The majority
of researchers take a shot in the dark, since
they aren't familiar with research procedures.
The vast majority of activities dubbed
"research" are not really conducted in a
scientifically reliable manner. A lot of
researchers, along with some of their
supervisors, regard research as nothing more
than a "scissors and paste" task, with little or
no new information gleaned from the compiled
resources. The apparent repercussion of this is
that the findings of studies seldom represent
actual conditions. This highlights the critical
need for a comprehensive examination of
research methods. Researchers must have a
firm grasp of all methodological considerations
before beginning projects. Therefore, it is
important to provide accelerated learning
options to fulfill this need.

 There's a lack of communication between the
academic research community and the
commercial world, government agencies, and
various other research organizations. Due to a
lack of connections, researchers are unable to
access and process a large quantity of primary
data that is not considered sensitive. Better,
more realistic studies may be conducted if all
parties involved can find a way to communicate
effectively. In order for academics to obtain
ideas from practitioners on what needs to be
investigated, and for practitioners to utilize
the research conducted by academics, the
processes of a university-industry interaction
programme must be developed.

 Businesses in the country are often hesitant to
provide researchers with the necessary
information because they lack faith that it
would be handled responsibly. Businesses in
this country seem to hold confidentiality in the
highest regard, creating an insurmountable
wall between themselves and academics.
Therefore, it is important to instill trust that a
company's data and information will be
used appropriately once collected.

 Due to a lack of data, many studies that cover
the same ground are often conducted. This
wastes time and money due to unnecessary
repetition. The solution to this issue is to
maintain an up-to-date list of the topics being
studied and the locations where these studies
are being conducted. There has to be more
focus on identifying research topics in different
areas of applied science that are of urgent
importance to industry.

 Researchers lack a code of conduct, and
institutional and disciplinary rivalries are
commonplace. Therefore, it is necessary to
establish a code of conduct for researchers
that, if strictly followed, may resolve this issue.

 The lack of timely and competent secretarial
support, including IT support, is a problem for
many academics in India. Research output is
delayed unnecessarily as a result. The best
possible efforts should be made in this area to
ensure that researchers have access to timely,
effective secretarial support. The University
Grants Commission has to take an active part
in finding a solution to this problem.

 The administration and operation of many
libraries is subpar, requiring scholars to spend
more time and effort tracking down the books,
journals, papers, etc., than locating the
important content inside them.

 A further issue is that most of the country's
libraries lack access to copies of Acts/Rules,
reports, and various other government
documents as soon as they are made available.
The impact of this issue is felt most acutely in
libraries located far from Delhi and the state
capitals. Therefore, efforts must be made to
ensure that those libraries are consistently and
rapidly supplied with all government
publications.

 Another challenge is the slow pace at which
government and other organizations that
collect and disseminate this kind of information
make it public. This poses a challenge for
researchers, since there is a great deal of
inconsistency in the reported data due to
discrepancies in the scope of coverage provided
by the relevant authorities.

 Difficulties of conceptualization, along with
practical problems of data collection and
similar matters, may also arise at times.

1.2. Defining the Research Problem

1.2.1.Research Problem

The term "research problem" is used to refer to a
particular issue that calls for more study and
analysis. It's a pithy statement that sums up the
gap between the existing state of knowledge and
the information that needs to be gathered, or the
gap between the present scenario and the ideal
one.

Research questions are examined to help
determine the most important ideas and
terminology to use in the study. A well-defined
research problem will serve as a road map for the
investigation and will help shape the study's goals,
methodology, and findings. It is the backbone of
every research effort; therefore a well-formulated
research topic is essential for any study to provide
reliable results.

1. Characteristics of a research problem

In order to have a productive research problem, be
sure it has these qualities. Given the wide range of
research that can be done, not every study can
exhibit all of these traits. Nevertheless, to help
others see, analyze, and comprehend the
marketing research challenge, it's important to
take into account and address most of these
aspects.

 Covers the essential needs or issues

The researcher has to have a clear explanation of
the issue they're attempting to solve. The research
effort won't amount to anything unless the major
concerns are addressed. If they are not major
concerns, the extensive study effort may be a
waste of time and resources. When developing
marketing strategies, be careful not to ignore
the most pressing issues and requirements.

 The problem is stated logically and clearly


The study project's topic is either too vague or too
abstract to be worth pursuing if one has trouble
articulating it clearly and rationally. To verify this,
the researcher should summarize the issue in a
single paragraph and check whether the
description makes sense and touches on all the
relevant topics. Discuss the issue with other
people; if they don't get it, the researcher may
need to take a different, more methodical
approach to defining the problem.

 The research project is based on actual
facts and evidence (non-hypothetical)

Truth is objective, whereas opinion is subjective.
Don't add any fictitious elements. The research
needs to be based on hard evidence, not
speculation. The study will not benefit from
considering what-if scenarios. In the absence of
supporting data, the results of the study cannot be
accepted as reliable. A relevant and falsifiable
hypothesis must be proposed.

 The research problem generates and
encourages research questions

The study needs to generate a number of queries.
A study that highlights various facets of the issue
calls for additional, targeted inquiry. These
questions should strengthen the research's
foundation so that it can more effectively address
the problem. Properly formulating such questions
is difficult and requires careful consideration.

 It fits the budget and time frame


Verify whether the research project can be
completed in the allotted time and budget.
Research effectiveness may be ensured by paying
attention to the logistics involved. It would be a
terrible waste of time if the study had to be
abandoned because of a lack of resources (both
financial and human) that would have allowed it to
be completed on schedule. Focus exclusively on
manageable issues.

 Sufficient data can be obtained

The conclusions of a study can only be taken at
face value if they are supported by a sufficient
number of independent examples. The validity of a
theory cannot be assessed or confirmed by
research based on insufficient evidence. There's no
use in doing research if the necessary data is not
accessible.

 The problem lacks a satisfactory answer, or
is new

Make sure there's little to no literature on the
subject. If a solution to the issue has previously
been identified and is known to be effective, it's
generally not worth continuing to look for it again.

2. Components of a research problem

These elements make up a research problem:

 Research consumer

In order for there to be an issue, there must be a
group of people who are having trouble. It's
possible that the researchers are members of the
affected population. The term "research
consumers" refers to everyone else who deals with
the issue but is left out of the study.

 Research consumer's objective

The person doing the study must have an issue or
need that they want to solve. Without an issue,
there's no need to look into it.

 Alternative means to meet the objective

When tackling a marketing research issue, it's
important to always have a "Plan B." This
means that there should be no fewer than two
potential avenues or strategies for achieving the
study goal. If there are no alternative options or
ways to achieve the study goal, there is nothing
for the researcher to choose between.

 Doubts in the selection of alternatives

The availability of alternative approaches to
achieving the goal is but one aspect of the issue at
hand. The methods should be sufficiently different
from one another to leave the researcher
wondering which one to choose. This significantly
improves the credibility of the study.

 There must be more than one environment

It is essential that the issue be present in several
contexts. It's possible that if certain external
circumstances were altered, the situation might
improve. A researcher may have doubts regarding
the best approach in setting 'A,' while those same
doubts won't exist in setting 'B.'
1.2.2.Selecting the Problem

There are a lot of things to bear in mind when
choosing a research subject or topic to investigate,
both to guarantee that the research is feasible and
to ensure that the researcher can keep working on
it without losing interest. The factors to think
about are:

 Interest: Choosing a research issue based on
factors other than pure interest is a bad idea. A
study project often takes a long period of time,
requires a lot of effort, and may encounter
unexpected difficulties. Selecting a subject that
does not really interest the researcher might
make it tough to stay motivated and devote the
necessary time and effort to seeing it through
to completion.
 Magnitude: The ability to estimate the time
and effort required to carry out the suggested
study reflects one's comprehension of the
research process. Narrow the topic down to
something understandable and doable. It's
crucial to choose a subject the researcher can
tackle in the allotted time along with the
available materials. It is important to think
about the scope of any research project, even if
it is purely descriptive in nature.
 Measurement of Concepts: When doing
quantitative research, it is important to define
and assess key indicators of success. For
instance, a researcher who wants to evaluate a
health promotion program's efficacy must first
define success and how it will be tracked.
Concepts for which no clear metrics exist
should be avoided in any given research task.
This, however, does not exclude the creation of
a measuring strategy as the research develops.
 Level of Expertise: Before offering to do
anything, be sure there's the right level of
expertise. Although one should expect to
learn new things and receive some assistance
from research supervisors and other people
throughout the study, one will still be
responsible for carrying out the bulk of the
work.
 Relevance: Focus on something that has
practical application for people in the area of
expertise. Make sure a research project is
meaningful in terms of filling in gaps in
comprehension or informing policy decisions.
This will keep the passion for learning high.
 Availability of Data: If the subject requires
gathering data from secondary sources such as
office documents, customer records, census
data, or similar already-published reports,
ensure that the required data is readily
accessible, in whatever format is chosen,
before settling on the subject.
 Ethical Issues: When defining a research
topic, it is also crucial to think about any
relevant ethical considerations. The people
being studied may be put under emotional or
psychological stress if they are asked
inappropriate questions, are required to reveal
private or personal information, or otherwise
feel that they are being used as guinea pigs in
an experiment. Ethical implications for the
research population, and potential solutions,
should be carefully considered throughout the
problem conceptualization phase.

1.2.3.Necessity of Defining the Problem

A research issue has to be defined in order to be
formulated correctly. A common saying among
researchers is that outlining the issue helps
immensely in finding a solution. It is clear from
this remark that a research issue has to be
specified. A clear definition of the issue at hand is
essential for isolating useful information from
noise. A well-defined research issue will keep the
researcher on course, whereas a fuzzy one will
only serve to throw up roadblocks. Questions such
as "What information needs to be collected?",
"Which aspects of the data matter most and
should be investigated?", "What connections need
to be made?", "How should researchers approach
this problem?" and comparable additional
questions arise in the mind of the researcher, who
can adequately organize the approach and
discover solutions to all such inquiries only after
the research issue has been clearly stated.
Therefore, correctly establishing a research issue
is crucial to any investigation. The process of issue
conceptualization is frequently more important
than the solution itself. In order to figure out the
research strategy and carry through all of the
subsequent procedures associated with doing
research, a thorough description of the research
issue is required.

1.2.4.Technique Involved in Defining a


Problem
Defining the research problem is an essential step
in a research study and should never be rushed.
This represents a major source of trouble in the
real world, although it is often ignored. Thus, the
research issue has to be stated methodically, with
all relevant considerations taken into account. The
procedure for accomplishing this goal entails, in
general, the execution of the following actions in
sequential order:

 State the problem in a general way

Introduce the issue in broad terms, then explain
how it relates to something of practical,
mathematical, or philosophical relevance. The
researcher can either do extensive reading on the
topic at hand or consult an authority in the field for
this purpose. Typically, the guide will present the
issue at hand in broad strokes, leaving it up to the
researcher to choose how specific to be. The
problem statement should be checked for
ambiguity and feasibility.

 Understand the nature of the problem

The next step aims to identify the cause of the
issue. To fully comprehend the problem's history,
nature, goals, and the context in which it will be
investigated, the researcher must speak with
persons who have expertise in the area.

 Survey the available literature

It is necessary to research and analyze all relevant
theoretical works, reports, documents, and other
written materials on the issue. This will enable the
researcher to better understand what information
is already accessible, what methodologies may be
employed, what challenges could arise, where
there could be analytical gaps, and where there
might be opportunities for novel approaches to the
topic at hand.

 Go for discussions for developing ideas

The researcher can speak with other experts in the
field and colleagues about the issue. This allows
researchers to better concentrate on certain
elements of the topic while also generating new
ideas, identifying other facets of the issue, and
receiving recommendations and opinions from
others. Nevertheless, conversations shouldn't
center just on the issue at hand; they also need to
touch on broader questions of methodology,
potential approaches, potential outcomes, etc.

 Rephrase the research problem into a


working proposition

The researcher next has to restate the issue as a
testable hypothesis. Basically, rephrasing the issue
entails restating it in more manageable terms that
might aid in the creation of workable hypotheses.
Following the preceding procedure, the researcher
may easily reframe the issue in analytical and
practical terms.
CHAPTER-2:Reviewing the
literature

2.1. Introduction

Creating a literature review involves analyzing,
synthesizing, and critically examining the material
uncovered in the course of a literature search. The
information might serve as a springboard for
further exploration, or provide perspective for
primary research. The literature has to be
reviewed for several reasons:

 Determine recent progress in the subject of
study.
 Study the research techniques and information
resources.
 Find unanswered inquiries in the existing body
of knowledge.
 Prove that the study is really unique.
 Determine the efficacy of the approaches
 Identify common pitfalls
 Draw attention to the area's benefits,
drawbacks, and contentious issues.
 Identify the relevant specialists

The following goals should guide one's work on the
review:

 Establish credibility by updating the
audience on recent advancements in the
industry.
 Defend the value and importance of the
inquiry(s).
 Describe the larger context in which the
process was developed.
 Defend the applicability and significance of
the methodology.

Many factors, including the review's intended use
and readership, might affect how extensive or
detailed the review must be. For a doctoral
dissertation or thesis, for instance, it can be
helpful to conduct a thorough literature review
which covers not only the most recent and easily
accessible literature on the subject, but also more
obscure, scholarly works that might not appear in
standard library databases.

1. Place of the literature review in research

Literature reviews are an excellent reference tool
for learning more about a certain subject. The
review of the literature may provide an overview
or serve as a jumping-off point for further research
when researchers are pressed for time. These
papers serve an important purpose for
professionals by keeping them abreast of
developments in their respective fields. Scholars
will place more stock in a piece of writing if it
features a comprehensive literature review that
covers a wide range of relevant topics. The review
of literature may also serve as a strong foundation
for the study in a research report. The majority
of research papers need the author to have a
thorough familiarity with the relevant literature.
Literature reviews are more common in the social
sciences and natural sciences, though they appear,
more rarely, in humanities fields as well; they also
make up a significant part of experiments and
laboratory reports. The literature review section
of a study may sometimes stand on its own.

A literature review typically appears near the start
of a dissertation or thesis. It follows the
introduction and gives context to the study before
moving on to discuss the theoretical structure or
research methods.

2.1.1.Functions of literature review

There are primarily three goals to be achieved by
doing a literature review. It's useful for:

1. Bring clarity and focus to your research problem

There is a two-way relationship between the research problem and the literature review. A researcher cannot conduct a useful literature search without first having a rough notion of the issue to investigate. At the same time, examining the literature enables the researcher to grasp the topic area better, which aids in conceptualizing the research problem clearly and precisely and can play a vital part in developing it. As an added benefit, doing so clarifies the connection between the research topic and the existing literature.

2. Improve your methodology
Reading the works of previous researchers may reveal the methods that have succeeded in answering problems comparable to the one being pursued. When doing a literature review, researchers may learn whether others have utilized the same or comparable techniques and approaches, which ones have been successful, and which ones have caused issues. Knowing what to look out for helps researchers choose an approach that will provide reliable answers to their inquiries, strengthens their conviction in that approach, and equips them to defend it.

3. Broaden your knowledge in your particular research area

The primary purpose of a literature review is to ensure thorough background reading on the topic area in which the research will be conducted. To answer the research questions, researchers need to be familiar with the results of prior studies, the hypotheses and models that have been proposed, and the gaps in the current body of knowledge. The study's results may be better contextualized after reading through the relevant literature. Finally, a comprehensive literature review demonstrates that the researcher has mastered the field.

2.1.2.Enabling contextual findings
2.1.3.Review of the literature
When writing literature reviews, writers often return to the source material for guidance as they refine their hypotheses and methods. It is not uncommon for new information to emerge in the literature before a study is finished, necessitating revisions to integrate the new data. Therefore, the literature review is not something written all at once, but something worked on before, during, and after the research is finished.

The review of literature is composed of the following five steps:

 Search for relevant literature
 Evaluate sources
 Identify themes, debates, and gaps
 Outline the structure
 Write your literature review

A thorough literature review involves more than merely summarizing the relevant literature; it also evaluates, synthesizes, and critically examines the material to provide a comprehensive picture of the current state of knowledge in the field.

2.1.4.Searching the existing literature

Literature searching is the process of locating pertinent material on a subject from the existing research literature. There are a wide variety of approaches, from quick exploratory searches to extensive, well-funded systematic reviews. A researcher who wishes to perform original research may first conduct a literature search to make sure the work has not already been done; a thorough investigation is needed to confirm this.

The goal of a literature search is the same regardless of scope: to add information to the decision-making process. Such searches are intrinsic to research and development in the scientific community. Evidence-based practice is a method in which judgments are made after thoroughly reviewing the relevant literature.

Whether researchers do quick literature searches within their field or conduct in-depth literature evaluations, a literature search enables:

 Setting the stage for and defending your research
 Investigating alternative research techniques
 Bringing to light the need for further research
 Reviewing prior research for its applicability
 Defining the gap between your research and the current body of knowledge
 Finding the limitations and biases of the current research
 Studying the jargon and frameworks specific to your area
 Observing broad patterns
 Recognizing the consensus view of scholars on certain issues.

2.1.5.Reviewing the selected literature
Since it is unlikely that a researcher will have time to read everything published on a subject, the sources best suited to answering the research question must be chosen. When reviewing each source, consider:

 Which issue or question is the author trying to solve?
 What are the most important ideas?
 What are the major frameworks, examples, and approaches?
 Does the research use well-known methodologies, or does it take a fresh approach?
 How was the study conducted, and what did the researchers find?
 How does this work fit in with previous research in the field? Does it provide new information or call previously held beliefs into question?
 What are the research's strengths and weaknesses?

2.1.6.Developing a theoretical framework

Theoretical frameworks provide an overview of relevant prior work and may be used as a guide for creating original arguments.

Researchers create theories to make sense of the world around them, find patterns, and anticipate the future. Explaining the preexisting theories that underpin your research is an important step in establishing the relevance and reliability of the paper's or dissertation's subject. This means that the first stage in writing a dissertation, thesis, or research paper is to develop a theoretical framework that will justify and contextualize the rest of the work. A solid theoretical foundation prepares the researcher for further research and writing.

2.1.7.Developing a conceptual framework

The conceptual framework is a visual representation of the assumptions behind your research. It lays out the goals of the research and shows how they connect to produce a logical final product. Specifically, it is a visual depiction of the hypothesized links between the research's variables, or between the variables and the qualities or features the researcher wants to investigate.

It is common practice to do a literature review of related research before beginning to develop a conceptual framework, which may take either a textual or visual form.

Step 1: Choose your research question

An effective research process requires a well-defined goal, and this goal is determined by the research question.

It is recommended that researchers build a conceptual framework before beginning data collection. This will aid in outlining the variables to be measured and the expected relationships between them.
Step 2: Select your independent and dependent variables

Identifying the dependent and independent variables is a necessary first step in answering the research question and conducting a cause-and-effect analysis.

It is important to remember that several factors might influence a single variable in a causal connection. As an example, let us assume that "hours of study" is the only independent variable.

Step 3: Visualize your cause-and-effect relationship

The first stage in developing a conceptual framework is drawing a picture of the predicted cause-and-effect connection between the researched issue and its factors.

Every arrow must begin at the cause (the independent variable) and terminate at the consequence (the dependent variable) to depict a causal connection.

Step 4: Identify other influencing variables

It is important to start thinking about how the dependent and independent variables could be affected by other factors at an early stage in the research procedure. Moderating, mediating, and control variables are often included:

1. Moderating variables
The influence of independent variables upon a
dependent variable may be modified by including
one or more moderating variables. The "effect"
part of the cause-and-effect connection is modified
by the presence of moderators.

2. Mediating variables

The next step is to include a mediating variable in the existing structure. The connection between the dependent and independent variables may be better described with the help of mediating factors, which serve as a bridge between them.

3. Control variables

Finally, control variables should be considered. These factors are kept fixed so that they do not affect the findings. Although the researcher will not quantify them in the study, it is still important to understand each of them as thoroughly as possible.

2.1.8.Writing about the literature reviewed

The literature review, like every other academic paper, should include an introduction, body, and conclusion. The contents of each section are determined by the following considerations:

1. Introduction

The opening has to make it very clear what the literature review will be focusing on and why.
2. Body

If the literature review is very lengthy, it may help to break the main body into subsections. Each separate topic, era, or research strategy might be presented under its own subsection.

These guidelines may help when putting pen to paper:

 Summarize and synthesize: Sum up the key aspects from each source and bring them together to form a unified whole.
 Analyze and interpret: Do not merely repeat what previous researchers have said; instead, examine and assess the data, emphasizing its importance in light of the existing literature.
 Critically evaluate: Point out the strengths and weaknesses of the evidence.
 Write in well-structured paragraphs: Use topic sentences and transitional phrases to help readers make the associations and distinctions you want them to make.

3. Conclusion

The last section of the literature review should include a brief summary of the key points and underline why they are important. After completing and editing the review, be sure to check it carefully for errors before submitting it.

2.2. Research Design

2.2.1.Meaning of Research Design

The research design is the overarching strategy that guides how data will be collected, analyzed, and interpreted. Designing a study in this way helps researchers home in on the most effective research methodologies for their specific topics.

Experimental, survey, correlational, and semi-experimental designs, along with review studies and their sub-types such as descriptive case studies, are the principal options weighed when framing a research subject.

A research design addresses three fundamental elements:

 Data collection
 Measurement
 Data Analysis

Organizational research issues should drive the design process, rather than the other way around. The design stage is when the choice of instruments and methodology is made.

2.2.2.Need for Research Design
The following are some of the reasons why
research designs are employed:

1. Reduces Cost

Planning the research activity in advance through a research design is necessary to cut down on unnecessary expenses of effort, time, and money.

2. Facilitates Smooth Scaling

A well-thought-out research plan must be in place before any attempt at scaling can be successful. It facilitates research processes that are efficient enough to provide the most applicable results possible.

3. Helps in Relevant Data Collection and Analysis

Researchers benefit from a well-thought-out research design because it allows them to organize data collection and analysis procedures to best achieve their goals. Since it serves as the basis for all research, it is also accountable for producing trustworthy results. If the research design is not properly planned out, the whole study might be compromised.

4. Assists in Smooth Flow of Research Operations

A proper research design provides a more solid framework for the study. Consequently, it allows for the efficient execution of research operations and minimizes the likelihood of difficulties arising during execution, since all choices are decided in advance.

5. Helps in Getting Reviews from Experts

Research design aids in creating an overarching perspective of the research process, which in turn facilitates collecting comments and evaluations from specialists in the subject.

6. Provides a Direction to Executives

A well-planned research project guides both the researcher and any executive participants in providing useful input.

2.2.3.Features of a Good Design

It is generally agreed that the quality of a research design may be judged by its capacity to minimize bias while maximizing the dependability of the data used in the study. The design should reflect the many facets of the research topic, providing maximal data collection while minimizing experimental error. The research topic and the kind of study therefore dictate the research design used. In general, a successful research design will include the following elements:

 Objectivity

The term "objectivity" describes a research method's capacity to draw unbiased conclusions from its data. A well-thought-out research plan will choose only those methods that provide neutral results. Keeping an objective perspective may seem simple in theory, but in practice it can be challenging while doing research and analyzing data.

 Reliability

Reliability is also a crucial part of any sound research plan. When posed the same question, research tools should provide consistent results; inconsistent results indicate that the instrument is untrustworthy. The consistency of answers is thus a proxy for the trustworthiness of a research strategy.

 Validity

A solid research design will provide appropriate answers to the research problems. It has to zero in on the research's intended outcome and provide a clear strategy for getting there.

For instance, the goal of advertising research must be to ascertain the impact of commercials on viewers, rather than to increase product sales.

 Generalisability

If the results of the study can be extrapolated to the larger population from which the sample was drawn, then the research design may be considered generalizable. Generalizability may be achieved by meticulous definition and selection of the population and sample, statistically sound data analysis, and methodologically sound preparation. Consequently, the more widely applicable the results, the more effective the research strategy.

 Sufficient Information

Any research is done to unearth hitherto unknown particulars. The researcher has to be able to get a comprehensive understanding of the research issue, and this is possible only if the research design is robust enough to support it. Both the study's fundamental issue and its overarching goal should be stated explicitly in the research plan.

 Other Features

In addition to the aforementioned qualities, other aspects contribute to a solid research design, such as adaptability, flexibility, and efficiency. An effective research plan will reduce inaccuracies while maximizing precision.

2.2.4.Important Concepts Relating to Research Design

Examine the following fundamental research design ideas:

 Research Methodology

The term "research methodology" describes the strategy used to carry out a study. This includes the study's design, the means by which data will be collected, and the methods by which the data will be analyzed.

 Data Collection Methods

Methods for collecting data include surveys, interviews, and observations.

 Data Analysis Techniques

Data analysis techniques are the particular statistical procedures, including t-tests, ANOVA, and regression analyses, that will be utilized to examine the gathered information.
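As an illustration of one such procedure, the sketch below computes Welch's two-sample t statistic, which compares the means of two groups without assuming equal variances. The data and group names are hypothetical, and the code is a minimal teaching sketch, not a full hypothesis test (it omits degrees of freedom and p-values):

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic: (mean difference) / (standard error),
    without assuming the two groups share the same variance."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)   # sample variances
    se = math.sqrt(va / na + vb / nb)                 # SE of the mean difference
    return (mean(sample_a) - mean(sample_b)) / se

# Hypothetical test scores under two teaching methods
group_a = [78, 82, 85, 74, 80, 79]
group_b = [71, 75, 69, 72, 74, 70]
t_stat = welch_t(group_a, group_b)
print(round(t_stat, 2))  # prints 4.38
```

In practice a library routine (for example one that also returns the p-value) would be used; the point here is only that each named technique is a concrete computation on the gathered data.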

 Sampling

The term "sampling" describes the process of choosing a representative subset of individuals to take part in a research study.

 Validity and Reliability

Validity refers to how well research results measure what they are intended to measure, while reliability describes how consistently those results can be reproduced over time.
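One common way to quantify reliability is test-retest consistency: scores from two administrations of the same instrument should correlate highly if the instrument is reliable. The sketch below, using hypothetical scores from ten respondents measured two weeks apart, computes the Pearson correlation between the two administrations:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two score lists,
    a common test-retest reliability check."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical scores from the same ten respondents, two weeks apart
week1 = [12, 15, 11, 18, 14, 16, 13, 17, 10, 15]
week2 = [13, 14, 12, 18, 15, 17, 12, 16, 11, 14]
print(round(pearson_r(week1, week2), 2))  # close to 1: consistent instrument
```

A correlation near 1 indicates the instrument gives consistent readings; a low value would flag it as unreliable even if each administration looked plausible on its own.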

2.2.5.Different Research Designs

The ability to methodically investigate and analyze occurrences of interest requires a well-thought-out research plan. Research designs may be selected by studying examples of previous studies' methodologies and learning about the various options. Types of research designs that researchers might consider adopting include the following:

1. Exploratory research design
The exploratory design is widely used in the field of research. Even in the absence of a well-defined research issue, the exploratory approach may be quite helpful: it is often less formal than other design options and can be used to identify a research issue.

2. Observational research design

The observational research method is also widely used. The focus of an observational design is on gathering information about a phenomenon without influencing it in any way. Rather than performing experiments, the researcher simply watches and documents behaviors or events.

3. Descriptive research design

Another approach to research design is the descriptive design. Whenever researchers require additional information about the problem, a descriptive approach is helpful. It can establish the "what," "where," "when," and "how" of the research subject. Descriptive designs can only describe what happened, however, not explain why.

4. Correlational research design

The correlational research method is frequently utilized to complement the causal method. Like the causal design, the correlational design finds connections between factors; however, when conducting a correlational study, researchers simply take readings from the data without making any changes.

5. Cohort research design

Cohort studies, a form of observational study, are another research strategy. The medical field is a natural home for this methodology, but it has broad potential outside that field. Ethical studies of medical themes and risk factors benefit greatly from the cohort design, which entails analyzing data from people who have previously been exposed to the research topic. The design is not limited to primary data but is applicable to secondary sources as well.

6. Experimental research design

The use of experiments in research is also frequent. The experimental design's adaptability makes it a good choice when it is important to determine the impact of several variables on a given circumstance. Scientific-method components used within an experimental research design involve:

 Hypothesis: A research hypothesis states what the researcher expects to find as a result of the investigation.

 Independent variable: A variable that does not depend on another factor in the study.

 Dependent variable: A variable whose value depends on another variable.

 Control variable: A factor in the study that is held constant at all times.

2.2.6.Basic Principles of Experimental Designs

Randomization, replication, and local control represent the cornerstones of experimental design. These principles enable a reliable test of significance. The following sections provide a succinct summary of each:

1. Randomization. Randomization, the act of randomly allocating treatments to the various experimental units, represents the fundamental premise of experimental design. Because the procedure is randomized, each experimental unit has the same probability of receiving any treatment. The treatment is any variable whose impact on the experiment's outcome is to be quantified and examined, while an experimental unit represents the smallest division of the material being tested. Randomization is used to eliminate uncontrollable causes of variation, such as bias. A further benefit is that randomization (along with replication) provides the foundation of every reliable statistical test. Thus, the treatments should be randomly distributed throughout the experimental units. Common methods of randomization include picking cards from a well-mixed deck, drawing balls from a thoroughly shaken container, or consulting random number tables.
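The random allocation described above can be sketched in a few lines. The plot names and treatment labels here are hypothetical; the point is that each unit's treatment is decided by a shuffle, not by the experimenter:

```python
import random

def randomize_treatments(units, treatments, seed=None):
    """Randomly allocate treatments to experimental units so each unit
    has the same chance of receiving each treatment."""
    rng = random.Random(seed)
    # Repeat the treatment list to cover all units, then shuffle the order.
    assignment = [treatments[i % len(treatments)] for i in range(len(units))]
    rng.shuffle(assignment)
    return dict(zip(units, assignment))

# Hypothetical field trial: six plots, three treatments, two replicates each
plots = [f"plot-{i}" for i in range(1, 7)]
layout = randomize_treatments(plots, ["A", "B", "C"], seed=42)
print(layout)  # each treatment lands on exactly two of the six plots
```

This is the software analogue of drawing cards from a well-mixed deck: the seeded generator stands in for the shuffle, and fixing the seed merely makes the allocation reproducible for the record.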

2. Replication. Replication is the second tenet of experimental design: a full cycle of the experiment in which all of the treatments are applied again. Because experimental units such as people, or the plots used in agricultural trials, can never be exactly alike, some degree of variation is always introduced. A large enough number of repetitions may average out this kind of fluctuation, so the researcher performs the experiment several times; any one specific repeat is called a "replicate." The size, shape, and number of replicates are determined by the materials being tested. Replication serves to:

 Obtain a more precise estimate of experimental error, i.e., the variation that would be seen if the exact same treatments were administered multiple times to similar experimental units.
 Enhance precision: since precision varies inversely with experimental error, reducing that error through replication improves the precision of treatment comparisons.
 Since the variance of a treatment mean is σ²/n, wherein n specifies the number of replications, the researcher may get a more accurate estimate of the average effect of a treatment by increasing n.
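The σ²/n relationship can be checked numerically. The sketch below assumes a hypothetical per-unit error standard deviation and true treatment mean, simulates many repeated experiments at each replication count n, and compares the empirical variance of the treatment mean with the theoretical value σ²/n:

```python
import random

random.seed(0)
sigma = 6.0           # assumed per-unit standard deviation of experimental error
true_effect = 50.0    # assumed true treatment mean (hypothetical)

def mean_of_replicates(n):
    """Average of n replicated measurements of one treatment."""
    return sum(random.gauss(true_effect, sigma) for _ in range(n)) / n

for n in (1, 4, 16, 64):
    # Empirical variance of the treatment mean across 2,000 repeated experiments
    means = [mean_of_replicates(n) for _ in range(2000)]
    m = sum(means) / len(means)
    var = sum((x - m) ** 2 for x in means) / (len(means) - 1)
    print(f"n={n:2d}  empirical var={var:6.2f}  theoretical sigma^2/n={sigma**2 / n:6.2f}")
```

The empirical column tracks σ²/n closely: quadrupling the number of replications cuts the variance of the estimated treatment effect by a factor of four.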

3. Local Control. Although randomization and replication are useful tools, they do not eliminate all causes of variability, so the experimental methodology must be improved further. That is, the plan should be chosen so that everything except the treatments themselves has minimal impact on the results. Local control, which describes the degree to which experimental units are balanced, blocked, and grouped, is used for this purpose. Treatments must be allocated to the experimental units in such a manner that the resulting configuration is equitable, and similar units should be blocked together to provide sufficient statistical power; a block is equivalent to a replicate. The notion of local control was developed to reduce experimental error and improve the efficiency of experimental designs. Note that "local control" should not be confused with "control": within experimental design, a "control" is a group that receives no treatment, used as a baseline when determining the efficacy of the other treatments by comparison.

2.2.7.Important Experimental Designs
The following are some of the most important advantages of experimental designs:

 Cause and Effect

The primary benefit of an experimental design is that it allows one to determine what factors brought about a certain outcome. Other study approaches explain events or describe the population, but do not investigate the source of an effect. When an experiment is conducted with random assignment and individuals blinded to the conditions, any observed discrepancy can be attributed to the experiment.

 Reliable Outcomes

The results of experiments are also very trustworthy since they are obtained in a controlled setting with standardized procedures, objective metrics, and random assignment. Because the sample is drawn from the broader population, it is possible to extrapolate the results and repeat the experiment; the data obtained from the samples are accurate representations of the whole.

 Provides Helpful Insight

These designs are excellent resources for quickly gaining relevant knowledge that may be applied to pressing issues. For instance, an appropriate and efficient motivation approach for an organization's staff may be built by studying several motivational strategies.

 Control over Variables

There are several factors that a researcher may manipulate in an experimental study. Consequently, the researcher may assess the effects of each potential variable separately, making more precise measurements of the interdependence among variables possible.
CHAPTER-3:Design of Sample
Surveys

3.1. Design of Sampling

3.1.1.Introduction

The term "sampling design" refers to a predetermined strategy for selecting a subset of a larger population. It describes the method or series of steps that a researcher takes in order to choose a representative sample of data. The sample size, or the number of elements to be selected, may also be specified during the sample design process. The sample design is determined before data collection begins. Researchers have many options when it comes to sample designs; certain designs are more straightforward and accurate than others. The researcher is tasked with choosing and preparing a valid and relevant sample design for the investigation.

The sampling method is the process by which a researcher selects a subset of the population for in-depth study in order to draw generalizable conclusions about the whole population. Choosing a representative sample is crucial, and the researcher is responsible for determining everything from sample size and type to testing procedures.

3.1.2.Sample Design
The procedure by which a sample is selected is known as the sampling design. A variety of sampling strategies may guide the choice of the survey sample. Sample designs are developed to guarantee that results from a subset of the target population may be extrapolated to the whole population. When creating a sample design, take the following into account:

 Define the universe of your study: This refers to the collection of things that you want to analyze, for example the people in a city, the employees at a storage facility, or the viewers of a TV program.
 Consider your sampling unit: Think about the unit being sampled, which may be geographical (a district or village), social (a family or club), or an individual.
 Gather your sampling frame: The list of names from which the sample will be randomly selected.
 Determine sample size: Sample size may be calculated with a standard formula or a sample-size calculator.
 Factor in budgetary limitations: The available funds will affect the size and kind of sample, and whether a non-probability sample must be used.
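The "determine sample size" step is often done with Cochran's formula for estimating a proportion, a standard result not spelled out in the text. The sketch below applies it, together with the finite-population correction used when the universe is small; the confidence level, margin of error, and population size are illustrative choices:

```python
import math

def cochran_sample_size(z, p, e):
    """Cochran's sample size for a proportion: n0 = z^2 * p * (1 - p) / e^2,
    where z is the confidence z-score, p the expected proportion (0.5 is
    the conservative choice), and e the tolerated margin of error."""
    return math.ceil(z * z * p * (1 - p) / (e * e))

def fpc_adjust(n0, population):
    """Finite-population correction: n = n0 / (1 + (n0 - 1) / N)."""
    return math.ceil(n0 / (1 + (n0 - 1) / population))

n0 = cochran_sample_size(z=1.96, p=0.5, e=0.05)  # 95% confidence, +/- 5%
print(n0)                    # prints 385
print(fpc_adjust(n0, 2000))  # prints 323: a universe of 2,000 needs fewer units
```

Note how the required sample size depends on the margin of error, not directly on the population size; the correction only matters once the universe is of the same order as the uncorrected sample.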

3.1.3.Sampling and Non-sampling Errors

Sampling Error: A sampling error is a statistical error that occurs when the sample collected is not representative of the whole population: the data collected from the subset does not accurately reflect the group as a whole.

Sampling errors arise even when a representative sample is drawn, since members of a population may exhibit subtle differences. They may also result from the enumerator's convenient replacement of sampling units, incorrect choice of statistics, or sloppy delineation of units. A sampling error is thus interpreted as the gap between the mean value of the sample and that of the population as a whole.
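The definition above, the gap between a sample's mean and the population mean, can be demonstrated with a small simulation. The population of household incomes below is entirely hypothetical; even an honestly drawn random sample shows a nonzero sampling error:

```python
import random

random.seed(7)
# Hypothetical population: 10,000 household incomes (arbitrary units)
population = [random.gauss(500, 80) for _ in range(10_000)]
pop_mean = sum(population) / len(population)

# A simple random sample of 100 units, drawn without replacement
sample = random.sample(population, 100)
sample_mean = sum(sample) / len(sample)

# Sampling error = gap between the sample mean and the population mean
sampling_error = sample_mean - pop_mean
print(f"population mean {pop_mean:.1f}, sample mean {sample_mean:.1f}, "
      f"error {sampling_error:+.1f}")
```

Re-running with different seeds gives a different error each time: sampling error is random, shrinks as the sample grows, and vanishes only in a complete census.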

Non-Sampling Error: The phrase "non-sampling error" collectively refers to all errors that are not a result of sampling. Errors in the problem definition, questionnaire design, methodology, coverage, respondent-provided information, data preparation, gathering, compilation, and analysis are just a few of the many sources of these issues.

Non-sampling errors may be divided into two categories:

1. Response Error: An error that occurs because respondents give incorrect responses, or because their replies are misread or misrecorded. It may be broken down into three distinct types: errors committed by the researcher, the respondents, and the interviewers.

i. Researcher Error
 Measurement Error
 Surrogate Error
 Data Analysis Error
 Sampling Error
 Population Definition Error

ii. Respondent Error
 Unwillingness Error
 Inability Error

iii. Interviewer Error

 Questioning Error
 Respondent Selection Error
 Recording Error
 Cheating Error

2. Non-Response Error: The possibility of error because some respondents within the sample did not fill out the survey.

Table 1: Difference between Sampling and Non-Sampling Error

Basis for Comparison | Sampling Error | Non-Sampling Error
Meaning | An error that occurs because the selected sample does not perfectly represent the population of interest. | An error that occurs, while conducting survey activities, due to sources other than sampling.
Cause | Deviation between the sample mean and the population mean. | Deficiency in, and faulty analysis of, the data.
Type | Random | Random or non-random
Occurs | Only when a sample is selected. | In both sample surveys and censuses.

3.1.4.Census Survey versus Sample Survey

The primary purpose of both censuses and sample surveys is to gather statistical data from a wide range of fields and industries relevant to a certain topic or question. Sample surveys are administered to a statistically valid subset of the population of interest, and the data collected in such demographic studies have many applications.

The census method is also known as the complete enumeration survey method. With this strategy, every item in the universe is included in the data gathering, where the "universe" is the full collection of relevant objects in the chosen setting: a physical location, a set of people, or some other domain. The government often employs this strategy when conducting a nationwide population, housing, or agricultural census, all of which require extensive specialized expertise.

Census Method: Statistics are gathered on all units or people in the population of interest; the method takes its name from its use in conducting a census. The term "population" refers to the total set of data points relevant to a given research project: for example, all college students asked to evaluate their professors.

Sample Method: The sampling approach selects a variety of entities representative of the whole to serve as a sample. This technique uses statistical analysis to draw conclusions about a bigger population from a smaller subset of data. Many sample methodologies may be used, including simple random sampling, cluster sampling, systematic sampling, and stratified sampling.

Table 2: Difference between Census and Sampling Survey Methods

Parameters | Census Method | Sampling Method
Nature of enquiry | An extensive enquiry is conducted at each and every unit of the population. | A limited enquiry is conducted, as only a few units of the population are studied.
Economy | Requires a large amount of money, time, and labour. | Relatively less money, time, and labour are required.
Suitability | More suitable if the population is heterogeneous in nature. | More suitable if the population is homogeneous in nature.
Reliability and accuracy | The results are quite reliable and accurate. | The results are less reliable, because a high degree of accuracy is not achieved.
Organisation and supervision | Very difficult to organise and supervise. | Comparatively easy to organise and supervise.
Verification | The results of the investigation cannot be verified. | The results can be tested by taking out another small sample.
Nature of method | An old method of investigation and not a very scientific one. | A new and practicable method; it is a scientific method.

3.1.5.Types of Sampling Designs

There are two main categories of sampling used in market research:

 Probability sampling: Probability sampling is a method in which individuals are selected at random from a population according to criteria set by the researcher. Random selection ensures that every member of the population has a fair chance of being included.
 Non-probability sampling: Unlike probability sampling, non-probability sampling does not select participants at random; the researcher picks them by convenience or judgment. Because the selection criteria are not predetermined, it is difficult to ensure that every subset of the population is adequately represented in any given sample.

1. Probability Sampling Techniques

Probability sampling methods are among the most useful of the many kinds of sampling methods. With a probability sample, every member of the population has a known, equal opportunity of being picked. This approach is often used in quantitative studies whose end goal is to generate generalizable findings.

 Simple Random Sampling

In simple random sampling, researchers choose participants purely at random, so selection depends entirely on chance. Tools such as random number generators and random number tables are often used.

 Systematic Sampling

In systematic sampling, just as in simple random sampling, a number is assigned to each member of the population. However, rather than using a random number generator, the samples are selected at fixed, predetermined intervals.
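The two designs above can be sketched in a few lines of Python. This is a minimal illustration using a hypothetical numbered sampling frame; the seed is fixed only to make the sketch reproducible.

```python
import random

population = list(range(1, 101))   # hypothetical sampling frame: unit IDs 1..100
n = 10                             # desired sample size

random.seed(42)                    # fixed seed so the sketch is reproducible

# Simple random sampling: every unit has an equal chance of selection.
simple_sample = random.sample(population, k=n)

# Systematic sampling: choose a random start, then take every k-th unit,
# where k is the sampling interval N / n.
interval = len(population) // n
start = random.randrange(interval)
systematic_sample = population[start::interval][:n]
```

Note how the systematic sample is fully determined once the random start is chosen, while the simple random sample re-randomizes every draw.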

 Stratified Sampling

Stratified sampling involves dividing the population into smaller segments (or "strata") based on demographic variables such as age, gender, and socioeconomic status. Once the subgroups have been established, samples are drawn from each at random or according to a predetermined plan. The strategy guarantees that each subgroup has enough representation, allowing more accurate conclusions to be drawn.
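The idea can be sketched in Python. The population, the age-group strata, and the equal allocation of five units per stratum are illustrative assumptions, not from the text.

```python
import random
from collections import defaultdict

# Hypothetical population: (person_id, age_group) pairs forming two strata.
population = [(i, "under_30" if i % 3 == 0 else "30_plus") for i in range(90)]

def stratified_sample(units, key, per_stratum, seed=0):
    """Draw a fixed number of units at random from each stratum."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for unit in units:
        strata[key(unit)].append(unit)        # group units by stratum label
    sample = []
    for members in strata.values():
        sample.extend(rng.sample(members, per_stratum))
    return sample

sample = stratified_sample(population, key=lambda u: u[1], per_stratum=5)
```

Because each stratum contributes a guaranteed number of units, small subgroups cannot be missed by chance, which is exactly the representation guarantee described above.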

 Cluster Sampling

Cluster sampling is a method in which the population is divided into subgroups (clusters), each of which resembles the larger population in miniature. Rather than selecting units from every subgroup, entire clusters are chosen at random, which is more efficient. This strategy is useful when working with vast and varied populations.
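A minimal sketch of cluster selection, assuming a hypothetical frame of city blocks with eight households each:

```python
import random

# Hypothetical frame: 10 clusters (city blocks), each listing its household IDs.
clusters = {block: [f"block{block}-house{h}" for h in range(8)]
            for block in range(10)}

rng = random.Random(1)
chosen_blocks = rng.sample(sorted(clusters), 3)   # randomly select whole clusters

# Every household inside a chosen cluster enters the sample.
sample = [house for block in chosen_blocks for house in clusters[block]]
```

The contrast with stratified sampling is that here randomness operates at the cluster level: a few whole clusters are taken, instead of a few units from every stratum.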

2. Non-Probability Sampling Techniques

Non-probability sampling is one of the most common kinds of sampling method. Because selection is not random, certain potential participants are excluded from consideration, and there is a substantial potential for sample bias despite the approach's convenience and low cost. It is used in qualitative and exploratory studies to help researchers get a first impression of the population they are studying.

 Convenience Sampling

With this technique, the researcher chooses participants from among those who are convenient to interview. Although this method of data collection is straightforward, it is impossible to know with certainty whether the sample is really representative of the full population. There are no requirements beyond people's availability and willingness.

For instance, a researcher may wait outside an office building and ask workers who enter the building to fill out a questionnaire or survey.

 Voluntary Response Sampling

The sole requirement for inclusion in a survey conducted using the voluntary response method is that respondents be willing to take part. Participants are not selected by the researcher; they select themselves.

For instance, the researcher may distribute a survey to all of an organization's staff members and invite their responses.

 Purposive Sampling

Purposive sampling is a kind of sampling in which the researcher makes the selection based on their own knowledge and experience. It is common practice when the sample size is limited and the study's focus is on understanding a phenomenon rather than drawing statistical conclusions.

For instance, a study may focus on the perspectives of disabled workers at a certain firm; that population then serves as the basis for the sample.

 Snowball Sampling

In snowball sampling, participants already enrolled in a study invite others to take part in the research. It is utilized in situations where sourcing the necessary study subjects would be very challenging. The term refers to the growth of a sample as it gathers increasing numbers of individuals, much like a rolling snowball.

For instance, suppose investigators are keen to learn more about the lives of a city's homeless population. A random sample cannot be drawn because there is no comprehensive list of homeless individuals from which to draw. The sample can only be obtained by first making contact with a single homeless individual, who in turn brings the researcher into contact with other homeless persons in the region.

3.2. Measurement and Scaling

Once data is acquired for a research project, analysis may begin, with the specifics of that analysis dependent on the methods of data gathering. For example, a researcher wishing to gather qualitative information may use predetermined labels (nominal scales) and ask respondents to choose one of several possible answers; such information may be gathered with a scale labeled with categories like "electric cars," "diesel cars," and "hybrid cars," for which the nominal scale of measurement is appropriate. Quantitative data are represented numerically using interval or ratio scales; a ratio scale, for example, may be used to determine the average weight of a city's residents.

1. Properties of Measurement
 Identity: Identity means that every value has a unique meaning.
 Magnitude: Magnitude denotes that the values have a distinct order, so they are related in a hierarchical fashion.
 Equal intervals: The scale has equal intervals when the distance between one pair of adjacent points, say 1 and 2, is identical to the distance between another pair, say 5 and 6.
 A minimum value of zero: The scale's lowest value of 0 represents a genuine zero point. Temperature, for instance, still has meaning below 0 degrees, so its zero is not absolute; mass, by contrast, cannot fall below zero, so a mass of 0 means the complete absence of mass.

3.2.1.Qualitative and Quantitative Data

Qualitative data: Qualitative data is non-statistical and often unstructured or semi-structured. This information is not always quantifiable in the way needed for charts and graphs; rather, it is sorted into groups according to its characteristics.

The term "qualitative data" describes details that cannot be quantified. It is often verbose and descriptive of the situation; one example is the make and model of one's automobile. It is frequently used to classify "yes" or "no" responses in surveys.

Quantitative data: Quantitative data is information that can be represented numerically. It is used to specify quantifiable attributes and may be characterized along a range of dimensions, such as distance, velocity, height, and mass. Quantitative information focuses on numbers, whereas qualitative information emphasizes subjective experiences.

3.2.2.Classifications of Measurement Scales

Knowing the scale on which data are measured helps data scientists choose the appropriate statistical test.

1. Nominal scale of measurement

A nominal scale defines only the identity of a data value. Although it shares this feature with the other scales, it cannot be used to quantify anything. The information can be sorted into groups, but it cannot be operated on numerically, and the distance between data points cannot be measured.

Nominal information might be as simple as an individual's eye color or birthplace. Nominal data can be organized into three further groups:

 Nominal with order: Some nominal information may be further categorized in an ordered fashion, for example, "cold, warm, hot, and very hot."
 Nominal without order: Other nominal data, such as gender, fall into a without-order category.
 Dichotomous: Data that can be categorized into just two options, like "yes" or "no," is said to be dichotomous.

2. Ordinal scale of measurement

The ordinal scale describes data that have been ranked in some order. Although every value has a rank, the scale does not express the size of the differences between ranks, so the values cannot meaningfully be added or subtracted.

For example, a questionnaire may use the satisfaction indicators 1 = pleased, 2 = indifferent, and 3 = dissatisfied. A person's place in a competition is also ordinal: first, second, and third place indicate the order in which the runners crossed the finish line, but not the gap between the first- and second-place finishers.
3. Interval scale of measurement

Interval data share characteristics with ordinal and nominal data; however, the gap between observations can be measured. This form of data shows both the relative positions of the variables and the precise differences between them. Only addition and subtraction are meaningful, not multiplication and division: 40°, for instance, is not "twice as hot" as 20°.

Another distinguishing feature is that 0 exists simply as a numeric value on this scale; it does not indicate the absence of the attribute. On a temperature scale, for example, 0 degrees represents a particular temperature, not the absence of temperature.

The distance between interval data points is constant: on such a scale, the difference between 10° and 20° is the same as the difference between 20° and 30°. Of the scales discussed so far, the interval scale is the first that puts a numerical value on differences; the nominal and ordinal scales provide only qualitative description and ordering. The year an automobile was manufactured is another good example of an interval scale, as are the months of the year.
4. Ratio scale of measurement

The ratio scale incorporates the characteristics of the other three measurement scales: it has identity (nominal), order (ordinal), and equal intervals (interval), together with a true zero. Common examples of ratio variables are height, weight, and distance. All four operations (addition, subtraction, multiplication, and division) can be performed on ratio data.

The "true zero" on a ratio scale is the other way in which it differs from interval scales: the value 0 represents the complete absence of the attribute. An individual cannot have a height below 0 centimeters or a weight below 0 kilograms, for example. This measurement system is used for things like stock valuation and sales forecasting. Ratio data give data scientists the most latitude of all the scales of measurement.

In conclusion, nominal scales are employed to label or describe properties. Ordinal scales are often employed in satisfaction surveys because they give information about the relative positioning of data items. An interval scale additionally captures the size of the differences between positions. Ratio scales provide information on identity, order, and difference, along with a true zero that permits a full numerical analysis of every statistic.
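The practical consequence of these distinctions is which summary statistics are legitimate for a given scale. The following sketch illustrates one common convention; the function name and scale labels are illustrative assumptions, not from the text.

```python
from collections import Counter
from statistics import mean, median

def summarize(values, scale):
    """Return a summary statistic that is meaningful for the given scale:
    nominal -> mode (categories can only be counted),
    ordinal -> median (order matters, distances do not),
    interval/ratio -> mean (distances are meaningful)."""
    if scale == "nominal":
        return Counter(values).most_common(1)[0][0]
    if scale == "ordinal":
        return median(values)
    if scale in ("interval", "ratio"):
        return mean(values)
    raise ValueError(f"unknown scale: {scale}")

mode_value = summarize(["red", "blue", "red"], "nominal")   # most common category
median_rank = summarize([1, 2, 2, 3, 5], "ordinal")         # middle rank
mean_weight = summarize([10.0, 20.0, 30.0], "ratio")        # arithmetic mean
```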

3.2.3.Goodness of Measurement Scales

When an abstract idea is quantified, it is crucial that the instrument built for the purpose reliably measures the target variable. This assures that, in operationally defining perceptual and attitudinal variables, no crucial dimensions or features have been left out and no superfluous ones have been incorporated. Attitude variables are notoriously difficult to quantify, in part because the instruments used to assess them are prone to inaccuracy. Higher-quality instruments provide more reliable findings, which improves a study's scientific rigor. Therefore, it is necessary to evaluate the "goodness" of the instrument created.

The obvious requirement is that the instrument should accurately represent the variable of interest. The ease and effectiveness of its application are also essential.

1. Characteristics of a good measuring instrument

 Reliability - Reliability denotes the degree to which the same results are obtained from the same measuring device, under the same conditions and with the same participants. In a nutshell, it is how consistent the measurements are. Repeated administrations of the same test with comparable results are indicative of a trustworthy measure. Recall that reliability is an estimate, not a directly observable attribute; a reliable measuring device will consistently produce the same results. A variety of correlation coefficients may be used to determine an instrument's reliability.
 Validity - Validity refers to a test's ability to accurately assess the constructs it is intended to measure. The validity of a test is crucial for drawing useful conclusions from the findings. Validity is not evaluated by a single metric but by a collection of studies that provide evidence of a link between the assessment and the target behavior. Validity evidence is commonly grouped into three categories, concerning the soundness of the deductions, inferences, or assertions drawn from scores.
 Practicability - The instrument has to be feasible and useful, capable of being put to good use in accomplishing the intended goal.
 Usability - Low cost, appropriate technical make-up, and simplicity of administration, scoring, comprehension, and application all contribute to usability.
 Measurability - It must be possible to gauge progress toward the desired outcome.
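Test-retest reliability, for example, is commonly estimated with a correlation coefficient between two administrations of the same instrument. A self-contained sketch with hypothetical scores:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between paired score lists (test and retest)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores from the same five respondents on two administrations.
first_run = [12, 15, 11, 18, 14]
second_run = [13, 16, 10, 18, 15]
reliability = pearson_r(first_run, second_run)   # close to 1 => consistent instrument
```

A coefficient near 1 indicates that respondents keep their relative positions across administrations, which is the consistency the Reliability bullet describes.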

3.2.4.Sources of Error in Measurement

The ideal research project would include clear and explicit guidelines for dealing with measurement errors. Nevertheless, this aim is typically only partially achieved, so it is incumbent upon the researcher to be cognizant of potential causes of measurement error. The following are examples:

 Respondent: It is conceivable that the respondent has very little information but refuses to admit it, or is uncomfortable expressing unpleasant emotions. Because of such hesitation, the interview will probably be full of "guesses." There is also always the chance that the respondent cannot give a complete and honest answer because of fleeting circumstances like frustration, nervousness, or exhaustion.
 Situation: External influences may confound the accuracy of measurements. Any circumstance that makes conducting the interview difficult can severely harm the relationship between the interviewer and the interviewee. For instance, the mere presence of another person may influence responses, and a respondent who is unsure of his anonymity may be hesitant to open up about his emotions.
 Measurer: A skewed reading of the data may result from the interviewer changing the phrasing or the sequence of questions. The interviewer's demeanor, presentation, and physical attributes may inspire some people to reply and others to refrain from doing so. Careless mechanical processing may also distort the results, and during the data-analysis phase mistakes may arise from sloppy coding, bad tabulation, or wrong statistics.
 Instrument: Error may also arise from a flawed measurement device. Defects in a measuring instrument include, but are not limited to, difficult terminology beyond the respondent's grasp, ambiguous meanings, bad printing, insufficient room for answers, omitted response options, and so on. The inability to adequately sample the universe of relevant items is another limitation of an instrument. Accurate measurement requires addressing all of the aforementioned challenges, and researchers must be aware of them. To ensure that the final findings are not tainted by erroneous data, one must take every precaution to rule out, neutralize, or otherwise deal with such possibilities.

3.2.5.Techniques of Developing Measurement Tools

The four-step method used to create effective measuring instruments consists of:

 Concept development;
 Specification of concept dimensions;
 Selection of indicators;
 Formation of index.
The very first thing a researcher has to do is get a firm grasp of the fundamental concepts that will guide the investigation. This stage of concept development is often more prominent in theoretical investigations than in applied research, where the core notions are frequently already defined.

In the next phase, the researcher must specify the dimensions of the concepts developed in the initial stage. To achieve this, the researcher may use either deduction, that is, a more or less logical approach, or empirical correlation, comparing each dimension to the whole concept or to other concepts. For example, the public's perception of a business may involve several dimensions, including the quality of its products and services, how it treats its customers, the caliber of its leadership, how much it cares about its employees and the community, and how seriously it takes its social responsibilities.

After defining the scope of a concept, the researcher's next step is to create indicators for gauging its various components. Indicators are the questions, scales, and other devices used to gauge a respondent's level of understanding, opinion, anticipation, and so on. Since there is usually more than one appropriate way to evaluate a notion, researchers should weigh their options carefully; scores are more reliable and accurate when multiple indicators are used.

The last stage is index construction, the process of merging many indicators into a single metric. Aggregating several indicators becomes necessary when the same notion is measured in diverse ways. Assigning scale values to the replies and summing the resulting scores is a quick and easy way to obtain an aggregate index. Since individual indicators have merely a probabilistic connection to what the researcher really needs to know, an aggregate index serves as a more reliable measuring instrument. In this way, the researcher compiles an index for each of the concepts included in the study.
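The summing procedure can be sketched as follows. The indicator names and the rescaling to a 0-100 range are illustrative assumptions, not prescribed by the text.

```python
# Hypothetical indicators of "corporate image", each scored 1-5 by a respondent.
responses = {
    "product_quality": 4,
    "customer_treatment": 5,
    "leadership": 3,
    "social_responsibility": 4,
}

def build_index(scores, max_per_item=5):
    """Merge several indicator scores into one aggregate index on a 0-100 scale."""
    total = sum(scores.values())
    return 100 * total / (max_per_item * len(scores))

corporate_image_index = build_index(responses)   # 16 of 20 possible points -> 80.0
```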

3.3. Scaling

Unfortunately, obtaining a proper measurement is not always possible in research. This is particularly true when the ideas being tested are complicated and abstract and no standardized measuring methods are available. The challenge of legitimate measurement arises when trying to assess people's attitudes and views. Researchers may encounter a similar challenge, though to a lesser extent, when attempting to quantify physical and institutional notions.

The term "scaling" describes the processes through which levels of views, attitudes, and other ideas are assigned numerical values. There are two methods for doing this:

 Evaluating a person on a metric that has been defined in terms of the trait of interest, and then placing him directly on that metric's scale;
 Having each person's answers add up to a total score that places him on the scale.

One definition of a scale is "a continuum in which the highest and lowest possible values for some trait, such as preference or favorableness, are separated by a range of intermediate values." These scale points are interrelated: if the first point happens to be the highest, the second point indicates more of the trait in question than the third point, and so on down the scale. People are thus assigned numbers that correspond to their places on a scale used to quantify qualitative differences in their attitudes and ideas. Scaling, therefore, refers to the procedures that seek to put numbers on intangible, subjective things. According to one definition, scaling is "the process of giving numerical or symbolic values to attributes of physical objects with the goal of imbuing those attributes with some of the features of numbers."

3.3.1.Scale Classification Bases

Scaling methods may be classified in broad strokes according to any of the following criteria:

 Subject orientation;
 Response form;
 Degree of subjectivity;
 Scale properties;
 Number of dimensions;
 Scale construction techniques.

1. Subject orientation: Subject orientation refers to whether a scale is intended to measure characteristics of the respondent or of the stimulus item shown to them. In the former case, the researcher assumes that the stimuli used are sufficiently similar to one another that inter-stimulus variation is negligible relative to inter-respondent variation. In the latter case, respondents rate an item on one or more predetermined criteria, with the expectation that there will be less diversity among the ratings of individual respondents than among the ratings of the many stimuli shown to them.

2. Response form: Depending on the response type, scales may be classified as either categorical or comparative. Categorical scales are also known as rating scales; they come in handy whenever a respondent has to rate something without making any comparison to other items. Comparative scales, also called ranking scales, require respondents to judge how a set of items compare: the respondent may compare two items and declare one better than the other, or rank three different styles of pen from best to worst. Essentially, a ranking is a comparison of how fully two or more things possess a certain quality.

3. Degree of subjectivity: Scale data can also vary according to whether the researcher is measuring purely subjective preferences or making non-preference evaluations. In the first case, the respondent expresses a preference for one person over another or for one solution over another; in the second, the respondent judges which person is more successful in some regard or which approach will take less time, without expressing any particular preference.

4. Scale properties: Based on their characteristics, scales may be sorted into four types: nominal, ordinal, interval, and ratio. Nominal scales only categorize, without reflecting order, distance, or origin. Ordinal scales express comparisons in terms of greater or lesser magnitude but include no information about distance or origin. Interval scales possess both order and distance, but no unique origin. Ratio scales possess all of these characteristics: classification, order, distance, and origin.

5. Number of dimensions: Based on this criterion, scales may be classified as either "unidimensional" or "multidimensional." In the first case, only a single characteristic of the respondent or item is measured, whereas in the latter case the object is considered best represented by an 'n'-dimensional attribute space rather than a single continuum.

6. Scale construction techniques: The following are the five most common methods used in the creation of scales:
 Arbitrary approach: In the arbitrary approach, a scale is created on an ad hoc, case-by-case basis. This is the most widely used method. The scales so developed are assumed to measure the concepts for which they were designed, although there is minimal evidence to back this up.
 Consensus approach: When using a
consensus method, a group of judges
discusses and agrees on which things should
be included in the instrument based on their
usefulness and clarity.
 Item analysis approach: The procedure
known as "item analysis" involves creating a
test comprised of many different questions
and then administering it to a sample of
people. The aggregate scores for all
participants are determined after the test
has been given. After a thorough
examination of each question, researcher
can see which ones really separate people
with high and low overall scores.
 Cumulative scale approach: A cumulative scale is selected based on how well its items fit a predetermined hierarchy of increasing or decreasing discriminatory power. For instance, if one item represents an extreme perspective, then endorsing that item implies endorsing all other items reflecting more moderate positions.
 Factor scales approach: When items have high intercorrelations, it suggests that a single factor accounts for their connection, and this finding may be used to build factor scales. Factor analysis is commonly used to evaluate this connection.
3.3.2.Scaling Techniques

A scaling technique is a way of classifying respondents along a continuum of gradually changing values, symbols, or numbers, depending on the characteristics of a specific item and in accordance with established criteria. The four cornerstones upon which all scaling methods rest are order, description, distance, and origin.

Scaling methods are crucial to marketing research because they allow accurate market analysis to be conducted. Here are some of them:

 Likert Scale

The Likert scale is a popular method for rating respondents' ideas and feelings. The survey asks respondents to score a series of assertions on a numeric scale, often between 1 and 5. Data collected using a Likert scale are ordinal, making them amenable to analyses like component analysis.
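Scoring a Likert instrument typically means summing item responses, after reverse-scoring any negatively worded items so a higher total always indicates a stronger attitude. The item names and the reverse-scored set below are hypothetical.

```python
SCALE_MAX = 5   # five-point Likert items

def likert_score(responses, reverse_items=()):
    """Sum item responses; reverse-score negatively worded items first."""
    total = 0
    for item, value in responses.items():
        if item in reverse_items:
            value = SCALE_MAX + 1 - value   # 1<->5, 2<->4, 3 stays 3
        total += value
    return total

answers = {"q1": 4, "q2": 5, "q3": 2}            # q3 is negatively worded
score = likert_score(answers, reverse_items={"q3"})   # 4 + 5 + (6 - 2) = 13
```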

 Semantic Differential Scale

The connotative meaning of variables may be quantified with a scaling method called the semantic differential scale. It is a sequence of paired opposite adjectives, and respondents assign a numeric value indicating where their feelings fall between each pair. Semantic differential scales generate ordinal data, which lend themselves well to statistical methods like factor analysis.

 Thurstone Scale

The Thurstone scale is a kind of rating scale used to measure respondents' opinions. It is a list of assertions, and respondents indicate how much they agree or disagree with each one. The ordinal nature of the data generated by the Thurstone scale makes it amenable to statistical methods like factor analysis.
 Guttman Scale

The Guttman scale may be used to quantify the extent to which participants affirm or reject a given variable. Hierarchical in nature, it is made up of a set of assertions that progress from milder to more extreme positions. The information gathered from a Guttman scale is binary, making it amenable to statistical methods like item response theory.
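The hierarchical property can be checked mechanically: once a respondent rejects an item, every more extreme item should be rejected as well. A minimal sketch:

```python
def is_guttman_pattern(responses):
    """Check cumulative (Guttman) structure in a list of 1/0 answers
    ordered from least to most extreme: no agreement may follow a rejection."""
    seen_rejection = False
    for answer in responses:
        if answer == 1 and seen_rejection:
            return False          # endorsed a harder item after rejecting an easier one
        if answer == 0:
            seen_rejection = True
    return True

valid = is_guttman_pattern([1, 1, 1, 0, 0])      # perfect cumulative pattern
invalid = is_guttman_pattern([1, 0, 1, 0, 0])    # contains a scale error
```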

 Magnitude Estimation Scale

The subjective significance of a variable may be quantified using a magnitude estimation scale. Sequences of statements or items are presented to the respondents, who are asked to give each a numeric rating without any fixed context or reference points. Magnitude estimation yields ratio-scale data amenable to statistical analyses like regression.

3.3.3.Multidimensional Scaling

Multidimensional scaling (MDS) is a dimensionality-reducing approach that maps high-dimensional data into a lower-dimensional space while preserving the pairwise distances among the data points. Relying on this idea of distance, MDS seeks an embedding of the data that minimizes the discrepancy between the distances in the original space and those in the lower-dimensional space.

MDS is often used to visualize high-dimensional data in order to discover hidden patterns and connections. It works not only with numerical data but also with categorical and mixed data. Mathematical optimization procedures such as gradient descent and simulated annealing are used in implementations of MDS to reduce the gap between the original and reduced-dimensional distances as much as possible.
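The stress-minimization idea can be illustrated on a toy problem: recovering one-dimensional coordinates for three points from their pairwise distances by gradient descent. This is a deliberately simplified sketch, not a production MDS implementation, and the learning rate and iteration count are arbitrary choices.

```python
import random

# Target pairwise distances, consistent with points at positions 0, 1, 3 on a line.
target = {(0, 1): 1.0, (0, 2): 3.0, (1, 2): 2.0}

rng = random.Random(0)
coords = [rng.uniform(-1, 1) for _ in range(3)]   # random starting configuration

for _ in range(2000):
    grad = [0.0, 0.0, 0.0]
    for (i, j), d in target.items():
        diff = coords[i] - coords[j]
        cur = abs(diff) or 1e-9                   # current embedded distance
        g = 2 * (cur - d) * (diff / cur)          # gradient of (cur - d)^2 w.r.t. coords[i]
        grad[i] += g
        grad[j] -= g
    coords = [c - 0.01 * g for c, g in zip(coords, grad)]

# "Stress": sum of squared errors between embedded and target distances.
stress = sum((abs(coords[i] - coords[j]) - d) ** 2 for (i, j), d in target.items())
```

After the descent the stress is near zero, meaning the one-dimensional configuration reproduces the target distances, which is exactly what MDS does in higher dimensions.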

Overall, multidimensional scaling (MDS) is a potent and adaptable method for dealing with high-dimensional data and uncovering previously unseen patterns and linkages. The fields of data mining, machine learning, and pattern recognition all make extensive use of it.

1. Features of Multidimensional Scaling (MDS)
 The idea behind MDS is to identify a lower-dimensional projection of the data that minimizes the distortion of distances relative to the original space. This allows MDS to maintain the connections among the data points and draw attention to patterns and trends that could otherwise go unnoticed.
 MDS may be used with a variety of data sets, including numerical, categorical, and mixed data. This adaptability means MDS can be applied to a wide variety of data types and can deal with multi-modal datasets.
 Numerical optimization procedures such as gradient descent and simulated annealing are used to minimize the discrepancy between the original and reduced-dimensional distances. Because of this, MDS is a versatile and adaptive method that can deal with nonlinear data and identify projections distinct from those generated by linear methods like principal component analysis (PCA).
 Machine learning, data mining, and pattern recognition are just a few of the numerous applications of MDS. It is a tried-and-true method that has undergone rigorous testing and validation, and it enjoys widespread support from a sizable and engaged user base.

2. Limitations of Multidimensional Scaling (MDS)

It is important to keep in mind that, as with any other approach, there are disadvantages and restrictions to employing MDS for data analysis and visualization.

 When defining the projection, MDS takes
into account only the distances among the
data points and ignores any other
relationships between them, such as
correlations or associations. Data sets
with complicated, non-distance-based
relationships, or with missing or noisy
distances, may therefore not be good
candidates for MDS.
 The accuracy of the projections and the
clarity of the findings are vulnerable to
noise and outliers in the data. When the
data contain outliers or noise, MDS may
produce projections that are inaccurate or
misleading, since they do not properly
represent the data's underlying structure.
 Since this method is a form of global
optimization, it searches for a single
projection that best fits the whole dataset.
MDS may therefore fail to capture the local
structure of the data within each group,
which can be troublesome for data sets with
complicated, multi-modal patterns that
contain several clusters of data points.

3.3.4.Deciding the Scale

3.4. Data Collection

3.4.1.Introduction

The term "data collection" refers to the act of
amassing information for the sake of analysis,
interpretation, and usage in a variety of
commercial contexts. It is a fundamental
component of data analytics programs and
scientific investigations: in order to assess
business achievement or other results and make
predictions about future patterns, actions, and
situations, it is necessary to gather the right
kind of data in a systematic way.
Multiple layers of data collecting occur in
organizations. Data about customers, workers,
revenue, and various other business metrics is
routinely gathered by IT systems as a byproduct of
transaction processing and data entry. Businesses
also use social media monitoring and consumer
surveys for gathering customer input. The next
step is for data scientists and other analysts to
assemble the necessary data for analysis by
mining internal systems and, if necessary, using
other sources of information. That work marks the
start of data preparation, which also covers the
subsequent steps of consolidating and cleaning
the data for use in BI and analytics programs.

Data collecting is frequently a more specialized
procedure for studies in science, health, higher
education, and various other disciplines, as
researchers develop and apply techniques to
gather particular kinds of data. In both the
commercial and academic settings, however, the
obtained data must be correct for analytics to be
reliable and research outcomes to be trustworthy.

3.4.2.Experimental and Surveys

Experiment: The term "experiment" describes the
process of trying something out in practice using
a scientific method or approach and then
observing the results. An experiment enables the
examiner to put hypotheses to the test by
carrying them out and observing what happens.

Survey: The term "survey" describes the method of
collecting data from all, or a selected number,
of the participants in the universe in order to
analyze a particular variable. Data-gathering
methods such as interviews, questionnaires, and
case studies are arranged in a systematic fashion
in order to conduct the survey. Surveys use a
predetermined series of questions from a formal
instrument to gather data in the same format in
which the questions were posed.

Table 3 Difference between Experiment and Survey

1. An experiment is a way of trying something out
practically with the help of a scientific
procedure/approach and observing the outcome; a
survey is a way of gathering information about a
variable under study from people.
2. Experiments are conducted in experimental
research; surveys are conducted in descriptive
research.
3. Experiments are carried out to experience
something; surveys are carried out to observe
something.
4. Experimental studies usually have smaller
samples; surveys usually have larger samples.
5. The experimenter may manipulate the variable
or arrange for events to happen; the surveyor
does not.
6. Experiments are appropriate in the physical
and natural sciences; surveys are appropriate in
the social or behavioral sciences.
7. Experiments come under laboratory research;
surveys come under field research.
8. Experiments are meant to determine
cause-and-effect relationships; surveys can study
possible relationships between the data and the
unknowns in the universe.
9. Experiments cost more than surveys.
10. Experiments usually require laboratory
equipment during the experimental process;
surveys require little or no equipment beyond
what is needed to collect the data.

3.4.3.Collection of Primary Data

Primary data collection means collecting facts
and figures first-hand before turning to
secondary and tertiary sources for help. There
are a number of ways to get this information,
including polls and interviews. Despite
bolstering their findings with further research,
professionals continue to depend on primary
sources for most of their knowledge.

The term "primary data collection" refers to the
practice of gathering information directly from a
natural or human-made phenomenon. Primary data
collection attempts to amass information that is
both precise and exhaustive. This information may
be utilized to make the world a better place for
humans and other species. Primary data may be
gathered either online or offline.

1. Primary Data Collection Methods

The term "primary data" describes information
obtained straight from the original source, as
opposed to secondary sources — that is,
information that has not been used before.
Primary data-gathering techniques often provide
the highest-quality results in academic studies.

Primary data-gathering techniques may be further
classified as either quantitative (focusing on
measurable variables) or qualitative (examining
more subjective aspects of the research topic).

Primary data, often known as raw data, comprises
information that has not been filtered or
interpreted in any way before use. The primary
data collection methods fall into two distinct
categories:

i. Quantitative Data Collection Methods


It relies on numerical computations in a number
of different forms, such as numerical answers to
closed-ended questions, statistical regression
and correlation, and measurements of central
tendency and dispersion. The time and money
required for this strategy are much lower than
for qualitative data collection strategies.

ii. Qualitative Data Collection Methods

There is no need for computations of any kind.
This approach is often used when dealing with
intangibles. Interviews, polls, observations,
case studies, and other similar methods are all
part of this category of data collection. This
information may be gathered in a number of ways,
including the following:

 Observation Method

In the field of behavioral research, the
observation technique is often used. This
procedure is well organized and goes through a
rigorous set of checks and balances. Some
examples of various kinds of observation are:

o Structured and unstructured observation
o Controlled and uncontrolled observation
o Participant, non-participant and disguised
observation

 Interview Method

Data are gathered via interviews or other
conversational means. There are two methods that
may be used to accomplish this:
o Personal Interview – During a personal
interview, one person, the "interviewer,"
speaks with another directly to get
information. The personal interview may
take several forms, including but not limited
to: direct inquiry, concentrated talk, etc.

o Telephonic Interview – Telephonic interviews
are conducted by calling individuals over the
phone and asking them questions or eliciting
their opinions vocally.

 Questionnaire Method

This strategy entails sending the respondent a
series of questions by mail. Respondents are
expected to read the survey, fill it out, and
send it back. The form itself specifies the
sequence of questions to be answered. The
following are characteristics of a well-designed
survey:

o Simple and brief
o Organized in a logical manner
o Provides enough room for responses
o Avoids overly technical language
o Aesthetically pleasing, drawing the reader in
with things like color and paper quality

 Schedules

There is little variation between this approach
and the questionnaire approach. Schedules are
filled in by enumerators who are specially
appointed for that purpose. The enumerator can
clear up any confusion that arises by outlining
the investigation's goals and objectives. The
training of enumerators should emphasize the need
for diligence and perseverance on the task.

2. Advantages of primary data-collection methods

 Accuracy: There is less possibility of
inaccuracy or misreporting, since data are
collected directly from the population of
interest.
 Recency: Primary data collection guarantees
that the researcher gets the most recent
information available on the research topic.
 Control: The data-gathering procedure, and
hence the quality of the information
gathered, is entirely under the researcher's
control.
 Relevance: The questions asked may be
tailored to the needs of the study, which
increases their usefulness.
 Privacy: Respondents' anonymity may be
maintained and their privacy protected by
limiting who sees the study findings.

3. Disadvantages of primary data collection

 Cost: It may be costly to gather primary
information, particularly if the researcher
needs a large team.
 Labor: Data collection requires time and
effort. More competent people are needed to
collect information from huge populations.
Furthermore, it may be challenging to locate
relevant experts if the study topic is
obscure or unique.
 Time: Primary data collection is
time-consuming. For instance, in order to
conduct surveys, the researcher will need to
have respondents complete questionnaires.
Depending on the sample size, survey
distribution method, and response rate, this
might take anything from a couple of days to
many months. Costs can also accrue after a
survey is completed, for tasks like data
cleansing and organization.

3.4.4.Collection of Secondary Data

Secondary data collection refers to the process
of obtaining information that has previously been
gathered, often for a purpose other than the
researcher's own. The researcher does not collect
the data first-hand but rather compiles it from
secondary sources.

There are two main types of secondary data:
published data and unpublished data. Data that
has been released and made available to the
private or public sector is called "published
data," whereas "unpublished data" refers to data
that has not been made public.

Drow suggests that while selecting published data
sources, one should think about when the data was
published, the author's credibility, the
reliability of the source, the breadth and depth
of debate and analysis, and the contribution to
the development of the subject.

1. Secondary Research Methods


Secondary research has been chosen by many
companies and organizations due to its low price
tag. The costs associated with research and data
collection might be prohibitive for certain
businesses. Since it is possible to gather
information while sitting in front of a computer,
secondary research is sometimes referred to as
"desk research."

The following are some common secondary


research approaches and examples:

 Data Available on The Internet

Internet searches have become a common method of
gathering secondary data. Information may be
found and downloaded from the internet with ease.

In most cases, there is no fee associated with
accessing or downloading this information, and in
other cases the cost is small. Many websites now
provide extensive databases that may be mined by
corporations and other groups for their own
research purposes. However, businesses should
only use reputable websites as sources of data.

 Government and Non-Government Agencies

A number of governmental and non-governmental
organizations are good sources of information
useful for secondary research.

To access or make use of the data provided by
some of these organizations, the researcher may
need to pay a fee. The information collected by
these bodies may be relied upon to be accurate.

 Public Libraries

Finding relevant information may also be
accomplished by visiting public library
facilities. Important historical studies may be
found in public libraries' archives. They include
a wealth of useful data and records that may be
mined for insights.

It is important to note that the public library
services supplied may differ from one institution
to the next. Typically, libraries offer a vast
assortment of government publications including
market information, as well as a sizable
collection of company directories and
newsletters.

 Educational Institutions

Secondary research sometimes overlooks the
significance of gathering data from academic
institutions. Nonetheless, academic institutions
conduct more studies than almost any other
sector.

The major purpose of colleges' data-collection
efforts is academic inquiry. However, companies
and other groups may contact universities and
colleges to obtain data.

 Commercial Information Sources

Secondary sources such as local media (journals,
newspapers, publications, radio stations, and
television) may be very helpful in gathering
information. Market research, population
breakdowns, and updates on the political agenda
are just some of the topics that may be learned
about directly from these business-related
sources.

Companies and other groups are welcome to submit
data requests in order to collect the information
they need. Since these channels have a larger
audience, businesses may not only find potential
customers but also learn how best to sell their
goods and services through them.

2. Advantages of secondary data-collection
methods

The following are a few of the numerous
advantages of using secondary data-collection
techniques over primary ones:

 Speed: Secondary data-collection techniques
are quick, since they don't rely on people
to respond or record their findings, which
might create delays. Analysts may skip the
collection of primary data and go straight
into the analysis phase when using secondary
data.
 Low cost: Secondary data are cheap, since
they don't require as much money as primary
data. Using secondary data commonly saves
the time and money that would otherwise be
spent on survey logistics and
administration.
 Volume: Data analysts have access to tens
of thousands of books, articles, and online
resources. Data from several separate
studies may be compiled, and the parts most
useful to the researcher can be picked out.
 Ease of use: Secondary information,
particularly that which has been made public
by private entities or the government, is
often well-organized and easy to utilize.
This facilitates comprehension and
extraction.
 Ease of access: Secondary data is more
convenient to collect than primary data
because of its availability. Information of
interest may be found with little effort and
at little expense by searching the internet.

3. Disadvantages of secondary data collection

 Lack of control: Using secondary sources of
information removes the researcher's say in
the survey design and execution. It is
possible that the questions the researcher
needs answered are not covered by the
publicly available data. The result is that
it is more challenging than it should be to
zero in on the specific information the
researcher wants.
 Lack of specificity: Government
publications sometimes suffer from a lack of
clarity, and there might not be as many
reports accessible for emerging sectors. In
addition, leveraging secondary data can be
difficult if there is insufficient
information for the specific market the
business targets.
 Lack of uniqueness: Secondary sources might
not offer the fresh perspective and
innovative thinking the researcher needs.
For example, if the product or company
relies on originality and takes a
non-traditional approach to addressing
problems, the researcher may be let down by
the lack of specificity in the data
gathered.
 Age: Time passes, and data may change as a
result of the changing tastes of users. The
secondary information the researcher
collects may soon be obsolete. When this
occurs, gathering new information requires
more effort than it would if a poll were
conducted in person.

3.4.5.Selection of Appropriate Method for Data
Collection

The study-design decision marks the beginning of
the process of collecting data that will lead to
a thorough and accurate answer to the research
question. Many different types of data collection
may be used to investigate a research issue or
test a hypothesis. Nevertheless, as shown below,
a number of variables influence the decision of
which data-gathering technique is most
appropriate:

 The nature of the phenomenon under
study: The study's methodology relies
heavily on the specifics of the phenomenon
being investigated. Because of their unique
properties, different study phenomena need
different procedures and techniques for data
collection. For instance, certain phenomena,
like clinical practices in specific nursing
procedures, may only be investigated
adequately via observation, whereas the
expertise of a team of nurses may be
evaluated by asking them questions or
conducting interviews. Hence, the selection
of a specific data-collection technique is
heavily influenced by the specifics of the
phenomenon being studied.
 Type of research subjects: The techniques
used to gather data are also affected by the
nature of the research participants. For
instance, interviewing or observing people
with physical or mental impairments may be
preferable to using a questionnaire.
However, if information is needed on
physical items or establishments,
researchers may not be able to rely on
surveys or interviews and could be forced to
depend instead on observation for much of
their data collection.
 The type of research study: Different
approaches to data collecting are required
for quantitative and qualitative
investigations. For instance, qualitative
investigations need in-depth information,
making focus-group interviews and
unstructured participant interviews viable
for data collection, whereas quantitative
research investigations use structured
interviews, questionnaires, and
observations.
 The purpose of the research study: For
example, in-depth interviews could be
required for data collection within a study
whose purpose is the examination of a
phenomenon, whereas more structured
approaches of data collection can be
appropriate in a study whose purpose is the
explanation or correlation between two
variables.
 Size of the study sample: Interviews and
direct observation are feasible when
studying a small sample but become
time-consuming with a larger population.
Questionnaires are a good and preferable
technique of data gathering, especially for
bigger populations. Interviews and
observations are cost-effective and simple
data-gathering tools for smaller groups,
whereas questionnaires provide these same
advantages to researchers working with
bigger populations.
 Distribution of the target
population: Mail-in surveys may be
preferable to in-person interviews or casual
observations if the population of interest is
dispersed over a wide region, since they
may be administered with greater ease and
at lower expense.

3.4.6.Case Study Method

Case studies are a common approach to qualitative
research because they allow in-depth examination
of a specific social unit such as an individual,
family, organization, cultural group, or perhaps
a whole community. A more in-depth approach is
taken rather than a general one. One of the main
focuses of a case study is a thorough
investigation of a small set of circumstances and
the way they interact with one another. The
processes that occur and their interplay are the
focus of the case study. The case study is
therefore an in-depth analysis of the specific
system being considered. The purpose of a case
study approach is to isolate the variables that
contribute to the observed behavior of a specific
unit.

The case study method is a style of qualitative
analysis that involves in-depth study of a single
person, group of people, or organization in order
to draw broad generalizations about a larger
population or system.

1. Characteristics

The following are some of the most prominent
features of the case study approach:

 The social unit might be a single person, a


group of people, or an entire community; an
event can be studied as a whole.
 To gather sufficient data for making
reasonable judgments.
 The goal is to investigate the group
thoroughly from every angle.
 Seek a holistic understanding of the many
interconnected forces at play inside a social
system.
 This method is qualitative rather than
quantitative. Data collection goes beyond
simple numbers. There is a concerted
attempt to gather data on every facet of
human existence.
 To comprehend how many things influence
one another.
 Instead of using a more theoretical or
indirect strategy, researchers look at how
the relevant unit really behaves.
 It yields useful theories and data that might
be used to test them, allowing for the overall
body of knowledge to expand and deepen.

2. Advantages

The use of case studies offers a number of
benefits, including:

 The goal is to learn everything possible


about the unit's typical behavior.
 Allows one to compile a more honest and
enlightening account of one's life events.
 Using this strategy, researchers may piece
together the social unit's evolutionary
history in connection to the social elements
and environmental forces at play.
 It's useful for coming up with plausible
hypotheses and gathering information that
might be used to put those ideas to the test.
 The case study approach is widely utilized,
especially in social research, since it allows
for in-depth examination of social units.
 The researcher will find this to be a huge
aid in creating appropriate questionnaires.
 Researchers have a wide variety of tools at
their disposal, including in-depth
interviews, surveys, documents, study
reports from individuals, and so on.
 It has been helpful in determining the
nature of the units to be studied along with
the nature of the universe to be examined.
This is why the case study is at times
described as a "mode of organizing data."
 As a method that places a premium on
looking backwards in order to look forward,
it is a great way to get a firm grasp on a
community's history and utilize that
knowledge to inform present-day
recommendations for change.
 It is a true account of individual experiences
that is frequently overlooked by the vast
majority of expert researchers using more
traditional methods of inquiry.
 The researcher's knowledge, analytical
prowess, and skill set are all bolstered as a
result.
 It makes conclusions easier to make and
keeps the research process moving forward.

3. Limitations

It is also worthwhile to point out some of the
case study approach's more significant drawbacks:

 Data gained from case studies are typically
not comparable because of the unique nature
of each occurrence. Due to the first-person
nature of the case study, investigators need
to "read into" as well as "out of" the
narrative in order to apply logical
principles and units of scientific
classification.
 Read Bain does not regard case data as
significant scientific data, because they
fail to shed light on the "impersonal,
ubiquitous, non-ethical, non-practical,
repetitious elements of phenomena."
Authentic data are typically lacking from
case studies because of the researcher's
bias during data collection.
 Due to the lack of standard procedures for
data collection and the small sample size,
incorrect generalizations cannot be ruled
out.
 It is more expensive and time-consuming
than other methods. The case study approach
takes more time because it emphasizes
in-depth research on the life cycles of
social units.
 In Read Bain's view, the case data are
suspect because the subject may record what
he believes the investigator wants; and the
stronger the rapport, the more subjective
the procedure becomes.
 Since the case study approach rests on a
number of assumptions, some of which may not
be entirely plausible, the validity of the
information gleaned through case studies is
always open to debate.
 There is a practical limit to where the case
study approach can be used; studying a
large society, for example, is outside its
scope. In addition, case studies don't allow
for random sampling.
 The case study technique has the
significant constraint of being subject to
the researcher's subjectivity. The
researcher may sometimes believe he can
answer questions regarding the unit on his
own; when that is not actually true,
misleading results follow. This is not a
problem of the case method, but of the
researcher.
CHAPTER-4:Testing of
Hypotheses

4.1. Hypothesis

The researchers' predictions for the study's
outcomes are stated in the form of a hypothesis
or set of hypotheses: specific statements that
may be tested. The researchers make this
assumption at the outset. Research hypotheses
often propose a connection between two variables,
where the former is the one the researcher will
manipulate and the latter is the one used to
evaluate the effectiveness of the intervention.

When doing research, it is common practice to
formulate two hypotheses: a null hypothesis to be
tested, as well as an experimental hypothesis to
be tested if an experiment is to be used as part
of the inquiry.

The ability to test a hypothesis against reality
and either confirm or disprove it is essential.
To test a hypothesis, the researcher must first
assume that samples gathered from different
populations are equivalent; the alternative is to
reject the null hypothesis altogether. There is a
common misconception that the term "research
hypothesis" refers to the null hypothesis.

1. Characteristics of Hypothesis

A good hypothesis has the following features:

 The hypothesis must be clear and precise in
order to be credible.
 When the hypothesis is a relational one, it
should specify the connections between the
factors under consideration.
 The hypothesis must be well-defined,
allowing for further testing.
 The hypothesis's justification must be as
straightforward as possible, although its
relevance should not hinge on its apparent
ease of formulation.

2. Sources of Hypothesis

Hypotheses come from the following sources:

 The striking similarities between the two


events.
 Insights from previous research, real-world
practice, and the industry leaders' rivals.
 Hypotheses based on scientific research.
 The overarching trends that have an effect
on people's thought processes.

3. Functions of Hypothesis

A hypothesis serves the following functions:

 The ability to make observations and


conduct experiments is greatly enhanced by
the use of hypotheses.
 It serves as the primary launching point for
further inquiry.
 Having a working hypothesis to compare
against observed data is quite useful.
 It's useful for guiding people's questions in
the appropriate direction.

4.1.1.Types of Hypothesis

There are six main types of hypotheses:

1. Simple Hypothesis

It demonstrates a correlation between just one
independent variable and one dependent variable.
For instance: eating more vegetables helps one
shed pounds more quickly. Here, increased
vegetable consumption is the independent variable
that influences weight loss, the dependent
variable.

2. Complex Hypothesis

It illustrates the connection between a set of
dependent variables and a set of independent
variables. For instance: losing weight, improving
skin tone, and lowering the risk of several
ailments, including heart disease, are all
results of eating more vegetables and fruits.

3. Directional Hypothesis

It predicts the direction of the relationship
between the variables, reflecting the
researcher's intellectual commitment to a
particular outcome. For example, four-year-olds
who eat well over the course of five years
outperform their poorly fed peers on IQ tests.
Both the effect and its direction can be seen
here.

4. Non-directional Hypothesis
In the absence of any theoretical considerations,
this term is utilized. It's a declaration of fact that
two variables are related, however its precise
nature (direction) cannot be predicted.

5. Null Hypothesis

It makes a claim that runs counter to the working
hypothesis: it states that there is no
relationship between the dependent and
independent variables. It is denoted by the
symbol "H0."

6. Associative and Causal Hypothesis

The associative hypothesis states that a change
in one variable is accompanied by a change in
another variable. The causal hypothesis, by
contrast, postulates that two or more variables
interact in a cause-and-effect fashion.

4.1.2.Basic Concepts Concerning Testing of


Hypotheses

Testing hypotheses is a statistical technique for
inferring properties of a whole population from
data collected from a subset of it. There are two
hypotheses to examine when performing hypothesis
evaluation: the null and the alternative. The
null hypothesis, the alternative hypothesis, and
the significance level are foundational concepts
in hypothesis testing:

1. Null Hypothesis

The assumption of no statistically significant
difference between the sample data and the
population is known as the null hypothesis. H0 is
the conventional abbreviation for the null
hypothesis. If the data from a sample do not
strongly contradict the null hypothesis, then
that hypothesis is accepted.

Suppose we are interested in determining whether
or not the population mean (μ) coincides with a
hypothesized mean μH0 = 100. The null hypothesis
can then be expressed symbolically as:

H0: μ = μH0 = 100

The alternative hypothesis would be that the
population mean is different from the
hypothesized mean of 100, which can be
represented symbolically as:

Ha: μ ≠ 100

If the sample information does not support the
null hypothesis, the researcher must adopt an
alternative explanation. The conclusion reached
when a null hypothesis is rejected is referred to
as the alternative hypothesis. That is to say,
the alternative hypothesis is the set of
potential explanations that may be true when the
null hypothesis is not. To accept H0 is to reject
Ha, and to reject H0 is to accept Ha.
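To make the H0: μ = 100 example concrete, the sketch below computes a one-sample t statistic from an invented sample of ten observations; the sample values and the critical value 2.262 (two-sided, 5% level, 9 degrees of freedom, from standard t tables) are illustrative assumptions, not figures from the text.

```python
# One-sample t test of H0: mu = 100 against Ha: mu != 100.
# The sample values are invented; 2.262 is the two-sided 5% critical
# value for t with 9 degrees of freedom (from standard tables).
import math
import statistics

sample = [102, 98, 105, 110, 95, 101, 99, 107, 103, 100]
mu_h0 = 100.0

n = len(sample)
xbar = statistics.mean(sample)                 # sample mean
s = statistics.stdev(sample)                   # sample standard deviation
t_stat = (xbar - mu_h0) / (s / math.sqrt(n))   # one-sample t statistic

print(round(t_stat, 3))       # 1.422
print(abs(t_stat) > 2.262)    # False -> we fail to reject H0 for this sample
```

Here |t| does not exceed the critical value, so H0 is accepted in the sense described above: the sample gives no strong reason to believe the mean differs from 100.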

Researchers should not make the common mistake of
developing hypotheses using the data they gather
and then testing those hypotheses with that same
data; instead, they should choose the null
hypothesis and the alternative hypothesis before
drawing the samples. The following factors are
often taken into account when selecting a null
hypothesis:
 In general, the null hypothesis represents
the one being tested, whereas the
alternative hypothesis represents the one
being refuted. Therefore, researcher have
the null hypothesis, which is the theory
researcher want to disprove, as well as the
alternative hypothesis, which is everything
else that may be true.
 If rejecting a particular hypothesis when
it is actually true would be a serious
mistake, that hypothesis should be treated
as the null hypothesis (with the risk of
such a mistake controlled by the level of
significance).
 The null hypothesis must always be a
precise hypothesis; it must never imply that
a value is somewhat or roughly equal to
another.
 Hypothesis testing is often conducted on the
foundation of the null hypothesis, with the
alternative hypothesis always in mind. The
probability of any potential sample result
can be calculated under the null hypothesis,
but under the alternative hypothesis, this
can't be accomplished. As a result, the
statistical hypothesis, frequently referred to
as the null hypothesis, is often used.

2. Alternative Hypothesis

The alternative hypothesis is the statement that the sample data do not come from a population with the hypothesized parameter value. In standard use, Ha denotes the alternative hypothesis. If there is enough evidence from the sample data to reject the null hypothesis, then the alternative hypothesis is accepted.

3. Significance Level

The significance level indicates how much evidence is required to reject the null hypothesis. Both 0.05 and 0.01 are common choices for the significance threshold. The null hypothesis is rejected in favor of the alternative hypothesis when the p-value, the probability of obtaining the observed sample statistic if the null hypothesis is true, is lower than the significance threshold.

4. Decision rule or test of hypothesis

A decision rule, commonly referred to as a test of hypothesis, is the criterion that specifies the conditions under which the null hypothesis (H0) is accepted and those under which it is rejected in favor of the alternative hypothesis (Ha). For example: H0 may state that a particular lot is good because it contains very few defective items, whereas Ha states that the lot is not good because it contains far too many defective items. In this case, the researcher must determine the sample size and the criterion for accepting or rejecting the null hypothesis. The researcher might test 10 items from the lot and make a choice based on the results, say accepting H0 only if there is not a single defective item among the 10. This kind of rule is called a decision rule.

5. Type I and Type II errors


There are two general categories of mistakes that may be made while conducting tests of hypotheses. It is possible to reject H0 when it is true, or to accept H0 when it is false. The former is referred to as a Type I error, the latter as a Type II error: a Type I error is the incorrect rejection of a true hypothesis, whereas a Type II error is the incorrect acceptance of a false hypothesis. The probability of a Type I error is denoted by alpha (and is also called the level of significance of the test), while the probability of a Type II error is denoted by beta. The two mistakes can be summarized as follows:

Figure 4:1 Type I and Type II errors1

The probability of a Type I error is usually taken to be the level of significance of the test and is fixed in advance. With a fixed Type I error rate of 5%, there is about a 5 in 100 chance of rejecting H0 even though it is true. A Type I error can be kept under control simply by fixing this probability: if the researcher sets it at 1%, for example, the maximum probability of making a Type I error is 0.01.
1 https://www.wisdomjobs.com/tutorials/type-i-and-type-ii-errors.png

However, when researchers attempt to decrease the Type I error with a predetermined sample size n, they also increase the likelihood of a Type II error. It is impossible to minimize both kinds of mistakes at the same time: the probability of committing one sort of mistake can be decreased only at the expense of increasing the probability of committing the other. To balance this trade-off in commercial settings, decision-makers weigh the costs and penalties of both kinds of mistakes before settling on an acceptable probability of Type I error. If, for example, accepting a false hypothesis could lead to poisoning a large number of people with a chemical compound, the consequences of a Type II error are far more serious than those of a Type I error, so it is better to risk a Type I error; in such a case a relatively large probability of Type I error should be allowed when testing the hypothesis. Thus, it is crucial to strike a balance between Type I and Type II errors when doing hypothesis testing.
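This trade-off can be made concrete with a small Monte Carlo sketch. All numbers here (a null mean of 100, a true mean of 103, σ = 10, n = 25) are hypothetical, chosen only to show that tightening the Type I error rate inflates the Type II error rate:

```python
import random
from statistics import NormalDist, mean

# Hypothetical setup: H0: mu = 100, true mean actually 103,
# sigma = 10, n = 25, right-tailed test Ha: mu > 100.
random.seed(42)
mu0, mu_true, sigma, n = 100.0, 103.0, 10.0, 25
se = sigma / n ** 0.5

def rejection_rate(alpha, true_mean, trials=2000):
    """Fraction of simulated samples whose z-test rejects H0."""
    z_crit = NormalDist().inv_cdf(1 - alpha)
    rejections = 0
    for _ in range(trials):
        sample = [random.gauss(true_mean, sigma) for _ in range(n)]
        z = (mean(sample) - mu0) / se
        rejections += z > z_crit
    return rejections / trials

type1_at_5pct = rejection_rate(0.05, mu0)          # Type I error rate
type2_at_5pct = 1 - rejection_rate(0.05, mu_true)  # Type II error rate
type2_at_1pct = 1 - rejection_rate(0.01, mu_true)
# Lowering alpha from 5% to 1% raises the Type II error rate.
```

Running this shows the Type I rate staying near the chosen α while the Type II rate grows as α shrinks, which is exactly the trade-off described above.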

6. Two-tailed and One-tailed tests

The distinction between these two concepts is crucial in hypothesis testing. A two-tailed test rejects the null hypothesis when the sample mean is significantly higher or significantly lower than the hypothesized value of the population mean. This kind of test is appropriate when the null hypothesis states a specific value while the alternative hypothesis states that the mean differs from that value. The two-tailed test is suitable when we have H0: μ = μH0 and Ha: μ ≠ μH0, which may mean either μ > μH0 or μ < μH0.

As a result, there are two rejection regions, one in each tail of the curve, in the two-tailed test, as shown below:
Figure 4:2 Two-tailed and One-tailed tests

This may be stated mathematically as:

Acceptance Region A: |Z| ≤ 1.96

Rejection Region R: |Z| > 1.96

According to the above curves, if the significance level is 5% and a two-tailed test is used, the probability of the rejection region is 0.05, divided evenly between the two tails of the curve as 0.025 each, while the probability of the acceptance region is 0.95. Assuming μH0 = 100, the researcher will reject the null hypothesis when the sample mean differs from 100 by more than a certain amount in either direction; otherwise, the researcher will accept the null hypothesis.

However, there are times when only a one-tailed test is appropriate. A one-tailed test is used to determine whether the population mean is less than, or is more than, a certain hypothesized value. For example, a left-tailed test, where there is just one rejection region in the left tail, is of relevance if H0: μ = μH0 and Ha: μ < μH0, as shown below.
When the sample mean is significantly smaller than μH0 = 100, the researcher will reject H0; otherwise H0 will be accepted at the predetermined significance level. The curve shows that a rejection region of size 0.05 in the left tail is to be expected if the significance threshold is set at 5%.

A one-tailed test in the right tail is of interest if the null hypothesis is H0: μ = μH0 and the alternative hypothesis is Ha: μ > μH0, as indicated in the following diagram:

4.1.3.Testing of Hypothesis

Testing hypotheses allows researchers to draw conclusions about a population from data collected from a sample of that population. It is an analytical device that puts hypotheses to the test and estimates the likelihood of an outcome to a stated degree of precision. When conducting an experiment, testing hypotheses helps ensure reliable findings. The testing of hypotheses begins with the establishment of a null hypothesis and an alternative hypothesis. As a result, researchers are better able to draw conclusions about the population from which the sample was drawn.

Hypothesis testing, as mentioned above, involves deciding between two competing hypotheses about the value of a population parameter by assessing the plausibility of an assumption known as the null hypothesis. Using a subset of data to infer something about the whole, or "population," is what hypothesis testing does. For the purpose of putting various hypotheses to the test, statisticians have created a number of tests of significance, commonly referred to as tests of hypothesis:

 Parametric tests, or standard tests of hypotheses; and
 Non-parametric tests, or distribution-free tests of hypothesis.

Many assumptions about the parent population are built into parametric tests. Parametric tests require a number of presumptions to hold, including that the population is normally distributed, that the sample is sufficiently large, and that the sample accurately represents the population's characteristics (mean, variance, etc.). However, there are times when researchers either should not or simply cannot make these kinds of presumptions. Tests that do not depend on any assumptions about the characteristics of the parent population are referred to as non-parametric tests. In addition, most non-parametric tests assume merely ordinal or nominal data, whereas parametric tests require measurement on at least an interval scale. As a consequence, to attain the same Type I and Type II error rates, non-parametric tests require more observations than parametric tests.

1. Hypothesis Testing Steps

The following are the stages of a hypothesis test:

 The researchers usually start by stating the null and alternative hypotheses. The null hypothesis states that there is no relationship between the variables; the alternative hypothesis states that some association between the variables exists.
 The necessary sample data are then gathered; this data should closely reflect the whole population to be tested.
 The next step is for researchers to choose a statistical analysis method that is appropriate for the data they have gathered.
 The null hypothesis is accepted or rejected based on the outcome of the test and the chosen level of significance.
 As a last step, the result of the study is written up, compiling and summarizing the statistical results.
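The steps above can be sketched in a few lines of Python. The data and the hypothesized mean of 50 are invented for illustration, and a large-sample z approximation (standard library only) stands in for whatever test an analyst would actually select in the third step:

```python
from statistics import NormalDist, mean, stdev

# Step 1: H0: mu = 50 versus Ha: mu != 50 (hypothetical data below).
sample = [52.1, 49.8, 53.4, 50.9, 51.7, 48.6, 52.8, 50.2,
          51.3, 49.9, 52.5, 50.7, 51.1, 53.0, 49.4, 51.8]
mu0, alpha = 50.0, 0.05

# Steps 2-3: gather the sample and compute the test statistic
# (z approximation, sample standard deviation in place of sigma).
z = (mean(sample) - mu0) / (stdev(sample) / len(sample) ** 0.5)

# Step 4: two-tailed p-value and the accept/reject decision.
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
decision = "reject H0" if p_value < alpha else "fail to reject H0"

# Step 5: summarize the result.
summary = f"z = {z:.2f}, p = {p_value:.4f}: {decision}"
```

With these invented numbers the sample mean of 51.2 sits well above 50, so the sketch ends in a rejection of H0.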

2. Hypothesis Testing Formula


T-tests and z-tests are just two examples of the many statistical tests used by researchers. The z-statistic is calculated as follows:

Z = ( x̅ – μ0 ) / (σ /√n)

 Here, x̅ is the sample mean,

 μ0 is the population mean,

 σ is the population standard deviation,

 n is the sample size.

The study draws a conclusion about the hypotheses based on the Z-test's findings: either the null hypothesis or the alternative is supported. The two hypotheses are stated as:

H0: μ=μ0

Ha: μ≠μ0

Here,

H0 = null hypothesis

Ha = alternate hypothesis

When the sample mean is consistent with the hypothesized population mean, the null hypothesis is considered correct. In the absence of support for the null hypothesis, the alternative is accepted.
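A minimal numeric check of the formula above, using invented values (x̅ = 103.2, μ0 = 100, σ = 12, n = 36):

```python
from statistics import NormalDist

# Hypothetical inputs for the z formula.
x_bar, mu0, sigma, n = 103.2, 100.0, 12.0, 36

z = (x_bar - mu0) / (sigma / n ** 0.5)   # (103.2 - 100) / 2 = 1.6
p_two_sided = 2 * (1 - NormalDist().cdf(abs(z)))

# At the 5% level, |z| must exceed about 1.96 to reject H0: mu = mu0.
reject_h0 = abs(z) > NormalDist().inv_cdf(0.975)
```

Here |z| = 1.6 falls short of 1.96, so the null hypothesis μ = μ0 is not rejected.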

3. Benefits of Hypothesis Testing


The correctness of fresh ideas or hypotheses may be gauged via the process of hypothesis testing. This enables researchers to check whether their theory is supported by the data, which helps eliminate the possibility of drawing unfounded conclusions. Additionally, the structure provided by hypothesis testing allows for objective, evidence-based decision making as opposed to gut instinct. Hypothesis testing provides a sound basis for drawing reasonable conclusions by minimizing the roles played by randomness and confounding factors.

4. Limitations of Hypothesis Testing

The use of data alone in testing hypotheses may be limiting and may not lead to a full comprehension of the topic at hand. The reliability of the findings is also contingent upon the completeness and precision of the data and the robustness of the statistical techniques used. Inaccurate data or improperly formulated hypotheses may produce wrong conclusions or failed experiments.

Mistakes in hypothesis testing include analysts accepting or rejecting the null hypothesis when they should not. These mistakes can cause unwarranted inferences or the failure to notice key trends or relationships.

4.2. Test Statistics and Critical Region

The rejection zone, also known as the critical region, is the section of the sampling distribution in which it is reasonable to reject the null hypothesis. If the test result falls inside that region, it is likely to be statistically significant.

Figure 4:3 Critical Region2

One primary function of statistics is to verify hypotheses or experimental findings. For instance, suppose a researcher believes they have developed a novel fertilizer that may increase plant growth by 50%. To show that the hypothesis is correct, the test has to meet the following aims:

 A pattern of occurrence must be demonstrated.
 The typical rate of growth for plants lacking fertilizer serves as a useful comparison baseline in this case.

The term "hypothesis test" is used to describe this kind of statistical analysis. The testing procedure includes the rejection region. Grounded in probability, it helps the researcher determine how likely the theory, or "hypothesis," is to be correct.

2 https://www.statisticshowto.com/wp-content/uploads/2014/02/t-distribution2-300x120.jpg
4.2.1.Rejection Regions and Alpha Levels

The researcher determines the tolerance for error, the alpha level. For instance, if the desired degree of confidence in the significance of the findings is 95%, the alpha level is 5% (100% − 95%), and the rejection region covers that 5%. For a one-tailed test, the whole 5% lies in one tail; for a two-tailed test, the rejection region is split between the two tails.

4.2.2.Rejection Regions and P-Values

A p-value and a critical value represent two methods for determining whether or not to reject a hypothesis:

 P-value method: The p-value approach uses a p-value as the outcome of a hypothesis test, such as a z-test. "p" denotes a probability; it is the metric by which the researcher may determine whether or not the hypothesized statement is plausible. If the value falls within the rejection region, the findings are statistically significant and the null hypothesis may be rejected. If the p-value lies beyond the rejection region, the findings are insufficient to reject the null hypothesis. In the plant fertilizer example, a statistically significant finding would be evidence that the fertilizer accelerates plant growth relative to other fertilizers.
 Rejection Region method with a critical value: The procedure for the rejection region technique with a critical value is the same as above; however, a critical value is computed instead of a p-value. If the test statistic falls beyond the critical value, into the rejection region, the null hypothesis should not be accepted.
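The two methods always agree on the decision, which a short sketch can confirm. The observed z of 2.1 and the 5% right-tailed setup are hypothetical:

```python
from statistics import NormalDist

norm = NormalDist()
z_observed, alpha = 2.1, 0.05   # hypothetical one-tailed z-test

# P-value method: reject when the tail probability drops below alpha.
p_value = 1 - norm.cdf(z_observed)
reject_by_p_value = p_value < alpha

# Critical-value method: reject when z falls in the rejection region.
z_critical = norm.inv_cdf(1 - alpha)            # about 1.645
reject_by_critical_value = z_observed > z_critical

# Both routes reach the same decision on the same data.
```

Here p ≈ 0.018 < 0.05 and 2.1 > 1.645, so both routes reject the null hypothesis.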

4.3. Critical Value and Decision Rule

Critical Value: A critical value is a cut-off number marking the start of the range in which the test statistic is not expected to fall if the null hypothesis is true. If the resulting test statistic is larger than the critical value, the null hypothesis must be rejected.

The critical value is used to visually divide the test space into an acceptance region and a rejection region when doing hypothesis testing. It is useful for determining whether or not a test statistic is statistically significant. In this section we go further into the critical value, exploring its definition, its several kinds, and how to determine its exact value.

A critical value can be determined for many hypothesis tests. The critical value for a given test can be inferred from the level of significance and the distribution of the test statistic. When testing a hypothesis with one tail, there is just one critical value to look for; when testing with two tails, there are two.

1. Critical Value Formula


The critical value may be calculated using a variety of different formulae, each tailored to the specific distribution that the test statistic follows. A threshold value can be established using the confidence level together with the significance level. The procedure is described below:

2. Critical Value Confidence Interval

With the help of the confidence level, the researcher can determine the critical value for one-tailed and two-tailed tests. The critical value can be calculated as follows:

 Step 1: Subtract the confidence level from 100%. For example, 100% − 95% = 5%.

 Step 2: Express this as the significance level α. Therefore, α = 5% = 0.05.

 Step 3: For a one-tailed test, the alpha level from the second step remains unchanged. For a two-tailed test, however, the alpha level is divided by 2 (α/2).

 Step 4: Using this alpha value, look up the critical value in the distribution table appropriate to the kind of test that was performed.
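The four steps translate directly into code. This sketch assumes a z-based test, so that Python's standard-library normal quantile can play the role of the table lookup in step 4:

```python
from statistics import NormalDist

def z_critical_value(confidence, two_tailed=True):
    """Steps 1-4 for a normal-distribution test statistic."""
    alpha = 1 - confidence                      # steps 1 and 2
    tail = alpha / 2 if two_tailed else alpha   # step 3
    return NormalDist().inv_cdf(1 - tail)       # step 4: z "table"

two_tailed_95 = z_critical_value(0.95)                    # about 1.96
one_tailed_95 = z_critical_value(0.95, two_tailed=False)  # about 1.645
```

The two outputs match the familiar 1.96 and 1.645 cutoffs used throughout this chapter.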

4.3.1.Decision Rule in Hypothesis Testing


Decision rules are the criteria that determine whether the null hypothesis is to be accepted or rejected.

The hypothesis is first stated. The next step is to figure out whether the test is one- or two-tailed. Then an appropriate level of significance is chosen and the test statistic is computed. Next the critical value is determined; when the test statistic has a normal distribution, the critical value is obtained from the z-statistic of the standard normal distribution. The critical value and the test statistic together define the decision rule.

The following are the rules to follow when working at a 5% level of significance:

1. H0: θ = θ0 versus Ha: θ ≠ θ0

Reject the null hypothesis if test-statistic > 1.96 or if test-statistic < -1.96.

2. H0: θ ≤ θ0 versus Ha: θ > θ0

Reject the null hypothesis if test-statistic > 1.645

3. H0: θ ≥ θ0 versus Ha: θ < θ0

Reject the null hypothesis if test-statistic < -1.645

For the worked example that follows, the test is two-tailed, so the decision rule is:

Reject the null hypothesis if test-statistic > 1.96 or if test-statistic < -1.96.
Figure 4:4 Decision Rule in Hypothesis Testing3

The test statistic obtained, 4, is larger than the critical value of 1.96; therefore the null hypothesis is rejected and the alternative hypothesis accepted. The test performed was two-tailed.

The following graph illustrates the z-test's rejection point at the 5% significance level for a one-sided test.

4.4. Procedure for Hypothesis Testing

The procedure for testing hypotheses consists of determining the plausibility of a theory based on evidence gathered during the study. In hypothesis testing, the central question is always whether or not to reject the null hypothesis. The term "hypothesis testing procedure" describes the steps taken to decide whether or not to reject a null hypothesis. The stages of hypothesis testing are as follows:

 Making a formal statement: This stage entails stating both the null hypothesis (H0) and the alternative hypothesis (Ha) formally. The hypotheses must be articulated explicitly, taking into account the specifics of the research issue at hand. It is crucial to formulate hypotheses after carefully considering the problem's context and intended outcome. Further, this step tells us whether a one-tailed or a two-tailed test is appropriate: a one-tailed test is used if Ha is of the "greater than" (or "less than") type, whereas a two-tailed test is used if Ha is of the "whether greater or smaller" kind.
3 https://financetrain.sgp1.cdn.digitaloceanspaces.com/dr11.gif
 Selecting a significance level: The significance level must be chosen with care, since all hypotheses are tested against a predetermined threshold; in practice a 5% or 1% level is often used. Several considerations bear on this choice: whether the hypothesis is directional (predicting the direction of the difference between, say, two means) or non-directional, the magnitude of the difference between sample values, the sizes of the samples, and the variability of measurements within the samples. In a nutshell, the significance level ought to make sense given the research's goals and methodology.
 Deciding the distribution to use: After the significance level is selected, the next stage in hypothesis testing is choosing a suitable sampling distribution. The normal distribution and the t-distribution are the two most commonly used options. The principles for choosing the right distribution are quite similar to those discussed earlier for estimation.
 Selecting a random sample and computing an appropriate value: The next step is to choose a random sample (or samples) and use the sample data to calculate the value of the test statistic under the appropriate distribution. In other words, a sample is drawn to provide hard numbers.
 Calculation of the probability: The probability is then determined of the sample result deviating from expectations as much as it has, assuming the null hypothesis is correct.
 Comparing the probability: The final step is to compare the resulting probability to α, the significance level set by the user. If the calculated probability is less than α in a one-tailed test (or α/2 in a two-tailed test), the null hypothesis is rejected and the alternative hypothesis is accepted; if it is larger, the null hypothesis is accepted. A Type I error is possible when H0 is rejected (with probability at most the level of significance), while a Type II error is possible when H0 is accepted (its magnitude cannot be stated so long as H0 happens to be imprecise rather than specific).
4.5. Hypothesis Testing

4.5.1.Mean

The purpose of a hypothesis test is to arrive at a conclusion regarding the value of a parameter, such as the population mean. The critical value method and the P-value method are both valid ways to test an assumption. Since a sample statistic is employed to make a judgement about the value of the parameter, there is a chance that the decision reached is in error; when doing a hypothesis test, there are two sorts of mistakes that might occur: Type I error and Type II error.

 Null Hypothesis: The hypothesis being tested is the null hypothesis, Ho, which is assumed to be correct. The null hypothesis contains a mathematical equality.

 Alternative Hypothesis: Ha stands for "alternative hypothesis," a hypothesis that is seen as a possible alternative to the null hypothesis and that may turn out to be correct. (The alternative hypothesis contains an inequality: <, >, or ≠.)

1. Types of Errors
 Type I Error: Rejecting Ho when in fact Ho
is actually true
 Type II Error: Accepting Ho when in fact
Ho is actually false
The significance level (alpha, α; commonly 0.01, 0.05, or 0.1) is a measure of how likely it is that a Type I error will be made.

The significance level is the likelihood of rejecting a true null hypothesis (i.e., the chance that the test statistic falls in the rejection region when the null hypothesis is true), and it serves as the foundation for determining the rejection region.

 Rejection Region: The range of test


statistics where Ho may be rejected is
referred to as the "rejection region."
 Critical Values: The thresholds at which the rejection region begins and ends: zα or ±zα/2, or tα or ±tα/2.
 Test statistic: The value computed from the sample that is compared, at the chosen significance level, against the critical value to determine whether or not the null hypothesis may be rejected.

The null hypothesis Ho is rejected if the resulting test statistic value falls inside the rejection region. When the value of the test statistic falls outside the rejection region, the null hypothesis is accepted.

4.5.2.Difference of Two Mean

The procedure for testing this hypothesis follows the same broad outline as any other. The specifics of this test's implementation and its test statistic are new compared to the tests above, but otherwise they are consistent with prior experience.

Step 1: Determine the hypotheses.

Hypotheses about a difference between the means of two populations are analogous to those about a difference between the proportions of two populations. The null hypothesis, H0, is another way of saying "no effect" or "no difference."

H0: μ1 – μ2 = 0, which is the same as H0: μ1 = μ2

The alternative hypothesis, Ha, could be any one of the following:

 Ha: μ1 – μ2 < 0, which is the same as Ha: μ1 < μ2

 Ha: μ1 – μ2 > 0, which is the same as Ha: μ1 > μ2

 Ha: μ1 – μ2 ≠ 0, which is the same as Ha: μ1 ≠ μ2

Step 2: Collect the data.

The validity of the inference technique is, as always, dependent on the quality of the data collected. There are two criteria for gathering information, as usual:

 Samples must be selected at random, to remove or reduce bias.
 A sample ought to be a true representation of the population it is meant to reflect.
The following requirements must be met for this hypothesis test to be used:

 The two random samples are independent.
 The distribution of the variable is normal within both populations. For samples larger than 30, the sampling distribution of the test statistic is well described by the t-distribution even if the variable itself is not normally distributed. In "Hypothesis Test for the Population Mean," we saw that t-procedures hold up well even when the variable does not have a normal distribution in the population. When investigating the population's normality is impractical, we examine the distribution within the samples instead: the inference approach is used if the histogram or dot-plot of the data lacks a strong skew or outliers, suggesting that the variable is not heavily skewed in the population.

Step 3: Assess the evidence.

The t-test statistic is computed if the criteria are satisfied. A common representation of the t-test statistic is:

t = ((x̅1 – x̅2) – (μ1 – μ2)) / √(s1²/n1 + s2²/n2)

Under the null hypothesis the term (μ1 – μ2) always evaluates to 0, since H0 assumes there is no difference between the population means.

It was covered in "Estimating a Population Mean" that the t-distribution changes depending on the number of degrees of freedom (df). For a single sample and for matched pairs, df = n – 1. Here the df will be provided directly or found using technology. Using the t-test statistic and the degrees of freedom, one can calculate the P-value from the appropriate t-model, exactly as was done in "Hypothesis Tests for a Population Mean." The same simulation may be used.

Step 4: State a conclusion.

Following the same pattern as previous hypothesis tests, researchers draw a conclusion by comparing the P-value to the established significance level α:

 If the P-value is less than α, the alternative hypothesis is preferred over the null.

 If the P-value is greater than α, the null hypothesis is not rejected; there is insufficient evidence to accept the alternative hypothesis.

As always, findings are given context by stating them in terms of the alternative hypothesis.
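As a numeric sketch of the four steps, take invented summary statistics for two independent groups. Since both samples exceed 30, the t statistic is approximated here with the standard normal so that the example needs only the standard library:

```python
from statistics import NormalDist

# Hypothetical summary statistics for two independent samples.
n1, x_bar1, s1 = 40, 121.3, 11.2
n2, x_bar2, s2 = 45, 115.8, 10.4
alpha = 0.05

# Test statistic for H0: mu1 - mu2 = 0 (the hypothesized
# difference of 0 is subtracted in the numerator).
se = (s1**2 / n1 + s2**2 / n2) ** 0.5
z = (x_bar1 - x_bar2 - 0) / se

# Two-tailed P-value via the normal approximation to the t-model.
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
reject_h0 = p_value < alpha
```

With these numbers the statistic comes out near 2.34, so at the 5% level the difference between the group means is judged statistically significant.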

4.5.3.Proportion

i. H0, the default or null hypothesis, always contains an "equal to." It is the initial assertion regarding the population parameter.
ii. Ha, the alternative hypothesis, may contain a "less than," "greater than," or "not equal to." The form of the alternative hypothesis is determined by the nature of the question being asked.
iii. The left-tail, right-tail, or two-tail nature of the test may be inferred from the form of the alternative hypothesis, and the correct p-value is determined accordingly.
 The test is left-tailed if the alternative hypothesis is a "less than." The p-value is the area in the left tail of the probability distribution.
 The test is right-tailed if the alternative hypothesis is a "greater than." The p-value is the area in the right tail of the probability distribution.
 Tests with a "not equal to" alternative hypothesis have two tails. The p-value equals the total area in the two extremes of the distribution, with each tail holding one-half of the p-value.
iv. Think about the meaning of the p-value. With a lower p-value, such as 0.001 as compared to 0.04 when employing a significance threshold of 0.05, a data analyst can be more certain that rejecting the null hypothesis is the right choice. Similarly, when a data analyst decides not to reject the null hypothesis given a large p-value, such as 0.4, rather than a borderline one such as 0.056, they can be more confident in that judgment. This forces the data analyst to use discretion rather than blindly following predetermined procedures.
v. The significance threshold must be determined before data sampling and testing begin. The level of significance is usually stated in the problem; if none is specified, it is customary to adopt a level of 5%.
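Points i-v can be tied together with a short one-sample proportion test. The counts (58 successes out of 100) and the hypothesized p0 = 0.5 are invented for illustration:

```python
from statistics import NormalDist

# Hypothetical data: 58 successes in 100 trials.
n, successes, p0 = 100, 58, 0.5
p_hat = successes / n

# H0: p = 0.5 versus the right-tailed Ha: p > 0.5 (point iii).
se = (p0 * (1 - p0) / n) ** 0.5
z = (p_hat - p0) / se

# Right-tailed test, so the p-value is the right-tail area.
p_value = 1 - NormalDist().cdf(z)
reject_at_5pct = p_value < 0.05
```

Here the p-value comes out just above the customary 5% threshold, the kind of borderline case point iv says calls for analyst judgment rather than a mechanical rule.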

4.5.4.Difference of Two Proportions

The following must be true when comparing two independent population proportions using hypothesis testing:

 The two samples are independent and unrelated, taken at random.
 Each sample includes a minimum of five successes and a minimum of five failures.
 The population should be 10 to 20 times larger than the sample. This prevents the over-sampling that may lead to erroneous conclusions in either group.

The comparison of two proportions is as frequent as the comparison of two means. A discrepancy between two estimates might arise from a real difference between the populations being studied, or it could simply be random. When the two estimates differ, a hypothesis test helps the researcher figure out whether the difference represents actual variation in the populations being studied.

The difference between two sample proportions closely approximates a normal distribution. The null hypothesis is the default assumption that the two proportions are equal; in other words, H0: pA = pB. A pooled proportion (pc) is used in the test.

The formula for the pooled proportion is:

pc = (xA + xB) / (nA + nB)

The variance of the difference in sample proportions is estimated as:

pc(1 − pc)(1/nA + 1/nB)

The z-score is the test statistic:

z = (p̂A − p̂B) / √(pc(1 − pc)(1/nA + 1/nB))
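A sketch of the pooled test with invented counts (45 of 200 successes in group A, 60 of 200 in group B):

```python
from statistics import NormalDist

# Hypothetical counts for two independent groups.
x_a, n_a = 45, 200
x_b, n_b = 60, 200
p_a, p_b = x_a / n_a, x_b / n_b

# Pooled proportion under H0: pA = pB.
p_c = (x_a + x_b) / (n_a + n_b)

# Standard error of the difference, then the z test statistic.
se = (p_c * (1 - p_c) * (1 / n_a + 1 / n_b)) ** 0.5
z = (p_a - p_b) / se
p_two_sided = 2 * (1 - NormalDist().cdf(abs(z)))
```

With these numbers z is about −1.70 and the two-sided p-value is about 0.09, so at the 5% level the observed gap could plausibly be random.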

4.5.5.Test of Difference of more than Two Proportions
4.5.6.Variance

Many people's first thoughts about statistical inference revolve around the mean or proportion of a given population. But the importance of a population's variability, as opposed to its mean, changes from scenario to scenario, and so do the specific population parameters required to address an experimenter's practical questions. Product quality, for example, is often characterized in terms of minimal variability.

It is possible to draw conclusions about the population variance σ² based on the sample variance s². When n measurements are taken at random from a normal population with mean μ and variance σ², the value s² gives a point estimate for σ². Also, for df = n − 1, the quantity (n − 1)s²/σ² has a Chi-square (χ²) distribution.

The characteristics of the Chi-square (χ²) distribution are as follows:

 The chi-square distribution takes only positive values, unlike the Z and t distributions.

 Unlike the Z and t distributions, which are both symmetrical, the chi-square distribution is asymmetrical.

 The chi-square family of distributions is quite large. A specific member is obtained by choosing the number of degrees of freedom (df = n − 1) for the sample variance s².

Hypothesis testing using a one-sample (χ²) analysis:

Null hypothesis: H0: σ² = σ₀² (a constant)

Alternative hypothesis:
 Ha: σ² > σ₀² (one-tailed): reject H0 if the observed χ² > χ²U (upper-tail value at α).

 Ha: σ² < σ₀² (one-tailed): reject H0 if the observed χ² < χ²L (lower-tail value at α).

 Ha: σ² ≠ σ₀² (two-tailed): reject H0 if the observed χ² > χ²U or χ² < χ²L at α/2.

Given df = n − 1 and a significance level α,
the χ² critical value bounding the rejection
region is read from a chi-square table.
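A minimal sketch of the upper-tailed one-sample chi-square test of a variance; the sample values and the tabulated critical value below are assumptions for illustration:

```python
# One-sample chi-square test of H0: sigma^2 = sigma0^2 (upper-tailed).
# Hypothetical data: n = 25 measurements with sample variance 0.0484,
# tested against sigma0^2 = 0.0324 at alpha = 0.05.
n = 25
s_sq = 0.0484          # sample variance s^2
sigma0_sq = 0.0324     # hypothesized population variance

chi_sq = (n - 1) * s_sq / sigma0_sq   # test statistic with df = n - 1

# Upper-tail critical value chi^2_{0.05, 24} from a chi-square table.
critical = 36.415
reject = chi_sq > critical
print(round(chi_sq, 2), reject)       # 35.85 False -> fail to reject H0
```

Because the observed statistic falls just below the critical value, the sketch fails to reject H0 at α = 0.05.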

4.5.7.Difference of Two Variances

The F distribution may also be used to compare
two different variances. In some situations the
comparison of two variances, rather than two
averages, is what matters. For example, college
officials may want test grades from two different
instructors to vary equally. A container's lid will
only fit properly if its tolerances match those of
the container itself. The variance in checkout times
between two cashiers may be of interest to a
grocery store.

The test's p-values will be distorted in
unpredictable ways if the two populations are
not normally distributed, so it is important to
check this assumption. Suppose a researcher takes
random samples from two separate normal
populations. Let s²1 and s²2 represent the sample
variances, σ²1 and σ²2 the population variances,
and n1 and n2 the sample sizes.
The F ratio is used to compare the two sample
variances directly:

F has the distribution

F = (s²1 / σ²1) / (s²2 / σ²2)

where n1 − 1 are the degrees of freedom for the
numerator and n2 − 1 are the degrees of freedom
for the denominator.

If the null hypothesis is σ²1 = σ²2, then the F ratio
becomes:

F = s²1 / s²2

The F ratio may similarly be calculated as
s²2 / s²1; the choice depends on Ha and on which
of the two sample variances is larger.
When the two populations possess the same
standard deviation, the ratio of s²1 to s²2 will
be close to 1. However, s²1 and s²2 can
differ greatly from one another if the population
variances are quite dissimilar. A ratio greater
than one is obtained when the larger sample
variance is placed in the numerator, and a large
gap between s²1 and s²2 produces a high
numerical value of F.

The data supports the null hypothesis that the


variances of the two populations are identical if F
is near to one. However, if F is much bigger than
1, then there is evidence to reject the null
hypothesis. Either one or both tails may be used in
a test of two variances.
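The F ratio above can be sketched as follows; the sample variances, sample sizes, and the tabulated critical value are hypothetical:

```python
# F-test sketch for H0: sigma1^2 = sigma2^2, placing the larger sample
# variance in the numerator.
s1_sq, n1 = 4.5, 11    # larger sample variance and its sample size
s2_sq, n2 = 2.3, 11

F = s1_sq / s2_sq      # df = (n1 - 1, n2 - 1) = (10, 10)

# Upper 5% point of F(10, 10) from an F table.
critical = 2.978
reject = F > critical
print(round(F, 3), reject)   # 1.957 False -> fail to reject H0
```

An F near 1 is consistent with equal population variances, matching the discussion above.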

4.5.8.P-Value approach

Probability is sometimes expressed as a number
called the P-value: the chance of obtaining a
result identical to, or more extreme than, the one
actually observed, assuming the null hypothesis is
true. The P-value is also known as the observed
level of marginal significance within the context of
hypothesis testing. In place of a hard rejection
threshold, the P-value gives the smallest level of
significance at which the null hypothesis
would be rejected. The strength of the evidence
against the null hypothesis increases as the P-value
decreases.

P-value          Decision
P-value > 0.05   The result is not statistically significant;
                 do not reject the null hypothesis.
P-value < 0.05   The result is statistically significant; generally,
                 reject the null hypothesis in favour of the
                 alternative hypothesis.
P-value < 0.01   The result is highly statistically significant;
                 reject the null hypothesis in favour of the
                 alternative hypothesis.

1. P-value Formula

The P-value is a well-known statistical tool for


testing the plausibility of a proposition. The P-
value ranges from 0 to 1, inclusive. Researchers
are responsible for determining their own
significance level (α). The standard value is 0.05.
P-values may be computed using the following
formula:

Step 1: Compute the test statistic

z = (p̂ − p0) / sqrt( p0(1 − p0) / n )

where

p̂ = sample proportion

p0 = assumed population proportion in the null
hypothesis

n = sample size

Step 2: Look up the obtained z value in the
Z-table to find the corresponding P-value.
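The two steps can be sketched in code; instead of a printed Z-table, the normal CDF is evaluated directly via the error function. The sample counts and null proportion below are hypothetical:

```python
from math import sqrt, erf

def normal_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Step 1: test statistic for a one-sample proportion test.
# Hypothetical data: 58 successes in n = 100 trials, H0: p0 = 0.5.
p_hat, p0, n = 0.58, 0.50, 100
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)

# Step 2: convert z to a two-tailed P-value.
p_value = 2 * (1 - normal_cdf(abs(z)))
print(round(z, 2), round(p_value, 2))   # 1.6 0.11 -> not significant at 0.05
```

Since the P-value exceeds 0.05, the null hypothesis would not be rejected in this sketch.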

4.5.9.Power of Test

The strength of a statistical test is measured by
how likely it is to reject the null hypothesis (H0)
when an alternative hypothesis (H1) is true.
It is usually represented by 1 − β,
and it stands for the probability of making a
successful detection given the presence of an
effect to look for. Power ranges from 0 to 1;
as the power of a test increases, the likelihood
of committing a type II error (incorrectly failing
to reject the null hypothesis) decreases.

1. Interpretation

If the chance of making a type II error is β, then
the statistical power is 1 − β. For instance, if
Experiment E has a statistical power of 0.7 while
Experiment F has a statistical power of 0.95, there
is a greater likelihood of a type II error in
Experiment E than in Experiment F, making it
less likely that real effects will be detected in
Experiment E. However, if the likelihood of a
type I error is smaller in Experiment E than in
Experiment F, the former may be considered more
trustworthy in that respect. The power of a test
to detect an effect, assuming it exists, may
similarly be viewed as the likelihood of accepting
the alternative hypothesis (H1) when it is true.

When H1 is not a single value but simply the
negation of H0 (for example, when H0 states that
an unknown population parameter θ equals θ0 and
H1 states θ ≠ θ0), the power cannot be determined
as a single number: it depends on the true value
of the parameter, so it must be evaluated at every
potential parameter value under the alternative.
The strength of a test is therefore often discussed
in relation to its ability to reject a certain null
hypothesis against a specified alternative.

The power equals 1 − β, where β is the false
negative rate; therefore, as the power grows, the
likelihood of a type II error (the false negative
rate) decreases. A related idea is the type I error
probability, also known as the false positive rate:
the significance level of a test, computed under
the assumption that the null hypothesis is true.

A test's effectiveness is measured by its statistical


sensitivity, also known as its true positive rate or
its likelihood of detection, when applied to a two-
class problem.
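The relation power = 1 − β can be made concrete with a small sketch. The normal-approximation proportion test, the specific proportions, sample size, and the constant z_alpha = 1.6449 (upper 5% point of the standard normal) are illustrative assumptions:

```python
from math import sqrt, erf

def normal_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def proportion_test_power(p0, p1, n, z_alpha=1.6449):
    """Power of the one-sided test H0: p = p0 vs Ha: p > p0 when the
    true proportion is p1 (normal approximation)."""
    # Rejection threshold for the sample proportion under H0.
    crit = p0 + z_alpha * sqrt(p0 * (1 - p0) / n)
    # Probability of exceeding the threshold under the alternative p1.
    return 1 - normal_cdf((crit - p1) / sqrt(p1 * (1 - p1) / n))

power = proportion_test_power(p0=0.5, p1=0.6, n=100)
beta = 1 - power    # type II error rate
print(round(power, 2), round(beta, 2))   # roughly 0.64 and 0.36
```

Increasing n in this sketch raises the power, illustrating why review boards ask for power analyses when choosing sample sizes.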

2. Application

Researchers are sometimes asked to perform a
power analysis by funding agencies, ethics boards,
and scientific review committees, for example so
that the appropriate number of test subjects may
be selected for an experiment involving animals.
With frequentist statistics, an underpowered study
makes it difficult to choose between hypotheses.
In Bayesian statistics, hypothesis testing of the
kind used in traditional power analysis is not
performed.

By incorporating the results of a study into
one's existing body of knowledge, one can adopt
a more nuanced Bayesian perspective. In theory,
a study that is underpowered from a hypothesis-
testing standpoint might still be employed in this
kind of update. Power remains a valuable indicator
of how much a given experiment size can be
expected to refine one's beliefs: a low-power study
is unlikely to lead to substantial shifts in
perspective.

4.6. Limitations of the Tests of


Hypothesis

There are a number of factors inherent to


hypothesis testing that might compromise the
accuracy of the results. Some examples of these
restrictions are:

 The significance of an observed p-value is
conditional on the chosen cutoff, the
stopping rule, and the handling of multiple
comparisons. Since the stopping criterion
might be interpreted in many ways, and since
"multiple comparisons" are inherently vague,
this complicates calculations.
 The researcher may run into conceptual
problems when attempting to test
hypotheses while combining the
conceptually diverse approaches of Fisher &
Neyman-Pearson.
 It's possible that the researcher may gloss
over estimate & confirmation through repeat
trials in favour of analyzing the data for
statistical significance.
 The need for statistical significance being a
requirement for publishing the results of a
hypothesis test might lead to publication
bias.
 Using hypothesis testing to determine
whether or not a distinction exists among
groups might lead to implausible
assumptions that distort the results.

4.7. Chi-square Test

The Chi-Square test is a method of statistical
analysis used to compare actual results with
expected ones. The categorical variables in a data
set may be examined for potential associations
using this test. It is useful for determining
whether a difference between two categorical
variables is statistically significant or may be
attributed to chance alone.

The chi-square test is a statistical analysis for
contrasting observed and expected outcomes. The
purpose of this analysis is to determine whether a
discrepancy between observed and anticipated
values is due to random chance or to a relationship
between the factors under investigation. The
chi-square test therefore provides an excellent
option for assessing the significance of the
association between two categorical variables.

A hypothesis about the distribution using a


categorical variable may be tested with the use of
the chi-square test and similar non-parametric
tests. Nominal or ordinal values may be assigned
to categorical variables that represent groups of
things like animals or nations. Due to the limited
range of possible values, they cannot follow a
normal distribution.

For instance, a food delivery service in India
interested in how factors like gender and location
affect people's food choices could use the test to
determine whether an observed difference between
these categorical variables is:

 a result of chance, or
 due to a real association between them

1. Test hypotheses about frequency


distributions

Pearson's chi-square tests come in two varieties,
but both check for substantial discrepancies
between the actual and expected frequency
distributions of a categorical variable. A frequency
distribution describes how data points are spread
across the possible categories.

Frequency distribution tables are a common way
to visualize distributions of values: the number of
records that fall within each category is shown in
the table. The number of observations in each
possible combination of groups may be shown in a
contingency table, a special kind of frequency
distribution table used when there are two
categorical variables.

2. Properties

The chi-square test has the following salient


characteristics:

 The variance equals twice the number of
degrees of freedom.

 The standard deviation equals the square
root of twice the number of degrees of freedom.

 As the number of degrees of freedom
becomes larger, the chi-square distribution
curve approaches that of the normal
distribution.
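The variance property above can be checked numerically: a chi-square variate with k degrees of freedom is a Gamma variate with shape k/2 and scale 2, so a seeded simulation (the choice of df and sample count here is arbitrary) should show a sample mean near k and a sample variance near 2k:

```python
import random
from statistics import fmean, pvariance

random.seed(42)
df = 10
# Chi-square(k) equals Gamma(shape=k/2, scale=2).
samples = [random.gammavariate(df / 2, 2) for _ in range(100_000)]

print(round(fmean(samples), 1))      # close to df = 10
print(round(pvariance(samples), 1))  # close to 2*df = 20
```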

4.7.1.Types of chi-square tests

There are two distinct varieties of Pearson's chi-


square tests:

 Chi-square goodness of fit test


 Chi-square test of independence

These two tests are equivalent mathematically.
Nevertheless, researchers usually categorize them
as separate examinations due to their varied
applications.

1. Chi-square goodness of fit test

The chi-square goodness-of-fit test may be


performed when there is just one categorical
variable. This statistic is useful to determine
whether the observed frequency distributions of
the categorical variables differ considerably from
the forecasts. It is generally expected that each
category will have around the same number of
items.

Example: Hypotheses for chi-square goodness of fit


test Expectation of equal proportions

 Null hypothesis (H0): The same number of
birds of each species visits the feeder.

 Alternative hypothesis (HA): The relative
frequency of sightings of each species at the
feeder varies widely.

Expectation of different proportions

 Null hypothesis (H0): The species
composition of birds that come to the feeder
is consistent with the five-year moving
average.

 Alternative hypothesis (HA): The species
composition of the birds at the feeder differs
from the long-term average over the previous
five years.
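The bird-feeder goodness-of-fit test under equal expected proportions can be sketched as follows; the observed counts and the tabulated critical value are hypothetical:

```python
# Chi-square goodness-of-fit sketch for the bird-feeder example.
observed = [45, 30, 15, 10]    # sightings of four species
total = sum(observed)
expected = [total / len(observed)] * len(observed)   # equal proportions

chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Upper-tail critical value chi^2_{0.05, 3} from a chi-square table.
critical = 7.815
reject = chi_sq > critical
print(chi_sq, reject)   # 30.0 True -> species do not visit equally
```

The large statistic leads the sketch to reject H0 in favour of HA.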

1. Chi-square test of independence

When working with two categorical variables, the
chi-square test of independence may be
performed. It helps researchers determine
whether there is a connection between the two
factors under consideration. Variables are
independent when membership in a given group of
one variable is not influenced by membership in a
group of the other variable.

Example: Chi-square test of independence

 Null hypothesis (H0): The rate of
left-handedness is the same in the United
States and Canada.

 Alternative hypothesis (HA): The
percentage of left-handed persons differs
between the two countries.
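The test of independence for the handedness example can be sketched with a 2×2 contingency table; the counts are hypothetical and no Yates continuity correction is applied:

```python
# Chi-square test of independence sketch (hypothetical counts).
#                left-handed  right-handed
table = [[90, 910],    # United States
         [50, 450]]    # Canada

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
grand = sum(row_totals)

chi_sq = 0.0
for i, row in enumerate(table):
    for j, obs in enumerate(row):
        exp = row_totals[i] * col_totals[j] / grand   # expected count
        chi_sq += (obs - exp) ** 2 / exp

# Upper-tail critical value chi^2_{0.05, 1} for df = (2-1)(2-1) = 1.
critical = 3.841
reject = chi_sq > critical
print(round(chi_sq, 3), reject)   # small statistic -> fail to reject H0
```

With such a small statistic, the sketch finds no evidence against independence of country and handedness.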

2. Other types of chi-square tests

Some regard the chi-square test of homogeneity
as just another kind of Pearson's chi-square test.
A homogeneity test compares two populations
to see whether they have identical proportions,
suggesting that they may be drawn from the same
distribution. It can be viewed as an alternative
perspective on the chi-square test of
independence.

McNemar's test is a statistical analysis that
makes use of a chi-square test statistic.
Although it has some similarities with Pearson's
chi-square test, it is not a variant of that test. This
analysis may be performed when two paired
categorical variables, each with two possible
categories, are available. It is used to ascertain
whether the marginal proportions of the two
variables are similar.

4.7.2.Cautions in Using Chi Square Tests

Despite its widespread usage, the chi-square test
is easy to apply incorrectly. It is important to
remember that the test is only appropriate if the
events comprising the sample can be considered
independent, in the sense that the occurrence of
one event in the sample does not affect the
occurrence of any other event within the sample.
Extra precautions are needed when theoretically
low expected frequencies exist in certain cells.
Additional ways this test might be used incorrectly
or misused include:

 Ignoring non-occurrence frequencies

 Failing to make the total of the observed
frequencies match the total of the expected
frequencies

 Errors in determining the number of
categories or degrees of freedom

 Erroneous calculations and similar
problems.

The researcher performing this test should be


mindful of all of these considerations and should
have a solid grasp on the reasoning behind this
crucial test before utilizing it to make conclusions
regarding the hypothesis.
CHAPTER-5:Interpretation and
Report Writing

Although reports may include a wide range of
subject matter, they are most often written for a
specific audience and cover a narrow topic. The
main goal of research reports is to provide
critical information regarding a study for use in
formulating new marketing strategies.

The most efficient method of conveying specific


events, facts, and data related to happenings to the
persons in authority is via the creation of research
reports. The best research papers provide precise
facts and have a distinct purpose and final verdict.
A neat and well-organized layout is essential for
the success of these reports.

1. Research Reports

Research reports are written summaries of


findings from systematic studies, such as surveys
or in-depth interviews, analyzed by researchers
and mathematicians.

A research report may be relied upon to accurately


describe the steps and results of a study. It is
widely regarded as an authentic representation of
the efforts made to collect study details.

There are many different parts to a research


report, including:

 Summary
 Background/Introduction
 Implemented Methods
 Results based on Analysis
 Deliberation
 Conclusion

2. Components of Research Reports

When introducing a brand-new product, services,


or feature, thorough research is essential. In
addition to the constant influx of new players,
some of whom could have no useful things to offer,
modern marketplaces are notoriously unstable and
cutthroat. A company may stay competitive in such
an industry by consistently releasing new and
improved items to meet the wants of its target
audience.

There may be some variation in the specifics


regarding a research report depending on the kind
of the study being conducted, but the core
elements will always be the same. The method
used by the market researcher to gather data also
affects the report's tone and structure. The
following are the seven essential parts of a well-
written research report:

 Research Report Summary: An overview
of the study and its purpose should be given
in a brief summary at the start of the report.
The many different parts of the study are
briefly discussed in this abstract. It needs to
be engaging enough to convey the report's
most important points.
 Research Introduction: Researchers
usually have one overarching objective in
mind while writing a report. The author may
provide the groundwork for achieving this
objective and establishing a thesis statement
in the introductory part. The inquiry "What
is the present scenario of the goal?" must be
addressed in this section. Include
information in the report's opening on
whether or not the organization achieved its
aim after completing the study design.

 Research Methodology: The research


methodology portion of the report contains
the bulk of the paper's material. Readers
may acquire topical knowledge, evaluate the
integrity of the information supplied, and
validate the study's findings via the work of
other market researchers. Therefore, it is
crucial that this part include a wealth of
information, covering every angle of the
study. Prioritized and important information
should be presented in a time-based
arrangement. If a study drew inspiration
from an existing method, a citation to that
method should be included.

 Research Results: This part will consist of


a brief discussion of the outcomes and the
calculations that were performed to reach
the objective. Typically, the discussion
section of the report is where the exposition
occurs following the data analysis section.
 Research Discussion: Discussion of
Research explains the findings in great
depth and evaluates other papers that might
potentially be in the same field. If anything
unusual turns up in the course of the study,
it will be discussed there. Research reports
require the writer to make connections
between the study's findings and their
potential practical applications.

 Research References and


Conclusion: Summarize all of the results
from the investigation and be sure to credit
the original authors and publications from
which they drew information.

5.1. Meaning of Interpretation

The term "data interpretation" is used to describe


the practice of using a variety of analytic
techniques to the task of making meaning of an
analyzed information set. The gathered data may
be presented in a variety of visual formats, such as
bar graphs, line diagrams, pie charts, histograms,
tabulated forms, etc., and will need interpretation
in order to be summarized. The purpose of data
interpretation is to aid in the analysis of
acquired data and the understanding of numerical
results. The significance of correctly interpreting
data cannot be overstated: approaches to analyzing
numbers differ and may change from company to
company. When doing research, it is necessary to
interpret the data in order to form conclusions.
There are two main components to the process of
interpretation:

 The action of connecting the findings of


one study with the findings from others
in an attempt to construct a chain of
investigations,
 The development of certain underlying
theories.

Getting to the bottom of a set of data and drawing
useful conclusions requires a process called "data
interpretation." The primary idea behind data
interpretation is to examine the gathered data
analytically and draw significant inferences.
The data may be interpreted in two ways:

 Qualitative method – To analyze
qualitative or categorical data, researchers
often turn to the qualitative technique.
Qualitative data interpretation favors
textual representations over numerical or
graphical ones. Qualitative data may be
broken down into two categories: nominal
data and ordinal data. The analysis of
ordinal data is simpler than that of nominal
data.

 Quantitative method - The quantitative


approach is a way of studying facts that can
be measured, such as numbers. Instead of
relying on words to describe the facts,
numbers are used in quantitative analysis.
Quantitative information may be broken
down into two categories: discrete data as
well as continuous data. To understand data
using the quantitative approach, statistical
procedures and techniques such as the
median, standard deviation, mean, etc. are
required.

1. Basic Concept Of Data Interpretation

The term "data interpretation" is often used to


describe the process of analyzing data using a
variety of techniques in order to draw a
conclusion. A wide variety of sources, including the
functioning of companies, demographic censuses,
etc., may provide the data that has to be evaluated.
Among the most crucial aspects of interpreting
data are:

 The management board can study the data


in a systematic manner before acting on it to
adopt new ideas because of the
thoroughness with which it has been
analyzed and organized.
 It's a useful tool for anticipating market
shifts and potential rivalry.
 As a result of the data interpretation
procedure, the company was able to save
money in a number of different ways.
 Decision-making is when the value of data
interpretation really shines.
 The ability to understand data is crucial for
developing a competitive advantage.
 Data interpretation is a powerful tool for
gaining insight and providing solutions to
pressing problems.
 It's useful for gauging what people really
want.

2. Steps for Interpreting Data

The procedure for analyzing data consists of the


following steps:

 Collect The Information Needed To


Interpret Data – The first step in data
interpretation is to gather all the supporting
information that is required. Create
readable charts, graphs, tables, etc., to
organize this data.

 Develop findings Of Data – Forming a


more precise interpretation requires first
developing findings of the data, which
entails making observations regarding the
data, summarizing the key points, and
arriving at a conclusion.

 Development Of The Conclusion – The


conclusion is cited as a clarification for the
findings, therefore it's important to give it
some thought. The final result must be
supported by the facts.

 Develop the Recommendations of Data-


Create a set of suggestions for how the data
may be improved, taking into account its
results and findings.

3. Types Of Data Interpretation


 Bar Graphs – Bar graphs are useful
because the connection between the
variables may be read off as a series of
rectangular bars. These squares might be
made into either bars that are horizontal or
vertical. Bars of varying lengths indicate the
many data sets, with the value of each set
corresponding to its own bar. Bar graphs
may be clustered, subdivided, layered, or
any number of other configurations.

 Pie Chart – The pie chart is a circular
chart in which each slice represents a
different value or proportion of the whole.
Percentages and fractions are shown in
pie charts. There are many variations on
the classic pie chart, including doughnut
charts and three-dimensional
representations.

 Tables – Tables are used to display


statistical information. There are rows as
well as columns of information. There are
both simple tables and more involved ones.

 Line Graph – Line graphs encompass any


graph or chart in which data is presented as
a linear progression of elements. Line charts
provide a great way to display a series of
numbers or a continuous dataset. Line
graphs come in a variety of forms, from the
simplest to the most complex.

5.1.1.Technique of Interpretation
Research data interpretation is a complex
process that calls for a high level of expertise and
finesse on the part of the researcher. The ability to
interpret is a skill that develops with experience.
Researchers may consult with subject-matter
experts for help with data interpretation.

The following are typical phases in the


interpretation process:

 Researchers are obligated to provide


plausible justifications for the associations
they've discovered, evaluate the lines of
connection in light of the underlying
procedure, and hunt for the common thread
beneath his otherwise disparate discoveries.
This is the method via which generalization
& the development of ideas are best
accomplished.
 It is important to have an open mind while
analyzing the outcomes of a research
project, since material that was not
originally planned for or anticipated may
turn out to be crucial in gaining a better
grasp of the issue at hand.
 Consulting an expert in the field who is
willing to be forthright and honest about
their observations and criticisms is a good
idea before diving into another
interpretation. The right interpretation of
the study will increase the value of the
findings, which may be achieved via such
discussions.
 Researchers shouldn't attempt
interpretation until they've thought through
all the key aspects influencing the situation
and eliminated any possibilities for incorrect
generalization. It's important that the
individual take time to analyse the data,
since first assumptions that seem correct
may turn out to be completely off base.

5.1.2.Precaution in Interpretation

It's important to keep in mind that poor


interpretation might invalidate otherwise sound
data analysis and result in misleading inferences.
Hence, it is crucial that the process of
interpretation be carried out with calm,
objectivity, and the right viewpoint. The following
details need to be considered for a proper
interpretation by the researcher:

 First and foremost, a researcher must be


certain that his data are valid, reliable, and
sufficient for making conclusions; that they
exhibit high homogeneity; and that they
have been properly analyzed using
statistical procedures.
 The researcher has to be wary of the
potential for mistakes to be made
throughout the findings interpretation
process. Mistakes may occur when
researchers incorrectly extrapolate their
results beyond the scope of their
observations or mistake a correlation for a
causal relationship. Another common
mistake is to jump to the conclusion that a
link exists after seeing evidence supporting
a certain hypothesis. Acceptance of the
hypothesis via testing should be viewed as
"consistent with" the hypothesis instead of
as "proof" of the hypothesis's veracity. In
order to prevent making erroneous
generalizations, researchers must keep an
eye out for such red flags. One must also
have access to, and an understanding of,
appropriate statistical measures in order to
make valid conclusions from the data.
 The role of interpretation is inextricably
bound up with analysis, and one must keep
this in mind at all times. Therefore, the
individual must treat the role of
interpretation just like a unique facet of
analysis, utilizing the same precautions that
one normally takes while undertaking the
procedure of analysis, including but not
limited to those related to the accuracy of
data, the correctness of computations, the
validity of results, and the evaluation of
alternatives.
 He must always keep in mind that it is not
enough to just observe events carefully; one
also has to find out and eliminate the
underlying causes that are originally
obscured. This will provide him a solid
foundation upon which to carry out his role
as an interpreter. Avoid making sweeping
generalizations about the results of a study
that may have been conducted under
unusual or constrained circumstances. Such
limitations, if any, have to be indicated, and
the findings must be presented, as far as
possible, within those constraints.
 It is important for the researcher to keep in
mind that "ideally, there is supposed to be
ongoing interaction among initial
hypothesis, empirical data, & theoretical
ideas" during the duration of a research
investigation. Here, at the intersection of
theoretical predisposition and actual
observation, is where true innovation and
creativity may flourish. While performing
the interpretive duty, one must pay close
attention to this detail.

5.2. Significance of Report Writing

A well-written research report is a crucial
piece of writing since it details the steps taken and
the results obtained in a scientific study. The
report of an inquiry is a reliable source of data
since it is based on the researcher's own words
and observations. A comprehensive account of the
research process, covering all the bases, is what
one can expect to find in a well-written research
report.

5.2.1.Importance of report writing in research


methodology

Writing reports is an essential aspect of any
legitimate research approach. The true value of
research is conveyed once reporting the findings
becomes routine:

 Extensive explanations, either verbally or in


writing, that may be easily accessible by
scholars and other groups.
 Informs readers about the study's rationale,
results, and overall outcome.
 Among the most important parts of a study
that attempts to disseminate its findings to
the wider community.
 Clearly conveys the findings of the study to
the reader, serving the aim of the
investigation admirably.
 The results are ideal for pinpointing
research needs in the future.
 A concise report that properly and exactly
documents the relevant information saves
time in creating and consulting research
reports.

5.3. Different Steps in Writing Report

Although writing a report may be a lengthy and


difficult task, using a methodical strategy can help
to produce a document that is easy to understand
and useful. The process of creating a report
includes the following steps:

1. Define the Purpose and Scope of the


Report: Think about the report's purpose
and the outcomes it seeks to accomplish
before commencing writing. That way, one
will know exactly where to concentrate on
focusing and what data should go in the
report.
2. Gather Data and Information: The
information they require may be found in a
variety of places; articles, books,
conversations, and surveys are just a few
examples. The data that compile must be
reliable and appropriate for the report's
intended use.
3. Analyze the Data: Data analysis is sorting
through information for meaningful patterns
and associations. Doing so will aid in the
development of insightful findings and
suggestions.
4. Outline the Report Structure: Start by
sketching out the report's layout, down to
the major sections, parts, and headers. This
will aid in structuring the data and making
the report more comprehensible to the
intended audience.
5. Write the Report: Reports often begin with
an introduction that gives context and lays
out the report's goals. The next step is to
compose the report's body, which should
include the findings, analysis, and final
thoughts. Conclude the study by
summarizing its key results and suggestions
in an executive summary.
6. Format and Present the Report: The
report should be formatted such that it is
both aesthetically attractive and simple to
read. Ensure that the data is presented in an
understandable format by using the right
charts, tables, & graphs.
7. Review and Edit the Report: Verify that
the report is free of typos, misspellings, and
grammatical problems by reading it through
a few times. Verify that the report has a
good framework and the material is
presented clearly and concisely. Modify the
report so that it better reflects the original
intent.
8. Finalize the Report: Once the report has
been reviewed and revised, it is time to
complete it. The title page, list of contents,
citations, and appendices may be included in
this process.

With these guidelines in mind, the writer will be able to put together a report that not only summarizes the research but also offers insightful commentary.

5.4. Layout of the Research Report

Research report formatting follows a tried-and-true scientific convention. The layout of a research report specifies the elements it must include. The following is a summary of the material presented in the research report:

1. Preliminary Pages

The preliminary pages carry the study's title and its accompanying information. The research project needs an introduction or preface, followed by the table of contents and a list of the charts and diagrams used.

2. Main Text

The main text includes a comprehensive summary and all the elements necessary for a research paper. It opens with the title-page information, and the material is presented sequentially, with each chapter containing its own details.
 Introduction

This section familiarizes readers with the subject of the study. It should include a description of the research topic, the hypotheses, the study's goals, a review of the relevant literature, and a discussion of the research technique, covering its use of primary and secondary sources, its constraints, and its proposed organization into sections. Some researchers use the opening chapter only to introduce the research endeavor and stress its significance, devoting a subsequent chapter to the technique used in the study.

The methodology section should include the approach taken to the study, as well as the research strategy and data-gathering procedures.

 Statement of the problem

This is the main point of the study: it emphasizes the study's central subject. It should be written in plain English that the average person can understand, since the results of sociological studies need to be made known to the general public.

 Analysis of data

The gathered data must be presented in a methodical fashion so that meaningful inferences may be drawn. This is useful for verifying the hypothesis, and the study's goals can only be verified by an examination of the collected data.
 Implications of Data

The data-driven findings must be credible; this constitutes the primary part of the study. It includes data analysis and statistical summaries, and the examination of data should follow a reasonable plan, with the findings grounded in the original data. The researcher's suggestions and concluding thoughts need their own chapter, and these inferences require analysis of the data. The findings should be generalizable and applicable to other, similar situations, and researchers need to be transparent about the caveats that prevent their work from being applied universally.

 Summary

This is the last section of the research. Summarizing the findings of the study helps the reader grasp their significance. This section may also be read as a research summary.

3. End Matter

Helpful appendices explain key ideas and provide references and other background material. The report may be improved by including an index.

5.4.1. Structure of a Report in Research Methodology

The report may be written using the following format:

 Title
The title of the study should reflect the goals,
hypotheses, and conclusions that were reached
via extensive research.

 Table of contents

The table of contents helps readers navigate the research paper.

 Abstract

The abstract provides a synopsis of the research, including a brief summary of the study's objectives, methods, data, and results. It is recommended that the 5Ws and 1H format be used while composing the abstract.

 Introduction

The introduction documents the research objectives and the underlying issues. It should also state whether the research goals have been met or whether more work is needed.

 Literature review

As the name implies, a literature review is a study of the existing literature on the subject under investigation. Literature reviews are a great place to lay out the research idea and its consequences for the field.

 Investigation

This section of the study requires concise but detailed descriptions of the research technique, data collection, samples, research subjects, and analysis.

 Findings

This section is where the researcher presents what was learned by doing a thorough study.

 Discussion

The study findings summarized previously are now elaborated upon. Provide an explanation for each result and demonstrate whether or not it is consistent with the hypothesis.

 Conclusion

A research report's last section is an executive summary, in which the report's whole methodology is reviewed.

 Reference and appendices

All of the research's materials, whether primary or secondary, should be cited here.
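The section order listed above can be treated as data. As a purely illustrative sketch (the helper name and hard-coded section list below are assumptions for demonstration, not part of the text), a few lines of Python can render the sections as a numbered table of contents:

```python
# Illustrative sketch: the report sections described above, in order.
SECTIONS = [
    "Title",
    "Table of contents",
    "Abstract",
    "Introduction",
    "Literature review",
    "Investigation",
    "Findings",
    "Discussion",
    "Conclusion",
    "Reference and appendices",
]

def table_of_contents(sections):
    """Return one numbered line per section, in report order."""
    return [f"{i}. {name}" for i, name in enumerate(sections, start=1)]

for line in table_of_contents(SECTIONS):
    print(line)
```

The point of the sketch is only that a fixed, ordered structure underlies every report; the actual headings in a given report may of course differ.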

5.5. Types of Reports

The purpose of a research report is to convey the findings of the research to the relevant parties and to convince the reader of the validity of those findings.

Research reports of all kinds serve as the wrap-up to a study, detailing its progress through its many phases and the results of its analysis. As such, the report is the concluding chapter of a research effort, explaining the road taken to arrive at new findings or revised facts.

Writing a research report is no mechanical task: it demands not just dexterity on the part of the researcher but also extensive effort, resilience, insight, a general approach to the issue, data, and analysis, along with a command of language and greater impartiality, all of which spring from careful thought.

There are two basic categories of research reports: technical reports and popular reports.

1. Technical Report

A technical report is required whenever a thorough written record of a research project is needed for widespread distribution or archival purposes. These reports provide clear data visualization and well-defined main outcomes. The methods used, the assumptions made, and the presentation of results, including any caveats, are all given extensive attention in a technical report.

An outline of the technical report is as follows:

 Results Summary- This section provides a synopsis of the study's results.

 Nature of Study- The term "nature of study" is used to describe the study's goals, the problem's functional formulation, the working hypothesis, and the data types and analytic methods.

 Methods Used- Limitations of the study's methodology are discussed, as are the techniques and resources employed to conduct it.

 Data- A brief overview of the data, including where it came from and how it was gathered, as well as its quality and any restrictions it may have.

 Data Analysis and Presentation of Findings- The main part of the report consists of the analysis and presentation of facts and results, including justifications for any conclusions drawn. Many kinds of graphs and tables are used to illustrate the points.

 Conclusions- The findings are summarized and the policy implications derived from them are discussed in the last section of the report, the conclusion.

 Bibliography- The bibliography lists all of the books, articles, and websites that were used to compile the work.

 Technical Appendices- Extensive technical appendices give details of mathematical computations, the survey, and the analytic methods used.

 Index- The report's index is always included at the very end.
The structure of technical reports may vary from one report to another.

2. Popular Report

A popular report is one that makes the data easy to understand and visually appealing. It is employed when the results have the potential to influence policy. The emphasis is on writing clearly, keeping the technical details to a minimum, and using charts and diagrams extensively and in sufficient depth. Frequent subheadings, large print, and the occasional cartoon are further hallmarks of popular reports. This style of report places more emphasis on practical application.

An overview of the popular report is as follows:

 Findings and Their Implications- The focus here is on the real-world applications of the study's results and their ramifications.

 Recommendations for Action- This part of the report makes suggestions for further research or other action based on the results presented.

 Objectives of Study- The study's goals are outlined, along with a brief explanation of the nature of the issue being investigated.

 Techniques Used- This section reviews the methods and procedures used to reach the study's conclusions and provides the relevant data used to draw them. All information is presented without technical language.

 Results- The results section is where the bulk of the study's findings are summarized in layman's terms. There is extensive use of illustrations such as diagrams and charts.

 Technical Appendices- Technical appendices contain in-depth information on a variety of techniques, templates, and other supplementary materials. If the report is intended for a general audience, the technical appendices should be as accessible as possible.

5.6. Oral Presentation

Oral presentations are a formal way to share the results of research with an audience, and there is no one set location for them. A common practice in workplaces that encourage teamwork is for members of different groups to give oral updates on their progress toward a common goal. For example, a researcher who works for a nonprofit that holds a yearly meeting, at which the group reports on its operations, budget, and objectives to its donors and the general public, may be asked to speak at the event. The ability to create and deliver a compelling oral presentation is a talent that serves a researcher well.
Presenting the findings of an experiment orally to other individuals is a crucial part of any research study. Just as in a research paper, explain the experiment's background, methodology, findings, interpretation, and significance.

Nevertheless, a strong presentation differs from a strong paper in important ways. Reading directly from a prepared document is not acceptable in a presentation. It is important to think about the audience and avoid giving too much information: an oral presentation can cover only so much ground, whereas a written document may go into far more depth.

The presentational tone is also crucial. It is the responsibility of the presenter to keep the audience's attention on the most important points being made.

The following are some of the more concrete factors to think about while crafting a speech or presentation:

1. Organization

The standard format for oral presentations includes an introduction, a main body, and a summary.

2. Introduction

This section should be kept short. Provide enough context for the audience to grasp the overarching hypothesis and appreciate the importance of the experiments. The research question under investigation should also be stated.

3. Body

This part constitutes the bulk of the presentation. The study process and findings should both be included. The procedures should be succinctly outlined, with more explanation provided only when required.

4. Conclusion

Similarly, this section needs to be short. The conclusions drawn from the study should be stated clearly and briefly. The results should be related to previous trials, though not at excessive length, and unanswered questions may be proposed for testing in future studies.

5.6.1. Presentation Style

The following are factors to keep in mind when creating a presentation:

1. Time

Remember to keep track of time. The average length of a research presentation is 15 minutes.

2. Pace

Avoid talking too rapidly. Speak slowly enough that the audience can follow.
 Speaking at a pace one believes is too sluggish is usually just fine.

3. Volume/Tone

Speak clearly and loudly enough for listeners to hear. Vary the pitch and tone of the voice to keep the audience engaged.

 A monotonous presentation is the worst possible scenario.
 Slight changes in volume or tone can make the audience feel that the speaker is engaged in a conversation.

4. Eye Contact

Make an effort to look at the people being addressed; eye contact sends a message that the presenter values their attention.

5. Poise

Delivering the presentation in a business-like manner is highly recommended.

 Wear something suitable.
 Keep an upright and confident stance.
 Avoid using slang and acting too casually.
 Keep walking, hand gestures, and other movements to a minimum.
 Refrain from putting hands in pockets.

6. Visual Aids
The use of visual aids is strongly recommended for
all research presentations. The most common tool
for this is PowerPoint.

 The information is easier to digest when accompanied by visual aids like graphs, tables, images, etc.
 Visuals are a great way to explain
complicated processes.
 The last statement is more likely to be
remembered if it is included in a list of
concluding statements.
 Research questions benefit from being
conveyed graphically.
 It's important to keep the images simple.
Only the relevant details should be included.
 Make sure the image is big enough for the
viewer to see properly.
 Draw attention to the slides as you're
talking.
 Keep the image up for as long as necessary
for the viewer to absorb the information.

7. Present Information Clearly

An effective presentation is one in which the material is presented in a simple and straightforward structure that the audience can easily follow.

 Visual aids are useful in this case.
 Specifics should be included whenever they are crucial in establishing a conclusion; if they obscure a point, they should be removed.
 Keep in mind that the message that sticks with the audience is more important than the message originally sent.

8. Subject Knowledge

During the presentation, the speaker has to show a firm grasp of the material. That is achieved:

 By delivering precise data,
 Through thoughtful engagement with points of contention,
 By addressing the audience's legitimate concerns.

5.7. Mechanics of Writing a Research Report

The actual process of writing research reports or papers is governed by a strict set of guidelines. After settling on a set of procedures, they should be followed without fail. Formatting requirements should be settled upon once the research paper's sources have been collected. For those interested in the nuts and bolts of report writing, consider the following:

1. Size and physical design: The document should be written on plain white paper measuring 8 1/2 by 11 inches. Handwritten versions should use black or blue-black ink. The left side of the page needs a margin of at least 1.5 inches and the right side at least 1 inch, with one inch at the top and bottom. The document must be well organized and easy to read. Typed text should be double-spaced on one side of the page only, with the exception of lengthy quotations.

2. Procedure: The stages for producing the report must be followed precisely (these steps are outlined in more detail earlier in this chapter).

3. Layout: The report's format should be deliberated over, decided upon, and chosen in light of the report's purpose and the character of the situation. This chapter has already covered the structure of a research report and the kinds of reports that may serve as a guide when producing a report in response to a specific issue.

4. Treatment of quotations: All direct quotations should be enclosed in quotation marks and set in double-spaced text. If a quotation is longer than four or five typewritten lines, however, it should be presented as a separate block of text, single-spaced and indented at least half an inch to the right of the normal text margin.

5. The footnotes: Regarding footnotes, consider the following:

 The report's footnotes have two main functions: first, to identify the sources of the report's citations; and second, to bring attention to information that is not central to the study itself but is nevertheless relevant. In simpler terms, the purpose of footnotes is to provide more context for the reader, whether via a reference to another work, the citation of an outside authority, the acknowledgment of a source, or the expansion of an argument. The footnote is neither the goal nor the means of showcasing scholarship, and this fact must be kept in mind at all times. The current trend is to use as few footnotes as possible, since it is no longer considered necessary to display scholarship publicly.
 Footnotes appear at the bottom of the page carrying the citation or quotation they explain or clarify. Standard formatting for footnotes includes a half-inch indentation and a separating line approximately 1.5 inches long between them and the main text.
 The normal convention for numbering footnotes is to start at 1 at the beginning of every chapter. The numeral should be placed just above the line, for example at the conclusion of a quotation. The footnote number is repeated, indented and slightly above the line, at the bottom of the page. To avoid confusion, footnote references in the text should be numbered consecutively, while notes to numerical material should use symbols such as an asterisk (*).
 Single spacing is used inside footnotes while
double spacing separates them from the
main text.

6. Documentation style: The first footnote reference to a given book should be documented fully, including all pertinent information concerning the edition used. Such documentary citations tend to follow a set pattern. The usual order is as follows:

i. Regarding the single-volume reference

 Author's name in normal order (not surname first, as in a bibliography), followed by a comma;
 Title of the work, italicized for emphasis;
 Place and date of publication;
 Page number or page reference.

ii. Regarding the multivolume reference

 Author's name in normal order;
 Title of the work, italicized for emphasis;
 Place and date of publication;
 Number of volumes;
 Specific page references.

iii. Regarding works arranged alphabetically

 In general, no page references are necessary for alphabetically organized works such as dictionaries and encyclopedias. Volume and page numbers may be required for in-depth references to lengthy encyclopedia articles.

iv. Regarding the periodicals reference

 Name of the author in normal order;
 Title of the article, in quotation marks;
 Name of the periodical, underlined to indicate italics;
 Volume number;
 Date of issuance;
 Pagination.

v. Regarding anthologies and collections

 When quoting from an anthology or collection of literary works, credit must be given both to the original author and to the anthology's editor.
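To make the single-volume ordering in (i) concrete, here is a hypothetical Python sketch; the function name and the sample citation are invented for illustration only, and underscores stand in for italics:

```python
def single_volume_footnote(author, title, place, year, pages):
    """Assemble a footnote citation in the order given above:
    author in normal order, italicized title (marked here with
    underscores), place and date of publication, then pages."""
    return f"{author}, _{title}_, {place}, {year}, pp. {pages}."

# A made-up example citation:
note = single_volume_footnote("John Smith", "Research Methods",
                              "New Delhi", 1978, "42-45")
print(note)  # John Smith, _Research Methods_, New Delhi, 1978, pp. 42-45.
```

The same pattern extends naturally to the multivolume and periodical orderings by inserting the volume number or article title in the positions listed above.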

7. Regarding second-hand quotations

Such instances require the following approach to documentation:

 Original author and title;
 "quoted or cited in,";
 Second author and work.

8. Case of multiple authorship: Only the name of the first author or editor is given in the bibliography, with "et al." or "and others" denoting the presence of further writers. In subsequent references to the same work, the above requirements for specificity may be relaxed. When referencing the same source several times in a row, use ibid., followed by a comma and the page number. For a single page write p.; for several pages write pp. When many pages are referred to at once, it is standard practice to give the page reference as, for example, pp. 190ff., meaning page 190 and the pages that follow; '190f.' is used only for page 190 and the page that follows it. Typically, the volume number of a book is indicated in Roman numerals. It is common practice to abbreviate references in footnotes, and two of the most common abbreviations are op. cit. (opere citato, in the work cited) and loc. cit. (loco citato, in the place cited). The notation op. cit. or loc. cit. following a writer's name indicates another work of that author which was cited in full in an earlier footnote.
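The ibid. convention described above is mechanical enough to sketch in code. The following hypothetical Python helper (names and sample citations invented for illustration) replaces consecutive repeats of the same source with ibid. plus the page number:

```python
def apply_ibid(citations):
    """Given (source, page) pairs in footnote order, replace each
    consecutive repeat of the same source with 'ibid.' plus the page."""
    notes = []
    previous = None
    for source, page in citations:
        if source == previous:
            notes.append(f"ibid., p. {page}")
        else:
            notes.append(f"{source}, p. {page}")
        previous = source
    return notes

footnotes = [("A. Author, Title", 10),
             ("A. Author, Title", 12),
             ("B. Other, Work", 5)]
print(apply_ibid(footnotes))
```

Note that ibid. applies only to an immediately preceding citation; a repeat separated by another source would instead call for op. cit. with the author's name, which this sketch deliberately leaves out.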

9. Punctuation and abbreviations in footnotes: The author's name appears in normal signature order immediately after the footnote number, followed by a comma. The book's title comes next; articles such as "a," "an," and "the" are dropped, only the first word is capitalized apart from proper nouns and adjectives, and the title is italicized. The title is set off by a comma. Next, details of the edition are given, followed by a comma. The place of publication follows; if the city is well known, such as London, New York, or New Delhi, its name may be abbreviated. A comma follows this entry. The publisher's name comes next, again ending in a comma. If the publication date appears on the title page, it follows. If the date is found only in the copyright notice on the back of the title page or elsewhere in the book, it is enclosed in square brackets, as [c 1978] or [1978]. A comma comes after this item. The volume and page numbers, if provided, come last, separated by a comma, and the whole citation ends with a period. Keep in mind, however, that this description of bibliographical entries does not apply to the documentation of references in magazine articles and periodicals.
For the sake of brevity and consistency, bibliographies and footnotes frequently employ a small set of standard English and Latin abbreviations. The researcher should become familiar and proficient with the abbreviations commonly used in report writing:

 anon., anonymous
 art., article
 bk., book
 bull., bulletin
 cf., compare
 ch., chapter
 col., column
 diss., dissertation
 fig(s)., figure(s)
 fn., footnote

10. Use of statistics, charts and graphs: Researchers are encouraged to use statistics judiciously in their reports, since doing so helps to simplify and clarify the information presented. A picture is worth a thousand words, as the saying goes. Tables, graphs, bar charts, and line diagrams are common ways of presenting statistics to the audience. Such a display ought to be self-contained and need no further explanation; it must be relevant to the issue at hand; and the statistics should be presented in a clean and appealing format.

11. The final draft: The report's preliminary draft needs careful revision and rewriting before it can become the final version. Researchers should ask themselves whether the report's sentences make sense, whether the grammar is correct, whether they have said what they intended, and whether the report makes sense in light of the information presented. It is very beneficial to have at least one other person go through the report before the final edit. A reader may find a relationship that seems obvious to the researcher to be a non sequitur, while a sentence that makes perfect sense to the researcher may baffle the reader. A helpful critic can aid the goal of effective communication by pointing out sections that appear unclear or illogical and suggesting ways to rectify the flaws.

12. Bibliography: The research report must contain a bibliography, which should be compiled and included as an appendix.

13. Preparation of the index: An index serves as a helpful guide to readers, and therefore it should always be provided at the end of the report. The index may be made in two ways: by subject and by author. The first kind lists ideas or subjects and the page numbers where they are mentioned or addressed in the report, while the latter lists authors. The index must always be ordered alphabetically. Some people prefer a single index that includes authors, subjects, and concepts alike.

5.8. Precautions for Writing Research Reports

The purpose of writing a research report is to disseminate the findings of the study to the target audience. A high-quality research report accomplishes this goal skillfully and quickly. The following considerations should be kept in mind when writing the research report:

1. Length of the report: Research reports may be very brief or very lengthy; when deciding on length, bear in mind that the report has to be long enough to cover the topic thoroughly but not so long that readers lose interest. Report writing should not be treated as an exercise in displaying knowledge of peripheral topics.

2. Interesting: Research reports should not, if at all possible, be boring; they should be written in a way that keeps the reader interested throughout.

3. Use of abstract terminology and jargon: A research report should avoid jargon and abstract terminology; its content should be written in the simplest terms feasible. The report should be written in a neutral, straightforward manner, free of hedging phrases like "it seems" or "there could be" and similar expressions.

4. Presentation of the findings: The report's results should be presented in a way that is easy to digest, since readers are likely to want to review the highlights quickly. The summary of key findings should be supplemented by appropriate visual aids, such as charts, graphs, and statistical tables, for the different outcomes presented in the main report.

5. Presentation of the report: The report's structure should be well considered, appropriate, and in line with both the report's stated goals and the nature of the research issues at hand.

6. Writing of the report: Reports need to be free of grammatical errors and ought to be written in strict compliance with the mechanics of composition, including footnotes, documentation, appropriate punctuation, and the use of abbreviations in footnotes.

7. Logical presentation of the report: The report's layout should be logical, since the examination of the topic should follow a logical progression. It should have a coherent structure that shows how the various parts of the investigation into the research issue fit together.

8. Originality in writing the report: Reports need to show creativity and must represent an effort to address an intellectual challenge; they should help resolve an issue and expand the body of knowledge.

9. Plan for future research and implications: Future studies and their ramifications should be outlined at the conclusion of the report, along with any policy implications of the issue at hand. Reports should include a look into the future of the topic and identify the kinds of study still to be performed in order to advance the area.
10. Appendices: Every bit of technical
information included in the report has to be
documented in appendices.

11. Bibliography: A bibliography, listing all of the sources used in writing the report, is required.

12. Index: Given the importance of indexes, preparing one and including it at the end of the report is also required.

13. Appearance: The report's visual presentation should be professional and well kept, whether it is typed or printed.

14. Stating confidence limits: Confidence limits should be indicated, and the report may also contain a discussion of the various limitations encountered during data collection and analysis.

15. Introduction: All of the report's most important background information should be included in the introduction, including the study's purpose, the problem's background, the research strategy, and the analytic methodologies used.
