RM&IPR Notes - Unit 1
Uploaded by Sai Prasad

UNIT-I

Research Problem
A research problem refers to some difficulty which a researcher experiences in the context of
either a theoretical or a practical situation and wants to obtain a solution for.
More simply, it is a statement that identifies the problem or situation to be studied.

Components of research problem


 An individual or a group facing some difficulty or problem
 Objectives of the research that are to be attained
 The environment in which the problem exists
 Two or more courses of action (alternative means) for attaining the objectives
 Two or more possible outcomes

Characteristics of a Good Topic


 Interest – The topic must be able to keep the researcher interested throughout
the research process
 Data Availability – It must be possible to investigate the topic through the
collection and analysis of data
 Significance – The topic must contribute to the improvement and understanding of
educational theory or practice
 Adequacy – The topic must match the skills of the researcher, the available
resources and the time restrictions
 Ethics – The topic must not embarrass or harm society

Selecting a Problem
Guidelines for selecting a research problem:
 A subject which is overdone should not be chosen
 An average researcher should avoid controversial topics
 Problems that are too narrow or too vague should be avoided
 The chosen subject should be familiar and feasible
 The significance and importance of the subject must be given attention
 Cost and time factors must be kept in mind
 The experience, qualifications and training of the researcher must suit the problem
at hand

Formulating a Research Problem


The steps involved in formulating a research problem are as follows:
 Develop a suitable title
 Build a conceptual model of the problem
 Define the objectives of the study
 Set up investigative questions
 Formulate hypotheses
 State the operational definitions of concepts
 Determine the scope of the study
Necessity of Defining a Problem
The problem to be investigated must be clearly defined in order to:
 Discriminate relevant data from irrelevant data
 Keep the investigation on track and develop a strategy
 Formulate objectives
 Choose an appropriate research design
 Lay down the boundaries or limits of the study

Technique involved in defining a research problem


A researcher may define a research problem by:
1. Defining the statement of the problem in a general way
2. Understanding the nature of the problem
3. Surveying the available literature
4. Developing ideas through discussions and brainstorming
5. Rephrasing the research problem

There are a few rules that must be kept in mind while defining a research problem:
 Technical terms should be clearly defined.
 Basic assumptions should be stated.
 The criteria for the selection should be provided.
 Suitability of the time period and sources of data available must be considered.
 The scope of the investigation or the limits must be mentioned.

Methodological Steps for the Research Study


Sources of Research Problems
Beginners tend to start with relatively specific research problems focused on the face value
of the question, but eventually develop a broad research question with great generality. For
example, what started as “how can I help my roommate study more?” evolves into “what
controls studying in people?” At the beginning, the roommate’s behaviour is at issue for
itself. Later the person and the behaviour are seen as arbitrary instances of a much more
important and challenging question. Career-long research problems tend to emerge
after several years of work on specific research topics, and require many specific research
studies to solve. This section details some of the sources for an initial, relatively specific,
research problem. It is intended to help you come up with research which is manageable,
enjoyable, and productive.

A very serious impediment facing new researchers is well illustrated by trying to use a
foreign language dictionary to learn what foreign words mean. Until you know "enough" of a
language, you cannot find out what the words mean. Until you know "enough" of a paradigm,
you do not know what unresolved questions remain, or when the paradigm is wrong. "A"
below is generally a person's first exposure to a research project for that reason.

In addition to not knowing what unresolved problems remain, a new researcher may miss the
more fundamental, broader issue underlying any specific behaviour change. When looking at the
world, try to see each functional relationship as only an instance of a more general class of
relationships.
A. Research Problem from Expert
The simplest source of a problem to solve is to have it given to you as a class assignment,
as a directed research project, or as a task while you are an apprentice in someone's lab.
You are told what problem to research and how to do it. This is probably an ideal way to
assure that your first research topic is a good one.
Example: Students in Experimental Psychology were assigned the task of finding out if
social attention made their roommate study more. They were told to measure the amount
of time their roommate studied on days during which they expressed interest in their
roommate's course material as compared to days when they refrained from talking about
academic topics.
B. Research Problem from Folklore
Common beliefs, common sense, or proverbs could be right but on the other hand, they
could also be wrong. You must verify that they are true before considering them as a
source of knowledge. It is possible that some unverified beliefs have the roots of a better
idea and therefore would be a worthy research topic. It is critical to note, however, that
the task of research is not to simply validate or invalidate common sense but rather to
come to understand nature.
Example: It's commonly believed that studying within the two hours preceding a test will
decrease test scores. To research this belief a randomly selected half of a class was told to
study immediately before taking a test while the other half was prohibited from studying
before the test. This research was intended to determine whether or not studying
immediately before a test decreased the points earned.
C. Research Problem from Insight
Sometimes people research an issue simply because it occurred to them and it seemed
important. The systematic development of the idea is lacking. This is "intuitive" or good
guess research. It is risky because you may not be able to get other researchers to
understand why the research is important. It is fun because you get to do what interests
you at the moment. Alternatively, it could be the application of a general rule of thumb or
guessing that a new problem is actually a well-understood function in disguise.
Example: While feeling especially competent after explaining course material to three
friends you realize that orally presenting material may help test performance. You
conducted a study in which material was orally presented before the test on a random
half of the occasions. The research was based on your insightful realization that oral
presentation may increase test performance.

D. Research Problem from Informal Discussion


This is a research problem that some discussion group feels is interesting. Discussion
among friends can often spark our interest in a problem or provide us with the
reinforcers for pursuing a question.
Example: After telling a group of friends about your success with oral presentations on
test taking, the group talks about it for a while and becomes interested in the possibility of
the subject becoming confused as well as doing better as a result of feedback from the
listeners. The group provides you with the idea and the excitement to do research on how
students can affect the accuracy of a teacher's understanding.

E. Research Problem from Knowledge of Techniques and Apparatus


This is the selection of a research topic based on your special knowledge outside the field.
A technique or apparatus with which you are familiar can offer the potential for a major
advance in the field of psychology. Sometimes we realize that we can apply a new
technique or apparatus to an area to which it has not yet been applied. Because we are
specially qualified to succeed, solving the problem can be especially gratifying.
Example: You may know about microelectronics and be good at detailed work. You find
out that many researchers are anxious to discover the migration patterns of butterflies so
you mount an integrated circuit transmitter on a butterfly and thereby trace the
behaviour of the free ranging butterfly.

F. Research Problem from Reading the Literature


These are research problems which capture your interest while reading. While reading
you will often wonder why, or will disagree, or will realize that you have a better idea
than the original author.

Example: While you were reading about jet lag and its effects on sleep the first night, you
realize that the author failed to control for light cycle. You try stretching either the light
period or stretching the dark period to make up the phase shift. You implement this by
changing the cabin illumination period on various trans-Atlantic flights and monitoring
the passengers' sleep for the next three days.

1. Sources of Research Literature


Initially, it may be hard to know where to start reading on your quest for knowledge.
Consider starting at a very broad general level and working your way to the more
recondite.
A good place to start is several Introductory Psychology textbooks. Understand the
basic area, what it is, how it fits in Psychology, and why it is important. Then look
through several middle level textbooks which cover that particular area.
Understand the structure of the area by reading the table of contents, then read the
specific section relevant to your research topic. Read the table of contents and sections
relevant to your paper in several more textbooks, paying particular attention to the
original research which led to the general paradigm and conclusions. Pay attention to
the theoretical significance of various types of results and to the functional relationship
depicted in the figures. Note the authors, titles, dates, volume, and pages of the journals
and books which are referred to. Then consult Psychological Abstracts. This is a
publication that organizes and provides short abstracts of the mass of knowledge
provided in journals. The Psychological Abstracts are available in the library. Locate
key papers in the Abstracts.
Introductory texts
Second-level or area texts
Annual Review / review articles
Special topic text / symposium reports
Journal articles
2. How to Find Additional Sources
Web
PsychInfo
Dictionary of Psychology (http://www.psychology.org)
Card catalog
Psychological Abstracts
Current Contents
Citation Index (find subsequently-published related articles)
Reference sections of relevant papers (find previously-published related articles)
Knowledgeable people
3. How to Read Research Articles
Actively participate while you are reading. At first it will keep you from falling asleep,
later it will keep you from thinking about other things, and eventually it will make it a
lot of fun.
Underline, write questions and answers in the margins, and keep an idea log. Draw a
diagram of the procedure. Consider how the research bears on your interests. Look for
what's important.
a. What was the research problem and why must it be answered?
b. What subjects were used and why?
c. What apparatus or setting was used and why?
d. What general procedure was used and why?
e. Was the procedure applicable and the best available?
f. What was the independent variable, how was it measured, and what was it inferred
to be doing (its interpretation) (e.g., did shock produce fear or something else)?
g. What were potential confounds and how were they controlled?
h. What was the dependent variable, how was it measured, and what was it inferred
to be the result of, or what did it represent (its interpretation)?
i. What were the actual results, what were they interpreted to mean, and to what
extent is it likely they would happen again if the experiment were replicated (their
reliability)?
j. How sensitive was the dependent measure? To what degree would small changes in
the independent variable be expected to change the dependent variable?
k. How much of the variability obtained could be accounted for?
l. To what extent will the findings apply to other subjects, situations, and procedures?
Was there generality?
m. What was gained by the research ("so what")? How has the paradigm been
extended by this finding?
G. Research Problem from a Paradoxical Incident or Conflicting Results
If the world is perfectly understood, then there can be no surprises. Contrariwise, if
something surprises you, then your theoretical framework is inadequate and needs
development. If two seemingly similar procedures produce different results, then
something is wrong with your understanding of the procedures. They are not actually
similar in the important respect of how they affect the dependent variable. Given that an
error has been made, something is not correctly understood and must be resolved.
H. Research Problem Deduced from Paradigms or Theories
Researchers who propose theoretical accounts for phenomena cannot think through
every possible ramification. As you come to understand a theory, potential errors or
extensions become apparent. This type of research tests the implications of theories to
confirm or reject them. This is classic deductive "normal" science. Using the object in the
lake from the first chapter as an example -- this would be deducing "if it is a steam
shovel under there, then we should find a long row of high spots coming out of one end."
You then test that prediction by probing around trying to find a boom.
If response strength approaches asymptotic response strength on each reinforced trial,
then presenting a compound stimulus of asymptotically conditioned stimuli should result
in a response decrement on subsequent tests with the isolated stimuli. (This is a
counterintuitive prediction of the Rescorla-Wagner model, and it holds.)
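The overexpectation prediction in the parenthetical above can be checked numerically. The sketch below is illustrative only (it is not from the notes): it applies the Rescorla-Wagner update, in which each stimulus present on a reinforced trial changes by a learning rate times the difference between the asymptote and the summed strength of all stimuli present. The learning rate of 0.3 and asymptote of 1.0 are arbitrary assumptions.

```python
# Illustrative Rescorla-Wagner simulation (parameter values are arbitrary).
# On each reinforced trial, every stimulus present gains
#   dV = rate * (asymptote - total strength of all stimuli present).

def train(V, stimuli, trials, rate=0.3, asymptote=1.0):
    """Update associative strengths V in place over `trials` reinforced trials."""
    for _ in range(trials):
        error = asymptote - sum(V[s] for s in stimuli)
        for s in stimuli:
            V[s] += rate * error
    return V

V = {"A": 0.0, "B": 0.0}
train(V, ["A"], 50)          # condition A alone to asymptote (~1.0)
train(V, ["B"], 50)          # condition B alone to asymptote (~1.0)
train(V, ["A", "B"], 50)     # reinforce the AB compound: total starts near 2.0

# The compound's summed strength falls back toward the asymptote, so each
# element loses strength -- a decrement on later tests with isolated stimuli.
print(round(V["A"], 2), round(V["B"], 2))  # -> 0.5 0.5
```

Because the compound begins with a summed strength of roughly twice the asymptote, the prediction error is negative throughout compound training, and by symmetry each element ends near half the asymptote.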
Criteria / Characteristics of a Good Research Problem
There are some suggestions for graduate students and researchers, drawn from
different areas of education, the social sciences and psychology. Two kinds of factors bear on
the selection of a topic: external and personal. External criteria involve the importance of the
topic for the field, the availability of data and data collection methods, and the cooperation of
the administration. Personal criteria involve the researcher's own interest, time and cost.
The selection of a research problem depends on the following characteristics.
Personal Inclination. The chief motivation in selecting a research problem is the
personal inclination of the researcher. If a researcher has a personal interest in a topic, he
will tend to select that problem for his research work.
Resource Availability. During selection, a researcher considers the resources
available. If resources such as money, time, accommodation and transport are available at
the place of study, then selection of the problem is easier.
Relative Importance. The importance of the problem also plays a vital role in the selection
of a research problem. If the problem is relatively important, the researcher tends
towards selecting it.
Researcher's Knowledge. The researcher's knowledge plays a vital role in the selection
of the research problem. The wisdom and experience of an investigator are required for
sound collection of research data, and help the researcher select a problem well.
Practicality. Practicality also influences selection. The practical usefulness of the
problem is a main motivation for a researcher to attend to it.
Timeline of the Problem. Some problems take little time to solve while others take
more time. Selection therefore depends on the time within which the research work must be
completed.
Data Availability. If the desired data are available to the researcher, then the problem is
more likely to be selected.
Urgency. Urgency is a key point in the selection of a research problem. Urgent
problems must be given priority because an immediate solution can benefit people.
Feasibility. Feasibility is also an important factor in the selection of a research problem.
The researcher's qualifications, training and experience should match the problem.

Area Culture. The culture of the area in which a researcher conducts his research also
influences the selection of a research problem.

Characteristics of a Research Problem

Any research is a difficult task that requires great effort, and selection of the
research topic is the first step to success.
1. The research topic must be very clear and easy to understand; it should not distract people.
2. A well-defined topic is the only way to successful research. The topic should not create
doubt or a double impression.
3. Easy language is a key to success. Use technical words only if necessary; otherwise focus on
simplicity.
4. The research title should follow the rules of titling. There are different rules of
titling, and a researcher must be aware of them before writing a research title.
5. While selecting a research topic, its current importance should also be
considered. The topic should not be obsolete; it should have great importance in the
present day.
Errors in selecting a research problem
Designing a research project takes time, skill and knowledge. If you don't go into the
process with a clear goal and methods, you'll likely come out with skewed data or an
inaccurate picture of what you were trying to accomplish.
While it's important to use proper methodology in the research process, it's equally
important to avoid making critical mistakes that could produce inaccurate results. Below are
five common errors in the research process and how to avoid making them, so you can get
the best data possible.

1. Population Specification
Population specification errors occur when the researcher does not understand who they
should survey. This can be tricky because there are multiple people who might consume the
product, but only one who purchases it, or they may miss a segment looking to purchase in
the future.
Example: Packaged goods manufacturers often conduct surveys of housewives, because
they are easier to contact, and it is assumed they decide what is to be purchased and also do
the actual purchasing. In this situation there often is population specification error. The
husband may purchase a significant share of the packaged goods, and have significant direct
and indirect influence over what is bought. For this reason, excluding husbands from
samples may yield results targeted to the wrong audience.
How to avoid this: Understand who purchases your product and why they buy it. It’s
important to survey the one making the buying decision so you know how to better reach
them.

2. Sampling and Sample Frame Errors


Survey sampling and sample frame errors occur when the wrong subpopulation is used to
select a sample, or when variation in the number or representativeness of the respondents
means the resulting sample is not representative of the population of concern.
Unfortunately, some element of sampling error is unavoidable, but sometimes, it can be
predicted. For instance, in the 1936 presidential election between Roosevelt and Landon, the
sample frame was drawn from car registrations and telephone directories. The researchers
failed to realize that the majority of people who owned cars and telephones were Republicans,
and wrongly predicted a Republican victory.
Example: Suppose we collected a random sample of 500 people from the general U.S.
adult population to gauge their entertainment preferences and, upon analysis, found it to
be composed of 70% females. This sample would not be representative of the general adult
population and would skew the data: the entertainment preferences of females would
hold more weight, preventing accurate extrapolation to the general U.S. adult population.
Sampling error is affected by the homogeneity of the population being studied and sampled
from and by the size of the sample.
How to avoid this: While sampling error cannot be completely avoided, you should have
multiple people review your sample to confirm that it accurately represents your target
population. You can also increase the size of your sample to get more survey
participants.
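A brief illustration of why a larger sample helps: the standard error of a sample proportion falls with the square root of the sample size. The sketch below is illustrative only (the 51% female population share is an assumed figure); it draws random samples of increasing size from a synthetic population and prints how closely each sample's female share tracks the population value.

```python
import random

random.seed(42)

# Synthetic population: 51% female (an assumed figure for illustration).
# 1 = female, 0 = male.
population = [1] * 51_000 + [0] * 49_000

for n in (50, 500, 5000):
    sample = random.sample(population, n)      # draw without replacement
    share = sum(sample) / n
    # Rough standard error of a proportion: sqrt(p * (1 - p) / n)
    se = (0.51 * 0.49 / n) ** 0.5
    print(f"n={n:5d}  female share={share:.3f}  approx SE={se:.3f}")
```

The printed shares wander widely at n=50 but settle near 0.51 at n=5000, which is the sense in which increasing sample size reduces (though never eliminates) sampling error.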
3. Selection
Selection error is the sampling error for a sample selected by a non-probability method.
When respondents choose to self-participate in a study and only those interested respond,
you can end up with selection error because there may already be an inherent bias. This can
also occur when respondents who are not relevant to the study participate, or when there’s
a bias in the way participants are put into groups.
Example: Interviewers conducting a mall intercept study have a natural tendency to select
those respondents who are the most accessible and agreeable whenever there is latitude to
do so. Such samples often comprise friends and associates who bear some degree of
resemblance in characteristics to those of the desired population.
How to avoid this: Selection error can be controlled by going to extra lengths to get
participation. A typical survey process includes initiating pre-survey contact requesting
cooperation, actual surveying, and post-survey follow-up. If a response is not received, a
second survey request follows, and perhaps interviews using alternate modes such as
telephone or person-to-person.
4. Non-response
Nonresponse error can exist when an obtained sample differs from the original selected
sample.
This may occur because either the potential respondent was not contacted or they refused to
respond. The key factor is the absence of data rather than inaccurate data.
Example: In telephone surveys, some respondents are inaccessible because they are not at
home for the initial call or call-backs. Others have moved or are away from home for the
period of the survey. Not-at-home respondents are typically younger with no small children,
and have a much higher proportion of working wives than households with someone at
home. People who have moved or are away for the survey period have a higher geographic
mobility than the average of the population. Thus, most surveys can anticipate errors from
non-contact of respondents. Online surveys seek to avoid this error through e-mail
distribution, thus eliminating not-at-home respondents.
How to avoid this: When collecting responses, ensure your original respondents are
participating, and use follow-up surveys and alternate modes of reaching them if they don't
initially respond. You can also use different channels to reach your audience, such as
in-person, web surveys, or SMS.
5. Measurement
Measurement error is generated by the measurement process itself, and represents the
difference between the information generated and the information wanted by the
researcher. Generally, there is always some small level of measurement error due to
uncontrollable factors.
Example: A retail store would like to assess customer feedback from at-the-counter
purchases. The survey is developed but fails to target those who purchase in the store.
Instead, the results are skewed by customers who bought items online.
How to avoid this: Double-check all measurements for accuracy and ensure your observers
and measurement takers are well trained and understand the parameters of the experiment.
While not all of these errors can be completely avoided, recognizing them is half the battle.
Next time you're starting a research project, use this list as a checklist to ensure you're
doing everything you can to avoid these common mistakes.
Scope and objectives of research problem
Scoping is figuring out what, exactly, to explore for a study. It's a Goldilocks problem:
you don't want the scope too broad, or you will not see patterns appear in the data, but you
don't want it too narrow, or the participants will tell you everything they have to say about it
in five minutes. You want to get the scope just right, somewhere in between these two
extremes.
The scope, in case you haven’t read Practical Empathy, is how you begin a listening session.
It’s how you introduce the subject you’d like the participant to cover, and it’s the only
question you think of in advance.
You can explore several different scopes over time, each examining an intent or purpose a
person has before reaching for your solution. Each scope has its own study. Scopes are
difficult: often it takes a week of discussion to figure out which scope to explore for
an upcoming study. Sometimes you discover a scope is too broad or too narrow after the first
couple of listening sessions, so you must adjust it mid-study.
Scopes are difficult to define because of the tendency to tie them to a technology or tool. This
is the solution space. In this research approach, you want to explore the problem space. Your
organization and its solutions should not be included or implied by the scope statement.

Here are some example scopes which define a particular problem space. The solution space
appears in parentheses:
 Make sure the drivers in my commercial fleet don’t get us in trouble. (commercial vehicle
speed tracking)
 See if I can gain better insights from my engineering/scientific data. (statistical graphical
software)
 Figure out why my code isn’t working, to get it working how I’d like. (software API
developers network)
 Make sure we have reliable data storage access and future expansion. (data storage
configuration automation)
 Decide whether/how I can get a college degree/certificate, given my financial and family
situation. (technical & community college marketing plan)
 Guide a business toward success where my employees must drive to customer sites.
(decide whether to build a better dispatch tool)
 Decide what to get for lunch. (fast food restaurant)

Research Problem and Objectives


Research Problem: five factors to consider in determining whether a problem is researchable.
1. The problem exists in the locality or country but has no known solution.
2. The solution can be arrived at using statistical methods and techniques.
3. There are probable solutions, but they are not yet tested.
4. The occurrence of the phenomenon requires scientific investigation to arrive at a precise
solution.
5. The people have serious needs/problems that demand research.
Research Objectives
Research Objectives are a specification of the ultimate reason for carrying out
research in the first place. They help in developing a specific list of information needs.
Only when the researcher knows the problem that management wants to solve can
the research project be designed to provide the pertinent information.
An objective is a clear, concise and declarative sentence which provides direction for
investigating the variables under study.

Characteristics of Objectives

The objectives of a project should be SMART. They should be:
Specific. The problem is specifically stated and tested.
Measurable. It is easy to measure using research instruments, apparatus or
equipment.
Achievable. The data are obtainable using correct statistical tools to arrive at precise
results.
Realistic. Real results are attained because they are gathered scientifically and not
manipulated or manoeuvred.
Time-bound. A time frame is required for every activity, because the shorter the completion
time, the better.
Approaches of investigation of solutions for research problem
Research approach can be divided into three types:
1. Deductive research approach
2. Inductive research approach
3. Abductive research approach
The relevance of hypotheses to the study is the main distinction between the
deductive and inductive approaches. The deductive approach tests the validity of assumptions
(or theories/hypotheses) in hand, whereas the inductive approach contributes to the emergence
of new theories and generalizations. Abductive research, on the other hand, starts with
'surprising facts' or 'puzzles', and the research process is devoted to their explanation.
The following table illustrates the major differences between deductive, inductive and
abductive research approaches in terms of logic, generalizability, use of data and theory.

Differences between the deductive, inductive and abductive approaches:

Logic
 Deduction: in a deductive inference, when the premises are true, the conclusion must also
be true
 Induction: in an inductive inference, known premises are used to generate untested
conclusions
 Abduction: in an abductive inference, known premises are used to generate testable
conclusions

Generalizability
 Deduction: generalising from the general to the specific
 Induction: generalising from the specific to the general
 Abduction: generalising from the interactions between the specific and the general

Use of data
 Deduction: data collection is used to evaluate propositions or hypotheses related to an
existing theory
 Induction: data collection is used to explore a phenomenon, identify themes and patterns,
and create a conceptual framework
 Abduction: data collection is used to explore a phenomenon, identify themes and patterns,
locate these in a conceptual framework, and test this through subsequent data collection
and so forth

Theory
 Deduction: theory falsification or verification
 Induction: theory generation and building
 Abduction: theory generation or modification; incorporating existing theory where
appropriate, to build new theory or modify existing theory
Discussion of the research approach is a vital part of any scientific study regardless of the
research area. Within the methodology chapter of your dissertation you need to explain
the main differences between the inductive, deductive and abductive approaches. You also need
to specify the approach you have adopted for your research, breaking down your
arguments into several points.
Deductive Research Approach
If you have formulated a set of hypotheses for your dissertation that need to be confirmed or
rejected during the research process, you are following a deductive approach. For example, in
a deductive study the effects of labour migration within the EU might be assessed by
developing hypotheses that are tested during the research process.
Dissertations with a deductive approach follow this path:

[Figure: Deductive process in research approach]


Inductive Research Approach
Alternatively, the inductive approach does not involve the formulation of hypotheses. It starts
with research questions, aims and objectives that need to be achieved during the research
process.
Inductive studies follow the route below:

[Figure: Inductive process in research approach]


Abductive Research Approach
In the abductive approach, the research process is devoted to the explanation of 'incomplete
observations', 'surprising facts' or 'puzzles' specified at the beginning of the study.
CONCEPT OF DATA COLLECTION
Data collection is the process of gathering and measuring information on variables of
interest, in an established systematic fashion that enables one to answer stated research
questions, test hypotheses, and evaluate outcomes. The data collection component of
research is common to all fields of study including physical and social sciences, humanities,
business, etc. While methods vary by discipline, the emphasis on ensuring accurate and
honest collection remains the same. The goal for all data collection is to capture quality
evidence that then translates to rich data analysis and allows the building of a convincing
and credible answer to questions that have been posed. Regardless of the field of study or
preference for defining data (quantitative, qualitative), accurate data collection is
essential to maintaining the integrity of research. Both the selection of appropriate data
collection instruments (existing, modified, or newly developed) and clearly delineated
instructions for their correct use reduce the likelihood of errors occurring.
Data collection is one of the most important stages in conducting research. You can
have the best research design in the world, but if you cannot collect the required data you
will not be able to complete your project. Data collection is a very demanding job which
needs thorough planning, hard work, patience and perseverance to be able to
complete the task successfully. Data collection starts with determining what kind of data is
required, followed by the selection of a sample from a certain population. After that, you
need to use a certain instrument to collect the data from the selected sample.

TYPES OF DATA
Data are organized into two broad categories: qualitative and quantitative.
Qualitative Data: Qualitative data are mostly non-numerical and usually descriptive or
nominal in nature. This means the data collected are in the form of words and sentences.
Often (not always), such data captures feelings, emotions, or subjective perceptions of
something. Qualitative approaches aim to address the ‘how’ and ‘why’ of a program and
tend to use unstructured methods of data collection to fully explore the topic. Qualitative
questions are open-ended. Qualitative methods include focus groups, group discussions and
interviews. Qualitative approaches are good for further exploring the effects and unintended
consequences of a program. They are, however, expensive and time-consuming to
implement. Additionally, the findings cannot be generalized to participants outside the
program and are only indicative of the group involved.
Qualitative data collection methods play an important role in impact evaluation
by providing information useful to understand the processes behind observed results
and assess changes in people’s perceptions of their well-being. Furthermore, qualitative
methods can be used to improve the quality of survey-based quantitative evaluations by
helping generate evaluation hypotheses, strengthening the design of survey questionnaires,
and expanding or clarifying quantitative evaluation findings. These methods are
characterized by the following attributes -
 they tend to be open-ended and have less structured protocols (i.e., researchers may
change the data collection strategy by adding, refining, or dropping techniques or
informants);
 they rely more heavily on interactive interviews; respondents may be interviewed
several times to follow up on a particular issue, clarify concepts or check the reliability of
data;
 they use triangulation to increase the credibility of their findings (i.e., researchers
rely on multiple data collection methods to check the authenticity of their results);
 generally their findings are not generalizable to any specific population, rather each case
study produces a single piece of evidence that can be used to seek general patterns
among different studies of the same issue.
Regardless of the kinds of data involved, data collection in a qualitative study takes a great
deal of time. The researcher needs to record any potentially useful data thoroughly,
accurately, and systematically, using field notes, sketches, audiotapes, photographs and
other suitable means. The data collection methods must observe the ethical principles of
research. The qualitative methods most commonly used in evaluation can be classified in
three broad categories -
 In-depth interview
 Observation methods
 Document review.
Quantitative Data: Quantitative data is numerical in nature and can be mathematically
computed. Quantitative data measure uses different scales, which can be classified as
nominal scale, ordinal scale, interval scale and ratio scale. Often (not always), such data
includes measurements of something. Quantitative approaches address the ‘what’ of the
program. They use a systematic, standardized approach and employ methods such as
surveys with structured questions. Quantitative approaches have the advantage that they
are cheaper to implement, are standardized so comparisons can easily be made, and the size
of the effect can usually be measured. Quantitative approaches, however, are limited in
their capacity to investigate and explain similarities and unexpected differences. It is
important to note that for peer-based programs, quantitative data collection approaches
often prove difficult for agencies to implement: agencies may lack the resources needed to
ensure rigorous implementation of surveys, and low participation and loss-to-follow-up
rates are commonly experienced.
Quantitative data collection methods rely on random sampling and
structured data collection instruments that fit diverse experiences into predetermined
response categories. They produce results that are easy to summarize, compare, and
generalize. If the intent is to generalize from the research participants to a larger
population, the researcher will employ probability sampling to select participants.
Typical quantitative data gathering strategies include -
 Experiments/clinical trials.
 Observing and recording well-defined events (e.g., counting the number of
patients waiting in emergency at specified times of the day).
 Obtaining relevant data from management information systems.
 Administering surveys with closed-ended questions (e.g., face-to-face and telephone
interviews, questionnaires, etc.).
 In quantitative research (survey research), interviews are more structured than in
qualitative research. In a structured interview, the researcher asks a standard set
of questions and nothing more. Face-to-face interviews have the distinct advantage of
enabling the researcher to establish rapport with potential participants and therefore
gain their cooperation.
 Paper-and-pencil questionnaires can be sent to a large number of people and save the
researcher time and money. People are more truthful when responding to
questionnaires, particularly regarding controversial issues, because their responses are
anonymous.
Mixed Methods: A mixed methods approach is a design combining both qualitative and
quantitative research data, techniques and methods within a single research
framework. Mixed methods approaches may mean a number of things: a number of
different types of methods in a study, methods used at different points within a study, or a
mixture of qualitative and quantitative methods. Mixed methods encompass multifaceted
approaches that combine to capitalize on strengths and reduce the weaknesses that stem
from using a single research design. Using this approach to gather and evaluate data may
help increase the validity and reliability of the research. Some of the common areas
in which mixed-method approaches may be used include –
 Initiating, designing, developing and expanding interventions;
 Evaluation;
 Improving research design; and
 Corroborating findings, data triangulation or convergence.
Some of the challenges of using a mixed methods approach include –
 Delineating complementary qualitative and quantitative research questions;
 Time-intensive data collection and analysis; and
 Decisions regarding which research methods to combine.
Mixed methods are useful in highlighting complex research problems such as disparities in
health and can also be transformative in addressing issues for vulnerable or
marginalized populations or research which involves community participation. Using a
mixed-methods approach is one way to develop creative options to traditional or single
design approaches to research and evaluation.
There are many ways of classifying data. A common classification is based upon who
collected the data.
PRIMARY DATA
Data that has been collected from first-hand experience is known as primary data.
Primary data has not been published yet and is more reliable, authentic and objective.
Primary data has not been changed or altered by human beings; therefore its validity is
greater than that of secondary data.
Importance of Primary Data: In statistical surveys it is necessary to get information
from primary sources and work on primary data. For example, the statistical records of
female population in a country cannot be based on newspaper, magazine and other
printed sources. Research can be conducted without secondary data, but research based
only on secondary data is the least reliable and may contain biases, because secondary data
has already been manipulated by human beings. Moreover, such sources may be old, they
may contain limited information, and they can be misleading and biased.
Sources of Primary Data:
Sources for primary data are limited and at times it becomes difficult to obtain data from
primary source because of either scarcity of population or lack of cooperation.
Following are some of the sources of primary data.
Experiments:
Experiments require an artificial or natural setting in which a logical study is performed to
collect data. Experiments are more suitable for medicine, psychological studies, nutrition
and for other scientific studies. In experiments the experimenter has to keep control over
the influence of any extraneous variable on the results.
Survey:
Survey is most commonly used method in social sciences, management, marketing and
psychology to some extent. Surveys can be conducted in different methods.
Questionnaire:
It is the most commonly used method in surveys. A questionnaire is a list of questions,
either open-ended or close-ended, to which the respondents give answers. Questionnaires
can be administered via telephone, mail, in person in a public area or an institution, through
electronic mail, fax and other methods.
Interview:
An interview is a face-to-face conversation with the respondent. In an interview, the main
problem arises when the respondent deliberately hides information; otherwise it is an
in-depth source of information. The interviewer can not only record the statements the
interviewee makes but can also observe body language, expressions and other reactions to
the questions. This enables the interviewer to draw conclusions easily.
Observations:
Observation can be done with the observed person knowing that s/he is being observed, or
without his/her knowledge. Observations can be made in natural settings as well as in
artificially created environments.
Advantages of Using Primary Data
 The investigator collects data specific to the problem under study.
 There is no doubt about the quality of the data collected (for the investigator).
 If required, it may be possible to obtain additional data during the study period.
Disadvantages of Using Primary Data
1. The investigator has to contend with all the hassles of data collection-
 deciding why, what, how, when to collect;
 getting the data collected (personally or through others);
 getting funding and dealing with funding agencies;
 ethical considerations (consent, permissions, etc.).
2. Ensuring the data collected is of a high standard-
 all desired data is obtained accurately, and in the format it is required in;
 there is no fake/ cooked up data;
 unnecessary/ useless data has not been included.
3. Cost of obtaining the data is often the major expense in studies.
SECONDARY DATA
Data collected from a source that has already been published in any form is
called secondary data. The review of literature in any research is based on secondary
data. It is collected by someone else for some other purpose (but is being utilized by the
investigator for another purpose). For example, census data can be used to analyze the
impact of education on career choice and earnings.
Common sources of secondary data for social science include censuses,
organizational records and data collected through qualitative methodologies or qualitative
research. Secondary data is essential, since it is impossible to conduct a new survey that
can adequately capture past change and/or developments.
Sources of Secondary Data:
The following are some ways of collecting secondary data –
 Books
 Records
 Biographies
 Newspapers
 Published censuses or other statistical data
 Data archives
 Internet articles
 Research articles by other researchers (journals)
 Databases, etc.
Importance of Secondary Data:
Secondary data may be less valid, but it is still important. Sometimes it is difficult to
obtain primary data; in these cases getting information from secondary sources is easier and
more practical. Sometimes primary data does not exist; in such situations one has to confine
the research to secondary data. Sometimes primary data exists but the respondents are not
willing to reveal it; in such cases too, secondary data can suffice. For example, in research
on the psychology of transsexuals, it is first difficult to find transsexual respondents, and
second, they may not be willing to give the information you want for your research, so you
can collect data from books or other published sources. A clear benefit of using secondary
data is that much of the background work needed has already been carried out: literature
reviews or case studies might have been conducted, published texts and statistics may
already have been used elsewhere, and media promotion and personal contacts have also
been utilized. This wealth of background work means that secondary data generally have a
pre-established degree of validity and reliability which need not be re-examined by the
researcher who is re-using such data. Furthermore, secondary data can be helpful in the
research design of subsequent primary research and can provide a baseline with which the
collected primary data results can be compared. Therefore, it is always wise to begin
any research activity with a review of the secondary data.
Advantages of Using Secondary Data
 No hassles of data collection.
 It is less expensive.
 The investigator is not personally responsible for the quality of data (‘I didn’t do it’).
Disadvantages of Using Secondary Data
 The third party that collected the data may not be reliable, so the reliability and
accuracy of the data may suffer.
 Data collected in one location may not be suitable for another due to variable
environmental factors.
 With the passage of time the data becomes obsolete.
 Secondary data can distort the results of the research; special care is required to amend
or modify it for use.
 Secondary data can also raise issues of authenticity and copyright.
Keeping in view the advantages and disadvantages of each source of data, the requirements
of the research study and the time factor, both sources of data, i.e. primary and secondary,
have been selected. These are used in combination to give proper coverage to the topic.
PROCESS ANALYSIS
Process analysis is a step-by-step breakdown of the phases of a process, used to convey the
inputs, outputs, and operations that take place during each phase. A process analysis can be
used to improve understanding of how the process operates, and to determine potential
targets for process
improvement through removing waste and increasing efficiency. Inputs may be materials,
labor, energy, and capital equipment. Outputs may be a physical product (possibly used as
an input to another process) or a service. Processes can have a significant impact on the
performance of a business, and process improvement can improve a firm’s competitiveness.
The first step to improving a process is to analyze it in order to understand the activities,
their relationships, and the values of relevant metrics. Process analysis generally involves
the following tasks-
 Define the process boundaries that mark the entry points of the process inputs and the
exit points of the process outputs.
 Construct a process flow diagram that illustrates the various process activities and their
interrelationships.
 Determine the capacity of each step in the process. Calculate other measures of interest.
 Identify the bottleneck, that is, the step having the lowest capacity.
 Evaluate further limitations in order to quantify the impact of the bottleneck.
 Use the analysis to make operating decisions and to improve the process.
Process Analysis Tools
When you want to understand a work process or some part of a process, these tools can help –
 Flowchart: A picture of the separate steps of a process in sequential order, including
materials or services entering or leaving the process (inputs and outputs), decisions that
must be made, people who become involved, time involved at each step and/or process
measurements.
 Failure Mode Effects Analysis (FMEA): A step-by-step approach for identifying all possible
failures in a design, a manufacturing or assembly process, or a product or service;
studying the consequences, or effects, of those failures; and eliminating or reducing
failures, starting with the highest-priority ones.
 Mistake-proofing: The use of any automatic device or method that either makes it
impossible for an error to occur or makes the error immediately obvious once it has
occurred.
 Spaghetti Diagram: A spaghetti diagram is a visual representation using a continuous flow
line tracing the path of an item or activity through a process. The continuous flow line
enables process teams to identify redundancies in the work flow and opportunities to
expedite process flow.
Process Flow Diagram
The process boundaries are defined by the entry and exit points of inputs and outputs of the
process. Once the boundaries are defined, the process flow diagram (or process flowchart)
is a valuable tool for understanding the process, using graphic elements to represent tasks,
flows, and storage. The following is a flow diagram for a simple process having three
sequential activities-
[Figure: process flow diagram of a simple process with three sequential tasks]
The symbols in a process flow diagram are defined as follows-
 Rectangles - represent tasks.
 Arrows - represent flows. Flows include the flow of material and the flow of information.
The flow of information may include production orders and instructions. The information
flow may take the form of a slip of paper that follows the material, or it may be routed
separately, possibly ahead of the material in order to ready the equipment. Material flow
usually is represented by a solid line and information flow by a dashed line.
 Inverted triangles - represent storage (inventory). Storage bins commonly are used to
represent raw material inventory, work in process inventory, and finished goods
inventory.
 Circles - represent storage of information (not shown in the above diagram).
In a process flow diagram, tasks drawn one after the other in series are performed
sequentially. Tasks drawn in parallel are performed simultaneously. In the above diagram,
raw material is held in a storage bin at the beginning of the process. After the last task, the
output also is stored in a storage bin. When constructing a flow diagram, care should be
taken to avoid pitfalls that might cause the flow diagram not to represent reality. For
example, if the diagram is constructed using information obtained from employees, the
employees may be reluctant to disclose rework loops and other potentially embarrassing
aspects of the process. Similarly, if there are illogical aspects of the process flow, employees
may tend to portray it as it should be and not as it is. Even if they portray the process as they
perceive it, their perception may differ from the actual process. For example, they may leave
out important activities that they deem to be insignificant.
Process Performance Measures
Operations managers are interested in process aspects such as cost, quality, flexibility, and
speed. Some of the process performance measures that communicate these aspects include-
 Process capacity - the capacity of the process is its maximum output rate,
measured in units produced per unit of time. The capacity of a series of tasks is
determined by the lowest capacity task in the string. The capacity of parallel strings of
tasks is the sum of the capacities of the two strings, except for cases in which the two
strings have different outputs that are combined. In such cases, the capacity of the two
parallel strings of tasks is that of the lowest capacity parallel string.
 Capacity utilization - the fraction of the process capacity that is actually used, i.e. the
actual throughput rate divided by the process capacity.
 Throughput rate (also known as flow rate) - the average rate at which units flow past a
specific point in the process. The maximum throughput rate is the process capacity.
 Flow time (also known as throughput time or lead time) - the average time that a unit
requires to flow through the process from the entry point to the exit point. The flow time
is the length of the longest path through the process. Flow time includes both processing
time and any time the unit spends between steps.
 Cycle time - the time between successive units as they are output from the process. Cycle
time for the process is equal to the inverse of the throughput rate. Cycle time can be
thought of as the time required for a task to repeat itself. Each series task in a process
must have a cycle time less than or equal to the cycle time for the process. Put another
way, the cycle time of the process is equal to the longest task cycle time. The process is
said to be in balance if the cycle times are equal for each activity in the process. Such
balance rarely is achieved.
 Process time - the average time that a unit is worked on. Process time is flow time less
idle time.
 Idle time - time when no activity is being performed, for example, when an activity is
waiting for work to arrive from the previous activity. The term can be used to describe
both machine idle time and worker idle time.
 Work In process - the amount of inventory in the process.
 Set-up time - the time required to prepare the equipment to perform an activity on a
batch of units. Set-up time usually does not depend strongly on the batch size and
therefore can be reduced on a per unit basis by increasing the batch size.
 Direct labor content - the amount of labor (in units of time) actually contained in the
product. Excludes idle time when workers are not working directly on the product. Also
excludes time spent maintaining machines, transporting materials and the like.
 Direct labor utilization - the fraction of labor capacity that actually is utilized as direct labor.
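The arithmetic relationships among these measures can be sketched briefly; the task capacities below are hypothetical numbers chosen for illustration:

```python
# Hypothetical capacities of three sequential tasks, in units per hour.
task_capacities = [60, 40, 50]

# For tasks in series, process capacity equals the lowest task capacity.
process_capacity = min(task_capacities)      # 40 units/hour

# Cycle time is the inverse of the throughput rate; at maximum
# throughput it is the inverse of the process capacity.
cycle_time_hours = 1 / process_capacity      # 0.025 hours between units

# If the process actually runs at 30 units/hour:
throughput_rate = 30
utilization = throughput_rate / process_capacity  # 0.75

print(process_capacity, cycle_time_hours, utilization)
```

The same relations hold in any units, as long as capacity and throughput are measured per the same unit of time.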
Process Bottleneck
The process capacity is determined by the slowest series task in the process, that is, the task
having the lowest throughput rate or longest cycle time. This slowest task is known as the
bottleneck. Identification of the bottleneck is a critical aspect of process analysis since it not
only determines the process capacity, but also provides the opportunity to increase that
capacity. Saving time in the bottleneck activity saves time for the entire process. Saving time
in a non-bottleneck activity does not help the process since the throughput rate is limited by
the bottleneck. It is only when the bottleneck is eliminated that another activity will become
the new bottleneck and present a new opportunity to improve the process. If the next
slowest task is much faster than the bottleneck, then the bottleneck is having a major impact
on the process capacity. If the next slowest task is only slightly faster than the bottleneck,
then increasing the throughput of the bottleneck will have a limited impact on the process
capacity.
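Identifying the bottleneck from a set of task capacities is a simple comparison; the sketch below uses invented task names and numbers:

```python
# Hypothetical task capacities (units/hour), keyed by invented task names.
capacities = {"cut": 120, "assemble": 45, "paint": 80, "pack": 100}

# The bottleneck is the series task with the lowest capacity.
bottleneck = min(capacities, key=capacities.get)
print(bottleneck)  # assemble

# Gap to the next slowest task: a large gap means improving the
# bottleneck pays off before another task becomes the new bottleneck.
sorted_caps = sorted(capacities.values())
headroom = sorted_caps[1] - sorted_caps[0]
print(headroom)    # 35
```

Here the gap of 35 units/hour between "assemble" and "paint" suggests that speeding up the bottleneck would raise process capacity substantially before "paint" becomes the new constraint.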
Starvation and Blocking
Starvation occurs when a downstream activity is idle with no inputs to process because of
upstream delays. Blocking occurs when an activity becomes idle because the next
downstream activity is not ready to receive its output. Both starvation and blocking can be reduced by
adding buffers that hold inventory between activities.
Process Improvement
Improvements in cost, quality, flexibility, and speed are commonly sought. The following
lists some of the ways that processes can be improved.
 Reduce work-in-process inventory - reduces lead time.
 Add additional resources to increase capacity of the bottleneck. For example, an
additional machine can be added in parallel to increase the capacity.
 Improve the efficiency of the bottleneck activity - increases process capacity.
 Move work away from bottleneck resources where possible - increases process capacity.
 Increase availability of bottleneck resources, for example, by adding an additional shift -
increases process capacity.
 Minimize non-value adding activities - decreases cost, reduces lead time. Non-value
adding activities include transport, rework, waiting, testing and inspecting, and support
activities.
 Redesign the product for better manufacturability - can improve several or all process
performance measures.
 Flexibility can be improved by outsourcing certain activities. Flexibility also can be
enhanced by postponement, which shifts customizing activities to the end of the process.
In some cases, dramatic improvements can be made at minimal cost when the
bottleneck activity is severely limiting the process capacity. On the other hand, in well-
optimized processes, significant investment may be required to achieve a marginal
operational improvement. Because of the large investment, the operational gain may not
generate a sufficient rate of return. A cost-benefit analysis should be performed to
determine if a process change is worth the investment. Ultimately, net present value will
determine whether a process ‘improvement’ really is an improvement.
LINK ANALYSIS
Link analysis is a data analysis technique, used in network theory, for evaluating the
relationships or connections between network nodes. These relationships can be between
various types of objects (nodes), including people, organizations and even transactions. Link
analysis is essentially a kind of knowledge discovery that can be used to visualize data to
allow for better analysis, especially in the context of links, whether Web links or relationship
links between people or between different entities. Link analysis has been used for
investigation of criminal activity (fraud detection, counterterrorism, and intelligence),
computer security analysis, search engine optimization, market research and medical
research.
Link analysis is literally about analyzing the links between objects, whether they are
physical, digital or relational. This requires diligent data gathering. For example, in the case
of a website, where all of the links and backlinks present must be analyzed, a tool has
to sift through all of the HTML code and the various scripts on the page and then follow all the
links it finds in order to determine what sort of links are present and whether they are active
or dead. This information can be very important for search engine optimization, as it allows
the analyst to determine whether the search engine is actually able to find and index the
website. In networking, link analysis may involve determining the integrity of the connection
between each network node by analyzing the data that passes through the physical or virtual
links. With the data, analysts can find bottlenecks and possible fault areas and are able to
patch them up more quickly or even help with network optimization.
Link analysis has three primary purposes –
 Find matches for known patterns of interests between linked objects.
 Find anomalies by detecting violated known patterns.
 Find new patterns of interest (for example, in social networking and marketing and
business intelligence).
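A toy illustration of the first purpose, searching linked objects for a pattern of interest, might look like this (the entities and links are invented):

```python
from collections import Counter

# Invented directed links (source, target) between entities (nodes).
links = [
    ("A", "B"), ("A", "C"), ("B", "C"),
    ("D", "C"), ("E", "C"), ("C", "A"),
]

# Count inbound and outbound links per node.
in_degree = Counter(target for _, target in links)
out_degree = Counter(source for source, _ in links)

# A node with unusually many inbound links can signal a pattern of
# interest, e.g. a hub account in fraud detection.
hub, hits = in_degree.most_common(1)[0]
print(hub, hits)  # C 4
```

Real link analysis tools apply the same idea at scale, combining degree counts with pattern matching over the link structure.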
Instrument, Validity, Reliability
Instrument is the general term that researchers use for a measurement device (survey, test,
questionnaire, etc.). To help distinguish between instrument and instrumentation, consider
that the instrument is the device and instrumentation is the course of action (the process of
developing, testing, and using the device).
Instruments fall into two broad categories, researcher-completed and subject-completed,
distinguished by those instruments that researchers administer versus those that are
completed by participants. Researchers chose which type of instrument, or instruments, to
use based on the research question. Examples are listed below:
Researcher-completed Instruments Subject-completed Instruments
Rating scales Questionnaires
Interview schedules/guides Self-checklists
Tally sheets Attitude scales
Flowcharts Personality inventories
Performance checklists Achievement/aptitude tests
Time-and-motion logs Projective devices
Observation forms Sociometric devices
Usability refers to the ease with which an instrument can be administered, interpreted by
the participant, and scored/interpreted by the researcher. Example usability problems
include:
1. Students are asked to rate a lesson immediately after class, but there are only a few
minutes before the next class begins (problem with administration).
2. Students are asked to keep self-checklists of their after school activities, but the
directions are complicated and the item descriptions confusing (problem with
interpretation).
3. Teachers are asked about their attitudes regarding school policy, but some questions are
worded poorly which results in low completion rates (problem with
scoring/interpretation).
Validity and reliability concerns (discussed below) will help alleviate usability issues. For
now, we can identify five usability considerations:
1. How long will it take to administer?
2. Are the directions clear?
3. How easy is it to score?
4. Do equivalent forms exist?
5. Have any problems been reported by others who used it?
It is best to use an existing instrument, one that has been developed and tested numerous
times, such as can be found in the Mental Measurements Yearbook. We will turn to why
next.
Validity is the extent to which an instrument measures what it is supposed to measure and
performs as it is designed to perform. It is rare, if not impossible, for an instrument to be
100% valid, so validity is generally measured in degrees. As a process, validation involves
collecting and analyzing data to assess the accuracy of an instrument. There are numerous
statistical tests and measures to assess the validity of quantitative instruments, which
generally involves pilot testing. The remainder of this discussion focuses on external validity
and content validity.
External validity is the extent to which the results of a study can be generalized from a
sample to a population. Establishing external validity for an instrument, then, follows directly
from sampling. Recall that a sample should be an accurate representation of a population,
because the total population may not be available. An instrument that is externally valid
helps obtain population generalizability, or the degree to which a sample represents the
population.
Content validity refers to the appropriateness of the content of an instrument. In other
words, do the measures (questions, observation logs, etc.) accurately assess what you want
to know? This is particularly important with achievement tests. Consider that a test
developer wants to maximize the validity of a unit test for 7th grade mathematics. This
would involve taking representative questions from each of the sections of the unit and
evaluating them against the desired outcomes.
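This idea of matching test items to instructional emphasis is sometimes formalized as a table of specifications. A minimal sketch of such a check is below; the section names, period counts, question counts, and the 10% tolerance are all invented for illustration, not part of any standard:

```python
# Hypothetical table-of-specifications check for content validity:
# compare the share of test questions each unit section receives
# against the share of instruction time it was given.
lesson_periods = {"fractions": 6, "decimals": 4, "percentages": 2}   # class periods taught
test_questions = {"fractions": 10, "decimals": 7, "percentages": 3}  # items on the unit test

total_periods = sum(lesson_periods.values())
total_questions = sum(test_questions.values())

for section in lesson_periods:
    taught = lesson_periods[section] / total_periods
    tested = test_questions[section] / total_questions
    # Flag sections whose test weight strays more than 10 points
    # from their instructional weight (an arbitrary threshold).
    flag = "OK" if abs(taught - tested) < 0.10 else "REVIEW"
    print(f"{section:12s} taught={taught:.0%} tested={tested:.0%} {flag}")
```

A section flagged REVIEW is over- or under-represented on the test relative to how heavily it was taught, which signals a possible content-validity problem.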

Reliability can be thought of as consistency: does the instrument consistently measure what
it is intended to measure? Reliability cannot be calculated exactly; instead, it is estimated,
and there are four general estimators that you may encounter in reading research:
1. Inter-Rater/Observer Reliability: The degree to which different raters/observers give
consistent answers or estimates.
2. Test-Retest Reliability: The consistency of a measure evaluated over time.
3. Parallel-Forms Reliability: The reliability of two tests constructed the same way, from
the same content.
4. Internal Consistency Reliability: The consistency of results across items, often measured
with Cronbach’s Alpha.

Relating Reliability and Validity


Reliability is directly related to the validity of the measure. There are several important
principles. First, a test can be reliable but not valid. Consider the SAT, used as a predictor of
success in college. It is a reliable test (scores are consistent across repeated
administrations), though only a moderately valid indicator of success, because college lacks
the structured environment (enforced class attendance, parent-regulated study, regular
sleeping habits) that contributes to success.
Second, validity is more important than reliability. Using the above example, college
admissions may consider the SAT a reliable test, but not necessarily a valid measure of other
quantities colleges seek, such as leadership capability, altruism, and civic involvement. The
combination of these aspects, alongside the SAT, is a more valid measure of the applicant’s
potential for graduation, later social involvement, and generosity (alumni giving) toward the
alma mater.
Finally, the most useful instrument is both valid and reliable. Proponents of the SAT argue
that it is both: a reliable measure of a student's knowledge in Mathematics, Critical Reading,
and Writing, and a moderately valid predictor of future success.
