Maje Karo Research

A survey is a research method for collecting data from a specific group to gain insights on various topics, often conducted through standardized questionnaires. Online surveys are a popular tool due to their efficiency, accuracy, and ease of participation, allowing for real-time data analysis and flexible respondent engagement. Various survey templates and methodologies exist to ensure effective data collection and analysis, catering to different research objectives.

Uploaded by

Kushal Choudhury

What is a survey

A survey is a research method used to collect data from a predefined group of respondents to gain information and insights into various topics of interest. Surveys can serve multiple purposes, and researchers can conduct them in many ways depending on the chosen methodology and the study's goal. It is therefore essential to understand the benefits of social research for a target population and to use the right survey tool.

The data is usually obtained through standardized procedures to ensure that each respondent answers the questions on a level playing field, avoiding biased opinions that could influence the outcome of the research or study. The process involves asking people for information through a questionnaire, which can be administered either online or offline. With the arrival of new technologies, it is now common to distribute questionnaires using digital media such as social networks, email, QR codes, or URLs.

What is an online survey?

An online survey is a set of structured questions that the respondent completes over the internet, generally by filling out a form. It is a more natural way to reach respondents, as it is both less time consuming and less expensive than gathering information through traditional one-to-one interaction. The data is collected and stored in a database, which is later evaluated by an expert in the field.

As an incentive for respondents to participate in such online research, businesses offer rewards like gift
cards, reward points that they can redeem for goods or services later, free airline miles, discounts at gas
stations, etc. Research studies with rewards are a win-win situation for both businesses and
respondents. Companies or organizations get valuable data from a controlled environment for market
research.


What are the advantages of an online survey?

Accuracy: In an online research study, the margin of error is low because respondents register their responses through simple selection buttons. Traditional methods require human intervention, which, according to one study, increases the margin of error by 10%.

Easy and quick to analyze: Since all responses are registered online, the data is straightforward to analyze in real time, draw inferences from, and share.

Ease of participation: Most people now have access to the internet, and respondents prefer receiving surveys over email. Ease of participation increases dramatically because respondents can choose a suitable time and place, according to their convenience, to register their responses.

Great branding exercise: In an online design, organizations or businesses have this opportunity to
develop their questionnaire to align with their brand. Using logos and similar brand language (color and
fonts) gives the companies an advantage as respondents can connect better with the brand.

Respondents can be honest and flexible at the same time: According to one study, researchers have found increased participation when respondents are sent online surveys rather than lengthy questionnaires. When a questionnaire asks relevant questions, respondents answer honestly and can skip questions or choose a more neutral option, increasing their flexibility to respond.

Survey templates: Leading online research tools have expert-designed ready survey templates that make
it easier for researchers to choose from and conduct their research study. These templates are vetted
questionnaires and are specific to every industry, making the study even more efficient.


Good survey templates and examples

A researcher needs to conduct surveys using the right questions and the right medium to administer and
track responses. QuestionPro is a platform that helps create and deploy different types and sets of
questionnaires, polls, and quizzes.

We have 350+ varieties of survey templates, including:

Customer Satisfaction (CSAT) + Net Promoter Score (NPS) Survey: We hear this time and again that the
customer is king, which is true. A satisfied customer is a customer that helps your brand and
organization grow, through direct means as well as being an advocate for your brand. This template
talks about the goodwill your brand has created and how referenceable it is.

Employee Satisfaction Template: This template is the perfect fit for organizations that want to measure their employees' satisfaction levels. It will give you insights into your organization's culture and the job satisfaction of your workforce within that culture.

B2B Templates: The business-to-business templates are efficient modes of collecting feedback about entities that directly contribute to your business. These may include vendors, clients, their experiences, and so on.

Company Communications Evaluation Template: This example helps analyze employee perspectives on internal company communications: topics to cover in the newsletter, updates on the bulletin board, the efficiency of the organization's management in communication, etc.

Hardware Product Evaluation Template: Improving hardware product features isn't a straightforward proposition, because many elements, such as raw materials, supply chains, and manufacturing lines, are affected by any change. Hence, while eliciting feedback for hardware, it is essential to be as objective as possible. Objective feedback helps identify which product innovations are necessary.

Strategic Planning Survey: Innovation is essential to any organization’s product or service lines. Hence,
implementing customer support and making product or service tweaks when required is necessary for
the sustenance and growth of an organization. This template helps organizations chalk out their
business strategy.

Business Demographic Survey: This template aims to ask demographic questions and examples that help
gain information on occupation, the primary area of business, job function and description,
organization’s gross income, etc.

Course Evaluation Survey: This template helps educational institutions collect periodic feedback on their courses: whether students find them helpful, whether they are stimulating enough, and whether students see them as value for money alongside enhanced learning.

How to create a survey with a good design?

As explained before, a survey usually begins when a person, company, or organization needs information and no sufficient existing data is available. Take into account the following recommendations:

Define the objective: A survey has no meaning if its aim and expected outcome are not planned before deployment. The survey method and plan should be framed as actionable milestones, along with the sample planned for research. Appropriate distribution methods for these samples also have to be put in place right at the outset.

The number of questions: The number of questions used in a market research study depends on the end objective of the research. It is essential to avoid redundant queries in every way possible. The length of the questionnaire should be dictated only by the core data metrics that have to be collected.

Simple language: One factor that can cause a high survey dropout rate is if the respondent finds the
language difficult to understand. Therefore, it is imperative to use easily understandable text in the
survey.

Question types: There are several types of questions that can go into a survey. It is essential to use the question types that offer the most value to the research while being the easiest for a respondent to understand and answer. Using closed-ended questions, such as the Net Promoter Score (NPS) question or multiple-choice questions, helps increase the survey response rate.

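
The NPS question mentioned above has a standard scoring rule: respondents answering 9-10 are promoters, 0-6 are detractors, and the score is the percentage of promoters minus the percentage of detractors. A minimal sketch in Python (the sample responses are hypothetical):

```python
# Score the 0-10 NPS question "How likely are you to recommend us?"
# Promoters answer 9-10, detractors 0-6, passives 7-8 (ignored).
def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

responses = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]  # hypothetical sample
print(nps(responses))  # 5 promoters, 2 detractors -> (5-2)/10*100 = 30
```

A score above 0 means promoters outnumber detractors; many teams track the trend of this number rather than its absolute value.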
Consistent scales: If you use rating scale questions, make sure that the scales are consistent throughout
the research study. Using scales from -5 to +5 in one question and -3 to +3 in another question may
confuse a respondent.

Survey Logic: Logic is one of the most critical aspects of survey design. If the logic is flawed, respondents will not be able to continue, or will not proceed the desired way. Logic has to be applied and tested to ensure that, on selecting an option, only the next logical question shows up.
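
Skip logic of this kind can be pictured as a small routing table: each answer option points to the next question to show. The sketch below is a simplified illustration with hypothetical question ids and wording, not the way any particular survey tool implements it:

```python
# Minimal skip-logic sketch: each question maps answer -> next question id.
questions = {
    "q1": {"text": "Do you own a car?", "next": {"yes": "q2", "no": "q3"}},
    "q2": {"text": "Which fuel type?", "next": {"petrol": "q3", "electric": "q3"}},
    "q3": {"text": "Any further comments?", "next": {}},
}

def route(question_id, answer):
    """Return the id of the next question to show, or None when the survey ends."""
    return questions[question_id]["next"].get(answer)

print(route("q1", "no"))  # -> q3 (the car-detail question is skipped)
```

Testing the logic then amounts to walking every answer path and checking that no respondent is routed to an irrelevant question or a dead end.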

Characteristics of a survey

1. Sample and Sample Determination

First, a sample, also referred to as the audience, is needed. It should consist of survey respondents with the required demographic characteristics who can relevantly answer your survey questions and provide the best insights. The better the quality of your audience, the better your response quality and insights will be.

The characteristics of a survey sample are:

Determining sample size: Once you have determined your sample, the total number of individuals in that particular sample is the sample size. Selecting a sample size depends on the end objective of your research study.

Types of sampling: There are two essential types of sampling methods: probability sampling and non-probability sampling.

Probability sampling: Probability sampling is a sampling method where the respondent is selected based
on the theory of probability. The major characteristic of this method is that each individual in a
population has an equal chance of being selected.

Non-probability sampling: Non-probability sampling is a sampling method where the researcher selects
a sample of respondents purely based on their discretion or gut. There is no predefined selection
method.
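
The difference between the two methods can be illustrated in a few lines of Python; the population list and sample size here are hypothetical:

```python
import random

population = [f"respondent_{i}" for i in range(1000)]  # hypothetical sampling frame

# Probability sampling: every individual has an equal, known chance of
# selection (simple random sampling without replacement).
random.seed(42)  # fixed seed so the draw is reproducible
prob_sample = random.sample(population, k=50)

# Non-probability (convenience) sampling: the researcher takes whoever is
# easiest to reach -- here, simply the first 50 names in the list.
convenience_sample = population[:50]

print(len(prob_sample), len(convenience_sample))  # 50 50
```

Only the probability sample supports formal statements about margin of error, because the selection probabilities are known.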

2. Survey Questions: How to ask the right questions?

Useful questions are the cornerstone for the success of any survey and, subsequently, any research
study.

The characteristics of the survey questions are as follows:


Data collection: Whether it is an email, SMS, web intercept, or a mobile app survey, the single common
denominator that determines how effectively you can collect accurate and complete responses is your
survey questions and their types.

Fundamental levels of measurement scales: Four measurement scales are crucial to creating a multiple-choice question in a survey: nominal, ordinal, interval, and ratio. Without these fundamentals, no multiple-choice question can be created, so it is essential to understand these levels of measurement to create a robust research framework.
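
As a rough illustration, the four levels of measurement differ in which summary statistics are meaningful; the category examples below are the usual textbook ones:

```python
# Which summary statistics are meaningful at each level of measurement.
# A common textbook summary; the example variables are illustrative.
scales = {
    "nominal":  {"example": "eye colour",       "stats": ["mode"]},
    "ordinal":  {"example": "satisfaction",     "stats": ["mode", "median"]},
    "interval": {"example": "temperature (°C)", "stats": ["mode", "median", "mean"]},
    "ratio":    {"example": "income",           "stats": ["mode", "median", "mean", "ratio"]},
}

for level, info in scales.items():
    print(f"{level:8s} e.g. {info['example']:16s} supports: {', '.join(info['stats'])}")
```

The practical consequence: averaging ordinal ratings (e.g. a 1-5 satisfaction scale) is a convention, not something the scale strictly licenses.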

Use of different question types: Multiple choice questions are the most common type of survey
questions, in which some of the popular question types are: dichotomous question, semantic
differential scale question, rank order questions, and rating scale questions. Open-ended questions help
collect in-depth qualitative data.

Administering the survey: It is essential to plan the type of survey to ensure the optimum number of responses required for your study. It could be a mix of interviews and questionnaires. Interviews could be telephone, face-to-face, or online interviews, and questionnaires can be personal, intercept, or web surveys.

3. Survey Logic: Skip logic and branching

Logic is one of the essential characteristics of a survey. The objective of using logic in a study is to route a respondent to the next question based on their current selection. Survey skip logic and branching provide the ability to create "intelligent" surveys, meaning respondents answer only the questions relevant to their answers to screening questions. The characteristics include:

Design: In this phase, users design their logic and set it up so that questions irrelevant to a given respondent don't show up as part of the survey.

Application: Survey logic can be applied using conditional or unconditional branching. Other parameters that can form the basis of logic, depending on the objective of the study, are piping data, question randomization, link quotas, etc.

4. Survey Methods

Survey methodology studies the in-depth sampling of individual units from a population and the administration of data collection techniques on that sample. It includes instruments or processes that ask different question types of a predefined sample in order to conduct data collection and increase the survey response rate.

Practitioners in the field fall into two distinct groups: those who focus on empirical survey errors and those who design surveys to reduce those errors. The primary tasks of an administrator deploying a survey are to identify and create samples, validate test questions, select the mode of administering questions, and verify data collection methods, statistical analysis, and data reporting.

Survey Methods based on Design

Research studies are of the following types:

Cross-sectional studies: A cross-sectional study is an observational research type that analyzes data on variables collected at one given point in time across a sample population or a predefined subset. This study type is also known as cross-sectional analysis, transverse study, or prevalence study. The data gathered in a cross-sectional study comes from people who are similar in all variables except the one under study, and that variable remains constant throughout the study.

Longitudinal studies: A longitudinal study is an observational study employing continuous or repeated measures to follow particular individuals over a prolonged period, often years or decades. Longitudinal research collects data that is either qualitative or quantitative. Respondents are under observation over a period ranging from months to decades, to observe any changes in them or their attitudes. For example, a researcher who wants to find out which diseases affect young boys (in the age group of 10 to 15) will observe the same individuals over that period to collect meaningful data.

Correlational studies: A correlational study is a non-experimental type of research design in which two distinct variables are studied. Statistical analysis helps examine the relationship between them without the interference of external variables. This study aims to understand how much one of the two variables changes when the other changes. For example, the loudness of an ice-cream truck's jingle correlates with its distance: the louder people hear the jingle, the closer they infer the truck to be.

Survey Methods based on the distribution

There are different ways of survey distribution. Some of the most commonly used methods are:

Email: Sending out an email is the easiest way of conducting a survey. The respondents are targeted,
and there is a higher chance of response due to the respondents already knowing about your brand. You
can use the QuestionPro email management feature to send out and collect responses.

Buy respondents: Buying a sample helps satisfy many of the response criteria, because the people being asked to respond have signed up to do so and already meet the qualifying criteria for the research study.

Embedding on the website: Embedding a survey on a site ensures that the number of responses is very
high. Embedding a survey can be done while the person enters the website or is exiting it. A non-
intrusive method of collecting feedback is essential to achieve a higher number of responses. The
responses received are also honest due to the top brand recall value, and the answers are quick to
collect and analyze due to them being in a digital format.

Post to the social network: Posting on social networks is another effective way of receiving responses. The survey is published as a link on social media, and the people who follow the brand become the set of respondents. There is no upper cap on the number of survey responses, and this is the easiest and fastest way of eliciting responses.

QR code: QuestionPro QR codes store the URL for the survey. You can print/publish this code in
magazines, on signs, business cards, or on just about any object/medium. Users with a camera phone
equipped with the correct reader application can scan the QR Code’s image to open the survey in the
phone’s browser.

QuestionPro App: The QuestionPro App allows you to circulate surveys quickly, and the responses can be collected both online and offline.

API: You can use the API integration of the QuestionPro platform for potential respondents to take your
survey.

SMS: An SMS survey is another quick way to collect feedback. It suits situations where quick responses are needed and the survey is simple, straightforward, and not too long, and it helps increase the open and response rates of feedback collection.

Distribution allows using one or a mix of the above methods, depending on the research objective and
the resources being used for any particular survey. Many factors play a part in the mode of distribution
of surveys like cost, research study type, the flexibility of questions, time to collect responses, statistical
analysis to be run on data, and willingness of the respondent to take part in the study.

You can conduct a telephone or email survey and then select respondents for a face-to-face interview.
Survey data are sometimes also obtained through questionnaires filled out by respondents in groups, for
example, a school class or a group of shoppers in a shopping center.

You can also classify surveys by their content, using open or closed questions to learn about, for example, opinions, attitudes, details of a fact, habits, or experiences, for later classification and analysis of the obtained results.

In the same way, you can use sample survey questions that ask respondents to rank different alternatives. A survey can be concise, with items that take five minutes or less to answer, or very long, requiring an hour or more of the interviewee's time. For example, those who need to know the in-depth behavior or attitudes of people prefer to use a panel or an online community in addition to surveys.

5. Survey data collection

The methods used to collect survey data have evolved with time. Researchers have increasingly moved away from paper surveys toward quick, online questionnaires. Each survey data collection method has its pros and cons, and the researcher has to, in most cases, use different methods to collect the requisite data from a sample.

The survey response rates of each of these methods vary as multiple factors like time, interest,
incentive, etc. play a role in the data collection process.

In the section above, we looked at survey methods based on design, such as cross-sectional and longitudinal studies. Here, we will look at the four main survey data collection methods based on their actual implementation. They are:

Online: Online surveys have become the most widely used survey data collection method. A wide variety of advanced and straightforward question types is available in online surveys, and data collection and analysis are structured and easy to manage. The response rate online is very high compared to other research methods.

Telephone: Telephone surveys are cheaper and less time consuming than face-to-face surveys. Contacting respondents via telephone requires less effort and fewer human resources. Still, the response rate is debatable, as respondents aren't very trusting when giving out information on a call. In this method, the researcher also has less scope to digress from the survey flow.

Face-to-face: Face-to-face surveys are one of the most widely used methods of survey data collection. The response rate in this method is always higher because the respondent trusts the researcher in person. The survey design is planned well in advance, but there is also scope to digress in order to collect in-depth data.

Paper or print: The least used survey data collection method, now employed mostly in field research, is the paper survey. Researchers and organizations are moving away from this method because paper surveys are logistically tough to manage and tough to analyze. They can, however, be used where laptops, computers, and tablets cannot go, relying on the age-old method of data collection: pen and paper.

6. Survey Data Analysis

When you conduct a survey, you must have access to its analytics. While manual surveys based on pen and paper or Excel sheets require an additional workforce of experienced data analysts, analysis becomes much simpler when using an online survey platform.

Statistical analysis can be conducted on survey data to make sense of all the data collected. There are multiple methods of survey data analysis, mostly for quantitative data. The most commonly used types are:

Cross-tabulation is one of the most straightforward statistical analysis tools, using a basic tabulation framework to make sense of data. Raw survey data can be daunting, but structuring that data into a table helps draw parallels between different research parameters. It works with variables whose categories are mutually exclusive.
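
A cross-tabulation can be built with nothing more than a counter over response pairs. The sketch below uses hypothetical region/channel responses:

```python
from collections import Counter

# Hypothetical responses: (region, preferred_channel) per respondent.
responses = [
    ("north", "email"), ("north", "sms"), ("south", "email"),
    ("south", "email"), ("north", "email"), ("south", "sms"),
]

# Cross-tabulate the two mutually exclusive variables into a row/column table.
counts = Counter(responses)
rows = sorted({r for r, _ in responses})
cols = sorted({c for _, c in responses})

print("        " + "  ".join(f"{c:>6s}" for c in cols))
for r in rows:
    print(f"{r:8s}" + "  ".join(f"{counts[(r, c)]:6d}" for c in cols))
```

Each cell counts respondents falling into exactly one row category and one column category, which is why the variables must be mutually exclusive.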

Trend analysis provides the ability to look at survey data over a long period. This method of statistical analysis plots aggregated response data over time, which helps draw conclusions about changes in respondent perception over time.
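
A simple way to expose such a trend is a moving average over aggregated scores per period; the monthly figures below are hypothetical:

```python
from statistics import mean

# Hypothetical monthly average satisfaction scores (1-5 scale).
monthly = {"Jan": 3.8, "Feb": 3.9, "Mar": 4.1, "Apr": 4.0, "May": 4.3, "Jun": 4.4}

# A 3-month moving average smooths month-to-month noise so the underlying
# trend in respondent perception is easier to read.
values = list(monthly.values())
trend = [round(mean(values[i:i + 3]), 2) for i in range(len(values) - 2)]
print(trend)  # [3.93, 4.0, 4.13, 4.23]
```

The steadily rising smoothed series is the kind of signal trend analysis is meant to surface from noisy per-period aggregates.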

MaxDiff analysis is a research technique that helps understand customer preferences across multiple parameters. For example, a product's pricing, features, and marketing can become the basis for MaxDiff analysis. In its simplest form, this method is also called the "best-worst" method. It is similar to conjoint analysis but much easier to implement.
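
In its simplest counting form, a MaxDiff score per item is the number of times it was picked as "best" minus the number of times it was picked as "worst" across tasks. A sketch with hypothetical task data:

```python
from collections import Counter

# Each hypothetical MaxDiff task shows a subset of items; the respondent
# marks one "best" and one "worst". Scores are best-minus-worst counts.
tasks = [
    {"best": "price", "worst": "packaging"},
    {"best": "features", "worst": "price"},
    {"best": "price", "worst": "support"},
    {"best": "features", "worst": "packaging"},
]

best = Counter(t["best"] for t in tasks)
worst = Counter(t["worst"] for t in tasks)
items = set(best) | set(worst)
scores = {i: best[i] - worst[i] for i in items}
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```

Real MaxDiff studies fit these preferences with statistical models over many respondents, but the best-minus-worst count captures the core idea.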

Conjoint analysis is an advanced statistical research method that aims to understand the choices a
person makes in selecting a product or service. This method offers in-depth insights into what is vital to
a customer and what parameters sway their purchasing decisions.

TURF analysis, or Total Unduplicated Reach and Frequency analysis, is a statistical research methodology that assesses the total market reach of a product, a service, or a mix of both. Organizations widely use this method to understand how frequently their messaging reaches the audience and whether it needs tweaking. TURF analysis is widely used to formulate and measure the success of go-to-market strategies.
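
The core TURF computation is a union over respondent sets: for each combination of messages, count the unduplicated respondents reached and pick the combination with the largest union. A small sketch with hypothetical reach data:

```python
from itertools import combinations

# Hypothetical data: which respondents (by id) each message reached.
reach = {
    "msg_a": {1, 2, 3, 4},
    "msg_b": {3, 4, 5},
    "msg_c": {5, 6},
}
total = 8  # respondents in the sample

def best_combo(k):
    """Combination of k messages with the largest unduplicated reach."""
    return max(combinations(reach, k),
               key=lambda combo: len(set.union(*(reach[m] for m in combo))))

combo = best_combo(2)
reached = set.union(*(reach[m] for m in combo))
print(combo, len(reached), f"{100 * len(reached) / total:.0f}%")
```

Note that msg_a and msg_c win even though msg_b reaches more people than msg_c: msg_b's audience overlaps with msg_a's, so it adds fewer *unduplicated* respondents.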

Gap analysis uses a side-by-side matrix question type to measure the difference between expected and actual performance. This statistical method for survey data helps identify what has to change to move from actual to planned performance.
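
The gap computation itself is a per-attribute subtraction of actual from expected ratings; the attributes and scores below are hypothetical:

```python
# Side-by-side ratings (1-5 scale): expected importance vs. actual performance.
ratings = {
    "delivery speed":  {"expected": 4.6, "actual": 3.2},
    "product quality": {"expected": 4.8, "actual": 4.5},
    "support":         {"expected": 4.2, "actual": 3.0},
}

# The gap (expected - actual) flags where to focus improvement first.
gaps = {k: round(v["expected"] - v["actual"], 1) for k, v in ratings.items()}
for attr, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{attr:16s} gap = {gap}")
```

Ranking by gap size turns the matrix question into a prioritized improvement list.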

SWOT analysis, another widely used statistical method, organizes survey data into the strengths, weaknesses, opportunities, and threats of an organization, product, or service, providing a holistic picture of the competition. This method helps create effective business strategies.

Text analysis is an advanced statistical method in which intelligent tools quantify or structure qualitative, open-ended data into easily understandable data. This method is applied to unstructured data.
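
In its most basic form, text analysis of open-ended answers can be a stop-word-filtered frequency count, which already surfaces recurring themes; the answers below are hypothetical:

```python
from collections import Counter
import re

# Hypothetical open-ended survey answers.
answers = [
    "Delivery was slow but support was helpful",
    "Slow delivery, great product",
    "Support team helpful and quick",
]

# Lowercase, tokenize, drop a few stop words, and count the rest.
stopwords = {"was", "but", "and", "the", "a"}
words = Counter(w for a in answers
                for w in re.findall(r"[a-z]+", a.lower())
                if w not in stopwords)
print(words.most_common(3))
```

Production text analysis adds stemming, phrase detection, and sentiment or topic models on top, but the frequency count is the usual starting point.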

Surveys remain the foundation of social science research but can be employed in almost any discipline,
including medical research. However, good survey research is harder than it looks. Anesthesiology
researchers use surveys to research behaviors, attitudes, and knowledge of both physicians and patients
or determine population characteristics, such as disease states, practices, or outcomes. Examples of
surveys include transfusion practices among American Society of Anesthesiologists (ASA) members,1
use of ultrasonography for regional anesthesia,2 and parental understanding of informed consent for
research.3 However, many journals are reticent about publishing survey research because of poor
quality.4–6 Some organizations, such as the Australian and New Zealand College of Anaesthetists and
the Society for Pediatric Anesthesia, have introduced formal vetting processes to improve the quality of
survey research and decrease respondent fatigue and burden.7,8

Several survey research errors (biases that divert from the truth) were seen in what is widely regarded
as the greatest survey disaster: the Literary Digest survey of 10 million Americans that incorrectly
predicted that Roosevelt would lose the 1936 Presidential election in a landslide when, in fact, the
absolute opposite occurred.9 Problems (errors) with this survey included an unrepresentative sample
(affluent Americans with phones), a low response rate (20%), and nonresponder bias (Roosevelt voters
tended not to respond). As we will discuss, all these errors can be avoided or at least minimized. As
Dillman et al.10 note, the entire survey process (from design to reporting) needs to be tailored to the
question asked, which in turn is the first step: ask a clear question.

In the absence of international consensus guidelines on conducting and reporting survey research, we
aim to discuss the elements of good survey research, outline some of the pitfalls, and introduce some
newer approaches to collecting and analyzing survey data. We provide pragmatic toolboxes for survey
researchers (table 1) and survey report readers (table 2) and suggest minimum standards for submitting
a survey (table 3).

Table 1. Toolbox for Survey Researchers

Table 2. Toolbox for Survey Readers

Table 3. Suggested Minimum Standards for Manuscript Submission

Design Considerations

The primary aim of any survey is to answer a good research question that is interesting for the broader
target population.4–6,10,11 A good, clear survey has further interrelated advantages: shorter, simpler
items that decrease the time to complete and enhance the response rate. Further, effective surveys
focus exclusively on “need to know” questions, not those that might be simply “nice to know.”5 The
aims of any survey should also be clearly stated in concrete terms, e.g., “To describe the current practice
patterns of Nordic anesthesia departments in anesthetic management of endovascular therapy in acute
ischemic stroke.”12

The choice of survey design will depend on the questions being asked, the population of interest, and
available resources.5,10 Each type of survey has advantages and disadvantages (table 4). The questions
(items) in a survey should reflect the objectives of the study.4,6,13 Whereas some surveys are designed
to simply measure knowledge, others measure constructs, practices, or behaviors. Thus, researchers
should consider the research goals when writing and formatting the questionnaire (instrument). In
general, surveys should be short, relevant, focused, interesting, easy to read, and complete. Surveys that
lack these attributes often suffer from poor response rates and decreased reliability.10,14

Table 4. Advantages and Disadvantages of Different Survey Methods10

When designing a survey, it is important to know your audience. Researchers and readers should put
themselves in the position of the intended respondents. How might they react to being approached and
how might they respond to the questions asked? Motivated participants, for example, may be more
willing to answer more detailed or probing questions. Questions should be written by using simple
language4 at a reading level commensurate with the literacy of the intended audience. In the United
States and most developed countries, surveys of the general population should be written at no more
than an eighth-grade reading level and avoid abbreviations, jargon, colloquialisms, acronyms, or
unfamiliar technical terms. Further, language and cultural differences may also be important
considerations, including tendencies to want to please, or conversely avoid, perceived authority figures
such as doctors. Surveys for professionals, such as physicians, can have more complex technical words,
but simple and clear structure and wording help everyone.
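
A reading-level check of this kind can be automated; the sketch below computes an approximate Flesch-Kincaid grade using a crude vowel-group syllable counter, so treat the result as a rough guide rather than a precise measure:

```python
import re

def syllables(word):
    """Crude syllable estimate: count runs of vowels (good enough for a rough check)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Flesch-Kincaid grade: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syl = sum(syllables(w) for w in words)
    return round(0.39 * len(words) / sentences + 11.8 * syl / len(words) - 15.59, 1)

# Short, plain questions score well below the eighth-grade threshold.
print(fk_grade("Do you like our new app? Please tell us why."))
```

For general-population surveys, questions scoring above grade 8 on such a check are candidates for rewording.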

Questions (items) validated in previous research should be used whenever possible.4,15 For new or
revised questions, Peterson16 developed a guide with the acronym BRUSO: brief, relevant,
unambiguous, specific, and objective.4 First, questions should be brief to reduce the length of the
survey. Questions should include complete sentences but not be long-winded. Questions should also be
relevant to the survey’s purpose and focus on “need to know” information. Questions that may not
appear intuitively relevant but are deemed necessary require a brief explanation about why the
questions are important. Questions must be unambiguous. For example, asking respondents how often
they check social media on a “typical work day” may mean different things to different people, i.e., what
is “typical?” Questions that evoke a double-negative require logical thinking and are often answered
incorrectly. Questions should also be specific so that the respondent is clear as to their intent; questions
should be unidimensional. For example, “Do you consider yourself an empathetic and sympathetic
person?” could evoke different responses because one can be sympathetic without being empathetic.
This example would be better split into two questions addressing sympathy and empathy separately.
Unless a primary focus of the study, demographic questions should be placed at the end of the survey
and kept to a minimum. Objective questions should not contain words that “nudge” the answer or
reveal the researchers’ beliefs or opinions.

The choice of questions and response options (scales) depends on the type and goals of the
survey.5,10,11 Interviews and certain types of written surveys are better served by open-ended
questions with responses that can be electronically recorded or manually transcribed. Online and postal
surveys typically employ closed-ended questions in which the respondent chooses a response from a
structured list of options. Both open and closed responses have advantages and disadvantages. Open-
ended responses allow the respondents to answer in their own words in a manner that reflects their
personal experiences or beliefs and are less likely to be influenced by the expectations of the
investigator.10,17–19 Open-ended questions are particularly helpful when the researchers are unclear
how respondents might respond and for developing new response options for closed-ended questions.
One example is “Under what circumstances would you cancel anesthesia for the child with an upper
respiratory tract infection?” The major disadvantages of open-ended questions are that responses can
be long, difficult to transcribe, and difficult to classify and may need experts to identify underlying
themes. Further, surveys with a lot of open-ended questions may have incomplete or missing answers
because of response fatigue.

Closed-ended (structured) questions differ from open-ended by providing a list of options to choose
from.5,10,20 Closed-ended questions are optimal for postal and online surveys because they provide
standardized responses, take less time to complete, and are easier to analyze. The major disadvantage
of closed-ended questions is that they can be more difficult to write5,10 because the response options
must be both exhaustive (include all important options) and mutually exclusive (each option should be
distinct). Including every possible option can result in excessively long lists of responses that increase
survey fatigue and nonresponse. One strategy to limit the number of responses while avoiding missing
important data is to include an “other” response with a clarifying “please describe/specify.” Further, for
all surveys, a final open question of “Any further comments?” allows respondents to freely comment on
both the topic and the survey itself.21

Including too many questions can result in satisficing, where respondents increasingly fail to carefully
consider the questions and subsequently provide answers that are not well thought out.10,19 SurveyMonkey (http://surveymonkey.com) reports, based on its own data,22 that respondents will spend an average
of 5 min to answer 10 questions in an online survey but only 10 min to answer 25 questions. This
suggests that as the number of questions increases, the time spent on each question decreases, i.e.,
satisficing. Further, if a survey takes 10 min to complete, data show that up to 20% of respondents will
abandon the survey before completing it.22 Respondents may also be more likely to abandon surveys
with compulsory questions, particularly if they do not include a “Don’t know” type option. Compulsory
questions should be minimized and, if used, should always include a “Don’t know,” “Not applicable,” or
“Don’t wish to answer” option.
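The arithmetic behind the SurveyMonkey figures quoted above is simple enough to sketch; the function name is ours, but the minutes and question counts are those cited in the text.

```python
# Per-question time implied by the SurveyMonkey figures cited above:
# 5 min for 10 questions versus 10 min for 25 questions.
def seconds_per_question(total_minutes, n_questions):
    """Average seconds a respondent spends per question."""
    return total_minutes * 60 / n_questions

short_survey = seconds_per_question(5, 10)   # 30.0 s per question
long_survey = seconds_per_question(10, 25)   # 24.0 s per question
print(short_survey, long_survey)
```

The drop from 30 to 24 s per question is the satisficing effect described above: more questions, less thought per question.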

Response scales (fig. 1)5 are typically categorical/nominal (e.g., male/female, true/false); ordinal, in
which the responses are ordered (e.g., very anxious to very calm); or numerical (e.g., age, height).
Categorical and ordinal response options (table 5) typically take the form of Likert scales23 with
different levels of response, for example, “I preoxygenate patients before general anesthesia” could be
answered by using the following list formatted vertically:

Table 5.

Selecting Categorical and Ordinal Questions and Responses

Fig. 1.

Examples of open- and closed-format response options for questions (reproduced from Anaesthesia and
Intensive Care5 with the kind permission of the Australian Society of Anaesthetists).
□ Strongly disagree

□ Disagree

□ Neutral

□ Agree

□ Strongly agree

When formatting these scales, the endpoints should be mirror opposites, be balanced, be presented
from negative to positive, include equal intervals, and be presented as a vertical rather than horizontal
list. Vertical formatting is less subject to mistakes when responding and easier to code.

Depending on the degree of precision required, questions should offer three to seven responses, with
five probably optimal. Some survey researchers omit a “neutral” response option to force respondents
one way or another or because the researchers argue that a neutral option discourages respondents
from answering. Others argue that a neutral response provides a natural choice. Decisions regarding the
number of response options and inclusion/exclusion of a neutral response should be made during pilot
pretesting. Pilot testing with and without a neutral response option can provide a sense of whether
responses tend to cluster around a middle point.

Other survey formats include visual analog scales (e.g., visual analog pain scales24) that ask
respondents to either circle or electronically mark a number (typically 0 to 10) or a 100-mm scale to
indicate their level of response. Again, like pain scales, there should be descriptive anchors at each end
of the scale to provide context. Other types of scales include ranking scales, where the respondent ranks
a set of ideas or preferences; matrix scales, where the respondent evaluates one or more row items
using the same set of column choices; magnitude estimation scales; and factorial questions, in which a
vignette is presented that requires a judgement or decision-making response.

In addition to consideration of the types of questions and response scales, it is also important to
consider how questions transition from one to another. Skip or branch logic is a feature that routes the
participant to subsequent questions or page/sections based on their response to a particular question.
This is an important process that allows participants to avoid questions that do not apply to them, e.g.,
“If you replied ‘No’ to question 3, please skip to question 8.” The routes used in skip logic should be
thoroughly pretested before implementation. For readers, the easiest way to test the flow of questions
is to imagine answering the survey.
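Skip logic of the kind quoted above ("If you replied 'No' to question 3, please skip to question 8") can be sketched as a simple routing table. The question numbers and answers below are hypothetical.

```python
# Minimal sketch of skip/branch logic. A rule maps a (question, answer)
# pair to the next question; with no matching rule, the survey proceeds
# to the next question in order. Question numbers are hypothetical.
SKIP_RULES = {
    (3, "No"): 8,  # "If you replied 'No' to question 3, please skip to question 8."
}

def next_question(current, answer):
    """Return the next question number given the current question and its answer."""
    return SKIP_RULES.get((current, answer), current + 1)

print(next_question(3, "No"))   # routed past questions 4-7, to question 8
print(next_question(3, "Yes"))  # continues in order, to question 4
```

Pretesting skip logic then amounts to walking every (question, answer) route and confirming each respondent type sees only the questions that apply to them.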

Reliability and Validity

Not all surveys require formal reliability and validity testing, e.g., simple descriptive surveys (table 6).
However, for surveys that are designed to describe or measure constructs, e.g., pain, sleep quality,
altruism, empathy, it is critical to ensure that the items in the survey or instrument actually measure
what they are designed to measure. All survey measures, whether quantitative or qualitative, are
subject to error.25 These errors can either be due to random chance and/or errors in the survey itself:
measurement error.10 Measurement errors reflect the accuracy of the survey, i.e., do the questions
measure what they are supposed to measure (validity), and are they reproducible across individuals
over time (reliability)? The validity and reliability of questions can be quantified statistically, often by
strength of association with other metrics.26

Table 6.

Reliability and Validity Estimates

As with all research, the first step in survey research is to review the literature for existing surveys or
survey questions that have already been formally tested. It makes no sense to generate a new set of
questions as substitutes for ones that have already been validated. Therefore, it is preferable to use or
adapt existing questions or surveys that have demonstrated validity, with appropriate acknowledgment
or citation. Although some reliability/validity testing may still be required, the burden of formal testing
of new questions is greatly reduced.

For developing de novo survey questionnaires (instruments), Sullivan26 provides sage advice:
“Researchers who create novel assessment instruments need to state the development process,
reliability measures, pilot results, and any other information that may lend credibility to the use of
homegrown instruments. Transparency enhances credibility.” Readers should look for these points in
novel surveys and whether the validity of previously reported items/surveys has been demonstrated.

Reliability

Reliability is the degree a measurement yields the same results over repeated trials or under different
circumstances.25–27 Test–retest reliability reflects the stability of the survey instrument and can be
measured by having the same group of respondents complete the identical survey at two points in time.
Surveys with good test–retest reliability typically have little variance between the two sets of data.
Interrater reliability refers to how two or more respondents respond to the same questions and
intraobserver reliability refers to the stability of responses over time in the same individual.

Similar question wording or order is subject to “practice” effects that can be overcome by rewording a
question or reordering the responses. Questions with similar responses regardless of wording or order
are said to have good alternate-form reliability.

Because not all traits or behaviors are observable or can be measured by a single question, researchers
often use several questions to describe the same behavior or trait of interest (constructs). Internal
consistency reliability is the degree to which these questions vary together as a group, i.e., the degree to
which these different questions consistently measure the same construct. For example, because
depression is hard to measure by using a single question (Are you depressed?), researchers employ
several different questions that address different but related aspects of depression, e.g., fatigue, trouble
concentrating.
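Internal consistency reliability is commonly quantified with Cronbach's alpha (a standard statistic, though not named in the text above). A pure-Python sketch, with invented item scores, shows the idea: alpha is high when the items vary together.

```python
# Cronbach's alpha for internal consistency. The respondent scores below
# are invented: five respondents answering three related items (1-5 scale).
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    """rows: one list of item scores per respondent."""
    k = len(rows[0])                    # number of items
    items = list(zip(*rows))            # one tuple of scores per item
    item_var_sum = sum(variance(list(col)) for col in items)
    total_var = variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_var_sum / total_var)

scores = [[4, 5, 4], [2, 2, 3], [5, 5, 5], [3, 3, 3], [1, 2, 1]]
print(round(cronbach_alpha(scores), 2))  # close to 1: items measure one construct
```

Values around 0.7 or higher are conventionally taken to indicate acceptable internal consistency, although very high values can also signal redundant items.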

Validity

Validity measures the degree to which questions in a survey measure what they are intended to
measure.25–27 For example, questions designed to measure pain should measure pain and not
something else, such as anxiety. Although some validity metrics are relatively easy to measure, some are
more complex. Two types of validity that are easy to measure are face and content validity. Face validity
refers to how the questions appear (on “face value”) to individuals with little expertise in the survey
topic. Although face validity is a somewhat casual assessment, it nonetheless reassures the investigator
that the questions will make sense at a layperson’s level. Content validity, on the other hand, requires
input from content experts. Neither face nor content validity is statistically quantifiable, yet both can
provide important information to ensure that questions are relevant. For example, a survey of pain
techniques by anesthesiologists might benefit from pretesting with a small group of surgeons (face
validity) and pain medicine specialists (content validity). The value of expert consultation before
implementing any survey cannot be overstated.

Construct validity is harder to conceptualize but is a measure of the degree to which survey questions,
when applied in practice, reflect the true theoretical meaning of the concept. Construct validity is
typically established over years of use in different settings and populations. Although there is no simple
metric for construct validity, social scientists typically use other quantifiable measures, such as
comparing against an existing “gold standard.” This type of validity is termed concurrent criterion
validity.
Where there is no gold standard, construct validity can be established by measuring the degree to which
the questions in a survey correlate with other measures that should theoretically be associated with the
same construct (convergent validity). For example, to validate a new survey instrument to measure
sleep quality, it might be important to compare it with other measures of sleep quality (e.g., direct
observation). If convergent validity is established, a natural follow-up test would be to see whether the
same questions are able to discriminate between sleep quality and other related, but different,
measures such as sleep quantity. If these two measures do not correlate, we assume (if other validity
measures confirm) that they are measuring two separate constructs and that the sleep-quality questions
demonstrate good divergent or discriminant validity.
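Convergent and divergent validity, as described above, are usually expressed as correlations. The sketch below uses invented scores for the sleep-quality example: the new instrument should correlate strongly with direct observation of sleep quality (convergent) and weakly with sleep quantity (divergent).

```python
import math

# Convergent vs. divergent validity as correlations. All scores are
# invented for illustration (eight respondents, higher = more of the construct).
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

new_sleep_quality = [7, 5, 8, 3, 6, 4, 9, 2]  # scores on the new instrument
observed_quality = [6, 5, 9, 2, 7, 3, 8, 3]   # direct observation (same construct)
sleep_quantity = [8, 8, 5, 7, 4, 9, 6, 7]     # hours slept (different construct)

r_convergent = pearson_r(new_sleep_quality, observed_quality)  # high
r_divergent = pearson_r(new_sleep_quality, sleep_quantity)     # low
print(round(r_convergent, 2), round(r_divergent, 2))
```

A high convergent correlation together with a low divergent one supports the claim that the new questions measure sleep quality, and not something else.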

Ethics Review

Ethics committee or institutional review board approval is typically required before testing and
implementing any survey. The primary ethical concerns of surveys relate to content (e.g., could items be
psychologically damaging?) and how confidentiality will be maintained. Although surveys may not be
identifiable by the participant’s name, there are other sources of information, e.g., IP addresses and
email addresses, that could potentially link the survey with the participant. This is particularly important
when using third-party software services, e.g., Survey Monkey and Qualtrics. Investigators should thus
be aware of the security agreements of each company and assure participants that their information will
be maintained in a confidential manner, e.g., stored and maintained on password-protected computers
or cloud storage and/or how any identifying information will be delinked.

Pretesting (Piloting) the Survey

Although there is no such thing as a perfect survey, pretesting or pilot testing can significantly enhance
the effectiveness of any survey. Unfortunately, this step is often missing.6 Pretesting is typically
conducted in two phases. First, the research team reviews all aspects of the survey, i.e., the instructions,
the order and flow of questions, whether it contains skip or branch logic, how long the survey should
take to complete, and whether specific questions are ambiguous and/or are being consistently missed.
Second, the survey should be distributed among a small subset of the intended audience before it is
administered to the larger target group. This can be done somewhat informally but can also involve
structured focus groups followed by thorough debriefing. Even if previously validated surveys are used,
questions should be pretested because meaning can often be affected by the context of the survey. No
matter what the design, the piloted survey should be submitted as part of any manuscript, possibly as
an appendix.

Sampling

Precise estimates of large populations, up to millions of people, can be derived from survey samples of
fewer than 2,000 people.10,28,29 Thus, because it is not always practical to survey an entire
population, sampling provides an efficient way to collect data that, if done correctly, can be
representative of the population of interest. A representative sample should mirror the characteristics
of the broader population, ensuring generalizability and reducing the effect of sample bias. However,
although representativeness is a primary goal, the sampling approach will also depend on the type of
survey, the target population, inclusion of subgroups, and resources/cost.

Because a survey of the entire ASA membership (53,000 members in 2016) might be impractical, an
option would be to generate a sample that is representative of important characteristics of the ASA
membership, such as sex, ethnicity, and training. This is best achieved by employing some type of simple
random sampling.17,29 However, there may be instances in which investigators may want to focus on a
subgroup or oversample groups that are underrepresented, e.g., rural practitioners. In these cases,
stratified sampling can be employed in which random samples are drawn from each subgroup or strata,
e.g., ASA membership by geography. In cases in which there may be underrepresentation of certain
groups, other methods such as oversampling should be employed.

Sample-size Estimates

Sample-size estimates should be based on the primary question29 and large enough to be confident
(usually 95 or 99%) that results from the entire population will lie within the desired margin of error of
the sample (fig. 2).29 Typically, the maximum acceptable margin of error for a proportion (percentage)
of the population is set around ±5% (most political polls quote 3 to 5%). That is, if the margin of error is
±5%, and 25% of respondents from a sample of 325 ASA members reply that they use thiopental for
induction, the 95% CI (the range that would contain the results of 95 of 100 repeated samples) shows that between 20
and 30% of ASA members use thiopental. Small samples typically produce wider CIs. By using the same
example, a sample of 30 ASA members that produces a margin of error of ±20% would result in a 95%
confidence estimate that 5 to 45% of ASA members use thiopental: an unhelpful estimate ranging from
very few to almost half. However, although increasing the sample size to reduce the margin of error
increases the precision of the data, there is an effect of diminishing returns (fig. 2). As the margins of
error are tightened to less than 4%, the number of participants required increases disproportionally.
This is important when balancing precision with the practicality, availability of resources, and costs of
surveying large numbers of subjects.
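The sample sizes behind fig. 2 follow the standard formula for estimating a proportion, n₀ = z²p(1 − p)/e², with a finite population correction when the population size is known. The sketch below is our own, using the most conservative assumption p = 0.5; the exact figures in fig. 2 may use slightly different conventions.

```python
import math

def sample_size(margin, z=1.96, p=0.5, population=None):
    """Completed responses needed for a given margin of error on a proportion.

    n0 = z^2 * p * (1 - p) / margin^2, with a finite population
    correction when the population size is supplied.
    """
    n0 = z ** 2 * p * (1 - p) / margin ** 2
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

# +/-5% margin at 95% confidence, conservative p = 0.5:
print(sample_size(0.05))                    # infinite population
print(sample_size(0.05, population=52905))  # ASA membership (finite correction)
print(sample_size(0.03, population=52905))  # tightening the margin costs far more
```

Note how a few hundred completed surveys suffice even for a membership of more than 50,000, and how shrinking the margin below ±4% inflates the requirement disproportionately, exactly the diminishing-returns effect described above.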

Fig. 2.

Effects of different planned margins of error (±%) and the 95% and 99% CIs on sample-size estimations
for a survey of the entire membership of the American Society of Anesthesiologists (N = 52,905). Note:
The actual required sample size will also be affected by the response rate.

For all investigators, we strongly advise working with a statistician for both planning and analysis. For
those with a background in statistics, there are several online resources28,31 and statistical packages
such as R (available free from R Foundation for Statistical Computing, Austria), STATA (StataCorp LLC,
USA), and SPSS (SPSS Statistics, IBM, USA).

Sample-size calculations can also be based on anticipated proportions, but when comparing groups, the
anticipated difference may be important. Notably, calculated sample sizes are for the number of
completed surveys. Although a response rate of more than 60% is considered good, less than 50% is
common. Recent surveys of anesthesiologists and anesthesia fellows, for example, reported 54 and 33%
response rates, respectively.32–36 A conservative approach, therefore, is to send the survey to
approximately two to three times the calculated sample size. In general, leading journals are unlikely to
publish a survey with a response of less than 30 to 40%, except in exceptional circumstances.
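The "send two to three times the calculated sample size" rule above can be made explicit by dividing the required number of completed surveys by the expected response rate. The target of 382 completions below is an illustrative figure of ours, not one from the text.

```python
import math

def invitations_needed(completed_target, expected_response_rate):
    """Surveys to send so the expected completions meet the calculated sample size."""
    return math.ceil(completed_target / expected_response_rate)

# e.g., 382 completed surveys needed, 40% response rate anticipated:
print(invitations_needed(382, 0.40))  # roughly 2.5x the target
print(invitations_needed(382, 0.50))
```

At the 33 to 54% response rates reported for recent anesthesia surveys, the multiplier of two to three falls out directly.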

Survey Bias

In addition to sampling bias, there are several important ways in which error can creep into survey
research.10,17

Researcher Bias

Just as observer bias can adversely affect results in a randomized trial, researcher bias (subtle or overt)
can affect the way questions are asked. Care must be taken to ensure that questions are objective and
that personal opinions do not bias framing questions, e.g., “Do you feel guilty about accepting a Do Not
Resuscitate order?”33,36 Interviews can evoke implicit personal bias by both the interviewer and the
interviewee. Researchers must avoid words that are potentially charged or could generate an emotional
response.10 An extreme version of biased questions is push polling, where the hidden purpose is to
drive opinions rather than ask questions,37 e.g., “For fluid resuscitation do you use Dodgy-sol, which is
both dangerous and expensive?” Readers should look for these biases in questions.

Nonresponse Bias

Along with precision, the response rate is a central metric of survey quality.6 Nonresponse is one of the
most frustrating aspects of all survey research, and physicians are among the worst offenders.38 Topics
that have widespread practice implications may enhance response rate, e.g., video laryngoscopes (67%
response)39 or the effect of fatigue in trainees (59% response).40 However, even well designed, hot-
topic studies suffer from nonresponse. Although some nonresponse is expected and acceptable, surveys
that have large nonresponse rates are subject to bias, particularly if the nonresponse is related to the
survey topic (outcome) or if the nonresponders differ substantively from responders. For example,
individuals who have experienced a bad or sensitive outcome may be less willing to report it (report
bias), and as such, the true outcome may be underreported. Although nonresponse is often simply a
function of a lack of respondent time, its impact is often survey-specific. For example, whereas a
response rate of 50% may be adequate for postal and online surveys, 85% would be considered
minimally adequate for interviews.41 In any case, it is important to determine whether the
nonrespondents differ substantively from respondents. The most pragmatic way to do this is to compare
the demographics of the responders with the known demographics of the target population. Another
way is to send a brief follow-up survey to the nonrespondents requesting basic demographics and the
reason(s) for nonresponse. Using this approach for a survey project, one of us (A.R.T.) found that
nonrespondents had similar characteristics to respondents and that most nonresponse was due to a lack
of participant time.42 This follow-up may be less appropriate with patient surveys. Because
nonresponse bias is an important limitation, it should always be discussed in any written publication.6

There are several tactics to improve response rates and mitigate the effects of nonresponse.43–45
Importantly, prenotification of the survey by email or postcard has increased response rates.43 A
professionally written cover letter that explains the importance of the study is also critically important
to pique interest. Techniques such as increasing “white space,” emphasizing important points with
bolding/underlining, and use of color tend to engender better response rates.15,43 For surveys dealing
with sensitive topics, response rates will be greater if the data are anonymized or if confidentiality is
assured.

For online surveys, there should be email reminders with opportunities to receive additional surveys or
access to the online survey link (maximum of three reminders/follow-up attempts).10 Often
researchers will provide small (noncoercive) incentives to encourage respondents to complete their
surveys, e.g., gift cards, money, or lottery tickets, but these need to be in the planned budget.46 Online
surveys typically have poorer response rates than postal surveys44,47 and may also be subject to a
“speed through” phenomenon, where respondents satisfice by rushing through the survey without due
thought. For some online surveys, it is, however, possible to measure the time taken to complete the
survey. If this time is deemed too quick based on pilot-testing estimates, the results may be unreliable.
Online surveys are also limited to those individuals with online access and thus may evoke a selection
bias. Despite these concerns, however, online surveys are supplanting traditional postal surveys.
Although response rates can be lower than with postal surveys,44,47 online surveys also tend to be
quicker and cheaper to administer and reach larger or dispersed audiences.

In addition to the issues posed by total nonresponse, problems can also occur when participants choose
not to answer certain questions (item nonresponse). Typically, missing values are automatically
excluded from the analysis and do not pose a problem. However, if the percentage of missing responses
is high, e.g., more than 20%, the investigator may choose to correct for this by imputing the missing
data. In any case, it is important that missing data are reported to allow the reader to estimate the
potential impact of the item nonresponse.

Recall Bias
Recall bias refers to error associated with respondents being unable to adequately recall past events. To
minimize recall bias, questions should be framed in time periods calibrated for the events, e.g., “difficult
intubations in the last 3 months.”

Self-report Bias

Often called social desirability bias, this type of bias refers to the tendency for individuals to downplay
negative attributes. Asking parents whether they smoke in the house, for example, is likely to be
underreported because parents often know that second-hand smoke is inherently bad for their children.
Assuring that responses are either anonymized or that confidentiality will be honored will typically
reduce the potential for self-report bias.

Analysis

Analysis of survey data should be based on a predefined endpoint and will depend on the type of data
collected and the question(s) asked.17 Most quantitative survey research involves descriptive
frequency data involving proportions and measurements of central tendency, e.g., means and medians,
and variability, e.g., SD and range. Comparisons between groups will again depend on the type of data
collected, i.e., continuous data versus categorical data. For these data, simple statistics, such as
Student’s t tests, ANOVA, and the chi-square test, can be used, as appropriate. Analyzing categorical
data, such as Likert scales, can present challenges. For example, imagine a five-point Likert scale of
“extremely dissatisfied,” “dissatisfied,” “neither dissatisfied nor satisfied,” “satisfied,” and “extremely
satisfied” used to test the attitude of 1,000 Australian anesthetists to a new laryngoscope: the Bonza-
Scope. The proportions giving each response could be stated and compared by using the chi-square test.
Another option (with greater statistical power) is to combine “extremely dissatisfied” with “dissatisfied”
and “satisfied” with “extremely satisfied.” The summed results could thus be that 60% were satisfied,
10% neutral, and 30% dissatisfied with the Bonza-Scope. A simple analysis would be to just compare the
proportion who are satisfied with the proportion who are dissatisfied. This provides “headline”
statistics, e.g., “In a survey of 1,000 Australian anesthetists, 60% were satisfied with the Bonza-Scope,
whereas 30% were dissatisfied (difference 30%, 95% CI: 26 to 34%, P = 0.002).”
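The headline CI quoted above can be approximated with a standard difference-of-two-proportions calculation. Treating the satisfied and dissatisfied proportions as independent is a simplifying assumption (responses from one sample are in fact correlated), but it reproduces the quoted interval.

```python
import math

def proportion_difference_ci(p1, p2, n, z=1.96):
    """Approximate 95% CI for the difference of two proportions from samples
    of size n, treating the proportions as independent (an assumption)."""
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    return diff, diff - z * se, diff + z * se

# 60% satisfied vs. 30% dissatisfied among 1,000 anesthetists:
diff, lo, hi = proportion_difference_ci(0.60, 0.30, 1000)
print(f"difference {diff:.0%}, 95% CI: {lo:.0%} to {hi:.0%}")
```

With n = 1,000 the interval works out to roughly 26 to 34%, matching the headline statistic in the text.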

Another approach is to create dummy variables for categorical data.17,19 For example, data using the
same five-point Likert scale of “extremely dissatisfied” to “extremely satisfied” can also be coded from 1
to 5 (e.g., 1 = extremely dissatisfied, 5 = extremely satisfied). These are ordinal, not continuous, data,
and the intervals between the codes should not be assumed to be equal. Parametric statistics, including mean and SD
descriptive statistics, are therefore not appropriate. These data can be analyzed as numerical data by using
comparative statistics, such as the nonparametric Mann–Whitney U test, which examines rank and not
magnitude. In another Bonza-Scope research project, the attitudes of Australian anesthetists might be
compared with the attitudes of American anesthesiologists. If the Australian group had a median score
of 4 and the American group a median score of 3 on a Likert scale question for satisfaction, rather than
saying there is a median difference of 1, it is probably more meaningful to say Australians were more
satisfied than Americans (P < 0.005).
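The rank-based character of the Mann–Whitney U test can be shown in a short sketch. This computes the U statistic only (a p value requires the normal approximation or tables, and in practice a statistical package would be used); the Likert codes are invented.

```python
def average_ranks(values):
    """Map each value to its average rank (ties share the mean of their ranks)."""
    s = sorted(values)
    rank_of = {}
    i = 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        rank_of[s[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    return rank_of

def mann_whitney_u(a, b):
    """U statistic for group a vs. group b: rank, not magnitude, is compared."""
    rank_of = average_ranks(a + b)
    r1 = sum(rank_of[v] for v in a)
    return r1 - len(a) * (len(a) + 1) / 2

# Hypothetical satisfaction codes (1-5) for the two groups in the example:
australians = [5, 4, 4, 5, 3]
americans = [3, 2, 3, 2, 1]
print(mann_whitney_u(australians, americans))  # near n1*n2 = 25: strong separation
```

A U near the maximum of n₁ × n₂ (here 25) means nearly every Australian score outranks every American score, which is what "Australians were more satisfied" expresses without asserting a meaningful numeric difference between codes.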
With the advent of powerful desktop statistical programs, more complex statistical analysis can also be
applied to survey research, including logistic regression for analysis of predictive factors and factor
analyses that identify which individual questions or factors explain most of the variance in the
data.48,49 This process is important in identifying which factors in a survey are important and which
can be safely removed (data reduction). Again, collaborating with statisticians is likely to produce better
survey design and analysis.

Open-ended questions from both oral interviews and written surveys are analyzed to identify
themes.21,50 A theme is a patterned response within the survey data, e.g., repetitions, recurring topics.
For example, the question, “Under what circumstances would you cancel anesthesia for the child with
an upper respiratory tract infection?” is likely to evoke different responses that can be sorted into
themes. These themes might be related to patient, parent, anesthetic, or surgical factors. The
importance of a theme is typically determined by its prevalence or how many respondents articulated
that theme. Unfortunately, like many aspects of survey research, the importance and difficulty of
thematic analysis is often underestimated.10,21,50
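Once free-text answers have been coded into themes, ranking themes by prevalence is mechanical. The coded responses below are hypothetical labels for the cancellation question above; the hard, often underestimated work is the coding itself, not the counting.

```python
from collections import Counter

# Hypothetical theme codes already assigned by analysts to free-text
# answers about cancelling anesthesia for a child with a URTI.
coded_themes = [
    "patient factors", "parent factors", "patient factors",
    "surgical factors", "patient factors", "anesthetic factors",
    "parent factors",
]

theme_counts = Counter(coded_themes)
for theme, n in theme_counts.most_common():
    print(theme, n)  # prevalence orders the themes
```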

Mixed Methods

Mixed-methods research (table 7) represents a relatively new approach to analyzing survey data.10
Although most clinical survey data are primarily quantitative, mixed methods allow researchers to
integrate both qualitative and quantitative data. By integrating both data types, mixed-methods
research provides richer information. Typically, mixed methods are used to corroborate results by using
other approaches, develop a theory about a phenomenon, complement the strengths and/or overcome
the weaknesses of a single design, or develop and test a new instrument.51

Table 7.

Mixed-methods Designs

The choice of mixed methods requires a systematic approach, including determining the sequence of
data collection, e.g., quantitative precedes or follows qualitative; identifying what method will take
priority during data collection and analysis; deciding what the integration of qualitative and quantitative
data stage might involve; and deciding whether a theoretical perspective will be used.51 The
advantages of mixed-methods designs are that they combine the strengths and diminish the weaknesses
of a single design, can provide a more comprehensive understanding of the questions asked, and may be
more effective in developing survey instruments. The disadvantages are that they can be complex, time-
consuming, and difficult to integrate and interpret. Continuing our example: Americans may be less
satisfied with the Bonza-Scope (quantitative, P < 0.005), with a qualitative theme of “The handle is too
big.” Again, we strongly recommend collaborating with a biostatistician or social scientist or both.

Conclusions

Poor methodologic quality of survey research is often a (negative) factor in decisions regarding
publication. Producing good-quality survey research is a complex process that is harder than it looks. We
hope that this article will provide investigators with useful tools (table 1) to successfully navigate the
survey process and publication and provide readers with useful points to judge survey research (table 2).
We also have provided a short list of suggested minimum standards (table 3) that we think can be a
threshold for submitting surveys to journals. Survey reports failing these minimums will have far less
likelihood of success, and submission to major journals will probably be futile. Therefore, the toolboxes
and minimum standards can be used by researchers, editors, and the component anesthesia societies
(e.g., Australian and New Zealand College of Anaesthetists, Society for Pediatric Anesthesia) to ensure
conduct, submission, and publication of high-quality surveys for informed readers.
Fundamentals of Research

Research Methods Versus Methodology: Research methods comprise all the techniques and procedures used to conduct research, whereas research methodology is the broader approach by which research problems are systematically solved. Methodology is the science of studying how research is conducted; through it, the researcher acquaints himself or herself with the various steps generally taken to study a research problem. Hence, the scientific approach adopted for conducting a piece of research is called its methodology.
Meaning of Research: The term research refers to the search for information and knowledge on a particular topic or subject. In other words, research is the art of systematic investigation. It is said that necessity is the mother of all invention, and a person engaged in such scientific investigation may be termed a researcher. Research is an academic activity, and the term should be used in a technical sense. According to Clifford Woody, research comprises defining and redefining problems; formulating hypotheses or suggested solutions; collecting, organizing and evaluating data; making deductions and reaching conclusions; and, at last, carefully testing the conclusions to determine whether they fit the formulated hypothesis.
Objectives of Research

The major aim of any type of research is to find out realities and facts which are unknown and have not yet been exposed. Although each research activity has its own particular purpose, the objectives of research can be grouped into the following categories:
1. To gain familiarity with a phenomenon or to achieve new insights into it (research with this objective is termed exploratory or formulative research);
2. To portray accurately the characteristics of a particular individual, situation or group (research with this objective is termed descriptive research);
3. To determine the frequency with which something occurs or with which it is associated with something else (research with this objective is known as diagnostic research);
4. To test a hypothesis of a causal relationship between variables (such research is termed hypothesis-testing research).

Types of Research

The basic types of research are as follows:

(i) Descriptive vs. Analytical: Descriptive research includes surveys and fact-finding enquiries of different kinds. The main purpose of descriptive research is description of the state of affairs as it exists at present. The term ex post facto research is often used for this type of research. The main characteristic of this method is that the researcher has no direct control over the variables; he can only report what is happening or what has happened. For example, if people on the south side of a city suffer from lung cancer more often than their north-side neighbours, and investigation reveals that south-side residents have wood-burning stoves and fireplaces, the researcher could hypothesize that wood smoke is a factor in lung cancer. The techniques used in descriptive research can be of all kinds: survey methods, comparative and correlational methods, etc.
In analytical research, on the other hand, the researcher uses facts, information and data that are already available, and analyzes them to make a critical evaluation of the material.

(ii) Applied vs. Fundamental: Applied research aims at finding a solution for a specific, practical problem facing an individual, a society, or an industrial or business organization, for example how to reduce hate crime, how to market a product, or what is causing increased poverty. Fundamental research, by contrast, is mainly concerned with generalizations and with the formulation of a theory. This is pure, basic research, for example an investigation into whether stress levels influence how often students engage in academic cheating, or how caffeine consumption affects the brain. Thus, the main aim of applied research is to find a solution to some pressing practical problem, whereas basic research is directed towards finding information that has a broad base of applications and adds to the already existing organized body of scientific knowledge.

(iii) Quantitative vs. Qualitative: In both the natural and the social sciences, quantitative research is based on the measurement of quantity or amount. It applies to phenomena that can be expressed in terms of quantity or counted. Such research involves systematic empirical analysis of observable phenomena via statistical, mathematical or computational techniques, yielding numerical results such as statistics and percentages. Qualitative research, on the other hand, is concerned with qualitative phenomena, i.e., phenomena relating to quality or kind. Such research is typically descriptive and harder to analyze than quantitative data; it involves looking in depth at non-numerical data and is more naturalistic or anthropological in character.

(iv) Conceptual vs. Empirical: Conceptual research is related to some abstract idea(s) or theory. It focuses on the concepts and theories that explain the phenomenon being studied, and is generally used by logicians, philosophers and theorists to develop new concepts or to reinterpret existing ones. Empirical research, on the other hand, relies on experience or observation alone. It is a way of gaining knowledge by means of direct or indirect observation or experience, and may also be called experimental research. In such research it is necessary first to get the facts and data at their source, and then to actively do certain things to stimulate the production of the desired information.

(v) Some Other Types of Research: Further types of research are variations of the above. From the point of view of time, research may be one-time or longitudinal: in the former the research is confined to a single time period, while in the latter it is carried on over several time periods. Research can be field-setting, laboratory or simulation research, depending on the environment in which it is carried out. Research may also be clinical or diagnostic; such research follows case-study methods or in-depth approaches to reach the basic causes behind a problem. Research may be exploratory or formalized: the objective of exploratory research is the development of hypotheses rather than their testing, whereas formalized research has substantial structure and specific hypotheses to be tested. Historical research makes use of historical sources such as documents, papers, leaflets and remains to study events or ideas of the past, including the philosophy of persons and groups at any point in time. Research can also be classified as conclusion-oriented or decision-oriented. In conclusion-oriented research the researcher is free to pick up a problem, redesign the enquiry as he proceeds, and conceptualize as he wishes; decision-oriented research is always carried out for the needs of a decision maker, and the researcher is not free to embark on research according to his own inclination.
CONTENT ANALYSIS IN QUALITATIVE
RESEARCH
WHAT IS CONTENT ANALYSIS?
1. Hsieh and Shannon (2005) defined qualitative content analysis as “a research method for the
subjective interpretation of the content of text data through the systematic classification process
of coding and identifying themes or patterns” (p. 1278).
2. According to Mayring (2000), qualitative content analysis is “an approach of
empirical, methodological controlled analysis of texts within their context of
communication, following content analytic rules and step-by-step models, without rash
quantification” (p. 23).
3. Qualitative content analysis allows researchers to understand social reality in a
subjective, yet scientific manner; to explore the meanings underlying physical messages;
and to work inductively, grounding the examination of topics and themes, as well as the
inferences drawn from them, in the data (Kaid, 1989; Patton, 2002; Zhang & Wildemuth, 2009).
CHARACTERISTICS OF CONTENT ANALYSIS
One unique characteristic of qualitative content analysis is the flexibility of using an inductive
or a deductive approach, or a combination of both, in data analysis.
An inductive approach is appropriate when prior knowledge regarding the phenomenon under
investigation is limited or fragmented (Elo & Kyngäs, 2008). In the inductive approach, codes,
categories, or themes are directly drawn from the data.
The deductive approach starts with preconceived codes or categories derived from prior
relevant theory, research, or literature. The deductive approach is appropriate when the objective of
the study is to test existing theory or retest existing data in a new context.
A second characteristic is the ability to extract both manifest and latent content meaning. Coding
manifest content means the researcher codes the visible, surface content of the text; coding latent
content means the researcher codes the underlying meaning of the text (Graneheim & Lundman, 2004).
ADVANTAGES AND DISADVANTAGES
Forman and Damschroder (2008) posited that the greatest advantage of qualitative
content analysis is that it is “a more hands-on approach to research than quantitative
content analysis” (p. 60).

McNamara (2006) maintained that qualitative content analysis relies heavily on
“researcher reading and interpretation of texts” (p. 5). This reliance is also a
disadvantage of qualitative content analysis, as it leaves the findings particularly
open to researcher bias.
QUALITATIVE AND QUANTITATIVE
1. Qualitative content analysis, compared against quantitative content analysis, is often
referred to as “latent level analysis, because it concerns a second-level, interpretative
analysis of the underlying deeper meaning of the data” (Dörnyei, 2007, p. 246); while
the latter is usually described as “manifest level analysis”, providing an objective and
descriptive overview of the “surface meaning of the data.”
2. The techniques of data sampling differ: the quantitative approach requires random
sampling or other probability techniques to ensure validity, while qualitative
analysis uses purposively chosen texts.
3. The products of the two approaches also differ: quantitative analysis yields
statistical methods and numerical results, while the qualitative approach yields descriptions.
MIXING OF BOTH
1. Mixing qualitative and quantitative methods is known as one of the ways of using
triangulation, which, according to Flick (2010, p. 405), is “used as a strategy of
improving the quality of qualitative research …”.
2. Despite these differences, numerous scholars have highlighted that, in
research practice, the two approaches are often applied in combination (Dörnyei,
2007; Flick, 2007; Zhang & Wildemuth, 2009).
WHY TO USE CONTENT ANALYSIS?
1. Researchers use qualitative content analysis to illustrate the range of meanings of
phenomena, describe the characteristics of message content, and identify themes
or categories within a body of text.
2. Bryman (2008) maintained that qualitative content analysis comprises a searching
out of underlying themes in the texts being analyzed by researchers.
3. Researchers who intend to better explain the characteristics of message
content, or to understand phenomena, therefore need a thorough knowledge of
qualitative content analysis.
THREE APPROACHES TO CONTENT
ANALYSIS
CONVENTIONAL CONTENT ANALYSIS

• Conventional content analysis is generally used with a study design whose aim is to describe a
phenomenon.
• This type of design is usually appropriate when existing theory or research literature on a phenomenon is
limited. Researchers avoid using preconceived categories (Kondracki & Wellman, 2002), instead allowing the
categories and names for categories to flow from the data.

• Researchers immerse themselves in the data to allow new insights to emerge (Kondracki & Wellman,
2002).

• With a conventional approach to content analysis, relevant theories or other research findings are addressed
in the discussion section of the study. The discussion would include a summary of how the findings from the study
contribute to knowledge in the area of interest and suggestions for practice, teaching, and future research.

• The advantage of the conventional approach to content analysis is gaining direct information from
study participants without imposing preconceived categories or theoretical perspectives.

• One challenge of this type of analysis is failing to develop a complete understanding of the context, thus
failing to identify key categories. This can result in findings that do not accurately represent the data.
• Note: Many qualitative methods share this initial approach to study design and analysis.
DIRECTED CONTENT ANALYSIS
• The goal of a directed approach to content analysis is to validate or extend conceptually a theoretical framework
or theory. Existing theory or research can help focus the research question. It can provide predictions about the
variables of interest or about the relationships among variables, thus helping to determine the initial coding
scheme or relationships between codes.

• Using existing theory or prior research, researchers begin by identifying key concepts or variables as initial coding
categories (Potter & Levine- Donnerstein, 1999). Operational definitions for each category are determined
using the theory.

• An alternative strategy in directed content analysis is to begin coding immediately with the
predetermined codes.

• The main strength of a directed approach to content analysis is that existing theory can be supported and
extended.

• Disadvantages
• Researchers might be more likely to find evidence that is supportive rather than non-supportive of a theory.
• Second, in answering the probe questions, some participants might get cues to answer in a certain way or agree
with the questions to please researchers.
• Third, an overemphasis on the theory can blind researchers to contextual aspects of the phenomenon.
SUMMATIVE CONTENT ANALYSIS
• A study using a summative approach to qualitative content analysis starts with identifying and quantifying certain words
or content in text with the purpose of understanding the contextual use of the words or content.

• A summative approach to qualitative content analysis goes beyond mere word counts to include latent content analysis.
Latent content analysis refers to the process of interpretation of content (Holsti, 1969).

• In this analysis, the focus is on discovering the underlying meanings of the words or the content (Babbie, 1992).
Researchers report using content analysis from this approach in studies that analyze manuscript types in a
particular journal or specific content in textbooks.

• In a summative approach to qualitative content analysis, data analysis begins with searches for occurrences of the
identified words by hand or by computer. Word frequency counts for each identified term are calculated, with source or
speaker also identified. It allows for interpretation of the context associated with the use of the word or phrase.
Researchers try to explore word usage or discover the range of meanings that a word can have in normal use.

• ADVANTAGES: It is an unobtrusive and nonreactive way to study the phenomenon of interest (Babbie, 1992). It
can provide basic insights into how words are actually used.

• DISADVANTAGES: The findings from this approach are limited by their inattention to the broader meanings present in the
data, and the usefulness of such a study depends on demonstrating the credibility of the analysis.
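The word-frequency step of a summative analysis can be sketched in Python; the text and the identified terms below are invented purely for illustration:

```python
import re
from collections import Counter

text = """The scope handle is too big. A bigger handle is not always
better, but the handle grip matters to every user."""

# Count occurrences of each pre-identified term (a toy illustration;
# in practice the counts would be broken down by source or speaker).
terms = {"handle", "grip", "scope"}
words = re.findall(r"[a-z']+", text.lower())
counts = Counter(w for w in words if w in terms)

print(counts)  # handle: 3, scope: 1, grip: 1
```

The frequency counts are only the starting point; the analyst would then return to each occurrence in context to interpret how the word is actually used.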
METHODOLOGY
All approaches to qualitative content analysis require a similar analytical process of seven
classic steps: formulating the research questions to be answered, selecting the sample
to be analyzed, defining the categories to be applied, outlining the coding process and the
coder training, implementing the coding process, determining trustworthiness, and analyzing
the results of the coding process (Kaid, 1989).

Different research purposes require different research designs and analysis techniques (Knafl &
Howard, 1984). The question of whether a study needs to use a conventional, directed, or
summative approach to content analysis can be answered by matching the specific research
purpose and the state of science in the area of interest with the appropriate analysis technique.
VALIDITY
Validity may be addressed in terms of correspondence and generalizability.
•Correspondence refers to the agreement between two sets of measurement procedures for a
particular construct or concept.
•Generalizability refers to the extent to which the results are consistent with existing
theory or predictive of associated events.
Several specific forms of validity are relevant:
•Face validity: the most common form of validity, and the weakest, because it relies on
subjective rather than objective, quantitative methods of evaluation.
•Construct validity: refers to the extent to which a measure either corresponds to, or is
discriminant from, related measures or constructs.
•Hypothesis validity: refers to the correspondence between the categorization procedure
and existing theories.
•Predictive validity: refers to the extent to which the measurement forecasts future events.
•Semantic validity: refers to the examination of the text by persons who are familiar with
the content, and to the extent of their agreement on the categorization procedure.
RELIABILITY
•Reliability here refers to replicability or consistency in the coding or interpretation of
content or portions of content. Reliability issues in content analysis are associated with
the ambiguity of word meanings or coding rules.
•Three types of reliability are relevant to content analysis:
•Stability refers to the extent to which content classification is invariant over time. Stability
can be ascertained when the same content is coded more than once by the same coder. It is a
relatively weak form of reliability.
•Reproducibility (inter-coder reliability) refers to the extent to which content classification
produces the same results when the same text is coded by more than one coder. High
reproducibility is the minimum standard for content analysis.
•Accuracy, the strongest form of reliability, refers to the extent to which the classification
of text corresponds to a particular standard or norm.
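As an illustration of checking reproducibility, a minimal Python sketch of percent agreement between two coders; the segments and category labels below are hypothetical, and a chance-corrected index such as Cohen's kappa is usually preferred when reporting inter-coder reliability:

```python
def percent_agreement(coder_a, coder_b):
    """Simple percent agreement between two coders over the same units."""
    if len(coder_a) != len(coder_b):
        raise ValueError("both coders must rate the same units")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Two coders categorizing the same ten text segments (hypothetical data):
a = ["pos", "neg", "neu", "pos", "pos", "neg", "neu", "pos", "neg", "pos"]
b = ["pos", "neg", "pos", "pos", "pos", "neg", "neu", "neg", "neg", "pos"]
print(percent_agreement(a, b))  # 0.8
```

Percent agreement does not correct for agreement expected by chance, which is why kappa-type statistics are the usual publication standard.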
READINGS AND REFERENCES
1. Hsieh, Hsiu-Fang, & Shannon, Sarah E. (2005). Three approaches to qualitative content analysis.
Qualitative Health Research, 15(9), 1277-1288.
2. Sandorova, Zuzana. (2014). Content analysis as a research method in investigating the cultural
components in foreign language textbooks. Journal of Language and Cultural Education, 95-123.
3. http://www.utsc.utoronto.ca/~kmacd/IDSC10/Readings/Readings/text%20analysis/CA.pdf
4. http://www.zoltandornyei.co.uk/uploads/2012-dornyei-csizer-rmsla.pdf
5. http://www.paxamerica.org/2012/09/01/qualitative-content-analysis-in-social-research-an-epigrammatic-summation-of-presidential-state-of-the-union-addresses/
SAMPLING
TECHNIQUES
Definition

■ Sampling is a technique of selecting individual members or


a subset of the population to make statistical inferences
from them and estimate characteristics of the whole
population.

■ For example, if a drug manufacturer would like to research the adverse side effects
of a COVID vaccine on the country’s population, it is almost impossible to
conduct a study that involves everyone. In this case, the researcher selects a
sample of people from each demographic and studies them, giving him/her
indicative feedback on the vaccine’s effects.
Population vs sample

■ The population is the entire group that you want to draw


conclusions about.
■ The sample is the specific group of individuals that you will
collect data from.
Sampling frame / Sample size

Sampling frame

The sampling frame is the actual list of individuals that the sample will be drawn from.
Ideally, it should include the entire target population (and nobody who is not part of that
population).

Example

■ You are doing research on working conditions at Company X. Your population is all 1000
employees of the company. Your sampling frame is the company’s HR database which
lists the names and contact details of every employee.
Sampling…

■ Sample size

■ The number of individuals in your sample depends on the size of the population,
and on how precisely you want the results to represent the population as a whole.

■ You can use a sample size calculator to determine how big your sample should be.
In general, the larger the sample size, the more accurately and confidently you can
make inferences about the whole population.
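As one illustration of the size/precision trade-off, a common sample-size formula (Cochran's, with a finite-population correction) can be sketched in Python; the confidence level and margin of error below are assumptions chosen for the example, not values from the text:

```python
import math

def sample_size(population, z=1.96, margin_of_error=0.05, p=0.5):
    """Cochran's sample-size formula with finite-population correction.
    z=1.96 corresponds to 95% confidence; p=0.5 is the most
    conservative assumption about the population proportion."""
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# For a hypothetical 1000-person population at 95% confidence, 5% margin:
print(sample_size(1000))  # 278
```

Note how the required sample grows with the population but levels off: very large populations still need only a few hundred respondents at this confidence level and margin.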
Types of sampling: sampling methods
Probability sampling methods

■ Probability sampling means that every member of the population has a chance of
being selected.

■ It is mainly used in quantitative research. If you want to produce results that are
representative of the whole population, you need to use a probability sampling
technique.
There are four main types of probability sample:
Simple random sampling

■ In a simple random sample, every member of the population has an equal chance of
being selected. Your sampling frame should include the whole population.

■ To conduct this type of sampling, you can use tools like random number generators or
other techniques that are based entirely on chance.

Example
■ You want to select a simple random sample of 100 employees of Company X. You assign
a number to every employee in the company database from 1 to 1000, and use a
random number generator to select 100 numbers.
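The selection described above can be sketched in Python, assuming the hypothetical 1,000-employee roster from the example:

```python
import random

# Hypothetical roster: employees numbered 1..1000 in the company database.
population = list(range(1, 1001))

# 100 distinct members, each with an equal chance of selection.
sample = random.sample(population, 100)

assert len(set(sample)) == 100  # no employee is selected twice
```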
Systematic sampling

■ Systematic sampling is similar to simple random sampling, but it is usually slightly


easier to conduct. Every member of the population is listed with a number, but
instead of randomly generating numbers, individuals are chosen at regular intervals.

Example
■ All employees of the company are listed in alphabetical order. From the first 10
numbers, you randomly select a starting point: number 6. From number 6 onwards,
every 10th person on the list is selected (6, 16, 26, 36, and so on), and you end up
with a sample of 100 people.
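A Python sketch of the same procedure, again assuming a hypothetical list of 1,000 employees:

```python
import random

population = list(range(1, 1001))   # employees listed in alphabetical order
k = len(population) // 100          # sampling interval: every 10th person

start = random.randrange(k)         # random starting point within the first interval
sample = population[start::k]       # e.g. 6, 16, 26, ... when the start is number 6

assert len(sample) == 100
```

Because the list slice steps by a fixed interval, the method is only safe when the ordering of the list is unrelated to the characteristic being studied; a hidden periodic pattern in the list would bias the sample.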
Stratified sampling

■ Stratified sampling involves dividing the population into subpopulations that may
differ in important ways. It allows you to draw more precise conclusions by ensuring
that every subgroup is properly represented in the sample.

■ To use this sampling method, you divide the population into subgroups (called
strata) based on the relevant characteristic (e.g. gender, age range, income bracket,
job role).

■ Based on the overall proportions of the population, you calculate how many people
should be sampled from each subgroup. Then you use random or systematic
sampling to select a sample from each subgroup.
Stratified sampling cont….

Example
■ The company has 800 female employees and 200 male employees. You want to
ensure that the sample reflects the gender balance of the company, so you sort the
population into two strata based on gender. Then you use random sampling on each
group, selecting 80 women and 20 men, which gives you a representative sample of
100 people.
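A minimal Python sketch of proportional stratified sampling for this hypothetical 800/200 roster; note that with uneven proportions, rounding the per-stratum shares can make the total deviate slightly from n:

```python
import random
from collections import Counter

# Hypothetical roster: 800 female ("F") and 200 male ("M") employees.
employees = [("F", i) for i in range(800)] + [("M", i) for i in range(200)]

def stratified_sample(population, key, n):
    # Group the population into strata by the chosen characteristic.
    strata = {}
    for member in population:
        strata.setdefault(key(member), []).append(member)
    # Sample from each stratum in proportion to its size.
    sample = []
    for members in strata.values():
        share = round(n * len(members) / len(population))
        sample.extend(random.sample(members, share))
    return sample

sample = stratified_sample(employees, key=lambda e: e[0], n=100)
print(Counter(e[0] for e in sample))  # 80 women, 20 men
```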
Cluster sampling

■ Cluster sampling also involves dividing the population into subgroups, but each
subgroup should have similar characteristics to the whole sample.
■ Instead of sampling individuals from each subgroup, you randomly select entire
subgroups.

■ If it is practically possible, you might include every individual from each sampled
cluster.
■ If the clusters themselves are large, you can also sample individuals from within
each cluster using one of the techniques above.
Cluster sampling cont….

■ This method is good for dealing with large and dispersed populations, but there is
more risk of error in the sample, as there could be substantial differences between
clusters.
■ It’s difficult to guarantee that the sampled clusters are really representative of the
whole population.

Example
■ The company has offices in 10 cities across the country (all with roughly the same
number of employees in similar roles). You don’t have the capacity to travel to every
office to collect your data, so you use random sampling to select 3 offices – these
are your clusters.
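A Python sketch of this design, using a hypothetical roster of 10 equally sized offices:

```python
import random

# Hypothetical: 10 city offices, each with 50 employees in similar roles.
offices = {f"city_{i}": [f"city_{i}_emp_{j}" for j in range(50)]
           for i in range(10)}

chosen = random.sample(sorted(offices), 3)  # randomly select 3 clusters
sample = [emp for office in chosen for emp in offices[office]]

# Every individual in each sampled cluster is included.
assert len(sample) == 3 * 50
```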
Non-probability sampling methods

■ In a non-probability sample, individuals are selected based on non-random criteria,


and not every individual has a chance of being included.

■ This type of sample is easier and cheaper to access, but it has a higher risk of
sampling bias, and you can’t use it to make valid statistical inferences about the
whole population.

■ Non-probability sampling techniques are often appropriate for exploratory and


qualitative research. In these types of research, the aim is not to test a hypothesis
about a broad population, but to develop an initial understanding of a small or
under-researched population.
Convenience sampling

■ A convenience sample simply includes the individuals who happen to be most accessible
to the researcher.

■ This is an easy and inexpensive way to gather initial data, but there is no way to tell if the
sample is representative of the population, so it can’t produce generalizable results.

Example
■ You are researching opinions about student support services in your university, so after
each of your classes, you ask your fellow students to complete a survey on the topic. This
is a convenient way to gather data, but as you only surveyed students taking the same
classes as you at the same level, the sample is not representative of all the students at
your university.
Voluntary response sampling

■ Similar to a convenience sample, a voluntary response sample is mainly based on ease of access.
Instead of the researcher choosing participants and directly contacting them, people volunteer
themselves (e.g. by responding to a public online survey).

■ Voluntary response samples are always at least somewhat biased, as some people will inherently
be more likely to volunteer than others.

Example
■ You send out the survey to all students at your university and a lot of students decide to complete
it. This can certainly give you some insight into the topic, but the people who responded are more
likely to be those who have strong opinions about the student support services, so you can’t be
sure that their opinions are representative of all students.
Purposive sampling

■ This type of sampling involves the researcher using their judgement to select a sample
that is most useful to the purposes of the research.

■ It is often used in qualitative research, where the researcher wants to gain detailed
knowledge about a specific phenomenon rather than make statistical inferences. An
effective purposive sample must have clear criteria and rationale for inclusion.

Example
■ You want to know more about the opinions and experiences of disabled students at your
university, so you purposefully select a number of students with different support needs
in order to gather a varied range of data on their experiences with student services.
Snowball sampling

■ If the population is hard to access, snowball sampling can be used to recruit


participants via other participants.
■ The number of people you have access to “snowballs” as you get in contact with
more people.

Example
■ You are researching experiences of homelessness in your city. Since there is no list
of all homeless people in the city, probability sampling isn’t possible. You meet one
person who agrees to participate in the research, and she puts you in contact with
other homeless people that she knows in the area.
MEANING AND
CHARACTERISTICS OF
RESEARCH
Research

It is the systematic study of a trend or event
which involves careful collection, presentation,
analysis and interpretation of quantitative
data or facts that relates man’s thinking with
reality.
Characteristics of Research

1. Empirical – research is based on direct experience or


observation by the researcher.
2. Logical – research is based on valid procedures and
principles.
3. Cyclical – research starts with a problem and ends
with a problem.
4. Analytical – research utilizes proven analytical


procedures in gathering data, whether historical,
descriptive, experimental, and case study.
5. Critical – research exhibits careful and precise
judgment.
6. Methodical – research is conducted in a methodical


manner without bias using systematic method and
procedures.
7. Replicability – research design and procedures are
repeated to enable the researcher to arrive at valid and
conclusive results.
TYPES OF RESEARCH

1. Basic Research – It seeks to discover basic truths or
principles. It is intended to add to the body of scientific
knowledge by exploring the unknown to extend the
boundaries of knowledge, to discover new facts, and to
learn more accurately the characteristics of the known,
without any particular thought as to immediate practical
utility.
TYPES OF RESEARCH

2. Applied Research – involves seeking new applications of
scientific knowledge to the solution of a problem, such as the
development of a new system or procedure, new device, or new
method, in order to solve the problem. It produces knowledge
of practical use to man.

TYPES OF RESEARCH

3. Developmental Research – this is a decision-oriented
research involving the application of the steps of the
scientific method in response to an immediate need to
improve existing practices. Here the researcher continues
to find practical applications of theoretical knowledge and
uses this existing knowledge to produce useful products.
RESEARCH METHODOLOGY I
• It is an Art of Scientific Investigation
• According to Redman and Mory, Research is a
“Systematized effort to gain new knowledge”
• Research is an original addition to the available
knowledge, which contributes to it’s further
advancement
• In sum, Research is the search for knowledge,
using objective and systematic methods to find
solution to a problem
“ a careful investigation or inquiry
specially through search for new
facts in any branch of knowledge”
The Oxford Advanced Learner’s Dictionary
• To gain familiarity with a phenomenon or to achieve
new insights into it
• To accurately portray the characteristics of a
particular individual, group, or a situation
• To analyse the frequency with which something
occurs or its association with something else.
• To examine the hypothesis of a causal
relationship between two variables
• Research Methods are the methods that the
researcher adopts for conducting the research
Studies
• Research Methodology is the way in which
research problems are solved systematically.
• It is the Science of studying how research is
conducted Scientifically.
“All progress is born of inquiry. Doubt is often
better than over-confidence, for it leads to
inquiry, and inquiry leads to invention”
— Hudson Maxim

• Research inculcates scientific and inductive


thinking and it promotes the development of
logical habits of thinking and organization.
Research approaches fall into three broad categories:
qualitative, quantitative, and mixed.
• Qualitative research refers to the use of non-numerical
observations to answer “Why?” questions, while quantitative
methods use data that can be counted or converted into
numerical form to address “How many?” or “How much?” questions.

Quantitative research designs may be analytical or descriptive:
• Analytical – observational designs (cohort studies, case-control
studies, cross-sectional studies) and experimental designs
(randomized trials).
• Descriptive – case reports and case series.
• Good research is systematic: Research is structured
with specified steps to be taken in a specified
sequence in accordance with a well-defined set of
rules.
• Good research is logical: Research is guided by the
rules of logical reasoning
• Good research is empirical: Research is related
basically to one or more aspects of a real situation
and deals with concrete data that provides a basis
for external validity.
• Good research is replicable: This characteristic
allows research results to be verified by replicating
the study and thereby building a sound basis for
decisions.
The research process:
I. Define the research problem
II. Review the literature (review concepts and theories; review previous research findings)
III. Formulate hypotheses
IV. Design research (including sample design)
V. Collect data (execution)
VI. Analyse data (test hypotheses)
VII. Interpret and report
• A research problem, in general, refers to some
difficulty which a researcher experiences in the
context of either a theoretical or practical situation
and wants to obtain a solution for the same.
• The research problem undertaken for study must be
carefully selected. Help may be taken from a
research guide in this connection.
Ask yourself one key question:
where do YOUR interests lie?
The following points may be observed by a
researcher in selecting a research problem or a
subject for research:
i. Subject which is overdone should not be normally chosen,
for it will be a difficult task to throw any new light in such
a case.
ii. There must be some objective(s) to be attained. If one
wants nothing, one cannot have a problem.
iii. The subject selected for research should be familiar and
feasible so that the related research material or sources of
research are within one’s reach.
iv. The importance of the subject, the qualifications and the
training of a researcher, the costs involved and the time
factor are a few other criteria that must also be considered
in selecting a problem. Before the final selection of a
problem is done, a researcher must ask himself the
following questions:
a. Whether he is well equipped in terms of his background to
carry out the research?
b. Whether the study falls within the budget he can afford?
c. Whether the necessary cooperation can be obtained from
those who must participate in research as subjects?
v. If the field of inquiry is relatively new and does not have
a set of well-developed techniques available, a brief
feasibility study must always be undertaken.
• Defining a research problem properly and clearly is a
crucial part of a research study and must in no case
be accomplished hurriedly.
• The technique for the purpose involves the
undertaking of the following steps generally one
after the other:
i. statement of the problem in a general way;
ii. understanding the nature of the problem;
iii. surveying the available literature;
iv. developing the ideas through discussions; and
v. rephrasing the research problem into a working
proposition.
• Once the problem is formulated, the researcher
should undertake an extensive literature review
connected with the problem.
Why literature review?
1. Assists in refining the statement of the problem.
2. Strengthens the argument for selection of a research topic (justification).
3. Helps in getting familiar with the various types of methodology that might be used in the study (design).
Questions that can be answered by a review of literature:
• What are the major issues and debates about the research problem?
• Has the research question already been answered by someone else?
• What is the chronology of the development of knowledge about my research problem?
• Are there any gaps in knowledge of the subject?
• How can I bridge the gap?
• What are the key theories, concepts and ideas known about the subject?
• What directions/methodology are indicated by the work of other researchers?
LITERATURE REVIEW
Sources of literature:
• Books: textbooks, monographs, edited collections
• Journal articles: academic journals, conference proceedings
• Indexing and abstracting journal search engines: PubMed, Google Scholar
• Past dissertations
• Vital statistics: census, government records, surveillance systems, surveys
• International organization documents: e.g. WHO, UNICEF
• Media: newspapers, magazines
• Internet
• Finding too much? If you find so many citations that
there is no end in sight to the number of references
you could use, it's time to re-evaluate your question:
it is either too broad or has nothing much left to explore.
• Finding too little? On the other hand, if you can't find
much of anything, ask yourself if you're looking in the
right area.
• Take thorough notes. Be sure to write copious notes
on everything as you proceed through your research.
It's very frustrating when you can't find a reference
found earlier that now you want to read in full.
• Look for references to papers from which you can
identify the most useful journals.
• Identify those authors who seem to be important in
your subject area.
• The institutional library serves as a great source for the
literature review.
• Talk to the librarian for greater insight into the number of
journals available either as hard copies or online
subscriptions.
• Our JNMC library subscribes to 115
International/Foreign and 25 Indian Journals in
various specialities. The library has an exclusive
collection of about 2000 Theses and Dissertations of
MD/MS/PhD students besides a comprehensive
collection of WHO Publications.
• Besides this, it also provides access to various
consortia, e.g. ERMED (2000 Journals), J-Gate, UGC
Info-net, the PubMed database of 18 million
references/documents and other open-source
documents.
Important concepts related to academic journals
Indexing: Indexing is defined by the British indexing standard
(BS 3700:1988) as a systematic arrangement of entries designed to enable
users to locate information in a document.
– Many commercial indexing services are available.
– Quality indexing services include PubMed, Scopus, Embase, etc.
– A good indexing body ensures that a journal has:
• high-quality content;
• a peer-review process;
• subject matter compatible with the scope of the indexing body;
• a disciplined publishing history.
– Nowadays, predatory publishers and predatory journals brag
about how many abstracting and indexing services cover their
journals. (Check: they may be lying!)
Impact Factor (IF): The Impact Factor was developed by Eugene Garfield as
a quantitative method for comparing journals. Together with Irving H. Sher,
he proposed the IF in 1955 to rank journals according to journal citations.
– It is a measure of the frequency with which the "average article" in a journal
has been cited in a particular year or period.
– The impact factor of a journal is calculated by dividing the number of
current-year citations to the source items published in that journal during
the previous two years by the number of those source items.
– Let us assume that the total number of articles published in a journal in
2010 and 2011 is 50 (denominator) and that in 2012 the citations to
everything published in 2010 and 2011 number 500 (numerator). The 2012
IF will then be 500/50 = 10.
– The Impact Factor is calculated only after 3 years of a journal's launch; new
journals should not be expected to have an IF from day 1.
– Thomson Reuters (ISI) releases the Journal Citation Reports every year and
publishes the IF of every journal.
– The Impact Factor, once assigned by Thomson Reuters to a journal, will be
eligible from the date of its birth.
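The two-year calculation described above can be sketched in a few lines of Python. The figures are the ones from the example in the text; the function name is ours, not any standard API:

```python
# Two-year impact factor, as described above:
# citations received in year Y to items published in years Y-1 and Y-2,
# divided by the number of citable items published in those two years.

def impact_factor(citations_this_year: int, items_prev_two_years: int) -> float:
    return citations_this_year / items_prev_two_years

# Example from the text: 50 articles in 2010-2011, 500 citations in 2012.
print(impact_factor(500, 50))  # → 10.0
```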
• After extensive literature survey, researcher should
state in clear terms the working hypothesis.
• For a researcher, a hypothesis is a formal question
that he intends to resolve.
• A hypothesis is a proposed explanation for an
observable phenomenon which is capable of being
tested by scientific methods .
• For example, consider the statement:
“the drug A is equally efficacious as drug B.”
This is a hypothesis capable of being objectively
verified and tested.
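As a rough illustration of how such a hypothesis could be tested objectively, the sketch below computes Welch's t statistic for two groups of drug responses using only the Python standard library. The data values are invented purely for illustration:

```python
import statistics
from math import sqrt

# Hypothetical response measurements for the two drugs (invented data).
drug_a = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3]
drug_b = [11.9, 12.2, 12.0, 12.4, 11.7, 12.1]

mean_a, mean_b = statistics.mean(drug_a), statistics.mean(drug_b)
var_a, var_b = statistics.variance(drug_a), statistics.variance(drug_b)

# Welch's t statistic: difference of means over its standard error.
t = (mean_a - mean_b) / sqrt(var_a / len(drug_a) + var_b / len(drug_b))
print(f"t = {t:.3f}")
# A |t| near zero is consistent with "equally efficacious"; a large |t|
# (judged against a t-table) would lead to rejecting the hypothesis.
```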
Characteristics of hypothesis: Hypothesis must possess the following
characteristics:
❖ Hypothesis should be clear and precise. If the hypothesis is not
clear and precise, the inferences drawn on its basis cannot be
taken as reliable.
❖ Hypothesis should be capable of being tested.
❖ Hypothesis should be limited in scope and must be specific.
❖ Hypothesis should be stated as far as possible in most simple
terms so that the same is easily understandable by all concerned.
❖ Hypothesis should be amenable to testing within a reasonable
time. One should not use even an excellent hypothesis, if the
same cannot be tested in reasonable time for one cannot spend a
life-time collecting data to test it.
❖ Thus a hypothesis must actually explain what it claims to explain
“A research design is the arrangement of conditions for
collection and analysis of data in a manner that aims to
combine relevance to the research purpose with economy in
procedure.”
Research Methods in Social Sciences, 1962, p. 50
• It constitutes the blueprint for the
collection, measurement and analysis of
data.
• An outline of what the researcher will
do from writing the hypothesis and its
operational implications to the final
analysis of data.
Questions a research design must answer:
• What is the study about?
• Why is the study being made?
• Where will the study be carried out?
• What will be the sample design?
• What periods of time will the study include?
• What techniques of data collection will be used?
• Where can the required data be found?
• How will the data be analysed?
Important concepts relating to research design:
1. Dependent and independent variables:
• A concept which can take on different quantitative values is called a
variable. Concepts such as weight and height are examples of
variables.
• Phenomena which can take on quantitatively different values even in
decimal points are called ‘continuous variables’.
• If it can only be expressed in integer values, they are non-continuous
variables or in statistical language ‘discrete variables’.
• If one variable depends upon or is a consequence of the other
variable, it is termed as a dependent variable, and the variable that is
antecedent to the dependent variable is termed as an independent
variable.
• For instance, if we say that height depends upon age, then height is
a dependent variable and age is an independent variable.
2. Extraneous variable:
• Independent variables that are not related to the
purpose of the study, but may affect the dependent
variable are termed as extraneous variables or
confounding variables.
• Whatever effect is noticed on dependent variable as a
result of extraneous variable(s) is technically described
as an ‘experimental error’.
• A study must always be so designed that the effect
upon the dependent variable is attributed entirely to the
independent variable(s), and not to some extraneous
variable or variables.
3. CONTROL:
• One important characteristic of a good research design is to
minimise the influence or effect of extraneous variable(s).
• The technical term ‘control’ is used when we design the study
minimising the effects of extraneous independent variables.
• In experimental researches, the term ‘control’ is used to refer
to the restraint of experimental conditions.
4. Experimental and control groups:
• In an experimental hypothesis-testing research when a group
is exposed to usual conditions, it is termed a ‘control group’,
but when the group is exposed to some novel or special
condition, it is termed an ‘experimental group’
5. Treatments:
• The different conditions under which experimental and control
groups are put are usually referred to as ‘treatments’.
Different Research Designs
• Different research designs can be conveniently described
as:
– Exploratory Research Design
– Descriptive and Diagnostic Research Design
– Hypothesis-testing Research Design/Experimental
Research Design
To be continued……………………