
Practical Research 2

Quarter 2 – Module 5A:


Collects Data Using Appropriate
Instrument
Practical Research 2 – Grade 12
Alternative Delivery Mode
Quarter 2 – Module 5A: Collecting Data Using Appropriate Instrument
First Edition, 2020

Republic Act 8293, section 176 states that: No copyright shall subsist in any work of the
Government of the Philippines. However, prior approval of the government agency or office wherein
the work is created shall be necessary for exploitation of such work for profit. Such agency or office
may, among other things, impose as a condition the payment of royalties.

Borrowed materials (i.e., songs, stories, poems, pictures, photos, brand names, trademarks,
etc.) included in this module are owned by their respective copyright holders. Every effort has been
exerted to locate and seek permission to use these materials from their respective copyright owners.
The publisher and authors do not represent nor claim ownership over them.

Published by the Department of Education


Secretary: Leonor Magtolis Briones
Undersecretary: Diosdado M. San Antonio

Development Team of the Module

Writer: Maria Regina Christine G. Ancheta


Editor: Bryan G. Bulanadi
Reviewer: Felina Sarmiento
Illustrator: Maria Regina Christine G. Ancheta
Layout Artist: Maria Regina Christine G. Ancheta
Cover Design:

Management Team:
Schools Division Superintendent : Romeo M. Alip, PhD, CESO V
Asst. Schools Division Superintendent : Rolando M. Fronda, EdD, CESE
Chief Education Supervisor, CID : Milagros M. Peñaflor, PhD
Education Program Supervisor, LRMDS : Edgar E. Garcia, MITE
Education Program Supervisor, AP/ADM : Romeo M. Layug
Education Program Supervisor, Learning Area: Edwin Bermillo
District Supervisor, Assigned Subject : Francisco B. Bautista
District LRMDS Coordinator, Assigned Subject: Jaypee M. Villa
School LRMDS Coordinator, Assigned Subject: Donna T. Santos-Villanueva
School Principal, Assigned Subject : Amelinda A. Fandialan
Lead Layout Artist, Assigned Subject :
Lead Illustrator, Assigned Subject :
Lead Evaluator, Assigned Subject :

Printed in the Philippines by Department of Education – Schools Division of Bataan


Office Address: Provincial Capitol Compound, Balanga City, Bataan
Telefax: (047) 237-2102
E-mail Address: [email protected]
Practical Research 2
Quarter 2 – Module 5A:
Collects Data Using Appropriate
Instrument
Introductory Message
For the facilitator:

Welcome to the Practical Research 2 Grade 12 Alternative Delivery Mode (ADM)
Module on Collecting Data Using Appropriate Instrument!

This module was collaboratively designed, developed and reviewed by educators


both from public and private institutions to assist you, the teacher or facilitator in helping
the learners meet the standards set by the K to 12 Curriculum while overcoming their
personal, social, and economic constraints in schooling.

This learning resource hopes to engage the learners in guided and independent
learning activities at their own pace and time. Furthermore, this also aims to help learners
acquire the needed 21st century skills while taking into consideration their needs and
circumstances.

In addition to the material in the main text, you will also see this box in the body of
the module:

Notes to the Teacher


This contains helpful tips or
strategies that will help you in guiding the
learners.

As a facilitator you are expected to orient the learners on how to use this module.
You also need to keep track of the learners' progress while allowing them to manage their
own learning. Furthermore, you are expected to encourage and assist the learners as they
do the tasks included in the module.

For the learner:

Welcome to the Practical Research 2 Grade 12 Alternative Delivery Mode (ADM)
Module on Collecting Data Using Appropriate Instrument!

The hand is one of the most symbolized parts of the human body. It is often used to
depict skill, action, and purpose. Through our hands we may learn, create, and accomplish.
Hence, the hand in this learning resource signifies that you as a learner are capable
and empowered to successfully achieve the relevant competencies and skills at your own pace
and time. Your academic success lies in your own hands!

This module was designed to provide you with fun and meaningful opportunities
for guided and independent learning at your own pace and time. You will be enabled to
process the contents of the learning resource while being an active learner.
This module has the following parts and corresponding icons:

What I Need to Know This will give you an idea of the skills or
competencies you are expected to learn in the
module.

What I Know This part includes an activity that aims to check


what you already know about the lesson to take.
If you get all the answers correct (100%), you
may decide to skip this module.

What’s In This is a brief drill or review to help you link the


current lesson with the previous one.

What’s New In this portion, the new lesson will be introduced


to you in various ways such as a story, a song, a
poem, a problem opener, an activity or a
situation.

What is It This section provides a brief discussion of the


lesson. This aims to help you discover and
understand new concepts and skills.

What’s More This comprises activities for independent


practice to solidify your understanding and skills
of the topic. You may check the answers to the
exercises using the Answer Key at the end of the
module.

What I Have Learned This includes questions or blank


sentences/paragraphs to be filled in to process
what you learned from the lesson.

What I Can Do This section provides an activity which will help


you transfer your new knowledge or skill into
real life situations or concerns.

Assessment This is a task which aims to evaluate your level


of mastery in achieving the learning competency.

Additional Activities In this portion, another activity will be given to


you to enrich your knowledge or skill of the
lesson learned. This also aids retention of
learned concepts.

Answer Key This contains answers to all activities in the


module.

At the end of this module you will also find:

References This is a list of all sources used in developing


this module.
The following are some reminders in using this module:

1. Use the module with care. Do not put unnecessary mark/s on any part of
the module. Use a separate sheet of paper in answering the exercises.
2. Don’t forget to answer What I Know before moving on to the other activities
included in the module.
3. Read the instruction carefully before doing each task.
4. Observe honesty and integrity in doing the tasks and checking your
answers.
5. Finish the task at hand before proceeding to the next.
6. Return this module to your teacher/facilitator once you are through with it.
If you encounter any difficulty in answering the tasks in this module, do
not hesitate to consult your teacher or facilitator. Always bear in mind that you
are not alone.

We hope that through this material, you will experience meaningful learning
and gain deep understanding of the relevant competencies. You can do it!
What I Need to Know

This module was designed and written to help you collect data using
appropriate research instruments.

At the end of this module, the learners will be able to:

Collect data using appropriate instruments (CS_RS12-IId-g-1); and

Present and interpret data in textual, tabular, and graphical forms
(CS_RS12-IId-g-2).
What I Know

Instruction: Rearrange the letters to identify key terms about research
instruments and give their definitions. Write your answers on a separate sheet of paper.

Table 1: Key Terms on Research Instruments

Answer Definition

1 YVTAILID

2 YOTBIJVEICT

3 ECVOIGTNI

4 EADPUTTI

5 EAVFIFTEC

6 HRCERSAE
TINNESMTUR
Lesson Collecting Data Using Appropriate
1 Instrument

In our past lesson, we learned to choose an appropriate research design


for a research study. A research design is a framework or plan for conducting
the research process. It is noteworthy to remember that the research problem
determines the research design. The design also serves as a strategy for how
data collection, measurement, and analysis of data will be done.

What’s In

In any type of quantitative research design, you will always need an


instrument to gather data for your study. A research instrument is a tool used
to collect, measure, and analyze data related to your subject. Research
instruments can be tests, surveys, scales, questionnaires, or even checklists. In
this module, you will learn how to develop your own research instrument. The
process of developing an instrument, as well as the conditions under which the
instrument will be used, is called instrumentation.
A research instrument has the following characteristics: validity, reliability,
and objectivity. Validity refers to the degree to which an instrument accurately
measures what it intends to measure. Three common types of validity for
researchers and evaluators to consider are content, construct, and criterion
validities. Reliability refers to the degree to which an instrument yields
consistent results. Common measures of reliability include internal consistency,
test-retest, and inter-rater reliability. Objectivity refers to the absence of
subjective judgment in the research instrument. Developing a valid and reliable
instrument usually requires multiple iterations of piloting and testing which can
be resource-intensive. Therefore, when available, I suggest using already
established valid and reliable instruments, such as those published in peer-
reviewed journal articles. However, even when using these instruments, you
should re-check validity and reliability, using the methods of your study and your
own participants’ data before running additional statistical analyses. This process
will confirm that the instrument performs as intended in your study and with the
population you are studying, even if they are not identical to the purpose and
population for which the instrument was initially developed.
What’s New

Let's proceed!

Instruction: Get your dictionary and define the following words:

1. Data: ____________________________________________________
2. Survey : __________________________________________________
3. Questionnaire : ___________________________________________
4. Interview : ________________________________________________
5. Affective : _________________________________________________
6. Aptitude : _________________________________________________
7. Cognitive : ________________________________________________
8. Validity : _________________________________________________
9. Reliability : _______________________________________________
10. Objectivity : _________________________________________

What is It

Now let us find out how to collect data using the appropriate instrument!

Types of Research Instruments:

Cognitive Research Instrument. This type of instrument measures cognitive
function — mental processes and abilities such as memory, attention, reasoning,
and comprehension. Digital platforms designed for cognitive assessment can save
the researcher time, make testing more comfortable for the subjects, and make it
simple to collect, manage, and analyze the resulting data.

Aptitude Research Instrument. This is designed to assess what a person is capable


of doing or to predict what a person can learn or do given the right education and
instruction. It represents a person's level of competency to perform a certain type
of task. Aptitude tests are often used to assess academic potential or career
suitability and may be used to assess either mental or physical talent in a variety
of domains.

People encounter a variety of aptitude tests throughout their personal and


professional lives, often starting while they are children going to school.

Here are a few examples of common aptitude tests:

 A test assessing an individual's aptitude to become a fighter pilot.


 A career test evaluating a person's capability to work as an air traffic
controller.
 An aptitude test is given to high school students to determine which type of
careers they might be good at.
 A computer programming test to determine how a job candidate might solve
different hypothetical problems.
 A test designed to test a person's physical abilities needed for a particular
job such as a police officer or firefighter.

Affective Research Instrument. This measures attitudes, emotions, feelings, and


other affective traits, including individual differences in how people regulate
their emotions. The emotion research literature has consistently identified three
general strategies for handling emotional reactions: some strategies are aimed at
re-adjusting affect to adapt successfully to situational demands; other strategies
intend to conceal or suppress affect; and a third approach is to tolerate and
accept emotions, including unwanted and aversive reactions. This type of research
instrument is usually expressed in the Likert scale, semantic differential scale,
Thurstone scale, Guttman scale, and rating scale.

Instrument development begins with deciding what type of research


instrument you will use in the study. In designing your research instrument,
start with a statement of the objectives of the research study and review existing
instruments with similar variables. Consider how the data will be used, the
confidentiality of the data, how long it will take the respondents to answer the
interview or survey, and the language appropriate to the age group. Ensure that
every question helps you answer the research questions and that questionnaires
use appropriate scales. You may also ask for demographic information such as
gender and age, if relevant, but do not ask for so much detail that it would be
possible to identify people. Remember that in constructing your instrument, the
initial number of items should be twice the target number of items because, after
item analysis, you will have to delete weak items.

The next step is to find an expert in the field who will validate the content of
your instrument. It is also necessary to have your instrument checked by a
language editor for face validation. Content validity refers to the extent to which
the items on a test are fairly representative of the entire domain the test seeks to
measure. This entry discusses the origins and definitions of content validation,
methods of content validation, the role of content validity evidence in validity
arguments, and unresolved issues in content validation. Face validity, also
called logical validity, is a simple form of validity in which you apply a
superficial and subjective assessment of whether or not your study or test
measures what it is supposed to measure. Revise the instrument according to the
comments and suggestions given by the content and face validators.

After the content and face validation, you are now ready to pilot test your
instrument. You should bear in mind that the actual number of respondents
should double the number of your target respondents in the study to increase the
validity and reliability of your instrument. During pilot testing, you will also
determine the average time the respondents took to answer the instrument.
Respondents can also comment on the reliability of your instrument.

If applicable, after collecting data from the pilot test, you can now use
statistical tests to check the reliability of your instrument. Item analysis is one
way to check the reliability of your instrument. Item analysis is a statistical
technique that measures the effectiveness of a research instrument. This is done
by getting the difficulty and discrimination indices of the research instrument.
The difficulty index is the proportion or probability that candidates, or
students, will answer a test item correctly, whereas the discrimination index is a
measure of how well an item can distinguish between examinees who are
knowledgeable and those who are not, or between masters and non-masters.

The first is the item-difficulty index (p). This index is determined by


calculating the proportion of examinees that answer the item correctly. The
second index is called the item-discrimination index (d). For this calculation, we
divide the test-takers into three groups according to their scores on the test as a
whole: an upper group consisting of the 27% who make the highest scores, a
lower group consisting of the 27% who make the lowest scores, and a middle
group consisting of the remaining 46%. (Note that some textbooks use 25%, not
27%.)

The formula for the item-difficulty index is

p = Np / N

where Np is the number of test-takers in the total group who pass the item, and N
is the total number of test-takers in the group.

The formula for the item-discrimination index is

d = (Up - Lp) / U

where Up and Lp are the numbers of test-takers in the upper and lower groups
who pass the item, and U is the total number of test-takers in the upper group.

As an example, assume that 50 people take a test. For the difficulty index,
27 test-takers answer the item correctly. For the discrimination index, the upper
and lower groups will be formed from the top 14 and bottom 14 test-takers on the
total test score. If 12 of the test-takers in the upper group and 7 of those in the
lower group pass the item, then:

p = 27 / 50 = .54        d = (12 - 7) / 14 ≈ .36
The item difficulty index (p) has a range of 0.00 to 1.00. If no one answers
the item correctly, the p-value would be 0.00. An item that everyone answers
correctly would have a p-value of 1.00.
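The two indices above can be checked with a short Python sketch (the function names are ours, not from the module; the numbers come from the worked example in the text):

```python
def difficulty_index(n_correct, n_total):
    """Item-difficulty index p = Np / N: the proportion of all
    test-takers who answered the item correctly."""
    return n_correct / n_total

def discrimination_index(upper_correct, lower_correct, group_size):
    """Item-discrimination index d = (Up - Lp) / U: the difference in
    correct answers between the upper and lower 27% groups, divided
    by the size of one group."""
    return (upper_correct - lower_correct) / group_size

# Worked example from the text: 50 examinees, 27 correct overall;
# upper/lower groups of 14, with 12 and 7 correct answers respectively.
p = difficulty_index(27, 50)          # 0.54
d = discrimination_index(12, 7, 14)   # 0.357..., reported as 0.36
```

Note that 5/14 is 0.357, which the module rounds to 0.36.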
The optimal level for an acceptable p-value depends on the number of options per
item. A formula that can be used to compute the optimal level is:

optimal level = (1.0 + g) / 2

where g = the chance of answering the item correctly by guessing
(1 divided by the number of options).

In our case, we will have four options; therefore, g = .25, and the
optimal level for our tests will be (1.0 + .25) / 2 ≈ .63. In general, as the number of options
increases, the optimal p-value decreases; we would expect questions with more
options to also be more difficult to answer.
We can also compute the lower bound for item difficulty. We do not want
too many questions to have a difficulty index below this bound, because it means
that these items were too difficult for the group of examinees and therefore will not
help us discriminate among the test-takers. The lower bound is the chance level
plus an allowance for sampling error:

lower bound = 1/k + 1.645 × √[(1/k)(1 - 1/k)/n]

where k = number of options per multiple-choice item
and n = number of examinees.

For example, where k = 4 and n = 90, the lower bound = .25 + 1.645 × √(.25 × .75 / 90) ≈ .325
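Both thresholds can be sketched in Python. The optimal level is stated in the text; the lower-bound formula is our reconstruction (chance level plus a one-tailed 95% sampling allowance, z = 1.645), chosen because it reproduces the module's .325 for k = 4 and n = 90:

```python
from math import sqrt

def optimal_p(k):
    """Optimal difficulty level: (1.0 + g) / 2, where g = 1/k is the
    chance of guessing correctly among k options."""
    g = 1 / k
    return (1.0 + g) / 2

def lower_bound_p(k, n, z=1.645):
    """Lower bound for item difficulty (our reconstruction): chance
    level plus a one-tailed 95% sampling allowance,
    g + z * sqrt(g * (1 - g) / n)."""
    g = 1 / k
    return g + z * sqrt(g * (1 - g) / n)

print(round(optimal_p(4), 3))          # 0.625 (the text rounds to .63)
print(round(lower_bound_p(4, 90), 3))  # 0.325, matching the example
```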


The item discrimination index (d) is a measure of the effectiveness of an
item in discriminating between high and low scorers on the whole test. The higher
the value of d, the more effective the item is. When d is 1.00, all test-takers in the
upper group and no test-takers in the lower group answered the item correctly.
Conversely, if none of the upper group but all of the lower group answered an item
correctly, the d value would be -1.00. Both of these circumstances are rare, and
we will probably never see a value of -1.00. The range of values for the item
discrimination index is -1.00 to 1.00. Generally speaking, an item is considered
acceptable if its d index is 0.30 or higher.

Activity 1

In this activity, you are asked to calculate the difficulty and discrimination
indices for the results of two test items and to interpret the results.
SHOW ALL COMPUTATIONAL STEPS

1 Compute the difficulty (p) and discrimination (d) indices of a test item
administered to 84 people, where 52 test-takers answered the item correctly,
and 20 in the upper group (upper 27% of the total test score distribution) and
12 in the lower group (lower 27% of the distribution) got the item right. (Note: k = 4)
p = ________ Optimal Level: ________
d = ________ Lower Bound: ________

Is this a good item? _____ Yes _____ No

2 Compute the difficulty(p) and discrimination (d) index for an item administered
to 263 people where 74 people answered the item correctly; 32 people in the upper
group and 23 people in the lower group passed the item (k = 4).

p = ________ Optimal Level: ________


d = ________ Lower Bound: ________

Is this a good item? _____ Yes _____ No

For rating scales and inventory tests, the appropriate way to check their
reliability is to compute the Cronbach’s alpha of the instrument, which can be
computed using statistical tools such as the Statistical Package for the Social
Sciences (SPSS) or even Microsoft Excel. Cronbach’s alpha refers to a measure
of internal consistency, that is, how closely related a set of items are as a
group. It is considered to be a measure of scale reliability. A “high” value for
alpha does not imply that the measure is unidimensional. If, in addition to
measuring internal consistency, you wish to provide evidence that the scale in
question is unidimensional, additional analyses can be performed. Exploratory
factor analysis is one method of checking dimensionality. Technically speaking,
Cronbach’s alpha is not a statistical test – it is a coefficient of reliability (or
consistency).

Cronbach’s alpha can be written as a function of the number of test items


and the average inter-correlation among the items. Below, for conceptual
purposes, we show the formula for Cronbach’s alpha:

α = (N × c̄) / (v̄ + (N - 1) × c̄)

Here N is equal to the number of items, c̄ is the average inter-item
covariance among the items, and v̄ equals the average variance.

One can see from this formula that if you increase the number of items,
you increase Cronbach’s alpha. Additionally, if the average inter-item correlation
is low, the alpha will be low. As the average inter-item correlation increases,
Cronbach’s alpha increases as well (holding the number of items constant).

An example

First, we calculated the standard deviation (SD) of the data in each column.
(The example assumes a spreadsheet with the scores of six subjects on four
questions in columns B to E, rows 2 to 7.) For the SD of the column B data, we
wrote the B10 cell formula as “=STDEV(B2:B7)”, and the SD was calculated as
0.752773. Similarly, we calculated the SD
of other columns. In column F, we calculated the total score of four questions of
each subject. For calculation of total score of subject 1, we wrote formula for F2
cell as “= SUM (B2:E2)” and the sum score was calculated as 11. Then, we
calculated the SD of sum score in cell F10.
The formula for calculation of α is:

α = [k / (k - 1)] × [1 - (ΣSi² / St²)]

where k = number of items (questions/statements) in the questionnaire,

Si = SD of the ith item, and

St = SD of the sum score.

For ease of identification, we put a label in cell A12, and the right
adjacent cell (B12) equation was written as “=COUNT(B2:E2)/(COUNT(B2:E2)-
1)”, giving k/(k - 1). Then, we put a label in cell D12, and the right adjacent cell (E12)
formula was written as “=SUMSQ(B10:E10)”, so the sum of the squares of all the SDs was
calculated in cell E12. We put a label in cell A13, and the right
adjacent cell (B13) formula was written as “=F10^2”, giving St². The α was calculated in cell
E13 when we put the cell formula “=B12*(1-(E12/B13))”. The calculated value
of α in E13 was 0.835.
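The same α computation can be scripted outside a spreadsheet. Below is a minimal Python sketch using the variance form of the formula; the response data are invented for illustration and are not the module's spreadsheet values:

```python
from statistics import variance  # sample variance, matching STDEV^2

def cronbach_alpha(scores):
    """Cronbach's alpha via alpha = k/(k-1) * (1 - sum(item variances)
    / variance(total scores)). `scores` is one list of item scores per
    respondent."""
    k = len(scores[0])                 # number of items
    items = list(zip(*scores))         # transpose: one tuple per item
    item_vars = [variance(col) for col in items]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical responses: 6 respondents x 4 Likert items
data = [
    [3, 4, 3, 4],
    [4, 4, 4, 5],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [4, 5, 4, 4],
]
print(round(cronbach_alpha(data), 3))  # 0.947 for this invented data
```

The `variance` function uses the sample (n - 1) denominator, consistent with Excel's STDEV in the example above.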

Those who have access to Microsoft Excel can carry out the same
calculation in an Excel spreadsheet as described above. We tested all the
formulas in Excel and found them to work as expected. In addition, we calculated
α for the above example in IBM SPSS Statistics and it showed the same result.

It is thus possible to calculate the α of a questionnaire with the help of


freely available computer software, and Cronbach’s alpha for any number of
questions and responses can be calculated in similar fashion. This should help
researchers in resource-poor settings calculate Cronbach’s alpha.
Now that you have developed the instrument for your study, it’s now time
to plan the data gathering procedure. For survey or descriptive type of research,
collection of data can be done by one of the following modes: direct administration
to a group, mail surveys, telephone surveys, and personal interviews. Direct
administration to a group refers to a self-administered survey. Respondents are
gathered in a group, questionnaires are handed out to each and the
questionnaires are completed within the group situation. Mail surveys are the
pioneer of self-administered questionnaires. In this approach, the researcher
sends the questionnaires enclosed with postage-paid envelopes through the postal
system. Meanwhile, the participants will be asked to answer questions that are
written on paper. Telephone surveys are methods used to collect data either from
the general population or from a specific target population. Telephone numbers
are utilized by trained interviewers to contact and gather information from
possible respondents. A personal interview, or face-to-face survey, is a
survey method that is utilized when a specific target population is involved. The
purpose of conducting a personal interview survey is to explore the responses of
the people to gather more and deeper information.
A correlational research design is a methodology adopted to determine the
linear statistical relationship between two variables, and the data collection
methods above are likewise used to gather information in correlational
research. For the explanatory study,
the instrument will be administered to the respondents in one or two sessions.
Two or more instruments will be administered on the same day, or, in cases
where there are more than two instruments, another session will be devoted to the
administration of the remaining instruments. For a prediction study, on the other hand,
the measurement of the criterion variable often takes place some time after the
measurement of the predictor variable.

For causal-comparative or quasi-experimental and experimental research


design, the instrument is administered before employing an intervention and is
administered again after the intervention. In some cases, only the post-test is
given or the administration of the research instrument is after the intervention
only.

A. Collecting Data Through Survey

What is a data collection survey?

Variations: questionnaire, e-survey, telephone interview, face-to-face interview,


focus group

A survey is defined as the act of examining a process or questioning a


selected sample of individuals to obtain data about a service, product, or process.
Data collection surveys collect information from a targeted group of people about
their opinions, behavior, or knowledge. Common types of surveys are
written questionnaires, face-to-face or telephone interviews, focus groups, and
electronic (e-mail or website) surveys.

Surveys are a valuable data collection and analysis tool that is commonly
used with key stakeholders, especially customers and employees, to discover
needs or assess satisfaction.

When to Use Surveys to Collect Data

It is helpful to use surveys when:

 Identifying customer requirements or preferences


 Assessing customer or employee satisfaction, such as identifying or prioritizing
problems to address
 Evaluating proposed changes
 Assessing whether a change was successful
 Monitoring changes in customer or employee satisfaction over time
How to Administer a Survey

1. Determine what you want to learn from the survey and how you will use the
results.
2. Determine who should be surveyed by identifying the population group. If the
group is too large to permit surveying everyone, decide how to obtain a sample.
Decide what demographic information is needed to analyze and understand the
results.
3. Determine the most appropriate type of survey.

Determining Survey Type Example

4. Determine whether the survey’s answers will be numerical rating, numerical


ranking, yes-no, multiple-choice or open-ended, or a mixture.
5. Brainstorm questions and, for multiple-choice, the list of possible answers. Keep
in mind what you want to learn, and how you will use the results. Narrow down
the list of questions to the absolute minimum that you must have in order to
learn what you need to know.
6. Print the questionnaire or interviewer's question list.
7. Test the survey with a small group. Collect feedback.
 Which questions were confusing?
 Were any questions redundant?
 Were answer choices clear? Were they interpreted as you intended?
 Did respondents want to give feedback about topics that were not included?
(Open-ended questions can be an indicator of this.)
 On average, how long did it take for a respondent to complete the survey?
 For a questionnaire, were there any typos or printing errors?
8. Test the process of tabulating and analyzing the results. Is it easy? Do you have
all the data you need?
9. Revise the survey based on test results.
10. Administer the survey.
11. Tabulate and analyze the data. Decide how you will follow through. Report
results and plans to everyone involved. If a sample was involved, also report and
explain the margin of error and confidence level.
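For the margin of error mentioned in step 11, the usual formula for a sample proportion is z × √(p(1 - p)/n). A small sketch (z = 1.96 corresponds to a 95% confidence level; the sample numbers are illustrative):

```python
from math import sqrt

def margin_of_error(p_hat, n, z=1.96):
    """Margin of error for a sample proportion at roughly 95%
    confidence: z * sqrt(p_hat * (1 - p_hat) / n)."""
    return z * sqrt(p_hat * (1 - p_hat) / n)

# Illustrative: 400 respondents, 50% choosing an option (worst case)
print(round(margin_of_error(0.5, 400), 3))  # 0.049, about ±5 points
```

Using p = 0.5 gives the largest (most conservative) margin for a given sample size.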
Survey Considerations

 Conducting a survey creates expectations for change in those asked to answer it.
Do not administer a survey if action will not, or cannot, be taken as a result.
 Satisfaction surveys should be compared to objective indicators of satisfaction,
such as buying patterns for customers or attendance for employees, and to
objective measures of performance, such as warranty data in manufacturing or
re-admission rates in hospitals. If survey results do not correlate with the other
measures, work to understand whether the survey is unreliable or whether
perceptions are being modified by the organization’s actions.
 Surveys of customer and employee satisfaction should be ongoing processes
rather than one-time events.
 Get help from a research organization in preparing, administering, and analyzing
major surveys, especially large ones or those whose results will determine
significant decisions or expenditures.

B. Collecting Data Through Interviews

When is an Interview an Appropriate Research Method?

Interviews are an appropriate method when there is a need to collect in-


depth information on people’s opinions, thoughts, experiences, and feelings.
Interviews are useful when the topic of inquiry relates to issues that require
complex questioning and considerable probing. Face-to-face interviews are
suitable when your target population can communicate through face-to-face
conversations better than they can communicate through writing or phone
conversations (e.g., children, elderly or disabled individuals).

What is an Interview?

An interview is a conversation for gathering information. A research


interview involves an interviewer, who coordinates the process of the conversation
and asks questions, and an interviewee, who responds to those questions.
Interviews can be conducted face-to-face or over the telephone. The internet is
also emerging as a tool for interviewing.

Types of Interviews

Interviews can be designed differently depending on the needs being


addressed and the information required. They can be grouped into three types:

Structured interviews: In a structured interview, the interviewer asks a set of


standard, predetermined questions about particular topics, in a specific order. The
respondents need to select their answers from a list of options. The interviewer
may provide clarification on some questions. Structured Interviews are typically
used in surveys (see our “Survey Research Methods” Tip Sheet for more
information).

Semi-structured interviews: In a semi-structured interview, the interviewer uses a set of predetermined questions and the respondents answer in their own words. Some interviewers use a topic guide that serves as a checklist to ensure that all respondents provide information on the same topics. The interviewer can probe areas based on the respondent’s answers or ask supplementary questions for clarification. Semi-structured interviews are useful when there is a need to collect in-depth information systematically from several respondents or interviewees (e.g., teachers, community leaders).

Unstructured interviews: In an unstructured interview, the interviewer has no specific guidelines, restrictions, predetermined questions, or list of options. The interviewer asks a few broad questions to engage the respondent in an open, informal, and spontaneous discussion. The interviewer also probes with further questions and/or explores inconsistencies to gather more in-depth information on the topic. Unstructured interviews are particularly useful for getting the stories behind respondents’ experiences or when there is little information about a topic.

Steps in Conducting an Interview:

Before the Interview:

1. Define your objectives → Identify what you want to achieve and the information you need to gather. Make sure an interview is an appropriate way to meet your objectives.

2. Choose the type of interview → Review your required information, budget, time,
and potential respondents and decide whether you need to conduct structured,
semi-structured, or unstructured interviews.

3. Choose the appropriate respondents → Depending on the type of interview, decide on the characteristics of interviewees and the number of interviews required.

4. Decide how you will conduct the interviews → Consider telephone or face-to-
face interviews. For large surveys, consider computer-aided interviewing and
recording.

5. Decide how to recruit your respondents → Obtain contact information for more respondents than the number of interviews you need, since some may not respond. Contact them by phone, e-mail, or regular mail and introduce yourself, your organization, and your project. Explain the purpose of the interview and the importance of their participation, and set up an appointment.

6. Decide how you will record the interviews → Depending on the type of interview,
you may fill in a prepared form, use written notes, voice recorders, or computer
devices.

7. Make a list of questions and test them with a sample of respondents → The questions must be aligned with the type of interview. If you are running structured interviews, see our Tip Sheets on “Questionnaire Design” and “Survey Research Methods” for more information.

8. Decide who will conduct the interviews → Develop an information kit that includes an introduction to the research topic and instructions. For unstructured interviews, you may need to hire skilled interviewers.
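The oversampling advice in step 5 above (contact more people than the interviews you need) can be made concrete with a quick calculation. The sketch below is only illustrative; the 50% response rate is a hypothetical assumption you would replace with an estimate from similar past studies.

```python
# Rough sketch: how many respondents to contact so that, at an assumed
# response rate, you still complete the number of interviews you need.
# The figures used here are hypothetical, not taken from this module.
import math

def contacts_needed(interviews_needed, expected_response_rate):
    # Round up, since you cannot contact a fraction of a person.
    return math.ceil(interviews_needed / expected_response_rate)

# Needing 20 completed interviews at an assumed 50% response rate:
print(contacts_needed(20, 0.5))  # 40 people to contact
```

The lower your expected response rate, the larger the pool of contacts you should prepare.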

During the Interview:

1. Introduce yourself and initiate a friendly but professional conversation.

2. Explain the purpose of your project, the importance of their participation, and
the expected duration of the interview.

3. Be prepared to reschedule the interview if a respondent has a problem with the timing.

4. Explain the format of the interview.

5. Tell respondents how the interview will be recorded and how the collected information will be used → If possible, obtain their written consent to participate.

6. Ask respondents if they have any questions.

7. Control your tone of voice and language → Remain as neutral as possible when asking questions or probing on issues.

8. Keep the focus on the topic of inquiry and complete the interview within the
agreed time limit.

9. Ensure proper recording → Without distracting the respondent, check your notes and voice recorder regularly.

10. Complete the session → Make sure all questions were asked, explain again how you will use the data, thank the respondent, and ask them if they have any questions.

After the Interview:

1. Make sure the interview was properly recorded → Make additional notes, if needed.

2. Organize your interview responses → Responses from unstructured and semi-structured interviews need to be transcribed. Responses from structured interviews need to be entered into a data analysis program.

3. Get ready for data analysis → Search for resources for analyzing qualitative and/or quantitative data.
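For structured interviews, the data entry and tabulation described in steps 2 and 3 above can be sketched in a few lines. This is a minimal illustration only; the question labels and answers below are hypothetical, not taken from this module’s sample interview guide.

```python
# Minimal sketch of organizing structured-interview responses: each record
# holds one respondent's answers, and a tally gives the frequency of each
# chosen option per question. Question names and answers are hypothetical.
from collections import Counter

responses = [
    {"uses_ict": "Yes", "device": "Computer"},
    {"uses_ict": "Yes", "device": "Graphing calculator"},
    {"uses_ict": "No",  "device": "None"},
    {"uses_ict": "Yes", "device": "Computer"},
]

def tally(question):
    """Count how often each answer option was chosen for one question."""
    return Counter(r[question] for r in responses)

print(tally("uses_ict"))  # 3 'Yes' and 1 'No'
print(tally("device"))
```

From tallies like these, the frequencies and percentages typically reported for structured interviews follow directly.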

Checklist for Conducting Interviews

o Have you identified research questions that will be adequately addressed by using
interviews?
o Have you chosen the appropriate type of interview?
o Have you selected an interviewer?
o Have you prepared the list of questions?
o Have you tested the questions with a sample of respondents?
o Have you decided on the setting of interviews and how responses should be
recorded?
o Have you contacted your respondents and set up appointments?
o Have you obtained enough data for analysis?

Sample Research on

“Students’ Perceptions About ICT in Learning Mathematics”

Sample Interview Guide for Students

1. What is your favorite subject/s?
2. Why do you like the subject?
3. What is your least-liked subject/s?
4. Why do you dislike the subject?
5. How is your favorite subject taught by your teacher?
6. How is your least-liked subject taught by your teacher?
7. Which method/strategy do you find interesting?
8. Does your teacher use ICT in teaching mathematics?
9. Which ICT devices do you frequently use? Graphing calculator? Computer? E-learning?
10. What are the ICT activities that you perform inside the classroom? Outside the classroom?
11. Which of these activities are helpful to you?
12. Of the number of minutes you spend learning mathematics, what percentage do you spend on hands-on ICT math activities?
13. Do you find the use of ICT in learning mathematics interesting?
14. On a scale of 1 to 10 (with 1 as the lowest and 10 as the highest), how would you assess the effectiveness of ICT integration in learning mathematics?
15. Does ICT integration in learning mathematics improve your academic performance?

Figure 2: Collecting Data through Interview


What’s More

After reading and learning the topic, let us have a comprehension check!

Directions: Answer the following questions and write your answer on a separate
paper.

Questions:

1. What is the purpose of the research instrument? Explain your answer (10 points).
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________

2. What are the characteristics of a good research instrument? Explain your answer (10 points).
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________

RUBRICS FOR ESSAY

Quality of Writing:
5 – The insight given was excellent.
4 – The insight given was somewhat informative and organized.
3 – The insight given was vague and general.
2 – The insight given was not very organized and only somehow related to the concepts.
1 – The insight given was poorly organized and not related to the given topic.

Grammar Usage:
5 – No grammatical errors.
4 – Few errors in spelling and punctuation.
3 – Minimal number of errors.
2 – Moderate spelling and punctuation errors.
1 – Many spelling or grammatical errors.
What I Have Learned

How is it going? Did you have fun learning the topic? Let us recall what you have
learned! Write your answer on a separate sheet of paper.

Enumeration: Answer the following questions.

A. Give three (3) characteristics of the research instrument.
1. ____________________
2. ____________________
3. ____________________
B. List down three (3) types of the research instrument.
1. ____________________
2. ____________________
3. ____________________
C. Give the four (4) ways of collecting data in a descriptive type of research.
1. ____________________
2. ____________________
3. ____________________
4. ____________________
Assessment

After all the activities, now let us test your learning! No worries, I am pretty sure
that this challenge is very easy now. Good luck!

Multiple Choice: Choose the letter of the best answer. Write your answer on a separate sheet of paper.

_____ 1. In qualitative research, which of the following is the counterpart of ‘internal validity’ in quantitative research?
a) Credibility
b) Transferability
c) Dependability
d) Measurability

_____ 2. It is a tool that measures variables in the study and is designed to obtain
data on a topic of interest from the subjects of research.

a) Data Gathering
b) Research Instrument
c) Collection of data
d) Instrumentation

_____ 3. It pertains to the proportion of students who answered the test item
correctly.

a) Item analysis
b) Discrimination Index
c) Difficulty Index
d) Content Validation

_____ 4. This research instrument assesses one’s feelings, attitudes, beliefs, interests, personality, and values.

a) Aptitude Research Instrument
b) Descriptive Research
c) Quasi-Experimental Research
d) Affective Research Instrument

_____ 5. This research instrument measures intellectual processes such as problem-solving, analyzing, and reasoning.

a) Aptitude Research Instrument
b) Experimental Research Design
c) Affective Research Instrument
d) Cognitive Research Instrument

_____ 6. Construct validity refers to ______________.

a) The degree to which an instrument can forecast an outcome
b) A logical link between research instrument and objective
c) Statistical procedures establish the contribution of each important factor
d) How well an instrument compares with a second assessment concurrently done

_____ 7. Face validity refers to ___________________.

a) A logical link between research instrument and objective
b) The degree to which an instrument can forecast an outcome
c) Statistical procedures establish the contribution of each important factor
d) How well an instrument compares with a second assessment concurrently done

_____ 8. Which of the following is a factor affecting the reliability of a research instrument?

a) The wording of the questions
b) The physical setting
c) The respondent’s mood
d) All of the above

_____ 9. ‘Test/retest’ and ‘parallel forms of the same test’ are _________.

a) Internal consistency procedures to verify the validity of the measure
b) External consistency procedures to verify the reliability of a measure
c) Internal consistency procedures to verify the reliability of a measure
d) External consistency procedures to verify the validity of a measure

_____ 10. Which of the following is a procedure for verifying the internal consistency reliability of a measure?
a) Test/retest
b) Split-half technique
c) Parallel forms of the same test
d) None of the above
Additional Activities

Directions: As you have learned from this lesson, answer each question
comprehensively.

What type of research instrument applies to your study?


________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________

What type of research instrument is classified as nominal? Ordinal? Interval? Ratio? Explain.

________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________

RUBRICS FOR ESSAY

Quality of Writing:
5 – The insight given was excellent.
4 – The insight given was somewhat informative and organized.
3 – The insight given was vague and general.
2 – The insight given was not very organized and only somehow related to the concepts.
1 – The insight given was poorly organized and not related to the given topic.

Grammar Usage:
5 – No grammatical errors.
4 – Few errors in spelling and punctuation.
3 – Minimal number of errors.
2 – Moderate spelling and punctuation errors.
1 – Many spelling or grammatical errors.
Key to Answers
What I know

1. VALIDITY
2. OBJECTIVITY
3. COGNITIVE
4. APTITUDE
5. AFFECTIVE
6. RESEARCH INSTRUMENT

Assessment

1. A 6. C
2. B 7. A
3. C 8. D
4. D 9. B
5. D 10. B
References

Espinosa, Allen A. Practical Research 2 Quantitative Research. Diwa Publishing.

Esposito, J. L. (2002, November). Interactive, multiple-method questionnaire evaluation research: A case study. Paper presented at the International Conference on Questionnaire Development, Evaluation, and Testing (QDET) Methods, Charleston, SC.

Groves, R. M. (1987). Research on survey data quality. Public Opinion Quarterly, 51, 156-172.

Norland-Tilburg, E. V. (1990). Controlling error in evaluation instruments. Journal of Extension, [On-line], 28(2). Available at http://www.joe.org/joe/1990summer/tt2.html

Radhakrishna, R. B., Francisco, C. L., & Baggett, C. D. (2003). An analysis of research designs used in agricultural and extension education. Proceedings of the 30th National Agricultural Education Research Conference, 528-541.
For inquiries or feedback, please write or call:

Department of Education – Region III,


Schools Division of Bataan - Curriculum Implementation Division
Learning Resources Management and Development Section (LRMDS)

Provincial Capitol Compound, Balanga City, Bataan

Telefax: (047) 237-2102

Email Address: [email protected]
