PR2-Q2-Lesson-3
The document discusses the importance of constructing reliable and valid research instruments for quantitative research, emphasizing the need for careful preparation and consideration of various factors. It outlines characteristics of good research instruments, methods for developing them, and common scales used in quantitative research, such as the Likert scale and semantic differential. Additionally, it explains different types of validity and reliability, providing examples to illustrate each concept.
REVIEW
RELIABILITY AND VALIDITY
LESSON 3: RESEARCH INSTRUMENT, VALIDITY AND RELIABILITY

WHAT YOU NEED TO KNOW
➢ Quantitative Research Instrument
What do you think will happen if the tools for building a house are not prepared meticulously? The same is true when gathering information to answer a research problem: the tools, or instruments, should be prepared carefully. In constructing a quantitative research instrument, it is very important to remember that the tool should elicit responses or data that can be analyzed numerically.

WHAT YOU NEED TO KNOW
➢ Research instruments are the basic tools researchers use to gather data for specific research problems.
➢ Common instruments are performance tests, questionnaires, interviews, and observation checklists.
➢ The first two instruments are usually used in quantitative research, while the last two are more often used in qualitative research.
➢ However, interviews and observation checklists can still be used in quantitative research once the information gathered is translated into numerical data.

WHAT IS IT?
➢ In constructing the research instrument of a study, there are many factors to consider.
➢ The type of instrument, the reasons for choosing that type, and the description and conceptual definition of its parts are some of the factors that need to be decided before constructing a research instrument.
➢ Furthermore, it is also very important to understand the scales used in research instruments and how to establish the validity and reliability of an instrument.

CHARACTERISTICS OF A GOOD RESEARCH INSTRUMENT
1. CONCISE
➢ Have you ever answered a very long test and, because of its length, simply picked answers without even reading the items? A good research instrument is concise in length yet still elicits the needed data.
2. SEQUENTIAL
➢ Questions or items must be arranged well. It is recommended to arrange them from the simplest to the most complex. In this way, the instrument is easier for the respondents to answer.
3. VALID AND RELIABLE
➢ The instrument should pass the tests of validity and reliability so that it yields appropriate and accurate information.
4. EASILY TABULATED
➢ Since you will be constructing an instrument for quantitative research, this factor should be considered. Before crafting the instrument, the researcher makes sure that the variables and research questions are established, since these are an important basis for writing the items of the research instrument.

WAYS OF DEVELOPING A RESEARCH INSTRUMENT
➢ There are three ways you can consider in developing the research instrument for your study.
➢ The first is adopting an instrument that has already been utilized in previous related studies.
➢ The second is modifying an existing instrument when the available instruments do not yield the exact data that will answer the research problem.
➢ The third is making your own instrument that corresponds to the variables and scope of your current study.

COMMON SCALES USED IN QUANTITATIVE RESEARCH
1. Likert Scale. This is the most common scale used in quantitative research.
❖ Respondents are asked to rate or rank statements according to the scale provided.
❖ Example: A Likert scale that measures the attitude of students towards distance learning (a tabulation sketch follows this list).
2. Semantic Differential. In this scale, a series of bipolar adjectives are rated by the respondents. This scale is often more advantageous because it is more flexible and easier to construct.
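The following is a minimal sketch, in Python, of how Likert-type responses can be converted into numbers and tabulated, which is what makes this scale convenient for quantitative analysis. The 5-point coding and the sample responses are assumptions for illustration only, not part of the lesson's example instrument.

```python
# Minimal sketch: tabulating hypothetical Likert-scale responses.
# The 5-point coding (5 = Strongly Agree ... 1 = Strongly Disagree)
# and the sample data below are assumptions for illustration only.
from statistics import mean

SCALE = {
    "Strongly Agree": 5,
    "Agree": 4,
    "Neutral": 3,
    "Disagree": 2,
    "Strongly Disagree": 1,
}

# Each inner list holds one respondent's answers to three statements
# about distance learning (hypothetical data).
responses = [
    ["Agree", "Strongly Agree", "Neutral"],
    ["Disagree", "Agree", "Agree"],
    ["Strongly Agree", "Agree", "Strongly Disagree"],
]

# Convert the verbal responses to their numerical equivalents.
scores = [[SCALE[answer] for answer in row] for row in responses]

# Weighted mean per statement, a common way to summarize Likert data.
for i in range(len(scores[0])):
    item_scores = [row[i] for row in scores]
    print(f"Statement {i + 1}: mean rating = {mean(item_scores):.2f}")
```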
TYPES OF VALIDITY OF INSTRUMENT
➢ Validity refers to the appropriateness of the research instrument: whether it actually measures what it is intended to measure.
1. Face Validity. It is also known as "logical validity" or surface validity; it refers to the extent to which a research instrument appears to measure what it is supposed to measure.
✓ It calls for an intuitive judgment of the instrument as it "appears."
✓ Just by looking at the instrument, the researcher decides if it is valid.
❖ Example: You create a survey to measure the regularity of people's dietary habits. You review the survey items, which ask questions about every meal of the day and the snacks eaten in between for every day of the week. On its surface, the survey seems like a good representation of what you want to test, so you consider it to have high face validity.
2. Content Validity. An instrument that is judged to have content validity meets the objectives of the study.
✓ It is done by checking whether the statements or questions elicit the needed information.
✓ Experts in the field of interest can also provide the specific elements that should be measured by the instrument.
❖ Example: A mathematics teacher develops an end-of-semester algebra test for her class. The test should cover every form of algebra that was taught in the class. If some types of algebra are left out, the results may not be an accurate indication of the students' understanding of the subject. Similarly, if she includes questions that are not related to algebra, the results are no longer a valid measure of algebra knowledge.
3. Construct Validity. It refers to the validity of an instrument as it corresponds to the theoretical construct of the study.
✓ It is concerned with whether a specific measure relates to other measures as the underlying theory suggests.
❖ Example: There is no objective, observable entity called "depression" that we can measure directly, but based on existing psychological research and theory, we can measure depression through a collection of symptoms and indicators, such as low self-confidence and low energy. (F. Middleton, 2019)
4. Concurrent Validity. When the instrument produces results similar to those of similar tests that have already been validated, it has concurrent validity.
❖ Concurrent validity denotes assessments that give similar results when used within a short time frame. For example, a therapist may use two separate depression scales with a patient to confirm a diagnosis. As long as both assessments give the same results, they are concurrently valid.
5. Predictive Validity. When the instrument can predict results similar to those of tests that will be employed in the future, it has predictive validity. This is particularly useful for aptitude tests.
❖ Predictive validity is the degree to which test scores accurately predict scores on a criterion measure. A conspicuous example is the degree to which college admissions test scores predict college grade point average (GPA).
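As a rough illustration of how predictive validity can be quantified, the sketch below computes the Pearson correlation between admission-test scores and later GPA; a strong positive correlation would suggest the test predicts the criterion well. The scores and GPAs are hypothetical, and this is only one common way of examining predictive validity, not a prescribed procedure.

```python
# Illustrative sketch: quantifying predictive validity as the correlation
# between an instrument's scores and a future criterion measure.
# The admission-test scores and GPAs below are hypothetical.
import numpy as np

admission_scores = np.array([78, 85, 62, 90, 70, 88, 95, 67])
college_gpa = np.array([2.9, 3.4, 2.5, 3.7, 2.8, 3.5, 3.9, 2.6])

# Pearson correlation coefficient between the test and the criterion.
r = np.corrcoef(admission_scores, college_gpa)[0, 1]
print(f"Correlation between admission scores and GPA: r = {r:.2f}")
# A value of r close to +1 indicates strong predictive validity;
# a value near 0 indicates the test does not predict the criterion.
```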
TYPES OF RELIABILITY OF INSTRUMENT
1. TEST-RETEST RELIABILITY
➢ It is achieved by giving the same test to the same group of respondents twice; the consistency of the two sets of scores is then checked. (A sample computation, together with one for internal consistency, is sketched after this list.)
❖ For example, a group of respondents is tested for IQ: each respondent is tested twice, with the two administrations about a month apart. The correlation coefficient between the two sets of IQ scores is a reasonable measure of the test-retest reliability of this test.
2. EQUIVALENT FORMS RELIABILITY
➢ It is established by administering to the same group of respondents two tests that are identical except for their wording.
❖ Standardized tests: in education, equivalent-forms reliability is often used to ensure that different versions of standardized tests measure the same knowledge and skills.
❖ For example, if a student takes two different versions of a math test, the scores should be consistent if the tests are equivalent.
3. INTERNAL CONSISTENCY RELIABILITY
➢ It determines how well the items measure the same construct; it is reasonable to expect that a respondent who gets a high score on one item will also get a high score on similar items.
❖ For example, a question about the internal consistency of the PDS might read, "How well do all of the items on the PDS, which are proposed to measure PTSD, produce consistent results?" If all items on a test measure the same construct or idea, then the test has internal consistency reliability.
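The sketch below, in Python with hypothetical scores, shows how two of the reliability estimates mentioned above can be computed: the test-retest correlation between two administrations of the same test, and Cronbach's alpha, a widely used statistic for internal consistency. The data and the five-item instrument are assumptions for illustration only.

```python
# Illustrative sketch of two reliability estimates with hypothetical data.
import numpy as np

# --- Test-retest reliability ---------------------------------------------
# IQ scores of the same respondents tested twice, a month apart (hypothetical).
first_test = np.array([102, 95, 110, 88, 120, 99])
second_test = np.array([105, 93, 112, 90, 118, 101])

# The correlation between the two administrations is the test-retest estimate.
test_retest_r = np.corrcoef(first_test, second_test)[0, 1]
print(f"Test-retest reliability: r = {test_retest_r:.2f}")

# --- Internal consistency (Cronbach's alpha) ------------------------------
# Rows are respondents, columns are five items intended to measure
# the same construct (hypothetical Likert-type scores).
items = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 2, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 1, 2, 2, 1],
    [4, 4, 3, 4, 4],
])

k = items.shape[1]                              # number of items
item_variances = items.var(axis=0, ddof=1)      # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)  # variance of total scores
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha: {alpha:.2f}")
# Values of alpha around 0.70 or higher are usually taken to indicate
# acceptable internal consistency.
```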
SHORT QUIZ
Identify what is described in each of the following statements:
1. These are the basic tools researchers use to gather data for specific research problems.
2. This is the most common scale used in quantitative research.
3. It calls for an intuitive judgment of the instrument as it "appears."
4. It is done by checking whether the statements or questions elicit the needed information.
5. When the instrument produces results similar to those of similar tests that have already been validated.
6. When the instrument can predict results similar to those of tests that will be employed in the future.
7. It is achieved by giving the same test to the same group of respondents twice; the consistency of the two sets of scores is checked.
8. It is established by administering two tests that are identical except for their wording to the same group of respondents.
9. It determines how well the items measure the same construct; it is reasonable that when a respondent gets a high score on one item, he or she will also get a high score on similar items.
10. It refers to the appropriateness of the research instrument.

THANK YOU!