
ASSIGNMENT

[RESEARCH
METHODOLOGY]
RELIABILITY AND VALIDITY

SUBMITTED TO: AR. MUHAMMAD TALHA


SUBMITTED BY: SAHER SAFDAR (20014795-006)
DATED: January 1, 2024
RELIABILITY
Reliability refers to whether you get the same answer when you use an instrument to
measure something more than once. In simple terms, research reliability is the degree to
which a research method produces stable and consistent results.
A specific measure is considered reliable if applying it to the same object of
measurement a number of times produces the same results.

Research reliability can be divided into four categories:


1. Test-retest reliability refers to the reliability obtained by administering the
same test more than once, over a period of time, to the same sample group.
Example: Employees of ABC Company may be asked to complete the same
questionnaire about employee job satisfaction twice, with an interval of one
week, so that the test results can be compared to assess the stability of scores.

2. Parallel forms reliability relates to a measure obtained by assessing the same
phenomenon, with the same sample group, via more than one assessment method.
Example: The levels of employee satisfaction at ABC Company may be assessed
with questionnaires, in-depth interviews and focus groups, and the results can
be compared.

3. Inter-rater reliability, as the name indicates, relates to the agreement between
sets of results obtained by different assessors using the same methods. The value
of assessing inter-rater reliability lies in the subjectivity of assessments.
Example: Levels of employee motivation at ABC Company can be assessed by two
different assessors using the observation method; inter-rater reliability relates
to the extent of agreement between the two assessments.

4. Internal consistency reliability assesses the extent to which test items that
explore the same construct produce similar results. It can be represented in two
main formats:
a) Average inter-item correlation, a specific form of internal consistency
obtained by correlating every pair of items that measure the same construct and
averaging those correlations.
b) Split-half reliability, another type of internal consistency reliability,
which involves splitting all the items of a test in half and correlating the
scores on the two halves.
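The reliability coefficients described above are usually computed as correlations. The sketch below illustrates two of them with entirely hypothetical job-satisfaction scores for ABC Company: test-retest reliability as a Pearson correlation between two administrations one week apart, and split-half reliability with the Spearman-Brown correction.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical satisfaction scores for five employees, measured twice
# with a one-week interval (test-retest reliability).
week1 = [4, 3, 5, 2, 4]
week2 = [4, 3, 4, 2, 5]
test_retest = pearson(week1, week2)

# Split-half reliability: correlate summed scores on the two halves of
# a test, then apply the Spearman-Brown correction to estimate the
# reliability of the full-length test.
odd_items  = [10, 12, 9, 14, 11]   # summed scores on odd-numbered items
even_items = [11, 12, 8, 13, 12]   # summed scores on even-numbered items
half_r = pearson(odd_items, even_items)
split_half = 2 * half_r / (1 + half_r)
```

Coefficients close to 1 indicate high reliability; values below about 0.7 are commonly treated as questionable, though the threshold depends on the field.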

VALIDITY
Validity refers to the extent to which a concept, measure, or study accurately represents the
reality it is intended to capture. It is a fundamental concept in research and assessment,
concerned with the soundness and appropriateness of the conclusions, inferences, or
interpretations drawn from the data or evidence collected.
Research Validity
Research validity refers to the degree to which a study accurately measures or reflects what
it claims to measure. In other words, research validity concerns whether the conclusions
drawn from a study are based on accurate, reliable and relevant data.
Validity is a concept used in logic and research methodology to assess the strength of an
argument or the quality of a research study. It refers to the extent to which a conclusion or
result is supported by evidence and reasoning.
There are four main types of validity:
Construct validity: Does the test measure the concept that it’s intended to measure?
Content validity: Is the test fully representative of what it aims to measure?
Face validity: Does the content of the test appear to be suitable to its aims?
Criterion validity: Do the results accurately measure the concrete outcome they are
designed to measure?

1. Construct validity
Construct validity evaluates whether a measurement tool really represents the thing we are
interested in measuring. It’s central to establishing the overall validity of a method.
Example
There is no objective, observable entity called “depression” that we can measure directly.
But based on existing psychological research and theory, we can measure depression based
on a collection of symptoms and indicators, such as low self-confidence and low energy
levels.

2. Content validity
Content validity assesses whether a test is representative of all aspects of the construct.
To produce valid results, the content of a test, survey or measurement method must cover
all relevant parts of the subject it aims to measure. If some aspects are missing from the
measurement (or if irrelevant aspects are included), the validity is threatened and the
research is likely suffering from omitted variable bias.
Example
A mathematics teacher develops an end-of-semester algebra test for her class. The test
should cover every form of algebra that was taught in the class. If some types of algebra are
left out, then the results may not be an accurate indication of students’ understanding of the
subject. Similarly, if she includes questions that are not related to algebra, the results are no
longer a valid measure of algebra knowledge.

3. Face validity
Face validity considers how suitable the content of a test seems to be on the surface. It’s
similar to content validity, but face validity is a more informal and subjective assessment.
Example
You create a survey to measure the regularity of people’s dietary habits. You review the
survey items, which ask questions about every meal of the day and snacks eaten in between
for every day of the week. On its surface, the survey seems like a good representation of
what you want to test, so you consider it to have high face validity.

4. Criterion validity
Criterion validity evaluates how well a test can predict a concrete outcome, or how well the
results of your test approximate the results of another test.
Example
A university professor creates a new test to measure applicants’ English writing ability. To
assess how well the test really does measure students’ writing ability, she finds an existing
test that is considered a valid measurement of English writing ability, and compares the
results when the same group of students take both tests. If the outcomes are very similar,
the new test has high criterion validity.
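The professor's comparison can be quantified as the correlation between scores on the new test and on the established test for the same group of students. A minimal sketch, with entirely made-up scores:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for the same five students on the professor's new
# writing test and on an established, already-validated test.
new_test    = [72, 85, 64, 90, 78]
established = [70, 88, 60, 92, 80]

# A correlation near 1 suggests high criterion validity of the new test.
criterion_r = pearson(new_test, established)
```

This form of criterion validity, where both tests are taken at the same time, is often called concurrent validity; using the new test to predict a later outcome would be predictive validity.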

RELATIONSHIP BETWEEN RELIABILITY AND VALIDITY


The relationship between reliability and validity is that they are both important aspects of
research quality, but they are not the same thing. Reliability is about the consistency of a
measure, while validity is about the accuracy of a measure.
A measurement can be reliable without being valid, but it cannot be valid unless it is
reliable. In other words, a reliable measurement can produce consistent results that are
not necessarily correct, whereas a valid measurement must produce results that are both
correct and consistent.

COMPARISON BETWEEN RELIABILITY AND VALIDITY

RELIABILITY: Reliability refers to how consistently a method measures something.
VALIDITY: Validity refers to how accurately a method measures what it is intended to measure.

RELIABILITY: A reliable measurement is not always valid: the results might be reproducible, but they are not necessarily correct.
VALIDITY: A valid measurement is generally reliable: if a test produces accurate results, they should be reproducible.

RELIABILITY: Reliability is assessed by checking the consistency of results across time, across different observers, and across parts of the test itself.
VALIDITY: Validity is assessed by checking how well the results correspond to established theories and other measures of the same concept.

RELIABILITY: Reliability is related to precision: how close the measurements are to each other.
VALIDITY: Validity is related to accuracy: how close the measurements are to the true value.

RELIABILITY: Reliability is easier to assess than validity.
VALIDITY: Validity is more difficult to assess than reliability.

RELIABILITY: Reliability is considered less valuable than validity.
VALIDITY: Validity is considered more valuable than reliability.

KEY DIFFERENCES BETWEEN VALIDITY AND RELIABILITY


The points presented below explain the fundamental differences between validity and
reliability:

1. The degree to which a scale gauges what it is designed to gauge is known as validity.
On the other hand, reliability refers to the degree of reproducibility of the results
when measurements are repeated.
2. When it comes to the instrument, a valid instrument is always reliable, but the reverse
is not true, i.e. a reliable instrument need not be a valid instrument.
3. While evaluating a multi-item scale, validity is considered more valuable than
reliability.
4. The reliability of a measuring instrument is easy to assess, whereas validity is more
difficult to assess.
5. Validity focuses on accuracy, i.e. it checks whether the scale produces the expected
results. Conversely, reliability concentrates on precision: the extent to which the
scale produces consistent outcomes.
