
Reaction Paper 1: Cornerstones of a Good Research

I initially viewed research from a purely academic perspective, but my perception has evolved now that I am a professional engaged in technical work that involves writing reports for decision-makers, and I now recognize research as an essential component of my role. The first presentation focused on what makes a quality research paper. Stressing the importance of employing reliable methods to obtain stable results, the discussion explored four reliability tests: test-retest reliability, which I associated with the assessment of training effectiveness; parallel-forms reliability, which I suspect would be costlier to execute; inter-rater reliability, which I related to the employee selection process; and internal consistency reliability, which I believe is often applied in psychological exams. I also understood that reliable results do not guarantee validity, as the subsequent discussion introduced Face Validity, Content Validity, Construct Validity, and Criterion-related Validity, each tied to specific considerations. The other presenters elaborated on measuring reliability through correlation and validity through quantitative and qualitative measures, emphasizing the importance of the scales of measurement. The notion that reliability measures consistency, not truthfulness, particularly resonated with me.
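To make the internal consistency idea concrete for myself, the following is a minimal sketch of my own, not taken from the presentation, that computes Cronbach's alpha in Python on a hypothetical set of item scores (the function name and the data are purely illustrative):

```python
# Illustrative Cronbach's alpha for internal consistency reliability,
# computed on a made-up respondents-by-items score matrix.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a matrix with one row per respondent, one column per item."""
    k = item_scores.shape[1]                          # number of items
    item_vars = item_scores.var(axis=0, ddof=1)       # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point responses from six respondents to four items.
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 4, 3, 3],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```

A value closer to 1 would suggest the items measure the same construct consistently, which is precisely the sense in which reliability speaks to consistency rather than truthfulness.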
The next topic was designing descriptive studies. I find surveys to be a commonly employed method, one that relies heavily on carefully constructed questionnaires and carefully conducted fieldwork to attain the desired level of reliability. Observational research is another approach, and I thought it might be time-consuming and potentially suited to smaller sample sizes in controlled settings, provided that a well-constructed "code sheet" is in place to minimize observer bias, among other issues. The discussion of various observation methods, including covert or overt, natural or contrived, and non-participatory or participatory, added depth to my understanding. The archival method, akin to secondary data gathering, emerged as the final common method, aligning with my research proposal's reliance on market studies and historical data. The presenter also provided an insightful summary of the advantages and disadvantages of the three approaches, which are linked to resource availability, reliability and validity requirements, expected outcomes, and researcher skills. I resonated with the observation that descriptive studies often overlook the "why," which made me wonder what alternative methods, if any, I could use for my study. As I considered the relevance of the archival method to my proposal, the presenters also explored various sampling methods, but beyond convenience sampling, none seemed suitable for my study.
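Since only convenience sampling seemed applicable to my study, I sketched for myself how it differs from a simple random sample; this is my own illustration using made-up archival record identifiers, not an example from the presentation:

```python
# Illustrative contrast between simple random sampling and convenience
# sampling over a hypothetical list of archival record IDs.
import random

records = [f"record-{i:03d}" for i in range(1, 101)]  # made-up archival records

# Simple random sampling: every record has an equal chance of selection.
random.seed(7)  # fixed seed so the illustration is repeatable
random_sample = random.sample(records, k=10)

# Convenience sampling: take whatever is easiest to reach, e.g. the first ten.
convenience_sample = records[:10]

print("Random sample:     ", random_sample)
print("Convenience sample:", convenience_sample)
```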
The third topic was describing samples, and it started with a brief discussion of ethical and practical considerations. I also noted that archival records can be treated as a sample, but may require me to fully understand how to analyze them. The
emphasis on maintaining participant anonymity and confidentiality, along with the challenge of
distilling important information without omitting relevant details, particularly struck a chord,
because I often do this when presenting information to non-researcher audiences. In exploring
methods for describing a sample, I found qualitative analysis intriguing as it seems to align
well with my proposal. However, the discussion was too vague, especially on how to properly
execute it. While descriptive statistics, which present sample characteristics through absolute
values, seemed more straightforward to comprehend, I questioned their relevance to my work
and the problem I plan to address. The subsequent discussion on descriptive statistics,
including its elements and the use of MS Excel, caught my attention, and I plan to dig deeper into these through practical exercises. The coverage of selecting appropriate statistics and graphs based on data type, and of using z-scores and percentiles to describe a sample,
provided valuable insights, piquing my interest in their practical application in my research
endeavors.
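As a first practical exercise of my own (not from the presentation), here is a minimal Python sketch of the descriptive measures mentioned, using a small, made-up sample; the data values are hypothetical:

```python
# Illustrative descriptive statistics on a hypothetical sample of measurements.
import numpy as np

sample = np.array([12.0, 15.5, 14.2, 13.8, 16.1, 15.0, 13.3, 14.9])

mean = sample.mean()
std = sample.std(ddof=1)            # sample standard deviation
z_scores = (sample - mean) / std    # how many SDs each value lies from the mean
p75 = np.percentile(sample, 75)     # 75th percentile of the sample

print(f"mean = {mean:.2f}, std = {std:.2f}")
print("z-scores:", np.round(z_scores, 2))
print(f"75th percentile = {p75:.2f}")
```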
The presentation on making inferences based on samples provided a comprehensive
overview of key statistical concepts. It highlighted the significance of differentiating between
populations and samples, emphasizing the efficiency of using samples in studies with large
populations. The foundation of inferential statistics in probability theory was underscored, with a focus on its application in hypothesis testing. The presentation covered the two competing hypotheses, the null and the alternative, and their role in comparing variables.
Notably, the necessity to reject the null hypothesis before establishing the alternative
hypothesis was emphasized. The introduction of one-tailed and two-tailed tests added depth
to the discussion. The points on incorporating hypothesis testing into research, especially in shaping proposals, resonated with me and encouraged me to start by formulating a hypothesis for my proposal. The subsequent introduction of the t-test, its three types, and the
sample test comparing p-value to the significance level provided valuable insights, though I
acknowledged the need to revisit the definition and computation process of the p-value for a
thorough understanding of the topic. Overall, the presentation has deepened my appreciation
for the nuanced and critical role of statistical inference in research.
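To revisit the p-value on my own, I drafted the following minimal sketch, assuming SciPy is available and using made-up data rather than the presenters' example; it runs an independent-samples t-test and compares the p-value to a 0.05 significance level:

```python
# Illustrative independent-samples t-test on two hypothetical groups.
from scipy import stats

group_a = [23.1, 25.4, 22.8, 26.0, 24.7, 23.9]   # e.g. scores before a change
group_b = [26.5, 27.1, 25.9, 28.3, 26.8, 27.6]   # e.g. scores after a change

alpha = 0.05                                     # chosen significance level
t_stat, p_value = stats.ttest_ind(group_a, group_b)

print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject the null hypothesis: the group means differ significantly.")
else:
    print("Fail to reject the null hypothesis: no significant difference detected.")
```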
Examining Relationships Among Variables was the focus of the fifth presentation,
which introduced Correlational Design as a method for testing hypotheses and exploring relationships between study variables without manipulating them, so that the results remain unbiased. The
limitations of correlational design were discussed, along with strategies for designing a
powerful correlational study. Through examples, I acknowledged that hypotheses can be
formed and tested for every correlational design. The presentation delved into basic statistics
for evaluating correlational research, utilizing Pearson's r to determine the direction and strength of the relationship between two variables. Graphically, a scatterplot or scattergram can be used to
visualize the relationship between two variables. The relationship between dichotomous and
interval/ratio variables was explored, and sample cases illustrated the interplay of Pearson's r, the p-value, the correlation, and the decision to accept or reject the null hypothesis. Linear regression, as a statistical method for estimating the relationship between a dependent and an independent variable, was discussed, with emphasis on its graphical representation. The presentation also covered data
analysis using computer programs, but I preferred the use of MS Excel examples over other
software. The distinction between correlation and regression analyses was also drawn: neither can prove causation, but I learned through quick research that the latter can help verify whether an assumed direction of causality is plausible. I intend to do further personal reading on causality to better understand the topic presented. Multiple regression analysis was also presented; it estimates the relationship between one dependent variable and
multiple independent variables. The presentation included a sample demonstration using MS
Excel. However, I would like to have more time and materials to understand how to interpret
the results.
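To tie Pearson's r, the p-value, and simple linear regression together for myself, here is a minimal sketch under my own assumptions (hypothetical data, SciPy available), not a reproduction of the presenters' MS Excel demonstration:

```python
# Illustrative correlation and simple linear regression on hypothetical data.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])   # independent variable
y = np.array([2.1, 2.9, 3.8, 5.2, 5.9, 7.1, 7.8, 9.2])   # dependent variable

# Pearson's r gives the direction and strength of the linear relationship;
# the p-value tests the null hypothesis of zero correlation.
r, p_value = stats.pearsonr(x, y)
print(f"Pearson's r = {r:.3f}, p = {p_value:.4f}")

# Simple linear regression: estimate slope and intercept of y = a*x + b.
result = stats.linregress(x, y)
print(f"slope = {result.slope:.3f}, intercept = {result.intercept:.3f}")
# Note: a strong fit describes association only; it does not prove causation.
```

Even when the slope and r look strong, this only describes association between the variables; as noted above, establishing causality requires more than a good fit.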
