RESEARCH ARTICLE

Evaluating teachers' pedagogical content knowledge in implementing classroom-based assessment: A case study among ESL secondary school teachers in Selangor, Malaysia

Rafiza Abdul Razak, Shahazwan Mat Yusoff, Chin Hai Leng, Anwar Farhan Mohamadd Marzaini

1 Department of Curriculum & Instructional Technology, Universiti Malaya, Kuala Lumpur, Malaysia
2 Academy of Language Studies, Universiti Teknologi MARA, Pulau Pinang, Malaysia

* [email protected]

Abstract

The Malaysian Education Blueprint (PPPM) 2013–2025 has spurred significant reforms in the Primary School Standard Curriculum (KSSR) and Secondary School Standard Curriculum (KSSM), particularly concerning classroom-based assessment (CBA). CBA evaluates students' understanding and progress, informs instruction, and enhances learning outcomes. Teachers with robust pedagogical content knowledge (PCK) are better equipped to design and implement effective CBA strategies that accurately assess students' comprehension and growth, provide personalised feedback, and guide instruction. This study aims to investigate the relationship between PCK and CBA among English as a Second Language (ESL) secondary school teachers in Selangor, Malaysia. A 5-point Likert-scale questionnaire was administered to 338 teachers across 27 regional secondary schools in Selangor. Covariance-based structural equation modelling (SEM) was used to analyse the data. The findings revealed that the secondary school teachers demonstrated a high level of PCK, with content knowledge (CK) obtaining the highest mean, followed by pedagogical knowledge (PK) and pedagogical content knowledge (PCK). The CBA practices among these teachers were also found to be high. SEM analysis showed a positive association between PK and CBA practices and between PCK and CBA. However, no positive association was observed between CK and CBA practices. In order to enhance teachers' PCK and ensure the effective implementation of CBA, which is crucial for student learning outcomes in Malaysian ESL secondary schools, it is recommended that continuous professional development opportunities be provided, specifically focusing on PCK and CBA.

Citation: Abdul Razak R, Mat Yusoff S, Hai Leng C, Mohamadd Marzaini AF (2023) Evaluating teachers' pedagogical content knowledge in implementing classroom-based assessment: A case study among ESL secondary school teachers in Selangor, Malaysia. PLoS ONE 18(12): e0293325. https://doi.org/10.1371/journal.pone.0293325

Editor: Ahmad Samed Al-Adwan, Al-Ahliyya Amman University, JORDAN

Received: June 11, 2023; Accepted: October 10, 2023; Published: December 29, 2023

Peer Review History: PLOS recognizes the benefits of transparency in the peer review process; therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. The editorial history of this article is available here: https://doi.org/10.1371/journal.pone.0293325
educational experiences of young language learners. Thus, the research questions in this study
are formulated as below:
Research questions:
1. What is the relationship between ESL teachers' content knowledge and classroom-based
assessment practices among secondary school teachers?
2. What is the relationship between ESL teachers’ pedagogical knowledge and classroom-
based assessment practices among secondary school teachers?
3. What is the relationship between ESL teachers’ pedagogical content knowledge and class-
room-based assessment practices among secondary school teachers?
Literature review
The classroom-based assessment
Various studies related to classroom assessment utilised different terms to describe the proce-
dure for monitoring the students’ progress. The terms "approach," "tools and instruments,"
and "assessment procedures" have all been used interchangeably [9–12]. Teachers use various
exams to give students the chance to succeed and exhibit their skills. In contrast to large-scale standardised testing, which frequently uses complex psychometric techniques, classroom assessment allows teachers to choose the best assessment strategy to acquire precise information on students' learning progress.
Teachers utilise various assessment techniques to gauge their students’ learning, depending
on the learning context and how well the technique fits the needs of obtaining accurate data.
For decades, researchers and educators have studied the numerous techniques teachers use to gather information about their students, and a growing corpus of research offers helpful insights into teachers' strategies [13–15].
A diverse range of assessment methods is deemed crucial for enhancing learning, as it enables teachers to gather multiple sources of evidence that provide a comprehensive understanding of their students' progress towards their learning objectives [10]. According to Butler & McMunn
[10], information about students’ progress can be obtained through (1) General methods,
including teacher-student interactions, questioning, and observation, and (2) Specific meth-
ods, such as oral presentations, written essays, or multiple-choice exams. These specific assess-
ment methods are usually categorised into two broad categories, which may vary in
nomenclature based on the researcher’s preference.
In academic literature, the words "conventional assessment" and "alternative assessment"
are frequently used to distinguish between closed-ended and open-ended questions, respec-
tively. Multiple-choice, true/false, matching, and other "chosen response" items are frequently
seen in traditional exams [16]. The term "paper-and-pencil evaluations" is occasionally used to
describe these tests [12]. However, "performance-based exams," which include oral presenta-
tions and compositional exercises like poem writing and recitation, are a kind of alternative
assessment that permits a show of learning [16].
Alternative assessment is frequently used to evaluate students' knowledge exhibited through real-world tasks and activities. Unlike conventional paper-and-pencil tests, which typically measure verbal ability, alternative evaluations engage pupils in active displays of their knowledge. Although more subjective, alternative assessments are considered valuable for offering a more thorough evaluation of students' learning, and they are frequently used in conjunction with standardised examinations to increase objectivity and fairness in assessments [17].
There has been some debate among researchers regarding the effectiveness of paper and pencil assessments in capturing crucial aspects of student learning. Some authors argue that such assessments fall short in assessing higher-order skills like generative thinking, collaboration, and sustained effort over time [18]. However, the findings of a study by Hawes
et al. [19] on the relationship between kindergarten students’ symbolic number comparison
abilities and future mathematics achievement suggest that paper and pencil assessments can be
a valid tool for assessing early math skills. In contrast, performance-based assessments, such as
portfolios, computer simulations, oral presentations, and projects requiring academic back-
ground, teamwork, and problem-solving abilities, have been touted as a better means of evalu-
ating these skills [18].
In recent research, the term "paper and pencil-based assessment" has been used to describe
a form of evaluation that consists of closed-ended and more structured questions that are usu-
ally administered on paper. Such assessments include objective evaluations where students are
required to select answers rather than generate them, and to provide concise responses to
inquiries in which they have limited personal involvement [20]. On the other hand, "perfor-
mance assessment" refers to forms of evaluation such as oral presentations, acting, and debates
that allow students to demonstrate their knowledge in various ways. According to Stiggins [12], paper and pencil-based assessments can encompass teacher-made tests, quizzes, homework, and seatwork exercises and assignments, whereas performance assessment relies on observations of students' behaviours and evaluations of their work products.
An approach that is more student-centred and encourages social learning processes, where students actively participate in the teaching and learning process, is deemed more effective than teacher-dominated methods [32].
Moreover, Harris & Brown [33] maintain that assessment results can be utilised for
accountability purposes concerning students, teachers, and schools. Specifically, evidence from classroom assessments can be used to evaluate the effectiveness of schools (school accountability),
teaching quality (teacher accountability), and student learning outcomes (student accountabil-
ity). It is important to note that accountability assessments are typically more connected with
summative assessments than formative assessment practices [31].
Teaching draws on knowledge of areas such as learner development, motivation, student needs and behaviour, and understanding of subject matter.
The knowledge requirements for educators have undergone significant evolution throughout
the history of education. For example, in the 1500s, Paris University required teachers to pos-
sess a high degree of knowledge. By the mid-1980s, teacher training and education programs
had emphasised pedagogy more, with content knowledge as a secondary consideration [38].
Shulman [38] introduced the concept of PCK, which calls for integrating pedagogical and
subject matter knowledge in teaching. He suggested that treating teaching methodology and
content as separate entities is inadequate and that a more comprehensive approach is required
to consider the specific needs and requirements of various curriculum areas. Over the years,
the PCK framework has gained wide recognition and is frequently cited in educational
research and literature. Shulman introduced the concept of PCK in his keynote address to the
American Educational Research Association in 1985, which was subsequently published in
1986. Initially, PCK was regarded as a subset of content knowledge, aligned with Shulman’s
perspective on the significance of subject knowledge in education. In 1987, Shulman listed PCK as one of seven components of a professional knowledge base for educational practitioners, further elaborating his ideas. This model is a crucial foundation
for substantial academic research in education and is considered an essential paradigm for
educational scholars and practitioners.
The key to a teacher's distinctive knowledge base for teaching lies in the ability to transform the subject-matter knowledge they possess into forms that are pedagogically powerful yet adaptable to the variations in ability and background presented by the pupils.
Methodology
Research design
The primary objective of this quantitative study, employing a survey research design, is to
assess teachers’ pedagogical content knowledge and classroom-based assessment practices.
The researcher adopted a quantitative approach, explicitly utilising a survey research design, to
comprehensively understand the educational context, particularly concerning teachers’ per-
ceptions, beliefs, and behaviours. Survey research is a widely used method in the social sciences, enabling researchers to systematically collect information related to human perceptions, beliefs, and behaviours [59]. Consequently, this method is well-suited for the present study, effectively addressing the research objectives. The quantitative approach allows for the measurement of variables and facilitates the identification of relationships and patterns among them, providing a robust understanding of the factors impacting pedagogical content knowledge and
classroom-based assessment practices. By employing a survey research design, this study has
the potential to uncover valuable insights into the intricacies of pedagogical content knowledge
and assessment practices in the educational context, which can, in turn, contribute to the
development of practical teacher training programs and strategies. Ultimately, this method
can support the broader goal of enhancing educational outcomes by shedding light on the cru-
cial role of teachers’ pedagogical content knowledge and classroom-based assessment practices
in the teaching and learning process.
Sample
This study employed a simple random sampling method to ensure a representative sample of
ESL teachers in secondary schools in Selangor, Malaysia, from January 2023 to April 2023. The
researchers upheld ethical standards by securing permission letters for distributing the survey
instrument and obtaining approvals from both the Educational Policy Planning and Research
Division of the Ministry of Education Malaysia (approval letter number KPM.600-3/2/3-eras
(13277)) and the University Malaya Research Ethics Committee (Non-medical) (approval let-
ter number UM.TNC 2/UMREC). Prior to participating, written consent was obtained from
all respondents.
To ensure the generalizability of research findings to the broader population, a widely rec-
ognized probability sampling technique was utilized, affording every member of the target
population an equal opportunity for selection. This approach minimizes selection bias. The
sample was drawn from 27 public secondary schools located in the Petaling Utama district of
Selangor, encompassing 338 secondary school teachers. The gender distribution among the
participants included 73 males and 265 females. Teaching experience varied from 1 year to
more than 20 years, and participants’ academic qualifications are detailed in Table 1. This
diverse sample allows for a comprehensive exploration of teaching strategies employed by ESL
teachers with diverse backgrounds and levels of experience.
Instrumentation
For this study, a single survey was employed to collect data, adapted from Goggin [60], empha-
sising classroom-based assessment practices. The survey was developed by referencing "Class-
room Assessment for Student Learning" by Chappuis et al. [61]. Five individuals, including
four secondary school teachers and one teacher educator from University A, were consulted
based on their experience with classroom-based assessment to ensure face validity. To establish
content validity, the survey was distributed to twenty experts in the field through email,
Researchgate, and social media platforms such as WhatsApp, Telegram, and Facebook
Messenger. The experts were selected based on their research experience, educational back-
ground, and accomplishments in related research activities [62]. Of the twenty experts
approached, ten agreed to participate, two declined due to time constraints, and the remainder
have yet to respond. As suggested by Hong et al. [63], a sample of experts with experience in
the research area can enhance content validity and minimise biases in the research instrument.
The experts were asked to evaluate the instrument’s relevance, clarity, and simplicity by com-
pleting a questionnaire. This process helped ensure that the survey used in the study was both
reliable and valid, thus contributing to the overall quality and rigour of the research findings.
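The specific index used to summarise the experts' ratings is not reported; one common choice, consistent with the Lynn [62] reference cited above, is the content validity index (CVI). The following R sketch illustrates that computation on a purely hypothetical ratings matrix (`ratings`, ten experts by twenty items); it is an illustration of the technique, not the authors' actual procedure.

```r
# Hypothetical example: 10 experts rate 20 items for relevance on a 4-point scale
# (1 = not relevant ... 4 = highly relevant). Real ratings would replace this matrix.
set.seed(1)
ratings <- matrix(sample(1:4, 10 * 20, replace = TRUE, prob = c(.05, .10, .40, .45)),
                  nrow = 10, ncol = 20,
                  dimnames = list(paste0("expert", 1:10), paste0("item", 1:20)))

# Item-level CVI: proportion of experts rating the item 3 or 4 (Lynn, 1986).
i_cvi <- colMeans(ratings >= 3)

# Scale-level CVI (average approach): mean of the item-level indices.
s_cvi_ave <- mean(i_cvi)

round(i_cvi, 2)
round(s_cvi_ave, 2)
# Items with a noticeably low I-CVI would typically be revised or dropped.
```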
Data analysis
The researchers conducted screening tests in SPSS to address issues related to missing values, straightlining, and outliers. The researchers assessed missing values by implementing
the count blank method. To detect straightlining, the researchers computed the standard deviation of each respondent's answers and verified that it was non-zero, as described by Hair et al. [64]. The team also performed an outlier test using the Mahalanobis Distance. After completing these steps, the researchers proceeded with a descriptive analysis.
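The screening itself was carried out in SPSS; purely as an illustration, the sketch below reproduces the same three checks in R, the language later used for the SEM analysis. The data frame `survey`, its simulated Likert responses, and the item names (ck1–ck4, pk1–pk4, pck1–pck4, cba1–cba6) are hypothetical placeholders rather than the actual instrument.

```r
# Hypothetical data frame of 338 respondents; item names and simulated Likert
# responses are placeholders for the real survey data.
set.seed(42)
item_names <- c(paste0("ck", 1:4), paste0("pk", 1:4), paste0("pck", 1:4), paste0("cba", 1:6))
survey <- as.data.frame(matrix(sample(1:5, 338 * length(item_names), replace = TRUE),
                               nrow = 338, dimnames = list(NULL, item_names)))

# 1. Missing values: blank counts per item (the "count blank" logic).
colSums(is.na(survey))

# 2. Straightlining: respondents whose answers show zero standard deviation.
straightliners <- which(apply(survey, 1, sd, na.rm = TRUE) == 0)

# 3. Multivariate outliers: Mahalanobis distance against a chi-square cut-off (p < .001 here).
d2       <- mahalanobis(survey, center = colMeans(survey), cov = cov(survey))
outliers <- which(d2 > qchisq(0.999, df = ncol(survey)))

length(straightliners); length(outliers)
```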
The researchers used PLS-SEM for additional analysis, employing the SmartPLS software. This method maximises the variance explained in the endogenous variables by the exogenous variables and thus focuses on prediction, as explained by Hair et al. [64] and Rigdon et al. [65]. Because it is suitable for testing theoretically supported and additive causal models, as Chin [66] and Haenlein & Kaplan [67] suggest, the team used it in line with the proposed theoretical framework, which is an exploratory model. In contrast, CB-SEM tests the alignment between the proposed theoretical model and the observed data, as per Ramayah et al. [71], and is therefore effective when working with confirmatory models. Wold [68] initially developed the approach, and his two disciples, Lohmöller and Jöreskog, refined it: Lohmöller adapted it into PLS-SEM, while Jöreskog developed it into CB-SEM.
Before progressing with hypothesis testing, the researchers verified internal consistency and
convergent and discriminant validity. The researchers applied internal consistency to highlight
the interrelation among items, as per Sekaran & Bougie [69], requiring a composite reliability value of at least 0.7 [64, 69]. The researchers examined convergent
validity through factor loadings and average variance extracted (AVE), demonstrating the
degree to which specific indicators of a construct share a substantial proportion of variance,
following Hair et al. [64]. They achieved convergent validity when factor loading values were
at least 0.5 while also maintaining an AVE score above 0.5, according to Byrne [70]. The
researchers also confirmed discriminant validity, demonstrating that each latent variable is dis-
tinct from others, pointing to the uniqueness of constructs [64, 71]. The researchers checked
discriminant validity using the advanced method of the heterotrait-monotrait ratio of correla-
tions (HTMT), with a threshold value of 0.9, as suggested by Franke & Sarstedt [72]. The
researchers then conducted a lateral collinearity assessment with a cut-off value of 3.3 to pre-
vent misleading results, based on Diamantopoulos & Siguaw [73]. Following these validation
steps, the team performed hypothesis tests using structural modelling.
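As an illustration of the two cut-offs just described, the sketch below computes the HTMT ratio with semTools and a lateral collinearity (VIF) check on unit-weighted composite scores, continuing with the hypothetical `survey` data frame from the earlier sketch. The measurement syntax `mm` and the construct-item mapping are assumed for illustration only and do not reproduce the authors' instrument.

```r
library(semTools)  # htmt()
library(car)       # vif()

# Hypothetical measurement syntax: four reflective constructs (illustrative item mapping).
mm <- '
  CK  =~ ck1 + ck2 + ck3 + ck4
  PK  =~ pk1 + pk2 + pk3 + pk4
  PCK =~ pck1 + pck2 + pck3 + pck4
  CBA =~ cba1 + cba2 + cba3 + cba4 + cba5 + cba6
'

# HTMT matrix: values below the 0.90 threshold support discriminant validity [72].
htmt(mm, data = survey)

# Lateral collinearity: VIFs of unit-weighted predictor composites should stay below 3.3 [73].
scores <- data.frame(
  CK  = rowMeans(survey[, paste0("ck",  1:4)]),
  PK  = rowMeans(survey[, paste0("pk",  1:4)]),
  PCK = rowMeans(survey[, paste0("pck", 1:4)]),
  CBA = rowMeans(survey[, paste0("cba", 1:6)])
)
vif(lm(CBA ~ CK + PK + PCK, data = scores))
```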
Results
The measurement model
The structural equation modelling (SEM) technique, which links measurement items to their
underlying latent variables, necessitates establishing a measurement model. The present study
provides the theoretical foundations and statistical analysis to support the validity and reliabil-
ity of the measurement model. The analysis used the R programming language, utilising the
lavaan package for SEM analysis [74].
Normality check
The researchers use multivariate and univariate normality tests to determine whether the measurement items are normally distributed before moving forward with the confirmatory factor analysis (CFA) model, since the choice of estimation method in CFA (and SEM) depends on the normality of the data. The Shapiro-Wilk test (all p-values < 0.05) rejects the null hypothesis of univariate normality for all measurement items, just as the Mardia test (p-value < 0.05) rejects the null hypothesis of multivariate normality. To estimate the measurement model, the researchers therefore employ the maximum likelihood robust (MLR) estimator, also known as the Satorra-Bentler rescaling approach [74].
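A minimal sketch of these steps in R, reusing the hypothetical `mm` syntax and `survey` data from the sketches above, might look as follows; shapiro.test() and psych::mardia() stand in for whichever implementations of the univariate and multivariate tests were actually used.

```r
library(lavaan)
library(psych)   # mardia()

# Univariate normality: Shapiro-Wilk p-values for every item (values < .05 reject normality).
round(sapply(survey, function(x) shapiro.test(x)$p.value), 3)

# Multivariate normality: Mardia's skewness and kurtosis tests.
mardia(survey, plot = FALSE)

# With non-normal items, estimate the CFA using the robust Satorra-Bentler (MLR) estimator [74].
fit <- cfa(mm, data = survey, estimator = "MLR")

# Standardised loadings and robust fit indices for the measurement model.
standardizedSolution(fit)
fitMeasures(fit, c("cfi.robust", "tli.robust", "rmsea.robust", "srmr"))
```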
construct (see Table 2). This model was later confirmed using confirmatory factor analysis
(CFA). The right column of Table 2 displays the standardised factor loadings of the CFA
model. All of them are statistically significant (p-value < 0.001), indicating that the items properly reflect the underlying latent constructs. This attests to the measurement model's convergent validity [9]. Additionally shown in Table 2 are each factor's
composite reliability (CR) and Cronbach’s alpha [78]. According to Hair et al. [79], all factors’
Cronbach’s Alpha and CR values must be higher than the specified cutoff point of 0.70. Refer-
ring to Table 2, although the Cronbach's alpha values for pedagogical content knowledge and content knowledge do not exceed 0.70 (moderate Cronbach's alpha), the researchers decided to retain these constructs because, as Pallant [80] notes, it is difficult to obtain a high Cronbach's alpha when a construct has fewer than 10 items, and a value above 0.5 is acceptable in that case. Moreover, high composite reliability and average variance extracted (AVE), together with a moderate to high Cronbach's alpha, indicate a good measurement model in partial least squares structural equation modelling (PLS-SEM) [79, 81]. The researchers therefore decided not to delete any items, and the measurement model's reliability is confirmed.
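Continuing the same hypothetical example, the reliability quantities discussed here (Cronbach's alpha, composite reliability, and AVE) can be extracted from the fitted CFA object with semTools; the construct names are the assumed ones from the earlier sketches, not the study's actual constructs.

```r
library(semTools)

# Cronbach's alpha, composite reliability (omega), and AVE for each construct,
# computed from the CFA fit of the earlier sketch. Newer semTools releases expose
# the same quantities through compRelSEM() and AVE().
rel <- reliability(fit)
rel[c("alpha", "omega", "avevar"), ]   # compare against 0.70, 0.70, and 0.50 respectively
```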
Divergent or discriminant validity (DV) is established when constructs that should not be related to one another are indeed unrelated. This can be confirmed by contrasting the average variance extracted (AVE) with the squared correlations of all latent variables in a matrix, as illustrated in Table 3. According to Hair et al. [79], the squared correlations below the diagonal must be less than the AVE of each latent variable to indicate DV. Table 3 shows that all squared correlations fall below the corresponding AVE values, confirming the DV of the latent variables.
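A sketch of this Fornell-Larcker style comparison, again reusing the hypothetical CFA fit from above: each construct's AVE is checked against its largest squared correlation with the other latent variables.

```r
# Fornell-Larcker style check on the hypothetical CFA fit: each construct's AVE
# should exceed its squared correlations with every other latent variable.
ave     <- semTools::reliability(fit)["avevar", ]
sq_corr <- lavaan::lavInspect(fit, "cor.lv")^2
diag(sq_corr) <- NA                     # ignore each construct's correlation with itself
ave > apply(sq_corr, 1, max, na.rm = TRUE)
```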
The Root Mean Square Error of Approximation (RMSEA) and the Standardized Root Mean Square Residual (SRMR) are below the cut-off value of 0.08, and the Comparative Fit Index (CFI) and Tucker-Lewis Index (TLI) are above the recommended threshold of 0.90 [79], providing further evidence that the measurement model fits the data well (refer to Table 2). After establishing the measurement model, the researchers present the descriptive statistics in Table 4.
It should be noted that the mean and standard deviation (SD) values are determined by taking the arithmetic mean of the item scores that measure the respective latent variables. The correlation matrix, based on factor scores obtained from the confirmatory factor analysis (CFA), shows the correlations between the latent variables.
Discussion
Having established the measurement model in the previous sections, the researchers move on to the structural model to examine the relationships between the latent variables. Once more, the researchers estimate the SEM using the MLR method, as Rosseel [74] advises for non-normal data. Fig 2 presents the estimated SEM. In complex SEM research with more than 12 measurement items, as in this study, it can be challenging to obtain a theoretical structural model that reproduces the observed data at the 5% level of statistical significance [79]. According to
Bollen & Long [86], the chi-square statistic to degrees of freedom (DF) ratio in these circum-
stances should be less than three. The estimated SEM model (707.381/313 = 2.260) meets this requirement, indicating a robust model fit. The other model-fit conditions are also satisfied: the RMSEA and SRMR are below 0.08, and the CFI and TLI are above 0.90. The SEM estimates are therefore valid. The model accounts for around 50% and 10% of the variance of the endogenous variable, classroom-based assessment (as shown by the R² of the latent endogenous variables).
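As a hedged sketch of how such a structural model can be estimated and summarised in lavaan, the fragment below adds the three structural paths implied by the hypotheses (CK, PK, and PCK predicting CBA) to the hypothetical measurement syntax `mm` used earlier, then extracts the normed chi-square, the fit indices, the standardised path coefficients, and the R² values; all names remain illustrative.

```r
library(lavaan)

# Structural model: the three knowledge constructs predicting classroom-based assessment,
# appended to the hypothetical measurement syntax used earlier.
sm      <- paste(mm, "CBA ~ CK + PK + PCK", sep = "\n")
fit_sem <- sem(sm, data = survey, estimator = "MLR")

# Normed chi-square (should fall below 3 [86]) together with the other fit indices.
fm <- fitMeasures(fit_sem, c("chisq.scaled", "df.scaled",
                             "cfi.robust", "tli.robust", "rmsea.robust", "srmr"))
unname(fm["chisq.scaled"] / fm["df.scaled"])

# Standardised path coefficients (with p-values) for the hypothesis tests,
# and R-squared values for the endogenous variables, including the CBA latent variable.
subset(standardizedSolution(fit_sem), op == "~")
lavInspect(fit_sem, "rsquare")
```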
In addition, the researchers summarise the hypothesis testing in Table 5, based on the SEM model in Fig 2. All hypotheses relate to the association between teachers' knowledge and classroom-based assessment. Among them, only H1 is not supported, suggesting that content knowledge is not associated with classroom-based assessment practices in the context of Malaysian secondary school teachers. In contrast, H2 and H3 are supported, suggesting a positive association between teachers' pedagogical knowledge and pedagogical content knowledge, respectively, and classroom-based assessment practices. The overall results for the path coefficients, t-values, p-values, R², f², and Q² can be seen in Tables 2–8 above.
Teachers’ grasp of pedagogy holds significant importance since it can impact their practices
in classroom-based assessment. This research focuses on investigating the pedagogical content
knowledge and classroom-based assessment practices of secondary school teachers. In sum-
mary, the researchers first evaluate the PCK levels of secondary school teachers, which are
based on their CK, PK, and PCK. As depicted in Table 4, the highest mean score is attributed
to content knowledge (mean: 4.06, SD: 0.43), followed by PK (mean: 3.97, SD: 0.44), PCK
(mean: 3.91, SD: 0.46), with classroom-based assessment (CBA) receiving the lowest but still
relatively high mean score (mean: 3.80, SD: 0.58). Consequently, teachers may want to consider enrolling in additional courses that are relevant to their curriculum teaching, as this can enhance their confidence in PCK and, in turn, enable them to deliver an exceptional curriculum to their students more effectively.
Similarly, a study done by Moh’d et al. [87], which assesses the level of teachers’ PCK in
selected secondary schools of Zanzibar, revealed that the level of teachers’ PCK was moderate.
According to the researchers, teachers’ PCK plays a significant role in assessment practices
and students’ performance, implying that teachers must have relevant knowledge in a particu-
lar area. Therefore, more in-service training on increasing teachers’ PCK levels is required,
ultimately resulting in better teaching and learning [87].
Next, the researchers delve into the impact of content knowledge on the classroom-based
assessment practices of secondary school teachers. According to the findings, there is no dis-
cernible link between content knowledge and classroom assessment practices. This stands in
contrast to the results obtained by Gess-Newsome et al. [42], who identified a substantial cor-
relation between content knowledge and classroom assessment practices. Furthermore, these
outcomes differ from those reported by Akram et al. [37], who established a noteworthy asso-
ciation between content knowledge and the utilization of ICT during teaching practice. One
plausible explanation for these findings is that content knowledge may undergo occasional
enhancements due to changes in the curriculum, leading teachers to perceive that it might not
significantly influence their assessment practices [88]. Additionally, educators with high-qual-
ity content knowledge have the ability to seamlessly integrate subject matter expertise with
other teaching knowledge, thereby facilitating comprehensive learning and assessment [40].
Akram et al. [6] underscore the significance of content knowledge, highlighting that a strong
foundation in subject matter expertise is indispensable for effective online teaching. This
implies that teachers’ proficiency within their respective fields plays a pivotal role in their
capacity to adapt teaching methodologies and engage students in meaningful learning
experiences.
In addition, the researchers discovered a positive correlation between teachers' pedagogical content knowledge and their assessment practices in the classroom. This result contradicts the findings of Herman et al. [89], who discovered no direct correlation between teachers' assessment use and their knowledge. According to the study, when teachers devote more time to evaluating and analysing student work, they gain valuable insights into how students' knowledge is developing and are able to identify any potential misunderstandings or obstacles. These findings are consistent with those of Akram et al. [6], who found that teachers in their study employed a variety of teaching strategies and techniques. The researchers emphasise the significance of diverse pedagogical approaches such as role modelling, narrative, and experiential activities, underlining the value of interactive and engaging teaching and learning techniques. Given the clear relationship between pedagogical content knowledge and effective classroom assessment practices, it is recommended that teachers prioritise encouraging and continuously supporting student engagement with key subject concepts as part of their assessment practices in order to improve student learning [4].
Next, there is also a positive association between pedagogical knowledge and classroom
assessment practices. This finding is in contrast with the findings from Depaepe & König [90],
who found that there is no relationship between pedagogical knowledge and classroom assess-
ment practices. It is generally believed that these two factors are connected and impact instruc-
tional practice [90]. Moreover, Akram et al. [6] highlighted the crucial integration of pedagogical content knowledge in teaching and learning. Their findings indicate that teachers exhibit positive perceptions regarding pedagogical content knowledge, believing that incorporating a solid
understanding of subject matter and effective instructional strategies enhances teaching effec-
tiveness and makes the learning process more exciting, interactive, and motivating for
students. Nevertheless, it is crucial to help educators develop pedagogical knowledge and facili-
tate its application in the classroom to assure high-quality instruction and favourably impact
student learning outcomes [54].
Conclusion
In conclusion, this study has contributed significantly to the existing knowledge on ESL
teachers’ professional knowledge and assessment practices in Malaysian secondary schools.
The findings indicate that ESL teachers possess a high level of professional knowledge, as dem-
onstrated by their content knowledge, pedagogical knowledge, and pedagogical content
knowledge. Furthermore, the study has identified positive correlations between pedagogical
knowledge and classroom-based assessment and between pedagogical content knowledge and
classroom-based assessment practices. In contrast, no relationship was found between content
knowledge and classroom-based assessment.
In general, this study has the potential to enhance the professional development of ESL
(English as a Second Language) teachers, which is an important contribution. By identifying
positive correlations between pedagogical knowledge and pedagogical content knowledge and
classroom-based assessment practices, the research highlights the significance of these specific domains in teacher preparation and ongoing professional development programmes. Educators and policymakers can use this information to devise targeted training modules that aim to improve teachers' assessment practices by enhancing their pedagogical knowledge.
In addition, the findings of the study have implications for curriculum design and
assessment guidelines as well. The identified correlations can be considered for incorporation
into curriculum frameworks. For example, the curriculum can be structured to emphasise the
development of pedagogical knowledge and pedagogical content knowledge, ensuring that
teachers are well-equipped to implement classroom-based assessments. This, in turn, can
result in curricula that are more comprehensive and student-centred. Moreover, the absence
of a correlation between content knowledge and classroom-based evaluation is an intriguing
observation. It suggests that effective ESL instruction requires a more holistic approach, rather
than relying solely on content expertise. This finding challenges the conventional belief that
content expertise alone is sufficient to make a teacher effective. Consequently, teacher training
programmes can emphasise the significance of combining content knowledge and pedagogical
expertise in order to develop well-rounded educators capable of optimising student learning
outcomes.
In congruence with Akram et al.'s [6] emphasis on diverse pedagogical approaches such as role modelling, narrative, and experiential activities, this study highlights the importance of interactive and engaging teaching techniques. The findings call for a departure from traditional didactic teaching styles in favour of more interactive and participatory
approaches, which can foster a deeper comprehension of the subject matter and improve
assessment procedures. Finally, the study contributes to the academic field by emphasising
areas in need of additional research. The lack of a direct relationship between content knowl-
edge and classroom-based assessment, for instance, raises queries about the nuanced factors
influencing ESL teacher effectiveness. Future research can delve deeper into these variables
and investigate how they interact with teacher education and classroom dynamics. In conclu-
sion, this study’s contributions transcend the immediate context of ESL instruction in second-
ary institutions in Malaysia. They have the potential to inform pedagogical practices,
curriculum development, and teacher training programmes, which will ultimately benefit both
ESL teachers and students.
Acknowledgments
The authors sincerely thank the numerous secondary school teachers who graciously partici-
pated in the study as volunteers.
Author Contributions
Conceptualization: Rafiza Abdul Razak.
Data curation: Shahazwan Mat Yusoff, Anwar Farhan Mohamadd Marzaini.
Formal analysis: Shahazwan Mat Yusoff.
Investigation: Shahazwan Mat Yusoff, Anwar Farhan Mohamadd Marzaini.
Methodology: Shahazwan Mat Yusoff, Chin Hai Leng.
Resources: Chin Hai Leng.
Supervision: Rafiza Abdul Razak, Chin Hai Leng.
Validation: Chin Hai Leng, Anwar Farhan Mohamadd Marzaini.
Writing – original draft: Shahazwan Mat Yusoff.
Writing – review & editing: Rafiza Abdul Razak, Anwar Farhan Mohamadd Marzaini.
References
1. Kementerian Pendidikan Malaysia (KPM). (2018). Panduan Pelaksanaan Pentaksiran Bilik Darjah.
Putrajaya: Bahagian Pembangunan Kurikulum.
2. Stiggins R. J. (2002). Assessment crisis: The absence of assessment for learning. Phi Delta Kappan,
83(10), 758–765.
3. Yuh Tan Jia, & Husaina Banu Kenayathulla (2020). Pentaksiran Bilik Darjah dan Prestasi Murid Sekolah Jenis Kebangsaan Cina di Hulu Langat, Selangor. JuPiDi: Jurnal Kepimpinan Pendidikan, 7(3), 70–90.
4. Jones A., & Moreland J. (2005). The importance of pedagogical content knowledge in assessment for
learning practices: A case-study of a whole-school approach. Curriculum Journal, 16(2), 193–206.
5. Mat Yusoff S., Arepin M., & Mohd Marzaini A. F. (2022). Secondary school teachers’ perspectives
towards the implementation of CEFR-Aligned English Curriculum. Creative Practices in Language
Learning and Teaching (CPLT), 10(1), 32–48.
6. Akram H., Yingxiu Y., Al-Adwan A. S., & Alkhalifah A. (2021). Technology integration in higher educa-
tion during COVID-19: An assessment of online teaching competencies through technological pedagog-
ical content knowledge model. Frontiers in Psychology, 12. https://doi.org/10.3389/fpsyg.2021.736522
PMID: 34512488
7. Mohamad Marzaini A. F., Sharil W. N. E. H., Supramaniam K., & Yusoff S. M. (2023). The Teachers’
Professional Development in The Implementation of CEFR-Aligned Classroom Based Assessment.
Asian Journal of Assessment in Teaching and Learning, 13(1), 1–14.
8. Habibi A., Riady Y., Samed Al-Adwan A., & Awni Albelbisi N. (2022). Beliefs and knowledge for pre-ser-
vice teachers’ technology integration during teaching practice: An extended theory of planned behavior.
Computers in the Schools, 40(2), 107–132. https://doi.org/10.1080/07380569.2022.2124752
9. Anderson J. C., & Gerbing D. W. (1988). Structural equation modeling in practice: A review and recom-
mended two-step approach. Psychological bulletin, 103(3), 411.
10. Butler, S. M., & McMunn, N. D. (2006). A Teacher’s Guide to Classroom Assessment: Understanding
and Using Assessment to Improve Student Learning. Jossey-Bass, An Imprint of Wiley. 10475 Cross-
point Blvd, Indianapolis, IN 46256.
11. Brookhart S. M. (2010). How to assess higher-order thinking skills in your classroom. ASCD.
12. Stiggins R. J. (1991). Assessment literacy. Phi Delta Kappan, 72(7), 534–539.
13. Cheng K. K., Thacker B. A., Cardenas R. L., & Crouch C. (2004). Using an online homework system
enhances students’ learning of physics concepts in an introductory physics course. American journal of
physics, 72(11), 1447–1453.
14. Hill K., & McNamara T. (2012). Developing a comprehensive, empirically based research framework for
classroom-based assessment. Language testing, 29(3), 395–420.
15. Inbar-Lourie O., & Donitsa-Schmidt S. (2009). Exploring classroom assessment practices: The case of
teachers of English as a foreign language. Assessment in Education: Principles, Policy & Practice, 16
(2), 185–204.
16. Rhodes N. C., Rosenbusch M. H., & Thompson L. (1996). Foreign languages: Instruments, techniques,
and standards. In Handbook of classroom assessment (pp. 381–415). Academic Press.
17. Qu W., & Zhang C. (2013). The Analysis of Summative Assessment and Formative Assessment and
Their Roles in College English Assessment System. Journal Of Language Teaching & Research, 4(2).
18. Ananda S., & Rabinowitz S. (2000). The High Stakes of HIGH-STAKES Testing. Policy Brief.
19. Hawes Z., Nosworthy N., Archibald L., & Ansari D. (2019). Kindergarten children’s symbolic number
comparison skills relates to 1st grade mathematics achievement: Evidence from a two-minute paper-
and-pencil test. Learning and Instruction, 59, 21–33.
20. Windschitl M. (1999). The challenges of sustaining a constructivist classroom culture. Phi Delta Kap-
pan, 80(10), 751.
21. Saeed M., Tahir H., & Latif I. (2018). Teachers’ Perceptions about the Use of Classroom Assessment
Techniques in Elementary and Secondary Schools. Bulletin of Education and Research, 40(1), 115–130.
22. Black P., & Wiliam D. (2018). Classroom assessment and pedagogy. Assessment in education: Princi-
ples, policy & practice, 25(6), 551–575.
23. Stiggins R. (2010). Essential formative assessment competencies for teachers and school leaders. In
Handbook of formative Assessment (pp. 233–250). Routledge.
24. Clark I. (2010). Formative assessment: ‘There is nothing so practical as a good theory’. Australian Jour-
nal of Education, 54(3), 341–352.
25. Johannesen M. (2013). The role of virtual learning environments in a primary school context: An analy-
sis of inscription of assessment practices. British Journal of Educational Technology, 44(2), 302–313.
26. Lipnevich A. A., McCallen L. N., Miles K. P., & Smith J. K. (2014). Mind the gap! Students’ use of exem-
plars and detailed rubrics as formative assessment. Instructional Science, 42(4), 539–559.
27. Van der Kleij F. M., Vermeulen J. A., Schildkamp K., & Eggen T. J. (2015). Integrating data-based deci-
sion making, assessment for learning and diagnostic testing in formative assessment. Assessment in
Education: Principles, Policy & Practice, 22(3), 324–343.
28. Dixson D. D., & Worrell F. C. (2016). Formative and summative assessment in the classroom. Theory
into practice, 55(2), 153–159.
29. Buyukkarci K. (2014). Assessment beliefs and practices of language teachers in primary education.
International Journal of instruction, 7(1).
30. Allen J., Gregory A., Mikami A., Lun J., Hamre B., & Pianta R. (2013). Observations of effective
teacher–student interactions in secondary school classrooms: Predicting student achievement with the
classroom assessment scoring system—secondary. School psychology review, 42(1), 76–98. PMID:
28931966
31. Puad L. M. A. Z., & Ashton K. (2021). Teachers’ views on classroom-based assessment: an exploratory
study at an Islamic boarding school in Indonesia. Asia Pacific Journal of Education, 41(2), 253–265.
32. Almusharraf N., & Engemann J. (2020). Postsecondary instructors’ perspectives on teaching English
as a Foreign Language by means of a multimodal digital literacy approach. International Journal of
Emerging Technologies in Learning (iJET), 15(18), 86–107.
33. Harris L. R., & Brown G. T. (2009). The complexity of teachers’ conceptions of assessment: Tensions
between the needs of schools and students. Assessment in Education: Principles, Policy & Practice,
16(3), 365–381.
34. Santos I. T. R., Barreto D. A. B., & Soares C. V. C. O. (2020). Formative Assessment in the classroom:
the dialogue between teachers and students. Journal of Research and Knowledge Spreading, 1(1),
e11483.
35. Baghoussi M. (2021). Teacher-centered approach prevalence in Algerian secondary-school EFL clas-
ses: The case of English Teachers and learners in Mostaganem district. Arab World English Journal
(AWEJ) Volume, 12.
36. Mohamad Marzaini A. F., & Yusoff S. B. M. (2022). Assessing CEFR-Readiness Test in Malaysian ESL
Classroom: An Insight from English Language Teachers in Pulau Pinang. Asian Journal of Assessment
in Teaching and Learning, 12(1), 1–9.
37. Da-Hong L., Hong-Yan L., Wei L., Guo J. J., & En-Zhong L. (2020). Application of flipped classroom
based on the Rain Classroom in the teaching of computer-aided landscape design. Computer Applica-
tions in Engineering Education, 28(2), 357–366.
38. Shulman L. S. (1986). Those who understand: A conception of teacher knowledge. American Educator,
10(1).
39. König J., Blömeke S., Paine L., Schmidt W. H., & Hsieh F. J. (2011). General pedagogical knowledge of
future middle school teachers: On the complex ecology of teacher education in the United States, Ger-
many, and Taiwan. Journal of teacher education, 62(2), 188–201.
40. Ma’rufi, Budayasa I. K., & Juniati D. (2018). Pedagogical content knowledge: teacher’s knowledge of
students in learning mathematics on the limit of function subject. Journal of Physics: Conference Series,
954(1), 012002. https://doi.org/10.1088/1742-6596/954/1/012002
41. Gess-Newsome J., Cardenas S., Austin B., Carlson J., Gardner A., Stuhlsatz M., Wilson C. (2011).
Impact of educative materials and transformative professional development on teachers’ PCK, practice,
and student achievement. (Paper set presented at the Annual Meeting of the National Association for
Research in Science Teaching, Orlando, FL).
42. Gess-Newsome J., Taylor J. A., Carlson J., Gardner A. L., Wilson C. D., & Stuhlsatz M. A. (2019).
Teacher pedagogical content knowledge, practice, and student achievement. International Journal of
Science Education, 41(7), 944–963.
43. Park S., & Oliver J. S. (2008). Revisiting the conceptualisation of pedagogical content knowledge
(PCK): PCK as a conceptual tool to understand teachers as professionals. Research in science Educa-
tion, 38(3), 261–284.
44. Mat Yusoff S., Leng C. H., Razak R. A., & Marzaini A. F. M. (2023). Towards Successful Assessment
Practice: Examining Secondary School Teachers’ conceptions of assessment. JuKu: Jurnal Kurikulum
& Pengajaran Asia Pasifik, 11(1), 1–7.
45. Mat Yusoff S., Leng C. H., Razak R. A., & Marzaini A. F. (2022). Determining Conceptions of Assess-
ment among In-Service Secondary School Teachers. International Journal of Academic Research in
Business and Social Sciences, 12(12), 686–698.
46. Minor E. C., Desimone L., Lee J. C., & Hochberg E. D. (2016). Insights on how to shape teacher learn-
ing policy: The role of teacher content knowledge in explaining differential effects of professional devel-
opment. Education Policy Analysis Archives/Archivos Analı́ticos de Polı́ticas Educativas, 24, 1–34.
47. Liontou M. (2021). Conceptions of Assessment as an Integral Part of Language Learning: A Case
Study of Finnish and Chinese University Students. Languages, 6(4), 202.
48. Jung Youn S. (2023). Test design and validity evidence of interactive speaking assessment in the era of
emerging technologies. Language Testing, 40(1), 54–60.
49. Phothongsunan S. (2020). Teachers’ conceptions of the CLT approach in English language education.
Journal of Educational and Social Research, 10(4), 121–127.
50. Savić, V. (2020). Towards a Context-Sensitive Theory of Practice in Primary English Language Teach-
ing Through Theme-Based Instruction in Serbia. In Contemporary Foundations for Teaching English as
an Additional Language (pp. 77–88). Routledge.
51. Borko H., & Putnam R. T. (1996). Learning to teach.
52. Mohamad Marzaini A. F., Sharil W. N. E. H., Supramaniam K., & Yusoff S. M. (2023). Evaluating Teach-
ers’ Assessment Literacy in Enacting Cefr-Aligned Classroom-Based Assessment in Malaysian Sec-
ondary Schools ESL Classroom. International Journal of Academic Research in Progressive Education
and Development, 12(1), 661–675.
53. Dadvand B., & Behzadpoor F. (2020). Pedagogical knowledge in English language teaching: A lifelong-
learning, complex-system perspective. London Review of Education.
54. Kind V., & Chan K. K. (2019). Resolving the amalgam: connecting pedagogical content knowledge, con-
tent knowledge and pedagogical knowledge. International Journal of Science Education, 41(7), 964–978.
55. Nind M. (2020). A new application for the concept of pedagogical content knowledge: teaching
advanced social science research methods. Oxford Review of Education, 46(2), 185–201.
56. F Mudin V. (2019). An Investigation into English Teachers’ Understandings and Practices of Formative
Assessment in the Malaysian Primary ESL Classroom: Three Case Studies (Doctoral dissertation, Uni-
versity of East Anglia).
57. Abeywickrama P. (2021). Classroom Assessment Practices In An L2 Oral Skills Class. European Jour-
nal of Applied Linguistics and TEFL, 10(1), 45–61.
58. Ducar C. (2022). SHL Teacher Development and Critical Language Awareness: From Engaño to
Understanding. Languages, 7(3), 182.
59. Singleton R. A., & Straits B. C. (2012). Survey interviewing. The SAGE handbook of interview research:
The complexity of the craft, 77–98.
60. Goggin S. E. (2018). A qualitative study of the implementation of formative assessment strategies in the
classroom (Doctoral dissertation). Retrieved from https://www.proquest.com/docview/2551278456?
pqorigsite=gscholar&fromopenview=true.
61. Chappuis, J., Stiggins, R. J., Chappuis, S., & Arter, J. (2012). Classroom assessment for student learn-
ing: Doing it right-using it well (p. 432). Upper Saddle River, NJ: Pearson.
62. Lynn M. R. (1986). Determination and quantification of content validity. Nursing research. PMID:
3640358
63. Hong Q. N., Pluye P., Fàbregues S., Bartlett G., Boardman F., Cargo M., et al. (2019). Improving the
content validity of the mixed methods appraisal tool: a modified e-Delphi study. Journal of clinical epide-
miology, 111, 49–59.
64. Hair J. F., Jr, Matthews L. M., Matthews R. L., & Sarstedt M. (2017). PLS-SEM or CB-SEM: updated
guidelines on which method to use. International Journal of Multivariate Data Analysis, 1(2), 107–123.
65. Rigdon E. E., Sarstedt M., & Ringle C. M. (2017). On comparing results from CB-SEM and PLS-SEM:
Five perspectives and five recommendations. Marketing: ZFP–Journal of Research and Management,
39(3), 4–16.
66. Chin W. W. (1998). Commentary: Issues and opinion on structural equation modeling. MIS quarterly,
vii–xvi.
67. Haenlein M., & Kaplan A. M. (2004). A beginner’s guide to partial least squares analysis. Understanding
statistics, 3(4), 283–297.
68. Wold H. (1974). Causal flows with latent variables: partings of the ways in the light of NIPALS modelling.
European economic review, 5(1), 67–86.
69. Sekaran U., & Bougie R. (2016). Research methods for business: A skill building approach. john wiley
& sons.
70. Byrne B. M. (2016). Structural Equation Modelling with AMOS: Basic Concepts, Applications, and Pro-
gramming ( 3rd ed.). New York: Routledge.
71. Ramayah T. J. F. H., Cheah J., Chuah F., Ting H., & Memon M. A. (2018). Partial least squares struc-
tural equation modeling (PLS-SEM) using smartPLS 3.0. An updated guide and practical guide to statis-
tical analysis.
72. Franke G., & Sarstedt M. (2019). Heuristics versus statistics in discriminant validity testing: a compari-
son of four procedures. Internet Research, 29(3), 430–447.
73. Diamantopoulos A., & Siguaw J. A. (2006). Formative versus reflective indicators in organizational
measure development: A comparison and empirical illustration. British journal of management, 17
(4), 263–282.
74. Rosseel Y. (2012). lavaan: An R package for structural equation modeling. Journal of statistical soft-
ware, 48, 1–36.
75. Koh J. H. L., Chai C. S., & Tsai C. C. (2014). Demographic factors, TPACK constructs, and teachers’
perceptions of constructivist-oriented TPACK. Journal of Educational Technology & Society, 17(1),
185–196.
76. Luik P., Taimalu M., & Suviste R. (2018). Perceptions of technological, pedagogical and content knowl-
edge (TPACK) among pre-service teachers in Estonia. Education and Information Technologies, 23
(2), 741–755.
77. Schmidt D. A., Baran E., Thompson A. D., Mishra P., Koehler M. J., & Shin T. S. (2009). Technological
pedagogical content knowledge (TPACK) the development and validation of an assessment instrument
for preservice teachers. Journal of research on Technology in Education, 42(2), 123–149.
78. Cronbach L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3),
297–334.
79. Hair J. F., Anderson R. E., Babin B. J., & Black W. C. (2010). Multivariate data analysis: A global per-
spective (Vol. 7).
80. Pallant J. (2013). SPSS survival manual: A step-by-step guide to data analysis using IBM SPSS ( 5th
ed.). New York: McGraw Hill.
81. Hair J. F., Ringle C. M., & Sarstedt M. (2011). PLS-SEM: Indeed a silver bullet. Journal of Marketing the-
ory and Practice, 19(2), 139–152.
82. Hair J. F., Risher J. J., Sarstedt M., & Ringle C. M. (2019). When to use and how to report the results of
PLS-SEM. European business review, 31(1), 2–24.
83. Kock N. (2015). Common method bias in PLS-SEM. International Journal of E-Collaboration, 11(4), 1–10.
84. Ringle C., Da Silva D., & Bido D. (2015). Structural equation modeling with the SmartPLS. Brazilian Journal of Marketing, 13(2).
85. Podsakoff P. M., MacKenzie S. B., Lee J. Y., & Podsakoff N. P. (2003). Common method biases in
behavioral research: a critical review of the literature and recommended remedies. Journal of applied
psychology, 88(5), 879–903. https://doi.org/10.1037/0021-9010.88.5.879 PMID: 14516251
86. Bollen K. A., & Long J. S. (1992). Tests for structural equation models: introduction. Sociological Meth-
ods & Research, 21(2), 123–131.
87. Moh’d S. S., Uwamahoro J., Joachim N., & Orodho J. A. (2021). Assessing the Level of Secondary
Mathematics Teachers’ Pedagogical Content Knowledge. Eurasia Journal of Mathematics, Science
and Technology Education, 17(6).
88. Novak E., & Wisdom S. (2018). Effects of 3D printing project-based learning on preservice elementary
teachers’ science attitudes, science content knowledge, and anxiety about teaching science. Journal of
Science Education and Technology, 27(5), 412–432.
89. Herman J., Osmundson E., Dai Y., Ringstaff C., & Timms M. (2015). Investigating the dynamics of for-
mative Assessment: Relationships between teacher knowledge, assessment practice and learning.
Assessment in Education: Principles, Policy & Practice, 22(3), 344–367.
90. Depaepe F., & König J. (2018). General pedagogical knowledge, self-efficacy and instructional practice:
Disentangling their relationship in pre-service teacher education. Teaching and teacher education, 69,
177–190.