The International Journal of Management Education 19 (2021) 100501


Students’ evaluation of teaching in the project-based learning programme: An instrument and a development process
Gary Pan *, Venky Shankararaman, Kevin Koh, Sandy Gan
Singapore Management University, Singapore

A R T I C L E  I N F O

Keywords:
Project-based learning
Students’ evaluation of teaching instrument
Development process

A B S T R A C T

Students’ evaluation of teaching (SET) is widely adopted in most universities. Besides allowing teachers to improve their teaching, SET offers an important input for university administrators’ consideration in faculty promotion, contract renewal, salary increases and teaching awards. While the practice of using a SET instrument is well established, it remains ambiguous in the SET literature whether dedicated SET instruments are used for alternative teaching pedagogies. Our paper argues that a SET instrument may need to be re-designed to reflect the idiosyncrasies of another teaching pedagogy. Accordingly, this paper presents the development process of a SET instrument used to measure an alternative teaching pedagogy, project-based learning. Drawing upon a case study of a university in Singapore, we demonstrate the development of a SET instrument and highlight the considerations that surfaced during the development process. Our findings show that active involvement of students and faculty in the development process proved to be important for the successful development and implementation of the SET instrument. The paper concludes with implications for research and education, and the limitations of this study.

1. Introduction

Students’ evaluation of teaching (SET) is a commonly adopted practice to measure teaching effectiveness at universities (Dodeen, 2013; Mart, 2017). It may allow teachers to refine their teaching and serve as an important consideration when making administrative decisions on tenure, promotion, merit pay, salary increases, teaching awards and contract renewal.
Typically, a SET instrument contains a Likert-type rating scale and some open-ended questions that allow students to write their comments or suggestions (Alok, 2011). It covers characteristics of effective teaching (Henderson et al., 2014; Sadrina et al., 2018) that include the extent of knowledge covered in class, preparation and organization of lessons, interaction with students, clarity of teaching, effectiveness of communication, effective use of technology, enthusiasm of the teacher, frequency and quality of feedback, fairness of grading and availability of the teacher outside class.
Increasingly, universities are embracing an alternative teaching pedagogy, project-based learning (PBL), which inculcates a learning philosophy of taking what was learned in one situation and applying it to new situations (Lee et al., 2014). Universities’ teaching pedagogy is gradually shifting from the traditional method of mainly delivering content to engaging students in pedagogy that centers on applying and reflecting on knowledge (Pan et al., 2020). PBL results in a shift of the teacher’s role from transmitter of information to facilitator of learning (Pan et al., 2019; Seow et al., 2019; Prince & Felder, 2007). This shift in the role of a teacher may

* Corresponding author.
E-mail addresses: [email protected] (G. Pan), [email protected] (V. Shankararaman), [email protected] (K. Koh), [email protected] (S. Gan).

https://doi.org/10.1016/j.ijme.2021.100501
Received 18 July 2020; Received in revised form 8 January 2021; Accepted 28 March 2021
Available online 16 April 2021
1472-8117/© 2021 Elsevier Ltd. All rights reserved.

alter students’ expectations of how a class is conducted and influence students’ evaluation of teaching (Sadrina et al., 2018).
While the SET instrument has gained widespread use in most universities, little is known about whether a single SET instrument is used to measure the effectiveness of an alternative teaching method such as PBL pedagogy. This ambiguity in the SET literature is a relevant concern because the performance dimensions and examples of the SET instrument used for one teaching method may need to be re-designed to capture the idiosyncrasies of another teaching method. For instance, questions in the SET instrument such as ‘teacher’s facilitation and mentoring skills’ and ‘experience working with real companies’ are directly applicable to a PBL course but less relevant for a course taught using the traditional method. Therefore, without appropriate performance dimensions and examples in the SET instrument, it is difficult to surface relevant student feedback to improve teaching and to conduct an effective assessment of PBL pedagogy.
Accordingly, this paper focuses on the SET instrument developed at a university in Singapore, UNIS (a pseudonym), for PBL-type courses in the UNIS-X programme. The instrument uses nine performance examples along four performance dimensions (i.e., effectiveness of instructor, experience in the course, effort put into the course and experience working with real companies) on a seven-point Likert-type rating scale. The instrument also includes some open-ended questions that allow students to provide their comments or suggestions.
This paper begins with a review of the SET and PBL literature, followed by a discussion of the context in which the SET instrument was developed. Next, the paper describes the instrument’s development process in detail and presents the performance dimensions and examples of the SET instrument. Finally, the paper concludes by highlighting the implications for research and education.

2. Literature review

2.1. SET

SET is the primary method used to evaluate teaching effectiveness (Hobson & Talbot, 2001) in most universities (Richardson, 2005). Comm and Mathaisel (1998) indicate that almost all Association to Advance Collegiate Schools of Business (AACSB) accredited business schools responding to a survey use SET instruments in determining teaching effectiveness. SET offers ideas to faculty for enhancing their teaching performance (Chen & Hoshower, 2003) and informs university administrators’ decisions about faculty promotion, salary increments and contract renewals.
The SET literature has shown that teaching effectiveness is multidimensional. Some performance examples (e.g., communication skills, attitude towards students, knowledge of the subject, organizational skills, enthusiasm, fairness, and encouragement of students) have been identified as strongly related to teaching effectiveness (Kim et al., 2000). Toland and Ayala (2005) identified three dimensions of teaching effectiveness, namely the instructor’s delivery of course information, instructor-student interaction, and regulation of student learning. Similarly, Jackson et al. (1999) identified six factors of teaching effectiveness: relationship with students, course value, organization, grading, difficulty and workload. Clayson and Sheffet (2006) presented evidence of a strong relationship between students’ perception of the instructor’s personality and evaluation of their instructional effectiveness in marketing and business core courses. Smith and Anderson (2005) found that female Hispanic faculty received much lower scores on their SETs than their Anglo counterparts. Bruno (2003) examined the employment status of the instructor and found that full-time faculty members generally received higher scores than part-time faculty. Elective courses were rated higher than non-elective courses (Marsh, 1987). Regardless of the number and construction of dimensions in a SET instrument, it is clear that the instrument can assess distinct dimensions of effective teaching (Driscoll & Cadden, 2010).

2.2. PBL

Markham et al. (2003) describe PBL as “a systematic teaching method that engages students in learning knowledge and skills through an extended inquiry process structured around complex, authentic questions and carefully designed projects and tasks” (p. 4). A PBL environment usually possesses five key features (Tal et al., 2006): (1) it begins with a driving question or a problem to be solved; (2) students initiate and participate in authentic situated inquiry in order to explore the driving question, and learn and apply important ideas in relevant disciplines; (3) students, teachers and members of the community engage in collaborative activities to derive solutions to the driving question; (4) scaffolding takes place with the help of learning technologies that engage students in the process of inquiry; and finally, (5) students create tangible outputs that address the driving question.
Typically, PBL involves an assignment that requires students to apply previously acquired knowledge to produce some form of output. The final product, which is the central focus of the assignment, would normally be a written or oral report summarizing what was done and what the outcome was (Prince & Felder, 2007). Studies that have compared PBL to conventional teaching approaches (Thomas, 2000; Mergendoller et al., 2006) show that the former yielded significant positive effects on problem-solving skills (Kleczek et al., 2020), conceptual understanding and attitudes to learning, with comparable or better student performance on tests of content knowledge. Gultekin (2005) suggests students become better researchers, problem solvers and higher-order thinkers through PBL. Williams and Linn (2003) have also demonstrated that students engaged in PBL achieved higher scores than their counterparts at the receiving end of traditional classroom instruction.
PBL may also involve establishing partnerships between universities and businesses to enhance innovation and improve social and educational outcomes for learners and employer groups (Seow et al., 2019; Pan et al., 2019). An increased understanding of the differences, constraints and boundaries that exist between industry and education partners helps teachers to co-produce industry-based curricula, contextualize the business school curriculum with industry examples and share sector-specific knowledge and skills that smooth students’ school-to-work transitions (Watters et al., 2016). In this way, industry partners are able to enculturate business school students into professions and trade areas through prolonged contact (e.g., industry projects), and thereby enable more efficient cultural transitions from school to work. Such partnerships between business schools and industry partners could also serve as a platform for the recruitment of future employees.
PBL differs from traditional methods of teaching in that the teacher takes on the role of a facilitator and learning is enacted in a more collaborative, hands-on process driven by real-world connections. It uses authentic projects as a vehicle to encourage deeper learning through collaboration and extended inquiry, culminating in a final product or event (Pan et al., 2019). For PBL to be successful, there must be a shift in the definition and expectations of the teacher, and acceptance of breaking from the traditional “teacher and student” model. The role of the teacher involves collating sources, facilitating thinking, inspiring students to impact the world with their learning, and spending class time probing students about their own sense-making and acquisition of skills (Prince & Felder, 2007). A PBL teacher may also seek to understand her students and craft driving questions or projects aimed at igniting wonder, passion and action (Olzan, 2016).
Unlike traditional methods of teaching, where teachers are considered the main source of information and dominate most of the talk time in class (Aldabbus, 2018), PBL teachers act as facilitators and advisers who provide students with adequate guidance and feedback. They give students more room to choose how they approach tasks, which motivates students to be more independent. This is consistent with research on constructivist and student-centered learning environments, where learners are expected to experience ambiguity and cognitive disequilibrium (Savery, 2006).
While PBL pedagogy has proven to be beneficial, a major obstacle to effective PBL adoption is a lack of understanding by teachers and students of the roles they are required to play in the learning process (Shpeizer, 2019). Most of the difficulties center on teachers’ anxiety about, and resistance towards, their new role as facilitator (Green, 1998). This change in role and responsibility often leads to uncertainty and confusion in a PBL setting (Bradley-Levine et al., 2010). Pan et al. (2020) even suggest that role ambiguity may lead to negative effects such as stress, lower commitment and lower performance in teachers and students, which may affect the quality of teaching and learning.
As a consequence, Eskrootchi and Oskrochi (2010) have called for a clearer measure of effectiveness before committing to investment in courses using PBL pedagogy. In addition, it is unclear in the PBL literature whether existing SET instruments are adequate for measuring the teaching effectiveness of PBL pedagogy in the classroom, and whether business schools have dedicated a separate SET instrument to PBL pedagogy. This is a relevant issue because it remains ambiguous whether the dimensions and examples in a SET instrument have to vary among different teaching methods (i.e., the traditional teaching method versus PBL pedagogy), as understanding the multidimensionality of effective teaching is essential when validating instruments and interpreting final ratings of different teaching approaches. It is therefore the aim of this paper to highlight the need for a dedicated SET instrument for an alternative teaching method such as PBL pedagogy, and to demonstrate its development process.

3. Research methodology

Our strategy was to undertake a qualitative study of UNIS’s development process for a SET instrument for its PBL courses during the 2016-17 period. The qualitative approach is particularly appropriate for our exploratory study since it allows us to better capture the organizational dynamics of the phenomenon and to explain the phenomenon based on interpretation of data (McCray et al., 2021; Taylor & Bogdan, 1998).
For data collection, the study used secondary documentation from a variety of sources as the text to be examined (i.e., taskforce proposals, meeting minutes and discussion reports). We were inspired by several previous studies that deemed secondary data effective and useful in explaining vulnerable and sensitive settings (Cowton, 1998; Church, 2001; Davidson et al., 1994).
In terms of data analysis, we recursively iterated between the secondary data and the SET literature. The iteration helped to shape our findings. We continued the iterative process until it was possible to comprehensively explain the SET instrument development process and no additional data needed to be collected to improve the interpretation of the findings. Our analysis included reading all transcripts and documents, highlighting the descriptions and developing a list of relevant themes.
For content analysis, we first identified documents relevant to the SET development process. After reading the relevant documents, we highlighted the sentences that describe the performance dimensions and examples of the SET instrument; the topics that emerged include ‘performance dimensions’, ‘key examples’, and ‘open-ended questions’. We then developed a list of common themes related to the performance dimensions and examples in a PBL setting. Coding categories reflect our interpretations of the performance dimensions and examples in the SET instrument. An initial pilot run was conducted for coder training and pilot testing of reliability. During the pilot run, the coding instrument and procedures were also refined. To establish the reliability of the coding, each coder was asked to code a particular segment of the relevant texts. Coding was conducted independently, without consultation or guidance. We examined the portions of the coding where both coders agreed and measured the inter-coder reliability using Cohen’s Kappa coefficient. As the reliability coefficient was high, each coder was subsequently asked to code separate portions of the texts. In order to reduce researcher bias, a senior colleague was asked to take part in early analysis of some of the data. The colleague was uninvolved in the fieldwork and was therefore unfamiliar with the case. The role of this colleague was to bring a different and possibly more objective eye to the evidence and detect any bias in the data analysis.
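For readers who wish to replicate the inter-coder reliability check, the following minimal Python sketch computes Cohen’s Kappa for two coders’ category assignments. The segment labels below are illustrative, not the study’s actual data.

    from collections import Counter

    def cohens_kappa(codes_a, codes_b):
        """Agreement between two coders corrected for chance:
        kappa = (p_o - p_e) / (1 - p_e)."""
        n = len(codes_a)
        # Observed agreement: proportion of segments coded identically.
        p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n
        freq_a, freq_b = Counter(codes_a), Counter(codes_b)
        # Chance agreement: sum over categories of the coders' marginal proportions.
        p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical codes for five text segments, using the three emergent topics.
    coder_1 = ["performance dimensions", "key examples", "key examples",
               "open-ended questions", "performance dimensions"]
    coder_2 = ["performance dimensions", "key examples", "performance dimensions",
               "open-ended questions", "performance dimensions"]
    print(f"Cohen's Kappa: {cohens_kappa(coder_1, coder_2):.2f}")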


4. Case study

In 2009, UNIS recognized the importance of developing a rigorous and relevant instrument, FACETS (“For Assessment of Continuing Excellence in Teaching”), to measure teaching and learning. It therefore formed a Task Force with a faculty representative from each of the six schools within UNIS to develop, test and launch an instrument that would be consistent with the evidence on best practices for assessing university teaching and learning, would capture the performance dimensions and examples identified in the SET literature, and would be relevant to UNIS.
The FACETS instrument comprises fifteen dimensions of teaching and learning, all of which were identified in the SET literature and adapted as necessary for UNIS. These performance dimensions are represented as items 1 to 15. The dimension questions are followed by two “overall” questions about the instructor and the course. There are also two questions about course load and challenge, and four open-ended questions (expanded from two in 2018) for students to provide their comments about the instructor and course. For the detailed performance dimensions and examples of the FACETS instrument and excerpts of its development process, please refer to Appendix A.

5. Development of SET instrument for the PBL programme at UNIS (FACETS-X)

Recognizing the need to prepare its students with twenty-first century competencies so as to tackle increasingly complex real-world problems, UNIS launched undergraduate courses adopting PBL pedagogy, called the UNIS-X programme. The UNIS-X initiative represents a paradigm shift towards experiential learning as opposed to teaching, as well as a mind-set shift towards greater collaboration both within the university and with external stakeholders. The PBL pedagogy at UNIS-X comprises four principles: 1) project-based learning tackling real-world problems and issues; 2) inter-disciplinary learning; 3) active mentoring; and 4) a deeper relationship between faculty, students and industry partners. By applying the four principles in a project, students are expected to learn competencies such as critical and inventive thinking, communication, collaboration and adaptability. By weaving these four principles together in a closely knit manner, PBL offers a fundamental platform for students to learn and share knowledge.
The UNIS-X programme was first introduced in January 2015 through two courses, Managing Process Improvement and Public Policy Task Force. As of May 2020, there were 82 UNIS-X courses; 9500 undergraduate students had studied at least one UNIS-X course and 3200 students had studied two or more UNIS-X courses. These UNIS-X courses had collaborated with more than 555 organizations that sponsored projects, and students taking such PBL courses had delivered more than 2000 implementable solutions to these organizations. Among the 82 UNIS-X courses, 61 were related to the business or management disciplines (i.e., Accountancy, Business, Economics and Business Information Systems), involving 7885 undergraduate students. The remaining 21 courses belonged to the Law and Social Science disciplines.
Types of PBL projects at UNIS-X include accounting, branding, business improvement, data analytics, design thinking, innovation, policy implementation, smart technologies, strategic management and web/mobile application development. Of the 555 industry partners, 75% were private companies, 10% were public companies and 15% were non-governmental organizations (NGOs). Among the private companies, 31% were multi-national companies, 4% were large local companies and 65% were small and medium-sized enterprises (SMEs). The top three industries were Information and Communication, Health and Social Sciences, and Wholesale and Retail Trade.
To seek feedback on the inaugural course offerings in 2015, the UNIS-X team engaged students in informal conversations about their learning experience. Over the next year, concurrent with the FACETS end-of-term exercise, the UNIS-X team administered a separate student survey on Qualtrics to gather feedback about the UNIS-X pedagogy, students’ experience working with industry partners, the effort they spent in the course, and the course workload. The UNIS-X team developed the survey questions by reviewing the PBL and SET literature, benchmarking overseas institutions’ feedback surveys specific to PBL, seeking input from the UNIS-X Steering Committee, and considering feedback from the inaugural cohort of UNIS-X students. The survey questions were in accord with the evaluation factors commonly reported in the SET literature (Braskamp & Ory, 1994; Alok, 2011).
In 2016, recognizing the growing number of UNIS-X courses, the UNIS Provost commissioned the UNIS-X team and the UNIS Centre
for Teaching Excellence (CTE) to co-develop a dedicated SET instrument for UNIS-X courses (FACETS-X). A dedicated instrument was
timely for the following reasons:

• There was a need to cater to the stronger experiential nature of the UNIS-X programme. An instrument that specifically reflects this
pedagogy would increase the validity of the questions and subsequently, the feedback gathered.
• The instrument would communicate expectations of teaching quality in UNIS-X courses. Insights gained from student feedback
were expected to enhance UNIS-X teaching and course quality.
• A dedicated instrument would allow for fairer comparisons of student feedback data between faculty teaching with the traditional method and UNIS-X instructors.
• Students’ feedback on industry partners would help to facilitate the UNIS-X team’s partner engagement strategy.
• A dedicated instrument would reduce survey fatigue in students (prior to FACETS-X, students were required to complete both
FACETS and UNIS-X Qualtrics survey).

The decision to develop a separate instrument to measure teaching effectiveness of PBL pedagogy was also necessary because the
existing FACETS instrument was inadequate in capturing the features of PBL pedagogy. For instance, questions such as ‘teacher’s
facilitation and mentoring skills’ and ‘experience working with real companies’, both directly relevant to a PBL course, were missing


from the FACETS instrument.


To create the initial draft, CTE and the UNIS-X team first identified four performance dimensions, as shown in Table 1. Two dimensions focused on teachers’ performance: ‘Effectiveness of Teaching’ and ‘Experience in the Course’. Both dimensions were aligned with the performance dimensions reported in the SET literature. For example, Darling-Hammond (2012) suggests that teachers’ performance may be evaluated in four areas: planning instruction and assessment, instructing and engaging students in learning, assessing student learning experience, and analyzing teaching. Essentially, performance dimensions with examples that students can directly observe are well suited for SET purposes. In addition, two further dimensions in the SET instrument focused on students’ self-assessment of their effort in the PBL courses and their interactions with the industry partners during their projects.
With the performance dimensions identified, the performance examples in the SET instrument were generated by adapting relevant items from both the FACETS instrument (refer to Table 2) and the UNIS-X Qualtrics survey (refer to Table 3). The performance examples were selected based on the following criteria (Alok, 2011): 1) examples must be observable; 2) examples must describe the teaching performance; 3) examples must not be offensive; and 4) examples must be clear and one-dimensional in meaning.
Feedback from Students: During the development of the FACETS instrument, CTE administered a survey to 3300 undergraduate students seeking their views on what ought to be included in an instrument that measures the effectiveness of an instructor and a course. Students who participated in the questionnaire were of the view that all items were at least important (average rating ≥4.000 on a seven-point scale). The averages for the items most relevant to FACETS-X are presented in Table 4 below.
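As a concrete illustration of how such item averages are obtained, the short Python sketch below averages importance ratings per item on the 1 (Not important at all) to 7 (Essential) scale and flags items rated at least important. The ratings are made up for demonstration; the actual survey covered 3300 students.

    # Hypothetical importance ratings (1 = Not important at all, 7 = Essential).
    ratings = {
        "Instructor's preparation and organization": [6, 5, 7, 5],
        "Quality and frequency of feedback": [5, 4, 5, 5],
    }

    for item, scores in ratings.items():
        avg = sum(scores) / len(scores)
        status = "at least important" if avg >= 4.0 else "below the importance threshold"
        print(f"{item}: {avg:.3f} ({status})")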
In a subsequent focus group conducted by CTE with UNIS Student Association student leaders on their interpretation of the FACETS items, students’ comments on, and interpretations of, the items adapted by FACETS-X were gathered; these are presented in Table 5 below. It is noted that the students’ survey results and the focus group’s comments served as important inputs for revising the FACETS-X instrument.
The items identified in Tables 4 and 5 were relevant to measuring UNIS-X’s teaching effectiveness. For example, the instructor’s preparation and organization examines how well the instructor structures a class that moves from the traditional teaching approach to project-based learning that is collaborative and inquiry-driven. As a project in UNIS-X typically involves students exploring ideas to address a real-world problem, stimulation of students’ interest in content may act as a catalyst for generating fresh ideas by learning new knowledge and applying knowledge learnt in relevant disciplines. Clarity of objectives and requirements/expectations, and quality and frequency of feedback, are key attributes of a successful project; it is therefore key for teachers to play the role of facilitator and adviser effectively in providing students with adequate guidance and feedback, which in turn enhances the creation of tangible outputs that address the driving question.
Table 6 summarizes the initial draft of the FACETS-X instrument and highlights the items adapted from the FACETS instrument, which were mostly instructor-related. These items identify essential aspects of the role of an instructor regardless of the type of pedagogy adopted in a class. For instance, the instructor’s preparation and organization, which indicates whether the instructor has adequate knowledge of the content and is able to present it in an order that aids understanding, is a necessary trait for both traditional and project-based learning classes. The same goes for other instructor-related items such as clarity of objectives and expectations, stimulation of interest in content, and quality and frequency of feedback. This highlights an interesting finding: while teaching pedagogy may vary, several instructor-related traits of a class remain unchanged.
The initial draft of the FACETS-X instrument was shown to the UNIS-X Academic Director and Associate Director, selected UNIS-X
faculty members and students for their input. The team considered the feedback and revised the items where appropriate. Excerpts of
faculty feedback are provided below:
Feedback from Faculty A: As seen in Table 6, the initial suggestion was to include ‘Stimulation of interest in content’ in the SET instrument. Nevertheless, Faculty A questioned the suitability of the word ‘content’, commenting that several UNIS-X courses place little emphasis on content teaching and heavy emphasis on knowledge application in projects; in such courses, the faculty’s role switches from teacher to mentor/supervisor of students’ projects. While this comment seemed to make sense, there were still quite a number of UNIS-X courses that teach a significant amount of content. In the end, ‘Stimulation of interest in content and project-based learning’ was selected, since content would not be limited to what was covered during teaching, but would also include the project content and the knowledge generated during the project.
Feedback from Faculty B: Faculty B suggested including a question that explicitly referenced exposure to real-world challenges, along the lines of ‘the extent to which students feel they got a thorough grounding in the practical realities of the subject and real-world challenges’. The initial suggestion was to include the question ‘Enhancing your ability to apply subject concepts to real-world issues’. It was then fine-tuned and drafted as ‘Providing you with insights into the real-world challenges of the topic’. After much discussion, the team settled on ‘Enhancing your ability to apply subject concepts to real-world issues’.
Feedback from Faculty C: Faculty C highlighted the importance of the client in students’ learning experience and suggested including ‘Effort in doing his/her best to help you where possible’ or ‘Doing his/her best to help you where possible’. After much deliberation, the item was rephrased as ‘The quality of help and assistance provided by the client’. Nevertheless, Faculty C again

Table 1
Performance dimensions identified by CTE and UNIS-X team.

Stakeholder   Performance Dimension
Teacher       Effectiveness of Teaching
              Experience in the Course
Student       Effort put into Course
              Experience Working with Real Companies


Table 2
Performance examples adapted from the existing FACETS instrument.

Q1. Instructor's preparation and organization
Retain: The initial consideration was to split this question into two, covering in-class activities and managing project activities. After deliberation, the team deemed it worthwhile to keep the question in its compound form rather than overburden students with more questions.

Q3. Instructor's stimulation of interest in content
Revision: Consider changing to 'Instructor's stimulation of interest in experiential activities' (e.g., reflection, projects) to reflect project-based learning.

Q11. The clarity of objectives and requirements
Revision: Consider changing to 'The clarity of objectives and expectations'. The initial consideration was to remove this question, as a UNIS-X course is about taking risks and handling uncertainties, and project objectives and requirements may not always be clear at the start and may change during the project. After deliberation, the team decided to adapt this question, as it is important to communicate expectations and provide students with a clear purpose on which to focus their learning efforts so they can monitor and assess their progress.

Q12. Quality and frequency of feedback
Retain: The initial consideration was to change to 'Quality and Timeliness of Feedback'. However, the FACETS results over the years indicated that students did not have any particular problem understanding or responding to this item. Further, 'timely feedback' was often mentioned in the qualitative comments.

Q16. Overall rating of the instructor
Retain: Global item.

Q17. Overall rating of the course
Retain: Global item.

Table 3
Items adapted from the UNIS-X Qualtrics survey.

UNIS-X Pedagogy: Working with classmates on projects; projects as solving real-world problems
Overall Learning Outcomes: Leadership and interpersonal skills in working effectively with others
Experience Working with Industry Partners: Company name and brief description; problems faced and what UNIS could do to resolve them
Effort Put into Course: Average hours spent per week; proportion of time spent on project activities; course workload versus credit units
Actual vs Expectations (Overall Experience): Extent to which the course met expectations; key learning takeaways; recommendations to friends

Table 4
Averages for items most relevant to FACETS-X, rated by students who participated in the questionnaire, on a scale of 1 (Not important at all) to 7 (Essential).

Instructor's preparation and organization: 5.648
Instructor's stimulation of interest in content: 5.475
The clarity of objectives and requirements: 5.425
Overall rating of the instructor: 5.173
Overall rating of the course: 5.115
Quality and frequency of feedback: 4.819

pointed out that he found it hard to determine the extent of ‘doing his/her best’ and ‘quality of help’. In the end, Faculty C suggested an open-ended question: ‘Please give responsible feedback regarding what you liked/disliked about the project sponsor/client/organization you worked with’. His suggestion was subsequently accepted and incorporated into the SET instrument.
The development of FACETS-X took slightly over a year, with two semesters of pilot testing. In November 2016, the draft SET instrument was piloted on all UNIS-X courses conducted in semester 1 of the academic year (August 2016). These UNIS-X courses comprised more than 20 class sections. The overall results were positive. Students’ responses were analyzed and reviewed, and the UNIS-X team engaged students in a focus group to gather perceptions and confirm their interpretation of the items in the instrument. A second pilot was conducted on all UNIS-X courses in semester 2 of the academic year in 2017. The UNIS-X team presented aggregated results from Pilot Tests 1 and 2 to the University Curriculum Committee in March 2017. With further refinement from the University Curriculum Committee, the SET instrument was subsequently sent to senior management for their endorsement.
In semester 1 of the academic year (August 2017), the FACETS-X instrument was launched university-wide. In its final form, the FACETS-X instrument (refer to Appendix B) comprises 14 rated items, 3 open-ended questions that reflect students’ experience with the instructor and course, 1 question on students’ effort in PBL activities, and 3 questions about their experience with the industry partners/organizations (Eley & Stecher, 1997).
To date, the FACETS-X instrument has been used over 100 times for various courses. In general, UNIS-X instructors felt that the precise performance examples facilitate their understanding of the areas for improvement. Some instructors have also expressed that having a dedicated instrument for UNIS-X courses is fairer and more relevant for capturing the essence of project-based learning activities. Students’ responses to the FACETS-X instrument have enabled CTE to provide more targeted pedagogical support for UNIS-X instructors through one-on-one consultations, resource curation (e.g. the UNIS-X Toolkit), and the UNIS Experiential Education Day in partnership with the UNIS-X team. The UNIS-X Toolkit, jointly developed by the UNIS-X Committee and CTE, has served as a one-stop

Table 5
Student leaders' comments and interpretation of the items adapted by the FACETS-X instrument.

Instructor's preparation and organization:
• Whether the instructor has adequate knowledge of the content and is able to present it in an order that aids understanding
• Whether the instructor updates slides, links content with current events and distributes course materials in a timely manner

Instructor's stimulation of interest in content:
• Whether the instructor is able to generate students' curiosity in the content and attract their interest
• PowerPoint, learning aids, things the instructor does beyond PowerPoint slides, interesting case studies, field trips, class activities, hands-on activities, real-life examples and applications, and simulations

Clarity of objectives and requirements:
• Whether the instructor provides a detailed course outline and expectations, and follows through on the information provided
• To be communicated clearly during the first week of term

Quality and frequency of feedback:
• The frequency with which the instructor provides constructive feedback
• Vague; not sure if the item is asking about feedback from students or from the instructor. Question should read "Quality and frequency of feedback from instructor" (a)

Overall rating of the instructor:
• Item is very clear
• Rating may be based on aspects including and beyond the preceding questions, e.g. it could be based on the student's perception of the instructor's personality

Overall rating of the course:
• Item is very clear; students can distinguish between this and the overall instructor item
• Some students may think more carefully about the course when rating this item

(a) The CTE and the UNIS-X team did not see the need to specify the word 'instructor' for this item in FACETS-X. This concern was addressed by giving clear instructions to students at the beginning of the FACETS-X survey that they were rating the effectiveness of their named instructors.

Table 6
Initial draft of the FACETS-X instrument (rated items only).

Instructor-related items:
• Preparation and organization (a)
• Clarity of objectives and expectations (a)
• Stimulation of interest in content (a)
• Facilitation and mentoring skills
• Quality and frequency of feedback (a)
• Creating opportunities for you to learn from others (i.e. partners, guest speakers, peers)
• Overall rating of the instructor (a)

Course-related items:
• Enhancing your analytical, problem solving and reasoning skills
• Enhancing your capacity to integrate knowledge from two or more disciplines to solve a problem
• Enhancing your ability to apply subject concepts
• Enhancing your communication skills
• Developing you to be more open-minded and sensitive to individual differences
• Preparing you to embrace uncertainty
• Overall rating of the course (a)

(a) Indicates items adapted from the FACETS instrument.

resource to guide instructors as they teach UNIS-X courses. The Experiential Education Day serves as a platform for UNIS-X instructors to share their expertise in project-based experiential learning and related topics with the community. The post-event report, in the form of an Experiential Education Day Digest shared with the faculty community, summarizes the sharing sessions and includes best practices on designing and teaching UNIS-X courses. The recent semester has seen an increase in FACETS-X ratings for most items compared to an earlier semester. In addition, students’ feedback on industry partners has helped to facilitate the UNIS-X team’s partner engagement strategy. For instance, industry partners that demonstrated strong commitment to the UNIS-X programme and offered effective mentorship to UNIS-X students during their projects were identified as key strategic partners and will be given priority to participate in the UNIS-X programme in future.
A Cronbach’s Alpha reliability analysis was conducted on the FACETS-X data collected over four academic semesters, from August 2017 to August 2019. It showed that all items in FACETS-X have good item-scale correlations, with reliability coefficients (alpha) of at least 0.96, suggesting that the instrument is highly reliable (refer to Table 7). However, such high Cronbach’s Alpha scores may indicate that some items are redundant and probably measure the same concept. This is an important point to note for future review and improvement of the FACETS-X instrument.
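For reference, Cronbach’s alpha is computed from an item-response matrix as alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), where k is the number of items. The Python sketch below illustrates the calculation with made-up ratings on a 7-point scale; it is not the study’s data or analysis script.

    import numpy as np

    def cronbach_alpha(scores):
        """scores: (n_respondents, n_items) matrix of item ratings."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]                          # number of items
        item_vars = scores.var(axis=0, ddof=1)       # variance of each item
        total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Four hypothetical respondents rating four items on the 7-point scale.
    ratings = [[6, 6, 5, 6],
               [5, 5, 5, 4],
               [7, 7, 6, 7],
               [4, 5, 4, 4]]
    print(f"Cronbach's alpha: {cronbach_alpha(ratings):.3f}")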

6. Conclusion and implications

This paper presents the development process of a SET instrument used to measure the effectiveness of project-based learning

Table 7
Cronbach's alpha reliability analysis of FACETS-X.

Instructor-related items (Q1a to 1g): 0.974
Course-related items (Q3a to 3g): 0.967
Instructor- and course-related items (Q1a to 3g): 0.960


pedagogy. By drawing upon a case study of UNIS, we demonstrate the development of a SET instrument and highlight the considerations that surfaced during the development process. For researchers, this paper contributes to the SET and PBL literature by highlighting the need for a dedicated SET instrument to measure the alternative teaching pedagogy of project-based learning. The paper also adds value by demonstrating the development process of a SET instrument for a PBL programme.
As project-based learning continues to gain traction with business schools globally [e.g., a growing number of business schools are joining the ‘Leaders of Experiential Project-Based Education’ (LEPE) network], we believe our research findings may be beneficial to business schools, especially those which are embarking on, or have already embarked on, project-based learning programmes in a major way. In particular, the process of developing a suitable SET instrument for project-based learning offers insights into developing such an instrument. In addition, the SET instrument we share in this paper allows business schools to adopt or adapt a teaching evaluation instrument essential for improving project-based learning programmes. Further, for business schools which may already have an alternative SET instrument for project-based learning pedagogy, our instrument offers an opportunity to compare and contrast their instrument with an alternative template.
While it is beneficial to have a SET instrument to measure the teaching effectiveness of courses using project-based learning pedagogy, there may be challenges in its development and implementation. For instance, one challenge in developing a SET instrument for project-based learning courses is whether its dimensions and examples are designed well enough to cover the essence of such courses and are therefore able to measure faculty’s teaching effectiveness. It is therefore recommended that business schools design common features for their project-based learning courses. For example, at UNIS, each UNIS-X course is designed to have a heavy project component of at least 30%, the projects must involve real-world issues or problems sponsored by companies, and each project is usually mentored by a faculty member and an industry mentor. Having common features for project-based learning courses shapes the SET instrument accordingly and allows it to measure teaching effectiveness appropriately.
Another challenge is to develop a SET instrument that obtains ‘buy-in’ from faculty and students. Our findings showed that active involvement of students and faculty in the development and implementation stages proved to be important for the successful adoption of a SET instrument. In particular, it is critical to consider and act upon the perspectives and concerns raised by the various stakeholders in the development and implementation phases.
In addition, a major challenge in soliciting students’ feedback was the low response rate in some UNIS-X courses, which could limit meaningful interpretation of teaching evaluations owing to sampling bias. To improve response rates, business schools can consider measures such as sending multiple reminders to students, emphasizing student confidentiality, and informing faculty members of their ongoing response rates so they can also encourage participation.
While this study represents an important step toward developing a SET instrument for PBL pedagogy, longitudinal field studies involving further cases to refine the SET template are clearly called for, to further improve the validity and reliability of the instrument. Another future research direction is to examine whether students are knowledgeable enough to give accurate and meaningful evaluations of teaching in PBL courses. Given that PBL usually involves an industry project, an interesting research direction is to examine whether students who have worked on client/consulting projects during their internships, or who have prior consulting project experience, are more objective and reliable in their evaluations.
Finally, the SET development process should be viewed within the context of its limitation: data collection was mainly based on secondary documentation, so interpretation bias may be present. While this bias may be a shortcoming of this paper, it must be noted that our development process and SET instrument are in line with the existing SET literature. This suggests generalizability of the development of our SET instrument to theory, and its usefulness for building a theoretical framework for developing a SET instrument.

Author contribution

Gary Pan: Methodology, writing, visualisation. Venky Shankararaman: Conceptualization, supervision, project administration.
Kevin Koh: Resources, review & editing. Sandy Gan: Validation, formal analysis, data curation.

Appendix A. FACETS Instrument

The FACETS Task Force, together with the Centre for Teaching Excellence, embarked on a 3-year effort which involved the following procedures:

• Searched and read the scientific literature on the measurement of teaching and learning in tertiary institutions.
• Identified and appointed a Consultant who was one of the leading experts on measuring teaching and learning in tertiary institutions. The Consultant provided extensive guidance and advice throughout the process, and reviewed the instruments as the Task Force developed, refined, pilot tested, and validated them.
• Obtained measurement instruments from other leading universities worldwide so that UNIS could benchmark its instrument against others.
• Conducted university-wide surveys and forums with UNIS faculty members, students, administrators and other stakeholders to obtain their views on what should be included in the instrument, and how these factors should be measured.
• Convened focus groups of UNIS faculty, students, administrators, and other stakeholders to gather reactions to draft instruments and to identify any dimensions that seemed potentially irrelevant or unsuitable for UNIS.


• Conducted 2 rounds of pilot testing, over a period of 2 terms, involving >20 instructors, >50 class sections, and >2000 students. In these pilot tests, participants were also asked to provide qualitative feedback on the instruments. After each round of pilot testing, the data were analyzed and reviewed, and the instrument was revised accordingly.
• In November 2012, the FACETS instrument was launched university-wide.

Scale for Q1 to Q17: Excellent, Very Good, Good, Neutral, Poor, Very Poor, Extremely Poor, Not Applicable
Please rate the effectiveness of the instructor and your experience in the course.
1. Instructor’s preparation and organization
2. Instructor’s clarity and understandability
3. Instructor’s stimulation of interest in content
4. Instructor’s encouragement and openness
5. Instructor’s availability and helpfulness
6. Instructor’s presentation and speaking skills
7. Instructor’s enthusiasm for the subject
8. Instructor’s fairness
9. Instructor’s concern for students
10. The learning experience in this course
11. The clarity of objectives and requirements
12. Quality and frequency of feedback
13. Quality and value of the course material
14. Quality and usefulness of course assignments/projects
15. Degree to which the course was participative and interactive
16. Overall rating of the instructor
17. Overall rating of the course
Open-ended questions:
Please give responsible feedback regarding the instructor:
What are the strengths of the instructor’s teaching?
What suggestions do you have to improve the instructor’s teaching?
Please give responsible feedback regarding the course:
What elements of the course most contributed to your learning?
What suggestions do you have to improve the course?
Other questions:
For this course, how many hours per week on average did you spend on coursework outside of class?
____ hours per week
Scale for Q21: Strongly Agree, Agree, Slightly Agree, Neutral, Slightly Disagree, Disagree, Strongly Disagree, Not Applicable
This course challenged me intellectually.

Appendix B. FACETS-X Instrument

Effectiveness of Instructor
Scale for Q1a to Q1g: Excellent, Very Good, Good, Neutral, Poor, Very Poor, Extremely Poor, Not Taught or Mentored by Instructor
Please rate the effectiveness of [instructor name] in [course name]:
1a. Preparation and organization*
1b. Clarity of objectives and expectations*
1c. Stimulation of interest in content and project-based learning*
1d. Facilitation and mentoring skills
1e. Quality and frequency of feedback*
1f. Creating opportunities for you to learn from others (i.e. partners, guest speakers, peers)
1g. Overall rating of the instructor*
Open-ended question:
Please give responsible feedback regarding what you liked/disliked about [instructor name].
Experience in the Course
Scale for Q3a to Q3g: Excellent, Very Good, Good, Neutral, Poor, Very Poor, Extremely Poor
Please rate the effectiveness of [course name] in terms of the following:
3a. Enhancing your analytical, problem-solving and reasoning skills
3b. Enhancing your capacity to integrate knowledge from two or more disciplines to solve a problem
3c. Enhancing your ability to apply subject concepts to real-world issues
3d. Enhancing your communication skills
3e. Developing you to be more open-minded and sensitive to individual differences
3f. Preparing you to embrace uncertainty
3g. Overall rating of the course*
Open-ended questions:
What perspective/learning/skills do you take away from [course name]?
In what areas can the university provide support to facilitate your learning in this course?
Effort put into UNIS-X Course
(Answer is in percent and the total percentage must add up to 100%)
Please indicate the proportion of your time spent on the following project activities for [course name].
Primary data collection (i.e. survey/focus group/interviews)
Secondary data collection (i.e. looking for information online)
Coding/Data Analysis
Meetings with project sponsor(s)/client(s)/organization mentor(s)
Meetings with professor/instructor
Meetings with project/group mates
Preparing the final report/presentation slides/other materials
Experience Working with Organizations
Open-ended questions:
Please indicate the project sponsor/client/organization you have been working with on the project.
Please give responsible feedback regarding what you liked/disliked about the project sponsor/client/organization you have been working with.
Please give a brief description of the project you have been working on.
* indicates items adapted from the FACETS instrument

References

Aldabbus, S. (2018). Project-based learning: Implementation & challenges. International Journal of Education, Learning and Development, 6(3), 71–79.
Alok, K. (2011). Student evaluation of teaching: An instrument and a development process. International Journal of Teaching and Learning in Higher Education, 23(2),
226–235.
Bradley-Levine, J., Berghoff, B., Seybold, J., Sever, R., Blackwell, S., & Smiley, A. (2010). What teachers and administrators “need to know” about project-based
learning implementation. In Annual meeting of the American educational research association (April, Denver, Colorado).
Braskamp, L. A., & Ory, J. C. (1994). Assessing faculty work: Enhancing individual and institutional performance. San Francisco, CA: Jossey-Bass.
Chen, Y. N., & Hoshower, L. (2003). Student evaluation of teaching effectiveness: An assessment of student perception and motivation. Assessment & Evaluation in
Higher Education, 28(1), 71–88.
Church, R. (2001). The effective use of secondary data. Learning and Motivation, 33, 32–45.
Clayson, D., & Sheffet, M. (2006). Personality and the student evaluation of teaching. Journal of Marketing Education, 28(2), 149–160.
Comm, C. L., & Mathaisel, D. (1998). Evaluating teaching effectiveness in America’s business schools: Implications for service markets. Journal of Professional Services
Marketing, 16(2), 163–170.
Cowton, C. (1998). The use of secondary data in business ethics research. Journal of Business Ethics, 17(4), 423–434.
Darling-Hammond, L. (2012). Creating a comprehensive system for evaluating and supporting effective teaching. Stanford, CA: Stanford Centre for Opportunity Policy in
Education.
Davidson, W. N., III., Worrell, D. L., & Lee, C. I. (1994). Stock market reactions to announced corporate illegalities. Journal of Business Ethics, 13, 979–987.
Dodeen, H. (2013). College students’ evaluation of effective teaching: Developing an instrument and assessing its psychometric properties. Research in Higher
Education Journal, 21, 1–12.
Driscoll, J., & Cadden, D. (2010). Student evaluation instruments: The interactive impact of course requirement, student level, department and anticipated grade.
American Journal of Business Education, 3(5), 21–30.
Eley, M., & Stecher, E. (1997). A comparison of two response scale formats used in teaching evaluation questionnaires. Assessment & Evaluation in Higher Education, 22
(1), 65–70.
Eskrootchi, R., & Oskrochi, R. (2010). A study of the efficacy of project-based learning integrated with computer-based simulation - stella. Educational Technology &
Society, 13(1), 236–245.
Green, A. (1998). Project-based learning: Moving students through the GED with meaningful learning [Online]. Available at: http://files.eric.ed.gov/fulltext/
ED422466.pdf.
Gultekin, M. (2005). The effect of project based learning on learning outcomes in the 5th grade social studies course in primary education. Educational Sciences: Theory
and Practice, 5(2), 548–556.
Henderson, C., Turpen, C., Dancy, M., & Chapman, T. (2014). Assessment of teaching effectiveness: Lack of alignment between instructors, institutions, and research
recommendations. Physics Education Research, 10, 1–20.
Hobson, S., & Talbot, D. (2001). Understanding student evaluation. College Teaching, 49(1), 26–31.
Jackson, D. L., Teal, C. R., Raines, S. J., Nansel, T. R., Force, R. C., & Burdsal, C. A. (1999). The dimensions of students’ perceptions of teaching effectiveness.
Educational and Psychological Measurement, 59(4), 580–596.
Kim, C., Damewood, E., & Hodge, N. (2000). Professor attitude: Its effect on teaching evaluation. Journal of Management Education, 24(4), 458–473.
Kleczek, R., Hajdas, M., & Wrona, S. (2020). Wicked problems and project-based learning: Value-in-Use approach. International Journal of Management in Education, 18
(1), 100324.
Markham, T., Larmer, J., & Ravitz, J. (2003). Project based learning handbook: A guide to standards-focused project based learning (2nd ed.). Novato, CA: Buck Institute for
Education.
Marsh, H. W. (1987). Students’ evaluation of university teaching: Research findings, methodological issues, and directions for future research. International Journal of
Educational Research, 11(3), 255–379.
Mart, C. (2017). Student evaluations of teaching effectiveness in higher education. International Journal of Academic Research in Business and Social Sciences, 7(10),
57–61.
McCray, J., Warwick, R., Palmer, A., & Thompson, T. (2021). Experiencing temporal patterns of action learning and the implications for leadership development.
International Journal of Management in Education, 19(1), 100433.
Mergendoller, J., Maxwell, N., & Bellisimo, Y. (2006). The effectiveness of problem-based instruction: A comparative study of instructional methods and student
characteristics. Interdisciplinary Journal of Problem-Based Learning, 1(2). Article 5.
Olzan, G. (2016). A project-based learning approach to teaching physics for pre-service elementary school teacher education students. Cogent Education, 3, 1–12.
Pan, G., Seow, P. S., & Koh, G. (2019). Examining learning transformation in project-based learning process. Journal of International Education in Business, 12(2),
167–180.


Pan, G., Seow, P. S., Shankararaman, V., & Koh, K. (2020). An exploration into key roles in making project-based learning happen: Insights from a case study of a
university. Journal of International Education in Business (in press).
Prince, M., & Felder, R. (2007). The many faces of inductive teaching and learning. Journal of College Science Teaching, 36(5), 14–20.
Richardson, J. T. E. (2005). Instruments for obtaining student feedback: A review of the literature. Assessment & Evaluation in Higher Education, 30(4), 387–415.
Sadrina, Mustapha, R., & Ichsan, M. (2018). The evaluation of project-based learning in Malaysia: Propose a new framework for polytechnic system. Jurnal Pendidikan Vokasi, 8(2), 143–150.
Savery, J. (2006). Overview of problem-based learning: Definitions and distinctions. The Interdisciplinary Journal of Problem-based Learning, 1(1), 9–20.
Seow, P. S., Pan, G., & Koh, G. (2019). Examining an experiential learning approach to prepare students for the volatile, uncertain, complex and ambiguous (VUCA)
work environment. International Journal of Management in Education, 17(1), 62–76.
Shpeizer, R. (2019). Towards a successful integration of project-based learning in higher education: Challenges, technologies and methods of implementation.
Universal Journal of Educational Research, 7(8), 1765–1771.
Smith, G., & Anderson, K. (2005). Students’ ratings of professors: The teaching style contingency for latino/a professors. Journal of Latinos and Education, 4(2),
115–136.
Tal, T., Krajcik, J., & Blumenfeld, P. (2006). Urban schools’ teachers enacting project-based science. Journal of Research in Science Teaching, 43(7), 722–745.
Taylor, S., & Bogdan, R. (1998). Introduction to qualitative research methods. John Wiley & Sons.
Thomas, J. (2000). A review of research on project-based learning. Report prepared for The Autodesk Foundation. Retrieved online at http://www.bobpearlman.org/
BestPractices/PBL_Research.pdf.
Toland, M. D., & Ayala, R. J. (2005). A multilevel factor Analysis of students’ evaluation of teaching. Educational and Psychological Measurement, 65(2), 272–296.
Watters, J., Pillay, H., & Flynn, M. (2016). Industry-school partnerships: A strategy to enhance education and training opportunities. Australia: A Queensland University of
Technology Report.
Williams, M., & Linn, M. (2003). WISE inquiry in fifth grade biology. Research in Science Education, 32(4), 415–436.

