
EFFECTIVENESS OF INSTRUCTORS PERFORMANCE APPRAISAL PROCESS: A CASE OF COLLEGE OF BUSINESS AND ECONOMICS, BAHIR DAR UNIVERSITY

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENT FOR A MASTER OF BUSINESS ADMINISTRATION [MBA]

By: Getachew Mekonnen

Advisor: Anteneh Eshetu (PhD Fellow)

June, 2015
Bahir Dar, Ethiopia

BAHIR DAR UNIVERSITY
College of Business and Economics
Department of Management

Effectiveness of Instructors Performance Appraisal Process: A Case of College of Business and Economics, Bahir Dar University

By: Getachew Mekonnen

Thesis Approval Page

The thesis titled “Effectiveness of Instructors Performance Appraisal Process: A Case of College of Business and Economics, Bahir Dar University” by Mr. Getachew Mekonnen Sisay is approved for the degree of Master of Business Administration (MBA).

BOARD OF EXAMINERS

Name Signature Date


1. Advisor: _______________________ ____________ ___________

2. Internal Examiner: __________________ ____________ ___________

3. External Examiner: ___________________ ____________ ___________

Declaration

I, Getachew Mekonnen, declare that this work, entitled Effectiveness of Instructors Performance Appraisal Process: A Case of College of Business and Economics, Bahir Dar University, is the outcome of my own effort and study, and that all sources of materials used for the study have been duly acknowledged. I have produced it independently, except for the guidance and suggestions of the research advisor.

This study has not been submitted for any degree in this university or any other university. It is offered in partial fulfillment of the requirements for the degree of Master of Business Administration (MBA).

By: Getachew Mekonnen

Signature____________________________

Date_______________________________

Advisor: Anteneh Eshetu (PhD Fellow)

Signature__________________________

CERTIFICATION

This is to certify that the thesis entitled “Effectiveness of Instructors Performance Appraisal Process: A Case of College of Business and Economics, Bahir Dar University”, submitted in partial fulfillment of the requirements for the award of the degree of Master of Business Administration to the College of Business and Economics, Bahir Dar University, through the Department of Management, and done by Mr. Getachew Mekonnen, ID No. BDU 0603212, is an authentic work carried out by him under our guidance. To the best of our knowledge and belief, the matter embodied in this thesis has not been submitted earlier for the award of any degree or diploma.

Advisor: Anteneh Eshetu (PhD Fellow)

Bahir Dar University

College of Business and Economics

Department of Management

Bahir Dar, Ethiopia

Signature ………………..…..

Date …………………………

ABSTRACT
Bahir Dar University has been implementing an instructors' performance appraisal process in which peers, students and heads of departments evaluate instructors' performance. However, to the best knowledge of the researcher, no systematic study has been conducted to evaluate the effectiveness of this appraisal process in the university. Therefore, the overriding objective of this study is to evaluate the effectiveness of the instructors' performance appraisal process in the College of Business and Economics (CoBE), Bahir Dar University. To achieve this objective, the study focuses on factors in the appraisal process itself, including the practices of appraisers, the qualities of the evaluation criteria, the clarity of the purpose, and the characteristics of the performance feedback system. The study employed a cross-sectional survey design. Of the 39 semi-structured questionnaires distributed to instructors and the 232 structured questionnaires distributed to students, 35 (89.74%) and 221 (95.26%), respectively, were returned and analyzed. Sample respondents were selected using proportionate stratified random sampling. Moreover, semi-structured interviews with heads of departments were conducted to supplement the data collected through the questionnaires. Data collected through interviews were analyzed qualitatively, whereas data collected through questionnaires were analyzed quantitatively using descriptive statistics with the help of SPSS version 21. The results of the study indicate that the instructors' performance appraisal process is ineffective because of the poor qualities of the evaluation criteria, biased practices of appraisers, an ineffective performance feedback system, appraisers' (students') lack of awareness of the appraisal purpose, and the limited attention given to the formative purposes of the appraisal. Finally, to enhance the effectiveness of the appraisal, the researcher recommends that the university redesign the evaluation criteria in consultation with instructors; train appraisers and appraisees; make feedback frequent, precise, timely and consistent; and give greater attention to the formative purposes of the appraisal.

Key Words: Appraisees, Appraisers, Appraisal Criteria, Effectiveness of Performance Appraisal, Bahir Dar University, Performance Feedback, Peer

Acknowledgement
First of all, I would like to thank the Almighty God for His indescribable gift; without His help, this research could not have been realized. I am grateful to my advisor, Anteneh Eshetu (PhD Fellow), for his committed and motivating guidance in successfully completing this research paper.

I would like to express my gratitude to my dear friend Deacon Worku Kassaw, with whom I share all the ups and downs of life, and to acknowledge his concern for my education and the initiative he took to let me go to school, from the initial idea to every consultation along the way.

I would also like to express my indebtedness to all the staff of Bahir Dar Hotel No. 2 for their patience while I was using their Wi-Fi. My sincere regards also go to Mr. Zewdu Lake (Lecturer) for his help in data gathering and his suggestions in the course of my study.

I would like to express my gratitude to my classmates Fatuma Beyan, Samueal Negsh, Yaskebral Tigab, Mandefro Tagele, Kassa Yimam (“Fish”) and Getu Yayu, who assisted me in different ways during the course work.

I would also like to thank all the staff of the Department of Management of Bahir Dar University who took part in educating me, and all the department heads of CoBE for their valuable information during the interviews.

Last but not least, I would like to thank all respondents (instructors and students) who furnished me with the information necessary for the successful accomplishment of this study.

Getachew Mekonnen

June, 2015

Bahir Dar / Ethiopia

List of Acronyms
HOD – Head of Department

RGCS – Regular Graduating Class Students

BDU – Bahir Dar University

NATH – National Association of Head Teachers

OECD – Organization for Economic Cooperation and Development

SPSS – Statistical Package for the Social Sciences

CoBE – College of Business and Economics

Table of Contents

Thesis Approval Page .............................................................................................................................. iii


Declaration............................................................................................................................................. iv
CERTIFICATION ........................................................................................................................................ v
ABSTRACT .............................................................................................................................................. vi
Acknowledgement................................................................................................................................. vii
List of Acronyms ....................................................................................................................................viii
CHAPTER ONE ......................................................................................................................................... 1
INTRODUCTION ....................................................................................................................................... 1
1.1 Background of the study ................................................................................................................ 1
1.2 Statement of the Problem .............................................................................................................. 2
1.3 Research Questions ....................................................................................................................... 4
1.4 Objectives of the Study .................................................................................................................. 6
1.4.1 General objective .................................................................................................................... 6
1.4.2 Specific Objectives .................................................................................................................. 6
1.5 Significance of the Study ................................................................................................................ 6
1.6 Scope of the Study.......................................................................................................... 6
1.7 Limitation of the study ................................................................................................................... 7
1.8 Operational Definitions .................................................................................................................. 7
1.9 Organization of the Thesis.............................................................................................................. 8
CHAPTER TWO ........................................................................................................................................ 9
LITERATURE REVIEW................................................................................................................................ 9
2.1 Theoretical Literature .................................................................................................................... 9
2.1.1 Performance Appraisal: An overview ....................................................................................... 9
2.1.2 Factors affecting effectiveness of performance appraisal ...................................................... 13
2.1.3 Problems in Performance Appraisal Process .......................................................................... 18
2.1.4 Performance Appraisal in Higher Education Institutions ........................................................ 20
2.1.4.3.2 Peer Appraisal of Instructors' Performance ...................................................... 24
2.1.4.3.3 Self Appraisal of Instructors' Performance ...................................................... 25
2.1.4.3.4 Supervisor Appraisal of Instructors' Performance ............................................ 25
2.2 Empirical literature ...................................................................................................................... 25

2.3 Conceptual Framework ................................................................................................................ 27
CHAPTER THREE .................................................................................................................................... 30
METHODOLOGY OF THE STUDY ............................................................................................................. 30
3.1 Research Design........................................................................................................................... 30
3.2 Data Type and Source .................................................................................................................. 30
3.3 Sampling Design and Procedure ................................................................................................... 30
3.4 Methods of Data Collection and Instrumentation ......................................................................... 32
3.5 Data Processing and Methods of Data Analysis ............................................................................ 34
CHAPTER FOUR ..................................................................................................................................... 35
RESULTS AND DISCUSSION .................................................................................................................... 35
4.1 Demographic Profile of Respondents (Instructors) ....................................................................... 35
4.2 Demographic Profile of Respondents (Students) .......................................................................... 38
4.3 Qualities of Appraisal Criteria....................................................................................................... 40
4.3.1 Instructors Awareness of Their Appraisal Criteria .................................................................. 41
4.3.2 Relevance, Reliability and Realistically of Appraisal Criteria ................................................... 44
4.3.3 Appraisal Criteria as Measure of Student Learning ................................................................ 47
4.3.4 Participation of Instructors in Design of Appraisal Criteria ..................................................... 48
4.4 Practices of Appraisers in Instructors’ Performance Appraisal ...................................................... 50
4.4.1 Students’ Practice in Appraisal: Students Self-Report Vs Instructors’ Perception ................... 50
4.4.2 Instructors’ Perception of Practices of Peers ......................................................................... 70
4.4.3 Instructors’ Perception of Practices of Heads of Departments ............................................... 76
4.5 Characteristics of the Performance Feedback System .................................................................. 82
4.5.1 Existence of Official Performance Feedback after Appraisal................................................... 83
4.5.2 Existence of Continuous Discussion on Instructors Performance ........................................... 84
4.5.3 Specificity of Performance Feedback ..................................................................................... 85
4.5.4 Timeliness of Performance Feedback .................................................................................... 87
4.5.5 Acceptance of Performance Feedback by Instructors ............................................................ 89
4.6 Clarity of Purpose of Instructors’ Performance Appraisal ............................................................. 91
CHAPTER FIVE........................................................................................................................................ 96
CONCLUSION AND RECOMMENDATION ................................................................................................ 96
5.1 Conclusions.................................................................................................................................. 96
5.2 Recommendations ....................................................................................................................... 99

Area for Further Research.................................................................................................................... 102
BIBLIOGRAPHY ..................................................................................................................... 103
Appendices.......................................................................................................................................... 110
Questionnaire for instructors ........................................................................................................... 111
Questionnaire for students .............................................................................................................. 117

CHAPTER ONE

INTRODUCTION
1.1 Background of the study
The role of human resources has become more and more vital, encompassing personnel-related areas such as job design, resource planning, performance appraisal systems, recruitment, selection, compensation and employee relations (Derven, 1990). Among these functions, one of the most critical, and one that can bring global success, is performance appraisal (Marquardt, 2004). An organization's performance management system helps it to meet its short- and long-term goals and objectives by helping management and employees do their jobs more efficiently and effectively, and performance appraisal is one part of this system (Bacal, 1999).

Performance appraisal refers to the activity used to determine the extent to which an employee performs work effectively. In essence, a formal performance appraisal is a system established by the organization to regularly and systematically evaluate employees' performance (Ivancevich 2004). Performance appraisal is a formal system of periodic review and evaluation of an individual's job performance (Mondy and Noe, 1990). In addition, Nzuve (2007) defines performance appraisal as a means of evaluating employees' work performance over a given period of time. Performance appraisal is a tool that provides management with valuable information regarding the quality of the human resources the organization possesses, which may serve as a basis for important human resource decisions and may result in motivation and/or demotivation of employees.

The process of performance evaluation begins with the establishment of performance standards, followed by communicating those standards to employees, who, if left to themselves, would find it difficult to guess what is expected of them. This is followed by measuring actual performance, comparing it against the standards set, discussing the appraisal outcome with the employee and, if necessary, initiating corrective action (Mamoria 1995).

Kavanagh, Brown and Benson (2007) asserted that, in the performance appraisal process, the evaluation is likely to be subjectively biased by the appraiser's emotional state; managers may apply variable codes and standards to different employees, which results in inconsistent, biased, invalid and unacceptable appraisals.

Like any other organization, universities appraise the performance of their employees, including instructors, for effective human resource management. Although both academic and non-academic staff in universities play a vital role in enhancing a university's performance, the major onus falls on the academic staff (instructors), who are the source of students' knowledge, learning and growth (Rasheed 2011).

Education is an investment in development, and poor study methods should not compromise the mandate of higher education institutions to generate, preserve and disseminate knowledge and to produce high-quality graduates (Mutsotso and Abenga 2010). Quality of education in universities cannot be achieved without consistent appraisal and improvement of instructors' performance (Danial 2011).

The academic staff of a higher education institution is a key resource for the institution's success. To ensure the quality of education, Bahir Dar University has been implementing, among other things, instructors' performance appraisal, and teaching performance is partly inferred from students' performance. To this end, the performance of instructors has been evaluated semi-annually (usually at the end of each semester). The university has designed a performance appraisal in which peers, students, and department heads are the appraisers of instructors' performance. However, despite implementing such an appraisal, little attempt has been made to evaluate its effectiveness. Hence, this study was conducted to evaluate the effectiveness of the instructors' performance appraisal process currently being implemented at Bahir Dar University.

1.2 Statement of the Problem


Performance appraisal contributes to the success of an organization in realizing its strategic purpose and improving its working processes through continuous improvement of individual performance, along with a focus on weak but improvable points (Divandari 2008).

In higher education institutions, instructors constitute a particular group of knowledge-based workers whose commitment plays a pivotal role in the successful operation of their institutions. It is the responsibility of managers in higher education to design and implement performance appraisals that both motivate instructors and align their efforts with organizational objectives (Simmons and Iles 2010). In order to reap benefits from the performance appraisal process, it is imperative to develop it in an effective manner. Despite the accolades for effective performance appraisal, an ineffective appraisal can bring about various problems such as diminished employee morale, low employee productivity, and reduced employee enthusiasm and support for the organization (Islam and Rasad 2005). Steiner and Rain (1989) reported that the order in which good and poor performance was observed affected performance ratings, and that raters biased their judgments about inconsistent extreme performance (unusually good or poor) toward the general impression they already held.

Walsh (2003) posited that finding a commonly accepted approach to evaluating the effectiveness of a performance appraisal based on a set of well-defined variables is difficult. The author also noted that identifying and organizing the most important variables in performance appraisal has proved to be a challenging task for researchers and practitioners. Nevertheless, different scholars have attempted to evaluate the effectiveness of performance appraisal using different variables. For instance, according to Dobbins et al. (1990), Monyatsi et al. (2006) and Nurse (2005), a performance appraisal is effective if it produces outcomes such as increased motivation, reduced employee turnover, improved employee performance, a feeling of equity among employees, enhanced working relationships and reduced employee stress.

Similarly, Ishaq et al. (2009) stated that common outcomes of an effective performance appraisal are employees' learning about themselves, their knowledge of how they are doing, and their learning about "what management values". An astute reader can thus note that the aforementioned studies emphasized the appraisal's behavioral outcomes in their attempt to evaluate the effectiveness of the performance appraisal process.

On the other hand, based on the assumption that an effective performance appraisal process would result in those desirable behavioral outcomes (such as increased employee motivation, reduced employee turnover, etc.), many studies also focus on key factors in the process itself to evaluate the effectiveness of the performance appraisal process. For instance, several studies (Ellett et al. 1996; Kyriakides 2006; Monyatsi et al. 2006; Danial 2011) emphasized four factors, namely the sources for collecting relevant data, the appraisal purpose, the appraisal criteria and the feedback system, as the main aspects that need to be considered in developing an effective instructors' performance appraisal. Theoretically, a plethora of specific factors influences the effectiveness of performance appraisal. However, despite extensive research on performance appraisal in general, the empirical evidence concerning its effectiveness in academic institutions is scant. According to studies (Roberts 2003; Monyatsi et al. 2006), the effectiveness of the performance appraisal process is particularly dependent on the perception that its users (both the appraisers and the appraisees) hold about the appraisal process. To this end, the current researcher conducted a preliminary survey in Bahir Dar University and identified that instructors were anxious about the appraisal criteria, the appraisers (especially students) and the performance feedback system.

Besides the paucity of research in the field, instructors' frustration with the appraisal process was a motivating force for the researcher to conduct a comprehensive study on the effectiveness of the instructors' performance appraisal process. Moreover, although the university has been implementing performance appraisal, to the best knowledge of the researcher no systematic study has attempted to evaluate the effectiveness of the appraisal process. Hence, this study is intended to evaluate the effectiveness of the instructors' performance appraisal process, taking the College of Business and Economics, Bahir Dar University as a case.

1.3 Research Questions


Based on the problem stated above, this study attempts to provide answers to the following research questions:
1. Do the instructors' performance appraisal criteria have the standard/required qualities?
2. What are the potential practical mistakes of the appraisers during the instructors' performance appraisal process?
3. What are the characteristics of the instructors' performance feedback system?
4. To what extent do instructors and students understand the purposes of the instructors' performance appraisal process?

5. Is there a relationship between instructors' work experience and instructors' perception of whether the current appraisal criteria were prepared in consultation with instructors?
6. Is there a relationship between students' practice of scoring physically attractive instructors higher and instructors' perception of students' rating?
7. Is there a relationship between students' practice of scoring funny instructors higher and instructors' perception of students' rating based on funniness?
8. Is there a relationship between students' practice of rating based on previous grades awarded by the instructor and instructors' perception of students' rating?
9. Is there a relationship between students' practice of rating based on good grades expected from the instructor and instructors' perception of students' rating?
10. Is there a relationship between students' practice of rating based on the number of assignments given by the instructor and instructors' perception of students' rating?
11. Is there a relationship between students' practice of rating based on the easiness of exams prepared by the instructor and instructors' perception of students' rating?
12. Is there a relationship between students' practice of rating based on a single negative experience with the instructor and instructors' perception of students' rating?
13. Is there a relationship between students' practice of rating by contrasting the performance of an instructor against that of other instructors and instructors' perception of students' rating?
14. Is there a relationship between students' practice of rating by comparing instructors' performance with a given standard/appraisal criteria and instructors' perception of students' rating?
15. Is there a relationship between students' practice of rating with enough awareness of the purpose of the evaluation criteria and instructors' perception of students' rating?
16. Is there a relationship between students' practice of rating by properly reading each appraisal criterion and instructors' perception of students' rating?
17. Is there a relationship between students' practice of rating after having enough training to evaluate instructors' performance and instructors' perception of students' rating?

1.4 Objectives of the Study

1.4.1 General objective


The overall objective of the study is to evaluate the effectiveness of the instructors' performance appraisal process in the College of Business and Economics, Bahir Dar University.

1.4.2 Specific Objectives


The following specific objectives are established in order to achieve the overall objective of the study:
 To assess the qualities of the appraisal criteria used to appraise the performance of instructors in CoBE.
 To examine potentially biased practices of appraisers in the instructors' performance appraisal process.
 To identify the characteristics of the instructors' performance feedback system.
 To diagnose instructors' and students' understanding of the purpose of the instructors' performance appraisal process.
 To examine the relationship between students' rating practices and instructors' perception of students' rating.

1.5 Significance of the Study


The study may have paramount importance as its findings could be utilized by different stakeholders. The findings may help the university management to identify the extent of the effectiveness of the instructors' performance appraisal process and correct pitfalls, if any, in order to improve the performance of instructors and thereby enhance the quality of education. Furthermore, the study contributes to the body of knowledge on how factors in the appraisal process, namely the quality of the appraisal criteria, the characteristics of the performance feedback system, the clarity of purpose and the practices of appraisers, determine the effectiveness of the instructors' performance appraisal process. It may also serve as a reference for researchers who will conduct related studies. Finally, it is submitted in partial fulfillment of the requirements for the Master of Business Administration (MBA) degree at Bahir Dar University.

1.6 Scope of the Study

The study is delimited spatially, conceptually and methodologically. Although the researcher recognized the need to cover all the colleges in the university, resource limitations coupled with an unmanageable population size (students and instructors) forced the study to focus on the College of Business and Economics.

Conceptually, the study is confined to assessing the effectiveness of the appraisal process in terms of the aforementioned four factors in the process itself. Evaluating the effectiveness of the appraisal process based on its behavioral outcomes (e.g., improved performance, motivation, satisfaction, absenteeism, turnover, etc.) is not the focus of this study. The rationale for emphasizing the appraisal process is the assumption that the effectiveness of those four factors would result in desirable behavioral outcomes among instructors. Therefore, studying the effectiveness of the appraisal process was deemed a prerequisite for studying the outcomes of the appraisal. Methodologically, though BDU has instructors, technical support staff, and administrative staff, the current study was intended to evaluate solely the instructors' performance appraisal process.

In addition to the budget and time constraints that precluded including all staff of the university in the scope of the study, the reason for emphasizing only the instructors' performance appraisal is the assumption that instructors' performance has a direct impact on the quality of education. Moreover, the study employed a cross-sectional survey design in which the most relevant data were obtained from instructors, department heads, and Regular Graduating Class Students (RGCS).

1.7 Limitation of the study


The limitations of the study emanate from its scope. Since the study focused on the effectiveness of the instructors' performance appraisal process in the College of Business and Economics, Bahir Dar University, it is difficult to draw conclusions about the performance appraisal of other (non-instructor) staff in the university. The study attempted to identify potentially biased practices of the appraisers, but the extent to which those identified biases influence the effectiveness of the appraisal process was not quantified. Moreover, the study did not investigate the impact of the appraisal process on instructors' work behavior.

1.8 Operational Definitions


 Performance refers to the act of accomplishing tasks. Instructors' performance means the act of accomplishing tasks such as teaching, research and consultancy.
 Performance appraisal is the assessment of how well somebody performs his or her job-relevant tasks.
 Effectiveness of the performance appraisal process refers to the extent to which the appraisal is based on well-defined appraisal criteria, a clearly stated purpose and effective performance feedback from impartial appraisers.
 Appraisal criteria/standards of appraisal refer to the measures used for evaluating the performance of instructors.
 Qualities of appraisal criteria refer to those characteristics that the appraisal criteria must possess in order to be effective.
 Peers are instructors within the same department who evaluate each other's performance.

1.9 Organization of the Thesis

This thesis has five chapters. The first chapter deals with the background of the study, the statement of the problem, the objectives, the significance, and the scope and limitations of the study. The second chapter reviews the related literature. The third chapter covers the methodology of the study: the research design, data requirements, sources of data, data gathering and methods of data analysis. The fourth chapter presents and discusses the findings from the data gathered. The fifth and final chapter draws conclusions and offers recommendations.

CHAPTER TWO

LITERATURE REVIEW
2.1 Theoretical Literature
2.1.1 Performance Appraisal: An overview

2.1.1.1 Definitions of Performance Appraisal


The term performance appraisal has been defined by many scholars in different ways, though the concepts are closely related to each other. Mondy and Noe (1990) defined performance appraisal as a formal system of periodic review and evaluation of an individual's job performance. Rao (2005) opines that "performance appraisal is a method of evaluating the behavior of employees in the work spot, normally including both the quantitative and qualitative aspects of job performance".

Ivancevich (2004) defined performance appraisal as the human resource management activity that is used to determine the extent to which an employee is performing the job effectively. According to Swanepoel et al. (2000), performance appraisal is a formal and systematic process of identifying, observing, measuring, and recording the job-relevant strengths and weaknesses of employees.

According to Longenecker (1997), performance appraisal is two rather simple words that often arouse a raft of strong reactions, emotions, and opinions when brought together in the organizational context of a formal appraisal procedure. Jacobs et al. (1980) defined performance appraisal as a systematic attempt to distinguish the more efficient workers from the less efficient and to discriminate among the strengths and weaknesses an individual has across various job elements. According to Rasheed (2011), performance appraisal is a continuous process through which the performance of employees is identified, measured and improved in the organization.

Yong (1996) defines performance appraisal as “an evaluation and grading exercise undertaken
by an organization on all its employees either periodically or annually, on the outcomes of
performance based on the job content, job requirement and personal behavior in the position”.

Alo (1999) defines performance appraisal as a process involving the deliberate stock-taking of the success which an individual or organization has achieved in performing assigned tasks or meeting set goals over a period of time. This shows that performance appraisal should be deliberate rather than accidental, and it calls for a serious approach to knowing how the individual is doing in performing his or her tasks.

2.1.1.2 Assumptions of Performance Appraisal


Performance appraisal exists in organizations based on some basic assumptions. According to
Elverfeldt (2005), one of those assumptions is that employees differ in their contribution to the
organization because of individual performance, and that supervisors are actually able and
willing to differentiate between employees. Moreover, for development purposes it is assumed
that accurate and timely feedback can change behavior in a way that benefits both the
organization and the individual.

Jacobs et al. (1980) stated that practicality is another assumption of performance appraisal, i.e. the time and cost of designing and implementing the process should not exceed the organizational benefit achieved by appraising performance. Furthermore, Jacobs et al. (1980) described some methodological assumptions. The first is that equivalence is in place: the situations under which all appraisees are rated, and the ways different appraisers actually rate them, are comparable. Second, there are uniform interpretations of standard expectations and forms among appraisers. In addition, the evaluator must have the possibility of directly observing appraisees' performance, plus additional data such as attendance rates.

According to Armstrong and Baron (1998), the factors affecting performance should be considered when measuring, managing, improving and rewarding performance. These factors encompass the following: personal factors (the individual's skill, self-confidence, motivation and dedication); leadership factors (the quality of encouragement, guidance and support from managers and team leaders); team factors (the quality of support from colleagues); system factors (the system of work and facilities provided by the organization); and situational factors (internal and external environmental pressures and changes). However, in contrast to the above ideas, traditional approaches to performance appraisal rely on personal factors, when in fact performance can be driven partially or entirely by situational or system factors (Mwita 2000). Essentially, Deming (1986) stated that the appraisal of individual performance must necessarily consider not only what individuals have done (the results), but also the circumstances in which they have had to perform.

2.1.1.3 The Purpose of Performance Appraisal


Performance appraisals are needed to justify a wide range of human resource decisions such as salary increases, promotions, demotions, terminations and training needs assessment. The performance appraisal also allows the organization to tell employees something about their rate of growth, their competencies, and their potential (Longenecker and Fink 1999; Ivancevich 2004).

Performance appraisals are one of the most important requirements for successful business and
human resource policy (Kressler, 2003). Rewarding and promoting effective performance in
organizations, as well as identifying ineffective performers for developmental programs or other
personnel actions are essential to effective human resource management (Pulakos, 2003).

The ability to conduct performance appraisals relies on the ability to assess an employee's performance in a fair and accurate manner. Evaluating employee performance is a difficult task. Once the supervisor understands the nature of the job and the sources of information, the information needs to be collected in a systematic way, provided as feedback, and integrated into the organization's performance management process for use in making compensation, job placement, and training decisions and assignments (London, 2003).

According to Morris (2005), the above purposes of performance appraisal can be clustered under the headings of administrative purposes and developmental purposes. Administrative purposes entail the use of performance data for personnel decision making, including human resource planning (HRP); compensation; placement decisions such as promotion, demotion, transfer, dismissal and retrenchment; and personnel research. Furthermore, Morris (2005) noted that developmental purposes emphasize developmental functions at the individual as well as the organizational level. Appraisal can serve individual development purposes through feedback on employees' strengths and weaknesses and how to improve future performance, help with career planning and development, and inputs for personal remedial interventions. Organizational development purposes may include: specifying performance levels and suggesting overall training needs; providing essential information for affirmative action programs, job redesign efforts and multi-skilling programs; and promoting effective communication within the organization through ongoing interaction between appraisers and appraisees.

Cumming (1972) writes that the overall objective of performance appraisal is to improve the efficiency of an enterprise by attempting to mobilize the best possible efforts from the individuals employed in it. Such appraisals achieve four objectives: salary reviews, the development and training of individuals, planning job rotation, and assisting in promotions.

Mamoria (1995) and Atiomo (2000) agree that although performance appraisal is usually thought of in relation to one specific purpose, namely pay, it can in fact serve a wider range of objectives: identifying training needs, improving employees' present performance, improving potential, improving communication, improving motivation and aiding pay determination.

Performance appraisal has been considered a most significant and indispensable tool for an organization, as the information it provides is highly useful in making decisions regarding various personnel aspects such as promotions and merit increases. Performance measures also link information gathering and decision-making processes, providing a basis for judging the effectiveness of personnel sub-divisions such as recruiting, selection, training and compensation. If valid performance data are available (timely, accurate, objective, standardized and relevant), management can maintain consistent promotion and compensation policies throughout the total system (Burack and Smith 1977).

Performance appraisal also has other objectives, which McGregor (1957) says include the following:
 It provides systematic judgment to the organization to back up salary increases.
 It is a means of telling a subordinate how he is doing and suggesting needed changes in his behavior, attitude, skill or job knowledge. It lets him know where he stands with the boss.
 It is used as a basis for coaching and counseling of the individual by the superior.

2.1.2 Factors affecting effectiveness of performance appraisal


The appraisal process has been categorized into: (1) establishing job criteria and appraisal standards; (2) timing of the appraisal; (3) selection of appraisers; and (4) providing feedback (Scullen et al., 2003). Early performance appraisal (PA) processes were fairly simple and involved ranking and comparing individuals with other people (Milkovich and Boudreau 1997). However, these early person-based appraisal systems were fraught with problems. As a result, a transition to job-related performance assessments continues to occur. Thus, PA is being modified from being person-focused to behavior-oriented, with emphasis on those tasks or behaviors associated with the performance of a particular job (Wellbourne et al., 1998).

The performance appraisal must be effective in order for its results to be used for developmental and/or administrative purposes. Developing an appraisal process that accurately reflects employee performance is not an easy task. The performance appraisal process is not generic or easily passed from one organization to another; its design and administration must be tailor-made to match employee and organizational features and qualities (Boice and Kleiner 1997).

It is difficult to find a commonly accepted method or efficient approach for evaluating the effectiveness of a performance appraisal based on a set of well-defined variables. Identifying and organizing the most important variables in performance appraisal has proved to be a challenging task for researchers and practitioners (Walsh 2003). Here, based on a compilation of the work of different authors, some essential characteristics of an effective performance appraisal process are identified and discussed.

2.1.2.1 Quality of Appraisal Criteria


An appraisal criterion is the standard against which the performance of employees is measured. Preparing appraisal criteria is the first step in the process of performance appraisal (Ivancevich 2004). Every employee should be provided with a job description so that they know exactly what is expected of them, and with the yardsticks by which their performance and results will be evaluated (Khan 2007).

According to Ivancevich (2004), a job description is a written description of what the job entails, and it is important for an organization to have thorough, accurate, and up-to-date job descriptions. Appraisal criteria must be based on the job description for the position the employee holds (Khan 2007), meaning that appraisal criteria must be relevant to job duties. Relevance is the degree to which a performance measure is related, as logically as possible, to the actual output of the job incumbent (Ivancevich 2004). Relevance may also refer to the extent to which the performance measure assesses all relevant, and only the relevant, aspects of performance (Noe et al. 1996). Thus, matching the performance criteria, as reflected in the performance appraisal format, to the evaluation task is one way to ensure relevance (Lee, 1985).

Appraisal criteria must also be realistic, meaning that they are attainable by any qualified, competent, and fully trained employee who has the authority and resources to achieve the desired result. They should take into account the practical difficulties of the environment in which the employee works; that is, the performance of employees should not be evaluated against standards that are beyond their control. Moreover, appraisal criteria must possess characteristics such as reliability in order for the performance appraisal to be effective (Ivancevich 2004).

Furthermore, Roberts (2003) noted that the development of reliable, valid, fair and useful appraisal standards is enhanced by employee participation, as employees possess the unique and essential information needed for developing realistic standards.

2.1.2.2 Training for Appraisers and Appraisees

Appraiser training is another characteristic of an effective performance appraisal process. Almost all authors agree that appraisers must be trained to observe, gather, process, and integrate behavior-relevant information in order to improve performance appraisal effectiveness. For example, Boice and Kleiner (1997) and Anjum (2011) emphasize that training should be given to appraisers because they must understand the standards used to measure performance and because errors are inevitable in appraisal. Tziner and Kopelman (2002) noted that extensive training is essential for avoiding such errors. Therefore, the training should provide appraisers with broad opportunities to practice the specified skills, give them feedback on their practice appraisals, and build a comprehensive acquaintance with the appropriate behaviors to be observed.

Harris (1988) proposed that an organization must provide training on a regular basis. Training must be frequently updated and cover appraisal aspects such as giving and receiving feedback, personal bias, active listening skills and conflict resolution approaches. If implemented this way, employees are less confused, less disappointed about the measures and more aware of the intentions of performance appraisal. This also means that they will be capable of useful critique and feedback concerning the appraisal process (Elverfeldt, 2005). Generally, sufficient training must be given to appraisers so that they: (1) understand the performance appraisal process; (2) are able to use the appraisal instrument as intended, which encompasses interpreting standards and using the scales; and (3) are able to provide effective feedback.

Additionally, the skills of appraisers must be updated and refreshed on a continual basis. Furthermore, appraisees must also receive some form of appraisal training to introduce them to the appraisal process. To attain their acceptance and support of the appraisal process, appraisees must understand the process as a whole, as well as the behavioral facets and criteria that are used to evaluate their performance (Elverfeldt, 2005).

2.1.2.3 The Performance Feedback


Another major factor that increases the effectiveness of performance appraisal is providing feedback to employees regarding their performance. Performance feedback generally aims at improving performance effectiveness by stimulating behavioral change. Performance feedback not only generates change in job behavior but also improves appraisees' organizational commitment (Tziner and Kopelman, 2002).

Feedback helps employees discuss their problems in order to improve their future performance (Anjum 2011). Roberts (2003) also stated that, without feedback, employees are unable to make adjustments in job performance or to receive positive reinforcement for effective job behavior. For feedback to improve employee performance, it must be timely, specific, and behavioral in nature, and it must be presented by a credible source (Roberts 2003).

A performance appraisal process that provides formal feedback only once a year is likely to be feedback deficient. For the appraisal process to gain maximum effectiveness, there must be continual formal and informal performance feedback (Roberts 2003; Elverfeldt 2005). Dalton (1996) emphasized that the feedback event should be a confidential interaction between a qualified, credible feedback giver and the evaluatee in order to avoid denial, venting of emotions, and behavioral and mental disengagement.

In such an atmosphere, discrepancies in appraisals can be discussed, and the session can be used as a catalyst to reduce those discrepancies. Another important point regarding performance feedback is the use of multiple appraisers. Performance appraisal has been criticized for being ineffective for a variety of reasons, such as the potential biases of the appraisers and the potential subjectivity of ratings (Roberts, 2003). Alexander (2006) stated that in 360-degree feedback, multiple appraisers offer feedback on observed performance, as opposed to the subjective viewpoint of a single individual. Multiple appraisers offering similar feedback send a reinforced message to the evaluatee about what is working well and what needs to be improved. Feedback is more difficult to ignore when it is repeatedly offered by multiple sources.

The 360-degree review process is alleged to be superior to traditional forms of appraisal and feedback because it provides a more complete and accurate assessment of employees' competencies, behaviors and performance outcomes (Alexander, 2006). According to Elverfeldt (2005), 360-degree appraisal may be a useful tool for enriching performance appraisal and enhancing its acceptance, but only if the appraiser and the appraisee generally perceive the additional sources of feedback as relevant and favorable.

2.1.2.4 Frequency of Appraisal


Performance appraisal should be conducted on a frequent and ongoing basis. Conducting performance appraisal more frequently helps eliminate two situations: (1) selective memory by the evaluator or the evaluatee and (2) surprises at an annual review (Boice and Kleiner 1997). People generally tend to remember what happened within the last month, or high-profile incidents (good or bad); frequent appraisal rectifies such generally unconscious, selective memory. Getting rid of surprises in the appraisal process is also imperative.

Both the appraiser and the evaluatee need to know that there is a performance problem before any major performance appraisal period. If the problem is allowed to continue for a longer period, it becomes more difficult to take corrective action. Thus, frequent performance appraisals should eliminate the surprise element and help to modify performance prior to any semi-annual or annual review (Boice and Kleiner 1997).

2.1.2.5 Clarity of Purpose of Performance Appraisal


A clearly stated purpose is an essential characteristic of effective performance appraisal. Employees are more likely to be committed, which may improve their daily performance, if they understand the purpose of their performance appraisal; an appraisal with an unclear purpose is a meaningless exercise (Monyatsi et al. 2006). Both the evaluator and the evaluatee must have a clear understanding of the purpose of the appraisal process, including the role of the evaluator and the criteria that will be used (Goddard and Emerson 1996). This idea was echoed by Horne and Pierce (1996), who posited that the purpose, nature and focus of the appraisal should be decided upon and agreed to by both parties, i.e. appraisers and appraisees. The determination of the appraisal purposes influences the design of the appraisal instruments and their administration, as well as the interpretation of the results (Kyriakides et al. 2006). Therefore, users' clear understanding of the purpose of the appraisal is indispensable for the effectiveness of the appraisal process.

2.1.2.6 Users' Acceptance

One of the determinant factors for the effectiveness of the performance appraisal process is its acceptance by its users (both appraisers and appraisees). If appraisers and appraisees do not accept the appraisal process, the process will be ineffective irrespective of its degree of technical soundness. Lack of user acceptance provokes resistance and a reduction in user motivation, transforming the process into a paper-"shuffling" exercise. Employees are more likely to accept the appraisal process if they understand it, agree on the value orientation of the process (i.e., its focus on quality over quantity), agree with management on the appraisal criteria used, have confidence in the accuracy of performance measurement, and perceive an absence of appraiser bias (Roberts, 2003).

Appraisees must view appraisers as accurate, impartial, and open in order for the appraisal to be effective (Monyatsi et al. 2006). Moreover, Roberts (2003) noted that the trust employees have in the accuracy and fairness of performance appraisal is a vital concern; otherwise, there will be a tremendous waste of the time and money spent on developing and implementing the system. The author further noted that employees' involvement in all aspects of the performance appraisal process increases their trust in it. Employee involvement incorporates their voice in the process and generates an atmosphere of cooperation and trust, which minimizes defensive behavior and appraiser-appraisee conflict. The argument is that if employees are confident in the fairness of the appraisal process, they are more likely to accept performance scores, even adverse ones.

Skarlicki and Folger (1997) stated that, in any case, if employees perceive the process as unfair and not systematic and thorough, they are unlikely to accept the outcome of the appraisal exercise. The appraisal process can become a source of extreme dissatisfaction when employees believe it is biased, political, or irrelevant.

2.1.3 Problems in Performance Appraisal Process

2.1.3.1 Appraisers’ Problems


Rating accuracy is an important factor which determines the effectiveness of the performance appraisal process. Unfortunately, there are several different error phenomena which all pose a threat to the accuracy of ratings. The following are common appraiser errors and biases in performance appraisal:

Halo/horns effect: - this error refers to a failure to distinguish between various aspects of performance. Halo error occurs when one positive performance aspect causes the evaluator to assign positive ratings to all other aspects of performance. Horns error operates in the opposite direction: one negative aspect leads the evaluator to assign low ratings to all the other aspects of performance. These errors are problems because they prevent the necessary distinction between strong and weak performance (Noe, et al. 1996).
Leniency or harshness error: - Leniency occurs when an evaluator assigns high (lenient) ratings to all appraisees. Appraisers with too-lenient ratings are called easy appraisers or "Santa Claus" (Hamman and Holt, 1999). They are mostly found among groups of appraisers who do not want to put forth the effort to understand the performance standards, or among individuals who have been appraisers for an extremely long time. There are six major reasons why managers inflate ratings: (a) to maximize subordinates' merit raises; (b) to avoid hanging 'dirty laundry' in public; (c) to avoid creating a written record of poor performance; (d) to give a break to an employee who has shown recent improvement; (e) to avoid confrontation with a difficult employee; and (f) to promote a problem subordinate "up and out" of the department (Fried et al. 1999).

Many of these reasons can be interpreted as supervisors' attempts to elicit positive reactions from
subordinates, such as increasing their work motivation and performance, as well as increasing
subordinates' trust in, and cooperation with, their supervisors. Harshness occurs when an evaluator provides low ratings for all appraisees. Such an evaluator appears to have excessively high standards, which results in a low mean score and a distribution of scores skewed toward the low end of the rating scale (Berry, 2003). Such an evaluator is called a hard evaluator or "ax man" (Hamman and Holt, 1999). Hard evaluators often have the problem of being strongly biased by one event, which causes their assessments to be extremely harsh.

Central tendency error: - occurs when an evaluator avoids using high or low ratings and assigns average ratings. This type of average rating is almost useless because it fails to discriminate between performers and provides little information for making HRM decisions (Ivancevich 2004). According to Hamman and Holt (1999), this error is due to the evaluator's feelings of unease with the assessment criteria and an aversion to making mistakes.

Personal biases: - The contrast effect occurs when an evaluator lets another employee's performance influence the rating provided to someone else (Ivancevich 2004). The similar-to-me error is the tendency to evaluate the evaluatee more positively if the evaluatee is perceived to be similar to the evaluator (Jacobs et al., 1980). Stereotyping means that impressions about an entire group alter the impression about a group member (Rudner, 1992). Perception differences error occurs when the viewpoint or past experiences of the evaluator affect how behavior is interpreted (ibid).
2.1.3.2 System Design and Operating Problems
According to Ivancevich (2004), performance appraisal could break down due to poor design. The design can be blamed if the criteria for appraisal are poor, the technique used is cumbersome, or the system is more form than substance. If the criteria used focus solely on activities rather than output (results), or on personality traits rather than performance, the appraisal may not be well received. According to Boice and Kleiner (1997), organizations need to have a systematic framework to ensure that performance appraisal is "fair" and "consistent". The authors concluded that designing an effective appraisal system requires a strong commitment from top management. The system should provide a link between employee performance and organizational goals through individualized objectives and performance criteria.

2.1.4 Performance Appraisal in Higher Education Institutions

2.1.4.1 Purpose of instructors’ performance appraisal


Whilst the above-mentioned concepts about performance appraisal may also apply to educational settings, the complexity of the teaching activity gives performance appraisal in higher learning institutions additional features. According to Berk (2005), performance appraisal for instructors serves to make two types of decisions: formative and summative decisions.

Formative and summative decisions are equivalent terms for the developmental and administrative decisions, respectively, described earlier. Formative decisions use the evidence to improve and shape the quality of teaching. As an individual instructor, one makes formative decisions to plan and revise his/her teaching semester after semester. Similarly, OECD (2009) states that instructor appraisal for improvement focuses on the provision of feedback useful for the improvement of teaching practices, namely through professional development. It involves helping instructors learn about, reflect on, and improve their practice. On the other hand, summative appraisal uses the evidence to "sum up" instructors' overall performance or status in order to make personnel decisions about their annual merit pay, promotion, and tenure.

Summative decisions are rendered by administrators at various points in time to determine whether instructors have a future in the institution, and these decisions have an impact on the quality of professional life. Moreover, summative uses of instructors' performance appraisal focus on holding instructors accountable for their performance by associating it with a range of consequences for their career. Summative appraisal seeks to set incentives for instructors to perform at their best. It typically entails performance-based career advancement and/or salaries, bonus pay, or the possibility of sanctions for underperformance (OECD 2009). Though instructors' performance appraisal is widely applied for summative and/or formative purposes in higher educational institutions, its essence in these institutions has been criticized by the findings of different studies. For instance, Stone (1996), cited in Morris (2005), suggested that performance appraisal is not appropriate for academics and that it is an attack on academic freedom as well as a potential tool to monitor and control staff. The author also noted that performance appraisals in higher education have had limited and confused purposes, and their contribution to enhanced institutional performance and quality has been minimal.

As mentioned earlier, users' understanding and acceptance of the purpose of the performance appraisal process is essential if the process is to be effective. Monyatsi et al. (2006) stated that if instructors are not aware or convinced of the purpose of the instructors' performance appraisal, they become anxious and suspicious of the whole process. To this end, in the effort of assessing the effectiveness of the instructors' performance appraisal process, this study evaluated whether instructors and students had a clear understanding of the purpose of the appraisal in BDU.

2.1.4.2 Criteria for Evaluating Instructors’ Performance


Fundamentally, an instructor's performance appraisal involves measuring whether the instructor has the necessary competencies in the general areas required of an instructor. Since instructors are accountable for the teaching-learning process, whether they are being effective or not forms part of their performance appraisal. Effective teaching arises from the instructor's motivation to support student learning, combined with his or her mastery of subject content and competence in applying appropriate pedagogical requirements (David and Macayan 2010).

Danielson and McGreal (2000) proposed a model containing four domains that represent components of an instructor's professional practice. These are planning and preparation, classroom management, instruction, and professional responsibilities. The authors assumed that competencies in these domains can serve as criteria for instructors' performance appraisal. Moreover, Tigelaar, et al. (2004) identified a framework of teaching effectiveness with the following major domains: the person as instructor, expert on content knowledge, facilitator of learning processes, organizer, and scholar/lifelong learner. Kyriakides, et al. (2006) stated that the goals and tasks assigned to instructors must be clear and specific, the outcomes of instructors' performance must be easily observed, and the standards of evaluation must be clearly stated. Instructors are often expected to accomplish complicated tasks and meet objectives within a predetermined timeframe. Consequently, the resources and support provided constitute important facilitating factors for their work.

Therefore, prior to evaluating instructor performance, it is important to answer the following questions: what kinds of resources are necessary to facilitate educational work? Are there sufficient and/or common resources for all instructors, and how are resources related to instructor performance and education outcomes? (Kyriakides et al. 2006) As described in the earlier section of this paper, the quality of appraisal criteria is one of the factors that determine the effectiveness of the performance appraisal process. Therefore, based on the above theoretical foundation and the points stated earlier (for example, the extent to which the appraisal criteria are realistic, relevant, reliable, etc.), this study has attempted to assess the qualities of the appraisal criteria used to appraise the performance of instructors in the university under study.

2.1.4.3 The Appraisers in Instructors Performance Appraisal


In higher education institutions, since measuring the act of teaching is complex, it is reasonable to use multiple appraisers, who can provide a more accurate, reliable and comprehensive picture of teaching effectiveness than a single source (Berk 2005). Here, the most common sources of instructor performance appraisal are emphasized.

2.1.4.3.1 Student Appraisal of instructors’ Performance

Student appraisal is the most widely used technique to measure instructors' competence inside the classroom. The assumption is that students are the direct consumers of the services provided by instructors and are therefore in a good position to evaluate their instructors' performance. This appraisal covers the most visible teaching habits of an instructor in the classroom as well as personal attributes including communication styles, attitudes, and other dispositions observable in an instructor (David and Macayan 2010). Baker (1986) concluded that the validity of students' appraisal was not influenced by the students' expected grade, the sex of the students, or the academic status of the students.

However, despite the fact that many research findings (Miller 1998; Baker 1986; David and Macayan 2010) support the view that students are in a good position to evaluate a variety of aspects of effective instruction, validity studies of student appraisal report contradictory findings. Jones (1989) found that the validity of students' appraisal of instructors' performance is distorted because students often relate personality characteristics of instructors to teaching competence.

Naftulin et al. (1973) also concluded that an instructor's entertainment level influences student appraisal scores, and they call this influence the "Dr. Fox" effect. In their study, Naftulin et al. placed an actor, known as "Dr. Fox", in a college classroom where he presented a highly entertaining lecture that included no substantive content. The actor received rave student appraisal scores, which led the researchers to conclude that highly charismatic instructors can seduce students into giving high scores despite the students learning nothing.

Abrami et al. (1982) suggested that student ratings should not be used in decision making about instructors' promotion and tenure, because charismatic and passionate instructors can receive favorable student ratings regardless of how well they know their subject matter, and these instructor characteristics bear little relation to how much their students learned. Some instructors dislike student appraisal of instructors' performance and complain about the intellectual and personal capacity of students to objectively rate instructors' performance effectiveness (Emery, et al. 2003). It is possible that some ratings become assessments of students' satisfaction or attitude toward their instructors instead of assessments of actual instructor performance and effectiveness. David and Macayan (2010) stated that student ratings of an instructor's performance could be based mainly on hidden anger resulting from a recent grade received on an exam or from a single negative experience with the instructor. Studies found that there is an association between grades and student appraisals of instructors' performance because student appraisal scores are higher in courses where student achievement is higher (Baird 1997; Cohen 1981).

In addition, Onwuegbuzie et al. (2007) stated that factors associated with testing (e.g. difficult exams and administering sudden quizzes) and grading (e.g. harsh grading, a notable number of assignments and homework) were likely to lower students' appraisal of their instructors.

Gezgin (2011) cited that students are susceptible to cognitive biases such as self-serving bias, recency bias, and serial position bias. The author identified that students attribute success to themselves and blame instructors and external factors (self-serving bias); since recent events are more salient in human memory, students may be biased towards recalling recent weeks of the semester (recency bias); and students' responses in the appraisal form are affected by the order in which the questions are asked (serial position bias).

In light of the continuous debate on students' appraisal of instructors' performance, the focus of this paper was to evaluate the practices of students during appraisal so as to investigate potential biases that may compromise the effectiveness of instructors' performance appraisal. Similarly, the potentially biased practices of peers and department heads have been investigated in the effort of determining the effectiveness of the instructors' performance appraisal process.

2.1.4.3.2 Peer Appraisal of Instructors’ Performance


Peer appraisal is a process of evaluating instructors' performance by colleagues or peers. Many scholars believe that when peers are well informed and well trained, they are ideally suited to rate their colleagues, especially colleagues in the same team (David and Macayan 2010). Peer appraisal of instructor performance is the source of evidence most complementary to student ratings. Peer appraisal is a process in which instructors work collaboratively to assess each other's teaching and to assist one another in the effort to strengthen teaching (Keig and Waggoner 1994). Those aspects of instructors' performance that students are not in a position to evaluate can be covered by peer appraisal. However, although many instructors feel that they benefit from thoughtful attention to their teaching, others find the peer appraisal process intimidating, meaningless, or both (Carter 2008). Moreover, peer appraisal is prone to bias and unreliability because it is based on subjective judgments, particularly if peers are poorly informed of each other's performance (Berk 2005).

2.1.4.3.3 Self Appraisal of Instructors’ Performance


Self appraisal is an alternative approach to assessing instructors in which the instructor evaluates his or her own performance based on a well-defined set of competencies or characteristics (David and Macayan 2010). Self appraisal can provide support for what instructors do in the classroom and can present a picture of performance unobtainable from any other source. Instructors' input on their own teaching completes the triangulation of the three direct observation sources of teaching performance: students, peers, and self. However, there is a possibility of bias in estimating one's own teaching performance (Berk 2005).

2.1.4.3.4 Supervisor Appraisal of Instructors’ Performance


Supervisor appraisal is the process by which administrators or supervisors regularly evaluate instructors' performance. There is the possibility of inducing fear in the appraisees due to the perceptual dilemma resulting from the contradictory bureaucratic and professional expectations inherent in administrative and supervisory roles. To some instructors, this exercise may feel more like an administrative task conducted by the supervisor and less like an exercise for the purpose of the instructor's development. The validity of the rating may also be questioned because it is susceptible to subjectivity, especially if there is only one supervisor acting as evaluator (Nhundu 1999).

2.2 Empirical literature


According to Pulakos (2004), only ten percent of employees believe that their firm's performance appraisal helps them to improve performance. Appraisal errors, lack of objectivity and non-performance variables such as age, sex and race cause difficulties in the appraisal process (Miner, 1968; Dornbush and Scott, 1975; Winstanley, 1980).

Roberts (1998) found that four out of ten supervisors agreed that employees receive much of the blame for poor performance when, in reality, poor management practices are responsible.

Elverfeldt (2005), generalizing from a literature analysis, concluded that the most significant factor in determining performance appraisal system effectiveness is the acceptance of its users. Thus, a survey was conducted in a target organization to test how users perceive their current performance appraisal system. It was found that factors such as 360-degree appraisal, procedural justice, goal-setting and performance feedback scored relatively high, while performance-based pay received the worst score. The only demographic variable that partly accounted for the variance in opinion about these factors was age.

Peterson‟s (2000) extensive literature review of over 70 years of empirical research on teacher
evaluation concluded: “Seventy years of empirical research on teacher evaluation shows that
current practices do not improve teachers or accurately tell what happens in classrooms. . . . Well
designed empirical studies depict principals as inaccurate raters both of individual teacher
performance behaviors and of overall teacher merit” (pp. 18–19).

Lortie (1975) found that only 7% of the teachers he interviewed saw judgments by their
organizational superiors as the most appropriate source of information about how well they were
doing. The study concluded that teachers had little direct interest in or respect for the process or
results of evaluation, and most operated independently of them.

Kauchak, Peterson, and Driscoll (1985), in a survey study of teachers in Utah and Florida, found
evaluations based on principal visits to be “perfunctory with little or no effect on actual teaching
practice” (p. 33). One problem identified by the teachers in the study was that evaluations were
too brief and lacked rigour. Teachers also complained that the principal was not knowledgeable
in their grade level or subject area. Finally, teachers in the study felt that the evaluation reports
lacked specifics about how to improve their teaching practice.

Johnson (1990) interviewed 115 teachers and found similar results. Teachers felt that principals
rarely offered ideas for improvement. They also felt that the ratings forms and items encouraged
principals to be picky in their criticisms, almost forcing principals to find something to criticize so that they would look discriminating. However, the main dissatisfaction of teachers in the study was what they saw as a basic lack of competence on the part of administrators to evaluate.
This included a lack of self-confidence, expertise, subject matter knowledge, and perspective on
what it is really like to be in the classroom.

The American literature on teacher evaluation indicates that neither teachers nor administrators
seem to receive much benefit from the process, despite it consuming large quantities of time and
resulting in considerable stress. The impact on teaching practice appears to be negligible and
often results in negative feelings among teachers as they do not feel that their evaluations are
objective or accurate. Administrators often view teacher evaluations as something they are forced to do rather than something they want to do (Maharaj, 2014).

2.3 Conceptual Framework


The performance appraisal process is not generic or easily transferred from one organization to another; its design and administration must be tailor-made to match employee and organizational features and qualities (Boice and Kleiner 1997).

The performance appraisal process has been categorized into: (1) establishing job criteria and appraisal standards; (2) timing of appraisal; (3) selection of appraisers; and (4) providing feedback (Scullen et al., 2003). For this study, the factors that determine the effectiveness of the instructors' performance appraisal process are conceptualized as follows:

Preparing quality appraisal criteria is the first step in the process of performance appraisal (Ivancevich 2004). Moreover, appraisal criteria must possess characteristics such as reliability in order for the performance appraisal to be effective (Ivancevich 2004). Monyatsi et al. (2006) stated that if instructors are not aware or convinced of the purpose of the instructors' performance appraisal, they become anxious and suspicious of the whole process.

Performance feedback to the employee generally aims at improving performance effectiveness through stimulating behavioral change. Performance feedback not only generates change in job behavior but also improves appraisees' organizational commitment (Tziner and Kopelman, 2002). Performance feedback is the second factor considered in this study; it reveals the strengths and weaknesses of the employee.

OECD (2009) states that instructors appraisal for improvement focuses on the provision of
feedback useful for the improvement of teaching practices, namely through professional
development. It involves helping instructors learn about, reflect on, and improve their practice.

The third factor which this study considers is the purpose of the appraisal. According to Monyatsi, et al. (2006), a clearly stated purpose is an essential characteristic of effective performance appraisal. Employees are more likely to be committed, and their daily performance may improve, if they understand the purpose of their performance appraisal; an appraisal with an unclear purpose is a meaningless exercise.

The last factor included in the study is the practices of appraisers. Skarlicki and Folger (1997) stated that, in any case, if employees perceive the process as unfair and not systematic and thorough, it is unlikely that they will accept the outcome of the appraisal exercise. The appraisal process can become a source of extreme dissatisfaction when employees believe the process is biased, political, or irrelevant.

Students' appraisal covers the most visible teaching habits of an instructor in the classroom as well as personal attributes including communication styles, attitudes, and other dispositions observable in an instructor (David and Macayan 2010).

Peer appraisal of instructor performance is the source of evidence most complementary to student ratings. Peer appraisal is a process in which instructors work collaboratively to assess each other's teaching and to assist one another in the effort to strengthen teaching (Keig and Waggoner 1994).

Supervisor appraisal is the process by which administrators or supervisors regularly evaluate instructors' performance. The validity of the rating may be questioned because it is susceptible to subjectivity, especially if there is only one supervisor acting as evaluator (Nhundu 1999).

Figure 1: Conceptual framework for the effectiveness of the appraisal process (Source: Researcher's own). The framework relates four factors to the effectiveness of the instructors' performance appraisal process: practices of appraisers (student, peer, and head), characteristics of performance feedback, clarity of the purpose of the appraisal, and qualities of the appraisal criteria.

CHAPTER THREE

METHODOLOGY OF THE STUDY


3.1 Research Design
The study employed a cross-sectional survey design and is quantitative in nature. According to Zikmund (2000), a cross-sectional survey design is the type of survey design in which the necessary data are collected at one point in time from a particular set of the population. This research design was utilized because of resource and time limitations.

3.2 Data Type and Source


Both quantitative and qualitative data were collected from primary and secondary sources. Secondary sources such as books, internet sources and journal articles were utilized mainly for the literature review. In addition, BDU teaching documents and instructors' performance appraisal scores were reviewed. The primary data were obtained from instructors, department heads and Regular Graduating Class Students (RGCS) of the CoBE of BDU. The RGCS were targeted for this study based on the assumption that they are more experienced than non-RGCS in evaluating instructors' performance.

3.3 Sampling Design and Procedure


In order to collect the necessary data through questionnaires, the target populations of the study were the instructors and the RGCS. Respondents were selected from the College of Business and Economics. The total population for this study is 841 (123 instructors and 718 RGCS).

The sample size was determined by Yamane's (1967: 886) simplified formula, assuming a 95% confidence level and P = 0.5:

n = N / (1 + N(e)^2)

where n is the sample size, N is the population size and e is the level of precision (e = 0.05 at the 95% confidence level). Accordingly, the sample size for the study was calculated as follows:

n = 841 / (1 + 841(0.05)^2) ≈ 271

Because instructors and RGCS responded to two separate questionnaires, the total sample size was allocated in proportion to the sizes of the two sub-populations. Accordingly, 39 instructors and 232 RGCS were included in the sample.
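As a quick check of the figures above, the following minimal Python sketch (not part of the original thesis; the variable names are illustrative) reproduces the Yamane calculation and the proportional split between instructors and students.

```python
# Illustrative sketch: Yamane (1967) sample size and proportional allocation.
N_TOTAL = 841        # 123 instructors + 718 regular graduating-class students
E = 0.05             # level of precision (95% confidence, P = 0.5 assumed)

n = N_TOTAL / (1 + N_TOTAL * E ** 2)
print(f"total sample size: {n:.1f}")      # about 271.1, reported as 271

for label, size in (("instructors", 123), ("students", 718)):
    share = n * size / N_TOTAL
    # about 39.6 and 231.4; the thesis rounds these to 39 and 232 (summing to 271)
    print(f"{label}: {share:.1f}")
```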

Proportionate stratified random sampling was utilized to select sample respondents (both instructors and students). First, the total populations of instructors and RGCS in the college were stratified into six strata based on the number of departments in the college. To ensure representativeness of the instructor and student populations, the numbers of sample instructors and students were made proportional to the size of the instructor and student populations in each stratum (department), respectively. The proportional sample size from each stratum was calculated by the following formula:

ni = (n * Ni) / N
Where: ni is the sample instructors in respective departments; Ni is the total number of instructors in
the department; n and N are the sample size and the total population size at college level.

Table 1: Sampling Instructors

S. No Departments Total Number of Number of Sample


Instructors (Ni) Instructors(ni)
1. Economics 29 9
2. management 27 8
3. Accounting and finance 27 8
4 tourism and Hotel management 9 3
5 Logistics and supply chain 11 4
6 Marketing management 20 7
Total 123 39
Source: BDU Personnel Office (2014/15)

Table 2: Sampling Students

S. No Departments Total Number of Number of Sample


students (Ni) students(ni)
1. Economics 157 51
2. management 171 55
3. Accounting and finance 245 79
4 tourism and Hotel management 41 13
5 Logistics and supply chain 47 15
6 Marketing management 57 19
Total 718 232

Source: BDU CoBE Registrar office

After the number of sample respondents from each stratum was determined, a simple random sampling technique was applied using SPSS Version 21 to arrive at the individual sample respondents. This sampling technique was used to avoid bias.
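For readers without SPSS, the selection step can be illustrated with the following hedged Python sketch; the department rosters and seed below are hypothetical and merely stand in for the actual ID lists used in the study.

```python
# Illustrative alternative to the SPSS random-selection step described above.
import random

random.seed(2015)                                    # fixed seed for a reproducible draw
rosters = {                                          # hypothetical instructor ID lists per stratum
    "Economics": list(range(1, 30)),                 # 29 instructors
    "Management": list(range(1, 28)),                # 27 instructors
}
sample_sizes = {"Economics": 9, "Management": 8}     # allocations from Table 1

for dept, ids in rosters.items():
    chosen = random.sample(ids, sample_sizes[dept])  # simple random sample without replacement
    print(dept, sorted(chosen))
```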

3.4 Methods of Data Collection and Instrumentation


The Questionnaire
Self-administered questionnaires with both open-ended and close-ended questions were distributed to the sample respondents. Questionnaires were used because they limit inconsistency and save time. The following procedure was pursued to administer the questionnaires. First, the researcher identified the sample respondents using SPSS. The questionnaires were then distributed to the sampled instructors of each department in their offices and collected after a day, after the researcher had asked for their cooperation, explained the purpose of the data collection and how the questionnaire should be filled in, and assured the confidentiality of the information to be obtained. Half of the student questionnaires were distributed through an instructor who teaches the RGCS, namely Zewdu Lake; the remaining part was distributed through the students' representatives (male and female) in each department, checked against the randomly sampled list. As mentioned earlier, two separate questionnaires were prepared independently for instructors and for students. Although 39 questionnaires were distributed to instructors and 232 to students, only 35 (89.74%) and 221 (95.26%) were returned from the former and the latter, respectively. The questionnaire for instructors had five parts.
The first part of the questionnaire covers the demographic characteristics of respondents such as age, years of experience, department, and academic rank. The second part is about the qualities of the appraisal criteria. Here, some ideal characteristics that effective appraisal criteria must possess were identified, and the instructors were asked whether their appraisal criteria possess those characteristics. The questions were in statement form, and instructors were asked to express their agreement or disagreement on a five-point Likert scale, where 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, and 5 = strongly agree. The third part of the questionnaire deals with the appraisers' practices (students, peers and heads) in evaluating instructors' performance; here again instructors expressed their level of agreement or disagreement on the five-point Likert scale. The fourth part is about the characteristics of the performance feedback system. The final part addresses the clarity of the purpose of the instructors' performance appraisal process.

Moreover, the questionnaire for students had two major sections. The first section is about students' demographic profile, including age, sex, department, and cumulative grade point average (CGPA). In the second part, students were asked to respond to a series of statements representing their potential practices in evaluating instructors' performance.

A scale reliability test for the major items of the questionnaires was conducted. The reliability of the scale for the 62 items in the questionnaire completed by instructors was tested using Cronbach's alpha, and the overall reliability coefficient is 0.716 (71.6 percent). Similarly, the reliability of the scale for the 12 major items in the questionnaire completed by students was tested, and the overall reliability coefficient is 0.736 (73.6 percent). George and Mallery (2003) provide the following guidelines for interpreting Cronbach's alpha:
a. > 0.90 = Excellent , b. 0.80 - 0.89 = Good, c. 0.70 - 0.79 = Acceptable, d. 0.60 - 0.69 =
Questionable, e. 0.50 - 0.59 = Poor, f. < 0.50 = Unacceptable.
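For illustration only (the thesis computed these coefficients in SPSS), the sketch below shows how Cronbach's alpha can be obtained from an item-response matrix; the example responses are made up.

```python
# Illustrative computation of Cronbach's alpha for a set of Likert items.
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items array of scores (e.g. 1-5 Likert ratings)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical responses from five respondents on four items
demo = np.array([[4, 5, 4, 4],
                 [3, 3, 2, 3],
                 [5, 4, 5, 5],
                 [2, 2, 3, 2],
                 [4, 4, 4, 3]])
print(round(cronbach_alpha(demo), 3))
```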

The Personal Interview

An interview schedule was designed to conduct semi-structured interviews with all heads of departments within the college. However, during the data collection period only three (3) of the six department heads were interviewed, because the others were either reluctant or not available. Before starting each interview, the researcher introduced himself and explained the purpose of the study to the interviewee. During the interview sessions, the researcher jotted down all important points on a notepad and organized them for analysis.

3.5 Data Processing and Methods of Data Analysis
The data collected using the semi-structured questionnaires were edited, coded and analyzed with great care. Both in-house and field editing were undertaken to detect errors committed by respondents while completing the questionnaires. The coding of the possible alternatives in the questionnaire was done in advance of administering the questionnaire to the sample respondents. That is, on the five-point scale the possible responses were pre-coded (for example, 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, and 5 = strongly agree; 1 = very low, 2 = low, 3 = medium, 4 = high and 5 = very high) to facilitate quick answering of the questions and to simplify data entry into the computer software for analysis.

A qualitative method of data analysis was employed for the data collected through the personal interviews. Data collected through the questionnaires were analyzed quantitatively using statistical tools such as tabulation, bar charts, chi-square tests and pie charts to present the data. In addition, descriptive statistics such as means, percentages, and standard deviations were used to analyze and interpret the data. For data analyzed using mean scores, since a five-point Likert scale was used, a mean score of 3.0 was considered the midpoint (neutral), while mean scores greater than 3.0 and less than 3.0 were interpreted as agreement and disagreement, respectively. Data were analyzed with the help of SPSS version 21. After the data had been presented and analyzed, conclusions and recommendations were drawn from the findings.
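The mean-score decision rule described above can be illustrated with the short Python sketch below; the ratings are hypothetical and only demonstrate how an item mean is mapped to agreement, neutrality or disagreement.

```python
# Illustrative application of the 3.0-midpoint rule to one Likert item.
import statistics

responses = [4, 2, 5, 3, 4, 1, 2, 4, 5, 3]        # hypothetical 1-5 ratings

mean = statistics.mean(responses)
sd = statistics.stdev(responses)                  # sample standard deviation
if mean > 3.0:
    verdict = "agreement"
elif mean < 3.0:
    verdict = "disagreement"
else:
    verdict = "neutral"
print(f"mean = {mean:.2f}, sd = {sd:.2f} -> interpreted as {verdict}")
```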

CHAPTER FOUR

RESULTS AND DISCUSSION


4.1 Demographic Profile of Respondents (Instructors)
Although many demographic characteristics of instructors could be considered, this paper focuses on a limited set of factors: sex, years of experience, academic rank and department.
Table 3: Sex of Instructors by Department

sex of respondents Total


Department of respondents Male Female
Economics Count 8 0 8
% within department of respondents 100.0% 0.0% 100.0%
% of Total 22.9% 0.0% 22.9%
Management Count 5 2 7
% within department of respondents 71.4% 28.6% 100.0%
% of Total 14.3% 5.7% 20.0%
Accounting Count 7 0 7
and Finance % within department of respondents 100.0% 0.0% 100.0%
% of Total 20.0% 0.0% 20.0%
Marketing Count 6 0 6
Management % within department of respondents 100.0% 0.0% 100.0%
% of Total 17.1% 0.0% 17.1%
Logistics Count 3 1 4
% within department of respondents 75.0% 25.0% 100.0%
% of Total 8.6% 2.9% 11.4%
Tourism and Count 3 0 3
Hotel % within department of respondents 100.0% 0.0% 100.0%
management % of Total 8.6% 0.0% 8.6%
Total Count 32 3 35
% within department of respondents 91.4% 8.6% 100.0%
% of Total 91.4% 8.6% 100.0%

Source: own survey result 2015

Table 3 above indicates that male instructors account for 91.4% of the sample, while females account for only 8.6%. The table also indicates the distribution of male and female instructors across departments: the proportion of female instructors within each department was 0.0% in Economics, 28.6% in Management, 0.0% in Accounting and Finance, 0.0% in Marketing Management, 25.0% in Logistics and Supply Chain, and 0.0% in Tourism and Hotel Management. Correspondingly, male instructors made up 100.0%, 71.4%, 100.0%, 100.0%, 75.0% and 100.0% of those departments, respectively. It can also be clearly seen from the table that 22.9% of the sampled instructors were from Economics, 20.0% from Management, 20.0% from Accounting and Finance, 17.1% from Marketing Management, 11.4% from Logistics and Supply Chain, and 8.6% from Tourism and Hotel Management.

Table 4: Academic Rank of Instructors

S. No Frequency Percent
1 Assistant lecturer 3 8.6
2 Lecturer 24 68.6
3 Assistant Professor 8 22.9
4 Total 35 100.0

Source: own survey result 2015

Table 4 above clearly indicates that 68.6% of instructors hold the academic rank of lecturer, 22.9% hold the rank of assistant professor, and the remaining 8.6% of the sample are assistant lecturers. Generally, it can be said that the majority of instructors are ranked as lecturers.

Figure 2: Instructors' years of experience

Source: Own Survey Result, 2015

Table 5: instructor’s years of experience

Experience Frequency Percent


2-3 year 7 20.0
4-6 year 6 17.1
7-10 year 19 54.3
=>11 3 8.6
Total 35 100.0

Source: Own Survey Result, 2015

Figure 2 and Table 5 show that the majority of instructors (54.3%) had 7-10 years of experience. In addition, 20%, 17.1% and 8.6% reported 2-3 years, 4-6 years and 11 or more years of experience, respectively. From this, one can deduce that the university has a shortage of more experienced instructors (those having 11 or more years of experience).

4.2 Demographic Profile of Respondents (Students)
Whilst many demographic aspects of students could be considered, this paper focuses on three factors, namely sex, cumulative grade point average (CGPA) and department, which were assumed to be relevant to the study.

Figure 3: Distribution of Students by Sex

Source: Own Survey Result, 2015


One can notice from the above pie chart that the majority of students (61.99%) are male while the remaining 38.01% are female. Comparing the percentages of males and females, the student population is male dominated. Moreover, the following table indicates the distribution of male and female students across departments.

Table 6: department of students * sex of students Cross tabulation

Department of respondents sex of respondents Total


Male female
Management Count 25 28 53
% within department of respondents 47.2% 52.8% 100.0%
% of Total 11.3% 12.7% 24.0%
Economics Count 26 24 50
% within department of respondents 52.0% 48.0% 100.0%
% of Total 11.8% 10.9% 22.6%
Accounting and Count 55 17 72
Finance % within department of respondents 76.4% 23.6% 100.0%
% of Total 24.9% 7.7% 32.6%
Marketing Count 14 5 19
Management % within department of respondents 73.7% 26.3% 100.0%
% of Total 6.3% 2.3% 8.6%
Logistics Count 10 5 15
% within department of respondents 66.7% 33.3% 100.0%
% of Total 4.5% 2.3% 6.8%
Tourism and Hotel Count 7 5 12
Management % within department of respondents 58.3% 41.7% 100.0%
% of Total 3.2% 2.3% 5.4%
Total Count 137 84 221
% within department of respondents 62.0% 38.0% 100.0%
% of Total 62.0% 38.0% 100.0%
Source: Own Survey Result, 2015

As can easily be seen from Table 6, 24% of students are from Management, 22.6% from Economics, 32.6% from Accounting and Finance, 8.6% from Marketing Management, 6.8% from Logistics, and 5.4% from Tourism and Hotel Management. The table also shows that Management is the only department in which female students (52.8% within the department, or 12.7% of the total) outnumber male students (47.2%, or 11.3% of the total). Conclusively, it can be said that the Accounting and Finance department (32.6%) is more populous than the other departments.

Table 7: CGPA of Students

S. No CGPA Frequency Percent

1 2.0-2.74 85 38.5
2 2.75-3.24 54 24.4
3 3.25-3.74 47 21.3

4 3.75-3.94 28 12.7

5 3.95-4.0 7 3.2

Total 221 100.0

Source: Own Survey Result, 2015

Generally speaking, the largest group of students (38.5%) stated that they had a CGPA within the interval 2.00-2.74, whereas only a few students (3.2%) stated that their CGPA falls in the very great distinction interval (3.95-4.00). The analysis shows that as the CGPA interval increases, the number of students achieving it decreases.

4.3 Qualities of Appraisal Criteria


Establishing appraisal criteria is the first step in performance appraisal process. In order to
generate criteria for instructors‟ performance appraisal it is acknowledged that potential sources
including the instructors‟ opinions, the instructors‟ job descriptions and/or the professional code
could guide the development of such criteria (Kyriakides, et al. 2006). To this end, some ideal
characteristics that effective appraisal criteria must possess were identified and the instructors
were asked whether their appraisal criteria possess those identified characteristics.

4.3.1 Instructors Awareness of Their Appraisal Criteria
According to Khan (2007), performance appraisal should be based on the job description for the position the employee holds. This is vital in helping every employee of the organization know exactly what is expected of them and the yardsticks by which their performance will be evaluated. To this end, instructors were asked to express their agreement or disagreement with the statement "Upon employment at Bahir Dar University, every instructor was given a job description specifying his/her duties".

Table 8: Existence of Job Description Specifying Instructors Job Duties

Responses Frequency Percent


strongly disagree 9 25.7
Disagree 12 34.3
Neutral 7 20.0
Agree 5 14.3
strongly agree 2 5.7
Total 35 100.0
Source: Own Survey Result, 2015

Table 8 shows that 25.7% strongly disagreed, 34.3% disagreed, 20% neither agreed nor disagreed, 14.3% agreed and 5.7% strongly agreed with the above statement. This means that the majority (60%) of instructors do not believe that a job description is given to instructors upon employment. The implication is that, in the absence of a job description specifying the duties and responsibilities of individual instructors, instructors may hardly know their performance expectations.

Logically, in addition to providing a job description upon employment, training employees (instructors) on the appraisal criteria may serve the purpose of clarifying performance expectations. To this end, the following bar chart indicates the instructors' opinions on whether they were formally trained on their appraisal criteria.

Figure 4: Instructors Formal Training about Their Appraisal Criteria

Source: Own Survey Result, 2015

According to Figure 4, the majority of instructors, 25 (71.4%), did not support the statement "Upon employment at Bahir Dar University, every instructor was formally trained on the appraisal criteria". More specifically, 9 (25.7%) strongly disagreed and 16 (45.7%) disagreed with the statement. On the other hand, only a minority, 4 (11.4%), supported the statement, and the remaining 17.1% neither agreed nor disagreed. Generally, the analysis shows that, because job descriptions and formal training regarding the appraisal criteria are lacking, instructors (especially less experienced ones) have little chance to know the standards against which their performance is evaluated.

Furthermore, instructors were also asked how they managed to keep their performance up to standard in the university. The table below summarizes the responses of instructors by years of experience.

Table 9: Ways for keeping performance up to standard by instructors’ years of experience
Respondents' work    Ways of keeping performance up to standard (count, % within work experience, % of total)        Total
experience           By asking a senior      By doing what        By reading the criteria      I found it difficult
                     instructor in the       I feel               on the instructors'          to know what
                     department              appropriate          evaluation form              is right
2-3 Count 0 7 0 0 7
year % within work 0.0% 100.0% 0.0% 0.0% 100.0%
experience
% of Total 0.0% 20.0% 0.0% 0.0% 20.0%
4-6 Count 3 0 0 3 6
year % within work 50.0% 0.0% 0.0% 50.0% 100.0%
experience
% of Total 8.6% 0.0% 0.0% 8.6% 17.1%
7-10 Count 6 3 3 7 19
year % within work 31.6% 15.8% 15.8% 36.8% 100.0%
experience
% of Total 17.1% 8.6% 8.6% 20.0% 54.3%
= >11 Count 0 0 3 0 3
% within work 0.0% 0.0% 100.0% 0.0% 100.0%
experience
% of Total 0.0% 0.0% 8.6% 0.0% 8.6%
Total Count 9 10 6 10 35
% within work 25.7% 28.6% 17.1% 28.6% 100.0%
experience
% of Total 25.7% 28.6% 17.1% 28.6% 100.0%
Source: Own Survey Result, 2015

Table 9 shows that 28.6% of instructors do "what they feel appropriate" in their attempt to keep their performance up to standard. The same proportion of instructors (28.6%) found it difficult to know what is right in order to keep their performance up to standard. Around 25.7% asserted that they ask a senior instructor in their department, while 17.1% asserted that they read the appraisal criteria on their appraisal form in order to keep their performance up to standard.

Furthermore, a closer look at Table 9 shows that, for instructors with eleven or more years of experience, reading the appraisal criteria on the appraisal form was the only mechanism reported (100%). For instructors with fewer than eleven years of experience, doing what they feel is appropriate, finding it difficult to know what is right, and asking a senior instructor in the department are the dominant mechanisms. Overall, "reading the appraisal criteria on the appraisal form" is the least commonly used mechanism among instructors. This means that instructors gave less emphasis to the appraisal criteria as a reference to guide their performance. In the subsequent sections of the paper, an attempt is made to discuss instructors' perceptions of various characteristics of the appraisal criteria; these discussions may also provide a clue as to why the appraisal criteria have received less attention from instructors. Additionally, the following Table 10 shows the relationship between work experience and instructors' perception of whether the current appraisal criteria were prepared in consultation with instructors.

Table 10: The relationship between instructors' work experience and instructors' perception of whether current appraisal criteria were prepared in consultation with instructors

Chi-Square Tests
Value df Asymp. Sig. Exact Sig. Exact Sig. Point
(2-sided) (2-sided) (1-sided) Probability
Pearson Chi-Square 21.527a 9 .011 .007**
Likelihood Ratio 26.118 9 .002 .003***
Fisher's Exact Test 16.794 .009**
Linear-by-Linear 2.454b 1 .117 .124 .070 .020
Association
N of Valid Cases 35
a. 15 cells (93.8%) have expected count less than 5. The minimum expected count is .60.
b. The standardized statistic is -1.567.

The chi-square test shows that there is a strong relationship between work experience and instructors' perception of whether the current appraisal criteria were prepared in consultation with instructors. However, because the test assumption is violated (93.8% of cells have expected counts below 5), Fisher's exact test is reported instead: value = 16.794, p = .009. This result is consistent with the mean scores, indicating a strong relationship between instructors' work experience and their perception of whether the appraisal criteria were prepared in consultation with instructors.
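As a cross-check of the figures in Table 10, the following Python sketch (not the SPSS procedure used in the thesis) runs a chi-square test of independence on the counts reported later in Table 15, which appear to be the data underlying this test; it reproduces the Pearson chi-square of about 21.5 with df = 9 and p of about .011.

```python
# Illustrative chi-square test of independence on the Table 15 cross-tabulation.
from scipy.stats import chi2_contingency

observed = [        # rows: 2-3, 4-6, 7-10, >=11 years of experience
    [0, 3, 4, 0],   # columns: strongly disagree, disagree, neutral, agree
    [3, 0, 0, 3],
    [6, 6, 3, 4],
    [3, 0, 0, 0],
]
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, df = {dof}, p = {p:.3f}")   # approx. 21.527, 9, 0.011

# Many expected counts fall below 5 (see footnote a of Table 10), so an exact
# test is preferable; for r x c tables this is usually obtained from SPSS or
# R's fisher.test, since scipy's fisher_exact handles only 2 x 2 tables.
```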

4.3.2 Relevance, Reliability and Realism of Appraisal Criteria


As noted earlier in the literature review part of this paper, for appraisal criteria to be effective they must be relevant, reliable, and realistic. Therefore, let us take a look at whether the instructors' performance appraisal criteria in the university fulfill these and some other characteristics. According to Noe et al. (1996), the relevance of an appraisal criterion is defined as the extent to which the performance measure assesses all relevant, and only the relevant, aspects of performance.

Based on this definition, two aspects of the relevance of appraisal criteria are identified: the phrases "all relevant" and "only relevant" refer to the completeness of the criteria and to the relevance of every criterion to job duties, respectively. In line with this, Table 11 displays the responses of instructors regarding whether their appraisal criteria satisfy these aspects of relevance.

Table 11: Relevance of Appraisal Criteria

Statement: All appraisal criteria are relevant to tasks in my job
  Strongly disagree 9 (25.7%); Disagree 10 (28.6%); Neutral 6 (17.1%); Agree 9 (25.7%); Strongly agree 1 (2.9%); Total 35 (100%)

Statement: All my duties are measured in appraisal criteria
  Strongly disagree 9 (25.7%); Disagree 16 (45.7%); Neutral 3 (8.6%); Agree 7 (20.0%); Strongly agree 0 (0.0%); Total 35 (100%)

Source: Own Survey Result, 2015

Table 11 reveals that, when instructors were asked to express their opinion on the statement "All appraisal criteria are relevant to tasks in my job", 54.3% (28.6% disagree plus 25.7% strongly disagree) opposed the statement. On the other hand, 28.6% of instructors supported the statement (25.7% agree plus 2.9% strongly agree). The remaining 17.1% of instructors neither agreed nor disagreed with the statement. Furthermore, responding to an open-ended question, one assistant professor stated that "The appraisal form contains relevant and irrelevant factors. For example, out of 25-30 criteria on teaching appraisal form (completed by students), the relevant criteria are not more than eight. Therefore, it needs revision to select best yardsticks that evaluate instructors' work related performance and the form needs customization belongs to each department."
Table 11 also indicates the instructors' responses to the statement "All my duties are measured in appraisal criteria". In other words, instructors were asked whether the appraisal criteria are comprehensive enough to measure all relevant tasks of their job. Accordingly, 71.4% of instructors were against the statement (25.7% strongly disagree and 45.7% disagree). On the other hand, around 20% of instructors agreed that their appraisal criteria measure all their job-relevant duties. The remaining 8.6% of instructors asserted that they are neither for nor against the statement.

Generally, Table 11 reveals that the majority of instructors do not view all their appraisal criteria as relevant to their job, and they also believe that some of their job duties that should have been measured by the appraisal criteria were not included in the current appraisal form. Therefore, it can be concluded that instructors had reservations about the relevance and completeness of their appraisal criteria.

Table 12: Realism of Appraisal Criteria

Statement: Current appraisal criteria take into consideration the practical difficulties in the environment in which the instructor performs
  Strongly disagree 9 (25.7%); Disagree 12 (34.3%); Neutral 7 (20.0%); Agree 7 (20.0%); Strongly agree 0 (0.0%); Total 35 (100%)

Source: Own Survey Result, 2015

Instructors were asked to express their opinion on the statement "Current appraisal criteria take into consideration the practical difficulties in the environment in which I work". Practical difficulty in this context refers to the resources and support that instructors need in order to successfully meet the appraisal criteria. According to Table 12, whilst 60% (25.7% strongly disagree plus 34.3% disagree) refuted the statement, only 20% supported it; the remaining 20% neither agreed nor disagreed. Generally, from the above description it is possible to say that the majority of instructors believe that their current appraisal criteria do not take into account the practical difficulties in the environment in which they work. According to Kyriakides et al. (2006), instructors are often expected to accomplish complicated tasks and meet objectives within a predetermined timeframe. Consequently, the resources and support provided for instructors comprise important facilitating factors for their work. The implication is that, instead of simply using the appraisal criteria to evaluate the performance of instructors, it is better for the university to reconsider the kind, quality and quantity of resources available to all instructors.
Ivancevich (2004) defined the reliability of appraisal criteria as the consistency of the measure. To this end, the table below reveals the instructors' perception of the reliability of their appraisal criteria.

Table 13: Reliability of Appraisal Criteria

reliability of evaluation criteria


Responses Frequency Percent
strongly disagree 13 37.1
disagree 15 42.9
agree 7 20.0
Total 35 100.0
Source: Own Survey Result, 2015

Looking at Table 13, one can notice that most instructors (80%, comprising 37.1% strongly disagree and 42.9% disagree) feel that their appraisal criteria are not reliable, while the remaining 20% of instructors stated that their appraisal criteria are reliable. Again, judging by the voice of the majority of instructors, it is possible to conclude that the reliability of the appraisal criteria needs improvement.

4.3.3 Appraisal Criteria as Measure of Student Learning


Students make self-reflective judgments about teaching based on whether they have learned the contents of instruction. For this reason, the criteria for evaluating effective teaching should be designed to assess the extent to which teaching and learning activities enhanced student learning (Nimmer and Stone 1991; Ellet et al. 1997). To this end, instructors in Bahir Dar University were asked to express their agreement or disagreement with the statement "Current appraisal criteria measure how an instructor contributes to student learning". Table 14 below shows the responses of instructors on this issue.

Table 14: Appraisal Criteria as Measure of Student Learning

Responses Frequency Percent


strongly disagree 13 37.1
Disagree 12 34.3
Neutral 6 17.1
Agree 4 11.4
Total 35 100.0
Source: Own Survey Result, 2015
Table 14 above shows that the majority (71.4%) of instructors opposed the statement "appraisal criteria measure how an instructor contributes to students learning" (37.1% strongly disagree plus 34.3% disagree). Whilst 17.1% were neutral, 11.4% were in agreement with the statement. Moreover, some instructors expressed their opinion in open-ended form. For example, one assistant professor said that "…in the current appraisal sheet the criteria are not in the manner that measures how an instructor adds value to teaching learning process. Rather the points stated their focus on the personal behavior of instructors." The analysis shows that the majority of instructors think that their appraisal criteria do not measure how much an instructor contributes to student learning. Since students are the major customer group that the university serves (Berk 2005) and the primary onus is on the shoulders of instructors (Simmons and Iles 2010), criteria measuring student learning should be incorporated in the instructors' performance appraisal form.

4.3.4 Participation of Instructors in Design of Appraisal Criteria


The development of reliable, valid, fair and useful appraisal criteria is enhanced by employee participation, because workers possess unique and essential information needed for developing realistic yardsticks (Roberts 2003; Elverfeldt 2006). A glance at Table 15 below shows that the majority of instructors (60%; 34.3% strongly disagree and 25.7% disagree) did not support the idea that "current appraisal criteria are prepared in consultation with instructors". While 20% neither agreed nor disagreed, the remaining 20% were in agreement with the idea.

Table 15: Instructors' years of experience * Appraisal criteria are prepared in consultation with instructors (cross-tabulation)

Instructors' work    Count (n), % within respondents'     Current appraisal criteria are prepared in consultation with instructors
experience           years of experience, and
                     % of total                           Strongly disagree      Disagree      Neutral      Agree         Total
2-3 year Count 0 3 4 0 7


% within work experience 0.0% 42.9% 57.1% 0.0% 100.0%
% of Total 0.0% 8.6% 11.4% 0.0% 20.0%
4-6 year Count 3 0 0 3 6
% within work experience 50.0% 0.0% 0.0% 50.0% 100.0%
% of Total 8.6% 0.0% 0.0% 8.6% 17.1%
7-10 year Count 6 6 3 4 19
% within work experience 31.6% 31.6% 15.8% 21.1% 100.0%
% of Total 17.1% 17.1% 8.6% 11.4% 54.3%
=>11 Count 3 0 0 0 3
% within work experience 100.0% 0.0% 0.0% 0.0% 100.0%
% of Total 8.6% 0.0% 0.0% 0.0% 8.6%
Total Count 12 9 7 7 35
% within work experience 34.3% 25.7% 20.0% 20.0% 100.0%
% of Total 34.3% 25.7% 20.0% 20.0% 100.0%

Source: Own Survey Result, 2015


Moreover, a more detailed analysis shows that 100% of instructors with >=11 years of experience strongly disagreed, and 63.2% (31.6% strongly disagree plus 31.6% disagree) of those with 7-10 years of experience stated that the current appraisal criteria were not prepared in consultation with instructors. According to the interviews with heads of departments, the appraisal criteria were prepared at national level by the Ministry of Education (MoE) and forwarded to universities in the country. The universities then adopt the criteria with slight modification by professionals, without instructors' involvement, and the criteria are approved by the university senate. Therefore, it can be concluded that instructors' participation in the design of appraisal criteria is minimal.
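For readers who wish to reproduce this kind of breakdown, the following is a minimal sketch of how a cross tabulation like table 15 could be computed with Python's pandas library; the column names (experience, criteria_consultation) and the sample records are hypothetical placeholders, not the study's actual data.

import pandas as pd

# Hypothetical instructor records (illustrative only, not the survey data).
data = pd.DataFrame({
    "experience": ["2-3", "7-10", "7-10", ">=11", "4-6", "2-3"],
    "criteria_consultation": ["Neutral", "strongly disagree", "Agree",
                              "strongly disagree", "Agree", "Disagree"],
})

# Counts per response category within each experience group, with totals.
counts = pd.crosstab(data["experience"], data["criteria_consultation"],
                     margins=True, margins_name="Total")

# Row percentages, i.e. "% within work experience" in the SPSS-style table.
row_pct = pd.crosstab(data["experience"], data["criteria_consultation"],
                      normalize="index").mul(100).round(1)

print(counts)
print(row_pct)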

4.4 Practices of Appraisers in Instructors’ Performance Appraisal
Using multiple appraisers is often viewed as a key to successful practice; at least more than one person should be involved in judging instructors' performance (Stronge and Turker 2003). According to Danielson and McGreal (2000), the 360 degree appraisal system incorporates the participation of many kinds of appraisers based on the assumption that instructors' competence may be seen from several different perspectives and that it should be exemplary (or at least adequate) from all those different angles. Elverfeldt (2006) cited that 360-degree appraisal might be a useful tool in enriching performance appraisal and enhancing its acceptance, but this will only be the case if the appraisers and the appraisees generally perceive the additional sources of feedback as relevant and favorable. Accordingly, scholars have recently begun to argue that employee perceptions are a vital determinant of the effectiveness of the performance appraisal process (Dargham 2007). BDU also uses a 360 degree feedback system in which students, peers and heads of departments are appraisers of instructors' performance. Hence, this section focuses on examining the practices of appraisers in order to identify biases that may compromise the effectiveness of the instructors' performance appraisal process.

4.4.1 Students’ Practice in Appraisal: Students Self-Report Vs Instructors’ Perception


Students' appraisal of instructors' performance has been widely used across higher education institutions worldwide. Despite this fact, researchers have found that the soundness of student appraisal of instructors' performance can be affected by situational factors or biases (Emery et al. 2003; Centra 2003). Dowell and Neal (2004) concluded that situational variables can completely contaminate the validity of student appraisal of instructors' performance. To this end, students were asked to give a self-report of their practice, and instructors were also asked to express their perceptions of students' practice during appraisal. Since both the students and the instructors were asked to express their agreement level on the same statements, worded appropriately for their different perspectives, a mean comparison of students' responses and instructors' responses was made. Furthermore, a detailed analysis of students' self-reported practice during appraisal is discussed in the subsections that follow.

Table 16: Comparison of Students Self-Report of Their Own Practice and Instructors
Perception of Students Practice

                                                     Instructors' perception      Students' self-report
                                                     of students' practice        of their own practice
                                                     N      Mean                  N      Mean
Rating based on physical attractiveness of instructor 35 3.57 221 3.53
Rating based on funniness of instructor 35 3.03 221 3.04
Rating based on previous grade awarded by instructor 35 3.17 221 3.65
Rating based on expected good grade from instructor 35 3.26 221 3.21
Rating based on number of assignments given by 35 3.31 221 3.06
instructor
Rating based on exam easiness prepared by instructor 35 3.06 221 3.10
Rating based on single negative experience with 35 3.26 221 3.11
instructor
Rating by contrasting the performance of an instructor 35 3.34 221 3.09
against that of other instructor(s).
Rating by comparing instructors performance with a 35 1.94 221 2.84
given standard/appraisal criteria.
Rating having enough awareness about the purpose of 35 1.86 221 2.96
evaluation criteria
Rating by properly reading each appraisal criterion 35 2.03 221 2.40
Rating by having enough training to evaluate 35 1.51 221 2.43
instructors‟ performance
Source: Own Survey Result, 2015
Note: The mean scores in this table are referred to repeatedly throughout the following subsections.
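As a rough illustration of how the paired group means in table 16 could be produced, the sketch below computes the mean Likert score of each item separately for instructors and students; the item names, the example responses, and the 1-5 coding direction are assumptions made for the example, not values taken from the study's data set.

import pandas as pd

# Hypothetical response matrices coded 1-5 (coding direction assumed):
# one row per respondent, one column per appraisal-practice item.
items = ["physical_attractiveness", "funniness", "previous_grade"]
instructors = pd.DataFrame([[4, 3, 3], [3, 3, 4], [4, 3, 3]], columns=items)
students = pd.DataFrame([[4, 3, 4], [3, 3, 3], [3, 4, 4], [4, 2, 4]], columns=items)

# Side-by-side N and mean for the two respondent groups, as in table 16.
comparison = pd.DataFrame({
    "Instructors N": instructors.count(),
    "Instructors mean": instructors.mean().round(2),
    "Students N": students.count(),
    "Students mean": students.mean().round(2),
})
print(comparison)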

4.4.1.1 Rating Based on Personal Attributes of Instructors


The review of literature reveals that different personal attributes of instructor can influence
students‟ appraisal of instructor‟s performance. The underneath figure 5 and figure 6 summarizes
the responses of students regarding their rating based on the physical attractiveness and the
funniness of instructor, respectively.

Figure 5: Rating based on Physical Attractiveness of an Instructor

Source: Own Survey Result, 2015

The above figure 5 indicates that the majority of students, 137 (62%, comprising 26.7% strongly disagree plus 35.3% disagree), opposed the statement "I provide favorable score for physically attractive instructor". On the other hand, 71 (35.1%) of students (28.5% agree and 3.6% strongly agree) stated that they provide a favorable score for a physically attractive instructor. The remaining 13 (5.9%) were neutral. Moreover, according to table 16, the mean score of students' responses on the statement is 3.53, showing that the majority of students disagreed with the statement.

Furthermore, instructors were also asked to express their perception of the statement "students provide favorable score for physically attractive instructors". Similar to the response of students, the mean score of instructors' responses is 3.57 (see table 16), indicating that the majority of instructors were in disagreement with the statement. Therefore, the analysis shows that the instructors' perception and the students' self-report on this issue match each other and that during appraisal students give little consideration to an instructor's physical attractiveness. Additionally, table 17 shows the association between students' rating and instructors' perception.
Table 17: Students practice of scoring for physically attractive instructors and instructors’
perception on students rating.

Chi-Square Tests
Value df Asymp. Sig. Exact Sig. Exact Sig. Point
(2-sided) (2-sided) (1-sided) Probability
Pearson Chi-Square 36.419a 4 .000 .000***
Likelihood Ratio 31.959 4 .000 .000***
Fisher's Exact Test 29.341 .000***
Linear-by-Linear Association   .037b   1   .847   .880   .457   .059
N of Valid Cases 256
a. 2 cells (20.0%) have expected count less than 5. The minimum expected count is 1.09.
b. The standardized statistic is -.192.

The p-value from Fisher's exact test is less than 0.001. Therefore, there is a significant association between instructors' perception of students' rating practice and students' self-reported rating based on the physical attractiveness of instructors at the α = 0.05 level. This result supports the mean comparison discussed above.
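The sketch below illustrates how an association test of this kind could be run in Python with scipy; the contingency table is hypothetical, and because scipy's fisher_exact function covers only 2 x 2 tables (the SPSS output reported here appears to use an extension of Fisher's test to larger tables), the sketch reports the asymptotic chi-square statistic together with the expected-count check that motivates an exact test.

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2 x 5 contingency table (rows: instructors, students;
# columns: the five Likert categories). Counts are illustrative only.
observed = np.array([
    [ 9, 13,  2,  8,  3],    # instructors, n = 35 (hypothetical)
    [59, 78, 13, 63,  8],    # students,    n = 221 (hypothetical)
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"Pearson chi-square = {chi2:.3f}, df = {dof}, asymptotic p = {p_value:.4f}")

# SPSS-style footnote: count cells whose expected frequency is below 5,
# the usual warning that an exact test may be preferable.
low_cells = int((expected < 5).sum())
print(f"{low_cells} cell(s) have expected count below 5 "
      f"(minimum expected count = {expected.min():.2f})")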

Figure 6: Rating Based on Funniness of an Instructor

Source: Own Survey Result, 2015

According to Shevlin (2000), student appraisals of instructors are greatly affected by an instructor's personal attributes such as sense of humor; that is, if an instructor entertains students, perhaps by telling jokes, students provide a favorable score for that instructor. To examine whether this situation holds true here, students were asked to express their agreement/disagreement with the statement "I provide favorable score for funny instructor (who tells jokes)". Accordingly, 22 (10%) agreed and 40 (18.1%) strongly agreed with the statement. On the other hand, 15 (6.8%) strongly disagreed and 81 (36.7%) disagreed with the statement, while the remaining 63 (28.7%) were indifferent. Based on these responses, it can be deduced that a funny instructor may gain a favorable score in student appraisal. Moreover, instructors were also given the chance to express their perception of the statement "students provide favorable score for funny instructor (who tells jokes)". According to table 16, the mean score of instructors' responses (3.03) is slightly lower than the mean score of students' responses (3.04). Since the two means are nearly identical, the implication is that instructors' perception of this practice closely matches students' self-report.

The finding is consistent with that of Shevlin (2000), who found positive correlation between
student rating and humor of instructor. Similarly, Naftulin et al. (1973) also concluded that an
instructor‟s entertainment level influences student appraisal scores and they call this influence
the “Dr.Fox” effect. In their study Naftulin et al. placed an actor, known as “Dr.Fox”, in a
college classroom where he presented a highly entertaining lecture that included no substantive
content. The actor received rave student appraisal scores, which led the researchers to determine
that highly charismatic instructors can seduce students into giving high score despite learning
nothing. The relationship between students rating and instructors‟ perception is stated in table 18
below.

Table 18: Relationship test result of students practice of scoring for funny instructors *
instructors perception on students rating based on funniness.

Chi-Square Tests
Value df Asymp. Sig. Exact Sig. Exact Sig. Point
(2-sided) (2-sided) (1-sided) Probability
Pearson Chi-Square   16.316a   4   .003   .003***
Likelihood Ratio 14.340 4 .006 .009**
Fisher's Exact Test 14.679 .004***
Linear-by-Linear .003b 1 .957 1.000 .505 .059
Association
N of Valid Cases 256
a. 2 cells (20.0%) have expected count less than 5. The minimum expected count is 3.01.
b. The standardized statistic is .054.

The p-value from Fisher's exact test is 0.004, which is lower than 0.05. Therefore, there is a significant association between instructors' perception of students' rating of funny instructors and students' self-reported rating based on the funniness of instructors at the α = 0.05 level. This result supports the mean comparison discussed above.

4.4.1.2 Rating Based on Grade and Exam Related Factors
Studies have found an association between grades and student appraisals of instructors' performance because student appraisal scores are higher in courses where student achievement is higher (Baird 1997; Cohen 1981). In addition, Onwuegbuzie et al. (2007) stated that factors associated with testing (e.g. difficult exams and sudden quizzes) and grading (e.g. harsh grading, a notable number of assignments and homework) were likely to lower students' appraisal of their instructors. In this regard, to examine whether students take grade- and exam-related factors into account when evaluating instructors' performance, the students who participated in the current study were asked to self-report their practice on these issues.

Table 19: Rating Based on Grade and Exam Related Factors

                                  Grade and exam related factors
Cumulative grade point            Exam         Fewer number        Expected      Previously awarded
average (CGPA) of students        easiness     of assignments      good grade    good grade

2.0-2.74 Mean 2.89 2.85 2.74 2.54


N 84 84 84 84
2.75-3.24 Mean 3.24 3.40 3.16 4.02
N 55 55 55 55
3.25-3.74 Mean 3.06 3.26 3.72 4.64
N 47 47 47 47
3.75-3.94 Mean 3.07 2.50 3.64 4.29
N 28 28 28 28
3.95-4.0 Mean 5.00 4.00 4.00 5.00
N 7 7 7 7
Overall   Mean 3.10 3.06 3.21 3.65
mean      N    221  221  221  221

Source: Own Survey Result, 2015

Table 19 above shows that when students were asked to express their agreement/disagreement with the statement "I evaluate an instructor positively, if he/she awarded me good grade in previous course he/she taught", the overall mean score of their responses was 3.65, implying that the majority of students are in agreement with the statement. However, a closer look at table 19 shows that while students with a CGPA greater than 3.25 agree with the statement, those whose CGPA is less than 3.25 disagree with it. Put differently, the higher the students' CGPA, the more likely they are to evaluate their instructors based on previously awarded grades. A possible explanation is that students with a higher CGPA are more positively disposed toward instructors who awarded them good grades.
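A minimal sketch of how the CGPA-bracket means in table 19 could be computed is given below; the bracket labels follow the table, but the column names and example records are hypothetical and the 1-5 coding direction is assumed.

import pandas as pd

# Hypothetical student records with a CGPA bracket and two 1-5 coded items
# (column names are placeholders, not the study's variable names).
df = pd.DataFrame({
    "cgpa_bracket": ["2.0-2.74", "2.0-2.74", "2.75-3.24", "3.25-3.74", "3.95-4.0"],
    "previous_grade_item": [2, 3, 4, 5, 5],
    "exam_easiness_item": [3, 3, 3, 3, 4],
})

# Mean agreement score and group size per CGPA bracket, as in table 19.
summary = df.groupby("cgpa_bracket").agg(
    previous_grade_mean=("previous_grade_item", "mean"),
    exam_easiness_mean=("exam_easiness_item", "mean"),
    n=("previous_grade_item", "size"),
)
print(summary)

# Overall means across all respondents (the "Overall mean" row in table 19).
print(df[["previous_grade_item", "exam_easiness_item"]].mean().round(2))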

Moreover, instructors were also asked to give their opinion on the statement "Students evaluate an instructor positively, if he/she awarded them good grade in previous course he/she taught". Table 16 reveals that, similar to the case of the students' responses, the mean score of instructors' responses (3.17) indicates support for the statement. In conclusion, although the influence is moderated by their CGPA, the majority of students who had previously received a good grade in a course taught by an instructor would be more likely to evaluate that instructor positively. This finding is supported by other studies (Cohen 1981; Braskamp and Ory 1994; Baird 1997; Weinberg et al. 2007), which also found that students reward those instructors who reward them with good grades. Understandably, this reciprocal trend may lead to grade inflation among instructors who wish to receive high student appraisal scores.

Table 19 above also indicates that the statement "I evaluate instructor positively, when I expect good grade from him/her" was supported by most of the sampled students. The overall mean score of students' responses, 3.21, is evidence of students' agreement with the statement.

Moreover, instructors also responded to the statement "students evaluate instructors positively, when they expect good grade from him/her". According to table 16, similar to the case of the students, the mean score of 3.26 for instructors' responses shows that instructors were in agreement with the statement.

Conclusively, though students‟ CGPA moderates the degree of influence of expected grade on
student evaluation, students‟ expectation of good grade from an instructor would lead them to
provide favorable score for that instructor. Consistent with this finding, other studies
(Worthington 2002; Weinberg et al. 2007) also found that students evaluate their instructors
more favorably when they expect higher grade.

Table 19 also reveals that the mean score of students' responses on the statement "I provide high score for instructor whose exam is easy" is 3.10, indicating that the majority of students were in agreement with the statement. However, a more detailed analysis indicates that the influence of exam easiness on student evaluation increases as students' CGPA increases. In the same fashion, table 16 shows that the mean score of instructors' responses on the statement "students provide high score for instructor whose exam is easy" is 3.06, implying the instructors' support for the statement. By and large, the analysis sends the message that, while evaluating an instructor, students take into account how easy the exams prepared by that instructor are.

Lastly, table 19 reveals students' responses on the statement "I provide high score for instructor who gives fewer assignments". Students' responses resulted in a mean score of 3.06, signifying that they favor those instructors who give fewer assignments. Additionally, the analysis indicates that, regardless of their CGPA, students are influenced by the number of assignments an instructor gives when evaluating that instructor's performance. Furthermore, instructors were also asked to express their opinion on the statement "students provide high score for instructor who gives fewer assignments". Table 16 shows that the mean score of instructors' responses is 3.31, implying the instructors' agreement with the statement.

Generally, similar to the findings of other studies (Centra 2003; Onwuegbuzie et al. 2007), the analysis indicates that the majority of students positively evaluate an instructor who gives fewer assignments. This suggests that students lack awareness of why instructors give assignments, and that students need to be convinced of the importance of doing assignments so that their attitude towards them improves. Additionally, the association between students' rating and instructors' perception of it is shown in tables 20, 21, 22 and 23 below.

Table 20: Relationship test result of students practice of Rating based on previous grade
awarded by instructor * instructors perception on students rating.

Chi-Square Tests
Value df Asymp. Sig. Exact Sig. Exact Sig. Point
(2-sided) (2-sided) (1-sided) Probability
Pearson Chi-Square   32.579a   4   .000   .000***
Likelihood Ratio 24.425 4 .000 .000***
Fisher's Exact Test 25.136 .000***
Linear-by-Linear 3.402b 1 .065 .065 .039 .009
Association
N of Valid Cases 256
a. 2 cells (20.0%) have expected count less than 5. The minimum expected count is 1.64.
b. The standardized statistic is 1.845.

The p-value from Fisher's exact test is less than 0.001. Therefore, there is a highly significant association between instructors' perception of students' rating practice and students' self-reported rating based on the grade previously awarded by the instructor at the α = 0.05 level. This result supports the mean comparison discussed above.
Table 21: Relationship test result of students practice of Rating based on expected good grade
from instructor* instructors perception on students rating.

Chi-Square Tests
Value df Asymp. Sig. Exact Sig. Exact Sig. Point
(2-sided) (2-sided) (1-sided) Probability
Pearson Chi-Square 15.586a 5 .008 .010*
Likelihood Ratio 12.212 5 .032 .035*
Fisher's Exact Test 12.730 .020*
Linear-by-Linear Association   .013b   1   .908   .915   .319   .036
N of Valid Cases 256
a. 4 cells (33.3%) have expected count less than 5. The minimum expected count is .14.
b. The standardized statistic is -.116.

The p-value from Fisher's exact test is 0.02, which is lower than 0.05. Therefore, there is a significant association between instructors' perception of students' rating based on the good grade expected from an instructor and students' self-reported rating on that basis at the α = 0.05 level. This result supports the mean comparison discussed above.

Table 22: relationship test of students practice of Rating based on number of assignments
given by instructor* instructors perception on students rating.

Chi-Square Tests
Value df Asymp. Sig. Exact Sig. Exact Sig. Point
(2-sided) (2-sided) (1-sided) Probability
Pearson Chi-Square   9.369a   4   .053   .052
Likelihood Ratio 8.732 4 .068 .073
Fisher's Exact Test 9.350 .042*
Linear-by-Linear Association   1.625b   1   .202   .209   .118   .030
N of Valid Cases 256
a. 2 cells (20.0%) have expected count less than 5. The minimum expected count is .82.
b. The standardized statistic is -1.275.

From these results, the Pearson chi-square p-value of .053 suggests that there is no relationship between students' evaluation practice based on the number of assignments given by instructors and the instructors' perception of students' rating. However, the assumptions necessary for the standard asymptotic calculation of the significance level may not have been met (two cells have expected counts below 5), so exact results should be used instead. Accordingly, the Fisher's exact test result is considered: its p-value of 0.042 is less than 0.05, which indicates that there is a relationship between students' evaluation practice based on the number of assignments given by instructors and the instructors' perception of students' rating. This result supports the mean comparison discussed above.
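The decision rule described above, falling back on an exact test when the chi-square assumptions are doubtful, can be sketched as follows; the contingency table and the 20%-of-cells threshold are assumptions of the example (a common rule of thumb), not output from the study's data.

import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2 x 2 table (respondent group x agree / not agree), obtained
# by collapsing the five Likert categories into two for illustration.
observed = np.array([[12, 23],
                     [70, 151]])

chi2, p_asymptotic, dof, expected = chi2_contingency(observed)
small_cells = int((expected < 5).sum())

# Rule of thumb: if more than 20% of cells have expected counts below 5,
# report an exact p-value instead of the asymptotic chi-square p-value.
if small_cells / expected.size > 0.20:
    _, p_exact = fisher_exact(observed)   # scipy's fisher_exact handles 2 x 2 tables
    print(f"Exact p-value reported: {p_exact:.4f}")
else:
    print(f"Asymptotic chi-square p-value reported: {p_asymptotic:.4f}")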

Table 23: relationship test of students practice of Rating based on exam easiness prepared by
instructor* instructors perception on students rating.

Chi-Square Tests
Value df Asymp. Sig. Exact Sig. Exact Sig. Point
(2-sided) (2-sided) (1-sided) Probability
Pearson Chi-Square   13.605a   4   .009   .009**
Likelihood Ratio 10.689 4 .030 .033*
Fisher's Exact Test 11.369 .018**
Linear-by-Linear .043b 1 .835 .884 .450 .057
Association
N of Valid Cases 256
a. 1 cell (10.0%) has expected count less than 5. The minimum expected count is 1.09.
b. The standardized statistic is .208.

The Fisher's exact test p-value of 0.018 is less than 0.05, which indicates that there is a relationship between students' evaluation practice based on the easiness of exams prepared by instructors and the instructors' perception of students' rating. This result supports the mean comparison discussed above.

4.4.1.3 Rating by Contrasting Performance of an Instructor against Other Instructors


Contrast effect occurs when an evaluator lets another employee‟s performance influence the
score that is provided to someone else (Evancevich 2004). Students were asked to respond to the
statement “I evaluate instructor‟s performance by contrasting the performance of an instructor
against that of another instructor”.

Table 24: The Contrast Effect in Student Appraisal

Statement: I evaluate instructors' performance by contrasting the performance of an instructor against that of other instructor(s).

Responses              N      %
Strongly disagree      26     11.8
Disagree               79     35.7
Neutral                39     17.6
Agree                  42     19.0
Strongly agree         35     15.8
Total                  221    100.0
Source: Own Survey Result, 2015

According to table 24, around 34.8% (19% agree and 15.8% strongly agree) of students stated that while evaluating an instructor's performance they do contrast the performance of that instructor against that of other instructors. On the other hand, 47.5% (11.8% strongly disagree and 35.7% disagree) asserted that they do not contrast the performance of an instructor against that of other instructors. The remaining 17.6% expressed neither agreement nor disagreement.

The analysis shows that a considerable proportion of students exhibit a contrast effect in their appraisal of instructors' performance. Table 16 shows that the students' and the instructors' responses on the statement resulted in mean scores of 3.09 and 3.34, respectively. The mean scores for both groups of respondents indicate that they were in agreement with their given statement.

4.4.1.4 Rating by Comparing Instructors’ Performance against Appraisal Standard


Under ideal circumstances of performance appraisal, the performance of an instructor must be compared only against the standard (criteria) of appraisal rather than against someone else's performance. In this respect, students were also asked to express their opinion on whether they compare the performance of an instructor with the given standard of appraisal. Accordingly, tables 25 and 26 below summarize the responses of students on the statement "I evaluate my instructors' real performance by comparing it with a given standard/appraisal criteria", and the second table shows the association between students' rating and instructors' perception of students' rating.

Table 25: Rating by Comparing Instructors‟ Performance with Standard of Appraisal

Statement: I evaluate my instructors' performance by comparing it with a given standard/appraisal criteria.

Responses              N      %
Strongly disagree      8      3.6
Disagree               124    56.1
Neutral                5      2.3
Agree                  64     29.0
Strongly agree         20     9.0
Total                  221    100.0
Source: Own Survey Result, 2015

According to table 25, more than half (59.7%) of students, including 3.6% strongly disagree and
56.1% disagree, were against the statement. However, 38% (29% agree and 9% strongly agree)
of students supported the statement. Instructors were also given the chance to express their
perception on the statement “students evaluate my performance by comparing with a given
standard of appraisal”. Table 16 indicates that the mean scores of the instructors‟ and the
student‟s response are 1.94 and 2.84, respectively. Since these mean scores are less than the
midpoint (3.00) in the scale, the implication is that both groups of respondents were in
disagreement with their respective statement.

By and large, the analysis shows that majority of students compare their instructors performance
against benchmarks other than the given standard of appraisal. This tendency of students has also
been reflected in the above analysis on different factors influencing students‟ appraisal such as
expected grade, previously received grade, exam easiness, funniness, etc. Logically, for students
to evaluate instructors by comparing instructors‟ real performance against appraisal criteria,
properly reading each appraisal criterion is necessary.

Table 26: relationship test of students practice of Rating by comparing instructors
performance with a given standard/appraisal criteria * instructors perception on students
rating.

Chi-Square Tests
Value df Asymp. Sig. Exact Sig. Exact Sig. Point
(2-sided) (2-sided) (1-sided) Probability
Pearson Chi-Square 25.777a 4 .000 .000***
Likelihood Ratio 23.878 4 .000 .000***
Fisher's Exact Test 21.943 .000***
Linear-by-Linear Association   1.953b   1   .162   .175   .090   .019
N of Valid Cases 256
a. 3 cells (30.0%) have expected count less than 5. The minimum expected count is 2.19.
b. The standardized statistic is 1.397.

The p-value from Fisher's exact test is less than 0.001. Therefore, there is a highly significant association between instructors' perception of students' rating and students' self-reported practice of comparing instructors' performance with the given standard/appraisal criteria at the α = 0.05 level. This result supports the mean comparison discussed above. To this end, students were asked to express their opinion on whether they properly read each appraisal criterion. The following figure 7 summarizes the responses of students on this issue.

Figure 7: I Properly Read Each Appraisal Criterion during Appraisal

Source: Own Survey Result, 2015

The above figure 7 reveals that when students were asked to express their agreement/disagreement with the statement "I properly read each appraisal criterion during evaluating performance of my instructor", 59.7% of students disagreed, comprising 8 (3.6%) who strongly disagreed and 124 (56.1%) who disagreed. The agreement rate, in contrast, was 38% of students, comprising 64 (29%) who agreed and 20 (9%) who strongly agreed. From this description, one can note that the percentage of students who do not properly read each appraisal criterion is greater than that of their counterparts who read them properly. This is also reflected in the mean score of students' responses (2.40). Similarly, the mean score of instructors' responses (2.03) reflects the instructors' perception that students do not properly read the criteria (see table 16 for the mean scores). The chi-square test also supports this result, as shown in table 27.

Table 27 : relationship test of students practice of Rating by properly reading each appraisal
criterion * instructors perception on students rating.

Chi-Square Tests
Value df Asymp. Sig. Exact Sig. Exact Sig. Point
(2-sided) (2-sided) (1-sided) Probability
Pearson Chi-Square 124.354a 4 .000 .000
Likelihood Ratio 94.376 4 .000 .000
Fisher's Exact Test 90.680 .000
Linear-by-Linear 17.012b 1 .000 .000 .000 .000
Association
N of Valid Cases 256
a. 3 cells (30.0%) have expected count less than 5. The minimum expected count is 1.91.
b. The standardized statistic is 4.125.

The p-value from Fisher's exact test is less than 0.001. Therefore, there is a highly significant association between instructors' perception of students' rating and students' self-reported practice of properly reading each appraisal criterion at the α = 0.05 level. This result supports the mean comparison discussed above.

Generally, it can be concluded that the majority of students evaluate instructors without properly reading the appraisal criteria because their evaluation is highly influenced by their preconceptions about instructors.

4.4.1.5 Rating Based on Negative Experience with Instructor(s)
Table 28 below clearly indicates that 40.7% of students (14.9% strongly agree plus 25.8% agree) were in agreement with the statement "If I had a negative experience with instructor, I will provide low score in all appraisal criteria". Conversely, 51.1% (15.8% strongly disagree plus 35.3% disagree) of students were against the statement. The remaining 8.1% were neutral.

Table 28: Rating Based on Negative Experience with Instructor(s)

Statement: If I had a negative experience with instructor, I will provide low score in all appraisal criteria.

Responses              N      %
Strongly disagree      35     15.8
Disagree               78     35.3
Neutral                18     8.1
Agree                  57     25.8
Strongly agree         33     14.9
Total                  221    100.0
Source: Own Survey Result, 2015

Additionally, table 16 shows that the mean score of students' responses is 3.11, implying that a substantial proportion of students agree that they evaluate an instructor negatively if they had a negative experience with that instructor. According to table 16, the mean score of 3.26 for instructors' responses on the statement "If students had a negative experience with instructor, they will provide low score in all appraisal criteria" indicates the instructors' support for the statement. However, a comparison of the two mean scores suggests that students' negative experience with instructor(s) influences their evaluation, but to a lesser extent than instructors had imagined.

Therefore, the analysis shows that many students tend to penalize instructors whom they do not like, although some expressed frustration that they had never seen an instructor penalized after receiving a low score in student appraisal. Similarly, David and Macayan (2010) found that student evaluation of an instructor's performance is often driven by hidden anger resulting from a recent grade received on an exam or from a single negative experience with an instructor. The chi-square test of the association between students' rating and instructors' perception is shown in table 29.

Table 29: relationship test result of students practice of Rating based on single negative
experience with instructor * instructors perception on students rating.

Chi-Square Tests
Value df Asymp. Sig. Exact Sig. Exact Sig. Point
(2-sided) (2-sided) (1-sided) Probability
Pearson Chi-Square 8.747a 4 .068 .065
Likelihood Ratio 7.813 4 .099 .116
Fisher's Exact Test 7.681 .094
Linear-by-Linear .352b 1 .553 .587 .301 .046
Association
N of Valid Cases 256
a. 2 cells (20.0%) have expected count less than 5. The minimum expected count is 3.55.
b. The standardized statistic is -.594.

The exact p-value based on Fisher's exact test is 0.094, and the asymptotic Pearson chi-square p-value is 0.068. Both p-values are greater than the 0.05 significance level. This leads to the conclusion that students' practice of rating based on a single negative experience with an instructor and instructors' perception of students' rating are not related. This result is the opposite of the mean score comparison.

4.4.1.6 Students' Training on How to Evaluate Instructors' Performance


The business literature clearly recommends that everyone who supplies data to be used in appraisal receives some kind of training. Most authors support the view that appraisers must be trained to observe, gather, process, and integrate performance-relevant information in order to improve the effectiveness of performance appraisal. Elverfeldt (2005), for instance, stated that sufficient training must be given to appraisers so that they: (1) understand the performance appraisal process; (2) are able to use the appraisal instrument as intended, which encompasses interpreting standards and using scales; and (3) are able to provide effective feedback.

Tziner and Kopelman (2002) noted that because rating errors are well-embedded habits, extensive training is necessary to avoid them. The current study has therefore attempted to evaluate whether students, as appraisers of instructors' performance, are well trained to evaluate the performance of instructors. In this respect, students were given the chance to express their agreement/disagreement with the statement "I have received enough training on how to evaluate my instructors' performance". The following figure 8 displays a summary of students' responses to this statement.

Figure 8 : Students Training On How to Evaluate Instructors’ Performance

Source: Own Survey Result, 2015

As can be seen from figure 8, the largest proportion of students, 71 (32.1%), strongly disagreed and the next largest proportion, 64 (29%), disagreed with the above statement, making a 61.1% disagreement rate. In contrast, relatively small proportions of students strongly agreed, 33 (14.9%), and agreed, 15 (6.8%), with the statement. According to table 16, the mean score of students' responses is 2.43, reflecting that students lacked training on instructors' performance appraisal. In the same fashion, the mean score of instructors' responses (1.51) on the statement "students have received enough training on how to evaluate their instructors' performance" implies that instructors overwhelmingly disagreed with the statement.

Furthermore, during the interviews with heads of departments, while the majority of them recognized the necessity of training, they stated that no training had been given to students with respect to instructors' performance appraisal. The chi-square test result also supports this finding, as shown in table 30.

Table 30: Relationship test of students’ practice of Rating by having enough training to
evaluate instructors’ performance * instructors perception on students rating.

Chi-Square Tests
Value df Asymp. Sig. Exact Sig. Exact Sig. Point
(2-sided) (2-sided) (1-sided) Probability
Pearson Chi-Square   25.738a   4   .000   .000***
Likelihood Ratio 29.311 4 .000 .000***
Fisher's Exact Test 25.386 .000***
Linear-by-Linear 13.515b 1 .000 .000*** .000 .000
Association
N of Valid Cases 256
a. 2 cells (20.0%) have expected count less than 5. The minimum expected count is 2.46.
b. The standardized statistic is 3.676.

The p-value from Fisher's exact test is less than 0.001. Therefore, there is a highly significant association between instructors' perception of students' rating and students' self-reported level of training to evaluate instructors' performance at the α = 0.05 level. This result supports the mean comparison discussed above.

Generally, the analysis shows that students lack the necessary training to evaluate the performance of instructors. Therefore, to enhance the usefulness of student appraisal, students should be trained during their freshman semester, with periodic refresher training thereafter.

4.4.2 Instructors’ Perception of Practices of Peers
As mentioned earlier, peer appraisal is an integral part of instructors' performance appraisal in BDU. According to Keig and Waggoner (1994), peer appraisal is a process in which instructors work collaboratively to assess each other's teaching and to assist one another in efforts to strengthen teaching. Needless to say, peer appraisal is also widely used as a supplement to students' appraisal of instructors' performance. Therefore, the focus of this section is on the practices of peers in appraisal, examining the extent to which instructors perceive peer appraisal to be free of biases.

Table 31: Instructors’ Perception of Practices of Peers

Peers/instructors practices in evaluation


N Mean
peers reading situation during evaluation 35 2.71
peers provide equivalent high score to peers 35 3.06
Positively Evaluating Close Friends (Peers) 35 2.37
Rating Based on Full Information on Each Others‟ 35 1.80
Performance
Peers Rating by contrasting against others 35 3.29
Peers rating reality against the standard 35 1.97
Source: Own Survey Result, 2015

4.4.2.1 Reading Appraisal Criteria during Appraisal


Figure 9 below indicates that 52.8% of instructors (9, or 25%, strongly disagree and 10, or 27.8%, disagree) were against the statement "peers properly read each criterion of appraisal during evaluating my performance". On the other hand, 41.6% of instructors (12, or 33.3%, agree plus 3, or 8.3%, strongly agree) supported the statement. The remaining 1 (2.8%) was neutral. Moreover, the mean value of instructors' responses is 2.71, as shown in table 31, indicating the instructors' disagreement with the statement. The implication is that the majority of instructors (peers) do not properly read each appraisal criterion when evaluating each other's performance.

Figure 9: Reading Appraisal Criteria during Appraisal
Source: Own Survey Result, 2015

4.4.2.2 Providing Equivalent High Scores to All Peers


As can be vividly seen from figure 10 below, when instructors replied to the statement "peers provide equivalent high scores to all peers", 6 (17.1%) of them agreed and 9 (25.7%) strongly agreed, adding up to 42.8% support for the statement.

Figure 10: Peers Provide Equivalent High Scores to All Peers

Source: Own Survey Result, 2015

On the other hand, 8 (22.9%) strongly disagreed and 2 (5.7%) disagreed, while the remaining 10 (28.6%) were neutral. The mean score of instructors' responses is 3.06. This indicates that the largest proportion of instructors perceives that, during peer appraisal, peers provide equivalently high scores for each other.

As described earlier, since the appraisal score is associated with certain personnel decisions (especially promotion and scholarship), instructors are interested in giving high scores so as to help each other. In conclusion, peer appraisal scores are subject to leniency bias because peers want each other to benefit from the promotion and scholarship opportunities that are largely determined by appraisal scores. Consistent with this finding, Rudd et al. (2001) found that at the University of Florida, peers were reluctant to give negative feedback because of promotion, tenure, and award implications. Prowker (1999) also stated that the purpose of performance appraisal information may concentrate the evaluator's attention on positive behavioral incidents, which leads to inflated appraisal scores. Therefore, it can be said that this practice is a major problem clouding the peer appraisal result.

4.4.2.3 Positively Evaluating Close Friends (Peers)
Favoritism is one of the potential biases inherent in the performance appraisal process. In this context, instructors were given the chance to express their perception of whether peers evaluate their close friends more positively than other peers. The following table 32 summarizes instructors' responses on the statement "peers positively evaluate their close friends/peers".

Table 32: Positively Evaluate Their Close Friends (Peers).

Responses Frequency Percent


strongly agree 12 34.3
Agree 9 25.7
Neutral 7 20.0
Disagree 3 8.6
strongly disagree 4 11.4
Total 35 100.0
Source: Own Survey Result, 2015

The above table indicates that 34.3% and 25.7% of instructors strongly agreed and agreed, respectively, with the above statement. While 20% of them were neutral, 8.6% and 11.4% of them disagreed and strongly disagreed, respectively. From table 31, the mean score of their responses is 2.37. This indicates that the majority of instructors perceive that peers positively evaluate their close friends. The implication is that instructors do not view each other with an equal eye in the course of their performance appraisal.

4.4.2.4 Rating Based on Full Information on Each Others’ Performance


The appraiser must attend to and recognize job-relevant behavior and information concerning the appraisee. The appraiser eventually categorizes an appraisee into a schema, and most of the time appraisers make ratings on the basis of little information (Friedman 1984). To this end, the current study attempted to evaluate instructors' perception of whether peers are well informed of their performance. Table 33 below summarizes the responses of instructors on the statement "Peers are well informed of my performance along all appraisal criteria".

Table 33: Peers are well informed of My Performance along All Appraisal Criteria

Responses Frequency Percent


strongly disagree 15 42.9
Disagree 15 42.9
Neutral 2 5.7
Agree 3 8.6
Total 35 100.0

Source: Own Survey Result, 2015

Accordingly, table 33 reveals that 42.9% and 42.9% of instructors strongly disagreed and disagreed, respectively, with the statement. While 5.7% of them neither agreed nor disagreed, 8.6% of them agreed with the statement. The mean score of instructors' responses on the statement is 1.80. This shows that the majority of instructors believe that peers are not well informed of all their performance dimensions.

Moreover, given an insufficient amount of information, it is likely that the appraiser will not have formed a particular position about the appraisee and will be concerned about making a mistake (Harris 1994; Elverfeldt 2005). Therefore, it is recommended that peer appraisal be done by peers who have full information about each other's performance. For example, instructors within the same team may be better informed of the performance of their team members than other instructors in the department, so peer appraisal may be better conducted on a team basis rather than on the conventional departmental basis.

4.4.2.5 Contrasting Performance of Peers against Each Other Vs. Comparing Peers’
Performance with the Appraisal Standard
As discussed earlier in this paper, when an evaluator lets the rating of an appraisee be influenced by the performance of other appraisees, the contrast effect crops up. Under ideal conditions, the performance of employees (instructors) must be compared with the given standards of appraisal rather than against the performance of somebody else. In this regard, instructors were asked about their perception of peers' practice of comparing their performance with the standard of appraisal and/or against the performance of other instructors. The following table 34 displays the aggregate responses of instructors concerning this issue.

Table 34: Comparing a peer’s performance with standard of appraisal vs. contrasting a
peer’s performance against performance of other peers

Statement 1: During appraisal, peers do contrast the performance of one peer against that of other peers.
  Strongly disagree 10 (28.6%); Disagree 3 (8.6%); Neutral 9 (25.7%); Agree 13 (37.1%); Strongly agree 20 (9.0%); Total 35 (100%)

Statement 2: Peers evaluate me by comparing my performance with a given standard of evaluation.
  Strongly disagree 10 (28.6%); Disagree 19 (54.3%); Neutral 3 (8.6%); Agree 3 (8.6%); Strongly agree - (-); Total 35 (100%)
Source: Own Survey Result, 2015

Table 34 shows that when instructors were asked to reflect on the statement "during appraisal, peers do contrast performance of one peer against that of other peers", 37.2% of instructors (28.6% strongly disagree and 8.6% disagree) expressed their disagreement with the statement. While 25.7% of them neither agreed nor disagreed, 46.1% of them agreed with the statement. The mean score of instructors' responses on the statement is 3.29, as shown in table 31. This indicates that the largest share of instructors agreed with the statement; in other words, a large proportion of instructors perceive that their peer appraisal score is affected by the contrast effect.

Additionally, table 34 shows that when instructors were asked to reflect on the statement "peers evaluate me by comparing my real performance with a given standard/appraisal criteria", 82.9% of instructors (28.6% strongly disagree and 54.3% disagree) expressed their disagreement with the statement. While 8.6% of them neither agreed nor disagreed, 8.6% of them agreed with the statement. Moreover, table 31 indicates that the mean score of instructors' responses on this statement is 1.97. This implies that the greater proportion of instructors do not perceive that peers compare their performance against the given standard of appraisal. Generally, the analysis shows that the majority of instructors believe that peer appraisal suffers from the contrast effect to a considerable extent, and at the same time they believe that peers do not compare their performance against the given standard of appraisal.

4.4.3 Instructors’ Perception of Practices of Heads of Departments


As described earlier, in BDU the head's appraisal is also an integral part of instructors' performance appraisal. In this regard, this study attempted to assess how instructors perceive the heads' appraisal-related practices.

4.4.3.1 Heads Awareness about Instructors’ Performance


Table 35: Instructors’ perception of head’s awareness about their performance

Statement: HOD is well informed of my performance along all criteria of appraisal.

Responses              N     %
Strongly disagree      12    34.3
Disagree               10    28.6
Neutral                3     8.6
Agree                  6     17.1
Strongly agree         4     11.4
Total                  35    100.0
Mean                   2.43
Source: Own Survey Result, 2015

The above table 35 indicates that the majority of instructors (62.9%, comprising 34.3% strongly disagree plus 28.6% disagree) were against the statement "HOD is well informed of my performance along all criteria of appraisal". The mean score of responses on this statement is 2.43. This implies that the majority of instructors perceive that their head of department is poorly informed of their performance. However, as mentioned earlier, information about the performance of the appraisee is a critical determinant of the accuracy of the appraisal score. Therefore, it is recommended that, prior to undertaking appraisal, every head of department should do their best to gather the necessary information regarding the instructors under his/her supervision.

4.4.3.2 Coaching to Improve Performance


Table 36: Heads’ Coaching Practice: Instructors Perception by Academic Rank

coaching situation of HOD for performance improvement


Academic Rank Of Respondents N Mean
Assistant Lecturer 3 2.00
Lecturer 24 2.88
Assistant Professor 8 2.38
Total 35 2.69
Source: Own Survey Result, 2015

A glance at table 36 shows that the overall mean score of instructors' responses on the statement "HOD is also my coach to improve my performance" is 2.69. This indicates that the majority of instructors believe that their head is not providing a coaching service to improve their performance. However, a closer look at the table shows that assistant lecturers had the lowest mean score for the statement, lecturers the highest, and assistant professors fell in between.

4.4.3.3 Keeping Record of Instructors Performance


Keeping and maintaining accurate records of employees' performance is key to ensuring the effective use of a performance appraisal process. Carefully maintained records establish patterns in an employee's behavior that may be hard to spot through typical incident-by-incident supervision (Crane 1991). Moreover, careful review of the records helps avoid the selective memory problem and helps plan appropriate actions. Of course, well-maintained records are essential if the need arises to discipline, demote or dismiss an employee (Elverfeldt 2005). In line with this, the present study attempted to investigate whether heads of departments keep the necessary records of instructors' performance. The following figure 11 displays instructors' responses to the statement "HOD usually keeps necessary record on my performance during the appraisal period".

Figure 11: HOD Practice of Keeping Record of Instructors’ Performance

Source: Own Survey Result, 2015

According to figure 11, the majority of instructors, 22 (62.9%), comprising 12 (34.3%) who strongly disagreed and 10 (28.6%) who disagreed, were against the above statement. On the other hand, 6 (17.1%) and 4 (11.4%) of them agreed and strongly agreed, respectively, with the statement. Generally, the mean score of instructors' responses on this item is 2.43, signifying that the greater proportion of instructors believe that their HOD does not keep the necessary records of their performance during the typical appraisal period. This implies that, lacking accurate records of instructors' performance, the HOD may not have tangible evidence with which to convince instructors in case they complain about their score in the HOD's appraisal. Therefore, it is wise for HODs to maintain all necessary records of instructors' performance so as to have a basis for settling any complaints that may arise regarding appraisal scores and for making subsequent decisions effectively.

4.4.3.4 Giving Equivalent Scores for All Instructors and Doing Favors for Close Friends
There is a growing body of literature supporting the idea that, during appraisal, appraisers may be influenced by political considerations and favoritism. In line with this, this study attempted to examine the existence of these situations in HODs' practice of instructors' performance appraisal, and the table below shows the aggregate responses of instructors in this respect.

Table 37: Giving High Score for All Instructors and Doing Favor for Close Friends

Statement 1: HOD gives high appraisal score to all staff members.
  Strongly disagree 6 (17.1%); Disagree 10 (28.6%); Neutral 6 (17.1%); Agree 9 (25.7%); Strongly agree 4 (11.4%); Total 35 (100%)
  Mean: 3.14

Statement 2: HOD rates his close friends more positively than other instructors.
  Strongly disagree 12 (34.3%); Disagree 3 (8.6%); Neutral 10 (28.6%); Agree 6 (17.1%); Strongly agree 4 (11.4%); Total 35 (100%)
  Mean: 3.37
Source: Own Survey Result, 2015

Instructors were requested to express their agreement level with the statement "HOD gives high appraisal score to all staff members". Accordingly, table 37 reveals that 37.1% of instructors (25.7% agree plus 11.4% strongly agree) supported the statement. Similarly, the mean score of instructors' responses is 3.14, implying that instructors were in agreement with the statement. In other words, some instructors believe that heads of departments give favorable scores to all instructors under their supervision. According to Freid et al. (1999), deliberate inflation of appraisal scores is a political consideration which may hinder an organization's effort to use appraisal scores for developmental and motivational purposes. Supervisors deliberately inflate employees' appraisal scores for different reasons, such as maximizing subordinates' merit raises, avoiding the creation of a written record of poor performance, and minimizing potential challenges from subordinates to their own appraisal scores (Freid et al. 1999).

To this end, the interviews conducted with HODs show that they provide almost equally high scores for all staff members because the appraisal score is associated with instructors' promotion. One head put it as follows: "No better incentive system is available for instructors. The major
means through which instructors will get additional payment is the „intermittent‟ promotion
which is largely determined by performance appraisal score. Therefore, I am not as such serious
at making distinction between instructors during that semi-annual appraisal”.

In conclusion, the analysis shows that the heads' appraisal is affected by leniency bias because the appraisal score has implications for promotion decisions. Therefore, other things being equal, heads are recommended to rate instructors based on their real performance so as to ensure the effectiveness of the instructors' performance appraisal process. According to Elverfeldt (2005), when employees perceive bias or favoritism in managerial behavior, the perception of inequity accelerates. Managers frequently engage in what is termed "in-group" and "out-group" behavior, in which employees who are viewed as capable are given privileges while those who are viewed unfavorably are discriminated against (Robbins, 1997). This categorization of employees is frequently made with incomplete information, leading to misclassification of employees, which attenuates the effectiveness of the performance appraisal process (Elverfeldt 2005).

Based on this theoretical standing, an effort was made to elicit instructors' perception of favoritism in their HODs' appraisal. Accordingly, table 37 indicates that 42.9% of instructors (34.3% strongly disagree plus 8.6% disagree) were against the statement "HOD rates his close friends more positively than other instructors". In the same fashion, the 3.37 mean score of instructors' responses reflects this leaning toward disagreement. While 28.6% were indifferent, the remaining 28.5% agreed with the statement.

4.4.3.5 Contrasting Performance of Instructors against Each Other vs. Comparing Instructors’
Performance against the Appraisal Standard
In order to evaluate the perceived contrast effect in heads' appraisal, instructors were requested to reply to the statement "During appraisal, HOD does contrast my performance against that of my peers". Accordingly, table 38 reveals that half of the instructors (50%) were against the statement, i.e. 29.7% strongly disagree plus 20.3% disagree. The mean score of their responses is 2.59, showing that instructors' perception of a contrast effect in heads' appraisal is minimal. While this is an encouraging finding, more than half of the instructors (52.7%: 17.6% strongly disagree and 35.1% disagree) were in disagreement with the statement "HOD evaluate me by comparing my performance with a given standard/appraisal criteria". This is also evidenced by the mean score of 2.74.

Table 38: Contrasting performance of instructors against each other vs. comparing
instructors’ performance against the appraisal standard

Statement 1: During appraisal, HOD does contrast my performance against that of my peers.
  Strongly disagree 6 (17.1%); Disagree 4 (11.4%); Neutral 9 (25.7%); Agree 12 (34.3%); Strongly agree 4 (11.4%); Total 35 (100%)
  Mean: 2.89

Statement 2: HOD evaluates me by comparing my performance with a given standard/appraisal criteria.
  Strongly disagree 7 (20.0%); Disagree 7 (20.0%); Neutral 9 (25.7%); Agree 10 (28.6%); Strongly agree 2 (5.7%); Total 35 (100%)
  Mean: 2.8
Source: Own Survey Result, 2015

Generally, the heads' perceived practice of neither comparing instructors' performance against the standard of appraisal nor contrasting instructors' performance against each other may lead one to question how heads actually carry out the appraisal. Surprisingly, the interviews with heads of departments show that some heads do not formally and regularly evaluate the performance of instructors. For example, one head said: "Frankly speaking, I never formally evaluate instructors'
performance; rather I conduct it on demand by instructors for scholarship or promotion.
Otherwise, I am not as such committed to do evaluation. However, I believe that performance
evaluation is a serious business that everybody in university must care about”.

Another head said: "It is difficult to say performance evaluation exists. I am personally not motivated to undertake strict evaluation of instructors' performance. Sometimes I feel guilty to provide different evaluation scores for instructors because the evaluation criteria itself has a lot of problems. It needs customization of elements raised department wise. Additionally, since the situation itself is not favorable for instructors, it could be hard for instructors to perform as per the evaluation criteria. Therefore, the evaluation is handled carelessly as a mere formality."

Generally, the analysis shows that instructors' performance appraisal has not received enough attention from heads of departments. From the above quotes, it can be understood that heads are not evaluating the real performance of instructors.

4.5 Characteristics of the Performance Feedback System


Feedback is an essential facet of an effective performance appraisal process. In the absence of feedback, employees are unable to make adjustments in job performance or receive positive reinforcement for effective job behavior (Roberts 2003). However, the acceptance of feedback is the catalyst for behavioral change: feedback provides individual motivation only if the employee is ready to accept it (Alexander, 2006). Recognizing the central role of the feedback system in the performance appraisal process, the present study endeavored to assess the characteristics of the instructors' performance feedback system. In this context, the feedback system refers to the process by which instructors are informed of their performance before and/or after the appraisal period.

4.5.1 Existence of Official Performance Feedback after Appraisal
To investigate whether official feedback on performance is available for instructors, they were
requested to answer the question "Have you ever got official feedback on your performance
after performance appraisal?" Accordingly, table 39 indicates that 62.9% of instructors asserted
that they were given such feedback while the remaining 37.1% were not. Moreover, to further
analyze whether official performance feedback exists uniformly across the different departments,
the responses of instructors were cross tabulated in the following manner.

Table 39: Existence of Official Performance Feedback after Appraisal by Department

"Have you ever got official feedback on your performance after performance appraisal?"
(percentages are within department; the Total row is % of total)

Department                        Yes             No              Total
Economics                         3 (37.5%)       5 (62.5%)       8 (100.0%)
Management                        3 (42.9%)       4 (57.1%)       7 (100.0%)
Accounting and Finance            6 (85.7%)       1 (14.3%)       7 (100.0%)
Marketing Management              4 (66.7%)       2 (33.3%)       6 (100.0%)
Logistics                         4 (100.0%)      0 (0.0%)        4 (100.0%)
Tourism and Hotel Management      2 (66.7%)       1 (33.3%)       3 (100.0%)
Total                             22 (62.9%)      13 (37.1%)      35 (100.0%)

Source: Own Survey Result, 2015

Taking a closer look at table 39, one can notice that 37.5%, 42.9%, 85.7%, 66.7%, 100.0% and
66.7% of instructors in Economics, Management, Accounting and Finance, Marketing
Management, Logistics, and Tourism and Hotel Management, respectively, were given official
performance feedback after their appraisal. Generally, the analysis shows that official
feedback after appraisal is available for around two-thirds of instructors. In addition, there is
variation among departments in obtaining the feedback, with Logistics in a better position than
the rest of the departments.
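For readers who wish to reproduce this kind of department-wise breakdown, the following short Python sketch rebuilds the cross tabulation of table 39 from respondent-level data. It is an illustration only, not the procedure used in this study, and the variable names (department, got_feedback) are assumptions made for the example.

    import pandas as pd

    # Counts taken from table 39: (Yes, No) responses per department.
    counts = {
        "Economics": (3, 5),
        "Management": (3, 4),
        "Accounting and Finance": (6, 1),
        "Marketing Management": (4, 2),
        "Logistics": (4, 0),
        "Tourism and Hotel Management": (2, 1),
    }

    # Expand the counts into one row per respondent (hypothetical layout).
    rows = [(dept, answer)
            for dept, (yes, no) in counts.items()
            for answer in ["Yes"] * yes + ["No"] * no]
    df = pd.DataFrame(rows, columns=["department", "got_feedback"])

    # Cross tabulate and express each cell as a percentage within its department.
    pct = pd.crosstab(df["department"], df["got_feedback"], normalize="index") * 100
    print(pct.round(1))  # e.g. Logistics: Yes 100.0; Economics: Yes 37.5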

Furthermore, the instructors who did not receive official performance feedback after appraisal
were 13 in number, i.e. 37.1% of the 35 sampled instructors. This group of instructors was asked
why they did not receive the feedback. All of the 13 (100%) stated that there is "no feedback
system at all in the university".

Generally, the analysis shows that feedback after appraisal is not uniformly available for
instructors in different departments in the university.

4.5.2 Existence of Continuous Discussion on Instructors Performance


In addition to formal performance feedback, informal and regular communication between
supervisor and employee is desirable to make the performance appraisal process more effective
(Davis and Landa 1999; Kondrasuk et al. 2002). A performance appraisal process that
provides formal feedback only once a year is likely to be feedback deficient (Roberts 2003;
Elverfeldt 2005). Therefore, these scholars recommend ongoing and informal performance
feedback if the performance appraisal process is to be maximally effective. In line with this,
the sampled instructors were asked whether continuous dialogue on instructors' performance exists
between the department heads and the instructors. Accordingly, figure 12 below displays the
instructors' responses.

Figure 12: Existence of Continuous Discussion on Instructors Performance

Source: Own Survey Result, 2015

The above figure reveals that the majority of instructors, 25 (71.4%), maintained that
department heads and instructors do not hold continuous discussions about instructors'
performance. On the other hand, 10 instructors (28.6%) stated that such discussions
exist. This shows that little attention has been given to the importance of continuous dialogue
in the effort to improve instructors' performance. Therefore, instead of simply waiting for the
semester-end appraisal, it would be wise for heads of departments to play the role of coaches and
thereby adopt a culture of continuous dialogue with instructors.

4.5.3 Specificity of Performance Feedback


As described earlier, one of the essential features of performance feedback is the extent to which it is
specific. As opposed to general feedback, which is confusing, specific feedback helps
appraisees recognize their particular strengths and weaknesses. To this end,
instructors were asked in what form they received feedback on their performance. Then, they
were requested to evaluate the ability of the feedback system to help them know their specific
strengths and weaknesses.

Table 40: Forms of Performance Feedback Available for Instructors

Form of feedback                              Frequency    Percent    Valid Percent    Cumulative Percent
Summary of evaluation result                  20           57.1       90.9             90.9
Detail score along all evaluation criteria    1            2.9        4.5              95.5
Oral discussion with HOD                      1            2.9        4.5              100.0
Valid total                                   22           62.9       100.0
Missing (did not receive feedback)            13           37.1
Total                                         35           100.0

Source: Own Survey Result, 2015

According to table 40, of the 22 instructors who responded to the question, 20 (90.9%) stated that they
received performance feedback in the form of a summary of the appraisal score (result). Therefore, the
analysis shows that, though not regularly exercised, the dominant form of communicating the
appraisal score to instructors is a summarized appraisal score. This implies that instructors
can hardly know their achievement (low or high score) along each appraisal criterion. Hence,
such a form of feedback does not help instructors recognize the specific performance aspects that
need improvement in the future.

Furthermore, the following figure 13 shows the instructors' perception of the ability of the feedback
to help them know their specific strengths and weaknesses. Accordingly, of the 56
instructors who responded to the question, 44.64%, 28.57%, and 17.86% perceived the feedback as
'poor', 'satisfactory', and 'good', respectively, in terms of letting them know their specific
strengths and weaknesses.

Figure 13: Specificity of performance feedback

Source: Own Survey Result, 2015

Moreover, the mean score of 1.40 testifies that the performance feedback lacks specificity.
Therefore, the university needs to arrange a form of feedback that provides instructors with detailed
information about their performance. For instance, other things remaining equal, preparing a report
communicating instructors' achievement (appraisal score) along each appraisal criterion would
assist instructors in knowing their strong points and pitfalls.

4.5.4 Timeliness of Performance Feedback


Feedback must be timely if it is to be effective. When performance
feedback is precise and timely, it may result in behavioral change (Tziner et al. 1992; Roberts
2003). Timely feedback is assumed to be essential because it enables appraisees to analyze
mistakes rapidly, learn from them and remove obstacles from work processes. Based on this
theoretical foundation, an attempt was made to evaluate whether the existing feedback for
instructors is timely or not. The following table 41 displays instructors' responses regarding the
time interval within which they obtain feedback on their performance.

Table 41: Time interval in which feedback is available for instructors

Time interval    Frequency    Percent    Valid Percent    Cumulative Percent
1-2 months       7            20.0       20.0             20.0
3-4 months       3            8.6        8.6              28.6
> 4 months       25           71.4       71.4             100.0
Total            35           100.0      100.0

Source: Own Survey Result, 2015

According to table 41 above, the majority (71.4%) of instructors asserted that feedback becomes
available to them only after more than 4 months. The next largest group (20.0%) stated that they
obtain feedback within 1-2 months, while the remaining instructors (8.6%) maintained that
feedback is available to them within 3-4 months.

However, this situation deviates greatly from what is expected. The expectation is that, from the
performance appraisal conducted during a given semester, instructors should be provided with
timely feedback that helps them rectify their weaknesses. This is vital to enhance instructors'
performance in the subsequent semester. The appraisal is always conducted at the end of each
semester. Instructors and students have a two-week break at the end of the first semester and a two-
month vacation at the end of the second semester of a given academic year. Nevertheless, as
described earlier, it takes more than three months for instructors to obtain the performance
feedback. Clearly, the feedback is not timely enough to improve instructors' performance.
Hence, there is a need to shorten the time interval within which feedback is made available to
instructors.

4.5.5 Acceptance of Performance Feedback by Instructors
The way feedback is perceived and used is influenced by the attitude of the feedback
recipients. Alexander (2006) suggested that individuals who have positive attitudes toward the
appraisal process and believe it is fair are more receptive to feedback. The author also noted that
if appraisees become hostile towards the appraisers and the process, they are clearly not
ready to accept feedback. Moreover, while appraisees may become defensive towards negative
feedback, they are more often ready to accept positive feedback (Roberts 2003; Elverfeldt
2005; Alexander 2006). In order to examine the instructors' acceptance of their appraisal scores,
the instructors' scores in the head, peer and student appraisals were first analyzed. Then, the
instructors were asked whether their appraisal scores from the three sources reflect their true
performance. To this end, the subsequent analysis and discussion is based on the information in
tables 42 and 43.

Table 42: Instructor Appraisal Score during Second Semester of 2014/15 A/Year

Appraisal source           N     Minimum    Maximum    Mean    Std. Deviation
Head appraisal score       35    2          5          3.26    .780
Peer appraisal score       35    1          5          3.00    1.138
Student appraisal score    35    1          5          2.74    1.146
Total appraisal score      35    1          5          3.11    1.278

Source: Own Survey Result, 2015

Before discussing the data in the above table, it is helpful to take a brief look at
how instructors' appraisal scores are calculated in the university under study. On the instructors'
performance appraisal form, instructors are evaluated on a five-point scale along
every appraisal criterion, where 1 = very low, 2 = low, 3 = medium, 4 = high, and 5 = very high.
Because this five-point scale is used, the total achievement of an instructor along every appraisal
criterion is calculated out of a maximum of five (5) points.
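As a minimal illustration of this scoring scheme (a sketch with hypothetical ratings rather than actual survey data), an instructor's score along a single criterion is simply the mean of the raters' five-point responses, expressed out of a maximum of 5 points:

    def criterion_score(ratings):
        """Mean of the 1-5 ratings given by raters on one appraisal criterion."""
        return sum(ratings) / len(ratings)

    # Hypothetical example: five raters score one criterion on the 1-5 scale.
    print(round(criterion_score([4, 3, 5, 3, 4]), 2))  # 3.8 out of 5.0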

In line with this, the above table 42 indicates that the means of instructors' scores in the head, peer, and
student appraisals were 3.26, 3.0, and 2.74, respectively. This implies that, on average, every
instructor obtains those marks out of 5.0 points. The mean of the instructors' total appraisal
score was 3.11, which signifies that on average every instructor achieves 3.11 out of 5.0
points. Generally, the analysis shows that, on average, instructors receive a medium score (around
3.0) in appraisals from all three sources.

However, comparing instructors' appraisal scores from the three sources, the head appraisal
score is the most inflated, followed by the peer and student appraisal scores. Furthermore, the following
table 43 summarizes instructors' responses regarding whether their appraisal score reflects their
true performance. According to the table, instructors stated that their score in the head appraisal
reflects their true performance to a medium (60.0%) and low (11.4%) extent. On the other hand,
20.0% and 8.6% of instructors maintained that their head appraisal score reflects their true
performance to a high and very high extent, respectively.
Table 43: Instructors' perception of their appraisal score as a reflection of their true
performance

             Head appraisal        Peer appraisal        Student appraisal
             N      Valid %        N      Valid %        N      Valid %
Very low     -      -              3      8.6            6      17.1
Low          4      11.4           10     28.6           7      20.0
Medium       21     60.0           9      25.7           15     42.9
High         7      20.0           10     28.6           4      11.4
Very high    3      8.6            3      8.6            3      8.6
Total        35     100.0          35     100.0          35     100.0
Mean         3.26                  3.00                  2.74

Source: Own Survey Result, 2015

The table also reveals that instructors believe their score in the peer appraisal reflects their true
performance to a medium (25.7%), low (28.6%), and very low (8.6%) extent, while 28.6% and
8.6% of instructors maintained that their score in the peer appraisal reflects their true performance to a
high and very high extent, respectively. Moreover, regarding the score in the student appraisal, 42.9%,
20.0%, and 17.1% of instructors stated that the score reflects their true performance to a medium,
low and very low extent, respectively. On the other hand, 11.4% and 8.6% of instructors believe
that their true performance is reflected by the student appraisal score to a high and very high extent,
respectively. Furthermore, the mean scores of instructors' responses for the head, peer and student
appraisals are 3.26, 3.00 and 2.74, respectively.

The above description denotes that the majority of instructors believe their appraisal scores from all
sources (head, peer and student) reflect their true performance only to a limited extent. However,
a comparison of the mean values of the responses shows that the head appraisal score is in a better
position than the scores from the peer and student appraisals. The student appraisal score resulted in the
lowest mean value of responses, indicating that it is the least representative of instructors' true
performance compared to the head and peer appraisals.

4.6 Clarity of Purpose of Instructors’ Performance Appraisal


As discussed in an earlier section of this paper, organizations (including universities)
conduct performance appraisal for various purposes. Kyriakides (2006) suggested that the purpose of
performance appraisal is one of a number of situational or contextual variables that affect the
appraisal process. According to Monyatsi et al. (2006), if performance appraisal is to be
effective, the users (both the appraisers and the appraised) must understand and accept the
purposes of the appraisal scheme. The authors also added that if users are not aware or
convinced of the purpose of their performance appraisal, they become anxious and suspicious of the whole
process. To this end, the instructors' and the students' understanding of the purpose of
instructors' performance appraisal was evaluated.

Table 44: Instructors' understanding of the purpose of their performance appraisal

Level of understanding    Frequency    Percent
Very low                  6            17.1
Low                       4            11.4
Medium                    9            25.7
High                      12           34.3
Very high                 4            11.4
Total                     35           100.0
Mean                                   3.11

Source: Own Survey Result, 2015
The instructors were asked the question "What is your level of understanding about the purpose
of instructors' performance appraisal in your University?" Of the 35 instructors who
responded to the question, 25.7% stated that they had a medium level of understanding
about the purpose of their performance appraisal, while 34.3% and 11.4% asserted that their
understanding is high and very high, respectively. However, the remaining 28.5% (17.1% very
low plus 11.4% low) asserted that they lacked understanding of why the appraisal is in place.
Generally, it can be said that for the majority of instructors the purpose of their performance
appraisal is clear. The overall mean of 3.11 provides evidence of this.

Moreover, to evaluate students' understanding of the purpose(s) of instructors' performance
appraisal, students were asked to express their agreement/disagreement with the statement "I know
the purpose(s) of instructors' performance appraisal". Accordingly, the following table 45
displays a summary of the students' responses to the statement.
Table 45: Students' Understanding of the Purpose of Instructors' Performance Appraisal

Response             Frequency    Percent
Strongly disagree    48           21.7
Disagree             46           20.8
Neutral              60           27.1
Agree                35           15.8
Strongly agree       32           14.5
Total                221          100.0
Mean                              2.81

Source: Own Survey Result, 2015

As indicated in table 45, of the 221 students who responded to the above statement, 21.7%
strongly disagreed and 20.8% disagreed, making a 42.5% disagreement rate. On the other
hand, 15.8% agreed and 14.5% strongly agreed, adding up to a 30.3% agreement rate
with the statement. Moreover, the mean score of the students' responses is 2.81, signifying that the
majority of students lacked understanding of the purpose of instructors' performance appraisal.
When the mean scores of instructors (3.11) and students (2.81) are compared, instructors are in a
better position than students regarding the clarity of the appraisal purpose.
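The mean of 2.81 can be verified directly from the frequencies in table 45 by coding the responses from 1 (strongly disagree) to 5 (strongly agree); the short sketch below is only a check of that arithmetic, not part of the original analysis.

    # Frequencies from table 45: response code -> number of students.
    freq = {1: 48, 2: 46, 3: 60, 4: 35, 5: 32}

    n = sum(freq.values())                                       # 221 respondents
    mean = sum(code * count for code, count in freq.items()) / n
    print(n, round(mean, 2))                                     # 221 2.81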

Following this, instructors were asked about the actual purpose(s) of instructors' performance
appraisal in their university. In order to identify the main purposes of instructors' performance
appraisal as applied in the university, the respondents (instructors) were given a list of some ideal
purposes of performance appraisal and were asked to rate each purpose independently by
expressing their level of agreement/disagreement on a five-point scale. Accordingly, table 47
summarizes the instructors' responses (mean scores) for each purpose of instructors'
performance appraisal in the university.

Table 46: Relationship test between students' practice of rating with enough awareness about the
purpose of the evaluation criteria and instructors' perception of students' rating

Chi-Square Tests
                                Value       df    Asymp. Sig.    Exact Sig.    Exact Sig.    Point
                                                  (2-sided)      (2-sided)     (1-sided)     Probability
Pearson Chi-Square              19.071a     4     .001           .001
Likelihood Ratio                21.165      4     .000           .000
Fisher's Exact Test             17.895                           .001
Linear-by-Linear Association    15.059b     1     .000           .000          .000          .000
N of Valid Cases                256

a. 1 cell (10.0%) has an expected count less than 5. The minimum expected count is 4.38.
b. The standardized statistic is 3.881.

The p-value from Fisher's exact test is .001, which is below the α = 0.05 level. Therefore, there is
a highly significant relationship between instructors' perception of students' rating and students'
practice of rating with enough awareness about the purpose of the evaluation criteria. This result
supports the cross tabulation result discussed above.

Table 47: Purpose of Instructors' Performance Appraisal

Purpose of the appraisal                                         N     Mean
To give training to improve performance                         35    2.3143
To identify one's strengths and weaknesses through feedback     35    2.8286
To use as a base for giving scholarship                         35    3.1714
To give promotion for those who meet the standard               35    3.6286
To dismiss instructors who fail to meet the standard            35    2.3429
To provide an award for best performers                         35    2.4000
Just for formality                                              35    2.6857

Source: Own Survey Result, 2015

For the purpose of this analysis, since a five-point Likert scale was used, a mean score of 3.0 was
considered the midpoint (neutral), while mean scores greater than 3.0 and less than 3.0 were
interpreted as agreement and disagreement, respectively. According to table 47, the purposes of
instructors' performance appraisal with mean scores greater than 3.0 are giving promotion (mean
= 3.62) and scholarship (mean = 3.17) to instructors. The analysis shows that performance
appraisal is primarily used for making promotion decisions and secondly for making scholarship
decisions. On the other hand, instructors maintained that performance appraisal does not serve the
following purposes: dismissing poor performers (mean = 2.34), awarding best performers (mean
= 2.4), training needs assessment (mean = 2.31), identifying strengths and weaknesses through
feedback (mean = 2.82), and mere formality (mean = 2.68).
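The decision rule described above can be made explicit with the small sketch below, which merely ranks the means reported in table 47 and labels each purpose according to the 3.0 midpoint; it adds nothing beyond the published figures.

    # Mean agreement scores reported in table 47 (five-point scale, midpoint 3.0).
    purpose_means = {
        "give training to improve performance": 2.3143,
        "identify strengths and weaknesses through feedback": 2.8286,
        "use as a base for giving scholarship": 3.1714,
        "give promotion for those who meet the standard": 3.6286,
        "dismiss instructors who fail to meet the standard": 2.3429,
        "provide an award for best performers": 2.4000,
        "just for formality": 2.6857,
    }

    for purpose, mean in sorted(purpose_means.items(), key=lambda kv: -kv[1]):
        verdict = "agreement" if mean > 3.0 else "disagreement"
        print(f"{purpose}: mean {mean:.2f} -> {verdict}")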

Generally, the analysis shows that the formative purpose of performance appraisal has received less
attention in the university. Normally, instructors' performance appraisal should be used to
improve the performance of the instructors. This could be done, for example, by assessing
instructors' training needs and providing performance feedback that would help instructors
identify their strengths and weaknesses. However, the case in this university is the inverse
of what is expected, because the appraisal is primarily used for promotion. The use of appraisal
for promotion is summative in nature and has little to do with improving the performance of
instructors.

Therefore, it can be concluded that the current performance appraisal in the university is neither
serving the ideal purposes nor appropriate to serve them unless urgent corrective action is
undertaken.

CHAPTER FIVE

CONCLUSION AND RECOMMENDATION


5.1 Conclusions
Based on the data analysis and discussion in the previous chapter, the researcher has drawn the
following major conclusions.

Under normal circumstances, prior to conducting performance appraisal, appraisees must be
informed of the appraisal criteria against which their performance is going to be evaluated.
This could be done by providing training/orientation and/or a detailed job
description that clarifies performance expectations. However, at the outset instructors were given
neither a job description nor training that clarifies their role perception. Consequently, instructors
can hardly understand what is expected of them as instructors in the university. Because they do
not know their performance expectations, it was identified that the majority of instructors do what
they feel is right in the attempt to keep their performance up to standard. Therefore, the absence of
a job description and of training on the appraisal criteria shows that the instructors' performance
appraisal process lacks a proper foundation.

The majority of instructors perceived their current appraisal criteria as deficient in essential qualities
that effective appraisal criteria must possess. Among other things, instructors perceived that the
appraisal criteria fall short of the following qualities: instructors' participation in their
design, relevance and completeness, consideration of practical difficulty, reliability, and the ability to
measure instructors' contribution to student learning.

The study also attempted to assess the practices of the appraisers (students, peers and heads).
With regard to student appraisal, the researcher found that students' practices during
appraisal are full of biases. The results indicate that, when evaluating instructors' performance,
students take into consideration many factors that are not normally related to instructors' performance.
To this end, an easy exam prepared by the instructor, a smaller number of assignments given by the
instructor, a previous good grade awarded by the instructor, a good grade expected from the instructor
and the humor of the instructor are found to be some of the bases on which students provide favorable
scores for instructors. In addition, it was also identified that, if students had a single negative
experience with an instructor, they take revenge on that instructor by providing a poor appraisal score.

Moreover, when appraising instructors' performance, students do not compare the performance of
instructors with the given appraisal criteria. In order to evaluate instructors' performance based
on the appraisal criteria, there is a need to properly read each appraisal criterion. However,
disappointingly, the majority of students reported that they never properly read the appraisal criteria.
Instead, they conduct the appraisal based on a preconceived image of the instructors. In addition,
they contrast the performance of one instructor against that of other instructors. Furthermore,
students' self-reports and the interviews with heads of departments revealed that students lacked
proper training on instructors' performance appraisal. Therefore, it is possible to infer that the lack
of training of students as appraisers could be responsible for the aforementioned biased practices.
The only encouraging finding about students' practice during appraisal is that students give little
consideration to an instructor's physical attractiveness.

Concerning peer appraisal, the study revealed that the majority of peers' practices in appraisal are
discouraging. The only interesting finding is that the majority of instructors perceived that favoritism
had little room in their peer appraisal; that is, they stated that peers do not favor their close
friends. On the other hand, it was found that peers are less informed of each other's
performance; nevertheless, they provide inflated appraisal scores for each other without properly
reading each appraisal criterion. Conclusively, peer appraisal is not a reflection of the real
performance of instructors.

The study also uncovered that most of the heads' practices during appraisal are discouraging. The
justifications for this conclusion are that heads lacked proper records and information on
instructors' performance and that they never compare instructors' performance with the appraisal
criteria; rather, since the appraisal score has implications for promotion, they provide equivalent
high scores for all instructors in their departments. Additionally, heads were expected to play a dual
role in appraisal, i.e. evaluator and coach; however, the coaching role of heads was found to be
missing. On the other hand, the only appealing finding regarding head appraisal is that the contrast
effect had little room in heads' appraisal. Generally, it can be concluded that students', peers' and
heads' appraisals of instructors' performance are full of biases.

The study also revealed that instructors' performance feedback exists as a mere formality. Despite
the fact that the majority of instructors have received official feedback (appraisal scores), the
feedback is not uniformly available for all instructors. Moreover, the feedback is found to be
irregular, unspecific and not timely. That is, since instructors are given a summarized form of the
appraisal score, they cannot identify their specific strengths and weaknesses from the
feedback. The situation is aggravated by the fact that the appraisal score is given only after more
than four months. Similarly, the regularity of giving the appraisal score and the
continuous dialogue between instructors and management on instructors' performance are found
to be deficient. This nature of the performance feedback has made it ineffective in serving its ideal
purpose, i.e. improving instructors' performance.

Furthermore, it was uncovered that, whilst all instructors had medium appraisal scores, they did not
accept those appraisal scores as a reflection of their true performance.

An attempt was also made to evaluate the clarity of purpose of instructors' performance
appraisal. The analysis indicated that instructors had a high level of understanding of why
instructors' performance appraisal is in place; however, students lacked this understanding. The
actual purposes of the appraisal are to make promotion and scholarship decisions. This shows
that some ideal purposes of the appraisal, for instance training needs assessment and identifying
strengths and weaknesses, are missing.

Therefore, it can at least be concluded that the utility of the instructors' performance appraisal process
in improving instructors' performance is minimal. To sum up, the instructors' performance appraisal
process is not effective because of 1) the poor quality of the appraisal criteria; 2) the biased practices of
appraisers; 3) an ineffective performance feedback system; and 4) appraisers (students) lacking
awareness of the appraisal purpose and the appraisal not serving a formative purpose.

5.2 Recommendations
Based on a thorough analysis of the findings of the study, the researcher puts forward the following
recommendations that may enhance the effectiveness of the instructors' performance appraisal process.

It is obvious that preparing appraisal criteria is the foremost step in the performance appraisal
process. This study disclosed that the appraisal criteria used for appraising the performance of
instructors lacked indispensable features. Therefore, it is recommended that the appraisal
criteria be urgently redesigned. To do so, firstly, a detailed analysis of what the instructors' job
entails must be conducted. This job analysis is needed to specifically identify the key tasks and
duties that instructors ought to perform. Then, a written job description which specifies the duties and
responsibilities of instructors has to be prepared. Based on the job description, detailed appraisal
criteria should be established.

In the course of establishing the appraisal criteria, the full participation of instructors must be
obtained. This participation helps establish realistic, relevant, complete and reliable targets. In
addition, if the appraisal criteria are prepared in consultation with instructors, it is highly likely that
the criteria will gain acceptance among instructors, thereby enhancing the effectiveness of the appraisal
process. To further enhance the quality of the appraisal criteria, it is better if the best experiences of
other universities are benchmarked and the necessary adjustments effected. Finally, the job description
and the detailed appraisal criteria must be communicated to instructors in order to clarify their
role perception. The university must also reconsider the kind, quality and quantity of resources
available for instructors to achieve the established criteria.

The results of this study indicated various factors that contaminate the usefulness of student
appraisal of instructors' performance. Though biases are inevitable in performance appraisal, it
is possible to minimize the likelihood of biased practices. The literature provides ample
evidence of the role that appraiser training plays in reducing possible errors and biases in the
appraisal process. Therefore, to enhance the usefulness of student appraisal, students must be trained
during their freshman semester and periodic refresher training must then be given to them. The study
identified that students lacked training on instructors' performance appraisal and also lacked
awareness of the purpose of the appraisal. For that matter, some of them perceive the appraisal as a
tool for punishing instructors who are unpopular among them. Therefore, during the training,
students must be told that the appraisal is primarily needed to improve the performance of
instructors. Since the improved performance of instructors will contribute to students' learning,
this fact must also be communicated to students during the training.

It was also discovered that students are suspicious of the confidentiality of the appraisal scores
they provide for instructors. Consequently, they avoid providing open-ended comments about
instructors and sometimes tend to inflate the appraisal scores of instructors whom they are afraid of.
Therefore, during the training, students must be informed of who is responsible for processing the
appraisal scores and how they are processed. Moreover, the content of the training program should
cover issues such as the meaning of each appraisal criterion and the interpretation of the scales along
each appraisal criterion. In addition, students must be alerted to the irrelevance of contextual
variables (for example, personality attributes, exam easiness and grade offering) to instructors'
performance.

The results of the study indicated that peer appraisal is subject to leniency bias because the
appraisal score is used for making promotion and scholarship decisions. To overcome this
problem, the principal purpose of instructors' performance appraisal must be improving the
performance of instructors rather than the conventional administrative purposes. Instructors
must act in a professional manner and view the appraisal as part and parcel of their
organizational responsibility. This value system must be cultivated in their minds through
different training sessions, workshops and seminars that could be conducted as in-house training
programs. The training must focus on the appraisal purpose, how to give effective feedback, and
the appraisal criteria.

To objectively evaluate each other's performance, instructors must be well informed of each
other's performance. Information on performance is critical to increase the accuracy of
ratings. Therefore, peer appraisal should be conducted on a team basis rather than on a departmental
basis, because instructors within the same team have more interaction with each other and are
more informed of each other's performance.

Heads of departments must do their best to obtain information on the performance of the
instructors in their departments and, where necessary, should also keep the necessary records on
instructors' performance. In addition to evaluating instructors' performance, heads must also
play the role of coach and facilitator so that instructors become better performers. In addition,
heads must be given interpersonal skills training so that they are equipped with
communication, coaching and counseling skills. These skills are essential to ensure that the
performance appraisal is a pleasant experience for both the instructors and the heads. More
importantly, heads themselves must be evaluated on how effectively they evaluate instructors'
performance and play all the necessary roles in enhancing the performance of instructors. This is
needed to amplify the value that heads attach to the instructors' performance appraisal process.

Performance feedback plays a pivotal role in improving the performance of instructors. Therefore,
the following points are recommended to enhance the value of the feedback.

1. The frequency of the appraisal must be increased, as conducting the appraisal only at the end of
each semester can hardly improve instructors' performance.

2. The summarized form of the appraisal score must be replaced with a form that provides
instructors with detailed information about their performance. For instance, other things remaining
equal, preparing a report communicating instructors' achievement (appraisal score) along each
appraisal criterion would help instructors know their strengths and weaknesses.

3. The time taken to communicate appraisal scores to instructors must be shortened in such a way
that instructors can rectify their weaknesses during the semester so as to perform better in future
semesters.

4. A culture of continuous dialogue on instructors' performance must be cultivated among
instructors and heads of departments. That is, whenever an instructor exhibits some
weakness and the head or peers recognize the fact, it must be discussed on the spot rather than
waiting for months until performance is evaluated. Moreover, it is not sufficient to tell an
instructor about his/her performance problem areas without suggesting specific mechanisms
for how he/she can improve. Generally, performance feedback must be
frequent, precise, timely and consistent. After all the above-mentioned adjustments are effected
in the instructors' performance appraisal process, the appraisal must be used for all ideal purposes
(both formative and summative). For instance, best performers must be recognized and rewarded
(summative), and poor performers must be identified and trained (formative).

Area for Further Research
This study has examined the effectiveness of the instructors' performance appraisal process in terms of
the quality of the appraisal criteria, the practices of appraisers, the effectiveness of the feedback, and
the clarity of the purpose of the appraisal. However, future research may investigate the effectiveness
of the instructors' performance appraisal process in terms of its impact on instructors' behavioral
outcomes.

BIBLIOGRAPHY
Alo Oladimeji (1999), Human Resource Management in Nigeria, Business and Institutional
Support Associates Limited, Lagos.
Anjum, A (2011), „Performance appraisal systems in public sector universities of Pakistan,
International Journal of Human Resource Studies, vol.1, no1, pp 41-51.
Armstrong, M & Baron, A (1998), Performance Management Handbook, IPM, London
Atiomo A.C. (2000), Human Resource Management; Malt house Management Science Books,
Lagos.
Bacal, R. (1999), “Performance management” New York: McGraw-Hill. C. Dawson. (2002).
Practical Research Methods: A user friendly guide to mastering research
techniques and projects. Cromwell Press, Trowbridge, Wiltshire.
Baird, J 1997, “Perceived learning in relation to student evaluation of university instruction”
Journal of Educational Psychology, vol.79, no.1, pp 90-91.
Berk, A (2005), “Survey of 12 strategies to measure teaching effectiveness”, International
Journal of Teaching and Learning in Higher Education, vol. 17, no. 1, pp 48-62.
Boice, F & Kleiner, H (1997), „Designing effective performance appraisal systems,‟ Work
Study, vol.46, no.6, pp 197–201.
Braskamp, L & Ory, J 1994, Assessing faculty work, San Francisco: Jossey Bass.
Burak, Elmer and Smith (1977), Personnel Management: A Human Resource Systems
Approach; Reinhold Publishing Corporations Ltd. New York.
Centra, A (2003), „Will teachers receive higher student evaluation by giving higher grades and
less course work?” Research In Higher Education, vol.44, no.5, pp 495-518.
Clinton O. Longenecker (1997), “Why managerial performance appraisals are ineffective:
Causes and Lessons Career Development international, Vol.2, 5, pp. 212- 218.
Cohen, P. (1981), „Student ratings of instruction and student achievement: a meta- analysis of
multi-section validity studies‟, Review of Education, vol.51, no.3, pp 281-309.
Crane, D.P., & Jones Jr., W.A (1991): the public manager. Atlanta Georgia state university press
Cummings. M.W. (1972): “Theory and Practice” William Heinemann Ltd. London.
Danial, A (2011), „Performance evaluation of instructors in universities: contemporary issues
and challenges‟, Journal of Education and Social research, vol.1, no.2, pp 10-
31.

Danielson, C & McGreal, TL (2000), Instructor evaluation to enhance professional practice
Princeton, New Jersey: Educational Testing Service.
Dargham, S (2007), Effective management of the performance appraisal process in Lebanon: an
exploratory study
David, P & Macayan, V (2010): “Assessment of teacher performance” The Assessment
Handbook, vol.3, pp 65-76.
Davis, T & Landa, M (1999), A contrary look at performance appraisal, Canadian
manager/manager Canadian, pp 18-28.
Deming, E (1986), Out of crisis: quality, productivity and competition position, Cambridge
University Press, Cambridge.
Derven, M. (1990): “The Paradox of Performance Appraisal” Personnel Journal Volume 69.
Diane M. Alexander (2006), how do 360 degree performance reviews affect employee Attitudes,
effectiveness and performance?: University of Rhode Island press.
Dowell, D & Neal, J (2004), „The validity and accuracy of student ratings of instruction: a reply
to Peter A. Cohen‟, Journal of Higher Education, vol.54, pp. 459-63.
Ellet, C, Wren, C, Callendar, K & Loup, K (1997), „Assessing enhancement of learning,
personal learning environment, and student efficacy: alternatives to traditional
faculty appraisal in higher education‟, Journal of Personnel evaluation in
Education, vol.11, pp 167-192.
Elverfeldt, A (2005), Performance appraisal: how to improve its effectiveness, unpublished
master thesis, University of Twente, Enschede.
Emery, R, Kramer, R, Tian, G (2003), „Return to academic standards: a critique of student
evaluations of teaching effectiveness‟, Quality Assurance in Education, vol.11,
no.1, pp 37-46.
Evancevich, M (2004), Human Resource Management, 7th edition, Irwin/McGraw- Hill, USA.
Fakharyan, M., Jalilvand, R.M, Dini, B. &Dehafarin, E.(2012): “The effect of performance
appraisal satisfaction on employee‟s outputs implying on the moderating role of
motivation in workplace” International journal of business and management
tomorrow.

Fried, Y (1999) “Inflation of subordinates‟ performance ratings: main and interactive effects of
rater negative affectivity, documentation of work behavior, and appraisal
visibility”, Journal of Organizational Behavior, vol.20, no.4, pp.431-444.
Friedman, S (1984), „Strategic appraisal and development at General Electric Company‟,
Journal of Management, vol.45, no.2, pp 183-201.
Goddard, I & Emerson, C (1996): “Appraisal and your school”, Oxford: Heinemann.
Harris, M (1994), “Rater motivation in the performance appraisal context: a theoretical
framework”, Journal of Management, vol.20, no.4, pp 737-756.
Horne, H & Pierce, A (1996), A practical guide to staff development and appraisal in schools,
London: Koganpag.
Islam, R & Rasad, M (2005), „Employee performance evaluation by AHP: a case study‟,
Honolulu, Hawaii.
Jacobs R. (1980), Expectations of behaviorally anchored rating scales.
Johnson, S. (1990). Teachers at work: Achieving success in our schools. New York: Basic
Books.
Kauchak, D., Peterson, K., & Driscoll, A. (1985) “An interview study of teachers‟ attitudes
towards teacher evaluation practices”. Journal of Research and Development in
Education, 19, 32–37.
Kavanagh, P., J. Benson, M. Brown, (2007). “Understanding performance appraisal fairness”
Asia Pacific Journal of Human Resource, 45(2): 89-99.
Keig, L & Waggoner, M (1994), „Collaborative peer review: the role of faculty in improving
college teaching ‟ASHE/ERIC Higher Education Report, no.2.
Kessler HW (2003). Motivate and reward: Performance appraisal and incentive systems for
business success. Great Britain: Curran Publishing Services.
Khan, A (2007): “Performance appraisal‟s relation with productivity and job satisfaction‟,
Journal of Managerial Sciences, vol.1, no.2, pp 100-114.
Kondrasuk, J.N. et al. (2002). An Elusive Panacea: The Ideal Performance Appraisal. Journal of
Managerial Psychology, 64 (2), 15-31
Kyriakides, L, Demetriou, D & Charalambous, C (2006), „Generating criteria for evaluating
teachers through teacher effectiveness research‟, Educational Research, vol.48,
no.1, pp 1- 20.

Lee, C (1985), „Increasing performance appraisal effectiveness: matching task types, appraisal
process, and evaluator training‟, Academy of Management Review, vol.10,
no.2, pp 322-331.
London M (2003): “Job feedback: Giving, seeking, and using feedback for performance
improvement” Second edition London, England: Lawrence Erlbaum Associates.
Lortie, D. (1975). Schoolteacher: A sociological study. Chicago: University of Chicago Press.
Mamoria, C.B. (1995): “Personnel Management; Himalaya Publishing House, Bombay.
Marquardt, M. (2004): “Optimizing the Power of Action Learning”. Palo Alto, CA: Davises-
Black, 26 (8): 2.
McGregor, Douglas (1957), 'An Uneasy Look at Performance Appraisal', Harvard Business
Review, May/June.
Methods for Graduate Business & Social Science Students. California, Sage.
Milkovich, G. M., & Boudreau, J. W. (1997): “Human resource management” Chicago: Irwin.
Mondy, R. Wayne and Noe, Robert M. (1981): “Human Resource Management, Massachusetts:
Simon & Schuster, Inc.,
Monyatsi P, Steyn, T & Kamperet, G (2006): “Instructor perceptions of the effectiveness of
instructor appraisal in Botswana”, South African Journal of Education, vol.26,
no.3, pp 427–441.
Morris, L (2005), „Performance appraisals in Australian Universities: imposing a managerialistic
framework into a collegial culture‟, AIRAANZ, pp 388-393.
Mwita, I 2000, „Performance management model- A systems-based approach to public service
quality‟, The International Journal of Public Sector Management, vol. 13, no.1,
pp. 19-37.
Naftulin, D, Ware, J & Donnelly, F 1973, “The doctor fox lecture: a paradigm of educational
seduction”, Journal of Medical Education, vol.48, pp 630-635.
Nimmer, J. & Stone, E. (1991), 'Effects of grading practices and time of rating on student ratings
of faculty performance and student learning', Research in Higher Education,
vol.32, no.2, pp 195-215.
Noe, A 1996, Human Resource Management: Gaining a Competitive Advantage, 2nd edition,
Irwin /McGraw-Hill, USA.

Nzuve S.N.M. (2007). Management of Human Resources: A Kenyan Perspective, Nairobi, Basic
Modern Management Consultants.
Onwuegbuzie, J, Witcher, E, Collins, MT, Filer, D, Wiedmaier, D, & Moore, W (2007),
'Students' perceptions of characteristics of effective college teachers: a validity
study of a teaching evaluation form using a mixed-methods analysis', American
Educational Research Journal, vol.44, no.1, pp 113-160.
Peterson, K. (2000). Teacher evaluation: A comprehensive guide to new directions and practices
(2nd ed.). Thousand Oaks, CA: Corwin
Prowker, A (1999), Effects of purpose of appraisal on leniency errors: an exploration of self-
efficacy as a mediating variable, unpublished master thesis, Virginia
polytechnic institute and state university.
Rao, V.S.P (2005). Human Resource Management: Text and Cases. (2nd ed.). New Delhi: Excel
Books.
Rasheed, I, Aslam, D, Yousaf, S & Noor, A (2011), „A critical analysis of performance appraisal
system for instructors in public sector universities of Pakistan: A case study of
the Islamia University of Bahawalpur (IUB)‟, African Journal of Business
Management, vol.5, no.9, pp 3735-3744.
Robbins, S. P. (1997). Organizational behavior: Concepts, controversies, applications (8th ed.).
Upper Saddle River, New Jersey: Prentice Hall.
Roberts, E (2003), „Employee performance appraisal system participation: a technique that
works‟, Public Personnel Management, vol.32, no.1, 89-98.
Rudd, R, Hoover, T & Connor, N (2001): “Peer evaluation of teaching in University of Florida‟s
college of agricultural and life sciences”, Journal of Southern Agricultural
Education Research, vol.51, no.1, pp 189-200.
Sachin Maharaj (2014) “Administrators‟ views on teacher evaluation: Examining Ontario‟s
teacher performance appraisal Canadian Journal of Educational Administration
and Policy, Issue #152
Scullen, S. E., Mount, M. K., & Judge, T. A. (2003): “Evidence of the construct validity of
developmental ratings of managerial performance. Journal of Applied
Psychology, 88(1), 50–66.

Shevlin, M., Banyard, P., Davies, M.N.O. & Griffiths, M.D.(2000). The validity of student
evaluations in higher education: Love me, love my lectures? Assessment and
Evaluation in Higher Education, 25, 397-405.
Simmons, J & Iles, (2010) „Performance appraisals in knowledge-based organizations:
implications for management education‟, International journal of management
education, vol.9, pp 3-18.
Steiner, D. D., & Rain, J. S. (1989): "Immediate and delayed primacy and recency effects in
performance evaluation". Journal of Applied Psychology, 74: 136-142.
Stronge, J & Tucker, P 2003, Handbook on Teacher Appraisal: Assessing and Improving
Performance, Eye On Education Publications.
Swanepoel, B.,Erasmus, B., Van, W. M. and Schenk, H. (2000): “South African Human
Resource Management Theory and Practice”. Cape Town: Juta and Company.
Thomas, S. L., & Bretz, R. D., Jr. (1994, Spring), "Research and practice in performance
appraisal: Evaluating performance in America's largest companies", SAM
Advanced Management Journal, 28-37.
Tziner, A & Kopelman, R (2002), „Is there a preferred performance rating format? A non
psychometric perspective‟, Applied Psychology: an International Review,
vol.51, no.3, pp 479-503.
Walsh, B (2003): “Perceived fairness of and satisfaction with employee performance appraisal”,
unpublished PhD dissertation, Louisiana State University.
Weinberg, A (2007), Evaluating methods for evaluating instruction: The case of higher
education, National Bureau of Economic Research working paper, 12844:
Cambridge, MA.
Welbourne, T. M., Johnson, D. E., & Erez, A. (1998). The role-based performance scale:
Validity analysis of a theory-based measure. Academy of Management Journal,
41(1), 540–555.
Worthington, A (2002), „The impact of student perceptions and characteristics on teaching
appraisals: a case study in finance education‟, Assessment & Appraisal in
Higher Education, vol.27, no.1, pp 49-64.
Yamane, Taro. (1967). Statistics: An Introductory Analysis, 2nd Ed., New York: Harper and
Row.

Yong. F. (1996), “Inflation of subordinates‟ performance ratings: Main and interactive effects
of Rater Negative Affectivity, Documentation of Work Behavior, and Appraisal
visibility. Journal of organizational Behavior”, Vol.20, No.4. (Jul.,1999),
pp.431-444.

Appendices

Questionnaire for instructors
Dear respondent, this questionnaire is designed to elicit the necessary
information for conducting the research entitled "Effectiveness of Instructors'
Performance Appraisal Process in Bahir Dar University: A Case of the
College of Business and Economics." The study is for academic purposes, i.e. for
partial fulfillment of the requirements for the award of Master of Business
Administration (MBA). Your genuine responses to each question on this
questionnaire will determine the success of the study. You are
hereby assured that your identity and the information you provide will be kept
in strict confidence.

General direction:
 You need not write your name anywhere on this questionnaire.

 Please, carefully read each of the following questions and make a tick
mark (√) in the appropriate box. You can choose more than one option
where necessary.

 Note that: 1=SDA (Strongly Disagree), 2=DA (Disagree), 3=N (Neutral),


4=A (agree), 5=SA (Strongly Agree)

 If you have any difficulty or further query on how to fill this


questionnaire, please don’t hesitate to contact me via the following
address:

Name: Getachew Mekonnen

Phone: 0912902525

Email: [email protected]

Thank you in anticipation for your cooperation!!!

Part One: Demographic Profile of Respondents (Instructors)

1. What is your sex?

[1] Male [2] Female

2. What is your year(s) of experience as an instructor in Bahir Dar University?

[1] =<1 [2] 2-3 [3] 4-6 [4] 7-10 [5] >=11

3. What is your academic rank?

[1] GA-I [2] GA-II [3] Assistant Lecturer [4] Lecturer

[5] Assistant professor [6] Associate professor [7] Professor

4. You are member of which of the following Department?

[1] Economics [2] management [3] Accounting [4] Marketing

Part Two. Quality of Appraisal Criteria

Please, express the extent of your agreement/disagreement with the following statements about
quality of evaluation criteria currently used to evaluate performance of instructors in your
University.

SDA DA N A SA
No Description (1) (2) (3) (4) (5)
1 Upon employment in Bahir Dar University, every
instructor will be given a job description specifying
his/her duties
2 Upon employment in Bahir Dar University, every
instructor will be formally trained on the criteria used to
evaluate his/her performance
3 All evaluation criteria currently used to evaluate my
performance are relevant to tasks in my job.
4 All my duties are measured in current evaluation
criteria
5 Current evaluation criteria take into consideration the
practical difficulties in the environment in which I
perform.
6 Current evaluation criteria are reliable
7 Current evaluation criteria measure how an instructor
contributes to students learning
8 Current evaluation criteria are prepared in consultation
with instructors

9. How did you manage to keep your performance up to standard as an instructor in the
University?

[1] By asking senior instructor in the department

[2] By doing what I feel appropriate

[3] By reading criteria on instructors evaluation form

[4] I found difficulty to know what is right

10. What is your overall comment on your appraisal criteria?


______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________

Part Three: Instructors Perceptions of Their Performance Evaluators’ Practices

Direction: Please, express the degree of your agreement/disagreement on the following


statements indicating potential practices of evaluators of instructors’ performance.

1. Students practices in evaluation of instructors performance


SDA DA N A SA
No Description (1) (2) (3) (4) (5)
1 Students provide favorable score for physically
attractive instructor
2 Students provide favorable score for funny instructor
(who tells jokes) during class
3 Students evaluation is based on previous grade
awarded by instructors
4 Students evaluate instructor positively if they expect
good grade from him/her
5 Students provide high score for instructor who gives
less number of assignments
6 Students provide high score for instructor who gives
easy exam
7 Students, who had single negative experience with
instructor, will provide low score on all evaluation
criteria.
8 Students evaluate instructors‟ performance by
contrasting the performance of one instructor against
that of another instructor.
9 Students evaluate me by comparing my real
performance with a given standard/evaluation criteria.
10 Students properly read criteria of evaluation while
evaluating instructors‟ performance.

11 Students know the purpose of instructors performance
evaluation
12 Students have received enough training to evaluate
instructors performance

2. Peers/instructors practices in evaluation


1 Peers properly read each evaluation criterion during
evaluating my performance
2 Peers provide equivalent high scores to all peers
3 Peers positively evaluate their close friends (peers).
4 Peers are well informed of my performance along all
evaluation criteria
5 During evaluation, peers do contrast performance of
one peer against that of other peers.
6 Peers evaluate me by comparing my real performance
with a given standard/evaluation criteria.

3. Head of Department (HOD) practices in evaluation


1 HOD usually keep necessary record on my
performance during the appraisal period
2 HOD rate his close friends more positively than other
instructors‟
3 HOD is well informed of my performance along all
criteria of evaluation
4 HOD evaluate me by comparing my performance with
a given standard/evaluation criteria
5 During evaluation, HOD does contrast my performance
against that of my peers
6 HOD is also my coach to improve my performance.
7 HOD gives equivalent high evaluation score to all staff
members

Part Four: Characteristics of Performance Feedback (Communication of Evaluation


score/result)

1. Have you ever got official feedback on your performance after a performance evaluation?

[1]Yes [2] No

If your answer to question no.1 is No, please answer question no.2 and 3 and move to questions
under part five. If your answer for question no.1 is yes, skip question no.2 and continue replying
the remaining questions.

2. What was the reason that you didn‟t receive the feedback?

[1] No feedback system at all in the university

[2] I dislike receiving the feedback

[3] I never trust the evaluators of my performance

[4]Other, if any, please specify________________________________________

3. Besides the formal feedback after performance evaluation at the end of semester, do
management (department head) and instructors make continuous dialogue (discussion)
about instructors performance?

[1]Yes [2] No

4. In what form did you receive the feedback on your performance?

[1] A summary of evaluation result (score) provided by evaluators

[2] A detailed score along all evaluation criteria

[3]. Oral discussion with department heads

[4] Other, if any, please specify_______________________________________

5. Within what time interval is the feedback on your performance made available to you?

[1] < 1 Month [2] 1 -2 Months [3] 3-4 Months [4] > 4 Months

6. How do you evaluate the ability of feedback system in helping you know your specific
aspects of strengths and weaknesses?

[1] Poor [2] Satisfactory [3] Good [4] Very good [5] Excellent

7. What was your score in your most recent (first semester of the 2014/15 academic year)
performance appraisal?

Student evaluation: _______

Head evaluation: _________

Peer evaluation: _________

8. To what extent does your evaluation score reflect your true performance?

A. Student evaluation: Very low [1] Low [2] Medium [3] High [4] Very high [5]

B. Head evaluation: Very low [1] Low [2] Medium [3] High [4] Very high [5]

C. Peer evaluation: Very low [1] Low [2] Medium [3] High [4] Very high [5]

Part Five: Clarity of Purpose of Instructors’ Performance Evaluation

1. What is your level of understanding about the purpose of instructors performance


evaluation in your University?

[1] Very low [2] Low [3] Medium [4] High [5] Very high

2. What do you think are the purposes of evaluating your performance in your university? Mark
all those applicable in your University.

SDA DA N A SA
No Description (1) (2) (3) (4) (5)
1 To give training to improve performance
2 To identify one's strengths and weaknesses through
feedback
3 To use as a base for giving scholarship
4 To give promotion for those who meet standard
5 To dismiss instructors who fail to meet standard
6 To provide award for best performers
7 Just for formality
8 Other purposes

Your overall comments about instructors' performance appraisal are very important!
______________________________________________________________________________
______________________________________________________________________________
__________________________________________________________________________

Thank You for Your Cooperation!

Questionnaire for Students
Dear respondents, this questionnaire is designed to elicit the information
necessary for conducting the research entitled "Effectiveness of Instructors
Performance Appraisal Process in Bahir Dar University: A Case of the
College of Business and Economics." The study is for academic purposes,
i.e., for the partial fulfillment of the requirements for the award of a Master
of Business Administration (MBA). Your genuine response to each question
on this questionnaire will determine the success of the study. You are hereby
assured that your identity and the information you provide will be kept in
strict confidence.

General direction:

• You need not write your name or ID number anywhere on this questionnaire.

• Please read each of the following questions carefully and put a tick mark (√)
in the appropriate box. You may choose more than one option where necessary.

• Note that: 1 = SDA (Strongly Disagree), 2 = DA (Disagree), 3 = N (Neutral),
4 = A (Agree), 5 = SA (Strongly Agree)

• If you have any difficulty or further query on how to fill in this
questionnaire, please don't hesitate to contact me via the following address:

Name: Getachew Mekonnen

Phone: 0912902525

Email: [email protected]

Thank you in anticipation for your cooperation!!!

Part One: Demographic Profile of Respondents (Students)

1. What is your sex?


[1] Male [2] Female
2. In which of the following intervals does your last semester's CGPA fall?
[1] Below 2.0     [2] 2.0-2.74     [3] 2.75-3.24
[4] 3.25-3.74     [5] 3.75-3.94     [6] 3.95-4.0

3. To which department do you belong?

[1] Management          [2] Economics          [3] Accounting and Finance
[4] Marketing Management          [5] Logistics, Tourism and Hotel Management

Part Two: Evaluation Practices of Students in Instructors' Performance Appraisal

Direction: Please express your agreement or disagreement with the following statements.
Note: SA = Strongly Agree, A = Agree, N = Neutral, DA = Disagree, SDA = Strongly Disagree

1. Students' practices in evaluating instructors' performance
                                                                                    SDA  DA   N    A   SA
No  Description                                                                     (1)  (2)  (3)  (4)  (5)
1   I provide a favorable score for a physically attractive instructor
2   I provide a favorable score for an instructor who is funny (tells jokes) during class
3   I evaluate an instructor positively if he/she awarded me a good grade in a previous course he/she taught
4   I evaluate an instructor positively if I expect a good grade from him/her
5   I provide a high score for an instructor who gives fewer assignments
6   I provide a high score for an instructor who gives easy exams
7   If I have had a single negative experience with an instructor, I provide him/her with a low score on all evaluation criteria
8   I evaluate an instructor's performance by contrasting it against that of other instructors
9   I evaluate an instructor by comparing his/her real performance with a given standard/evaluation criteria
10  I properly read the evaluation criteria while evaluating an instructor's performance
11  I know the purpose of instructors' performance evaluation
12  I have received enough training to evaluate instructors' performance
Thank You for Your Cooperation!!!

Interview with Heads of Departments

1. How do you keep track of the instructors' performance? Can you get full information about all
instructors' performance across all evaluation criteria?

2. What is/are your role(s) as HOD in instructors' performance appraisal, other than evaluating
instructors?

• Training,
• Coaching,
• Feedback, or other roles?

3. Is there any training (about the instructors' performance evaluation process) given to instructors
and students?

4. Who prepared the evaluation form? Are instructors consulted on the design of the form and the
appropriateness of the evaluation criteria?

