Gech Final Thesis Report
BAHIR DAR UNIVERSITY
College of Business and Economics
Department of Management
Thesis Approval Page
The thesis entitled “Effectiveness of Instructors’ Performance Appraisal Process: A Case of
College of Business and Economics, Bahir Dar University” by Mr. Getachew Mekonnen
BOARD OF EXAMINERS
Declaration
I declare that this thesis entitled “Effectiveness of Instructors’ Performance Appraisal Process:
A Case of College of Business and Economics, Bahir Dar University” is the outcome of my own
effort and study, and that all sources of materials used for the study have been duly
acknowledged. I have produced it independently except for the guidance of my advisor.
This study has not been submitted for any degree in this University or any other university. It is
offered in partial fulfillment of the requirements of the degree of Master of Business
Administration (MBA).
Signature____________________________
Date_______________________________
Signature__________________________
CERTIFICATION
This is to certify that this thesis entitled “Effectiveness of Instructors’ Performance Appraisal
Process: A Case of College of Business and Economics, Bahir Dar University”, submitted in
partial fulfillment of the requirements for the award of the degree of Master of Business
Administration to the College of Business and Economics, Bahir Dar University, is a record of
authentic work carried out by him under our guidance. The matter embodied in this thesis has not
been submitted earlier for the award of any degree or diploma to the best of our knowledge and
belief.
Department of Management
Signature ………………..…..
Date …………………………
ABSTRACT
Bahir Dar University has been implementing an instructors’ performance appraisal process
whereby peers, students and heads of departments evaluate instructors’ performance. However,
to the best knowledge of the researcher, no systematic study has been conducted to evaluate the
effectiveness of the instructors’ performance appraisal process in the university. Therefore, the
overriding objective of this study is to evaluate the effectiveness of the instructors’ performance
appraisal process in the College of Business and Economics (CoBE), Bahir Dar University. In
order to achieve this objective, the study focuses on factors in the appraisal process itself,
including practices of appraisers, qualities of evaluation criteria, clarity of purpose, and
characteristics of the performance feedback system. The study employed a cross-sectional
survey design. Although 39 semi-structured questionnaires were distributed to instructors and
232 structured questionnaires to students, only 35 (89.74%) and 221 (95.26%) were returned and
analyzed from the former and the latter, respectively. Sample respondents were selected using
proportionate stratified random sampling. Moreover, semi-structured interviews with heads of
departments were conducted to supplement the data collected through questionnaires. Data
collected through interviews were analyzed qualitatively, whereas data collected through
questionnaires were analyzed quantitatively using descriptive statistics with the help of SPSS
version 21. The results of the study indicated that the instructors’ performance appraisal process
is ineffective because of poor qualities of the evaluation criteria, biased practices of appraisers,
an ineffective performance feedback system, appraisers’ (students’) lack of awareness of the
appraisal purpose, and the limited attention given to formative purposes of the appraisal.
Finally, to enhance the effectiveness of the appraisal, the researcher recommended that the
university: redesign the evaluation criteria in consultation with instructors; train appraisers and
appraisees; make the feedback frequent, specific, timely and consistent; and focus on formative
purposes.
Key Words: Appraisees, Appraisers, Appraisal Criteria, Effectiveness of Performance Appraisal, Bahir Dar University, Performance Feedback, Peer
Acknowledgement
First of all, I would like to thank the almighty God for His indescribable gift; without His help
this research could not have been realized. I am grateful to my advisor, Anteneh Eshetu (PhD
Fellow), for his committed and motivating guidance in successfully completing this research
paper.
I would like to express my gratitude to my dear friend Deacon Worku Kassaw, with whom I
share all the ups and downs of life, and to acknowledge his concern for my education and the
initiative he took to send me to school, from the initial idea to every consultation in life.
I would also like to express my indebtedness to all the staff of Bahir Dar Hotel No. 2 for their
patience while I was using the Wi-Fi. My sincere regards also go to Mr. Zewdu Lake (Lecturer)
for his help in data gathering and his suggestions in the course of my study.
I would like to express my gratitude to my classmates Fatuma Beyan, Samueal Negsh, Yaskebral
Tigab, Mandefro Tagele, Kassa Yimam (“Fish”) and Getu Yayu, who assisted me in different
ways during the course work.
I would also like to thank all the staff of the Department of Management of Bahir Dar University
who took part in educating me, and all department heads of CoBE for their valuable information
during the interviews.
Last but not least, I would like to thank all respondents (instructors and students) who furnished
me with the necessary information used for the successful accomplishment of this study.
Getachew Mekonnen
June, 2015
List of Acronyms
HOD- Head of Department
Table of Contents
2.3 Conceptual Framework ................................................................................................................ 27
CHAPTER THREE .................................................................................................................................... 30
METHODOLOGY OF THE STUDY ............................................................................................................. 30
3.1 Research Design........................................................................................................................... 30
3.2 Data Type and Source .................................................................................................................. 30
3.3 Sampling Design and Procedure ................................................................................................... 30
3.4 Methods of Data Collection and Instrumentation ......................................................................... 32
3.5 Data Processing and Methods of Data Analysis ............................................................................ 34
CHAPTER FOUR ..................................................................................................................................... 35
RESULTS AND DISCUSSION .................................................................................................................... 35
4.1 Demographic Profile of Respondents (Instructors) ....................................................................... 35
4.2 Demographic Profile of Respondents (Students) .......................................................................... 38
4.3 Qualities of Appraisal Criteria....................................................................................................... 40
4.3.1 Instructors Awareness of Their Appraisal Criteria .................................................................. 41
4.3.2 Relevance, Reliability and Realism of Appraisal Criteria ....................................................... 44
4.3.3 Appraisal Criteria as Measure of Student Learning ................................................................ 47
4.3.4 Participation of Instructors in Design of Appraisal Criteria ..................................................... 48
4.4 Practices of Appraisers in Instructors’ Performance Appraisal ...................................................... 50
4.4.1 Students’ Practice in Appraisal: Students Self-Report Vs Instructors’ Perception ................... 50
4.4.2 Instructors’ Perception of Practices of Peers ......................................................................... 70
4.4.3 Instructors’ Perception of Practices of Heads of Departments ............................................... 76
4.5 Characteristics of the Performance Feedback System .................................................................. 82
4.5.1 Existence of Official Performance Feedback after Appraisal................................................... 83
4.5.2 Existence of Continuous Discussion on Instructors Performance ........................................... 84
4.5.3 Specificity of Performance Feedback ..................................................................................... 85
4.5.4 Timeliness of Performance Feedback .................................................................................... 87
4.5.5 Acceptance of Performance Feedback by Instructors ............................................................ 89
4.6 Clarity of Purpose of Instructors’ Performance Appraisal ............................................................. 91
CHAPTER FIVE........................................................................................................................................ 96
CONCLUSION AND RECOMMENDATION ................................................................................................ 96
5.1 Conclusions.................................................................................................................................. 96
5.2 Recommendations ....................................................................................................................... 99
Area for Further Research.................................................................................................................... 102
BIBLIOGRAPHY ..................................................................................................................... 103
Appendices.......................................................................................................................................... 110
Questionnaire for instructors ........................................................................................................... 111
Questionnaire for students .............................................................................................................. 117
CHAPTER ONE
INTRODUCTION
1.1 Background of the Study
The role of human resources has become more and more vital, covering personnel-related areas
such as job design, resource planning, performance appraisal systems, recruitment, selection,
compensation and employee relations (Derven, 1990). Among these functions, one of the most
critical ones, which can bring global success, is performance appraisal (Marquardt, 2004). An
organization’s performance management system helps it to meet its short- and long-term goals
and objectives by helping management and employees do their jobs more efficiently and
effectively, and performance appraisal is one part of this system (Bacal, 1999).
Performance appraisal refers to the activity used to determine the extent to which an employee
performs work effectively. In essence, a formal performance appraisal is a system established by
the organization to regularly and systematically evaluate employees’ performance (Ivancevich,
2004). Performance appraisal is a formal system of periodic review and evaluation of an
individual’s job performance (Mondy and Noe, 1990). In addition, Nzuve (2007) defines
performance appraisal as a means of evaluating employees’ work performance over a given
period of time. Performance appraisal is a tool that provides management with valuable
information regarding the quality of the human resources the organization possesses, which may
serve as a basis for important human resource decisions that may result in motivation and/or
demotivation of the employees.
The process of performance evaluation begins with the establishment of performance standards,
followed by communicating the standards to the employees, who, if left to themselves, would
find it difficult to guess what is expected of them. This is followed by measuring actual
performance, comparing the actual performance to the performance standards set, discussing the
appraisal outcome with the employee and, if necessary, initiating corrective action
(Mamoria, 1995).
Kavanagh, Brown and Benson (2007) asserted that in the performance appraisal process, it is
likely that the evaluation is subjectively biased by the rater’s emotional state; managers may
apply variable codes and standards to different employees, which results in inconsistent, biased,
invalid and unacceptable appraisals.
Education is an investment in development, and poor study methods should not compromise the
mandate of higher education institutions to generate, preserve and disseminate knowledge and
produce high-quality graduates (Mutsotso and Abenga, 2010). Quality of education in
universities cannot be achieved without consistent appraisal and improvement of instructors’
performance (Danial, 2011).
The academic staff of a higher education institution is a key resource for the institution’s
success. To ensure the quality of education, the university has been implementing, among other
things, instructors’ performance appraisal. Teaching performance is inferred from students’
performance. To this end, the performance of instructors has been evaluated semi-annually
(usually at the end of each semester). The university has designed a performance appraisal in
which peers, students, and department heads are the appraisers of the instructors’ performance.
However, in spite of implementing such an appraisal, little attempt has been made to evaluate its
effectiveness. Hence, this study was conducted to evaluate the effectiveness of the instructors’
performance appraisal process currently being implemented at Bahir Dar University.
In higher education institutions, instructors constitute a particular group of knowledge-based
workers whose commitment plays a pivotal role in the successful operation of their institutions.
It is the responsibility of managers in higher education to design and implement performance
appraisals that both motivate instructors and align their efforts with organizational objectives
(Simmons and Iles, 2010). In order to reap benefits from the performance appraisal process, it is
imperative to develop it in an effective manner. In spite of the accolades for effective
performance appraisal, ineffective appraisal can bring about various problems such as
diminished employee morale, low employee productivity, and reduction of employees’
enthusiasm and support for the organization (Islam and Rasad, 2005). Steiner and Rain (1989)
reported that the order in which good and poor performance was observed affected performance
ratings, and that raters biased their judgments about inconsistent extreme performance
(unusually good or poor) toward the general impression they already held.
Walsh (2003) posited that finding a commonly accepted approach to evaluating the effectiveness
of a performance appraisal based on a set of well-defined variables is difficult. The author also
noted that identifying and organizing the most important variables in performance appraisal has
proved to be a challenging task for researchers and practitioners. However, different scholars
have attempted to evaluate the effectiveness of performance appraisal using different variables.
For instance, according to Dobbins et al. (1990), Monyatsi et al. (2006) and Nurse (2005),
performance appraisal is effective if it produces outcomes such as increased motivation,
reduced employee turnover, improved employee performance, a feeling of equity among
employees, enhanced working relationships and reduced employee stress.
Similarly, Ishaq et al. (2009) stated that common outcomes of an effective performance
appraisal are employees’ learning about themselves, employees’ knowledge about how they are
doing, and employees’ learning about “what management values”. Therefore, an astute reader
can note that the aforementioned studies emphasized the appraisal’s behavioral outcomes in
their attempt to evaluate the effectiveness of the performance appraisal process.
On the other hand, based on the assumption that an effective performance appraisal process
would result in those desirable behavioral outcomes (such as increased employee motivation,
reduced employee turnover, etc.), many studies also focus on key factors in the process itself to
evaluate the effectiveness of the performance appraisal process. For instance, studies (Ellett et
al., 1996; Kyriakides, 2006; Monyatsi et al., 2006 and Danial, 2011) emphasized four factors,
namely the sources for collecting relevant data, the appraisal purpose, the appraisal criteria and
the feedback system, as the main aspects that need to be considered in developing an effective
instructors’ performance appraisal. Theoretically, a plethora of specific factors influences the
effectiveness of performance appraisal. However, the empirical evidence concerning the
effectiveness of performance appraisal in academic institutions is scant, despite the extensive
research conducted in other settings.
According to studies (Roberts, 2003 and Monyatsi et al., 2006), the effectiveness of a
performance appraisal process is particularly dependent on the perception that the users (both
the appraisers and the appraisees) hold about the appraisal process. To this end, the researcher
conducted a preliminary survey in Bahir Dar University and identified that instructors were
anxious about the appraisal criteria, the appraisers (especially students) and the performance
feedback system. Besides the paucity of research in the field, instructors’ frustration with the
appraisal process was a motivating force for the researcher to conduct a comprehensive study on
the effectiveness of the instructors’ performance appraisal process. Moreover, though the
university has been implementing performance appraisal, to the best knowledge of the
researcher, no systematic study has attempted to evaluate the effectiveness of the appraisal
process. Hence, this study is intended to evaluate the effectiveness of the instructors’
performance appraisal process, a case of the College of Business and Economics, Bahir Dar
University.
5. Is there a relationship between instructors’ work experience and instructors’ perception of
whether the current appraisal criteria were prepared in consultation with instructors?
6. Is there a relationship between students’ practice of scoring for physically attractive
instructors and instructors’ perception of students’ rating?
7. Is there a relationship between students’ practice of scoring for funny instructors and
instructors’ perception of students’ rating based on funniness?
8. Is there a relationship between students’ practice of rating based on a previous grade
awarded by the instructor and instructors’ perception of students’ rating?
9. Is there a relationship between students’ practice of rating based on a good grade expected
from the instructor and instructors’ perception of students’ rating?
10. Is there a relationship between students’ practice of rating based on the number of
assignments given by the instructor and instructors’ perception of students’ rating?
11. Is there a relationship between students’ practice of rating based on the easiness of exams
prepared by the instructor and instructors’ perception of students’ rating?
12. Is there a relationship between students’ practice of rating based on a single negative
experience with the instructor and instructors’ perception of students’ rating?
13. Is there a relationship between students’ practice of rating by contrasting the performance
of an instructor against that of other instructors and instructors’ perception of students’
rating?
14. Is there a relationship between students’ practice of rating by comparing instructors’
performance with a given standard/appraisal criteria and instructors’ perception of
students’ rating?
15. Is there a relationship between students’ practice of rating with enough awareness of the
purpose of the evaluation criteria and instructors’ perception of students’ rating?
16. Is there a relationship between students’ practice of rating by properly reading each
appraisal criterion and instructors’ perception of students’ rating?
17. Is there a relationship between students’ practice of rating after having enough training to
evaluate instructors’ performance and instructors’ perception of students’ rating?
1.4 Objectives of the Study
coupled with the unmanageable population size (students and instructors), forced the study to
focus on the College of Business and Economics.
Conceptually, the study is confined to assessing the effectiveness of the appraisal process in
terms of the aforementioned four factors in the process itself. Evaluating the effectiveness of the
appraisal process based on its behavioral outcomes (e.g. improved performance, motivation,
satisfaction, absenteeism, turnover, etc.) is not the focus of this study. The rationale for
emphasizing the appraisal process is the assumption that the effectiveness of those four factors
in the appraisal process would result in desirable behavioral outcomes among instructors.
Therefore, studying the effectiveness of the appraisal process was deemed a prerequisite for
studying the outcomes of the appraisal. Methodologically, though BDU has instructors,
technical support staff, and administrative staff, the current study was intended to evaluate
solely the instructors’ performance appraisal process.
In addition to the budget and time constraints of including all staff of the university in the scope
of the study, the reason for emphasizing only the instructors’ performance appraisal is the
assumption that instructors’ performance has a direct impact on the quality of education.
Moreover, the study employed a cross-sectional survey design in which the relevant data were
obtained from instructors, department heads, and Regular Graduating Class Students (RGCS).
Performance appraisal is the assessment of how well somebody performs his/her job-relevant
tasks.
Effectiveness of the performance appraisal process refers to the extent to which the appraisal
is based on well-defined appraisal criteria, a clearly stated purpose and effective
performance feedback from impartial appraisers.
Appraisal criteria/standards of appraisal refer to the measures used for evaluating the
performance of instructors.
Qualities of appraisal criteria refer to those characteristics that the appraisal criteria
must possess in order to be sufficiently effective.
Peers are those instructors within the same department who evaluate each
other’s performance.
This thesis has five chapters. The first chapter deals with background information, the statement
of the problem, the objectives of the study, the significance of the study, and the scope and
limitations of the study. The second chapter deals with the review of the literature. The third
chapter covers the methodology of the study: the research design, data requirements, sources of
data, data gathering and methods of data analysis. The fourth chapter presents and discusses the
analysis of the data gathered and the findings. The last chapter, chapter five, draws conclusions
and suggests recommendations.
CHAPTER TWO
LITERATURE REVIEW
2.1 Theoretical Literature
2.1.1 Performance Appraisal: An Overview
Ivancevich (2004) defined performance appraisal as the human resource management activity
that is used to determine the extent to which an employee is performing the job effectively.
According to Swanepoel et al. (2000), performance appraisal is a formal and systematic process
of identifying, observing, measuring, and recording the job-relevant strengths and weaknesses
of employees.
According to Longenecker (1997), performance appraisal is two rather simple words that often
arouse a raft of strong reactions, emotions, and opinions when brought together in the
organizational context of a formal appraisal procedure. Jacobs et al. (1980) defined performance
appraisal as a systematic attempt to distinguish the more efficient workers from the less efficient
workers and to discriminate among the strengths and weaknesses an individual has across
various job elements. According to Rasheed (2011), performance appraisal is a continuous
process through which the performance of employees is identified, measured and improved in
the organization.
Yong (1996) defines performance appraisal as “an evaluation and grading exercise undertaken
by an organization on all its employees either periodically or annually, on the outcomes of
performance based on the job content, job requirement and personal behavior in the position”.
Alo (1999) defines performance appraisal as a process involving a deliberate stock-taking of the
success which an individual or organization has achieved in performing assigned tasks or
meeting set goals over a period of time. This shows that performance appraisal should be
deliberate and not accidental. It calls for a serious approach to knowing how the individual is
doing in performing his or her tasks.
Jacobs et al. (1980) stated that the practicality of performance appraisal is another assumed
aspect, i.e. the time and costs of designing and implementing the process should not exceed the
organizational benefit achieved by appraising performance. Furthermore, Jacobs et al. (1980)
described some methodological assumptions. The first is that equivalence is in place, meaning
that the situations under which all appraisees are rated and the ways different appraisers
actually rate appraisees are comparable. Second, there are uniform interpretations of standard
expectations and forms among appraisers. In addition, the evaluator must have the possibility of
directly observing the appraisees’ performance, plus additional data such as attendance rates.
According to Armstrong and Baron (1998), the factors affecting performance should be
considered when measuring, managing, improving and rewarding performance. These factors
encompass the following: personal factors - the individual's skill, self-confidence, motivation
and dedication; leadership factors - the quality of encouragement, guidance and support from
the managers and team leaders; team factors - the quality of support from colleagues; system
factors - the system of work and facilities provided by the organization; and situational factors -
internal and external environmental pressures and changes. However, in contrast to the above
ideas, the traditional approaches to performance appraisal rely on personal factors, when in fact
performance can be caused partially or entirely by situational or system factors (Mwita, 2000).
Essentially, Deming (1986) stated that the appraisal of individual performance must necessarily
consider not only what individuals have done (the results), but also the circumstances in which
they have had to perform.
Performance appraisals are one of the most important requirements for successful business and
human resource policy (Kressler, 2003). Rewarding and promoting effective performance in
organizations, as well as identifying ineffective performers for developmental programs or
other personnel actions, are essential to effective human resource management (Pulakos, 2003).
The ability to conduct performance appraisals relies on the ability to assess an employee’s
performance in a fair and accurate manner. Evaluating employee performance is a difficult task.
Once the supervisor understands the nature of the job and the sources of information, the
information needs to be collected in a systematic way, provided as feedback, and integrated into
the organization’s performance management process for use in making compensation, job
placement, and training decisions and assignments (London, 2003).
According to Morris (2005), the purposes of performance appraisal can be clustered under the
headings of administrative purposes and developmental purposes. Administrative purposes
entail the use of performance data for personnel decision making, including human resource
planning (HRP); compensation; placement decisions such as promotion, demotion, transfer,
dismissal and retrenchment; and personnel research. Furthermore, Morris (2005) noted that
developmental purposes emphasize developmental functions at the individual as well as the
organizational level. Appraisal can serve individual development purposes through feedback on
employees’ strengths and weaknesses and how to improve future performance, helping career
planning and development, and providing inputs for personal remedial interventions.
Organizational development purposes may include: specifying performance levels and
suggesting overall training needs; providing essential information for affirmative action
programs, job redesign efforts and multi-skilling programs; and promoting effective
communication within the organization through ongoing interaction between appraisers and
appraisees.
Cumming (1972) writes that the overall objective of performance appraisal is to improve the
efficiency of an enterprise by attempting to mobilize the best possible efforts from the
individuals employed in it. Such appraisals achieve four objectives: salary reviews,
development and training of individuals, planning job rotation and assisting in promotions.
Mamoria (1995) and Atiomo (2000) agree that although performance appraisal is usually
thought of in relation to one specific purpose, pay, it can in fact serve a wider range of
objectives: identifying training needs, improving the present performance of employees,
improving potential, improving communication, improving motivation and aiding pay
determination.
Performance appraisal has been considered a most significant and indispensable tool for an
organization, for the information it provides is highly useful in making decisions regarding
various personnel aspects such as promotions and merit increases. Performance measures also
link information gathering and decision-making processes, which provides a basis for judging
the effectiveness of personnel sub-divisions such as recruiting, selection, training and
compensation. If valid performance data are available (timely, accurate, objective, standardized
and relevant), management can maintain consistent promotion and compensation policies
throughout the total system (Burack and Smith, 1977).
Performance appraisal also has other objectives, which McGregor (1957) says include:
It provides systematic judgment to the organization to back up salary increases.
It is a means of telling a subordinate how he is doing and suggesting needed changes in
his behavior, attitude, skills or job knowledge. It lets him know where he stands with
the boss.
It is used as a basis for coaching and counseling of the individual by the superior.
The performance appraisal must be effective in order for its results to be used for developmental
and/or administrative purposes. Developing an appraisal process that accurately reflects
employee performance is not an easy task. Performance appraisal processes are not generic or
easily passed from one organization to another; their design and administration must be
tailor-made to match employee and organizational features and qualities (Boice and Kleiner,
1997).
is expected of them, and the yardsticks by which their performance and results will be evaluated
(Khan 2007).
According to Ivancevich (2004), a job description is a written description of what the job
entails, and it is important for the organization to have thorough, accurate, and updated job
descriptions. Appraisal criteria must be based on the job description for the position the
employee holds (Khan, 2007). Meaning, appraisal criteria must have the quality of relevance to
job duties. Relevance is the degree to which a performance measure is related to the actual
output of the job incumbent as logically as possible (Ivancevich, 2004). Relevance may also
refer to the extent to which the performance measure assesses all relevant, and only the relevant,
aspects of performance (Noe et al., 1996). Thus, matching the performance criteria as reflected
in the performance appraisal format to the evaluation task is one way to ensure relevance (Lee,
1985).
Appraisal criteria must also be realistic: realistic appraisal criteria are attainable by any qualified, competent, and fully trained employee who has the authority and resources to achieve the desired result. They should take into account the practical difficulties of the environment in which the employee works. This implies that the performance of employees should not be evaluated against standards that are beyond their control. Moreover, appraisal criteria must possess characteristics such as reliability in order for the performance appraisal to be effective (Ivancevich 2004).
Furthermore, Roberts (2003) noted that the development of reliable, valid, fair and useful appraisal standards is enhanced by employee participation, as employees possess the unique and essential information needed for developing realistic standards.
Appraisers are prone to rating errors, and extensive training is essential for avoiding such errors. The training should provide appraisers with broad opportunities to practice the specified skills, give appraisers feedback on their practice appraisals, and ensure a comprehensive acquaintance with the appropriate behaviors to be observed.
Harris (1988) proposed that an organization must provide training on a regular basis. Training must be frequently updated and cover appraisal aspects such as giving and receiving feedback, personal bias, active listening skills and conflict resolution approaches. If implemented this way, employees are less confused, less disappointed concerning the measures, and more aware of the intentions of performance appraisal. This also means that they will be capable of useful critique and feedback concerning the appraisal process (Elverfeldt, 2005). Generally, sufficient training must be given to appraisers so that they: (1) understand the performance appraisal process; (2) are able to use the appraisal instrument as intended, which encompasses interpreting standards and using scales; and (3) are able to provide effective feedback.
Additionally, the skills of the appraisers must be updated and refreshed on a continual basis. Furthermore, appraisees must also receive some form of appraisal training to introduce them to the appraisal process. To attain their acceptance and support of the appraisal process, appraisees must understand the appraisal process as a whole as well as the behavioral facets and criteria that are utilized to evaluate their performance (Elverfeldt, 2005).
Feedback helps employees discuss their problems in order to improve their future performance (Anjum 2011). Roberts (2003) also stated that without feedback, employees are unable to make adjustments in job performance or receive positive reinforcement for effective job behavior. For feedback to improve an employee's performance it must be timely, specific, and behavioral in nature, and presented by a credible source (Roberts 2003).
A performance appraisal process which provides formal feedback only once a year is likely to be feedback deficient. For the appraisal process to gain maximum effectiveness, there must be continual formal and informal performance feedback (Roberts 2003; Elverfeldt 2005). Dalton (1996) emphasized that the feedback event should be a confidential interaction between a qualified and credible feedback giver and the evaluatee, to avoid denial, venting of emotions, and behavioral and mental disengagement.
In such an atmosphere discrepancies in appraisals can be discussed and the session can be used
as a catalyst to reduce the discrepancies. Another important point regarding performance
feedback is the use of multiple appraisers. Performance appraisal has been criticized for being
ineffective for a variety of reasons such as the potential biases of the appraisers and the potential
subjectivity of ratings (Roberts, 2003). Alexander (2006) stated that in 360 degree feedback
multiple appraisers offer feedback on observed performance as opposed to subjective viewpoints
from a single individual. Multiple appraisers offering similar feedback send a reinforced message to the evaluatee about what is working well and what needs to be improved. Feedback is more difficult to ignore when it is repeatedly offered by multiple sources.
The 360-degree review process is alleged to be superior to traditional forms of appraisal and feedback because it provides a more complete and accurate assessment of employees' competencies, behaviors and performance outcomes (Alexander, 2006). According to Elverfeldt (2006), 360-degree appraisal might be a useful tool for enriching performance appraisal and enhancing its acceptance, but only if the appraiser and the appraised generally perceive the additional sources of feedback as relevant and favorable.
Appraisers tend to remember only an evaluatee's most recent behavior (whether good or bad). Frequent appraisal rectifies such generally unconscious, selective memory. Getting rid of surprises in the appraisal process is also imperative.
Both the appraiser and the evaluatee need to know that there is a performance problem prior to any major performance appraisal period. If the problem is allowed to continue for a longer period, it becomes more difficult to take corrective action. Thus, frequent performance appraisals should eliminate the surprise element and help to modify performance prior to any semiannual/annual review (Boice and Kleiner 1997).
For the appraisal to be accepted, appraisees must understand the appraisal criteria used, possess confidence in the accuracy of performance measurement, and perceive an absence of appraiser bias (Roberts, 2003).
Appraisees must view performance appraisers as accurate, impartial, and open in order for the appraisal to be effective (Monyatsi et al. 2006). Moreover, Roberts (2003) noted that the trust that employees have in the accuracy and fairness of performance appraisal is a vital concern; otherwise there will be a tremendous waste of the time and money spent on development and implementation of the system. The author further noted that employees' involvement in all aspects of the performance appraisal process increases the trust they have in it. Employee involvement allows them to incorporate their voice in the process and generates an atmosphere of cooperation and trust, which minimizes defensive behavior and appraiser-appraisee conflict. The argument is that if employees are confident in the fairness of the appraisal process, they are more likely to accept performance scores, even adverse ones.
Skarlicki and Folger (1997) stated that in any case, if the employees perceive the process as
unfair and not systematic and thorough, it is unlikely that they will accept the outcome of the
appraisal exercise. The appraisal process can become a source of extreme dissatisfaction when
employees believe the process is biased, political, or irrelevant.
Halo/horns effect: - this error refers to a failure to distinguish between various aspects of performance. Halo error occurs when one positive performance aspect causes the evaluator to assign positive ratings to all other aspects of performance. Horns error operates in the opposite direction: one negative aspect leads the evaluator to assign low ratings to all the other aspects of performance. These errors are problems because they prevent making the necessary distinction between strong and weak performance (Noe et al. 1996).
Leniency or harshness error: - Leniency occurs when an evaluator assigns high (lenient) ratings to all appraisees. Appraisers with too lenient ratings are called easy appraisers or “Santa Claus” (Hamman and Holt, 1999). They are mostly found among groups of appraisers who do not want
to put forth the effort to understand the performance standards, or among individuals who have
been appraisers for an extremely long time. There are six major reasons why managers inflate
ratings: (a) to maximize subordinates' merit raises; (b) to avoid hanging 'dirty laundry' in public;
(c) to avoid creating a written record of poor performance; (d) to give a break to an employee
who has shown recent improvement; (e) to avoid confrontation with a difficult employee; and (f)
to promote a problem subordinate “up and out” of the department (Fried et al. 1999)
Many of these reasons can be interpreted as supervisors' attempts to elicit positive reactions from
subordinates, such as increasing their work motivation and performance, as well as increasing
subordinates' trust in, and cooperation with, their supervisors. Harshness occurs when an evaluator provides low ratings for all appraisees. Such an evaluator appears to have excessively high standards, which results in a low mean score, with the distribution of scores skewed toward the low end of the rating scale (Berry, 2003). This kind of evaluator is called a hard evaluator or “ax man” (Hamman
and Holt, 1999). They often have the problem of being strongly biased by one event, thereby
causing their assessments to be extremely harsh.
Central tendency error: - occurs when an evaluator avoids using high or low ratings and assigns average ratings. This type of average rating is almost useless because it fails to discriminate between performers, and it provides little information for making HRM decisions (Ivancevich 2004). According to Hamman and Holt (1999) this error is due to the evaluator's feelings of unease with the assessment criteria and an aversion to making mistakes.
Personal biases: - The contrast effect occurs when an evaluator lets another employee's performance influence the ratings that are provided to someone else (Ivancevich 2004). The similar-to-me error is the tendency to evaluate the evaluatee more positively if the evaluatee is perceived to be similar to the evaluator (Jacobs et al., 1980). Stereotyping means that impressions about an entire group alter the impression about a group member (Rudner, 1992). Perception differences error occurs when the evaluator's viewpoint or past experiences affect how behavior is interpreted (ibid).
2.1.3.2 System Design and Operating Problems
According to Ivancevich (2004) performance appraisal could break down due to poor design.
The design can be blamed if the criteria for appraisal are poor, the technique used is
cumbersome, or the system is more form than substance. If the criteria used focus solely on
activities rather than output (results), or on personality traits rather than performance, the
appraisal may not be well received. According to Boice and Kleiner (1997), organizations need to have a systematic framework to ensure that performance appraisal is “fair” and “consistent”. The authors concluded that designing an effective appraisal system requires a strong commitment from top management. The system should provide a link between employee
performance and organizational goals through individualized objectives and performance
criteria.
Formative and summative decisions are equivalent terms for developmental and administrative decisions respectively, which have been described earlier. Formative decisions use the evidence to improve and shape the quality of teaching. As an individual instructor, one makes formative decisions to plan and revise his/her teaching semester after semester. Similarly, OECD (2009) states that instructors' appraisal for improvement focuses on the provision of feedback useful for the improvement of teaching practices, namely through professional development. It involves helping instructors learn about, reflect on, and improve their practice. On the other hand, summative evaluation uses the evidence to “sum up” instructors' overall performance or status in order to make personnel decisions about their annual merit pay, promotion, and tenure.
Summative decisions are rendered by administrators at various points in time to determine whether instructors have a future, and these decisions have an impact on the quality of professional life. Moreover, summative uses of instructors' performance appraisal focus on holding instructors accountable for their performance by associating it with a range of consequences for their career. They seek to set incentives for instructors to perform at their best, typically entailing performance-based career advancement and/or salaries, bonus pay, or the possibility of sanctions for underperformance (OECD 2009). Though instructors' performance appraisal is widely applied for summative and/or formative purposes in higher educational institutions, its essence in these institutions has been criticized by different research findings. For instance, Stone (1996), cited in Morris (2005), suggested that performance appraisal is not appropriate for academics and is an attack on academic freedom as well as a potential tool to monitor and control staff. The author also noted that performance appraisals in higher education have had limited and confused purposes, and that their contribution to enhanced institutional performance and quality has been minimal.
Danielson and McGreal (2000) proposed a model containing four domains that represent components of an instructor's professional practice. These are planning and preparation, classroom management, instruction, and professional responsibilities. The authors assumed that competencies in these domains can serve as criteria for instructors' performance appraisal.
Moreover, Tigelaar et al. (2004) identified a framework of teaching effectiveness with the following major domains: person as instructor, expert on content knowledge, facilitator of learning processes, organizer, and scholar/lifelong learner. Kyriakides et al. (2006) stated that the goals and tasks assigned to instructors must be clear and specific; the outcomes of instructors' performance must be easily observed; and the standard of evaluation must be clearly stated. Instructors are often expected to accomplish complicated tasks and meet objectives within a predetermined timeframe. Consequently, the resources and support provided constitute important facilitating factors for their work.
Student appraisal is the most widely used technique to measure instructors' competence inside the classroom. The assumption is that students are the direct consumers of the services provided by instructors and are therefore in a good position to evaluate their instructors' performance.
This appraisal covers everything from the most visible teaching habits of instructors in classroom situations to personal attributes, including communication styles, attitudes, and other dispositions observable in an instructor (David and Macayan 2010). Baker (1986) concluded that the validity of students' appraisal was not influenced by the students' expected grade, sex, or academic status.
However, despite the fact that many research findings (Miller 1998; Baker 1986; David and Macayan 2010) support the view that students are in a good position to evaluate a variety of aspects of effective instruction, validity studies of student appraisal yield contradictory findings. Jones (1989) identified that the validity of students' appraisal of instructors' performance is distorted because students often conflate personality characteristics of instructors with teaching competence.
Naftulin et al. (1973) also concluded that an instructor‟s entertainment level influences student
appraisal scores and they call this influence the “Dr. Fox” effect. In their study Naftulin et al.
placed an actor, known as “Dr. Fox”, in a college classroom where he presented a highly
entertaining lecture that included no substantive content. The actor received rave student appraisal scores, which led the researchers to conclude that highly charismatic instructors can seduce students into giving high scores despite learning nothing.
Abrami et al. (1982) suggested that student ratings should not be used in decision making about instructors' promotion and tenure, because charismatic and passionate instructors can receive favorable student ratings regardless of how well they know their subject matter, nor do these characteristics relate to how much their students learn. Some instructors dislike student appraisal of instructors' performance and question the intellectual and personal capacity of students to objectively rate instructors' performance effectiveness (Emery et al. 2003). It is possible that some ratings become assessments of students' satisfaction with, or attitude toward, their instructors instead of assessments of actual instructor performance and effectiveness. David and Macayan (2010) stated that student ratings of an instructor's performance could be based mainly on hidden anger resulting from a recent grade received on an exam or from a single negative experience with an instructor. Studies found that there is an association
between grades and student appraisals of instructors' performance because student appraisal scores are higher in courses where student achievement is higher (Baird 1997; Cohen 1981).
In addition, Onwuegbuzie et al. (2007) stated that factors associated with testing (e.g. difficult
exam and administering sudden quizzes) and grading (e.g. harsh grading, notable number of
assignments and homework) were likely to lower students' appraisal of their instructors.
Gezgin (2011) noted that students are susceptible to cognitive biases such as self-serving bias, recency bias, and serial position bias. The author identified that students attribute success to themselves and blame instructors and external factors (self-serving bias); since recent events are more salient in human memory, students may be biased towards recalling the final weeks of the semester (recency bias); and students' responses on the appraisal form are affected by the order in which the questions are asked (serial position bias).
In light of the continuing debate on students' appraisal of instructors' performance, the focus of this paper was to evaluate the practices of students during appraisal so as to investigate potential biases that may compromise the effectiveness of instructors' performance appraisal. Similarly, the potential biased practices of peers and department heads have been investigated in an effort to determine the effectiveness of the instructors' performance appraisal process.
Another widely used source of appraisal information is peer rating. Peer appraisal is a process in which instructors work collaboratively to assess each other's teaching and to assist one another in the effort to strengthen teaching (Keig and Waggoner 1994).
Those aspects of instructors' performance that students are not in a position to evaluate can be covered by peer appraisal. However, although many instructors feel that they benefit from thoughtful attention to their teaching, others find the peer appraisal process intimidating, meaningless, or both (Carter 2008). Moreover, peer appraisal has a tendency toward bias and unreliability, because it is based on subjective judgments, if peers are not well informed of each other's performance (Berk 2005).
Roberts (1998) found that four out of ten supervisors agreed that employees receive much of the blame for poor performance when in reality it stems from poor management practices.
Elverfeldt (2005) generalized from a literature analysis that the most significant factor in determining the effectiveness of a performance appraisal system is its acceptance by users. Thus, a questionnaire survey was conducted in a target organization to test how users perceive their current performance appraisal system. It was found that factors such as 360-degree appraisal, procedural justice, goal-setting and performance feedback scored relatively high, while performance-based pay received the worst score. The only demographic variable that partly accounted for the variance in opinion about these factors was age.
Peterson's (2000) extensive literature review of over 70 years of empirical research on teacher
evaluation concluded: “Seventy years of empirical research on teacher evaluation shows that
current practices do not improve teachers or accurately tell what happens in classrooms. . . . Well
designed empirical studies depict principals as inaccurate raters both of individual teacher
performance behaviors and of overall teacher merit” (pp. 18–19).
Lortie (1975) found that only 7% of the teachers he interviewed saw judgments by their
organizational superiors as the most appropriate source of information about how well they were
doing. The study concluded that teachers had little direct interest in or respect for the process or
results of evaluation, and most operated independently of them.
Kauchak, Peterson, and Driscoll (1985), in a survey study of teachers in Utah and Florida, found
evaluations based on principal visits to be “perfunctory with little or no effect on actual teaching
practice” (p. 33). One problem identified by the teachers in the study was that evaluations were
too brief and lacked rigour. Teachers also complained that the principal was not knowledgeable
in their grade level or subject area. Finally, teachers in the study felt that the evaluation reports
lacked specifics about how to improve their teaching practice.
Johnson (1990) interviewed 115 teachers and found similar results. Teachers felt that principals rarely offered ideas for improvement. They also felt that the rating forms and items encouraged principals to be picky in their criticisms, almost forcing principals to find something to criticize so that they would look discriminating. However, the main dissatisfaction of teachers in the study was what they saw as a basic lack of competence on the part of administrators to evaluate. This included a lack of self-confidence, expertise, subject matter knowledge, and perspective on what it is really like to be in the classroom.
The American literature on teacher evaluation indicates that neither teachers nor administrators
seem to receive much benefit from the process, despite it consuming large quantities of time and
resulting in considerable stress. The impact on teaching practice appears to be negligible and
often results in negative feelings among teachers as they do not feel that their evaluations are
objective or accurate. Administrators often view teacher evaluations as something they are forced to do rather than something they want to do (Maharaj, 2014).
Performance appraisal process has been categorized into: (1) Establishing job criteria and
appraisal standards; (2) Timing of appraisal; (3) Selection of appraisers and (4) Providing
feedback (Scullen et al., 2003). For this study, the factors that determine the effectiveness of the instructors' performance appraisal process are conceptualized as follows:
Preparing quality appraisal criteria is the first step in the process of performance appraisal (Ivancevich 2004). Moreover, appraisal criteria must possess characteristics such as reliability in order for the performance appraisal to be effective (Ivancevich 2004). Monyatsi et al. (2006) stated that if instructors are not aware or convinced of the purpose of the instructors' performance appraisal, they become anxious and suspicious of the whole process.
Performance feedback is the second factor; it shows the strengths and weaknesses of the employee (Kopelman, 2002).
OECD (2009) states that instructors' appraisal for improvement focuses on the provision of feedback useful for the improvement of teaching practices, namely through professional development. It involves helping instructors learn about, reflect on, and improve their practice.
The third factor this study considers is the purpose of appraisal. According to Monyatsi et al. (2006), a clearly stated purpose is an essential characteristic of effective performance appraisal. Employees are bound to be committed, and this may improve their daily performance, if they understand the purpose of their performance appraisal. An appraisal with an unclear purpose is a meaningless exercise.
The last factor involved in the study is the practices of appraisers. Skarlicki and Folger (1997) stated that, in any case, if employees perceive the process as unfair and not systematic and thorough, it is unlikely that they will accept the outcome of the appraisal exercise. The appraisal process can become a source of extreme dissatisfaction when employees believe the process is biased, political, or irrelevant.
Students' appraisal covers everything from the most visible teaching habits of instructors in classroom situations to personal attributes including communication styles, attitudes, and other dispositions observable in an instructor (David and Macayan 2010).
Conceptual framework
[Figure: The practices of appraisers (student, peer, and head) and the characteristics of performance feedback feed into the effectiveness of the instructors' performance appraisal process.]
CHAPTER THREE
The sample size was determined by Yamane's (1967: 886) simplified formula, assuming a 95% confidence level and P = 0.5:

n = N / (1 + N(e)²)

where n is the sample size, N is the population size, and e is the level of precision (e = 0.05). Accordingly, the sample size for the study was calculated as follows:
n = 841 / (1 + 841(0.05)²) ≈ 271
Since instructors and RGCS responded to two separate questionnaires, the total sample size had to be allocated in proportion to the sizes of the two population sets. Accordingly, 39 instructors and 232 RGCS were included in the sample.
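The Yamane calculation above can be sketched as a short Python function. The figures N = 841 and e = 0.05 come from the text; rounding to the nearest whole respondent is an assumption about how the figure of 271 was obtained.

```python
def yamane_sample_size(population: int, e: float = 0.05) -> int:
    """Yamane (1967) simplified formula: n = N / (1 + N * e^2).

    population: total population size N
    e: level of precision (0.05 for a 95% confidence level with P = 0.5)
    """
    return round(population / (1 + population * e ** 2))

n = yamane_sample_size(841, 0.05)
print(n)  # 271
```

The resulting 271 respondents were then split into 39 instructors and 232 RGCS (39 + 232 = 271), in proportion to the two population sets.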
Proportionate stratified random sampling was utilized to select sample respondents (both instructors and students). First, the total population of instructors and RGCS in the college was stratified into six strata based on the number of departments in the college. To ensure representativeness of the instructor and student populations, the numbers of sample instructors and students were made proportional to the sizes of the instructor and student populations in each stratum (department). The proportional sample size from each stratum was calculated by the following formula:
ni = (n × Ni) / N

Where: ni is the number of sample instructors (or students) in the respective department; Ni is the total number of instructors (or students) in the department; n and N are the sample size and the total population size at college level.
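The proportional allocation ni = n·Ni/N can likewise be sketched in Python. Only the formula comes from the text; the per-department instructor counts below are hypothetical placeholders for illustration, since the actual stratum sizes are not listed in this section.

```python
def proportional_allocation(n, strata):
    """Allocate a total sample n across strata in proportion to Ni / N."""
    N = sum(strata.values())
    return {name: round(n * Ni / N) for name, Ni in strata.items()}

# Hypothetical instructor counts per department (illustration only).
instructors = {"Economics": 25, "Management": 22, "Accounting and Finance": 22,
               "Marketing": 19, "Logistics": 13, "Tourism": 9}
print(proportional_allocation(39, instructors))
```

Note that rounding each stratum's share independently can make the allocations sum to one more or one less than n; a largest-remainder adjustment restores the exact total if needed.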
Table 2: Sampling Students
After the number of sample respondents from each stratum was determined, simple random sampling was applied using SPSS Version 21 to arrive at the individual sample respondents. This sampling technique was used to avoid bias.
The first part of the questionnaire covers demographic characteristics of respondents such as age, year(s) of experience, department, and academic rank. The second part is about the qualities of the appraisal
criteria. Here some ideal characteristics that effective appraisal criteria must possess were
identified and the instructors were asked whether their appraisal criteria possess those identified
characteristics. The questions were in statements form and instructors were asked to express their
agreement/disagreement in the five point Likert scale, where 1=strongly disagree, 2= disagree,
3= neutral, 4= agree, 5=strongly agree. The third part of the questionnaire deals with the appraisers' practices (students, peers and heads) in evaluating instructors' performance. Here again instructors expressed their level of agreement or disagreement on the five point Likert scale. The fourth part is about characteristics of the performance feedback system. The final part addresses the clarity of purpose of the instructors' performance appraisal process. Moreover, the questionnaire for students had two major sections. The first section is about students' demographic profile including age, sex, department, and cumulative grade point average (CGPA). In the second part, students were asked to respond to a series of statements representing
their potential practices in evaluating instructors' performance. A scale reliability test for the major items of the questionnaires was conducted. The reliability of the 62-item scale in the questionnaire completed by instructors was tested using Cronbach's alpha, and the overall reliability coefficient was 0.716. Similarly, the reliability of the 12-item scale in the questionnaire completed by students was tested, and the overall reliability coefficient was 0.736. George and Mallery (2003) provide the following guidelines for interpreting Cronbach's alpha:
a. > 0.90 = Excellent; b. 0.80 - 0.89 = Good; c. 0.70 - 0.79 = Acceptable; d. 0.60 - 0.69 = Questionable; e. 0.50 - 0.59 = Poor; f. < 0.50 = Unacceptable.
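A minimal sketch of the reliability computation and the George and Mallery interpretation bands follows. The formula is the standard Cronbach's alpha; the thesis's own item-level responses are not available here, so only the reported coefficients (0.716 and 0.736) are classified.

```python
from statistics import pvariance

def cronbach_alpha(responses):
    """Cronbach's alpha for a respondents-by-items score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(responses[0])                 # number of items
    items = list(zip(*responses))         # one column of scores per item
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_var / total_var)

def interpret(alpha):
    """George and Mallery (2003) rule-of-thumb bands."""
    bands = [(0.9, "Excellent"), (0.8, "Good"), (0.7, "Acceptable"),
             (0.6, "Questionable"), (0.5, "Poor")]
    for cutoff, label in bands:
        if alpha >= cutoff:
            return label
    return "Unacceptable"

print(interpret(0.716))  # Acceptable
print(interpret(0.736))  # Acceptable
```

Both reported coefficients thus fall in the "Acceptable" band of the guidelines above.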
The interview schedule was designed for semi-structured interviews with all heads of departments within the college. However, during the data collection period only three of the six department heads were interviewed, because the others were either reluctant or unavailable. Before starting each interview, the researcher introduced himself and explained the purpose of the study to the interviewee. During the interview sessions, the researcher jotted down all important points on a notepad and organized them for analysis.
3.5 Data Processing and Methods of Data Analysis
The data collected using the semi-structured questionnaire were edited, coded and analyzed with great care. Both in-house and field editing were undertaken to detect errors committed by respondents while completing the questionnaires. The possible alternatives in the questionnaire were coded in advance of administering it to sample respondents. That is, on the five point scales the possible responses were pre-coded (for example, 1= strongly disagree, 2= disagree, 3= neutral, 4= agree, and 5=strongly agree; 1=very low, 2=low, 3=medium, 4=high and 5=very high) to facilitate quick answering of the questions and to simplify data entry into computer software for analysis.
Qualitative data analysis was employed for the data collected through personal interviews. Data collected through questionnaires were analyzed quantitatively using statistical tools such as tabulation, bar charts, chi-square tests and pie charts to present the data. In addition, descriptive statistics such as means, percentages, and standard deviations were used to analyze and interpret the data. For data analyzed using mean scores, since a five point Likert scale was used, a mean score of 3.0 was considered the midpoint (neutral), while mean scores greater than 3.0 and less than 3.0 were interpreted as agreement and disagreement, respectively. Data were analyzed with the help of SPSS version 21. After the data had been presented and analyzed, conclusions and recommendations were drawn from the findings.
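The mean-score decision rule (3.0 as the neutral midpoint, above 3.0 read as agreement, below as disagreement) can be sketched as follows; the response values are invented for illustration.

```python
from statistics import mean, stdev

def likert_verdict(scores, midpoint=3.0):
    """Classify a set of five-point Likert responses by their mean score."""
    m = mean(scores)
    if m > midpoint:
        return m, "agreement"
    if m < midpoint:
        return m, "disagreement"
    return m, "neutral"

responses = [4, 5, 3, 4, 2, 4, 3, 5]  # hypothetical item responses
m, verdict = likert_verdict(responses)
print(f"mean={m:.2f}, sd={stdev(responses):.2f} -> {verdict}")
```

The standard deviation is reported alongside the mean, as in the thesis, to show how dispersed the responses are around the verdict-carrying mean.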
CHAPTER FOUR
Table 3 above indicates that male instructors account for 91.4% of the sample, while females account for only 8.6%. The table also shows the distribution of male and female instructors across departments: 0.0%, 28.6%, 0.0%, 0.0%, 25.0% and 0.0% of instructors were female in the departments of Economics, Management, Accounting and Finance, Marketing Management, Logistics and Supply Chain, and Tourism and Hotel Management respectively, while the corresponding shares of male instructors were 100.0%, 71.4%, 100.0%, 100.0%, 75.0% and 100.0%. Generally, it can also be seen from the table that 22.9% of the sampled instructors were from Economics, 20.0% from Management, 20.0% from Accounting and Finance, 17.1% from Marketing Management, 11.4% from Logistics and Supply Chain, and 8.6% from Tourism and Hotel Management.
Table 4: Academic rank of instructors

S. No   Academic rank          Frequency   Percent
1       Assistant Lecturer         3          8.6
2       Lecturer                  24         68.6
3       Assistant Professor        8         22.9
4       Total                     35        100.0
The above table 4 clearly indicates that 68.6% of instructors hold the academic rank of lecturer,
while 22.9% hold the rank of assistant professor and the remaining 8.6% of the total sample are
assistant lecturers. Generally, it can be said that the majority of instructors are ranked as
lecturers.
Figure 2: Instructors year(s) of experience
Figure 2 and table 5 show that the majority of instructors (54.3%) stated that they had 7-10
years of experience. In addition, 20%, 17.1% and 8.6% stated that they had 2-3 years, 4-6 years
and more than 11 years of experience, respectively. From this figure, one can easily deduce that
the university has a shortage of more experienced instructors (those with more than 11 years of
experience).
4.2 Demographic Profile of Respondents (Students)
Whilst many demographic aspects of students are there, this paper focuses on three factors such
as sex, cumulative grade point average (CGPA) and department, that were assumed to have
relevance for the study.
Table 6: department of students * sex of students Cross tabulation
As can easily be seen from table 6, 24% of students are from Management, 22.6% from
Economics, 32.6% from Accounting and Finance, 8.6% from Marketing Management, 6.8%
from Logistics and 5.4% from Tourism and Hotel Management. The table also shows that more
than half of the female students (12.7% of the sample) belong to the Management department,
compared with 11.3% for male students. In conclusion, it can be said that the Accounting and
Finance department (32.6%) is the most populous of all the departments.
Table 7: CGPA of Students
S. No   CGPA interval   Frequency   Percent
1       2.00-2.74          85        38.5
2       2.75-3.24          54        24.4
3       3.25-3.74          47        21.3
4       3.75-3.94          28        12.7
5       3.95-4.00           7         3.2
Generally speaking, the majority of students (38.5%) stated that their CGPA falls within the
interval 2.00-2.74, whereas only a few students (3.2%) stated that their CGPA falls in the
interval of very great distinction (3.95-4.00). The analysis shows that as CGPA increases, the
number of students achieving it decreases.
4.3.1 Instructors Awareness of Their Appraisal Criteria
According to Khan (2007) performance appraisal should be based on job description for the
position employee holds. This is vital in helping every employees of the organization know
exactly what is expected of them, and the yardsticks by which their performance will be
evaluated. To this end, instructors were asked to express their agreement/disagreement on the
statement “Up on employment at Bahir Dar University, Every instructor was given job
description specifying his/her duties”.
Table 8 shows that 25.7% strongly disagreed, 34.3% disagreed, 20% neither agreed nor
disagreed, 14.3% agreed and 5.7% strongly agreed with the above statement. This means that
the majority (60%) of instructors do not believe that a job description is given to instructors
upon employment. The implication is that, in the absence of a job description specifying the
duties and responsibilities of individual instructors, instructors may hardly know what
performance is expected of them.
Figure 4: Instructors Formal Training about Their Appraisal Criteria
According to figure 4, the majority of instructors, 25 (71.4%), did not support the statement
"upon employment at Bahir Dar University, every instructor was formally trained about
appraisal criteria". More specifically, 9 (25.7%) strongly disagreed and 16 (45.7%) disagreed
with the statement. On the other hand, only a minority, 4 (11.4%), supported the statement, and
the remaining 17.1% neither agreed nor disagreed. Generally, the analysis shows that, because a
job description and formal training regarding the appraisal criteria are lacking, instructors
(especially less experienced ones) have less chance of knowing the standard against which their
performance is evaluated.
Furthermore, instructors were also asked how they managed to keep their performance up to
standard in the university. The table below summarizes the responses of instructors by years of
experience.
Table 9: Ways of keeping performance up to standard by instructors' years of experience

Work              By asking a senior   By doing what   By reading criteria on   I found it difficult   Total
experience        instructor in the    I feel          the instructors'         to know what
                  department           appropriate     evaluation form          is right
2-3 years
  Count                  0                   7                 0                       0                  7
  % within experience    0.0%             100.0%              0.0%                    0.0%             100.0%
  % of Total             0.0%              20.0%              0.0%                    0.0%              20.0%
4-6 years
  Count                  3                   0                 0                       3                  6
  % within experience   50.0%               0.0%              0.0%                   50.0%             100.0%
  % of Total             8.6%               0.0%              0.0%                    8.6%              17.1%
7-10 years
  Count                  6                   3                 3                       7                 19
  % within experience   31.6%              15.8%             15.8%                   36.8%             100.0%
  % of Total            17.1%               8.6%              8.6%                   20.0%              54.3%
>=11 years
  Count                  0                   0                 3                       0                  3
  % within experience    0.0%               0.0%            100.0%                    0.0%             100.0%
  % of Total             0.0%               0.0%              8.6%                    0.0%               8.6%
Total
  Count                  9                  10                 6                      10                 35
  % within experience   25.7%              28.6%             17.1%                   28.6%             100.0%
  % of Total            25.7%              28.6%             17.1%                   28.6%             100.0%
Source: Own Survey Result, 2015
Table 9 shows that 28.6% of instructors do "what they feel is appropriate" in their attempt to
keep their performance up to standard. The same share, 28.6% of instructors, faced difficulty in
knowing what is right to keep their performance up to standard. Around 25.7% asserted that they
ask a senior instructor in their department, while 17.1% read the appraisal criteria on their
appraisal form in order to keep their performance up to standard.
Furthermore, a closer look at table 9 shows that for instructors with more than eleven years of
experience, reading the appraisal criteria on the appraisal form was the dominant (100%)
mechanism for keeping their performance up to standard. For instructors with fewer than eleven
years of experience, however, doing "what they feel is appropriate", facing difficulty in knowing
what is right, and asking a senior instructor in the department were the dominant mechanisms.
Overall, the analysis shows that "reading the appraisal criteria on the appraisal form" is the least
commonly used mechanism among instructors. This means that instructors give little emphasis
to the appraisal criteria as a reference to guide their performance. In the subsequent section of
the paper, an attempt is made to discuss instructors' perception of various characteristics of the
appraisal criteria; these discussions may also provide a clue as to why the appraisal criteria
receive little attention from instructors. Additionally, table 10 below shows the relationship
between work experience and instructors' perception of whether the current appraisal criteria
were prepared in consultation with instructors.
Table 10: The relationship between instructors' work experience and instructors' perception of
current appraisal criteria preparation in consultation with instructors
Chi-Square Tests
                               Value     df   Asymp. Sig.   Exact Sig.   Exact Sig.   Point
                                              (2-sided)     (2-sided)    (1-sided)    Probability
Pearson Chi-Square             21.527a    9   .011          .007**
Likelihood Ratio               26.118     9   .002          .003***
Fisher's Exact Test            16.794                       .009**
Linear-by-Linear Association    2.454b    1   .117          .124         .070         .020
N of Valid Cases               35
a. 15 cells (93.8%) have expected count less than 5. The minimum expected count is .60.
b. The standardized statistic is -1.567.
The chi-square test suggests a strong relationship between work experience and instructors'
perception of whether the current appraisal criteria were prepared in consultation with
instructors. However, because the test's assumption is violated (the minimum expected cell
count is below 5), Fisher's exact test is used instead: χ² = 16.794, p = .009. This result supports
the mean score values, indicating a strong relationship between instructors' work experience and
their perception of appraisal criteria preparation in consultation with instructors.
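The decision rule applied here and in later tables (falling back to an exact test when expected cell counts are too small) can be sketched as follows. The counts are hypothetical and scipy stands in for SPSS; note that scipy's `fisher_exact` handles 2x2 tables only, so SPSS's extension of the exact test to larger tables has no direct scipy equivalent:

```python
# Hedged sketch with hypothetical counts (not the thesis data): run a
# chi-square test of independence, and when more than 20% of cells have
# expected counts below 5, prefer Fisher's exact test (2x2 only in scipy).
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

table = np.array([[12, 3],
                  [4, 16]])  # hypothetical 2x2 cross tabulation

chi2, p, dof, expected = chi2_contingency(table)
share_low = (expected < 5).mean()  # share of cells with expected count < 5

# SPSS flags the asymptotic result when >20% of cells have low expected counts
if share_low > 0.20 and table.shape == (2, 2):
    _, p = fisher_exact(table)  # use the exact p-value instead

print(p < 0.05)  # with these counts the association is significant
```

With these particular counts no expected cell falls below 5, so the asymptotic chi-square p-value is retained; the fallback branch illustrates the rule the text applies to tables 10, 22 and others.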
Based on this definition, two aspects of the relevance of appraisal criteria are identified: the
phrases "all relevant" and "only relevant" refer to the completeness of the criteria and to the
relevance of every criterion to job duties, respectively. In line with this, table 11 displays the
responses of instructors regarding whether their appraisal criteria satisfy these aspects of
relevance.
                                        Responses
Description                       Strongly    Disagree   Neutral    Agree     Strongly   Total
                                  disagree                                    agree
                                  N    %      N    %     N    %     N    %    N    %     N    %
All appraisal criteria are
relevant to tasks in my job       9   25.7   10   28.6   6   17.1   9   25.7  1   2.9   35   100
Table 11 reveals that when instructors were asked to express their opinion on the statement "All
appraisal criteria are relevant to tasks in my job", 54.3% (28.6% disagree plus 25.7% strongly
disagree) opposed the statement. On the other hand, 28.6% of instructors were for the statement
(2.9% strongly agree plus 25.7% agree), and the remaining 17.1% neither agreed nor disagreed.
Furthermore, responding to an open-ended question, one assistant professor stated: "The
appraisal form contains relevant and irrelevant factors. For example, out of 25-30 criteria on the
teaching appraisal form (completed by students), the relevant criteria are not more than eight.
Therefore, it needs revision to select the best yardsticks that evaluate instructors' work-related
performance, and the form needs customization for each department."
Table 11 also indicates the instructors' responses to the statement "All my duties are measured
in appraisal criteria". In other words, instructors were asked whether the appraisal criteria are
comprehensive enough to measure all relevant tasks of their job. Accordingly, 71.4% of
instructors were against the statement (25.7% strongly disagree and 45.7% disagree). On the
other hand, around 20% of instructors believe that their appraisal criteria measure all their
job-relevant duties, and the remaining 8.6% asserted that they are neither for nor against the
statement.
Generally, table 11 reveals that the majority of instructors do not believe that all their appraisal
criteria are relevant to their job, and they also believe that some job duties that should have been
measured by the appraisal criteria were not included in the current appraisal form. Therefore, it
can be concluded that instructors have reservations about the relevance and completeness of
their appraisal criteria.
Table 12: Appraisal criteria and practical difficulties of the work environment

                                        Responses
Description                       Strongly    Disagree   Neutral    Agree     Strongly   Total
                                  disagree                                    agree
                                  N    %      N    %     N    %     N    %    N    %     N    %
Current appraisal criteria take
into consideration the practical
difficulties in the environment
in which I work                   9   25.7   12   34.3   7   20     7   20    -    -    35   100
Instructors were asked to express their opinion on the statement "Current appraisal criteria take
into consideration the practical difficulties in the environment in which I work". Practical
difficulty in this context refers to the resources and support that instructors need in order to
successfully meet the appraisal criteria. According to table 12, while 60% (25.7% strongly
disagree plus 34.3% disagree) rejected the statement, only 20% supported it; the remaining 20%
neither agreed nor disagreed. Generally, from this description it is possible to say that the
majority of instructors believe that their current appraisal criteria do not take into account the
practical difficulties of the environment in which they work. According to Kyriakides et al.
(2006), instructors are often expected to accomplish complicated tasks and meet objectives
within a predetermined timeframe; consequently, the resources and support provided to
instructors are important facilitating factors for their work. The implication is that, instead of
simply using the appraisal criteria to evaluate instructors' performance, the university should
reconsider the kind, quality and quantity of resources available to all instructors.
Evancevich (2004) defined the reliability of appraisal criteria as the consistency of measurement.
To this end, the table below reveals instructors' perception of the reliability of their appraisal
criteria.
Looking at table 13, one can notice that most instructors (80%), including 37.1% who strongly
disagree and 42.9% who disagree, feel that their appraisal criteria are not reliable, while the
remaining 20% stated that their appraisal criteria are reliable. Going by the voice of the majority
of instructors, it is possible to conclude that the reliability of the appraisal criteria needs
improvement.
Table 14: Appraisal Criteria as Measure of Student Learning
Table 15: Instructors year(s) of experience * Appraisal criteria are prepared in consultation
with instructors Cross tabulation
4.4 Practices of Appraisers in Instructors’ Performance Appraisal
Using multiple appraisers is often viewed as a key to successful practice; at least more than one
person should be involved in judging instructors' performance (Stronge and Turker 2003).
According to Danielson and McGreal (2000), the 360-degree appraisal system incorporates the
participation of many kinds of appraisers, based on the assumption that instructors' competence
may be seen from several different perspectives and should be exemplary (or at least adequate)
from all those angles. Elverfeldt (2006) noted that 360-degree appraisal might be a useful tool
for enriching performance appraisal and enhancing its acceptance, but only if the appraisers and
the appraisees generally perceive the additional sources of feedback as relevant and favorable.
Accordingly, scholars have recently begun to argue that employee perceptions are a vital
determinant of the effectiveness of the performance appraisal process (Dargham 2007). BDU
also uses a 360-degree feedback system in which students, peers and heads of departments
appraise instructors' performance. Hence, this section examines the practices of appraisers in
order to identify biases that may compromise the effectiveness of the instructors' performance
appraisal process.
Table 16: Comparison of Students Self-Report of Their Own Practice and Instructors
Perception of Students Practice
Figure 5: Rating based on Physical Attractiveness of an Instructor
The above figure 5 indicates that the majority of students, 137 (62%), comprising 26.7% who
strongly disagreed and 35.3% who disagreed, opposed the statement "I provide favorable score
for physically attractive instructor". On the other hand, 71 students (32.1%; 28.5% agree and
3.6% strongly agree) stated that they provide a favorable score for a physically attractive
instructor. The remaining 13 (5.9%) were neutral. Moreover, according to table 16, the mean
score of students' responses to the statement is 3.53, showing that the majority of students
disagreed with it. Furthermore, instructors were also asked to express their perception of the
statement "students provide favorable score for physically attractive instructors". Similar to the
students' responses, the mean score of instructors' responses is 3.57 (see table 16), indicating
that the majority of instructors disagreed with the statement. The analysis therefore shows that
the instructors' perception and the students' self-report on this issue match, and that during
appraisal students give little consideration to an instructor's physical attractiveness.
Additionally, table 17 shows the association between students' rating and instructors'
perception.
Table 17: Students practice of scoring for physically attractive instructors and instructors’
perception on students rating.
Chi-Square Tests
                               Value     df   Asymp. Sig.   Exact Sig.   Exact Sig.   Point
                                              (2-sided)     (2-sided)    (1-sided)    Probability
Pearson Chi-Square             36.419a    4   .000          .000***
Likelihood Ratio               31.959     4   .000          .000***
Fisher's Exact Test            29.341                       .000***
Linear-by-Linear Association     .037b    1   .847          .880         .457         .059
N of Valid Cases               256
a. 2 cells (20.0%) have expected count less than 5. The minimum expected count is 1.09.
b. The standardized statistic is -.192.
The p-value from Fisher's exact test is less than 0.001. Therefore, there is a significant
relationship between instructors' perception of students' rating practice and students' rating
based on the physical attractiveness of instructors at the α = 0.05 level. This result supports the
cross-tabulation result discussed above.
Figure 6: Rating Based on Funniness of an Instructor
According to Shevlin (2000), student appraisal of instructors is greatly affected by instructors'
personal attributes such as sense of humor; that is, if an instructor entertains students, perhaps by
telling jokes, students provide a favorable score for that instructor. To examine whether this
holds true here, students were asked to express their agreement or disagreement with the
statement "I provide favorable score for funny instructor (who tells jokes)". Accordingly, 22
(10%) agreed and 40 (18.1%) strongly agreed with the statement. On the other hand, 15 (6.8%)
strongly disagreed and 81 (36.7%) disagreed, while the remaining 63 (28.7%) were indifferent.
Based on the mean scores, it can be deduced that a funny instructor can gain a favorable score in
student appraisal. Moreover, instructors were also given a chance to express their perception of
the statement "students provide favorable score for funny instructor (who tells jokes)".
According to table 16, the mean score of instructors' responses, 3.03, is slightly less than the
3.04 mean score of students' responses. The implication is that, while the mean scores for both
groups (students and instructors) reflect agreement with the statement, students favor funny
instructors somewhat more than instructors perceive.
The finding is consistent with that of Shevlin (2000), who found a positive correlation between
student ratings and an instructor's humor. Similarly, Naftulin et al. (1973) concluded that an
instructor's entertainment level influences student appraisal scores, an influence they called the
"Dr. Fox" effect. In their study, Naftulin et al. placed an actor, known as "Dr. Fox", in a college
classroom, where he presented a highly entertaining lecture that included no substantive content.
The actor received rave student appraisal scores, which led the researchers to conclude that
highly charismatic instructors can seduce students into giving high scores despite learning
nothing. The relationship between students' ratings and instructors' perception is presented in
table 18 below.
Table 18: Relationship test result of students practice of scoring for funny instructors *
instructors perception on students rating based on funniness.
Chi-Square Tests
                               Value     df   Asymp. Sig.   Exact Sig.   Exact Sig.   Point
                                              (2-sided)     (2-sided)    (1-sided)    Probability
Pearson Chi-Square             16.316a    4   .003          .003***
Likelihood Ratio               14.340     4   .006          .009**
Fisher's Exact Test            14.679                       .004***
Linear-by-Linear Association     .003b    1   .957          1.000        .505         .059
N of Valid Cases               256
a. 2 cells (20.0%) have expected count less than 5. The minimum expected count is 3.01.
b. The standardized statistic is .054.
Using Fisher's exact test, the p-value is 0.004, which is below 0.05. Therefore, there is a
significant relationship between instructors' perception of students' rating of funny instructors
and students' rating based on the funniness of instructors at the α = 0.05 level. This result
supports the cross-tabulation result discussed above.
4.4.1.2 Rating Based on Grade and Exam Related Factors
Studies have found an association between grades and student appraisals of instructors'
performance: student appraisal scores are higher in courses where student achievement is higher
(Baird 1997; Cohen 1981). In addition, Onwuegbuzie et al. (2007) stated that factors associated
with testing (e.g. difficult exams and sudden quizzes) and grading (e.g. harsh grading, a notable
number of assignments and homework) were likely to lower students' appraisal of their
instructors. In this regard, to examine whether students take grade and exam related factors into
account in evaluating instructors' performance, students participating in the current study were
asked to self-report their practice on these issues.
Table 19 above shows that, when students were asked to express their agreement or
disagreement with the statement "I evaluate an instructor positively, if he/she awarded me good
grade in previous course he/she taught", the overall mean score of their responses was 3.65,
implying that the majority of students agreed with the statement. However, a closer look at table
19 shows that while students with a CGPA greater than 3.25 agreed with the statement, students
whose CGPA is less than 3.25 disagreed. Put differently, the higher a student's CGPA, the more
likely they are to evaluate their instructors based on a previously awarded grade. The implication
is perhaps that students with a higher CGPA are more positive in evaluating their instructors.
Moreover, instructors were also asked to give their opinion on the statement "Students evaluate
an instructor positively, if he/she awarded them good grade in previous course he/she taught".
Table 16 reveals that, as with the students' responses, the mean score of the instructors'
responses (3.17) indicates support for the statement. In conclusion, though the influence is
moderated by CGPA, the majority of students who previously received a good grade in a course
taught by an instructor are more likely to evaluate that instructor positively. This finding is
supported by other studies (Cohen 1981; Braskamp and Ory 1994; Baird 1997; Weinberg et al.
2007), which also found that students reward those instructors who reward them with good
grades. This reciprocal trend may lead to grade inflation among instructors who wish to receive
high student appraisal scores.
Table 19 above also indicates that the statement "I evaluate instructor positively, when I expect
good grade from him/her" was supported by most of the sampled students. The overall mean
score of students' responses, 3.21, evidences the students' agreement with the statement.
Moreover, instructors also responded to the statement "students evaluate instructors positively,
when they expect good grade from him/her". According to table 16, similar to the case of
students, the 3.26 mean score of instructors' responses shows that instructors agreed with the
statement.
In conclusion, though students' CGPA moderates the degree of influence of the expected grade
on student evaluation, students' expectation of a good grade from an instructor leads them to
provide a favorable score for that instructor. Consistent with this finding, other studies
(Worthington 2002; Weinberg et al. 2007) also found that students evaluate their instructors
more favorably when they expect a higher grade.
Table 19 also reveals that the mean score of students' responses to the statement "I provide high
score for instructor whose exam is easy" is 3.10, indicating that the majority of students agreed
with the statement. A more detailed analysis indicates that the influence of exam easiness on
student evaluation increases as students' CGPA increases. In the same fashion, table 16 shows
that the mean score of instructors' responses to the statement "students provide high score for
instructor whose exam is easy" is 3.06, implying the instructors' support for the statement. By
and large, the analysis suggests that when evaluating an instructor, students take into account
how easy the exams that the instructor prepares for them are.
Lastly, table 19 reveals students' responses to the statement "I provide high score for instructor
who gives fewer assignments". The students' responses resulted in a mean score of 3.06,
signifying that they favor instructors who give fewer assignments. Additionally, the analysis
indicates that, regardless of their CGPA, students evaluating instructors' performance are
influenced by the number of assignments the instructor gives. Furthermore, instructors were also
asked to express their opinion on the statement "students provide high score for instructor who
gives fewer assignments". Table 16 shows that the mean score of instructors' responses is 3.31,
implying the instructors' agreement with the statement. Generally, similar to the findings of
other studies (Centra 2003; Onwuegbuzie et al. 2007), the analysis indicates that the majority of
students positively evaluate an instructor who gives fewer assignments. This suggests that
students lack awareness of why instructors give assignments, and that students need to be
convinced of the importance of doing assignments so that their attitude towards assignments
improves. Additionally, the associations between students' ratings and instructors' perceptions
are shown in tables 20, 21, 22 and 23 below.
Table 20: Relationship test result of students practice of Rating based on previous grade
awarded by instructor * instructors perception on students rating.
Chi-Square Tests
                               Value     df   Asymp. Sig.   Exact Sig.   Exact Sig.   Point
                                              (2-sided)     (2-sided)    (1-sided)    Probability
Pearson Chi-Square             32.579a    4   .000          .000***
Likelihood Ratio               24.425     4   .000          .000***
Fisher's Exact Test            25.136                       .000***
Linear-by-Linear Association    3.402b    1   .065          .065         .039         .009
N of Valid Cases               256
a. 2 cells (20.0%) have expected count less than 5. The minimum expected count is 1.64.
b. The standardized statistic is 1.845.
The p-value from Fisher's exact test is less than 0.001. Therefore, there is a highly significant
relationship between instructors' perception of students' rating practice and students' rating
based on the grade previously awarded by the instructor at the α = 0.05 level. This result
supports the cross-tabulation result discussed above.
Table 21: Relationship test result of students practice of Rating based on expected good grade
from instructor* instructors perception on students rating.
Chi-Square Tests
                               Value     df   Asymp. Sig.   Exact Sig.   Exact Sig.   Point
                                              (2-sided)     (2-sided)    (1-sided)    Probability
Pearson Chi-Square             15.586a    5   .008          .010*
Likelihood Ratio               12.212     5   .032          .035*
Fisher's Exact Test            12.730                       .020*
Linear-by-Linear Association     .013b    1   .908          .915         .319         .036
N of Valid Cases               256
a. 4 cells (33.3%) have expected count less than 5. The minimum expected count is .14.
b. The standardized statistic is -.116.
Using Fisher's exact test, the p-value is 0.020, which is below 0.05. Therefore, there is a
significant relationship between instructors' perception of students' rating practice and students'
rating based on the grade they expect from an instructor at the α = 0.05 level. This result
supports the cross-tabulation result discussed above.
Table 22: relationship test of students practice of Rating based on number of assignments
given by instructor* instructors perception on students rating.
Chi-Square Tests
                               Value     df   Asymp. Sig.   Exact Sig.   Exact Sig.   Point
                                              (2-sided)     (2-sided)    (1-sided)    Probability
Pearson Chi-Square              9.369a    4   .053          .052
Likelihood Ratio                8.732     4   .068          .073
Fisher's Exact Test             9.350                       .042*
Linear-by-Linear Association    1.625b    1   .202          .209         .118         .030
N of Valid Cases               256
a. 2 cells (20.0%) have expected count less than 5. The minimum expected count is .82.
b. The standardized statistic is -1.275.
From these results, the Pearson chi-square significance of .053 would suggest that there is no
relationship between students' evaluation practice based on the number of assignments given by
instructors and the instructors' perception of students' rating. However, the assumptions
necessary for the standard asymptotic calculation of the significance level are violated, so exact
results should be used. Fisher's exact test gives a p-value of 0.042, which is below 0.05 and
indicates that there is a relationship between students' evaluation practice based on the number
of assignments given by instructors and the instructors' perception of students' rating. This
result supports the cross-tabulation result.
Table 23: relationship test of students practice of Rating based on exam easiness prepared by
instructor* instructors perception on students rating.
Chi-Square Tests
                               Value     df   Asymp. Sig.   Exact Sig.   Exact Sig.   Point
                                              (2-sided)     (2-sided)    (1-sided)    Probability
Pearson Chi-Square             13.605a    4   .009          .009**
Likelihood Ratio               10.689     4   .030          .033*
Fisher's Exact Test            11.369                       .018**
Linear-by-Linear Association     .043b    1   .835          .884         .450         .057
N of Valid Cases               256
a. 1 cell (10.0%) has expected count less than 5. The minimum expected count is 1.09.
b. The standardized statistic is .208.
The Fisher's exact test p-value of 0.018 is less than 0.05, which indicates that there is a
relationship between students' evaluation practice based on the easiness of the exams prepared
by instructors and the instructors' perception of students' rating. This result supports the
cross-tabulation result.
Table 24: The Contrast Effect in Student Appraisal

                                        Responses
Description                       Strongly    Disagree   Neutral    Agree     Strongly   Total
                                  disagree                                    agree
                                  N    %      N    %     N    %     N    %    N    %     N     %
I evaluate instructors'
performance by contrasting it
with that of other instructors   26   11.8   79   35.7  39   17.6  42   19.0 35   15.8  221   100
Source: Own Survey Result, 2015
According to table 24, around 34.8% (19% agree and 15.8% strongly agree) of students stated
that in evaluating an instructor's performance they contrast it against that of other instructors.
On the other hand, 47.5% (11.8% strongly disagree and 35.7% disagree) asserted that they do
not contrast the performance of one instructor against that of others. The remaining 17.6%
expressed neither agreement nor disagreement.
Based on the mean scores, the analysis suggests that a considerable share of students exhibit a
contrast effect in their appraisal of instructors' performance. Table 16 shows that the students'
and the instructors' responses to the statement resulted in mean scores of 3.09 and 3.34,
respectively; the mean scores for both groups of respondents indicate agreement with their
respective statements.
Table 25: Rating by Comparing Instructors' Performance with Standard of Appraisal

                                        Responses
Description                       Strongly    Disagree   Neutral    Agree     Strongly   Total
                                  disagree                                    agree
                                  N    %      N    %     N    %     N    %    N    %     N     %
I evaluate my instructors'
performance by comparing with
a given standard/appraisal
criteria                          8    3.6  124   56.1   5    2.3  64   29.0 20    9.0  221   100
Source: Own Survey Result, 2015
According to table 25, more than half (59.7%) of the students, comprising 3.6% who strongly
disagreed and 56.1% who disagreed, were against the statement, while 38% (29% agree and 9%
strongly agree) supported it. Instructors were also given the chance to express their perception of
the statement "students evaluate my performance by comparing with a given standard of
appraisal". Table 16 indicates that the mean scores of the instructors' and the students'
responses are 1.94 and 2.84, respectively. Since these mean scores are below the midpoint
(3.00) of the scale, both groups of respondents disagreed with their respective statements.
By and large, the analysis shows that the majority of students compare their instructors'
performance against benchmarks other than the given standard of appraisal. This tendency has
also been reflected in the above analysis of the different factors influencing students' appraisal,
such as expected grade, previously received grade, exam easiness and funniness. Logically, for
students to evaluate instructors by comparing their real performance against the appraisal
criteria, properly reading each appraisal criterion is necessary.
Table 26: Relationship test of students' practice of rating by comparing instructors' performance with a given standard/appraisal criteria * instructors' perception of students' rating

Chi-Square Tests
                               Value     df   Asymp. Sig.   Exact Sig.   Exact Sig.   Point
                                              (2-sided)     (2-sided)    (1-sided)    Probability
Pearson Chi-Square             25.777a   4    .000          .000***
Likelihood Ratio               23.878    4    .000          .000***
Fisher's Exact Test            21.943                       .000***
Linear-by-Linear Association   1.953b    1    .162          .175         .090         .019
N of Valid Cases               256
a. 3 cells (30.0%) have expected count less than 5. The minimum expected count is 2.19.
b. The standardized statistic is 1.397.
The p-value from Fisher's exact test is less than 0.001, indicating a highly significant relationship, at the α = 0.05 level, between instructors' perception of students' rating and students' practice of rating by comparing instructors' performance with the given standard/appraisal criteria. This result is consistent with the mean scores from the cross-tabulation discussed above.
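As a minimal sketch of how a Pearson chi-square statistic like those reported in these tables could be computed, the following uses pure Python on a purely hypothetical contingency table; the counts below are NOT the study's actual cross-tabulation. In practice, scipy.stats.chi2_contingency would return the same statistic together with a p-value.

```python
# Pearson chi-square statistic for an r x c contingency table of
# observed counts: chi2 = sum over cells of (O - E)^2 / E, where
# E = row_total * col_total / n.

def pearson_chi_square(table):
    """Return (chi-square statistic, degrees of freedom)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, df

# Hypothetical 2 x 3 cross-tabulation (n = 256), for illustration only.
observed = [[30, 50, 20],
            [60, 40, 56]]
chi2, df = pearson_chi_square(observed)
print(round(chi2, 3), df)
```

The statistic is then compared against the chi-square distribution with the given degrees of freedom to obtain the asymptotic p-value reported by SPSS.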
To this end, students were asked to express their opinion on whether they properly read each
appraisal criterion. The following figure 7 summarizes the response of students on this issue.
The above figure 7 reveals that when students were asked to express their agreement or disagreement with the statement "I properly read each appraisal criterion while evaluating the performance of my instructor", 59.7% disagreed, comprising 8 (3.6%) strongly disagree and 124 (56.1%) disagree. The agreement rate, by contrast, was 38%, covering 64 (29%) agree and 20 (9%) strongly agree. From this description, one can note that the percentage of students who do not properly read each appraisal criterion is greater than that of those who do. This is also reflected in the 2.40 mean score of students' responses. Similarly, the 2.03 mean score of instructors' responses reflects instructors' agreement on the issue (see table 16 for mean scores). The chi-square test also supports this result, as shown in table 27.
Table 27: Relationship test of students' practice of rating by properly reading each appraisal criterion * instructors' perception of students' rating

Chi-Square Tests
                               Value      df   Asymp. Sig.   Exact Sig.   Exact Sig.   Point
                                               (2-sided)     (2-sided)    (1-sided)    Probability
Pearson Chi-Square             124.354a   4    .000          .000
Likelihood Ratio               94.376     4    .000          .000
Fisher's Exact Test            90.680                        .000
Linear-by-Linear Association   17.012b    1    .000          .000         .000         .000
N of Valid Cases               256
a. 3 cells (30.0%) have expected count less than 5. The minimum expected count is 1.91.
b. The standardized statistic is 4.125.
The p-value from Fisher's exact test is less than 0.001, indicating a highly significant relationship, at the α = 0.05 level, between instructors' perception of students' rating and students' practice of rating by properly reading each appraisal criterion. This result is consistent with the mean scores from the cross-tabulation discussed above.
Generally, it can be concluded that the majority of students evaluate instructors without properly reading the appraisal criteria, because their evaluation is strongly influenced by their preconceptions about instructors.
4.4.1.5 Rating Based on Negative Experience with Instructor(s)
Table 28 below indicates that 40.7% of students (14.9% strongly agree plus 25.8% agree) were in agreement with the statement "If I had a negative experience with an instructor, I will provide a low score on all appraisal criteria". Conversely, 51.1% (15.8% strongly disagree plus 35.3% disagree) were against the statement, and the remaining 8.1% were neutral.
Table 28: Rating Based on Negative Experience with Instructor(s)

Statement: "If I had a negative experience with an instructor, I will provide a low score on all appraisal criteria."

Response             N     %
Strongly disagree    35    15.8
Disagree             78    35.3
Neutral              18    8.1
Agree                57    25.8
Strongly agree       33    14.9
Total                221   100
Source: Own Survey Result, 2015
Additionally, table 16 shows that the mean score of students' responses is 3.11, implying that, on balance, students agree that they evaluate an instructor negatively if they have had a negative experience with that instructor. According to table 16, the 3.26 mean score of instructors' responses on the statement "If students had a negative experience with an instructor, they will provide a low score on all appraisal criteria" entails the instructors' support of the statement. Comparing the two mean scores, however, reveals that students' negative experience with instructor(s) influences their evaluation, but to a lesser extent than instructors had imagined.
Therefore, the analysis shows that many students try their level best to punish instructors whom they dislike, though with some frustration, because they had not seen an instructor penalized after receiving a low score in student appraisal. Similarly, David and Macayan (2010) found that students' evaluation of an instructor's performance is based mainly on hidden anger resulting from a recent grade received on an exam or from a single negative experience with an instructor. The chi-square test in table 29, however, shows no significant association between students' rating and instructors' perception.
Table 29: Relationship test of students' practice of rating based on a single negative experience with an instructor * instructors' perception of students' rating

Chi-Square Tests
                               Value    df   Asymp. Sig.   Exact Sig.   Exact Sig.   Point
                                             (2-sided)     (2-sided)    (1-sided)    Probability
Pearson Chi-Square             8.747a   4    .068          .065
Likelihood Ratio               7.813    4    .099          .116
Fisher's Exact Test            7.681                       .094
Linear-by-Linear Association   .352b    1    .553          .587         .301         .046
N of Valid Cases               256
a. 2 cells (20.0%) have expected count less than 5. The minimum expected count is 3.55.
b. The standardized statistic is -.594.
The exact p-value from Fisher's exact test is 0.094, and the asymptotic p-value from the Pearson chi-square is 0.068. Since both exceed the 0.05 significance level, the conclusion is that students' practice of rating based on a single negative experience with an instructor and instructors' perception of students' rating are not significantly related. This result is the opposite of the mean score result.
Tziner and Kopelman (2002) noted that because rating errors are well-embedded habits, extensive training is necessary to avoid them. The current study has therefore also attempted to evaluate whether students, as appraisers of instructors' performance, are well trained to evaluate the performance of instructors. In this respect, students were given a chance to express their agreement or disagreement with the statement "I have received enough training on how to evaluate my instructors' performance". The following figure 8 displays a summary of students' responses to this statement.
As can be seen from figure 8, the largest proportion of students, 71 (32.1%), strongly disagreed and the next largest, 64 (29%), disagreed with the above statement, giving a 61.1% disagreement rate. Comparatively few students strongly agreed, 33 (14.9%), or agreed, 15 (6.8%). According to table 16, the mean score of students' responses is 2.43, reflecting that students lacked training on instructors' performance appraisal. In the same fashion, the 1.51 mean score of instructors' responses on the statement "students have received enough training on how to evaluate their instructors' performance" implies that instructors hardly agreed with the statement.
Furthermore, during the interviews with heads of departments, while the majority of them recognized the necessity of training, they stated that no training had been given to students with respect to instructors' performance appraisal. The chi-square test result shown in table 30 also supports this finding.
Table 30: Relationship test of students' practice of rating by having enough training to evaluate instructors' performance * instructors' perception of students' rating

Chi-Square Tests
                               Value     df   Asymp. Sig.   Exact Sig.   Exact Sig.   Point
                                              (2-sided)     (2-sided)    (1-sided)    Probability
Pearson Chi-Square             25.738a   4    .000          .000***
Likelihood Ratio               29.311    4    .000          .000***
Fisher's Exact Test            25.386                       .000***
Linear-by-Linear Association   13.515b   1    .000          .000***      .000         .000
N of Valid Cases               256
a. 2 cells (20.0%) have expected count less than 5. The minimum expected count is 2.46.
b. The standardized statistic is 3.676.
The p-value from Fisher's exact test is less than 0.001, indicating a highly significant relationship, at the α = 0.05 level, between instructors' perception of students' rating and students' practice of rating by having enough training to evaluate instructors' performance. This result is consistent with the mean scores from the cross-tabulation discussed above.
Generally, the analysis shows that students lack the training necessary to evaluate the performance of instructors. Therefore, to enhance the usefulness of student appraisal, students should be trained during their freshman semester, with periodic refresher training thereafter.
4.4.2 Instructors’ Perception of Practices of Peers
As mentioned earlier, peer appraisal is an integral part of instructors' performance appraisal in BDU. According to Keig and Waggoner (1994), peer appraisal is a process in which instructors work collaboratively to assess each other's teaching and to assist one another in efforts to strengthen teaching. Peer appraisal is also widely used as a supplement to students' appraisal of instructors' performance. Therefore, the focus of this section is on the practices of peers in appraisal, so as to examine the extent to which peer appraisal is perceived to be free of biases.
Figure 9: Reading Appraisal Criteria during Appraisal
Source: Own Survey Result, 2015
Figure 10: Peers Provide Equivalent High Scores to All Peers
On the other hand, 8 (22.9%) strongly disagreed and 2 (5.7%) disagreed, while the remaining 10 (28.6%) were neutral. The mean score of instructors' responses is 3.06, indicating that the largest proportion of instructors perceive that, during peer appraisal, peers provide equivalently high scores for each other.
As described earlier, since the appraisal score is associated with certain personnel decisions (especially promotion and scholarship), instructors are inclined to give high scores so as to help each other. Conclusively, the peer appraisal score is particularly subject to leniency bias because peers want each other to benefit from the promotion and scholarship opportunities that are largely determined by the appraisal score. Consistent with this finding, Rudd et al. (2001) found that at the University of Florida, peers were reluctant to give negative feedback because of promotion, tenure, and award implications. Prowker (1999) also stated that the purpose of performance appraisal information may concentrate the evaluator's attention on positive behavioral incidents, which leads to inflated appraisal scores. Therefore, this practice can be regarded as a major problem clouding the peer appraisal result.
4.4.2.3 Positively Evaluating Close Friends (Peers)
Favoritism is one of the potential biases inherent in the performance appraisal process. In this context, instructors were given a chance to indicate whether they perceive that peers evaluate their close friends more positively than other peers. The following table 32 summarizes instructors' responses to the statement "peers positively evaluate their close friends/peers".
The above table indicates that 34.3% and 25.7% of instructors strongly agreed and agreed, respectively, with the above statement. While 20% were neutral, 8.6% disagreed and 11.4% strongly disagreed. From table 31, the mean score of their responses is 2.37. This indicates that the majority of instructors perceive that peers evaluate their close friends more positively; the implication is that instructors do not view each other through equal eyes in the course of their performance appraisal.
Table 33: Peers Are Well Informed of My Performance along All Appraisal Criteria
Accordingly, table 33 reveals that 42.9% and 42.9% of instructors strongly disagreed and disagreed, respectively, with the statement. While 5.7% neither agreed nor disagreed, 8.6% agreed with the statement. The mean score of instructors' responses on the statement is 1.80, showing that the majority of instructors believe that peers are not well informed of all dimensions of their performance.
Given an insufficient amount of information, it is likely that the appraiser will not have formed a well-founded position about the appraisee and will be concerned about making a mistake (Harris 1994; Elverfeldt 2005). Therefore, it is recommended that peer appraisal be conducted by peers who have full information about each other's performance. For example, instructors within the same team may be better informed of the performance of their team members than of other instructors in the department, so peer appraisal might better be done on a team basis rather than on the conventional departmental basis.
4.4.2.5 Contrasting Performance of Peers against Each Other Vs. Comparing Peers’
Performance with the Appraisal Standard
As discussed earlier in this paper, when an evaluator lets the rating of an appraisee be influenced by the performance of other appraisees, the contrast effect crops up. Ideally, the performance of employees (instructors) should be compared with the given standards of appraisal rather than against somebody else's performance. In this regard, instructors were asked about their perception of peers' practice of comparing their performance with the standard of appraisal and/or against the performance of other instructors. The following table 34 displays the aggregate responses of instructors concerning this issue.
Table 34: Comparing a peer's performance with the standard of appraisal vs. contrasting a peer's performance against the performance of other peers

                                   SD          D           N          A           SA         Total
During appraisal, peers do
contrast the performance of one
peer against that of other peers   10 (28.6)   3 (8.6)     9 (25.7)   13 (37.1)   20 (9.0)   35 (100)

Peers evaluate me by comparing
my performance with a given
standard of evaluation             10 (28.6)   19 (54.3)   3 (8.6)    3 (8.6)     - (-)      35 (100)

(SD = strongly disagree, D = disagree, N = neutral, A = agree, SA = strongly agree; figures are N (%).)
Source: Own Survey Result, 2015
Table 34 shows that when instructors were asked to respond to the statement "during appraisal, peers do contrast the performance of one peer against that of other peers", 37.2% (28.6% strongly disagree and 8.6% disagree) expressed disagreement with the statement. While 25.7% neither agreed nor disagreed, 46.1% agreed. The mean score of instructors' responses on the statement is 3.29, as shown in table 31, indicating that the majority of instructors agreed with the statement. In other words, the largest proportion of instructors perceive that their peer appraisal score is affected by the contrast effect.
Additionally, table 34 shows that when instructors were asked to respond to the statement "peers evaluate me by comparing my real performance with a given standard/appraisal criteria", 82.9% (28.6% strongly disagree and 54.3% disagree) expressed disagreement, while 8.6% neither agreed nor disagreed and 8.6% agreed. Moreover, the mean score of instructors' responses on this statement is 1.97. This implies that the greater proportion of instructors do not perceive that peers compare their performance against the given standard of appraisal.
Generally, the analysis shows that the majority of instructors believe that peer appraisal is affected by the contrast effect to a high extent, and at the same time that peers do not compare their performance against the given standard of appraisal.
Table 35: HOD Is Well Informed of My Performance along All Appraisal Criteria

Statement: "HOD is well informed of my performance along all criteria of appraisal."

Response             N    %
Strongly disagree    12   34.3
Disagree             10   28.6
Neutral              3    8.6
Agree                6    17.1
Strongly agree       4    11.4
Total                35   100
Mean                 2.43
Source: Own Survey Result, 2015
The above table 35 indicates that the majority of instructors (62.9%: 34.3% strongly disagree plus 28.6% disagree) were against the statement "HOD is well informed of my performance along all criteria of appraisal". The mean score of responses on this statement is 2.43, implying that the majority of instructors perceive that their head of department is poorly informed of their performance. As mentioned earlier, however, information about the performance of the appraisee is a critical determinant of the accuracy of the appraisal score. Therefore, it is recommended that, prior to undertaking the appraisal, every head of department do his or her best to gather the necessary information regarding the instructors under his or her supervision.
Glancing at table 36, one can notice that the overall mean score of instructors' responses on the statement "HOD is also my coach to improve my performance" is 2.69. This indicates that the majority of instructors believe that their head is not providing a coaching service to improve their performance. A closer look at the table also shows that assistant lecturers had the lowest mean for the statement, lecturers the highest, and assistant professors fell in between.
Figure 11: HOD Practice of Keeping Record of Instructors’ Performance
4.4.3.4 Giving Equivalent Score for All Instructors and Doing Favor for Close Friends
There is a growing body of literature supporting the idea that, during appraisal, appraisers may be influenced by political considerations and favoritism. In line with this, this study has attempted to examine the existence of these tendencies in HODs' practice of instructors' performance appraisal; the table below shows the aggregate responses of instructors in this respect.
Table 37: Giving High Score for All Instructors and Doing Favor for Close Friends

                                  SD          D          N           A          SA         Total      Mean
HOD gives high appraisal score
to all staff members              6 (17.1)    10 (28.6)  6 (17.1)    9 (25.7)   4 (11.4)   35 (100)   3.14

HOD rates his close friends
more positively than other
instructors                       12 (34.3)   3 (8.6)    10 (28.6)   6 (17.1)   4 (11.4)   35 (100)   3.37

(SD = strongly disagree, D = disagree, N = neutral, A = agree, SA = strongly agree; figures are N (%).)
Source: Own Survey Result, 2015
Instructors were requested to express their level of agreement with the statement "HOD gives high appraisal score to all staff members". Accordingly, table 37 reveals that 37.1% of instructors (25.7% agree plus 11.4% strongly agree) supported the statement. Similarly, the 3.14 mean score of instructors' responses implies that instructors were, on balance, in agreement with the statement; that is, a sizable share of instructors believe that heads of departments give favorable scores to all instructors under their supervision. According to Freid et al. (1999), deliberate inflation of the appraisal score is a political consideration that may hinder an organization's effort to use appraisal scores for developmental and motivational purposes. Supervisors deliberately inflate employees' appraisal scores for different reasons, such as maximizing subordinates' merit raises, avoiding the creation of a written record of poor performance, and minimizing potential challenges from subordinates to their own appraisal scores (Freid et al. 1999).
To this end, the interviews conducted with HODs show that they give almost equally high scores to all staff because the appraisal score is associated with instructors' promotion. In the words of one head: "No better incentive system is available for instructors. The major means through which instructors will get additional payment is the 'intermittent' promotion, which is largely determined by the performance appraisal score. Therefore, I am not as such serious at making a distinction between instructors during that semi-annual appraisal."
Conclusively, the analysis shows that heads' appraisal is affected by leniency bias because the appraisal score has implications for promotion decisions. Therefore, other things being on the right track, heads are recommended to rate instructors based on their real performance so as to ensure the effectiveness of the instructors' performance appraisal process. According to Elverfeldt (2005), when employees perceive bias or favoritism in managerial behavior, the perception of inequity accelerates. Managers frequently engage in what is termed "in-group" and "out-group" behavior, in which employees who are viewed as capable are given privileges while those who are viewed unfavorably are discriminated against (Robbins, 1997). This categorization of employees is frequently made with incomplete information, leading to misclassification of employees, which attenuates the effectiveness of the performance appraisal process (Elverfeldt 2005).
Based on this theoretical standing, an effort was made to elicit instructors' perception of favoritism in their HOD's appraisal. Accordingly, table 37 indicates that 42.9% of instructors (34.3% strongly disagree plus 8.6% disagree) were against the statement "HOD rates his close friends more positively than other instructors". At the same time, the 3.37 mean score of instructors' responses suggests that a substantial share of instructors perceived that their heads of departments favor their close friends in appraisal; 28.6% were indifferent and the remaining 28.5% agreed with the statement.
4.4.3.5 Contrasting Performance of Instructors against Each Other vs. Comparing Instructors’
Performance against the Appraisal Standard
In order to evaluate the perceived contrast effect in heads' appraisal, instructors were requested to respond to the statement "During appraisal, HOD does contrast my performance against that of my peers". Accordingly, table 38 reveals that half of the instructors (50%) were against the statement, i.e. 29.7% strongly disagree plus 20.3% disagree. The mean score of their responses is 2.59, showing that instructors' perception of a contrast effect in heads' appraisal is minimal. While this is an encouraging finding, more than half of the instructors (52.7%: 17.6% strongly disagree and 35.1% disagree), to the researcher's dismay, also disagreed with the statement "HOD evaluates me by comparing my performance with a given standard/appraisal criteria". This is also evidenced by the mean score of 2.74.
Table 38: Contrasting performance of instructors against each other vs. comparing
instructors’ performance against the appraisal standard
                                  SD          D          N          A           SA        Total      Mean
During appraisal, HOD does
contrast my performance against
that of my peers                  6 (17.1)    4 (11.4)   9 (25.7)   12 (34.3)   4 (11.4)  35 (100)   2.89

HOD evaluates me by comparing
my performance with a given
standard/appraisal criteria       7 (20.0)    7 (20.0)   9 (25.7)   10 (28.6)   2 (5.7)   35 (100)   2.80

(SD = strongly disagree, D = disagree, N = neutral, A = agree, SA = strongly agree; figures are N (%).)
Source: Own Survey Result, 2015
Generally, heads' perceived practice of neither comparing instructors' performance against the standard of appraisal nor contrasting instructors' performance against each other may lead one to question how heads actually exercise the appraisal. Surprisingly, the interviews with heads of departments show that some heads do not formally and regularly evaluate the performance of instructors. For example, one head said: "Frankly speaking, I never formally evaluate instructors' performance; rather I conduct it on demand by instructors for scholarship or promotion. Otherwise, I am not as such committed to do evaluation. However, I believe that performance evaluation is a serious business that everybody in the university must care about."
Another head said: "It is difficult to say performance evaluation exists. I am personally not motivated to undertake strict evaluation of instructors' performance. Sometimes I feel guilty to provide different evaluation scores for instructors because the evaluation criteria themselves have a lot of problems; the elements raised need customization department-wise. Additionally, since the situation itself is not favorable for instructors, it could be hard for instructors to perform as per the evaluation criteria. Therefore, the evaluation is handled carelessly as a mere formality."
Generally, the analysis shows that instructors' performance appraisal has not received enough attention from heads of department. From the above quotes, it can be understood that heads are not evaluating the real performance of instructors.
4.5.1 Existence of Official Performance Feedback after Appraisal
To investigate whether official feedback on performance is available to instructors, they were asked the question "Have you ever got official feedback on your performance after performance appraisal?" Accordingly, table 4.24 indicates that 62.9% of instructors asserted that they were given such feedback while the remaining 37.1% were not. Moreover, to further analyze whether official performance feedback exists uniformly across departments, the responses of instructors were cross-tabulated as follows.
A closer look at table 39 shows that 37.5%, 42.9%, 85.7%, 66.7%, 100.0% and 66.7% of instructors in the Economics, Management, Accounting and Finance, Marketing Management, Logistics, and Tourism and Hotel Management departments, respectively, were given official performance feedback after their appraisal. Generally, the analysis shows that official feedback after appraisal is available to around two-thirds of instructors. There is, however, variation among departments in obtaining the feedback, with Logistics in a better position than the other departments.
Furthermore, the instructors who did not receive official performance feedback after appraisal numbered 13, i.e. 37.1% of the 35 sampled instructors. This group of instructors was asked why they did not receive the feedback; all 13 (100%) stated that there is "no feedback system at all in the university".
Generally, the analysis shows that feedback after appraisal is not uniformly available to instructors across departments in the university.
Figure 12: Existence of Continuous Discussion on Instructors' Performance
The above figure reveals that the majority of instructors, 25 (71.4%), maintained that management/heads of departments and instructors do not hold continuous discussions about instructors' performance. On the other hand, 10 instructors (28.6%) stated that such discussions exist. This shows that little attention has been given to the importance of continuous dialogue in the effort to improve instructors' performance. Therefore, instead of simply waiting for the semester-end appraisal, it would be wise for heads of departments to play the role of coaches and thereby adopt a culture of continuous dialogue with instructors.
Table 40: Forms of Performance Feedback Available for Instructors
According to table 40, of the 22 instructors who responded to the question, 20 (90.9%) stated that they received performance feedback in the form of a summary of the appraisal score (result). Therefore, the analysis shows that, though not regularly exercised, the dominant form of communicating the appraisal score to instructors is such a summarized score. This implies that instructors can hardly know their achievement (low or high score) on each appraisal criterion. Hence, this form of feedback does not help instructors recognize the specific aspects of performance that need future improvement.
Furthermore, the following figure 13 shows instructors' perception of the ability of the feedback to help them know their specific strengths and weaknesses. Of the 56 instructors who responded to the question, 44.64%, 28.57%, and 17.86% perceived the feedback as "poor", "satisfactory", and "good", respectively, in terms of letting them know their specific strengths and weaknesses.
Figure 13: Specificity of performance feedback
Moreover, the mean score of 1.40 testifies that the performance feedback lacks specificity. Therefore, the university needs to arrange a form of feedback that provides instructors with detailed information about their performance. For instance, other things being ideal, preparing a report that communicates instructors' achievement (appraisal score) on each appraisal criterion would help instructors know their strengths and pitfalls.
instructors is timely or not. The following table 41 displays instructors' responses regarding the time interval within which they obtain feedback on their performance.
According to table 41 above, the majority (71.4%) of instructors asserted that feedback becomes available to them only after an interval of more than four months. The next largest percentage (20.0%) stated that they obtain feedback within one to two months, while the remaining instructors (8.6%) maintained that feedback is available to them within three to four months.
However, this situation deviates greatly from what is expected. The expectation is that, from the performance appraisal conducted in a given semester, instructors should be provided with timely feedback that helps them rectify their weaknesses; this is vital to enhancing instructors' performance in the subsequent semester. The appraisal is always conducted at the end of each semester, and instructors and students have a two-week break at the end of the first semester and a two-month vacation at the end of the second semester of a given academic year. Nevertheless, as described earlier, it takes more than three months for instructors to obtain performance feedback. Clearly, the feedback is not timely enough to improve instructors' performance. Hence, there is a need to shorten the interval within which feedback is made available to instructors.
4.5.5 Acceptance of Performance Feedback by Instructors
The way feedback is perceived and used is influenced by the attitudes of the feedback recipients. Alexander (2006) suggested that individuals who have positive attitudes toward the appraisal process and believe it is fair are more receptive to feedback. The author also noted that if appraisees become hostile toward the appraisers and the process, they are clearly not ready to accept feedback. Moreover, while appraisees may become defensive toward negative feedback, they more often readily accept positive feedback (Roberts 2003; Elverfeldt 2005; Alexander 2006). In order to examine instructors' acceptance of their appraisal score, an attempt was made to analyze instructors' scores in head, peer and student appraisal. The instructors were then asked whether their appraisal scores from the three sources reflect their true performance. To this end, the subsequent analysis and discussion are based on information from tables 42 and 43.
Table 42: Instructor Appraisal Score during Second Semester of 2014/15 A/Year
Before discussing the data in the above table, it is helpful to take a brief look at how instructors' appraisal scores are calculated in the university under study. On the instructors' performance appraisal form, instructors are evaluated on a five-point scale along every appraisal criterion, where 1 = very low, 2 = low, 3 = medium, 4 = high, and 5 = very high. Because this five-point scale is used, an instructor's total achievement along every appraisal criterion is calculated out of a maximum of five (5) points.
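As a purely illustrative sketch, the scoring scheme just described can be expressed as a small calculation. The ratings below are hypothetical examples, not data from the study:

```python
# Illustrative sketch only: the ratings below are hypothetical, not data
# from the study. Scale: 1 = very low, 2 = low, 3 = medium, 4 = high,
# 5 = very high, one rating per appraisal criterion.
ratings = [4, 3, 3, 2, 4, 3, 5, 3]

# Mean achievement out of a maximum of 5.0 points.
mean_score = sum(ratings) / len(ratings)

# Convention used later in the analysis: 3.0 is the scale midpoint,
# so a mean of 3.0 or above counts as a medium (or better) score.
is_medium_or_above = mean_score >= 3.0

print(f"mean appraisal score: {mean_score:.2f} out of 5.00")
```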
In line with this, Table 42 above indicates that the mean instructor scores in the head, peer and student appraisals were 3.26, 3.00 and 2.74, respectively; that is, on average each instructor earned those marks out of 5.0 points. The mean of the instructors' total appraisal score was 3.11, signifying that on average each instructor achieved 3.11 out of 5.0 points. Generally, the analysis shows that on average instructors receive roughly medium scores, although the student appraisal mean (2.74) falls slightly below the 3.0 midpoint.
However, comparing instructors' appraisal scores across the three sources, head appraisal scores are the most inflated, followed by peer and then student appraisal scores. Table 43 below summarizes the instructors' responses regarding whether their appraisal scores reflect their true performance. According to the table, instructors stated that their head appraisal scores reflect their true performance to a medium (60.0%) or low (11.4%) extent. On the other hand, 8.6% and 20.3% of instructors held that their head appraisal scores reflect their true performance to a high or very high extent, respectively.
Table 43: Instructors' perception of their appraisal scores as a reflection of their true performance
The table also reveals that instructors believe their peer appraisal scores reflect their true performance to a medium (25.7%), low (28.6%) or very low (8.6%) extent, while 28.6% and 8.6% of instructors maintained that their peer appraisal scores reflect their true performance to a high or very high extent, respectively. Moreover, regarding student appraisal scores, 42.9%, 20.0% and 17.1% of instructors stated that the score reflects their true performance to a medium, low or very low extent, respectively. On the other hand, 11.4% and 8.6% of instructors believe that their true performance is reflected by the student appraisal score to a high or very high extent, respectively. Furthermore, the mean scores of instructors' responses for the head, peer and student appraisals are 3.26, 3.00 and 2.75, respectively.
The above descriptions denote that the majority of instructors believe their appraisal scores from all sources (head, peer and student) reflect their true performance only to a limited extent. However, a comparison of the mean response values shows that the head appraisal score is in a better position than the peer and student appraisal scores. The student appraisal score yielded the lowest mean response, indicating that it is the least representative of instructors' true performance compared with the head and peer appraisals.
Table 44: Instructors’ understanding of the purpose of their performance appraisal
Level of understanding   Frequency   Percent
very low                 6           17.1
low                      4           11.4
medium                   9           25.7
high                     12          34.3
very high                4           11.4
Total                    35          100.0
Mean                     3.11
Source: Own Survey Result, 2015
The instructors were asked the question: "What is your level of understanding of the purpose of instructors' performance appraisal in your university?" Of the 35 instructors who responded, 25.7% stated that they had a medium level of understanding of the purpose of their performance appraisal, while 34.3% and 11.4% asserted that their understanding was high or very high, respectively. However, the remaining 28.5% (17.1% very low plus 11.4% low) lacked an understanding of why the appraisal is in place. Generally, it can be said that for the majority of instructors the purpose of their performance appraisal is clear; the overall mean of 3.11 is evidence of this.
Table 45: Students' understanding of the purpose of instructors' performance appraisal
Response            Frequency   Percent
strongly disagree   48          21.7
disagree            46          20.8
neutral             60          27.1
agree               35          15.8
strongly agree      32          14.5
Total               221         100.0
Mean                2.81
Source: Own Survey Result, 2015
As indicated in Table 45, of the 221 students who responded to the statement, 21.7% strongly disagreed and 20.8% disagreed, making a 42.5% disagreement rate. On the other hand, 15.8% agreed and 14.5% strongly agreed, adding up to a 30.3% agreement rate with the statement. Moreover, the mean of the students' responses is 2.81, signifying that the majority of students lacked an understanding of the purpose of instructors' performance appraisal. When the mean scores of instructors (3.11) and students (2.81) are compared, instructors are in a better position than students regarding the clarity of the appraisal purpose.
To probe the reasons for this, instructors were asked about the actual purpose(s) of instructors' performance appraisal in their university. In order to identify the main purposes of the appraisal as applied in the university, respondents (instructors) were given a list of ideal purposes of performance appraisal and asked to rate each purpose independently by expressing their level of agreement or disagreement on a five-point scale. Table 47 summarizes the instructors' responses (mean scores) for each purpose of instructors' performance appraisal in the university.
Table 46: Chi-square test of the relationship between students' practice of rating with enough awareness of the purpose of the evaluation criteria and instructors' perception of student ratings
Chi-Square Tests
                               Value     df   Asymp. Sig.   Exact Sig.   Exact Sig.   Point
                                              (2-sided)     (2-sided)    (1-sided)    Probability
Pearson Chi-Square             19.071a   4    .001          .001
Likelihood Ratio               21.165    4    .000          .000
Fisher's Exact Test            17.895                       .001
Linear-by-Linear Association   15.059b   1    .000          .000         .000         .000
N of Valid Cases               256
a. 1 cell (10.0%) has an expected count less than 5. The minimum expected count is 4.38.
b. The standardized statistic is 3.881.
The p-value from Fisher's exact test is .001. Therefore, there is a highly significant relationship between instructors' perception of student ratings and students' practice of rating with enough awareness of the purpose of the evaluation criteria at the α = 0.05 level. This result supports the mean scores of the cross-tabulation result discussed above.
For the purpose of this analysis, since a five-point Likert scale was used, a mean score of 3.0 was treated as the midpoint (neutral), while mean scores above 3.0 and below 3.0 were interpreted as agreement and disagreement, respectively. According to Table 47, the purposes of instructors' performance appraisal with mean scores above 3.0 are granting promotion (mean = 3.62) and scholarship (mean = 3.17) to instructors. The analysis shows that performance appraisal is used primarily for promotion decisions and secondarily for scholarship decisions. On the other hand, instructors disagreed that performance appraisal serves the following purposes: dismissing poor performers (mean = 2.34), awarding best performers (mean = 2.40), training needs assessment (mean = 2.31), identifying strengths and weaknesses through feedback (mean = 2.82), and mere formality (mean = 2.68).
Generally, the analysis shows that the formative purpose of performance appraisal has received little attention in the university. Normally, instructors' performance appraisal should be used to improve instructors' performance, for example by assessing instructors' training needs and providing performance feedback that helps instructors identify their strengths and weaknesses. However, the situation in the university under study is the inverse of what is expected, because the appraisal is used primarily for promotion. The use of appraisal for promotion is summative in nature and does nothing to improve instructors' performance.
Therefore, it can be concluded that the current performance appraisal in the university neither serves the ideal purposes nor is it positioned to serve them unless urgent corrective action is undertaken.
CHAPTER FIVE
5.1 Conclusions
Under normal circumstances, prior to conducting performance appraisal, appraisees must be informed of the appraisal criteria against which their performance will be evaluated. This could be done by providing training/orientation and/or a detailed job description that clarifies performance expectations. However, instructors were given neither a job description nor training that clarifies their role perception at the outset. Consequently, instructors can hardly understand what is expected of them as instructors in the university. Because they do not know their performance expectations, the majority of instructors do what they feel is right in an attempt to keep their performance up to standard. Therefore, the absence of a job description and of training on the appraisal criteria shows that the instructors' performance appraisal process lacks a proper foundation.
The majority of instructors perceived their current appraisal criteria as deficient in the essential qualities that effective appraisal criteria must possess. Among other things, instructors perceived that the criteria fall short in the following respects: instructors' participation in their design, relevance and completeness, consideration of practical difficulty, reliability, and ability to measure instructors' contribution to student learning.
The study also assessed the practices of appraisers (students, peers and heads). With regard to student appraisal, the researcher found that students' practices during appraisal are riddled with biases. The results indicate that, when evaluating instructors' performance, students take into consideration many factors that are not normally related to that performance. Easy exams prepared by an instructor, a smaller number of assignments, good grades previously awarded, good grades expected, and an instructor's sense of humour were found to be among the bases on which students provide favorable scores. In addition, it was identified that if students have a single negative experience with an instructor, they take revenge on that instructor by providing a poor appraisal score.
Concerning peer appraisal, the study revealed that the majority of peers' practices in appraisal are discouraging. The only encouraging finding is that most instructors perceived that favoritism had little room in their peer appraisal; that is, they stated that peers do not favor their close friends. On the other hand, it was found that peers are poorly informed of each other's performance, yet they provide inflated appraisal scores for each other without reading each appraisal criterion. Conclusively, peer appraisal does not reflect instructors' real performance.
The study also uncovered that most of the heads' practices during appraisal are discouraging. The justifications for this conclusion are that heads lack proper records and information on instructors' performance, and they do not compare instructors' performance against the appraisal criteria; rather, since the appraisal score has implications for promotion, they give equally high scores to all instructors in their departments. Additionally, heads are expected to play a dual role in appraisal, as evaluator and coach, but the coaching role was found to be missing. On the other hand, the only appealing finding regarding head appraisal is that the contrast effect had little room in it. Generally, it can be concluded that students', peers' and heads' appraisals of instructors' performance are riddled with biases.
The study also revealed that instructors' performance feedback exists as a mere formality. Although the majority of instructors have received official feedback (appraisal scores), the feedback is not uniformly available to all instructors. Moreover, the feedback was found to be irregular, unspecific and untimely: since instructors are given only a summarized appraisal score, they cannot identify their specific strengths and weaknesses from the feedback. The situation is aggravated by the fact that the appraisal score is delivered only after more than four months. Similarly, the regularity of delivering the appraisal score and the continuous dialogue between instructors and management on instructors' performance were found to be deficient. This nature of the performance feedback makes it ineffective in serving its ideal purpose, namely improving instructors' performance.
Furthermore, it was uncovered that, whilst all instructors had medium appraisal scores, they did not accept those scores as a reflection of their true performance.
An attempt was also made to evaluate the clarity of the purpose of instructors' performance appraisal. The analysis indicated that instructors had a high level of understanding of why the appraisal is in place, whereas students lacked this understanding. The actual purposes of the appraisal are to make promotion and scholarship decisions, which shows that some ideal purposes, for instance training needs assessment and identifying strengths and weaknesses, are missing.
Therefore, it can at least be stated that the utility of the instructors' performance appraisal process in improving instructors' performance is minimal. To sum up, the process is not effective because of 1) the poor quality of the appraisal criteria; 2) the biased practices of appraisers; 3) an ineffective performance feedback system; and 4) the appraisers' (students') lack of awareness of the appraisal purpose, together with the fact that the appraisal does not serve a formative purpose.
5.2 Recommendations
Based on a thorough analysis of the findings of the study, the researcher puts forward the following recommendations, which may enhance the effectiveness of the instructors' performance appraisal process.
It is obvious that preparing appraisal criteria is the foremost step in the performance appraisal process. This study disclosed that the criteria used for appraising instructors' performance lack indispensable features. Therefore, it is recommended that the appraisal criteria be urgently redesigned. To do so, first, a detailed analysis of what the instructors' job entails must be conducted. This job analysis is needed to identify the specific key tasks and duties that instructors ought to perform. Then a written job description specifying the duties and responsibilities of instructors has to be prepared, and detailed appraisal criteria should be established on its basis.
In the course of establishing the appraisal criteria, the full participation of instructors must be secured. Participation helps establish realistic, relevant, complete and reliable targets. In addition, if the criteria are prepared in consultation with instructors, it is highly likely that they will gain acceptance among instructors, thereby enhancing the effectiveness of the appraisal process. To further enhance the quality of the criteria, the best experiences of other universities should be benchmarked and the necessary adjustments made. Finally, the job description and the detailed appraisal criteria must be communicated to instructors in order to clarify their role perception. The university must also reconsider the kind, quality and quantity of resources available to instructors to meet the established criteria.
The results of this study indicated various factors that contaminate the usefulness of student appraisal of instructors' performance. Though biases are inevitable in performance appraisal, it is possible to minimize their likelihood. The literature provides ample evidence of the role appraiser training plays in reducing errors and biases in the appraisal process. Therefore, to enhance the usefulness of student appraisal, students must be trained during their freshman semester and then given periodic refresher training. The study identified that students lacked training on instructors' performance appraisal and awareness of its purpose; consequently, some of them perceive the appraisal as a tool for punishing instructors who are unpopular among them. Therefore, during training, students must be told that the appraisal is primarily needed to improve instructors' performance, and that this improved performance will in turn contribute to their own learning.
It was also discovered that students are suspicious of the confidentiality of the appraisal scores they provide for instructors. Consequently, they avoid giving open-ended comments about instructors and sometimes inflate the scores of instructors they are afraid of. Therefore, during training, students must be informed of who is responsible for processing the appraisal scores and how they are processed. Moreover, the training program should cover issues such as the meaning of each appraisal criterion and the interpretation of the scales along each criterion. In addition, students must be alerted to the irrelevance of contextual variables (for example, personality attributes, exam easiness and grade offering) to instructors' performance.
The results of the study indicated that peer appraisal is subject to leniency bias because the appraisal score is used for making promotion and scholarship decisions. To overcome this problem, the principal purpose of instructors' performance appraisal must become improving instructors' performance rather than serving the conventional administrative purposes. Instructors must act in a professional manner and view the appraisal as part and parcel of their organizational responsibility. This value system must be cultivated through training sessions, workshops and seminars conducted as in-house training programs, focusing on the appraisal purpose, how to give effective feedback, and the appraisal criteria.
To evaluate each other's performance objectively, instructors must be well informed of each other's performance; such information is critical to the accuracy of ratings. Therefore, peer appraisal should be conducted on a team basis rather than a departmental basis, because instructors within the same team interact more with each other and are better informed of each other's performance.
Heads of departments must do their best to obtain information on the performance of their departments' instructors and, where necessary, keep proper records of it. In addition to evaluating instructors' performance, heads must also play the role of coach and facilitator so that instructors become better performers. Furthermore, heads must be given interpersonal-skills training so that they are equipped with communication, coaching and counseling skills. These skills are essential to make the performance appraisal a pleasant experience for both instructors and heads. More importantly, heads themselves must be evaluated on how effectively they evaluate instructors' performance and play all the necessary roles in enhancing it. This is needed to amplify the value that heads attach to the instructors' performance appraisal process.
1. The frequency of the appraisal must be increased, as conducting it only at the end of each semester can hardly improve instructors' performance.
2. The summarized form of the appraisal score must be replaced with a form that provides instructors with detailed information about their performance. For instance, other things being equal, a report communicating instructors' achievement (appraisal score) along each appraisal criterion would help instructors know their strengths and weaknesses.
3. The time taken to communicate appraisal scores to instructors must be shortened so that instructors can rectify their shortcomings within one semester and perform better in subsequent semesters.
5.3 Area for Further Research
This study has examined the effectiveness of the instructors' performance appraisal process in terms of the qualities of the appraisal criteria, the practices of appraisers, the effectiveness of the feedback, and the clarity of the appraisal's purpose. However, future research may investigate the effectiveness of the instructors' performance appraisal process in terms of its impact on instructors' behavioral outcomes.
BIBLIOGRAPHY
Alo Oladimeji (1999), Human Resource Management in Nigeria, Business and Institutional
Support Associates Limited, Lagos.
Anjum, A (2011), „Performance appraisal systems in public sector universities of Pakistan,
International Journal of Human Resource Studies, vol.1, no1, pp 41-51.
Armstrong, M & Baron, A (1998), Performance Management Handbook, IPM, London
Atiomo A.C. (2000), Human Resource Management; Malt house Management Science Books,
Lagos.
Bacal, R. (1999), Performance Management, New York: McGraw-Hill.
Dawson, C. (2002), Practical Research Methods: A User Friendly Guide to Mastering Research Techniques and Projects, Cromwell Press, Trowbridge, Wiltshire.
Baird, J 1997, “Perceived learning in relation to student evaluation of university instruction”
Journal of Educational Psychology, vol.79, no.1, pp 90-91.
Berk, A (2005), “Survey of 12 strategies to measure teaching effectiveness”, International
Journal of Teaching and Learning in Higher Education, vol. 17, no. 1, pp 48-62.
Boice, F & Kleiner, H (1997), „Designing effective performance appraisal systems,‟ Work
Study, vol.46, no.6, pp 197–201.
Braskamp, L & Ory, J 1994, Assessing faculty work, San Francisco: Jossey Bass.
Burak, Elmer and Smith (1977), Personnel Management: A Human Resource Systems
Approach; Reinhold Publishing Corporations Ltd. New York.
Centra, A (2003), „Will teachers receive higher student evaluation by giving higher grades and
less course work?” Research In Higher Education, vol.44, no.5, pp 495-518.
Longenecker, Clinton O. (1997), "Why managerial performance appraisals are ineffective: causes and lessons", Career Development International, vol.2, no.5, pp 212-218.
Cohen, P. (1981), "Student ratings of instruction and student achievement: a meta-analysis of multi-section validity studies", Review of Educational Research, vol.51, no.3, pp 281-309.
Crane, D.P., & Jones Jr., W.A (1991): the public manager. Atlanta Georgia state university press
Cummings. M.W. (1972): “Theory and Practice” William Heinemann Ltd. London.
Danial, A (2011), „Performance evaluation of instructors in universities: contemporary issues
and challenges‟, Journal of Education and Social research, vol.1, no.2, pp 10-
31.
Danielson, C & McGreal, TL (2000), Instructor evaluation to enhance professional practice
Princeton, New Jersey: Educational Testing Service.
Dargham, S (2007), Effective management of the performance appraisal process in Lebanon: an
exploratory study
David, P & Macayan, V (2010): “Assessment of teacher performance” The Assessment
Handbook, vol.3, pp 65-76.
Davis, T & Landa, M (1999), A contrary look at performance appraisal, Canadian
manager/manager Canadian, pp 18-28.
Deming, E (1986), Out of crisis: quality, productivity and competition position, Cambridge
University Press, Cambridge.
Derven, M. (1990): “The Paradox of Performance Appraisal” Personnel Journal Volume 69.
Diane M. Alexander (2006), how do 360 degree performance reviews affect employee Attitudes,
effectiveness and performance?: University of Rhode Island press.
Dowell, D & Neal, J (2004), „The validity and accuracy of student ratings of instruction: a reply
to Peter A. Cohen‟, Journal of Higher Education, vol.54, pp. 459-63.
Ellet, C, Wren, C, Callendar, K & Loup, K (1997), „Assessing enhancement of learning,
personal learning environment, and student efficacy: alternatives to traditional
faculty appraisal in higher education‟, Journal of Personnel evaluation in
Education, vol.11, pp 167-192.
Elverfeldt, A (2005), Performance appraisal: how to improve its effectiveness, unpublished
master thesis, University of Twente, Enschede.
Emery, R, Kramer, R, Tian, G (2003), „Return to academic standards: a critique of student
evaluations of teaching effectiveness‟, Quality Assurance in Education, vol.11,
no.1, pp 37-46.
Ivancevich, J.M. (2004), Human Resource Management, 7th edition, Irwin/McGraw-Hill, USA.
Fakharyan, M., Jalilvand, R.M, Dini, B. &Dehafarin, E.(2012): “The effect of performance
appraisal satisfaction on employee‟s outputs implying on the moderating role of
motivation in workplace” International journal of business and management
tomorrow.
Fried, Y (1999) “Inflation of subordinates‟ performance ratings: main and interactive effects of
rater negative affectivity, documentation of work behavior, and appraisal
visibility”, Journal of Organizational Behavior, vol.20, no.4, pp.431-444.
Friedman, S (1984), „Strategic appraisal and development at General Electric Company‟,
Journal of Management, vol.45, no.2, pp 183-201.
Goddard, I & Emerson, C (1996): “Appraisal and your school”, Oxford: Heinemann.
Harris, M (1994), “Rater motivation in the performance appraisal context: a theoretical
framework”, Journal of Management, vol.20, no.4, pp 737-756.
Horne, H & Pierce, A (1996), A practical guide to staff development and appraisal in schools,
London: Koganpag.
Islam, R & Rasad, M (2005), „Employee performance evaluation by AHP: a case study‟,
Honolulu, Hawaii.
Jacobs R. (1980), Expectations of behaviorally anchored rating scales.
Johnson, S. (1990). Teachers at work: Achieving success in our schools. New York: Basic
Books.
Kauchak, D., Peterson, K., & Driscoll, A. (1985) “An interview study of teachers‟ attitudes
towards teacher evaluation practices”. Journal of Research and Development in
Education, 19, 32–37.
Kavanagh, P., J. Benson, M. Brown, (2007). “Understanding performance appraisal fairness”
Asia Pacific Journal of Human Resource, 45(2): 89-99.
Keig, L & Waggoner, M (1994), "Collaborative peer review: the role of faculty in improving college teaching", ASHE/ERIC Higher Education Report, no.2.
Kessler HW (2003). Motivate and reward: Performance appraisal and incentive systems for
business success. Great Britian: Curran Publishing Services.
Khan, A (2007): “Performance appraisal‟s relation with productivity and job satisfaction‟,
Journal of Managerial Sciences, vol.1, no.2, pp 100-114.
Kondrasuk, J.N. et al. (2002). An Elusive Panacea: The Ideal Performance Appraisal. Journal of
Managerial Psychology, 64 (2), 15-31
Kyriakides, L, Demetriou, D & Charalambous, C (2006), „Generating criteria for evaluating
teachers through teacher effectiveness research‟, Educational Research, vol.48,
no.1, pp 1- 20.
Lee, C (1985), „Increasing performance appraisal effectiveness: matching task types, appraisal
process, and evaluator training‟, Academy of Management Review, vol.10,
no.2, pp 322-331.
London M (2003): “Job feedback: Giving, seeking, and using feedback for performance
improvement” Second edition London, England: Lawrence Erlbaum Associates.
Lortie, D. (1975). Schoolteacher: A sociological study. Chicago: University of Chicago Press.
Mamoria, C.B. (1995): “Personnel Management; Himalaya Publishing House, Bombay.
Marquardt, M. (2004): “Optimizing the Power of Action Learning”. Palo Alto, CA: Davises-
Black, 26 (8): 2.
McGregor, Douglas (1957), "An Uneasy Look at Performance Appraisal", Harvard Business Review, May/June.
Methods for Graduate Business & Social Science Students. California, Sage.
Milkovich, G. M., & Boudreau, J. W. (1997): “Human resource management” Chicago: Irwin.
Mondy, R. Wayne and Noe, Robert M. (1981): “Human Resource Management, Massachusetts:
Simon & Schuster, Inc.,
Monyatsi P, Steyn, T & Kamperet, G (2006): “Instructor perceptions of the effectiveness of
instructor appraisal in Botswana”, South African Journal of Education, vol.26,
no.3, pp 427–441.
Morris, L (2005), „Performance appraisals in Australian Universities: imposing a managerialistic
framework into a collegial culture‟, AIRAANZ, pp 388-393.
Mwita, I 2000, „Performance management model- A systems-based approach to public service
quality‟, The International Journal of Public Sector Management, vol. 13, no.1,
pp. 19-37.
Naftulin, D, Ware, J & Donnelly, F 1973, “The doctor fox lecture: a paradigm of educational
seduction”, Journal of Medical Education, vol.48, pp 630-635.
Nimmer, J. & Stone, E. (I991), „Effects of grading practices and time of rating on student ratings
of faculty performance and student learning‟, Research in Higher Education,
vol.32, no.2, pp 195-215.
Noe, A 1996, Human Resource Management: Gaining a Competitive Advantage, 2nd edition,
Irwin /McGraw-Hill, USA.
Nzuve S.N.M. (2007). Management of Human Resources: A Kenyan Perspective, Nairobi, Basic
Modern Management Consultants.
Onwuegbuzie, J, Witcher, E, Collins, MT, Filer, D, Wiedmaier, D, & Moore, W (2007),
„Students' perceptions of characteristics of effective college teachers: a validity
study of a teaching evaluation form using a mixed-methods analysis‟, American
Educational Research Journal, vol.44, no.1, pp 113-160.
Peterson, K. (2000). Teacher evaluation: A comprehensive guide to new directions and practices
(2nd ed.). Thousand Oaks, CA: Corwin
Prowker, A (1999), Effects of purpose of appraisal on leniency errors: an exploration of self-
efficacy as a mediating variable, unpublished master thesis, Virginia
polytechnic institute and state university.
Rao, V.S.P (2005). Human Resource Management: Text and Cases. (2nd ed.). New Delhi: Excel
Books.
Rasheed, I, Aslam, D, Yousaf, S & Noor, A (2011), „A critical analysis of performance appraisal
system for instructors in public sector universities of Pakistan: A case study of
the Islamia University of Bahawalpur (IUB)‟, African Journal of Business
Management, vol.5, no.9, pp 3735-3744.
Robbins, S. P. (1997). Organizational behavior: Concepts, controversies, applications (8th ed.).
Upper Saddle River, New Jersey: Prentice Hall.
Roberts, E (2003), „Employee performance appraisal system participation: a technique that
works‟, Public Personnel Management, vol.32, no.1, 89-98.
Rudd, R, Hoover, T & Connor, N (2001): “Peer evaluation of teaching in University of Florida‟s
college of agricultural and life sciences”, Journal of Southern Agricultural
Education Research, vol.51, no.1, pp 189-200.
Maharaj, Sachin (2014), "Administrators' views on teacher evaluation: examining Ontario's teacher performance appraisal", Canadian Journal of Educational Administration and Policy, issue 152.
Scullen, S. E., Mount, M. K., & Judge, T. A. (2003): “Evidence of the construct validity of
developmental ratings of managerial performance. Journal of Applied
Psychology, 88(1), 50–66.
Shevlin, M., Banyard, P., Davies, M.N.O. & Griffiths, M.D.(2000). The validity of student
evaluations in higher education: Love me, love my lectures? Assessment and
Evaluation in Higher Education, 25, 397-405.
Simmons, J & Iles, (2010) „Performance appraisals in knowledge-based organizations:
implications for management education‟, International journal of management
education, vol.9, pp 3-18.
Steiner, D. D., & Rain, J. S. (1989): “Immediate and delayed primacy and regency effects in
performance evaluation”. Journal of Applied Psychology, 74: 136-142.
Stronge, J & Tucker, P 2003, Handbook on Teacher Appraisal: Assessing and Improving
Performance, Eye On Education Publications.
Swanepoel, B.,Erasmus, B., Van, W. M. and Schenk, H. (2000): “South African Human
Resource Management Theory and Practice”. Cape Town: Juta and Company.
Thomas, S. L., & Bretz, R. D., Jr. (1994, Spring), "Research and practice in performance
appraisal: Evaluating performance in America's largest companies", SAM
Advanced Management Journal, 28-37.
Tziner, A & Kopelman, R (2002), „Is there a preferred performance rating format? A non
psychometric perspective‟, Applied Psychology: an International Review,
vol.51, no.3, pp 479-503.
Walsh, B (2003): “Perceived fairness of and satisfaction with employee performance appraisal”,
unpublished PhD dissertation, Louisiana State University.
Weinberg, A (2007), Evaluating methods for evaluating instruction: The case of higher
education, National Bureau of Economic Research working paper, 12844:
Cambridge, MA.
Welbourne, T. M., Johnson, D. E., & Erez, A. (1998). The role-based performance scale:
Validity analysis of a theory-based measure. Academy of Management Journal,
41(1), 540–555.
Worthington, A (2002), „The impact of student perceptions and characteristics on teaching
appraisals: a case study in finance education‟, Assessment & Appraisal in
Higher Education, vol.27, no.1, pp 49-64.
Yamane, Taro. (1967). Statistics: An Introductory Analysis, 2nd Ed., New York: Harper and
Row.
108
Yong. F. (1996), “Inflation of subordinates‟ performance ratings: Main and interactive effects
of Rater Negative Affectivity, Documentation of Work Behavior, and Appraisal
visibility. Journal of organizational Behavior”, Vol.20, No.4. (Jul.,1999),
pp.431-444.
Appendices
Questionnaire for instructors
Dear respondent,
This questionnaire is designed to elicit the information needed for the research entitled "Effectiveness of Instructors' Performance Appraisal Process in Bahir Dar University: A Case of Business and Economics College." The study is for academic purposes, i.e., in partial fulfillment of the requirements for the award of the degree of Master of Business Administration (MBA). Your genuine response to each question on this questionnaire will determine the success of the study. You are hereby assured that your identity and the information you provide will be kept in strict confidence.
General direction:
You need not write your name anywhere on this questionnaire.
Please carefully read each of the following questions and put a tick
mark (√) in the appropriate box. You may choose more than one option
where necessary.
Phone: 0912902525
Email: [email protected]
Part One: Demographic Profile of Respondents (Instructors)
[1] ≤1 [2] 2-3 [3] 4-6 [4] 7-10 [5] ≥11
Please express the extent of your agreement or disagreement with the following statements about
the quality of the evaluation criteria currently used to evaluate the performance of instructors in
your University.
(SDA = Strongly Disagree, DA = Disagree, N = Neutral, A = Agree, SA = Strongly Agree)
SDA DA N A SA
No Description (1) (2) (3) (4) (5)
1 Upon employment in Bahir Dar University, every
instructor is given a job description specifying his/her
duties
2 Upon employment in Bahir Dar University, every
instructor is formally trained on the criteria used to
evaluate his/her performance
3 All evaluation criteria currently used to evaluate my
performance are relevant to tasks in my job.
4 All my duties are measured in current evaluation
criteria
5 Current evaluation criteria take into consideration the
practical difficulties of the environment in which I
perform.
6 Current evaluation criteria are reliable
7 Current evaluation criteria measure how an instructor
contributes to students' learning
8 Current evaluation criteria are prepared in consultation
with instructors
9. How do you manage to keep your performance up to standard as an instructor in the
University?
11 Students know the purpose of instructors' performance
evaluation
12 Students have received enough training to evaluate
instructors' performance
1. Have you ever received official feedback on your performance after a performance evaluation?
[1] Yes [2] No
If your answer to question no. 1 is No, please answer questions no. 2 and 3 and then move to the
questions under Part Five. If your answer to question no. 1 is Yes, skip question no. 2 and
continue with the remaining questions.
2. What was the reason that you didn't receive the feedback?
3. Besides the formal feedback after the performance evaluation at the end of the semester, do
management (department heads) and instructors hold continuous dialogue (discussion)
about instructors' performance?
[1] Yes [2] No
5. How long after a performance evaluation is the feedback on your performance made available to you?
[1] < 1 Month [2] 1 -2 Months [3] 3-4 Months [4] > 4 Months
6. How do you rate the ability of the feedback system to help you identify your specific
strengths and weaknesses?
[1] Poor [2] Satisfactory [3] Good [4] Very good [5] Excellent
7. What was your score in your most recent (first semester of the 2014/15 academic year)
performance appraisal?
8. To what extent does your evaluation score reflect your true performance?
A. Student evaluation: Very low [1] Low [2] Medium [3] High [4] Very high [5]
B. Head evaluation: Very low [1] Low [2] Medium [3] High [4] Very high [5]
C. Peer evaluation: Very low [1] Low [2] Medium [3] High [4] Very high [5]
[1] Very low [2] Low [3] Medium [4] High [5] Very high
2. What do you think are the purposes of evaluating your performance in your University? Mark
all those that apply in your University!
SDA DA N A SA
No Description (1) (2) (3) (4) (5)
1 To give training to improve performance
2 To identify one's strengths and weaknesses through
feedback
3 To use as a base for giving scholarship
4 To give promotion for those who meet standard
5 To dismiss instructors who fail to meet standard
6 To provide award for best performers
7 Just for formality
8 Other purposes
Your overall comments about instructors' performance appraisal are very important!
______________________________________________________________________________
______________________________________________________________________________
__________________________________________________________________________
Questionnaire for students
Dear respondent,
This questionnaire is designed to elicit the information needed for the research entitled "Effectiveness of Instructors' Performance Appraisal Process in Bahir Dar University: A Case of Business and Economics College." The study is for academic purposes, i.e., in partial fulfillment of the requirements for the award of the degree of Master of Business Administration (MBA). Your genuine response to each question on this questionnaire will determine the success of the study. You are hereby assured that your identity and the information you provide will be kept in strict confidence.
General direction:
You need not write your name or ID No. anywhere on this
questionnaire.
Please carefully read each of the following questions and put a tick
mark (√) in the appropriate box. You may choose more than one option
where necessary.
Phone: 0912902525
Email: [email protected]
Part One: Demographic Profile of Respondents (students)
Interview with Heads of Departments
1. How do you keep track of the instructors' performance? Can you get full information about all
instructors' performance along all evaluation criteria?
2. What is/are your role(s) as HOD in the instructors' performance appraisal, other than evaluating
instructors?
Training,
Coaching,
Feedback or other roles?
3. Is there any training (about the instructors' performance evaluation process) given to instructors
and students?
4. Who prepared the evaluation form? Are instructors consulted on the design of the
form and the appropriateness of the evaluation criteria?