SJT Monograph
1. Introduction
2. What are Situational Judgement Tests?
3. How and where are SJTs used?
3.1. How are SJTs used?
3.2. SJTs in other contexts
8.3. Format
8.4. Response instructions
8.5. Response options
9.3. Practical tips
9.4. Susceptibility to coaching
Useful resources
Appendix A – SJT Domains
References
1. Introduction
From FP 2013, selection to the UK Foundation Programme will be based on
a Situational Judgement Test (SJT) and an Educational Performance Measure
(EPM). The SJT forms part of the best practice approach for selection to the UK
Foundation Programme and allocation to foundation schools.
Examples of the FY1 SJT questions, with answer rationales, are available from
www.foundationprogramme.nhs.uk.
2. What are Situational Judgement Tests?
In a Situational Judgement Test (SJT) applicants are presented with a set of
hypothetical work relevant scenarios and asked to make judgements about
possible responses. Following best practice, SJT scenarios should be based on a
thorough analysis of the job role to determine the key attributes and behaviours
associated with successful performance in the job. This is to ensure that the
test content directly reflects work related situations that an applicant will face
once in the job. A key output from a job analysis is a test specification which
describes in detail the attributes to be assessed in an SJT. The design of the SJT
for selection to the UK Foundation Programme followed best practice – please
see sections 5 and 6 for more information.
SJTs require applicants to use their judgement about what is effective behaviour
in a work relevant situation rather than focusing on clinical knowledge or
skills. SJTs are often used in combination with knowledge based tests or
related educational performance indicators to give a better overall picture of
an applicant’s aptitude for a particular job. In this way, SJTs focus more on
professional attributes, compared to clinical knowledge exams, for example.
Applicants’ responses are evaluated against a scoring key (correct answers) which
is predetermined by subject matter experts (SMEs), including clinicians, so that
the scoring of the test is standardised. Each applicant’s responses are assessed
in exactly the same way, and it is therefore possible to compare applicants.
Please see section 8.6 on the scoring methods for the SJT for selection to the
UK Foundation Programme.
SJTs are usually well accepted and received positively by applicants as they are
recognised as being relevant to the role applied for.1,2 SJTs also offer the benefit
of presenting a realistic job preview,2 as SJTs provide the applicant with further
information relating to typical or challenging situations that they may encounter
in the target job role. The SJT developed for selection to the UK Foundation
Programme has been well received by the 8,000 participants in pilots in the
development stages. Please see the archived website of the Improving Selection
to the Foundation Programme project (www.isfp.org.uk) for more information.
SJTs have been used for several decades in selection for many different
occupational groups. One of the earliest examples is the U.S. civil service
in 1873, in which applicants were presented with a job-related situation and
asked to write down how they would respond.3 The British Army also used tests
of judgement for officer recruitment in World War II.4 These SJTs aimed to
measure specific attributes and experience more efficiently – and on a larger
scale – than would be possible through interviews, for example.
SJTs using managerial situations grew in popularity over the second half of
the 20th century in Europe and the USA. These aimed to assess job applicants’
potential to perform well in supervisory roles and were used in US government
recruitment and by large corporations such as the Standard Oil Company.5
SJTs are used for many purposes within selection, assessment and development.
As a measurement methodology, rather than a single type of test or tool, SJTs
can be tailored to fit the specific needs of the organisation or selection process.
Often SJTs are used as a shortlisting tool or ‘sift’ where large volumes of
applicants take an SJT and those who are successful are shortlisted to the next
stage of the selection process; for example, to a selection centre. Usually, SJTs
form part of a wider selection process, combined with other exercises or
assessment methodologies. SJTs can also be used for development purposes,
where the aim is to identify areas for development and training needs.
Whatever purpose the SJT is being used for, it is important to note that the design
should be based on a job analysis to ensure that it is targeting the attributes
required; that the test specification is developed in collaboration with key
stakeholders and role incumbents; and that a thorough design and evaluation
process is undertaken to confirm the psychometric quality of the test.
SJTs are increasingly employed in large scale selection processes; for example
they are used by the police in the UK as part of the assessments for recruitment
and promotion and by the Federal Bureau of Investigation (FBI) in the USA, as
well as in many public and private sector graduate recruitment processes and
training schemes. Some examples of SJTs used in other occupational groups are
outlined below.
The National Policing Improvement Agency (NPIA) has developed an SJT
used as part of the selection process for
the High Potential Development Scheme (HPDS). The HPDS is a development
programme aimed at student officers, constables and sergeants who display
exceptional potential. This SJT requires candidates to demonstrate their
judgement and ability to make effective decisions in a series of policing based
managerial situations. Candidates are asked to use their judgement to rate each
of the options in terms of effectiveness on a scale of 0 to 5.
In the context of UK medicine, SJTs have been used successfully for several
years for selection into postgraduate training, including General Practice and
Public Health. SJTs have been piloted for a variety of other specialties including:
Surgery, Radiology, Histopathology, Core Medical Training, Anaesthesia and
Acute Specialities. SJTs are also used in Australia to select trainees for entry
to General Practice training.
SJTs are seen as a valuable addition to the selection processes within the
medical education and training pathway. It is widely acknowledged that non-
cognitive or professional attributes (e.g. communication, integrity, empathy,
teamworking) are essential requirements for a doctor.6 SJTs are able to target
these important professional attributes that are difficult to assess through
traditional examinations.
4. What is the research evidence for using SJTs in
selection?
For any selection tool in any context, it is important that there is appropriate
evidence that the tool works (i.e. is it selecting the right people for a role in a
consistent way?). As SJTs have been used for many years in many contexts, there
is a great deal of evidence to support their use as part of selection processes.7
Research evidence has consistently shown that, as a selection tool, when designed
appropriately, SJTs show good reliability (i.e. measure the criteria consistently)
and validity (i.e. measure what they are intended to measure).8,9 The research
literature indicates that SJTs are able to predict job performance and training
criteria across a range of occupations,8,10,11 that is, the way an individual responds
to an SJT question has been found to predict actual behaviour and performance
once in a role. Several validity studies have also shown that SJTs predict
subsequent job performance over and above structured interviews, tests of IQ
and personality questionnaires.12-14
In dealing with issues of fairness, researchers generally examine sub-group
differences – that is, the extent to which different groups (e.g. ethnic groups)
perform differently from other groups in a particular selection method. In relation
to analysis of fairness, research shows that differences in mean scores on SJTs
between ethnic groups tend to be smaller than for tests of IQ, for example.19
The best way to summarise the research literature is to note that SJTs measure
an individual’s awareness and judgement about what is effective behaviour in
particular contexts. In the FY1 SJT, various professional attributes are targeted
such as coping with pressure, teamwork and effective communication.
SJTs are also designed to draw on an applicant's knowledge of how they should
respond in a given situation, rather than what they would do. This has been
shown to reduce the effects of coaching and the advantage of 'test-wise'
applicants, and is in line with the GMC's emphasis on probity. Please see
section 8.4 for more information.
Figure 1. Steps to design and validate an SJT
The first step in designing an SJT is to determine the test specification. A test
specification includes a description of the test content and of the item types
and response formats used (e.g. multiple choice, rank order, rating, best and
worst). The specification should also describe the length of the test, the
scoring convention to be used and how the test will be administered.
Step 4 is the construction of the test itself: it may be in a written paper-and-pencil
format, presented electronically or, for some SJTs, the scenarios may be
presented in a video or interactive format.
It is then important that the constructed test is piloted to ensure that it is fair
and measures what it is intended to measure (step 5). Piloting is also a valuable
opportunity to seek applicant reactions, to ascertain the acceptability of the test
and to gain feedback on whether applicants are satisfied that the test is fair and
relevant to the role applied for.
In response to these concerns, an Options Appraisal was carried out to review the use of
potential selection tools (the Options Appraisal can be found at www.isfp.org.
uk). This extensive Options Appraisal showed an SJT to be the most effective
and efficient way to select applicants, alongside a measure of educational
performance.
As well as selecting according to the attributes required of the role, as set out
in the national person specification, the SJT also addresses the issue of fairness.
Because the SJT questions are standardised and the scoring criteria are defined
in advance, all applicants have the same opportunity to demonstrate their
competence and aptitude in relation to the
attributes assessed in the SJT. In addition, the SJT will be invigilated, meaning
that students will have a fair chance to do well without the possibility that some
are receiving outside help.
A further concern with the application form was the resource required to
mark the white space questions. SJTs provide an efficient and flexible method
of selection for large scale recruitment as they can be administered to large
groups and marking them is less resource intensive than the previous ‘white
space’ application forms, as they can be scanned and are machine markable.
Consultants, and other Subject Matter Experts (SMEs) including FY1s,
continue to be involved in the process – but they contribute their expertise in
the development of the content of the SJT papers, which are machine marked,
rather than in the hand marking of all 7,000+ answer sheets. Thus the clinical
contribution to the SJT is significantly more efficient and effective than with the
previous application process.
To define the professional attributes that are expected in the FY1 role, and as
such, those to be assessed as part of the SJT, a multi-method job analysis was
conducted. A job analysis is a systematic process for collecting and analysing
information about jobs. A properly conducted job analysis provides objective
evidence of the skills and abilities needed for effective job performance and
thus provides support for the use of selection procedures measuring those skills
and abilities. As such, a comprehensive job analysis is typically regarded as best
practice as a first step in designing any selection system.
The FY1 Job Analysis was undertaken in a range of clinical settings, from
inner-city Manchester and Cambridge to the more remote Scottish islands, and
included interviews with those familiar with the FY1 role, observations of FY1
doctors and a validation survey that asked participants to rate the importance of
the professional attributes identified. A total of 294 individuals participated in
the job analysis, providing a wide range of perspectives. The results from this
analysis showed that the SJTs should target five professional attributes and these
are outlined below (further details can be found in Appendix A):
• Commitment to Professionalism
• Coping with Pressure
• Effective Communication
• Patient Focus
• Working Effectively as Part of a Team
A matrix which outlines the SJT target attribute domains and examples of
possible SJT scenarios associated with them is provided in Table 1. The report
of the extensive FY1 Job Analysis is available from www.isfp.org.uk.
Table 1 (extract): Patient Focus
• Identifying that a patient's views and concerns are important and they should have input into their care
• Considering that a patient may have different needs from others around them
• Spending time trying to understand a patient's concerns and empathising with them
All items developed for the SJT for selection to the Foundation Programme
have been through the extensive development process outlined in Section 5.
New SJT items are developed each year to ensure that the scenarios presented
to applicants are relevant and reflect current practice.
To ensure the SJT content is relevant to the role of a FY1 and is fair to all
applicants, working with Subject Matter Experts (SMEs) is essential for item
development. FY1 SJT scenarios are therefore written in collaboration with
SMEs from a wide range of specialties. This ensures that the scenario content
covers the breadth of the role of a FY1. SMEs include educational and clinical
supervisors, clinical tutors and foundation doctors themselves.
All SJT items undergo a thorough review process that includes a review by
experts in SJT design and development, and a review by a team of SMEs which
include Foundation doctors. At each stage, items are reviewed for fairness and
relevance. Reviews also take place to ensure that each item is appropriate for all
applicants in terms of the language used and that locality specific knowledge is
avoided. This ensures that the SJT is fair to all applicants.
8.2. Context of the FY1 SJT
8.3. Format
Most SJTs are written and delivered as a paper and pencil test.2 Whilst video-
based testing is possible, written tests offer wider participation and more cost-
effective delivery.22 Written SJTs, in comparison to video-based SJTs, have also
been found to have a higher correlation with cognitive ability22 and so can be
more appropriate in job roles that require advanced cognitive processing skills,
as in medicine. The FY1 SJT is presented in written format.
There are a variety of different response formats that can be used in SJTs; for
example, pick best, pick best/worst, pick best three, rate effectiveness and rank
options. In the SJT for selection to the Foundation Programme, applicants are
asked to rank the responses as in Example 1, or choose the 3 most appropriate
actions from a list of 8 possible options as seen in Example 2. The choice of
response format reflects the scenario content and the appropriate format to both
provide and elicit the information needed. For example, the nature of some
scenarios and the possible responses to them lend themselves to ranking items
(requiring ability to differentiate between singular actions in response to a
scenario that vary in appropriateness), whereas some scenarios lend themselves
to multiple choice items (where it is necessary to do more than one thing/tackle
more than one aspect in response to a scenario).
Two typical examples of items that employ these response formats are
presented next, along with the typical answer-sheet format.
Example 1: SJT for FY1 (ranking)
You are looking after Mr Kucera who has previously been treated for
prostate carcinoma. Preliminary investigations are strongly suggestive
of a recurrence. As you finish taking blood from a neighbouring
patient, Mr Kucera leans across and says “tell me honestly, is my
cancer back?”
Rank in order the following actions in response to this situation
(1= Most appropriate; 5= Least appropriate)
A. Explain to Mr Kucera that it is likely that his cancer has come back
B. Reassure Mr Kucera that he will be fine
C. Explain to Mr Kucera that you do not have all the test results, but you
will speak to him as soon as you do
D. Inform Mr Kucera that you will chase up the results of his tests and ask
one of your senior colleagues to discuss them with him
E. Invite Mr Kucera to join you and a senior nurse in a quiet room, get a
colleague to hold your ‘bleep’ then explore his fears
Fill in the box to indicate the ranking for each option on the answer sheet. If
you thought D was the most appropriate option in response to Question 1, A the
second most appropriate, B the third, E the fourth and C the least appropriate
option, you would complete the answer sheet as follows:

[Example answer sheet illustration]
The correct answer for this item is DCEAB. How the ranking items are scored
is outlined further in section 8.6.
Example 2: SJT for FY1 (multiple choice)
Fill in THREE boxes out of the eight available for each question on the
answer sheet. For example, if you thought the three most appropriate options in
response to a question were B, F and H, you would complete the answer sheet
as follows:

[Example answer sheet illustration]
The correct answer for this item is BCH. How the multiple choice items are
scored is outlined further in section 8.6.
Using both response formats enables a fuller range of item scenarios to be used,
rather than forcing scenarios into a less appropriate item type and potentially
reducing item effectiveness (e.g. asking applicants to rank three equally correct
options). Approximately two thirds of the items used in the FY1 SJT will have
the ranking answer format, and one third will have the multiple choice format.
This will allow a good balance of scenarios and attributes to be tested.
The general response instructions for all the items in the test have a ‘knowledge’
based format (what should you do) as opposed to a behavioural based format
(what would you do). Knowledge based instructions are deemed more
appropriate for high stakes selection as they measure maximal performance
(how respondents perform when doing their best), whereas behavioural based
instructions measure typical performance (how one typically behaves).14 It is
therefore not possible to fake a response by giving the answer an applicant
thinks the employer wants, as can be done in a typical performance test using a
behavioural based format. Knowledge based instructions have also been found
to be less susceptible to coaching than behavioural response formats14 due to
some of the aspects outlined above. In this way, the FY1 SJT was designed to
minimise susceptibility to coaching effects.
In the context of professional behaviour, and with the General Medical Council
(GMC) putting a high premium on probity, it is also more appropriate to frame
the response instruction as what you ‘should’ do rather than what you ‘would’
do. For example, the correct answer, as determined by a panel of SMEs, will be
in accordance with GMC guidelines and policy documents such as Tomorrow’s
Doctors; hence the answer will always be what a FY1 'should' do. Although it
is appreciated that asking an applicant what they 'should' do may not reflect
their actual behaviour in the role, it does provide evidence of whether or not
the applicant is aware of what defines appropriate behaviour in the role.
The response instruction for each SJT item reminds applicants to answer the
scenario according to what they should do.
Responses to the scenario are usually actions to address the situation described,
e.g. 'Make sure the nursing staff are informed'. In this way, responses are mapped
to a target professional attribute such as 'coping with pressure'. Response options
are realistic and include things that applicants are likely to do, and the 'best
thing to do' is always included. There is a mixture of good, acceptable and poor
responses to the situation; however, completely implausible responses, as judged
by SMEs, are not included.
In contrast with clinical knowledge items, with SJT items there is often no
definitive correct answer, as the scenarios implicitly assess judgement. Following
best practice, the SJT scoring key is determined through:
• Consensus among item writers and initial SMEs at the item review stage
• Expert judgement in a concordance panel review
• Review and analysis of the pilot data (NB: in addition to items developed
in piloting, all new items are trialled alongside live items before they
'count'). Consideration is given to how applicants keyed the items and to the
consensus between this and the key derived from the first two stages
The final key is determined using data from several pilot studies. For example,
if high-performing applicants consistently produced a key different from that
established by the concordance panel review, the key would be reviewed with
the assistance of SMEs.
Despite this predetermined scoring key, the scoring is not 'all or nothing'; it
is based on how close the response is to the key. For ranking items, a total of
20 marks is available for each item, with up to four marks for each of the five
response options. Applicants get points for 'near misses', so an applicant does
not need to place every option in exactly the correct order to obtain a good
score on the SJT. Figure 2 provides an example of how the ranking scoring
system works.
Example ranking scoring
The example below demonstrates how the scoring convention works for a
ranking item.
You are looking after Mr Kucera who has previously been treated for
prostate carcinoma. Preliminary investigations are strongly suggestive
of a recurrence. As you finish taking blood from a neighbouring
patient, Mr Kucera leans across and says “tell me honestly, is my
cancer back?”
Rank in order the following actions in response to this situation
(1= Most appropriate; 5= Least appropriate)
A. Explain to Mr Kucera that it is likely that his cancer has come back
B. Reassure Mr Kucera that he will be fine
C. Explain to Mr Kucera that you do not have all the test results, but you
will speak to him as soon as you do
D. Inform Mr Kucera that you will chase up the results of his tests and ask
one of your senior colleagues to discuss them with him
E. Invite Mr Kucera to join you and a senior nurse in a quiet room, get a
colleague to hold your ‘bleep’ then explore his fears
The correct answer is DCEAB. If an applicant thought D was the most appropriate
option, A the second most appropriate, B the third, E the fourth and C the least
appropriate (i.e. answered DABEC), they would receive the following marks:
• 4 points for option D as it is in the correct position
• 1 point for option C as the correct position is 2, but it was ranked 5th
• 3 points for option E as the correct position is 3, but it was ranked 4th
• 2 points for option A as the correct position is 4, but it was ranked 2nd
• 2 points for option B as the correct position is 5, but it was ranked 3rd
The applicant would therefore receive a total of 12 marks for this item out of a
possible score of 20.
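To illustrate, the ranking convention can be expressed in a few lines of Python.
This is a minimal sketch rather than the official scoring software: it assumes,
consistently with the worked example above, that each option earns four marks
minus one mark for every position it sits away from its keyed position, and the
function name is illustrative only.

    def score_ranking_item(key, response):
        # key and response are strings of option letters in ranked order,
        # e.g. key "DCEAB" means D is ranked 1 (most appropriate).
        total = 0
        for option in key:
            keyed_rank = key.index(option) + 1
            given_rank = response.index(option) + 1
            # Four marks for the correct position, one mark lost per place away.
            total += max(0, 4 - abs(keyed_rank - given_rank))
        return total

    # Worked example from the text: key DCEAB, applicant answers DABEC.
    assert score_ranking_item("DCEAB", "DABEC") == 12  # 4 + 1 + 3 + 2 + 2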
For multiple response items, points are received for each correct answer
provided; no negative marking is used. Four points are available for each option
identified correctly, making a total of 12 points available for each item. If an
applicant ‘ties’ two responses, then they will receive no marks for either option.
Example multiple choice scoring
The example below demonstrates how the scoring convention works for a
multiple choice item.
The correct answer is BCH. If an applicant thought the three most appropriate
options in response to a question were B, F and H they would receive:
• 4 points for option B
• 0 points for choosing option F
• 4 points for choosing option H
The applicant would therefore receive a total of 8 marks for this item out of a
possible score of 12.
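The multiple choice convention can be sketched in the same way, again as an
illustration only; the rule that selecting more than three options scores zero
for the item is taken from the practical tips in section 9.3, and the function
name is illustrative.

    def score_multiple_choice_item(key, response):
        # key and response are sets of option letters, e.g. {"B", "C", "H"}.
        if len(response) > 3:
            return 0  # selecting four or more options scores zero (see section 9.3)
        # Four marks per correctly identified option; no negative marking.
        return 4 * len(key & response)

    # Worked example from the text: key BCH, applicant chooses B, F and H.
    assert score_multiple_choice_item({"B", "C", "H"}, {"B", "F", "H"}) == 8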
Further information about the scoring of the FY1 SJT can be found in the SJT
FAQs available from the UKFPO website – www.foundationprogramme.nhs.uk
The SJT for selection to the Foundation Programme will last two hours and
20 minutes and will consist of 70 items (60 'live' items and 10 'pilot' items).
It is recommended that you aim to allow yourself two minutes per item; 70 items
at two minutes each accounts for the full 140 minutes, and this pacing is
consistent with previous applicant experience and feedback during the SJT
pilots.12 This number of items was shown to be sufficient to cover the five
target attribute domains in a sufficiently reliable and broad way without
overloading applicants.
Ten of these items are 'pilot' items that are embedded within the test in order
to validate them for future use. This method of trialling items is consistent
with psychometric test practice across selection contexts. Your responses to the
trial items will not be included in your final score.
Although applicants find the SJT challenging, the vast majority complete the
test well within the allocated time. Sixty 'live' items also allow a sufficient
distribution of scores given the large number of applicants who will be taking
the test. This is slightly more items than in similar tests used in other areas
of medical selection, and is planned to ensure a sufficient level of test
reliability in the context of selection to the Foundation Programme.
9.3. Practical tips
• When taking the test, look closely at the detail of the scenario and the
options, and check whether you are being asked to pick the three most
appropriate options or to rank the options in order of most to least appropriate
(NB: the options are discrete actions and are not chronological). It is important
that you read each scenario and every possible response thoroughly before
beginning to pick or rank the responses. Familiarise yourself with the different
question types and how to answer each type on the answer sheet.
• You are expected to use only the information provided in the question; do not
make assumptions about the situation or scenario. You may feel that you would
like more information; however, you should respond using only the information
available in the scenario. Reviews have ensured that the scenarios contain
sufficient information to be able to respond.
• Remember you are being asked what you ‘should’ do and not necessarily
what you ‘would’ do.
• Bear in mind that you can only choose from the available options and that
you are being asked to evaluate the 'best' of these, not any other possible
options. When asked to rank options, they may not represent all possible
responses, but your task is to put them in order of appropriateness. Note that
the options are discrete actions and should not be thought of as chronological.
• Answer ranking questions in full. You can score up to four marks for the
options you rank as 'most' and 'least' appropriate, as well as for each rank
in between – but if you leave an answer, or part of an answer, blank, you will
receive no marks for the blank parts. There is no negative marking.
• For multiple choice questions, select only three answers. If you select four
or more answers, you will score zero marks for the whole question. There is
no negative marking for incorrect answers – you receive four marks for each
of the three correct answers you give.
• Do not spend too much time on one item. There are approximately two
minutes allowed for each item. If you are struggling with one item then
move on to the next.
• Make sure you fully understand how to complete the answer sheet correctly.
An example answer sheet can be found on the UKFPO website.
9.4. Susceptibility to coaching
2. How will you ensure that the SJT is still a fair and relevant tool
to be using?
Every year new items will be developed to refresh the item bank. In addition,
every year there will be a review of existing items to ensure that they align
with the current Foundation Programme curriculum and don’t go out of date.
In line with best practice, each year a quality assurance evaluation of the SJT
will be undertaken which will include analysis of reliability and tests of group
differences. Applicant feedback will also be sought and monitored. Studies
of predictive validity (i.e. does the SJT predict performance in the role?) are
planned and results of these will be disseminated when available.
3. I am not an FY1; how will I know what I should do in these scenarios?
The SJT has been developed specifically to be appropriate for final year
medical students. Although the SJT is set in the context of the Foundation
Programme, all items have been reviewed to ensure they are fair and able to
be answered by medical students. All items also avoid specific knowledge
about procedures or policies that may not be experienced until the Foundation
Programme. We suggest that you familiarise yourself with the person
specification and the professional attributes, and also use your own experience
to help you answer the questions.
Useful Resources
• UKFPO website for application information, practice SJT paper, Applicant
Handbook, person specification - http://www.foundationprogramme.nhs.uk
• ISFP website, for project background, design and development of the SJT
- http://www.isfp.org.uk
• GMC - http://www.gmc-uk.org
• Foundation Programme curriculum - http://www.foundationprogramme.nhs.uk/pages/home/training-and-assessment
Appendix A – SJT Domains
1. Commitment to Professionalism

Displays honesty, integrity and awareness of confidentiality & ethical issues.
Is trustworthy and reliable. Demonstrates commitment and enthusiasm for role.
Willing to challenge unacceptable behaviour or behaviour that threatens patient
safety, when appropriate. Takes responsibility for own actions.

Behavioural descriptors:
1. Is punctual
2. Takes responsibility for own actions/work
3. Owns up to mistakes
4. Takes responsibility for own health and well-being
5. Demonstrates commitment to and enthusiasm/motivation for role
6. Understands/is aware of the responsibility of the role of being a doctor
7. Is reliable
8. Displays honesty towards others (colleagues and patients)
9. Trustworthy
10. Identifies/challenges unacceptable/unsafe behaviour/situations when appropriate (colleague/organisational issues)
11. Challenges others' knowledge where appropriate
12. Understands/demonstrates awareness of ethical issues, including confidentiality
3. Effective Communication

Actively and clearly engages patients and colleagues in equal/open dialogue.
Demonstrates active listening. Communicates verbal and written information
concisely and with clarity. Adapts style of communication according to
individual needs and context. Able to negotiate with colleagues & patients
effectively.

Behavioural descriptors:

General
1. Listens effectively
2. Ensures surroundings are appropriate when communicating
3. Understands/responds to non-verbal cues
4. Uses non-verbal communication effectively

With patients
1. Uses language that is understood by patients/relatives and free from medical jargon
2. Demonstrates sensitive use of language
3. Communicates information to patients clearly and concisely
4. Adjusts style of communication according to patient's/relative's needs
5. Adjusts how much information to provide according to patient's/relative's needs
6. Provides information to patients and keeps them updated
7. Readily answers patient's and relative's questions
8. Ensures he/she has all the relevant information before communicating to patients/colleagues
9. Asks questions/seeks clarification to gain more information/understanding about the patient
10. Finds out patient's/relative's level of knowledge/understanding
11. Allows patients/relatives to ask questions/uses silence effectively
12. Checks patient's/relative's understanding
13. Summarises information/reflects back to patients to clarify their own understanding

With colleagues
1. Asks questions of colleagues to gain more information
2. Provides/summarises information accurately and concisely to colleagues
3. Provides only relevant information to colleagues
4. Keeps colleagues informed/updated (about patients and about where they will be)
5. Is able to negotiate/use diplomacy
6. Knows exactly what colleagues are asking for and why
7. Is assertive where necessary
8. Adapts style of communication according to need and situation
9. Clarifies information to check their own understanding

Written
1. Displays high standards of written communication
2. Uses concise and clear written communication
3. Has legible handwriting
References
1. Lievens F, Sackett PR. Situational judgement tests in high-stakes settings: Issues and
strategies with generating alternate forms. Journal of Applied Psychology 2007;92:1043-
1055.
2. Weekley JA, Ployhart RE. Situational judgment: Antecedents and relationships with
performance. Human Performance 2005;18:81-104.
3. Dubois PH. A history of psychological testing. Boston: Allyn and Bacon, 1970.
4. Northrop LC. The psychometric history of selected ability constructs. Washington, DC:
U.S. Office of Personnel Management; 1989.
5. Whetzel DL, McDaniel MA. Situational judgment tests: An overview of current research.
Human Resource Management Review 2009;19:188–202.
6. Patterson F, Ferguson E, Lane PW, Farrell K, Martlew J, Wells AA. Competency model for
general practice: implications for selection, training and development. British Journal of
General Practice 2000;50:188-93.
8. McDaniel MA, Morgeson FP, Finnegan EB, Campion MA, Braverman EP. Use of
situational judgment tests to predict job performance: A clarification of the literature.
Journal of Applied Psychology 2001;86:730-740.
10. Chan D, Schmitt N. Situational judgment and job performance. Human Performance
2002;15:233-254.
11. Borman WC, Hanson MA, Oppler SH, Pulakos ED, White LA. Role of early supervisory
experience in supervisor performance. Journal of Applied Psychology 1993;78:443-449.
12. Lievens F, Buyse T, Sackett PR. Retest effects in operational selection settings: Development
and test of a framework. Personnel Psychology 2005a;58:981-1007.
13. O’Connell MS, Hartman NS, McDaniel MA, Grubb WL, Lawrence A. Incremental validity
of situational judgement tests for task and contextual job performance. International
Journal of Selection and Assessment 2007;15:19-29.
14. McDaniel MA, Hartman NS, Whetzel DL, Grubb WL. Situational judgement tests, response
instructions and validity: A meta-analysis. Personnel Psychology 2007;60:63-91.
16. Lievens F, Patterson F. The validity and incremental validity of knowledge tests, low-fidelity
simulations, and high-fidelity simulations for predicting job performance in
advanced-level high-stakes selection. Journal of Applied Psychology 2011;96(5):927-940.
18. Banki S, Latham GP. The Criterion-Related Validities and Perceived Fairness of the
Situational Interview and the Situational Judgment Test in an Iranian Organisation. Applied
Psychology: An International Review 2010;59:124–142.
19. Motowidlo SJ, Tippins N. Further studies of the low-fidelity simulation in the form
of a situational inventory. Journal of Occupational and Organizational Psychology
1993;66:337-344.
20. Sternberg RJ. Creating a vision of creativity: The first 25 years. Psychology of Aesthetics,
Creativity, and the Arts 2006;1:2-12.
21. Nguyen NT, McDaniel MA. Constructs assessed in situational judgment tests: A meta-
analysis. Paper presented at the 16th Annual Convention of the Society for Industrial and
Organizational Psychology, 2001 April; San Diego, CA.
22. Lievens F, Sackett PR. Video-based vs. written situational judgment tests: A comparison in
terms of predictive validity. Journal of Applied Psychology 2006;91:1181–1188.
23. Oswald FL, Schmitt N, Kim BH, Ramsay LJ, Gillespie MA. Developing a biodata measure
and situational judgment inventory as predictors of college student performance. Journal of
Applied Psychology 2004;89:187-207.
24. Lievens F, Buyse T, Sackett PR. Retest effects in operational selection settings: Development
and test of a framework. Personnel Psychology 2005b;58:981-1007.