Publication year
2017
Source
Medical Education (London), 51, 4, (2017), pp. 401-410
Publication type
Article / Letter to editor
Organization
Primary and Community Care
Journal title
Medical Education (London)
Volume
vol. 51
Issue
iss. 4
Page start
p. 401
Page end
p. 410
Subject
Radboudumc 18: Healthcare improvement science; RIHS: Radboud Institute for Health Sciences; Primary and Community Care, Radboud University Medical Center
Abstract
CONTEXT: Interest is growing in the use of qualitative data for assessment. Written comments on residents' in-training evaluation reports (ITERs) can be reliably rank-ordered by faculty attendings, who are adept at interpreting these narratives. However, if residents do not interpret assessment comments in the same way, a valuable educational opportunity may be lost.
OBJECTIVES: Our purpose was to explore residents' interpretations of written assessment comments using mixed methods.
METHODS: Twelve internal medicine (IM) postgraduate year 2 (PGY2) residents were asked to rank-order a set of anonymised PGY1 residents (n = 48) from a previous year in IM based solely on their ITER comments. Each PGY1 was ranked by four PGY2s; generalisability theory was used to assess inter-rater reliability. The PGY2s were then interviewed separately about their rank-ordering process, how they made sense of the comments and how they viewed ITERs in general. Interviews were analysed using constructivist grounded theory.
RESULTS: Across four PGY2 residents, the G coefficient was 0.84; for a single resident it was 0.56. Resident rankings correlated extremely well with faculty member rankings (r = 0.90). Residents were equally adept at reading between the lines to construct meaning from the comments, and used language cues in ways similar to those reported for faculty attendings. Participants discussed the difficulties of interpreting vague language and offered explanations for why they thought it occurs (time, discomfort, memorability and the permanency of written records). They emphasised the importance of face-to-face discussions, the relative value of comments over scores, staff-dependent variability of assessment, and the perceived purpose and value of ITERs. They saw particular value in opportunities to review an aggregated set of comments.
CONCLUSIONS: Residents understood the 'hidden code' in assessment language and their ability to rank-order residents based on comments matched that of faculty. Residents seemed to accept staff-dependent variability as a reality. These findings add to the growing evidence that supports the use of narrative comments and subjectivity in assessment.
This item appears in the following Collection(s)
- Academic publications [246205]
- Faculty of Medical Sciences [93266]