Douglas J Murphy, Bruce Guthrie, Frank M Sullivan, Stewart W Mercer, Andrew Russell, David A Bruce.
Abstract
BACKGROUND: Medical revalidation decisions need to be reliable if they are to reassure on the quality and safety of professional practice. This study tested an innovative method in which general practitioners (GPs) were assessed on their reflection and response to a set of externally specified feedback. SETTING AND PARTICIPANTS: 60 GPs and 12 GP appraisers in the Tayside region of Scotland, UK.Entities:
Mesh:
Year: 2012 PMID: 22653078 PMCID: PMC3404544 DOI: 10.1136/bmjqs-2011-000429
Source DB: PubMed Journal: BMJ Qual Saf ISSN: 2044-5415 Impact factor: 7.035
Summary of tools used and processes followed*
| Tool | Source | Prepared by | Completed by |
| Multi-source feedback (MSF) | General Medical Council (GMC) colleague survey | GMC | Practice manager and colleagues |
| 2Q MSF | Developed by study author | ||
| Patient satisfaction questionnaires | GMC patient survey | GMC | Patients and practice staff |
| Consultation and relational empathy | Developed by study authors | ||
| Open book self-assessed knowledge test | Consisted of 60 items focusing on chronic disease management, referral issues and prescribing | Royal College of General Practitioners (RCGP Scotland) | GP undertook test |
| Prescribing safety data feedback | 12 measures of undesirable co-prescriptions | Developed for study | Web-based report |
| Quality of care data feedback | Single area of interest selected for each participant's practice by an external assessor | Quality and Outcomes Framework | Web-based report |
| Patient complaints | – | As received | Practice staff including GP |
For the purpose of the research study programme, participants collected and reflected on output from two patient satisfaction questionnaires and two MSF questionnaires, both on two occasions, in order to test the reliabilities of individual tools. In any real system, only one tool would be used and the collection of data would likely be spread over a longer period of time. The reliabilities of individual tools are not reported here.
These data on 12 undesirable co-prescriptions were developed for the purpose of this study.18 22 Other tools used are available to GPs to include when considering data for current appraisal submission.
GP, general practitioner.
Rating questions completed by general practitioner (GP) participants (preappraisal), by appraisers (after face-to-face appraisal) and by anonymous web-based portfolio assessors
| Question | Rating scale | Completed by |
| Reflection template | ||
| Source of feedback highlighted | ||
| 1. Important issues | Likert 1–7 | GP participant |
| 2. Concern in performance | | Face-to-face appraiser (preappraisal) |
| 3. Led to planned change | ||
| 4. Gave valuable feedback | ||
| Assessment of insightful practice template | ||
| Doctor demonstrated | ||
| 1. Satisfactory engagement with the TIPP process | Likert 1–7 | Face-to-face appraiser (postappraisal) |
| 2. Insight into the feedback provided on performance | | Anonymous assessor (postappraisal) |
| 3. Plans for appropriate action where applicable | ||
| 4. Engagement, insight and action (global rating of questions 1–3) |
| 5. Suitability for recommendation as on track for revalidation without further opinion | Binary yes/no | Face-to-face appraiser (postappraisal) Anonymous assessor (postappraisal) |
Likert scale descriptors (1–7): (1) strongly disagree; (3) disagree; (5) agree; (7) strongly agree.
TIPP, Tayside In-Practice Portfolio.
Mean general practitioner (GP) ratings of the perceived ability of each feedback tool (columns) to assess the 12 General Medical Council (GMC) attributes (rows), after feedback was received. Each GMC attribute was rated on a 1–7 scale, with a score of 4 as the neutral point*
| The GP… | Colleague feedback | Patient feedback | Practice performance data | Knowledge test | Patient complaints |
| Maintains professional competence | 4.4 | ||||
| Applies knowledge and experience to practice | 3.9 | 4.0 | |||
| Keeps clear, accurate and legible records | 2.9 | 3.2 | |||
| Puts into effect systems to protect patients and improve care | |||||
| Responds to risks to safety | 2.9 | 3.3 | 2.8 | 3.1 | |
| Protects patients and colleagues from any risk posed by his/her health | 2.4 | 1.8 | 1.7 | 2.3 | |
| Communicates effectively | 4.1 | ||||
| Works constructively with colleagues and delegates effectively | 2.7 | 2.9 | 2.9 | ||
| Establishes and maintains partnerships with patients | 5.0 | 3.9 | |||
| Shows respect for patients | 4.3 | ||||
| Treats patients and colleagues fairly and without discrimination | 5.1 | 3.8 | |||
| Acts with honesty and integrity | 4.8 | 3.7 |
Tools or groups of tools significantly different from the rest as being the most highly valued for each attribute are represented in bold font. Tools or groups of tools significantly different from the rest as being the least highly valued for each attribute are represented in italic font (p=0.05).
Reliability of assessment of insightful practice (AIP) questions 1–5
| Raters | AIP questions 1–3 (engagement, insight and action), 1–7 scale: internal consistency (G) | AIP questions 1–3: inter-rater reliability (G) | AIP question 4 (global assessment), 1–7 scale: inter-rater reliability (G) (ICC) | AIP question 4: 95% CI | AIP question 5 (binary yes/no recommendation on revalidation): inter-rater reliability (G) (ICC) | AIP question 5: 95% CI |
| 1 | 0.94 | 0.71 | 0.66 | – | 0.54 | – |
| 2 | 0.96 | | 0.79 | (0.68 to 0.88) | 0.70 | (0.54 to 0.83) |
| 3 | 0.96 | | | | 0.78 | (0.69 to 0.86) |
| 4 | 0.97 | | | | | |
| 5 | 0.97 | | | | | |
| 6 | 0.97 | | | | | |
Reliabilities greater than 0.8, as required for high-stakes assessment, are given in bold.9
Intraclass correlation coefficients (ICCs) are equivalent to G coefficients in a one-facet (rater) design.
Inter-rater reliability is the extent to which one rater's assessments (or when based on multiple raters, the average of raters' assessments) are predictive of another rater's assessments.
95% CIs for reliabilities (ICCs) were calculated using Fisher's Z transformation, which depends on the number of raters (k) through a (k−1) denominator and so cannot be calculated when there is only one rater.9
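The way single-rater reliabilities in the table grow with the number of raters follows the Spearman-Brown prophecy formula, and a CI of the kind the footnote describes can be sketched with a Fisher-Z transformation for ICCs. A minimal Python sketch; the CI formula (variance k/(2(k−1)(n−2))) and the sample size `n` are illustrative assumptions, not taken from the paper:

```python
import math

def spearman_brown(g1: float, k: int) -> float:
    """Reliability of the mean of k raters, given single-rater reliability g1."""
    return k * g1 / (1 + (k - 1) * g1)

def icc_fisher_ci(icc: float, k: int, n: int, z_crit: float = 1.96):
    """Approximate 95% CI for a k-rater ICC via Fisher's Z transformation.

    Assumed form: Z = 0.5*ln((1 + (k-1)*icc) / (1 - icc)) with variance
    k / (2*(k-1)*(n-2)). The (k-1) denominator is why no CI can be
    computed for a single rater (k = 1), as the footnote above notes.
    """
    z = 0.5 * math.log((1 + (k - 1) * icc) / (1 - icc))
    se = math.sqrt(k / (2 * (k - 1) * (n - 2)))
    back = lambda z_: (math.exp(2 * z_) - 1) / (math.exp(2 * z_) + k - 1)
    return back(z - z_crit * se), back(z + z_crit * se)

# Single-rater G coefficients for AIP questions 4 and 5 from the table
q4, q5 = 0.66, 0.54
print(f"Q4, 2 raters: {spearman_brown(q4, 2):.2f}")  # ≈ 0.80
print(f"Q5, 2 raters: {spearman_brown(q5, 2):.2f}")  # ≈ 0.70
print(f"Q5, 3 raters: {spearman_brown(q5, 3):.2f}")  # ≈ 0.78
```

The scaled values line up with the multi-rater G coefficients reported in the table, which is how the garbled rows can be sanity-checked.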
Mean scores for reflective template questions (1–4) for feedback sources for each group (n=4)
| Reflective template question | Group | Mean RT score over all feedback tools (95% CI) |
| Value of feedback | GPs with | 4.9 (4.6 to 5.2) |
| | GPs with | 4.7 (4.6 to 4.9) |
| | Face-to-face appraisers | 4.7 (4.4 to 5.0) |
| | Anonymous assessors | 5.4 (4.9 to 5.9) |
GP, general practitioner; RT, reflective template.
Figure 1. Cycle of insightful practice.