
A KSA system for competency-based assessment of clinicians' professional development in China and quality gap analysis.

Xiaoqing Huang1, Zihua Li1, Jiali Wang1, Endong Cao1, Guiying Zhuang1, Fei Xiao1, Caihua Zheng1, Xiaowen Zhang1, Man Chen1, Liqing Gao1, Pi Guo2, Peiwei Lin3, Shaoyan Zheng4, Gang Xin5.   

Abstract

BACKGROUND: We aim to create a holistic competency-based assessment system to measure competency evolution over time - one of the first such systems in China.
METHOD: Two rounds of self-reported surveys were fielded among the graduates from the Shantou University Medical College: June through December 2017, and May through August 2018. Responses from three cohorts of graduates specializing in clinical medicine - new graduates, resident physicians, and senior physicians - were analyzed. Gaps between respondents' expected and existing levels of competencies were examined using a modified service quality model, SERVQUAL.
RESULTS: A total of 605 questionnaires were collected in 2017 for the construction of competency indicators and a 5-level proficiency rating scale, and 407 in 2018 for confirmatory factor and competency gap analyses. Reliability coefficients of all 36 competency indicators were greater than 0.9. Three competency domains were identified through exploratory factor analysis: knowledge (K), skills (S), and attitude (A). The confirmatory factor analysis confirmed the fit of the scale (CMIN/DF < 4; CFI > 0.9; IFI > 0.9; RMSEA ≤ 0.08). Within the cohorts of resident and senior physicians, the largest competency gap was seen in the domain of knowledge (K): -1.84 and -1.41, respectively. Among new graduates, the largest gap was found in the domain of skills (S) (-1.92), with the gap in knowledge (-1.91) trailing closely behind.
CONCLUSIONS: A competency-based assessment system is proposed to evaluate clinicians' competency development in three domains: knowledge (K), skills (S), and attitude (A). The system consists of 36 competency indicators, a rating scale of 5 proficiency levels, and a gap analysis to measure competency evolution through 3 key milestones in a clinician's professional career: new graduate, resident physician, and senior physician. The competency gaps identified can provide an evidence-based guide for clinicians' own continuous development as well as future medical curriculum improvements.

Keywords:  Core competency; educational assessment; medical graduate; quality gap analysis; service quality model

Year:  2022        PMID: 35139759      PMCID: PMC8843213          DOI: 10.1080/10872981.2022.2037401

Source DB:  PubMed          Journal:  Med Educ Online        ISSN: 1087-2981


Introduction

Epstein and Hundert [1,2] defined systems-based competencies for health professionals as follows: 'the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and the community being served.' It was further advocated in the report entitled 'Health Professionals for a New Century: Transforming Education to Strengthen Health Systems in an Interdependent World', published by the Lancet Commissions in 2010, that 'a 3rd generation [of medical education] is now needed that should be systems based to improve the performance of health systems by adapting core professional competencies to specific contexts while drawing on global knowledge' [3]. The earliest literature on physician core competencies can be traced as far back as the 1970s. Government agencies and organizations worldwide have since been continuously updating these competencies. In 1998, the Accreditation Council for Graduate Medical Education (ACGME) in the U.S. defined core competencies in 6 areas for health practitioners [4]. In 2005, the Royal College of Physicians and Surgeons of Canada published the CanMEDS 2005 Physician Competency Framework as an update to the previous version (published in 1996 and entitled 'Skills for the New Millennium'), outlining 7 physician roles [5]. In 2013, the General Medical Council in the UK issued a guidance document entitled 'Good Medical Practice' to delineate the duties of doctors [6]. In order to transform medical education for the twenty-first century, it is essential that educational institutes (medical colleges and schools, teaching hospitals, etc.) strengthen their faculty teams and promote curriculum reforms to elevate a broad range of capabilities of medical personnel [3,7].
Graduate surveys that collect feedback from the recipients of medical education have been relied upon as one of the effective tools to gauge teaching quality at medical institutes, and can provide valuable input to help direct plans to improve medical curricula [8,9]. Since 1998, China has implemented the largest reform of medical education in the world by incorporating professional training into college education. This has significantly boosted the enrollment of health-care professionals at medical institutes [7]. In 2015, standardized resident training was also introduced in China [10]. There are three main tracks of formal medical education in China, which aspiring high-school graduates can pursue: the 5-year, the 5 + 3, and the 8-year programs. For the 5-year track, high-school graduates enroll in the undergraduate medical program and receive a bachelor's degree at the end of their 5 years of study ('new graduates'). These new graduates are eligible for standardized resident training, which lasts another 3 years. For the 5 + 3 track, after students complete their initial 5-year training (equivalent to that of the 5-year track), they attend a 3-year standardized resident training and are awarded a master's degree together with a standardized resident training certificate ('resident physicians') upon completing the program. For the 8-year track, high-school graduates attend a broader training program that spans basic and clinical medicine as well as liberal arts, and receive a degree of MD (Doctor of Medicine) at the end of their 8 years of study. Like new graduates, MDs can take up additional standardized resident training, which lasts 2 to 3 years. The bachelor's degree prepares new graduates for a career in clinical medicine, if they so choose, or related professions.
The goal of the 5 + 3 training program is to cultivate a pipeline of clinical physicians, while the 8-year program aims to incubate medical talents with more versatility. From 2012 to 2014, Dr. Baozhi Sun, former Vice President of the China Medical University, joined efforts with a team of scholars to conduct a large-scale, cross-sectional survey among clinicians in 31 provinces and cities across the country. They constructed the 'Chinese Doctors' Common Competency Model', which consists of 76 indicators and covers 3 key dimensions of competency: knowledge, skills, and attitude (KSA). The model encompasses the following aspects of medicine: clinical skills and patient care, disease prevention and health promotion, information and management, medical knowledge and life-long learning, interpersonal communication, teamwork and scientific research, core value, and professionalism. However, the model developed by Sun et al. mainly targets senior physicians with more extensive clinical experience as practitioners, and holistic systems to assess medical graduates' professional development as they progress through different phases of their career remain few and far between in China. What is also lacking is a keen appreciation of professional development as a continuous and dynamic process, as well as reproducible investigations to assess this process. Therefore, the objective of the current study is to create a holistic competency-based assessment system comprising 3 components: competency indicators suitable for clinicians in different phases of their career, a rating scale aligned with the progression of skill acquisition, and an analytical tool to measure competency evolution over time, one of the first such systems in China.

Method

A competency-based KSA assessment system was designed by drawing from the conceptual framework of Norcini, who espoused that an effective assessment system should include three segments: competency (defined by indicators), level of assessment (degree of mastery), and assessment of progression (skill acquisition through stages) [2,11,12]. The Dreyfus model, which holds that any skill acquisition spans 5 stages (novice, advanced beginner, competent, proficient, and expert) [13], was also consulted to create a more nuanced scheme for assessing the mastery level of competency. This study was approved by the Ethics Committee of the Shantou University Medical College (SUMC).

KSA-based competency indicators and a rating scale

Thirty-six indicators (Figure 1) were derived by combining and simplifying closely related indicators from the model created by Sun et al. [2] so that the scale could be applied to surveying a more diverse group of clinicians (new graduates, resident physicians, and senior physicians) who were selected to represent 3 key milestones in a clinician's professional career. A more succinct scale also rendered the survey less cumbersome to administer and more enticing for respondents to complete, thereby allowing the collection of more meaningful data.
Figure 1.

KSA Model of Core Competencies (36 indicators).

The questionnaire developed based on this scale includes two sections: basic information, and self-assessment of competencies. The self-assessment is based on the 5-level proficiency rating scale defined as follows: '0' represents 'do not know'; '1' represents 'beginner' (having acquired cognitive understanding of the relevant basics); '2' represents 'application' (being able to practice or simulate under the guidance of others); '3' represents 'competent' (being able to practice independently in the real world according to standards); '4' represents 'proficient' (being able to practice independently and deliver top-quality outcomes); and '5' represents 'expert' (being able to serve as an example for peers and in an advisory capacity, as well as participate in the development of standards). Respondents were asked to rate both their existing and expected levels of competency. The anonymous questionnaire was made available on the graduate survey platform (http://bysczdc.med.stu.edu.cn/) at the SUMC from June through December 2017. All participants had earned a degree in clinical medicine from the SUMC or partnering hospitals. Responses from those enrolled in 2012 ('new graduates'), 2008/2009 ('resident physicians'), or 2005/2006 ('senior physicians') were analyzed for the construction of competency indicators. Respondents were informed that their answers would be kept strictly confidential and that they could withdraw from the survey at will. All participants completed and submitted the questionnaires electronically or on paper.
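For concreteness, the proficiency scale above can be encoded as a simple lookup. This is only an illustrative sketch (the names and structure are not from the original study):

```python
# Proficiency levels as defined in the questionnaire: 0 = "do not know",
# 1-5 = beginner through expert (names and structure are illustrative).
PROFICIENCY_LEVELS = {
    0: "do not know",
    1: "beginner",      # cognitive understanding of the relevant basics
    2: "application",   # can practice or simulate under guidance
    3: "competent",     # practices independently according to standards
    4: "proficient",    # practices independently, top-quality outcomes
    5: "expert",        # example for peers, advisor, develops standards
}

def label(rating):
    """Map a self-assessed rating (0-5) to its proficiency label."""
    if rating not in PROFICIENCY_LEVELS:
        raise ValueError(f"rating must be 0-5, got {rating}")
    return PROFICIENCY_LEVELS[rating]
```

For example, `label(3)` returns `"competent"`, the level required of standardized resident trainees discussed later in the paper.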
Questionnaires collected were excluded from analysis if they met any of the following criteria: from graduates who earned their degrees outside the 3 time points specified; from respondents who no longer worked in the field of clinical medicine; from those who populated the answers mechanically (e.g., filled each question with identical answers); or from respondents who submitted multiple questionnaires using the same Internet Protocol (IP) address (in this case, the last questionnaire submitted would be treated as valid input and the rest discarded). SPSS Statistics 21.0 for Windows (IBM Corp., Armonk, New York) was used to analyze reliability and validity. The competency level of '0' was treated as missing data and substituted with the mean score ('mean imputation') [14]. The Cronbach's alpha value was used to evaluate internal consistency. A Kaiser–Meyer–Olkin (KMO) measure greater than 0.9 and a significance level of Bartlett's test of sphericity less than 0.05 would indicate that the data were suitable for exploratory factor analysis (EFA) [15]. Factors with eigenvalues greater than 1 and factor loadings greater than 0.45 would be extracted after orthogonal rotation with Kaiser normalization. If an item loaded greater than 0.45 on multiple factors, the factor with the highest loading would be selected [16]. For the confirmatory factor analysis, a separate random survey (using the same questionnaire) was fielded from May through August 2018 among graduates who enrolled in the clinical medicine department at the SUMC in 2013 ('new graduates'), 2010 ('resident physicians'), or 2007 ('senior physicians'). Confirmatory factor analysis using the software Amos 21.0 for Windows (IBM Corp., Armonk, New York) was carried out to test the fit of the scale.
The reasonable fit of the scale would be determined based on the following: chi-square to degrees of freedom ratio (CMIN/DF) < 4; comparative fit index (CFI) > 0.9; incremental fit index (IFI) > 0.9; and root mean square error of approximation (RMSEA) ≤ 0.08 [17].
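As a rough illustration of the statistical preprocessing described above, the sketch below implements mean imputation of 'do not know' responses, Cronbach's alpha for internal consistency, and a check of the quoted fit-index thresholds. This is plain Python with illustrative names, not the authors' actual SPSS/Amos workflow:

```python
def impute_unknowns(scores):
    """Replace '0' ('do not know') ratings with the mean of the known
    ratings for the same item (mean imputation, as in the Method section)."""
    known = [s for s in scores if s != 0]
    mean = sum(known) / len(known)
    return [mean if s == 0 else s for s in scores]

def cronbach_alpha(items):
    """Cronbach's alpha; items[j][i] is respondent i's rating on item j.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(items)                       # number of items
    n = len(items[0])                    # number of respondents
    def var(xs):                         # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(items[j][i] for j in range(k)) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

def fit_is_reasonable(cmin_df, cfi, ifi, rmsea):
    """Thresholds quoted in the text: CMIN/DF < 4, CFI > 0.9,
    IFI > 0.9, RMSEA <= 0.08."""
    return cmin_df < 4 and cfi > 0.9 and ifi > 0.9 and rmsea <= 0.08
```

With the CFA values reported in the Results (CMIN/DF = 3.596, CFI = 0.905, IFI = 0.905, RMSEA = 0.080), `fit_is_reasonable` returns `True`, consistent with the authors' conclusion that the scale fit was reasonable.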

Gap analysis of competencies and perceived quality of medical education by graduates

A revised service quality model, SERVQUAL (originally designed for commercial applications to business services [17]) was employed for the competency gap analysis based on the same survey responses collected in 2018. The quality of medical education (as measured by the gap between the existing competency level and the expected level) for the ith indicator is represented by Qi = Pi − Ei, where Pi indicates the perceived existing level of competency for the ith indicator, and Ei, the expected level of competency [18]. The quality of medical education for each of the KSA domains is Q = (1/m) Σ Qi, where m represents the number of indicators in the domain. When m = 36, Q indicates the overall quality of medical education. The Kruskal–Wallis test was used to analyze the differences in perceived quality among the three groups of respondents.
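The gap computation itself is simple arithmetic; a minimal sketch (function names are illustrative, not the authors' code):

```python
def indicator_gap(p, e):
    """Q_i = P_i - E_i: gap between perceived existing (P) and expected (E)
    competency levels for one indicator. Negative values mean expectations
    exceed the perceived existing level."""
    return p - e

def domain_quality(perceived, expected):
    """Mean gap over the m indicators of a domain; with m = 36 this is the
    overall quality of medical education as defined in the text."""
    m = len(perceived)
    return sum(indicator_gap(p, e) for p, e in zip(perceived, expected)) / m
```

For instance, a domain with perceived levels [2.0, 3.0] against expected levels [4.0, 4.0] yields a domain quality of −1.5, i.e., expectations exceed perceived competency by 1.5 proficiency levels on average.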

Results

The KSA-based competency indicators

Reliability and validity. There were 226, 193, and 186 questionnaires collected from new graduates, resident physicians, and senior physicians, respectively, which were included according to the established criteria (Table 1). The Cronbach's alpha values (reliability coefficients) for each item in the questionnaire and for the questionnaire as a whole were all greater than 0.9. Therefore, all 36 core competency indicators were retained. The KMO values associated with the 3 groups of respondents were 0.967, 0.964, and 0.943, respectively. The p values of Bartlett's test of sphericity were less than 0.001. The indicators were thus suitable for factor analysis.
Table 1.

A Summary of Questionnaire Responses

Analysis Supported | Cohort | Male (n) | Female (n) | Total (n)
Construction of Indicators | New Graduates | 113 (50.0%) | 113 (50.0%) | 226
 | Resident Physicians | 114 (59.1%) | 79 (40.9%) | 193
 | Senior Physicians | 109 (58.6%) | 77 (41.4%) | 186
Competency Gap Analysis | New Graduates | 96 (60.4%) | 63 (39.6%) | 159
 | Resident Physicians | 86 (68.3%) | 40 (31.7%) | 126
 | Senior Physicians | 75 (61.5%) | 47 (38.5%) | 122
Based on the exploratory factor analysis, 3, 3, and 5 factors were extracted from the groups of new graduates, resident physicians, and senior physicians, respectively (Table 2). Three out of the five factors extracted from senior physicians shared the same constructs and were combined into one single factor (i.e., 'knowledge'). The Cronbach's alpha values for all factors associated with each group were greater than 0.9, indicating high internal consistency. The factors extracted were analyzed further, and three domains emerged with which the competency indicators measured were aligned: knowledge (K), skills (S), and attitude (A).
Table 2.

Rotated Component Matrix of Exploratory Factor Analysisb,c

Indicator | New Graduate (Ka, Aa, Sa) | Resident Physician (K, S, A) | Senior Physician (S, K, A, K, K) | Corresponding KSA Indicatord
1 | 0.15 0.53 0.61 | 0.16 0.79 0.38 | 0.77 0.16 0.31 0.07 <0.001 | S1
2 | 0.21 0.43 0.75 | 0.21 0.80 0.33 | 0.80 0.20 0.18 0.15 0.10 | S2
3 | 0.43 0.33 0.70 | 0.28 0.72 0.32 | 0.74 0.22 0.32 0.13 0.13 | S3
4 | 0.42 0.39 0.68 | 0.35 0.74 0.36 | 0.75 0.24 0.36 0.12 0.16 | S4
5 | 0.46 0.34 0.68 | 0.29 0.74 0.39 | 0.81 0.20 0.28 0.19 0.05 | S5
6 | 0.56 0.26 0.65 | 0.32 0.74 0.35 | 0.86 0.08 0.19 0.20 0.15 | S6
7 | 0.64 0.10 0.47 | 0.41 0.67 0.25 | 0.64 0.18 0.12 0.29 0.22 | S7
8 | 0.50 0.38 0.53 | 0.33 0.70 0.39 | 0.69 0.09 0.35 0.21 0.20 | S8
9 | 0.64 0.17 0.53 | 0.48 0.64 0.22 | 0.52 0.09 0.15 0.56 0.27 | S9
10 | 0.57 0.40 0.44 | 0.53 0.60 0.25 | 0.39 0.20 0.23 0.65 0.13 | S10
11 | 0.74 0.20 0.41 | 0.64 0.58 0.15 | 0.25 0.33 0.11 0.75 0.14 | K1
12 | 0.73 0.23 0.43 | 0.66 0.57 0.14 | 0.29 0.34 0.19 0.75 0.08 | K2
13 | 0.79 0.12 0.31 | 0.77 0.38 0.13 | 0.16 0.44 0.19 0.71 0.18 | K3
14 | 0.84 0.20 0.22 | 0.73 0.38 0.01 | 0.08 0.51 0.09 0.62 0.22 | K4
15 | 0.76 0.29 0.37 | 0.66 0.31 0.33 | 0.26 0.81 0.18 0.23 0.03 | K5
16 | 0.73 0.27 0.30 | 0.70 0.26 0.38 | 0.31 0.75 0.23 0.19 0.02 | K6
17 | 0.79 0.17 0.18 | 0.56 0.40 0.44 | 0.22 0.34 0.48 0.37 0.15 | A1
18 | 0.78 0.28 0.22 | 0.69 0.37 0.42 | 0.08 0.56 0.29 0.49 0.27 | K7
19 | 0.77 0.34 0.19 | 0.77 0.23 0.37 | 0.11 0.62 0.20 0.34 0.22 | K8
20 | 0.58 0.39 0.28 | 0.74 0.21 0.30 | 0.19 0.72 0.02 0.09 0.26 | K9
21 | 0.70 0.33 0.37 | 0.80 0.19 0.30 | 0.10 0.77 0.12 0.18 0.25 | K10
22 | 0.74 0.32 0.26 | 0.79 0.32 0.32 | 0.05 0.69 0.26 0.35 0.15 | K11
23 | 0.56 0.50 0.46 | 0.54 0.40 0.55 | 0.40 0.44 0.49 0.22 0.18 | K12
24 | 0.65 0.46 0.40 | 0.61 0.31 0.59 | 0.33 0.52 0.34 0.25 0.30 | K13
25 | 0.77 0.34 0.22 | 0.71 0.21 0.39 | 0.24 0.62 0.20 0.29 0.28 | K14
26 | 0.35 0.64 0.50 | 0.32 0.48 0.66 | 0.49 0.24 0.56 0.15 0.12 | A2
27 | 0.46 0.65 0.40 | 0.35 0.44 0.69 | 0.46 0.22 0.64 0.15 0.12 | A3
28 | 0.21 0.79 0.32 | 0.26 0.44 0.72 | 0.45 0.06 0.73 0.11 0.06 | A4
29 | 0.50 0.64 0.22 | 0.44 0.35 0.62 | 0.27 0.12 0.59 0.16 0.48 | A5
30 | 0.70 0.46 0.19 | 0.57 0.27 0.51 | 0.15 0.24 0.25 0.26 0.73 | K15
31 | 0.63 0.57 0.23 | 0.67 0.29 0.45 | 0.25 0.28 0.35 0.20 0.57 | K16
32 | 0.70 0.52 0.22 | 0.70 0.32 0.41 | 0.20 0.40 0.24 0.24 0.70 | K17
33 | 0.76 0.31 0.26 | 0.78 0.25 0.33 | 0.15 0.51 0.02 0.08 0.65 | K18
34 | 0.42 0.73 0.28 | 0.46 0.40 0.64 | 0.30 0.17 0.73 0.13 0.27 | A6
35 | 0.16 0.84 0.20 | 0.22 0.30 0.74 | 0.22 0.16 0.75 0.07 0.07 | A7
36 | 0.21 0.82 0.22 | 0.32 0.31 0.68 | 0.24 0.20 0.70 0.24 0.14 | A8

aK: Knowledge; S: Skills; A: Attitude.

bExtraction method: Principal component analysis.

cRotation method: Varimax with Kaiser normalization (Rotation converged in 8 iterations).

dThese indicators are detailed in Figure 1.

Confirmatory factor analysis. There were 159, 126, and 122 questionnaires collected from the 3 cohorts of respondents, respectively, that were included according to the established criteria (Table 1). The reasonable fit of the scale was confirmed based on the following: CMIN/DF = 3.596; CFI = 0.905; IFI = 0.905; RMSEA = 0.080. As shown in Table 3, the Q values represent the gaps between the existing and expected levels of competency as perceived by the 3 groups of participating graduates. Q values are negative for all 36 core competency indicators. Based on the total Q values, the largest overall competency gap is found among new graduates (−1.81), followed by resident physicians (−1.70) and senior physicians (−1.29), in that order. Within individual cohorts, the largest gap among resident physicians (−1.84) and senior physicians (−1.41) is seen in the domain of knowledge (K). Among new graduates, the largest gap (−1.92) is associated with the domain of skills (S), with the gap in knowledge (−1.91) trailing closely behind. For the domain of skills (S), both resident and senior physicians perceive their existing competency levels to be merely at the 'application' level (2.46 and 2.77, respectively), which contrasts starkly with their expected levels of 'proficient' (4.13 and 4.02, respectively). New graduates, on the other hand, view their existing level as 'application' (2.00), while expecting their competency to reach the level of 'competent' (3.92).
Table 3.

Competency Gap Analysis Based on the Modified SERVQUAL Model

Indicatorb | New Graduate (Pa, Ea, Qa, t, p) | Resident Physician (P, E, Q, t, p) | Senior Physician (P, E, Q, t, p)
(t and p values are from the paired sample test)
K1 | 1.72 3.47 −1.75 −21.29 <0.001 | 2.01 3.75 −1.74 −18.47 <0.001 | 2.39 3.65 −1.26 −13.52 <0.001
K2 | 1.72 3.54 −1.82 −21.66 <0.001 | 1.98 3.80 −1.83 −19.59 <0.001 | 2.35 3.70 −1.35 −14.75 <0.001
K3 | 1.55 3.49 −1.94 −24.14 <0.001 | 1.97 3.75 −1.78 −19.11 <0.001 | 2.30 3.66 −1.37 −14.24 <0.001
K4 | 1.53 3.38 −1.86 −22.39 <0.001 | 1.85 3.63 −1.78 −18.93 <0.001 | 2.12 3.56 −1.44 −14.86 <0.001
K5 | 1.68 3.72 −2.04 −25.11 <0.001 | 2.05 3.93 −1.89 −19.97 <0.001 | 2.14 3.66 −1.52 −14.84 <0.001
K6 | 1.71 3.72 −2.00 −23.78 <0.001 | 2.02 3.89 −1.88 −19.84 <0.001 | 2.15 3.60 −1.45 −14.93 <0.001
K7 | 1.69 3.59 −1.91 −23.64 <0.001 | 1.91 3.86 −1.95 −20.36 <0.001 | 2.09 3.63 −1.53 −15.51 <0.001
K8 | 1.72 3.56 −1.84 −22.04 <0.001 | 1.93 3.81 −1.88 −19.63 <0.001 | 1.95 3.50 −1.56 −15.32 <0.001
K9 | 1.86 3.83 −1.97 −22.20 <0.001 | 1.96 3.88 −1.92 −20.87 <0.001 | 1.97 3.61 −1.64 −15.34 <0.001
K10 | 1.75 3.64 −1.89 −24.17 <0.001 | 1.92 3.80 −1.88 −19.50 <0.001 | 2.01 3.45 −1.44 −14.80 <0.001
K11 | 1.68 3.58 −1.90 −23.47 <0.001 | 1.98 3.75 −1.76 −19.56 <0.001 | 2.15 3.45 −1.30 −14.03 <0.001
K12 | 1.85 3.89 −2.04 −23.84 <0.001 | 2.27 4.05 −1.78 −19.83 <0.001 | 2.52 3.79 −1.27 −14.71 <0.001
K13 | 1.81 3.82 −2.01 −21.98 <0.001 | 2.21 3.98 −1.77 −19.58 <0.001 | 2.39 3.66 −1.27 −14.33 <0.001
K14 | 1.61 3.46 −1.85 −20.75 <0.001 | 1.99 3.78 −1.79 −18.94 <0.001 | 2.12 3.53 −1.41 −14.47 <0.001
K15 | 1.65 3.39 −1.74 −20.28 <0.001 | 1.89 3.80 −1.91 −20.46 <0.001 | 2.05 3.53 −1.49 −15.20 <0.001
K16 | 1.79 3.67 −1.88 −23.84 <0.001 | 2.15 3.87 −1.71 −19.10 <0.001 | 2.19 3.50 −1.31 −14.16 <0.001
K17 | 1.69 3.63 −1.94 −24.32 <0.001 | 2.00 3.86 −1.86 −19.98 <0.001 | 2.09 3.46 −1.38 −14.96 <0.001
K18 | 1.56 3.58 −2.02 −23.63 <0.001 | 1.88 3.80 −1.92 −20.75 <0.001 | 1.92 3.38 −1.47 −14.52 <0.001
S1 | 2.49 4.11 −1.62 −22.11 <0.001 | 2.80 4.22 −1.43 −16.41 <0.001 | 2.97 4.06 −1.09 −13.81 <0.001
S2 | 2.26 4.11 −1.85 −23.90 <0.001 | 2.66 4.22 −1.56 −17.67 <0.001 | 2.85 4.06 −1.20 −14.50 <0.001
S3 | 2.03 3.97 −1.94 −23.44 <0.001 | 2.58 4.14 −1.56 −16.86 <0.001 | 2.83 4.06 −1.23 −16.15 <0.001
S4 | 2.04 3.98 −1.94 −23.31 <0.001 | 2.58 4.18 −1.60 −18.07 <0.001 | 2.87 4.09 −1.22 −15.06 <0.001
S5 | 1.96 3.94 −1.98 −24.45 <0.001 | 2.50 4.18 −1.68 −18.66 <0.001 | 2.88 4.08 −1.20 −14.23 <0.001
S6 | 1.81 3.92 −2.11 −23.61 <0.001 | 2.42 4.18 −1.77 −19.64 <0.001 | 2.78 4.07 −1.29 −15.72 <0.001
S7 | 1.65 3.77 −2.12 −24.84 <0.001 | 2.15 4.09 −1.93 −19.85 <0.001 | 2.63 3.99 −1.36 −16.56 <0.001
S8 | 1.95 3.83 −1.88 −23.24 <0.001 | 2.39 4.10 −1.71 −18.64 <0.001 | 2.79 4.06 −1.27 −15.75 <0.001
S9 | 1.81 3.77 −1.96 −23.72 <0.001 | 2.21 4.02 −1.80 −19.50 <0.001 | 2.58 3.90 −1.32 −15.99 <0.001
S10 | 1.97 3.74 −1.77 −21.57 <0.001 | 2.25 3.99 −1.74 −18.38 <0.001 | 2.56 3.84 −1.29 −14.84 <0.001
A1 | 1.67 3.17 −1.51 −17.14 <0.001 | 1.97 3.49 −1.53 −15.04 <0.001 | 2.48 3.72 −1.25 −15.40 <0.001
A2 | 2.36 3.92 −1.56 −19.97 <0.001 | 2.43 4.01 −1.58 −16.38 <0.001 | 2.72 3.82 −1.10 −13.56 <0.001
A3 | 2.23 3.86 −1.62 −20.25 <0.001 | 2.45 4.06 −1.61 −17.34 <0.001 | 2.68 3.92 −1.24 −15.06 <0.001
A4 | 2.59 3.86 −1.27 −16.29 <0.001 | 2.73 4.02 −1.28 −13.97 <0.001 | 3.02 3.80 −0.79 −10.40 <0.001
A5 | 2.21 3.86 −1.65 −20.50 <0.001 | 2.41 3.98 −1.58 −15.46 <0.001 | 2.53 3.83 −1.30 −14.68 <0.001
A6 | 2.23 3.77 −1.54 −19.24 <0.001 | 2.51 3.95 −1.44 −13.66 <0.001 | 2.80 3.84 −1.03 −11.86 <0.001
A7 | 2.73 3.87 −1.14 −12.99 <0.001 | 2.96 4.04 −1.08 −11.34 <0.001 | 3.17 3.79 −0.61 −9.00 <0.001
A8 | 2.63 3.86 −1.23 −15.38 <0.001 | 2.70 4.02 −1.32 −13.07 <0.001 | 2.58 3.75 −1.17 −12.44 <0.001
K | 1.70 3.61 −1.91 – – | 2.00 3.83 −1.84 – – | 2.16 3.57 −1.41 – –
S | 2.00 3.92 −1.92 – – | 2.46 4.13 −1.68 – – | 2.77 4.02 −1.25 – –
A | 2.33 3.77 −1.44 – – | 2.52 3.95 −1.43 – – | 2.75 3.81 −1.06 – –
Overall | 1.92 3.73 −1.81 – – | 2.24 3.94 −1.70 – – | 2.46 3.75 −1.29 – –

aP: Perceived current level of competency; E: Perceived expected level of competency; Q = P – E (Gap between the perceived current and perceived expected competency levels).

bThese indicators are detailed in Figure 1.


Discussion

Unlike previous research that focused on such parameters as tangibility, reliability, responsiveness, and assurance of services, as well as empathy of the faculty and staff [19,20], our study aimed to evaluate the evolution of medical graduates' core competencies in 3 domains: knowledge (K), skills (S), and attitude (A). We designed a competency-based assessment system that is holistic and implementable to examine how clinicians' competencies have evolved from when they were new medical graduates, through residency, to becoming seasoned practicing physicians. The gap analysis, another component of our system, yielded uniquely valuable insights about the quality of medical education as perceived by the participating graduates. In Table 3, the negative Q values for all 36 competency indicators among the 3 cohorts of graduates indicate a higher expected level of competency than participants' perceived existing level. Based on the total Q values, the largest overall competency gap is seen among new graduates, followed by residents and senior physicians, in that order. In terms of domains, distinct gaps are found in the domains of skills (S) and knowledge (K) in all 3 cohorts. Hence, there appear to be cohort-specific and domain-specific contributors to these gaps, and targeted remedial measures will be needed to bridge them. For example, at the indicator level, the biggest gap among new graduates is associated with 'conducting emergency rescue', followed by 'formulating the treatment plan'; both indicators fall within the domain of skills (S). To bridge the gap, additional class hours, as part of the clinical skill training series at the SUMC, can be devoted to scenario-based simulation training. At the domain level, the biggest gap among new graduates is found in the domain of skills (S).
As required by laws and clinical practice standards in China, all medical activities shall be conducted under the supervision of senior physicians to ensure the safety of patients and the learning environment of medical students. New graduates can thus only reach the level where they can 'apply' the knowledge learned, but cannot reach the 'competent' level, where they follow standard guidelines and practice independently. The 'competent' level of competency is now a requirement for standardized resident training in China. Therefore, there is a more urgent need to ramp up new graduates' clinical skills, so they can be better prepared as they transition to the residency phase, where more emphasis is placed on clinical practice. Methods such as simulation techniques, standardized patients, and enhanced clinical exposure can all help elevate new graduates' clinical skills [21,22]. Different levels of expectation were also found between new graduates and residents/senior physicians. While new graduates hoped to reach the level of 'competent' (competency level = 3) for the great majority of indicators when they graduated, resident and senior physicians aspired to become 'proficient' (competency level = 4) for more indicators. This difference is not a total surprise, given the different professional development phases that these graduates find themselves in. However, upon closer examination, the expectation of 'being proficient' appeared predominantly associated with the domain of skills (S) among residents (9 out of 10 skill-related indicators) and senior physicians (8 out of 10 skill-related indicators), and, to a lesser degree, among new graduates (2 out of 10 skill-related indicators). Interestingly, this strong association was not seen in the domains of attitude (A) or knowledge (K). In other words, the study participants did not demonstrate a similar degree of expectation for attitude- and knowledge-related competencies.
This gravitation toward skill-defined competencies may reflect a paradigmatic orientation among medical graduates from the SUMC as a whole, one that places higher emphasis on skill acquisition than on the development of competencies in such areas as attitude and knowledge. This finding highlights the need to drive home not only the ultimate goal of nurturing well-rounded health-care professionals but also the importance of operationalizing this aim, so that professional expectations can be raised accordingly and training courses/programs fit to deliver on this goal will be created and propagated. Fulfilling this objective also underscores the value of a multi-component assessment system, as proposed by this study, for measuring the multiple dimensions of medical competency.

Implications

The competency-based assessment system that we propose can be completed not only by 'receivers' of medical education/training (e.g., medical graduates, as in the current study), but also by 'administrators' (e.g., instructors, supervisors) and 'beneficiaries' (e.g., patients) of this education/training (although some modification of the indicators may be needed for the survey to be more meaningful to the latter group of stakeholders). This broad set of potential applications can facilitate the creation of a 360-degree survey of clinicians' core competencies, which echoes the systems-oriented characterization of the competencies that physicians need to demonstrate in order to serve the health-care needs of a society, a view elucidated by Epstein and Hundert [1,2] and referenced in the Introduction of this report. Unlike assessment systems that simply rate clinicians' competency level at particular time points, the gap analysis incorporated into our system, which compares existing with expected levels of competency, empowers the receivers of medical education by acknowledging the value of their feedback. Insights culled from this group of stakeholders can inform policy-makers and administrators of medical education as these decision-makers endeavor to instigate on-target improvements to bridge the pedagogical gaps. On the other hand, gap analysis facilitates the establishment of a personal benchmark for clinicians, which allows them to take stock of the progress they have made and titrate their goals and expectations as they continue evolving professionally [23-25]. Additionally, the competency-based KSA scale proposed in our study can serve as a reference to guide the reform of medical licensing examinations. In the past, these examinations focused on knowledge. Today, more emphasis is placed on physicians' professionalism and clinical skills.
Pending further feasibility studies, the 36 indicators contained in the scale can be developed into an expanded set of criteria to assist the redesign of medical licensing examinations.

Limitations of the study

As constrained by time and resources, assessments by the 3 cohorts of graduates from the SUMC (new graduates, resident physicians, and senior physicians) were used as a proxy to gauge clinicians' professional development over time. Hence, the development trends found in this research may diverge from those in a longitudinal study that monitors the competency evolution of one single group of graduates. Secondly, the analysis of competency gaps in this study was based on participants' self-assessment, which might not fully agree with assessments based on more objective measures or furnished by other key stakeholders such as patients, supervising physicians, peers, and nurses. Interpretations and extrapolations of the study findings thus need to be pursued with caution. Last but not least, the KSA-based assessment system proposed in our study was tested only among graduates from one medical university, and needs to be further validated at additional medical institutes and in different parts of the country.

Conclusion

A competency-based assessment system is proposed to evaluate clinicians' competency development in 3 domains: knowledge (K), skills (S), and attitude (A). The system consists of 36 competency indicators, a rating scale of 5 proficiency levels, and a gap analysis to measure competency evolution through 3 key milestones in a clinician's professional career: new graduate, resident physician, and senior physician. The competency gaps identified can provide an evidence-based guide for clinicians' own continuous development as well as future medical curriculum improvements.
References (16 in total)

1.  Clinical skills in early postgraduate medical trainees: patterns of acquisition of confidence and experience among junior doctors in a university teaching hospital.

Authors:  G M Marel; P M Lyon; L Barnsley; E Hibbert; A Parise
Journal:  Med Educ       Date:  2000-12       Impact factor: 6.251

2.  What graduate follow-up studies can tell us.

Authors:  L H Distlehorst
Journal:  Med Educ       Date:  2000-12       Impact factor: 6.251

3.  Quality improvement in medical students' education: the AAMC medical school graduation questionnaire.

Authors:  John H Lockwood; Rajeev K Sabharwal; Deborah Danoff; Michael E Whitcomb
Journal:  Med Educ       Date:  2004-03       Impact factor: 6.251

4.  Longitudinal research databases in medical education: facilitating the study of educational outcomes over time and across institutions.

Authors:  David A Cook; Dorothy A Andriole; Steven J Durning; Nicole K Roberts; Marc M Triola
Journal:  Acad Med       Date:  2010-08       Impact factor: 6.893

5.  Implementation of competency-based medical education: are we addressing the concerns and challenges? (Review)

Authors:  Richard E Hawkins; Catherine M Welcher; Eric S Holmboe; Lynne M Kirk; John J Norcini; Kenneth B Simons; Susan E Skochelak
Journal:  Med Educ       Date:  2015-11       Impact factor: 6.251

6.  Feasibility and Outcomes of Implementing a Portfolio Assessment System Alongside a Traditional Grading System.

Authors:  Celia Laird O'Brien; Sandra M Sanguino; John X Thomas; Marianne M Green
Journal:  Acad Med       Date:  2016-11       Impact factor: 6.893

7.  Defining and assessing professional competence. (Review)

Authors:  Ronald M Epstein; Edward M Hundert
Journal:  JAMA       Date:  2002-01-09       Impact factor: 56.272

8.  The Dreyfus model of clinical problem-solving skills acquisition: a critical perspective. (Review)

Authors:  Adolfo Peña
Journal:  Med Educ Online       Date:  2010-06-14

9.  A study to investigate the effectiveness of SimMan® as an adjunct in teaching preclinical skills to medical students.

Authors:  Meenakshi Swamy; Marina Sawdon; Andrew Chaytor; David Cox; Judith Barbaro-Brown; John McLachlan
Journal:  BMC Med Educ       Date:  2014-11-19       Impact factor: 2.463

10.  Portfolio as a tool to evaluate clinical competences of traumatology in medical students.

Authors:  Fernando Santonja-Medina; M Paz García-Sanz; Francisco Martínez-Martínez; David Bó; Joaquín García-Estañ
Journal:  Adv Med Educ Pract       Date:  2016-02-11
