Literature DB >> 26600831

A model for evaluation of faculty members' activities based on meta-evaluation of a 5-year experience in medical school.

Aeen Mohammadi1, Kamran Soltani Arabshahi1, Rita Mojtahedzadeh2, Mohammad Jalili3, Hossein Keshavarz Valian4.   

Abstract

BACKGROUND: There is global interest in deploying systems for evaluating faculty members' activities; however, implementing a fair and reliable system is challenging. In this study, the authors devised a model for evaluating faculty members' activities based on their viewpoints and meta-evaluation standards.
MATERIALS AND METHODS: The reliability of the current faculty members' activities metrics system was investigated in the Medical School of Tehran University of Medical Sciences in 2014. The authors then conducted semi-structured interviews based on meta-evaluation standards and, from the interview results, designed a questionnaire that was delivered to faculty members. Finally, they extracted the components of the model from content analysis of the interviews and factor analysis of the questionnaire, and finalized them in a focus group session with experts.
RESULTS: Reliability of the current system was 0.99 (P < 0.05). The final model had six dimensions (mission alignment, accuracy, explicit, satisfaction, appropriateness, and constructiveness) derived from factor analysis of the questionnaire and nine factors (consensus, self-reporting, web-based system, evaluation period, minimum expectancies, analysis intervals, verifiers, flexibility, and decision making) obtained via qualitative content analysis of the interviews.
CONCLUSION: In this study, the authors present a model for evaluating faculty members' activities based on meta-evaluation of the existing system. The model covers both conceptual and executive aspects. Faculty members' viewpoints are the core component of this model, so it should be acceptable to faculty members when a medical school uses it to evaluate their activities.

Keywords:  Evaluation studies; faculty; medical; models; personnel management; workload

Year:  2015        PMID: 26600831      PMCID: PMC4621650          DOI: 10.4103/1735-1995.165958

Source DB:  PubMed          Journal:  J Res Med Sci        ISSN: 1735-1995            Impact factor:   1.852


INTRODUCTION

Implementation of a fair, systematic, and reliable system for evaluating faculty members' activities is a challenging issue.[1,2] This evaluation is mostly performed in the areas of teaching; research and innovation; professional practice; commitment; and citizenship, and should be able to show interpersonal differences in these areas.[3] It helps universities make decisions regarding promotion, appointment, faculty compensation, recruiting, granting tenure, and rewarding excellence based on objective criteria.[4,5] On the other hand, expectations and evaluation criteria should be clear and announced to the faculty members.[6,7] Despite the global interest in these kinds of evaluation systems, few studies have assessed systems that cover all the activities of faculty members.[8] Therefore, there is a need for a comprehensive performance evaluation system, one able to differentiate among faculty members, observe the mission of the university, and be applicable to all faculty members.[5] Some medical schools started to evaluate the activities of their faculty members based on Mission Based Management (MBM), presented by the Association of American Medical Colleges and Computer Sciences Corporation in the late 1990s.[9,10,11] MBM emerged as a way to understand the costs and revenues associated with the multiple missions of the medical school; align faculty members' activities with the school's mission; provide transparent data; and make decisions based on those data.[12] Evaluating the activities and productivity of faculty members, especially in education, is very complicated;[11] however, several solutions have been proposed, including the experiences of Dalhousie University,[13] the University of Texas Health Science Center,[14] and the University of Wisconsin,[15] and several papers have been published in this regard since 1995.[16]

The Medical School of Tehran University of Medical Sciences (TUMS) implemented an objective metrics system in 2009 to evaluate faculty members' performance quantitatively in the areas of education, research, and service provision. This system is called SHOA, a Persian abbreviation for a phrase meaning "academic performance metrics and valuing,"[17] and is designed based on MBM concepts.[12,18] In this system, the activities of faculty members are arranged in five categories: education (theoretical teaching; teaching in laboratories and practice; clinical and field training; educational workshops; consultation and supervision; evaluation; educational products; research in education; and self-promotion), research (research projects; articles and research products; research workshops; and consultation and supervision), clinical services (patient care and service provision with and without the presence of students), administrative affairs (management positions and participation in official meetings), and academic activities outside the university.[17] Data are collected through web-based software in which each faculty member has a password-protected home page. Faculty members self-report their activities, along with their details, during each evaluation period (1 month) and then send them to a "verifier," who is the dean of the department or clinical ward. The verifier can return some data to the faculty member for further explanation or correction. Finally, the verified data are forwarded to the school's dean. The information in SHOA is confidential. Each activity has a relative value scale based on the time required for preparation and presentation, the group or individual nature of the activity, and its importance. A relative value unit (RVU) total is obtained by multiplying the number of performed activities by their values. This scoring system is used in different universities worldwide.[19,20] After 5 years, the coverage of the SHOA system is 79% and 75% in the Clinical and Basic Sciences Departments, respectively.
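The RVU calculation described above (count of each performed activity multiplied by its relative value, summed over the period) can be sketched as follows. The activity names and value weights here are illustrative placeholders, not the actual SHOA list or weights.

```python
# Sketch of the RVU (relative value unit) calculation described above.
# Activity names and values are illustrative, not actual SHOA weights.

ACTIVITY_VALUES = {            # relative value per unit of activity
    "lecture_hour": 3.0,
    "clinical_round": 2.5,
    "thesis_supervision": 4.0,
}

def total_rvu(reported):
    """Multiply the count of each performed activity by its relative
    value and sum over all activities in the evaluation period."""
    return sum(count * ACTIVITY_VALUES[name] for name, count in reported.items())

month = {"lecture_hour": 10, "clinical_round": 8, "thesis_supervision": 1}
print(total_rvu(month))  # 10*3.0 + 8*2.5 + 1*4.0 = 54.0
```

In the real system such a sum would be computed per faculty member per monthly evaluation period, after verification of the self-reported counts.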
Some applications of the system include determining the expectancies from faculty members, directing the activities of faculty members and educational departments, promotion, and assessment of effective physical presence.[17] However, the SHOA system itself has not been evaluated yet, and there is no evidence-based structured model for a faculty members' metrics system in other universities either.[5] Therefore, we decided to meta-evaluate the SHOA system in order to devise a model for evaluating the activities of faculty members. For the meta-evaluation, the personnel evaluation standards presented by the Joint Committee on Standards for Educational Evaluation were used. These standards cover four areas: Propriety Standards (7 standards), Utility Standards (6 standards), Feasibility Standards (3 standards), and Accuracy Standards (11 standards).[21]

MATERIALS AND METHODS

This mixed-methods study with an explanatory design and a QUAL-QUAN approach[22] was performed in the TUMS Medical School in 2014. The school has 10 Basic Sciences and 25 Clinical Departments with 99 (10%) and 884 (90%) faculty members, respectively. The Ethics Committee of TUMS approved the study. First, we investigated the validity and reliability of the current SHOA system. Then we conducted semi-structured interviews with faculty members based on the Joint Committee meta-evaluation standards. Through content analysis of the interviews, we designed a questionnaire, which was completed online by faculty members. In the final stage, using factor analysis of the questionnaire, we determined the components of the model for evaluating faculty members' activities, and we finalized the model in a focus group session with experts in the field. The validity of the system had been evaluated previously, during the development of SHOA, by having all department deans (35 deans) review the activities list to ensure complete coverage of all faculty members' activities. To evaluate the reliability of the system, we test-retested self-reported data for September 2013. Forty faculty members were randomly selected, proportionate to size, from the Basic Sciences and Clinical Departments. Four who had not entered their data in September 2013 and two who refused to participate were excluded. One month after the evaluation period, we saved the entered information, removed it from the system, and asked the participants to re-enter their activities. The reliability of the system was calculated by assessing the correlation of the RVUs of the activities in the two stages. To evaluate the status of the SHOA system with regard to the Joint Committee meta-evaluation standards, we conducted semi-structured interviews with faculty members who had at least 2 years of experience with SHOA. We selected them through convenience sampling and obtained informed consent for their participation.
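The test-retest reliability check described above, correlating each participant's RVUs from the first entry with the re-entered ones, can be sketched as follows. The RVU figures below are made up for illustration; the study's actual data are not reproduced here.

```python
import numpy as np

# Sketch of the test-retest reliability check: correlate each
# participant's total RVUs from the first entry (test) with the
# re-entered values one month later (retest).
# These figures are illustrative, not the study's actual data.

test   = np.array([120.0, 95.5, 210.0, 88.0, 150.5, 99.0])
retest = np.array([118.0, 97.0, 209.5, 90.0, 149.0, 101.0])

r = np.corrcoef(test, retest)[0, 1]   # Pearson correlation coefficient
print(round(r, 3))
```

A coefficient near 1, as the study reports (0.99), indicates that re-entered activity data closely reproduce the original self-reports.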
The interview questions were based on the meta-evaluation standards.[21] We interviewed 18 faculty members to achieve data saturation. The same interviewer conducted all interviews and asked the faculty members to discuss issues such as how to prepare a list of academic activities and their values, how to increase the validity and reliability of the system, the process of evaluation and data verification, and how the obtained information could be utilized. We recorded the interviews and transcribed them for qualitative content analysis. Two experts performed the content analysis, and the results were compared to achieve trustworthiness. In the next stage, according to the codes and themes extracted from the interviews, we designed a questionnaire. Fifteen experts in faculty members' evaluation confirmed its face validity. The questionnaire included 13 demographic questions and 33 Likert-type main questions ranging from 0 (very little) to 4 (very much). Evaluation of reliability by retesting after 2 weeks in a sample of 25 faculty members showed a correlation coefficient (r) of 0.81. Cronbach's alpha for the internal consistency of the questionnaire was 0.924. We delivered the final questionnaire to all faculty members through the internet. Participation in this survey was voluntary, and the questionnaire was anonymous. SPSS (version 17.0; SPSS Inc., Chicago, IL, USA) was used for data analysis and exploratory factor analysis. Finally, three sources were used to design the model: the results of the factor analysis of the questionnaire, the results of the interviews' content analysis, and a focus group with medical education experts to finalize the model.
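Cronbach's alpha, used above to assess the questionnaire's internal consistency, is computed from the item variances and the variance of the total score. A minimal sketch follows; the simulated Likert responses are purely illustrative.

```python
import numpy as np

# Sketch of Cronbach's alpha for internal consistency.
# Rows are respondents, columns are Likert items (0-4).
# The simulated data below are illustrative only.

def cronbach_alpha(scores):
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
base = rng.integers(0, 5, size=(25, 1))          # shared latent tendency
noise = rng.integers(-1, 2, size=(25, 6))        # small per-item noise
scores = np.clip(base + noise, 0, 4).astype(float)  # correlated items

print(round(cronbach_alpha(scores), 3))
```

Because the simulated items share a common latent component, alpha comes out high, mirroring the kind of value (0.924) reported for the questionnaire.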

RESULTS

As this is a mixed-methods study, we first present the results of the quantitative and qualitative parts separately and then introduce the draft and final models.

Reliability

Table 1 shows the reliability of the system in total and for the different categories. Comparison of the RVUs obtained from the test and retest using a paired t-test showed no significant difference.
Table 1

Reliability of faculty members’ activities metrics system in medical school


Interviews

We asked interview participants questions relating to eight domains: the academic activities list, activities' values, validity, reliability, workflow, data verification, application of information, and the effects of the system on faculty members' performance. Using qualitative content analysis, we coded the transcribed text and extracted 96 subthemes, from which 65 items remained after omitting phrases with overlapping statements. We then reviewed these 65 subthemes to group related concepts under appropriately named themes. Finally, we reached 10 main themes, which were used as the factors of the final model. Table 2 shows these main themes with some of their relevant subthemes.
Table 2

Main themes with some of their relevant subthemes resulted from qualitative content analysis of interviews

Also, the interviews showed that the status of the meta-evaluation standards was acceptable, except for the standard of "Functional Reporting" in the area of Utility Standards, which was not met. "Functional Reporting" means that "reports should be clear, timely, accurate, and germane so that they are of practical value to the evaluatees and other appropriate audiences."[21]

Questionnaire

A total of 488 faculty members completed the questionnaire (response rate = 49.6%). Since faculty members were invited to participate via their academic email addresses, and many of them used other email services, the corrected response rate could be higher. Table 3 shows the characteristics of the survey participants, whose mean age was 45 (±7.9) years.
Table 3

Demographic characteristics of the faculty members’ activities metrics system survey participants in medical school

We used exploratory factor analysis to extract the main factors. The Kaiser-Meyer-Olkin index was 0.914, and Bartlett's test of sphericity was 5238.162 (P < 0.001), indicating the adequacy of the items and the data's suitability for factor analysis. To extract factors, we used the varimax rotation method with four-, five-, and six-factor solutions. The six-factor solution proved the most suitable: in this solution the eigenvalues were greater than one and explained 66.20% of the variability, and the scree plot also indicated that the data should be analyzed for six factors. Eight questions that did not load on these six factors, and whose removal had no significant effect on the explained variance, were removed. We then labeled each of the six factors according to its related questions. The questions loaded on each factor are presented in Table 4. Table 5 shows the factors' labels with their Cronbach's alpha, mean, and standard deviation. A t-test showed no significant difference between men and women. Since the average of all factors was above 2 (the midpoint of the Likert scale), all were used in designing the model.
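The eigenvalue-greater-than-one rule used above (the Kaiser criterion) can be sketched on the eigenvalues of the item correlation matrix. The response matrix below is simulated with two latent factors for illustration; it is not the study's data, and a full replication would also apply varimax rotation.

```python
import numpy as np

# Sketch of the Kaiser criterion: keep components whose eigenvalues
# of the item correlation matrix exceed 1. Data are simulated with
# two underlying factors for illustration only.

rng = np.random.default_rng(42)
latent = rng.normal(size=(488, 2))                 # two latent factors
loadings = rng.normal(size=(2, 33))                # 33 questionnaire items
responses = latent @ loadings + rng.normal(scale=2.0, size=(488, 33))

corr = np.corrcoef(responses, rowvar=False)        # 33 x 33 correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending eigenvalues

n_factors = int((eigvals > 1).sum())               # Kaiser criterion
print(n_factors, "factors with eigenvalue > 1")
```

The trace of a correlation matrix equals the number of items, so the eigenvalues partition the total standardized variance; components above 1 explain more than a single item would on its own.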
Table 4

Labels and related questions of extracted factors of faculty members’ activities metrics system survey

Table 5

Characteristics of extracted factors of faculty members’ activities metrics system survey


Designing the draft of the model

The model draft had six dimensions and ten factors. The dimensions should be considered in the whole evaluation system of faculty members' activities and were mainly derived from the factor analysis of the questionnaire. The factors constitute the executive components of the system and were mainly obtained via qualitative content analysis of the interviews with faculty members.

The dimensions of the model draft included:

Mission alignment: The evaluation system of faculty members' activities should be effective in promoting their performance. The performance should cover the school's missions, including education, research, clinical services, and administrative and managerial affairs. The system should motivate faculty members to deliver better and more teaching to students. It should help faculty members and managers identify weaknesses and strengths in performance, and it should be able to identify excellent performance.

Accuracy: The information derived from the system should be as accurate as possible. The list of activities should cover all academic affairs, and their values should be close to the real ones.

Explicit: Faculty members should receive adequate information on how to enter the required data, on the uses of the evaluation results, and on the relevant guidelines and bylaws. They should also be given clear definitions of the activities listed in the system.

Satisfaction: Faculty members should be satisfied with the way their questions and ambiguities regarding the system are answered, and with the way and time of verifying their self-reported data.

Appropriateness: The minimum expectancies from faculty members in the areas of education, research, and clinical services should be appropriate.

Constructiveness: The evaluation system should encourage faculty members to perform listed activities that they did not previously do and, on the other hand, to refrain from activities that they previously did but that are not included in the list because they are not in line with the missions. The system should thus encourage faculty members to modify their activities according to the system.

The executive factors of the model draft included:

Consensus: The list of activities and their values should be prepared upon the consensus of all faculty members. The activities should be revised independently in each department and customized accordingly.

Self-reporting: The activities should be self-reported and then verified by an informed and accepted person.

Web-based system: Data entry of the activities, their verification and correction, data analysis, and notifications should be done through web-based software.

Evaluation period: Faculty members' activity data should be entered in the system within a predefined evaluation period, preferably 1 month.

Minimum expectancies: The expected RVUs from faculty members in the areas of education and research should be clear and announced formally.

Analysis intervals: Although the evaluation period is 1 month, the mean RVUs for each faculty member should be calculated over 1-year intervals. In this way, varying amounts of activity in different months do not cause problems in the evaluation.

Verifiers: Each verifier should be in charge of verifying the data of a maximum of 15 faculty members, to maintain the accuracy and speed of the process.

Flexibility: All the components of the evaluation system, including the list of activities, their values, the minimum expectancies, and the workflow, should be flexible and revised every 1 or 2 years.

Decision making: The scores (RVUs) obtained from the system should not replace the school administrators' decisions; they should only be used to support the process of decision making.

Validity: The validity of the system should be constantly evaluated. For this purpose, the documents and evidence of the activities of a number of randomly selected faculty members would be objectively assessed.

Finalizing the model

A focus group was held with the participation of the medical school dean, three medical education experts, and four faculty members experienced in the area of evaluation to finalize the above-mentioned model. In this 4-hour session, all the components of the model were discussed and reviewed. Since the components of the model had been extracted directly from the views of the faculty members, caution was exercised not to change or modify items based merely on personal preference. The only modification was removing "Validity" from the factors, since the members of the focus group believed that it was part of "Accuracy" and did not require a separate entity. Figure 1 shows a scheme of the model components.
Figure 1

Schematic model of evaluation of faculty members’ performance derived in Tehran University of Medical Sciences


DISCUSSION

Conceptual models are employed to better identify and understand phenomena that are conceptualized in the mind[23] and can be used to structure a managerial problem in order to summarize the views of experts on what is right.[24] In this study, we evaluated a system that had been operational for several years in order to present a model from the viewpoint of its main users, that is, the faculty members. We therefore determined its validity and reliability, investigated the views of the faculty members through qualitative interviews, and designed a questionnaire based on the results of the interviews. The Joint Committee Personnel Evaluation Standards were observed at all stages. In the end, the model was presented considering the results of the content analysis of the interviews, the factor analysis of the questionnaire, and experts' opinions gathered through a focus group. One important point affecting this study is how the SHOA system has been installed and operated at TUMS. The little resistance encountered when the system was first designed has faded over time,[25] since the system is flexible and revised periodically. SHOA thus represents the views of the faculty members who have used it in past years. For this reason, it was not surprising that the views of the faculty members were close to the running system, and in the qualitative interviews, data were saturated after about the 10th interview, though we continued to 18 interviews to cover different educational groups. Compliance with the Joint Committee Personnel Evaluation Standards was acceptable; in particular, the validity and reliability of the system were higher than expected. Only the standard "Functional Reporting" in the area of "Utility Standards" was not met. Therefore, after the project, a module was added to the system that enables faculty members to evaluate their activities over a period of their choice and calculate their own RVUs across different categories.
Most reports from universities with similar systems concern the installation of the system and the determination of the list of activities and their weights, and mainly focus on multiple-source decision making and analysis of the obtained data.[8,26] In this study, we tried to present a model based on the viewpoints of faculty members, drawing on the 5-year experience of the SHOA system. One component of the model is "Mission Alignment." The main objective of MBM is to align the activities of faculty members with the school's missions.[9] From the standpoint of faculty members, the evaluation system should differentiate fairly between clinical and basic sciences faculty members. Unlike basic sciences faculty members, clinical faculty members obtain a large proportion of their RVUs through clinical activities. Therefore, evaluation of activities should be based on the spectrum of mission-aligned activities in each department; otherwise, the results will always falsely show that some departments are less active than others. This concern mostly exists for educational activities and is resolved by the factor of "Minimum Expectancies" in the model; in other words, the calculation of the minimum RVUs expected from faculty members for educational activities is performed independently in each department. In SHOA, the educational expectancy calculation is norm-referenced for each department: the mean minus one standard deviation is announced as the expected RVUs for that department. Research activities, on the other hand, are calculated criterion-referenced according to the research policies of the university.[17] Moreover, the faculty members insisted that managerial activities should not be included in the minimum expectancies and that the RVUs obtained from administrative affairs should only be included in the total score.
As a result, for the promotion of faculty members, the three domains of education, research, and total score should be evaluated independently. Due to the insistence of the faculty members in this regard, "Appropriateness" was added to the model dimensions. The dimensions of "Accuracy," "Explicit," and "Satisfaction" are very close to the Personnel Evaluation Standards[21] and were especially emphasized by the faculty members. One of the considerations in the evaluation of faculty members is its effect on their performance, that is, whether it is constructive.[27] The faculty members also stressed the importance of this issue, which was discussed under "Constructiveness." The faculty members in this study emphasized cooperation in the different stages of designing the evaluation system, especially in preparing the list of activities and their values, which was added as the factor of "Consensus" to the model. Moreover, constant correction and modification of the system in proportion to changes in the programs and job descriptions of departments was another demand of the faculty members, which was addressed as "Flexibility" among the model factors. "Self-reporting" and the role of "Verifiers" were other discussed issues: the faculty members believed that they should be trusted, although they insisted on verification of their self-reported activities with consideration of their respected position. Having a "web-based" evaluation system to make the process easier was also discussed. The remaining points were the "Evaluation Period" and "Analysis Intervals." Although the data entry interval was 1 month, the calculation of obtained scores for decision making was based on the monthly average of activities over a year. Since activities vary between months, their monthly average provides a better estimate of a faculty member's activities over 1 year.
Finally, the faculty members expected the school managers to evaluate their activities intelligently and to use the SHOA results as an aid in the process of "decision making," rather than making decisions based only on the obtained RVUs. In this study, the questionnaire was emailed only to participants' academic email addresses, which could be considered a limitation, since some of them may use other email services. The model in this study was presented based only on the viewpoints of the faculty members as evaluatees; other stakeholders may have other dimensions and factors in mind, which should be investigated in future studies. Moreover, we did not investigate how the list and weights of the activities were determined, what the problems of collecting data on faculty members' activities (especially educational ones) were, or how the quality of the activities and its relationship with their quantity should be assessed; these questions require further studies.
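The norm-referenced educational expectancy discussed above (the department mean minus one standard deviation announced as the expected RVUs) can be sketched as follows. The department name and RVU figures are illustrative, not actual SHOA data.

```python
import numpy as np

# Sketch of the norm-referenced educational expectancy described above:
# within each department, the expected RVU level is the department
# mean minus one standard deviation. Figures are illustrative.

def expected_rvu(department_rvus):
    rvus = np.asarray(department_rvus, dtype=float)
    return rvus.mean() - rvus.std(ddof=1)   # mean minus one SD

anatomy = [80.0, 95.0, 110.0, 70.0, 105.0]  # hypothetical department
print(round(expected_rvu(anatomy), 1))      # mean 92.0, SD ~16.8 -> ~75.2
```

Computing the threshold per department keeps the expectancy sensitive to each department's own spectrum of mission-aligned activities, as the text argues.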

CONCLUSION

In this study, we evaluated a faculty members' performance metrics system and investigated faculty members' views through qualitative interviews and a quantitative survey. Self-reported faculty members' activities had acceptable reliability and validity and could be used for decision making. The proposed model for evaluating faculty members' activities consists of six dimensions (mission alignment, accuracy, explicit, satisfaction, appropriateness, and constructiveness) and nine executive factors (consensus, self-reporting, web-based system, evaluation period, minimum expectancies, analysis intervals, verifiers, flexibility, and decision making) and could be used for designing and implementing such systems in medical schools.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.

AUTHORS' CONTRIBUTIONS

AM contributed to the conception and design of the work; the acquisition, analysis, and interpretation of data; conducting the study; drafting and revising the manuscript; and approval of the final version of the manuscript, and agreed to be accountable for all aspects of the work. KSA, RM, MJ, and HKV each contributed to the design of the work, revising the draft, and approval of the final version of the manuscript, and agreed to be accountable for all aspects of the work.

1.  Measuring faculty effort and contributions in medical education.

Authors:  D O Nutter; J S Bond; B S Coller; R M D'Alessandri; B L Gewertz; L M Nora; J P Perkins; T S Shomaker; R T Watson
Journal:  Acad Med       Date:  2000-02       Impact factor: 6.893

2.  Implementing a comprehensive relative-value-based incentive plan in an academic family medicine department.

Authors:  J S Cramer; S Ramalingam; T C Rosenthal; C H Fox
Journal:  Acad Med       Date:  2000-12       Impact factor: 6.893

3.  Mission aligned management and allocation: a successfully implemented model of mission-based budgeting.

Authors:  Gordon T Ridley; Susan E Skochelak; Philip M Farrell
Journal:  Acad Med       Date:  2002-02       Impact factor: 6.893

4.  How do medical schools use measurement systems to track faculty activity and productivity in teaching?

Authors:  William T Mallon; Robert F Jones
Journal:  Acad Med       Date:  2002-02       Impact factor: 6.893

5.  Ten-year experience with mission-based budgeting in the faculty of medicine of Dalhousie University.

Authors:  John Ruedy; Noni E MacDonald; Brian MacDougall
Journal:  Acad Med       Date:  2003-11       Impact factor: 6.893

6.  Implementing a mission-based reporting system at an academic health center: a method for mission enhancement.

Authors:  Lydia Pleotis Howell; Michael A Hogarth; Thomas F Anders
Journal:  Acad Med       Date:  2003-06       Impact factor: 6.893

7.  Ologs: a categorical framework for knowledge representation.

Authors:  David I Spivak; Robert E Kent
Journal:  PLoS One       Date:  2012-01-31       Impact factor: 3.240

8.  Implementing a simpler approach to mission-based planning in a medical school.

Authors:  Tod B Sloan; Celia I Kaye; William R Allen; Brian E Magness; Steven A Wartman
Journal:  Acad Med       Date:  2005-11       Impact factor: 6.893

9.  Measuring contributions to the clinical mission of medical schools and teaching hospitals.

Authors:  R M D'Alessandri; P Albertsen; B F Atkinson; R M Dickler; R F Jones; D G Kirch; D E Longnecker; E R McAnarney; V M Parisi; S E Selby; J S Stapczynski; J W Thompson; A G Wasserman; K L Zuza
Journal:  Acad Med       Date:  2000-12       Impact factor: 6.893

10.  Challenges of measuring a faculty member activity in medical schools.

Authors:  A Mohammadi; R Mojtahedzadeh; S H Emami Razavi
Journal:  Iran Red Crescent Med J       Date:  2011-03-01       Impact factor: 0.611

