Literature DB >> 32780751

Implementation of an automated scheduling tool improves schedule quality and resident satisfaction.

Frederick M Howard, Catherine A Gao, Christopher Sankey.

Abstract

Rotation schedules for residents must balance individual preferences, compliance with Accreditation Council for Graduate Medical Education guidelines, and institutional staffing requirements. Automation has the potential to improve the consistency and quality of schedules. We designed a novel rotation scheduling tool, the Automated Internal Medicine Scheduler (AIMS), and evaluated schedule quality, resident satisfaction, and perceptions of fairness after implementation. We compared schedule uniformity, fulfillment of resident preferences, and conflicting shift assignments for the hand-made 2017-2018 schedule and the AIMS-generated 2018-2019 schedule. Residents were surveyed in September 2018 to assess perception of schedule quality and fairness. With AIMS, 71/74 (96.0%) interns and 66/82 (80.5%) residents were assigned to their first-choice rotation, a significant increase from the 50/72 (69.4%) interns and 25/82 (30.5%) residents assigned their first choice in the 2017-2018 academic year. AIMS also yielded significant improvements in the number of night shift/day shift conflicts at the time of rotation switches for interns, with a significant decrease to 0.3 conflicts per intern compared to 0.7 with the prior manual schedule. Twenty-two of 82 residents (27%) completed the survey, and average satisfaction and perception of fairness were 0.7 and 0.9 points higher on a 5-point Likert scale for the AIMS-generated schedule when compared to the non-AIMS schedule. There was no significant difference in the preference for assigned vacation blocks, or in variance for night or ICU rotations. Automated scheduling improved several metrics of schedule quality, as well as resident satisfaction. Future directions include evaluation of the tool in other residency programs and comparison with alternative scheduling algorithms.


Year:  2020        PMID: 32780751      PMCID: PMC7418963          DOI: 10.1371/journal.pone.0236952

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

Each year, over 30,000 newly graduated physicians begin work within residency programs in the United States as part of the pathway to independent practice [1]. Residency training comprises supervised inpatient and ambulatory rotations, with a minimum duration of experience in each practice setting mandated by the Accreditation Council for Graduate Medical Education (ACGME). It is challenging to develop a schedule of rotations that adheres to standards for accreditation, respects intern (first-year trainee) and resident (second- and third-year trainee) preferences, and is perceived as equitable. Schedules must satisfy the staffing requirements of affiliated hospitals, and accommodate trainee vacation preferences and requests for specific rotation experiences. Contingency plans must be developed for illness or other absences, often in the form of a backup or ‘jeopardy’ rotation, where trainees can be reassigned from non-essential duties to essential ones. The guidelines set forth by the ACGME impose additional constraints, including limitations on total night float (rotations comprised of consecutive night shifts) and intensive care unit rotations, and a minimum amount of time off between shifts. The ideal schedule also minimizes transitions of care, given the potential impact on patient safety [2]. In most internal medicine training programs, chief residents are responsible for scheduling residents for core rotations [3]. Chief residents can spend weeks of time manually designing a schedule, and are limited in the number of constraints they can simultaneously fulfill [4]. The interest in improved approaches to residency scheduling is exemplified in a recent review of physician scheduling, in which over one third of the identified literature involved resident physicians [5]. Automated systems have been developed to ease the administrative burden of scheduling residents for internal medicine residency programs.
Approaches have generally focused on either the scheduling of individual shifts within a set period, or the assignment of week- to month-long rotations with pre-set shift schedules to trainees within a year-long schedule. The shift work scheduling problem extends far beyond residency programs, with early model development in the medical field targeted towards nursing personnel [6]. The analogous shift scheduling problem for residency training programs has since been addressed with a variety of methods. Goal programming, with optimization of multiple conflicting constraints, has been used to assign shifts for anesthesia [7] and emergency medicine residents [8]. An integer programming model implemented for a pediatric emergency department successfully improved several important metrics of schedule quality, including suboptimal sleep patterns between shifts and disparity of night shift assignment between residents [9]. This implementation also documented over sixteen hours of time saved in schedule generation per month. Similarly, a spreadsheet implementation of an integer programming model for a radiology residency program increased the minimum interval between call shifts, and residents perceived the computer-generated schedule as fairer [10]. The burden on chief residents was not formally measured in this implementation, although the manual schedule generation was described as “rather demotivating”, highlighting the importance of alternative approaches. Whereas the structure of emergency medicine and anesthesia training programs may benefit from optimizing individual shift scheduling, other programs have pre-defined rotations with a set shift schedule which repeats on a weekly to monthly basis. The basic problem of assigning trainees to rotations while satisfying hospital staffing and trainee education requirements is NP-complete, requiring exponentially more time to solve as the numbers of trainees and rotations increase [11].
Therefore, heuristic approaches are often used to identify near-optimal solutions within a reasonable time constraint. The first computerized internal medicine intern and resident schedule was implemented in 1979 at Johns Hopkins, utilizing a “greedy” algorithm to satisfy staffing and educational requirements and vacation requests [12]. Computerized schedules have demonstrated improved ability to satisfy preferences when compared to manual schedules in programs with fewer than 100 trainees [3, 13]. A simplified version of the problem, accounting for staffing and educational requirements without incorporating intern and resident preferences, can be rapidly solved in a “greedy” fashion for larger programs; this algorithm has been successfully implemented at the University of Illinois College of Medicine [11, 12]. An integer programming model applied to the surgical residency program at Emory University School of Medicine was able to identify near-optimal staffing for 28 rotations, although certain requests such as vacation preferences were not accommodated in this solution [14]. Many residency programs have moved towards an “X+Y” block system, where residents alternate between X weeks of inpatient rotations and Y weeks of clinic, and algorithms have been specifically designed to generate schedules for X+Y systems. One implementation at University of Texas Health Center in San Antonio successfully created yearly schedules, but the only quality outcome reported was standardization of clinic capacity [15]. To our knowledge, no open-source tool is available for computer-assisted schedule generation in a training program as large as the Yale New Haven Hospital Internal Medicine Residency Program (YNHH-IMRP), for which over 200 trainees are assigned to over 100 individual roles across numerous services.
We therefore present a heuristic-based scheduling tool, the Automated Internal Medicine Scheduler (AIMS), which was successfully implemented for the YNHH-IMRP during the 2018–2019 academic year, and compare metrics of schedule quality before and after implementation.

Methods

Design of AIMS tool

AIMS was developed in Microsoft Excel and written with Microsoft Visual Basic for Applications (S1 Application). We collected preference data via online survey (Qualtrics, Provo, UT), in which residents ranked the desirability of vacation timeslots and selected their top three choices for rotations. We also collected free-text comments about scheduling preferences, to account for important life events such as maternity/paternity leave and weddings. Essential scheduling elements, such as the scheduling of vacation during a planned honeymoon, were entered manually into the AIMS tool prior to automated scheduling. Additionally, our program coordinated the scheduling of trainees on internal medicine services from six different residency programs within our institution, as well as external trainees from outside institutions who rotate through Yale services. The number of external trainees from outside programs is variable from block to block and determined by the administration of the outside programs. We do not have the liberty to reschedule these trainees, so they are hard-coded prior to running the AIMS tool. Rotation schedules are generated for a one-year period; for our program, this equates to 26 rotations per trainee, typically 2 weeks in length. AIMS emulates a sequential scheduling lottery to account for resident preferences, which is commonly used in college coursework, medical student scheduling, and holiday shift scheduling (S1 Fig) [16, 17]. A tally is kept for each resident, initialized at zero, representing how well the schedule fits with the resident preferences. For example, if a trainee receives a first-choice vacation, the tally will decrease by one; if a trainee receives a fourth-choice vacation, the tally decreases by four. 
Each trainee is assigned two vacations per ACGME requirements, one in the first fourteen blocks and one in the second twelve blocks of the year; thus, if a trainee were assigned their last-choice vacation in the first half of the year, the tally would be decreased by fourteen. Trainees with the lowest tally are then chosen first for the next lottery. After vacation assignments, they will again be chosen by lottery to be assigned a rotation of their preference, if feasible. Our methodology is similar to previous studies, which have utilized a proportion of granted requests to allocate shift assignments equitably [18]. AIMS subsequently iterates through all trainees to assign rotations necessary to satisfy ACGME requirements for emergency medicine and the intensive care unit (ICU). For every rotation with an unfilled two-week block, AIMS iterates through each trainee in a random order to identify a suitable individual to insert. The constraints applied as trainees are assigned to rotations include: prohibition of sequential night float rotations (limiting consecutive night float to two-week intervals), limits on the total number of months a trainee can spend on a specific service, limits on the total number of intensive care unit and night rotations, and prohibition of rotation assignments with “shift conflicts” (i.e., a night shift on the last day of one rotation, followed by a day shift on the first day of the next rotation), which necessitate utilization of another trainee serving on a backup or jeopardy rotation. Adherence to all constraints does not allow the generation of a complete schedule in some cases, so the process is repeated without the constraint on overlapping clinical responsibilities, as a jeopardy trainee can be called upon to cover these shifts. Once all clinical services are covered, the remaining unscheduled time is converted to elective and jeopardy rotations, based on the year and program of the trainee.
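The tally-based lottery described above can be sketched in a few lines of Python. This is an illustrative sketch only: the actual tool is implemented in Excel/VBA, and all function and variable names here are hypothetical. The key idea is that each trainee's tally is decremented by the rank of the choice they actually receive, so trainees who fared poorly are drawn first in the next lottery.

```python
def run_lottery(trainees, tallies, slots, preferences):
    """Assign one choice per trainee, worst-served (lowest tally) first.

    tallies[t] starts at 0 and is decremented by the rank of the choice
    a trainee actually receives (first choice: -1, fourth choice: -4),
    so trainees stuck with poor choices sink to the front of later lotteries.
    slots maps each choice (e.g. a vacation block) to remaining capacity.
    """
    order = sorted(trainees, key=lambda t: tallies[t])
    assignments = {}
    for t in order:
        for rank, choice in enumerate(preferences[t], start=1):
            if slots.get(choice, 0) > 0:
                slots[choice] -= 1
                assignments[t] = choice
                tallies[t] -= rank  # penalize the tally by the rank received
                break
    return assignments
```

For example, a trainee entering the lottery with a tally of -3 (having previously received poor choices) is drawn before one with a tally of 0, and so is more likely to receive a first-choice slot this round.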
A single feasible schedule is thus generated on each run of the program. Output of the AIMS is displayed in separate Microsoft Excel sheets, one displaying the rotation schedule for each trainee, and one displaying the trainee staffing for each individual rotation. As we use AMiON® scheduling software (Newton, MA) to list daily clinical responsibilities, our program also includes a function to convert block schedules into daily call schedules which can be imported into AMiON.
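The conversion from block schedules to daily call schedules might look like the following sketch, which expands each trainee's sequence of two-week rotation blocks into per-day assignments. The function name, data layout, and block length parameter are assumptions for illustration; the actual export format required by AMiON is not reproduced here.

```python
from datetime import date, timedelta

def block_to_daily(block_schedule, year_start, block_days=14):
    """Expand a per-trainee block schedule into daily assignments.

    block_schedule: {trainee: [rotation name for each sequential block]}
    year_start: first day of the academic year (block 1, day 1)
    Returns {date: {trainee: rotation}} suitable for export to a
    daily call schedule.
    """
    daily = {}
    for trainee, blocks in block_schedule.items():
        for b, rotation in enumerate(blocks):
            for d in range(block_days):
                day = year_start + timedelta(days=b * block_days + d)
                daily.setdefault(day, {})[trainee] = rotation
    return daily
```

With a 26-block year of two-week blocks, this produces 364 daily entries per trainee, which can then be reshaped into whatever import format the call-scheduling software expects.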

Data collection and analysis

Our study was reviewed by the Yale New Haven Hospital Institutional Review Board (Protocol ID 2000024118) and deemed exempt given its educational focus, anonymized survey, and minimal-risk nature. To evaluate the benefits of a schedule generated with AIMS, we compared both objective and subjective metrics of schedule quality across the 2017–2018 and the 2018–2019 academic years, pre- and post-implementation of AIMS. Trainee schedules must satisfy hospital staffing and ACGME requirements, and as such, these were not assessed as comparator measures. We focused on metrics germane to resident satisfaction. Objective measures included accommodation of trainee vacation choice preferences, accommodation of rotation preferences, shift conflicts, and variance in assignment to night, ICU, and jeopardy rotations. These specific schedule metrics were chosen by expert opinion, but have been described in prior studies on trainee scheduling. Accommodation of rotation and vacation preferences has been used as a criterion in multiple other scheduling endeavors [3, 19, 20], and cited as a limitation when not included [14]. The difficulty in satisfying vacation and rotation preferences has led to other, non-automated approaches to scheduling [21], highlighting that optimization of these metrics is of high priority and that they are reasonable markers of schedule quality. Prior to scheduling, Qualtrics surveys of preferred vacation timeframes as well as rotation preferences were collected during the 2017–2018 and the 2018–2019 academic years. These data were abstracted to assess adherence to preferences. To evaluate the fit of vacation preferences, we calculated the average survey rank of the assigned vacation blocks for each trainee, with ‘1’ indicating a resident’s top choice. We calculated the number of trainees assigned to their first- and second-choice rotations, although only first-choice preferences were available for interns in the 2017–2018 academic year.
These metrics were compared between years using two-sided unpaired t-tests at the α = 0.05 significance level. Discordance between overnight call and daytime responsibilities has been previously used to guide schedule creation [20]. We evaluated the number of conflicts between overnight and daytime clinical responsibilities in each schedule, which was again compared using a two-sided t-test. Such conflicts are defined as any night shift or overnight call when the trainee is subsequently scheduled for a day shift on a separate service. Equity of the trainee experience leads to a perception of fairness, and has been an outcome measure of previous attempts at automation [9]. Night and ICU rotations are less desired due to longer hours and disruptive sleep schedules; conversely, jeopardy rotations may be desired due to more perceived flexibility. We quantified equity by measuring variance in night, ICU, and jeopardy rotations, and we compared variance between the 2017–2018 and 2018–2019 years using the F-test at the α = 0.05 significance level. To ensure that results of these comparisons were not due solely to differing constraints between these two academic years, we regenerated the 2017–2018 schedule using the AIMS tool, which was compared to the 2017–2018 manual schedule. To assess subjective schedule quality, in September 2018 we surveyed upper-level residents (PGY-2 and PGY-3) who could compare the quality of the current schedule to the previous year’s schedule, which was previously created without the use of AIMS or any other scheduling tool. Residents were asked to rate their satisfaction and perceived fairness of both the previous and current schedules on a 5-point Likert scale, using the questions “How satisfied were you with your schedule?” and “How fair was your schedule?” (S1 Text). The results were compared with a paired two-sided t-test at the α = 0.05 significance level. We adjusted for multiplicity of comparisons using the Holm-Bonferroni method.
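Two pieces of this analysis are easy to misimplement: the unpaired t statistic for a metric such as shift conflicts per intern, and the Holm-Bonferroni step-down adjustment. A stdlib-only Python sketch of both follows (the paper's actual analysis used Excel and GraphPad PRISM; the function names here are hypothetical, and p-value computation from the t statistic is omitted to avoid external dependencies).

```python
import statistics

def welch_t(a, b):
    """Unpaired t statistic (Welch form) for comparing a schedule
    metric, e.g. shift conflicts per intern, between two years."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

def holm_bonferroni(pvals):
    """Holm step-down adjusted p-values for a family of comparisons.

    The smallest p-value is multiplied by m, the next by m - 1, and so
    on; a running maximum enforces monotonicity, and values are capped
    at 1. A comparison remains significant if its adjusted p < alpha.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted, running = [0.0] * m, 0.0
    for k, i in enumerate(order):
        running = max(running, (m - k) * pvals[i])
        adjusted[i] = min(running, 1.0)
    return adjusted
```

For instance, raw p-values of 0.01, 0.04, and 0.03 adjust to 0.03, 0.06, and 0.06, so only the first comparison would remain significant at α = 0.05 after adjustment.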
Comparisons for objective metrics of schedule quality and survey results were separately considered for this adjustment as they were based on different sources of data. Analysis was performed in Microsoft Excel 2013 and Graphpad PRISM Version 7.01.

Results

We found no significant difference in the number of night, ICU, and Jeopardy rotations for interns, and no significant difference in the variance in assignment of these rotations (Table 1). The average number of night and ICU rotations needing to be filled were similar between the two years (Table A in S2 Text). There were fewer shift conflicts due to overlapping clinical responsibilities for interns after implementation of AIMS (0.7 conflicts per intern, versus 0.3 conflicts per intern after implementation, p < 0.01). With AIMS, 71/74 (96.0%) interns and 66/82 (80.5%) residents were assigned to their first-choice rotation, a significant increase from the 50/72 (69.4%) interns and 25/82 (30.5%) residents assigned their first-choice in the 2017–2018 academic year. For residents, the AIMS schedule demonstrated a lower variance in number of Jeopardy rotation assignments, although this did not remain significant upon adjustment for multiple comparisons (Table 2). There was an increase in variance for ICU rotations in the 2018–2019 year which did not meet statistical significance. Similar to the data for interns, we found a significant increase in the number of residents assigned their first and second choice rotations.
Table 1

Intern schedule quality metrics.

Mean (SD) | 2017–2018 (n = 72)* | 2018–2019 (n = 74)* | t-test p-value | F-test p-value
Night Rotations | 3.7 (0.53) | 3.6 (0.53) | 0.20 | 0.95
ICU Rotations | 3.8 (0.64) | 3.8 (0.74) | 0.87 | 0.20
Jeopardy Rotations | 0.6 (0.57) | 0.5 (0.50) | 0.27 | 0.29
Shift Conflicts | 0.7 (0.90) | 0.3 (0.47) | < 0.001 | –
Average Ranking for Assigned Vacations (#, SD) | 1.8 (1.7) | 2.0 (1.4) | 0.52 | –
Assigned First Choice Rotation (%, SD) | 69.4 (46) | 96.0 (20) | < 0.001 | –

*Values listed as number per resident per year, SD unless otherwise specified.

Comparisons that remain significant after adjustment for multiple comparisons designated with bold text.

Table 2

Resident schedule quality metrics.

Mean (SD) | 2017–2018 (n = 82)* | 2018–2019 (n = 82)* | t-test p-value | F-test p-value
Night Rotations | 1.9 (0.84) | 1.9 (0.74) | 0.69 | 0.25
ICU Rotations | 3.2 (0.53) | 3.1 (0.66) | 0.36 | 0.05
Jeopardy Rotations | 0.7 (0.65) | 0.7 (0.50) | 1.00 | 0.02
Shift Conflicts | 0.1 (0.35) | 0.1 (0.23) | 0.13 | –
Average Ranking for Assigned Vacations (#, SD) | 1.3 (1.2) | 1.6 (1.0) | 0.14 | –
Assigned First Choice Rotation (%, SD) | 30.5 (47) | 80.5 (41) | < 0.001 | –
Assigned Second Choice Rotation (%, SD) | 46.3 (50) | 74.4 (44) | < 0.001 | –

*Values listed as number per resident per year, SD unless otherwise specified.

Comparisons that remain significant after adjustment for multiple comparisons designated with bold text.

Similar results were obtained when the 2017–2018 schedule was recreated with AIMS (Tables B and C in S2 Text), with improvements seen in the number of shift conflicts, rank of vacation choices, and assignment of preferred rotations compared to the manual schedule. However, as actual implementation sometimes requires last-minute adjustments that may decrease schedule quality, the 2018–2019 schedule may better describe the true net gains of automated schedule generation. Of 82 residents surveyed, 22 (27%) completed the survey assessing the subjective quality of schedules generated with AIMS. There was a significant increase in both perceived satisfaction and fairness of the schedule with implementation of AIMS (Table 3).
Table 3

Satisfaction and fairness of the schedule before and after implementation of AIMS.

Scale of 1–5; mean (SD) | 2017–2018 | 2018–2019 | t-test p-value
Satisfaction | 3.3 (1.2) | 4.0 (1.1) | 0.048
Fairness | 3.3 (1.4) | 4.2 (1.0) | 0.020

Discussion

The introduction of AIMS significantly improved both objective and subjective measures of schedule quality by increasing adherence to stated trainee preferences, decreasing transition conflicts, and improving trainee perception of scheduling fairness and quality. This tool significantly increased our ability to schedule trainees for their desired rotations and reduced the number of schedule conflicts generated by overlapping clinical responsibilities, which is anticipated to reduce the strain on the jeopardy pool. Distribution of night and ICU rotations was highly optimized in previous years, as inequity in these rotations was known to cause dissatisfaction. However, AIMS did reduce the variance in jeopardy rotation assignment. No single metric can adequately convey the quality of a schedule, but it is encouraging to note that housestaff perceived the tool to generate a fair schedule and were satisfied with its performance. Perception of fairness and fulfillment of scheduling requests are important aspects of trainee wellness; improved schedules as well as scheduling transparency may potentially lead to decreases in resident burnout. Lack of control over job schedules precipitates work-family conflict and may drive dissatisfaction [22], and increased control over scheduling has been cited in focus groups as a method to reduce burnout [23]. The use of technology to optimize scheduling provides an attractive target to improve well-being without altering duty hours, given the ongoing debate about ideal resident shift length [24]. AIMS features several attributes that make it a valuable tool for chief residents, who are required to create complex schedules, often with little or no prior experience. Once rotation structures and scheduling rules are programmed, they can be reused within an institution until structural changes are made, ensuring consistency over successive academic years. 
Instead of focusing on minute details of individual schedules, chief residents can evaluate and adjust overarching structural scheduling rules to best suit their trainees. The use of a heuristic algorithm allows schedule generation for even the largest residency programs. The flexibility to hard-code certain elements of the schedule allows more coordination with trainees from other departments which do not use AIMS. The spreadsheet format is easy to manipulate even for those without programming experience. Given the open-source nature of AIMS, the structure can be further modified to suit the specific needs of any residency program. Although we did not formally measure chief resident opinion of AIMS, the tool was greatly preferred over manual scheduling, reducing a task that historically took weeks to a matter of days, in keeping with other reports of automation of this onerous task [10, 19]. Our study was limited by the implementation of our tool at a single center, with only two years of scheduling data available for analysis. The heuristic nature of the AIMS algorithm does not guarantee an optimal schedule. The metrics analyzed were based on prior reports of scheduling quality and chief resident experience, and may not reflect all aspects of schedule quality. The survey administered to trainees was not previously validated, and must be interpreted with caution. Trainee opinion about schedule quality for both years was assessed during the 2018–2019 academic year, and recall bias may have impacted perception of the previous year’s schedule.

Conclusion

Creating a schedule for internal medicine trainees is challenging, especially in large programs, due to the competing interests of residents, the ACGME, and institutional staffing requirements. Automation has the potential to eliminate error and facilitate the consistency of scheduling between successive years of chief residents. The use of automated scheduling tools such as AIMS can improve metrics of schedule quality, such as avoiding shift conflicts and satisfying more resident preferences. This is reinforced by our survey of trainees, which suggests a subjective improvement in satisfaction and perception of fairness. We have continued to utilize AIMS in our program; a modified version of the algorithm described here was used for the 2019–2020 academic year, and we hope to gain further longitudinal experience with schedule automation. Expansion to other residency programs in our institution may enrich our experience with this tool and confirm its wider applicability. Technologic solutions informed by operations research have the potential to improve the residency experience by granting trainees more control over their time as they learn the practice of medicine.

S1 Application. A copy of the scheduling tool used to generate schedules, with step-by-step instructions on setting up the tool for use within other residency programs.

Example information is pre-populated using data from the Yale Internal Medicine Residency Program, with full names of residents censored. (XLSM)

S1 Fig. Flowchart illustrating the AIMS algorithm.

(PPTX)

Scheduling satisfaction survey.

(PDF)

Scheduling satisfaction survey–the list of questions distributed to residents to assess satisfaction and fairness of scheduling with the AIMS tool compared with the manual scheduling process used in years past.

(DOCX) Click here for additional data file. 20 Apr 2020 PONE-D-20-05580 Implementation of an Automated Scheduling Tool Improves Schedule Quality and Resident Satisfaction PLOS ONE Dear Dr Gao, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. We would appreciate receiving your revised manuscript by May 31 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols Please include the following items when submitting your revised manuscript: A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as separate file and labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as separate file and labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. This file should be uploaded as separate file and labeled 'Manuscript'. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. 
The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out. We look forward to receiving your revised manuscript. Kind regards, Yong-Hong Kuo Academic Editor PLOS ONE Journal Requirements: When submitting your revision, we need you to address these additional requirements: 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at http://www.plosone.org/attachments/PLOSOne_formatting_sample_main_body.pdf and http://www.plosone.org/attachments/PLOSOne_formatting_sample_title_authors_affiliations.pdf 2. Your ethics statement must appear in the Methods section of your manuscript. If your ethics statement is written in any section besides the Methods, please move it to the Methods section and delete it from any other section. Please also ensure that your ethics statement is included in your manuscript, as the ethics section of your online submission will not be published alongside your manuscript. 3. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information. Additional Editor Comments (if provided): This manuscript has been reviewed by three experts in the area of staff scheduling. Their recommendations and comments tend to be positive. While physician scheduling tool has been studied extensive for many years, they appreciate that the implementation and practical issues discussed in this paper are quite interesting and will be of interested to the reader. They have provided very constructive comments and suggestions for the authors to revise the manuscript. A common comment is that the review of literature and existing work is rather short. 
Since there have been many studies on this topic, the authors may wish to discuss in more detail the related work. There are a few studies suggested by the reviewers to discuss. The following papers may also be useful for the authors: Damcı-Kurt, P., Zhang, M., Marentay, B., & Govind, N. (2019). Improving physician schedules by leveraging equalization: Cases from hospitals in US. Omega, 85, 182-193. Gross, C. N., Brunner, J. O., & Blobner, M. (2019). Hospital physicians can’t get no long-term satisfaction–an indicator for fairness in preference fulfillment on duty schedules. Health care management science, 22(4), 691-708. Gross, C. N., Fügener, A., & Brunner, J. O. (2018). Online rescheduling of physicians in hospitals. Flexible Services and Manufacturing Journal, 30(1-2), 296-328. Hong, Y. C., Cohn, A., Epelman, M. A., & Alpert, A. (2019). Creating resident shift schedules under multiple objectives by generating and evaluating the Pareto frontier. Operations Research for Health Care, 23, 100170. Kuo, Y. H. (2014). Integrating simulation with simulated annealing for scheduling physicians in an understaffed emergency department. HKIE Transactions, 21(4), 253-261. Schoenfelder, J., & Pfefferlen, C. (2018). Decision support for the physician scheduling process at a German hospital. Service Science, 10(3), 215-229. Tohidi, M., Kazemi Zanjani, M., & Contreras, I. (2019). Integrated physician and clinic scheduling in ambulatory polyclinics. Journal of the Operational Research Society, 70(2), 177-191. Vermuyten, H., Rosa, J. N., Marques, I., Belien, J., & Barbosa-Póvoa, A. (2018). Integrated staff scheduling at a medical emergency service: An optimisation approach. Expert Systems with Applications, 112, 62-76. Please seriously address the reviewers' concerns. Unsuccessful revision can lead to rejection of the paper. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. 
Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Partly ********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: No ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. 
You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This work describes the development and implementation of a heuristic-based scheduling tool applied to a large internal medicine residency program, including before and after metrics. This is a pragmatic description of implementation and analysis, including the real-world accommodations made for practicality (e.g. essential elements manually entered into the AIMS tool prior to automated scheduling). The core functionality is an automated sequential scheduling lottery. Qualitative and quantitative outcomes were assessed and appropriate statistical analysis was applied. In particular, the authors provide the appropriate level of detail to both explain their methodology and maintain a level of accessibility for the target audience, many of whom will not be familiar with automated scheduling heuristics. In addition, the inclusion of both qualitative and quantitative measures of success is of particular importance for scheduling outcomes. The supplemental materials will be a valuable resource for other programs to implement their strategy.

Specifically:

1. Results: With the Jeopardy and night rotations, is there any variability in day night transitions or will a set number of night float shifts always occur for a block without potential for night coverage elsewhere? If there is any heterogeneity in this, consider including a metric of day to night transitions to compare automated to manual scheduling.

2. Lines 171-173: Discussion of wellness as it related to scheduling satisfaction is appropriate. Consider expanding this area of the discussion more as this strongly underscores the role technologic solutions may play in decreasing provider burnout and improving workplace wellness.

3. Lines 174-179: These are all valuable attributes as described.
In addition, given that the individual preparing the schedule is likely to be different in consecutive years, consider including reflection on the role of technological solutions to decrease the “learning curve” for schedule development.

4. Lines 180-186: Consider including reflection on how the survey instrument itself, while appropriate, was not previously validated.

The authors should be commended for their enthusiasm for this topic and their desire to apply technological solutions to common training problems with scheduling.

Reviewer #2: Manuscript Number: PONE-D-20-05580. Referee report on the paper “Implementation of an Automated Scheduling Tool Improves Schedule Quality and Resident Satisfaction” submitted to PLOS ONE.

Summary and recommendation

This referee report discusses an article about the quality of automating rotation schedules for residents. The authors developed a rotation scheduling tool that was implemented for the Yale New Haven Hospital Internal Medicine Residency Program. In their study, they compare the quality of schedules before and after implementation of the new tool based on objective as well as subjective perception. Even though the methodological contribution is minor (and not necessary for the journal submitted), the implementation and analysis in a working environment are quite interesting. In general, the paper is very well structured and well written. However, the paper misses a careful literature review. Additionally, the description of the tool/algorithm is only superficial. Based on the following comments, we recommend a major revision.

Major comments

Introduction: The authors jump directly into the topic. Even if you are somewhat familiar with the subject, essential terms should be introduced for the reader (e.g. rotation, jeopardy rotation, etc.). A clear problem description might help as well. Please note that residency programs are quite different in various countries.
The literature review from lines 57-69 is rather short and misses some essential papers in the resident scheduling literature (at least from a methodological point of view). We would recommend the literature review “State of the art in physician scheduling” by Erhard et al. (2018) as a starting point for the review. Forward and backward search might be very helpful in finding relevant papers.

Method: The description of the scheduling tool is lacking. The process is not entirely transparent at first glance. For instance, you could add pseudocode to the appendix or use a flow chart for the process. We have a couple of open questions that show the ambiguity or incompleteness of the text.

- What is the time horizon of your schedule?
- Are the problems for residents and interns independent, i.e. do you solve both independently? If not, then some explanation in the text is needed.
- What do you mean by scheduling lottery? How often is this lottery performed within the time horizon?
- What kind of preferences do you look at, i.e., only rotation preferences or overnight duties?
- Can this lottery be manipulated, i.e., when I know that I got my first preference in the last lottery, can I change my preferences so that a lower priority might be my true first one? Or are you running the lottery for each 2-week horizon using the data collected for the whole year? It is somewhat unclear in the text.
- What is the “jeopardy” pool? Please explain and add a description (see our comments on the introduction).
- A visualization of the algorithm might be helpful.
- What is the objective function of your algorithm, or do you just construct a feasible solution? We assume the latter but it is not clear in the text.
- What are your objective metrics based on, i.e., expert knowledge or literature? We assume the former but again it is not clear in the text. A motivation based on literature might be valuable as well.
- On page 4 you are fixing some input values.
What is the effect of relaxing this assumption? We guess you achieve better schedules? We understand that e.g. external rotations are fixed. However, are you planning rotations that are external rotations for other units? Then some coordination between units is necessary. You should motivate the assumption.
- On your tally, what happens if you do not grant any request? We mean: if nothing is fulfilled, then a resident/intern has 0 and is never chosen? How do you initialize the heuristic?
- What happens in case you have fewer residents/interns available than needed?
- Are you limiting the number of requests of any kind (e.g. preferences, vacations) for each resident/intern?

Results: We like the comparison between the two years, which can be seen as a contribution in itself. However, it might be biased. Just as an idea, you might regenerate schedules for 2017-2018 retrospectively and compare the results with the realized ones. Also, it was unclear whether you base your evaluations on the planned or realized (with re-planning) schedules. We think and hope you use planned schedules. If so, the subjective assessment might be biased by re-planning as well. Please make it clear and comment on it in the text. In future, we recommend detailing the subjective questionnaire in alignment with the quality metrics. Can you say something about the savings for the chief resident? E.g. is (s)he faster on top of the quality gains? Also, will the procedure be used next year as well? If so, why? If not, why not? A short outlook might be useful for other practitioners. Just a note from a methodological point of view: are you correcting for multiple hypothesis testing, or are you just performing single tests in your analysis? For the latter, some of your results might not be significant anymore. You might use a step-wise procedure and most of your findings should be the same.

Minor comments

- The authors introduce some abbreviations twice, i.e., AIMS and ACGME.
- You might not want to use abbreviations in the abstract.
- The conclusion is very short. You might want to address aspects you have not considered but are of interest.

We hope the authors regard our feedback as helpful for improving the paper. Furthermore, we suggest extending your team with a researcher more familiar with operations research to improve your scheduling part. We are pretty sure that some qualified persons exist at your university (maybe in industrial engineering or business). The report can be found in the attachment as well.

Reviewer #3: The paper describes the implementation of a resident scheduling algorithm at a large hospital. While this is not really new, I really liked the link to a survey that demonstrates an increase in satisfaction after the new software. I think the paper should be published, but ask for two things:

1) Literature: I think you should discuss a little what has been done. There is a relatively recent literature review on physician scheduling (Erhard et al. 2018) that you should have a look at, and probably refer your readers to. I am personally aware of some related papers discussing new approaches in physician scheduling along with a real-life implementation and comparison of results, such as Bowers et al. 2016, comparing an equitable and a preference-oriented scheduling approach, or Fügener et al. 2015, who include both fairness concerns and individual preferences in physician scheduling. Erhard et al. 2018 provide a list of papers with respect to, e.g., residents, fairness, preferences. Again, there is no need to write an extensive literature review, but at least show what comparable studies exist, and maybe refer to the review paper.

2) Algorithm: You should discuss more clearly how your approach works - it could be an appendix if it takes too much space. I am happy to review a revised version.

References: Bowers, M. R., Noon, C. E., Wu, W., & Bass, J. K. (2016).
Neonatal physician scheduling at the University of Tennessee Medical Center. Interfaces, 46(2), 168-182. Erhard, M., Schoenfelder, J., Fügener, A., & Brunner, J. O. (2018). State of the art in physician scheduling. European Journal of Operational Research, 265(1), 1-18.

6. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Stefanie J. Hollenbach, M.D., M.S.
Reviewer #2: Yes: Jens O. Brunner
Reviewer #3: No

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user; registration is free. Then log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

Submitted filename: PONE-D-20-05580-report-20200411.pdf

22 May 2020

To begin, we would like to comment on some updates to our schedule analysis.
In response to Reviewer 2, we regenerated the 2017-2018 schedule to ensure that year-to-year variability did not account for the superior schedules generated with our tool. When regenerating this schedule, we carefully ensured that the same number of rotations and rotation types were covered. In doing so, we discovered an inaccuracy in how the number of night and ICU rotations was calculated per resident, specifically in the intern year. This resulted in one intern with an additional night rotation, and four interns with an additional ICU rotation. We have since updated the schedule. We had previously been using the number of night/ICU rotations reported by the authors who manually generated the prior schedule, highlighting how miscalculations can occur with manual schedule work.

Second, we have updated our calculations for shift conflicts, as we had been overcounting the number of conflicts that occurred. This has increased the significance of one of our comparisons and made another comparison no longer significant; we feel the corrected number of conflicts is a more accurate measure of the number of times a resident would need to be pulled to provide coverage.

Finally, we have corrected the standard deviation for the average ranking of the assigned vacation choices. Each resident gets two vacation choices, and the average rank they had assigned to each vacation choice is reported. The standard deviation was previously calculated from the summed rank of the vacation choices, not the average; since the average is the sum divided by two, the standard deviation of the average is exactly half that of the sum.

Comments from Reviewer 1:

1. Results: With the Jeopardy and night rotations, is there any variability in day night transitions or will a set number of night float shifts always occur for a block without potential for night coverage elsewhere? If there is any heterogeneity in this, consider including a metric of day to night transitions to compare automated to manual scheduling.
There is indeed slight variability in the number of night-to-day transitions per block; our program mandates that residents spend no more than 2 weeks on a night rotation, so one easily obtainable metric of night-to-day transitions is the number of night rotations per block. We have provided this metric for each year analyzed as a supplement.

2. Lines 171-173: Discussion of wellness as it related to scheduling satisfaction is appropriate. Consider expanding this area of the discussion more as this strongly underscores the role technologic solutions may play in decreasing provider burnout and improving workplace wellness.

We agree that technologic advances, in scheduling and in other avenues, have the potential to improve resident wellness, and we have further discussed the impact of our work in this regard.

3. Lines 174-179: These are all valuable attributes as described. In addition, given that the individual preparing the schedule is likely to be different in consecutive years, consider including reflection on the role of technological solutions to decrease the “learning curve” for schedule development.

We appreciate the invitation to provide our reflection on the role of automated scheduling solutions in the learning curve for chief residents, and have detailed our opinions as such.

4. Lines 180-186: Consider including reflection on how the survey instrument itself, while appropriate, was not previously validated.

We have called attention to the lack of validation of the survey in the discussion.

Comments from Reviewer 2:

Major comments
Introduction: The authors jump directly into the topic. Even if you are somewhat familiar with the subject, essential terms should be introduced for the reader (e.g. rotation, jeopardy rotation, etc.). A clear problem description might help as well. Please note that residency programs are quite different in various countries. The literature review from lines 57-69 is rather short and misses some essential papers in the resident scheduling literature (at least from a methodological point of view). We would recommend the literature review “State of the art in physician scheduling” by Erhard et al. (2018) as a starting point for the review. Forward and backward search might be very helpful in finding relevant papers.

Thank you for calling attention to the ambiguous terminology; we have provided further clarification of these terms and a fuller introduction to the topic as a whole. We have also utilized the recommended review as a starting point to expand our literature review.

Method: The description of the scheduling tool is lacking. The process is not entirely transparent at first glance. For instance, you could add pseudocode to the appendix or use a flow chart for the process. We have a couple of open questions that show the ambiguity or incompleteness of the text.

- What is the time horizon of your schedule?

We have clarified the time horizon (1 year at a time) in the first paragraph of our methods.

- Are the problems for residents and interns independent, i.e. do you solve both independently? If not, then some explanation in the text is needed.

Both problems are solved concurrently. We have detailed the methodology further with a flowchart that may better explain this. Essentially, each resident is assigned a label, such as ‘intern’, ‘psychiatry’, or ‘senior resident’, that designates which rotations can be assigned to which residents.

- What do you mean by scheduling lottery? How often is this lottery performed within the time horizon?

The initial assignments of vacations are done by lottery, i.e. a resident is selected and then given their first choice of vacation if it is available. The order of selection for subsequent lotteries is influenced by the tally of previously satisfied preferences.

- What kind of preferences do you look at, i.e., only rotation preferences or overnight duties?

The only resident input provided is vacation and rotation preferences.

- Can this lottery be manipulated, i.e., when I know that I got my first preference in the last lottery, can I change my preferences so that a lower priority might be my true first one? Or are you running the lottery for each 2-week horizon using the data collected for the whole year? It is somewhat unclear in the text.

The lottery cannot be manipulated, as all preferences are submitted upfront and the lotteries are then run sequentially.

- What is the “jeopardy” pool? Please explain and add a description (see our comments on the introduction).

We have concretely defined this term in the introduction to hopefully remove all ambiguity.

- A visualization of the algorithm might be helpful.

We have created a flow chart to illustrate the algorithm.

- What is the objective function of your algorithm, or do you just construct a feasible solution? We assume the latter but it is not clear in the text.

Excellent question. An in-process version of our tool (which we used for the 2019-2020 schedule) uses an objective function with iterative schedule generation to further optimize the results, but in this version a single solution is generated at a time. We have clarified this in the text.

- What are your objective metrics based on, i.e., expert knowledge or literature? We assume the former but again it is not clear in the text. A motivation based on literature might be valuable as well.

You are correct that we selected these metrics based on expert knowledge, but we find recurring themes in the literature to support our selection, although no universally standard metric exists given the differing needs of training programs. We have added a discussion of the motivation for these metrics.

- On page 4 you are fixing some input values. What is the effect of relaxing this assumption? We guess you achieve better schedules? But understand that e.g. external rotations are fixed. However, are you planning rotations that are external rotations for other units? Then some coordination between units is necessary. You should motivate the assumption.

Relaxing the fixed input would likely provide better schedules, but would require participation from the program director coordinating rotating residents from other services (i.e. emergency medicine, primary care, etc.). Currently, we are provided with a list of rotators from these other services, and this causes fluctuations in the number of residents required from our program for each block. We have attempted to clarify this in the text.

- On your tally, what happens if you do not grant any request? We mean: if nothing is fulfilled, then a resident/intern has 0 and is never chosen? How do you initialize the heuristic?

We have clarified this further in the text. The tally is initialized at zero, so every resident has an equal chance of being chosen for their first vacation. Each resident is mandated to receive two vacations and ranks all blocks in order of preference, so if a resident receives their last-choice vacation, the tally could increase by 14.

- What happens in case you have fewer residents/interns available than needed?

We are lucky to have enough redundancy to provide coverage to all required services.

- Are you limiting the number of requests of any kind (e.g. preferences, vacations) for each resident/intern?

Each resident/intern ranks every potential vacation choice, so we get a complete ordering of their preferences. They also rank their top three rotation preferences. We have clarified this in the text. We did allow free-text specification of any other considerations, such as weddings or religious holidays.

Results:
We like the comparison between the two years, which can be seen as a contribution in itself. However, it might be biased. Just as an idea, you might regenerate schedules for 2017-2018 retrospectively and compare the results with the realized ones.

We have provided this comparison as a supplement, as an exploratory analysis, although there are factors aside from resident preferences (such as specific requests for weddings or other important events) that are unavailable for the 2017-2018 schedule; their absence may give our regenerated schedule more flexibility and thus introduce more bias. Additionally, replanning to accommodate last-minute changes can potentially reduce other metrics of schedule quality in the actually implemented schedule. Thus, this regeneration may provide an overoptimistic view of the actual schedule quality obtained through automation.

Also, it was unclear whether you base your evaluations on the planned or realized (with re-planning) schedules. We think and hope you use planned schedules. If so, the subjective assessment might be biased by re-planning as well. Please make it clear and comment on it in the text.

The subjective assessments of schedule fairness and satisfaction were based on realized schedules, which indeed may have introduced bias. However, the perceptions of fairness and satisfaction with schedule quality were ‘lived’ experiences, and we did not feel it would be accurate to have residents compare an abstract schedule generated by AIMS to their lived experiences with the manually created 2017-2018 schedule.

In future, we recommend detailing the subjective questionnaire in alignment with the quality metrics. Can you say something about the savings for the chief resident? E.g. is (s)he faster on top of the quality gains?

We have added to the discussion a subjective assessment of the time saved, as a formal assessment of the time difference was not performed.
When taking on this task from the previous years’ chiefs, they unanimously rated scheduling as the most challenging aspect of the job, and we can state with confidence that the task was much more enjoyable this year.

Also, will the procedure be used next year as well? If so, why? If not, why not? A short outlook might be useful for other practitioners.

We did use this procedure for the 2019-2020 academic year, with a rewritten tool and algorithmic modifications.

Just a note from a methodological point of view: are you correcting for multiple hypothesis testing, or are you just performing single tests in your analysis? For the latter, some of your results might not be significant anymore. You might use a step-wise procedure and most of your findings should be the same.

Thank you for raising this point; we have applied a correction for multiple hypothesis testing and specified this in our methodology.
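The step-wise correction the reviewer alludes to can be illustrated with the Holm-Bonferroni procedure, a standard step-down method. This is a generic sketch: the response above does not state which correction the authors applied, and the p-values below are invented for illustration.

```python
# Illustrative step-down Holm-Bonferroni correction. Function name and
# example p-values are our own; this is not the authors' analysis code.

def holm(pvalues, alpha=0.05):
    """Return a list of booleans (reject H0?) in the original order."""
    m = len(pvalues)
    # Test p-values from smallest to largest against increasingly lax cutoffs
    # alpha/m, alpha/(m-1), ..., alpha/1.
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if pvalues[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break  # once one test fails, all larger p-values also fail
    return reject

results = holm([0.011, 0.03, 0.04, 0.001])
# results == [True, False, False, True]: only the two smallest p-values survive
```

Compared with a plain Bonferroni cutoff of alpha/m for every test, the step-down procedure is uniformly more powerful while controlling the same family-wise error rate, which is why most findings typically remain significant after applying it.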
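The tally-weighted sequential vacation lottery described in the responses above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' actual implementation: the function and variable names, the random tie-breaking, and the toy block capacities are all our own.

```python
import random

# Sketch of a tally-weighted sequential lottery as we understand the
# description above (illustrative only; details are assumptions).

def run_lottery(preferences, capacity, tally):
    """Assign one vacation block per resident.

    preferences: dict resident -> list of blocks, most preferred first
    capacity:    dict block -> remaining slots in that block
    tally:       dict resident -> accumulated rank cost (initialized to 0,
                 so the first lottery is a uniform random draw)
    """
    assignments = {}
    # Residents whose preferences were satisfied least so far pick first;
    # ties are broken randomly.
    order = sorted(preferences, key=lambda r: (tally[r], random.random()))
    for resident in order:
        for rank, block in enumerate(preferences[resident]):
            if capacity[block] > 0:
                capacity[block] -= 1
                assignments[resident] = block
                # e.g. receiving the last choice of 15 ranked blocks adds 14
                tally[resident] += rank
                break
    return assignments

prefs = {"A": ["b1", "b2"], "B": ["b1", "b2"], "C": ["b2", "b1"]}
cap = {"b1": 1, "b2": 2}
tally = {r: 0 for r in prefs}
first_round = run_lottery(prefs, cap, tally)
# Exactly one of A/B gets b1; the other takes b2 and carries a tally of 1,
# so that resident would pick first in the next lottery.
```

Because each lottery sorts by the accumulated tally, a resident who received a low-ranked choice in one round is automatically prioritized in subsequent rounds, which is the fairness mechanism the authors describe; submitting preferences upfront is what prevents the manipulation the reviewer asks about.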

Minor comments
- The authors introduce some abbreviations twice, i.e., AIMS and ACGME. You might not want to use abbreviations in the abstract.

Thank you for noticing these duplicates; we have removed them.

- The conclusion is very short. You might want to address aspects you have not considered but are of interest.

Thank you for the comment. We have added to our conclusion, and moved some items from the discussion better suited to the conclusion into that section.

Reviewer #3: The paper describes the implementation of a resident scheduling algorithm at a large hospital. While this is not really new, I really liked the link to a survey that demonstrates an increase in satisfaction after the new software. I think the paper should be published, but ask for two things:
1) Literature: I think you should discuss a little what has been done. There is a relatively recent literature review on physician scheduling (Erhard et al. 2018) that you should have a look at, and probably refer your readers to. I am personally aware of some related papers discussing new approaches in physician scheduling along with a real-life implementation and comparison of results, such as Bowers et al. 2016, comparing an equitable and a preference-oriented scheduling approach, or Fügener et al. 2015, who include both fairness concerns and individual preferences in physician scheduling. Erhard et al. 2018 provide a list of papers with respect to, e.g., residents, fairness, preferences. Again, there is no need to write an extensive literature review, but at least show what comparable studies exist, and maybe refer to the review paper.

Thank you for this comment. We have expanded our literature review, referred our readers to Erhard et al., and included a selection of references in our background discussion, in particular studies addressing residency scheduling.

2) Algorithm: You should discuss more clearly how your approach works - it could be an appendix if it takes too much space.

Thank you; we have included a more thorough description of the algorithm as a flow chart to illustrate the steps taken.

Submitted filename: Response to reviewers.docx

13 Jul 2020

PONE-D-20-05580R1
Implementation of an Automated Scheduling Tool Improves Schedule Quality and Resident Satisfaction
PLOS ONE

Dear Dr. Gao,

Thank you for submitting your manuscript to PLOS ONE. The revision has been reviewed by two reviewers from the last round. They are satisfied with the revision and have favourable recommendations. There are some minor suggestions for the authors to consider for the final manuscript. Please submit your revised manuscript by Aug 27 2020 11:59PM.
If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
- A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
- A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
- An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,
Yong-Hong Kuo
Academic Editor
PLOS ONE

Reviewers' comments: Reviewer's Responses to Questions

Comments to the Author

1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed
Reviewer #3: All comments have been addressed

2. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #2: Yes
Reviewer #3: Yes

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes
Reviewer #3: Yes

4. Have the authors made all data underlying the findings in their manuscript fully available?

Reviewer #2: Yes
Reviewer #3: Yes

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous.
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes
Reviewer #3: Yes

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: We think the authors revised their paper very well. Our comments have all been taken into account. The revision of the literature has been very successful, and the paper can now generally be read more smoothly. So in our view, there is nothing to be said against accepting the paper for publication.

Minor comments
- p. 5, line 89: You should use “constraints” rather than “restraints”.
- p. 6, line 103: You might want to use “trainee” rather than “learner”.

Reviewer #3: The authors improved the manuscript. I like the revision, and have only a minor comment. I would rather have the reference within the sentence it belongs to, e.g., "...pathway to independent practice [1]." instead of "...independent practice. [1]". However, please leave a space before the reference (unlike, e.g., references [20] and [9] on page 9).

Thanks for the opportunity to review your paper!

7. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: Yes: Jens O.
Brunner Reviewer #3: No [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. 14 Jul 2020 Response to reviewers: Reviewer #2: We think the authors revised their paper very well. Our comments have all been taken into account. The revision of the literature has been very successful, and the paper can now generally be read more smoothly. So in our view, there is nothing to be said against accepting the paper for publication.

Minor comments
- p. 5, line 89: You should use “constraints” rather than “restraints”.
- p. 6, line 103: You might want to use “trainee” rather than “learner”.

Response: Thank you for taking the time to review our paper and for your comments; we have made the recommended changes.

Reviewer #3: The authors improved the manuscript. I like the revision, and have only a minor comment.

I would rather have the reference within the sentence it belongs to, e.g., "...pathway to independent practice [1]." instead of "...independent practice. [1]". However, please leave a space before the reference (unlike, e.g., references [20] and [9] on page 9).

Thanks for the opportunity to review your paper!

Response: Thank you for taking the time to review our paper and for your comments; we have made the recommended changes in citation formatting.

Submitted filename: Response to reviewers 2.docx

17 Jul 2020

Implementation of an Automated Scheduling Tool Improves Schedule Quality and Resident Satisfaction

PONE-D-20-05580R2

Dear Dr. Gao,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Yong-Hong Kuo
Academic Editor
PLOS ONE

Additional Editor Comments (optional): The authors have successfully addressed the reviewers' concerns. Thus, I recommend acceptance of the work.
29 Jul 2020

PONE-D-20-05580R2

Implementation of an Automated Scheduling Tool Improves Schedule Quality and Resident Satisfaction

Dear Dr. Gao:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Yong-Hong Kuo
Academic Editor
PLOS ONE
References:  12 in total

1.  Association of third-year medical students' first clerkship with overall clerkship performance and examination scores.

Authors:  Susan M Kies; Valerie Roth; Michelle Rowland
Journal:  JAMA       Date:  2010-09-15       Impact factor: 56.272

2.  An Innovative Approach to Resident Scheduling: Use of a Point-Based System to Account for Resident Preferences.

Authors:  Robert Tao-Ping Chow; Shrikant Tamhane; Manling Zhang; Lori-Ann Fisher; Jenni Yoon; Sameep Sehgal; Madel Lumbres; Ma Ai Thanda Han; Tiffany Win
Journal:  J Grad Med Educ       Date:  2015-09

3.  Rethinking the Clockwork of Work: Why Schedule Control May Pay Off at Work and at Home.

Authors:  Erin L Kelly; Phyllis Moen
Journal:  Adv Dev Hum Resour       Date:  2007-11

4.  Hospital physicians can't get no long-term satisfaction - an indicator for fairness in preference fulfillment on duty schedules.

Authors:  Christopher N Gross; Jens O Brunner; Manfred Blobner
Journal:  Health Care Manag Sci       Date:  2018-07-25

5.  Shift-to-Shift Handoff Research: Where Do We Go From Here?

Authors:  Lee Ann Riesenberg
Journal:  J Grad Med Educ       Date:  2012-03

6.  Automated medical resident rotation and shift scheduling to ensure quality resident education and patient care.

Authors:  Hannah K Smalley; Pinar Keskinocak
Journal:  Health Care Manag Sci       Date:  2014-08-30

7.  House staff scheduling: a computer-aided method.

Authors:  S James; W Outten; P J Davis; J Wands
Journal:  Ann Intern Med       Date:  1974-01       Impact factor: 25.391

8.  Computerized house officer schedules at the University of Michigan.

Authors:  G E Becker; R L Wortmann; J Silva
Journal:  J Med Educ       Date:  1982-04

9.  Demographic and work-life study of chief residents: a survey of the program directors in internal medicine residency programs in the United States.

Authors:  Dushyant Singh; Furman S McDonald; Brent W Beasley
Journal:  J Grad Med Educ       Date:  2009-09

10.  Education Outcomes in a Duty-Hour Flexibility Trial in Internal Medicine.

Authors:  Sanjay V Desai; David A Asch; Lisa M Bellini; Krisda H Chaiyachati; Manqing Liu; Alice L Sternberg; James Tonascia; Alyssa M Yeager; Jeremy M Asch; Joel T Katz; Mathias Basner; David W Bates; Karl Y Bilimoria; David F Dinges; Orit Even-Shoshan; David M Shade; Jeffrey H Silber; Dylan S Small; Kevin G Volpp; Judy A Shea
Journal:  N Engl J Med       Date:  2018-03-20       Impact factor: 91.245

Cited by:  1 in total

1.  Research on multi-objective optimal scheduling considering the balance of labor workload distribution.

Authors:  Zhengyu Hu; Wenrui Liu; Shengchen Ling; Kuan Fan
Journal:  PLoS One       Date:  2021-08-05       Impact factor: 3.240

