Literature DB >> 34637437

Characterising the nationwide burden and predictors of unkept outpatient appointments in the National Health Service in England: A cohort study using a machine learning approach.

Sion Philpott-Morgan1, Dixa B Thakrar2, Joshua Symons1,2, Daniel Ray3, Hutan Ashrafian2, Ara Darzi2.   

Abstract

BACKGROUND: Unkept outpatient hospital appointments cost the National Health Service (NHS) £1 billion each year. Given the associated costs and morbidity of unkept appointments, this is an issue requiring urgent attention. We aimed to determine rates of unkept outpatient clinic appointments across hospital trusts in England. In addition, we aimed to examine the predictors of unkept outpatient clinic appointments across specialties at Imperial College Healthcare NHS Trust (ICHT). Our final aim was to train machine learning models to determine the effectiveness of a potential intervention in reducing unkept appointments.
METHODS AND FINDINGS: UK Hospital Episode Statistics outpatient data from 2016 to 2018 were used for this study. Machine learning models (gradient boosting machines) were trained to determine predictors of unkept appointments and their relative importance. In 2017-2018 there were approximately 85 million outpatient appointments, with an unkept appointment rate of 5.7%. Within ICHT, there were almost 1 million appointments, with an unkept appointment rate of 11.2%. Hepatology had the highest rate of unkept appointments (17%), and medical oncology had the lowest (6%). The most important predictors of unkept appointments included the recency (25%) and frequency (13%) of previous unkept appointments and age at appointment (10%). Overall, for specialties with at least 10,000 appointments in 2016-2017 (after data cleaning), the sensitivity was 0.287: an intervention targeting the 10% of patients predicted least likely to attend would, if fully successful, capture 28.7% of unkept appointments. Study limitations include that some unkept appointments may have been missed from the analysis because recording of unkept appointments is not mandatory in England. Furthermore, results here are based on a single trust in England, hence may not be generalisable to other locations.
CONCLUSIONS: Unkept appointments remain an ongoing concern for healthcare systems internationally. Using machine learning, we can identify those most likely to miss their appointment and implement more targeted interventions to reduce unkept appointment rates.

Year:  2021        PMID: 34637437      PMCID: PMC8509877          DOI: 10.1371/journal.pmed.1003783

Source DB:  PubMed          Journal:  PLoS Med        ISSN: 1549-1277            Impact factor:   11.069


Background

Unkept hospital appointments, also known as “did not attends” (DNAs), are a dilemma facing multiple healthcare systems worldwide. In 2017–2018, 8 million National Health Service (NHS) hospital appointments in England, almost 1 in 10, were unkept. Each outpatient hospital appointment is estimated to cost the NHS £120, yielding an overall cost to the system of approximately £1 billion in unkept appointments [1-3]. In the US, unkept appointments are estimated to cost the healthcare system $150 billion a year [4]. The financial and public health impacts of unkept appointments are therefore vast. Unkept appointments waste resources, may increase patient morbidity, and can lengthen waiting lists from 1 week to up to 6 months [5,6]. A nationwide study in Scotland reported that those who missed more than 2 appointments had a 3-fold increase in hazards of mortality compared to those who did not miss appointments [6]. The NHS is a service with limited resources, ever under the stress of financial limitations; hence, unkept appointments need to be addressed to ensure resources are allocated appropriately.

Reasons for unkept appointments often include forgetfulness, being unaware of the appointment, feeling too unwell to attend, hospital administrative errors, work commitments, transport difficulties, and resolution of symptoms [7-20]. In the absence of a more predictive and proactive approach to mitigating unkept appointments, outpatient clinics often overbook appointments, which puts constraints on those delivering clinic care. Other proactive interventions, such as appointment reminders by phone call, letter, or text message, have been implemented to try to reduce the number of unkept appointments. Other strategies have included giving patients the responsibility of booking their appointment, either through a Freephone service or online [5].
Interventions may fail because they are often applied using a blanket approach, without knowing which cohorts of patients or clinical factors are most effective to target. Interventions need to be targeted effectively to increase patient engagement and thereby support ongoing efforts to tackle health inequalities. In order to understand how best to target interventions to reduce the number of unkept outpatient clinic appointments, it is first important to understand the predictors and characteristics of unkept appointments. Previous studies have employed machine learning and statistical modelling to predict unkept appointments [21-24]. However, such modelling has not been carried out in a UK setting across all medical specialties, but rather has been limited to individual specialties or restricted to primary care or community settings. Our aim was to demonstrate the rates of unkept outpatient clinic appointments across hospital trusts in England, with an added breakdown by specialty. To appraise the potential utility of this approach at a local level, we examined the predictors of unkept outpatient clinic appointments across specialties at a single trust within England, Imperial College Healthcare NHS Trust (ICHT). Finally, we trained machine learning models to determine the effectiveness of a potential intervention in reducing unkept appointments.

Methods

Data capture

We present a data flow diagram in Fig 1. We used Hospital Episode Statistics (HES) outpatient data spanning England from April 2016 to March 2018 to generate and train the models used for this study. Data codings and definitions can be found on the NHS Digital website [2]. HES is a database detailing all admissions, emergency attendances, and outpatient appointments at NHS hospitals. HES provides a number of patient characteristics including age, sex, ethnicity, and geographical information. We also used Index of Multiple Deprivation (IMD) demographic data, an openly available dataset [25].
Fig 1

Data flow diagram.

Numbers of appointments and unkept appointments were obtained for the top 50 trusts by appointment volume. Unkept appointments within Imperial College Healthcare NHS Trust were further broken down by specialty. DNA, did not attend; HES, Hospital Episode Statistics; IMD, Index of Multiple Deprivation.

Data cleaning

The submission of kept outpatient appointment data to NHS Digital is mandatory in England. However, the submission of unkept appointments is not mandatory. Some sites have a reported 0% unkept appointment rate in the data they submit. We excluded these sites from this analysis, along with the bottom and top 10% of outliers. Our goal was to predict outpatient non-attendance without warning. Hence, appointments that were cancelled in advance, either by patients or consultants, were also excluded from this analysis. Appointments from April 2016 to March 2017 were used to train the models. The recency (in days) and frequency (over the previous 12 months) of appointments and unkept appointments were calculated and included as predictors (see S1 Text for details). We tested the models using ICHT-specific data from April 2017 to March 2018. This again included the number of appointments, unkept appointments, and cancellations, calculated by specialty, from the previous 12 months.
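The recency and frequency features described above can be sketched as follows. This is an illustrative pandas reconstruction, not the authors' code (they worked in R on HES extracts); the DataFrame layout and column names are hypothetical.

```python
import pandas as pd

# Hypothetical HES-like appointment records; column names are illustrative,
# not actual HES field names. unkept = 1 means did not attend without warning.
appts = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "appt_date": pd.to_datetime(
        ["2016-05-01", "2016-09-10", "2017-01-15", "2016-06-20", "2017-02-01"]
    ),
    "unkept": [0, 1, 0, 0, 1],
})
appts = appts.sort_values(["patient_id", "appt_date"])

def add_history_features(df: pd.DataFrame) -> pd.DataFrame:
    """For each appointment, compute frequency (counts over the previous
    12 months) and recency (days since the last unkept appointment), using
    only rows that precede the appointment in time."""
    rows = []
    for _, g in df.groupby("patient_id"):
        g = g.reset_index(drop=True)
        for i, row in g.iterrows():
            prior = g.iloc[:i]  # strictly earlier appointments for this patient
            window = prior[prior["appt_date"] >= row["appt_date"] - pd.DateOffset(months=12)]
            unkept_prior = prior[prior["unkept"] == 1]
            rows.append({
                "appts_last_12m": len(window),
                "unkept_last_12m": int(window["unkept"].sum()),
                "days_since_last_unkept": (
                    (row["appt_date"] - unkept_prior["appt_date"].max()).days
                    if not unkept_prior.empty else None
                ),
            })
    return pd.concat([df.reset_index(drop=True), pd.DataFrame(rows)], axis=1)

features = add_history_features(appts)
```

For patient 1's third appointment, both earlier appointments fall within the 12-month window, one of them unkept, so the frequency features are 2 and 1 and the recency feature counts the days back to the missed visit.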

Statistical analysis

The recorded outcome variable was binary, indicating whether the patient attended their appointment or missed it without providing advance warning. Note, however, that the output of the models was a probability between 0 and 1: the predicted likelihood, ahead of the appointment, that a given patient will not attend. Advance cancellations were not included in the data. Variables used in the model were defined as shown in Table 1.
Table 1

Predictor variable definition.

Predictor variable | Description
Unkept appointments last 12 months | Count of unkept appointments in the last 12 months
Appointments last 12 months | Count of outpatient appointments in the last 12 months
Cancellations last 12 months | Count of cancellations in the last 12 months
Days since last unkept appointment | Days since the previous unkept appointment
Days since last appointment | Days since the previous outpatient appointment (kept or unkept)
Days since last cancellation | Days since the previous cancellation
Lead care professional | Patient to be seen by the lead care professional versus another member of the professional team
APPTAGE CALC | Age at appointment (babies under 1 year decimalised)
REFSOURC | Source of referral
Health deprivation score | Health Deprivation and Disability subscale from the Index of Multiple Deprivation (IMD)
IDAOPI score | Income Deprivation Affecting Older People Index subscale from IMD
IDACI score | Income Deprivation Affecting Children Index subscale from IMD
IMD score | Overall Index of Multiple Deprivation scale
Sex | Sex of patient
Weekday | Day of week (Monday, Tuesday, Wednesday, etc.)
Consultation | Service type requested
Appointment type | First attendance, follow-up, or telephone
Using the R statistical programming language, we trained machine learning models for each of the top 100 treatment specialties by national volume in 2016–2017 to determine predictors of unkept appointments and their relative importance (S1 Text). These models were gradient boosting machines (GBMs). HES data from 2016–2017 were used to train the models. The models were then tested using 2017–2018 HES outpatient data. We conducted an analysis based on a hypothetical intervention targeting the top 10% of outpatient appointments by risk. The intervention here could be a phone call reminder or a virtual consultation. This was a post hoc analysis, based on observational data, and did not have a prespecified study plan.
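The per-specialty modelling step can be illustrated as follows. The authors trained gradient boosting machines in R; this hedged Python sketch uses scikit-learn's GradientBoostingClassifier on simulated data, so the feature set and the simulated signal are inventions for illustration only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 2000

# Synthetic stand-in for one specialty's 2016-2017 training data; the three
# columns loosely mirror Table 1 predictors but the values are simulated.
X = np.column_stack([
    rng.integers(0, 5, n),    # unkept appointments in last 12 months
    rng.integers(0, 15, n),   # appointments in last 12 months
    rng.uniform(18, 90, n),   # age at appointment
])
# Simulate a higher miss probability for patients with more prior unkept visits.
p_miss = 1 / (1 + np.exp(-(X[:, 0] - 2)))
y = rng.random(n) < p_miss

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Relative importance of each predictor (reduction in predictive error),
# analogous to the percentages reported in Table 3.
importances = model.feature_importances_

# Propensity score for non-attendance, used to rank patients for targeting.
risk = model.predict_proba(X)[:, 1]
```

Because the simulated outcome depends only on the prior-unkept-appointments column, that feature dominates the importances, mirroring the recency/frequency finding reported below.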

Test metrics

Model sensitivity is the proportion of unkept appointments captured by the intervention. For example, a sensitivity of 0.33 implies that 33% of unkept appointments are captured; that is, the intervention could at best reduce unkept appointments by 33%. The positive predictive value (PPV) is the proportion of those selected by the model who would indeed miss their appointment. For example, suppose that 100 people are chosen for the intervention. If the PPV is 0.5, then 50 of those 100 would have missed their appointment had there been no intervention. The likelihood ratio (LR) tells us how much more likely those who are selected for an intervention are to have an unkept appointment, in comparison to those who are not selected. The area under the receiver operating characteristic curve (AUROC) tells us how well the model distinguishes the classes, in this instance patients who will attend their appointment versus those who will not.
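These metrics can be computed directly from model risk scores. A minimal sketch, assuming a top-10% targeting rule as described above; the function is our illustration, not the authors' code, and the LR follows the paper's comparative definition (miss rate among targeted versus non-targeted).

```python
import numpy as np

def intervention_metrics(y_true, risk, top_frac=0.10):
    """Sensitivity, PPV, and likelihood ratio when intervening on the
    top `top_frac` of appointments ranked by predicted non-attendance risk.
    Assumes at least one miss occurs both inside and outside the targeted set."""
    y_true = np.asarray(y_true, dtype=bool)
    risk = np.asarray(risk, dtype=float)
    n_target = max(1, int(round(top_frac * len(risk))))

    targeted = np.zeros(len(risk), dtype=bool)
    targeted[np.argsort(-risk)[:n_target]] = True  # highest-risk appointments

    sensitivity = (targeted & y_true).sum() / y_true.sum()
    ppv = (targeted & y_true).sum() / targeted.sum()
    # LR = miss rate among targeted / miss rate among non-targeted
    lr = ppv / y_true[~targeted].mean()
    return sensitivity, ppv, lr
```

With 10 appointments, 2 of them missed, and the top 20% targeted, catching one of the two misses gives a sensitivity of 0.5, a PPV of 0.5, and an LR of 4.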

Prediction metrics

The prediction metrics included the percentage importance of the different predictors for a given speciality (defined in terms of reduction in predictive error), the average importance of a predictor across all outpatient specialties, and the variation in importance of a predictor across all outpatient specialties. See S1 Table (RECORD checklist) for details on our reporting in this study. As per section 261 of the Health and Social Care Act 2012 and the Data Protection Act 2018, as a national institution, NHS Digital is directed to store and analyse secondary care data. Internal approval for this project was granted by the information asset owner.

Results

Across the whole NHS

In both 2016–2017 and 2017–2018, there were approximately 97–98 million outpatient appointments after removing advance cancellations and cleaning the data. The rate of unkept appointments in these datasets was 8.0% and 8.1%, respectively. Across the top 50 trusts in England by appointment volume, the rate of unkept appointments ranged from 3.9% to 14.8%. Fig 2 shows the 20 trusts with the highest unkept appointment rates. The full table can be found in S2 Table. Eight of the 10 trusts with the highest unkept appointment rates were within the London region.
Fig 2

Unkept appointment rates by UK hospital trust (excluding advance cancellations).

Within Imperial College Healthcare NHS Trust

Focusing on ICHT only, there were approximately 1.1 million outpatient appointments, excluding advance cancellations and sites with very low volumes. Approximately 910,000 remained after data cleaning. The rate of unkept appointments in this data set was 11.2%. The rates of the highest and lowest 5 specialties are presented in Table 2. Hepatology had the highest rate of unkept appointments (17%), and medical oncology had the lowest (6%). See S3 Table for all specialties.
Table 2

Highest and lowest unkept appointment rates by speciality.

Speciality name | Number of appointments | Number of unkept appointments | Unkept appointment rate (%)
Highest rates of unkept appointments
Hepatology | 14,731 | 2,494 | 17%
Diabetic medicine | 12,182 | 1,929 | 16%
Ophthalmology | 74,442 | 11,188 | 15%
Ear, nose, and throat | 34,820 | 5,109 | 15%
Vascular surgery | 12,666 | 1,772 | 14%
Lowest rates of unkept appointments
Gynaecology | 62,526 | 5,508 | 9%
Breast surgery | 16,838 | 1,428 | 8%
Anaesthetics | 15,690 | 1,180 | 8%
Audiological medicine | 20,577 | 1,647 | 8%
Medical oncology | 36,466 | 2,052 | 6%

Predictors of outpatient non-attendance overall

Predictor importance, sorted by average importance, for our composite prediction model across all specialties is shown in Table 3. Days since the previous unkept appointment (recency) was responsible on average for 25%, and the number of unkept appointments a patient had in the previous 12 months (frequency) was responsible on average for 13% of the predictive value of these models. Age at time of appointment accounted for around 10% of the predictive value. A larger number of previous unkept appointments was associated with an increased risk of failing to keep a future appointment. A larger number of previous cancellations was associated with an increased risk of failing to keep a future appointment. Older patients were least likely to miss an appointment. More deprived areas (lower IMD decile) were associated with an increased risk of an unkept appointment. Seeing a lead care professional was associated with a decreased risk of an unkept appointment.
Table 3

Overall predictor importance.

Predictor | Mean | Rank average | Variation
Days since last unkept appointment | 25% | 1.2 | 0.2
Unkept appointments last 12 months | 13% | 3.0 | 1.7
Age at appointment | 10% | 4.6 | 8.8
Lead care professional | 9% | 5.1 | 6.5
Appointments last 12 months | 7% | 5.3 | 4.7
Days since last appointment | 6% | 6.0 | 4.2
Referral source | 5% | 7.1 | 8.2
Health deprivation score | 5% | 7.1 | 4.0
IDAOPI score | 5% | 7.7 | 7.6
IDACI score | 3% | 10.8 | 3.8
IMD score | 2% | 12.0 | 4.4
Weekday | 2% | 11.7 | 3.6
Appointment type | 2% | 12.8 | 5.9
Days since last cancellation | 2% | 12.7 | 3.1
Consultation | 2% | 13.9 | 5.4
Cancellations last 12 months | 1% | 15.6 | 2.5
Sex | 0% | 16.5 | 0.8

IDACI, Income Deprivation Affecting Children Index; IDAOPI, Income Deprivation Affecting Older People Index; IMD, Index of Multiple Deprivation.

Predictors of outpatient non-attendance by specialty

Fairly consistently across the specialties with the highest and lowest unkept appointment rates, days since the previous unkept appointment and the number of previous unkept appointments in the last 12 months were among the most important predictors (Tables 4 and 5). The full table for all specialties can be seen in S4 Table. Age at time of appointment and number of appointments in the last 12 months were also important. Age at appointment was the most variable in terms of its importance for a given specialty. For instance, age at appointment is relatively important as a predictor of attendance for audiological medicine; ear, nose, and throat; and ophthalmology, but relatively unimportant for hepatology and vascular surgery. Referral source also varied in importance by specialty. Number of days since the last unkept appointment was consistently among the top 2 predictors, while sex was consistently among the bottom 4.
Table 4

Predictor rank for each specialty with the highest and lowest unkept appointment rates: Mean percentage.

Speciality name | Appointments last 12 months | Unkept appointments last 12 months | Cancellations last 12 months | Days since last appointment | Days since last unkept appointment | Days since last cancellation | Age at appointment | IDAOPI score | IDACI score | IMD score | Health deprivation score | Consultation | Lead care professional | Appointment type | Sex | Referral source | Weekday
Highest rates of unkept appointments
Hepatology | 9.3% | 16.4% | 0.6% | 7.8% | 29.5% | 1.7% | 5.9% | 2.4% | 2.3% | 4.3% | 4.8% | 3.7% | 3.7% | 0.9% | 0.4% | 3.3% | 3.1%
Diabetic medicine | 12.3% | 15.1% | 0.6% | 9.5% | 19.3% | 1.4% | 6.7% | 1.5% | 1.3% | 0.9% | 5.2% | 0.8% | 14.8% | 4.5% | 0.9% | 4.0% | 1.3%
Ophthalmology | 5.9% | 11.4% | 0.5% | 4.3% | 28.3% | 1.1% | 13.6% | 6.4% | 3.9% | 3.1% | 4.8% | 1.1% | 6.3% | 0.7% | 0.3% | 6.9% | 1.4%
Ear, nose, and throat | 3.8% | 9.5% | 0.4% | 4.4% | 25.4% | 1.6% | 15.2% | 5.0% | 2.6% | 1.6% | 7.2% | 0.8% | 16.0% | 1.4% | 0.4% | 3.3% | 1.6%
Vascular surgery | 5.6% | 14.3% | 1.2% | 4.9% | 25.9% | 1.1% | 5.4% | 3.5% | 1.2% | 1.3% | 5.7% | 2.6% | 20.1% | 1.6% | 0.4% | 3.4% | 1.7%
Lowest rates of unkept appointments
Gynaecology | 3.4% | 7.8% | 0.6% | 8.5% | 26.9% | 2.4% | 6.1% | 4.4% | 2.7% | 2.2% | 7.6% | 1.5% | 11.4% | 3.7% | 0.0% | 8.2% | 2.5%
Breast surgery | 5.7% | 7.5% | 0.3% | 7.0% | 37.1% | 1.4% | 10.6% | 4.3% | 3.0% | 1.8% | 4.9% | 1.4% | 6.5% | 4.0% | 0.0% | 2.8% | 1.6%
Anaesthetics | 4.5% | 7.6% | 0.7% | 5.3% | 34.8% | 2.3% | 14.0% | 6.1% | 2.5% | 1.8% | 3.2% | 0.2% | 3.8% | 2.8% | 0.8% | 6.5% | 3.3%
Audiological medicine | 2.9% | 11.1% | 0.6% | 2.3% | 13.2% | 1.4% | 29.0% | 11.9% | 3.5% | 2.5% | 4.3% | 0.3% | 7.3% | 0.5% | 0.3% | 7.7% | 1.3%
Medical oncology | 7.2% | 8.2% | 1.6% | 8.2% | 13.7% | 3.6% | 6.3% | 10.5% | 6.9% | 6.4% | 8.3% | 5.3% | 5.3% | 1.8% | 0.4% | 3.3% | 3.2%

IDACI, Income Deprivation Affecting Children Index; IDAOPI, Income Deprivation Affecting Older People Index; IMD, Index of Multiple Deprivation.

Table 5

Predictor rank for each specialty with highest and lowest unkept appointment rates: Rank average.

Speciality name | Appointments last 12 months | Unkept appointments last 12 months | Cancellations last 12 months | Days since last appointment | Days since last unkept appointment | Days since last cancellation | Age at appointment | IDAOPI score | IDACI score | IMD score | Health deprivation score | Consultation | Lead care professional | Appointment type | Sex | Referral source | Weekday
Highest rates of unkept appointments
Hepatology | 3 | 2 | 16 | 4 | 1 | 14 | 5 | 12 | 13 | 7 | 6 | 8 | 9 | 15 | 17 | 10 | 11
Diabetic medicine | 4 | 2 | 17 | 5 | 1 | 11 | 6 | 10 | 12 | 15 | 7 | 16 | 3 | 8 | 14 | 9 | 13
Ophthalmology | 7 | 3 | 16 | 9 | 1 | 13 | 2 | 5 | 10 | 11 | 8 | 14 | 6 | 15 | 17 | 4 | 12
Ear, nose, and throat | 8 | 4 | 17 | 7 | 1 | 12 | 3 | 6 | 10 | 13 | 5 | 15 | 2 | 14 | 16 | 9 | 11
Vascular surgery | 5 | 3 | 15 | 7 | 1 | 16 | 6 | 8 | 14 | 13 | 4 | 10 | 2 | 12 | 17 | 9 | 11
Lowest rates of unkept appointments
Gynaecology | 10 | 5 | 16 | 3 | 1 | 13 | 7 | 8 | 11 | 14 | 6 | 15 | 2 | 9 | 17 | 4 | 12
Breast surgery | 6 | 3 | 16 | 4 | 1 | 14 | 2 | 8 | 10 | 12 | 7 | 15 | 5 | 9 | 17 | 11 | 13
Anaesthetics | 7 | 3 | 16 | 6 | 1 | 13 | 2 | 5 | 12 | 14 | 10 | 17 | 8 | 11 | 15 | 4 | 9
Audiological medicine | 9 | 4 | 14 | 11 | 2 | 12 | 1 | 3 | 8 | 10 | 7 | 17 | 6 | 15 | 16 | 5 | 13
Medical oncology | 6 | 4 | 16 | 5 | 1 | 12 | 9 | 2 | 7 | 8 | 3 | 11 | 10 | 15 | 17 | 13 | 14

Green corresponds to the most important predictors, and red to the least important. IDACI, Income Deprivation Affecting Children Index; IDAOPI, Income Deprivation Affecting Older People Index; IMD, Index of Multiple Deprivation.

GBM model

In order to predict rates of unkept appointments, GBM models were trained on the top 100 specialties by appointment volume. Data from 2016–2017 were used to train the models, and data from 2017–2018 were used to test them. The test metrics for the specialties with the highest and lowest unkept appointment rates are shown in Table 6. From these models, we calculated the sensitivity, LR, and PPV for generating interventions in selected proportions of non-attenders to assess potential clinical improvements in attendance.
Table 6

Gradient boosting machine validation metrics of specialties with the highest and lowest unkept appointment rates.

Speciality name | Number of appointments | Number of unkept appointments | Unkept appointment percent | Sensitivity | PPV | LR | AUROC
Highest rates of unkept appointments
Hepatology | 14,731 | 2,494 | 17% | 0.28 | 0.46 | 4.10 | 0.74
Diabetic medicine | 12,182 | 1,929 | 16% | 0.29 | 0.47 | 4.62 | 0.74
Ophthalmology | 74,442 | 11,188 | 15% | 0.26 | 0.39 | 3.63 | 0.71
Ear, nose, and throat | 34,820 | 5,109 | 15% | 0.26 | 0.38 | 3.61 | 0.69
Trauma and orthopaedics | 44,966 | 6,460 | 14% | 0.25 | 0.37 | 3.45 | 0.69
Lowest rates of unkept appointments
Gynaecology | 62,526 | 5,508 | 9% | 0.31 | 0.27 | 3.82 | 0.72
Breast surgery | 16,838 | 1,428 | 8% | 0.35 | 0.29 | 4.48 | 0.72
Audiological medicine | 20,577 | 1,647 | 8% | 0.24 | 0.19 | 2.67 | 0.67
Anaesthetics | 15,690 | 1,180 | 8% | 0.26 | 0.20 | 2.99 | 0.67
Medical oncology | 36,466 | 2,052 | 6% | 0.29 | 0.16 | 3.30 | 0.69

AUROC, area under the receiver operating characteristic curve; LR, likelihood ratio; PPV, positive predictive value.

Sensitivity at 10% cutoff

A sensitivity of 0.28 for hepatology means that if the 10% of patients predicted least likely to attend received an intervention, 28% of patients who would otherwise miss their appointment would be targeted; a fully successful intervention would therefore capture 28% of unkept appointments.

Likelihood ratio

The LR for the top 5 specialties was greater than 3, meaning those patients selected by the models for an intervention were at least 3 times as likely to miss their appointment as those who were not selected for an intervention.

Positive predictive value

The PPV for the top 5 specialties was between 37% and 47%, meaning that of those selected for the targeted intervention, 37%–47% would be expected to miss their appointment in the absence of the intervention. This compares to a 14%–17% unkept appointment rate across all appointments for the top 5 specialties. Among the bottom 5 specialties, roughly 16%–29% of those selected for an intervention would be expected to miss their appointment, compared to a 6%–8% unkept appointment rate across all appointments in the bottom 5 specialties.

Area under the curve

As a metric of model performance, the AUROC was fairly consistent across both the top 5 and bottom 5 specialties, ranging from 0.67 to 0.74. So long as the cost of an intervention is less than one-third of the average cost of the potential reduction in unkept appointments, then using these models for targeted interventions would theoretically be cost-effective.
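The one-third threshold follows from simple expected-value arithmetic: if roughly one in three targeted patients would otherwise miss their appointment (the PPV floor for the top specialties), each fully successful intervention prevents about a third of an unkept appointment. A back-of-envelope sketch; the £120 figure is from the Background, and the always-successful intervention is an assumption.

```python
# Break-even cost for a targeted intervention, under illustrative assumptions.
cost_per_unkept = 120.0  # £ per outpatient appointment (Background, refs [1-3])
ppv = 1 / 3              # assumed: ~1 in 3 targeted patients would otherwise DNA

# In expectation, each intervention prevents `ppv` unkept appointments (if the
# intervention always succeeds), so it pays for itself only below this cost:
break_even_cost = ppv * cost_per_unkept  # one-third of £120, i.e. £40
```

Any intervention cheaper than this break-even figure (for example a brief reminder phone call) would, under these assumptions, save more than it costs.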

Discussion

Unkept appointments are a worldwide issue, causing inequalities in health and inefficient use of resources. Using a national data-driven approach, we determined national and local unkept outpatient appointment rates across secondary care in England, and across multiple specialties at a single hospital trust. Previous nationwide studies have looked primarily at general practice data [6]. In our study, the rate of unkept appointments varied across NHS hospital trusts from 4.3% to 15.1%. The higher rates seen in London may be explained by London's more heterogeneous population, with language barriers, transport failures, and administrative failures.

Predictors of unkept appointments can be divided into clinical, behavioural, and sociodemographic. At ICHT, there was great heterogeneity in unkept appointment rates across specialties. The highest rates were seen in hepatology; diabetic medicine; ophthalmology; ear, nose, and throat; and vascular surgery. Interestingly, these patients' co-morbidities may overlap. For example, a patient with diabetes may require ophthalmology review for diabetic retinopathy or vascular review for diabetic foot ulcers. It is not clear why hepatology had the highest rate of unkept appointments; however, studies looking at gastroenterology patients found that causes of unkept appointments included forgotten appointments and clerical errors [10]. A cohort study of 521 unkept ophthalmology appointments found that the top reasons for non-attendance were not feeling well enough to attend, forgetting the appointment, administrative errors, and improvement of the condition [26]. When stratifying by the predictors of unkept appointments, there were similarities and differences across specialties. Consistently, having a prior unkept appointment was the greatest predictor across all specialties except medical oncology.
This suggests that behaviour is the most important predictor, and hence behaviour-related interventions targeting those with recurrent unkept appointments are needed. Adopting a targeted approach to reducing unkept appointments may therefore be more effective than a blanket approach. Sex was the least important predictor for the majority of specialties. This contradicts other findings in the literature: previous studies suggested that sex was a predictor of non-attendance, with males having a higher risk [6,12,17,27]. Similarly, deprivation did not rank very highly, in contrast to the existing literature [11,13,28-30]. A possible reason is that the models generated here had greater granularity and included predictors of greater importance that could not always be captured in other studies. Gynaecology, breast surgery, anaesthetics, audiological medicine, and medical oncology had the lowest rates of unkept appointments. Oncology patients may have better adherence to treatment because of the mortality associated with their disease and hence may be more likely to attend their appointments. A significant proportion, though not all, of gynaecological and breast patients may also fall under oncology. Targeted interventions could be implemented at multiple levels: organisational, psychosocial, or through information dissemination. For example, virtual clinics could be a practical solution, and have been trialled across multiple specialties [31-33]. Other interventions include stating the cost of the appointment when sending SMS reminders to patients, which has been shown in a trial to reduce unkept appointment rates compared to SMS reminders not stating appointment costs [34]. Shared appointments, where patients receive consultations with their doctor in the presence of other patients with similar conditions, may provide another means of reducing unkept appointment rates [35].
Such interventions would have to be trialled, and an assessment of utility, safety, cost-effectiveness, and patient satisfaction would have to be undertaken. Whilst text message reminders have been shown to reduce unkept appointment rates, patients are still missing their appointments; the present findings allow us to go a step further by introducing targeted interventions. In addition, vulnerable, elderly, and deprived patients may not have access to a mobile phone, and therefore would not benefit from a blanket approach using SMS reminders. They may also come to the greatest harm should they miss their appointment, which again calls for a more targeted approach to ensure they receive the appropriate care. Aside from the cost implications of unkept appointments, there is increased mortality associated with missing appointments, as seen in general practice: a nationwide study in Scotland reported that those who missed more than 2 appointments had a 3-fold increase in hazards of mortality compared to those who did not miss appointments [6]. Such data are lacking in the secondary care setting. The GBM models output unkept appointment propensity scores, helping us rank patients by how likely they are to miss their appointments. The idea is to target a certain proportion of patients, implement an intervention, and decrease unkept appointment rates with minimal effort, rather than targeting all patients. In this way, interventions can be introduced more cost-effectively. Using the 5 specialties with the highest rates of unkept appointments, the models suggest that so long as the cost of an intervention is less than one-third of the average cost of the potential reduction in unkept appointments, using these models for targeted interventions would theoretically be cost-effective. In the context of analysing many specialties, we used a uniform 10% targeting cutoff.
In a live service, we could have different cutoffs for different specialties, based on their resources and respective rates of unkept appointments. Overall, the top 2 predictors, namely the recency and frequency of previous unkept appointments, accounted for 38% of the average predictive value. This highlights how a simple intervention based on these 2 predictors might have some utility. But it also highlights one of the advantages of applying machine learning models to predictive problems such as this: the contributions of many factors, along with the complexities of their interactions, can be accounted for in a way that focusing on just a few key factors does not allow. Running these models and applying targeted interventions across the population will likely require technology, and possibly artificial intelligence. In February 2019, the Topol review was published [36]. This review, commissioned by the UK Secretary of State for Health, was designed to elucidate how the NHS can make the most of technology to improve services and help ensure their sustainability [37]. Digital medicine and artificial intelligence can aid in decision-making processes such as booking systems and targeting interventions, as well as in utilising the vast volume of data available to generate the models. As with any study relying on routinely collected data, there are a number of limitations. The submission of kept outpatient appointment data to NHS Digital is mandatory in England, but the submission of unkept outpatient appointment data is not. Not all trusts report unkept appointments consistently, so the data here may not reflect the true unkept appointment rate; this may also be the case within specialties at a single trust. Hospitals reporting a 0% rate of unkept appointments were excluded from this analysis, along with the bottom and top 10% of outliers. Data with the most missingness would have been among the top 10%.
In addition, the results by specialty here are based on a single trust in England; hence, the generalisability of the results across the whole country or other countries may be questioned. Furthermore, our data did not include mental health services, as mental health data are found in the Mental Health Services Data Set (MHSDS) rather than HES. In addition, ICHT does not have a dedicated psychiatry department and utilises liaison psychiatry services from partner London trusts. We excluded the 10% of trusts with the highest rates of unkept appointments. Including them would have potentially resulted in the model underpredicting unkept appointments. However, we also excluded the 10% of outliers with the lowest rates of unkept appointments, and so there is likely to be some offsetting. Additionally, excluding records that were likely to have lower data quality would have improved the accuracy of the predictions. However, as a limitation, trusts thus excluded would be less represented in the data, so the model would again be less generalisable. Furthermore, in our analysis, we excluded cancellations as we did not have cancellation dates. This may have affected the applicability of the model. Appointments can be cancelled by patients or by the care provider, and cancellations may occur shortly before an appointment, or well in advance. In an ideal scenario, we would know when an intervention for reducing unkept appointments was applied, and filter cancellations accordingly. For example, if patients are called 3 days before an appointment, then we could include cancellations within 3 days of appointments as unkept appointments, while excluding all cancellations that had happened before the phone call reminder. Nonetheless, the study highlights the importance of repeating such an exercise across other datasets. 
We would recommend that digital health policy makers mandate trusts to record and submit unkept appointments, in addition to those attended, to avoid this issue for future related research. This study has identified the prevalence of unkept appointments nationally, by trust, and by specialty within a single UK trust. The clinical implication is that the locations and specialties with the highest rates may require intervention. The granularity of the predictors allows us to identify which patients are best targeted with such interventions, to reduce unkept appointments and ultimately the morbidity and wasted resources associated with them. Further study is needed in other UK trusts and in other countries to better understand this issue globally, so as to tackle healthcare inequalities. Understanding the complications of unkept appointments in a secondary care setting would also be pertinent, as it may aid clinical decision-making and follow-up planning. The new methods of modelling unkept appointments introduced in this study allow a deeper understanding of the root causes of unkept appointments at the national and local level and may offer a path to novel interventions that address these causes. Future work should break down these unkept appointments and their causes into clinical, behavioural, and psychosocial domains so that specific targets in these areas can be generated to minimise unkept appointment loads. These approaches will need validation with other datasets and in formalised clinical trial settings to address the global issue of unkept appointments at the national and local level. The lessons derived from these approaches may therefore in turn be a route to increased efficacy and efficiency in an era of healthcare rationing and financial constraint.

RECORD checklist.

(DOCX)

Unkept appointment rates by UK hospital trust.

(DOCX)

Unkept appointment rates and model metrics for ICHT in 2017–2018 by specialty.

(DOCX)

Predictor importance for each specialty.

Filtered for specialties with at least 10,000 appointments in 2016–2017 (after data cleaning). (DOCX)

Model specification.

Fig A: Risk ratio of non-attendance for patients with an unkept appointment in the past 12 months. (DOCX)

24 Feb 2020

Dear Dr Thakrar,

Thank you for submitting your manuscript entitled "Characterising and reducing the burden of unkept outpatient appointments in the NHS through a national data-driven machine learning approach" for consideration by PLOS Medicine. Your manuscript has now been evaluated by the PLOS Medicine editorial staff and I am writing to let you know that we would like to send your submission out for external peer review. However, before we can send your manuscript to reviewers, we need you to complete your submission by providing the metadata that is required for full assessment. To this end, please login to Editorial Manager where you will find the paper in the 'Submissions Needing Revisions' folder on your homepage. Please click 'Revise Submission' from the Action Links and complete all additional questions in the submission questionnaire. Please re-submit your manuscript within two working days. Login to Editorial Manager here: https://www.editorialmanager.com/pmedicine Once your full submission is complete, your paper will undergo a series of checks in preparation for peer review. Once your manuscript has passed all checks it will be sent out for review. Feel free to email us at plosmedicine@plos.org if you have any queries relating to your submission. Kind regards, Helen Howard, for Clare Stone PhD, Acting Editor-in-Chief, PLOS Medicine plosmedicine.org

8 Dec 2020

Dear Dr. Thakrar,

Thank you very much for submitting your manuscript "Characterising and reducing the burden of unkept outpatient appointments in the NHS through a national data-driven machine learning approach" (PMEDICINE-D-20-00544R1) for consideration at PLOS Medicine. Your paper was evaluated by a senior editor and discussed among all the editors here.
It was also discussed with an academic editor with relevant expertise, and sent to three independent reviewers; their comments are under the editors' signoff and via the link below, and I hope you find them constructive. [LINK] In light of these reviews, I am afraid that we will not be able to accept the manuscript for publication in the journal in its current form, but we would like to consider a revised version that addresses the reviewers' and editors' comments. Obviously we cannot make any decision about publication until we have seen the revised manuscript and your response, and we plan to seek re-review by one or more of the reviewers. In revising the manuscript for further consideration, your revisions should address the specific points made by each reviewer and the editors. Please also check the guidelines for revised papers at http://journals.plos.org/plosmedicine/s/revising-your-manuscript for any that apply to your paper. In your rebuttal letter you should indicate your response to the reviewers' and editors' comments, the changes you have made in the manuscript, and include either an excerpt of the revised text or the location (eg: page and line number) where each change can be found. Please submit a clean version of the paper as the main article file; a version with changes marked should be uploaded as a marked up manuscript. In addition, we request that you upload any figures associated with your paper as individual TIF or EPS files with 300dpi resolution at resubmission; please read our figure guidelines for more information on our requirements: http://journals.plos.org/plosmedicine/s/figures. While revising your submission, please upload your figure files to the PACE digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. 
If you encounter any issues or have any questions when using PACE, please email us at PLOSMedicine@plos.org. We expect to receive your revised manuscript by Dec 29 2020 11:59PM. Please email us (plosmedicine@plos.org) if you have any questions or concerns. ***Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.*** We ask every co-author listed on the manuscript to fill in a contributing author statement, making sure to declare all competing interests. If any of the co-authors have not filled in the statement, we will remind them to do so when the paper is revised. If all statements are not completed in a timely fashion this could hold up the re-review process. If new competing interests are declared later in the revision process, this may also hold up the submission. Should there be a problem getting one of your co-authors to fill in a statement we will be in contact. YOU MUST NOT ADD OR REMOVE AUTHORS UNLESS YOU HAVE ALERTED THE EDITOR HANDLING THE MANUSCRIPT TO THE CHANGE AND THEY SPECIFICALLY HAVE AGREED TO IT. You can see our competing interests policy here: http://journals.plos.org/plosmedicine/s/competing-interests. Please use the following link to submit the revised manuscript: https://www.editorialmanager.com/pmedicine/ Your article can be found in the "Submissions Needing Revision" folder. To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see http://journals.plos.org/plosmedicine/s/submission-guidelines#loc-methods. 
Please ensure that the paper adheres to the PLOS Data Availability Policy (see http://journals.plos.org/plosmedicine/s/data-availability), which requires that all data underlying the study's findings be provided in a repository or as Supporting Information. For data residing with a third party, authors are required to provide instructions with contact information for obtaining the data. PLOS journals do not allow statements supported by "data not shown" or "unpublished results." For such statements, authors must provide supporting data or cite public sources that include it. We look forward to receiving your revised manuscript. Sincerely, Emma Veitch, PhD PLOS Medicine On behalf of Richard Turner, PhD, Senior Editor, PLOS Medicine plosmedicine.org ----------------------------------------------------------- Requests from the editors: *Please structure your abstract using the PLOS Medicine headings (Background, Methods and Findings, Conclusions). We'd suggest including in the last sentence of the Abstract Methods and Findings section a note about any key limitation(s) of the study's methodology. Potentially this might include noting (this factor could also be included in the main Discussion section if the authors agree it is relevant) that independent validation in a separate dataset was not done for the predictors that emerged from the current machine learning study. *At this stage, we ask that you include a short, non-technical Author Summary of your research to make findings accessible to a wide audience that includes both scientists and non-scientists. The Author Summary should immediately follow the Abstract in your revised manuscript. This text is subject to editorial change and should be distinct from the scientific abstract. 
Please see our author guidelines for more information: https://journals.plos.org/plosmedicine/s/revising-your-manuscript#loc-author-summary *Please reformat the citation style into PLOS Medicine's format (should be straightforward if using referencing software) - this should use callouts formatted as sequential numerals in square brackets (not superscript). *Please clarify whether the analytical approach followed here corresponded to one laid out in a prospectively developed protocol or analysis plan? We'd ask that the authors state this (either way) early in the Methods section. a) If a prospective analysis plan (from your funding proposal, IRB or other ethics committee submission, study protocol, or other planning document written before analyzing the data) was used in designing the study, please include the relevant prospectively written document with your revised manuscript as a Supporting Information file to be published alongside your study, and cite it in the Methods section. A legend for this file should be included at the end of your manuscript. b) If no such document exists, please make sure that the Methods section transparently describes when analyses were planned, and when/why any data-driven changes to analyses took place. c) In either case, changes in the analysis-- including those made in response to peer review comments-- should be identified as such in the Methods section of the paper, with rationale. *The editors would suggest referring to an appropriate reporting guideline to support reporting of the study, and one that may be appropriate is the RECORD guideline (https://www.equator-network.org/reporting-guidelines/record/), developed for reporting of observational routinely-collected data. If the authors agree this is appropriate please enclose the completed RECORD checklist as supporting information with the revised paper.
*We'd suggest including a descriptor of the study design as part of the manuscript title (normally this would be in the subtitle, after a colon - with the first part of the title including the study question/objective). *In the Introduction, the authors summarise prior evidence (in the section beginning "Reasons for unkept appointments often include...") but it would be good to then set out the unanswered question driving the current analysis, ie give the reader some idea of the uncertainty around prior research on unkept appointments and therefore what remains to be understood. ----------------------------------------------------------- Comments from the reviewers: Reviewer #1: "Characterising and reducing the burden of unkept outpatient appointments in the NHS through a national data-driven machine learning approach" studies the rate of outpatient clinic appointments within the National Health Service (NHS) of England, and employs machine learning models (in particular, gradient boosting machines; GBMs) to predict future missed appointments ("Did Not Attends"; DNAs), from available predictor variables (e.g. past appointments, unkept/cancelled appointments, etc - see Table 1 & Appendix 3). Accurate prediction of DNAs holds out the promise of substantial cost savings, since each (missed) appointment costs £120 to the NHS, for an annual wastage of about £1 billion. Overall, this study raises the potential for both improving cost savings and patient health outcomes, through relatively easily-implemented means. The scale and scope of the HES (Hospital Episode Statistics) Outpatient Data used (containing close to 100 million appointments, and over 100 treatment specialties) is also a particular strength of the evaluation. However, there appear to remain a number of fairly significant issues with the current presentation, that might be addressed: 1. 
The first two citations supporting the costs of missed appointments to the NHS and the US healthcare system are to webpages. Might there be any alternative peer-reviewed publications that support these figures? 2. The background discussion includes prior work describing prevalence of and reasons for missing appointments, but does not really cover the closest relevant topic (i.e. on using machine learning/statistical methods to predict future unkept appointments) in detail. There appears to be a fair number of such papers, e.g. "Deprivation, demography and missed scheduled appointments at an NHS primary dental care and training service", West et al., British Dental Journal 228 (98-102), 2020; "Modeling patient no-show history and predicting future outpatient appointment behavior in the Veterans Health Administration", Goffman et al., Military Medicine 182 (5-6), e1708-e1714, 2017, etc. 3. The major methodological concern would be with the separation between the training dataset and the validation dataset. In particular, it is stated that "Appointments from April 2016 to March 2017 were used to train the models. The number of appointments, unkept appointments, and cancellations over the previous 12 months was also included. The models were tested using ICHT-specific data from April 2017 to March 2018. Again, including the number of appointments, unkept appointments and cancellations from the previous 12 months... HES data from 2016-17 was used to train the models, then (HES Outpatient data) data from 2017-18 was used for validation." This would appear to imply that some of the information might be shared between the training and validation datasets. For example, for a given patient, an appointment in March 2017 (with accompanying 12-month prior data) would be employed as training data. 
However, it would seem that an appointment in, say, April 2017 by the same patient would then be used in validation, despite it likely sharing similar 12-month prior data (due to eleven of those months overlapping). The conventional arrangement would then generally be to stratify either by patient or at the trust level. The authors might therefore comment on whether such a separation between the training and validation data was achieved, moreover since there might be some confusion as to whether the models were tested only on ICHT-specific data, or on the full HES Outpatient data. In general, exactly what models were trained and validated on what data might be more systematically organized (e.g. a "composite prediction (GBM?) model" is suddenly mentioned in the Results section, but apparently not in the previous Methods section, while the GBM Model is only described in a later section after the Results section). 4. The exclusion of cancellations in advance from the analysis would seem to affect the applicability of the results to a real-life implementation, since the trained GBM model would possibly be applied to cases that are eventually cancelled (rather than kept/missed). The authors might wish to discuss whether such cancellations are prevalent, and as such the extent to which they might affect the utility of the models. 5. For the metrics reported under the Gradient Boosting Machine Model section, the Likelihood Ratio and PPV are dependent on patients who were "selected for an intervention". Is this selection based on the 10% sensitivity cut off, and if so, might different cut offs for different specialities be considered given prior knowledge (e.g. 6% unkept appointment rates for medical oncology, compared to 17% for hepatology, from Appendix 2)? 6. There are few details about the GBMs that would allow for some reproducibility. In particular, how were parameters such as the number of trees/shrinkage parameters/bagging fraction chosen?
Was there any parameter search involved for each trained model? How was the importance of the various predictors determined? Was there any normalization of the inputs? These details might be appropriately covered in the appendix. 7. The description of "Model sensitivity" mentions that "This is the biggest difference the intervention could possibly make". It is not immediately clear what this means. 8. The motivation for using predictive models for targeted interventions might be more extensively analyzed in the Discussion section. In particular, some interventions such as SMS reminders would appear to be feasible for all patients, without the need for a predictive model. 9. There remain a small number of possible grammatical issues, e.g. "top 10% less likely" -> "top 10% least likely"; references to tables and appendices in the text are generally capitalized.

-----------------------------------------------------------

Reviewer #2: This paper is an excellent piece of work. It clearly identifies the issues attributed to unkept appointments and states the research aims very clearly. The statistical terms have been very well explained, as was your methodology. The tables clearly stated the results, particularly rates and numbers per location and specialisation, very succinctly. I have no criticisms to add of the paper; it is a well written and executed research paper.

-----------------------------------------------------------

Reviewer #3: General Feedback - major aspects. Thank you for submitting this manuscript for consideration and I read it with interest as a clinician and researcher who works in this field. I do not, however, have any expertise in machine learning and appreciated the description of key terms included. It would be useful for the reader if you were to explain the change in language from missed to unkept; it's not clear, because missed apts have not ever included cancelled apts in the data outputs.
The focus on good population coverage is welcomed, as is the clear description of the data being used, its benefits and drawbacks. You attempted, as best as I was able to discern (not being a stats expert), to account for the data recording variation with respect to missed/unkept appointments. However, does this not also mean that some of the most stark data about missingness may have been contained in the top 10%? I note that mental health services are not included? In Scottish data they are. Could they all have been in the top 10%, or are they not included in HES OP data? I think you need to explain why secondary care mental health services are not included in this data set. I'm making this point because the hospital activity data linked to the GP published data you cited does include mental health services, and they were the highest generator of repeated missed apts in the Scottish data (currently under review for publication). I also think that you need to explain in applied terms why excluding the top 10% may impact on the results (even if it may make sense from a machine learning perspective). Is 'previous unkept appointments' by specialty, or any unkept appt? Please clarify. Because one potential explanation for high missed appts is patient treatment burden: the patient has so many apts across multiple specialties that they cannot manage to keep them all, which may be why it is linked to cancelled apts too. What evidence, given the lack of population data on this topic, means you can speculate about needs being met elsewhere (lines 220 to 231)? Can you really speculate about rates of unkept appts by English regions when a significant contribution to the data differences may be about data recording and quality, if missed appointments are not mandated to be returned? Could this be relevant at specialty level too?
I'm uncomfortable with the amount of time spent on possible interventions in the manuscript (lines 250-264), as the most important finding from this research is that previous missed apts are the strongest predictor of future missingness. As far as I am aware, no large or medium scale work has yet evaluated interventions that may work that focus on a history of previous missingness. The evaluations done at any scale do not distinguish between patients who miss one OP apt and those who miss many (and hence probably why studies often contradict each other). This focus on patients at high risk of a pattern of missingness, and what works for them, is what is needed; that's my reading of what this important study tells us. And we cannot speculate on what works until research focussed on patients with patterns of high missingness is conducted. As you are probably aware, the evidence that underpins the economic cost per missed apt in the UK is flimsy at best (line 49) and, it could be argued, may have some positive cost benefit (clinicians doing catch up letters etc), so I would suggest including 'estimate' at the very least in your statement about this. Linked to this, would an important recommendation from this work not be that NHS Digital should mandate the recording and return of all missed/unkept appointments, in line with the need to return all attended ones too? Given that you present a strong case for attention to be paid to this issue. Minor aspects: Appointment age as a term is misleading; I had to read further down to confirm that it is the age of the patient and not the time elapsed since the appointment was scheduled. This work was focussed on NHS care in England (not the UK). In summary, though, this is an important study that helps distinguish between the patients who miss the occasional apt and those who have patterns of high missed apts within healthcare, and it helps the case that interventions for each group are likely to be distinct from each other.
My overarching recommendation is that the paper be reframed around this finding, taking the above comments into account. Dr Andrea E Williamson

-----------------------------------------------------------

Any attachments provided with reviews can be seen via the following link: [LINK]

2 Feb 2021 Submitted filename: PlosMed Response letter v4.docx

12 Aug 2021

Dear Dr. Thakrar,

Thank you very much for re-submitting your manuscript "Characterising and reducing the burden of unkept outpatient appointments in the NHS through a national data-driven machine learning approach: A retrospective cohort study" (PMEDICINE-D-20-00544R2) for consideration at PLOS Medicine. We do apologize for the long delay in sending you a decision. I have discussed the paper with our academic editor and it was also seen again by two reviewers. I am pleased to tell you that, provided the remaining editorial and production issues are fully dealt with, we expect to be able to accept the paper for publication in the journal. The remaining issues that need to be addressed are listed at the end of this email. Any accompanying reviewer attachments can be seen via the link below. Please take these into account before resubmitting your manuscript: [LINK] ***Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.*** In revising the manuscript for further consideration here, please ensure you address the specific points made by each reviewer and the editors. In your rebuttal letter you should indicate your response to the reviewers' and editors' comments and the changes you have made in the manuscript. Please submit a clean version of the paper as the main article file.
A version with changes marked must also be uploaded as a marked up manuscript file. Please also check the guidelines for revised papers at http://journals.plos.org/plosmedicine/s/revising-your-manuscript for any that apply to your paper. If you haven't already, we ask that you provide a short, non-technical Author Summary of your research to make findings accessible to a wide audience that includes both scientists and non-scientists. The Author Summary should immediately follow the Abstract in your revised manuscript. This text is subject to editorial change and should be distinct from the scientific abstract. We hope to receive your revised manuscript within 1 week. Please email us (plosmedicine@plos.org) if you have any questions or concerns. We ask every co-author listed on the manuscript to fill in a contributing author statement. If any of the co-authors have not filled in the statement, we will remind them to do so when the paper is revised. If all statements are not completed in a timely fashion this could hold up the re-review process. Should there be a problem getting one of your co-authors to fill in a statement we will be in contact. YOU MUST NOT ADD OR REMOVE AUTHORS UNLESS YOU HAVE ALERTED THE EDITOR HANDLING THE MANUSCRIPT TO THE CHANGE AND THEY SPECIFICALLY HAVE AGREED TO IT. Please ensure that the paper adheres to the PLOS Data Availability Policy (see http://journals.plos.org/plosmedicine/s/data-availability), which requires that all data underlying the study's findings be provided in a repository or as Supporting Information. For data residing with a third party, authors are required to provide instructions with contact information for obtaining the data. PLOS journals do not allow statements supported by "data not shown" or "unpublished results." For such statements, authors must provide supporting data or cite public sources that include it. 
To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. Please note, when your manuscript is accepted, an uncorrected proof of your manuscript will be published online ahead of the final version, unless you've already opted out via the online submission form. If, for any reason, you do not want an earlier version of your manuscript published online or are unsure if you have already indicated as such, please let the journal staff know immediately at plosmedicine@plos.org. Please let me know if you have any questions, and we look forward to receiving the revised manuscript. Sincerely, Richard Turner, PhD Senior Editor, PLOS Medicine rturner@plos.org ------------------------------------------------------------ Requests from Editors: Please adapt the title to better match PLOS Medicine style. We suggest: "Characterising the nationwide burden and predictors of unkept outpatient appointments in the NHS in England: A cohort study using a machine-learning approach". At line 33, please make that "... Data ... were used ...". At line 39 (abstract), we suggest quoting the estimated proportions contributed by the three predictors mentioned. 
Please remove the subsection headed "Key limitations" in your abstract. This material should be located in a new final sentence in the "Methods and findings" subsection, which should begin "Study limitations include ..." or similar and should quote 2-3 of the study's main limitations. Please remove the "non-technical summary" following the abstract and instead craft an accessible "Author Summary" section. You may find it helpful to consult one or two recent research papers in PLOS Medicine to get a sense of the preferred style. Please state early in the Methods section whether or not the study had a protocol or prespecified analysis plan. Please refer to the attached RECORD checklist in the Methods section ("See S1_RECORD_Checklist" or similar). Please restructure the start of the Discussion section, as the first paragraph should summarize the study's findings (it appears that this would be achieved if the first two current paragraphs of this section were amalgamated). Please adapt reference call-outs to precede punctuation throughout the text (e.g., "... up to 6 months [5,6]."). Please remove the information on study funding and competing interests from the end of the main text. This information will appear in the article metadata in the event of publication, via entries in the submission form. Noting reference 4, please ensure that all citations have full access details. Please use the journal name abbreviation "PLoS ONE" in the reference list. Please remove all iterations of "[Internet]" from the reference list.

Comments from Reviewers:

*** Reviewer #1: We thank the authors for addressing our previous concerns, particularly for the additional Appendix 4 detailing the model specification, and the analysis suggesting that the data leakage did not significantly bias the findings (as illustrated in Figure 3). While the information on RFM models is noted, in principle it would be more ideal to have the training and validation datasets not sharing the same patients.
This caveat might be briefly acknowledged if thought appropriate.

*** Reviewer #3: Thank you for addressing my feedback comprehensively. There are some minor typo errors at lines 46, 87, and 170. ***

Any attachments provided with reviews can be seen via the following link: [LINK]

16 Aug 2021 Submitted filename: PlosMed Response letter v5.docx

18 Aug 2021

Dear Dr. Thakrar,

Thank you very much for re-submitting your manuscript "Characterising the nationwide burden and predictors of unkept outpatient appointments in the NHS in England: A cohort study using a machine-learning approach" (PMEDICINE-D-20-00544R3) for consideration at PLOS Medicine. I have discussed the paper with editorial colleagues, and we will need to ask you to address some further points before we are in a position to proceed further. The remaining issues that should be addressed are listed at the end of this email. ***Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.*** In revising the manuscript for further consideration here, please ensure you address the specific points made by the editors. In your rebuttal letter you should indicate your response to the reviewers' and editors' comments and the changes you have made in the manuscript. Please submit a clean version of the paper as the main article file. A version with changes marked must also be uploaded as a marked up manuscript file. Please also check the guidelines for revised papers at http://journals.plos.org/plosmedicine/s/revising-your-manuscript for any that apply to your paper.
If you haven't already, we ask that you provide a short, non-technical Author Summary of your research to make findings accessible to a wide audience that includes both scientists and non-scientists. The Author Summary should immediately follow the Abstract in your revised manuscript. This text is subject to editorial change and should be distinct from the scientific abstract. We hope to receive your revised manuscript within 1 week. Please email us (plosmedicine@plos.org) if you have any questions or concerns. We ask every co-author listed on the manuscript to fill in a contributing author statement. If any of the co-authors have not filled in the statement, we will remind them to do so when the paper is revised. If all statements are not completed in a timely fashion this could hold up the re-review process. Should there be a problem getting one of your co-authors to fill in a statement we will be in contact. YOU MUST NOT ADD OR REMOVE AUTHORS UNLESS YOU HAVE ALERTED THE EDITOR HANDLING THE MANUSCRIPT TO THE CHANGE AND THEY SPECIFICALLY HAVE AGREED TO IT. Please ensure that the paper adheres to the PLOS Data Availability Policy (see http://journals.plos.org/plosmedicine/s/data-availability), which requires that all data underlying the study's findings be provided in a repository or as Supporting Information. For data residing with a third party, authors are required to provide instructions with contact information for obtaining the data. PLOS journals do not allow statements supported by "data not shown" or "unpublished results." For such statements, authors must provide supporting data or cite public sources that include it. To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. 
Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. Please note, when your manuscript is accepted, an uncorrected proof of your manuscript will be published online ahead of the final version, unless you've already opted out via the online submission form. If, for any reason, you do not want an earlier version of your manuscript published online or are unsure if you have already indicated as such, please let the journal staff know immediately at plosmedicine@plos.org. Please let me know if you have any questions, and we look forward to receiving the revised manuscript. Sincerely, Richard Turner, PhD Senior Editor, PLOS Medicine rturner@plos.org ------------------------------------------------------------ Requests from Editors: At line 26, please make that "appointments cost". At line 41 in the abstract, immediately prior to the sentence summarizing study limitations, we feel that an additional sentence or two should be added to summarize the inferences from the GBM work beginning at line 205 in the Results. The information about sensitivity may be the most intuitive finding to report in the abstract. At line 44, please make that "... appointments remain ..." or similar. Please revisit the "Author summary", which should consist of three subsections with the following headings, each comprising 3-4 short points (in turn of 1-2 short sentences each): "Why was this study done? - - - What did the researchers do and find? - - - What do these findings mean? 
- - - " Please use the active voice (e.g., "We investigated ...) in at least one point. At line 64, we suggest "impacts". At line 66, we suggest "... missed more than 2 appointments". At line 86, please make that "... carried out in a UK setting ...". At line 90, please make that "potential utility". At lines 101 and 207, please make that "[data] were used ...". At line 119, should that be "... appointments and unkept appointments ..."? At line 187, for example, please avoid using italics for emphasis. At line 260, please make that "existing literature". At line 277, please make that "have been shown". At line 278, please revisit the wording - perhaps "... the present findings will allow us to go a step farther ..." is intended? At line 352, please make that "break down" (two words). At line 359, please remove the information on competing interests, which will appear in the article metadata via entries in the submission form. *** 24 Aug 2021 Submitted filename: PlosMed Response letter v7.docx Click here for additional data file. 25 Aug 2021 Dear Dr Thakrar, On behalf of my colleagues and the Academic Editor, Dr Basu, I am pleased to inform you that we have agreed to publish your manuscript "Characterising the nationwide burden and predictors of unkept outpatient appointments in the NHS in England: A cohort study using a machine-learning approach" (PMEDICINE-D-20-00544R4) in PLOS Medicine. Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow up email. Please be aware that it may take several days for you to receive this email; during this time no action is required by you. Once you have received these formatting requests, please note that your manuscript will not be scheduled for publication until you have made the required changes. 
In the meantime, please log into Editorial Manager at http://www.editorialmanager.com/pmedicine/, click the "Update My Information" link at the top of the page, and update your user information to ensure an efficient production process. PRESS We frequently collaborate with press offices. If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximise its impact. If the press office is planning to promote your findings, we would be grateful if they could coordinate with medicinepress@plos.org. If you have not yet opted out of the early version process, we ask that you notify us immediately of any press plans so that we may do so on your behalf. We also ask that you take this opportunity to read our Embargo Policy regarding the discussion, promotion and media coverage of work that is yet to be published by PLOS. As your manuscript is not yet published, it is bound by the conditions of our Embargo Policy. Please be aware that this policy is in place both to ensure that any press coverage of your article is fully substantiated and to provide a direct link between such coverage and the published work. For full details of our Embargo Policy, please visit http://www.plos.org/about/media-inquiries/embargo-policy/. To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols Thank you again for submitting to PLOS Medicine. We look forward to publishing your paper. Sincerely, Richard Turner, PhD Senior Editor, PLOS Medicine rturner@plos.org
References (32 in total)

1.  Technology will improve doctors' relationships with patients, says Topol review.

Authors:  Abi Rimmer
Journal:  BMJ       Date:  2019-02-11

2.  Reducing non-attendance at outpatient clinics.

Authors:  C A Stone; J H Palmer; P J Saxby; V S Devaraj
Journal:  J R Soc Med       Date:  1999-03

3.  Reasons for new referral non-attendance at a pediatric dermatology center: a telephone survey.

Authors:  K L E Hon; T F Leung; Y Wong; K C Ma; T F Fok
Journal:  J Dermatolog Treat       Date:  2005-04

4.  Failure to attend out-patient clinics: is it in our DNA?

Authors:  Kinley Roberts; Ian Callanan; Niall Tubridy
Journal:  Int J Health Care Qual Assur       Date:  2011

5.  Do 'do not attends' at a genitourinary medicine service matter?

Authors:  C Swarbrick; E Foley; L Sanmani; R Patel
Journal:  Int J STD AIDS       Date:  2010-05

6.  Reduction of missed appointments at an urban primary care clinic: a randomised controlled study.

Authors:  Noelle Junod Perron; Melissa Dominicé Dao; Michel P Kossovsky; Valerie Miserez; Carmen Chuard; Alexandra Calmy; Jean-Michel Gaspoz
Journal:  BMC Fam Pract       Date:  2010-10-25

7.  Who is not coming to clinic? A predictive model of excessive missed appointments in persons with multiple sclerosis.

Authors:  Elizabeth S Gromisch; Aaron P Turner; Steven L Leipertz; John Beauvais; Jodie K Haselkorn
Journal:  Mult Scler Relat Disord       Date:  2019-11-09

8.  Why do patients not keep their appointments? Prospective study in a gastroenterology outpatient clinic.

Authors:  A Murdock; C Rodgers; H Lindsay; T C K Tham
Journal:  J R Soc Med       Date:  2002-06

9.  Stating Appointment Costs in SMS Reminders Reduces Missed Hospital Appointments: Findings from Two Randomised Controlled Trials.

Authors:  Michael Hallsworth; Dan Berry; Michael Sanders; Anna Sallis; Dominic King; Ivo Vlaev; Ara Darzi
Journal:  PLoS ONE       Date:  2015-09-14

10.  Evaluating the impact of a 'virtual clinic' on patient experience, personal and provider costs of care in urinary incontinence: A randomised controlled trial.

Authors:  Georgina Jones; Victoria Brennan; Richard Jacques; Hilary Wood; Simon Dixon; Stephen Radley
Journal:  PLoS ONE       Date:  2018-01-18
