
Clinical utility and acceptability of a whole-hospital, pro-active electronic paediatric early warning system (the DETECT study): A prospective e-survey of parents and health professionals.

Bernie Carter1, Holly Saron1, Lucy Blake2, Chin-Kien Eyton-Chong3, Sarah Dee4, Leah Evans4, Jane Harris5, Hannah Hughes6, Dawn Jones7, Caroline Lambert8,9, Steven Lane10, Fulya Mehta3, Matthew Peak11, Jennifer Preston12, Sarah Siner7, Gerri Sefton13, Enitan D Carrol8,9.   

Abstract

BACKGROUND: Paediatric early warning systems (PEWS) are a means of tracking physiological state and alerting healthcare professionals about signs of deterioration, triggering a clinical review and/or escalation of care of children. A proactive end-to-end deterioration solution (the DETECT surveillance system) with an embedded e-PEWS that included sepsis screening was introduced across a tertiary children's hospital. One component of the implementation programme was a sub-study to develop an understanding of the DETECT e-PEWS in terms of its clinical utility and its acceptability. AIM: This study aimed to examine how parents and health professionals view and engage with the DETECT e-PEWS apps, with a particular focus on their clinical utility and acceptability.
METHOD: A prospective e-survey, using closed (tick box or sliding scale) and open (text-based) questions, of parents (n = 137) and health professionals (n = 151) with experience of DETECT e-PEWS. Data were collected between February 2020 and February 2021.
RESULTS: Quantitative data were analysed using descriptive and inferential statistics and qualitative data with generic thematic analysis. Overall, both clinical utility and acceptability (across seven constructs) were high across both stakeholder groups although some challenges to utility (e.g., sensitivity of triggers within specific patient populations) and acceptability (e.g., burden related to having to carry extra technology) were identified.
CONCLUSION: Despite the multifaceted nature of the intervention and the complexity of implementation across a hospital, the system demonstrated clinical utility and acceptability across two key groups of stakeholders: parents and health professionals.


Year:  2022        PMID: 36107953      PMCID: PMC9477367          DOI: 10.1371/journal.pone.0273666

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.752


Introduction

Paediatric early warning systems (PEWS) encompass a range of different interventions [1]. They are a means of tracking physiological state and alerting healthcare professionals about signs of deterioration, triggering a clinical review and/or escalation of care of children [2]. PEWS are reported to be used extensively internationally [2, 3] and across different health care settings such as emergency departments [4-6], oncology and haematology [7-9], and, more rarely, hospital-wide [10, 11] or nationally [12]. PEWS are used in paediatric in-patient hospital settings [2] in resource-rich [10] and resource-limited countries [7, 13]. Although electronic-based PEWS are reported as bringing additional safety benefits, such as reduction in human error, greater time efficiency and instant visibility of recorded data to the clinical team [14], this has not been reported across a whole-hospital setting. The acronym PEWS is sometimes used ambiguously in the literature to describe early warning scores [15-18] or systems [6, 10, 11, 19], or both score and system [4]. Within this paper, PEWS is used to denote a system. Although PEW scores are an important step, implementing a score in isolation, without considering the wider system factors [20] and socio-technical systems [2], is unlikely to be effective, as it does not take into account the environment, organisational culture, policy and human action contexts which impact upon the occurrence and prevention of deterioration [10]. Smith [21] proposes a ‘chain of prevention’, composed of five interlinked rings of equal importance (education, monitoring, recognition, escalation, and response), as a structure for preventing and detecting patient deterioration and cardiac arrest. Within the UK, the inquiry ‘Why Children Die’ report [22] led to the recommendation for “a standardised and rational monitoring system with imbedded early identification systems for children developing critical illness–an early warning score” (p4).
This recommendation was made despite the evidence base for PEWS being weak, both in terms of effectiveness in decreasing all-cause mortality [23] and in terms of sensitivity in identifying children who need escalation of care in hospitals with higher levels of paediatric resource [24]. Across the UK, use of PEW scores and systems is widespread, but a variety of scoring systems, age bandings and formats (paper and electronic) exist [25]. A recent survey identified that while there are many common elements, standardisation across the UK has yet to be achieved [1]; this standardisation is the aim of the national PEWS Programme Board [1, 25]. Within hospital settings, implementation of PEWS is complex, requiring iterative processes to sustain use [10]. This complexity, as well as the methodological challenges associated with researching effectiveness, may contribute to the weak and often conflicting evidence about whether the implementation of PEWS does lead to reductions in cardiac arrest, morbidity, and mortality [2, 10]. Effective implementation requires consideration of implementation fidelity, effectiveness, and utility, and account needs to be taken of key components of the system, such as situational awareness [6, 20, 26], communication [7], the interface of the system with the users [27], the degree of change to workflow [19], the barriers and enablers of uptake [28, 29], and embedding and adaptation over time [10]. This paper reports the findings from survey data generated as part of one of the sub-studies from the Dynamic Electronic Tracking and Escalation to reduce Critical Care Transfers (DETECT) study [30].

The DETECT surveillance system

The DETECT study implemented a proactive end-to-end deterioration solution (the DETECT surveillance system, Fig 1) across a tertiary children’s hospital. This built on earlier work on translating PEW scoring from paper to electronic surveillance [14]. The DETECT surveillance system aims to proactively screen paediatric patients for early signs of serious deterioration or sepsis, thereby reducing complications and emergency transfers to critical care following deterioration in hospital.
Fig 1

DETECT surveillance system.

The DETECT surveillance system is supported by System C’s CareFlow Connect and Vitals (paediatric version) apps. Vitals is an electronic observation and decision support system, which involves staff using an electronic hand-held device (iPod touch in this study) to record children’s vital signs. The recorded signs include breathing rate, effort of breathing, oxygen saturation, oxygen requirement, heart rate, blood pressure, capillary refill time, temperature, ‘alertness, verbal responsiveness, pain responsiveness, or unresponsiveness (AVPU)’, and nurse or parental concerns (Fig 2). From the recorded data, a pre-defined PEW score is automatically calculated, which categorises the risk (low, moderate, critical) of developing serious illness. CareFlow Connect is an encrypted communication system, which interacts with Vitals to provide automated alerts about the sickest children, generated from the PEW score or suspicion of sepsis; it includes the ability to escalate concerns directly to the clinical team, who can respond in real time without the nurse leaving the child’s bedside. These modified apps are referred to as DETECT e-PEWS and are used by health professionals on iPods to document vital signs or respond to alerts of deterioration triggered by the system.
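The scoring step described above is a threshold-based mapping from vital signs to points and then to a risk category. A minimal sketch of how such a calculation works is shown below; the bands and cut-offs here are entirely illustrative assumptions, since the actual Alder Hey age-specific thresholds are not given in this paper.

```python
# Hypothetical sketch of threshold-based PEW scoring. The bands and
# category cut-offs below are ILLUSTRATIVE ONLY, not clinical values.

def pew_subscore(value, bands):
    """Return the sub-score contributed by one vital sign.
    bands: list of ((low, high), points) tuples with inclusive ranges."""
    for (low, high), points in bands:
        if low <= value <= high:
            return points
    return 0

# Invented heart-rate bands for an infant (NOT the Alder Hey thresholds)
HEART_RATE_BANDS = [
    ((110, 150), 0),               # expected range -> no points
    ((90, 109), 1), ((151, 170), 1),
    ((0, 89), 3), ((171, 999), 3),
]

def categorise(total_score):
    """Map a total PEW score to the risk categories named in the paper."""
    if total_score >= 5:
        return "critical"
    if total_score >= 3:
        return "moderate"
    return "low"

# A heart rate of 165 falls in the (151, 170) band -> 1 point
score = pew_subscore(165, HEART_RATE_BANDS)
print(score, categorise(score))
```

In the deployed system this calculation happens automatically in the Vitals app as each observation set is entered, with the resulting category driving the CareFlow Connect alerts.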
Fig 2

Example screenshots from iPod touch: DETECT e-PEW score screen and sepsis bundle overview (fictitious patient data).

Both apps had bespoke modifications made to them for the purpose of the DETECT study. The PEW score thresholds used the established Alder Hey age-specific PEW score, and proactive screening for early signs of sepsis used modified NICE criteria [31]. Hands-on training in using the DETECT system, and education about the rationale for introducing the system (e.g., reducing human error in calculating scores, and reducing deterioration and the need for transfer to critical care), was delivered to all health professionals who would be using the system, either in small groups or one-to-one. Children and parents were made aware of the implementation of the technology through dedicated posters in all public areas and an explanation provided at the child’s admission. Following staff training, the apps were deployed on iPod touches and iPads across ten in-patient wards (240 beds). Each member of ward staff providing direct clinical care to children carried an iPod, and the Nurse in Charge of the shift had an iPad for an overview of the entire ward. The clinical teams had a minimum of one iPod per team; each member of the on-call team (the medical team that provides out-of-hours cross cover for in-patients and new admissions) and of the Acute Care Team (the nurse-led Rapid Response Team) had an iPod or iPad (some used their personal mobile phone). Additionally, there was an agreement, approved by Trust Information Governance, that staff could have the CareFlow Vitals and Connect apps loaded onto their personal mobile phone under the ‘Bring Your Own Device’ scheme, which meant that some staff did not have to carry an additional device and found it more convenient (Vitals is device-specific (Apple); Connect is device-agnostic and works on all personal devices). The vital signs data were visible in real time on iPods, iPads, computers and personal devices and were also integrated back to Meditech, the electronic patient record (EPR) used by the study hospital.
In the study setting, Vitals was implemented as a mandatory practice for monitoring all in-patients on study wards. CareFlow Connect was implemented a month later; it was available but not mandatory, and its use was inconsistent.

Defining clinical utility and acceptability

It is important to define the concepts of clinical utility and acceptability, as it is evident in the literature that there is ambiguity and overlap in what is encompassed by the terms, and there is no consensus on definitions [32]. Within this discussion, clinical utility is defined in its narrowest sense; does the technology do what it is supposed to do, and does it perform its designated function [33]? However, the complexity inherent in implementation, adoption and assimilation of technology in healthcare systems [34, 35] requires the definition of acceptability to acknowledge its multifaceted nature, and to be more encompassing [32, 34, 36, 37]. The Theoretical Framework of Acceptability (TFA) (v2) [37] is composed of seven component constructs (Fig 3): ‘affective attitude’, ‘burden’, ‘ethicality’, ‘intervention coherence’, ‘opportunity costs’, ‘perceived effectiveness’ and ‘self-efficacy’. The TFA proposes that acceptability is a “multi-faceted construct that reflects the extent to which people delivering or receiving a healthcare intervention consider it to be appropriate, based on anticipated or experienced cognitive and emotional responses to the intervention” [37].
Fig 3

Domains of theoretical framework of acceptability (v2) as applied to findings.

The aim of this part of the sub-study was to generate a broad, baseline understanding of the DETECT system in terms of its clinical utility and its acceptability to health professionals who had experience of using the handheld (iPods and iPads) DETECT e-PEWS and to parents and children who had received care by professionals using the system. The research question underpinning this sub-study was: ‘How do parents and health professionals view and engage with the DETECT e-PEWS?’ The Consensus-Based Checklist for Reporting Survey Studies (CROSS) [38] has been used to ensure high quality reporting.

Materials and methods

Study design

A prospective e-survey (paper copies available if preferred) using closed (tick box or sliding scale) and open (text-based) questions.

Participants and setting

The target population was parents of children (aged 0–18 years old) who were in-patients (excluding children admitted as day-cases, or to the paediatric intensive care unit or neonatal surgical unit), and health professionals at Alder Hey Children’s Hospital, a tertiary setting in Liverpool in the UK. No sample size calculation was used. Recruitment of parents was undertaken face-to-face either by a researcher or, during the COVID-19 pandemic, by dedicated, trained DETECT study research nurses. Recruitment occurred between April 2020 and February 2021, although recruitment was not possible in some months due to staff shortages. A mixture of convenience, purposive and snowball sampling was used. Two groups of parents were recruited: Group 1 (parents whose children had not experienced a critical deterioration event during admission; non-CDE group) (n = 68 parents), and Group 2 (parents whose children had experienced a critical deterioration event; CDE group) (n = 69 parents). A CDE was defined as a deterioration where the patient is critically unwell, which culminates in an emergency transfer to the high dependency unit or the intensive care unit, or an unexpected death. Recruitment of health professionals (doctors, nurses and allied health professionals, n = 151) with experience of using the system was initially opportunistic, conducted face-to-face on the wards by the trained research nurses asking staff if they were interested, and latterly by email. Recruitment occurred between May 2020 and January 2021; face-to-face recruitment only occurred across seven months due to the pandemic (staff shortages). Health professionals were also given detailed information via information sheets, given sufficient time to consider if they wanted to participate, and given the opportunity to ask questions (face-to-face or remotely).
The possibility of coercion was avoided by making it clear that participation was voluntary and leaving the device with the survey link on it with the potential participant for about ten minutes; allowing the participant to complete the survey or not, as preferred. Parents were approached on the wards where the DETECT devices (iPods or iPads) were being used and asked if they were interested in the study. Tailored information sheets for parents were given to potential participants. Consent by parents and health professionals for participation in the survey was gained via a ‘tick box’ at the start of the survey.

Parent involvement and engagement

To ground the design and content of the survey, and to ensure that the wording and flow of the questions were clear and unambiguous, parents from the Alder Hey Children’s NHSFT Parent and Carer’s Research Forum, which is funded by the National Institute for Health Research (NIHR) Alder Hey Clinical Research Facility (CRF), were engaged via two face-to-face workshop groups (n = 8) and by email (n = 3). Potentially sensitive questions, such as whether parents were able to identify if their child was deteriorating, were discussed with the parents; the final wording used and its positioning at the end of the survey were both informed by this discussion. These contributions, and the refinements made to the survey as a result, helped create an engaging and sensitive survey that was well received by parents.

E-surveys

Semi-structured surveys were designed specifically for the study; these survey instruments were not validated, although, as previously noted, their sensitivity in terms of health literacy had been checked with parents. Consultation with health professionals (paediatric doctors and nurses who were part of our wider steering group) helped to develop the structure, content, and readability of the health professional survey. Pretesting/piloting of our proposed final versions of the surveys was carried out on one occasion with parents (see engagement in previous section, n = 11) and health professionals (nurses and doctors, n = 5); no revisions were identified as being required. Closed (core) questions required mandatory responses to avoid non-response error. None of the questions were weighted. Three versions, each taking about 3–5 minutes to complete, were created: one for health professionals, one for parents of children who had experienced a critical deterioration event (CDE), and one for non-CDE parents. In the parent surveys we referred to vital signs as ‘obs’ (an abbreviation of observations).

Parent survey

Questions in the non-CDE and CDE parent surveys were identical apart from one additional question for CDE parents. The surveys were only available in English and no dedicated translation was available. The surveys consisted mainly of closed (tick box or sliding scale) questions (n = 13), some with more than one item; three questions had a box for parents to provide further comments. There was one open question at the end of both the CDE and non-CDE parent surveys. The surveys designed for parents were composed of five sections: (1) Introduction (brief information about the survey and the study); (2) Deciding to Take Part (consent); (3) Background Information (n = 5 questions asking about relationship to child, gender and age of their child, the ward their child is/was being treated on, and the number of times their child has been admitted to the study hospital); (4) About the Device (n = 9 questions asking about satisfaction with explanations about the device and how it was used, trust in technology, and feeling safe and secure with the device, relating to our definitions of utility [33] and acceptability [37]), with a final question asking parents if they knew when their child was getting ‘poorlier’ (deteriorating); and (5) a Thank You section. Typically, the researcher did not assist parents to complete the survey, although support was available as needed.

Health professional survey

This consisted of closed (n = 21) questions (drop down response, tick box or sliding scale) some with more than one item; all but five questions had a comment box. The health professionals’ survey consisted of eight sections, five of which (education, monitoring, recognition, escalation, and response) related to the chain of prevention [21]: Introduction; Deciding to Take Part; Education and Training, (n = 2 questions); Monitoring and Recognition (n = 5 questions); Escalation and Response (n = 3 questions); System Features (n = 4 questions), Concluding Thoughts (n = 3 questions); all but the first section related to our definitions of utility [33] and acceptability [37].

Ethics

The study gained ethics approval via the North-West, Liverpool East Research Ethics Committee (IRAS ID: 215339). All those involved in gaining consent were suitably qualified, experienced, and trained and consent was gained in accordance with the principles of Good Clinical Practice on Taking Consent [39]. No potential participant was put under any level of pressure and their right to refuse to participate in the survey without giving reasons was respected. All relevant governance protocols relating to data management and anonymisation were followed. Participants ticked a consent/assent box at the start of the survey and submission of the surveys was taken as confirming consent (or assent) to participate in the study. All responses were anonymous unless they chose to share their contact details for potential participation in next phase (interview) of the study. Direct feedback to individual survey participants was not possible (due to anonymity of survey responses) but findings will be shared with the broad population of parents through the Parent and Carer’s Research Forum, hospital newsletter, social media etc. and with health professionals via Grand Rounds and other meetings. All required data governance procedures were followed.

Analysis

The survey responses were coded and analysed using descriptive and inferential statistics within a statistical package (SPSS v25). The text in the open questions was collated and subjected to generic, descriptive thematic analysis. The results are reported separately for the parents (non-CDE and CDE) and the health professionals (depending on role). Descriptive statistics, mean (M) and standard deviation (SD), are presented to describe variables measured on a continuous scale; categorical variables are reported using counts and percentages. For the health professional data, Chi-squared and Fisher’s exact tests were used to assess between-group differences when the outcome of interest was categorical, and an independent t-test was used when the outcome was continuous.
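The analyses described above can be illustrated with a minimal, self-contained sketch: descriptive statistics for a continuous variable and the Pearson chi-squared statistic for a 2 × 2 contingency table of categorical responses. The study itself used SPSS v25; the data values below are invented purely for illustration.

```python
from statistics import mean, stdev

# Fabricated example satisfaction scores (0-100 scale), for illustration only
satisfaction = [86, 90, 75, 95, 88]
print(f"M = {mean(satisfaction):.1f}, SD = {stdev(satisfaction):.1f}")

def chi_squared_2x2(table):
    """Pearson chi-squared statistic for a 2x2 contingency table.
    table = [[a, b], [c, d]] of observed counts."""
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# e.g. hypothetical yes/no responses crossed with two staff groups
print(round(chi_squared_2x2([[30, 10], [20, 20]]), 2))
```

In practice the test statistic would be compared against the chi-squared distribution with one degree of freedom (or Fisher's exact test used when expected cell counts are small), which is what a statistical package does behind the scenes.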

Results

Characteristics of participants

Parents

Of the parents approached, there was a 9–10% decline rate (the typical reason for declining was being focused on their child). In total, 137 parents completed the survey (mothers n = 115, 83.9%; fathers n = 22, 16.1%); of these, there were 68 non-CDE parents (n = 59 mothers, n = 9 fathers) and 69 CDE parents (n = 56 mothers, n = 13 fathers) (Table 1). Around half the parents (n = 27 CDE and n = 38 non-CDE) provided open-text responses. All but three surveys (n = 134) were completed electronically.
Table 1

Parent and child demographics from parent survey responses.

                              non-CDE             CDE
                              N (68)    %         N (69)    %
Parent status
 Mother                       59        86.8      56        81.2
 Father                       9         13.2      13        18.8
Child gender
 Girl                         29        42.6      29        42
 Boy                          39        57.4      40        58
Child age
 < 1 year                     23        33.8      44        63.8
 1 - < 2 years                5         7.4       6         8.7
 2 - < 7 years                19        27.9      9         13
 7 - < 13 years               12        17.6      7         10.1
 > 13 years                   9         13.2      3         4.3
Number of admissions
 First admission              43        63.2      40        58
 2–5 admissions               14        20.6      17        24.6
 6–10 admissions              4         5.9       8         11.6
 > 10 admissions              7         10.3      4         5.8
Ward
 Cardiac                      7         10.3      22        31.9
 General paediatrics          17        25        10        14.5
 General surgery              10        14.7      5         7.2
 High dependency unit*        1         1.5       13        18.8
 Medical speciality           11        16.2      3         4.3
 Oncology                     6         8.8       7         10.1
 Speciality surgery           10        14.7      3         4.3
 Neurology                    5         7.4       5         7.2
 Burns                        1         1.5       0         0
 Emergency decision unit**    0         0         0         0

* The high dependency unit (HDU) provides level 2 critical care [40]. The HDU patient population includes patients who have deteriorated on the ward, high acuity patients post-operatively as well as some step-downs from PICU with higher care needs than can be delivered on a ward.

** EDU is a short stay unit of admissions direct from ED who either stabilise and are discharged or are admitted to another ward within 24 hours.

The parents reported on the experiences of their sons (n = 79, 57.7%) and daughters (n = 58, 42.3%). The children's ages ranged from under 1 year to 13 years or older, with the largest group (n = 67, 48.9%) in the < 1 year category. For most of the children (n = 83, 60.6%) this was their first admission, although 11 children (8%) had experienced ten or more admissions. There was representation across all eligible ward settings, with most children across both groups nursed on the cardiac unit (n = 29) and general paediatrics (n = 27) at the time of survey completion. Focusing solely on CDE children, most were nursed on the cardiac unit (n = 22), high dependency unit (n = 13), and general paediatrics (n = 10) at the time of survey completion (Table 1).

Health professionals

In total, 151 health professionals participated in the survey (a decline rate was not calculated, as staff were approached by email as well as directly; the typical reason given for declining was being ‘too busy’). Of the 151 participants, the majority (n = 102, 67.5%) had been using DETECT e-PEWS for 6 months or longer, with 49 (32.5%) having used the device (iPod or iPad) for < 6 months. Forty-four percent (n = 66) of health professionals provided at least one open-text response, with just under half of these (n = 25) providing at least three; some provided up to nine. All surveys were completed electronically. The sample included nurses, doctors and allied health professionals who were using DETECT e-PEWS in two distinct ways, and the data are grouped and presented accordingly: ‘Documenting Vital Signs’ (D-VS), which involved the work of taking and recording the child’s vital signs in the app on the iPod, or ‘Responding to Vital Signs’ (R-VS), which encompassed the work of responding to concerns and alerts on the iPods, iPads or personal devices from the automatically generated PEW scores and taking appropriate action (Table 2).
Table 2

Aspect of DETECT e-PEWS app used and professional role.

Aspect of DETECT e-PEWS app and role         N      %
Documenting vital signs (D-VS) on iPod       133    88.1
 Staff Nurse                                 78     51.7
 Sister                                      19     12.6
 Student Nurse                               16     10.6
 Allied Health Professional*                 10     6.6
 Health Care Assistant                       8      5.3
 Ward Manager                                1      0.7
 Assistant Practitioner**                    2      1.4
Responding to vital signs (R-VS) on iPad     18     11.9
 Doctor                                      14     9.3
 Advanced Clinical Practitioner***           2      1.3
 Acute Care Team****                         2      1.4
Length of time using DETECT e-PEWS
 < 6 months                                  49     32.5
 ≥ 6 months                                  102    67.5

* Allied Health Professional is a term that includes physiotherapists and occupational therapists. We did not collect data on the specific profession of AHPs.

**Assistant Practitioners are not registered practitioners but they support care and have a high level of skill through their experience and training [41].

*** Advanced Clinical Practitioners are nurses or AHPs trained to Masters level on an approved ACP course who deliver clinical caseload management autonomously to acute and complex patient groups [42].

**** Acute Care Team is the nurse led Rapid Response Team in the study hospital.

In the D-VS group (n = 133) the disciplinary role of the participants was Staff Nurse (n = 78, 51.7%), followed by Sister (n = 19, 12.6%), Student Nurse (n = 16, 10.6%), Allied Health Professional (n = 10, 6.6%), Health Care Assistant (n = 8, 5.3%), Assistant Practitioner (n = 2, 1.4%) and Ward Manager (n = 1, 0.7%). In the R-VS group (n = 18) the reported role of most participants was Doctor (n = 14, 9.3%), followed by Advanced Clinical Practitioner (n = 2, 1.3%) and Acute Care Team (n = 2, 1.4%). Health professionals worked across all 10 of the eligible ward settings, with the majority working on four wards: general paediatrics (n = 34, 22.5%), medical speciality (n = 29, 19.2%), neurology (n = 26, 17.2%) and the high dependency unit (n = 21, 13.9%) (Table 3).
Table 3

Ward/unit professionals working on.

Ward                         N     %
 General paediatrics         34    22.5
 Medical speciality          29    19.2
 Neurology                   26    17.2
 High dependency unit        21    13.9
 General surgery             11    7.3
 Specialist surgery          11    7.3
 Oncology                    8     5.3
 Cardiac                     6     4.0
 Burns                       3     2.0
 Emergency decision unit     2     1.3

Parents: Core findings

Data have been reported from parents in two groups: parents whose children had not experienced a critical deterioration event (non-CDE) and those whose children had (CDE). Labels are used to indicate the parent number from the survey and whether the parent was CDE or non-CDE, for example, (CDE P12). Overall, the parents in both groups had similar experiences in terms of their engagement with and perceptions of the devices (Table 4). Most parents reported that they knew when their child was ‘getting poorlier’ either ‘all’ or ‘some’ of the time; non-CDE (n = 63, 92.6%) and CDE (n = 62, 89.8%). Summary statistics are reported in Table 4.
Table 4

Parents’ responses to survey.

                                             non-CDE Parent      CDE Parent
                                             N (68)    %         N (69)    %
Initial impressions
When the person did your child’s ‘obs’ did you notice them using the device?
 Yes                                         65        92.6      56        81.2
 No                                          1         1.5       3         4.3
 Can’t remember                              1         1.5       7         10.1
To begin with I thought the person doing my child’s ‘obs’ was on their phone
 Yes                                         27        39.7      23        33.3
 No                                          36        52.9      38        55.1
 Can’t remember                              4         5.9       6         8.7
To begin with I didn’t know what the device was doing when they were using the device
 Completely agree                            10        14.7      11        15.9
 Agree a bit                                 17        25.0      19        27.5
 Neutral                                     8         11.8      10        14.5
 Disagree a bit                              15        22.1      8         11.6
 Completely disagree                         17        25.0      19        27.5
I don’t really understand what the device is doing
 Completely agree                            7         10.3      7         10.1
 Agree a bit                                 6         8.8       8         11.6
 Neutral                                     6         8.8       7         10.1
 Disagree a bit                              9         13.2      8         11.6
 Completely disagree                         39        57.4      37        53.6
Did the person doing your child’s ‘obs’ explain what the device was for?
 Yes                                         35        51.5      41        59.4
 No                                          28        41.2      19        27.5
 Can’t remember                              4         5.9       7         10.1
Technology related
Improvements in the technology are a good thing
 Completely agree                            58        85.3      61        88.4
 Agree a bit                                 5         7.4       5         7.2
 Neutral                                     1         1.5       1         1.4
 Disagree a bit                              1         1.5       -         -
 Completely disagree                         1         1.5       -         -
I don’t trust technology like this
 Completely agree                            2         2.9       -         -
 Agree a bit                                 3         4.4       3         4.3
 Neutral                                     8         11.8      5         7.2
 Disagree a bit                              12        17.6      11        15.9
 Completely disagree                         42        61.8      48        69.6
The person using the device sometimes has problems with it
 Completely agree                            3         4.4       1         1.4
 Agree a bit                                 11        16.2      8         11.6
 Neutral                                     22        32.4      21        30.4
 Disagree a bit                              6         8.8       12        17.4
 Completely disagree                         25        36.8      24        34.8
There are always enough devices available when the person needs to do my child’s ‘obs’
 Completely agree                            35        51.5      32        46.4
 Agree a bit                                 9         13.2      9         13.0
 Neutral                                     20        29.4      23        33.3
 Disagree a bit                              1         1.5       2         2.9
 Completely disagree                         2         2.9       1         1.4
Engagement with health professional
My child doesn’t mind the person using the device to record their ‘obs’
 Completely agree                            55        80.9      55        79.7
 Agree a bit                                 5         7.4       1         1.4
 Neutral                                     5         7.4       11        15.9
 Disagree a bit                              -         -         -         -
 Completely disagree                         1         1.5       -         -
The person doing my child’s ‘obs’ just concentrates on the device then goes away
 Completely agree                            7         10.3      3         4.3
 Agree a bit                                 8         11.8      12        17.4
 Neutral                                     11        16.2      12        17.4
 Disagree a bit                              12        17.6      9         13.0
 Completely disagree                         29        42.6      31        44.9
After they’ve been done, I’d like to be able to see the results of my child’s ‘obs’
 Completely agree                            28        41.2      31        44.9
 Agree a bit                                 10        14.7      10        14.5
 Neutral                                     23        33.8      20        29.0
 Disagree a bit                              4         5.9       1         1.4
 Completely disagree                         2         2.9       5         7.2
Feeling safe
I like the idea that an automated alert will be sent to a senior nurse or doctor if the device detects something of concern
 Completely agree                            62        91.2      65        94.2
 Agree a bit                                 2         2.9       2         2.9
 Neutral                                     1         1.5       -         -
 Disagree a bit                              1         1.5       -         -
 Completely disagree                         1         1.5       -         -
I feel safe knowing that the device aims to provide backup to the doctors and nurses
 Completely agree                            54        79.4      61        88.4
 Agree a bit                                 11        16.2      5         7.2
 Neutral                                     1         1.5       1         1.4
 Disagree a bit                              -         -         -         -
 Completely disagree                         1         1.5       -         -
Based on knowing my child I know when they are getting poorlier
 All of the time                             42        61.8      36        52.2
 Some of the time                            21        30.9      26        37.7
 Not very much of the time                   4         5.9       4         5.8

Overall satisfaction

On a scale of 0 to 100, most parents indicated high levels of satisfaction with the devices (M scores: non-CDE parents 86%, CDE 89%).

Initial impressions

Most parents (non-CDE 93%, CDE 81%) noticed the nurses using a device to do their child’s vital signs. Over half of the parents recalled that the person taking and recording their child’s vital signs had explained the device to them (non-CDE 52%, CDE 59%). One CDE parent explained: Initially, I had no clue what the device was for and did wonder if nurses were on their phones but now I know what they were doing I have been happy for them to use it (CDE P12). Initially, around a third (non-CDE 40%, CDE 33%) thought the device was the professional’s own phone. Similarly, around 40% of parents (non-CDE 40%, CDE 44%) were initially unsure about the purpose of the device, although at the time of completing the survey, most understood its purpose (non-CDE 71%, CDE 65%).

Technology related

Most parents (non-CDE 85%, CDE 88%) agreed that ‘improvements in technology are a good thing’. One parent noted that the “device seems to make ‘obs’ quicker” (CDE, P46) with another noting it was “wonderful for speed and efficiency…and a great observation checklist for the nurses” (CDE P57). Typical responses included parents thinking that the technology “lowers the risk of mistakes being made when using paper” (CDE P3), delivers the “right results we need to know about her” (CDE P9) and noting that if HPs “do obs on paper they can lose paper obs and have to do them again” (non-CDE P11). A non-CDE parent noted that they thought that: …. the device is a good idea, anything that ensures all the necessary people are seeing his obs has got to be a good thing in my opinion! (non-CDE P2). Most parents agreed (non-CDE 65%, CDE 59%) that there were always enough devices available when needed and most (non-CDE 79%, CDE 85%) disagreed with the statement that ‘I do not trust technology like this’.

Engagement with health professionals

Most parents (non-CDE 88% and CDE 81%) agreed that their ‘child did not mind the device being used to record their vital signs’. Most parents (non-CDE 60%, CDE 58%) disagreed that the ‘person doing their child’s vital signs just concentrated on the device and then left’. However, of those who did feel that the person doing their child’s vital signs concentrated on the device and then left, one CDE parent noted that: I feel when obs were taken on paper the nurse was more interactive whereas with the device they seemed to concentrate on that a lot then only let you know things were okay if prompted (CDE P24). However, a non-CDE parent noted: Staff are nothing but interested in the patient when carrying out the obs, constantly talking and making him feel comfortable. And it’s a time when he smiles the most, due to their attention and care (non-CDE P22). Most parents (non-CDE 56%, CDE 59%) agreed that they would have liked to have seen the results of their child’s vital signs. One CDE parent noted that “it’s good to have a trace of my child’s obs that isn’t just paper based” (CDE P43). One non-CDE parent expressed a need for more information: I would like the nurse to talk to me more about my baby’s ’obs’ so that I know what I need to look for on the monitor so I could know what a SAT would mean if it went to a certain number (non-CDE P64).

Feeling safe

Most parents (non-CDE 94%; CDE 97%) liked the idea that the device would trigger an automated alert if it detected something of concern. A non-CDE parent noted: I feel much more at ease knowing my son’s obs are going straight into the system and red flags are reviewed instantly. It’s much more effective in raising concerns of poorly children. Having a complex child that deteriorates quickly and being involved in paper obs and the new technology I feel much more at ease as it’s escalated much quicker (non-CDE P55). Most parents (non-CDE 96%, CDE 96%) ‘felt safe knowing the device aimed to provide a backup’. Very few parents (non-CDE 7%, CDE 4%) expressed distrust in ‘technology like this’; one CDE parent commented that: Obs are a really important part of any child’s recovery, safety and definitely have shown when he’s needed intervention. My experience of the obs done on the ward is that they were dealt with really quickly and efficiently which then lead to transferring to HDU. Not had any bad experiences. Couldn’t of done any more than they did, they kept him safe up to the point of transfer (CDE P13). One parent whose child had experienced a CDE, suggested parental concern should be included as an extra safety measure (although this was already part of the system): My little girl’s obs were not changing prior to becoming unwell so feel parental concern should also be recorded and included in ‘obs’ (CDE P24).

Health professionals: Core findings

Data have been reported from health professionals in two groups: those who documented vital signs (D-VS) using iPods and those responding to vital signs (R-VS) using iPods, iPads or personal devices. Comparisons were made between groups on the continuous data using t tests. The means, standard deviations and significance levels (p values) are reported in Table 5 and the statistically significant t tests are reported in the text. Labels are used to indicate role, group and the HP number from the survey, for example, (Staff Nurse, D-VS, 106).
Table 5

Health professionals’ responses: Comparison between D-VS and R-VS*.

Item | Documenting vital signs (D-VS) M (SD) | Responding to vital signs (R-VS) M (SD) | Group comparison
Overall satisfaction (0 = low, 100 = high)
How confident do you feel about recognising that a child’s health is deteriorating? | 90.41 (10.44) | 80.72 (16.30) | p = .024
What overall score would you assign VitalPAC in terms of your satisfaction? | 78.97 (17.20) | 55.82 (33.21) | p = .012
How satisfied are you with the ability to obtain a charged hand-held device to perform your observations on VitalPAC? | 2.10 (.94) | 2.71 (1.16) | p = .016
Education, training and implementation (1 = high, 5 = low)
How satisfied are you with the education and training you received? | 2.00 (1.02) | 2.59 (1.33) | p = .094
How confident do you feel that your education and training on VitalPAC permit you to respond effectively to acutely ill patients? | 2.01 (.92) | 2.35 (1.46) | p = .353
How satisfied are you with the way VitalPAC is implemented in your area? | 1.86 (.82) | 2.94 (1.25) | p = .003
Recording and monitoring (1 = high, 5 = low)
How satisfied are you that VitalPAC allows you to record accurate data? | 2.08 (.93) | 2.59 (1.06) | p = .040
How confident are you in the way in which VitalPAC monitors your patients for deterioration? | 2.00 (.76) | 2.82 (1.33) | p = .023
How satisfied are you that VitalPAC will reduce the incidence of the omission of recording of vital signs? | 2.30 (.92) | 2.71 (1.16) | p = .096
Completeness of documentation | 1.81 (.79) | 2.59 (1.23) | p = .020
Frequency of documentation | 1.91 (.80) | 2.41 (.94) | p = .019
Recognition, awareness and level of concern (1 = high, 5 = low)
How confident are you that VitalPAC escalation reflects the clinical decision you want to make? | 2.23 (.83) | 3.06 (1.30) | p = .001
How satisfied are you with the way in which VitalPAC supports you in recognising deterioration? | 1.95 (.76) | 2.82 (1.33) | p = .017
How confident are you that VitalPAC reflects your level of concern? | 2.14 (.85) | 2.88 (1.22) | p = .002
How confident are you that VitalPAC helps make you aware of the sickest children in your setting/area of responsibility? | 2.10 (.84) | 2.76 (1.35) | p = .065
Real time oversight of the sickest patients | 1.95 (.82) | 2.71 (1.31) | p = .033
How satisfied are you that VitalPAC allows you to visualise trends in data efficiently? | 2.16 (.90) | 2.75 (1.53) | p = .151
Escalation, decision making and timeliness of response (1 = high, 5 = low)
How confident are you that VitalPAC ensures that patients who require escalation are promptly referred to the appropriate clinician? | 2.20 (.94) | 3.12 (1.22) | p = .001
How confident are you that VitalPAC assists a timely response to signs of deterioration? | 2.01 (.89) | 2.76 (1.52) | p = .061
Usability (1 = high, 5 = low)
Ease of use | 1.65 (.90) | 2.50 (1.27) | p = .001
View of completed observations | 1.95 (1.01) | 2.73 (1.49) | p = .065
Careflow Connect | 2.11 (1.12) | 2.44 (1.67) | p = .463
Availability of devices | 1.93 (.94) | 2.44 (1.37) | p = .164
Speed of data | 1.98 (.98) | 2.50 (1.16) | p = .050
Icons | 1.90 (.80) | 2.63 (1.09) | p = .020
Automated prompts | 1.88 (.88) | 2.69 (1.30) | p = .027
Automated doctor alert system | 1.94 (.96) | 2.81 (1.38) | p = .025

* Note: At the start of the study, Vitals and Connect CareFlow (DETECT e-PEWS) were called VitalPAC. t tests were conducted to compare the D-VS and R-VS groups on the continuous variables. Means, SDs, and p values are reported in the table and statistically significant t tests are reported in the text.
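The fractional degrees of freedom in several comparisons (e.g., t(17.45)) indicate Welch's unequal-variance t test. As a minimal sketch of this group comparison (not the study's analysis code, and using illustrative scores rather than study data):

```python
# Hedged sketch: Welch's t statistic and its fractional degrees of freedom
# for a D-VS vs R-VS comparison on a 0-100 satisfaction scale.
# The sample values below are illustrative, not study data.
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """Return (t, df) for Welch's unequal-variance t test."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / sqrt(va + vb)
    # Welch-Satterthwaite approximation gives non-integer df,
    # matching forms like t(17.45) reported in the text.
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df

d_vs = [90, 85, 95, 88, 92, 80, 91, 87]  # hypothetical D-VS scores
r_vs = [70, 60, 85, 55, 75, 65, 80, 50]  # hypothetical R-VS scores
t, df = welch_t(d_vs, r_vs)
print(f"t({df:.2f}) = {t:.2f}")
```

The p value would then be obtained from the t distribution with those degrees of freedom (e.g., via scipy.stats.t.sf), which the standard library does not provide.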

First, the data are presented for overall satisfaction and then the remaining results are presented under headings linked to the key aspects of Smith’s [21] chain of prevention. The health professionals were asked to rate their confidence and satisfaction in using the DETECT e-PEWS on a scale of 0–100. In both groups, levels of confidence and satisfaction were high. However, those in the D-VS group had significantly higher levels of confidence that they could recognise that a child’s health is deteriorating than those in the R-VS group (t(18.93) = 2.46, p = .024). Similarly, the D-VS group had significantly higher levels of overall satisfaction with DETECT e-PEWS than those in the R-VS group (t(17.20) = 2.82, p = .012). The D-VS group also had significantly higher levels of satisfaction with being able to ‘obtain a device’ (t(138) = -2.44, p = .016). In the open-text responses, health professionals noted that “more nursing station chargers” (Staff Nurse, D-VS, 128) were needed and that sometimes “people can forget to charge them” (Sister, R-VS, 29).

Education, training and implementation

The D-VS group had higher levels of satisfaction ‘for the education and training received’ compared to the R-VS group, although this difference was not statistically significant. In terms of training, there were few critical comments and these related to it being “tricky to take in all the info and retain it for use sometimes” (Staff Nurse, D-VS, 106) or to using the device. One participant noted “no one asked if I needed extra help, I have dyslexia” (Staff Nurse, D-VS, 48); most open responses were positive, such as: always someone there to help if extra advice needed (Allied Health Professional, D-VS, 2). Satisfaction with the ‘implementation of Vitals [DETECT e-PEWS] in their area’ was significantly higher in the D-VS group than the R-VS group (t(17.91) = -3.46, p = .003).

Recording and monitoring

Satisfaction was significantly higher amongst the D-VS group than the R-VS group in terms of accurately recording data (t (140) = -2.08, p = .040) and monitoring patients for deterioration, (t (17.45) = -2.49, p = .023). In relation to expectation that DETECT e-PEWS would ‘reduce the incidence of omission of recording vital signs’ once again the scores of the D-VS group were higher than the R-VS group, although this difference was not statistically significant. However, some open responses suggested that, despite training, staff did not always directly record vital signs in real-time: sometimes you do observations then have to do other cares and forget to record on DETECT [and] it’s still on a piece of paper (Staff Nurse, D-VS, 150). Satisfaction with DETECT e-PEWS was significantly higher for the D-VS group compared to the R-VS group in terms of both ‘completeness of documentation’ (t (17.88) = -2.55, p = .020) and ‘frequency of documentation’ (t (139) = -2.38, p = .019).

Recognition, awareness and level of concern

The D-VS group had significantly higher levels of ‘confidence in the way DETECT e-PEWS supports recognition of deterioration’ than the R-VS group (t(17.45) = -2.64, p = .017). A typical positive open response noted that DETECT e-PEWS: allows you to see trends in previous PEWS recorded and alerts you if there are any concerns if the PEWS are out of normal limits (Staff Nurse, D-VS, 88). The D-VS group also had more confidence in DETECT e-PEWS than the R-VS group in the extent to which the device ‘reflects your level of concern’, and this difference between groups was statistically significant (t(139) = -3.20, p = .002). One member of staff noted that it was “good that it captures parental concern” (Advanced Practitioner, R-VS, 23). Only one participant noted that “there have been occasions where I have been more concerned than reflected on system” (Sister, D-VS, 141). In terms of the extent to which the device helped raise awareness of ‘the sickest children in the setting/area of responsibility’, confidence was once again higher amongst the D-VS group, although this difference only approached statistical significance (p = .065). However, some participants rejected DETECT e-PEWS’ contribution, noting “it doesn’t make a difference. We know who our most unwell patient is without it” (Staff Nurse, D-VS, 52). In terms of ‘real time oversight of the sickest patients’, confidence was once again significantly higher in the D-VS group than the R-VS group (t(17.76) = -2.31, p = .033). However, some staff raised concerns about alerts being triggered when children’s baseline vital signs (e.g., in children with cardiac or complex healthcare needs) are outside the standard limits, for example: some of our complex patients trigger high PEWs even when well and may not be the sickest patient on the ward (Sister, D-VS, 147). There was no significant difference between groups in the extent to which DETECT e-PEWS ‘allows professionals to visualise trends efficiently’, although most staff were satisfied with the trends and liked “being able to see graphs as it shows trends easily” (Advanced Practitioner, R-VS, 16). Respondents differed in opinions about whether DETECT e-PEWS provided better visualisations than Meditech: one participant preferred visualising trends on DETECT e-PEWS as “vital signs and pew are graphically displayed is much better than on Meditech” (Doctor, R-VS, 19) whereas another preferred Meditech as the “screen [is] larger… more data” (Staff Nurse, D-VS, 52).

Escalation, decision making and timeliness of response

The D-VS group had significantly higher ‘confidence that patients requiring escalation of care are promptly referred to the appropriate health professional’ than the R-VS group (t(137) = -3.62, p = .001). The D-VS group had higher ‘confidence that Vitals [e-PEW score app] assists a timely response to signs of deterioration’ than the R-VS group, although this group comparison was not statistically significant. Some concern was raised in the open-text responses, such as being unsure about whether “doctors always receive messages, end up bleeping on phone” (Staff Nurse, D-VS, 94) or “this system does not alert you if you are busy with another patient in the way that a bleep does and this can result in a delay” (Doctor, R-VS, 143). However, positive responses were typified by the visual cues and how it could: support you to demonstrate escalation is required, by showing an upward or downward trend, whichever is relevant (Advanced Practitioner, R-VS, 16). However, it was also noted that staff would “also use my own assessment” (Sister, D-VS, 32).

Usability

Overall, usability was high. The D-VS group had higher levels of satisfaction in terms of ‘ease of use’ (usability) compared to the R-VS group and this difference was statistically significant (t(138) = -3.39, p = .001). Although most open responses about usability were positive, some negative responses reflected concerns such as “having multiple places to record information is confusing and complicated” (Staff Nurse, D-VS, 56) and reviewing vital signs being “no different to Meditech [and] much harder to see on smaller screens such as ipad” (Doctor, R-VS, 139). Concern was also raised about there being “too many devices and means of communicating [in the hospital] already” (Doctor, R-VS, 18). Satisfaction with DETECT e-PEWS also reflected how embedded it was on a particular ward, with staff in some settings seeing it as less suitable for their setting, for example, “designed to be more ward based…not HDU specific” (Staff Nurse, D-VS, 33). The D-VS and R-VS groups reported lower levels of satisfaction in relation to CareFlow Connect [response app] compared to other usability characteristics. Open-text responses showed it was not used consistently across all settings, such as “CareFlow is not commonly used by MDT” (Staff Nurse, D-VS, 98) and not always thought to “make my job easier” (Doctor, R-VS, 139). Although both groups were similar in their satisfaction regarding the availability of devices (iPods or iPads), satisfaction was significantly higher in the D-VS group than the R-VS group in terms of speed of data input (t(137) = 1.97, p = .050), icons (t(17.18) = -2.57, p = .020), automated prompts (t(16.84) = -2.42, p = .027), and the automated doctor alert system (t(16.95) = -2.45, p = .025).

Discussion

This is the first paper describing the clinical utility and acceptability of a hospital-wide, proactive end-to-end deterioration solution (the DETECT surveillance system) with an embedded e-PEWS that included sepsis screening. The DETECT surveillance system aims to proactively screen paediatric patients for early signs of serious deterioration or sepsis, create alerts, and escalate concerns to reduce complications and emergency transfers to critical care following deterioration in hospital. The discussion contextualises the perceptions of the clinical utility [33] and acceptability in line with our stated definitions of these concepts [37]. However, we frame the discussion within the five rings of the chain of prevention (Fig 1) [21] and we note that whilst Smith’s focus is entirely on health professionals, ours encompasses parents. We chose to structure the findings using the chain of prevention as each ‘ring’ is a discrete component important in the prevention of deterioration. Where specific acceptability constructs from the Theoretical Framework of Acceptability v2 [37] (see also Fig 3) are considered, they are signposted in brackets as Construct 1, Construct 2, etc. As seen in other PEWS studies, implementation is challenging and system-wide changes need organisational support [43].

Education

Overall, the clinical utility of the training was good and acceptability was good in that professionals felt satisfied, confident, well prepared, and able to respond effectively to acutely ill children. Although the chain of prevention focuses on education of staff [21], it was interesting to note that the implementation of the system created opportunities for professionals to explain DETECT e-PEWS and the devices used, talk about vital signs, and for parents to ask questions about the technology; thus supporting attainment of ‘intervention coherence’. This serendipitous parent training may prove beneficial, as studies addressing parent involvement in the escalation of care note that some professionals doubt parent capabilities [44] and have concerns about misuse of escalation [45]. Health professionals were supported by their initial and ongoing training and education promoting a sense of ‘self-efficacy’ (Construct 7). Success is known to be supported by factors including education which addresses the value of technology or the intervention [34], makes staff curious [46] and which enhances ‘affective attitudes’ (Construct 1) [37, 46]. Education is key to understanding processes (‘intervention coherence’) (Construct 4), and in the DETECT study both implementation and assimilation were ongoing processes, as recommended as this is known to be core to changing practice [35, 47].

Monitoring

The clinical utility of DETECT e-PEWS in terms of its ease of use in recording of vital signs via the app on the iPods was considered good by most health professionals. Generally, PEWS studies only consider monitoring acceptability from the perspective of health professionals [6, 7, 13]; however, our study also addressed acceptability from the perspectives of parents. Parents trusted DETECT e-PEWS, as they believed that it was efficient, better than ‘just paper’, and made them feel safe and it demonstrated robust acceptability across all aspects of acceptability (Constructs 1–7). However, acceptability could have been improved for some parents if more information (e.g., the results of their child’s vital signs) had been shared with them. It is interesting to note that other escalation of care studies focus attention on information and/or education about how to express concern [48-50], but do not present evidence of educating parents about their child’s vital signs. Acceptability was good overall for health professionals with most preferring the DETECT e-PEWS over paper-based scoring in terms of, for example, its ‘perceived effectiveness’ (Construct 6) (e.g., in reducing workload), its interface, icons, automated prompts and how it supported completeness of documentation. Such factors are key to the successful implementation of digital health interventions [46, 51]. Acceptability was good in terms of ethicality (Construct 3) as the DETECT system fitted with the ‘values, priorities and routines’ [52] particularly of the D-VS group who absorbed any ‘opportunity costs’ (Construct 5) into their everyday practice, and demonstrated clear ‘self-efficacy’(Construct 7) [37] in their confident engagement with the DETECT system.

Recognition

Most health professionals had confidence (better in D-VS than R-VS group) in the clinical utility of DETECT e-PEWS in triggering recognition of potential deterioration. Some health professionals in speciality settings (e.g., cardiac care and high dependency) identified that the pre-defined alert scores were inappropriately sensitive in triggering alerts. Parents’ perception of the automated calculation of scores component of DETECT e-PEWS reflected high acceptability (‘perceived effectiveness’) (Construct 6) as it would ‘keep their child safe’ and because it included parental concern. Overall, acceptability was good (higher in D-VS than R-VS group), with most health professionals seeing benefits (‘ethicality’ and ‘affective attitude’) (Constructs 1 and 3) such as liking the real-time and/or remote visualisation of trends and as seen in other studies [14, 53]. Most health professionals trusted the DETECT system (‘perceived effectiveness’, Construct 6) to better support recognition of deterioration, a core aspect inherent in the chain of prevention [21], further reflecting the ‘ethicality’ (Construct 3) of acceptability. However, as with other studies of digital health implementation, some health professionals were reticent, perhaps seeing the ‘opportunity costs’ (Construct 5) outweighing benefits, as they questioned the need for automation and/or considered the DETECT system a threat to their clinical judgement as seen in other work [53]. Clearly opportunity costs do need better consideration in future implementation work and attention needs to be paid to how perceived threats can be better managed.

The call for help

Although the DETECT system’s clinical utility was generally high in relation to automated alerts there were some concerns that the system might be less effective than ‘bleeping’ (paging) a doctor, as some health professionals were unsure if triggered messages were received. Lack of certainty and concerns about variation in responsiveness have been shown to be barriers [53]. The clinical utility of the DETECT system depends on its accuracy in supporting health professionals across general and speciality settings and avoiding problems such as ‘call fatigue’ [53], which has been reported as a barrier when alerts are triggered inappropriately. Parents had positive ‘affective attitudes’ (Construct 1) [37] toward the DETECT system, knowing that it would trigger an auto alert and ‘call for help’ without relying on a health professional to make the call. Most parents reported that they know when their child is ‘getting poorlier’, but it is unclear from the survey how confident parents felt in voicing these concerns, or how comfortable they felt in responding to the health professional asking them the ‘parental concern’ question as part of doing their vital signs. Other studies have shown that some parents lack confidence in raising and/or escalating concerns [44, 49] or challenging medical staff [22] and that concerns raised by relatives are not always related to deterioration [45, 54]. This occurs despite endorsement of national and international bodies in promoting consumer voices in escalation [45]. Overall, health professionals had positive ‘affective attitudes’ (Construct 1) to the DETECT system reflecting its acceptability. 
However, the response component (CareFlow Connect app) of the DETECT system had yet to reach similar levels of acceptability in some sub-sets of the R-VS group, perhaps reflecting that this group were more aware of ‘opportunity costs’ (Construct 5) [37] as they were less convinced by the net benefit [55] and value [34], which may have led to low levels of social proof (recommendation by peers) [32]. ‘Burden’ (Construct 2) of use can reduce acceptability [37], and the main complaint among some members of the R-VS group arose from the need to carry an additional piece of technology (iPods) with them. Making the apps device-agnostic would help reduce the number of devices being carried and could reduce the burden.

Response

Overall, most health professionals had confidence in the clinical utility of the DETECT system in relation to response, although this was better in the D-VS than the R-VS group. Parents who had experienced a CDE whilst the system was in place reported high acceptability reflecting its ‘perceived effectiveness’ (Construct 6) for their children’s safety. Overall, health professionals had positive ‘affective attitudes’ and positive comments about the response component of the DETECT system, such as access to real-time data [53]. However, the CareFlow Connect app had been in place only a few weeks before the first lockdown of the COVID-19 pandemic. While its use was recommended as part of the DETECT study, this was difficult to mandate because clinical teams had to adapt quickly to work differently to address challenges associated with staffing, cross cover of patients and other challenges. Some scepticism about the response component of DETECT e-PEWS, held by some of the R-VS group, may reflect negative affect in relation to fears of the ‘burden’ (Construct 2) associated with suspected hidden work and concerns about the DETECT system not fitting in with their routines and practices (‘ethicality’, Construct 3), as seen in other e-implementation work [52]. Other studies addressing assimilation of new technologies note that professionalism can be a barrier to smooth implementation. Barriers can be raised as a result of different perspectives held by different professional groups [35]; perceptions of opportunity costs (Construct 5) could be reduced if respected professional champions were given time, support and organisational backing to drive forward implementation.

Limitations

No specific measures of, or cut-offs for, utility or acceptability were used, although the DETECT study did use rating scales with open text boxes as advised [32]. The lack of validated measures for the concepts of interest can be seen as a limitation. Various factors limit the samples of parents and health professionals and thus potentially limit the validity and robustness of the findings. One key limitation is that a non-probability sampling technique was used; the limitations associated with convenience sampling include sampling and selection bias, limits to the generalisability of findings and less granularity of data. Further, the sample size for parents and professionals is relatively small compared to the population of all parents whose children were receiving care and all professionals using the DETECT system. Although the HP population does include diverse representation across professions and grades, the findings are weighted much more heavily towards professionals in the D-VS group than the R-VS group. Although two settings (Cardiac and HDU) were less well represented, their staff would have had similar access to the DETECT system as other areas; this lower representation may be linked to COVID-19 measures, which created more limited access to these settings for data collection. The sample of parents is unlikely to be as diverse as the whole population of eligible parents; a more targeted matrix sampling approach might be considered in future. Additionally, recruitment of parents occurred during the COVID-19 pandemic (fewer admissions) and we were not able to recruit consistently across all the months that the study was open (e.g., due to staff shortages and reduced access to wards). Thus, the population of non-CDE children may not be representative of the total hospital population pre-pandemic (e.g., elective surgeries were cancelled and only acutely unwell children remained in, or were admitted to, hospital).
However, our pre-pandemic baseline data (not reported in this paper) suggest that our CDE population is representative, as pre-pandemic critical deterioration occurred most commonly in children who were acutely unwell or required emergency surgical care. The challenge of implementing the response component (CareFlow Connect app) of the DETECT system within a hospital under extraordinary pressure from the impact of COVID-19 limits what can be stated about this aspect of the system. These limitations mean that the generalisability of the results is limited.

Conclusion

Overall clinical utility and acceptability were positive, although there was evidence that liking/satisfaction dropped over time; as with most implementation strategies, assimilation is an ongoing process [35] requiring effort to sustain both motivation and a sense of positivity across the TFA’s constructs [37]. However, acceptability was evident across all seven constructs. Considering the multifaceted nature of the intervention and the complexity of the implementation across a whole hospital as part of a research study, rather than an organisationally driven programme, it is evident that the DETECT system has had success across two key groups of stakeholders: parents and health professionals. As the DETECT system is handed over to the organisation for ongoing embedding, the findings from the survey when considered in relation to both the chain of prevention [21] and the TFA, provide clear indications as to where the links in the chain need strengthening and where effort is required to enhance acceptability.

Checklist for Reporting Of Survey Studies (CROSS).

(DOCX) Click here for additional data file. 24 Nov 2021
PONE-D-21-34519
Clinical utility and acceptability of a whole-hospital, pro-active electronic paediatric early warning system (the DETECT study): a prospective e-survey of children, parents and health professionals.
PLOS ONE Dear Dr. Carter, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Jan 08 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Jagan Kumar Baskaradoss Academic Editor PLOS ONE Journal Requirements: When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf 2. 
You indicated that you had ethical approval for your study. In your Methods section, please ensure you have also stated whether you obtained consent from parents or guardians of the minors included in the study or whether the research ethics committee or IRB specifically waived the need for their consent.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: No

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: No

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g. participant privacy or use of data from a third party) those must be specified.

Reviewer #1: No
Reviewer #2: No

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous.
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This paper reports on surveys exploring clinical utility and acceptability of an electronic paediatric early warning system from the perspective of parents, children and health professionals as a component of a larger implementation study. The study findings will be of interest after the manuscript is further developed. Following and citing a reporting checklist such as CROSS will strengthen the manuscript: https://www.equator-network.org/?post_type=eq_guidelines&eq_guidelines_study_design=0&eq_guidelines_clinical_specialty=0&eq_guidelines_report_section=0&s=survey&btn_submit=Search+Reporting+Guidelines

P4: Training for health professionals is described, but not how patients and families were prepared, supported or made aware of the DETECT system. The e-handover function is not explained; if it was not used, is it necessary to describe? Final para: need to explain on-call teams and Acute Care Team for an international audience. Explain the significance of the app on own device. The concepts of clinical utility and acceptability need to be explained, and in the survey development (p6) then related to measures and items that address the concepts of interest.

P5: Design is a survey with quantitative and qualitative components. Following a reporting checklist will assist in addressing many of the comments listed.
Participants and setting: need to explain critical deterioration event. The groups of participants are not clear: for Group 1, perhaps an error of children rather than parents, and Group 2 should be parents of children also? Had the children experienced a CDE? Did parents provide consent for the child's participation? Was 7 years the minimum age to participate? Was there a rationale for the minimum age? Was the sample size estimated? Was recruitment purposive for ethnocultural diversity? Were interpreter services available for participants who did not understand English? Health professional recruitment needs a bit more explanation: was email sent to all eligible first and followed up by face-to-face requests? How was the possibility of coercion managed? Who is included? Is it nurses and doctors and allied health staff? I can see this reported in results but it needs to be defined in methods.

P6: The surveys were developed for the purpose of this study and described as non-validated. There is no description of the measures for clinical utility and acceptability despite these concepts being previously reported by others. There is some description about consulting with parents and children in designing the survey. There is no description about the development of the health professionals' survey. Was there any consultation/involvement with health professionals? Was there any content validity testing? How long did the survey take to complete? Did the researcher assist families to complete?

P7 Ethics: Not described is how participants received information about the study results.

Results: These are participant characteristics not demographics; need to change the heading. There is no description of the denominator to understand the response rate. How many potential participants received the survey, or how many were requested to complete the survey and declined? How many completed electronically and how many paper-based?
Table 1: it will be helpful to understand the patients in the high dependency unit; are these patients who have been in PICU? For an international audience the terminology of Assistant Nurse Practitioner, Assistant Practitioner, Advanced Clinical Practitioner and Acute Care Team needs to be explained.

Table 2: What are the professions of allied health: Physiotherapist, Occupational Therapist, Pharmacist?

P10: Core findings will be improved by presenting the positive findings first. Overall satisfaction and competence: not clear what the competence relates to; this scale is not described on p6.

Qualitative findings: it is not reported how many parents provided comments, nor is it evident whether the quotes are selected from a few or many parents' comments.

P12: Findings from health professionals' surveys: need to report the actual findings and statistics in text, and it will improve readability to report what was found first and then detail differences. The reporting using headings linked to Smith's chain of prevention should be described if this was planned. This section of the manuscript (pp12-14) needs the most work as it is hard to follow. In the last few lines of p12 and on p15 there are statistics provided but these are not clearly presented. No satisfaction scale is described in survey development but one is reported here. Were there differences in responses based on profession or professional experience?

P15: "A similar pattern…": this needs to be reworded to explain the finding first.

Discussion: The concepts of clinical utility and acceptability are raised here but there needs to be greater clarity informing the survey.
This section is insufficiently developed and is difficult to follow. The discussion should more clearly identify how this study adds to, confirms or refutes others' research in the area and include recommendations.

Limitations: The lack of measures for the concepts of interest is a major limitation. The small sample of health professionals is acknowledged but the sample of 137 parents and sample of 8 children is not acknowledged. Generalisability should be addressed.

Conclusion: This should be stand-alone and highlight key findings, i.e. not refer to a figure.

Reviewer #2: General Comments: This study aimed to examine how parents, children and health professionals view and engage with the DETECT electronic Paediatric Early Warning System (PEWS) apps, with a particular focus on its clinical utility and its acceptability. Overall, the study is well-written and presents interesting and novel findings. I have some major and some minor comments, which need to be addressed before proceeding further.

Major Comments: The study employed a non-probability sampling technique for selection of samples. This method has several limitations and could limit the validity of the study results. The authors have not discussed this issue. Recruitment of participants was done during the ongoing pandemic. This could influence the characteristics of patients included in the study; they may not be representative of the patients attending the hospital prior to the pandemic. This has to be discussed. Was any power analysis done? How did the authors decide on the sample size requirement? I believe the category of children is severely underpowered to derive any meaningful conclusions. I suggest the authors add more children to the sample or eliminate this group from the analysis. Was any piloting of the questionnaire performed?
"parents/carers were engaged with via two face-to-face workshop groups (n=8) and by email (n=3)": these are two different techniques, which can influence the validity of the results. The analysis is incomplete. I recommend that the authors take the help of an experienced statistician to enhance the data analysis.

Minor Comments:
• The referencing style is not in accordance with the journal's style. Please review the author instructions or refer to any recent paper published in the journal.
• Abstract: open and closed question? Clarify.
• Materials and Methods:
o Prospective or cross-sectional?
o Young people (aged 7-18 years old)? Adolescents?
o Group 1 (children whose children had not experienced a critical deterioration event during admission…)? Revise.
o "Although consent is not required for NHS professionals involved in evaluating an intervention, consent from the health professionals was gained via a 'tick box' on the survey." Incorrect statement. Consent is implied for procedures involving diagnosis or treatment within the hospital facilities. This was a research project where a new instrument was being investigated. Any research involving human subjects requires ethical approval (Declaration of Helsinki).
o Analysis: inputted?
o Mean and SD are descriptive statistics. How can these be used to compare distributions? List any statistical test used.

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #1: Yes: Associate Professor Fenella J Gill
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

13 Feb 2022

Please also see the Word document, which may be easier to read (as it includes additional text in red font).

Rebuttal to reviewers

Thank you both for giving up your time to review our paper and to provide such constructive comments. We have been conscientious in our response to each of your comments and we now believe that this version is stronger and more robust. All new text is presented in red.

Reviewer #1: Thank you for your helpful and insightful comments on our paper. We're grateful for the time you've taken to help us improve the manuscript. We also appreciate the time it will take to consider our revisions.
This paper reports on surveys exploring clinical utility and acceptability of an electronic paediatric early warning system from the perspective of parents, children and health professionals as a component of a larger implementation study. The study findings will be of interest after the manuscript is further developed. Following and citing a reporting checklist such as CROSS will strengthen the manuscript: https://www.equator-network.org/?post_type=eq_guidelines&eq_guidelines_study_design=0&eq_guidelines_clinical_specialty=0&eq_guidelines_report_section=0&s=survey&btn_submit=Search+Reporting+Guidelines

Thank you for this comment; we have used the CROSS checklist (Sharma et al., 2021) as suggested. (p5) The Consensus-based checklist for reporting survey studies (CROSS) [38] has been used to ensure high quality reporting.

P4: Training for health professionals is described, but not how patients and families were prepared and supported or made aware of the DETECT system.

(p4) Children and parents were made aware of the implementation of the technology through the use of dedicated posters in all public areas and an explanation provided at the child's admission.

e-handover function: not explained, or if used; if not, is it necessary to describe?

Since we do not mention e-handover within the results or discussion we have removed mention of it.

Final para: need to explain on-call teams and Acute Care Team for an international audience.

(pp4-5) The clinical teams had a minimum of one iPod per team; each member of the on-call team (medical team that provides out-of-hours cross cover for inpatients and new admissions) and those within the Acute Care Team (nurse-led Rapid Response Team) each had an iPod or iPad (some used their personal mobile phone).
Explain the significance of the app on own device.

(p5) Additionally, there was an agreement that staff could have the Careflow Vitals and Connect apps loaded to their personal device under the 'Bring Your Own Device' scheme, which meant that some staff did not have to carry an additional device and it was more convenient for them (Vitals is device specific (Apple); Connect is device agnostic and works on all personal devices).

The concepts clinical utility and acceptability need to be explained.

The definitions of clinical utility and acceptability previously presented in the Discussion section have been moved forward to the Introduction section, see below.

(p5) Defining clinical utility and acceptability. It is important to define the concepts of clinical utility and acceptability, as it is evident in the literature that there is ambiguity and overlap in what is encompassed by the terms, and there is no consensus on definitions [32]. Within this discussion, clinical utility is defined in its narrowest sense: does the technology do what it is supposed to do, and does it perform its designated function [33]? However, the complexity inherent in implementation, adoption and assimilation of technology in healthcare systems [34, 35] requires the definition of acceptability to acknowledge its multifaceted nature, and to be more encompassing [32, 34, 36, 37]. The Theoretical Framework of Acceptability (TFA) (v2) [37] is composed of seven component constructs (Figure 3): 'affective attitude', 'burden', 'ethicality', 'intervention coherence', 'opportunity costs', 'perceived effectiveness' and 'self-efficacy'. The TFA proposes that acceptability is a "multi-faceted construct that reflects the extent to which people delivering or receiving a healthcare intervention consider it to be appropriate, based on anticipated or experienced cognitive and emotional responses to the intervention" [37].
And in the survey development (p6) then relate to measures and items that address the concepts of interest.

Note: the surveys were designed to ask broad questions of interest rather than specifically address the seven component constructs of the TFA. We have made this clearer in the parent survey section.

(p7) … (4) About the Device (n=9 questions asking about satisfaction with explanations about the device and how it was used, trust in technology, feeling safe and secure with the device (relating to our definitions of utility [33] and acceptability [37]).

And in the HP survey section we used the domains from Smith's Chain of Prevention to structure the survey, as this structure was familiar to staff, although the concepts of the TFA were embedded:

(p7) The health professionals' survey consisted of eight sections, five of which (education, monitoring, recognition, escalation, and response) related to the chain of prevention [21]: Introduction; Deciding to Take Part; Education and Training (n=2 questions); Monitoring and Recognition (n=5 questions); Escalation and Response (n=3 questions); System Features (n=4 questions); Concluding Thoughts (n=3 questions); all but the first section related to our definitions of utility [33] and acceptability [37].

P5: Design is a survey with quantitative and qualitative components. Following a reporting checklist will assist in addressing many of the comments listed.

Thank you for this suggestion; we have adopted the CROSS checklist (Sharma et al., 2021).

Participants and setting: need to explain critical deterioration event.

We have added in the definition of a CDE as follows: (p6) A CDE was defined as a deterioration where the patient is critically unwell, which culminates in an emergency transfer to the high dependency unit or the intensive care unit, or an unexpected death.

The groups of participants are not clear: for Group 1, perhaps an error of children rather than parents, and Group 2 should be parents of children also?
(p6) This error has been corrected.

Had the children experienced a CDE? (p6) Yes, children in Group 2 had experienced a CDE.

Did parents provide consent for the child's participation? Was 7 years the minimum age to participate? Was there a rationale for the minimum age?

Note, in response to your questions: parents did provide consent for their child's participation, 7 years was the minimum age for the child's active participation (e.g., completing the survey) in the study, and 7 years was deemed a sufficient age to give assent and to be able to complete the survey. However, based on Reviewer 2's comments about the inadequacy of the children's dataset, we have removed reference to children's data from the paper.

Was the sample size estimated?

As the surveys were not validated and only intended to generate descriptive data, we did not do sample size calculations. Our intention was to gain initial perspectives upon which to base future surveys. Additionally, the survey aimed to provide a baseline for the in-depth qualitative interviews we did as part of the study.

Was recruitment purposive for ethnocultural diversity? Were interpreter services available for participants who did not understand English?

Recruitment of parents was opportunistic and not purposive for diversity. Interpreter services were not available. (p7) The surveys were only available in English and no dedicated translation was available.

Health professional recruitment needs a bit more explanation: was email sent to all eligible first and followed up by face-to-face requests? How was the possibility of coercion managed?

(p6) Recruitment of health professionals (doctors, nurses and allied health professionals, n=151) with experience of using the system was either by email or opportunistically face-to-face on the wards by the researcher asking staff if they were interested.
Health professionals were also given detailed information via information sheets, given sufficient time to consider if they wanted to participate and the opportunity to ask questions (face-to-face or remotely). The possibility of coercion was avoided by making it clear that participation was voluntary and leaving the device with the survey link on it with the potential participant for about fifteen minutes, allowing the participant to complete the survey or not, as preferred.

Who is included? Is it nurses and doctors and allied health staff? I can see this reported in results but it needs to be defined in methods.

This has been corrected (see also above): (p6) Recruitment of health professionals (doctors, nurses and allied health professionals, n=151) with experience of using the system …

P6: The surveys were developed for the purpose of this study and described as non-validated. There is no description of the measures for clinical utility and acceptability despite these concepts being previously reported by others.

As noted previously, we have moved the description of the definitions of clinical utility and acceptability into the Introduction section (p5).

There is some description about consulting with parents and children in designing the survey. There is no description about the development of the health professionals' survey. Was there any consultation/involvement with health professionals?

We did consult with health professionals. We have added this in: (p7) Consultation with health professionals (paediatric doctors and nurses who were part of our wider steering group) helped to develop the structure, content and readability of the health professional survey.

Was there any content validity testing?
We did not do formal content validity testing although we did do some pretesting, see below: (p7) Pretesting/piloting of our proposed final versions of the surveys was carried out with parents (see engagement in previous section) and health professionals (nurses and doctors, n=5) on one occasion; no revisions were identified as being required.

How long did the survey take to complete? (p7) Three versions, each taking about 3-5 minutes to complete, were created, one for health professionals…

Did the researcher assist families to complete? We have addressed this comment in text. (p7) Typically, the researcher did not assist parents to complete the survey, although support was available, as needed.

P7 Ethics: Not described is how participants received information about the study results.

We have addressed this in text. (p8) Direct feedback to individual survey participants was not possible (due to anonymity of survey responses) but findings will be shared with the broad population of parents through the Parent and Carer's Research Forum, hospital newsletter, social media etc., and with health professionals via Grand Rounds and other meetings.

Results: These are participant characteristics not demographics; need to change the heading. (p8) Heading changed as requested.

There is no description of the denominator to understand the response rate. How many potential participants received the survey or how many were requested to complete the survey and declined?

We have added this information in: (p8) Of the parents approached, there was a 9-10% decline rate (typical reasons for declining being focused on the child). In total, 137 parents completed the survey… (p10) In total 151 health professionals participated in the survey (decline rate not calculated as staff were approached by email as well as directly, but the typical reason for declining was being 'too busy')…

How many completed electronically and how many paper-based? We have added this information in.
(p8) Parents: All but three surveys (n=134) were completed electronically. (p10) HPs: All completed electronically.

Table 1: it will be helpful to understand the patients in the high dependency unit; are these patients who have been in PICU?

(p9) Table 1 has been amended with a note to describe HDU occupancy. * The high dependency unit (HDU) provides level 2 critical care [39]. The HDU patient population includes patients who have deteriorated on the ward, high acuity patients post-operatively, as well as some step-downs from PICU with higher care needs than can be delivered on a ward.

For an international audience the terminology of Assistant Nurse Practitioner, Assistant Practitioner, Advanced Clinical Practitioner and Acute Care Team needs to be explained.

(p10) We have added this information as a note into Table 2: * Allied Health Professional is a term that includes physiotherapists and occupational therapists. We did not collect data on the specific profession of AHPs. ** Assistant Practitioners are not registered practitioners but they support care and have a high level of skill through their experience and training [40]. *** Advanced Clinical Practitioners are nurses or AHPs trained to Masters level on an approved ACP course who deliver clinical caseload management autonomously to acute and complex patient groups [41]. **** Acute Care Team is the nurse-led Rapid Response Team in the study hospital.

Table 2: What are the professions of allied health? (p10) See comment above; we have added this information into the note at the bottom of Table 2.

P10: Core findings will be improved by presenting the positive findings first. Where this has been possible to do/made sense in terms of the 'narrative', we have presented the positive findings first in each sub-theme.

Overall satisfaction and competence: not clear what the competence relates to; this scale is not described on p6. This was an error on our part; reference to competence removed.
Qualitative findings: it is not reported how many parents provided comments.

We have addressed this for both parents and HPs. (p8) Around half the parents (n=27 CDE and n=38 non-CDE) provided open text responses. (p10) Forty-four percent (n=66) of HPs provided at least one open text response, with just under half of these (n=25) providing at least three open text responses; some provided up to nine.

Nor is it evident whether the quotes are selected from a few or many parents' comments.

Within the limited word limit we had drawn quotations from across the CDE and non-CDE parents and across the HPs. This is now more apparent as we have now added labels to all quotations; see an example from the parents' findings: (p12) One parent noted that the "device seems to make 'obs' quicker" (CDE, P46) with another noting it was "wonderful for speed and efficiency…and a great observation checklist for the nurses" (CDE P57). Typical responses included parents thinking that the technology "lowers the risk of mistakes being made when using paper" (CDE P3), delivers the "right results we need to know about her" (CDE P9) and noting that if HPs "do obs on paper they can lose paper obs and have to do them again" (non-CDE P11).

P12: Findings from health professionals' surveys: need to report the actual findings and statistics in text, and it will improve readability to report what was found first then detail differences.

Thank you for your comment here. We have worked with statisticians to enhance the analysis and presentation of the statistics. However, we have retained the view that our intention has primarily been to present descriptive statistics. Arising from the discussions we have removed the comparisons in the parent data but retained these within the health professional data.
We have added in more detail about the statistical tests used in the Materials and Methods section (sub-section Analysis): (p8) Descriptive statistics, mean (M) and standard deviation (SD), are presented to describe variables measured on a continuous scale; categorical variables are reported using counts and percentages. For the health professional data, chi-squared and Fisher's exact tests were used to assess between-group differences when the outcome of interest was categorical, and the independent t-test was used when the outcome was continuous.

We have added in t tests and p values in the reporting of the HP data, as appropriate, within the main text and provided an explanation of methods at the start of reporting the core findings of the HPs. (p13) Comparisons were made between groups on the continuous data using t tests. The means, standard deviations and significance levels (p values) are reported in Table 5 and the statistically significant t tests are reported in the text.

An example of the additions to the main text is provided below. (p13) However, those in the D-VS group had significantly higher levels of confidence that they could recognise that a child's health is deteriorating than those in the R-VS group (t(18.93) = 2.46, p = .024). Similarly, the D-VS group had significantly higher levels of overall satisfaction with DETECT e-PEWS than those in the R-VS group (t(17.20) = 2.82, p = .012). The D-VS group also had significantly higher levels of satisfaction with being able to 'obtain a device' (t(138) = -2.44, p = .016).

The reporting using headings linked to Smith's chain of prevention should be described if this was planned.

We had done this (although perhaps easy to miss) and we direct you to the sentence below. (p13) First, the data are presented for overall satisfaction and then the remaining results are presented under headings linked to the key aspects of Smith's [21] chain of prevention.
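For readers unfamiliar with the tests named above, the between-group comparisons can be sketched as follows. This is a minimal illustration with fabricated numbers, not the study's data or analysis code; the group labels and values are hypothetical.

```python
# Illustrative sketch of the statistical tests described in the Analysis
# sub-section, using made-up data. Group names and values are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical Likert-style confidence ratings for two groups of
# health professionals (e.g., D-VS vs R-VS device users).
d_vs = rng.normal(loc=4.2, scale=0.6, size=30)
r_vs = rng.normal(loc=3.8, scale=0.7, size=25)

# Descriptive statistics: mean (M) and standard deviation (SD).
m_d, sd_d = d_vs.mean(), d_vs.std(ddof=1)

# Independent-samples t-test for a continuous outcome. Welch's variant
# (equal_var=False) yields fractional degrees of freedom, which is why
# values such as t(18.93) can appear in reported results.
t_stat, p_val = stats.ttest_ind(d_vs, r_vs, equal_var=False)

# Chi-squared test for a categorical outcome, with Fisher's exact test
# as the usual alternative when expected cell counts are small.
table = np.array([[20, 10],   # hypothetical 2x2 contingency counts
                  [12, 13]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)

print(f"t = {t_stat:.2f}, p = {p_val:.3f}; chi-squared df = {dof}")
```

The choice between the chi-squared and Fisher's exact test hinges on the expected cell counts, and Welch's t-test is preferred when group variances may differ, which matches the unequal group sizes typical of opportunistic survey samples.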
This section of the manuscript (pp12-14) needs the most work as it is hard to follow.

We have revised the discussion, and with improved signposting and the additional text that we have included throughout the discussion, this section is now easier to follow. We have presented our work in an important and interesting way. The integrated use of both the Chain of Prevention and the Theoretical Framework of Acceptability is innovative and unlike other published PEWS implementation papers. We acknowledge that we have used two frameworks to help structure the discussion and this may have been difficult to follow. However, we have now included a new explanatory paragraph and also tried to ensure that the acceptability constructs are better signposted in the text. (p14) The discussion contextualises the perceptions of the clinical utility [33] and acceptability in line with our stated definitions of these concepts [37]. However, we frame the discussion within the five rings of the chain of prevention (Figure 1) [21] and we note that whilst Smith's focus is entirely on health professionals, ours encompasses parents. We chose to structure the findings using the chain of prevention as each 'ring' is a discrete component important in the prevention of deterioration. When specifically considered, the acceptability constructs from the Theoretical Framework of Acceptability v2 [37] (see also Figure 3) are signposted in brackets as Construct 1, Construct 2, etc.

The last few lines of p12 and on p15 there are statistics provided but these are not clearly presented.

We have revised our presentation of statistics both in text and within the tables. Additionally, we have removed some comparison data (e.g. the influence of length of time using the device) as we felt it did not crucially add to the findings. Note that with the changes made to the paper, these no longer appear on the pages referred to above.
For example, (p13) However, those in the D-VS group had significantly higher levels of confidence that they could recognise that a child's health is deteriorating than those in the R-VS group (t(18.93) = 2.46, p = .024).

No satisfaction scale described in survey development but reported here.

We did not use a specific satisfaction scale, although specific questions asked participants to indicate their level of satisfaction on a Likert scale in response to an item.

Were there differences in responses based on profession or professional experience?

We did not analyse the data at this level of granularity as we did not believe our sample was large enough.

P15: "A similar pattern…": this needs to be reworded to explain the finding first.

The wording 'similar pattern' has been removed and the finding reworded, as follows: (p15) The D-VS group had higher 'confidence that Vitals [e-PEW score app] assists a timely response to signs of deterioration' than the R-VS group, although the group comparison only approached statistical significance.

Discussion: The concepts of clinical utility and acceptability are raised here but there needs to be greater clarity informing the survey. This section is insufficiently developed and is difficult to follow.

We have moved some of the description of clinical utility and acceptability to the Introduction section as advised earlier in your comments.

The discussion should more clearly identify how this study adds to or confirms or refutes others' research in the area and include recommendations.

We believe that we have already done this in places within the paper, as can be seen from the examples below.
• (p17) Generally, PEWS studies only consider monitoring acceptability from the perspective of health professionals [6, 7, 13]; however, our study also addressed acceptability from the perspectives of parents.
• (p17) It is interesting to note that other escalation of care studies focus attention on information and/or education about how to express concern [46-48], but do not present evidence of educating parents about their child’s vital signs. However, we have also included some more statements of where we have added to the body of research. • (p18) Clearly opportunity costs do need better consideration in future implementation work and attention needs to be paid to how perceived threats can be better managed. • (p18) The requirement for apps to be device agnostic would help reduce the number of devices being carried and could reduce the burden. • (p19) …………….perceptions of opportunity costs (Construct 5) could be reduced if respected professional champions were given time, support and organisational backing to drive forward implementation. Limitations The lack of measures for the concepts of interest is a major limitation We have now added this to this section. (p19) The lack of validated measures for the concepts of interest can be seen to be a limitation. The small sample of health professionals is acknowledged but the sample of 137 parents and sample of 8 children is not acknowledged. This has now been addressed. (p19) The sample size for parents and professionals is relatively small compared to the population of all parents whose children were receiving care The sample of parents is not likely to be as diverse as the whole population of eligible parents; a more targeted matrix sampling approach might be considered in future. Generalisability should be addressed. This has now been addressed. (p19) These limitations mean that the generalisability of the results is limited. Conclusion This should be stand alone and highlight key findings ie not refer to figure. We have removed the reference to Figure 3. 
Reviewer #2:

General comments: This study aimed to examine how parents, children and health professionals view and engage with the DETECT electronic paediatric early warning system (PEWS) apps, with a particular focus on its clinical utility and its acceptability. Overall, the study is well written and presents interesting and novel findings. I have some major and some minor comments, which need to be addressed before proceeding further.

Response: Thank you for your positive comments here and your helpful comments on how to improve the work. We're grateful for the time you've taken to help us improve the manuscript. We also appreciate the time it will take to consider our revisions.

Major comments:

Comment: The study employed a non-probability sampling technique for selection of samples. This method has several limitations and could limit the validity of the study results. The authors have not discussed this issue.

Response: We have now added this to the Limitations section: (p19) "Various factors limit the samples of parents and health professionals and thus potentially limit the validity and robustness of the findings. One key limitation is that a non-probability sampling technique was used; the limitations associated with convenience sampling include sampling and selection bias, limits to the generalisability of findings and less granularity of data. Further, the sample size for parents and professionals is relatively small compared to the population of all parents whose children were receiving care and all professionals using the DETECT system."

Comment: Recruitment of participants was done during the ongoing pandemic. This could influence the characteristics of patients included in the study; they may not be representative of the patients attending the hospital prior to the pandemic.

Response: We have addressed this in the Limitations section: (p19) "Additionally, recruitment of parents occurred during the Covid-19 pandemic (fewer admissions) and we were not able to recruit consistently across all months that the study was open (due to staff shortages and reduced access to wards). Thus, the population of non-CDE children may not be representative of the total hospital population pre-pandemic (e.g., elective surgeries were cancelled, so only acutely unwell children remained or were admitted to hospital). However, our pre-pandemic baseline data (not reported in this paper) suggest that our CDE population is representative, as pre-pandemic critical deterioration occurred most commonly in children who were acutely unwell or required emergency surgical care."

Comment: Was any power analysis done? How did the authors decide on the sample size requirement?

Response: We did not do a power calculation for the sample size. As the surveys were not validated and were only intended to generate descriptive data, we did not do sample size calculations.

Comment: I believe the category of children is severely underpowered to derive any meaningful conclusions. I suggest the authors add more children to the sample or eliminate this group from analysis.

Response: Based on your suggestion, we have eliminated this group from our analysis.

Comment: Was any piloting of the questionnaire performed?

Response: Yes. (p7) "Pretesting/piloting of our proposed final versions of the surveys was carried out on one occasion with parents (see engagement in previous section, n = 11) and health professionals (nurses and doctors, n = 5); no revisions were identified as being required."

Comment: The analysis is incomplete. I recommend that the authors take the help of an experienced statistician to enhance the data analysis.

Response: Thank you for your comment here. We have worked with statisticians to enhance the analysis and presentation of the statistics. However, we have retained the view that our intention has primarily been to present descriptive statistics. Arising from the discussions, we have removed the comparisons in the parent data but retained these within the health professional data. We have added more detail about the statistical tests used in the Materials and Methods section (sub-section Analysis): (p8) "Descriptive statistics, mean (M) and standard deviation (SD), are presented to describe variables measured on a continuous scale; categorical variables are reported using counts and percentages. For the health professional data, Chi-squared and Fisher's exact tests were used to assess between-group differences when the outcome of interest was categorical, and the independent t-test was used when the outcome was continuous." We have added t tests and p values in the reporting of the health professional data, as appropriate, within the main text and provided an explanation of methods at the start of the reporting of their core findings: (p13) "Comparisons were made between groups on the continuous data using t tests. The means, standard deviations and significance levels (p values) are reported in Table 5 and the statistically significant t tests are reported in the text." An example of the additions to the main text is provided below: (p13) "However, those in the D-VS group had significantly higher levels of confidence that they could recognise that a child's health is deteriorating than those in the R-VS group (t(18,93) = 2.46, p = .024). Similarly, the D-VS group had significantly higher levels of overall satisfaction with DETECT e-PEWS than those in the R-VS group (t(17,20) = 2.82, p = .012). The D-VS group also had significantly higher levels of satisfaction with being able to 'obtain a device' (t(138) = -2.44, p = .016)."

Minor comments:

Comment: The referencing style is not in accordance with the journal's style. Please review the author instructions or refer to any recent paper published in the journal.

Response: Referencing style updated to PLoS (as per EndNote).

Comment: Abstract; open and closed question? Clarify.

Response: (p2) "closed (tick box or sliding scale) and open (text based) question"

Materials and Methods:

Comment: Prospective or cross-sectional?

Response: (p6) Prospective.

Comment: Young people (aged 7-18 years old)? Adolescents?

Response: Removed, now that children's/young people's/adolescents' data are no longer reported.

Comment: "Group 1 (children whose children had not experienced a critical deterioration event during admission…)"? Revise.

Response: This has now been revised: (p6) "Group 1 (parents whose children had……"

Comment: "Although consent is not required for NHS professionals involved in evaluating an intervention, consent from the health professionals was gained via a 'tick box' on the survey." Incorrect statement. Consent is implied for procedures involving diagnosis or treatments within the hospital facilities. This was a research project where a new instrument was being investigated. Any research involving human subjects requires ethical approval (Declaration of Helsinki).

Response: Thank you for this comment. We have deleted the sentence and integrated health professionals' consent into the same sentence as parental consent. This now reads as follows: (p6) "Consent by parents and health professionals for participation in the survey was gained via a 'tick box' at the start of the survey."

Comment: Analysis: inputted?

Response: 'Inputted' deleted.

Comment: Mean and SD are descriptive statistics. How can this be used to compare distributions? List any statistical tests used.

Response: The sentence has been rephrased: (p8) "Descriptive statistics, mean (M) and standard deviation (SD), are presented to describe variables measured on a continuous scale; categorical variables are reported using counts and percentages. For the health professional data, Chi-squared and Fisher's exact tests were used to assess between-group differences when the outcome of interest was categorical, and the independent t-test was used when the outcome was continuous."

Submitted filename: Response to reviewers 8th Jan 2022.docx

12 Jun 2022
PONE-D-21-34519R1
Clinical utility and acceptability of a whole-hospital, pro-active electronic paediatric early warning system (the DETECT study): a prospective e-survey of parents and health professionals.
PLOS ONE

Dear Dr. Carter,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please see one minor remaining comment from one of the reviewers below.

Please submit your revised manuscript by Jul 25 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:

- If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.
- A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
- A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
- An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Hanna Landenmark
Staff Editor
PLOS ONE

Journal Requirements: Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript.
If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

Reviewers' comments:

Reviewer's Responses to Questions — Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: (No Response)

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: (No Response)

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: (No Response)

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: (No Response)

6. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you for addressing the comments. Minor edit required to conclusion as the children participants have been removed so need to be removed from conclusion: "it is evident that the DETECT system has had success across three key groups of stakeholders: children, parents, and health professionals"

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Fenella J Gill
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
13 Jun 2022

Rebuttal to reviewer

Thank you for giving up your time to review our paper and for noting the one minor change we had missed in our previous revision. The new text is presented in red.

Comment (Reviewer #1): Minor edit required to conclusion as the children participants have been removed so need to be removed from conclusion: "it is evident that the DETECT system has had success across three key groups of stakeholders: children, parents, and health professionals"

Response: Thank you for this comment; we have removed the reference to three stakeholder groups and the reference to children. The text now reads: (p25) "… it is evident that the DETECT system has had success across two key groups of stakeholders: parents and health professionals."

Submitted filename: Rebuttal letter 13th June 2022.docx

12 Aug 2022

Clinical utility and acceptability of a whole-hospital, pro-active electronic paediatric early warning system (the DETECT study): a prospective e-survey of parents and health professionals.

PONE-D-21-34519R2

Dear Dr. Carter,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.
To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Miquel Vall-llosera Camps
Senior Editor
PLOS ONE

2 Sep 2022

PONE-D-21-34519R2

Clinical utility and acceptability of a whole-hospital, pro-active electronic paediatric early warning system (the DETECT study): a prospective e-survey of parents and health professionals.

Dear Dr. Carter:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Miquel Vall-llosera Camps
Staff Editor
PLOS ONE

Review 1.  Implementation of a paediatric early warning system as a complex health technology intervention.

Authors:  Heather Duncan; Adrienne P Hudson
Journal:  Arch Dis Child       Date:  2020-08-11       Impact factor: 3.791

2.  The Parent Role in Advocating for a Deteriorating Child: A Qualitative Study.

Authors:  Patrick W Brady; Barbara K Giambra; Susan N Sherman; Caitlin Clohessy; Allison M Loechtenfeldt; Kathleen E Walsh; Samir S Shah; Carole Lannon
Journal:  Hosp Pediatr       Date:  2020-08-12

3.  Developing a Tool to Support Communication of Parental Concerns When a Child is in Hospital.

Authors:  Gemma Heath; Hermione Montgomery; Caron Eyre; Carole Cummins; Helen Pattison; Rachel Shaw
Journal:  Healthcare (Basel)       Date:  2016-01-13

Review 4.  Paediatric early warning systems for detecting and responding to clinical deterioration in children: a systematic review.

Authors:  Veronica Lambert; Anne Matthews; Rachel MacDonell; John Fitzsimons
Journal:  BMJ Open       Date:  2017-03-13       Impact factor: 2.692

Review 5.  Is there a role for patients and their relatives in escalating clinical deterioration in hospital? A systematic review.

Authors:  Abigail K Albutt; Jane K O'Hara; Mark T Conner; Stephen J Fletcher; Rebecca J Lawton
Journal:  Health Expect       Date:  2016-10-26       Impact factor: 3.377

Review 6.  Technology Acceptance in Mobile Health: Scoping Review of Definitions, Models, and Measurement.

Authors:  Camille Nadal; Corina Sas; Gavin Doherty
Journal:  J Med Internet Res       Date:  2020-07-06       Impact factor: 5.428

7.  Dynamic Electronic Tracking and Escalation to reduce Critical care Transfers (DETECT): the protocol for a stepped wedge mixed method study to explore the clinical effectiveness, clinical utility and cost-effectiveness of an electronic physiological surveillance system for use in children.

Authors:  Gerri Sefton; Bernie Carter; Steven Lane; Matthew Peak; Ceu Mateus; Jen Preston; Fulya Mehta; Bruce Hollingsworth; Roger Killen; Enitan D Carrol
Journal:  BMC Pediatr       Date:  2019-10-17       Impact factor: 2.125

Review 8.  Acceptability of digital health interventions: embracing the complexity.

Authors:  Olga Perski; Camille E Short
Journal:  Transl Behav Med       Date:  2021-07-29       Impact factor: 3.046

9.  How to implement a PEWS in a resource-limited setting: A quantitative analysis of the bedside-PEWS implementation in a hospital in northeast Brazil.

Authors:  Karin S van der Fluit; Matthijs C Boom; Marlon B Brandão; Gabriel D Lopes; Paula G Barreto; Deborah C F Leite; Ricardo Q Gurgel
Journal:  Trop Med Int Health       Date:  2021-07-21       Impact factor: 2.622

