Uirá Duarte Wisnesky1, Scott W Kirkland1, Brian H Rowe1,2, Sandra Campbell3, Jeffrey Michael Franc1.
Background: Mass casualty incidents (MCIs) can occur as a consequence of a wide variety of events and often require overwhelming prehospital and emergency support and coordinated emergency response. A variety of disaster triage systems have been developed to assist health care providers in making difficult choices with regards to prioritization of victim treatment. The simple triage and rapid treatment (START) triage system is one of the most widely used triage algorithms; however, the research literature addressing real-world or simulation studies documenting the classification accuracy of personnel using START is lacking. Aims andEntities:
Keywords: START; disaster medicine; emergency medicine; mass casualty incidents; systematic review; triage
Year: 2022 PMID: 35284379 PMCID: PMC8907512 DOI: 10.3389/fpubh.2022.676704
Source DB: PubMed Journal: Front Public Health ISSN: 2296-2565
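For context on the algorithm the included studies evaluate: START classifies casualties into four categories (green/minor, yellow/delayed, red/immediate, black/expectant) from ambulation, respiration, perfusion, and mental status. The following is a minimal sketch of the widely published decision logic, not code from this review; the function name, parameter names, and boolean encoding are illustrative.

```python
def start_triage(can_walk, breathing, breathes_after_airway_opened,
                 respiratory_rate, radial_pulse_present, obeys_commands):
    """Sketch of the standard START (simple triage and rapid treatment)
    decision logic; parameter names are illustrative."""
    if can_walk:
        return "green"   # minor / walking wounded
    if not breathing:
        # reposition the airway: still apneic -> expectant, else immediate
        return "red" if breathes_after_airway_opened else "black"
    if respiratory_rate > 30:
        return "red"     # respiration check
    if not radial_pulse_present:
        return "red"     # perfusion check (or capillary refill > 2 s)
    if not obeys_commands:
        return "red"     # mental status check
    return "yellow"      # delayed
```

Several included studies test modifications of exactly this decision chain (e.g., FDNY-START adds an intermediate Orange category between these branches).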
Figure 1. Literature search flow diagram.
Descriptive summary of the studies included in this review.
| Study | Aim | Study design | Key findings |
|---|---|---|---|
| Arshad et al. | Determine if modification of the START system by the addition of an Orange category would reduce over- and under-triage rates in a simulated mass-casualty incident exercise. | • Quantitative non-randomized comparative study | • The FDNY-START system may allow providers to prioritize casualties using an intermediate category (Orange) more properly aligned to patient needs and, as such, may reduce the rates of over-triage compared with START. |
| Badiali et al. | Address whether “last-minute” START training of nonmedical personnel during a disaster or mass-casualty incident would result in more effective triage of patients. | • Quantitative randomized controlled trial | • Even “last-minute” training on the START triage protocol allows nonmedical personnel to better identify and triage the victims of a disaster or MCI. |
| Bolduc et al. | Compare both accuracy and speed (triage time) of computer-based (electronic) to traditional paper-based (manual) START triage during a mass-casualty incident in a hospital setting. | • Quantitative non-randomized comparative study | • No significant difference in accuracy of triage when comparing electronic and manual methods, regardless of triage provider type or acuity of patient presentations. |
| Buono et al. | Evaluate the accuracy of triage using an embedded algorithm in a wireless electronic system compared to traditional methods of triage. | • Quantitative non-randomized comparative study | • The control manual group had 73.7% (CI: 56.9–86.6%) accuracy when compared to the gold standard. |
| Challen and Walter | Assess the predictive power of three different triage systems using data from an actual mass-casualty incident (the London bombings of 7th July 2005). | • Quantitative non-randomized comparative study | • The triage systems performed identically in identifying the critically injured, with sensitivity 50% and specificity 100% if using only the highest priority, or sensitivity 75% and specificity 99% if using the top two priority groups. |
| Crews | Evaluate the efficacy of START triage during actual mass-casualty incidents and full-scale MCI exercises. | • Mixed-methods study | • Data analysis from actual incidents and exercises confirms that “just-in-time” training increases the accuracy of the START triage model used from 42% to 73%. |
| Curran-Sills and Franc | Compare emergency department triage nurses' time to triage and accuracy in a simulated mass-casualty incident population using a computerized version of the CTAS or START systems. | • Quantitative non-randomized comparative study | • The cumulative triage accuracy for the cCTAS and START tools was 70/90 (77.8%) and 65/90 (72.2%), respectively. |
| Djalali et al. | Test the association between the level of preparedness and the level of response performance during a full-scale hospital exercise. | • Quantitative descriptive study | • The preparedness of the chosen hospital was 59%, while the response performance was evaluated as 70%. |
| Ellebrecht et al. | Analyze the assigned triage level of casualties and compare paramedics' performance. | • Quantitative non-randomized comparative study | • Overall correct accuracy rate was 81.5%. |
| Ersoy and Akpinar | Examine the accuracy of triage decision-making among emergency physicians using a multiple-casualty scenario. | • Quantitative descriptive study | • Overall accuracy rate ranged from 83.6 to 90.0% for four immediate casualties, 26.4 to 78.2% for seven urgent casualties, 70.9 to 91.8% for four delayed casualties, and 82.7 to 97.3% for two dead cases. |
| Ferrandini-Price et al. | Determine the efficiency in the execution of START triage, comparing virtual reality to clinical simulation in a mass-casualty incident. | • Quantitative non-randomized comparative study | • No significant differences between the clinical simulation with actors group (88.3% [SD = 9.65]) and the virtual reality simulation group (87.2% [SD = 7.2]). |
| Ingrassia et al. | Test a new disaster simulation suite, evaluating its application during the same type of full-scale exercise on two different occasions. | • Quantitative non-randomized comparative study | • No differences were found regarding triage or prehospital treatment accuracy. |
| Ingrassia et al. | Develop a core curriculum of disaster medicine centered on blended learning and simulation tools. | • Quantitative non-randomized comparative study | • The blended approach and the use of simulation tools were appreciated by all participants and successfully increased participants' knowledge of disaster medicine and basic competencies in performing mass-casualty triage. |
| Ingrassia et al. | Explore the ability of virtual reality simulation, compared with live simulation, to test the mass-casualty triage skills (triage accuracy, intervention correctness, and speed to complete triage) of naive medical students using the START triage algorithm in a simulated mass-casualty incident scenario, and to detect the increase in this expertise after a brief learning session on mass-casualty triage. | • Quantitative randomized controlled trial | • No significant differences in START triage accuracy when comparing virtual reality and live simulation. |
| Izumida et al. | Propose a triage training system in which the expression of information changes according to the skill level of each trainee. | • Quantitative non-randomized comparative study | • The results revealed the system was effective for implementing triage quickly and accurately. |
| Jain et al. | Compare unmanned aerial vehicle (UAV) technology to standard practice in triaging casualties at a mass-casualty incident. | • Quantitative randomized controlled trial | • No significant differences in START triage accuracy when comparing UAV technology and standard practice. |
| Kahn et al. | Analyze whether START is accurate in assigning acuity levels to victims of a real train crash. | • Quantitative descriptive study | • No triage level met both the 90% sensitivity and 90% specificity requirement set forth in the hypothesis. |
| Khan | Evaluate the mass-casualty incident triage skills of medical staff, such as doctors and nurses, at the Hamad General Hospital Emergency Department. | • Quantitative randomized controlled trial | • The study results report 90% triage accuracy in the intervention group and 70% in the control group, with a difference of 20–30%. |
| Lee and Franc | Assess the ability to implement a two-step emergency department triage model, with pre-triage using START and subsequent triage using CTAS, during a mass-casualty incident using a computer-based disaster simulation. | • Quantitative randomized controlled trial | • No significant difference in accuracy of triage and patient flow when comparing a two-step emergency department triage model (CTAS + START) to START alone. |
| Lima et al. | Describe the teaching strategy based on the Multiple Victims Incident simulation, discussing and evaluating the performance of the students involved in the initial care of trauma victims. | • Quantitative descriptive study | • Overall accuracy rate was 94.1%. |
| Loth et al. | Examine an adapted training protocol using START triage principles that incorporated visually complex triage situations. | • Quantitative non-randomized comparative study | • A short, directed triage training tool was shown to be effective in improving the recognition of triage features. |
| McCoy et al. | Evaluate the feasibility and effectiveness of using tele-simulation to deliver an emergency medical services course on mass-casualty incident training to healthcare providers overseas. | • Quantitative descriptive study | • There was a significant difference in accuracy of triage when comparing providers. |
| McElroy et al. | Describe the planning and implementation process, share results, and facilitate other regions as they conduct similar preparatory drills. | • Quantitative descriptive study | • Of the 445 transported patients, 270 (60%) were entered correctly into the state patient tracking system; 68 (25.2%) upgrades and 34 (12.6%) downgrades from scene triage categories were noted. |
| Mills et al. | Compare the simulation efficacy of a bespoke virtual reality (VR) mass-casualty incident simulation with an equivalent live simulation scenario designed for undergraduate paramedicine students. | • Mixed-methods study | • No significant differences were observed in accuracy on each platform. The VR simulation provided near-identical simulation efficacy for paramedicine students compared to the live simulation. |
| Navin et al. | Evaluate the operational viability of the Sacco Triage Method and compare its performance to START. | • Quantitative non-randomized comparative study | • Sacco Triage Method scoring was more accurate (91.7%) than START assessments (71.0%). |
| Risavi et al. | Assess the effectiveness of written and moulage scenarios using video instruction for mass-casualty triage by evaluating skill retention at six months post-intervention. | • Quantitative non-randomized comparative study | • No significant differences between written and moulage testing results at either initial testing or at six months. |
| Riza'I et al. | Evaluate the accuracy of triage decisions made by first-year medical students after receiving two intervention methods. | • Quantitative non-randomized comparative study | • The mean score for method 2 (8.03 ± 0.72) was significantly improved for correct triage compared with the mean for method 1 (6.33 ± 1.63) among 54 students. |
| Sapp et al. | Evaluate the accuracy of triage decisions made by newly enrolled first-year medical students after receiving a brief educational intervention. | • Quantitative non-randomized comparative study | • Overall accuracy rate was 64.3%. First-year medical students who received brief START training achieved triage accuracy scores similar to those of emergency medical providers in previous studies. |
| Schenker et al. | Evaluate the accuracy and speed of the triage of multiple patients during a disaster drill by Emergency Medical Service personnel. | • Quantitative descriptive study | • Overall triage accuracy rate was 78%, exceeding data suggesting that triage accuracy rates using different triage strategy algorithms are approximately 45% to 55%. |
| Silvestri et al. | Compare the START and SALT classifications of patients to a published reference standard category and evaluate the accuracy of the START method applied by emergency medical services personnel in a field simulation. | • Quantitative non-randomized comparative study | • The SALT triage system was overall a more accurate triage method than START at classifying patients, specifically in the delayed and immediate categories. |
| Simoes et al. | Analyze the quality of pre-hospital care provided by agencies in Vitória, Espírito Santo, Brazil. | • Quantitative descriptive study | • Overall correct accuracy rate was 92.5% using START. |
| Wu et al. | Evaluate the effectiveness of a brief training course on START. | • Quantitative non-randomized comparative study | • The trainees' scores increased significantly after the training. |
Master's thesis.
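Several of the included studies (e.g., Challen and Walter; Kahn et al.) report per-category sensitivity and specificity rather than overall accuracy. A minimal sketch of how such one-vs-rest metrics can be computed from paired assigned and reference categories; the function name and labels are illustrative, not from the review.

```python
def sensitivity_specificity(assigned, reference, positive):
    """One-vs-rest sensitivity and specificity for a single triage
    category (e.g. 'red'), comparing each assigned category against a
    reference-standard category. Illustrative sketch only."""
    pairs = list(zip(assigned, reference))
    tp = sum(a == positive and r == positive for a, r in pairs)  # true positives
    fn = sum(a != positive and r == positive for a, r in pairs)  # missed positives
    tn = sum(a != positive and r != positive for a, r in pairs)  # true negatives
    fp = sum(a == positive and r != positive for a, r in pairs)  # false alarms
    return tp / (tp + fn), tn / (tn + fp)
```

Studies such as Challen and Walter also pool the top two priority groups (red + yellow) as "positive" before computing the same quantities.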
Transparency of the studies.
| Study | Potential conflict of interest | Funding | Limitations reported | Limitations |
|---|---|---|---|---|
| Arshad et al. | ✗ | Stated | ✓ | • Lack of pertinent information (age, gender, years of service, training, and experience) about the comparison group. |
| • Challenges of implementing system-wide changes to EMS protocols and training personnel. | ||||
| Badiali et al. | ✗ | Not stated | ✗ | Not reported. |
| Bolduc et al. | ✗ | Stated | ✓ | • Single-center study. |
| • Ordering of different triage modalities may have impacted triage time. | ||||
| • Simulation conducted differently between groups. | ||||
| Buono et al. | ✗ | Not stated | ✓ | • Small sample size. |
| • Unintentionally ambiguous scenarios made triage level determination difficult. | ||||
| Challen and Walter | ✓ | Stated | ✓ | • There was a paucity of available documentation. |
| • Data collection challenges since staff at the incident scenes were using their own tags as well as official supplies. | ||||
| • There was missing data within the medical records. | ||||
| Crews | ✗ | Not stated | ✓ | • Lack of previous studies. |
| • Confinement of geographical region studied. | ||||
| Curran-Sills and Franc | ✗ | Stated | ✓ | • One group (nurses) was non-randomized. |
| • Simulation was done with a paper-based assessment tool, which is an oversimplification of actual triage. | ||||
| • Only adult victims were included. | ||||
| Djalali et al. | ✗ | Stated | ✓ | • Sample size from only one hospital. |
| • Response performance indicators were limited to command and control actions. | ||||
| Ellebrecht et al. | ✓ | Stated | ✓ | • Limited generalizability. |
| Ersoy and Akpinar | ✗ | Not stated | ✓ | • The scale of the decisions may not reflect the real conditions that physicians encounter in their daily practice. |
| Ferrandini-Price et al. | ✓ | Stated | ✓ | • The two groups did not comprise the same individuals, so variability due to individual differences is possible. |
| • The use of | ||||
| Ingrassia et al. | ✗ | Stated | ✓ | • For practical reasons, treatment accuracy was evaluated only in the pre-hospital phase. |
| • Although similar, the two scenarios were not identical since there were slight differences with regard to the resources available to each group. | ||||
| • The evaluation of performance indicators could be observer biased. | ||||
| • Since it was necessary to set a time limit, it is clear that the overall evaluation of the hospital response to the simulations is potentially biased by shorter simulation time. | ||||
| Ingrassia et al. | ✗ | Stated | ✓ | • Apart from the theoretical knowledge acquired and the increase in mass-casualty triage skills, the students were not evaluated for an improvement in other medical disaster management competencies. |
| Ingrassia et al. | ✓ | Stated | ✓ | • Small sample size. |
| • Selection bias. | ||||
| Izumida et al. | ✗ | Not stated | ✗ | Not reported. |
| Jain et al. | ✓ | Stated | ✓ | • Technological challenges. |
| • Small sample size. | ||||
| Kahn et al. | ✓ | Stated | ✓ | • The study methodology could not discern whether errors in assignment of triage categories resulted from failure of the triage algorithm as a tool or failure of emergency personnel to apply it correctly. |
| • Possible over-triage bias, as researchers observed that some of the assigned triage levels differed from what strict application of the START algorithm would have mandated. | ||||
| • The black, or “deceased,” category was not examined. | ||||
| Khan | ✓ | Not stated | ✓ | • Small sample size. |
| • Single-center study. | ||||
| • Using only one tool or system of triage (START). | ||||
| Lee and Franc | ✓ | Stated | ✓ | • Logistical and technological challenges. |
| • Issues during data collection. | ||||
| • Potential Hawthorne effect. | ||||
| • Unknown experience of participants with START prior to study. | ||||
| Lima et al. | ✗ | Stated | ✓ | • Lack of preparation of victims to act according to their injuries. |
| • Displacement of the victims from the triage area to the canvases for care during simulation. | ||||
| • The place of collection and the limitations of the material used in the simulation to care for the victims were also not well defined for the participants. | ||||
| Loth et al. | ✗ | Stated | ✓ | • Small sample size. |
| • Pictures only showed one victim at a time, which is not realistic for an MCI. | ||||
| • This study failed to show significance for its secondary objective of improvement in triage accuracy. | ||||
| McCoy et al. | ✗ | Stated | ✓ | • Voluntary enrolment in the course; thus, the sample may not be representative of all professions. |
| • Not designed as an observational-analytical study, so not powered to detect differences between groups. | ||||
| • Heterogeneous group of “other” participants. | ||||
| McElroy et al. | ✓ | Stated | ✗ | Not reported. |
| Mills et al. | ✗ | Stated | ✓ | • Small sample size of participants. |
| • Small number of patients (victims). | ||||
| Navin et al. | ✗ | Not stated | ✓ | • Assessment and scoring of victims were done by reading patient profile cards and not by making actual physiologic assessments. |
| • Exercises assumed unlimited transport and treatment resources. | ||||
| • The impact of the familiarity of the scene is unknown. | ||||
| • STM triage and resource management software was not tested. | ||||
| Risavi et al. | ✗ | Stated | ✗ | Not reported. |
| Riza'I et al. | ✗ | Not stated | ✓ | • Small sample size. |
| Sapp et al. | ✗ | Not stated | ✓ | • Lack of information on participants' previous MCI training. |
| • Limited generalizability to the general population, as the study was done with medical students. | ||||
| Schenker et al. | ✗ | Not stated | ✗ | Not reported. |
| Silvestri et al. | ✗ | Not stated | ✓ | • Some of the volunteer victims might not have appropriately displayed their injuries on the cards they were wearing, which could account for some of the under-triage. |
| Simoes et al. | ✓ | Stated | ✗ | Not reported. |
| Wu et al. | ✗ | Not stated | ✓ | • Seniority of the participants was not taken into consideration. |
| • The same written test was given before and after the training session, which raises the concern that improvement came from short-term practice rather than learning. | ||||
✓ Reported.
✗ Not reported.
Potential conflict of interest.
Typology of simulations.
| Study | Disaster type | Simulation modality | Setting | Scenario creation |
|---|---|---|---|---|
| Arshad et al. | • Land disaster (motor vehicle accidents) | • Computer-based (victim descriptions) | • Unclear | Unclear |
| Badiali et al. | • Unclear | • Paper-based (victim descriptions) | • Unclear | Derived from a web-based platform, which clearly defines how the cases were created |
| Bolduc et al. | • Land disaster (train derailment) | • Live simulation (actors) | • Emergency Department | Unclear |
| Buono et al. | • Unclear | • Unclear | • Unclear | Unclear |
| Challen and Walter | • Bomb threats/terrorist attack (bombing) | • Retrospective analysis of real mass-casualty incident | • Not applicable: retrospective analysis | Medical records |
| Crews | • Bomb threats/terrorist attack (shooting) | • Retrospective analysis of real mass-casualty incident | • Not applicable: retrospective analysis | Real MCI |
| Curran-Sills and Franc | • Unclear | • Paper-based (victim descriptions) | • Emergency Department | Derived from a web-based platform |
| Djalali et al. | • Explosions (chemical explosion) | • Unclear | • Hospital | Unclear |
| Ellebrecht et al. | • Air disaster (airplane collision) | • Live simulation (actors) | • Airport | Unclear |
| Ersoy and Akpinar | • Land disaster (motor vehicle accidents) | • Paper-based (questionnaire with an MCI scenario) | • Unclear | Borrowed from another study, which was created by the study researchers |
| Ferrandini-Price et al. | • Unclear | • Virtual reality (head-mounted display) | • Unclear | Created by healthcare professionals |
| Ingrassia et al. | • Structural collapse (ceiling collapse) | • Live simulation (actors) | • Unclear | Created by researchers |
| Ingrassia et al. | • Land disaster (motor vehicle accidents) | • Computer-based (electronic simulation designed using Adobe Flash) | • University campus | Unclear |
| Ingrassia et al. | • Land disaster (motor vehicle accidents) | • Virtual reality (joystick) | • University campus | Derived from a web-based platform (VictimBase), but unclear how MCI scenarios were created and validated |
| Izumida et al. | • Unclear | • Virtual reality (head-mounted display) | • Unclear | Unclear |
| Jain et al. | • Land disaster (motor vehicle accidents) | • Live simulation (actors) | • Airport runway | Real MCI |
| Kahn et al. | • Land disaster (motor vehicle accidents) | • Retrospective analysis of real mass-casualty incident | • Not applicable: retrospective analysis | Medical records |
| Khan | • Unclear | • Paper-based (details not reported) | • Emergency Department | Unclear |
| Lee and Franc | • Unclear | • Computer-based (SurgeSim) | • Emergency Department | Derived from a web-based platform (SurgeSim version 2.2.0), but unclear how MCI scenarios were created and validated |
| Lima et al. | • Land disaster (motor vehicle accidents) | • Live simulation (actors) | • University campus | Created by researchers |
| Loth et al. | • Unclear | • Computer-based (latent images) | • University campus | Unclear |
| McCoy et al. | • Bomb threats/terrorist attack (shooting) | • Virtual reality (broadcasting) | • High-rise office building | Unclear |
| McElroy et al. | • Bomb threats/terrorist attack (terrorist attack) | • Computer-based (details not reported) | • University campus, soccer stadium, and airport | Created by a private firm, but unclear how scenarios were created and validated |
| Mills et al. | • Land disaster (motor vehicle accidents) | • Virtual reality (actors) | • Virtual reality: police academy grounds | Created by researchers |
| Navin et al. | • Structural collapse (building collapse) | • Live simulation (actors and mannequins) | • Fire Department academy | Unclear |
| Risavi et al. | • Unclear | • Paper-based | • Unclear | Unclear |
| Riza'I et al. | • Unclear | • Paper-based (details not reported) | • Unclear | Unclear |
| Sapp et al. | • Toxic release (sarin gas) | • Paper-based (questionnaire with a clinical scenario) | • University campus | Created by healthcare professionals |
| Schenker et al. | • Explosions (chemical explosion) | • Live simulation | • Unclear | Created by healthcare professionals |
| Silvestri et al. | • Explosions (chemical explosion) | • Live simulation (actors and mannequins) | • University campus | Created by researchers |
| Simoes et al. | • Land disaster (motor vehicle accidents) | • Retrospective analysis of a simulation exercise | • Unclear | Medical records |
| Wu et al. | • Unclear | • Paper-based (details not reported) | • Unclear | Unclear |
Assessment of accuracy outcomes.
| Study | Accuracy outcomes | Triage method(s) assessed | Reference standard |
|---|---|---|---|
| Arshad et al. | • Accuracy (total and all sub-groups) | • START | Not reported |
| • Over-triage (total and all sub-groups) | • Modified START | ||
| • Under-triage (total and all sub-groups) | |||
| Badiali et al. | • Accuracy (total and all sub-groups) | • Non-START training | Not reported |
| • Over-triage (total and black sub-group) | • START last minute training | ||
| • Under-triage (total and black sub-group) | |||
| Bolduc et al. | • Accuracy (total and all sub-groups) | • START manual | Expert opinion |
| • START electronic | |||
| Buono et al. | • Accuracy (total) | • START (WIISARD | Expert opinion |
| • START (WIISARD | |||
| • START (Control | |||
| Challen and Walter | • Sensitivity (subgroup red, subgroup red + yellow) | • START | Outcomes reported as sensitivity and specificity. |
| • Specificity (subgroup red, subgroup red + yellow) | • Manchester Sieve | Baxt and Upeniek criticality | |
| • CareFlight triage | |||
| Crews | • Accuracy (total) | • START and the total population, year 2016 | Expert opinion |
| • Over-triage (total) | • START and the total population, year 2017 | ||
| • Under-triage (total) | • START and the total population, year 2018 | ||
| Curran-Sills and Franc | • Accuracy (total) | • START | Expert opinion |
| • Over-triage (total) | • CTAS | ||
| • Under-triage (total) | |||
| Djalali et al. | • Accuracy (subgroup green, and subgroup yellow) | • START | Not reported |
| Ellebrecht et al. | • Accuracy (total, and all subgroups with exception of black) | • START | Not reported |
| • Over-triage (total, subgroup yellow, and subgroup green) | |||
| • Under-triage (total, subgroup red, and subgroup yellow) | |||
| Ersoy and Akpinar | • Accuracy (total and all sub-groups) | • START | Not reported |
| • Over-triage (total and all sub-groups) | |||
| • Under-triage (total and all sub-groups) | |||
| Ferrandini-Price et al. | • Accuracy (total) | • START with clinical simulation with actors | Expert opinion |
| • START with virtual reality | |||
| • START with both clinical simulation with actors group and virtual reality | |||
| Ingrassia et al. | • Accuracy (total and all sub-groups) | • START with virtual reality on day 1 | Expert opinion |
| • Over triage (green sub-group, yellow sub-group, and black sub-group) | • START with virtual reality on day 3 | ||
| • Under triage (green sub-group, yellow sub-group, and red sub-group) | • START with live simulation on day 1 | ||
| • START with live simulation on day 3 | |||
| Ingrassia et al. | • Accuracy (total) | • START before learning module (pre-test) | Not reported |
| • START after learning module (post-test) | |||
| Ingrassia et al. | • Accuracy (total and all sub-groups) | • START with disaster medicine training in the pre-hospital setting | Not reported |
| • Over-triage (total and all sub-groups with the exception of red ED trained subgroup, red pre-hospital non-trained subgroup) | • START without previous training in medical disaster management in pre-hospital settings | ||
| • Under-triage (total and all sub-groups with the exception of green trained and non-trained subgroup, and trained yellow subgroup) | • START with disaster medicine training in the emergency department | ||
| • START without previous training in medical disaster management in the emergency department | |||
| Izumida et al. | • Accuracy (total) | • START with a novel training system | Not reported |
| • START with a training system in which difficulty does not change dynamically | |||
| Jain et al. | • Accuracy (total) | • START with an unmanned aerial vehicle drone | Not reported |
| • START with live simulation | |||
| Kahn et al. | • Sensitivity (green, yellow, and red subgroups) | • START | Other triage guideline |
| • Specificity (green, yellow, and red subgroups) | |||
| • Positive predictive value (green, yellow, and red subgroups) | |||
| • Negative predictive value (green, yellow, and red subgroups) | |||
| • Positive likelihood (green, yellow, and red subgroups) | |||
| • Negative likelihood (green, yellow, and red subgroups) | |||
| • Accuracy (total) | |||
| • Over-triage (total) | |||
| • Under-triage (total) | |||
| Khan | • Accuracy (total) | • START intervention group | Not reported |
| • Over-triage (total) | • START control group | ||
| • Under-triage (total) | |||
| Lee and Franc | • Accuracy (total and all sub-groups, with exception of black) | • START (one-step triage) | Expert opinion |
| • Over-triage (total and all subgroups, with the exception of two-steps red sub-group, and one- and two-step black sub-groups) | • START and CTAS (two-step triage) | ||
| • Under-triage (total and all subgroups, with the exception of two-steps red sub-group, and one- and two-step black sub-groups) | |||
| • Under-triage (red classified as black) | |||
| • Under-triage (red classified as yellow) | |||
| Lima et al. | • Accuracy (total) | • START | Not reported |
| Loth et al. | • Accuracy (total) | • START with training in triage before training | Not reported |
| • START with training in triage after training | |||
| • START with training in transportation before training | |||
| • START with training in transportation after training | |||
| McCoy et al. | • Accuracy (total) | • START use by educator/technician/other | Not reported |
| • START use by EMT/paramedics | |||
| • START use by nurses | |||
| • START use by pharmacists | |||
| • START use by physicians | |||
| McElroy et al. | • Accuracy (total) | • START | Not reported |
| • Over-triage (total) | |||
| • Under-triage (total) | |||
| Mills et al. | • Accuracy (total) | • START using virtual reality | Not reported |
| • START using live simulation | |||
| Navin et al. | • Accuracy (total) | • START | Not reported |
| • Over-triage (total) | • Sacco Triage Method | ||
| • Under-triage (total) | |||
| Risavi et al. | • Accuracy (sub-groups green, yellow, and red) | • START with written triage first | Not reported |
| • Accuracy for moulage (mean number of patients triaged correctly) at 6 months (total) | • START with moulage triage first | ||
| • Accuracy for written scenario (mean number of patients triaged correctly) at baseline (total) | • START with written triage second | ||
| • Accuracy for written scenario (mean number of patients triaged correctly) at 6 months (total) | • START with moulage triage second | ||
| • Accuracy for moulage (mean number of patients triaged correctly) at baseline (total) | • START with moulage at baseline | ||
| • Over-triage (sub-groups green, yellow, and red) | • START with moulage at 6 months | ||
| • Under-triage (sub-groups green, yellow, and red) | • START with written scenario at baseline | ||
| • START with written scenario at 6 months | |||
| Riza'I et al. | • Accuracy (total) | • START with lecture method | Not reported |
| • Over-triage (total) | • START with simulation method | ||
| • Under-triage (total) | |||
| Sapp et al. | • Accuracy (total) | • START performed by students from year of 2008 | Expert opinion |
| • Over-triage (total) | • START performed by students from year of 2009 | ||
| • Under-triage (total) | • START performed by students from year of 2008 and 2009 | ||
| Schenker et al. | • Accuracy (total and all sub-groups, with exception of total black and first responding ambulance subgroup black) | • START performed on victims exiting triage area | Not reported |
| • Over-triage (total and sub-groups) | • START performed by first responding ambulance | ||
| • Under-triage (total and sub-groups) | • Sum of START performed on victims exiting triage area and by first responding ambulance (?) | ||
| Silvestri et al. | • Over-triage (total) | • START | Expert opinion |
| • Under-triage (total) | • SALT | ||
| Simoes et al. | • Accuracy (total) | • START | Not reported |
| • Over-triage (total) | |||
| • Under-triage (total) | |||
| Wu et al. | • Accuracy (total) | • START performed by medical staff before training | Not reported |
| • START performed by medical staff after training | |||
| • START performed by medical staff with no prior training before training | |||
| • START performed by medical staff with no prior training after training | |||
| • START performed by medical staff with prior training before training | |||
| • START performed by medical staff with prior training after training | |||
| • START performed by individuals with no prior training before training | |||
| • START performed by individuals with no prior training after training | |||
| • START performed by non-medical with no prior training before training | |||
| • START performed by non-medical with no prior training after training | |||
| • START performed by non-medical with prior training before training | |||
| • START performed by non-medical with prior training after training | |||
| • START performed by participants with prior training before training | |||
| • START performed by participants with prior training after training |
WIISARD, Wireless Internet Information System for Medical Response in Disasters.
PDA, personal digital assistant.
ETT, electronic triage tag.
TPT, traditional paper technology.
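Most accuracy outcomes tabulated above (overall accuracy, over-triage, and under-triage) reduce to comparing each assigned category against the reference category on an ordered severity scale. The following is a minimal sketch under the common convention green < yellow < red; the black category is left aside because its handling varies between studies, and the names are illustrative.

```python
# Illustrative severity ordering; black is excluded since study conventions differ.
SEVERITY = {"green": 0, "yellow": 1, "red": 2}

def triage_metrics(assigned, reference):
    """Overall accuracy plus over-/under-triage rates. Over-triage:
    assigned category more urgent than the reference; under-triage:
    less urgent. Sketch only; individual study definitions vary."""
    n = len(assigned)
    correct = over = under = 0
    for a, r in zip(assigned, reference):
        if a == r:
            correct += 1
        elif SEVERITY[a] > SEVERITY[r]:
            over += 1
        else:
            under += 1
    return correct / n, over / n, under / n
```

For example, four casualties assigned (red, green, yellow, red) against a reference of (red, yellow, yellow, green) yield 50% accuracy with one over-triage and one under-triage, matching the way totals and sub-group rates are reported in the table above.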