STUDY OBJECTIVE: To determine the interobserver reliability of the Canadian Emergency Department Triage and Acuity Scale (CTAS). METHODS: Ten physicians and 10 nurses were randomly selected to review 50 ED case summaries, each containing the presenting complaint, mode of arrival, vital signs, and a verbatim triage note, and to assign a triage level to each. Agreement within and between the groups of raters was determined using kappa statistics. One-way analysis of variance (ANOVA), 2-way ANOVA, and combined ANOVA were used to quantify reliability coefficients (intraclass and interclass correlations). RESULTS: The overall chance-corrected agreement (kappa) for all observers was .80 (95% confidence interval [CI] .79 to .81), and the probability of agreement between 2 random observers on a random case was .539. For nurses alone, kappa = .84 (95% CI .83 to .85; probability of agreement .598), and for physicians alone, kappa = .83 (95% CI .81 to .85; probability of agreement .566). The 1-way, 2-way, and combined ANOVA showed that the reliability coefficients (84%) for both nurses and physicians were similar to the kappa values. The combined ANOVA also showed a .2-point difference between the groups, with physicians assigning a higher triage level. CONCLUSION: The high rate of interobserver agreement has important implications for case-mix comparisons and suggests that this scale is understood and interpreted similarly by nurses and physicians.
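To make the agreement statistic concrete, the following minimal sketch (not the authors' analysis; the ratings are simulated purely for illustration) computes Fleiss' kappa, one standard chance-corrected agreement measure for a multi-rater design like the one described above: 20 raters (10 nurses, 10 physicians) each assigning one of 5 CTAS levels to 50 case summaries. The abstract does not state which kappa variant the authors used.

import numpy as np

def fleiss_kappa(counts):
    """counts[i, j] = number of raters assigning case i to category j."""
    n = counts.sum(axis=1)[0]            # raters per case (assumed constant)
    N = counts.shape[0]                  # number of cases
    p_j = counts.sum(axis=0) / (N * n)   # marginal category proportions
    # Per-case observed agreement, then overall observed vs. chance agreement.
    P_i = (np.sum(counts ** 2, axis=1) - n) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), np.sum(p_j ** 2)
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical ratings: 50 cases x 20 raters, CTAS levels 1-5, generated so
# raters mostly agree (illustrative only, not the study's data).
rng = np.random.default_rng(0)
true_level = rng.integers(1, 6, size=(50, 1))
noise = rng.integers(-1, 2, size=(50, 20)) * (rng.random((50, 20)) < 0.15)
ratings = np.clip(true_level + noise, 1, 5)
counts = np.stack([np.bincount(row, minlength=6)[1:] for row in ratings])
print(f"Fleiss' kappa = {fleiss_kappa(counts):.3f}")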
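The reliability coefficients from the ANOVA models are intraclass correlations. A second sketch, again on simulated data, derives ICC(2,1) from a 2-way ANOVA decomposition (the Shrout-Fleiss two-way random-effects form; the abstract does not specify which ICC form the authors computed):

import numpy as np

def icc_2_1(x):
    """x[i, j] = rating of case i by rater j; two-way random-effects ICC."""
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)  # between cases
    ms_cols = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)  # between raters
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))              # residual MS
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical 50-case x 20-rater matrix of CTAS levels (illustrative only).
rng = np.random.default_rng(1)
ratings = np.clip(rng.integers(1, 6, size=(50, 1)) + rng.integers(-1, 2, size=(50, 20)), 1, 5)
print(f"ICC(2,1) = {icc_2_1(ratings.astype(float)):.3f}")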