OBJECTIVE: To determine the intrarater, interrater, and retest reliability of facial nerve grading of patients with facial palsy (FP) using standardized videos recorded synchronously during a self-explanatory patient video tutorial. STUDY DESIGN: Prospective, observational study. METHODS: The automated videos from 10 patients with varying degrees of FP (5 acute, 5 chronic FP) and videos without tutorial from eight patients (all chronic FP) were rated by five novices and five experts according to the House-Brackmann grading system (HB), the Sunnybrook Grading System (SB), and the Facial Nerve Grading System 2.0 (FNGS 2.0). RESULTS: Intrarater reliability for the three grading systems was very high using the automated videos (intraclass correlation coefficient [ICC]; SB: ICC = 0.967; FNGS 2.0: ICC = 0.931; HB: ICC = 0.931). Interrater reliability was also high (SB: ICC = 0.921; FNGS 2.0: ICC = 0.837; HB: ICC = 0.736), but for HB, Fleiss' kappa (0.214) and Kendall's W (0.231) were low. The interrater reliability did not differ between novices and experts. Retest reliability was very high (SB: novices ICC = 0.979; experts ICC = 0.964; FNGS 2.0: novices ICC = 0.979; experts ICC = 0.969). The reliability of grading of chronic FP with SB was higher using automated videos with tutorial (ICC = 0.845) than without tutorial (ICC = 0.538). CONCLUSION: The reliability of grading using the automated videos is excellent, especially for SB grading. We recommend using this automated video tool regularly in clinical routine and for clinical studies. LEVEL OF EVIDENCE: 4 Laryngoscope, 129:2274-2279, 2019.
Authors: Josef Georg Heckmann; Peter Paul Urban; Susanne Pitz; Orlando Guntinas-Lichius; Ildikó Gágyor Journal: Dtsch Arztebl Int Date: 2019-10-11 Impact factor: 5.594
Authors: Petar Stankovic; Jan Wittlinger; Robert Georgiew; Nina Dominas; Katrin Reimann; Stephan Hoch; Thomas Wilhelm; Thomas Günzel Journal: Eur Arch Otorhinolaryngol Date: 2020-01-27 Impact factor: 2.503
Authors: Martinus M van Veen; Tessa E Bruins; Madina Artan; Paul M N Werker; Pieter U Dijkstra Journal: Clin Otolaryngol Date: 2020-06-08 Impact factor: 2.597