Jonathon P Whitton,1,2 Kenneth E Hancock,3,4 Jeffrey M Shannon,5 Daniel B Polley3,4. 1. Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, Massachusetts, U.S.A. (Jonathon_Whitton@meei.harvard.edu). 2. Program in Speech and Hearing Bioscience and Technology, Harvard-MIT Division of Health Sciences and Technology, Cambridge, Massachusetts, U.S.A. 3. Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, Massachusetts, U.S.A. 4. Department of Otology and Laryngology, Harvard Medical School, Boston, Massachusetts, U.S.A. 5. Hudson Valley Audiology Center, Pomona, New York, U.S.A.
Abstract
OBJECTIVES/HYPOTHESIS: To compare hearing measurements made at home using self-administered audiometric software against audiological tests performed on the same subjects in a clinical setting.

STUDY DESIGN: Prospective, crossover equivalence study.

METHODS: In experiment 1, adults with varying degrees of hearing loss (N = 19) performed air-conduction audiometry, frequency discrimination, and speech-recognition-in-noise testing twice at home with an automated tablet application and twice in sound-treated clinical booths with an audiologist. The accuracy and reliability of the computer-guided home hearing tests were compared to the audiologist-administered tests. In experiment 2, the reliability and accuracy of pure-tone audiometric results were examined in a separate cohort (N = 21) across a variety of clinical settings.

RESULTS: Remote, automated audiograms were statistically equivalent to manual, clinic-based testing from 500 to 8,000 Hz (P ≤ .02); however, 250-Hz thresholds were elevated when collected at home. Remote and sound-treated-booth measurements of frequency discrimination and speech recognition thresholds were equivalent (P ≤ 5 × 10⁻⁵). In the second experiment, remote testing was equivalent to manual sound-booth testing from 500 to 8,000 Hz (P ≤ .02) for a different cohort that received clinic-based testing in a variety of settings.

CONCLUSION: These data provide proof of concept that several self-administered, automated hearing measurements are statistically equivalent to manual measurements made by an audiologist in the clinic. The demonstrated statistical equivalence of these basic behavioral hearing tests points toward the eventual feasibility of monitoring progressive or fluctuating hearing disorders outside the clinic, increasing the efficiency of clinical information collection.

LEVEL OF EVIDENCE: 2b. Laryngoscope, 126:2382-2388, 2016.
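The equivalence P values reported above come from statistical equivalence testing rather than ordinary difference testing. The abstract does not specify the exact procedure, but a standard approach for paired measurements is the two one-sided tests (TOST) procedure; the sketch below illustrates it on hypothetical paired thresholds with a hypothetical ±10 dB equivalence margin (both are illustrative assumptions, not values from the study).

```python
import numpy as np
from scipy import stats

def tost_paired(home, clinic, margin):
    """Two one-sided tests (TOST) for equivalence of paired measurements.

    Tests whether the mean home-minus-clinic difference lies within
    (-margin, +margin). Returns the larger of the two one-sided
    p-values; equivalence is declared when it falls below alpha.
    """
    diff = np.asarray(home, dtype=float) - np.asarray(clinic, dtype=float)
    n = diff.size
    se = diff.std(ddof=1) / np.sqrt(n)
    # One-sided t-tests against each equivalence bound.
    t_lower = (diff.mean() + margin) / se   # H0: mean diff <= -margin
    t_upper = (diff.mean() - margin) / se   # H0: mean diff >= +margin
    p_lower = 1 - stats.t.cdf(t_lower, df=n - 1)
    p_upper = stats.t.cdf(t_upper, df=n - 1)
    return max(p_lower, p_upper)

# Hypothetical paired pure-tone thresholds (dB HL) at one frequency.
home = [25, 30, 20, 35, 40, 25, 30]
clinic = [25, 30, 25, 30, 40, 20, 30]
p = tost_paired(home, clinic, margin=10.0)
print(f"TOST p = {p:.4f}")  # equivalence concluded if p < alpha
```

Unlike a conventional paired t-test, where a small P value indicates a difference, here a small P value supports equivalence within the chosen margin, which is why the abstract's small P values (e.g., P ≤ .02) argue that home and clinic tests agree.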