Literature DB >> 34297641

Machine Learning-based Sleep Staging in Patients with Sleep Apnea Using a Single Mandibular Movement Signal.

Nhat-Nam Le-Dong1, Jean-Benoit Martinot2,3, Nathalie Coumans2, Valérie Cuthbert2, Renaud Tamisier4,5, Sébastien Bailly4,5, Jean-Louis Pépin4,5.   

Abstract


Year:  2021        PMID: 34297641      PMCID: PMC8759305          DOI: 10.1164/rccm.202103-0680LE

Source DB:  PubMed          Journal:  Am J Respir Crit Care Med        ISSN: 1073-449X            Impact factor:   21.405


To the Editor: We all sleep, and sleep patterns and architecture influence our health and wellbeing. At present, the gold standard method for recording detailed sleep patterns to detect and monitor sleep disorders is in-laboratory overnight polysomnography (PSG), requiring specialized equipment and trained staff. This is no longer feasible in view of the size of the population with suspected sleep disorders, and especially in the coronavirus disease (COVID-19) era (1). Mandibular movements reveal the changes in trigeminal motor nucleus activity driven by brainstem centers involved in sleep and wake transitions (2, 3). The activity of upper airway muscles anchored on the mandibular jaw is the net result of the activation of brainstem respiratory and sleep centers and their respective interactions. This produces specific mandibular movement patterns reflecting the interactions between sleep stages and respiratory control. We previously demonstrated that sleep mandibular movements represent a powerful tool for characterizing respiratory disturbances in obstructive sleep apnea (OSA) (4–6). Figure 1 gives examples of how the different sleep stages each have typical mandibular movement signal patterns.
Figure 1.

The mandibular movements (MM) signal processed by machine learning to provide sleep staging. Typical example of two of the six channels (upper and lower trace) of the MM signal recorded by a single sensor during the four sleep stages in a single individual. Each trace represents a 210-second (3.5-min) time span of MM recordings by the Sunrise system (inertial measurement with six channels) during wakefulness (top), REM sleep, light sleep, and deep sleep (bottom). Thirty-second epochs were used for sleep stage classification. Sleep is detected when MM occur at the breathing frequency. During light sleep (N2), the amplitude of MM reaches several tenths of a millimeter and varies slightly. The movements during quiet respiration and light sleep are repeated at a frequency ranging between 0.15 and 0.60 Hz depending on central drive output. Deepening of sleep (N3) increases the upper airway’s resistance, and this is reflected by an increase in the amplitude of movement, which is also more stable than during N2. REM sleep is easily identified by irregular frequencies and changing amplitudes in MM that are on average smaller than non-REM sleep amplitudes. Cartoon images adapted from Freepik.com.

Recordings of mandibular movements throughout the night provide hundreds of temporal–spatial signals for modeling and identifying the different sleep stages. Our objective was to develop, train, and then validate an artificial intelligence algorithm to stage sleep using a single sensor detecting mandibular movements. This prospective study included 1,026 adults with suspected OSA referred for overnight in-laboratory PSG and simultaneous recordings of mandibular movements using the Sunrise system (IRB 00004890; number B707201523388). 
The PSG data (Somnoscreen Plus, Somnomedics) were manually scored by two experienced sleep technicians (interobserver agreement, 92.1%; 95% confidence interval [CI], 0.89–0.94; P < 0.001) in accordance with the criteria of the American Academy of Sleep Medicine (7). The Sunrise system is composed of a coin-sized sensor attached by the sleep technician to the chin of the patient (Figure 1). The embedded inertial measurement device senses mandibular movements and is externally controlled by a smartphone application via Bluetooth, automatically transferring nightly data to a cloud-based infrastructure (2). Using the Extreme Gradient Boosting (XGBoost) classifier as the core algorithm, we developed and progressively trained a machine learning sleep staging algorithm (8) using the overnight PSG and mandibular movement recordings from 800 of the patients. The algorithm automatically classified each 30-second epoch of mandibular movement patterns as wake, light non-REM (NREM; N1 + N2), deep NREM (N3), or REM sleep (Figure 1). The N1 and N2 stages were combined in the automated scoring to reach the best compromise between clinical relevance and model performance. The extracted features consisted of a combination of raw signals along the three axes of the accelerometer/gyroscope, processing modes (filters with different frequency bands, moving average), and statistical functions. The statistics applied to these signals were measures of central tendency (mean, median), extreme values (min, max), quartiles, and SD, as well as normal-standardized versions of all of the above. The programming language was Python. Patients in the machine learning training set (n = 800 [451 males]) were aged 48.4 years (16.7), with a body mass index (BMI) of 29.1 kg/m2 (10.2) and a neck circumference of 40.0 cm (5.0), all median (interquartile range [IQR]). 
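The epoch-wise feature extraction described above can be sketched as follows. This is only a minimal illustration, not the authors' code: the sampling rate, the 5-sample moving-average window, and the exact set of statistics per channel are assumptions.

```python
import numpy as np

def epoch_features(epoch):
    """Summary statistics for one 30-s epoch of the six-channel
    mandibular movement signal (3-axis accelerometer + 3-axis gyroscope).
    `epoch` has shape (n_samples, 6)."""
    feats = []
    for ch in range(epoch.shape[1]):
        x = epoch[:, ch]
        # one illustrative "processing mode": moving-average smoothing
        smooth = np.convolve(x, np.ones(5) / 5, mode="same")
        for sig in (x, smooth):
            q1, med, q3 = np.percentile(sig, [25, 50, 75])
            # central tendency, extremes, quartiles, and SD
            feats += [sig.mean(), med, sig.min(), sig.max(), q1, q3, sig.std()]
    return np.array(feats)

rng = np.random.default_rng(0)
epoch = rng.normal(size=(300, 6))   # 30 s at an assumed 10 Hz, 6 channels
features = epoch_features(epoch)
print(features.shape)               # 6 channels x 2 versions x 7 stats = (84,)
```

One such feature vector per 30-second epoch would then feed the gradient-boosted classifier.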
PSG recordings showed apnea–hypopnea, respiratory disturbance, and microarousal indexes of 17.1 (27.5), 23.9 (28.5), and 24.2 (20.2), all median events/hour (IQR); and the following PSG sleep parameters: total sleep time 372 minutes (122.7), sleep efficiency 85.1% (13.7), and wake time 12.2% (16.5), all median (IQR). Patients in a separate validation set (n = 226 [116 males]) had similar characteristics: age 46.5 years (17.5), BMI 32.3 kg/m2 (11.5), and neck circumference 40.0 cm (5.0), all median (IQR); similar PSG indexes (20.3 [23.5], 27.0 [23.6], and 25.0 [20.3] for apnea–hypopnea, respiratory disturbance, and microarousal, respectively, all median events/hour [IQR]); and similar sleep parameters: 397 minutes (95.7), 87.1% (11.8), and 11.5% (12.2) for total sleep time, sleep efficiency, and wake time, respectively, all median (IQR). In the validation set, quantitative agreement between machine learning and human scoring was estimated using a linear mixed model as a two-way intraclass correlation coefficient (ICC [A, 1]) (95% CI) for total sleep time, wake time, light NREM, deep NREM, and REM sleep stages: 0.94 (0.93–0.96), 0.90 (0.88–0.92), 0.70 (0.63–0.76), 0.66 (0.58–0.73), and 0.65 (0.56–0.72), respectively. The mean (95% CI) measurement biases for total sleep time and the four sleep stages (as above) were −13.0 minutes (−52.9 to +19.0), +3.8% (−6.8 to +16.8), −14.9% (−31.1 to +1.8), +6.0% (−6.0 to +21.2), and +8.4% (−21.3 to +2.4). The algorithm classified sleep epochs with substantial qualitative agreement with manual PSG scorers, and agreement improved as the size of the learning set was progressively increased (κ = 0.71 and accuracy = 78.3% using the full machine learning data set of 800 patients). As shown in Figure 2, a stagewise receiver operating characteristic (ROC) curve analysis confirmed well-balanced performance for each target sleep stage.
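The reported epoch-level agreement (κ = 0.71, accuracy = 78.3%) corresponds to Cohen's kappa and raw accuracy between the automated and manually scored hypnograms. A minimal sketch with scikit-learn, using made-up labels rather than the study's data:

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical epoch-by-epoch hypnograms:
# 0 = wake, 1 = light NREM, 2 = deep NREM, 3 = REM
psg_scorer = [0, 0, 1, 1, 1, 2, 2, 3, 3, 1, 2, 0]
algorithm  = [0, 0, 1, 1, 2, 2, 2, 3, 3, 1, 2, 1]

# Kappa corrects raw agreement for agreement expected by chance
print("kappa   :", round(cohen_kappa_score(psg_scorer, algorithm), 2))
print("accuracy:", round(accuracy_score(psg_scorer, algorithm), 3))
```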
Figure 2.

Stagewise receiver operating characteristic (ROC) curve analysis. This consisted of extracting prediction scores for each target stage (wake, light sleep, deep sleep, and REM sleep) and for each patient, then estimating the false and true positive rates of a binary one-versus-rest classification rule to establish the ROC curve. The 95% CIs of the area under the curve (AUC) and the smoothing effect were obtained from empirical data (without using any resampling). The diagonal dashed line serves as a reference and shows the performance that would result if sleep staging were made randomly. The algorithm performed well in detecting REM sleep, with a ROC–AUC of 0.96 (0.90–0.99), and non-REM deep sleep, with a ROC–AUC of 0.97 (0.91–0.99). Only light non-REM sleep was slightly less well detected, with a ROC–AUC of 0.86 (0.77–0.94). CI = confidence interval; DS = deep sleep; LS = light sleep; R = REM sleep; W = wake.

Wakefulness was clearly discriminated from sleep states with a sensitivity of 88% (95% CI, 71–99%) and a specificity of 94% (85–98%). Moreover, the algorithm performed well in detecting REM sleep (sensitivity 83% [64–97%], specificity 89% [76–97%]) and deep sleep (sensitivity 84% [59–100%], specificity 90% [79–98%]). Light NREM sleep was slightly less well detected (sensitivity 60% [36–82%], specificity 88% [79–96%]). These findings indicate that machine learning analysis of mandibular movements identifies sleep stages in good agreement with individual manual scorers of PSG data. A strength of this work is that it was conducted in a real-life cohort comprising both subjects in whom PSG detected no OSA and patients with a broad spectrum of OSA, randomly sampled into training and validation sets. A clear advantage of our approach is that it relies on a high-performing sleep staging algorithm processing signals from a single mandibular movement sensor, simplifying the complex process of signal treatment and improving sleep staging reproducibility. 
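The stagewise ROC analysis in Figure 2 reduces the four-class problem to one-versus-rest binary classifications, one per stage. A sketch with scikit-learn; the per-epoch class probabilities below are synthetic stand-ins for the model's actual prediction scores:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
stages = ["W", "LS", "DS", "R"]

# Synthetic ground truth and class-probability scores for 400 epochs
y_true = rng.integers(0, 4, size=400)
scores = rng.dirichlet(np.ones(4), size=400)
scores[np.arange(400), y_true] += 0.3    # nudge scores to be informative
scores /= scores.sum(axis=1, keepdims=True)

# One-versus-rest: each stage's score column against all other stages
aucs = {name: roc_auc_score((y_true == k).astype(int), scores[:, k])
        for k, name in enumerate(stages)}
for name, auc in aucs.items():
    print(f"{name}: ROC-AUC = {auc:.2f}")
```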
Our study was designed to avoid limitations seen in other studies. First, PSG sleep staging was performed by two experienced technicians. Second, data from an independent set of patients were used to validate the algorithm. The input data were balanced using a random resampling (SMOTE) technique to minimize the effect of class imbalance. A conventional algorithmic framework involving manual feature extraction and a structured, data-driven algorithm was adopted for better control and understanding of the input data. Furthermore, the XGBoost algorithm offers several advantages over classical methods, including high computational and resource efficiency, allowing fast training and execution. In conclusion, the mandibular movement signal acquired from a compact inertial measurement device is suitable for automated sleep staging in adults presenting a broad spectrum of OSA severity. The proposed algorithm performs well enough for clinical application and could represent a major step toward unobtrusive, reliable, and cost-effective home-based sleep assessment and value-based care (9).
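The SMOTE balancing step synthesizes extra minority-class epochs (e.g., deep sleep) by interpolating between real samples and their nearest neighbours. The study used an established SMOTE implementation; the function below is only a minimal sketch of the core idea:

```python
import numpy as np

def smote_like(X, n_new, k=5, seed=0):
    """Generate n_new synthetic samples by interpolating each picked
    sample toward one of its k nearest neighbours (SMOTE's core idea)."""
    rng = np.random.default_rng(seed)
    new = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        dist = np.linalg.norm(X - X[i], axis=1)
        neighbours = np.argsort(dist)[1:k + 1]   # skip the sample itself
        j = rng.choice(neighbours)
        lam = rng.random()                       # interpolation factor in [0, 1)
        new.append(X[i] + lam * (X[j] - X[i]))
    return np.vstack(new)

# e.g., oversample a hypothetical minority stage (deep-sleep feature vectors)
minority = np.random.default_rng(2).normal(size=(20, 3))
synthetic = smote_like(minority, n_new=30)
print(synthetic.shape)                           # (30, 3)
```

Because each synthetic point lies on a segment between two real samples, the oversampled class stays within the minority class's feature range rather than duplicating epochs verbatim.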
References (8 in total)

Review 1.  Estimation of the global prevalence and burden of obstructive sleep apnoea: a literature-based analysis.

Authors:  Adam V Benjafield; Najib T Ayas; Peter R Eastwood; Raphael Heinzer; Mary S M Ip; Mary J Morrell; Carlos M Nunez; Sanjay R Patel; Thomas Penzel; Jean-Louis Pépin; Paul E Peppard; Sanjeev Sinha; Sergio Tufik; Kate Valentine; Atul Malhotra
Journal:  Lancet Respir Med       Date:  2019-07-09       Impact factor: 30.700

2.  AASM Scoring Manual Updates for 2017 (Version 2.4).

Authors:  Richard B Berry; Rita Brooks; Charlene Gamaldo; Susan M Harding; Robin M Lloyd; Stuart F Quan; Matthew T Troester; Bradley V Vaughn
Journal:  J Clin Sleep Med       Date:  2017-05-15       Impact factor: 4.062

3.  Reshaping Sleep Apnea Care: Time for Value-based Strategies.

Authors:  Jean-Louis Pépin; Sébastien Baillieul; Renaud Tamisier
Journal:  Ann Am Thorac Soc       Date:  2019-12

4.  Mandibular position and movements: Suitability for diagnosis of sleep apnoea.

Authors:  Jean-Benoit Martinot; Jean-Christian Borel; Valérie Cuthbert; Hervé Jean-Pierre Guénard; Stéphane Denison; Philip E Silkoff; David Gozal; Jean-Louis Pepin
Journal:  Respirology       Date:  2016-11-06       Impact factor: 6.424

Review 5.  Neural Control of the Upper Airway: Respiratory and State-Dependent Mechanisms.

Authors:  Leszek Kubin
Journal:  Compr Physiol       Date:  2016-09-15       Impact factor: 9.090

Review 6.  How the brainstem controls orofacial behaviors comprised of rhythmic actions.

Authors:  Jeffrey D Moore; David Kleinfeld; Fan Wang
Journal:  Trends Neurosci       Date:  2014-06-02       Impact factor: 13.837

7.  Mandibular Movements As Accurate Reporters of Respiratory Effort during Sleep: Validation against Diaphragmatic Electromyography.

Authors:  Jean-Benoît Martinot; Nhat-Nam Le-Dong; Valerie Cuthbert; Stephane Denison; Philip E Silkoff; Hervé Guénard; David Gozal; Jean-Louis Pepin; Jean-Christian Borel
Journal:  Front Neurol       Date:  2017-07-21       Impact factor: 4.003

8.  Assessment of Mandibular Movement Monitoring With Machine Learning Analysis for the Diagnosis of Obstructive Sleep Apnea.

Authors:  Jean-Louis Pépin; Clément Letesson; Nhat Nam Le-Dong; Antoine Dedave; Stéphane Denison; Valérie Cuthbert; Jean-Benoît Martinot; David Gozal
Journal:  JAMA Netw Open       Date:  2020-01-03
