Vincent W-S Tseng, Akane Sano, Dror Ben-Zeev, Rachel Brian, Andrew T Campbell, Marta Hauser, John M Kane, Emily A Scherer, Rui Wang, Weichen Wang, Hongyi Wen, Tanzeem Choudhury.
Abstract
Schizophrenia is a severe and complex psychiatric disorder with heterogeneous and dynamic multi-dimensional symptoms. Behavioral rhythms, such as the sleep rhythm, are usually disrupted in people with schizophrenia. As such, behavioral rhythm sensing with smartphones and machine learning can help better understand and predict their symptoms. Our goal is to predict fine-grained symptom changes with interpretable models. We computed rhythm-based features from 61 participants with 6,132 days of data and used multi-task learning to predict their ecological momentary assessment scores for 10 different symptom items. By taking into account both the similarities and differences between participants and symptoms, our multi-task learning models perform statistically significantly better than models trained with single-task learning for predicting patients' individual symptom trajectories, such as feeling depressed, social, and calm, and hearing voices. We also found different subtypes for each of the symptoms by applying unsupervised clustering to the feature weights in the models. Taken together, compared to the features used in previous studies, our rhythm features not only improved the models' prediction accuracy but also provided better interpretability of how patients' behavioral rhythms and the rhythms of their environments influence their symptom conditions. This will enable both patients and clinicians to monitor how these factors affect a patient's condition and how to mitigate their influence. As such, we envision that our solution enables early detection and early intervention before a patient's condition starts deteriorating, without requiring extra effort from patients and clinicians.
Year: 2020 PMID: 32934246 PMCID: PMC7492221 DOI: 10.1038/s41598-020-71689-1
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
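The abstract describes multi-task learning models that share information across patients and across symptoms. As a minimal, hypothetical sketch of that idea (not the authors' exact formulation), one common coupling is to fit one ridge regression per task while shrinking every task's weight vector toward a shared mean, so related tasks borrow statistical strength from each other:

```python
import numpy as np

def multitask_ridge(tasks, lam=1.0, n_iters=20):
    """Fit one linear model per task, regularizing each weight vector
    toward the shared mean weight vector (a simple multi-task coupling).
    `tasks` is a list of (X, y) pairs, one per patient or symptom."""
    d = tasks[0][0].shape[1]
    W = np.zeros((len(tasks), d))
    w_bar = np.zeros(d)
    for _ in range(n_iters):
        for t, (X, y) in enumerate(tasks):
            # closed-form minimizer of ||X w - y||^2 + lam * ||w - w_bar||^2
            A = X.T @ X + lam * np.eye(d)
            W[t] = np.linalg.solve(A, X.T @ y + lam * w_bar)
        w_bar = W.mean(axis=0)  # update the shared structure across tasks
    return W, w_bar
```

Larger `lam` pulls the per-task models closer together (more sharing); `lam = 0` recovers independent single-task regressions.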
Figure 1. Summary of the individual EMA scores. The error bars represent the standard deviations.
Figure 2. Comparison of the mean root-mean-square errors (RMSE) of different prediction models. The error bars represent the confidence intervals.
Figure 3. Post-hoc Tukey HSD pairwise comparisons of the mean RMSEs of the individual algorithms. The error bars represent the confidence intervals.
Figure 4. Predicted symptom trajectory of one patient for depressed, stressed, and hearing voices by MTL-patients models and the ground truth (the index represents the chronological order of each EMA response).
Figure 5. Predicted symptom trajectory of one patient for hopeful, harm, and seeing things by m-SVR (RBF) models and the ground truth (the index represents the chronological order of each EMA response).
Figure 6. The mean feature weight for each modality for different EMA items (the mean positive and negative weights are plotted separately).
Figure 7. The mean feature weight for each periodicity for different EMA items (the mean positive and negative weights are plotted separately).
Figure 8. The mean feature weight for each window length for different EMA items (the mean positive and negative weights are plotted separately).
Figure 9. Characteristics of patients in the two subtypes for the symptom Depressed, after applying K-means clustering to the absolute feature weights and computing the mean absolute weights for each category. The radar charts show the mean aggregated feature weights for the different (a) modalities, (b) periodicities, and (c) time-windows in each subtype.
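The subtyping step clusters each patient's vector of absolute feature weights. A minimal sketch of that step, assuming a plain k-means on synthetic weight vectors (the paper's preprocessing details are not reproduced here):

```python
import numpy as np

def kmeans(X, k=2, n_iters=50, seed=0):
    """Plain Lloyd's-algorithm k-means with farthest-point initialisation.
    Each row of X is one patient's vector of absolute feature weights."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        # next centre: the point farthest from all centres chosen so far
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(X[d2.argmax()])
    centers = np.array(centers)
    for _ in range(n_iters):
        # assign each patient to the nearest centre
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centre to the mean of its assigned patients
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

Averaging the absolute weights within each cluster then yields the per-subtype profiles shown in the radar charts.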
The top 10 predictive rhythm features and the associated mean feature weights for two different subtypes for symptom Depressed.
| Subtype 0 | Weight | Subtype 1 | Weight |
|---|---|---|---|
| #missed_calls | 0.046 | light | 0.058 |
| #SMS_sent | 0.042 | light | 0.057 |
| conversation_length | 0.041 | light | 0.057 |
| #incoming_calls | 0.038 | light | 0.057 |
| screen_on_time | 0.036 | light | 0.057 |
| screen_on_time | 0.036 | light | 0.056 |
| screen_on_time | 0.032 | light | 0.056 |
| #SMS_read | − 0.032 | screen_on_time | 0.051 |
| #outgoing_calls | 0.031 | screen_on_time | 0.051 |
| screen_on_time | 0.030 | screen_on_time | 0.050 |
The naming of the features follows the format [Modality][Rhythm Metric][Window Length], which denotes the modality of the sensor data, the rhythm metric, and the window length used for extracting the feature.
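The naming convention above can be sketched as a small generator over the three dimensions. The specific modality names, metric abbreviations, and the underscore separator below are illustrative assumptions, not the paper's exact strings:

```python
from itertools import product

# Illustrative subsets; the study uses 10 modalities, the rhythm metrics
# listed elsewhere in this entry, and windows over the previous 2-14 days.
MODALITIES = ["light", "screen_on_time", "conversation_length"]
METRICS = ["msentropy", "psd", "m10"]
WINDOWS = [2, 4, 6, 8, 10, 12, 14]

def feature_name(modality, metric, window_days):
    # [Modality]_[Rhythm Metric]_[Window Length]; separator is assumed
    return f"{modality}_{metric}_{window_days}d"

all_names = [feature_name(m, r, w) for m, r, w in product(MODALITIES, METRICS, WINDOWS)]
```

Enumerating the full cross-product in this way makes each weight in the trained models traceable back to a sensor, a rhythm metric, and a history length.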
Figure 10. CrossCheck system overview.
Summary of the sensing data collected in the study.
| Data type | Sensing data | Data description |
|---|---|---|
| Behavior | Acceleration | 3-axis acceleration from the mobile phone, with a sample rate of 50–100 Hz |
| | App usage | Number of apps used in the communication, entertainment, productivity, and social categories during every 15-min interval |
| | Call | Incoming and outgoing phone calls (and, for incoming calls, whether or not they were missed) |
| | SMS | Text messages received (and whether they were read), sent, and drafted |
| | Screen on/off | Timestamps when the screen was turned on and off |
| | Location | GPS (longitude and latitude) location of the user |
| | Conversation | The onset and duration of conversations |
| | Sleep | Sleep duration, and bed and wake times |
| Environment | Light | The ambient light intensity collected using the smartphone's light sensor |
| | Sound | The volume of ambient sound |
EMA questions used in the study.
| Dimension | Description |
|---|---|
| Depressed | Have you been |
| Seeing things | Have you been |
| Harm | Have you been worried about people trying to |
| Hearing voices | Have you been bothered by |
| Sleep | Have you been |
| Stressed | Have you been feeling |
| Think | Have you been able to |
| Hopefulness | Have you been |
| Social | Have you been |
| Calm | Have you been feeling |
Options: 0—not at all; 1—a little; 2—moderately; 3—extremely.
The different rhythm categories and the corresponding rhythm metrics.
| Periodicity | Rhythm Metrics |
|---|---|
| Ultradian | Multi-scale entropy, power spectrum density (with period less than 20 h) |
| Circadian | M10, L5, relative amplitude, deviation from template, interday stability, intraday variability, power spectrum density (with period greater than 20 h and less than 30 h) |
| Infradian | Power spectrum density (with period greater than 30 h) |
The categorization, namely ultradian, circadian, and infradian, is based on whether the rhythm’s periodicity is less than, equal to, or greater than 24 h.
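The power-spectrum-based categorization above can be sketched directly: estimate the dominant period of a behavioral signal via an FFT, then bucket it using the table's 20-h and 30-h cut-offs (the placement of the exact boundary values is an assumption):

```python
import numpy as np

def dominant_period_hours(signal, sample_hours=1.0):
    """Period (in hours) of the strongest non-DC peak in the power spectrum."""
    x = np.asarray(signal, dtype=float)
    psd = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(len(x), d=sample_hours)
    k = psd[1:].argmax() + 1  # skip the zero-frequency bin
    return 1.0 / freqs[k]

def rhythm_category(period_hours):
    """Bucket a period using the cut-offs from the table above."""
    if period_hours < 20:
        return "ultradian"
    if period_hours <= 30:
        return "circadian"
    return "infradian"
```

For example, two weeks of hourly samples with a 24-h cycle yield a dominant period of 24 h, which falls in the circadian band.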
The three dimensions used to characterize each feature and the different factors in each of the dimensions.
| Dimension | Factor |
|---|---|
| Modality | Acceleration, app usage, call, SMS, screen on/off, location, conversation, sleep, light, sound |
| Periodicity | Ultradian, circadian, infradian rhythms |
| Time-window | Previous 2, 4, 6, 8, 10, 12 and 14 days |
Summary of features used in previous work.
| Features | Description |
|---|---|
| Physical activity | Durations of the walking state, the stationary state, and the stationary-plus-in-vehicle state |
| Speech and conversational interaction | The number of independent conversations and their durations, as a proxy for social interaction; the ratio of detected human-voice labels among all inferred audio frames during a day (e.g., 10% human voice) |
| Location and mobility | Distance traveled, the number of places visited, and location entropy, computed both from the raw location data and from the centroid coordinates of visited places |
| Sleep | Sleep duration, sleep onset time, and wake time for each 24-h period, based on the longest period of inferred sleep from ambient light, audio amplitude, activity, and screen on/off |
| Phone usage, calls, and texting | The number of phone lock/unlock events and the duration the phone is unlocked; the number and duration of incoming and outgoing phone calls; and the number of incoming and outgoing SMS messages |
| Ambient environment | The mean audio amplitude, characterizing acoustic conditions ranging from quiet to loud environments, and the standard deviation of the audio amplitude |