Literature DB >> 35199063

Breaking away from labels: The promise of self-supervised machine learning in intelligent health.

Dimitris Spathis1, Ignacio Perez-Pozuelo2, Laia Marques-Fernandez3, Cecilia Mascolo1.   

Abstract

Medicine is undergoing an unprecedented digital transformation, as massive amounts of health data are being produced, gathered, and curated, ranging from in-hospital (e.g., intensive care unit [ICU]) to person-generated data (wearables). Annotating all these data for training purposes in order to feed to deep learning models for pattern recognition is impractical. Here, we discuss some exciting recent results of self-supervised learning (SSL) applications to high-resolution health signals. These examples leverage unlabeled data to learn meaningful representations that can generalize to situations where the ground truth is inadequate or simply infeasible to collect due to the high burden or associated costs. The most prominent bottleneck of deep learning today is access to labeled, carefully curated datasets, and self-supervision on health signals opens up new possibilities to eliminate data silos through general-purpose models that can transfer to low-resource environments and tasks.
© 2021 The Author(s).

Keywords:  biomedical informatics; health signals; machine learning; transfer learning

Year:  2022        PMID: 35199063      PMCID: PMC8848012          DOI: 10.1016/j.patter.2021.100410

Source DB:  PubMed          Journal:  Patterns (N Y)        ISSN: 2666-3899


Introduction

Underutilized medical data and the label gap

Medical data have the power to transform lives. Advances in the ways in which we collect, process, interpret, and use these data can save lives and transform our society. Although the overwhelming majority of current medical research focuses on clinical data (labs, imaging, vitals, etc.), the average person visits a doctor only around five times a year. Recent advances in wearable sensing and mobile computing, alongside their widespread and growing adoption, have created new pathways for collecting health and well-being data outside laboratory and hospital settings, in a longitudinal fashion. These devices can "fill the gaps" often found in traditional clinical data, opening up new research and commercial directions for large-scale lifestyle monitoring and providing sources of truth in nondisease scenarios. For example, millions of people worldwide use such devices to track their physical activity and sleep, with increasingly sophisticated predictive capabilities and a widening range of sensors monitoring these behaviors and activities.
Concurrent with this self-monitoring revolution, seemingly disparate forces such as mature open-source scientific software libraries, easier data crowdsourcing and labeling, and the repurposing of specialized hardware (graphics cards) have enabled dramatic improvements in predictive modeling. Many machine learning (ML) tasks have achieved impressive performance, ranging from object recognition in images to outperforming experts in breast cancer screening. The common denominator in all these cases has been the curation of large, high-quality datasets that allow models to exploit latent patterns and subsequently generalize to real-world scenarios. However, especially in medicine, where erroneous predictions can have grave consequences, the roll-out and adoption of such systems have met resistance, mostly over concerns about algorithmic interpretability.
Similar to how social networks learn our online behaviors, wearable and mobile devices monitor our activities in the real world. By tracking our sleep, steps, and eating and working habits, they create a holistic understanding of the most important components of our everyday health, until now only possible through subjective surveys. Although we recognize the value of such datasets, advances in ML for health and mobile sensing have not kept up with other areas. For example, over the last decade devices such as Fitbit or the iPhone have been collecting multi-modal sensor data at an unprecedented temporal resolution. However, effectively leveraging these datasets has presented many challenges, leading to these data being frequently overlooked for scientific and medical research. Central to this problem is obtaining quality annotations and ground truth, which can be costly, burdensome, and at times, even impossible, given the granularity of these data. In this article, we discuss the potential of self-supervised methods toward bridging the label gap in biomedical data.

Breaking away from labels

Supervised learning: Reaching its limits

Deep supervised learning requires substantial amounts of labeled samples to achieve good performance. (What constitutes an optimal dataset or label size remains an empirical question, which is why we do not quote exact numbers here; it depends on the complexity of the given problem and the model. There is evidence that accuracy plateaus faster with supervised models than with self-supervised ones, and that the gains are concentrated in the low-data regime.) In the best case, these manual labels can be obtained through crowdsourcing (cf. ImageNet), but in some settings labeling is virtually impossible. For example, annotating wearable sensor timeseries for human activity recognition a posteriori is not feasible without a video recording. On the other hand, because unlabeled data (e.g., all the images on the internet or the entire Fitbit user base) are considerably more heterogeneous and representative than most curated datasets, research interest in exploiting them has grown significantly. However, unsupervised learning is hard and, until recently, was less efficient than supervised learning. The first promising unsupervised studies on health signals employed the successful word2vec paradigm and reported results on a par with supervised models. However, the static vectors produced by these methods are limited compared with context-specific representations.

Self-supervised learning: The quest for the best data representation

A simple yet exciting emerging idea is to obtain labels "for free" from the input data x through various transformations and then use conventional supervised objectives to predict them. The representations obtained this way are meaningful for downstream tasks with limited labeled data and linear classifiers (see Figure 1). This approach has been coined self-supervised (or predictive) learning (SSL), because the supervision is learned directly from the data. (The terminology surrounding unsupervised learning and SSL is somewhat blurry. Unsupervised learning covers a wide range of models, from autoencoders and Boltzmann machines to principal component and cluster analysis. SSL can be seen as a subset of unsupervised learning in which supervisory signals are derived directly from the data; however, the two terms are sometimes used interchangeably.) Even before the term was coined, researchers handcrafted pretext tasks that exploited unlabeled data. The most common tasks involved predicting distorted versions of the spatial characteristics of image data by rescaling, rotating, patching, shuffling, colorizing, and inpainting missing parts.
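As a concrete illustration, a rotation-prediction pretext task can be sketched in a few lines of NumPy (the helper name `rotation_pretext` is hypothetical; a real pipeline would feed the outputs to a convolutional classifier):

```python
import numpy as np

def rotation_pretext(images, rng):
    """Obtain labels 'for free' from unlabeled images: rotate each image by
    0, 90, 180, or 270 degrees and use the rotation index as the label.
    A classifier trained on this task must learn spatial structure."""
    ks = rng.integers(0, 4, size=len(images))              # rotation class per image
    rotated = np.stack([np.rot90(img, k) for img, k in zip(images, ks)])
    return rotated, ks                                     # inputs, pseudo-labels

# Toy usage on random "images"; no human annotation is involved.
rng = np.random.default_rng(0)
images = rng.normal(size=(8, 32, 32))
x, y = rotation_pretext(images, rng)
```

Any such transformation whose parameters are known at generation time can serve as a supervisory signal in the same way.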
Figure 1

Self-supervised learning for health signals

Here, we illustrate the case of ECG signals and the prominent methods, which leverage unlabeled data with self-supervised learning.

(A) Contrastive training maximizes the agreement between the original and the distorted view (flipped, rotated, or other augmentations).

(B) Generative models such as GANs involve two networks that contest with each other in a game to generate more realistic data.

(C) Time-aware models try to guess whether the signal follows the arrow of time.

(D) Masked models hide part of the signal and challenge the model to predict the original one. The representations learned from these methods can be reused on downstream transfer learning tasks with linear models (blue box). Self-supervision is more label efficient in low-data regimes (top right graph).

However, one could argue that devising these increasingly complex pre-training tasks resembles the traditional feature engineering that neural networks promised to automate. Therefore, more generic recent methods have switched their focus from inventing single data transformations to comparing such views in the latent space, thereby offering elegant methods of implicit clustering between pseudo-positive and negative samples. Notably, SimCLR achieved, for the first time, performance on a par with supervised models by proposing a two-network training method for visual representations, which maximizes agreement between differently transformed views of the same sample via a contrastive cosine-similarity loss in the latent space. More recently, BYOL claimed better results even without negative pairs in its training objective, through a similar two-network approach. This is surprisingly similar to another family of models, generative adversarial networks (GANs), in which the objective draws from game-theoretic principles and two networks contest with each other in a game to generate more realistic data (see Figure 1).
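The core contrastive computation can be sketched in NumPy as a simplified InfoNCE objective (a sketch in the spirit of SimCLR's loss, not its exact 2N-view formulation; `info_nce` is a hypothetical name):

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """Contrastive loss over two batches of embeddings: for each row of z1,
    the same-index row of z2 is the positive; all other rows act as negatives.
    Agreement is measured by temperature-scaled cosine similarity."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)   # unit-normalize
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                              # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                    # positives sit on the diagonal
```

Embeddings of two augmented views of the same samples should score a lower loss than embeddings of unrelated batches, which is exactly the agreement-maximization behavior described above.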
A useful taxonomy groups SSL models into three categories: generative (e.g., autoencoders), contrastive (e.g., SimCLR), and generative-contrastive (e.g., GANs or adversarial autoencoders). The main difference across these categories is the objective, ranging from reconstruction and contrastive losses to distributional divergences. We expect to see more overlap between generative, adversarial, and contrastive training in the future. These methods have shown that they can indeed generalize to data beyond images; however, the data augmentations or objectives might need to be adapted when moving to a different type of input. Furthermore, approaches such as the arrow of time exploit the temporal, rather than spatial, information in the data, either by artificially reversing the input sequences so as to distinguish between the correct and the reverse order (see Figure 1) or by simply predicting the future. We believe that models that anticipate and forecast the future are more robust and generalizable (for example, across hospitals). This seems particularly effective in language tasks as well, where models such as GPT-3 outperform every other method simply by slicing the data so as to predict the next word.
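Among these ideas, the arrow-of-time pretext task is particularly easy to sketch for health signals (hypothetical helper, NumPy; windows are rows of a 2D array):

```python
import numpy as np

def arrow_of_time_batch(windows, rng):
    """Pretext task: reverse roughly half of the signal windows in time and
    label each window 1 (forward) or 0 (reversed). Solving the task forces a
    model to capture temporal structure, e.g. the asymmetry of ECG waveforms."""
    forward = rng.random(len(windows)) < 0.5          # which windows stay intact
    x = np.where(forward[:, None], windows, windows[:, ::-1])
    return x, forward.astype(int)                     # inputs, pseudo-labels
```

As with rotation prediction for images, the label is known by construction, so arbitrarily large unlabeled archives can be turned into training data.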

Transfer learning: The art of fine-tuning

Transfer learning is the natural application of SSL. The term transfer describes a set of methods for preserving and reusing previously acquired information, possibly in a slightly different domain. This stored information can accelerate the training of a downstream task, which usually has limited training data. Modern transfer learning either (a) uses the pre-trained network as a fixed feature extractor, training a linear model such as a logistic regression classifier on the self-supervised embeddings, or (b) fine-tunes the entire model further on the downstream task. This has shown remarkable results in the vision and language domains, where the learned embeddings can be applied directly to smaller datasets.
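Protocol (a), the linear probe, can be sketched in plain NumPy (`linear_probe` and the identity "encoder" below are illustrative stand-ins; in practice `embed` would be a frozen pre-trained network):

```python
import numpy as np

def linear_probe(embed, x_train, y_train, lr=0.5, steps=500):
    """Train only a logistic-regression head on frozen features produced by
    a pre-trained encoder `embed`; the encoder's weights are never updated."""
    z = np.hstack([embed(x_train), np.ones((len(x_train), 1))])  # add bias term
    w = np.zeros(z.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-z @ w))               # sigmoid predictions
        w -= lr * z.T @ (p - y_train) / len(y_train)   # logistic-loss gradient
    def predict(x_new):
        z_new = np.hstack([embed(x_new), np.ones((len(x_new), 1))])
        return (1.0 / (1.0 + np.exp(-z_new @ w)) > 0.5).astype(int)
    return predict

# Toy usage: an identity "encoder" and a linearly separable task.
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 4))
y = (x[:, 0] + x[:, 1] > 0).astype(float)
clf = linear_probe(lambda a: a, x, y)
```

The quality of the probe's accuracy is what papers report under "linear evaluation": it measures how linearly separable the self-supervised embeddings already are.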

Self-supervision for health signals

We will now present some recent results of self-supervised models applied to biomedical signal data (for a comprehensive view of this topic, the reader may be interested in Chowdhury and colleagues' review on SSL on medicine).

Learning generalized physiological representations

While anyone can download off-the-shelf pre-trained models to further customize vision or language tasks, this is not the case for health signals. To this end, we recently developed Step2Heart, a self-supervised model that exploits the multi-modal data of modern wearables to learn meaningful representations, which generalize to several outcomes with transfer learning. The model maps activity data to future heart rate (HR) responses (implicitly applying the arrow-of-time principle) and can be used as a feature extractor for wearable data. For pre-training, we used a joint quantile loss function that accounts for the long tails of HR data; downstream, we aggregated the window-level features to user-level ones and showcased the value captured by the learned embeddings through strong performance at inferring physiologically meaningful variables, ranging from anthropometrics to fitness, outperforming unimodal autoencoders and common biomarkers. For instance, the embeddings achieved an area under the curve (AUC) of 0.68 for cardio-fitness prediction and an AUC of 0.80 for physical activity energy expenditure. Obtaining these outcomes in large populations can be valuable for downstream health-related inferences that would normally be costly and burdensome (for example, a fitness test requires expensive laboratory treadmill equipment and respiration instruments). Additionally, low fitness is a stronger predictor of mortality than diabetes, hypertension, and smoking. Overall, this is one of the first multi-modal self-supervised models for behavioral and physiological data, with implications for large-scale health and lifestyle monitoring (open-source code and pre-trained models are available at https://github.com/sdimi/Step2heart).
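The quantile (pinball) loss underlying this kind of forecasting can be sketched as follows (a standard formulation; the exact joint loss used in Step2Heart may differ):

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss for quantile q: under-prediction is penalized
    by q and over-prediction by (1 - q), so fitting several quantiles jointly
    captures the long tails of heart-rate distributions."""
    err = y_true - y_pred
    return np.mean(np.maximum(q * err, (q - 1) * err))

def joint_quantile_loss(y_true, preds_by_q):
    """Sum of pinball losses over a set of predicted quantiles."""
    return sum(pinball_loss(y_true, p, q) for q, p in preds_by_q.items())
```

For q = 0.9, under-predicting a high heart rate costs nine times more than over-predicting it by the same margin, which is why multiple quantile heads describe the tails better than a single squared-error head.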

Improving human activity recognition

A staple task in mobile health is human activity recognition (distinguishing between walking, sitting, running, etc., through activity sensors), which is fundamental for higher-level health monitoring applications. With SelfHAR, we recently showed that large unlabeled mobile accelerometer datasets can be leveraged to complement small labeled ones. Our approach combines teacher-student self-training, which distills the knowledge of unlabeled and labeled datasets while allowing for data augmentation, with multi-task self-supervision, which learns robust signal-level representations by predicting distorted versions of the input. SelfHAR achieved up to a 12% increase in performance (F1 score) with the same number of model parameters, while using up to 10 times less labeled data than supervised approaches. This work showed how to effectively distill, filter, and use unlabeled data orders of magnitude larger than the supervised datasets. In a subsequent study, we examined the impact of combinations of timeseries transformations in SimCLR models for activity recognition.
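Signal-level transformations of this kind can be sketched for accelerometer windows (the helpers below are hypothetical illustrations; SelfHAR's exact transformation set differs):

```python
import numpy as np

# Simple transformations over an accelerometer window (timesteps x 3 axes);
# a multi-task model predicts, per window, which transformations were applied.
def jitter(x, rng, sigma=0.05):   # additive sensor noise
    return x + rng.normal(0.0, sigma, x.shape)

def scale(x, rng, sigma=0.1):     # per-axis amplitude scaling
    return x * rng.normal(1.0, sigma, (1, x.shape[1]))

def time_flip(x, rng):            # reverse the arrow of time
    return x[::-1]

def make_task_batch(x, rng, transforms):
    """Apply each transformation with probability 0.5 and return the
    transformed window plus one binary label per self-supervised task."""
    applied = rng.random(len(transforms)) < 0.5
    out = x.copy()
    for t, on in zip(transforms, applied):
        if on:
            out = t(out, rng)
    return out, applied.astype(int)
```

Each transformation yields its own binary prediction head, so a single unlabeled window produces several supervisory signals at once.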

Subject-aware biosignal representations

Datasets with a small number of subjects, such as electroencephalogram (EEG) collections, commonly exhibit high inter-subject variability. To address this, Cheng et al. proposed a self-supervised model with an adversarial subject identifier that minimizes subject-specific content. They developed domain-inspired augmentations, such as frequency-based perturbations, because the power in certain EEG frequency bands is highly correlated with different brain activities. Through this analysis, they found that temporally specific transformations (cutout and delay) are the most effective. Lastly, they investigated two scenarios: (1) using subject-specific distributions to compute the contrastive loss and (2) promoting subject invariance through adversarial training, finding that promoting subject invariance increases classification performance when training with a small number of subjects.
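A frequency-band perturbation of this flavor can be sketched with an FFT (an illustrative version, not the authors' exact augmentation; `perturb_band` is a hypothetical name):

```python
import numpy as np

def perturb_band(signal, fs, band, gain):
    """Rescale the power of one frequency band of a 1D signal, e.g. the EEG
    alpha band (8-12 Hz), leaving the rest of the spectrum untouched."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs < band[1])
    spectrum[mask] *= gain                      # attenuate or amplify the band
    return np.fft.irfft(spectrum, n=len(signal))
```

A gain below 1 attenuates the band and a gain of 0 removes it entirely, so the augmentation distorts exactly the physiologically meaningful part of the spectrum.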

Data-efficient cardiac arrhythmia classification

Electrocardiogram (ECG) data are ubiquitous in healthcare settings and increasingly common in personal devices such as the Apple Watch. Kiyasseh et al. proposed CLOCS, which leverages the temporal and spatial invariances of ECG leads based on two key observations: adjacent ECG segments of shorter duration continue to share context, and recordings from different leads (taken at the same time) reflect the same cardiac function and thus also share context. A new idea was to define a positive pair as representations of transformed instances that belong to the same patient; by doing so, the model implicitly personalizes the learned representations to each patient. Building on this, they designed a new contrastive objective that outperformed supervised and generic self-supervised methods such as BYOL (in terms of AUC), most notably with only 25% of the labeled training data.
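The patient-aware positive-pair definition can be sketched as a mask over a batch (hypothetical helper; CLOCS's full objective also exploits the temporal and lead invariances described above):

```python
import numpy as np

def positive_pair_mask(patient_ids):
    """Mark two segments in a batch as a positive pair when they come from
    the same patient, implicitly personalizing the learned representation;
    the remaining cross-patient pairs act as negatives."""
    ids = np.asarray(patient_ids)
    mask = ids[:, None] == ids[None, :]   # same-patient indicator matrix
    np.fill_diagonal(mask, False)         # a segment is not its own pair
    return mask
```

Such a mask slots directly into a contrastive loss: entries marked True are pulled together in the latent space, while False entries are pushed apart.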

Improving patient monitoring

Yèche et al. took the idea of inducing priors on contrastive losses a step further. They designed an objective that preserves the time dependency of the representations of timeseries segments and outperforms unsupervised and supervised methods in predicting intensive care unit (ICU) decompensation, length of stay, and sepsis onset (on the MIMIC dataset). The versatility of this approach is twofold: when fully unsupervised, it is competitive with supervised models, and when used in a supervised manner, it outperforms contrastive methods. Tonekaboni et al. independently arrived at a similar formulation by ensuring that, in the encoding space, the distribution of signals from within a neighborhood is distinguishable from the distribution of non-neighboring signals. Their models surpassed competitors such as the triplet loss and contrastive predictive coding in predicting diverse outcomes, ranging from atrial fibrillation to human activity recognition, assessed with clustering metrics such as the silhouette score and classification metrics such as the area under the precision-recall curve. They also showed better clusterability than other contrastive losses. Both studies highlighted the generality of such models, which can be reused in multiple downstream tasks.

Further implications

Impact on physician workload

Apart from improving accuracy, self-supervision has the potential to lower physicians' workload. Most healthcare systems are overstretched and staff suffer from burnout; predictive clinical tools could relieve some of this pressure, especially in increasingly aged populations, whose patients require more investigations, such as imaging or ECGs, and hence add to clinicians' overload. Employing clinical domain experts to label large datasets while they work in understaffed departments is an expensive and unrealistic expectation that contributes to the bottleneck in ML adoption. Radiologists were the first to acknowledge this vicious cycle, which is likely to propagate to other medical specialties. It is therefore of paramount importance to develop efficient models that require minimal, or zero, annotation and physician burden. Self-supervised models and the automation of time-consuming tasks are a feasible route toward more efficient medicine, potentially enabling more scalable patient screening, preventing delays in time-sensitive diagnoses and clinical management decisions, and hence improving prognoses. Investing resources only in supervised models will slow down the adoption of ML in clinical settings and will disproportionately benefit high-income countries. Instead, label-efficient models pre-trained on large populations could generalize to global settings, which is of special relevance now that hospitals are resuming elective operations while still fighting the COVID-19 pandemic.

Limitations

Self-supervised models tend to achieve remarkable transfer results with a fraction of the labeled data. However, training these models can be computationally prohibitive, since it has been empirically shown that they require more data and more training iterations. Additionally, SSL is a multi-step process, involving a first step of generating augmentations, then training the pretext model, and finally fine-tuning on the target dataset, which makes the entire pipeline more complicated than simple end-to-end supervised models. Moreover, tracking progress in the area of health signals is not trivial, since there are many different tasks, modalities, baselines, and evaluation metrics. For instance, the measured gap between SSL and supervised models is positive in some papers and negative in others, which seems to point to strong benchmark dependencies.

Conclusion and future outlook

We demonstrated the potential of models that learn meaningful representations directly from unstructured data and presented some recent results in the area of biomedical signals. The underlying challenge is to find the best representation for biomedical signals, with approaches ranging from generic ones, such as BYOL, to ones based on medical intuition and physiology. Given that we have strong statistical priors about the nature of these signals, we are particularly excited by the prospects of the latter. Zooming out, Andrew Ng has recently expressed alarm that there is a considerable gap between proof-of-concept models and actual real-life deployments, due to differences in sensors, protocols, or data collection methods: "In contrast, any human [doctor] can walk down the street to the other hospital and do just fine" (https://www.fastcompany.com/90630654/stanford-ai-experts-healthcare). Sequential transfer learning, as seen, for example, in Step2Heart or in numerous recent works, is probably the first step toward validating that learned representations can generalize across different tasks while being label-efficient. The next steps should focus on demonstrating how these models can perform equally well in changing environments (e.g., different hospitals, populations, or devices). Some exciting new approaches in this direction include disentangled autoencoders and meta-learning for domain generalization. In short, access to high-quality labeled datasets is the main bottleneck in transferring ML advances to critical fields such as healthcare, and SSL appears to be a feasible solution.
References (6 in total)

1.  An efficient learning procedure for deep Boltzmann machines.

Authors:  Ruslan Salakhutdinov; Geoffrey Hinton
Journal:  Neural Comput       Date:  2012-04-17       Impact factor: 2.026

2.  Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey.

Authors:  Longlong Jing; Yingli Tian
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2020-05-04       Impact factor: 6.226

3.  Large-scale physical activity data reveal worldwide activity inequality.

Authors:  Tim Althoff; Rok Sosič; Jennifer L Hicks; Abby C King; Scott L Delp; Jure Leskovec
Journal:  Nature       Date:  2017-07-10       Impact factor: 49.962

4. (Review) Best practices for analyzing large-scale health data from wearables and smartphone apps.

Authors:  Jennifer L Hicks; Tim Althoff; Rok Sosic; Peter Kuhar; Bojan Bostjancic; Abby C King; Jure Leskovec; Scott L Delp
Journal:  NPJ Digit Med       Date:  2019-06-03

5.  Forecasting adverse surgical events using self-supervised transfer learning for physiological signals.

Authors:  Hugh Chen; Scott M Lundberg; Gabriel Erion; Jerry H Kim; Su-In Lee
Journal:  NPJ Digit Med       Date:  2021-12-08

6.  Association of Cardiorespiratory Fitness With Long-term Mortality Among Adults Undergoing Exercise Treadmill Testing.

Authors:  Kyle Mandsager; Serge Harb; Paul Cremer; Dermot Phelan; Steven E Nissen; Wael Jaber
Journal:  JAMA Netw Open       Date:  2018-10-05