| Literature DB >> 35214317 |
Lauren J. Wong, Alan J. Michaels.
Abstract
Transfer learning is a pervasive technology in computer vision and natural language processing fields, yielding exponential performance improvements by leveraging prior knowledge gained from data with different distributions. However, while recent works seek to mature machine learning and deep learning techniques in applications related to wireless communications, a field loosely termed radio frequency machine learning, few have demonstrated the use of transfer learning techniques for yielding performance gains, improved generalization, or to address concerns of training data costs. With modifications to existing transfer learning taxonomies constructed to support transfer learning in other modalities, this paper presents a tailored taxonomy for radio frequency applications, yielding a consistent framework that can be used to compare and contrast existing and future works. This work offers such a taxonomy, discusses the small body of existing works in transfer learning for radio frequency machine learning, and outlines directions where future research is needed to mature the field.
Keywords: deep learning (DL); machine learning (ML); radio frequency machine learning (RFML); transfer learning (TL)
Year: 2022 PMID: 35214317 PMCID: PMC8875384 DOI: 10.3390/s22041416
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. The difference between traditional ML (a), in which a new model is trained on each domain/task pairing from random initialization, and TL (b), in which prior knowledge learned on one domain/task is used to support performance on a second domain and/or task where less (or no) labelled data are available.
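The contrast in Figure 1 can be sketched in a few lines of code. This is an illustration only, not from the paper: a logistic-regression "model" stands in for a deep network, the synthetic `shift` stands in for a domain change (e.g. a new channel), and all function and variable names here are hypothetical. The only point is the difference in initialization: training from scratch (Figure 1a) versus seeding the target model with source-domain weights (Figure 1b).

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shift):
    """Toy two-class features; `shift` mimics a domain change."""
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 4)) + np.outer(2.0 * y - 1.0, np.ones(4)) + shift
    return X, y

def fit(X, y, w=None, epochs=300, lr=0.1):
    """Logistic regression by gradient descent; `w` seeds the initialization."""
    w = np.zeros(X.shape[1]) if w is None else w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))  # sigmoid
        w -= lr * X.T @ (p - y) / len(y)                     # gradient step
    return w

def accuracy(w, X, y):
    return float(((X @ w > 0).astype(int) == y).mean())

# Source domain/task: plenty of labelled data, train to convergence.
X_src, y_src = make_data(1000, shift=0.0)
w_src = fit(X_src, y_src)

# Target domain: same task, shifted distribution, only 20 labels.
X_tgt, y_tgt = make_data(20, shift=0.5)
w_scratch  = fit(X_tgt, y_tgt)            # Figure 1a: random initialization
w_transfer = fit(X_tgt, y_tgt, w=w_src)   # Figure 1b: reuse source knowledge
```

In the TL setting the target model starts from features already useful for the task, which is why it can tolerate far fewer labelled target samples than training from scratch.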
Example RFML domain elements and tasks.
| Domain Elements | Tasks |
|---|---|
| SNR, AWGN, Ricean Fading, Multipath Effects, Doppler, Bandwidth, Sample Rate, Noise Floor, IQ Imbalance, Phase Imbalance, Non-linear Distortion | SEI, Localization, Signal Detection, End-to-End Communications, SNR Estimation, IQ Imbalance Estimation, Signal Compression |
Figure 2. The two-dimensional spectrum of “similarity” between source and target domains and tasks, with the origin (a) representing the same task and domain. For clarity, the settings that describe (a–i) are provided in Table 2.
Figure 3. The proposed TL taxonomy for RFML.
Representative examples for TL settings in RFML.
| TL Setting | Use Case | Source Domain | Source Task | Target Domain | Target Task |
|---|---|---|---|---|---|
| Environment Adaptation | Move a Tx/Rx pair equipped with an AMC model from an empty field to a city center | Single Tx/Rx pair, AWGN channel | Binary AMC (BPSK/QPSK) | Same Tx/Rx pair, Multipath channel | Binary AMC (BPSK/QPSK) |
| Platform Adaptation | Transfer an AMC model between UAVs | Single Rx, Many Tx, Fading channel w/ Doppler | Binary AMC (BPSK/QPSK) | Different Rx, Same Tx set, Fading channel w/ Doppler | Binary AMC (BPSK/QPSK) |
| Environment-Platform Co-Adaptation | Transfer an AMC model between a ground-station and a UAV | Single Rx, Many Tx, Multipath channel | Binary AMC (BPSK/QPSK) | Different Rx, Same Tx set, Fading channel w/ Doppler | Binary AMC (BPSK/QPSK) |
| Multitask Learning | Simultaneous signal detection and AMC | Single Tx/Rx pair, AWGN channel | Binary AMC (BPSK/QPSK) | Same Tx/Rx pair, AWGN channel | SNR Estimation |
| Sequential Learning | Addition of output class(es) to an existing model | Single Tx/Rx pair, AWGN channel | Binary AMC (BPSK/QPSK) | Same Tx/Rx pair, AWGN channel | Four-class AMC (BPSK/QPSK/16QAM/64QAM) |
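The settings in the table above differ only in *which* components change between source and target. As a minimal sketch (all names and field choices here are hypothetical, not from the paper), the domain-adaptation branches of the taxonomy can be distinguished mechanically by comparing environment, platform, and task fields; multitask and sequential learning are lumped together below because whether the tasks are learned simultaneously is not captured by static fields.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Setting:
    """A TL problem: what the model saw (source) vs. where it is reused (target)."""
    src_env: str   # channel/environment, e.g. "AWGN", "Multipath"
    src_hw: str    # platform/hardware, e.g. a Tx/Rx pair identifier
    src_task: str  # e.g. "AMC-2class"
    tgt_env: str
    tgt_hw: str
    tgt_task: str

def classify(s: Setting) -> str:
    """Label a setting per the taxonomy (decision order assumed for illustration)."""
    if s.src_task != s.tgt_task:
        return "Multitask/Sequential Learning"
    env_changed = s.src_env != s.tgt_env
    hw_changed = s.src_hw != s.tgt_hw
    if env_changed and hw_changed:
        return "Environment-Platform Co-Adaptation"
    if env_changed:
        return "Environment Adaptation"
    if hw_changed:
        return "Platform Adaptation"
    return "Traditional ML (same domain and task)"

# First row of the table: same Tx/Rx pair and task, AWGN -> Multipath.
print(classify(Setting("AWGN", "TxRx-A", "AMC-2class",
                       "Multipath", "TxRx-A", "AMC-2class")))
# prints: Environment Adaptation
```

This mirrors the structure of Figure 3: the task axis separates multitask/sequential learning from domain adaptation, and within domain adaptation the environment and platform axes pick the branch.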
The settings that describe points (a–i) on the two-dimensional spectrum of “similarity” between source and target domains and tasks shown in Figure 2.
| Setting | Description |
|---|---|
| (a) | The traditional ML setting where the source and target domains and tasks are the same. |
| (b) | The TL setting in which learned features from one domain are used to support performing the same task in a second domain. For example, using features learned to perform AMC in an AWGN channel to support performing AMC in a fading channel. |
| (c) | The setting in which source and target domains are so dissimilar that TL is unsuccessful, despite the source and target tasks being the same. |
| (d) | The TL setting in which learned features from one task are used to support a second task, while the source and target domains are the same. For example, using features learned to perform AMC to support SEI with the source and target domains being the same. |
| (e) | Likely the most challenging TL setting in which learned features from one domain and task are used to support performing a second task in a new domain. For example, using features learned to perform AMC in an AWGN channel to support performing SEI in a fading channel. |
| (f) | The setting in which source and target domains are so dissimilar that TL is unsuccessful, although the source and target tasks are somewhat similar. |
| (g) | The setting in which source and target tasks are so dissimilar that TL is unsuccessful, despite the source and target domains being the same. |
| (h) | The setting in which source and target tasks are so dissimilar that TL is unsuccessful, despite the source and target domains being somewhat similar. |
| (i) | The setting in which both the source and target tasks and the source and target domains are dissimilar, preventing successful TL. |