Ziqing Liu¹,², Haiyang He³, Shixing Yan³, Yong Wang⁴, Tao Yang², Guo-Zheng Li¹.
Abstract
BACKGROUND: Traditional Chinese medicine (TCM) has been shown to be an effective way to manage advanced lung cancer, and accurate syndrome differentiation is crucial to treatment. Documented evidence from TCM treatment cases and progress in artificial intelligence technology are enabling the development of intelligent TCM syndrome differentiation models, which are expected to extend the benefits of TCM to more lung cancer patients.
Keywords: deep learning; lung cancer; medical record; model fusion; syndrome differentiation; traditional Chinese medicine
Year: 2020 PMID: 32543445 PMCID: PMC7327597 DOI: 10.2196/17821
Source DB: PubMed Journal: JMIR Med Inform
Figure 1. Framework of the end-to-end traditional Chinese medicine syndrome differentiation model.
Figure 2. Schematic of the deep learning–based multilabel classifier.
Table 1. TCM syndrome factors for lung cancer and their frequencies.

| Syndrome factor | Frequency |
|---|---|
| Yin deficiency | 1069 |
| Qi deficiency | 1052 |
| Phlegm | 1036 |
| Stasis | 1035 |
| Cancer toxin | 766 |
| Irascibility | 522 |
| Wind | 294 |
| Thirst | 79 |
| Dampness | 72 |
| Yang deficiency | 27 |
| Qi stagnation | 19 |
| Blood deficiency | 6 |
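The 12 syndrome factors above form a multilabel target: a single medical record can carry several factors at once. A minimal sketch of how such label sets are typically binarized into a 0/1 matrix for a multilabel classifier — the three example records below are invented for illustration, not taken from the paper's case database:

```python
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical label sets for three records (illustrative only).
records = [
    {"Yin deficiency", "Phlegm"},
    {"Qi deficiency", "Stasis", "Cancer toxin"},
    {"Yin deficiency"},
]

# The 12 syndrome factors from Table 1, in a fixed column order.
factors = [
    "Yin deficiency", "Qi deficiency", "Phlegm", "Stasis",
    "Cancer toxin", "Irascibility", "Wind", "Thirst",
    "Dampness", "Yang deficiency", "Qi stagnation", "Blood deficiency",
]

mlb = MultiLabelBinarizer(classes=factors)
Y = mlb.fit_transform(records)  # (3 records, 12 factors) 0/1 matrix
print(Y.shape)  # (3, 12)
print(Y[0])     # 1s in the "Yin deficiency" and "Phlegm" columns
```

Each row of `Y` is then the prediction target for one record, which is what the classifiers in the tables below are trained to reproduce.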
Table 2. Character encoding–based multilabel classification results.

| Model | Precision | Recall | F1 score | Hamming loss | Mean average precision | AUCᵃ |
|---|---|---|---|---|---|---|
| fastText | 0.8188 | 0.7923 | 0.8053 | 0.1202 | 0.8164 | 0.9211 |
| Text-CNNᵇ | 0.8327 | 0.8342 | 0.8334 | 0.1042 | 0.8634 | 0.9472 |
| Text-RNNᶜ | 0.8403 | 0.8240 | 0.8321 | 0.1231 | 0.8731 | 0.9021 |
| RCNNᵈ | 0.8467 | 0.8352 | 0.8409 | 0.1005 | 0.8842 | 0.9324 |
| Text-HANᵉ | 0.8314 | 0.8552 | 0.8431 | 0.0990 | 0.8361 | 0.9261 |
| | | | | | | |
| fastText | 0.8447 | 0.8447 | 0.8447 | 0.0990 | 0.8752 | 0.9520 |
| Text-CNN | 0.8496 | 0.8505 | 0.8500 | 0.1094 | 0.8845 | 0.9399 |
| Text-RNN | 0.8267 | 0.8650 | 0.8454 | 0.1232 | 0.8010 | 0.9321 |
| RCNN | 0.8652 | 0.8648 | 0.8650 | 0.0987 | 0.9056 | 0.9466 |
| Text-HAN | 0.8580 | 0.8774 | 0.8676 | 0.0836 | 0.9022 | 0.9602 |

ᵃAUC: area under the curve.
ᵇText-CNN: text-convolutional neural network.
ᶜText-RNN: text-recurrent neural network.
ᵈRCNN: recurrent convolutional neural network.
ᵉText-HAN: text-hierarchical attention network.
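The metrics reported in these tables (precision, recall, F1 score, Hamming loss) are standard multilabel measures. A small sketch of how they can be computed with scikit-learn on a toy prediction matrix — the values below are illustrative and unrelated to the paper's results:

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score, hamming_loss

# Toy ground-truth and predicted label matrices: 4 records x 3 factors.
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0],
                   [0, 0, 1]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [1, 1, 1],
                   [0, 0, 1]])

# Micro-averaging pools all label decisions before computing each ratio.
p = precision_score(y_true, y_pred, average="micro")
r = recall_score(y_true, y_pred, average="micro")
f1 = f1_score(y_true, y_pred, average="micro")
hl = hamming_loss(y_true, y_pred)  # fraction of label bits that are wrong

print(p, r, f1, hl)
```

Note that a lower Hamming loss is better, while the other metrics improve as they approach 1 — which is why the best rows in the tables pair high F1 with low Hamming loss.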
Table 3. Word encoding–based multilabel classification results.

| Model | Precision | Recall | F1 score | Hamming loss | Mean average precision | AUCᵃ |
|---|---|---|---|---|---|---|
| fastText | 0.8376 | 0.8815 | 0.8590 | 0.040 | 0.8651 | 0.9810 |
| Text-CNNᵇ | 0.8241 | 0.8520 | 0.8378 | 0.0990 | 0.8468 | 0.9395 |
| Text-RNNᶜ | 0.8403 | 0.8240 | 0.8321 | 0.0960 | 0.8679 | 0.9403 |
| RCNNᵈ | 0.8461 | 0.8659 | 0.8559 | 0.0832 | 0.8532 | 0.9321 |
| Text-HANᵉ | 0.8367 | 0.8505 | 0.8435 | 0.0970 | 0.8366 | 0.9260 |
| | | | | | | |
| fastText | 0.8690 | 0.8760 | 0.8725 | 0.033 | 0.8752 | 0.9520 |
| Text-CNN | 0.8635 | 0.8338 | 0.8484 | 0.0886 | 0.8740 | 0.9479 |
| Text-RNN | 0.8377 | 0.8783 | 0.8575 | 0.0782 | 0.9052 | 0.9640 |
| RCNN | 0.8875 | 0.8548 | 0.8708 | 0.0532 | 0.9220 | 0.9632 |
| Text-HAN | 0.8648 | 0.8857 | 0.8751 | 0.0789 | 0.9210 | 0.9575 |

ᵃAUC: area under the curve.
ᵇText-CNN: text-convolutional neural network.
ᶜText-RNN: text-recurrent neural network.
ᵈRCNN: recurrent convolutional neural network.
ᵉText-HAN: text-hierarchical attention network.
Table 4. Fusion models for multilabel classification.

| Fusion model | Precision | Recall | F1 score | Hamming loss | Mean average precision | AUCᵃ |
|---|---|---|---|---|---|---|
| Text-CNNᵇ and Text-RNNᶜ | 0.8898 | 0.8648 | 0.8771 | 0.0432 | 0.8836 | 0.9432 |
| Text-CNN and Text-HANᵈ | 0.8905 | 0.8732 | 0.8818 | 0.0521 | 0.8876 | 0.9524 |
| Text-RNN and Text-HAN | 0.8890 | 0.8635 | 0.8761 | 0.0305 | 0.8968 | 0.9687 |
| Text-CNN, Text-RNN, and Text-HAN | 0.8920 | 0.8890 | 0.8884 | 0.0312 | 0.9012 | 0.9618 |

ᵃAUC: area under the curve.
ᵇText-CNN: text-convolutional neural network.
ᶜText-RNN: text-recurrent neural network.
ᵈText-HAN: text-hierarchical attention network.
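The fusion rows above combine the outputs of several single classifiers. One common fusion scheme — shown here only as an assumption-laden sketch, since the paper's exact fusion rule is not reproduced in this record — is to average each model's predicted per-label probabilities and then threshold:

```python
import numpy as np

def fuse_predictions(prob_matrices, threshold=0.5):
    """Average per-label probabilities from several models, then threshold.

    prob_matrices: list of (n_records, n_labels) arrays, one per model.
    Returns a 0/1 label matrix of the same shape.
    """
    avg = np.mean(prob_matrices, axis=0)
    return (avg >= threshold).astype(int)

# Hypothetical probabilities from two models for 2 records x 3 labels.
cnn_probs = np.array([[0.9, 0.2, 0.6],
                      [0.1, 0.8, 0.4]])
han_probs = np.array([[0.7, 0.4, 0.3],
                      [0.3, 0.9, 0.8]])

fused = fuse_predictions([cnn_probs, han_probs])
print(fused)  # [[1 0 0]
              #  [0 1 1]]
```

Averaging lets models with complementary errors correct each other, which is consistent with the fused rows in Table 4 outperforming the single models in Tables 2 and 3.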