Jia-Ming Liu, Mingyu You, Zheng Wang, Guo-Zheng Li, Xianghuai Xu, Zhongmin Qiu.
Abstract
BACKGROUND: Cough is an essential symptom in respiratory diseases. For measuring cough severity, an accurate and objective cough monitor is widely desired by the respiratory disease community. This paper introduces a better-performing algorithm, the pretrained deep neural network (DNN), to the cough classification problem, which is a key step in such a cough monitor.
Year: 2015 PMID: 26606168 PMCID: PMC4660085 DOI: 10.1186/1472-6947-15-S4-S2
Source DB: PubMed Journal: BMC Med Inform Decis Mak ISSN: 1472-6947 Impact factor: 2.796
Figure 1. A simple example of an RBM with 3 visible units and 4 hidden units.
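The RBM in Figure 1 can be made concrete with a small numerical sketch. The weight matrix, biases, and input vector below are illustrative assumptions, not values from the paper; the energy function and conditional sampling rules are the standard binary-RBM definitions:

```python
import numpy as np

rng = np.random.default_rng(0)

# An RBM with 3 visible and 4 hidden binary units, as in Figure 1.
# W, b (visible bias), and c (hidden bias) are randomly initialized
# here purely for illustration.
n_visible, n_hidden = 3, 4
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
b = np.zeros(n_visible)
c = np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def energy(v, h):
    # E(v, h) = -v^T W h - b^T v - c^T h
    return -(v @ W @ h) - b @ v - c @ h

def sample_h_given_v(v):
    # P(h_j = 1 | v) = sigmoid(c_j + v^T W[:, j])
    p = sigmoid(c + v @ W)
    return p, (rng.random(n_hidden) < p).astype(float)

def sample_v_given_h(h):
    # P(v_i = 1 | h) = sigmoid(b_i + W[i, :] h)
    p = sigmoid(b + W @ h)
    return p, (rng.random(n_visible) < p).astype(float)

v = np.array([1.0, 0.0, 1.0])   # an example visible configuration
p_h, h = sample_h_given_v(v)
print("P(h|v) =", p_h)
print("E(v,h) =", energy(v, h))
```

Alternating between the two sampling functions is one step of Gibbs sampling, which contrastive-divergence training relies on.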
Algorithm description for training the deep belief network.

| Input: Training data D and the number of layers K. |
|---|
| Output: The structure and learned initialization parameters of the DNN. |
| 1. Learn the parameters of the first-layer RBM from D. |
| For k = 2 to K: |
| 2. Initialize the k-th layer RBM by unrolling the (k-1)-th layer RBM to the k-th layer, using its parameters as the starting point. |
| 3. Refine the parameters of the k-th layer RBM on the data vectors generated from the (k-1)-th layer. |
| Return: Structure and parameters of the stacked RBMs. |

Greedy training process for the deep belief network.
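The greedy layerwise procedure above can be sketched in NumPy. The CD-1 update rule, epoch count, learning rate, layer sizes, and synthetic data below are illustrative assumptions rather than the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=5, lr=0.1):
    """Train one RBM with 1-step contrastive divergence (CD-1)."""
    n_visible = data.shape[1]
    W = rng.normal(scale=0.01, size=(n_visible, n_hidden))
    b = np.zeros(n_visible)
    c = np.zeros(n_hidden)
    for _ in range(epochs):
        v0 = data
        p_h0 = sigmoid(c + v0 @ W)                       # positive phase
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        p_v1 = sigmoid(b + h0 @ W.T)                     # reconstruction
        p_h1 = sigmoid(c + p_v1 @ W)                     # negative phase
        # CD-1 gradient: positive statistics minus negative statistics
        W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(data)
        b += lr * (v0 - p_v1).mean(axis=0)
        c += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b, c

def pretrain_dbn(data, layer_sizes):
    """Greedy layerwise pretraining: each RBM's hidden activations
    become the training data for the next RBM in the stack."""
    params, x = [], data
    for n_hidden in layer_sizes:
        W, b, c = train_rbm(x, n_hidden)
        params.append((W, b, c))
        x = sigmoid(c + x @ W)   # propagate data up to the next layer
    return params

data = (rng.random((100, 8)) > 0.5).astype(float)  # toy binary data
params = pretrain_dbn(data, [6, 4])
print([W.shape for W, b, c in params])  # [(8, 6), (6, 4)]
```

The returned weights and biases would then initialize the DNN before supervised fine-tuning, which is the point of the pretraining step.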
Figure 2. The training process of the combined DNN-HMM system.
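In hybrid DNN-HMM systems of this kind, the DNN's per-frame state posteriors are typically converted into HMM emission scores by dividing by the state priors (the "scaled likelihood" trick). A minimal sketch with made-up posterior and prior values, assuming three HMM states:

```python
import numpy as np

# P(state | frame) from a DNN, one row per frame; the numbers here
# are illustrative, not from the paper.
posteriors = np.array([[0.7, 0.2, 0.1],
                       [0.1, 0.8, 0.1],
                       [0.2, 0.2, 0.6]])
# P(state): relative state frequencies estimated from training data.
priors = np.array([0.5, 0.3, 0.2])

# Bayes' rule: P(frame | state) is proportional to P(state | frame) / P(state).
scaled_likelihoods = posteriors / priors
log_emissions = np.log(scaled_likelihoods)  # what Viterbi decoding consumes
print(np.round(scaled_likelihoods, 2))
```

The log emission scores replace the GMM likelihoods of a conventional GMM-HMM during decoding.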
Number of samples in each set.

| | Training set | Patient dependent test set | Patient independent test set |
|---|---|---|---|
| Cough samples | 3873 | 3119 | 1742 |
| Non-cough samples | 2347 | 18862 | 8161 |
Figure 3. Performances on the patient dependent test set. The results are shown as a function of the number of layers and the number of hidden units per layer. The baseline performance was generated from a conventional GMM-HMM model.
Results of the pretrained neural network on the patient dependent (PD) test set.
| # layers | # hidden units | F1 | | | | |
|---|---|---|---|---|---|---|
| 1 | 512 | 0.804 | 0.906 | 0.678 | 0.855 | 0.892 |
| 1 | 1024 | 0.793 | 0.917 | 0.691 | 0.855 | 0.899 |
| 1 | 1536 | 0.799 | 0.915 | 0.692 | 0.857 | 0.899 |
| 1 | 2048 | 0.794 | 0.917 | 0.693 | 0.856 | 0.9 |
| 2 | 512 | 0.793 | 0.914 | 0.685 | 0.853 | 0.896 |
| 2 | 1024 | 0.794 | 0.919 | 0.695 | 0.857 | 0.901 |
| 2 | 1536 | 0.803 | 0.916 | 0.695 | 0.859 | 0.9 |
| 2 | 2048 | 0.804 | ||||
| 3 | 512 | 0.792 | 0.912 | 0.682 | 0.852 | 0.895 |
| 3 | 1024 | 0.793 | 0.914 | 0.686 | 0.853 | 0.897 |
| 3 | 1536 | 0.911 | 0.691 | 0.861 | 0.897 | |
| 3 | 2048 | 0.8 | 0.915 | 0.691 | 0.857 | 0.898 |
| 4 | 512 | 0.791 | 0.916 | 0.688 | 0.853 | 0.898 |
| 4 | 1024 | 0.803 | 0.914 | 0.691 | 0.858 | 0.898 |
| 4 | 1536 | 0.801 | 0.916 | 0.694 | 0.859 | 0.9 |
| 4 | 2048 | 0.782 | 0.921 | 0.693 | 0.852 | 0.902 |
| Baseline Method | 0.945 | 0.767 | 0.563 | 0.856 | 0.792 | |
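The F1 scores reported in these tables are the harmonic mean of precision and recall. A minimal computation, using hypothetical cough/non-cough confusion counts rather than the paper's data:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)   # fraction of predicted coughs that are real
    recall = tp / (tp + fn)      # fraction of real coughs that are detected
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for a binary cough / non-cough classifier.
print(round(f1_score(tp=900, fp=150, fn=250), 3))  # 0.818
```

Because the test sets are heavily imbalanced toward non-cough samples, F1 is a more informative summary than raw accuracy here.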
Results of the randomly initialized neural network on the patient dependent (PD) test set.
| # layers | # hidden units | F1 | | | | |
|---|---|---|---|---|---|---|
| 1 | 512 | 0.768 | 0.909 | 0.838 | 0.889 | |
| 1 | 1024 | 0.769 | 0.908 | 0.661 | 0.839 | 0.888 |
| 1 | 1536 | 0.783 | 0.903 | 0.661 | 0.843 | 0.886 |
| 1 | 2048 | 0.902 | 0.66 | 0.885 | ||
| 2 | 512 | 0.746 | 0.659 | 0.83 | 0.89 | |
| 2 | 1024 | 0.759 | 0.913 | 0.664 | 0.836 | |
| 2 | 1536 | 0.744 | 0.912 | 0.653 | 0.828 | 0.888 |
| 2 | 2048 | 0.745 | 0.912 | 0.654 | 0.828 | 0.888 |
| 3 | 512 | 0.755 | 0.905 | 0.648 | 0.83 | 0.884 |
| 3 | 1024 | 0.745 | 0.912 | 0.654 | 0.829 | 0.888 |
| 3 | 1536 | 0.735 | 0.908 | 0.641 | 0.821 | 0.883 |
| 3 | 2048 | 0.746 | 0.909 | 0.65 | 0.828 | 0.886 |
| 4 | 512 | 0.759 | 0.897 | 0.637 | 0.828 | 0.877 |
| 4 | 1024 | 0.736 | 0.905 | 0.637 | 0.82 | 0.881 |
| 4 | 1536 | 0.744 | 0.901 | 0.636 | 0.823 | 0.879 |
| 4 | 2048 | 0.693 | 0.88 | 0.573 | 0.787 | 0.853 |
Figure 4. Performances on the patient independent test set. The settings are the same as in Figure 3, except that these results are generated from the patient independent test set.
Results of the pretrained neural network on the patient independent (PI) test set.
| # layers | # hidden units | F1 | | | | |
|---|---|---|---|---|---|---|
| 1 | 512 | 0.823 | 0.904 | 0.724 | 0.864 | 0.89 |
| 1 | 1024 | 0.83 | 0.911 | 0.738 | 0.87 | |
| 1 | 1536 | 0.828 | 0.909 | 0.735 | 0.869 | 0.895 |
| 1 | 2048 | 0.818 | 0.913 | 0.735 | 0.865 | |
| 2 | 512 | 0.909 | ||||
| 2 | 1024 | 0.822 | 0.91 | 0.733 | 0.866 | 0.894 |
| 2 | 1536 | 0.824 | 0.912 | 0.736 | 0.868 | 0.896 |
| 2 | 2048 | 0.816 | 0.911 | 0.731 | 0.864 | 0.894 |
| 3 | 512 | 0.827 | 0.907 | 0.731 | 0.867 | 0.893 |
| 3 | 1024 | 0.835 | 0.904 | 0.73 | 0.869 | 0.892 |
| 3 | 1536 | 0.833 | 0.904 | 0.73 | 0.869 | 0.892 |
| 3 | 2048 | 0.831 | 0.906 | 0.732 | 0.869 | 0.893 |
| 4 | 512 | 0.813 | 0.91 | 0.727 | 0.862 | 0.893 |
| 4 | 1024 | 0.83 | 0.905 | 0.73 | 0.867 | 0.892 |
| 4 | 1536 | 0.829 | 0.909 | 0.736 | 0.869 | 0.895 |
| 4 | 2048 | 0.799 | 0.726 | 0.857 | 0.894 | |
| Baseline method | 0.971 | 0.765 | 0.632 | 0.868 | 0.801 | |
Results of the randomly initialized neural network on the patient independent (PI) test set.
| # layers | # hidden units | F1 | Mic. Ave. | | | |
|---|---|---|---|---|---|---|
| 1 | 512 | 0.769 | 0.903 | 0.692 | 0.836 | 0.879 |
| 1 | 1024 | 0.788 | 0.897 | 0.843 | 0.878 | |
| 1 | 1536 | 0.889 | 0.693 | 0.875 | ||
| 1 | 2048 | 0.803 | 0.891 | 0.694 | 0.876 | |
| 2 | 512 | 0.741 | 0.684 | 0.825 | 0.88 | |
| 2 | 1024 | 0.765 | 0.902 | 0.688 | 0.833 | 0.878 |
| 2 | 1536 | 0.747 | 0.905 | 0.682 | 0.826 | 0.877 |
| 2 | 2048 | 0.77 | 0.904 | 0.693 | 0.837 | 0.88 |
| 3 | 512 | 0.763 | 0.897 | 0.68 | 0.83 | 0.874 |
| 3 | 1024 | 0.755 | 0.908 | 0.691 | 0.832 | |
| 3 | 1536 | 0.757 | 0.905 | 0.688 | 0.831 | 0.879 |
| 3 | 2048 | 0.753 | 0.904 | 0.683 | 0.828 | 0.877 |
| 4 | 512 | 0.765 | 0.886 | 0.666 | 0.826 | 0.865 |
| 4 | 1024 | 0.759 | 0.9 | 0.681 | 0.829 | 0.875 |
| 4 | 1536 | 0.771 | 0.893 | 0.679 | 0.832 | 0.872 |
| 4 | 2048 | 0.709 | 0.894 | 0.642 | 0.801 | 0.861 |