
Learning Using Concave and Convex Kernels: Applications in Predicting Quality of Sleep and Level of Fatigue in Fibromyalgia.

Elyas Sabeti1,2, Jonathan Gryak1, Harm Derksen3, Craig Biwer1, Sardar Ansari1,2, Howard Isenstein4, Anna Kratz5, Kayvan Najarian1,2,6,7.   

Abstract

Fibromyalgia is a medical condition characterized by widespread muscle pain and tenderness and is often accompanied by fatigue and alterations in sleep, mood, and memory. Poor sleep quality and fatigue, as prominent characteristics of fibromyalgia, have a direct impact on patient behavior and quality of life. As such, the detection of extreme cases of sleep quality and fatigue level is a prerequisite for any intervention that can improve sleep quality and reduce fatigue level for people with fibromyalgia and enhance their daytime functionality. In this study, we propose a new supervised machine learning method called Learning Using Concave and Convex Kernels (LUCCK). This method employs similarity functions whose convexity or concavity can be configured so as to determine a model for each feature separately, and then uses this information to reweight the importance of each feature proportionally during classification. The data used for this study was collected from patients with fibromyalgia and consisted of blood volume pulse (BVP), 3-axis accelerometer, temperature, and electrodermal activity (EDA), recorded by an Empatica E4 wristband over the course of several days, as well as a self-reported survey. Experiments on this dataset demonstrate that the proposed machine learning method outperforms conventional machine learning approaches in detecting extreme cases of poor sleep and fatigue in people with fibromyalgia.


Keywords:  Empatica E4; Learning Using Concave and Convex Kernels; fibromyalgia; self-reported survey

Year:  2019        PMID: 33267156      PMCID: PMC7514931          DOI: 10.3390/e21050442

Source DB:  PubMed          Journal:  Entropy (Basel)        ISSN: 1099-4300            Impact factor:   2.524


1. Introduction

Fibromyalgia is a medical condition characterized by widespread muscle pain and tenderness that is typically accompanied by a constellation of other symptoms, including fatigue and poor sleep [1,2,3,4,5,6,7,8,9]. Poor sleep, which is a cardinal characteristic of fibromyalgia, is strongly related to greater pain and fatigue, and lower quality of life [10,11,12,13,14,15,16]. As a result, any intervention that can improve sleep quality may enhance daytime functionality and reduce fatigue in people with fibromyalgia. Studies of sleep in fibromyalgia often rely on self-reported measures of sleep or polysomnography. While easy to administer, self-reported measures of sleep demonstrate limited reliability and validity in terms of their correspondence with objective measures of sleep. In contrast, polysomnography is considered the gold standard of objective sleep measurement; however, it is expensive, difficult to administer, especially on a large scale, and may lack ecological validity. Autonomic nervous system (ANS) imbalance during sleep has been implicated as a mechanism underlying unrefreshed sleep in fibromyalgia. ANS activity can be assessed unobtrusively through ambulatory measures of heart rate variability (HRV) and electrodermal activity (EDA) [17,18]. Wearable devices such as the Empatica E4 are able to directly, continuously, and unobtrusively measure autonomic functioning such as EDA and HRV [19,20,21,22]. In the literature, there are few studies in which machine learning methods are used for classification or prediction of conditions related to fibromyalgia, and none of them use physiological signals. A recent survey paper [23] summarizes the various types of machine learning methods that have been used in pain research, including fibromyalgia.
Previously, using data from 26 individuals (14 individuals with fibromyalgia and 12 healthy controls), the relative performance of machine learning methods for classification of individuals with and without pain using neuroimaging and self-reported data has been compared [24]. In another study using MRI images of 59 subjects, support vector machine (SVM) and decision tree models were used to first distinguish healthy control patients from those with fibromyalgia or chronic fatigue syndrome, and then differentiate fibromyalgia from chronic fatigue syndrome [25]. In [26], an SVM trained on fMRI images was used to distinguish fibromyalgia patients from healthy controls. The combination of fMRI with multivariate pattern analysis has also been investigated in classifying fibromyalgia patients, rheumatoid arthritis patients, and healthy controls [27]. Psychopathologic features within an AdaBoost classifier have also been employed for classification of patients with fibromyalgia and arthritis [28]. In another recent work [29], secondary analysis of gene expression data from 28 patients with fibromyalgia and 19 healthy controls was used to distinguish between these two groups. In this study, our immediate interest is to predict extreme cases of fatigue and poor sleep in people with fibromyalgia. For such an analysis, we use self-reported quality of sleep and fatigue severity, along with data continuously collected by the Empatica E4 to measure autonomic nervous system activity during sleep (Section 2). These signals are preprocessed to remove noise and other artifacts as described in Section 3.1. After preprocessing, a number of mathematical features are extracted, including various statistics, signal characteristics, and HRV features (Section 3.2). Section 4 provides a detailed description of our novel Learning Using Concave and Convex Kernels (LUCCK) machine learning method.
This model, along with other conventional machine learning methods, were trained on the extracted features and used to predict extreme cases of poor sleep and fatigue, with our method yielding the best results (Section 5). We believe this analytical framework can be readily extended to outpatient monitoring of daytime activity, with applications to assessing extreme levels of fatigue and pain, such as those experienced by patients undergoing chemotherapy.

2. Dataset

The data used for this study was collected from a group of 20 adults with fibromyalgia and consists primarily of a set of signals recorded by an Empatica E4 wristband over the course of seven days (removing 1 h/day for charging/download). Most (80%) participants were female, with a mean age of 38.79 years (range 18–70). Of a possible 140 nights of sleep data, the sample had data for 119 (85%) nights. In this dataset, 19.9% of heartbeats were missing due to noisy signals or failure of the Empatica E4 in detecting beats. Data were divided into 5-min windows for HRV analysis; windows with more than 15% missing peaks were eliminated. This led to the exclusion of 30.9% of the windows. The signals used in this analysis are each patient’s blood volume pulse (BVP), 3-axis accelerometer, temperature, and EDA. In addition to these recordings, each subject self-reported his or her wake and sleep times, as well as self-assessed his or her level of fatigue and quality of sleep every morning. These data are labeled by self-reported quality of sleep (1 to 10, 1 being the worst) and level of fatigue (1 to 10, 10 indicating the highest level of fatigue).
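The 5-min windowing rule described above can be sketched as follows. This is an illustrative implementation, not the authors' code; the function name `select_windows` and the per-beat 0/1 representation are my own assumptions.

```python
def select_windows(windows, max_missing=0.15):
    """Return indices of 5-min HRV windows whose fraction of missing
    beats does not exceed max_missing (the 15% rule stated above).
    Each window is a list of 0/1 flags, 1 = beat detected."""
    keep = []
    for idx, beats in enumerate(windows):
        missing = beats.count(0) / len(beats)
        if missing <= max_missing:
            keep.append(idx)
    return keep

# Toy example: three windows of 20 expected beats each.
w_ok = [1] * 20                  # 0% missing  -> kept
w_border = [1] * 17 + [0] * 3    # 15% missing -> kept (rule excludes "more than 15%")
w_bad = [1] * 10 + [0] * 10      # 50% missing -> dropped
print(select_windows([w_ok, w_border, w_bad]))  # -> [0, 1]
```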

3. Signal Processing: Preprocessing, Filtering, and Feature Extraction

The schematic diagram of Figure 1 represents our approach to analyzing the BVP and accelerometer signals in the fibromyalgia dataset. During preprocessing, we remove noise from the input signals and format them for further processing (via the Epsilon Tube filter). Once the BVP and accelerometer signals are fully processed, they, along with the EDA and temperature signals, can be analyzed and features can be extracted, which in turn leads to the application of machine learning. The final output is a prediction model to which new data can be fed.
Figure 1

Schematic diagram of the proposed processing system for the BVP, accelerometer, EDA, and temperature signals.

3.1. Preprocessing

To begin, the raw signals are extracted per patient according to his or her reported wake and sleep times. These are then split into two groups: awake and asleep. For each patient and day, the awake data is paired with the following night’s data and the ensuing morning’s self-assessed level of fatigue and quality of sleep. Our approach to preprocessing BVP signals consists of a bandpass filter (to remove both the low-frequency components and the high-frequency noise), a wavelet filter (to help reduce motion artifacts while maintaining the underlying rhythm), and Epsilon Tube filtering. In order to least perturb the true BVP signal, we chose the Daubechies mother wavelet of order 2 (’db2’), as it closely resembles the periodic shape of the BVP signal. Other wavelets were also considered but ultimately discarded. Once we selected a mother wavelet, we performed an eight-level decomposition of the input BVP signal. By setting threshold values for each level of detail coefficients (Table 1) and using the results to reconstruct the original signal, we were able to significantly reduce the amount of noise present without compromising the measurement integrity of the underlying physiological values. Utilizing this filter on a number of test cases showed that the threshold values produced consistently useful results regardless of the input, meaning per-signal tuning is not required.
Table 1

Chosen coefficient thresholds for the 8-level wavelet decomposition.

Detail Coefficient Level    Threshold
8                           94.38
7                           147.8
6                           303.1
5                           329.9
4                           90.16
3                           30.67
2                           0
1                           0
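The per-level thresholding step applied to the wavelet detail coefficients can be sketched as below. This is a minimal illustration using the Table 1 values; hard thresholding is my assumption (the text does not state whether hard or soft thresholding was used), and the function name `threshold_details` is hypothetical.

```python
import numpy as np

# Per-level detail-coefficient thresholds from Table 1 (levels 8 down to 1).
THRESHOLDS = {8: 94.38, 7: 147.8, 6: 303.1, 5: 329.9,
              4: 90.16, 3: 30.67, 2: 0.0, 1: 0.0}

def threshold_details(details):
    """details: dict {level: array of detail coefficients}.
    Zero out coefficients whose magnitude falls below the Table 1
    threshold for that level (hard thresholding assumed)."""
    out = {}
    for level, d in details.items():
        t = THRESHOLDS[level]
        out[level] = np.where(np.abs(d) >= t, d, 0.0)
    return out

# Toy coefficients: level 3 has threshold 30.67, level 2 has threshold 0.
demo = {3: np.array([10.0, -45.0, 31.0]), 2: np.array([5.0, -1.0])}
kept = threshold_details(demo)
# level 3: |10| < 30.67 -> zeroed; |-45| and |31| survive
# level 2: threshold 0 -> everything survives
```

In a full pipeline, the thresholded coefficients would then be fed back into the inverse wavelet transform to reconstruct the denoised BVP signal.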
The accelerometer data was upsampled from 32 Hz to 64 Hz via spline interpolation to match the sampling frequency of the BVP signal. The other signals (temperature and EDA) were left unfiltered. We then use these preprocessed signals as input to our main filtering approach (Epsilon Tube), the output of which is then used for feature extraction (Section 3.2). After filtering of the BVP signal and interpolation of the accelerometer signal, the Epsilon Tube filter [30] is the final component of the preprocessing stage. As discussed in [30], since the BVP signal (and, generally, any impedance-plethysmography-based measurement) is very susceptible to motion artifact, reduction of this noise is a crucial part of the filtering process. This method uses the synchronized accelerometer data to estimate the motion artifact of the BVP signal while leaving the periodic component intact. Let b_t represent the BVP value at time t, A a matrix whose rows a_t are the accelerometer samples, and w the vector of Epsilon Tube filter coefficients. Given the tube radius ε, the error of estimation, e_t = b_t − wᵀa_t, is treated as zero if the point falls inside the tube (|e_t| ≤ ε). The Epsilon Tube filter is formulated as a constrained optimization problem that can be expressed as

minimize   (1/2)‖w‖² + c Σ_{t=1}^{N} (ξ_t + ξ*_t)
subject to   b_t − wᵀa_t ≤ ε + ξ_t,   wᵀa_t − b_t ≤ ε + ξ*_t,   ξ_t, ξ*_t ≥ 0,

where N is the length of the BVP signal, ξ_t and ξ*_t are slack variables, (1/2)‖w‖² is the regularization term, and c is a designated parameter that adjusts the trade-off between the two objectives. More information about the Epsilon Tube filter can be found in [30]. Taking both the BVP and accelerometer signals as input, the method assumes periodicity in the BVP signal and looks for a period of inactivity at the beginning of the data to use as a template for the rest of the signal. To achieve this, the calmest section of the accelerometer signal (as determined by the longest stretch during which the values never exceed one standard deviation from the mean of the signal) is found. The signal is then shifted so this period of inactivity is at the beginning, and the BVP signal is also shifted to ensure the timestamps remain aligned. The shifted signals are then fed into the Epsilon Tube algorithm, and the resulting output is used for feature extraction.
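The core idea of regressing the BVP signal on the synchronized accelerometer channels and subtracting the fitted motion component can be demonstrated with a deliberately simplified stand-in: ordinary least squares in place of the ε-insensitive loss with slack variables used by the actual Epsilon Tube filter. All signal values here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 10, 1 / 64)                  # 10 s at 64 Hz
pulse = np.sin(2 * np.pi * 1.1 * t)           # periodic "BVP" component
accel = rng.standard_normal((t.size, 3))      # synthetic 3-axis accelerometer
motion = accel @ np.array([0.8, -0.5, 0.3])   # artifact correlated with motion
bvp = pulse + motion                          # contaminated measurement

# Least-squares estimate of the filter coefficients w; the real Epsilon Tube
# filter solves the constrained problem above (epsilon-insensitive loss)
# rather than minimizing squared error.
w, *_ = np.linalg.lstsq(accel, bvp, rcond=None)
cleaned = bvp - accel @ w                     # subtract estimated motion artifact

err_before = np.mean((bvp - pulse) ** 2)
err_after = np.mean((cleaned - pulse) ** 2)   # much closer to the true pulse
```

The squared-loss fit recovers the motion coefficients closely here because the synthetic pulse is uncorrelated with the accelerometer channels; the ε-insensitive formulation additionally ignores deviations inside the tube, which protects the periodic component from being absorbed into the motion estimate.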

3.2. Feature Extraction

Once the BVP and accelerometer signals are processed, the full signal set is used for feature extraction. There are 91 features extracted from the following signals:

- Denoised (filtered) BVP signal, i.e., the output of the Epsilon Tube algorithm, with a sampling frequency of 64 Hz.
- Low-band, mid-band, and high-band pass filters applied to the denoised BVP signal.
- Interpolated accelerometer signal, upsampled from 32 Hz to 64 Hz.
- Tube sizes from the Epsilon Tube filtering method, another output of the Epsilon Tube algorithm, given as a time-varying tube size signal.
- Temperature signal, with a sampling frequency of 4 Hz.
- EDA signal, with a sampling frequency of 4 Hz.
- The calculated breaths per minute (BPM) signal based on the denoised BVP signal.
- The calculated HRV signal based on the denoised BVP signal.

The extracted features are listed in Table 2. These are extracted from both the awake and the sleep signals, resulting in a full feature set of 182 features. When feature selection is performed using Weka’s information gain algorithm [31] on the first four subjects, the only feature ranked consistently near the top is the average of the BVP signal after being run through a mid-band bandpass filter.
Table 2

The list of features extracted from all signals.

Signal                       Features
Denoised BVP                 Mean, Standard deviation, Variance, Power, Median, Frequency with the highest peak, Amplitude of the frequency with the highest peak, FFT power, Mean of FFT amplitudes, Mean of the FFT frequencies, Median of FFT amplitudes (11 features)
Low-band denoised BVP        Same 11 features as the denoised BVP
Mid-band denoised BVP        Same 11 features as the denoised BVP
High-band denoised BVP       Same 11 features as the denoised BVP
Tube size                    Mean, Standard deviation, Variance, Power (4 features)
Interpolated accelerometer   Mean, Standard deviation, Variance, Power (4 features)
Temperature signal           Mean, Standard deviation, Variance, Power (4 features)
EDA signal                   Mean, Standard deviation, Variance, Power (4 features)
BPM signal                   Maximum, Minimum, Range, Mean, Standard deviation, Power (6 features)
HRV                          The Kubios standard HRV feature set [32] (25 features)
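The statistical and spectral features in Table 2 can be sketched for a single signal window as follows. This is an illustrative implementation; the dictionary keys and the function name `basic_features` are my own, and the exact FFT conventions (windowing, normalization) used by the authors are not stated in the text.

```python
import numpy as np

def basic_features(x, fs):
    """Compute the Table 2 statistical/spectral features for one
    signal window x sampled at fs Hz (a sketch, not the authors' code)."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    peak = np.argmax(spec[1:]) + 1          # skip the DC bin
    return {
        "mean": np.mean(x),
        "std": np.std(x),
        "variance": np.var(x),
        "power": np.mean(x ** 2),
        "median": np.median(x),
        "peak_freq": freqs[peak],           # frequency with the highest peak
        "peak_amp": spec[peak],             # amplitude at that frequency
        "fft_power": np.mean(spec ** 2),
        "fft_amp_mean": np.mean(spec),
        "fft_amp_median": np.median(spec),
    }

fs = 64
t = np.arange(0, 8, 1 / fs)
feats = basic_features(np.sin(2 * np.pi * 2.0 * t), fs)  # pure 2 Hz sine
# feats["peak_freq"] recovers 2.0 Hz; feats["power"] is 0.5 for a unit sine
```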

4. Machine Learning: Learning Using Concave and Convex Kernels

The final step in the analysis pipeline is the creation of a model that can be used to predict the extreme cases of quality of sleep or level of fatigue for people with fibromyalgia. As detailed in Section 5, in addition to testing a number of conventional machine learning methods, we tested a novel supervised machine learning method called Learning Using Concave and Convex Kernels (LUCCK). A key factor in the classification of complex data is the ability of the machine learning algorithm to use vital, feature-specific information to detect subtle and complex patterns of change in the data. The LUCCK method does this by employing similarity functions (defined below) to capture and quantify a model for each of the features separately. The similarity functions are parametrized so that the concavity or convexity of the function within the feature space can be modified as desired. Once the similarity functions and attendant parameters are chosen, the model uses this information to reweight the importance of each feature proportionally during classification.

4.1. Notation

In this section, x = (x_1, …, x_n) ∈ Rⁿ is a real-valued vector of features, and x_i is a real-valued (scalar) feature. Throughout this section, we consider d classes, n features, and m (data) samples; the indexes k = 1, …, d; i = 1, …, n; and j = 1, …, m are used for classes, features, and samples, respectively. Additionally, C_k refers to the samples in class k.

4.2. Classification Using a Similarity Function

An instructive model for comparison with the Learning Using Concave and Convex Kernels method is the k-nearest neighbors algorithm [33,34,35] and the weighted k-nearest neighbors algorithm [36]. In k-nearest neighbors, a test sample is classified by comparing it to the k nearest training samples in each class. This can make the classification sensitive to a small subset of samples. Instead, LUCCK classifies a test vector x by comparing it to all training data, weighted according to their distance to x as determined by a similarity function. One major difference between LUCCK and weighted k-nearest neighbors is that our approach is based on a similarity function that can be highly non-convex. A fat-tailed (relative to a Gaussian) distribution is more realistic for our data, given that there is a small but non-negligible chance that large errors may occur during measurement, resulting in a large deviation in the values of one or more of the features. The LUCCK method allows for large deviations in a few of the features with only a moderate penalty. Methods based on convex notions of similarity or distance (such as the Mahalanobis distance) are unable to deal adequately with such errors. Suppose that the feature space is comprised of real-valued vectors z ∈ Rⁿ. A similarity function Q measures the closeness of z to the origin and satisfies the following properties: Q(z) > 0 for all z; Q(−z) = Q(z) for all z; and Q(tz) is a decreasing function of t ≥ 0 if z is non-zero. The value Q(x − y) measures the closeness between the vectors x and y. Using the similarity function Q, a classification algorithm can be created as follows. The set of training data C is a subset of Rⁿ and is a disjoint union of d classes: C = C_1 ∪ C_2 ∪ ⋯ ∪ C_d. Let m be the cardinality of C and define m_k = |C_k| for all k, so that m = m_1 + ⋯ + m_d. To measure the proximity of a feature vector x to a set Y of training samples, we simply add the contributions of each of the elements in Y:

Q(x, Y) = Σ_{y ∈ Y} Q(x − y).

A vector x is classified in class C_k, where k is chosen such that Q(x, C_k) is maximal. This classification approach can also be viewed as maximum a posteriori estimation (details can be found in Appendix A).
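The decision rule above can be sketched in a few lines. This toy example uses the fat-tailed product kernel introduced in the next subsection; the function names, the synthetic two-class data, and the class-size normalization are my own assumptions, not the released LUCCK implementation.

```python
import numpy as np

def Q(z, lam, theta):
    """Fat-tailed similarity function: prod_i (1 + lam_i * z_i^2)^(-theta_i)."""
    return np.prod((1.0 + lam * z ** 2) ** (-theta))

def classify(x, classes, lam, theta):
    """Assign x to the class whose training samples are, in aggregate,
    most similar to x. `classes` maps label -> array of training vectors."""
    scores = {k: sum(Q(x - y, lam, theta) for y in Y) / len(Y)
              for k, Y in classes.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
classes = {0: rng.normal(0.0, 0.5, size=(30, 2)),   # cluster near the origin
           1: rng.normal(3.0, 0.5, size=(30, 2))}   # cluster near (3, 3)
lam = np.array([1.0, 1.0])
theta = np.array([1.0, 1.0])
print(classify(np.array([2.9, 3.1]), classes, lam, theta))  # -> 1
```

Unlike k-NN, every training vector contributes to the score, so no single neighbor can dominate the decision.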

4.3. Choosing the Similarity Function

The function Q has to be chosen carefully. Let Q be defined as the product Q(z) = φ_1(z_1) φ_2(z_2) ⋯ φ_n(z_n), where each φ_i only depends on the i-th feature. The function Q is again a similarity function, satisfying the corresponding properties φ_i(z) > 0 for all z, φ_i(−z) = φ_i(z), and φ_i(tz) decreasing in t ≥ 0 whenever z is non-zero. After normalization, the φ_i can be considered as probability density functions. As such, the product formula can be interpreted as instance-wise independence for the comparison of training and test data. In the naive Bayes method, features are assumed to be independent globally [37]; summing over all instances in the training data allows features to be dependent in our model. Next we need to choose the functions φ_i. One could choose φ_i(z) = e^{−λ_i z²}, so that Q is a Gaussian kernel function (up to a scalar). However, this does not work well in practice:

- One or more of the features is prone to large errors: the value of Q(x − y) is close to 0 even if x and y differ significantly in only a few of the features. This choice of Q is therefore very sensitive to small subsets of bad features.
- The curse of dimensionality: for the training data to properly represent the probability distribution underlying the data, the number of training vectors should be exponential in n, the number of features. In practice, it usually is much smaller. Thus, if x is a test vector in class C_k, there may not be any training vector y in C_k for which x − y is small.

Consequently, let φ_i(z) = (1 + λ_i z²)^{−θ_i} for some parameters λ_i, θ_i > 0. The function φ_i can behave similarly to the Cauchy distribution. This function has a “fat tail”: as z → ∞, the rate at which (1 + λ_i z²)^{−θ_i} goes to 0 is much slower than the rate at which e^{−λ_i z²} goes to 0. We have

Q(z) = ∏_{i=1}^{n} (1 + λ_i z_i²)^{−θ_i}.

The function Q has a finite integral if θ_i > 1/2 for all i, though this is not required. Three examples of this function can be found in Appendix B.
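The fat-tail claim can be checked numerically: the ratio of the polynomial kernel to the Gaussian kernel at the same deviation grows without bound, so a large error in one feature incurs only a moderate penalty under the proposed φ.

```python
import math

lam, theta = 1.0, 1.0  # illustrative parameter values
ratios = []
for z in (1.0, 3.0, 10.0):
    gaussian = math.exp(-lam * z * z)        # e^(-lambda z^2)
    fat = (1.0 + lam * z * z) ** (-theta)    # (1 + lambda z^2)^(-theta)
    ratios.append(fat / gaussian)
# At z = 10 the Gaussian has decayed to ~e^-100, while the fat-tailed
# kernel is still about 1/101, so the ratio is astronomically large.
```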

4.4. Choosing the Parameters

Values for the parameters λ_i and θ_i must be chosen to optimize classification performance. The value of φ_i(z) = (1 + λ_i z²)^{−θ_i} is the most sensitive to changes in z when |z| is on the order of 1/√λ_i; an easy calculation shows that |φ_i′(z)| is maximal when |z| = 1/√(λ_i(2θ_i + 1)). Since the value 1/√λ_i directly controls the wideness of φ_i’s tail, it is reasonable to choose it close to the standard deviation of the i-th feature. Suppose that the set of training vectors is C = {x⁽¹⁾, …, x⁽ᵐ⁾}, where x⁽ʲ⁾ = (x⁽ʲ⁾_1, …, x⁽ʲ⁾_n) for all j. Let σ_i be the standard deviation of the i-th feature over the training set, and let λ_i = α/σ_i², where α is some fixed parameter. Next we choose the parameters θ_i. We fix a parameter Θ that will be the average value of the θ_i. If we use only the i-th feature, then we define Q_i(x, Y) = Σ_{y ∈ Y} φ_i(x_i − y_i) for any set Y of feature vectors. For x⁽ʲ⁾ in the class C_k, the ratio Q_i(x⁽ʲ⁾, C_k ∖ {x⁽ʲ⁾}) / Q_i(x⁽ʲ⁾, C ∖ {x⁽ʲ⁾}) measures how much closer x⁽ʲ⁾ is to samples in the class C_k than to vectors in the set C of all feature vectors except x⁽ʲ⁾ itself; that is, it measures how well the i-th feature can classify x⁽ʲ⁾ as lying in C_k as opposed to some other class. Summing over all j and ensuring that the result is non-negative, we obtain a score v_i for the i-th feature. The θ_i can then be chosen so that they have the same ratios as the v_i and sum up to nΘ:

θ_i = nΘ v_i / (v_1 + ⋯ + v_n).

In terms of complexity, if n is the number of features and m is the number of training samples, then the complexity of the proposed method is O(nm²).
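The two parameter-selection rules above can be sketched as follows. The function names are hypothetical, and the per-feature scores v_i are passed in as given values rather than computed from leave-one-out similarities, to keep the example short.

```python
import numpy as np

def choose_lambdas(X, alpha=1.0):
    """lambda_i = alpha / sigma_i^2, so that 1/sqrt(lambda_i) tracks the
    standard deviation of feature i (alpha is the fixed tuning parameter)."""
    sigma = X.std(axis=0)
    return alpha / sigma ** 2

def choose_thetas(scores, big_theta=1.0):
    """Distribute the theta_i in the same ratios as the per-feature scores
    v_i, summing to n * Theta (here the v_i are assumed precomputed)."""
    v = np.maximum(scores, 0.0)   # scores are clipped to be non-negative
    return len(v) * big_theta * v / v.sum()

# Feature 2 varies 10x more than feature 1, so its lambda is 100x smaller.
X = np.array([[0.0, 10.0], [2.0, 30.0], [4.0, 50.0]])
lams = choose_lambdas(X)
thetas = choose_thetas(np.array([3.0, 1.0]))  # -> [1.5, 0.5], averaging to Theta = 1
```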

4.5. Reweighting the Classes

Sometimes a disproportionate number of test vectors are classified as belonging to a particular class. In such cases one might get better results after reweighting the classes. The weights w_1, …, w_d can be chosen so that all are greater than or equal to 1. If p = (p_1, …, p_d) is a probability vector, then we can reweight it to a vector p^w = (p^w_1, …, p^w_d), where

p^w_k = w_k p_k / (w_1 p_1 + ⋯ + w_d p_d).

If the output of the algorithm consists of the probability vectors p⁽¹⁾, …, p⁽ᵐ⁾, the algorithm can be modified so that it yields the output p⁽¹⁾ʷ, …, p⁽ᵐ⁾ʷ. A good choice for the weights can be learned by using a portion of the training data. To determine how well a training vector x⁽ʲ⁾ can be classified using the remaining training vectors in C, we define

p⁽ʲ⁾_k = Q(x⁽ʲ⁾, C_k ∖ {x⁽ʲ⁾}) / Q(x⁽ʲ⁾, C ∖ {x⁽ʲ⁾}).

The value p⁽ʲ⁾_k is an estimate for the probability that x⁽ʲ⁾ lies in the class C_k, based on all feature vectors in C except x⁽ʲ⁾ itself. We consider the effect of reweighting the probabilities p⁽ʲ⁾_k by w, obtaining p⁽ʲ⁾ʷ_k. If x⁽ʲ⁾ lies in the class C_{k(j)}, then the quantity 1 − p⁽ʲ⁾ʷ_{k(j)} measures how badly x⁽ʲ⁾ is misclassified when the reweighting is used. The total amount of misclassification is

E(w) = Σ_j (1 − p⁽ʲ⁾ʷ_{k(j)}).

We would like to minimize this over all choices of w. As this is a highly nonlinear problem, making optimization difficult, we instead minimize a linear surrogate. This minimization problem can be solved using linear programming, i.e., by minimizing the quantity Σ_j s_j for the variables w_k and new variables s_j, under the constraints that w_k ≥ 1 and s_j ≥ w_l p⁽ʲ⁾_l − w_{k(j)} p⁽ʲ⁾_{k(j)} for all j and all l ≠ k(j), with s_j ≥ 0.
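The reweighting formula itself is a one-liner; the sketch below applies it to a single probability vector. The function name `reweight` is mine.

```python
def reweight(p, w):
    """Reweight a probability vector p by class weights w and renormalize:
    p_k -> w_k * p_k / sum_l (w_l * p_l)."""
    scaled = [wk * pk for wk, pk in zip(w, p)]
    total = sum(scaled)
    return [s / total for s in scaled]

# A vector sitting on the decision boundary is pushed toward the upweighted class.
print(reweight([0.5, 0.5], [1.0, 3.0]))  # -> [0.25, 0.75]
```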

5. Experiments

In this section, the performance of LUCCK is first compared with other common machine learning methods using four conventional datasets, after which its performance on the fibromyalgia dataset is evaluated.

5.1. UCI Machine Learning Repository

In this set of experiments, LUCCK is compared to some well-known classification methods on a number of datasets downloaded from the University of California, Irvine (UCI) Machine Learning Repository [38]. Each method was tested on each dataset using 10-fold cross-validation, with the average performance and execution time across all folds provided in Table 3. Table 4 contains the average values for accuracy and time across all four datasets.
Table 3

Comparison of our proposed method (LUCCK) with other machine learning methods in terms of accuracy and running time, averaged over 10 folds.

Dataset                 Method               Accuracy (%)   Time (s)
Sonar (208 samples)     LUCCK                87.42          1.5082
                        3-NN                 81.66          0.0178
                        5-NN                 81.05          0.0178
                        Adaboost             82.19          1.0239
                        SVM                  81.00          0.0398
                        Random Forest (10)   78.14          0.1252
                        Random Forest (100)  83.39          1.1286
                        LDA                  74.90          0.0343
Glass (214 samples)     LUCCK                82.56          0.3500
                        3-NN                 68.72          0.0161
                        5-NN                 67.04          0.0162
                        Adaboost             50.82          0.5572
                        SVM                  35.57          0.0342
                        Random Forest (10)   75.31          0.1062
                        Random Forest (100)  79.24          0.9319
                        LDA                  63.28          0.0155
Iris (150 samples)      LUCCK                95.93          0.1508
                        3-NN                 96.09          0.0135
                        5-NN                 96.54          0.0135
                        Adaboost             93.82          0.4912
                        SVM                  96.52          0.0143
                        Random Forest (10)   94.81          0.0889
                        Random Forest (100)  95.29          0.7686
                        LDA                  98.00          0.0122
E. coli (336 samples)   LUCCK                87.61          0.5937
                        3-NN                 85.08          0.0190
                        5-NN                 86.43          0.0193
                        Adaboost             74.13          0.6058
                        SVM                  87.53          0.0448
                        Random Forest (10)   84.56          0.1075
                        Random Forest (100)  87.34          0.9265
                        LDA                  81.46          0.0182
Table 4

Model accuracy with standard deviation and execution time for each model, averaged across the four UCI datasets.

Method               Accuracy (%)    Time (s)
LUCCK                88.38 ± 5.55    0.6507
3-NN                 82.89 ± 11.27   0.0166
5-NN                 82.77 ± 12.29   0.0167
Adaboost             75.24 ± 18.18   0.6695
SVM                  75.16 ± 27.15   0.0333
Random Forest (10)   83.21 ± 8.65    0.1070
Random Forest (100)  86.32 ± 6.84    0.9389
LDA                  79.41 ± 14.49   0.0201

5.2. Fibromyalgia Dataset

In this study, we have created a model that can be used to predict the quality of sleep or level of fatigue for people with fibromyalgia. The labels are self-assessed scores ranging from 1 to 10. Attempts to develop a regression model showed less promise than the results from a binary split. The most likely reason for this failure of the regression model is the nature of self-reported scores, especially those related to patient assessment of their level of pain; this is primarily due to differences in individual levels of pain tolerance. In previous studies [39,40], proponents of neural "biomarkers" argued that self-reported scores are unreliable, making objective markers of pain imperative. In another study [24], self-reported scores were found to be reliable only for extreme cases of pain and fatigue. Consequently, in this study, binary classification of extreme cases of fatigue and poor sleep is investigated. In this situation, a cutoff value is selected: patients that chose a value below the threshold are placed in one group, while those that chose a value above the threshold are placed in another. As such, values >8 are chosen for extreme cases of fatigue, and values <4 are chosen for extreme cases of poor sleep quality. In this way, binary classifications are possible (>8 vs. <8 for fatigue and <4 vs. >4 for sleep). Using the extracted feature set, machine learning algorithms are applied and tested using 10-fold cross-validation. This is done in a way that prevents the data from any one patient being in multiple folds: all of a given patient’s data are included entirely in a single fold. In addition, in order to address possibly imbalanced data during fold creation, random undersampling is performed to ensure the ratio between the two classes is not less than 0.3 (this rate is chosen since the extreme cases are at most 30 percent of the [1,10] interval of self-reported scores). This prevents the methods from developing a bias towards the larger class.
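The patient-grouped fold construction and the undersampling rule can be sketched as follows. Round-robin assignment of patients to folds is my own simplification (the text does not say how patients were assigned), and both function names are hypothetical.

```python
import random
from collections import defaultdict

def patient_folds(patient_ids, n_folds=10):
    """Assign each patient, and hence all of that patient's records,
    to exactly one fold, so no patient spans multiple folds."""
    patients = sorted(set(patient_ids))
    fold_of = {p: i % n_folds for i, p in enumerate(patients)}  # round-robin
    folds = defaultdict(list)
    for idx, p in enumerate(patient_ids):
        folds[fold_of[p]].append(idx)
    return dict(folds)

def undersample(majority, minority, min_ratio=0.3, seed=0):
    """Randomly drop majority-class indices until minority/majority >= min_ratio."""
    rng = random.Random(seed)
    majority = list(majority)
    rng.shuffle(majority)
    keep = min(len(majority), int(len(minority) / min_ratio))
    return sorted(majority[:keep]), list(minority)

# Records for patients 1-4; each patient lands entirely in one of two folds.
folds = patient_folds([1, 1, 2, 2, 3, 3, 4], n_folds=2)
# 100 majority vs 10 minority samples: keep 33 majority (ratio 10/33 >= 0.3).
maj, mino = undersample(range(100), range(10))
```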

5.2.1. Results with Conventional Machine Learning Methods

A number of conventional machine learning models listed in Table 5 were applied to the extracted data in this study. As can be seen, many major machine learning methods were tested. For each of these methods, various configurations were tested, and the best sets of parameters were chosen using cross-validation (hyperparameter optimization). For instance, we used the combination of AdaBoost with different types of standard methods, such as Decision Stump and Random Forest, in order to explore the possibility of improving the performance of these methods via boosting. The k-nearest neighbor method was also used in this experiment. For the weighted k-nearest neighbor method [36], the inversion kernel (inverse distance weights) resulted in the best performance. For the Neural Network algorithm, the Weka (Waikato Environment for Knowledge Analysis) [41] multilayer perceptron with two hidden layers was used. The results of using these machine learning approaches for prediction of extreme sleep quality (cutoff of 4) and fatigue level (cutoff of 8) are presented in Table 5. As shown in this table, the AdaBoost method based on random forest yielded the best results for quality of sleep (based on area under the receiver operating characteristic curve, or AUROC). For level of fatigue, the neural network was the best performing model.
Table 5

Results of conventional machine learning methods.

                               Sleep                     Fatigue
Method                         Accuracy (%)   AUROC      Accuracy (%)   AUROC
AdaBoost - Decision Stump      62.07          0.63       46.64          0.55
AdaBoost - Random Forest       59.97          0.65       51.24          0.55
K-Nearest Neighbor             60.55          0.55       51.88          0.53
Weighted K-Nearest Neighbor    65.27          0.62       68.05          0.51
Neural Network                 63.47          0.64       54.80          0.59
Random Forest                  63.32          0.63       52.46          0.57
Support Vector Machine         64.47          0.50       55.94          0.50
LUCCK                          66.95          0.66       87.59          0.68

5.2.2. Results with Our Machine Learning Method: Machine Learning Using Concave and Convex Kernels

In addition to the aforementioned conventional methods, we also applied our machine learning approach, which resulted in superior performance compared to the standard machine learning methods discussed above. Recall that in the Learning Using Concave and Convex Kernels algorithm, test data is classified by comparing it to all training data, properly weighted according to information extracted from each of the features (see Section 4 for further details). The results of applying our method to the fibromyalgia dataset are presented in Table 5, with cutoff values of 4 and 8 for quality of sleep and level of fatigue, respectively. As can be seen, LUCCK was able to vastly outperform other models on the fatigue outcome; however, the improvement on the sleep outcome was not significant. This disparity is likely due to the different feature spaces for the sleep and fatigue outcomes. In general, the feature space for fatigue is significantly more dispersed, both because there are more samples (during daytime) and because daytime activity negatively affects signal quality, increasing dispersion. In contrast, signals (and their associated features) recorded during sleep are of better quality, which leads to better prediction results for sleep across all methods used. Our proposed LUCCK algorithm can mitigate the dispersed nature of the fatigue feature space, as it is specifically designed to reduce the effect of training data for which there is a large deviation from test data; this is why it was able to vastly outperform other models on the fatigue outcome. We should note that while the cohort size in this study seems limited, the continuous recording of physiological signals for seven days and nights created a comprehensive dataset. Additionally, similar to k-NN and its weighted version (and unlike SVM and neural network models), LUCCK can be trained even with few samples, which is one advantage of the proposed algorithm.

6. Conclusions and Discussion

In this study we primarily focused on prediction of the extreme cases of fatigue and poor sleep. As such, we have created preprocessing/conditioning methods that can improve the quality of signal segments degraded by motion artifact and noise. In addition, we identified a set of mathematical features that are important in extracting patterns from physiological signals that can distinguish poor and good clinical outcomes for applications such as fibromyalgia. Additionally, we showed that our proposed machine learning method outperformed the standard methods in predicting outcomes such as fatigue and sleep quality. Generally, our proposed framework (preprocessing, mathematical features, and the proposed machine learning method) can be employed in any study that involves prediction using BVP, HRV, and EDA signals. The Epsilon Tube filter is covered by US Patent 10,034,638, for which Kayvan Najarian is a named inventor.
