| Literature DB >> 31998737 |
Graziella Orrù1, Angelo Gemignani1, Rebecca Ciacchini1, Laura Bazzichi2, Ciro Conversano1.
Abstract
Here, we report an investigation of the accuracy of the Toronto Alexithymia Scale, a measure of alexithymia, a multidimensional construct often associated with fibromyalgia. Two groups of participants, patients with fibromyalgia (n = 38) and healthy controls (n = 38), were administered the Toronto Alexithymia Scale and background tests. Machine learning models achieved an overall accuracy higher than 80% in detecting both patients with fibromyalgia and healthy controls. The single parameter that proved most efficient in classifying individual subjects into the two groups was item 3 of the alexithymia scale. An analysis of the most informative features, based on all scales administered, revealed that items 3 and 13 of the alexithymia questionnaire and the visual analog scale scores were the most informative attributes for correctly classifying participants (accuracy above 85%). An additional analysis using only the alexithymia scale subset of items and the visual analog scale scores showed that the predictors that most efficiently classified patients with fibromyalgia and controls were items 3 and 7 (accuracy = 85.53%). Our findings suggest that machine learning models based on a subset of Toronto Alexithymia Scale item scores accurately distinguish patients with fibromyalgia from healthy controls.
Keywords: alexithymia; chronic conditions; experiment; fibromyalgia; machine learning
Year: 2020 PMID: 31998737 PMCID: PMC6970411 DOI: 10.3389/fmed.2019.00319
Source DB: PubMed Journal: Front Med (Lausanne) ISSN: 2296-858X
Demographic characteristics and performance on the administered tests for each group of participants (FMS and HC groups).
| Measure | Group | N | Mean | SD |
| --- | --- | --- | --- | --- |
| Gender | FMS | 38 | 1.026 | 0.1622 |
|  | HC | 38 | 1.368 | 0.4889 |
| Age | FMS | 38 | 51.500 | 12.1160 |
|  | HC | 38 | 47.921 | 10.1512 |
| Education | FMS | 38 | 2.553 | 0.6857 |
|  | HC | 38 | 3.026 | 0.8849 |
| TAS-20 total score | FMS | 38 | 58.763 | 13.9040 |
|  | HC | 38 | 40.868 | 10.5937 |
| QUID-S | FMS | 38 | 0.3021 | 0.12307 |
|  | HC | 38 | 0.0955 | 0.13065 |
| QUID-A | FMS | 38 | 0.3605 | 0.20966 |
|  | HC | 38 | 0.1087 | 0.16838 |
| QUID-E | FMS | 38 | 0.2903 | 0.18242 |
|  | HC | 38 | 0.0682 | 0.12096 |
| QUID-M | FMS | 38 | 0.3000 | 0.23950 |
|  | HC | 38 | 0.0584 | 0.12191 |
| QUID-T | FMS | 38 | 0.3074 | 0.14993 |
|  | HC | 38 | 0.0858 | 0.11988 |
| QUID-NWC | FMS | 38 | 14.342 | 6.1039 |
|  | HC | 38 | 4.447 | 5.5298 |
| VAS | FMS | 38 | 6.184 | 2.3233 |
|  | HC | 38 | 2.105 | 2.4026 |
| HADS total score | FMS | 38 | 19.237 | 7.5137 |
|  | HC | 38 | 7.316 | 5.1993 |
TAS-20, Toronto Alexithymia Scale; QUID components: QUID-S, sensory; QUID-A, affective; QUID-E, evaluative; QUID-M, miscellaneous; QUID-T, total; QUID-NWC, number of words chosen to describe the pain experience; VAS, visual analog scale; HADS, Hospital Anxiety and Depression Scale.
Education level: 1 = 5-year; 2 = 8-year; 3 = 13-year; 4 = 17-year.
t-test and effect sizes (d).
| Measure | t | Cohen's d |
| --- | --- | --- |
| Age | 1.396 | 0.32 |
| Education | −2.608 | −0.598 |
| TAS-20 total score | 6.311 | 1.448 |
| QUID-S | 7.095 | 1.628 |
| QUID-A | 5.773 | 1.324 |
| QUID-E | 6.255 | 1.435 |
| QUID-M | 5.541 | 1.271 |
| QUID-T | 7.115 | 1.632 |
| QUID-NWC | 7.406 | 1.699 |
| VAS | 7.523 | 1.726 |
| HADS total score | 8.043 | 1.845 |
TAS-20, Toronto Alexithymia Scale; QUID components: QUID-S, sensory; QUID-A, affective; QUID-E, evaluative; QUID-M, miscellaneous; QUID-T, total; QUID-NWC, number of words chosen to describe the pain experience; VAS, visual analog scale; HADS, Hospital Anxiety and Depression Scale.
Effect sizes according to Cohen's d.
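For two independent groups, the Cohen's d values in the table are consistent with the standard conversion from the t statistic, d = t·√(1/n₁ + 1/n₂). A minimal sketch using only the Python standard library (the function name is ours) reproduces the reported values for n = 38 per group:

```python
import math

def cohens_d_from_t(t: float, n1: int, n2: int) -> float:
    """Convert an independent-samples t statistic to Cohen's d."""
    return t * math.sqrt(1 / n1 + 1 / n2)

# TAS-20 total score: t = 6.311 with 38 participants per group
print(round(cohens_d_from_t(6.311, 38, 38), 3))  # 1.448, matching the table
# Age: t = 1.396
print(round(cohens_d_from_t(1.396, 38, 38), 2))  # 0.32, matching the table
```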
Accuracies as measured by % correct, AUC, F1, and correct classification obtained by five different ML classifiers (using 10-fold cross-validation and 5-fold cross-validation).
10-fold cross-validation:
| Classifier | % correct | AUC (d) | F1 | FMS correct | HC correct |
| --- | --- | --- | --- | --- | --- |
| Naïve Bayes | 86.84 | 0.90 (d = 1.81) | 0.87 | 34/38 | 32/38 |
| Logistic regression | 72.37 | 0.78 (d = 1.29) | 0.72 | 28/38 | 27/38 |
| Simple logistics | 94.74 | 0.93 (d = 2.09) | 0.95 | 36/38 | 36/38 |
| Support vector machine | 90.79 | 0.91 (d = 1.90) | 0.91 | 34/38 | 35/38 |
| Random forest | 88.16 | 0.94 (d = 2.20) | 0.88 | 35/38 | 32/38 |

5-fold cross-validation:
| Classifier | % correct | AUC (d) | F1 | FMS correct | HC correct |
| --- | --- | --- | --- | --- | --- |
| Naïve Bayes | 86.84 | 0.91 (d = 1.89) | 0.87 | 34/38 | 32/38 |
| Logistic regression | 71.05 | 0.79 (d = 1.14) | 0.71 | 29/38 | 25/38 |
| Simple logistics | 86.84 | 0.87 (d = 1.59) | 0.87 | 33/38 | 33/38 |
| Support vector machine | 88.16 | 0.88 (d = 1.66) | 0.88 | 34/38 | 33/38 |
| Random forest | 86.84 | 0.95 (d = 2.32) | 0.87 | 34/38 | 32/38 |
Perfect classification of exemplars into the two categories corresponds to an AUC of 1 and an F1 of 1. AUC stands for Area Under the Curve in ROC analysis; F1 is the harmonic mean of precision and recall. To allow comparison with the best-known effect size measure, the corresponding Cohen's d is included. Classifiers were run with the default parameters of Weka, i.e., without any parameter tuning.
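The Cohen's d values reported alongside each AUC match the standard conversion under an equal-variance binormal model, d = √2·Φ⁻¹(AUC). A sketch using only the Python standard library (the function name is ours):

```python
import math
from statistics import NormalDist

def auc_to_cohens_d(auc: float) -> float:
    """Convert ROC AUC to Cohen's d, assuming equal-variance
    normal score distributions for the two classes."""
    return math.sqrt(2) * NormalDist().inv_cdf(auc)

# Naïve Bayes, 10-fold CV: AUC = 0.90
print(round(auc_to_cohens_d(0.90), 2))  # 1.81, as reported in the table
# Simple logistics, 10-fold CV: AUC = 0.93
print(round(auc_to_cohens_d(0.93), 2))  # 2.09, as reported in the table
```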
Accuracies as measured by % correct, AUC, and F1 obtained by five different ML classifiers (using 10-fold cross-validation and 5-fold cross-validation) with preliminary best-attribute selection.
10-fold cross-validation:
| Classifier | % correct | AUC (d) | F1 | FMS correct | HC correct |
| --- | --- | --- | --- | --- | --- |
| Naïve Bayes | 86.48 | 0.90 (d = 1.81) | 0.87 | 33/38 | 33/38 |
| Logistic regression | 84.21 | 0.77 (d = 1.04) | 0.84 | 32/38 | 32/38 |
| Simple logistics | 94.74 | 0.93 (d = 2.09) | 0.95 | 36/38 | 36/38 |
| Support vector machine | 90.79 | 0.91 (d = 1.90) | 0.91 | 34/38 | 35/38 |
| Random forest | 88.16 | 0.93 (d = 2.09) | 0.88 | 34/38 | 33/38 |

5-fold cross-validation:
| Classifier | % correct | AUC (d) | F1 | FMS correct | HC correct |
| --- | --- | --- | --- | --- | --- |
| Naïve Bayes | 82.89 | 0.90 (d = 1.81) | 0.83 | 32/38 | 31/38 |
| Logistic regression | 75.00 | 0.74 (d = 0.90) | 0.75 | 30/38 | 27/38 |
| Simple logistics | 86.84 | 0.87 (d = 1.59) | 0.87 | 33/38 | 33/38 |
| Support vector machine | 86.84 | 0.87 (d = 1.59) | 0.87 | 33/38 | 33/38 |
| Random forest | 85.53 | 0.92 (d = 1.98) | 0.86 | 34/38 | 31/38 |
Perfect classification of exemplars into the two categories corresponds to an AUC of 1 and an F1 of 1. AUC stands for Area Under the Curve in ROC analysis; F1 is the harmonic mean of precision and recall. To allow comparison with the best-known effect size measure, the corresponding Cohen's d is included. Classifiers were run with the default parameters of Weka, i.e., without any parameter tuning.
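The cross-validated accuracies above follow the usual k-fold scheme: the 76 participants are partitioned into k folds, each fold serving once as the held-out test set while a classifier is trained on the remaining folds, and correct classifications are pooled across folds. A minimal pure-Python sketch of the fold bookkeeping (all names are ours; a trivial majority-class "classifier" stands in for the Weka models):

```python
import random

def kfold_accuracy(X, y, k, train_fn, predict_fn, seed=0):
    """Estimate classification accuracy with k-fold cross-validation."""
    idx = list(range(len(y)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # k roughly equal folds
    correct = 0
    for fold in folds:
        test = set(fold)
        train = [i for i in idx if i not in test]
        model = train_fn([X[i] for i in train], [y[i] for i in train])
        correct += sum(predict_fn(model, X[i]) == y[i] for i in fold)
    return correct / len(y)

# Stand-in classifier: always predict the majority class of the training set
def train_majority(X_train, y_train):
    return max(set(y_train), key=y_train.count)

def predict_majority(model, x):
    return model

# Toy data mirroring the study's group sizes: 38 FMS + 38 HC, dummy features
y = ["FMS"] * 38 + ["HC"] * 38
X = [[0.0]] * 76
acc = kfold_accuracy(X, y, k=10,
                     train_fn=train_majority, predict_fn=predict_majority)
print(acc)  # a balanced two-class baseline hovers around chance level
```

Swapping `train_majority`/`predict_majority` for real learners (e.g., naïve Bayes or an SVM) reproduces the evaluation protocol used in the tables.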