| Literature DB >> 25307488 |
Jonathan O'Muircheartaigh, Andre Marquand, Duncan J Hodkinson, Kristina Krause, Nadine Khawaja, Tara F Renton, John P Huggins, William Vennart, Steven C R Williams, Matthew A Howard.
Abstract
Recent reports of multivariate machine learning (ML) techniques have highlighted their potential use to detect prognostic and diagnostic markers of pain. However, applications to date have focussed on acute experimental nociceptive stimuli rather than clinically relevant pain states. These reports have coincided with others describing the application of arterial spin labeling (ASL) to detect changes in regional cerebral blood flow (rCBF) in patients with on-going clinical pain. We combined these acquisition and analysis methodologies in a well-characterized postsurgical pain model. The principal aims were (1) to assess the classification accuracy of rCBF indices acquired prior to and following surgical intervention and (2) to optimise the amount of data required to maintain accurate classification. Twenty male volunteers, requiring bilateral, lower jaw third molar extraction (TME), underwent ASL examination prior to and following individual left and right TME, representing presurgical and postsurgical states, respectively. Six ASL time points were acquired at each exam. Each ASL image was preceded by visual analogue scale assessments of alertness and subjective pain experiences. Using all data from all sessions, an independent Gaussian Process binary classifier successfully discriminated postsurgical from presurgical states with 94.73% accuracy; over 80% accuracy could be achieved using half of the data (equivalent to 15 min scan time). This work demonstrates the concept and feasibility of time-efficient, probabilistic prediction of clinically relevant pain at the individual level. We discuss the potential of ML techniques to impact on the search for novel approaches to diagnosis, management, and treatment to complement conventional patient self-reporting.
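The classification approach described in the abstract can be sketched with scikit-learn's `GaussianProcessClassifier`. This is an illustrative toy only: the feature matrix, dimensions, injected "postsurgical" pattern, and leave-one-subject-out scheme below are assumptions for demonstration, not the authors' actual pipeline or data.

```python
# Hypothetical sketch of Gaussian Process binary classification of perfusion
# images (presurgery vs. postsurgery). All data here are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import DotProduct
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 500          # toy dimensions; real rCBF maps are far larger
# One synthetic "scan" per subject and condition: 0 = presurgery, 1 = postsurgery
X = rng.normal(size=(2 * n_subjects, n_voxels))
X[n_subjects:, :50] += 1.0              # inject a simple "postsurgical" pattern
y = np.r_[np.zeros(n_subjects), np.ones(n_subjects)]
groups = np.r_[np.arange(n_subjects), np.arange(n_subjects)]

# Linear-kernel GP classifier, evaluated with leave-one-subject-out
# cross-validation so both scans from a subject stay in the same fold.
clf = GaussianProcessClassifier(kernel=DotProduct())
correct = 0
for train, test in LeaveOneGroupOut().split(X, y, groups):
    clf.fit(X[train], y[train])
    p = clf.predict_proba(X[test])[:, 1]        # probability of "postsurgery"
    correct += np.sum((p > 0.5) == (y[test] == 1))
accuracy = correct / len(y)
print(f"Leave-one-subject-out accuracy: {accuracy:.2f}")
```

A linear (dot-product) kernel keeps a per-voxel weight interpretation, which is what makes pattern maps such as those in Figure 5 possible.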
Keywords: arterial spin labeling; biomarker; machine learning; pain
Year: 2014 PMID: 25307488 PMCID: PMC4322468 DOI: 10.1002/hbm.22652
Source DB: PubMed Journal: Hum Brain Mapp ISSN: 1065-9471 Impact factor: 5.038
Figure 1. Study design illustrating the order of visits for each participant (a). The three main investigations performed in this study, as well as the datasets used for each, are also indicated (b). Note that, although the order of surgery was pseudorandomized, in this figure Session 3 indicates the session acquired after left-sided third molar extraction (TME), and Session 5 the session after right-sided TME.
Figure 2. VAS indices indicate significant between-session differences in perceived pain (a) but not in alertness (b).
Figure 3. Predictive probabilities for ASL images collected at rest and ASL images collected postsurgically for both right and left extraction. The x-axis indicates the probability that each scan was derived from the presurgery condition. Red squares indicate predictions for images collected postsurgery and are classified correctly if they have a predictive probability less than 0.5 (vertical black dashed line). Blue diamonds indicate perfusion images acquired from the presurgery scanning session and are correctly classified if they have a predictive probability greater than 0.5.
Figure 4. Accuracy, sensitivity, and specificity of the GP classifiers are shown in (a) as a function of the number of ASL volumes (per subject and session) used for training the classifier. Area under the curve (AUC) is also shown, with the respective ROC curves in (b) for classifiers trained with 1, 2, 4, and 6 images, demonstrating the stability of the ROC curves.
Categorical prediction accuracy of the classifiers for postextraction pain states versus presurgical states using all scans (i)

| ASL images | True positive | True negative | Accuracy | P value |
|---|---|---|---|---|
| (i) All presurgery vs. postsurgery | | | | |
| 12 | 0.95 | 0.95 | 0.95 | 0.001 |
| (ii) Follow-up vs. postsurgery (left) | | | | |
| 1 | 0.6 | 0.6 | 0.6 | 0.17 |
| 2 | 0.95 | 0.75 | 0.85 | 0.001 |
| 3 | 0.8 | 0.7 | 0.75 | 0.01 |
| 4 | 0.8 | 0.8 | 0.8 | 0.001 |
| 5 | 0.85 | 0.8 | 0.825 | 0.001 |
| 6 | 0.85 | 0.85 | 0.85 | 0.001 |
| (iii) Follow-up vs. postsurgery (right) | | | | |
| 1 | 0.75 | 0.65 | 0.7 | 0.02 |
| 2 | 0.9 | 0.8 | 0.85 | 0.001 |
| 3 | 1 | 0.9 | 0.95 | 0.001 |
| 4 | 1 | 0.85 | 0.925 | 0.001 |
| 5 | 1 | 0.85 | 0.925 | 0.001 |
| 6 | 1 | 0.85 | 0.925 | 0.001 |

Classifier performance and significance are shown as a function of the number of ASL images used per subject for the left (ii) and right (iii) postsurgical versus follow-up scans.
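The summary statistics in the table follow from per-scan predictive probabilities like those in Figure 3: thresholding at 0.5 yields the true-positive and true-negative rates (and hence accuracy), and ranking the probabilities yields the AUC. A minimal sketch with invented probability values, not the study's data:

```python
# Illustrative only: the probability values below are made up to show how
# table-style metrics are derived from predictive probabilities.
import numpy as np
from sklearn.metrics import roc_auc_score

# p(presurgery) for 10 presurgery scans and 10 postsurgery scans (hypothetical)
p_pre = np.array([0.9, 0.8, 0.7, 0.85, 0.6, 0.95, 0.75, 0.55, 0.4, 0.65])
p_post = np.array([0.1, 0.2, 0.3, 0.05, 0.45, 0.15, 0.35, 0.25, 0.6, 0.2])

# Postsurgery scans are "positive" and correct when p(presurgery) < 0.5,
# mirroring the Figure 3 convention.
true_positive = float(np.mean(p_post < 0.5))   # postsurgery correctly classified
true_negative = float(np.mean(p_pre > 0.5))    # presurgery correctly classified
accuracy = (true_positive + true_negative) / 2  # classes balanced here

# AUC: rank p(postsurgery) = 1 - p(presurgery) against the true labels
labels = np.r_[np.zeros_like(p_pre), np.ones_like(p_post)]
scores = 1.0 - np.r_[p_pre, p_post]
auc = roc_auc_score(labels, scores)
print(true_positive, true_negative, accuracy, auc)
```

Unlike the thresholded rates, the AUC is threshold-free, which is why Figure 4 reports it alongside accuracy, sensitivity, and specificity.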
Figure 5. Representative GPC image patterns that separate left postsurgical vs. follow-up/no-surgery GP states. The top row provides anatomical slice locations in MNI template space. Lower rows illustrate the GP pattern discerned using 1 to 6 images to train the classifier. (Blue-light blue colormap = negative GPC voxelwise coefficients favoring postsurgical scans; red-yellow colormap = positive GPC voxelwise coefficients favoring no-surgery classification.)