Patrick Friedrich, Kaustubh R Patil, Lisa N Mochalski, Xuan Li, Julia A Camilleri, Jean-Philippe Kröll, Lisa Wiersch, Simon B Eickhoff, Susanne Weis.
Abstract
Hemispheric asymmetries, i.e., differences between the two halves of the brain, have been studied extensively with respect to both structure and function. Commonly employed pairwise comparisons between left and right are suitable for finding differences between the hemispheres, but they come with several caveats when multiple asymmetries are assessed. Moreover, they are not designed to identify the characterizing features of each hemisphere. Here, we present a novel data-driven framework, based on machine-learning classification, for identifying the characterizing features that underlie hemispheric differences. Using voxel-based morphometry (VBM) data from two different samples (n = 226, n = 216), we separated the hemispheres along the midline and used two different pipelines: first, to investigate global differences, we embedded the hemispheres into a two-dimensional space and applied a classifier to assess whether the hemispheres are distinguishable in their low-dimensional representation. Second, to investigate which voxels show systematic hemispheric differences, we employed two classification approaches that promote feature selection in high dimensions. The two hemispheres were accurately classifiable in both their low-dimensional (accuracies: dataset 1 = 0.838; dataset 2 = 0.850) and high-dimensional (accuracies: dataset 1 = 0.966; dataset 2 = 0.959) representations. In low dimensions, classification of the right hemisphere showed higher precision (dataset 1 = 0.862; dataset 2 = 0.894) compared to the left hemisphere (dataset 1 = 0.818; dataset 2 = 0.816). A feature selection algorithm in the high-dimensional analysis identified the voxels that contribute most to accurate classification. In addition, the map of contributing voxels overlapped better with moderately to highly lateralized voxels, whereas a conventional t test with threshold-free cluster enhancement best resembled the laterality quotient (LQ) map at lower thresholds.
Both the low- and high-dimensional classifiers were capable of identifying the hemispheres in subsamples of the datasets, such as males, females, right-handed, or non-right-handed participants. Our study indicates that hemisphere classification is capable of identifying the hemispheres in both their low- and high-dimensional representations as well as delineating brain asymmetries. The concept of hemisphere classifiability thus allows a change in perspective, from asking what differs between the hemispheres towards focusing on the features needed to identify the left and right hemispheres. Taking this perspective on hemispheric differences may contribute to our understanding of what makes each hemisphere special.
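The low-dimensional pipeline described in the abstract (embed all hemispheres in two dimensions, then classify left vs. right with an SVM and report accuracy and per-class precision) can be sketched in a few lines of scikit-learn. This is a toy illustration on synthetic stand-in data, not the paper's code: the paper used UMAP (umap-learn) for the embedding, while PCA is substituted here so the sketch runs with scikit-learn alone, and all sample sizes, voxel counts, and offsets are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA  # stand-in for the paper's UMAP embedding
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score, precision_score

rng = np.random.default_rng(0)
n_subjects, n_voxels = 200, 200
# Synthetic stand-ins for z-standardized VBM hemisphere vectors
left = rng.normal(size=(n_subjects, n_voxels))
right = rng.normal(size=(n_subjects, n_voxels))
right[:, :40] += 0.8  # hypothetical systematically asymmetric voxels
X = np.vstack([left, right])
y = np.array([0] * n_subjects + [1] * n_subjects)  # 0 = left, 1 = right

X2d = PCA(n_components=2).fit_transform(X)     # 2-D embedding of all hemispheres
pred = cross_val_predict(SVC(), X2d, y, cv=5)  # SVM, 5-fold cross-validation

print(f"accuracy: {accuracy_score(y, pred):.3f}")
print(f"precision right: {precision_score(y, pred, pos_label=1):.3f}")
print(f"precision left:  {precision_score(y, pred, pos_label=0):.3f}")
```

Because the synthetic asymmetry is strong, the two classes separate cleanly along the first embedding dimension and the cross-validated accuracy is high.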
Keywords: Brain asymmetry; Functional laterality; Machine learning; Neuroimaging; Volumetry
Year: 2021 PMID: 34882263 PMCID: PMC8844166 DOI: 10.1007/s00429-021-02418-1
Source DB: PubMed Journal: Brain Struct Funct ISSN: 1863-2653 Impact factor: 3.270
Fig. 1 Methodological overview. A Processing after creating the measurement of interest (in this case VBM values). Images were aligned onto a symmetrical, sample-specific template. The hemispheres were split, aligned, and z-standardized. B Processing steps for conventional statistical comparison. Two outcomes were generated: a laterality quotient image, which represents the averaged VBM asymmetry per voxel, and significant asymmetries, which were assessed via a demeaned one-sample t test with threshold-free cluster enhancement. C Processing steps for low-dimensional classification. Dimensionality of all hemispheres was reduced via UMAP. The low-dimensional representation of each hemisphere was fed into a support vector machine to classify hemispheres as left or right. We assessed the precision for classifying the left and right hemispheres based on their low-dimensional representation. D Processing steps for high-dimensional classification. Voxels of the left and right hemispheres were fed into a LASSO classifier, which gave the classification accuracy for each hemisphere as left or right on the basis of selected features. The Boruta feature selection algorithm, based on a random forest classifier, was applied to identify the voxels that were most informative for correct classification of a given hemisphere
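The high-dimensional step in panel D feeds raw voxel vectors into a LASSO classifier, whose L1 penalty drives most voxel weights to exactly zero and thereby performs feature selection. A minimal sketch of this idea with an L1-penalized logistic regression on synthetic data (the sample sizes, the offset voxels, and the regularization strength are hypothetical, not the paper's):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_subjects, n_voxels = 150, 300
left = rng.normal(size=(n_subjects, n_voxels))
right = rng.normal(size=(n_subjects, n_voxels))
right[:, :20] += 1.0  # hypothetical systematically asymmetric voxels
X = np.vstack([left, right])
y = np.array([0] * n_subjects + [1] * n_subjects)  # 0 = left, 1 = right

# L1 penalty (LASSO-type) yields a sparse weight vector: most voxels get weight 0
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
acc = cross_val_score(lasso, X, y, cv=5).mean()

lasso.fit(X, y)
selected = np.flatnonzero(lasso.coef_[0])  # voxels retained by the classifier
print(f"cross-validated accuracy: {acc:.3f}")
print(f"{selected.size} of {n_voxels} voxels selected")
```

The sparsity pattern of the fitted weights is what makes the classifier interpretable as a voxel-selection map.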
Fig. 2 Low-dimensional embedding of the left and right hemispheres. Kernel-density estimate plots are visualized in the left column, showing the probability density of the two-dimensional representations of the left (blue) and right (orange) hemispheres. Scatterplots in the right column show the distribution of the left and right hemispheres in two dimensions. Results of dataset 1 are depicted in the upper panel and results of dataset 2 in the lower panel
Fig. 3 Comparing laterality quotient, t test, and Boruta feature selection. All images are depicted on the right hemisphere. Results of dataset 1 are depicted in the upper panel and results of dataset 2 in the lower panel. A Laterality quotient. Positive values indicate rightward asymmetry and negative values indicate leftward asymmetry. B Significant voxels with p value below 0.05, corrected with TFCE. C Boruta selection. Yellow voxels were chosen as relevant features for distinguishing between the left and right hemispheres as a result of the cross-validation process. D Comparison of the LQ maps with either the t test results or the Boruta selection, based on the Dice similarity coefficient at different LQ thresholds. Dice similarity coefficients (y-axis) are shown for the Boruta selection (blue) and the t test (green) at different LQ thresholds (x-axis) for LQ maps of rightward (upper panel) and leftward (lower panel) voxels
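The comparison in panel D of Fig. 3 rests on the Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|), computed between binary voxel maps after thresholding the LQ map. A small helper with a toy LQ map and selection mask (both values hypothetical):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy example: threshold a hypothetical LQ map and compare with a selection mask
lq = np.array([0.9, 0.4, -0.2, 0.7, -0.8, 0.1])
selection = np.array([1, 0, 0, 1, 0, 0], bool)  # e.g. Boruta-selected voxels
for thr in (0.3, 0.6):
    rightward = lq > thr                        # rightward-asymmetric voxels
    print(f"LQ > {thr}: DSC = {dice(rightward, selection):.2f}")
```

Sweeping the threshold, as on the x-axis of panel D, shows how the overlap between a fixed selection map and increasingly lateralized voxels changes.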
Fig. 4 Comparing hemisphere classifiability between subsamples. A Low-dimensional classification in females vs. males (left column) and right-handed vs. non-right-handed participants (right column). Each panel depicts a KDE plot with an embedded scatterplot, in which both the left (blue) and right (orange) hemispheres are depicted in their low-dimensional representation. The first dimension is always depicted on the x-axis. The reported accuracy values represent the averaged SVM-based cross-validated accuracy for identifying the side of a given hemisphere. B Boruta selection. The left column depicts the comparison between females (light green) and males (blue) as well as their overlap (dark green). The right column depicts the comparison between right-handed (light green) and non-right-handed participants (blue) as well as their overlap (dark green). The Dice similarity coefficient (DSC) represents the overlap between the voxels that contributed to correctly classifying the hemispheres as left or right in females and males, or in right-handed and non-right-handed participants. For both subfigures, the results of dataset 1 are depicted in the upper part and results of dataset 2 in the lower part of the figure
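The Boruta selection used throughout the figures judges each real feature against shuffled "shadow" copies of the data: a feature is relevant only if its random-forest importance beats the best shadow importance. A condensed sketch of a single Boruta-style iteration on synthetic data (the full algorithm, as implemented in the boruta_py package, repeats this over many iterations with a statistical test; all variable names and sizes here are hypothetical):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n, p = 300, 40
X = rng.normal(size=(n, p))
# Only features 0 and 1 carry signal about the class label
y = (X[:, 0] + X[:, 1] + 0.5 * rng.normal(size=n) > 0).astype(int)

# One Boruta-style iteration: append column-wise shuffled "shadow" copies of
# every feature, then keep features whose importance beats the best shadow.
shadow = rng.permuted(X, axis=0)  # shuffling each column breaks any real signal
rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(np.hstack([X, shadow]), y)
real_imp = rf.feature_importances_[:p]
shadow_imp = rf.feature_importances_[p:]
hits = np.flatnonzero(real_imp > shadow_imp.max())
print("tentatively relevant features:", hits)
```

Because the shadow features are pure noise with the same marginal distributions as the real ones, they provide a data-driven importance baseline, which is what lets Boruta flag informative voxels in high dimensions.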