| Literature DB >> 24904260 |
Christoph Reichert, Robert Fendrich, Johannes Bernarding, Claus Tempelmann, Hermann Hinrichs, Jochem W Rieger.
Abstract
Perception is an active process that interprets and structures the stimulus input based on assumptions about its possible causes. We use real-time functional magnetic resonance imaging (rtfMRI) to investigate a particularly powerful demonstration of dynamic object integration in which the same physical stimulus intermittently elicits categorically different conscious object percepts. In this study, we simulated an outline object moving behind a narrow slit. With such displays, the physically identical stimulus can elicit categorically different percepts that either correspond closely to the physical stimulus (vertically moving line segments) or represent a hypothesis about its underlying cause (a horizontally moving object that is partly occluded). In the latter case, the brain must construct an object from the input sequence. Combining rtfMRI with machine learning techniques, we show that it is possible to determine online the momentary state of a subject's conscious percept from time-resolved BOLD activity. In addition, we found that feedback about the currently decoded percept increased the decoding rates compared to prior fMRI recordings of the same stimulus without feedback presentation. The analysis of the trained classifier revealed a brain network that discriminates contents of conscious perception, with antagonistic interactions between early sensory areas that represent physical stimulus properties and higher-tier brain areas. During integrated object percepts, brain activity decreases in early sensory areas and increases in higher-tier areas. We conclude that it is possible to use BOLD responses to reliably track the contents of conscious visual perception with relatively high temporal resolution. We suggest that our approach can also be used to investigate the neural basis of auditory object formation, and we discuss the results in the context of predictive coding theory.
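The decoding approach described in the abstract — training a classifier on BOLD volumes labeled with the reported percept, then reading out the percept for each incoming volume — can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' pipeline; the voxel counts, the linear SVM choice, and the signal structure are assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_train, n_voxels = 200, 300  # assumed dimensions

# Synthetic training volumes labeled by the reported percept:
# 0 = line segments, 1 = integrated object
X_train = rng.normal(size=(n_train, n_voxels))
y_train = rng.integers(0, 2, size=n_train)
X_train[y_train == 1, :30] += 0.8  # assumed percept-related signal

clf = LinearSVC(dual=False).fit(X_train, y_train)

# "Online" phase: classify each incoming EPI volume as it arrives;
# the signed distance to the hyperplane gives a graded readout
new_volume = rng.normal(size=(1, n_voxels))
decoded_percept = clf.predict(new_volume)[0]
hyperplane_distance = clf.decision_function(new_volume)[0]
```

In a real-time setting the prediction for each volume would drive the feedback display; here the "new" volume is just another random draw.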
Keywords: ambiguous stimulus; anorthoscopic; bistable perception; object integration; real-time fMRI; slit viewing
Year: 2014 PMID: 24904260 PMCID: PMC4033165 DOI: 10.3389/fnins.2014.00116
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 4.677
Figure 1. Presented stimulus. (A) The shape of a 3-loop figure that was simulated to move horizontally back and forth behind a narrow aperture. (B,C) Snapshots of the figure as seen through the simulated aperture at two different points in time. Although only line segments are visible at any point in time, the subject is able to perceive the figure as a whole. At a specific aperture width, the participant's conscious perception switches spontaneously between vertically jumping line segments and the horizontally moving figure. From (B) to (C) the figure moves to the left.
Figure 2. Perceptual switch intervals. Histograms show the relative frequency of perceptual state durations for (A) 15 participants of the offline experiment and (B) 10 participants of the online experiment. A gamma probability density function (pdf) was fitted to the distributions. The median duration of one percept was 12.1 s in the offline experiment (pdf shape: 2.00; scale: 7.51) and 21.4 s in the online experiment (pdf shape: 5.29; scale: 4.26).
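The gamma fit reported in the caption can be reproduced in outline with SciPy. The durations below are simulated from the reported offline parameters (shape 2.00, scale 7.51), so only the fitting call mirrors the actual analysis step; fixing the distribution's location at zero is an assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Simulated percept durations (s), drawn from the reported offline fit
durations = rng.gamma(shape=2.00, scale=7.51, size=2000)

# Fit a gamma pdf to the durations; location fixed at 0 (assumption)
shape, loc, scale = stats.gamma.fit(durations, floc=0)
median = stats.gamma.median(shape, loc=loc, scale=scale)
```

With the reported parameters the fitted median falls near the 12.1 s value quoted for the offline experiment.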
Figure 3. Accuracy of offline tracking. (A) Single-subject decoding accuracy obtained by leave-one-run-out cross-validation. On average, the decoded percept was correct for 79.2% (SE: 2.6%) of the acquisition time. For each participant, the mean guessing level as determined by permutation testing is approximately 50%, and the fraction of correctly decoded perceptual states exceeds the 95% confidence interval of the guessing level. Participants are sorted by performance. (B) Time course of the distance (arbitrary units) of the data points in classification space to the separating hyperplane constructed by an SVM for participant d, Run 5 (92.7% accuracy). White background marks intervals in which the participant indicated perceiving the integrated object; gray background marks intervals with line percepts. EPI volumes for which the conscious percept was correctly identified are shown as circles, incorrectly identified percepts as crosses.
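The validation scheme in the caption — leave-one-run-out cross-validation with a permutation estimate of the guessing level — can be sketched with scikit-learn. The run structure, voxel counts, and signal strength here are synthetic assumptions; only the cross-validation and permutation scheme mirrors the caption.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, permutation_test_score

rng = np.random.default_rng(7)
n_runs, vols_per_run, n_voxels = 6, 40, 150  # assumed dimensions
X = rng.normal(size=(n_runs * vols_per_run, n_voxels))
y = rng.integers(0, 2, size=n_runs * vols_per_run)  # percept labels
X[y == 1, :25] += 0.7                               # assumed informative voxels
runs = np.repeat(np.arange(n_runs), vols_per_run)   # run index per volume

# Leave-one-run-out CV; label permutations estimate the guessing level
accuracy, perm_scores, p_value = permutation_test_score(
    LinearSVC(dual=False), X, y, groups=runs,
    cv=LeaveOneGroupOut(), n_permutations=100, random_state=0)
```

The permutation scores cluster around 50% (the guessing level for two percepts), and a classifier trained on informative voxels scores well above that band.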
Figure 4. Accuracy of online tracking. Single-subject decoding accuracy obtained during the online decoding experiment with feedback. On average, the content of the conscious percept was correctly decoded for 82.8% (SE: 2.4%) of the time. An additional offline cross-validation procedure applied to the same data yielded 88.8% (SE: 1.9%) accuracy on average. Participants are sorted by performance.
Figure 5. Significance of classifier weights and activation differences. Brain patterns were extracted from the group classifiers of both the offline and online experiments and overlaid on an MNI template. Brain regions were masked by p-values of classifier weights, thresholded at p < 0.1; p-values were obtained by permutation testing. Hot colors indicate higher BOLD signal during object percepts and cool colors indicate lower BOLD signal during object percepts.