Luke E Miller1,2,3,4, Cécile Fabio2,3,4, Malika Azaroual2,3,4, Dollyane Muret5, Robert J van Beers6,7, Alessandro Farnè2,3,4,8, W Pieter Medendorp6.
Abstract
Perhaps the most recognizable sensory map in all of neuroscience is the somatosensory homunculus. Although it seems straightforward, this simple representation belies the complex link between an activation in a somatotopic map and the associated touch location on the body. Any isolated activation is spatially ambiguous without a neural decoder that can read its position within the entire map, but how this is computed by neural networks is unknown. We propose that the somatosensory system implements multilateration, a common computation used by surveying and global positioning systems to localize objects. Specifically, to decode touch location on the body, multilateration estimates the relative distance between the afferent input and the boundaries of a body part (e.g., the joints of a limb). We show that a simple feedforward neural network, which captures several fundamental receptive field properties of cortical somatosensory neurons, can implement a Bayes-optimal multilateral computation. Simulations demonstrated that this decoder produced a pattern of localization variability between two boundaries that was unique to multilateration. Finally, we identify this computational signature of multilateration in actual psychophysical experiments, suggesting that it is a candidate computational mechanism underlying tactile localization.
Keywords: computation; neural network; somatosensory; tactile localization
Year: 2022 PMID: 34983835 PMCID: PMC8740579 DOI: 10.1073/pnas.2102233118
Source DB: PubMed Journal: Proc Natl Acad Sci U S A ISSN: 0027-8424 Impact factor: 12.779
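The core computation described in the abstract — fusing two distance estimates, one anchored at each body-part boundary, by Bayes-optimal (inverse-variance) weighting — can be sketched in a few lines. This is an illustrative sketch, not the authors' code; the function name, noise values, and unit limb length are assumptions:

```python
def integrate_estimates(d1, var1, d2, var2, length=1.0):
    """Fuse two noisy distance estimates of the same touch location.

    d1: estimated distance from the first boundary (e.g., the elbow),
    d2: estimated distance from the second boundary (e.g., the wrist).
    Both are converted to a common coordinate (position measured from
    boundary 1) and combined by inverse-variance weighting, which is
    the Bayes-optimal rule for independent Gaussian estimates.
    """
    x1 = d1              # position implied by the first estimate
    x2 = length - d2     # position implied by the second estimate
    w1 = (1 / var1) / (1 / var1 + 1 / var2)
    x = w1 * x1 + (1 - w1) * x2
    var = 1 / (1 / var1 + 1 / var2)  # fused variance <= min(var1, var2)
    return x, var

# Two consistent estimates of a touch at 30% of the limb's length:
x, var = integrate_estimates(0.3, 0.01, 0.7, 0.01)
```

Because the fused variance is always below either input variance, integration is most beneficial where both estimates are informative, which is what produces the characteristic variability pattern reported in the paper.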
Fig. 1. Examples of geometric computations. (A) Idealized example of how multi/trilateration can be used to localize an object within a country. Given the distance between two baseline landmarks whose locations are known (d1), an object (e.g., the Eiffel tower) can be localized by calculating its distance from each landmark individually (d2 and d3). (B) Path integration: By computing over distances and angles traveled (d1 and d2), a rat can calculate how much it needs to travel (d3) to return to its starting position. (C) Visuomotor reaching: The distance between the hand and object (d3) can be computed by evaluating over the eye-centered hand distance (d1) and object distance (d2). (D) Egocentric tactile localization: The vector (black arrow) for reaching to a touch (black dot) on the arm may be computed using trilateration. First, an arm-centered touch location is trilaterated (see next section; see Fig. 2) by computing the distance between the touch and the elbow (d1) and wrist (d2). An egocentric representation of touch could be derived by further taking into account the distance between the reaching hand and the elbow (d3) and wrist (d4).
Fig. 2. Neural network implementing trilateration. (A) Trilateral computation for tactile localization: The location of touch on the arm is computed by integrating two estimates (d1 and d2) of the distance between each joint (x1 and x2; elbow and wrist) and the sensory input x3. (B) Neural network implementation of trilateration: (Lower) the encoding layer is composed of homogeneous tuning curves across the space of the sensory surface (in percent); (Upper) the decoding layer is composed of two subpopulations of neurons with distance-dependent gradients in tuning properties (shown: firing rate and tuning width). The distance of a tuning curve from its “anchor” is coded by the luminance, with darker colors corresponding to neurons that are closer to the limb boundary. (C) Activations for each layer of the network averaged over 5,000 simulations. Each circle corresponds to a unit of the neural network. (Lower) Encoding layer; (Middle) decoding layer; (Upper) posterior probabilities of localization for each decoding subpopulation (blue and red) and their integration by the Bayesian decoder (purple).
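The encoding layer described in Fig. 2B — homogeneous tuning curves tiling the sensory surface — can be approximated with Gaussian tuning curves, so that a touch evokes a "hill" of activity centered on its location. A minimal sketch; the number of units, tuning width, and gain are illustrative choices, not values from the paper:

```python
import numpy as np

def encoding_layer(stim, centers, width=0.05, gain=10.0):
    """Noiseless firing rates of homogeneous Gaussian tuning curves.

    stim:    touch location on the surface, in [0, 1] (0 = elbow, 1 = wrist)
    centers: preferred locations of the encoding units tiling the surface
    """
    return gain * np.exp(-((stim - centers) ** 2) / (2 * width ** 2))

centers = np.linspace(0.0, 1.0, 41)       # units evenly tiling the limb
rates = encoding_layer(0.25, centers)     # hill of activity around 25%
peak = centers[np.argmax(rates)]          # preferred location of the most active unit
```

In the model, the decoding subpopulations then read this hill out relative to each limb boundary; here only the encoding step is sketched.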
Fig. 3. Simulation results for trilateration. (A) Localization accuracy for the estimates of each decoding subpopulation (Upper; L1, blue; L2, red) and after integration by the Bayesian decoder (Lower; LINT, purple). (B) Decoding noise for each decoding subpopulation (Upper) increased as a function of distance from each landmark. Note that distance estimates are made from the 0% and 100% locations for the first (blue) and second (red) decoding subpopulations, respectively. Integration via the Bayesian decoder (Lower) led to an inverted U-shaped pattern across the surface. Note the differences in the y axis range for both panels.
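The inverted U-shaped pattern in Fig. 3B follows directly from inverse-variance integration of two estimates whose noise grows with distance from opposite landmarks: near either landmark one estimate is precise, while at the midpoint both are equally noisy. A sketch under an assumed linear noise-growth model (the base and slope values are illustrative):

```python
import numpy as np

def integrated_sd(x, base=0.5, slope=4.0):
    """Fused localization noise at position x (0 = one landmark, 1 = the other).

    Each decoding subpopulation's sd is assumed to grow linearly with
    distance from its own landmark; the two variances are combined by
    inverse-variance weighting, yielding an inverted U across the surface.
    """
    v1 = (base + slope * x) ** 2           # estimate anchored at the 0% landmark
    v2 = (base + slope * (1 - x)) ** 2     # estimate anchored at the 100% landmark
    return np.sqrt(v1 * v2 / (v1 + v2))    # sd of the fused estimate

xs = np.linspace(0.0, 1.0, 101)
sds = integrated_sd(xs)                    # peaks midway between the landmarks
```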
Fig. 4. Results of behavioral experiments. The results of (A) Experiment 1 and (B) Experiment 2. Perceived location and perceptual variability as a function of touch location (0 = elbow; 100 = wrist). Tactile localization in both experiments was very accurate (Upper rows). The line corresponds to a linear regression fit to the group-level data. The variable errors in percent (Lower rows) exhibited the expected signature of trilateration. The line corresponds to a trilateral regression fit to the group-level data.
Fig. 5. Effects of a third landmark on localization. (A) Simulation results of the first prediction: adding a third landmark in the middle of the sensory surface predicts an inverted W-shaped pattern of decoding variance. Inset: The receptive fields of the decoder subpopulation centered on this third landmark. (B) Results from a single participant in Experiment 3. The purple line corresponds to the fit of the model with only two landmarks. The green line corresponds to the fit of the model with a third landmark on the proximal interphalangeal joint. This model provides a significantly better fit. (C) Model fits to Experiment 3 from Cholewiak and Collins (11): An inverted U-shaped pattern was observed when there were two landmarks (elbow and wrist; purple). Confirming the model’s prediction, an inverted W-shaped pattern was observed when an additional third landmark (a stimulator) was added to the middle of the forearm (green).
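The inverted W prediction in Fig. 5A falls out of the same integration rule once a third landmark contributes its own distance-dependent estimate: variability now dips at all three landmarks and peaks between adjacent ones. A sketch under the same assumed linear-noise model (landmark positions and noise parameters are illustrative):

```python
import numpy as np

def fused_sd(x, landmarks=(0.0, 0.5, 1.0), base=0.5, slope=4.0):
    """Fuse one distance estimate per landmark by inverse-variance weighting.

    Each estimate's sd grows linearly with distance from its landmark.
    With landmarks at 0%, 50%, and 100% of the surface, the fused sd
    traces an inverted W across positions.
    """
    variances = [(base + slope * abs(x - L)) ** 2 for L in landmarks]
    return (1.0 / sum(1.0 / v for v in variances)) ** 0.5

xs = np.linspace(0.0, 1.0, 101)
w_pattern = np.array([fused_sd(x) for x in xs])  # dips at 0%, 50%, 100%
```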
Fig. 6. Computational and implementational model predictions. (A) First computational prediction: Baseline decoding noise increases as a function of stimulus length (blue curves) and is substantially higher than when the stimulus is modeled as a point (purple curve); all experiments in the present study used point-like stimuli. (B) Second computational prediction: Effect of increased variability in the second landmark location (the 100% mark on the surface). As variability increases (from 0 to 30% of tactile space, in steps of 5%), the inverted U-shaped pattern becomes more linear (less symmetrical). (C) Third computational prediction: Patterns of decoding variance for sensory surfaces of different sizes. (D) Decoding noise increases linearly as a function of size. Modifying the size of tactile space will modify perceptual variability. (E) Fourth implementational prediction: Simulated subpopulation response for touch at 15% in two conditions: without (orange curve) and with (blue curve) microstimulation of neurons coding for the third quarter of the limb. (F) Decoded log-likelihoods for these two conditions. In the case of (E), microstimulation would modify the distance estimate derived by the Bayesian decoder.