Mark Roper¹, Chrisantha Fernando²,³, Lars Chittka¹.
Abstract
The ability to generalize over naturally occurring variation in cues indicating food or predation risk is highly useful for efficient decision-making in many animals. Honeybees have remarkable visual cognitive abilities, allowing them to classify visual patterns by common features despite having a relatively miniature brain. Here we ask whether generalization requires complex visual recognition or whether it can also be achieved with relatively simple neuronal mechanisms. We produced several simple models inspired by the known anatomical structures and neuronal responses within the bee brain and subsequently compared their ability to generalize achromatic patterns to the observed behavioural performance of honeybees on these cues. Neural networks with just eight large-field orientation-sensitive input neurons from the optic ganglia and a single layer of simple neuronal connectivity within the mushroom bodies (learning centres) show performances remarkably similar to a large proportion of the empirical results without requiring any form of learning, or fine-tuning of neuronal parameters to replicate these results. Indeed, a model simply combining sensory input from both eyes onto single mushroom body neurons returned correct discriminations even with partial occlusion of the patterns and an impressive invariance to the location of the test patterns on the eyes. This model also replicated surprising failures of bees to discriminate certain seemingly highly different patterns, providing novel and useful insights into the inner workings facilitating and limiting the utilisation of visual cues in honeybees. Our results reveal that reliable generalization of visual information can be achieved through simple neuronal circuitry that is biologically plausible and can easily be accommodated in a tiny insect brain.
Year: 2017 PMID: 28158189 PMCID: PMC5291356 DOI: 10.1371/journal.pcbi.1005333
Source DB: PubMed Journal: PLoS Comput Biol ISSN: 1553-734X Impact factor: 4.475
Fig 1. Schematic representation of DISTINCT and MERGED models. Representation of how the lobula orientation-sensitive neurons (LOSN) connect to each model's Kenyon cells.
The DISTINCT model's Kenyon cells (red neurons) receive LOSN inputs from just one quadrant of the visual field, either the dorsal or ventral half of the left or right eye. In this example the dorsal Kenyon cells each have an inhibitory (triangle) LOSN type A synapse and three LOSN type B excitatory (circle) synapses (see Methods: Table 1 type 046). The ventral DISTINCT Kenyon cells in this example each have one excitatory type A and one inhibitory type B synapse (see Methods: Table 1 type 001). The MERGED model Kenyon cells (green neurons) have the same configuration types as the respective dorsal and ventral DISTINCT neurons, but this model combines visual input originating from the same region (dorsal or ventral) of both eyes; in the example the dorsal MERGED neuron has one inhibitory connection from a type A LOSN and three excitatory LOSN type B synapses from the dorsal left eye, and therefore must have the same three excitatory type B and one inhibitory type A synapses from the dorsal right eye.
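The simple synaptic scheme above can be sketched in a few lines. This is an illustrative sketch, not the authors' code: the zero firing threshold and the example firing rates are assumptions; the caption only specifies that each synapse weighs its LOSN input at +1 (excitatory) or -1 (inhibitory).

```python
# Hedged sketch of one Kenyon cell's response to its LOSN inputs.
# Assumption: the cell fires when its net input exceeds zero.

def kenyon_cell_fires(rate_a, rate_b, n_a, sign_a, n_b, sign_b):
    """Net drive onto one Kenyon cell from its type A and type B LOSN synapses.

    n_a, n_b  -- number of type A / type B synapses
    sign_a/b  -- +1 for excitatory synapses, -1 for inhibitory (weights are +/-1)
    """
    net = sign_a * n_a * rate_a + sign_b * n_b * rate_b
    return net > 0

# Configuration type 001 (1A+, 1B-): one excitatory A, one inhibitory B synapse.
print(kenyon_cell_fires(40.0, 25.0, 1, +1, 1, -1))  # A drive exceeds B: True
```

With fixed ±1 weights, each configuration differs only in how many type A and type B synapses it has and in their signs, which is all the model needs to produce distinct activation patterns across the Kenyon cell population.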
Lobula orientation-sensitive neuron to Kenyon cell configuration types.
| 001: 1A+, 1B- | 010: 2A+, 5B- | 019: 3A+, 13B- | 028: 7A+, 3B- | 037: 11A+, 13B- |
| 044: 1A-, 1B+ | 053: 2A-, 5B+ | 062: 3A-, 13B+ | 071: 7A-, 3B+ | 080: 11A-, 13B+ |
List of all 86 lobula large-field orientation-sensitive neuron (LOSN) to mushroom body Kenyon cell configurations (representative entries shown above). Format: [configuration ID]: [number of LOSN type A synapses]A[+/- = excitatory/inhibitory synapses], [number of LOSN type B synapses]B[+/- = excitatory/inhibitory synapses]. The first 43 configurations each had one or more excitatory LOSN type A connections and one or more inhibitory LOSN type B connections. The second 43 configurations were the reciprocal of these, with type A inputs being inhibitory and type B excitatory. The use of prime numbers provided a simple way to exclude duplicate responses, e.g. 2A+, 5B- would generate the same Kenyon cell response as 4A+, 10B-. All synaptic weights were set to 1 or -1 for excitatory and inhibitory connections respectively.
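The prime-number deduplication argument can be checked directly: with all weights fixed at ±1, a configuration's fire/no-fire behaviour depends only on the ratio of its type A and type B synapse counts, so 2A+, 5B- and 4A+, 10B- are indistinguishable. A quick sketch, assuming the synapse counts are drawn from {1, 2, 3, 5, 7, 11, 13} (the counts that appear in the table rows above):

```python
# Verifies that prime synapse counts yield 43 distinct base configurations.
# Two configurations with the same nA:nB ratio make identical fire/no-fire
# decisions for every stimulus, because the net input nA*rateA - nB*rateB
# only scales with the ratio.

from math import gcd
from itertools import product

counts = [1, 2, 3, 5, 7, 11, 13]  # assumption: the counts seen in the table

ratios = set()
for n_a, n_b in product(counts, repeat=2):
    g = gcd(n_a, n_b)               # equal counts reduce to the 1:1 ratio
    ratios.add((n_a // g, n_b // g))

# 49 pairs in total, but the seven equal pairs (1,1), (2,2), ... collapse
# to a single 1:1 ratio, leaving 43 distinct configurations per sign pattern.
print(len(ratios))  # 43
```

Because any two distinct primes (or a prime and 1) are coprime, every unequal pair is already in lowest terms, which is exactly why this choice of counts guarantees no duplicates.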
Fig 2. Model results for experiment set 1. Exemplary summary of honeybee behaviour and model performance for the discrimination tasks.
In the behavioural experiments [18] different groups of honeybees were differentially trained on a particular pattern pair, one rewarding (CS+) and one unrewarding (CS-). (a) Blue diamonds: honeybee result, percentage of correct pattern selections after training. Red squares: performance accuracy of the DISTINCT simulated bee when test stimuli were presented in the centre of the field of view. Green triangles: performance accuracy of the MERGED simulated bee for the centralised stimuli. Error bars show the standard deviation of the Kenyon cell similarity ratios (as a percentage, and centred on the simulated bee performance value, which was equivalent to the average Kenyon cell similarity ratio over all simulation trials). Standard deviations were not available for the behaviour results. Small coloured rectangles on the x-axis show the corresponding experiment colour identifiers in (b, c). (b, c) Performance accuracy of the DISTINCT (b) and MERGED (c) simulated bees when comparing the rewarding patterns (CS+) with the corresponding correct (TSCOR) and incorrect (TSINC) pattern pairs when these patterns were horizontally offset between 0 and ±200 pixels in 25 pixel increments (see d). The colour of each region indicates the corresponding experiment in (a); performance at 0 horizontal pixel offset in (b) and (c) is therefore identical to the corresponding DISTINCT or MERGED result in (a). (d) Example of the correct and incorrect pattern images when horizontally offset by -200 pixels to 0 pixels; similar images were created for +25 pixels to +200 pixels. Experiment images were 300 x 150 pixels in size; patterns occupied a 150 x 150 pixel box, cropped as necessary. The number in the top right of each image indicates the number of pixels it was offset by; these were not displayed in the actual images. Red dotted lines show how each pattern was subdivided into the dorsal left eye, dorsal right eye, ventral left eye and ventral right eye regions.
Each region projected one type A and one type B lobula orientation-sensitive neuron to the models' mushroom bodies (see Fig 1). The DISTINCT simulated bee performs much better than both the MERGED model's simulated bee and the empirical honeybee results when there is no offset in the patterns (a), but with only a small offset (±75 pixels) the DISTINCT simulated bee is unable to discriminate the patterns (b), whereas the simulated bee based on the MERGED model is able to discriminate most of the patterns over a large range of offsets (c).
Fig 3. Model results for experiment set 2. Summary of honeybee behaviour and model performance for the generalization tasks.
(a) The two sets of quadrant patterns (each set having similarly orientated bars in each quadrant of the pattern) that were used during the behavioural experiments [19, 20]. Honeybees were trained on random pairs of a rewarding pattern (CS+) and unrewarding pattern (CS-) selected from the two pattern sets; different groups of bees were tested on the reversal such that the CS- pattern would become the CS+ and vice versa. (b) Blue diamonds: honeybee result, percentage of correct choice selections when tested with novel patterns of varying degrees of difference from the training patterns (here the correct pattern (TSCOR) is the pattern the bees visited most often). Red squares: DISTINCT simulated bee performance when comparing each of the six rewarding patterns in a pattern set (a) against a novel test pattern pair (TSCOR and TSINC). Green triangles: MERGED simulated bee results for the rewarding pattern sets compared against each test pattern pair. Error bars show the standard deviation of the Kenyon cell similarity ratios (as a percentage, and centred on the simulated bee performance value, which was equivalent to the average Kenyon cell similarity ratio over all simulation trials). Standard deviations were not available for the behaviour results. For simple generalisations (i), where the novel correct pattern had bars oriented similarly to the rewarding pattern set and the incorrect test pattern was similar to the unrewarding training patterns, the DISTINCT and MERGED simulated bee performances were almost identical to those of the real honeybees.
For the harder generalisations: (ii) the correct test pattern had one quadrant incorrect while the incorrect test pattern had all quadrants incorrect; (iii) the correct pattern had all quadrants correct while the incorrect pattern had three quadrants correct; (iv) mirror images and left-right reversals of the rewarding pattern layout. The simulated bee based on the DISTINCT model correctly generalised all pattern pairs but performed substantially worse than the real bees. The MERGED simulated bee failed most experiments in (iii) but typically performed better than the DISTINCT bee in (ii) and (iv). Both simulated bees failed to generalise correctly if the correct pattern was a chequerboard, whereas real honeybees typically rejected this novel stimulus.
Fig 4. Model results for cross pattern experiments. Summary of honeybee behaviour and simulated bees' performance for the discrimination of simple cross patterns.
In the behavioural experiments [11] different groups of honeybees were differentially trained on a particular cross pattern pair, one rewarding (CS+) and one unrewarding (CS-). Blue: honeybee result, percentage of correct CS+ pattern selections after training. Red: performance accuracy of the DISTINCT simulated bee. Green: performance accuracy of the MERGED simulated bee. Error bars for honeybee behaviour show the standard deviation. Error bars for the models show the standard deviation of the Kenyon cell similarity ratios (as a percentage, and centred on the simulated bee performance value, which was equivalent to the average Kenyon cell similarity ratio over all simulation trials). (a) Discrimination of a 90° cross and a 45° rotation of this pattern. The DISTINCT simulated bee easily discriminates the patterns but honeybees cannot; the simulated bee based on the MERGED model achieved below 60% accuracy. (b) Discrimination of a 22.5° cross pattern and the same pattern rotated through 90°; both of the simulated bees and real honeybees can discriminate these cross patterns.
Fig 5. Schematic representation of the models. The pattern processing stages for the type A and type B lobula large-field orientation-sensitive neurons (LOSN) and their connectivity to the mushroom body Kenyon cells.
(a) Each simulated eye perceives one half of the test image (left eye shown). Lamina: converts a given pattern image into a binary (black/white) retinotopic representation. Medulla: extracts edges resolvable by honeybees and determines the length of all orientations (0°–180°) within the upper and lower image halves. Lobula: within the upper and lower image regions, the LOSN firing rates for the type A and type B neurons are calculated (see Fig 6). The same process is repeated for the right eye, producing eight LOSN responses in total. These are then passed to the appropriate 10,320 (DISTINCT model) or 5,160 (MERGED model) mushroom body Kenyon cells. (b) Firing rate responses of our theoretical LOSNs (type A: orange, type B: blue) to a 280 pixel edge at all orientations between 0°–180°; tuning curves adapted from honeybee electrophysiological recordings [13]. (c) Scale factor applied to the LOSN firing rates dependent on the total edge pixel length in each pattern quadrant; the nonlinear scaling factor was derived from dragonfly neuronal responses to oriented bars of differing lengths [14].
Fig 6. Worked example of LOSN calculations. Simplified example of the lobula orientation-sensitive neuron (LOSN) type A and type B firing rate response calculations.
(a) Here we calculate values for just the left dorsal eye (quadrant 1) with only horizontal (0°) and vertical (90°) edges presented. In the single horizontal bar example (top) 75% of the overall edge length is at a 0° orientation (600 pixels out of a total edge length of 800 pixels) and 25% is at a 90° orientation, thus the LOSN responses are influenced more by the response curve values at 0° than at 90°. Conversely, the vertical bar is influenced more by the response curve values at 90°, resulting in overall higher LOSN firing rates. The two horizontal bars example (bottom) has the same proportion of orientations as the single horizontal bar (top). Although the total edge length is doubled, the LOSN firing rates are not twice as high; instead they are scaled using the non-linear scaling factor derived from dragonflies (see Fig 5 and Eq 1). Note that the LOSN type A firing rate is the same for a single vertical bar as it is for two horizontal bars (52 Hz). (b) LOSN type A and type B response curve values for 0° and 90° (see Fig 5). (c) LOSN scale factors for 800 and 1600 pixel edges (see Fig 5).
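The two-step calculation in this worked example (an orientation-weighted tuning curve value, then a nonlinear length scale factor) can be sketched as follows. The tuning values and scale factors below are illustrative placeholders, not the recorded values from [13, 14]:

```python
# Sketch of the LOSN firing rate calculation from Fig 6. Orientation
# contributions are weighted by their share of the total edge length, then
# the result is scaled by a nonlinear factor that depends on total edge
# length, so doubling the edges does not double the firing rate.

TUNING = {"A": {0: 20.0, 90: 65.0},   # hypothetical tuning values (spikes/s)
          "B": {0: 60.0, 90: 30.0}}
SCALE = {800: 1.0, 1600: 1.6}         # hypothetical length scale factors

def losn_rate(neuron_type, edge_lengths):
    """edge_lengths maps orientation (degrees) -> edge pixels in the quadrant."""
    total = sum(edge_lengths.values())
    weighted = sum((length / total) * TUNING[neuron_type][ori]
                   for ori, length in edge_lengths.items())
    return weighted * SCALE[total]

single_bar = {0: 600, 90: 200}   # 75% horizontal, 25% vertical edges
double_bar = {0: 1200, 90: 400}  # same proportions, twice the edge length

print(losn_rate("A", single_bar))  # 31.25
print(losn_rate("A", double_bar))  # 50.0 -- scaled up, but not doubled
```

Because both bars have the same orientation proportions, the weighted tuning value is identical; only the length-dependent scale factor separates their responses, mirroring the dragonfly-derived scaling in the caption.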
Fig 7. Worked example of Kenyon cell activation calculations. Simplified example of the mushroom body Kenyon cell activation and similarity ratio calculations.
(a, b) Left: rewarding pattern (CS+) (single horizontal bar). Here we used the DISTINCT model to calculate the Kenyon cell activations to this pattern. Right: graphical representation of all Kenyon cell activations (red: fired, black: inhibited). In this example we again only processed the top-left quadrant of the visual field (see Fig 6). (c) The correct test stimulus (TSCOR) and the resultant Kenyon cell activations. (d) Incorrect test stimulus (TSINC) and its Kenyon cell activation pattern. (e) Black dots show where differences occur between the activations of the respective Kenyon cells when presented with the rewarding pattern and the correct test pattern. (f) Differences between the rewarding and incorrect stimuli Kenyon cell activations. The rewarding (CS+) and correct test stimuli (TSCOR) both present mostly horizontal edges; however, due to the difference in edge lengths, the lobula orientation-sensitive neuron firing rates are markedly different. Nonetheless, the combination of excitatory and inhibitory synaptic connections from these lobula neurons to the Kenyon cells (see Table 1) produces very similar Kenyon cell activations. Using the Euclidean distances between the Kenyon cell activations of the CS+ and TSCOR, and CS+ and TSINC responses, this simulation produced a Kenyon cell similarity ratio of (1 − (7.6158 / (7.6158 + 13.6382))) = 0.64 (see Eq 2), indicating that for this simulation our DISTINCT model would generalize from the single horizontal bar pattern (CS+) to the two horizontal bars pattern (TSCOR), in preference to the single vertical bar stimulus (TSINC).
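The similarity ratio at the end of the caption (Eq 2) is a simple function of two Euclidean distances; a minimal sketch, with the caption's distances plugged in:

```python
# Kenyon cell similarity ratio (Eq 2): compares the distance from the CS+
# activation vector to each test stimulus's activation vector. Values above
# 0.5 mean the model prefers the correct test stimulus (TSCOR).

from math import dist  # Euclidean distance, Python 3.8+

def similarity_ratio(cs_plus, ts_cor, ts_inc):
    d_cor = dist(cs_plus, ts_cor)  # distance CS+ -> TSCOR
    d_inc = dist(cs_plus, ts_inc)  # distance CS+ -> TSINC
    return 1 - d_cor / (d_cor + d_inc)

# Reproducing the caption's numbers directly:
print(round(1 - 7.6158 / (7.6158 + 13.6382), 2))  # 0.64
```

A ratio of exactly 0.5 would mean the CS+ activations are equidistant from both test stimuli, i.e. the model has no preference.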