Antoine Wystrach, Alex Dewar, Andrew Philippides, Paul Graham.
Abstract
The visual systems of animals have to provide information to guide behaviour and the informational requirements of an animal's behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators it may be that their vision is optimised for navigation. Here we take a computational approach in asking how the details of the optical array influence the informational content of scenes used in simple view matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen for many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are considered as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit by processing information from their two eyes independently.
Keywords: Ants; Image matching; Route navigation; Snapshot; View-based homing
Year: 2015 PMID: 26582183 PMCID: PMC4722065 DOI: 10.1007/s00359-015-1052-1
Source DB: PubMed Journal: J Comp Physiol A Neuroethol Sens Neural Behav Physiol ISSN: 0340-7594 Impact factor: 1.836
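The view-matching strategy examined in the paper can be sketched as a rotational image difference function (rIDF): a stored low-resolution panoramic view is compared against azimuthal rotations of the current view, and the rotation giving the smallest pixel-wise difference is taken as the recovered heading. The following is a minimal numpy sketch with hypothetical toy panoramas; the paper's simulated worlds, eye models and training routes are not reproduced here:

```python
import numpy as np

def ridf(current, stored):
    """Rotational image difference function: sum-squared pixel
    difference between the stored view and each azimuthal rotation
    (column-wise roll) of the current panoramic view."""
    n_cols = current.shape[1]
    return np.array([
        np.sum((np.roll(current, shift, axis=1) - stored) ** 2)
        for shift in range(n_cols)
    ])

def recover_heading(current, stored):
    """Heading (in degrees) at which the current view best matches
    the stored view, i.e. the rotation minimising the rIDF."""
    n_cols = current.shape[1]
    return np.argmin(ridf(current, stored)) * 360.0 / n_cols

# Toy example: a 10-row x 72-column panorama (5 deg per pixel).
rng = np.random.default_rng(0)
stored = rng.random((10, 72))
# The current view is the stored view rotated by 10 columns (50 deg).
current = np.roll(stored, -10, axis=1)
print(recover_heading(current, stored))  # 50.0
```

Lowering the resolution amounts to reducing the number of columns per degree before computing the rIDF, which is where the specificity/generalisation trade-off described in the abstract arises.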
Fig. 1Simulating natural environments. a We generated six simulated worlds, two of each of three types: tussocks only (bottom); trees only (middle); trees and tussocks (top). Within each world we generated 8 training routes radiating from the centre of each world (blue lines). b Route performance was measured by asking how accurately the route memories (given a particular eye design) could be used to recover the route heading at different displacements from the route (red dots in a indicate release locations for one training route; red arrows in b indicate recovered headings at these locations). c The visual field was varied from 36° to 360° but always kept symmetrical about the forward facing direction. d Along with visual field, we co-varied resolution. Here, for the same scene, we show resolutions from 0.25° to 180°. e The directional error (mean and 95 % confidence interval) is shown for locations at different distances from the training routes in each of the three world types: trees only (green); trees and tussocks (blue); and tussocks only (red). Data presented here were collected from simulations with high resolution (0.35°) and a full visual field of 360°. The dashed line at 90° represents chance and the x-axis is non-linear to emphasise the region of interest. f For the same data as (e) we look at signal strength. Signal strength for a specific test location is defined as the degree of familiarity in the most familiar direction divided by the median familiarity from across all tested directions. The most familiar direction is that with the lowest value in the rIDF (see “Methods”). The graphs show mean signal strength (with 95 % CI) and the colours are as above. Inset shows directional error as a function of signal strength averaged for each release distance
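The signal-strength measure defined in the Fig. 1 legend (familiarity in the most familiar direction divided by the median familiarity across all tested directions, where a low rIDF value means high familiarity) can be computed directly from an rIDF. A sketch, assuming the rIDF is supplied as an array of image differences over the tested directions:

```python
import numpy as np

def signal_strength(ridf_values):
    """Signal strength for a test location: the rIDF value in the
    most familiar direction (the minimum, since a low image
    difference means high familiarity) divided by the median rIDF
    value across all tested directions. Ratios well below 1 indicate
    a sharp, unambiguous minimum; ratios near 1 indicate a flat,
    uninformative rIDF."""
    ridf_values = np.asarray(ridf_values, dtype=float)
    return ridf_values.min() / np.median(ridf_values)

# A sharp rIDF minimum gives a strong signal (ratio well below 1)...
sharp = np.array([10.0, 9.0, 1.0, 9.5, 10.0])
# ...while a flat rIDF gives a weak one (ratio close to 1).
flat = np.array([10.0, 9.8, 9.5, 9.9, 10.0])
print(signal_strength(sharp))  # ~0.105
print(signal_strength(flat))   # ~0.96
```

This matches the inset of Fig. 1f: locations whose rIDF has a pronounced minimum (low ratio) yield small directional errors, while flat rIDFs approach chance.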
Fig. 2Performance as a function of resolution and azimuthal visual field. For the three world types (columns) we show how performance varies as a function of visual resolution and the azimuthal extent of the visual field. This analysis is repeated for release locations at different distances away from the training route (rows). In each panel grey levels represent mean directional error, with lighter shades meaning better performance. Errors have been interpolated from 10 visual field sizes × 10 resolutions regularly spaced on the maps (triangle-based cubic interpolation). Isolines are used to represent absolute number of pixels across resolution and visual field size. Red dot represents the visual field and resolution of Melophorus bagoti (Schwarz et al. 2011)
Fig. 3Performance as a function of number of visual subfields. For worlds containing tussocks, trees or trees and tussocks (left, middle and right, respectively) performance is shown when visual matching is undertaken using one (red), two (yellow), three (green) or four (blue) visual subfields across a range of release positions for a total visual field of 300° and resolution of 5°. Data shown are means with 95 % confidence intervals
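The subfield scheme of Fig. 3 can be sketched as follows: the panorama is split azimuthally into k contiguous subfields, each subfield independently estimates a heading from its own rIDF (rotating the whole view but matching only that subfield's columns), and the independent estimates are combined, here with a circular mean. This is a hypothetical reconstruction for illustration, not the paper's exact pipeline:

```python
import numpy as np

def subfield_headings(current, stored, n_subfields):
    """Each azimuthal subfield gives an independent heading estimate:
    the whole current view is rotated, but the image difference is
    computed only over that subfield's columns."""
    n_cols = current.shape[1]
    bounds = np.linspace(0, n_cols, n_subfields + 1).astype(int)
    headings = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        diffs = [np.sum((np.roll(current, s, axis=1)[:, lo:hi]
                         - stored[:, lo:hi]) ** 2)
                 for s in range(n_cols)]
        headings.append(np.argmin(diffs) * 360.0 / n_cols)
    return headings

def circular_mean_deg(angles_deg):
    """Combine independent heading estimates with a circular mean."""
    rad = np.deg2rad(angles_deg)
    return np.rad2deg(np.arctan2(np.sin(rad).sum(),
                                 np.cos(rad).sum())) % 360.0

# Toy example: an 8-row x 72-column panorama rotated by 50 degrees.
rng = np.random.default_rng(1)
stored = rng.random((8, 72))
current = np.roll(stored, -10, axis=1)
estimates = subfield_headings(current, stored, 3)
print(estimates, circular_mean_deg(estimates))
```

With noisy views, a single bad match in one subfield is outvoted by the others, which is one plausible reading of why performance in Fig. 3 improves with more subfields, and why processing the two eyes independently could help.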