Jonathan Schneider 1,2,3, Nihal Murali 4, Graham W Taylor 2,3,5, Joel D Levine 1,3.
Abstract
Drosophila melanogaster are known to live in a social but cryptic world of touch and odours, but the extent to which they can perceive and integrate static visual information is a hotly debated topic. Some researchers fixate on the limited resolution of D. melanogaster's optics, others on their seemingly identical appearance; yet there is evidence of individual recognition and surprising visual learning in flies. Here, we apply machine learning and show that individual D. melanogaster are visually distinct. We also use the striking similarity of Drosophila's visual system to current convolutional neural networks to theoretically investigate D. melanogaster's capacity for visual understanding. We find that, despite their limited optical resolution, D. melanogaster's neuronal architecture has the capability to extract and encode a rich feature set that allows flies to re-identify individual conspecifics with surprising accuracy. These experiments provide a proof of principle that Drosophila inhabit a much more complex visual world than previously appreciated.
Year: 2018 PMID: 30356241 PMCID: PMC6200205 DOI: 10.1371/journal.pone.0205043
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Fig 1. Theoretical visual acuity of Drosophila melanogaster.
Image of Drosophila melanogaster represented after various theoretical bottlenecks. A: Image of a female D. melanogaster re-sized through a 32×32 bottleneck. B: The same image, but adjusted using AcuityView [4] for a viewing distance of 3 body lengths using the inter-ommatidial angle of 4.8° [5]. C: The same image and distance, but using a conservative estimate of the effective acuity determined by Juusola et al. [6] of approximately 1.5°.
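The optical bottlenecks in Fig 1 amount to down-sampling the image to a coarse grid. A minimal sketch of such a bottleneck using nearest-neighbour sampling in numpy (illustrative only; AcuityView additionally models inter-ommatidial blur and viewing distance, which this does not reproduce):

```python
import numpy as np

def bottleneck(img, size):
    """Reduce a (H, W) image to (size, size) by nearest-neighbour
    sampling, simulating a coarse optical bottleneck."""
    h, w = img.shape
    rows = np.arange(size) * h // size  # source row for each output row
    cols = np.arange(size) * w // size  # source column for each output column
    return img[np.ix_(rows, cols)]

# A 181x181 image reduced through a 29x29 bottleneck,
# matching the input sizes quoted in Fig 2.
img = np.random.rand(181, 181)
low = bottleneck(img, 29)
print(low.shape)  # (29, 29)
```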
Fig 2. Our fly-eye merges engineered and biological architectures.
Schematics of a ‘standard’ convolutional network, our fly-eye model, and a simplified visual connectome of Drosophila. A: Architecture of Zeiler and Fergus [13], receiving the original 181×181 pixel image of an individual Drosophila melanogaster. B: Our fly-eye model, receiving a 29×29 down-scaled image of an individual Drosophila, and showing connections between feature maps. The initial three feature maps are a custom 6-pixel convolutional filter (‘R1-R6’; black pathway) and two 1×1 convolutional filters (‘R7’ and ‘R8’; red pathway). All other convolutions are locally connected filters. See S1 Table for complete connectivity map. C: A simplified map of the fly visual circuit receiving the same down-scaled image of another D. melanogaster. The connections among the neurons implemented in our model are displayed, illustrating the connections and links within and between layers (adapted from [14] and [15]). See S2 Table for performance of these models on a traditional image-classification dataset.
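The 1×1 convolutional filters described for the 'R7'/'R8' pathway mix channels at each pixel independently, with no spatial extent. A minimal numpy sketch of this operation (channel counts and weights here are illustrative, not the model's trained parameters):

```python
import numpy as np

def conv1x1(x, w):
    """1x1 convolution: a linear mixing of channels applied at each
    pixel independently.  x: (H, W, C_in), w: (C_in, C_out)."""
    return x @ w  # matmul over the channel axis at every pixel

# A 29x29 input (the down-scaled resolution used by the fly-eye model)
# with 3 channels, mapped to 8 feature maps.
x = np.random.rand(29, 29, 3)
w = np.random.rand(3, 8)
y = conv1x1(x, w)
print(y.shape)  # (29, 29, 8)
```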
Performance on D. melanogaster re-identification.
| Model Name | Resolution | Accuracy (F1 Score) |
|---|---|---|
| ResNet18 | 158×158 | 0.9426 ± 0.0358 |
| Zeiler and Fergus | 158×158 | 0.9373 ± 0.0365 |
| Human Performance | 158×158 | 0.1309 |
| Zeiler and Fergus | 29×29 | 0.8549 ± 0.0778 |
| ResNet18 | 29×29 | 0.8357 ± 0.0909 |
| | 29×29 | |
| | 29×29 | |
| Human Performance | 29×29 | 0.0829 |
| Random Chance | | 0.05 |
Mean and standard deviation shown (n = 3 independent datasets).
1 For Zeiler and Fergus and ResNet18, the “resolution” was a bottleneck (see Methods).
2 An example of high precision / low recall can be seen in S4 Table: ID 10 is assigned to the correct fly 94% of the time, but fly 10 is correctly identified only 37% of the time.
3 The zoom was applied randomly without preserving aspect ratio (see S2 Fig).
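The precision/recall example in footnote 2 can be checked directly: F1 is the harmonic mean of precision and recall, so the per-identity score for ID 10 follows from the two quoted percentages:

```python
def f1(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Footnote 2 / S4 Table: ID 10 has precision 0.94 but recall 0.37,
# so its per-identity F1 is pulled well below either value alone.
print(round(f1(0.94, 0.37), 3))  # 0.531
```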