Jolyon Troscianko, John Skelhorn, Martin Stevens.
Abstract
BACKGROUND: Quantifying the conspicuousness of objects against particular backgrounds is key to understanding the evolution and adaptive value of animal coloration, and to designing effective camouflage. Quantifying detectability can reveal how colour patterns affect survival, how animals' appearances influence habitat preferences, and how receiver visual systems work. Advances in calibrated digital imaging are enabling the capture of objective visual information, but it remains unclear which methods are best for measuring detectability. Numerous descriptions and models of appearance have been used to infer the detectability of animals, but these models are rarely empirically validated or directly compared to one another. We compared the performance of human 'predators' to a bank of contemporary methods for quantifying the appearance of camouflaged prey. Background matching was assessed using several established methods, including sophisticated feature-based pattern analysis, granularity approaches and a range of luminance and contrast difference measures. Disruptive coloration is a further camouflage strategy in which high-contrast patterns disrupt the prey's tell-tale outline, making it more difficult to detect. Disruptive camouflage has been studied intensely over the past decade, yet defining and measuring it have proven far more problematic. We assessed how well existing disruptive coloration measures predicted capture times. Additionally, we developed a new method for measuring edge disruption based on an understanding of sensory processing and the way in which false edges are thought to interfere with animal outlines.
Keywords: Animal coloration; Background matching; Camouflage; Crypsis; Disruptive coloration; Image processing; Pattern analysis; Predation; Signalling; Vision
Year: 2017 PMID: 28056761 PMCID: PMC5217226 DOI: 10.1186/s12862-016-0854-2
Source DB: PubMed Journal: BMC Evol Biol ISSN: 1471-2148 Impact factor: 3.260
Descriptions of the methods used to measure prey conspicuousness
| Camouflage Category | Variable Name | Filtering Method | Basic Description |
|---|---|---|---|
| Edge Disruption | GabRat | Gabor Filter | Average ratio of ‘false edges’ (edges at right angles to the prey outline) to ‘salient edges’ (edges parallel with the prey outline). See Additional file |
| Edge Disruption | VisRat | Canny Edge Detector | Proportion of Canny edge pixels in the prey’s outline region |
| Edge Disruption | DisRat | Canny Edge Detector | Proportion of Canny edge pixels in the prey’s outline region |
| Edge Disruption | Mean Edge-region Canny Edges | Canny Edge Detector | Proportion of Canny edge pixels in the prey’s outline region |
| Edge Disruption | Edge-intersecting cluster count | None | Count of the number of changes in the pattern around the prey’s outline |
| Pattern/Object detection | SIFT | Difference-of-Gaussians, Hough Transform | Uses a Hough transform to find features in the prey, then counts how many similar features are found in the background |
| Pattern/Object detection | HMAX | Gabor Filter | Breaks down a bank of Gabor filter outputs into layers that describe patterns with some invariance to scale and orientation |
| Pattern | PatternDiff | Fourier Bandpass | Sums the absolute differences between prey and background pattern statistics |
| Pattern | Euclidean Pattern Distance | Fourier Bandpass | Euclidean distance between normalised descriptive pattern statistics |
| Luminance | Mean Background Luminance | Luminance | Mean luminance of the background |
| Luminance | Mean Luminance Difference | Luminance | Absolute difference between mean prey and mean background luminance |
| Luminance | LuminanceDiff | Luminance | Sum of absolute differences between prey and background luminance histogram bins |
| Luminance | Contrast Difference | Luminance | Absolute difference in contrast between prey and background, where contrast is the standard deviation of luminance levels |
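The luminance-based measures in the table lend themselves to a direct sketch. The function below is a minimal illustration, not the authors' implementation: it assumes 1-D arrays of calibrated luminance samples in [0, 1], and the histogram bin count and all names are assumptions.

```python
import numpy as np

def luminance_metrics(prey, background, n_bins=32):
    """Sketch of the table's luminance measures (assumed, not the authors' code).

    `prey` and `background` are 1-D arrays of calibrated luminance values in [0, 1].
    """
    # Mean Luminance Difference: absolute difference of mean luminances
    mean_lum_diff = abs(prey.mean() - background.mean())
    # Contrast Difference: contrast taken as the standard deviation of luminance
    contrast_diff = abs(prey.std() - background.std())
    # LuminanceDiff: summed absolute difference between luminance histogram bins
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    hp, _ = np.histogram(prey, bins=bins, density=True)
    hb, _ = np.histogram(background, bins=bins, density=True)
    luminance_diff = np.abs(hp - hb).sum()
    return mean_lum_diff, contrast_diff, luminance_diff

# Toy example: prey varies in luminance while the background is uniform,
# so the means match but contrast and histograms differ
prey = np.array([0.2, 0.4, 0.6, 0.8])
background = np.array([0.5, 0.5, 0.5, 0.5])
print(luminance_metrics(prey, background))
```

Note that the first two measures can agree while the histogram measure disagrees (and vice versa), which is one reason the paper compares several luminance metrics rather than relying on a single one.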
Fig. 1 Examples of prey and edge disruption measurements. (a) Sample prey highlighted in blue against its background image. The ‘local’ region within a radius of one body-length is highlighted in red. (b) Examples of prey generated with the disruptive algorithm (left) and background-matching algorithm (right). These prey were chosen because their GabRat values were near the upper and lower ends of the distribution (see below). (c) Illustration of the GabRat measurement. Red and yellow false colours indicate that the perceived edges run orthogonal to the prey’s outline (creating disruptive ‘false edges’); blue false colours indicate that the perceived edges run parallel to the prey’s outline (creating ‘coherent edges’). GabRat values are only measured on outline pixels, so these values have been smoothed with a Gaussian filter (σ = 3) to illustrate the approximate field of influence. The prey on the left has a high GabRat value (0.40), while the prey on the right has a low GabRat value (0.20). (d) Canny edges highlighted in the images: edges inside the prey are blue, edges in the prey’s outline region are green, and edges outside the prey are red. The VisRat and DisRat disruption metrics are formed from the ratios of these edges. (e) Gabor filter kernels (σ = 3), shown in false colour at the different angles measured
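The Gabor kernels of Fig. 1e can be reproduced in outline. The sketch below builds an even-phase (cosine) Gabor bank with σ = 3 as in the figure; the wavelength, kernel size and the choice of four orientations are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def gabor_kernel(sigma=3.0, wavelength=6.0, theta=0.0, size=None):
    """Build one even (cosine-phase) Gabor kernel; parameters other than
    sigma are illustrative assumptions."""
    if size is None:
        size = int(6 * sigma) | 1          # odd width covering roughly ±3 sigma
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate the carrier by theta so the filter responds to edges at that angle
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    kernel = envelope * np.cos(2 * np.pi * xr / wavelength)
    return kernel - kernel.mean()          # zero-mean: flat regions give no response

# A bank of kernels at evenly spaced orientations, as illustrated in Fig. 1e
angles = np.arange(4) * np.pi / 4          # 0°, 45°, 90°, 135° (angle count assumed)
bank = [gabor_kernel(theta=t) for t in angles]
```

Convolving an image with such a bank gives an oriented edge response at each pixel, which is the kind of local orientation information a GabRat-style measure compares against the direction of the prey's outline.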
Fig. 2 Capture time prediction accuracy. The predictive performance of the camouflage metrics tested in this study, ranked from best to worst. All camouflage metrics were continuous variables using one degree of freedom in each model, with the exception of treatment type, which was categorical and used two degrees of freedom. Note that DisRat and VisRat performed better when fitted with a polynomial
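The observation that DisRat and VisRat predicted capture times better when fitted with a polynomial can be illustrated with a minimal model comparison. The data below are synthetic (a U-shaped metric/response relationship is the assumption being demonstrated), and the code is not the authors' analysis.

```python
import numpy as np

def r_squared(x, y, degree):
    """Fraction of variance in y explained by a degree-`degree` polynomial in x."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    return 1.0 - resid.var() / y.var()

# Synthetic stand-in for a U-shaped relationship between a metric and capture time:
# a straight line through it has zero slope, so a linear term explains nothing,
# while a quadratic term captures the curvature
x = np.linspace(-1.0, 1.0, 200)
y = x ** 2
print(r_squared(x, y, 1))   # near 0: linear fit explains almost nothing
print(r_squared(x, y, 2))   # near 1: quadratic fit captures the relationship
```

This is why a metric can rank poorly under a purely linear model yet still carry real predictive information once a polynomial term is allowed.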