
The integration window for shape cues is a function of ambient illumination.

Ernest Greene

Abstract

Minimal discrete shape cues, i.e., dots that marked positions on the outer boundary of namable objects, were divided into two subsets, which were shown very quickly with a variable delay between subsets. Recognition of a given object required integration of the information provided by the two subsets, and previous research had found that recognition declined as the delay between subsets was increased. The present experiment found the decline in recognition to be linear for each of several levels of ambient illumination, dropping rapidly under photopic test conditions, and with the slope being progressively less steep with transition into the scotopic range. The change in the duration of information persistence may be related to the density of information that is provided under various lighting conditions, and a requirement that the information be buffered against noise or "packaged" to accommodate successive saccades.


Year:  2007        PMID: 17359541      PMCID: PMC1838908          DOI: 10.1186/1744-9081-3-15

Source DB:  PubMed          Journal:  Behav Brain Funct        ISSN: 1744-9081            Impact factor:   3.759


Background

"All the connections set up between sensations by the formation of ideas tend to persist, even when the original conditions of connection are no longer fulfilled." Titchener [1] It is well established that brief stimulation can initiate sustained neural activity that allows information to be sampled or integrated over time intervals that far outlast the duration of the stimulus. In vision, the persistence of information has been variously described as visual information store [2], iconic memory [3], and short-term visual storage [4]. Previous research from this laboratory found that the information persistence needed for recognition of transient discrete shape cues is affected by the level of ambient room illumination [5]. In those experiments, objects were represented using a sparse sampling of dots that marked the outer boundary of each object. Fig. 1 shows an example from that study, which was used also in the present experiment. The upper left panel of Fig. 1 shows the full inventory of dots that specified locations on the outer boundary. A sample was drawn from that inventory for display to a given subject, as illustrated in the upper right panel, and this sample was designated as the "display set." The display set was further divided into subsets, one containing the dots lying at odd positions in the sequence, and the other containing the dots at even positions, as shown in the lower panels of Fig. 1.
Figure 1

The upper left panel shows the full complement of boundary dots for one of the shapes to be identified. A sampling of these dots is shown in the upper right panel as filled circles, this being an example of a display set. To pick the display set for a given subject, the sampling began at a randomly selected starting point, shown by the arrow, and included every Nth dot, counting clockwise from this location. [See text for discussion of how N was determined for each shape.] The display set was then divided into subsets, one containing the odd dots from the counting process, and the other containing the even dots. These are shown in the lower left and right panels, respectively. The dots in each subset were displayed as a group, varying the time interval between each subset as a function of room illumination.

Under these test conditions, the prior work found that if the two subsets were displayed with minimal delay between offset of the first subset and onset of the second, recognition levels were relatively high [5]. However, adding a delay between the two subsets impaired recognition of the shapes, and the degree of impairment was a function of ambient light level [5]. One experiment examined the amount of information persistence with normal room lighting versus darkness, and found that recognition levels dropped fairly quickly in the former, but only moderately in the latter even with subset delays of over 200 ms [5]. A second experiment tested in a dim room, and found an intermediate rate of decline, along with evidence that the decrease was a linear function of the delay interval [5]. These results [5] provided evidence for differentials in the persistence of shape-cue information that were a function of light level, but the delay intervals were not optimal for showing the rate of decline at each level of ambient illumination.
The present experiment sampled the time intervals more strategically, and has yielded evidence for linear declines with slopes that are a function of the level of illumination.

Methods

Ten USC undergraduates, each having normal or corrected-to-normal visual acuity, served as subjects in the experiment. Except for the task instructions described below, they were naive to the hypothesis under consideration. Subjects received course credit for their participation.

The shapes to be identified were taken from the Macmillan Visual Dictionary [6] or from Hemera's clip art [7]. A custom program positioned a 64 × 64 array over each image, requiring that the object span the full dimension of the array in either the vertical or the horizontal direction. The cells of the array that fell on the outer boundary of the object were then marked, meaning that the column and row position of each boundary location was entered into an address table. To provide a consistent rule for adjacency, and a basis for specifying distance among marked locations, the boundary was required to be traced as a continuous sequence of adjacent cell locations, with no cell being visited twice.

One hundred fifty (150) shapes were used in the present experiment, as shown in Table 1 (following References). Each shape was displayed to a given subject only once using a minimal transient discrete cue protocol. In this protocol, only some of the dots that mark the boundary of the object are shown, these being designated as the display set. The number of dots in the display set, and their spacing, was chosen to provide approximate equivalence in potential for recognition (as determined by an earlier experiment). As illustrated in Fig. 1, the display set for a given subject was selected by randomly choosing a starting point and then taking every Nth dot. The value of N ranged from 3 to 10. For each of the objects, Table 1 lists the value of N (designated as the "skip factor"), as well as the percentage and number of dots in the display set.
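The sampling and subset-splitting procedure can be sketched in Python. This is a minimal illustration of the selection rule described above; the function names and the seeded random generator are my own, not taken from the original display software:

```python
import random

def choose_display_set(boundary, skip, rng=random.Random(0)):
    """Take every Nth boundary dot (N = the skip factor), counting from a
    randomly chosen starting point on the closed boundary sequence."""
    start = rng.randrange(len(boundary))
    return [boundary[(start + i) % len(boundary)]
            for i in range(0, len(boundary), skip)]

def split_odd_even(display_set):
    """Divide the display set into the odd- and even-position subsets."""
    odd = display_set[0::2]   # 1st, 3rd, 5th, ... dots
    even = display_set[1::2]  # 2nd, 4th, 6th, ... dots
    return odd, even

# Example: the "car" shape has a 136-dot perimeter and skip factor 8,
# giving a 17-dot display set (Table 1), split into subsets of 9 and 8.
boundary = [(i, 0) for i in range(136)]  # stand-in coordinates
odd, even = split_odd_even(choose_display_set(boundary, 8))
```

This idealized sketch always yields ceil(perimeter / skip) dots; a few Table 1 entries depart from that count by one, presumably reflecting details of the original counting rule.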
Table 1

The names of shapes used in both experiments are listed below

Shape # | Shape Name | Perimeter | Area | Skip | Dot% | Dot#
1 | alarm clock | 276 | 2588 | 5 | 20.29 | 56
2 | anchor | 210 | 664 | 6 | 16.67 | 35
3 | angel | 240 | 1486 | 4 | 25 | 60
4 | antique car | 180 | 1565 | 4 | 25 | 45
5 | antique chair | 202 | 1329 | 3 | 33.66 | 68
6 | baboon | 316 | 1398 | 3 | 33.54 | 106
7 | baby bottle | 147 | 1228 | 3 | 33.33 | 49
8 | badge | 160 | 2060 | 3 | 33.75 | 54
9 | banana | 180 | 1385 | 8 | 12.78 | 23
10 | bat | 156 | 786 | 5 | 20.51 | 32
11 | bear | 213 | 1527 | 4 | 25.35 | 54
12 | bee | 309 | 1453 | 3 | 33.33 | 103
13 | beetle | 269 | 1345 | 3 | 33.46 | 90
14 | bell | 156 | 829 | 4 | 25 | 39
15 | binoculars | 176 | 1592 | 3 | 33.52 | 59
16 | boot | 183 | 1708 | 8 | 12.57 | 23
17 | bottle | 143 | 866 | 8 | 12.59 | 18
18 | bowling pin | 134 | 890 | 5 | 20.15 | 27
19 | buffalo | 238 | 1688 | 3 | 33.61 | 80
20 | bull | 302 | 1270 | 3 | 33.44 | 101
21 | burro | 359 | 1426 | 4 | 25.07 | 90
22 | butterfly | 306 | 2055 | 10 | 10.13 | 31
23 | c clamp | 267 | 952 | 3 | 33.33 | 89
24 | camel | 282 | 1956 | 7 | 14.54 | 41
25 | candelabra | 312 | 750 | 4 | 25 | 78
26 | cap | 158 | 1530 | 4 | 25.32 | 40
27 | car | 136 | 834 | 8 | 12.5 | 17
28 | cat | 248 | 1532 | 5 | 20.16 | 50
29 | chair | 217 | 1857 | 5 | 20.28 | 44
30 | chick | 177 | 1252 | 6 | 16.95 | 30
31 | cordless drill | 240 | 1835 | 3 | 33.33 | 80
32 | christmas tree | 190 | 1423 | 3 | 33.68 | 64
33 | coat | 271 | 2365 | 3 | 33.58 | 91
34 | coat hanger | 160 | 649 | 6 | 16.88 | 27
35 | cow | 256 | 1499 | 5 | 20.31 | 52
36 | cowboy boot | 189 | 1864 | 10 | 10.05 | 19
37 | dagger | 183 | 748 | 7 | 14.75 | 27
38 | deer | 353 | 1354 | 7 | 14.45 | 51
39 | desk lamp | 223 | 588 | 4 | 25.11 | 56
40 | dinosaur | 209 | 827 | 4 | 25.36 | 53
41 | dog | 280 | 1517 | 6 | 16.79 | 47
42 | dragonfly | 246 | 1068 | 3 | 33.33 | 82
43 | duck | 172 | 1025 | 6 | 16.86 | 29
44 | dumbbell | 185 | 1837 | 5 | 20 | 37
45 | elephant | 261 | 1761 | 6 | 16.86 | 44
46 | fighter jet | 240 | 1154 | 6 | 16.67 | 40
47 | fire extinguisher | 293 | 1995 | 3 | 33.45 | 98
48 | fire hydrant | 184 | 1145 | 3 | 33.7 | 62
49 | fish | 198 | 1364 | 3 | 33.33 | 66
50 | flask | 144 | 905 | 3 | 33.33 | 48
51 | flower | 222 | 2704 | 3 | 33.33 | 74
52 | flying pheasant | 229 | 1235 | 4 | 25.33 | 58
53 | fox | 245 | 961 | 3 | 33.47 | 82
54 | frog | 371 | 2078 | 4 | 25.07 | 93
55 | giraffe | 353 | 1137 | 8 | 12.75 | 45
56 | glasses | 215 | 649 | 5 | 20 | 43
57 | glove | 216 | 1165 | 3 | 33.33 | 72
58 | goose | 164 | 916 | 5 | 20.12 | 33
59 | gramophone | 230 | 1687 | 4 | 25.22 | 58
60 | guitar | 151 | 760 | 7 | 14.57 | 22
61 | gun | 171 | 968 | 4 | 25.15 | 43
62 | hammer (ball peen) | 156 | 416 | 3 | 33.33 | 52
63 | hammer (claw) | 168 | 506 | 5 | 20.24 | 34
64 | hand shovel | 144 | 540 | 3 | 33.33 | 48
65 | hat | 161 | 1496 | 6 | 16.77 | 27
66 | heart | 171 | 2685 | 9 | 11.11 | 19
67 | helmet | 161 | 1912 | 4 | 25.47 | 41
68 | hen | 218 | 1190 | 8 | 12.84 | 28
69 | hippo | 255 | 1984 | 3 | 33.33 | 85
70 | horse | 348 | 1588 | 5 | 20.11 | 70
71 | horseshoe | 273 | 1310 | 9 | 11.36 | 31
72 | house | 207 | 2647 | 4 | 25.12 | 52
73 | humming bird | 162 | 747 | 9 | 11.11 | 18
74 | industrial hook | 208 | 1431 | 4 | 25 | 52
75 | iron | 226 | 2039 | 3 | 33.63 | 76
76 | jack rabbit | 245 | 1686 | 12 | 8.57 | 21
77 | kangaroo | 246 | 860 | 5 | 20.33 | 50
78 | knife | 133 | 355 | 6 | 17.29 | 23
79 | leaf | 259 | 1294 | 7 | 14.29 | 37
80 | light bulb | 145 | 1461 | 7 | 14.48 | 21
81 | lion | 283 | 1334 | 5 | 20.14 | 57
82 | lizard | 242 | 976 | 6 | 16.94 | 41
83 | macaw | 158 | 726 | 4 | 25.32 | 40
84 | man | 249 | 888 | 4 | 25.3 | 63
85 | man's shoe | 157 | 1167 | 8 | 12.74 | 20
86 | microscope | 288 | 1003 | 3 | 33.33 | 96
87 | monkey | 256 | 892 | 3 | 33.59 | 86
88 | moth | 257 | 1642 | 7 | 14.4 | 37
89 | motor scooter | 228 | 1214 | 4 | 25 | 57
90 | motorcycle | 239 | 1355 | 7 | 14.64 | 35
91 | mushroom | 187 | 1504 | 5 | 20.32 | 38
92 | music stand | 193 | 592 | 5 | 20.21 | 39
93 | ostrich | 244 | 843 | 9 | 11.48 | 28
94 | pan | 151 | 1239 | 3 | 33.77 | 51
95 | passenger plane | 243 | 1038 | 6 | 16.87 | 41
96 | pear | 146 | 1174 | 7 | 14.38 | 21
97 | pelican | 248 | 1389 | 3 | 33.47 | 83
98 | pepper | 156 | 1068 | 3 | 33.33 | 52
99 | piano | 298 | 1844 | 5 | 20.13 | 60
100 | pickup | 154 | 790 | 5 | 20.13 | 31
101 | pig | 220 | 1357 | 6 | 16.82 | 37
102 | pipe | 151 | 503 | 7 | 14.57 | 22
103 | pliers | 224 | 517 | 4 | 25 | 56
104 | porpoise | 168 | 860 | 7 | 14.29 | 24
105 | pot | 177 | 1754 | 4 | 25.42 | 45
106 | power boat | 199 | 1262 | 5 | 20.1 | 40
107 | propane torch | 151 | 728 | 3 | 33.77 | 51
108 | ram | 392 | 1682 | 3 | 33.42 | 131
109 | rat | 192 | 785 | 4 | 25 | 48
110 | rhino | 187 | 1247 | 3 | 33.69 | 63
111 | rifle | 135 | 257 | 5 | 20 | 27
112 | rooster | 249 | 1453 | 6 | 16.87 | 42
113 | sailboat | 210 | 1008 | 3 | 33.33 | 70
114 | saxophone | 242 | 902 | 6 | 16.94 | 41
115 | scissors | 250 | 1186 | 5 | 20 | 50
116 | sea gull | 254 | 1132 | 3 | 33.46 | 85
117 | sea horse | 172 | 626 | 4 | 25 | 43
118 | sea lion | 202 | 1675 | 3 | 33.66 | 68
119 | shark | 185 | 831 | 7 | 14.59 | 27
120 | sheep | 232 | 1587 | 3 | 33.62 | 78
121 | ship propeller | 262 | 1665 | 6 | 16.79 | 44
122 | shorts | 192 | 2309 | 5 | 20.31 | 39
123 | sickle | 176 | 473 | 3 | 33.52 | 59
124 | slipper | 139 | 830 | 7 | 14.39 | 20
125 | snail | 176 | 989 | 3 | 33.52 | 59
126 | snake | 173 | 407 | 4 | 25.43 | 44
127 | sock | 144 | 823 | 6 | 16.67 | 24
128 | spider | 363 | 1112 | 3 | 33.33 | 121
129 | spoon | 134 | 416 | 4 | 25.37 | 34
130 | spray bottle | 180 | 1034 | 3 | 33.33 | 60
131 | starfish | 211 | 1301 | 9 | 11.37 | 24
132 | submarine | 147 | 769 | 4 | 25.17 | 37
133 | swordfish | 200 | 593 | 6 | 17 | 34
134 | table | 289 | 1357 | 7 | 14.53 | 42
135 | table lamp | 184 | 1187 | 5 | 20.11 | 37
136 | teapot | 185 | 1930 | 4 | 25.41 | 47
137 | teddy bear | 238 | 1571 | 3 | 33.61 | 80
138 | telephone | 200 | 2012 | 6 | 17 | 34
139 | tiger | 236 | 1031 | 4 | 25 | 59
140 | toilet | 225 | 2301 | 3 | 33.33 | 75
141 | tractor | 238 | 1864 | 3 | 33.61 | 80
142 | trumpet | 216 | 895 | 3 | 33.33 | 72
143 | turtle | 171 | 1100 | 5 | 20.47 | 35
144 | umbrella | 199 | 1764 | 6 | 17.09 | 34
145 | vase | 164 | 1562 | 6 | 17.07 | 28
146 | violin | 174 | 800 | 4 | 25.29 | 44
147 | windmill | 243 | 1330 | 4 | 25.1 | 61
148 | wine glass | 234 | 2091 | 5 | 20.09 | 47
149 | wolf | 267 | 1441 | 4 | 25.09 | 67
150 | woman's shoe | 162 | 874 | 6 | 16.67 | 27

For each shape, the table also provides the following information: Perimeter: the number of dots in the full inventory of boundary locations; Area: the number of dots enclosed within the perimeter, including the perimeter dots; Skip: the skip factor, which specified that every Nth dot would be included in the sample that was shown to a given subject; Dot% and Dot#: the percentage and number of dots that were displayed as a result of applying the skip factor.

For the present experiment the display set was then divided into two subsets, each containing roughly half of the dots to be displayed. A convention was applied that numbered the address positions of the display set, specifying each odd position as belonging to one subset, and each even position to the other. These were designated as odd and even subsets, as illustrated in the lower two panels of Fig. 1. As detailed below, each subset was displayed as a group, first the odd subset and then the even subset. Varying the time interval between displays of these subsets was a major variable of the experiment, as described below.

Testing was done in a room that had no windows, and fluorescent tubes housed in standard recessed ceiling fixtures with plastic diffusion panes provided the lighting. The level of ambient illumination from these fixtures was controlled by the addition of opaque occluding panels that were held in channels that were coplanar to the surface of the fixture. Each fixture had two panels, one over each end, which could be slid apart to alter the area of the opening through which light could flow. This provided for control of ambient illumination without any change in color temperature of the light. Three levels of ambient illumination were used in the experiment, designated as bright, dim and dark. Ambient light levels were measured with a Tektronix J17 photometer, which uses a cosine corrected head having certified calibration.
The light readings were taken from the location of the seated subject. Mean illumination was 303 lux for the bright condition, and 13.3 lux for the dim condition. The lights were turned completely off for the dark condition, and the illumination was functionally zero. Measures were also taken of the amount of light being reflected from the art-board frame and from the wall surrounding the display board (both of which were the same shade of ivory). When the room was bright, the luminance of these surfaces was 25 Cd/m2, and for the dim condition the luminance was 1 Cd/m2.

Stimulus shapes were presented using a display board having a 64 × 64 array of LEDs, each of which could be illuminated under control of a computer and microprocessor slave. The GaAlAs LEDs emitted at a wavelength of 660 nm, and had a rise/fall time for emission in the range of 50–100 nanoseconds. Two levels of LED emission were used. With the room bright, the emission level was set to 96 Cd/m2. When the room was either dim or dark, the emission was set at 7 Cd/m2, the lower level being used because brief flashes that are substantially brighter can produce afterimages. The display board was attached to a wall at a viewing distance of 3.5 m, and with an elevation above eye level of approximately 10 degrees. At this distance the diameter of each LED was 4.9 arc', center-to-center spacing was 7.4 arc', and the dimensions of the full array, i.e., measured from center-to-center of the outside elements, were 7.7 × 7.7 arc°. Each dot of the display set was shown on the LED array by allowing current to flow through the specified LED for 0.1 ms, this being designated as T1. It is convenient to describe the display of a given address as a pulse, so T1 specifies pulse width, as illustrated in Fig. 2, this figure having been used in previous work [5].
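As a worked check on the reported display geometry, the visual angles follow from the relation between physical size and viewing distance. The physical LED diameter and spacing used below (roughly 5.0 mm and 7.5 mm) are back-calculated assumptions chosen to reproduce the reported angles at 3.5 m; they are not stated in the original:

```python
import math

def visual_angle_arcmin(size_m, distance_m):
    """Visual angle subtended by an object of a given size, in arc minutes."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m))) * 60

DISTANCE = 3.5  # viewing distance in meters

led_diameter = visual_angle_arcmin(0.00500, DISTANCE)   # ~4.9 arc'
led_spacing = visual_angle_arcmin(0.00753, DISTANCE)    # ~7.4 arc'
array_span_deg = 63 * led_spacing / 60  # 64 LEDs = 63 center-to-center gaps, ~7.7 arc(deg)
```

The 7.7 × 7.7 arc° figure for the full array follows from the 63 center-to-center gaps between the 64 outside elements.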
Figure 2

A. The duration that a given LED was illuminated was 0.1 ms. This is designated as T1. B. The dots within a given subset were displayed sequentially with a pulse spacing of 0.1 ms, measured from onset to onset. C. Here the pulse sequences for the odd and even subsets are illustrated like beads on a string. The time required to display a given subset varied with subset size, with the longest interval being 6.6 ms. The temporal separation of the two subsets, designated as T3, varied as a function of room illumination. The ranges for the T3 interval were: bright (0–40 ms); dim (0–80 ms); dark (0–160 ms).

Figure 2 also shows that the successive members of each subset were displayed with a 0.1 ms interval between onset of one pulse and onset of the next, this being T2. In other words, each was shown with no temporal separation between offset of a given pulse and onset of the next. Each pulse lasted only 0.1 ms, so a subset containing 20 addresses would be shown in 2 ms. From Table 1 one can see that the number of dots being displayed ranged from 17 (for the car) to 131 (for the ram). This provides a range from the smallest to the largest subset of 8 to 66 dots; thus across all shapes a given subset was displayed in a time that was no less than 0.8 ms, and no more than 6.6 ms. A major variable of interest was the time interval between subsets, which was measured from offset of the final pulse in the odd subset till onset of the first pulse in the even subset. This was designated as T3. As outlined in the introduction, Greene [5] found a decline of recognition as a function of T3, with the rate of decline being a function of the level of ambient illumination. Therefore, a different range of T3 values was chosen for each level of room illumination, the goal being to sample the range where the greatest decline was likely to be seen. To be specific, when the room was bright, the T3 intervals were: 0, 10, 20, 30 and 40 ms.
When the room was dim, the T3 intervals were: 0, 20, 40, 60 and 80 ms. When the room was dark, these values were: 0, 40, 80, 120 and 160 ms. The order of room illumination conditions was determined at random for each subject. Subjects were dark adapted for 20 minutes prior to testing with the room dark. Shapes that had been assigned to a given level of room illumination were tested as a block, i.e., each was displayed successively with illumination being the same. For each level of room illumination the order of shape presentation was random, which provided for a random order of T3 values. Recognition of a given object required integration of shape cues that were provided by the two subsets. Pilot work had shown that the hit rate from display of a single subset would be in the 20% range. Observing hit rates that are substantially above this value provides evidence of the degree to which the shape cues from the two subsets are being combined by the visual system, which may be described as information persistence or iconic memory.
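The timing parameters described above can be collected in a short Python sketch; the constant and function names are mine, for illustration only:

```python
T1_MS = 0.1  # pulse width: how long each LED is lit
T2_MS = 0.1  # onset-to-onset spacing of successive pulses within a subset

def subset_duration_ms(n_dots):
    """Time to display one subset: (n - 1) onset-to-onset steps plus the
    final pulse, so 20 dots take 2.0 ms and 66 dots take 6.6 ms."""
    return (n_dots - 1) * T2_MS + T1_MS

# T3 (offset of the odd subset to onset of the even subset) was sampled
# over a range matched to each level of room illumination (values in ms).
T3_SCHEDULE = {
    "bright": (0, 10, 20, 30, 40),
    "dim":    (0, 20, 40, 60, 80),
    "dark":   (0, 40, 80, 120, 160),
}
```

With subsets ranging from 8 to 66 dots, the display of any one subset lasted between 0.8 and 6.6 ms, well under even the shortest nonzero T3 interval.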

Results

Previous research had demonstrated that the time interval within which shape information can be integrated shows large differentials as a function of room illumination [5]. The goal of the present research was to provide T3 intervals that would better sample the range over which a given lighting condition would affect recognition. For a given subject, each shape was displayed only once at one of the fifteen treatment combinations – five levels of T3 interval across three levels of room illumination. The shapes were approximately matched for difficulty level on the basis of the number of dots in the display sample, and the response variable was successful recognition (yes/no). Mean recognition levels across subjects (hit rates) for each of the fifteen treatment combinations are plotted in Fig. 3, and a linear regression line has been fit to the data for each level of room illumination. At T3 = 0 the hit rates for the bright, dim and dark conditions were 65, 70 and 76 percent, respectively, which depart only moderately from the 75% hit rate that was expected for displays having no temporal separation. From these initial levels, the plots for the three conditions show linear declines, with slopes that were progressively less steep for bright, dim and dark room illumination, respectively.
Figure 3

Mean percent recognition (hit rate) dropped at a steep rate in the bright room (open circles), at a moderate rate in the dim room (gray filled circles), and at a relatively shallow rate in the dark room (black filled circles). Statistical modeling showed the decline to be significant at p < .001 for each condition, and there was no indication of departure from the linear regression lines. These results indicate that the information from the odd and even subsets can combine to allow for recognition over longer periods as room illumination is reduced.

For statistical confirmation of effects, the appropriate model for these binary data is a generalized linear model with binomial errors [8]. Dot percentage and T3 interval were fixed effects, and subject and shape were random effects. A separate model was fit to the data from each room illumination condition, since (by design) the ranges of T3 intervals were not comparable. Logit values, i.e., loge (proportion/(1 – proportion)), were calculated, and treatment differences were compared using the standard error of the difference for these values. For each of the three levels of room illumination, there was a significant decline in the hit rate (p < .001 for each). There was no significant turning point in the response for any level of ambient illumination, i.e., no quadratic effect, with the largest probability being 0.54. This indicates that the decline in recognition is linear, with no detectable departure, over the intervals tested for each of the room illumination conditions. Dot percentage was not a significant factor for any of the three models, with the largest probability being 0.32. This indicates substantial success in rendering the shapes to be equivalent in their level of difficulty. Note that proper variance measures for the data are only possible using the logit scores, which precludes the use of error bars on the hit-rate means that are shown in Fig. 3.
However, standard errors of the mean can be provided for the logit transformed values, and these are shown in Table 2, along with predictions of hit rate that are provided by the models.
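The logit transform and its inverse are simple to state. The sketch below (helper names are my own) reproduces, for example, the bright-room T3 = 0 entry of Table 2, where a mean logit of 0.617 corresponds to a hit rate of about 0.650:

```python
import math

def logit(p):
    """log_e(p / (1 - p)): the scale on which the models were fit and on
    which variance estimates are well behaved."""
    return math.log(p / (1 - p))

def inv_logit(x):
    """Back-transform a logit score (or model prediction) to a hit rate."""
    return 1 / (1 + math.exp(-x))
```

For instance, inv_logit(0.617) is approximately 0.650 and inv_logit(-0.822) is approximately 0.305, matching the parenthesized hit rates in Table 2.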
Table 2

For each treatment combination, the mean logit score and the standard error of the mean are shown.

T3 (ms) | Bright: Mean | Bright: SEM | Dim: Mean | Dim: SEM | Dark: Mean | Dark: SEM
0 | 0.617 (0.650) | 0.261 | 0.958 (0.723) | 0.302 | 1.229 (0.774) | 0.271
10 | 0.382 (0.594) | 0.256 | | | |
20 | -0.107 (0.473) | 0.255 | 0.481 (0.618) | 0.292 | |
30 | -0.384 (0.405) | 0.257 | | | |
40 | -0.822 (0.305) | 0.264 | 0.180 (0.545) | 0.291 | 0.660 (0.659) | 0.248
60 | | | -0.154 (0.462) | 0.289 | |
80 | | | -0.688 (0.335) | 0.297 | 0.345 (0.585) | 0.246
120 | | | | | -0.173 (0.457) | 0.244
160 | | | | | -0.388 (0.404) | 0.244

The logit scores provided the basis for statistical modeling of the data, which found a significant (p < .001) decline in hit rate for each of the room illumination conditions. These models also provide predictions of hit rate, i.e., the values that fall on the linear regression line, for each of the T3 values within the sampled range. These are shown in parentheses beneath each mean logit value. These predictions are very similar to the observed hit rates that are shown in Fig. 3.

In the previous study [5] the level of shape recognition in a bright room appeared to be nearly asymptotic at 35–40% with T3 intervals in the 90–270 ms range. Thus the 35% hit rate observed here with the room bright and with T3 equal to 40 ms may be at or near the floor level. However, the earlier study [5] found that dark room recognition remained at or above 60% with T3 intervals of 90 and 270 ms, whereas the present study found a hit rate of 43% with a T3 of 160 ms. The present study differed from the previous [5] protocol only in the use of an expanded inventory of shapes, and in sampling a more restricted range of T3 intervals. Thus there is no obvious basis for this difference for the dark-room condition. In any event, the earlier result raises the possibility that recognition rates will asymptote at T3 intervals that are longer than those tested here, and the floor level may be progressively higher for bright, dim and dark levels of room illumination.

Discussion

Prior research from this laboratory [5] used spaced dots to mark the outer boundary of namable objects. For a given object (shape) the dots were divided into two subsets, and were displayed with various intervals of delay between the first and the second subset. Successful recognition of shapes was a function of the duration of this delay, and also of the ambient level of illumination, being shorter when the room was bright, longer in a dim room, and longer yet when the room was completely dark. The present work confirms these effects, and we can now specify that each level of room illumination provides a range in which an increase in subset interval will produce a linear decline in recognition. Recognition was found to be fairly equivalent in the 65–75% range irrespective of ambient light level when the subset interval was zero. From there the increase in subset interval produced linear declines, dropping recognition into the 35–45% range with subset intervals of 40 ms in the bright room, 80 ms in the dim room, and 160 ms when the room was dark. It is possible that the interval over which information persists, i.e., information persistence, is determined by the level of ambient illumination. It is well understood that the visual system dramatically increases its sensitivity under low-light conditions, and for threshold detection, stimuli are integrated over a longer interval [9-12]. Visible persistence, i.e., the duration over which a very brief stimulus is subjectively perceived [13-16], is also affected by the level of ambient illumination. Di Lollo & Bischof [17] review this relationship and cite twelve studies that have reported changes in integration time as a function of ambient illumination, these effects being attributed to visible persistence. However, Coltheart [14], among others, has argued that information persistence – the integration of information over time – may be mediated by perceptual mechanisms other than visible persistence. 
The prior work from this laboratory [5] examined whether the information persistence required for object recognition could be explained by the duration of visible persistence, and found that the two manifestations of persistence had different time courses. It appears that the neural mechanisms that provide for the subjective judgment of stimulus duration are not the same as those that allow for integration of successive shape cues. As an alternative to the concept that information persists for a fixed amount of time that is a function of ambient illumination, it is possible that the interval over which information can be combined is closely tied to the density of the information being provided. In this model, information from a given moment would be "compartmentalized" and buffered against interference from noise and/or incompatible information. Thus with photopic levels of illumination, where large amounts of information are being delivered, the temporal compartment would be relatively short. The compartment interval would become wider as ambient illumination declined, given that the lower illumination also decreased the density of the information being provided at any given moment, as well as the potential for interference. The ability to set the width of the temporal compartment as a function of information density would be especially useful for animals that are highly mobile or move their eyes, as these actions drastically change the image content being provided to the retina from one moment to the next. Stimulus events that occurred at the same moment would be included in a given temporal compartment. It may be relevant, therefore, that another study from this laboratory [18] has found that the degree of simultaneity in the presentation of border dots determines the percentage of shapes that can be identified. Lack of simultaneity in the millisecond and even submillisecond range produces a significant linear decline in recognition. 
A few studies have examined the question of whether the complexity of the information to be processed affects integration time, most having been done using a visible persistence protocol of one kind or another. Loftus & Hanna [19], for example, randomly divided visual stimuli into two halves that were presented successively. The stimuli were judged to be most "complete" if there was minimal delay between each half, and progressively less complete with increasing temporal separation. They found that simple dot patterns were affected more at a given delay interval than were complex scenes, suggesting longer persistence of the information contained in the complex scene. Thus, to the extent that one wishes to consider the subjective judgment of "completeness" to be an indication of information persistence, these results are the opposite of what would be predicted by the "information density" hypothesis suggested above. Similar results have been reported by Erwin & Herschenson [20], who assessed the duration of visible persistence by having subjects adjust the onset time of a second stimulus to the perceived offset time of a first stimulus. They evaluated three kinds of stimuli – a blank field, a dark field, and a field containing seven letters. They found that the field of letters persisted about 35 ms longer than the other two stimulus sets if the subjects were required to report the letters. A follow-up study [21] found that the degree of redundancy (and thus complexity) of the letter strings affected the duration of persistence. Conversely, Irwin & Yeomans [22] argue against the concept that the width of the integration window is a function of the amount of information to be processed. They used a task developed by Hogben & Di Lollo [23] wherein stimulus elements are positioned within a 5 × 5 matrix, displaying a first subset of 12 elements at random positions within the matrix, followed at a variable interval by a second subset of 12 elements.
The task is to report which position of the matrix has been left empty, which essentially reflects the duration of visible persistence of the first subset. Irwin & Yeomans [22] conducted five experiments using this protocol, manipulating the degree of stimulus complexity (e.g., letters vs. Xs, or upright vs. inverted letters), and failed to find any effect of complexity on the duration of visible persistence. They argue that the tasks used by Loftus & Hanna [19] and by Erwin [20,21] assessed cognitive processing operations rather than persistence of the stimulus trace per se. Prior results from this laboratory [5] found that the interval for integration of shape cues is not related to the duration of visible persistence. It would not be surprising, therefore, if differences in information density provided by various levels of illumination affected shape recognition in a manner that differed from its influence on visible persistence. But additionally, it should be said that the hypothesis relating the integration interval to the density of information pertains to the totality of information provided by the scene. The studies of how complexity of stimuli affects duration of visible persistence [19-22] were not manipulating ambient illumination, and the differentials in stimulus complexity, e.g., upright letters vs. inverted letters, would not produce much net change in the abundance of data being delivered by the entire visual scene.

Conclusion

Whether one views the process as a change in duration of information persistence, or as compartmentalizing stimulus elements as a function of information density, the present results confirm that there is a change in the duration over which partial shape cues can be combined as one transitions from photopic to scotopic viewing conditions. Additionally, we now know that percent recognition is a linear function of the interval between cue subsets, with a slope that is a function of room illumination. The range for this linear decline is relatively short when the room is bright, and becomes progressively longer with decreasing room illumination.
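The linear relationship stated above can be expressed compactly. The slope values below are purely hypothetical placeholders chosen to illustrate the direction of the effect (steeper decline under photopic conditions, shallower under scotopic conditions); they are not the fitted values from this experiment.

```python
def predicted_recognition(t3_ms, slope_per_ms, intercept=100.0):
    """Percent recognition declines linearly with the inter-subset
    interval (T3); the slope depends on ambient illumination.
    Output is clamped to the 0-100% range."""
    return max(0.0, min(100.0, intercept - slope_per_ms * t3_ms))

# Hypothetical slopes (percent recognition lost per ms of delay),
# for illustration only:
photopic_slope = 2.0   # bright room: recognition drops rapidly
scotopic_slope = 0.2   # dark room: decline is far more gradual
```

With these placeholder slopes, a 20-ms separation between subsets costs far more recognition under the photopic slope than under the scotopic one, mirroring the pattern reported in the results.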

List of Abbreviations

arc° : degrees of visual angle
arc' : minutes of visual angle
Cd/m2 : candelas per square meter
GaAlAs : gallium aluminum arsenide
LED : light-emitting diode
Loge : natural logarithm
m : meters
ms : milliseconds
N : number used to specify which dots from the address list will be displayed
nm : nanometers
ns : nanoseconds
p : probability
T1 : pulse width
T2 : temporal separation within a given subset
T3 : temporal separation between subsets

Competing interests

The author declares that he has no competing interests.
References (15 in total; 10 shown)

1. Warrant EJ. Seeing better at night: life style, eye design and the optimum strategy of spatial and temporal summation. Vision Res, 1999.
2. Nisly SJ, Wasserman GS. Intensity dependence of perceived duration: data, theories, and neural integration. Psychol Bull, 1989.
3. Irwin DE, Yeomans JM. Duration of visible persistence in relation to stimulus complexity. Percept Psychophys, 1991.
4. Erwin DE. Further evidence for two components in visual persistence. J Exp Psychol Hum Percept Perform, 1976.
5. Savage GL. Temporal summation for grating patches detected at low light levels. Optom Vis Sci, 1996.
6. Loftus GR, Hanna AM. The phenomenology of spatial integration: data and models. Cogn Psychol, 1989.
7. Hogben JH, Di Lollo V. Perceptual integration and perceptual segregation of brief visual stimuli. Vision Res, 1974.
8. Di Lollo V, Bischof WF. Inverse-intensity effect in duration of visible persistence. Psychol Bull, 1995.
9. Long GM. Iconic memory: a review and critique of the study of short-term visual storage. Psychol Bull, 1980.
10. Greene E. Simultaneity in the millisecond range as a requirement for effective shape recognition. Behav Brain Funct, 2006.