Brand names are a special type of word that is particularly well suited to examining the role of visual codes during reading: unlike common words, brand names are typically presented in the same letter-case configuration (e.g., IKEA, adidas). Recently, Pathak et al. (European Journal of Marketing, 2019, 53, 2109) found an effect of visual similarity for misspelled brand names when participants had to decide whether the brand name was spelled correctly or not (e.g., tacebook [baseword: facebook] was responded to more slowly and less accurately than xacebook). This finding is at odds with both orthographically based visual-word recognition models and prior experiments using misspelled common words (e.g., viotin [baseword: violin] is identified as quickly as viocin). To solve this puzzle, we designed two experiments in which participants had to decide whether the presented item was written correctly. In Experiment 1, following a procedure similar to Pathak et al.'s (2019), we examined the effect of visual similarity on misspelled brand names with and without graphical information (e.g., anazon vs. atazon [baseword: amazon]). Experiment 2 paralleled Experiment 1 but focused on misspelled common words (e.g., anarillo vs. atarillo; baseword: amarillo [yellow in Spanish]). Results showed a sizeable effect of visual similarity for misspelled brand names, regardless of their graphical information, but not for misspelled common words. These findings suggest that visual codes play a greater role when identifying brand names than common words. We examine how models of visual-word recognition can account for this dissociation.
Brand names and logotypes, but not common words, are easily influenced by visual information. In legal litigation against potential copycats, attending to the visual similarity between the replaced letters and the originals is essential.
INTRODUCTION
Nowadays, most researchers in the field of visual word recognition would agree that identifying a printed word is based primarily on the mapping of the visual input onto abstract, case‐invariant, orthographic representations (see Grainger, 2018; Grainger & Dufau, 2012, for reviews; but see Agrawal et al., 2020, for an alternative view). There is indeed a large amount of empirical evidence favoring this view with common words. For instance, in masked priming paradigms, the processing of the word TABLE is virtually the same when briefly preceded by the prime table (i.e., orthographically, but not visually, identical) or TABLE (i.e., orthographically and visually identical; see Jacobs et al., 1995, for behavioral evidence; Dehaene et al., 2004, for fMRI evidence; Vergara‐Martínez et al., 2015, for ERP evidence). Similarly, evidence in favor of the processing of abstract representations rather than visual information in word recognition has been obtained with paradigms that directly reflect the degree of lexical activity generated by an item (i.e., single‐presentation [unprimed] techniques).1 For instance, in lexical decision experiments with both normotypical adults and young readers, a pseudoword like viotin, created by replacing the letter l of violin with the visually similar letter t, yields virtually the same response times and error rates as a pseudoword like viocin, created by replacing the letter l with the visually dissimilar letter c (see Gutierrez‐Sigut et al., 2022; Perea & Panadero, 2014). If visual information played a relevant role in identifying words, one would have expected longer decision times and more errors to viotin than to viocin (i.e., t is more confusable with l than c is).

This study examined whether these basic principles also apply to identifying a special type of word: brand names. Brand names are aimed at being easily recognizable and distinctive.
To that end, brand names are not only written with a specific letter‐case configuration (e.g., IKEA in uppercase, adidas in lowercase), but they are also often presented with a distinctive graphical layout: a particular typeface, color, and design (i.e., a logo; see Foroudi et al., 2017, for a historical review). Indeed, in one of the first reviews on visual word recognition, Henderson (1987) designated brand names as ‘the most fertile ground’ for researching the various routes (e.g., orthographically based vs. holistic) to word identification. However, none of the leading visual word recognition models currently addresses whether brand names are identified in the same way as common words. One reason for this is that research on brand name identification has been extremely scarce (see Gontijo et al., 2002; Gontijo & Zhang, 2007; Martin & Davis, 2019; Perea et al., 2015, 2021, for exceptions). Thus, examining the similarities and differences between recognizing brand names and common words has important implications for a comprehensive visual‐word recognition model.

Prior research has revealed some similarities and differences between the identification of brand names and common words. On the one hand, brand names show several key effects from the visual word recognition literature: transposed‐letter effects (Pathak et al., 2019; Perea et al., 2021), masked identity priming effects (Martin & Davis, 2019; Perea et al., 2015), and a first‐letter advantage (Pathak et al., 2019). On the other hand, Gontijo et al. (2002) reported a smaller right visual field advantage for brand names than for common words in a lateralized lexical decision task. They interpreted this pattern as reflecting ‘category‐specific lexical processing’ of brand names.
Likewise, in lexical decision, brand names are identified faster when presented with their characteristic letter‐case configuration than with their less frequent letter‐case configuration (e.g., IKEA faster than ikea; see Gontijo et al., 2002; Perea et al., 2015). This pattern, however, may not be specific to brand names. It resembles the advantage of initial capitalization for proper names (e.g., Mary faster than mary; Peressotti et al., 2003) or for common nouns in German, where all nouns are capitalized (Haus faster than haus; see Labusch et al., 2022; Wimmer et al., 2016).2

More important, in a recent study, Pathak et al. (2019) found an effect of visual similarity with misspelled brand names that may suggest an over‐reliance on visual codes when identifying brand names. The participants’ task was to decide whether a letter string, a brand name embedded in its logo, was spelled correctly or not. For the misspelled items, they replaced an external letter of a brand name with either a visually similar or a visually dissimilar letter (e.g., facebook→tacebook vs. xacebook). When the brand names were presented for unlimited time (i.e., as in standard word recognition experiments), Pathak et al. (2019) found longer response times (around 20–30 ms) and more errors (4–5%) for visually similar misspelled brand names (e.g., tacebook) than for visually dissimilar misspelled brand names (e.g., xacebook). They found the same effects when the stimuli were presented briefly (100 ms). Pathak et al. (2019) concluded that their findings ‘highlight the importance of the visual similarity (dissimilarity) of the substituted letters used in fake logotypes’ (p. 2118).3

Undeniably, the effect of visual similarity reported by Pathak et al. (2019) with brand names cannot be captured by current orthographically based visual word recognition models. Furthermore, the effect of visual similarity reported by Pathak et al.
(2019) is at odds with previous behavioral evidence using misspelled words in lexical decision experiments (e.g., Gutierrez‐Sigut et al., 2022; Perea & Panadero, 2014). Moreover, in a recent electrophysiological experiment, Gutierrez‐Sigut et al. (2022) found that the ERP waves for visually dissimilar and visually similar pseudowords were virtually the same. The apparent dissociation between the role of visual similarity when identifying misspelled brand names versus common words immediately raises two questions: (1) when identifying a brand name, do readers rely more heavily on the visual characteristics of the stimulus than when identifying a common word? and (2) does the brand name's graphical information modulate these differences?

Importantly, there is a critical methodological difference between the Pathak et al. (2019) experiment and the Perea and Panadero (2014) and Gutierrez‐Sigut et al. (2022) experiments that could explain the conflicting outcomes. In the Pathak et al. (2019) experiment, the correctly spelled and misspelled brand names were presented with all their graphical information (i.e., the brand name in a specific typeface, color, and design). In contrast, in the Perea and Panadero (2014) and Gutierrez‐Sigut et al. (2022) experiments, the items were presented in a standard typeface (Times New Roman and Courier, respectively). Some evidence suggests that graphical information may enhance the confusability of misspelled brand names. Using the same procedure as Pathak et al. (2019), Perea et al. (2021) found that responses were slower and more error‐prone when a misspelled brand name created via letter transposition/replacement (e.g., amzaon or amceon) was embedded in its complete graphical information than when presented unformatted. Perea et al.
(2021) also found that the transposed‐letter effect (i.e., the difference in response times between misspelled transposed‐letter and replacement‐letter items) was slightly greater for misspelled brand names with their graphical information than for unformatted misspelled brand names. To account for these findings, Perea et al. (2021) suggested that, when encountering brand names, readers encode not only abstract orthographic information (i.e., letter identity and letter order) but also other sources of information based on visual codes (e.g., graphical information, typography). Thus, an important step in determining whether brand names and common words are processed differently is to test whether the effects of visual similarity reported by Pathak et al. (2019) are reduced (or even eliminated) when the brand names are presented unformatted. This was the goal of Experiment 1.

Experiment 1 was thus designed to examine: (1) whether an effect of visual similarity occurs for misspelled brand names – this would replicate Pathak et al. (2019) – and (2) whether this effect is boosted by the graphical information of the brand names, by presenting them either embedded in their logo or in plain format. As in the Pathak et al. (2019) study, the participants in our study had to decide whether a brand name was correctly spelled or not. The misspelled items were created by replacing an internal letter of a brand name with either a visually similar letter (e.g., amazon→anazon) or a visually dissimilar letter (e.g., amazon→atazon; see Figure 1). To select the visually similar and visually dissimilar letters, we employed the Simpson et al. (2012) ratings of visual similarity. Unlike Pathak et al. (2019), we replaced an internal rather than an external letter. The reason for this change was twofold: (1) to have a scenario closer to that employed by Perea and Panadero (2014) and by Gutierrez‐Sigut et al.
(2022) with misspelled common words; (2) to examine whether the visual similarity effect found by Pathak et al. (2019) also applies to internal letter positions. The stimuli were presented either with all their graphical information (i.e., as in the Pathak et al., 2019, study) or unformatted (Times New Roman, as in Perea et al.'s, 2021, Experiment 1).
FIGURE 1
Examples of the correctly spelled brand names and misspelled brand names in Experiment 1, both in full graphical format and unformatted
If lexical access is primarily based on abstract, orthographic codes, one would predict a null effect of visual similarity for misspelled items. This would be particularly so when the items are presented unformatted (i.e., the processing of graphical information from the logos could be beyond the models’ scope). To anticipate the findings, we found a sizeable effect of visual similarity. While misspelled brand names produced slower and more error‐prone responses when presented with their graphical information (i.e., replicating Perea et al.’s, 2021, study), the effect of visual similarity was virtually identical for misspelled items regardless of format (full graphical information vs. unformatted). This pattern of findings led to Experiment 2, which used common words as stimuli. We defer the rationale for this second experiment until the discussion of Experiment 1.
EXPERIMENT 1 (CORRECTLY SPELLED BRAND NAMES VS. MISSPELLED BRAND NAMES)
Method
Participants
A total of 34 individuals took part in the experiment (16 women). Their mean age was 22.97 years (SD = 3.56). As we had 120 trials in each condition, this sample size ensured 4080 observations at each level of visual similarity. While the effects of visual similarity reported by Pathak et al. (2019) were quite large, we chose a sample size that could detect small‐size effects (Brysbaert & Stevens, 2018). The participants were recruited via Prolific Academic, a UK‐based online crowdworking platform (http://prolific.ac). Only native Spanish-speaking university students with no reading problems and normal (or corrected‐to‐normal) vision could participate. All the participants gave informed consent before the experiment and received monetary compensation (2.75€).
Materials
We selected 10 popular brand names that do not constitute a common word in Spanish (amazon, Colgate, Google, intel, LACOSTE, Levi’s, DISNEY, MERCADONA, NESCAFÉ, SAMSUNG). For comparison purposes, we chose a number of items only slightly higher than the set selected by Pathak et al. (2019; i.e., 10 vs. seven). For each brand name, we created two misspelled brand names: (1) we replaced an internal consonant with a visually similar letter (amazon→anazon; visually similar condition); and (2) we replaced the same internal consonant with a visually dissimilar letter (amazon→atazon; visually dissimilar condition). The range of visual similarity between the replaced letters in Simpson et al.'s (2012) ratings was 4.4–5.3 (M = 4.7) for the visually similar letters and 1.1–1.6 (M = 1.3) for the visually dissimilar letters. Both sets of pseudowords were matched on bigram frequency (the mean log bigram frequencies were 1.73 and 1.83 in the B‐Pal database [Davis & Perea, 2005] for the visually similar and visually dissimilar conditions, p = .39). None of the stimuli had any orthographic neighbors in Spanish.
Procedure
The experiment's script was written in PsychoPy 3 (Peirce & MacAskill, 2018) and was run online using Pavlovia (www.pavlovia.org). Before starting the experiment, all the participants filled out a questionnaire with demographic data (age, gender, education level) via LimeSurvey (www.limesurvey.org). The participants were advised to do the experiment on a computer in a quiet room without any distractions. To familiarize them with the task, the participants received 14 practice trials before the experimental phase. The participants' task was to decide whether the presented item was correctly spelled or not (‘M’ for yes and ‘Z’ for no). They were asked to respond as quickly and accurately as possible. Within a given trial, a fixation cross was presented in the center of the screen for 500 ms. Afterward, the target item was presented until a response was made (or until a maximum of 2000 ms). As in the Pathak et al. (2019) and Perea et al. (2021) experiments, and due to the relatively small set of target brand names, each item was presented several times – note that Perea et al. (2021) found a similar pattern of effects across time. All items were presented in a randomized order, resulting in 480 experimental trials (240 correctly spelled brand names; 120 visually similar misspelled brand names; 120 visually dissimilar misspelled brand names). There were short breaks after every 100 trials. Altogether, the experiment took 18–20 min to complete.
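The trial structure above can be sketched as follows. This is a minimal illustration, not the authors' script: the per-item repetition counts are inferred from the 240/120/120 totals over the 10 brand names, and the misspelled forms themselves are omitted.

```python
import random

# Illustrative sketch of the trial-list construction (assumption: each of
# the 10 brands contributes 24 correct, 12 visually similar, and 12
# visually dissimilar trials, giving the reported 240/120/120 totals).
BRANDS = ["amazon", "colgate", "google", "intel", "lacoste",
          "levis", "disney", "mercadona", "nescafe", "samsung"]

def build_trial_list(seed=0):
    rng = random.Random(seed)
    trials = []
    for brand in BRANDS:
        trials += [(brand, "correct")] * 24       # 240 correctly spelled trials
        trials += [(brand, "similar")] * 12       # 120 visually similar misspellings
        trials += [(brand, "dissimilar")] * 12    # 120 visually dissimilar misspellings
    rng.shuffle(trials)                           # randomized presentation order
    return trials

trials = build_trial_list()
```

In the actual experiment, each trial would then display a 500-ms fixation cross followed by the target until a response or the 2000-ms deadline.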
Results and discussion
For the latency data analyses, we removed the error responses (4.7% for correctly spelled brand names; 4.4% for misspelled brand names) and very fast responses (less than 250 ms; 0 observations). The deadline for responding was 2000 ms – lack of response before this deadline was coded as an error (62 data points; less than 0.4% of data). The average response times and error rates in each of the conditions are presented in Table 1.
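The trimming steps above can be sketched as follows (a minimal sketch; the column names "rt" and "correct" are hypothetical, not from the authors' scripts). Trials without a response before the 2000-ms deadline are first recoded as errors; error trials and responses faster than 250 ms are then excluded from the latency analyses.

```python
import pandas as pd

def trim_latencies(df: pd.DataFrame) -> pd.DataFrame:
    """Return the trials entering the latency analyses."""
    df = df.copy()
    # No response before the deadline (rt missing) is coded as an error.
    df.loc[df["rt"].isna(), "correct"] = 0
    # Keep correct responses with rt >= 250 ms.
    keep = (df["correct"] == 1) & (df["rt"] >= 250)
    return df.loc[keep]
```

Accuracy analyses, in contrast, would use all trials, with the recoded errors included.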
TABLE 1
Average correct reaction times (in ms) and percentage of errors for correctly spelled and misspelled brand names (visually similar vs. visually dissimilar) in Experiment 1
Format      Correct brand name   Visually similar misspelled name   Visually dissimilar misspelled name   Visual similarity effect
Full logo   675 (4.0)            704 (7.2)                          674 (3.2)                             30 (4.0)
Plain       686 (5.4)            693 (5.2)                          656 (2.0)                             37 (3.2)
The latency and accuracy data were analyzed separately using Bayesian linear mixed‐effects models via the brms package (Bürkner, 2017) in R (R Core Team, 2020). This approach allowed us to fit models that converged with the most complex random‐factor structure of the design regarding subjects’ and items’ intercepts and slopes (see Barr et al., 2013, for discussion of random‐factor structure). In the latency analyses, we chose family = exgaussian() because this distribution fits the positive skew of response times reasonably well, whereas in the accuracy analyses we chose family = bernoulli() due to the binary nature of the responses (1 = correct; 0 = incorrect). For the misspelled items, the two fixed factors were Format (all graphical information vs. unformatted; encoded as −0.5 and 0.5) and Letter similarity (similar vs. dissimilar; encoded as −0.5 and 0.5), together with their interaction. The models for the correctly spelled items were analogous, except that the only fixed factor was Format. The number of iterations was 5000 (1000 iterations as warmup) using four chains. All models converged (R̂ was 1.00 in all cases). Unlike frequentist approaches to linear mixed‐effects models, Bayesian linear mixed‐effects models do not provide a p value for each estimate but rather a 95% credible interval (95% CrI). We interpreted an effect as significant when the 95% CrI of its coefficient estimate did not include zero. For interested readers, we present the frequentist analyses in the Appendix – the results of these analyses mimicked those of the Bayesian analyses.
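The −0.5/0.5 ("sum") coding can be illustrated with a short sketch: with this coding of a 2 × 2 design, the regression coefficients correspond to the grand mean, the two main effects, and their interaction. The cell means below are illustrative values, not the fitted ex-Gaussian model estimates (whose signs also depend on the direction of the coding).

```python
import numpy as np

# Illustrative 2 x 2 cell means, keyed by (Format, Letter similarity)
# codes; these are placeholder values, not the reported model estimates.
cells = {
    (-0.5, -0.5): 704.0,  # full graphical information, visually similar
    (-0.5,  0.5): 674.0,  # full graphical information, visually dissimilar
    ( 0.5, -0.5): 693.0,  # unformatted, visually similar
    ( 0.5,  0.5): 656.0,  # unformatted, visually dissimilar
}
# Design matrix: intercept, Format, Letter similarity, interaction.
X = np.array([[1.0, f, s, f * s] for (f, s) in cells])
y = np.array(list(cells.values()))
beta = np.linalg.solve(X, y)
# beta[0] = grand mean; beta[1] = Format main effect;
# beta[2] = Letter similarity main effect; beta[3] = interaction.
```

With these placeholder means, the Format coefficient equals the difference between the unformatted and full-logo row means, and the similarity coefficient equals the difference between the dissimilar and similar column means, which is what makes this coding convenient for interpreting main effects.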
Misspelled brand names
Response times to misspelled brand names were faster when presented unformatted than when presented with all their graphical information, b = −17.31, SE = 4.93, 95% CrI (−27.2, −7.61). Also, latencies were faster when the brand names were visually dissimilar to the original than when they were visually similar, b = 19.68, SE = 7.56, 95% CrI (4.51, 34.88). There were no signs of an interaction between these two factors, b = −2.19, SE = 6.36, 95% CrI (−14.79, 10.63; see Figure 2 for the posterior estimates including their 50%, 75%, 89%, 95%, and 100% credible intervals).
FIGURE 2
Highest Density Intervals with the 50%, 75%, 89%, 95%, and 100% Credible Intervals for the estimates of the Bayesian Linear Mixed‐Effects models on response time (left panels) and accuracy (right panels) in Experiment 1. The top panels correspond to misspelled brand names and the bottom panels correspond to correctly spelled brand names
The accuracy data for the misspelled brand names showed a similar pattern. Error rates (i.e., ‘yes’ responses) were lower for misspelled brand names when unformatted than when presented with their graphical information, although the 95% credible interval crossed zero, b = 0.61, SE = 0.36, 95% CrI (−0.04, 1.36). Furthermore, error rates were higher when the misspelled brand name was visually similar to the original than when it was dissimilar, b = −0.85, SE = 0.33, 95% CrI (−1.53, −0.29). These two effects did not interact, b = −0.20, SE = 0.41, 95% CrI (−1.05, 0.59; see the top panels of Figure 2 for the posterior distributions).
Correctly spelled brand names
Response times to correctly spelled brand names were slightly faster, with fewer errors, when the names were embedded in their graphical information than when presented unformatted, although the coefficient estimates overlapped zero for both dependent variables (response times: b = 7.56, SE = 3.92, 95% CrI (−0.28, 14.47); accuracy: b = −0.22, SE = 0.22, 95% CrI (−0.65, 0.23); see the bottom panels of Figure 2 for the posterior distributions).

This experiment successfully replicated, with another set of items, the effect of letter visual similarity on misspelled brand names reported by Pathak et al. (2019) in both response times and accuracy. It thus extended Pathak et al.'s (2019) findings, obtained with external letter replacements (tacebook vs. xacebook), to internal letter replacements (e.g., anazon vs. atazon). Furthermore, the participants had more difficulty classifying a misspelled brand name when it was presented with its graphical information than when presented unformatted (see also Perea et al., 2021, for the same pattern). Note that we found the opposite trend for correctly spelled logos, suggesting a ‘yes’ bias for items with graphical information (see Table 1).

More important, format (full graphical information vs. unformatted) and visual similarity (similar vs. dissimilar) had additive effects on misspelled brand names: the effect of visual similarity was remarkably similar whether the items contained their complete graphical information or were presented unformatted. Thus, while graphical information modulated the participants’ decisions (i.e., biasing participants toward a ‘yes’ response), it did not alter the magnitude of the visual similarity effect for misspelled brand names.

Having shown that logos and brand names produce a strong effect of visual similarity, it is important to check whether this effect could have been modulated by the items being presented many times.
Of note, when using this same procedure, Perea et al. (2021) found that the magnitude of the transposed‐letter effect on misspelled logos and brand names (i.e., an index of letter position coding: amzaon vs. amcion) was similar across the experiment. However, in the present scenario, one might argue that the repeated presentation of the stimuli could have caused the participants to focus their attention on visual elements of the stimuli in the middle and final blocks of the experiments, thus increasing the effects of visual similarity for the misspelled items. To examine this possibility, we divided the 480 experimental trials into three blocks (first block: trials 1–160; second block: trials 161–320; third block: trials 321–480), and generated delta plots: a descriptive analysis of the effect magnitude (i.e., RT difference between conditions) as a function of time (i.e., .1, .3, .5, .7, and .9 quantiles averaged across participants) for the data in each block (see De Jong et al., 1994; Ridderinkhof, 2002). Delta plots are an excellent visualization method to examine the distributional differences between conditions. As shown in Figure 3, we found a remarkably consistent effect of visual similarity in the three blocks: the difference was sizeable in the leading edge of the RT distribution (.1 quantile) and increased gradually in the succeeding quantiles. Thus, the effect of visual similarity was not affected by the items’ successive repetitions (see also Perea et al., 2021, for a similar pattern when examining the transposed‐letter effects with logos using the same procedure). Interestingly, the increase in the effect of visual similarity in the higher quantiles (i.e., the positive slope in the delta plot) is consistent with the idea that the accumulation of evidence for ‘no’ responses was slower for anazon than for atazon. 
Of note, a flat line in the delta plot (i.e., a constant shift across RT distribution) would have suggested that the locus of the effect was at an early encoding stage (see Gomez & Perea, 2014, for discussion).
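The delta-plot computation described above can be sketched as follows (a minimal sketch with our own variable names): within each condition, the RT quantiles are computed per participant, each quantile is averaged across participants, and the condition difference is plotted against the mean quantile.

```python
import numpy as np

QUANTILES = (0.1, 0.3, 0.5, 0.7, 0.9)

def delta_plot_points(rts_similar, rts_dissimilar):
    """Each argument: a list of per-participant arrays of correct RTs."""
    # Per-participant quantiles, then averaged across participants.
    q_sim = np.mean([np.quantile(r, QUANTILES) for r in rts_similar], axis=0)
    q_dis = np.mean([np.quantile(r, QUANTILES) for r in rts_dissimilar], axis=0)
    x = (q_sim + q_dis) / 2      # mean RT at each quantile (x-axis)
    delta = q_sim - q_dis        # visual similarity effect (y-axis)
    return x, delta
```

A constant shift between conditions yields a flat delta line (the early-encoding signature discussed above), whereas an effect that grows across quantiles produces a positive slope, as observed for the misspelled brand names.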
FIGURE 3
Delta Plots depicting the visual similarity effect (computed as the difference in RT between visually similar and visually dissimilar conditions) for each quantile for misspelled items in the first, second and third block of Experiment 1
Given the robust effects of visual similarity with brand names in the present experiment and in the Pathak et al. (2019) experiment, the question is: why did previous experiments using misspelled common words fail to show a visual similarity effect? There is a potential methodological difference that might account for this discrepancy. Both Pathak et al. (2019) and the present experiment used a small set of brand names (7 and 10, respectively) – these were presented repeatedly in either their correct or misspelled form. In contrast, in the experiments with misspelled common words (e.g., Gutierrez‐Sigut et al., 2022; Perea & Panadero, 2014), the participants received each item only once (e.g., viotin [List 1] or viocin [List 2]); furthermore, the participants did not receive the original baseword (e.g., violin).4

To reconcile these conflicting findings, one might argue that memory retrieval of common words after repeated presentations could be more sensitive to other sources of information, including visual similarity. While the delta plots presented above argue against this possibility for logos and brand names (see Figure 3), the scenario may be different for common words. Indeed, an influential memory model, Logan's (1990) instance theory, states that the processing of common words and misspelled words differs between initial and repeated presentations. In the former case, identifying a word or a misspelled word would follow the usual route described by visual word recognition models – the so‐called algorithmic computation in Logan's (1990) terms.
In contrast, after several exposures to the same stimulus, performance would change from this ‘algorithmic mode’ to an episodically based process of memory retrieval (Logan, 1990). Empirical evidence suggests that, to some degree, recalling or recognizing an item after repeated exposure might be sensitive to visual similarity (e.g., Logie et al., 2000; see also Chubala et al., 2019). If this interpretation holds in a word recognition laboratory task, one would expect visual similarity effects to arise with misspelled common words in the paradigm of Experiment 1. Alternatively, if abstract, orthographic representations are the primary force driving the identification of common words even with repeated exposures, one would expect a negligible visual similarity effect.

Thus, Experiment 2 was designed to examine, using the same procedure as in Experiment 1, whether visual similarity effects occur for misspelled common words (e.g., anarillo vs. atarillo; baseword: amarillo [yellow]). To keep the scenario similar to that of Experiment 1, the set of basewords was also 10 – these stimuli could be presented correctly spelled or misspelled (either visually similar or visually dissimilar). Furthermore, we substituted the same letters as in Experiment 1 (e.g., from amarillo: anarillo and atarillo, paralleling the replacements anazon and atazon from amazon).
EXPERIMENT 2 (COMMON WORDS VS. MISSPELLED COMMON WORDS)
We tested an additional sample of 34 participants (15 women), recruited via Prolific Academic with the same profile as in Experiment 1. The participants’ mean age was 22.7 years (SD = 2.75). The participants received a small monetary compensation for their participation and gave informed consent before the experiment.

We selected 10 Spanish words of similar length (5–9 letters) to the brand names of Experiment 1: amarillo, delgado, PERSONA, negocio, sentido, FRACASO, nivel, MERCADO, DIFÍCIL, and COMERCIAL. The average written frequency of these words was 102.9 per million (range: 23–303) in the EsPal database (Duchon et al., 2013). To keep conditions similar to Experiment 1, five of these items were also presented in uppercase (e.g., PERSONA). For each baseword, we created two pseudowords by replacing an internal consonant with either a visually similar or a visually dissimilar letter. We replaced the same consonant as in Experiment 1 (e.g., in Experiment 1, anazon/atazon [amazon]; in Experiment 2, anarillo/atarillo [amarillo]). None of the pseudowords had any orthographic neighbors other than their baseword. Bigram frequency was matched across the two sets of pseudowords (the mean log bigram frequencies in Davis & Perea's, 2005, B‐Pal database were 2.40 and 2.52 for the visually similar and visually dissimilar conditions, p = .28).
To avoid uniformity of the stimuli – note that the items in Experiment 1 were presented either embedded in their full logo or unformatted – the stimuli were presented interchangeably in two standard typefaces (Times New Roman, Arial).

The procedure was the same as in Experiment 1 except that, instead of categorizing brand names as correctly spelled or not, participants had to decide whether the item was a correctly spelled Spanish word or not.

As in Experiment 1, we removed error responses from the response time data (3.8% for correctly spelled words; 2.5% for misspelled words) and anticipatory responses (less than 250 ms; 2 observations, less than 0.013% of the data). The deadline for responding was 2000 ms – lack of a response before the deadline was coded as an error (24 observations; less than 0.15% of the data). For words, the mean RT was 622 ms and the error rate was 3.8%. The inferential analyses of the misspelled items were analogous to those in Experiment 1, except that there was only one fixed factor (visual similarity: similar vs. dissimilar, encoded as −0.5 and 0.5).

The response time analyses showed that misspelled words were responded to slightly more slowly when the replaced letter was visually similar to the baseword's original letter than when it was visually dissimilar (627 vs. 617 ms, respectively), b = 5.0, SE = 6.6, 95% CrI (−8.3, 18.2). Similarly, there were no clear signs of an effect of visual similarity in the accuracy data (3.2% vs. 1.8% errors for the visually similar and visually dissimilar items), b = −0.45, SE = 0.35, 95% CrI (−1.15, 0.26; see Figure 4 for the posterior estimates).
FIGURE 4
Highest Density Intervals with the 50%, 75%, 89%, 95%, and 100% Credible Intervals for each of the estimates of the Bayesian Linear Mixed‐Effects models on response time (left panel) and accuracy (right panel) for misspelled common words in Experiment 2
Thus, the present experiment showed that misspelled words were not affected by their visual similarity to their basewords (e.g., atarillo and anarillo produced comparable response times and error rates), even when the paradigm involved repeated presentations of each item. This negligible effect of visual similarity is entirely consistent with earlier research in which misspelled words were presented only once (e.g., Gutierrez‐Sigut et al., 2022; Perea & Panadero, 2014).

We believe, however, that the small numerical trend in the latency data deserves further scrutiny. Following the logic stated earlier, one might argue that an effect of visual similarity on misspelled common words could arise from episodically based memory retrieval once the items have been presented several times, but not in the initial part of the experiment. As in Experiment 1, we divided the 480 experimental trials into three blocks (first block: trials 1–160; second block: trials 161–320; third block: trials 321–480) and created delta plots of the visual similarity effect (visually similar – visually dissimilar) as a function of response time (.1, .3, .5, .7, and .9 quantiles).

As shown in Figure 5, the delta plots for misspelled words did not show any signs of a visual similarity effect in the initial part of the experiment (Block 1). The general pattern was analogous in the second and third parts (Blocks 2 and 3), except for some hints of a visual similarity effect on misspelled common words, especially at the highest quantile (.9).5 To check whether the data could support this observation, we conducted a post hoc analysis of the visual similarity effect on the latency data of Block 3.
This analysis did not reveal an effect of visual similarity either, b = 7.35, SE = 6.96, 95% CrI (−6.61, 21.35). We also inspected whether the visual similarity effect on misspelled words increased across blocks in the error rates, but there was no trend of an effect (1.20, 1.36, and 1.54% in Blocks 1, 2, and 3, respectively). Taken together, these analyses suggest that visual similarity plays a minimal role, if any, in the identification of misspelled common words – only for the slowest responses in Block 2 are there some hints of an effect.
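The block-wise delta plots described above can be sketched as follows. This is a minimal illustration: the latencies below are simulated for demonstration, not the experimental data.

```python
import numpy as np

def delta_plot(rts_similar, rts_dissimilar,
               quantiles=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Quantile-wise visual similarity effect (similar minus dissimilar):
    positive values indicate slower responses in the visually similar
    condition at that point of the RT distribution."""
    qs = np.quantile(rts_similar, quantiles)
    qd = np.quantile(rts_dissimilar, quantiles)
    return qs - qd

# Fabricated latencies (ms) for one block of trials, roughly matching the
# condition means reported above (627 vs. 617 ms)
rng = np.random.default_rng(0)
similar = rng.normal(627, 80, 160)
dissimilar = rng.normal(617, 80, 160)
effect = delta_plot(similar, dissimilar)  # one delta per quantile
```

In the analysis reported here, this computation would be repeated separately for each of the three blocks, yielding the curves plotted in Figure 5.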
FIGURE 5
Delta Plots depicting the visual similarity effect for each quantile for misspelled words in the first, second and third block of Experiment 2
GENERAL DISCUSSION
It has been suggested that brand names form a lexical category different from common words, possibly requiring different processing strategies (see Gontijo et al., 2002). Somewhat surprisingly, these alleged nuances have not been systematically investigated. Recently, in a task in which the participants had to decide whether a brand name was correctly spelled, Pathak et al. (2019) found a sizeable effect of visual similarity for misspelled brand names when embedded with their complete graphical information (tacebook being responded to more slowly than xacebook). These findings cannot be captured by leading orthographically based visual word recognition models (i.e., they would have predicted similar response times to tacebook and xacebook). This visual similarity effect is also at odds with previous experiments with misspelled common words (e.g., viotin = viocin; Gutierrez‐Sigut et al., 2022; Perea & Panadero, 2014). These discrepancies raise the questions of (1) whether brand name identification is radically different from the identification of common words, and (2) whether the visual similarity effect obtained with logos and brand names could also be obtained with common words when using a parallel procedure.

To examine these issues, we designed two experiments in which the participants had to decide whether an item was spelled correctly or not. Experiment 1 included misspelled brand names (either visually similar or dissimilar to the baseword: anazon vs. atazon [baseword: amazon]; i.e., as in the Pathak et al., 2019, study), with a twist: the items could be presented with their complete graphic design or unformatted. This manipulation allowed us to examine whether the brand name's graphical information was the factor responsible for the effect of visual similarity on misspelled brand names reported by Pathak et al. (2019). Regardless of graphical information, we found a sizeable visual similarity effect (anazon was responded to more slowly and less accurately than atazon).
Thus, graphical information was not the factor responsible for the visual similarity effect on misspelled brand names. There was, however, another methodological difference between the previous experiments with misspelled brand names and misspelled common words: misspelled brand names were presented repeatedly, whereas misspelled common words were presented only once. To examine whether the items’ repeated presentation was the factor responsible for the visual similarity effect found in Experiment 1 (and in the Pathak et al., 2019, study), we designed Experiment 2. The general procedure of Experiment 2 was parallel to that of Experiment 1 except that we used misspelled common words (e.g., anarillo vs. atarillo; baseword: amarillo [yellow]) instead of brand names. Results did not show any clear signs of an effect of visual similarity with misspelled common words, thus confirming prior work (e.g., Gutierrez‐Sigut et al., 2022; Perea & Panadero, 2014) with a different procedure. Furthermore, these results indicate that the items’ repeated exposure was not the source of the visual similarity effect obtained with brand names. We now discuss the importance of these findings for a comprehensive visual word identification model.

From a theoretical perspective, the null effect of visual similarity for misspelled common words in Experiment 2 (i.e., anarillo vs. atarillo [baseword: amarillo] produced similar response times) is precisely what all orthographically based models of visual word identification predict. As indicated in the Introduction, these models assume that abstract orthographic representations containing the identity and order of the items’ letters are the critical components that drive the process of visual word identification (e.g., see Davis, 2010; Dehaene et al., 2005). However, these models cannot accommodate why misspelled brand names show an effect of visual similarity (Experiment 1; see also Pathak et al., 2019).
Experiment 1 ruled out the explanation of this phenomenon as being due to the graphical information of brand names per se (i.e., the effect was of approximately the same magnitude for logos and unformatted brand names). In addition, Experiment 2 refuted an explanation of the effect of visual similarity with misspelled brand names in terms of episodic‐based memory retrieval mechanisms: unlike with brand names, the pattern of data for common words was relatively stable across blocks (see Figures 3 and 5). Thus, the most sensible explanation is that visual similarity effects in misspelled brand names are due to the inherent characteristics of brand names. Remember that, unlike common words, brand names are typically written with a particular visual configuration, including a characteristic letter‐case arrangement and typography. Indeed, even pre‐reading children can quickly identify popular brand names in their typical typeface (e.g., see Masonheimer et al., 1984).

Given the present findings, the issue is: should we take the easy path and assert that brand name identification is simply too different from the recognition of common words? Or should we accept the challenge of examining whether visual‐word recognition models can be expanded to accommodate the peculiarities of identifying brand names? After all, Henderson (1987) indicated that brand names would be an excellent choice to explore the intricacies that underlie the process of visual word identification. A reasonable starting point is to assume multiple routes of access to the mental lexicon. Following Henderson (1987) and Davis (1999), it is essential to distinguish the processing strategies we typically employ while reading from those we are capable of employing. While an orthographically based route may be the most efficient route for skilled readers – and the only one that has been simulated in computational models of visual word recognition – it may not be the only route.
For instance, the SOLAR model of visual word recognition (Davis, 1999) included an orthographically based route and a route based on the word's visual codes. Consistent with this idea, Fischer‐Baum and Kajander (2014) described an individual with brain damage who could read single words reasonably well but was heavily impaired in various letter identification tasks. Fischer‐Baum and Kajander concluded that these findings suggest an additional route to word meaning that is not based on abstract letter units.

Indeed, there is some empirical evidence of a visual similarity effect on misspelled common words in some populations. While adult and young readers show similar response times and error rates to viotin and viocin, individuals with developmental dyslexia make more errors on viotin than on viocin (Perea & Panadero, 2014) – a pattern that was interpreted in terms of an over‐reliance on visual codes in individuals with dyslexia. Gutierrez‐Sigut et al. (2022) observed a similar dissociation with hearing versus deaf readers: only deaf readers showed an effect of visual similarity with misspelled common words, and this effect was restricted to the ERP waves. Gutierrez‐Sigut et al. (2022) suggested that, in deaf readers, the orthographic representations were not precise enough to rapidly override the visual characteristics of words during lexical access. These findings suggest that a comprehensive model of visual word recognition should pay more attention to the role of visual codes. Of note, the claim that visual codes could play a role during word recognition is not new. As far back as 1963, Havens and Foote suggested that, when presenting a word, its potential competitors during word identification would be those lexical units ‘differing from it only with respect to a form similar middle letter’ (p. 7). For lime, the word line would be a strong competitor (i.e., m and n are visually similar), but not the word life (m and f are visually dissimilar).

Notably, in a recent paper, Agrawal et al.
(2020) proposed a biologically plausible model of visual word recognition that uses visual codes and can readily capture the effects of visual similarity in misspelled brand names. In their model, lexical access is driven by neurons tuned to letter shape at specific retinal positions, together with a compositional code of the single‐letter responses. As lexical decision times to pseudowords in this model ‘are driven by the dissimilarity between the viewed string and the nearest word’ (p. 13; see also figure 6 in Agrawal et al., 2020), it would be more difficult to reject anazon (visually similar to amazon) as a brand name than atazon (visually dissimilar to amazon). Furthermore, given that the visual representations of words are compared with their representations in memory, Agrawal et al.’s compositional model can also capture the processing advantage of IKEA over ikea: the visual input IKEA would be closer to its stored representation than ikea. This same idea would also apply to other types of printed stimuli with a characteristic letter‐case configuration (e.g., Mary faster than mary; Buch faster than buch [common nouns are capitalized in German]; Peressotti et al., 2003; see also Labusch et al., 2022; Sulpizio & Job, 2018; Wimmer et al., 2016). Likewise, as noted by Agrawal et al. (2020; see figure 6A), their model can also capture visual similarity effects (e.g., faster responses to nevtral‐NEUTRAL than neztral‐NEUTRAL; Marcet & Perea, 2017, 2018), transposed‐letter effects (e.g., faster responses to jugde‐JUDGE than to jupte‐JUPTE; Perea & Lupker, 2003), and the first‐letter advantage (Tydgat & Grainger, 2009).

A limitation of the current implementation of Agrawal et al.’s (2020) compositional model is that it cannot easily capture the negligible effects of visual similarity for misspelled common words: the model would have predicted longer response times to anarillo than to atarillo, just as it does for misspelled brand names.
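The ‘dissimilarity to the nearest word’ idea can be illustrated with a toy sketch. To be clear, the letter‐shape similarity values and the one‐word lexicon below are hypothetical illustrations, not Agrawal et al.'s (2020) actual parameters or implementation.

```python
# Hypothetical pairwise letter-shape similarities (0-1); in a full model
# these would be derived from visual features, not hand-coded.
LETTER_SIMILARITY = {("m", "n"): 0.8, ("m", "t"): 0.1, ("c", "t"): 0.3}

def letter_similarity(a: str, b: str) -> float:
    """Shape similarity of two letters; identical letters are maximally similar."""
    if a == b:
        return 1.0
    return LETTER_SIMILARITY.get((a, b), LETTER_SIMILARITY.get((b, a), 0.0))

def visual_dissimilarity(item: str, word: str) -> float:
    """Sum of per-position shape dissimilarities for equal-length strings."""
    assert len(item) == len(word)
    return sum(1.0 - letter_similarity(a, b) for a, b in zip(item, word))

LEXICON = ["amazon"]  # toy lexicon with a single stored representation

def nearest_word_dissimilarity(item: str) -> float:
    """Distance from the viewed string to its closest lexical entry."""
    return min(visual_dissimilarity(item, w) for w in LEXICON)

# anazon (n is visually similar to m) lies closer to amazon than atazon does,
# so a "no" response to anazon should be harder on this account
d_similar = nearest_word_dissimilarity("anazon")     # ≈ 0.2
d_dissimilar = nearest_word_dissimilarity("atazon")  # ≈ 0.9
```

On this sketch, the larger distance for atazon corresponds to the faster and more accurate rejections observed in Experiment 1.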
Furthermore, this model cannot capture the finding that, in masked priming lexical decision, the pair table‐TABLE (orthographically identical, but visually different) is as effective as TABLE‐TABLE (Jacobs et al., 1995; Perea et al., 2015; Vergara‐Martínez et al., 2015; see also Vergara‐Martínez et al., 2020). The compositional model would have predicted an advantage for visually identical pairs. Instead, this null effect is a natural consequence of all orthographically based models of visual‐word recognition. Nonetheless, there are masked priming findings suggesting that, under some circumstances, the word's visual characteristics may play a role during visual word recognition. First, in deaf readers, lexical decision times to TABLE‐TABLE are faster than to table‐TABLE (Gutiérrez‐Sigut et al., 2019; Perea et al., 2016b). Second, for pseudowords, lexical decision times are faster for SUPLE‐SUPLE than for suple‐SUPLE in skilled readers (e.g., see Jacobs et al., 1995; Perea et al., 2015; Vergara‐Martínez et al., 2015). Third, when the task does not require lexical access (e.g., a masked prime same‐different task in which participants have to decide whether a reference and a target stimulus are the same or not), response times are faster to TABLE‐TABLE than to table‐TABLE (Perea et al., 2016a). And fourth, for acronyms, DNA‐DNA has an advantage over dna‐DNA (i.e., priming effects are case sensitive; Kinoshita et al., 2021). While the compositional model can easily accommodate these four findings, any model that assumes a feedforward mapping of visual codes onto abstract, case‐invariant letter units would have predicted no differences between TABLE‐TABLE and table‐TABLE.

A sensible strategy to explain this intricate pattern is to assume full interactivity across levels (see Carreiras et al., 2014, for a review).
The idea is that feedback from higher processing levels may override the differences in visual codes (see Vergara‐Martínez et al., 2021, for evidence of early top‐down effects, via the N170 component, during word recognition). This feedback would modulate the interplay between the visual codes and the word's stored representations, in line with Grossberg and Stone’s (1986) resonance model (see also Van Orden & Goldinger, 1994). In this framework, logos and brand names – being typically presented with the same color, design, and typeface – may have less rich stored orthographic representations than common words. As a result, logos and brand names could be particularly sensitive to visual codes (i.e., anazon would be more perceptually similar to amazon than atazon). In contrast, given that readers are accustomed to encoding common words in various formats, their abstract representations would be richer; hence, top‐down feedback could quickly override the influence of visual codes. While admittedly speculative, this reasoning could also capture why individuals with less rich orthographic representations (e.g., individuals with dyslexia, Perea & Panadero, 2014; or deaf readers, Gutierrez‐Sigut et al., 2022) show visual similarity effects with common words. Clearly, further development of Agrawal et al.’s (2020) compositional model is necessary to offer a more detailed account of the interactive nature of visual‐word recognition; nonetheless, the idea of a role for visual codes in the access to the word's stored representations is worth exploring in future research.

We acknowledge that a potential limitation of the present experiments is that they focused on orthographic processing (i.e., ‘is the stimulus correctly spelled?’).
Although there is no a priori reason why task requirements would affect the dissociation between visual similarity effects for brand names and common words, further research would be needed to uncover the nuances of orthographic processing in brand names versus common words. One option would be to use tasks that require access to lexical‐semantic information, such as a semantic categorization task, while recording the participants’ event‐related potentials. For instance, the participants would have to decide whether the printed stimulus (either a brand name or a common word) relates to a category such as travel (e.g., Lufthansa vs. McDonalds; station vs. library). The brand names and the common words could be presented correctly spelled or not (e.g., McDoralds vs. McDotalds for brand names; librany vs. libraby for common words), thus allowing a precise examination of the time course of the effects.

In sum, the present experiments revealed a dissociation between the effects of visual similarity for misspelled brand names and misspelled common words: visual similarity plays a significant role in identifying misspelled brand names (e.g., anazon is closer to amazon than atazon), but not in identifying misspelled common words (e.g., anarillo and atarillo behave similarly). This pattern suggests that various sources of information drive the process of visual‐word recognition. Moreover, our findings may also have practical implications. When deciding whether two brand names have a similar spelling in legal litigation against copycat brands (see Chow, 2010), it is critical to pay attention to how visually similar the original and replacement letters are (e.g., anazon is more confusable with amazon than atazon). We hope these findings will encourage scholars to delve into the manifold intricacies of the front end of visual‐word recognition (see Balota et al., 2006).
CONFLICT OF INTEREST
All authors declare no conflict of interest.
AUTHOR CONTRIBUTION
Manuel Perea: Conceptualization (equal); Data curation (equal); Formal analysis (equal); Funding acquisition (equal); Methodology (equal); Writing – original draft (equal); Writing – review & editing (equal). Ana Baciero: Conceptualization (equal); Data curation (equal); Investigation (equal); Writing – original draft (equal); Writing – review & editing (equal). Melanie Labusch: Conceptualization (equal); Data curation (equal); Methodology (equal); Software (equal); Supervision (equal); Writing – original draft (equal); Writing – review & editing (equal). María Fernández‐López: Conceptualization (equal); Investigation (equal); Methodology (equal); Writing – original draft (equal); Writing – review & editing (equal). Ana Marcet: Conceptualization (equal); Funding acquisition (equal); Investigation (equal); Writing – original draft (equal); Writing – review & editing (equal).