Inge Timmers, Job van den Hurk, Francesco Di Salle, M Estela Rubio-Gozalbo, Bernadette M Jansma.
Abstract
Humans are social beings, and we express our thoughts and feelings through language. In contrast to the ease with which we speak, the underlying cognitive and neural processes of language production are fairly complex and still poorly understood. In the hereditary metabolic disease classic galactosemia, failures in language production are among the most frequently reported difficulties. It is unclear, however, what the underlying neural cause of this cognitive problem is. Modern brain imaging techniques allow us to look into the brain of a thinking patient online, that is, while he or she is performing a task such as speaking. We can indirectly measure neural activity related to the output side of a process (e.g. articulation). Most importantly, however, we can look into the planning phase prior to an overt response, thereby tapping into subcomponents of speech planning. These components include verbal memory, the intention to speak, and the planning of meaning, syntax, and phonology. This paper briefly introduces cognitive theories on language production and methods used in cognitive neuroscience, and reviews the possibilities of applying them in experimental paradigms to investigate language production and verbal memory in galactosemia.
Year: 2011 PMID: 21290187 PMCID: PMC3063545 DOI: 10.1007/s10545-010-9266-4
Source DB: PubMed Journal: J Inherit Metab Dis ISSN: 0141-8955 Impact factor: 4.982
Fig. 1 Speech production model. Displayed are the cognitive stages (left) and brain areas (right) involved in language production. The numbers in the boxes represent estimates of the temporal encoding window for each type of information. After picture presentation (0 ms), the visual system encodes the stimulus and activates a preverbal concept. The appropriate lexical entries are selected (150-225 ms, medial temporal gyrus (d)). The next stage involves syntactic encoding (left inferior frontal gyrus (IFG), taking place around 250-350 ms post stimulus (a)). Finally, phonological encoding takes place (300-500 ms, posterior superior temporal gyrus, angular gyrus (c)). The message is then presumably assembled in the left IFG. After all planning has taken place, the finished speech plan is sent to (pre-)motor areas (b) to be prepared for articulation. An online self-monitoring feedback loop (275-400 ms, superior temporal gyrus (e)) keeps track of the speech production process and intervenes if required. Note that the boxes, or stages, are for display purposes only: speech production does not involve encapsulated modules, but rather several brain regions that interact in a cascading manner. (Model adapted from Indefrey and Levelt 2004, with more recent temporal information from, for example, Sahin et al. 2009)
Overview of example paradigms to study language production and working memory difficulties in galactosemia. An overview is given of example paradigms, and the concepts on which they are based, for studying the different language production and working memory stages. These paradigms make it possible to examine whether galactosemia patients and healthy volunteers differ in the time course of information processing (EEG/ERP) and in the neural correlates (fMRI) of these stages
| Stage | Paradigm and underlying concept | Task / comparison | Method (example study) |
|---|---|---|---|
| Conceptualization | People prefer a chronological order when planning a sentence (e.g. “After I saw my favourite meal, I became hungry.”) over a non-chronological, reversed order (e.g. “Before I became hungry, I saw my favourite meal.”), presumably because non-chronologically ordered sentences place a higher demand on working memory (Munte et al.). | Conditions in which easy (“After”) and difficult (“Before”) sentences have to be produced are compared. | EEG/ERP (Marek et al.) |
| Semantics | The picture-word-interference (PWI) paradigm designed by Glaser and Düngelhoff. | Subjects see a picture and simultaneously see or hear an irrelevant word. They are asked to ignore this distractor and to name the picture. When a picture of, for example, a dog is presented with the word “cat”, naming “dog” is hampered due to semantic interference. | EEG/ERP (Hirschfeld et al.) |
| Syntax | Based on animated visual scenes, overt sentence production with varying levels of syntactic complexity (easy, medium, complex) can be elicited. | The conditions are compared. | PET (Indefrey et al.) |
| Phonology | The picture-word-interference (PWI) paradigm designed by Glaser and Düngelhoff. | Subjects see a picture and simultaneously see or hear an irrelevant word. They are asked to ignore this distractor and to name the picture. When a picture of, for example, a duck is presented with the word “dusk” (orthographically related), naming “duck” is facilitated due to phonological relatedness. | fMRI (de Zubicaray et al.) |
| Articulation | Synchronized syllable repetitions; several kinds of tasks: overt speech by reading a passage from a book, phonation of a monotone vowel, and lip and tongue movements (both without actual vocalization), to separate vocalisation from articulation. | | fMRI (Riecker et al.) |
| Verbal working memory | People prefer a chronological order when planning a sentence (e.g. “After I saw my favourite meal, I became hungry.”) over a non-chronological, reversed order (e.g. “Before I became hungry, I saw my favourite meal.”). | Easy (“After”) and difficult (“Before”) sentences are processed, and the conditions are compared. Interestingly, individuals with a higher verbal working memory span showed a greater difference between the two conditions. | EEG/ERP (Munte et al.) |
Fig. 2 Local field potentials (LFPs) versus ERPs during syntactic encoding. A descriptive comparison is made between the intracranial local field potentials of Sahin et al. (2009) and the extracranial EEG/ERP study of our group. Both studies investigated the brain’s response to the encoding of syntax. Sahin et al. instructed their participants to make grammatical inflections, whereas our participants were asked to utter a complete sentence in response to an animated scene. Lower panel: Overlay of the average LFP and ERP on the same time scale. Interestingly, despite the differences in method and in syntactic task, the morphology of the waveforms is strikingly similar at the target peak latencies (200, 320, 450 ms). Granting the assumption that peaks in the LFP and ERP signals reflect maximal neural activity, this descriptive comparison suggests common aspects of language encoding in the two signal types. Upper panel: The brain area depicted in blue represents Broca’s area, i.e. the location of the intracranial recording. The red circle reflects the presumed source of the EEG data (in correspondence with the PET study results of Indefrey et al. 2001; 2003, using the same paradigm). The EEG source still has to be confirmed
Fig. 3 Working memory model. The anterior temporal pole (a) is believed to play an important role in semantic memory retrieval and in the representation of specific semantic items. Regions in the fusiform gyrus (b) have been shown to respond differentially to different categories of objects, converging in specificity from posterior to anterior regions. The inferior frontal gyrus (IFG) (c, d, e) is involved in several (semantic) working memory related tasks: rehearsal (c), selection (d) and production (e). The lateral temporal cortex (f) is related to the perception of motion, of both biological (dorsal) and artificial (ventral) objects, and to lexical memory, whereas the posterior superior temporal gyrus (h) is the presumed region where phonological loops are maintained. Finally, the dorsolateral prefrontal cortex (dlPFC) (g) has an overall executive role in working memory tasks (after Cabeza et al. 2002; and Martin and Chao 2001). The overlap of this memory network with the language network depicted in Fig. 1 is evident