
Emotion recognition associated with polymorphism in oxytocinergic pathway gene ARNT2.

Daniel Hovey1, Susanne Henningsson1, Diana S Cortes2, Tanja Bänziger3, Anna Zettergren1,4, Jonas Melke1, Håkan Fischer2, Petri Laukka2, Lars Westberg1.   

Abstract

The ability to correctly understand the emotional expression of another person is essential for social relationships and appears to be a partly inherited trait. The neuropeptides oxytocin and vasopressin have been shown to influence this ability as well as face processing in humans. Here, recognition of the emotional content of faces and voices, separately and combined, was investigated in 492 subjects, genotyped for 25 single nucleotide polymorphisms (SNPs) in eight genes encoding proteins important for oxytocin and vasopressin neurotransmission. The SNP rs4778599 in the gene encoding aryl hydrocarbon receptor nuclear translocator 2 (ARNT2), a transcription factor that participates in the development of hypothalamic oxytocin and vasopressin neurons, showed an association with emotion recognition of audio-visual stimuli in women (n = 309) that survived correction for multiple testing. This study provides evidence of an association that further expands previous findings of oxytocin and vasopressin involvement in emotion recognition.

Year:  2018        PMID: 29194499      PMCID: PMC5827350          DOI: 10.1093/scan/nsx141

Source DB:  PubMed          Journal:  Soc Cogn Affect Neurosci        ISSN: 1749-5016            Impact factor:   3.436


Introduction

The ability to correctly understand the emotional expression of other individuals, and thus to interpret their internal states and intentions, is essential for social interactions and relationships. This ability, present in humans as well as non-human primates, is displayed very early in infancy (Tate; Grossmann and Johnson, 2007). The heritability of emotion recognition and of cortical processing of facial emotion is moderate (Anokhin; Knafo-Noam and Uzefovsky, 2013). Other observations supporting a genetic component include shared impairments in emotion recognition among first-degree relatives (Neves; Oerlemans; Allott) and recent positive genetic association findings in large samples (Warrier). The neuropeptides oxytocin (OXT) and arginine vasopressin (AVP) have been implicated in social behaviors in animals as well as humans (Lee; Baribeau and Anagnostou, 2015). Intranasal OXT administration has been shown to enhance the recognition of emotion, positive and negative, in faces (Domes; Shahrestani) and in body language (Bernaerts), whereas AVP administration to men has been shown to reduce recognition of negative emotion in male faces (Uzefovsky) and the perception of friendliness in happy male faces (Thompson). Administration of either neuropeptide has also been reported to influence the processing and memory of emotional faces (Guastella, 2010; Rimmele; Meyer-Lindenberg). OXT and AVP are nonapeptides with peripheral and central functions, synthesized in the paraventricular and supraoptic nuclei of the hypothalamus and released through the pituitary to the periphery, as well as into the brain via local dendrites and synapses in regions including the amygdala, the hippocampus, the striatum and the brainstem (Ross and Young, 2009; Baribeau and Anagnostou, 2015).
Aryl hydrocarbon receptor nuclear translocator 2 (ARNT2) and single-minded 1 (SIM1) are two dimerizing transcription factors that participate in the development of OXT and AVP neurons in the paraventricular and supraoptic nuclei in mice (Michaud; Kublaoui; Duplan). A few studies suggest that variation in the genes encoding ARNT2 and SIM1 may be associated with human phenotypes related to social cognition (Chakrabarti; Ramachandrappa; Di Napoli; Hovey). The transmembrane glycoprotein cluster of differentiation 38 (CD38) has been shown to be important for social behavior in mice via an influence on hypothalamic OXT release (Jin). Human gene association studies have reported associations between variation in CD38 and face processing and phenotypes characterized by social impairment (Munesue; Sauer), and expression of the gene has been coupled to social skills (Riebold). The effects of OXT and AVP are mediated by the G-protein-coupled OXT receptor (OXTR) and vasopressin receptors (AVPR1A and AVPR1B). These receptors are expressed in different regions in different species in a manner suggesting that their involvement in social attention is conserved through evolution (Young; Yoshida; Roper; Boccia; Freeman). Human genetic studies have shown associations between phenotypes related to social cognition and face processing and polymorphisms in OXTR (Tost; Ebstein; Westberg and Walum, 2013; Skuse; LoParo and Waldman, 2015), AVPR1A (Yirmiya; Walum; Meyer-Lindenberg; Tansey; Ebstein; Kantojärvi; Uzefovsky) and AVPR1B (Wu; Francis), respectively. Variation in OXTR has also been associated with emotion recognition (Rodrigues; Melchers; Chen). Here, we studied the recognition of 12 different emotional expressions in three modalities, i.e. visual, auditory and audio–visual, using ecologically valid video and sound recordings, and genotyped 25 single nucleotide polymorphisms (SNPs) in eight genes (Table 1) linked to OXT and AVP signaling.
These included OXT, AVP, OXTR, AVPR1A and AVPR1B, the two transcription factor genes ARNT2 and SIM1, as well as CD38. To the best of our knowledge, previous studies have only investigated the OXTR gene in relation to emotion recognition. We hypothesized that variation in these genes, by virtue of their role in social behavior and cognition, would be associated with the ability to discern human emotional expression. Since emotion recognition has been linked to autism traits, alexithymia, emotional expressivity and perspective taking (Oberman; Ponari; Bird and Cook, 2013; Cook; Brewer; Berggren; Fridenson-Hayo; Trubanova), and these traits could therefore mediate a relationship between emotion recognition and genetic variation, post hoc tests included correlations with these self-reported traits, as well as tests of their association with any SNPs significantly associated with emotion recognition.
Table 1.

SNP information

Gene | SNP | Chromosome | Position | References
OXT | rs2740210 | 20p13 | downstream | Yrigollen et al. (2008)
OXT | rs2770378 | 20p13 | downstream | Chakrabarti et al. (2009); Hovey et al. (2014)
OXT | rs4813627 | 20p13 | downstream | Mileva-Seitz et al. (2013)
AVP | rs2740204 | 20p13 | downstream | Yrigollen et al. (2008)
OXTR | rs7632287 | 3p25 | 3′ | Hovey et al. (2015); LoParo and Waldman (2015); Walum et al. (2012)
OXTR | rs1042778 | 3p25 | 3′ UTR | Israel et al. (2009); Lerer et al. (2008)
OXTR | rs237887 | 3p25 | intron 3 | LoParo and Waldman (2015); Skuse et al. (2014); Wu et al. (2012)
OXTR | rs2254298 | 3p25 | intron 3 | Inoue et al. (2010); Israel et al. (2009); LoParo and Waldman (2015); Wu et al. (2012)
OXTR | rs53576 | 3p25 | intron 3 | Bakermans-Kranenburg and van IJzendoorn (2008); Rodrigues et al. (2009); Tost et al. (2010)
OXTR | rs4686302 | 3p25 | exon 3 | Wu et al. (2012)
OXTR | rs4564970 | 3p25 | 5′ UTR | Hovey et al. (2015); Johansson et al. (2012a, b)
OXTR | rs2268498 | 3p25 | 5′ | Christ et al. (2016); Laursen et al. (2014); Melchers et al. (2013)
OXTR | rs75775 | 3p25 | 5′ | Wang et al. (2009)
AVPR1A | rs11832266 | 12q14-15 | 5′ | Stein et al. (2014)
AVPR1A | rs10877969 | 12q14-15 | 5′ | Yang et al. (2010)
AVPR1A | rs1042615 | 12q14-15 | exon 1 | Bernhard et al. (2016)
AVPR1A | rs11174811 | 12q14-15 | 3′ UTR | Maher et al. (2011)
AVPR1A | rs1587097 | 12q14-15 | 3′ | Levran et al. (2009)
AVPR1B | rs35369693 | 1q32 | exon 1 | Francis et al. (2016)
ARNT2 | rs3901896 | 15q25.1 | intron 1 | Chakrabarti et al. (2009); Hovey et al. (2014); Di Napoli et al. (2014)
ARNT2 | rs4778599 | 15q25.1 | intron 5 | Chakrabarti et al. (2009)
ARNT2 | rs4072568 | 15q25.1 | exon 18 | Swarbrick et al. (2011)
SIM1 | rs3734354 | 6q16.3 | exon 3 | Hovey et al. (2014); Swarbrick et al. (2011)
CD38 | rs6449182 | 4p15.32 | intron 1 | Hovey et al. (2014); Jamroziak et al. (2009); Riebold et al. (2011)
CD38 | rs3796863 | 4p15.32 | intron 7 | Munesue et al. (2010); Sauer et al. (2012)

UTR, untranslated region.


Materials and methods

Participants

The study included 492 participants, recruited from the normal population, for whom both behavioral and genetic data were available: 182 men (age range: 18–36 years, mean ± s.d.: 23.7 ± 3.1) and 310 women (age range: 18–34 years, mean ± s.d.: 23.0 ± 3.2). All included participants were Caucasian, right-handed, fluent in Swedish, healthy, and reported no past or present psychiatric diseases or substance abuse. All participants provided written informed consent in accordance with the Declaration of Helsinki. The study was approved by the Stockholm regional ethical review board (2012/1511-31/2). Ethnicity was assessed by asking in which country parents and grandparents were born. Due to previously reported differences in allele frequencies and associations between different ethnic groups (e.g. Barzan), subjects of non-Caucasian or unknown ethnicity (n = 91) were excluded from the original sample (n = 583). Allele frequencies also differed significantly between Caucasian and non-Caucasian subjects in the current sample.

Multimodal emotion recognition task

Emotion recognition accuracy was assessed using the emotion recognition assessment in multiple modalities (ERAM) test, which is based on video clips of emotion expressions portrayed by professional actors from the Geneva Multimodal Emotion Portrayal corpus (Bänziger ). Actors were instructed to improvise interactions wherein they expressed emotions while pronouncing pseudolinguistic sentences with standard content (e.g. ‘ne kali bam sud molen!’). Each video shows close-up frontal views of the actor’s face and upper torso and contains facial, vocal and bodily cues to emotion. The ERAM test contains 72 items conveying 12 different emotional expressions: the positive emotions happiness, interest, pleasure, pride and relief, and the negative emotions hot anger, anxiety, despair, disgust, panic fear, irritation and sadness. The items were presented in three conditions—video only (24 video clips presented without sound), audio only (24 sounds presented alone) and audio–video (24 video clips presented with sound)—which allowed for separate assessment of visual, auditory and audio–visual emotion recognition ability. The duration of the video clips ranged between 1 and 5 s and sound levels were normalized within each of the 10 actors. Participants on average required 15 min to complete the ERAM test. Experiments were conducted individually using Authorware (Adobe Systems Inc, San Jose, CA) running on computers to present stimuli and record responses. Video content was presented on 24″ LED monitors and audio content was presented through headphones (AKG K619, AKG Acoustics GmbH, Vienna, Austria) with volume kept constant across participants. Participants first took part in a brief training session and were then presented with the video only stimuli, followed by the audio only and lastly audio–video stimuli, in a fixed order. 
For each stimulus, participants were instructed to choose the label that best represented the expression conveyed by that portrayal from a list of 12 alternatives (which were the same as the 12 intended expressions). A match between the chosen label and the intended expression was scored as a correct response. The multimodal emotion recognition task was followed by other tests on social perception and memory as well as self-report questionnaires, resulting in a test battery with a duration of approximately one and a half hours. The additional data will be presented in future scientific reports.

Genotyping

DNA was extracted from saliva samples using OraGene DNA self-collection kit (DNA Genotek, Inc, Ottawa, ON, Canada). In total, 25 SNPs in eight different genes (Table 1) were genotyped with KASPar, a competitive allele-specific polymerase chain reaction SNP genotyping system using FRET quencher cassette oligos (http://www.lgcgenomics.com). The genotyping success rate was >95%. The SNPs were chosen either because they have been shown to influence protein function or because they have been associated with behavioral phenotypes (Table 1). All SNPs had a minor allele frequency > 5%.
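As a minimal illustration of the minor-allele-frequency criterion mentioned above, the sketch below computes the MAF from genotype counts and applies the >5% filter. This is not the authors' pipeline; the function name and example counts are illustrative (the counts happen to match a genotype distribution from Table 2).

```python
# Hypothetical sketch (not the authors' pipeline): computing minor allele
# frequency (MAF) from genotype counts and applying the >5% inclusion filter.

def minor_allele_frequency(n_aa, n_ab, n_bb):
    """Return the MAF given counts of the three genotypes (AA, AB, BB)."""
    n_alleles = 2 * (n_aa + n_ab + n_bb)      # two alleles per subject
    freq_a = (2 * n_aa + n_ab) / n_alleles    # frequency of allele A
    return min(freq_a, 1 - freq_a)            # minor allele by definition

# Example with genotype counts of the size reported in this study
maf = minor_allele_frequency(207, 91, 11)
passes_filter = maf > 0.05
```

A SNP failing this filter would have too few minor-allele carriers for a meaningful genotype comparison in a sample of this size.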

Questionnaires

Autism traits were measured with the autism quotient scale (AQ; Baron-Cohen). Alexithymia was measured with the Toronto Alexithymia Scale (TAS-20; Bagby; Simonsson-Sarnecki), which includes subscales measuring a reduced ability to identify and describe one's own feelings as well as the preference and habit of focusing on external factors rather than feelings. Expressivity was measured with the Berkeley expressivity questionnaire (BEQ; Gross and John, 1997). The perspective-taking subscale of the interpersonal reactivity index (IRI; Davis, 1983) was used to measure the habit or motivation of taking another person's perspective, as this subscale was deemed relevant for the phenotype of emotion recognition. The other subscales were not included.

Statistical analysis

We studied three different emotion recognition phenotypes as primary outcome variables: recognition of the emotional expression of faces (visual), of voices (audio) and of their combination (audio–visual). Linear regression analyses in SPSS (version 23; IBM Corp, Armonk, NY) were used. The 25 SNPs were analyzed for men and women separately, since previous studies have demonstrated that the underlying mechanisms of facial emotion processing vary between the sexes (Stevens and Hamann, 2012; Thompson and Voyer, 2014). The statistical threshold for the associations was therefore corrected for 25 SNPs, three phenotypes and two sexes, resulting in a threshold of alpha = 0.05/150 = 0.00033. Accuracy was conceptualized as the percentage of correct answers. To control for the large number of options, and therefore types of potential errors (false alarms) or potential response biases (e.g. the tendency to choose the anger response button every time the subject is uncertain of which emotion is displayed), accuracy was also determined as the joint probability of (i) the emotion being correctly labeled (e.g. the number of times the emotion anger is labeled as anger divided by the number of times the emotion anger is shown) and (ii) the response option being correctly used (e.g. the number of times the emotion anger was labeled as anger divided by the number of times the anger response button was chosen) (Wagner, 1993). Since the results for this measure were almost identical to those of the main analyses, we only report the results for Wagner's response-bias control measure for the main finding in the Results section. Differences in performance between outcome variables were assessed by paired-samples t-tests and differences between men and women with independent-samples t-tests. Descriptives (mean ± s.d.) and uncorrected P-values are shown in the Results section.
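The two quantities described above can be sketched as follows: the Bonferroni-style threshold is simply 0.05 divided by the number of tests, and Wagner's (1993) unbiased hit rate for an emotion is the product of its hit rate given presentation and its hit rate given response choice. The confusion matrix below is invented for demonstration; only the formulas follow the paper.

```python
# Corrected significance threshold: 25 SNPs x 3 phenotypes x 2 sexes = 150 tests
alpha = 0.05 / (25 * 3 * 2)   # ~0.00033, as stated in the Methods

def unbiased_hit_rate(confusion, emotion):
    """Wagner's (1993) unbiased hit rate for one emotion category:
    (hits / times the emotion was shown) * (hits / times its label was chosen)."""
    hits = confusion[emotion][emotion]
    shown = sum(confusion[emotion].values())                   # row total
    chosen = sum(row[emotion] for row in confusion.values())   # column total
    if shown == 0 or chosen == 0:
        return 0.0
    return (hits / shown) * (hits / chosen)

# Toy 2-emotion confusion matrix: rows = emotion shown, columns = label chosen
confusion = {
    "anger":   {"anger": 5, "sadness": 1},
    "sadness": {"anger": 3, "sadness": 3},
}
hu_anger = unbiased_hit_rate(confusion, "anger")  # (5/6) * (5/8)
```

The second factor is what penalizes a subject who presses the anger button indiscriminately: their row hit rate may be high, but the column hit rate drops.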

Results

Allele frequencies are displayed in Table 2. All SNPs were in Hardy–Weinberg equilibrium. For the emotion recognition task, performance was higher for the audio–visual condition (M: 0.65 ± 0.13; F: 0.68 ± 0.12) than for the visual (M: 0.53 ± 0.11; F: 0.55 ± 0.13; P-values < 0.001) or the audio condition (M: 0.50 ± 0.13; F: 0.49 ± 0.12; P-values < 0.001). There was a nominally significant sex difference for the audio–visual condition only (P = 0.02).
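A Hardy–Weinberg equilibrium check of the kind reported above can be sketched with a chi-square goodness-of-fit test on genotype counts. The paper does not state which test was used, so this is a hedged illustration; the example counts are the female rs4778599 genotype counts from Table 2.

```python
# Hedged sketch: chi-square test (1 df) for Hardy-Weinberg equilibrium from
# observed genotype counts (the authors do not specify their exact test).

def hwe_chi_square(n_aa, n_ab, n_bb):
    """Compare observed genotype counts with counts expected under
    Hardy-Weinberg proportions (p^2, 2pq, q^2)."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)   # frequency of allele A
    q = 1 - p
    expected = (n * p * p, n * 2 * p * q, n * q * q)
    observed = (n_aa, n_ab, n_bb)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Female rs4778599 counts from Table 2; 3.84 is the P = 0.05 cutoff for 1 df
chi2 = hwe_chi_square(147, 124, 38)
in_equilibrium = chi2 < 3.84
```
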
Table 2.

Association analyses between OXT- and vasopressin-relevant SNPs and recognition of emotional expressions presented in three modalities: audio, visual and audiovisual

Gene | SNP | MAF | Measure | P (males) | P (females) | n** | Mean emotion recognition accuracy ± s.d.
OXT | rs2740210 | 0.32 | | ns | ns | |
OXT | rs2770378 | 0.49 | | ns | ns | |
OXT | rs4813627 | 0.39 | A | ns | 0.03 | 74/163/73 | GG: 0.47 ± 0.11; GA: 0.49 ± 0.13; AA: 0.51 ± 0.10
AVP | rs2740204 | 0.40 | | ns | ns | |
OXTR | rs7632287 | 0.23 | | ns | ns | |
OXTR | rs1042778 | 0.37 | | ns | ns | |
OXTR | rs237887 | 0.40 | AV | 0.05 | ns | 66/88/27 | AA: 0.67 ± 0.12; AG: 0.65 ± 0.13; GG: 0.61 ± 0.15
OXTR | rs2254298 | 0.12 | | ns | ns | |
OXTR | rs53576 | 0.37 | | ns | ns | |
OXTR | rs4686302 | 0.14 | AV | ns | 0.04 | 226/76/6 | CC: 0.69 ± 0.12; CT: 0.66 ± 0.13; TT: 0.60 ± 0.11
OXTR | rs4686302 | 0.14 | A | 0.02 | ns | 138/40/3 | CC: 0.51 ± 0.13; CT: 0.46 ± 0.13; TT: 0.46 ± 0.04
OXTR | rs4564970 | 0.08 | | ns | ns | |
OXTR | rs2268498 | 0.45 | V | 0.02 | ns | 50/97/34 | TT: 0.56 ± 0.12; TC: 0.53 ± 0.11; CC: 0.50 ± 0.11
OXTR | rs75775 | 0.15 | V | ns | 0.007 | 231/68/10 | GG: 0.54 ± 0.12; GT: 0.58 ± 0.14; TT: 0.62 ± 0.13
AVPR1A | rs11832266 | 0.052 | | ns | ns | |
AVPR1A | rs10877969 | 0.14 | A | ns | 0.04 | 232/73/3 | TT: 0.50 ± 0.11; TC: 0.46 ± 0.12; CC: 0.47 ± 0.21
AVPR1A | rs1042615 | 0.43 | AV | ns | 0.004 | 101/155/53 | GG: 0.65 ± 0.14; GA: 0.69 ± 0.11; AA: 0.71 ± 0.13
AVPR1A | rs11174811 | 0.13 | A | ns | 0.02 | 238/67/5 | CC: 0.50 ± 0.11; CA: 0.46 ± 0.12; AA: 0.43 ± 0.14
AVPR1A | rs1587097 | 0.09 | A | ns | 0.01 | 255/52/2 | CC: 0.50 ± 0.11; CT: 0.45 ± 0.11; TT: 0.46 ± 0.18
AVPR1B | rs35369693 | 0.06 | | ns | ns | |
ARNT2 | rs3901896 | 0.38 | AV | ns | 0.02 | 128/133/47 | CC: 0.70 ± 0.12; CT: 0.67 ± 0.12; TT: 0.65 ± 0.13
ARNT2 | rs4778599 | 0.34 | AV | ns | 0.00001* | 147/124/38 | GG: 0.71 ± 0.10; GA: 0.67 ± 0.13; AA: 0.62 ± 0.12
ARNT2 | rs4072568 | 0.20 | V | 0.03 | ns | 117/55/9 | GG: 0.54 ± 0.11; GA: 0.54 ± 0.11; AA: 0.42 ± 0.08
SIM1 | rs3734354 | 0.15 | | ns | ns | |
CD38 | rs6449182 | 0.20 | AV | ns | 0.004 | 207/91/11 | CC: 0.67 ± 0.12; CG: 0.70 ± 0.12; GG: 0.74 ± 0.11
CD38 | rs3796863 | 0.32 | A | 0.01 | ns | 81/81/20 | CC: 0.52 ± 0.11; CA: 0.48 ± 0.14; AA: 0.45 ± 0.14

MAF, minor allele frequency; ns, non-significant as in uncorrected P-value > 0.05; A, audio; V, visual; AV, audio–visual.

* Survives correction for multiple testing (P < 0.0003); uncorrected P-values are displayed.

** n, number of subjects per genotype group in the sex for which the association was significant (P < 0.05).

In women, the ARNT2 SNP rs4778599 showed a significant association, surviving correction for multiple testing, with emotion recognition of audio–visual stimuli (P = 0.00001, beta = −0.24; Table 2), which remained significant after controlling for response biases using Wagner's (1993) unbiased hit rate (P = 0.00006). Post hoc tests of specific emotional expressions, pooling the visual, audio and audio–visual items, showed the strongest association for despair (P = 0.00004, beta = −0.23). Nominally significant associations were also observed for hot anger (P = 0.005), anxiety (P = 0.02) and relief (P = 0.01), such that the phenotypic value was highest for the common GG genotype. There were no significant effects or trends for auditory or visual emotion recognition (P > 0.05). No significant associations for ARNT2 rs4778599 were observed in men (P > 0.7), who also did not show the same pattern of mean differences between genotypes (GG: 0.64 ± 0.14; GA: 0.66 ± 0.11; AA: 0.64 ± 0.13; see Table 2 for mean ± s.d. for the different genotypes in women), indicating that the lack of association in men was not due to the smaller sample size (n = 182 men and 309 women). The SNP-by-sex interaction was nominally significant (P = 0.006) in a full factorial general linear model, showing that the association was larger in women than in men.
In the absence of a replication sample, as an alternative to replication we split the sample into two random halves using the RV.BERNOULLI random-number function in SPSS, resulting in two subsamples (n = 157 and 167 women, respectively). There were nominally significant associations with the rs4778599 SNP in both subsamples of women (P-values = 0.003 and 0.002; betas = −0.23 and −0.25). No associations surviving correction for multiple testing were found for other SNPs in any of the genes with either audio, visual or audio–visual stimuli. Trend associations are displayed in Table 2. Previously reported relationships between emotion recognition and autism traits, alexithymia, emotional expressivity and perspective taking motivated the post hoc correlations and association tests with the rs4778599 SNP. Scores for these questionnaires, as well as nominally significant correlations with audio–visual emotion recognition, are shown in Table 3 for the 307 women and 181 men with data for questionnaires, genes and emotion recognition performance. Controlling for the questionnaire scores by adding them to the regression model did not affect the association between the ARNT2 SNP and recognition of audio–visual emotion (P < 0.0001 for all models, and non-significant interaction terms). The ARNT2 SNP also did not display significant associations with any of the questionnaire scores (P-values > 0.15).
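The split-half check described above assigns each subject to one of two random halves via a Bernoulli(0.5) draw. The paper used SPSS's RV.BERNOULLI; the sketch below stands in with Python's random module, and the function name and seed are illustrative.

```python
# Hedged sketch of the split-half pseudo-replication: each subject is
# assigned to one of two halves by an independent Bernoulli(0.5) draw.
import random

def split_half(subject_ids, seed=0):
    """Randomly split subjects into two disjoint subsamples."""
    rng = random.Random(seed)  # seeded for reproducibility
    half_a, half_b = [], []
    for sid in subject_ids:
        (half_a if rng.random() < 0.5 else half_b).append(sid)
    return half_a, half_b

half_a, half_b = split_half(range(309))  # 309 genotyped women in this study
# The rs4778599 association test would then be run separately in each half.
```

Because the draws are independent per subject, the halves are approximately, not exactly, equal in size, matching the unequal subsample sizes reported.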
Table 3.

Descriptives (mean ± s.d.) and correlations with questionnaire scores

Scale | Score F | Score M | Sex difference P | Correlation with audio–visual emotion recognition, F: P (Pearson r) | M: P (Pearson r)
Autism quotient (AQ) | 16 ± 5 | 17 ± 5 | 0.002 | 0.12 (−0.09) | 0.52 (−0.05)
Alexithymia (TAS) | 46 ± 12 | 46 ± 10 | 0.55 | 0.01 (−0.14) | 0.16 (−0.11)
Expressivity (BEQ) | 4.9 ± 0.8 | 4.1 ± 0.8 | <0.0001 | 0.14 (0.09) | 0.56 (−0.04)
Perspective taking (IRI) | 18 ± 4 | 18 ± 5 | 0.45 | 0.33 (0.06) | 0.21 (0.09)

F, females; M, males.


Discussion

We have demonstrated an association in women between audio–visual emotion recognition and the rs4778599 SNP in intron five of ARNT2, a gene that encodes a transcription factor involved in the development of OXT and vasopressin neurons in the hypothalamus. There were no significant or trend associations with accuracy on auditory or visual emotion recognition. The fact that we only observed an association in the multimodal, i.e. audio–visual, condition suggests that the association of the ARNT2 SNP with recognition of emotion may reflect an influence of OXT, or vasopressin, on multimodal integration. OXT has indeed been shown to influence different sensory modalities and to promote cross-modal cortical development (Zheng). The audio–visual condition displayed higher accuracy than the auditory and visual conditions, indicating that it was the easiest of the three. Since performance was well below ceiling in all three conditions, one might expect an easier condition to show larger variation and thus higher power to detect an association. However, the standard deviations were similar for the three conditions, and the variation for the audio–visual condition was, in particular, not the highest. Although none of the associations between OXTR SNPs survived correction for multiple testing, we did find a nominally significant association between the rs2268498 T allele and superior visual emotion recognition (Table 2), which is in line with previous studies of this SNP and emotion recognition abilities (Rodrigues; Melchers, 2015; Chen). There is also evidence that the rs2268498 polymorphism is functional, since it has been related to expression levels of OXTR in the human hippocampus (Reuter).
There is some evidence that genetic variation in ARNT2 may be associated with autism spectrum conditions (Chakrabarti; Vaags; Di Napoli; Hovey), which have been linked to impairments in emotion recognition in some (Berggren; Fridenson-Hayo) but not all (Castelli, 2005; Tracy) studies. Functional SNPs upstream of rs4778599, in a block ranging from intron one to intron three and including the intron-one SNP rs3901896 that showed a nominal association in our study (Table 2), have been associated with Asperger syndrome (Di Napoli). The intron-one and intron-five SNPs have been reported to be in high linkage disequilibrium in a Swedish population (Hovey). Although there is as yet no independent evidence that rs4778599 is functional, there is some previous evidence that this SNP and the intronic rs3901896 may be linked to autism spectrum diagnosis and autism traits (Chakrabarti; Hovey). In our sample, scores on the AQ scale were not associated with ARNT2 SNPs, and AQ scores did not modify the association between the ARNT2 SNP and audio–visual emotion recognition. The rs4778599 G allele, proposed to be associated with elevated autism risk or autism scores in a previous study, was in our sample associated with superior emotion recognition. Even though the previous evidence of an association with autism did not survive correction for multiple testing (Chakrabarti), this direction of association was unexpected. However, if no relationship between emotion recognition deficits and AQ scores is to be expected, the finding may not be as contradictory as it appears. The discrepancy between studies regarding a relationship between autism spectrum conditions and emotion recognition may be due to the occurrence of emotion recognition deficits only in specific subgroups with autism spectrum conditions (Nuske; Berggren).
A recent study attempting to identify subgroups of autism patients based on performance on a complex emotion recognition task indeed showed that only one smaller subgroup of autism patients displayed accuracies below the range of the controls (Lombardo). A related theory posits that emotion recognition deficits in autism are present only in subgroups with comorbid alexithymia (Bird and Cook, 2013; Cook; Brewer; Oakley). In this study, there was a nominally significant correlation between alexithymia scores and audio–visual emotion recognition (Table 3) in the expected direction. Alexithymia scores were, however, not associated with the ARNT2 SNP and did not modify the association with emotion recognition; the correlation thus does not give any insight into the mechanism by which ARNT2 variation may influence emotion recognition. Although we included the four questionnaires to further analyze and understand the association, as well as the relationship between emotion recognition and the OXT and/or vasopressin systems in general, none of them appeared to influence the association between the ARNT2 polymorphism and emotion recognition, and thus we could not pursue this line of investigation. As mentioned, the lack of correlation in this study between emotion recognition and AQ may reflect that the subgroups for which such a correlation has been reported are absent from the present sample. This may also be the case for the less investigated measures of expressivity and perspective taking. The lack of association between questionnaire scores and the ARNT2 polymorphism should be viewed in the light of the absence of correlation between the questionnaires and emotion recognition. Needless to say, the associations and correlations should be interpreted with caution until replicated in independent samples. The association between emotion recognition and the ARNT2 SNP was only observed in women.
Sex differences have been suggested for facial affect recognition (McBain; Vassallo), the neural mechanisms of face processing (Fischer; Ino) and emotion processing (Kret and De Gelder, 2012). In this study, women were slightly superior to men with respect to recognition of emotional audio–visual stimuli. Interestingly, animal studies have shown sexual dimorphism in the cerebral expression pattern of ARNT2 before gonadal formation (Dewing), and sexual dimorphism is well established for the OXT and vasopressin systems (Westberg and Walum, 2013; Dumais and Veenema, 2016), both of which are modulated by ARNT2 (Michaud). Furthermore, emotion processing is influenced by sex hormones (Toffoletto), which are inherently sex-specific and also modulate the effects of OXT (Gabor). It therefore stands to reason that an SNP in this system may have sex-specific effects. In addition, sex-specific effects of genetic variation on emotion recognition have recently been reported (Warrier), further supporting a sex-specific genetic architecture underlying variation in this phenotype. Finally, null mutations in ARNT2 cause a variety of phenotypes in humans, including but not limited to neurological abnormalities, congenital hypopituitarism and abnormalities of the kidneys (Webb), suggesting that ARNT2 is involved in several different pathways. In summary, we report a novel association between the ARNT2 SNP rs4778599 and emotion recognition in women that further emphasizes and expands previous findings of OXT and vasopressin involvement in emotion recognition.

Funding

This work was supported by the Swedish Research Council (P.L., H.F. and L.W.). The funding bodies had no role in the design and conduct of the study; in the preparation, review or approval of the manuscript; or in the decision to submit the article for publication.

Conflict of interest: None declared.
References (105 in total; first 10 shown)

1.  A Swedish translation of the 20-item Toronto Alexithymia Scale: cross-validation of the factor structure.

Authors:  M Simonsson-Sarnecki; L G Lundh; B Törestad; R M Bagby; G J Taylor; J D Parker
Journal:  Scand J Psychol       Date:  2000-03

2.  Intranasal arginine vasopressin enhances the encoding of happy and angry faces in humans.

Authors:  Adam J Guastella; Amanda R Kenyon; Gail A Alvares; Dean S Carson; Ian B Hickie
Journal:  Biol Psychiatry       Date:  2010-05-05       Impact factor: 13.382

3.  [Review] A review on sex differences in processing emotional signals.

Authors:  M E Kret; B De Gelder
Journal:  Neuropsychologia       Date:  2012-01-08       Impact factor: 3.139

4.  Functional characterization of an oxytocin receptor gene variant (rs2268498) previously associated with social cognition by expression analysis in vitro and in human brain biopsy.

Authors:  Martin Reuter; Christian Montag; Steffen Altmann; Fabian Bendlow; Christian Elger; Peter Kirsch; Albert Becker; Susanne Schoch-McGovern; Matthias Simon; Bernd Weber; Andrea Felten
Journal:  Soc Neurosci       Date:  2016-07-27       Impact factor: 2.083

5.  [Review] Vasopressin and oxytocin receptor systems in the brain: Sex differences and sex-specific regulation of social behavior.

Authors:  Kelly M Dumais; Alexa H Veenema
Journal:  Front Neuroendocrinol       Date:  2015-05-04       Impact factor: 8.606

6.  Functionality of promoter microsatellites of arginine vasopressin receptor 1A (AVPR1A): implications for autism.

Authors:  Katherine E Tansey; Matthew J Hill; Lynne E Cochrane; Michael Gill; Richard Jl Anney; Louise Gallagher
Journal:  Mol Autism       Date:  2011-03-31       Impact factor: 7.509

7.  Unsupervised data-driven stratification of mentalizing heterogeneity in autism.

Authors:  Michael V Lombardo; Meng-Chuan Lai; Bonnie Auyeung; Rosemary J Holt; Carrie Allison; Paula Smith; Bhismadev Chakrabarti; Amber N V Ruigrok; John Suckling; Edward T Bullmore; Christine Ecker; Michael C Craig; Declan G M Murphy; Francesca Happé; Simon Baron-Cohen
Journal:  Sci Rep       Date:  2016-10-18       Impact factor: 4.379

8.  Interaction between oxytocin genotypes and early experience predicts quality of mothering and postpartum mood.

Authors:  Viara Mileva-Seitz; Meir Steiner; Leslie Atkinson; Michael J Meaney; Robert Levitan; James L Kennedy; Marla B Sokolowski; Alison S Fleming
Journal:  PLoS One       Date:  2013-04-18       Impact factor: 3.240

9.  Genetic modulation of oxytocin sensitivity: a pharmacogenetic approach.

Authors:  F S Chen; R Kumsta; F Dvorak; G Domes; O S Yim; R P Ebstein; M Heinrichs
Journal:  Transl Psychiatry       Date:  2015-10-27       Impact factor: 6.222

10.  Theory of mind is not theory of emotion: A cautionary note on the Reading the Mind in the Eyes Test.

Authors:  Beth F M Oakley; Rebecca Brewer; Geoffrey Bird; Caroline Catmur
Journal:  J Abnorm Psychol       Date:  2016-08
