Literature DB >> 23170067

What's In a Note: Construction of a Suicide Note Corpus.

John P Pestian, Pawel Matykiewicz, Michelle Linn-Gust.

Abstract

This paper reports on the results of an initiative to create and annotate a corpus of suicide notes that can be used for machine learning. Ultimately, the corpus included 1,278 notes that were written by someone who died by suicide. Each note was reviewed by at least three annotators who mapped words or sentences to a schema of emotions. This corpus has already been used for extensive scientific research.


Keywords:  computational linguistics; corpus; natural language processing; suicide

Year:  2012        PMID: 23170067      PMCID: PMC3500150          DOI: 10.4137/BII.S10213

Source DB:  PubMed          Journal:  Biomed Inform Insights        ISSN: 1178-2226


Introduction

This research centers on the content of suicide notes. The topic has been studied a great deal, but not at this magnitude. Over three years, suicide notes were collected, digitized, annotated, and analyzed. The sections of this paper present what is known about these notes, how they were collected and annotated, and a discussion of some implications.

Background

Content of notes

Across all age groups, between 10% and 43% of those who end their lives leave a suicide note. What is in a suicide note? Menninger suggested that the wish to die, the wish to kill, and the wish to be killed must be present for suicide to occur,1 but there is a paucity of research about the presence of those three motives in suicide notes. Brevard, Lester, and Yang2 analyzed notes to determine whether Menninger’s three concepts were present in suicide notes. Without controlling for gender, they reported more evidence for the wish to be killed in suicide notes of completers (those who die) than in the notes of non-completers.2 Leenaars et al3 revisited Menninger’s triad by comparing 22 suicide notes and 22 parasuicide notes that were carefully matched. They concluded that the notes from suicide completers were more likely to have content reflecting anger or revenge and less likely to show escape as a motive. Additionally, although not statistically significant, completers often tended to show self-blame or self-punishment. Another study of 224 suicide notes from 154 subjects characterized note-leavers as young females of non-widowed marital status with no history of previous suicide attempts, no previous psychiatric illness, and religious beliefs. Suicide notes written by young people were found to be longer, rich in emotions, and often begging for forgiveness. Another study noted that genuine notes often included statements such as the experience of adult trauma, expressions of ambivalence, feelings of love, hate and helplessness, constricted perceptions, loss, and self-punishment. One important and consistent finding is the need to control for differences in age and gender.3

Using suicide notes for clinical purposes

Of those who attempt suicide a second time, at least 15% of attempts are fatal. As noted by Freedenthal, “determining the likelihood of a repeated attempt is an important role of a medical facility’s psychiatric intake unit and notoriously difficult because of a patient’s denial, intent for secondary gain, ambivalence, memory gaps, and impulsivity”.4 One indicator of severity and intent is simply the presence of a suicide note. Analysis has shown that patients presenting at an emergency department with non-fatal self-harm and a suicide note are more likely to be at increased risk of later completing suicide.5 Evidence from a suicide note may illuminate true intentions, but the lack of one does not obviate important questions such as: Without a note, is the patient’s risk less severe? How many patients died by suicide without leaving a note? Is there a difference between the notes of completers and those of attempters? Valente matched notes from 25 completers and 25 attempters and found differences in thematic content surrounding fear, hopelessness, and distress.6 On the other hand, Leenaars found no significant difference between thematic groups.3 The emergence of natural language processing (NLP) and machine learning methods presents the opportunity to re-examine these previous efforts with new analytical tools. Those tools, however, require an annotated corpus of suicide notes.

Methods

Corpus development

A corpus is a collection of written works. An annotated corpus is one that has been reviewed for certain characteristic words, concepts, or sentences, such as anger, hopelessness, or peace. Here we collected 1,319 notes written by people before they died by suicide. The notes were collected between 1950 and 2012 by Drs. Edwin Shneidman (UCLA) and John Pestian (Cincinnati Children’s Hospital Medical Center, CCHMC). Database construction began in 2009 and was approved by CCHMC’s Institutional Review Board (#2009-0664). Each note was scanned into the Suicide Note Module (SNM) of our clinical decision support framework, CHRISTINE.7 The scanned notes were then transcribed to a text-based version by a professional transcriptionist. Each note was then reviewed for errors by three separate reviewers. Their instructions were to correct transcription errors but to leave the notes’ original errors of spelling, grammar, and so forth intact.

Anonymization

To assure privacy, the notes were anonymized. To retain their value for machine learning purposes, personal identification information was replaced with like values that obscure the identity of the individual.8 All female names were replaced with “Jane,” all male names were replaced with “John,” and all surnames were replaced with “Johnson.” Dates were randomly shifted within the same year. For example, Nov. 18, 2010, may have been changed to May 12, 2010. All addresses were changed to 3333 Burnet Ave., Cincinnati, OH, 45229, the CCHMC’s main campus.
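The replacement scheme above can be sketched as follows. This is a minimal illustration, not the authors’ tool: the name lists and the regular-expression matching are hypothetical stand-ins, since the paper specifies only what the replacements were, not how names and dates were detected.

```python
import random
import re
from datetime import date, timedelta

# Hypothetical detection lists; a real system would use a name recognizer.
FEMALE_NAMES = {"Mary", "Susan"}
MALE_NAMES = {"Robert", "James"}
SURNAMES = {"Smith", "Miller"}


def replace_names(text: str) -> str:
    """Replace detected names with the fixed placeholders from the paper:
    female names -> Jane, male names -> John, surnames -> Johnson."""
    for name in FEMALE_NAMES:
        text = re.sub(rf"\b{name}\b", "Jane", text)
    for name in MALE_NAMES:
        text = re.sub(rf"\b{name}\b", "John", text)
    for name in SURNAMES:
        text = re.sub(rf"\b{name}\b", "Johnson", text)
    return text


def shift_date_within_year(d: date, rng: random.Random) -> date:
    """Randomly move a date to another day of the same calendar year."""
    start = date(d.year, 1, 1)
    days_in_year = (date(d.year, 12, 31) - start).days
    return start + timedelta(days=rng.randrange(days_in_year + 1))
```

For example, `replace_names("Mary Smith wrote to Robert.")` yields `"Jane Johnson wrote to John."`, and a shifted Nov. 18, 2010 always remains a 2010 date.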

Annotators

It is the role of an annotator to review a note and select those words, phrases, or sentences that represent a particular emotion. Recruiting the most appropriate annotators led us to consider “vested volunteers,” that is, volunteers with an emotional connection to the topic. This emotional connection is what makes the approach different from crowd-sourcing,9 where there is no known emotional connection. In our case, these vested volunteers, routinely called suicide loss survivors, were generally active in a number of suicide communities. Approximately 1,500 members of several online communities were notified of the study via e-mail or indirectly via Facebook’s suicide bereavement resource pages. Two groups were most active: Karyl Chastain Beal’s online support groups, Families and Friends of Suicides and Parents of Suicides, and the Suicide Awareness Voices of Education, directed by Dan Reidenberg, Psy.D. The notification to potential participants included information about the study, its funding source, and what would be expected of a participant. Respondents were vetted in two stages. The first stage was to meet the inclusion criteria: at least 21 years of age, English as a primary language, and willingness to read and annotate 50 suicide notes. The second stage was an e-mail in which respondents were asked to describe their relationship to the person lost to suicide, the time since the loss, and whether or not the bereaved person had been diagnosed with any mental illness. Demographic information about the vested volunteers is described below. Once fully vetted, they were given access to an automated training site. Training consisted of an online review and annotation of 10 suicide notes. If an annotator agreed with the gold standard at least 50% of the time, they were asked to annotate 50 more notes. Annotators were also reminded that they could opt out of the study at any time if they had any difficulties, and they were given several options for support.

Emotional assignment

Each note in the shared task’s training and test set was annotated by at least three individuals. Annotators were asked to identify the following emotions: abuse, anger, blame, fear, guilt, hopelessness, sorrow, forgiveness, happiness, peacefulness, hopefulness, love, pride, thankfulness, instructions, and information. A special web-based tool was used to collect, monitor, and arbitrate annotators’ activity. The tool collects annotations at the word and sentence level. It also allows different concepts to be assigned to the same word. This feature made it impossible to use a simple κ inter-annotator agreement coefficient.10 Instead, Krippendorff’s α11 with Dice’s coincidence index12 was used. Artstein and Poesio13 provide an excellent explanation of the differences among, and applicability of, a variety of agreement measures. There is no need to repeat their discourse here; however, it is worth explaining how it applies to suicide note annotation. Table 1 shows an example of a single note annotated by three different coders. At a glance, one can see that the agreement measure has to accommodate multiple coders (a1, a2, a3), missing data, and multi-level agreement (“anger, hate” vs. “anger, blame”, where the distance d = 1/2, and “hate” vs. “anger, hate”, where d = 1/3). Krippendorff’s α accommodates all of these needs and enables calculations over different spans. Although the annotators were asked to annotate sentences, they usually annotated clauses and, in some cases, phrases. For this shared task, the token-level annotations were merged to create sentence-level labels. This is only an approximation of what happens in suicide notes: many notes do not follow typical English grammatical structure, so none of the known text segmentation tools works well with this unique corpus. Nevertheless, this crude approximation yields similar inter-annotator agreement (see Table 2). Finally, a single gold standard was created from the three sets of sentence-level annotations.
There was no reason to adopt an a priori preference for one annotator over another, so the democratic principle of assigning the majority annotation was used (see Table 1). This remedy is somewhat similar to the Delphi method, though not as formal.14 The majority annotation consists of those codes assigned to the document by two or more of the annotators. Several problems are possible with this approach; for example, the majority annotation may be empty. The arbitration phase therefore focused on notes with the lowest inter-annotator agreement, where that situation could occur. Annotators were asked to re-review the conflicting notes, but not all of them completed this final stage of the annotation process. Thirty-seven percent of sentences had a concept assigned by only one annotator.
Table 1

Example of a note annotation at different spans, with the corresponding Krippendorff’s α and the majority rule.

Token level (“I Hate You. I Love You.”), α ≈ 0.570:
  a1: Hate on “Hate”; Love on “Love”
  a2: Anger, hate on “Hate” and “You”; Love on “Love” and “You”
  a3: Anger, blame on “I”, “Hate”, and “You”; Love on “I”, “Love”, and “You”

Sentence level, α ≈ 0.577:
  a1: “I Hate You.” → Hate; “I Love You.” → Love
  a2: “I Hate You.” → Anger, hate; “I Love You.” → Love
  a3: “I Hate You.” → Anger, blame; “I Love You.” → Love

Majority:
  m: “I Hate You.” → Anger, hate; “I Love You.” → Love
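The agreement machinery described above can be sketched in a few lines of code. This is a minimal illustration, not the authors’ web-based tool, and `dice_distance`, `krippendorff_alpha`, and `majority_labels` are hypothetical helper names; applied to the sentence-level annotations of Table 1, it reproduces the reported α ≈ 0.577 and the majority labels.

```python
from collections import Counter


def dice_distance(a: set, b: set) -> float:
    """1 minus Dice's coincidence index; 0 means identical label sets."""
    if not a and not b:
        return 0.0
    return 1.0 - 2.0 * len(a & b) / (len(a) + len(b))


def krippendorff_alpha(units, distance) -> float:
    """alpha = 1 - D_o / D_e for units = [[value per coder, ...], ...]."""
    values = [v for unit in units for v in unit]
    n = len(values)
    # Observed disagreement: distances within units, each unit's pair sum
    # weighted by 2 / (number of coders in the unit - 1).
    d_o = sum(
        2.0 / (len(unit) - 1)
        * sum(distance(unit[i], unit[j])
              for i in range(len(unit)) for j in range(i + 1, len(unit)))
        for unit in units
    ) / n
    # Expected disagreement: average distance between all pairs of values.
    d_e = sum(
        distance(values[i], values[j])
        for i in range(n) for j in range(i + 1, n)
    ) / (n * (n - 1) / 2.0)
    return 1.0 - d_o / d_e


def majority_labels(annotations: list) -> set:
    """Keep each label assigned by a majority of the annotators."""
    counts = Counter(label for ann in annotations for label in ann)
    return {lab for lab, c in counts.items() if c > len(annotations) / 2}


# Sentence-level annotations from Table 1.
sentence_units = [
    [{"hate"}, {"anger", "hate"}, {"anger", "blame"}],  # "I Hate You."
    [{"love"}, {"love"}, {"love"}],                     # "I Love You."
]
alpha = krippendorff_alpha(sentence_units, dice_distance)  # 15/26 ≈ 0.577
majority = majority_labels(sentence_units[0])              # {"anger", "hate"}
```

The two Dice distances quoted in the text also fall out directly: d({anger, hate}, {anger, blame}) = 1/2 and d({hate}, {anger, hate}) = 1/3.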
Table 2

Annotator characteristics.

Response to call
  Directly contacted: 1,500
  Indirectly contacted via social media: unknown
  Did not meet inclusion criteria: 10
  Completed training: 169
  Withdrew before completing annotation: 17
  Respondents who fully completed the annotation: 64

Annotator performance
  Notes annotated at least once: 1,278
  Notes annotated at least twice: 1,225
  Notes annotated at least three times: 1,004
  Mean (SD) annotation time per note: 4.4 min (1.3 min)
  Token-level inter-annotator agreement: 0.535
  Sentence-level inter-annotator agreement: 0.546

Gender and age
  Males: 10%
  Females: 90%
  Average age (SD): 47.3 (11.2)
  Age range: 23–70

Education level
  High school degree: 26
  Associates degree: 13
  Bachelors: 23
  Masters: 34
  Professional (PhD/MD/JD): 4

Connection to suicide
  Survivor of a loss to suicide: 70
  Mental health professional: 18
  Other: 12

Time since loss occurred
  0–2 years: 27
  3–5 years: 25
  6–10 years: 14
  11–15 years: 13
  16 years or more: 12

Relationship to the suicide
  Child: 31
  Sibling: 23
  Spouse or partner: 15
  Other relative: 9
  Parent: 8
  Friend: 5
Notes: Some survey questions were not completed; columns therefore may not sum to totals.

Results

The characteristics of the annotators are described in Table 2.

Note content

Selected characteristics of the data are found in Table 3. This table provides an overview of the data using Linguistic Inquiry and Word Count (LIWC, 2007). This software contains a default set of word categories and a default dictionary that defines which words should be counted in the target text files.15
Table 3

Frequency and example of assigned emotions.

Description | Frequency | Example
Instructions | 609 | Careful, cyanide gas in the bathroom
Hopelessness | 601 | I just didn’t want to live anymore
Love | 472 | I love her
Information | 430 | I have no debts except for what my wife knows
Guilt | 423 | Forgive me please
Sorrow | 342 | Oh, how I suffer
Blame | 235 | I have been pushed around too much
Hopefulness | 216 | You will a happy and healthy life
Thankfulness | 187 | You, John have been so good to me and Jane
Anger | 183 | Well, Jane I hope this makes you happy!
Fear | 154 | I am terrified
Happiness/peacefulness | 119 | I’m ready for the next step with joy and anticipation
Pride | 89 | We have another sweet little daughter
Forgiveness | 61 | I do not blame you for anything, my dear
Abuse | 53 | Life is so cruel when you are persecuted by in-laws and ex-wife
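Dictionary-based word counting of the kind LIWC performs can be sketched as follows. The categories and word lists below are hypothetical stand-ins for illustration only; LIWC2007 ships its own validated dictionary, which is not reproduced here.

```python
import re
from collections import Counter

# Hypothetical category dictionary; a real LIWC-style dictionary maps
# thousands of words and word stems to validated categories.
CATEGORIES = {
    "sadness": {"suffer", "cry", "grief"},
    "positive": {"love", "happy", "joy"},
}


def category_counts(text: str) -> Counter:
    """Count how many tokens in the text fall into each word category."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for token in tokens:
        for category, words in CATEGORIES.items():
            if token in words:
                counts[category] += 1
    return counts
```

For example, `category_counts("Oh, how I suffer. I love her.")` counts one “sadness” token and one “positive” token.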

Discussion

This paper reports on the results of an initiative to create and annotate a corpus of suicide notes that can be used for machine learning analysis. Sentiment analysis is the process of identifying emotions in text and then evaluating that process. Finding emotional sentiment in text is complex because each annotator brings a different psychological perspective. Agreement between annotators in the range of 0.60–0.80 is considered substantial, while a range of 0.40–0.59 is considered moderate.16 Our moderate performance is what we expected given the number of notes and the differences among annotators. In a post-hoc error analysis we found that about 120 sentences were responsible for most of the annotators’ confusion. Nevertheless, this corpus provides ample opportunity for understanding the language of those who have died by suicide. In particular, it creates a vital resource for scientists to conduct machine learning and data mining on a large corpus of suicidal language. In one instance it was used as the basis for an international challenge in which 22 teams built machine learning methods designed to identify emotions in the suicide notes.17 Future uses will no doubt include the development of machine learning methods that lead to clinical applications. Finally, while this work has focused on the content of emotions, with all the challenges of psychological phenomenology that come with this approach, future work should consider how structural characteristics, including parts of speech and sentence length, could be incorporated.

1.  Comparison of suicide attempters and completers.

Authors:  Sharon M Valente
Journal:  Med Law       Date:  2004

2.  A comparison of suicide notes written by suicide completers and suicide attempters.

Authors:  A Brevard; D Lester; B J Yang
Journal:  Crisis       Date:  1990-05

3.  Challenges in assessing intent to die: can suicide attempters be trusted?

Authors:  Stacey Freedenthal
Journal:  Omega (Westport)       Date:  2007

4.  Self-harm or attempted suicide? Do suicide notes help us decide the level of intent in those who survive?

Authors:  Wally Barr; Maria Leitner; Joan Thomas
Journal:  Accid Emerg Nurs       Date:  2007-07-02

5.  Sentiment Analysis of Suicide Notes: A Shared Task.

Authors:  John P Pestian; Pawel Matykiewicz; Michelle Linn-Gust; Brett South; Ozlem Uzuner; Jan Wiebe; K Bretonnel Cohen; John Hurdle; Christopher Brew
Journal:  Biomed Inform Insights       Date:  2012-01-30

6.  Personalizing Drug Selection Using Advanced Clinical Decision Support.

Authors:  John Pestian; Malik Spencer; Pawel Matykiewicz; Kejian Zhang; Alexander A Vinks; Tracy Glauser
Journal:  Biomed Inform Insights       Date:  2009-06-23