Silent brain infarction (SBI) is defined as the presence of 1 or more brain lesions, presumed to result from vascular occlusion, found by neuroimaging (magnetic resonance imaging [MRI] or computed tomography [CT]) in patients without clinical manifestations of stroke. SBIs are more common than stroke and can be detected on MRI in 20% of healthy elderly individuals [1-3]. Studies have shown that SBIs are associated with an increased risk of subsequent stroke, cognitive decline, and impaired physical function [1,2]. Despite this high prevalence and these serious consequences, there is no consensus on the management of SBI: routine discovery of SBIs is challenged by the absence of corresponding diagnosis codes and by limited knowledge about the characteristics of the affected population, treatment patterns, and the effectiveness of therapy [1]. Although there is strong evidence that antiplatelet and statin therapies are effective in preventing recurrent stroke in patients with prior stroke, the degree to which these results might apply to patients with SBI is unclear. Some clinicians understand SBI to be pathophysiologically identical to stroke (and thus treat it similarly), whereas others view SBI as an incidental neuroimaging finding of unclear significance. The American Heart Association/American Stroke Association has identified SBI as a major priority for new studies on stroke prevention because the population affected by SBI falls between primary and secondary stroke prevention [4].

In addition to SBI, white matter disease (WMD), or leukoaraiosis, is another common finding in neuroimaging of the elderly. Similar to SBI, WMD is usually detected incidentally on brain scans and is commonly believed to be a form of microvascular ischemic brain damage resulting from typical cardiovascular risk factors [5]. WMD is associated with subcortical infarcts due to small vessel disease and is predictive of functional disability, recurrent stroke, and dementia [6-8].
SBI and WMD are related, but it is unclear whether they result from the same, independent, or synergistic processes [9,10]. As with SBI, there are no proven preventive treatments or guidelines regarding the initiation of risk factor–modifying therapies when WMD is discovered.
Objectives
Identifying patients with SBI is challenged by the absence of corresponding diagnosis codes. One reason is that SBI-related incidental findings are not included in a patient’s problem list or other structured fields of electronic health records (EHRs); instead, the findings are captured in neuroimaging reports. A neuroimaging report is a type of EHR data that contains, as unstructured text, the interpretation and findings from neuroimaging examinations such as CT and MRI. Incidental SBIs can be detected by review of neuroradiology reports obtained in clinical practice, typically performed manually by radiologists or neurologists. However, manually extracting information from patient narratives is time-consuming, costly, and lacks robustness and standardization [11-14]. Natural language processing (NLP) has been leveraged to perform chart review for other medical conditions by automatically extracting important clinical concepts from unstructured text. Researchers have used NLP systems to identify clinical syndromes and biomedical concepts from clinical notes, radiology reports, and surgery operative notes [15]. An increasing amount of NLP-enabled clinical research has been reported, ranging from identifying patient safety occurrences [16] to facilitating pharmacogenomic studies [17]. Our study focuses on developing NLP algorithms to routinely detect incidental SBIs and WMDs.
Methods
Study Setting
This study was approved by the Mayo Clinic and Tufts Medical Center (TMC) institutional review boards. This work is part of the Effectiveness of Stroke PREvention in Silent StrOke project, which uses NLP techniques to identify individuals with incidentally discovered SBIs from radiology reports at 2 sites: Mayo Clinic and TMC.
Gold Standard
The detailed process of generating the gold standard is described in Multimedia Appendix 1. The gold standard annotation guideline was developed by 2 subject matter experts, a vascular neurologist (LYL) and a neuroradiologist (PHL), and the annotation task was performed by 2 third-year residents (KAK, MSC) from Mayo and 2 first-year residents (AOR, KN) from TMC. Each report was annotated with 1 of 3 labels for SBI (positive, indeterminate, or negative SBI) and 1 of 3 labels for WMD (positive, indeterminate, or negative WMD).

The gold standard dataset comprises 1000 radiology reports randomly retrieved from the 2 study sites (500 from Mayo Clinic and 500 from TMC), corresponding to patients with no prior or current diagnosis of stroke or dementia. To calculate interannotator agreement (IAA), 400 of the 1000 reports were randomly sampled and double read. The gold standard dataset was split into 3 roughly equal subsets for training (334 reports), development (333), and testing (333).
Experimental Methods
We compared 2 NLP approaches. One was to define the task as an information extraction (IE) task, where a rule-based IE system can be developed to extract SBI or WMD findings. The other was to define the task as a sentence classification task, where sentences can be classified as containing SBI or WMD findings.
Rule-Based Information Extraction
We adopted the open source NLP pipeline MedTagger as the infrastructure for the rule-based system implementation. MedTagger is a resource-driven, open source, unstructured information management architecture–based IE framework [18]. The system separates task-specific NLP knowledge engineering from the generic NLP process, which enables words and phrases containing clinical information to be coded directly by subject matter experts. The tool has been utilized in the eMERGE consortium to develop NLP-based phenotyping algorithms [19]. Figure 1 shows the process workflow. The generic NLP process includes sentence tokenization, text segmentation, and context detection. The task-specific NLP process detects concept mentions in the text using regular expressions and normalizes them to specific concepts. The summarization component applies heuristic rules to assign labels to the document.
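As a concrete illustration of the task-specific step, the sketch below uses a regular expression to detect SBI finding mentions and nearby keywords to fill contextual attributes. It is a minimal, hypothetical simplification of this design; the keyword sets and attribute names are our illustration, not MedTagger's actual resources:

```python
import re

# Hypothetical simplified sketch (not MedTagger itself): a regular expression
# detects SBI finding mentions, and nearby keywords set contextual attributes.
SBI_FINDING = re.compile(r"\b(infarct(?:ion)?s?|lacunes?)\b", re.IGNORECASE)
STATUS_WORDS = {"probable", "possible", "likely"}      # hedge terms
CHRONIC_WORDS = {"old", "chronic", "remote", "prior"}  # disease modifiers

def extract_sbi(sentence):
    """Return SBI concept mentions with contextual attributes."""
    tokens = set(sentence.lower().split())
    mentions = []
    for match in SBI_FINDING.finditer(sentence):
        mentions.append({
            "concept": "SBI",
            "text": match.group(0),
            "status": "probable" if tokens & STATUS_WORDS else "certain",
            "chronicity": "chronic" if tokens & CHRONIC_WORDS else "unspecified",
            "temporality": "present",  # the finding is seen on current imaging
            "experiencer": "patient",
        })
    return mentions

print(extract_sbi("probable right old frontal lobe subcortical infarct"))
```

A full system would also handle negation and sentence-internal keyword scope, which this sketch omits.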
Figure 1
Rule system process flow. SBI: silent brain infarction; WMD: white matter disease.
For example, the sentence “probable right old frontal lobe subcortical infarct as described above” is processed as an SBI concept with the corresponding contextual information: status “probable,” temporality “present,” and experiencer “patient.”

The domain-specific NLP knowledge engineering was developed in 3 steps: (1) prototype algorithm development, (2) formative algorithm development using the training data, and (3) final algorithm evaluation. We leveraged pointwise mutual information [20] to identify significant words and patterns associated with each condition for prototyping the algorithm (Multimedia Appendix 2). The algorithm was applied to the training data. Falsely classified reports were manually reviewed by 2 domain experts (LYL, PHL). Keywords were manually curated through an iterative refining process until all issues were resolved. The full list of concepts, keywords, modifiers, and disease categories is provided in Textbox 1.

Textbox 1

Confirmation keywords—disease finding (SBI): infarct, infarcts, infarctions, infarction, lacune, lacunes

Confirmation keywords—disease modifier (SBI): acute, acute or subacute, recent, new, remote, old, chronic, prior, chronic foci of, benign, stable small, stable

Confirmation keywords—disease location (SBI): territorial, lacunar, cerebellar, cortical, frontal, caudate, right frontoparietal lobe, right frontal cortical, right frontal lobe, embolic, left basal ganglia lacunar, basal ganglia lacunar, left caudate and left putamen lacunar

Confirmation keywords—disease finding (WMD): leukoaraiosis, white matter, microvascular ischemic, microvascular leukemic, microvascular degenerative

Exclusion keywords (WMD): degenerative changes
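The document-level summarization step can be sketched as a small heuristic. The mapping below from mention-level statuses to the 3 document labels is an assumption for illustration, not the published rule set:

```python
# Hypothetical document-level summarization heuristic (an illustrative
# assumption, not the exact published rules): an affirmed SBI mention makes
# the report positive, hedged-only mentions make it indeterminate, and no
# mention at all makes it negative.
def summarize_sbi(mentions):
    statuses = [m["status"] for m in mentions if m["concept"] == "SBI"]
    if any(s == "certain" for s in statuses):
        return "positive SBI"
    if statuses:  # only hedged mentions, e.g., "probable" or "possible"
        return "indeterminate SBI"
    return "negative SBI"

report_mentions = [{"concept": "SBI", "status": "probable"}]
print(summarize_sbi(report_mentions))  # indeterminate SBI
```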
Machine Learning
The machine learning (ML) approach allows the system to automatically learn robust decision rules from labeled training data. The task was defined as a sentence classification task. We adopted Kim’s convolutional neural network (CNN) [21] and implemented it using TensorFlow 1.1.0 [22]. The model architecture, shown in Figure 2, is a variation of the CNN architecture of Collobert et al [23].
Figure 2
Convolutional neural network architecture with 2 channels for an example sentence.
We also adopted 3 traditional ML models for baseline comparison: random forest [24], support vector machine (SVM) [25], and logistic regression [26]. All models used word vectors as the input representation, where each word of the input sentence is represented as a d-dimensional word vector. The word vector is generated from word embedding, a learned representation of text in which words with similar meanings have similar representations. Suppose x_1, x_2, ..., x_n is the sequence of word representations in a sentence, where

x_i = E(w_i)

Here, E(w_i) is the word-embedding representation for the i-th word w_i, with dimensionality d. In our ML experiment, we used Wang’s word embedding trained on Mayo Clinic clinical notes, where d=100 [27]. The embedding model is the skip-gram model of word2vec, an architecture proposed by Mikolov et al [28]. Let x_{i:i+k-1} represent a window of size k in the sentence. Then the output sequence of the convolutional layer is

c_i = f(w_k · x_{i:i+k-1} + b_k)

where f is a rectified linear unit function and w_k and b_k are the learned parameters. Max pooling was then performed to record the largest value from each feature map. By doing so, we obtained fixed-length global features for the whole sentence, that is,

m_k = max(c_1, c_2, ..., c_{n-k+1})

These features are then fed into a fully connected layer whose output is the final feature vector O = w·m_k + b. Finally, a softmax function is used to make the final classification decision, that is,

p(y|x; θ) = softmax(O)

where θ is the set of parameters of the model, such as w_k, b_k, w, and b.
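The forward pass described by these equations can be sketched in plain Python. The toy dimensions, vocabulary, and random parameters below are illustrative assumptions only; a trained model would learn w_k, b_k, w, and b from the labeled sentences:

```python
import math
import random

random.seed(0)
d = 4          # word-embedding dimensionality (the study used d=100)
k = 2          # convolution window size
n_filters = 3  # number of feature maps
n_classes = 2  # e.g., SBI-positive vs SBI-negative sentence

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def forward(sentence, emb, W_conv, b_conv, W_out, b_out):
    # 1) embedding lookup: x_1 ... x_n, each of dimensionality d
    X = [emb[w] for w in sentence]
    feats = []
    for w_f, b_f in zip(W_conv, b_conv):
        # 2) convolution: c_i = ReLU(w_k . x_{i:i+k-1} + b_k)
        cs = []
        for i in range(len(X) - k + 1):
            window = [v for x in X[i:i + k] for v in x]  # concatenate k vectors
            cs.append(max(0.0, sum(a * b for a, b in zip(w_f, window)) + b_f))
        # 3) max pooling keeps the largest value of each feature map
        feats.append(max(cs))
    # 4) fully connected layer O = w m + b, then softmax over classes
    logits = [sum(a * b for a, b in zip(row, feats)) + bo
              for row, bo in zip(W_out, b_out)]
    return softmax(logits)

# Toy parameters; a trained model would learn these from labeled data.
vocab = ["no", "acute", "lacunar", "infarct"]
emb = {w: [random.uniform(-1, 1) for _ in range(d)] for w in vocab}
W_conv = [[random.uniform(-1, 1) for _ in range(k * d)] for _ in range(n_filters)]
b_conv = [0.0] * n_filters
W_out = [[random.uniform(-1, 1) for _ in range(n_filters)] for _ in range(n_classes)]
b_out = [0.0] * n_classes

p = forward(["no", "acute", "lacunar", "infarct"], emb, W_conv, b_conv, W_out, b_out)
print(p)  # the 2 class probabilities, which sum to 1
```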
Evaluation Metric
For evaluation of the quality of the annotated corpus, Cohen kappa was calculated to measure the IAA during all phases [29]. As the primary objective of the study was case ascertainment, we calculated the IAA at the report level.

A 2x2 confusion matrix was used to calculate performance scores for model evaluation: positive predictive value (PPV), sensitivity, negative predictive value (NPV), specificity, and accuracy, using manual annotation as the gold standard. The McNemar test was adopted to evaluate the performance difference between the rule-based and ML models [30,31]. To better understand the potential variation between neuroimaging reports and neuroimages, we compared the best-performing model (rule-based) with neuroimaging interpretation. A total of 12 CT images and 12 MRI images were randomly sampled from the test set with stratification. Two attending neurologists read all 24 images and assigned the SBI and WMD status. Cases with discrepancies were adjudicated by the neuroradiologist (PHL). Agreement was assessed using kappa and F-measure [32].
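These report-level metrics reduce to a few lines of arithmetic, sketched below. The example counts are hypothetical, chosen only to be consistent with the reported rule-based SBI results, not the study's actual confusion matrix:

```python
def binary_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV, accuracy from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

def cohen_kappa(a, b):
    """Cohen kappa between two annotators' report-level label sequences."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    pe = sum((a.count(l) / n) * (b.count(l) / n)        # chance agreement
             for l in set(a) | set(b))
    return (po - pe) / (1 - pe)

def mcnemar_chi2(b, c):
    """McNemar statistic with continuity correction, from the 2 discordant
    cells: b = model 1 correct / model 2 wrong, c = the reverse."""
    return (abs(b - c) - 1) ** 2 / (b + c)

# Hypothetical counts consistent with the rule-based SBI row of the results.
print(binary_metrics(tp=37, fp=0, fn=3, tn=293))
```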
Results
Interannotator Agreements Across Neuroimaging Reports
Among the 400 double-read reports, 5 were removed because of invalid scan types. The IAAs for the Mayo and Tufts neuroimaging reports were 0.87 and 0.91, respectively. Overall, there was high agreement between readers on both sets of reports (Tables 1 and 2). Age-specific prevalence of SBI and WMD is provided in Multimedia Appendix 2.
Table 1
Interreader agreement across 207 Mayo neuroimaging reports.
                          CT (n=63)       MRI (n=144)     Total (n=207)
Interannotator agreement  % agree  kappa  % agree  kappa  % agree  kappa
Silent brain infarction   98.4     0.92   97.2     0.83   97.6     0.87
White matter disease      100.0    1.00   98.6     0.97   99.0     0.98
Table 2
Interreader agreement across 188 Tufts Medical Center neuroimaging reports.
                          CT (n=80)       MRI (n=108)     Total (n=188)
Interannotator agreement  % agree  kappa  % agree  kappa  % agree  kappa
Silent brain infarction   98.8     0.79   99.1     0.94   99.5     0.91
White matter disease      100.0    1.00   99.1     0.98   99.5     0.99
Natural Language Processing System Performance
Overall, the rule-based system yielded the best performance of predicting SBI with an accuracy of 0.991. The CNN achieved the best score on predicting WMD (0.994). Full results are provided in Table 3.
Table 3
Performance on test dataset against human annotation as gold standard.
Model                    Sensitivity  Specificity  PPV    NPV    Accuracy

Silent brain infarction (n=333)
  Rule-based system      0.925        1.000        1.000  0.990  0.991
  CNNa                   0.650        0.993        0.929  0.954  0.952
  Logistic regression    0.775        0.983        0.861  0.970  0.958
  SVMb                   0.825        1.000        1.000  0.977  0.979
  Random forest          0.875        1.000        1.000  0.983  0.986

White matter disease (n=333)
  Rule-based system      0.942        0.909        0.933  0.921  0.928
  CNN                    0.994        0.994        0.994  0.994  0.994
  Logistic regression    0.906        0.865        0.896  0.877  0.888
  SVM                    0.864        0.894        0.917  0.830  0.877
  Random forest          0.932        0.880        0.913  0.906  0.910

aCNN: convolutional neural network.
bSVM: support vector machine.
PPV: positive predictive value; NPV: negative predictive value.
According to the McNemar test, the difference between the rule-based system and the CNN on SBI was statistically significant (P=.03). We found no statistically significant differences among the remaining models.

Table 4 lists the evaluation results of NLP and the gold standard derived from reports against the neuroimaging interpretation for SBI and WMD. Both NLP and the gold standard had moderate-to-high agreement with the neuroimaging interpretation, with kappa scores around .5. Our further analysis showed that the report-derived findings (gold standard and NLP) achieved high precision and moderate recall compared with the neuroimaging interpretation. Through confirmation with Mayo and TMC radiologists, we believe this discrepancy was due to inconsistency in documentation standards related to clinical incidental findings, causing SBIs and WMDs to be underreported.
Table 4
Comparison of the neuroimaging interpretation with gold standard and natural language processing.
Evaluation against the neuroimaging interpretation

                                 F-measure  kappa  Precision  Recall

Silent brain infarction (n=24)
  Gold standard                  0.74       0.50   0.92       0.69
  NLPa                           0.74       0.50   0.92       0.69

White matter disease (n=24)
  Gold standard                  0.78       0.56   0.86       0.80
  NLP                            0.74       0.49   0.85       0.73

aNLP: natural language processing.
Discussion
Machine Learning Versus Rule
In summary, the rule-based system achieved the best performance in predicting SBI, and the CNN model yielded the highest score in predicting WMD. When detecting SBI, the ML models achieved high specificity, NPV, and PPV but only moderate sensitivity because of the small number of positive cases. Oversampling is a technique that adjusts the class distribution of training data to balance the ratio of positive to negative cases [33]. This technique was applied to the training data to help boost the signal of positive SBIs. Performance improved slightly but was limited by overfitting, a situation in which a model learns the training data too well, so that unnecessary details and noise in the training data harm the generalizability of the model. In our case, the Mayo reports have larger language variation (noise) because of a free-style documentation method, whereas TMC uses a template-based documentation method. According to the sublanguage analysis, Mayo had 212 unique expressions for describing no acute infarction, whereas TMC had only 12. Therefore, the model trained on oversampled data was biased toward expressions that appeared only in the training set. When predicting WMD, the ML models outperformed the rule-based model because the WMD dataset is more balanced than the SBI dataset (60% positive cases), which allows the system to learn equally from both classes (positive and negative). The overall performance on WMD is better than on SBI because WMDs are often explicitly documented as important findings in neuroimaging reports.
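Random oversampling of the minority class can be sketched as follows; this is a generic illustration of the technique, not the study's exact resampling procedure:

```python
import random

def oversample(examples, labels, positive="positive", seed=7):
    """Random oversampling: duplicate positive-class examples until the
    classes are balanced. A generic sketch of the technique."""
    rng = random.Random(seed)
    pos_idx = [i for i, y in enumerate(labels) if y == positive]
    n_neg = len(labels) - len(pos_idx)
    # draw minority-class indices with replacement to close the gap
    extra = [rng.choice(pos_idx) for _ in range(n_neg - len(pos_idx))]
    new_examples = list(examples) + [examples[i] for i in extra]
    new_labels = list(labels) + [positive] * len(extra)
    return new_examples, new_labels

sents = ["old lacunar infarct", "no acute infarct", "normal study",
         "unremarkable", "no abnormality", "age-appropriate findings"]
ys = ["positive", "negative", "negative", "negative", "negative", "negative"]
xs2, ys2 = oversample(sents, ys)
print(ys2.count("positive"), ys2.count("negative"))  # 5 5
```

Because the duplicated positives repeat the exact wording of the training sentences, this procedure amplifies training-set-specific expressions, which is consistent with the overfitting observed above.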
False Prediction Analysis
Coreference resolution was the major challenge for the rule-based model in identifying SBIs. Coreference resolution is an NLP task that determines whether 2 mentioned concepts refer to the same real-world entity. For example, in Textbox 2, “The above findings” refers to “where there is an associated region of nonenhancing encephalomalacia and linear hemosiderin disposition.” To determine whether a finding is SBI positive, the system needs to extract both concepts and detect their coreference relationship.

Textbox 2: “Scattered, nonspecific T2 foci, most prominently in the left parietal white matter where there is an associated region of nonenhancing encephalomalacia and linear hemosiderin disposition. Linear hemosiderin deposition overlying the right temporal lobe (series 9, image 16) as well. No abnormal enhancement today. The above findings are nonspecific but the evolution, hemosiderin deposition, and gliosis suggest post ischemic change.”

For the ML system, false positives in the identification of SBIs were commonly caused by disease locations. Because the keywords foci, right occipital lobe, right parietal lobe, right subinsular region, and left frontal region often co-occurred with SBI expressions, the model assigned higher weights to these concepts during training. For example, the expression “there are a bilateral intraparenchymal foci of susceptibility artifact in the right occipital lobe, right parietal lobe, right subinsular region and left frontal region” contains 4 locations with no mention of “infarction,” yet the ML system still predicted it as SBI positive. Among all ML models, the CNN yielded the worst NPV, suggesting that the CNN was more likely to receive false signals from disease locations. Our next step is to further refine the system by increasing the training-set size, leveraging distant supervision to obtain additional SBI-positive cases.
Limitations
Our study has several limitations. First, despite the high feasibility of detecting SBIs from neuroimaging reports, there is a variation between NLP-labeled neuroimaging reports and neuroimages. Second, the performances of the ML models are limited by the number of annotated datasets. Additional training data are required to have a comprehensive comparison between the rule-based and ML systems. Third, the systems were only evaluated using datasets from 2 sites; the generalizability of the systems may be limited.
Conclusions
We adopted a standardized data abstraction and modeling process to develop NLP techniques (rule-based and ML) for detecting incidental SBIs and WMDs from annotated neuroimaging reports. Validation statistics suggested high feasibility of detecting SBIs and WMDs from EHRs using NLP.