Bharath Kandimalla, Shaurya Rohatgi, Jian Wu, and C. Lee Giles.
Abstract
Subject categories of scholarly papers generally refer to the knowledge domain(s) to which the papers belong, examples being computer science or physics. Subject category classification is a prerequisite for bibliometric studies, organizing scientific publications for domain knowledge extraction, and facilitating faceted searches for digital library search engines. Unfortunately, many academic papers do not have such information as part of their metadata. Most existing methods for solving this task focus on unsupervised learning that often relies on citation networks. However, a complete list of papers citing the current paper may not be readily available. In particular, new papers that have few or no citations cannot be classified using such methods. Here, we propose a deep attentive neural network (DANN) that classifies scholarly papers using only their abstracts. The network is trained using nine million abstracts from Web of Science (WoS). We also use the WoS schema that covers 104 subject categories. The proposed network consists of two bi-directional recurrent neural networks followed by an attention layer. We compare our model against baselines by varying the architecture and text representation. Our best model achieves a micro-F1 measure of 0.76, with F1 of individual subject categories ranging from 0.50 to 0.95. The results show the importance of retraining word embedding models to maximize vocabulary overlap and the effectiveness of the attention mechanism. The combination of word vectors with TFIDF outperforms character and sentence level embedding models. We discuss imbalanced samples and overlapping categories and suggest possible strategies for mitigation. We also determine the subject category distribution in CiteSeerX by classifying a random sample of one million academic papers.
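The abstract describes the core architecture as two bi-directional RNNs followed by an attention layer. As a rough illustration only (not the authors' implementation), here is a minimal NumPy sketch of additive attention pooling over a sequence of BiRNN hidden states; the weight matrices, dimensions, and random inputs are all hypothetical:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(H, W, b, u):
    """Additive attention pooling.
    H: (T, d) hidden states from a BiRNN over T tokens.
    Returns a (d,) context vector and the (T,) attention weights."""
    scores = np.tanh(H @ W + b) @ u   # one relevance score per time step
    alpha = softmax(scores)           # weights are non-negative and sum to 1
    return alpha @ H, alpha           # weighted average of hidden states

# toy usage with random "hidden states" standing in for BiRNN output
rng = np.random.default_rng(0)
T, d = 5, 8
H = rng.normal(size=(T, d))
W = rng.normal(size=(d, d))
b = np.zeros(d)
u = rng.normal(size=d)
ctx, alpha = attention_pool(H, W, b, u)
```

The pooled context vector `ctx` would then feed a final softmax classifier over the subject categories.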
Keywords: citeseerx; digital library; neural networks; scientific papers; subject category classification; text classification; text mining
Year: 2021 PMID: 33870061 PMCID: PMC8025978 DOI: 10.3389/frma.2020.600382
Source DB: PubMed Journal: Front Res Metr Anal ISSN: 2504-0537
FIGURE 1 Subject category (SC) classification architecture.
FIGURE 2 Number of training documents (blue bars) and the corresponding F1 values (red curves) for the best-performing (top) and worst-performing (bottom) SCs. The green line shows improved F1's produced by the second-level classifier.
FIGURE 3 Top: Micro-F1's of our DANN models that classify abstracts into 81 SCs. Variants of models within each group are color-coded. Bottom: Micro-F1's of our best DANN models that classify abstracts into 81 SCs, compared with baseline models.
FIGURE 4 Distribution of F1's across 81 SCs obtained by the first-level classifier.
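The figures report micro-averaged and per-category F1 scores. For reference, a minimal sketch of how per-class and micro-averaged F1 are computed for single-label classification (the category labels below are hypothetical toy values, not from the paper):

```python
def f1_scores(y_true, y_pred, labels):
    """Per-class F1 and micro-averaged F1 for single-label classification."""
    per_class, tp_all, fp_all, fn_all = {}, 0, 0, 0
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        per_class[c] = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 0.0
        tp_all, fp_all, fn_all = tp_all + tp, fp_all + fp, fn_all + fn
    micro = 2 * tp_all / (2 * tp_all + fp_all + fn_all)
    return per_class, micro

# toy example with made-up category labels
y_true = ["cs", "bio", "phys", "cs"]
y_pred = ["cs", "bio", "cs", "cs"]
per_class, micro = f1_scores(y_true, y_pred, ["bio", "cs", "phys"])
# for single-label classification, micro-F1 reduces to accuracy (here 3/4)
```

Note that micro-F1 weights every document equally, so it is dominated by large categories, while the per-class F1 distribution (Figure 4) exposes the weaker, smaller categories.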
Top 10 SCs from classifying one million research papers in CiteSeerX with our best model.
| Rank | Subject categories | Fraction (%) |
|---|---|---|
| 1 | Biology | 23.85 |
| 2 | Computer science | 19.17 |
| 3 | Mathematics | 5.06 |
| 4 | Engineering | 4.97 |
| 5 | Public, environmental & occupational health | 3.45 |
| 6 | Physics | 3.16 |
| 7 | Environmental sciences | 1.81 |
| 8 | Astronomy & astrophysics | 1.79 |
| 9 | Neurosciences & neurology | 1.52 |
| 10 | Chemistry | 1.47 |
FIGURE 5 Normalized confusion matrices for closely related classes: a large fraction of "Geology" and "Mineralogy" papers are classified as "GeoChemistry GeoPhysics" (A); a large fraction of "Zoology" papers are classified as "Biology" or "Ecology" (B); and a large fraction of "TeleCommunications," "Mechanics," and "EnergyFuels" papers are classified as "Engineering" (C).
FIGURE 6 t-SNE plot of closely related SCs.