
RLAS-BIABC: A Reinforcement Learning-Based Answer Selection Using the BERT Model Boosted by an Improved ABC Algorithm.

Hamid Gharagozlou1, Javad Mohammadzadeh1, Azam Bastanfard1, Saeed Shiry Ghidary2.   

Abstract

Answer selection (AS) is a critical subtask of the open-domain question answering (QA) problem. The present paper proposes a method called RLAS-BIABC for AS, which is built on an attention-mechanism-based long short-term memory (LSTM) network and the bidirectional encoder representations from transformers (BERT) word embedding, enriched by an improved artificial bee colony (ABC) algorithm for pretraining and a reinforcement learning-based algorithm for the backpropagation (BP) training stage. BERT can be incorporated into downstream tasks and fine-tuned as a unified task-specific architecture, and the pretrained BERT model can capture different linguistic properties. Existing algorithms typically train the AS model as a two-class classifier with positive-negative pairs. A positive pair contains a question and a genuine answer, while a negative one contains a question and a fake answer. The output should be one for positive pairs and zero for negative pairs. Typically, negative pairs far outnumber positive ones, leading to an imbalanced classification that drastically reduces system performance. To deal with this, we define classification as a sequential decision-making process in which the agent takes a sample at each step and classifies it. For each classification operation, the agent receives a reward, where the reward for the majority class is smaller than the reward for the minority class. Ultimately, the agent finds the optimal values for the policy weights. We initialize the policy weights with the improved ABC algorithm. This initialization technique can prevent problems such as getting stuck in a local optimum. Although ABC serves well in most tasks, one weakness remains: the standard ABC algorithm disregards the fitness of related pairs of individuals when discovering a neighboring food source position.
Therefore, this paper also proposes a mutual learning technique that modifies the produced candidate food source with the higher fitness between two individuals selected by a mutual learning factor. We tested our model on three datasets, LegalQA, TrecQA, and WikiQA, and the results show that RLAS-BIABC can be recognized as a state-of-the-art method.
Copyright © 2022 Hamid Gharagozlou et al.


Year:  2022        PMID: 35571722      PMCID: PMC9106472          DOI: 10.1155/2022/7839840

Source DB:  PubMed          Journal:  Comput Intell Neurosci


1. Introduction

Today, the number of questions posted in numerous online domains, such as Stack Overflow and GitHub, grows daily. QA is one of the vital branches of natural language processing (NLP) that aims to answer questions automatically. QA can be approached in two ways. Several methods focus on generating answers, usually employing generative networks such as the generative adversarial network (GAN) [1]. Nonetheless, they cannot guarantee accurate meaning and grammar. Another category of methods uses AS, one of the essential subtasks of QA, which is also applied in other fields such as machine comprehension [2]. Over the last few years, the problem has been gaining an increasing amount of attention [3, 4]. Given a question q and a set of candidate answers A = {a_1, a_2, a_3, …, a_n}, the goal is to select a ∈ A as the best answer to question q. Questions and answers can have various lengths, and multiple candidates may be correct answers to a question. The literature contains numerous methods for AS based on traditional and deep learning approaches [5]. The traditional approaches rely more on search engines [6], information retrieval [7, 8], handcrafted rules [9], or machine learning models [10, 11]. Information retrieval-based models work on keywords without using any semantic information, which makes it challenging to obtain the correct answers [12]. Handcrafted rule-based techniques cannot cover all patterns, and their performance is limited [13, 14]. In machine learning-based methods, features are crafted manually, so their quality depends heavily on feature extraction [15, 16]. Some criteria and classifiers, including edit distance and the support vector machine, consider the matching associations between QA pairs [11]. Typically, traditional methods suffer from two major weaknesses.
First, they mostly do not use semantic information in keywords, features, or rules, so they do not consider all-sided relationships between QA pairs. Second, feature extraction and handmade rules are not flexible, leading to inferior generalization capability. After the emergence of deep learning, many problems in many domains [17-23], including AS, have been reshaped by it. Deep learning-based methods for AS usually employ a convolutional neural network (CNN) [24] or an LSTM to capture semantic features at various levels. The main task is to estimate the semantic similarity between a question-answer pair, which can be regarded as a text similarity calculation or a classification task. A CNN is employed to model the hierarchical structure of sentences and evaluate their degree of matching [25]. An LSTM, in turn, is used to generate the embeddings of questions and answers while preserving sequential dependency information. Although deep models achieve improvement, they face some difficulties. They build the embedding representation of the question-answer pair with a single neural network design, which focuses on one-sided features and ignores other complex semantic features between question-answer pairs. Later, models that try to comprehend language were developed [26]. These models learn syntactic and semantic rules in different ways, including next-word and next-sentence prediction and masked-word prediction [27]. They understand a language and can generate new texts with correct syntax and semantics. The BERT model [27] is one of the latest language models and is superior to the previously developed language models. This model takes advantage of the architecture introduced in transformers [28], which is currently widely employed in NLP tasks [29]. The success of deep models mainly relies on the architecture, the training algorithm, and the selection of features employed in training.
All these make the design of deep networks a complex optimization problem [30]. In many methods, the topology and transfer functions are fixed, and the space of possible networks is spanned by all potential values of the weights and biases [31]. In [32, 33] and [34], ant colony optimization [35], tabu search [36], simulated annealing [37], and the genetic algorithm [38] were utilized to train neural networks with fixed topology. The neural network learning optimization process discovers the weight configuration associated with the lowest output error. Nevertheless, finding the optimal weights for deep models largely depends on weight initialization, which has a more significant impact on neural network performance than network architecture and training examples [39]. AS methods, including deep ones, utilize gradient-based algorithms such as BP and Levenberg–Marquardt (LM) [40] for weight optimization. While the BP algorithms converge using first-order derivatives, the LM ones converge using second-order derivatives [41]. The main problem of BP and LM is their sensitivity to the initial weights, which leads to getting stuck in local optima [42]. To deal with this problem, global search approaches, which have the power to evade local minima, are employed to pretrain the weights, such as population-based metaheuristic (PBMH) algorithms [43-45]. Among PBMH algorithms, ABC is one of the most powerful algorithms for optimization problems; it has two advantages over traditional algorithms: no need to calculate gradients and no tendency to get caught in local optima [46]. This algorithm is based on the intelligent behavior of bees and contains two general concepts: food sources and artificial bees. Artificial bees look for food sources with high nectar. The position of a food source represents a solution to the optimization problem, and the amount of nectar corresponds to the quality of that solution.
Although the food source position is a critical factor determining whether a bee selects a food source, some necessary information is still missing when bees produce a neighboring food source. Another main problem in AS is imbalanced classes: the number of positive-class members (a question with its corresponding answer) is much smaller than the number of negative-class members (a question with a non-corresponding answer), which reduces the performance of existing methods. Methods proposed for the imbalance problem are generally divided into two groups: data-level methods and algorithmic-level methods. In data-level algorithms, the training data is manipulated to balance the class distribution by oversampling the minority class, undersampling the majority class, or both. SMOTE [47] is an oversampling method that generates new examples by linear interpolation between adjacent minority samples. NearMiss [48] is an undersampling method that deals with the imbalance problem by removing samples from the larger class: it eliminates data of the larger class when two data points belonging to different classes are close in distribution. Oversampling algorithms can increase the possibility of overfitting, and undersampling algorithms lose valuable information in the majority class. In algorithmic-level methods, the importance of the minority class is raised with techniques such as cost-sensitive learning, ensemble learning, and decision threshold adjustment. In cost-sensitive learning methods, different costs are allocated to the misclassification of each class in the loss function, with a higher cost for the minority class. Ensemble learning-based solutions train multiple subclassifiers and adopt voting to get better results. Threshold adjustment techniques train the classifier on the imbalanced dataset and change the decision threshold during the test.
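The linear-interpolation idea behind SMOTE, mentioned above, can be sketched in a few lines of Python. This is a simplified illustration, not the reference implementation; the helper name and the toy minority points are our own.

```python
import random

def smote_interpolate(minority, k=2, n_new=4, seed=0):
    """Generate synthetic minority samples by linear interpolation
    between a sample and one of its k nearest minority neighbours
    (the core idea behind SMOTE; a simplified sketch)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority neighbours of x (squared Euclidean distance)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1)]
new_points = smote_interpolate(minority)
```

Every synthetic point lies on a segment between two real minority samples, which is why SMOTE can increase overfitting when the minority region is noisy.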
Deep learning-based methods have also been suggested to classify imbalanced data. The paper [49] introduced a loss function for deep models that equally receives classification errors from the majority and minority classes. Another study [50] learns the discriminative features of imbalanced data while maintaining intercluster and interclass margins. The authors in [51] presented a method based on the bootstrapping algorithm that balances the training data of a convolutional network per mini-batch. An algorithm is proposed in [52] for jointly optimizing network weights and class-sensitive costs. In [53], the authors extracted hard samples in the minority class and improved their algorithm by batchwise optimization with the Class Rectification Loss function [54]. In the last few years, deep reinforcement learning has been successfully used in computer games, robot control, recommendation systems [55-57], etc. For classification problems, deep reinforcement learning has helped eliminate noisy data and learn better features, which significantly improves classification performance. Nonetheless, little research has applied deep reinforcement learning to imbalanced classification. Deep reinforcement learning is ideally suited for imbalanced classification because its learning mechanism and its tunable reward function make it easy to pay more attention to the minority class by giving it higher rewards or penalties. This paper presents an attention-mechanism-based LSTM model for AS, called RLAS-BIABC, built on the BERT word embedding, reinforcement learning, and an improved ABC algorithm. The main body of the RLAS-BIABC model consists of two attention-mechanism-based bidirectional LSTM (BLSTM) networks and a feedforward network that together calculate the similarity of a question-answer pair. The model aims to learn from both positive and negative pairs.
A positive pair pairs the question with its real answer, while a negative one pairs the question with one of the other answers. We use BERT as the word embedding to learn the semantic similarity between sentences without pre-engineered features. Moreover, we introduce an improved ABC algorithm for RLAS-BIABC, whose task is to find the weight initialization for all LSTMs, the attention mechanism, and the feedforward network before the BP algorithm begins. In this regard, we modify the ABC algorithm by applying mutual learning between two selected position parameters to choose the candidate food source with the higher fitness. In addition, in the BP step, our method employs reinforcement learning to handle the imbalanced classification. In this respect, we define the AS problem as a guessing game formulated as a sequential decision-making process. At each step, the agent receives an environmental state represented by a training instance and then executes a two-class classification operation under the guidance of a policy. If the classifier performs the operation well, it receives a positive reward; otherwise, it receives a negative reward. The minority class is rewarded more than the majority one. The agent's goal is to collect as much cumulative reward as possible during the sequential decision-making process, that is, to classify the samples as accurately as possible. We assess the RLAS-BIABC model on three standard datasets, TrecQA, LegalQA, and WikiQA, and show RLAS-BIABC to be superior to other methods that use random weighting. The main contributions of the article are as follows: (1) We use the BERT word embedding, one of the most recently developed language models for many languages. (2) Instead of initializing the model weights randomly, we define an encoding strategy and compute initial values using an improved ABC algorithm.
(3) We treat the AS problem as a sequential decision-making process and propose a deep reinforcement learning framework for imbalanced classification. (4) We study the performance of the proposed model through experiments and compare it with other methods that use random weight initialization and face the imbalanced classification problem. The rest of this article is organized as follows: Section 2 presents a short review of related work. Section 3 introduces the ABC algorithm. Section 4 describes the framework of the proposed model. Section 5 presents the evaluation metrics, datasets, and results. Section 6 provides conclusions and future work.

2. Related Work

Until now, many approaches to the QA problem have been proposed. This section provides an overview of methods based on machine learning and deep learning. The first proposed approaches were based on feature engineering. In these methods, the relationship between question and answer is measured by counting common words, and bag-of-words and bag-of-n-grams [58] are commonly applied for this purpose. These methods are limited because they do not respect the semantic and linguistic features of sentences. Subsequently, some studies utilized language resources such as WordNet [59] to address the semantic problem but failed to remove the linguistic limitations. Some researchers considered the syntactic and semantic structure of sentences [60]. Some authors employed the dependency tree and the tree edit distance algorithm [15, 61]. The research in [62] confirmed that tools such as WordNet and NER [63] can play an influential role in selecting semantic features. The article [64] provided an effective solution for automated feature selection; these methods were among the first attempts to eliminate feature engineering. Later, with the advent of deep learning, many methods used deep models as an automatic feature engineering tool. Recently, deep learning has covered a wide range of NLP applications [18]. Moreover, the recurrent neural network (RNN) and the CNN are applied as two strong arms of deep learning in feature extraction [20, 21]. The treatment of question-answer pairs by deep learning methods falls into two categories. In the first category, question and answer are two distinct elements, and deep networks compute their representation vectors separately. Typically, various criteria are adopted to measure the similarity between them. The authors in [65] offered a compare-aggregate system that applies several metrics for similarity measurement. The study [66] utilized the ELMo language model [26] for the question-answer task.
The results reveal the superiority of language models. In the second category, question and answer are treated as a single sentence. In [67], a CNN-based approach is presented to score question-answer pairs in a pointwise manner. Another technique [68] applies a BLSTM network to question answering. First, the embeddings of the question and answer words are learned and fed into a BLSTM network; the embedding of each sentence is then estimated as the average of its words. Lastly, the question-answer combination is fed to a feedforward network. The Siamese network [69] is an essential branch of deep learning that has been applied in many fields, especially QA. The network provides two separate representation vectors for question and answer. In [70], the first deep learning approach to the AS task is presented. In this study, the most relevant answer to the question is extracted using a CNN and logistic regression. The research in [71] extended the idea presented in [70]: the authors built different models using hidden layers, convolution operations, and activation functions to improve the results. Another work [72] mixes various models to produce representation vectors for every sentence. In [73], the authors convert each pointwise model into a pairwise model, arguing that pairwise models could further enhance performance. The pairwise scheme was also applied to the model in [72]. The study [74] proposed a preprocessing operation in which named entities are replaced with a unique token, facilitating the selection of candidate answers. The impressive effectiveness of this technique was confirmed by applying it to the model presented in [73]. Meanwhile, the authors in [75] argued that not all named entities should be replaced with one token, so they considered a separate token for each named entity. It was later found that using the attention mechanism could produce more valuable models.
Unlike the Siamese-based technique, the attention mechanism uses context-sensitive interactions [76] between question and answer. The attention mechanism was first proposed for machine translation but was later employed in other applications such as question answering [77, 78]. The approach in [79] considered the attention mechanism and RNNs to succeed in the answer-selection task. It was based on the attention mechanism proposed in [80]. In [81], the authors employed a method based on inter-weighted alignment networks to determine the similarity between a question-answer pair. The article [82] suggested a scheme based on a bidirectional alignment mechanism and stacked RNNs. In the first works, the attention mechanism was performed only on RNN, but later [83] pointed out that combining a CNN and attention mechanism could be more efficient.

3. Background

3.1. Long Short-Term Memory (LSTM)

In a nutshell, RNNs [84] are designed to model sequential inputs. In these networks, a data sequence is mapped to a series of hidden states, and the output is then generated using the following equations:

h_t = θ(W x_t + U h_{t−1} + b_h),
y_t = τ(V h_t + b_y),

where W, U, and V are weight matrices and b_h and b_y are biases. θ and τ represent activation functions such as ReLU and tanh. x_t ∈ ℝ^d is the input with dimension d, and h_t ∈ ℝ^h is the hidden state with size h at time t. RNNs have proven successful in many areas of NLP, such as text generation [85] and text summarization [86]. However, as the input length increases, these networks suffer from the exploding and vanishing gradient problems [87]. The LSTM network proposed by Hochreiter and Schmidhuber [88] can prevent these problems because its memory units can effectively handle long dependencies. In particular, an LSTM consists of several control gates and one memory unit. Let x_t, h_t, and c_t represent the input, hidden state, and memory cell at time t, respectively. Given a sequence of inputs (x_1, x_2, …, x_T), the LSTM computes a sequence of hidden states (h_1, h_2, …, h_T) and memory cells (c_1, c_2, …, c_T) as output. Formally, this process can be defined as follows [89]:

i_t = σ(W_i [h_{t−1}; x_t] + b_i),
f_t = σ(W_f [h_{t−1}; x_t] + b_f),
o_t = σ(W_o [h_{t−1}; x_t] + b_o),
c̃_t = tanh(W_c [h_{t−1}; x_t] + b_c),
c_t = f_t ⊙ c_{t−1} + i_t ⊙ c̃_t,
h_t = o_t ⊙ tanh(c_t),

where the W and b terms are network parameters; i_t, f_t, and o_t denote the input gate, forget gate, and output gate, respectively; σ stands for the sigmoid function; and ⊙ is element-wise multiplication. Although many problems can be solved under the umbrella of LSTM networks [18, 19, 90], experiments show that the BLSTM can be more effective than the LSTM. A BLSTM network [91] is an extended LSTM that processes the input from start to end and from end to start. This produces two hidden vectors for a given input at time t, a forward one and a backward one, whose concatenation forms the final hidden vector h_t. In a plain LSTM network, the information extracted by all units is weighted equally in the final decision, which reduces system performance.
To illustrate the point, consider the sentence "Despite being from Uttar Pradesh, as she was brought up in Bengal, she is fluent in Bengali." In this sentence, words like "Bengali" and "Bengal" should be given more attention, which is not the case in an original LSTM network. The attention mechanism overcomes this problem: the importance of each hidden state is expressed by a coefficient in the interval [0, 1] that weights its contribution to the final vector. Formally, the hidden representation s for an input of length T is calculated by weighting each hidden vector h_t with a coefficient α_t:

s = Σ_{t=1}^{T} α_t h_t,

where the coefficients α_t are normalized so that Σ_t α_t = 1.
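The attention pooling step can be sketched as follows; this is a minimal illustration in which hand-picked relevance scores stand in for the learned scoring network that would normally produce them.

```python
import math

def attention_pool(hidden_states, scores):
    """Combine hidden vectors into one sentence vector with softmax
    attention weights, so informative time steps (e.g. the words
    "Bengal"/"Bengali") contribute more. `scores` stands in for the
    learned relevance score of each step (an assumption here)."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    alphas = [e / total for e in exps]  # each alpha_t in [0, 1], sum = 1
    dim = len(hidden_states[0])
    pooled = [
        sum(a * h[j] for a, h in zip(alphas, hidden_states))
        for j in range(dim)
    ]
    return alphas, pooled

# three hidden vectors of size 2; the last step gets the largest score
alphas, pooled = attention_pool([[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]],
                                scores=[0.1, 0.1, 3.0])
```

Because the weights come through a softmax, the highest-scoring step dominates the pooled vector without the other steps being discarded entirely.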

3.2. Artificial Bee Colony (ABC) Algorithm

The ABC algorithm is a technique inspired by the intelligent behavior of bees in nature. Two general concepts form the main body of the ABC algorithm: food sources and artificial bees. Artificial bees look for food sources with high nectar. The position of a food source indicates a solution to the optimization problem, and the amount of nectar corresponds to the quality of that solution. ABC involves three different groups of bees: employed, onlooker, and scout bees. Employed bees search for food sources with higher nectar in the vicinity of the food sources around them and share their information with onlooker bees in the dance area. The numbers of employed and onlooker bees are the same, each equal to half of the colony; each employed bee is associated with exactly one food source, so the number of employed bees equals the total number of food sources. Like employed bees, onlooker bees search for the best food sources in their neighborhood. Employed bees whose food sources do not improve after several steps are converted to scouts, and a new search begins. The optimization process of ABC is summarized as follows. Initialization Stage. Food sources, as bee locations in the search space, are initialized as follows:

x_{ij} = x_j^{min} + rand(0, 1) · (x_j^{max} − x_j^{min}),

where i refers to the i-th solution and takes an integer value in the interval [1, BN], with BN the total number of solutions. Each solution consists of D elements, where D is the number of weights to be optimized. x_j^{min} and x_j^{max} are the lowest and highest admissible values of the j-th element of a solution, respectively. Employed Bee Stage. After initialization, the employed bees identify new sources in the neighborhood of the existing food sources and calculate the quality of the designated food sources. If the quality is better, they erase the information of the previous source from memory and replace it with that of the new source; otherwise, the data of the earlier source remains unchanged.
Formally, this step can be described by the following formula:

v_{ij} = x_{ij} + φ_{ij} (x_{ij} − x_{kj}),    (6)

where k is an integer value in the interval [1, BN] with k ≠ i, φ_{ij} is a random decimal value in [−1, 1], and v_i is a new food source derived from changing one element of x_i. Onlooker Bee Stage. In this phase, the employed bees provide information to the onlooker bees. Onlooker bees evaluate this information and select a solution based on a probability value. As in the previous step, if the new solution has more nectar, the previous position information is replaced with the new solution. The probability of choosing the i-th solution can be formulated as follows:

p_i = fit(x_i) / Σ_{n=1}^{BN} fit(x_n),    (7)

where fit(x_i) is the fitness value of the i-th solution. According to (7), the higher fit(x_i) is, the more likely an onlooker bee is to accept this solution. If the selection is performed, the onlooker bee goes to that source, and a new solution is generated according to (6). Scout Bee Stage. In the last step, scout bees are employed to escape local optima. More specifically, any solution that fails to improve after a number of cycles becomes a scout bee, and its food source is dropped; a new randomly generated food source then replaces the old one, as in the initialization stage. The four stages above are repeated until the termination criterion is met. The complete ABC algorithm is given in Algorithm 1.
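The four stages can be condensed into a short sketch. This is a generic ABC loop applied to a toy minimization problem, not the paper's tuned implementation; the population size, scout limit, and cycle count are arbitrary choices.

```python
import random

def abc_minimize(f, dim, bounds, bn=10, limit=5, cycles=50, seed=1):
    """Minimal artificial bee colony loop (employed / onlooker / scout
    stages) for minimising f over a box; a sketch of the standard
    algorithm, not a tuned implementation."""
    rng = random.Random(seed)
    lo, hi = bounds
    # initialisation stage: BN random food sources
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(bn)]
    trials = [0] * bn
    fit = lambda x: 1.0 / (1.0 + f(x))  # higher fitness = lower cost

    def neighbour(i):
        # v_ij = x_ij + phi * (x_ij - x_kj) on one random dimension j
        k = rng.choice([t for t in range(bn) if t != i])
        j = rng.randrange(dim)
        v = foods[i][:]
        v[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        v[j] = min(max(v[j], lo), hi)
        return v

    def greedy(i, v):
        # keep the fitter of current source and candidate
        if fit(v) > fit(foods[i]):
            foods[i], trials[i] = v, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(bn):                     # employed bee stage
            greedy(i, neighbour(i))
        total = sum(fit(x) for x in foods)
        for i in range(bn):                     # onlooker bee stage
            if rng.random() < fit(foods[i]) / total:
                greedy(i, neighbour(i))
        for i in range(bn):                     # scout bee stage
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                trials[i] = 0
    return min(foods, key=f)

# minimise the 2-D sphere function over [-5, 5]^2
best = abc_minimize(lambda x: sum(v * v for v in x), dim=2, bounds=(-5, 5))
```

The greedy replacement in the employed and onlooker stages provides exploitation, while the scout stage reinitializes stagnant sources and provides exploration.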

4. The Framework of RLAS-BIABC

The proposed algorithm considers two critical aspects of classification. In the first step, we form a vector containing all the learnable weights of our model and optimize it using the ABC algorithm; we then apply the BP algorithm for the rest of the training. Besides, another problem most classifiers suffer from, including ours, is imbalanced data. To take this aspect into account, we employ ideas from reinforcement learning. We present these two ideas in two separate sections. The general architecture of the proposed model is shown in Figure 1. Consider a question Q containing a sequence of n words, Q = (q_1, q_2, …, q_n), with an answer A = (a_1, a_2, …, a_m) containing m words. Let q_i, a_i ∈ ℝ^D denote the D-dimensional vector representations of words. Two LSTMs are provided, one for the question and one for the answer. Pairs of positive and negative data are used to train the model. In a positive pair (Q, A), A is the correct answer to question Q, and the output of the model should go to one. In a negative pair (Q, A′), where A′ is a fake answer to the question, the network should move to zero. The embeddings calculated by the LSTMs for question and answer are expressed as follows:

q = Σ_{i=1}^{n} α_i h_i^Q,   a = Σ_{j=1}^{m} β_j h_j^A,

where h_i^Q and h_j^A are the outputs of the i-th and j-th BLSTM units for the question and answer, respectively. α_i and β_j are the attention weights of each unit, computed as follows:

α_i = exp(w^T tanh(W_Q h_i^Q + b_Q)) / Σ_{k=1}^{n} exp(w^T tanh(W_Q h_k^Q + b_Q)),

with β_j defined analogously, where W_Q, W_A, b_Q, and b_A represent the parameters of the attention mechanism. After determining the effective representations of question and answer by the attention mechanism, we form a vector consisting of the concatenated q, a, and |q − a| according to Figure 1 and feed it into a feedforward network. It has been experimentally confirmed that the difference between the two representation vectors can contribute to a successful decision [92].
Figure 1

The proposed LSTM-similarity model.
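The construction of the feedforward network's input from the attended representations might look like the sketch below; `pair_features` is a hypothetical helper, and the tiny 2-dimensional vectors are toy values.

```python
def pair_features(q_vec, a_vec):
    """Build the feedforward-network input from the attended question
    and answer representations: the concatenation [q; a; |q - a|],
    where the element-wise absolute difference is the term reported
    to help the final decision (a sketch, not the paper's code)."""
    assert len(q_vec) == len(a_vec)
    diff = [abs(q - a) for q, a in zip(q_vec, a_vec)]
    return q_vec + a_vec + diff

# toy 2-D attended representations of a question and an answer
features = pair_features([0.2, 0.8], [0.5, 0.8])
```

For D-dimensional representations, the classifier therefore receives a 3D-dimensional input; the |q − a| block makes element-wise agreement between the two sentences directly visible to the first dense layer.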

4.1. BERT-Based Word Embedding

Word embedding is a function that maps words to semantic vectors for use in deep learning algorithms. It is a reliable way to extract significant representations of words based on their context. Much research has been conducted to find the most meaningful word representations with neural network models such as Skip-gram [93], GloVe [94], and FastText [95]. Over the last few years, the pretrained language model (PLM), a black box with prior knowledge of natural language that is fine-tuned for NLP tasks, has been widely applied. PLM models generally use unlabeled data to learn model parameters [96]. This paper considers the BERT model [27], one of the latest techniques in the PLM trend. BERT is a bidirectional language model trained on big datasets such as Wikipedia to generate contextual representations. In addition, it is commonly fine-tuned with a dense neural network layer for different classification tasks. The fine-tuning step combines the contextual, problem-specific meaning with the pretrained generic meaning and trains the model for a classification task. Figure 2 shows the architecture of a BERT model. BERT uses a bidirectional transformer whose representations are jointly conditioned on both the left and right context in all layers [97]; this differentiates it from models such as Word2Vec and GloVe, which build a single embedding per word and thereby dismiss contextual differences.
Figure 2

Architecture of the BERT model.

4.2. Pretraining Stage

Weight initialization is an essential point in designing a neural network; neglecting it can mislead the model. The proposed structure has two LSTM networks, two attention mechanisms, and one feedforward neural network, each with its own weights that must be trained. This paper uses an improved ABC algorithm for pretraining the weights.

4.2.1. Mutual Learning-Based ABC

In the standard ABC algorithm, artificial bees randomly select a food source position and change it to create a new position. If the fitness value of the new solution is better, it replaces the current solution; otherwise, no change is applied. In other words, in a D-dimensional optimization problem, one dimension is randomly selected, its value is changed, and the better outcome is kept in each iteration. Based on (6), the newly generated solution v_i depends on only two positions, x_{ij} and x_{kj}, making the food source v_i uncontrollable: it is sometimes larger and sometimes smaller than the current food source. In the ABC algorithm, a food source with a higher fitness value is desired. To steer candidate food sources toward higher fitness, we incorporate the fitness information acquired by mutual learning between the current and neighboring food sources:

v_{ij} = x_{ij} + φ_{ij} (x_{ij} − x_{kj}),  if Fit_i ≥ Fit_k,
v_{ij} = x_{kj} + φ_{ij} (x_{kj} − x_{ij}),  otherwise,

where Fit_i and Fit_k indicate the fitness values of the current food source and the neighboring food source, respectively, and φ_{ij} is a uniform random number in the interval [0, F], in which F is a nonnegative constant named the mutual learning factor. As can be seen, the value of v_{ij} depends on both the positions and the fitness values of the two sources. By comparing the current and neighboring food sources, new solutions move toward the fitter source: if the current food source has higher fitness, the candidate solution stays close to it; otherwise, it tends toward the neighboring source. This learning strategy allows making better candidate solutions. The parameter F plays an essential role in balancing the perturbation between related food positions, and it must be nonnegative to ensure movement toward a better solution. As F increases from zero to a particular value, the perturbation on the corresponding position decreases, meaning that the fitness value of the new food source stays close to the higher fitness; an overly large value of F, however, weakens both exploitation and exploration.
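A sketch of the mutual-learning candidate generation follows. The piecewise anchoring at the fitter of the two parents and the default F = 1 reflect our reading of the description in this subsection; the paper's exact constants may differ.

```python
import random

def mutual_learning_candidate(x_i, fit_i, x_k, fit_k, j, F=1.0, rng=random):
    """Candidate food source under a mutual-learning rule: the
    perturbation on dimension j is anchored at whichever of the two
    parents has the higher fitness, so the candidate drifts toward
    the better solution (a sketch; F = 1.0 is an assumed value)."""
    phi = rng.uniform(0, F)          # mutual learning factor draw in [0, F]
    v = x_i[:]
    if fit_i >= fit_k:               # current source is fitter: anchor at x_i
        v[j] = x_i[j] + phi * (x_i[j] - x_k[j])
    else:                            # neighbour is fitter: anchor at x_k
        v[j] = x_k[j] + phi * (x_k[j] - x_i[j])
    return v

rng = random.Random(0)
# current source (fitness 0.9) is fitter than its neighbour (fitness 0.4)
v = mutual_learning_candidate([1.0, 2.0], 0.9, [3.0, 4.0], 0.4, j=0, rng=rng)
```

Unlike the standard update, the random factor is nonnegative, so the direction of the perturbation is fixed by the fitness comparison rather than left to chance.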

4.2.2. Encoding Strategy

Encoding means arranging the weights in a vector, which is treated as a bee's position in ABC. Choosing the right layout is a challenging task; we designed the encoding strategy after several experiments. Figure 3 shows an example of the encoding for two LSTMs, two attention mechanisms, and a two-layer feedforward network. Note that all weight matrices are stored row-wise.
Figure 3

Placement of weights in a vector.
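The row-wise encoding and its inverse can be sketched as below; `encode` and `decode` are hypothetical helpers, and the toy matrices are far smaller than a real model's.

```python
def encode(weight_matrices):
    """Flatten all weight matrices row by row into a single position
    vector for the ABC search, remembering shapes so the vector can
    be mapped back (a sketch of the row-wise encoding strategy)."""
    flat, shapes = [], []
    for m in weight_matrices:
        shapes.append((len(m), len(m[0])))
        for row in m:                # matrices are stored row-wise
            flat.extend(row)
    return flat, shapes

def decode(flat, shapes):
    """Inverse mapping: rebuild the matrices from the position vector."""
    mats, pos = [], 0
    for rows, cols in shapes:
        mats.append([flat[pos + r * cols:pos + (r + 1) * cols]
                     for r in range(rows)])
        pos += rows * cols
    return mats

w_attn = [[0.1, 0.2], [0.3, 0.4]]    # e.g. a small attention weight matrix
w_ff = [[0.5, 0.6, 0.7]]             # e.g. one feedforward layer row
vec, shapes = encode([w_attn, w_ff])
```

Because the mapping is a bijection, each bee position found by ABC can be written straight back into the network before the BP stage starts.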

4.2.3. Fitness Function

The purpose of the fitness function is to measure the quality of a solution. This paper employs the classification performance on the training set as the fitness function:

fit = (1/T) Σ_{i=1}^{T} 𝟙[y_i = ŷ_i],

where T is the total number of training samples and y_i and ŷ_i are the target and predicted labels for the i-th sample, respectively.
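Assuming the fitness is the fraction of correctly classified training pairs, a sketch would be:

```python
def fitness(y_true, y_pred):
    """Fitness of a candidate weight vector: the fraction of the T
    training pairs it classifies correctly (a sketch consistent with
    the description above; the paper's exact form may differ)."""
    assert len(y_true) == len(y_pred)
    return sum(int(t == p) for t, p in zip(y_true, y_pred)) / len(y_true)

score = fitness([1, 0, 1, 1], [1, 0, 0, 1])  # three of four pairs correct
```

During the pretraining stage, each candidate position vector is decoded into network weights, the model is run over the training pairs, and this score drives the bees' greedy replacement.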

4.3. Classification

Reinforcement learning (RL) [98] is a subfield of machine learning that solves a problem by making successive decisions [99, 100]. Recently, reinforcement learning has achieved excellent results in classification because it can learn valuable features or select high-quality samples from noisy data. In [101], the classification problem was defined as a sequential decision-making process that used several factors to learn the optimal policy; however, the complex interactions between agents and environments somewhat increased the time complexity. Another work [102] proposed a solution for learning relations in noisy text data. The proposed model is divided into two parts: an instance selector and a relational classifier. The instance selector extracts high-quality sentences from the noisy data with the agent's help, while the relational classifier learns from the selected clean data and gives delayed reward feedback to the instance selector. The model thus yields both a better classifier and a higher-quality dataset. The authors in [103-106] applied deep reinforcement learning to learn useful features of the training data and thereby improve the classifier. The work in [107] used reinforcement learning to classify time-series data, designing the reward function and the Markov model. So far, little research has applied reinforcement learning to the classification of imbalanced data, especially in natural language processing. In [108], an ensemble pruning method was developed that picks the best sub-classifiers under the reinforcement learning umbrella; this method was effective only for small data because choosing among classifiers with many subcategories was practically infeasible. This section describes how we apply reinforcement learning to counter imbalanced classification. Overall, the agent receives a sample at each step and classifies it.
After that, the environment returns an immediate reward and the next state to the agent. The environment assigns a positive reward when the agent categorizes the sample correctly; otherwise, the agent receives a negative reward. The agent learns the optimal behavior by maximizing the cumulative reward and can then classify the samples as accurately as possible. Let D = {(x_1, l_1), (x_2, l_2),…, (x_T, l_T)} be the training data, where x_i = (q_i, a_i) is the i-th sample, q_i and a_i are the i-th question and answer that enter the model, respectively, and l_i ∈ {0,1} is the target of the i-th example. We consider the following conditions for the agent.

4.3.1. Policy π

The policy π is a mapping function π : S ⟶ A, where π(s) denotes the action a performed by the agent in state s. In our work, the proposed classification network with weight set θ serves as the policy π.

4.3.2. State s

Each sample of the training dataset is described as a state. The agent takes the first sample x_1 as the initial state s_1 at the start of training. The state s_t at each time step t corresponds to x_t in the training dataset. The order of the samples differs between iterations.

4.3.3. Action a

The action performed by the agent is to predict the class label, so the agent's action corresponds to a label in the training dataset. The recommended model is a binary classifier, a_t ∈ {0,1}, where one and zero denote the minority and majority classes, respectively: a relevant question-answer pair is labeled one, and an irrelevant pair is labeled zero.

4.3.4. Reward r

The agent receives a positive reward if the sample is classified correctly and a negative reward otherwise. Since minority-class instances are more critical because of their small number, the algorithm weights the reward of the minority class more heavily. The reward function is described as follows:

r(s_t, a_t, l_t) = +1 if a_t = l_t and x_t ∈ D_min; −1 if a_t ≠ l_t and x_t ∈ D_min; +λ if a_t = l_t and x_t ∈ D_maj; −λ if a_t ≠ l_t and x_t ∈ D_maj,

where λ ∈ [0,1], D_min and D_maj are the minority and majority classes, respectively, and l_t is the label of the sample x_t. The reward magnitude acts as the cost of predicting the label: according to this relation, when λ < 1, the cost associated with the minority class is higher. If the distribution of the classes is balanced, λ = 1 and the prediction cost of all classes is the same. We examine different values of λ in our experiments.
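Following this definition (and the +1/−1 versus +λ/−λ values stated in Section 5.5.1), the reward can be written compactly; the function name and signature are ours.

```python
def reward(action, label, is_minority, lam=0.5):
    """Reward of Section 4.3.4: +1/-1 for minority-class samples and
    +lam/-lam for majority-class ones, with lam in [0, 1]."""
    r = 1.0 if is_minority else lam
    return r if action == label else -r
```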

4.3.5. Terminal E

An episode is a transition trajectory from the initial state to the terminal state, {(s_1, a_1, l_1), (s_2, a_2, l_2),…, (s_T, a_T, l_T)}. An episode finishes when all instances in the training data have been classified or when the agent misclassifies an instance from the minority class.

4.3.6. Transition Probability P

The model's transition probability, p(s_{t+1} | s_t, a_t), is deterministic: the agent moves from state s_t to state s_{t+1} according to the order of instances in the dataset. In the proposed model, the policy π takes the input sample and computes its label probability. The agent aims to classify the input sample as accurately as possible, and its best performance is achieved when it maximizes the cumulative reward

R_t = Σ_k γ^k r_{t+k}. (14)

Equation (14) is called the return function, the total accumulated reward from time t, with discount factor γ ∈ (0,1], until the agent stops moving in the search space. The action value Q in RL expresses the expected return for action a in state s:

Q^π(s, a) = E[R_t | s_t = s, a_t = a]. (15)

Equation (15) can be expanded according to the Bellman equation [109]:

Q^π(s, a) = E[r_t + γ Q^π(s_{t+1}, a_{t+1}) | s_t = s, a_t = a]. (16)

By maximizing the function Q under policy π, we maximize the cumulative reward, namely Q*. The optimal policy π*, obtained under Q* and performing best for our model, is

π*(s) = argmax_a Q*(s, a). (17)

By combining (16) and (17), Q* is computed as

Q*(s, a) = E[r_t + γ max_{a′} Q*(s′, a′)]. (18)

For low-dimensional problems, the values of Q can be collected in a table and the optimal value read from the recorded entries. However, Q can no longer be solved this way when the problem dimensions are continuous. To solve this, a deep Q-learning algorithm is adopted that models Q with a deep neural network. To that end, the tuple (s, a, r, s′) obtained from (18) is stored in a replay memory M. The agent randomly selects a mini-batch B of transitions from M and runs gradient descent on the deep Q-network with the loss function

L(θ) = E_{(s, a, r, s′) ∈ B}[(y − Q(s, a; θ))²], (19)

where y is the target for the Q function, formulated as

y = r + γ max_{a′} Q(s′, a′; θ), (20)

in which s′ denotes the state following s, and a′ is the action executed in state s′.
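A minimal PyTorch sketch of this deep Q-learning update, assuming a generic Q-network and a mini-batch laid out as tensors (s, a, r, s′, done); the names and architecture are illustrative, not the paper's exact implementation.

```python
import torch

def dqn_loss(policy_net, target_net, batch, gamma=0.9):
    """Temporal-difference loss in the spirit of (19)-(20):
    y = r + gamma * max_a' Q(s', a'), loss = mean (y - Q(s, a))^2.
    `batch` is a tuple of tensors (s, a, r, s_next, done) drawn from replay memory M."""
    s, a, r, s_next, done = batch
    q = policy_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a; theta)
    with torch.no_grad():
        q_next = target_net(s_next).max(dim=1).values        # max_a' Q(s', a')
        y = r + gamma * q_next * (1.0 - done)                # no bootstrap past terminal states
    return torch.nn.functional.mse_loss(q, y)
```

In practice this loss is minimized over random mini-batches B sampled from M, which decorrelates consecutive transitions.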

4.4. Overall Algorithm

We design the simulation environment according to the definitions above. The network architecture of the policy depends largely on the complexity and number of training examples: the network input matches the structure of the training samples, and the output size equals the number of classes. The general training algorithm of the model is shown in Algorithm 2. First, the initial weights of the policy π are set using the improved ABC algorithm, and the agent then continues the training process until the optimal policy is reached. Actions are chosen according to the greedy policy, and each selected action is evaluated by Algorithm 3. The algorithm is repeated E times, where E in this paper is 15,000. At each step, the policy network weights are stored.
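The inner loop of one episode, including the early-termination rule of Section 4.3.5, can be sketched as follows; choose_action and reward_fn are placeholders for the policy network and the reward of Section 4.3.4.

```python
import random

def run_episode(data, choose_action, reward_fn):
    """One training episode: the agent classifies samples in sequence and the
    episode ends early when it misclassifies a minority-class sample.
    `data` holds (sample, label, is_minority) triples."""
    random.shuffle(data)                       # sample order differs per episode
    total = 0.0
    for x, label, is_minority in data:
        action = choose_action(x)
        total += reward_fn(action, label, is_minority)
        if is_minority and action != label:    # terminal condition of Section 4.3.5
            break
    return total
```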

5. Results

5.1. Datasets

A dataset with many negative pairs is a good testbed for the proposed system. We run our experiments on three datasets, LegalQA, TrecQA, and WikiQA, which are widely used by researchers. All three datasets have more negative than positive pairs. The statistical information of the datasets is shown in Table 1:
Table 1

Statistical information of LegalQA, TrecQA, and WikiQA datasets.

Dataset | # questions (TRAIN/DEV/TEST) | # QA pairs (TRAIN/DEV/TEST) | % correct (TRAIN/DEV/TEST)
LegalQA | 10,526/1,593/3,035 | 100,590/11,965/26,913 | 21.8/24.4/22.9
TrecQA | 1,229/82/100 | 53,417/1,148/1,517 | 12.0/19.3/18.7
WikiQA | 873/126/243 | 20,360/1,130/2,352 | 12.0/12.4/12.5

“% correct” means the proportion of matched QA pairs.

TrecQA [110] is derived from TREC track data. Yao et al. [10] compiled a complete version of the positive and negative pair set. Two training sets, TRAIN and TRAIN-ALL, are available in this dataset. The correctness of the answers in TRAIN-ALL is checked automatically by matching pairs with regular expressions, while all answers in the TRAIN, DEV, and TEST data were judged manually. We employ the TRAIN-ALL data to train our model. LegalQA [111] is a Chinese dataset of legal consultative questions collected from a Chinese association, in which users' online questions have been answered by licensed lawyers. LegalQA includes four fields: question subject, question body, answer, and label. The positive pairs are provided directly online as ground truth. WikiQA [112] is an open-domain QA dataset in which each question is linked to a Wikipedia page assumed to be the topic of the question. To eliminate answer-sentence bias, all sentences in the summary section of the page are considered candidate answers.

5.2. Evaluation Metrics

According to previous research, MAP and MRR are the most common metrics for evaluating answer-selection tasks [77]. MAP measures the quality of the whole ranked answer list, whereas MRR considers only the position of the first correct answer. MAP (mean average precision) calculates the mean of the average precision over the ranking results:

MAP = (1/|Q|) Σ_{i=1}^{|Q|} (1/n_i) Σ_{j=1}^{n_i} Precision(R_{ij}),

where Q denotes the set of questions, n_i is the number of correct answers to the i-th question, and R_{ij} is the set of ranked results from the best result down to the j-th correct answer. MRR (mean reciprocal rank) evaluates the model according to the position of the first correct answer:

MRR = (1/|Q|) Σ_{i=1}^{|Q|} 1/r_i,

where r_i indicates the position of the first matching answer for the i-th question.
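Both metrics are easy to compute from per-question relevance lists (best-ranked answer first); the sketch below uses our own helper names.

```python
def mean_reciprocal_rank(rankings):
    """MRR: mean of 1/rank of the first correct answer per question.
    `rankings` is a list of per-question relevance lists, best-ranked first."""
    total = 0.0
    for rels in rankings:
        for rank, rel in enumerate(rels, start=1):
            if rel:
                total += 1.0 / rank
                break
    return total / len(rankings)

def mean_average_precision(rankings):
    """MAP: mean over questions of the average precision at each correct answer."""
    total = 0.0
    for rels in rankings:
        hits, ap = 0, 0.0
        for rank, rel in enumerate(rels, start=1):
            if rel:
                hits += 1
                ap += hits / rank        # precision at this correct answer
        total += ap / max(hits, 1)
    return total / len(rankings)
```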

5.3. Baseline Methods

We evaluate our RLAS-BIABC model against several state-of-the-art methods for answer selection. The details of these methods follow. KABLSTM [113] is a knowledge-aware method based on attentive BLSTM networks that uses knowledge graphs (KGs) to learn the representations of questions and answers. EATS [75] adopts an RNN to measure the similarity between the QA pair: it first replaces each named entity with a specific word, computes sentence representation vectors with an attention mechanism, and feeds these vectors into a feedforward network whose last layer calculates the similarity with a sigmoid function. AM-BLSTM [114] uses two separate LSTM networks for the question and answer; the resulting embeddings are combined and fed into a multilayer perceptron (MLP) for classification, and traditional techniques, such as per-class penalties, are employed to counter imbalanced classification. BERT-Base [115] introduces a search-engine and transformer-based method for selecting answers, adopting simple models such as Jaccard similarity and compare-aggregate to rank the answers to a question. DRCN [116] offers an architecture based on a densely connected recurrent and co-attentive network in which hidden features are maintained up to the top layer; connections are made with the attention mechanism to better preserve information, and an autoencoder is adopted to reduce the volume of information. P-CNN [117] introduces a positional CNN for text matching that considers positional information at the word, phrase, and sentence levels. DARCNN [118] combines BLSTM, self-attention, cross-attention, and CNN to find the global and local features of the question and candidate answer, leading to better semantic modeling; finally, it uses an MLP to assign a score to each question-answer pair.
DASL [119] proposes a model with a Bayesian neural network (BNN) to effectively optimize the loss in the ranking-learning process; the paper also studies how to combine active learning and self-paced learning for model training. IKAAS [120] applies an interactive knowledge-enhanced attention network for AS that extracts rich features of question and answer knowledge at several levels, with attention and self-attention networks to learn the semantic features of sentences.

5.4. Details of Implementation

In this work, Python and PyTorch were used for the implementation, with Jupyter for the project code. We also use NLTK, a library that provides classes and methods for a wide range of natural language processing operations in Python. We use a two-layer BLSTM. Moreover, because the vectors from the two networks are concatenated, we apply batch normalization before the data enters the feedforward neural network. Table 2 lists the values of the other parameters.
Table 2

The parameters of the model.

Parameter | Value
Batch size | 128
Embedding dim | 60
Max sentence length | 80
Activation function (LSTM and dense) | ReLU
Dense hidden layers | 8
Our experiments ran on a 64-bit Windows system with 64 GB of RAM and a GPU. The best models for LegalQA, TrecQA, and WikiQA were obtained after 50, 60, and 100 epochs, respectively, and the whole training process took 5, 20, and 60 hours for the three datasets.

5.5. Experimental Results

Because heuristic algorithms are stochastic, we repeated all experiments 10 times. Quantitative results for the three datasets are given in Table 3. In addition to comparing the proposed method with state-of-the-art algorithms, we evaluate the effectiveness of the ABC and RL components with three ablations: AS + random weight, AS-BIABC, and RLAS. AS + random weight uses only random initial weights; AS-BIABC and RLAS use only the ABC initialization and only RL, respectively. For the LegalQA dataset, the RLAS-BIABC model beats the other models, including IKAAS, on the MAP and MRR criteria, reducing the error by more than 40% and 24% on these two criteria. Comparing RLAS-BIABC with AS-BIABC and RLAS shows an error-rate decrease of about 51%, indicating the importance of the initialization and RL components. For the TrecQA dataset, our algorithm obtains the highest MAP and MRR, followed by the EATS algorithm; the error-improvement rates on this dataset are approximately 30.13% and 21.00% for the MAP and MRR criteria, respectively. On the WikiQA dataset, RLAS-BIABC decreases the classification error by more than 32% and 42% compared to IKAAS and DRCN, respectively.
Table 3

The evaluation results of the proposed model and other models.

Method | LegalQA MAP | LegalQA MRR | TrecQA MAP | TrecQA MRR | WikiQA MAP | WikiQA MRR
KABLSTM [113] | 0.751 | 0.790 | 0.792† | 0.844† | 0.732† | 0.749†
EATS [75] | 0.778 | 0.810 | 0.854† | 0.881† | 0.700† | 0.715†
AM-BLSTM [114] | 0.786 | 0.836 | 0.818 | 0.827 | 0.780 | 0.788
BERT-Base [115] | 0.838 | 0.850 | 0.823 | 0.812 | 0.813† | 0.828†
DRCN [116] | 0.828 | 0.859 | 0.802 | 0.832 | 0.804† | 0.862†
P-CNN [117] | 0.715 | 0.729 | 0.680 | 0.698 | 0.734† | 0.737†
DARCNN [118] | 0.700 | 0.745 | 0.743 | 0.725 | 0.734† | 0.750†
DASL [119] | 0.804 | 0.816 | 0.824 | 0.831 | 0.768 | 0.795
IKAAS [120] | 0.825 | 0.883 | 0.823 | 0.868 | 0.835 | 0.849
AS + random weight | 0.758 ± 0.000 | 0.801 ± 0.001 | 0.796 ± 0.000 | 0.806 ± 0.002 | 0.771 ± 0.002 | 0.792 ± 0.009
AS-BIABC | 0.788 ± 0.012 | 0.815 ± 0.008 | 0.802 ± 0.005 | 0.826 ± 0.002 | 0.803 ± 0.000 | 0.845 ± 0.025
RLAS | 0.855 ± 0.102 | 0.872 ± 0.018 | 0.862 ± 0.014 | 0.883 ± 0.150 | 0.852 ± 0.025 | 0.876 ± 0.026
RLAS-BIABC | 0.895 ± 0.020 | 0.912 ± 0.001 | 0.898 ± 0.015 | 0.906 ± 0.092 | 0.888 ± 0.036 | 0.891 ± 0.017

† indicates that the results are taken from the articles.

Next, we show that the improved ABC is more powerful than the alternatives. For a fair comparison, we fix all parts of our algorithm, including the LSTM networks, the attention mechanisms, and the reinforcement learning, and change only the trainer. We compare our trainer with five conventional algorithms, GDM [121], GDA [122], GDMA [123], OSS [124], and BR [125], and eight metaheuristic algorithms, GWO [126], BAT [127], DA [128], SSA [129], COA [130], HMS [131], WOA [132], and ABC [133]. In all metaheuristic methods, the population size and number of function evaluations are 100 and 3,000, respectively. The remaining parameters of the algorithms are shown in Table 4, and the results of the metaheuristic and conventional algorithms are collected in Table 5. RLAS-BBR and RLAS-BABC performed best across all datasets among the conventional and metaheuristic algorithms, respectively. As expected, the metaheuristic algorithms perform better than the conventional ones. The improved ABC performs better still: compared to the best of the rest, i.e., the original ABC, it diminishes the error by approximately 16%.
Table 4

Parameter setting of experiments.

Algorithm | Parameter | Value
ABC | Limit | n_e × dimensionality of problem
ABC | n_o | 50% of the colony
ABC | n_e | 50% of the colony
ABC | n_s | 1
GWO | No parameters |
BAT | Constant for loudness update | 0.4
BAT | Constant for emission rate update | 0.6
BAT | Initial pulse emission rate | 0.002
DA | Scaling factor | 0.3
DA | Crossover probability | 0.7
SSA | No parameters |
COA | Discovery rate of alien solutions |
HMS | Number of clusters | 5
HMS | Minimum mental processes | 2
HMS | Maximum mental processes | 5
HMS | C | 1
WOA | B | 1
Table 5

The performance of other methods for initialization.

Method | LegalQA MAP | LegalQA MRR | TrecQA MAP | TrecQA MRR | WikiQA MAP | WikiQA MRR
RLAS-BGDM | 0.796 ± 0.002 | 0.819 ± 0.026 | 0.824 ± 0.093 | 0.836 ± 0.026 | 0.810 ± 0.056 | 0.825 ± 0.136
RLAS-BGDA | 0.783 ± 0.125 | 0.776 ± 0.095 | 0.769 ± 0.025 | 0.786 ± 0.269 | 0.745 ± 0.136 | 0.761 ± 0.002
RLAS-BGDMA | 0.791 ± 0.005 | 0.772 ± 0.103 | 0.796 ± 0.126 | 0.812 ± 0.236 | 0.793 ± 0.026 | 0.793 ± 0.005
RLAS-BOSS | 0.810 ± 0.136 | 0.814 ± 0.004 | 0.853 ± 0.023 | 0.863 ± 0.026 | 0.840 ± 0.027 | 0.855 ± 0.127
RLAS-BBR | 0.842 ± 0.009 | 0.853 ± 0.000 | 0.860 ± 0.036 | 0.878 ± 0.120 | 0.852 ± 0.103 | 0.870 ± 0.035
RLAS-BGWO | 0.771 ± 0.205 | 0.783 ± 0.018 | 0.755 ± 0.072 | 0.781 ± 0.126 | 0.755 ± 0.025 | 0.773 ± 0.026
RLAS-BBAT | 0.862 ± 0.003 | 0.818 ± 0.019 | 0.876 ± 0.093 | 0.880 ± 0.239 | 0.852 ± 0.061 | 0.873 ± 0.082
RLAS-BDA | 0.816 ± 0.072 | 0.829 ± 0.022 | 0.863 ± 0.002 | 0.883 ± 0.056 | 0.836 ± 0.082 | 0.862 ± 0.091
RLAS-BSSA | 0.747 ± 0.029 | 0.769 ± 0.072 | 0.750 ± 0.042 | 0.763 ± 0.025 | 0.746 ± 0.041 | 0.755 ± 0.001
RLAS-BCOA | 0.860 ± 0.085 | 0.889 ± 0.089 | 0.882 ± 0.063 | 0.897 ± 0.237 | 0.872 ± 0.093 | 0.862 ± 0.017
RLAS-BHMS | 0.849 ± 0.002 | 0.880 ± 0.123 | 0.879 ± 0.090 | 0.893 ± 0.036 | 0.840 ± 0.100 | 0.870 ± 0.009
RLAS-BWOA | 0.752 ± 0.012 | 0.753 ± 0.027 | 0.769 ± 0.058 | 0.789 ± 0.085 | 0.731 ± 0.000 | 0.760 ± 0.018
RLAS-BABC | 0.875 ± 0.004 | 0.906 ± 0.021 | 0.888 ± 0.046 | 0.900 ± 0.082 | 0.878 ± 0.016 | 0.889 ± 0.023

5.5.1. The Effect of the Reward Value of Majority Class

The environment helps the agent achieve its goal through the reward function. This article considers two different rewards for the minority and majority classes: the minority-class reward is set to +1/−1, while the majority-class reward is set to +λ/−λ. To investigate the effect of λ on the proposed model, we test it with values in the set {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1}. The results of this experiment for the three datasets are shown in Figure 4. For the LegalQA dataset, performance trends upward for λ in [0, 0.4] and downward for values in (0.4, 1]; hence, we fixed λ for this dataset at 0.4. The best value of λ for both the TrecQA and WikiQA datasets is 0.5. Generally, as the dataset grows, the number of negative pairs increases, so λ tends to decrease. For λ = 0, the majority class is ignored entirely, and for λ = 1, both classes are weighted equally.
Figure 4

The process of changing the criteria by modifying the value of λ for the three datasets: (a) LegalQA dataset; (b) TrecQA dataset; (c) WikiQA dataset.

5.5.2. Exploration on Loss Function

Traditional techniques, including manipulating the loss function and data augmentation, can also deal with data imbalance, but they largely depend on the problem at hand. Among them, the loss function plays the more prominent role because it can emphasize the minority class. To check the influence of the loss function on our model, we selected five functions: Weighted Cross-Entropy (WCE) [134], Balanced Cross-Entropy (BCE) [135], Focal Loss (FL) [136], Dice Loss (DL) [137], and Tversky Loss (TL) [138]. The WCE and BCE loss functions weight the positive and negative samples. The FL function is suitable for applications with imbalanced data: it down-weights the contribution of easy examples and lets the model focus on learning hard samples [139]. The evaluation results of these loss functions on the three datasets are shown in Table 6. All the functions reach roughly similar MAP and MRR on the three datasets. As expected, the FL function performs best, improving by about 51.16% over the algorithm with the ordinary loss function, i.e., the RLAS-BABC model.
Table 6

The results of various loss functions on the model.

Model | LegalQA MAP | LegalQA MRR | TrecQA MAP | TrecQA MRR | WikiQA MAP | WikiQA MRR
AS-BIABC + WCE | 0.781 ± 0.002 | 0.819 ± 0.026 | 0.772 ± 0.005 | 0.780 ± 0.145 | 0.795 ± 0.010 | 0.792 ± 0.012
AS-BIABC + BCE | 0.789 ± 0.000 | 0.812 ± 0.120 | 0.786 ± 0.073 | 0.804 ± 0.025 | 0.783 ± 0.074 | 0.814 ± 0.002
AS-BIABC + FL | 0.842 ± 0.048 | 0.838 ± 0.056 | 0.839 ± 0.090 | 0.829 ± 0.012 | 0.832 ± 0.005 | 0.822 ± 0.006
AS-BIABC + DL | 0.838 ± 0.089 | 0.808 ± 0.135 | 0.810 ± 0.074 | 0.770 ± 0.203 | 0.806 ± 0.082 | 0.804 ± 0.120
AS-BIABC + TL | 0.785 ± 0.096 | 0.783 ± 0.582 | 0.821 ± 0.006 | 0.800 ± 0.041 | 0.823 ± 0.018 | 0.799 ± 0.005

5.5.3. Case Study

In this section, we qualitatively evaluate the effectiveness of reinforcement learning in our model. We randomly select a sample from the TrecQA dataset: given the question, "When were the Nobel Prize awards first given?", the top answers are listed in Table 7. The left column presents the model's results without reinforcement learning, and the right column shows the results with it. The model without reinforcement learning is more inclined to assign high scores to negative responses, whereas the model with reinforcement learning assigns the highest scores to correct answers to the question.
Table 7

For the question “When were the Nobel Prize awards first given?” the table shows the top-5 answers from the model with and without reinforcement learning.

Rank | Ranked answers w/o RL | Ranked answers by RL
1 | The first awards ceremony took place in 1901 | The award to Doctors Without Borders echoes the first Nobel Peace Prize of the century, given in 1901, of which the founder of the Red Cross was a corecipient
2 | The five-member awards committee works in secrecy during its five or six meetings a year and refuses to comment on or release candidates' names | The prizes, first awarded in 1901, are always presented on Dec 10, anniversary of Nobel's death
3 | In 1901, Sweden bestowed the inaugural Nobel Prize in Medicine on a Berliner, Emil von Behring, for his serum against diphtheria | Among them is the winner of the first prize in 1901, Sully Prudhomme
4 | The prizes, first awarded in 1901, are always presented on Dec 10, anniversary of Nobel's death | The first awards ceremony took place in 1901
5 | "We all know that there are still major problems to be faced," said awards committee chairman Francis Sejersted | A day after the announcement, for example, critic Norman Holmes Pearson grumbled that this woman, Pearl Buck, was given the Nobel Prize in Literature

“In 1901” is the ground truth answer, and italicized words are terms that appear in the question.

5.5.4. Exploration on Word Embedding

Word embedding is one of the main components of deep learning models: the input is interpreted as a vector, and a poor embedding can mislead the model. This study uses the BERT model, one of the latest embedding models, as the word embedding. To compare other word embeddings on our model, we employ five alternatives: One-Hot encoding [140], CBOW [141], Skip-gram [93], GloVe [94], and FastText [95]. One-Hot encoding converts categorical variables into a form that can be supplied to deep learning algorithms: it creates a new binary feature for each class and assigns a value of 1 to the feature corresponding to each sample's original class. CBOW and Skip-gram are models that use neural networks to map a word to its embedding vector. The GloVe word embedding is an unsupervised learning algorithm trained on a corpus's aggregated global word-word cooccurrence statistics. FastText is an extension of the Skip-gram model: instead of learning vectors for whole words, it represents each word as a bag of character n-grams. The results of this experiment are shown in Table 8. As expected, One-Hot encoding performs worst among all the word embeddings; even on the TrecQA dataset, where it performs best, the improvement rates over it for the MAP and MRR criteria are about 64.70% and 72.91%, respectively. CBOW and Skip-gram perform almost identically on all three datasets, owing to their similar architecture, and both are superior to GloVe. FastText is the best of these word embeddings but still falls short of BERT, which decreases errors by more than 11%, 10%, and 19% compared to FastText for the WikiQA, TrecQA, and LegalQA datasets, respectively.
Table 8

The results of various word embeddings on the model.

Word embedding | LegalQA MAP | LegalQA MRR | TrecQA MAP | TrecQA MRR | WikiQA MAP | WikiQA MRR
One-Hot encoding | 0.679 ± 0.042 | 0.569 ± 0.002 | 0.711 ± 0.120 | 0.653 ± 0.081 | 0.649 ± 0.089 | 0.589 ± 0.093
CBOW | 0.869 ± 0.006 | 0.843 ± 0.000 | 0.889 ± 0.078 | 0.869 ± 0.120 | 0.836 ± 0.012 | 0.828 ± 0.010
Skip-gram | 0.874 ± 0.052 | 0.872 ± 0.075 | 0.878 ± 0.030 | 0.858 ± 0.002 | 0.847 ± 0.014 | 0.853 ± 0.014
GloVe | 0.812 ± 0.027 | 0.853 ± 0.082 | 0.795 ± 0.140 | 0.821 ± 0.074 | 0.782 ± 0.039 | 0.806 ± 0.009
FastText | 0.881 ± 0.002 | 0.901 ± 0.041 | 0.886 ± 0.093 | 0.876 ± 0.002 | 0.861 ± 0.099 | 0.870 ± 0.000

5.5.5. The Effect of the Parameter F on the Model

To examine the effect of the parameter F in (10) on the algorithm's performance, F is set to 0.5, 1, 1.5, 2, 2.5, 3.5, 4, 4.5, and 5. The results for the three datasets are shown in Figure 5. For the LegalQA dataset, performance improves as F rises from 0 to 2 and then degrades as F increases from 2 to 5; that is, a value of F that is too small or too large weakens the algorithm. For the TrecQA and WikiQA datasets, the algorithm performs best with F equal to 1.5 and 2, respectively.
Figure 5

The process of changing the criteria by modifying the value of F for the three datasets: (a) LegalQA dataset; (b) TrecQA dataset; (c) WikiQA dataset.

6. Conclusion and Future Works

This paper presented an approach called RLAS-BIABC for AS, built on an attention mechanism-based LSTM and the BERT word embedding, combined with an improved ABC algorithm for pretraining and a reinforcement learning-based algorithm for training. The model classifies pairs into two classes, positive and negative, where a positive pair includes a question and a genuine answer and a negative pair contains a question and a fake answer. Because of the many negative pairs in the dataset, the task becomes an imbalanced classification problem. To overcome this, we formulated the model as a sequential decision-making process: at each step, the environment assigns a reward to each classification act, with the minority class receiving a higher reward, and this continues until the agent misclassifies a minority-class sample or the episode ends. Initial weighting is another essential characteristic of deep models, as poor initialization can cause the model to get stuck in a local optimum. To address this, we initialized the policy weights with the improved ABC algorithm, proposing a mutual learning technique that steers the produced candidate food source toward the fitter of two individuals, selected via a mutual learning factor. We designed experiments to examine the factors influencing the model; the analyses demonstrate the power of reinforcement learning, BERT, and the improved ABC algorithm for answer selection. In future work, besides improving the proposed model, we will examine the effectiveness of the proposed classifier on other NLP applications. Another direction is a model for generating the answer to a question; for this, we will focus on GANs, which today have applications in almost every field, including NLP tasks.