
A clinical decision support system for femoral peripheral arterial disease treatment.

Alkın Yurtkuran1, Mustafa Tok2, Erdal Emel1.   

Abstract

One of the major challenges of providing reliable healthcare services is to diagnose and treat diseases in an accurate and timely manner. Recently, many researchers have successfully used artificial neural networks as a diagnostic assessment tool. In this study, such an assessment tool has been developed and validated for the treatment of femoral peripheral arterial disease using a radial basis function neural network (RBFNN). A data set for training the RBFNN has been prepared by analyzing records of patients who had been treated by the thoracic and cardiovascular surgery clinic of a university hospital. The data set includes 186 patient records having 16 characteristic features associated with a binary treatment decision, namely, a medical or a surgical one. The K-means clustering algorithm has been used to determine the parameters of the radial basis functions, and the number of hidden nodes of the RBFNN is determined experimentally. For performance evaluation, the proposed RBFNN was compared to three different multilayer perceptron models having Pareto optimal hidden layer combinations using various performance indicators. The results of the comparison indicate that the RBFNN can be used as an effective assessment tool for femoral peripheral arterial disease treatment.


Year:  2013        PMID: 24382983      PMCID: PMC3871503          DOI: 10.1155/2013/898041

Source DB:  PubMed          Journal:  Comput Math Methods Med        ISSN: 1748-670X            Impact factor:   2.238


1. Introduction

Various engineering techniques have been adapted to health care delivery systems, and the quality of health care services has been improved using these artificial intelligence techniques. It has been shown that introducing machine learning tools into clinical decision support systems can increase decision accuracy and decrease both costs and the dependency on highly qualified specialists. Since artificial neural networks (ANNs) can easily be trained to identify patterns and extract rules from a small number of cases, they are widely used as a powerful tool for clinical decision support systems [1]. Peripheral arterial disease (PAD) is a common pathological condition worldwide. It is a disease in which plaque, which is made up of fat, cholesterol, calcium, fibrous tissue, and other substances in the blood, builds up in the arteries that carry blood to the head, organs, and limbs. PAD affects more than 30 million people worldwide, and while it can strike anyone, it is most common in people over age 65 [2]. PAD is associated with a significant burden in terms of morbidity and mortality due to claudication, rest pain, ulcerations, and amputations. In the case of mild or moderate peripheral arterial disease, a medical or conservative therapy can be chosen, but the gold-standard treatment of severe PAD is surgical or endovascular revascularization [2]. However, up to 30% of patients are not candidates for such interventions due to excessive surgical risks or unfavorable vascular involvement. The presence of diffuse, multiple, and distal arterial stenoses sometimes renders successful revascularization impossible. These "no-option" patients are left with medical therapy, which may slow the progression of the disease at best [3].
It is very difficult to decide whether surgical or medical treatment is the best option, since PAD treatment depends on many factors, such as anatomic location, symptoms, comorbidities, and risks related to cardiac condition or anesthesia. Cardiovascular surgeons should choose the most appropriate treatment, and most of the time the decision rests on the surgeon's own experience. Cardiovascular specialists widely use the Trans-Atlantic Inter-Society Consensus (TASC II) classification of PAD, which is based on the anatomic locations of lesions [3]. In this work, we present a clinical treatment decision support system using a radial basis function neural network (RBFNN) in order to help doctors make an accurate treatment decision for patients having femoral PAD. The proposed RBFNN was compared to three different multilayer perceptron (MLP) networks, and the results indicate that the proposed RBFNN outperforms the MLP networks. Based on our extensive literature review, no previous study has presented a decision support system for the clinical treatment of femoral PAD. The remainder of this paper is organized as follows. Section 2 summarizes previous studies; Section 3 covers the clinical data and the input and output features of the proposed model. Section 4 gives a brief introduction to the RBFNN and the experiments. Related results are given in Section 5, and finally Section 6 concludes the paper.

2. Related Work

In recent years, there have been many studies focusing on decision support systems to improve the accuracy of decisions for the diagnosis and treatment of diseases. Such decision support systems frequently depend on ANN-based predictive algorithms that are built upon previous patient records. To cite a few significant works, Mehrabi et al. [4] used an MLP network and a RBFNN to classify chronic obstructive pulmonary disease (COPD) and congestive heart failure (CHF). They used Bayesian regularization to enhance the performance of the MLP network. Moreover, they integrated the K-means clustering algorithm and k-nearest neighbors to define the centers of the hidden neurons and to identify the spread, respectively. They showed that both COPD and CHF can be classified accurately using the MLP networks and the RBFNN. Subashini et al. [5] applied a polynomial-kernel support vector machine (SVM) and a RBFNN to assess the diagnostic accuracy of cytological data obtained from the Wisconsin breast cancer database. They showed that the RBFNN outperformed the SVM in accurately classifying the tumors. Lewenstein [6] used a RBFNN as a tool for the diagnosis of coronary artery disease. The research was performed using 776 data records, and over 90% classification accuracy was achieved. A short review of recent studies reveals numerous uses of ANN techniques for the diagnosis of diabetes mellitus [7-12], chest diseases [13-17], Parkinson's disease [18, 19], breast cancer [5, 20-23], thyroid disease [24-26], and cardiovascular diseases [4, 6, 27-36]. Broomhead and Lowe [37] were the first to use radial basis functions in designing neural networks. In recent years, the RBFNN has attracted extensive research interest [38-42]. Wu et al. [19] used a RBFNN to accurately identify Parkinson's disease. The data for training the RBFNN were obtained by means of deep brain electrodes implanted into a Parkinson's disease patient's brain. The output of the study indicated that RBFNNs could be successfully designed and used to identify tremor onset patterns even for a small number of spikes.

3. The Clinical Data

The input data set for training the ANNs has been obtained from discharge reports dated from 2008 to 2012 within the patient records of the department of thoracic and cardiovascular surgery of a university hospital. 186 records, comprising 114 male patients aged 53 ± 7 years and 72 female patients aged 58 ± 5 years, have been analyzed. Each patient's report contains one final treatment decision, which is taken here as the output class value of the corresponding input data set, as follows. Class 1: medical treatment decision (89 patients). Class 2: surgery or endovascular treatment decision (97 patients). All samples have a total of 16 features, which were determined through consultations with cardiologists, surgeons, and anesthetists. The features, output classes, and their normalized values are given in Table 1. Descriptions of the selected features are summarized in Tables 2–5.
Table 1

Features and their normalized values.

Feature | Comment
Age (years) | Divided by 100
Sex | Female = 0, male = 1
Fontaine stage | Stage I = 0, stage II-a = 0, stage II-b = 2, stage III = 3, stage IV = 4 (see Table 4)
Lesion type (TASC classification) | Type A = 0, type B = 1, type C = 3, type D = 4 (see Table 5)
Sensitivity to anesthesia | Low = 0, medium-high = 1
Distal bed | Absence = 0, presence = 1
Embolism (percent) | Divided by 100
LDL cholesterol level | Normal = 0, near/above normal = 1, borderline high = 2, high = 3, very high = 4 (see Table 3)
Smoking | Absence = 0, presence = 1
Ex-smoker | Absence = 0, presence = 1
Hypertension | Absence = 0, presence = 1
Blood pressure | Normal = 0, pre-HTN = 1, stage I = 2, stage II = 3 (see Table 2)
Diabetes mellitus | Absence = 0, presence = 1
Other peripheral disease history | Absence = 0, presence = 1
Family history | Absence = 0, presence = 1
Current medical treatment | Absence = 0, presence = 1
Treatment decision | Medical treatment = −1, operation = 1
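As a concrete illustration of the normalization scheme in Table 1, the sketch below encodes a subset of features from one hypothetical patient record. The field names, the sample record, and the `encode` helper are all invented for illustration; the study itself used a C++ implementation and the full 16-feature vector.

```python
# Hypothetical illustration of the Table 1 normalization for a subset of
# features; field names and the sample record are inventions, not the
# study's actual data format.

def encode(rec):
    """Map a raw patient record (dict) to normalized feature values."""
    binary = {"absence": 0.0, "presence": 1.0}
    return [
        rec["age"] / 100.0,                     # Age (years): divided by 100
        1.0 if rec["sex"] == "male" else 0.0,   # Sex: female = 0, male = 1
        rec["embolism_percent"] / 100.0,        # Embolism (percent): divided by 100
        binary[rec["smoking"]],                 # Smoking: absence = 0, presence = 1
        binary[rec["diabetes"]],                # Diabetes mellitus
    ]

example = {"age": 53, "sex": "male", "embolism_percent": 20,
           "smoking": "presence", "diabetes": "absence"}
features = encode(example)   # [0.53, 1.0, 0.2, 1.0, 0.0]
```

The categorical features with more than two levels (Fontaine stage, TASC lesion type, LDL level, blood pressure) would be mapped to their integer codes from Table 1 in the same way.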
Table 2

Blood pressure level categories in adults.

Classification | Systolic pressure (mm Hg) | Diastolic pressure (mm Hg)
Normal | <120 | <80
Prehypertension | 120–139 | 80–89
Stage I | 140–159 | 90–99
Stage II | >160 | >100
Table 5

TASC classification [3].

4. Radial Basis Function Neural Network (RBFNN)

The RBFNN [43] has a feed-forward architecture with three layers: (i) an input layer, (ii) a hidden layer, and (iii) an output layer. A typical RBFNN is shown in Figure 1. The input layer of m nodes accepts an m-dimensional feature vector as input. The hidden layer, which is fully connected to the input layer, is composed of n radial basis function neurons. Each hidden neuron applies a radial basis function that performs a nonlinear mapping from the feature space to the output space. The output layer consists of c neurons, each of which calculates a weighted sum of the outputs of the hidden layer nodes.
Figure 1

An example of RBFNN.

The most commonly employed radial basis function for hidden layers is the Gaussian function [44, 45], which is determined by mean vectors (cluster centers) μ_j and covariance matrices C_j, where j = 1,…, n. The covariance matrices are assumed to be of the form C_j = σ_j² I. Let Φ_j(x) be the Gaussian function representing the jth hidden node, defined as

Φ_j(x) = exp(−‖x − μ_j‖² / (2σ_j²)),  (1)

where x = [x_1, x_2,…, x_m] is the input feature vector, and μ_j = [μ_{j1}, μ_{j2},…, μ_{jm}] and σ_j² are the mean vector and the variance of the jth neuron, respectively. The kth output of the RBFNN is computed according to

y_k = Σ_{j=1}^{n} w_{kj} Φ_j(x) + w_{k0}.  (2)

In (2), w_k = [w_{k1}, w_{k2},…, w_{kn}] is the vector of weights between the hidden and output layers and w_{k0} is the bias, for k = 1,…, c. In order to design a RBFNN, the mean vectors (μ_j) representing the locations of the cluster centers and the variances (σ_j²) of the hidden neurons have to be calculated first. The K-means clustering algorithm, given as follows, is used to determine the mean vectors.

Step 1

Initialize by choosing m random values for each of the n hidden nodes (μ_{ij}, i = 1,…, m, j = 1,…, n) as initial cluster centers.

Step 2

Assign a randomly selected input data sample x to the nearest cluster center μ_j using the Euclidean norm.

Step 3

Recalculate μ_j including the assigned sample.

Step 4

Repeat Steps 2 and 3 until the mean vectors no longer change (μ_j^new ≅ μ_j^old).

The number of hidden neurons n, which should be determined experimentally, strongly affects the performance of the RBFNN. Generally, it is assumed that the variances of all clusters are identical and equal to σ², which is calculated as

σ = η · d / √(2n),  (3)

where d is the maximum distance between cluster centers and η is an empirical scale factor that controls the smoothness of the nonlinear mapping function. Once the locations of the centers and their variances are determined, the weights between the hidden layer and the output layer can be calculated. Equation (2) may be rewritten in vector form as

Y = HW.  (4)

In (4), Y is the (n × 1)-dimensional output vector, H is the (n × (m + 1))-dimensional hidden neuron matrix, and W is the ((m + 1) × 1)-dimensional weight vector. To reduce the computational effort, W is calculated directly via the least-squares pseudoinverse as

W = H⁺Y = (HᵀH)⁻¹HᵀY.  (5)
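The training pipeline described above (K-means for the centers, a shared Gaussian width derived from the maximum inter-center distance, and a least-squares pseudoinverse for the output weights) can be sketched as follows. This is a minimal NumPy illustration under our own naming (`kmeans`, `fit_rbfnn`); the authors' implementation was in C++, and the exact width formula used here is an assumption.

```python
import numpy as np

def kmeans(X, n_clusters, n_iter=100, seed=0):
    """Plain K-means (Steps 1-4 above): returns the cluster centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=n_clusters, replace=False)]
    for _ in range(n_iter):
        # Step 2: assign every sample to its nearest center (Euclidean norm)
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        # Step 3: recompute each center as the mean of its assigned samples
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(n_clusters)])
        if np.allclose(new, centers):   # Step 4: stop when centers settle
            break
        centers = new
    return centers

def fit_rbfnn(X, y, n_hidden, eta=0.6):
    """Train an RBFNN; returns a predict(X) callable."""
    centers = kmeans(X, n_hidden)
    # Shared width from the maximum inter-center distance (a common
    # heuristic; an assumption, not necessarily the paper's exact formula).
    d_max = max(np.linalg.norm(a - b) for a in centers for b in centers)
    sigma = eta * d_max / np.sqrt(2 * n_hidden)

    def hidden(Xq):
        dist2 = ((Xq[:, None, :] - centers[None]) ** 2).sum(-1)
        phi = np.exp(-dist2 / (2 * sigma ** 2))
        return np.hstack([np.ones((len(Xq), 1)), phi])  # prepend bias term w0

    # Output weights from the least-squares pseudoinverse, as in eq. (5)
    W = np.linalg.pinv(hidden(X)) @ y
    return lambda Xq: hidden(Xq) @ W
```

For a binary decision encoded as −1/1 (Table 1), the sign of the network output gives the predicted class.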

5. Experiments

5.1. Measures for Performance Evaluation

In our experiments, in order to evaluate the performance of the proposed RBFNN effectively and accurately, several performance indicators are analyzed: the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity (recall), specificity, positive predictive value (PPV, or precision), negative predictive value (NPV), F-score, and the Youden index [35, 36]. All of these performance indicators are determined using a confusion matrix, which summarizes the results of a binary (true/false) classification in terms of true positive (tp), false positive (fp), false negative (fn), and true negative (tn) counts. A confusion matrix for a binary classification is presented in Table 6. Accuracy assesses the overall effectiveness of the classifier (see (6)). Sensitivity is the ratio of correctly classified samples to all samples in that class (see (7)). Specificity measures the proportion of negatives that are correctly identified (see (8)). PPV is the accuracy within a specified class (see (9)), and NPV is the proportion of cases with negative results that are correctly classified (see (10)). Finally, the F-measure and the Youden index, which are widely used indicators for assessing neural network classification performance, are given in (11) and (12). Another important performance indicator for neural networks is the AUC. The receiver operating characteristic curve is constructed by plotting sensitivity versus (1 − specificity) for a variety of cutoff points between 0.00 and 1.00. Furthermore, the Hosmer-Lemeshow (H-L) chi-square statistic is used as a numerical indicator of overall calibration.
Table 6

Confusion matrix for binary classification.

Class / classified | As positive | As negative
Positive | tp | fn
Negative | fp | tn
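The indicators referenced in (6)–(12) follow directly from the confusion-matrix counts in Table 6. The helper below restates those standard definitions (the function name and return format are ours):

```python
# Standard classification indicators computed from confusion-matrix counts.

def metrics(tp, fp, fn, tn):
    acc = (tp + tn) / (tp + fp + fn + tn)   # (6) accuracy
    sens = tp / (tp + fn)                   # (7) sensitivity / recall
    spec = tn / (tn + fp)                   # (8) specificity
    ppv = tp / (tp + fp)                    # (9) positive predictive value
    npv = tn / (tn + fn)                    # (10) negative predictive value
    f = 2 * ppv * sens / (ppv + sens)       # (11) F-measure
    youden = sens + spec - 1                # (12) Youden index
    return {"accuracy": acc, "sensitivity": sens, "specificity": spec,
            "ppv": ppv, "npv": npv, "f_score": f, "youden": youden}
```

The AUC and the H-L statistic, by contrast, require the full score distribution rather than a single confusion matrix.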

5.2. Computational Results

Neural networks are prone to overfitting, especially when only a limited amount of data is available. In order to estimate the performance of the neural networks accurately by reducing the bias and the variance of the predicted results, the 10-fold cross-validation method is used in this study. Multifold cross-validation, in which dynamic sets of validation and test data are used, is an efficient technique for avoiding overfitting compared to regularization, early stopping, or data pruning, especially when data are very scarce [43]. In 10-fold cross-validation, a data set is randomly partitioned into 10 subsamples having approximately equal numbers of samples from each class. The RBFNN is trained on nine subsamples and validated on the remaining one. This training and testing process is repeated 10 times by rotation so that each subsample is used exactly once for validation. The mean and standard deviation of the performance indicators for each neural network model are then reported. In this study, as mentioned in Section 4, the cluster center locations for all Gaussian functions, which are employed as radial basis functions, are determined using the K-means clustering algorithm. The network weights of the output layer are determined by the pseudoinverse method (5). Following preliminary tests, the empirical scale factor is set to η = 0.6. For simplicity and ease of calculation, it is assumed that all variances are identical and equal to σ². A program is written in C++ to implement the proposed RBFNN model. The optimal number of hidden nodes for a RBFNN model should be carefully determined as it directly affects the performance of the network. In this study, in order to choose the optimal number of centers for the proposed network, several preliminary experiments are conducted by varying the number of centers stepwise from 2 to 50.
For each case, an average mean square error (MSE) is calculated using the 10-fold cross-validation. Figure 2 shows the MSE values with respect to the number of centers. Referring to Figure 2, the minimum MSE = 0.036 is achieved for 29 clusters and therefore the number of hidden nodes was set to 29.
Figure 2

MSE versus number of clusters for proposed RBFNN.
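The center-selection experiment follows the usual 10-fold pattern: split the data, fit on nine folds, score MSE on the held-out fold, and average. A generic self-contained sketch (function names are ours; `fit` stands in for any model trainer, such as an RBFNN fit with a given number of centers):

```python
import numpy as np

def kfold_splits(n_samples, k=10, seed=0):
    """Shuffle indices and split them into k near-equal folds."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    return np.array_split(idx, k)

def cv_mse(X, y, fit, k=10):
    """Average held-out mean squared error over k folds.

    `fit(X_train, y_train)` must return a predict(X) callable.
    """
    folds = kfold_splits(len(X), k)
    errors = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        predict = fit(X[train], y[train])
        errors.append(np.mean((predict(X[test]) - y[test]) ** 2))
    return float(np.mean(errors))

# Selecting the number of centers would then be, schematically (the
# trainer factory make_rbfnn_fit is hypothetical):
# best_n = min(range(2, 51), key=lambda n: cv_mse(X, y, make_rbfnn_fit(n)))
```

A stratified split (equal class proportions per fold, as the paper uses) would additionally group the indices by class before splitting.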

After attaining the optimal RBFNN, its performance is compared to that of three different Pareto optimal three-layer MLP networks. In our study, the MLP models were generated and implemented using the ANN module of the STATISTICA software (v 11.0) published by StatSoft, Inc. The MLP networks were constructed using STATISTICA's Automated Network Search (ANS) strategy for creating predictive models. The best three MLP networks were retained by the ANS, which tried different numbers of hidden units (1–30), different input/output activation functions (identity, logistic, tanh, and exponential), and different training algorithms, namely Gradient Descent, Broyden-Fletcher-Goldfarb-Shanno (BFGS) (quasi-Newton), the Conjugate Gradient Algorithm (CGA), and the Levenberg-Marquardt algorithm, using a sum-of-squares error function. Moreover, the 10-fold cross-validation technique is used to avoid overfitting and oscillation. The best three MLP networks determined by the ANS are summarized in Table 7. MLP-13 and MLP-23 employ the BFGS algorithm, in which the weights and biases are updated using the Hessian matrix of the performance index at the current values of the weights and biases. BFGS has high memory requirements due to storing the Hessian matrix. On the other hand, MLP-7 utilizes the CGA, a fast training algorithm for MLP networks that proceeds by a series of line searches through error space. In the CGA, the learning rate and momentum are calculated adaptively in each iteration. In the ANS module, the learning rate is calculated by the golden section search rule while the Fletcher-Reeves formula [46] is used for momentum calculations.
Table 7

Selected MLP networks.

Network name | Training algorithm | Hidden activation function | Output activation function | Number of hidden units
MLP-13 | BFGS | tanh | Logistic | 13
MLP-23 | BFGS | Identity | Logistic | 23
MLP-7 | CGA | Logistic | Identity | 7
Table 8 lists the mean performance indicator results obtained using the 10-fold cross-validation method for each network. Considering Table 8, it is noticeable that the mean classification accuracy of the RBFNN (0.950) is better than that of any of the MLP networks (MLP-13 = 0.881, MLP-23 = 0.838, and MLP-7 = 0.800). Prediction capabilities based on AUC show that the proposed RBFNN outperforms all of the MLP networks (RBFNN = 0.949, MLP-13 = 0.873, MLP-23 = 0.839, and MLP-7 = 0.793). The average sensitivity values for the MLP networks are 0.896, 0.835, and 0.816 for MLP-13, MLP-23, and MLP-7, respectively. On the other hand, the proposed RBFNN gives an average sensitivity of 0.953, which indicates that the RBFNN performs better at classifying cases having a positive condition. Based on specificity, the RBFNN (94.8%) is superior to MLP-13 (86.8%), MLP-23 (84.0%), and MLP-7 (78.8%). The F-measure and the Youden index are widely used stand-alone performance indicators for classification studies. The F-measure and Youden index values are 0.947 and 0.901 for the proposed RBFNN, 0.872 and 0.764 for MLP-13, 0.829 and 0.675 for MLP-23, and 0.783 and 0.604 for MLP-7, respectively. The mean PPVs are 0.849, 0.824, 0.753, and 0.942, while the mean NPVs are 0.909, 0.851, 0.843, and 0.958 for MLP-13, MLP-23, MLP-7, and RBFNN, respectively. These findings also show that the RBFNN performs better than the MLP networks. In general, all models were good-fit models based on the H-L statistic (H-L < 12.0).
Table 8

Mean of performance indicators for MLP networks and RBFNN.

Indicator | MLP-13 | MLP-23 | MLP-7 | RBFNN
AUC | 0.873 | 0.839 | 0.793 | 0.949
Cutoff point | 0.443 | 0.542 | 0.392 | 0.510
Accuracy | 0.881 | 0.838 | 0.800 | 0.950
Sensitivity | 0.896 | 0.835 | 0.816 | 0.953
Specificity | 0.868 | 0.840 | 0.788 | 0.948
PPV | 0.849 | 0.824 | 0.753 | 0.942
NPV | 0.909 | 0.851 | 0.843 | 0.958
F-score | 0.872 | 0.829 | 0.783 | 0.947
Youden index | 0.764 | 0.675 | 0.604 | 0.901
H-L | 10.386 | 10.211 | 11.632 | 7.880
In order to make a precise pairwise comparison between networks, two-tailed t tests are employed to assess the statistical significance of the differences between the mean performance indicators of the RBFNN and the MLP networks. Tables 9, 10, and 11 show the results of the statistical tests. The mean, the standard deviation (SD), and the 95% confidence interval (CI) of each result are given in Tables 9–11. In the last column of Tables 9–11, a "+" sign denotes that the difference of the performance indicator means is statistically significant at the 0.05 level, while a "–" sign indicates a difference that is not significant. The t test results clearly indicate that the differences between the proposed RBFNN and the MLP networks are statistically significant for all indicators except the H-L statistic between MLP-23 and the RBFNN. Therefore, it is evident that the proposed RBFNN is a better classifier for identifying the treatment type of femoral PAD when compared to the MLP networks.
Table 9

Comparison of MLP-13 and RBFNN.

Indicator | MLP-13 mean ± SD (95% CI) | RBFNN mean ± SD (95% CI) | Significance
AUC | 0.873 ± 0.018 (0.862–0.885) | 0.949 ± 0.028 (0.931–0.966) | +
Cutoff | 0.443 ± 0.010 (0.437–0.449) | 0.510 ± 0.011 (0.503–0.517) | +
Accuracy | 0.881 ± 0.016 (0.871–0.891) | 0.950 ± 0.022 (0.936–0.964) | +
Sensitivity | 0.896 ± 0.021 (0.883–0.909) | 0.953 ± 0.015 (0.944–0.963) | +
Specificity | 0.868 ± 0.018 (0.857–0.879) | 0.948 ± 0.030 (0.929–0.966) | +
PPV | 0.849 ± 0.023 (0.835–0.864) | 0.942 ± 0.034 (0.920–0.963) | +
NPV | 0.909 ± 0.019 (0.897–0.921) | 0.958 ± 0.013 (0.949–0.966) | +
F-score | 0.872 ± 0.018 (0.861–0.883) | 0.947 ± 0.024 (0.932–0.962) | +
Youden index | 0.764 ± 0.033 (0.744–0.785) | 0.901 ± 0.044 (0.873–0.928) | +
H-L | 10.386 ± 2.125 (9.069–11.703) | 7.880 ± 1.557 (6.915–8.845) | +
Table 10

Comparison of MLP-23 and RBFNN.

Indicator | MLP-23 mean ± SD (95% CI) | RBFNN mean ± SD (95% CI) | Significance
AUC | 0.839 ± 0.018 (0.828–0.850) | 0.949 ± 0.028 (0.931–0.966) | +
Cutoff | 0.542 ± 0.016 (0.532–0.552) | 0.510 ± 0.011 (0.503–0.517) | +
Accuracy | 0.838 ± 0.017 (0.827–0.848) | 0.950 ± 0.022 (0.936–0.964) | +
Sensitivity | 0.835 ± 0.020 (0.823–0.847) | 0.953 ± 0.015 (0.944–0.963) | +
Specificity | 0.840 ± 0.018 (0.829–0.851) | 0.948 ± 0.030 (0.929–0.966) | +
PPV | 0.824 ± 0.021 (0.811–0.836) | 0.942 ± 0.034 (0.920–0.963) | +
NPV | 0.851 ± 0.019 (0.839–0.862) | 0.958 ± 0.013 (0.949–0.966) | +
F-score | 0.829 ± 0.018 (0.818–0.840) | 0.947 ± 0.024 (0.932–0.962) | +
Youden index | 0.675 ± 0.034 (0.654–0.697) | 0.901 ± 0.044 (0.873–0.928) | +
H-L | 10.211 ± 3.409 (8.098–12.324) | 7.880 ± 1.557 (6.915–8.845) | –
Table 11

Comparison of MLP-7 and RBFNN.

Indicator | MLP-7 mean ± SD (95% CI) | RBFNN mean ± SD (95% CI) | Significance
AUC | 0.789 ± 0.019 (0.778–0.801) | 0.949 ± 0.028 (0.931–0.966) | +
Cutoff | 0.392 ± 0.009 (0.386–0.398) | 0.510 ± 0.011 (0.503–0.517) | +
Accuracy | 0.800 ± 0.020 (0.787–0.812) | 0.950 ± 0.022 (0.936–0.964) | +
Sensitivity | 0.823 ± 0.028 (0.805–0.840) | 0.953 ± 0.015 (0.944–0.963) | +
Specificity | 0.782 ± 0.017 (0.772–0.792) | 0.948 ± 0.030 (0.929–0.966) | +
PPV | 0.746 ± 0.021 (0.733–0.759) | 0.942 ± 0.034 (0.920–0.963) | +
NPV | 0.850 ± 0.025 (0.834–0.865) | 0.958 ± 0.013 (0.949–0.966) | +
F-score | 0.782 ± 0.022 (0.769–0.796) | 0.947 ± 0.024 (0.932–0.962) | +
Youden index | 0.605 ± 0.041 (0.579–0.630) | 0.901 ± 0.044 (0.873–0.928) | +
H-L | 11.632 ± 2.169 (10.288–12.976) | 7.880 ± 1.557 (6.915–8.845) | +
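For reference, the pooled two-sample t statistic behind the significance columns in Tables 9–11 can be computed as below. This is a sketch with our own function name; with 10 folds per model, df = 18 and the two-tailed 0.05 critical value is approximately 2.101.

```python
import numpy as np

def two_sample_t(a, b):
    """Pooled (equal-variance) two-sample t statistic, df = len(a)+len(b)-2."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1)
                  + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var * (1 / na + 1 / nb))

# Comparing two sets of 10 fold-level scores, |t| > 2.101 (two-tailed,
# alpha = 0.05, df = 18) corresponds to a "+" entry in Tables 9-11.
```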

6. Conclusion

In this work, an artificial intelligence model that determines the treatment type for femoral PAD is presented. The proposed model, which is based on the RBFNN framework, is compared to three Pareto optimal MLP networks using a repeated 10-fold cross-validation method for the reliability of the results. The proposed RBFNN shows superior performance over the MLP networks in terms of measures such as AUC, accuracy, sensitivity, specificity, positive predictive value, negative predictive value, F-score, and the Youden index. This work clearly indicates that the RBFNN is a viable and powerful tool as a clinical decision support system for classifying the treatment options for femoral PAD. Future studies may explore using metaheuristic algorithms to determine optimal design parameters of RBFNNs, such as the number and locations of the centers or the variances of the clusters, and as a result enhance the classification performance.
Table 3

Cholesterol level categories in adults.

LDL cholesterol level (mg/dL) | LDL cholesterol category
<100 | Optimal
100–129 | Near optimal/above optimal
130–159 | Borderline high
160–189 | High
>190 | Very high
Table 4

Fontaine stages [2].

Stage | Details
Stage I | Asymptomatic, incomplete blood vessel obstruction
Stage II-a | Claudication at a distance greater than 200 meters
Stage II-b | Claudication at a distance less than 200 meters
Stage III | Rest pain, mostly in the feet
Stage IV | Necrosis and/or gangrene of the limb