Miguel Angel Ortíz-Barrios, Matias Garcia-Constantino, Chris Nugent, Isaac Alfaro-Sarmiento.
Abstract
The classifier selection problem in Assistive Technology Adoption refers to selecting the classification algorithms that perform best in predicting the adoption of a technology, and it is often addressed by measuring different single performance indicators. Satisfactory classifier selection can help reduce the time and costs involved in the technology adoption process. As there are multiple criteria from different domains and several candidate classification algorithms, classifier selection is a problem that can be addressed using Multiple-Criteria Decision-Making (MCDM) methods. This paper proposes a novel approach to the classifier selection problem by integrating Intuitionistic Fuzzy Sets (IFS), the Decision Making Trial and Evaluation Laboratory (DEMATEL), and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). The step-by-step procedure is as follows. First, IF-DEMATEL was used to estimate the criteria and sub-criteria weights while accounting for uncertainty; the same method was also employed to evaluate the interrelations among the classifier selection criteria. Finally, a modified TOPSIS was applied to generate an overall suitability index per classifier so that the most effective ones can be selected. The proposed approach was validated using a real-world case study concerning the adoption of a mobile-based reminding solution by People with Dementia (PwD). The outputs allow public health managers to accurately identify whether PwD can adopt an assistive technology, which results in (i) reduced cost overruns due to wrong classification, (ii) improved quality of life of adopters, and (iii) rapid deployment of intervention alternatives for non-adopters.
Keywords: Decision Making Trial and Evaluation Laboratory (DEMATEL); Intuitionistic Fuzzy Sets (IFS); People with Dementia (PwD); Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS); classifier; multi-criteria decision making (MCDM); public health; technology adoption
Year: 2022 PMID: 35162153 PMCID: PMC8834594 DOI: 10.3390/ijerph19031133
Source DB: PubMed Journal: Int J Environ Res Public Health ISSN: 1660-4601 Impact factor: 3.390
Figure 1. Proposed methodology for selecting the most suitable classifier considering the implementation context of technology adoption.
Profile of decision-making participants.
| Expert | Profession | Experience (Years) | Current Position | Participation in the Design of Reminding Technologies |
|---|---|---|---|---|
| 1 | Biomedical Engineering | >20 | Professor in Biomedical Engineering | Yes |
| 2 | Systems Engineering | >10 | Lecturer in Data Analytics | Yes |
| 3 | Systems Engineering | >20 | Senior Lecturer in Ambient Assisted Living | Yes |
| 4 | Systems Engineering | >10 | Professor of Image Processing | Yes |
| 5 | Biomedical Engineering | >10 | Assistant Professor in Signal Analysis | Yes |
| 6 | Industrial Engineering | >20 | Associate Professor | Yes |
| 7 | Systems Engineering | >10 | Associate Professor | Yes |
Figure 2. The proposed decision-making structure for selecting the most suitable classifier for AT adoption.
Description of classifier selection criteria.
| Classifier Selection Criterion | Sub-Criteria | Definition |
|---|---|---|
| Classifier performance (F1) | Predictive ability (SF1) | It measures the predictive capability of a classification algorithm; in this context, how well the classifier distinguishes adopters and non-adopters of a particular AT. |
| Applicability (F2) | Ease of comprehension by non-experts (SF7) | This factor denotes how explainable the algorithm is and verifies whether it is easy to understand by clinicians who are often unskilled in this kind of application. This is of interest considering that medical staff will be directly involved in the classifier implementation. |
| Replicability (F3) | No sub-criteria | This criterion considers the financial investment underpinning the classifier development process as well as its validation in the practical scenario. |
| Adaptability (F4) | Missing data estimation (SF9) | It evaluates how flexible the algorithm is when addressing common data drawbacks (e.g., missing data), different implementation conditions, and diverse variable types. Not responding effectively to this context may limit the application of the classifier in the real world. |
| Classifier architecture (F5) | Data gathering (SF12) | It exhibits different classifier design aspects including data gleaning, training, and validation which may flatten the learning curve of clinicians while laying the groundwork for the design of agile healthcare processes for PwD. |
Direct-relation matrix in Intuitionistic Fuzzy Sets—Decision-maker 1 (Adaptability sub-criteria).
| SF9 | SF10 | SF11 | ||||
| SF9 | 0 | 0 | 0.90 | 0.10 | 0.50 | 0.45 |
| SF10 | 0.50 | 0.45 | 0 | 0 | 0.90 | 0.10 |
| SF11 | 0.50 | 0.45 | 0.10 | 0.90 | 0 | 0 |
Direct-relation matrix in standard fuzzy subsets—Decision-maker 1 (Adaptability sub-criteria).
| | SF9 | SF10 | SF11 |
|---|---|---|---|
| SF9 | 0 | 0.90 | 0.53 |
| SF10 | 0.53 | 0 | 0.90 |
| SF11 | 0.53 | 0.10 | 0 |
Crisp direct-relation matrix—Decision-maker 1 (Adaptability sub-criteria).
| | SF9 | SF10 | SF11 |
|---|---|---|---|
| SF9 | 0 | 3.60 | 2.10 |
| SF10 | 2.10 | 0 | 3.60 |
| SF11 | 2.10 | 0.40 | 0 |
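The three matrices above are consistent with a simple two-step defuzzification: the hesitancy degree π = 1 − μ − ν is allocated proportionally to the membership (μ_F = μ(1 + π)), and the result is scaled linearly to the 0–4 DEMATEL influence scale. The allocation rule and the ×4 scaling are assumptions inferred by reverse-checking Decision-maker 1's matrices, not a method statement from the paper; a minimal sketch:

```python
def ifs_to_fuzzy(mu: float, nu: float) -> float:
    """Convert an intuitionistic fuzzy value (mu, nu) to a standard
    fuzzy membership by allocating the hesitancy pi = 1 - mu - nu
    proportionally to mu (assumed allocation rule)."""
    pi = 1.0 - mu - nu          # hesitancy degree
    return mu * (1.0 + pi)

def fuzzy_to_crisp(mu_f: float, scale: float = 4.0) -> float:
    """Map a [0, 1] fuzzy membership onto the 0-4 DEMATEL influence
    scale (assumed linear scaling)."""
    return mu_f * scale

# Decision-maker 1, Adaptability block:
# (0.90, 0.10) -> 0.90 -> 3.60, and (0.50, 0.45) -> 0.525 -> 2.10
```

Applied cell by cell, this reproduces the standard-fuzzy and crisp matrices shown above for Decision-maker 1.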
Aggregated crisp direct-relation matrix for the Adaptability sub-criteria.
| | SF9 | SF10 | SF11 |
|---|---|---|---|
| SF9 | 0 | 2.18 | 1.91 |
| SF10 | 2.65 | 0 | 2.23 |
| SF11 | 2.66 | 2.45 | 0 |
Normalized direct-relation matrix for Adaptability sub-criteria.
| | SF9 | SF10 | SF11 |
|---|---|---|---|
| SF9 | 0 | 0.409 | 0.360 |
| SF10 | 0.499 | 0 | 0.419 |
| SF11 | 0.501 | 0.461 | 0 |
Total influence matrix for Adaptability sub-criteria.
| | SF9 | SF10 | SF11 | D |
|---|---|---|---|---|
| SF9 | 2.184 | 2.270 | 2.097 | 6.551 |
| SF10 | 2.796 | 2.234 | 2.361 | 7.391 |
| SF11 | 2.885 | 2.629 | 2.140 | 7.654 |
| R | 7.865 | 7.133 | 6.598 | |
Prominence and relation values within the classifier selection model.
| Criterion (F)/Sub-Criterion (SF) | Prominence (D + R) | Relation (D − R) | Dispatcher | Receiver |
|---|---|---|---|---|
| Classifier performance (F1) | | | | |
| Predictive ability (SF1) | 10.306 | 0.883 | X | |
| Computational time (SF2) | 8.582 | −1.141 | | X |
| Negative recall (SF3) | 9.425 | −0.289 | | X |
| Positive recall (SF4) | 9.711 | 0.230 | X | |
| Positive predictive value (SF5) | 9.583 | 0.376 | X | |
| Negative predictive value (SF6) | 9.647 | −0.058 | | X |
| Applicability (F2) | | | | |
| Ease of comprehension (SF7) | 43.118 | −1.000 | | X |
| Interpretability (SF8) | 43.118 | 1.000 | X | |
| Replicability (F3) | | | | |
| Adaptability (F4) | | | | |
| Missing data estimation (SF9) | 14.416 | −1.314 | | X |
| Management of continuous and discrete variables (SF10) | 14.524 | 0.258 | X | |
| Online learning (SF11) | 14.252 | 1.056 | X | |
| Classifier architecture (F5) | | | | |
| Data gathering (SF12) | 12.827 | −0.173 | | X |
| Overtraining effect (SF13) | 11.584 | −1.520 | | X |
| Amount of input data (SF14) | 12.868 | 0.322 | X | |
| Validation (SF15) | 13.039 | 1.174 | X | |
| Statistical classification (SF16) | 12.589 | 0.198 | X | |
Figure 3. Ranking of classifier selection criteria.
Local (LW) and global (GW) weights of classifier selection sub-criteria (IF-DEMATEL).
| Criterion (F)/Sub-Criterion (SF) | LW | GW |
|---|---|---|
| Classifier performance (F1) | | |
| Predictive ability (SF1) | 0.180 | 0.035 |
| Computational time (SF2) | 0.150 | 0.029 |
| Negative recall (SF3) | 0.165 | 0.032 |
| Positive recall (SF4) | 0.170 | 0.033 |
| Positive predictive value (SF5) | 0.167 | 0.033 |
| Negative predictive value (SF6) | 0.168 | 0.033 |
| Applicability (F2) | | |
| Ease of comprehension (SF7) | 0.500 | 0.102 |
| Interpretability (SF8) | 0.500 | 0.102 |
| Replicability (F3) | | |
| Adaptability (F4) | | |
| Missing data estimation (SF9) | 0.334 | 0.066 |
| Management of continuous and discrete variables (SF10) | 0.336 | 0.067 |
| Online learning (SF11) | 0.330 | 0.066 |
| Classifier architecture (F5) | | |
| Data gathering (SF12) | 0.204 | 0.044 |
| Overtraining effect (SF13) | 0.184 | 0.040 |
| Amount of input data (SF14) | 0.205 | 0.044 |
| Validation (SF15) | 0.207 | 0.045 |
| Statistical classification (SF16) | 0.200 | 0.043 |
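The local weights above are consistent with normalizing each sub-criterion's prominence (D + R) within its criterion block; e.g., for the Adaptability block, 14.416/(14.416 + 14.524 + 14.252) ≈ 0.334. This weighting rule is inferred from the published numbers rather than stated in the record; a minimal sketch:

```python
def local_weights(prominence):
    """Normalize prominence (D + R) values within a criterion block
    so the local weights sum to 1 (rule inferred from the tables)."""
    total = sum(prominence)
    return [p / total for p in prominence]

# Adaptability block (SF9, SF10, SF11), prominence from the table above
lw = local_weights([14.416, 14.524, 14.252])
# -> approximately [0.334, 0.336, 0.330], matching the published LWs
```

Global weights then follow as each local weight multiplied by its parent criterion's weight.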
Figure 4. Impact-digraph map for (a) criteria, (b) classifier performance, (c) applicability, (d) adaptability, and (e) classifier architecture.
List of key performance indexes utilized in the modified TOPSIS.
| Sub-Factor/Factor | Key Performance Index | Mathematical Formula |
|---|---|---|
| Predictive ability (SF1) | Average accuracy | |
| Computational time (SF2) | Average run time | |
| Negative recall (SF3) | Average recall (-) | |
| Positive recall (SF4) | Average recall (+) | |
| Positive predictive value (SF5) | Average precision (+) | |
| Negative predictive value (SF6) | Average precision (-) | |
| Ease of comprehension (SF7) | Model appropriation | If the algorithm is easy to appropriate (1); otherwise (0) |
| Interpretability (SF8) | Box type | If it is a black-box algorithm (0); white-box algorithm (1) |
| Replicability (F3) | Unit replication cost | If the learning process cost is higher than £727.48 (0); otherwise (1) |
| Missing data estimation (SF9) | Capability of missing data management | If the algorithm is capable of handling missing data (1); otherwise (0) |
| Management of continuous and discrete variables (SF10) | Management of continuous and discrete variables | If the algorithm works with both continuous and discrete variables (1); otherwise (0) |
| Online learning (SF11) | Online learning | If the algorithm is of online-learning type (1); otherwise (0) |
| Data gathering (SF12) | Easiness of data collation | If the feature set of the model can be collated through available data sources and/or simple self-administered surveys (1); otherwise (0) |
| Overtraining effect (SF13) | Overtraining | If the algorithm evidences an overtraining effect (0); otherwise (1) |
| Amount of input data (SF14) | Number of input variables | Number of patient features that the classifier needs for displaying the prediction |
| Validation (SF15) | Access to validation datasets | If the algorithm has access to validated datasets (1); otherwise (0) |
| Statistical classification (SF16) | Algorithm nature | If the algorithm is based on statistical modelling (1); otherwise (0) |
The performance matrix in TOPSIS application.
| | SF1 | SF2 | SF3 | SF4 | SF5 | SF6 | SF7 | SF8 | F3 |
|---|---|---|---|---|---|---|---|---|---|
| A1 | 0.85 | 1 | 0.664 | 0.584 | 0.717 | 0.61 | 0 | 0 | 1 |
| A2 | 0.85 | 1 | 0.504 | 0.637 | 0.637 | 0.425 | 0 | 1 | 1 |
| A3 | 0.825 | 1 | 0.478 | 0.212 | 0.239 | 0.557 | 0 | 0 | 0 |
| A4 | 0.349 | 1 | 0.239 | 0.584 | 0.504 | 0.159 | 1 | 1 | 0 |
| A5 | 0.875 | 2 | 0.185 | 0.239 | 0.212 | 0.159 | 0 | 0 | 0 |
| A6 | 0.825 | 5 | 0.371 | 0.132 | 0.132 | 0.504 | 1 | 1 | 0 |
| A7 | 0.825 | 1 | 0.212 | 0.265 | 0.239 | 0.265 | 1 | 1 | 1 |
| A+ | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| A- | 0 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| W | 0.035 | 0.029 | 0.032 | 0.033 | 0.033 | 0.033 | 0.102 | 0.102 | 0.185 |

| | SF9 | SF10 | SF11 | SF12 | SF13 | SF14 | SF15 | SF16 |
|---|---|---|---|---|---|---|---|---|
| A1 | 1 | 0 | 1 | 1 | 1 | 2 | 1 | 0 |
| A2 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
| A3 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1 |
| A4 | 1 | 1 | 1 | 1 | 0 | 2 | 1 | 1 |
| A5 | 1 | 1 | 1 | 0 | 0 | 2 | 1 | 1 |
| A6 | 1 | 1 | 1 | 1 | 0 | 2 | 1 | 1 |
| A7 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 |
| A+ | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 |
| A- | 0 | 0 | 0 | 0 | 1 | 2 | 0 | 0 |
| W | 0.066 | 0.067 | 0.066 | 0.044 | 0.04 | 0.044 | 0.045 | 0.043 |
Matrix of distances to the ideal solution.
| | SF1 | SF2 | SF3 | SF4 | SF5 | SF6 | SF7 | SF8 | F3 |
|---|---|---|---|---|---|---|---|---|---|
| A1 | 0.00050 | 0.00021 | 0.00062 | 0.00075 | 0.00060 | 0.00072 | 0.01040 | 0.01040 | 0.00856 |
| A2 | 0.00009 | 0.00000 | 0.00057 | 0.00038 | 0.00038 | 0.00073 | 0.01040 | 0.00000 | 0.00000 |
| A3 | 0.00012 | 0.00000 | 0.00061 | 0.00099 | 0.00097 | 0.00052 | 0.01040 | 0.01040 | 0.03423 |
| A4 | 0.00108 | 0.00021 | 0.00097 | 0.00075 | 0.00083 | 0.00106 | 0.00260 | 0.00260 | 0.03423 |
| A5 | 0.00047 | 0.00084 | 0.00099 | 0.00103 | 0.00104 | 0.00106 | 0.01040 | 0.01040 | 0.03423 |
| A6 | 0.00091 | 0.01346 | 0.00097 | 0.00108 | 0.00108 | 0.00098 | 0.00666 | 0.00666 | 0.03423 |
| A7 | 0.00012 | 0.00000 | 0.00093 | 0.00094 | 0.00097 | 0.00094 | 0.00000 | 0.00000 | 0.00000 |

| | SF9 | SF10 | SF11 | SF12 | SF13 | SF14 | SF15 | SF16 | Si+ |
|---|---|---|---|---|---|---|---|---|---|
| A1 | 0.00109 | 0.00449 | 0.00109 | 0.00048 | 0.00040 | 0.00194 | 0.00051 | 0.00185 | 0.21121 |
| A2 | 0.00000 | 0.00000 | 0.00000 | 0.00000 | 0.00160 | 0.00000 | 0.00000 | 0.00185 | 0.12656 |
| A3 | 0.00436 | 0.00000 | 0.00000 | 0.00000 | 0.00000 | 0.00000 | 0.00000 | 0.00000 | 0.25021 |
| A4 | 0.00109 | 0.00112 | 0.00109 | 0.00048 | 0.00000 | 0.00194 | 0.00051 | 0.00046 | 0.22586 |
| A5 | 0.00109 | 0.00112 | 0.00109 | 0.00194 | 0.00000 | 0.00194 | 0.00051 | 0.00046 | 0.26192 |
| A6 | 0.00279 | 0.00287 | 0.00279 | 0.00124 | 0.00000 | 0.00008 | 0.00130 | 0.00118 | 0.27977 |
| A7 | 0.00000 | 0.00000 | 0.00000 | 0.00194 | 0.00160 | 0.00000 | 0.00000 | 0.00000 | 0.08629 |
Matrix of distances to the anti-ideal solution.
| | SF1 | SF2 | SF3 | SF4 | SF5 | SF6 | SF7 | SF8 | F3 |
|---|---|---|---|---|---|---|---|---|---|
| A1 | 0.00016 | 0.01703 | 0.00005 | 0.00003 | 0.00007 | 0.00004 | 0.00000 | 0.00000 | 0.00856 |
| A2 | 0.00064 | 0.01346 | 0.00007 | 0.00018 | 0.00018 | 0.00004 | 0.00000 | 0.01040 | 0.03423 |
| A3 | 0.00057 | 0.01346 | 0.00005 | 0.00000 | 0.00000 | 0.00010 | 0.00000 | 0.00000 | 0.00000 |
| A4 | 0.00000 | 0.01703 | 0.00000 | 0.00003 | 0.00002 | 0.00000 | 0.00260 | 0.00260 | 0.00000 |
| A5 | 0.00018 | 0.00757 | 0.00000 | 0.00000 | 0.00000 | 0.00000 | 0.00000 | 0.00000 | 0.00000 |
| A6 | 0.00002 | 0.00000 | 0.00000 | 0.00000 | 0.00000 | 0.00000 | 0.00042 | 0.00042 | 0.00000 |
| A7 | 0.00057 | 0.01346 | 0.00000 | 0.00001 | 0.00000 | 0.00001 | 0.01040 | 0.01040 | 0.03423 |

| | SF9 | SF10 | SF11 | SF12 | SF13 | SF14 | SF15 | SF16 | Si- |
|---|---|---|---|---|---|---|---|---|---|
| A1 | 0.00109 | 0.00000 | 0.00109 | 0.00048 | 0.00014 | 0.00279 | 0.00051 | 0.00000 | 0.17899 |
| A2 | 0.00436 | 0.00449 | 0.00436 | 0.00194 | 0.00102 | 0.00008 | 0.00203 | 0.00000 | 0.27829 |
| A3 | 0.00000 | 0.00449 | 0.00436 | 0.00194 | 0.00006 | 0.00008 | 0.00203 | 0.00185 | 0.17025 |
| A4 | 0.00109 | 0.00112 | 0.00109 | 0.00048 | 0.00006 | 0.00279 | 0.00051 | 0.00046 | 0.17289 |
| A5 | 0.00109 | 0.00112 | 0.00109 | 0.00000 | 0.00006 | 0.00279 | 0.00051 | 0.00046 | 0.12195 |
| A6 | 0.00017 | 0.00018 | 0.00017 | 0.00008 | 0.00006 | 0.00000 | 0.00008 | 0.00007 | 0.04103 |
| A7 | 0.00436 | 0.00449 | 0.00436 | 0.00000 | 0.00102 | 0.00008 | 0.00203 | 0.00185 | 0.29538 |
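With the separation measures Si+ and Si- from the two distance tables above, the final TOPSIS step is to compute each classifier's closeness coefficient Ci = Si- / (Si+ + Si-) and rank the alternatives by descending Ci. A sketch using the published Si values:

```python
def closeness(s_plus, s_minus):
    """TOPSIS closeness coefficient Ci = Si- / (Si+ + Si-) per alternative."""
    return {a: s_minus[a] / (s_plus[a] + s_minus[a]) for a in s_plus}

# Separation measures taken from the two distance matrices above
s_plus = {"A1": 0.21121, "A2": 0.12656, "A3": 0.25021, "A4": 0.22586,
          "A5": 0.26192, "A6": 0.27977, "A7": 0.08629}
s_minus = {"A1": 0.17899, "A2": 0.27829, "A3": 0.17025, "A4": 0.17289,
           "A5": 0.12195, "A6": 0.04103, "A7": 0.29538}

C = closeness(s_plus, s_minus)
ranking = sorted(C, key=C.get, reverse=True)
# ranking -> ['A7', 'A2', 'A1', 'A4', 'A3', 'A5', 'A6']
```

A7 obtains the highest closeness coefficient (about 0.774) and A6 the lowest.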
Figure 5. Ranking of alternative classification algorithms considered as a support for the adoption of a mobile-based technology in PwD.
Figure 6. Contrast between SAW, TOPSIS, and VIKOR rankings.
Figure 7. Pearson correlation results (confidence interval = 95.0%).
Figure 8. Spearman rank correlation results (confidence interval = 95.0%).