Priyanka Chaurasia, Sally McClean, Chris D Nugent, Ian Cleland, Shuai Zhang, Mark P Donnelly, Bryan W Scotney, Chelsea Sanders, Ken Smith, Maria C Norton, JoAnn Tschanz.
Abstract
The work described in this paper builds upon our previous research on adoption modelling and aims to identify the subset of features that best supports understanding of technology adoption. The current work is based on the analysis and fusion of two datasets that provide detailed information on the background, psychosocial characteristics, and medical history of the subjects. In the process of modelling adoption, feature selection is carried out, followed by empirical analysis to identify the best classification models. With a more detailed set of features including psychosocial and medical history information, the developed adoption model, using the kNN algorithm, achieved a prediction accuracy of 99.41% when tested on 173 participants. The second-best model, built using a neural network (NN), achieved 94.08% accuracy. Both results improve on the best accuracy (92.48%) achieved in our previous work, which was based on psychosocial and self-reported health data for the same cohort. Psychosocial data was found to be better than medical data for predicting technology adoption. For the best results, however, a combination of psychosocial and medical data should be used, where the latter is preferably provided from reliable medical sources rather than self-reported.
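In the best-performing configuration reported above, the classifier is Weka's IBk with KNN = 1, i.e. a 1-nearest-neighbour rule over the fused feature vectors. A minimal pure-Python sketch of that rule follows; the feature vectors and labels are hypothetical stand-ins, not the study's data:

```python
import math

def one_nn_predict(train_X, train_y, x):
    """Classify x by the label of its single nearest training point
    (Euclidean distance), mirroring Weka IBk with KNN = 1 and no
    distance weighting."""
    best_i = min(range(len(train_X)),
                 key=lambda i: math.dist(train_X[i], x))
    return train_y[best_i]

# Hypothetical fused CCSMA+UPDB feature vectors, already encoded numerically
train_X = [[1, 75, 12, 0], [2, 82, 16, 1], [1, 90, 8, 3]]
train_y = ["Adopter", "Adopter", "Refuser"]

print(one_nn_predict(train_X, train_y, [1, 88, 9, 3]))  # nearest is the third point
```

In practice the reported accuracies come from Weka's cross-validated IBk run, not from a hand-rolled loop; this sketch only makes the decision rule concrete.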
Keywords: Assistive technologies; Dementia; Medical history; Reminder application; Technology adoption
Year: 2021 PMID: 35368316 PMCID: PMC8933362 DOI: 10.1007/s00779-021-01572-x
Source DB: PubMed Journal: Pers Ubiquitous Comput ISSN: 1617-4909 Impact factor: 3.006
Fig. 1 An overview of the TAUT, indicating sources of the data and the set of features used in the project
Summary table comparing different technology acceptance models built and proposed
| Models and theories | Description |
|---|---|
| TAM [ | Posits that perceived usefulness and perceived ease of use are key to the acceptance of technology. Lacks explanatory and predictive power; no experimental evaluation was done, and it does not include social influences |
| PIADS [ | Extension of TAM that incorporates personal factors and external factors such as people and society. Criticised for its lack of consideration of explanatory behaviour and of experimental evaluation |
| UTAUT [ | Improvement on TAM and PIADS. Identifies more reliable features such as age, gender, user expectations, and willingness to use. Also includes facilitating conditions (infrastructure) as a determining factor |
| STAM [ | Developed to understand mobile phone usage and adoption by older patients. More related to mobile phone usage in general and not evaluated for assessing assistive technology adoption |
| Scherer et al. (2007) [ | Framework for modelling the selection of assistive technologies. Used only for selecting an assistive technology and does not consider long-term adoption |
| TAUT [ | The proposed TAUT model investigates factors that affect technology adoption in a cognitively impaired older cohort. It uses a range of features such as the subject’s background, environmental and social perspectives, and medical history information. The adoption model is built from information on users’ compliance with TAUT app usage, users’ background, details of cognitive assessment, and medical history data. It does not rely on complex questionnaires but includes generally easily obtainable demographic and health information |
Fig. 2 Screenshots from the TAUT app showing (a) upcoming reminders list and (b) reminder creation screens
User adoption matrix profiling adopters and non-adopters based on capability and willingness
| Adoption modelling | | Capability | |
|---|---|---|---|
| | | Yes (proficient) | No (non-proficient) |
| Willingness | Yes (interested) | Adopter | Non-adopter (1) |
| | No (not interested) | Non-adopter (2) | Non-adopter (3) |
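The matrix above reduces to a simple rule: a participant is an adopter only when both willing and capable, and each of the three non-adopter cells corresponds to a different missing ingredient. A sketch using the labels from the table:

```python
def adoption_class(willing: bool, capable: bool) -> str:
    """Map the willingness/capability matrix to an adoption label."""
    if willing and capable:
        return "Adopter"
    if willing and not capable:
        return "Non-adopter (1)"   # interested but non-proficient
    if not willing and capable:
        return "Non-adopter (2)"   # proficient but not interested
    return "Non-adopter (3)"       # neither willing nor capable

print(adoption_class(True, True))   # Adopter
```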
Data dictionary used for profiling adopter and non-adopter
| Status | Code | Frequency | Included/removed | Class |
|---|---|---|---|---|
| Not recruited | 0.00 | 687 | - | - |
| Enrolled in study | 1.00 | 21 | Included | Adopter |
| Unreachable | 2.00 | 2 | Removed | - |
| Refused by phone/letter | 3.00 | 146 | Included | Non-adopter |
| Deceased prior to study | 4.00 | 58 | Removed | - |
| Moved | 5.00 | - | - | - |
| Temporary moved | 6.00 | - | - | - |
| Cannot locate | 7.00 | 92 | Removed | - |
| Out of area | 8.00 | 21 | Removed | - |
| Ineligible | 9.00 | 6 | Included | Non-adopter |
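The dictionary above acts as a lookup that decides, per recruitment code, whether a record enters the modelling set and with which class. A sketch of that lookup (codes and classes taken from the table; the `Moved`/`Temporarily moved` rows have no disposition listed, so unknown codes simply return `None`):

```python
# Class per recruitment code; None = record removed from modelling
CODE_MAP = {
    0.0: None,           # not recruited
    1.0: "Adopter",      # enrolled in study
    2.0: None,           # unreachable
    3.0: "Non-adopter",  # refused by phone/letter
    4.0: None,           # deceased prior to study
    7.0: None,           # cannot locate
    8.0: None,           # out of area
    9.0: "Non-adopter",  # ineligible
}

def label(code: float):
    """Return the modelling class for a recruitment code, or None."""
    return CODE_MAP.get(code)

records = [1.0, 3.0, 4.0, 9.0]
print([label(c) for c in records])  # ['Adopter', 'Non-adopter', None, 'Non-adopter']
```

Note that the included counts (21 adopters, 146 + 6 non-adopters) sum to the 173 participants used for testing in the abstract.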
Parameters used to train each model
| Algorithm | Weka classifier | Parameters used to train each model | |
|---|---|---|---|
| NN | Multilayer Perceptron | normalizeAttributes = True batchSize = 100 decay = False validationSetSize = 0 trainingTime = 500 | resume = False autoBuild = True normalizeNumericClass = True learningRate = 0.3 reset = True |
| C4.5 DT | J48 | seed = 1 unpruned = False confidenceFactor = 0.25 numFolds = 3 batchSize = 100 reducedErrorPruning = False useLaplace = False | doNotMakeSplitPointActualValue = False binarySplits = False doNotCheckCapabilities = False minNumObj = 2 useMDLcorrection = True collapseTree = True |
| SVM | SMO | numFolds = -1 randomSeed = 1 batchSize = 100 kernel = Poly Kernel | checksTurnedOff = False filterType = Normalize training data toleranceParameter = 0.01 epsilon = 1.0E-12 |
| NB | NaïveBayes | useKernelEstimator = False batchSize = 100 | displayModelInOldFormat = False useSupervisedDiscretization = False |
| AB | AdaBoostM1 | seed = 1 weightThreshold = 100 batchSize =100 | numIterations resume = 2 useResampling = False |
| kNN | IBk | batchSize = 100 KNN = 1 distanceWeighting = No distance weighting | windowSize = 0 meanSquared = False crossValidate = False |
| CART | SimpleCART | seed = 1 batchSize = 100 useOneSE = False usePrune = True | numFoldsPruning = 5 minNumObj = 2.0 heuristic = True sizePer = 1.0 |
Details of 11 features selected from the CCSMA data [4]
| Features | Details | |||
|---|---|---|---|---|
| Gender | Male = 1, female = 2 | |||
| Age (years) | - | |||
| Education level (educ) | 0 = No education, 1–10 = grades 1 to 10, 11 = Eleventh grade/no diploma, 12 = High school diploma or GED, 13 = Some college | 14 = 2 years of college, 15 = 3 years of college, 16 = College degree (B.A., B.S.), 17 = Some post-graduate work | 18 = M.A., M.S., 19 = Some doctoral work, 20 = Doctoral degree, 97 = Refused, 98 = Don’t know, 99 = Missing | |
| Job category | 1 = Professor, technical, manager 2 = Clerical, sales 3 = Service | 4 = Agriculture 5 = Processing 6 = Machine | 7 = Bench work 8 = Structural | 9 = Miscellaneous 10 = Never employed |
| Dementia code AD pure (padom) | 1 = AD-clean 2 = AD with other dementia 3 = AD-VaD | 4 = VaD without AD 5 = Other dementia 9 = anycind as of x12 | 10 = Screened normal 11 = Evaluated normal 99 = Unable to determine | |
| lastV | 1 = v1 | 4 = v2 | 7 = v3 | 10 = v4 |
| lastObs | 1 = v1, 2 = c1, 3 = f1, 4 = v2, 5 = c2, 6 = f2, 7 = v3, 8 = c3, 9 = f3, 10 = v4, 11 = c4, 12 = f4 | |||
| Heart attack self-reported (MI) | 0 = Never (birth-lastVdxK) 1 = Prevalent (before v1) | 2 = Incident (v1 to RC/Dem) 3 = Post-dementia onset | 4 = During PV wave(s) 9 = Missing | |
| Stroke self-reported (CVA) | 0 = Never (birth - lastVdxK) 1 = Prevalent (before v1) | 2 = Incident (v1 to RC/Dem) 3 = Post-dementia onset | 4 = During PV wave(s) 9 = Missing | |
| Hypertension self-reported (HTN) | 0 = Never (birth - lastVdxK) 1 = Prevalent (before v1) | 2 = Incident (v1 to RC/Dem) 3 = Post-dementia onset | 4 = During PV wave(s) 9 = Missing | |
| High cholesterol self-reported (Chol) | 0 = Never (birth - lastVdxK) 1 = Prevalent (before v1) | 2 = Incident (v1 to RC/Dem) 3 = Post-dementia onset | 4 = During PV wave(s) 9 = Missing | |
| * CIND = Cognitive impairment not dementia | * PV wave(s) = Periodic wave of visit | * VaD = Vascular dementia | | |
Details of the features considered from the UPDB dataset
| Feature | Feature details and labels | |
|---|---|---|
| Total HOSP | Number of times a subject was hospitalised in the HOSP category for any of the 10 diseases between 1996 and 2013 | {NoneHosp, FewTimesHosp, LotHosp} |
| Total AS | Number of times a subject was hospitalised in the AS category for any of the 10 diseases between 1996 and 2013 | {NoneAS, FewAS, LotAS} |
| Total Heart, Total Cancer, Total Chronic, Total Accident, Total Stroke, Total AD, Total Diabetes, Total Influenza, Total Nephritis, Total Septicemia | Number of times a subject was hospitalised in the HOSP and/or AS category for each of these diseases between 1996 and 2013 | {NoneHeart, FewHeart, LotHeart}, {NoneCancer, VisitedForCancer}, {NoneChronic, VisitedChronic}, {NoneAccident, VisitAccident}, {NoneStroke, VisitedStroke}, {NoneAD, VisitAD}, {NoneDiabetes, VisitDiabetes}, {NoneInfluenza, VisitInfluenza}, {NoneNephritis, VisitNephritis}, {NoneSepticemia, VisitSepticemia} |
| Recent 3 years of HOSP | Number of times a subject was hospitalised in the HOSP category between 2010 and 2013 | {NoneHospRecent, VisitHospRecent} |
| Recent 3 years of AS | Number of times a subject was hospitalised in the AS category between 2010 and 2013 | {NoneASRecent, VisitASRecent} |
| Heart_recent3Years, Cancer_recent3Years, Chronic_recent3Years, Accident_recent3Years, Stroke_recent3Years, AD_recent3Years, Diabetes_recent3Years, Influenza_recent3Years, Nephritis_recent3Years, Septicemia_recent3Years | Number of times a subject was hospitalised in the HOSP and/or AS category for each of these diseases between 2010 and 2013 | {NoneHeartRecent, VisitHeartRecent}, {NoneCancerRecent, VisitedForCancerRecent}, {NoneChronicRecent, VisitedChronicRecent}, {NoneAccidentRecent, VisitAccidentRecent}, {NoneStrokeRecent, VisitedStrokeRecent}, {NoneADRecent, VisitADRecent}, {NoneDiabatesRecent, VisitDiabatesRecent}, {NoneInfluenzaRecent, VisitInfluenzaRecent}, {NoneNephritisRecent, VisitNephritisRecent}, {NoneSepticemiaRecent, VisitSepticemiaRecent} |
Final logistic regression parameters obtained
| Variable | Model Log Likelihood | Change in -2 Log Likelihood | df | Sig. of the change |
|---|---|---|---|---|
| Total AS | −143.348 | 30.745 | 2 | .000 |
| Total Chronic | −129.676 | 3.401 | 1 | .065 |
| Total Stroke | −144.557 | 33.163 | 1 | .000 |
| Total AD | −130.388 | 4.826 | 1 | .028 |
| Total Diabetes | −135.937 | 15.924 | 1 | .000 |
| Total Influenza | −129.865 | 3.780 | 1 | .052 |
| Total Nephritis | −136.354 | 16.757 | 1 | .000 |
| Hosp_recent | −130.377 | 4.804 | 1 | .028 |
| AS_recent | −129.577 | 3.203 | 1 | .073 |
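The "Sig. of the change" column is the chi-square p-value of the change in −2 log-likelihood on the listed degrees of freedom. For the df = 1 rows this can be checked without any statistics library, since the χ² survival function with one degree of freedom has a closed form via the complementary error function:

```python
import math

def chi2_sf_df1(x: float) -> float:
    """Survival function of a chi-square with 1 df: P(X > x) = erfc(sqrt(x/2))."""
    return math.erfc(math.sqrt(x / 2.0))

# Total Chronic: change in -2LL = 3.401 on 1 df
print(round(chi2_sf_df1(3.401), 3))   # 0.065, matching the table
# Total Stroke: change in -2LL = 33.163 on 1 df
print(chi2_sf_df1(33.163))            # ~0, shown as .000 in the table
```

The df = 2 row (Total AS) would need the general χ² survival function (e.g. scipy's `chi2.sf`), which this one-df shortcut does not cover.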
Average prediction accuracies (%) of the models obtained for the full 24-feature set and the feature sets from univariate and multivariate analysis
| Dataset | NN | C4.5 DT | SVM | NB | AB | kNN | CART |
|---|---|---|---|---|---|---|---|
| All 24 features | 84.02 | 74.56 | 72.78 | 59.17 | 66.86 | 85.80 | 77.25 |
| 3 features from univariate analysis | 59.76 | 59.6 | 58.58 | 63.90 | 63.90 | 59.76 | 59.76 |
| 9 features from multivariate analysis | 75.74 | 71.01 | 69.23 | 58.58 | 66.86 | 76.33 | 71.0 |
Average prediction accuracies, F-Measure, and ROC area for the 32- and 30-feature CCSMA+UPDB datasets
| 32 features (lastObs and lastV included) | 30 features (lastObs and lastV excluded) | ||||
|---|---|---|---|---|---|
| Algorithm | Results | ||||
| NN | Avg. prediction accuracy = 93.4911% | Avg. prediction accuracy = 95.2663% | |||
| Class | F-Measure | ROC area | F-Measure | ROC area | |
| Refuser | 0.961 | 0.821 | 0.971 | 0.886 | |
| Adopter | 0.792 | 0.821 | 0.862 | 0.886 | |
| Weighted avg. | 0.931 | 0.821 | 0.952 | 0.886 | |
| C4.5 DT | Avg. prediction accuracy = 88.1657% | Avg. prediction accuracy = 88.1657% | |||
| Class | F-Measure | ROC area | F-Measure | ROC area | |
| Refuser | 0.928 | 0.905 | 0.927 | 0.903 | |
| Adopter | 0.677 | 0.905 | 0.688 | 0.903 | |
| Weighted avg. | 0.883 | 0.905 | 0.884 | 0.903 | |
| SVM | Avg. prediction accuracy = 84.6154% | Avg. prediction accuracy = 84.0237% | |||
| Class | F-Measure | ROC area | F-Measure | ROC | |
| Refuser | 0.907 | 0.724 | 0.904 | 0.707 | |
| Adopter | 0.552 | 0.724 | 0.526 | 0.707 | |
| Weighted avg. | 0.844 | 0.724 | 0.837 | 0.707 | |
| NB | Avg. prediction accuracy = 66.2722% | Avg. prediction accuracy = 67.4556% | |||
| Class | F-Measure | ROC area | F-Measure | ROC | |
| Refuser | 0.769 | 0.641 | 0.779 | 0.635 | |
| Adopter | 0.374 | 0.641 | 0.382 | 0.635 | |
| Weighted avg. | 0.699 | 0.641 | 0.709 | 0.635 | |
| AB | Avg. prediction accuracy = 75.7396% | Avg. prediction accuracy = 75.7396% | |||
| Class | F-Measure | ROC area | F-Measure | ROC | |
| Refuser | 0.842 | 0.725 | 0.842 | 0.728 | |
| Adopter | 0.481 | 0.725 | 0.481 | 0.728 | |
| Weighted avg. | 0.778 | 0.725 | 0.778 | 0.728 | |
| kNN | Avg. prediction accuracy = 99.4083% | Avg. prediction accuracy = 97.0414% | | | |
| Class | F-Measure | ROC area | F-Measure | ROC | |
| Refuser | 0.996 | 1.000 | 0.982 | 0.997 | |
| Adopter | 0.983 | 1.000 | 0.918 | 0.997 | |
| Weighted avg. | 0.994 | 1.000 | 0.971 | 0.997 | |
| CART | Avg. prediction accuracy = 85.2071% | Avg. prediction accuracy = 84.6154% | | | |
| Class | F-Measure | ROC area | F-Measure | ROC | |
| Refuser | 0.912 | 0.802 | 0.909 | 0.765 | |
| Adopter | 0.545 | 0.802 | 0.500 | 0.765 | |
| Weighted avg. | 0.847 | 0.802 | 0.836 | 0.765 | |
Average prediction accuracies, F-Measure, and ROC area for the 20- and 18-feature CCSMA+UPDB datasets (recent years’ disease set removed from UPDB)
| 20 features (lastObs and lastV included)—recent years disease set removed from UPDB | 18 features (lastObs and lastV excluded)—recent years disease set removed from UPDB | ||||
|---|---|---|---|---|---|
| Algorithm | Results | ||||
| NN | Avg. prediction accuracy = 94.0828% | Avg. prediction accuracy = 97.0414% | |||
| Class | F-Measure | ROC area | F-Measure | ROC area | |
| Refuser | 0.965 | 0.830 | 0.982 | 0.969 | |
| Adopter | 0.821 | 0.830 | 0.918 | 0.969 | |
| Weighted avg. | 0.939 | 0.830 | 0.971 | 0.969 | |
| C4.5 DT | Avg. prediction accuracy = 86.9822% | Avg. prediction accuracy = 86.9822% | |||
| Class | F-Measure | ROC area | F-Measure | ROC area | |
| Refuser | 0.920 | 0.892 | 0.920 | 0.892 | |
| Adopter | 0.645 | 0.892 | 0.645 | 0.892 | |
| Weighted avg. | 0.871 | 0.892 | 0.871 | 0.892 | |
| SVM | Avg. prediction accuracy = 83.432% | Avg. prediction accuracy = 84.0237% | |||
| Class | F-Measure | ROC area | F-Measure | ROC | |
| Refuser | 0.900 | 0.703 | 0.904 | 0.707 | |
| Adopter | 0.517 | 0.703 | 0.526 | 0.707 | |
| Weighted avg. | 0.832 | 0.703 | 0.837 | 0.707 | |
| NB | Avg. prediction accuracy = 72.1893% | Avg. prediction accuracy = 72.1893% | |||
| Class | F-Measure | ROC area | F-Measure | ROC | |
| Refuser | 0.820 | 0.672 | 0.820 | 0.668 | |
| Adopter | 0.390 | 0.672 | 0.390 | 0.668 | |
| Weighted avg. | 0.744 | 0.672 | 0.744 | 0.668 | |
| AB | Avg. prediction accuracy = 75.7396% | Avg. prediction accuracy = 75.7396% | |||
| Class | F-Measure | ROC area | F-Measure | ROC | |
| Refuser | 0.842 | 0.725 | 0.842 | 0.728 | |
| Adopter | 0.481 | 0.725 | 0.481 | 0.728 | |
| Weighted avg. | 0.778 | 0.725 | 0.778 | 0.728 | |
| kNN | Avg. prediction accuracy = 99.4083% | Avg. prediction accuracy = 97.0414% | | | |
| Class | F-Measure | ROC area | F-Measure | ROC | |
| Refuser | 0.996 | 1.000 | 0.982 | 0.997 | |
| Adopter | 0.983 | 1.000 | 0.918 | 0.997 | |
| Weighted avg. | 0.994 | 1.000 | 0.971 | 0.997 | |
| CART | Avg. prediction accuracy = 86.9822% | Avg. prediction accuracy = 84.0237% | | | |
| Class | F-Measure | ROC area | F-Measure | ROC | |
| Refuser | 0.922 | 0.834 | 0.904 | 0.767 | |
| Adopter | 0.607 | 0.834 | 0.526 | 0.767 | |
| Weighted avg. | 0.866 | 0.834 | 0.837 | 0.767 | |
Average prediction accuracies, F-Measure, and ROC area for the 20- and 18-feature CCSMA+UPDB datasets (total years’ disease set removed from UPDB)
| 20 features (lastObs and lastV included)—total years disease set removed from UPDB | 18 features (lastObs and lastV excluded)—total years disease set removed from UPDB | ||||
|---|---|---|---|---|---|
| Algorithm | Results | ||||
| NN | Avg. prediction accuracy = 87.574% | Avg. prediction accuracy = 84.0237% | |||
| Class | F-Measure | ROC area | F-Measure | ROC area | |
| Refuser | 0.922 | 0.894 | 0.900 | 0.831 | |
| Adopter | 0.696 | 0.894 | 0.597 | 0.831 | |
| Weighted avg. | 0.882 | 0.894 | 0.847 | 0.831 | |
| C4.5 DT | Avg. prediction accuracy = 76.9231% | Avg. prediction accuracy = 75.7396% | |||
| Class | F-Measure | ROC area | F-Measure | ROC area | |
| Refuser | 0.853 | 0.711 | 0.845 | 0.701 | |
| Adopter | 0.466 | 0.711 | 0.438 | 0.701 | |
| Weighted avg. | 0.784 | 0.711 | 0.773 | 0.701 | |
| SVM | Avg. prediction accuracy = 76.9231% | Avg. prediction accuracy = 78.1065% | |||
| Class | F-Measure | ROC area | F-Measure | ROC | |
| Refuser | 0.853 | 0.690 | 0.861 | 0.697 | |
| Adopter | 0.466 | 0.690 | 0.479 | 0.697 | |
| Weighted avg. | 0.784 | 0.690 | 0.794 | 0.697 | |
| NB | Avg. prediction accuracy = 60.355% | Avg. prediction accuracy = 61.5385% | |||
| Class | F-Measure | ROC area | F-Measure | ROC | |
| Refuser | 0.722 | 0.597 | 0.730 | 0.598 | |
| Adopter | 0.309 | 0.597 | 0.330 | 0.598 | |
| Weighted avg. | 0.649 | 0.597 | 0.659 | 0.598 | |
| AB | Avg. prediction accuracy = 73.3728% | Avg. prediction accuracy = 73.3728% | |||
| Class | F-Measure | ROC area | F-Measure | ROC | |
| Refuser | 0.833 | 0.688 | 0.833 | 0.688 | |
| Adopter | 0.348 | 0.688 | 0.348 | 0.688 | |
| Weighted avg. | 0.747 | 0.688 | 0.747 | 0.688 | |
| kNN | Avg. prediction accuracy = 89.9408% | Avg. prediction accuracy = 85.7988% | | | |
| Class | F-Measure | ROC area | F-Measure | ROC | |
| Refuser | 0.939 | 0.969 | 0.911 | 0.939 | |
| Adopter | 0.721 | 0.969 | 0.647 | 0.939 | |
| Weighted avg. | 0.900 | 0.969 | 0.864 | 0.939 | |
| CART | Avg. prediction accuracy = 79.8817% | Avg. prediction accuracy = 78.1065% | | | |
| Class | F-Measure | ROC area | F-Measure | ROC | |
| Refuser | 0.874 | 0.773 | 0.862 | 0.766 | |
| Adopter | 0.500 | 0.773 | 0.464 | 0.766 | |
| Weighted avg. | 0.808 | 0.773 | 0.792 | 0.766 | |
Average prediction accuracies (%) for models built using CCSMA, UPDB, and CCSMA+UPDB combined
| Algorithm | CCSMA (11 features) | UPDB (24 features) | CCSMA+UPDB (20 features) |
|---|---|---|---|
| NN | 90.75 | 84.02 | 94.08 |
| C4.5 DT | 84.97 | 74.56 | 86.98 |
| SVM | 72.83 | 72.78 | 83.43 |
| NB | 46.82 | 59.17 | 72.19 |
| AB | 68.79 | 66.86 | 75.74 |
| kNN | 92.48 | 85.8 | 99.41 |
| CART | 87.28 | 77.25 | 86.98 |
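Averaging each column of the table above confirms that the fused CCSMA+UPDB feature set outperforms either source alone across the seven classifiers; a quick check with the listed values:

```python
# Per-algorithm accuracies (%), in the row order of the table above
ccsma    = [90.75, 84.97, 72.83, 46.82, 68.79, 92.48, 87.28]
updb     = [84.02, 74.56, 72.78, 59.17, 66.86, 85.80, 77.25]
combined = [94.08, 86.98, 83.43, 72.19, 75.74, 99.41, 86.98]

# The combined column has the highest mean accuracy (~85.5%)
for name, col in [("CCSMA", ccsma), ("UPDB", updb), ("CCSMA+UPDB", combined)]:
    print(name, round(sum(col) / len(col), 2))
```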
ANOVA table for CCSMA, UPDB, and CCSMA+UPDB methods significance test
| | Variable | Df | SS | MS | F | Pr(>F) |
|---|---|---|---|---|---|---|
| ANOVA table for CCSMA and UPDB | Accuracy_all$UPDB | 1 | 1442.7 | 1442.7 | 48.97 | 0.000918 |
| Residuals | 5 | 147.3 | 29.5 | |||
| ANOVA table for CCSMA+UPDB and CCSMA | Accuracy_all$CCSMA_UPDB | 1 | 1442.7 | 1442.7 | 48.97 | 0.00432 |
| Residuals | 5 | 147.3 | 29.5 | |||
| ANOVA table for CCSMA+UPDB and UPDB | Accuracy_all$CCSMA+UPDB | 1 | 503.3 | 503.3 | 133.9 | 0.000085 |
| Residuals | 5 | 18.8 | 3.8 |
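The p-values in the ANOVA table can be sanity-checked: with 1 numerator degree of freedom, an F statistic on (1, 5) df is the square of a t statistic with 5 df, so the upper-tail F probability equals the two-sided t probability. A pure-Python sketch (numeric integration of the t density; scipy's `f.sf` would give the same in one call):

```python
import math

def t_sf(t, df, steps=20000, upper=1000.0):
    """P(T > t) for Student's t with df degrees of freedom,
    by trapezoidal integration of the density over [t, upper]."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    h = (upper - t) / steps
    total = 0.0
    for i in range(steps + 1):
        x = t + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * c * (1.0 + x * x / df) ** (-(df + 1) / 2)
    return total * h

def f_p_value(f_stat, df2):
    """p-value for an F statistic on (1, df2) df, using F(1, df2) = t(df2)^2."""
    return 2.0 * t_sf(math.sqrt(f_stat), df2)

print(round(f_p_value(48.97, 5), 4))   # ~0.0009, consistent with the table's 0.000918
```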
Fig. 3 Influence diagram of features impacting on technology adoption [8]
Fig. 4 Influence diagram based on the combined features from the CCSMA and UPDB datasets
Reasons for refusal by the subjects in the TAUT study
| Reason for refusal | |
|---|---|
| Prefers not to try to learn device and app | |
| Dissatisfied with reminder device; subject unable to learn new reminding tool | |
| Subject’s failing health; subject put on hospice | |
| Participant unable to learn/use technology | |
| The informant reported: participant cannot learn to use device because he/she cannot remember what it is when it alarms—gets nervous that it is fire alarm going off or thinks it is a remote control | |
| Too busy and prefers to use the physical calendar on the fridge. Would like to simplify his life | |
| The participant could not learn to use technology | |
| The participant has a hard time learning smartphone/App and does not care to use it. Prefers regular calendar. Gave satisfaction survey | |
| Reported “not good” at technology and did not use the device | |
| Due to failing health. Participant is on O2 tank 24/7 for pneumonia. Follow-up OK | |
| Inconvenient to carry around extra phone | |
| Too busy and not interested in learning new technology (smartphone or App) but might look into “simple” tablet through AARP. Okay to contact for follow-up | |
| Participant not interested in using or learning how to use the smartphone | |
| Daughter called and insisted that her parents be removed from the study |
Fig. 5 (a) Distribution of males and females in the adopter and refuser classes. (b) Educational information for the adopter and refuser classes
Mean and standard deviation values for the adopter and refuser classes
| Class | Mean | STDEV |
|---|---|---|
| Adopter | 89.76 | 3.11 |
| Refuser | 90.59 | 3.99 |