Boštjan Šumak, Saša Brdnik, Maja Pušnik.
Abstract
To equip computers with human communication skills and to enable natural interaction between the computer and a human, intelligent solutions are required, based on artificial intelligence (AI) methods, algorithms, and sensor technology. This study aimed to identify and analyze state-of-the-art AI methods, algorithms, and sensor technology in existing human-computer intelligent interaction (HCII) research, in order to explore trends in HCII research, categorize existing evidence, and identify potential directions for future research. We conducted a systematic mapping study of the HCII body of research. Four hundred fifty-four studies published in various journals and conference proceedings between 2010 and 2021 were identified and analyzed. Studies in the HCII and IUI fields have primarily focused on intelligent recognition of emotions, gestures, and facial expressions using sensor technology such as cameras, EEG, Kinect, wearable sensors, eye trackers, gyroscopes, and others. Researchers most often apply deep-learning and instance-based AI methods and algorithms. The support vector machine (SVM) is the most widely used algorithm for various kinds of recognition, primarily emotion, facial expression, and gesture recognition. The convolutional neural network (CNN) is the most frequently used deep-learning algorithm in emotion recognition, facial recognition, and gesture recognition solutions.
Keywords: IUI; artificial intelligence; human–computer intelligent interaction; intelligent user interfaces; sensors
Year: 2021 PMID: 35009562 PMCID: PMC8747169 DOI: 10.3390/s22010020
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
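The abstract reports that the support vector machine (SVM) is the most widely used algorithm for recognition tasks in the surveyed studies. As a minimal sketch of the kind of pipeline such studies employ, the following trains an SVM on synthetic stand-ins for per-window sensor features (e.g., EEG band power); the feature values and class structure are illustrative assumptions, not data from any surveyed study.

```python
# Sketch: SVM classification of synthetic "sensor" features for a
# two-class recognition task (e.g., two emotion categories).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical per-window feature vectors (8 features per window):
# two classes drawn from Gaussians with shifted means.
X = np.vstack([
    rng.normal(0.0, 1.0, (100, 8)),   # class 0
    rng.normal(1.5, 1.0, (100, 8)),   # class 1
])
y = np.array([0] * 100 + [1] * 100)

# Standardize features, then fit an RBF-kernel SVM -- the typical
# recipe when features live on different physical scales.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
acc = clf.score(X, y)
```

In practice the surveyed studies would replace the synthetic matrix `X` with features extracted from camera, EEG, or wearable-sensor signals, and evaluate with held-out data rather than training accuracy.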
Figure 1 A general architecture of a multimodal HCII system (adapted from [6]).
Figure 2 Systematic mapping study process adapted from [127].
Figure 3 Research process map.
Research questions.
| ID | Research Question |
|---|---|
| RQ1 | What have been the trends and demographics of the literature within the field of HCII? For the first main research question, the following research sub-questions were formulated: |
| RQ1.1 | What is the annual number of publications in the HCII field? (Publication count by year). |
| RQ1.2 | Which studies in the HCII field are most cited? (Top-cited studies). |
| RQ1.3 | Which countries are contributing the most to the HCII field, based on the affiliations of the researchers? (Active countries). |
| RQ1.4 | Which venues (i.e., journals, conferences) are the main targets of articles in the HCII field, measured by the number of published articles? (Top venues). |
| RQ2 | What has been the research space of the literature within the field of HCII in the last decade? For the second main research question, the following sub-questions were formulated: |
| RQ2.1 | What type of research is conducted in the HCII field? (Research type, e.g., quantitative, qualitative, and mixed). |
| RQ2.2 | What types of research methods have been used in the HCII studies? (Research method type, e.g., validation research, evaluation research, solution proposal). |
| RQ2.3 | What research methodology is used for validating or evaluating the proposed HCII solution? (Research methodology, e.g., experiment, case study, etc.). |
| RQ2.4 | What are the common data collection methods in the HCII studies? (Data collection method, e.g., measurement with sensors, questionnaires, observing users, image processing, etc.). |
| RQ2.5 | What is the main standpoint of the research studies? (Research standpoint). |
| RQ2.6 | What phases of HCII development and evaluation are presented in existing studies (analysis, design, implementation, and testing)? (HCII development phase). |
| RQ3 | What sensor technology and intelligent methods and algorithms have been used in the development and evaluation of solutions for HCII? For the third main research question, the following sub-questions were formulated: |
| RQ3.1 | What is the main aim of the intelligent recognition? (Recognition-of, e.g., emotion, gesture, etc.). |
| RQ3.2 | What is the main data source for the evaluation of the proposed solution of the HCII? (Data source, e.g., audio signal, audiovisual information, sensor, etc.). |
| RQ3.3 | What type of sensor was used in the studies? (Sensor-type, e.g., camera, Kinect, etc.). |
| RQ3.4 | What AI method and algorithms were used? (AI-methods and algorithms used, e.g., ANN, CNN, etc.). |
Articles retrieved from the selected digital libraries using the specified search string.
| Database | Nr. of Articles |
|---|---|
| ACM | 889 |
| IEEE | 1488 |
| MDPI | 46 |
| Science Direct | 1449 |
| Scopus | 1395 |
| Web of Science | 421 |
| Total | 5642 |
Inclusion and exclusion criteria.
| Criteria | Description | |
|---|---|---|
| I1 | Field | Include studies addressing intelligent interaction or intelligent user interfaces. |
| I2 | Language | The article must be written in English. |
| I3 | Availability | The article must be accessible electronically. |
| I4 | Literature type | Include articles published in peer-reviewed journals, conference proceedings, or a book (e.g., lecture notes). |
| E1 | Year | Exclude literature published before the year 2010. |
| E2 | Duplicates | Exclude any duplicated studies found in multiple databases. |
| E3 | Research area | Exclude non-computer science or non-human–computer interaction literature. |
| E4 | Methodology type | Exclude articles that report results of a systematic literature review or systematic mapping study. |
| E5 | Language | Exclude articles not written in English. |
| E6 | Field | Exclude studies outside of the scope of HCII or II. |
| E7 | Length | Exclude articles less than four pages long that do not provide enough information about the study conducted. |
Steps in screening and selection of the relevant literature.
| Step | Activity | Nr. of Articles |
|---|---|---|
| I | Automatic search in digital libraries | 5642 |
| II | Applying E1 | 3335 |
| III | Screening by title and abstract (applying I1-I4) | 657 |
| IV | Applying E2 (removing duplicates) | 622 |
| V | Screening by fast-reading the manuscript (applying E3–E7) | 454 |
Figure 4 Flow diagram of the database searches and article screening process.
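The screening steps above amount to a sequence of filters over the retrieved records. A minimal sketch of that pipeline on hypothetical article records (the field names and sample entries are illustrative, not data from the study):

```python
# Sketch of the screening pipeline: apply E1 (year cutoff), then
# remove cross-database duplicates, mirroring steps II and IV above.
articles = [
    {"id": 1, "year": 2009, "title": "Early affective interface study"},
    {"id": 2, "year": 2015, "title": "HCII emotion recognition"},
    {"id": 3, "year": 2015, "title": "HCII emotion recognition"},  # duplicate hit
    {"id": 4, "year": 2020, "title": "Gesture UI with Kinect"},
]

# E1: exclude literature published before 2010.
after_e1 = [a for a in articles if a["year"] >= 2010]

# Duplicate removal: keep the first record per title (here, titles
# stand in for whatever matching key the databases share).
seen, deduped = set(), []
for a in after_e1:
    if a["title"] not in seen:
        seen.add(a["title"])
        deduped.append(a)
```

The remaining manual steps (title/abstract screening, fast-reading) are human judgments against criteria I1–I4 and E3–E7 and are not automatable this way; the sketch only covers the mechanical filters.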
Coding and classification scheme.
| Variable | Description | |
|---|---|---|
| EC1 | Article type | Journal article, conference paper, book section |
| EC2 | Research type | Quantitative, qualitative, mixed |
| EC3 | Research method type | Validation research, evaluation research, solution proposal, philosophical paper, opinion paper, experience paper |
| EC4 | Research strategy | Case study, experiment, survey, grounded theory, user study, field study, mixed study, exploratory study, literature review |
| EC5 | Data collection method | Interview, meta-analysis, observing users, prototype development, questionnaire, systematic literature review, systematic mapping study, usability test, user experience evaluation, Wizard of Oz, measurement with sensors, simulation, existing database data analysis |
| EC6 | Research standpoint | Accessible UI, adaptive UI, artificial intelligence, brain computer interface (BCI), human–computer interaction (HCI), human–machine interaction (HMI), intelligent interaction (II), intelligent UI |
| EC7 | HCII development phase | Analysis, design, implementation, testing |
| EC8 | Study environment | Laboratory setting, real-world setting |
| EC9 | Recognition of | 3D Gaze, activity, attention, behavior, body motion, etc. |
| EC10 | Data source | Audio, audiovisual information, camera, ECG, EDA, EEG, EMG, EOG, eye gaze, etc. |
| EC11 | Sensor type | Accelerometer, ambient sensors, biometric sensor, blood volume pulse sensor (BVP), camera, EEG sensor, eye tracker, etc. |
| EC12 | AI methods used | Adaboost algorithm, ANN, back propagation neural network (BPNN), bag-of-features (BOF), Bayesian deep-learning network (BDLN), BN, bidirectional long short-term memory recurrent neural network (BLSTM), C4.5, Combinatorial fusion analysis (CFA), etc. |
Figure 5 Number of studies published per year, 2010–2021 (all = 454).
Figure 6 Percentage of papers based on publication type (all = 454).
Top ten journals and conferences regarding the number of published articles.
| Journal | Nr. of Articles |
|---|---|
| | 21 |
| | 9 |
| | 6 |
| | 5 |
| | 5 |
| | 4 |
| | 4 |
| | 3 |
| | 3 |
| | 3 |

| Conference | Nr. of Articles |
|---|---|
| International Conference on Affective Computing and Intelligent Interaction (ACII) | 56 |
| Humaine Association Conference on Affective Computing and Intelligent Interaction | 24 |
| Asian Conference on Affective Computing and Intelligent Interaction (ACII Asia) | 15 |
| International Conference on Intelligent Computing and Control Systems (ICICCS) | 8 |
| International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW) | 7 |
| IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems (CYBER) | 5 |
| International Conference on Intelligent Environments | 5 |
| International Conference on Intelligent Human-Machine Systems and Cybernetics | 5 |
| International conference on Intelligent User Interfaces | 4 |
| Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | 4 |
Top ten most cited studies by the total number of citations.
| Title | Year | Journal/Conference | Nr. of Citations |
|---|---|---|---|
| Analysis of EEG Signals and Facial Expressions for Continuous Emotion Detection [ | 2016 | | 219 |
| Stress recognition using wearable sensors and mobile phones [ | 2013 | Humaine Association Conference on Affective Computing and Intelligent Interaction | 214 |
| EmotionMeter: A Multimodal Framework for Recognizing Human Emotions [ | 2019 | | 188 |
| Sparse Autoencoder-Based Feature Transfer Learning for Speech Emotion Recognition [ | 2013 | Humaine Association Conference on Affective Computing and Intelligent Interaction | 152 |
| Deep-learning analysis of mobile physiological, environmental and location sensor data for emotion detection [ | 2019 | | 102 |
| EEG-Based Mobile Robot Control Through an Adaptive Brain–Robot Interface [ | 2014 | | 96 |
| Gender-Driven Emotion Recognition Through Speech Signals For Ambient Intelligence Applications [ | 2013 | | 79 |
| From Activity Recognition to Intention Recognition for Assisted Living Within Smart Homes [ | 2017 | | 78 |
| Error weighted semi-coupled hidden markov model for audio-visual emotion recognition [ | 2012 | | 77 |
| Detecting Naturalistic Expressions of Nonbasic Affect Using Physiological Signals [ | 2012 | | 68 |
Top ten most cited studies by the average number of citations per year.
| Title | Year | Journal/Conference | Average Citations per Year |
|---|---|---|---|
| EmotionMeter: A Multimodal Framework for Recognizing Human Emotions [ | 2019 | | 62.67 |
| Analysis of EEG Signals and Facial Expressions for Continuous Emotion Detection [ | 2016 | | 36.5 |
| Deep-learning analysis of mobile physiological, environmental and location sensor data for emotion detection [ | 2019 | | 34 |
| Stress recognition using wearable sensors and mobile phones [ | 2013 | Humaine Association Conference on Affective Computing and Intelligent Interaction | 23.78 |
| Identifying Stable Patterns over Time for Emotion Recognition from EEG [ | 2019 | | 19 |
| Sparse Autoencoder-Based Feature Transfer Learning for Speech Emotion Recognition [ | 2013 | Humaine Association Conference on Affective Computing and Intelligent Interaction | 16.89 |
| From Activity Recognition to Intention Recognition for Assisted Living Within Smart Homes [ | 2017 | | 15.6 |
| FER-net: facial expression recognition using deep neural net [ | 2021 | | 15 |
| MultiD-CNN: A multi-dimensional feature learning approach based on deep convolutional networks for gesture recognition in RGB-D image sequences [ | 2020 | | 13.5 |
| EEG-Based Mobile Robot Control Through an Adaptive Brain–Robot Interface [ | 2014 | | 12 |
Top ten countries regarding their contribution to the literature.
| Country | Nr. of Articles | % |
|---|---|---|
| China | 128 | 28 |
| USA | 57 | 13 |
| India | 47 | 10 |
| United Kingdom | 33 | 7 |
| Germany | 24 | 5 |
| Japan | 19 | 4 |
| South Korea | 18 | 4 |
| Italy | 13 | 3 |
| Australia | 13 | 3 |
| Canada | 12 | 3 |
Figure 7 Number of studies by year and research type.
Figure 8 Number of studies by year and research methodology.
Figure 9 Number of studies by year and research method type.
Figure 10 Number of studies by year and data collection method.
Figure 11 Number of studies by year and research standpoint.
Figure 12 Distribution of HCII solutions’ development phases according to the research standpoint.
Figure 13 Distribution of HCI recognition solutions according to the research standpoint.
Figure 14 Distribution of data sources in existing HCI recognition solutions.
Figure 15 Distribution of sensors in existing HCI recognition solutions.
Figure 16 Distribution of AI methods and algorithms according to the research standpoint.
Figure 17 Distribution of AI methods and algorithms for HCI recognition solutions.
Figure 18 Distribution of data sources used in AI methods and algorithms.
Figure 19 Distribution of sensors used in AI methods and algorithms.
Figure 20 Distribution of AI methods and algorithms used for HCI recognition solutions.