| Literature DB >> 35408387 |
Antonio Costantino Marceddu, Luigi Pugliese, Jacopo Sini, Gustavo Ramirez Espinosa, Mohammadreza Amel Solouki, Pietro Chiavassa, Edoardo Giusto, Bartolomeo Montrucchio, Massimo Violante, Francesco De Pace.
Abstract
Teaching is an activity that requires understanding the class's reaction in order to evaluate the effectiveness of the teaching methodology. This can be easy to achieve in small classrooms, but it may be challenging in classes of 50 or more students. This paper proposes a novel Internet of Things (IoT) system, based on the redundant use of non-invasive techniques such as facial expression recognition and physiological data analysis, to aid teachers in their work. Facial expression recognition is performed using a Convolutional Neural Network (CNN), while physiological data are obtained via Photoplethysmography (PPG). Referring to Russell's circumplex model, we grouped the most important Ekman facial expressions recognized by the CNN into active and passive ones. Then, operations such as thresholding and windowing were performed to make it possible to compare and analyze the results from both sources. Using a window size of 100 samples, both sources detected a level of attention of about 55.5% in the in-presence lecture tests. By comparing the results from in-presence and pre-recorded remote lectures, it can be noted that, thanks to the validation with physiological data, facial expressions alone seem useful for determining students' level of attention in in-presence lectures.
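The abstract describes grouping recognized facial expressions into active and passive categories (following Russell's model) and then applying windowing and thresholding to derive a per-sample attention signal. A minimal sketch of that idea is shown below; the expression-to-group mapping, the function name, and the 0.5 threshold are illustrative assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch of the windowing/thresholding step described in the
# abstract. The grouping of Ekman expressions into active/passive and the
# 0.5 threshold are assumptions for demonstration purposes only.
from collections import deque

# Hypothetical grouping following Russell's circumplex model
ACTIVE = {"happiness", "surprise", "anger", "fear"}
PASSIVE = {"sadness", "disgust", "neutral"}


def attention_series(expressions, window_size=100, threshold=0.5):
    """For each sample, report 1 if the fraction of 'active' expressions
    within the sliding window reaches the threshold, else 0."""
    window = deque(maxlen=window_size)  # keeps only the last window_size samples
    out = []
    for expr in expressions:
        window.append(1 if expr in ACTIVE else 0)
        frac = sum(window) / len(window)
        out.append(1 if frac >= threshold else 0)
    return out


# Example: a stream of 60 passive samples followed by 140 active ones
labels = ["neutral"] * 60 + ["happiness"] * 140
series = attention_series(labels, window_size=100)
attention_level = sum(series) / len(series)  # fraction of attentive samples
```

With a window size of 100, the attention flag only flips once enough active samples have accumulated in the window, which smooths out short-lived expression changes, the same motivation the paper gives for comparing windowed results across both data sources.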
Keywords: behavioral analysis; facial expressions; heart rate variability; image databases; neural networks; physiological data
Year: 2022 PMID: 35408387 PMCID: PMC9003217 DOI: 10.3390/s22072773
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Normalized confusion matrix of the neural network described in [25], trained by using the “Ensemble 1” dataset [29].
Figure 2. The workflow of the proposed analysis approach. The dashed arrows represent the WS calibration feedback.
Figure 3. Comparison of the AGs obtained from both sources (facial expressions and physiological data) during the in-presence and remote lectures with different window sizes.
Figure 4. ABs of the in-presence and remote lectures obtained from both sources with a WS of 100 samples, applying the 5 min attention counters every second.