Caleb Vatral, Gautam Biswas, Clayton Cohn, Eduardo Davalos, Naveeduddin Mohammed.
Abstract
Simulation-based training (SBT) programs are commonly employed by organizations to train individuals and teams for effective workplace cognitive and psychomotor skills in a broad range of applications. Distributed cognition has become a popular cognitive framework for the design and evaluation of these SBT environments, with structured methodologies such as Distributed Cognition for Teamwork (DiCoT) used for analysis. However, the analysis and evaluations generated by such distributed cognition frameworks require extensive domain-knowledge and manual coding and interpretation, and the analysis is primarily qualitative. In this work, we propose and develop the application of multimodal learning analysis techniques to SBT scenarios. Using these analysis methods, we can use the rich multimodal data collected in SBT environments to generate more automated interpretations of trainee performance that supplement and extend traditional DiCoT analysis. To demonstrate the use of these methods, we present a case study of nurses training in a mixed-reality manikin-based (MRMB) training environment. We show how the combined analysis of the video, speech, and eye-tracking data collected as the nurses train in the MRMB environment supports and enhances traditional qualitative DiCoT analysis. By applying such quantitative data-driven analysis methods, we can better analyze trainee activities online in SBT and MRMB environments. With continued development, these analysis methods could be used to provide targeted feedback to learners, a detailed review of training performance to the instructors, and data-driven evidence for improving the environment to simulation designers.Entities:
Keywords: DiCoT; distributed cognition; human performance; learning analytics (LA); mixed reality (MR); multimodal data; multimodal learning analytics (MMLA); simulation based training (SBT)
Year: 2022 PMID: 35937140 PMCID: PMC9353401 DOI: 10.3389/frai.2022.941825
Source DB: PubMed Journal: Front Artif Intell ISSN: 2624-8212
Figure 1. Illustration of the interactions between each of the five DiCoT themes and how they work together to construct the entire cognitive system.
The 18 principles of DiCoT analysis, summarized from Blandford and Furniss (2006).
| Principle | Description |
|---|---|
| 1. Space and cognition | The role space and spatial layout play in supporting cognition |
| 2. Perceptual principle | Spatial representations support cognition more than non-spatial representations, as long as there is a clear mapping between the space and that which the space represents |
| 3. Naturalness principle | Cognition is aided when the form of a representation matches the properties of what it represents |
| 4. Subtle bodily supports | Individuals often use their body to support cognitive processes |
| 5. Situation awareness | People need to be informed of and understand what has previously happened, what is currently going on, and what is planned |
| 6. Horizon of observation | The information that can be seen or heard by a person; closely related to and influencing situation awareness |
| 7. Arrangement of equipment | The layout of equipment affects what information people have access to, and thus their ability to process it |
| 8. Information movement | Information moves around a system in a number of ways, which all have unique functional consequences |
| 9. Information transformation | Information can be represented in many forms, and often must transform between these forms when moving and when being processed |
| 10. Information hubs | A central focus or source where different channels of information meet and are processed together |
| 11. Buffering | If incoming information interferes with ongoing activities, buffering allows the information to be held until an appropriate time when it will not interfere |
| 12. Communication bandwidth | Different modalities of communication often carry different amounts of information. For example, face-to-face communication offers more information than computer-mediated communication |
| 13. Informal communication | Not all communication is formal, and sometimes informal communication can carry very important information that is not otherwise passed |
| 14. Behavioral trigger factors | Groups of people can operate together without an overall plan by individually responding appropriately to certain local trigger factors |
| 15. Mediating artifacts | People often bring artifacts into coordination to support completion of a task |
| 16. Creating scaffolding | People often simplify their cognitive tasks by utilizing their environment |
| 17. Representation-goal parity | When an artifact is used to represent the system's goal, representations closer to the goal of the user are more powerful |
| 18. Coordination of resources | Different information structures can be coordinated to aid in cognition |
Figure 2. The overall theoretical framework to combine qualitative DiCoT analysis with quantitative multimodal analytics for understanding learner behaviors in simulation-based training.
Figure 3. Layout of the simulated hospital room from three viewpoints: the head camera (top-left), foot camera (bottom-left), and an abstract map representation (right).
Figure 4. Cognitive task model for the nursing simulation domain.
Figure 5. Example of distributed cognition in the context of nurse training across the physical layout, artifacts, and social themes.
Sample dialogue from S1 demonstrating evaluation of the nurse.
| 1 | Nurse: | I'm going to stop this infusion really quickly. |
| 2 | Patient: | Why? |
| 3 | Nurse: | Because when we give red blood cells, an indication that you're having a reaction to it is low back pain and feeling itchy. So it sounds like you're having a reaction to the blood transfusion. |
Figure 6. The overall computational architecture used for the quantitative analysis.
Figure 7. An example of dialogue from Scenario 1 that has been annotated using the developed tagging schema.
Figure 8. An example of a fixation overlay from Scenario 2 used for manual annotation. In this frame, the resulting area of interest (AOI) is “patient”.
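Figure 8 describes labeling each gaze fixation with the area of interest (AOI) it lands on, done manually in the study. A minimal sketch of how such fixation-to-AOI labeling could be automated with 2D bounding boxes; the AOI names, box coordinates, and function names here are illustrative assumptions, not taken from the paper:

```python
# Map gaze fixation points to areas of interest (AOIs) via 2D bounding boxes.
# AOI names and box coordinates are hypothetical, for illustration only.

# Each AOI: name -> (x_min, y_min, x_max, y_max) in frame pixel coordinates.
AOIS = {
    "patient": (400, 200, 900, 700),
    "monitor": (950, 100, 1200, 400),
}

def label_fixation(x, y, aois=AOIS, default="other"):
    """Return the name of the first AOI whose box contains (x, y)."""
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return default
```

In practice the boxes would come from object detection or manual region annotation per camera view, with overlapping AOIs resolved by a priority order.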
Figure 9. The complete timeline of events for Scenario S1, containing annotated data on participant position, action, gaze, and speech.
Figure 10. The complete timeline of events for Scenario S2, containing annotated data on participant position, action, gaze, and speech.
Figure 11. Distribution of gaze across five major object categories conditioned on the nurse's position in the room for each scenario.
Figure 12. Distribution of total speech acts conditioned on the nurse's position in the room for each scenario.
Figure 13. Marginal distribution of nurse gaze across five major object categories for each scenario.
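Figures 11 and 12 describe conditional distributions of gaze and speech acts given the nurse's position in the room. A minimal sketch of how such a conditional distribution could be computed from an annotated timeline; the position and AOI labels below are illustrative assumptions, not values reported in the paper:

```python
from collections import Counter, defaultdict

# Hypothetical (position, gaze AOI) annotations, one pair per time step.
records = [
    ("bedside", "patient"), ("bedside", "patient"),
    ("monitor_side", "monitor"), ("bedside", "chart"),
    ("monitor_side", "monitor"),
]

def conditional_distribution(pairs):
    """P(gaze AOI | position) as nested dicts; each inner dict sums to 1."""
    counts = defaultdict(Counter)
    for position, aoi in pairs:
        counts[position][aoi] += 1
    return {
        pos: {aoi: n / sum(c.values()) for aoi, n in c.items()}
        for pos, c in counts.items()
    }

dist = conditional_distribution(records)
```

The same aggregation applies to speech acts (Figure 12) by swapping the AOI label for a speech-act tag, and the marginal distribution of Figure 13 follows by dropping the position key before counting.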