Marlena R Fraune, Iolanda Leite, Nihan Karatas, Aida Amirova, Amélie Legeleux, Anara Sandygulova, Anouk Neerincx, Gaurav Dilip Tikas, Hatice Gunes, Mayumi Mohan, Nida Itrat Abbasi, Sudhir Shenoy, Brian Scassellati, Ewart J de Visser, Takanori Komatsu.
Abstract
The field of human-robot interaction (HRI) is multidisciplinary, requiring researchers to understand fields as diverse as computer science, engineering, informatics, philosophy, and psychology. However, it is hard to be an expert in everything. To help HRI researchers develop methodological skills, especially in areas that are relatively new to them, we conducted a virtual workshop, Workshop Your Study Design (WYSD), at the 2021 International Conference on HRI. In this workshop, we grouped participants with mentors who are experts in areas such as real-world studies, empirical lab studies, questionnaire design, interviews, participatory design, and statistics. During and after the workshop, participants discussed their proposed study methods, obtained feedback, and improved their work accordingly. In this paper, we present 1) workshop attendees' feedback about the workshop and 2) lessons that the participants learned during their discussions with mentors. Participants' responses about the workshop were positive, and future scholars who wish to run such a workshop can consider implementing their suggestions. The main contribution of this paper is the lessons learned section, which the workshop participants shaped based on what they discovered during the workshop. We organize the lessons learned into four themes, reflecting the areas of the papers submitted to the workshop: 1) improving study design for HRI, 2) working with participants, especially children, 3) making the most of the study's and robot's limitations, and 4) collaborating well across fields. These themes include practical tips and guidelines to help researchers learn about fields of HRI research with which they have limited experience. We include specific examples, and researchers can adapt the tips and guidelines to their own areas to avoid common mistakes and pitfalls in their research.
Keywords: human-robot interaction; methodology; qualitative; quantitative; replication; reproducibility; research; statistics
Year: 2022 PMID: 35155588 PMCID: PMC8832512 DOI: 10.3389/frobt.2021.772141
Source DB: PubMed Journal: Front Robot AI ISSN: 2296-9144
The keywords and their frequencies from the 16 submissions (*denotes keywords from authors who contributed to the paper).
| Keyword | Frequency | Keyword | Frequency |
|---|---|---|---|
| Child-robot interaction* | 5 | Interdisciplinary* | 1 |
| Human-robot interaction* | 4 | Language learning* | 1 |
| Trust | 3 | Mental Model | 1 |
| Human-robot teaming | 2 | Multimodal explanation | 1 |
| Robot-Assisted Therapy* | 2 | Multimodal sensing* | 1 |
| Children with Autism* | 2 | Navigation | 1 |
| Acceptability* | 1 | Non-Expert User* | 1 |
| Adaptive instruction | 1 | Pain Management* | 1 |
| Anthropomorphism* | 1 | Parental inclusion* | 1 |
| Artificial Social Intelligence | 1 | Programming by Demonstrations* | 1 |
| Coaching | 1 | Reciprocal peer tutoring* | 1 |
| Collaborative and social computing devices* | 1 | Reinforcement Learning* | 1 |
| Computer systems organization* | 1 | Scene understanding | 1 |
| Emotion Recognition* | 1 | Social attributions | 1 |
| External interfaces for robotics* | 1 | Social robot* | 1 |
| Group Dynamics* | 1 | Socially assistive robotics | 1 |
| Healthcare* | 1 | Tactile perception | 1 |
| Human Factors | 1 | Team Innovation Capability* | 1 |
| Human-AI Teaming | 1 | Team Performance* | 1 |
| Human-centered computing* | 1 | Technology acceptance | 1 |
| Humanoid Home care | 1 | Theory of Mind | 1 |
| Intent prediction | 1 | Understandability | 1 |
| Interactive explanation | 1 | Wellbeing assessment* | 1 |
Time table for Session 1 and Session 2.
| Session 1 (United States and East Asia) | | | Session 2 (United States and Europe) | | |
|---|---|---|---|---|---|
| MT 17:00 | JST 09:00 | Opening remarks | MT 07:00 | GMT 14:00 | Opening remarks |
| MT 17:15 | JST 09:15 | Breakout mentoring 1 | MT 07:15 | GMT 14:15 | Breakout mentoring 2 |
| MT 18:15 | JST 10:15 | Coffee break | MT 08:15 | GMT 15:15 | Coffee break |
| MT 18:30 | JST 10:30 | Individual work-time/ask mentor | MT 08:30 | GMT 15:30 | Individual work-time/ask mentor |
| MT 19:30 | JST 11:30 | Whole group discussion: lessons learned | MT 09:30 | GMT 16:30 | Whole group discussion: lessons learned |
| MT 20:00 | JST 12:00 | Closing remarks | MT 10:00 | GMT 17:00 | Closing remarks |
| MT 20:15 | JST 12:15 | Break until Session 2 | MT 10:15 | GMT 17:15 | Workshop end |
FIGURE 1 | The workshop sessions included Breakout mentoring (main discussion with a main mentor, a secondary mentor, and two mentees), Individual work time/ask mentor (individual work, discussion, and questions with different mentors), and Whole group discussion parts. A, B, C, and D in the Breakout mentoring and Individual work-time/ask mentor parts refer to the breakout rooms. Times are written in Japanese Standard Time (JST) and Mountain Time (MT).
FIGURE 2 | In total, 29 participants took part in the workshop; 21 of them filled out the survey, and 12 of them contributed to this paper (red: mentor, blue: mentee, yellow: mentee's colleague).
FIGURE 3 | Survey results (means reported). Error bars represent standard error. Participants reported answers on a scale from 1 (Strongly Disagree) to 5 (Strongly Agree).
FIGURE 4 | Graphical representation of Lessons Learned topics.