Annika Boos, Olivia Herzog, Jakob Reinhardt, Klaus Bengler, Markus Zimmermann.
Abstract
When do we follow requests and recommendations, and which ones do we choose not to comply with? This publication combines definitions of compliance and reactance as behaviours and as affective processes in one model for application to human-robot interaction. The framework comprises three steps: human perception, comprehension, and selection of an action following a cue given by a robot. The paper outlines the application of the model in different study settings, such as controlled experiments that allow for the assessment of cognition as well as observational field studies that lack this possibility. Guidance for defining and measuring compliance and reactance is outlined, and strategies for improving robot behaviour are derived for each step in the process model. Design recommendations for each step are condensed into three principles on information economy, adequacy, and transparency. In summary, we suggest that in order to maximise the probability of compliance with a cue and to avoid reactance, interaction designers should aim for a high probability of perception, a high probability of comprehension, and prevent negative affect. Finally, an example application is presented that uses existing data from a laboratory experiment in combination with data collected in an online survey to outline how the model can be applied to evaluate a new technology or interaction strategy using the concepts of compliance and reactance as behaviours and affective constructs.
Keywords: compliance; human-robot interaction; reactance; robotics; trust
Year: 2022 PMID: 35685618 PMCID: PMC9171073 DOI: 10.3389/frobt.2022.733504
Source DB: PubMed Journal: Front Robot AI ISSN: 2296-9144
FIGURE 1. Process model of compliance and reactance as actions following the perception and cognition of a given cue.
FIGURE 2. Structural probability tree diagram depicting the compliance–reactance framework, as proposed in this paper.
FIGURE 3. Probability tree for context A: human and robot with similar task urgency, corresponding to a difference in task urgency of zero.
FIGURE 4. Probability tree for context B: the robot is assigned a task that is perceived as marginally more urgent than that of the participant, corresponding to a difference in task urgency of one.
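The probability-tree framing described in the abstract and Figures 2–4 can be illustrated as a simple chain of conditional steps: a cue leads to compliance only if it is perceived, then comprehended, and the compliant action is then selected. The sketch below is a minimal illustration of that multiplication of step probabilities; the function name and all numeric values are illustrative assumptions, not data or notation from the paper.

```python
def p_compliance(p_perceive: float, p_comprehend: float, p_select_comply: float) -> float:
    """Probability that a cue results in compliant behaviour, assuming the
    three steps (perception, comprehension, action selection) succeed in
    sequence and each step's probability is conditional on the previous one."""
    return p_perceive * p_comprehend * p_select_comply

# Illustrative example: a salient, clearly worded cue with some residual
# risk of non-compliance (e.g. reactance) at the action-selection step.
p = p_compliance(0.95, 0.90, 0.80)
print(round(p, 3))  # 0.95 * 0.90 * 0.80 = 0.684
```

This makes the design principles from the abstract concrete: because the overall probability is a product, a low probability at any single step (an unnoticed cue, an ambiguous message, or negative affect at selection) caps the probability of compliance regardless of how strong the other steps are.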