Jin Joo Lee, W. Bradley Knox, Jolie B. Wormwood, Cynthia Breazeal, David DeSteno.
Abstract
We present a computational model capable of predicting, above human accuracy, the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identified nonverbal cues that signal untrustworthy behavior and demonstrated the human mind's readiness to interpret those cues when assessing the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our prior human-subjects experiments, when incorporated into the feature-engineering process, permits a computational model to outperform both human predictions and a baseline model built without this domain knowledge. We then present the construction of hidden Markov models to investigate temporal relationships among the trust-related nonverbal cues. By interpreting the resulting learned structure, we observe that models built to emulate different levels of trust exhibit different sequences of nonverbal cues. From this observation, we derive sequence-based temporal features that further improve the accuracy of our computational model. The multi-step research process presented in this paper combines the strengths of experimental manipulation and machine learning not only to design a computational trust model but also to further our understanding of the dynamics of interpersonal trust.
Keywords: computational trust model; human-robot interaction; interpersonal trust; machine learning; nonverbal behavior; social signal processing
Year: 2013 PMID: 24363649 PMCID: PMC3850257 DOI: 10.3389/fpsyg.2013.00893
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
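The abstract's HMM-based classification idea (fit one model per trust level, then score a cue sequence under each) can be illustrated with a minimal sketch. All parameters below are hypothetical, not the paper's trained models; the forward algorithm is hand-rolled in NumPy so the sketch is self-contained.

```python
import numpy as np

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm (scaled to avoid underflow)."""
    alpha = start * emit[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]  # predict, then weight by emission
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s
    return loglik

# Hypothetical 2-state HMMs over 3 coded cues (e.g. hand touch, face touch, other).
start = np.array([0.6, 0.4])
trans_low  = np.array([[0.7, 0.3], [0.4, 0.6]])            # "low-trust" model
emit_low   = np.array([[0.6, 0.3, 0.1], [0.2, 0.2, 0.6]])
trans_high = np.array([[0.9, 0.1], [0.1, 0.9]])            # "high-trust" model
emit_high  = np.array([[0.1, 0.1, 0.8], [0.2, 0.1, 0.7]])

seq = [0, 1, 0, 0, 2]  # an observed cue sequence
ll_low = forward_loglik(seq, start, trans_low, emit_low)
ll_high = forward_loglik(seq, start, trans_high, emit_high)
label = "low" if ll_low > ll_high else "high"
```

In practice the per-level models would be trained on annotated interaction data; here the point is only the compare-log-likelihoods decision rule.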
Figure 1. A participant engaging in a 10-min conversation with a teleoperated humanoid robot, Nexi, here expressing the low-trust cue of hand touching.
Figure 2. Lab room setup for the human-subjects experiment. Participants engaged in a "get-to-know-you" interaction with another random participant. Slips of paper on the table listed some conversation topic suggestions.
Figure 3. Annotated nonverbal behaviors of participants. Gestures within a category are mutually exclusive.
The 30 features for the domain-knowledge model, which narrowed and extended the initial feature set by following the three guidelines from Phase 1.
| Perspective | Feature type | Description |
| --- | --- | --- |
| Self | Frequency | # Times gesture emitted |
| Self | Duration | % Time gesture held |
| Self | Joint | Mean( |
| Self | Joint | Mean( |
| Partner | Frequency | # Times gesture emitted |
| Partner | Duration | % Time gesture held |
| Partner | Joint | Mean( |
| Partner | Joint | Mean( |
| Diff | Frequency | |
| Diff | Duration | |
| Diff | Joint | |
| Diff | Joint | |
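The frequency and duration features above can be computed directly from annotated gesture intervals. A minimal sketch, assuming a hypothetical annotation format of (start, end) times in seconds:

```python
def gesture_features(intervals, session_length):
    """Frequency (# times the gesture is emitted) and duration (% of the
    session the gesture is held), from (start, end) annotation intervals."""
    frequency = len(intervals)
    held = sum(end - start for start, end in intervals)
    duration_pct = 100.0 * held / session_length
    return frequency, duration_pct

# A participant's hand-touch annotations over a 600 s (10-min) interaction.
hand_touch = [(12.0, 15.5), (300.0, 303.0), (590.0, 592.5)]
freq, dur = gesture_features(hand_touch, 600.0)
# freq == 3; dur == 1.5 (percent of the session)
```

The "Diff" features would then be simple differences between the self and partner values of the same feature.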
Figure A1. This plot shows the hyper-parameter values C and γ selected (not showing repetitions) at different iterations of the outer cross-validation loop for the SVMs built in Phases 2 and 3. SVM-D (in Phases 2 and 3) and SVM-S (in Phase 3) have relatively stable hyper-parameter values across cross-validation folds, while SVM-S (in Phase 2) varies between eight sets of values.
The mean prediction error of the SVM-D (domain-knowledge) model, and its comparison to that of the SVM-S (standard-selection) model.
| Model | Mean prediction error | |
| --- | --- | --- |
| SVM-D | 0.74 | – |
| | 0.83 | |
| Human | 1.00 | |
| SVM-S | 1.00 | |
| Random | 1.46 | |
An asterisk symbol denotes statistical significance.
Figure 4. The distribution of tokens given by participants. The majority (41%) gave two tokens. An a priori model based on this distribution will always predict two tokens.
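The a priori baseline described in Figure 4 can be reproduced in a few lines: always predict the modal token count and score by mean absolute prediction error. The token counts below are illustrative, not the study's data.

```python
from collections import Counter

# Illustrative token counts (tokens entrusted to the partner); not the study's data.
tokens = [2, 2, 1, 3, 2, 0, 4, 2, 1, 2]

mode = Counter(tokens).most_common(1)[0][0]             # the a priori prediction
mae = sum(abs(t - mode) for t in tokens) / len(tokens)  # mean prediction error
# mode == 2; mae == 0.7 for this illustrative sample
```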
Confusion matrix for SVM-D, revealing the model's difficulty in distinguishing when an individual has a higher degree of trust toward their partner.
Twelve new features, consisting of the frequencies with which the low-trust templates are emitted by the participant, by their partner, and the difference in frequency between them, used to train the final SVM model.
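Counting template emissions can be sketched as contiguous-subsequence matching over a coded gesture stream. The cue coding and the template below are hypothetical, chosen only to illustrate the three feature variants (self, partner, difference):

```python
def count_template(stream, template):
    """# times a low-trust cue template occurs as a contiguous run in the stream."""
    n, m = len(stream), len(template)
    return sum(1 for i in range(n - m + 1) if stream[i:i + m] == template)

# Hypothetical coded streams: H = hand touch, F = face touch,
# A = arms crossed, L = lean back.
self_stream    = ["H", "F", "H", "A", "L", "H", "F"]
partner_stream = ["A", "H", "F", "L"]
template = ["H", "F"]  # a hypothetical low-trust sequence template

f_self    = count_template(self_stream, template)     # 2
f_partner = count_template(partner_stream, template)  # 1
f_diff    = f_self - f_partner                        # the "difference" feature
```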
The updated comparisons of the baseline models to the new SVM-D model with a total of 42 features, which are listed in the tables above.
| Model | Mean prediction error | |
| --- | --- | --- |
| SVM-D | 0.71 | – |
| | 0.83 | |
| SVM-S | 0.86 | |
| Human | 1.00 | |
| Random | 1.46 | |
Of note, to maintain fair comparisons to a model not using domain knowledge, the SVM-S model uses the full 42 features detailed in section 4.2.1.1 (thus, not needing the variable ranking to find a smaller subset). An asterisk symbol denotes statistical significance.
Figure 5. The effects of excluding categories of features on the trust model's mean prediction error. The legend lists the exact features of a category that were excluded, and their descriptions can be found in Tables 1, 4.
Leave-one-out Nested Cross Validation
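The nested procedure named above can be sketched as follows: an outer leave-one-out loop estimates prediction error on each held-out participant, while an inner loop selects hyper-parameters on the remaining training fold only. To keep the sketch self-contained, a k-NN regressor (with k as the hyper-parameter) stands in for the paper's SVM with its C and γ grid, and the data are synthetic.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k):
    """k-nearest-neighbor regression: mean target of the k closest points."""
    d = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argsort(d)[:k]].mean()

def inner_cv_error(X, y, k):
    """Leave-one-out error of k-NN evaluated on the inner (training) set only."""
    errs = [abs(knn_predict(np.delete(X, i, 0), np.delete(y, i), X[i], k) - y[i])
            for i in range(len(y))]
    return float(np.mean(errs))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))                      # synthetic feature vectors
y = X[:, 0] * 2 + rng.normal(scale=0.1, size=20)  # synthetic trust targets

outer_errors = []
for i in range(len(y)):                           # outer leave-one-out loop
    X_tr, y_tr = np.delete(X, i, 0), np.delete(y, i)
    # Inner loop: pick the hyper-parameter (here k) on the training fold only,
    # so the held-out sample never influences model selection.
    best_k = min([1, 3, 5], key=lambda k: inner_cv_error(X_tr, y_tr, k))
    pred = knn_predict(X_tr, y_tr, X[i], best_k)
    outer_errors.append(abs(pred - y[i]))

mean_error = float(np.mean(outer_errors))
```

The key design point is that hyper-parameter selection happens inside each outer fold, which is why the selected values can differ across folds, as Figure A1 shows for the paper's SVMs.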