
See, feel, act: Hierarchical learning for complex manipulation skills with multisensory fusion.

N Fazeli, M Oller, J Wu, Z Wu, J B Tenenbaum, A Rodriguez.

Abstract

Humans are able to seamlessly integrate tactile and visual stimuli with their intuitions to explore and execute complex manipulation skills. They not only see but also feel their actions. Most current robotic learning methodologies exploit recent progress in computer vision and deep learning to acquire data-hungry pixel-to-action policies. These methodologies do not exploit intuitive latent structure in physics or tactile signatures. Tactile reasoning is omnipresent in the animal kingdom, yet it remains underdeveloped in robotic manipulation: tactile stimuli are acquired only through invasive interaction, and interpreting the resulting data stream together with visual stimuli is challenging. Here, we propose a methodology to emulate hierarchical reasoning and multisensory fusion in a robot that learns to play Jenga, a complex game that requires physical interaction to be played effectively. The game mechanics were formulated as a generative process using a temporal hierarchical Bayesian model, with representations for both behavioral archetypes and noisy block states. This model captured descriptive latent structures, and the robot learned probabilistic models of these relationships in the force and visual domains through a short exploration phase. Once learned, the robot used this representation to infer block behavior patterns and states as it played the game. Using these inferred beliefs, the robot adjusted its behavior with respect to both its current actions and its game strategy, much as humans do when playing the game. We evaluated the performance of the approach against three standard baselines and showed its fidelity on a real-world implementation of the game.
Copyright © 2019 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
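
As a concrete illustration of the multisensory fusion the abstract describes, below is a minimal sketch (not the authors' code) of Bayesian fusion over block "behavioral archetypes". The archetype labels, prior, and Gaussian sensor parameters are illustrative assumptions standing in for the probabilistic force and vision models the robot learns during its short exploration phase.

    # Minimal sketch: Bayesian inference over block "behavioral archetypes"
    # from one noisy force reading and one noisy visual displacement reading.
    # Archetype names, priors, and Gaussian parameters are illustrative
    # assumptions, not values from the paper.
    import numpy as np
    from scipy.stats import norm

    ARCHETYPES = ["no_block_motion", "small_block_motion", "moves_freely"]  # hypothetical labels

    # Per-archetype Gaussian likelihoods (mean, std) over push force (N) and
    # observed block displacement (mm), as might be fit during exploration.
    FORCE_MODEL = {"no_block_motion": (4.0, 1.0),
                   "small_block_motion": (2.0, 0.8),
                   "moves_freely": (0.5, 0.3)}
    VISION_MODEL = {"no_block_motion": (0.0, 0.2),
                    "small_block_motion": (1.0, 0.5),
                    "moves_freely": (5.0, 1.5)}

    def posterior_over_archetypes(force, displacement, prior=None):
        """Fuse a force and a vision measurement into a posterior over
        archetypes, assuming the two sensors are conditionally independent."""
        prior = prior or {a: 1.0 / len(ARCHETYPES) for a in ARCHETYPES}
        post = {}
        for a in ARCHETYPES:
            mu_f, sd_f = FORCE_MODEL[a]
            mu_v, sd_v = VISION_MODEL[a]
            post[a] = prior[a] * norm.pdf(force, mu_f, sd_f) * norm.pdf(displacement, mu_v, sd_v)
        total = sum(post.values())
        return {a: p / total for a, p in post.items()}

    # Example: a low push force paired with visible motion concentrates the
    # posterior on the free-moving archetype.
    belief = posterior_over_archetypes(force=0.6, displacement=4.2)
    print(max(belief, key=belief.get), belief)

Under these assumed models, the resulting posterior is the kind of inferred belief the abstract says the robot uses to adjust both its current action and its overall game strategy.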

Year: 2019        PMID: 33137764        DOI: 10.1126/scirobotics.aav3123

Source DB: PubMed        Journal: Sci Robot        ISSN: 2470-9476


2 in total

1.  Multichannel haptic feedback unlocks prosthetic hand dexterity.

Authors:  Moaed A Abd; Joseph Ingicco; Douglas T Hutchinson; Emmanuelle Tognoli; Erik D Engeberg
Journal:  Sci Rep       Date:  2022-02-11       Impact factor: 4.379

2.  DeepClaw 2.0: A Data Collection Platform for Learning Human Manipulation.

Authors:  Haokun Wang; Xiaobo Liu; Nuofan Qiu; Ning Guo; Fang Wan; Chaoyang Song
Journal:  Front Robot AI       Date:  2022-03-15
