
Incremental learning of tasks from user demonstrations, past experiences, and vocal comments.

Michael Pardowitz, Steffen Knoop, Ruediger Dillmann, Raoul D Zöllner.

Abstract

For many years, the robotics community has envisioned robot assistants sharing the same environment with humans. It has become clear that such robots must interact with humans and adapt to individual user needs. In particular, the wide variety of tasks robot assistants will face requires a highly adaptive and user-friendly programming interface. One possible solution to this programming problem is the learning-by-demonstration paradigm, in which the robot observes the execution of a task, acquires task knowledge, and reproduces the task. In this paper, a system to record, interpret, and reason over demonstrations of household tasks is presented. The focus is on the model-based representation of manipulation tasks, which serves as a basis for incremental reasoning over the acquired task knowledge. The aim of the reasoning is to condense and interconnect the data, resulting in more general task knowledge. A measure for assessing the information content of task features is introduced. This measure of the relevance of individual features relies on both general background knowledge and task-specific knowledge gathered from the user demonstrations. Besides this autonomous estimation of feature relevance, speech comments made during execution that point out the relevance of features are considered as well. The incremental growth of the task knowledge as more task demonstrations become available, and its fusion with relevance information gained from speech comments, is demonstrated on the task of laying a table.
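The abstract does not give the paper's exact formula, but the general idea of scoring feature relevance from demonstration consistency and fusing it with speech hints can be sketched. The following is a minimal illustration only, assuming a Shannon-entropy-based consistency score over discrete feature values and an additive boost for features the user mentions verbally; the function and parameter names (`feature_relevance`, `speech_weights`) are hypothetical and not from the paper.

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (in bits) of a list of discrete feature values."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def feature_relevance(demos, speech_weights=None):
    """Score each feature across demonstrations.

    A feature that takes the same value in every demonstration has zero
    entropy and is treated as highly relevant (consistency = 1.0); a
    feature that varies freely scores near 0. Optional speech_weights
    add a boost for features the user pointed out verbally.

    demos: list of dicts mapping feature name -> observed discrete value.
    """
    speech_weights = speech_weights or {}
    scores = {}
    for f in demos[0]:
        values = [d[f] for d in demos]
        h = entropy(values)
        max_h = math.log2(len(values)) or 1.0  # avoid div-by-zero for 1 demo
        consistency = 1.0 - h / max_h          # 1 = identical in every demo
        scores[f] = consistency + speech_weights.get(f, 0.0)
    return scores
```

For example, if the grasp type is "top" in every demonstration of laying a table but the cup colour differs each time, the grasp feature scores 1.0 and the colour feature 0.0; a vocal comment about colour would raise the latter via `speech_weights`.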


Year:  2007        PMID: 17416160     DOI: 10.1109/tsmcb.2006.886951

Source DB:  PubMed          Journal:  IEEE Trans Syst Man Cybern B Cybern        ISSN: 1083-4419


Related articles:  3 in total

1.  Integrating verbal and nonverbal communication in a dynamic neural field architecture for human-robot interaction.

Authors:  Estela Bicho; Luís Louro; Wolfram Erlhagen
Journal:  Front Neurorobot       Date:  2010-05-21       Impact factor: 2.650

2.  Role of expressive behaviour for robots that learn from people.

Authors:  Cynthia Breazeal
Journal:  Philos Trans R Soc Lond B Biol Sci       Date:  2009-12-12       Impact factor: 6.237

3.  Vision-Based Learning from Demonstration System for Robot Arms.

Authors:  Pin-Jui Hwang; Chen-Chien Hsu; Po-Yung Chou; Wei-Yen Wang; Cheng-Hung Lin
Journal:  Sensors (Basel)       Date:  2022-03-31       Impact factor: 3.576

