
A new method of concurrently visualizing states, values, and actions in reinforcement based brain machine interfaces.

Jihye Bae, Luis G Sanchez Giraldo, Eric A Pohlmeyer, Justin C Sanchez, Jose C Principe.   

Abstract

This paper presents the first attempt to quantify the individual performance of the subject and of the computer agent in a closed-loop Reinforcement Learning Brain Machine Interface (RLBMI). The distinctive feature of the RLBMI architecture is the co-adaptation of two systems: a BMI decoder acting as the agent and a BMI user acting as the environment. In this work, an agent implemented using Q-learning via kernel temporal difference, KTD(λ), decodes the neural states of a monkey and transforms them into action directions of a robotic arm. We analyze how each participant influences the overall performance in both successful and missed trials by visualizing states, the corresponding action values Q, and the resulting actions in two-dimensional space. With the proposed methodology, we can observe how the decoder effectively learns a good state-to-action mapping and how neural states affect prediction performance.
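The decoder the abstract describes, Q-learning with a kernel temporal difference value approximator, can be sketched roughly as below. This is a minimal illustrative sketch, not the authors' implementation: the Gaussian kernel, the ever-growing kernel expansion, and all parameter names and values (`sigma`, `lr`, `gamma`) are assumptions, and the λ eligibility traces of KTD(λ) are omitted for brevity.

```python
import numpy as np

class KTDQ:
    """Minimal sketch of Q-learning with a Gaussian-kernel value
    approximator in the spirit of kernel temporal differences.
    All hyperparameters are illustrative assumptions."""

    def __init__(self, n_actions, sigma=1.0, lr=0.5, gamma=0.9):
        self.n_actions = n_actions
        self.sigma = sigma      # kernel bandwidth (assumed)
        self.lr = lr            # learning rate (assumed)
        self.gamma = gamma      # discount factor (assumed)
        self.centers = []       # stored neural-state vectors
        self.weights = []       # one weight vector per center, len n_actions

    def _k(self, x, c):
        # Gaussian kernel between a state and a stored center.
        d = np.asarray(x, dtype=float) - np.asarray(c, dtype=float)
        return np.exp(-np.dot(d, d) / (2.0 * self.sigma ** 2))

    def q(self, state):
        # Q-values for every action: kernel-weighted sum over centers.
        q = np.zeros(self.n_actions)
        for c, w in zip(self.centers, self.weights):
            q += w * self._k(state, c)
        return q

    def update(self, s, a, r, s_next, done):
        # One Q-learning step: grow the expansion by one center whose
        # weight (for the taken action) is the scaled TD error.
        target = r if done else r + self.gamma * self.q(s_next).max()
        delta = target - self.q(s)[a]
        w = np.zeros(self.n_actions)
        w[a] = self.lr * delta
        self.centers.append(np.asarray(s, dtype=float))
        self.weights.append(w)
        return delta
```

In a closed-loop BMI setting, `s` would be a neural-state feature vector, the actions would be discrete robot-arm movement directions, and `r` a task reward; here those are stand-ins for illustration only.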


Year:  2013        PMID: 24110957     DOI: 10.1109/EMBC.2013.6610770

Source DB:  PubMed          Journal:  Conf Proc IEEE Eng Med Biol Soc        ISSN: 1557-170X


  1 in total

1.  Kernel temporal differences for neural decoding.

Authors:  Jihye Bae; Luis G Sanchez Giraldo; Eric A Pohlmeyer; Joseph T Francis; Justin C Sanchez; José C Príncipe
Journal:  Comput Intell Neurosci       Date:  2015-03-17
