| Literature DB >> 24110957 |
Jihye Bae, Luis G Sanchez Giraldo, Eric A Pohlmeyer, Justin C Sanchez, Jose C Principe.
Abstract
This paper presents the first attempt to quantify the individual performance of the subject and of the computer agent in a closed-loop Reinforcement Learning Brain Machine Interface (RLBMI). The distinctive feature of the RLBMI architecture is the co-adaptation of two systems (a BMI decoder as the agent and a BMI user as the environment). In this work, an agent implemented using Q-learning via kernel temporal differences, KTD(λ), decodes the neural states of a monkey and transforms them into action directions of a robotic arm. We analyze how each participant influences the overall performance in both successful and missed trials by visualizing states, the corresponding action value Q, and the resulting actions in two-dimensional space. With the proposed methodology, we can observe how the decoder effectively learns a good state-to-action mapping, and how neural states affect the prediction performance.
Entities:
Mesh:
Year: 2013 PMID: 24110957 DOI: 10.1109/EMBC.2013.6610770
Source DB: PubMed Journal: Conf Proc IEEE Eng Med Biol Soc ISSN: 1557-170X
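The abstract describes Q-learning whose action-value function is learned via kernel temporal differences, KTD(λ). The sketch below is not the authors' implementation; it is a minimal illustration of the general idea under assumptions of my own: a Gaussian kernel over state vectors, a TD(0) update (the λ = 0 case), and illustrative values for the kernel width, learning rate, and discount factor. Each TD update adds the current state as a new kernel center whose coefficient moves the predicted Q-value toward the bootstrapped target.

```python
import numpy as np

class KernelQ:
    """Illustrative kernel-expansion Q-function in the spirit of KTD(lambda).

    All hyperparameters are assumptions for the sketch, not values from the paper.
    """

    def __init__(self, n_actions, sigma=1.0, eta=0.5, gamma=0.9):
        self.n_actions = n_actions
        self.sigma = sigma    # Gaussian kernel width (assumed)
        self.eta = eta        # learning rate (assumed)
        self.gamma = gamma    # discount factor (assumed)
        self.centers = []     # stored state vectors (kernel centers)
        self.coeffs = []      # per-center coefficient vector, length n_actions

    def _kernel(self, x, c):
        # Gaussian kernel between a query state and a stored center.
        d = np.asarray(x, dtype=float) - np.asarray(c, dtype=float)
        return np.exp(-np.dot(d, d) / (2.0 * self.sigma ** 2))

    def q_values(self, x):
        # Q(x, .) is a weighted sum of kernel evaluations over all centers.
        q = np.zeros(self.n_actions)
        for c, a in zip(self.centers, self.coeffs):
            q += a * self._kernel(x, c)
        return q

    def update(self, x, action, reward, x_next, terminal=False):
        # TD error for the taken action, bootstrapping greedily on x_next.
        target = reward if terminal else reward + self.gamma * self.q_values(x_next).max()
        delta = target - self.q_values(x)[action]
        # Grow the expansion: the current state becomes a new center whose
        # coefficient nudges Q(x, action) toward the target (TD(0) case).
        coeff = np.zeros(self.n_actions)
        coeff[action] = self.eta * delta
        self.centers.append(np.asarray(x, dtype=float))
        self.coeffs.append(coeff)
        return delta
```

In the RLBMI setting described above, the state vector would be the monkey's neural state and the discrete actions would be the robotic-arm movement directions; here both are left abstract.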