Franz Hell^1,2, Carla Palleis^1,3, Jan H. Mehrkens^4, Thomas Koeglsperger^1,3, Kai Bötzel^1.
Abstract
Deep brain stimulation (DBS) has developed into an established treatment for movement disorders and is being actively investigated for numerous other neurological as well as psychiatric disorders. Accurate electrode placement in the target area and effective programming of DBS devices are considered the most important factors for the individual outcome. Recent research in humans highlights the relevance of widespread networks connected to specific DBS targets. Improving the targeting of the anatomical and functional networks involved in the generation of pathological neural activity will improve the clinical DBS effect and limit side effects. Here, we offer a comprehensive overview of the latest research on target structures and targeting strategies in DBS. In addition, we provide a detailed synopsis of novel technologies that will support DBS programming and parameter selection in the future, with a particular focus on closed-loop stimulation and associated biofeedback signals.
Keywords: DBS target; adaptive; deep brain stimulation; feedback; machine learning; reinforcement learning
Year: 2019 PMID: 31001196 PMCID: PMC6456744 DOI: 10.3389/fneur.2019.00314
Source DB: PubMed Journal: Front Neurol ISSN: 1664-2295 Impact factor: 4.003
Figure 1. Schematic of a general adaptive closed-loop DBS system that adjusts deep brain stimulation (DBS) parameters based on real-time patient measurements, such as electrophysiological signals (e.g., LFP, ECoG, EMG), neurochemical parameters, and behavioral measurements, combined with machine learning. First, latent features are learned from the available signal sources using machine-learning approaches to extract behavioral (clinical) states (e.g., bradykinesia, rigidity, tremor) and corresponding, predictive latent neural states (e.g., beta and high-frequency oscillations). Then, the actual states are compared with the ideal states to compute a reward, and the stimulation parameters (e.g., VTA, stimulation frequency) are adjusted and ultimately learned via reinforcement learning (Q-learning is shown as an example). In this closed-loop paradigm, the stimulation parameters (actions) are adjusted within clinical limits based on the reward and the extracted latent states.
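The reinforcement-learning step named in the caption can be sketched as a tabular Q-learning loop. The state discretization (beta-power bins), action set (stimulation amplitudes), reward terms, and hyperparameter values below are illustrative assumptions for exposition, not details taken from the paper:

```python
import random

# Hypothetical discretizations (illustrative, not from the paper):
STATES = ["low_beta", "mid_beta", "high_beta"]   # latent neural states (beta-power bins)
ACTIONS = [0.0, 1.0, 2.0, 3.0]                   # stimulation amplitudes (mA), within clinical limits

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1            # learning rate, discount factor, exploration rate

# Q-table: estimated long-run reward for each (state, action) pair
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def choose_action(state):
    """Epsilon-greedy policy: mostly exploit the best-known setting,
    occasionally explore an alternative stimulation parameter."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def reward(state, action):
    """Toy reward: compare the observed state with the ideal state
    ('low_beta' here) and penalize higher stimulation amplitudes as a
    stand-in for side effects and energy use. Purely illustrative."""
    clinical_benefit = 1.0 if state == "low_beta" else -1.0
    return clinical_benefit - 0.1 * action

def q_update(state, action, r, next_state):
    """Standard Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
```

In a real device the state would come from decoded sensor features (LFP, ECoG, EMG), the action would be the applied stimulation setting, and the loop (observe state, choose action, observe reward and next state, update Q) would run continuously within clinician-defined safety limits.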