
Deep Q-learning for the selection of optimal isocratic scouting runs in liquid chromatography.

Alexander Kensert, Gilles Collaerts, Kyriakos Efthymiadis, Gert Desmet, Deirdre Cabooter.

Abstract

An important challenge in chromatography is the development of adequate separation methods. Accurate retention models can significantly simplify and expedite this development for complex mixtures. The purpose of this study was to introduce reinforcement learning to chromatographic method development, by training a double deep Q-learning algorithm to select optimal isocratic scouting runs for generating accurate retention models. These scouting runs were fitted to the Neue-Kuss retention model, which was then used to predict retention factors under both isocratic and gradient conditions. The quality of these predictions was assessed by computing the mean relative percentage error (MRPE) between predicted and experimentally measured retention factors. By giving the reinforcement learning algorithm a reward whenever the selected scouting runs led to an accurate retention model, and a penalty whenever the analysis time of a selected scouting run was too high (> 1 h), it was hypothesized that the algorithm would, over time, learn to select good scouting runs for compounds displaying a variety of characteristics. The reinforcement learning algorithm developed in this work was first trained on simulated data, and then evaluated on experimental data for 57 small molecules, each run at 10 different fractions of organic modifier (0.05 to 0.90) and at four different linear gradients. The resulting retention models, mostly obtained from 3 isocratic scouting runs per compound, achieved an MRPE of 3.77% for isocratic runs and 1.93% for gradient runs. This was comparable to retention models obtained by fitting the Neue-Kuss model to all 10 available isocratic data points (3.26% for isocratic runs and 4.97% for gradient runs) and to retention models obtained via a "chromatographer's selection" of three scouting runs (3.86% for isocratic runs and 6.66% for gradient runs).
It was therefore concluded that the reinforcement learning algorithm learned to select optimal scouting runs for retention modeling: by selecting only 3 (out of 10) isocratic scouting runs per compound, it obtained runs informative enough to successfully capture the retention behavior of each compound.
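The workflow described in the abstract can be illustrated with a short sketch: fit the Neue-Kuss retention model, ln k = ln k0 + 2 ln(1 + S2·φ) − S1·φ/(1 + S2·φ), to a small number of isocratic runs, then score the predictions with the MRPE. The parameter values, the S2 grid range, and the particular choice of 3 scouting runs below are illustrative assumptions, not values from the paper; the authors' actual fitting procedure and agent are also not reproduced here.

```python
import numpy as np

def neue_kuss_lnk(phi, ln_k0, S1, S2):
    # Neue-Kuss retention model: ln k = ln k0 + 2 ln(1 + S2*phi) - S1*phi / (1 + S2*phi)
    # phi is the fraction of organic modifier in the mobile phase.
    return ln_k0 + 2.0 * np.log(1.0 + S2 * phi) - S1 * phi / (1.0 + S2 * phi)

def fit_neue_kuss(phi, k, s2_grid=np.linspace(0.1, 10.0, 200)):
    """Fit the Neue-Kuss model to isocratic (phi, k) data.

    For a fixed S2 the model is linear in ln k0 and S1, so we scan a grid
    of S2 values and solve a small linear least-squares problem at each
    grid point, keeping the best fit (a simple, dependency-free approach;
    the paper's own fitting routine may differ)."""
    lnk = np.log(k)
    best = None
    for S2 in s2_grid:
        x = phi / (1.0 + S2 * phi)
        y = lnk - 2.0 * np.log(1.0 + S2 * phi)
        A = np.stack([np.ones_like(phi), -x], axis=1)  # columns: ln k0, S1
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        sse = float(np.sum((A @ coef - y) ** 2))
        if best is None or sse < best[0]:
            best = (sse, coef[0], coef[1], S2)
    _, ln_k0, S1, S2 = best
    return ln_k0, S1, S2

def mrpe(k_pred, k_obs):
    # Mean relative percentage error between predicted and observed retention factors.
    return 100.0 * np.mean(np.abs(k_pred - k_obs) / k_obs)

# Hypothetical illustration: 10 isocratic runs simulated from known parameters
# (the abstract's 0.05-0.90 modifier range), then a model fitted to only 3 of
# them, mimicking the 3 scouting runs the trained agent typically selected.
phi = np.linspace(0.05, 0.90, 10)
k_obs = np.exp(neue_kuss_lnk(phi, ln_k0=3.0, S1=15.0, S2=2.0))
sel = [0, 4, 9]                         # assumed low/mid/high scouting runs
params = fit_neue_kuss(phi[sel], k_obs[sel])
k_pred = np.exp(neue_kuss_lnk(phi, *params))
err = mrpe(k_pred, k_obs)               # MRPE over all 10 isocratic points
```

On noise-free synthetic data, three well-spread scouting runs suffice to pin down the three model parameters, which is the intuition behind rewarding the agent for informative run selections.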
Copyright © 2021. Published by Elsevier B.V.

Keywords:  Deep Q-learning; Machine learning; Method development; Reinforcement learning; Retention models


Year:  2021        PMID: 33485027     DOI: 10.1016/j.chroma.2021.461900

Source DB:  PubMed          Journal:  J Chromatogr A        ISSN: 0021-9673            Impact factor:   4.759


  1 in total

1.  Prediction of the performance of pre-packed purification columns through machine learning.

Authors:  Qihao Jiang; Sohan Seth; Theresa Scharl; Tim Schroeder; Alois Jungbauer; Simone Dimartino
Journal:  J Sep Sci       Date:  2022-03-20       Impact factor: 3.614

