
Learning of Iterative Learning Control for Flexible Manufacturing of Batch Processes.

Libin Xu1, Weimin Zhong1, Jingyi Lu1,2, Furong Gao3, Feng Qian1, Zhixing Cao1.   

Abstract

Flexible manufacturing, as an essential component of smart manufacturing, implements the customized production mode, thereby demanding fast controller adaptation to produce different goods while maintaining high precision. This problem becomes even more acute for batch processes. Here we present a solution called learning of iterative learning control (ILC), based on neural networks. It recommends control parameters for ILC controllers so as to yield fast tracking-error convergence and a small steady-state error for disparate set-point profiles, which we regard as an abstraction of different production needs. The method substantially outperforms a benchmark ILC on a variety of systems and cases, thereby showing its potential for deployment in the industrial Internet of Things.
© 2022 The Authors. Published by American Chemical Society.


Year:  2022        PMID: 35721960      PMCID: PMC9202061          DOI: 10.1021/acsomega.2c01741

Source DB:  PubMed          Journal:  ACS Omega        ISSN: 2470-1343


Introduction

Batch processing is one of the two predominant production approaches in modern industry and fundamentally supports the development of many high-end industries producing such items as semiconductors and pharmaceuticals.[1,2] Despite their generally inferior production efficiency compared to continuous processes, batch processes are indispensable and in fact are gaining ever-increasing attention. This observation is underpinned by two reasons: (i) goods of remarkable complexity and high added value are produced in a batch-processing fashion with a multitude of sequentially organized processing steps, which are difficult to reconfigure to satisfy continuous production constraints; (ii) rapidly fluctuating customer demands and the increasing pursuit of personalization collectively give birth to flexible manufacturing, which is largely tantamount to producing goods in small batches with myriad disparate configurations. This is indeed the outstanding merit of batch processes. However small the production scale may be, precise regulation remains difficult. On top of the notorious presence of considerable nonlinearity, time variation, and uncertainties incurred by the underlying complex mechanisms,[3] such flexibility renders the precise regulation of batch processes an even more daunting task. Just as every coin has two sides, a notable shortcut is enabled by the repeated operational pattern of batch processes. This is iterative learning control (ILC), which was initially devised for robot arm regulation[4] and is essentially a feedforward controller, in stark contrast with most classic controllers such as PID or model predictive control (MPC).[5] The underlying idea is revolutionary, as it vividly mimics the learning process of human beings and well explains the word “learning” it bears.
An ILC controller distills information from past tracking errors to better tune the control input of the present trial (termed “batch” hereafter), and so on, until perfect tracking of the given set-point profile is achieved. Notably, over the past decades, a multitude of achievements toward better ILC have been witnessed both theoretically[6−11] and practically.[12−17] Encouraging endeavors of applying ILC in practice include injection molding,[18] bioreactors,[13] and batch chemical reactors,[19] whereas considerable theoretical effort has been devoted to answering the longstanding question of how to synthesize an ILC controller against various uncertainties. Such efforts include the introduction of feedback,[7] multipoint compensation,[8] adaptive tuning,[9,11] and optimal design[20] for linear and nonlinear systems. The review here is not exhaustive due to limited space, and readers are encouraged to refer to the excellent surveys in refs (3 and 21). Yet, readers should bear in mind that almost all the aforementioned designs, except the adaptive ones, are suitable only for one specific set-point profile. Although adaptively tuned controllers can track multiple set-point profiles, their complicated structure demands substantial expertise for implementation in a real setting, a requirement not usually met in industry. The ILC design for multiple set-point profiles is, to the best of our knowledge, rarely discussed. Notably, we will show that a universal ILC design leads to divergent control performance across different set-point profiles. Arguably, this problem is pivotal to flexible manufacturing: the variation of set-point profiles abstractly stands for the switching of processing needs when producing different goods, and catering to such needs quickly translates into improved profit and reduced waste (e.g., fewer off-specification products), thereby calling for the rapid deployment of a precise controller.
In this paper, we present an intelligent system that recommends suitable ILC controllers for different set-point profiles so as to achieve faster convergence, which means waste reduction, and a smaller steady-state tracking error, which means improved quality. Such an intelligent system is implemented via neural networks, more specifically, multilayer perceptrons. The past decade has witnessed the profound impact neural networks have made in many domains, including playing the game of Go,[22] industrial processes,[23,24] natural language processing,[25] and understanding gene expression.[26] The broad applicability of neural networks stems from the universal approximation theorem,[27,28] which states that a feedforward neural network with a single hidden layer can approximate any continuous function, provided there are enough neurons. Such a characteristic perfectly suits our need to develop a quantitative mapping from set-point profiles to ILC controllers. The development of such a mapping serves as the core of this paper. Indeed, endeavors integrating ILC and neural networks for better tracking performance have been reported in the literature. The neural-network-based ILC reported in ref (29) uses a neural network to approximate the nonlinear component of the ILC output so as to achieve precise positioning compensation as well as expedite the iteration convergence. Similarly, ref (30) proposes a learning process with adaptable training parameters for both the intra- and interbatch domains and further shows that the synthesis of the controller is independent of any linearization and of any complex optimization problem. Both attempts illustrate that neural networks can play an important role in synthesizing ILC controllers of improved performance; yet neither suits flexible manufacturing with varying production needs, where timely and expedient controller tuning matters more.
As such, we make use of neural networks to develop a recommender system suggesting controller configurations accordingly to achieve fast and precise ILC regulation, or equivalently better quality and higher production efficiency simultaneously. The remainder of the paper is organized as follows: Section 2 presents the system formulation and a motivating example; the main method is described in Section 3; results are discussed in Section 4; and Section 5 concludes the work and provides an outlook.

Problem Statement

System Formulation

Without loss of generality, we assume that the system of interest is in the form

x_k(t + 1) = f(x_k(t), u_k(t)),   y_k(t) = g(x_k(t))   (1)

where u_k(t) ∈ R^{n_u}, x_k(t) ∈ R^{n_x}, and y_k(t) ∈ R^{n_y} are the input signal, the internal state, and the output signal of the system, respectively, with n_u, n_x, and n_y being the dimensions. Besides, k ∈ [1, ∞) and t ∈ [1, T] are the cycle (or batch) and time indices, respectively. The cycle duration is denoted as T. The functions f and g are smooth. Such a formulation is general enough to cover most cases reported in the literature.[3,31] The ILC control law can be presented in the general form

u_{k+1}(t) = h(u_k(·), e_k(·), e_{k+1}(·))   (2)

If the real-time information e_{k+1}(t) is not incorporated, the ILC control law reduces to the feedforward type, its original flavor. As a pilot study, we will only focus on the classic PD-type ILC,[32] which is

u_{k+1}(t) = u_k(t) + k_p e_k(t) + k_d [e_k(t) − e_k(t − 1)]   (3)

Here, the tracking error is defined as

e_k(t) = y_d(t) − y_k(t)   (4)

with y_d(t) being the set-point profile. The parameters k_p and k_d in eq 3 are the proportional and derivative gains that define the ILC control performance, thereby calling for careful tuning. Note that the derivative is approximated by a one-step backward finite difference in eq 3, owing to the discrete-time nature of the system in eq 1. Indeed, the control law in eq 3 can be reorganized into a compact form so as to improve the efficiency of numeric implementation. By collecting e_k(t) and u_k(t) over the entire duration and forming the supervectors E_k = [e_k(1), ..., e_k(T)]^T and U_k = [u_k(1), ..., u_k(T)]^T, one can write

U_{k+1} = U_k + (k_p I_T + k_d T_2) E_k   (5)

where I_T is the T × T identity matrix and the operator matrix T_2 is the lower-bidiagonal backward-difference matrix

T_2 = I_T − S_T   (6)

with S_T denoting the matrix with ones on the first subdiagonal and zeros elsewhere. Specifically, if the system in eq 1 becomes a linear time-invariant (LTI) system, i.e.,

x_k(t + 1) = A x_k(t) + B u_k(t),   y_k(t) = C x_k(t)   (7)

with the matrices A, B, and C standing for the system matrix, input matrix, and output matrix, respectively, it is also possible to rewrite the LTI system into the compact form

Y_k = G U_k + G_0 x_k(0)   (8)

by applying the same trick as before. Here in eq 8, the matrix G collects the Markov parameters of the system, with entries C A^{i−j} B for i ≥ j and zeros elsewhere, G_0 = [(CA)^T, (CA^2)^T, ..., (CA^T)^T]^T, and the output supervector is Y_k = [y_k(1), ..., y_k(T)]^T. Again, the form in eq 8 is helpful for numeric implementation. Furthermore, the matrices A, B, and C can be functions of time, i.e., A(t), B(t), and C(t). If so, the system of interest becomes a linear time-varying (LTV) system, which is, in some literature,[33] thought of as a linearization of a nonlinear system around a given set-point profile. Note that a formulation similar to eq 8 is also valid for LTV systems, with slight modifications. The objective of the paper is to present a mapping from the set-point profile y_d(t) to the PD-type ILC parameters k_p and k_d so as to minimize some function of the tracking error e_k(t), which is generally interpreted as the control performance.
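The PD-type ILC update above can be sketched in a few lines. The snippet below is a minimal, hedged illustration: the scalar first-order plant, its coefficients, and the positive gains kp and kd are all assumptions chosen so that the iteration converges, not the paper's systems (whose examples use negative gains because of a different sign convention).

```python
import numpy as np

def pd_ilc_update(u, e, kp, kd):
    # PD-type ILC law: u_{k+1}(t) = u_k(t) + kp*e_k(t) + kd*[e_k(t) - e_k(t-1)],
    # with the derivative taken as a one-step backward finite difference.
    de = np.diff(e, prepend=0.0)
    return u + kp * e + kd * de

def simulate(u, a=0.5, b=0.8):
    # Illustrative scalar plant (not the paper's system):
    # x(t+1) = a*x(t) + b*u(t), with the updated state taken as the output.
    T = len(u)
    y = np.zeros(T)
    x = 0.0
    for t in range(T):
        x = a * x + b * u[t]
        y[t] = x
    return y

T = 50
y_d = np.ones(T)                 # step set-point profile
u = np.zeros(T)
err_norms = []
for k in range(30):              # 30 cycles (batches)
    e = y_d - simulate(u)        # e_k(t) = y_d(t) - y_k(t)
    err_norms.append(float(np.linalg.norm(e)))
    u = pd_ilc_update(u, e, kp=0.8, kd=0.2)
```

For this choice of plant and gains, the per-cycle error norms shrink monotonically, which is exactly the behavior one hopes a well-tuned {k_p, k_d} pair produces.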

Motivating Example

Next we will show why carefully tuning k_p and k_d for each set-point profile is of great importance, using a toy nonlinear system as an example. Consider a two-state nonlinear system regulated by PD-type ILC with k_p = k_d = −0.3, whose second internal state x_2 serves as the process output. The system is operated over a duration T = 10 s, and its data are collected every 0.1 s, so there are 100 data points in each cycle. Figure 1 clearly shows that a fine-tuned ILC controller that works well for one set-point profile may not work for another, even possibly leading to tracking-error fluctuation (see Figure 1b). Either slow convergence or fluctuation of the tracking error implies economic loss in practice. Hence, what one expects from ILC is the monotonic convergence of the tracking error, which mathematically means

‖E_{k+1}‖ ≤ ‖E_k‖

for any positive integer k. This point is not new: it has its roots in refs (34 and 35) and was later strongly emphasized in ref (11). In short, the sensitivity of the ILC performance to set-point profile changes, particularly the marked performance degradation, motivates us to develop a mapping from the set-point profile to k_p and k_d.
Figure 1

ILC controller performance is highly sensitive to set-point profiles. (a,b) For the same system regulated by the same PD-type ILC law, different set-point profiles lead to divergent responses of the cycle tracking error. An ILC controller that achieves a monotonic decrease of the tracking error for one set-point profile may still produce a cyclewise fluctuating tracking error for another. At point 1, perfect tracking is achieved, whereas the tracking performance is rather poor at point 2. (c,d) Corresponding process outputs at points 1 and 2 indicated in (a) and (b).


Methods

Prior to establishing such a mapping, one needs to figure out how to represent different set-point profiles. One might argue for using the supervector Y_d for this purpose; however, its high dimension may impose a burden on the subsequent model training, for example, by increasing the computational cost. Hence, it is worthwhile to introduce a low-dimensional representation. Fortunately, set-point profiles are not arbitrary in practice but come in standard forms, for instance, step-change and slope-climbing signals, which can be represented by much shorter vectors. For instance, a step-change signal is determined by three factors a, b, and c, where a and b are the levels of the step prior to and after the change, respectively, and c is the time at which the change occurs. As such, any step-change signal can be conveniently represented as a point s = (a, b, c) in a three-dimensional space, as shown in Figure 2. By randomly sampling points in the space of s, one obtains a set S = {s_1, s_2, ..., s_M} representing a collection of set-point profiles. Such a low-dimensional representation is indeed general, as many complex set-point profiles can be approximated by a series of step-change signals. For clarity of presentation, we focus only on step-change set-point profiles.
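The (a, b, c) parameterization and the sampling of S can be sketched as follows; the particular levels, change time, and sample count below are illustrative, not values from the paper.

```python
import random

def step_profile(a, b, c, T):
    # Step-change set-point: level a before the change time c, level b after it.
    return [a if t < c else b for t in range(T)]

# a step from level 30 to level 40 occurring at t = 3 (T = 6 points)
y_d = step_profile(30, 40, 3, 6)

# randomly sample a collection S of normalized representations s = (a, b, c)
random.seed(0)
S = [(random.random(), random.random(), random.random()) for _ in range(5)]
```

Each element of S is a three-number stand-in for a full length-T profile, which is what makes the subsequent neural network input so compact.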
Figure 2

Parameterization and vectorization of set-point profiles in the form of step change.

Due to the powerful functional approximation capability of neural networks, the mapping is chosen to be neural-network-based, that is, a mapping

{k_p, k_d} = Φ_θ(s)

where θ encapsulates the weights and biases of the neural network and will be determined through training. The neural network we use in this paper is the feedforward multilayer perceptron (MLP). Once well trained, the neural network together with the PD-type ILC constitutes the learning of iterative learning control (LILC), the main result of the paper, which is shown in Figure 3.
Figure 3

Block diagram of the proposed LILC. The neural network aided recommender system quickly suggests appropriate k_p and k_d for the ILC controller according to the low-dimensional representation of the set-point profile y_d, thereby catering to the need for fast tracking-error convergence.

Next we fill in the last piece of the puzzle: the loss function for training. First, we define the tracking error index

J(k_p, k_d; y_d) = Σ_{k=1}^{N} ‖E_k‖

for each recommended pair {k_p, k_d} and a given set-point profile y_d(t). The index sums the tracking error over the first N cycles, thereby implicitly emphasizing the cyclewise decrease of the tracking error. Note that N is a hyperparameter that needs tuning and should not be too small; otherwise, the steady state may not be reached. If one would like a smaller steady-state tracking error, a larger weight can be imposed on the term ‖E_N‖. Then, by summing the error index over every point in the set S, one obtains the loss function

L(θ) = Σ_{i=1}^{M} J(Φ_θ(s_i); y_d^(i))

for a given θ, where Φ_θ denotes the neural network mapping from the representation s_i to {k_p, k_d} and y_d^(i) is the corresponding set-point profile. The neural network training thus becomes the optimization problem

θ* = arg min_θ L(θ)

Such an optimization problem can be solved by many standard optimization tools; however, given the neural network structure, the back-propagation (BP) algorithm is usually more efficient. Note that BP is still a gradient-based method, and the gradient can be calculated by automatic differentiation, which is included in many machine learning packages such as PyTorch. Besides, the Adam optimizer[36] can be used to update the neural network parameters θ. After each update, the neural network recommends a new batch of {k_p, k_d}, which is fed to the ILC-regulated system for simulation and calculation of L(θ). It should be noted that, given the structure of the loss function, the simulation step can be parallelized to accelerate training. Subsequently, the gradient is computed again and used to update θ. These steps loop until a proper neural network aided recommender system is obtained.
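The tracking-error index and the loss can be sketched concretely. This is a hedged toy version: the scalar plant, its coefficients, the gains, and the stand-in "recommender" are all assumptions for illustration; the paper's actual recommender is the trained MLP and the plants are those of its examples.

```python
import numpy as np

def run_ilc(y_d, kp, kd, n_cycles, a=0.5, b=0.8):
    # Simulate PD-type ILC on an illustrative scalar plant and return the
    # per-cycle tracking-error norms ||E_1||, ..., ||E_n||.
    T = len(y_d)
    u = np.zeros(T)
    norms = []
    for _ in range(n_cycles):
        x, y = 0.0, np.zeros(T)
        for t in range(T):
            x = a * x + b * u[t]
            y[t] = x
        e = y_d - y
        norms.append(float(np.linalg.norm(e)))
        u = u + kp * e + kd * np.diff(e, prepend=0.0)
    return norms

def error_index(y_d, kp, kd, N=20):
    # J(kp, kd; y_d): sum of the tracking-error norms over the first N cycles,
    # so a slowly converging gain pair is penalized on every early cycle.
    return sum(run_ilc(y_d, kp, kd, N))

def loss(recommender, profiles):
    # L(theta): the error index summed over every set-point profile in S.
    return sum(error_index(y_d, *recommender(y_d)) for y_d in profiles)

profiles = [np.full(30, 1.0), np.full(30, 2.0)]
benchmark = lambda y_d: (0.8, 0.2)    # stand-in for the neural recommender
total_loss = loss(benchmark, profiles)
```

Because well-chosen gains drive the error down quickly, they accumulate a smaller J than sluggish gains, which is precisely the signal the training uses to shape the recommender.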
The entire training procedure of LILC is summarized in Algorithm 1. Note that variable cycle duration commonly occurs in batch processes, particularly in the pharmaceutical industry, and is thus an important issue for iterative learning control. Many solutions have been reported,[37,38] among which the truncation method is the simplest.[37] Our proposed method can easily be extended to handle variable cycle duration via the truncation method, i.e., by equating the cycle duration T to the minimal duration over all cycles.
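The truncation idea is simple enough to sketch directly; the cycle records below are made-up numbers used only to show the clipping.

```python
def truncate_cycles(error_histories):
    # Truncation method for variable cycle duration: clip every cycle's
    # record to the minimal duration observed across cycles.
    T_min = min(len(e) for e in error_histories)
    return [e[:T_min] for e in error_histories]

cycles = [[0.5, 0.4, 0.3, 0.2],          # cycle 1 ran 4 steps
          [0.4, 0.3, 0.1],               # cycle 2 ended early after 3 steps
          [0.3, 0.2, 0.1, 0.05, 0.0]]    # cycle 3 ran 5 steps
aligned = truncate_cycles(cycles)        # all cycles clipped to length 3
```

After truncation every cycle has the same length, so the supervector machinery above applies unchanged.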

Numerical Experiments

Data Set

First, we sampled 1250 data points uniformly from the normalized space [0,1]3; 80% of them (1000 data points) form the training set, and the rest become the test set. The data distributions of the training set and the test set in the normalized space [0,1]3 are visualized in Figure 4. By doing so, we constrain the range of the low-dimensional representation of set-point profiles, matching the reality that set-point profiles cannot be chosen arbitrarily but must lie within a certain range. For linear systems, the range of a and b is [30, 40], whereas for nonlinear systems it is [3, 6]. The range of c is [200, 800] for linear systems and [20, 80] for nonlinear systems. The data points are scaled according to these ranges and converted into set-point profiles y_d for the simulation steps during training.
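The scaling from the normalized cube to a concrete profile can be sketched as below, using the ranges quoted above for the linear systems; the particular sample point is illustrative.

```python
def rescale(v, lo, hi):
    # map a normalized coordinate in [0, 1] onto the physical range [lo, hi]
    return lo + v * (hi - lo)

def to_profile(sample, level_range=(30, 40), time_range=(200, 800), T=1000):
    # Convert a normalized point (a, b, c) in [0, 1]^3 into a step-change
    # set-point profile, using the linear-system ranges quoted above.
    a = rescale(sample[0], *level_range)
    b = rescale(sample[1], *level_range)
    c = int(round(rescale(sample[2], *time_range)))
    return [a if t < c else b for t in range(T)]

y_d = to_profile((0.0, 1.0, 0.5))   # step from level 30 to 40 at t = 500
```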
Figure 4

Data distribution of the training set (a) and the test set (b) in the normalized space [0,1]3.


Neural Network Training

Here we use a three-layer MLP with 3, 10, and 2 neurons in the input, hidden, and output layers, respectively. All the activation functions are ReLU. All the weights are initialized with the Xavier uniform scheme[39] with a gain of 0.05, while the biases are set to 0, except those of the output layer, which are set to the fixed values of k_p and k_d of a benchmark controller, as detailed later. The neural network is trained using the Adam optimizer with the learning rate set to 0.001. The training is implemented in a minibatch fashion with a batch size of 250. Note that for some initializations, the network may generate k_p and k_d values that make the tracking error diverge and interrupt the training process. To circumvent this problem, the neural network parameters are initialized in a small-value region, and the biases of the output layer are set to a pair of k_p and k_d that yields a convergent tracking error. This is akin to the idea of fine-tuning neural networks.
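This initialization strategy can be sketched as follows. It is a minimal numpy version, with one assumption made explicit: the output layer is kept linear so that the negative benchmark gains can pass through (a ReLU output would clip them to zero); the benchmark bias values are those used for the LTI example below.

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_uniform(fan_in, fan_out, gain=0.05):
    # Xavier (Glorot) uniform initialization scaled by a small gain
    limit = gain * np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

# 3-10-2 MLP: normalized (a, b, c) in, recommended (kp, kd) out
W1, b1 = xavier_uniform(3, 10), np.zeros(10)
W2 = xavier_uniform(10, 2)
b2 = np.array([-0.01, -0.3])   # output biases pinned to the benchmark gains

def recommend(s):
    # ReLU hidden layer; with small weights, the output starts close to the
    # benchmark gains, which keeps the early training cycles stable.
    h = np.maximum(0.0, s @ W1 + b1)
    return h @ W2 + b2

kp, kd = recommend(np.array([0.2, 0.8, 0.5]))
```

Right after initialization, recommend() returns gains within a few hundredths of the benchmark pair, so the very first simulated batches already converge and the training loop never starts from a divergent regime.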

Results

LTI System

We first tested LILC on an LTI system defined by system matrices A, B, and C that describe the typical dynamics of injection molding.[40,41] The sampling period of the system is 0.01 s, and the cycle duration is 10 s, which means there are T = 1000 points in a cycle. The internal states are initialized to the same value at the start of every cycle. The benchmark ILC against which LILC is compared has the controller parameters k_p = −0.01 and k_d = −0.3. To accelerate the convergence of the averaged cycle loss (ACL), defined as the squared tracking error averaged over the time points within a cycle, both ILC controllers are initialized with a PI controller in the first cycle; its gain K is set to 0.001 in this case. The hyperparameter N plays an important role in the control performance and should generally be no less than 20 to ensure that the steady state is reachable. An empirical value N = 50 is selected to appropriately trade off the transient and steady-state control performance, and this value is used in the remaining examples of the paper. The control performance of the benchmark ILC and LILC is compared in Figure 5, which clearly shows that LILC tracks the given set-point profile almost perfectly by cycle 2, whereas the benchmark ILC still has a marked overshoot. The ACL as a function of the cycle for both ILC and LILC is plotted in Figure 6a and d for the noise-free case and for unit normal process noise, respectively. For the noisy case, if the acceptable ACL is less than 0.01 (denoted by the dashed gray line in Figure 6), LILC converges 2.5 times faster than the benchmark ILC, implying a remarkable reduction of waste.
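The ACL index used throughout the comparisons is just a per-cycle mean squared error; a minimal sketch, with a made-up error sequence:

```python
def averaged_cycle_loss(e):
    # ACL: squared tracking error averaged over the time points of one cycle
    return sum(v * v for v in e) / len(e)

acl = averaged_cycle_loss([0.1, -0.2, 0.3, 0.0])   # -> 0.035
```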
Figure 5

Control performance of benchmark ILC and LILC on an LTI system. The process outputs at cycles 1, 2, and 5 are plotted.

Figure 6

Control performance of LILC and the benchmark ILC indexed by ACL is compared on an LTI system (a,d), an LTV system (b,e), and a nonlinear system (c,f). (a), (b), and (c) correspond to the noise-free case, whereas (d), (e), and (f) are for the case wherein the process noise is subject to unit normal distribution. The blue stands for LILC, while green stands for the benchmark ILC. The mean (solid line) and standard deviation (std, shaded area) are calculated for all the data points in the test set.

In some batch processes, there exist repetitive disturbances that need to be rejected. Here we also show the capability of LILC to reject a repetitive disturbance in the LTI system. In this example, the benchmark and the neural network remain the same as before, except that the process noise is replaced by a deterministic sine signal. The result is shown in Figure 7, where LILC outperforms the benchmark ILC on repetitive disturbance rejection.
Figure 7

Averaged cycle loss of LILC and benchmark ILC in the repetitive disturbance case.


LTV System

LILC is further tested on an LTV system. All the configurations remain the same except the system matrix A, which undergoes a slope change from the 200th to the 700th data point. The results for the LTV system are shown in Figure 6b and e. In both the noise-free and noisy cases, LILC robustly outperforms the benchmark ILC.
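Such a slope change in A(t) can be sketched as a piecewise-linear schedule for one matrix entry; the endpoint values a0 and a1 below are hypothetical, since the paper does not quote the LTV numbers here.

```python
def a_entry(t, a0=0.5, a1=0.6, t_start=200, t_end=700):
    # One time-varying entry of A(t): constant a0 before t_start, ramping
    # linearly to a1 over [t_start, t_end], then constant a1 afterwards.
    if t < t_start:
        return a0
    if t >= t_end:
        return a1
    return a0 + (a1 - a0) * (t - t_start) / (t_end - t_start)
```

Evaluating a_entry at every time index of a cycle yields the A(t) schedule fed to the LTV simulation.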

Nonlinear System

Another test is performed on a continuous stirred tank reactor (CSTR),[42] whose dynamics are described by a standard two-state model with parameters θ = 1, β = 0.3, γ = 20, and D = 0.072.[43] The sampling time is 0.1 s. The second internal state also serves as the process output of the system and is required to follow the set-point profile y_d. The cycle duration is 10 s, or equivalently T = 100 data points per cycle. The system is initialized with the same initial state for any cycle k. The benchmark ILC is set with k_p = −6.00 and k_d = −35. Additionally, the PI controller for the first cycle is set with K = 0.5. The results for both cases are shown in Figure 6c and f, and a substantial improvement of LILC over the benchmark ILC is clearly observed. Finally, we show how the proposed LILC method resolves the problem raised in the motivating example (Figure 1). In this example, the benchmark ILC uses k_p = k_d = −0.3. The tracking-error comparison in terms of ACL of LILC and the benchmark ILC for two different set-point profiles is summarized in Figure 8a and b, where LILC outperforms the benchmark ILC in both the speed of error convergence and the steady-state tracking error. Indeed, the advantage of LILC in terms of steady-state tracking error is tangible. This observation is again confirmed in Figure 8: Figure 8c and d shows that LILC achieves almost perfect tracking, while the benchmark ILC starts to fluctuate after 8 s, which is generally unacceptable in industrial practice. In all, this again demonstrates the superior performance of LILC.
Figure 8

Averaged cycle loss and tracking performance at cycle 50 when LILC and the benchmark ILC track two different set-point profiles for the motivating example. (a) and (b) show the averaged cycle losses for the two set-point profiles, respectively. (c) and (d) show the corresponding tracking performance at cycle 50.


Discussion

In this paper, we presented the learning of iterative learning control (LILC) method for batch processes that need to manufacture different products. As a pilot study, different manufacturing needs for various products are abstracted as different set-point profiles for the same process. We used a toy nonlinear system as an example to show that different set-point profiles for the same process with the same ILC controller may lead to divergent regulation performance, clearly demonstrating the need to adapt the ILC tuning to different set-point profiles. Set-point profiles were represented in low dimensions to facilitate the neural network training. The well-trained neural network robustly outperformed the benchmark ILC on an LTI system, an LTV system, and a nonlinear system, whether process noise was present or absent. Though used here for ILC tuning, the method is quite general and can solve a range of tuning problems, such as weight tuning of model predictive controllers and controller tuning for multiagent systems; it is hence worth further exploration. It should be noted that LILC serves the controller tuning of one specific batch process. However, the LILC framework is flexible enough to achieve interprocess generalization, provided that the class of processes can be parametrized. These parameters can be lumped together with the set-point parameters and fed to the neural network as a whole. By collecting more data for various combinations of processes and set-points, intelligent recommendation of ILC controllers across processes can be achieved. In fact, there is an underlying assumption behind the method: the model of the process must be readily available for training. Although seemingly strict at first glance, this assumption can be satisfied in practice. Such a process model can be obtained by system identification based on data or by derivation from first principles.
The former is possible because of the abundance of data afforded by the rapid development of 5G and cloud-based technology and the increasing deployment of the industrial Internet of Things. For example, in the injection molding industry, such techniques can help collect abundant data to develop a precise model for each type of injection molding machine of the same manufacturer. Mechanistic modeling is also possible, as some manufacturers provide such services by drawing on their rich knowledge of the equipment they sell. Alternatively, transfer learning can help circumvent this assumption. Indeed, this is also the major point distinguishing our method from model-free optimization methods for batch processes.[44] Additionally, it is worthwhile to investigate the robustness of LILC, including robustness against model mismatch, repeatable disturbances, and stochastic factors acting on different parts of a system, as well as its application to stochastic batch processes, biological processes in particular.[45−50]