
Quality-Relevant Process Monitoring with Concurrent Locality-Preserving Dynamic Latent Variable Method.

Qi Zhang, Shan Lu, Lei Xie, Qiming Chen, Hongye Su

Abstract

A concurrent locality-preserving dynamic latent variable (CLDLV) method is proposed to extract the correlation between process variables and quality variables for quality-related dynamic process monitoring. Given that dynamic process data can easily be contaminated by noise and outliers and conventional dynamic latent variable models lack robustness, a low-rank autoregressive model is developed to deal with autocorrelation and cross-correlation properties among the data. Then neighborhood structure information is integrated into the partial least squares model, which can better reveal the essential structure of the data. The final concurrent projection of the latent structures is employed to monitor output-related faults and input-related process faults that affect quality. The Tennessee Eastman process and hot strip mill process are used to demonstrate the effectiveness of CLDLV-based detection and diagnostic methods.
© 2022 The Authors. Published by American Chemical Society.


Year:  2022        PMID: 35967037      PMCID: PMC9366793          DOI: 10.1021/acsomega.2c02118

Source DB:  PubMed          Journal:  ACS Omega        ISSN: 2470-1343


Introduction

The rapid development of sensor and computer technology has facilitated data acquisition, and data-driven modeling and control methods have thus become a hot research topic in the field of process control.[1] Current industrial sensors allow the real-time monitoring of hundreds of variables, such as temperature, flow, pressure, and concentration.[2] These variables are typically sampled at intervals of seconds or minutes, but some important polymer qualities and conversions must still be measured offline, which takes hours. To monitor process performance effectively, one therefore needs to utilize not only the measurements of numerous process variables but also quality data that are available only infrequently.[3,4] Multivariate projection methods such as principal component analysis (PCA),[5] partial least squares (PLS),[6] and canonical correlation analysis (CCA)[7] have gained widespread attention and application in process monitoring.[8,9] PCA-based methods, however, do not use quality variables to guide the decomposition of the process data and extract the latent variables most relevant to product quality, so they are susceptible to quality-irrelevant process perturbations, which degrades their performance.[10] In recent years, methods based on PLS and CCA that discover the correlations between process variables and quality variables have therefore been developed.
These methods show better performance in detecting faults related to process quality.[8,11−13] CCA analyzes the relationship between two sets of variables after projection to reflect their degree of correlation, but it does not directly reveal the mapping between the two sets of variables.[14] PLS combines the ideas of PCA and CCA[15] and tries to find the multidimensional directions in the X-space that explain the directions of largest variance in the Y-space, and it has gained wide application.[16,17] However, PLS mainly exploits the variance information in the data, so local structural information is ignored. Because the structural relationships among the data affect dimensionality reduction, the loss of this information may distort the projected representation and thus the monitoring results. Manifold learning methods proposed in recent years assume that each sample point can be approximately reconstructed by a linear combination of its neighboring points, which helps reveal the essential structure of the data,[18−20] and locality-preserving-projection-based methods have accordingly been proposed for fault detection.[21−23] A desirable monitoring model should therefore analyze both the variance and the local structure of the data. Modern industrial processes tend to be complex, with operating units and control loops interacting with one another; the feedback of energy, material, and information through the control loops makes it easy for faults to propagate, so the measured process variables are highly cross-correlated and autocorrelated.[24] The vector autoregressive (VAR) method obtains regression coefficients from multidimensional time series by linear transformation, which is advantageous when dealing with highly correlated data.[2,25] A significant defect of the VAR model, however, is its sensitivity to noise and outliers.
Unfortunately, noise and outliers are ubiquitous in industrial processes, so the effect of sample quality must be considered when studying the dynamic relationships among process variables. Furthermore, because of noise and redundancy, the effective rank of the data matrix does not grow in tandem with the number of collected variables, resulting in multicollinearity. Under multicollinearity, a linear transformation loses information about the matrix because it reduces the rank, and the regression coefficients become ill-determined.[26] The recently proposed low-rank learning assumes that the variables are correlated,[27] that is, the acquired data matrix is not full-rank, and several works have confirmed that low-rank learning can reduce the influence of noise and thereby improve robustness. A low-rank VAR method therefore mitigates process noise and also reduces multicollinearity. We introduce a low-rank structure for the coefficients of the high-dimensional time series, which appropriately reduces the effect of noise on the data correlation. A locality-preserving DLV model is then naturally derived that discovers local manifold structure by adding extra cross-view correlation between neighbors. Finally, standard PLS performs an oblique projection, so many of its latent variables are not orthogonal to the output variables; in other words, the principal space still contains residual variation.[3,28] To address this shortcoming, the input space is concisely decomposed into output-related and input-related subspaces, and faults are projected concurrently onto the resulting subspaces. A fault diagnosis method is established in these subspaces, and the effectiveness of the algorithm is verified on the Tennessee Eastman (TE) process and an industrial hot strip mill case.
The main contributions of this work are summarized as follows. A low-rank vector autoregressive model is proposed to deal with the autocorrelation and cross-correlation properties of the data, which helps to reduce the high autocorrelation while improving robustness. Neighborhood structure information is integrated into the PLS model, which better reveals the essential structure of the data. The proposed method divides the measurement space into subspaces by concurrent projection, which helps to detect faults and diagnose faulty variables and thus improves monitoring performance. The remainder of this paper is organized as follows: the related works are reviewed first, the proposed algorithms are then presented and evaluated on two case studies, and concluding remarks close the paper.

Related Works

Review of PLS Models

Process data and quality data are collected to form an input matrix X and an output matrix Y. PLS projects X and Y onto the low-dimensional space defined by l latent variables, and the following regression model can be built:

X = TPᵀ + X̃
Y = TQᵀ + Ỹ

where T = [t1, ..., tl] is the score matrix, P = [p1, ..., pl] and Q = [q1, ..., ql] are the loading matrices for X and Y, and l is the number of PLS factors. X̃ and Ỹ are the modeling residuals. For each factor, the PLS objective embedded in this algorithm is to find the weight vectors that solve

max wᵀXᵀYq   s.t.  ‖w‖ = ‖q‖ = 1

Details of the PLS algorithm can be found in refs (8) and (16).
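The decomposition above can be sketched with a minimal NIPALS-style routine. This is an illustrative implementation of textbook PLS, not the authors' code; the function name and defaults are ours.

```python
import numpy as np

def pls_nipals(X, Y, n_components, max_iter=500, tol=1e-10):
    """Minimal NIPALS PLS: X = T P^T + residual, Y = T Q^T + residual."""
    X, Y = X.copy(), Y.copy()
    T, P, Q = [], [], []
    for _ in range(n_components):
        u = Y[:, [0]]                       # initialize with first quality column
        for _ in range(max_iter):
            w = X.T @ u / (u.T @ u)         # input weight
            w /= np.linalg.norm(w)
            t = X @ w                       # input score
            q = Y.T @ t / (t.T @ t)         # output loading
            u_new = Y @ q / (q.T @ q)       # output score
            if np.linalg.norm(u_new - u) < tol:
                u = u_new
                break
            u = u_new
        p = X.T @ t / (t.T @ t)             # input loading
        X -= t @ p.T                        # deflate both blocks
        Y -= t @ q.T
        T.append(t); P.append(p); Q.append(q)
    return np.hstack(T), np.hstack(P), np.hstack(Q)
```

Each factor maximizes the covariance between the score t = Xw and the output, after which both blocks are deflated, matching the objective stated above.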

Multicollinearity

Given a data set {y_i, x_i1, ···, x_ip} of n statistical units, a linear regression model assumes that the relationship between the dependent variable y and the p-vector of regressors x is linear. The regression model can be stacked and written in matrix notation as Y = Xβ + ε. The ordinary least squares estimator is

β̂ = (XᵀX)⁻¹XᵀY

A set of variables is perfectly multicollinear if there exist one or more exact linear relationships among them, i.e.,

λ0 + λ1 x_i1 + λ2 x_i2 + ··· + λp x_ip = 0   for all samples i

In such a case, the design matrix X has less than full rank, and the matrix XᵀX cannot be inverted. Furthermore, multicollinearity is fairly common when dealing with raw data sets, which often contain redundant noise; the relationship then holds only approximately, with an error term v:

λ0 + λ1 x_i1 + λ2 x_i2 + ··· + λp x_ip + v_i = 0

Although there is no exact linear relationship between the variables, the matrix XᵀX is ill-conditioned. As a result, the computed inverse can be very sensitive to small changes in the data, and the regression coefficients can be highly inaccurate or strongly sample-dependent.
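This sensitivity is easy to reproduce numerically. The sketch below, on made-up data, shows how a near-exact linear relationship between two regressors inflates the condition number of XᵀX and destabilizes the OLS coefficients, even though the fitted values barely change:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 1e-6 * rng.normal(size=n)     # x2 is almost an exact copy of x1
X = np.column_stack([x1, x2])
y = x1 + 0.1 * rng.normal(size=n)

# OLS via the normal equations: beta = (X^T X)^{-1} X^T y
beta = np.linalg.solve(X.T @ X, X.T @ y)

# Perturb the data very slightly and refit
Xp = X + 1e-8 * rng.normal(size=X.shape)
beta_p = np.linalg.solve(Xp.T @ Xp, Xp.T @ y)

cond = np.linalg.cond(X.T @ X)   # enormous: X^T X is ill-conditioned
# The coefficients swing wildly between the two fits,
# yet the fitted values X @ beta and Xp @ beta_p barely move.
```

The fitted values are stable because they depend only on the projection of y onto the column space of X; the individual coefficients along the nearly degenerate direction are what become meaningless.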

Review of the Dynamic Latent Variable Model

The conventional dynamic latent variable (DLV) algorithms are based on the concept that a small number of latent variables often drive the changes in high-dimensional data. An objective function is built whose solution is a score vector that explains the dynamic variation in the process,[29] of the form

max wᵀXᵀZ(β ⊗ w)   s.t.  ‖w‖ = ‖β‖ = 1

where β ⊗ w is the Kronecker product of the dynamic weight vector β and the loading vector w, and Z is the extended matrix of past data. Note that the Kronecker product is a special form of the outer product, in which the total order increases after multiplication. A linear transformation, by contrast, can lose information about a matrix because it decreases the rank, e.g., rank(AB) ≤ rank(B). The VAR model is sensitive to noise, and noise leaves the data matrix X highly correlated, which leads to multicollinearity. Consequently, obtaining a regression matrix of as low a rank as possible not only suppresses the noise perturbation but also alleviates the multicollinearity problem.

Concurrent Locality-Preserving Dynamic Latent Variable Model

Dynamic Analysis

For a time series {xt} with sampling interval T, the data at time t can be expressed as

xt = A1 xt−1 + A2 xt−2 + ··· + Ad xt−d + εt

where the Ai are the coefficient matrices of the vector autoregressive model, εt is Gaussian noise, and d is the number of time lags. For convenience, this is rewritten as

xt = AᵀVt,   with A = [A1ᵀ, ..., Adᵀ]ᵀ and Vt = [xt−1ᵀ, ..., xt−dᵀ]ᵀ

Noting the high correlation of the data and the noise contamination, the large information redundancy of the time-series data can lead to multicollinearity. It is therefore of interest to study VAR models under high-dimensional scaling, where the transition matrix governing the temporal dynamics exhibits a more complex structure. One option is to impose L1-norm regularization on the regression model,

min_A ‖Z − QA‖F² + λ‖A‖1

where Q stacks the lagged regressors Vt and Z stacks the corresponding targets xt. The optimal coefficient matrix A can be obtained by the coordinate descent method or least-angle regression. However, λ cannot be selected and tuned explicitly, so low-rank regression is adopted instead, with the rank of A decided explicitly by the constraint rank(A) = s < min(n, k):

min_A ‖Z − QA‖F²   s.t.  rank(A) = s

Although general rank minimization is a nonconvex and NP-hard problem, objectives with rank constraints are solvable. According to the Eckart–Young theorem,[30] when Q is nonsingular, a singular value decomposition (SVD) Z = UDVᵀ is performed on Z, and the global minimizer can be written as

Â = Q⁻¹ ps(Z)

where ps(Z) = Us Ds Vsᵀ is the best rank-s approximation of matrix Z in terms of the Frobenius norm, and Ds consists of the largest s singular values of Z. When Q is singular, the SVD is performed on Q to obtain Q = UDVᵀ, and the problem is transformed, with Ω = VᵀA and E = UᵀZ, into minimizing ‖E − DΩ‖F². The global minimizer is then obtained through

Ω̂ = [Dr⁻¹ Er ; 0],   Â = V Ω̂

where Dr is the diagonal matrix consisting of all the nonzero singular values of Q, r(Q) is the rank of Q, Er consists of the first r(Q) rows of E, and the remaining block of Ω̂ is set to the zero matrix.(31,32)
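One simple way to realize a rank-constrained VAR fit is ordinary least squares followed by an Eckart–Young truncation of the coefficient matrix. This is a common reduced-rank heuristic sketched for illustration, not necessarily the paper's exact estimator; the function name and data layout are ours.

```python
import numpy as np

def low_rank_var_fit(X, d, s):
    """Sketch of a low-rank VAR(d) fit: ordinary least squares on the
    lagged data, then truncation of the coefficient matrix to its best
    rank-s approximation (Eckart-Young)."""
    n, m = X.shape
    targets = X[d:]                                      # x_t for t = d..n-1
    lagged = np.hstack([X[d - i: n - i] for i in range(1, d + 1)])
    A_ols, *_ = np.linalg.lstsq(lagged, targets, rcond=None)
    U, sv, Vt = np.linalg.svd(A_ols, full_matrices=False)
    return (U[:, :s] * sv[:s]) @ Vt[:s]                  # rank-s truncation
```

The lag-i coefficient blocks are stacked in the rows of the returned matrix, matching A = [A1ᵀ, ..., Adᵀ]ᵀ above; truncating the small singular values discards the noise-dominated directions of the estimate.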

Quality-Related Analysis

Suppose that the process data matrix X is formed by sampling m process variables n times, and the data matrix Y is formed by measuring the quality variables at the same n instants. In order to capture the impact of the process on the quality variables, one seeks projections of X and Y with maximum correlation, i.e., the correlation between the score vectors t and u is maximized. The objective is not only to maximize the covariance of t and u but also to ensure that t and u each have maximum variance. Considering the importance of structural information among the data, the nearest-neighbor relationships within the quality and process data also need to be considered to reveal the intrinsic structure of the data. As a result, similarity matrices Sˣ and Sʸ are defined whose entries sˣᵢⱼ and sʸᵢⱼ reflect the neighborhood information between samples:

sˣᵢⱼ = exp(−‖xᵢ − xⱼ‖² / dₓ)  if xⱼ ∈ NE(xᵢ) or xᵢ ∈ NE(xⱼ),  and 0 otherwise

where dₓ is the mean square distance between samples (and similarly for sʸᵢⱼ with d_y); dₓ and d_y are two adjustable parameters controlling the kernel widths, and NE(xᵢ) (or NE(yᵢ)) denotes the index set of the local neighbors of xᵢ (or yᵢ). The CLDLV-based model embodies the idea that if data points are close in the high-dimensional input space, their low-dimensional representations after projection should remain close. Thus, a new objective function is defined that augments the covariance term with these neighborhood-weighted terms. Remark 1: The CLDLV-based model is equivalent to the concurrent DLV model if Sˣ and Sʸ contain all zero elements, i.e., no neighborhood information is used. To maximize the objective, Lagrange multipliers are adopted. Taking derivatives with respect to the weight vectors w̃ and q and setting them to zero yields an eigenvalue problem, from which λ = θ can be obtained. Defining the symmetric matrix Ψ, the solution reduces to an eigendecomposition of Ψ. Remark 2: w̃ and q balance the largest projection variance against the largest correlation between the scores, whereas the CCA method considers only the largest correlation. In addition, by introducing S̃, the integrity of the local structural information is guaranteed, and only Ψ needs to be computed.
After the optimal weight vectors are found, the corresponding score vectors are obtained by projecting the data onto them.
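The neighborhood similarity matrices defined above can be sketched as follows. This is an illustrative implementation; the function name, the symmetrization choice, and the default kernel width are ours.

```python
import numpy as np

def knn_similarity(X, k, width=None):
    """Gaussian-kernel similarity restricted to k nearest neighbors:
    s_ij = exp(-||x_i - x_j||^2 / d) if x_j is among the k nearest
    neighbors of x_i (or vice versa), 0 otherwise. The kernel width d
    defaults to the mean squared pairwise distance, as in the text."""
    n = X.shape[0]
    D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    if width is None:
        width = D2[~np.eye(n, dtype=bool)].mean()
    S = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(D2[i])[1:k + 1]   # skip the point itself
        S[i, idx] = np.exp(-D2[i, idx] / width)
    return np.maximum(S, S.T)              # symmetrize the neighbor graph
```

The same routine applied to Y would give Sʸ; a larger width flattens the weights, while a smaller width sharpens the emphasis on the closest neighbors.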

Concurrent Projection

However, the principal subspace of standard PLS includes many variations that are orthogonal to Y and not useful for predicting Y.(28) Following the idea of concurrent PLS,[3] this study decomposes the measurement block into subspaces for monitoring. With the proposed CLDLV algorithm, input X and output Y can be modeled dynamically as

X = Tc Pcᵀ + X̃,   Y = Tc Qcᵀ + Ỹ

where Tc represents the covariations in X that are related to the predictable part of Y, Tx stands for the variations in X that are useless for predicting Y, and Ty represents the variations in Y not predicted by X. The predicted output is obtained as Ŷ = Tc Qcᵀ. An SVD is performed on Ŷ,

Ŷ = Uc Dc Vcᵀ

from which Qc = Vc Dc and Pc = R Qcᵀ Vc Dc⁻¹. For the unpredicted output Ỹ = Y − Tc Qcᵀ, PCA is performed on Ỹ to yield the output-principal scores Ty and the output residuals Ỹr, where Ty = Qyᵀ Ỹ and Ỹr = (I − Qy Qyᵀ)Ỹ. The output-independent input X̃ = (I − Pc Pc†)X is formed by projecting X onto the orthogonal complement of Span{Pc}, where Pc† = (Pcᵀ Pc)⁻¹ Pcᵀ. PCA is performed on X̃ to yield the input-principal scores Tx and the input residuals X̃r, where Tx = Pxᵀ X̃ and X̃r = (I − Px Pxᵀ)X̃.
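The successive decomposition can be sketched as follows, in a row-sample convention (transposed relative to the equations above). This is an illustrative sketch of the concurrent-projection bookkeeping, not the paper's exact algorithm; in particular, the SVD-based re-orthogonalization of Ŷ is omitted, and all names are ours.

```python
import numpy as np

def pca_split(M, n_pc):
    """PCA split of a (pre-scaled) block: principal scores + residual."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    P = Vt[:n_pc].T             # principal directions
    T = M @ P                   # principal scores
    return T, P, M - T @ P.T    # scores, loadings, residual

def concurrent_split(X, Y, Tc, Pc, Qc, ny, nx):
    """Simplified concurrent-style decomposition given a joint model
    (Tc, Pc, Qc): split the output into predicted / output-principal /
    output-residual parts, and the input into joint / input-principal /
    input-residual parts."""
    Yhat = Tc @ Qc.T                              # output predicted by the joint scores
    Ty, Qy, Yres = pca_split(Y - Yhat, ny)        # unpredicted output
    Tx, Px, Xres = pca_split(X - Tc @ Pc.T, nx)   # output-irrelevant input
    return Yhat, Ty, Qy, Yres, Tx, Px, Xres
```

By construction the pieces reassemble the data exactly, which is the bookkeeping property the five monitoring statistics rely on.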

Fault Detection and Diagnosis

As mentioned above, the input and output data spaces are concurrently projected onto five subspaces: a joint input/output subspace that captures the covariations between input and output, an output-principal subspace, an output-residual subspace, an input-principal subspace, and an input-residual subspace. Table 1 shows the monitoring statistics and the corresponding control limits. In Table 1, Λc is the covariance matrix of the principal scores in the joint subspace; Λx and Λy are obtained in the same way for the input- and output-principal subspaces. The control limits can be determined from the F distribution and the χ² distribution.
Table 1

Monitoring Statistics and Control Limits

statistic    calculation       control limit
Tc²          tcᵀ Λc⁻¹ tc       F distribution
Tx²          txᵀ Λx⁻¹ tx       F distribution
Ty²          tyᵀ Λy⁻¹ ty       F distribution
Qx           ‖x̃‖²             gx χ²(hx, α)
Qy           ‖ỹ‖²             gy χ²(hy, α)
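The statistics and limits in Table 1 can be computed as in the sketch below: standard Hotelling-type T² and squared-norm Q indices, an F-distribution limit for T², and the usual moment-matched weighted-χ² limit for Q. SciPy is assumed for the distribution quantiles; the function names are ours.

```python
import numpy as np
from scipy import stats

def t2_statistic(t, Lam):
    """Hotelling-type T^2 = t^T Lam^{-1} t for one score vector."""
    return float(t @ np.linalg.solve(Lam, t))

def q_statistic(res):
    """Q (SPE) statistic: squared norm of the residual vector."""
    return float(res @ res)

def t2_limit(l, n, alpha=0.99):
    """F-distribution control limit for T^2 with l components, n samples."""
    return l * (n ** 2 - 1) / (n * (n - l)) * stats.f.ppf(alpha, l, n - l)

def q_limit(res_train, alpha=0.99):
    """Weighted chi-square limit g * chi2(h, alpha), with g and h obtained
    by matching the mean and variance of the training Q values."""
    q = np.sum(res_train ** 2, axis=1)
    g = q.var() / (2 * q.mean())
    h = 2 * q.mean() ** 2 / q.var()
    return g * stats.chi2.ppf(alpha, h)
```

A sample is flagged when its statistic exceeds the corresponding limit computed from fault-free training data.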
After a fault is detected, its cause needs to be diagnosed. Usually, contribution plots are used to locate the fault at a process variable and then diagnose its cause.[33] Based on the obtained subspaces, the quality-relevant reconstruction contribution plot performs contribution analysis by reconstructing the fault detection index along the direction of each variable. The fault detection index of a sample x takes the quadratic form

Index(x) = xᵀ Γ x

where Γ is given in Table 2.
Table 2

Diagnose Statistics

index     Γ
Tc²       Pc Λc⁻¹ Pcᵀ
Tx²       (I − Pc Pcᵀ)(Px Λx⁻¹ Pxᵀ)(I − Pc Pcᵀ)
Ty²       (I − Pc Pcᵀ)(Qy Λy⁻¹ Qyᵀ)(I − Pc Pcᵀ)
Qx        (I − Pc Pcᵀ)(I − Px Pxᵀ)(I − Pc Pcᵀ)
The reconstruction-based contribution (RBC) for variable j and sample x is defined as

RBCj = (ξjᵀ Γ x)² / (ξjᵀ Γ ξj)

where ξj is the jth column of the identity matrix I. To sum up, the full description of the CLDLV method is listed in Algorithm 1.
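Once Γ is available, the RBC formula above reduces to a one-liner, since ξjᵀΓx is the jth entry of Γx and ξjᵀΓξj is the jth diagonal entry of Γ (illustrative sketch; names are ours):

```python
import numpy as np

def rbc(x, Gamma):
    """Reconstruction-based contribution of each variable j for sample x:
    RBC_j = (xi_j^T Gamma x)^2 / (xi_j^T Gamma xi_j), where xi_j is the
    j-th column of the identity matrix."""
    g = Gamma @ x
    return g ** 2 / np.diag(Gamma)
```

The variable with the largest RBC is taken as the most likely fault cause, which is how the contribution plots in the case studies are read.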

Case Studies

TE Process

The TE process simulator, proposed by Downs and Vogel, simulates a continuous chemical process originally developed for research on engineering control and control strategy design.[34] The TE process consists of several operating units, including a continuous stirred reactor, a separator, a gas–liquid separation column, a vapor stripper, a reboiler, and a centrifugal compressor. The entire system has 12 manipulated variables (XMV) and 41 measured variables (XMEAS), and it remains a challenging benchmark in the field of process control. Recently, an improved TE process with a more straightforward design of perturbations was proposed by Bathelt et al.[35] In this version the variables are highly correlated, 20 process disturbances are predefined, and only the activated disturbances affect the process. The revised Tennessee Eastman process model is shown in Figure 1. Moreover, the disturbances are augmented with eight random-variation disturbances, as shown in Table 3.
Figure 1

The improved TE process model (additional measurements in red).

Table 3

Extended Process Disturbances

number            description
IDV(1)–IDV(20)    Table 8 in ref (34), p. 250
IDV(21)           A feed temperature (stream 1)
IDV(22)           E feed temperature (stream 3)
IDV(23)           A feed pressure (stream 1)
IDV(24)           D feed pressure (stream 2)
IDV(25)           E feed pressure (stream 3)
IDV(26)           A and C feed pressure (stream 4)
IDV(27)           pressure fluctuation in the cooling water recirculating unit of the reactor
IDV(28)           pressure fluctuation in the cooling water recirculating unit of the condenser

Numbers 21–28 are random variations.


Correlation Analysis

In this work, the data generated by the improved TE process are used to test the performance of CLDLV. The correlation plot for the original data X is drawn in Figure 2, with the autocorrelation and cross-correlation curves in blue and the 95% confidence limit as the red line. The autocorrelation exceeds the confidence limits, and there is severe cross-correlation between the variables. In addition, the autocorrelation and cross-correlation curves show multiple peaks, which implies strong dynamic dependence among the variables.
Figure 2

Autocorrelation and cross-correlation of X.

After the modeling process, the autocorrelation and cross-correlation of T, P, and Q are analyzed in Figure 3. The correlation plot of T is depicted in Figure 3a: there is almost no cross-correlation among the latent variables, and the variation is within the confidence limit. Although some autocorrelation remains, it has been greatly weakened. Figure 3b depicts the autocorrelation and cross-correlation of P; all cross-correlations tend to zero and are within the confidence limits, and the autocorrelation between the variables is also closer to zero. In Figure 3c, the autocorrelation of Q also tends to zero, and the cross-correlation coefficients are close to zero except at lag 0. In summary, after CLDLV processing the dynamic relationships in the original data matrix are appropriately reduced, and the effect of multicollinearity is diminished, which facilitates the determination of the model parameters.
Figure 3

Autocorrelation and cross-correlation.


Fault Detection and Diagnosis

According to the improved TE process model, multiLoop-mode1.mdl[35] is used to simulate two faults to verify the performance of the algorithm, with the simulation parameters kept at their initial settings. Among the process variables, the manipulated variables XMV(9) and XMV(12) are excluded because they are constant in this setup and provide no information for troubleshooting. In this test, the input variables are the process measurements XMEAS(1-36) and the manipulated variables XMV(1-10), and the output variables are the quality measurements XMEAS(37-41). For DPCA and KPCA, the variables are XMEAS(1-41) and XMV(1-10), and the time lag of DPCA is set to 2. The improved TE Simulink model generates 500 fault-free training samples, and 1000 test samples are generated with two faults introduced as follows. Fault 1: IDV(26), A and C feed pressure (stream 4) disturbances, introduced starting from sample 301. Fault 2: IDV(28), pressure fluctuations in the cooling water recirculating unit of the condenser, introduced starting from sample 301. After the training data have been collected, the monitoring results of the proposed method are shown in Figure 4. The CLDLV-based method detects the fault in all four subspaces: Tc² detects all the faulty samples and shows good detection capability; Tx² also detects the fault, although some normal samples are flagged as faulty; Ty² detects the fault as well, but some faulty samples are missed; and Q also detects the fault, which indicates that its occurrence is related to the output-principal subspace.
Figure 4

CLDLV-based monitoring result of fault 1.

To show the effectiveness of the CLDLV-based method, the detection results of the four comparison methods (KPCA, DPCA, DCCA, and DPLS) for fault 1 are given in Figure 5. All four methods can detect the fault, but the detection rate of DCCA is low. This is because DCCA only finds the relationship between the projected process variables and the quality variables and lacks a direct mapping between them. Compared with DCCA, KPCA detects most of the faulty samples and shows that the fault occurs near the 301st sample, but its false alarm rate is too high. DPCA detects the fault, but its detection rate is lower than that of the proposed method. DPLS gives better detection results under both T² and SPE, which indicates that DPLS combines the ideas of PCA and CCA and solves the mapping problem between quality variables and process variables.
Figure 5

Monitoring results of fault 1.

The CLDLV-based diagnosis results are shown in Figure 6. From the diagnostic results in the four subspaces, variables 18, 35, 48, and 49 are the main causes of the fault. In the Tc²-based contribution plot, the contribution of variable 18 is the largest, while variables 35, 48, and 49 contribute more weakly. The results for Tx² and Ty² are similar and indicate consistent fault causes; in these contribution plots, multiple variables contribute strongly to the fault. Variables 48 and 49 are the obvious contributors in the Q-based contribution plot and are considered the main causes of the fault.
Figure 6

CLDLV-based diagnosis result of fault 1.

For fault 2, the results of CLDLV-based fault detection are shown in Figure 7; all four subspaces detect the fault. Almost all faulty samples are detected by Tc², although several normal samples are flagged as faulty. The faulty samples are also detected by Tx², with slightly more missed samples than Tc². After the fault occurs, some faulty samples are not correctly identified by Ty². Almost all faults are detected by Q with a low false alarm rate.
Figure 7

CLDLV-based monitoring result of fault 2.

As a comparison, the fault detection results of KPCA, DPCA, DCCA, and DPLS are given in Figure 8. All four methods detect the fault, and DPCA and DPLS show good detection under both T² and SPE. However, the DPCA-based statistics tend back toward normal after the 500th sample, where most of the faults cannot be detected; the DPLS-based SPE still detects faults after the 500th sample, but its T² tends toward normal as well. The DCCA method has a low detection rate. KPCA has a very high detection rate, but its false alarm rate is severe: almost the entire process is flagged as faulty.
Figure 8

Monitoring results of fault 2.

The results of CLDLV-based fault diagnosis are shown in Figure 9. Variable 22 makes the most obvious contribution in all four subspaces, so it should be the main cause of fault 2. Two of the T²-based plots and the Q-based plot identify variable 22 especially clearly; in addition, variable 30 is flagged in the remaining T²-based plot as a cause of the fault.
Figure 9

CLDLV-based diagnosis result of fault 2.

To compare computational complexity, the average detection times for the two faults are compared in Table 4. Compared with the other four methods, the CLDLV method requires a longer detection time because it involves more matrix inversions and SVDs, as shown in Algorithm 1. For fault diagnosis, the CLDLV method also takes longer, with an average diagnosis time of 21.36 s. As the algorithm is extended to handle more complex processes, its computational complexity increases significantly; as a result, the proposed method takes longer to detect faults.
Table 4

Average Detection Time

method      KPCA    DPCA    DCCA    DPLS    CLDLV
time (s)    1.71    1.02    2.07    2.93    4.50

Hot Strip Mill Processes

The hot strip mill involves physical quantities such as rolling force, thickness, temperature, and speed across its many rolling loops. The variables in the rolling process are coupled, and a change in any one variable may cause a chain reaction in related operating units. In addition, environmental and temperature perturbations make the collected data inevitably noisy, which seriously reduces the reliability of a monitoring model. An industrial hot strip mill basically consists of six units in sequence: heating furnace, roughing mill, hot output roller conveyor with flying shear, finishing mill, laminar flow cooling, and winder, as shown in Figure 10. This work takes a 1700 mm hot strip rolling line as the object, and the data are measured in the field. The process variables are the roll gap, rolling force, and bending roll force of the seven stands, for a total of 20 variables, and the quality variable is the exit thickness at the final finishing stand. Two thousand measurements under normal working conditions are selected as training data, and 3000 measurements containing a fault are used as real-time data. For DPCA and KPCA, the same 20 variables are used, and the time lag of DPCA is set to 2.
Figure 10

Hot strip mill processes.

The fault is a malfunction of the gap control loop in the fourth stand. When this fault occurs, it directly affects the sampled roll gap of the fourth stand, and the sampled rolling force of the fourth stand is affected as well. The fault then propagates to the roll gap and rolling force readings of the downstream stands and finally affects the exit thickness. The fault starts at the 801st sample. The CLDLV-based fault detection results are shown in Figure 11, where the fault is well detected in all four subspaces. The detection results show that the fault magnitude falls off near the 1200th sample, making it difficult to detect there. Nevertheless, all faulty samples are detected by both Tc² and Tx², and faults are also detected under Ty². In the Q-based subspace, faults near the 1200th sample are barely detected owing to the loop feedback control.
Figure 11

CLDLV-based monitoring result of the hot strip mill.

The CLDLV-based method is compared with KPCA, DPCA, DCCA, and DPLS to check its performance, and the detection results are depicted in Figure 12.
Figure 12

Monitoring results of hot strip mill.

These four methods can also detect the fault, but their detection rates and false alarm rates are all inferior to those of the CLDLV-based method. Among the four, KPCA has the highest fault detection rate and detects almost all faulty samples; however, its T² false alarm rate is the highest of the four methods. In DPCA, faults near the 1200th sample are difficult to detect, and the SPE statistic has a high false alarm rate. The DCCA method also barely detects the faults near the 1200th sample. In contrast, the DPLS method gives better detection results, but its detection rate is still lower than that of the CLDLV method. The results of the CLDLV-based fault diagnosis are shown in Figure 13. The diagnosis results in all four subspaces show that variables 7 and 14 contribute the most, and variable 15 also contributes slightly to the fault. Variable 7 is the variation of the stand roll gap, and variables 14 and 15 are stand rolling forces. The contributions of the other variables are limited, so they are not the main causes of the fault.
Figure 13

CLDLV-based diagnosis result of the hot strip mill.

CLDLV-based diagnosis result of the hot strip mill.

Conclusions

In this paper, a concurrent locality-preserving dynamic latent variable method is proposed to extract the correlation between process variables and quality variables for quality-related dynamic process monitoring. Given that dynamic process data can easily be contaminated by noise and outliers and conventional dynamic latent variable models lack robustness, a low-rank autoregressive model is developed to deal with autocorrelation and cross-correlation properties among the data. Neighborhood structure information is then integrated into the PLS model, which can better reveal the essential structure of the data. The final concurrent projection of the latent structures is used to monitor output-related faults and input-related process faults that affect quality. The Tennessee Eastman process and hot strip mill process are used to demonstrate the effectiveness of CLDLV-based detection and diagnostic methods.
