Lie Group Methods in Blind Signal Processing.

Dariusz Mika, Jerzy Jozwik.

Abstract

This paper deals with the use of Lie group methods to solve optimization problems in blind signal processing (BSP), including Independent Component Analysis (ICA) and Independent Subspace Analysis (ISA). The paper presents the theoretical fundamentals of Lie groups and Lie algebras, the geometry of BSP problems, and the basic ideas of optimization techniques based on Lie groups. Optimization algorithms based on the properties of Lie groups are characterized by the fact that the optimization motion remains permanently bound to the search space. This property is extremely significant for the stability and dynamics of optimization algorithms. The specific geometry of problems such as ICA and ISA, together with the homogeneity of the search space, enables the use of optimization techniques based on the properties of the Lie groups O(n) and SO(n). An interesting idea is that of optimization motion in one-parameter commutative subalgebras and toral subalgebras, which ensures low computational complexity and high-speed algorithms.

Keywords:  Independent Component Analysis; Lie algebra; Lie groups; geometric optimization; independent subspace analysis; sensors; toral subalgebra

Year:  2020        PMID: 31941069      PMCID: PMC7013945          DOI: 10.3390/s20020440

Source DB:  PubMed          Journal:  Sensors (Basel)        ISSN: 1424-8220            Impact factor:   3.576


1. Introduction

Blind signal processing (BSP) is currently one of the most attractive and fast-growing signal processing areas, with solid theoretical foundations and many practical applications. BSP has become a vital research topic in many areas of application, particularly in biomedical engineering, medical imaging, speech and image recognition, communication systems, geophysics, economics, and data analysis. The term “blind processing” originates from the basic feature of these processing methods, i.e., the fact that there is no need to use any training data or a priori knowledge to obtain results. These methods include, among others, Independent Component Analysis (ICA), independent subspace analysis (ISA), sparse component analysis (SCA), nonnegative matrix factorization (NMF), singular value decomposition (SVD), principal component analysis (PCA), and minor component analysis (MCA), as well as the related eigenproblem and invariant subspace problem. Optimization problems of this kind often occur in the context of artificial neural networks, signal processing, pattern recognition, computer vision, and numerics [1]. BSP is widely used in biomedical engineering, in technical diagnostics, and in the energy sector. The work [2] presents the use of SCA to analyze biomedical EEG and fMRI signals, proving the effectiveness of this method in the detection of ocular artifacts. The use of SCA in technical diagnostics is presented in [3], where an SCA algorithm based on three-dimensional geometric features was used for the compound fault diagnosis of roller bearings. A similar topic was discussed in [4], where NMF was used to extract fault signals. The conducted experiments confirmed the effectiveness of these methods in extracting fault features and diagnosing roller bearings. An interesting use of BSP techniques in energy issues is presented in [5].
A Bayesian-optimized bidirectional Long Short-Term Memory (LSTM) method was used for energy disaggregation, aiming to identify the individual contribution of appliances to the aggregate electricity load. The use of machine learning techniques such as k-means clustering and the Support Vector Machine for low-complexity energy disaggregation is presented in [6]. This paper primarily focuses on ICA and ISA problems, which does not, however, limit the applicability of the described methods to other types of problems. The scope of this paper is mainly limited to presenting the geometry of ICA and ISA problems and the application of Lie groups and Lie algebras, without providing specific algorithms. Standard Independent Component Analysis (ICA) consists of a linear transformation of multidimensional data such that the transformed signal components are as statistically independent as possible. The effectiveness of ICA depends on the correct choice of a cost function and an optimization strategy. Most numerical optimization techniques assume that the model’s parameter space is a usual Euclidean space. In many cases, however, the parameter space has a non-linear structure with its own non-Euclidean geometry. From a mathematical point of view, the search space equipped with an inner product takes on the properties of a Riemannian manifold, often with desirable algebraic properties [7]. The authors of works in this field take advantage of the specific internal geometry and algebraic properties of models such as the orthogonal group O(n) or the special orthogonal group SO(n). Apart from their general group properties, these groups also have the structure of a smooth differential manifold, and thus acquire the properties of a Lie group with a corresponding Lie algebra. The application of this convenient structure to ICA algorithms is described in [8,9]. From the point of view of standard optimization techniques, problems of this type constitute so-called constrained optimization.
The problem of constrained optimization occurs in many issues related to signal processing. In the case of ICA, optimization of this kind consists of looking for extrema of the cost function on the set of matrices W satisfying the condition of orthonormal columns (W^T W = I). However, standard constrained algorithms operate in Euclidean space, so in each iterative step the matrix orthogonality is lost. To restore the orthogonality condition, it is necessary to perform orthogonalization in each iterative step (e.g., by the well-known Gram-Schmidt orthogonalization process), which, however, reduces the convergence rate of the algorithms. Other algorithms use the Lagrangian method of optimization, adding a so-called penalty function to the cost function to penalize deviations from orthogonality. However, such algorithms are characterized by a low convergence rate and poor quality of the achieved optimum. If the constraint takes the form of matrix orthogonality, one can use an alternative method that keeps the optimization motion “locked” to the hypersurface of orthogonal matrices. This method uses the group structure of the set of orthogonal square matrices which, apart from the properties of a smooth differential manifold, provides the set with the properties of a special structure known as a Lie group.
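The loss and restoration of orthogonality described above can be sketched in a few lines of NumPy. This is a hypothetical illustration; QR decomposition is used here as a numerically stable stand-in for the classical Gram-Schmidt process, and the descent direction is a random matrix rather than an actual cost gradient:

```python
import numpy as np

rng = np.random.default_rng(0)

# Start from an exactly orthogonal matrix.
W, _ = np.linalg.qr(rng.standard_normal((4, 4)))

# A Euclidean gradient step (here: a random descent direction, for
# illustration only) leaves the orthogonal manifold...
G = rng.standard_normal((4, 4))
W_step = W - 0.1 * G
err_before = np.linalg.norm(W_step.T @ W_step - np.eye(4))

# ...so orthogonality must be restored each iteration, e.g. via QR
# (a numerically stable Gram-Schmidt).
Q, R = np.linalg.qr(W_step)
Q = Q @ np.diag(np.sign(np.diag(R)))  # fix column signs for uniqueness
err_after = np.linalg.norm(Q.T @ Q - np.eye(4))

print(err_before > 1e-3, err_after < 1e-10)
```

This per-step re-orthogonalization is exactly the extra cost that the Lie group methods of Section 4 avoid.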

2. Model Definition (ICA, ISA)

Standard independent component analysis (ICA) consists of estimating a sequence of p statistically independent components (ICs) s_1, ..., s_p and the mixing matrix A of dimension n x p from only a sequence of n observed signals x_1, ..., x_n. Writing the source signals and observed signals as the source vector s = (s_1, ..., s_p)^T and the vector of observed (mixed) signals x = (x_1, ..., x_n)^T, where T stands for transposition, the standard linear ICA model takes the form (1): x = As. This assumes that there is no additional noise signal in the observed signal (Figure 1). The ICA model thereby formulated is characterized by a scale and permutation ambiguity, i.e., it is possible to scale (multiply by a given constant) any source signal s_i and at the same time divide the i-th column of the mixing matrix A by this constant, while the observed signal remains unchanged. The same phenomenon occurs under any transposition of the rows of the source vector (permutation of the source vector s) together with the same transposition of the columns of the mixing matrix A. It is customary to assume that the source signals have unit variance (E[s_i^2] = 1). In non-negative ICA, it is additionally assumed that the source signals satisfy the condition s_i >= 0 [10,11].
Figure 1

Schematic block diagram of Independent Component Analysis.
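As an illustration, the mixing model x = As can be simulated with synthetic stand-in sources. The signals, dimensions, and random mixing matrix below are assumptions for the sketch, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = p = 3          # number of observed signals equals number of sources
T = 1000           # number of samples

# Hypothetical zero-mean, roughly unit-variance sources:
# a sine, a square wave, and Gaussian noise.
t = np.arange(T)
S = np.vstack([
    np.sqrt(2) * np.sin(0.05 * t),
    np.sign(np.sin(0.03 * t)),
    rng.standard_normal(T),
])

A = rng.standard_normal((n, p))   # random mixing matrix
X = A @ S                         # observed (mixed) signals: x = A s

print(X.shape)  # (3, 1000)
```

When A is square and non-singular, the sources are recovered exactly by s = A^{-1} x; the blind problem is that A is unknown.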

A solution of the ICA problem for p = n consists of finding the demixing (filtering) matrix W, which belongs to the general linear group GL(n) of non-singular matrices. Source signals are obtained via (2): y = Wx, where y is the estimator of the source vector (it meets the statistical assumptions for a source signal). To reduce the computational load in ICA, the pre-processing usually involves whitening the observed signal to obtain the signal z = Vx, where V is the whitening matrix, such that z has unit variance and decorrelated components (E[zz^T] = I). Assuming that z = VAs, we get (3): E[zz^T] = VA E[ss^T] (VA)^T = (VA)(VA)^T = I. From this it follows that VA is an orthogonal matrix. Hence, the transformation from s to z takes place via an orthogonal matrix, and thus the new filtering matrix (after whitening) must also satisfy the orthogonality condition. The whitening of the observed signal therefore simplifies the ICA problem from optimization on the general linear group GL(n) (matrices only satisfying the invertibility condition det W != 0) to optimization on the special orthogonal group SO(n) (matrices satisfying the orthogonality condition W^T W = I). Both groups are at the same time Lie groups. Standard ICA is based on the assumption that p = n, i.e., the number of source signals is known and equal to the number of observed signals. ICA also yields interesting results in a more general case when the number of estimated source signals p is unknown; in this case it can be p != n. When n < p, i.e., the number of observed signals is smaller than the number of source signals, the problem is known as overcomplete bases ICA, whereas when n > p it is called undercomplete bases ICA. A problem of this kind can be formally considered as unconstrained optimization on the Stiefel manifold [12,13]. It is also possible to solve ICA problems for the case n = 1. This type of problem is often called Single Channel Source Separation [14,15].
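A minimal sketch of the whitening step, assuming the common eigendecomposition-based choice V = D^{-1/2} E^T (one of several valid whitening matrices), where C = E D E^T is the eigendecomposition of the covariance of the observed signal:

```python
import numpy as np

rng = np.random.default_rng(2)
S = rng.standard_normal((3, 5000))   # stand-in unit-variance sources
A = rng.standard_normal((3, 3))      # unknown mixing matrix
X = A @ S                            # observed mixture

# Whitening matrix V = D^{-1/2} E^T from the covariance eigendecomposition.
C = np.cov(X)
d, E = np.linalg.eigh(C)
V = np.diag(d ** -0.5) @ E.T
Z = V @ X                            # whitened signal

# After whitening the covariance is the identity, so the remaining
# unmixing matrix can be sought in the orthogonal group.
print(np.allclose(np.cov(Z), np.eye(3), atol=1e-8))  # True
```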
Hyvärinen and Hoyer introduced independent subspace analysis (ISA) [16] by relaxing the statistical independence condition between extracted source components. The source vector is composed of k-tuples of components; within a given tuple a statistical dependence between its source signals is allowed, while signals belonging to different tuples are statistically independent. When using the whitening process, the ISA problem boils down to finding orthogonal matrices, as in standard ICA. However, due to the statistical relationship between the source signals within a tuple, ISA optimization cannot be performed on an ordinary Stiefel manifold. It is necessary to introduce a different, more universal manifold allowing for additional symmetries. This manifold is known as a flag manifold. Traditionally, the ICA model assumes the statistical independence of extracted source signals. It turns out, however, that there are reasons to replace the orthonormality condition with the condition of source signal normality [17]. A precise definition of the ICA problem consists of finding a linear, non-orthogonal transformation (of the coordinate system) of multidimensional data such that the transformed data have minimal mutual information. Hyvärinen [18] demonstrated (in an ICA problem) the differences between the use of cost functions based on mutual information and those based on so-called non-Gaussianity. Achieving maximum decorrelation by maximizing the sum of non-Gaussianity of independent components (ICs) is not necessarily related to the minimization of mutual information (MI). In addition, the orthonormality condition leads to a smaller subset of matrices, which simplifies the optimization process yet may reduce its quality. Orthonormality imposes a greater limitation on the degrees of freedom than normality.
In standard ICA, the orthonormality condition on the filtering matrices reduces the number of degrees of freedom to n(n - 1)/2, while the normality condition increases the number of free parameters to n(n - 1), which considerably improves the quality of the obtained results. A problem of this type can be formally considered as unconstrained optimization on an oblique manifold [19,20].

3. Geometry of ICA, ISA and Other BSP Models

The manifolds that frequently arise from BSP tasks for a general p < n do not have group properties. Nevertheless, they are homogeneous spaces of Lie groups. A homogeneous space is a manifold on which a Lie group acts transitively [21]. This property is fundamental for the considered manifolds because it enables analyzing them as quotient spaces. As mentioned in Section 2, the optimization problem in standard ICA (ISA) boils down to optimization on the general linear group GL(n) (matrices satisfying only the invertibility condition det W != 0). The whitening of the observed signal simplifies the ICA problem to optimization on the special orthogonal group SO(n) (matrices satisfying the orthogonality condition W^T W = I). In the case of an undercomplete problem p < n, i.e., when the number of extracted ICs is smaller than the number of observed signals, the set of filtering matrices can be treated as an orthogonal Stiefel manifold, defined as the set of orthonormal matrices of dimension n x p of the form (4): St(n, p) = {X in R^(n x p) : X^T X = I_p}, which can be regarded as a quotient space arising from the orthogonal group. The Lie group O(n) acts transitively on the Stiefel manifold via (5): (Q, X) -> QX, where Q in O(n), X in St(n, p). It is possible to demonstrate that for two given points X, Y in St(n, p) there exists Q in O(n) such that Y = QX. This means that, starting from any point, it is possible to reach any other point by the group action. Resorting to group theory terminology, one can say that the entire manifold is equivalent to the single orbit of a given point X_0, i.e., St(n, p) = {QX_0 : Q in O(n)}. A point on the manifold can thus be expressed via a certain point of O(n). This mapping is surjective, i.e., many-to-one (a projective mapping). The redundancy of this mapping is described by the so-called isotropy subgroup of the point X_0: the set of matrices that do not change X_0. Choosing X_0 = [I_p, 0]^T, the isotropy subgroup of the group O(n) at the point X_0 has the form (8): H = {diag(I_p, Q_(n-p))}, where Q_(n-p) is any matrix that satisfies the condition Q_(n-p)^T Q_(n-p) = I. It is easy to check that the isotropy condition of the point X_0 is satisfied (9): diag(I_p, Q_(n-p)) X_0 = X_0. With this choice, the isotropy subgroup is a set isomorphic to O(n - p).
In this view, two orthogonal matrices represent the same point of the Stiefel manifold if their first p columns are identical or, equivalently, if they are related by right multiplication by a matrix of the form diag(I_p, Q_(n-p)), where Q_(n-p) is an orthogonal matrix of dimension n - p [1]. From a mathematical point of view, we say that such representations are in an equivalence relation. All matrices in an equivalence relation form what is called an equivalence class. Thus, a point on the Stiefel manifold is the equivalence class of orthogonal matrices with identical first p columns, while the Stiefel manifold is a quotient space of the form (10): St(n, p) = O(n)/O(n - p). However, O(n)/O(n - p) is isomorphic to SO(n)/SO(n - p), therefore St(n, p) can equally be written as SO(n)/SO(n - p). There are many applications for the problem formulated as finding an extremum of a field defined on the non-Euclidean space of p-dimensional subspaces embedded in the Euclidean space R^n. This non-Euclidean space is known as the Grassmann manifold [22,23]. The Grassmann manifold can be described via equivalence classes of orthogonal matrices spanning the same p-dimensional subspace. Therefore, from a theoretical point of view, the Grassmann manifold can be expressed as the quotient space Gr(n, p) = St(n, p)/O(p) and, given that St(n, p) = O(n)/O(n - p), the Grassmann manifold can also be seen as the quotient space Gr(n, p) = O(n)/(O(p) x O(n - p)). In this case, the equivalence class is a set of square n-dimensional orthogonal matrices whose first p columns span the same p-dimensional subspace. Manifolds of this type are used, among others, in invariant subspace analysis, application-driven dimension reduction, and subspace tracking [24,25]. When there is a need for a simultaneous (parallel) extraction of subspaces, as is the case in independent subspace analysis (ISA), one resorts to the concept of the generalized flag manifold, which is a manifold consisting of orthogonal subspaces that constitutes a generalization of both the Stiefel and Grassmann manifolds [26,27,28].
The generalized flag manifold is defined as (11): the set of k-tuples of mutually orthogonal subspaces (V_1, ..., V_k) of R^n with dim V_i = n_i, where the representing orthogonal matrix takes the form (12): X = [X_1, ..., X_k], with X_i, for a specified i, being a set of orthogonal basis vectors that span the subspace V_i. The subspaces are orthogonal relative to each other and satisfy the condition (13): V_i is orthogonal to V_j for i != j. Points on the flag manifold are thus sets of vector spaces which can be decomposed as in (13). If all n_i = 1, the manifold reduces to the Stiefel manifold; if k = 1, it reduces to the Grassmann manifold. The orthogonal group O(n) also acts transitively on the manifold via simple matrix multiplication (14): (Q, X) -> QX. The isotropy subgroup of the group O(n) at the point X_0 has the form (15): H = {diag(Q_1, ..., Q_k, Q_(n-p))}, where diag(Q_1, ..., Q_k, Q_(n-p)) is a block-diagonal matrix, p = n_1 + ... + n_k, and each Q_i is any matrix that satisfies the condition Q_i^T Q_i = I. It is easy to check that the isotropy condition of the point X_0 is satisfied (16): right multiplication by such a block-diagonal matrix leaves the spanned subspaces, and hence the point on the manifold, unchanged. This means that any two matrices X and Y satisfying the condition Y = X diag(Q_1, ..., Q_k, Q_(n-p)) are identified with the very same point on the manifold. Given the above, the flag manifold is the quotient space O(n)/(O(n_1) x ... x O(n_k) x O(n - p)). As already mentioned, the manifold is locally isomorphic, as a homogeneous space, to the Stiefel manifold when all n_i = 1 and to the Grassmann manifold when k = 1. In terms of optimization, the homogeneity of the considered differential manifolds enables the search (optimization motion) in the group O(n) or SO(n), and the use of optimization techniques that are well known and adapted to these types of groups. Section 4 presents the basic ideas of optimization methods used in SO(n) and the concept of toral subalgebra that is characteristic of problems of this type.

4. Lie Group Optimization Methods. One-Parameter Subalgebra and Toral Subalgebra

The idea of a standard optimization procedure based on Lie groups consists of performing the optimization motion in the Lie algebra space and then using the exponential mapping exp to find the solution in the Lie group (manifold). The optimization motion in the group SO(n) starting from a point (matrix) W therefore consists of, first, the transition to the Lie algebra so(n) via the mapping inverse to exponentiation, then motion in the Lie algebra (performing the operation of matrix addition in the abelian group) in order to obtain a new antisymmetric matrix and, finally, returning to the Lie group via the exponential mapping. A simple update method using a line search procedure relies on finding the search direction in the Lie algebra by calculating the gradient of the cost function J in the Lie algebra space. This gradient must be skew-symmetric (see Appendix A), so (18) [9]: B = (grad J) W^T - W (grad J)^T. Applying the steepest descent procedure with a small constant update factor mu, we start from W, move to -mu B, map it to exp(-mu B), and finally perform the rotating (multiplicative) update W <- exp(-mu B) W. This kind of optimization method is called a geodesic flow method [9]. At this point it is necessary to comment on motion in the Lie algebra. In our context, the addition of vectors in the Lie algebra can only be useful if it is matched by multiplication in the Lie group. Then one can write (19): exp(A) exp(B) = exp(A + B). As was already mentioned, this equation holds true only when the matrices A and B commute (AB = BA). This condition is satisfied for all matrices when n = 2. When n > 2, this condition is not satisfied for all matrices in the algebra. When the matrices do not commute (non-abelian Lie algebra), Equation (19) is not satisfied, and optimization motion in the Lie algebra (the sum A + B) in the direction of, e.g., the cost function gradient will not be reflected in the Lie group. However, by restricting the motion to multiples of a single direction, B = tA, which is tantamount to selecting the initial matrix W_0 = I, this condition is always satisfied. In this case AB = BA, and Equation (19) is satisfied too. This is tantamount to motion in a one-parameter Lie algebra.
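A minimal sketch of the multiplicative (geodesic-flow) update described above. The cost function here is an assumed toy objective, the squared Frobenius distance to a fixed orthogonal matrix W_target; it stands in for an ICA contrast function, which the text does not specify at this point:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
n = 4

# Assumed toy cost J(W) = ||W - W_target||_F^2 (hypothetical stand-in
# for an ICA contrast; W_target is an arbitrary orthogonal matrix).
W_target, _ = np.linalg.qr(rng.standard_normal((n, n)))

def euclidean_grad(W):
    return 2.0 * (W - W_target)

W0, _ = np.linalg.qr(rng.standard_normal((n, n)))   # starting point on O(n)
W = W0.copy()
mu = 0.05                                           # small constant step size

for _ in range(200):
    G = euclidean_grad(W)
    B = G @ W.T - W @ G.T        # skew-symmetric gradient in the Lie algebra
    W = expm(-mu * B) @ W        # rotating (multiplicative) geodesic update

# The update never leaves the orthogonal group, and the cost decreases.
print(np.allclose(W.T @ W, np.eye(n), atol=1e-8))
print(np.linalg.norm(W - W_target) < np.linalg.norm(W0 - W_target))
```

Unlike the Euclidean step of the previous section, no re-orthogonalization is ever needed: exp of a skew-symmetric matrix is exactly orthogonal.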
By selecting, for a fixed antisymmetric matrix A, all scalar multiples tA, one obtains a set of matrices that commute with each other (t_1 A t_2 A = t_2 A t_1 A). A set of such matrices is in itself a Lie algebra known as a one-parameter subalgebra of the Lie algebra so(n). The subalgebra is an abelian (commutative) algebra related to the one-parameter subgroup {exp(tA)}. Optimization motion in the subalgebra is therefore an equivalent (generalization) of the idea of linear motion in Euclidean space. In this case, the optimization procedure consists of searching for a minimum of the cost function along the subalgebra (for a chosen search direction A), which corresponds to a search along the subgroup {exp(tA)}. Having found the cost function minimum along W(t) = exp(tA) W_0, where W_0 is the starting point, a new direction of linear search is selected, and the procedure is repeated until the desired convergence is achieved. Plumbley [8] proposed a modification of the standard procedure described above. This modification consists of moving the point of “origin” of the Lie algebra from the neutral element of the group to the point W_0. Due to the group properties, it can be written that W = R W_0 for some rotation matrix R. Moving from the matrix W_0 to W is therefore equivalent to moving from the identity matrix to the matrix R. The procedure consists of moving in the Lie algebra, returning to the group via the exponential mapping R = exp(tA) and, finally, determining W = R W_0. This is equivalent to the concept of optimization motion in the one-parameter abelian subalgebra described above. The above optimization procedures are computationally expensive due to the necessity of performing (computationally expensive) matrix exponentiation in every iterative step. The representation of antisymmetric matrices in the Jordan canonical form enables the decomposition of the optimization movement in the group into commutative rotations in orthogonal planes.
Every antisymmetric matrix B can be presented in a block-diagonal form (for even n) (20): B = Q J Q^T, where J = diag(J_1, ..., J_(n/2)) is a block-diagonal matrix composed of 2 x 2 antisymmetric Jordan blocks J_i = [[0, -theta_i], [theta_i, 0]] [29]. This form is known as the Jordan canonical form. Since the relationship exp(Q J Q^T) = Q exp(J) Q^T holds true, the matrix J can be decomposed into a sum of the form J = J'_1 + ... + J'_(n/2), where J'_i is the matrix containing only the i-th Jordan block and zeros beyond it (21). The exponentiation of the thereby presented matrix yields an orthogonal matrix of the form (22): exp(J) = diag(R_1, ..., R_(n/2)), where the R_i are 2 x 2 rotation matrices. The matrix exp(J) can be decomposed into a product of the matrices exp(J'_i), where exp(J'_i) has the form (23): a block-diagonal matrix with the rotation R_i in the i-th block and identity elsewhere. One can notice that the exponentiation of a matrix in the Jordan form reduces to a simple and inexpensive calculation of the functions cos(theta_i) and sin(theta_i), which significantly increases the speed of optimization algorithms. The Jordan canonical form of an antisymmetric matrix can be obtained via symmetric eigenvalue decomposition [29]. It can be observed that the antisymmetric matrix B commutes with the symmetric matrix B B^T, which means that B and B B^T share eigenvectors. The eigenvalues occur in pairs corresponding to the individual Jordan blocks J_i. This form can be visualized as compounded rotations (represented by the R_i) in mutually orthogonal planes. In addition, the rotation matrices commute (exp(J'_i) exp(J'_j) = exp(J'_j) exp(J'_i)). The commutation property of the rotation matrices provides the possibility of using the optimization procedure on SO(n) while moving in the Lie algebra so(n). The case of n = 4 is interesting from a geometrical point of view. The Jordan canonical form of the antisymmetric matrix then contains two blocks (matrices) J_1 and J_2 (24), and the orthogonal matrix takes the form (25): exp(J) = diag(R_1, R_2). A visual representation of this case shows rotations in two mutually orthogonal planes, which corresponds to toral geometry (Figure 2).
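The cheap exponentiation of the Jordan canonical form can be checked directly: the exponential of a block-diagonal antisymmetric matrix is the block-diagonal matrix of plane rotations built only from cos and sin. This is a small numerical check for the n = 4 case, using scipy.linalg.expm as the (expensive) reference:

```python
import numpy as np
from scipy.linalg import expm, block_diag

def jordan_block(theta):
    # 2x2 antisymmetric Jordan block [[0, -theta], [theta, 0]]
    return np.array([[0.0, -theta], [theta, 0.0]])

def rotation(theta):
    # Its exponential: a plane rotation built only from cos and sin.
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

t1, t2 = 0.7, 1.3
B = block_diag(jordan_block(t1), jordan_block(t2))   # canonical form, n = 4
R = block_diag(rotation(t1), rotation(t2))

# Block-wise exponentiation reproduces the full matrix exponential.
print(np.allclose(expm(B), R))  # True
```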
Figure 2

Visual representation of the toral subalgebra for n = 4. The angles theta_1, theta_2 and the matrices R_1 and R_2 are as in Equation (23). The broken line marks the search curve for the case of commensurate rotation angles (theta_2/theta_1 rational).

From the point of view of the optimization procedure, the rotation angles theta_1 and theta_2 should not be free parameters (independent of each other). For the procedure to make sense, the curve over which the search is carried out should, after a complete rotation (or its multiple) relative to one of the planes of rotation, return to the starting point on the toral surface (Figure 2). This is possible when, for some t_0, the relationships theta_1 t_0 = 2 pi k_1 and theta_2 t_0 = 2 pi k_2 are satisfied for integers k_1 and k_2. Therefore, the angles of rotation should be related by theta_2 = q theta_1, where q = k_2/k_1 is a rational number. This concept naturally transfers to the general case of SO(n) for n > 4. The Jordan canonical form represents optimization motion in a one-parameter Lie subalgebra as rotations in mutually orthogonal planes, and these rotations are commutative. The geometry of the 2-dimensional torus for n = 4 can also be generalized to the geometry of the m-dimensional torus in SO(n), where m = n/2 for even n or m = (n - 1)/2 for odd n. This perception of motion in SO(n) leads to the concept of the toral subalgebra. Consider the general case of motion on the surface of an m-dimensional torus, where the angles of rotation are not interrelated by the above relationship and the individual independent planes of rotation are represented by a set of commuting matrices J'_1, ..., J'_m (J'_i J'_j = J'_j J'_i). The motion (or rather rotation) in each of the independent planes of rotation can be expressed in the form of a parameterized curve t_i J'_i, or actually via its simple exponentiation exp(t_i J'_i). The set of independent parameters t_1, ..., t_m, which can be identified with the angles of rotation, forms a coordinate system on the toral subalgebra. Compared to the original n(n - 1)/2-dimensional search space so(n), the toral subalgebra is an abelian algebra, which means that motion in this search space is commutative. This ensures the possibility of motion in all directions specified by the coordinates, and their sum of the form t_1 J'_1 + ... + t_m J'_m will be reflected in the composition of rotations exp(t_1 J'_1) ... exp(t_m J'_m).
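The commutativity that makes the toral subalgebra useful can be verified numerically: for generators of rotations in mutually orthogonal planes, addition in the algebra matches multiplication in the group. A small sketch with assumed angles:

```python
import numpy as np
from scipy.linalg import expm

# Generators of rotations in two mutually orthogonal planes of R^4
# (a basis of a toral subalgebra of so(4)).
B1 = np.zeros((4, 4)); B1[0, 1], B1[1, 0] = -1.0, 1.0   # plane (e1, e2)
B2 = np.zeros((4, 4)); B2[2, 3], B2[3, 2] = -1.0, 1.0   # plane (e3, e4)

a, b = 0.4, 1.1                        # assumed rotation angles
assert np.allclose(B1 @ B2, B2 @ B1)   # the generators commute

# For commuting directions, the sum in the algebra is reflected in the
# composition of rotations in the group: exp(aB1 + bB2) = exp(aB1) exp(bB2).
lhs = expm(a * B1 + b * B2)
rhs = expm(a * B1) @ expm(b * B2)
print(np.allclose(lhs, rhs))  # True
```

For two generic (non-commuting) skew-symmetric matrices this identity fails, which is exactly why general motion in so(n) cannot be composed additively.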
The optimization procedure based on this concept consists of decomposing into canonical form a specific antisymmetric matrix (this can be, for example, the cost function gradient, as in the method of steepest descent), thereby formulating a toral subalgebra. Since the orthogonal matrix Q in the Jordan decomposition (24) is constant, the transition to a new point in the search space is done by determining the values of the cos and sin functions corresponding to the planes of rotation. After finding in the subalgebra the point that minimizes the cost function, a new antisymmetric matrix is calculated and again presented in the Jordan canonical form, which establishes a new toral subalgebra. The procedure is repeated until the desired minimum of the cost function is reached. A separate problem concerns the determination of the search direction and the manner of searching along the subalgebra. The selection of directions and the manner of search depend on the adopted optimization procedure. This can be the steepest descent (SD) method and, in general, geodesic flow, Newton’s method, or conjugate gradients. This problem has been extensively studied in [12,22,30].

5. Experimental Results

To illustrate the presented optimization methods on Lie groups, we will first present a rather simple simulation experiment. The purpose of this example is to show how different algorithms work on optimization problems with a unitarity constraint. To this end, let us consider the Lie group of complex numbers with unit modulus, U(1) = {z : |z| = 1}, which is isomorphic to the group SO(2). The constraint set of this group is the unit circle on the complex plane. We will minimize a cost function J(z) subject to the constraint |z| = 1. For optimization, we will use five types of steepest descent (SD) algorithms: (1) unconstrained SD in Euclidean space, (2) SD in Euclidean space with constraint restoration, (3) SD in Euclidean space with a penalty function, (4) non-geodesic SD in Riemannian space, and (5) geodesic SD in Riemannian space. In Algorithm (1) the update rule has the form z_(k+1) = z_k - mu grad J(z_k), where mu is the step size. The quantity grad J is the gradient of the cost function in Euclidean space (the gradient of a function defined in the complex space has the form grad J = dJ/dx + i dJ/dy [31], where x and y are, respectively, the real and imaginary parts of the complex number z). Algorithm (2) uses the same update rule, but after each iteration the unitarity condition is restored via z <- z/|z|. In Algorithm (3) we used the Lagrange multiplier method: a penalty function P weighted by a Lagrange parameter lambda has been added to the initial cost function in order to penalize deviations from unitarity, and the update rule is z_(k+1) = z_k - mu grad(J + lambda P)(z_k). In the case of (4), the algorithm works in the Riemannian space (the unit circle) determined by the condition |z| = 1. At each point the algorithm determines the search direction tangent to the unit circle, and after each iteration the obtained point is projected back onto the unit circle; the update rule has the form z_(k+1) = P(z_k - mu g_T(z_k)), where g_T is the component of the gradient tangent to the circle and P is the projection operator onto the unit circle. In Algorithm (5) we used the multiplicative optimization algorithm on the Lie group described in Section 4.
In this case, the update rule has the form of a pure phase rotation, z_(k+1) = exp(-i mu gamma_k) z_k, where gamma_k is the imaginary part of the gradient expressed in the Lie algebra of the group. Each algorithm starts from the same initial point. Figure 3 shows the results of the simulation.
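A minimal sketch of the geodesic Algorithm (5) on U(1). Since the source does not restate the cost function, we assume the illustrative cost J(z) = |z - z0|^2 with z0 = 2, whose unconstrained minimum (z = 2) lies off the circle while the constrained minimum on |z| = 1 is z = 1; the Lie-algebra gradient coefficient gamma = Im(conj(z) grad J(z)) is likewise our assumption for this cost:

```python
import numpy as np

# Assumed cost (hypothetical): J(z) = |z - z0|^2 with z0 = 2 + 0j.
z0 = 2.0 + 0.0j

def grad(z):
    # Complex gradient dJ/dx + i*dJ/dy of J(z) = |z - z0|^2
    return 2.0 * (z - z0)

z = np.exp(2.0j)   # starting point on the unit circle
mu = 0.1           # step size

for _ in range(300):
    gamma = np.imag(np.conj(z) * grad(z))   # gradient in the Lie algebra (a phase)
    z = np.exp(-1j * mu * gamma) * z        # multiplicative phase-rotation update

# The update is a pure rotation: |z| = 1 at every step,
# and the iteration converges to the constrained minimum z = 1.
print(abs(abs(z) - 1.0) < 1e-9, abs(z - 1.0) < 1e-6)
```

Because each step is a rotation, the unitarity constraint is satisfied exactly at every iteration, with no projection or renormalization.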
Figure 3

Comparison of SD algorithms for minimizing the cost function on the group U(1): methods in Euclidean versus Riemannian space (Lie group methods). (*) unconstrained SD in Euclidean space (1); SD in Euclidean space with constraint restoration (2); (+) SD in Euclidean space with penalty function (3); (o) non-geodesic SD in Riemannian space (4); geodesic SD in Riemannian space (5).

Algorithm (1), which does not take the unitarity constraint into account, converges to the unconstrained minimum of the cost function in the Euclidean plane; this minimum is undesirable (Figure 3). The minimum that respects the constraint lies on the unit circle, where the cost function reaches its minimum over the Riemannian space. The unconstrained SD Algorithm (1) and the penalty-function Algorithm (3) reach undesirable minima off the unit circle, while Algorithms (2), (4), and (5) reach the appropriate constrained minimum. In the case of Algorithms (2) and (4), a characteristic “zig-zag” trajectory is associated with leaving the constraint surface, which is undesirable from the point of view of optimization properties. Algorithm (4) determines the SD direction tangent to the constraint surface, thus leaving the unit circle during the optimization motion; the resulting point is then projected back onto the unit circle. Algorithm (5), using the multiplicative update rule on the Lie group (phase rotation) described in this article, naturally ensures the unitarity condition at each step, and the optimization movement takes place at each step along the geodesic line. This simple one-dimensional example is only intended to present the idea of algorithms on Lie groups. Below we present an example of using these methods on real signals, namely a solution of an ICA problem. As source signals, three speech recordings and a quasi-noisy signal (a harmonic signal with high noise content) with a length of 5000 samples (1.25 s) were used (Figure 4). The source signals were mixed using a four-by-four random mixing matrix. The four observed signals are shown in Figure 4. A group optimization algorithm was used to implement ICA.
Figure 4

Comparison of ICA results using the INFOMAX algorithm and optimization on the group: (a) source signals, (b) observed (mixed) signals, (c) ICA results for the INFOMAX algorithm, (d) ICA results for the algorithm on the group.

For comparison, the INFOMAX algorithm in its original form was also used [32]. Based on visual inspection and listening to the separated components, it can be concluded that the ICA results using both the INFOMAX algorithm and optimization on the group are good, up to scale and permutation. The INFOMAX algorithm with the assumed convergence criterion converges after about 30–40 steps, while the algorithm on the group converges after about 20 steps. Figure 5 shows the sum of the entropy values of the separated components as a function of the iteration number.
Figure 5

Comparison of the entropy sum of the recovered components: (a) INFOMAX algorithm, (b) optimization algorithm on the group.

The optimization algorithm on the group converges to a lower sum of entropies than the INFOMAX algorithm (the entropy values were determined according to an approximate relationship given in [32]). Listening to the results and comparing them with the sources confirms the better ICA separation obtained by the group optimization algorithm.

6. Conclusions

This paper described the application of Lie group methods to blind signal processing, including ICA and ISA. The theoretical fundamentals of Lie groups and Lie algebras, the geometry of problems occurring in BSP, and basic optimization techniques based on the use of Lie groups were presented. Owing to the specific geometry and algebraic properties of BSP problems, it is possible to use Lie group methods to solve them. The homogeneity of the search (parameter) space in BSP problems enables the use of optimization techniques based on the Lie group methods for the groups O(n) and SO(n). It has been demonstrated that the one-parameter subalgebra ensures the convenient property of commuting search directions. In addition, the presentation of an antisymmetric matrix (search direction) in the Jordan canonical form establishes a toral subalgebra, which, in terms of optimization algorithms, ensures low computational complexity and high process dynamics.