
Convergence results on iteration algorithms to linear systems.

Zhuande Wang1, Chuansheng Yang2, Yubo Yuan3.   

Abstract

To solve large-scale linear systems, backward and Jacobi iteration algorithms are employed, and convergence is the most important issue. In this paper, a unified backward iterative matrix is proposed, and it is shown that several well-known iterative algorithms can be deduced from it. The most important results are the convergence proofs. First, the spectral radius of the Jacobi iterative matrix is positive, and that of the backward iterative matrix is strongly positive (larger than a positive constant). Second, the two iterations have the same convergence behavior (they converge or diverge simultaneously). Finally, numerical experiments show that the proposed algorithms are correct and have the merits of backward methods.


Year:  2014        PMID: 24991640      PMCID: PMC4061780          DOI: 10.1155/2014/273873

Source DB:  PubMed          Journal:  ScientificWorldJournal        ISSN: 1537-744X


1. Introduction

The primary goal of this paper is to study iterative methods for the linear systems

Ax = b, (1)

where A is a given n × n complex or real matrix, x is the unknown vector, and b is a given vector. It is well known that linear systems arise in many areas of engineering and industrial science. For example, in the numerical solution of differential-algebraic equations (DAE) and ordinary differential equations (ODE) [1-3], it is very important to solve (1). In digital image and signal processing, especially in compressed sensing, Stojnic [4] noted that systems of the form (1) are the mathematical background of compressed sensing problems and studied sharp lower bounds on the allowable sparsity for any given number of equations (proportional to the length of the unknown vector) in the case of so-called block-sparse unknown vectors. In blind source separation of signals, Congedo et al. [5] showed that solving (1) is essential and proposed a special method based on joint singular value decomposition. In biomedical engineering, Deo et al. [6] noted that cardiac electrical activity can be described by the bidomain equations and pointed out that the numerical solution of the partial differential equations (PDEs) associated with bidomain problems often leads to (1); moreover, they proposed a novel preconditioner for the PCG method to solve (1), with a cheap iterative method such as successive overrelaxation (SOR) to further refine the solution to a desired accuracy. In 2008, Shou et al. [7] showed that the reconstruction of epicardial potentials (EPs) from body surface potentials (BSPs) can be characterized as an ill-posed inverse problem, and that geometric errors in the ECG inverse problem directly affect the calculation of the transfer matrix A in (1).
In systems and control science, Ding and Chen [8] pointed out that Sylvester equations, and especially Lyapunov equations in continuous- and discrete-time stability analysis, can be converted into equivalent equations of the form (1). In machine learning, many classification and regression problems, such as single-hidden-layer neural networks [9, 10], support vector machines, and functional neural networks, can be summarized as (1). Therefore, the solution of (1) is very important in scientific computing. Methods for solving linear systems can be roughly divided into two categories: direct methods and iterative methods. Iterative methods are more suitable than direct methods for large linear systems [11, 12]. Research on iterative algorithms is by now mature, but adapting it to new architecture models is complicated; to gain better performance, acceleration has been applied and architecture has been considered [13]. In this paper, we study such iterative algorithms. To this end, the paper is organized as follows. In Section 2, we introduce the backward MPSD (backward modified preconditioned simultaneous displacement) iterative method, a unified form of several important backward iterations. In Section 3, we first introduce some important lemmas that will be used and then obtain convergence results relating the backward MPSD iteration to the Jacobi iteration; convergence results between several other backward iterations and the Jacobi iteration are given in the corollaries. In Section 4, examples and numerical experiments confirm the correctness of the results. In particular, we point out that the backward iteration is better than the original one in many cases, as in Example 5.

2. A Unified Framework of Iteration Matrix and Algorithm

The basic idea for solving (1) is matrix splitting. If we let

A = D − C_L − C_U, (2)

where D = diag(A) is the diagonal matrix obtained from A (assumed nonsingular) and C_L and C_U are the strictly lower and upper triangular matrices obtained from A, then with L = D^{-1}C_L and U = D^{-1}C_U, (1) becomes the equivalent system

x = (L + U)x + D^{-1}b. (3)

At this moment, D^{-1}A = I − L − U. The Jacobi iterative matrix is

B = L + U. (4)

The MPSD (modified preconditioned simultaneous displacement) iterative method is studied in [14-17]. If τ > 0 is a real constant, (3) is obviously equivalent to

x = (1 − τ)x + τ(L + U)x + τD^{-1}b. (5)

At the same time, if ω1 and ω2 are real constants, we can obtain the following equivalent form from (5):

(I − ω1U)(I − ω2L)x = [(1 − τ)I + (τ − ω1)U + (τ − ω2)L + ω1ω2UL]x + τD^{-1}b. (6)

It is easy to verify that

x = (I − ω2L)^{-1}(I − ω1U)^{-1}[(1 − τ)I + (τ − ω1)U + (τ − ω2)L + ω1ω2UL]x + τ(I − ω2L)^{-1}(I − ω1U)^{-1}D^{-1}b. (7)

With (7), we can construct the backward MPSD iterative method as follows:

x^{(i+1)} = B_{τ,ω1,ω2}x^{(i)} + τ(I − ω2L)^{-1}(I − ω1U)^{-1}D^{-1}b, (8)

where

B_{τ,ω1,ω2} = (I − ω2L)^{-1}(I − ω1U)^{-1}[(1 − τ)I + (τ − ω1)U + (τ − ω2)L + ω1ω2UL], (9)

which we name the backward MPSD iterative matrix. Also, we have the following algorithm.

Backward MPSD Algorithm

Step 0 (Input). Matrix A, vector b, τ > 0, ω1, ω2, and the stopping tolerance ϵ.

Step 1 (Initialization). Compute D = diag(A), C_L, C_U, L = D^{-1}C_L, U = D^{-1}C_U; set x^{(0)} = 0 and i := 0.

Step 2. Compute the matrix B_{τ,ω1,ω2} according to (9).

Step 3. Compute x^{(i+1)} with (8).

Step 4. If ||x^{(i+1)} − x^{(i)}||_2^2 ≤ ϵ, then stop and accept x^{(i+1)} as the solution of (1); else set i := i + 1 and go to Step 3.
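The algorithm above can be sketched in Python. This is a minimal NumPy sketch, not the authors' code: the iteration matrix is formed as I − τ[(I − ω1U)(I − ω2L)]^{-1}(I − L − U), one algebraic form consistent with the splitting used in the proof of Theorem 7, and the function name and tolerances are illustrative.

```python
import numpy as np

def backward_mpsd(A, b, tau, w1, w2, eps=1e-20, max_iter=10_000):
    """Sketch of the backward MPSD iteration.

    Splitting: A = D - C_L - C_U, L = D^{-1} C_L, U = D^{-1} C_U.
    Iteration matrix: H = I - tau * M^{-1} (I - L - U), where
    M = (I - w1*U)(I - w2*L)  (the "backward" order applies U first).
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = A.shape[0]
    D = np.diag(np.diag(A))
    C_L = -np.tril(A, -1)                       # A = D - C_L - C_U
    C_U = -np.triu(A, 1)
    L = np.linalg.solve(D, C_L)                 # L = D^{-1} C_L
    U = np.linalg.solve(D, C_U)                 # U = D^{-1} C_U
    I = np.eye(n)
    M = (I - w1 * U) @ (I - w2 * L)
    H = I - tau * np.linalg.solve(M, I - L - U)           # backward MPSD matrix
    c = tau * np.linalg.solve(M, np.linalg.solve(D, b))   # constant term of (8)
    x = np.zeros(n)                                       # Step 1: x^(0) = 0
    for _ in range(max_iter):
        x_new = H @ x + c                                 # Step 3
        if np.sum((x_new - x) ** 2) <= eps:               # Step 4: squared 2-norm test
            return x_new
        x = x_new
    return x
```

With τ = 1, ω1 = 1, ω2 = 0 the sketch reduces to the backward Gauss-Seidel method of Remark 1, and with τ = 1, ω1 = ω2 = 0 it reduces to the Jacobi method.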

Remark 1

With special values of ω1, ω2, and τ, we have the following.
When ω1 = 0, ω2 = 0, and τ = 1, we obtain the Jacobi iterative method.
When ω1 = 0, ω2 = 0, and τ = ω, we obtain the backward JOR iterative method.
When ω1 = 1, ω2 = 0, and τ = 1, we obtain the backward G-S iterative method.
When ω1 = ω, ω2 = 0, and τ = ω, we obtain the backward SOR iterative method.
When ω1 = ω, ω2 = 0, and τ = α, we obtain the backward AOR iterative method.
When ω1 = ω, ω2 = ω, and τ = ω(2 − ω), we obtain the backward SSOR iterative method.
When ω1 = ω, ω2 = ω, and τ = ω, we obtain the backward EMA iterative method.
When ω1 = ω, ω2 = ω, and τ = α, we obtain the backward PSD iterative method.
When ω1 = ω, ω2 = ω, and τ = 1, we obtain the backward PJ iterative method.
The convergence relationship between the Gauss-Seidel iterative matrix and the Jacobi iterative matrix is studied in [12], and generalized results are studied in [18]. Eigenvalue relationships between other iterative matrices and the Jacobi iterative matrix are studied for the p-cyclic case in [19-26]. Some backward iterations are studied in [27]. In the following, we consider convergence results between the backward MPSD iterative matrix and the Jacobi iterative matrix and obtain convergence relationships between some other backward iterative matrices and the Jacobi matrix.
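A few of these specializations can be spot-checked numerically. The sketch below is illustrative: it assumes the backward MPSD matrix takes the form I − τ[(I − ω1U)(I − ω2L)]^{-1}(I − L − U), consistent with the splitting in the proof of Theorem 7, and verifies that ω1 = ω2 = 0, τ = 1 recovers the Jacobi matrix L + U, while ω1 = 1, ω2 = 0, τ = 1 recovers the backward Gauss-Seidel matrix (I − U)^{-1}L.

```python
import numpy as np

def mpsd_matrix(A, tau, w1, w2):
    """Backward MPSD iteration matrix (assumed form; see Section 2)."""
    D = np.diag(np.diag(A))
    L = np.linalg.solve(D, -np.tril(A, -1))   # L = D^{-1} C_L
    U = np.linalg.solve(D, -np.triu(A, 1))    # U = D^{-1} C_U
    I = np.eye(A.shape[0])
    M = (I - w1 * U) @ (I - w2 * L)
    return I - tau * np.linalg.solve(M, I - L - U), L, U

A = np.array([[4.0, -1.0, -1.0],
              [-1.0, 4.0, -1.0],
              [-1.0, -1.0, 4.0]])

# Case w1 = w2 = 0, tau = 1: reduces to the Jacobi matrix B = L + U.
H_jacobi, L, U = mpsd_matrix(A, 1.0, 0.0, 0.0)
assert np.allclose(H_jacobi, L + U)

# Case w1 = 1, w2 = 0, tau = 1: reduces to backward Gauss-Seidel (I - U)^{-1} L.
H_bgs, L, U = mpsd_matrix(A, 1.0, 1.0, 0.0)
assert np.allclose(H_bgs, np.linalg.solve(np.eye(3) - U, L))
```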

3. Convergence Results

In order to obtain the convergence results, we give some well-known results which will be used in the proof of Theorem 7 as follows.

Definition 2 (see [13])

The splitting A = M − N with A and M nonsingular is called a regular splitting if M −1 ≥ 0 and N ≥ 0. It is called a weak regular splitting if M −1 ≥ 0 and M −1 N ≥ 0. It is obvious that a regular splitting is a weak regular splitting.

Lemma 3 (see [13])

The nonnegative matrix T ∈ R^{n×n} is convergent, that is, ρ(T) < 1, if and only if (I − T)^{-1} exists and (I − T)^{-1} = ∑_{k=0}^{∞} T^k ≥ 0.
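A quick numerical illustration of Lemma 3 (with an arbitrary 2 × 2 test matrix, not from the paper): when ρ(T) < 1, the partial sums of the Neumann series ∑ T^k converge entrywise to (I − T)^{-1}, which is nonnegative.

```python
import numpy as np

T = np.array([[0.1, 0.3],
              [0.2, 0.4]])               # nonnegative, rho(T) < 1

rho = max(abs(np.linalg.eigvals(T)))
assert rho < 1                           # T is convergent

# Partial sums of the Neumann series sum_{k>=0} T^k.
S = np.zeros((2, 2))
P = np.eye(2)
for _ in range(200):
    S += P
    P = P @ T

inv = np.linalg.inv(np.eye(2) - T)
assert np.allclose(S, inv)               # (I - T)^{-1} = sum of T^k
assert np.all(inv >= 0)                  # and it is entrywise nonnegative
```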

Lemma 4 (see [13])

Let A = M − N be a weak regular splitting of A and H = M^{-1}N. Then the following statements are equivalent:
(1) A^{-1} ≥ 0; that is, A is inverse-positive.
(2) A^{-1}N ≥ 0.
(3) ρ(H) = ρ(A^{-1}N)/(1 + ρ(A^{-1}N)), so that ρ(H) < 1.

Lemma 5 (see [12])

Let A ≥ 0 be an irreducible n × n matrix. Then:
(1) A has a positive real eigenvalue equal to its spectral radius ρ(A);
(2) to ρ(A) there corresponds an eigenvector x > 0;
(3) ρ(A) increases when any entry of A increases;
(4) ρ(A) is a simple eigenvalue of A.

Lemma 6 (see [12])

Let A = (a_ij) ≥ 0 be an irreducible n × n matrix. Then, for any x > 0, either

(Ax)_i / x_i = ρ(A) for all i = 1, …, n,

or

min_{1≤i≤n} (Ax)_i / x_i < ρ(A) < max_{1≤i≤n} (Ax)_i / x_i.

By the lemmas above, we give the convergence theorem in the following.

Theorem 7

Let the coefficient matrix A of (1) be irreducible with a_ii ≠ 0 for all i, let B = U + L ≥ 0 be the Jacobi matrix, and let B_{τ,ω1,ω2} be the backward MPSD iterative matrix. Then, for 0 ≤ ω_k < τ ≤ 1 (k = 1, 2), we have the following.
(1) ρ(B) > 0 and ρ(B_{τ,ω1,ω2}) > 1 − τ ≥ 0.
(2) One and only one of the following mutually exclusive relations is valid:
(i) ρ(B) < 1 and ρ(B_{τ,ω1,ω2}) < 1;
(ii) ρ(B) = 1 and ρ(B_{τ,ω1,ω2}) = 1;
(iii) ρ(B) > 1 and ρ(B_{τ,ω1,ω2}) > 1.
Thus, the Jacobi iterative method and the backward MPSD iterative method are either both convergent or both divergent.

Proof

Combining ρ(ω2L) = ρ(ω1U) = 0 with Lemma 3, we have (I − ω2L)^{-1} ≥ 0 and (I − ω1U)^{-1} ≥ 0, and hence

(I − ω2L)^{-1}(I − ω1U)^{-1} ≥ 0. (12)

Since a_ii ≠ 0 and A is irreducible, I − L − U = D^{-1}A and B = L + U are irreducible. By 0 ≤ ω_k < τ ≤ 1 (k = 1, 2), we have (1 − τ)I + (τ − ω1)U + (τ − ω2)L ≥ 0 and irreducible. Thus, by (12), B_{τ,ω1,ω2} ≥ 0 and is irreducible. By Lemma 5, there exist λ = ρ(B_{τ,ω1,ω2}) > 0 and a corresponding vector x = (x1, x2, …, xn)^T > 0 such that B_{τ,ω1,ω2}x = λx, namely (13). Let η_k = τ − ω_k + λω_k (k = 1, 2); a direct calculation then gives (14).

(1) Since B ≥ 0 is irreducible, by Lemma 5, ρ(B) > 0. Since λ > 0, by 0 ≤ ω_k < τ ≤ 1 (k = 1, 2) we have η_k > 0. Moreover, λ − (1 − τ) ≥ 0 because the left side of (14) is nonnegative; thus λ ≥ 1 − τ, and (14) gives (16). If λ = 1 − τ, by (16) we have η1Ux + η2Lx ≤ 0. Since η_k > 0 (k = 1, 2), B = (b_ij) ≥ 0, and x > 0, we obtain B = 0 and thus ρ(B) = 0. This contradicts ρ(B) > 0. So λ > 1 − τ.

(2) For the mutually exclusive relations, consider the following.

(i) If 0 < ρ(B) < 1, let M = (I − ω1U)(I − ω2L), T = τ(I − B), and N = M − T, so that B_{τ,ω1,ω2} = M^{-1}N. Since M^{-1} = (I − ω2L)^{-1}(I − ω1U)^{-1} ≥ 0 and N ≥ 0, T = M − N is a regular splitting. By B ≥ 0, 0 < ρ(B) < 1, and 0 < τ ≤ 1, we know that T^{-1} = (1/τ)(I − B)^{-1} ≥ 0. By Lemma 4, λ = ρ(B_{τ,ω1,ω2}) < 1. Combining this with the result in (1), we have 1 − τ < λ < 1. Conversely, if λ < 1, then from (14), since η_k > 0 (k = 1, 2) and b_ii = 0 for all i, and since λ < 1 and 1 − ω_k > 0 (k = 1, 2), we obtain componentwise bounds on Bx; combining (23) with (29) and applying Lemma 6, we obtain 0 < ρ(B) < 1.

(ii) If λ = 1, then (14) reduces to Bx = x. Since x > 0, we have (Bx)_i/x_i = 1 for all i, and by Lemma 6 we obtain ρ(B) = 1.

(iii) If λ > 1, then from (15), since η_k > 0 (k = 1, 2) and b_ii = 0 for all i, and since λ > 1 and 1 − ω_k > 0 (k = 1, 2), we obtain componentwise lower bounds on Bx; combining (31) with (33) and applying Lemma 6, we obtain ρ(B) > 1.

Finally, the three relations are mutually exclusive. If ρ(B) = 1 and λ ≠ 1, then either λ < 1 or λ > 1; by (i) and (iii), this forces 0 < ρ(B) < 1 or ρ(B) > 1, contradicting ρ(B) = 1. So λ = 1. If ρ(B) > 1 and λ ≤ 1, then by (i) and (ii) we would have ρ(B) ≤ 1, contradicting ρ(B) > 1. So λ > 1.

With special values of ω1, ω2, and τ, we have the following corollaries.
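The dichotomy of Theorem 7 can be observed numerically. The sketch below uses illustrative matrices (not from the paper) and assumes the backward MPSD matrix takes the form I − τ[(I − ω1U)(I − ω2L)]^{-1}(I − L − U): the spectral radii of B and of the backward MPSD matrix fall on the same side of 1.

```python
import numpy as np

def radii(A, tau, w1, w2):
    """Spectral radii of the Jacobi matrix B and the backward MPSD matrix (sketch)."""
    D = np.diag(np.diag(A))
    L = np.linalg.solve(D, -np.tril(A, -1))
    U = np.linalg.solve(D, -np.triu(A, 1))
    I = np.eye(A.shape[0])
    B = L + U
    M = (I - w1 * U) @ (I - w2 * L)
    H = I - tau * np.linalg.solve(M, I - L - U)
    rho = lambda X: max(abs(np.linalg.eigvals(X)))
    return rho(B), rho(H)

tau, w1, w2 = 0.5, 0.25, 0.25   # 0 <= w_k < tau <= 1, as in Theorem 7

# Convergent case: strictly diagonally dominant, rho(B) < 1.
A1 = np.array([[4.0, -1.0, -1.0], [-1.0, 4.0, -1.0], [-1.0, -1.0, 4.0]])
rB1, rH1 = radii(A1, tau, w1, w2)
assert rB1 < 1 and rH1 < 1      # both convergent

# Divergent case: the off-diagonal part dominates, rho(B) > 1.
A2 = np.array([[1.0, -2.0, -2.0], [-2.0, 1.0, -2.0], [-2.0, -2.0, 1.0]])
rB2, rH2 = radii(A2, tau, w1, w2)
assert rB2 > 1 and rH2 > 1      # both divergent
```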

Corollary 8

Let the coefficient matrix A of (1) be irreducible, B = U + L ≥ 0 the Jacobi matrix, and B_JOR the backward JOR iterative matrix. Then, for 0 ≤ ω ≤ 1, we have the following.
(1) ρ(B) > 0 and ρ(B_JOR) > 1 − ω.
(2) One and only one of the following mutually exclusive relations is valid:
(i) ρ(B) < 1 and ρ(B_JOR) < 1;
(ii) ρ(B) = 1 and ρ(B_JOR) = 1;
(iii) ρ(B) > 1 and ρ(B_JOR) > 1.
Thus, the Jacobi iterative method and the backward JOR iterative method are either both convergent or both divergent.

Corollary 9

Let the coefficient matrix A of (1) be irreducible, B = U + L ≥ 0 the Jacobi matrix, and B_GS the backward Gauss-Seidel iterative matrix. Then, we have the following.
(1) ρ(B) > 0 and ρ(B_GS) > 0.
(2) One and only one of the following mutually exclusive relations is valid:
(i) ρ(B) < 1 and ρ(B_GS) < 1;
(ii) ρ(B) = 1 and ρ(B_GS) = 1;
(iii) ρ(B) > 1 and ρ(B_GS) > 1.
Thus, the Jacobi iterative method and the backward Gauss-Seidel iterative method are either both convergent or both divergent.

Corollary 10

Let the coefficient matrix A of (1) be irreducible, B = U + L ≥ 0 the Jacobi matrix, and B_SOR the backward SOR iterative matrix. Then, for 0 ≤ ω ≤ 1, we have the following.
(1) ρ(B) > 0 and ρ(B_SOR) > 1 − ω.
(2) One and only one of the following mutually exclusive relations is valid:
(i) ρ(B) < 1 and ρ(B_SOR) < 1;
(ii) ρ(B) = 1 and ρ(B_SOR) = 1;
(iii) ρ(B) > 1 and ρ(B_SOR) > 1.
Thus, the Jacobi iterative method and the backward SOR iterative method are either both convergent or both divergent.

Corollary 11

Let the coefficient matrix A of (1) be irreducible, B = U + L ≥ 0 the Jacobi matrix, and B_AOR the backward AOR iterative matrix. Then, for 0 ≤ ω < α ≤ 1, we have the following.
(1) ρ(B) > 0 and ρ(B_AOR) > 1 − α.
(2) One and only one of the following mutually exclusive relations is valid:
(i) ρ(B) < 1 and ρ(B_AOR) < 1;
(ii) ρ(B) = 1 and ρ(B_AOR) = 1;
(iii) ρ(B) > 1 and ρ(B_AOR) > 1.
Thus, the Jacobi iterative method and the backward AOR iterative method are either both convergent or both divergent.

Corollary 12

Let the coefficient matrix A of (1) be irreducible, B = U + L ≥ 0 the Jacobi matrix, and B_SSOR the backward SSOR iterative matrix. Then, for 0 ≤ ω ≤ 1, we have the following.
(1) ρ(B) > 0 and ρ(B_SSOR) > (1 − ω)^2.
(2) One and only one of the following mutually exclusive relations is valid:
(i) ρ(B) < 1 and ρ(B_SSOR) < 1;
(ii) ρ(B) = 1 and ρ(B_SSOR) = 1;
(iii) ρ(B) > 1 and ρ(B_SSOR) > 1.
Thus, the Jacobi iterative method and the backward SSOR iterative method are either both convergent or both divergent.

Corollary 13

Let the coefficient matrix A of (1) be irreducible, B = U + L ≥ 0 the Jacobi matrix, and B_EMA the backward EMA iterative matrix. Then, for 0 ≤ ω ≤ 1, we have the following.
(1) ρ(B) > 0 and ρ(B_EMA) > 1 − ω.
(2) One and only one of the following mutually exclusive relations is valid:
(i) ρ(B) < 1 and ρ(B_EMA) < 1;
(ii) ρ(B) = 1 and ρ(B_EMA) = 1;
(iii) ρ(B) > 1 and ρ(B_EMA) > 1.
Thus, the Jacobi iterative method and the backward EMA iterative method are either both convergent or both divergent.

Corollary 14

Let the coefficient matrix A of (1) be irreducible, B = U + L ≥ 0 the Jacobi matrix, and B_PSD the backward PSD iterative matrix. Then, for 0 ≤ ω < α ≤ 1, we have the following.
(1) ρ(B) > 0 and ρ(B_PSD) > 1 − α.
(2) One and only one of the following mutually exclusive relations is valid:
(i) ρ(B) < 1 and ρ(B_PSD) < 1;
(ii) ρ(B) = 1 and ρ(B_PSD) = 1;
(iii) ρ(B) > 1 and ρ(B_PSD) > 1.
Thus, the Jacobi iterative method and the backward PSD iterative method are either both convergent or both divergent.

Corollary 15

Let the coefficient matrix A of (1) be irreducible, B = U + L ≥ 0 the Jacobi matrix, and B_PJ the backward PJ iterative matrix. Then, for 0 ≤ ω ≤ 1, we have the following.
(1) ρ(B) > 0 and ρ(B_PJ) > 0.
(2) One and only one of the following mutually exclusive relations is valid:
(i) ρ(B) < 1 and ρ(B_PJ) < 1;
(ii) ρ(B) = 1 and ρ(B_PJ) = 1;
(iii) ρ(B) > 1 and ρ(B_PJ) > 1.
Thus, the Jacobi iterative method and the backward PJ iterative method are either both convergent or both divergent.

Remark 16

Convergence results between the backward MPSD and the Jacobi iterative matrix have been proposed, and convergence results between some special cases of backward MPSD (such as backward JOR, backward G-S, backward EMA, and backward PSD) and the Jacobi iterative matrix have been obtained. These results cover several special iterative methods proposed in the references.

4. Numerical Examples

In this section, we show five examples. The first three examples show the convergence of the proposed iterative methods. Example 4 shows their divergence. Example 5 shows that the backward iterative methods are better than the original methods when the upper triangular part dominates the lower triangular part. In the following figures, the horizontal axis denotes the number of iterations and the vertical axis denotes the iteration errors.

Example 1

Let the coefficient matrix A and the vector b of (1) be as follows. The Jacobi iterative matrix is B, and by calculation we obtain 0 < ρ(B) = 1/2 < 1. With these iterative methods and the presented algorithm, the solution is x = (2, 2, 2, 2)^T. Let τ = α = 1/2 and ω1 = ω2 = ω = 1/4; we obtain the backward PSD iterative matrix and its spectral radius. Let τ = 1 and ω1 = ω2 = ω = 1/2; we obtain the backward PJ iterative matrix and its spectral radius. Let τ = ω = 1/2 and ω1 = ω2 = 0; we obtain the backward JOR iterative matrix and its spectral radius. Let τ = ω1 = ω2 = ω = 1/2; we obtain the backward EMA iterative matrix and its spectral radius.

Example 2

Discretizing the Laplace equation on a uniform square mesh by five-point difference approximations, with the interior mesh points shown in Figure 1 [28], leads to the linear system (1), where the matrix A and the vector b are as follows.
Figure 1

Uniform square mesh of five-point difference.

The Jacobi iterative matrix is B; by calculation, we obtain 0 < ρ(B) = 1159/1601 < 1. Let τ = α = 1/2 and ω1 = ω2 = ω = 1/4; we obtain the backward PSD iterative matrix and its spectral radius. Let τ = 1 and ω1 = ω2 = ω = 1/2; we obtain the backward PJ iterative matrix and its spectral radius. Let τ = ω = 1/2 and ω1 = ω2 = 0; we obtain the backward JOR iterative matrix and its spectral radius. Let τ = ω1 = ω2 = ω = 1/2; we obtain the backward EMA iterative matrix and its spectral radius. In Figures 2, 3, 4, and 5, the errors of the Jacobi iteration are denoted by blue circles and those of the backward iterations by red stars. From these figures, we see that the Jacobi iteration is better than the backward PSD, JOR, and EMA iterations and worse than the backward PJ iteration under the values of ω1, ω2, and τ in this example.
Figure 2

The errors of PSD and Jacobi iteration.

Figure 3

The errors of PJ and Jacobi iteration.

Figure 4

The errors of JOR and Jacobi iteration.

Figure 5

The errors of EMA and Jacobi iteration.

Example 3

Let the coefficient matrix A of (1) be as follows, where q = −p/n, r = −p/(n + 1), and s = −p/(n + 2) [28]. Here, we let n = 100 and p = 1. By calculation, we obtain 0 < ρ(B) = 199/203 < 1. Let τ = α = 1/2 and ω1 = ω2 = ω = 1/4; we obtain the backward PSD iterative matrix and its spectral radius. Let τ = 1 and ω1 = ω2 = ω = 1/2; we obtain the backward PJ iterative matrix and its spectral radius. Let τ = ω = 1/2 and ω1 = ω2 = 0; we obtain the backward JOR iterative matrix and its spectral radius. Let τ = ω1 = ω2 = ω = 1/2; we obtain the backward EMA iterative matrix and its spectral radius.

Example 4

Let the coefficient matrix A and the vector b of (1) be as follows. The Jacobi iterative matrix is B; by calculation, we obtain ρ(B) = 987/305 > 1, which shows that the backward MPSD iteration diverges for this example. Let τ = α = 1/2 and ω1 = ω2 = ω = 1/4; we obtain the backward PSD iterative matrix and its spectral radius. Let τ = 1 and ω1 = ω2 = ω = 1/2; we obtain the backward PJ iterative matrix and its spectral radius. Let τ = ω = 1/2 and ω1 = ω2 = 0; we obtain the backward JOR iterative matrix and its spectral radius. Let τ = ω1 = ω2 = ω = 1/2; we obtain the backward EMA iterative matrix and its spectral radius.

Example 5

Let the coefficient matrix A of (1) be as follows; an analogous matrix A can be found in [29]. By calculation, we obtain ρ(B) = 811/822 < 1. Let τ = 1, ω1 = 1, and ω2 = 0; we obtain the backward Gauss-Seidel iterative matrix and its spectral radius, together with the Gauss-Seidel iterative matrix and its spectral radius. In Figure 6, the red stars denote the errors of the backward Gauss-Seidel iteration and the blue circles those of the Gauss-Seidel iteration. So, the backward iterative methods are better than the original methods under the assumption that the upper triangular part dominates the lower triangular part.
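The effect seen in Example 5 is easy to reproduce: when the strictly upper triangular part of A dominates the strictly lower one, the backward Gauss-Seidel matrix (I − U)^{-1}L tends to have a smaller spectral radius than the forward Gauss-Seidel matrix (I − L)^{-1}U. A sketch with an illustrative upper-dominant matrix (not the matrix of Example 5):

```python
import numpy as np

# Illustrative matrix: the strictly upper part is much larger than the lower part.
A = np.array([[10.0, -4.0, -4.0],
              [-0.5, 10.0, -4.0],
              [-0.5, -0.5, 10.0]])

D = np.diag(np.diag(A))
L = np.linalg.solve(D, -np.tril(A, -1))      # L = D^{-1} C_L
U = np.linalg.solve(D, -np.triu(A, 1))       # U = D^{-1} C_U
I = np.eye(3)

rho = lambda X: max(abs(np.linalg.eigvals(X)))
rho_gs = rho(np.linalg.solve(I - L, U))      # forward Gauss-Seidel matrix
rho_bgs = rho(np.linalg.solve(I - U, L))     # backward Gauss-Seidel matrix

# The backward sweep eliminates the dominant (upper) part directly,
# so its spectral radius is smaller here and it converges faster.
assert rho_bgs < rho_gs < 1
```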
Figure 6

The errors of backward Gauss-Seidel and Gauss-Seidel iteration.

5. Conclusions

The Jacobi iteration is the basic iteration for linear systems, and its convergence is easier to analyze than that of other iterations. In this paper, we proposed the backward MPSD iteration and obtained convergence results between the backward MPSD iteration (including special cases such as backward JOR, backward G-S, backward EMA, and backward PSD) and the Jacobi iteration. We pointed out that, under the assumptions of Theorem 7, the backward MPSD iteration and the Jacobi iteration are either both convergent or both divergent, so the convergence or divergence of the backward MPSD iteration can be ascertained from the Jacobi iteration. In some cases, the backward iteration is better than the original one.
Cited records

[6] Deo M, Bauer S, Plank G, Vigmond E. Reduced-order preconditioning for bidomain simulations. IEEE Trans Biomed Eng. 2007 May.

[7] Shou G, Xia L, Jiang M, Wei Q, Liu F, Crozier S. Truncated total least squares: a new regularization method for the solution of ECG inverse problems. IEEE Trans Biomed Eng. 2008 Apr.
