
Regular approximate factorization of a class of matrix-functions with an unstable set of partial indices.

G. Mishuris, S. Rogosin.

Abstract

From the classic work of Gohberg & Krein (1958 Uspekhi Mat. Nauk XIII, 3-72 (in Russian)), it is well known that the set of partial indices of a non-singular matrix function may change depending on the properties of the original matrix. More precisely, it was shown that if the difference between the largest and the smallest partial indices is larger than unity, then in any neighbourhood of the original matrix function there exists another matrix function possessing a different set of partial indices. As a result, the factorization of matrix functions, an extremely difficult process in itself even in the case of canonical factorization, remains unresolved or even questionable in the case of a non-stable set of partial indices. This situation, in turn, has become an unavoidable obstacle to the application of the factorization technique. This paper sets out to answer a less ambitious question than that of effectively factorizing matrix functions with non-stable sets of partial indices. Instead, given a known factorization of the limiting matrix function, it focuses on determining the conditions that allow one to construct a family of matrix functions with the same origin that preserves the non-stable partial indices and remains close to the original set of matrix functions.


Keywords:  asymptotic methods; factorization of matrix-functions; unstable partial indices

Year:  2018        PMID: 29434502      PMCID: PMC5806012          DOI: 10.1098/rspa.2017.0279

Source DB:  PubMed          Journal:  Proc Math Phys Eng Sci        ISSN: 1364-5021            Impact factor:   2.704


Introduction

A given invertible square matrix function G(x) is called factorizable if it can be represented in the form

G(x) = G−(x)Λ(x)G+(x), x ∈ ℝ, (1.1)

with continuous invertible factors G±(x), (G±)−1(x), possessing analytic continuation into the corresponding half-planes Π±={z=x+iy: Im ±z<0}, and

Λ(x) = diag{((x−i)/(x+i))κ1, …, ((x−i)/(x+i))κn}, κj ∈ ℤ. (1.2)

The representation (1.1) is called the right (or right-sided) factorization on the real axis; it is also referred to as a right continuous factorization. If instead we have the representation G(x) = G+(x)Λ(x)G−(x), then it is called the left (or left-sided) factorization. If the right (or left) factorization exists, then the integers κ1, …, κn, called the partial indices, are determined uniquely up to their order, e.g. κ1 ≥ κ2 ≥ … ≥ κn. The factors G−, G+ are not unique (e.g. [1]). In general, the partial indices for the right factorization and for the left factorization are not necessarily the same. Throughout this paper, we deal only with the right factorization; the left factorization can be handled analogously.

Several general facts on factorization have been presented in the survey paper [2] (see also [3,4]). In particular, it is well known that the sum of the partial indices is equal to the winding number (or the Cauchy index) of the determinant of the given invertible square matrix G:

κ1 + κ2 + ⋯ + κn = ind det G(x).

The factorization is called a canonical factorization if all the partial indices are equal to 0, i.e. κ1 = ⋯ = κn = 0. It is said (e.g. [5], p. 50) that a non-singular matrix function G(x) has a stable set of partial indices if there exists δ>0 such that any matrix function F(x) from the δ-neighbourhood of G(x) (i.e. ∥F−G∥<δ) has the same set of partial indices (right or left). If not, then G(x) has an unstable set of partial indices. It has been shown (see [5-7] and also [8]) that a set of partial indices is stable if and only if κ1 − κn ≤ 1. In the unstable case, a small deformation of the matrix function G(x) can lead to changes in the partial indices. More precisely, there exists a sequence of matrix functions FN(x) that converge to G(x) but possess a distinct set of partial indices, i.e.

FN(x) = F−N(x)Λ̃(x)F+N(x), (1.5)

where Λ̃(x) ≠ Λ(x). We note that, in all the known examples illustrating such a situation (e.g. [5,6]), the sequences of factors F∓N do not possess limiting values (as N→∞) in the same chosen space as G(x). On the contrary, even in the case of unstable partial indices, we can easily construct a sequence of factors F∓N in (1.5) with the same partial indices as the original matrix, i.e. Λ̃(x) = Λ(x): it suffices to perturb the pair G∓ by arbitrary matrices belonging to the same space as G∓ and vanishing in norm as N→∞.

Let us now consider a family of matrix functions Gε(x), ε ≥ 0, whose limiting member G0 = G possesses a factorization. Any matrix Gε satisfying ∥Gε − G∥ → 0 as ε → +0 will be called a perturbation of the matrix G.
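The relation between the partial indices and the winding number of the determinant can be checked numerically. The sketch below is our own illustration in Python/NumPy (not from the paper): it uses the diagonal matrix diag(λ, 1/λ), λ(x) = (x−i)/(x+i), whose partial indices are (1, −1), and computes the winding numbers of λ and of the determinant over a large finite segment of the real axis.

```python
import numpy as np

# Numerical check of the index identity: the sum of the partial indices
# equals the winding number (Cauchy index) of det G along the real axis.
# Here G0(x) = diag(lam(x), 1/lam(x)), lam(x) = (x - i)/(x + i),
# with partial indices kappa = (1, -1).

def winding(vals):
    """Total continuous change of the argument, in full turns."""
    ang = np.unwrap(np.angle(vals))
    return (ang[-1] - ang[0]) / (2 * np.pi)

x = np.linspace(-1e3, 1e3, 200_001)
lam = (x - 1j) / (x + 1j)

ind_lam = winding(lam)          # index of the scalar factor: close to +1
ind_det = winding(lam / lam)    # det G0 = 1 identically, so the index is 0

print(round(ind_lam), round(ind_det))
```

The index of λ is recovered as +1 up to truncation error on the finite segment, while det G0 ≡ 1 has index 0 = 1 + (−1), in agreement with the sum rule.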

Definition 1.1

Let G(x) be a given factorizable matrix function. Its perturbation Gε is considered 'regular' if there exists ε0>0 such that the matrix Gε possesses a bounded factorization (i.e. its factors and their inverses are bounded for all ε∈[0,ε0) and z∈Π±). Otherwise, the perturbation is considered 'singular'.

Lemma 1.2

The partial indices of a regular perturbation Gε are the same as those of G.

Proof.

Let Gε be a regular perturbation of G(x), with a uniformly bounded factorization Gε = Gε−ΛεGε+. By taking the limit as ε→+0 in this factorization, we obtain Λε=Λ due to the uniqueness of the partial indices. ▪

Remark 1.3

If the partial indices of G satisfy the stability condition κ1 − κn ≤ 1, then any perturbation of G is regular.

Remark 1.4

To highlight the role of the condition of boundedness of the factors, we present here a variant of the classical example of Gohberg & Krein [5], p. 264. Let us consider the diagonal matrix G0(x) defined in (1.6). It is clear that this matrix possesses a factorization with G∓ = I, Λ(x)=G0(x) and partial indices κ1 = 1, κ2 = −1. Consider a slight perturbation Gε(x) of the matrix G0(x). We note that for sufficiently small ε>0 the matrices are close to each other. On the other hand, for every fixed ε>0, the matrix Gε(x) possesses a factorization with partial indices κ1 = κ2 = 0. For each fixed ε>0, the factors in this factorization admit analytic continuations into the corresponding half-planes, where they are bounded. However, these factors are not uniformly bounded for ε∈[0,ε0) for any ε0>0. Hence Gε(x) is a singular perturbation of the above diagonal matrix G0(x). This example provides a simple, but not unique, method for the construction of a singular perturbation of any n×n diagonal matrix with an unstable set of partial indices. Moreover, by a suitable modification of this procedure, we can construct a singular perturbation of any factorizable matrix (figure 1) which is arbitrarily close to the given matrix G0(x).
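To make the unboundedness of the factors tangible, here is a numerical sketch under explicit assumptions (our reconstruction, not copied from the paper): we take G0(x) = diag(λ, 1/λ) with λ(x) = (x−i)/(x+i), place the perturbation ε above the diagonal, one common variant of the classical example, and verify an explicit canonical factorization, derived by hand, whose 'minus' factor grows like 1/ε while ∥Gε − G0∥ = ε.

```python
import numpy as np

# Assumed form of the Gohberg-Krein-type example: G0 = diag(lam, 1/lam) has
# partial indices (1, -1); adding eps above the diagonal yields a matrix with
# a canonical (0, 0) factorization whose factors blow up like 1/eps.

def lam(x):
    return (x - 1j) / (x + 1j)

def G_eps(x, eps):
    return np.array([[lam(x), eps], [0.0, 1.0 / lam(x)]])

def G_minus(x, eps):
    # Analytic wherever 1/lam is analytic; entries of size 1/eps.
    return np.array([[1.0, 0.0], [1.0 / (eps * lam(x)), -1.0 / eps]])

def G_plus(x, eps):
    # Analytic wherever lam is analytic; bounded uniformly in eps.
    return np.array([[lam(x), eps], [1.0, 0.0]])

eps = 1e-3
for x in (-5.0, -0.3, 0.7, 12.0):
    assert np.allclose(G_minus(x, eps) @ G_plus(x, eps), G_eps(x, eps))

# The factorization is exact but not uniformly bounded in eps:
print(np.abs(G_minus(1.0, eps)).max())
```

Both factors are invertible up to infinity (det G_minus = −1/ε and det G_plus = −ε are non-vanishing constants), so the partial indices of Gε are (0, 0), while ∥Gε − G0∥ = ε: the perturbation is singular in the sense of definition 1.1.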
Figure 1.

Possible types of perturbations, G, (∥G−G0∥<ε), in the cases of stable (a) and unstable (b) sets of partial indices of the matrix-function G0. (Online version in colour.)


Definition 1.5

Let Gε(x) be a perturbation of a factorizable matrix function G0(x). If there exists another perturbation G̃ε(x) satisfying ∥G̃ε − Gε∥ = O(εk) (1.9), then we say that G̃ε is a 'k-guided perturbation' of Gε.

It follows from (1.9) that for each regular perturbation Gε(x) of the matrix function G0(x), there exists a singular k-guided perturbation for any k≥1. The above-mentioned properties are illustrated in figure 1. In figure 1a, we show that for any factorizable matrix G0(x) with a stable set of partial indices, there is an ε-neighbourhood containing only regular perturbations. Figure 1b illustrates the case of unstable partial indices of G0(x). Here the situation is more delicate: in each ε-neighbourhood of G0(x), we can find both regular and singular perturbations.

The aim of this paper is the construction of a regular k-guided (k>1) perturbation for a given perturbation Gε(x) of the matrix function G0(x). The perturbation Gε(x) is of the same type as in [9,10], and G0(x) has a known factorization with unstable partial indices. For k=1 the construction is trivial but has no practical use. The notion of a k-guided perturbation helps us to highlight the possible structure of the set of partial indices of the matrices located in an ε-neighbourhood of a matrix with unstable partial indices. Clearly, the higher the order k of the guided perturbation, the more accurate the approximation of the original matrix that it provides.

The factorization technique is a powerful tool in solving practical problems [11-18], and any approximate factorization will allow a wide range of practical problems to be tackled with some level of accuracy. The establishment of an approximate (e.g. [19]) or an asymptotic procedure [9,10,20-22] is a challenging problem, because an exact factorization is possible only in a number of special cases (see [2] and references therein). Likewise, the aforementioned non-uniqueness of the factorization problem does not prevent it from being used effectively in practice.
However, one needs to be careful in using an approximate factorization in the case of unstable indices, as it may introduce not only a quantitative but also a qualitative deviation of the approximate solution from the original one. Here, any links between the partial indices of the factorization problem and the particular physical properties of the problem in question are crucial (see the extended discussion in [2], cf. also [25,26]); this question is beyond the scope of the paper.

Below, we discuss whether, and under which conditions, it is possible to find an n×n matrix function sufficiently close to a given regular perturbation Gε(x) of the matrix function G0(x) and possessing an unstable set of partial indices. More exactly, we ask when it is possible to find such a matrix function while preserving the partial indices of Gε(x). To answer this question in the case of unstable partial indices, a new definition of the asymptotic factorization is given and applied, and the method proposed in [9,10] is generalized and employed. We find conditions under which our asymptotic procedure is effective, illustrate its properties and details by examples, and illuminate the efficiency of the procedure by numerical results.

The paper is organized as follows. In §2, we present the necessary definitions and notation, together with basic facts from factorization theory. Next (§3), we consider certain perturbations of matrices factorized with unstable partial indices and describe an algorithm for the construction of an approximate factorization of the perturbed matrices that preserves the initial partial indices. The conditions for the realization of this algorithm, which are simply solvability conditions for a certain boundary value problem, are also derived here, and we provide examples where the solvability conditions are and are not satisfied. We conclude with illustrations of the obtained numerical results and a discussion of their importance in §4.

Asymptotic factorization: definitions

To proceed, we introduce the necessary definitions. We denote the set of bounded matrix functions with locally Hölder-continuous entries, endowed with the norm ∥⋅∥, as our basic class; in this paper, we consider only invertible matrices of this class. It should be noted that our method can also be applied to a wider class of matrix functions, e.g. those having no prescribed modulus of continuity. Below, we give a definition of the asymptotic factorization only in the case of a regular perturbation of a given matrix function, since we lack for the moment a formal procedure distinguishing between the cases of regular and singular perturbations, and as yet have no useful example of the construction of an asymptotic factorization of a singularly perturbed matrix function.

Definition 2.1

Let G(x) be a given factorizable matrix of the considered class and Gε(x) be its regular perturbation. We say that a set of pairs of matrix functions (m=1,2,…,N) and a diagonal matrix Λ(x) of the form (1.2) represent an asymptotic factorization (of order N) of the matrix function Gε if the following conditions are satisfied:

(i) there exists a sequence of functions θk(ε), k=1,2,…,N+1, vanishing at the point ε=0, such that θk+1(ε)=o(θk(ε)) as ε→0 for any k=1,2,…,N (2.2);

(ii) there exist matrices of the form (2.3);

(iii) there exists ε0>0 such that the matrices in (2.3) are analytic in Π∓, respectively, and bounded there uniformly with respect to ε∈[0,ε0);

(iv) the estimate (2.4) is valid for any m=1,2,…,N.

The resulting representation (2.5) is called an asymptotic factorization (of order N) of the matrix Gε(x). We note that conditions (2.2) and (2.3) guarantee that the matrices involved belong to the required class, and thus both terms on the left-hand side of (2.4) represent two essentially different factorizations. As a simple example of the sequence (2.2), we could consider θk(ε)=εk.

Some clarifications are required for this definition.

— We are concerned with regular perturbations of the given matrix, i.e. we look for representations (2.3) possessing factors that are bounded in z in the corresponding half-planes; this choice is motivated purely by applications. In fact, the boundedness conditions could be replaced by other conditions, such as polynomial growth/decay at infinity.

— The given definition does not require uniqueness. Indeed, as was demonstrated by the example in [10], which considers the case of stable partial indices, there is no uniqueness, even with the enforcement of additional conditions at infinity.

— The parameter N is also involved in the process of asymptotic factorization. In the case when N→∞ and the series converge, the asymptotic factorization becomes the standard factorization, with the factors defined by their converging series.

— The method used in [10], in the case of stable partial indices, allows one to construct for some matrix functions the factors of the factorization as converging asymptotic series preserving the partial indices, i.e. Λ̃(x)=Λ(x). However, even in this case, no uniqueness can be guaranteed.

— The factors in the representation (2.3) are continuous with respect to ε≥0.

— Although the asymptotic factorization is not unique, we can prove, similarly to lemma 1.2, the uniqueness of the partial indices (Λ̃(x)=Λ(x)).

— If an asymptotic factorization of order N>1 exists, then the matrix function composed of its factors is an (N+1)-guided perturbation of the corresponding perturbation Gε(x) of the matrix G(x).

Although we consider in this paper only the factorization of a matrix function on the real axis, the factorization of matrices defined on any oriented curve Γ dividing the complex plane into two domains D− and D+ can be tackled in the same way, by changing the diagonal entries in Λ(x) to ((t−t−)/(t−t+))κj, t∓∈D∓, or simply to tκj if 0∈D+.

Let us now consider a matrix function G0(x) admitting a factorization (1.1) with an unstable set of partial indices (κ1−κn≥2), and its perturbation Gε(x), depending on a small parameter ε in such a way that ∥Gε−G0∥→0 as ε→0. Our motivating question is the following: how does one distinguish a class of possible perturbations that allows the construction of the asymptotic procedure described in (2.2)–(2.5)? In the case of stable partial indices, such a perturbation and the corresponding asymptotic procedure are always possible, as shown in [9,10]. We demonstrate below that this statement is no longer valid in the case of unstable partial indices, and some additional conditions are required.

Asymptotic factorization: procedure

Let us consider an invertible, bounded, locally Hölder continuous matrix function of the form

Gε(x) = G0(x)(I + θ1(ε)N(x)), (3.1)

where θ1(ε)=o(1) as ε→0 and N is bounded and Hölder continuous on the real axis. We suppose additionally that, at ε=0, the matrix G0(x) possesses a factorization with unstable partial indices (κ1−κn≥2), with factors which admit analytic continuation into the half-planes Π∓, respectively, and are bounded there. We look for an asymptotic factorization of the matrix Gε(x) of the type (3.1), which is a regular perturbation of G0(x), up to some stage of the asymptotic procedure. For simplicity, we will consider θ1(ε)=ε (see remark 3.6, cf. [9], lemma 3.6).

First step of the asymptotic factorization

First, we present the matrix Gε(x) in the form (3.2), where the unknown matrix functions must be analytically extendable into Π∓, together with their inverses, and bounded there, respectively. Note that (3.2) differs from the representation used for the case of stable partial indices (cf. [9,10]). Comparing the terms with the parameter ε, we arrive at the boundary condition (3.3) for the unknown matrix functions. For brevity, we introduce the notation (3.4); hence, the unknown matrix functions have to satisfy the boundary condition (3.5).

In order to determine solvability conditions for this problem and to find a representation for the solution, we recall a few facts from the theory of boundary value problems [23,24]. It is known that any bounded, locally Hölder continuous function f can be represented uniquely, up to an arbitrary constant, as the sum of two functions which are analytic in Π− and Π+ and bounded in the corresponding closed half-planes, respectively, the components being expressed through the Cauchy-type integral ([23], p. 52)

Φ(z) = (1/(2πi)) ∫ℝ f(t)/(t−z) dt.

Representation (3.6), and the further formulae, remain valid in the matrix case too. Therefore, the formal solution to (3.5) has the form given in (3.8) and (3.9), where C0 is a constant n×n matrix. Such a representation gives the bounded solution of the first-order asymptotic factorization problem (3.2), with factors involving analytic matrices uniquely defined by (3.4), if and only if certain solvability conditions are satisfied. These conditions simply require that the solutions have no singular points at ∓i. The arbitrary constants (the entries of the matrix C0) can be used in part, but not all of the solvability conditions can be satisfied by a proper choice of C0.
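The half-plane splitting via the Cauchy-type integral can be illustrated by brute-force quadrature. The toy function f(x) = 1/(x²+1) and the quadrature parameters below are our own illustration (not from the paper); for a point z in the upper half-plane, the integral reproduces the partial-fraction piece of f that is analytic there.

```python
import numpy as np

# Quadrature check of Phi(z) = (1/(2*pi*i)) * int f(t)/(t - z) dt for the
# toy function f(x) = 1/(x**2 + 1).  By partial fractions,
# 1/(x**2 + 1) = (i/2) * (1/(x+i) - 1/(x-i)), and for Im z > 0 the Cauchy
# integral returns the piece (i/2)/(z+i), which is analytic there.

def cauchy_integral(f, z, R=500.0, n=400_001):
    t = np.linspace(-R, R, n)
    vals = f(t) / (t - z)
    return vals.sum() * (t[1] - t[0]) / (2j * np.pi)

f = lambda t: 1.0 / (t * t + 1.0)

z_up = 0.3 + 0.5j                # test point in the upper half-plane
phi = cauchy_integral(f, z_up)
exact = 0.5j / (z_up + 1j)       # the half-plane-analytic piece of f

print(abs(phi - exact))          # small quadrature/truncation error
```

The discrepancy is dominated by the truncation of the integral to [−R, R]; since f decays like t⁻², the tail contributes only O(R⁻²).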

Solvability conditions

Here, we present the necessary and sufficient solvability conditions for boundary value problem (3.5), which is equivalent to the first step of the asymptotic factorization ([23], p. 120).

— if for certain k, q+1≤k≤n, we have , then the boundedness of at z=−i follows whenever we choose such that

— if for certain k, q+1≤k≤n, we have , then the corresponding must be chosen as in (3.12), and the entries m0,(τ) have to satisfy conditions

— if for certain k, 1≤k≤p, we have , then the boundedness of at z=i follows whenever we choose such that

— if for certain k, 1≤k≤p, we have , then the corresponding must be chosen as in (3.14), and the entries m0,(τ) have to satisfy conditions

— if the pair (l,j) is such that 1≤l≤p, q+1≤j≤n, then additional solvability conditions must be satisfied

— if the pair (l,j) is such that either 1≤l≤n, 1≤j≤q, or p+1≤l≤n, 1≤j≤n, then we have no condition on the entries m0,(τ); the corresponding constants can take arbitrary values.

Theorem 3.1

Formula (3.2) gives the first-order bounded asymptotic factorization for all ε smaller than a certain positive ε1 if and only if the solvability conditions (3.13), (3.15) and (3.16) are satisfied and the constants are chosen accordingly.

Proof.

If the conditions of the theorem are satisfied, then the matrix functions give a bounded solution to problem (3.5). Moreover, these matrices are bounded in the neighbourhoods of z=−i and z=i, respectively. By choosing a sufficiently small ε1>0, we can guarantee that the matrix functions are invertible in the corresponding half-planes. Thus, for ε∈[0,ε1), formula (3.2) gives the first-order bounded asymptotic factorization.

To demonstrate the necessity of the theorem's conditions, suppose that formula (3.2) gives the first-order bounded asymptotic factorization. Then the matrix functions have to satisfy boundary condition (3.3), being analytically extendable into Π∓, together with their inverses, and bounded there, respectively. The boundary value problem (3.3) is equivalent to (3.5), and invertibility of the matrices requires, in particular, boundedness of the matrix functions (3.8) and (3.9) in the neighbourhoods of z=−i and z=i, respectively. The latter leads to the necessity of the conditions of the theorem. ▪

Remark 3.2

The numbers of solvability conditions and conditions on the choice of the constants satisfy the following relations.

— The number of solvability conditions is given by (3.17).

— (n−q)n constants are chosen according to (3.12) and np constants are equal to 0, as in (3.14). In the (n−q)p cases described in (3.16), these choices of the constants must coincide.

— n(n−p+q) constants can be chosen arbitrarily.

Remark 3.3

The obtained result can be interpreted in the following manner. Let the matrix function Gε(x) be a perturbation of G0(x); in particular, Gε(x) lies in the ε-neighbourhood of G0(x) (figure 1a). If the matrix Gε(x) satisfies the above solvability conditions, then for all sufficiently small ε there exists a matrix which possesses a factorization with the same unstable set of partial indices as G0(x) and which lies in the ε2-neighbourhood of Gε(x) (figure 1b). This means that for each point of the linear manifold of matrices Gε(x), as defined by (3.2), which satisfies the solvability conditions, there exists a point (a matrix) in its ε2-neighbourhood that preserves the initial partial indices; i.e., according to definition 1.5, the latter matrix is a regular 2-guided perturbation.

Further steps of the asymptotic factorization

Let the solvability conditions be satisfied and the constants chosen accordingly. By solving the corresponding boundary value problems, we can refine the first-order factorization up to the rth step of the factorization using the representation (3.19), which leads to the boundary value problem (3.20). The formal solution to problem (3.20) can be presented in the form (3.21) and (3.22), which features a new constant matrix. It becomes a solution in the considered class if and only if the solvability conditions (3.12)–(3.16) are satisfied (with the functions m0,(x) and the constants replaced by their rth-step counterparts), while the constants are chosen accordingly. If at a certain step r=N+1 at least one solvability condition fails, then the procedure for the asymptotic factorization stops at this point.
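The gain from a further step can be seen in a scalar toy model (our illustration; the matrix case in the text requires the boundary value problems above). One step of multiplicative splitting leaves an O(ε2) discrepancy; splitting the resulting remainder and absorbing it into the factors reduces the discrepancy to O(ε3).

```python
import numpy as np

# Scalar toy iteration: write g = 1 + eps*n and split n = n_minus + n_plus
# into pieces analytic in the two half-planes.  The first-step factors leave
# the cross term eps^2 * n_minus * n_plus; splitting that remainder
# (here known in closed form) and multiplying it into the factors improves
# the discrepancy from O(eps^2) to O(eps^3).

x = np.linspace(-50, 50, 10001)
n_plus = 1.0 / (x + 1j)          # analytic for Im z > 0 (pole at -i)
n_minus = 1.0 / (x - 1j)         # analytic for Im z < 0 (pole at +i)
n = n_plus + n_minus

# Closed-form split of the first-step remainder term -n_minus*n_plus:
# -1/(x^2+1) = r_plus + r_minus with
r_plus = -0.5j / (x + 1j)
r_minus = 0.5j / (x - 1j)

def remainders(eps):
    g = 1 + eps * n
    step1 = (1 + eps * n_minus) * (1 + eps * n_plus)
    step2 = (1 + eps * n_minus) * (1 + eps**2 * r_minus) \
          * (1 + eps**2 * r_plus) * (1 + eps * n_plus)
    return np.max(np.abs(g - step1)), np.max(np.abs(g - step2))

a1, a2 = remainders(1e-2)
b1, b2 = remainders(1e-3)
print(a1 / b1, a2 / b2)   # first ratio ~ 1e2 (O(eps^2)), second ~ 1e3 (O(eps^3))
```

Shrinking ε tenfold shrinks the first-step discrepancy a hundredfold and the second-step discrepancy a thousandfold, mirroring the order count in the text.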

Remark 3.4

We summarize the situation as follows. Let the matrix function Gε(x) be a regular perturbation of G0(x); in particular (as for N=1), Gε(x) lies in the ε-neighbourhood of G0(x). If the matrix function Gε(x) satisfies the above solvability conditions at each step r, 1≤r≤N, then for all sufficiently small ε there exists a matrix which possesses a factorization with the same set of unstable partial indices as G0(x) and lies in the εN+1-neighbourhood of Gε(x). This means that for each point of the linear manifold of matrices Gε(x) satisfying the solvability conditions, there exists an (N+1)-guided perturbation. Thus, with a larger number of steps, our approximate factorization comes closer to an index-preserving approximation of the given matrix Gε(x).

Remark 3.5

If at least one solvability condition fails at the Nth step of the approximation, then we can only construct an approximate factorization up to order N−1. If a solvability condition fails at the first step of the approximation, then we have no tool to construct a regular k-guided perturbation for any k>1.

Example of the perturbed matrix satisfying the first-order solvability conditions

We apply the above-described asymptotic procedure to a matrix function Gε(x) of the given form and show that this matrix possesses an asymptotic factorization with the same partial indices as G0(x). Here the matrix function G0(x) is given by (1.6). The matrix function Gε(x) can be represented in a form in which Λ(x) is the same as in remark 1.4 and the matrix function N(x) is given explicitly (in two equivalent forms). Thus, Gε(x) can be thought of as a small perturbation of the matrix function G0(x)=Λ(x). The matrix function N(x) admits a representation, uniform in x and in ε on any finite interval, whose remainder is a bounded matrix.

Remark 3.6

The introduced small parameter ϕ(x,ε) has the following properties (cf. [9], lemma 3.6): it vanishes as ε→0 and decays at infinity. In our case, we can prove that θ1(ε)=O(ε); we can thus later use an artificial small parameter ε instead of θ1(ε).

Remark 3.7

In fact, the first-order decay of ϕ(x,ε) at infinity (3.29) is crucial to the behaviour of θ1(ε) with respect to ε; a slower decay would lead, for example, to only θ1(ε)=O(ε1/2).

The first step of the asymptotic factorization procedure. We look for a pair of matrix functions which forms an approximate solution, up to order ε, of the functional equation (3.30); recall the boundary condition (cf. (3.3)). The approximate solution to (3.30) can be found from the matrix boundary value problem (3.5), which in this case takes the form (3.31), where M0,ε(x)=N(x). Bounded solutions to (3.31) have to satisfy a relation featuring a constant matrix. For analyticity of the factors in the corresponding half-planes, it is necessary and sufficient that a set of conditions be fulfilled: two of them fix certain constants explicitly, one constant is chosen arbitrarily, and the remaining solvability condition must hold. In the case of the matrix function N(x) considered here, the solvability condition is satisfied, and thus the corresponding constant can be chosen accordingly; finally, the remaining constant can be chosen arbitrarily. Thus, the first-order approximation for the factorization of Gε(x) is given by the resulting formula, with the matrices presented in (3.35) and (3.36) and the above-described choice of constants.

In order to estimate the quality of the approximation, it is customary to define the remainder matrix ΔK1,ε(x), measuring the discrepancy of the first-order factorization. Direct calculations show that ΔK1,ε(x)=O(ε2) as ε→+0, and thus the construction yields a 2-guided perturbation of the matrix Gε(x).
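When checking an estimate such as ΔK1,ε = O(ε2) numerically, as done in §4, one can fit the observed order from computed remainder norms. The helper below is our own hypothetical utility (not from the paper), shown with stand-in data that has the expected scaling.

```python
import numpy as np

# Fit the observed convergence order p in "remainder = O(eps^p)" from a few
# computed remainder norms: the slope of log(norm) against log(eps).

def observed_order(eps_values, remainder_norms):
    slope, _ = np.polyfit(np.log(eps_values), np.log(remainder_norms), 1)
    return slope

eps = np.array([1e-1, 1e-2, 1e-3, 1e-4])
norms = 0.7 * eps**2            # stand-in data with the expected O(eps^2) scaling
print(observed_order(eps, norms))   # ~ 2.0
```

Applied to actual remainder norms from a computed factorization, a fitted slope close to 2 confirms the first-order step; a smaller slope signals that a solvability condition is degrading the accuracy.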

Remark 3.8

The matrix ΔK1,ε(x) has an interesting behaviour, as a consequence of a special property of the matrix N(x): n11=−n22. Namely, it tends to a diagonal matrix as x→±∞. Two characteristic values of the constant can accordingly be chosen, and these will be used in our numerical description of the behaviour of the remainder ΔK1,ε(x) of the first-order approximate factorization of the matrix (3.25). In this example, we have restricted our calculation to the first step of the approximation only. In principle, the procedure for the next steps has already been described; however, there is no guarantee that the next step will be successful and that a higher-order guided perturbation can be derived.

Example of a matrix which does not satisfy the solvability conditions

Simple changes to the matrix Gε(x) can lead to a violation of the solvability conditions for the corresponding boundary value problem. Let us consider a modified matrix function. As before, the limiting matrix function possesses a factorization with the same unstable partial indices. We apply the above-described asymptotic procedure to this matrix and show that it cannot possess a bounded first-order asymptotic factorization with those partial indices. The first step of the asymptotic factorization leads to the problem (3.45), in which the corresponding matrix function admits an analogous representation. Bounded solutions to (3.45) have to satisfy a relation featuring a constant matrix. In this case, the solvability condition is satisfied only for ε=0; for all ε≠0, there is no approximate solution (up to ε2) of the functional equation (similar to (3.30)).

Remark 3.9

Note that, by construction, the matrix considered above is a regular perturbation of the limiting matrix function. Hence, it presents an example of a regular perturbation for which no regular k-guided perturbation (k>1) exists, while the construction of a singular perturbation remains an open problem.

Numerical examples and discussion

In this section, we analyse the quality of the approximation provided by the 2-guided perturbation constructed in §3d. First, we consider the case c21=0, in which the limiting value of the remainder ΔK1,ε does not vanish at infinity; specifically, we estimate the elements on the main diagonal. In figure 2a,b, these components are presented in normalized form. We can see that the estimate ΔK1,ε(x)=O(ε2) holds (see the discussion on the small parameter following formula (3.41)). Furthermore, the matrix converges to its limiting values more quickly for larger values of the small parameter, while the oscillations decay more slowly for smaller values.
Figure 2.

Diagonal elements, Δkjj(x,ε), j=1,2, of the matrix ΔK1,ε(x) for various values of the parameter ε and the constant c21=0. The elements are normalized to ε2. (Online version in colour.)

In figure 3a,b, the remaining two components are depicted in the same normalized forms. While the estimate ΔK1,ε(x)=O(ε2) as ε→0 is preserved, these components now decay as O(x−1) as x→±∞. The same trend is clearly visible here: the smaller ε is, the more slowly the component converges to its limiting value. In other words, the small parameter ε determines the magnitude of the remainder matrix, but the oscillations are larger in this case and more pronounced along the real axis.
Figure 3.

The other two elements, Δkij(x,ε), i+j=3, of the matrix ΔK1,ε(x) for ε=1, 0.1, 0.01 and the constant c21=0. It is clear that both entries vanish, i.e. Δkij→0 as x→±∞. (Online version in colour.)

Interestingly, the components on the main diagonal are comparable in value, but not equal, while the remaining two differ in value by almost a factor of two; moreover, the latter are also two times smaller in magnitude than the diagonal elements. The situation changes in the second case, where c21≠0. The respective graphs are presented in figures 4 and 5. Now all the components decay at infinity as O(x−1), as x→±∞, and simultaneously obey the same estimate O(ε2) as ε→0, as predicted. The magnitudes of the components are, however, more balanced in the sup-norm. This demonstrates that we can choose an optimal approximation, preserving some specified requirement, by varying the value of the arbitrary constant c21. Comparing the two cases, it is clear that the second is preferable to the first for the reasons discussed earlier.
Figure 4.

Diagonal elements, Δkjj(x,ε), j=1,2, of the matrix ΔK1,ε(x) for various ε and the second choice of the constant c21. The elements are normalized to ε2. The horizontal lines show the limiting values of the normalized components at infinity. (Online version in colour.)

Figure 5.

The other two elements, Δkij(x,ε), i+j=3, of the matrix ΔK1,ε(x) for ε=1, 0.1, 0.01 and the second choice of the constant c21. It is clear that both entries vanish, i.e. Δkij→0 as x→±∞. (Online version in colour.)

Any specific factorization will, of course, require its own analysis. However, if the estimate ΔK1,ε(x)=O(ε2) holds, then the remainder can be estimated accordingly. We note that this property may change at the next step, if we wish to, and can, continue the approximation procedure (the conditions remain valid for the next step); here, the limiting values from the first step will also play their role. A similar formula can be derived based on the two consecutive approximations, in which case two sets of constants will be involved: cjl (first step) and djl (second step), j,l=1,2. Judging by the magnitude of the remainder in both of the presented examples, we conclude that the 2-guided perturbation may be sufficient for practical purposes. Thus, if even one approximation step is possible in practice, meaning that conditions (3.12)–(3.16) are satisfied, then we can use this approximation directly in solving the Wiener–Hopf equation. To close, we must highlight that, if conditions (3.12)–(3.16) are not satisfied for a matrix Gε with unstable partial indices, the question of how to compute a useful approximate factorization for such a matrix function remains open.
