
On Shepard-Gupta-type operators.

Umberto Amato, Biancamaria Della Vecchia.

Abstract

A Gupta-type variant of Shepard operators is introduced and convergence results and pointwise and uniform direct and converse approximation results are given. An application to image compression improving a previous algorithm is also discussed.

Keywords:  Direct and converse results; Gupta-type variant; Image compression; Shepard operators

Year:  2018        PMID: 30839663      PMCID: PMC6132384          DOI: 10.1186/s13660-018-1823-7

Source DB:  PubMed          Journal:  J Inequal Appl        ISSN: 1025-5834            Impact factor:   2.491


Introduction

In the last decades Shepard operators have been the object of several papers, thanks to properties of interest in classical approximation theory and in scattered data interpolation problems. In particular, Shepard operators are linear, positive, rational operators of interpolatory type, preserving constants and achieving approximation results not attainable by polynomials. Pointwise and uniform approximation error estimates, converse results, bridge theorems, saturation statements and simultaneous approximation results can be found, for example, in [1-7]. Applications of Shepard operators to scattered data interpolation problems, image compression and CAGD can be found, for example, in [8-17]. On the other hand, Gupta introduced a variant of the classical Bernstein operator, and similar modifications of well-known positive operators of Bernstein type were studied by him, his collaborators and other researchers (see e.g. [18-25]). It was an open problem to consider Gupta-type variants of Shepard operators. The aim of the present paper is to give a positive answer to this question by introducing a Gupta-type generalization of the Shepard operator depending on a real positive parameter. Convergence results and uniform and pointwise approximation error estimates for this operator are given in Theorems 2.1–2.2 in Sect. 2.1. As a particular case, we obtain the first pointwise approximation error estimate for the original Shepard operator on an equispaced mesh. Theorem 2.3 settles converse results and saturation statements for our operator. The corresponding proofs are based on direct estimates for the Shepard–Gupta-type operators. In Sect. 2.2 an application to image compression is examined, improving an analogous algorithm in [9], and numerical experiments confirming that this technique outperforms other algorithms are also shown.

Results

For consider the nodes matrix . Then, for any function we denote by the Shepard operator defined by with and (cf. [26]). From (1) we deduce that is a positive, linear operator, preserving constants, interpolating f at , , and is a rational function of degree for s even. Here we assume because of theoretical complications for (see, e.g., [3, 4]). The approximation behavior of the operator is well known: direct and converse results, saturation statements and simultaneous approximation estimates, not attainable by polynomials and corresponding to several node mesh distributions, can be found for example in [1, 2, 4–7, 13, 27]. Applications to scattered data interpolation problems, CAGD and image compression were also examined (see e.g. [8-16]). On the other hand, Gupta introduced variants of Bernstein-type operators, studying their approximation properties (see e.g. [17-25]). In the following subsection we extend such an approach to and study Shepard–Gupta-type operators.
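The inline formulas of (1) were lost in extraction. For orientation only, here is a minimal numerical sketch under the assumption of the classical Shepard form S_{n,s}f(x) = sum_k f(x_k)|x-x_k|^{-s} / sum_k |x-x_k|^{-s} on an equispaced mesh; the function `shepard`, the mesh and the test values are illustrative stand-ins, not the paper's exact (1).

```python
import numpy as np

def shepard(x, knots, values, s=4):
    """Classical Shepard operator (assumed form): inverse-distance-power
    weighted mean of the nodal values; interpolatory at the knots."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.empty_like(x)
    for i, xi in enumerate(x):
        d = np.abs(xi - knots)
        hit = int(np.argmin(d))
        if d[hit] < 1e-12:                # exact knot: interpolation
            out[i] = values[hit]
        else:
            w = d ** (-float(s))
            out[i] = np.dot(w, values) / w.sum()
    return out

knots = np.linspace(0.0, 1.0, 11)         # equispaced mesh on [0, 1]
vals = knots ** 2
print(shepard([0.5], knots, vals))        # hits the knot 0.5 -> [0.25]
print(shepard([0.4321], knots, np.ones(11)))  # constants preserved -> [1.]
```

The sketch exhibits the two properties quoted above: interpolation at the knots and preservation of constants.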

Approximation by Shepard–Gupta-type operators

For any and let with . From the definition it follows immediately that , i.e. for we recover the original Shepard operator (1). Moreover, is a positive, linear operator of interpolatory type and is stable in the Fejér sense, i.e., . We remark that the Gupta variants of Bernstein-type operators depend on a positive parameter not appearing in the kernel basis; here the parameter α appears both in the kernel basis and in the exponents in the inner summations at the r.h.s. of (2). If we denote by the closest knot to x, with , then (and also if ) strongly influences in a small neighborhood of x (the “strong local control property”), as a consequence of the large value of in that range compared with the other terms. Consequently, for n and s fixed and α increasing, tends continuously to the step function with . We can argue analogously when is the closest knot to x, with . Thanks to this asymptotic behavior we can use the operator to successfully compress images expressed by piecewise constants (see Sect. 2.2). Now we show that can be used to approximate functions from . Indeed, let be the usual supremum norm on of and the usual modulus of continuity of f. Moreover, C, denote positive constants possibly having different values even in the same formula; we say that iff and .
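The exact kernel of the Gupta-type variant did not survive extraction, but the "strong local control" mechanism described above, with the nearest knot dominating as the exponent grows, can be illustrated with plain inverse-distance weights; the exponent p below is a purely illustrative stand-in for the role played by α in (2).

```python
import numpy as np

def idw_weights(x, knots, p):
    """Normalized inverse-distance weights |x - x_k|^(-p) / sum(...)."""
    d = np.abs(x - knots)
    w = d ** (-float(p))
    return w / w.sum()

knots = np.linspace(0.0, 1.0, 11)
x = 0.52                                   # closest knot is 0.5
nearest = int(np.argmin(np.abs(x - knots)))
for p in (2, 8, 32):
    w = idw_weights(x, knots, p)
    # weight of the closest knot tends to 1 as p grows
    print(p, round(float(w[nearest]), 6))
```

This is exactly the step-function limit behavior exploited for piecewise-constant images in Sect. 2.2: for large exponents the operator effectively copies the value at the nearest knot.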

Theorem 2.1

Let . Then, for any and ,

Remark 2.1

Estimate (4) yields the uniform convergence, as , of to .

Proof

Since the operator interpolates at , , let , . Assume to be the closest knot to x, with (the case when is the closest knot to x can be treated analogously). Therefore We have Since for and , working as usual (see e.g. [2]), it follows that Moreover, Again by (5) Hence by (6) Finally, collecting the above estimates and working as usual (see e.g. [2])  □ Moreover, a pointwise approximation error estimate can be deduced.
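The uniform convergence asserted in Remark 2.1 can be checked numerically, again only under the assumption that the classical Shepard form underlies (1); the mesh sizes and the test function below are illustrative.

```python
import numpy as np

def shepard(x, knots, values, s=4):
    """Vectorized classical Shepard operator (assumed form)."""
    d = np.abs(np.asarray(x, dtype=float)[:, None] - knots[None, :])
    d = np.maximum(d, 1e-12)               # guard division at exact knots
    w = d ** (-float(s))
    return (w * values).sum(axis=1) / w.sum(axis=1)

f = lambda t: np.sin(2.0 * np.pi * t)
xs = np.linspace(0.0, 1.0, 501)
errors = []
for n in (8, 32, 128):
    knots = np.linspace(0.0, 1.0, n + 1)
    errors.append(float(np.max(np.abs(shepard(xs, knots, f(knots)) - f(xs)))))
print(errors)                              # sup-norm error shrinks as n grows
```

The decreasing sup-norm errors mirror the uniform convergence of Theorem 2.1 as the mesh is refined.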

Theorem 2.2

Let . Then, for any , and for any , with the closest knot to x.

Remark 2.2

From Theorem 2.2, for , we obtain This is the first pointwise estimate for the Shepard operator on an equispaced mesh, and it reflects the interpolatory character of at the knots , and the constants preservation property. A similar estimate was obtained for a generalization of the Shepard operator in [9]. The result in (7) is interesting: the Shepard operator is strongly influenced by the mesh distribution, and pointwise error estimates for Shepard operators on nonuniformly spaced meshes present a function depending on the mesh thickness at the r.h.s. (see e.g. [2, 4]); on the contrary, in the equispaced case pointwise estimates as in [2, 4] are unnatural. Following the proof of Theorem 2.1 we have Obviously Moreover, since , , Similarly we proceed for . Collecting all estimates, the assertion follows. □ Finally, we present the converse results for our operators.

Theorem 2.3

If where the sign ∼ does not depend on f. Moreover

Remark 2.3

First we observe that estimate (8) is a counterpart of (4) and is, in some sense, the analogue of the relation by Totik [28], with the classical Bernstein operator, and the second order modulus of smoothness of Ditzian and Totik, where . On the other hand, due to the interpolating behavior of , we cannot have estimate (8) with “lim” (instead of “lim sup”), because of a result stated in [3, p. 77] (cf. also [7, Theorem 2.1, p. 310]). From (8) we deduce that the direct estimate (4) cannot be improved. Combining estimate (8) with the equivalence relation (see, e.g., [29]) , with the K-functional , allows one to characterize such K-functionals. Finally, the saturation problem for is settled by Eqs. (9)–(10). We start by proving (8). From (2) we can write the operator as Now if we verify that with again the closest knot to x and with certain positive fixed reals , , ϵ, then by [30, Theorem 2.1] it follows that First we prove (11)–(14). Equation (11) follows immediately from the definition. Following the proofs of Theorems 2.1–2.2 we obtain that is, (12). Now we verify (13). Again working as in the proofs of Theorems 2.1–2.2, and by and , (13) follows. Now we prove (14). Indeed i.e. we deduce (14). From (15) and (4) we have (cf. [7, p. 315]) Now we recall that ([7, Lemma 3.1, p. 315]) Therefore and from (4), (16) and (17) we deduce (8). The proofs of (9) and (10) are omitted, since they are analogous to the proof of Theorem 2.2, p. 316, in [7]. □

Application to image compression

In this section we apply the operator to a problem of image compression. From a mathematical point of view an image can be considered as a matrix of size pixels, where the number of pixels affects the resolution of the image and the size of the file that stores it (the higher the number of pixels, the better the resolution and the larger the file). To obtain a degraded (compressed) image, we split the original image into consecutive blocks of size , keeping only the upper-left pixel of each block. We obtain a new image with a lower number of pixels ( pixels), and therefore a worse resolution and a smaller file size. The resulting compression ratio is . We aim at decompressing the reduced image to rebuild the full resolution one. Since the sensors of cameras are uniformly distributed on a bidimensional grid, we need a bidimensional interpolation process based on an equispaced mesh; in addition, for physical reasons related to the range of the color intensity of the red, green and blue components (), it is preferable to rely on a positive operator. Therefore we consider the bidimensional operator defined by with , , , , . We observe that for computer calculations the nonbarycentric-type representations at the right hand side of (18) are suitable. We can write Eq. (18) as This allows one to develop a two-step procedure, each step involving the same unidimensional operator of the type (2), applied first to the rows of the matrix of pixels and then to the columns of the matrix resulting from the first step (or vice versa). We will compare the results obtained by the operator with bi-linear, bi-cubic and bi-spline methods.
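The two-step (rows, then columns) decompression scheme can be sketched as follows, with the classical Shepard kernel standing in for the operator in (2), since the exact formula was lost in extraction; the block size r and the toy image are illustrative.

```python
import numpy as np

def shepard_1d(x_eval, knots, values, s=4):
    """Classical 1-D Shepard operator (assumed form), vectorized over the
    evaluation points; `values` has one row per knot."""
    d = np.abs(x_eval[:, None] - knots[None, :])
    d = np.where(d < 1e-12, 1e-12, d)      # guard exact knot hits
    w = d ** (-float(s))
    w /= w.sum(axis=1, keepdims=True)
    return w @ values                      # shape (len(x_eval), values.shape[1])

def decompress(small, r, s=4):
    """Two-step tensor-product decompression: rows first, then columns."""
    m, n = small.shape
    knots_r = np.arange(m, dtype=float) * r    # positions of the kept pixels
    knots_c = np.arange(n, dtype=float) * r
    rows = np.arange(m * r, dtype=float)
    cols = np.arange(n * r, dtype=float)
    tmp = shepard_1d(rows, knots_r, small, s)      # interpolate along rows
    return shepard_1d(cols, knots_c, tmp.T, s).T   # then along columns

img = np.add.outer(np.arange(8.0), np.arange(8.0))  # toy 8x8 "image"
small = img[::4, ::4]            # keep the upper-left pixel of each 4x4 block
full = decompress(small, 4)
print(full.shape)                # -> (8, 8)
```

The separable structure means only the cheap 1-D operator is ever evaluated, once per row and once per column, exactly as in the two-step procedure described above.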
For the comparison we used the Signal-to-Noise Ratio, SNR, defined as with B denoting the number of bits necessary to represent the intensity of the pixels and where is the original image at the pixel i, j, , , and is the resulting image after decompression by the original bidimensional Shepard operator, the operator, and bi-linear, bi-cubic and bi-spline functions. The SNR compares the level of the compression error to the level of the signal: the higher the SNR, the better the approximation of the original image. By construction of the operators (cf. (3)), images that can be represented by piecewise constant functions are better approximated; therefore a synthetic image having such a feature will be considered. We notice that tuning the parameter α allows one to reduce the approximation error. According to the comment above, we consider as a test image a chessboard (Fig. 1) with 2048 pixels in both coordinates () having 20 alternating boxes in each row or column. The usual 8-bit gray scale representation is used for the color, so that . We generated reduced resolution images at compression ratios ().
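The SNR formula itself did not survive extraction; a common PSNR-style definition involving the bit depth B is sketched below as an assumption, not as the paper's exact formula.

```python
import numpy as np

def snr_db(original, restored, B=8):
    """SNR in dB, PSNR-style (assumed form; the paper's exact formula was
    not recoverable): 10*log10((2**B - 1)**2 * M * N / sum squared error)."""
    diff = np.asarray(original, float) - np.asarray(restored, float)
    err = float(np.sum(diff ** 2))
    if err == 0.0:
        return float("inf")                # perfect reconstruction
    peak = (2 ** B - 1) ** 2 * np.asarray(original).size
    return 10.0 * np.log10(peak / err)

a = np.zeros((4, 4))
b = np.ones((4, 4))
print(snr_db(a, a))                        # -> inf
print(round(snr_db(a, b), 2))              # -> 48.13
```

Under this convention a higher value indeed means a smaller compression error relative to the signal range.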
Figure 1

Image of chessboard chosen as a test example

The value of the SNR for the bi-linear, bi-cubic, bi-spline, Shepard (), , operators, with and compression ratio , is shown in Table 1.
Table 1

SNR of the decompressed images for the Chessboard test example at compression ratio and for the bi-linear, bi-cubic, bi-spline, Shepard (), , operators, . The higher the SNR, the more accurate the method

Method                     ρ = 4    ρ = 9    ρ = 16   ρ = 25   ρ = 36
Original Shepard (s = 4)   79.6     77.4     76.0     75.0     74.0
G_{M,N}^{1.1,4}            80.3     78.1     76.7     75.6     74.6
G_{M,N}^{1.3,4}            81.6     79.2     77.8     76.7     75.7
G_{M,N}^{2,4}              85.3     82.4     80.7     79.5     78.3
G_{M,N}^{3,4}              89.8     85.8     83.7     82.3     81.0
G_{M,N}^{5,4}              98.5     91.7     88.5     86.5     84.7
G_{M,N}^{10,4}             120.4    106.4    99.7     95.5     92.2
Original Shepard (s = 6)   82.0     79.6     78.1     77.0     76.0
G_{M,N}^{1.1,6}            82.9     80.3     78.8     77.7     76.6
G_{M,N}^{1.3,6}            84.4     81.6     80.0     78.8     77.7
G_{M,N}^{2,6}              89.1     85.3     83.3     82.0     80.6
G_{M,N}^{3,6}              95.6     89.8     87.0     85.2     83.6
G_{M,N}^{5,6}              108.8    98.5     93.7     90.7     88.3
Bi-linear                  73.3     72.0     70.3     69.4     68.5
Bi-cubic                   73.8     72.0     70.8     69.9     69.0
Bi-spline                  73.2     71.4     70.2     69.3     68.4
We can see that the Shepard–Gupta-type operator (18) gives the best results at any compression ratio and that the accuracy improves as α increases. Figure 2 shows the decompressed images for the bi-linear, bi-cubic, bi-spline, Shepard (), , operators, , obtained for compression ratio 25. We notice the gray color of the truly white boxes in the chessboard for the bi-spline and bi-cubic operators (middle and right upper plots). It is due to overshoots (pixels having intensities greater than 1) and undershoots (pixels with intensity less than 0). As is well known, these artifacts are particularly deleterious for images. The bi-linear and Shepard–Gupta-type operators, being stable in the Fejér sense, do not suffer from this artifact.
Figure 2

From top to bottom and left to right: the chessboard image decompressed by bi-linear, bi-cubic, bi-spline, Shepard (), , , Shepard (), and operators starting from the image compressed with ratio 25

To better appreciate this artifact and the differences among the above methodologies, Fig. 3 shows the (absolute) error of the decompressed images for the bi-cubic and bi-spline operators only, at different compression ratios (, 25, 49), since the other operators are not affected by the overshoot-undershoot artifact. Overshoots and undershoots are shown in red and blue, respectively.
Figure 3

Error of the decompressed images for the chessboard test example for the considered methods (particular). From top to bottom and left to right bi-cubic and bi-spline for , bi-cubic and bi-spline for , bi-cubic and bi-spline for . Blue and red colors indicate undershoots and overshoots, respectively

A full assessment of all considered methods is given graphically in Fig. 4, a particularization of Fig. 3. The figure shows the smaller error (higher SNR) achieved by the Shepard–Gupta-type method.
Figure 4

Error of the decompressed images for the Chessboard test example for the considered methods (particular). From top to bottom and left to right: bi-linear, bi-cubic, bi-spline, Shepard (), , , Shepard (), , operators for compression ratio . Blue and red colors indicate undershoots and overshoots, respectively


Conclusions

The paper gives a positive answer to the problem of extending the Bézier variant technique, introduced and studied by Gupta for well-known linear positive operators of Bernstein type, to the Shepard interpolatory operator, widely used in rational approximation and scattered data interpolation problems. The authors construct and study the Shepard–Gupta-type operator and settle convergence results, uniform and pointwise approximation error estimates, converse theorems and saturation statements, improving in some sense analogous results for the original Shepard-type operator. The peculiar asymptotic behavior of the Shepard–Gupta-type operator allows one to successfully compress images represented by piecewise constants, improving previous algorithms.