Kristen Nock, David Bonanno, Paul Elmore, Leslie Smith, Vicki Ferrini, Fred Petry.
Abstract
We present research using single-image super-resolution (SISR) algorithms to enhance knowledge of the seafloor using the 1-minute GEBCO 2014 grid when 100 m grids from high-resolution sonar systems are available for training. We performed numerical experiments of x15 upscaling along three mid-ocean ridge areas in the Eastern Pacific Ocean. We show that four SISR algorithms can enhance this low-resolution knowledge of bathymetry relative to bicubic or Splines-In-Tension interpolation through upscaling under two conditions: 1) rough topography is present in both the training and testing areas, and 2) the range of depths and features in the training area contains the range of depths in the enhancement area. We judged an SISR enhancement successful versus bicubic interpolation when Student's hypothesis testing showed a significant improvement of the root-mean-squared error (RMSE) between the upscaled bathymetry and the 100 m gridded ground-truth bathymetry at p < 0.05. In addition, we found evidence that random-forest-based SISR methods may provide more robust enhancements than non-forest-based SISR algorithms.
Keywords: Bathymetry; Computer science; Earth sciences; Single-image super-resolution; Upscaling
Year: 2019 PMID: 31687485 PMCID: PMC6820091 DOI: 10.1016/j.heliyon.2019.e02570
Source DB: PubMed Journal: Heliyon ISSN: 2405-8440
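The evaluation criterion described in the abstract (x15 upscaling, then RMSE against a 100 m ground-truth grid) can be sketched as follows. This is a minimal sketch under assumptions: the synthetic depth grid is a placeholder, and scipy's spline-based `zoom` stands in for the bicubic baseline; it is not the paper's exact implementation.

```python
import numpy as np
from scipy.ndimage import zoom

def rmse(a, b):
    """Root-mean-squared error between two depth grids."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

# synthetic 20x20 low-resolution depth grid (metres, negative down)
rng = np.random.default_rng(0)
lr = -2500.0 + 50.0 * rng.standard_normal((20, 20))

# x15 upscaling with a cubic-spline interpolant (order=3), standing in
# for the bicubic baseline: 20x20 -> 300x300
hr_est = zoom(lr, 15, order=3)
assert hr_est.shape == (300, 300)

# with a real 100 m ground-truth grid `hr_true`, the score would be
# rmse(hr_est, hr_true); here we only check the degenerate case
assert rmse(lr, lr) == 0.0
```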
Fig. 1 Areas used in this paper from the (top row) Juan de Fuca Ridge, (middle row) North East Pacific Rise, and (bottom row) South East Pacific Rise. Respectively, these areas are labeled JUAN, NEPR, and SEPR.
Fig. 2 In each row, the inset to the left is high-resolution gridded sonar data obtained from the Global Multi-Resolution Topography (GMRT) synthesis, and the low-resolution seafloor topography from the General Bathymetric Chart of the Oceans (GEBCO 2014) grid is to the right.
Fig. 3 Two examples of high-resolution (HR) and low-resolution (LR) patches from the SEPR data sets. There is a scale difference of x6 between the HR and LR patch data.
Fig. 4 Hypsometry and roughness distributions for the JUAN, NEPR and SEPR areas. Fig. 4a and 4b show hypsometry for the LR and HR grids, respectively. Fig. 4c and 4d show roughness distributions for the LR and HR grids.
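As a rough sketch of the two diagnostics shown in Fig. 4, the snippet below computes a hypsometric (depth-frequency) histogram and a roughness map for a gridded depth array. The 3x3 local standard deviation used for roughness is an assumed definition, since this record does not state the paper's roughness measure, and the grid is synthetic.

```python
import numpy as np
from scipy.ndimage import generic_filter

def hypsometry(depths, bins=50):
    """Hypsometric (depth-frequency) histogram of a depth grid."""
    return np.histogram(depths, bins=bins)

def roughness(depths):
    """Assumed roughness proxy: standard deviation of depth within a
    3x3 neighbourhood around each cell (other definitions exist)."""
    return generic_filter(depths, np.std, size=3)

# synthetic 64x64 depth grid (metres, negative down)
rng = np.random.default_rng(1)
grid = -3000.0 + 100.0 * rng.standard_normal((64, 64))

counts, edges = hypsometry(grid)
rough = roughness(grid)
assert counts.sum() == grid.size
assert rough.shape == grid.shape
```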
Descriptions of the interpolation and SISR algorithms used in the experiments.
| # | Method | Description |
| --- | --- | --- |
| 1 | Splines-In-Tension | Widely used interpolation algorithm; implemented in the Generic Mapping Tools (GMT) software package. |
| 2 | Regularized spline with tension | Uses the Matlab regspline2d function; also available in the open-source GRASS GIS package. |
| 3 | Cubic spline | Uses the Matlab spline2d function; interpolates data with a cubic spline in two dimensions. |
| 4 | Bicubic | Linear space-invariant interpolation scheme; an extension of cubic interpolation for interpolating data points on a 2D grid. |
| 5 | NE + LS | Neighbor embedding solved with unconstrained least squares: a feature vector corresponding to a patch is reconstructed from its neighbors in feature space. |
| 6 | NE + NNLS | Very similar to NE + LS, but uses Lagrange multipliers to solve the problem with the standard SUM1-LS method. |
| 7 | Zeyde | Sparse-representation modeling; assumes a local Sparse-Land model, i.e. each patch from the images considered can be well represented by a linear combination of a few atoms from a dictionary. |
| 8 | ANR | Anchored Neighborhood Regression: uses ridge regression to learn exemplar neighborhoods, and uses these neighborhoods to precompute projections that map LR patches to the HR domain; learns sparse dictionaries and regressors anchored to dictionary atoms. |
| 9 | GR | Global Regression: unlike ANR, where the neighborhood size is set, the neighborhood coincides with the whole dictionary in use; uses ridge regression / collaborative representation. |
| 10 | A+ | Based on ANR, but instead of learning regressors on the dictionary it uses a full-training-material approach similar to methods like Simple Functions; A+ still trains the dictionary, but keeps the training samples (neighborhood) after the dictionary is trained. |
| 11 | JOR | Exemplar-based: the input LR image is decomposed into fixed-size overlapping patches; uses a jointly optimized collection of a fixed number of local regressors, and the "most appropriate regressor" is selected to produce the HR estimate. |
| 12 | SRF | Does not rely on neighborhood embedding or sparse coding, but on locally linear regression; replaces the single-dictionary approach of other methods with many smaller ones. |
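A minimal sketch of the neighbor-embedding idea behind NE + LS (method 5): express an LR patch as a weighted combination of its k nearest LR training patches via unconstrained least squares, then apply the same weights to the paired HR patches. The dictionary sizes, k, and the 3x3 → 18x18 patch scale (matching the x6 scale difference of Fig. 3) are illustrative assumptions.

```python
import numpy as np

def ne_ls_upscale_patch(lr_patch, lr_dict, hr_dict, k=5):
    """Neighbour-embedding SISR with unconstrained least squares."""
    x = lr_patch.ravel()
    # k nearest neighbours in LR feature space (Euclidean distance)
    d = np.linalg.norm(lr_dict - x, axis=1)
    idx = np.argsort(d)[:k]
    neigh = lr_dict[idx]                       # (k, lr_dim)
    # unconstrained LS weights w minimising ||x - neigh.T @ w||
    w, *_ = np.linalg.lstsq(neigh.T, x, rcond=None)
    # apply the same weights to the paired HR patches
    return w @ hr_dict[idx]

rng = np.random.default_rng(2)
lr_dict = rng.standard_normal((200, 9))    # 3x3 LR training patches
hr_dict = rng.standard_normal((200, 324))  # paired 18x18 HR patches (x6)
patch = rng.standard_normal((3, 3))
hr_est = ne_ls_upscale_patch(patch, lr_dict, hr_dict)
assert hr_est.shape == (324,)
```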
Internal test results: (A) root-mean-squared error (RMSE) values; (B) hypothesis-testing p-values versus bicubic interpolation. Bold values indicate that the SISR algorithm shows significantly lower RMSE than bicubic interpolation (null hypothesis rejected); italic values indicate significantly larger RMSE. Bold values are the desired result.
| Region | Bicubic | NE + LS | NE + NNLS | Zeyde | ANR | GR | A+ | JOR | SRF |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| (A) |  |  |  |  |  |  |  |  |  |
| Juan | 34.49 | 34.18 | 34.82 | 33.79 | 33.78 | 33.80 | 34.02 |  |  |
| Nepr | 46.65 | 46.26 | 45.57 | 45.54 |  |  |  |  |  |
| Sepr | 46.93 | 45.50 |  |  |  |  |  |  |  |
| All | 40.10 | 39.64 | 39.45 | 40.05 |  |  |  |  |  |
| (B) |  |  |  |  |  |  |  |  |  |
| Juan | 1 | 0.67 | 0.64 | 0.33 | 0.32 | 0.34 | 0.52 |  |  |
| Nepr | 1 | 0.62 | 0.13 | 0.16 |  |  |  |  |  |
| Sepr | 1 | 0.16 |  |  |  |  |  |  |  |
| All | 1 | 0.47 | 0.17 | 0.92 |  |  |  |  |  |
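The significance criterion behind the bold/italic markings can be sketched as a paired Student's t-test on per-sample RMSE values for a SISR method versus bicubic interpolation, one-sided at p < 0.05. The RMSE samples below are synthetic placeholders, not the paper's data, and the one-sided construction from the two-sided p-value is an assumed detail.

```python
import numpy as np
from scipy.stats import ttest_rel

# per-sample RMSE for bicubic and a SISR method (synthetic stand-ins)
rng = np.random.default_rng(3)
rmse_bicubic = 40.0 + 3.0 * rng.standard_normal(100)
rmse_sisr = rmse_bicubic - 0.5 + 1.0 * rng.standard_normal(100)

# paired two-sided t-test, then fold to the one-sided question:
# does the SISR method give significantly LOWER RMSE?
t, p_two_sided = ttest_rel(rmse_sisr, rmse_bicubic)
p = p_two_sided / 2 if t < 0 else 1 - p_two_sided / 2
significant = p < 0.05   # would be marked bold in the tables above
```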
Fig. 5 Internal test results: average RMSE in the JUAN, NEPR and SEPR regions for the internal five-fold tests; lower RMSE means a better result. Bar graphs show the overall average RMSE for all data in each region, with the standard error of the mean given by the error bars.
Fig. 6 External test results: bar graphs of the RMSE in the JUAN, NEPR and SEPR external tests. Data-invariant linear filters, such as bicubic interpolation, yield a single RMSE value because they have no training mechanism; the trained methods give different RMSE values for each of the two regions external to the training region.
External test results: (A) RMSE values; (B) hypothesis-testing p-values versus bicubic interpolation. Bold values indicate that the SISR algorithm shows significantly lower RMSE than bicubic interpolation (null hypothesis rejected); italic values indicate significantly larger RMSE. Bold values are the desired result.
| Models (test area) | Bicubic | NE + LS | NE + NNLS | Zeyde | ANR | GR | A+ | JOR | SRF |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| (A) |  |  |  |  |  |  |  |  |  |
| SEPR models (JUAN) | 34.49 | 35.20 | 35.71 | 33.99 | 34.02 |  |  |  |  |
| NEPR models (JUAN) | 34.49 | 35.67 | 35.01 | 35.01 | 34.95 | 34.16 | 34.33 |  |  |
| SEPR models (NEPR) | 46.65 | 46.65 | 47.10 | 45.89 | 46.22 | 46.60 | 46.18 | (45.28) |  |
| JUAN models (NEPR) | 46.65 | 46.20 | 46.58 | 45.97 | 45.90 | 46.16 | 46.26 | 45.82 |  |
| NEPR models (SEPR) | 46.93 | 45.26 | 50.47 | 46.01 | 46.80 |  |  |  |  |
| JUAN models (SEPR) | 46.93 | 45.56 | 45.95 | (45.26) | (45.05) | (45.13) | 46.26 | (45.12) |  |
| (B) |  |  |  |  |  |  |  |  |  |
| SEPR models (JUAN) | 1 | 0.33 | 0.10 | 0.49 | 0.52 |  |  |  |  |
| NEPR models (JUAN) | 1 | 0.11 | 0.48 | 0.48 | 0.53 | 0.65 | 0.83 |  |  |
| SEPR models (NEPR) | 1 | 1.00 | 0.55 | 0.28 | 0.54 | 0.94 | 0.51 | 0.052 |  |
| JUAN models (NEPR) | 1 | 0.53 | 0.92 | 0.34 | 0.29 | 0.49 | 0.58 | 0.24 |  |
| NEPR models (SEPR) | 1 | 0.099 | 0.13 | 0.36 | 0.90 |  |  |  |  |
| JUAN models (SEPR) | 1 | 0.18 | 0.34 | 0.098 | 0.062 | 0.074 | 0.51 | 0.069 |  |