Jianping Shi, Hong Li, Caiming Zhong, Zhouyan He, Yeling Ma.
Abstract
A multi-exposure fused (MEF) image is generated from multiple images captured at different exposure levels, but the fusion process inevitably introduces various distortions. How to evaluate the visual quality of MEF images is therefore worth investigating. This paper proposes a new blind quality assessment method for MEF images that takes their characteristics into account, dubbed BMEFIQA. More specifically, multiple features that represent different image attributes are extracted to perceive the various distortions of MEF images. Among them, structural, naturalness, and colorfulness features are utilized to describe the phenomena of structure destruction, unnatural presentation, and color distortion, respectively. All the captured features constitute a final feature vector for quality regression via random forest. Experimental results on a publicly available database show the superiority of the proposed BMEFIQA method over several existing blind quality assessment methods.
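The exact colorfulness feature used by BMEFIQA is not specified in this record; as an illustrative sketch, the widely used Hasler-Süsstrunk colorfulness statistic, on which many colorfulness features are based, can be computed from opponent-color channel statistics:

```python
import numpy as np

def colorfulness(rgb):
    """Hasler-Susstrunk colorfulness measure (illustrative, not
    necessarily the paper's exact feature).

    rgb: H x W x 3 float array with channel values in [0, 255].
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    rg = r - g                      # red-green opponent channel
    yb = 0.5 * (r + g) - b          # yellow-blue opponent channel
    sigma = np.hypot(rg.std(), yb.std())   # joint spread of the channels
    mu = np.hypot(rg.mean(), yb.mean())    # joint mean offset
    return sigma + 0.3 * mu

# A flat gray image carries no color information, so its score is 0.
gray = np.full((8, 8, 3), 128.0)
print(colorfulness(gray))
```

A color-distorted MEF region would typically show a lower score than its well-fused counterpart, which is what makes such a statistic usable as a quality feature.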
Keywords: blind quality assessment; colorfulness; multi-exposure fused images; naturalness; structure
Year: 2022 PMID: 35205579 PMCID: PMC8871194 DOI: 10.3390/e24020285
Source DB: PubMed Journal: Entropy (Basel) ISSN: 1099-4300 Impact factor: 2.524
Figure 1. An example of multi-exposure fusion images generated by three MEF algorithms. (a) MEF image generated by Mertens' algorithm [27] (MOS = 7.6957); (b) MEF image generated by Raman's algorithm [28] (MOS = 4.2609); (c) MEF image generated by local energy weighting [29] (MOS = 4.3043).
Figure 2. The framework of the proposed BMEFIQA method.
Figure 3. (a–c) The corresponding exposure maps of the MEF images in Figure 1.
The details of the MEF image database [35].
| No. | Source Sequences | Size | Image Source |
|---|---|---|---|
| 1 | Balloons | 339 × 512 × 9 | Erik Reinhard |
| 2 | Belgium house | 512 × 384 × 9 | Dani Lischinski |
| 3 | Lamp1 | 512 × 384 × 15 | Martin Cadik |
| 4 | Candle | 512 × 364 × 10 | HDR Projects |
| 5 | Cave | 512 × 384 × 4 | Bartlomiej Okonek |
| 6 | Chinese garden | 512 × 340 × 3 | Bartlomiej Okonek |
| 7 | Farmhouse | 512 × 341 × 3 | HDR Projects |
| 8 | House | 512 × 340 × 4 | Tom Mertens |
| 9 | Kluki | 512 × 341 × 3 | Bartlomiej Okonek |
| 10 | Lamp2 | 512 × 342 × 6 | HDR Projects |
| 11 | Landscape | 512 × 341 × 3 | HDRsoft |
| 12 | Lighthouse | 512 × 340 × 3 | HDRsoft |
| 13 | Madison capitol | 512 × 384 × 30 | Chaman Singh Verma |
| 14 | Memorial | 341 × 512 × 16 | Paul Debevec |
| 15 | Office | 512 × 340 × 6 | Matlab |
| 16 | Tower | 341 × 512 × 3 | Jacques Joffre |
| 17 | Venice | 512 × 341 × 3 | HDRsoft |
Figure 4. Source images in the benchmark MEF database [35].
The overall performance comparison results.
| Metrics | PLCC | SROCC | RMSE |
|---|---|---|---|
| DIIVINE | 0.491 | 0.403 | 1.452 |
| BLINDS-II | 0.534 | 0.346 | 1.409 |
| BRISQUE | 0.414 | 0.380 | 1.517 |
| CurveletQA | 0.371 | 0.337 | 1.548 |
| GradLog | 0.631 | 0.567 | 1.293 |
| ContrastQA | 0.458 | 0.412 | 1.482 |
| GWH-GLBP | 0.163 | 0.113 | 1.645 |
| OG | 0.523 | 0.525 | 1.421 |
| NIQMC | 0.519 | 0.404 | 1.425 |
| SCORER | 0.481 | 0.494 | 1.461 |
| BTMQI | 0.452 | 0.343 | 1.487 |
| HIGRADE-1 | 0.561 | 0.566 | 1.380 |
| HIGRADE-2 | 0.585 | 0.583 | 1.352 |
| Proposed | | | |
Figure 5. Scatter plots for the objective predicted scores and MOS values of sixteen NR-IQA methods. (a) DIIVINE [5]; (b) BLINDS-II [6]; (c) BRISQUE [7]; (d) CurveletQA [8]; (e) GradLog [9]; (f) ContrastQA [10]; (g) GWH-GLBP [11]; (h) OG [12]; (i) NIQMC [13]; (j) SCORER [14]; (k) BTMQI [17]; (l) HIGRADE-1 [18]; (m) HIGRADE-2 [18]; (n) Proposed.
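The three evaluation criteria in the tables (PLCC, SROCC, RMSE) are standard IQA measures of agreement between predicted scores and MOS values. A minimal sketch of how they are computed (the simple double-argsort ranking below does not apply the average-rank correction for tied scores; the MOS/prediction values are made-up illustration data, not results from the paper):

```python
import numpy as np

def plcc(x, y):
    """Pearson linear correlation coefficient (prediction accuracy)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def srocc(x, y):
    """Spearman rank-order correlation (prediction monotonicity):
    PLCC computed on the ranks of the scores."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return plcc(rank(np.asarray(x)), rank(np.asarray(y)))

def rmse(x, y):
    """Root-mean-square error between predicted scores and MOS."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return float(np.sqrt(np.mean(d * d)))

# Hypothetical MOS and predicted scores for five MEF images.
mos  = [7.70, 4.26, 4.30, 6.10, 5.50]
pred = [7.10, 4.50, 4.90, 5.80, 5.20]
print(plcc(mos, pred), srocc(mos, pred), rmse(mos, pred))
```

Higher PLCC/SROCC and lower RMSE indicate better quality prediction, which is the reading convention for the comparison tables above.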
Performance comparison of different feature combinations (√: feature used; ×: feature not used).
| Models | Structural | Naturalness | Colorfulness | PLCC | SROCC | RMSE |
|---|---|---|---|---|---|---|
| Model-1 | √ | × | × | 0.646 | 0.506 | 1.272 |
| Model-2 | × | √ | × | 0.557 | 0.457 | 1.385 |
| Model-3 | × | × | √ | 0.389 | 0.416 | 1.535 |
| Model-4 | √ | √ | × | 0.651 | 0.520 | 1.263 |
| Model-5 | √ | × | √ | 0.658 | 0.640 | 1.255 |
| Model-6 | × | √ | √ | 0.604 | 0.599 | 1.328 |
| Model-7 | √ | √ | √ | | | |
Performance comparison with different block sizes and the corresponding run time.
| Block Size | PLCC | SROCC | RMSE | Time (s) |
|---|---|---|---|---|
| 8 × 8 | 0.694 | 0.673 | 1.200 | 1.3383 |
| 16 × 16 | 0.677 | 0.655 | 1.227 | 0.4744 |
| 32 × 32 | 0.683 | 0.658 | 1.218 | 0.2857 |
| 64 × 64 | 0.688 | 0.659 | 1.210 | 0.2347 |
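The run-time trend in the table above follows directly from the number of blocks that must be processed: doubling the block side cuts the block count roughly fourfold. A minimal numpy sketch of non-overlapping block partitioning (the paper's exact partitioning is not specified in this record; cropping border pixels that do not fill a whole block is an assumption here):

```python
import numpy as np

def split_blocks(img, bs):
    """Partition a 2-D image into non-overlapping bs x bs blocks.
    Border rows/columns that do not fill a whole block are cropped."""
    h, w = img.shape
    img = img[:h - h % bs, :w - w % bs]     # crop to a multiple of bs
    h2, w2 = img.shape
    return (img.reshape(h2 // bs, bs, w2 // bs, bs)
               .swapaxes(1, 2)              # gather each block's rows
               .reshape(-1, bs, bs))        # one block per leading index

# Block counts for a 512 x 340 image at the sizes used in the table.
img = np.zeros((512, 340))
for bs in (8, 16, 32, 64):
    print(bs, len(split_blocks(img, bs)))
```

Per-block features are then computed on each element of the returned stack, so the cost of feature extraction scales with the block count shown by the loop.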
The results of execution time by different methods.
| Methods | Time (s) |
|---|---|
| DIIVINE | 6.6024 |
| BLINDS-II | 14.7361 |
| BRISQUE | 0.0414 |
| CurveletQA | 2.5141 |
| GradLog | 0.0343 |
| ContrastQA | 0.0259 |
| GWH-GLBP | 0.0576 |
| OG | 0.0314 |
| NIQMC | 1.9241 |
| SCORER | 0.5878 |
| BTMQI | 0.0758 |
| HIGRADE-1 | 0.2602 |
| HIGRADE-2 | 1.9040 |
| Proposed | |