Blind prediction of natural video quality.

Michele A Saad, Alan C Bovik, Christophe Charrier.   

Abstract

We propose a blind (no reference or NR) video quality evaluation model that is nondistortion specific. The approach relies on a spatio-temporal model of video scenes in the discrete cosine transform domain, and on a model that characterizes the type of motion occurring in the scenes, to predict video quality. We use the models to define video statistics and perceptual features that are the basis of a video quality assessment (VQA) algorithm that does not require the presence of a pristine video to compare against in order to predict a perceptual quality score. The contributions of this paper are threefold. 1) We propose a spatio-temporal natural scene statistics (NSS) model for videos. 2) We propose a motion model that quantifies motion coherency in video scenes. 3) We show that the proposed NSS and motion coherency models are appropriate for quality assessment of videos, and we utilize them to design a blind VQA algorithm that correlates highly with human judgments of quality. The proposed algorithm, called video BLIINDS, is tested on the LIVE VQA database and on the EPFL-PoliMi video database and shown to perform close to the level of top performing reduced and full reference VQA algorithms.
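The spatio-temporal NSS idea in the abstract — modeling DCT coefficients of frame differences and extracting distortion-sensitive statistics — can be sketched roughly as below. This is an illustrative approximation only: the 5×5 block size, the moment-matching generalized-Gaussian shape estimator, and mean pooling are assumptions for the sketch, not the paper's exact Video BLIINDS feature set.

```python
import numpy as np
from scipy.fft import dctn
from scipy.special import gamma


def ggd_shape(coeffs):
    """Estimate the generalized-Gaussian shape parameter of a coefficient
    set by moment matching (ratio of second moment to squared first
    absolute moment), a common NSS fitting technique."""
    coeffs = np.asarray(coeffs, dtype=np.float64).ravel()
    sigma_sq = np.mean(coeffs ** 2)
    e_abs = np.mean(np.abs(coeffs))
    rho = sigma_sq / (e_abs ** 2 + 1e-12)
    # Invert rho = Gamma(1/g) * Gamma(3/g) / Gamma(2/g)^2 by grid search.
    candidates = np.arange(0.1, 6.0, 0.001)
    r = gamma(1.0 / candidates) * gamma(3.0 / candidates) / gamma(2.0 / candidates) ** 2
    return float(candidates[np.argmin((r - rho) ** 2)])


def frame_difference_dct_feature(frame_a, frame_b, block=5):
    """Pool generalized-Gaussian shape parameters of block-DCT coefficients
    of a frame difference into a single scalar feature (hypothetical
    pooling; the published algorithm uses a richer feature vector)."""
    diff = frame_b.astype(np.float64) - frame_a.astype(np.float64)
    h, w = diff.shape
    shapes = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            c = dctn(diff[i:i + block, j:j + block], norm="ortho")
            shapes.append(ggd_shape(c.ravel()[1:]))  # drop the DC term
    return float(np.mean(shapes))
```

In a full NR-VQA pipeline such features would be computed per frame pair, pooled over time, and mapped to a quality score by a learned regressor; distorted videos tend to shift these shape statistics away from their natural-scene values.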


Year:  2014        PMID: 24723532     DOI: 10.1109/TIP.2014.2299154

Source DB:  PubMed          Journal:  IEEE Trans Image Process        ISSN: 1057-7149            Impact factor:   10.856


  2 in total

1.  Perceptual quality prediction on authentically distorted images using a bag of features approach.

Authors:  Deepti Ghadiyaram; Alan C Bovik
Journal:  J Vis       Date:  2017-01-01       Impact factor: 2.240

2.  No-Reference Video Quality Assessment Using Multi-Pooled, Saliency Weighted Deep Features and Decision Fusion.

Authors:  Domonkos Varga
Journal:  Sensors (Basel)       Date:  2022-03-12       Impact factor: 3.576
