PROJECT TITLE:
Blind Prediction of Natural Video Quality - 2014
We propose a blind (no-reference, or NR) video quality assessment model that is not distortion specific. The approach relies on a spatio-temporal model of video scenes in the discrete cosine transform (DCT) domain, and on a model that characterizes the type of motion occurring in the scenes, to predict video quality. We use these models to define video statistics and perceptual features that form the basis of a video quality assessment (VQA) algorithm that does not require a pristine video to compare against in order to predict a perceptual quality score. The contributions of this work are threefold. 1) We propose a spatio-temporal natural scene statistics (NSS) model for videos. 2) We propose a motion model that quantifies motion coherency in video scenes. 3) We show that the proposed NSS and motion coherency models are suitable for quality assessment of videos, and we use them to design a blind VQA algorithm that correlates highly with human judgments of quality. The proposed algorithm, called Video BLIINDS, is tested on the LIVE VQA database and on the EPFL-PoliMi video database and is shown to perform close to the level of top-performing reduced-reference and full-reference VQA algorithms.
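To make the spatio-temporal NSS idea concrete, the sketch below computes block DCTs of a frame difference and estimates the shape parameter of a generalized Gaussian density (GGD) fit to the AC coefficients; distortions tend to perturb such shape statistics. This is an illustrative approximation of the kind of feature the model is built on, not the published Video BLIINDS implementation: the block size, the moment-matching GGD estimator, and the function names are assumptions made here for illustration.

```python
import numpy as np
from scipy.fft import dctn
from scipy.special import gamma


def ggd_shape(coeffs):
    """Estimate the GGD shape parameter b by moment matching: the ratio
    E[x^2] / (E|x|)^2 equals gamma(1/b)*gamma(3/b)/gamma(2/b)^2, which is
    monotone in b, so we invert it on a grid (illustrative estimator)."""
    c = np.ravel(coeffs)
    rho = np.mean(c ** 2) / (np.mean(np.abs(c)) ** 2 + 1e-12)
    bs = np.linspace(0.1, 5.0, 2000)
    rhos = gamma(1 / bs) * gamma(3 / bs) / gamma(2 / bs) ** 2
    return bs[np.argmin(np.abs(rhos - rho))]


def frame_diff_dct_shape(frame_a, frame_b, block=8):
    """Take the difference of two consecutive (grayscale) frames, apply an
    8x8 block DCT, discard the DC term of each block, and return one GGD
    shape estimate pooled over all AC coefficients."""
    diff = frame_a.astype(np.float64) - frame_b.astype(np.float64)
    h, w = diff.shape
    ac = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            d = dctn(diff[i:i + block, j:j + block], norm="ortho")
            d[0, 0] = 0.0  # drop the DC coefficient; model the AC statistics
            ac.append(d.ravel())
    return ggd_shape(np.concatenate(ac))
```

For Gaussian frame data the estimated shape should sit near 2 (a GGD with b = 2 is Gaussian); heavier-tailed coefficient distributions, typical of structured or distorted content, pull the estimate below 2.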