Rule-Based No-Reference Video Quality Evaluation Using Additionally Coded Videos

Abstract

This contribution presents a no-reference video quality metric based on a set of simple rules that assign a given video to one of four content classes. The classes distinguish between video sequences that are coded at a very low data rate, sequences that are sensitive to blocking effects, sequences that are sensitive to blurring, and a general class for all other types of video sequences. The appropriate class for a given video sequence is selected by evaluating feature values of an additional low-quality version of the video, which is generated by further encoding. The visual quality of a video sequence is estimated using a set of features that includes measures of blockiness, blurriness, and spatial activity, together with a set of additional continuity features. How these features are combined into one overall quality value is determined by the content class to which the video has been assigned. We also propose an additional correction step for the visual quality value. The proposed metric is verified against visual quality values originating from subjective quality tests, in combination with a cross-validation approach. The presented metric significantly outperforms the peak signal-to-noise ratio as a visual quality estimator; the Pearson correlation between the estimated visual quality values and the subjective test results reaches values as high as 0.82.
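
To make the described pipeline concrete, here is a minimal Python sketch of the idea: simplified blockiness, blurriness, and spatial-activity features are computed for the original sequence and for an additionally encoded low-quality version; a small rule set picks one of the four classes from the feature differences; and a class-specific linear combination yields the quality estimate, followed by a simple correction step (clipping to the MOS scale). All feature definitions, thresholds, and weights below are illustrative assumptions, not the paper's actual rules or values.

```python
# Hypothetical sketch of the rule-based no-reference pipeline.
# Thresholds, weights, and feature definitions are made-up stand-ins.
import numpy as np

def blockiness(frame, block=8):
    # Ratio of luminance jumps across 8x8 block boundaries to jumps elsewhere.
    d = np.abs(np.diff(frame.astype(float), axis=1))
    return d[:, block - 1::block].mean() / (d.mean() + 1e-9)

def blurriness(frame):
    # Inverse mean gradient magnitude: blurrier frames have weaker gradients.
    gy, gx = np.gradient(frame.astype(float))
    return 1.0 / (np.hypot(gx, gy).mean() + 1e-9)

def spatial_activity(frame):
    # Standard deviation of the horizontal gradient (a simple SI surrogate).
    return np.std(np.diff(frame.astype(float), axis=1))

def features(frames):
    # One feature vector per sequence: average the per-frame features.
    f = np.array([[blockiness(x), blurriness(x), spatial_activity(x)]
                  for x in frames])
    return f.mean(axis=0)

def classify(feat, feat_low):
    # Rule set comparing the sequence with its additionally encoded
    # low-quality version (all thresholds are illustrative).
    delta = feat_low - feat
    if delta[0] < 0.05 and delta[1] < 0.05:
        return "very_low_rate"      # extra coding barely changes the video
    if delta[0] > delta[1]:
        return "blocking_sensitive"
    if delta[1] > delta[0]:
        return "blurring_sensitive"
    return "general"

WEIGHTS = {  # hypothetical per-class combination weights and offsets
    "very_low_rate":      (np.array([-1.2, -0.8, 0.1]), 3.0),
    "blocking_sensitive": (np.array([-2.0, -0.3, 0.2]), 4.0),
    "blurring_sensitive": (np.array([-0.3, -2.0, 0.2]), 4.0),
    "general":            (np.array([-1.0, -1.0, 0.2]), 3.5),
}

def estimate_quality(frames, frames_low):
    # Class-dependent linear combination, then a simple correction step.
    feat, feat_low = features(frames), features(frames_low)
    w, b = WEIGHTS[classify(feat, feat_low)]
    return float(np.clip(w @ feat + b, 1.0, 5.0))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 256, (144, 176)).astype(float) for _ in range(5)]
    # Stand-in for the additionally encoded version: a slightly blurred copy.
    frames_low = [0.25 * (f + np.roll(f, 1, 0) + np.roll(f, 1, 1)
                          + np.roll(f, 1, (0, 1))) for f in frames]
    print(estimate_quality(frames, frames_low))
```

On a test set, the agreement with subjective scores can be checked with `np.corrcoef(predicted, subjective)[0, 1]`, i.e. the Pearson correlation reported in the abstract.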

Publication
IEEE Journal of Selected Topics in Signal Processing