Improving the prediction accuracy of video quality metrics

Abstract

To improve the prediction accuracy of visual quality metrics for video, we propose two simple steps: temporal pooling, which derives a set of parameters from a single measured feature, and a correction step that uses videos of known visual quality. We demonstrate this approach on the well-known PSNR. First, we achieve a more accurate quality prediction by replacing the mean luma PSNR with alternative PSNR-based parameters. Second, we exploit the nearly linear relationship between the output of a quality metric and the subjectively perceived visual quality of an individual video sequence, estimating the parameters of this linear relationship with the help of additionally generated videos of known visual quality. We show that this relationship holds even across very different coding technologies, and we verify our results using cross-validation. Combining these two steps, we increase the Pearson correlation coefficient for a set of four high-definition videos from 0.69 to 0.88 for PSNR, outperforming other, more sophisticated full-reference video quality metrics.
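The two steps described above can be illustrated with a minimal sketch: per-frame luma PSNR, temporal pooling into several parameters, and a linear correction fitted on anchor videos of known subjective quality. The specific pooling statistics and all numeric values below are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def frame_psnr(ref, deg, max_val=255.0):
    """Per-frame luma PSNR between a reference and a degraded frame."""
    mse = np.mean((ref.astype(np.float64) - deg.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def temporal_pool(psnr_series):
    """Derive a set of parameters from one per-frame PSNR series
    (illustrative pooling choices, not necessarily the paper's)."""
    s = np.asarray(psnr_series, dtype=np.float64)
    return {
        "mean": s.mean(),
        "min": s.min(),               # worst-case frame
        "p10": np.percentile(s, 10),  # emphasizes poorly coded frames
        "std": s.std(),               # temporal variability
    }

def fit_linear_correction(metric_values, known_mos):
    """Least-squares fit of MOS ~ a * metric + b using anchor videos
    of known subjective quality (the correction step)."""
    a, b = np.polyfit(metric_values, known_mos, 1)
    return a, b

# Hypothetical anchor videos with known subjective scores (made-up numbers).
anchors_psnr = np.array([28.0, 32.0, 36.0, 40.0])
anchors_mos = np.array([2.1, 3.0, 3.9, 4.7])
a, b = fit_linear_correction(anchors_psnr, anchors_mos)
predicted_mos = a * 34.0 + b  # corrected prediction for a new encode
```

The key design point is that the linear correction is estimated per sequence: because the metric-to-MOS mapping is close to linear for an individual video, a few anchor encodes of that video suffice to calibrate the metric.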

Publication
2010 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)