Multi-way data analysis

2016

Books

Keimel, C.: "Design of Video Quality Metrics with Multi-Way Data Analysis: A data driven approach", Springer, Singapore, 2016, ISBN: 978-3-319-02680-0.
[Abstract] [BibTeX] [DOI]

This book proposes a data-driven methodology using multi-way data analysis for the design of video-quality metrics. It also enables video-quality metrics to be created using arbitrary features. This data-driven design approach not only requires no detailed knowledge of the human visual system, but also allows a proper consideration of the temporal nature of video using a three-way prediction model, corresponding to the three-way structure of video. Using two simple example metrics, the author demonstrates not only that this purely data-driven approach outperforms state-of-the-art video-quality metrics, which are often optimized for specific properties of the human visual system, but also that multi-way data analysis methods outperform the combination of two-way data analysis methods and temporal pooling.
Download citation as [.bib File]
@book{Keimel-Springer2016,
title = {Design of Video Quality Metrics with Multi-Way Data Analysis: A data driven approach},
author = {Christian Keimel},
doi = {10.1007/978-981-10-0269-4},
isbn = {978-3-319-02680-0},
year = {2016},
date = {2016-01-01},
publisher = {Springer},
address = {Singapore},
series = {T-Labs Series in Telecommunication Services},
abstract = {This book proposes a data-driven methodology using multi-way data analysis for the design of video-quality metrics. It also enables video-quality metrics to be created using arbitrary features. This data-driven design approach not only requires no detailed knowledge of the human visual system, but also allows a proper consideration of the temporal nature of video using a three-way prediction model, corresponding to the three-way structure of video. Using two simple example metrics, the author demonstrates not only that this purely data-driven approach outperforms state-of-the-art video-quality metrics, which are often optimized for specific properties of the human visual system, but also that multi-way data analysis methods outperform the combination of two-way data analysis methods and temporal pooling.},
howpublished = {Full text available from publisher},
keywords = {},
pubstate = {},
tppubtype = {book}
}
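
The core idea of the book, treating video-quality data as a three-way array instead of pooling over frames, can be sketched in a few lines of Python. The array shapes, random values and mean pooling below are illustrative assumptions only, and unfolding the cube for a two-way model merely stands in for the genuine multiway models the book develops.

import numpy as np

# Illustrative three-way data set: sequences x features x frames
# (shapes and values are made up for this sketch).
n_seq, n_feat, n_frames = 8, 5, 250
rng = np.random.default_rng(0)
X_cube = rng.normal(size=(n_seq, n_feat, n_frames))  # three-way feature array
y = rng.uniform(1, 5, size=n_seq)                    # subjective scores (MOS) per sequence

# Two-way approach: temporal pooling (here simply the mean over all frames)
# discards the temporal structure before any model is built.
X_pooled = X_cube.mean(axis=2)           # shape (n_seq, n_feat)

# Multiway-style approach: keep the frame axis and unfold the cube, so a
# model can see every (feature, frame) combination.
X_unfolded = X_cube.reshape(n_seq, -1)   # shape (n_seq, n_feat * n_frames)

print(X_pooled.shape, X_unfolded.shape)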

2014

Journal Articles

Hoßfeld, T., Keimel, C., Timmerer, C.: "Crowdsourcing Quality-of-Experience Assessments", Computer, 47 (9), pp. 98-102, 2014, ISSN: 0018-9162.
[Abstract] [BibTeX] [DOI]

Crowdsourced quality-of-experience (QoE) assessments are more cost-effective and flexible than traditional in-lab evaluations but require careful test design, innovative incentive mechanisms, and technical expertise to address various implementation challenges.
Download citation as [.bib File]
@article{Hossfeld-Computer2014,
title = {Crowdsourcing Quality-of-Experience Assessments},
author = {Tobias Hoßfeld and Christian Keimel and Christian Timmerer},
doi = {10.1109/MC.2014.245},
issn = {0018-9162},
year = {2014},
date = {2014-09-01},
journal = {Computer},
volume = {47},
number = {9},
pages = {98-102},
abstract = {Crowdsourced quality-of-experience (QoE) assessments are more cost-effective and flexible than traditional in-lab evaluations but require careful test design, innovative incentive mechanisms, and technical expertise to address various implementation challenges.},
howpublished = {Full text available from publisher},
keywords = {},
pubstate = {},
tppubtype = {article}
}

Inproceedings

Horch, C., Habigt, J., Keimel, C., Diepold, K.: "Evaluation of video quality fluctuations using pattern categorisation", Quality of Multimedia Experience (QoMEX), 2014 Sixth International Workshop on, pp. 117-122, 2014.
[Abstract] [PDF] [BibTeX] [DOI]

Fluctuations of video quality over time can have a significant influence on the overall perceived quality as represented by the QoE. Existing methodologies for subjective video quality assessment, however, are often not suitable for the evaluation of these quality fluctuations, especially if they occur within very small time frames. In this contribution, we therefore propose a new method, VIQPAC, which addresses this shortcoming by using a pattern categorisation approach. Instead of requiring the subjects to provide a continuous quality evaluation, the subjects assess the overall quality impression and the strength of the quality fluctuation, combined with a categorisation of the encountered fluctuation pattern. This allows us to determine the fluctuation-dependent temporal changes in the quality. The results show that VIQPAC is able to capture the pattern and strength of quality fluctuations, allowing for a proper description of the temporal quality changes within a video sequence.
Download citation as [.bib File]
@inproceedings{Horch-QoMEX2014,
title = {Evaluation of video quality fluctuations using pattern categorisation},
author = {Clemens Horch and Julian Habigt and Christian Keimel and Klaus Diepold},
doi = {10.1109/QoMEX.2014.6982306},
year = {2014},
date = {2014-09-01},
booktitle = {Quality of Multimedia Experience (QoMEX), 2014 Sixth International Workshop on},
pages = {117-122},
abstract = {Fluctuations of video quality over time can have a significant influence on the overall perceived quality as represented by the QoE. Existing methodologies for subjective video quality assessment, however, are often not suitable for the evaluation of these quality fluctuations, especially if they occur within very small time frames. In this contribution, we therefore propose a new method, VIQPAC, which addresses this shortcoming by using a pattern categorisation approach. Instead of requiring the subjects to provide a continuous quality evaluation, the subjects assess the overall quality impression and the strength of the quality fluctuation, combined with a categorisation of the encountered fluctuation pattern. This allows us to determine the fluctuation-dependent temporal changes in the quality. The results show that VIQPAC is able to capture the pattern and strength of quality fluctuations, allowing for a proper description of the temporal quality changes within a video sequence.},
keywords = {},
pubstate = {},
tppubtype = {inproceedings}
}

2013

Inproceedings

Horch, C., Keimel, C., Habigt, J., Diepold, K.: "Length-independent refinement of video quality metrics based on multiway data analysis", Image Processing (ICIP), 2013 20th IEEE International Conference on, pp. 44-48, 2013.
[Abstract] [PDF] [BibTeX] [DOI]

In previous publications it has been shown that no-reference video quality metrics based on a data analysis approach rather than on modeling the human visual system lead to very promising results and outperform many well-known full-reference metrics. Furthermore, the results improve when taking the temporal structure of the video sequence into account by using multiway analysis methods. This contribution shows a way of refining these multiway quality metrics in order to make them more suitable for real-life applications while maintaining their performance. Additionally, our results confirm the validity of H.264/AVC bitstream no-reference quality metrics using multiway PLSR by evaluating this concept on an additional dataset.
Download citation as [.bib File]
@inproceedings{Horch-ICIP2013,
title = {Length-independent refinement of video quality metrics based on multiway data analysis},
author = {Clemens Horch and Christian Keimel and Julian Habigt and Klaus Diepold},
doi = {10.1109/ICIP.2013.6738010},
year = {2013},
date = {2013-09-01},
booktitle = {Image Processing (ICIP), 2013 20th IEEE International Conference on},
pages = {44-48},
abstract = {In previous publications it has been shown that no-reference video quality metrics based on a data analysis approach rather than on modeling the human visual system lead to very promising results and outperform many well-known full-reference metrics. Furthermore, the results improve when taking the temporal structure of the video sequence into account by using multiway analysis methods. This contribution shows a way of refining these multiway quality metrics in order to make them more suitable for real-life applications while maintaining their performance. Additionally, our results confirm the validity of H.264/AVC bitstream no-reference quality metrics using multiway PLSR by evaluating this concept on an additional dataset.},
keywords = {},
pubstate = {},
tppubtype = {inproceedings}
}

Redl, A., Keimel, C., Diepold, K.: "Saliency based video quality prediction using multi-way data analysis", Quality of Multimedia Experience (QoMEX), 2013 Fifth International Workshop on, pp. 188-193, 2013.
[Abstract] [PDF] [BibTeX] [DOI]

Saliency information allows us to determine which parts of an image or video frame attract the focus of the observer and thus where distortions will be more obvious. Using this knowledge and saliency thresholds, we therefore combine the saliency information generated by a computational model and the features extracted from the H.264/AVC bitstream, and use the resulting saliency-weighted features in the design of a video quality metric with multi-way data analysis. We used two different multi-way methods, the two dimensional principal component regression (2D-PCR) and multi-way partial least squares regression (PLSR), in the design of a no-reference video quality metric, where the different saliency levels are considered as an additional direction. Our results show that the consideration of the saliency information leads to more stable models with fewer parameters, and thus the prediction performance increases compared to metrics without saliency information for the same number of parameters.
Download citation as [.bib File]
@inproceedings{Redl-Qomex2013,
title = {Saliency based video quality prediction using multi-way data analysis},
author = {Arne Redl and Christian Keimel and Klaus Diepold},
doi = {10.1109/QoMEX.2013.6603235},
year = {2013},
date = {2013-07-01},
booktitle = {Quality of Multimedia Experience (QoMEX), 2013 Fifth International Workshop on},
pages = {188-193},
abstract = {Saliency information allows us to determine which parts of an image or video frame attract the focus of the observer and thus where distortions will be more obvious. Using this knowledge and saliency thresholds, we therefore combine the saliency information generated by a computational model and the features extracted from the H.264/AVC bitstream, and use the resulting saliency-weighted features in the design of a video quality metric with multi-way data analysis. We used two different multi-way methods, the two dimensional principal component regression (2D-PCR) and multi-way partial least squares regression (PLSR), in the design of a no-reference video quality metric, where the different saliency levels are considered as an additional direction. Our results show that the consideration of the saliency information leads to more stable models with fewer parameters, and thus the prediction performance increases compared to metrics without saliency information for the same number of parameters.},
keywords = {},
pubstate = {},
tppubtype = {inproceedings}
}
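
A minimal sketch of the saliency-weighting step described above, under assumed data: the saliency map, the macroblock grid, the quantisation-parameter feature and the thresholds are all made up for illustration and are not taken from the paper.

import numpy as np

# Hypothetical per-macroblock data for one 1080p frame (68 x 120 macroblocks);
# the saliency map and the quantisation parameter (QP) values are invented.
rng = np.random.default_rng(1)
saliency = rng.uniform(0.0, 1.0, size=(68, 120))
qp = rng.integers(20, 45, size=(68, 120)).astype(float)

# Split the frame into saliency levels via thresholds and aggregate the
# feature per level; the levels then act as an additional mode ("direction")
# of the multi-way feature array used for the regression.
thresholds = [0.0, 0.33, 0.66, 1.01]
qp_per_level = [
    qp[(saliency >= lo) & (saliency < hi)].mean()
    for lo, hi in zip(thresholds[:-1], thresholds[1:])
]
print(qp_per_level)   # one aggregated feature value per saliency level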

2012

Inproceedings

Keimel, C., Habigt, J., Diepold, K.: "Hybrid No-Reference Video Quality Metric Based on Multiway PLSR", European Signal Processing Conference (EUSIPCO), 2012, pp. 1244-1248, 2012, ISSN: 2219-5491.
[Abstract] [PDF] [BibTeX]

In real-life applications, no-reference metrics are more useful than full-reference metrics. To design such metrics, we apply data analysis methods to objectively measurable features and to data originating from subjective testing. Unfortunately, the information about temporal variation of quality is often lost due to the temporal pooling over all frames. Instead of using temporal pooling, we have recently designed an H.264/AVC bitstream no-reference video quality metric employing multiway Partial Least Squares Regression (PLSR), which leads to an improved prediction performance. In this contribution we will utilize multiway PLSR to design a hybrid metric that combines bitstream-based features with pixel-based features. Our results show that the additional inclusion of the pixel-based features improves the quality prediction even further.
Download citation as [.bib File]
@inproceedings{Keimel-EUSIPCO2012,
title = {Hybrid No-Reference Video Quality Metric Based on Multiway PLSR},
author = {Christian Keimel and Julian Habigt and Klaus Diepold},
issn = {2219-5491},
year = {2012},
date = {2012-01-01},
booktitle = {European Signal Processing Conference (EUSIPCO), 2012},
pages = {1244-1248},
abstract = {In real-life applications, no-reference metrics are more useful than full-reference metrics. To design such metrics, we apply data analysis methods to objectively measurable features and to data originating from subjective testing. Unfortunately, the information about temporal variation of quality is often lost due to the temporal pooling over all frames. Instead of using temporal pooling, we have recently designed an H.264/AVC bitstream no-reference video quality metric employing multiway Partial Least Squares Regression (PLSR), which leads to an improved prediction performance. In this contribution we will utilize multiway PLSR to design a hybrid metric that combines bitstream-based features with pixel-based features. Our results show that the additional inclusion of the pixel-based features improves the quality prediction even further.},
keywords = {},
pubstate = {},
tppubtype = {inproceedings}
}
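
A rough sketch of the hybrid idea, combining a bitstream-based and a pixel-based feature block before regression. All shapes and values below are invented, and plain PLSR on the unfolded cube is only an approximation of the multiway PLSR actually used in the paper.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
n_seq, n_frames = 10, 100

# Two invented feature blocks per sequence and frame: features parsed from
# the H.264/AVC bitstream and features computed on the decoded pixels.
bitstream_feat = rng.normal(size=(n_seq, 4, n_frames))
pixel_feat = rng.normal(size=(n_seq, 3, n_frames))
mos = rng.uniform(1, 5, size=n_seq)

# Hybrid metric: stack both blocks along the feature mode, then unfold the
# resulting cube for the regression (a stand-in for genuine multiway PLSR).
X_cube = np.concatenate([bitstream_feat, pixel_feat], axis=1)
X = X_cube.reshape(n_seq, -1)

model = PLSRegression(n_components=3).fit(X, mos)
print(model.predict(X).ravel()[:3])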

2011

Journal Articles

Keimel, C., Rothbucher, M., Shen, H., Diepold, K.: "Video is a Cube", Signal Processing Magazine, IEEE, 28 (6), pp. 41-49, 2011, ISSN: 1053-5888.
[Abstract] [PDF] [BibTeX] [DOI]

Quality of experience (QoE) is becoming increasingly important in signal processing applications. In taking inspiration from chemometrics, we provide an introduction to the design of video quality metrics by using data analysis methods, which are different from traditional approaches. These methods do not necessitate a complete understanding of the human visual system (HVS). We use multidimensional data analysis, an extension of well-established data analysis techniques, allowing us to better exploit higher-dimensional data. In the case of video quality metrics, it enables us to exploit the temporal properties of video more properly; the complete three-dimensional structure of the video cube is taken into account in metrics' design. Starting with the well-known principal component analysis and an introduction to the notation of multiway arrays, we then present their multidimensional extensions, delivering better quality prediction results. Although we focus on video quality, the presented design principles can easily be adapted to other modalities and to even higher dimensional data sets as well.
Download citation as [.bib File]
@article{Keimel-SPM2011,
title = {Video is a Cube},
author = {Christian Keimel and Martin Rothbucher and Hao Shen and Klaus Diepold},
doi = {10.1109/MSP.2011.942468},
issn = {1053-5888},
year = {2011},
date = {2011-01-01},
journal = {Signal Processing Magazine, IEEE},
volume = {28},
number = {6},
pages = {41-49},
abstract = {Quality of experience (QoE) is becoming increasingly important in signal processing applications. In taking inspiration from chemometrics, we provide an introduction to the design of video quality metrics by using data analysis methods, which are different from traditional approaches. These methods do not necessitate a complete understanding of the human visual system (HVS). We use multidimensional data analysis, an extension of well-established data analysis techniques, allowing us to better exploit higher-dimensional data. In the case of video quality metrics, it enables us to exploit the temporal properties of video more properly; the complete three-dimensional structure of the video cube is taken into account in metrics' design. Starting with the well-known principal component analysis and an introduction to the notation of multiway arrays, we then present their multidimensional extensions, delivering better quality prediction results. Although we focus on video quality, the presented design principles can easily be adapted to other modalities and to even higher dimensional data sets as well.},
keywords = {},
pubstate = {},
tppubtype = {article}
}
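
As a companion to the article's starting point, the sketch below runs ordinary PCA on an unfolded feature cube; the multiway extensions the article then introduces keep the three modes separate instead of flattening them. Shapes and values are made up.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)

# Invented "video cube" of features: sequences x features x frames.
X_cube = rng.normal(size=(12, 6, 50))

# Two-way baseline: unfold the cube into a matrix and run ordinary PCA.
X_mat = X_cube.reshape(12, -1)
pca = PCA(n_components=3).fit(X_mat)
print(pca.explained_variance_ratio_)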

Inproceedings

Keimel, C., Habigt, J., Klimpke, M., Diepold, K.: "Design of no-reference video quality metrics with multiway partial least squares regression", Quality of Multimedia Experience (QoMEX), 2011 Third International Workshop on, pp. 49-54, 2011, ISBN: 978-1-4577-1334-7.
[Abstract] [PDF] [BibTeX] [DOI]

No-reference video quality metrics are becoming ever more popular, as they are more useful in real-life applications compared to full-reference metrics. One way to design such metrics is by applying data analysis methods on both objectively measurable features and data from subjective testing. Partial least squares regression (PLSR) is one such method. In order to apply such methods, however, we have to temporally pool over all frames of a video, losing valuable information about the quality variation over time. Hence, we extend the PLSR into a higher dimensional space with multiway PLSR in this contribution and thus consider video in all its dimensions. We designed an H.264/AVC bitstream no-reference video quality metric in order to verify multiway PLSR against PLSR with respect to the prediction performance. Our results show that the inclusion of the temporal dimension with multiway PLSR improves the quality prediction and its correlation with the actual quality.
Download citation as [.bib File]
@inproceedings{Keimel-QoMEX2011,
title = {Design of no-reference video quality metrics with multiway partial least squares regression},
author = {Christian Keimel and Julian Habigt and Manuel Klimpke and Klaus Diepold},
doi = {10.1109/QoMEX.2011.6065711},
isbn = {978-1-4577-1334-7},
year = {2011},
date = {2011-09-01},
booktitle = {Quality of Multimedia Experience (QoMEX), 2011 Third International Workshop on},
pages = {49-54},
abstract = {No-reference video quality metrics are becoming ever more popular, as they are more useful in real-life applications compared to full-reference metrics. One way to design such metrics is by applying data analysis methods on both objectively measurable features and data from subjective testing. Partial least squares regression (PLSR) is one such method. In order to apply such methods, however, we have to temporally pool over all frames of a video, losing valuable information about the quality variation over time. Hence, we extend the PLSR into a higher dimensional space with multiway PLSR in this contribution and thus consider video in all its dimensions. We designed an H.264/AVC bitstream no-reference video quality metric in order to verify multiway PLSR against PLSR with respect to the prediction performance. Our results show that the inclusion of the temporal dimension with multiway PLSR improves the quality prediction and its correlation with the actual quality.},
keywords = {},
pubstate = {},
tppubtype = {inproceedings}
}
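
The contrast drawn in the abstract, temporal pooling versus keeping the frame dimension, can be illustrated as follows. The data is synthetic, and unfolding followed by ordinary PLSR is only a stand-in for the multiway PLSR used in the paper.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
n_seq, n_feat, n_frames = 16, 5, 120

# Invented per-frame bitstream features and subjective scores.
X_cube = rng.normal(size=(n_seq, n_feat, n_frames))
mos = rng.uniform(1, 5, size=n_seq)

# Conventional design: temporal pooling first, then PLSR on the pooled features.
pooled_model = PLSRegression(n_components=2).fit(X_cube.mean(axis=2), mos)

# Multiway-flavoured design (approximated here by unfolding): the frame
# dimension stays visible to the regression instead of being averaged away.
unfolded_model = PLSRegression(n_components=2).fit(X_cube.reshape(n_seq, -1), mos)

print(pooled_model.predict(X_cube.mean(axis=2)).ravel()[:3])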

Keimel, C., Klimpke, M., Habigt, J., Diepold, K.: "No-reference video quality metric for HDTV based on H.264/AVC bitstream features", Image Processing (ICIP), 2011 18th IEEE International Conference on, pp. 3325-3328, 2011, ISSN: 1522-4880.
[Abstract] [PDF] [BibTeX] [DOI]

No-reference video quality metrics are becoming ever more popular, as they are more useful in real-life applications compared to full-reference metrics. Many proposed metrics extract features related to human perception from the individual video frames. Hence the video sequences have to be decoded first, before the metrics can be applied. In order to avoid decoding just for quality estimation, we therefore present in this contribution a no-reference metric for HDTV that uses features directly extracted from the H.264/AVC bitstream. We combine these features with the results from subjective tests using a data analysis approach with partial least squares regression to gain a prediction model for the visual quality. For verification, we performed a cross validation. Our results show that the proposed no-reference metric outperforms other metrics and delivers a correlation between the quality prediction and the actual quality of 0.93.
Download citation as [.bib File]
@inproceedings{Keimel-ICIP2011,
title = {No-reference video quality metric for HDTV based on H.264/AVC bitstream features},
author = {Christian Keimel and Manuel Klimpke and Julian Habigt and Klaus Diepold},
doi = {10.1109/ICIP.2011.6116383},
issn = {1522-4880},
year = {2011},
date = {2011-09-01},
booktitle = {Image Processing (ICIP), 2011 18th IEEE International Conference on},
pages = {3325-3328},
abstract = {No-reference video quality metrics are becoming ever more popular, as they are more useful in real-life applications compared to full-reference metrics. Many proposed metrics extract features related to human perception from the individual video frames. Hence the video sequences have to be decoded first, before the metrics can be applied. In order to avoid decoding just for quality estimation, we therefore present in this contribution a no-reference metric for HDTV that uses features directly extracted from the H.264/AVC bitstream. We combine these features with the results from subjective tests using a data analysis approach with partial least squares regression to gain a prediction model for the visual quality. For verification, we performed a cross validation. Our results show that the proposed no-reference metric outperforms other metrics and delivers a correlation between the quality prediction and the actual quality of 0.93.},
keywords = {},
pubstate = {},
tppubtype = {inproceedings}
}
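
The reported figure of 0.93 is a cross-validated correlation between predicted and subjective quality. The sketch below shows how such a number is typically computed; the features, scores and model order are invented, so its output has no relation to the paper's result.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(5)

# Invented, already temporally aggregated bitstream features and MOS values.
X = rng.normal(size=(20, 8))
mos = rng.uniform(1, 5, size=20)

# Leave-one-out cross validation: each sequence is predicted by a model
# trained on the remaining ones; prediction and MOS are then correlated.
pred = cross_val_predict(PLSRegression(n_components=2), X, mos, cv=LeaveOneOut()).ravel()
print(np.corrcoef(pred, mos)[0, 1])   # Pearson correlation coefficient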

Keimel, C., Rothbucher, M., Diepold, K.: "Extending video quality metrics to the temporal dimension with 2D-PCR", Farnand, Susan P.; Gaykema, Frans (Ed.): Image Quality and System Performance VIII, 7867, pp. 786713-1 - 786713-10, SPIE, 2011.
[Abstract] [PDF] [BibTeX] [DOI]

The aim of any video quality metric is to deliver a quality prediction similar to the video quality perceived by human observers. One way to design such a model of human perception is by data analysis. In this contribution we intend to extend this approach to the temporal dimension. Even though video obviously consists of spatial and temporal dimensions, the temporal aspect is often not considered well enough. Instead of including this third dimension in the model itself, the metrics are usually only applied on a frame-by-frame basis and then temporally pooled, commonly by averaging. Therefore we propose to skip the temporal pooling step and use the additional temporal dimension in the model building step of the video quality metric. We propose to use the two dimensional extension of the PCR, the 2D-PCR, in order to obtain an improved model. We conducted extensive subjective tests with different HDTV video sequences at 1920×1080 and 25 frames per second. For verification, we performed a cross validation to get a measure for the real-life performance of the acquired model. Finally, we will show that the direct inclusion of the temporal dimension of video into the model building improves the overall prediction accuracy of the visual quality significantly.
Download citation as [.bib File]
@inproceedings{Keimel-SPIE-EI2011-2D-PCR,
title = {Extending video quality metrics to the temporal dimension with 2D-PCR},
author = {Christian Keimel and Martin Rothbucher and Klaus Diepold},
editor = {Susan P Farnand and Frans Gaykema},
doi = {10.1117/12.872406},
year = {2011},
date = {2011-01-01},
booktitle = {Image Quality and System Performance VIII},
volume = {7867},
number = {1},
pages = {786713-1 - 786713-10},
publisher = {SPIE},
abstract = {The aim of any video quality metric is to deliver a quality prediction similar to the video quality perceived by human observers. One way to design such a model of human perception is by data analysis. In this contribution we intend to extend this approach to the temporal dimension. Even though video obviously consists of spatial and temporal dimensions, the temporal aspect is often not considered well enough. Instead of including this third dimension in the model itself, the metrics are usually only applied on a frame-by-frame basis and then temporally pooled, commonly by averaging. Therefore we propose to skip the temporal pooling step and use the additional temporal dimension in the model building step of the video quality metric. We propose to use the two dimensional extension of the PCR, the 2D-PCR, in order to obtain an improved model. We conducted extensive subjective tests with different HDTV video sequences at 1920×1080 and 25 frames per second. For verification, we performed a cross validation to get a measure for the real-life performance of the acquired model. Finally, we will show that the direct inclusion of the temporal dimension of video into the model building improves the overall prediction accuracy of the visual quality significantly.},
keywords = {},
pubstate = {},
tppubtype = {inproceedings}
}
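
For readers unfamiliar with PCR: it is principal component analysis followed by least-squares regression on the component scores. The sketch below shows this one-way baseline on synthetic data; the paper's 2D-PCR extends the decomposition to the two-way (feature × frame) slices and is not reproduced here.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
n_seq, n_feat, n_frames = 15, 6, 80

# Invented per-frame features and subjective scores.
X_cube = rng.normal(size=(n_seq, n_feat, n_frames))
mos = rng.uniform(1, 5, size=n_seq)

# Classical PCR on the unfolded data: PCA for dimensionality reduction,
# then ordinary least squares on the component scores.
scores = PCA(n_components=4).fit_transform(X_cube.reshape(n_seq, -1))
pcr = LinearRegression().fit(scores, mos)
print(pcr.score(scores, mos))   # in-sample R^2 of the PCR model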

2010

Technical Reports

Klimpke, M., Keimel, C., Diepold, K.: "Visuelle Qualitätsmetrik basierend auf der multivariaten Datenanalyse von H.264/AVC Bitstream-Features", Institute for Data Processing, Technische Universität München, 2010.
[Abstract] [PDF] [BibTeX]

Download citation as [.bib File]
@techreport{Klimpke-TR-BitstreamFeatures-2010,
title = {Visuelle Qualitätsmetrik basierend auf der multivariaten Datenanalyse von H.264/AVC Bitstream-Features},
author = {Manuel Klimpke and Christian Keimel and Klaus Diepold},
url = {https://mediatum.ub.tum.de/node?id=1120198},
year = {2010},
date = {2010-11-01},
institution = {Institute for Data Processing, Technische Universität München},
keywords = {},
pubstate = {},
tppubtype = {techreport}
}