Research

My research interests include data-driven models using machine learning (AI) for audio-visual content understanding and applications of deep learning in broadcasting/streaming and video processing, in particular multi-way (tensor) data analysis, QoE assessment with a particular focus on (processed) video and the resulting video quality metrics, crowdsourcing, and interactive TV. These topics are briefly presented below.


Machine learning (AI)

Content generated for human consumption in the form of video, text, or audio is unstructured from a machine perspective, since the contained information is not readily available for processing. Using machine learning, especially deep learning approaches, enables us to extract the salient information from unstructured data in order to generate a descriptive structured representation that can be used in further applications, in particular for building models that explain or predict samples, e.g. in recommendation systems.
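
As a minimal illustration of this idea (a generic sketch, not taken from the publications below, and assuming a recent PyTorch/torchvision installation), the following code turns raw video frames into fixed-length descriptors with a pretrained CNN; such descriptors are a structured representation that downstream models, e.g. a recommender, can work with.

import torch
import torchvision.models as models
import torchvision.transforms as T

# Pretrained backbone; the classification layer is replaced by an identity so the
# network outputs a 512-dimensional descriptor per frame.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def frame_descriptors(frames):
    """frames: iterable of HxWx3 uint8 arrays -> (N, 512) tensor of descriptors."""
    batch = torch.stack([preprocess(f) for f in frames])
    with torch.no_grad():
        return backbone(batch)   # structured representation of the unstructured frames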

Deep Learning for Multimedia

 

Off-the-shelf algorithms, however, are not always suitable for all problems encountered in multimedia applications, in particular if not enough training data is available. Therefore, I am looking into how to adapt existing algorithms to particular problems in the media domain, but also how to leverage non-supervised algorithms, e.g. deep reinforcement learning, in order to mitigate the lack of training data.
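
One common way to adapt an off-the-shelf model when labelled training data is scarce is transfer learning with a frozen backbone. The sketch below is illustrative only (hypothetical class count, again assuming PyTorch/torchvision) and is not the specific method of my publications: only a small task-specific head is trained on top of the pretrained features.

import torch
import torchvision.models as models

num_classes = 5   # hypothetical number of media-domain labels
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for p in model.parameters():          # freeze the pretrained backbone ...
    p.requires_grad = False
model.fc = torch.nn.Linear(model.fc.in_features, num_classes)   # ... and train only a new head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

def train_step(images, labels):
    """images: (B, 3, 224, 224) float tensor, labels: (B,) long tensor."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()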

 

Multi-way data analysis

Multi-way data analysis - here Tucker3 decomposition

When using data-driven design methodologies in the development of prediction models, for example video quality prediction models, the data is usually represented by a two-way array or matrix. In many applications, however, the data has a multi-way nature, and flattening it into a two-way representation discards valuable information in the model building process.

A better alternative is therefore the use of multi-way data analysis methods, for example multiway PLSR, which consider all directions of the data in the process. In my contributions, I have so far applied this approach to the design of video quality metrics, avoiding the temporal pooling that is usually applied and thereby achieving an overall better prediction performance of the metrics.
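
As a small, self-contained illustration of the multi-way view (assuming the open-source tensorly package; this is not code from the publications below), the following sketch treats a short video as a three-way array and fits a Tucker3 model, i.e. a small core tensor plus one factor matrix per mode, so the temporal direction remains a mode of its own instead of being flattened away.

import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

frames, height, width = 50, 36, 64                     # toy "video cube"
video = tl.tensor(np.random.rand(frames, height, width))

# Tucker3: core tensor plus one factor matrix per mode (time, vertical, horizontal).
core, factors = tucker(video, rank=[5, 8, 8])
print(core.shape)                                      # (5, 8, 8)
print([f.shape for f in factors])                      # [(50, 5), (36, 8), (64, 8)]

# Multilinear reconstruction and relative approximation error:
approx = tl.tenalg.multi_mode_dot(core, factors)
print(float(tl.norm(video - approx) / tl.norm(video)))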

Key publications
C. Keimel, M. Rothbucher, H. Shen, and K. Diepold, “Video is a Cube: Multidimensional Analysis and Video Quality Metrics,” in IEEE Signal Processing Magazine, vol. 28, no. 6, pp. 41-49, Nov., 2011. [ Abstract ] [PDF] [ BibTeX ] [DOI]

Quality of experience (QoE) is becoming increasingly important in signal processing applications. In taking inspiration from chemometrics, we provide an introduction to the design of video quality metrics by using data analysis methods, which are different from traditional approaches. These methods do not necessitate a complete understanding of the human visual system (HVS). We use multidimensional data analysis, an extension of well-established data analysis techniques, allowing us to better exploit higher-dimensional data. In the case of video quality metrics, it enables us to exploit the temporal properties of video more properly; the complete three-dimensional structure of the video cube is taken into account in metrics’ design. Starting with the well-known principal component analysis and an introduction to the notation of multiway arrays, we then present their multidimensional extensions, delivering better quality prediction results. Although we focus on video quality, the presented design principles can easily be adapted to other modalities and to even higher dimensional data sets as well.

Download citation as [.bib File]


@Article{Keimel-SPM2011,
Title = {Video is a Cube},
Author = {Keimel, C. and Rothbucher, M. and Shen, H. and Diepold, K.},
Journal = {Signal Processing Magazine, IEEE},
Year = {2011},
Month = nov,
Number = {6},
Pages = {41-49},
Volume = {28},

Abstract = {Quality of experience (QoE) is becoming increasingly important in signal processing applications. In taking inspiration from chemometrics, we provide an introduction to the design of video quality metrics by using data analysis methods, which are different from traditional approaches. These methods do not necessitate a complete understanding of the human visual system (HVS). We use multidimensional data analysis, an extension of well-established data analysis techniques, allowing us to better exploit higher-dimensional data. In the case of video quality metrics, it enables us to exploit the temporal properties of video more properly; the complete three-dimensional structure of the video cube is taken into account in metrics’ design. Starting with the well-known principal component analysis and an introduction to the notation of multiway arrays, we then present their multidimensional extensions, delivering better quality prediction results. Although we focus on video quality, the presented design principles can easily be adapted to other modalities and to even higher dimensional data sets as well.},
Doi = {10.1109/MSP.2011.942468},
ISSN = {1053-5888},
Keywords = {data analysis techniques;data sets;human visual system;multidimensional analysis;multidimensional data analysis;quality of experience;signal processing applications;video quality metrics;quality of service;video signal processing;QoE}
}

C. Keimel, J. Habigt, M. Klimpke, and K. Diepold, “Design of No-Reference Video Quality Metrics with Multiway Partial Least Squares Regression,” in Third International Workshop on Quality of Multimedia Experience (QoMEX 2011), pp. 49-54, Sep., 2011. [ Abstract ] [PDF] [ BibTeX ] [DOI]

No-reference video quality metrics are becoming ever more popular, as they are more useful in real-life applications compared to full-reference metrics. One way to design such metrics is by applying data analysis methods on both objectively measurable features and data from subjective testing. Partial least squares regression (PLSR) is one such method. In order to apply such methods, however, we have to temporally pool over all frames of a video, loosing valuable information about the quality variation over time. Hence, we extend the PLSR into a higher dimensional space with multiway PLSR in this contribution and thus consider video in all its dimensions. We designed a H.264/AVC bitstream no-reference video quality metric in order to verify multiway PLSR against PLSR with respect to the prediction performance. Our results show that the inclusion of the temporal dimension with multiway PLSR improves the quality prediction and its correlation with the actual quality.

Download citation as [.bib File]


@InProceedings{Keimel-QoMEX2011,
Title = {Design of no-reference video quality metrics with multiway partial least squares regression},
Author = {Keimel, C. and Habigt, J. and Klimpke, M. and Diepold, K.},
Booktitle = {Quality of Multimedia Experience (QoMEX), 2011 Third International Workshop on},
Year = {2011},
Month = sep,
Pages = {49-54},

Abstract = {No-reference video quality metrics are becoming ever more popular, as they are more useful in real-life applications compared to full-reference metrics. One way to design such metrics is by applying data analysis methods on both objectively measurable features and data from subjective testing. Partial least squares regression (PLSR) is one such method. In order to apply such methods, however, we have to temporally pool over all frames of a video, loosing valuable information about the quality variation over time. Hence, we extend the PLSR into a higher dimensional space with multiway PLSR in this contribution and thus consider video in all its dimensions. We designed a H.264/AVC bitstream no-reference video quality metric in order to verify multiway PLSR against PLSR with respect to the prediction performance. Our results show that the inclusion of the temporal dimension with multiway PLSR improves the quality prediction and its correlation with the actual quality.},
Doi = {10.1109/QoMEX.2011.6065711},
ISBN = {978-1-4577-1334-7},
Keywords = {H.264/AVC bitstream no-reference video quality metric;PLSR; data analysis methods;multiway partial least squares regression; no-reference video quality metrics; least squares approximations;regression analysis;video coding;}
}

C. Keimel, J. Habigt, and K. Diepold, “Hybrid No-Reference Video Quality Metric Based on Multiway PLSR,” in EUSIPCO 2012: 20th European Signal Processing Conference, pp. 1244-1248, Aug., 2012. [ Abstract ] [PDF] [ BibTeX ] [URL]

In real-life applications, no-reference metrics are more useful than full-reference metrics. To design such metrics, we apply data analysis methods to objectively measurable features and to data originating from subjective testing. Unfortunately, the information about temporal variation of quality is often lost due to the temporal pooling over all frames. Instead of using temporal pooling, we have recently designed a H.264/AVC bitstream no-reference video quality metric employing multiway Partial Least Squares Regression (PLSR), which leads to an improved prediction performance. In this contribution we will utilize multiway PLSR to design a hybrid metric that combines both bitstream-based features with pixel-based features. Our results show that the additional inclusion of the pixel-based features improves the quality prediction even further.

Download citation as [.bib File]


@InProceedings{Keimel-EUSIPCO2012,
Title = {{Hybrid No-Reference Video Quality Metric Based on Multiway PLSR}},
Author = {Christian Keimel and Julian Habigt and Klaus Diepold},
Booktitle = {{European Signal Processing Conference (EUSIPCO), 2012}},
Year = {2012},
Month = aug,
Pages = {1244-1248},

Abstract = {In real-life applications, no-reference metrics are more useful than full-reference metrics. To design such metrics, we apply data analysis methods to objectively measurable features and to data originating from subjective testing. Unfortunately, the information about temporal variation of quality is often lost due to the temporal pooling over all frames. Instead of using temporal pooling, we have recently designed a H.264/AVC bitstream no-reference video quality metric employing multiway Partial Least Squares Regression (PLSR), which leads to an improved prediction performance. In this contribution we will utilize multiway PLSR to design a hybrid metric that combines both bitstream-based features with pixel-based features. Our results show that the additional inclusion of the pixel-based features improves the quality prediction even further.},
ISBN = {978-1-4673-1068-0},
ISSN = {2219-5491},
Keywords = {least squares approximations;regression analysis; video coding;H.264/AVC bitstream no-reference video quality; bitstream-based features;data analysis methods; full-reference metrics;hybrid no-reference video quality metric; multiway PLSR;partial least squares regression; pixel-based features; quality prediction;temporal pooling;Feature extraction; Measurement;Quality assessment;Vectors; Video coding;Video recording;Visualization; Video quality metric;hybrid metric; multilinear data analysis;multiway PLSR;no-reference metric;trilinear PLS},
Url = {http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6334343}
}

Video quality metrics

Video quality metrics taxonomy

Video quality metrics are models that allow us to predict the subjective visual quality of video sequences under test from objectively measurable properties, combining a set of different features in a prediction model to provide an estimate of the video quality. Typical features are blur and blockiness: blur describes how strongly details have been lost, e.g. due to the loss of high-frequency components during quantisation, while blockiness describes how visible block structures are, e.g. due to the block-based transforms used in video coding.

One aim of video quality metrics is obviously to deliver quality predictions that are equivalent to the results from subjective testing, but another major goal is that the prediction can be performed in all situations. The focus of my contributions is therefore on so-called no-reference video quality metrics, which predict video quality without requiring access to an undistorted reference.
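
The following NumPy sketch illustrates the general principle with two typical no-reference features; the features and weights are hypothetical and deliberately simple, not one of the metrics from the publications below, and in practice the weights would be learned from subjective test data, e.g. with (multiway) PLSR.

import numpy as np

def blur_feature(frame):
    """Mean gradient magnitude of a greyscale frame; low values indicate strong blur."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def blockiness_feature(frame, block=8):
    """Mean absolute luminance jump across the 8x8 block grid used by many codecs."""
    f = frame.astype(float)
    dv = np.abs(np.diff(f, axis=1))[:, block - 1::block]   # jumps across vertical block edges
    dh = np.abs(np.diff(f, axis=0))[block - 1::block, :]   # jumps across horizontal block edges
    return float((dv.mean() + dh.mean()) / 2)

def predict_quality(frames, w=(0.05, -0.1, 3.0)):
    """Toy linear model with hypothetical weights w; real weights come from training."""
    feats = np.array([[blur_feature(f), blockiness_feature(f)] for f in frames]).mean(axis=0)
    return float(w[0] * feats[0] + w[1] * feats[1] + w[2])

# Example on a random "video" of ten 64x64 greyscale frames:
video = np.random.randint(0, 256, size=(10, 64, 64), dtype=np.uint8)
print(predict_quality(video))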

Key publications
C. Keimel, J. Habigt, M. Klimpke, and K. Diepold, “Design of No-Reference Video Quality Metrics with Multiway Partial Least Squares Regression,” in Third International Workshop on Quality of Multimedia Experience (QoMEX 2011), pp. 49-54, Sep., 2011. [ Abstract ] [PDF] [ BibTeX ] [DOI]

No-reference video quality metrics are becoming ever more popular, as they are more useful in real-life applications compared to full-reference metrics. One way to design such metrics is by applying data analysis methods on both objectively measurable features and data from subjective testing. Partial least squares regression (PLSR) is one such method. In order to apply such methods, however, we have to temporally pool over all frames of a video, loosing valuable information about the quality variation over time. Hence, we extend the PLSR into a higher dimensional space with multiway PLSR in this contribution and thus consider video in all its dimensions. We designed a H.264/AVC bitstream no-reference video quality metric in order to verify multiway PLSR against PLSR with respect to the prediction performance. Our results show that the inclusion of the temporal dimension with multiway PLSR improves the quality prediction and its correlation with the actual quality.

Download citation as [.bib File]


@InProceedings{Keimel-QoMEX2011,
Title = {Design of no-reference video quality metrics with multiway partial least squares regression},
Author = {Keimel, C. and Habigt, J. and Klimpke, M. and Diepold, K.},
Booktitle = {Quality of Multimedia Experience (QoMEX), 2011 Third International Workshop on},
Year = {2011},
Month = sep,
Pages = {49-54},

Abstract = {No-reference video quality metrics are becoming ever more popular, as they are more useful in real-life applications compared to full-reference metrics. One way to design such metrics is by applying data analysis methods on both objectively measurable features and data from subjective testing. Partial least squares regression (PLSR) is one such method. In order to apply such methods, however, we have to temporally pool over all frames of a video, loosing valuable information about the quality variation over time. Hence, we extend the PLSR into a higher dimensional space with multiway PLSR in this contribution and thus consider video in all its dimensions. We designed a H.264/AVC bitstream no-reference video quality metric in order to verify multiway PLSR against PLSR with respect to the prediction performance. Our results show that the inclusion of the temporal dimension with multiway PLSR improves the quality prediction and its correlation with the actual quality.},
Doi = {10.1109/QoMEX.2011.6065711},
ISBN = {978-1-4577-1334-7},
Keywords = {H.264/AVC bitstream no-reference video quality metric;PLSR; data analysis methods;multiway partial least squares regression; no-reference video quality metrics; least squares approximations;regression analysis;video coding;}
}

C. Keimel, T. Oelbaum, and K. Diepold, “No-reference video quality evaluation for high-definition video,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2009), pp. 1145-1148, Apr., 2009. [ Abstract ] [PDF] [ BibTeX ] [DOI]

A no-reference video quality metric for High-Definition video is introduced. This metric evaluates a set of simple features such as blocking or blurring, and combines those features into one parameter representing visual quality. While only comparably few base feature measurements are used, additional parameters are gained by evaluating changes for these measurements over time and using additional temporal pooling methods. To take into account the different characteristics of different video sequences, the gained quality value is corrected using a low quality version of the received video. The metric is verified using data from accurate subjective tests, and special care was taken to separate data used for calibration and verification. The proposed no-reference quality metric delivers a prediction accuracy of 0.86 when compared to subjective tests, and significantly outperforms PSNR as a quality predictor.

Download citation as [.bib File]


@InProceedings{Keimel-ICASSP2009,
Title = {No-reference video quality evaluation for high-definition video},
Author = {Keimel, C. and Oelbaum, T. and Diepold, K.},
Booktitle = {Acoustics, Speech and Signal Processing, 2009. ICASSP 2009. IEEE International Conference on},
Year = {2009},
Month = apr,
Pages = {1145-1148},
Abstract = {A no-reference video quality metric for High-Definition video is introduced. This metric evaluates a set of simple features such as blocking or blurring, and combines those features into one parameter representing visual quality. While only comparably few base feature measurements are used, additional parameters are gained by evaluating changes for these measurements over time and using additional temporal pooling methods. To take into account the different characteristics of different video sequences, the gained quality value is corrected using a low quality version of the received video. The metric is verified using data from accurate subjective tests, and special care was taken to separate data used for calibration and verification. The proposed no-reference quality metric delivers a prediction accuracy of 0.86 when compared to subjective tests, and significantly outperforms PSNR as a quality predictor.},
Doi = {10.1109/ICASSP.2009.4959791},
ISBN = {978-1-4244-2354-5},
ISSN = {1520-6149},
Keywords = {AVC/H.264;high-definition video;no-reference video quality evaluation;received video encoding;temporal pooling method;video sequence;high definition video;image sequences;video coding;}
}

T. Oelbaum, C. Keimel, and K. Diepold, “Rule-Based No-Reference Video Quality Evaluation Using Additionally Coded Videos,” in IEEE Journal of Selected Topics in Signal Processing, vol. 3, no. 2, pp. 294-303, Apr., 2009. [ Abstract ] [PDF] [ BibTeX ] [DOI]

This contribution presents a no-reference video quality metric, which is based on a set of simple rules that assigns a given video to one of four different content classes. The four content classes distinguish between video sequences which are coded with a very low data rate, which are sensitive to blocking effects, which are sensitive to blurring, and a general model for all other types of video sequences. The appropriate class for a given video sequence is selected based on the evaluation of feature values of an additional low-quality version of the given video, which is generated by encoding. The visual quality for a video sequence is estimated using a set of features, which includes measures for the blockiness, the blurriness, the spatial activity, and a set of additional continuity features. The way these features are combined to compute one overall quality value is determined by the feature class, to which the video has been assigned. We also propose an additional correction step for the visual quality value. The proposed metric is verified in a process, which includes visual quality values originating from subjective quality tests in combination with a cross validation approach. The presented metric significantly outperforms peak-signal-to-noise ratio as a visual quality estimator. The Pearson correlation between the estimated visual quality values and the subjective test results takes on values as high as 0.82.

Download citation as [.bib File]


@Article{Oelbaum-JSTSP2009,
Title = {Rule-Based No-Reference Video Quality Evaluation Using Additionally Coded Videos},
Author = {Oelbaum, T. and Keimel, C. and Diepold, K.},
Journal = {Selected Topics in Signal Processing, IEEE Journal of},
Year = {2009},
Month = apr,
Number = {2},
Pages = {294-303},
Volume = {3},

Abstract = {This contribution presents a no-reference video quality metric, which is based on a set of simple rules that assigns a given video to one of four different content classes. The four content classes distinguish between video sequences which are coded with a very low data rate, which are sensitive to blocking effects, which are sensitive to blurring, and a general model for all other types of video sequences. The appropriate class for a given video sequence is selected based on the evaluation of feature values of an additional low-quality version of the given video, which is generated by encoding. The visual quality for a video sequence is estimated using a set of features, which includes measures for the blockiness, the blurriness, the spatial activity, and a set of additional continuity features. The way these features are combined to compute one overall quality value is determined by the feature class, to which the video has been assigned. We also propose an additional correction step for the visual quality value. The proposed metric is verified in a process, which includes visual quality values originating from subjective quality tests in combination with a cross validation approach. The presented metric significantly outperforms peak-signal-to-noise ratio as a visual quality estimator. The Pearson correlation between the estimated visual quality values and the subjective test results takes on values as high as 0.82.},
Doi = {10.1109/JSTSP.2009.2015473},
ISSN = {1932-4553},
Keywords = {coded videos;rule-based no-reference video quality evaluation;signal-to-noise ratio;video blockiness; video blurriness;video sequences;visual quality estimator; image sequences;knowledge based systems; video signal processing;}
}


Crowdsourcing

Crowdsourcing: principal setup

Crowdsourcing uses the Internet to assign simple tasks to a group of online workers. In the context of subjective QoE evaluation, crowdsourcing opens up new possibilities by moving the evaluation task from the traditional laboratory environment into the Internet, allowing researchers to easily access a global pool of subjects. This not only makes it possible to include a more diverse population and real-life environments in the evaluation, but also reduces the turn-around time and significantly increases the number of subjects participating in an evaluation campaign by circumventing the bottlenecks of traditional laboratory setups. Moreover, the costs can often be reduced significantly compared to a laboratory setup.

In order to utilise these advantages, however, the differences between laboratory-based and crowd-based QoE evaluation and their influence on the results must be well understood and considered in the design of crowdsourced experiments. My main contributions include one of the first crowdsourcing frameworks for QoE evaluation, QualityCrowd, as well as a discussion of the potential challenges and proposed solutions for managing them.
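
As an illustration of the kind of reliability screening such crowd-based tests require (a generic sketch, not the QualityCrowd implementation), the following NumPy code drops workers whose ratings correlate poorly with the provisional mean opinion scores of the crowd.

import numpy as np

def screen_workers(ratings, min_corr=0.5):
    """ratings: (workers, stimuli) array of scores, NaN for missing votes.
    Returns the indices of workers kept after the correlation-based screening."""
    mos = np.nanmean(ratings, axis=0)                  # provisional MOS per stimulus
    keep = []
    for w, row in enumerate(ratings):
        mask = ~np.isnan(row)
        if mask.sum() < 3:                             # too few votes to judge this worker
            continue
        corr = np.corrcoef(row[mask], mos[mask])[0, 1]
        if corr >= min_corr:
            keep.append(w)
    return keep

# Example: 4 workers rate 5 clips on a 1-5 ACR scale; worker 3 clicks more or less randomly.
ratings = np.array([[5, 4, 2, 1, 3],
                    [4, 4, 2, 2, 3],
                    [5, 5, 1, 1, 2],
                    [1, 5, 1, 5, 1]], dtype=float)
print(screen_workers(ratings))     # keeps workers 0, 1 and 2; worker 3 falls below the threshold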

Key publications
T. Hoßfeld and C. Keimel, “Crowdsourcing in QoE Evaluation,” in Quality of Experience: Advanced Concepts, Applications and Methods, S. Möller and A. Raake, eds., Heidelberg: Springer, 2014, pp. 315-327. [ Abstract ] [ BibTeX ] [DOI]

 

T. Hoßfeld, C. Keimel, M. Hirth, B. Gardlo, J. Habigt, K. Diepold, and P. Tran-Gia, “Best Practices for QoE Crowdtesting: QoE Assessment with Crowdsourcing,” in IEEE Transactions on Multimedia, vol. 16, no. 2, pp. 541-558, Feb., 2014. [ Abstract ] [PDF] [ BibTeX ] [DOI]

 

C. Keimel, J. Habigt, and K. Diepold, “Challenges in Crowd-based Video Quality Assessment,” in Fourth International Workshop on Quality of Multimedia Experience (QoMEX 2012), pp. 13-18, Jul., 2012. [ Abstract ] [PDF] [ BibTeX ] [DOI]

C. Keimel, J. Habigt, C. Horch, and K. Diepold, “QualityCrowd – A Framework for Crowd-based Quality Evaluation,” in Picture Coding Symposium 2012 (PCS2012), pp. 245-248, May, 2012. [ Abstract ] [PDF] [ BibTeX ] [DOI]

 

Visual quality assessment & QoE

Visual quality assessment - typical laboratory setup

Quality of Experience (QoE) describes the overall experience of a user consuming or experiencing stimuli or, as defined by Qualinet, “the degree of delight or annoyance of the user of an application or service. It results from the fulfilment of his or her expectations with respect to the utility and/or enjoyment of the application or service in the light of the user’s personality and current state”. QoE is commonly assessed using standardised subjective testing methodologies.

Even though there is some discussion about to what extent these existing methodologies allow for QoE assessment, the use of standardised subjective testing methods is still the usual approach to assessing QoE. In my contributions, I examined aspects of subjective testing methods and their influence on the test results, in particular whether the prescribed testing environments in the standards are necessary. Moreover, I contributed to the Qualinet QoE definition and created the TUM data sets for use in research on video quality metrics.
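
As a brief illustration of how subjective test results are commonly summarised (a minimal sketch assuming NumPy and SciPy), the following code computes the mean opinion score (MOS) for one stimulus together with a 95% confidence interval; a Student t interval is used here, since the number of subjects in such tests is typically small.

import numpy as np
from scipy import stats

def mos_with_ci(scores, confidence=0.95):
    """scores: ratings of one stimulus by all subjects -> (MOS, half-width of the CI)."""
    scores = np.asarray(scores, dtype=float)
    n = scores.size
    mos = scores.mean()
    half = stats.t.ppf(0.5 + confidence / 2, df=n - 1) * scores.std(ddof=1) / np.sqrt(n)
    return mos, half

votes = [4, 5, 3, 4, 4, 5, 3, 4, 2, 4, 5, 4, 3, 4, 4]   # 15 subjects, 5-point ACR scale
mos, ci = mos_with_ci(votes)
print(f"MOS = {mos:.2f} +/- {ci:.2f}")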

Key publications
C. Horch, J. Habigt, C. Keimel and K. Diepold, “Evaluation of Video Quality Fluctuations Using Pattern Categorisation,” in Sixth International Workshop on Quality of Multimedia Experience (QoMEX 2014), pp. 117-122, Sep., 2014. [ Abstract ] [PDF] [ BibTeX ] [DOI]

Fluctuations of video quality over time can have a significant influence on the overall perceived quality as represented by the QoE. Existing methodologies for subjective video quality assessment, however, are often not suitable for the evaluation of these quality fluctuations, especially if they occur within very small time frames. In this contribution, we therefore propose a new method, VIQPAC, which addresses this shortcoming by using a pattern categorisation approach. Instead of requiring the subjects to provide a continuous quality evaluation, the subjects assess the overall quality impression and the strength of the quality fluctuation, combined with a categorisation of the encountered fluctuation pattern. This allows us to determine the fluctuation dependent temporal changes in the quality. The results show that VIQPAC is able to capture the pattern and strength of quality fluctuations, allowing for a proper description of the temporal quality changes within a video sequence.

Download citation as [.bib File]


@InProceedings{Horch-QoMEX2014,
Title = {Evaluation of video quality fluctuations using pattern categorisation},
Author = {Horch, C. and Habigt, J. and Keimel, C. and Diepold, K.},
Booktitle = {Quality of Multimedia Experience (QoMEX), 2014 Sixth International Workshop on},
Year = {2014},
Month = sep,
Pages = {117-122},

Abstract = {Fluctuations of video quality over time can have a significant influence on the overall perceived quality as represented by the QoE. Existing methodologies for subjective video quality assessment, however, are often not suitable for the evaluation of these quality fluctuations, especially if they occur within very small time frames. In this contribution, we therefore propose a new method, VIQPAC, which addresses this shortcoming by using a pattern categorisation approach. Instead of requiring the subjects to provide a continuous quality evaluation, the subjects assess the overall quality impression and the strength of the quality fluctuation, combined with a categorisation of the encountered fluctuation pattern. This allows us to determine the fluctuation dependent temporal changes in the quality. The results show that VIQPAC is able to capture the pattern and strength of quality fluctuations, allowing for a proper description of the temporal quality changes within a video sequence.},
Doi = {10.1109/QoMEX.2014.6982306},
Keywords = {pattern classification; quality of experience;video signal processing; QoE; VIQPAC;continuous quality evaluation; overall perceived quality; overall quality impression; pattern categorisation approach; subjective video quality assessment; video quality fluctuations; video sequence; Bit rate;Correlation; Fluctuations; quality assessment; Streaming media;video sequences}
}

C. Keimel, A. Redl, and K. Diepold, “The TUM High Definition Video Datasets,” in Fourth International Workshop on Quality of Multimedia Experience (QoMEX 2012), pp. 97-102, Jul., 2012. [ Abstract ] [PDF] [ BibTeX ] [DOI]

The research on video quality metrics depends on the results from subjective testing for both the design and development of metrics, but also for their verification. As it is often too cumbersome to conduct subjective tests, freely available data sets that include both mean opinion scores and the distorted videos are becoming ever more important. While many datasets are already widely available, the majority of these data sets focus on smaller resolutions. We therefore present in this contribution the TUM high definition datasets that include videos in both 1080p25 and 1080p50, encoded with different coding technologies and settings, H.264/AVC and Dirac, but also different presentation devices from reference monitors to home-cinema projectors. Additionally a soundtrack is provided for the home-cinema scenario. The datasets are made freely available for download under a creative commons license

Download citation as [.bib File]


@InProceedings{Keimel-QoMEX2012-DataSets,
Title = {The {TUM} High Definition Video Data Sets},
Author = {C. Keimel and A. Redl and K. Diepold},
Booktitle = {Fourth International Workshop on Quality of Multimedia Experience (QoMEX 2012)},
Year = {2012},
Month = jul,
Pages = {97-102},

Abstract = {The research on video quality metrics depends on the results from subjective testing for both the design and development of metrics, but also for their verification. As it is often too cumbersome to conduct subjective tests, freely available data sets that include both mean opinion scores and the distorted videos are becoming ever more important. While many datasets are already widely available, the majority of these data sets focus on smaller resolutions. We therefore present in this contribution the TUM high definition datasets that include videos in both 1080p25 and 1080p50, encoded with different coding technologies and settings, H.264/AVC and Dirac, but also different presentation devices from reference monitors to home-cinema projectors. Additionally a soundtrack is provided for the home-cinema scenario. The datasets are made freely available for download under a creative commons license.},
Doi = {10.1109/QoMEX.2012.6263865},
ISBN = {978-1-4673-0725-3},
Keywords = {high definition video;image sequences;video coding; Dirac coding technology; H.264/AVC coding technology;TUM high definition video datasets; home-cinema projectors; subjective testing; video encoding;video quality metrics;video sequences;Bit rate; Encoding; Licenses;Standards; Testing;Video sequences; 1080p25; 1080p50;HDTV;subjective testing; video quality assessment; QoE}
}

K. De Moor, S. Egger, C. Keimel, S. Möller, A. Raake, R. Schatz, and D. Strohmeier, “Definition of Quality and Definition of Experience,” in Qualinet White Paper on Definitions of Quality of Experience, European Network on Quality of Experience in Multimedia Systems and Services (COST Action IC 1003), P. Le Callet, S. Möller and A. Perkis, eds., Jun., 2013. [ Abstract ] [PDF] [ BibTeX ] [URL]

This White Paper is a contribution of the European Network on Quality of Experience in Multimedia Systems and Services, Qualinet (COST Action IC 1003, see www.qualinet.eu), to the scientific discussion about the term “Quality of Experience” (QoE) and its underlying concepts. It resulted from the need to agree on a working definition for this term which facilitates the communication of ideas within a multidisciplinary group, where a joint interest around multimedia communication systems exists, however approached from different perspectives. Thus, the concepts and ideas cited in this paper mainly refer to the Quality of Experience of multimedia communication systems, but may be helpful also for other areas where QoE is an issue. The Network of Excellence (NoE) Qualinet aims at extending the notion of network-centric Quality of Service (QoS) in multimedia systems, by relying on the concept of Quality of Experience (QoE). The main scientific objective is the development of methodologies for subjective and objective quality metrics taking into account current and new trends in multimedia communication systems as witnessed by the appearance of new types of content and interactions. A substantial scientific impact on fragmented efforts carried out in this field will be achieved by coordinating the research of European experts under the catalytic COST umbrella. The White Paper has been compiled on the basis of a first open call for ideas which was launched for the February 2012 Qualinet Meeting held in Prague, Czech Republic. The ideas were presented as short statements during that meeting, reflecting the ideas of the persons listed under the headline “Contributors” in the previous section. During the Prague meeting, the ideas have been further discussed and consolidated in the form of a general structure of the present document. An open call for authors was issued at that meeting, to which the persons listed as “Authors” in the previous section have announced their willingness to contribute in the preparation of individual sections. For each section, a coordinating author has been assigned which coordinated the writing of that section, and which is underlined in the author list preceding each section. The individual sections were then integrated and aligned by an editing group (listed as “Editors” in the previous section), and the entire document was iterated with the entire group of authors. Furthermore, the draft text was discussed with the participants of the Dagstuhl Seminar 12181 “Quality of Experience: From User Perception to Instrumental Metrics” which was held in Schloß Dagstuhl, Germany, May 1-4 2012, and a number of changes were proposed, resulting in the present document. As a result of the writing process and the large number of contributors, authors and editors, the document will not reflect the opinion of each individual person at all points. Still, we hope that it is found to be useful for everybody working in the field of Quality of Experience of multimedia communication systems, and most probably also beyond that field.

Download citation as [.bib File]


@Techreport{LeCallet-QualinetQoE-WHP,
Title = {{Qualinet White Paper on Definitions of Quality of Experience}},
Author = {Brunnstr{\"o}m, Kjell and Beker, Sergio Ariel and De Moor, Katrien and Dooms, Ann and Egger, Sebastien and Garcia, Marie-Neige and Hossfeld, Tobias and Jumisko-Pyykk{\"o}, Satu and Keimel, Christian and Larabi, Chaker and Lawlor, Bob and Le Callet, Patrick and M{\"o}ller, Sebastian and Pereira, Fernando and Pereira, Manuela and Perkis, Andrew and Pibernik, Jesenka and Pinheiro, Antonio and Raake, Alexander and Reichl, Peter and Reiter, Ulrich and Schatz, Raimund and Schelkens, Peter and Skorin-Kapov, Lea and Strohmeier, Dominik and Timmerer, Christian and Varela, Martin and Wechsung, Ina and You, Junyong and Zgank, Andrej},
Year = {2013},
Editor = {Patrick Le Callet and Sebastian Möller and Andrew Perkis},
Month = Mar,
Note = {Qualinet White Paper on Definitions of Quality of Experience Output from the fifth Qualinet meeting, Novi Sad, March 12, 2013},

Abstract = {This White Paper is a contribution of the European Network on Quality of Experience in Multimedia Systems and Services, Qualinet (COST Action IC 1003, see www.qualinet.eu), to the scientific discussion about the term “Quality of Experience” (QoE) and its underlying concepts. It resulted from the need to agree on a working definition for this term which facilitates the communication of ideas within a multidisciplinary group, where a joint interest around multimedia communication systems exists, however approached from different perspectives. Thus, the concepts and ideas cited in this paper mainly refer to the Quality of Experience of multimedia communication systems, but may be helpful also for other areas where QoE is an issue. The Network of Excellence (NoE) Qualinet aims at extending the notion of network-centric Quality of Service (QoS) in multimedia systems, by relying on the concept of Quality of Experience (QoE). The main scientific objective is the development of methodologies for subjective and objective quality metrics taking into account current and new trends in multimedia communication systems as witnessed by the appearance of new types of content and interactions. A substantial scientific impact on fragmented efforts carried out in this field will be achieved by coordinating the research of European experts under the catalytic COST umbrella. The White Paper has been compiled on the basis of a first open call for ideas which was launched for the February 2012 Qualinet Meeting held in Prague, Czech Republic. The ideas were presented as short statements during that meeting, reflecting the ideas of the persons listed under the headline “Contributors” in the previous section. During the Prague meeting, the ideas have been further discussed and consolidated in the form of a general structure of the present document. An open call for authors was issued at that meeting, to which the persons listed as “Authors” in the previous section have announced their willingness to contribute in the preparation of individual sections. For each section, a coordinating author has been assigned which coordinated the writing of that section, and which is underlined in the author list preceding each section. The individual sections were then integrated and aligned by an editing group (listed as “Editors” in the previous section), and the entire document was iterated with the entire group of authors. Furthermore, the draft text was discussed with the participants of the Dagstuhl Seminar 12181 “Quality of Experience: From User Perception to Instrumental Metrics” which was held in Schloß Dagstuhl, Germany, May 1-4 2012, and a number of changes were proposed, resulting in the present document. As a result of the writing process and the large number of contributors, authors and editors, the document will not reflect the opinion of each individual person at all points. Still, we hope that it is found to be useful for everybody working in the field of Quality of Experience of multimedia communication systems, and most probably also beyond that field.},
Hal_id = {hal-00977812},
Hal_version = {v1},
Keywords = {QoE;white paper;Qualinet;Quality of Experience;definition},
Url = {https://hal.archives-ouvertes.fr/hal-00977812}
}

C. Keimel and K. Diepold, “On the use of reference monitors in subjective testing for HDTV,” in Second International Workshop on Quality of Multimedia Experience (QoMEX 2010), pp. 35-40, Jun., 2010. [ Abstract ] [PDF] [ BibTeX ] [DOI]

Most international standards recommend the use of reference monitors in subjective testing for visual quality. But do we really need to use reference monitors? In order to find an answer to this question, we conducted extensive subjective tests with reference, color calibrated high quality and uncalibrated standard monitors. We not only used different HDTV sequences, but also two fundamentally different encoders: AVC/H.264 and Dirac. Our results show that using the uncalibrated standard monitor, the test subjects underestimate the visual quality compared to the reference monitor. Between the reference and a less expensive color calibrated high quality monitor, however, we were unable to find a statistically significant difference in most cases. This might be an indication that both can be used equivalently in subjective testing, although further studies will be necessary in order to get a definitive answer.

Download citation as [.bib File]


@InProceedings{Keimel-QoMEX2010,
Title = {On the use of reference monitors in subjective testing for HDTV},
Author = {Keimel, C. and Diepold, K.},
Booktitle = {Quality of Multimedia Experience (QoMEX), 2010 Second International Workshop on},
Year = {2010},
Month = jun,
Pages = {35-40},

Abstract = {Most international standards recommend the use of reference monitors in subjective testing for visual quality. But do we really need to use reference monitors? In order to find an answer to this question, we conducted extensive subjective tests with reference, color calibrated high quality and uncalibrated standard monitors. We not only used different HDTV sequences, but also two fundamentally different encoders: AVC/H.264 and Dirac. Our results show that using the uncalibrated standard monitor, the test subjects underestimate the visual quality compared to the reference monitor. Between the reference and a less expensive color calibrated high quality monitor, however, we were unable to find a statistically significant difference in most cases. This might be an indication that both can be used equivalently in subjective testing, although further studies will be necessary in order to get a definitive answer.},
Doi = {10.1109/QOMEX.2010.5518305},
ISBN = {978-1-4244-6959-8},
Keywords = {high definition television;video coding;H.264/AVC;Dirac;subjective testing; reference monitors;uncalibrated standard monitor;visual quality;Calibration; Computer displays;Data processing;HDTV;Performance evaluation; Testing;AVC/H.264; Dirac;HDTV; reference monitor;subjective testing}
}

Interactive TV

GLOBAL ITV STB prototype

Increasingly, the segregation of content according to device type is becoming blurred, especially for so-called SmartTVs that combine traditional linear broadcast content with interactive, non-linear broadband content. So far, multiple interactive TV (iTV) standards have been defined, usually limited to a specific broadcasting technology, for example the popular HbbTV for DVB broadcasting systems.

In order to enable a global market for sharing interactive content, the coexistence and interoperability of existing iTV systems across different broadcast technologies is necessary. The focus of my work is especially on adapting the successful DVB-based HbbTV to other broadcasting systems, but also on designing migration concepts towards future web-based systems built on HTML5.

Key publications
G. Calixto, C. Keimel, L. Costa, K. Merkel, and M. Zuffo, “Analysis of Coexistence of Ginga and HbbTV in DVB and ISDB-Tb,” in IEEE Fourth International Conference on Consumer Electronics – Berlin (ICCE-Berlin 2014), pp. 83-87, Sep., 2014. [ Abstract ] [PDF] [ BibTeX ] [DOI]

In this paper, we examine the possible coexistence of Ginga and HbbTV as interactive TV (iTV) systems in the Brazilian and European broadcasting systems, ISDB-Tb and DVB, respectively. We compare both systems architectures, in particular with respect to their functional modules. Our analysis provides the necessary information to assess the possibilities of a joint framework that includes both Ginga and HbbTV, consequently leading to a potential foundation of a system that supports both Ginga and HbbTV applications.

Download citation as [.bib File]


@InProceedings{Calixto-ICCE-Berlin2014,
Title = {Analysis of coexistence of Ginga and HbbTV in DVB and ISDB-Tb},
Author = {Calixto, Gustavo Moreira and Keimel, Christian and de Paula Costa, Laisa Caroline and Merkel, Klaus and Zuffo, Marcelo Knorich},
Booktitle = {Consumer Electronics – Berlin (ICCE-Berlin), 2014 IEEE Fourth International Conference on},
Year = {2014},
Month = sep,
Pages = {83-87},

Abstract = {In this paper, we examine the possible coexistence of Ginga and HbbTV as interactive TV (iTV) systems in the Brazilian and European broadcasting systems, ISDB-Tb and DVB, respectively. We compare both systems architectures, in particular with respect to their functional modules. Our analysis provides the necessary information to assess the possibilities of a joint framework that includes both Ginga and HbbTV, consequently leading to a potential foundation of a system that supports both Ginga and HbbTV applications.},
Doi = {10.1109/ICCE-Berlin.2014.7034224},
Keywords = {Broadband communication; Data communication;Digital multimedia broadcasting; Digital video broadcasting;TV ;DTV; DVB; Ginga; HbbTV;ISDB-Tb; coexistence; interactive TV; iTV}
}