Crowdsourcing uses the Internet to assign simple tasks to a group of online workers. In the context of subjective QoE evaluation, crowdsourcing enables new possibilities by moving the evaluation task from the traditional laboratory environment to the Internet, giving researchers easy access to a global pool of subjects. This not only makes it possible to include a more diverse population and real-life environments in the evaluation, but also significantly reduces the turn-around time and increases the number of subjects participating in an evaluation campaign by circumventing bottlenecks in traditional laboratory setups. Moreover, the costs can often be reduced significantly compared to a laboratory setup.
In order to utilise these advantages, however, the differences between laboratory-based and crowd-based QoE evaluation and their influence on the results must be well understood and considered in the design of crowdsourced experiments. My main contributions include QualityCrowd, one of the first crowdsourcing frameworks for QoE evaluation, as well as a discussion of the potential challenges of crowdsourced evaluation and proposed solutions for managing them.