Beyond the SUS: What I learned from using the UEQ

February 17, 2026

In UX research, standardized questionnaires such as the System Usability Scale (SUS) or the PSSUQ are often the first choice when it comes to measuring the perception of a product. For a current project, however, I deliberately chose the User Experience Questionnaire (UEQ) in order to obtain a more differentiated picture of the user experience and to test the questionnaire myself.

While the SUS provides a quick, aggregated score, the UEQ delves deeper into the dimensions of user experience, measuring six scales: attractiveness, perspicuity, efficiency, dependability, stimulation, and novelty. In practice, however, I found the tool considerably more complex to use than it appears at first glance.
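For contrast, the SUS's "quick, aggregated score" really is quick: the standard scoring rule (odd items contribute score − 1, even items 5 − score, sum multiplied by 2.5) fits in a few lines. A minimal sketch:

```python
# Standard SUS scoring: 10 items rated 1..5.
# Odd-numbered items contribute (score - 1), even-numbered items (5 - score);
# the sum is multiplied by 2.5 to yield a value between 0 and 100.
def sus_score(responses):
    """responses: list of 10 ratings (1..5), in questionnaire order."""
    if len(responses) != 10:
        raise ValueError("The SUS has exactly 10 items")
    total = 0
    for item_no, rating in enumerate(responses, start=1):
        total += (rating - 1) if item_no % 2 == 1 else (5 - rating)
    return total * 2.5

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # → 75.0
```

As the rest of this post shows, nothing about the UEQ is this mechanical.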

The importance of meticulous preparation

If you are using the UEQ for the first time, resist the temptation to simply copy the items into a survey tool. Learn from my mistake: my first important insight was to read the manual thoroughly. It sounds trivial, but as I discovered later, the devil is in the details.

It is particularly important to take a close look at the 26 items in advance and not to forget the additional question at the end of the manual. This is often overlooked, but provides valuable context for later classification.

Pitfalls in data analysis

In my opinion, the biggest challenge arises after data collection. While questionnaires such as the SUS can be scored almost mechanically, analyzing the UEQ requires considerable manual effort:

  • The problem of recoding: The fact that certain items need to be recoded is hardly documented in the accompanying literature. There is no clear indication of exactly which items are affected. Without this step, however, the results are unusable.
  • Structural inconsistencies: I found that the order of the items in the survey often does not match the structure of the official evaluation file. As a result, the individual items have to be sorted manually into the six scales – an error-prone and time-consuming process.
  • Benchmark hurdles: The order of the categories in the benchmark does not necessarily correspond to the order in the evaluation. This discrepancy makes it difficult to quickly compare your own results with the reference values.
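The recoding step above can be sketched in a few lines, assuming the usual UEQ transformation from a 7-point answer (1..7) to a value between −3 and +3. Note that REVERSED_ITEMS is a placeholder: exactly which items must be flipped is the poorly documented part, so check the official manual and analysis sheet for the real set.

```python
# Hypothetical set of reversed items -- NOT the actual list;
# consult the official UEQ material for the real item polarities.
REVERSED_ITEMS = {2, 5, 11}

def recode(item_no, answer):
    """Map a raw answer 1..7 to -3..+3, flipping reversed items."""
    value = answer - 4                 # 1..7 -> -3..+3
    return -value if item_no in REVERSED_ITEMS else value

print(recode(1, 7))   # →  3
print(recode(2, 7))   # → -3 (reversed item)
```

Skipping this step silently shifts every reversed item to the wrong pole, which is why the results are unusable without it.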

Interpretation: Complexity instead of a single score

Compared to the SUS, which provides a single overall value, the UEQ does not offer a "single score." Instead, you receive six scale means, each ranging from −3 to +3. This makes interpretation challenging.
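Mechanically, each scale mean is just the average of its recoded items. The sketch below illustrates this for two scales; the item-to-scale mapping shown is hypothetical, since the real assignment lives in the official UEQ analysis sheet.

```python
from statistics import mean

# Hypothetical item-to-scale mapping for illustration only --
# the real assignment is defined in the official UEQ analysis sheet.
SCALE_ITEMS = {
    "Efficiency":  [9, 20, 22, 23],
    "Stimulation": [5, 6, 7, 18],
}

def scale_means(recoded):
    """recoded: dict item_no -> recoded value (-3..+3) for one respondent."""
    return {scale: mean(recoded[i] for i in items)
            for scale, items in SCALE_ITEMS.items()}

dummy = {i: 1 for i in range(1, 27)}   # dummy respondent, all items +1
print(scale_means(dummy))
```

The arithmetic is trivial; the effort lies in getting the mapping and item order right first.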

The benchmark provided is only of limited help here. While with the SUS you know immediately whether a score of 75 is good or bad, classifying the UEQ scales requires significantly more effort. You have to learn to deal with this complexity and interpret the results in the context of the specific product goals.
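Once the category order is untangled, the classification itself reduces to a threshold lookup. The benchmark labels below (Excellent down to Bad) follow the UEQ benchmark, but the cut-off values are placeholders; the real benchmark uses per-scale thresholds from the official data set.

```python
# Placeholder cut-offs for illustration -- the real UEQ benchmark
# defines separate thresholds per scale.
THRESHOLDS = [            # (lower bound, category), highest first
    (1.5, "Excellent"),
    (1.0, "Good"),
    (0.5, "Above average"),
    (0.0, "Below average"),
]

def classify(scale_mean):
    """Return the benchmark category for one scale mean."""
    for bound, label in THRESHOLDS:
        if scale_mean >= bound:
            return label
    return "Bad"

print(classify(1.2))   # → Good
```

Even with this lookup automated, the category label is only a starting point; the number still has to be read against the product's goals.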

Conclusion

Anyone using the UEQ should allow time for manual data preparation and intensive examination of the individual scales. For projects where a quick, simple score is sufficient, the SUS remains the more efficient choice.

https://www.ueq-online.org/