Abstract

For complex segmentation tasks, the achievable accuracy of fully automated systems is inherently limited. Specifically, when a precise segmentation result is desired for a small number of given data sets, semi-automatic methods exhibit a clear benefit for the user. The optimization of human-computer interaction (HCI) is an essential part of interactive image segmentation. Nevertheless, publications introducing novel interactive segmentation systems (ISS) often lack an objective comparison of HCI aspects. It is demonstrated that even when the underlying segmentation algorithm is the same across interactive prototypes, their user experience may vary substantially. The results show that users prefer simple interfaces as well as a considerable degree of freedom to control each iterative step of the segmentation. In this article, an objective method for the comparison of ISS is proposed, based on extensive user studies. A summative qualitative content analysis is conducted via abstraction of the visual and verbal feedback given by the participants. A direct assessment of each segmentation system is performed by the users via the system usability scale (SUS) and AttrakDiff-2 questionnaires. Furthermore, an approximation of the usability findings of those studies is introduced, computed solely from the system-measurable user actions recorded during usage of the interactive segmentation prototypes. The prediction of all questionnaire results has an average relative error of 8.9%, which is close to the expected precision of the questionnaire results themselves. This automated evaluation scheme may significantly reduce the resources necessary to investigate each variation of a prototype's user interface (UI) features and segmentation methodology.

Highlights

  • To the best of our knowledge, no publication combines user-generated scribbles with standardized questionnaires to assess the quality of an interactive image segmentation system

  • A scalable system is introduced to approximate pragmatic as well as hedonic usability aspects of a given interactive segmentation system

  • According to the mapping (Figure 6) introduced in Section 2.3.1, the adjective ratings of the semi-manual and joint prototypes are excellent (88 and 82, respectively), and the adjective associated with the guided prototype is good (67)
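The SUS scores above follow the standard scoring rule for the ten-item questionnaire (odd-numbered items contribute their answer minus one, even-numbered items five minus their answer, and the raw sum is scaled to 0..100). A minimal sketch of that computation, with an adjective mapping whose thresholds are illustrative assumptions chosen only to reproduce the examples above (the article's actual mapping is given in its Figure 6):

```python
def sus_score(responses):
    """Compute the system usability scale (SUS) score from ten
    Likert-scale answers (1 = strongly disagree, 5 = strongly agree)."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten answers in the range 1..5")
    # Odd-numbered items are positively worded, even-numbered negatively.
    odd = sum(r - 1 for r in responses[0::2])
    even = sum(5 - r for r in responses[1::2])
    return (odd + even) * 2.5  # scale the raw 0..40 sum to 0..100


def adjective_rating(score):
    # Assumed thresholds, not the article's exact Figure 6 mapping.
    for lower_bound, adjective in ((80, "excellent"), (65, "good"), (50, "ok")):
        if score >= lower_bound:
            return adjective
    return "poor"
```

For example, the best possible answer pattern (all fives on positive items, all ones on negative items) yields a score of 100.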


Introduction

To the best of our knowledge, no publication combines user-generated scribbles with standardized questionnaires to assess the quality of an interactive image segmentation system. This type of synergetic usability measure is a contribution of this work. Both evaluation results are analyzed with respect to a joint prototype, defined in Section 2.2.3, which incorporates aspects of both interface techniques. This novel automatic assessment of pragmatic as well as hedonic usability aspects is a contribution of this work. Our source code release for the automatic usability evaluation from user interaction log data can be found at https://github.com/mamrehn/interactive_image_segmentation_evaluation.
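The abstract quantifies the quality of this automatic evaluation as an average relative error of 8.9% between the questionnaire scores predicted from interaction log data and the scores reported by the participants. A minimal sketch of how such an error measure could be computed (the function name and interface are assumptions for illustration, not the article's implementation):

```python
def mean_relative_error(predicted, measured):
    """Average relative deviation of predicted questionnaire scores
    from the scores actually reported by the study participants."""
    if len(predicted) != len(measured) or not measured:
        raise ValueError("need one measured score per predicted score")
    return sum(abs(p - m) / abs(m) for p, m in zip(predicted, measured)) / len(measured)
```

With hypothetical scores, predicting 90 for a measured 100 and 80 for a measured 80 gives a mean relative error of 5%.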

