DROID

Dynamic Remote Operation Incident Detection


Members

Reiterer, Jetter, Gerken, König, Demarmels

Description

The expressiveness and validity of usability evaluation are strongly dependent on the quality and quantity of data extracted from the observed user's behaviour.

Especially for long-term observation of the user experience and the user's learning process, precise and adequate data should be collected over days, weeks or months, not just minutes or hours as in classical usability lab testing.

This is especially true with innovative interactive visualizations and visual interfaces. Their evaluation remains a challenge, because the benefits of new interaction paradigms and new visual languages only become evident after a certain learning period, in which the user becomes aware of the full range of possibilities and advantages of a system [Plaisant].

Furthermore, artificial test settings and unrealistic test tasks can result in a misleading assessment of a system's suitability for real user tasks. This is especially true for search tasks, mobile devices or ambient technology, which can only be simulated in the lab in a "let's pretend" manner.

Accordingly, the long-term observation of real users performing real tasks in their real living and working environment is highly desirable for supporting user-centered design processes.

Although there are already many research projects and commercial products providing help for data collection and analysis in the usability lab (ranging from screen recording and video analysis tools [Morae] to mouse motion data collection and visualization [Gellner]), there are still very few ways to apply these outside the lab in the real world. Considering the growing demand for highly cost-effective testing methods (especially in the field of mobile applications or intercultural interface design), new tools for unattended remote usability evaluation play an important role in supporting user-centered design.

Many attempts have been made to offer such tools in the context of web usability, for example by logging and visualizing user activity in internet browsers or on web servers ([Hilbert et al.], [Hong et al.], [Gonzalez, Alvarez]). Some of these tools have become very powerful commercial products by now. However, user behaviour in applications which are not based on simple HTML/hypertext interaction (e.g. Java applets/applications, C++ programs, Flash, ...) or in systems that are not web-based at all (e.g. mobile devices, ambient technology) cannot be observed with these tools.

The Human-Computer Interaction Group at the University of Konstanz aims to fill this gap with DROID (Dynamic Remote Operation Incident Detection) - a software framework and server application for capturing user behaviour in all kinds of applications and on different platforms (desktop, cellular phones, PDAs, ...).

DROID will automatically collect data about user behaviour in the background of everyday operation and transmit all relevant user interactions and system incidents within an application over the internet to a central logging server. This server plays the role of a usability data warehouse which provides a steady flow of usability-relevant data during the development and post-deployment phases.
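As a rough illustration, background transmission of a single logged event to such a server could look like the following Java sketch. The server URL, the tab-separated record format and the EventUploader class name are illustrative assumptions, not the actual DROID interface.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    /** Illustrative client: sends one logged interaction event to a central logging server. */
    public class EventUploader {

        // Hypothetical endpoint; the real DROID server address is not part of this sketch.
        private static final String SERVER_URL = "http://usability-server.example.org/droid/log";

        /** Transmit a single event record (timestamp, session id, event type, detail) as plain text. */
        public static void send(long timestamp, String sessionId, String type, String detail) {
            String record = timestamp + "\t" + sessionId + "\t" + type + "\t" + detail + "\n";
            try {
                HttpURLConnection conn = (HttpURLConnection) new URL(SERVER_URL).openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                conn.setRequestProperty("Content-Type", "text/plain; charset=utf-8");
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(record.getBytes(StandardCharsets.UTF_8));
                }
                conn.getResponseCode(); // forces the request to be sent
                conn.disconnect();
            } catch (Exception e) {
                // Logging must never disturb the host application, so transmission failures are ignored.
            }
        }
    }

In practice such events would of course be buffered and sent asynchronously, so that logging never blocks the user interface.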

Key features of DROID are:

Dynamic Logging - the amount and the focus of transmitted data will be dynamically adapted to current usability questions, user privacy and available bandwidth

Remote Testing - the logging does not need to be done in a lab environment, but can be performed over the net from every application that is equipped with DROID components

Operation Incident Detection - all incidents or system events (ranging from normal UI operations like mouse selection or keyboard input to bug reports after system crashes) can be automatically detected in the background or manually triggered by the user (a sketch of this idea follows below)
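To make the last point more concrete, the following minimal sketch shows one way operation incident detection could be realized in a Java/Swing application: a global AWTEventListener observes UI events application-wide, and a default uncaught-exception handler turns crashes into automatic incident reports. The IncidentDetector class and its use of the EventUploader sketch above are illustrative assumptions, not the actual DROID components.

    import java.awt.AWTEvent;
    import java.awt.Toolkit;
    import java.awt.event.AWTEventListener;
    import java.awt.event.KeyEvent;
    import java.awt.event.MouseEvent;

    /** Illustrative incident detector: observes UI events and crashes in the background. */
    public class IncidentDetector {

        public static void install(final String sessionId) {
            // Normal UI operations (mouse clicks, key presses) are detected application-wide.
            Toolkit.getDefaultToolkit().addAWTEventListener(new AWTEventListener() {
                public void eventDispatched(AWTEvent event) {
                    if (event instanceof MouseEvent && event.getID() == MouseEvent.MOUSE_CLICKED) {
                        MouseEvent me = (MouseEvent) event;
                        EventUploader.send(System.currentTimeMillis(), sessionId,
                                "MOUSE_CLICK", "x=" + me.getX() + ",y=" + me.getY());
                    } else if (event instanceof KeyEvent && event.getID() == KeyEvent.KEY_PRESSED) {
                        EventUploader.send(System.currentTimeMillis(), sessionId,
                                "KEY_PRESSED", "code=" + ((KeyEvent) event).getKeyCode());
                    }
                }
            }, AWTEvent.MOUSE_EVENT_MASK | AWTEvent.KEY_EVENT_MASK);

            // System incidents: uncaught exceptions become automatic bug reports.
            Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
                public void uncaughtException(Thread t, Throwable e) {
                    EventUploader.send(System.currentTimeMillis(), sessionId,
                            "CRASH", "thread=" + t.getName() + ",exception=" + e);
                }
            });
        }
    }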

As a method of remote usability testing, DROID will offer many advantages compared to classical usability data collection in the lab:

  • Everyday tasks are performed by real users
  • The users are located in normal working environments
  • The data is captured during real tasks and not during artificial test settings
  • Data capture is highly cost-effective and can be dynamically focussed on current usability issues
  • All installations with internet access can participate in usability evaluation without the need for installing and activating special event-recording software
  • No direct interaction between evaluator and user is necessary ('24-7' evaluation)
  • High quality machine-readable data without the need for intense human assessment and analysis of video material
  • Different levels of data capture: from simple start-/stop-session logging for usage frequency statistics to detailed capturing of mouse motion and keyboard events (see the sketch after this list)
  • Additional possibilities like Quality Feedback Agents or user-reported critical incidents can support the post-deployment phase [Hartson, Castillo].
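The different capture levels mentioned in the list above could, for example, be modelled as in the following sketch. The CaptureLevel enum and the event type names are purely hypothetical; they only illustrate the idea of dynamically narrowing or widening the logging focus.

    /** Illustrative capture levels; the active level could be adjusted remotely to refocus logging. */
    public enum CaptureLevel {
        SESSION_ONLY,   // only start/stop of sessions, for usage frequency statistics
        UI_EVENTS,      // additionally high-level UI operations such as mouse selections
        FULL_DETAIL;    // additionally mouse motion and individual keyboard events

        /** Decide whether an event of the given type is logged at this level. */
        public boolean covers(String eventType) {
            switch (this) {
                case SESSION_ONLY:
                    return eventType.startsWith("SESSION_");
                case UI_EVENTS:
                    return !eventType.equals("MOUSE_MOVED") && !eventType.startsWith("KEY_");
                default:
                    return true; // FULL_DETAIL logs everything
            }
        }
    }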

Current status of DROID:


In a first step, a DROID framework was developed in Java to allow data capturing and transmission for Java-based applications. Java applications can be integrated into this framework and distributed together with the DROID functionality to a large number of end users to collect interaction data during everyday operation.
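From the point of view of the host application, such an integration could be as small as the following sketch; all class names, including HostApplicationLauncher and the DROID-like helpers from the sketches above, are illustrative assumptions.

    import javax.swing.SwingUtilities;
    import java.util.UUID;

    /** Illustrative integration point: the host application starts DROID-style logging before its own UI. */
    public class HostApplicationLauncher {

        public static void main(String[] args) {
            // One anonymous session id per program run; no personal data is attached.
            final String sessionId = UUID.randomUUID().toString();

            EventUploader.send(System.currentTimeMillis(), sessionId, "SESSION_START", "app=HostApplication");
            IncidentDetector.install(sessionId);

            // Send a final session event when the application exits normally.
            Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
                public void run() {
                    EventUploader.send(System.currentTimeMillis(), sessionId, "SESSION_STOP", "app=HostApplication");
                }
            }));

            SwingUtilities.invokeLater(new Runnable() {
                public void run() {
                    // new MainWindow().setVisible(true);  // placeholder for the actual application start-up
                }
            });
        }
    }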

As a case study for DROID, the visual information seeking system "MedioVis" was integrated into this DROID framework. MedioVis has been field-tested in the Library of the University of Konstanz since 2004, and the DROID components have since been collecting user interaction data from thousands of users and user sessions on dozens of workstations within the library.

Figure 1: DROID Framework and Application

Figure 2: Evaluation of Multiple Application Installations with DROID

Future steps of the project will focus on the application of automated data mining and data visualization tools (e.g. KNIME) for an extensive analysis of the collected interaction data. One goal is the classification of users and user sessions to identify typical user archetypes and typical user tasks. Furthermore, the possibilities for automated identification of usage patterns or potential usability flaws as input for classical usability lab testing will be explored.
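As a simple illustration of this analysis step, a session's raw event log could first be condensed into a feature vector (event counts and session duration) of the kind that a tool such as KNIME could then cluster into user archetypes. The SessionFeatures class and the assumed record layout are hypothetical.

    import java.util.HashMap;
    import java.util.Map;

    /** Illustrative preprocessing: condense one session's raw event log into a feature vector. */
    public class SessionFeatures {

        /** events: rows of {timestamp, sessionId, eventType, detail}, all belonging to one session. */
        public static Map<String, Double> extract(String[][] events) {
            Map<String, Double> features = new HashMap<String, Double>();
            long first = Long.MAX_VALUE, last = Long.MIN_VALUE;
            for (String[] e : events) {
                long t = Long.parseLong(e[0]);
                first = Math.min(first, t);
                last = Math.max(last, t);
                String key = "count_" + e[2];           // one counter feature per event type
                Double old = features.get(key);
                features.put(key, old == null ? 1.0 : old + 1.0);
            }
            features.put("duration_seconds", events.length == 0 ? 0.0 : (last - first) / 1000.0);
            return features;
        }
    }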


The DROID project started on 1st September 2003 and was initiated by a discussion of remote usability testing methods and their problems in [Jetter].


References:

  • [Gellner] (2003): Gellner M.: Mousemaps - ein Ansatz für eine Technik zur Visualisierung der Nutzung von Software und zur Automation der Entdeckung von Bedienungsfehlern, Mensch & Computer 2003, Stuttgart: B.G. Teubner, 2003, S. 197-206
  • [Gonzalez, Alvarez] (2000): Gonzalez Rodriguez M., Alvarez Gutierrez D.: Data Gathering Agents for Remote Navigability Testing, Proceedings of the SCI'2000 Conference (Systemics, Cybernetics and Informatics), Orlando, USA, 23rd to 26th July 2000.
  • [Hartson, Castillo] (1999): Hartson H. R., Castillo J.: Remote Evaluation for Post-Deployment Usability Improvement. Proceedings of the Working Conference on Advanced Visual Interfaces (AVI '98)
  • [Hilbert et al.] (2000): Hilbert D.M., Redmiles D.F.: Extracting Usability Information from User Interface Events, Technical Report UCI-ICS-99-40, Department of Information and Computer Science, University of California, Irvine.
  • [Hong et al.] (2001): Hong J., Landay J.: WebQuilt: A Framework for Capturing and Visualizing the Web Experience, Proceedings of WWW 10, Hong Kong, May 2001
  • [Jetter] (2003): Jetter H.-C.: Usability Evaluation im Rahmen von INVISIP, University of Konstanz Online Publication Service, http://www.ub.uni-konstanz.de/kops/volltexte/2003/1046/
  • [Morae] (2007): Techsmith Morae Website, http://www.techsmith.com/morae.asp
  • [Plaisant] (2004): Plaisant, C.: The challenge of information visualization evaluation. In: Proceedings of the Working Conference on Advanced Visual interfaces (Gallipoli, Italy, May 25 - 28, 2004). AVI '04. ACM Press