
Nowadays, interactive spaces equipped with modern hardware such as motion-capture or gaze-tracking systems enable sensing of user actions, for instance tracking continuous changes in a user's distance to other people and objects, or in their orientation and gaze direction. In HCI, researchers are trying to incorporate this sensor data to react to such implicit or explicit interaction and to visually adapt the interface using contextual information and/or personalization. Unfortunately, these sensors are frequently used in isolation, and researchers often base the design of such visually adaptive interfaces on very heterogeneous underlying models. Moreover, the approaches, concerns and constraints of research into visual adaptation differ within the field. This leads to the conclusion that visual adaptation comes with its own set of challenges due to the complexities of dealing with contextual information.
Our workshop aims to address this by creating an opportunity to outline the different approaches, discuss their utility for real-life applications, and establish any potential commonalities.

The aim of the workshop is to bring together researchers investigating visual adaptation of interfaces. We especially welcome position-contingent, gaze-contingent and person-contingent visual adaptations. Every approach to visual adaptation comes with a set of constraints. For example, gaze-based adaptive interfaces are limited to areas where good-quality gaze data is available. A similar availability constraint applies to interfaces that perform visual adaptation based on the distance and orientation of the user to the interaction surface. However, it is possible that some of the techniques used in gaze-based adaptation of interfaces may be applicable or relevant to distance-based approaches, or vice versa.

Thus we hope to bring together people approaching adaptation from as many perspectives as possible (proxemics, gaze-based interaction, personalisation and others) to explore whether there is anything to be gained at their intersection.

The workshop welcomes submissions addressing the design, evaluation, modelling and other aspects of one or more of the following topics related to visual adaptation:

  • position-, distance- or proximity-based visual adaptation
  • gaze-contingent adaptation
  • perceptual visual adaptation
  • user and/or environments modelling for adaptation
  • personalised visual adaptation
  • visual adaptation on large displays and tabletops
  • adaptation in heterogeneous multi-display environments
  • visual adaptation in collaborative work (remote or co-located)
  • social aspects of visual adaptation, including privacy and sharing
  • novel interaction techniques using visual adaptation
  • visual adaptation in information visualisation

Participation

We invite researchers to submit a work-in-progress or a position paper (4 to 6 pages in length, using the SIGCHI Extended Abstracts Template), which they will present at the workshop. Submissions on late breaking results and on-going research projects are highly encouraged.

Submissions can be made in PDF format using the EasyChair system at the following URL: https://www.easychair.org/conferences/?conf=vai2013

Given enough interest and high-quality submissions, authors may be invited to submit an extended version of their papers to a journal special issue on visual adaptation.

Mailing List on Google Groups

As promised at the workshop, we have created a space where people interested in visual adaptation can stay in contact with each other and post related content, links and ongoing research in this area. The mailing list on Google Groups is called "Visual Adaptation of Interfaces". You can access the group here or send emails to visualadaptation@googlegroups.com.

Important Dates

Friday, August 16th, 2013 – Extended deadline for submissions (originally Friday, August 9th, 2013)
Friday, August 23rd, 2013 – Notification of acceptance

Workshop Agenda

Morning Session (09:00 to 12:00)
09:00 to 09:15 Welcome and introduction to the workshop theme
09:15 to 10:00 ADAPTIKs: Adaptive Information Keyholes for Public Libraries

Jens Müller, Simon Butscher, and Harald Reiterer

Human-Computer Interaction Group

University of Konstanz, Germany.

Paper

10:00 to 10:45 Utility of Gaze-contingent DOF Blur for Depth Perception

Michael Mauderer, Simone Conte, and Miguel Nacenta

School of Computer Science, University of St Andrews, UK

Paper / Slides

10:45 to 11:00 Coffee Break
11:00 to 11:45 Visual and Functional Adaptation in Ad-hoc Communities of Devices

Hans-Christian Jetter¹ and Roman Rädle²

¹Intel ICRI Cities, University College London, UK

²Human-Computer Interaction Group, University of Konstanz, Germany

Paper

11:45 to 12:00 Wrap-up & Summary
12:00 to 13:30 Lunch (Sandwiches & Nibbles)


Organizers

Jakub Dostal
School of Computer Science,
University of St Andrews, Scotland
http://jakubdostal.com


Miguel Nacenta
School of Computer Science,
University of St Andrews, Scotland
http://nacenta.com/


Roman Rädle
Human-Computer Interaction Group,
University of Konstanz, Germany
http://romanraedle.de


Harald Reiterer
Human-Computer Interaction Group,
University of Konstanz, Germany
http://hci.uni-konstanz.de/staff/reiterer


Sophie Stellmach
Microsoft, Redmond, USA

Program Committee

  • Simon Butscher, University of Konstanz
  • Jakub Dostal, University of St Andrews
  • Hans-Christian Jetter, ICRI Cities, University College London
  • Michael Mauderer, University of St Andrews
  • Jens Müller, University of Konstanz
  • Miguel Nacenta, University of St Andrews
  • Aaron Quigley, University of St Andrews
  • Umar Rashid, University of Lincoln
  • Roman Rädle, University of Konstanz
  • Harald Reiterer, University of Konstanz
  • Sophie Stellmach, Microsoft

Contact

For questions about the workshop please contact Jakub Dostal.
