New forms of Human-Computer Interaction

From Reality-Based Interaction to Blended Interaction

The starting point: new findings in cognitive psychology – Embodied Cognition

The Graphical User Interface (GUI) – although developed in research laboratories in the late 1970s – is still the dominant interaction paradigm. However, important recent findings in cognitive psychology as well as the vision of Ubiquitous Computing (Weiser 1991) have contributed significantly to a critical questioning of this paradigm. In recent years, a new point of view has established itself in cognitive psychology. In addition to the internal – and to us invisible – cognitive processes in the brain, this view gives priority to studying our interaction with the environment and with other human beings in terms of its significance for cognitive development. This led to the realization that this view – termed Embodied Cognition (Dourish 2001) – is very significant for our mental development and everyday behaviour. Arguably, our cognitive development is decisively influenced by our physical and social interaction with the objects and living beings in our environment. This has at least two implications for the field of Human-Computer Interaction: first, interaction with the computer should be made as rich as possible; second, interaction with the social environment should be included in the overall design considerations.

Consequences: comprehensive consideration of physical & social abilities

In this context, rich interaction refers chiefly to the number and type of senses and physical skills that can be employed in a situation. This is also known as Multimodal Interaction (Oviatt 2008), in which – in addition to speech input – touch-sensitive displays and the manipulation of real-world objects in combination with digital displays play a particularly important role today. The latter form of interaction is referred to in the research field as Tangible Computing (Ishii & Ullmer 1997). The dominance of Tangible Computing must also be viewed against the background that current technical innovations (Apple's iPhone, Microsoft's Surface) have reached the commercial market, so that technologies such as multi-touch recognition, token recognition and touch-sensitive displays now exist that are sufficiently robust to enable a variety of applications outside the laboratory. In parallel, users' first experiences with these new products have been positive, and they are therefore very willing to try out new forms of Human-Computer Interaction.

At the same time, users' social interaction with one another should also be taken into consideration. This form of interaction is referred to in the research field as Social Computing; it principally takes into account the fact that tasks and problems today are frequently solved in groups, and that visits to museums, exhibitions, etc. often take place in groups.

Under the heading Reality-Based Interaction, Jacob et al. (2008) combined the above-mentioned findings in cognitive psychology with the technical developments in the fields of multimodal interaction, tangible computing and social computing into a new paradigm. The general objective is to orientate interaction with the computer towards interaction with the real, non-digital world and thus to make it more reality-based – or, one could say, easier to grasp. Here, 'grasp' applies in both senses of the word: in the sense of touching and in the sense of understanding. Jacob and his co-authors set out four guiding principles for fashioning reality-based interaction: take into account people's common-sense knowledge about the physical world; bear in mind that people have both body awareness and physical skills; bear in mind that people have both spatial awareness and spatial skills; and, finally, take into account people's social behaviour and their social skills of interaction and communication. These four principles are a helpful starting point for structuring the complex design space of this new form of interaction and making it more understandable.

In 1991, Mark Weiser published his vision of the computer of the 21st century, which has become known in the scientific community under the term Ubiquitous Computing. He saw the traditional PC – then the dominant medium of human-computer interaction – becoming less important and expected the future to be characterised by a large number of networked, context-sensitive interactive devices with widely differing form factors. The devices that he called Tabs correspond in type and size to our smartphones and PDAs; his Pads correspond to our Tablet PCs and the iPad (Apple); and his Boards correspond to our high-resolution large displays and electronic whiteboards. Very perceptively, he stated an essential objective as follows: "By pushing computers into the background, embodied virtuality will make individuals more aware of the people on the other ends of their computer links. ... Ubiquitous computers, ..., reside in the human world and pose no barrier to personal interactions" (Weiser 1991, emphasis added by the authors). His vision therefore implicitly includes the assumption that the interaction between human and computer will orientate itself towards the way in which we interact with things and people in the real world. We thus see an overlap between the objectives of Reality-Based Interaction and the vision of Ubiquitous Computing.

Blended Interaction: extensive amalgamation of the real and digital worlds

The next stage of development of human-computer interaction will therefore be marked by the goal of orientating the interaction with a variety of different devices, in the sense of Ubiquitous Computing, towards the principles of Reality-Based Interaction. In this situation, users interact on their own or in groups at the same place and switch seamlessly between real-world interaction and communication on the one hand and computer-aided interaction and, where appropriate, communication (e.g. with a person or the members of another group at another location) on the other. This leads to a mix (blend) of the real and digital worlds in multiple domains, namely

  • the interaction: e.g. writing with digital pens on paper is simultaneously an analogue and a digital representation; interacting with real-world objects such as tokens combines analogue manipulation with the resulting digital changes (a minimal code sketch of such a token-driven update follows this list),
  • the communication: e.g. tokens and multi-touch displays enable an equitable form of communication, because several users can interact simultaneously and because the usual social conventions can be observed directly,
  • the real and computer-aided operations (business processes): e.g. during a visit to an exhibition, it is possible to switch smoothly between a virtual tour and the real tour; when searching in a library, traditional shelf-centred browsing can be combined with digital search facilities; and while conducting a brainstorming session as part of a design meeting, traditional creativity techniques using paper cards can be combined with digital facilities for sorting, categorising, etc., and
  • the shaping of the physical environment: e.g. configuring rooms for new forms of interaction and communication. The design here includes not only walls, floors and ceilings, but also sound and light – in other words, configuring the architecture in its widest sense.
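
To make the first of these domains more concrete, the following minimal sketch shows how the placement of a physical token on an interactive tabletop could be mapped to a digital change in a shared visualisation. It is an illustration only: the event structure, the token names ('lens', 'filter') and the TabletopView class are our own assumptions and are not tied to any particular toolkit or to the prototypes mentioned later.

```python
from dataclasses import dataclass

@dataclass
class TokenEvent:
    """A physical token recognised on the tabletop surface (hypothetical event format)."""
    token_id: str   # fiducial marker on the underside of the token
    x: float        # normalised tabletop coordinates (0..1)
    y: float
    angle: float    # physical rotation of the token in degrees

class TabletopView:
    """Stand-in for the shared digital visualisation on the tabletop."""
    def show_magnifier(self, x: float, y: float) -> None:
        print(f"magnifier lens opened at ({x:.2f}, {y:.2f})")

    def set_filter_strength(self, strength: float) -> None:
        print(f"result filter set to {strength:.0%}")

# Hypothetical mapping of physical tokens to digital functions.
TOKEN_ACTIONS = {
    "lens":   lambda ev, view: view.show_magnifier(ev.x, ev.y),
    "filter": lambda ev, view: view.set_filter_strength(ev.angle / 360.0),
}

def on_token_placed(event: TokenEvent, view: TabletopView) -> None:
    """Blend the analogue act (placing or rotating a token) with its
    digital consequence (a change in the shared visualisation)."""
    action = TOKEN_ACTIONS.get(event.token_id)
    if action is not None:
        action(event, view)

# Example: a user places the 'lens' token near the centre of the table.
on_token_placed(TokenEvent("lens", 0.48, 0.52, 0.0), TabletopView())
```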

We therefore call this new interaction paradigm Blended Interaction. This is intended to stress that merely increasing the reality aspect of the interaction does not go far enough. The particular challenge – and, from the user's standpoint, the key advantage – lies in a meaningful marriage between the tried-and-tested options of the real world and the digital world. As a minimum, this marriage must exist on the levels of the interaction, of the way we solve problems with conventional tools (processes), and of the design of the space or the architecture of buildings and places. The digital world often offers entirely new possibilities and takes the form not only of interactive devices of various shapes but also of intelligent everyday objects (e.g. the 'Internet of Things'). In our view, such interaction concepts can indeed offer a new quality of interaction, but only if the design of the interaction (Interaction Design) addresses all these domains at the same time and with equal weighting.

Suitable methodological approaches for implementing Blended Interaction

The major challenge in designing and implementing interaction concepts that pursue the idea of Blended Interaction is finding the right mix, or intersection, of reality-based interaction & communication with the technological facilities of computer-based interaction & communication in all the above-mentioned domains. Jacob et al. (2008) pose the question as follows: "What part of the user interface should be based on reality-based interaction and what part should provide computer-only functionality that is not realistic?" In response, they recommend the following approach: "Make the first part as large as possible and use the second only as necessary, highlighting the tradeoffs explicitly." They list a number of possible trade-offs in the sense of conflicting objectives, such as "Reality versus Expressive Power, Reality versus Efficiency, Reality versus Plasticity, Reality versus Ergonomics, Reality versus Accessibility, Reality versus Practicality". This approach is very pragmatic and orientated towards traditional (product-)design practice, but it offers interaction designers no theoretically grounded toolkit by which they could guide their work. The approach that we therefore take is to apply the ideas and practices of Conceptual Blending (Fauconnier & Turner 2002) to the design of blended interactions. A good introduction to this subject is provided by the work of Imaz & Benyon (2007), who have applied the ideas and concepts of Fauconnier & Turner (2002) to the design of human-computer interaction; it is a further development of the idea of using metaphors in interface design. We aim to test and methodically develop the idea of Conceptual Blending as part of the implementation of Blended Interaction concepts in diverse application domains.
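
As a purely illustrative aid – not the method proposed by Fauconnier & Turner (2002) or Imaz & Benyon (2007) themselves – the following sketch shows one possible way of writing down a conceptual blend as a simple data structure, using the familiar desktop metaphor as an example: two input spaces, a set of cross-space mappings, and emergent structure that exists in neither input space on its own. All class and field names in the sketch are our own assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class InputSpace:
    """One of the two mental input spaces feeding a conceptual blend."""
    name: str
    elements: set[str]

@dataclass
class Blend:
    """A blended space: cross-space mappings plus emergent structure."""
    name: str
    inputs: tuple[InputSpace, InputSpace]
    mappings: dict[str, str]          # element of input 1 -> element of input 2
    emergent: set[str] = field(default_factory=set)  # present in neither input alone

office = InputSpace("office work",
                    {"folder", "document", "waste-paper basket", "desk"})
computer = InputSpace("computer operations",
                      {"directory", "file", "delete command", "screen"})

desktop = Blend(
    name="desktop metaphor",
    inputs=(office, computer),
    mappings={"folder": "directory",
              "document": "file",
              "waste-paper basket": "delete command",
              "desk": "screen"},
    emergent={"dragging a document onto the basket deletes the file"},
)

print(f"{desktop.name}: {len(desktop.mappings)} cross-space mappings, "
      f"{len(desktop.emergent)} emergent element(s)")
```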

Demo or Die: actual implementation of the Blended Interaction vision in research projects

We test the suitability of the ideas of the Blended Interaction concept by using specific application examples that are being worked on as part of current research projects on topics such as the Blended Library, the Blended Museum, and Blended Interaction Design. Guided by our overall vision, we are generating a series of tangible expressions of this vision in the form of design studies and research prototypes, which facilitate practical testing – ideally, even by means of a longitudinal study.

The research field of Human-Computer Interaction (HCI) thrives on the duality – and the associated tension – between theoretical derivation (e.g. of principles, models, and theories) and the specific design of interactive products (often based on the design ideas of individual designers or experts). This duality also fuels discussions such as: are we starting with a theory, from which definite principles and models follow that lead ultimately to a particular design? Or are we starting with a design idea, based on the designer's experience and intuition, from which we can proceed to generalise principles, models and theories? Blackwell (2006) draws an interesting parallel with medicine as a science. Medicine is based on large numbers of clinical trials that – even though they are guided by theoretical considerations – are never able to demonstrate a 1:1 implementation of a theory, owing to the individual circumstances of the participants (patients) in the trials. From the results of the studies, attempts are then made to derive more abstract concepts in terms of generalisations, which then become the subject of scientific debate. For Blackwell, therefore, the distinctive demo culture in HCI ("demo or die") is, in effect, the counterpart to these clinical studies: each demo contains a fragment of theory, which is then to be "objectified" in a subsequent process. "The reification of ... many theories in HCI has been carried out through the exhibition of design work. ... designed software products serve as theories that have been made into instruments" (Blackwell 2006, p. 517).

We take a similar approach: starting from a general vision of Blended Interaction, we try to implement a variety of design concepts in different application domains. Each of these designs embodies a particular exemplification of our vision (and thus of our theory). In a subsequent process of critique and reflection, accompanied by longitudinal studies, we then try to derive the general principles, models and approaches that are important for the implementation of our vision. This approach reflects the duality of HCI as a research discipline (primarily interested in theories, models and principles) and as a design discipline (primarily interested in the pragmatic implementation of design ideas) and is, in our opinion, the appropriate approach in this field.

Literature

Blackwell, A. (2006). The Reification of Metaphor as a Design Tool. In ACM Transactions on Computer-Human Interaction, Vol. 13, No. 4, December 2006, pp. 490-530.
Dourish, P. (2001). Where The Action Is: The Foundations of Embodied Interaction. MIT Press. Cambridge, MA, USA.
Fauconnier, G., Turner M. (2002). The Way We Think: Conceptual Blending and the Mind's Hidden Complexities. New York: Basic Books.
Imaz, M., Benyon, D. (2007). Designing with Blends: Conceptual Foundations of Human-Computer Interaction and Software Engineering. The MIT Press.
Ishii, H., Ullmer, B. (1997). Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI '97. ACM Press, New York, pp. 234-241.
Jacob, R. J., Girouard, A., Hirshfield, L. M., Horn, M. S., Shaer, O., Solovey, E. T., and Zigelbaum, J. (2008). Reality-based interaction: a framework for post-WIMP interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI '08. ACM Press, New York, pp. 201-210.
Jetter, H. C., Geyer, F., Schwarz, T., Reiterer, H. (2012). Blended Interaction – Toward a Framework for the Design of Interactive Spaces. In Workshop Designing Collaborative Interactive Spaces (DCIS 2012) at AVI 2012.
Oviatt, S. (2008). Multimodal Interfaces. In Sears, A., Jacko J. (Eds.) The Human-Computer Interaction Handbook (2nd Edition), Lawrence Erlbaum Associates, New York, 2008, pp. 413-432.
Weiser, M. (1991). The Computer for the Twenty-First Century. In Scientific American, September 1991, pp. 94-100.