Multimodal interaction
Not to be confused with Mode (computer interface).
Multimodal interaction provides the user with multiple modes of interfacing with a system. A multimodal interface provides several distinct tools for input and output of data.
Multimodal input
Two major groups of multimodal interfaces have emerged, one concerned with alternate input methods and the other with combined input and output. The first group of interfaces combines various user input modes beyond the traditional keyboard and mouse, such as speech, pen, touch, manual gestures, gaze, and head and body movements. The most common such interface combines a visual modality (e.g. a display, keyboard, and mouse) with a voice modality (speech recognition for input, speech synthesis and recorded audio for output). However, other modalities, such as pen-based input or haptic input/output, may also be used. Multimodal user interfaces are a research area in human-computer interaction (HCI).
The advantage of multiple input modalities is increased usability: the weaknesses of one modality are offset by the strengths of another. On a mobile device with a small visual interface and keypad, a word may be quite difficult to type but very easy to say (e.g. Poughkeepsie). Consider how you would access and search through digital media catalogs from these same devices or set-top boxes. In one real-world example, patient information in an operating room environment is accessed verbally by members of the surgical team to maintain an antiseptic environment, and presented in near real time aurally and visually to maximize comprehension.
Multimodal input user interfaces have implications for accessibility.[1] A well-designed multimodal application can be used by people with a wide variety of impairments. Visually impaired users rely on the voice modality with some keypad input. Hearing-impaired users rely on the visual modality with some speech input. Other users will be "situationally impaired" (e.g. wearing gloves in a very noisy environment, driving, or needing to enter a credit card number in a public place) and will simply use the appropriate modalities as desired. On the other hand, a multimodal application that requires users to be able to operate all modalities is very poorly designed.
The most common form of input multimodality in the market makes use of the XHTML+Voice (aka X+V) Web markup language, an open specification developed by IBM, Motorola, and Opera Software. X+V is currently under consideration by the W3C and combines several W3C Recommendations including XHTML for visual markup, VoiceXML for voice markup, and XML Events, a standard for integrating XML languages. Multimodal browsers supporting X+V include IBM WebSphere Everyplace Multimodal Environment, Opera for Embedded Linux and Windows, and ACCESS Systems NetFront for Windows Mobile. To develop multimodal applications, software developers may use a software development kit, such as IBM WebSphere Multimodal Toolkit, based on the open source Eclipse framework, which includes an X+V debugger, editor, and simulator.
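As a minimal sketch of how such a page might look, the example below embeds a VoiceXML dialog in the head of an XHTML document and uses XML Events to trigger it when the page loads, in the spirit of the introductory example of the XHTML+Voice Profile 1.0 Note linked under External links. The exact namespaces, event names, and synchronization markup of a production page should be checked against that profile; a real application would also declare speech grammars and bind voice fields to visual form fields.

    <!-- Illustrative X+V sketch: visual XHTML plus an embedded VoiceXML dialog -->
    <html xmlns="http://www.w3.org/1999/xhtml"
          xmlns:vxml="http://www.w3.org/2001/vxml"
          xmlns:ev="http://www.w3.org/2001/xml-events">
      <head>
        <title>X+V hello world</title>
        <!-- VoiceXML form declared alongside the visual markup -->
        <vxml:form id="sayHello">
          <vxml:block>Hello, world!</vxml:block>
        </vxml:form>
      </head>
      <!-- XML Events binds the page's load event to the voice dialog -->
      <body ev:event="load" ev:handler="#sayHello">
        <p>Hello, world!</p>
      </body>
    </html>

The key point of the design is that the voice and visual markup live in one document: the browser renders the XHTML as usual, while the XML Events binding routes a document event to the VoiceXML handler, which speaks its content.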
Multimodal input and output
The second group of multimodal systems presents users with multimedia displays and multimodal output, primarily in the form of visual and auditory cues. Interface designers have also started to make use of other modalities, such as touch and olfaction. Proposed benefits of multimodal output systems include synergy and redundancy. Information presented via several modalities is merged and refers to different aspects of the same process, while using several modalities to convey exactly the same information increases the bandwidth of information transfer.[2][3][4] Currently, multimodal output is used mainly to improve the mapping between communication medium and content and to support attention management in data-rich environments where operators face considerable visual attention demands.[5]
An important step in multimodal interface design is the creation of natural mappings between modalities and the information and tasks. The auditory channel differs from vision in several respects: it is omnidirectional, transient, and always open, since listeners cannot shut their ears the way they can close their eyes.[5] Speech output, one form of auditory information, has received considerable attention, and several guidelines have been developed for its use. Michaelis and Wiggins (1982) suggested that speech output should be used for simple, short messages that will not be referred to later, and that deal with events in time and call for an immediate response.
The sense of touch was first utilized as a medium for communication in the late 1950s.[6] It is not only a promising but also a unique communication channel. In contrast to vision and hearing, the two traditional senses employed in HCI, the sense of touch is proximal: it senses objects that are in contact with the body, and it is bidirectional in that it supports both perception and action on the environment.
Examples of auditory feedback include auditory icons in computer operating systems indicating users' actions (e.g. deleting a file, opening a folder, an error), speech output presenting navigational guidance in vehicles, and speech output warning pilots in modern airplane cockpits. Examples of tactile signals include vibration of the turn-signal lever to warn drivers of a car in their blind spot, the vibration of a car seat as a warning to drivers, and the stick shaker on modern aircraft alerting pilots to an impending stall.[5]
Invisible interface spaces have become available through sensor technology; infrared, ultrasound, and cameras are all now commonly used.[7] Transparency of interaction with content is enhanced when a meaningful mapping provides an immediate and direct link between input and response: the user receives direct and immediate feedback, and the content's response becomes an interface affordance (Gibson 1979).
See also
- Modality (human–computer interaction)
- W3C's Multimodal Interaction Activity – an initiative from W3C aiming to provide means (mostly XML) to support Multimodal Interaction scenarios on the Web.
- NCCR IM2: Swiss project on multimodal interaction
- Device independence
- Speech recognition
- Web accessibility
- Wired glove
- XHTML+Voice
References
- ^ Vitense, H.S.; Jacko, J.A.; Emery, V.K. (2002). "Multimodal feedback: establishing a performance baseline for improved access by individuals with visual impairments". ACM Conf. on Assistive Technologies.
- ^ Oviatt, S. (2002), "Multimodal interfaces", in Jacko, J.; Sears, A., The Human-Computer Interaction Handbook, Lawrence Erlbaum
- ^ Bauckhage, C.; Fritsch, J.; Rohlfing, K.J.; Wachsmuth, S.; Sagerer, G. (2002). "Evaluating integrated speech- and image understanding". Int. Conf. on Multimodal Interfaces. http://dx.doi.org/10.1109/ICMI.2002.1166961.
- ^ Ismail, N.A.; O'Brien, E.A. (2008). "Enabling Multimodal Interaction in Web-Based Personal Digital Photo Browsing". Int. Conf. on Computer and Communication Engineering. http://eprints.utm.my/5732/1/ICCCE2008_preprint_version_UTM_IR.pdf.
- ^ a b c Sarter, N.B. (2006). "Multimodal information presentation: Design guidance and research challenges". Int. J. of Industrial Ergonomics 36 (5): pp. 439-445. http://www.sciencedirect.com/science/article/pii/S0169814106000217.
- ^ Geldard, F.A. (1957). "Adventures in tactile literacy". American Psychologist 12 (3): pp. 115–124. http://www.sciencedirect.com/science/article/pii/S0003066X07652459.
- ^ Brooks, A.; Petersson, E. (2007). "SoundScapes: non-formal learning potentials from interactive VEs". SIGGRAPH. http://doi.acm.org/10.1145/1282040.1282059.
External links
- W3C Multimodal Interaction Activity
- XHTML+Voice Profile 1.0, W3C Note 21 December 2001
- Hoste, Lode; Dumas, Bruno; Signer, Beat: Mudra: A Unified Multimodal Interaction Framework. In Proceedings of the 13th International Conference on Multimodal Interaction (ICMI 2011), Alicante, Spain, November 2011.