Inclusive Multimodal Design
- Amy Li
- Apr 18, 2021
- 4 min read
Multi-modal design is the building block of inclusive design thanks to its additional affordances and its ability to augment human-computer interaction.

Common design research methodologies like eye-tracking and usability studies focus on what users can see. However, humans rarely experience the world through only one modality. You are most likely reading this essay using several modalities simultaneously: your sense of vision to read the individual characters, and your cognitive ability to translate those symbols into ideas. You may also be using your motor skills to interact with an interface that lets you scroll down to read more. Most tech-literate users know that this is far from an exhaustive list of possible interactions with this article. Many enjoy text-to-speech, which engages the auditory sense. And for analog readers, a physical copy of this article adds a layer of haptic sensory experience.
It is important to acknowledge the effort that web designers and developers have put into crafting digital accessibility guidelines. Many like to say that humans are visual animals, which is true to an extent: “As our dominant sense, at least 50% of sensory processing is used for vision, equal to all the other senses combined.” The emphasis on tools like contrast checkers and colorblind-safe palettes is justified given that roughly 1 in 12 white males has some form of color blindness. However, the pursuit of equitable access to digital products doesn’t end with visual accessibility for that group alone. In fact, unimodality can be disabling for a far larger group of medically normative or “normal” users, including children, pregnant women, obese individuals, and elders.
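To make the contrast-checker idea concrete, here is a minimal sketch of the contrast-ratio calculation defined in WCAG 2.x, the formula those tools implement. It computes the relative luminance of each color from the sRGB transfer function and compares the ratio against the 4.5:1 threshold that WCAG Level AA requires for normal body text:

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB color given as 0-255 channel values."""
    def linearize(c):
        c = c / 255
        # Piecewise sRGB transfer function from the WCAG definition.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio (L1 + 0.05) / (L2 + 0.05); always >= 1, order-independent."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible contrast, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
# Mid-gray (#777) on white sits just below the 4.5:1 AA threshold for body text.
print(contrast_ratio((119, 119, 119), (255, 255, 255)) >= 4.5)  # False
```

A check like this is inherently unimodal: it quantifies one visual mismatch and says nothing about auditory, motor, or cognitive fit, which is exactly the gap the rest of this essay is about.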

Figure 1: The Humanscale selector

Figure 2: The Enabler
In the 1970s, industrial designers, the “original multi-sensory experience designers,” were the first to expand the idea of universality into inclusive (product) design. They abandoned the medical model of disability and challenged the legacy personas “Joe” and “Josephine” with tools like the “Humanscale selector” (Figure 1) and “The Enabler” (Figure 2). They adopted the social model of disability, which puts the responsibility on designers to create products, services, and spaces that fit their users. The belief that serving the extremes would also benefit everyone else is at the core of universal design.

At the turn of the 21st century, software product designers expanded the principles of universal design and coined the term “inclusive design,” which OCAD University, an art, design and media university in Canada, defines as “design that considers the full range of human diversity with respect to ability, language, culture, gender, age and other forms of human difference.” Inclusive design recognizes the complexity of socially constructed obstacles and seeks to leverage technology to make design adaptive to a diverse body of users. The portability of modern sensors and mechanical devices enables a myriad of device-mode combinations, which also increases the risk of the disabilities that Microsoft’s Inclusive Design manual describes as “mismatched human interactions.”
Increased mobility of technology = increased moments of disability.
As technology affords various modes of inputting and communicating data, there is more contextual information to account for in order to understand the ease of use of one mode over another. Users shift their focus and frequently translate information across modalities. To improve existing technologies, user researchers should expand their evaluative research methodologies to account for situational limitations and identify mismatches between users and interfaces. Activities in the Microsoft Inclusive Design toolkit like “Context and Capability Match” and “Situational Adaptation” challenge a product’s ability to adapt to physical, social, and situational limitations. These are useful additions to any heuristic evaluation.
To help researchers consider inclusive multi-modal design as they conduct generative research, in their book Designing Across Senses: A Multimodal Approach to Product Design, Christine W. Park and John Alderman have included questions that could be incorporated into research protocols:
- What kind of physical information is available within an experience?
- How will people need to use it to accomplish their goals?
- What kinds of previous experiences will shape their expectations and aptitudes?
- Will they need to develop any specific skills?

Develop hypotheses about these aspects of the experience and explore how individual or integrated modalities enable different user responses.
In addition to needs and behaviors, the context of use also affects a user’s interaction with a design. The list of questions above prompts researchers to examine the information (including other actors) in the environment, people’s interactions with that information, their past experiences and expectations, and the differences in their abilities.
For specific generative and evaluative research methodologies, Microsoft Inclusive Design also has a toolkit that includes a number of activities that are useful during the discovery, ideation, iteration, and optimization stages of the end-to-end design process.
Our society has moved past the industrial age into the age of information. Technology greatly extends users’ abilities, yet it can also become a gatekeeper to information. For an elder who is visually impaired and has limited motor control, for example, an Alexa-enabled smart speaker offers a means of connecting with the world and a sense of independence. Humans’ cognitive ability to use different modalities, via substitution and translation, to understand information is a great source of inspiration and motivation for multi-modal design innovation.
As researchers learn more about human factors, there will be many more opportunities for assisted interactions that are yet to be explored. Multi-modal design will increase its presence as more design teams expand their inclusive design guidelines to include other senses.