Table of Contents
- VUI Use Cases – Does Your Product Need Voice?
- How Do VUIs Process Information?
- 8 Considerations for Voice User Interfaces
- Final Thoughts
- Enhancing the User Experience With UXPin
Just about every product features a voice user interface (VUI), from phones, wearables, and speakers to your car and even the fridge. According to Statista, the number of voice assistant devices will exceed 8.4 billion units by 2024, more than the world's population.
As of May 2019, over 90,400 smart home devices supported voice assistants, more than 60,000 of which were for Amazon’s Alexa alone! It’s hard to say whether VUIs will replace screens, but there’s no denying the ever-growing demand for voice products.
Designing VUIs and the accompanying information architecture is a complex and exciting challenge for UX teams. This article looks at VUI design and how designers can create better voice experiences for their customers.
UXPin’s code-based design tool lets UX designers build complex high-fidelity prototypes that accurately replicate the final product. Sign up for a free trial to discover how UXPin can streamline and enhance your product design.
VUI Use Cases – Does Your Product Need Voice?
Just because any device or application can work with voice doesn’t mean it should. Designers must evaluate each product individually to assess whether voice commands, a screen UI, or a combination of both will best serve users.
For example, do you want an app dealing with sensitive data like finance or health blurting out your personal information? Designers must consider both ethics and legislation regarding this sensitive data.
VUIs excel at providing hands-free assistance during cooking, driving, exercising, and other activities that demand the user's attention, focus, or hands.
Thorough research and user interviews will help designers determine whether voice will effectively solve user pain points or help people complete tasks better and faster.
How Do VUIs Process Information?
VUIs use a mix of artificial intelligence/machine learning, speech recognition, sound effects, and text-to-speech (TTS) to interact with users.
How VUIs Talk to Users
Voice design expert Guillaume Privat, Siri Manager at Apple, says, “VUI designers should differentiate between a prompt and a statement” because each requires a different authoring strategy.
- Prompts: Questions from the VUI, like “What can I help you with today?”
- Statements: VUI answers, comments, and other communication that don’t necessarily elicit a response. “Hello, Jane.” or “Playing Hit Me Baby One More Time by Britney Spears on Spotify.”
Prompts present a few challenges for designers because they need to manage users’ expectations while posing questions the system can reliably handle. For example, open-ended prompts expose the AI to the full complexity of human communication, whereas closed-ended questions are easier to handle.
Designers can optimize prompts by limiting options to a maximum of three, which usually correspond to frequently used features. For example, in a banking app, the VUI might ask, “Would you like to check your balance, pay a bill, or something else?”
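A closed-ended prompt like this can be modeled as a small intent map. The sketch below is a minimal illustration (all names and phrases are hypothetical; a production system would use a trained NLU model rather than string matching):

```python
# Minimal sketch of a closed-ended VUI prompt with three options.
# Unrecognized input falls back to a clarifying re-prompt intent
# instead of failing silently.

PROMPT = "Would you like to check your balance, pay a bill, or something else?"

# Map recognized phrases to intents; a real system would use an NLU model.
INTENTS = {
    "check my balance": "CheckBalance",
    "pay a bill": "PayBill",
    "something else": "OpenMenu",
}

def route_utterance(utterance: str) -> str:
    """Return the matched intent, or 'Fallback' for anything unrecognized."""
    text = utterance.lower().strip()
    for phrase, intent in INTENTS.items():
        if phrase in text:
            return intent
    return "Fallback"
```

Capping the menu at three options keeps the prompt short enough for users to hold in working memory while it is spoken aloud.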
Designers must also consider how to program re-prompts in case the user doesn’t respond right away. Re-prompts should sound natural and conversational, like the way you would talk to a loved one if you thought they hadn’t heard your question.
Statements provide users with answers to their questions, but they can also confirm the user’s instructions, followed by a prompt. For example, “You want to pay $20 to the electric company” (statement). “Is this correct?” (prompt).
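A statement-plus-confirmation turn like this can be sketched as a single response object that pairs the statement, the yes/no prompt, and a conversational re-prompt for when the user stays silent (the structure and field names here are hypothetical, not any platform's API):

```python
# Hypothetical sketch of a statement-plus-confirmation turn.
# The VUI states what it understood, then asks a yes/no prompt;
# a missing response triggers a rephrased re-prompt after a timeout
# rather than repeating the same question verbatim.

def confirmation_turn(amount: str, payee: str) -> dict:
    return {
        "statement": f"You want to pay {amount} to {payee}.",
        "prompt": "Is this correct?",
        # Fires only if the user says nothing before the timeout.
        "reprompt": "Sorry, should I go ahead with that payment?",
        "timeout_seconds": 8,
    }
```

Rephrasing the re-prompt, rather than repeating the original, mirrors how people naturally restate a question they think went unheard.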
8 Considerations for Voice User Interfaces
Managing user expectations is one of VUI design’s biggest challenges. Without proper error handling or clarity from the voice assistant, users often abandon the product with the view that it doesn’t work.
Designers must also overcome background noise, accents, clarity, volume, and other vocal nuances. Never mind the complexities of language itself!
We’ve gathered insights from several leading UX designers who’ve shared how they overcame VUI design challenges.
1. User Personas
As with any design project, personas play a crucial role in empathizing with users. VUI user personas must include additional details like tone of voice, common word choices, and sentence structure. Designers must also consider cultural differences, like how people talk in California, USA vs. Yorkshire, UK.
2. Devices
Most visual UI design focuses on mobile, tablet, and desktop experiences, while a voice interface is far more complex, with many possible contexts, including:
- Smart speakers
- Home theatre systems
- Car stereos
- Internet of Things (IoT)
- Home appliances
In some instances, a single VUI must work on several devices, impacting how the user and system interact. Designers must consider these factors and maintain a consistent user experience across multiple devices and environments.
3. VUI Microinteractions and System Status
Microinteractions provide reinforcement and feedback that enhance the user experience. Designers can choose among sound effects, screen animations, haptic feedback, and LED illumination to show system status and states.
An essential VUI microinteraction is the “wake up” after the user says “Hey Siri” or “Hey Alexa.” Designers must indicate that the VUI is ready for the user’s instruction. For speakers and devices without a display, designers might use a dedicated LED to show the VUI is listening, like the illuminated bezel on the Amazon Echo.
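The wake-up microinteraction can be thought of as a tiny state machine that maps each listening state to an indicator. The sketch below is purely illustrative; the state names and LED values are hypothetical, loosely inspired by the Echo's illuminated bezel:

```python
from enum import Enum

# Hypothetical state machine mapping VUI states to an LED indicator.
# Real devices define their own states and light patterns.

class VuiState(Enum):
    IDLE = "off"          # no light: assistant is dormant
    LISTENING = "blue"    # solid ring: wake word heard, mic open
    THINKING = "pulsing"  # pulsing ring: processing the request
    SPEAKING = "dim"      # dimmed ring: assistant is responding

def on_wake_word(current: VuiState) -> VuiState:
    """The wake word only opens the mic from the idle state."""
    return VuiState.LISTENING if current is VuiState.IDLE else current
```

Modeling the indicator this way keeps the system-status feedback consistent: every state the assistant can be in has exactly one visible signal.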
4. VUI Triggers
Designers can use several VUI triggers to enhance the user experience and add value. Here are a few examples and use cases:
- Voice: The primary trigger for activating and interacting with the VUI.
- Touch: Using UI components or physical buttons.
- Motion: Some wearables and smartphones can detect specific movements to activate the VUI or features.
- Time: Dates and times for reminders and events can trigger VUI responses or system actions.
- Location: VUIs can use geolocation to trigger reminders or actions.
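The trigger types above can all flow through a single dispatch layer that routes each event to its handler. This is a hypothetical sketch (handler names and payloads invented); a production system would wire these to real microphone, touch, sensor, clock, and geolocation events:

```python
# Hypothetical dispatcher routing the five trigger types to handlers.

def handle_voice(payload): return f"voice command: {payload}"
def handle_touch(payload): return f"touch input: {payload}"
def handle_motion(payload): return f"motion event: {payload}"
def handle_time(payload): return f"reminder at {payload}"
def handle_location(payload): return f"geofence entered: {payload}"

TRIGGER_HANDLERS = {
    "voice": handle_voice,
    "touch": handle_touch,
    "motion": handle_motion,
    "time": handle_time,
    "location": handle_location,
}

def dispatch(trigger: str, payload: str) -> str:
    """Route a trigger event to its handler; reject unknown triggers."""
    handler = TRIGGER_HANDLERS.get(trigger)
    if handler is None:
        raise ValueError(f"unknown trigger: {trigger}")
    return handler(payload)
```

Keeping all triggers behind one dispatch point makes it easier to add new trigger types later without touching existing handlers.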
Machine learning opens a whole world of possibilities for VUI triggers and could provide life-saving feedback and advice. For example, if you’re driving through an unknown city, a voice assistant can alert you before entering high-risk crime areas or inform you of an accident up ahead.
5. Giving Users Control
Giving users control is a crucial user experience factor. Designers should consider how VUIs might force a user into listening to a long list of content, like “all the restaurants within a mile.” Users should be able to interrupt or add specific details like “with wheelchair access” without starting from scratch.
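In practice, giving users this control means treating a follow-up like “with wheelchair access” as a filter on the current result set rather than a brand-new search. A minimal sketch, assuming a hypothetical in-memory result list:

```python
# Hypothetical sketch: a follow-up utterance narrows the current
# result set instead of restarting the search from scratch.

restaurants = [
    {"name": "Cafe Uno", "wheelchair_access": True},
    {"name": "Bistro Due", "wheelchair_access": False},
    {"name": "Trattoria Tre", "wheelchair_access": True},
]

def refine(results: list, attribute: str) -> list:
    """Keep only results where the user's added attribute holds."""
    return [r for r in results if r.get(attribute)]
```

Because the refined list is derived from the existing results, the user can interrupt a long read-out, add a detail, and immediately hear a shorter, more relevant list.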
6. VUI Accessibility
Designers must also consider how to make VUIs accessible. Shaky voices, speech impediments, and second-language speakers are some voice recognition challenges designers must overcome.
Cognitive disabilities and hearing impairments often make it difficult to digest information. One solution is to give users the option to have the VUI repeat itself more slowly or loudly. Designers must also consider keeping VUI prompts and statements succinct, with the option to “add more context” or “elaborate.”
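The “repeat slower or louder” option maps directly onto standard SSML prosody attributes, which major text-to-speech platforms support. The wrapper function below is a hypothetical convenience, but the `rate` and `volume` attribute values are standard SSML:

```python
# Builds an SSML fragment that re-reads text more slowly and loudly.
# The <prosody> element and its rate/volume attributes are standard
# SSML; the wrapper function itself is a hypothetical helper.

def repeat_with_prosody(text: str, rate: str = "slow", volume: str = "loud") -> str:
    return (
        "<speak>"
        f'<prosody rate="{rate}" volume="{volume}">{text}</prosody>'
        "</speak>"
    )
```

A VUI can attach this to an intent like “say that again slower,” so accessibility adjustments feel like a natural part of the conversation.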
While adding wit, sarcasm, or slang might seem fun, designers should avoid ambiguous language that could confuse second-language speakers or people with disabilities or cognitive challenges. Technical jargon, abbreviations, and acronyms could also make users feel marginalized.
Designers must use whole words and natural language so that information is easy to absorb and interpret.
7. Graphical User Interface (GUI) Integration
Home management systems and IoT often come with a GUI touch screen or mobile app to support the VUI. GUIs can also help solve accessibility and usability issues by providing users with another option to interact with the voice assistant.
8. VUI Design Patterns
VUI is still in its infancy compared to visual interfaces, so there’s still a lot of work needed to develop industry-standard VUI patterns and accessibility guidelines. Still, you can find helpful information and guidance from Amazon, Samsung, Google, and Apple.
In Amazon Alexa’s Developer Documentation, the eCommerce giant divides its design patterns into four sections and summarizes each as follows:
- Be adaptable: Let users speak in their own words.
- Be personal: Individualize your entire interaction.
- Be available: Collapse your menus; make all options top-level.
- Be relatable: Talk with them, not at them.
Leading voice assistant documentation from Amazon, Google, Apple, and Samsung offers further ideas and guidance on VUI design patterns.
Final Thoughts
Voice user interface design is an exciting and ever-evolving field. AI and machine learning allow users to develop human-like bonds with virtual assistants and include them as part of the family, a unique quality other digital products do not share.
In one review on Amazon’s Echo Dot, a happy customer had this to say: “Artificial intelligence? Perhaps. But people rarely make me smile or laugh. Alexa rarely fails to do so. And the enjoyment I get from having her in my home is anything but ‘artificial.’”
UX designers must look for creative ways where VUI technology can excite users and enhance the human experience. When designed correctly, voice assistants can reduce the time people spend physically interacting with screens and devices.
Enhancing the User Experience With UXPin
No matter what project you have in mind, iterative improvement is the foundation of great design. When your design team has the tools to stay coordinated and aligned, they can reduce time-to-market, make smarter design choices, and amaze users with incredible product experiences.
UXPin is an end-to-end code-based design tool that fosters creativity and collaboration. UX designers can design, prototype, test, and iterate faster with higher fidelity and functionality than other leading design tools. Sign up for a free trial and see how the world’s best design tool can enhance your UX workflows and improve customer experiences.