The growth of Voice Assistants and the importance of UX research

Image of different voice assistant tools

“Alexa, make me a cup of tea with milk only, then please brush my teeth and get today’s newspaper.” OK, that’s not quite possible, at least not yet. But have you noticed in recent years just how much you have started using voice commands or searches instead of manually typing? 

The use of Voice Assistants is growing, whether it’s discovering information (like the weather or directions), controlling your devices (turning on lights, watching TV or playing music) or even helping you shop (searching and buying online).

How are people using Voice Assistants?

Complex tasks are still relatively rare, but to give you a sense of how much people use them in everyday life, a study by Forrester and Adobe found that over 70% of US adults surveyed used smart speakers to listen to music. Other common tasks include:

  • Checking the weather forecast (64%)
  • Asking fun questions (53%)
  • Basic research or confirming information (53%)
  • Online search (47%)
  • Checking the news (46%)
  • Asking directions (34%)

With significant advances in AI technology over the past decade and a boom in smart speaker adoption (Juniper predicts that by 2022, 55% of American households will own a smart speaker), the use of Voice Assistants is expected to expand rapidly.

How organisations are innovating to help customers

This technology isn’t limited to simple domestic tasks; voice technology, often combined with AI, is being trialled and implemented by businesses looking to improve not just customer experience but also efficiency.

Voice-enabled search is faster and more convenient, particularly on the mobile devices that make up more than 50% of all internet traffic. According to an Opus Research survey, 73% of leaders in the retail industry considered faster search via voice to be a top end-user benefit of voice assistants. With voice-enabled search, customers can set filters naturally and conversationally instead of through a long series of taps and swipes. “Find white Paul Smith trainers, size 11, for under £150.”
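To make that concrete, here is a minimal, purely illustrative sketch in Python of how a spoken shopping query might be mapped to structured search filters. The function name, the tiny colour and brand vocabularies and the regex rules are all hypothetical; real voice assistants use trained natural language understanding models rather than hand-written rules like these.

import re

# Illustrative only (hypothetical function and vocabularies): one way a spoken
# shopping query could be turned into structured search filters.
def parse_voice_query(utterance: str) -> dict:
    text = utterance.lower()
    filters = {}

    # Price cap, e.g. "for under £150"
    price = re.search(r"under\s*£?(\d+)", text)
    if price:
        filters["max_price_gbp"] = int(price.group(1))

    # Size, e.g. "size 11"
    size = re.search(r"size\s*(\d+)", text)
    if size:
        filters["size"] = int(size.group(1))

    # Colour and brand from tiny example vocabularies
    for colour in ("white", "black", "red", "blue"):
        if colour in text:
            filters["colour"] = colour
            break
    for brand in ("paul smith", "nike", "adidas"):
        if brand in text:
            filters["brand"] = brand.title()
            break

    return filters

print(parse_voice_query("Find white Paul Smith trainers, size 11, for under £150"))
# {'max_price_gbp': 150, 'size': 11, 'colour': 'white', 'brand': 'Paul Smith'}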

McDonald’s has introduced voice recognition software in 10 of its drive-throughs so it can automate orders. Having acquired Apprente in 2019, it is developing unique ‘sound-to-meaning’ technology that will be able to interpret complex orders as well as accents. It hopes to make the drive-through experience more efficient, more consistent and even more pleasurable, since the voice assistant ‘never sounds tired, annoyed, unhappy or angry’.

H&M are using this technology in their Magic Mirror concept. In their New York flagship store, customers can ask a mirror to take a selfie then scan a QR code to download the image.

Photo of a child holding hands with a robot

Not just a gimmick

While some of these uses may seem gimmicky, for many people with a disability they offer an opportunity to engage with digital and physical environments they may well have been excluded from previously.

Consider something as simple as sending a text message and the steps taken to do that. Then think how much extra effort it takes when you have a physical condition that makes it difficult to hold and control the device, a visual impairment that means you can’t see the options properly, or a neurological condition that means you struggle to remember where the device is. Now think how that experience can be transformed by simply telling the device what you want to send and who to send it to.

Image of post-it notes with 'run a usability test' highlighted

Usability testing voice interactions

While the fundamental principles of usability testing are the same regardless of what is being tested, Voice Assistants are, of course, non-screen-based technologies with specialised testing requirements.

Bunnyfoot have developed the tools and methodologies that ensure the very best insight is gained and user experience is driven forward:

  • We ensure contextual validity by testing on-site or by replicating the intended ambient environments in our specialised labs.
  • We ensure that moderator input is kept to a minimum by setting clear tasks and favouring retrospective questioning.
  • We ensure that note taking, data capture and A/V recording does not make test participants feel under scrutiny or interrupt their task flow.

There’s a lot more to it than that and our approach varies from project to project, but we have a large toolkit and experienced consultants who want to understand the nuances of user experience.

Prototyping voice assistants

We like to test prototypes because they give early insight into the user experience, helping you to understand, among other things, what the user expects and how they interact.

Prototyping voice interfaces presents a few challenges because the user’s choices aren’t limited to a set of predefined options. Think about how many different ways there are to ask for the time: “What time is it?” “What’s the time?” “How late is it?” “Do you have the time?” “What’s the time now?”
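One common way to cope with this variation is to map several sample utterances to a single intent and accept new input that is close enough to any of them. The Python sketch below is a hypothetical illustration of that idea using simple string similarity; the sample phrases come from the examples above, while the function name and threshold are invented for illustration and are far cruder than the natural language understanding in real prototyping tools.

from difflib import SequenceMatcher

# Hypothetical illustration: several phrasings all map to one "tell the time" intent.
TIME_INTENT_SAMPLES = [
    "what time is it",
    "what's the time",
    "how late is it",
    "do you have the time",
    "what's the time now",
]

def matches_time_intent(utterance: str, threshold: float = 0.75) -> bool:
    """Return True if the utterance resembles any known sample closely enough."""
    cleaned = utterance.lower().strip(" ?!.")
    return any(
        SequenceMatcher(None, cleaned, sample).ratio() >= threshold
        for sample in TIME_INTENT_SAMPLES
    )

print(matches_time_intent("What's the time?"))            # True
print(matches_time_intent("Could you give me a recipe"))  # False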

Image of a hand connecting printed out wireframes

Here are some tools for prototyping you may find useful:

Protopie can create realistic voice interactions, either by reading aloud the text you input or by responding to incoming speech. Individual plans are $11/month and team plans are $42/month.

Adobe XD is a comprehensive vector-based user experience design tool for web and mobile apps that can help you create an interactive VUI design. It is primarily an interface design tool, but it also includes an option for voice prototyping. Adobe XD offers a 7-day free trial, after which the licence is £10.42/month, or it can be bundled with other Adobe licences for £50/month.

Voiceflow is a collaboration tool for designing, prototyping and building for Amazon Alexa and Google Assistant. The free Starter tier lets you create up to 2 projects with 1,000 interactions per month, but for personal use only; beyond that, a licence is $40/month.

Speechly is a spoken language understanding solution specifically designed for building voice user interfaces. You can use the Speechly Dashboard to quickly prototype your configuration and then integrate it into your app for a full voice experience. There is a range of plans offering different features, from free to $195 and $895 per seat.

We can help!

Designing voice assistants for your service and looking for support with customer research and design? We can support you on small or large projects – get in touch, we’d love to talk.