
How to deliver the best employee experience? Zero UI!

If you’re like me, you spend a lot of your time on screens: from your phone to your tablet to your computer. And probably a little too much at times… But that’s going to change!

When we’re tapping on screens, we are using a graphical user interface (GUI). According to Wikipedia, a GUI is a type of interface that makes it easy, efficient, and enjoyable (user-friendly) to operate a machine or interact with a device, producing the desired result through graphical icons and visual indicators.

And although we have made great progress with graphical interfaces, humans are just not meant to type and tap. It doesn’t come naturally. Speaking does. Gesturing does too. And we’re really good at that. What if we could operate our devices through natural human interaction? Would that work?

Of course it would! And you already know that if you have called on Alexa, Siri or another voice assistant to get things done. So what do you call a digital experience where you can’t click or tap? Since it has no visible interface, someone suggested calling it zero UI.

What is zero UI?

Zero UI is a phrase first coined by designer Andy Goodman of Fjord: “Zero UI refers to a paradigm where our movements, voice, glances, and even thoughts can all cause systems to respond to us through our environment. At its extreme, Zero UI implies a screen-less, invisible user interface where natural gestures trigger interactions, as if the user was communicating to another person.”

The concept covers human interaction like voice (e.g. Siri or Alexa) and body movement (e.g. Microsoft Kinect, Nintendo Wii). The goal is to have computers respond to human behavior, body language and speech, rather than have us learn new ways to interact with computers to get these devices to do what we want.

Google CEO Sundar Pichai suggested that the future of devices could be the end of devices. “We will move from mobile first to an AI first world,” he wrote in a letter to shareholders of parent company Alphabet Inc. But that was a bit optimistic, and I think we will continue to use our mobile devices for a very long time.


How artificial intelligence boosts zero UI

Screenless devices are often smart devices. They use algorithms to build a user profile and take appropriate actions. For example, a Nest Thermostat collects data on your temperature preferences and on when you turn the heat up or down. If you leave for work at the same time every day, it adapts to your habits and automatically adjusts the temperature to save energy and lower your utility bill. You won’t have to turn the dial anymore.
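To make that concrete, here is a minimal Python sketch of the idea behind such schedule learning. It is my own toy illustration, not Nest’s actual proprietary algorithm: record the user’s manual adjustments per hour of day, then replay the learned average setpoint automatically.

```python
from collections import defaultdict

class LearningThermostat:
    """Toy illustration of schedule learning: remember manual
    adjustments per hour of day and replay the average setpoint."""

    def __init__(self, default_temp=20.0, away_temp=16.0):
        self.default_temp = default_temp
        self.away_temp = away_temp
        self.history = defaultdict(list)  # hour of day -> observed setpoints

    def manual_adjust(self, hour, temp):
        # Every time the user turns the dial, record it as a training signal.
        self.history[hour].append(temp)

    def target_for(self, hour, occupied=True):
        if not occupied:
            return self.away_temp  # e.g. the user has left for work
        observed = self.history[hour]
        if observed:
            return sum(observed) / len(observed)  # learned preference
        return self.default_temp

# After a few days of manual dialing, the device acts on its own:
t = LearningThermostat()
t.manual_adjust(7, 21.0)
t.manual_adjust(7, 21.5)
print(t.target_for(7))                   # -> 21.25, learned morning preference
print(t.target_for(9, occupied=False))   # -> 16.0, saving energy while away
```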

Smart AI devices are becoming commodities: many people now have smart speakers, lights and thermostats in their home. Machine learning plays a key role in the functionality of these zero UI devices. It is about predicting and understanding what we as users do and want, then learning from that to provide it faster and better, so we don’t have to think about it anymore. These smart devices alert us when they need us to do something, not the other way around.

Zero UI is meant to save us valuable time and make our lives easier. It also contributes to accessibility: blind and visually impaired individuals can’t use screens, so they rely on screen readers that translate written text to audio. When people can use their voice to express their needs, that opens up more possibilities to function independently. It can also make HR services more inclusive.

Why zero UI is good for employees and HR

Voice recognition software isn’t new, but recent advances in speech recognition, artificial intelligence and machine learning mean this technology finally provides a consumer-grade experience. Because of the improved quality, vendors are now opening up these solutions to third-party integration. We will see voice support pop up everywhere. Already in 2018, Google reported that 27% of the global online population was using voice search on mobile.
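To illustrate how accessible this has become for developers, here is a minimal sketch using the open-source SpeechRecognition Python package. That library is just one of many options I picked for illustration; any vendor’s speech-to-text API would work similarly.

```python
# pip install SpeechRecognition pyaudio
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate for background noise
    print("Ask your question...")
    audio = recognizer.listen(source)

try:
    # Sends the recorded audio to Google's free web speech API for transcription.
    question = recognizer.recognize_google(audio)
    print(f"You said: {question}")
except sr.UnknownValueError:
    print("Sorry, I couldn't understand that.")
except sr.RequestError as e:
    print(f"Speech service unavailable: {e}")
```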

When I interviewed startup founders about their chatbot solutions, it became clear to me that people are getting more comfortable with these technologies. They already speak to smart devices at home, so why not do that at work too? The new solutions are reliable, fast and offer a good experience. In some cases, people even prefer talking to a voice assistant over contacting a real human.

Zero UI means that HR becomes invisible, something I’ve championed for a long time. When an employee has an HR question, they need an answer. And they want to ask that question right where they are, preferably in natural language. If they are at home, why can’t they just ask Alexa how many vacation days they have left? Or, when they are working, simply ask Cortana what their net salary will be this month?
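As a sketch of what the Alexa scenario could look like on the backend, here is a simplified Flask webhook answering a hypothetical VacationBalanceIntent. The intent name and the vacation_days_left lookup are assumptions for illustration; a production skill would also need request signature verification and account linking to identify the employee.

```python
# pip install flask
from flask import Flask, jsonify, request

app = Flask(__name__)

def vacation_days_left(user_id: str) -> int:
    # Hypothetical lookup against your HR system of record.
    return 12

@app.route("/alexa", methods=["POST"])
def alexa_webhook():
    body = request.get_json()
    req = body["request"]
    if req["type"] == "IntentRequest" and req["intent"]["name"] == "VacationBalanceIntent":
        days = vacation_days_left(body["session"]["user"]["userId"])
        speech = f"You have {days} vacation days left this year."
    else:
        speech = "You can ask me how many vacation days you have left."
    # Minimal Alexa-style response envelope.
    return jsonify({
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    })

if __name__ == "__main__":
    app.run(port=5000)
```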

Ask your smart speaker an HR question

Why should an employee open your app to find an answer to a question? The average user has about 70 apps on their phone, of which they use fewer than 25 regularly. I’m pretty sure your HR app (if you have one) isn’t one of those 25. Managing and switching between so many apps on our phones is quickly becoming inefficient.

This means that where you serve employees, the service location, is becoming irrelevant. Just like you don’t need to know which Alexa device answers your question about the weather: all you care about is hearing whether you need to bring an umbrella. You only have to ask the question out loud.

The zero UI concept means you’ll have to check which other channels your employees use, and decide if it is a good idea to meet them where they are instead of asking them to come to you. It’s about giving them the best possible user experience, so they enjoy working for you and you retain them a bit longer.

Ultimately, the platform everyone builds their applications on ends up owning them all. And right now, for the workplace, it seems that productivity platforms like Microsoft Teams, Google Workspace and Slack are winning the game. You will need to integrate your HR services seamlessly with these platforms. It’s much more efficient for users and it provides a better experience. On top of that, you can capitalize on the APIs that the platform providers make available, including voice, and potentially gestures in the future.
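For example, a Slack slash command is one lightweight way to surface an HR service inside a productivity platform. The sketch below assumes a hypothetical /hr command and payroll lookup; a real app would also verify Slack’s request signature before answering.

```python
# pip install flask
from flask import Flask, jsonify, request

app = Flask(__name__)

def net_salary_for(user_id: str) -> str:
    # Hypothetical call into your payroll backend.
    return "EUR 2,845.00"

@app.route("/slack/hr", methods=["POST"])
def hr_command():
    # Slack posts slash-command payloads as form data.
    user_id = request.form["user_id"]
    text = request.form.get("text", "").lower()
    if "salary" in text:
        answer = f"Your net salary this month is {net_salary_for(user_id)}."
    else:
        answer = "Try `/hr salary` to see your net salary."
    # "ephemeral" keeps the reply visible only to the asking employee.
    return jsonify({"response_type": "ephemeral", "text": answer})
```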

What about body movements?

Does that mean the end of devices with screens? Not as long as people want to watch videos or read books and write reports for work. You’ll still need to offer a graphical user interface to get things done. But you’ll also need to understand how your employees want to interact with the content you offer.

Companies have experimented with voice for a long time. It became really successful once voice recognition matured and was supported by artificial intelligence. It is now so good that in some cases we can’t tell whether we are talking to a human or a machine anymore. Think about the (in)famous Google Duplex demo, where an AI assistant called a salon to book a haircut appointment. (Which raised all kinds of ethical concerns, so don’t be creepy.)

Recognizing facial expressions and gestures is a different story. Gestures and facial expressions are the most natural form of interaction, and we are masters at picking up their meaning as well as subtle nuances. But there’s a problem: gestures and facial expressions have cultural elements. Nodding your head can mean “Yes” in one culture and “No” in the next, regardless of the language a person speaks (although that’s very often a good indicator).

Interpreting cultural differences in gesturing should be a lot easier than interpreting all the languages of the world. Nevertheless, the maturity level is far behind voice recognition, and it’s unlikely you will see gestures and body movements appear in business settings anytime soon. At the moment, gesture control is used almost exclusively for video games.

In the past year, a few HR conferences offered VR meetings to emulate the in-person experience as much as they could. But that didn’t include gesture control; you used your computer to interact. There is a big difference between VR headsets (or AR glasses) with controllers and ones that can be controlled by hand gestures. There is a long way to go before everyone has one and they become ubiquitous. There’s also a cost factor involved.

Talk to the bot first

Chatbots and voice assistants are a new way of interacting with services and will significantly change human-to-machine interaction. We are starting to see bots become an accepted and even preferred means of communication. But we’ve only scratched the surface, and especially in HR, there’s still a lot to discover.

That doesn’t mean you can wait to explore this topic: as people talk to machines, they’ll expect to be able to do that everywhere, including for HR services. A conversational interface allows employees to ask questions, receive answers, and even accomplish tasks through natural dialogue. It’s something people do every day and are good at. It’s simply a great experience and far more practical than typing on a screen.
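Under the hood, the simplest possible version of such an interface is keyword-based intent matching. The sketch below is a deliberately naive illustration with made-up intents and canned answers; production bots use trained natural language understanding models instead.

```python
# Hypothetical intents and answers; a real bot would use an NLU service.
INTENTS = {
    "vacation_balance": (["vacation", "days off", "leave"],
                         "You have 12 vacation days left."),
    "payslip": (["salary", "payslip", "net pay"],
                "Your payslip for this month is ready in the portal."),
}

def answer(utterance: str) -> str:
    text = utterance.lower()
    for keywords, reply in INTENTS.values():
        # Match the first intent whose keywords appear in the utterance.
        if any(k in text for k in keywords):
            return reply
    return "Sorry, I don't know that yet. I'll forward this to HR."

print(answer("How many vacation days do I have left?"))
print(answer("When will I get my payslip?"))
```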

To get you started, I’ve interviewed three founders of companies that focus on providing conversational assistants. We talked about the current state of their solutions, as well as future developments. You’ll find more ideas, including companies working on conversational applications, in the December newsletter.
