
One wristband to control them all

Designing a User Experience for Full Ecosystem Control | Part 1

Mudra is a wristband wearable that detects the way you move your hand, using neural input and clever AI to decipher your intention.

But as clever as the AI might be, if the experience is broken (the user tries to do something and doesn't succeed), it doesn't matter how high the AI model's accuracy is; people just won't use it. That's why we put great effort into figuring out the right interaction for each scenario, in pursuit of a great user experience.


This is the first article in a series that puts a spotlight on the question: What is the right user experience for the _____ scenario?

...

Choose your use case

The first thing is to choose the scenario you want to control.

Obviously, controlling a smartwatch OS is very different from controlling a smart TV or a smart home device.


What is the right user experience for the smart TV scenario?



Analyze

A smart TV, in essence, is a menu and a media player. The media player requires a minimum of 4 actions:

1. Play/Pause

2. Next/Skip Forwards

3. Previous/Skip Backwards

4. Volume Control


Menu control also consists of a minimum of 4 actions:

1. Select

2. Menu/Back

3. Scroll Up/Right

4. Scroll Down/Left

Apart from that, since the Mudra device has an air-mouse, we chose to make selection more versatile by adding a cursor, which gives another dimension to the scrolling (more on the TV OS itself in a future article).
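To make that action inventory concrete, here is how the two sets might be written down in code. This is only an illustrative Python sketch; the enum names and the media/menu split are my own, not Mudra's actual API:

```python
from enum import Enum, auto

class MediaAction(Enum):
    """Minimum actions for the media player (illustrative names)."""
    PLAY_PAUSE = auto()
    SKIP_FORWARD = auto()
    SKIP_BACKWARD = auto()
    VOLUME = auto()

class MenuAction(Enum):
    """Minimum actions for menu control (illustrative names)."""
    SELECT = auto()
    MENU_BACK = auto()
    SCROLL_UP_RIGHT = auto()
    SCROLL_DOWN_LEFT = auto()
```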


Observe

We built a smart TV OS so we could try various ways to control it, and also so we could change the OS itself in order to arrive at the right interaction. Within the smart TV OS there's a genre menu (that contains the movie items). These items can be scrolled right and left.

Then we ask users: “What would you do to scroll to the right on this screen?”


While they move their hands to operate the TV OS, someone from the team mimics the designated response by clicking a wireless mouse from a distance.

The first reaction is usually extreme excitement (of course we don't initially tell them we're “cheating” at this stage, so we can keep learning from them).

This Wizard-of-Oz setup is not a gimmick! Making the interaction respond directly to the user's intention is key to understanding what actually works, as opposed to what you think would work…


Create a wish-list

After observing enough users, you can really get a sense of what works and what doesn't. So you write down your top three gestures for each interaction, then cross-reference the results from all the interactions your use case requires. Finally, you try narrowing it down to as few gestures as you can.
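To illustrate that cross-referencing step, here is a minimal Python sketch. The gesture names and per-interaction top-three lists are invented for the example; the point is simply that gestures covering several interactions let you shrink the final set:

```python
from collections import Counter

# Invented top-3 gesture picks per interaction, as gathered from observation.
top_picks = {
    "select":       ["tap", "pinch", "point"],
    "play_pause":   ["tap", "fist", "pinch"],
    "scroll_right": ["swipe_outward", "twist", "tap"],
    "scroll_left":  ["swipe_inward", "twist", "tap"],
}

# Count how many interactions each gesture covers; gestures that score
# high across interactions are the best candidates for the final set.
coverage = Counter(g for picks in top_picks.values() for g in picks)
for gesture, count in coverage.most_common():
    print(f"{gesture}: covers {count} interactions")
```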


There are two reasons to aim for as few gestures as possible:

1. Your users will need to learn and remember all the gestures

Usability studies have shown that most people are comfortable holding around 4–6 items in working memory at any given time. More than that just wouldn't work; they might not remember them.

2. Developing multi-gesture AI models requires a lot of data gathering and engineering work

So if you choose only the optimal gestures, you can save the time it would take to develop the rest and go to market faster.


In the TV OS example, after my wish-list held 10 gestures, we cross-referenced it and created a six-gesture user experience that we believed in (a code sketch of this mapping follows the list):


1. Tap: Play/Pause | Select

2. Swipe inward: Next/Skip Forwards | Scroll Down/Left

3. Swipe outward: Previous/Skip Backwards | Scroll Up/Right

4. Bloom: Menu/Back

5. Shake: Unlock, so the device only responds when intended

6. Pinch and rotate arm: Increase/Decrease Volume
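In code, that dual-purpose mapping boils down to a small context-aware dispatch table. This is a hypothetical sketch, not the actual Mudra TV OS integration:

```python
# Hypothetical mapping: the same gesture resolves to a different action
# depending on whether the TV is in the media player or the menu.
GESTURE_ACTIONS = {
    "tap":           {"media": "play_pause",    "menu": "select"},
    "swipe_inward":  {"media": "skip_forward",  "menu": "scroll_down_left"},
    "swipe_outward": {"media": "skip_backward", "menu": "scroll_up_right"},
    "bloom":         {"media": "menu_back",     "menu": "menu_back"},
    "shake":         {"media": "unlock",        "menu": "unlock"},
    "pinch_rotate":  {"media": "volume",        "menu": "volume"},
}

def resolve(gesture: str, context: str) -> str:
    """Return the action a recognized gesture triggers in the given context."""
    return GESTURE_ACTIONS[gesture][context]

print(resolve("tap", "menu"))   # -> select
print(resolve("tap", "media"))  # -> play_pause
```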





Gather Data & Build AI model

Our AI models require very specific data acquisition of gestures from multiple users to teach the deep learning networks to decipher the gestures. This is a very tedious process I'll write about in the near future.
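To give a rough idea of what such an acquisition session produces, here is a tiny sketch of a labeled recording log. The schema and file names are illustrative assumptions, not Mudra's real pipeline:

```python
import csv
import time

# Illustrative schema: each row ties a raw sensor window file to the
# gesture the participant was prompted to perform and who performed it.
with open("gesture_sessions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["user_id", "gesture_label", "timestamp", "window_file"])
    writer.writerow(["user_042", "tap", time.time(), "user_042_tap_001.npy"])
```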

Next comes the big and complicated engineering part of AI model development. But now we have much more faith that the models we develop are the right ones and that users will enjoy a superior experience.
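For readers curious what the model side can look like, here is a minimal classifier skeleton over fixed-length sensor windows, written with PyTorch. The window size, channel count, and architecture are assumptions for illustration only; the post doesn't disclose Mudra's actual networks:

```python
import torch
import torch.nn as nn

# Assumed shapes: each example is a 200-sample window from 3 sensor
# channels, labeled with one of the 6 chosen gestures.
NUM_CHANNELS, WINDOW, NUM_GESTURES = 3, 200, 6

class GestureNet(nn.Module):
    """A small 1-D CNN sketch for classifying gesture windows."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(NUM_CHANNELS, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, NUM_GESTURES)

    def forward(self, x):  # x: (batch, channels, window)
        return self.classifier(self.features(x).squeeze(-1))

model = GestureNet()
dummy = torch.randn(8, NUM_CHANNELS, WINDOW)  # one fake batch of windows
print(model(dummy).shape)  # torch.Size([8, 6])
```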


Data Acquisition for smartwatch control


Then, all that's left is to enjoy:

The gesture AI models for TV OS control are still in the works, so the clip here shows our previous model, but I'll update this post when the new one is up and running!


Originally published on Medium/UX Planet.




