Naomi Mushka

From Touch to AirTouch: Challenges in Educating People to Use Their Body as a Controller.

Sometimes you find yourself lucky to be part of a team designing a revolution. Not a product. Not an interface. But a new way of thinking. A new relationship with your entire digital environment.


It's exciting on both a professional and personal level.

Working on a product that creates an entirely new category, one that even after ten years of development is still considered ahead of its time and has no competing products in its ecosystem, is a challenge and a continuous adventure.




Like a screenwriter working on a script such as "Avatar", you not only enjoy crafting the small nuances of dialogue between characters, but also, and primarily, look at the big picture from above and try to construct an entirely new world: terminology, best practices, rhythm, visual language, measurement scales, standards, and more.


In this document, I have tried to concentrate on the unique challenges of designing onboarding for such an unprecedented experience.





__________________________________________________________________________________


About the Company

Wearable Devices Ltd. offers a breakthrough in the field of HCI: the smart wristbands it produces enable natural, touch-free device control through hand gestures.

With technology that combines advanced neural sensors, an IMU, and advanced AI models, the company is at the forefront of the next generation of user experience, and is the first to release a smart wristband that allows free gesture control, without the limitations of field of view, lighting, or hand position.

__________________________________________________________________________________


When I began listing the unique challenges the team faced in designing the onboarding, I found three key points:


1. Lack of Mental Model


Without a doubt, the greatest complexity that accompanied our work is that users do not yet have any mental model of a neural wristband: no expectations about what to be careful of when wearing or using it.

This is a unique situation that requires an entirely different approach to user experience design. Every guideline about how to wear the device, every practice, every requirement to change a setting on the phone, is another parameter the user "learns" in the context of using a smart wristband. Onboarding that accompanies the user step by step is not enough in this case. The entire experience needs tighter design, including addressing the user's level of attention at different stages, managing cognitive load, designing the setup, and more.


2. The technology is still in its infancy


The neural technology on which the gesture recognition process is based is groundbreaking in many ways, and yet, because its potential uses are immeasurably broad, even the amazing and unique capability the product offers today is still just a first step.


From this perspective, the development process is still in its early stages. At the current stage, everything the system does not yet do intuitively or automatically is rolled over to the user. For example, until a year ago the user had to perform calibration independently, and an entire, important section of the onboarding was designed around it. The moment development allowed automatic calibration, that section was removed completely.


The product manager serves as a conductor balancing the design and development teams, with all of us sharing one common goal: an out-of-the-box experience. Until that is fully achieved, large parts of the design are essentially "compensation" for development still in progress.


3. No Category


Everyone wants to invent a category to enjoy the marketing advantage it brings. In this sense, we enjoy being the first product in our field. It's not for nothing that we proudly wave the "First Neural Wristband in the World" flag. Pride is pride.


However, when you are innovative beyond a certain measure, you stand largely alone in new territory, without the tools you would otherwise lean on heavily when designing the experience: no mental model on the user's side, as I mentioned in the first point; no best practices to shorten the path to an efficient and effective flow; no professional ecosystem with shared visual and terminological conventions.

To date, every part of the experience has been designed through a long process of trial and error: designing numerous concepts and conducting extensive usability testing.


There aren't many small design teams that can pride themselves on having an experience station at one of the leading universities, allowing their users to receive an experience in which every aspect has undergone meticulous usability testing. From this perspective, I don't think there is a second company like this one.





1. Lack of Mental Model


When discussing a "mental model" in the context of a smart wristband, we mean a system of expectations, assumptions, and understandings that the user has about how the product should work, what it should do, and how it should integrate into their life.

In the case of a smart wristband, and particularly an innovative neural wristband, users do not come with prior experience. They have no point of comparison, no similar product they know that can serve as a model. This is a unique situation that requires an entirely different approach to user experience design.

Onboarding design in this context becomes especially critical.


These are the main points:


1.1 Attention Design


Perhaps the most significant part of onboarding design in the absence of a mental model is heightened sensitivity to the user's level of attention at every stage. Screens were added and removed solely to direct attention to the things that are critically important for the user to do or understand.

Here are two examples of screens that were added only to direct attention to critical instructions in the process:


Wearing the Watch in the Correct Direction


The direction in which the watch is worn is critically important (no matter on which hand); however, we encountered many difficulties explaining this to users. We couldn't get them to stop and pay attention to the instructions. Watch bands have never been the focus of attention in themselves, so it felt completely natural to simply take the band out of the box and attach it to the watch. Every time we placed a title explaining how to wear the band correctly, users would unconsciously scan it briefly and skip to the next stage. They didn't open the instruction leaflet, and they didn't stop to take in the sticker on the product. Users are not yet trained to stop and ask themselves whether there is a correct way to wear the wristband, or whether there is something they need to know. This requires education.


Just as no one today would enter the water wearing their watch without checking its water resistance, because water resistance is a parameter clearly associated with watches, I believe that in the future we will stop to ask ourselves similar questions when wearing and using wearable products. Orientation, tightness, and precise body placement may become familiar, common parameters in this context.



This specific problem was solved by several steps:


  1. We added a preliminary screen before the three screens dealing with correct wearing, to direct the user's attention to the fact that the subject is important.

  2. We left the wearing instructions on the leaflet, but made a substantial change. Instead of a closed leaflet, we placed an open leaflet in the box with instructions displayed.

  3. We added a sticker on the product itself indicating the wearing direction.

  4. Among several versions of instructions and illustrations, we found the ones that work best. Like any other "lesson" in our case, this happened after numerous usability tests and countless design concepts.


Changing Mobile Settings


Without changing the appropriate settings on the iPhone, the central feature, AirTouch, which enables gesture control, simply won't work. It is critically important that users make the changes, which take just a few seconds.


In many usability tests, users reported that they hadn't made the changes to some or all of the settings, simply because they didn't think it was that important. "I thought I'd change it later" was said to the team multiple times in this context.


Again, after several rounds of concept and design, we found that merely stating in a title on the relevant screens that these are important steps is not enough; it is effective only when a preliminary screen appears that warns "the next two steps are critical". Once we have the user's attention, we can work. With the new screen, the instructions were fully followed.


A preliminary screen is one technique we used. Throughout the onboarding there were many additional methods by which we shaped the user's attention. Another example is found in the settings-change screen.


Next to each setting that needs to be changed, we placed a checkbox. Only when all the checkboxes are ticked does the button become active and allow progression. The moment we stopped letting users proceed before marking everything, they were forced to stop and ask themselves where the problem was, and from there it was easy to get them to complete every step and mark it.
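To make this concrete, here is a minimal sketch of the checkbox-gated "Continue" pattern, written in SwiftUI purely for illustration. The view name and setting labels are hypothetical, not the app's actual code; only the gating logic reflects what is described above.

```swift
import SwiftUI

// Illustrative sketch: each required setting gets a toggle (standing in for
// the app's checkbox), and the "Continue" button stays disabled until every
// one of them is switched on.
struct SettingsChecklistView: View {
    // One flag per required iPhone setting (six in the flow described above).
    @State private var confirmed = Array(repeating: false, count: 6)

    private var allConfirmed: Bool {
        confirmed.allSatisfy { $0 }
    }

    var body: some View {
        VStack(alignment: .leading, spacing: 16) {
            ForEach(confirmed.indices, id: \.self) { index in
                // Placeholder labels; the real screen names each specific setting.
                Toggle("Required setting \(index + 1)", isOn: $confirmed[index])
            }
            Button("Continue") {
                // Advance to the next onboarding screen.
            }
            // The forced pause created by the disabled button is what redirects
            // the user's attention back to any unchecked item.
            .disabled(!allConfirmed)
        }
        .padding()
    }
}
```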


Another point worth noting about this screen's design is that we discovered in the field what has long been written in the literature: when people are asked to commit to having performed, or to performing, an action, that personal commitment is a very significant motivator for execution. This screen structure encouraged both execution and marking.


Directing attention to the importance of performing the action + adding internal motivation for execution = action performance


Tightening the Wristband


Another technique, also related to wearing: we presented the same instruction twice, on two consecutive screens. Both screens had the same goal: to get the user to make sure the strap is tight enough. On the second screen, users usually inferred, often unconsciously, that they probably needed to follow the instruction more carefully, and took care to do so. This way, 100% of users performed the additional tightening after the two screens. Even those who were not entirely attentive and rushed through the first screen behaved as expected and performed an additional check on the second.


This repetition is not something we consider good design; we would have preferred a more elegant solution. However, once again, in usability tests this was the only solution that resulted in complete success.


One consideration throughout the entire process was finding the balance between elegant design and the use of red, emphasized warnings for critical instructions. Our approach was to always start with subtler solutions before resorting to "heavy tools". In the case of wristband tightening, we were forced to choose the more aggressive solution.



1.2 Timing Design 


With so many new aspects to the experience, and in the absence of a sufficient mental model, maximum control over every aspect of the experience is essential. We want to ensure our user feels as relaxed as possible throughout the process.


Timing a Magical Moment


That first magical moment, when the user realizes they are in full control of the cursor in front of them in true brain-computer synchronization, always brings a smile, not just to them but to all of us. It is a peak moment in the process, and it is part of a flow that was designed with particular precision.

If you look at the first screen the user operates using AirTouch, you won't see any outstanding, unique, or interesting design; in fact, it's one of the simpler screens presented to them. But the flow of this screen and those after it was planned and timed down to the second and the pixel. What emerged from usability tests is that there is nothing worth adding to this special moment except placing it within a clear framework of guidance and timing.


Another thing that came up in early usability tests was that if we didn't define a precise time for exploration and automatically move users to the next screen, they would in most cases rush ahead simply out of a sense of disorientation, which also kept them from enjoying the moment. Instead of focusing on the experience itself, they would be busy wondering whether something was expected of them and what the next stage was.



The current experience is designed as follows: 

  1. The first screen instructs the user to press the "Activate AirTouch" button on the watch screen. 

  2. The next screen appears automatically upon activation. For 10 seconds, the instruction "Move the cursor for a few seconds" appears. A looping animation demonstrates the expected movement, while a progress bar at the bottom shows the remaining time.

  3. To emphasize that we share their excitement and are still accompanying them, we chose to display the text "cool right?" on the screen after 5 seconds. 

  4. When the bar shows the time is up, the screen automatically switches to a practice that largely continues the movement they've done so far. The practice usually takes the user another 15 seconds to understand and perform.


The resulting flow takes less than half a minute, but the great achievement, from our perspective, of this simple design is that throughout this important half minute we have a user who feels relaxed and taken care of.
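For illustration, here is a rough SwiftUI sketch of how such a timed screen could be structured. The names, constants, and transition flag are assumptions made for the example, not the production code.

```swift
import SwiftUI

// Illustrative sketch of the timed exploration screen: a 10-second window,
// a "cool right?" nudge at the halfway point, and an automatic transition
// once the progress bar completes.
struct FirstCursorView: View {
    @State private var elapsed: Double = 0
    @State private var advanceToNextScreen = false
    private let duration: Double = 10
    private let timer = Timer.publish(every: 0.1, on: .main, in: .common).autoconnect()

    var body: some View {
        VStack(spacing: 24) {
            Text("Move the cursor for a few seconds")
            if elapsed >= 5 {
                // Appears after 5 seconds, so the user feels accompanied.
                Text("cool right?")
            }
            // Remaining-time bar at the bottom of the screen.
            ProgressView(value: elapsed, total: duration)
        }
        .padding()
        .onReceive(timer) { _ in
            elapsed = min(elapsed + 0.1, duration)
            if elapsed >= duration {
                // Automatic transition to the practice screen; the navigation
                // itself is omitted in this sketch.
                advanceToNextScreen = true
            }
        }
    }
}
```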


I imagine all this sounds quite simple, since a defined and timed experience seems self-evident. However, again, in the absence of mental models for gesture control, our first tendency was to let users experiment a bit with the movement until they felt like moving on to the next stage at their own pace. In the first version we planned a "move to next stage" button at the bottom of the screen, but we quickly abandoned the idea because we couldn't guide users to press the button before we had taught them how to perform a tap correctly. In general, we saw that it was better to remove any thought about orientation at this stage; it only confused them. An automatic transition to the next screen solved the issue.


Understanding and Accepting That Some Processes Cannot Be Accelerated


When discussing first-time gesture use, one must take into account that the nature of the operation itself naturally requires an adaptation period, much like pressing any physical pedal or button, where our body learns the required sensitivity in the first moments. Here too, the pressure between the fingers, the speed, the type of press: all of these are tuned quite quickly in the initial moments, but still require a little time (usually several seconds to a minute).


1.3 Designing the Learning Environment


Should the user sit or stand? Should the iPhone be close or far away? Each of these questions was translated into a designed concept. After many usability tests, it was determined that the success of the first use depended mainly on understanding the correct hand position – movement with a fixed elbow. When users begin their first experience with a fixed elbow, they realize within seconds how gentle and effortless their movements can be if they want them to be.

To achieve this, before the practice stage, we included two general guidelines for the setup in the user flow.


  • A first screen instructs them to place their mobile phone in front of them.

  • A second screen displays an instruction "Touch the edges of the screen while keeping your elbow fixed." The instruction appears after a video explaining that the elbow should be fixed for easier use.


Of course, there is no limitation on the movement of the hand; on the contrary, one of the main advantages of the experience is the ability to control with small, delicate gestures as well as fast, broad hand movements. For the first use, however, we found that a fixed elbow is undoubtedly the best guideline for fast and natural control. With a fixed elbow, users are relaxed enough to learn and practice the new gestures.



1.4 Managing Cognitive Load


One question that repeatedly surfaced, particularly in the context of onboarding design, was whether we were overloading the user with too much "educational" information.

Given that the entire experience was new in so many ways, from the initial acquaintance with a smart band and all the accompanying sensory and operational information to the first use of gestures requiring attention, we were forced to weigh, almost surgically, every instruction we considered adding to the onboarding process.


Finding the balance between providing enough information for smooth and efficient operation and providing so much information that it hinders learning and the memorization of the most essential things was a significant part of our daily work.


Looking at the number of lessons the user had to go through as part of the onboarding makes the picture easier to understand. After completing a routine setup process of registering for the system, confirming an email, choosing which hand to use, updating the firmware version, selecting the watch direction, and logging in, these were the lessons:


  • The user must be instructed to fix their elbow and move their hand on its axis for a simple initial operation. (Easy)

  • There are 5 basic gestures they need to know. (Easy)

  • But each gesture has fine points that are very important to understand, such as: a tap is done with the pads of the fingers. (Still easy)

  • There are 6 mandatory settings that must be verified as changed on the iPhone for the system to work. (Possible)

  • There is one mandatory setting that must be verified as changed on the watch for the system to work. (Possible)

  • They must be instructed to return to the onboarding after leaving it to add a watch face on the watch. (Not easy)

  • And it is important, very important, to make sure the user is wearing the band correctly for an optimal reading. (Challenging!)

  • They must be given practice for each gesture separately, to help them remember the gestures and practice sensitivity. (Very challenging!)

  • And finally, they need to be shown how to assemble the band and how to activate and deactivate AirTouch.


... All this - for a successful first use!


After more than 3 years of hard work, I'm proud to say that most users complete the onboarding process and report that it was short, enjoyable, and clear.


Dozens of versions, thousands of components, over 700 usability tests by an amazing team, and one small victory whose true size only we know.



1.5 Managing Conflicting Goals (ours and the user's)


Do you know that feeling when you open a new app and tons of little pop-up tips appear to help you navigate, but you're in a hurry to close them all one by one and get to the point? Working on onboarding is essentially going through all those tips with the user and making sure they don't close or skip them, but rather read, learn, and even remember them!


It's not a simple task at all, especially when the user wants to rush through the onboarding as quickly as possible and start using the product freely. From this perspective, our goals are contradictory, and we need to bridge this difficult gap.


We must guide the user through all the tips to ensure that the moment they finish the onboarding, their first "AirTouch" will be as natural as possible, and we can guarantee a smooth and pleasant user experience. In our case, this is the product.


We chose to design a large part of the onboarding as an enjoyable experience. We made sure to package each part of the experience as a separate section, and at each transition to indicate what the user is about to learn or to celebrate what they have completed. The separate parts contributed to the feeling of rapid progress.


We made sure to create a structure from which the user can understand how many steps they have left until completion. For example, immediately after completing the settings section, we presented them with a screen that is a kind of index showing all the lessons they need to go through. At the end of each lesson, they return to this screen, where they can see what stage they are at, which lessons they have completed, and how many remain. Besides the fact that the practice is usually quick and easy, the very fact that the entire experience is framed contributes to the sense of orientation that is so important in this context.



1.6 Teaching Gestures


So we’ve talked about attention, cognitive load, and timing; let's now discuss learning gestures.

Teaching users to perform these gestures is not simply a matter of making sure they remember them after the onboarding. We already know there are several things they need to pay attention to at all times:


  • Hand position

  • Which gestures to perform

  • How to perform each gesture correctly


Given that there are only 7 gestures for full operation, this might sound like a fairly simple task, but the challenges were numerous. As the development progressed, the operation became simpler and more natural, but over 3 years ago when I joined the company, we had to deal with a fundamentally different set of problems compared to what we face today in the context of learning gestures.


The independent calibration that was required in the past placed a huge burden on the user's attention and on their ability to focus on and remember other details. The gestures were completely unfamiliar, and users had no point of reference for them. Today, after the launch of Apple's Double Tap, we are in a completely new reality: when we approach a user, they already know what a tap is. A small detail that makes a world of difference.


The discovery that the learning process should start by fixing the elbow was a game-changer. In the first versions we presented to users, we taught them the gestures and only at the end instructed them to move their hand with a fixed elbow. Since this wasn't presented as a big, important part of the process but as a side instruction, users didn't remember it very well and waved their hands all over the place, which also made it difficult to perform the gestures. A small slide was often performed with an unnecessarily exaggerated motion, the users' hands reaching high above their heads. From that point, they were more preoccupied with how to operate the entire arm in broad movements, and therefore were not focused on performing the gestures themselves correctly (tapping with the pads of the fingers, for example).


The moment the idea of starting the learning process with the elbow fixed on the table arose, everything changed. We managed to isolate one huge variable and completely eliminate it in the first use.


With the additional removal of the need for independent calibration, we were left with one lesson, admittedly a large one, but only one: the gestures themselves!


So how best to teach the gestures so that users remember the lessons and implement them? Again, after many, many rounds of design and usability testing, we managed to arrive at a flow that works. It's understandable, and users go through it quickly and with great success. From here, the next step for us is a practice section that is entirely dynamic and more lively.


Choosing the Learning Approach


In the process, we went through dozens of concepts to examine the most suitable form of flow for learning gestures. The big challenge here, again, is remembering the gestures, practicing the execution, and understanding the subtleties of each gesture. 

What didn't we try? Put it this way: we considered different approaches to learning a new language, from bottom-up approaches, in which we presented users with all the gestures and all their small nuances in one long flow followed by continuous practice, to the opposite, top-down approaches, in which we presented a conceptual video explaining the use of gestures in general, with do's and don'ts, followed by practice.

There were concepts that didn't succeed due to development limitations, and there were concepts that simply complicated ideas that were quite simple at their core.


______________


One of the flows we tested included interactive learning, in which we wanted to create a kind of back-and-forth dialogue with the user: we would demonstrate a gesture, they would perform it, then another gesture, and so on. However, the rapid transition between watching and being required to perform confused the participants greatly. They didn't understand when they were supposed to listen and when it was their turn to act.


Another concept we tested presented all the gestures in a sequence, followed by several practice screens. This concept showed less favorable results than the chosen one in terms of the participants' level of understanding and proficiency at the end.



The concept selected after numerous usability tests presented the entire gesture-learning section as a single unit, organized around an index-like screen, as I mentioned earlier. Each gesture receives its own processing time, which includes an explanatory video and practice.



The flow for each gesture looks like this:

  • Watching an explanatory video about the gesture

  • Activating AirTouch

  • Practicing

  • Turning off AirTouch

  • Returning to the main index screen


In this flow, users both practice the gestures and get used to the idea that all operation occurs through the watch. Between practices, they are asked to turn AirTouch on and off.
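As a side note, the per-gesture flow above can be thought of as a tiny state machine. The sketch below, in Swift with hypothetical type names, simply restates the five steps in code form; it is not the app's implementation.

```swift
// Each gesture lesson walks through the same fixed sequence of steps.
enum GestureLessonStep: CaseIterable {
    case watchVideo, activateAirTouch, practice, deactivateAirTouch, backToIndex

    /// The step that follows this one, or nil once the user is back at the index.
    var next: GestureLessonStep? {
        switch self {
        case .watchVideo:         return .activateAirTouch
        case .activateAirTouch:   return .practice
        case .practice:           return .deactivateAirTouch
        case .deactivateAirTouch: return .backToIndex
        case .backToIndex:        return nil
        }
    }
}

// Example: walking one gesture lesson from start to finish.
var step: GestureLessonStep? = .watchVideo
while let current = step {
    print("Current step: \(current)")
    step = current.next
}
```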



These are the principles that guided us in this challenge:

  • Framing the experience: An index structure that gives the entire experience a sense of orientation and helps estimate the required execution time.

  • Using multiple senses: The videos include a visual display of the gesture + a voice explaining what is required, and the practice itself involves sensory input. The use of multiple senses deepens the processing stage and encourages memorization.

  • Measured selection of each element that enters the flow: Each element in the overall flow was carefully examined and selected to allow mental space for the more significant lessons to be grasped.



2. The product was, and still is, under development.


As more parameters were resolved at the development level, there were fewer "lessons" for the user to go through. In this sense, there is an inherent tension between how ready the product is to meet early adopters, who are willing to go the extra mile and learn additional lessons in order to gain this magical experience that is currently possible only through us, and how strongly users will feel that they haven't yet been given the out-of-the-box experience they expect.


In my specific role on the design team, I feel today that every delay of this truly complex product at the hardware level gave us the time to optimize more and more lessons. Today, as the second product launches, we believe we have managed to make the final adjustments that allow it to reach the customer's home, still an innovative, first-of-its-kind product, but already with an onboarding process that feels light and fast for users. And of course, the performance is also in a different place. I think every experienced designer asks themselves at what level of maturity it is desirable to present their product to the user. As always, I suppose, it's all a matter of expectations.

In the same vein, two lessons worth mentioning, which were solved at the development level after almost a year of trying to crack them at the UX level, were calibration and drag-and-drop.


Calibration:

In the first year we worked on onboarding, calibration was considered a serious "lesson". We showed a video that explained to users why and how to calibrate the band through the display on the Mudra Watch Face. This was one of the most complex concepts to make accessible, because people are not used to thinking of their hand as something whose position in space needs to be consciously considered, and certainly not to stop and calibrate the system in relation to it from time to time.

Calibration itself also has many details to consider and remember; for example, during calibration the hand should be held in the position from which you intend to operate from then on. Many test subjects, immediately after the calibration process was completed, automatically changed the angle of their hand. This complex issue was solved one morning when the product manager arrived and announced that calibration would now be handled entirely by the system itself. The lesson was canceled. A year of intensive work, and still, any development that removes a "lesson" from the user is a thrilling step forward.



Drag-and-drop: 

Another example is the work on drag-and-drop. Of all the exercises the user goes through, drag-and-drop was the one users spent the most time on. Users would drag elements on the screen and sometimes release mid-drag. We tried different versions of the practice that did not affect the results, and finally decided to try a different approach: we thought that if users understood conceptually that the band measures the pressure between the fingers, it would help them pay more attention to the intensity of the pressure they were applying.


We started by designing a page that showcases this amazing capability in a video, after which the user is presented with an additional screen with a bar showing the amount of pressure they are applying. These screens were supposed to sit before the page with the drag-and-drop practice, but we will probably never know how much the results could have improved, because once again the problem solved itself: one morning the product manager entered the office and announced with great fanfare that the algorithms department's latest breakthrough allowed us to move to an embedded model that pretty much solved the entire issue.



It is important to note at this point that the UX/UI team focuses on designing the onboarding application, but when the product itself is the user experience, any improvement in performance as a result of an improvement in the algorithmic model, or a change in the development environment, is an improvement in the user experience.


These are the things we as a company have managed to improve in the user experience in recent updates:


  • Shortened the time it took for the user to grasp the required sensitivity of the system and easily adapt to the new control method.

  • Reduced latency so that the experience feels as natural as possible.

  • Increased accuracy so there are fewer false positives.

  • Extended battery life.

  • Changed the internal structure of the band, so that the user can use the product in a wider variety of cases and environments.

  • Improved the calibration process.

  • Refined the algorithm to support a larger number of users and increase the certainty of gesture detection.


All of these factors cause a highly complex experience to feel transparent and natural.


3. No Category


How do you design an experience when there are no established conventions or benchmarks? After detailing the unique aspects of our experience, I want to emphasize the challenges we faced, without going into too much detail about each one, primarily to paint a clearer picture of working in uncharted territory.


The first challenge lies in learning and adopting new gestures. Since there is no common language or familiar symbols for hand movements, we had to build an entire system for explaining and demonstrating each gesture. This includes choosing names, creating clear animations, and presenting practical scenarios. Additionally, we made sure to consider a wide range of users, from tech-savvy to beginners, and create a learning experience that would be understandable and easy for everyone.

The second challenge was creating an effective measurement and evaluation system. To understand whether users were learning the gestures effectively, we created a metrics scale we could rely on to assess the improvement in the results of the various practices. At the end of each usability test, participants were asked to perform a test that checked their level of proficiency and understanding.


The third challenge is the lack of feedback from other users. When there are no similar products on the market, we cannot compare our product to competitors or learn from the mistakes of others. We have to rely on our own research, our understanding of user needs, and an iterative process of trial and error.


To operate as effectively as possible in the face of these challenges, these are the steps we took:


  • Intensive research:

Daily exploration of references and of various products from which we could learn some angle on a learning process involving body movements. Apps that teach piano or guitar were a relevant reference, as were onboarding apps for various wearable products, setup apps for smartwatches, and more.


  • Establishing terminology and creating an internal library:

To maintain a uniform language, we created a glossary of terms. This kept all teams aligned on a single terminology. At a certain point, when more and more companies started offering gesture control with different names, we decided to stick to Apple's terminology since the Mudra Band, our first product, lives in Apple's ecosystem.


  • Conducting usability tests regularly:

In the past two years, an experience station was set up at one of the well-known universities near the office. A dedicated team comes and conducts usability tests on a regular basis, allowing us to test all aspects of the product with extreme care.


  • Using quantitative metrics:

Tracking quantitative metrics such as task completion time, error rate, and the number of times the user asks for help.


  • Creating a working prototype for every new concept:

As part of the regular process of testing concepts, we created a live working prototype that included all the videos and interactions that exist in the flow.


In summary, developing a gesture learning app is a challenging and innovative project. The lack of established conventions forces us to think outside the box and develop creative solutions. However, these challenges also present a unique opportunity to create a groundbreaking and innovative user experience.




