
Gesture UI Intelligence for the Smart Glasses Era



Before words, there were gestures. Now technology is learning their meaning.

Hand gestures are our oldest interface - instinctive, not learned. We never shaped them for machines; they evolved for human communication and interaction.


Today, gestures are becoming the language through which we communicate with machines.



What's in a Name?

“That which we call a rose by any other name would smell as sweet.” 

Everybody is trying to name and claim dominance in the post-iPhone era - fighting to define the next personal computing category. 


Whether it's AR, XR, MR, or Spatial, the labels may shift, but the essence remains. All face-worn devices serve two functions: they display digital content and enable the user to interact. 


Their ability to perform these tasks ultimately determines their value and utility.



A Taxonomy of the Smart Glasses Landscape


A simple and practical way to categorize today’s glasses products is as follows:


  • AI Glasses – Screen-less eyewear that plays audio and connects to an AI agent for contextual information. (Meta Ray-Ban, Bose Frames, Amazon Echo Frames)

  • AR Glasses – Monocular or binocular displays that act as a HUD (Heads-Up Display) or overlay digital elements like text, navigation, or checklists. (EverySight Raptor, Google Glass Edition 2, RayNeo Air 3s, Rokid Max 2)

  • MR Glasses – Fully immersive, high-resolution devices that blend virtual and physical environments for apps, media, and collaboration. (Apple Vision Pro, Snap 2024 Spectacles, Magic Leap 2)


Left to right: Meta Ray-Ban glasses, Google Glass Enterprise Edition, Snap Spectacles 2024

The Three Major Pointing Device Modalities

Navigation and pointing are the foundations of human-computer interaction. Navigation moves us through digital space, while pointing lets us select and manipulate elements. Together, they define how we engage with a graphical interface. Three primary input modalities have shaped how we point, select, and interact with digital elements:


  • Computer Mouse – continuous, free-movement cursor control

  • Directional Pad (D-Pad) – stepped, discrete directional navigation

  • Gaming Controller – each action is mapped to a dedicated button or trigger


Each modality aligns with a different style of GUI: 


  • 🖱️ A computer mouse corresponds to precise visual selection - think desktop PC. 

  • 🕹️ A D-pad works well when the UI is arranged in grids or menus - think AppleTV. 

  • 🎮 A Gaming Controller excels where an immediate, pre-defined action is needed - think iPod.


The Broader Scope

The Mudra Link offers the user all three control modes:


  • For AI glasses, discrete gestures like tap or double-tap combined with wrist orientation can be mapped to simple functions such as play/pause, select/back, or next/previous. [ ▶, ⏸, ⏭ , ⏮ ]

  • AR glasses are controlled using wrist flicks and fingertip pressure, enabling directional navigation similar to arrow keys or a virtual trackpad. [ ⬅️ , ➡️ , ⬆️ , ⬇️ ]

  • MR glasses interaction is most immersive when combining spatial wrist pointing with subtle tap gestures, letting users select, drag, or manipulate digital elements in mid-air. [ ↕️ , ↔️ ,  👌 ,  ↗️ ]
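The three control modes above can be sketched as a simple lookup table. This is a minimal, hypothetical illustration: the gesture and action names below are assumptions for the example, not the Mudra Link API.

```python
# Hypothetical sketch: routing one wristband's gestures to different
# glasses categories. Gesture and action names are illustrative only.

GESTURE_MAPS = {
    "ai_glasses": {            # discrete media/agent control
        "tap": "play_pause",
        "double_tap": "select",
        "wrist_cw": "next",
        "wrist_ccw": "previous",
    },
    "ar_glasses": {            # D-pad style directional navigation
        "flick_left": "nav_left",
        "flick_right": "nav_right",
        "flick_up": "nav_up",
        "flick_down": "nav_down",
        "press": "confirm",
    },
    "mr_glasses": {            # spatial pointing plus taps to manipulate
        "point": "move_cursor",
        "tap": "select",
        "tap_hold": "drag",
        "release": "drop",
    },
}

def resolve(device_class, gesture):
    """Return the action mapped to a gesture for a device class, or None."""
    return GESTURE_MAPS.get(device_class, {}).get(gesture)
```

The point of the table is that the same physical vocabulary (taps, flicks, wrist motion) resolves to different semantics depending on the device category it is paired with.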


Gesturing a controller; gesturing a directional pad; gesturing a PC mouse

-----

Gesture Control Schemes at Wearable Devices


Wearable Devices Ltd. offers the Mudra Link - a universal gesture controller that works across multiple operating systems and supports multiple pointing modalities. 


The product is HID compatible, supports both mouse and D-pad modes, and, with the gesture mapper, delivers a gaming-controller experience. 
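Because the device presents itself as a standard HID peripheral, a host needs no custom driver. As a rough illustration, mouse mode boils down to emitting standard boot-protocol mouse reports (one button byte plus signed X/Y deltas). The event names and the mapping below are assumptions for the sketch; the real gesture mapper is configured through the companion software, not in code like this.

```python
# Hypothetical sketch: translating recognized gestures into 3-byte
# HID boot-mouse reports (buttons, dx, dy). Illustrative only.
import struct

def mouse_report(buttons=0, dx=0, dy=0):
    """Pack a boot-protocol mouse report: 1 button byte, signed dx, dy."""
    return struct.pack("<Bbb", buttons & 0x07, dx, dy)

def gesture_to_report(gesture):
    """Map a recognized gesture to an HID mouse report (illustrative)."""
    if gesture == "tap":            # fingertip tap -> left click
        return mouse_report(buttons=0x01)
    if gesture == "wrist_left":     # wrist motion -> cursor move
        return mouse_report(dx=-10)
    if gesture == "wrist_right":
        return mouse_report(dx=10)
    return mouse_report()           # idle report
```

D-pad mode works the same way at the protocol level, except the reports carry keyboard/consumer-control usages (arrow keys, select) instead of relative cursor deltas.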


To further explore the evolution of gesture control for digital devices, download our 2024 whitepaper: Elevating AR Glasses User Experience with Gesture Control and Neural Wristband.

-----

Why This Matters Now 

The ecosystem is aligning. 

The smartphone era generated over $8 trillion in hardware and app revenue, powered by the touchscreen, a universal interface anyone could use. 

Today, the neural band is emerging as the next interface that will define how humans interact with technology. Consumer brands, platform developers, and OEMs are already shaping this new input standard. 

Staying close to this shift means staying ahead, before the next interface becomes the norm.
