Four Ways to Control AR Glasses
- Ariel Amar
- Aug 23, 2025
- 2 min read
AR and AI glasses are coming! But, without a touchscreen in our hands, how will we interact with them?
Tomorrow’s interface won’t live on a phone screen; instead, it will overlay information and experiences seamlessly onto our world.
Maps, messages, and apps will appear as part of what we see, creating a continuous experience, freeing us from constantly holding a phone, and showing exactly what we need, when we need it.
Here are the four input methods vying for dominance.
[1] Temple Area Touchpad.
"Touch Control, Built into the Frame"
Temple touchpads are slim, touch-sensitive strips built directly into the frame, usually positioned at the arm near the temple, just above the ear.
Using taps, swipes, and long-presses, users navigate menus, adjust settings, and trigger actions without needing a separate controller.
Because the touchpad is part of the glasses’ physical frame, it’s always within reach and doesn’t add bulk. This form is already used in devices like the Rokid Max, offering a low-profile and familiar way of interaction while wearing the glasses.
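At its core, a temple touchpad is just a stream of touch events mapped to UI actions. Here is a minimal sketch of such a dispatcher; the event names and actions are hypothetical, not any vendor's actual API.

```python
# Sketch: dispatching temple-touchpad events to UI actions.
# Event and action names below are illustrative assumptions.

ACTIONS = {
    "tap": "select",
    "double_tap": "go_home",
    "swipe_forward": "next_item",
    "swipe_back": "previous_item",
    "long_press": "open_settings",
}

def handle_touch_event(event: str) -> str:
    """Map a raw touchpad event to a UI action, ignoring unknown events."""
    return ACTIONS.get(event, "ignore")

print(handle_touch_event("swipe_forward"))  # next_item
```

Keeping the mapping in a single table like this also makes it easy to let users remap gestures in a settings menu.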

[2] Handheld Controller
"A Familiar Input, Repurposed for AR"
Handheld controllers for AR glasses serve as companion devices, typically connecting over Bluetooth or proprietary links to act as input. Once paired, they act as the primary navigation tool to move through menus, select items, and trigger actions on the display.
The XREAL Beam uses a touch-sensitive surface for directional input, while devices like the Rokid Station rely on physical buttons. These controllers often include haptic feedback and are designed to integrate tightly with the glasses’ software layer, ensuring consistent, responsive control.

[3] Gesture Recognition Camera
"When Your Vision Works, So Do Your Gestures"
Gesture-recognition cameras detect and interpret hand movements. By tracking motions like taps and swipes, users can control interfaces without touching any device. This method typically relies on depth-sensing cameras to create a 3D map of the space in front of the user.
The tracking zone, however, is limited to roughly a 40- to 70-degree field of view extending outward from the glasses, and gestures must stay within line of sight. Performance is also affected by lighting conditions. Integrating cameras into the frame adds bulk and weight, while the continuous processing places high demands on computing resources and battery life.
Despite these limitations, gesture cameras offer a hands-free way to interact, making the experience feel more spatial and responsive.
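The field-of-view constraint above boils down to a simple geometric test: is the tracked hand inside the camera's viewing cone? A minimal sketch, assuming a mid-range 60-degree FOV and hand coordinates in the camera's own frame (metres, +z pointing outward):

```python
import math

# Sketch: test whether a tracked hand position lies inside the gesture
# camera's field-of-view cone. The 60-degree default is an assumed
# mid-range value from the 40-70 degree range typical of these cameras.

def in_tracking_zone(x: float, y: float, z: float, fov_deg: float = 60.0) -> bool:
    """Return True if the point is in front of the camera and within the FOV cone."""
    if z <= 0:  # behind the camera plane: no line of sight
        return False
    half_fov = math.radians(fov_deg / 2)
    angle = math.atan2(math.hypot(x, y), z)  # angle off the optical axis
    return angle <= half_fov

print(in_tracking_zone(0.1, 0.0, 0.5))  # hand slightly off-centre: True
print(in_tracking_zone(0.6, 0.0, 0.3))  # hand far to the side: False
```

Real trackers add depth limits and temporal smoothing on top of this, but the cone test is why gestures "disappear" the moment your hand drifts out of view.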

[4] Neural Wristband
Made possible by recent advances in AI, sensor technology, and flexible PCBs, neural wristbands mark a new frontier in human-computer input.
Worn like a fitness band, they detect neural signals generated by hand gestures and finger movements. These signals, originating from the forearm’s muscles, are translated into digital actions.
Because the sensors read intent rather than visible motion, micro-gestures can be used — even in complete darkness or with hands tucked inside a pocket.
For AR glasses, this brings a leap in UX, UI, and HCI. Interfaces become easier to navigate, less tiring, and more discreet, while neural input removes the need for bulky controllers or constrained touchpads. The result is interaction that feels natural, precise, and seamlessly woven into everyday life.
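Conceptually, the wristband's job is to turn multi-channel muscle signals into discrete gestures. Real systems use learned models over surface-EMG data; the toy nearest-centroid classifier below only illustrates the idea, and the channel count, centroid values, and gesture names are all made up for the sketch.

```python
import math

# Sketch: classifying a window of 3-channel wrist-sensor samples into a
# gesture. Centroids and gesture labels are illustrative assumptions.

CENTROIDS = {
    "pinch": [0.8, 0.2, 0.1],
    "swipe": [0.2, 0.9, 0.3],
    "rest":  [0.05, 0.05, 0.05],
}

def rms(samples):
    """Root-mean-square amplitude of one channel's samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def classify(window):
    """window: list of per-channel sample lists -> closest gesture by RMS feature."""
    feature = [rms(ch) for ch in window]
    return min(
        CENTROIDS,
        key=lambda g: sum((f - c) ** 2 for f, c in zip(feature, CENTROIDS[g])),
    )

# A strong signal on channel 1 with quiet channels 2-3 resembles "pinch".
print(classify([[0.8, 0.8], [0.2, 0.2], [0.1, 0.1]]))  # pinch
```

Because classification happens on signal features rather than on what a camera can see, the same pipeline works with the hand out of view, which is exactly the property that makes micro-gestures possible.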
