Hadas Brezner

Designing a Neural Input Wristband for XR Experiences - New Taxonomy Framework

In the previous blog post, we introduced the conventional approach to Human-Computer Interaction (HCI), which tends to prioritize input devices like mice, touchscreens, and keyboards over user experience. We emphasized the importance of putting the user's needs and experience first.


Now, in this blog post, we move forward to explore the realm of neural input devices and Brain-Computer Interfaces (BCI), addressing their emerging development state and proposing a framework for defining their scope. This framework categorizes user interactions based on activities and introduces a simplified taxonomy to promote clarity and cooperation among different parties.



Neural input devices and Brain-Computer Interfaces (BCI) are still evolving technologies and are not yet mainstream. However, their lack of technical and functional maturity does not mean that a standard should not be developed. To address this, we present a comprehensive framework aimed at outlining the scope of neural input interfaces.


The framework describes and categorizes users' levels of interaction based on their activity.

Such activity includes navigating to a given location, interacting with a digital element, and maintaining awareness of location, motion, direction, action, and result.

Our simplified framework and taxonomy define levels of interaction via four parameters:

1. handheld versus hands-free,

2. hands-on versus touchless,

3. big versus small physical movements,

4. and the time it takes to input the command and receive feedback.


As seen in the table below, this creates six product categories covering the entire spectrum of neural input interfaces. We are confident that these defined classifications will offer valuable insights to engineers, product managers, designers, and customers alike, ensuring clarity and understanding for all stakeholders involved.
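
To make the classification easier to reason about in software, here is a minimal sketch (our own illustration, not taken from the white paper) of how the four parameters could be encoded as a small data model. The field names and the per-level assignments below are assumptions for illustration; the authoritative mapping is the table in the white paper.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class InteractionLevel:
    """One HCI level, described by the four taxonomy parameters."""
    level: int        # HCI level 0-5
    handheld: bool    # parameter 1: handheld vs. hands-free
    hands_on: bool    # parameter 2: hands-on (touch) vs. touchless
    movement: str     # parameter 3: size of the physical movement
    latency: str      # parameter 4: qualitative input-to-feedback time


# Illustrative, assumed assignments for the six categories discussed in this post.
TAXONOMY = [
    InteractionLevel(0, handheld=True,  hands_on=True,  movement="big",           latency="noticeable"),
    InteractionLevel(1, handheld=True,  hands_on=True,  movement="big",           latency="noticeable"),
    InteractionLevel(2, handheld=False, hands_on=True,  movement="small",         latency="short"),
    InteractionLevel(3, handheld=False, hands_on=False, movement="small",         latency="short"),
    InteractionLevel(4, handheld=False, hands_on=False, movement="imperceptible", latency="short"),
    InteractionLevel(5, handheld=False, hands_on=False, movement="none",          latency="near-instant"),
]


def describe(level: int) -> str:
    """Summarize a level in one line, e.g. for documentation or tooling."""
    row = TAXONOMY[level]
    contact = "handheld" if row.handheld else ("hands-on" if row.hands_on else "touchless")
    return f"HCI level {row.level}: {contact}, {row.movement} movement, {row.latency} feedback"
```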

Within the spectrum of Human-Computer Interaction (HCI), levels 0 to 2 encompass interactions driven by coarse palm, hand, and finger movements. In these levels, users are required to physically hold or touch the interface while inputting commands. The user's journey through this process unfolds as follows:

  • Movement intent

  • Neural signal

  • Muscle movement

  • Input interaction

  • Digital command

These interactions involve a combination of touch- and muscle-based actions, such as keystrokes, point-and-click, drag-and-drop, finger taps, and finger dragging. At HCI level 0, we find simple controls like on/off buttons, switch toggles, or navigation vectors. Level 1 introduces more familiar devices such as computer mice, joysticks, and game controllers. HCI level 2 incorporates touchpads, touchscreens, and directional pads to facilitate interactions.
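
As a rough illustration (our own sketch, not from the white paper), that five-step journey can be written as an ordered pipeline. The stage names follow the bullets above; the traced action is a hypothetical level-1 mouse click.

```python
from enum import Enum, auto


class Stage(Enum):
    """The five steps of a level 0-2 interaction, in order."""
    MOVEMENT_INTENT = auto()
    NEURAL_SIGNAL = auto()
    MUSCLE_MOVEMENT = auto()
    INPUT_INTERACTION = auto()
    DIGITAL_COMMAND = auto()


def trace_interaction(action: str) -> list[str]:
    """Walk a touch/muscle-based action through every stage of the journey."""
    return [f"{stage.name.replace('_', ' ').title()} -> {action}" for stage in Stage]


if __name__ == "__main__":
    # A level-1 example: point-and-click with a computer mouse.
    for step in trace_interaction("point and click"):
        print(step)
```

At levels 3 through 5, discussed next, some of these stages shrink or disappear entirely.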


Moving ahead, HCI levels 3 through 5 open up innovative possibilities and demand consideration of new kinds of user interaction. These levels revolve predominantly around intent- and neural-centered commands, often integrating nascent AI deep-learning technologies. At these levels, users execute subtle finger movements or even non-visible muscle motions, engaging in hands-free, touchless interactions. In essence, the user becomes the interface: movement intent is translated into command action through neural signals, with minimal or no discernible muscle movement. Levels 4 and 5 signify a significant advancement, bringing movement intent ever closer to direct command action and propelling us into a new realm of intuitive interaction possibilities.


So how does the user engage with the interface at HCI level 3? Interaction occurs through mid-air gestures, finger movements, and fingertip pressure. Devices like wrist wearables and gesture sensors exemplify this level.
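
To give a flavor of what decoding those signals involves in software, below is a deliberately simplified sketch of turning a short window of wrist-sensor data (for example, surface EMG) into a gesture label. The gesture names, feature choices, and the toy nearest-centroid classifier are all assumptions made for illustration; a production neural wristband would rely on far more sophisticated deep-learning models trained on real recordings.

```python
import numpy as np

# Hypothetical gesture vocabulary for a wrist wearable (illustration only).
GESTURES = ["rest", "finger_tap", "pinch"]


def window_features(window: np.ndarray) -> np.ndarray:
    """Compute simple per-channel features (mean absolute value and RMS)
    from a (samples x channels) window of wrist-sensor data."""
    mav = np.mean(np.abs(window), axis=0)
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    return np.concatenate([mav, rms])


class NearestCentroidDecoder:
    """Toy decoder: classify a window by its closest per-gesture feature centroid."""

    def fit(self, windows: list, labels: list) -> "NearestCentroidDecoder":
        feats = np.stack([window_features(w) for w in windows])
        self.centroids = {
            g: feats[[lab == g for lab in labels]].mean(axis=0) for g in set(labels)
        }
        return self

    def predict(self, window: np.ndarray) -> str:
        f = window_features(window)
        return min(self.centroids, key=lambda g: np.linalg.norm(f - self.centroids[g]))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic 200-sample, 8-channel windows standing in for real recordings.
    train = [rng.normal(scale=0.1 * (i + 1), size=(200, 8)) for i in range(3) for _ in range(10)]
    labels = [g for g in GESTURES for _ in range(10)]
    decoder = NearestCentroidDecoder().fit(train, labels)
    print(decoder.predict(rng.normal(scale=0.3, size=(200, 8))))  # likely "pinch" given the scales above
```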


Moving to HCI level 4, we encounter movements ranging from minute to imperceptible, typically facilitated by wearables or invasive interfaces. HCI level 5 involves direct brain-to-device interaction, similar to level 4 but usually with a substantial reliance on AI deep learning. Notably, only at level 5 does movement intent instantaneously transform into a digital command. At every other level, regardless of how small the physical movement is, an additional time delay remains, a reminder of the constraints imposed by our physical reality.


For example, to perform a "slide to unlock" command, you go through these steps: first you intend to move, which produces a neural signal that drives a muscle movement; then you physically interact with the device; and finally the digital command executes. Only at the highest level, HCI level 5, does your intention become the digital command directly. At every other level, no matter how small the movement, some extra time is still required, because our bodies have physical limits.
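
As a back-of-the-envelope way to see that constraint, the sketch below sums per-stage delays for a "slide to unlock" command at a touch-based level versus a hypothetical level-5 interface. The millisecond figures are illustrative placeholders, not measurements from any device.

```python
from typing import Collection

# Illustrative placeholder delays (milliseconds) for a "slide to unlock" command.
# These are not measurements; they only show how the stages add up.
STAGE_DELAYS_MS = {
    "movement intent": 0,       # the starting point
    "neural signal": 20,
    "muscle movement": 150,
    "input interaction": 300,   # dragging the slider across the screen
    "digital command": 10,
}


def total_latency_ms(skipped_stages: Collection[str] = ()) -> int:
    """Sum the delays of the stages a given HCI level still has to go through."""
    return sum(ms for stage, ms in STAGE_DELAYS_MS.items() if stage not in skipped_stages)


# A touch-based level keeps all five stages; a hypothetical level-5 interface
# skips the muscle movement and the physical input interaction entirely.
print("touch-based:", total_latency_ms(), "ms")
print("level 5:", total_latency_ms({"muscle movement", "input interaction"}), "ms")
```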

In our upcoming blog post, we will delve into the art of balancing functionality, accuracy, and design when crafting a wearable neural interface. Striking this balance requires careful consideration across several dimensions in order to give the user a fashionable, functional product that delivers a great experience.


*All figures shown in this blog are taken from our white paper, available for download here.
