Hadas Brezner

Designing a Neural Input Wristband for XR Experiences - HCI

In the previous blog post, we embarked on an exciting journey into the realm of designing a neural input wristband for Extended Reality (XR), exploring the requirements needed to define a neural interface. Everything we covered there is just a glimpse into the comprehensive guide found in our Whitepaper.


In this blog post, we will delve into the world of HCI (Human-Computer Interaction). We will introduce the concept and the process a user goes through when using a digital device, expand on the future of HCI and ubiquitous computing, and examine the GUI, UX, and UI elements of interfaces.



Human-Computer Interaction (HCI) is conveyed through a user's input and computer output, by means of a user interface and an interaction device. The user forms an intent, expressed by selecting and executing an input action. The computer interprets the input command and presents the output result, which the user perceives to evaluate the outcome.

For example, when you want to move your hand, your brain sends a signal through your body to make it happen. Similarly, you use a device such as a keyboard, mouse, or touchscreen to tell the computer what to do. Humans take in a great deal of information through their senses, using their eyes, ears, and hands to see, hear, and feel the computer's output. Telling the computer what to do, however, takes considerably more effort and time.
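To make this interaction loop concrete, here is a minimal, illustrative sketch in Python (our own simplification, not taken from the white paper) of the intent → input → interpretation → output → evaluation cycle. The mappings and names are hypothetical placeholders:

```python
# Minimal, illustrative sketch of the HCI cycle: the user forms an intent,
# expresses it as an input action, the computer interprets the action and
# presents an output, and the user evaluates the outcome.
# The mappings below are hypothetical placeholders.

INTENT_TO_ACTION = {"open mail": "tap mail icon"}    # user side: intent -> input action
ACTION_TO_OUTPUT = {"tap mail icon": "inbox shown"}  # computer side: input -> output

def interaction_cycle(intent: str) -> bool:
    action = INTENT_TO_ACTION[intent]     # user selects and executes the input action
    output = ACTION_TO_OUTPUT[action]     # computer interprets the input and presents a result
    print(f"{intent!r} -> {action!r} -> {output!r}")
    return output == "inbox shown"        # user perceives the output and evaluates the outcome

interaction_cycle("open mail")
```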


The goal is to make computer interaction feel as easy and natural as doing things in real life. This means connecting what you want to do with what the computer does in a simple and intuitive way. Now what does that mean exactly?


  • Natural means you perform the input using comfortable and relaxed body movements: you are at ease, and your body rests in a natural posture;

  • Intuitive means you perform the input using familiar and common methods. An intuitive gesture binds the same functionality to the same gesture on any device (see the sketch after this list).
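
As a rough illustration of "same gesture, same functionality", here is a small hypothetical example (not an actual wristband API); the device names and gestures are placeholders:

```python
# Illustrative sketch: an intuitive gesture maps to the same function
# on every device, so the user never has to relearn the mapping.
# Gestures and device names are hypothetical examples.

GESTURE_TO_FUNCTION = {
    "pinch": "select",
    "swipe_left": "back",
    "double_tap": "open",
}

def handle_gesture(device: str, gesture: str) -> str:
    """Resolve a gesture to the same function regardless of the device."""
    function = GESTURE_TO_FUNCTION.get(gesture, "ignore")
    return f"{device}: {gesture} -> {function}"

print(handle_gesture("smartwatch", "pinch"))   # smartwatch: pinch -> select
print(handle_gesture("AR headset", "pinch"))   # AR headset: pinch -> select
```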


The evolution of computers into wearable devices has opened up exciting opportunities for designing new ways of interacting, giving rise to fresh and unique methods for engaging with technology. To make the most of these innovations, it is important to create interfaces specifically tailored to support them.


Typically, people lean forward when using a desktop computer, whereas with smartphones they adopt a more relaxed posture. In augmented or virtual reality, however, users need to perform a different set of gestures and movements, and to perform them quickly.

When using these new computing devices, users may run into limitations that degrade the experience. That is why we need to rethink how we interact with wearable computers: instead of making people adjust to the device's limits, we want the device to understand what the user intends to do. Physical controllers, however, complicate the process of turning natural movements into computer commands.


A neural interface removes the need for physical controllers by sensing the way our body interacts with dedicated sensors, making the experience feel natural and intuitive. This kind of interface lets you control things instantly, without needing to touch anything; it is like stepping into a different world. As we create new types of computing devices, we also need new ways to control them: these devices call for fresh interfaces that let you issue commands faster and through different methods.


A Human Interface Device (HID) commonly falls into one of two categories: pointing devices and character devices. Pointing devices indicate where you want something to happen, such as moving a cursor on a screen, while character devices are used for entering text. Newer technologies such as finger tracking can follow the position of fingers and hands, both in 3D space and for tapping on targets, and other technologies such as voice and computer vision can also interpret gestures.
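As a rough illustration, the sketch below (our own example, not an actual HID specification or API) models these input categories as simple Python data classes:

```python
# Illustrative sketch: the two classic HID categories (pointing vs. character
# input) plus newer gesture-style input, modeled as plain data classes.
# These types are hypothetical, not a real HID API.

from dataclasses import dataclass

@dataclass
class PointerEvent:           # pointing device: where something should happen
    x: float
    y: float
    action: str               # e.g. "move", "click"

@dataclass
class CharacterEvent:         # character device: text entry
    char: str

@dataclass
class GestureEvent:           # newer input: finger tracking, voice, computer vision
    name: str                 # e.g. "pinch", "air_tap"
    position_3d: tuple        # (x, y, z) in space, when available

events = [PointerEvent(0.4, 0.7, "click"), CharacterEvent("a"), GestureEvent("pinch", (0.1, 0.2, 0.5))]
for e in events:
    print(type(e).__name__, e)
```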


Think about using a computer like driving a car. Some interactions demand a lot of thought and careful movement, such as in-line text editing, much like driving through a busy parking lot; both require a lean-forward posture and a good degree of focus. Other interactions are easier, like browsing through icons, which can be compared to driving on simple roads. Finally, there are the really easy interactions, like selecting big icons with hand gestures, which is like driving on a highway. Different interactions demand different amounts of thinking and effort, just like different driving situations.

In the context of Fitts's law, the spatial posture demands the lowest cognitive load compared with its lean-forward and lean-back counterparts. GUI (Graphical User Interface) design considerations should therefore favor decreasing cognitive load: input should be simple, with a low index of difficulty.
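For reference, Fitts's law predicts movement time from an index of difficulty that grows with target distance and shrinks with target width. Below is a small worked example using the Shannon formulation common in HCI; the timing constants are illustrative placeholders, not measured values:

```python
# Worked example of Fitts's law (Shannon formulation, as commonly used in HCI):
# index of difficulty ID = log2(D / W + 1), where D is the distance to the target
# and W is the target width; predicted movement time MT = a + b * ID.
# The a and b constants below are illustrative placeholders, not measured values.

from math import log2

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts's index of difficulty, in bits."""
    return log2(distance / width + 1)

def movement_time(distance: float, width: float, a: float = 0.1, b: float = 0.15) -> float:
    """Predicted movement time in seconds, for illustrative constants a and b."""
    return a + b * index_of_difficulty(distance, width)

# A big, nearby icon is easier (lower ID) than a small, distant one.
print(index_of_difficulty(distance=10, width=5))   # ~1.58 bits
print(index_of_difficulty(distance=30, width=1))   # ~4.95 bits
```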


In our upcoming blog post, we will delve into a fresh perspective on human-computer interaction (HCI) centered on user experience, and present a new six-level HCI taxonomy framework. This framework introduces novel ways of categorizing interactions, from handheld to touchless, and explores the evolving landscape of neural input devices. We will journey through these levels, examining how movement intent translates into digital commands, and look at emerging technologies such as brain-computer interfaces and AI deep learning. Join us as we uncover the future of HCI, bridging the gap between human intention and digital interaction.


*All figures shown in this blog are taken from our white paper, available for download here.
