Guy Wagner

The Missing Piece of HCI: Transcending from an Interface to a Relationship

A presentation by Guy Wagner, CSO and President of Wearable Devices.



Hi, I am Guy Wagner, Chief Scientist and co-founder of Wearable Devices, a Nasdaq-listed company. We build neural interfaces for daily use, enabling touchless interaction with any digital device, whether it's a PC, a smartwatch, a smart TV, an XR headset, or an AI agent.

 


 

Over the past 10 years we have been introduced to several devices that claimed to be the successor of the smartphone.

 

First came the smartwatch, which attempted to reduce how often we look at our smartphones. However, its small screen and limited interaction capabilities confined it to quick, on-the-go interactions, such as receiving and answering short notifications and monitoring health and sports activities.

 

 

Second came Virtual Reality, Augmented Reality, and Extended Reality. These technologies promise an always-available, enormous screen that we will use for any task: from notifications, through playing immersive games, to letting us work anywhere as if we were in front of our home-office desktop. However, current XR devices are heavy, cumbersome, and costly, and you can't wear them for more than 30 minutes without getting a headache. Even as an enormous screen for simple content consumption (for example, watching a movie, with no interaction with an interface), they offer a subpar experience. The Apple Vision Pro and Quest 3 deliver an amazing gaming experience and an enormous wow effect the first time you use them, but these devices are not for all-day use and cannot replace the smartphone.

 

Lately we have seen a new claimant to the throne: the AI agent. By conversing with it directly, we will be able to manage all our tasks without the need for a screen or a keyboard. AI agents have already transformed the way most people get informed, work, and make decisions, making individuals and companies 10 times more productive.

However, the AI agent has the same issue: a limited interface. In this case, the limitation is a single modality of interaction: voice.


 

As technology changes, the interface should change and adapt accordingly. Failing to do so will limit the new technology to early adopters or a few critical niche applications.

 

So what are the properties of a good interface, and what should we understand in order to build one?

 

The primary role of an interface is to manifest our intentions as actions in the world and to let us measure the effect of those actions, closing a feedback loop. But to do that, we first need to perceive the world and understand what we want to achieve, and that is exactly where the problem starts.

 


So to begin, I would like to argue that we are living in a simulated reality. To try to prove that, I am going to quote Professor Anil Seth, a professor of cognitive and computational neuroscience specializing in consciousness.

According to Prof. Seth, the brain is a kind of prediction machine; what we see, hear, and feel is the brain's best guess about the causes of sensory inputs.


 

So the brain is actually closed inside the skull and disconnected from the world, a bit like someone wearing virtual reality glasses, and its only connection to the outside world is a confused, noisy stream of sensory input that is only loosely and indirectly related to what is actually out there.


 

So how does the brain deal with this?

It basically holds a model of the world that contains assumptions and expectations, constantly updated according to the input it receives. Incoming signals are interpreted in light of these models and expectations, so that essentially everything we see, experience, or feel is the brain's interpretation of that input.


Let's look at an example.

In this picture we see Adelson's checkerboard. Anyone who sees this board will tell you that square B is brighter than square A.


And I suppose you see it that way too, but in truth they are both exactly the same color.

So why do we see it that way?

Because the brain knows that square B is in a shaded area, and that things in shadow appear darker than they really are, it compensates: even though the sensory input reports the same color for both squares, the brain produces a different reality in which B looks brighter. So we do not see reality; we see its interpretation.


And now, when you look at the original image again, square B still looks brighter. Knowing that the squares are the same color did not change anything in your perception.


Let’s listen to an auditory example.


 

This is called the McGurk effect. Listen carefully to the sound while looking at the speaker. What did you hear?


Let's play it again, but this time close your eyes. What do you hear?

When looking at the video you probably heard "Fa", but when you closed your eyes, you probably heard "Ba".


So what happened here?

In this video's soundtrack, the speaker was recorded saying "Ba", but in the visual track his lips are saying "Fa". When you watch the video, the visual cues affect the brain's processing of the data, so you hear "Fa". When you close your eyes, you only hear the soundtrack, so you hear "Ba".


What this shows us is that we don't hear with our ears, nor do we see with our eyes; we hear and see with our brain.

 

Another nice example is called the body ownership illusion, in which we can make the brain believe that an inanimate object is actually a part of the body.

In this example, the subject is made to experience and believe that a fake hand is his real hand.



 

This is done by coordinating the visual input (the eye sees a brush stroking the fake hand) with the tactile input (the real hand, hidden from view, feels the same brush strokes). The coordination between touch and sight meets the brain's model expectations, and the brain is therefore willing to believe that the fake hand is the subject's real hand.

Such is the extent of this belief that, when the fake hand is threatened with pain, the subject jumps in panic.

 

So what we have seen so far is that we live in a kind of virtual reality; what we see is not reality itself. And if we know how to coordinate what our senses perceive with the expectations of the brain's model, we can create a very realistic experience.


 

The grotesque and strange character you see in this picture is called a homunculus.

The size of each organ of the homunculus represents the relative area of the brain required to manage that organ of the body. What is immediately noticeable is the homunculus's huge hands. The reason for this is clear: our hands are our main means of acting in the world. The hands have the highest concentration of sensory nerves in the body, and their many joints give the hand many kinematic degrees of freedom, all of which requires a lot of processing power from the brain.

 

In fact, monitoring the hand means monitoring the user's intentions, expectations, and reactions to the inputs the mind receives from the outside world. The hand is a hub of vital information about our activities and about our mental, emotional, and physiological state.


 

What does this mean?

Imagine you are holding a ball in your hand.

When you hold a ball, your hand curves around it and your fingers conform to its shape. If the ball is soft, you expect that even minimal force from the hand will deform it accordingly.

But if the ball is hard, even a large force applied to it will not change its shape.


 


So basically, if I know how much force my fingers exert, what the position of my fingers is, and what the position of my hand is, I can know what the brain expects at that moment from the object the hand is holding. We can then adjust the visual, auditory, or any other sensory input and make the experience very realistic. Moreover, monitoring the tension in the hand muscles and the skin conductivity of the hand can teach us about a person's emotional and mental state, as these are controlled by the sympathetic nervous system.
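To make this concrete, here is a small illustrative sketch in Python. Everything in it, the names, fields, and formula, is hypothetical and not product code; it only shows the idea of turning grip force into an expected object deformation and scaling the rendered feedback to match the brain's expectation.

```python
from dataclasses import dataclass

@dataclass
class HandState:
    # Hypothetical, simplified snapshot of what a hand-sensing wearable could report.
    grip_force: float   # normalized 0.0 (relaxed) .. 1.0 (maximum squeeze)
    finger_curl: float  # normalized 0.0 (open hand) .. 1.0 (closed fist)

def expected_deformation(state: HandState, object_stiffness: float) -> float:
    """Estimate how much the brain expects the held object to deform.

    A soft object (low stiffness) should visibly deform under light force;
    a hard object (high stiffness) should barely change shape even under
    strong force. The formula is only an illustrative placeholder.
    """
    return state.grip_force * (1.0 - object_stiffness)

def render_feedback(state: HandState, object_stiffness: float) -> dict:
    # Scale visual and haptic cues so they match the expected deformation,
    # keeping the experience consistent with the brain's internal model.
    deformation = expected_deformation(state, object_stiffness)
    return {
        "visual_squash": deformation,           # how much the virtual ball flattens
        "haptic_intensity": 1.0 - deformation,  # harder objects push back more
    }

# Example: squeezing a soft ball (stiffness 0.2) with moderate force.
print(render_feedback(HandState(grip_force=0.6, finger_curl=0.8), object_stiffness=0.2))
```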

 

 

So far we have established two points:

1. To make an experience believable, we need to supply it with input that conforms to the brain's expectations.

2. The hand is a hub of information about our emotions and cognitive state, and can teach us about the brain's expectations.

  

At Wearable Devices we develop Mudra, a wristband that monitors neural activity at the wrist using our proprietary SNC technology. It lets us monitor the user's intention to move the hand and fingers, the amount of force applied to objects, which fingers are moved, and an estimate of hand position. So it lets us adjust an experience to match the brain's expectations! But Mudra is much more than that: it is actually the first commercially available, non-invasive, wrist-worn Brain-Computer Interface in the world!
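To give a feel for the kind of data such a wristband exposes, here is a purely hypothetical Python sketch. The structure and field names are invented for this talk and do not reflect the actual Mudra SDK.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WristReading:
    """Hypothetical snapshot of wrist-sensed neural and motion data."""
    intends_to_move: bool         # decoded intention to move, before visible motion
    active_finger: Optional[str]  # e.g. "index", "thumb", or None
    fingertip_force: float        # normalized 0.0 .. 1.0
    wrist_orientation: tuple      # (roll, pitch, yaw) in degrees, e.g. from an IMU

def describe(reading: WristReading) -> str:
    # Turn a raw reading into a human-readable description of the user's state.
    if not reading.intends_to_move:
        return "hand at rest"
    return f"{reading.active_finger or 'hand'} moving with force {reading.fingertip_force:.2f}"

print(describe(WristReading(True, "index", 0.4, (10.0, -5.0, 90.0))))
```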

 

You should appreciate that a BCI is usually a device implanted directly into a patient's skull, in physical contact with specific brain cells. Brain implants are currently risky and cannot be widely adopted by healthy individuals, so a wrist-worn, non-invasive BCI is the next best thing…

 

 

It is interesting that, in order to receive information from the world, the brain uses many senses at high speed; our visual, auditory, and tactile senses bring information directly to the brain to process. Yet in order to manifest our intentions in the world, those intentions need to be translated into muscle contractions, which in turn cause action in the world. Controlling the muscles for any activity, from fine motor skills such as drawing or soldering to grosser motor skills such as lifting objects, requires the coordination of many brain functions and motor units. By directly monitoring the nervous system, we can bypass the muscular system and gain a direct path from intention to action.

 

What this means in practice is that, when connecting to digital technologies, we don't need the brain to always do all the heavy lifting of coordinating muscles; we can directly translate the tiniest wish to move a muscle into an action in the digital world.
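In code, this idea could look like a thin dispatcher that maps decoded intention events straight to digital actions. Again, this is a hypothetical sketch; the intent names and handlers are invented for illustration, not taken from any real firmware or SDK.

```python
# Hypothetical sketch: dispatching decoded "intent" events directly to
# digital actions, without waiting for a full physical gesture to complete.
from typing import Callable, Dict

ActionHandler = Callable[[], None]

# Map decoded intentions to digital actions. A real system would define
# its own event vocabulary; these names are placeholders.
ACTIONS: Dict[str, ActionHandler] = {
    "pinch_intent":      lambda: print("select item under cursor"),
    "double_tap_intent": lambda: print("open AI assistant"),
    "release_intent":    lambda: print("drop dragged object"),
}

def dispatch(intent: str) -> None:
    # Ignore intents we have no mapping for; otherwise run the handler.
    handler = ACTIONS.get(intent)
    if handler is not None:
        handler()

# The tiniest wish to move a finger becomes an action in the digital world.
dispatch("pinch_intent")
```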



 

The Mudra Band weighs only 36 grams and can monitor the hand in any position, in front of or behind the body, indoors or outdoors, without the need for heavy and expensive cameras.


 

That’s great for XR, but what about the upcoming AI Agents we discussed?

 

Well, our expectations of AI agents are totally different from those of previous platforms, where we sent commands to a device, whether in front of us or on our body. We expect AI agents to act as if they have agency. We expect to have a relationship with AI.

 

Currently, our interaction with devices is direct: we command them by typing, clicking, moving, or talking.

But 90% of human communication is indirect, non-verbal, multimodal communication. By subconsciously monitoring our conversation partner's voice nuances and body language, we can, for example, recognize how they feel and whether they are attentive to us or preoccupied.

 

For communication with an AI agent to transcend into a relationship, we need to supply it with this kind of information, which can be gathered by a wrist-worn wearable such as the Mudra Band.


Our AI agent will be able to help us find our best time to work, study, or exercise, understand our emotional response to its suggestions, and update them accordingly.

It could also be useful in mentoring us in various activities, such as improving our aim in ball games by monitoring both our hand position and our stress level, helping us become better at whatever we do.
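As a rough sketch of what "supplying the agent with non-verbal context" could mean in practice, the wearable's readings could simply be folded into the context the agent sees alongside our words. Everything below is hypothetical, including the signal names; a real system would use a structured API rather than plain text.

```python
from dataclasses import dataclass

@dataclass
class NonverbalContext:
    # Hypothetical wearable-derived signals accompanying each user message.
    stress_level: float    # 0.0 calm .. 1.0 highly stressed (e.g. from skin conductivity)
    muscle_tension: float  # 0.0 relaxed .. 1.0 tense
    is_attentive: bool     # rough attention estimate

def build_agent_prompt(user_message: str, ctx: NonverbalContext) -> str:
    """Fold non-verbal context into the text an AI agent receives."""
    return (
        f"User said: {user_message}\n"
        f"Non-verbal context: stress={ctx.stress_level:.1f}, "
        f"tension={ctx.muscle_tension:.1f}, attentive={ctx.is_attentive}\n"
        "Adapt your tone and suggestions to this state."
    )

print(build_agent_prompt("Plan my afternoon.", NonverbalContext(0.7, 0.5, False)))
```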


 

I would like to conclude with a prediction:

Over the last 40 years, computer technology has required our full attention to its interface.

I expect, and am actively working to ensure, that next-generation computer interfaces will be attentive to the human and to their needs. They will set us free from endless screen time and endless sitting in front of the mouse, keyboard, and screen, and will help us focus more on what is important to us in life.

 
