
How to Insert Human Interaction Into the Digital Metaverse?

A presentation at the DigitalFest 2022 conference, March 29, 2022, Tel Aviv


Good evening,


I am Guy Wagner, chief scientist and co-founder of Wearable Devices, where we develop technology for monitoring neural activity at the wrist, which can be used to turn the hand into the standard input device of the Metaverse.


Often, when people are first introduced to the ideas of the metaverse, AR, and VR, they feel uncomfortable and even a little intimidated by the thought of putting on a headset that isolates them from the world, and they struggle to see how that fits with the promise we are being sold that the metaverse is going to connect people.

So in the next few minutes I'm going to show you that we're already living in a virtual reality, and how it is still possible to insert, or create, human interaction within the metaverse.


I'm arguing that we're already living in a virtual reality, and to support that claim I'm going to quote Professor Anil Seth, a professor of cognitive and computational neuroscience who specializes in consciousness.

According to Prof. Seth, the brain is a kind of prediction machine; what we see, hear, and feel is the brain's best guess about the causes of sensory inputs.


So the brain is actually sealed inside the skull and cut off from the world, a bit like someone wearing a virtual reality headset. Its only connection to the outside world is ambiguous sensory input and very noisy signals, whose relation to objects in the outside world, if there even is one, is very indirect.

So how does the brain deal with this?

It holds a model of the world - a set of assumptions and expectations - that it constantly updates according to the input it receives, and it interprets incoming signals in light of that model, so that essentially everything we see, experience, or feel is the brain's interpretation of that input.
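To make that idea a little more concrete, here is the standard textbook way of writing "the brain's best guess about the causes of its sensory inputs" - the notation is my own rough sketch, not a formula from Prof. Seth's talk:

```latex
% Perception framed as Bayesian inference (illustrative sketch only):
%   h = a hypothesized cause in the world,  s = the noisy sensory signal
P(h \mid s) \;\propto\; P(s \mid h)\, P(h)
% P(h)       - the prior: what the brain's model of the world expects
% P(s | h)   - the likelihood: how well the noisy signal fits that hypothesis
% P(h | s)   - the percept: the brain's best guess, updated each time new input arrives
```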


So let's look at an example.

In this picture we see Adelson's checkerboard. Anyone who sees this board will tell you that square B is brighter than square A.

And I suppose you see it that way too, but in fact it is not true at all - both squares are exactly the same color.

So why do we see it that way?

Because the brain knows that square B lies in a shaded area, and that things in shadow appear darker than they really are, it corrects the sensory input - even though the light reaching the eye from square B is identical to the light from square A - and produces a different reality in which B looks brighter. So we do not see reality - we see the brain's interpretation of it.

And now, when you look at the original image again, square B still looks brighter. Knowing that the squares are the same color did not change anything in your perception.
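One way to picture the computation the brain is doing here - just an illustrative sketch with invented numbers, not measurements from the actual image - is that it divides the light reaching the eye by its estimate of the illumination falling on each square:

```python
# Lightness constancy as a toy calculation (numbers are made up for illustration).
luminance_A = 120        # light actually reaching the eye from square A
luminance_B = 120        # identical light reaching the eye from square B
illumination_A = 1.0     # brain's estimate: A sits in direct light
illumination_B = 0.5     # brain's estimate: B sits in the cylinder's shadow

# The brain effectively "discounts the illuminant": surface = luminance / illumination.
perceived_A = luminance_A / illumination_A   # -> 120.0, interpreted as a dark square
perceived_B = luminance_B / illumination_B   # -> 240.0, interpreted as a light square in shadow

print(perceived_A, perceived_B)
```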


Another nice example is called the body ownership illusion, in which we can make the brain believe that an inanimate object is actually part of the body.

In this example, the subject is made to experience and believe that a fake hand is their real hand.

This is done by synchronizing the visual input - the eye sees a brush stroking the fake hand - with the tactile input - the real hand feels a brush stroking it at the same time. Because this agreement between touch and sight matches the brain's model expectations, the brain is willing to believe that the fake hand is the subject's real hand.

Then, when the fake hand is threatened with pain, the subject jumps back in panic.

So what we have seen so far is that we live in a kind of virtual reality, that what we see is not reality, and that if we know how to coordinate what our senses perceive with the expectations of the brain’s model, we can create a very realistic experience.


The grotesque and strange character you see in this picture is called a homunculus.

The size of each body part of the homunculus represents the relative area of the brain required to manage that part of the body. What is immediately noticeable is the homunculus's huge hands. And the reason for this is clear - our hands are our main means of acting in the world. The hands have the highest concentration of sensory nerves in the body, and their many joints give the hand many degrees of freedom, all of which requires a lot of processing power from the brain.

In fact, if I monitor the hand and know what it is doing, I can know what the brain is expecting.


What does that mean?

Imagine you are holding a ball in your hand.

When you hold a ball, your hand curves around it and your fingers take the shape of the ball. If the ball is soft, you expect even minimal force from the hand to deform it accordingly; but if the ball is hard, even a large force applied to it will not affect its shape.

So basically, if I know how much force my fingers exert, what position my fingers are in, and what position my hand is in, I can know what the brain expects at that moment from the object the hand is holding, and I can adjust the visual, auditory, or tactile input to make the experience very realistic.
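As a rough sketch of what "matching the input to the brain's expectation" could look like in code - the structure, names, and numbers here are my own illustration, not an actual engine or SDK API - consider squeezing a virtual ball:

```python
from dataclasses import dataclass

@dataclass
class HandState:
    grip_force: float    # normalized fingertip force, 0.0 (none) to 1.0 (maximal squeeze)
    finger_curl: float   # how closed the fingers are, 0.0 (open hand) to 1.0 (fist)

def render_feedback(hand: HandState, ball_stiffness: float) -> dict:
    """Choose visual and haptic output that matches what the brain expects from the ball."""
    if hand.finger_curl < 0.3:
        # An open hand is not holding anything, so nothing should deform or push back.
        return {"visual_dent_depth": 0.0, "haptic_resistance": 0.0}

    # A soft ball (low stiffness) dents visibly under light force;
    # a hard ball (high stiffness) barely changes shape even under a strong squeeze.
    dent = hand.grip_force / (1.0 + ball_stiffness)
    return {
        "visual_dent_depth": dent,                              # deform the ball mesh by this much
        "haptic_resistance": ball_stiffness * hand.grip_force,  # push back against the fingers
    }

# The same squeeze applied to a foam ball and to a billiard ball:
squeeze = HandState(grip_force=0.4, finger_curl=0.7)
print(render_feedback(squeeze, ball_stiffness=0.2))   # deep dent, almost no resistance
print(render_feedback(squeeze, ball_stiffness=20.0))  # almost no dent, firm resistance
```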


Now you can also understand why classic controllers, like gaming joysticks, are not really suitable for creating a realistic experience in VR.

First of all, the hand holding the controller is static: it cannot change the shape and position of the fingers according to the virtual object it is holding, nor can the controller sense the intensity of the pressure exerted by the hand and fingers.

And even if we use advanced neurotechnologies - for example, trying to create brain-computer interfaces using EEG, which reads the electrical activity of the brain - we will not be able to create a realistic experience like the one produced by monitoring the hand, since we cannot tell what the user intended to do physically, or which sensory input to match to that intention in order to meet the brain's model expectations.


At Wearable Devices we develop Mudra - a wristband that monitors neural activity at the wrist and lets us know how much force the user is exerting, which fingers they are moving, and what their hand position is, so we can adjust the sensory input to match the user's brain expectations within the experience.
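To give a feel for the kind of data such a wristband makes available, here is a hypothetical application loop - written purely for illustration, not the actual Mudra SDK - that maps wrist readings onto feedback decisions:

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass
class WristSample:
    """One reading from a hypothetical neural-input wristband (illustrative only)."""
    fingertip_force: float   # how hard the fingers are pressing, 0.0 to 1.0
    active_finger: str       # e.g. "thumb", "index", "middle"
    wrist_pitch_deg: float   # hand orientation
    wrist_roll_deg: float

def drive_virtual_hand(samples: Iterable[WristSample]) -> None:
    """Turn wrist readings into feedback that matches the brain's expectation of contact."""
    for s in samples:
        if s.fingertip_force > 0.6:
            print(f"{s.active_finger}: firm press -> strong haptic pulse, deep visual contact")
        elif s.fingertip_force > 0.1:
            print(f"{s.active_finger}: light touch -> subtle haptic cue, slight visual contact")
        else:
            print("hand relaxed -> no contact rendered")

# A few made-up samples standing in for a live sensor stream:
drive_virtual_hand([
    WristSample(0.05, "index", 10.0, 0.0),
    WristSample(0.30, "index", 12.0, 1.5),
    WristSample(0.80, "thumb", 15.0, 2.0),
])
```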


So far we have seen how it is possible to trick the brain and make the experience in virtual reality very realistic, but how is that related to human interaction?

So let's imagine for a moment that we are wearing a virtual reality headset and we see our avatar. We already know how to make the brain believe that this avatar is actually part of our body by using the body ownership illusion - for example, when we move our hand and fingers, our avatar moves them too. Now imagine that you want to meet a friend who lives very far away - let's say in Australia - and you decide to meet in the virtual world through your avatars. Your avatar meets your friend's avatar and they shake hands, and you can really feel the warmth of the other hand, its texture, and the degree of pressure on your hand. That can already create a genuinely human experience of connection and presence.


However, there is a much more meaningful human experience that requires connection and presence, and that is mentoring.

For example, if I want to learn a certain skill, like playing the guitar, playing tennis, or drawing, it will be very difficult to do it well through video alone.

To learn to play the guitar, for example, I need to know not only which string to press and with which finger, but also how much pressure to apply to the string and at what angle it is best to place my finger. To learn to play tennis, it is important for me to know how tightly to hold the racket and how hard to hit the ball.

It is very difficult to learn such an activity without getting feedback on the many nuances of how the hand is used. Technology that can monitor and measure those nuances of hand operation, and then match them to appropriate sensory feedback, can enable a very high-quality mentoring experience in the virtual environment.
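As a small sketch of what such feedback could look like for the guitar example - the reference values and thresholds below are invented placeholders, not measurements from a real lesson - the system would compare the learner's hand nuances against a mentor's reference:

```python
def mentor_feedback(string_pressure: float, finger_angle_deg: float,
                    target_pressure: float = 0.5, target_angle_deg: float = 60.0) -> list[str]:
    """Compare measured hand nuances with a mentor's reference values (all numbers illustrative).

    string_pressure is normalized to 0.0 - 1.0; finger_angle_deg is the fingertip angle on the fret.
    """
    notes = []
    if string_pressure < target_pressure - 0.1:
        notes.append("press the string a little harder - otherwise the note will buzz")
    elif string_pressure > target_pressure + 0.1:
        notes.append("ease off the pressure - you are squeezing more than needed")
    if abs(finger_angle_deg - target_angle_deg) > 10.0:
        notes.append(f"tilt your fingertip closer to {target_angle_deg:.0f} degrees")
    return notes or ["good - that matches the reference"]

print(mentor_feedback(string_pressure=0.3, finger_angle_deg=45.0))
```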


So in conclusion, we have seen that even today we already live in a virtual reality - because our experiences are the result of our brain's interpretation and not of objective data and facts about reality - and we have seen that this same interpretive machinery can be used to make experiences in the virtual space feel real, and moreover to create a high level of human presence and connection - like the experience of being mentored.




