The reason we get dizzy while playing VR is a conflict between the body's systems of spatial perception. We humans are equipped with three spatial-perception devices.
Our first spatial-perception device – Stereo vision
We can see in 3D because we have two eyes located on either side of our face. The images our two eyes see are slightly different, a phenomenon called parallax. After interpreting the two images, our brain rotates the eyes so that their lines of sight converge on the thing we are focusing on. This converging of the eyes allows us to perceive distance, and it is our most important spatial-perception device.
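The article doesn't put numbers on this, but the textbook pinhole-stereo relation makes the idea concrete: the nearer a point, the larger the shift (disparity) between what the two eyes see. A minimal sketch, with purely illustrative values:

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Classic pinhole-stereo relation: depth = baseline * focal / disparity."""
    return baseline_m * focal_px / disparity_px

# With a ~6.5 cm eye separation and a 1000 px focal length, a point that
# shifts 10 px between the two views sits around 6.5 m away; a 100 px shift
# puts it around 0.65 m away -- closer objects produce larger parallax.
print(depth_from_disparity(0.065, 1000, 10))    # ≈ 6.5 m
print(depth_from_disparity(0.065, 1000, 100))   # ≈ 0.65 m
```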
Our second spatial-perception device – The focusing of the eye
Our second spatial-perception device is the eye's ability to focus on things. The structures of the eye and the camera are similar. For a camera to focus on an object, it shifts the lens back and forth to change the distance between the lens and the sensor, bringing the main subject you want to shoot into sharp focus and producing a clear image. Objects not brought into focus appear blurry in the photo to varying degrees.
The optical principles of the eye and the camera are the same. The crystalline lens in the eye plays the role of the camera lens. Unlike the camera lens, which has to change its position to focus, the crystalline lens changes its thickness directly to adjust its focal length and focus light on the retina. The closer the object, the more strongly the light must be refracted, so the ciliary muscle works harder and the crystalline lens becomes thicker. The farther the object, the less the light needs to be refracted, so the ciliary muscle relaxes and the lens becomes thinner. Humans can therefore sense how far away an object in focus is from how hard the ciliary muscle is working.
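Put in terms of the thin-lens equation, the eye keeps the image distance (lens to retina) fixed and varies the focal length instead. A rough sketch, assuming a textbook ~17 mm image distance (an illustrative figure, not from the article):

```python
def required_focal_length(object_dist_m, image_dist_m=0.017):
    """Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for f.

    The eye cannot move its lens away from the retina, so to focus nearer
    objects it must shorten f -- i.e. the ciliary muscle thickens the lens.
    """
    return 1.0 / (1.0 / object_dist_m + 1.0 / image_dist_m)

# A far object (10 m) needs f ≈ 17.0 mm; a near one (0.25 m) needs f ≈ 15.9 mm.
print(required_focal_length(10.0) * 1000)   # ≈ 16.97
print(required_focal_length(0.25) * 1000)   # ≈ 15.92
```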
Our third spatial-perception device – The vestibular system
Our third spatial-perception device is the set of semicircular canals and otolith organs in the inner ear. The semicircular canals and otoliths work like angular accelerometers (equivalent to gyroscopes after integration) and linear accelerometers, respectively. That is exactly how mobile phones perceive space – they combine gyroscopes and linear accelerometers into an inertial navigation device (IMU/INS). Humans have one such set of sensors in each ear, so we are probably better at perceiving spatial motion than mobile phones are.
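As a toy illustration of what "after integration" means here: an IMU recovers orientation by integrating angular rate once, and position by integrating linear acceleration twice. A minimal 1-D sketch, ignoring gravity and drift; all names and numbers are made up for illustration:

```python
def dead_reckon(samples, dt):
    """Toy 1-D inertial navigation from (angular rate, linear acceleration)
    samples. Real IMUs drift quickly, which is why phones fuse them with
    other sensors."""
    heading = velocity = position = 0.0
    for gyro_rate, accel in samples:        # rad/s, m/s^2
        heading  += gyro_rate * dt          # gyro: one integration
        velocity += accel * dt              # accelerometer: first integration
        position += velocity * dt           # ...and a second one
    return heading, velocity, position

# One second of turning at 0.5 rad/s while accelerating at 1 m/s^2, at 100 Hz.
print(dead_reckon([(0.5, 1.0)] * 100, 0.01))  # ≈ (0.5 rad, 1.0 m/s, 0.5 m)
```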
The challenges VR faces
Our current VR devices use the screen image in front of us to fool these spatial-perception devices. The headset shows the right eye and the left eye slightly different images, recreating parallax. As a result, our first spatial-perception device is tricked, and we feel immersed in the virtual world.
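Concretely, a VR renderer draws the scene twice, from two camera positions offset by half the interpupillary distance (IPD) to either side of the head. A minimal sketch; the 64 mm IPD is a common default used here only as an illustration:

```python
import numpy as np

def eye_positions(head_pos, right_dir, ipd_m=0.064):
    """Two render cameras, shifted half the IPD left and right of the head.
    That lateral offset between the two rendered images is what recreates
    parallax on a flat screen."""
    half = ipd_m / 2.0
    head_pos = np.asarray(head_pos, dtype=float)
    right_dir = np.asarray(right_dir, dtype=float)
    return head_pos - half * right_dir, head_pos + half * right_dir

left, right = eye_positions([0.0, 1.7, 0.0], [1.0, 0.0, 0.0])
print(left, right)   # [-0.032  1.7  0. ] [ 0.032  1.7  0. ]
```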
However, our second device is not tricked. The eyes know from the start that the ciliary muscles have been working hard to keep them focused on the LCD screen sitting a few centimeters in front of us. The accelerometers in the inner ears also know very well that we have not moved; our vestibular system would certainly feel it if we had.
When our organs all function normally, our three spatial-perception devices give us consistent answers, and the extra perceptual information lets the brain fill in missing parts and correct mistakes. For instance, our brain cancels out vibration (much like the image stabilization in cameras) and ignores fast-moving objects in the foreground so that we can see the subject clearly. When the three devices cannot give a consistent answer, the brain starts to hesitate and suspects that something is wrong. Here our ape-brain instincts kick in: conflicting senses are taken as a sign that the mushrooms we just ate were poisonous and need to be spat out. That is why dizziness and vomiting occur.
At present we can only use the parallax principle to fool the first spatial-perception device. Are there ways to trick the other two?
How to trick the focusing system of the eye?
We can use “light field” technology to trick the second spatial-perception device. Current screens can only emit light evenly in all directions and cannot precisely control the direction of the emitted light. If the direction of light could be controlled, our eyes would be fooled into believing that the light comes from far away (parallel light) or from somewhere close (diverging light), and our second spatial-perception device would be fooled as well. However, the light-field market is a battleground for virtual-reality hardware manufacturers, and no mature head-mounted display has reached the market yet. Even if such hardware appears, it will need to be supported by corresponding software, which will be one of Panosensing’s future focuses.
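The core idea can be sketched as simple geometry: if each point on the panel could steer its light, it would emit along the ray from the desired virtual point through that panel position, so the bundle reaching the pupil diverges exactly as if it came from that depth (or arrives parallel for a point at infinity). A toy illustration with made-up coordinates, not a description of any real light-field display:

```python
import numpy as np

def emitted_ray_direction(pixel_x_m, virtual_point, panel_z=0.0):
    """Direction a steerable pixel would emit to mimic light from a virtual
    point behind the panel: simply the ray from that point through the pixel."""
    pixel = np.array([pixel_x_m, 0.0, panel_z])
    d = pixel - np.asarray(virtual_point, dtype=float)
    return d / np.linalg.norm(d)

# A virtual point 2 m behind the panel: neighboring pixels emit in slightly
# different directions (diverging light). For a point at infinity they would
# all be parallel, and the eye would relax its focus accordingly.
for x in (-0.01, 0.0, 0.01):
    print(x, emitted_ray_direction(x, (0.0, 0.0, -2.0)))
```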
The third device, the vestibular system, is more complicated. The otolith organs and the semicircular canals use hair cells (cilia) in the inner ear to sense the movement of fluid or of the otoliths themselves. Without actually moving the body, we could only trick the brain by creating an illusion of movement through the nerves directly. I believe no good solution will be available until brain-computer interfaces mature, which is why I think we must find another way to overcome VR dizziness. Tricking the third device cannot be done by raising the screen refresh rate, improving the resolution, or increasing the lens's refractive index and field of view, as other startups and venture capitalists have suggested.
We need new technology to solve the root of the problem regarding VR-induced dizziness.
Here’s a piece of counter-evidence for reference: the helmet used by U.S. AH-64 Apache helicopter pilots is an MR device that can instantly project enemy aircraft information onto its visor. The helmet, costing a million dollars apiece, uses cathode ray tube (CRT) technology and has a high refresh rate. Yet pilots still cannot overcome dizziness with these helmets; the solution has been to select pilots who can operate the helmet-mounted display (HMD) without feeling dizzy. So there is still a long way to go before we can make a device that doesn’t make us dizzy and costs only a few hundred dollars.
Here’s a life hack for tricking the inner ear: put people in a motion rig. The device shakes them up and down and all around in concert with what happens in the virtual world, so that their physical movement matches the motion in the content. This works just like a 4D movie theatre or a robotic motion chair, and it is probably the closest we can get to tricking the inner ear in the short term. However, a device that turns you upside down and spins you around will definitely make you dizzy too.
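Real motion platforms cannot reproduce sustained acceleration, so in practice the content's motion is scaled, high-pass filtered, and clamped to the rig's limits, leaving only brief onset cues. This technique is usually called motion cueing (a washout filter); the article doesn't describe it, and the sketch below is only a simplified illustration with made-up parameters:

```python
def platform_command(virtual_accels, scale=0.3, limit=2.0, alpha=0.9):
    """Very simplified motion cueing: scale virtual acceleration down,
    high-pass filter it (the rig has little travel, so only onsets can be
    reproduced), and clamp to what the hardware can actually do."""
    commands, prev_in, prev_out = [], 0.0, 0.0
    for a in virtual_accels:
        x = a * scale
        out = alpha * (prev_out + x - prev_in)   # one-pole high-pass filter
        prev_in, prev_out = x, out
        commands.append(max(-limit, min(limit, out)))
    return commands

# A sustained 5 m/s^2 push in the game becomes a brief, decaying onset cue.
print(platform_command([5.0] * 10))
```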
Since you end up dizzy either way, which option would you choose?