In-depth sharing: the past and present of wireless VR

When it comes to VR headsets, you need the PC-tethered kind for a truly good experience, but that cable is a real nuisance. Yet getting rid of the wire on a VR headset is genuinely difficult. If you want to play on a wireless headset today, there are three options: a phone-based headset, a streaming phone headset, or a backpack PC with a cabled headset.


Although these three approaches have each taken a step toward wireless virtual reality, their drawbacks pile up just as fast. So what comes next? What are the more definitive solutions of the future? What key technologies are needed to build a wireless VR headset? Today, VR Value Theory shares the past, present, and future of wireless VR. (A reminder: there is a lot of technical terminology ahead.)

On the night of Singles' Day, HTC suddenly released a wireless virtual reality headset. That's right: no more of that annoying cable. For the technically minded, this was genuinely exciting, as if the landline had evolved into the mobile phone overnight. Ever since the Oculus Rift DK1 detonated this virtual reality revolution, the spatial requirements of the VR experience have kept being rewritten:

At first we could only sit at a desk while sensors handled rotation in three directions: turning the head left and right, nodding up and down, and tilting side to side (known as "seated three degrees of freedom", as with the Gear VR). Then we experienced sensors that could also handle translation along the up-down, front-back, and left-right axes in addition to rotation (known as "seated six degrees of freedom", as with the Oculus Rift DK2). From there came standing interaction, room-scale interaction, and warehouse-scale multi-user interaction, and every upgrade in space has brought more gameplay and a deeper sense of immersion.

As the space requirements keep growing, VR input devices are changing by the day, yet the headsets themselves have developed slowly. As Palmer Luckey has pointed out at CES, the biggest obstacle to the interactive experiences described above is the cable. So is it really so difficult to cut the wires off a VR headset? The answer is: yes, it is hard.

That is because VR headsets place extreme demands on three technical indicators. First, latency. A qualified VR experience requires motion-to-photon latency (the time from when the user starts to move to when the corresponding image appears on the screen) to stay within 20 ms; anything beyond that easily causes dizziness. Second, resolution. The mainstream resolution of VR headsets today is 2.5K (2560*1440), which is not enough at VR fields of view: an obvious screen-door effect appears (with too few pixels, thin lines shimmer under real-time rendering and high-contrast edges flicker), destroying immersion. Third, rendering power. Binocular rendering for a headset consumes roughly 70% more GPU than monocular rendering, and that extra load nearly halves the picture quality a given GPU can deliver.
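To make the 20 ms budget concrete, here is a minimal sketch that sums per-stage latencies along the sensor-to-screen pipeline. The stage names and timings are hypothetical round numbers for illustration, not measurements of any product.

```python
# Illustrative motion-to-photon (MTP) budget check.
# All stage timings below are hypothetical, not measured values.
MTP_LIMIT_MS = 20.0  # beyond ~20 ms, dizziness risk rises sharply

def mtp_latency_ms(stages):
    """Sum per-stage latencies (ms) along the sensor-to-screen path."""
    return sum(stages.values())

wired_headset = {
    "imu_sampling": 1.0,       # 1000 Hz IMU -> ~1 ms worst case
    "usb_transfer": 1.0,
    "render": 11.1,            # one frame at 90 fps
    "display_scanout": 5.0,
}

total = mtp_latency_ms(wired_headset)
print(f"total MTP: {total:.1f} ms, within budget: {total <= MTP_LIMIT_MS}")
```

Any wireless link inserted into this chain has to fit inside whatever margin is left under 20 ms, which is why the transmission schemes discussed below are so constrained.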

Despite these obstacles, many solutions have been proposed, and more are on the way. Below I sort them out, from concept to technology, to outline the past, present, and future of wireless virtual reality.

A virtual reality headset

The Past: what solutions exist for wireless virtual reality?

In this section we look at the wireless headset solutions that have already been proposed: phone-based headsets, streaming phone headsets, and backpack PCs with cabled headsets. Each has its drawbacks, but each has also taken a first step on the road to wireless virtual reality.

First: phone-based headsets.

The earliest wireless virtual reality headsets we encountered were built around mobile phones. From Cardboard, to the best-in-class Gear VR, to the various all-in-one devices alongside them, they are naturally wireless because they ride on a mobile phone chipset. Their advantage is low cost. The disadvantages are the lack of native spatial positioning support and GPU performance too low for complex scenes or high-quality rendering.

Constrained by current mobile GPU performance, the phone headset's best use case is panoramic video; it struggles to deliver a high-quality immersive interactive experience at standing scale or above.

The earliest maker of phone headsets in China was Baofeng Mojing (the "Storm Mirror"). In an era when VR headsets were in seriously short supply, Baofeng Mojing became China's introduction to wireless virtual reality.

Second: streaming phone headsets.

A streaming phone headset captures, frame by frame, the video output of a virtual reality application running on a PC; the captured frames are encoded, compressed, and transmitted over wifi to a phone headset or all-in-one, which decodes them and outputs them to the screen.

The advantage of streaming is that the PC's powerful graphics card can render complex scenes at high quality. But the shortcomings are just as obvious: video encoding and decoding are quite time-consuming, and stacked on wifi transmission they produce a long motion-to-photon delay that causes serious dizziness; moreover, a high-quality rendered image loses noticeable quality after video compression.

The earliest streaming phone headset product was Trinus VR. Trinus VR used the CPU for video capture and encoding, with a transmission delay as high as 100 ms, but it opened the door to a genuinely wireless era.

With the arrival of the NVIDIA Video Codec SDK, we can call the GPU's NVENC to capture and hardware-encode an application's video output directly. Combined with hardware decoding on the phone or on the Tegra platform, the total time for video encoding, compression, and decoding can be cut to within 20 ms.
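As a rough sketch of why hardware codecs matter, the comparison below sums per-stage timings for a software (CPU) pipeline versus an NVENC-style hardware pipeline. The individual stage numbers are illustrative stand-ins chosen to land near the ~100 ms and sub-20 ms figures cited in the text; they are not benchmarks.

```python
def pipeline_ms(stages):
    """Total added latency (ms) of a capture-encode-transmit-decode chain."""
    return sum(stages.values())

# Hypothetical stage timings (ms), not measurements.
software_path = {"cpu_capture": 10, "cpu_encode": 50, "wifi": 10, "cpu_decode": 30}
hardware_path = {"nvenc_capture_encode": 8, "wifi": 5, "hw_decode": 6}

print("software path:", pipeline_ms(software_path), "ms")  # Trinus-era territory
print("hardware path:", pipeline_ms(hardware_path), "ms")  # codec chain under 20 ms
```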

NVIDIA first applied this technology to multi-screen gaming on the Shield handheld, but it was subsequently used in streaming phone headsets, bringing their motion-to-photon delay down to less than 40 ms.

The first to propose this solution was Visus VR, which introduced the first PC-driven wireless VR headset, VISUS, at the beginning of this year. Computation runs on the PC and the display is a smartphone screen, with NVIDIA GameStream technology wirelessly carrying the PC's game image to the phone. Although the delay is shortened to 40 ms, there is still a strong sense of dizziness.

Third: backpack PC plus cabled headset.

Despite the great lag in wireless headset development, the desire for wide-area interaction gave birth to a compromise: a backpack PC with a cabled headset. The first to systematize this into a commercial solution was Australia's Zero Latency virtual reality park.

Zero Latency's goal is warehouse-scale multiplayer VR, with several players entering the same game scene simultaneously. That imposes several requirements. First, wireless: with a venue of 400 square meters, players must enter the scene untethered, since a cable can hardly support movement over such a range. Second, latency: players will be in the game for a long time, so no obvious vertigo can be tolerated during play, and delay must be held to the level of mainstream headsets, within 20 ms. Third, the picture: to create a convincing game atmosphere, host-class GPU rendering is required. Finally, pose computation: since players roam wirelessly, all six degrees of freedom must be computed. Position cannot rely on inertial sensors, and with a cheap optical solution (60 fps refresh) the position latency is at least 16.6 ms; adding wireless transmission pushes it past 18 ms. The orientation computation, which matters more for dizziness, must therefore be done locally with a high-sample-rate (1000 Hz) IMU.
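The latency arithmetic above follows directly from sensor sample rates; a tiny helper makes it explicit. The 60 fps and 1000 Hz rates are the ones quoted in the text; the radio-hop delay is an illustrative assumption.

```python
def sample_interval_ms(rate_hz):
    """Worst-case age (ms) of the freshest sample from a sensor at rate_hz."""
    return 1000.0 / rate_hz

optical_ms = sample_interval_ms(60)    # cheap optical tracker at 60 fps
imu_ms = sample_interval_ms(1000)      # on-board IMU at 1000 Hz
radio_ms = 1.5                         # assumed wireless hop, illustrative

print(f"optical position: {optical_ms:.1f} ms")
print(f"optical + radio:  {optical_ms + radio_ms:.1f} ms")  # past 18 ms
print(f"local IMU:        {imu_ms:.1f} ms")
```

This is why position can tolerate the slower optical path while orientation must stay on the local IMU.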

Zero Latency built a backpack system from an Alienware Alpha and a mobile power pack. The backpack connects to the server over wifi, the graphics card is roughly GTX 970M class, and an Oculus Rift DK2 serves as the headset, cabled to the Alpha, achieving the four goals above.

Strictly speaking, this backpack system cannot be called a wireless headset, but it achieves what a wireless headset would. The disadvantages are that the backpack is awkward to wear and battery life is short.

Zero Latency's solution profoundly influenced the virtual reality theme parks that followed, the most famous being The Void in Salt Lake City. Building on the backpack-plus-cabled-headset formula, The Void added force feedback and combined it with OptiTrack's optical motion capture system, providing more accurate positioning and more varied interaction with virtual objects, making wide-range multi-user systems far more interesting. At this point the "wireless virtual reality" recipe of backpack + cabled headset + wifi had settled into the preferred solution for warehouse-scale VR interaction and VR theme parks. Many companies in China have launched wireless backpack systems as well.

The Present: how is a wireless virtual reality headset actually achieved?

There are two ways to realize our ideal wireless virtual reality headset: first, transmit the rendered video signal wirelessly; second, perform high-performance rendering inside the headset itself. The development of 60GHz millimeter-wave communication has helped make this possible. Let us take them one at a time:

First, wireless transmission of the rendered video signal is the most direct approach.

As early as the streaming-phone era, people were using this method: render at high performance on a remote PC and send the video result to the headset over wifi.

The biggest drawback of this method is that the video signal must be compressed. The source video is at least 1920*1080@60fps, which uncompressed consumes about 3 Gbps, while the fastest 802.11ac offers only 1.3 Gbps of bandwidth, so the data cannot travel over wifi without being encoded and compressed first.
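The ~3 Gbps figure is simple arithmetic on the pixel stream; the sketch below assumes 24-bit RGB, which is the usual uncompressed format.

```python
def raw_bitrate_gbps(width, height, fps, bits_per_pixel=24):
    """Uncompressed video bitrate in Gbps (decimal giga)."""
    return width * height * fps * bits_per_pixel / 1e9

raw = raw_bitrate_gbps(1920, 1080, 60)   # 24-bit RGB, no compression
WIFI_AC_GBPS = 1.3                       # fastest 802.11ac link cited above

print(f"uncompressed 1080p60: {raw:.2f} Gbps")
print(f"fits in 802.11ac uncompressed: {raw <= WIFI_AC_GBPS}")
```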

And as mentioned earlier, even with NVENC hardware encoding and decoding, the added latency approaches 20 ms, which is a disaster for the VR experience.

Fortunately, 60GHz millimeter-wave communication made great progress in the second half of 2015. Current WiGig 60GHz links support up to 7 Gbps, which makes it possible to transmit PC-rendered images wirelessly as raw, uncompressed data.

Lattice is the world's largest supplier of 60GHz modules, offering parts that wirelessly carry 1920*1080@60fps data at close range (about 20 meters). The first company to use 60GHz in a wireless VR headset design was Serious Simulations, which builds wireless VR headsets for military training.

Serious Simulations' wireless headset uses two 1920*1080 displays for a wider viewing angle but, limited to a single module, drives the left and right eyes in mirrored mode.

Since Lattice already produces the relevant 60GHz modules, why are only a few teams capable of designing a wireless virtual reality headset? The reasons are as follows:

1. The screen

Lattice's module takes input and output at 1920*1080@60fps, so video must be fed to the module in 1920*1080 format and, after 60GHz wireless transmission, is output to the screen in 1920*1080 format. This means the screen must accept landscape-mode input at 1920*1080 or below.

A quick search on the Panelook website shows that screens supporting landscape mode start at 7 inches and still do not reach 1080p, yet a 7-inch screen is obviously too large for a head-mounted display.

The ideal headset screen is 5.5 inches at 1080p. Screens like that are mostly built for mobile phones and run in portrait mode, so the data must be converted between landscape and portrait to suit a portrait-screen or dual-screen arrangement.

2. Line-rate conversion

As mentioned above, to rotate the image for the screen without adding latency, the output video data must be converted at line rate, which demands excellent high-speed video signal processing skills from the headset design team.

3. Latency, again

Take landscape-to-portrait conversion as an example. When the video data is rotated 90 degrees, the following situation occurs: the last pixel of the first line in landscape mode becomes the last pixel of the last line in portrait mode, so the conversion can only be performed after buffering a full frame of image, which adds nearly one frame of delay. A good team will use a better algorithm to complete the conversion and avoid the delay caused by buffering.
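A toy rotation makes the buffering problem visible: in the naive approach, the first output row draws a pixel from every input row, so nothing can be emitted until the whole frame has arrived. The 3*2 "frame" here is just a stand-in for a 1920*1080 one.

```python
def rotate_90_cw(frame):
    """Naive 90-degree clockwise rotation of a frame given as a list of rows.
    Requires the entire frame in memory before the first output row exists."""
    return [list(row) for row in zip(*frame[::-1])]

landscape = [[1, 2, 3],
             [4, 5, 6]]
portrait = rotate_90_cw(landscape)
print(portrait)  # [[4, 1], [5, 2], [6, 3]] -- row 0 mixes both input rows
```

Note how output row `[4, 1]` already needs pixels from both input rows; at 60 fps that full-frame wait costs ~16.7 ms, which is the delay a smarter streaming conversion has to avoid.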

4. Processing of input data such as inertial sensors

Today's wired VR headsets send the inertial sensor data from the headset to the PC over USB for sensor fusion, obtaining the head pose with the shortest possible delay so the camera pose can be computed and the frame rendered.

Once the headset's video output goes wireless, the inertial sensor input must go wireless too, which requires the design team to have ultra-low-latency wireless communication skills and solid sensor fusion skills.

In early 2016, we launched a self-developed wireless virtual reality headset.

This wireless virtual reality headset provides a 1920*1080 OLED screen with integrated battery packs at the rear of the headset.

Since the 60GHz module integrated in the headset only performs image conversion and output, power draw is steady, and the battery lasts about 3.5 hours.

Compared with our from-scratch design, TPCast chose to cooperate with HTC Vive, solving the wireless problem of a wired headset with an add-on module, under an easy-to-remember name: the "Scissors Project".

Since the Vive uses its own packet format to transfer USB data, the first thing TPCast had to do was work out the USB data structures and masquerade its own VendorId as Vive's on the transmitter attached to the PC, in order to support Direct Mode and the loading of Steam applications.

In addition, the Vive's resolution is 2160*1200@90fps, which exceeds the module's 1920*1080@60fps data rate. One can therefore infer that TPCast may have switched from the yuv444 format to yuv420 to support the higher resolution, with the actual refresh rate dropping to 60 fps.
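The inference can be sanity-checked with raw-bitrate arithmetic: yuv444 carries 24 bits per pixel, while yuv420 drops to 12 after chroma subsampling. Treating the module's capacity as exactly its 1080p60 yuv444 spec is an assumption made for this sketch.

```python
def video_gbps(width, height, fps, bits_per_pixel):
    """Raw video bitrate in Gbps for a given pixel format."""
    return width * height * fps * bits_per_pixel / 1e9

module_cap = video_gbps(1920, 1080, 60, 24)    # spec: 1080p60 in yuv444
vive_native = video_gbps(2160, 1200, 90, 24)   # full Vive feed in yuv444
vive_reduced = video_gbps(2160, 1200, 60, 12)  # yuv420 at 60 fps

print(f"module capacity:   {module_cap:.2f} Gbps")
print(f"vive native 444:   {vive_native:.2f} Gbps (does not fit)")
print(f"vive 420 @ 60 fps: {vive_reduced:.2f} Gbps (fits)")
```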

That would contradict the statement by Wang Congqing, president of HTC Vive China, at the press conference that there is no degradation in image quality. Of course, this is only an inference awaiting verification after the product ships; either way, TPCast has taken a solid step for wireless virtual reality.

Second: a high-performance embedded computing platform, and not just an all-in-one.

Almost as soon as HTC released its wireless headset, Oculus too revealed a wireless virtual reality headset at its own press conference.

According to the limited information available, Oculus's wireless headset does not use a high-performance PC; computation is done directly in the headset, with a built-in inside-out positioning system, making it more like a cross between a wired headset and an all-in-one. The specific implementation and its real-world performance await more information.

Of course, whether through an all-in-one approach, an add-on module, or a from-scratch design, in our view the curtain on wireless virtual reality has been raised.

The Future: what is the ultimate solution for wireless virtual reality?

In my opinion, once embedded GPUs become powerful enough at low enough power, a mini PC plus headset, or an all-in-one, will be the ultimate solution, but that day may be a long way off.

So what about tomorrow's plan? How do we use the PC's powerful rendering, achieve almost zero added latency (at least perceptually), and break through the resolution and refresh rate limits of 60GHz? The solution described next may be the answer.

First, in this scheme, graphics rendering still happens on the PC's GPU, except that the output is an image with a wider field of view than normal, similar to panoramic video. Next, the image is compressed following Facebook's panoramic-video compression approach and encoded on the GPU; the compressed stream is sent over peer-to-peer wifi to a dedicated all-in-one with hardware video decoding.

After the all-in-one decodes it, the image is presented as quasi-panoramic video: the headset computes the viewport for the current viewing angle from the inertial sensors, and combined with ATW the user sees a stable, apparently latency-free viewport image, just as if it were generated locally on the headset. Meanwhile, the encoded video stream keeps refilling the headset over wifi in the background with about 20 ms of delay, completing the comparatively latency-tolerant positional update.
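As a very rough sketch of the local-viewport idea, the headset would crop the streamed panorama at the newest IMU yaw, so head rotation never waits on the radio link. All numbers here are hypothetical: a 210-degree pre-rendered strip and a 1920-pixel-wide viewport.

```python
def viewport_columns(pano_width_px, viewport_px, yaw_deg, pano_fov_deg=210.0):
    """Pixel-column range of the viewport inside a panoramic strip.
    The strip spans [-pano_fov_deg/2, +pano_fov_deg/2] of yaw."""
    px_per_deg = pano_width_px / pano_fov_deg
    center = pano_width_px / 2 + yaw_deg * px_per_deg
    start = int(center - viewport_px / 2)
    return start, start + viewport_px

# Head turns 10 degrees right: the crop shifts locally, no new frame needed.
print(viewport_columns(4200, 1920, yaw_deg=0.0))   # (1140, 3060)
print(viewport_columns(4200, 1920, yaw_deg=10.0))  # (1340, 3260)
```

Rotation is thus served entirely from local data at IMU rates, while the wifi link only has to keep the panorama roughly centered on the player's position.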

By inference, Vive's earlier cooperation with Quark VR is likely to use this solution. As some have said, tracking in virtual reality is not built from a single sensor but from combinations of sensors that work across different environments. The same holds for wireless headsets: only by combining different technologies can we meet the demands of high performance, high resolution, low latency, wide range, and multiple users at once. Let's wait and see.
