Speckle-Free Holography for Virtual Displays


Holographic display prototype. Credit: Stanford Computational Imaging Lab

Virtual and augmented reality headsets are designed to place wearers directly into other environments, worlds, and experiences. While the technology is already popular among consumers for its immersive quality, there may be a future in which holographic displays make it look even more like real life. In pursuit of that improved realism, the Stanford Computational Imaging Lab has combined its expertise in optics and artificial intelligence. Its most recent advances in this area are detailed in a paper published November 12 in Science Advances and in work that will be presented at SIGGRAPH Asia 2021 in December.

At its core, this research tackles the fact that current augmented and virtual reality displays only show 2D images to each of the viewer’s eyes, rather than 3D, or holographic, images like those we see in the real world.

“They are not perceptually realistic,” explained Gordon Wetzstein, associate professor of electrical engineering and leader of the Stanford Computational Imaging Lab. Wetzstein and his colleagues are working on solutions to bridge this gap between simulation and reality by creating displays that are more visually appealing and easier on the eyes.

The research published in Science Advances details a technique for reducing the speckle distortion often seen in conventional laser-based holographic displays, while the SIGGRAPH Asia paper proposes a technique for more realistically representing the physics that would apply to a 3D scene if it existed in the real world.

Bridging Simulation and Reality

Image quality has limited holographic displays for decades. As Wetzstein explains, researchers have long faced the challenge of getting a holographic display to look as good as an LCD display.

One problem is that it is difficult to control the shape of light waves at the resolution of a hologram. The other major challenge hindering the creation of high-quality holographic displays is bridging the gap between what happens in simulation and what the same scene would look like in a real environment.

Previously, scientists have attempted to create algorithms to address both of these problems. Wetzstein and his colleagues also developed algorithms, but did so using neural networks, a form of artificial intelligence that attempts to mimic the way the human brain learns information. They call the approach “neural holography.”

“Artificial intelligence has significantly revolutionized all aspects of engineering and beyond,” Wetzstein said. “But in this specific area of holographic displays or computer-generated holography, people have only just started to explore AI techniques.”

Yifan Peng, a postdoctoral research fellow in the Stanford Computational Imaging Lab, is using his interdisciplinary background in both optics and computer science to help design the optical engine that goes into the neural holographic displays.

“Recently, with emerging machine intelligence innovations, we have access to powerful tools and capabilities to harness advances in computer technology,” said Peng, co-lead author of the Science Advances paper and a co-author of the SIGGRAPH Asia paper.

The neural holographic display these researchers created involved training a neural network to mimic the real-world physics of what was happening in the display, and it achieved real-time image generation. The researchers then combined this with a camera-in-the-loop calibration strategy that provides near-instantaneous feedback to inform adjustments and improvements. By creating an algorithm and calibration technique that run in real time with the image being viewed, the researchers were able to create more realistic-looking scenes with improved color, contrast, and clarity.
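To make the camera-in-the-loop idea more concrete, here is a minimal, hypothetical Python sketch (it is not the lab’s published code): a toy far-field propagation model stands in for the learned display model, a noisy simulated capture stands in for the physical camera, and the measured image steers gradient-based updates of the hologram’s phase pattern.

```python
import torch

torch.manual_seed(0)
N = 128                                           # hologram resolution (toy value)
target = torch.zeros(N, N)
target[N // 4: 3 * N // 4, N // 4: 3 * N // 4] = 1.0   # simple square target image

phase = torch.zeros(N, N, requires_grad=True)     # phase pattern shown on the SLM
optimizer = torch.optim.Adam([phase], lr=0.05)

def propagate(phi):
    # Idealized forward model: far-field intensity of a phase-only hologram.
    field = torch.exp(1j * phi)
    far_field = torch.fft.fftshift(torch.fft.fft2(field, norm="ortho"))
    return far_field.abs() ** 2

def capture(intensity):
    # Stand-in for the physical camera; a real camera-in-the-loop system would
    # display the phase pattern on hardware and photograph the result here.
    return (intensity + 0.01 * torch.randn_like(intensity)).clamp(min=0.0)

for step in range(200):
    simulated = propagate(phase)                  # differentiable simulation
    measured = capture(simulated.detach())        # non-differentiable "measurement"
    # Straight-through correction: the loss is evaluated on the measured image,
    # but gradients flow back through the simulated image to the phase pattern.
    corrected = simulated + (measured - simulated).detach()
    loss = torch.nn.functional.mse_loss(corrected, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final loss: {loss.item():.4f}")
```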

The new SIGGRAPH Asia paper highlights the lab’s first application of their neural holography system to 3D visualizations. This system produces high-quality, realistic representations of scenes that contain visual depth, even when parts of the scenes are intentionally portrayed as distant or out of focus.

The Science Advances work uses the same camera-in-the-loop optimization strategy, combined with artificial-intelligence-inspired algorithms, to provide an improved system for holographic displays that use partially coherent light sources: LEDs and SLEDs. These light sources are attractive for their cost, size, and energy requirements, and they also have the potential to avoid the speckled appearance of images produced by systems that rely on coherent light sources such as lasers. But the same characteristics that allow partially coherent sources to avoid speckle tend to result in blurred images with a lack of contrast. By building an algorithm specific to the physics of partially coherent light sources, the researchers have produced the first high-quality, speckle-free holographic 2D and 3D images using LEDs and SLEDs.
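As a rough numerical illustration of why partial coherence suppresses speckle (a hypothetical sketch, not the paper’s model): the image formed by an LED-like source can be approximated as an incoherent sum of intensities over many spectral modes, and averaging those interference patterns lowers the speckle contrast that a single laser line would produce.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
# Random optical path depth (in units of the center wavelength), a stand-in for
# the rough wavefront that produces speckle in a fully coherent system.
opd_waves = rng.uniform(0.0, 200.0, size=(N, N))

def far_field_intensity(rel_wavelength):
    # Intensity contributed by one spectral mode of the source.
    phase = 2.0 * np.pi * opd_waves / rel_wavelength
    field = np.exp(1j * phase)
    return np.abs(np.fft.fftshift(np.fft.fft2(field, norm="ortho"))) ** 2

def speckle_contrast(img):
    # Fully developed speckle has contrast (std / mean) close to 1.
    return img.std() / img.mean()

coherent = far_field_intensity(1.0)                        # single laser line
wavelengths = np.linspace(0.97, 1.03, 25)                  # LED-like spectral spread
partially_coherent = np.mean(
    [far_field_intensity(w) for w in wavelengths], axis=0  # incoherent sum of modes
)

print(f"coherent speckle contrast:           {speckle_contrast(coherent):.2f}")
print(f"partially coherent speckle contrast: {speckle_contrast(partially_coherent):.2f}")
```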

Transformative Potential

Wetzstein and Peng believe that this coupling of emerging artificial intelligence techniques with virtual and augmented reality will become increasingly ubiquitous in many industries in the coming years.

“I’m a big believer in the future of wearable computing systems and AR and VR in general; I think they’re going to have a transformative impact on people’s lives,” Wetzstein said. It may not happen for a few years yet, he said, but Wetzstein believes augmented reality is the “big future.”

Although virtual reality is primarily associated with gaming right now, it and augmented reality have potential uses in a variety of fields, including medicine. Medical students could use augmented reality for training, as well as to overlay medical data from CT scans and MRIs directly onto patients.

“These types of technologies are already in use for thousands of surgeries per year,” Wetzstein said. “We envision that head-worn displays that are smaller, lighter weight and more visually comfortable are a big part of the future of surgery planning.”

“It is very exciting to see how computation can improve the display quality with the same hardware setup,” said Jonghyun Kim, a visiting scholar from Nvidia and co-author of both papers. “Better computation can make a better display, which could be a game changer for the display industry.”


More information:
Yifan Peng et al., Speckle-free holography with partially coherent light sources and camera-in-the-loop calibration, Science Advances (2021). DOI: 10.1126/sciadv.abg5040. www.science.org/doi/10.1126/sciadv.abg5040

Provided by Stanford University

Citation: Speckle-free holography for virtual displays (2021, November 12). Retrieved 30 March 2022 from https://techxplore.com/news/2021-11-speckle-free-holography-virtual.html.

This document is subject to copyright. No part may be reproduced without written permission, except for any fair use for the purpose of personal study or research. The content is provided for information purposes only.
