Imaging through random diffusers instantly without a computer

Computational Imaging Without Computers: Seeing Through Random Diffusers at the Speed of Light. Credit: UCLA Engineering Institute for Technology Advancement

Imaging through scattering, diffusive media has been a challenge for many decades, and many solutions have been reported. In principle, images distorted by random diffusers (such as frosted glass) can be recovered computationally. However, existing methods rely on sophisticated algorithms running on digital computers to correct the distorted images.

Adaptive optics-based methods have also been applied in various scenarios to see through diffusive media. With significant advances in wavefront shaping, wide-field real-time imaging through turbid media became possible. However, in addition to digital computers, these methods require guide-stars or known reference objects, which add complexity to the imaging system. As an alternative approach, deep neural networks have been trained on image pairs composed of diffuser-distorted objects and their corresponding distortion-free images, teaching the networks to reconstruct the distorted images on a computer.

A new paper published in eLight introduces an entirely new paradigm for imaging objects through diffusive media. In the paper, titled “Computational Imaging Without Computers: Seeing Through Random Diffusers at the Speed of Light,” researchers from UCLA, led by Professor Aydogan Ozcan, describe a method for imaging through random diffusers instantly, without the need for any digital processing. This new approach is computer-free and all-optically reconstructs images of objects distorted by unknown, randomly generated phase diffusers.

To achieve this, they trained a set of diffractive surfaces, or transmissive layers, using deep learning to optically reconstruct the image of an unknown object placed behind a random diffuser. The diffuser-distorted input optical field is successively diffracted through the trained layers, so the image reconstruction is completed at the speed of light propagation through these diffractive layers. Each trained diffractive surface contains thousands of diffractive features (termed neurons) that collectively compute the desired image at the output.

During training, many different, randomly selected phase diffusers were used to help the optical network generalize. Following this one-time deep learning-based design, the resulting layers were fabricated and assembled to form a physical network positioned between an unknown, new diffuser and the output/image plane. The trained network collects the scattered light behind a random diffuser to optically reconstruct an image of the object.
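The article does not include any code; the following is a minimal NumPy sketch of the forward model described above, assuming angular spectrum free-space propagation between planes, a random phase screen standing in for the unknown diffuser, and random phase layers standing in for the trained diffractive surfaces (in the actual work, these would be optimized by deep learning). All parameter values are illustrative assumptions, not the authors' design values.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex 2D field a distance z (angular spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)            # spatial frequencies, cycles/m
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)     # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

def forward(obj_amplitude, diffuser_phase, layer_phases, wavelength, dx, z):
    """Object -> random diffuser -> passive phase layers -> output intensity."""
    field = angular_spectrum(obj_amplitude.astype(complex), wavelength, dx, z)
    field = field * np.exp(1j * diffuser_phase)   # unknown random diffuser
    for phase in layer_phases:                    # stand-ins for trained layers
        field = angular_spectrum(field, wavelength, dx, z)
        field = field * np.exp(1j * phase)
    field = angular_spectrum(field, wavelength, dx, z)
    return np.abs(field) ** 2                     # intensity at the image plane

rng = np.random.default_rng(0)
n = 64
wavelength = 0.75e-3                        # ~0.4 THz, metres (assumed)
dx = 0.4e-3                                 # pixel pitch, metres (assumed)
z = 20e-3                                   # inter-plane distance (assumed)
obj = np.zeros((n, n)); obj[24:40, 24:40] = 1.0   # simple square test object
diffuser = rng.uniform(0, 2 * np.pi, (n, n))
layers = [rng.uniform(0, 2 * np.pi, (n, n)) for _ in range(3)]
out = forward(obj, diffuser, layers, wavelength, dx, z)
print(out.shape)
```

In the reported method, the layer phase values would be the free parameters of a training loop that minimizes the difference between `out` and the distortion-free object image, averaged over many random diffusers; once trained, the layers are fixed and fabricated, so the deployed system performs no computation beyond light propagation.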

No computers or digital reconstruction algorithms are needed to image through an unknown diffuser. Moreover, this diffractive processor uses no external power source other than the light that illuminates the object behind the diffuser.

The research team experimentally confirmed the success of this approach using terahertz waves. They fabricated the designed diffractive networks with a 3D printer and demonstrated the ability to see through randomly generated phase diffusers never used during training. The team also improved the reconstruction quality by using deeper diffractive networks with additional fabricated layers, one following the other.

The all-optical image reconstruction achieved by these passive diffractive layers allowed the team to see objects through unknown random diffusers, presenting an extremely low-power solution compared with existing deep learning-based or iterative image reconstruction methods that run on digital computers.

The researchers expect their method to be applicable to other parts of the electromagnetic spectrum, including visible and far/mid-infrared wavelengths. The reported proof-of-concept results were obtained with thin, random diffusive layers; the team believes these diffractive methods could potentially be extended to see through volumetric diffusers such as fog.

This approach could enable significant advances in fields where imaging through diffusive media is of utmost importance, including biomedical imaging, astronomy, autonomous vehicles, robotics, and defense/security applications.

Diffractive optical networks reconstruct holograms instantaneously without a digital computer

More information:
Yi Luo et al, Computational Imaging Without Computers: Seeing Through Random Diffusers at the Speed of Light, eLight (2022). DOI: 10.1186/s43593-022-00012-4

Provided by UCLA Engineering Institute for Technology Advancement

Citation: Imaging through random diffusers instantly without a computer (2022, January 27) Retrieved 30 March 2022 from

This document is subject to copyright. No part may be reproduced without written permission, except for any fair use for the purpose of personal study or research. The content is provided for information purposes only.
