Breakthrough AI method generates 3D holograms in real-time

For virtual reality, 3D printing, and medical imaging.
January 1, 2024

— contents —

~ story
~ paper
~ reference
~ reading
~ watching

— story —

Even though virtual reality headsets are popular for gaming, they haven’t yet become the go-to device for watching television, shopping, or using software tools for design and modelling.

One reason is that VR can make users feel sick, with nausea, imbalance, eye strain, and headaches. This happens because VR creates an illusion of 3D viewing — but the user is actually staring at a fixed-distance 2D display. The solution for better 3D visualization exists in a 60-year-old technology that’s being updated for the digital world — holograms.

A new method called tensor holography enables the creation of holograms for virtual reality, 3D printing, medical imaging, and more — and it can run on a smartphone.

Holograms deliver an exceptional 3D representation of the world around us — and they’re beautiful. Holograms offer a shifting visual perspective based on the viewer’s position. They allow the human eye to adjust its focal depth — so you can move your focus easily from foreground to background. The visual holographic display appears just like a real 3D object — as if you could touch it.

Researchers have been trying to make computer-generated holograms. But the process has traditionally required a supercomputer to churn through heaps of physics simulations to create that life-like effect. That’s time-consuming and can yield less-than-photo-realistic results.

Holograms in the blink of an eye.

To deal with this, researchers at the Massachusetts Institute of Technology (MIT) designed a new way to produce holograms — almost instantly. Their software uses deep learning, a type of artificial intelligence that can teach itself. They said it’s so efficient that it can create a hologram on a laptop — in the blink of an eye.

Liang Shi is a PhD student at MIT and the lead researcher on the project. He said:

People previously thought that with existing consumer-grade hardware, it was impossible to do real-time 3D holography computations. It’s often said that commercially available holographic displays will be around in 10 years — but this statement has been around for decades.

This new approach — tensor holography — will finally bring that elusive 10-year goal within reach. This advance could fuel a spill-over of holography into fields like VR and 3D printing.

The quest for better 3D.

A typical lens-based photograph encodes the brightness of each light wave that touches it. So a photo can faithfully reproduce a scene’s colors, but it ultimately makes a flat image.

But a hologram encodes both the brightness and phase of each light wave. That combination delivers a truer depiction of a scene’s parallax and depth. For example: a typical photograph of the famous oil painting Water Lilies can highlight the art’s color palette. But a hologram can bring the work to life — rendering the raised, unique 3D texture of each brush stroke. Despite being popular, holograms have been a challenge to make + share.
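
The difference between the two encodings can be sketched in a few lines. This is an illustrative toy, not the paper’s method: each pixel of a scene is given an amplitude (brightness) and a phase, and the sketch shows that a photograph’s intensity measurement discards the phase that a hologram keeps.

```python
import numpy as np

# Hypothetical 2x2 scene: per-pixel brightness (amplitude) and phase.
amplitude = np.array([[1.0, 0.5],
                      [0.8, 0.2]])
phase = np.array([[0.0, np.pi / 2],
                  [np.pi, np.pi / 4]])

# A hologram stores the full complex wavefield: amplitude AND phase.
wavefield = amplitude * np.exp(1j * phase)

# A photograph records only intensity (amplitude squared); the phase is
# lost, and with it the parallax and depth information.
photo = np.abs(wavefield) ** 2

print(photo)                 # the brightness survives...
print(np.angle(wavefield))   # ...but only the hologram keeps the phase
```

Recovering `phase` from `photo` alone is impossible — which is exactly why a flat photograph cannot reproduce depth.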

First developed in the mid-1900s, early holograms were recorded optically. That required splitting a laser beam — with half the beam used to illuminate the subject, and the other half used as a reference for the light waves’ phase. This reference generates a hologram’s unique sense of depth. The resulting images were static, so they couldn’t capture motion. And they were hard-copy only, making them difficult to reproduce and share.

Computer-generated holography sidesteps these challenges by simulating the optical setup. But the process can be a computational slog. “Because each point in the scene has a different depth, you can’t apply the same operations for all of them,” says Shi. “That increases the complexity significantly.” Directing a clustered supercomputer to run these physics-based simulations could take seconds or minutes for a single holographic image. Plus, existing algorithms don’t model occlusion with photo-realistic precision. So Shi’s team took a different approach: letting the computer teach physics to itself.
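
The cost Shi describes can be seen in a brute-force sketch of point-based computer-generated holography (illustrative only — not the team’s algorithm): every scene point at its own depth contributes a spherical wave to every hologram pixel, so the work grows as the number of points times the number of pixels.

```python
import numpy as np

wavelength = 532e-9                      # hypothetical green laser, meters
k = 2 * np.pi / wavelength               # wavenumber

# A few hypothetical scene points: (x, y, depth z, amplitude).
points = [(0.0, 0.0, 0.10, 1.0),
          (1e-4, -1e-4, 0.12, 0.5)]

# Hologram plane: a small grid of pixel coordinates at z = 0.
n = 64
pitch = 8e-6                             # assumed 8-micron pixel pitch
coords = (np.arange(n) - n / 2) * pitch
xx, yy = np.meshgrid(coords, coords)

field = np.zeros((n, n), dtype=complex)
for px, py, pz, amp in points:           # one full pass per scene point —
    r = np.sqrt((xx - px) ** 2 + (yy - py) ** 2 + pz ** 2)
    field += amp * np.exp(1j * k * r) / r   # spherical wave at every pixel

hologram_phase = np.angle(field)         # what a phase-only display shows
```

With millions of scene points and megapixel holograms, this per-point loop is what makes physics-based simulation take seconds or minutes per frame.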

They used deep learning to accelerate computer-generated holography, allowing for real-time hologram generation. The team designed a convolutional neural network — a processing technique that uses a chain of trainable tensors to roughly mimic how humans process visual information. Training a neural network typically requires a large, high-quality dataset, which didn’t previously exist for 3D holograms.

The team built a custom database of 4,000 pairs of computer-generated images. Each pair matched a picture — including color and depth information for each pixel — with its corresponding hologram. To create the holograms in the new database, the researchers used scenes with complex and variable shapes and colors, with the depth of pixels distributed evenly from the background to the foreground, and with a new set of physics-based calculations to handle occlusion. That approach resulted in photo-realistic training data. Next, the algorithm got to work.
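
One plausible layout for such a training pair (the shapes here are illustrative, not the paper’s actual resolution): an RGB-D input — color plus per-pixel depth — matched with a two-channel hologram target of amplitude and phase.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W = 192, 192   # hypothetical resolution, for illustration only

def make_pair():
    """One training example: RGB-D picture in, hologram out."""
    rgbd = rng.random((H, W, 4), dtype=np.float32)      # R, G, B, depth
    hologram = rng.random((H, W, 2), dtype=np.float32)  # amplitude, phase
    return rgbd, hologram

# The team's database held 4,000 such pairs; two random stand-ins here.
dataset = [make_pair() for _ in range(2)]
```

In the real database the hologram channel would come from the physics-based occlusion calculations described above, not from random numbers.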

By learning from each image pair, the tensor network tweaked the parameters of its own calculations, successively enhancing its ability to create holograms. The fully optimized network operated orders of magnitude faster than physics-based calculations. That efficiency surprised even the team.
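
That parameter-tweaking loop can be shown with a toy stand-in (a single weight matrix learned by gradient descent — vastly simpler than the paper’s network, but the same idea): each pass compares the prediction against the target pair and nudges the parameters to shrink the error.

```python
import numpy as np

rng = np.random.default_rng(1)
W_true = rng.standard_normal((4, 4))   # the mapping we want to learn

W = np.zeros((4, 4))                   # the "network" parameters, untrained
lr = 0.1                               # learning rate
for step in range(500):
    x = rng.standard_normal(4)         # input (think: an RGB-D picture)
    target = W_true @ x                # supervision (think: its hologram)
    pred = W @ x
    # Gradient of the squared error 0.5 * ||pred - target||^2 w.r.t. W:
    grad = np.outer(pred - target, x)
    W -= lr * grad                     # tweak the parameters, pair by pair

print(np.max(np.abs(W - W_true)))      # the error shrinks toward zero
```

Five hundred toy examples suffice here; the real network needed the 4,000-pair database because holograms are a far harder mapping to learn.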

“We are amazed at how well it performs,” says Wojciech Matusik. In mere milliseconds, tensor holography can craft holograms from images with depth information — which is provided by typical computer-generated images and can be calculated from a multi-camera setup or LiDAR sensor (both are standard on some new smartphones). This advance paves the way for real-time 3D holography. What’s more, the compact tensor network requires less than 1 MB of memory. “It’s negligible, considering the tens and hundreds of gigabytes available on the latest cell phone,” he says.
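
A quick back-of-the-envelope check makes the “less than 1 MB” figure plausible (the parameter count below is hypothetical, not taken from the paper): a compact network with a few hundred thousand single-precision weights fits comfortably under a megabyte.

```python
# Illustrative memory estimate for a small neural network.
params = 240_000                 # hypothetical parameter count
bytes_per_param = 4              # float32 weights
size_mb = params * bytes_per_param / 1_000_000
print(size_mb)                   # 0.96 -- megabytes, i.e. under 1 MB
```

Compare that with the seconds-per-frame supercomputer simulations above: the entire trained model is smaller than a single photograph.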


A considerable leap in ability.

3D holography in real-time would enhance a slew of systems, from VR to 3D printing. The team says the new system could help immerse VR viewers in more realistic scenery, while eliminating eye strain and other side effects of long-term VR use. The technology could be easily deployed on displays that modulate the phase of light waves. Currently, most affordable consumer-grade displays modulate only brightness, though the cost of phase-modulating displays would fall if widely adopted.

Three-dimensional holography could also boost the development of volumetric 3D printing, the researchers say. This technology could prove faster and more precise than traditional layer-by-layer 3D printing, since volumetric 3D printing allows for the simultaneous projection of the entire 3D pattern. Other applications include microscopy, visualization of medical data, and the design of surfaces with unique optical properties.

The research team said: “It’s a considerable leap that could completely change people’s attitudes toward holography. We feel like neural networks were born for this task.”

publication: Nature
paper title: Towards real-time photorealistic 3D holography with deep neural networks

read | paper

— description —

The ability to present 3D scenes with continuous depth sensation has a profound impact on: virtual + augmented reality, human–computer interaction, education, and training.

presented by

Nature | home ~ channel
tag line: text

Springer Nature Group | home ~ channel
tag line: Opening doors to discovery.


the Massachusetts Institute of Technology | home ~ channel
motto: text

name: Liang Shi
web: home


1. |

publication: Insider
tag line: What you want to know.
web: home ~ channel

story title: Here’s what happens to your body when you’ve been in virtual reality for too long
read | story


presented by

group: Axel Springer
tag line: The media + tech company.
banner: We empower free decisions.
web: home ~ channel


1. |

tag line:
web: home • channel

featurette title:
watch | featurette


presented by

tag line:

— notes —

AI = artificial intelligence
AR = augmented reality
VR = virtual reality

2D = 2-dimensional
3D = 3-dimensional