

Imaging in 3D using a single camera lens


Tibi Puiu
August 8, 2013 @ 11:45 am


A typical camera rig used for shooting 3D images and movies. Such equipment is very expensive and thus accessible to only a few. Harvard researchers have developed a technique that creates 3D images using a single lens.

Using an innovative technique that mathematically infers how a scene would look from outside the lens's own perspective, based on how light enters the camera, researchers at Harvard University have managed to create 3D images using only one lens and without moving the camera. The findings could prove useful to amateur and professional photographers alike, to microscopists, and to media applications such as future 3D movies that require no glasses.

So: one lens, one perspective, and yet the researchers were able to create 3D images. How does that make any sense? Lead researcher Kenneth B. Crozier and colleagues pulled it off by thinking outside the box, or, in this case, outside the camera's objective.

Pixel to pixel, light enters the camera at different angles – an important piece of information which the researchers exploited to infer how the image might look from a different angle. Standard cameras, however, don't record that angular information out of the box.

“Cameras have been developed with all kinds of new hardware – microlens arrays and absorbing masks – that can record the direction of the light, and that allows you to do some very interesting things, such as take a picture and focus it later, or change the perspective view. That’s great, but the question we asked was, can we get some of that functionality with a regular camera, without adding any extra hardware?” asked Crozier.

It’s only light that we’re ‘seeing’, after all…

Standard image sensors can’t measure the angle at which light enters the camera, but the next best thing one can do is guess. The team’s solution is to take two images from the same camera position but focused at different depths. The slight differences between these two images provide enough information for a computer to mathematically create a brand-new image as if the camera had been moved to one side.
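
For the curious, the sketch below shows the flavor of such a reconstruction in Python with numpy. It is only a rough illustration under simplifying assumptions, not the authors' published algorithm: the function name synthesize_shifted_view is made up, the FFT-based Poisson solve assumes the underlying angular-moment field is curl-free, and the final pixel warp is a crude stand-in for a proper view-synthesis step.

```python
import numpy as np

def synthesize_shifted_view(img_a, img_b, dz, shift=1.0):
    """Hypothetical sketch in the spirit of light-field moment imaging:
    from two images focused at depths z and z + dz, estimate a per-pixel
    angular moment field, then warp to mimic a sideways camera move."""
    h, w = img_a.shape
    # Axial intensity derivative, approximated by a finite difference.
    dI_dz = (img_b - img_a) / dz
    # Solve the Poisson equation  laplacian(U) = -dI/dz  with FFTs,
    # assuming the moment field is curl-free (I * M = grad U).
    ky = 2 * np.pi * np.fft.fftfreq(h).reshape(-1, 1)
    kx = 2 * np.pi * np.fft.fftfreq(w).reshape(1, -1)
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                      # avoid divide-by-zero at DC
    U = np.real(np.fft.ifft2(np.fft.fft2(dI_dz) / k2))
    # Normalized angular moments: M = grad U / I.
    I = np.maximum(img_a, 1e-6)
    My, Mx = np.gradient(U)             # axis 0 is y (rows), axis 1 is x
    Mx, My = Mx / I, My / I
    # Shift each pixel along its moment vector to fake a new viewpoint
    # (nearest-neighbor resampling keeps the sketch short).
    yy, xx = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xx + shift * Mx).astype(int), 0, w - 1)
    src_y = np.clip(np.round(yy + shift * My).astype(int), 0, h - 1)
    return img_a[src_y, src_x]
```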

By stitching such perspective-shifted images together, you get a 3D animation of your scene. So, with the researchers' software at hand, anyone could create the impression of a stereo image using simple hardware. Microscopy might find this technique most useful, as stereo imaging would greatly help in studying translucent materials, such as biological tissues, in 3D.
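
Stitching the pair into something viewable is then straightforward. As one hypothetical presentation (the article doesn't say what output format the software produces), the snippet below packs an original view and a synthesized one into a red-cyan anaglyph; a two-frame "wiggle" animation alternating between the views would work just as well.

```python
import numpy as np

def make_anaglyph(left, right):
    """Pack two grayscale views into a red-cyan anaglyph image.
    One illustrative way to present a stereo pair, not necessarily
    what the researchers' software does."""
    rgb = np.zeros(left.shape + (3,), dtype=left.dtype)
    rgb[..., 0] = left     # left eye on the red channel
    rgb[..., 1] = right    # right eye on green...
    rgb[..., 2] = right    # ...and blue (together: cyan)
    return rgb

# Example: pair the original focus image with a synthesized view.
# anaglyph = make_anaglyph(img_a, synthesize_shifted_view(img_a, img_b, dz))
```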

“This method devised by Orth and Crozier is an elegant solution to extract depth information with only a minimum of information from a sample,” says Conor L. Evans, an assistant professor at Harvard Medical School and an expert in biomedical imaging, who was not involved in the research. “Depth measurements in microscopy are usually made by taking many sequential images over a range of depths; the ability to glean depth information from only two images has the potential to accelerate the acquisition of digital microscopy data.”

“As the method can be applied to any image pair, microscopists can readily add this approach to our toolkit,” Evans adds. “Moreover, as the computational method is relatively straightforward on modern computer hardware, the potential exists for real-time rendering of depth-resolved information, which will be a boon to microscopists who currently have to comb through large data sets to generate similar 3D renders. I look forward to using their method in the future.”

The entertainment industry might also benefit from the Harvard researchers' work.

“When you go to a 3D movie, you can’t help but move your head to try to see around the 3D image, but of course it’s not going to do anything because the stereo image depends on the glasses,” explains co-researcher Anthony Orth. “Using light-field moment imaging, though, we’re creating the perspective-shifted images that you’d fundamentally need to make that work – and just from a regular camera. So maybe one day this will be a way to just use all of the existing cinematography hardware, and get rid of the glasses. With the right screen, you could play that back to the audience, and they could move their heads and feel like they’re actually there.”

The findings were reported in the journal Optics Letters. Source: Harvard press release.
