

Digital imaging of the future: artificial imaging and 3-D displays


Tibi Puiu
July 29, 2013 @ 6:45 am


The subtleties in these computer-generated images of translucent materials are important. Texture, color, contrast, and sharpness combine to create a realistic image. (Courtesy of Ioannis Gkioulekas and Shuang Zhao.)

Computer graphics and digital video have come a long way since their early days, yet the technology has not reached the point where viewers can no longer easily tell digitally rendered imagery from footage of reality. Three new papers recently presented by Harvard scientists at SIGGRAPH 2013 (the acronym stands for Special Interest Group on GRAPHics and Interactive Techniques), the 40th International Conference and Exhibition on Computer Graphics and Interactive Techniques, are the latest efforts at perfecting digital imaging, and their findings are fascinating.

One of the papers, led by Todd Zickler, a computer science faculty member at the Harvard School of Engineering and Applied Sciences (SEAS), tackles a difficult subject in digital imaging: how to mimic the appearance of a translucent object, such as a bar of soap.

“If I put a block of butter and a block of cheese in front of you, and they’re the same color, and you’re looking for something to put on your bread, you know which is which,” says Zickler. “The question is, how do you know that? What in the image is telling you something about the material?”

To answer this question, the researchers had to delve into how humans perceive and interact with objects, and how we inherently tell certain material properties apart. For instance, when you look at a familiar object, you can assess its mass and density without touching it, simply from its appearance and texture. For a computer this is far more difficult, but if achieved, a device with a mounted camera could identify what material an object is made of and know how to handle it properly, judging how much it weighs or how much pressure can safely be applied to it, the way humans do.

The researchers’ approach is based on a translucent material’s phase function, part of a mathematical description of how light refracts and reflects inside an object. This is what we actually see, since our eyes perceive only the light that bounces off objects, not the objects themselves. The space of possible phase function shapes is vast and perceptually diverse, which has made past attempts at modeling it extremely difficult.

Luckily, today’s scientists have access to a great deal of computing power. Zickler and his team first rendered thousands of computer-generated images of one object with different computer-simulated phase functions, so each image’s translucency was slightly different from the next. From there, a program compared each image’s pixel colors and brightness to those of another image in the space and decided how different the two images were. Through this process, the software created a map of the phase function space according to the relative differences of image pairs, making it easy for the researchers to identify a much smaller set of images and phase functions that were representative of the whole space. Finally, human participants were invited to browse through various images and judge how different they were, providing insight into how the brain tells materials like plastic or soap apart just by looking at them.
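The pipeline described above, measuring pairwise differences between renderings and laying them out as a map, can be sketched in a few lines. In this illustration, random arrays stand in for the thousands of actual renderings, plain Euclidean distance stands in for the paper's perceptual comparison, and classical multidimensional scaling builds the 2-D map; all three are simplifying assumptions, not the authors' actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the rendered images: each row is the pixel brightness of one
# image produced under a different simulated phase function. (The real
# renderer is far beyond this sketch.)
n_images, n_pixels = 50, 64
images = rng.random((n_images, n_pixels))

# Compare every pair of images by their pixel values to get a distance matrix.
diffs = images[:, None, :] - images[None, :, :]
D = np.sqrt((diffs ** 2).sum(axis=2))

# Classical multidimensional scaling: embed the images in a 2-D "map" so
# that similar-looking renderings land near each other.
J = np.eye(n_images) - np.ones((n_images, n_images)) / n_images
B = -0.5 * J @ (D ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1][:2]
embedding = eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0))

print(embedding.shape)  # one 2-D coordinate per image
```

A small, representative subset of phase functions can then be picked by sampling spread-out points from this map, which is the role the reduced image set plays in the study.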

“This study, aiming to understand the appearance space of phase functions, is the tip of the iceberg for building computer vision systems that can recognize materials,” says Zickler.

Looking at a display as if through a window

A second paper, also involving Zickler, is equally intriguing. Imagine an adaptive display, inherently flat and thus 2-D, that changes what it shows according to the angle you view it from and the ambient lighting, just like looking through a window.

The solution takes advantage of mathematical functions, called bidirectional reflectance distribution functions (BRDFs), that describe how light arriving from a particular direction reflects off a surface.
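To make the idea concrete, here is a minimal sketch of two textbook BRDFs: a perfectly diffuse (Lambertian) surface, and a simple Phong-style glossy lobe whose response depends on viewing angle. These standard models are assumptions for illustration; the paper's display would use measured or designed BRDFs, not these formulas.

```python
import numpy as np

def lambertian_brdf(albedo):
    # A perfectly diffuse surface reflects equally in all directions;
    # its BRDF is the constant albedo / pi.
    return albedo / np.pi

def phong_specular_brdf(w_in, w_out, normal, shininess=32.0):
    # A simple glossy lobe: mirror the incoming direction about the surface
    # normal, then measure how closely the outgoing (viewing) direction
    # aligns with that mirror direction.
    w_in, w_out, normal = (np.asarray(v, float) for v in (w_in, w_out, normal))
    reflected = 2.0 * np.dot(w_in, normal) * normal - w_in
    cos_alpha = max(np.dot(reflected, w_out), 0.0)
    return (shininess + 2.0) / (2.0 * np.pi) * cos_alpha ** shininess

# With light arriving along the surface normal, the glossy term is strong
# when viewed head-on and vanishes at a grazing view; this angle dependence
# is exactly what a window-like display would have to reproduce.
n = np.array([0.0, 0.0, 1.0])
light = np.array([0.0, 0.0, 1.0])
head_on = phong_specular_brdf(light, np.array([0.0, 0.0, 1.0]), n)
grazing = phong_specular_brdf(light, np.array([1.0, 0.0, 0.0]), n)
print(head_on > grazing)  # True: the highlight fades away from the mirror direction
```

A flat panel that evaluated such functions against the viewer's position and the room's light would show each pixel differently from each angle, which is the effect the paper pursues.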


From the professional artist’s studio to the amateur’s bedroom

The third paper, led by Hanspeter Pfister, An Wang Professor of Computer Science, looks at how to optimize and manipulate vivid colors. At the moment, imposing a particular color palette on a video requires professional artists to brush and edit it manually, frame by frame. Amateur filmmakers therefore cannot achieve the characteristically rich color palettes of professional films.

“The starting idea was to appeal to a broad audience, like the millions of people on YouTube,” says lead author Nicolas Bonneel, a postdoctoral researcher in Pfister’s group at SEAS.

Pfister says his team is working on software that will let amateur video editors choose from various templates, say the color palettes of Amélie or Transformers, select what counts as foreground and background, and let the software do the rest, interpolating the color transformations throughout the video. Bonneel estimates that the team’s new color grading method could be incorporated into commercially available editing software within the next few years.
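The core idea of imposing a reference "look" on footage can be illustrated with the classic mean-and-standard-deviation color transfer (in the style of Reinhard et al.), which shifts each channel of a frame to match a template's statistics. This simple global transfer is an assumption for illustration only; the Harvard method is more sophisticated, with foreground/background separation and interpolation across frames.

```python
import numpy as np

def transfer_color_stats(frame, template):
    # Shift each color channel of the frame so that its mean and spread
    # match those of the template image (a frame from the reference film).
    f = frame.astype(float)
    t = template.astype(float)
    out = np.empty_like(f)
    for c in range(3):
        f_mean, f_std = f[..., c].mean(), f[..., c].std() + 1e-8
        t_mean, t_std = t[..., c].mean(), t[..., c].std()
        out[..., c] = (f[..., c] - f_mean) / f_std * t_std + t_mean
    return np.clip(out, 0, 255).astype(np.uint8)

# Hypothetical stand-ins: random pixels for one video frame and one
# template frame carrying the desired palette.
rng = np.random.default_rng(1)
frame = rng.integers(0, 256, (4, 4, 3), dtype=np.uint8)
template = rng.integers(0, 256, (4, 4, 3), dtype=np.uint8)
graded = transfer_color_stats(frame, template)
print(graded.shape)  # same shape as the input frame
```

Applied per frame, with the transformation eased smoothly between keyframes, this is roughly the kind of "select a template, let the software interpolate" workflow the article describes.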

 
