Digital imaging of the future: artificial imaging and 3-D displays

Tibi Puiu
July 29, 2013 @ 6:45 am

The subtleties in these computer-generated images of translucent materials are important. Texture, color, contrast, and sharpness combine to create a realistic image. (Courtesy of Ioannis Gkioulekas and Shuang Zhao.)

Computer graphics and digital video have come an incredibly long way since their early days, yet technology is still at a point where people can quite easily distinguish between what's digitally rendered and what's footage of reality. Three new papers recently presented by Harvard scientists at SIGGRAPH 2013 (the acronym stands for Special Interest Group on GRAPHics and Interactive Techniques), the 40th International Conference and Exhibition on Computer Graphics and Interactive Techniques, are the most recent efforts at perfecting digital imaging, and their findings are most interesting, to say the least.

One of the papers, led by Todd Zickler, computer science faculty at the Harvard School of Engineering and Applied Sciences (SEAS), tackles a difficult subject in digital imaging: how to mimic the appearance of a translucent object, such as a bar of soap.

“If I put a block of butter and a block of cheese in front of you, and they’re the same color, and you’re looking for something to put on your bread, you know which is which,” says Zickler. “The question is, how do you know that? What in the image is telling you something about the material?”

To answer this question, the researchers had to delve into how humans perceive and interact with objects, and how we can inherently tell certain properties apart. For instance, when you look at a familiar object, you can assess its mass and density without touching it, simply based on its appearance and texture. For a computer this is far more difficult, but if achieved, a device with a mounted camera could identify what material an object is made of and know how to properly handle it (how much it weighs, or how much pressure to safely apply) the way humans do.

The researchers' approach is based on translucent materials' phase function, part of a mathematical description of how light refracts or reflects inside an object. This is what we actually see, since what our eyes perceive is only the light that bounces off objects, not the objects themselves. The space of possible phase function shapes is vast and perceptually diverse to the human brain, which has made past attempts at modeling it extremely difficult.
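For a sense of what a phase function looks like mathematically, here is a minimal Python sketch of the Henyey-Greenstein model, the most common single-parameter phase function. It is an illustrative textbook model only; the paper itself explores a much broader family of shapes.

```python
import numpy as np

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function: the probability density that light
    traveling inside a material scatters through an angle theta.
    g in (-1, 1) controls the shape: g > 0 is forward-peaked scattering,
    g < 0 is backward-peaked, and g = 0 is isotropic."""
    return (1 - g**2) / (4 * np.pi * (1 + g**2 - 2 * g * cos_theta) ** 1.5)

# Small changes in g change how light spreads inside the object,
# and hence how translucent it appears from the outside.
for g in (-0.5, 0.0, 0.8):
    print(f"g = {g:+.1f}: forward = {henyey_greenstein(1.0, g):.3f}, "
          f"backward = {henyey_greenstein(-1.0, g):.3f}")
```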

Luckily, today's scientists have access to a great deal of computing power. Zickler and his team first rendered thousands of computer-generated images of one object with different computer-simulated phase functions, so that each image's translucency was slightly different from the next. From there, a program compared each image's pixel colors and brightness to those of another image in the set and decided how different the two images were. Through this process, the software created a map of the phase function space according to the relative differences of image pairs, making it easy for the researchers to identify a much smaller set of images and phase functions that were representative of the whole space. Finally, actual people were invited to browse through various images and judge how different they were, providing insight into how the human brain tells objects like plastic or a soap bubble apart just by looking at them.
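A rough sketch of how such a map could be built, using raw per-pixel differences and classical multidimensional scaling as stand-ins for the paper's actual perceptual metric and embedding; the renderer here is a toy placeholder, not a physically based simulation:

```python
import numpy as np
from sklearn.manifold import MDS

def render_translucent(g, n=64):
    # Placeholder for a physically based renderer: a radial falloff whose
    # softness stands in for the visual effect of the phase parameter g.
    y, x = np.mgrid[0:n, 0:n]
    r = np.hypot(x - n / 2, y - n / 2) / (n / 2)
    return np.exp(-r**2 / (0.2 + 0.8 * (g + 1) / 2))

# Render one image per sampled phase function.
gs = np.linspace(-0.9, 0.9, 30)
images = np.stack([render_translucent(g).ravel() for g in gs])

# Pairwise distances between images: how different each pair looks
# under a simple pixel-space metric.
diff = np.linalg.norm(images[:, None, :] - images[None, :, :], axis=-1)

# Embed the distance matrix in 2-D, producing a map of the phase function
# space in which nearby points look similar.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(diff)
print(coords[:5])
```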

“This study, aiming to understand the appearance space of phase functions, is the tip of the iceberg for building computer vision systems that can recognize materials,” says Zickler.

Looking at a display as if through a window

A second paper, also involving Zickler, is just as interesting. Think of an adaptive display, inherently flat and thus 2-D, that can adapt the displayed objects according to the angle you view it from and to the environmental lighting, just like looking through a window.

The solution takes advantage of mathematical functions (called bidirectional reflectance distribution functions) that represent how light coming from a particular direction will reflect off a surface.
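As a rough illustration of the idea, here is a minimal Python sketch using the classic Blinn-Phong model as a stand-in BRDF (a real view-dependent display would rely on far richer, measured reflectance functions): the same surface point returns a different intensity for each viewing direction, which is exactly what such a display must reproduce.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def blinn_phong_brdf(light_dir, view_dir, normal, kd=0.6, ks=0.4, shininess=32):
    """Toy stand-in for a BRDF: reflectance as a function of the incoming
    light direction and the outgoing (viewing) direction."""
    h = normalize(light_dir + view_dir)   # half vector between light and view
    diffuse = kd / np.pi                  # view-independent component
    specular = ks * max(np.dot(normal, h), 0.0) ** shininess
    return diffuse + specular

n = np.array([0.0, 0.0, 1.0])             # surface normal
l = normalize(np.array([0.3, 0.2, 1.0]))  # fixed light direction

# Sweep the viewer across the surface: the reflected intensity changes
# with viewing angle, as it would when looking through a real window.
for angle_deg in (0, 30, 60):
    theta = np.radians(angle_deg)
    v = np.array([np.sin(theta), 0.0, np.cos(theta)])
    print(f"view angle {angle_deg:2d} deg: intensity "
          f"{blinn_phong_brdf(l, v, n):.3f}")
```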

From the professional artist’s studio to the amateur’s bedroom

The third paper, led by Hanspeter Pfister, An Wang Professor of Computer Science, takes a look at how to optimize and manipulate vivid colors. At the moment, professional artists need to manually brush and edit, frame by frame, any video onto which a certain color palette is to be imposed. Amateur filmmakers therefore cannot achieve the characteristically rich color palettes of professional films.

“The starting idea was to appeal to a broad audience, like the millions of people on YouTube,” says lead author Nicolas Bonneel, a postdoctoral researcher in Pfister’s group at SEAS.

Pfister says his team is working on software that will allow amateur video editors to choose from various templates, say the color palettes of Amélie or Transformers, and then simply select what's the foreground and what's the background; the software does the rest, interpolating the color transformations throughout the video. Bonneel estimates that the team's new color grading method could be incorporated into commercially available editing software within the next few years.
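The paper's exact algorithm isn't detailed here, but a classic baseline for this kind of template-driven grading is statistical color transfer: shift each frame's color distribution toward that of a reference frame. A minimal sketch, with random frame data purely for illustration:

```python
import numpy as np

def transfer_color(frame, template):
    """Match each color channel's mean and standard deviation in `frame`
    to those of `template` (a simple color-transfer baseline, not the
    paper's actual method)."""
    out = frame.astype(np.float64)
    ref = template.astype(np.float64)
    for c in range(3):
        src, tgt = out[..., c], ref[..., c]
        out[..., c] = (src - src.mean()) / (src.std() + 1e-8) * tgt.std() + tgt.mean()
    return np.clip(out, 0, 255).astype(np.uint8)

# Hypothetical usage: grade every frame of an amateur clip toward the
# palette of a single reference frame from a professionally graded film.
template = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
clip = [np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8) for _ in range(3)]
graded = [transfer_color(frame, template) for frame in clip]
```

A production tool along the lines the article describes would also respect the user's foreground/background selection, applying a separate transform to each region and interpolating the result across frames.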

 
