We’ve all tried to fix poorly lit pictures in Photoshop, but the results always end up unsatisfactory. You can’t polish a turd, they say. Researchers at the University of Illinois at Urbana–Champaign would beg to differ, however. In a new study, the researchers demonstrated a novel machine learning algorithm that corrects photos taken in complete darkness, with astonishing results.
To take decent photos in low-light conditions, professionals advise setting a longer exposure and using a tripod to eliminate blur. Alternatively, you can increase the camera's sensor sensitivity (ISO), at the cost of introducing the noise that makes photos grainy and ugly.
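That trade-off is easy to see in a toy simulation. The sketch below (my own illustration, not from the study) models a sensor in numpy: shot noise grows with the light actually collected, while turning up the gain amplifies noise along with the signal. All names and noise parameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scene: true photon flux per pixel (arbitrary units).
scene = rng.uniform(5, 50, size=10_000)

def capture(flux, exposure, gain, read_noise=2.0):
    """Simulate one sensor readout: photon (shot) noise depends on the
    light collected; gain amplifies signal and noise alike."""
    photons = rng.poisson(flux * exposure)                      # shot noise
    return gain * (photons + rng.normal(0, read_noise, flux.shape))

# Option 1: long exposure, low gain (needs a tripod to avoid blur).
long_exp = capture(scene, exposure=16, gain=1)
# Option 2: short exposure, high gain (handheld, but noisy).
short_exp = capture(scene, exposure=1, gain=16)

def snr(img, reference):
    """Signal-to-noise ratio against an ideal noise-free image."""
    return reference.mean() / (img - reference).std()

target = scene * 16  # ideal output at the same overall brightness
print(f"SNR, long exposure: {snr(long_exp, target):.1f}")
print(f"SNR, high gain:     {snr(short_exp, target):.1f}")
```

Both options produce an image of the same brightness, but the high-gain shot comes out with a far lower signal-to-noise ratio, which is exactly the graininess the article describes.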
The new algorithm, however, can turn even pitch-black photos into impressively sharp images. The results aren't perfect, but given the starting conditions, they are far beyond anything we've seen post-production software do before.
The researchers first trained their neural network on a dataset of 5,094 dark, short-exposure raw images, each paired with a long-exposure reference image of the same scene. This taught the algorithm what each scene ought to look like with proper lighting and exposure.
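The paired-data idea can be sketched in miniature. The authors train a deep network on raw sensor data; the toy below stands in for that with the simplest possible "model", a single amplification factor fit by least squares on synthetic short/long exposure pairs. Everything here (the gain value, noise level, variable names) is a made-up illustration of supervised learning from pairs, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the paired dataset: each "short-exposure" pixel is a
# dimmed, noisy version of its "long-exposure" ground truth.
true_gain = 30.0
long_exposure = rng.uniform(0.2, 1.0, size=2_000)               # targets
short_exposure = long_exposure / true_gain + rng.normal(0, 0.003, 2_000)

# Fit a single learned amplification factor by least squares -- a toy
# proxy for training a network to map dark raw frames to bright ones.
learned_gain = (short_exposure @ long_exposure) / (short_exposure @ short_exposure)

restored = learned_gain * short_exposure
mse_before = np.mean((short_exposure - long_exposure) ** 2)
mse_after = np.mean((restored - long_exposure) ** 2)
print(f"learned gain: {learned_gain:.1f}")
print(f"MSE before: {mse_before:.4f}, after: {mse_after:.4f}")
```

The fitted factor lands close to the true dimming ratio, and the restored frames sit much nearer the long-exposure targets. The real network learns a vastly richer mapping, handling denoising and color at once, but the supervision signal is the same: pairs of dark inputs and well-exposed ground truth.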
“The network operates directly on raw sensor data and replaces much of the traditional image processing pipeline, which tends to perform poorly on such data. We report promising results on the new dataset, analyze factors that affect performance, and highlight opportunities for future work,” the researchers wrote.
Some of the photos used to train the algorithm were taken by an iPhone 6, which means that someday similar technology could be integrated into smartphones. In this day and age, software can matter just as much as hardware, if not more, when it comes to snapping quality pictures. Think motion stabilization, lighting correction, and all the other tricks employed by the cheap camera in your phone; without them, its photos would look dreadful.
Who else is looking forward to using this new technology? Leave a comment below.