
Generative AI has a dirty little secret. Behind every whimsical AI sketch of a cat wearing a space helmet lies a vast industrial footprint: racks of GPUs guzzling electricity and water to churn out digital art. OpenAI reported that its new image model created 700 million pictures in a single week — each requiring hundreds or even thousands of joules of energy.
Now, researchers say they may have found a way out of this energy spiral by building an AI that literally paints with light.
The new system, developed at UCLA and described in Nature, doesn’t rely on the usual brute-force silicon chip computations. Instead, it uses laser beams and a set of optical gadgets to generate images almost instantly, consuming only a few millijoules of energy per picture — millions of times less than digital models.
“Unlike digital diffusion models that require hundreds to thousands of iterative steps, this process achieves image generation in a snapshot, requiring no additional computation beyond the initial encoding,” according to researchers led by Aydogan Ozcan, the senior author of the study.
Turning Static Into Art

To understand why this matters, it helps to peek inside the black box of conventional AI art. Most image generators are based on a process called diffusion. During training, “digital static” is gradually added to a picture until nothing recognizable remains, and the model learns to reverse that corruption; millions if not billions of images go through this process. Then, when asked for a new image — say, “a house on Mars” — the model starts with random static and removes noise step by step until a picture emerges. It’s clever, but slow and computationally hungry.
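That add-noise, remove-noise loop can be sketched in a few lines. This is a toy illustration, not a real diffusion model: the “denoiser” here cheats by comparing against the known target, whereas a real model learns to estimate the noise from data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "image": a real model works on pixel arrays, not a ramp of 16 values.
clean = np.linspace(0.0, 1.0, 16)

# Forward process: blend the image toward pure static, a little per step.
T = 50
noisy = clean.copy()
for t in range(T):
    noisy = 0.98 * noisy + 0.02 * rng.standard_normal(clean.shape)

# Reverse process: start from random static and remove a bit of noise per step.
sample = rng.standard_normal(clean.shape)
for t in range(T):
    predicted_noise = sample - clean         # a real model *learns* this estimate
    sample = sample - 0.1 * predicted_noise  # peel away 10% of the noise

print(float(np.abs(sample - clean).max()))   # error shrinks as steps accumulate
```

Each reverse step only removes a fraction of the noise, which is why digital diffusion needs hundreds to thousands of passes per image — the cost the optical approach sidesteps.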

The UCLA team turned this process into an optical technique. A small digital encoder, trained on standard datasets, creates a phase pattern, in effect a mathematical blueprint of the static. These patterns are loaded onto a spatial light modulator, a kind of liquid crystal screen. When laser light shines through, it carries the encoded pattern to a second element, a diffractive decoder. The result is an image that materializes instantly on a sensor, conjured entirely by light passing through glass.
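The pipeline can be mimicked numerically. The sketch below is not the UCLA hardware: it assumes a simple Fourier-optics model of propagation and uses random phases where the real system uses jointly trained encoder and decoder surfaces. It only shows the shape of the idea — phase-only patterns plus a fixed decoding surface yield an intensity image on a sensor.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64

# "Encoder" output: a phase-only pattern loaded onto the spatial light modulator.
phase_pattern = rng.uniform(0, 2 * np.pi, (N, N))
field = np.exp(1j * phase_pattern)     # laser light picks up the phase

# Free-space propagation to the decoder, modeled as a Fourier transform
# (a standard far-field approximation, not the paper's exact optics).
field = np.fft.fft2(field)

# "Diffractive decoder": a second, fixed phase surface the light passes through.
decoder_phase = rng.uniform(0, 2 * np.pi, (N, N))
field = field * np.exp(1j * decoder_phase)

# Propagate to the sensor; the camera records intensity, never phase.
image = np.abs(np.fft.ifft2(field)) ** 2
print(image.shape)
```

With random phases this produces only speckle; in the real system, training shapes both surfaces so that the intensity landing on the sensor is the desired picture. Note that the "computation" after encoding is just light propagating, which is where the energy savings come from.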
“Our optical generative models can synthesize countless images with almost no computing power, offering a scalable and energy-efficient alternative to digital AI models,” lead author Shiqi Chen told Phys.org.

The team tested the system on handwritten digits, butterflies, human faces, and even paintings inspired by Vincent van Gogh. The optical results weren’t perfect, but they looked statistically similar to what digital models produce.
“This is perhaps the first example where an optical neural network is not just a lab toy, but a computational tool capable of producing results of practical value,” Alexander Lvovsky, a quantum optics researcher at the University of Oxford, told New Scientist.
From Green AI to Secure AI
The paper describes two flavors of the technology. Snapshot models produce an image in a single optical pass. Iterative models mimic digital diffusion more closely, refining outputs through successive flashes of light. Both approaches produced multicolor Van Gogh-style artwork at resolutions that rival some digital generators.
Beyond efficiency, the researchers were also mindful of privacy. Because each image is encoded in a unique optical phase pattern, only the correct decoder surface can reconstruct the final picture. That creates what the authors call a “physical key-lock mechanism,” potentially useful for secure communication or anti-counterfeiting.
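The key-lock idea can be illustrated with the same toy phase model as above — again an assumption-laden sketch, not the paper's method. When the decoder phase matches the encoded phase, the light recombines cleanly; a mismatched decoder leaves the phase scrambled and the energy spread into speckle.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 32

# One phase-encoded "key" and two candidate decoder "locks".
encoded = np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N)))
right_lock = np.conj(encoded)                                  # matched surface
wrong_lock = np.exp(-1j * rng.uniform(0, 2 * np.pi, (N, N)))   # mismatched

# Matched decoder cancels the phase: the light focuses into a single spot.
matched = np.abs(np.fft.fft2(encoded * right_lock)) ** 2
# Wrong decoder leaves random phase: the energy stays scattered.
scrambled = np.abs(np.fft.fft2(encoded * wrong_lock)) ** 2

# Fraction of total energy in the brightest pixel: ~1.0 vs. a tiny fraction.
print(matched.max() / matched.sum(), scrambled.max() / scrambled.sum())
```

Only the matched surface reconstructs anything; every other decoder sees noise — the optical analogue of decrypting with the wrong key.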
The system could eventually shrink into integrated photonic chips, replacing bulky lasers and modulators with nanofabricated surfaces. That means optical generative AI models could be integrated into glasses, VR headsets, or even medical imaging tools. As Ozcan put it, “Our work shows that optics can be harnessed to perform generative AI tasks at scale.”
The bigger picture here is sustainability. Generative AI’s rapid growth has sparked fears about energy demand spiraling out of control. In 2023, researchers estimated that training large models could emit as much carbon as flying thousands of passengers overseas. By eliminating the need for iterative digital computation during inference, optical AI could make content generation far more sustainable.
Of course, challenges remain. Optical hardware can be finicky, prone to misalignment and limited by the resolution of modulators. Scaling from lab setups to data centers won’t happen overnight. But the UCLA team has shown that it’s possible to reimagine generative AI not as a power hog, but as a dance of photons.