These devices amplify the tiny amounts of light available at night or use the heat given off by objects to peer into the dark.
No spy movie or modern shooter game worth its salt could go without night-vision goggles. When the need arises, our favorite protagonists whip out these funny-looking goggles, pull them down over their eyes, and see in the dark through the magic of the light-green glow they give off. At one point or another, many of us have wondered how such devices work, and whether they’re even real or just a convenient trope.
Well, the last question is easy and quick to answer: night-vision goggles are real. They really do use screens that glow green while in operation. To understand why, we’ll need to take a look at exactly how these goggles function.
Peering into the night
Night-vision technology today can boast some impressive figures. For example, quality gear can allow that lucky special operative to see a person up to 180 m (around 200 yards) away even in the dead of a moonless, cloudy night. With the naked eye, you would be lucky to notice someone under such conditions at around 10 m (11 yards) — so that’s quite the improvement.
But how does this green-tinted magic happen? In broad strokes, night-vision devices (NVDs), also known as night optical/observation devices (NODs), function in one of two ways:
- Image enhancement. This involves picking up the visible light available in the darkness and amplifying it to a level that we can perceive. Essentially, it takes light that our eyes can see, which still reflects off objects at night, and brightens it enough for us to make things out.
- Pure thermal imaging. This involves registering the heat given off by objects and ‘translating’ it into images for our eyes to perceive. This technology uses a type of light that our eyes cannot naturally see, but which objects themselves generate, and makes it visible to the user.
The first type of NVD works in a pretty intuitive manner. There is very little light available on the surface of the Earth at night, but it’s not absolute darkness; some light is still diffused through the atmosphere. Image enhancement uses sensors that are much more sensitive than our eyes to pick up this light, amplify it, and transmit the data to a display, which renders it for our eyes to see.
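The core idea of amplification can be sketched in a few lines of Python. Note that real image intensifiers multiply electrons inside a microchannel plate, not numbers in software; the pixel values, gain factor, and function name below are purely illustrative.

```python
# A minimal sketch of image enhancement: take a very dim "image"
# (pixel brightness values near zero), multiply each pixel by a large
# gain, and clip at the display's maximum brightness. All values here
# are illustrative, not real sensor figures.

DISPLAY_MAX = 255  # maximum brightness of an 8-bit display

def amplify(pixels, gain=5000):
    """Scale each dim pixel value up by `gain`, clipping to the display range."""
    return [min(int(p * gain), DISPLAY_MAX) for p in pixels]

dim_scene = [0.002, 0.010, 0.0, 0.045, 0.001]  # faint reflected starlight
print(amplify(dim_scene))  # → [10, 50, 0, 225, 5]: differences become visible
```

Even in this toy version, the key property is visible: tiny differences in brightness that would be indistinguishable to the eye are stretched out into clearly separated shades.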
For the second one, we’ll need to take a look at exactly why heat can stand in for light. I’ve written about this topic in the past, but the short of it is that hot objects give off their own light. This is a type of electromagnetic radiation that sits very close to, but just under, the frequency range that our eyes can perceive. Because it lies just below the frequencies we perceive as the color red, this type of radiation is known as ‘below red’ radiation: ‘infrared’ (or IR for short).
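You can even put a number on how far below red this radiation sits. Wien’s displacement law, a standard physics result, gives the wavelength at which a warm body emits most strongly; the snippet below applies it to a human body and, for comparison, the Sun (the helper name is my own, and the bodies are treated as ideal emitters).

```python
# Wien's displacement law: a warm body emits most strongly at
# lambda_peak = b / T, where b ≈ 2.898e-3 m·K and T is in kelvin.
# Our eyes see roughly 0.40-0.70 micrometres; a human body's peak
# emission lands far beyond that, deep in the infrared.

WIEN_B = 2.898e-3  # Wien's displacement constant, m·K

def peak_wavelength_um(temp_kelvin):
    """Peak emission wavelength, in micrometres, for a body at temp_kelvin."""
    return WIEN_B / temp_kelvin * 1e6

print(round(peak_wavelength_um(310), 1))   # human body (~310 K): 9.3 µm, infrared
print(round(peak_wavelength_um(5800), 2))  # the Sun (~5800 K): 0.5 µm, visible green
```

So a person glows quite brightly, just at a wavelength roughly fifteen times too long for our eyes to register.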
Infrared is the type of radiation produced by hot objects; sunlight feels warm on our skin because it contains a healthy serving of IR radiation. But it is also produced and released by other hot objects like a cup of tea, a running car engine, or the main bad guy of a TV show. Without proper insulation, this radiation spreads out from hot bodies, carrying their heat away.
What thermal imaging cameras do is pick up IR radiation and translate it into visible light. The different temperatures of objects can then be used to single them out in the dark. The amount of IR radiation a body gives off rises steeply with its temperature (for an ideal emitter, with the fourth power of its absolute temperature), so, essentially, what the camera sees is the different heat levels of objects. Hotter objects show up in brighter shades like red, yellow, or white, while colder ones show up in dark shades like purples, blues, and black.
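That fourth-power relationship is the Stefan–Boltzmann law, and it explains why modest temperature differences stand out so clearly to a thermal camera. Here is a quick illustration, treating each surface as an ideal (black-body) emitter, which real skin, walls, and teacups only approximate:

```python
# The Stefan-Boltzmann law: an ideal black-body surface radiates
# j = sigma * T^4 watts per square metre, with T in kelvin.
# Because of the fourth power, small temperature gaps produce
# noticeably different radiated power.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W·m⁻²·K⁻⁴

def radiant_exitance(temp_celsius):
    """Ideal black-body radiated power per square metre at temp_celsius."""
    t_kelvin = temp_celsius + 273.15
    return SIGMA * t_kelvin ** 4

room = radiant_exitance(20)  # walls, furniture
skin = radiant_exitance(34)  # exposed human skin
tea = radiant_exitance(80)   # a hot cup of tea

print(skin / room)  # skin radiates roughly 20% more than the room
print(tea / room)   # the tea radiates roughly twice as much as the room
```

A 14 °C gap between skin and room becomes a ~20% difference in radiated power, which is why a person stands out against a cool background.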
Because thermal imaging sees heat, not visible light, it works particularly well in settings such as security or combat. The human body generates a lot of heat compared to its surroundings, especially in situations where individuals exert physical effort. While IR radiation itself can be blocked out quite easily, it is virtually impossible to insulate a living, breathing, moving human in such a way that they won’t emit any IR while still allowing them any practical range of movement.
Here’s some context to help you understand why. A simple cardboard sheet placed between an object and a thermal camera will completely hide it from view. But a person behind the sheet won’t be able to see past it unless a flap or a slit is cut into it, and once such an opening is fashioned, the cardboard no longer hides them from the camera. Alternatively, while clothing can be made almost completely insulating, hiding the wearer’s body from a thermal camera, their breath will still be hotter than the surrounding air and show up on the thermal view.
The two methods listed above describe the main approaches we’ve developed to see in the dark. Each has its own advantages and drawbacks, and they are used for slightly different purposes.
Thermal imaging is excellent at telling different items and bodies apart based on their temperature, but high-end thermal cameras are fragile, bulky, expensive, and power-hungry. It is also hampered by relatively low refresh rates and is quite bad at providing detailed images, especially of objects at similar temperatures. So it is most commonly used to keep watch for activity over large areas in the dark, and it is excellent at spotting moving objects.
Image enhancement, meanwhile, can use cheaper, more compact, more portable devices, and it offers much better discernment of fine detail. Because it uses reflected light, it also gives users a practical way to navigate their surroundings. Objects at room temperature, such as floors, walls, furniture, or the trees in a forest, barely show up on thermal imaging; a soldier walking through a dark house using only a thermal camera would bump into everything, all the time.
Night vision equipment today employs a mix of these methods. The most common approach is to use image enhancement that also translates a bit of the IR spectrum — the near-infrared — to visible light. But the most modern approach is to use fusion night vision, systems that blend thermal imaging with image enhancement within a single device. The first such devices appeared around the year 2000, and they combine the benefits of both imaging types. That being said, such devices also share some of the drawbacks of both imaging systems, such as higher cost, weight, and energy consumption.
Why are their displays always green?
OK, so now it’s time to answer the real question: why do all the night-vision goggles in movies use a green-tinted display? The simple answer is that this is what night-vision displays look like in real life, and they look like that because they use phosphors to produce an image.
Phosphor is a substance that exhibits luminescence, meaning it releases light when struck by a flow of electrons. Night-vision technology records incoming light and transforms it into an electrical current. This current is then fed to a display whose screen is coated with phosphor, producing an image. Green phosphor is used because our eyes are most sensitive to this color: we can distinguish more shades of green than of any other color, allowing users to notice more details. Green-phosphor screens are also very energy-efficient. Since night-vision equipment doesn’t record color in the first place, the display wouldn’t be able to reproduce colors anyway, so this energy-saving property is a further plus: it lets the device run longer on a given battery capacity.
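The eye’s daylight sensitivity peaks near 555 nm, squarely in the green. As a rough illustration, the snippet below approximates that sensitivity curve with a Gaussian; this is a simplification of my own for comparison purposes, not the real tabulated CIE data, and the width value is an assumption.

```python
import math

# Rough sketch of why green phosphor makes sense: the eye's photopic
# (daylight) sensitivity peaks near 555 nm. The Gaussian below is a
# crude stand-in for the real CIE luminosity curve, used only to
# compare relative sensitivity across colours.

PEAK_NM = 555.0   # wavelength of maximum photopic sensitivity
WIDTH_NM = 45.0   # rough curve width (illustrative assumption)

def relative_sensitivity(wavelength_nm):
    """Gaussian approximation of the eye's relative photopic sensitivity."""
    return math.exp(-((wavelength_nm - PEAK_NM) / WIDTH_NM) ** 2)

for colour, nm in [("blue", 450), ("green", 545), ("red", 650)]:
    print(f"{colour:>5} ({nm} nm): {relative_sensitivity(nm):.3f}")
```

In this toy model, a green display near 545 nm lands close to peak sensitivity, while red and blue light of the same power would appear far dimmer, so a green screen delivers the most perceived brightness per watt.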