Introduction to IR (Part 1): The physics behind thermal imaging

April 08, 2018

If you’ve ever wondered why pictures and videos taken with a thermal camera are a mosaic of reds, blues, and yellows, or why night vision footage is black and white, our new blog series will provide those answers, and much more besides. We will look at the underlying physics and technology behind Forward Looking Infrared technology, more colloquially known as thermal imaging.

Today, thermal imaging is used in all sorts of different scenarios—utility and energy companies use it to see where a house might be losing heat through cracks. The police use it to locate suspects from helicopters at night. Autonomous vehicles use thermal cameras to detect and classify objects that are difficult for their standard cameras to distinguish. Weather stations use it to track storms and hurricanes. It’s used in the medical field to diagnose different disorders and diseases. Thermal imaging cameras are mounted on ships to help the crew spot icebergs and passengers overboard.
There are many other interesting applications for this technology. So, if you are interested in learning more about it, make sure to follow our blog series Introduction to IR.


How does thermal imaging work?
Human eyes can see objects that are illuminated by either the sun or another form of light at specific wavelengths in the visual spectrum. In contrast, thermal cameras “see” heat, or electromagnetic radiation within the infrared spectrum, emitted by objects.
Infrared (IR) light is electromagnetic radiation whose energy is carried in small packets called photons. All objects at temperatures above absolute zero (-273.15°C or -459.67°F) emit infrared radiation, and this emitted radiation is what IR (thermal) cameras detect. This is why a thermal camera can operate even in complete darkness.
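An object’s temperature also determines where in the spectrum it radiates most strongly. As a quick illustration (not part of the original article), Wien’s displacement law, λ_peak = b/T, gives the wavelength of peak blackbody emission; the constant b and the example temperatures below are standard physics values, not products discussed here:

```python
# Wien's displacement law: lambda_peak = b / T (T in kelvin)
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_um(temp_celsius):
    """Peak blackbody emission wavelength in micrometers."""
    temp_kelvin = temp_celsius + 273.15
    return WIEN_B / temp_kelvin * 1e6  # meters -> micrometers

# A human body (~37 C) peaks near 9.3 um -- squarely in the long-wave IR band.
print(round(peak_wavelength_um(37.0), 1))   # ~9.3
# A hotter object peaks at a shorter wavelength, e.g. ~200 C -> ~6.1 um.
print(round(peak_wavelength_um(200.0), 1))  # ~6.1
```

This is why room-temperature scenes are imaged in the long-wave infrared rather than in visible light.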
Though it’s not visible to the human eye, infrared radiation can be felt. Hold your hand near the side of a steaming cup of coffee and you feel the heat radiating from the cup. Thermal cameras can see this radiation and convert it into an image that we can then see with our eyes.


How are thermal cameras different from traditional cameras?
A thermal camera produces an image similar to that of a regular camera. But unlike a regular camera, thermal (infrared) sensors detect electromagnetic waves at wavelengths different from those of visible light. This gives thermal cameras the ability to “see” heat, or more technically, infrared radiation. The hotter an object is, the more infrared radiation it produces.
In other words, thermal imaging allows us to see the heat radiating off an object’s surface. Thermal cameras measure the temperature of the various objects in the frame and then assign each temperature a shade of color.
Colder temperatures are often represented as a shade of blue, purple, or green, while warmer temperatures are shown as a shade of red, orange, or yellow.
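The colorization step can be sketched as a simple normalize-and-map operation. Real cameras use carefully tuned palettes; the linear blue-to-red ramp below is an illustrative assumption, not the method of any particular camera:

```python
def temp_to_rgb(temp, t_min, t_max):
    """Map a temperature to an RGB triple on a linear blue->red ramp."""
    # Normalize into [0, 1] over the scene's temperature range, clamping outliers.
    t = (temp - t_min) / (t_max - t_min)
    t = max(0.0, min(1.0, t))
    # Cold -> blue (0, 0, 255); hot -> red (255, 0, 0).
    return (int(255 * t), 0, int(255 * (1 - t)))

# Each pixel's measured temperature gets its own shade:
scene = [18.0, 22.5, 36.8]  # degrees C for three pixels
pixels = [temp_to_rgb(t, 15.0, 40.0) for t in scene]
```

Swapping in a different palette (grayscale, “ironbow”, etc.) only changes the mapping function, not the underlying temperature measurements.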

Some thermal cameras use grayscale instead. Night vision footage from security cameras is usually black and white, and there is a good reason for that: human eyes can differentiate shades of gray better than shades of other colors, such as red or blue. Because of that, most night vision cameras use a monochrome output to make it easier for us to understand what’s in the image. That is also why police helicopters use grayscale to make suspects stand out.


What’s the difference between uncooled and cooled cameras?
Thermal cameras these days can use either uncooled or cooled sensors to detect electromagnetic radiation.
In the more common uncooled thermal camera, the infrared-detecting elements are contained in a unit that operates at room temperature. Cooled thermal cameras, by contrast, house their detectors in a unit chilled to 32 degrees F (0 degrees C) or, typically, far lower. Because their elements are cooled, these systems offer much better sensitivity than uncooled systems.
Now, let’s get into the physics for a moment.

Electromagnetic Spectrum

The infrared spectrum constitutes only a part of the whole electromagnetic spectrum (as shown in the visual) and is itself divided into three effective ranges by wavelength:

  • Long-Wave IR (LWIR) (7.5-14µm) – typically used by uncooled IR cameras;
  • Medium-Wave IR (MWIR) (3-5µm) – typically used by cooled IR cameras;
  • Short-Wave IR (SWIR) (1-3µm) – typically used in active illumination night vision technology.
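The three bands differ in how much energy each photon carries, via the standard relation E = hc/λ. The short sketch below (an illustration added here, using textbook constants and one representative wavelength per band) makes the comparison concrete:

```python
# Photon energy E = h*c / wavelength: shorter-wavelength SWIR/MWIR photons
# carry more energy per photon than LWIR photons.
H = 6.626e-34  # Planck's constant, J*s
C = 2.998e8    # speed of light, m/s

def photon_energy_ev(wavelength_um):
    """Energy of a single photon at the given wavelength, in electron-volts."""
    joules = H * C / (wavelength_um * 1e-6)
    return joules / 1.602e-19  # convert joules to eV

# One representative wavelength from each band:
for band, wl in [("SWIR", 2.0), ("MWIR", 4.0), ("LWIR", 10.0)]:
    print(band, round(photon_energy_ev(wl), 2), "eV")
```

Note the roughly 5x energy difference between a 2 µm SWIR photon and a 10 µm LWIR photon, which is part of why the bands demand different detector designs.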

A longer wavelength enables a photon to travel through environments with larger particles (e.g. dust or fog). Hence uncooled devices designed for the 7.5-14µm band are better suited for dusty or foggy environments.
Examples of such cameras include the PTZ Systems Accuracii AT, Accuracii XRU, and Accuracii ML; security thermal cameras Sii OP and Accuracii TO; driver vision enhancement military cameras Tavor BS, and pilot’s enhanced vision system Everest EVS.

Photons carry more energy when their wavelength is shorter. Hence cooled devices operating in the 3-5µm band are suitable for longer-range surveillance tasks.
Examples include the Accuracii XR PTZ System and pilot’s enhanced vision system EVS AP.

Hopefully, this has answered some questions. In the next post in this series we will get “under the hood” of thermal cameras and take a look at finer, more technical details like detectors, pixels, and pixel pitch.

Read Part 2 of the series: Cooled vs. uncooled cameras, sensitivity, resolution, frame rate