
How a digital camera sensor works

Taking a photograph might seem as simple as aiming and clicking a button, but a great deal more goes on behind the scenes and inside the camera. While the lens is often seen as one of the most important parts, the sensor tucked away inside the body is where the actual magic happens.

In layman’s terms, the sensor is the piece of technology that captures the image and ensures that the colours come out just the way they are intended. There are two types of sensor: the Charge-Coupled Device (CCD) and the Complementary Metal-Oxide Semiconductor (CMOS). While both perform the same function, they work in different ways. The CCD is older technology and is in the process of being phased out by major camera manufacturers, while CMOS has become more prominent over the last couple of years.

Both types convert light into electrons, and the easiest way to think about a camera sensor is to imagine millions of solar cells packed onto a small electronics board – just waiting to receive light.

The sensor itself is made up of three layers: the sensor substrate, the Bayer filter, and the microlenses.

The sensor substrate, made from silicon, measures light intensity. If you were to look at a sensor up close, you would notice that each pixel looks almost like a well or a bucket. Light travels towards the sensor and is trapped in each tiny bucket, where its intensity is measured; which colour of light reaches each bucket is controlled by the Bayer filter.

Since the sensor on its own can only record light in monochrome, the filter is bonded over it to allow only photons of a certain colour into each pixel, or bucket. The filter looks like a chess board of green, blue and red squares, with twice as many green pixels as either of the other colours.
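
To make that chess-board layout concrete, here is a minimal Python sketch (using NumPy). The RGGB arrangement below is one common Bayer layout, and the function is ours for illustration, not any manufacturer's code:

```python
import numpy as np

def bayer_mosaic(image):
    """Reduce an (H, W, 3) RGB image to the one-value-per-pixel
    mosaic a Bayer sensor actually records.

    Assumes an RGGB layout: in every 2x2 block the top-left pixel
    sits under a red filter, the other diagonal under green, and
    the bottom-right under blue.
    """
    h, w, _ = image.shape
    mosaic = np.zeros((h, w), dtype=image.dtype)
    mosaic[0::2, 0::2] = image[0::2, 0::2, 0]  # red filter sites
    mosaic[0::2, 1::2] = image[0::2, 1::2, 1]  # green sites, even rows
    mosaic[1::2, 0::2] = image[1::2, 0::2, 1]  # green sites, odd rows
    mosaic[1::2, 1::2] = image[1::2, 1::2, 2]  # blue filter sites
    return mosaic
```

Notice that half of all the sites are green – twice as many as red or blue, just as in the filter itself.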

“In this way, when the pixel measures the number of light photons it has captured, it knows that every photon is of a certain colour. For example, if a pixel that has a red filter above it has captured 5000 photons, it knows that they are all photons of red light, and it can therefore begin to calculate the brightness of red light at that point,” SensorLand explains.
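
That photon-to-brightness step can be sketched in a few lines of Python. The full-well capacity used here is an assumed, illustrative figure; the real value varies from sensor to sensor:

```python
FULL_WELL_CAPACITY = 50_000  # assumed photons one "bucket" holds before clipping

def brightness_8bit(photon_count):
    """Map a photon count to an 8-bit brightness value (0-255)."""
    fraction = min(photon_count, FULL_WELL_CAPACITY) / FULL_WELL_CAPACITY
    return round(fraction * 255)

# The quote's example: 5,000 photons of red light at one pixel.
print(brightness_8bit(5_000))  # -> 26, a fairly dim red at that point
```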

With that knowledge, the sensor can work out how much light of which colour landed where, and the camera starts to build the photo from that information. Each pixel, with its colour value, contributes to the photograph until the completed image is created.

The microlenses also play a vital part in preserving colours for the sensor. Between the pixels there are tiny gaps, and any light that falls on them is wasted. Microlenses are placed within these gaps and direct that otherwise wasted light back onto the main pixels for capture.

If you have been following how a sensor works, you will have noticed that the sensor only captures three colours. How, then, is a full-colour image produced? As if by magic, the clever folks who make cameras have built algorithms into them (a process called demosaicing) that work out the full colour for each pixel.

To complicate things a bit further, the full colour of a picture is achieved by grouping four pixels (2×2) together as one unit, so that the camera can estimate the colour levels from one red, one blue and two green pixels.
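
Here is a rough Python sketch of that 2×2 grouping, again assuming the RGGB layout from earlier; real cameras use far more sophisticated demosaicing than this:

```python
import numpy as np

def demosaic_2x2(mosaic):
    """Collapse an (H, W) RGGB mosaic into an (H/2, W/2, 3) colour
    image, one full-colour pixel per 2x2 block. Assumes H and W are even.
    """
    m = mosaic.astype(np.float32)            # avoid integer overflow
    r = m[0::2, 0::2]                        # one red value per block
    g = (m[0::2, 1::2] + m[1::2, 0::2]) / 2  # average the two greens
    b = m[1::2, 1::2]                        # one blue value per block
    return np.dstack([r, g, b])
```

Grouping like this would halve the resolution in each direction, which is why real demosaicing algorithms instead estimate the two missing colours at every single pixel from its neighbours.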

Speaking of pixels, you might have seen the term Effective Pixels written on the side of a camera box. Because of that grouping of four pixels, the pixels along the outer edges of the sensor have no neighbours to be paired with, so their data isn't as accurate as the rest.

So effective pixels are the number of pixels that can be grouped together to produce the full-colour image, while actual pixels are the total number of pixels on the sensor, regardless of their placement.
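
As a toy illustration of the difference, assuming (hypothetically) that only a one-pixel border around the sensor is lost to the grouping:

```python
width, height = 6000, 4000                     # an illustrative 24 MP sensor
actual_pixels = width * height                 # 24,000,000
effective_pixels = (width - 2) * (height - 2)  # drop the one-pixel border
print(actual_pixels, effective_pixels)         # 24000000 23980004
```

Real manufacturers mask off varying amounts of the sensor edge, which is why the actual and effective pixel counts on a spec sheet rarely match exactly.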
