There used to be a time when a camera’s sensor wasn’t a key factor when purchasing a new body. Manufacturers would simply equip a camera with a sensor appropriate to that model’s target user, and buying decisions would be centred on more tangible aspects such as camera functionality and lens options.
While these factors are still as relevant today as they have ever been, the past few years have seen the boundaries between different classes of camera erode. Small Compact System Cameras now belie their size with relatively large sensors, while pocket-friendly compacts can offer pixel counts beyond those of more expensive DSLRs. Together with the recent rush of enthusiast compacts with larger-than-average sensors, it can be difficult to know which camera has the most appropriate sensor for your requirements, a confusion compounded by non-standard terminology and the various methods used to state a sensor’s physical size.
The following pages examine all the different options available and take a closer look at the technology which gives some sensors advantages over others. So, whether you’re looking to buy into a completely new system or you’re just curious as to what the jargon actually means, read on for our complete guide to sensors.
CCD and CMOS
Throughout digital imaging’s brief history, sensors have largely been one of two varieties: charge-coupled device (CCD) and complementary metal oxide semiconductor (CMOS). Although both share many commonalities, the former used to be considered the superior format thanks to its better noise performance. As such, it was the first choice for many DSLRs and high-performance cameras designed for scientific applications, leaving CMOS sensors to be used for lower-end consumer products.
As CMOS sensors attracted more development, and partly thanks to their adoption of technologies designed for CCDs, their advantages began to outweigh their issues. Today, CMOS sensors are the number one choice for the majority of commercially available stills and video cameras, with CCDs confined to high-end applications and a handful of compact cameras.
One of the reasons for this is that CMOS sensors have far more technology incorporated onto them than CCDs. Initially this put them at a disadvantage, as the extra circuitry took up more of the sensor’s surface, which decreased the area sensitive to light. However, it also meant that the charge accumulated at each photosite could be converted to voltage straight away, unlike on CCDs, which shift charge from pixel to pixel. This gave CMOS sensors a significant advantage with regards to power consumption.
Since then, thanks to improvements in sensor fabrication, and the incorporation of technologies to improve light capture, noise reduction and other image-processing tasks, CMOS sensors have become far more attractive to manufacturers who need to fit an awful lot of technology in as small a space as possible.
Size and resolution
Initially, there was a great desire for a camera to offer a high number of megapixels in order to produce enlargements without image degradation. Compact cameras and DSLRs alike continued to offer a greater number of megapixels, and although this has slowed in recent years megapixel counts continue to rise. It is now common for entry-level DSLRs to offer 16MP and professional DSLRs to offer 21MP and above.
The difference in image quality between the two, however, is subject to more than pixel count alone. After all, even compact cameras are now offering 20MP sensors, yet many people will appreciate that a DSLR will produce better image quality.
Aside from the optical differences between lenses used in compacts and those designed for DSLRs, the main reason for this is the size of each pixel. The sensors inside professional DSLRs are considerably larger than those inside compacts, which in turn allows for each pixel to be larger. For a given exposure, a larger pixel can accept more light than a smaller one. Being able to accept more light increases the signal-to-noise ratio, which in turn helps to produce less noise in images and increase dynamic range.
So, too few pixels on a sensor and resolution is compromised. Too many, however, and image noise and dynamic range suffer. Manufacturers have therefore tried to strike a balance between the two, with technological improvements allowing an increasing number of pixels to be incorporated onto a sensor while maintaining the same standard of image quality.
Some manufacturers have deviated from the norm by radically rethinking the designs of their sensors. Most sensors, for example, use a Bayer colour filter array (CFA) to determine colour information (right). Foveon, however, has shunned this and instead exploited the fact that light penetrates silicon to different depths, according to its wavelength. So, by using layers of silicon, its sensors are able to capture full colour information at each photosite, without the need for either a CFA or the subsequent demosaicing process (explained below).
Fujifilm’s Super CCD technology, meanwhile, saw octagonal photodiodes arranged at a 45° angle to the sensor to increase its sensitivity, before Super CCD HR and Super CCD SR revised this for the respective benefits of resolution and sensitivity. Today, the company continues this with its EXR and X-Trans arrays, the latter boasting an unconventional colour filter array to minimise aliasing (below).
These developments were specific to Fujifilm and Sigma, although in recent years back-side illuminated (BSI) architectures have been adopted by many manufacturers. The idea behind this is simple: the wiring which usually sits in front of the light-sensitive photodiodes is placed behind the silicon substrate, where it does not pose an obstruction for incoming light. By doing so, the sensor can capture light more effectively, which improves noise performance.
This system is particularly beneficial for compact cameras, whose small, densely populated sensors traditionally struggle in low-light environments. Today, most compact cameras above a certain price have adopted this type of sensor, as have cameraphones such as the last few Apple iPhones.
Microlenses, which are used across cameraphones, compacts and DSLRs alike, are a more established sensor feature. These help to direct light which would otherwise hit non-sensitive parts of the sensor into each photosite, thereby increasing sensitivity (left).
Beyond image quality
Such changes present clear benefits for image quality, although sensors have long been subject to revisions to benefit other aspects of their performance. In recent years, the most obvious change has concerned autofocus, which has gained significance thanks to the ever-increasing Compact System Camera (CSC) market.
Unlike DSLRs, whose reflex construction permits a separate phase-detect AF module underneath the main mirror, CSCs have to perform autofocus on their main imaging sensor as standard by moving the lens elements back and forth until the position of highest contrast is achieved (in the same way as compact cameras). This system is considerably slower than the phase-detect AF system.
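The contrast-detect routine described above can be sketched as a simple search for the lens position that yields the highest contrast. The scene model and range of positions below are illustrative assumptions, not any manufacturer’s implementation:

```python
def contrast_af(contrast_at, positions):
    """Contrast-detect AF: sample each lens position and keep the sharpest.

    contrast_at is a function returning a contrast score for a lens position;
    in a real camera this would be measured from the imaging sensor itself.
    """
    best_pos, best_contrast = None, float("-inf")
    for pos in positions:
        score = contrast_at(pos)
        if score > best_contrast:
            best_pos, best_contrast = pos, score
    return best_pos

# Hypothetical scene whose contrast peaks when the lens sits at position 7.
sharpest = contrast_af(lambda p: -(p - 7) ** 2, range(0, 15))
```

Real implementations refine this with coarse-to-fine steps and stop once contrast begins to fall, but the principle of hunting for a contrast peak is the same, and it is this hunting that makes the system slower than phase detection.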
One of the ways this has been addressed is by incorporating pixels on the main imaging sensor whose purpose is to perform autofocus in the same phase-detect manner as the AF modules in DSLRs, creating a “hybrid” contrast/phase-detect AF system. This brings with it the additional benefit of AF tracking during video recording, which explains why it has featured on both CSCs and DSLRs to date.
The majority of digital cameras contain anti-aliasing (or “low-pass”) filters, which blur the image slightly in order to avoid aliasing artefacts. The most notable exceptions to this are Sigma’s cameras, which use Foveon’s stacked-silicon principle for image acquisition, and Fujifilm’s X-Trans technology, which uses a non-standard colour filter array to avoid aliasing artefacts. Recently, however, Nikon and Pentax have each released a DSLR without an anti-aliasing filter, alongside an identical model with the filter in place. This may lead some to wonder whether such filters are necessary in the first place, to which the answer depends on the sort of photography you practise.
Pentax and Nikon have both stated that these models are particularly suitable for capturing landscapes, the reason being that such scenes rarely present the kinds of repetitive details that cause aliasing artefacts. This is in contrast to fashion photographers, who may be faced with the fine weave of a piece of clothing on a daily basis. As anti-aliasing filters work by cutting off the highest spatial frequencies before they reach the sensor – and thus blur the image slightly – photographers who know they won’t encounter such repetitive details may wish to use a camera without a filter for the benefit of detail retention. As most cameras do contain such a filter, images are typically sharpened as part of in-camera processing to offset its effects.
Essential Guide to Sensors – Glossary
Analogue-to-digital converter (ADC)
A device which converts a continuous signal into discrete digital data. The higher the bit depth of an ADC, the greater the number of possible values that may be assigned to the signal upon conversion, and thus the smaller the chance of an error occurring. Many recent DSLRs have 14-bit ADCs on board, which allow 16,384 tones per colour channel; since JPEGs are stored as 8-bit files as standard, they cannot contain the full information from such converters. This level of information may be maintained in Raw images, or alternatively in uncompressed TIFF files with a high enough bit depth.
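The relationship between bit depth and tonal range is a simple power of two, which a minimal sketch makes concrete:

```python
def tones(bit_depth):
    """Number of discrete levels an ADC of the given bit depth can record."""
    return 2 ** bit_depth

# A 14-bit Raw file versus an 8-bit JPEG, per colour channel.
print(tones(14))  # 16384
print(tones(8))   # 256
```

Each extra bit doubles the number of available tones, which is why the step from 12-bit to 14-bit conversion quadruples the tonal resolution per channel.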
APS-C
A sensor whose dimensions are roughly the same as the “Classic” format negatives from the APS film system (25.1×16.7mm). APS-C sensors are used in many DSLRs and Compact System Camera models, as well as in enthusiast compact cameras such as Fujifilm’s X100s. Being physically smaller than full frame sensors, these apply a crop factor to mounted lenses, typically 1.5-1.6x.
Back-side illumination (BSI)
A back-side illuminated sensor features a different construction from standard types, where the wiring and other physical obstructions usually on the top of the sensor sit behind the substrate. As these are no longer in the path of incoming light, it helps a sensor to gather light more effectively, thus reducing image noise.
Bit depth
The number of possible tones that can be recorded in an image, with a higher bit depth signifying a greater range of tones available. With sensors, the bit depth refers to that of the analogue-to-digital converter.
CCD
Charge-coupled device. A type of sensor which has long featured in compact cameras, DSLRs and scanners, as well as in high-end cameras developed for scientific purposes. These work by transferring charge via a bucket-brigade system down to the end of each column on the sensor, before it is read out and converted to a discrete digital value. In many consumer applications these have largely been replaced by CMOS alternatives.
CMOS
Complementary metal-oxide-semiconductor, the main type of sensor used in cameraphones, compacts, Compact System Cameras and DSLRs. These work in a similar manner to CCDs, although their construction sees more functionality integrated onto the sensor itself.
Colour Filter Array (CFA)
Sensors do not see colour as standard, so some way of determining colour information at each photosite is necessary. A CFA is the usual way this is achieved: this is an array of coloured filters which sit over the sensor, through which light passes before it enters each photosite. As only one colour sits over each pixel, other colours for that pixel are determined through demosaicing (right). Although a red-green-blue-green mosaic pattern is usually used, some sensors use a different colour combination, or work on a different principle entirely.
Crop factor
For any lens, the smaller the sensor with which it is used, the more peripheral areas are cut away. As this restriction leads to a smaller angle of view, it replicates the effect of using a longer lens. APS-C DSLRs typically apply a 1.5-1.6x crop factor; Micro Four Thirds sensors – being roughly a quarter the size of full frame – apply 2x.
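The crop factor is simply the ratio of the full-frame diagonal to the smaller sensor’s diagonal, which can be sketched as below; the APS-C dimensions used are typical figures rather than any one manufacturer’s specification:

```python
# Diagonal of a full-frame (36x24mm) sensor, roughly 43.3mm.
FULL_FRAME_DIAGONAL = (36 ** 2 + 24 ** 2) ** 0.5

def crop_factor(width_mm, height_mm):
    """Crop factor = full-frame diagonal / sensor diagonal."""
    return FULL_FRAME_DIAGONAL / (width_mm ** 2 + height_mm ** 2) ** 0.5

def equivalent_focal_length(focal_mm, width_mm, height_mm):
    """Focal length that frames the same view on full frame."""
    return focal_mm * crop_factor(width_mm, height_mm)

# A 50mm lens on a typical 23.6x15.7mm APS-C sensor frames like roughly 76mm.
print(round(equivalent_focal_length(50, 23.6, 15.7)))  # 76
```

The same arithmetic gives Micro Four Thirds (17.3×13mm) a factor close to 2x, matching the glossary’s figures.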
Demosaicing
The process of interpolating colour information into an image so that each pixel contains complete colour data. Missing values are calculated from surrounding pixels that did record the colour in question. This is required for sensors that use a standard colour filter array, as each pixel only receives information for one colour, and it is a process that can lead to false colour patterning and softness.
EXR
Fujifilm’s proprietary sensor technology. This uses rotated photodiodes together with a non-standard colour filter array and back-side illumination, and adjusts its behaviour to best capture the scene.
Foveon
A sensor technology that is found in Sigma’s DSLRs and compact cameras, where the standard colour filter array is replaced by layers of silicon. These are penetrated by different wavelengths of light to varying depths, which allows each pixel to receive full red, green and blue colour information without the need for demosaicing. Sigma claims that this system creates sharper images with more accurate colour.
Full frame
A sensor whose dimensions roughly match those of a frame of 35mm film (36x24mm). Because of this they apply no “crop factor” to mounted lenses. These are the largest sensors used in DSLRs and feature in the most expensive models, such as Nikon’s D4 and Canon’s EOS-1D X.
Microlenses
An array of very small lenses which sit over a camera’s sensor. These help to funnel as much light as possible into each photosite, although they can cause purple fringing too.
Pixel pitch
The distance between the centres of two adjacent pixels on a sensor, stated in micrometres (μm). As an example, a 20MP compact camera with a small sensor may feature a pixel pitch of around 1.2µm, while the same pixel count on a full frame sensor may be around 5.5µm.
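An approximate pitch can be worked out from the sensor dimensions and pixel count, assuming square pixels covering the full active area; the compact-sensor dimensions below are a typical 1/2.3in figure, and real cameras deviate somewhat because of border pixels and layout:

```python
def pixel_pitch_um(sensor_width_mm, sensor_height_mm, megapixels):
    """Approximate pixel pitch in micrometres, assuming square pixels."""
    area_um2 = (sensor_width_mm * 1000) * (sensor_height_mm * 1000)
    return (area_um2 / (megapixels * 1e6)) ** 0.5

# 20MP spread over a ~6.17x4.55mm compact sensor vs a 36x24mm full frame one.
print(round(pixel_pitch_um(6.17, 4.55, 20), 2))  # 1.18
print(round(pixel_pitch_um(36, 24, 20), 2))      # 6.57
```

The roughly fivefold difference in pitch means each full-frame pixel covers around thirty times the area, which is the light-gathering advantage discussed earlier in the article.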
Signal-to-noise ratio (SNR)
The ratio of the light signal to unwanted noise. The higher this ratio is in favour of the signal, the less noise can be seen and thus the better the image quality.
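SNR is often quoted in decibels, and a short sketch shows why bigger pixels help: photon shot noise grows only as the square root of the signal, so collecting four times the light buys roughly 6dB. The photon counts below are illustrative, not measurements from any particular sensor:

```python
import math

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels (20*log10 for amplitude ratios)."""
    return 20 * math.log10(signal / noise)

# Shot noise is roughly sqrt(signal): quadrupling the captured photons
# doubles the noise but quadruples the signal, gaining about 6dB.
print(round(snr_db(10000, math.sqrt(10000)), 1))  # 40.0
print(round(snr_db(40000, math.sqrt(40000)), 1))  # 46.0
```

This is the mechanism behind the article’s point that larger pixels, which can accept more light for a given exposure, deliver cleaner images and wider dynamic range.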