The ability to change lenses on a camera is a great advantage, but until recently doing the same with a sensor was well outside of a consumer camera's domain. Now, with Ricoh's modular GXR system a reality, it's possible for us to match the right kind of sensor with the right kind of scenario. Unless everyone buys into this system, though, our choices will be restricted by the models we can either afford or which we deem practical for our purposes.
Even so, thanks partly to the advent of hybrid system cameras, the consumer now has the option of a number of different cameras with differently-sized sensors, all at the same price point. Each type of sensor is designed with specific tasks in mind, and so each bears both advantages and disadvantages - with such a choice on offer it pays to understand what these are, particularly if you are considering investing in a new model. The following feature looks at these in more detail, and at sensors in general. But first, what exactly is a sensor?
What is a sensor?
A sensor is a solid-state device which captures the light required to form a digital image. While the process of manufacturing a sensor is well outside of the scope of this feature, what essentially happens is that wafers of silicon are used as the base for the integrated circuit, which is built up via a process known as photolithography. This is where patterns of the circuitry are repeatedly projected onto the (sensitized) wafer, before being treated so that only the pattern remains. Funnily enough, this bears many similarities to traditional photographic processes, such as those used in a darkroom when developing film and printing.
This process creates millions of tiny wells known as pixels, and in each pixel there will be a light sensitive element which can sense how many photons have arrived at that particular location. As the charge output from each location is proportional to the intensity of light falling onto it, it becomes possible to reproduce the scene as the photographer originally saw it - but a number of processes have to take place before this is all possible.
As a sensor is an analogue device, this charge first needs to be converted into a signal, which is amplified before being converted into digital form. So, an image may eventually appear as a collection of different objects and colours, but at a more basic level each pixel is simply given a number so that it can be understood by a computer (if you zoom into any digital image far enough you will be able to see that each pixel is simply a single coloured square).
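This analogue-to-digital step can be sketched in a few lines of code. The function below is a hypothetical illustration, not any camera's actual converter: it maps a pixel's amplified voltage onto one of a fixed number of integer levels, here assuming a 12-bit converter (4,096 levels).

```python
def quantise(voltage, v_max=1.0, bits=12):
    """Map an analogue voltage in [0, v_max] to a digital number."""
    levels = 2 ** bits
    # Clamp the voltage to the converter's input range.
    voltage = min(max(voltage, 0.0), v_max)
    # Scale to the available integer levels (0 to levels - 1).
    return min(int(voltage / v_max * levels), levels - 1)

# A dim, a mid-grey and a near-saturated pixel:
print([quantise(v) for v in (0.02, 0.5, 0.99)])  # → [81, 2048, 4055]
```

Every pixel in the final file is just such a number (or three of them, once colour has been reconstructed), which is why a digital image is ultimately a grid of integers.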
As well as being an analogue device, a sensor is also colourblind. For it to sense different colours, a mosaic of coloured filters is placed over the sensor, with twice as many green filters as red and blue, to match the heightened sensitivity of the human visual system towards the colour green. This system means that each pixel only receives colour information for either red, green or blue - as such, the values for the other two colours have to be guessed by a process known as demosaicing. The alternative to this system is the Foveon sensor, which uses layers of silicon to absorb different wavelengths, the result being that each location receives full colour information.
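The "guessing" involved in demosaicing can be illustrated with a minimal sketch. Real cameras use far more sophisticated algorithms; this example simply averages neighbouring photosites, assuming an RGGB Bayer pattern and made-up raw values.

```python
# A 4x4 Bayer mosaic of raw values (one number per photosite).
raw = [
    [200,  60, 210,  62],   # R G R G
    [ 58,  30,  59,  31],   # G B G B
    [198,  61, 205,  60],   # R G R G
    [ 57,  29,  58,  30],   # G B G B
]

def demosaic_at(y, x):
    """Estimate (R, G, B) at an interior blue site of an RGGB mosaic."""
    # Blue is measured directly at this site; green is averaged from
    # the four edge neighbours, red from the four diagonal neighbours.
    b = raw[y][x]
    g = (raw[y-1][x] + raw[y+1][x] + raw[y][x-1] + raw[y][x+1]) / 4
    r = (raw[y-1][x-1] + raw[y-1][x+1] + raw[y+1][x-1] + raw[y+1][x+1]) / 4
    return (r, g, b)

print(demosaic_at(1, 1))  # full colour at a site that only measured blue
```

A Foveon-style sensor skips this step entirely, since every location measures all three channels directly.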
The more the merrier?
At one point it was necessary to develop sensors with more and more pixels, as the earliest types were not sufficient for the demands of printing. That barrier was soon broken, but sensors continued to be developed with a greater number of pixels, and compacts that once had two or three megapixels were soon replaced by the next generation of four or five megapixel variants. This has now escalated up to the 14MP compact cameras on the market today. As helpful as this is to manufacturers from a marketing perspective, it did little to educate consumers as to how many were necessary - and more importantly, how much was too much.
More pixels can mean more detail, but the size of the sensor is crucial for this to hold true: this is essentially because smaller pixels are less efficient than larger ones. The main attributes which separate images from compacts and those from a DSLR are dynamic range and noise, and the latter type of camera fares better with regards to each. As a DSLR's pixels can be made larger, each can hold more light in relation to the noise created by the sensor through its operation, and a higher ratio in favour of the signal produces a cleaner image. Noise reduction technology, used in both compact cameras and DSLRs, aims to cover up any noise which has formed in the image, but this usually comes at the expense of detail. On compact cameras it is applied as standard and usually cannot be deactivated, whereas DSLRs provide the option to do so (meaning you can take more care to process noise out later yourself).
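The signal-to-noise advantage of larger pixels can be made concrete with a rough calculation. The figures below are illustrative assumptions, not measurements from any real sensor: a large pixel collecting ten times more light than a small one, with similar electronic (read) noise, plus the unavoidable photon shot noise that grows as the square root of the signal.

```python
import math

def snr_db(signal_electrons, read_noise_electrons):
    """Signal-to-noise ratio in decibels, with shot noise plus read noise."""
    shot_noise = math.sqrt(signal_electrons)
    total_noise = math.sqrt(shot_noise ** 2 + read_noise_electrons ** 2)
    return 20 * math.log10(signal_electrons / total_noise)

# A large DSLR-style pixel collecting 40,000 electrons vs a small
# compact-camera pixel collecting 4,000, each with 10 electrons read noise:
print(round(snr_db(40000, 10), 1))  # larger pixel: cleaner image
print(round(snr_db(4000, 10), 1))   # smaller pixel: noisier image
```

The larger pixel's higher ratio in favour of the signal is exactly what produces the cleaner image described above.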
The increased capacity of larger pixels also means that they can contain more light before they are full - and a full pixel is essentially a blown highlight. When this happens on a densely populated sensor, it's easy for the charge from one pixel to overflow to neighbouring sites, which is known as blooming. By contrast, a larger pixel can contain a greater range of tonal values before this happens, and certain varieties of sensor will be fitted with anti-blooming gates to drain off excess charge. The downside to this is that the gates themselves require space on the sensor, and so again compromise the size of each individual pixel.
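One common way to express this tonal headroom is dynamic range in stops: the ratio of the full-well capacity (the point at which the pixel overflows) to the noise floor, on a base-2 logarithmic scale. The capacities and read-noise figures below are assumptions for illustration only.

```python
import math

def dynamic_range_stops(full_well_electrons, read_noise_electrons):
    """Dynamic range in stops: log2 of full-well over the noise floor."""
    return math.log2(full_well_electrons / read_noise_electrons)

large_pixel = dynamic_range_stops(60000, 8)   # roomy DSLR-style pixel
small_pixel = dynamic_range_stops(8000, 6)    # cramped compact pixel
print(round(large_pixel, 1), round(small_pixel, 1))
```

Each extra stop doubles the range of tonal values a pixel can record before a highlight blows, which is why densely populated small sensors clip sooner.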
Types of Sensor
CCD sensor
Used for a number of years in video and stills cameras, CCDs long offered superior image quality to CMOS sensors, with better dynamic range and noise control.
To this day they are used in budget compacts, but their higher power consumption and more basic construction have meant that they have been largely replaced by CMOS alternatives. They are, however, still used in medium format backs, where the benefits of CMOS technology are not as necessary.
CMOS sensor
Long seen as an inferior competitor to the CCD, CMOS sensors have progressed to match the CCD standard.
With more functionality built on-chip than CCDs, CMOS sensors are able to work more efficiently and require less power to do so, and are better suited to high-speed capture.
As such, they are required in cameras where burst shooting is key, from Casio's EX-F series of compacts to Canon's EOS 1D series of DSLRs.
Foveon X3 sensor
Foveon X3 is based on CMOS technology and used in Sigma's compact cameras and DSLRs.
The Foveon X3 system does away with the Bayer filter array, and opts for three layers of silicon in its place.
Shorter wavelengths are absorbed nearer to the surface while longer ones travel further through.
As each photosite receives a value for each of red, green and blue, no demosaicing is required.
LiveMOS sensor
LiveMOS technology has been used for the Four Thirds and Micro Four Thirds range of cameras.
LiveMOS is claimed to give the image quality of CCDs with the power consumption of CMOS sensors.
A guide to the main sensor sizes used in today's cameras
Full Frame - 36 x 24mm
The largest sensor size commonly found in DSLRs. It shares its dimensions with a frame of 35mm negative film, and so applies no crop factor to lenses.
Cameras: Canon EOS 5D Mark II, Nikon D700, Sony A900
APS-H - 28.1 x 18.7mm
As featured in Canon's 1D series of cameras. These typically combine the slightly larger sensor with a modest pixel count for speed and high ISO performance, and apply a 1.3x crop factor to mounted lenses.
Cameras: Canon 1D Mark IV, Canon 1D Mark III
APS-C - 23.6 x 15.8mm (varies)
The most common sensor size in consumer and semi-professional DSLRs, the APS-C sensor applies a crop factor of between 1.5x and 1.7x to mounted lenses.
Cameras: Nikon D300s, Sony A550, Sigma DP-2
Four Thirds - 17.3 x 13mm
As used in both Four Thirds DSLRs and Micro Four Thirds models, these are roughly a quarter of the size of a full-frame sensor. Their size results in a 2x crop factor, doubling the effective focal length of a mounted lens.
Cameras: Olympus E450, Olympus EP-2, Panasonic GH1
1/1.7in - approx. 7.6 x 5.7mm
Among the largest sensor sizes used in compact cameras. This allows pixels to be larger and noise performance to be improved over standard compacts.
Cameras: Canon PowerShot S90, Ricoh GRD III
1/2.3in - approx. 6.2 x 4.6mm
Among the smallest sizes of sensor used in today's budget compacts. While cheaper to manufacture than larger varieties, the smaller pixels aren't quite as efficient, giving rise to noisy images and a reduced dynamic range.
Cameras: Canon PowerShot A490, Fujifilm FinePix JZ500, Panasonic Lumix FS33
Please note: The last two measurements do not refer directly to the size of the sensor - rather, they are derived from the sizes of the video camera tubes once used in television cameras.
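The crop factors quoted above all follow from one simple ratio: the full-frame diagonal (about 43.3mm) divided by the sensor's own diagonal. The short sketch below derives them from the sensor dimensions listed in this guide.

```python
import math

# Diagonal of a full-frame (36 x 24mm) sensor, roughly 43.27mm.
FULL_FRAME_DIAGONAL = math.hypot(36, 24)

def crop_factor(width_mm, height_mm):
    """Crop factor: full-frame diagonal over this sensor's diagonal."""
    return FULL_FRAME_DIAGONAL / math.hypot(width_mm, height_mm)

for name, w, h in [("APS-H", 28.1, 18.7), ("APS-C", 23.6, 15.8),
                   ("Four Thirds", 17.3, 13.0)]:
    print(f"{name}: {crop_factor(w, h):.1f}x")
```

Multiplying a lens's focal length by this factor gives its effective focal length on the smaller sensor, which is why a 50mm lens frames like a 100mm lens on a Four Thirds body.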
Anatomy of a sensor
A - Colour filter array
The vast majority of cameras use the Bayer GRGB colour filter array, which is a mosaic of filters used to determine colour. Each pixel only receives information for one colour - the process of demosaicing determines the other two.
B - Low-pass filter / Anti-aliasing filter
These are designed to limit the spatial frequency of detail passing through to the sensor, to prevent the effects of aliasing (such as moiré patterning) in fine, repetitive details. What results is a slight blurring of the image, which compromises detail, but manufacturers attempt to rectify this by sharpening the image.
C - Infrared filter (hot mirror)
Camera sensors are sensitive to some infrared light. A hot mirror in between the lens and the low pass filter prevents this from reaching the sensor, and helps minimise any colour casts or other unwanted artefacts from forming.
D - Circuitry
CCD and CMOS sensors differ in terms of their construction. CCDs collect the charge at each photosite and transfer it from the sensor through a light-shielded vertical array of pixels, before it is converted to a signal and amplified. CMOS sensors convert charge to voltage and amplify the signal at each pixel location, and so output voltage rather than charge. CMOS sensors also typically incorporate extra transistors for other functionality, such as noise reduction.
E - Pixel
A pixel contains a light sensitive photodetector, which measures the amount of light (photons) falling onto it. This process releases electrons from the silicon, which forms the charge at each photosite.
F - Microlenses
Microlenses help funnel light into each pixel, thereby increasing the sensitivity of the sensor. These are particularly important as a proportion of most sensors' surface area is taken up by necessary circuitry.
G - Black pixels
Not all pixels on a sensor are used for capturing an image. In fact, those around the peripheries are typically shielded from light, which allows the camera to see how much dark current builds up during an exposure when there is no illumination - this is one of the causes of noise in images. By measuring this, the camera is able to make a rough estimate as to how much has built up in the active pixels, and subtracts this value from them. The result is a cleaner image with less noise.
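The black-level correction described above amounts to a simple subtraction. In this sketch all the values are illustrative: the mean of the light-shielded pixels stands in for the dark current estimate, which is then removed from every active pixel.

```python
# Values read from the shielded "black" pixels, which saw no light at
# all - anything recorded here is dark current and electronic noise.
shielded_pixels = [12, 11, 13, 12, 12]

# Values from active pixels: real image data plus the same dark current.
active_pixels = [512, 268, 1035, 14, 799]

# Estimate the black level and subtract it, clamping at zero so no
# pixel ends up with a negative value.
black_level = sum(shielded_pixels) / len(shielded_pixels)
corrected = [max(0, round(p - black_level)) for p in active_pixels]
print(corrected)
```

Note that this only removes the average dark-current offset; the random variation around that average remains, which is why long exposures still show some residual noise.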