- Capturing the image
- Camera settings
- Care and maintenance
- Custom functions
- Digital camera features
- Digital image file
- Digital image size and preview
- EOS MOVIE
- Exposure settings
- Flash basics
- Speedlite compatibility
- Speedlite range
- Speedlite zoom
- Flash on camera
- Dark backgrounds with flash
- Fill in flash
- Flash exposure lock and compensation
- Wireless flash
- Macroflash photography
- Bounce flash
- Flash synchronisation
- Stroboscopic flash
- Studio-style flash lighting with Speedlites
- Integrated Speedlite Transmitter
- Remote Release
- Focus points
- Image download
- Image compression
- Image information
- Image verification
- Introduction to digital photography
- Focal length
- All about apertures
- Lens speed
- Focusing and depth of field
- Black or white lenses
- Coloured rings
- Lens mount
- EF-S and field of view
- L-series lenses
- Fluorite, aspherical and UD lenses
- Prime and zoom lenses
- Image stabilisation
- Tilt and shift lenses
- Extension tubes
- Macro lenses
- Close-up lenses
- DO elements
- Fisheye lenses
- SubWavelength structure Coating
- Media cards
- Panoramic images
- Remote photography
- Scanning & copying
- Storage and archiving
- The digital darkroom
- White balance
Capturing the image: Photo sensors
Photographers place a lot of importance on image resolution. Lenses are rated by their resolution - essentially their ability to render a scene as a sharp image. Film, too, has a resolution, with slow-speed emulsions providing greater sharpness than fast materials.
So it is not surprising that one of the most important features of a digital camera has become its pixel count. But what is a pixel, why are there so many of them, and how do they affect image resolution?
Painting by numbers
Inside every digital camera, in place of film, is a sensor array. As the name suggests, this is a group of photo sensors laid out in the form of a grid.
A photo sensor reacts to light by creating an electrical charge. The brighter the light, the greater the charge. If you measure the value of this charge, you can determine the brightness of the light that created it. With this information, you can reproduce the effect of this light on a computer screen or a sheet of paper.
If there were only one large photo sensor, all the light from the scene would be averaged to a single tone and the image would be a uniform grey. Double the number of sensors and you capture twice the amount of information - your picture would be two grey blocks, though probably of slightly different tones.
As you increase the number of sensors, you increase the amount of picture information. Eventually, you get to a point where there is enough information for a recognisable image to appear.
It is very much like creating a mosaic with small tiles that vary in tone from white to black through a range of greys. Each tile has only one tone, but by laying tiles of different tones next to each other you can build up a picture.
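The effect of sensor count on detail can be sketched with a toy example. The code below (an illustration, not how any real camera works) averages a small greyscale "scene" onto grids of different sensor counts, assuming values from 0 (black) to 255 (white):

```python
def sample_scene(scene, sensors_per_side):
    """Average a square greyscale scene onto a grid of
    sensors_per_side x sensors_per_side photo sensors."""
    size = len(scene)
    block = size // sensors_per_side
    grid = []
    for row in range(sensors_per_side):
        grid_row = []
        for col in range(sensors_per_side):
            # Average all the scene values falling on this one sensor
            values = [scene[r][c]
                      for r in range(row * block, (row + 1) * block)
                      for c in range(col * block, (col + 1) * block)]
            grid_row.append(sum(values) // len(values))
        grid.append(grid_row)
    return grid

# An 8x8 scene: dark left half (50), bright right half (200)
scene = [[50] * 4 + [200] * 4 for _ in range(8)]

print(sample_scene(scene, 1))  # one sensor: a single averaged grey, [[125]]
print(sample_scene(scene, 2))  # four sensors: the two halves emerge
```

With a single sensor the whole scene collapses to one mid-grey; with four sensors the dark and light halves become distinguishable, and so on as the grid grows finer.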
Early digital cameras used sensor arrays based on a grid of 640 columns by 480 rows, giving just over 0.3 million sensors packed together on the array. This sounds like a lot, but while the images produced look good as small prints, the lack of detail quickly becomes apparent when they are enlarged.
The first EOS digital camera - the EOS DCS 3, introduced in 1995 - offered 1.3 million sensors (1.3 megapixels). However, it was the 6.3 megapixel EOS D60 (2002) that really started to compete with film cameras.
Picture elements (pixels)
The information provided by each photo sensor is called a picture element. This is usually shortened to ‘pixel’ (‘pix’ is a common abbreviation of ‘pictures’, and ‘el’ comes from ‘element’). By association, the term pixel has also come to mean a single photo sensor on the sensor array. So how many pixels do you need to produce an image with good detail?
Well, although direct comparisons are not possible, some sources suggest that you need around 100 million pixels to approach the resolution provided by the human eye. Similarly, it is estimated that the resolution of a fine grain colour film is equivalent to around 18 million pixels.
In practice, these figures seem quite high. Canon says that 10 million pixels gives similar resolution to film. Of course, some film users claim that digital can never give the same results as film, and they are probably right. Digital sensors are made up of precisely aligned pixels in an ordered grid. Film has its light-sensitive crystals randomly scattered through the emulsion. This tends to give digital images a very ‘smooth’ appearance, while film images have more ‘character’.
The choice between the two is purely subjective. Both digital and film are capable of providing images of superb quality.
The largest sensor array currently available for EOS digital cameras is 22.3 million pixels, on the EOS 5D Mark III.
A pixel cannot see in colour. It merely registers the brightness of the light it receives. To produce a colour image, the sensor array is overlaid with a grid of tiny colour filters, each covering one sensor. There are three filter colours - red (R), green (G) and blue (B) - with two green filters for every one red and one blue. This gives a microcluster of four filters that is repeated across the entire sensor array. (Green is favoured because the human eye is most sensitive to green light.)
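The repeating four-filter microcluster can be sketched in a few lines. This assumes one common arrangement (an RGGB layout); real sensors may order the cluster differently:

```python
def bayer_pattern(rows, cols):
    """Return the filter colour ('R', 'G' or 'B') covering each
    photo sensor, by tiling the 2x2 microcluster across the array."""
    cluster = [['R', 'G'],
               ['G', 'B']]
    return [[cluster[r % 2][c % 2] for c in range(cols)]
            for r in range(rows)]

for row in bayer_pattern(4, 4):
    print(' '.join(row))

# Green filters outnumber red and blue two to one
flat = [f for row in bayer_pattern(4, 4) for f in row]
print(flat.count('G'), flat.count('R'), flat.count('B'))  # 8 4 4
```

On a 4 x 4 array the pattern yields 8 green, 4 red and 4 blue filters - the two-to-one ratio described above.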
A pixel covered by a red filter sees only red light; a pixel covered by a blue filter sees only blue light; a pixel covered by a green filter sees only green light. This suggests that the sensor captures only a third as much colour data as brightness data, but this is not the case. Each pixel samples the colour information from adjacent pixels to build full colour data to go with its own brightness value. This might sound like a compromise, but it works extremely well in practice.
It is this sampling that accounts for the difference between the total number of pixels on a sensor, and the smaller number of ‘effective’ pixels. The effective pixels are those that fall within the actual image area. The remaining pixels form a border to the image. They receive light from the subject and their data is sampled by the effective pixels at the edge of the image area. This means that all the effective pixels sample data from adjacent pixels in all directions. It avoids the pixels at the edge of the image area having reduced colour data.
The algorithms (sets of rules) used by pixels to collate colour data are extremely complex and can take data from non-adjacent pixels. That’s why the border of non-effective pixels is more than one pixel deep.
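A heavily simplified sketch of the neighbour-sampling idea is shown below. The real algorithms are, as noted, far more sophisticated and proprietary; this toy version just estimates a missing colour at a pixel by averaging the nearest sensors that carry the wanted filter colour:

```python
def estimate_channel(raw, pattern, row, col, colour):
    """Estimate the value of `colour` at (row, col), given the raw
    sensor readings and the grid of filter colours over each sensor."""
    if pattern[row][col] == colour:
        return raw[row][col]  # this sensor measured the colour directly
    neighbours = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            r, c = row + dr, col + dc
            if 0 <= r < len(raw) and 0 <= c < len(raw[0]) \
                    and pattern[r][c] == colour:
                neighbours.append(raw[r][c])
    # Average the neighbouring sensors that saw the wanted colour
    return sum(neighbours) // len(neighbours)

# A tiny 2x2 example using an RGGB cluster (values are hypothetical)
pattern = [['R', 'G'],
           ['G', 'B']]
raw = [[100, 80],
       [60, 40]]

print(estimate_channel(raw, pattern, 0, 0, 'R'))  # measured directly: 100
print(estimate_channel(raw, pattern, 0, 0, 'G'))  # (80 + 60) // 2 = 70
print(estimate_channel(raw, pattern, 0, 0, 'B'))  # one neighbour: 40
```

Note that a pixel at the edge of the grid has fewer neighbours to sample from - which is exactly why, as described above, real sensors surround the effective image area with a border of extra pixels.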
The quality of the algorithms plays a major part in the quality of the colour image. Each camera manufacturer keeps its algorithms a closely guarded secret.