Image Sensor Technology
While an enormous number of physical mechanisms have been identified that can convert an incoming photon flux into an electrical signal, three groups of these have been most often used in image sensors. The earliest sensors were based on the photoelectric effect, in which a material will emit electrons into a vacuum when struck by photons with sufficient energy. In most practical sensors, these materials are transparent so that the photons can enter one side (generally the outside of a glass vacuum envelope) and the electrons can exit the other. These materials are called photocathodes. Though still common in low-light-level instrumentation and x-ray imaging, they are rare in systems that collect data because the sensors using them tend to suffer from noise, distortion and signal non-linearities.
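The photoemission threshold mentioned above follows directly from the photon energy relation E = hc/λ: only photons whose energy exceeds the photocathode's work function can liberate electrons. The short sketch below computes the longest usable wavelength for an assumed work function; the 1.5 eV figure is purely illustrative, not taken from any real photocathode datasheet.

```python
# Illustrative calculation: the longest wavelength that can cause
# photoemission from a photocathode with a given work function.
# A photon of wavelength lambda carries energy h*c/lambda; emission
# requires that energy to exceed the work function.

H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

def threshold_wavelength_nm(work_function_ev):
    """Longest wavelength (nm) capable of liberating an electron."""
    return H * C / (work_function_ev * EV) * 1e9

# Assumed work function of 1.5 eV (illustrative value only):
print(round(threshold_wavelength_nm(1.5)))  # prints 827 (nm)
```

Lower work functions push the threshold toward longer wavelengths, which is why photocathode materials are chosen for low effective work functions when near-infrared response is needed.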
A second, now mostly abandoned, group is photoconductors, in which the resistance of the material varies with exposure to light. Simple CdS cells used to control lighting are an example of this device type. Photoconductors were widely used in camera tubes, especially antimony trisulfide (Sb2S3) and various selenium compounds, because they were easy to produce in uniform films and operated at relatively low voltages. However, these materials suffer from signal non-linearities and from various forms of signal fatigue, which leads relatively quickly to burned-in images in static scene environments like those found in most machine vision tasks. In addition, all camera tubes are subject to geometric image distortions and to influences from external magnetic and electrical fields. Still, some photoconductors have survived in solid-state devices, especially in infrared and x-ray applications.
Most current image sensors are made from semiconductor photovoltaics, materials in which incoming light generates a voltage across a diode fabricated in the material. However, these materials are generally not run in the voltage-generating (or forward-biased) mode like solar cells; rather, they are reverse biased by an external voltage and generate a signal through the collection of charge created by incoming photons. These materials are particularly well-suited to data-collection applications because they generate charge signals that are directly proportional to the number of incoming photons and because they can be readily fabricated into geometrically stable monolithic planes. Even so, there are numerous sources of error in detectors made from these materials that the potential user needs to consider.
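The proportionality property described above can be stated as a one-line model: the collected charge is the incident photon count scaled by the quantum efficiency. This is a sketch rather than a device model, and the quantum efficiency value is an assumed illustration, not a measured figure.

```python
# Sketch of the linear photon-to-charge relation that makes
# photovoltaic silicon attractive for quantitative data collection.
# The quantum efficiency value is an assumed, illustrative number.

def collected_electrons(photon_count, quantum_efficiency=0.6):
    """Electrons collected for a given number of incident photons."""
    return photon_count * quantum_efficiency

# Linearity: doubling the incident photons doubles the signal.
assert collected_electrons(2000) == 2 * collected_electrons(1000)
```

Real detectors depart from this ideal through shot noise, dark current and readout noise, which is why the sources of error mentioned above still matter even though the underlying conversion is linear.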
Basic Image Sensor Requirements
By far, the most commonly used image sensor material is silicon because silicon:
- responds strongly to visible light,
- can be manufactured in sizes useful for image sensing,
- uses manufacturing processes and tools well-developed for other silicon devices,
- has performance characteristics compatible with the needs of high image data quality and
- can support fabrication of a wide selection of image sensor configurations at reasonable cost.
The process of building image sensors starts with the same base silicon material used in transistors, microprocessors and other silicon devices. These are wafers, typically 0.3 mm thick and 100 to 300 mm in diameter, cut from large cylindrical single crystals. The wafer is termed the substrate and is generally intrinsic silicon. That is, it is essentially pure silicon. With rare exceptions, discussed later, intrinsic silicon is not useful for making image sensors, so impurities are introduced into the silicon crystal – a process called doping – to provide free electrons or holes that give the material electrical conductivity. These impurities could be introduced into the wafer by thermal diffusion or by ion implantation, but most commonly they are incorporated into a new silicon layer grown on the wafer by a process called vapor phase epitaxy. This very uniform, doped epitaxial (literally “fixed on the surface”) layer, a few to perhaps 50 µm thick, forms the foundation for the fabrication of the image sensor.
Forming an image sensor requires providing the silicon layer with a set of physical features that define the desired functions and performance. Fundamentally, any image sensor must take in incoming light, convert it to charge, keep the charge localized to the place where it was converted, hold the charge until the time for readout arrives and then read out the signal generated by the charge. Each of these requirements adds complexity to the structure of the imager.
Any photodiode can perform the first two steps, converting incoming light to charge, but photodiodes do not preserve any information about the place of arrival of the light and do not hold the charge for readout, instead producing a continuous photocurrent. An array of photodiodes can provide spatial location information and a little additional circuitry can hold the signal and read it out on command. Thus an imager could be simply an array of photodiodes with some extra circuitry fabricated directly on the silicon wafer in a size appropriate to the imaging application. Some imagers are precisely this combination but many take advantage of the additional capabilities afforded by the ability of silicon to perform other useful functions.
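The photodiode-array-plus-circuitry idea described above can be sketched as a toy model: each pixel localizes and integrates its own photogenerated charge during an exposure, and a readout command returns the accumulated signal and resets the array. All class, method and parameter names here are illustrative, not drawn from any real sensor interface.

```python
# Toy model of an imager built as an array of photodiodes with
# hold-and-readout circuitry. Names and the quantum efficiency
# value are illustrative assumptions.

class PhotodiodeArray:
    def __init__(self, rows, cols):
        # Each pixel site holds its own accumulated charge,
        # preserving the spatial location of photon arrival.
        self.charge = [[0.0] * cols for _ in range(rows)]

    def expose(self, photon_flux, exposure_s, qe=0.6):
        """Integrate photogenerated charge at each pixel site.

        photon_flux[r][c] is photons/second arriving at pixel (r, c).
        """
        for r, row in enumerate(photon_flux):
            for c, flux in enumerate(row):
                self.charge[r][c] += flux * exposure_s * qe

    def read_out(self):
        """Return the accumulated signal and reset every pixel."""
        signal = [row[:] for row in self.charge]
        self.charge = [[0.0] * len(row) for row in self.charge]
        return signal

arr = PhotodiodeArray(2, 2)
arr.expose([[100, 200], [300, 400]], exposure_s=0.01)
print(arr.read_out())  # approximately [[0.6, 1.2], [1.8, 2.4]]
```

The hold-until-readout behavior is the key difference from a bare photodiode, which would simply pass a continuous photocurrent with no memory of when or where the light arrived.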
Of course, many materials other than silicon are used for photon detection arrays, generally because of the need to detect photons outside the wavelength range of silicon, usually limited to 300 to 1100 nm or less. Commonly, InGaAs is used for requirements from just beyond the visible to 2.5 µm. InSb is common for detection in the 3-5 µm band, and HgCdTe (MCT) can be used over the entire infrared range out to 14 µm and beyond. These materials must be bonded to silicon readout arrays and generally require cooling, cryogenic in the case of the longer-wavelength materials. MEMS technology has allowed construction of microbolometer arrays for infrared imaging, but these are not photovoltaic devices and will be considered separately. Infrared imaging is treated in a separate section.
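The detection ranges quoted above all trace back to one relation: a photovoltaic material responds only up to a cutoff wavelength set by its bandgap, approximately λ(µm) ≈ 1.24 / Eg(eV). The bandgap values below are approximate textbook figures used for illustration; InGaAs and HgCdTe cutoffs shift with alloy composition (which is how extended-range InGaAs reaches 2.5 µm) and with operating temperature.

```python
# Back-of-envelope link between bandgap and cutoff wavelength for
# photovoltaic detector materials. Bandgap values are approximate
# illustrative figures, not authoritative material constants.

def cutoff_um(bandgap_ev):
    """Cutoff wavelength in micrometres for a bandgap in eV."""
    return 1.24 / bandgap_ev

materials = [
    ("Si", 1.12),                       # matches the ~1100 nm limit
    ("InGaAs (lattice-matched)", 0.75), # standard, non-extended alloy
    ("InSb (at 77 K)", 0.23),           # covers the 3-5 um band
]

for name, eg in materials:
    print(f"{name}: cutoff ~{cutoff_um(eg):.1f} um")
```

Photons longer than the cutoff simply lack the energy to lift an electron across the bandgap, so narrowing the gap extends infrared response, at the cost of much higher thermally generated dark current, which is why the narrow-gap materials need cooling.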
More Image Sensor Details
For more information on specific image sensor topics, click these links: