WO2012028847A1 - Image sensor - Google Patents

Image sensor

Info

Publication number
WO2012028847A1
Authority
WO
WIPO (PCT)
Prior art keywords
array
sensor devices
photoelectric sensor
sub
image
Prior art date
Application number
PCT/GB2011/001283
Other languages
French (fr)
Inventor
Thomas Heinz-Helmut Altebaeumer
Henry James Snaith
Original Assignee
Isis Innovation Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from GBGB1014713.0A external-priority patent/GB201014713D0/en
Priority claimed from GBGB1108211.2A external-priority patent/GB201108211D0/en
Application filed by Isis Innovation Limited filed Critical Isis Innovation Limited
Publication of WO2012028847A1 publication Critical patent/WO2012028847A1/en

Links

Classifications

    • H ELECTRICITY
    • H01 ELECTRIC ELEMENTS
    • H01L SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L27/00 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L27/14 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H01L27/144 Devices controlled by radiation
    • H01L27/146 Imager structures
    • H01L27/14643 Photodiode arrays; MOS imagers
    • H01L27/14645 Colour imagers
    • H01L27/14647 Multicolour imagers having a stacked pixel-element structure, e.g. npn, npnpn or MQW elements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/84 Camera processing pipelines; Components thereof for processing colour signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70 SSIS architectures; Circuits associated therewith
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2209/00 Details of colour television systems
    • H04N2209/04 Picture signal generators
    • H04N2209/041 Picture signal generators using solid-state devices
    • H04N2209/042 Picture signal generators using solid-state devices having a single pick-up sensor
    • H04N2209/047 Picture signal generators using solid-state devices having a single pick-up sensor using multispectral pick-up elements

Definitions

  • the present invention relates to image sensors for sensing EM (electromagnetic) radiation of at least three spectral (e.g. colour) components which constitute a subset of the EM radiation spectrum of interest.
  • EM radiation spectrum of interest is application dependent and may encompass, in the case of conventional image sensors as used in mobile phones or digital single-lens reflex cameras, the spectrum of visible light, which ranges approximately from 400nm to 800nm.
  • colour image sensors which detect the spatial intensity distribution of particular spectral components within the EM radiation spectrum of interest.
  • the most common image sensor comprises an array of photoelectric sensor devices, each of them capable of detecting EM radiation belonging to the entire spectrum of interest.
  • these device arrays are masked by colour filter arrays (CFAs) consisting of interleaved sub-arrays of colour filters that each pass a spectrally limited EM radiation of a single spectral component, for example only a red, green or blue colour component.
  • Each photoelectric sensor device and corresponding colour filter comprises one pixel, whose signal output depends on the intensity of the light transmitted by the filter.
  • CFAs comprise red, green and blue filters assembled in a Bayer type configuration which contains twice as many green pixels as red or blue pixels to reflect the human eye's greater sensitivity to green light.
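The Bayer arrangement just described can be illustrated with a short sketch (the 4x4 size and the common RGGB tiling order are illustrative assumptions):

```python
def bayer_mask(rows, cols):
    """Build an RGGB Bayer colour-filter mask: each 2x2 cell holds one red,
    two green and one blue filter, reflecting the eye's green sensitivity."""
    tile = [["R", "G"], ["G", "B"]]
    return [[tile[r % 2][c % 2] for c in range(cols)] for r in range(rows)]

mask = bayer_mask(4, 4)
# For any even-sized sensor, green sites outnumber red (and blue) two to one.
counts = {ch: sum(row.count(ch) for row in mask) for ch in "RGB"}
```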
  • the photoelectric sensor array may be for example based on CCDs (charge coupled devices) or constitute active pixel sensor arrays as realised in CMOS (complementary metal-oxide-semiconductor) sensors.
  • Sensitivity affects the signal and consequently the signal-to-noise ratio (SNR) of the image sensor. It is particularly compromised if increases in sensor resolution are achieved with smaller pixels, whose smaller area will inevitably be struck by a smaller number of photons.
  • Statistical fluctuations in the number of photons hitting a particular area (i.e. shot noise), relative to the overall number of photons hitting that area, are evidently more significant when the overall number of photons is small. This is more likely for a small pixel area, giving rise to a lower SNR. Shot noise, unlike other noise sources, constitutes the fundamental limit of the sensor's sensitivity and is particularly problematic in low light conditions or for image sensors with small form factors (i.e. small pixel sizes).
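The shot-noise limit described above can be sketched numerically: photon arrivals follow Poisson statistics, so the noise is the square root of the mean photon count and the shot-noise-limited SNR scales as the square root of the collected photons. The photon counts below are illustrative assumptions only.

```python
import math

def shot_noise_snr(mean_photons):
    """Poisson photon statistics: noise = sqrt(N), so SNR = N / sqrt(N) = sqrt(N)."""
    return mean_photons / math.sqrt(mean_photons)

# Quartering the pixel area quarters the photon count and halves the SNR,
# which is why small-pixel sensors suffer most in low light.
snr_large = shot_noise_snr(10000.0)
snr_small = shot_noise_snr(2500.0)
```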
  • noise sources such as fixed pattern noise, thermal noise and read noise are increasingly addressed by improving the manufacturing process or using more advanced electronics (especially A/D converters), such that the performance of the most recent image sensors is increasingly limited by shot noise.
  • the fill factor, for example, is typically limited by metal leads and, in the case of active pixel sensors, CMOS circuits integrated within each pixel. Microlenses are used to focus light onto the photo-active pixel area which would otherwise be reflected by these circuits and metal leads, increasing the amount of incident light collected. Backside illuminated CMOS sensors were also introduced, in which the photoelectric sensor devices are provided on the backside, opposite the metal leads and CMOS circuits. However, although utilising these approaches it is now possible to obtain fill factors which are close to 1, i.e. all the light is directed onto the photo-active region, all of these sensors still rely on the use of CFAs to obtain spatial colour resolution.
  • WO-2008/150342 discloses a sensor design which includes alongside the well known R, G and B pixels also panchromatic pixels which convert most if not all of the photons within the visible light spectrum into a signal. As a result, the image sensor as a whole captures more light, increasing its overall sensitivity. However, this gain is achieved at the expense of lateral colour resolution, which is determined by the spacing of the R, G and B filters, which are now further apart to accommodate the panchromatic pixels.
  • US-6,731,397 discloses an example using silicon as the photoelectric material by forming a stack of three pn junctions in a sheet of silicon that constitute respective photoelectric sensor devices. This construction takes advantage of the wavelength dependent absorption depth of silicon which results in each photoelectric sensor device predominantly outputting a signal of a different colour component. This improves the sensitivity by collecting light of all three colour components at each pixel location, but the physical construction presents some limitations. It is important to note that the response of the photoelectric sensing devices generally relates to more than one colour component. For example, the photoelectric sensor device being closest to the surface of said silicon material and hence closest to the light source will predominantly absorb light with the shortest wavelength, i.e. blue light, but also any light with any longer wavelength.
  • the colour signals do not correspond to a particular colour space, say the RGB space and need to be converted into a standard colour space.
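Such a conversion can be sketched as a linear unmixing step: if each stacked junction's response to the colour components is characterised by a mixing matrix, the standard colour-space values follow by inverting that matrix. The 3x3 matrix values below are purely illustrative assumptions, not taken from the patent:

```python
# Hypothetical mixing matrix for a stacked-junction silicon sensor: each
# junction responds predominantly to one colour component but leaks into
# the others (illustrative values only).
M = [[1.0, 0.4, 0.1],   # shallow junction: mostly blue, some leakage
     [0.2, 1.0, 0.3],   # middle junction: mostly green
     [0.1, 0.3, 1.0]]   # deep junction: mostly red

def det3(a):
    """Determinant of a 3x3 matrix."""
    return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
          - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
          + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

def solve3(a, b):
    """Cramer's rule: recover colour-space values from junction signals."""
    d = det3(a)
    solution = []
    for i in range(3):
        ai = [row[:] for row in a]
        for r in range(3):
            ai[r][i] = b[r]
        solution.append(det3(ai) / d)
    return solution

true_bgr = [0.2, 0.5, 0.8]
measured = [sum(M[r][c] * true_bgr[c] for c in range(3)) for r in range(3)]
recovered = solve3(M, measured)
```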
  • EM radiation absorbed in the depletion regions between the pn junctions is not sensed and compensation doping negatively affects the carrier lifetime in the depletion regions and hence the dark current.
  • US-7,411,620 discloses an image sensor that comprises stacked arrays of photoelectric sensor devices, each comprising organic material and having a response spectrally limited to a single colour component, e.g. by choosing appropriate dyes. This offers the opportunity to sense each colour component at each pixel location, thereby improving the sensitivity.
  • manufacture of the construction involving stacked arrays of photoelectric sensor devices presents technical challenges. For example, when manufacturing each stacked array sequentially it is difficult to provide the necessary sintering of metal oxide particles such as TiO2 particles in the construction of the photoelectric sensor devices at elevated temperatures without damaging the organic material incorporated in the previously fabricated arrays, which may include dyes and hole transporters.
  • US-2007/0012955 and US-2009/0283758 disclose image sensors in which the problems of stacking photoelectric sensor devices comprising organic material are avoided by stacking a single array of photoelectric sensor devices comprising organic material on photoelectric sensor devices that are CMOS devices formed in silicon.
  • the array of photoelectric sensor devices comprising organic material all have a response spectrally limited to the same spectral components, for example green.
  • the photoelectric sensor devices that are CMOS devices formed in silicon are configured to output signals in respect of the two remaining spectral components, for example red and blue.
  • the photoelectric sensor devices formed in silicon comprise an arrangement in which the photoelectric sensor devices are aligned with interleaved sub-arrays of colour filters that each pass a spectrally limited EM radiation of a single colour component.
  • sensitivity is improved by sensing one of the colour components in the array of photoelectric sensor devices comprising organic material
  • the use of filters in the photoelectric sensor devices formed in silicon still limits the improvement in the sensitivity because the filters absorb a proportion of the incident EM radiation.
  • an image sensor for sensing EM radiation of at least three spectral components spread across a predetermined spectrum of interest with an improved sensitivity as compared to a Bayer type of image sensor, but in which at least some of the technical problems associated with the approaches mentioned above are reduced.
  • an image sensor for sensing EM radiation of at least three spectral components spread across a predetermined spectrum of interest, the image sensor comprising a front array of photoelectric sensor devices for receiving incident EM radiation stacked on a back array of photoelectric sensor devices and aligned in a one-to-one relationship with the photoelectric sensor devices of the back array for receiving the incident EM radiation after transmission through the front array, wherein
  • the front array consists of at least two interleaved sub-arrays of photoelectric sensor devices each having a response spectrally limited to one or more of said spectral components that is the same within each sub-array and different between the sub-arrays, so that the photoelectric sensor devices are configured to output signals representing the one or more spectral components to which they have a response, and
  • the back array comprises photoelectric sensor devices each having a response to the entire spectrum of interest, so that the photoelectric sensor devices are configured to output a signal representing the total of all of said spectral components that reach the sensor device without having been absorbed by the aligned photoelectric sensor device of the front array or by any filter optionally provided in front of the photoelectric sensor device of the back array.
  • Such an image sensor provides the capability of sensing EM radiation of at least three spectral components spread across a predetermined spectrum of interest with a construction that has the potential of providing improved sensitivity as compared to an image sensor whose spectral response is solely defined by the use of conventional colour filter arrays.
  • the capability of sensing EM radiation of at least three spectral components is achieved as a result of the front array consisting of at least two interleaved sub-arrays of photoelectric sensor devices each having a response that is spectrally limited to one or more of said spectral components that is the same within each sub-array and different between the sub-arrays.
  • photoelectric sensor devices of the back array each have a response to the entire spectrum of interest, they are configured to output a signal representing the total of all of said spectral components that reach the sensor devices.
  • the respective sub-arrays of photoelectric sensor devices of the back array that are aligned with each sub-array of the front array are therefore also configured to output signals of one or more spectral components that can be different for each sub-array.
  • the maximum number of spectral components equals the total number of sub-arrays.
  • the spatial resolution of a particular spectral component is determined by the pitch of the corresponding sub-array. If, for example, the top array comprises only two sub-arrays (note that conventional Bayer image sensors require 3 individually fabricated sub-arrays), then the bottom array will also comprise two sub-arrays.
  • Due to the use of two stacked arrays, this configuration has twice the number of pixels per area as compared to conventional CFAs, and it allows for detecting up to twice as many spectral components as sub-arrays are present in the top array, i.e. 4 colours in this example. The result is an increased colour gamut and a doubling of the number of pixels per area.
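The counting argument in this two-sub-array example can be made concrete with a small sketch (the band names c1..c4 and the photon counts are illustrative assumptions): each sub-array contributes a front signal and a back signal, giving four independent measurements per repeat unit, and each aligned front/back pair sums to the panchromatic total:

```python
# Hypothetical per-pixel photon counts for four spectral bands
scene = {"c1": 10.0, "c2": 20.0, "c3": 30.0, "c4": 40.0}
total = sum(scene.values())

# Sub-array A: the front device converts band c1; the aligned back device
# outputs the remainder of the spectrum of interest.
front_a = scene["c1"]
back_a = total - front_a

# Sub-array B: the front device converts band c2.
front_b = scene["c2"]
back_b = total - front_b

# Two sub-arrays thus yield four independent signals per repeat unit,
# and each aligned pair reconstructs the panchromatic total.
signals = [front_a, front_b, back_a, back_b]
```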
  • Said construction can also be configured such that it provides improved sensitivity as compared to a Bayer type of image sensor, because all EM radiation of interest is collected at the location of each photoelectric sensor device: the photoelectric sensor devices of the back array are configured to output a signal representing the total of all of said spectral components that reach said sensor device of the back array without having been converted by the aligned photoelectric sensor device of the front array or by any filter provided in front of the photoelectric sensor device of the back array.
  • since only the photoelectric sensor devices of the front array are required to have a spectrally limited response, it is straightforward to manufacture the photoelectric sensor devices of the back array to have a response to the entire spectrum of interest, for example by constructing the back array from photoelectric sensor devices comprising semiconductor material, such as silicon, for example as CMOS devices.
  • the front array is just a single array of photoelectric sensor devices having a spectrally limited response
  • it is relatively straightforward to add the front array, for example by constructing the front array from photoelectric sensor devices comprising organic material, for example an organic semiconductor of either molecular or polymeric form, such as a dye, configured to convert said one or more of said spectral components, in contrast to the technical difficulties in constructing stacked arrays of photoelectric sensor devices having spectrally limited responses.
  • each photoelectric sensor device of the front array has a high quantum efficiency so that it converts a high proportion of the one or more of said spectral components to which that photoelectric sensor device has a response.
  • the high degree of absorption by the photoelectric sensor devices of the front array still allows derivation, if desired, of a luminance signal representative of the luminance of the incident EM radiation with high sensitivity, by using the signals output by the photoelectric sensor devices of both the front array and the back array in combination.
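In the idealised case where every in-band photon is converted either by the front device or by the aligned back device, the luminance follows directly as the sum of the two outputs. A minimal sketch, with illustrative normalised signal values:

```python
def luminance(front_signal, back_signal):
    """Idealised stacked pixel: the front device converts its own spectral
    component(s); the back device converts everything transmitted through it.
    Their sum therefore represents the full spectrum of interest."""
    return front_signal + back_signal

# e.g. a green-selective front pixel over a panchromatic back pixel,
# with signals normalised so the in-band total is 1.0
lum = luminance(front_signal=0.35, back_signal=0.65)
```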
  • the peak absorptance, in respect of the one or more of said spectral components to which that photoelectric sensor device has a response, is at least 60%, preferably at least 75%, more preferably at least 90%, more preferably at least 95%, more preferably at least 99%.
  • the absorptance (sometimes referred to as the absorption factor) is the fraction of the incident EM radiation flux at a given wavelength that is absorbed. In an ideal sensor, all the absorbed EM radiation is converted.
  • the image sensor may further comprise a signal processing unit configured to receive the signals output by the photoelectric sensor devices of the front array and the back array and to process the received signals to derive spectral component signals representative of each of the at least three spectral components and/or a luminance signal representative of the luminance of the incident EM radiation.
  • the signal processing circuit uses the signals output by the photoelectric sensor devices of both the front array and the back array in combination.
  • the sensitivity of detection of the spectral components is improved as compared to using the signals output by the photoelectric sensor devices of the front array alone (or by the photoelectric sensor devices of the back array alone).
  • the respective sub-arrays of photoelectric sensor devices of the back array produce signals containing spectral information due to the absorption by the aligned sub-arrays of photoelectric sensor devices of the front array.
  • the output signals of the photoelectric sensor devices of the back array can be combined with the output signals of the photoelectric sensor devices of the front array. This improves the sensitivity of the resultant spectral component signals, as compared to relying on the photoelectric sensor devices of the front array alone.
  • the image sensor does not include any filters arranged to absorb any of the spectral components.
  • the photoelectric sensor devices of the back array are configured to output a signal representing the total of all of said spectral components that reach the sensor device except for the spectral component converted by the aligned photoelectric sensor device of the front array. This type of configuration maximises the improvement in sensitivity as all of the spectral components are converted by the photoelectric sensor devices of either the front or back array.
  • the front array consists of at least three interleaved sub-arrays of photoelectric sensor devices, and the photoelectric sensor devices of at least one of said sub-arrays have a response spectrally limited to at least two of said spectral components.
  • the front array consists of three interleaved sub-arrays of photoelectric sensor devices, the photoelectric sensor devices of a first two of the sub-arrays each having a response spectrally limited to one of said spectral components that is the same within each sub-array and different between the sub-arrays, and the photoelectric sensor devices of the third sub-array each having a response spectrally limited to both of said spectral components to which the responses of the photoelectric sensor devices of the first two sub-arrays are spectrally limited.
  • This type of configuration may provide constructional advantages in requiring the photoelectric sensor devices of the front array to be spectrally limited to a smaller number of colour components in total.
  • two of the sub-arrays have responses spectrally limited to respective ones of two spectral components
  • the third sub-array has a response spectrally limited to both of those two spectral components, so no photoelectric sensor devices of the front array need to have a response to the third spectral component. This may provide a constructional advantage in making the front array easier to manufacture.
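A short sketch of this three-sub-array configuration (the component values are illustrative assumptions): because the third sub-array's front device absorbs both of the first two components, its back device is left with exactly the third component, so no front device needs a response to it:

```python
# Scene decomposed into three spectral components A, B, C (illustrative)
A, B, C = 0.3, 0.5, 0.2
total = A + B + C

front1, back1 = A, total - A              # sub-array 1: front absorbs A
front2, back2 = B, total - B              # sub-array 2: front absorbs B
front3, back3 = A + B, total - (A + B)    # sub-array 3: front absorbs A and B

# The back device of sub-array 3 isolates the third component directly.
third_component = back3
```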
  • the light absorbing properties of the front array are determined by organic molecules such as dyes
  • This set-up might be particularly desirable if it is technically challenging to realise the desired physical properties, e.g. quantum efficiency, for a particular spectral component, as is the case for red light in some types of photoelectric sensor devices, for example.
  • the image sensor further comprises at least one array of filters aligned in a one-to-one relationship with one of the sub-arrays of photoelectric sensor devices of the front array, disposed in front of the photoelectric sensor devices of the back array, and arranged to absorb at least a portion of one of the spectral components, for example one of the spectral components other than the one or more of said spectral components to which the response of the sub-array of photoelectric sensor devices of the front array with which the array of filters is aligned is spectrally limited.
  • each array of filters may be arranged to absorb a spectral component that is the same within each array and different between the arrays.
  • the arrays of filters are interleaved in the same manner as the sub-arrays of photoelectric sensor devices and may be thought of as being similar to a colour filter array in the case of visible light (although a conventional colour filter array passes light of a single colour component).
  • This type of configuration provides the potential to reduce the complexity of the image sensor by reducing the spectral components to which the response of the sub-arrays of the front array need to be spectrally limited, albeit at the expense of providing a lesser improvement in sensitivity over conventional image sensors, as some incident light is absorbed by the array of filters.
  • various embodiments of the present invention address the challenge of providing an improved method of constructing colour images and a device structure enabling this method, such that many, and in some cases all, of the following advantages can be achieved:
  • Sensitivity: minimal (potentially no) waste of photons which are part of the spectrum of interest (e.g. RGB) due to the use of colour filters
  • Colour resolution: improved or at least equivalent colour resolution compared with conventional image sensors using 3 colour filter arrays (e.g. Bayer pattern)
  • Luminance resolution: improved luminance resolution compared with conventional image sensors using 3 colour filter arrays
  • Image processing: simplifying existing and enabling more powerful signal processing techniques.
  • Fig. 1 is a diagram of a panchromatic image sensor and a white image sensor
  • Fig. 2 is a graph of spectral responses of the sensors, plotting relative channel responsivity against wavelength
  • Fig. 3 is a diagram of an image sensor
  • Fig. 4 is a partial schematic side view of a photodetector
  • Fig. 5 is a block diagram illustrating the operation of a DSP circuit using the photodetector configuration of Fig. 4;
  • Figs. 6 and 7 are partial schematic side views of further photodetector configurations
  • Fig. 8 is a partial schematic side view of an extended portion of the photodetector configuration of Fig. 6;
  • Fig. 9 is a block diagram illustrating an alternative operation of a DSP circuit using the photodetector configuration of Fig. 6;
  • Figs. 10 and 11 are partial schematic side and top views, respectively, of a further photodetector configuration
  • Figs. 12 to 14 are partial schematic side views of further photodetector configurations
  • Fig. 15 is a side cross-sectional view of a possible structure for a photodetector
  • Fig. 16 is a schematic cross-sectional view of the construction of a photodetector configuration.
  • Figs. 17(a) to (d), 18(a) and (b), 19(a) to (c), 20(a) and (b) and 21(a) to (c) are schematic side views of a photodetector configuration during subsequent fabrication steps.
  • the image sensors and photodetectors that will now be described sense EM radiation of three spectral components in a predetermined spectrum of interest.
  • the photodetectors comprise photoelectric sensor devices that convert photons into an electric signal (i.e. a change in voltage or current or both).
  • the description is given making specific reference to the spectrum of interest being visible light and to the three spectral components being red, green and blue colour components, i.e. using the RGB colour space or variations thereof.
  • the image sensors may equally be applied to a spectrum of interest that extends in part or in full beyond the visible spectrum, as desired for a given application, for example including photons with wavelengths below 400nm (e.g. ultraviolet (UV)) and/or above 750 nm (e.g. infrared (IR)).
  • the image sensors may equally be applied to detect any number of spectral components of any wavelength spread across the spectrum of interest to represent different spectral bands of the spectrum of interest.
  • the spectrum is visible light and different spectral components are colour components as required by a particular colour model (e.g. the RGB model, the RGBE model, the CYGM model or the CMYK model) to create a full colour image representing a scene as perceived by the human eye.
  • the spectral components might be any spectral components that are required to represent the spectrum as desirable for a given application.
  • the term "colour" is used to be consistent with common terminology for colour models of the visible spectrum, but the invention is not limited to the spectral components being colour components inside the visible spectrum.
  • two terms are used in the following description: “panchromatic” and “white”.
  • the term “panchromatic” is used to refer to signals representing the entire spectrum, that is from photons of all wavelengths belonging to the spectrum which is to be reconstructed to obtain the desired image
  • the term “white” is used to refer to signals representing just the spectral components, that is photons whose wavelengths belong to the spectral components, for example a chosen and well defined colour space such as RGB. This difference means that a panchromatic pixel would give a larger signal than a white pixel.
  • white is used to be consistent with common terminology for colour models of the visible spectrum, but the present invention is not limited to the spectral components being inside the visible spectrum.
  • Fig. 1 shows two image sensors 1 and 3, the first image sensor 1 having a panchromatic response and the second image sensor 3 having a white response.
  • Fig. 2 shows the spectral responses 11, 12, 13 and 14 of panchromatic, red, green and blue sensor devices, respectively (taken from "Mrityunjay Kumar, Efrain O. Morales, James E. Adams, Jr., and Wei Hao, New digital camera sensor architecture for low light imaging, Proceedings of the 16th IEEE International Conference on Image Processing, ISSN: 1522-4880, ISBN: 978-1-4244-5653-6 (2009)").
  • the first image sensor 1 comprises an ideal photodetector 2 having a response to the entire spectrum of interest and has a response 11 to a wide spectrum, in this case from around 375nm to around 670nm, thus producing a panchromatic output signal representing the entire spectrum.
  • the second image sensor 3 comprises an ideal photodetector 4 having a response to the entire spectrum of interest having disposed in front an RGB colour filter 5 that has an absorption spectrum that is spatially uniform across the image sensor 3 and is a bandpass filter arranged to pass only the colour components, that is to restrict the spectrum of the incident EM radiation to bandwidth ranges in respect of each of said colour components.
  • a colour filter transmits EM radiation of a particular colour component, specifically in this case the RGB colour filter 5 transmits each of the red, green and blue colour components.
  • the image sensor 3 has effective channel responses 12, 13 and 14 to red, green and blue colour components, in this case centred at wavelengths of around 620nm, 540nm and 470nm, respectively.
  • the image sensor 3 produces a white output signal representing the sum of the output signals corresponding to the red, green and blue colour components.
  • the panchromatic image sensor 1 exhibits a greater channel response near 380nm than the white image sensor 3.
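The difference between the panchromatic and white responses can be sketched with sampled channel values (the wavelength grid and responsivities below are illustrative assumptions, loosely following the shape of Fig. 2): the RGB-filtered "white" pixel loses the out-of-band contribution, for example near 380nm, so its summed signal is smaller.

```python
# Illustrative sampled responsivities (arbitrary units) on a coarse grid
wavelengths = [380, 470, 540, 620, 660]        # nm, illustrative samples
panchromatic = [0.6, 1.0, 1.0, 1.0, 0.8]       # responds across the spectrum
rgb_filtered = [0.0, 0.9, 0.95, 0.9, 0.3]      # bandpass-limited response

# Under uniform illumination the summed response indicates the signal size:
pan_signal = sum(panchromatic)
white_signal = sum(rgb_filtered)
```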
  • any of the image sensors described below may also employ a similar filter that has an absorption spectrum that is spatially uniform across the image sensor and is a bandpass filter arranged to pass only the colour components, hereinafter referred to as an RGB filter.
  • any of the image sensors can also include one or more other filters that have an absorption spectrum that is spatially uniform across the image sensor to minimise the impact of photons that are not required to reconstruct the desired image, but might complicate the analysis of the output signals.
  • filters may be UV or IR filters that absorb UV or IR light and do not affect the response of the photoelectric sensor devices.
  • Fig. 3 illustrates an image sensor 20 implemented on an IC (integrated circuit) chip 21 using CMOS technology, although corresponding CCD implementations would equally be possible.
  • the image sensor 20 comprises a photodetector 22 comprising arrayed photoelectric sensor devices 23 that convert photons of incident EM radiation into electrical signals representing the converted EM radiation, having a detailed construction described further below. Otherwise, the image sensor 20 has a standard configuration for a CMOS image sensor including the following components.
  • a control circuit 24 is provided to control the operation of the photoelectric sensor devices 23.
  • each photoelectric sensor device 23 is provided with a pixel circuit (not shown) comprising reset and readout elements formed by CMOS switch devices controlled by a reset and select lines in respect of rows of sensor devices 23 to provide readout signals from successive rows of sensor devices 23 onto readout lines in respect of columns of sensor devices 23 that are connected to sample-hold circuits.
  • the control circuit 24 provides control signals on the reset and select lines and controls the sample-hold circuits.
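The row-by-row readout described above can be sketched as follows (a simplified model with illustrative pixel values; reset, amplification and A/D conversion are omitted):

```python
# Illustrative 2x3 array of sensor-device values (arbitrary units)
pixels = [[10, 20, 30],
          [40, 50, 60]]

readout = []
for row in pixels:                # the control circuit asserts one select line,
    sample_hold = list(row)       # latching that row's column values in parallel
    readout.extend(sample_hold)   # values are then read out column by column
```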
  • An amplifier circuit 25 is provided to amplify the readout signals that are then supplied to an ADC (analog-to-digital) converter 26 that converts them into digital signals.
  • the digital signals from the ADC converter 26 are supplied to a DSP (digital signal processing) circuit 27 for further processing.
  • the DSP circuit 27 is implemented by a dedicated circuit of CMOS devices formed in the IC chip 21, but could equally be implemented in other ways, for example implemented on a separate IC chip and/or implemented as a processing unit running a suitable program. Indeed any of the control circuit 24, amplifier circuit 25, ADC converter 26 and/or DSP circuit 27 could alternatively be implemented in a separate IC chip from the IC chip 21.
  • there will now be described the various photodetector configurations that may be implemented in the photodetector 22 in the image sensor 20 to embody the present invention; for clarity, only the light sensitive elements (e.g. sensor devices and filters) are depicted.
  • the various side and plan views of the photodetectors are partial views showing a few sensor devices that are in fact arrayed across the entire photodetector. It should be noted that these side and plan views are chosen so as to illustrate the embodiments in the clearest possible way. For example, it should be obvious to the person skilled in the art that certain spatial arrangements of pixels will have different cross-sectional side-views. For example, side views of pixels arranged in a Bayer pattern may only reveal two out of the three colours being used. To illustrate the invention more clearly, the side views used here show all colour components and do not necessarily represent the spatial arrangement of the pixels in a photodetector array.
  • Fig. 4 illustrates a first photodetector configuration 30 being a side view of three sensor locations.
  • the photodetector configuration 30 comprises a front array of photoelectric sensor devices 31 stacked with a back array of photoelectric sensor devices 32.
  • the sensor devices 32 of the back array are aligned in a one-to-one relationship with the sensor devices 31 of the front array.
  • the sensor devices 31 of the front array receive incident light 33 and after transmission therethrough the sensor devices 32 of the back array receive the incident light 33.
  • the sensor devices 31 of the front array each have a response spectrally limited to a single one of three colour components, i.e. one of red, green or blue, whereas the sensor devices 32 of the back array each have a response to the entire spectrum of interest.
  • a non-limitative example described in more detail below is that the sensor devices 31 of the front array are formed by organic material, for example organic photocells or dye sensitised solar cells, whereas the sensor devices 32 of the back array are formed as CMOS devices in silicon.
  • in the first photodetector 30 there are no filters arranged to absorb any of the colour components and so the sensor devices 31 of the front array output signals representing the colour component to which they have a response, whereas the sensor devices 32 of the back array output signals representing the total of all of the light that reaches them without having been converted by the aligned sensor device 31 of the front array, being panchromatic light less the one of the colour components converted by the aligned sensor device 31 of the front array.
  • the sensor devices 31 of the front array would absorb and convert all of the light of the colour component to which they have a response. Whilst this may not always be achievable in practice, the sensor devices 31 of the front array are designed to absorb and convert a relatively high proportion of that light.
  • the absorptance (sometimes referred to as the absorption factor and being the fraction of the incident EM radiation flux at a given wavelength that is absorbed, which is equal to one minus the transmittance, ignoring reflections) of each sensor device 31 of the front array has a peak, in respect of the colour component to which there is a response, that is at least 60%, preferably at least 75%, more preferably at least 90%, more preferably at least 95%, more preferably at least 99%.
  • the sensor devices 31 of the front array would convert all of the light that is absorbed thereby into the output signal. Whilst this may not always be achievable in practice, the sensor devices 31 of the front array are designed to convert as high a proportion as possible of the absorbed light of the one of three colour components. Desirably, the sensor devices 31 of the front array further have a quantum conversion efficiency (the fraction of incident EM flux at a given wavelength that is converted into an output signal) that has a peak, in respect of the colour component to which there is a response, that is at least 60%, preferably at least 75%, more preferably at least 90%, more preferably at least 95%, more preferably at least 99%.
  • the sensor devices 31 of the front array consist of three interleaved sub-arrays 34, 35 and 36 of sensor devices 31 that each have a response spectrally limited to a different one of three colour components, that is to red, green and blue respectively.
  • Fig. 4 illustrates only a single sensor device 31 of each of the sub-arrays 34, 35 and 36, but in fact the sub-arrays 34, 35 and 36 are interleaved in any suitable pattern across the photodetector 30, for example a Bayer pattern (disclosed for example in US-3,971,065) in the same manner as a conventional Bayer type of image sensor.
  • sensor devices 32 of the back array may also be considered to consist of three interleaved sub-arrays 37, 38 and 39 of sensor devices 32, aligned respectively with three interleaved sub-arrays 34, 35 and 36 of the front array of sensor devices 31, that output signals representing different portions of the spectrum of interest.
  • the stacked arrangement of a single sensor device 31 of the front array and a single sensor device 32 of the rear array is the smallest spatially defined unit and therefore constitutes one pixel of the photodetector configuration 30. Pixels are similarly defined in the other photodetector configurations described below, in some cases additionally including a filter stacked with the sensor devices 31 and 32.
  • Derivation of the colour component signals and luminance signal may be performed in the DSP circuit 27 as shown in Fig. 5 which is a high level diagram showing the operation of the DSP circuit 27.
  • the photodetector configuration 30 produces a RGB F image 40 coming from the front array of sensor devices 31 and an (P-R)(P-G)(P-B) B image 41 from the back array of sensor devices 32.
  • both the RGB F image 40 and the (P-R)(P-G)(P-B) B image 41 are used to generate a full panchromatic image 43 at the resolution of the pixel pitch of the sensor devices 31 and 32, as follows. If f_F,X(λ) is the spectral response of each front sensor device 31 for a given wavelength λ, a signal F_F(X, n, m) is obtained from each front photoelectric device,
  • and a signal F_B(P-X, n, m) from each back sensor device 32, so that for a red front device the panchromatic signal at pixel position (n, m) is
  • F(P, n, m) = F_B(P-R, n, m) + F_F(R, n, m)
  • panchromatic pixels can be constructed from the signals of the remaining colour pixels.
  • F(P, n, m) = F_B(P-X, n, m) + F_F(X, n, m)
  • bilinear interpolation filters are required to minimise colour artefacts caused by the spatial separation of the different colour arrays (e.g. the pixels of the array comprising all red pixels are in different locations than the pixels of the array comprising all green pixels).
  • no bilinear interpolation is needed to create a full resolution panchromatic image F_Full(P, n, m) 43 because the position of the constructed panchromatic pixels coincides with the position of the colour pixels.
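The ideal-case relation F(P, n, m) = F_B(P-X, n, m) + F_F(X, n, m) can be sketched in a few lines of numpy. This is an illustrative sketch only, not taken from the patent; the array names and the assumption of complete absorption by the front devices are hypothetical.

```python
import numpy as np

def full_panchromatic(front, back):
    """Ideal-case panchromatic reconstruction: at every pixel the front
    device converts one colour component X (signal F_F(X, n, m)) and the
    aligned back device converts the remainder (signal F_B(P-X, n, m)),
    so their sum is the panchromatic signal F(P, n, m).  Because the two
    devices are stacked, the result is already at full resolution and no
    interpolation is needed."""
    return front + back

# Hypothetical 2x2 example: uniform panchromatic flux P = 9.0, with the
# front mosaic converting a different colour fraction at each location.
front = np.array([[2.0, 3.0],
                  [4.0, 2.5]])   # F_F(X, n, m)
back = 9.0 - front               # F_B(P-X, n, m), ideal absorption assumed
pan = full_panchromatic(front, back)
```

Because the summation is pixel-wise, the sketch works for any interleaving pattern of the front sub-arrays.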
  • the full panchromatic image 43 may be considered to be a luminance signal representative of the luminance of the incident EM radiation at the resolution of the sensor devices 31 and 32. This is used to generate signals representative of the colour components at the resolution of the sensor devices 31 and 32, as follows.
  • step 44 the RGB F image 40 is filtered to reduce noise to provide a noise-reduced RGB F image 45.
  • the full panchromatic image 43 is filtered to reduce noise to provide a full noise-reduced panchromatic image 47.
  • step 48 the (P-R)(P-G)(P-B) B image 41 is filtered to reduce noise to provide a noise-reduced (P-R)(P-G)(P-B) B image 49.
  • step 50 the noise-reduced RGB F image 45 is interpolated to provide a full noise-reduced RGB F image 51 at the resolution of the sensor devices 31 and 32.
  • step 52 the noise-reduced (P-R)(P-G)(P-B) B image 49 is interpolated to provide a full noise-reduced (P-R)(P-G)(P-B) B image 53.
  • the red colour component of the noise-reduced RGB F image 45 is at the resolution of the sensor devices 31 of the sub-array 34, not at the resolution of all the sensor devices 31 and 32, but the full resolution red image F_FFull(R, n, m) can be constructed by applying a bilinear interpolation filter h_FR to F_F(R).
  • the (P-R) component of the (P-R)(P-G)(P-B) B image 41 is at the resolution of the sensor devices 32 of the sub-array 37, not at the resolution of all the sensor devices 31 and 32, but the full resolution (P-R) image F_BFull(P-R, n, m) can be constructed by applying a bilinear interpolation filter h_BR to F_B(P-R).
  • Interpolation in steps 50 and 52 may be achieved using suitable interpolation algorithms, for example applying techniques similar to those disclosed in WO 2008/150342 and Kumar et al. (New Digital Camera Sensor Architecture For Lowlight Imaging, Proceedings of the 2009 IEEE).
  • step 54 the full noise-reduced panchromatic image 47 and the full noise-reduced (P-R)(P- G)(P-B) B image 53 are used to generate the final, noise reduced full resolution full colour image 55, as follows.
  • the complete red image F_Full(R, n, m) can be constructed by subtracting the (P-R) component of the full noise-reduced (P-R)(P-G)(P-B) B image 53 from the full noise-reduced panchromatic image F_Full(P, n, m) 47 according to
  • F_Full(R, n, m) = F_Full(P, n, m) - F_B(P-R) * h_BR
  • F_Full(X, n, m) = F_Full(P, n, m) - F_B(P-X) * h_BX.
  • F_B(P-X) contains the low frequency content while the high frequency content is added by F_Full(P, n, m). It is noted that the term F_B(P-X) is directly measured and not derived by subtracting the interpolated values of the full panchromatic image from a colour image, which constitutes a simplification as compared to the methods disclosed in WO 2008/150342 and Kumar et al.
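The subtraction F_Full(X, n, m) = F_Full(P, n, m) - F_B(P-X) * h_BX can be sketched as follows. The row-wise bilinear kernel and normalised convolution are illustrative simplifications (a real implementation would filter in both dimensions), and all names are hypothetical.

```python
import numpy as np

KERNEL = np.array([0.25, 0.5, 0.25])  # simple 1-D bilinear stand-in for h_BX

def interpolate_rows(sparse, mask):
    """Fill the missing samples of a sparse mosaic plane by normalised
    convolution along rows: known samples are kept, and gaps take a
    neighbour-weighted average of the known samples."""
    conv = lambda r: np.convolve(r, KERNEL, mode="same")
    num = np.apply_along_axis(conv, 1, sparse * mask)
    den = np.apply_along_axis(conv, 1, mask.astype(float))
    filled = num / np.maximum(den, 1e-12)
    return np.where(mask.astype(bool), sparse, filled)

def reconstruct_colour(pan_full, back_sparse, back_mask):
    """F_Full(X) = F_Full(P) - (F_B(P-X) interpolated to full resolution)."""
    return pan_full - interpolate_rows(back_sparse, back_mask)

# Hypothetical flat scene: P = 10 everywhere and red = 3 everywhere, so
# the back devices under red front devices measure P - R = 7.
pan_full = np.full((4, 4), 10.0)
mask = np.zeros((4, 4))
mask[:, ::2] = 1.0                    # columns aligned with red front devices
back_sparse = 7.0 * mask              # sparse F_B(P-R) samples
red_full = reconstruct_colour(pan_full, back_sparse, mask)
```

On this flat scene the interpolation is exact, so the recovered red plane is 3.0 at every pixel; on real scenes the kernel choice trades sharpness against colour artefacts.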
  • the final, noise reduced full resolution full colour image 55 includes signals representative of each colour component at the resolution of the sensor devices 31 and 32.
  • the true resolution is in fact on average approximately 2/3 the resolution of the sensor devices 31 and 32.
  • the derivation of the final noise reduced full resolution full colour image 55 does not directly use the spatial colour information of the RGB F image 40.
  • the full RGB F image 51 may be used to further enhance the colour information by adding its content to the final image 55
  • F(X, n, m) = F_Full(X, n, m) + w_F * F_FFull(X, n, m)
  • a suitable weighting term w_F may have to be chosen to maximise the signal to noise ratio of F(X, n, m). This is particularly relevant if the signal to noise ratio of F_Full(X, n, m) differs substantially from that of F_FFull(X, n, m).
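One plausible way to choose the weighting, not specified in the text, is inverse-variance weighting, reading the combination as a normalised weighted average of the two estimates of the same colour plane; all names here are illustrative.

```python
import numpy as np

def combine_estimates(f_full, f_ffull, var_full, var_ffull):
    """Blend two noisy estimates of the same colour plane.  With
    independent noise, weighting each estimate by the inverse of its
    variance minimises the variance of the blended result."""
    w = var_full / (var_full + var_ffull)   # weight given to f_ffull
    return (1.0 - w) * f_full + w * f_ffull

# Hypothetical planes: the second estimate is three times noisier, so it
# receives a quarter of the total weight.
f1 = np.full((2, 2), 5.0)
f2 = np.full((2, 2), 7.0)
blended = combine_estimates(f1, f2, var_full=1.0, var_ffull=3.0)
```

With equal variances the two planes would simply be averaged; as one estimate becomes noisier its influence on the result shrinks towards zero.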
  • in the ideal case, each photon absorbed in a front sensor device 31 generates the same signal response as if it had been absorbed in the back sensor device 32.
  • any deviation ΔF_Full(n, m) from this ideal case can be used to derive additional colour information from the back array of sensor devices 32, that is by subtracting ΔF_Full(n, m) from the signal obtained by each back sensor device 32.
  • Fig. 6 illustrates a second photodetector configuration 60 which is identical to the first photodetector configuration 30 of Fig. 4, except that the second photodetector configuration 60 has an RGB filter 61 disposed in front of the front array of sensor devices 31.
  • the RGB filter 61 has an absorption spectrum that is spatially uniform across the image sensor 20 and is a bandpass filter arranged to pass only the red, green, and blue colour components.
  • the absorption bands for the red, green and blue sensor devices 31 within the front array are rather broad and overlap, for example as shown in Fig. 2.
  • the back array of sensor devices 32 output signals representing the total of all of the light that reaches them without having been converted by the aligned sensor device 31 of the front array and without having been absorbed by the RGB filter 61.
  • This is effectively the two colour components remaining after conversion of one of the colour components by the aligned sensor device 31 of the front array, or in other words white light less the one of the colour components converted by the corresponding aligned sensor device 31 of the front array.
  • the sensor devices 32 of the back array may be considered to consist of three interleaved sub-arrays 37, 38 and 39 of sensor devices 32, aligned respectively with three interleaved sub-arrays 34, 35 and 36 of the front array of sensor devices 31, that output signals representing, respectively, the total of green plus blue, blue plus red and red plus green, as shown by the labels GB, RB and RG in Fig. 6.
  • the signals from the second photodetector configuration 60 are processed in a similar manner to the first photodetector configuration 30, except that the signals now represent white light where previously they represented panchromatic light.
  • This can simplify the processing in that the correction term ΔF(n, m) may be neglected because it is smaller and indeed is zero in the ideal case that the filter characteristics of the RGB filter 61 perfectly match the responses of the sensor devices 31 of the front array, although in practice that might only be approximately achieved.
  • the sensor devices 32 of the back array produce signals that simplify to
  • F_Full(X, n, m) = F_Full(W, n, m) - F_B(W-X) * h_BX.
  • the processing above assumes that the sensor devices 31 of the front array absorb all of the single colour component to which they have a response and that every absorbed photon contributes to an electric signal. It might be that (1) all light of a given colour component is absorbed in a given front sensor device 31 but not every absorbed photon contributes to the signal (and the same might be true for a given back sensor device 32) and/or (2) only some light of a given colour component is absorbed in a given front sensor device 31 and the remaining light of the given colour component is transmitted to the aligned back sensor device 32. These cases may be accounted for in the signal processing as follows.
  • Case (1) can be addressed by multiplying the signal from the front sensor device 31 by a suitable weighting term w(R), i.e. F_F(R) is to be replaced by w(R) * F_F(R). Similar considerations apply to the back devices 32.
  • Case (2) can be taken into account if the amount of the transmitted light is known (for example from measurements on the actual photodetector). If T is the transmission of the front left device in the red part of the spectrum, one can write
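Case (2) can be sketched numerically as follows; unit conversion efficiency and an exactly known transmission T are simplifying assumptions made for the example, and the variable names are hypothetical.

```python
def correct_for_transmission(f_f_red, f_b_meas, T):
    """If a front 'red' device absorbs only a fraction (1 - T) of the red
    light and transmits the rest, the measured front signal understates
    the red flux and the aligned back signal contains the leaked red.

    red_total      : corrected red flux (front signal scaled up)
    back_corrected : back signal with the leaked red removed
    """
    red_total = f_f_red / (1.0 - T)
    back_corrected = f_b_meas - T * red_total
    return red_total, back_corrected

# Hypothetical pixel: true red flux 4.0, remaining light 6.0, T = 0.25.
# The front device then measures (1 - 0.25) * 4.0 = 3.0 and the back
# device measures 6.0 + 0.25 * 4.0 = 7.0.
red_total, back_corrected = correct_for_transmission(3.0, 7.0, T=0.25)
```

The correction recovers the true red flux (4.0) and the true remainder (6.0), illustrating why a measured value of T for the actual photodetector is sufficient to undo the leakage.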
  • Fig. 7 illustrates a third photodetector 70 which is identical to the second photodetector 60 of Fig. 6 in respect of two of the sub-arrays 35 and 36 of the sensor devices 31 of the front array (hereinafter first and second sub-arrays 35 and 36) but in which the third sub-array 34 has a response spectrally limited to both of the colour components to which the responses of the sensor devices 31 of the first and second sub-arrays 35 and 36 are spectrally limited, i.e. to both green and blue colour components in this example.
  • the sensor devices 32 of the sub-array 37 of the back array that is aligned with the third sub-array 34 of the sensor devices 31 of the front array produces a signal that represents only the single remaining colour component that is not converted by the sensor devices 31 of the third sub-array 34 of the front array, i.e. the red colour component in this example.
  • the output signals may be processed in the same manner as described above but swapping the signals of the third sub-array 34 of the front array with the signals of the aligned sub-array 37 of the back array. In that way, a luminance signal representative of the luminance of the incident light and colour component signals representative of each of the colour components may be derived from the signals output by the front array of sensor devices 31 and the back array of sensor devices 32 in combination.
  • the third photodetector 70 may provide constructional advantages in requiring the sensor devices 31 of the front array to be spectrally limited to a smaller number of colour components in total, for example limited to green and blue, none being required to be spectrally limited to red in this example. If, for example, the light absorbing properties of the front array of sensor devices 31 are determined by organic molecules such as dyes, one could choose one set of dyes absorbing the green part of the spectrum and another set of dyes absorbing the blue part of the spectrum, without the need for a set of dyes absorbing the red part of the spectrum. Additional detail regarding the fabrication and basic device design using organic dyes is given below.
  • red light is transmitted by every sensor device 31 of the front array and contributes to an electric signal by every sensor device 32 of the back array.
  • This set-up might be particularly desirable if it is technically challenging to realise the desired physical properties, e.g. quantum efficiency, for a particular colour (in this example red) in the front array.
  • the signal processing may be modified to increase the colour resolution, in particular increasing the number of pixels per colour by up to a factor of 2, as follows.
  • F_BFull(R,G, n, m) - F_FFull(G, n, m) = F_BFull(R, n, m)
  • the signals of the back sensor devices 32 in the example depicted in Fig. 6 are derived from a larger number of photons than the signals from the front sensor devices 31 and are expected to have a larger signal to noise ratio.
  • if the signal F_B(G,B) from the sub-array 37 of the back array does not vary across the 3 pixels, adding the signal F_B(R,B) of the back sensor device 32 of the sub-array 38 to the signal F_B(R,G) of the back sensor device 32 of the sub-array 39 and subtracting the signal F_B(G,B) from the back sensor device 32 of the sub-array 37 yields
  • the signal is twice as big as the signal which would have been detected if only a single colour component per photodetector had been detected.
  • F_B(R, (2,3)) = F_B(R, 2) + F_B(R, 3)
  • F_B(R, (2,3)) = F_B(R,B, 2) + F_B(R,G, 3) - mean(F_B(G,B, 1), F_B(G,B, 4))
  • F_B(G, (3,4)) = F_B(R,G, 3) + F_B(G,B, 4) - mean(F_B(R,B, 2), F_B(R,B, 5))
  • F_B(B, (4,5)) = F_B(G,B, 4) + F_B(R,B, 5) - mean(F_B(R,G, 3), F_B(R,G, 6))
  • F_BFull(R, n, m) = F_BFull(R,B, n, m) + F_BFull(R,G, n, m) - F_BFull(G,B, n, m)
  • F_BFull(G, n, m) = F_BFull(R,G, n, m) + F_BFull(G,B, n, m) - F_BFull(R,B, n, m)
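Assuming the interpolated two-colour back planes are available as full-resolution arrays, the single-colour planes follow from the sums and differences above; the factor of 2 noted in the text is divided out here, and the array names are illustrative.

```python
import numpy as np

def back_colour_planes(rb, rg, gb):
    """Recover the R, G and B planes from the interpolated two-colour back
    planes, e.g. (R+B) + (R+G) - (G+B) = 2R, hence the division by two."""
    r = (rb + rg - gb) / 2.0
    g = (rg + gb - rb) / 2.0
    b = (gb + rb - rg) / 2.0
    return r, g, b

# Hypothetical flat scene with R = 2, G = 3, B = 5.
rb = np.full((2, 2), 7.0)   # F_BFull(R,B)
rg = np.full((2, 2), 5.0)   # F_BFull(R,G)
gb = np.full((2, 2), 8.0)   # F_BFull(G,B)
r, g, b = back_colour_planes(rb, rg, gb)
```

Each output plane is derived from two measured planes, which is why the text expects the back-array colour pixels to have a larger signal to noise ratio than single-colour measurements.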
  • the signal processing may be modified to increase the dynamic range as follows.
  • the additional colour pixels derived from the sensor devices 32 of back array may differ from the pixels derived from the sensor devices 31 of the front array in geometry (size, device geometry) and possibly also in material properties such as absorption properties due to different materials being used. Consequently, the saturation points would be reached after different exposure times, i.e. the sensor devices 31 of the front array have different sensitivities compared to the sensor devices 32 of the back array.
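One standard way, not prescribed by the text, to exploit the differing sensitivities of the front and back arrays is a multi-exposure-style fusion: use the more sensitive measurement until it saturates, then fall back on the less sensitive one rescaled by the sensitivity ratio. All parameters here are hypothetical.

```python
import numpy as np

def extend_dynamic_range(sig_hi, sig_lo, gain_ratio, sat_level):
    """Fuse two readings of the same pixel made by devices whose
    sensitivities differ by gain_ratio (= sensitivity_hi / sensitivity_lo).
    Saturated high-sensitivity pixels are replaced by the rescaled
    low-sensitivity reading, extending the usable dynamic range."""
    saturated = sig_hi >= sat_level
    return np.where(saturated, sig_lo * gain_ratio, sig_hi)

# Hypothetical scene radiances 10 and 50; the sensitive device (gain 4)
# clips at 100 counts, while the other reads the radiance directly.
radiance = np.array([10.0, 50.0])
sig_hi = np.minimum(radiance * 4.0, 100.0)   # [40.0, 100.0]
sig_lo = radiance                             # [10.0, 50.0]
fused = extend_dynamic_range(sig_hi, sig_lo, gain_ratio=4.0, sat_level=100.0)
```

The fused values sit on the high-sensitivity scale, so the second pixel (which clipped at 100) is recovered as 200, twice the clipping level.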
  • Fig. 9 is a high level diagram showing an alternative operation of the DSP circuit 27 to process the signals from the second photodetector 60 employing some of the additional techniques described above, as follows.
  • the second photodetector 60 creates a RGB F image 80 coming from the front sensor devices 31 and a (GB)(RB)(RG) B image 81 derived from the back sensor devices 32. Both the RGB F image 80 and the (GB)(RB)(RG) B image 81 are used to create a full white image 82.
  • noise reduction algorithms are applied to the RGB F image 80, the (GB)(RB)(RG) B image 81 and the full white image 82 to create, respectively, a noise reduced RGB F image 83, a noise reduced (GB)(RB)(RG) B image 84 and a full noise reduced white image 85. It is beneficial not to construct the noise reduced white image 85 using the noise reduced RGB F image 83 and the noise reduced (GB)(RB)(RG) B image 84 because some of the noise in the signals from the front sensor devices 31 might cancel out against the noise from the back sensor devices 32 when the raw RGB F image 80 and (GB)(RB)(RG) B image 81 are combined.
  • if red light passes through a sensor device 31 of the red front sub-array 34 it will result in a reduced signal from the sensor devices 31 belonging to the red sub-array 34.
  • the transmitted red light will be absorbed in the aligned back sub-array 37, which results in an increased signal in this sub-array 37.
  • a full noise reduced RGB F image 86 and a full noise reduced (GB)(RB)(RG) B image 87 are generated from the noise reduced RGB F image 83 and the noise reduced (GB)(RB)(RG) B image 84, respectively, using suitable interpolation filters.
  • a full noise reduced RGB B image 88 is created which is combined with the full noise reduced RGB F image 86 to create a full noise reduced RGB image 89.
  • the combination of the full noise reduced RGB F image 86 with the full noise reduced RGB B image 88 might either produce a full noise reduced RGB image 89 with increased dynamic range as described above or a full noise reduced RGB image 89 with increased colour resolution as described above.
  • In situ device calibration and enhanced error detection may be provided by the DSP circuit 27 as follows.
  • an out of focus image of a grey background might be taken to identify pixel errors.
  • the signal variation from one pixel to the next should be minute (because the optics was out of focus).
  • any large changes in intensity between adjacent pixels are due to imperfections of a sensor device which can be caused by dust particles on the sensor or faulty elements.
  • the uniform grey background provides the reference against which the sensor's performance is measured.
  • the present invention makes it possible to identify pixel errors without the need to take such a gray image.
  • the image derived from the back array should be identical to the image provided by the front array (except for random noise components).
  • the front image can be compared with the back image to identify imperfections due to faulty devices.
  • the devices belonging to the front array may be made of different materials and have a different structure compared to the devices of the back array. Hence, they will be subject to undesirable physical phenomena to different degrees. For example, the sensitivity of the devices in the front layer may decay more rapidly than that of the back layer (e.g. due to bleaching of dyes), making them less suitable for low light applications. Also, this decay might not develop in every device at the same rate.
  • F_EFull(X, n, m) = F_BFull(X, n, m) - F_FFull(X, n, m).
  • an error at position (n, m) may be flagged if F_EFull(X, n, m) exceeds an acceptable error tolerance E(X), for example E(X) = n·e/C_G, with e being the charge of one electron and C_G being the capacitance which translates the collected charge into the voltage applied to the detecting transistor gate.
  • if step 1 detected an error at position (n, m), the following criteria might be used.
  • F_F(X, n, m) and F_B(X, n, m) are the data sets without any noise reduction (i.e. the raw data). Assume that the errors generally occur in the array with the higher standard deviation.
  • the more unreliable device array might be identified using criteria based on the signal to noise ratio or the change in signal between adjacent pixels for each device layer.
  • the faulty device might be identified using criteria based on the change in signal between adjacent pixels for each device layer.
  • F_FFull(X, n, m) = F_BFull(X, n, m).
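The comparison of the two planes and the replacement F_FFull(X, n, m) = F_BFull(X, n, m) can be sketched as below; treating the back array as the more reliable one is an assumption made for the example, and the tolerance value is hypothetical.

```python
import numpy as np

def detect_and_repair(front_plane, back_plane, tol):
    """Flag pixels where |F_BFull - F_FFull| exceeds the tolerance E(X)
    and repair them by copying the value from the (assumed more reliable)
    back plane, per F_FFull(X, n, m) = F_BFull(X, n, m)."""
    faulty = np.abs(back_plane - front_plane) > tol
    repaired = np.where(faulty, back_plane, front_plane)
    return faulty, repaired

# Hypothetical planes with one dead front pixel at (1, 1).
front = np.array([[5.0, 5.0],
                  [5.0, 0.0]])
back = np.full((2, 2), 5.0)
faulty, repaired = detect_and_repair(front, back, tol=1.0)
```

Because the two stacked arrays image the same scene, this check needs no out-of-focus grey reference frame, which is the advantage claimed in the text.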
  • F_FFull(n, m) = F_FFull(R,-ΔR, n, m) + F_FFull(G,-ΔG, n, m) + F_FFull(B,-ΔB, n, m)
  • a criterion can be defined to specify if this loss in signal is acceptable.
  • a white balance using a gray card can be performed to identify faulty pixels.
  • Fig. 10 illustrates a fourth photodetector configuration 100 in which the sensor devices 31 of the front array again each have a response spectrally limited to a single one of three colour components and the sensor devices 32 of the back array each have a response to the entire spectrum of interest, but with the following modifications.
  • the front array consists of two interleaved sub-arrays 101 and 102 of sensor devices 31 (hereinafter first and second sub-arrays 101 and 102) that each have a response spectrally limited to a different one of three colour components, that is to blue and green, respectively, in this example.
  • the fourth photodetector 100 comprises two arrays of filters 103 and 104 (hereinafter first and second arrays of filters 103 and 104), each aligned in a one-to-one relationship with one of the sub-arrays 101 and 102 of sensor devices 31 of the front array.
  • the arrays of filters 103 and 104 are each arranged to absorb one of the colour components other than the colour component to which the response of the respective sub-array 101 or 102 of sensor devices of the front array with which the array of filters is aligned is spectrally limited.
  • the arrays of filters 103 and 104 absorb at least a portion of the colour component concerned, preferably a significant portion or all of the colour component.
  • the first array of filters 103 absorbs red light (i.e. not blue light converted by the aligned first sub-array 101) and the second array of filters 104 absorbs blue light (i.e. not green light converted by the aligned second sub-array 102).
  • since the arrays of filters 103 and 104 are each aligned with a respective sub-array 101 or 102 of photoelectric sensor devices 31 of the front array, the arrays of filters 103 and 104 are interleaved in the same manner as the sub-arrays 101 and 102 of photoelectric sensor devices.
  • the arrays of filters 103 and 104 may be thought of as being similar to a colour filter array in the case of visible light except for the difference that they absorb a particular colour component while conventional colour filters transmit a particular colour component; i.e. the red filter here absorbs red light while a red filter in a conventional CFA transmits red light.
  • the arrays of filters 103 and 104 are shown as a separate element in front of the sensor devices 31 of the front array, but they could alternatively be provided as a separate element between the sensor devices 31 of the front array and the sensor devices 32 of the back array or integrated within the sensor devices 31 of the front array, provided that they are disposed in front of the sensor devices 32 of the back array. Examples of such an integrated implementation are given further below.
  • the first array of filters 103 is arranged to absorb a colour component to which the response of neither the first sub-array 101 nor the second sub-array 102 of sensor devices 31 of the front array is spectrally limited, in this example red light.
  • the second array of filters 104 is arranged to absorb the colour component to which the response of the first sub-array 101 of sensor devices 31 of the front array is spectrally limited.
  • each sensor device 32 of the back array outputs a signal representing a single colour component.
  • the sensor devices 32 of the back array may also be considered to consist of two interleaved sub-arrays 105 and 106 of sensor devices 32, aligned respectively with the two interleaved sub-arrays 101 and 102 of the front array of sensor devices 31, that output signals representing a single colour component, being in this example green and red, respectively.
  • the output signals from the sub-arrays 101 and 102 of the front array of sensor devices 31 and the sub-arrays 105 and 106 of sensor devices 32 provide all three colour components, so no further signal processing is required.
  • the sub-arrays 101 and 102 and hence the sub-arrays 105 and 106 are interleaved, for example in the pattern shown in Fig. 11 (which is a view from above showing the colour components converted at the location of each aligned front sensor element 31 and back sensor element 32) in which the reference numerals 101, 102, 105 and 106 relate to the corresponding photodetectors shown in Fig. 10 (the filters 103 and 104 are not shown).
  • the output signals may be processed in a similar manner to that described for other photodetector configurations. In that way, a luminance signal representative of the luminance of the incident light and colour component signals representative of each of the colour components may be derived from the signals output by the front array of sensor devices 31 and the back array of sensor devices 32 in combination.
  • the photodetector configuration 100 has a reduced sensitivity compared with the first, second and third photodetector configurations 30, 60 and 70, as some incident light is absorbed by the arrays of filters 103 and 104. Nonetheless, the fourth photodetector configuration 100 has an improved sensitivity over a conventional image sensor, as only a single colour component is absorbed at the location of each sensor device 31 and 32.
  • increased fidelity and colour gamut may be provided by modifying the responses of the sensor devices 31 of the front array and/or one or both of the arrays of filters 103 and 104 so that the spectral characteristics of one or more of the colour components represented by the output signals are different for different sub-arrays.
  • Fig. 12 shows a fifth photodetector configuration 110 which is the same as the fourth photodetector configuration 100 but modified so that the second sub-array 102 of the front array converts a green colour component G_0 that has a different spectral characteristic from the green colour component of the first sub-array 105 of the back array.
  • the output signals represent two different green colour components G and G_0, where the spectral characteristic of the green colour component G is defined by the absorption spectra of the R and B absorbing material of the array of filters 103 and the sensor devices 31 of the sub-array 101 of the front array, while the spectral characteristic of the green colour component G_0 is directly defined by the properties of the light absorbing material used in the sensor devices 31 of the sub-array 102 of the front array.
  • This set-up is also useful if it proves to be too challenging to incorporate a suitable material with sufficiently high efficiencies in the red part of the light spectrum.
  • Fig. 13 shows a sixth photodetector configuration 111 which is the same as the fifth photodetector configuration 110 but modified so that the first and second arrays of filters 103 and 104 absorb light of the same colour components as the second sub-array 102 and the first sub-array 101, respectively, but with different spectral characteristics, i.e. different bandwidths.
  • the output signals from the two sub-arrays 105 and 106 of the back array both correspond to different green colour components G and G_0.
  • Fig. 14 illustrates a seventh photodetector configuration 112 which is the same as the second photodetector configuration 60 but modified to extend the considerations of using different spectral characteristics to the case of three sub-arrays 34, 35 and 36, so that the spectral characteristics of the colour components R_1, G_2 and B_3 of the front sensor devices 31 differ from the spectral characteristics of the colour components G_1, B_1, R_2, B_2, R_3 and G_3 represented by the signals from the back sensor devices 32.
  • the front sensor devices 31 which detect the colour components R_1, G_2 and B_3 might have very narrow absorption spectra, whereas the absorption spectra G_1, B_1, R_2, B_2, R_3 and G_3 of the back sensor devices 32 are broader.
  • the achievable colour gamut of the front array of sensor devices 31 is potentially larger than the colour gamut obtainable by the back array of sensor devices 32.
  • Such a design allows the creation of an image with improved colour gamut as the colours R1, G2 and B3 are placed closer to the edge of the CIE 1931 colour space chromaticity diagram, i.e. closer to monochromatic colours.
  • the output signals of the fifth to seventh photodetector configurations 110 to 112 may be processed in a similar manner to that described for other photodetector configurations. In that way, a luminance signal representative of the luminance of the incident light and colour component signals representative of each of the colour components may be derived from the signals output by the front array of sensor devices 31 and the back array of sensor devices 32 in combination.
  • the photodetector 22 may comprise further sub-arrays of photodetectors, additional colour filter arrays and/or transistors (e.g. for active pixel arrays).
  • Fig. 15 shows an example of the construction of a sensor device 31 of the front array that is a dye sensitised photoelectric device employing organic material in the form of an organic dye to convert one or more of the colour components into an electrical signal.
  • Figure 15 shows the approximate thickness of the various layers, although thinner layers may be preferred.
  • the sensor device 31 has a layered construction that will now be described.
  • the sensor device 31 is formed on a substrate 120, which is a silicon substrate in which the sensor devices 32 of the back array are formed.
  • a first electrode 121 formed of a transparent conductive material, e.g. ITO (indium tin oxide) or FTO (fluorine doped tin oxide), is provided as a first layer on the substrate 120.
  • the second layer provided on the first electrode 121 is a compact conductive layer 122 of a non-porous, transparent material (e.g. TiO2) that is less conductive than the first electrode 121.
  • the third layer provided on the compact conductive layer 122 is a conductive layer 123 made of a highly transparent material (e.g. TiO2 nanocrystals) arranged in an open structure having pores. This material is conductive to charge carriers of a first type, in this example electrons.
  • the porous conductive layer 123 is electrically connected to the compact conductive layer 122 and hence to the first electrode 121.
  • the compact conductive layer 122 and the porous conductive layer 123 may be made of the same material but that is not essential.
  • the porous conductive layer 123 is coated with an organic dye 124 comprising light absorbing molecules that convert one or more of the colour components.
  • the next layer is a transporter layer 125 that sufficiently fills the pores of the porous conductive layer 123 and extends above the porous conductive layer 123. It is important to note that the transporter 125 is not necessarily required to fill the pores completely, and it is conceivable that said hole transporter merely wets the surface of the conductive layer 123, provided that the charge carrier mobility of the second type of charge carriers is sufficiently large. Furthermore, layer 123 might be replaced by another transparent and conductive layer with a sufficiently large surface-to-volume ratio. Nanowires comprising a transparent conductive oxide (e.g. SnO, ZnO or TiO2) may constitute a suitable alternative.
  • the transporter layer 125 is made of a material that is conductive to charge carriers of a second type, in this example holes.
  • the transporter layer 125 may be a hole-conducting polymer.
  • the final layer is a second electrode 126 disposed on, and electrically connected to, the transporter layer 125.
  • the second electrode 126 is made of a transparent film, for example a film of metal, e.g. gold, less than 50nm thick; or a transparent conductive oxide such as ITO.
  • the sensor device 31 works as follows. Light penetrating the second electrode 126 (which is the front electrode) is absorbed in the organic dye 124, whereby each absorbed photon generates one electron, which is injected into the porous conductive layer 123, and one hole, which is injected into the transporter layer 125. Hence, electrons are collected at the first electrode 121 and holes are collected at the second electrode 126.
  • the wavelength of the photons absorbed is determined by the choice of the organic dye 124. Materials of different light absorbing properties can be selected to obtain the desired spectral response of the sensor device 31.
  • a filter may be incorporated into the sensor devices 31 by including an absorptive material arranged to absorb EM radiation without providing the photoelectric sensor device 31 with a response thereto.
  • the absorptive material might be disposed within the transporter layer 125, for example being nanoparticles or dyes which are electrically isolated from the porous conductive layer 123.
  • the absorptive material might be a dye (or an equivalent material) disposed anywhere in the sensor device 31, but preferably within the transporter layer 125, without being able to inject a charge into either one of the electrodes 121 or 126.
  • Such a filter incorporated into the sensor devices 31 may be used to form the embodiments described above in which arrays of filters are provided in front of the sensor devices of the back array.
  • Fig. 16 shows the construction of the photodetector 22 with the sensor devices 31 of the front array stacked on a semiconductor substrate 130 in which the sensor devices 32 of the back array are formed as CMOS devices, or more generally as any other form of silicon-based photovoltaic devices (e.g. a CCD), in a common layer of silicon, or more generally semiconductor material.
  • the sensor devices 31 of the front array have respective first electrodes 121 and a common front electrode 126.
  • the sensor devices 31 and 32 may be formed using conventional techniques, including but not restricted to any combination of the following techniques.
  1. Additive techniques (e.g. deposition, transfer)
  • Deposition methods include but are not restricted to direct or indirect thermal evaporation, sputter deposition, chemical vapour deposition, spin coating, spray coating, doctor blading and ink-jet printing.
  • Transfer methods include dry transfer methods such as stamp-based transfers, and device bonding
  • Etching includes wet-chemical etching and dry-etching (e.g. reactive ion etching). Dry etching techniques may be combined with sputtering techniques.
  • Sputtering includes ion milling.
  • Local heating may occur due to a localised exposure to an energy source (e.g. a focussed laser beam, or selective exposure to photons using a mask) or due to particular energy absorbing properties of the material to be heated (e.g. due to a smaller indirect band-gap compared to neighbouring materials, allowing it to absorb photons under emission of phonons.
  • Excimer laser annealing of silicon on a glass substrate is such a material combination, where the photons are predominantly absorbed in the silicon).
  • Chemical functionalisation may utilise particular surface properties of the elongate low dimensional structures being defined by the material composition.
  • the sensor devices 31 and 32 themselves may have a detailed construction similar to the stacked sensor devices disclosed in US-2007/0012955 and US-2009/0283758.
  • the sensor devices 31 of the front array may be fabricated directly onto sensor devices 32 of the back array, or alternatively may be fabricated on different substrates and subsequently assembled. Some examples of methods that may be employed are as follows.
  • a glass substrate 140 is coated with a TCO (transparent conductive oxide) layer 141.
  • a compact TiO2 film (not shown) may be deposited.
  • a porous conductive film 142 of TiO2 particles is formed (e.g. by doctor blading), which is subsequently sintered at a temperature in the range of 300°C to 600°C to ensure continuous conduction paths through the film towards the TCO layer 141.
  • Photoresist 143 is spin-coated and patterned using conventional photolithographic techniques. The result up to this point is depicted in Fig. 17(a).
  • a first dye and first hole transporter are added to the unmasked sections 144 that ultimately form a first sub-array of sensor devices 31, as shown in Fig. 17(b).
  • the surface properties of the photoresist 143 may have to be altered using a suitable functionalisation step.
  • the photoresist 143 is removed using suitable solvents, as shown in Fig. 17(c).
  • the first hole transporter may be cross-linked.
  • a second dye and second hole transporter are added to the sections 145 that were previously masked and ultimately form a second sub-array of sensor devices 31, as shown in Fig. 17(d).
  • the first hole transporter may be identical to the second hole transporter.
  • An even shorter process flow may comprise the deposition of the first dye after the sintering of film 142 and spin coating of the photoresist 143.
  • the first dye might be removed.
  • a suitable resist developer might also remove the dye.
  • the second dye is deposited, the resist is removed and the hole transporter is added. In this case, the positions of the first and second dyes are swapped.
  • Figs. 18(a) and (b) show an alternative construction of the front array of sensor devices 31 similar to that of Fig. 17 but in which the porous conducting material 142 is discontinuous and separated by spacers 146 forming pockets 147 and 148.
  • all pockets 147 and 148 are filled with a suitable polymer and subsequently a first set of pockets 148 is masked using the photolithography process described above.
  • the polymer of the exposed pockets 147 is removed, and a first dye and first hole transporter are deposited, as shown in Fig. 18(a).
  • the resist 143 is removed and the polymer in the previously masked pockets 148 is removed.
  • the polymer might be highly soluble in a particular solvent compared to the first hole transporter (this can be achieved either by choosing an appropriate polymer and solvent combination or by cross-linking the first hole transporter).
  • a second dye and second hole transporter are deposited in the open pockets 149, during which the previously processed pockets 148 might be masked using a patterned photoresist.
  • This embodiment is particularly useful if the chosen photoresist does not mask the porous material sufficiently (e.g. it might not fill its cavities sufficiently) or if it is difficult to remove the chosen photoresist from the cavities of the porous material.
  • the material in the pockets 147 and 148 forms respective sub-arrays of sensor devices 31.
  • the porous conductive film 142 is coated with a first dye and first hole transporter, as shown in Fig. 19(a).
  • the desired portions 150 of hole transporter are crosslinked using a suitable light source (e.g. UV light) in combination with a metal mask, as shown in Fig. 19(b).
  • a suitable light source e.g. UV light
  • the non-crosslinked portions 151 of polymer are removed, for example using techniques well known for patterning photoresists.
  • the first dye is removed from the exposed sections 151 (e.g. using oxygen plasma) and a second dye and second polymer are deposited as shown in Fig. 19(c).
  • the fabricated assembly 160 of the front array of sensor devices 31 needs to be joined with the assembly 162 of the back array of sensor devices 32 that can be made in a silicon chip using standard fabrication techniques for CMOS image sensors.
  • one option is to use an adhesive layer 163, which may be a conductive polymer film, to coat the assembly 160 of the front array of sensor devices 31 or the assembly 162 of the back array of sensor devices 32, as shown in Figs. 20(a) and (b) respectively, and subsequently to join the assemblies 161 and 162 together.
  • the adhesive layer 163 might require thermal annealing or exposure with a suitable radiation source to improve adhesion for example by facilitating crosslinking.
  • the assembly 160 of the front array of sensor devices 31 needs to be sufficiently transparent to the radiation used (e.g. UV light) where a UV cure under high pressure is applied to form good electrical contacts. This can be achieved as follows.
  • the substrate 140 is replaced by a handling wafer 164 and a sacrificial layer 165, as shown in Fig. 21(a).
  • the handling wafer 164 and the sacrificial layer 165 are ideally transparent to UV light. If not, a bond just sufficient to keep the assembly 161 in place is required. Instead of forming the bond across the assembly 161, it may be possible to form the bond at the periphery, thereby effectively sealing the assemblies 161 and 162.
  • the handling wafer 164 is removed, as shown in Fig. 21(b), possibly by etching using the sacrificial layer 165 as an etch stop, or alternatively by removal of the sacrificial layer 165 to release the handling wafer 164.
  • the use of a layer acting as an etch stop might be required because glass wafers which have thermochemical properties comparable to silicon (required for high alignment accuracy) are usually etched with HF which also etches most TCOs.
  • the sacrificial layer 165 is removed, as shown in Fig. 21(c).
  • the remaining TCO layer 141 might also absorb a portion of the UV radiation, but it can be made sufficiently thin to transmit a sufficiently large fraction of the UV radiation. Also, where spacers 146 are used, as shown in Figs. 18(a) and (b), the spacers 146 consist of a material that is sufficiently transparent to the UV radiation.


Abstract

An image sensor for sensing EM radiation of at least three spectral components spread across a predetermined spectrum of interest comprises stacked front and back arrays of photoelectric sensor devices aligned in a one-to-one relationship, wherein the front array consists of interleaved sub-arrays of photoelectric sensor devices having a spectrally limited response that is the same within each sub-array and different between the sub-arrays, and the back array comprises photoelectric sensor devices each having a response to the entire spectrum of interest. Consequently, the front photoelectric sensor devices output signals representing the one or more spectral components, and the back photoelectric sensor devices output a signal representing the total of all of said spectral components that reach the back photoelectric sensor device without conversion by the aligned front photoelectric sensor devices. This allows the derivation of full resolution images with a high sensitivity and a simple image sensor construction.

Description

Image Sensor
The present invention relates to image sensors for sensing EM (electromagnetic) radiation of at least three spectral (e.g. colour) components which constitute a subset of the EM radiation spectrum of interest. The EM radiation spectrum of interest is application dependent and may encompass, in the case of conventional image sensors as used in mobile phones or digital single lens reflex cameras, the spectrum of visible light, which ranges approximately from 400 nm to 800 nm.
Various types of colour image sensors are known which detect the spatial intensity distribution of particular spectral components within the EM radiation spectrum of interest. The most common image sensor comprises an array of photoelectric sensor devices, each of them capable of detecting EM radiation belonging to the entire spectrum of interest. To resolve the spatial colour information, these device arrays are masked by colour filter arrays (CFAs) consisting of interleaved sub-arrays of colour filters that each pass spectrally limited EM radiation of a single spectral component, for example only a red, green or blue colour component. Each photoelectric sensor device and its corresponding colour filter constitute one pixel, whose signal output depends on the intensity of the light transmitted by the filter. The most widespread arrangement of these CFAs comprises red, green and blue filters assembled in a Bayer type configuration, which contains twice as many green pixels as red or blue pixels to reflect the human eye's greater sensitivity to green light. The photoelectric sensor array may for example be based on CCDs (charge coupled devices) or constitute active pixel sensor arrays as realised in CMOS (complementary metal-oxide-semiconductor) sensors. The use of CFAs is wasteful because they absorb most of the EM radiation within the spectrum of interest which, as a result, is lost without contributing to the signal generated by the pixels, drastically reducing the sensitivity of the image sensor.
Sensitivity affects the signal and consequently the signal to noise ratio (SNR) of the image sensor. It is particularly compromised if increases in sensor resolution are achieved by smaller pixels, whose smaller area will inevitably be struck by a smaller number of photons.
Statistical fluctuations in the total number of photons hitting a particular area (i.e. shot noise) in relation to the overall number of photons hitting said area are evidently more relevant if the overall number of photons is small. This is more likely to be relevant for a small pixel area, giving rise to a lower SNR. Shot noise, unlike other noise sources, constitutes the fundamental limit of the sensor's sensitivity and is particularly problematic in low light conditions or for image sensors with small form factors (i.e. small pixel sizes).
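The square-root character of shot noise can be illustrated with a short calculation. This is a generic sketch of Poisson photon statistics, not specific to any sensor described here; the photon counts are arbitrary illustrative values.

```python
import math

def shot_noise_snr(photon_count):
    # Photon arrival is Poisson-distributed, so the noise (standard
    # deviation) is sqrt(N) and the SNR is N / sqrt(N) = sqrt(N).
    return photon_count / math.sqrt(photon_count)

# A pixel collecting 10,000 photons has SNR 100; halving the pixel area
# (half the photons) degrades SNR only by a factor of sqrt(2), not 2.
assert shot_noise_snr(10000) == 100.0
assert abs(shot_noise_snr(5000) - 100.0 / math.sqrt(2)) < 1e-9
```

This is why smaller pixels, which collect fewer photons, are disproportionately shot-noise limited: the SNR falls with the square root of the collected photon count.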
Other noise sources such as fixed pattern noise, thermal noise and read noise are increasingly addressed by improving the manufacturing process or using more advanced electronics (especially A/D converters), such that the performance of most recent image sensors is increasingly limited by shot noise.
Hence it is desirable to improve sensitivity to allow the development of image sensors facilitating increased image quality. The obvious solution would be the use of sensors with sufficiently large photodetector sites, as are common in the high-end digital camera market. However, even these cameras suffer increasingly from shot noise. Furthermore, large sensors are often not practical as their use is accompanied by larger optics, making them impractical for applications where small form factors are required (e.g. mobile phones, point and shoot cameras). Furthermore, recent years have seen the successful introduction of travel zoom and super zoom cameras, which tend to suffer from slower lenses. Generally, camera lenses are of high technical maturity, which means that further significant improvements (e.g. towards more compact or faster lenses) will come at significant cost.
More inexpensive improvements take advantage of the progress of microtechnology and address the sensor itself. The fill factor, for example, is typically limited by metal leads and, in the case of active pixel sensors, CMOS circuits integrated within each pixel. Microlenses are used to focus light onto the photo-active pixel area which would otherwise be reflected by these circuits and metal leads, increasing the amount of incident light collected. Backside illuminated CMOS sensors were also introduced, where the photoelectric sensor devices are provided on the backside, opposite the metal leads and CMOS circuits. However, although utilising these approaches it is now possible to obtain fill factors which are close to 1, i.e. all the light is directed onto the photo-active region, all of these sensors still rely on the use of CFAs to obtain spatial colour resolution.
Hence, most recent efforts focus on minimising the use of CFAs where EM radiation is absorbed without contributing to the sensor's signal response.
WO-2008/150342 discloses a sensor design which includes, alongside the well known R, G and B pixels, also panchromatic pixels which convert most if not all of the photons within the visible light spectrum into a signal. As a result, the image sensor as a whole captures more light, increasing its overall sensitivity. However, this gain is achieved at the expense of lateral colour resolution, which is determined by the spacing of the R, G and B filters, which are now further apart to accommodate the panchromatic pixels.
Alternative approaches aimed at reducing the use of CFAs have been proposed such as utilising stacked arrays of photoelectric sensor devices. Some examples of such approaches are as follows.
US-6,731,397 discloses an example using silicon as the photoelectric material by forming a stack of three pn junctions in a sheet of silicon that constitute respective photoelectric sensor devices. This construction takes advantage of the wavelength dependent absorption depth of silicon, which results in each photoelectric sensor device predominantly outputting a signal of a different colour component. This improves the sensitivity by collecting light of all three colour components at each pixel location, but the physical construction presents some limitations. It is important to note that the response of the photoelectric sensing devices generally relates to more than one colour component. For example, the photoelectric sensor device closest to the surface of said silicon material, and hence closest to the light source, will predominantly absorb light with the shortest wavelength, i.e. blue light, but also light of any longer wavelength. Consequently, the colour signals do not correspond to a particular colour space, say the RGB space, and need to be converted into a standard colour space. This requires additional processing power and is done by aggressive calculations requiring a colour correction transformation matrix, which induces additional noise and may still suffer from colour mixing. EM radiation absorbed in the depletion regions between the pn junctions is not sensed, and compensation doping negatively affects the carrier lifetime in the depletion regions and hence the dark current.
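The colour-space conversion and its noise penalty can be sketched as follows. The matrix entries and signal values here are invented for illustration; a real colour correction matrix would be derived from the measured spectral responses of the stacked junctions.

```python
# Hedged sketch: converting overlapping layer signals to RGB with a
# colour correction matrix. All numbers are hypothetical.
# Layered-silicon sensors output overlapping spectral signals (the top
# layer sees mostly blue but also some green and red, etc.), so a 3x3
# matrix M must unmix the raw layer signals into a standard RGB space.
M = [[ 1.8, -0.6, -0.2],
     [-0.5,  1.9, -0.4],
     [-0.1, -0.7,  1.8]]

def correct(signals):
    # Matrix-vector product: one output channel per row of M.
    return [sum(m * s for m, s in zip(row, signals)) for row in M]

raw = [0.5, 0.6, 0.4]          # top, middle, bottom layer signals
rgb = correct(raw)

# The large off-diagonal entries amplify noise: independent per-layer
# noise of variance v becomes sum(row[i]**2) * v in each output channel,
# which here is always greater than the input variance.
noise_gain = [sum(m * m for m in row) for row in M]
assert all(g > 1.0 for g in noise_gain)
```

The noise gain greater than 1 in every channel is the quantitative form of the "induces additional noise" drawback noted above.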
Other examples make use of photoelectric sensor devices comprising organic materials. Such materials are promising because they present the scope to optimise the photoelectric properties of the photoelectric sensor device.
US-7,411,620 discloses an image sensor that comprises stacked arrays of photoelectric sensor devices, each comprising organic material and having a response spectrally limited to a single colour component, e.g. by choosing appropriate dyes. This offers the opportunity to sense each colour component at each pixel location, thereby improving the sensitivity. However, manufacture of the construction involving stacked arrays of photoelectric sensor devices presents technical challenges. For example, when manufacturing each stacked array sequentially it is difficult to provide the necessary sintering of metal oxide particles such as TiO2 particles in the construction of the photoelectric sensor devices at elevated temperatures without damaging the organic material incorporated in the previously fabricated arrays, which may include dyes and hole transporter.
US-2007/0012955 and US-2009/0283758 disclose image sensors in which the problems of stacking photoelectric sensor devices comprising organic material are avoided by stacking a single array of photoelectric sensor devices comprising organic material on photoelectric sensor devices that are CMOS devices formed in silicon. The array of photoelectric sensor devices comprising organic material all have a response spectrally limited to the same spectral components, for example green. The photoelectric sensor devices that are CMOS devices formed in silicon are configured to output signals in respect of the two remaining spectral components, for example red and blue.
Both US-2007/0012955 and US-2009/0283758 disclose the possibility that the photoelectric sensor devices formed in silicon comprise a stack of two pn junctions in a sheet of semiconductor. This construction is similar to the image sensor disclosed in US-6,731,397 and allows the photoelectric sensor devices to output signals in respect of the two remaining spectral components, but it suffers from similar problems, albeit that a stack of only two photoelectric sensor devices formed by pn junctions is present.
US-2009/0283758 also discloses the possibility that the photoelectric sensor devices formed in silicon comprise an arrangement in which the photoelectric sensor devices are aligned with interleaved sub-arrays of colour filters that each pass spectrally limited EM radiation of a single colour component. Although sensitivity is improved by sensing one of the colour components in the array of photoelectric sensor devices comprising organic material, the use of filters in the photoelectric sensor devices formed in silicon still limits the improvement in the sensitivity because the filters absorb a proportion of the incident EM radiation. Therefore it would be desirable to provide an image sensor for sensing EM radiation of at least three spectral components spread across a predetermined spectrum of interest with an improved sensitivity as compared to a Bayer type of image sensor, but in which at least some of the technical problems associated with the approaches mentioned above are reduced.
According to the present invention, there is provided an image sensor for sensing EM radiation of at least three spectral components spread across a predetermined spectrum of interest, the image sensor comprising a front array of photoelectric sensor devices for receiving incident EM radiation stacked on a back array of photoelectric sensor devices and aligned in a one-to-one relationship with the photoelectric sensor devices of the back array for receiving the incident EM radiation after transmission through the front array, wherein
the front array consists of at least two interleaved sub-arrays of photoelectric sensor devices each having a response spectrally limited to one or more of said spectral components that is the same within each sub-array and different between the sub-arrays, so that the photoelectric sensor devices are configured to output signals representing the one or more spectral components to which they have a response, and
the back array comprises photoelectric sensor devices each having a response to the entire spectrum of interest, so that the photoelectric sensor devices are configured to output a signal representing the total of all of said spectral components that reach the sensor device without having been absorbed by the aligned photoelectric sensor device of the front array or by any filter optionally provided in front of the photoelectric sensor device of the back array.
Such an image sensor provides the capability of sensing EM radiation of at least three spectral components spread across a predetermined spectrum of interest with a construction that has the potential of providing improved sensitivity as compared to an image sensor whose spectral response is solely defined by the use of conventional colour filter arrays.
The capability of sensing EM radiation of at least three spectral components is achieved as a result of the front array consisting of at least two interleaved sub-arrays of photoelectric sensor devices each having a response that is spectrally limited to one or more of said spectral components that is the same within each sub-array and different between the sub-arrays. This contrasts with the construction in US-2007/0012955 and US-2009/0283758, in which the photoelectric sensor devices of the front array each have a response that is spectrally limited to the same one of said spectral components. As the photoelectric sensor devices of the back array each have a response to the entire spectrum of interest, they are configured to output a signal representing the total of all of said spectral components that reach the sensor devices. Due to the conversion of different spectral components by the sub-arrays of the front array, the respective sub-arrays of photoelectric sensor devices of the back array that are aligned with each sub-array of the front array are therefore also configured to output signals of one or more spectral components that can be different for each sub-array. Hence, the maximum number of spectral components equals the total number of sub-arrays. The spatial resolution of a particular spectral component is determined by the pitch of the corresponding sub-array. If, for example, the top array comprises only two sub-arrays (note that conventional Bayer image sensors require three individually fabricated sub-arrays), then the bottom array will also comprise two sub-arrays. Due to the use of two stacked arrays, this configuration has twice the number of pixels per area as compared to conventional CFAs, but it allows for detecting up to twice as many spectral components as there are sub-arrays in the top array, i.e. four colours in this example. The result is an increased colour gamut and a doubling of the number of pixels per area.
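The arithmetic of the two-sub-array example above can be sketched numerically. The spectral components and intensity values below are hypothetical and serve only to illustrate how two stacked sub-arrays yield four distinct spectral signals.

```python
# Hedged sketch of the "two sub-arrays, four spectral signals" point.
# Hypothetical spectral components (r, g1, g2, b) with invented units;
# a real sensor's bands and intensities would differ.
incident = {"r": 25, "g1": 20, "g2": 15, "b": 40}
total = sum(incident.values())

# Front sub-array 1 converts component g1; front sub-array 2 converts g2.
# Each back pixel reports everything its aligned front pixel passed through.
front1 = incident["g1"]               # signal: G1
back1 = total - front1                # signal: R + G2 + B
front2 = incident["g2"]               # signal: G2
back2 = total - front2                # signal: R + G1 + B

# Four distinct spectral measurements from just two stacked sub-arrays:
signals = (front1, back1, front2, back2)
assert signals == (20, 80, 15, 85)
```

Note that each front/back pair sums to the same total, which is why the stacked construction loses no light to filtering.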
Said construction can also be configured such that it provides improved sensitivity as compared to a Bayer type of image sensor, because all EM radiation of interest is collected at the location of each photoelectric sensor device: the photoelectric sensor devices of the back array are configured to output a signal representing the total of all of said spectral components that reach said sensor device of the back array without having been converted by the aligned photoelectric sensor device of the front array or by any filter provided in front of the photoelectric sensor device of the back array.
These advantages are achieved with a construction that is relatively simple to manufacture. In particular, the image sensor requires only two stacked arrays.
As the photoelectric sensor devices of the front array alone are required to have a spectrally limited response, it is straightforward to manufacture the photoelectric sensor devices of the back array to have a response to the entire spectrum of interest, for example by constructing the back array from photoelectric sensor devices comprising semiconductor material, such as silicon, for example as CMOS devices.
Furthermore, as the front array is just a single array of photoelectric sensor devices having a spectrally limited response, it is relatively straightforward to add the front array, for example by constructing the front array from photoelectric sensor devices comprising organic material, i.e. an organic semiconductor of either molecular or polymeric form, such as a dye, configured to convert said one or more of said spectral components, in contrast to the technical difficulties in constructing stacked arrays of photoelectric sensor devices having spectrally limited responses.
In the image sensor, it is desirable that each photoelectric sensor device of the front array has a high quantum efficiency so that it converts a high proportion of the one or more of said spectral components to which that photoelectric sensor device has a response. This advantageously provides each photoelectric sensor device of the front array with a relatively high sensitivity to the one or more spectral components to which it has a response.
Nonetheless, the high degree of absorption by the photoelectric sensor devices of the front array still allows derivation, if desired, of a luminance signal representative of the luminance of the incident EM radiation with high sensitivity, by using the signals output by the photoelectric sensor devices of both the front array and the back array in combination.
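The luminance derivation described above can be sketched numerically. This is a minimal illustration, not taken from the patent text, assuming ideal conversion so that every photon within the spectrum of interest is converted either by a front sensor device or by the aligned back sensor device; the unit amounts are invented.

```python
# Hedged sketch: per-pixel signal recovery for a stacked two-array sensor,
# assuming ideal conversion (front + back accounts for all incident light).

def derive_signals(front, back):
    """front, back: signals from one aligned pixel pair.

    Returns (luminance, front_component, remaining_components): the front
    device reports the spectral component(s) it absorbs, while the back
    device reports everything that passed through unconverted.
    """
    luminance = front + back          # all incident light is accounted for
    return luminance, front, back

# Example: a front device absorbing only green (G = 40 units) over a
# pixel receiving R = 30, G = 40, B = 30 (total 100 units).
lum, g, r_plus_b = derive_signals(40, 60)
assert (lum, g, r_plus_b) == (100, 40, 60)
```

Because the luminance is the sum of both signals, no incident light is discarded, in contrast to a CFA-based pixel where the filtered components never generate a signal.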
In a typical photoelectric sensor device of the front array that has such a high sensitivity, the peak absorptance, in respect of the one or more of said spectral components to which that photoelectric sensor device has a response, is at least 60%, preferably at least 75%, more preferably at least 90%, more preferably at least 95%, more preferably at least 99%. The absorptance
(sometimes referred to as the absorption factor) is the fraction of the incident EM radiation flux at a given wavelength that is absorbed. In an ideal sensor, all the absorbed EM radiation is converted.
The image sensor may further comprise a signal processing unit configured to receive the signals output by the photoelectric sensor devices of the front array and the back array and to process the received signals to derive spectral component signals representative of each of the at least three spectral components and/or a luminance signal representative of the luminance of the incident EM radiation.
In the derivation of the spectral component signals representative of each of the at least three spectral components, advantageously, the signal processing circuit uses the signals output by the photoelectric sensor devices of both the front array and the back array in combination. In this manner, the sensitivity of detection of the spectral components is improved as compared to using the signals output by the photoelectric sensor devices of the front array alone (or by the photoelectric sensor devices of the back array alone). This is based on an appreciation that the respective sub-arrays of photoelectric sensor devices of the back array produce signals containing spectral information due to the absorption by the aligned sub-arrays of photoelectric sensor devices of the front array. Therefore, the output signals of the photoelectric sensor devices of the back array can be combined with the output signals of the photoelectric sensor devices of the front array. This improves the sensitivity of the resultant spectral component signals, as compared to relying on the photoelectric sensor devices of the front array alone.
Various configurations for spectral components associated with the sub-arrays are possible.
In one type of configuration, the image sensor does not include any filters arranged to absorb any of the spectral components. In this case, the photoelectric sensor devices of the back array are configured to output a signal representing the total of all of said spectral components that reach the sensor device except for the spectral component converted by the aligned photoelectric sensor device of the front array. This type of configuration maximises the improvement in sensitivity as all of the spectral components are converted by the photoelectric sensor devices of either the front or back array.
In another type of configuration, the front array consists of at least three interleaved sub-arrays of photoelectric sensor devices, and the photoelectric sensor devices of at least one of said sub-arrays have a response spectrally limited to at least two of said spectral components. In a specific example of this type of configuration, the front array consists of three interleaved sub-arrays of photoelectric sensor devices, the photoelectric sensor devices of a first two of the sub-arrays each having a response spectrally limited to one of said spectral components that is the same within each sub-array and different between the sub-arrays, and the photoelectric sensor devices of the third sub-array each having a response spectrally limited to both of said spectral components to which the responses of the photoelectric sensor devices of the first two sub-arrays are spectrally limited.
This type of configuration may provide constructional advantages in requiring the photoelectric sensor devices of the front array to be spectrally limited to a smaller number of colour components in total. In the specific example of this type of configuration, two of the sub-arrays have responses spectrally limited to respective ones of two spectral components, and the third sub-array has a response spectrally limited to both of those two spectral components, so no photoelectric sensor devices of the front array need to have a response to the third spectral component. This may provide a constructional advantage in making the front array easier to manufacture. For example, if the light absorbing properties of the front array are determined by organic molecules such as dyes, one could choose organic molecules absorbing a reduced set of spectral components, because the further spectral component could be derived using the information from the photoelectric sensor devices of the back array. This set-up might be particularly desirable if it is technically challenging to realise the desired physical properties, e.g. quantum efficiency, for a particular spectral component, as is the case for red light in some types of photoelectric sensor devices, for example.
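The derivation of the omitted component can be illustrated with a small numerical sketch. All values are synthetic and the devices are assumed ideal (full conversion of their spectral bands, behind an ideal RGB bandpass filter); this is not taken from the patent itself:

```python
# Illustrative single-pixel signals for the ideal case with an RGB bandpass
# filter in front; values are synthetic, not from the patent.
R, G, B = 0.30, 0.45, 0.25       # true colour components at one location
W = R + G + B                    # white = sum of the three components

front_r = R                      # sub-array spectrally limited to red
front_g = G                      # sub-array spectrally limited to green
front_rg = R + G                 # third sub-array limited to red AND green

# The back device aligned with the red+green sub-array receives whatever
# white light was not converted in front, i.e. the blue component:
back_under_rg = W - front_rg

assert abs(back_under_rg - B) < 1e-9
```

The blue component is thus read directly from the back array, so no front-array absorber for blue is needed in this example.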
In yet another type of configuration, the image sensor further comprises at least one array of filters aligned in a one-to-one relationship with one of the sub-arrays of photoelectric sensor devices of the front array, disposed in front of the photoelectric sensor devices of the back array, and arranged to absorb at least a portion of one of the spectral components, for example one of the spectral components other than the one or more of said spectral components to which the response of the sub-array of photoelectric sensor devices of the front array with which the array of filters is aligned is spectrally limited. Where there are plural arrays of filters, each array of filters may be arranged to absorb a spectral component that is the same within each array and different between the arrays. As each array of filters is aligned with a sub-array of photoelectric sensor devices of the front array, the arrays of filters are interleaved in the same manner as the sub-arrays of photoelectric sensor devices and may be thought of as being similar to a colour filter array in the case of visible light (although a conventional colour filter array passes light of a single colour component).
This type of configuration provides the potential to reduce the complexity of the image sensor by reducing the spectral components to which the responses of the sub-arrays of the front array need to be spectrally limited, albeit at the expense of providing a lesser improvement in sensitivity over conventional image sensors, as some incident light is absorbed by the array of filters.
Accordingly, various embodiments of the present invention address the challenge of providing an improved method of constructing colour images and a device structure enabling this method, such that many, and in some cases all, of the following advantages can be achieved:
1. Improved performance
a. Sensitivity: minimal (potentially no) waste of photons which are part of the spectrum of interest (e.g. RGB) due to the use of colour filters
b. Colour resolution: improved or at least equivalent colour resolution compared with conventional image sensors using 3 colour filter arrays (e.g. Bayer pattern)
c. Luminance resolution: improved luminance resolution compared with conventional image sensors using 3 colour filter arrays
2. Improved technical simplicity
a. Device fabrication: using only two device layers instead of the 3 device layers required to derive colour images with equivalent sensitivity,
b. Image processing: Simplifying existing and enabling more powerful signal processing techniques.
Embodiments of the present invention are now described by way of non-limitative example with reference to the drawings, in which:
Fig. 1 is a diagram of a panchromatic image sensor and a white image sensor;
Fig. 2 is a graph of spectral responses of the sensors, plotting relative channel responsivity against wavelength;
Fig. 3 is a diagram of an image sensor;
Fig. 4 is a partial schematic side view of a photodetector;
Fig. 5 is a block diagram illustrating the operation of a DSP circuit using the photodetector configuration of Fig. 4;
Figs. 6 and 7 are partial schematic side views of further photodetector configurations;
Fig. 8 is a partial schematic side view of an extended portion of the photodetector configuration of Fig. 6;
Fig. 9 is a block diagram illustrating an alternative operation of a DSP circuit using the photodetector configuration of Fig. 6;
Figs. 10 and 11 are partial schematic side and top views, respectively, of a further photodetector configuration;
Figs. 12 to 14 are partial schematic side views of further photodetector configurations;
Fig. 15 is a side cross-sectional view of a possible structure for a photodetector;
Fig. 16 is a schematic cross-sectional view of the construction of a photodetector configuration; and
Figs. 17(a) to (d), 18(a) and (b), 19(a) to (c), 20(a) and (b) and 21(a) to (c) are schematic side views of a photodetector configuration during subsequent fabrication steps.
The image sensors and photodetectors that will now be described sense EM radiation of three spectral components in a predetermined spectrum of interest. The photodetectors comprise photoelectric sensor devices that convert photons into an electric signal (i.e. a change in voltage or current or both).
For the sake of clarity, the description is given making specific reference to the spectrum of interest being visible light and to the three spectral components being red, green and blue colour components, i.e. using the RGB colour space or variations thereof. However, this is merely by way of example, and the image sensors may equally be applied to a spectrum of interest that extends in part or in full beyond the visible spectrum, as desired for a given application, for example including photons with wavelengths below 400 nm (e.g. ultraviolet (UV)) and/or above 750 nm (e.g. infrared (IR)). The image sensors may equally be applied to detect any number of spectral components of any wavelength spread across the spectrum of interest to represent different spectral bands of the spectrum of interest. For example, in the case of an image sensor intended to take photographs, the spectrum is visible light and the different spectral components are colour components as required by a particular colour model (e.g. the RGB model, the RGBE model, the CYGM model or the CMYK model) to create a full colour image representing a scene as perceived by the human eye. For example, in the case of image sensors in other applications, the spectral components might be any spectral components that are required to represent the spectrum as desirable for a given application. The term "colour" is used to be consistent with common terminology for colour models of the visible spectrum, but the invention is not limited to the spectral components being colour components inside the visible spectrum.
Furthermore, the following description also uses the terms "panchromatic" and "white". In this context, the term "panchromatic" is used to refer to signals representing the entire spectrum, that is from photons of all wavelengths belonging to the spectrum which is to be reconstructed to obtain the desired image, whereas the term "white" is used to refer to signals representing just the spectral components, that is photons whose wavelengths belong to the spectral components, for example a chosen and well defined colour space such as RGB. This difference means that a panchromatic pixel would give a larger signal than a white pixel. The term "white" is used to be consistent with common terminology for colour models of the visible spectrum, but the present invention is not limited to the spectral components being inside the visible spectrum.
To illustrate the difference, Fig. 1 shows two image sensors 1 and 3, the first image sensor 1 having a panchromatic response and the second image sensor 3 having a white response. Fig. 2 shows the spectral responses 11, 12, 13 and 14 of panchromatic, red, green and blue sensor devices, respectively (taken from Mrityunjay Kumar, Efrain O. Morales, James E. Adams, Jr., and Wei Hao, "New digital camera sensor architecture for low light imaging", Proceedings of the 16th IEEE International Conference on Image Processing, ISSN 1522-4880, ISBN 978-1-4244-5653-6 (2009)).
The first image sensor 1 comprises an ideal photodetector 2 having a response to the entire spectrum of interest and has a response 11 to a wide spectrum, in this case from around 375 nm to around 670 nm, thus producing a panchromatic output signal representing the entire spectrum.
The second image sensor 3 comprises an ideal photodetector 4 having a response to the entire spectrum of interest, having disposed in front an RGB colour filter 5 that has an absorption spectrum that is spatially uniform across the image sensor 3 and is a bandpass filter arranged to pass only the colour components, that is to restrict the spectrum of the incident EM radiation to bandwidth ranges in respect of each of said colour components. Conventionally, a colour filter transmits EM radiation of a particular colour component, specifically in the case of the RGB colour filter 5 each of the red, green and blue colour components. As a result the image sensor 3 has effective channel responses 12, 13 and 14 to red, green and blue colour components, in this case centred at wavelengths of around 620 nm, 540 nm and 470 nm respectively. Thus the image sensor 3 produces a white output signal representing the sum of the output signals corresponding to the red, green and blue colour components. In this example, the panchromatic image sensor 1 exhibits a greater channel response near 380 nm than the white image sensor 3.
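The gap between panchromatic and white signals can be illustrated numerically. The following sketch integrates a flat light spectrum against an ideal panchromatic response and against an idealised white response built from box-shaped RGB passbands; the wavelength grid and passbands are illustrative stand-ins, not the measured responses of Fig. 2:

```python
# Integrate a flat light spectrum against an ideal panchromatic response
# and against an idealised white (RGB bandpass) response on a 1 nm grid.
wl = list(range(375, 671))                 # spectrum of interest, in nm
light = [1.0] * len(wl)                    # flat illustrative spectrum

pan = [1.0] * len(wl)                      # ideal panchromatic response

def band(lo, hi):
    """Box-shaped passband: 1 inside [lo, hi] nm, 0 outside."""
    return [1.0 if lo <= x <= hi else 0.0 for x in wl]

# Non-overlapping box passbands standing in for blue, green and red:
white = [b + g + r for b, g, r in zip(band(440, 500), band(510, 570), band(590, 650))]

F_pan = sum(p * s for p, s in zip(pan, light))      # panchromatic signal
F_white = sum(w * s for w, s in zip(white, light))  # white signal

assert F_pan > F_white    # a panchromatic pixel gives the larger signal
```

Any light falling between or outside the passbands contributes to the panchromatic signal but not to the white signal, which is exactly the difference the text describes.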
Some of the photodetectors described below also employ a similar filter that has an absorption spectrum that is spatially uniform across the image sensor and is a bandpass filter arranged to pass only the colour components, hereinafter referred to as an RGB filter. It is also noted that any of the image sensors can also include one or more other filters that have an absorption spectrum that is spatially uniform across the image sensor to minimise the impact of photons that are not required to reconstruct the desired image, but might complicate the analysis of the output signals. For example, in some imaging applications such filters may be UV or IR filters that absorb UV or IR light and do not affect the response of the photoelectric sensor devices.
Fig. 3 illustrates an image sensor 20 implemented on an IC (integrated circuit) chip 21 using CMOS technology, although corresponding CCD implementations would equally be possible. The image sensor 20 comprises a photodetector 22 comprising arrayed photoelectric sensor devices 23 that convert photons of incident EM radiation into electrical signals representing the converted EM radiation, having a detailed construction described further below. Otherwise, the image sensor 20 has a standard configuration for a CMOS image sensor including the following components.
A control circuit 24 is provided to control the operation of the photoelectric sensor devices 23. In one implementation, each photoelectric sensor device 23 is provided with a pixel circuit (not shown) comprising reset and readout elements formed by CMOS switch devices controlled by a reset and select lines in respect of rows of sensor devices 23 to provide readout signals from successive rows of sensor devices 23 onto readout lines in respect of columns of sensor devices 23 that are connected to sample-hold circuits. In this case, the control circuit 24 provides control signals on the reset and select lines and controls the sample-hold circuits.
An amplifier circuit 25 is provided to amplify the readout signals that are then supplied to an ADC (analog-to-digital) converter 26 that converts them into digital signals. The digital signals from the ADC converter 26 are supplied to a DSP (digital signal processing) circuit 27 for further processing. The DSP circuit 27 is implemented by a dedicated circuit of CMOS devices formed in the IC chip 21, but could equally be implemented in other ways, for example implemented on a separate IC chip and/or implemented as a processing unit running a suitable program. Indeed any of the control circuit 24, amplifier circuit 25, ADC converter 26 and/or DSP circuit 27 could alternatively be implemented in a separate IC chip from the IC chip 21.

There will now be described various photodetector configurations that may be implemented in the photodetector 22 in the image sensor 20 to embody the present invention. For clarity, only the light sensitive elements (e.g. sensor devices and filters) are depicted. The various side and plan views of the photodetectors are partial views showing a few sensor devices that are in fact arrayed across the entire photodetector. It should be noted that these side and plan views are chosen so as to illustrate the embodiments in the clearest possible way. For example, it should be obvious to the person skilled in the art that certain spatial arrangements of pixels will have different cross-sectional side views; side views of pixels arranged in a Bayer pattern may only reveal two out of the three colours being used. To illustrate the invention more clearly, the side views used here show all colour components and do not necessarily represent the spatial arrangement of the pixels in a photodetector array.
Fig. 4 illustrates a first photodetector configuration 30 being a side view of three sensor locations. The photodetector configuration 30 comprises a front array of photoelectric sensor devices 31 stacked with a back array of photoelectric sensor devices 32. The sensor devices 32 of the back array are aligned in a one-to-one relationship with the sensor devices 31 of the front array. The sensor devices 31 of the front array receive incident light 33 and after transmission therethrough the sensor devices 32 of the back array receive the incident light 33.
The sensor devices 31 of the front array each have a response spectrally limited to a single one of three colour components, i.e. one of red, green or blue, whereas the sensor devices 32 of the back array each have a response to the entire spectrum of interest. A non-limitative example described in more detail below is that the sensor devices 31 of the front array are formed by organic material, for example organic photocells or dye sensitised solar cells, whereas the sensor devices 32 of the back array are formed as CMOS devices in silicon. In the first photodetector 30, there are no filters arranged to absorb any of the colour components and so the sensor devices 31 of the front array output signals representing the colour component to which they have a response, whereas the sensor devices 32 of the back array output signals representing the total of all of the light that reaches them without having been converted by the aligned sensor device 31 of the front array, being panchromatic light less the one of the colour components converted by the aligned sensor device 31 of the front array.
Ideally, in order to maximise the sensitivity to each colour component, the sensor devices 31 of the front array would absorb and convert all of the light of the colour component to which they have a response. Whilst this may not always be achievable in practice, the sensor devices 31 of the front array are designed to absorb and convert a relatively high proportion of that light. Typically, the absorptance (sometimes referred to as the absorption factor and being the fraction of the incident EM radiation flux at a given wavelength that is absorbed, which is equal to one minus the transmittance, ignoring reflections) of each sensor device 31 of the front array has a peak, in respect of the colour component to which there is a response, that is at least 60%, preferably at least 75%, more preferably at least 90%, more preferably at least 95%, more preferably at least 99%.
Ideally, in order to maximise the sensitivity to each colour component, the sensor devices 31 of the front array would convert all of the light that is absorbed thereby into the output signal. Whilst this may not always be achievable in practice, the sensor devices 31 of the front array are designed to convert as high a proportion as possible of the absorbed light of the one of three colour components. Desirably, the sensor devices 31 of the front array further have a quantum conversion efficiency (the fraction of incident EM flux at a given wavelength that is converted into an output signal) that has a peak, in respect of the colour component to which there is a response, that is at least 60%, preferably at least 75%, more preferably at least 90%, more preferably at least 95%, more preferably at least 99%.
Furthermore, the sensor devices 31 of the front array consist of three interleaved sub-arrays 34, 35 and 36 of sensor devices 31 that each have a response spectrally limited to a different one of three colour components, that is to red, green and blue respectively. For clarity, Fig. 4 illustrates only a single sensor device 31 of each of the sub-arrays 34, 35 and 36, but in fact the sub-arrays 34, 35 and 36 are interleaved in any suitable pattern across the photodetector 30, for example a Bayer pattern (disclosed for example in US-3,971,065) in the same manner as a conventional Bayer type of image sensor. Thus, sensor devices 32 of the back array may also be considered to consist of three interleaved sub-arrays 37, 38 and 39 of sensor devices 32, aligned respectively with three interleaved sub-arrays 34, 35 and 36 of the front array of sensor devices 31, that output signals representing different portions of the spectrum of interest.
This is shown in Fig. 4 by the respective labels P-R, P-G and P-B, where P represents panchromatic light and -R, -G and -B represent the removal of red, green and blue light by the sensor devices 31 of the front array. Noting the difference between panchromatic and white, these signals output by the sub-arrays 37, 38 and 39 of the back array of sensor devices 32 exceed the signals which would have been obtained from only the two "remaining" colour components, for example green plus blue, blue plus red and red plus green, respectively. The narrower the bandwidth of the sensor devices 31, the larger this difference.
The stacked arrangement of a single sensor device 31 of the front array and a single sensor device 32 of the rear array is the smallest spatially defined unit and therefore constitutes one pixel of the photodetector configuration 30. Pixels are similarly defined in the other photodetector configurations described below, in some cases additionally including a filter stacked with the sensor devices 31 and 32.
As a result, it is possible to process the signals output by the sub-arrays 34, 35 and 36 of the front array of sensor devices 31 alone to derive colour component signals representative of each of the colour components. However, it is possible to use the sub-arrays 37, 38 and 39 of the back array of sensor devices 32 in conjunction with the sub-arrays 34, 35 and 36 of the front array of sensor devices to create panchromatic pixels to boost the sensor's general sensitivity. In that way, a luminance signal representative of the luminance of the incident light and colour component signals representative of each of the colour components may be derived from the signals output by the front array of sensor devices 31 and the back array of sensor devices 32 in combination. In this manner, the sensitivity of detection of the colour components may be improved, this being possible because the back array of sensor devices 32 output signals containing spectral information due to the absorption by the aligned sensor devices 31 of the front array.
Derivation of the colour component signals and luminance signal may be performed in the DSP circuit 27 as shown in Fig. 5 which is a high level diagram showing the operation of the DSP circuit 27.
The photodetector configuration 30 produces an RGBF image 40 coming from the front array of sensor devices 31 and a (P-R)(P-G)(P-B)B image 41 from the back array of sensor devices 32.
In step 42, both the RGBF image 40 and the (P-R)(P-G)(P-B)B image 41 are used to generate a full panchromatic image 43 at the resolution of the pixel pitch of the sensor devices 31 and 32, as follows. If fFX(λ) is the spectral response of each front sensor device 31 at a given wavelength λ, the signal obtained from each front photoelectric sensor device is:

FF(X) = ∫ fFX(λ) I(λ) dλ
with I(λ) being the light spectrum and X indicating whether it is a sensor device 31 of the red (R), green (G) or blue (B) sub-arrays 34, 35 or 36. Hence, the signal of each back sensor device 32, whose own spectral response is fB(λ) (unity for an ideal device), is:

FB(P,-X) = ∫ fB(λ) (1 - fFX(λ)) I(λ) dλ
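These two integrals can be evaluated numerically. The sketch below assumes an ideal back device (fB(λ) = 1), a flat light spectrum and an illustrative Gaussian red response (none of these taken from the patent), and checks that the front and back signals together account for the full panchromatic signal:

```python
import math

# 375..670 nm grid, 1 nm step; flat light spectrum I(λ) = 1
wl = list(range(375, 671))
I = [1.0] * len(wl)

# Illustrative Gaussian response of a red front device, fFR(λ)
f_red = [math.exp(-((x - 620) / 30.0) ** 2) for x in wl]

F_F_R = sum(f * i for f, i in zip(f_red, I))         # FF(R) = ∫ fFR(λ) I(λ) dλ
F_B_mR = sum((1 - f) * i for f, i in zip(f_red, I))  # FB(P,-R), ideal back device
F_B_P = sum(I)                                       # FB(P), full panchromatic signal

# Whatever the front device converts plus what reaches the back is everything:
assert abs((F_F_R + F_B_mR) - F_B_P) < 1e-9
```

This conservation property is what makes the panchromatic reconstruction in the following equations exact for ideal devices.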
For example, considering the left sub-array 34 in Fig. 4, if the signal of the front sensor device 31 converting red light is FF(R), the signal originating from the aligned back sensor device 32 would be FB(P,-R) = FB(P) - FB(R), i.e. the panchromatic response FB(P) which would have originated upon exposing the back sensor device 32 to the total light spectrum, minus the fraction of the spectrum absorbed by the front sensor device 31 (i.e. the red light). Hence, a panchromatic pixel at any position (n,m) with the signal
F(P,n,m) = FB(P,-R,n,m) + FF(R,n,m)
can be constructed within a 2-dimensional array of pixels (n,m).
In the same spirit, panchromatic pixels can be constructed from the signals of the remaining colour pixels. Hence, in general there applies
F(P,n,m) = FB(P,-X,n,m) + FF(X,n,m)
In other applications, the use of bilinear interpolation filters is required to minimise colour artefacts caused by the spatial separation of the different colour arrays (e.g. the pixels of the array comprising all red pixels are in different locations than the pixels of the array comprising all green pixels). However, in contrast to that common requirement, no bilinear interpolation is needed here because the position of the constructed panchromatic pixels coincides with the position of the colour pixels, so that the result FFull(P,n,m) 43 constitutes a full resolution panchromatic image.
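Step 42 can thus be sketched as a simple per-pixel addition. The following minimal example uses synthetic values and assumes ideal devices, for which FB(P,-X,n,m) = P(n,m) - FF(X,n,m):

```python
# Synthetic 2x2 example: each position holds one front colour signal and
# the aligned back signal; their sum is the panchromatic value, with no
# interpolation required.
true_P = [[1.0, 0.9], [0.8, 1.1]]    # "ground truth" panchromatic image
front = [[0.3, 0.5], [0.2, 0.4]]     # FF(X,n,m), whichever colour X sits at (n,m)

# Ideal back signals: FB(P,-X,n,m) = P(n,m) - FF(X,n,m)
back = [[true_P[n][m] - front[n][m] for m in range(2)] for n in range(2)]

# Step 42: F(P,n,m) = FB(P,-X,n,m) + FF(X,n,m)
pan = [[back[n][m] + front[n][m] for m in range(2)] for n in range(2)]

assert all(abs(pan[n][m] - true_P[n][m]) < 1e-12 for n in range(2) for m in range(2))
```

Because every pixel position contributes its own panchromatic value, the reconstruction is at the full pixel pitch regardless of which colour sub-array the position belongs to.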
The full panchromatic image 43 may be considered to be a luminance signal representative of the luminance of the incident EM radiation at the resolution of the sensor devices 31 and 32. This is used to generate signals representative of the colour components at the resolution of the sensor devices 31 and 32, as follows.
In step 44, the RGBF image 40 is filtered to reduce noise to provide a noise-reduced RGBF image 45. Similarly, in step 46, the full panchromatic image 43 is filtered to reduce noise to provide a full noise-reduced panchromatic image 47. Similarly, in step 48, the (P-R)(P-G)(P-B)B image 41 is filtered to reduce noise to provide a noise-reduced (P-R)(P-G)(P-B)B image 49.
In step 50, the noise-reduced RGBF image 45 is interpolated to provide a full noise-reduced RGBF image 51 at the resolution of the sensor devices 31 and 32. Similarly, in step 52, the noise-reduced (P-R)(P-G)(P-B)B image 49 is interpolated to provide a full noise-reduced (P-R)(P-G)(P-B)B image 53.
Considering the red colour component, for example, the red colour component of the noise-reduced RGBF image 45 is at the resolution of the sensor devices 31 of the sub-array 34, not at the resolution of all the sensor devices 31 and 32, but the full resolution red image FFFull(R,n,m) can be constructed by applying a bilinear interpolation filter hFR to FF(R). Similarly, the (P-R) component of the (P-R)(P-G)(P-B)B image 41 is at the resolution of the sensor devices 32 of the sub-array 37, not at the resolution of all the sensor devices 31 and 32, but the full resolution (P-R) image FBFull(-R,n,m) can be constructed by applying a bilinear interpolation filter hBR to FB(P,-R).
Interpolation in steps 50 and 52 may be achieved using suitable interpolation algorithms, for example applying techniques similar to those disclosed in WO 2008/150342 and Kumar et al. (New Digital Camera Sensor Architecture For Low Light Imaging, Proceedings of the 2009 IEEE International Conference on Image Processing), the contents of both of which are incorporated herein by reference.
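A 1-D stand-in for such an interpolation filter might look as follows; real mosaics require a 2-D bilinear kernel, so the function below (a hypothetical helper, not from the cited references) only illustrates the idea of filling the gaps of a sparse sub-array between its known samples:

```python
def interpolate_row(samples):
    """Fill None gaps in a row by linear interpolation between known samples."""
    out = list(samples)
    known = [i for i, v in enumerate(samples) if v is not None]
    for a, b in zip(known, known[1:]):
        for i in range(a + 1, b):
            t = (i - a) / (b - a)
            out[i] = samples[a] * (1 - t) + samples[b] * t
    return out

# A red sub-array sampled at every third pixel position of one row:
row = [0.3, None, None, 0.6, None, None, 0.9]
full = interpolate_row(row)
expected = [0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
assert all(abs(a - b) < 1e-9 for a, b in zip(full, expected))
```

The interpolated values carry only low frequency content, which is why the full panchromatic image is needed to restore the high frequency detail in step 54.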
In step 54, the full noise-reduced panchromatic image 47 and the full noise-reduced (P-R)(P-G)(P-B)B image 53 are used to generate the final, noise reduced full resolution full colour image 55, as follows. Considering the red colour component, for example, the complete red image FFull(R,n,m) can be constructed by subtracting the (P-R) component of the full noise-reduced (P-R)(P-G)(P-B)B image 53 from the full noise-reduced panchromatic image FFull(P,n,m) 47 according to

FFull(R,n,m) = FFull(P,n,m) - FB(P,-R) * hBR
or more generally for any colour X according to

FFull(X,n,m) = FFull(P,n,m) - FB(P,-X) * hBX.
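The subtraction of step 54 can be sketched at a single pixel, under the idealising assumption that FB(P,-R) is measured exactly and the interpolation filter reduces to the identity at that pixel (all values synthetic):

```python
# Synthetic single-pixel values; ideal devices assumed.
P = 1.2                          # FFull(P,n,m): full-resolution panchromatic value
R = 0.5                          # true red component at this pixel
back_minus_R = P - R             # FB(P,-R): measured behind a red front device

# Step 54: FFull(R,n,m) = FFull(P,n,m) - FB(P,-R) * hBR (hBR omitted at one pixel)
recovered_R = P - back_minus_R

assert abs(recovered_R - R) < 1e-9
```

Since FB(P,-R) is a direct measurement rather than a derived quantity, no intermediate subtraction of interpolated panchromatic values is needed, which is the simplification noted below.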
The term FB(P,-X) contains low frequency content while the high frequency content is added by FFull(P,n,m). It is noted that the term FB(P,-X) is directly measured and not derived by subtracting the interpolated values of the full panchromatic image from a colour image, which constitutes a simplification as compared to the methods disclosed in WO 2008/150342 and Kumar et al.
Additionally, higher accuracy is achieved due to the higher spatial resolution of both the front array of sensor devices 31 and the back array of sensor devices 32. Also, no filters absorbing photons within the spectrum of interest are used, offering the highest possible sensitivity achievable by a sensor (potentially exceeding that to be expected by stacking red, green and blue sensors).
The final, noise reduced full resolution full colour image 55 includes signals representative of each colour component at the resolution of the sensor devices 31 and 32. As interpolation is used in this derivation, the true resolution is in fact on average approximately 2/3 the resolution of the sensor devices 31 and 32.
Note that the derivation of the final noise reduced full resolution full colour image 55 does not directly use the spatial colour information of the RGBF image 40. The full RGBF image 51 may be used to further enhance the colour information by adding its content to the final image 55 according to

F(X,n,m) = FFull(X,n,m) + wF * FFFull(X,n,m)
A suitable weighting term wF may have to be chosen to maximise the signal to noise ratio of F(X,n,m). This is particularly relevant if the signal to noise ratio of FFull(X,n,m) differs substantially from the signal to noise ratio of FFFull(X,n,m).
Optionally, further processing may be performed additionally using the full noise-reduced RGBF image 51 as follows.
Binning of the sensor devices 31 in the front array yields 'white' FFFull(W,n,m) = FF(R) * hFR + FF(G) * hFG + FF(B) * hFB. In the same vein, binning of the sensor devices 32 in the back array yields three times the panchromatic signal minus white, 3FBFull(P,n,m) - FBFull(W,n,m). From this, it is possible to derive the difference between the signals obtained from white light and panchromatic light, ΔFFull(n,m) = (3FBFull(P,n,m) - FBFull(W,n,m) - 2FFFull(W,n,m))/3, assuming FFFull(W,n,m) = FBFull(W,n,m) (i.e. each photon being absorbed in the front sensor device 31 generates the same signal response as if it had been absorbed in the back sensor device 32). This information can be used to derive additional colour information from the back array of sensor devices 32, that is by subtracting ΔFFull(n,m) from the signal obtained by each back sensor device 32 according to
FBFull(G,B,n,m) = FBFull(G,n,m) + FBFull(B,n,m) = FBFull(P,-R,n,m) - ΔFFull(n,m)
FBFull(R,B,n,m) = FBFull(R,n,m) + FBFull(B,n,m) = FBFull(P,-G,n,m) - ΔFFull(n,m)
FBFull(G,R,n,m) = FBFull(G,n,m) + FBFull(R,n,m) = FBFull(P,-B,n,m) - ΔFFull(n,m)
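These binning identities can be checked numerically at one pixel, under the stated assumption that front and back devices give equal signal per photon; all values are synthetic, with the panchromatic signal exceeding white by some out-of-band light:

```python
# Synthetic single-pixel values; P exceeds W by out-of-band light.
R, G, B = 0.5, 0.4, 0.3
W = R + G + B                     # 'white' signal
P = W + 0.2                       # panchromatic signal

front_binned = W                              # FF(R)+FF(G)+FF(B) binned
back_binned = (P - R) + (P - G) + (P - B)     # = 3P - W

# ΔFFull = (3P - W - 2W)/3 = P - W, assuming equal front/back response
dF = (back_binned - 2 * front_binned) / 3
assert abs(dF - (P - W)) < 1e-9

# Subtracting ΔFFull from each back signal leaves the two remaining colours:
assert abs(((P - R) - dF) - (G + B)) < 1e-9
assert abs(((P - G) - dF) - (R + B)) < 1e-9
assert abs(((P - B) - dF) - (R + G)) < 1e-9
```

In other words, ΔFFull is simply the panchromatic-minus-white excess, and removing it converts each back signal into a sum of two colour components.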
There will now be described further photodetectors that are modified as compared to the photodetector configuration 30 of Fig. 4. The modifications will be described but for brevity common elements are identified by common reference numerals and a description thereof is not repeated.
Fig. 6 illustrates a second photodetector configuration 60 which is identical to the first photodetector configuration 30 of Fig. 4, except that the second photodetector configuration 60 has an RGB filter 61 disposed in front of the front array of sensor devices 31. As described above, the RGB filter 61 has an absorption spectrum that is spatially uniform across the image sensor 3 and is a bandpass filter arranged to pass only the red, green and blue colour components. Typically, the absorption bands for the red, green and blue sensor devices 31 within the front array are rather broad and overlap, for example as shown in Fig. 2. Consequently, the back array of sensor devices 32 output signals representing the total of all of the light that reaches them without having been converted by the aligned sensor device 31 of the front array and without having been absorbed by the RGB filter 61. This is effectively the two colour components remaining after conversion of one of the colour components by the aligned sensor device 31 of the front array, or in other words white light less the one colour component converted by the corresponding aligned sensor device 31 of the front array.
This means that the sensor devices 32 of the back array may be considered to consist of three interleaved sub-arrays 37, 38 and 39 of sensor devices 32, aligned respectively with three interleaved sub-arrays 34, 35 and 36 of the front array of sensor devices 31, that output signals representing, respectively, the total of green plus blue, blue plus red and red plus green, as shown by the labels GB, RB and RG in Fig. 6.
The signals from the second photodetector configuration 60 are processed in a similar manner to the first photodetector configuration 30, except that the signals now represent white light where previously they represented panchromatic light. This can simplify the processing in that the correction term ΔF(n,m) may be neglected because it is smaller, and indeed is zero in the ideal case that the filter characteristics of the RGB filter 61 perfectly match the responses of the sensor devices 31 of the front array, although in practice that might only be approximately achieved.
Thus, in the second photodetector configuration 60, the sensor devices 32 of the back array produce signals that simplify to
F_B(G,B,n,m) = F_B(W,n,m) - F_B(R,n,m) = F_B(G,n,m) + F_B(B,n,m),
F_B(R,B,n,m) = F_B(W,n,m) - F_B(G,n,m) = F_B(R,n,m) + F_B(B,n,m), and
F_B(R,G,n,m) = F_B(W,n,m) - F_B(B,n,m) = F_B(R,n,m) + F_B(G,n,m).
Similarly, a full colour image can be reconstructed using
F_Full(X,n,m) = F_Full(W,n,m) - F_B(W-X) * h_BX.
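The reconstruction step can be sketched as follows: interpolate the sparse complement plane to full resolution (the role of the interpolation filter h_BX) and subtract it from the full white image. The one-dimensional linear interpolation used here is merely one possible choice of filter; all names are illustrative.

```python
import numpy as np

def interpolate_row(samples, positions, length):
    # Linear interpolation of sub-array samples along one row; this
    # plays the role of convolving with an interpolation filter h_BX.
    return np.interp(np.arange(length), positions, samples)

def colour_plane(full_white_row, complement_row):
    # F_Full(X) = F_Full(W) - F_BFull(W - X)
    return full_white_row - complement_row
```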
However, an enhanced sensitivity is again achieved by the second photodetector configuration 60, as by the first photodetector configuration 30. It is an important point that the luminance information F_Full(W,n,m) and F_Full(P,n,m) constitute black and white images with the resolution of the pixel pitch.
In the above discussion of the processing performed in the DSP circuit 27, it is assumed that the sensor devices 31 of the front array absorb all of the single colour component to which they have a response and that every absorbed photon contributes to an electric signal. It might be that (1) all light of a given colour component is absorbed in a given front sensor device 31 but not every absorbed photon contributes to the signal (and the same might be true for a given back sensor device 32), and/or (2) only some light of a given colour component is absorbed in a given front sensor device 31 and the remaining light of that colour component is transmitted to the aligned back sensor device 32. These cases may be accounted for in the signal processing as follows.
Case (1) can be addressed by multiplying the signal from the front sensor device 31 by a suitable weighting term w_F(R), i.e. F_F(R) is to be replaced by w_F(R)F_F(R). Similar considerations apply to the back sensor devices 32.
Case (2) can be taken into account if the amount of the transmitted light is known (for example from measurements on the actual photodetector). If T is the transmission of the front left device in the red part of the spectrum, one can write
F_B(G,B) = F_B(T*R,G,B) - T * F_B(R).
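Both corrections can be expressed in a few lines. The function and parameter names below are illustrative, and the weight w and transmission T are assumed to come from calibration measurements.

```python
def weighted_front_signal(f_f_raw, w):
    # Case (1): scale the raw front signal so that absorbed photons that
    # produce no signal are accounted for, i.e. replace F_F(R) by
    # w_F(R) * F_F(R).
    return w * f_f_raw

def transmission_corrected_back(f_b_measured, t, f_b_red_estimate):
    # Case (2): subtract the fraction T of red light that leaked through
    # the front device and was detected behind it.
    return f_b_measured - t * f_b_red_estimate
```

For example, with R=2, G=1, B=1 and T=0.25, the back device measures 0.25·2 + 2 = 2.5, and subtracting 0.25·2 recovers G + B = 2.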
Fig. 7 illustrates a third photodetector 70 which is identical to the second photodetector 60 of Fig. 6 in respect of two of the sub-arrays 35 and 36 of the sensor devices 31 of the front array (hereinafter first and second sub-arrays 35 and 36), but in which the third sub-array 34 has a response spectrally limited to both of the colour components to which the responses of the sensor devices 31 of the first and second sub-arrays 35 and 36 are spectrally limited, i.e. to both green and blue colour components in this example. As a result, the sensor devices 32 of the sub-array 37 of the back array that is aligned with the third sub-array 34 of the sensor devices 31 of the front array produce signals that represent only the single remaining colour component that is not converted by the sensor devices 31 of the third sub-array 34 of the front array, i.e. the red colour component in this example.
The output signals may be processed in the same manner as described above but swapping the signals of the third sub-array 34 of the front array with the signals of the aligned sub-array 37 of the back array. In that way, a luminance signal representative of the luminance of the incident light and colour component signals representative of each of the colour components may be derived from the signals output by the front array of sensor devices 31 and the back array of sensor devices 32 in combination.
However, the third photodetector 70 may provide constructional advantages in requiring the sensor devices 31 of the front array to be spectrally limited to a smaller number of colour components in total, for example limited to green and blue, none being required to be spectrally limited to red in this example. If, for example, the light absorbing properties of the front array of sensor devices 31 are determined by organic molecules such as dyes, one could choose one set of dyes absorbing the green part of the spectrum and another set of dyes absorbing the blue part of the spectrum, without the need for a set of dyes absorbing the red part of the spectrum. Additional detail regarding the fabrication and basic device design using organic dyes is given below. Note that as a consequence red light is transmitted by every sensor device 31 of the front array and contributes to an electric signal in every sensor device 32 of the back array. This set-up might be particularly desirable if it is technically challenging to realise the desired physical properties, e.g. quantum efficiency, for a particular colour (in this example red) in the front array. Similarly, it is possible to construct constellations where either no green light or no blue light is absorbed in the front array of sensor devices 31.
The signal processing may be modified to increase the colour resolution, in particular increasing the number of pixels per colour by up to a factor of 2, as follows.
This is done by exploiting the fact that the back array of sensor devices 32 contains colour information due to the pixelation of the front array of sensor devices 31. For example, one could obtain F_BFull(R,n,m) by subtracting the colour information F_FFull(B,n,m) from F_BFull(R,B,n,m), i.e.
F_BFull(R,B,n,m) - F_FFull(B,n,m) = F_BFull(R,n,m)
F_BFull(R,G,n,m) - F_FFull(G,n,m) = F_BFull(R,n,m)
However, it can be advantageous to utilise the colour information of the back sensor devices 32 without using any colour information from the front sensor devices 31, because the signals of the back sensor devices 32 in the example depicted in Fig. 6 are derived from a larger number of photons than the signals from the front sensor devices 31 and are expected to have a larger signal to noise ratio. Assuming that the signal F_B(G,B) from the sub-array 37 of the back array does not vary across the three pixels, adding the signal F_B(R,B) of the back sensor device 32 of the sub-array 38 to the signal F_B(R,G) of the back sensor device 32 of the sub-array 39 and subtracting the signal F_B(G,B) from the back sensor device 32 of the sub-array 37 yields
F_B(R,B) + F_B(R,G) - F_B(G,B)
= (F_B(R) + F_B(B)) + (F_B(R) + F_B(G)) - (F_B(G) + F_B(B)) = 2F_B(R).
This signal is twice as big as the signal which would have been detected if only a single colour component had been detected per photodetector.
In most practical applications F_B(G,B), F_F(G) and F_F(B) are expected to vary across the three sub-arrays. Hence interpolated values need to be constructed to obtain accurate estimates of the values of F_B(G,B), F_F(G) and F_F(B) at the corresponding device positions. This concept is illustrated using the schematic shown in Fig. 8, in which the sensor devices 32 of the back array are identified as pixels 1 to 6.
Let F_B(R,(2,3)) be the combined signal from pixels 2 and 3 caused by the red light, so that
F_B(R,(2,3)) = F_B(R,2) + F_B(R,3)
which is to be obtained; then
F_B(R,(2,3)) = F_B(R,B,2) + F_B(R,G,3) - mean(F_B(G,B,1), F_B(G,B,4))
= F_B(R,B,2) + F_B(R,G,3) - (F_B(G,B,1) + F_B(G,B,4))/2
constitutes an improved result compared to the mere subtraction of non-averaged values of F_B(G,B). Similar relationships apply for the calculation of F_B(G,(3,4)) and F_B(B,(4,5)):
F_B(G,(3,4)) = F_B(R,G,3) + F_B(G,B,4) - mean(F_B(R,B,2), F_B(R,B,5))
F_B(B,(4,5)) = F_B(G,B,4) + F_B(R,B,5) - mean(F_B(R,G,3), F_B(R,G,6))
These additional colour values are particularly useful in combination with the colour values from the front array of devices, effectively doubling the number of colour pixels.
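The interpolation of the flanking pixels can be sketched as below (illustrative names; pixel indexing follows Fig. 8):

```python
def combined_colour_pair(sig_left, sig_right, flank_left, flank_right):
    """Estimate F_B(X,(i,i+1)), e.g.
    F_B(R,(2,3)) = F_B(R,B,2) + F_B(R,G,3) - (F_B(G,B,1) + F_B(G,B,4))/2.

    Averaging the two flanking complementary pixels interpolates the
    slowly varying two-colour term before it is subtracted.
    """
    return sig_left + sig_right - 0.5 * (flank_left + flank_right)
```

For a locally uniform scene with R=3, G=1, B=2, this gives 5 + 4 − 3 = 6, which equals F_B(R,2) + F_B(R,3) = 2·3 as expected.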
The above concept applies interpolation in one dimension, but can be extended to two dimensions in accordance with the pattern in which the sub-arrays 34, 35 and 36 are interleaved. As a result different colour interpolation algorithms may be used which may estimate the full signal at any given pixel position using the information obtained from more pixels than just the closest two:
F_BFull(R,n,m) = F_BFull(R,B,n,m) + F_BFull(R,G,n,m) - F_BFull(G,B,n,m)
F_BFull(G,n,m) = F_BFull(R,G,n,m) + F_BFull(G,B,n,m) - F_BFull(R,B,n,m)
F_BFull(B,n,m) = F_BFull(G,B,n,m) + F_BFull(R,B,n,m) - F_BFull(R,G,n,m)
These algorithms also work if the sensor devices 31 only absorb a particular colour component but create no signal, i.e. they are simply colour filters removing a particular colour component from the visible spectrum. In contrast to the colour filters in conventional colour filter arrays (CFAs), these filters absorb (i.e. remove) one colour component instead of transmitting only one colour component (i.e. removing all components bar one). Note also that the signals obtained this way are twice as big as those obtained from conventional use of CFAs.
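In two dimensions the three relations above can be applied per pixel to the interpolated planes. The sketch below (illustrative names; numpy arrays stand in for the full-resolution images) also halves the result where a signal scaled to a single device's response is wanted, since, as noted, the recovered signals are twice as big:

```python
import numpy as np

def colour_planes_from_back(f_rb, f_rg, f_gb):
    # 2R = (R+B) + (R+G) - (G+B), and cyclically for G and B; divide by
    # two to express the result on the scale of a single device signal.
    r = (f_rb + f_rg - f_gb) / 2.0
    g = (f_rg + f_gb - f_rb) / 2.0
    b = (f_gb + f_rb - f_rg) / 2.0
    return r, g, b
```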
The signal processing may be modified to increase the dynamic range as follows.
The additional colour pixels derived from the sensor devices 32 of the back array may differ from the pixels derived from the sensor devices 31 of the front array in geometry (size, device geometry) and possibly also in material properties, such as absorption properties, due to different materials being used. Consequently, the saturation points would be reached after different exposure times, i.e. the sensor devices 31 of the front array have different sensitivities compared to the sensor devices 32 of the back array. This in turn lends itself to applying algorithms which smoothly combine the image derived from the sensor devices 32 of the back array with the one derived from the sensor devices 31 of the front array, to obtain an image with an increased dynamic range at the expense of resolution (applying for example techniques similar to those disclosed in EP-A-1,435,662; JP-2005,286,104; US-2003/0,141,564; or US-2007/0,223,059, the contents of each of which are incorporated herein by reference).
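One simple form such a smooth combination might take is sketched below (an illustrative sketch, not the method of the cited documents): where the more sensitive image approaches its saturation level, fade over to the less sensitive image scaled by the relative gain.

```python
import numpy as np

def blend_dynamic_range(img_hi_sens, img_lo_sens, gain, sat_level):
    # Weight falls from 1 to 0 as the high-sensitivity image approaches
    # saturation; the 10% ramp width is an arbitrary illustrative choice.
    w = np.clip((sat_level - img_hi_sens) / (0.1 * sat_level), 0.0, 1.0)
    return w * img_hi_sens + (1.0 - w) * gain * img_lo_sens
```

Well below saturation the high-sensitivity image passes through unchanged; at saturation the output is entirely the gain-matched low-sensitivity image.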
Fig. 9 is a high-level diagram showing an alternative operation of the DSP circuit 27 to process the signals from the second photodetector 60, employing some of the additional techniques described above, as follows.
The second photodetector 60 creates an RGB_F image 80 coming from the front sensor devices 31 and a (GB)(RB)(RG)_B image 81 derived from the back sensor devices 32. Both the RGB_F image 80 and the (GB)(RB)(RG)_B image 81 are used to create a full white image 82.
Next, noise reduction algorithms are applied to the RGB_F image 80, the (GB)(RB)(RG)_B image 81 and the full white image 82 to create, respectively, a noise reduced RGB_F image 83, a noise reduced (GB)(RB)(RG)_B image 84 and a full noise reduced white image 85. It is beneficial not to construct the noise reduced white image 85 using the noise reduced RGB_F image 83 and the noise reduced (GB)(RB)(RG)_B image 84, because some of the noise in the signals from the front sensor devices 31 might cancel out against the noise from the back sensor devices 32 when the raw RGB_F image 80 and (GB)(RB)(RG)_B image 81 are combined. For example, if some of the red light passes through a sensor device 31 of the red front sub-array 34, it will result in a reduced signal from the sensor devices 31 belonging to the red sub-array 34. However, the transmitted red light will be absorbed in the aligned back sub-array 37, which results in an increased signal in this sub-array 37.
Next, a full noise reduced RGB_F image 86 and a full noise reduced (GB)(RB)(RG)_B image 87 are generated from the noise reduced RGB_F image 83 and the noise reduced (GB)(RB)(RG)_B image 84, respectively, using suitable interpolation filters.
From the full noise reduced (GB)(RB)(RG)_B image 87, a full noise reduced RGB_B image 88 is created, which is combined with the full noise reduced RGB_F image 86 to create a full noise reduced RGB image 89. The combination of the full noise reduced RGB_F image 86 with the full noise reduced RGB_B image 88 might either produce a full noise reduced RGB image 89 with increased dynamic range as described above or a full noise reduced RGB image 89 with increased colour resolution as described above.
Finally, the full noise reduced RGB image 89 is combined with the full noise reduced white image 85 to create the final noise reduced full resolution full colour image 90.
In situ device calibration and enhanced error detection may be provided by the DSP circuit 27 as follows. For conventional sensors, an out of focus image of a grey background might be taken to identify pixel errors. For such an image the signal variation from one pixel to the next should be minute (because the optics was out of focus). In other words, any large changes in intensity between adjacent pixels are due to imperfections of a sensor device which can be caused by dust particles on the sensor or faulty elements. In this case, the uniform grey background provides the reference against which the sensor's performance is measured.
However, the present invention makes it possible to identify pixel errors without the need to take such a grey image. The image derived from the back array should be identical to the image provided by the front array (except for random noise components). Thus the front image can be compared with the back image to identify imperfections due to faulty devices. As indicated above, and as will be described in more detail later, the devices belonging to the front array may be made of different materials and have a different structure compared to the devices of the back array. Hence, they will be subject to undesirable physical phenomena to different degrees. For example, the sensitivity of the devices in the front layer may decay more rapidly than that of the devices in the back layer (e.g. due to bleaching of dyes), making them less suitable for low light applications. Also, this decay might not develop in every device at the same rate. In addition, device performance variations might depend on the manufacturing process (fixed pattern noise). However, the use of device layers whose devices are based on different materials and fabrication techniques lends itself to developing selection criteria and algorithms which (1) identify in which pixel an error occurred; (2) identify in which array the error occurred; and (3) correct or compensate for the error. To illustrate this, examples of such algorithms and selection criteria are now given, but this is not limitative and other algorithms and selection criteria might be chosen.
(1) A method of identifying pixels in which an error occurred is as follows.
Use appropriate interpolation filters to construct the full image for each colour from the front array of devices, F_FFull(X,n,m), and for each colour from the back array of devices, F_BFull(X,n,m).
Calculate the error
F_EFull(X,n,m) = F_BFull(X,n,m) - F_FFull(X,n,m).
Define an acceptable error tolerance E(X), e.g. acceptable variations of the number of electrons detected per device, which can be translated into the corresponding signal (typically via a capacitance C_G which translates the charge into the voltage applied to the detecting transistor gate; i.e. if n = 4 electrons is the largest acceptable error, the error tolerance can be defined as E(X) = ne/C_G, with e being the charge of one electron). Next, find all positions (n,m) where |F_EFull(X,n,m)| > E(X). These are the identified pixel errors.
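Step (1) can be sketched as follows (illustrative names; the tolerance is the scalar E(X) defined above):

```python
import numpy as np

def find_pixel_errors(f_bfull, f_ffull, tolerance):
    # Positions where the back- and front-derived full-resolution images
    # of the same colour disagree by more than E(X) are flagged as errors.
    error = f_bfull - f_ffull
    return np.argwhere(np.abs(error) > tolerance)
```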
(2) A method of identifying in which array the error has occurred is as follows.
A criterion needs to be defined which identifies in which array the error occurred. If step (1) detected an error at position (n,m), the following criteria might be used.
An example of a global criterion is as follows. If not done already, apply a suitable noise reduction filter to the data of F_BFull(X,n,m) and F_FFull(X,n,m), and calculate for each array the standard deviation of the difference between the raw data and the noise reduced data, e.g.

σ_F(X) = std over (n,m) of [F_F(X,n,m) - F_FFull(X,n,m)]
σ_B(X) = std over (n,m) of [F_B(X,n,m) - F_BFull(X,n,m)]

where F_F(X,n,m) and F_B(X,n,m) are the data sets without any noise reduction (e.g. the raw data). Assume that the errors generally occur in the array with the higher standard deviation.
Alternatively, the more unreliable device array might be identified using criteria based on the signal to noise ratio or the change in signal between adjacent pixels for each device layer.
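The global criterion might be sketched as below (illustrative; the standard deviation is taken over all pixels of the raw-minus-noise-reduced difference):

```python
import numpy as np

def noisier_array(raw_front, nr_front, raw_back, nr_back):
    # The array whose raw data deviates more from its noise-reduced
    # version is assumed to be the one containing the errors.
    sigma_f = np.std(raw_front - nr_front)
    sigma_b = np.std(raw_back - nr_back)
    return "front" if sigma_f > sigma_b else "back"
```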
An example of a local criterion is as follows.
Compare the front and back signals of the faulty pixel with those of its nearest neighbours, for example by calculating for each array the standard deviation of the signal over the faulty pixel and its nearest neighbours. Assume that the error occurred in the array with the higher standard deviation. Alternatively, the faulty device might be identified using criteria based on the change in signal between adjacent pixels for each device layer.
(3) A method of correcting or compensating for the error is as follows.
If the error occurred at position (n,m) in the front array, set
F_FFull(X,n,m) = F_BFull(X,n,m).
The methods described above work particularly well if the properties of the front array are independent of the properties of the back array. This is the case if the light absorbing properties of the devices in the front array do not change but the efficiency with which each absorbed photon contributes towards the signal does change.
If the devices in the front array also change their light absorbing properties (e.g. due to bleaching of dyes), it is not possible to calculate all signal contributions of R, G, B, ΔR, ΔG and ΔB, whereby ΔX corresponds to the signal lost in a front device detecting colour X and gained in the corresponding back device. However, it is still possible to detect the location of faulty pixels. Adding up the signal values of the interpolated full colour images of the front array at position (n,m) yields

F_FFull(n,m) = F_FFull(R,-ΔR,n,m) + F_FFull(G,-ΔG,n,m) + F_FFull(B,-ΔB,n,m)
= F_FFull(R,n,m) + F_FFull(G,n,m) + F_FFull(B,n,m) - F_FFull(ΔR,n,m) - F_FFull(ΔG,n,m) - F_FFull(ΔB,n,m)
= F_FFull(W,n,m) - F_FFull(ΔW,n,m)

The same calculation for the back array yields

F_BFull(n,m) = F_BFull(ΔR,G,B,n,m) + F_BFull(R,ΔG,B,n,m) + F_BFull(R,G,ΔB,n,m) = 2F_BFull(W,n,m) + F_BFull(ΔW,n,m)

Assuming that all photons which are not absorbed in the front devices contribute to the signal in the back devices, i.e. F_FFull(X,n,m) = F_BFull(X,n,m), the cumulative signal loss F_Full(ΔW,n,m) of the three colours in the front devices (corresponding to the signal gain in the back devices) can be calculated:

F_Full(ΔW,n,m) = (F_BFull(n,m) - 2F_FFull(n,m)) / 3
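Under the stated assumption the cumulative loss follows directly: the summed back signal is 2W + ΔW and the summed front signal is W − ΔW, so their combination F_BFull − 2F_FFull equals 3ΔW. The sketch below is illustrative:

```python
def cumulative_front_loss(sum_back, sum_front):
    # sum_back = 2W + dW and sum_front = W - dW, so
    # sum_back - 2*sum_front = 3*dW.
    return (sum_back - 2.0 * sum_front) / 3.0
```

For example, with W = 6 and ΔW = 0.6 the summed signals are 12.6 and 5.4, and the loss 0.6 is recovered.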
Again, a criterion can be defined to specify whether this loss in signal is acceptable. A white balance using a grey card can be performed to identify faulty pixels.
Note that if ΔW = ΔX, i.e. only one colour array of pixels in the front array loses its light absorbing properties, the corresponding change in signal can be calculated and used to correct the data in the front and back device arrays.
At this point it should be noted that inserting beneath each front device of colour X a filter which absorbs the residual light of colour X makes the signals of the back array independent of those in the front array, and all variables R, G, B, ΔR, ΔG and ΔB can be determined, at the expense of some loss in sensitivity.
Fig. 10 illustrates a fourth photodetector configuration 100 in which the sensor devices 31 of the front array again each have a response spectrally limited to a single one of three colour components and the sensor devices 32 of the back array each have a response to the entire spectrum of interest, but with the following modifications.
Firstly, the front array consists of two interleaved sub-arrays 101 and 102 of sensor devices 31 (hereinafter first and second sub-arrays 101 and 102) that each have a response spectrally limited to a different one of three colour components, that is to blue and green, respectively, in this example.
Secondly, the fourth photodetector 100 comprises two arrays of filters 103 and 104 (hereinafter first and second arrays of filters 103 and 104), each aligned in a one-to-one relationship with one of the sub-arrays 101 and 102 of sensor devices 31 of the front array. Each array of filters 103 and 104 is arranged to absorb one of the colour components other than the colour component to which the response of the respective sub-array 101 or 102 of sensor devices of the front array with which it is aligned is spectrally limited. The arrays of filters 103 and 104 absorb at least a portion of the colour component concerned, preferably a significant portion or all of it. In this example, the first array of filters 103 absorbs red light (i.e. not the blue light converted by the aligned first sub-array 101) and the second array of filters 104 absorbs blue light (i.e. not the green light converted by the aligned second sub-array 102). As each array of filters 103 and 104 is aligned with a respective sub-array 101 and 102 of photoelectric sensor devices 31 of the front array, the arrays of filters 103 and 104 are interleaved in the same manner as the sub-arrays 101 and 102 of photoelectric sensor devices. The arrays of filters 103 and 104 may be thought of as being similar to a colour filter array in the case of visible light, except for the difference that they absorb a particular colour component while conventional colour filters transmit a particular colour component; i.e. the red filter here absorbs red light while a red filter in a conventional CFA transmits red light.
In Fig. 10 the arrays of filters 103 and 104, are shown as a separate element in front of the sensor devices 31 of the front array, but they could alternatively be provided as a separate element between the sensor devices 31 of the front array and the sensor devices 32 of the back array or integrated within the sensor devices 31 of the front array, provided that they are disposed in front of the sensor devices 32 of the back array. Examples of such an integrated implementation are given further below.
Thus the first array of filters 103 is arranged to absorb a colour component to which the response of neither the first sub-array 101 nor the second sub-array 102 of sensor devices 31 of the front array is spectrally limited, in this example red light. But the second array of filters 104 is arranged to absorb the colour component to which the response of the first sub-array 101 of sensor devices 31 of the front array is spectrally limited.
The sensor devices 31 of the front array again output signals representing the colour component to which they have a response, whereas the sensor devices 32 of the back array output signals representing the total of all of the light that reaches them without having been converted by the aligned sensor device 31 of the front array and without having been absorbed by the aligned array of filters 103 or 104. Thus, each sensor device 32 of the back array outputs a signal representing a single colour component. The sensor devices 32 of the back array may also be considered to consist of two interleaved sub-arrays 105 and 106 of sensor devices 32, aligned respectively with the two interleaved sub-arrays 101 and 102 of the front array of sensor devices 31, that output signals representing a single colour component, being in this example green and red, respectively. In this example there are twice as many photodetectors detecting green light as there are photodetectors detecting either red or blue light, as there would be in a conventional sensor using a Bayer pattern CFA on top of a single photodetector array.
With this arrangement the output signals from the sub-arrays 101 and 102 of the front array of sensor devices 31 and the sub-arrays 105 and 106 of sensor devices 32 provide all three colour components, so no further signal processing is required. The sub-arrays 101 and 102, and hence the sub-arrays 105 and 106, are interleaved, for example in the pattern shown in Fig. 11 (which is a view from above showing the colour components converted at the location of each aligned front sensor element 31 and back sensor element 32), in which the reference numerals 101, 102, 105 and 106 relate to the corresponding photodetectors shown in Fig. 10 (the filters 103 and 104 are not shown).
The output signals may be processed in a similar manner to that described for other photodetector configurations. In that way, a luminance signal representative of the luminance of the incident light and colour component signals representative of each of the colour components may be derived from the signals output by the front array of sensor devices 31 and the back array of sensor devices 32 in combination.
Furthermore, this can be achieved in some embodiments with a similar constructional advantage to the photodetector configuration 70 of Fig. 7 of requiring the sensor devices 31 of the front array to be spectrally limited to a smaller number of colour components, for example none being required to be spectrally limited to red in this example.
However, the photodetector configuration 100 has a reduced sensitivity compared to the first, second and third photodetector configurations 30, 60 and 70, as some incident light is absorbed by the arrays of filters 103 and 104. Nonetheless, the fourth photodetector configuration 100 has an improved sensitivity over a conventional image sensor, as only a single colour component is absorbed at the location of each sensor device 31 and 32.
In all the photodetectors described above, increased fidelity and colour gamut may be provided by modifying the responses of the sensor devices 31 of the front array and/or one or both of the arrays of filters 103 and 104 so that the spectral characteristics of one or more of the colour components represented by the output signals are different for different sub-arrays. Some examples of this will now be described.
Fig. 12 shows a fifth photodetector configuration 110 which is the same as the fourth photodetector configuration 100 but modified so that the second sub-array 102 of the front array converts a green colour component G_0 that has a different spectral characteristic from the green colour component of the first sub-array 105 of the back array. This means that an arrangement consisting of two sub-arrays 101 and 102 in the front array is generally capable of extracting four different colour components. As a result the output signals represent two different green colour components G and G_0, where the spectral characteristic of the green colour component G is defined by the absorption spectra of the R and B absorbing material of the array of filters 103 and the sensor devices 31 of the sub-array 101 of the front array, while the spectral characteristic of the green colour component G_0 is directly defined by the properties of the light absorbing material used in the sensor devices 31 of the sub-array 102 of the front array. This set-up is also useful if it proves to be too challenging to incorporate a suitable material with sufficiently high efficiencies in the red part of the light spectrum.
Alternatively, Fig. 13 shows a sixth photodetector configuration 111 which is the same as the fifth photodetector configuration 110 but modified so that the first and second arrays of filters 103 and 104 absorb light of the same colour components as the second sub-array 102 and the first sub-array 101, respectively, but with different spectral characteristics, i.e. different bandwidths. As a result, the output signals from the two sub-arrays 105 and 106 of the back array correspond to two different green colour components G and G_0. For example, if the absorption spectrum of B reaches towards longer wavelengths than the absorption spectrum of B_0, and if the absorption onset of R starts at longer wavelengths than the absorption onset of R_0, G will have a longer wavelength than G_0. These four colours can be used to increase the colour gamut (fidelity) of the image or to reduce colour artefacts (for example using techniques similar to those disclosed in JP-2003,284,084, the content of which is incorporated herein by reference).
Fig. 14 illustrates a seventh photodetector configuration 112 which is the same as the second photodetector configuration 60 but modified to extend the considerations of using different spectral characteristics to the case of three sub-arrays 34, 35 and 36, so that the spectral characteristics of the colour components R_1, G_2 and B_3 of the front sensor devices 31 differ from the spectral characteristics of the colour components G_1, B_1, R_2, B_2, R_3 and G_3 represented by the signals from the back sensor devices 32. For example, the front sensor devices 31 which detect colour components R_1, G_2 and B_3 might have very narrow absorption spectra, which means that the absorption spectra of the back sensor devices 32 for G_1, B_1, R_2, B_2, R_3 and G_3 are broader. Hence, the achievable colour gamut of the front array of sensor devices 31 is potentially larger than the colour gamut obtainable by the back array of sensor devices 32. Such a design allows the creation of an image with improved colour gamut, as the colours R_1, G_2 and B_3 are placed closer to the edge of the CIE 1931 colour space chromaticity diagram, i.e. closer to monochromatic colours.
The output signals of the fifth to seventh photodetector configurations 110 to 112 may be processed in a similar manner to that described for other photodetector configurations. In that way, a luminance signal representative of the luminance of the incident light and colour component signals representative of each of the colour components may be derived from the signals output by the front array of sensor devices 31 and the back array of sensor devices 32 in combination.
The various photodetector configurations described above are given by way of example and are not limitative. For example, the photodetector 22 may comprise further sub-arrays of photodetectors, additional colour filter arrays and/or transistors (e.g. for active pixel arrays).
Fig. 15 shows an example of the construction of a sensor device 31 of the front array that is a dye sensitised photoelectric device employing organic material in the form of an organic dye to convert one or more of the colour components into an electrical signal. Figure 15 shows the approximate thickness of various layers, although thinner layers may be preferred. The sensor device 31 has a layered construction that will now be described.
The sensor device 31 is formed on a substrate 120, which is a silicon substrate in which the sensor devices 32 of the back array are formed.
A first electrode 121 formed of a transparent conductive material (e.g. ITO (indium tin oxide) or FTO (fluorine doped tin oxide)) is provided as a first layer on the substrate 120.
The second layer provided on the first electrode 121 is a compact conductive layer 122 of a non-porous, transparent material (e.g. TiO2) that is less conductive than the first electrode 121.
The third layer provided on the compact conductive layer 122 is a conductive layer 123 made of a highly transparent material (e.g. TiO2 nanocrystals) arranged in an open structure having pores. This material is conductive to charge carriers of a first type, in this example electrons. The porous conductive layer 123 is electrically connected to the compact conductive layer 122 and hence to the first electrode 121. The compact conductive layer 122 and the porous conductive layer 123 may be made of the same material, but that is not essential.
The porous conductive layer 123 is coated with an organic dye 124 comprising light absorbing molecules that convert one or more of the colour components.
The next layer is a transporter layer 125 that sufficiently fills the pores of the porous conductive layer 123 and extends above it. It is not necessarily required that the transporter layer 125 fill the pores completely; it is conceivable that the hole transporter merely wets the surface of the conductive layer 123, provided that the mobility of the second type of charge carriers is sufficiently large. Furthermore, layer 123 might be replaced by another transparent and conductive layer with a sufficiently large surface-to-volume ratio; nanowires comprising a transparent conductive oxide (e.g. SnO, ZnO or TiO2) may constitute a suitable alternative. The transporter layer 125 is made of a material that is conductive to charge carriers of a second type, in this example holes. For example, the transporter layer 125 may be a hole-conducting polymer.
The final layer is a second electrode 126 disposed on, and electrically connected to, the transporter layer 125. The second electrode 126 is made of a transparent film, for example a film of metal (e.g. gold) less than 50 nm thick, or a transparent conductive oxide such as ITO.
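The requirement that a metal second electrode remain very thin can be illustrated with the Beer-Lambert law, T = exp(-αd) (an order-of-magnitude sketch; the absorption coefficient below is an assumed value, not taken from the patent):

```python
import math

# Order-of-magnitude sketch: transmission of a thin metal film by the
# Beer-Lambert law. ALPHA is an assumed value for a gold film in the
# visible, used for illustration only.
ALPHA = 7e5  # absorption coefficient, cm^-1 (assumed)

def transmission(d_nm):
    """Fraction of light transmitted through a film of thickness d_nm (nanometres)."""
    return math.exp(-ALPHA * d_nm * 1e-7)  # 1 nm = 1e-7 cm

# A film well under 50 nm transmits a usefully larger fraction of the light.
assert transmission(20) > transmission(50)
```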
The sensor device 31 works as follows. Light penetrating the second electrode 126 (which is the front electrode) is absorbed in the organic dye 124, whereby each absorbed photon generates one electron, which is injected into the porous conductive layer 123, and one hole, which is injected into the transporter layer 125. Hence, electrons are collected at the first electrode 121 and holes are collected at the second electrode 126.
The wavelength of the photons absorbed is determined by the choice of the organic dye 124. Materials of different light absorbing properties can be selected to obtain the desired spectral response of the sensor device 31.
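Since each absorbed photon yields one collected electron, the expected photocurrent scales directly with the absorbed photon rate. A rough back-of-envelope estimate follows (all values below are assumptions for illustration, not from the patent):

```python
# Rough estimate of per-pixel photocurrent: one electron per absorbed photon,
# so I = q * flux * absorptance * area. All numbers are assumed.
Q = 1.602e-19        # elementary charge, coulombs
FLUX = 1e17          # incident photon flux, photons / (cm^2 * s) (assumed)
ABSORPTANCE = 0.9    # fraction of photons absorbed by the dye (assumed)
AREA = 1e-6          # pixel area, cm^2, i.e. a 10 um x 10 um pixel

current = Q * FLUX * ABSORPTANCE * AREA  # amperes; roughly 14 nA here
assert 1e-8 < current < 1e-7
```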
Optionally, a filter may be incorporated into the sensor devices 31 by including an absorptive material arranged to absorb EM radiation without providing the photoelectric sensor device 31 with a response thereto. In the structure shown in Fig. 15, the absorptive material might be disposed within the transporter layer 125, for example as nanoparticles or dyes which are electrically isolated from the porous conductive layer 123. In general, the absorptive material might be a dye (or an equivalent material) disposed anywhere in the sensor device 31, but preferably within the transporter layer 125, without being able to inject a charge into either of the electrodes 121 or 126. This can be achieved by using a dye which is miscible in the hole transporter but has no molecular groups capable of attaching to the porous conductive layer 123. Such a filter incorporated into the sensor devices 31 may be used to form the embodiments described above in which arrays of filters are provided in front of the sensor devices of the back array.
Fig. 16 shows the construction of the photodetector 22 with the sensor devices 31 of the front array stacked on a semiconductor substrate 130 in which the sensor devices 32 of the back array are formed as CMOS devices, or more generally as any other form of silicon-based photovoltaic devices (e.g. a CCD), in a common layer of silicon, or more generally semiconductor material. The sensor devices 31 of the front array have respective first electrodes 121 and a common front electrode 126.
The sensor devices 31 and 32 may be formed using conventional techniques, including but not restricted to any combination of the following techniques.
1. Additive techniques (e.g. deposition, transfer)
• Deposition methods include but are not restricted to direct or indirect thermal evaporation, sputter deposition, chemical vapour deposition, spin coating, spray coating, doctor blading and ink-jet printing.
• Transfer methods include dry transfer methods such as stamp-based transfers, and device bonding.
2. Subtractive techniques (e.g. etching, sputtering, dissolving)
• Etching includes wet-chemical etching and dry etching (e.g. reactive ion etching). Dry etching techniques may be combined with sputtering techniques.
• Sputtering includes ion milling.
3. Selective techniques (e.g. self-assembly, chemical functionalisation, local heating, local exposure to particles, local exposure to mechanical stress)
• Local heating may occur due to localised exposure to an energy source (e.g. a focussed laser beam, or selective exposure to photons using a mask) or due to particular energy-absorbing properties of the material to be heated (e.g. due to a smaller indirect band-gap compared to neighbouring materials, allowing it to absorb photons under emission of phonons; excimer laser annealing of silicon on a glass substrate is such a material combination, where the photons are predominantly absorbed in the silicon).
• Chemical functionalisation may utilise particular surface properties of the elongate low-dimensional structures, as defined by the material composition.
• Local exposure to particles includes, beyond the aforementioned lithographic methods, the use of a focussed ion beam. Local exposure to mechanical stress includes imprint technologies.
The sensor devices 31 and 32 themselves may have a detailed construction similar to the stacked sensor devices disclosed in US-2007/0012955 and US-2009/0283758.
The sensor devices 31 of the front array may be fabricated directly onto sensor devices 32 of the back array, or alternatively may be fabricated on different substrates and subsequently assembled. Some examples of methods that may be employed are as follows.
A method of fabricating a pixelated photoelectric device array on a glass substrate is illustrated in Figs. 17(a) to 17(d).
First, a glass substrate 140 is coated with a TCO (transparent conductive oxide) layer 141. Next, a compact TiO2 film (not shown) may be deposited. Then a porous conductive film 142 of TiO2 particles is formed (e.g. by doctor blading), which is subsequently sintered at a temperature in the range of 300°C to 600°C to ensure continuous conduction paths through the film towards the TCO layer 141. Photoresist 143 is spin-coated and patterned using conventional photolithographic techniques. The result up to this point is depicted in Fig. 17(a).
Next, a first dye and first hole transporter are added to the unmasked sections 144 that ultimately form a first sub-array of sensor devices 31, as shown in Fig. 17(b). To prevent the hole transporter from coating the photoresist 143, the surface properties of the photoresist 143 may have to be altered using a suitable functionalisation step.
Next, the photoresist 143 is removed using suitable solvents, as shown in Fig. 17(c). Prior to this step, the first hole transporter may be cross-linked.
Then a second dye and second hole transporter are added to the sections 145 that were previously masked and ultimately form a second sub-array of sensor devices 31, as shown in Fig. 17(d).
To simplify the method described above, it might be preferable not to deposit the first hole transporter immediately after the deposition of the first dye, but instead to first remove the photoresist 143, then mask the regions coated with the first dye using a photoresist (not shown), add the second dye, remove the photoresist, and finally add a hole transporter. In this case, the first hole transporter is identical to the second hole transporter.
An even shorter process flow may comprise the deposition of the first dye after the sintering of film 142, followed by spin coating of the photoresist 143. After patterning the photoresist, the first dye might be removed from the exposed regions; in fact, a suitable resist developer might also remove the dye. Next, the second dye is deposited, the resist is removed and the hole transporter is added. In this case, the positions of the first and second dyes are swapped.
Figs. 18(a) and (b) show an alternative construction of the front array of sensor devices 31, similar to that of Fig. 17 but in which the porous conducting material 142 is discontinuous, separated by spacers 146 forming pockets 147 and 148. First, all pockets 147 and 148 are filled with a suitable polymer, and subsequently a first set of pockets 148 is masked using the photolithography process described above. Then, the polymer of the exposed pockets 147 is removed, and a first dye and first hole transporter are deposited, as shown in Fig. 18(a).
Next, the resist 143 is removed and the polymer in the previously masked pockets 148 is removed. To ensure that the first hole transporter is not removed during these steps, the polymer might be chosen to be highly soluble in a particular solvent compared to the first hole transporter (this can be achieved either by choosing an appropriate polymer and solvent combination or by cross-linking the first hole transporter).
Next, a second dye and second hole transporter are deposited in the open pockets 148, during which the previously processed pockets 147 might be masked using a patterned photoresist. This embodiment is particularly useful if the chosen photoresist does not mask the porous material sufficiently (e.g. it might not fill its cavities sufficiently) or if it is difficult to remove the chosen photoresist from the cavities of the porous material. Thus the material in the pockets 147 and 148 forms respective sub-arrays of sensor devices 31.
If a hole transporter which can be cross-linked is chosen, it can fulfil the role of the photoresist and replace it. Hence, the process described above, and in particular the process relating to Figs. 17(a) to (d), simplifies, as will now be described with reference to Figs. 19(a) to (c).
After sintering, the porous conductive film 142 is coated with a first dye and first hole transporter, as shown in Fig. 19(a). Next, the desired portions 150 of the hole transporter are cross-linked using a suitable light source (e.g. UV light) in combination with a metal mask, as shown in Fig. 19(b).
Now the non-crosslinked portions 151 of polymer are removed, for example using techniques well known for patterning photoresists. Now, the first dye is removed from the exposed sections 151 (e.g. using oxygen plasma) and a second dye and second polymer are deposited as shown in Fig. 19(c).
Next, the fabricated assembly 160 of the front array of sensor devices 31 needs to be joined with the assembly 162 of the back array of sensor devices 32, which can be made in a silicon chip using standard fabrication techniques for CMOS image sensors.
This may be done by applying an adhesive layer 163, which may be a conductive polymer film, to coat the assembly 160 of the front array of sensor devices 31 or the assembly 162 of the back array of sensor devices 32, as shown in Figs. 20(a) and (b) respectively, and subsequently joining the assemblies 160 and 162 together. The adhesive layer 163 might require thermal annealing or exposure to a suitable radiation source to improve adhesion, for example by facilitating crosslinking.
To achieve crosslinking, the assembly 160 of the front array of sensor devices 31 needs to be sufficiently transparent to the radiation used (e.g. UV light) where a UV cure under high pressure is applied to form good electrical contacts. This can be achieved as follows.
The substrate 140 is replaced by a handling wafer 164 and a sacrificial layer 165, as shown in Fig. 21(a). The handling wafer 164 and the sacrificial layer 165 are ideally transparent to UV light. If not, a bond just sufficient to keep the assembly 160 in place is required. Instead of forming the bond across the assembly 160, it may be possible to form the bond at the periphery, thereby effectively sealing the assemblies 160 and 162.
Then, the handling wafer 164 is removed, as shown in Fig. 21(b), possibly by etching using the sacrificial layer 165 as an etch stop, or alternatively by removal of the sacrificial layer 165 to release the handling wafer 164. The use of a layer acting as an etch stop might be required because glass wafers which have thermochemical properties comparable to silicon (required for high alignment accuracy) are usually etched with HF, which also etches most TCOs.
Next, as an optional step the sacrificial layer 165 is removed, as shown in Fig. 21(c).
The remaining TCO layer 141 might also absorb a portion of the UV radiation, but it can be made sufficiently thin to transmit a sufficiently large fraction of the UV radiation. Also, where spacers 146 are used, as shown in Figs. 18(a) and (b), the spacers 146 consist of a material that is sufficiently transparent to the UV radiation.

Claims

1. An image sensor for sensing EM radiation of at least three spectral components spread across a predetermined spectrum of interest, the image sensor comprising a photodetector comprising a front array of photoelectric sensor devices for receiving incident EM radiation stacked on a back array of photoelectric sensor devices and aligned in a one-to-one relationship with the photoelectric sensor devices of the back array for receiving the incident EM radiation after transmission through the front array, wherein
the front array consists of at least two interleaved sub-arrays of photoelectric sensor devices each having a response spectrally limited to one or more of said spectral components that is the same within each sub-array and different between the sub-arrays, so that the photoelectric sensor devices are configured to output signals representing the one or more spectral components to which they have a response, and
the back array comprises photoelectric sensor devices each having a response to the entire spectrum of interest, so that the photoelectric sensor devices are configured to output a signal representing the total of all of said spectral components that reach the sensor device without having been absorbed by the aligned photoelectric sensor device of the front array or by any filter optionally provided in front of the photoelectric sensor device of the back array.
2. An image sensor according to claim 1, wherein each photoelectric sensor device of the front array has a peak absorptance, in respect of the one or more of said spectral components to which that photoelectric sensor device has a response, of at least 60%, preferably at least 75%, more preferably at least 90%, more preferably at least 95%, more preferably at least 99%.
3. An image sensor according to claim 1 or 2, wherein the front array consists of at least three interleaved sub-arrays of photoelectric sensor devices each having a response spectrally limited to one of said spectral components that is the same within each sub-array and different between the sub-arrays.
4. An image sensor according to claim 1 or 2, wherein the front array consists of at least three interleaved sub-arrays of photoelectric sensor devices, wherein the photoelectric sensor devices of at least one of said sub-arrays have a response spectrally limited to at least two of said spectral components.
5. An image sensor according to claim 4, wherein the front array consists of three interleaved sub-arrays of photoelectric sensor devices, the photoelectric sensor devices of a first two of the sub-arrays each having a response spectrally limited to one of said spectral components that is the same within each sub-array and different between the sub-arrays, and the photoelectric sensor devices of the third sub-array each having a response spectrally limited to both of said spectral components to which the responses of the photoelectric sensor devices of the first two sub-arrays are spectrally limited.
6. An image sensor according to any one of the preceding claims, wherein the photodetector does not include any filters arranged to absorb any of the spectral components.
7. An image sensor according to any of the claims 1 to 5, wherein the photodetector further comprises at least one array of filters aligned in a one-to-one relationship with one of the sub-arrays of photoelectric sensor devices of the front array, disposed in front of the photoelectric sensor devices of the back array, and arranged to absorb at least a portion of one of the spectral components.
8. An image sensor according to claim 7, wherein the at least one array of filters is arranged to absorb at least a portion of one of the spectral components other than one or more of said spectral components to which the response of the sub-array of photoelectric sensor devices of the front array with which the array of filters are aligned is spectrally limited.
9. An image sensor according to claim 7 or 8, wherein
the front array consists of first and second interleaved sub-arrays of photoelectric sensor devices each having a response spectrally limited to one of said spectral components that is the same within each sub-array and different between the sub-arrays, and
the at least one array of filters consists of first and second arrays of filters wherein the first array of filters is substantially aligned in a one-to-one relationship with the first array of photoelectric sensor devices of the front array and the second array of filters is substantially aligned in a one-to-one relationship with the second sub-array of photoelectric sensor devices of the front array, at least one of the arrays of filters being arranged to absorb a spectral component to which the response of neither the first nor the second sub-array of photoelectric sensor devices of the front array is spectrally limited, and the second array of filters being arranged to absorb the spectral component to which the response of the first sub-array of photoelectric sensor devices of the front array is spectrally limited.
10. An image sensor according to any one of the preceding claims, further comprising a signal processing unit configured to receive the signals output by the photoelectric sensor devices of the back array and to process the received signals to derive spectral component signals representative of each of the at least three spectral components.
11. An image sensor according to any one of claims 1 to 9, further comprising a signal processing unit configured to receive the signals output by the photoelectric sensor devices of the front array and the back array and to process the received signals to derive spectral component signals representative of each of the at least three spectral components.
12. An image sensor according to claim 11, wherein the signal processing unit is configured to derive said spectral component signals from the signals output by the photoelectric sensor devices of both the front array and the back array in combination.
13. An image sensor according to claim 11 or 12, wherein the signal processing unit is configured to process the received signals to derive said spectral component signals at the spatial resolution of the front and back arrays of photoelectric sensor devices.
14. An image sensor according to any one of claims 11 to 13, wherein the signal processing unit is further configured to process the received signals to derive a luminance signal representative of the luminance of the incident EM radiation at the spatial resolution of the front and back arrays of photoelectric sensor devices.
15. An image sensor according to claim 14, wherein the signal processing unit is configured to derive said luminance signal from the signals output by the photoelectric sensor devices of both the front array and the back array in combination.
16. An image sensor according to claim 14 or 15, wherein the signal processing unit is configured to derive said spectral component signals using said luminance signal.
17. An image sensor according to any of claims 11 to 16, wherein the signal processing unit is configured to use the received signals of the front array and the received signals of the back array to identify photoelectric sensor devices which produce a faulty signal.
18. An image sensor according to any of the preceding claims, wherein the signal processing unit is configured to use the received signals of the front array and the received signals of the back array to produce an image with a dynamic range exceeding the dynamic range of the image derived from the signals of the front array and the dynamic range of the image derived from the signals of back array.
19. An image sensor according to any of claims 11 to 18, wherein the bandwidth of the signal of one photoelectric sensor device per pixel differs substantially from the bandwidth of the signals of the other devices per pixel, and wherein the signal processing unit is configured to derive a colour image with a larger colour gamut than an image derived solely from the signals with the larger bandwidth and with a larger dynamic range than an image derived solely from the signals with the smaller bandwidth.
20. An image sensor according to any of claims 11 to 19, wherein the signal processing unit processes signals of at least four spectral components to derive an image with a lower noise floor compared to any image derived from three of the four spectral components.
21. An image sensor according to any one of the preceding claims, wherein the back array comprises photoelectric sensor devices comprising inorganic material.
22. An image sensor according to claim 21, wherein the inorganic material is semiconductor material, optionally silicon.
23. An image sensor according to claim 21 or 22, wherein the back array comprises photoelectric sensor devices comprising portions of a common layer of inorganic material.
24. An image sensor according to any one of the preceding claims, wherein the back array comprises photoelectric sensor devices formed as CMOS devices.
25. An image sensor according to any one of the preceding claims, wherein the front array comprises photoelectric sensor devices comprising organic material configured to absorb said one or more of said spectral components.
26. An image sensor according to claim 25, wherein the organic material is an organic dye.
27. An image sensor according to claim 25, wherein the organic material is an organic semiconductor of either molecular or polymeric form.
28. An image sensor according to any one of claims 25 to 27, wherein the front array comprises photoelectric sensor devices comprising:
first and second electrodes;
a first transparent material that is conductive to charge carriers of a first type having pores and being electrically connected to the first electrode, the first transparent material being coated by said organic material; and
a second transparent material that is conductive to charge carriers of a second type disposed within said pores and being electrically connected to the second electrode.
29. An image sensor according to any one of the preceding claims, wherein at least some of the photoelectric sensor devices of the front array further comprise an absorptive material arranged to absorb EM radiation without providing the photoelectric sensor devices with a response thereto and thereby to act as a filter.
30. An image sensor according to claim 29 when dependent on claim 28, wherein the absorptive material is disposed within the second transparent material.
31. An image sensor according to any one of the preceding claims, wherein the photodetector further comprises a filter disposed in front of the front array of photoelectric sensor devices and that has an absorption spectrum that is spatially uniform across the image sensor.
32. An image sensor according to claim 31, wherein the filter is a bandpass filter arranged to restrict the spectrum of the incident EM radiation to bandwidth ranges in respect of each of said spectral components.
33. An image sensor according to any one of the preceding claims, wherein the spectrum of interest is visible light.
34. An image sensor according to claim 33, wherein the spectral components are red, green and blue colour components.
PCT/GB2011/001283 2010-09-03 2011-08-31 Image sensor WO2012028847A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
GB1014713.0 2010-09-03
GBGB1014713.0A GB201014713D0 (en) 2010-09-03 2010-09-03 Image sensor
GB1108211.2 2011-05-16
GBGB1108211.2A GB201108211D0 (en) 2011-05-16 2011-05-16 Image sensor

Publications (1)

Publication Number Publication Date
WO2012028847A1 true WO2012028847A1 (en) 2012-03-08

Family

ID=45772206

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2011/001283 WO2012028847A1 (en) 2010-09-03 2011-08-31 Image sensor

Country Status (1)

Country Link
WO (1) WO2012028847A1 (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3971065A (en) 1975-03-05 1976-07-20 Eastman Kodak Company Color imaging array
US20030141564A1 (en) 2002-01-25 2003-07-31 Fuji Photo Film Co., Ltd. Solid state image pickup device with two photosensitive fields per one pixel
JP2003284084A (en) 2002-03-20 2003-10-03 Sony Corp Image processing apparatus and method, and manufacturing method of image processing apparatus
US20030209651A1 (en) * 2002-05-08 2003-11-13 Canon Kabushiki Kaisha Color image pickup device and color light-receiving device
US6731397B1 (en) 1999-05-21 2004-05-04 Foveon, Inc. Method for storing and retrieving digital image data from an imaging array
EP1435662A2 (en) 2002-12-09 2004-07-07 Fuji Photo Film Co., Ltd. Solid state image pickup device with wide dynamic range
JP2005286104A (en) 2004-03-30 2005-10-13 Fuji Film Microdevices Co Ltd Wide dynamic range color solid-state imaging apparatus and digital camera mounted therewith
US20060278869A1 (en) * 2005-06-02 2006-12-14 Fuji Photo Film Co., Ltd. Photoelectric conversion layer, photoelectric conversion device and imaging device, and method for applying electric field thereto
US20070012955A1 (en) 2005-06-29 2007-01-18 Fuji Photo Film Co., Ltd. Organic and inorganic hybrid photoelectric conversion device
US20070223059A1 (en) 2006-03-24 2007-09-27 Fujifilm Corporation Image pickup apparatus and a method for producing an image of quality matching with a scene to be captured
US20070279501A1 (en) * 2006-05-18 2007-12-06 Fujifilm Corporation Photoelectric-conversion-layer-stack-type color solid-state imaging device
US7411620B2 (en) 2004-03-19 2008-08-12 Fujifilm Corporation Multilayer deposition multipixel image pickup device and television camera
WO2008150342A1 (en) 2007-05-23 2008-12-11 Eastman Kodak Company Noise reduced color image using panchromatic image
US20090283758A1 (en) 2008-05-14 2009-11-19 Fujifilm Corporation Organic semiconductor, photoelectric conversion element and image device
US20100123070A1 (en) * 2008-11-20 2010-05-20 Sony Corporation Solid-state image capture device and image capture apparatus

Similar Documents

Publication Title
CN107146850B (en) Imaging element, laminated imaging element, solid-state imaging device, and driving method thereof
TWI459543B (en) Solid-state image capturing device and electronic device
US8378397B2 (en) Solid-state imaging device, process of making solid state imaging device, digital still camera, digital video camera, mobile phone, and endoscope
US8232616B2 (en) Solid-state imaging device and process of making solid state imaging device
US8704281B2 (en) Process of making a solid state imaging device
EP2413182B1 (en) Solid-state imaging device, driving method thereof and electronic apparatus
JP6754156B2 (en) Manufacturing methods for solid-state image sensors and solid-state image sensors, photoelectric conversion elements, image pickup devices, electronic devices, and photoelectric conversion elements.
JP2013055252A (en) Solid state image sensor and manufacturing method therefor, and electronic apparatus
WO2012028847A1 (en) Image sensor
US7339216B1 (en) Vertical color filter sensor group array with full-resolution top layer and lower-resolution lower layer
EP3340305B1 (en) Electronic devices and methods of manufacturing the same
KR20110003169A (en) Color unit and imaging device containing the same
US20160142660A1 (en) Single chip image sensor with both visible light image and ultraviolet light detection ability and the methods to implement the same
JP2009130239A (en) Color imaging device
CN114361341A (en) Laminated image pickup device and image pickup module
JP2014225536A (en) Solid-state imaging device and camera
TWI618235B (en) Quantum dot image sensor
WO2017163923A1 (en) Photoelectric conversion element, method for measuring same, solid-state imaging element, electronic device, and solar cell
TWI808414B (en) Image sensor
WO2017038256A1 (en) Imaging element, multilayer imaging element and solid-state imaging device
KR101653744B1 (en) Light Detector and Imaging Device Containing the Same
WO2007061565A2 (en) Vertical color filter sensor group array with full-resolution top layer and lower-resolution lower layer
JP5253856B2 (en) Solid-state imaging device
JP2005311315A (en) Organic information reading sensor, method of manufacturing same, and information reader using same
Knipp et al. Color aliasing free thin-film sensor array

Legal Events

Date Code Title Description

121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 11760808
Country of ref document: EP
Kind code of ref document: A1

NENP: Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 11760808
Country of ref document: EP
Kind code of ref document: A1