US20070047803A1 - Image processing device with automatic white balance - Google Patents
- Publication number: US20070047803A1
- Authority
- US
- United States
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/46—Colour picture communication systems
- H04N1/56—Processing of colour picture signals
- H04N1/60—Colour correction or control
- H04N1/6027—Correction or control of colour gradation or colour contrast
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/88—Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
Definitions
- the invention relates to the image processing algorithms that are used in digital cameras. More accurately, the invention relates to Automatic White Balance (AWB) algorithms.
- AWB Automatic White Balance
- AWB algorithms that are used in digital cameras try to do the same for raw images that are captured by digital camera sensors. That is, AWB adjusts the gains of different color components (e.g. R, G and B) with respect to each other in order to make white objects white, despite the color temperature differences of the image scenes or different sensitivities of the color components.
- One of the existing methods of the AWB is to calculate averages of each color component, and then apply such gains for each color component so that these averages become equal. These types of methods are often called “grey world” AWB algorithms.
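As a rough illustration of this grey-world idea (not the patent's own method; the function names and the choice of green as the reference channel are assumptions), the averaging and gain step might look like:

```python
import numpy as np

def grey_world_gains(image):
    """Compute per-channel gains that equalize the R, G and B averages.

    `image` is an H x W x 3 float array. Gains are normalized so that the
    green gain is 1.0 (green as reference is a common camera convention).
    """
    means = image.reshape(-1, 3).mean(axis=0)  # average of each color component
    return means[1] / means                    # gain_c = mean_G / mean_c

def apply_gains(image, gains):
    # Broadcast the per-channel multiplication over all pixels.
    return image * gains

# A synthetic reddish cast: the R average is twice the G average.
img = np.ones((4, 4, 3)) * np.array([2.0, 1.0, 1.0])
gains = grey_world_gains(img)
balanced = apply_gains(img, gains)
```

After the gains are applied, the three channel averages are equal, which is exactly the grey-world equalization described above.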
- the purpose of this invention is to provide sophisticated AWB mechanisms.
- FIG. 1 is a schematic block diagram illustrating main components of a camera device according to a preferred embodiment of the present invention.
- FIG. 2 is a schematic block diagram illustrating main components of the AWB analyzer 17 according to the preferred embodiment.
- FIG. 3 is a flow chart for explaining the image processing of the AWB analyzer 17 according to the preferred embodiment.
- the camera device 1 comprises a sensor module 11 , a pre-processing circuit 13 , a white balance (WB) amplifier 15 , an auto white balance (AWB) analyzer 17 , a post-processing circuit 19 , a display 21 , and a memory 23 .
- the sensor module 11 can be used to generate images for the viewfinder and to take still pictures or videos.
- the sensor module 11 comprises a lens, an image sensor with an RGB Bayer color filter, an analog amplifier, and an A/D converter, and converts incident light into a digital signal.
- This digital signal can be called raw data, and has an RGB Bayer format, i.e. each 2×2 pixel block comprises two Green samples, one Red sample, and one Blue sample.
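For illustration, a Bayer mosaic of this kind can be split into its color planes as sketched below. The RGGB ordering of the 2×2 block is an assumption, since the text fixes only the counts of each color, not their positions:

```python
import numpy as np

def split_bayer_rggb(raw):
    """Split a Bayer mosaic into its four color planes.

    Assumes an RGGB layout for each 2x2 block (an assumption; the patent
    only states two green, one red and one blue sample per block):
        R G
        G B
    `raw` is an H x W array with even H and W.
    """
    r  = raw[0::2, 0::2]   # red sample of each 2x2 block
    g1 = raw[0::2, 1::2]   # first green sample
    g2 = raw[1::2, 0::2]   # second green sample
    b  = raw[1::2, 1::2]   # blue sample
    return r, g1, g2, b

raw = np.arange(16).reshape(4, 4)
r, g1, g2, b = split_bayer_rggb(raw)
```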
- the color filter may be a CMY Bayer filter.
- the sensor module 11 may not use a Bayer color filter, but may comprise a Foveon-type sensor (i.e. a sensor which records image signals of different wavelengths at different depths within the silicon).
- the sensor module 11 may also comprise actuators for auto-focus, zoom, aperture control and ND-filter control.
- the pre-processing circuit 13 may perform e.g. noise reduction, pixel linearization, and shading compensation.
- the WB amplifier 15 adjusts the gains of different color components (i.e. R, G and B) with respect to each other in order to make white objects white.
- the amount of gaining is decided by the AWB analyzer 17 .
- the AWB analyzer 17 analyzes the raw data to calculate the amount of gaining, and sets the digital gains of the WB amplifier 15 according to the result of the calculation.
- in addition to color balance, the AWB analyzer 17 may also include an overall digital gain component in the WB gains to increase image brightness and contrast by adaptively stretching the histograms towards the bright end.
- the AWB analyzer 17 may also calculate R, G and B offsets to increase the image contrast by adaptively stretching the histograms towards the dark end. Those offsets should be taken into account in the white balance gains, and they can be applied to the data prior to or after WB gaining.
- the AWB analyzer 17 may also calculate an amount and shape of non-linear correction, which may be applied to the image data at the post-processing circuit 19 .
- the post processing circuit 19 performs CFA (Color Filter Array) interpolation, color space conversion, gamma correction, RGB to YUV conversion, image sharpening, and so on.
- the post processing circuit 19 may comprise hardware pipelines performing some or all of these operations, or may comprise a processor performing them in software.
- the image data processed by the post processing circuit 19 may be displayed on the display 21 or stored in the memory 23 .
- the memory 23 is a removable storage means such as an MMC card or an SD card.
- FIG. 2 is a schematic block diagram illustrating main components of the AWB analyzer 17 according to the preferred embodiment of the present invention.
- the AWB analyzer 17 comprises a block extractor 31, a saturation checker 33 and average calculators for the Red (R) component 35, the Green (G) component 36, and the Blue (B) component 37.
- the AWB analyzer 17 also comprises a bus 39, a CPU 41, firmware 42, and a RAM 43.
- the block extractor 31 divides a frame of the raw data into small blocks, and provides the raw data to the saturation checker 33 block by block.
- a frame in this embodiment means a total area of a still picture or a single picture in a series of video data.
- Each block contains a plurality of red, green, and blue pixel data.
- the block extractor 31 is arranged to divide a frame into 72×54 blocks.
- any ways of dividing are possible as long as they meet the requirements of speed, quality, price and so on.
- the saturation checker 33 is connected to the bus 39, and checks whether a block contains a saturated pixel or not. The saturation checker 33 also checks whether the block in process is adjacent to another saturated block or not. This information, i.e. whether the block contains no saturated pixels, contains a saturated pixel, or is adjacent to a saturated block, is sent to the CPU 41. The information may also contain the number of saturated pixels within a block. The number of saturated pixels is useful for determining how reliable a saturated block is, if saturated blocks need to be used in step 150 or step 180 due to big block sizes etc.
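A sketch of this per-block saturation classification, assuming the per-block saturated-pixel counts are already available and using an 8-neighborhood for adjacency (the text does not specify which neighborhood counts as "adjacent"):

```python
import numpy as np

def block_saturation_flags(sat_counts):
    """Classify blocks from a 2-D array of per-block saturated-pixel counts.

    Returns two boolean arrays: `saturated` marks blocks containing at least
    one saturated pixel, `adjacent` marks non-saturated blocks that touch a
    saturated block (8-neighborhood, an assumption).
    """
    saturated = sat_counts > 0
    padded = np.pad(saturated, 1, constant_values=False)
    neighbor_sat = np.zeros_like(saturated)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            # Shifted view of the saturation map: True where that
            # neighbor of the block is saturated.
            neighbor_sat |= padded[1 + dy : padded.shape[0] - 1 + dy,
                                   1 + dx : padded.shape[1] - 1 + dx]
    adjacent = neighbor_sat & ~saturated
    return saturated, adjacent

counts = np.array([[0, 0, 0], [0, 3, 0], [0, 0, 0]])
saturated, adjacent = block_saturation_flags(counts)
```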
- the average calculators 35, 36, and 37 are connected to the bus 39, and calculate average values of the Red components, Green components, and Blue components of the block in process respectively. If the block in process contains saturated pixels, the average calculators calculate the average values of only the non-saturated pixels. The calculated average values are provided to the CPU 41 through the bus 39.
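The averaging over non-saturated pixels only might be sketched as follows; the saturation threshold value and the function names are assumptions:

```python
import numpy as np

def block_channel_averages(block_r, block_g, block_b, sat_threshold=255):
    """Average each color component of one block over non-saturated pixels.

    Each `block_*` is a 1-D array of that component's pixel values in the
    block; `sat_threshold` is the sensor's saturation level (assumed here
    to be 255 for 8-bit data). A component with no valid pixels yields nan.
    """
    def masked_mean(values):
        valid = values[values < sat_threshold]  # drop saturated pixels
        return valid.mean() if valid.size else float("nan")
    return masked_mean(block_r), masked_mean(block_g), masked_mean(block_b)

avg = block_channel_averages(np.array([100.0, 255.0, 200.0]),
                             np.array([50.0, 50.0]),
                             np.array([255.0, 255.0]))
```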
- the block extractor 31 , the saturation checker 33 and the average calculators 35 , 36 , and 37 may be hardware circuits. But they can also be implemented as software processing by using a CPU. In the latter case the way of dividing may be programmable.
- the CPU 41 stores the saturation information and the R, G, B respective average values for each block in the RAM 43 according to instructions of the firmware 42 .
- the firmware 42 is software which instructs the CPU 41 to perform the necessary data processing.
- the CPU 41 performs further calculations to decide the amount of digital gains based on this information according to the instructions of the firmware 42 .
- Step 110 shows the start of analysis.
- a frame of image data to be analyzed is provided from the pre-processing circuit 13.
- the image data to be analyzed in this embodiment is raw data in RGB Bayer format.
- this invention can be applied to other data formats, e.g. the CMY Bayer format, a data format containing the same number of color elements for all the color components, or a data format in which each pixel contains all used color components.
- the block extractor 31 divides a frame of the raw data into small blocks, and provides the raw data to the saturation checker 33 block by block.
- the frame in this embodiment means a total area of a still picture or a single picture in a series of video data.
- each block contains a plurality of red, green, and blue pixel data.
- the saturation checker 33 checks, for each block, whether the block contains no saturated pixels, contains a saturated pixel, or is adjacent to a saturated block. The average calculators 35, 36, and 37 calculate average values of the Red, Green, and Blue components of the block in process respectively. If the block in process contains saturated pixels, the average calculators calculate the average values of only the non-saturated pixels.
- the AWB analyzer 17 will obtain a set of block information.
- the number of block information entries is the same as the number of blocks, and each entry contains saturation information and average values for the R, G, and B components of the block.
- This set of block information is stored in the RAM 43 .
- in step 150 the CPU 41 scans, according to the instructions of the firmware 42, all of the block information, and calculates a set of statistic values from the block information.
- These statistic values include histograms of the red component, green component, blue component, and luminance of the whole of the raw data in process.
- the luminance in this embodiment is defined by (r+2g+b)/4, where the r, g, and b represent the average values of red, green and blue component of a block.
- Those statistic values also include the average values of red, green and blue components of the raw data in process, and the maximum luminance value.
- the maximum luminance value is a luminance of the brightest block.
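The statistic values of step 150 could be collected as in the following sketch, which takes per-block averages (with unreliable blocks already excluded) and uses the luminance definition (r+2g+b)/4 given above; the bin count, value range, and function names are assumptions:

```python
import numpy as np

def frame_statistics(avg_r, avg_g, avg_b, bins=256, value_range=(0, 256)):
    """Collect step-150-style statistics from per-block averages.

    `avg_r/g/b` are 1-D arrays of block averages. Per-block luminance is
    (r + 2g + b) / 4 as in the embodiment.
    """
    lum = (avg_r + 2 * avg_g + avg_b) / 4
    hists = {c: np.histogram(v, bins=bins, range=value_range)[0]
             for c, v in (("r", avg_r), ("g", avg_g), ("b", avg_b), ("y", lum))}
    return {
        "hist": hists,                          # R, G, B and luminance histograms
        "mean": {"r": avg_r.mean(), "g": avg_g.mean(), "b": avg_b.mean()},
        "max_luminance": lum.max(),             # luminance of the brightest block
    }

stats = frame_statistics(np.array([100.0, 200.0]),
                         np.array([100.0, 100.0]),
                         np.array([100.0, 0.0]))
```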
- if a block contains a saturated pixel or is adjacent to a saturated block, the CPU 41 excludes its block information from the calculation of these statistic values.
- the reason is that the saturated pixel values are not reliable, because the information about the relations between different color components is lost.
- the pixels that are close to saturated pixels are not preferred either, because the pixel response is very non-linear close to saturation, and there is also the possibility of electrons bleeding into neighboring pixels and increased pixel cross-talk.
- the AWB gain analysis in this embodiment is not sensitive to the inclusion of saturated pixels. In other words, the AWB gain analysis in this embodiment is not affected by pixel saturation. This is one of the important advantages of this embodiment.
- the block information of the blocks containing a saturated pixel or the blocks adjacent to the saturated block may be used for calculating the above statistic values.
- This embodiment may be beneficial when the frame is divided into relatively large blocks, e.g. when the frame is divided into 12×32 blocks. In such an embodiment rejecting a block would lose a big area of the image.
- the blocks adjacent to the saturated block, or all blocks, may be utilized even though they may not be reliable enough, or the averages of the non-saturated pixels within the saturated blocks may also be utilized.
- in step 160 the CPU 41 sets, according to the instructions of the firmware 42, boundary conditions for determining intensity ranges of each block. For example, the CPU 41 and the firmware 42 calculate the 99.8%, 80%, 50% and 25% locations for the histograms of the red component, the green component, the blue component, and the luminance obtained in step 150.
- the xx% location in a histogram in this embodiment means that xx% of the pixels are darker than this value. After this step these values are used instead of the histograms obtained in step 150.
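Such an xx% location can be read from a cumulative histogram; a minimal sketch (function name assumed):

```python
import numpy as np

def histogram_location(hist, fraction):
    """Return the bin index below which `fraction` of the pixels fall.

    Implements the 'xx% location': the smallest bin whose cumulative
    count reaches fraction * total.
    """
    cum = np.cumsum(hist)
    return int(np.searchsorted(cum, fraction * cum[-1]))
```

For a flat histogram of four bins with 10 counts each, the 25% location is bin 0 and the 50% location is bin 1.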
- the CPU 41 and the firmware 42 also set boundary conditions for the R/G, B/G, min(R,B)/G and max(R,B)/G relations that are used to judge whether a block might be gray (white) or not.
- the R, G, and B represent average values of the red, green and blue component in the block to be judged.
- min(R,B)/G represents the value obtained by dividing the smaller of R and B by G, and
- max(R,B)/G represents the value obtained by dividing the larger of R and B by G.
- these boundary conditions may be fixed according to known RGB response to different color temperatures, and then modified according to the sensor characteristics (e.g. different color filter characteristics between different types of camera sensors cause variation in the required limits).
- these criteria are decided based on the statistic values obtained in step 150 .
- the values for the boundary conditions may also be used for detecting special-case lighting conditions of the image data in step 250, i.e. whether the lighting condition of the scene of the image data is such that it needs special adjustment to produce pleasing and accurate colors.
- lighting condition analysis may improve the performance of the white balancing, especially in case when there is little or no proper reference white available in the scene.
- the CPU 41 and the firmware 42 judge, for the respective blocks, whether the color of the block could correspond to a gray (white) object or not, and cumulate the RGB average values of the white/gray blocks.
- the cumulated values are used for deciding the digital gains for white balancing.
- the CPU 41 and the firmware 42 exclude the saturated blocks and the blocks next to the saturated blocks from those operations (step 180 ).
- the reason for excluding those blocks is, as explained above, that they are not reliable enough. This is another reason why the AWB gain analysis in this embodiment is not sensitive to the inclusion of saturated pixels, which is one of the important advantages of this embodiment.
- the blocks that are next to saturated blocks may not be rejected.
- even the saturated blocks may not be rejected, and non-saturated averages of the saturated blocks may be used in the following steps for some of the special case intensity ranges, at least if the block size is relatively large.
- the CPU 41 and the firmware 42 check whether a block belongs to one or more of the intensity ranges or not.
- Various types of intensity ranges can be defined.
- the intensity range C may be defined so as to cover the intensity ranges of both range A and range B. Therefore a block that belongs to range A also belongs to range C.
- a block can belong to several intensity ranges.
- for example, it may be possible to define a first intensity range and a second intensity range which includes the range of the first intensity range. It may also be possible to define a third intensity range which includes the ranges of both the first and the second intensity ranges. Or an intensity range whose range is very narrow, or which is sensitive to a special lighting condition, i.e. very bright, very dark, a blue sky, clouded, tungsten and so on, may be defined.
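Nested intensity ranges of this kind might be represented as simple luminance intervals; all bounds below are illustrative, not values from the patent:

```python
# Intensity ranges as (low, high) luminance bounds; range C spans A and B,
# so every block in A or B is also in C. Bounds are invented for illustration.
RANGES = {
    "A": (200, 256),   # bright blocks
    "B": (100, 200),   # mid-intensity blocks
    "C": (100, 256),   # covers both A and B
}

def ranges_of(luminance):
    """Return the names of every intensity range the block belongs to."""
    return [name for name, (lo, hi) in RANGES.items() if lo <= luminance < hi]
```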
- in step 210 the CPU 41 and the firmware 42 judge whether the block in process is potentially a part of a white (gray) object or not.
- whether a block could be gray (white) or not is determined by comparing the R, G and B average values of the block to each other, taking into account certain sensor-specific pre-gains that normalize the sensor response to e.g. a 5000 K color temperature.
- Relations between different color components that correspond to different black body radiator color temperatures can be used as a starting point for the limits, but additional variance needs to be allowed because the black body radiator does not represent accurately all possible light sources (e.g. fluorescent light sources have large variance in spectral response and some of them are not close to the black body radiator).
- the CPU 41 and the firmware 42 define the limits for the purpose of gray (white) judgment as explained in step 160 .
- the CPU 41 and the firmware 42 check whether R/G, B/G, min(R,B)/G and max(R,B)/G satisfy the criteria decided in step 160, and judge the blocks satisfying those criteria as gray or white blocks.
- the criteria for the gray (white) judgment can be different for each intensity range. Those criteria can also differ depending on the estimated lighting conditions. There are separate color limits for high-likelihood white and lower-likelihood white. A bigger weight in the cumulative values is given to high-likelihood blocks in comparison to lower-likelihood blocks.
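A sketch of this ratio-based judgment with one set of limits; the numeric bounds are invented for illustration, since the patent derives them from sensor characteristics and estimated lighting conditions:

```python
def gray_likelihood(r, g, b, limits):
    """Judge whether a block's averages could belong to a gray/white object.

    `limits` maps each relation to a (low, high) interval; a separate,
    tighter set of intervals for high-likelihood white could be checked the
    same way. All bounds below are illustrative, not the patent's values.
    """
    relations = {
        "r_g": r / g,
        "b_g": b / g,
        "min_g": min(r, b) / g,
        "max_g": max(r, b) / g,
    }
    return all(lo <= relations[k] <= hi for k, (lo, hi) in limits.items())

LIMITS = {"r_g": (0.7, 1.4), "b_g": (0.7, 1.4),
          "min_g": (0.6, 1.2), "max_g": (0.8, 1.5)}
```

A neutral block (R = G = B) passes, while a strongly red block (R/G = 2.0) is rejected.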
- in step 220 the CPU 41 and the firmware 42 cumulate the respective red, green, and blue average values of the blocks judged as white or gray in the intensity range in process.
- the gray judgment is performed for each intensity range.
- the red, green and blue averages are accumulated into each variable if the block values are within the limits set in step 160 .
- (the block values mean R/G, B/G, min(R,B)/G and max(R,B)/G; see the description of step 160.)
- in step 240 a set of cumulated values of the R, G, and B components is created for each intensity range. These cumulative values are then temporarily stored in the RAM 43, and will be used later to calculate the white balance gains.
- the CPU 41 and the firmware 42 detect special cases that need additional color balance tuning in order to produce accurate and pleasing colors. For example, bright blue sky scenes can be detected, and the known blackbody radiator color temperature of a bright blue sky can be used together with the grey blocks to determine the final color temperature. The values for determining boundary conditions obtained in step 160 will be used for identifying such special color cases. The total exposure, which consists of exposure time and analogue gain (and digital gain if it has been applied to the raw data, and also aperture and focal length if those can be adjusted in the system), may also be utilized for identifying the special color cases. The result of the special case detection will be considered in the digital gain calculation in step 310.
- An example of special case may be a blue sky.
- An image scene that is captured in a bright daylight lighting condition and consists of bright blue sky and possibly some faint cloud haze can contain a lot of false reference white. Therefore detecting such a case and considering it in the digital gain calculation will improve the AWB performance.
- Other examples of special cases are images of candles and fireplaces. In those cases more pleasing (maybe not more accurate, but still more pleasing) colors can be obtained by detecting them and modifying the AWB gains.
- in step 260 the CPU 41 and the firmware 42 calculate a goodness score for the grey-world algorithm, so that the grey-world algorithm can also be included in the final result if there seems to be very little reference white available and the goodness score for the grey-world algorithm is high.
- the correction by the grey-world algorithm will be performed in step 320 .
- in step 290 the CPU 41 and the firmware 42 mix the cumulated values obtained from the different intensity ranges in steps 170-240 for the respective color elements. So a set of R, G, and B mixed values is created in this step. In this embodiment, only a part of the intensity ranges is used for this mixing. The set of intensity ranges used for the mixing is selected in step 280. The mixed values will be used to decide the white balance gains. For the mixing, the CPU 41 and the firmware 42 may be arranged to weight a cumulated value according to how many possible white blocks have been found in its intensity range.
- a cumulated value whose intensity range contains a lot of gray (white) blocks will be given a bigger weight in the mixing.
- the CPU 41 and the firmware 42 may be arranged to give bigger weights to the cumulated values of specific intensity ranges. For example, as the brighter ranges are considered more reliable, the cumulated value of the brightest intensity range will be given the biggest weight in the mixing. The weighting may be done in advance in step 220. If a very small number of grey blocks is found in the primary intensity range, then additional intensity ranges can be added to the cumulative values.
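The weighted mixing across intensity ranges might be sketched as follows; the data layout and the weighting scheme shown are assumptions:

```python
def mix_cumulated_values(cumulated, weights):
    """Mix per-intensity-range cumulated (R, G, B) sums into one triple.

    `cumulated` maps a range name to its (sum_r, sum_g, sum_b) cumulated
    values; `weights` gives each range's weight (e.g. larger for ranges
    with more gray blocks, or for brighter ranges). Both the naming and
    the weighting scheme are illustrative.
    """
    mixed = [0.0, 0.0, 0.0]
    for name, (sr, sg, sb) in cumulated.items():
        w = weights.get(name, 0.0)   # ranges not selected get weight 0
        mixed[0] += w * sr
        mixed[1] += w * sg
        mixed[2] += w * sb
    return tuple(mixed)

mixed = mix_cumulated_values({"bright": (10, 10, 10), "mid": (2, 4, 2)},
                             {"bright": 1.0, "mid": 0.5})
```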
- in step 300 the CPU 41 and the firmware 42 decide the first candidate white balance gains based on the set of mixed values.
- Gain values may be decided such that they equalize the R, G, and B mixed values.
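Equalizing the mixed values, with green kept as the reference channel (a common convention, assumed here rather than stated in the text), could look like:

```python
def white_balance_gains(mixed_r, mixed_g, mixed_b):
    """First candidate WB gains: equalize the mixed R, G, B values.

    Green is kept as the reference (gain 1.0), an assumed convention.
    """
    return mixed_g / mixed_r, 1.0, mixed_g / mixed_b

gains = white_balance_gains(200.0, 100.0, 50.0)
```

Applying these gains to the mixed values makes all three components equal to the green value.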
- the mixed values obtained in step 290 may be corrected based on the color tone estimation performed in step 250 to improve the performance of the white balancing, especially when there is little or no proper reference white available in the scene. For example, a blue sky can be problematic if there are no clearly white clouds but some haze, because the haze could be mistaken for white. This correction can improve this kind of situation.
- the correction may be done by mixing the cumulated values of a certain intensity range into the mixed values obtained in step 290. The candidate white balance gains will then be updated based on the new mixed values. The correction in this step may not be performed if it is not necessary.
- the candidate white balance gains obtained in step 300 or 310 may be corrected by the gray-world algorithm based on the goodness calculation in step 260 .
- the goodness score calculated in step 260 may be used to determine how big a weight in the final gains is given to the results obtained with the grey-world algorithm. If the goodness score for grey world is very high, then e.g. an 80% weight is given to the grey-world part and a 20% weight to the gain values obtained in the previous steps. Similarly, if the goodness score for grey world is not very high, then a 0% weight is given to grey world and 100% to the results obtained in the previous steps.
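That weighting might be sketched as follows; the 0-to-1 goodness scale and the threshold are assumptions, while the 80%/0% weights follow the example above:

```python
def blend_with_grey_world(candidate, grey_world, goodness, threshold=0.9):
    """Blend candidate WB gains with grey-world gains by goodness score.

    If `goodness` (assumed 0..1) exceeds `threshold` (assumed value), the
    grey-world result gets an 80% weight as in the text's example;
    otherwise it is ignored (0% weight).
    """
    w = 0.8 if goodness > threshold else 0.0
    return tuple(w * gw + (1.0 - w) * c for c, gw in zip(candidate, grey_world))

high = blend_with_grey_world((1.0, 1.0, 1.0), (2.0, 1.0, 0.5), 0.95)
low = blend_with_grey_world((1.0, 1.0, 1.0), (2.0, 1.0, 0.5), 0.5)
```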
- the correction may be done for the mixing values obtained in step 290 .
- the special cases, like a bright blue sky or no white reference, may be detected first, and then the mixing is done as in step 310.
- the correction in this step may not be performed if it is not necessary.
- the candidate digital gains or mixed values obtained in the previous steps may be adapted according to the total exposure (exposure time × analogue gain) in order to give good overall image contrast in bright scenes while still maintaining low noise and good detail preservation in bright image areas in dark scenes. Aperture, ND-filter, focal length and additional digital pre-gaining may also be considered if those parameters are variable.
- the final gains may consist of a combination of the color balance gains and an overall digital gain, i.e. the gains correct both the colors and the luminance level.
- the correction in this step may not be performed if it is not necessary or if this step is not implemented in the firmware 42 .
- in step 340 the final set of digital gains is decided. However, the final gains are still checked for reliability. For example, there may be a possibility of an error if some color component seems to dominate over the other color components. In such a case the firmware 42 instructs the CPU 41 to go back to step 270, select a different set of intensity ranges in step 280, and re-do the following steps. If the final gains are reliable, the gain calculations are finished (step 360). According to the instructions of the firmware 42, the CPU 41 sets the digital gains of the WB amplifier 15 to the calculated values.
- the preferred embodiment has been tested by the assignee of the present invention. In this test, 248 images captured in a multitude of lighting conditions were used. The results have shown that it can improve the AWB accuracy quite a lot.
- the WB amplifier may apply the digital gains to the frame of image data used for the gain calculation, or may apply them only to the image data of the next frames, i.e. the image data taken by the next shooting.
- the camera device 1 may comprise a frame memory which can store the whole of the image data comprised in a complete frame. In this embodiment the same frame which is used for the white balance calculation can be corrected by the calculated gains. But in another embodiment the camera device 1 may not comprise a memory which can store a whole frame, but only a line buffer which can store a part of a frame. In that embodiment, the AWB gains need to be applied to part of the frame data before the statistics for the whole image data of the frame are available for AWB analysis.
- the result of AWB analysis from the previous frame can be used instead.
- i.e. the AWB analysis that is calculated on the basis of frame n is used for frame n+1.
- the WB gains may be applied to the lines in the beginning of the image before the whole image is available to the AWB analyzer 17.
- this invention can be applied not only to Bayer raw data but also to different types of image data, such as Foveon-type data.
- the calculation of the averages and the maximum luminance value in step 150 may also be done in a later step from the R, G, B and luminance histograms.
Abstract
According to one aspect of the present invention, there is provided a digital camera system capable of taking a frame of image data comprising a plurality of color elements, the system being arranged: to divide the frame image data into a plurality of blocks including a plurality of pixel data; to calculate a predetermined value for each color element and for each of all or a part of the plurality of blocks; to judge, for each of all or a part of the plurality of blocks, whether the block is likely a part of a grey object or not; to cumulate the predetermined values of the blocks judged as being likely a part of a gray object for the respective color elements; and to decide a first set of digital gains for adjusting white balance based on the cumulated values.
Description
- The invention relates to the image processing algorithms that are used in digital cameras. More accurately, the invention relates to Automatic White Balance (AWB) algorithms.
- The human visual system compensates for different lighting conditions and color temperatures so that white objects are perceived as white in most situations. AWB algorithms that are used in digital cameras try to do the same for raw images that are captured by digital camera sensors. That is, AWB adjusts the gains of different color components (e.g. R, G and B) with respect to each other in order to make white objects white, despite the color temperature differences of the image scenes or different sensitivities of the color components.
- One of the existing methods of the AWB is to calculate averages of each color component, and then apply such gains for each color component so that these averages become equal. These types of methods are often called “grey world” AWB algorithms.
- The purpose of this invention is to provide sophisticated AWB mechanisms.
- According to one aspect of the present invention, there is provided a digital camera system being capable of taking a frame of image data comprising a plurality of color elements, the system being arranged:
-
- to divide the frame image data into a plurality of blocks including a plurality of pixel data;
- to calculate a predetermined value for each color element and for each of all or a part of the plurality of blocks;
- to judge, for each of all or a part of the plurality of blocks, whether the block being likely a part of a grey (white) object or not;
- to cumulate the predetermined values of the blocks judged as being likely a part of a gray (white) object for the respective color elements, and;
- to decide a first set of digital gains for adjusting white balance based on the cumulated values.
- According to another aspect of the present invention, there is provided an electric circuit for processing a frame of image data comprising a plurality of color elements, the said circuit comprising:
-
- a block extractor extracting a block of the image data, the block including a plurality of pixel data;
- an average calculator calculating average values of each color component of the extracted block;
- a judging unit judging whether the block being potentially white or not;
- a cumulator cumulating the average values of a plurality of the extracted blocks whose color being judged as potentially white for the respective color elements, and;
- a decision unit deciding a set of digital gains for adjusting white balance based on the cumulated values.
- Further features and advantages of the aspects of the present invention will be described by using an exemplary embodiment. Please note that the present invention includes any advantageous features or combinations thereof described in this specification, accompanying claims and accompanying drawings.
- Embodiments of the present invention will now be described by way of example only and with reference to accompanying drawings in which:
-
FIG. 1 is a schematic block diagram illustrating main components of a camera device according to a preferred embodiment of the present invention. -
FIG. 2 is a schematic block diagram illustrating main components of theAWB analyzer 17 according to the preferred embodiment. -
FIG. 3 is a flow chart for explaining the image processing of theAWB analyzer 17 according to the preferred embodiment. -
FIG. 1 is a schematic block diagram illustrating main components of a camera device according to a preferred embodiment of the present invention. Thecamera device 1 comprises asensor module 11, apre-processing circuit 13, a white balance (WB)amplifier 15, an auto white balance (AWB)analyzer 17, apost-processing circuit 19, adisplay 21, and amemory 23. Thesensor module 11 can be used for generate images for viewfinder and take still pictures or videos. Thesensor module 11 comprises a lens, an image sensor with a RGB Bayer color filter, an analog amplifier, an A/D converter, and convents an incident light into a digital signal. This digital signal can be called as a Raw data, and has a RGB Bayer format i.e., each 2x2 pixel block comprises two Green data, one Red data, and one Blue data. In another embodiment the color filter may be a CMY Bayer filter. In the other embodiments, thesensor module 11 may not use a Bayer color filter, but comprises a Foveon-type sensor i.e. the sensor which records image signals of different wavelengths in different depths within the silicon). Thesensor module 11 may also comprises actuators for auto-focus and zoom and aperture control and ND-filter control. - The
pre-processing circuit 13 may perform e.g. noise reduction, pixel linearization, and shading compensation. The WB amplifier 15 adjusts the gains of the different color components (i.e. R, G and B) with respect to each other in order to make white objects white. The amount of gain is decided by the AWB analyzer 17. The AWB analyzer 17 analyzes the raw data to calculate the amount of gain, and sets the digital gains of the WB amplifier 15 according to the result of the calculation. In addition to color balance, the AWB analyzer 17 may also include an overall digital gain component in the WB gains to increase image brightness and contrast by adaptively stretching the histograms towards the bright end. The AWB analyzer 17 may also calculate R, G and B offsets to increase the image contrast by adaptively stretching the histograms towards the dark end. Those offsets should be taken into account in the white balance gains, and they can be applied to the data prior to or after WB gaining. The AWB analyzer 17 may also calculate the amount and shape of a non-linear correction, which may be applied to the image data at the post-processing circuit 19. - The
post-processing circuit 19 performs CFA (Color Filter Array) interpolation, color space conversion, gamma correction, RGB to YUV conversion, image sharpening, and so on. The post-processing circuit 19 may comprise hardware pipelines for performing some or all of this processing, or may comprise a processor performing it by software. The image data processed by the post-processing circuit 19 may be displayed on the display 21 or stored in the memory 23. Preferably the memory 23 is a removable storage means such as an MMC card or an SD card.
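By way of illustration only, the dataflow described above can be sketched as follows. The stage internals below, including the toy grey-world gain estimate, are assumptions for exposition, not the disclosed implementation:

```python
# Illustrative sketch of the FIG. 1 dataflow: sensor raw data passes through
# pre-processing, the AWB analyzer decides per-channel gains, and the WB
# amplifier applies them. Stage internals here (e.g. the toy grey-world
# estimate) are assumptions for exposition, not the disclosed implementation.

def pre_process(raw):
    # stand-in for noise reduction, pixel linearization, shading compensation
    return raw

def awb_analyze(raw):
    # toy gain decision: equalize the mean of each channel against green
    r = sum(p[0] for p in raw) / len(raw)
    g = sum(p[1] for p in raw) / len(raw)
    b = sum(p[2] for p in raw) / len(raw)
    return (g / r, 1.0, g / b)

def wb_amplify(raw, gains):
    # WB amplifier 15: scale each color component by its digital gain
    gr, gg, gb = gains
    return [(p[0] * gr, p[1] * gg, p[2] * gb) for p in raw]

def camera_pipeline(raw):
    pre = pre_process(raw)
    gains = awb_analyze(pre)  # AWB analyzer 17 sets the gains of amplifier 15
    return wb_amplify(pre, gains)
```

Applied to a uniformly tinted frame, this sketch returns equal R, G and B values, i.e. the tint is neutralized.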
FIG. 2 is a schematic block diagram illustrating main components of the AWB analyzer 17 according to the preferred embodiment of the present invention. The AWB analyzer 17 comprises a block extractor 31, a saturation checker 33 and average calculators for the Red (R) component 35, the Green (G) component 37, and the Blue (B) component 39. The AWB analyzer 17 also comprises a bus 39, a CPU 41, firmware 42, and a RAM 43. - The
block extractor 31 divides a frame of the raw data into small blocks, and provides the raw data to the saturation checker 33 block by block. A frame in this embodiment means the total area of a still picture or of a single picture in a series of video data. Each block contains a plurality of red, green, and blue pixel data. As an example, when the resolution of the sensor module 11 is 1.3 mega pixels, the block extractor 31 is preferably arranged to divide a frame into 72×54 blocks. Of course, any manner of division is possible as long as it meets the requirements of speed, quality, price and so on. - The
saturation checker 33 is connected to the bus 39, and checks whether a block contains a saturated pixel or not. The saturation checker 33 also checks whether the block in process is adjacent to another saturated block or not. This information, i.e. whether the block contains no saturated pixels, contains a saturated pixel, or is adjacent to a saturated block, is sent to the CPU 41. The information may also contain the number of saturated pixels within a block. The number of saturated pixels is useful in determining how reliable the saturated block is, if saturated blocks need to be used in step 150 or step 180 due to large block sizes etc. - The
average calculators 35, 37 and 39 are connected to the bus 39, and calculate the average values of the Red components, Green components, and Blue components of the block in process respectively. If the block in process contains saturated pixels, the average calculators calculate the average values of only the non-saturated pixels. The calculated average values are provided to the CPU 41 through the bus 39. The block extractor 31, the saturation checker 33 and the average calculators - The
CPU 41 stores the saturation information and the respective R, G, B average values for each block in the RAM 43 according to the instructions of the firmware 42. The firmware 42 is software which instructs the CPU 41 to perform the necessary data processing. The CPU 41 performs further calculations to decide the amount of the digital gains based on this information according to the instructions of the firmware 42. - Referring to
FIG. 3, the image processing of the AWB analyzer 17 will be described below in detail. Step 110 shows the start of the analysis. In step 120, a frame of image data to be analyzed is provided from the pre-processing circuit 13. As explained above, the image data to be analyzed in this embodiment is raw data in RGB Bayer format. Please note that this invention can be applied to other data formats, e.g. a CMY Bayer format, a data format containing the same number of color elements for all the color components, or a data format in which each pixel contains all used color components. - In step 130, the
block extractor 31 divides a frame of the raw data into small blocks, and provides the raw data to the saturation checker 33 block by block. As explained above, the frame in this embodiment means the total area of a still picture or of a single picture in a series of video data, and each block contains a plurality of red, green, and blue pixel data. - In step 140, the
saturation checker 33 checks, for each block, whether the block contains no saturated pixels, contains a saturated pixel, or is adjacent to a saturated block. And the average calculators 35, 37 and 39 calculate the average values of the red, green and blue components of the block. - As the result of the step 140 processing, the
AWB analyzer 17 will obtain a set of block information. The number of pieces of block information is the same as the number of blocks, and each piece of block information contains the saturation information and the average values of the R, G, and B components of the block. This set of block information is stored in the RAM 43. - In
step 150, the CPU 41 scans, according to the instructions of the firmware 42, all of the block information, and calculates a set of statistic values from the block information. These statistic values include histograms of the red component, the green component, the blue component, and the luminance of the whole of the raw data in process. The luminance in this embodiment is defined by (r+2g+b)/4, where r, g, and b represent the average values of the red, green and blue components of a block. Those statistic values also include the average values of the red, green and blue components of the raw data in process, and the maximum luminance value. The maximum luminance value is the luminance of the brightest block.
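By way of illustration only, the per-block processing of steps 130-150 can be sketched as follows. The pixel representation, the saturation threshold and the helper names are assumptions, while the luminance definition (r+2g+b)/4 and the "xx% location" used in step 160 follow the description in the text:

```python
# Illustrative sketch of the block statistics of steps 130-150. Pixels are
# modelled as (r, g, b) tuples and the saturation threshold is an assumed
# parameter; the luminance formula (r + 2g + b) / 4 follows the text.

def block_stats(block, sat_level=255):
    """Return (saturated_pixel_count, (r_avg, g_avg, b_avg)) for one block,
    averaging over non-saturated pixels only, as in step 140."""
    ok = [p for p in block if max(p) < sat_level]
    if not ok:
        return len(block), None
    avg = tuple(sum(p[c] for p in ok) / len(ok) for c in range(3))
    return len(block) - len(ok), avg

def block_luminance(avg):
    """Luminance of a block as defined in step 150: (r + 2g + b) / 4."""
    r, g, b = avg
    return (r + 2 * g + b) / 4

def percentile_location(values, frac):
    """The xx% location of step 160: the value below which the fraction
    `frac` of the samples lies."""
    ordered = sorted(values)
    return ordered[min(int(frac * len(ordered)), len(ordered) - 1)]
```

A block containing one saturated pixel still contributes the averages of its remaining pixels, which is the behavior described for the average calculators.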
- If block information of a block shows that the block contains a saturated pixel, or the block is next to a saturated block, the
CPU 41 excludes this block information from the calculation of these statistic values. The reason is that saturated pixel values are not reliable, because the information about the relations between the different color components is lost. Pixels that are close to saturated pixels are not preferred either, because the pixel response is very non-linear close to saturation, and because of the possibility of electrons bleeding into neighboring pixels and of increased pixel cross-talk. As the CPU 41 excludes the blocks that contain saturated pixels and the blocks adjacent to such blocks, the AWB gain analysis in this embodiment is not sensitive to the inclusion of saturated pixels. In other words, the AWB gain analysis in this embodiment is not affected by pixel saturation. This is one of the important advantages of this embodiment. -
- In step 160, the
CPU 41 sets, according to the instructions of the firmware 42, boundary conditions for determining the intensity ranges of each block. For example, the CPU 41 and the firmware 42 calculate the 99.8%, 80%, 50% and 25% locations of the histograms of the red component, the green component, the blue component, and the luminance obtained in step 150. The xx% location in a histogram in this embodiment means that xx% of the pixels are darker than this value. After this step these values are used instead of the histograms obtained in step 150. - In step 160, the
CPU 41 and the firmware 42 also set boundary conditions for the R/G, B/G, min(R,B)/G and max(R,B)/G relations that are used to judge whether a block might be gray (white) or not. Here R, G, and B represent the average values of the red, green and blue components in the block to be judged. Min(R,B)/G represents the value obtained by dividing the smaller one of R and B by G, and max(R,B)/G represents the value obtained by dividing the larger one of R and B by G. In one embodiment, these boundary conditions may be fixed according to the known RGB response to different color temperatures, and then modified according to the sensor characteristics (e.g. different color filter characteristics between different types of camera sensors cause variation in the required limits). In another embodiment, these criteria (boundary conditions) are decided based on the statistic values obtained in step 150.
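By way of illustration only, the ratio tests described above can be sketched as follows. The numeric limits are placeholders, since the actual boundary conditions are derived from sensor characteristics or from the statistic values of step 150:

```python
# Illustrative sketch of the gray/white judgment of steps 160 and 210,
# using the R/G, B/G, min(R,B)/G and max(R,B)/G relations from the text.
# The numeric (lo, hi) bounds below are placeholder assumptions only.

def is_possibly_gray(avg, limits):
    """Judge whether a block's channel averages could belong to a gray
    (white) object, per the ratio relations described in the text."""
    r, g, b = avg
    if g <= 0:
        return False
    ratios = {
        "r_g": r / g,
        "b_g": b / g,
        "min_g": min(r, b) / g,
        "max_g": max(r, b) / g,
    }
    return all(lo <= ratios[k] <= hi for k, (lo, hi) in limits.items())

# illustrative limits, loosely allowing daylight-like color balance
EXAMPLE_LIMITS = {
    "r_g": (0.5, 1.6),
    "b_g": (0.4, 1.5),
    "min_g": (0.4, 1.2),
    "max_g": (0.6, 1.6),
}
```

A neutral block passes all four tests, while a strongly red-dominated block fails the R/G test and is rejected as a gray candidate.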
- From the step 170 to step 240, the
CPU 41 and the firmware 42 judge, for the respective blocks, whether the color of the block could correspond to a gray (white) object or not, and cumulate the RGB average values of the white/gray blocks. The cumulated values are used for deciding the digital gains for white balancing. - However, before the judgment and the cumulation, the
CPU 41 and the firmware 42 exclude the saturated blocks and the blocks next to the saturated blocks from those operations (step 180). The reason for excluding those blocks is, as explained above, that they are not reliable enough. This is another reason why the AWB gain analysis in this embodiment is not sensitive to the inclusion of saturated pixels, which is one of the important advantages of this embodiment. In one embodiment, if the block size is really big, the blocks that are next to saturated blocks may not be rejected. In another embodiment, even the saturated blocks may not be rejected, and the non-saturated averages of the saturated blocks may be used in the following steps for some of the special-case intensity ranges, at least if the block size is relatively large. - In step 200, the
CPU 41 and the firmware 42 check whether a block belongs to one or more of the intensity ranges or not. Various types of intensity ranges can be defined. For example, an intensity range C may be defined so as to cover the intensity ranges of a range A and a range B. A block that belongs to the range A therefore also belongs to the range C. In general, a block can belong to several intensity ranges.
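By way of illustration only, the overlapping range membership described above can be sketched as follows; the range names and bounds are illustrative assumptions:

```python
# Illustrative sketch of intensity-range membership (step 200). Ranges may
# nest or overlap (range C covers both A and B), so one block can belong
# to several ranges at once. The example bounds are assumptions.

def ranges_of(luminance, intensity_ranges):
    """Return the names of all intensity ranges a block belongs to."""
    return [name for name, (lo, hi) in intensity_ranges.items()
            if lo <= luminance <= hi]

EXAMPLE_RANGES = {
    "A": (0, 100),    # darker range
    "B": (100, 200),  # brighter range
    "C": (0, 200),    # covers both A and B, as in the text's example
}
```

A block with luminance 50 belongs to both A and C, mirroring the statement that a block in range A also belongs to range C.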
- In step 210, the
CPU 41 and the firmware 42 judge whether the block in process is potentially a part of a white (gray) object or not. -
- In the preferred embodiment, the
CPU 41 and the firmware 42 define the limits for the purpose of the gray (white) judgment as explained in step 160. In step 210 the CPU 41 and the firmware 42 check whether R/G, B/G, min(R,B)/G and max(R,B)/G satisfy the criteria decided in step 160, and judge the blocks satisfying those criteria to be gray or white blocks. The criteria for the gray (white) judgment can be different for each intensity range. Those criteria can also differ depending on the estimated lighting conditions. There are separate color limits for high-likelihood white and lower-likelihood white, and a bigger weight in the cumulative values is given to high-likelihood blocks than to lower-likelihood blocks. - In step 220, the
CPU 41 and the firmware 42 cumulate the respective red, green, and blue average values of the blocks judged as white or gray in the intensity range in process. As shown by the loop from step 190 to step 230, the gray judgment is performed for each intensity range, and in each intensity range the red, green and blue averages are accumulated into respective variables if the block values are within the limits set in step 160. (The block values mean R/G, B/G, min(R,B)/G and max(R,B)/G; see the description of step 160.) - When scanning all the blocks is finished, in step 240, a set of cumulated values of the R, G, and B components will have been created for each intensity range. These cumulative values are then temporarily stored in the
RAM 43, and will be used to calculate the white balance gains later. - In step 250, the
CPU 41 and the firmware 42 detect special cases that need additional color balance tuning in order to produce accurate and pleasing colors. For example, bright blue sky scenes can be detected, and the known blackbody-radiator color temperature of a bright blue sky can be used together with the grey blocks to determine the final color temperature. The values for determining the boundary conditions obtained in step 160 will be used for identifying such special color cases. The total exposure, which consists of the exposure time and the analogue gain (and the digital gain if it has been applied to the raw data, and also the aperture and focal length if those can be adjusted in the system), may also be utilized for identifying the special color cases. The result of the special case detection will be considered in the digital gain calculation in step 310. -
- In step 260, the
CPU 41 and the firmware 42 calculate a goodness score for the grey-world algorithm, so that the grey-world algorithm can also be included in the final result if there seems to be very little reference white available and the goodness score for the grey-world algorithm is high. The correction by the grey-world algorithm will be performed in step 320. - The meaning of the loop starting from step 270 will be explained later. In step 290, the
CPU 41 and the firmware 42 mix the cumulated values obtained from the different intensity ranges in steps 170-240 for the respective color elements. A set of R, G, and B mixed values is thus created in this step. In this embodiment, only a part of the intensity ranges will be used for this mixing; a set of intensity ranges used for the mixing is selected in step 280. The mixed values will be used to decide the white balance gains. For the mixing, the CPU 41 and the firmware 42 may be arranged to give a weight to a cumulated value according to how many possible white blocks have been found in its intensity range. For example, a cumulated value whose intensity range contains a lot of gray (white) blocks will be given a bigger weight in the mixing. Alternatively, the CPU 41 and the firmware 42 may be arranged to give bigger weights to the cumulated values of specific intensity ranges. For example, as the brighter ranges are considered to be more reliable, the cumulated value of the brightest intensity range will be given the biggest weight in the mixing. The weighting may be done in advance in step 220. If only a very small number of grey blocks is found in the primary intensity range, then additional intensity ranges can be added to the cumulative values. - In step 300, the
CPU 41 and the firmware 42 decide the first candidate for the white balance gains based on the set of mixed values. The gain values may be decided such that they equalize the R, G, and B mixed values.
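By way of illustration only, the mixing of step 290 and the gain decision of step 300 can be sketched as follows. The count-based weighting is one of the options described above, and the per-range data layout is an assumption:

```python
# Illustrative sketch of steps 290-300: per-range cumulated sums are mixed
# with a weight proportional to the number of possible white blocks found
# in each range, and the first candidate gains equalize the mixed R, G and
# B values. The (r_sum, g_sum, b_sum, white_block_count) layout is assumed.

def mix_cumulated(per_range):
    """Mix cumulated values from the selected intensity ranges (step 290)."""
    mr = mg = mb = 0.0
    for r_sum, g_sum, b_sum, count in per_range:
        mr += r_sum * count
        mg += g_sum * count
        mb += b_sum * count
    return mr, mg, mb

def gains_from_mix(mix):
    """First candidate WB gains (step 300): equalize R and B to green."""
    mr, mg, mb = mix
    return (mg / mr, 1.0, mg / mb)
```

A range that contributed no possible white blocks receives zero weight and thus does not influence the candidate gains.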
- In step 320, the candidate white balance gains obtained in step 300 or 310 may be corrected by the gray-world algorithm based on the goodness calculation in step 260. The goodness score calculated in step 260 may be used to determine how big weight in the final gains is given for the results that are obtained with the grey world algorithm. If the goodness score for grey world is very high, then e.g. 80% weight is given to grey world part and 20% weight to the gain values obtained by the previous steps. Similarly, if the goodness score for grey world is not very high, then 0% weight is given to grey world and 100% to the results obtained by the previous steps.
- To take account the grey world algorithm into account, the correction may be done for the mixing values obtained in step 290. As in step 310, the special cases like the bright blue sky or no white reference may be detected at first, then the mixing is done like as in step 310. The correction in this step may not be performed if it is not necessary.
- In step 330, the candidate digital gains or mixed values obtained in previous steps may be adapted according to total exposure (exposure time * analogue gain) in order to give good overall image contrast in bright scenes while still maintaining low noise and good detail preservation in bright image areas in dark scenes. Also aperture, ND-filter, focal length and additional digital pre-gaining may also be considered if those parameters are variable.
- The final gains may consist of combination of color balance gains and overall digital gain, i.e. colors are corrected with the gains and also luminance level. The correction in this step may not be performed if it is not necessary or if this step is not implemented in the
firmware 42. - In step 340, the final set of digital gains is decided. However, the final gains are still checked for reliability. For example, there may be a possibility of an error if some color component seems to dominate over the other color components. In such a case the
firmware 42 instructs the CPU 41 to go back to step 270, to select a different set of intensity ranges in step 280, and to re-do the following steps. If the final gains are reliable, then the gain calculations are finished (step 360). According to the instructions of the firmware 42, the CPU 41 sets the digital gains of the WB amplifier 15 to the calculated values. - The preferred embodiment has been tested by the assignee of the present invention. In this test, 248 images that had been captured in a multitude of lighting conditions were used, and the result has shown that the embodiment can improve the AWB accuracy quite a lot. -
After step 360, the WB amplifier may apply the digital gains to the frame of image data used for the gain calculation, or may apply them only to the image data of the next frames, i.e. the image data taken by the next shooting. In one embodiment, the
camera device 1 may comprise a frame memory which can store the whole of the image data comprised in a complete frame. In this embodiment the same frame that is used for the white balance calculation can be corrected by the calculated gains. In another embodiment, however, the camera device 1 may not comprise a memory which can store a whole frame, but only a line buffer which can store a part of a frame. In this embodiment, the AWB gaining needs to be applied to part of the frame data before the statistics of the whole image data of the frame are available for AWB analysis. Thus the result of the AWB analysis of the previous frame (e.g. a viewfinder frame) can be used instead. In other words, the AWB analysis that is calculated on the basis of frame n is used for frame n+1. In these memory-limited systems the WB gains may be applied to the lines in the beginning of the image before the whole image is available to the AWB analyzer 17. - Please note that various modifications may be made without departing from the scope of the present invention. For example this invention can be applied not only to Bayer raw data but also to different types of image data, such as Foveon-type data. The calculation of the averages and the maximum luminance value in
step 150 may also be done in a later step from the R, G, B and luminance histograms. - Whilst endeavouring in the foregoing specification to draw attention to those features of the invention believed to be of particular importance, it should be understood that the applicant claims protection in respect of any patentable feature or combination of features hereinbefore referred to and/or shown in the drawings, whether or not particular emphasis has been placed thereon.
Claims (23)
1. A digital camera system being capable of taking a frame of image data comprising a plurality of color elements, the system being arranged:
to divide the frame image data into a plurality of blocks including a plurality of pixel data;
to calculate a predetermined value for each color element and for each of all or a part of the plurality of blocks;
to judge, for each of all or a part of the plurality of blocks, whether the block is likely a part of a grey object or not;
to cumulate the predetermined values of the blocks judged as being likely a part of a gray object for the respective color elements, and;
to decide a first set of digital gains for adjusting white balance based on the cumulated values.
2. A digital camera system according to claim 1 , wherein the predetermined value is an average value of the color component in the block.
3. A digital camera system according to claim 1 , wherein the predetermined value is used for said judgment.
4. A digital camera system according to claim 1 , wherein the judgment is performed according to criteria decided based on one or more of histograms of the color elements, average values of the color elements, and a histogram of luminance of the image data.
5. A digital camera system according to claim 4 , wherein data of the block containing a saturated pixel is not involved in deciding the criteria.
6. A digital camera system according to claim 4 , wherein data of the block adjacent to the block containing a saturated pixel is not involved in deciding the criteria.
7. A digital camera system according to claim 4 , wherein the plurality of color elements comprise data representing red color, green color, and blue color, and the decision is contingent on one or more of B/G, R/G, min(R,B)/G, and max(R,B)/G satisfying the criteria
Where:
R represents an average value of the red data in the block to be judged.
G represents an average value of the green data in the block to be judged.
B represents an average value of the blue data in the block to be judged.
min(R,B)/G represents a value dividing a smaller one of the R and the B by the G.
max(R,B)/G represents a value dividing a larger one of the R and the B by the G.
8. A digital camera system according to claim 1 , wherein the block containing a saturated pixel is not involved in the cumulation.
9. A digital camera system according to claim 1 , wherein the block adjacent to the block containing a saturated pixel is not involved in the cumulation.
10. A digital camera system according to claim 1, wherein the predetermined values of a block whose color is judged to be closer to white or gray than that of the other blocks are given a greater weight in the cumulation.
11. A digital camera system according to claim 1 , wherein the system being arranged:
to define a plurality of intensity ranges;
to perform the gray judgment in the each intensity range; and
to perform the cumulation for the each intensity range.
12. A digital camera system according to claim 11 , wherein the intensity ranges are separated based on histograms of the color elements, and/or a histogram of luminance of the image data.
13. A digital camera system according to claim 11 , wherein the system being arranged to decide the first set of digital gains based on the cumulated values obtained from only a part of the plurality of the intensity ranges.
14. A digital camera system according to claim 11 , wherein the system being arranged to mix the cumulated values obtained from the different intensity ranges for the respective color elements; and to decide the first set of digital gains based on the mixed values.
15. A digital camera system according to claim 14 , wherein the system being arranged to perform the mixing by taking account of numbers of blocks whose color is judged as close to white or gray in each intensity range.
16. A digital camera system according to claim 14 , wherein the system being arranged to perform the mixing so that the cumulative value of the intensity range having higher luminance has a bigger influence on the mixed value.
17. A digital camera system according to claim 1 , wherein the system estimating a color tone of the image data and correcting the first set of digital gains based on the estimation.
18. A digital camera system according to claim 1 , wherein the system being arranged to calculate a second set of digital gains for adjusting white balance of the image data based on the Gray-world algorithm, and; to mix the first set of digital gains with the second set of digital gains to decide a final set of digital gains for adjusting white balance of the image data.
19. A digital camera system according to claim 12, wherein the system being arranged to check a reliability of the final set of digital gains, and to calculate the first set of digital gains again by using the cumulated values of different intensity ranges.
20. An electric circuit for processing a frame of image data comprising a plurality of color elements, wherein the said circuit comprising:
a block extractor extracting a block of the image data, the block including a plurality of pixel data;
an average calculator calculating average values of each color component of the extracted block;
a judging unit judging whether the block is potentially white or not;
a cumulator cumulating the average values of a plurality of the extracted blocks whose color being judged as potentially white for the respective color elements, and;
a decision unit deciding a set of digital gains for adjusting white balance based on the cumulated values.
21. An electric circuit according to claim 20 , wherein the circuit comprising a plurality of intensity ranges, and wherein the judging unit is arranged to perform the judgment in the each intensity range, and the cumulator is arranged to perform the cumulation in the each intensity range.
22. An electric circuit according to claim 20 , wherein the circuit comprising a processor and a computer program, wherein the program instructs the processor to perform a part of functions of at least one of the block extractor, the judging unit, the cumulator, and the decision unit.
23. A camera device comprising an electric circuit according to claim 20.
Priority Applications (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/216,272 US20070047803A1 (en) | 2005-08-30 | 2005-08-30 | Image processing device with automatic white balance |
EP06795791.0A EP1920610B1 (en) | 2005-08-30 | 2006-08-28 | Image processing device with automatic white balance |
JP2008528626A JP4977707B2 (en) | 2005-08-30 | 2006-08-28 | Image processing apparatus with auto white balance |
US11/991,194 US8941755B2 (en) | 2005-08-30 | 2006-08-28 | Image processing device with automatic white balance |
CN200680037762.2A CN101283604B (en) | 2005-08-30 | 2006-08-28 | Image processing device with automatic white balance |
PCT/IB2006/052971 WO2007026303A1 (en) | 2005-08-30 | 2006-08-28 | Image processing device with automatic white balance |
JP2012037206A JP5377691B2 (en) | 2005-08-30 | 2012-02-23 | Image processing apparatus with auto white balance |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/216,272 US20070047803A1 (en) | 2005-08-30 | 2005-08-30 | Image processing device with automatic white balance |
Publications (1)
Publication Number | Publication Date |
---|---|
US20070047803A1 true US20070047803A1 (en) | 2007-03-01 |
Family
ID=37804148
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/216,272 Abandoned US20070047803A1 (en) | 2005-08-30 | 2005-08-30 | Image processing device with automatic white balance |
US11/991,194 Active US8941755B2 (en) | 2005-08-30 | 2006-08-28 | Image processing device with automatic white balance |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/991,194 Active US8941755B2 (en) | 2005-08-30 | 2006-08-28 | Image processing device with automatic white balance |
Country Status (5)
Country | Link |
---|---|
US (2) | US20070047803A1 (en) |
EP (1) | EP1920610B1 (en) |
JP (2) | JP4977707B2 (en) |
CN (1) | CN101283604B (en) |
WO (1) | WO2007026303A1 (en) |
Families Citing this family (50)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5134508B2 (en) * | 2008-11-19 | 2013-01-30 | 株式会社日立製作所 | Television equipment |
KR101516963B1 (en) | 2008-12-11 | 2015-05-04 | 삼성전자주식회사 | Apparatus and method for adjusting auto white balance using effective area |
KR101018237B1 (en) * | 2009-02-03 | 2011-03-03 | 삼성전기주식회사 | White balance adjusting apparatus and method considering effect of single tone image |
JP5399739B2 (en) * | 2009-02-25 | 2014-01-29 | ルネサスエレクトロニクス株式会社 | Image processing device |
JP5743384B2 (en) * | 2009-04-14 | 2015-07-01 | キヤノン株式会社 | Image processing apparatus, image processing method, and computer program |
JP5358344B2 (en) * | 2009-08-12 | 2013-12-04 | 三星テクウィン株式会社 | Imaging apparatus and imaging method |
KR20110043833A (en) * | 2009-10-22 | 2011-04-28 | 삼성전자주식회사 | Dynamic range extended mode of digital camera decision method using fuzzy rule and apparatus for performing the method |
JP5445363B2 (en) * | 2010-07-08 | 2014-03-19 | 株式会社リコー | Image processing apparatus, image processing method, and image processing program |
JP5761946B2 (en) | 2010-09-02 | 2015-08-12 | キヤノン株式会社 | Image processing apparatus, image processing method, and storage medium |
JP5652649B2 (en) * | 2010-10-07 | 2015-01-14 | 株式会社リコー | Image processing apparatus, image processing method, and image processing program |
US9106916B1 (en) | 2010-10-29 | 2015-08-11 | Qualcomm Technologies, Inc. | Saturation insensitive H.264 weighted prediction coefficients estimation |
US8737727B2 (en) * | 2010-12-30 | 2014-05-27 | Pelco, Inc. | Color similarity sorting for video forensics search |
CN102340673B (en) * | 2011-10-25 | 2014-07-02 | 杭州藏愚科技有限公司 | White balance method for video camera aiming at traffic scene |
JP2013142984A (en) * | 2012-01-10 | 2013-07-22 | Toshiba Corp | Image processing system, image processing method and image processing program |
CN103297655B (en) * | 2012-02-28 | 2015-07-01 | 崴强科技股份有限公司 | Scanner automatic white balance calibrating method |
CA2875294C (en) | 2012-04-25 | 2021-06-08 | Riken | Cell preparation containing myocardium-committed cell |
CN103177694B (en) * | 2013-04-18 | 2015-03-04 | 广州市观泰数字显示设备有限公司 | White balance adjusting method of LED (Light Emitting Diode) display screen |
KR101964256B1 (en) * | 2013-06-24 | 2019-04-01 | 한화테크윈 주식회사 | Method for interpolating white balance |
CN103414914B (en) * | 2013-08-21 | 2016-02-03 | 浙江宇视科技有限公司 | A kind of color diagnostic arrangement and method |
CN105409211B (en) | 2013-08-26 | 2018-07-10 | 英特尔公司 | For the automatic white balance positive with skin-color adjustment of image procossing |
US9294687B2 (en) * | 2013-12-06 | 2016-03-22 | Intel Corporation | Robust automatic exposure control using embedded data |
JP6278713B2 (en) * | 2014-01-20 | 2018-02-14 | オリンパス株式会社 | Imaging apparatus and imaging method |
CN104469334B (en) * | 2014-12-10 | 2016-08-17 | 深圳市理邦精密仪器股份有限公司 | A kind of Medical Devices obtain the processing method and processing device of view data |
CN104683780B (en) * | 2015-03-24 | 2017-01-04 | 广东本致科技有限公司 | The auto white balance method of a kind of video monitoring camera and device |
CN106454285B (en) * | 2015-08-11 | 2019-04-19 | 比亚迪股份有限公司 | The adjustment system and method for adjustment of white balance |
KR102346522B1 (en) * | 2015-09-10 | 2022-01-03 | 삼성전자주식회사 | Image processing device and auto white balancing metohd thereof |
CN105163099A (en) * | 2015-10-30 | 2015-12-16 | 努比亚技术有限公司 | While balance adjustment method and device and mobile terminal |
CN105872500A (en) * | 2015-12-08 | 2016-08-17 | 乐视移动智能信息技术(北京)有限公司 | Adjusting method and device for white balance of image |
CN107404640B (en) * | 2016-05-20 | 2018-12-25 | 北京集创北方科技股份有限公司 | The white balance correcting and digital imaging device of digital imaging device |
CN105915875B (en) * | 2016-06-01 | 2017-10-13 | 广东欧珀移动通信有限公司 | White balance calibration method and apparatus and its calibration parameter preparation method and device |
JP6837652B2 (en) * | 2016-07-26 | 2021-03-03 | 株式会社シグマ | Imaging device and signal processing method |
CN111526267B (en) * | 2016-08-03 | 2022-09-02 | 株式会社半导体能源研究所 | Image pickup device, image pickup module, electronic apparatus, and image pickup system |
CN107301027A (en) * | 2017-06-19 | 2017-10-27 | 广东欧珀移动通信有限公司 | Screen color temp adjusting method, device and its equipment |
CN108230405A (en) * | 2017-11-30 | 2018-06-29 | 中原智慧城市设计研究院有限公司 | Image white balancing treatment method based on gray processing |
CN108965835B (en) * | 2018-08-23 | 2019-12-27 | Oppo广东移动通信有限公司 | Image processing method, image processing device and terminal equipment |
US10496862B1 (en) | 2019-03-18 | 2019-12-03 | Capital One Services, Llc | Detection of images in relation to targets based on colorspace transformation techniques and utilizing ultraviolet light |
US10509991B1 (en) | 2019-03-18 | 2019-12-17 | Capital One Services, Llc | Detection of images in relation to targets based on colorspace transformation techniques and utilizing infrared light |
US10534948B1 (en) | 2019-03-18 | 2020-01-14 | Capital One Services, Llc | Optimizing detection of images in relation to targets based on colorspace transformation techniques |
US10523420B1 (en) | 2019-04-18 | 2019-12-31 | Capital One Services, Llc | Transmitting encoded data along transmission mediums based on colorspace schemes |
US10614635B1 (en) | 2019-07-25 | 2020-04-07 | Capital One Services, Llc | Augmented reality system with color-based fiducial marker |
CN110689497B (en) * | 2019-09-27 | 2020-05-12 | 信通达智能科技有限公司 | Data extraction device and method based on target identification |
US10833852B1 (en) | 2019-10-03 | 2020-11-10 | Capital One Services, Llc | Encoded data along tape based on colorspace schemes |
US10715183B1 (en) | 2019-10-25 | 2020-07-14 | Capital One Services, Llc | Data encoding with error-correcting code pursuant to colorspace schemes |
US10867226B1 (en) | 2019-11-04 | 2020-12-15 | Capital One Services, Llc | Programmable logic array and colorspace conversions |
US10762371B1 (en) * | 2019-11-14 | 2020-09-01 | Capital One Services, Llc | Object detection techniques using colorspace conversions |
US10878600B1 (en) | 2019-12-10 | 2020-12-29 | Capital One Services, Llc | Augmented reality system with color-based fiducial marker utilizing local adaptive technology |
KR102318196B1 (en) * | 2020-01-20 | 2021-10-27 | 숭실대학교산학협력단 | A method for auto white balance of image and an electronic device to process auto white balance method |
US11302036B2 (en) | 2020-08-19 | 2022-04-12 | Capital One Services, Llc | Color conversion between color spaces using reduced dimension embeddings |
CN112954290B (en) * | 2021-03-04 | 2022-11-11 | 重庆芯启程人工智能芯片技术有限公司 | White balance correction device and method based on image smoothness |
CN116915964B (en) * | 2023-09-13 | 2024-01-23 | 北京智芯微电子科技有限公司 | Gray world white balance correction method, device, equipment, chip and storage medium |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5589879A (en) * | 1993-03-26 | 1996-12-31 | Fuji Photo Film Co., Ltd. | Performing white balance correction on integrated divided areas of which average color is substantially white |
US5805217A (en) * | 1996-06-14 | 1998-09-08 | Iterated Systems, Inc. | Method and system for interpolating missing picture elements in a single color component array obtained from a single color sensor |
US20020131635A1 (en) * | 2000-10-27 | 2002-09-19 | Sony Corporation And Sony Electronics, Inc. | System and method for effectively performing a white balance operation |
US20020167596A1 (en) * | 2000-12-08 | 2002-11-14 | Nikon Corporation | Image signal processing device, digital camera and computer program product for processing image signal |
US6727942B1 (en) * | 1998-09-11 | 2004-04-27 | Eastman Kodak Company | Auto white balance apparatus |
US20040085458A1 (en) * | 2002-10-31 | 2004-05-06 | Motorola, Inc. | Digital imaging system |
US20040141087A1 (en) * | 2003-01-17 | 2004-07-22 | Kazuya Oda | Solid-state image pickup apparatus with influence of shading reduced and a method of controlling the same |
US6801365B2 (en) * | 2001-12-05 | 2004-10-05 | Olympus Corporation | Projection type image display system and color correction method thereof |
US20040212691A1 (en) * | 2003-04-25 | 2004-10-28 | Genta Sato | Automatic white balance adjusting method |
US20050134702A1 (en) * | 2003-12-23 | 2005-06-23 | Igor Subbotin | Sampling images for color balance information |
US20050213128A1 (en) * | 2004-03-12 | 2005-09-29 | Shun Imai | Image color adjustment |
US20080094515A1 (en) * | 2004-06-30 | 2008-04-24 | Koninklijke Philips Electronics, N.V. | Dominant color extraction for ambient light derived from video content mapped thorugh unrendered color space |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3434847B2 (en) | 1993-03-26 | 2003-08-11 | 富士写真フイルム株式会社 | Auto white balance device |
US6421083B1 (en) | 1996-03-29 | 2002-07-16 | Sony Corporation | Color imaging device and method |
JP2000165896A (en) | 1998-11-25 | 2000-06-16 | Ricoh Co Ltd | White balance control method and its system |
JP2002095003A (en) | 2000-09-14 | 2002-03-29 | Olympus Optical Co Ltd | Electronic camera |
JP2002185976A (en) * | 2000-12-08 | 2002-06-28 | Nikon Corp | Video signal processor and recording medium with video signal processing program recorded thereon |
US7190394B2 (en) * | 2002-06-04 | 2007-03-13 | Micron Technology, Inc. | Method for statistical analysis of images for automatic white balance of color channel gains for image sensors |
EP1406454A1 (en) * | 2002-09-25 | 2004-04-07 | Dialog Semiconductor GmbH | Automatic white balance technique |
US7388612B2 (en) * | 2002-11-26 | 2008-06-17 | Canon Kabushiki Kaisha | Image pickup apparatus and method, recording medium, and program providing user selected hue and white balance settings |
JP2005033332A (en) | 2003-07-08 | 2005-02-03 | Konica Minolta Opto Inc | White balance control unit and electronic equipment |
US7728880B2 (en) * | 2004-06-25 | 2010-06-01 | Qualcomm Incorporated | Automatic white balance method and apparatus |
US7545435B2 (en) * | 2004-10-15 | 2009-06-09 | Lifesize Communications, Inc. | Automatic backlight compensation and exposure control |
2005
- 2005-08-30 US US11/216,272 patent/US20070047803A1/en not_active Abandoned

2006
- 2006-08-28 EP EP06795791.0A patent/EP1920610B1/en active Active
- 2006-08-28 JP JP2008528626A patent/JP4977707B2/en active Active
- 2006-08-28 US US11/991,194 patent/US8941755B2/en active Active
- 2006-08-28 WO PCT/IB2006/052971 patent/WO2007026303A1/en active Application Filing
- 2006-08-28 CN CN200680037762.2A patent/CN101283604B/en active Active

2012
- 2012-02-23 JP JP2012037206A patent/JP5377691B2/en active Active
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080181493A1 (en) * | 2007-01-09 | 2008-07-31 | Samsung Electronics Co., Ltd. | Method, medium, and system classifying images based on image properties |
US8164649B2 (en) | 2007-04-25 | 2012-04-24 | Nikon Corporation | White balance adjusting device, imaging apparatus, and recording medium storing white balance adjusting program |
EP1986444A2 (en) | 2007-04-25 | 2008-10-29 | Nikon Corporation | White balance adjusting device, imaging apparatus, and white balance adjusting program |
US20080266417A1 (en) * | 2007-04-25 | 2008-10-30 | Nikon Corporation | White balance adjusting device, imaging apparatus, and recording medium storing white balance adjusting program |
EP1986444A3 (en) * | 2007-04-25 | 2011-09-07 | Nikon Corporation | White balance adjusting device, imaging apparatus, and white balance adjusting program |
US20090278921A1 (en) * | 2008-05-12 | 2009-11-12 | Capso Vision, Inc. | Image Stabilization of Video Play Back |
US8655066B2 (en) * | 2008-08-30 | 2014-02-18 | Hewlett-Packard Development Company, L.P. | Color constancy method and system |
US20110158527A1 (en) * | 2008-08-30 | 2011-06-30 | Ren Wu | Color Constancy Method And System |
US20100066857A1 (en) * | 2008-09-12 | 2010-03-18 | Micron Technology, Inc. | Methods, systems and apparatuses for white balance calibration |
US8564688B2 (en) | 2008-09-12 | 2013-10-22 | Aptina Imaging Corporation | Methods, systems and apparatuses for white balance calibration |
US20100151903A1 (en) * | 2008-12-17 | 2010-06-17 | Sony Ericsson Mobile Communications Japan, Inc. | Mobile phone terminal with camera function and control method thereof |
US8704906B2 (en) * | 2008-12-17 | 2014-04-22 | Sony Corporation | Mobile phone terminal with camera function and control method thereof for fast image capturing |
US8666190B1 (en) | 2011-02-01 | 2014-03-04 | Google Inc. | Local black points in aerial imagery |
US8798360B2 (en) | 2011-06-15 | 2014-08-05 | Samsung Techwin Co., Ltd. | Method for stitching image in digital image processing apparatus |
US20130050580A1 (en) * | 2011-08-30 | 2013-02-28 | Lg Innotek Co., Ltd | Image Processing Method for Digital Apparatus |
US8717458B2 (en) * | 2011-08-30 | 2014-05-06 | Lg Innotek Co., Ltd. | Updating gain values using an image processing method for digital apparatus |
US9111484B2 (en) | 2012-05-03 | 2015-08-18 | Semiconductor Components Industries, Llc | Electronic device for scene evaluation and image projection onto non-planar screens |
US20170017135A1 (en) * | 2014-03-14 | 2017-01-19 | Sony Corporation | Imaging apparatus, iris device, imaging method, and program |
JPWO2015137148A1 (en) * | 2014-03-14 | 2017-04-06 | ソニー株式会社 | Imaging device, iris device, imaging method, and program |
US10018890B2 (en) * | 2014-03-14 | 2018-07-10 | Sony Corporation | Imaging apparatus, iris device and imaging method |
US9485483B2 (en) * | 2014-04-09 | 2016-11-01 | Samsung Electronics Co., Ltd. | Image sensor and image sensor system including the same |
US20150296194A1 (en) * | 2014-04-09 | 2015-10-15 | Samsung Electronics Co., Ltd. | Image sensor and image sensor system including the same |
EP3138283A4 (en) * | 2014-04-29 | 2017-12-13 | Intel Corporation | Automatic white balancing with chromaticity measure of raw image data |
EP3402187A4 (en) * | 2016-01-08 | 2019-06-12 | Olympus Corporation | Image processing apparatus, image processing method, and program |
US9787893B1 (en) | 2016-04-11 | 2017-10-10 | Microsoft Technology Licensing, Llc | Adaptive output correction for digital image capture processing |
EP3565236A4 (en) * | 2017-01-05 | 2020-01-08 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Control method, control apparatus, mobile terminal and computer-readable storage medium |
US10812733B2 (en) | 2017-01-05 | 2020-10-20 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Control method, control device, mobile terminal, and computer-readable storage medium |
CN107578390A (en) * | 2017-09-14 | 2018-01-12 | 长沙全度影像科技有限公司 | A kind of method and device that image white balance correction is carried out using neutral net |
CN110365920A (en) * | 2018-04-03 | 2019-10-22 | 顶级公司 | Image procossing |
US10863156B2 (en) * | 2018-04-03 | 2020-12-08 | Apical Ltd. | Image processing |
US10970828B2 (en) * | 2018-12-21 | 2021-04-06 | Ricoh Company, Ltd. | Image processing apparatus, image processing system, image processing method, and recording medium |
WO2022218080A1 (en) * | 2021-04-17 | 2022-10-20 | Oppo广东移动通信有限公司 | Prepositive image signal processing apparatus and related product |
Also Published As
Publication number | Publication date |
---|---|
CN101283604B (en) | 2010-12-01 |
CN101283604A (en) | 2008-10-08 |
US8941755B2 (en) | 2015-01-27 |
JP5377691B2 (en) | 2013-12-25 |
JP2012100362A (en) | 2012-05-24 |
EP1920610B1 (en) | 2019-05-29 |
JP4977707B2 (en) | 2012-07-18 |
US20090295938A1 (en) | 2009-12-03 |
EP1920610A1 (en) | 2008-05-14 |
WO2007026303A1 (en) | 2007-03-08 |
JP2009506700A (en) | 2009-02-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20070047803A1 (en) | Image processing device with automatic white balance | |
CN110022469B (en) | Image processing method, image processing device, storage medium and electronic equipment | |
US8451371B2 (en) | Exposure control for an imaging system | |
KR100983037B1 (en) | Method for controlling auto white balance | |
US8363131B2 (en) | Apparatus and method for local contrast enhanced tone mapping | |
US9426437B2 (en) | Image processor performing noise reduction processing, imaging apparatus equipped with the same, and image processing method for performing noise reduction processing | |
RU2496250C1 (en) | Image processing apparatus and method | |
CN107395991B (en) | Image synthesis method, image synthesis device, computer-readable storage medium and computer equipment | |
CN107533756B (en) | Image processing device, imaging device, image processing method, and storage medium storing image processing program for image processing device | |
JP2015156615A (en) | Image processing system, image processing method, control program, and recording medium | |
US9177396B2 (en) | Image processing apparatus and image processing method | |
JP5814799B2 (en) | Image processing apparatus and image processing method | |
JP2008153768A (en) | Imaging apparatus, and white balance processor | |
US20200228770A1 (en) | Lens rolloff assisted auto white balance | |
KR20170030418A (en) | Imaging processing device and Imaging processing method | |
KR20060118352A (en) | Image process apparatus, image pickup apparatus, and image processing program | |
JP5899894B2 (en) | Imaging apparatus, image processing apparatus, image processing program, and image processing method | |
KR101854432B1 (en) | Method and apparatus for detecting and compensating back light frame | |
US20200228769A1 (en) | Lens rolloff assisted auto white balance | |
JP2009004966A (en) | Imaging apparatus | |
CN114697483B (en) | Under-screen camera shooting device and method based on compressed sensing white balance algorithm | |
US20220311982A1 (en) | Image capturing apparatus, control method, and storage medium | |
US11153467B2 (en) | Image processing | |
JP3907654B2 (en) | Imaging apparatus and signal processing apparatus | |
CN115170407A (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: NOKIA CORPORATION, FINLAND Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NIKKANAN, JARNO;REEL/FRAME:017191/0793 Effective date: 20050901 |
AS | Assignment |
Owner name: NOKIA CORPORATION, FINLAND Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NIKKANEN, JARNO;REEL/FRAME:018011/0213 Effective date: 20050901 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |