CA2212802A1 - Brightness adjustment for images using digital scene analysis - Google Patents
Brightness adjustment for images using digital scene analysis

Info
- Publication number
- CA2212802A1
- Authority
- CA
- Canada
- Prior art keywords
- image
- sector
- average luminance
- average
- values
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06T5/92—
  - G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
  - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
  - G06T5/00—Image enhancement or restoration
  - G06T5/40—Image enhancement or restoration by the use of histogram techniques
- G06T2207/20021—Dividing image into blocks, subimages or windows
  - G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
  - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
  - G06T2207/00—Indexing scheme for image analysis or image enhancement
  - G06T2207/20—Special algorithmic details
Abstract
A system and method for processing a digital image signal which represents an image can be made to optimally map luminance values versus a tonal reproduction capability of a destination application. Specifically, the system includes a device for partitioning the image into blocks, then combining certain blocks into sectors. An average luminance block value is determined for each block and a difference is determined between the maximum and minimum average luminance block values for each sector. If the difference exceeds a predetermined threshold value, then the sector is labeled as an active sector and an average luminance sector value is obtained from the maximum and minimum average luminance block values. All weighted counts of active sectors of the image are plotted versus the average luminance sector values in a histogram, then the histogram is shifted via some predetermined criterion so that the average luminance sector values of interest will fall within a destination window corresponding to the tonal reproduction capability of a destination application.
Description
WO 96/30871 PCT/US96/02353

BRIGHTNESS ADJUSTMENT OF IMAGES USING DIGITAL SCENE ANALYSIS
BACKGROUND OF THE INVENTION
1. Field of the Invention

The invention relates generally to an improved image processing system and methods for use with this system. More particularly, the invention relates to a system and associated methods for adjusting the lightness of a digitally represented image.
2. Description of the Prior Art

Anyone acquiring an image needs to have a permanent record which faithfully reproduces the original subject or scene, or at least those aspects of the subject or scene which are considered most important. The quality of the reproduction is judged by visually comparing the hardcopy with the original scene, where the hardcopy is nearly immediately available, or with what is remembered about the scene. In making this judgment an observer compares the magnitude of the visual sensation created by the hardcopy under the prevailing viewing conditions with the magnitude of the visual sensation created by the original scene under the actual lighting conditions, or what they are remembered to be; i.e., the observer compares the brightness of various points in the hardcopy with the brightness of corresponding points in the original scene and thereby forms a subjective opinion about the quality of the reproduction. Exact subjective tone reproduction requires that the brightness of each point in the hardcopy equal the brightness of the corresponding point in the original scene. However, as those skilled in the art know, exact subjective tone reproduction is extraordinarily difficult and inconvenient to achieve because hardcopy media are generally viewed at illumination levels which are significantly less than those under which an original was created, original photographs typically being viewed at about 1/100 the illumination of the original scene. This fact, and the fact that most hardcopy media, as their tone scale characteristics indicate, have a limited ability to capture the range of tones which typically exist in nature, would seem to indicate that an observer could never be satisfied with the rendition that can be produced with the present level of reproduction technology.
However, this is obviously not the case, and the reason for it is that satisfactory subjective tone reproduction can be obtained and will please an observer if the brightnesses of the subject under ordinary viewing conditions are approximately proportional to corresponding scene brightnesses, if the brightness of skin tones approximately equals that of real skin under the prevailing viewing conditions, if the apparent hue and saturation of object colors is maintained relative to the original scene, and if the medium reproduces tones corresponding more or less to the range of tones represented by the important objects of the scene.
To assure that the foregoing conditions are more or less satisfied depends, ultimately, on properly matching the scene lightness values to the tone scale of the medium, taking into account the particular scene characteristics, prevailing scene lighting conditions, and the medium characteristics. Given the variety of possible scenes and lighting conditions, proper matching can only be achieved regularly by understanding the complex interrelationships of the entire reproduction system and some probability estimate of the likelihood of the occurrence of typical scenes. This would include knowing, for example, the most likely distribution and intensities of scene illuminance patterns expected to be captured, the spectral reflectivity of commonly rendered objects expected to be reproduced, the spectral content of likely scene illuminance, and the spectral response and tone scale characteristics of the medium. In currently available amateur camera products, all of these interrelationships are typically automatically correlated for optimum exposure by a camera's automatic exposure control system, which commonly utilizes a built-in "averaging" or "center-weighted" type meter for exposure prediction purposes. Further, in electronic imaging, images are acquired and represented as digital signals which can be manipulated, processed or displayed through the use of computers or other special purpose electronic hardware. The processing of these digital signals includes known techniques such as luminance averaging or center-weighting for automatic exposure control.
While the various known tone matching techniques may prove adequate for many purposes, they apparently do not consider tone as a function of the detail that may be present in the subject. Consequently, the primary object of the present invention is to provide an improved system, and associated methods, for performing lightness adjustment of a digital image which is independent of large area luminance averaging. This and other objects will become apparent in view of the following descriptions, drawings and claims.
SUMMARY OF THE INVENTION
A system and method for processing a digital image signal which represents an image can be made to optimally map luminance values versus a tonal reproduction capability of a destination application. Specifically, the system includes a device for partitioning the image into blocks, then combining certain blocks into sectors. An average luminance block value is determined for each block and a difference is determined between the maximum and minimum average luminance block values for each sector. If the difference exceeds a predetermined threshold value, then the sector is labeled as an active sector and an average luminance sector value is obtained from the maximum and minimum average luminance block values. All active sectors of the image are plotted versus the average luminance sector values in a histogram, then the histogram is shifted via some predetermined criterion so that the average luminance sector values of interest will fall within a destination window corresponding to the tonal reproduction capability of a destination application.

BRIEF DESCRIPTION OF THE DRAWINGS
The aforementioned aspects and other features of the invention are described in detail in conjunction with the accompanying drawings, in which the same reference numerals are used throughout for denoting corresponding elements, and wherein:
Figure 1 shows an exemplary embodiment of the image processing system of the invention;
Figure 2 is a block diagram of an image processing system according to the invention by which lightness adjustment of a digital image may be made;
Figure 3 is a magnified portion of an image showing individual pixels, 8 x 8 image blocks, and a 2 x 2 sector;
Figure 4A is a histogram of the number of active sectors corresponding to average sector luminance values plotted along a logarithmic scale; and

Figure 4B is the histogram of Figure 4A which has been remapped to target a desirable portion of the histogram into a specific tonal reproduction range.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The present invention relates to an image processing system and associated image processing methods by which the luminance content in an original scene may be optimally matched with the tone reproduction capabilities of a destination application such as an electronic output display device, hardcopy printer, or photographic reproduction device.
Figure 1 illustrates one exemplary embodiment of such a system. As can be seen, Figure 1 illustrates an electronic image processing system where an image signal source, such as an electronic still camera 10 or a scanner 12, provides an electronic image signal which represents an image of the subject (not shown). A computer 18 receives the electronic signal from the image signal source and thereafter processes the image signal electronically to provide any number of known image processing functions such as brightness adjustment. The processed image can be transmitted, i.e. output, to any destination device or destination application such as a diskette 16, a user monitor 20, a printer 14, or a remote monitor 26.
Operator interaction with the system is facilitated by use of a keyboard 22 or a mouse 24. Of course, the components shown in Figure 1 are merely exemplary rather than all inclusive of the many equivalent devices known by those skilled in the art. For instance, the image signal source could include any device which acts as an image signal source, such as an electronic camera, a scanner, a camcorder, a charge coupled device, a charge injected device, etc. Also, it is noteworthy that the processing of the image need not necessarily occur solely in the computer 18. Indeed, various phases or aspects of the image processing could occur in the image signal source, the computer, or the destination output device.
Electronic image processing can occur in various domains such as the spatial domain or the frequency domain. An image is said to reside in the spatial domain when the values of parameters used to describe the image, such as brightness, have a direct correspondence with spatial location. Brightness of an image is defined as the attribute of sensation by which an observer is aware of differences in luminance. In the frequency domain, the image is represented as a matrix of frequency coefficients which are obtained by various transformation methods such as Fourier transformation, discrete cosine transformation (DCT), etc.
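As an aside on the frequency-domain representation: the (0,0) "direct current" DCT coefficient of a block carries its average brightness, a fact the preferred embodiment exploits later for block averaging. A small self-contained check (our own NumPy construction of the orthonormal DCT-II, not code from the patent):

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of a square block, built from the 1-D basis."""
    n = block.shape[0]
    k = np.arange(n)
    # basis[f, m] = cos(pi * (2m + 1) * f / (2n))
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    scale = np.full(n, np.sqrt(2.0 / n))
    scale[0] = np.sqrt(1.0 / n)
    C = scale[:, None] * basis
    return C @ block @ C.T

block = np.random.default_rng(0).uniform(0, 255, (8, 8))
coeffs = dct2(block)
# The (0,0) "direct current" coefficient equals 8x the block's mean luminance.
print(bool(np.isclose(coeffs[0, 0], 8 * block.mean())))  # True
```

For an orthonormal 8 x 8 DCT the DC coefficient is the pixel sum divided by 8, i.e. eight times the block mean, so it can stand in for a spatial average up to a constant factor.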
One problem associated with known systems for brightness adjustment of a digital image is the lack of optimal exposure control. The simplest method of exposure control takes the overall luminance average of a scene and sets that average to coincide with the imaging system's reference gray value. This works well for scenes wherein the average luminance of the principal subject is the same as the overall average. However, this method is ineffective for difficult scenes which contain excessive backlighting or frontlighting, or which have specular reflectances which can unduly affect the overall average upon which the simple exposure meter is based. In the case of excessive backlighting, the brightness of the background is significantly higher than that of the subject of the scene, and in the case of excessive frontlighting, the background is much darker than the subject.
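The failure mode described above is easy to reproduce. The following sketch (our illustration, not the patent's; the reference gray value of 118 is an assumption) applies the simple overall-average rule to a backlit scene:

```python
import numpy as np

def simple_exposure_shift(luminance, reference_gray=118.0):
    """Shift the image so its overall average luminance coincides with the
    imaging system's reference gray (118 is an assumed value here)."""
    shift = reference_gray - luminance.mean()
    return np.clip(luminance + shift, 0, 255)

# A backlit scene: dark principal subject, bright background.
scene = np.full((64, 64), 230.0)   # bright background
scene[24:40, 24:40] = 40.0         # dark subject near the center
adjusted = simple_exposure_shift(scene)
# The background dominates the overall average, so the correction pushes
# the already-dark subject darker still -- the failure mode noted above.
print(float(adjusted[32, 32]))  # 0.0 (clipped to black)
```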
More sophisticated exposure control systems typically extend the above simple exposure method by using more complicated averaging schemes. One of the most common averaging methods for exposure control uses a center weighted luminance average, since the principal subject is often placed near the center of the picture. For this method, the highest weighting is applied a little below the geometrical center in order to reduce the influence of a bright sky, which might be in the background. Other known methods segment the scene into a pattern of central and peripheral areas and determine a control metric from some logical combination of the luminance averages of these areas. These refinements, though they represent a considerable improvement, are, when presented with a difficult scene, still subject to the errors pointed out above, which are inherent in any method which depends upon large area luminance averaging.
Figure 2 is a block diagram of the various elements of the image processing system for lightness adjustment of a digital image. It will be understood that each of the elements of the image processing system may be embodied, alternatively, as an on-board application specific integrated circuit (ASIC), field programmable gate array, or other form of firmware, resident on one or more of the components of the system of Figure 1, or resident as an application program or program module in a general purpose computer such as that shown in Figure 1. The scene 201 is represented as a digital image by image signal source 207, e.g. a camera, camcorder, charge-coupled device, charge-injected device, scanner, etc. The image acquisition device 200 acquires the image signal, which contains both luminance data and chrominance data characteristic of the image, and separates the luminance data, which is thereafter stored in input buffer 202. The present invention analyzes only the luminance data and, as such, the chrominance components will not be discussed any further.
A portion of the image is represented in cutout in Figure 3 as a composite of pixels 300 of luminance data. The signal processing circuitry 205 retrieves the luminance data from input buffer 202, processes the luminance data for lightness adjustment, then stores the processed data in output buffer 224. Eventually, in a destination device 226, the processed data is used to reproduce the processed image in a destination application such as printing, displaying, transmission to a downstream application, or the like. The signal processing circuitry 205, as mentioned earlier, can be implemented as discrete hardware components as shown in Figure 2, or the circuitry 205 could be emulated by a general purpose computer. And again, the hardware included in circuitry 205 could be completely or partially housed in other system elements such as the image signal generator 207, the image acquisition device 200, or the output device 226.
The signal processing circuitry 205 specifically includes a partitioner 204 which reads the luminance data from input buffer 202 and partitions the luminance data into a plurality of image blocks of predetermined size ranging from one to M x N pixels, where M is the number of rows and N is the number of columns in the image. In the preferred embodiment, the luminance data is segmented into 8 x 8 image blocks which conform to conventional block sizes for image processing as recommended, for instance, in Section 3.3 of the digital image compression standard 10918-1 of the International Standards Organization Joint Photographic Experts Group (ISO/JPEG). Figure 3 demarcates four separate 8 x 8 image blocks 302, 304, 306 and 308. A sectorizer 206 combines a preselected group of the 8 x 8 image blocks into sectors, which provides a lower resolution representation of the image than that of the pixel level. However, in the case when lowering the resolution of the pixels 300 is not desirable, the sectorizer 206 could be bypassed and, for purposes of the processing to follow, each pixel 300 could be interpreted as a separate sector. In the preferred embodiment, sector 310 is defined as four adjacent 8 x 8 pixel image blocks combined into a single 2 x 2 sector. Block averager 208 then determines an average luminance block value for each image block in each sector. The average luminance block value can either be determined in the spatial domain by averaging each of the 64 pixels in each image block, or by transforming the luminance data of the pixels into discrete cosine transform (DCT) coefficients, then using the direct current element (0,0) of each 8 x 8 block of DCT coefficients as the average value for the image block. In a subtractor 210, the maximum and minimum average luminance block values are determined for each sector, then a difference is determined therebetween.
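A minimal sketch of what the partitioner 204, sectorizer 206, block averager 208 and subtractor 210 compute, assuming an 8-bit luminance plane (the function names and NumPy formulation are ours, not the patent's):

```python
import numpy as np

def block_averages(y, block=8):
    """Partitioner + block averager: average luminance of each 8x8 tile
    (spatial-domain route; the DCT route would take each block's (0,0)
    coefficient instead)."""
    h, w = y.shape
    h, w = h - h % block, w - w % block            # ignore ragged edges here
    t = y[:h, :w].reshape(h // block, block, w // block, block)
    return t.mean(axis=(1, 3))

def sector_min_max(block_avg, sector=2):
    """Sectorizer + subtractor inputs: per 2x2 sector, the minimum and
    maximum of the four average luminance block values."""
    r, c = block_avg.shape
    r, c = r - r % sector, c - c % sector
    s = block_avg[:r, :c].reshape(r // sector, sector, c // sector, sector)
    return s.min(axis=(1, 3)), s.max(axis=(1, 3))

y = np.tile(np.arange(32, dtype=float), (32, 1))   # toy luminance ramp
avg = block_averages(y)                            # 4 x 4 block averages
ymin, ymax = sector_min_max(avg)                   # 2 x 2 sectors
print(avg.shape, ymin.shape)                       # (4, 4) (2, 2)
```

The subtractor's per-sector difference is then simply `ymax - ymin`.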
A sector thresholder 212 compares the difference between the maximum and minimum average luminance block values for each sector with a predetermined threshold value and, when the difference is greater than the predetermined threshold value, that sector is defined as an active sector. Once a sector is defined as an active sector, the maximum and minimum average luminance block values for that sector are averaged together to establish an average luminance sector value in a sector averager 214. Counter 216 counts the number of active sectors corresponding to each average luminance sector value, which typically ranges from 0 to 255 (white). Alternatively, the average luminance sector value of each active sector could be first weighted in sector averager 214, then counted in device 216, by any of a number of well-known weighting algorithms, so that the count of the number of active sectors would be altered accordingly. Once all the active sectors of the image have been weighted and counted, a histogram such as the one shown in Figure 4A is created by histogram generator 218. Figure 4A shows the histogram where the number of active sectors is depicted along the vertical axis versus the average luminance sector values depicted along the horizontal axis. Figure 4A also shows a destination window which represents the tone scale or tone reproduction capability corresponding to a destination application such as a printer, display, or other downstream application in the image processing chain of the invention. Here, the destination application is depicted as a destination output device 226.

From Figure 4A, it is clear that part of the luminance information (represented as active sectors of average luminance values) cannot be reproduced by the destination device 226; i.e., only the luminance data which falls within the destination window of the particular destination device will be reproduced. This problem is overcome in selector 220 by first providing a selection criterion for positioning the histogram of Figure 4A. Then, a positioner 222 will maneuver the histogram of Figure 4A so that desirable luminance information will fall within the established destination window in accordance with the selection criterion. For instance, Figure 4B shows the histogram of Figure 4A shifted so that the average luminance values corresponding to the highest occurrences of active sectors appear in the destination window. The various criteria for deciding how to shift or otherwise position the histogram in positioner 222 are predetermined according to a number of considerations which will be detailed further on in this application. The shifted luminance values of the histogram of Figure 4B are stored in output buffer 224 and are thereafter printed, displayed or otherwise output to or by the destination device 226.
The approach of the present invention relies on scene analysis for solving various problems associated with the lightness adjustment of an image. It has been verified using psychovisual analysis whereby an experimental group of human observers compares a series of images of a common scene, each image varying only in brightness. Each observer selects the one image of the series which is most aesthetically pleasing. While the eyes sense a scene as a luminance bitmap, the content of the scene is ascertained by reading the visual information in the form of edges, textures and shadings. The quality of the image depends critically upon how well and to what extent this visual information is represented in the displayed image. Hypothetically, the optimal exposure should be the one which best preserves the visual information of the scene in easily readable form.
Each form of the above described visual information is represented according to changes in luminance, defined as the luminance activity. There are three parameters which define a luminance activity: (1) the magnitude of the luminance change between portions of an image; (2) the average luminance value over the area in which the luminance change occurs; and (3) the geometric distance over which the luminance change occurs.
The scene analysis method of the present invention is based on the observation that only the luminance content of a scene where some detail of interest resides should be considered in making lightness adjustments. Consequently, the overall strategy is to build sectors at the resolution of importance and then use a metric to interrogate those sectors for the presence of detail activity. In particular, the scene analysis method of the present invention is based on a low resolution image derived from the average luminance of an image block. This low resolution image is divided into 2 x 2 sectors (for the preferred embodiment). Preferably, the size of the 2 x 2 sectors corresponds approximately to the peak resolution of the human visual system at the final image size. The magnitude of the sector activity is taken as

Ymag = Ymax - Ymin

where Ymax and Ymin are the maximum and minimum average luminance values of the four blocks in a sector. The average luminance sector value for any given sector can be taken as

Ave Yseg = (Ymax + Ymin) / 2

Since a luminance change which is not noticeable is not important, only activities whose magnitudes exceed some predetermined threshold value are counted. Best empirical results have come using a threshold equivalent to about 1/3 of an F-stop, or a density of about 0.1. This activity metric is a non-linear, omni-directional 'detail' finder which has some sensitivity to all of the different types of information elements mentioned above, on a half-wave scale of eight high resolution pixels. A histogram is formed by counting the number of over-threshold activities as a function of the average luminance sector values. This histogram shows image detail, which is the basis of the analysis to estimate the optimal exposure. Large light or dark areas which contain no detail will not affect the results.
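The activity metric and detail histogram described above can be sketched as follows (the threshold of 16 out of 255 is our rough stand-in for "about 1/3 of an F-stop"; the patent states the value only in F-stop and density terms):

```python
import numpy as np

def activity_histogram(ymin, ymax, threshold=16.0, bins=256):
    """Ymag = Ymax - Ymin per sector; AveYseg = (Ymax + Ymin) / 2; only
    over-threshold activities are counted into the detail histogram."""
    ymag = ymax - ymin                      # sector activity magnitude
    active = ymag > threshold               # noticeable detail only
    ave_yseg = (ymax[active] + ymin[active]) / 2.0
    hist, edges = np.histogram(ave_yseg, bins=bins, range=(0.0, 256.0))
    return hist, edges

# Four sectors: strong edge, flat sky, strong texture, near-flat wall.
ymin = np.array([10.0, 180.0, 40.0, 90.0])
ymax = np.array([80.0, 185.0, 200.0, 95.0])
hist, _ = activity_histogram(ymin, ymax)
print(int(hist.sum()))  # 2 -- the flat regions contribute nothing
```

As the text notes, large featureless areas (the sky and wall sectors here) fall below threshold and therefore never enter the histogram.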
If the dynamic range of the detail luminance histogram is the same as or smaller than that of the destination window representing the tonal reproduction range corresponding to the tonal reproduction capability of the destination application, then it is only necessary to reposition the histogram on a logarithmic luminance scale such that it fits within the destination window. The destination window useful for reproduction of an ideal film with a slope of 1.5 is about 4 stops, considerably smaller than the range of a typical activity luminance histogram, which can be up to 2 or 3 stops greater. Since there is generally a considerable overlap outside the destination window, some of the detail information is clipped and lost by the output limitations of the destination device.
The question of the optimal exposure shift is one of positioning the histogram over the destination window to get the best result, recognizing that the information represented by the ends of the histogram which extend beyond the print window may be lost. The best or desired results are, of course, dependent upon the specific application requirements.
An example of an image histogram based on activity luminance is shown in Figure 4A. The luminance scale is in logarithmic units to the base 2 with 20 units per F-stop. The vertical dotted lines represent the limits of the destination window. Numerous parameters for positioning the histogram into the destination window are possible in accordance with the specific application requirements. The following four exemplary parameters have been tested.
MidShift: Set the midpoint of the histogram range to the midpoint of the destination window.
MeanShift: Determine the weighted mean of the activity luminance, using the activity counts as weights, and set the weighted mean to the midpoint of the destination window.
MaxShift: Shift the histogram on the log luminance scale such that the maximum possible number of counts are included in the destination window.
EqEndShift: Shift the histogram such that the same number of counts are excluded from the destination window at either end of the histogram.
These four parameters differ in their sensitivity to the range, the shape and the symmetry of the histogram. The best test results of any one of the above parameters occurred with the EqEndShift. Somewhat better results were obtained by either averaging all four of the above parameters, or by averaging the MidShift, the MeanShift, and the EqEndShift.
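Hedged sketches of the four positioning criteria, operating on a histogram of counts indexed by (log-)luminance bin with a destination window [lo, hi]; the brute-force searches are our own formulation, not the patent's:

```python
import numpy as np

def mid_shift(hist, lo, hi):
    """MidShift: move the midpoint of the occupied histogram range to the
    midpoint of the destination window [lo, hi]."""
    occ = np.nonzero(hist)[0]
    return (lo + hi) / 2.0 - (occ[0] + occ[-1]) / 2.0

def mean_shift(hist, lo, hi):
    """MeanShift: move the count-weighted mean luminance to the window midpoint."""
    return (lo + hi) / 2.0 - np.average(np.arange(len(hist)), weights=hist)

def max_shift(hist, lo, hi):
    """MaxShift: integer shift placing the maximum number of counts inside
    the window (brute force over fully overlapping placements)."""
    width = hi - lo + 1
    inside = np.convolve(hist, np.ones(width), mode="valid")
    return lo - int(np.argmax(inside))

def eq_end_shift(hist, lo, hi):
    """EqEndShift: integer shift clipping, as nearly as possible, the same
    number of counts off each end of the window."""
    n = len(hist)
    def imbalance(s):
        below = sum(hist[b] for b in range(n) if b + s < lo)
        above = sum(hist[b] for b in range(n) if b + s > hi)
        return abs(below - above)
    return min(range(-n, n + 1), key=imbalance)

hist = np.zeros(32, dtype=float)
hist[[4, 10, 12, 20]] = [1, 5, 5, 2]     # a sparse activity histogram
print(max_shift(hist, 8, 15))            # 3
```

Averaging several of the returned shifts, as the text reports, is then a one-line combination of these estimates.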
Testing of the above parameters was accommodated with a digital image library of 594 digital images of a wide variety of scenes, including an oversampling of difficult scenes, e.g. snow scenes, beach and water scenes, high contrast back-lit and side-lit scenes, etc. Each image was obtained by photographing a scene using Kodak VPS color negative film and scanning the processed negative with a high resolution Itek scanner. The scanner was calibrated to world XYZ space using a set of known color reference patches over a broad exposure range. This calibration enables printing of the scanned digital images on a similarly calibrated high quality color printer.
Each image was subjected to psychovisual testing with a panel of 25 people who selected a preferred exposure from a series of prints of the same scene made at eight equally spaced levels of brightness. This provided the "psychovisual best" exposure data for comparing estimates of the best exposure from the scene analysis exposure method. The differences between the scene analysis estimates and the corresponding psychovisual best exposures were determined for each of the 594 images. The results were determined using the standard deviation of these differences. Some of the results are summarized below.
Results of Scene Analysis
(Values are log exposure, 20 units per F-stop)

                   Std. Dev.   Max. Error
Center Weight        13.7         53
MidShift             12.5         51
MeanShift            11.3         34
MaxShift             11.3         42
EqEndShift           10.3         35
EqEndMeanMidAve      10.1         30
AllAve               10.1         31

The above results include center weighting and cutoff procedures to be later described. The first entry is the result of a weighted luminance average. The last two entries are the result of averaging the exposure estimates from three, or all four, of the scene analysis histogram evaluation methods.
Almost any method of adjusting exposure -- even a good guess based upon prior experience and personal judgment -- can give quite acceptable results for a majority of scenes, since the most common scenes tend to contain a well distributed set of brightness values.
Improvements in exposure control reduce the variability in the common, easy scenes and bring a greater range of difficult scenes into the acceptable and higher quality categories. The scene analysis method reduced the maximum error in the tested images for the center-weighted average from over 2.5 stops to just 1.5 stops, while making a 30% reduction in the standard deviation of the exposure error.
Additional improvements or modifications of the above described basic scene analysis can give modest improvement in the overall results. These new factors, listed below, are described in the following sections.
1. Center weighting of over-threshold luminance activity counts
2. High luminance cutoff
3. Multiple correlation of errors using shift invariant image parameters
4. Luminance adaptive activity threshold
5. Scene adaptive contrast adjustment based on luminance activity

Center Weighting
Almost all camera exposure control mechanisms use some sort of center weighting.
This approach takes advantage of the general tendency to place the principal subject near the center of the frame when taking a picture. Many automatic cameras use an exposure meter covered by an asymmetric lens which produces a slightly off-center weighted response.
To use center weighting in the above scene analysis procedure, the histogram is built by adding up the position dependent weights for the active sectors (instead of simply counting them) associated with a given average luminance sector value. Several weighting matrices have been tested, with the best results coming from the following weighting matrix of the exposure meter used in one automatic camera.
Weighting Matrix in Percentages

This 19 x 23 matrix gives maximum weight of 100% to cell (12,12), centered laterally but about 5/8 of the distance down from the top of the matrix. The weights fall off in a more or less Gaussian ellipse to ten percent at the edges for this particular weighting matrix. In practice this weighting matrix is interpolated to both the aspect ratio and the resolution of the sector image of the scene to be analyzed. Results using this procedure show a modest improvement of about 4% in the standard deviation of the scene analysis exposure error.
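The numeric entries of the weighting matrix are not reproduced above, but a matrix with the stated properties can be approximated. This is a minimal sketch, assuming an elliptical Gaussian falloff with widths chosen so that the mid-edge weights land near 10%; the actual camera-meter matrix would differ in detail.

```python
import numpy as np

def center_weight_matrix(rows=19, cols=23, peak=(12, 12), edge=0.10):
    """Approximate a center-weighting matrix: 100% at the peak cell,
    falling off in a Gaussian ellipse to about 10% at the edges."""
    r0, c0 = peak[0] - 1, peak[1] - 1          # 1-based cell -> 0-based index
    # Pick sigmas so the farthest mid-edge reaches the `edge` level:
    # exp(-d^2 / (2*sigma^2)) = edge  =>  sigma = d / sqrt(-2*ln(edge))
    scale = np.sqrt(-2.0 * np.log(edge))
    sig_r = max(r0, rows - 1 - r0) / scale
    sig_c = max(c0, cols - 1 - c0) / scale
    rr, cc = np.mgrid[0:rows, 0:cols]
    d2 = ((rr - r0) / sig_r) ** 2 + ((cc - c0) / sig_c) ** 2
    return 100.0 * np.exp(-0.5 * d2)

w = center_weight_matrix()
```

In use, such a matrix would be interpolated to the aspect ratio and resolution of the sector image, and each active sector would contribute its interpolated weight to the histogram instead of a count of 1.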
Luminance Cutoff

Very high contrast scenes often contain specular reflections from waves, ice crystals, etc., which produce over-threshold activities that add to the high luminance end of the histogram and tend to make the resultant image too dark. Attempts to avoid this phenomenon have been made by establishing a luminance limit beyond which the histogram values are set to zero, i.e., active sectors whose average luminance sector values are above this limit are not considered in the analysis. The luminance limit, Ylim, for a given scene is given as follows:
if Ymax - Ymin > Cutoff then Ylim = Ymin + Cutoff, else Ylim = Ymax

where Ymax and Ymin are, respectively, the maximum and minimum average luminance sector values in the scene and Cutoff is the upper limit of the dynamic range. The optimum value for Cutoff was determined by experiment, using the 594 images from the image database, to be 7.5 stops. Incorporating the luminance cutoff procedure into the scene analysis reduced the standard deviation of the overall exposure error by just 1% (since it affects only a few of the highest contrast images), but reduced the maximum too-dark error by 10%.
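A sketch of this cutoff rule, assuming the histogram is held as a dict keyed by average luminance sector value on the 20-units-per-stop log scale (so the 7.5-stop Cutoff is 150 units):

```python
def apply_luminance_cutoff(histogram, cutoff=150):
    """Zero out histogram entries above Ylim, where Ylim = Ymin + Cutoff
    when the scene's range exceeds Cutoff, and Ymax otherwise."""
    y_min, y_max = min(histogram), max(histogram)
    y_lim = y_min + cutoff if (y_max - y_min) > cutoff else y_max
    return {y: (n if y <= y_lim else 0) for y, n in histogram.items()}
```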
Multiple Correlation of Errors

A main object of scene analysis is basically to predict the luminance shift which will produce a print at the same lightness level that was chosen by the psychovisual testing panel as the psychovisual best for any given image.
TrueShift = SA_Shift + Error

TrueShift is the psychovisual best and SA_Shift is the scene analysis result. The success of the scene analysis algorithm is characterized by the standard deviation of the error.
If image parameters could be established with respect to which the errors are not completely random, then

TrueShift = SA_Shift + f(Parameters) + ReducedError

wherein f(Parameters) represents a function of image parameters which correlates out some of the error variability. The correlating parameters are selected cautiously to assure that the result will remain independent of the starting point. A parameter, such as Ymax or Ymin, which is related to an absolute luminance value in the original scene, will introduce implicit knowledge of the correct answer into the correlation process and give excellent results which, however, are spurious and have no general predictive value. Valid correlating parameters must themselves be shift invariant, i.e. relating to luminance differences.
The correlation tendencies of each of the following invariant parameters were tested.
ActAveO       The average of all unthresholded Activity values
MeanMMid      HistoMean - HistoMid
MeanMMidSq    MeanMMid^2
LumBal        [(HistoMax - HistoMean) - (HistoMean - HistoMin)] / HistoRange
LumBalSq      LumBal^2
Yrange        Overall luminance dynamic range (Ymax - Ymin) of the histogram
Range_Ratio   HistoRange / DestinationWindowWidth

In the above definitions, HistoMax is the maximum luminance for which the histogram value is non-zero, HistoMin is the corresponding minimum luminance, HistoRange is HistoMax -
HistoMin, HistoMean is the weighted mean luminance of the histogram, HistoMid is the midpoint value of the histogram, and LumBal is the luminance balance. The following equation represents the overall multiple correlation of the scene analysis errors.
Error = -1.07 ActAveO - 0.41 LumBal + 0.004 LumBalSq + 0.61 MeanMMid

The other variables listed above but not included in the above equation did not test out as significant at the 0.05 level. The significance levels from the equation of the variables ActAveO, LumBal, LumBalSq, and MeanMMid were, respectively, 0.0001, 0.0018, 0.00018, and 0.0057. The standard deviation of the remaining error was improved by about 5%, and the correlation coefficient was 11%, whereby the correlation can account for 11% of the error variability. This represents a modest improvement, but it is so weak that it may be fortuitous.
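Applying the fitted regression as a correction follows directly from TrueShift = SA_Shift + f(Parameters) + ReducedError: the predicted error is added to the scene analysis shift. The function and argument names below are illustrative, not from the patent.

```python
def corrected_shift(sa_shift, act_ave0, lum_bal, mean_m_mid):
    """Add the regression-predicted error f(Parameters) to the
    scene-analysis shift, moving it toward TrueShift."""
    predicted_error = (-1.07 * act_ave0
                       - 0.41 * lum_bal
                       + 0.004 * lum_bal ** 2
                       + 0.61 * mean_m_mid)
    return sa_shift + predicted_error
```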
Experiments have also been tried for grouping the data according to a variable such as Range_Ratio or LumBal and then doing a multiple correlation within each group. This gives a little better result (up to about 8% improvement in the standard deviation of the overall error) but has the disadvantage that the number of data points in each group is smaller, leading to concerns about the general applicability of the correlations to new images not in the experimental database. The application of multiple correlation of errors could thus be useful in applications where the characteristics of a photographic space of interest are well defined and there is a statistically valid representative sample of the workable photographic space, e.g., application to identification photos, industrial documentation photos, etc.
Luminance Adaptive Activity Threshold

The human visual system has a nonlinear response to differences in luminance expressed by the relation

L* = 116 (Y/Yn)^(1/3) - 16

where Y is luminance, Yn is the reference luminance illuminating the scene, and L* is a lightness measure in L*a*b* space in which differences bear a linear relation to human perceptual response. It makes sense theoretically to define the activity threshold in terms of L* rather than the usual log luminance scale, because the perceptible detail in the image is sought. A given threshold value encompasses considerably fewer L* steps in darker regions than in lighter ones - by about a factor of 3 when going from about 5% to 90% reflectance, allowing discrimination of smaller luminance increments at higher luminance levels. It should, therefore, be better to use a smaller luminance increment as the threshold for counting activities as a function of increasing sector luminance in such a way as to keep the L*
increment constant. An outline of the procedure to convert a given L* increment to the corresponding log(Y) increment follows.
1. Assume a digital log space in which the digital value (y) is defined by

y = dps * log2(Y) + yn

Here dps is the number of digits per stop, log2(Y) represents the logarithm base 2 of linear space luminance, and yn is the digital value on the chosen scale corresponding to 100% reflectance when Y = 1.
2. Convert the reference y (yref), the value at the center of the equivalent delta y to be calculated, to the L* scale (Lref).
Lref = 116 * 2^((yref - yn)/(3 dps)) - 16

3. Determine the delta y (ydelta) equivalent to a given delta L* (Ldelta) by

x = Ldelta / (2 (Lref + 16))
ydelta = (6 dps / ln(2)) * arcsinh(x)

wherein ln(2) is the natural logarithm of 2.
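The three steps can be collected into a small lookup-table builder. This is a sketch assuming Yn = 1 (the 100% reflectance reference) and the eight-bit log2 scale with dps = 20 and yn = 160; the function name is illustrative.

```python
import math

def ydelta_for(y, dps=20, yn=160, l_delta=4.0):
    """Convert a fixed L* increment (l_delta) into the equivalent
    increment on the digital log2 luminance scale at level y."""
    l_ref = 116.0 * 2.0 ** ((y - yn) / (3.0 * dps)) - 16.0   # step 2
    x = l_delta / (2.0 * (l_ref + 16.0))                      # step 3
    return (6.0 * dps / math.log(2.0)) * math.asinh(x)

# Threshold lookup table over the 8-bit digital scale:
table = {y: ydelta_for(y) for y in range(256)}
```

As expected, the resulting thresholds shrink as sector luminance increases, keeping the perceptual (L*) increment constant.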
Using an eight bit log2 scale with dps = 20 and yn = 160, the following equivalent increment in y is found for a desired increment of 4 units in L* (with Ldelta = 4):

y      ydelta   % Reflectance
157    2.5      90

In order to use an L* increment as an activity threshold in the scene analysis, a lookup table is established expressing the above ydelta versus y relation, and the ydelta value for the activity luminance is used as the threshold for testing the activity of a sector. Since the ydelta values are appropriate only for a correctly exposed scene and since, in general, the original digital image may be off by some arbitrary amount, a preliminary estimate of the exposure shift must be made. This is accomplished by determining the shift necessary to bring Ylim,
previously defined in the Luminance Cutoff section, to coincide with yn, which is the white point for 100% reflection.
After testing several L* increments as activity thresholds, it was found that Ldelta = 4 gave statistical results which were virtually identical to those obtained by the standard system using a constant threshold value of 0.3 stop.
Scene Adaptive Contrast Adjustment

Up until now the object of scene analysis has been to determine a lightness shift necessary to achieve a high quality display of the image. If the scene has a dynamic range greater than the destination window (which is the case more often than not), this means that even if the principal subject is properly exposed the brighter and darker areas of the image will be clipped in the display, i.e., bright objects will be washed out and deep shadows will be blocked up. A strongly backlit background, for example, can completely disappear.
Using the activity based histogram, clipping of important information can be avoided by adjusting the scene contrast such that the entire histogram fits within the destination window. This means that the tone scale of a high contrast image is compressed.
The concept of tone scale adjustment is not new. It is common, particularly in video photography, to adjust the signal such that the overall dynamic range of the scene fills up the dynamic range of the display. But there is a distinct advantage to using the activity based histogram as the basis of the contrast adjustment in that the entire luminance range is not necessarily included, but just the range necessary to include the perceptible detail. Overbright or overdark areas that contain no detail (and so are thus not very interesting anyway) will have no influence. This minimizes the degree of contrast adjustment necessary.
This concept has been implemented in the simplest possible way. For each luminance value y in the image, a new value yadj is calculated as

yadj = (y - ymid) / RangeRatio + ymid

where ymid represents the midpoint of the histogram. This is a simple linear adjustment which brings the histogram limits to the same size as the destination window. At that point, the image is shifted to bring the histogram into coincidence with the destination window. One can think of other ways to do the contrast adjustment. There might be advantages to an asymmetric adjustment, for instance, in which different factors would be applied to the brighter and darker portions with the fixed point at print L* = 50 (mid-gray) or print L* = 60 (where illuminated flesh tones tend to appear) instead of at ymid. This would be particularly effective if very good exposure accuracy could be attained.

The danger of making large contrast reduction adjustments is that the image can take on a flat appearance, lacking punch. This is due to the reduction in sharpness and color saturation which accompany the effectively lower contrast. However, measures to compensate for these problems in image processing steps extraneous to scene analysis per se include the following:
1. Using a tone scale in the final destination application color map which is similar to the usual ideal film but which has a sharper toe and shoulder. This increases the effective dynamic range and so reduces the magnitude of the necessary contrast adjustment.
2. Using contrast adaptive sharpening. That is, sharpening more in proportion to the magnitude of the contrast reduction.
3. Combining scene analysis with a burning and dodging procedure for local brightness adjustment. Scene analysis is done twice - first using the standard algorithm, then the resulting image is passed through the burning and dodging procedure, and finally the contrast adaptive scene analysis is used on that result to produce the final result. Since burning and dodging reduces local contrast, for some images the global contrast reduction in the second scene analysis step can be less. Both burning and dodging and global contrast adjustment can bring in overexposed or underexposed foregrounds and backgrounds. They seem to be synergistic when used together.
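The basic linear adjustment described above, yadj = (y - ymid) / RangeRatio + ymid, can be sketched as follows. Clamping the ratio at 1, so that a low-contrast scene is never expanded, is an added assumption.

```python
def adjust_contrast(y_values, y_min, y_max, window_width):
    """Pull each luminance toward the histogram midpoint by the ratio
    of the histogram range to the destination window width, so the
    compressed range just fits the window."""
    y_mid = (y_min + y_max) / 2.0
    range_ratio = max((y_max - y_min) / window_width, 1.0)  # never expand
    return [(y - y_mid) / range_ratio + y_mid for y in y_values]
```

After this compression, a shift would still be applied to bring the histogram into coincidence with the destination window.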
One burning and dodging procedure for local brightness adjustment is implemented in the positioner 222 of Figure 2 by moving all the active luminance sectors of interest that fall outside of the destination window in the histogram of Figure 4A to the nearest histogram limit Ymax or Ymin. In other words, any luminance values of interest with an average luminance sector value less than Ymin are moved to Ymin, and any luminance values of interest with an average luminance sector value greater than Ymax are moved to Ymax.
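As a sketch, this repositioning amounts to clamping each out-of-window sector luminance to the nearest limit (the function name is illustrative):

```python
def clamp_out_of_window(sector_values, y_min, y_max):
    """Move active sector luminances falling outside [y_min, y_max]
    to the nearest limit, as in the positioner 222 procedure."""
    return [min(max(y, y_min), y_max) for y in sector_values]
```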
It is to be understood that the above described embodiments are merely illustrative of the present invention and represent a limited number of the possible specific embodiments that can provide applications of the principles of the invention. Numerous and varied other arrangements may be readily devised in accordance with these principles by those skilled in the art without departing from the spirit and scope of the invention as claimed.
To assure that the foregoing conditions are more or less satisfied depends, ultimately, on properly matching the scene lightness values to the tone scale of the medium, taking into account the particular scene characteristics, prevailing scene lighting conditions, and the medium characteristics. Given the variety of possible scenes and lighting conditions, proper matching can only be achieved regularly by understanding the complex interrelationships of the entire reproduction system and some probability estimate of the likelihood of the occurrence of typical scenes. This would include knowing, for example, the most likely distribution and intensities of scene illuminance patterns expected to be captured, the spectral reflectivity of commonly recorded objects expected to be reproduced, the spectral content of likely scene illuminance, and the spectral response and tone scale characteristics of the medium. In currently available amateur camera products, all of these interrelationships are typically automatically correlated for optimum exposure by a camera's automatic exposure control system which commonly utilizes a built-in "averaging" or "center-weighted" type meter for exposure prediction purposes. Further, in electronic imaging, images are acquired and represented as digital signals which can be manipulated, processed or displayed through the use of computers or other special purpose electronic hardware. The processing of these digital signals includes known techniques such as luminance averaging or center-weighting for automatic exposure control.
While the various known tone matching techniques may prove adequate for many purposes, they presently do not consider tone as a function of the detail that may be present in the subject. Consequently, the primary object of the present invention is to provide an improved system, and associated methods, for performing lightness adjustment of a digital image which is independent of large area luminance averaging. This and other objects will become apparent in view of the following descriptions, drawings and claims.
SUMMARY OF THE INVENTION
A system and method for processing a digital image signal which represents an image can be made to optimally map luminance values versus a tonal reproduction capability of a destination application. Specifically, the system includes a device for partitioning the image into blocks, then combining certain blocks into sectors. An average luminance block value is determined for each block and a difference is determined between the maximum and minimum average luminance block values for each sector. If the difference exceeds a predetermined threshold value, then the sector is labeled as an active sector and an average luminance sector value is obtained from the maximum and minimum average luminance block values. All active sectors of the image are plotted versus the average luminance sector values in a histogram, then the histogram is shifted via some predetermined criterion so that the average luminance sector values of interest will fall within a destination window corresponding to the tonal reproduction capability of a destination application.

BRIEF DESCRIPTION OF THE DRAWINGS
The aforementioned aspects and other features of the invention are described in detail in conjunction with the accompanying drawings in which the same reference numerals are used throughout for denoting corresponding elements, and wherein:
Figure 1 shows an exemplary embodiment of the image processing system of the invention;
Figure 2 is a block diagram of an image processing system according to the invention by which lightness adjustment of a digital image may be made;
Figure 3 is a magnified portion of an image showing individual pixels, 8 x 8 image blocks, and a 2 x 2 sector;
Figure 4A is a histogram of the number of active sectors corresponding to average sector luminance values plotted along a logarithmic scale; and

Figure 4B is the histogram of Figure 4A which has been remapped to target a desirable portion of the histogram into a specific tonal reproduction range.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The present invention relates to an image processing system and associated image processing methods by which the luminance content in an original scene may be optimally matched with the tone reproduction capabilities of a destination application such as an electronic output display device, hardcopy printer, or photographic reproduction device.
Figure 1 illustrates one exemplary embodiment of such a system. As can be seen, Figure 1 illustrates an electronic image processing system where an image signal source, such as an electronic still camera 10 or a scanner 12, provides an electronic image signal which represents an image of the subject (not shown). A computer 18 receives the electronic signal from the image signal source and thereafter processes the image signal electronically to provide any number of known image processing functions such as brightness adjustment. The processed image can be transmitted, i.e. output, to any destination device or destination application such as a diskette 16, a user monitor 20, a printer 14, or a remote monitor 26.
Operator interaction with the system is facilitated by use of a keyboard 22 or a mouse 24. Of course, the components shown in Figure 1 are merely exemplary rather than all inclusive of the many equivalent devices known by those skilled in the art. For instance, the image signal source could include any device which acts as an image signal source such as an electronic camera, a scanner, a camcorder, a charge coupled device, a charge injected device, etc. Also, it is noteworthy that the processing of the image need not necessarily occur solely in the computer 18. Indeed, various phases or aspects of the image processing could occur in the image signal source, the computer, or the destination output device.
Electronic image processing can occur in various domains such as the spatial domain or the frequency domain. An image is said to reside in the spatial domain when the values of parameters used to describe the image, such as brightness, have a direct correspondence with spatial location. Brightness of an image is defined as the attribute of sensation by which an observer is aware of differences in luminance. In the frequency domain, the image is represented as a matrix of frequency coefficients which are obtained by various transformation methods such as Fourier transformation, discrete cosine (DCT) transformation, etc.
One problem associated with known systems for brightness adjustment of a digital image is the lack of optimal exposure control. The simplest method of exposure control takes the overall luminance average of a scene and sets that average to coincide with the imaging system's reference gray value. This works well for scenes wherein the average luminance of the principal subject is the same as the overall average. However, this method is ineffective for difficult scenes which contain excessive backlighting or frontlighting or which have specular reflectances which can unduly affect the overall average upon which the simple exposure meter is based. In the case of excessive backlighting, the brightness of the background is significantly higher than that of the subject of the scene, and in the case of excessive frontlighting, the background is much darker than the subject.
More sophisticated exposure control systems typically extend the above simple exposure method by using more complicated averaging schemes. One of the most common averaging methods for exposure control uses a center weighted luminance average, since the principal subject is often placed near the center of the picture. For this method, the highest weighting is applied a little below the geometrical center in order to reduce the influence of a bright sky, which might be in the background. Other known methods segment the scene into a pattern of central and peripheral areas and determine a control metric from some logical combination of the luminance averages of these areas. These refinements, though they represent a considerable improvement, are, when presented with a difficult scene, still subject to the errors pointed out above which are inherent in any method which depends upon large area luminance averaging.
Figure 2 is a block diagram of the various elements of the image processing system for lightness adjustment of a digital image. It will be understood that each of the elements of the imaging processing system may be embodied, alternatively, as an on-board application specific integrated circuit (ASIC), field programmable gate array, or other forms of firmware, resident on one or more of the components of the system of Figure 1, or resident as an application program or program module in a general purpose computer such as that shown in Figure 1. The scene 201 is represented as a digital image by image signal source 207, e.g. a camera, camcorder, charge-coupled device, charge-injected device, scanner, etc. The image acquisition device 200 acquires the image signal, which contains both luminance data and chrominance data characteristic of the image, and separates the luminance data which is thereafter stored in input buffer 202. The present invention analyzes only the luminance data and as such, the chrominance components will not be discussed any further.
A portion of the image is represented in cutout in Figure 3 as a composite of pixels 300 of luminance data. The signal processing circuitry 205 retrieves the luminance data from input buffer 202, processes the luminance data for lightness adjustment, then stores the processed data in output buffer 224. Eventually, in a destination device 226, the processed data is used to reproduce the processed image in a destination application such as printing,
displaying, transmission to a downstream application, or the like. The signal processing circuitry 205, as mentioned earlier, can be implemented as discrete hardware components as shown in Figure 2, or the circuitry 205 could be emulated by a general purpose computer.
And again, the hardware included in circuitry 205 could be completely or partially housed in other system elements such as the image signal generator 207, the image acquisition device 200, or the output device 226.
The signal processing circuitry 205 specifically includes a partitioner 204 which reads the luminance data from input buffer 202 and partitions the luminance data into a plurality of image blocks of predetermined size ranging from one to M x N pixels, where M is the number of rows and N is the number of columns in the image. In the preferred embodiment, the luminance data is segmented into 8 x 8 image blocks which conform to conventional block sizes for image processing as recommended, for instance, in Section 3.3 of the digital image compression standard 10918-1 of the International Standards Organization Joint Photographic Experts Group (ISO/JPEG). Figure 3 demarcates four separate 8 x 8 image blocks 302, 304, 306 and 308. A sectorizer 206 combines a preselected group of the 8 x 8 image blocks into sectors which provides a lower resolution representation of the image than that of the pixel level. However, in the case when lowering the resolution of the pixels 300 is not desirable, the sectorizer 206 could be bypassed and, for purposes of the processing to follow, each pixel 300 could be interpreted as a separate sector. In the preferred embodiment,
sector 310 is defined as four 8 x 8 pixel adjacent image blocks combined into a single 2 x 2 sector. Block averager 208 then determines an average luminance block value for each image block in each sector. The average luminance block value can either be determined in the spatial domain by averaging each of the 64 pixels in each image block, or by transforming the luminance data of the pixels into discrete cosine transform (DCT) coefficients, then using the direct current element (0,0) of each 8 x 8 block of DCT coefficients as the average value for the image block. In a subtractor 210, the maximum and minimum average luminance block values are determined for each sector, then a difference is determined therebetween.
A sector thresholder 212 compares the difference between the maximum and minimum average luminance block values for each sector with a predetermined threshold value and, when the difference is greater than the predetermined threshold value, then that sector is defined as an active sector. Once a sector is defined as an active sector, the maximum and minimum average luminance block values for that sector are averaged together to establish an average luminance sector value in a sector averager 214. Counter 216 counts the number of active sectors corresponding to each average luminance sector value, which typically ranges from 0 to 255 (white). Alternatively, the average luminance sector value of each active sector could be first weighted in sector averager 214, then counted in device 216, by any of a number of well-known weighting algorithms, so that the count of the number of active sectors would be altered accordingly. Once all the active sectors of the image have been weighted and counted, then a histogram such as the one shown in Figure 4A is created by histogram generator 218. Figure 4A shows the histogram where the number of active sectors is depicted along the vertical axis versus the average luminance sector values depicted along the horizontal axis. Figure 4A also shows a destination window which represents the tone scale or tone reproduction capability corresponding to a destination application such as a printer, display, or other downstream application in the image processing chain of the invention. Here, the destination application is depicted as a destination output device 226.
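The chain from partitioner 204 through histogram generator 218 (omitting the optional weighting step) can be sketched as follows. The 8 x 8 blocks and 2 x 2 sectors follow the preferred embodiment; the threshold of 6 units is an assumption, corresponding to roughly 1/3 stop if the luminance values are on a 20-units-per-stop log scale.

```python
import numpy as np

def activity_histogram(luma, threshold=6):
    """Partition the luminance plane into 8x8 blocks, average each
    block, group blocks into 2x2 sectors, mark a sector active when
    its max-min block difference exceeds the threshold, and histogram
    the active sectors by average luminance sector value (0..255)."""
    h, w = luma.shape
    bh, bw = h // 8, w // 8
    # Block averager: mean over each 8x8 tile.
    blocks = luma[:bh * 8, :bw * 8].reshape(bh, 8, bw, 8).mean(axis=(1, 3))
    hist = np.zeros(256, dtype=int)
    for i in range(0, bh - 1, 2):
        for j in range(0, bw - 1, 2):
            sector = blocks[i:i + 2, j:j + 2]
            y_max, y_min = sector.max(), sector.min()
            if y_max - y_min > threshold:            # active sector
                ave = int(round((y_max + y_min) / 2))
                hist[min(max(ave, 0), 255)] += 1
    return hist
```

Running this on a luminance plane yields a Figure 4A style histogram of active sectors versus average luminance sector value; flat regions contribute nothing.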
From Figure 4A, it is clear that part of the luminance information (represented as active sectors of average luminance values) cannot be reproduced by the destination device 226, i.e., only the luminance data which falls within the destination window of the particular destination device will be reproduced. This problem is overcome in selector 220 by first providing a selection criterion for positioning the histogram of Figure 4A. Then, a positioner 222 will maneuver the histogram of Figure 4A so that desirable luminance information will fall within the established destination window in accordance with the selection criterion. For instance, Figure 4B shows the histogram of Figure 4A shifted so that the average luminance values corresponding to the highest occurrences of active sectors appear in the destination window. The various criteria for deciding how to shift or otherwise position the histogram in positioner 222 are predetermined according to a number of considerations which will be detailed further on in this application. The shifted luminance values of the histogram of Figure 4B are stored in output buffer 224 and are thereafter printed, displayed or otherwise output to or by the destination device 226.
The approach of the present invention relies on scene analysis for solving various problems associated with the lightness adjustment of an image. It has been verified using psychovisual analysis whereby an experimental group of human observers compared a series of images of a common scene, each image varying only in brightness. Each observer selects the one image of the series which is most aesthetically pleasing. While the eyes sense a scene as a luminance bitmap, the content of the scene is ascertained by reading the visual information in the form of edges, textures and shadings. The quality of the image depends critically upon how well and to what extent this visual information is represented in the displayed image. Hypothetically, the optimal exposure should be the one which best preserves the visual information of the scene in easily readable form.
Each form of the above described visual information is represented according to changes in luminance, defined as the luminance activity. There are three parameters which define a luminance activity: (1) the magnitude of the luminance change between portions of an image; (2) the average luminance value over the area in which the luminance change occurs; and (3) the geometric distance over which the luminance change occurs.
The scene analysis method of the present invention is based on the observation that only the lnmin~nre content of a scene should be considered in m~king lightn~ee adjnctmente where some detail of interest resides. Consequently, the overall strategy is to build sectors at the resolution of hll~ol l~lce and then use a metric to interrogate those sectors for the presence 10 of detail activity. In particular, the scene analysis method of the present invention is based on a low resolution image derived from the average lnmin~nce of an image block. This low resolution image is divided into 2 x 2 sectors (for the ~l~f~ d embodiment). Preferably, the size of the 2 x 2 sectors corresponds approximately to the peak resolution of the human visual system at the final image size. The magnitude of the sector activity is taken as Ymag = Ymax ~ Ymln where YmaX and Ymjn are the m~hllulll and minimllm lllmin~nre sector values of the four blocks in a sector. The average lumin~nce segment value for any given sector can be taken as Ave Yseg = (Ymax + Ymin)/2 Since a lnmin~nce change which is not noticeable is not important, only activities 2 0 whose magnitudes exceed some predetermin~cl threshold value are counted. Best empirical results have come using a threshold equivalent to about 1/3 of an F-stop or a density of about 0.1. This activity metric is a non linear, omni directional 'detail' finder which has some sensitivity to all of the dirr~ ll types of information element~ mentioned above, on a half 1!
wave scale of eight high resolution pixels. A histogram is formed by counting the number of over-threshold activities as a function of the average luminance sector values. This histogram shows image detail which is the basis of the analysis to estimate the optimal exposure. Large light or dark areas which contain no detail will not affect the results.
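As a concrete sketch, the 2 x 2 sector scan and over-threshold counting described above might look like the following. This is illustrative Python, not the patent's code: the function name, the 8-bit log-luminance scale with 20 units per F-stop, and the default threshold of 6 units (about 1/3 stop) are all assumptions.

```python
import numpy as np

def activity_histogram(block_lum, threshold=6, n_bins=256):
    """Build the detail-activity histogram described in the text.

    block_lum : 2-D array of block-average luminances on a log scale
                (assumed here: 8-bit, 20 units per F-stop).
    threshold : minimum Ymag counted as activity (~1/3 F-stop).
    """
    h, w = block_lum.shape
    hist = np.zeros(n_bins, dtype=int)
    # Tile the low-resolution image into non-overlapping 2 x 2 sectors.
    for i in range(0, h - 1, 2):
        for j in range(0, w - 1, 2):
            sector = block_lum[i:i + 2, j:j + 2]
            ymax, ymin = sector.max(), sector.min()
            ymag = ymax - ymin                 # sector activity magnitude
            if ymag > threshold:               # count only noticeable changes
                ave_yseg = int(round((ymax + ymin) / 2))  # Ave Yseg
                hist[min(max(ave_yseg, 0), n_bins - 1)] += 1
    return hist
```

Flat, detail-free regions never exceed the threshold and so contribute nothing, which is exactly the property the text relies on.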
If the dynamic range of the detail luminance histogram is the same as or smaller than that of the destination window representing the tonal reproduction range corresponding to the tonal reproduction capability of the destination application, then it is only necessary to reposition the histogram on a logarithmic luminance scale such that it fits within the destination window. The destination window useful for reproduction of an ideal film with a slope of 1.5 is about 4 stops, considerably smaller than the range of a typical activity luminance histogram, which can be up to 2 or 3 stops greater. Since there is generally a considerable overlap outside the destination window, some of the detail information is clipped and lost by the output limitations of the destination device.
The question of the optimal exposure shift is one of positioning the histogram over the destination window to get the best result, recognizing that the information represented by the ends of the histogram which extend beyond the print window may be lost. The best or desired results are, of course, dependent upon the specific application requirements.
An example of an image histogram based on activity luminance is shown in Figure 4A. The luminance scale is in logarithmic units to the base 2 with 20 units per F stop. The vertical dotted lines represent the limits of the destination window. Numerous parameters for positioning the histogram into the destination window are possible in accordance with the specific application requirements. The following four exemplary parameters have been tested.
MidShift: Set the midpoint of the histogram range to the midpoint of the destination window.
MeanShift: Determine the weighted mean of the activity luminance, using the activity counts as weights, and set the weighted mean to the midpoint of the destination window.
MaxShift: Shift the histogram on the log luminance scale such that the maximum possible number of counts are included in the destination window.
EqEndShift: Shift the histogram such that the same number of counts are excluded from the destination window at either end of the histogram.
These four parameters differ in their sensitivity to the range, the shape and the symmetry of the histogram. The best test results of any one of the above parameters occurred with the EqEndShift. Somewhat better results were obtained by either averaging all four of the above parameters, or by averaging the MidShift, the MeanShift, and the EqEndShift.
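The four positioning parameters above can be sketched as follows. This is hypothetical Python: integer-valued histogram bins, a unit-step shift search, and the function and variable names are all assumptions, not the patent's implementation.

```python
import numpy as np

def positioning_shifts(hist, win_lo, win_hi):
    """Compute MidShift, MeanShift, MaxShift and EqEndShift for a
    detail-activity histogram and a destination window [win_lo, win_hi]."""
    lum = np.arange(len(hist))
    active = lum[hist > 0]
    h_min, h_max = active.min(), active.max()
    win_mid = (win_lo + win_hi) / 2.0

    mid_shift = win_mid - (h_min + h_max) / 2.0          # MidShift
    mean_shift = win_mid - np.average(lum, weights=hist)  # MeanShift

    def included(s):   # counts landing inside the window after shift s
        shifted = lum + s
        return hist[(shifted >= win_lo) & (shifted <= win_hi)].sum()

    candidates = np.arange(win_lo - h_max, win_hi - h_min + 1)
    max_shift = candidates[np.argmax([included(s) for s in candidates])]

    def end_diff(s):   # low-end minus high-end excluded counts
        shifted = lum + s
        return hist[shifted < win_lo].sum() - hist[shifted > win_hi].sum()

    eq_end_shift = min(candidates, key=lambda s: abs(end_diff(s)))
    return mid_shift, mean_shift, max_shift, eq_end_shift
```

Averaging three or all four of the returned shifts, as the text reports, is then a one-line follow-up.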
Testing of the above parameters was accomplished with a digital image library of 594 digital images of a wide variety of scenes, including an oversampling of difficult scenes, e.g. snow scenes, beach and water scenes, high contrast back-lit and side-lit scenes, etc. Each image was obtained by photographing a scene using Kodak VPS color negative film and scanning the processed negative with a high resolution Itek scanner. The scanner was calibrated to world XYZ space using a set of known color reference patches over a broad exposure range. This calibration enables printing of the scanned digital images on a similarly calibrated high quality color printer.
Each image was subjected to psychovisual testing with a panel of 25 people who selected a preferred exposure from a series of prints of the same scene made at eight equally spaced levels of brightness. This provided the "psychovisual best" exposure data for comparing estimates of the best exposure from the scene analysis exposure method. The differences between the scene analysis estimates and the corresponding psychovisual best exposures were determined for each of the 594 images. The results were determined using the standard deviation of these differences. Some of the results are summarized below.
Results of Scene Analysis
Values are log Exposure, 20 units per F stop

    Method            Std. Dev.   Max. Error
    Center Weight     13.7        53
    MidShift          12.5        51
    MeanShift         11.3        34
    MaxShift          11.3        42
    EqEndShift        10.3        35
    EqEndMeanMidAve   10.1        30
    AllAve            10.1        31

The above results include center weighting and cutoff procedures to be described later. The first entry is the result of a weighted luminance average. The last two entries are the result of averaging the exposure estimates from three, or all four, of the scene analysis histogram evaluation methods.
Almost any method of adjusting exposure -- even a good guess based upon prior experience and personal judgment -- can give quite acceptable results for a majority of scenes, since the most common scenes tend to contain a well distributed set of brightness values.
Improvements in exposure control reduce the variability in the common, easy scenes and bring a greater range of difficult scenes into the acceptable and higher quality categories. The scene analysis method reduced the maximum error in the tested images for the center-weighted average from over 2.5 stops to just 1.5 stops while making a 30% reduction in the standard deviation of the exposure error.
Additional improvements or modifications of the above described basic scene analysis can give modest improvement in the overall results. These new factors, listed below, are described in the following sections.
1. Center weighting of over-threshold luminance activity counts
2. High luminance cutoff
3. Multiple correlation of errors using shift invariant image parameters
4. Luminance adaptive activity threshold
5. Scene adaptive contrast adjustment based on luminance activity

Center Weighting
Almost all camera exposure control mechanisms use some sort of center weighting.
This approach takes advantage of the general tendency to place the principal subject near the center of the frame when taking a picture. Many automatic cameras use an exposure meter covered by an asymmetric lens which produces a slightly off-center weighted response.
To use center weighting in the above scene analysis procedure, the histogram is built by adding up the position dependent weights for the active sectors (instead of simply counting them) associated with a given average luminance sector value. Several weighting matrices have been tested, with the best results coming from the following weighting matrix of the exposure meter used in one automatic camera.
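A center-weighted variant of the histogram build might look like the following sketch. The Gaussian-ellipse weight generator here is only a stand-in for the actual interpolated 19 x 23 matrix, whose numeric values do not survive in this text; all names and constants are assumptions.

```python
import numpy as np

def center_weights(n_rows, n_cols):
    """Illustrative stand-in for the interpolated weighting matrix:
    a Gaussian-like ellipse peaking at 100% about 5/8 of the way
    down, falling toward roughly 10% at the edges (per the text)."""
    ci, cj = 0.625 * (n_rows - 1), 0.5 * (n_cols - 1)   # peak location
    ii, jj = np.mgrid[0:n_rows, 0:n_cols]
    d2 = ((ii - ci) / (0.6 * n_rows)) ** 2 + ((jj - cj) / (0.6 * n_cols)) ** 2
    return np.exp(-2.3 * d2)

def weighted_activity_histogram(block_lum, threshold=6, n_bins=256):
    """Center-weighted histogram: accumulate each active sector's
    positional weight instead of a unit count."""
    h, w = block_lum.shape
    wts = center_weights(h // 2, w // 2)     # one weight per 2x2 sector
    hist = np.zeros(n_bins)
    for i in range(0, h - 1, 2):
        for j in range(0, w - 1, 2):
            sector = block_lum[i:i + 2, j:j + 2]
            if sector.max() - sector.min() > threshold:
                y = int(round((sector.max() + sector.min()) / 2))
                hist[min(max(y, 0), n_bins - 1)] += wts[i // 2, j // 2]
    return hist
```

In practice the real matrix would be interpolated to the sector image's aspect ratio and resolution, as the text states; the analytic ellipse above merely mimics that shape.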
Weighting Matrix in Percentages

This 19 x 23 matrix gives maximum weight of 100% to cell (12,12), centered laterally but about 5/8 of the distance down from the top of the matrix. The weights fall off in a more or less Gaussian ellipse to ten percent at the edges for this particular weighting matrix. In practice this weighting matrix is interpolated to both the aspect ratio and the resolution of the sector image of the scene to be analyzed. Results using this procedure show a modest improvement of about 4% in the standard deviation of the scene analysis exposure error.
Luminance Cutoff

Very high contrast scenes often contain specular reflections from waves, ice crystals, etc., which produce over-threshold activities that add to the high luminance end of the histogram and tend to make the resultant image too dark. Attempts to avoid this phenomenon have been made by establishing a luminance limit beyond which the histogram values are set to zero, i.e., active sectors whose average luminance sector values are above this limit are not considered in the analysis. The luminance limit, Ylim, for a given scene is given as follows:
if Ymax - Ymin > Cutoff then Ylim = Ymin + Cutoff, else Ylim = Ymax

where Ymax and Ymin are, respectively, the maximum and minimum average luminance sector values in the scene and Cutoff is the upper limit of the dynamic range. The optimum value for Cutoff was determined by experiment, using the 594 images from the image database, to be 7.5 stops. Incorporating the luminance cutoff procedure into the scene analysis reduced the standard deviation of the overall exposure error by just 1% (since it affects only a few of the highest contrast images), but reduced the maximum too-dark error by 10%.
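The cutoff rule can be sketched as follows. Assumptions: the 20-units-per-stop log scale (so Cutoff = 7.5 stops = 150 units), and the exact placement of the `+ Cutoff` term, which is a reconstruction of a garbled formula in the source.

```python
import numpy as np

def luminance_limit(y_min, y_max, cutoff=150):
    """Ylim per the text: clip the top of very high contrast scenes,
    otherwise keep the full range (cutoff = 7.5 stops assumed)."""
    if y_max - y_min > cutoff:
        return y_min + cutoff      # specular highlights beyond this are ignored
    return y_max                   # scene fits: no cutoff needed

def apply_cutoff(hist, y_min, y_max, cutoff=150):
    """Zero histogram bins above Ylim so specular sectors drop out."""
    y_lim = luminance_limit(y_min, y_max, cutoff)
    out = hist.copy()
    out[y_lim + 1:] = 0
    return out
```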
Multiple Correlation of Errors

A main object of scene analysis is basically to predict the luminance shift which will produce a print at the same lightness level that was chosen by the psychovisual testing panel as the psychovisual best for any given image.
TrueShift = SA_Shift + Error

TrueShift is the psychovisual best and SA_Shift is the scene analysis result. The success of the scene analysis algorithm is characterized by the standard deviation of the error.
If image parameters could be established with respect to which the errors are not completely random, then

TrueShift = SA_Shift + f(Parameters) + ReducedError

wherein f(Parameters) represents a function of image parameters which correlates out some of the error variability. The correlating parameters are selected cautiously to assure that the result will remain independent of the starting point. A parameter, such as Ymax or Ymin, which is related to an absolute luminance value in the original scene, will introduce implicit knowledge of the correct answer into the correlation process and give excellent results which, however, are spurious and have no general predictive value. Valid correlating parameters must themselves be shift invariant, i.e. relating to luminance differences.
The correlation tendencies of each of the following invariant parameters were tested.
    ActAveO      The average of all unthresholded Activity values
    MeanMMid     HistoMean - HistoMid
    MeanMMidSq   MeanMMid^2
    LumBal       [(HistoMax - HistoMean) - (HistoMean - HistoMin)] / HistoRange
    LumBalSq     LumBal^2
    Yrange       Overall luminance dynamic range (Ymax - Ymin) of the histogram
    Range_Ratio  HistoRange / DestinationWindowWidth

In the above definitions, HistoMax is the maximum luminance for which the histogram value is non-zero, HistoMin is the corresponding minimum luminance, HistoRange is HistoMax -
HistoMin, HistoMean is the weighted mean luminance of the histogram, HistoMid is the midpoint value of the histogram, and LumBal is the luminance balance. The following equation represents the overall multiple correlation of the scene analysis errors.
Error = -1.07 ActAveO - 0.41 LumBal + 0.004 LumBalSq + 0.61 MeanMMid

The other variables listed above but not included in the above equation did not test out as significant at the 0.05 level. The significance levels from the equation of the variables ActAveO, LumBal, LumBalSq, and MeanMMid were, respectively, 0.0001, 0.0018, 0.00018, and 0.0057. The standard deviation of the remaining error was improved by about 5%, and the correlation coefficient was 11%, whereby the correlation can account for 11% of the error variability. This represents a modest improvement, but it is so weak that it may be fortuitous.
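The shift-invariant parameters and the fitted error model can be sketched as follows. This is hypothetical Python: the bin indexing, dictionary packaging, and function names are illustrative choices, and the fitted coefficients are taken directly from the equation above.

```python
import numpy as np

def invariant_parameters(hist, win_width):
    """Shift-invariant histogram parameters from the text."""
    lum = np.arange(len(hist))
    nz = lum[hist > 0]
    h_min, h_max = nz.min(), nz.max()
    h_range = h_max - h_min                       # HistoRange
    h_mean = np.average(lum, weights=hist)        # HistoMean
    h_mid = (h_min + h_max) / 2.0                 # HistoMid
    mean_m_mid = h_mean - h_mid
    lum_bal = ((h_max - h_mean) - (h_mean - h_min)) / h_range
    return {
        "MeanMMid": mean_m_mid,
        "MeanMMidSq": mean_m_mid ** 2,
        "LumBal": lum_bal,
        "LumBalSq": lum_bal ** 2,
        "Yrange": h_range,
        "Range_Ratio": h_range / win_width,
    }

def corrected_shift(sa_shift, act_ave0, p):
    """TrueShift estimate: SA_Shift plus the fitted error model
    Error = -1.07 ActAveO - 0.41 LumBal + 0.004 LumBalSq + 0.61 MeanMMid."""
    error = (-1.07 * act_ave0 - 0.41 * p["LumBal"]
             + 0.004 * p["LumBalSq"] + 0.61 * p["MeanMMid"])
    return sa_shift + error
```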
Experiments have also been tried for grouping the data according to a variable such as Range_Ratio or LumBal and then doing a multiple correlation within each group. This gives a little better result (up to about 8% improvement in the standard deviation of the overall error) but has the disadvantage that the number of data points in each group is smaller, leading to concerns about the general applicability of the correlations to new images not in the experimental database. The application of multiple correlation of errors could thus be useful in applications where the characteristics of a photographic space of interest are well defined and there is a statistically valid representative sample of the workable photographic space, e.g., application to identification photos, industrial documentation photos, etc.
Luminance Adaptive Activity Threshold

The human visual system has a nonlinear response to differences in luminance, expressed by the relation

L* = 116(Y/Yn)^(1/3) - 16

where Y is luminance, Yn is the reference luminance illuminating the scene, and L* is a lightness measure in L*a*b* space in which differences bear a linear relation to human perceptual response. It makes sense theoretically to define the activity threshold in terms of L* rather than the usual log luminance scale, because the perceptible detail in the image is sought. A given threshold value encompasses considerably fewer L* steps in darker regions than in lighter ones - by about a factor of 3 when going from about 5% to 90% reflectance, allowing discrimination of smaller luminance increments at higher luminance levels. It should, therefore, be better to use a smaller luminance increment as the threshold for counting activities as a function of increasing sector luminance in such a way as to keep the L*
increment constant. An outline of the procedure to convert a given L* increment to the corresponding log(Y) increment follows.
1. Assume a digital log space in which the digital value (y) is defined by

y = dps * log2(Y) + yn

Here dps is the number of digits per stop, log2(Y) represents the logarithm base 2 of linear space luminance, and yn is the digital value on the chosen scale corresponding to 100% reflectance when Y = 1.
2. Convert the reference y (yref), the value at the center of the equivalent delta y to be calculated, to the L* scale (Lref).
Lref = 116 * 2^((yref - yn)/(3 dps)) - 16

3. Determine the delta y (ydelta) equivalent to a given delta L* (Ldelta) by

x = Ldelta / (2(Lref + 16))
ydelta = (6 dps / ln(2)) * arcsinh(x)

wherein ln(2) is the natural logarithm of 2.
Using an eight bit log2 scale with dps = 20 and yn = 160, the following equivalent increments in y are found for a desired increment of 4 units in L* (with Ldelta = 4):

    y     ydelta   % Reflectance
    157   2.5      90

In order to use an L* increment as an activity threshold in the scene analysis, a lookup table is established expressing the above ydelta versus y relation, and the ydelta value for the activity luminance is used as the threshold for testing the activity of a sector. Since the ydelta values are appropriate only for a correctly exposed scene and since, in general, the original digital image may be off by some arbitrary amount, a preliminary estimate of the exposure shift must be made. This is accomplished by determining the shift necessary to bring Ylim,
previously defined in the Luminance Cutoff section, to coincide with yn, which is the white point for 100% reflection.
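Steps 1-3 can be collected into a small conversion routine, sketched below. Assumptions: the table range 0-255 for an eight bit scale, and the defaults dps = 20, yn = 160 and Ldelta = 4 taken from the text. The routine follows the reconstructed formulas above, so its numeric outputs may not match the source's (partially garbled) worked example exactly.

```python
import math

def ydelta_for(y, dps=20, yn=160, l_delta=4.0):
    """Log-luminance increment ydelta equivalent to a fixed L*
    increment l_delta, centered at digital value y."""
    l_ref = 116.0 * 2.0 ** ((y - yn) / (3.0 * dps)) - 16.0   # step 2
    x = l_delta / (2.0 * (l_ref + 16.0))                     # step 3
    return (6.0 * dps / math.log(2.0)) * math.asinh(x)

# Lookup table of ydelta vs. y, used as the adaptive activity
# threshold: smaller increments count as activity at higher luminance.
table = {y: ydelta_for(y) for y in range(0, 256)}
```

The monotonic fall of ydelta with increasing y is the whole point: a constant perceptual (L*) step maps to a shrinking log-luminance step as scenes get brighter.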
After testing several L* increments as activity thresholds, it was found that Ldelta = 4 gave statistical results which were virtually identical to those obtained by the standard system using a constant threshold value of 0.3 stop.
Scene Adaptive Contrast Adjustment

Up until now the object of scene analysis has been to determine a lightness shift necessary to achieve a high quality display of the image. If the scene has a dynamic range greater than the destination window (which is the case more often than not), this means that even if the principal subject is properly exposed the brighter and darker areas of the image will be clipped in the display, i.e., bright objects will be washed out and deep shadows will be blocked up. A strongly backlit background, for example, can completely disappear.
Using the activity based histogram, clipping of important information can be avoided by adjusting the scene contrast such that the entire histogram fits within the destination window. This means that the tone scale of a high contrast image is compressed.
The concept of tone scale adjustment is not new. It is common, particularly in video photography, to adjust the signal such that the overall dynamic range of the scene fills up the dynamic range of the display. But there is a distinct advantage to using the activity based histogram as the basis of the contrast adjustment in that the entire luminance range is not necessarily included, but just the range necessary to include the perceptible detail. Overbright or overdark areas that contain no detail (and which are thus not very interesting anyway) will have no influence. This minimizes the degree of contrast adjustment necessary.
This concept has been implemented in the simplest possible way. For each luminance value y in the image, a new value yadj is calculated as

yadj = (y - ymid) / RangeRatio + ymid

where ymid represents the midpoint of the histogram. This is a simple linear adjustment which brings the histogram limits to the same size as the destination window. At that point, the image is shifted to bring the histogram into coincidence with the destination window. One can think of other ways to do the contrast adjustment. There might be advantages to an asymmetric adjustment, for instance, in which different factors would be applied to the brighter and darker portions with the fixed point at print L* = 50 (mid-gray) or print L* = 60 (where illuminated flesh tones tend to appear) instead of at ymid. This would be particularly effective if very good exposure accuracy could be attained. The danger of making large contrast reduction adjustments is that the image can take on a flat appearance, lacking punch. This is due to the reduction in sharpness and color saturation which accompany the effectively lower contrast. However, measures to compensate for these problems in image processing steps extraneous to scene analysis per se include the following:
1. Using a tone scale in the final destination application color map which is similar to the usual ideal film but which has a sharper toe and shoulder. This increases the effective dynamic range and so reduces the magnitude of the necessary contrast adjustment.

2. Using contrast adaptive sharpening. That is, sharpening more in proportion to the magnitude of the contrast reduction.
3. Combining scene analysis with a burning and dodging procedure for local brightness adjustment. Scene analysis is done twice - first using the standard algorithm, then the resulting image is passed through the burning and dodging procedure, and finally the contrast adaptive scene analysis is used on that result to produce the final result. Since burning and dodging reduces local contrast, for some images the global contrast reduction in the second scene analysis step can be less. Both burning and dodging and global contrast adjustment can bring in overexposed or underexposed foregrounds and backgrounds. They seem to be synergistic when used together.
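The linear compression yadj = (y - ymid) / RangeRatio + ymid can be sketched as below. One assumption is added beyond the text: skipping compression when RangeRatio <= 1 (the histogram already fits), which the text implies but does not state as code.

```python
def adjust_contrast(y, y_mid, range_ratio):
    """Linear tone-scale compression toward the histogram midpoint.

    range_ratio : histogram range / destination window width; values
                  above 1 mean the scene exceeds the window."""
    if range_ratio <= 1.0:
        return y                          # already fits: no compression
    return (y - y_mid) / range_ratio + y_mid
```

Applied per pixel, this pulls every luminance toward ymid by the same factor, after which a global shift aligns the compressed histogram with the destination window.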
One burning and dodging procedure for local brightness adjustment is implemented in the positioner 222 of Figure 2 by moving all the active luminance sectors of interest that fall outside of the destination window in the histogram of Figure 4A to the nearest histogram limit, Ymax or Ymin. In other words, any luminance value of interest with an average luminance sector value less than Ymin is moved to Ymin, and any luminance value of interest with an average luminance sector value greater than Ymax is moved to Ymax.
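That clamping rule reduces to a clip toward the nearest histogram limit; a minimal sketch follows (the array-based form and function name are illustrative assumptions, not the positioner's actual interface).

```python
import numpy as np

def clamp_sectors_to_window(sector_lum, y_min, y_max):
    """Move out-of-range average sector luminances to the nearest
    histogram limit, per the burning-and-dodging description."""
    return np.clip(sector_lum, y_min, y_max)
```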
It is to be understood that the above described embodiments are merely illustrative of the present invention and represent a limited number of the possible specific embodiments that can provide applications of the principles of the invention. Numerous and varied other arrangements may be readily devised in accordance with these principles by those skilled in the art without departing from the spirit and scope of the invention as claimed.
Claims (4)
1. A method for processing a digital input image and exporting or rendering the processed image to a downstream application or device having a given tonal reproduction range, said method characterized by:
partitioning the input image into sectors of predetermined size;
determining for each said sector a luminance activity defined as a difference in luminance between at least two pixels within each said sector;
generating an average luminance value for each said sector having said luminance activity greater than a predetermined threshold value;
generating a count of sectors having each said average luminance value;
mapping said average luminance values, having counts falling within a predetermined selection criterion, into the tonal reproduction range; and exporting the processed image to the downstream application in response to said mapped luminance values.
2. The method of claim 1, wherein said predetermined selection criterion is characterized by any one or more of:
(A) setting a midpoint of a range of said average luminance values to a midpoint of the tonal reproduction range;
(B) determining a weighted mean of said average luminance values, then setting said weighted mean to said midpoint of the tonal reproduction range;
(C) maximizing said average luminance values within the tonal reproduction range;
(D) excluding an equal number of low end and high end average luminance values from the tonal reproduction range;
(E) providing an average of (A), (B) and (C); and (F) providing an average of (A), (B), (C) and (D).
3. A system for processing a digital input image and exporting or rendering the processed image to a destination application or device having a given tonal reproduction range, said system comprising:
means for partitioning the input image into sectors of predetermined size;
means for determining for each said sector a luminance activity defined as a difference in luminance between at least two pixels within each said sector;
means for generating an average luminance value for each said sector having said luminance activity greater than a predetermined threshold value;
means for generating a count of sectors having each said average luminance value;
means for mapping said average luminance values, having counts falling within a predetermined selection criterion, into the tonal reproduction range; and means for exporting or rendering the processed image to the destination application or device in response to said mapped luminance values.
4. The system of claim 3, wherein said predetermined selection criterion is characterized by any one or more of:
(A) setting a midpoint of a range of said average luminance values to a midpoint of the tonal reproduction range;
(B) determining a weighted mean of said average luminance values, then setting said weighted mean to said midpoint of the tonal reproduction range;
(C) maximizing said average luminance values within the tonal reproduction range;
(D) excluding an equal number of low end and high end average luminance values from the tonal reproduction range;
(E) providing an average of (A), (B) and (C); and (F) providing an average of (A), (B), (C) and (D).
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US08/414,750 | 1995-03-31 | ||
US08/414,750 US5724456A (en) | 1995-03-31 | 1995-03-31 | Brightness adjustment of images using digital scene analysis |
Publications (1)
Publication Number | Publication Date |
---|---|
CA2212802A1 true CA2212802A1 (en) | 1996-10-03 |
Family
ID=23642793
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA002212802A Abandoned CA2212802A1 (en) | 1995-03-31 | 1996-02-23 | Brightness adjustment for images using digital scene analysis |
Country Status (5)
Country | Link |
---|---|
US (1) | US5724456A (en) |
EP (1) | EP0818027A1 (en) |
JP (1) | JPH10511246A (en) |
CA (1) | CA2212802A1 (en) |
WO (1) | WO1996030871A1 (en) |
Families Citing this family (198)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5877844A (en) * | 1995-06-13 | 1999-03-02 | Fuji Photo Film Co., Ltd. | Image exposure method using display panel |
US5883973A (en) * | 1996-02-20 | 1999-03-16 | Seiko Epson Corporation | Method and apparatus for processing a document by segmentation into text and image areas |
US5870505A (en) * | 1996-03-14 | 1999-02-09 | Polaroid Corporation | Method and apparatus for pixel level luminance adjustment |
JPH1069543A (en) * | 1996-08-29 | 1998-03-10 | Oki Electric Ind Co Ltd | Method and device for reconstituting curved surface of object |
US5818975A (en) * | 1996-10-28 | 1998-10-06 | Eastman Kodak Company | Method and apparatus for area selective exposure adjustment |
JPH10191100A (en) * | 1996-12-26 | 1998-07-21 | Fujitsu Ltd | Video signal processing method |
US6587581B1 (en) * | 1997-01-10 | 2003-07-01 | Hitachi, Ltd. | Visual inspection method and apparatus therefor |
US6249315B1 (en) | 1997-03-24 | 2001-06-19 | Jack M. Holm | Strategy for pictorial digital image processing |
JP3522495B2 (en) * | 1997-06-13 | 2004-04-26 | 三洋電機株式会社 | Image synthesis method and digital camera |
JPH1188584A (en) * | 1997-07-09 | 1999-03-30 | Canon Inc | Image processing unit, image processing method and memory read by computer |
US6263091B1 (en) * | 1997-08-22 | 2001-07-17 | International Business Machines Corporation | System and method for identifying foreground and background portions of digitized images |
US6292574B1 (en) * | 1997-08-29 | 2001-09-18 | Eastman Kodak Company | Computer program product for redeye detection |
JP3822723B2 (en) * | 1997-08-29 | 2006-09-20 | 富士写真フイルム株式会社 | Image processing device |
US7042505B1 (en) | 1997-10-09 | 2006-05-09 | Fotonation Ireland Ltd. | Red-eye filter method and apparatus |
US7630006B2 (en) * | 1997-10-09 | 2009-12-08 | Fotonation Ireland Limited | Detecting red eye filter and apparatus using meta-data |
US7738015B2 (en) * | 1997-10-09 | 2010-06-15 | Fotonation Vision Limited | Red-eye filter method and apparatus |
JP3726223B2 (en) * | 1998-03-27 | 2005-12-14 | 富士写真フイルム株式会社 | Image processing device |
US6731798B1 (en) * | 1998-04-30 | 2004-05-04 | General Electric Company | Method for converting digital image pixel values including remote services provided over a network |
US6204940B1 (en) * | 1998-05-15 | 2001-03-20 | Hewlett-Packard Company | Digital processing of scanned negative films |
US6438264B1 (en) * | 1998-12-31 | 2002-08-20 | Eastman Kodak Company | Method for compensating image color when adjusting the contrast of a digital color image |
US6282317B1 (en) | 1998-12-31 | 2001-08-28 | Eastman Kodak Company | Method for automatic determination of main subjects in photographic images |
US6734913B1 (en) * | 1999-10-28 | 2004-05-11 | Hewlett-Packard Development Company, L.P. | Method of automatically adjusting exposure in a shutterless digital camera |
US6628843B1 (en) * | 1999-11-24 | 2003-09-30 | Xerox Corporation | Image enhancement on JPEG compressed image data |
US6813389B1 (en) | 1999-12-15 | 2004-11-02 | Eastman Kodak Company | Digital image processing method and system including noise reduction and tone scale adjustments |
US7345702B2 (en) * | 2000-02-07 | 2008-03-18 | Canon Kabushiki Kaisha | Image sensing apparatus, control method for illumination device, flash photographing method, and computer program product |
JP2001238127A (en) * | 2000-02-21 | 2001-08-31 | Fuji Photo Film Co Ltd | Camera |
US7289154B2 (en) * | 2000-05-10 | 2007-10-30 | Eastman Kodak Company | Digital image processing method and apparatus for brightness adjustment of digital images |
US6785414B1 (en) * | 2000-09-28 | 2004-08-31 | Media Cybernetics, Inc. | System and method for establishing an aggregate degree of brightness for each primary color to create a composite color digital image |
US20050162515A1 (en) * | 2000-10-24 | 2005-07-28 | Objectvideo, Inc. | Video surveillance system |
US8711217B2 (en) | 2000-10-24 | 2014-04-29 | Objectvideo, Inc. | Video surveillance system employing video primitives |
US8564661B2 (en) | 2000-10-24 | 2013-10-22 | Objectvideo, Inc. | Video analytic rule detection system and method |
US9892606B2 (en) | 2001-11-15 | 2018-02-13 | Avigilon Fortress Corporation | Video surveillance system employing video primitives |
US6985637B1 (en) * | 2000-11-10 | 2006-01-10 | Eastman Kodak Company | Method and apparatus of enhancing a digital image using multiple selected digital images |
JP2002278522A (en) * | 2001-03-19 | 2002-09-27 | Matsushita Electric Ind Co Ltd | Portable video display device |
US6625310B2 (en) * | 2001-03-23 | 2003-09-23 | Diamondback Vision, Inc. | Video segmentation using statistical pixel modeling |
US7424175B2 (en) | 2001-03-23 | 2008-09-09 | Objectvideo, Inc. | Video segmentation using statistical pixel modeling |
US6999202B2 (en) | 2001-03-27 | 2006-02-14 | Polaroid Corporation | Method for generating a halftone of a source image |
US6842186B2 (en) * | 2001-05-30 | 2005-01-11 | Polaroid Corporation | High speed photo-printing apparatus |
US6937365B2 (en) | 2001-05-30 | 2005-08-30 | Polaroid Corporation | Rendering images utilizing adaptive error diffusion |
ATE303901T1 (en) * | 2001-05-30 | 2005-09-15 | Polaroid Corp | HIGH SPEED PHOTO PRINTING MACHINE |
JP3733873B2 (en) * | 2001-06-07 | 2006-01-11 | ノーリツ鋼機株式会社 | Photo image processing apparatus, method, and photo processing apparatus |
US7576797B2 (en) * | 2001-06-25 | 2009-08-18 | Texas Instruments Incorporated | Automatic white balancing via illuminant scoring autoexposure by neural network mapping |
US7006688B2 (en) * | 2001-07-05 | 2006-02-28 | Corel Corporation | Histogram adjustment features for use in imaging technologies |
US6826310B2 (en) * | 2001-07-06 | 2004-11-30 | Jasc Software, Inc. | Automatic contrast enhancement |
US7126629B1 (en) * | 2001-09-07 | 2006-10-24 | Pure Digital Technologies, Inc. | Recyclable, digital one time use camera |
KR100425312B1 (en) * | 2001-12-11 | 2004-03-30 | 삼성전자주식회사 | Apparatus and method for controlling brightness and/or contrast gain automatically |
US6906736B2 (en) * | 2002-02-19 | 2005-06-14 | Polaroid Corporation | Technique for printing a color image |
US7221807B2 (en) * | 2002-03-29 | 2007-05-22 | Sharp Laboratories Of America, Inc. | Methods and systems for digital image characteristic adjustment using a neural network |
US7317559B2 (en) * | 2002-04-05 | 2008-01-08 | Canon Kabushiki Kaisha | Imaging device and imaging method for use in such device |
US6937775B2 (en) * | 2002-05-15 | 2005-08-30 | Eastman Kodak Company | Method of enhancing the tone scale of a digital image to extend the linear response range without amplifying noise |
US7274830B2 (en) * | 2002-06-12 | 2007-09-25 | Litton Systems, Inc. | System for multi-sensor image fusion |
JP4443406B2 (en) * | 2002-07-17 | 2010-03-31 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | Non-linear image processing |
US7130485B2 (en) * | 2002-10-02 | 2006-10-31 | Eastman Kodak Company | Enhancing the tonal and color characteristics of digital images using expansive and compressive tone scale functions |
US7116838B2 (en) * | 2002-10-25 | 2006-10-03 | Eastman Kodak Company | Enhancing the tonal and spatial characteristics of digital images using selective spatial filters |
TW583600B (en) * | 2002-12-31 | 2004-04-11 | Ind Tech Res Inst | Method of seamless processing for merging 3D color images |
US7283666B2 (en) * | 2003-02-27 | 2007-10-16 | Saquib Suhail S | Digital image exposure correction |
US7483083B2 (en) * | 2003-04-16 | 2009-01-27 | Intervideo, Inc. | Movie enhancement |
US7565030B2 (en) * | 2003-06-26 | 2009-07-21 | Fotonation Vision Limited | Detecting orientation of digital images using face detection information |
US7844076B2 (en) * | 2003-06-26 | 2010-11-30 | Fotonation Vision Limited | Digital image processing using face detection and skin tone information |
US7315630B2 (en) | 2003-06-26 | 2008-01-01 | Fotonation Vision Limited | Perfecting of digital image rendering parameters within rendering devices using face detection |
US7792970B2 (en) | 2005-06-17 | 2010-09-07 | Fotonation Vision Limited | Method for establishing a paired connection between media devices |
US7620218B2 (en) | 2006-08-11 | 2009-11-17 | Fotonation Ireland Limited | Real-time face tracking with reference images |
US8330831B2 (en) * | 2003-08-05 | 2012-12-11 | DigitalOptics Corporation Europe Limited | Method of gathering visual meta data using a reference image |
US9129381B2 (en) * | 2003-06-26 | 2015-09-08 | Fotonation Limited | Modification of post-viewing parameters for digital images using image region or feature information |
US7269292B2 (en) * | 2003-06-26 | 2007-09-11 | Fotonation Vision Limited | Digital image adjustable compression and resolution using face detection information |
US8170294B2 (en) * | 2006-11-10 | 2012-05-01 | DigitalOptics Corporation Europe Limited | Method of detecting redeye in a digital image |
US8896725B2 (en) | 2007-06-21 | 2014-11-25 | Fotonation Limited | Image capture device with contemporaneous reference image capture mechanism |
US8254674B2 (en) * | 2004-10-28 | 2012-08-28 | DigitalOptics Corporation Europe Limited | Analyzing partial face regions for red-eye detection in acquired digital images |
US8036458B2 (en) * | 2007-11-08 | 2011-10-11 | DigitalOptics Corporation Europe Limited | Detecting redeye defects in digital images |
US9692964B2 (en) | 2003-06-26 | 2017-06-27 | Fotonation Limited | Modification of post-viewing parameters for digital images using image region or feature information |
US7471846B2 (en) * | 2003-06-26 | 2008-12-30 | Fotonation Vision Limited | Perfecting the effect of flash within an image acquisition devices using face detection |
US7536036B2 (en) | 2004-10-28 | 2009-05-19 | Fotonation Vision Limited | Method and apparatus for red-eye detection in an acquired digital image |
US7970182B2 (en) | 2005-11-18 | 2011-06-28 | Tessera Technologies Ireland Limited | Two stage detection for photographic eye artifacts |
US7920723B2 (en) * | 2005-11-18 | 2011-04-05 | Tessera Technologies Ireland Limited | Two stage detection for photographic eye artifacts |
US7616233B2 (en) * | 2003-06-26 | 2009-11-10 | Fotonation Vision Limited | Perfecting of digital image capture parameters within acquisition devices using face detection |
US7587085B2 (en) * | 2004-10-28 | 2009-09-08 | Fotonation Vision Limited | Method and apparatus for red-eye detection in an acquired digital image |
US8155397B2 (en) * | 2007-09-26 | 2012-04-10 | DigitalOptics Corporation Europe Limited | Face tracking in a camera processor |
US7362368B2 (en) * | 2003-06-26 | 2008-04-22 | Fotonation Vision Limited | Perfecting the optics within a digital image acquisition device using face detection |
US7587068B1 (en) | 2004-01-22 | 2009-09-08 | Fotonation Vision Limited | Classification database for consumer digital images |
US7792335B2 (en) | 2006-02-24 | 2010-09-07 | Fotonation Vision Limited | Method and apparatus for selective disqualification of digital images |
US7317815B2 (en) * | 2003-06-26 | 2008-01-08 | Fotonation Vision Limited | Digital image processing composition using face detection information |
US8494286B2 (en) | 2008-02-05 | 2013-07-23 | DigitalOptics Corporation Europe Limited | Face detection in mid-shot digital images |
US7574016B2 (en) | 2003-06-26 | 2009-08-11 | Fotonation Vision Limited | Digital image processing using face detection information |
US8593542B2 (en) * | 2005-12-27 | 2013-11-26 | DigitalOptics Corporation Europe Limited | Foreground/background separation using reference images |
US8553949B2 (en) | 2004-01-22 | 2013-10-08 | DigitalOptics Corporation Europe Limited | Classification and organization of consumer digital images using workflow, and face detection and recognition |
US8948468B2 (en) * | 2003-06-26 | 2015-02-03 | Fotonation Limited | Modification of viewing parameters for digital images using face detection information |
US8498452B2 (en) * | 2003-06-26 | 2013-07-30 | DigitalOptics Corporation Europe Limited | Digital image processing using face detection information |
US7680342B2 (en) | 2004-08-16 | 2010-03-16 | Fotonation Vision Limited | Indoor/outdoor classification in digital images |
US8989453B2 (en) * | 2003-06-26 | 2015-03-24 | Fotonation Limited | Digital image processing using face detection information |
US7440593B1 (en) | 2003-06-26 | 2008-10-21 | Fotonation Vision Limited | Method of improving orientation and color balance of digital images using face detection information |
US8682097B2 (en) * | 2006-02-14 | 2014-03-25 | DigitalOptics Corporation Europe Limited | Digital image enhancement with reference images |
US8363951B2 (en) * | 2007-03-05 | 2013-01-29 | DigitalOptics Corporation Europe Limited | Face recognition training method and apparatus |
US7689009B2 (en) * | 2005-11-18 | 2010-03-30 | Fotonation Vision Ltd. | Two stage detection for photographic eye artifacts |
US7606417B2 (en) | 2004-08-16 | 2009-10-20 | Fotonation Vision Limited | Foreground/background segmentation in digital images with differential exposure calculations |
US8773685B2 (en) | 2003-07-01 | 2014-07-08 | Intellectual Ventures I Llc | High-speed digital image printing system |
US20050140801A1 (en) * | 2003-08-05 | 2005-06-30 | Yury Prilutsky | Optimized performance and performance for red-eye filter method and apparatus |
US8520093B2 (en) * | 2003-08-05 | 2013-08-27 | DigitalOptics Corporation Europe Limited | Face tracker and partial face tracker for red-eye filter method and apparatus |
US9412007B2 (en) * | 2003-08-05 | 2016-08-09 | Fotonation Limited | Partial face detector red-eye filter method and apparatus |
JP2005099598A (en) * | 2003-09-26 | 2005-04-14 | Sanyo Electric Co Ltd | Display device |
US7633655B2 (en) * | 2003-10-10 | 2009-12-15 | Yuping Yang | Optical imaging device |
US7164518B2 (en) | 2003-10-10 | 2007-01-16 | Yuping Yang | Fast scanner with rotatable mirror and image processing system |
US7245781B2 (en) * | 2003-10-23 | 2007-07-17 | Eastman Kodak Company | Applying a tone scale function to a digital image |
JP4069943B2 (en) * | 2003-12-03 | 2008-04-02 | 株式会社ニコン | Image processing apparatus, image processing program, image processing method, and electronic camera for controlling noise removal strength within screen |
US7558408B1 (en) | 2004-01-22 | 2009-07-07 | Fotonation Vision Limited | Classification system for consumer digital images using workflow and user interface modules, and face detection and recognition |
US7555148B1 (en) | 2004-01-22 | 2009-06-30 | Fotonation Vision Limited | Classification system for consumer digital images using workflow, face detection, normalization, and face recognition |
US7564994B1 (en) * | 2004-01-22 | 2009-07-21 | Fotonation Vision Limited | Classification system for consumer digital images using automatic workflow and face detection and recognition |
US7551755B1 (en) | 2004-01-22 | 2009-06-23 | Fotonation Vision Limited | Classification and organization of consumer digital images using workflow, and face detection and recognition |
US20110102643A1 (en) * | 2004-02-04 | 2011-05-05 | Tessera Technologies Ireland Limited | Partial Face Detector Red-Eye Filter Method and Apparatus |
JP2005295497A (en) * | 2004-03-10 | 2005-10-20 | Seiko Epson Corp | Image quality display device, digital camera, developing apparatus, image quality display method and image quality display program |
JP4242796B2 (en) * | 2004-03-12 | 2009-03-25 | パナソニック株式会社 | Image recognition method and image recognition apparatus |
US20050270397A1 (en) * | 2004-06-02 | 2005-12-08 | Battles Amy E | System and method for indicating settings |
US8320641B2 (en) * | 2004-10-28 | 2012-11-27 | DigitalOptics Corporation Europe Limited | Method and apparatus for red-eye detection using preview or other reference images |
US7315631B1 (en) | 2006-08-11 | 2008-01-01 | Fotonation Vision Limited | Real-time face tracking in a digital image acquisition device |
US8503800B2 (en) * | 2007-03-05 | 2013-08-06 | DigitalOptics Corporation Europe Limited | Illumination detection using classifier chains |
US7715597B2 (en) * | 2004-12-29 | 2010-05-11 | Fotonation Ireland Limited | Method and component for image recognition |
KR100640063B1 (en) * | 2005-02-18 | 2006-10-31 | 삼성전자주식회사 | Method for enhancing image considering to exterior illuminance and apparatus thereof |
US8422546B2 (en) * | 2005-05-25 | 2013-04-16 | Microsoft Corporation | Adaptive video encoding using a perceptual model |
NO20054829D0 (en) * | 2005-10-19 | 2005-10-19 | Ignis Photonyx As | Diffractive technology-based dynamic contrast manipulation in display systems |
US7599577B2 (en) * | 2005-11-18 | 2009-10-06 | Fotonation Vision Limited | Method and apparatus of correcting hybrid flash artifacts in digital images |
DE102005060893C5 (en) * | 2005-12-20 | 2019-02-28 | Manroland Goss Web Systems Gmbh | Method for determining a printing-technical measured value |
US7692696B2 (en) * | 2005-12-27 | 2010-04-06 | Fotonation Vision Limited | Digital image acquisition system with portrait mode |
EP1987436B1 (en) * | 2006-02-14 | 2015-12-09 | FotoNation Limited | Image blurring |
WO2007095553A2 (en) | 2006-02-14 | 2007-08-23 | Fotonation Vision Limited | Automatic detection and correction of non-red eye flash defects |
IES20060559A2 (en) * | 2006-02-14 | 2006-11-01 | Fotonation Vision Ltd | Automatic detection and correction of non-red flash eye defects |
US7804983B2 (en) | 2006-02-24 | 2010-09-28 | Fotonation Vision Limited | Digital image acquisition control and correction method and apparatus |
US20070237237A1 (en) * | 2006-04-07 | 2007-10-11 | Microsoft Corporation | Gradient slope detection for video compression |
US8503536B2 (en) * | 2006-04-07 | 2013-08-06 | Microsoft Corporation | Quantization adjustments for DC shift artifacts |
US7995649B2 (en) | 2006-04-07 | 2011-08-09 | Microsoft Corporation | Quantization adjustment based on texture level |
US8059721B2 (en) | 2006-04-07 | 2011-11-15 | Microsoft Corporation | Estimating sample-domain distortion in the transform domain with rounding compensation |
CA2649389A1 (en) * | 2006-04-17 | 2007-11-08 | Objectvideo, Inc. | Video segmentation using statistical pixel modeling |
JP5196731B2 (en) * | 2006-04-20 | 2013-05-15 | キヤノン株式会社 | Image processing apparatus and image processing method |
IES20060564A2 (en) * | 2006-05-03 | 2006-11-01 | Fotonation Vision Ltd | Improved foreground/background separation |
US8711925B2 (en) | 2006-05-05 | 2014-04-29 | Microsoft Corporation | Flexible quantization |
US7636496B2 (en) * | 2006-05-17 | 2009-12-22 | Xerox Corporation | Histogram adjustment for high dynamic range image mapping |
US7639893B2 (en) * | 2006-05-17 | 2009-12-29 | Xerox Corporation | Histogram adjustment for high dynamic range image mapping |
FR2902217B1 (en) * | 2006-06-08 | 2008-12-26 | E On Software Sarl | METHOD FOR MAKING THREE-DIMENSIONAL VIEWS COMPRISING BRIGHTNESS ADJUSTMENT |
DE602007012246D1 (en) | 2006-06-12 | 2011-03-10 | Tessera Tech Ireland Ltd | ADVANCES IN EXTENDING THE AAM TECHNIQUES FROM GRAYSCALE TO COLOR IMAGES |
JP2008009318A (en) * | 2006-06-30 | 2008-01-17 | Toshiba Corp | Image processing apparatus and image processing method |
KR100809347B1 (en) * | 2006-07-31 | 2008-03-05 | 삼성전자주식회사 | Method and apparatus for compensating shadow area |
US7515740B2 (en) * | 2006-08-02 | 2009-04-07 | Fotonation Vision Limited | Face recognition with combined PCA-based datasets |
US7403643B2 (en) * | 2006-08-11 | 2008-07-22 | Fotonation Vision Limited | Real-time face tracking in a digital image acquisition device |
US7916897B2 (en) * | 2006-08-11 | 2011-03-29 | Tessera Technologies Ireland Limited | Face tracking for controlling imaging parameters |
US7865032B2 (en) * | 2006-09-22 | 2011-01-04 | Hewlett-Packard Development Company, L.P. | Methods and systems for identifying an ill-exposed image |
TW200820123A (en) * | 2006-10-20 | 2008-05-01 | Primax Electronics Ltd | Method and system of generating high dynamic range image corresponding to specific scene |
US8055067B2 (en) | 2007-01-18 | 2011-11-08 | DigitalOptics Corporation Europe Limited | Color segmentation |
US8238424B2 (en) * | 2007-02-09 | 2012-08-07 | Microsoft Corporation | Complexity-based adaptive preprocessing for multiple-pass video compression |
ATE472140T1 (en) * | 2007-02-28 | 2010-07-15 | Fotonation Vision Ltd | SEPARATION OF DIRECTIONAL ILLUMINATION VARIABILITY IN STATISTICAL FACIAL MODELING BASED ON TEXTURE SPACE DECOMPOSITIONS |
JP2010520567A (en) * | 2007-03-05 | 2010-06-10 | フォトネーション ビジョン リミテッド | Red-eye false detection filtering using face position and orientation |
US8649604B2 (en) * | 2007-03-05 | 2014-02-11 | DigitalOptics Corporation Europe Limited | Face searching and detection in a digital image acquisition device |
EP2123008A4 (en) | 2007-03-05 | 2011-03-16 | Tessera Tech Ireland Ltd | Face categorization and annotation of a mobile phone contact list |
US8498335B2 (en) * | 2007-03-26 | 2013-07-30 | Microsoft Corporation | Adaptive deadzone size adjustment in quantization |
US20080240257A1 (en) * | 2007-03-26 | 2008-10-02 | Microsoft Corporation | Using quantization bias that accounts for relations between transform bins and quantization bins |
US8243797B2 (en) | 2007-03-30 | 2012-08-14 | Microsoft Corporation | Regions of interest for quality adjustments |
US8442337B2 (en) * | 2007-04-18 | 2013-05-14 | Microsoft Corporation | Encoding adjustments for animation content |
US7916971B2 (en) * | 2007-05-24 | 2011-03-29 | Tessera Technologies Ireland Limited | Image processing method and apparatus |
US8331438B2 (en) * | 2007-06-05 | 2012-12-11 | Microsoft Corporation | Adaptive selection of picture-level quantization parameters for predicted video pictures |
JP5057884B2 (en) * | 2007-08-15 | 2012-10-24 | 株式会社ジャパンディスプレイイースト | Display device |
US8503818B2 (en) | 2007-09-25 | 2013-08-06 | DigitalOptics Corporation Europe Limited | Eye defect detection in international standards organization images |
TW200930040A (en) * | 2007-12-19 | 2009-07-01 | Altek Corp | Gradient mirror processing method for digital images |
US8750578B2 (en) | 2008-01-29 | 2014-06-10 | DigitalOptics Corporation Europe Limited | Detecting facial expressions in digital images |
US8212864B2 (en) * | 2008-01-30 | 2012-07-03 | DigitalOptics Corporation Europe Limited | Methods and apparatuses for using image acquisition data to detect and correct image defects |
US7855737B2 (en) * | 2008-03-26 | 2010-12-21 | Fotonation Ireland Limited | Method of making a digital camera image of a scene including the camera user |
US8189933B2 (en) * | 2008-03-31 | 2012-05-29 | Microsoft Corporation | Classifying and controlling encoding quality for textured, dark smooth and smooth video content |
CN103402070B (en) | 2008-05-19 | 2017-07-07 | 日立麦克赛尔株式会社 | Record reproducing device and method |
US8897359B2 (en) | 2008-06-03 | 2014-11-25 | Microsoft Corporation | Adaptive quantization for enhancement layer video coding |
EP2312831A4 (en) * | 2008-06-30 | 2011-12-14 | Kyocera Corp | Image processing method and image pickup device module |
JP5547730B2 (en) * | 2008-07-30 | 2014-07-16 | デジタルオプティックス・コーポレイション・ヨーロッパ・リミテッド | Automatic facial and skin beautification using face detection |
US8081254B2 (en) * | 2008-08-14 | 2011-12-20 | DigitalOptics Corporation Europe Limited | In-camera based method of detecting defect eye with high accuracy |
TWI380236B (en) * | 2008-09-12 | 2012-12-21 | Avisonic Technology Corp | Image enhancement method using local gain correction |
WO2010063463A2 (en) * | 2008-12-05 | 2010-06-10 | Fotonation Ireland Limited | Face recognition using face tracker classifier data |
US20100295782A1 (en) | 2009-05-21 | 2010-11-25 | Yehuda Binder | System and method for control based on face ore hand gesture detection |
TWI423246B (en) * | 2009-08-21 | 2014-01-11 | Primax Electronics Ltd | Image processing method and apparatus thereof |
US8379917B2 (en) * | 2009-10-02 | 2013-02-19 | DigitalOptics Corporation Europe Limited | Face recognition performance using additional image features |
US8692867B2 (en) * | 2010-03-05 | 2014-04-08 | DigitalOptics Corporation Europe Limited | Object detection and rendering for wide field of view (WFOV) image acquisition systems |
US8488958B2 (en) | 2010-05-25 | 2013-07-16 | Apple Inc. | Scene adaptive auto exposure |
KR101330396B1 (en) * | 2010-06-25 | 2013-11-15 | 엘지디스플레이 주식회사 | Display Device And Contrast Enhancement Method Thereof |
US8836777B2 (en) | 2011-02-25 | 2014-09-16 | DigitalOptics Corporation Europe Limited | Automatic detection of vertical gaze using an embedded imaging device |
US8947501B2 (en) | 2011-03-31 | 2015-02-03 | Fotonation Limited | Scene enhancements in off-center peripheral regions for nonlinear lens geometries |
US8896703B2 (en) | 2011-03-31 | 2014-11-25 | Fotonation Limited | Superresolution enhancement of peripheral regions in nonlinear lens geometries |
AU2011244921B8 (en) * | 2011-11-01 | 2014-04-03 | Canon Kabushiki Kaisha | Method and system for luminance adjustment of images in an image sequence |
US20130201316A1 (en) | 2012-01-09 | 2013-08-08 | May Patents Ltd. | System and method for server based control |
US9472163B2 (en) * | 2012-02-17 | 2016-10-18 | Monotype Imaging Inc. | Adjusting content rendering for environmental conditions |
US9111174B2 (en) * | 2012-02-24 | 2015-08-18 | Riverain Technologies, LLC | Machine learning techniques for pectoral muscle equalization and segmentation in digital mammograms |
CN102929447B (en) * | 2012-10-19 | 2015-09-09 | 无锡海森诺科技有限公司 | Method for extracting the effective area of an optical sensor perception image |
US9665796B2 (en) * | 2013-01-22 | 2017-05-30 | University Of Central Florida Research Foundation, Inc. | System and method for visual correlation of digital images |
US9972070B2 (en) | 2013-06-24 | 2018-05-15 | Nintendo Co., Ltd. | Brightness-compensating safe pixel art upscaler |
EP3028446A1 (en) * | 2013-07-30 | 2016-06-08 | Dolby Laboratories Licensing Corporation | System and methods for generating scene stabilized metadata |
EP3134850B1 (en) | 2014-04-22 | 2023-06-14 | Snap-Aid Patents Ltd. | Method for controlling a camera based on processing an image captured by other camera |
WO2016056787A1 (en) * | 2014-10-06 | 2016-04-14 | Samsung Electronics Co., Ltd. | Display device and method of controlling the same |
WO2016207875A1 (en) | 2015-06-22 | 2016-12-29 | Photomyne Ltd. | System and method for detecting objects in an image |
CN104978710B (en) * | 2015-07-02 | 2018-07-10 | 广东欧珀移动通信有限公司 | Method and apparatus for adjusting face brightness based on photo recognition |
CN105096267B (en) * | 2015-07-02 | 2018-06-22 | 广东欧珀移动通信有限公司 | Method and apparatus for adjusting eye brightness based on photo recognition |
JP6562770B2 (en) * | 2015-08-24 | 2019-08-21 | キヤノン株式会社 | Image processing apparatus, image processing method, program, and storage medium |
EP3142355B1 (en) * | 2015-09-08 | 2017-10-25 | Axis AB | Method and apparatus for enhancing local contrast in a thermal image |
DE102015119137B3 (en) * | 2015-11-06 | 2017-06-08 | Hochschule Rheinmain University Of Applied Sciences Wiesbaden Rüsselsheim | Invention relating to methods for aperture adjustment |
JP2017098845A (en) * | 2015-11-26 | 2017-06-01 | キヤノン株式会社 | Image processing apparatus, image processing method, and program |
CN111526277A (en) * | 2019-02-01 | 2020-08-11 | 神讯电脑(昆山)有限公司 | Processing method and system for adjusting image according to environmental conditions |
CN111444825A (en) * | 2020-03-25 | 2020-07-24 | 四川长虹电器股份有限公司 | Method for judging image scene by utilizing histogram |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS5212828A (en) * | 1975-07-21 | 1977-01-31 | Konishiroku Photo Ind Co Ltd | Exposure control process for the object of a non-uniform distribution of brightness |
JPS5651728A (en) * | 1979-10-03 | 1981-05-09 | Fuji Photo Film Co Ltd | Exposure control method |
US4445138A (en) * | 1981-12-21 | 1984-04-24 | Hughes Aircraft Company | Real time dynamic range compression for image enhancement |
US4639769A (en) * | 1985-04-01 | 1987-01-27 | Eastman Kodak Company | Modifying color digital images |
US5038389A (en) * | 1987-06-25 | 1991-08-06 | Nec Corporation | Encoding of a picture signal in consideration of contrast in each picture and decoding corresponding to the encoding |
US5042077A (en) * | 1987-10-02 | 1991-08-20 | General Electric Company | Method of highlighting subtle contrast in graphical images |
US4929824A (en) * | 1988-02-26 | 1990-05-29 | Fuji Photo Film Co., Ltd. | Light metering device with detector matrix and mean value detection |
US4868651A (en) * | 1988-05-17 | 1989-09-19 | S&S Inficon, Inc. | Digital radiography with image brightness and contrast normalization |
US5150433A (en) * | 1989-12-01 | 1992-09-22 | Eastman Kodak Company | Histogram/variance mechanism for detecting presence of an edge within block of image data |
US5418895A (en) * | 1992-11-25 | 1995-05-23 | Eastman Kodak Company | Method for displaying a high quality digital color image on a limited color display |
- 1995
  - 1995-03-31 US US08/414,750 patent/US5724456A/en not_active Expired - Lifetime
- 1996
  - 1996-02-23 EP EP96906569A patent/EP0818027A1/en not_active Withdrawn
  - 1996-02-23 WO PCT/US1996/002353 patent/WO1996030871A1/en not_active Application Discontinuation
  - 1996-02-23 JP JP8529375A patent/JPH10511246A/en active Pending
  - 1996-02-23 CA CA002212802A patent/CA2212802A1/en not_active Abandoned
Also Published As
Publication number | Publication date |
---|---|
WO1996030871A1 (en) | 1996-10-03 |
JPH10511246A (en) | 1998-10-27 |
EP0818027A1 (en) | 1998-01-14 |
US5724456A (en) | 1998-03-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CA2212802A1 (en) | Brightness adjustment for images using digital scene analysis | |
EP0843465B1 (en) | Image processing apparatus and image processing method | |
US6366680B1 (en) | Adjusting an electronic camera to acquire a watermarked image | |
US5828793A (en) | Method and apparatus for producing digital images having extended dynamic ranges | |
EP2461576B1 (en) | Image processing apparatus and image processing program | |
US7840084B2 (en) | Digital camera incorporating a sharpness predictor | |
CN107862657A (en) | Image processing method, device, computer equipment and computer-readable recording medium | |
EP0870396B1 (en) | System and method for color gamut and tone compression using an ideal mapping function | |
EP0848545A2 (en) | Method for estimating and adjusting digital image contrast | |
US7064864B2 (en) | Method and apparatus for compressing reproducible color gamut | |
CN107451969A (en) | Image processing method, device, mobile terminal and computer-readable recording medium | |
US6487309B1 (en) | Interpolation processing apparatus and recording medium having interpolation processing program recorded therein | |
JP2004165932A (en) | Device and method for estimating light source, image pickup device, and image processing method | |
EP1107610A1 (en) | Compression encoding method, recorded medium on which compression encoding program is recorded, and imaging device | |
CN109040607A (en) | Image formation control method, device, electronic equipment and computer readable storage medium | |
CN107945106B (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
WO2008102296A2 (en) | Method for enhancing the depth sensation of an image | |
CN104954627B (en) | Information processing method and electronic equipment | |
Farrell | Image quality evaluation | |
US8026954B2 (en) | System and computer-readable medium for automatic white balancing | |
EP1326209B1 (en) | Method for contrast enhancement in colour digital image | |
Bouzit et al. | Colour difference metrics and image sharpness | |
US6731797B2 (en) | Color dependent luminance processing | |
MacDonald et al. | Assessing image quality | |
Adams et al. | Perceptually based image processing algorithm design |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
EEER | Examination request | ||
FZDE | Discontinued |