US20130169821A1 - Detecting Orientation of Digital Images Using Face Detection Information - Google Patents

Detecting Orientation of Digital Images Using Face Detection Information

Info

Publication number
US20130169821A1
Authority
US
United States
Prior art keywords
digital image
image
images
digital
orientation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/778,128
Inventor
Eran Steinberg
Yury Prilutsky
Peter Corcoran
Petronel Bigioi
Leo Blonk
Mihnea Gângea
Constantin Vertan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fotonation Ltd
Original Assignee
DigitalOptics Corp Europe Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US11/024,046 external-priority patent/US7565030B2/en
Application filed by DigitalOptics Corp Europe Ltd filed Critical DigitalOptics Corp Europe Ltd
Priority to US13/778,128 priority Critical patent/US20130169821A1/en
Publication of US20130169821A1 publication Critical patent/US20130169821A1/en
Abandoned legal-status Critical Current

Classifications

    • G06K9/00228
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/24 - Aligning, centring, orientation detection or correction of the image
    • G06V10/242 - Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/162 - Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 - Details of colour television systems
    • H04N9/64 - Circuits for processing colour signals
    • H04N9/68 - Circuits for processing colour signals for controlling the amplitude of colour signals, e.g. automatic chroma control circuits

Definitions

  • the invention relates to automatic suggesting or processing of enhancements of a digital image using information gained from identifying and analyzing faces appearing within the image, and in particular to a method of detecting the image orientation using face detection.
  • the invention provides automated orientation detection for photographs taken and/or images detected, acquired or captured in digital form or converted to digital form, by using information about the faces in the photographs and/or images.
  • Baluja, 1997 describes methods of extending the upright, frontal, template-based face detection system to efficiently handle all in-plane rotations, thus achieving a rotation-invariant face detection system.
  • the camera is usually held horizontally or vertically, rotated counter clockwise or clockwise relative to the horizontal position when the picture is taken, creating what is referred to as landscape mode or portrait mode, respectively.
  • thus most images are taken in one of three orientations, namely landscape, clockwise portrait and counterclockwise portrait.
  • the system may try to determine if the image was shot horizontally, which is also referred to as landscape format, where the width is larger than the height of an image, or vertically, also referred to as portrait mode, where the height of the image is larger than the width.
  • Techniques may be used to determine an orientation of an image.
  • these techniques include either recording the camera orientation at acquisition time using an in-camera mechanical indicator or attempting to analyze image content post-acquisition.
  • In-camera methods, although providing precision, use additional hardware and sometimes movable hardware components, which can increase the price of the camera and add a potential maintenance challenge.
  • post-acquisition analysis may not generally provide sufficient precision.
  • With knowledge of the location, size and orientation of faces in a photograph, a computerized system can offer powerful automatic tools to enhance and correct such images or to provide options for enhancing and correcting images.
  • a method of analyzing and processing a digital image using the results of face detection algorithms within said image to determine the correct orientation of the image is provided.
  • a face detection algorithm with classifiers that are orientation sensitive, or otherwise referred to as rotation variant, is applied to an image, or a subsampled resolution of an image.
  • the image is then rotated, or the classifiers are rotated, and the search is repeated for the orientations that are under question.
  • the orientation in which the highest number of faces is detected, and/or which yields the highest face detection confidence level, is the one estimated to be the correct orientation of the image.
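  • For illustration only (this sketch is not part of the patent text): the orientation-by-counting idea above can be prototyped with any off-the-shelf, orientation-sensitive face detector. Below, OpenCV's frontal-face Haar cascade stands in for the classifiers described here; the function names, the three candidate rotations and the detector parameters are assumptions made for illustration.

```python
# Sketch: estimate image orientation by running an orientation-sensitive face
# detector on rotated copies of the image and keeping the rotation that yields
# the most detected faces (OpenCV Haar cascade used as a stand-in classifier).
import cv2

CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Candidate rotations: no rotation, counter clockwise portrait, clockwise portrait.
ROTATIONS = {
    0: None,
    90: cv2.ROTATE_90_COUNTERCLOCKWISE,
    270: cv2.ROTATE_90_CLOCKWISE,
}

def estimate_orientation(bgr_image):
    """Return (rotation in degrees CCW, face count) that maximizes detected faces."""
    best_angle, best_count = 0, -1
    for angle, flag in ROTATIONS.items():
        candidate = bgr_image if flag is None else cv2.rotate(bgr_image, flag)
        gray = cv2.cvtColor(candidate, cv2.COLOR_BGR2GRAY)
        faces = CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > best_count:
            best_angle, best_count = angle, len(faces)
    return best_angle, best_count
```

  • Because such a cascade is trained on upright faces it behaves as a rotation-variant detector: the rotated copy that brings the faces upright tends to yield the most, and most confident, detections, which is exactly the property the method relies on.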
  • the digital image may be digitally-acquired and/or may be digitally-captured. Decisions for processing the digital image based on said face detection, selecting one or more parameters and/or for adjusting values of one or more parameters within the digital image may be automatically, semi-automatically or manually performed.
  • Values of orientation may be adjusted arbitrarily or in known intervals, e.g., of 90 degrees, such that a rotation value for the digital image may be determined.
  • the method may be performed within a digital acquisition device or an external device or a combination thereof. Rotation can also be applied as part of the transfer process between devices.
  • the face pixels may be identified, and a false indication of another face within the image may be removed.
  • the identifying of face pixels may be automatically performed by an image processing apparatus, and a manual verification of a correct detection of at least one face within the image may be provided.
  • a method is further provided for detecting an orientation of a digital image using statistical classifier techniques.
  • a set of classifiers are applied to a digital image in a first orientation and a first level of match between the digital image at the first orientation and the classifiers is determined.
  • the digital image is rotated to a second orientation, and the classifiers are applied to the rotated digital image at the second orientation.
  • a second level of match is determined between the rotated digital image at the second orientation and the classifiers.
  • the first and second levels of match are compared. It is determined which of the first orientation and the second orientations has a greater probability of being a correct orientation based on which of the first and second levels of match, respectively, comprises a higher level of match.
  • the method may further include rotating the digital image to a third orientation, applying the classifiers to the rotated digital image at the third orientation, and determining a third level of match between the rotated digital image at the third orientation and the classifiers.
  • the third level of match is compared with the first level of match or the second level of match, or both. It is determined which of two or more of the first orientation, the second orientation and the third orientation has a greater probability of being a correct orientation based on which of the corresponding levels of match is greater.
  • a method for detecting an orientation of a digital image using statistical classifier techniques includes applying a set of classifiers to a digital image in a first orientation and determining a first level of match between the digital image at the first orientation and the classifiers.
  • the set of classifiers is rotated by a first predetermined amount, and the classifiers rotated by the first amount are applied to the digital image at the first orientation.
  • a second level of match is determined between the digital image at the first orientation and the classifiers rotated the first amount.
  • the first and second levels of match are compared, and it is determined which of the first and second levels of match is greater in order to determine whether the first orientation is a correct orientation of the digital image.
  • a rotation of the classifiers by a second amount may be performed and the method performed with three relatively rotated sets of classifiers, and so on.
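  • As a minimal sketch of the classifier-rotation variant just described (not the patent's implementation), the snippet below rotates a small template "classifier" in 90-degree steps with NumPy and scores each rotated copy against the same image using normalized cross-correlation; the template, the brute-force scan and the scoring are illustrative assumptions.

```python
# Sketch: rotate the classifier (a 2-D template) instead of the image and keep
# the rotation whose normalized cross-correlation peak over the image is highest.
import numpy as np

def ncc_peak(image, template):
    """Best normalized cross-correlation of template over image (2-D float arrays)."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    best = -np.inf
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            patch = image[y:y + th, x:x + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-9)
            best = max(best, float((p * t).mean()))
    return best

def best_classifier_rotation(image, template):
    """Return k (quarter turns CCW: 0, 1 or 3) whose rotated classifier matches best."""
    scores = {k: ncc_peak(image, np.rot90(template, k)) for k in (0, 1, 3)}
    return max(scores, key=scores.get)
```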
  • processor readable storage devices are also provided having processor readable code embodied thereon.
  • the processor readable code programs one or more processors to perform any of the methods for detecting an orientation of a digital image using statistical classifier techniques briefly summarized above.
  • FIG. 1 a is a flow diagram that illustrates a main orientation workflow based on rotation of a digital image that includes one or more faces.
  • FIG. 1 b is a flow diagram that illustrates a main orientation workflow based on rotation of classifiers relative to an orientation of a digital image that includes one or more faces.
  • FIG. 1 c describes an exemplary implementation of the process illustrated at FIG. 1 a and/or FIG. 1 b.
  • FIG. 2 a illustrates an ellipse-based orientation classifier that may be used in a process in accordance with a preferred embodiment.
  • FIG. 2 b illustrates an ellipse-based classifier system applied to a facial image.
  • FIG. 3 a illustrates four different potential orientations of a single image.
  • FIG. 3 b illustrates different orientations of classifiers applied to a same image.
  • FIG. 4 a illustrates a matching of ellipse-based classifiers within images.
  • FIG. 4 b illustrates a matching of complex classifiers with an image.
  • Face Detection involves the art of detecting faces in a digital image.
  • One or more faces may be first isolated and/or identified within a larger digital image prior to further processing based at least in part on the detection of the faces.
  • Face detection includes a process of determining whether a human face is present in an input image, and may include or is preferably used in combination with determining a position and/or other features, properties, parameters or values of parameters of the face within the input image.
  • Image-enhancement or “image correction” involves the art of modifying a digital image to improve its quality or according to another selected manual or automatic input criteria.
  • a “global” modification is one that is applied to an entire image or substantially the entire image, while a “selective” modification is applied differently to different portions of the image or to only a selected portion of the image.
  • a “pixel” is a picture element or a basic unit of the composition of a digital image or any of the small discrete elements that together constitute an image.
  • a “digitally-captured image” includes an image that is digitally located and held in a detector, preferably of a portable digital camera or other digital image acquisition device.
  • a “digitally-acquired image” includes an image that is digitally recorded in a permanent file and/or preserved in a more or less permanent digital form.
  • a digitally-detected image is an image comprising digitally detected electromagnetic waves.
  • Classifiers are generally reference parameters selectively or automatically correlated or calibrated to some framework or absolute reference criteria.
  • one or more orientation classifiers in a 2-dimensional image may be configured according to a proper and/or selected orientation of a detected face within a digital image.
  • Such classifiers may be calibrated or correlated with a detected facial orientation such that an overall digital image containing a face may be oriented according to these calibrated or correlated classifiers.
  • Classifiers may be statistical or absolute: Statistical classifiers assign a class ωi so that given a pattern ŷ, the most probable P(ωi|ŷ) is the largest. In many cases, it is not desired to actually calculate P(ωi|ŷ), but rather to find the index i for which ωi will provide the largest P(ωi|ŷ).
  • the accuracy of a statistical classifier generally depends on the quality of training data and of the algorithm used for classification. The selected populations of pixels used for training should be statistically significant. This means that a minimum number of observations are generally required to characterize a particular site to some selected or acceptable threshold level of error.
  • FIG. 2 a and FIG. 2 b illustrate in a graphical form non-exhaustive examples of classifiers.
  • Objects 210, 212, and 214 in FIG. 2 a represent a simple ellipse classifier, in varying sizes.
  • FIG. 2 b illustrates a complex classifier of a face, which is made of simpler classifiers.
  • the mouth, 224 and the eyes 226, 228 correspond to ellipse classifiers 210 and 214 as defined in FIG. 2 a.
  • the classifiers need not be only of a certain shape. More complex classifiers can be of a more abstract physical nature. Alternatively, a classifier can be of color data. For example, a color classifier may be a classifier with higher content of blue towards the top and higher content of green or brown towards the bottom.
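  • A hedged illustration of such a color classifier (channel order assumed RGB, and the thirds and scoring chosen purely for illustration, not taken from the patent): score each quarter turn of the image by how much bluer its top third is and how much greener its bottom third is, and keep the best-scoring rotation.

```python
# Sketch: crude "blue towards the top, green/brown towards the bottom" cue.
import numpy as np

def sky_ground_score(rgb):
    """Higher score = image looks upright under the blue-top / green-bottom heuristic."""
    h = rgb.shape[0]
    top = rgb[: h // 3].astype(float)
    bottom = rgb[-(h // 3):].astype(float)
    blue_top = (top[..., 2] - top[..., :2].mean(axis=-1)).mean()                 # B minus mean(R, G)
    green_bottom = (bottom[..., 1] - bottom[..., [0, 2]].mean(axis=-1)).mean()   # G minus mean(R, B)
    return blue_top + green_bottom

def best_color_orientation(rgb):
    """Number of quarter turns (CCW) that maximizes the sky/ground score."""
    return max((0, 1, 2, 3), key=lambda k: sky_ground_score(np.rot90(rgb, k)))
```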
  • An “image orientation” is a rotational position of an image relative to a selected or permanent coordinate or coordinate system that may itself be determined relative to an absolute spatial system, such as the earth, or a system determined or selected within a frame of a digital image.
  • an image orientation is identified relative to an orientation of one or more classifiers, such as the elliptical classifiers illustrated at 2 a-2 b, 3 b and 4 a-4 b.
  • an image orientation may be identified relative to a horizontal/vertical system, such as illustrated in FIG. 3 a.
  • the image 310 may be rotated relative to this coordinate system or to an orientation of one or more elliptical classifiers by 90° counter clockwise 320 or clockwise 330.
  • a fourth orientation 340 is a 180° rotation, which is also illustrated in FIG. 3 a.
  • a 180 degree orientation is typically not a desired or viable situation for hand-held pictures.
  • Rotational positions may be defined relative to absolute or image-based coordinates, and rotations of the image and/or of the classifiers may be of arbitrary angular extent, e.g., 1° or finer, 5°, 10°, 15°, 30°, 45°, or others, selected in accordance with embodiments of the invention.
  • Classifier orientation is illustrated in FIG. 3 b .
  • the classifiers of FIG. 3 b are oriented in three orientations corresponding to the image orientations shown.
  • Object 360 represents a “correctly” oriented image, as selected or built-in to the digital system
  • block 350 represents a counter clockwise orientation
  • block 370 represents a clockwise orientation.
  • a “correct” orientation may be determined based on a combined level of match of multiple classifiers and/or on relative positions of the classifiers once matched to their respective facial regions. These regions may include the two eyes and mouth of a detected face, and may also include an outline of a person's head or entire face.
  • the arrow labeled “N” in the example of FIG. 3 b points in a direction that is selected or determined to be the “correct” vertical axis of the image.
  • the orientations illustrated at FIG. 3 b correspond to illustrative images 310, 320 and 330 in FIG. 3 a.
  • “Matching image classifiers to images” involves correlating or assigning classifier constructs to actual digital images or portions or sub-samplings of digital images. Such matching is illustrated at FIGS. 4 a and 4 b.
  • according to FIG. 4 a, different sized ellipses, as already described as being examples of classifiers, e.g., ellipses 210, 212 and 214 of FIG. 2 a, are matched to various objects, e.g., eyes and mouth regions, in facial images.
  • the matching is preferably performed for different image and/or facial region orientations, e.g., 400 and 410 of FIG. 4 a, to determine a correct or selected orientation of the digital image.
  • a correctly oriented ellipse may, however, match different objects in two orientations of an image or may match different objects than desired in images regardless of orientation.
  • ellipse 214 correctly matches the lips 414 in image 410, but also matches the nose bridge 404 when the image is “incorrectly” oriented or not in the desired orientation.
  • the smaller ellipse 210 matches both instances of eyes 412 and 413 in the correctly oriented image 410.
  • This example illustrates an instance wherein it is not sufficient to use a single classifier, as there may be cases of false detection. This illustrates an advantage of the process of determining the orientation of faces based on statistical classifiers in accordance with a preferred embodiment of the present invention.
  • Concatenation is generally used herein to describe a technique wherein classifiers, objects, axes, or parameters are connected, linked, correlated, matched, compared or otherwise related according to a selected or built-in set of criteria, and/or to describe sequential performance of operation or processes in methods in accordance with embodiments of the invention.
  • an image may be determined to be properly aligned when axes of a pair of eye ellipses are determined to be collinear or the image is oriented or re-oriented such that they are made to be collinear, or when an image and/or classifiers are rotated to cause the foci of eye ellipses to form an isosceles triangle with a center of a mouth ellipse, etc.
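  • The geometric relations just mentioned can be sketched as follows (the tolerances and the image coordinate convention, with y growing downward, are assumed values for illustration, not figures from the patent):

```python
# Sketch: given matched eye and mouth centres, check that the eye axis is
# roughly horizontal, that the eye-mouth distances are roughly equal
# (an approximately isosceles triangle), and that the mouth lies below the eyes.
import math

def face_geometry_is_upright(left_eye, right_eye, mouth,
                             level_tol_deg=15.0, iso_tol=0.2):
    (lx, ly), (rx, ry), (mx, my) = left_eye, right_eye, mouth
    eye_angle = math.degrees(math.atan2(ry - ly, rx - lx))
    level = abs(eye_angle) < level_tol_deg
    d_left = math.hypot(mx - lx, my - ly)
    d_right = math.hypot(mx - rx, my - ry)
    isosceles = abs(d_left - d_right) < iso_tol * max(d_left, d_right)
    below = my > (ly + ry) / 2.0          # image y grows downward
    return level and isosceles and below
```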
  • a preferred embodiment includes an image processing application whether implemented in software or in firmware, as part of the image capture process, such as in a digital camera, or as part of post processing, such as a desktop, in the camera as a post processing background process or on a server application.
  • This system receives images in digital form, where the images can be translated into a grid representation including multiple pixels.
  • the preferred embodiment describes a method of re-using face detection information in different orientations of the image to determine the orientation with the highest probability to be the correct one.
  • the information regarding the location and size of faces in an image assists in determining the correct orientation.
  • Advantages of the preferred embodiments include the ability to automatically perform or suggest or assist in the determination of the correct orientation of an image. Another advantage is that the processing may be automatically performed and/or suggested based on this information. Such automatic processing is fast enough and efficient enough to handle multiple images in close to real time, or be used for a single image as part of the image processing in the acquisition device. Many advantageous techniques are provided in accordance with preferred and alternative embodiments set forth herein. For example, this method of detecting the image orientation can be combined with other methods of face detection, thus improving the functionality and re-purposing the process for future applications.
  • Two or more methods of detecting faces in different orientations may be combined to achieve better accuracy and parameters of a single algorithm may be concatenated into a single parameter.
  • the digital image may be transformed to speed up the process, such as subsampling or reducing the color depth.
  • the digital image may be transformed to enhance the accuracy, such as by a preprocessing stage that improves the color balance, exposure or sharpness.
  • the digital image may be post-processed to enhance the accuracy, such as by removing false positives based on parameters and criteria outside of the face detection algorithm.
  • Values of orientation may be adjusted such that a rotation value for the digital image is determined. This technique may be implemented for supporting arbitrary rotation or fixed interval rotation such as 90 degree rotation.
  • the method may be performed within any digital image capture device, such as, but not limited to, a digital still camera, a phone handset with a built-in camera, a web camera or a digital video camera. Determining which of the sub-groups of pixels belong to which of the group of face pixels may be performed. The initial values of one or more parameters of pixels may be calculated based on the spatial orientation of the one or more sub-groups that correspond to one or more facial features. The spatial orientation of the one or more sub-groups that correspond to one or more facial features may be calculated based on an axis of an ellipse fit to the sub-group. The adjusted values of pixels within the digital image may be rounded to a closest multiple of 90 degrees. The initial values may be adjusted to adjusted values for re-orienting the image to an adjusted orientation.
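  • The ellipse-axis idea in the preceding item can be sketched as follows (the moment/PCA fit, the function names and the rounding rule are assumptions made for illustration): fit the principal axis of the face pixels and round the implied correction to the nearest multiple of 90 degrees.

```python
# Sketch: orientation of a face region from the major axis of its pixel set,
# then a 90-degree-quantized rotation value for the whole image.
import numpy as np

def face_axis_angle(face_mask):
    """Angle (degrees) of the major axis of the True pixels in a boolean mask."""
    ys, xs = np.nonzero(face_mask)
    coords = np.stack([xs, ys]).astype(float)
    coords -= coords.mean(axis=1, keepdims=True)
    eigvals, eigvecs = np.linalg.eigh(np.cov(coords))
    major = eigvecs[:, np.argmax(eigvals)]          # direction of largest spread
    return float(np.degrees(np.arctan2(major[1], major[0])))

def rotation_to_nearest_90(axis_angle_deg):
    """Rotation (0/90/180/270) bringing the major axis closest to vertical.
    Note: the axis alone cannot distinguish upright from upside-down."""
    correction = 90.0 - axis_angle_deg
    return int(round(correction / 90.0)) * 90 % 360
```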
  • the one or more facial features may include an eye, two eyes, two eyes and a mouth, an eye, a mouth, hairline, ears, nostrils, nose bridge, eyebrows or a nose, or combinations thereof.
  • the features used for the detection of objects in general in the image, or faces specifically, may be determined through mathematical classifiers that are either deduced via a learning process or inserted into the system.
  • classifiers are described by Viola and Jones in the paper incorporated herein by reference.
  • Other classifiers can be the eigenfaces, which are the basis functions that define images with faces.
  • Each of the methods provided are preferably implemented within software and/or firmware either in the camera or with external processing equipment.
  • the software may also be downloaded into the camera or image processing equipment.
  • processor readable storage devices having processor readable code embodied thereon are provided.
  • the processor readable code programs one or more processors to perform any of the above or below described methods.
  • FIG. 1 a illustrates a process flow according to a preferred embodiment.
  • the input is an image which can come from various sources.
  • an image may be opened by a software, firmware or other program application in block 102 .
  • the process may be initiated when a photographer takes a picture at block 103 , or as an automatic background process for an application or acquisition device at block 104 .
  • the classifiers are preferably pre-determined for the specific image classification. A detailed description of the learning process to create the appropriate classifiers can be found in the paper by Viola and Jones that has been cited and incorporated by reference hereinabove.
  • the classifiers are loaded, at step 108 , into the application.
  • the image is preferably rotated into three orientations at block 110. Only two, or more than three, orientations may alternatively be used. The preferred orientations are counter clockwise 112, no rotation 114 and clockwise 116. Note that a fourth orientation, the upside-down 118, is technically and theoretically plausible but is not preferred due to the statistical improbability of such images.
  • One or more images rotated by 1°, or by a few arc-seconds or arc-minutes, or by 3° or 45°, or an arbitrary amount, may also be used.
  • the three images are then provided to the face detection software at block 120 and the results are analyzed at block 130 .
  • the image with the highest probability of detection of faces is determined at block 140 to be most likely the one with the right orientation.
  • FIG. 1 b is an alternative embodiment, wherein the classifiers are rotated as opposed to the images.
  • the execution time is highly optimized because the process is preferably not repeated over three images, and is instead performed over only a single image with two, three or more times the number of classifiers.
  • two sets of rotated classifiers are used along with an unrotated set.
  • the classifiers loaded at block 108 are rotated at block 160 to create counter clockwise classifiers 162 , original classifiers 164 and clockwise classifiers 166 .
  • a fourth set of classifiers 168 of 180 degree rotation can be generated, and in fact, any number of classifier sets may be generated according to rotations of arbitrary or selected amounts in accordance with alternative embodiments of this invention.
  • both the image and the classifiers may be rotated.
  • the classifiers are preferably combined into a single set of classifiers at block 170 .
  • the concatenation of the classifiers is preferably performed in such a manner that a false-eliminating process would still be optimized. Note that these operations need not be executed at the time of analysis, but can be prepared prior to running the process on an image, as a preparatory step. Also note that the two approaches may be combined, where some classifiers may or may not be used depending on the results of the previous classifiers. It may be possible to merge the preferred three sets, or an arbitrary number of two or more sets, of rotated classifiers.
  • after the common classifiers, the process would branch into the specific classifiers for each orientation. This would speed up the algorithm because the first part of the classification would be common to the three orientations.
  • if the classifier set contains rotation-invariant classifiers, it is possible to reduce the number of classifiers which must be applied to an image from 3N to 3N - 2M, where N is the number of classifiers in the original classifier set and M is the number of rotation-invariant classifiers.
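  • A small sketch of the merged classifier set and of the 3N - 2M count above (the rotate() helper, the predicate and the list representation are hypothetical, chosen only to illustrate the counting):

```python
# Sketch: build one concatenated set out of three rotated copies, sharing the
# rotation-invariant classifiers so they are evaluated once instead of three times.
def build_combined_set(classifiers, is_rotation_invariant, rotate):
    invariant = [c for c in classifiers if is_rotation_invariant(c)]
    variant = [c for c in classifiers if not is_rotation_invariant(c)]
    combined = list(invariant)                    # shared across orientations
    for angle in (0, 90, 270):                    # the three preferred orientations
        combined.extend(rotate(c, angle) for c in variant)
    return combined

# e.g. with N = 1000 classifiers of which M = 200 are rotation invariant:
# len(combined) == 200 + 3 * 800 == 2600 == 3 * 1000 - 2 * 200
```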
  • the image is then prepared at block 158 to run the face detection algorithm at block 122 .
  • Such preparation varies with the algorithm and can include different operations such as converting the image format, the color depth, the pixel representation, etc. In some cases the image is converted, such as described by Viola and Jones, to form an integral-image representation from the pixel-based one.
  • the image may be subsampled to reduce computation, converted to a gray scale representation, or various image enhancement algorithms such as edge enhancement, sharpening, blurring, noise reduction etc. may be applied to the image. Numerous operations on the image in preparation may also be concatenated.
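  • Two of the preparation steps above, subsampling and the Viola-Jones style integral image, can be sketched as follows (the step size and the function names are illustrative assumptions):

```python
# Sketch: subsample a grayscale array and build its integral image, which
# allows any rectangular pixel sum to be read off in constant time.
import numpy as np

def prepare(gray, step=2):
    """Return (subsampled image, integral image) for a 2-D grayscale array."""
    small = gray[::step, ::step].astype(np.float64)
    integral = small.cumsum(axis=0).cumsum(axis=1)
    return small, integral

def box_sum(integral, y0, x0, y1, x1):
    """Sum of small[y0:y1, x0:x1] in O(1) using the integral image."""
    total = integral[y1 - 1, x1 - 1]
    if y0 > 0:
        total -= integral[y0 - 1, x1 - 1]
    if x0 > 0:
        total -= integral[y1 - 1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += integral[y0 - 1, x0 - 1]
    return total
```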
  • the face detection algorithm is run once on the image at block 122 , using the multiple set of classifiers 170 .
  • the results are then collated at block 128 , according to each of the three orientations of the preferred classifier set.
  • the number of surviving face regions for each orientation of the classifier set is next compared at block 130.
  • the orientation with the highest number of surviving face regions is determined at block 140 to be the most likely orientation.
  • the algorithm may handle cases of false detection of faces.
  • a problem occurs when regions that are not faces are marked as potential faces. In such cases, it is not enough to count the occurrences of faces; the probability of false detections and missed faces needs to be accounted for.
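  • One way to account for false detections, sketched under assumed inputs (a per-orientation list of detector confidences and an arbitrarily chosen threshold), is to weight the surviving face regions by confidence rather than simply counting them:

```python
# Sketch: pick the orientation by (number of confident faces, total confidence)
# so a few weak, likely-false detections do not outvote one or two strong ones.
def choose_orientation(results, min_confidence=0.5):
    """results: dict mapping orientation (degrees) -> list of confidences in [0, 1]."""
    def score(confidences):
        kept = [c for c in confidences if c >= min_confidence]
        return (len(kept), sum(kept))
    return max(results, key=lambda angle: score(results[angle]))

# Example: 270 has more raw hits, but they are weak; 0 wins on confident faces.
print(choose_orientation({0: [0.9, 0.8], 90: [0.4], 270: [0.45, 0.3, 0.55]}))  # -> 0
```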
  • an apparatus may be provided for detection and recognition of specific features in an image using an eigenvector approach to face detection (see, e.g., U.S. Pat. No. 5,710,833 to Moghaddam et al., incorporated by reference).
  • Additional eigenvectors may be used in addition to or alternatively to the principal eigenvector components, e.g., all eigenvectors may be used.
  • the use of all eigenvectors may be intended to increase the accuracy of the apparatus to detect complex multi-featured objects.
  • Such eigenvectors are orientation sensitive, a feature that can be utilized according to this invention.
  • Faces may be detected in complex visual scenes and/or in a neural network based face detection system, particularly for digital image processing in accordance with preferred or alternative embodiments herein (see, e.g., U.S. Pat. No. 6,128,397 to Baluja & Rowley; and “Neural Network-Based Face Detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 1, pages 23-28, January 1998, by the same authors, each reference being hereby incorporated by reference).
  • An image may be rotated prior to the application of the neural network analysis in order to optimize the success rate of the neural-network based detection (see, e.g., U.S. Pat. No. 6,128,397, incorporated by reference).
  • Face detection in accordance with preferred and alternative embodiments, and which are particularly advantageous when a complex background is involved, may use one or more of skin color detection, spanning tree minimization and/or heuristic elimination of false positives (see, e.g., U.S. Pat. No. 6,263,113 to Abdel-Mottaleb et al., incorporated by reference).
  • the neural-network classifiers may be rotated to determine the match based on the image orientation, as described by this invention.
  • an embodiment including electrical, software and/or firmware components that detect blue sky within images may be included (see, e.g., U.S. Pat. No. 6,504,951 to Luo et al., incorporated by reference).
  • This feature allows image orientation to be determined once the blue-sky region(s) are located and analyzed in an image.
  • other image aspects are also used in combination with blue sky detection and analysis, and in particular the existence of facial regions in the image, to determine the correct orientation of an image.
  • filters, including color-based filters with specific orientation characteristics, can be introduced into the system as added classifiers, thus expanding the scope of the invention from face detection to generic automatic orientation detection using generic image object analysis.
  • Another embodiment includes a scene recognition method and a system using brightness and ranging mapping (see, e.g., US published patent application 2001/0031142 to Whiteside, incorporated by reference). Auto-ranging and/or brightness measurement may be used as orientation-specific features for this invention.
  • the orientation may be suggested to a user in the acquisition device after the image has been acquired or captured by a camera (see, e.g., U.S. Pat. No. 6,516,154 to Parulski et al., incorporated by reference).
  • a user may confirm the new orientation before saving a picture or before deciding to re-save or delete the picture.
  • the user may choose to re-take a picture using different settings on the camera. Suggestion for improvements may be made by the camera user-interface.
  • automatically or semi-automatically improving the appearance of faces in images based on automatically and/or manually detecting such facial images in the digital image is an advantageous feature (see also US published patent application 20020172419, to Lin et al., incorporated by reference). Lightness contrast and color level modification of an image may be performed to produce better results. Moreover, using such information for detecting orientation may provide assistance as part of an in-camera acquisition process to perform other face related operations such as composition or a slide show, as may be recited at U.S. patent application Ser. No. 10/608,772, filed Jun. 26, 2003, hereby incorporated by reference.
  • Image enhancement may be applied to a face region or face regions only, or the enhancement may be applied to the entire image, or selective and distinct corrections may be applied to both background and foreground regions, particularly facial regions, based on knowledge of the presence of faces in the image and/or other image regions such as blue sky or other detectable features.
  • various schemes may be used for selecting an area or areas of interest from an electronically captured image, most preferably areas including faces or facial regions (see also UK patent application number GB0031423.7 entitled “automatic cropping of electronic images”, incorporated by reference). Regions of interest may be automatically or semi-automatically selected within an image in response to a selection signal (see, e.g., US published patent application 2003/0025812, incorporated by reference).

Abstract

A method of automatically establishing the correct orientation of an image using facial information. The method exploits an inherent property of image recognition algorithms in general, and face detection in particular: the recognition is based on criteria that are highly orientation-sensitive. By applying a detection algorithm to images in various orientations, or alternatively by rotating the classifiers, and comparing the number of faces successfully detected in each orientation, one may conclude which orientation is most likely the correct one. Such a method can be implemented as an automated method or a semi-automatic method to guide users in viewing, capturing or printing of images.

Description

    PRIORITY
  • This application is a Continuation of U.S. patent application Ser. No. 13/330,480, filed on Dec. 19, 2011; which is a Continuation of U.S. patent application Ser. No. 12/949,751, filed Nov. 18, 2010, now U.S. Pat. No. 8,081,844; which is a Continuation of U.S. patent application Ser. No. 12/482,305, filed Jun. 10, 2009, now U.S. Pat. No. 7,844,135; which is a Continuation of U.S. patent application Ser. No. 11/024,046, filed Dec. 27, 2004, now U.S. Pat. No. 7,565,030; which is a Continuation-in-Part of U.S. patent application Ser. No. 10/608,772, filed Jun. 26, 2003, now U.S. Pat. No. 7,440,593, hereby incorporated by reference.
  • BACKGROUND
  • 1. Field of the Invention
  • The invention relates to automatic suggesting or processing of enhancements of a digital image using information gained from identifying and analyzing faces appearing within the image, and in particular to a method of detecting the image orientation using face detection. The invention provides automated orientation detection for photographs taken and/or images detected, acquired or captured in digital form or converted to digital form, by using information about the faces in the photographs and/or images.
  • 2. Description of the Related Art
  • Viola and Jones in the paper entitled “Robust Real Time Object Detection” as presented in the 2nd international workshop on Statistical and Computational theories of Vision, in Vancouver, Canada, Jul. 31, 2001, describe a visual object detection framework that is capable of processing images extremely rapidly while achieving high detection rates. The paper demonstrates this framework by the task of face detection. The technique is based on a learning technique where a small number of critical visual features yield a set of classifiers.
  • Yang et al., IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 1, pages 34-58 (January 2002), give a useful and comprehensive review of face detection techniques. These authors discuss various methods of face detection which may be divided into four main categories: (i) knowledge-based methods; (ii) feature-invariant approaches, including the identification of facial features, texture and skin color; (iii) template matching methods, both fixed and deformable; and (iv) appearance-based methods, including eigenface techniques, statistical distribution based methods and neural network approaches. They also discuss a number of the main applications for face detection technology. It is recognized in the present invention that none of the prior art describes or suggests using detection and knowledge of faces in images to create and/or use tools for the enhancement or correction of the images according to the invention as set forth in the claims below, nor as described in detail below as preferred and alternative embodiments.
  • Baluja, 1997 describes methods of extending the upright, frontal, template-based face detection system to efficiently handle all in-plane rotations, thus achieving a rotation-invariant face detection system.
  • a. Faces as Subject Matter
  • It is well known that human faces are the most photographed subject matter for the amateur and professional photographer. Thus it is possible to assume a high starting percentage of images containing faces for algorithms based on the existence of faces in them.
  • b. Orientation
  • The camera is usually held horizontally or vertically, rotated counter clockwise or clockwise relative to the horizontal position when the picture is taken, creating what is referred to as landscape mode or portrait mode, respectively. Thus most images are taken in one of three orientations, namely landscape, clockwise portrait and counterclockwise portrait. When viewing images, it is preferable to determine ahead of time the orientation of the camera at acquisition, thus eliminating a step of rotating the image and automatically orienting the image. The system may try to determine if the image was shot horizontally, which is also referred to as landscape format, where the width is larger than the height of an image, or vertically, also referred to as portrait mode, where the height of the image is larger than the width. Techniques may be used to determine an orientation of an image. Primarily, these techniques either record the camera orientation at acquisition time using an in-camera mechanical indicator or attempt to analyze image content post-acquisition. In-camera methods, although providing precision, use additional hardware and sometimes movable hardware components, which can increase the price of the camera and add a potential maintenance challenge. However, post-acquisition analysis may not generally provide sufficient precision. With knowledge of the location, size and orientation of faces in a photograph, a computerized system can offer powerful automatic tools to enhance and correct such images or to provide options for enhancing and correcting images.
  • c. Face Recognition as a Function of Orientation
  • It is a well known fact for one familiar in the art of face recognition that the human visual system is very sensitive to the orientation of the faces. As a matter of fact, experiments indicated that the way the human mind stores faces is different for upright and inverted faces, as described in Endo, 1982. In particular, recognition of inverted faces is known to be a difficult perceptual task. While the human visual system performs well in recognizing different faces, performing the same task with inverted faces is significantly worse. Such results are illustrated for example in Moses, 1994, where face memory and face recognition is determined to be highly orientation dependent. A detailed review of face recognition of inverted faces is available in Valentine, 1988.
  • It is therefore only natural that artificial intelligence detection algorithms based on face related classifiers may have the same features of being orientation variant.
  • d. Image Classifiers for scene analysis:
  • Even though human beings have no problem interpreting images semantically, the challenge of doing so using artificial intelligence is not as straightforward. A few methods are available to those familiar with the art of image and pattern recognition that separate images using a learning-based descriptor space. Such methods use a training set and likelihood maximization methods. Examples of such methods include the AdaTron (1989) method as described by Anlauf et al., incorporated herein by reference. Other work includes scene analysis such as the work by Le Saux et al. (2004).
  • SUMMARY OF THE INVENTION
  • In view of the above, a method of analyzing and processing a digital image using the results of face detection algorithms within said image to determine the correct orientation of the image is provided.
  • A face detection algorithm with classifiers that are orientation sensitive, or otherwise referred to as rotation variant, is applied to an image, or a subsampled resolution of an image. The image is then rotated, or the classifiers are rotated, and the search is repeated for the orientations that are under question. Based on the results of the detection, the orientation in which the highest number of faces is detected, and/or which yields the highest face detection confidence level, is the one estimated to be the correct orientation of the image.
  • The digital image may be digitally-acquired and/or may be digitally-captured. Decisions for processing the digital image based on said face detection, selecting one or more parameters and/or for adjusting values of one or more parameters within the digital image may be automatically, semi-automatically or manually performed.
  • Values of orientation may be adjusted arbitrarily or in known intervals, e.g., of 90 degrees, such that a rotation value for the digital image may be determined.
  • The method may be performed within a digital acquisition device or an external device or a combination thereof. Rotation can also be applied as part of the transfer process between devices.
  • The face pixels may be identified, and a false indication of another face within the image may be removed. The identifying of face pixels may be automatically performed by an image processing apparatus, and a manual verification of a correct detection of at least one face within the image may be provided.
  • A method is further provided for detecting an orientation of a digital image using statistical classifier techniques. A set of classifiers are applied to a digital image in a first orientation and a first level of match between the digital image at the first orientation and the classifiers is determined. The digital image is rotated to a second orientation, and the classifiers are applied to the rotated digital image at the second orientation. A second level of match is determined between the rotated digital image at the second orientation and the classifiers. The first and second levels of match are compared. It is determined which of the first orientation and the second orientations has a greater probability of being a correct orientation based on which of the first and second levels of match, respectively, comprises a higher level of match.
  • The method may further include rotating the digital image to a third orientation, applying the classifiers to the rotated digital image at the third orientation, and determining a third level of match between the rotated digital image at the third orientation and the classifiers. The third level of match is compared with the first level of match or the second level of match, or both. It is determined which of two or more of the first orientation, the second orientation and the third orientation has a greater probability of being a correct orientation based on which of the corresponding levels of match is greater.
  • A method is also provided for detecting an orientation of a digital image using statistical classifier techniques. The method includes applying a set of classifiers to a digital image in a first orientation and determining a first level of match between the digital image at the first orientation and the classifiers. The set of classifiers is rotated by a first predetermined amount, and the classifiers rotated by the first amount are applied to the digital image at the first orientation. A second level of match is determined between the digital image at the first orientation and the classifiers rotated by the first amount. The first and second levels of match are compared, and it is determined which of the first and second levels of match is greater in order to determine whether the first orientation is a correct orientation of the digital image. A rotation of the classifiers by a second amount may be performed and the method performed with three relatively rotated sets of classifiers, and so on.
  • One or more processor readable storage devices are also provided having processor readable code embodied thereon. The processor readable code programs one or more processors to perform any of the methods for detecting an orientation of a digital image using statistical classifier techniques briefly summarized above.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 a is a flow diagram that illustrates a main orientation workflow based on rotation of a digital image that includes one or more faces.
  • FIG. 1 b is a flow diagram that illustrates a main orientation workflow based on rotation of classifiers relative to an orientation of a digital image that includes one or more faces.
  • FIG. 1 c describes an exemplary implementation of the process illustrated at FIG. 1 a and/or FIG. 1 b.
  • FIG. 2 a illustrates an ellipse-based orientation classifier that may be used in a process in accordance with a preferred embodiment.
  • FIG. 2 b illustrates an ellipse-based classifier system applied to a facial image.
  • FIG. 3 a illustrates four different potential orientations of a single image.
  • FIG. 3 b illustrates different orientations of classifiers applied to a same image.
  • FIG. 4 a illustrates a matching of ellipse-based classifiers within images.
  • FIG. 4 b illustrates a matching of complex classifiers with an image.
  • INCORPORATION BY REFERENCE
  • What follows is a cite list of references each of which is, in addition to those references otherwise cited in this application, and that which is described as background, the invention summary, the abstract, the brief description of the drawings and the drawings themselves, hereby incorporated by reference into the detailed description of the preferred embodiments below, as disclosing alternative embodiments of elements or features of the preferred embodiments not otherwise set forth in detail below. A single one or a combination of two or more of these references may be consulted to obtain a variation of the preferred embodiments described in the detailed description herein:
    • U.S. Pat. Nos. RE33682, RE31370, 4,047,187, 4,317,991, 4,367,027, 4,638,364, 5,291,234, 5,488,429, 5,638,136, 5,710,833, 5,724,456, 5,781,650, 5,812,193, 5,818,975, 5,835,616, 5,870,138, 5,900,909, 5,978,519, 5,991,456, 6,097,470, 6,101,271, 6,128,397, 6,148,092, 6,151,073, 6,188,777, 6,192,149, 6,249,315, 6,263,113, 6,268,939, 6,282,317, 6,301,370, 6,332,033, 6,393,148, 6,404,900, 6,407,777, 6,421,468, 6,438,264, 6,456,732, 6,459,436, 6,473,199, 6,501,857, 6,504,942, 6,504,951, 6,516,154, and 6,526,161; United States published patent applications no. 2004/40013304, 2004/0223063, 2004/0013286, 2003/0071908, 2003/0052991, 2003/0025812, 2002/20102024, 2002/0172419, 2002/0114535, 2002/0105662, and 2001/0031142;
    • Japanese patent application no. JP5260360A2;
    • British patent application no. GB0031423.7; and
    • Anlauf, J. K. and Biehl, M.: “The AdaTron: an adaptive perceptron algorithm”, Europhysics Letters, 10:687-692, 1989;
    • Baluja & Rowley, “Neural Network-Based Face Detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 1, pages 23-28, January 1998;
    • Baluja, Shumeet in “Face Detection with In-Plane rotation: Early Concepts and Preliminary Results”, Technical Report JPRC-TR-97-001;
    • Endo, M., “Perception of upside-down faces: an analysis from the viewpoint of cue saliency”, in Ellis, H., Jeeves, M., Newcombe, F., and Young, A., editors, Aspects of Face Processing, 53-58, 1986, Martinus Nijhoff Publishers;
    • Moses, Yael, Ullman, Shimon, and Edelman, Shimon, “Generalization to Novel Images in Upright and Inverted Faces”, 1994;
    • Le Saux, Bertrand and Amato, Giuseppe: “Image Classifiers for Scene Analysis”, International Conference on Computer Vision and Graphics (ICCVG '04), Warsaw, Poland, September 2004;
    • Valentine, T., “Upside-Down Faces: A review of the effect of inversion and encoding activity upon face recognition”, 1988, Acta Psychologica, 61:259-273;
    • Viola and Jones “Robust Real Time Object Detection”, 2nd international workshop on Statistical and Computational theories of Vision, in Vancouver, Canada, Jul. 31, 2001;
    • Yang et al., IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, no. 1, pp 34-58 (January 2002).
    ILLUSTRATIVE DEFINITIONS
  • “Face Detection” involves the art of detecting faces in a digital image. One or more faces may be first isolated and/or identified within a larger digital image prior to further processing based at least in part on the detection of the faces. Face detection includes a process of determining whether a human face is present in an input image, and may include or is preferably used in combination with determining a position and/or other features, properties, parameters or values of parameters of the face within the input image.
  • “Image-enhancement” or “image correction” involves the art of modifying a digital image to improve its quality or according to another selected manual or automatic input criteria. A “global” modification is one that is applied to an entire image or substantially the entire image, while a “selective” modification is applied differently to different portions of the image or to only a selected portion of the image.
  • A “pixel” is a picture element or a basic unit of the composition of a digital image or any of the small discrete elements that together constitute an image.
  • A “digitally-captured image” includes an image that is digitally located and held in a detector, preferably of a portable digital camera or other digital image acquisition device.
  • A “digitally-acquired image” includes an image that is digitally recorded in a permanent file and/or preserved in a more or less permanent digital form.
  • “A digitally-detected image” is an image comprising digitally detected electromagnetic waves.
  • “Classifiers” are generally reference parameters selectively or automatically correlated or calibrated to some framework or absolute reference criteria. For example, one or more orientation classifiers in a 2-dimensional image may be configured according to a proper and/or selected orientation of a detected face within a digital image. Such classifiers may be calibrated or correlated with a detected facial orientation such that an overall digital image containing a face may be oriented according to these calibrated or correlated classifiers.
  • Classifiers may be statistical or absolute: Statistical classifiers assign a class ωi so that given a pattern ŷ, the most probable P(ωi|ŷ) is the largest. In many cases, it is not desired to actually calculate P(ωi|ŷ), but rather to find the index i for which ωi will provide the largest P(ωi|ŷ). The accuracy of a statistical classifier generally depends on the quality of training data and of the algorithm used for classification. The selected populations of pixels used for training should be statistically significant. This means that a minimum number of observations are generally required to characterize a particular site to some selected or acceptable threshold level of error.
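  • For readability, the decision rule above can be restated compactly in standard Bayes-classifier notation (a restatement for illustration, not a verbatim formula from the patent):

```latex
% Assign the class whose posterior probability given the observed pattern is largest.
\[
  \hat{\omega} \;=\; \omega_{i^{*}},
  \qquad
  i^{*} \;=\; \arg\max_{i}\; P\!\left(\omega_i \mid \hat{y}\right).
\]
% Only the maximizing index i* is needed; the posterior value itself need not be computed.
```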
  • FIG. 2 a and FIG. 2 b illustrate in a graphical form non-exhaustive examples of classifiers. Objects 210, 212, and 214 in FIG. 2 a represent a simple ellipse classifier, in varying sizes. FIG. 2 b illustrates a complex classifier of a face, which is made of simpler classifiers. In this case, the mouth, 224 and the eyes 226, 228 correspond to ellipse classifiers 210 and 214 as defined in FIG. 2 a.
  • The classifiers need not be only of a certain shape. More complex classifiers can be of a more abstract physical nature. Alternatively, a classifier can be of color data. For example, a color classifier may be a classifier with higher content of blue towards the top and higher content of green or brown towards the bottom.
  • An “image orientation” is a rotational position of an image relative to a selected or permanent coordinate or coordinate system that may itself be determined relative to an absolute spatial system, such as the earth, or a system determined or selected within a frame of a digital image. Generally herein, an image orientation is identified relative to an orientation of one or more classifiers, such as the elliptical classifiers illustrated at 2 a-2 b, 3 b and 4 a-4 b.
  • As another example, an image orientation may be identified relative to a horizontal/vertical system, such as illustrated in FIG. 3 a. The image 310 may be rotated relative to this coordinate system or to an orientation of one or more elliptical classifiers by 90° counter clockwise 320 or clockwise 330. A fourth orientation 340 is a 180° rotation, which is also illustrated in FIG. 3 a. For most practical reasons, a 180 degree orientation is typically not a desired or viable situation for hand-held pictures. However, technically and theoretically, the up-side-down orientation can be included in the algorithm. Rotational positions may be defined relative to absolute or image-based coordinates, and rotations of the image and/or of the classifiers may be of arbitrary angular extent, e.g., 1° or finer, 5°, 10°, 15°, 30°, 45°, or others, selected in accordance with embodiments of the invention.
  • Classifier orientation is illustrated in FIG. 3 b. The classifiers of FIG. 3 b are oriented in three orientations corresponding to the image orientations shown. Object 360 represents a “correctly” oriented image, as selected or built-in to the digital system, block 350 represents a counter clockwise orientation, and block 370 represents a clockwise orientation. A “correct” orientation may be determined based on a combined level of match of multiple classifiers and/or on relative positions of the classifiers once matched to their respective facial regions. These regions may include the two eyes and mouth of a detected face, and may also include an outline of a person's head or entire face. The arrow labeled “N” in the example of FIG. 3 b points in a direction that is selected or determined to be the “correct” vertical axis of the image. The orientations illustrated at FIG. 3 b correspond to illustrative images 310, 320 and 330 in FIG. 3 a.
  • “Matching image classifiers to images” involves correlating or assigning classifier constructs to actual digital images or portions or sub-samplings of digital images. Such matching is illustrated at FIGS. 4 a and 4 b. According to FIG. 4 a, different sized ellipses, as already described as being examples of classifiers, e.g., ellipses 210, 212 and 214 of FIG. 2 a, are matched to various objects, e.g., eyes and mouth regions, in facial images. The matching is preferably performed for different image and/or facial region orientations, e.g., 400 and 410 of FIG. 4 a, to determine a correct or selected orientation of the digital image.
  • A correctly oriented ellipse may, however, match different objects in two orientations of an image, or may match objects other than those desired regardless of orientation. Referring to FIG. 4a, e.g., ellipse 214 correctly matches the lips 414 in image 410, but also matches the nose bridge 404 when the image is “incorrectly” oriented or not in the desired orientation. The smaller ellipse 210 matches both eyes 412 and 413 in the correctly oriented image 410. This example illustrates that a single classifier may not be sufficient, as there can be cases of false detection. This illustrates an advantage of the process of determining the orientation of faces based on statistical classifiers in accordance with a preferred embodiment of the present invention.
  • Concatenation is generally used herein to describe a technique wherein classifiers, objects, axes, or parameters are connected, linked, correlated, matched, compared or otherwise related according to a selected or built-in set of criteria, and/or to describe sequential performance of operations or processes in methods in accordance with embodiments of the invention. For example, an image may be determined to be properly aligned when the axes of a pair of eye ellipses are determined to be collinear, or when the image is oriented or re-oriented such that they are made collinear, or when an image and/or classifiers are rotated so that the foci of the eye ellipses form an isosceles triangle with the center of a mouth ellipse, etc.
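  • The following sketch illustrates one such geometric criterion, the isosceles-triangle test, in Python; the function name, the tolerance value and the image-coordinate convention (y growing downward) are assumptions made for this example rather than requirements of the embodiments.

        import numpy as np

        def eyes_mouth_consistent(eye_left, eye_right, mouth, tol=0.15):
            # Hypothetical "concatenation" test: the two eye-ellipse centres and
            # the mouth centre (each an (x, y) pair) should form a roughly
            # isosceles triangle, with the mouth lying below the eye-to-eye axis.
            eye_left, eye_right, mouth = map(np.asarray, (eye_left, eye_right, mouth))
            d_left = np.linalg.norm(mouth - eye_left)
            d_right = np.linalg.norm(mouth - eye_right)
            isosceles = abs(d_left - d_right) <= tol * max(d_left, d_right)
            below_axis = mouth[1] > max(eye_left[1], eye_right[1])
            return isosceles and below_axis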
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • Preferred embodiments are described below, including methods and devices for providing or suggesting options for determining image orientation automatically using face detection. A preferred embodiment includes an image processing application, whether implemented in software or in firmware, as part of the image capture process, such as in a digital camera, or as part of post-processing, such as on a desktop computer, in the camera as a post-processing background process, or in a server application. This system receives images in digital form, where the images can be translated into a grid representation including multiple pixels.
  • The preferred embodiment describes a method of re-using face detection information in different orientations of the image to determine the orientation with the highest probability of being the correct one. The information regarding the location and size of faces in an image assists in determining the correct orientation.
  • Advantages of the preferred embodiments include the ability to automatically perform, suggest or assist in the determination of the correct orientation of an image. Another advantage is that the processing may be automatically performed and/or suggested based on this information. Such automatic processing is fast enough and efficient enough to handle multiple images in close to real time, or to be used for a single image as part of the image processing in the acquisition device. Many advantageous techniques are provided in accordance with preferred and alternative embodiments set forth herein. For example, this method of detecting the image orientation can be combined with other methods of face detection, thus improving the functionality and re-purposing the process for future applications.
  • Two or more methods of detecting faces in different orientations may be combined to achieve better accuracy, and parameters of a single algorithm may be concatenated into a single parameter. The digital image may be transformed to speed up the process, such as by subsampling or reducing the color depth. The digital image may be transformed to enhance accuracy, such as by a preprocessing stage that improves the color balance, exposure or sharpness. The digital image may be post-processed to enhance accuracy, such as by removal of false positives in a post-processing step based on parameters and criteria outside of the face detection algorithm. Values of orientation may be adjusted such that a rotation value for the digital image is determined. This technique may be implemented to support arbitrary rotation or fixed-interval rotation, such as 90-degree rotation.
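  • As an illustrative sketch of the speed-oriented transforms mentioned above (subsampling and reduction to a single luminance channel), assuming NumPy and standard luma weights; this is one possible preparation step, not the specific transform required by the embodiments.

        import numpy as np

        def prepare_for_detection(rgb, step=2):
            # Subsample by keeping every `step`-th row and column, then collapse
            # RGB to a single luminance channel to reduce later computation.
            small = rgb[::step, ::step]
            gray = small.astype(float) @ np.array([0.299, 0.587, 0.114])
            return gray.astype(np.float32)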
  • The method may be performed within any digital image capture device, such as, but not limited to, a digital still camera, a phone handset with a built-in camera, a web camera or a digital video camera. Determining which of the sub-groups of pixels belong to which of the group of face pixels may be performed. The determining of the initial values of one or more parameters of pixels may be calculated based on the spatial orientation of the one or more sub-groups that correspond to one or more facial features. The spatial orientation of the one or more sub-groups that correspond to one or more facial features may be calculated based on an axis of an ellipse fit to the sub-group. The adjusted values of pixels within the digital image may be rounded to a closest multiple of 90 degrees. The initial values may be adjusted to adjusted values for re-orienting the image to an adjusted orientation. The one or more facial features may include an eye, two eyes, two eyes and a mouth, an eye, a mouth, a hairline, ears, nostrils, a nose bridge, eyebrows or a nose, or combinations thereof. On a more abstract level, the features used for the detection of objects in general in the image, or of faces specifically, may be determined through mathematical classifiers that are either deduced via a learning process or inserted into the system. One example of such classifiers is described by Viola and Jones in the paper incorporated herein by reference. Other classifiers can be eigenfaces, which are basis functions that define images with faces.
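  • As a minimal, illustrative sketch of estimating such a spatial orientation from a sub-group of feature pixels, the ellipse fit is approximated here by a principal-axis fit to the pixel coordinates, and the resulting angle is rounded to the closest multiple of 90 degrees; the function names and the principal-axis approximation are assumptions made for this example.

        import numpy as np

        def feature_axis_angle(ys, xs):
            # Approximate the major-axis angle (in degrees) of an ellipse fit to
            # a set of feature pixel coordinates via the principal axis of the
            # coordinate covariance.
            pts = np.stack([xs, ys], axis=1).astype(float)
            pts -= pts.mean(axis=0)
            eigvals, eigvecs = np.linalg.eigh(np.cov(pts, rowvar=False))
            major = eigvecs[:, np.argmax(eigvals)]
            return np.degrees(np.arctan2(major[1], major[0]))

        def round_to_quarter_turn(angle_deg):
            # Round an estimated rotation to the closest multiple of 90 degrees.
            return (round(angle_deg / 90.0) * 90) % 360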
  • Each of the methods provided is preferably implemented within software and/or firmware, either in the camera or with external processing equipment. The software may also be downloaded into the camera or image processing equipment. In this sense, one or more processor readable storage devices having processor readable code embodied thereon are provided. The processor readable code programs one or more processors to perform any of the above or below described methods.
  • FIG. 1a illustrates a process flow according to a preferred embodiment. The input is an image, which can come from various sources. According to this exemplary procedure, an image may be opened by a software, firmware or other program application at block 102. The process may be initiated when a photographer takes a picture at block 103, or as an automatic background process for an application or acquisition device at block 104.
  • The classifiers are preferably pre-determined for the specific image classification. A detailed description of the learning process used to create the appropriate classifiers can be found in the paper by Viola and Jones cited and incorporated by reference hereinabove. The classifiers are loaded into the application at block 108.
  • The image is preferably rotated into three orientations at block 110. Alternatively, only two orientations, or more than three, may be used. The preferred orientations are counterclockwise 112, no rotation 114 and clockwise 116. Note that a fourth orientation, the upside-down orientation 118, is technically and theoretically plausible but is not preferred due to the statistical improbability of such images. One or more images rotated by 1°, or by a few arc-minutes or arc-seconds, or by 3° or 45°, or by an arbitrary amount, may also be used.
  • The three images are then provided to the face detection software at block 120 and the results are analyzed at block 130. The image with the highest probability of detection of faces is determined at block 140 to be the one most likely in the correct orientation.
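  • A minimal sketch of this flow follows (illustrative only; detect_faces is an assumed callable standing in for the face detection step of block 120, returning a list of detected face regions for an image).

        import numpy as np

        def best_orientation_by_image_rotation(image, detect_faces):
            # Rotate the image into the candidate orientations, run the detector
            # on each, and keep the orientation yielding the most detected faces.
            candidates = {"CCW": np.rot90(image, 1),
                          "O": image,
                          "CW": np.rot90(image, 3)}
            counts = {name: len(detect_faces(img)) for name, img in candidates.items()}
            best = max(counts, key=counts.get)
            return best, candidates[best]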
  • FIG. 1b is an alternative embodiment, wherein the classifiers are rotated as opposed to the images. While the results are similar, the execution time is greatly reduced because the process need not be repeated over three images; instead, it is performed over only a single image with two, three or more times the number of classifiers. Preferably, two sets of rotated classifiers are used along with an unrotated set. According to FIG. 1b, the classifiers loaded at block 108 are rotated at block 160 to create counterclockwise classifiers 162, original classifiers 164 and clockwise classifiers 166. As explained above, if desired, a fourth set of classifiers 168 with 180-degree rotation can be generated, and in fact, any number of classifier sets may be generated according to rotations of arbitrary or selected amounts in accordance with alternative embodiments of this invention. In a third embodiment, both the image and the classifiers may be rotated.
  • The classifiers are preferably combined into a single set of classifiers at block 170. The concatenation of the classifiers is preferably performed in such a manner that a false-candidate elimination process would still be optimized. Note that these operations need not be executed at the time of analysis, but can be prepared prior to running the process on an image, as a preparatory step. Also note that the two approaches may be combined, where some classifiers may or may not be used depending on the results of the previous classifiers. It may be possible to merge the preferred three sets, or an arbitrary number of two or more sets, of rotated classifiers.
  • Part-way through the common classifiers, one would branch into the specific classifiers for each orientation. This would speed up the algorithm because the first part of the classification would be common to the three orientations.
  • In another embodiment, where the classifier set contains rotation-invariant classifiers, it is possible to reduce the number of classifiers which must be applied to an image from 3N to 3N-2M, where N is the number of classifiers in the original classifier set and M is the number of rotation-invariant classifiers (each rotation-invariant classifier need be applied only once rather than once per orientation). The image is then prepared at block 158 to run the face detection algorithm at block 122. Such preparation varies with the algorithm and can include different operations such as converting the image format, the color depth, the pixel representation, etc. In some cases the image is converted, such as described by Viola and Jones, from a pixel-based representation to an integral-image representation. In other cases the image may be subsampled to reduce computation, converted to a gray-scale representation, or various image enhancement algorithms such as edge enhancement, sharpening, blurring, noise reduction, etc. may be applied to the image. Numerous operations on the image in preparation may also be concatenated. The face detection algorithm is run once on the image at block 122, using the combined set of classifiers 170. The results are then collated at block 128, according to each of the three orientations of the preferred classifier set. The number of surviving face regions for each orientation of the classifier set is next compared at block 130. The orientation with the highest number of surviving face regions is determined at block 140 to be the orientation with the highest likelihood.
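  • A minimal sketch of the collation and comparison of blocks 128-140 follows (illustrative only; classifier_sets is assumed to map an orientation name to its rotated classifier set, and apply_set is an assumed callable returning the face regions that survive one classifier set on one prepared image).

        def best_orientation_by_classifier_rotation(image, classifier_sets, apply_set):
            # The image is analysed once, while the classifier set is applied in
            # each of its rotated forms; the orientation whose classifiers leave
            # the most surviving face regions is taken as the likely orientation.
            survivors = {name: apply_set(image, cset)
                         for name, cset in classifier_sets.items()}
            return max(survivors, key=lambda name: len(survivors[name]))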
  • In an additional embodiment, the algorithm may handle cases of false detection of faces. The problem occurs when regions that are not faces are marked as potential faces. In such cases, it is not enough to count the occurrence of faces; the probability of false detection and missed faces needs to be accounted for.
      • Such an algorithm, which is an expansion of block 140 of FIGS. 1a and 1b, is described with reference to the flow diagram illustrated in FIG. 1c:
      • Some representations used in the algorithm include the following:
      • DIR: the most populated direction, i.e., the direction with the maximal number of detected faces (DIR is one of CCW, O, CW).
      • M: the minimal non-zero number of detected faces in any direction (denoted m in the expressions below).
      • NZ: the number of populated directions (directions for which we have detection).
      • N: the total number of detected faces.
      • CONST: a probability factor which, based on empirical results, can range from 0.6 to 0.9.
      • An exemplary orientation decision may be determined as follows:
      • 1410: NZ==0, no faces are found in the image; the image orientation is the 1490 DEFAULT (keep the image as it is).
      • 1420: NZ==1, faces are found in a single direction (e.g., a single face in the image); the image orientation is DIR.
      • 1421: If NZ>1:
        • 1430: if NZ*m/N<=CONST, there are multiple faces and multiple populated orientations with a predominant orientation; the image orientation is DIR.
        • 1431: otherwise, NZ*m/N>CONST, there are multiple faces and multiple populated orientations without a predominant orientation; the image orientation is the 1490 DEFAULT (no decision can be taken; keep the image as it is).
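      • A minimal Python sketch of this decision rule follows (illustrative only; the counts argument is assumed to map each direction to its number of detected faces, and CONST is taken here as 0.75, a value within the stated empirical range).

        def decide_orientation(counts, const=0.75, default="O"):
            # counts: e.g. {"CCW": 0, "O": 3, "CW": 1}
            populated = {d: c for d, c in counts.items() if c > 0}
            nz = len(populated)                            # NZ: populated directions
            if nz == 0:
                return default                             # 1410 -> 1490 DEFAULT
            n = sum(populated.values())                    # N: total detected faces
            m = min(populated.values())                    # m: minimal non-zero count
            direction = max(populated, key=populated.get)  # DIR: most populated
            if nz == 1:
                return direction                           # 1420: single direction
            if nz * m / n <= const:
                return direction                           # 1430: predominant orientation
            return default                                 # 1431: no predominant orientation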
  • Automatic orientation detection, and in particular orientation detection using faces, particularly for digital image processing applications according to preferred and alternative embodiments set forth herein, is further advantageous in accordance with various modifications of the systems and methods of the above description as may be understood by those skilled in the art, as set forth in the references cited and incorporated by reference herein, and as may be otherwise described below.
  • For example, an apparatus according to another embodiment may be provided for detection and recognition of specific features in an image using an eigenvector approach to face detection (see, e.g., U.S. Pat. No. 5,710,833 to Moghaddam et al., incorporated by reference). Additional eigenvectors may be used in addition to, or instead of, the principal eigenvector components, e.g., all eigenvectors may be used. The use of all eigenvectors may be intended to increase the accuracy of the apparatus in detecting complex multi-featured objects. Such eigenvectors are orientation sensitive, a feature that can be utilized according to this invention.
  • Faces may be detected in complex visual scenes and/or in a neural network based face detection system, particularly for digital image processing in accordance with preferred or alternative embodiments herein (see, e.g., U.S. Pat. No. 6,128,397 to Baluja & Rowley; and “Neural Network-Based Face Detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 1, pages 23-28, January 1998, by the same authors, each reference being hereby incorporated by reference). An image may be rotated prior to the application of the neural network analysis in order to optimize the success rate of the neural-network based detection (see, e.g., U.S. Pat. No. 6,128,397, incorporated by reference). This technique is particularly advantageous when faces are oriented vertically. Face detection in accordance with preferred and alternative embodiments, which is particularly advantageous when a complex background is involved, may use one or more of skin color detection, spanning tree minimization and/or heuristic elimination of false positives (see, e.g., U.S. Pat. No. 6,263,113 to Abdel-Mottaleb et al., incorporated by reference). Alternatively, according to this invention, the neural-network classifiers may be rotated to determine the match based on the image orientation, as described herein.
  • In the context of automatic image rotation and determining image orientation, an embodiment including electrical, software and/or firmware components that detect blue sky within images may be included (see, e.g., U.S. Pat. No. 6,504,951 to Luo et al., incorporated by reference). This feature allows image orientation to be determined once the blue-sky region(s) are located and analyzed in an image. In accordance with an alternative embodiment, other image aspects are also used in combination with blue-sky detection and analysis, and in particular the existence of facial regions in the image, to determine the correct orientation of an image. In accordance with this invention, such filters, including color-based filters with specific orientation characteristics, can be introduced into the system as added classifiers, thus expanding the scope of the invention from face detection to generic automatic orientation detection using generic image object analysis.
  • Another embodiment includes a scene recognition method and system using brightness and ranging mapping (see, e.g., US published patent application 2001/0031142 to Whiteside, incorporated by reference). Auto-ranging and/or brightness measurement may be used as orientation-specific features for this invention.
  • In further preferred and alternative embodiments, the orientation may be suggested to a user in the acquisition device after the image has been acquired or captured by a camera (see, e.g., U.S. Pat. No. 6,516,154 to Parulski et al., incorporated by reference). According to these embodiments, a user may confirm the new orientation before saving a picture or before deciding to re-save or delete the picture. The user may choose to re-take a picture using different settings on the camera. Suggestions for improvement may be made via the camera user interface.
  • In preferred embodiments herein, automatically or semi-automatically improving the appearance of faces in images, based on automatically and/or manually detecting such facial images in the digital image, is an advantageous feature (see also US published patent application 20020172419 to Lin et al., incorporated by reference). Lightness contrast and color level modification of an image may be performed to produce better results. Moreover, using such information for detecting orientation may provide assistance, as part of an in-camera acquisition process, in performing other face-related operations such as composition or a slide show, as may be recited in U.S. patent application Ser. No. 10/608,772, filed Jun. 26, 2003, hereby incorporated by reference.
  • Based on the detection of the correct orientation, image enhancement according to preferred and alternative embodiments herein may be applied to a face region or face regions only, or the enhancement may be applied to the entire image, or selective and distinct corrections may be applied to both background and foreground regions, particularly facial regions, based on knowledge of the presence of faces in the image and/or of other image regions such as blue sky or other detectable features.
  • In further embodiments, various schemes may be used for selecting an area or areas of interest from an electronically captured image, most preferably areas including faces or facial regions (see also UK patent application number GB0031423.7 entitled “automatic cropping of electronic images”, incorporated by reference). Regions of interest may be automatically or semi-automatically selected within an image in response to a selection signal (see, e.g., US published patent application 2003/0025812, incorporated by reference).
  • While exemplary drawings and specific embodiments of the present invention have been described and illustrated, it is to be understood that the scope of the present invention is not to be limited to the particular embodiments discussed. Thus, the embodiments shall be regarded as illustrative rather than restrictive, and it should be understood that variations may be made in those embodiments by workers skilled in the arts without departing from the scope of the present invention as set forth in the claims that follow and their structural and functional equivalents.
  • In addition, in methods that may be performed according to preferred embodiments herein, the operations have been described in selected typographical sequences. However, the sequences have been selected and so ordered for typographical convenience and are not intended to imply any particular order for performing the operations, unless a particular ordering is expressly provided or understood by those skilled in the art as being necessary.

Claims (22)

What is claimed is:
1. A method of enhancing a digital image using statistical classifier techniques, comprising:
using a processor;
acquiring a first digital image;
detecting an object within the first digital image;
generating in-camera, capturing or otherwise obtaining in-camera a collection of further digital images including said object captured at different times than said first digital image;
tracking said object within said collection of further images;
applying a set of object detection classifiers to the first digital image and at least one image of the collection of further images;
determining that at least one of said further digital images has a greater probability of comprising a higher level of match to a desired image parameter than said first digital image; and
generating a re-composited first digital image including replacing said object within said first digital image with said object from said at least one further digital image captured respectively at said different times.
2. The method of claim 1, further comprising cropping the first digital image including selecting an object region that includes said object within the first digital image and excludes one or more regions of the first digital image outside of said object region to obtain a cropped image including the object region.
3. The method of claim 1, further comprising detecting a second object within the first digital image; and wherein said generating said re-composited first digital image further includes maintaining said same second object within said re-composited first digital image.
4. The method of claim 3, wherein the tracking said object comprises tracking both the object and the second object respectively continuing to identify, in one or more subsequently captured digital images of the further digital images, the object and the second object as corresponding to the object and the second object captured within the first digital image at said different times.
5. The method of claim 1, further comprising automatically adjusting focus position based upon detecting that the focus position of the object has changed based on movement between two or more of the first and further digital images captured at said different times.
6. The method of claim 1, wherein said collection of further images comprises relatively low resolution images compared with said first digital image.
7. The method of claim 1, wherein said object comprises a face.
8. One or more non-transitory computer readable storage devices having processor readable code embodied thereon, said processor readable code for programming one or more processors to perform a method of enhancing a digital image using statistical classifier techniques, wherein the method comprises:
detecting an object within a first digital image;
generating in-camera, capturing or otherwise obtaining in-camera a collection of further digital images including said object captured at different times than said first digital image;
tracking said object within said collection of further images;
applying a set of object detection classifiers to the first digital image and at least one image of the collection of further images;
determining that at least one of said further digital images has a greater probability of comprising a higher level of match to a desired image parameter than said first digital image; and
generating a re-composited first digital image including replacing said object within said first digital image with said object from said at least one further digital image captured respectively at said different times.
9. The one or more non-transitory computer readable storage devices of claim 8, wherein the method further comprises cropping the first digital image including selecting an object region that includes said object within the first digital image and excludes one or more regions of the first digital image outside of said object region to obtain a cropped image including the object region.
10. The one or more non-transitory computer readable storage devices of claim 8, wherein the method further comprises detecting a second object within the first digital image; and wherein said generating said re-composited first digital image further includes maintaining said same second object within said re-composited first digital image.
11. The one or more non-transitory computer readable storage devices of claim 10, wherein the tracking said object comprises tracking both the object and the second object respectively continuing to identify, in one or more subsequently captured digital images of the further digital images, the object and the second object as corresponding to the object and the second object captured within the first digital image at said different times.
12. The one or more non-transitory computer readable storage devices of claim 8, wherein the method further comprises automatically adjusting focus position based upon detecting that the focus position of the object has changed based on movement between two or more of the first and further digital images captured at said different times.
13. The one or more non-transitory computer readable storage devices of claim 8, wherein said collection of further images comprises relatively low resolution images compared with said first digital image.
14. The one or more non-transitory computer readable storage devices of claim 8, wherein said object comprises a face.
15. A digital image acquisition device, comprising:
one or more optics and a sensor for acquiring a digital image,
a processor, and
one or more processor readable storage devices having processor readable code embodied thereon for programming the processor to perform a method of enhancing a digital image using statistical classifier techniques, wherein the method comprises:
acquiring a first digital image;
detecting an object within the first digital image;
generating in-camera, capturing or otherwise obtaining in-camera a collection of further digital images including said object captured at different times than said first digital image;
tracking said object within said collection of further images;
applying a set of object detection classifiers to the first digital image and at least one image of the collection of further images;
determining that at least one of said further digital images has a greater probability of comprising a higher level of match to a desired image parameter than said first digital image; and
generating a re-composited first digital image including replacing said object within said first digital image with said object from said at least one further digital image captured respectively at said different times.
16. The digital image acquisition device of claim 15, wherein the method further comprises cropping the first digital image including selecting an object region that includes said object within the first digital image and excludes one or more regions of the first digital image outside of said object region to obtain a cropped image including the object region.
17. The digital image acquisition device of claim 15, wherein the method further comprises detecting a second object within the first digital image; and wherein said generating said re-composited first digital image further includes maintaining said same second object within said re-composited first digital image.
18. The digital image acquisition device of claim 17, wherein the tracking said object comprises tracking both the object and the second object respectively continuing to identify, in one or more subsequently captured digital images of the further digital images, the object and the second object as corresponding to the object and the second object captured within the first digital image at said different times.
19. The digital image acquisition device of claim 15, wherein the method further comprises automatically adjusting focus position based upon detecting that the focus position of the object has changed based on movement between two or more of the first and further digital images captured at said different times.
20. The digital image acquisition device of claim 15, wherein said collection of further images comprises relatively low resolution images compared with said first digital image.
21. The digital image acquisition device of claim 15, wherein said object comprises a face.
22. The digital image acquisition device of claim 15, comprising a cell phone equipped with a digital camera.
US13/778,128 2003-06-26 2013-02-27 Detecting Orientation of Digital Images Using Face Detection Information Abandoned US20130169821A1 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US10/608,772 US7440593B1 (en) 2003-06-26 2003-06-26 Method of improving orientation and color balance of digital images using face detection information
US11/024,046 US7565030B2 (en) 2003-06-26 2004-12-27 Detecting orientation of digital images using face detection information
US12/482,305 US7844135B2 (en) 2003-06-26 2009-06-10 Detecting orientation of digital images using face detection information
US12/949,751 US8081844B2 (en) 2003-06-26 2010-11-18 Detecting orientation of digital images using face detection information
US13/330,480 US8391645B2 (en) 2003-06-26 2011-12-19 Detecting orientation of digital images using face detection information
US13/778,128 US20130169821A1 (en) 2003-06-26 2013-02-27 Detecting Orientation of Digital Images Using Face Detection Information

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/330,480 Continuation US8391645B2 (en) 2003-06-26 2011-12-19 Detecting orientation of digital images using face detection information

Publications (1)

Publication Number Publication Date
US20130169821A1 true US20130169821A1 (en) 2013-07-04

Family

ID=36970974

Family Applications (10)

Application Number Title Priority Date Filing Date
US10/608,772 Expired - Fee Related US7440593B1 (en) 2003-06-26 2003-06-26 Method of improving orientation and color balance of digital images using face detection information
US11/765,967 Expired - Lifetime US7630527B2 (en) 2003-06-26 2007-06-20 Method of improving orientation and color balance of digital images using face detection information
US11/765,899 Active 2024-10-07 US7912245B2 (en) 2003-06-26 2007-06-20 Method of improving orientation and color balance of digital images using face detection information
US12/637,664 Expired - Fee Related US7986811B2 (en) 2003-06-26 2009-12-14 Method of improving orientation and color balance of digital images using face detection information
US13/118,572 Expired - Lifetime US8498446B2 (en) 2003-06-26 2011-05-30 Method of improving orientation and color balance of digital images using face detection information
US13/330,480 Expired - Lifetime US8391645B2 (en) 2003-06-26 2011-12-19 Detecting orientation of digital images using face detection information
US13/778,128 Abandoned US20130169821A1 (en) 2003-06-26 2013-02-27 Detecting Orientation of Digital Images Using Face Detection Information
US13/953,717 Expired - Lifetime US8989443B2 (en) 2003-06-26 2013-07-29 Method of improving orientation and color balance of digital images using face detection information
US13/953,733 Expired - Lifetime US9008374B2 (en) 2003-06-26 2013-07-29 Method of improving orientation and color balance of digital images using face detection information
US13/953,740 Expired - Lifetime US8761449B2 (en) 2003-06-26 2013-07-29 Method of improving orientation and color balance of digital images using face detection information

Country Status (1)

Country Link
US (10) US7440593B1 (en)

Family Cites Families (416)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4456354A (en) * 1980-01-11 1984-06-26 Olympus Optical Co., Ltd. Exposure controller for a camera
US4317991A (en) 1980-03-12 1982-03-02 Honeywell Inc. Digital auto focus system utilizing a photodetector array
US4367027A (en) 1980-03-12 1983-01-04 Honeywell Inc. Active auto focus system improvement
JPS5870217A (en) 1981-10-23 1983-04-26 Fuji Photo Film Co Ltd Camera-shake detecting device
JPS61105978A (en) 1984-10-30 1986-05-24 Sanyo Electric Co Ltd Automatic focusing circuit
US4821074A (en) * 1985-09-09 1989-04-11 Minolta Camera Kabushiki Kaisha Exposure control device for a camera
US4745427A (en) 1985-09-13 1988-05-17 Minolta Camera Kabushiki Kaisha Multi-point photometric apparatus
DE256051T1 (en) 1986-01-20 1988-09-01 Georges Garches Fr Cornuejols IMAGE PROCESSING DEVICE FOR CONTROLLING THE TRANSFER FUNCTION OF AN OPTICAL SYSTEM.
US5130935A (en) * 1986-03-31 1992-07-14 Canon Kabushiki Kaisha Color image processing apparatus for extracting image data having predetermined color information from among inputted image data and for correcting inputted image data in response to the extracted image data
US4970683A (en) 1986-08-26 1990-11-13 Heads Up Technologies, Inc. Computerized checklist with predetermined sequences of sublists which automatically returns to skipped checklists
US5291234A (en) 1987-02-04 1994-03-01 Asahi Kogaku Kogyo Kabushiki Kaisha Auto optical focus detecting device and eye direction detecting optical system
JPH01158579A (en) 1987-09-09 1989-06-21 Aisin Seiki Co Ltd Image recognizing device
US4975969A (en) 1987-10-22 1990-12-04 Peter Tal Method and apparatus for uniquely identifying individuals by particular physical characteristics and security system utilizing the same
US5384912A (en) * 1987-10-30 1995-01-24 New Microtime Inc. Real time video image processing system
US5018017A (en) 1987-12-25 1991-05-21 Kabushiki Kaisha Toshiba Electronic still camera and image recording method thereof
US4970663A (en) 1989-04-28 1990-11-13 Avid Technology, Inc. Method and apparatus for manipulating digital video data
US5227837A (en) 1989-05-12 1993-07-13 Fuji Photo Film Co., Ltd. Photograph printing method
US5111231A (en) 1989-07-27 1992-05-05 Canon Kabushiki Kaisha Camera system
US5063603A (en) 1989-11-06 1991-11-05 David Sarnoff Research Center, Inc. Dynamic method for recognizing objects and image processing system therefor
US5164831A (en) 1990-03-15 1992-11-17 Eastman Kodak Company Electronic still camera providing multi-format storage of full and reduced resolution images
US5150432A (en) 1990-03-26 1992-09-22 Kabushiki Kaisha Toshiba Apparatus for encoding/decoding video signals to improve quality of a specific region
US5274714A (en) 1990-06-04 1993-12-28 Neuristics, Inc. Method and apparatus for determining and organizing feature vectors for neural network recognition
US5161204A (en) 1990-06-04 1992-11-03 Neuristics, Inc. Apparatus for generating a feature matrix based on normalized out-class and in-class variation matrices
GB9019538D0 (en) 1990-09-07 1990-10-24 Philips Electronic Associated Tracking a moving object
JP2748678B2 (en) 1990-10-09 1998-05-13 松下電器産業株式会社 Gradation correction method and gradation correction device
JP2766067B2 (en) 1990-10-31 1998-06-18 キヤノン株式会社 Imaging device
US5164992A (en) 1990-11-01 1992-11-17 Massachusetts Institute Of Technology Face recognition system
US5493409A (en) 1990-11-29 1996-02-20 Minolta Camera Kabushiki Kaisha Still video camera having a printer capable of printing a photographed image in a plurality of printing modes
JPH04257830A (en) 1991-02-12 1992-09-14 Nikon Corp Flash light dimming controller for camera
JP2790562B2 (en) 1992-01-06 1998-08-27 富士写真フイルム株式会社 Image processing method
US5488429A (en) 1992-01-13 1996-01-30 Mitsubishi Denki Kabushiki Kaisha Video signal processor for detecting flesh tones in an image
US5638136A (en) 1992-01-13 1997-06-10 Mitsubishi Denki Kabushiki Kaisha Method and apparatus for detecting flesh tones in an image
JP2973676B2 (en) 1992-01-23 1999-11-08 松下電器産業株式会社 Face image feature point extraction device
JP3288420B2 (en) 1992-03-10 2002-06-04 株式会社トプコン Autofocus CCD camera for stereo photography
US5331544A (en) 1992-04-23 1994-07-19 A. C. Nielsen Company Market research method and system for collecting retail store and shopper market research data
US5450504A (en) * 1992-05-19 1995-09-12 Calia; James Method for finding a most likely matching of a target facial image in a data base of facial images
US5680481A (en) * 1992-05-26 1997-10-21 Ricoh Corporation Facial feature extraction method and apparatus for a neural network acoustic and visual speech recognition system
JP3298072B2 (en) 1992-07-10 2002-07-02 ソニー株式会社 Video camera system
US5278923A (en) 1992-09-02 1994-01-11 Harmonic Lightwaves, Inc. Cascaded optical modulation system with high linearity
US5311240A (en) 1992-11-03 1994-05-10 Eastman Kodak Company Technique suited for use in multi-zone autofocusing cameras for improving image quality for non-standard display sizes and/or different focal length photographing modes
KR100276681B1 (en) 1992-11-07 2001-01-15 이데이 노부유끼 Video camera system
JPH06178261A (en) 1992-12-07 1994-06-24 Nikon Corp Digital still camera
US5550928A (en) 1992-12-15 1996-08-27 A.C. Nielsen Company Audience measurement system and method
US5907315A (en) * 1993-03-17 1999-05-25 Ultimatte Corporation Method and apparatus for adjusting parameters used by compositing devices
JP2983407B2 (en) 1993-03-31 1999-11-29 三菱電機株式会社 Image tracking device
US5384615A (en) 1993-06-08 1995-01-24 Industrial Technology Research Institute Ambient depth-of-field simulation exposuring method
US5432863A (en) 1993-07-19 1995-07-11 Eastman Kodak Company Automated detection and correction of eye color defects due to flash illumination
US6181805B1 (en) * 1993-08-11 2001-01-30 Nippon Telegraph & Telephone Corporation Object image detecting method and system
AU7603894A (en) 1993-08-27 1995-03-21 Massachusetts Institute Of Technology Example-based image analysis and synthesis using pixelwise correspondence
US6798834B1 (en) * 1996-08-15 2004-09-28 Mitsubishi Denki Kabushiki Kaisha Image coding apparatus with segment classification and segmentation-type motion prediction circuit
US5835616A (en) 1994-02-18 1998-11-10 University Of Central Florida Face detection using templates
US5781650A (en) 1994-02-18 1998-07-14 University Of Central Florida Automatic feature detection and age classification of human faces in digital images
US5852669A (en) 1994-04-06 1998-12-22 Lucent Technologies Inc. Automatic face and facial feature location detection for low bit rate model-assisted H.261 compatible coding of video
US5519451A (en) 1994-04-14 1996-05-21 Texas Instruments Incorporated Motion adaptive scan-rate conversion using directional edge interpolation
US5774754A (en) 1994-04-26 1998-06-30 Minolta Co., Ltd. Camera capable of previewing a photographed image
US5678098A (en) * 1994-06-09 1997-10-14 Fuji Photo Film Co., Ltd. Method and apparatus for controlling exposure of camera
US5715377A (en) * 1994-07-21 1998-02-03 Matsushita Electric Industrial Co. Ltd. Gray level correction apparatus
WO1996005664A2 (en) 1994-08-12 1996-02-22 Philips Electronics N.V. Optical synchronisation arrangement, transmission system and optical receiver
US6714665B1 (en) * 1994-09-02 2004-03-30 Sarnoff Corporation Fully automated iris recognition system utilizing wide and narrow fields of view
EP0701338B1 (en) 1994-09-12 2003-07-23 Nippon Telegraph And Telephone Corporation An optical intensity modulation transmission system
IT1277993B1 (en) * 1994-09-30 1997-11-12 Ist Trentino Di Cultura PROCEDURE FOR STORING AND RETRIEVING IMAGES OF PEOPLE, FOR EXAMPLE IN PHOTOGRAPHIC ARCHIVES AND FOR THE CONSTRUCTION OF IDENTIKIT AND
US5629752A (en) * 1994-10-28 1997-05-13 Fuji Photo Film Co., Ltd. Method of determining an exposure amount using optical recognition of facial features
US6232965B1 (en) * 1994-11-30 2001-05-15 California Institute Of Technology Method and apparatus for synthesizing realistic animations of a human speaking using a computer
US5496106A (en) 1994-12-13 1996-03-05 Apple Computer, Inc. System and method for generating a contrast overlay as a focus assist for an imaging device
US6426779B1 (en) 1995-01-04 2002-07-30 Sony Electronics, Inc. Method and apparatus for providing favorite station and programming information in a multiple station broadcast system
US6128398A (en) 1995-01-31 2000-10-03 Miros Inc. System, method and application for the recognition, verification and similarity ranking of facial or other object patterns
US5724456A (en) 1995-03-31 1998-03-03 Polaroid Corporation Brightness adjustment of images using digital scene analysis
US5870138A (en) 1995-03-31 1999-02-09 Hitachi, Ltd. Facial image processing
US5710833A (en) 1995-04-20 1998-01-20 Massachusetts Institute Of Technology Detection, recognition and coding of complex objects using probabilistic eigenspace analysis
US5844573A (en) 1995-06-07 1998-12-01 Massachusetts Institute Of Technology Image compression by pointwise prototype correspondence using shape and texture information
US5774129A (en) 1995-06-07 1998-06-30 Massachusetts Institute Of Technology Image analysis and synthesis networks using shape and texture information
JP3799633B2 (en) 1995-06-16 2006-07-19 セイコーエプソン株式会社 Face image processing method and face image processing apparatus
US5912980A (en) 1995-07-13 1999-06-15 Hunke; H. Martin Target acquisition and tracking
US5842194A (en) 1995-07-28 1998-11-24 Mitsubishi Denki Kabushiki Kaisha Method of recognizing images of faces or general images using fuzzy combination of multiple resolutions
US5715325A (en) 1995-08-30 1998-02-03 Siemens Corporate Research, Inc. Apparatus and method for detecting a face in a video image
US5850470A (en) 1995-08-30 1998-12-15 Siemens Corporate Research, Inc. Neural network for locating and recognizing a deformable object
US5802220A (en) * 1995-12-15 1998-09-01 Xerox Corporation Apparatus and method for tracking facial motion through a sequence of images
US5774591A (en) 1995-12-15 1998-06-30 Xerox Corporation Apparatus and method for recognizing facial expressions and facial gestures in a sequence of images
US5633678A (en) 1995-12-20 1997-05-27 Eastman Kodak Company Electronic still camera for capturing and categorizing images
JP3279913B2 (en) 1996-03-18 2002-04-30 株式会社東芝 Person authentication device, feature point extraction device, and feature point extraction method
US6151073A (en) 1996-03-28 2000-11-21 Fotonation, Inc. Intelligent camera flash system
US5911139A (en) 1996-03-29 1999-06-08 Virage, Inc. Visual image database search engine which allows for different schema
US5764803A (en) 1996-04-03 1998-06-09 Lucent Technologies Inc. Motion-adaptive modelling of scene content for very low bit rate model-assisted coding of video sequences
US5802208A (en) 1996-05-06 1998-09-01 Lucent Technologies Inc. Face recognition using DCT-based feature vectors
US6188776B1 (en) 1996-05-21 2001-02-13 Interval Research Corporation Principle component analysis of images for the automatic location of control points
US5991456A (en) 1996-05-29 1999-11-23 Science And Technology Corporation Method of improving a digital image
JP2907120B2 (en) 1996-05-29 1999-06-21 日本電気株式会社 Red-eye detection correction device
US6173068B1 (en) 1996-07-29 2001-01-09 Mikos, Ltd. Method and apparatus for recognizing and classifying individuals based on minutiae
US5978519A (en) 1996-08-06 1999-11-02 Xerox Corporation Automatic image cropping
US20030118216A1 (en) 1996-09-04 2003-06-26 Goldberg David A. Obtaining person-specific images in a public venue
DK0923828T3 (en) 1996-09-05 2004-05-24 Swisscom Ag Quantum cryptography device and method
US6028960A (en) * 1996-09-20 2000-02-22 Lucent Technologies Inc. Face feature analysis for automatic lipreading and character animation
US5852823A (en) 1996-10-16 1998-12-22 Microsoft Image classification and retrieval system using a query-by-example paradigm
US5818975A (en) 1996-10-28 1998-10-06 Eastman Kodak Company Method and apparatus for area selective exposure adjustment
US6765612B1 (en) * 1996-12-09 2004-07-20 Flashpoint Technology, Inc. Method and system for naming images captured by a digital camera
US6526156B1 (en) 1997-01-10 2003-02-25 Xerox Corporation Apparatus and method for identifying and tracking objects with view-based representations
JPH10208047A (en) 1997-01-23 1998-08-07 Nissan Motor Co Ltd On-vehicle traveling environment recognizing device
US6441854B2 (en) 1997-02-20 2002-08-27 Eastman Kodak Company Electronic camera with quick review of last captured image
US6061055A (en) * 1997-03-21 2000-05-09 Autodesk, Inc. Method of tracking objects with an imaging device
US6249315B1 (en) 1997-03-24 2001-06-19 Jack M. Holm Strategy for pictorial digital image processing
JP3222091B2 (en) 1997-05-27 2001-10-22 シャープ株式会社 Image processing apparatus and medium storing image processing apparatus control program
US7057653B1 (en) 1997-06-19 2006-06-06 Minolta Co., Ltd. Apparatus capable of image capturing
JP3436473B2 (en) * 1997-06-20 2003-08-11 シャープ株式会社 Image processing device
US6009209A (en) 1997-06-27 1999-12-28 Microsoft Corporation Automated removal of red eye effect from a digital image
US5990901A (en) * 1997-06-27 1999-11-23 Microsoft Corporation Model based image editing and correction
AUPO798697A0 (en) * 1997-07-15 1997-08-07 Silverbrook Research Pty Ltd Data processing method and apparatus (ART51)
US6188777B1 (en) 1997-08-01 2001-02-13 Interval Research Corporation Method and apparatus for personnel detection and tracking
US6072094A (en) 1997-08-06 2000-06-06 Merck & Co., Inc. Efficient synthesis of cyclopropylacetylene
US6252976B1 (en) * 1997-08-29 2001-06-26 Eastman Kodak Company Computer program product for redeye detection
JP3661367B2 (en) 1997-09-09 2005-06-15 コニカミノルタフォトイメージング株式会社 Camera with shake correction function
KR19990030882A (en) 1997-10-07 1999-05-06 이해규 Digital still camera with adjustable focus position and its control method
US7738015B2 (en) 1997-10-09 2010-06-15 Fotonation Vision Limited Red-eye filter method and apparatus
US6407777B1 (en) 1997-10-09 2002-06-18 Deluca Michael Joseph Red-eye filter method and apparatus
US7630006B2 (en) 1997-10-09 2009-12-08 Fotonation Ireland Limited Detecting red eye filter and apparatus using meta-data
US7352394B1 (en) 1997-10-09 2008-04-01 Fotonation Vision Limited Image modification based on red-eye filter analysis
US7042505B1 (en) 1997-10-09 2006-05-09 Fotonation Ireland Ltd. Red-eye filter method and apparatus
US6016354A (en) 1997-10-23 2000-01-18 Hewlett-Packard Company Apparatus and a method for reducing red-eye in a digital image
JP3724157B2 (en) 1997-10-30 2005-12-07 コニカミノルタホールディングス株式会社 Video observation device
US6108437A (en) 1997-11-14 2000-08-22 Seiko Epson Corporation Face recognition apparatus, method, system and computer readable medium thereof
US6128397A (en) 1997-11-21 2000-10-03 Justsystem Pittsburgh Research Center Method for finding all frontal faces in arbitrarily complex visual scenes
JPH11175699A (en) 1997-12-12 1999-07-02 Fuji Photo Film Co Ltd Picture processor
JP3361980B2 (en) 1997-12-12 2003-01-07 株式会社東芝 Eye gaze detecting apparatus and method
US6430312B1 (en) 1997-12-29 2002-08-06 Cornell Research Foundation, Inc. Image subregion querying using color correlograms
US6148092A (en) 1998-01-08 2000-11-14 Sharp Laboratories Of America, Inc System for detecting skin-tone regions within an image
US6268939B1 (en) 1998-01-08 2001-07-31 Xerox Corporation Method and apparatus for correcting luminance and chrominance data in digital color images
GB2333590A (en) 1998-01-23 1999-07-28 Sharp Kk Detecting a face-like region
US6278491B1 (en) 1998-01-29 2001-08-21 Hewlett-Packard Company Apparatus and a method for automatically detecting and reducing red-eye in a digital image
US6556708B1 (en) * 1998-02-06 2003-04-29 Compaq Computer Corporation Technique for classifying objects within an image
US6400830B1 (en) * 1998-02-06 2002-06-04 Compaq Computer Corporation Technique for tracking objects through a series of images
US6115052A (en) 1998-02-12 2000-09-05 Mitsubishi Electric Information Technology Center America, Inc. (Ita) System for reconstructing the 3-dimensional motions of a human figure from a monocularly-viewed image sequence
JPH11231358A (en) 1998-02-19 1999-08-27 Nec Corp Optical circuit and its production
US6349373B2 (en) 1998-02-20 2002-02-19 Eastman Kodak Company Digital image management system having method for managing images according to image groups
US6529630B1 (en) 1998-03-02 2003-03-04 Fuji Photo Film Co., Ltd. Method and device for extracting principal image subjects
EP1315371B1 (en) * 1998-03-16 2006-06-14 SANYO ELECTRIC Co., Ltd. Digital camera capable of image processing
JP3657769B2 (en) 1998-03-19 2005-06-08 富士写真フイルム株式会社 Image processing method and image processing apparatus
JP2923894B1 (en) * 1998-03-31 1999-07-26 日本電気株式会社 Light source determination method, skin color correction method, color image correction method, light source determination device, skin color correction device, color image correction device, and computer-readable recording medium
US6192149B1 (en) 1998-04-08 2001-02-20 Xerox Corporation Method and apparatus for automatic detection of image target gamma
EP1549041A3 (en) 1998-04-10 2005-09-21 Fuji Photo Film Co., Ltd. Electronic album producing and viewing system and method
US6240198B1 (en) 1998-04-13 2001-05-29 Compaq Computer Corporation Method for figure tracking using 2-D registration
US6301370B1 (en) 1998-04-13 2001-10-09 Eyematic Interfaces, Inc. Face recognition from video images
US6097470A (en) 1998-05-28 2000-08-01 Eastman Kodak Company Digital photofinishing system including scene balance, contrast normalization, and image sharpening digital image processing
JP2000048184A (en) 1998-05-29 2000-02-18 Canon Inc Method for processing image, and method for extracting facial area and device therefor
AUPP400998A0 (en) * 1998-06-10 1998-07-02 Canon Kabushiki Kaisha Face detection in digital images
US6404900B1 (en) 1998-06-22 2002-06-11 Sharp Laboratories Of America, Inc. Method for robust human face tracking in presence of multiple persons
CN1229977C (en) * 1998-06-22 2005-11-30 富士写真胶片株式会社 Imaging device and method
US6496607B1 (en) 1998-06-26 2002-12-17 Sarnoff Corporation Method and apparatus for region-based allocation of processing resources and control of input image formation
US6275614B1 (en) * 1998-06-26 2001-08-14 Sarnoff Corporation Method and apparatus for block classification and adaptive bit allocation
US7130454B1 (en) * 1998-07-20 2006-10-31 Viisage Technology, Inc. Real-time facial recognition and verification system
US6292575B1 (en) 1998-07-20 2001-09-18 Lau Technologies Real-time facial recognition and verification system
US6362850B1 (en) 1998-08-04 2002-03-26 Flashpoint Technology, Inc. Interactive movie creation from one or more still images in a digital imaging device
DE19837004C1 (en) * 1998-08-14 2000-03-09 Christian Eckes Process for recognizing objects in digitized images
GB2341231A (en) 1998-09-05 2000-03-08 Sharp Kk Face detection in an image
US6445819B1 (en) 1998-09-10 2002-09-03 Fuji Photo Film Co., Ltd. Image processing method, image processing device, and recording medium
US6456732B1 (en) 1998-09-11 2002-09-24 Hewlett-Packard Company Automatic rotation, cropping and scaling of images for printing
US6134339A (en) 1998-09-17 2000-10-17 Eastman Kodak Company Method and apparatus for determining the position of eyes and for correcting eye-defects in a captured frame
US6606398B2 (en) * 1998-09-30 2003-08-12 Intel Corporation Automatic cataloging of people in digital photographs
JP3291259B2 (en) 1998-11-11 2002-06-10 キヤノン株式会社 Image processing method and recording medium
US6351556B1 (en) 1998-11-20 2002-02-26 Eastman Kodak Company Method for automatically comparing content of images for classification into events
WO2000033240A1 (en) * 1998-12-02 2000-06-08 The Victoria University Of Manchester Face sub-space determination
US6263113B1 (en) 1998-12-11 2001-07-17 Philips Electronics North America Corp. Method for detecting a face in a digital image
US6473199B1 (en) 1998-12-18 2002-10-29 Eastman Kodak Company Correcting exposure and tone scale of digital images captured by an image capture device
US6396599B1 (en) 1998-12-21 2002-05-28 Eastman Kodak Company Method and apparatus for modifying a portion of an image in accordance with colorimetric parameters
JP2000197050A (en) 1998-12-25 2000-07-14 Canon Inc Image processing unit and its method
US6282317B1 (en) 1998-12-31 2001-08-28 Eastman Kodak Company Method for automatic determination of main subjects in photographic images
US6438264B1 (en) 1998-12-31 2002-08-20 Eastman Kodak Company Method for compensating image color when adjusting the contrast of a digital color image
US6393136B1 (en) * 1999-01-04 2002-05-21 International Business Machines Corporation Method and apparatus for determining eye contact
US6421468B1 (en) 1999-01-06 2002-07-16 Seiko Epson Corporation Method and apparatus for sharpening an image by scaling elements of a frequency-domain representation
US6463163B1 (en) 1999-01-11 2002-10-08 Hewlett-Packard Company System and method for face detection using candidate image region selection
US7038715B1 (en) * 1999-01-19 2006-05-02 Texas Instruments Incorporated Digital still camera with high-quality portrait mode
AUPP839199A0 (en) 1999-02-01 1999-02-25 Traffic Pro Pty Ltd Object recognition & tracking system
US6778216B1 (en) 1999-03-25 2004-08-17 Texas Instruments Incorporated Method and apparatus for digital camera real-time image correction in preview mode
US7106374B1 (en) 1999-04-05 2006-09-12 Amherst Systems, Inc. Dynamically reconfigurable vision system
JP2000324437A (en) 1999-05-13 2000-11-24 Fuurie Kk Video database system
US6393148B1 (en) 1999-05-13 2002-05-21 Hewlett-Packard Company Contrast enhancement of an image using luminance and RGB statistical metrics
WO2000070558A1 (en) * 1999-05-18 2000-11-23 Sanyo Electric Co., Ltd. Dynamic image processing method and device and medium
US6760485B1 (en) 1999-05-20 2004-07-06 Eastman Kodak Company Nonlinearly modifying a rendered digital image
US6967680B1 (en) 1999-05-28 2005-11-22 Microsoft Corporation Method and apparatus for capturing images
US7248300B1 (en) 1999-06-03 2007-07-24 Fujifilm Corporation Camera and method of photographing good image
US6571003B1 (en) 1999-06-14 2003-05-27 The Procter & Gamble Company Skin imaging and analysis systems and methods
US6879705B1 (en) * 1999-07-14 2005-04-12 Sarnoff Corporation Method and apparatus for tracking multiple objects in a video sequence
US6501857B1 (en) 1999-07-20 2002-12-31 Craig Gotsman Method and system for detecting and classifying objects in an image
US6545706B1 (en) 1999-07-30 2003-04-08 Electric Planet, Inc. System, method and article of manufacture for tracking a head of a camera-generated image of a person
US6526161B1 (en) 1999-08-30 2003-02-25 Koninklijke Philips Electronics N.V. System and method for biometrics-based facial feature extraction
JP4378804B2 (en) * 1999-09-10 2009-12-09 ソニー株式会社 Imaging device
WO2001028238A2 (en) * 1999-10-08 2001-04-19 Sarnoff Corporation Method and apparatus for enhancing and indexing video and audio signals
US6937773B1 (en) 1999-10-20 2005-08-30 Canon Kabushiki Kaisha Image encoding method and apparatus
US6792135B1 (en) 1999-10-29 2004-09-14 Microsoft Corporation System and method for face detection through geometric distribution of a non-intensity image property
US6504951B1 (en) 1999-11-29 2003-01-07 Eastman Kodak Company Method for detecting sky in images
US6754389B1 (en) * 1999-12-01 2004-06-22 Koninklijke Philips Electronics N.V. Program classification using object tracking
EP1107166A3 (en) * 1999-12-01 2008-08-06 Matsushita Electric Industrial Co., Ltd. Device and method for face image extraction, and recording medium having recorded program for the method
US6516147B2 (en) 1999-12-20 2003-02-04 Polaroid Corporation Scene recognition method and system using brightness and ranging mapping
US20030035573A1 (en) 1999-12-22 2003-02-20 Nicolae Duta Method for learning-based object detection in cardiac magnetic resonance images
JP2001186323A (en) 1999-12-24 2001-07-06 Fuji Photo Film Co Ltd Identification photograph system and picture on processing method
JP2001216515A (en) * 2000-02-01 2001-08-10 Matsushita Electric Ind Co Ltd Method and device for detecting face of person
US6504546B1 (en) * 2000-02-08 2003-01-07 At&T Corp. Method of modeling objects to synthesize three-dimensional, photo-realistic animations
US7043465B2 (en) 2000-02-24 2006-05-09 Holding B.E.V.S.A. Method and device for perception of an object by its shape, its size and/or its orientation
US6807290B2 (en) 2000-03-09 2004-10-19 Microsoft Corporation Rapid computer modeling of faces for animation
US7106887B2 (en) 2000-04-13 2006-09-12 Fuji Photo Film Co., Ltd. Image processing method using conditions corresponding to an identified person
US6301440B1 (en) * 2000-04-13 2001-10-09 International Business Machines Corp. System and method for automatically setting image acquisition controls
US20020150662A1 (en) 2000-04-19 2002-10-17 Dewis Mark Lawrence Ethyl 3-mercaptobutyrate as a flavoring or fragrance agent and methods for preparing and using same
KR100810813B1 (en) * 2000-04-21 2008-03-06 가부시키가이샤 시세이도 Makeup counseling apparatus
JP4443722B2 (en) 2000-04-25 2010-03-31 富士通株式会社 Image recognition apparatus and method
US6944341B2 (en) 2000-05-01 2005-09-13 Xerox Corporation Loose gray-scale template matching for image processing of anti-aliased lines
EP1158786A3 (en) * 2000-05-24 2005-03-09 Sony Corporation Transmission of the region of interest of an image
US6700999B1 (en) 2000-06-30 2004-03-02 Intel Corporation System, method, and apparatus for multiple face tracking
US6747690B2 (en) 2000-07-11 2004-06-08 Phase One A/S Digital camera with integrated accelerometers
US6564225B1 (en) 2000-07-14 2003-05-13 Time Warner Entertainment Company, L.P. Method and apparatus for archiving in and retrieving images from a digital image library
JP2003032542A (en) 2001-07-19 2003-01-31 Mitsubishi Electric Corp Imaging apparatus
AUPQ896000A0 (en) 2000-07-24 2000-08-17 Seeing Machines Pty Ltd Facial image processing system
JP4140181B2 (en) 2000-09-08 2008-08-27 富士フイルム株式会社 Electronic camera
US6900840B1 (en) 2000-09-14 2005-05-31 Hewlett-Packard Development Company, L.P. Digital camera and method of using same to view image in live view mode
US6965684B2 (en) 2000-09-15 2005-11-15 Canon Kabushiki Kaisha Image processing methods and apparatus for detecting human eyes, human face, and other objects in an image
US7038709B1 (en) * 2000-11-01 2006-05-02 Gilbert Verghese System and method for tracking a subject
JP4590717B2 (en) * 2000-11-17 2010-12-01 ソニー株式会社 Face identification device and face identification method
US7099510B2 (en) * 2000-11-29 2006-08-29 Hewlett-Packard Development Company, L.P. Method and system for object detection in digital images
US6975750B2 (en) 2000-12-01 2005-12-13 Microsoft Corp. System and method for face recognition using synthesized training images
JP2002171398A (en) * 2000-12-04 2002-06-14 Konica Corp Image processing method and electronic camera
US6654507B2 (en) 2000-12-14 2003-11-25 Eastman Kodak Company Automatically producing an image of a portion of a photographic image
US6697504B2 (en) * 2000-12-15 2004-02-24 Institute For Information Industry Method of multi-level facial image recognition and system using the same
GB2370438A (en) 2000-12-22 2002-06-26 Hewlett Packard Co Automated image cropping using selected compositional rules.
US20020081003A1 (en) * 2000-12-27 2002-06-27 Sobol Robert E. System and method for automatically enhancing graphical images
US7034848B2 (en) 2001-01-05 2006-04-25 Hewlett-Packard Development Company, L.P. System and method for automatically cropping graphical images
JP3529759B2 (en) * 2001-01-26 2004-05-24 株式会社ソニー・コンピュータエンタテインメント Image processing program, computer-readable recording medium storing image processing program, program execution device, image processing device, and image processing method
EP1231564B1 (en) 2001-02-09 2007-03-28 Imaging Solutions AG Digital local control of image properties by means of masks
EP1231565A1 (en) 2001-02-09 2002-08-14 GRETAG IMAGING Trading AG Image colour correction based on image pattern recognition, the image pattern including a reference colour
GB2372658A (en) 2001-02-23 2002-08-28 Hewlett Packard Co A method of creating moving video data from a static image
US7027621B1 (en) 2001-03-15 2006-04-11 Mikos, Ltd. Method and apparatus for operator condition monitoring and assessment
US20020136433A1 (en) 2001-03-26 2002-09-26 Koninklijke Philips Electronics N.V. Adaptive facial recognition system and method
US6915011B2 (en) 2001-03-28 2005-07-05 Eastman Kodak Company Event clustering of images using foreground/background segmentation
US6760465B2 (en) * 2001-03-30 2004-07-06 Intel Corporation Mechanism for tracking colored objects in a video sequence
JP2002334338A (en) * 2001-05-09 2002-11-22 National Institute Of Advanced Industrial & Technology Device and method for object tracking and recording medium
US20020172419A1 (en) 2001-05-15 2002-11-21 Qian Lin Image enhancement using face detection
US6847733B2 (en) 2001-05-23 2005-01-25 Eastman Kodak Company Retrieval and browsing of database images based on image emphasis and appeal
TW505892B (en) * 2001-05-25 2002-10-11 Ind Tech Res Inst System and method for promptly tracking multiple faces
US20020181801A1 (en) 2001-06-01 2002-12-05 Needham Bradford H. Feature-based image correction
AUPR541801A0 (en) * 2001-06-01 2001-06-28 Canon Kabushiki Kaisha Face detection in colour images with complex background
US7068841B2 (en) * 2001-06-29 2006-06-27 Hewlett-Packard Development Company, L.P. Automatic digital image enhancement
US6980691B2 (en) 2001-07-05 2005-12-27 Corel Corporation Correction of “red-eye” effects in images
GB0116877D0 (en) 2001-07-10 2001-09-05 Hewlett Packard Co Intelligent feature selection and pan zoom control
US6516154B1 (en) 2001-07-17 2003-02-04 Eastman Kodak Company Image revising camera and method
US6832006B2 (en) 2001-07-23 2004-12-14 Eastman Kodak Company System and method for controlling image compression based on image emphasis
US20030023974A1 (en) * 2001-07-25 2003-01-30 Koninklijke Philips Electronics N.V. Method and apparatus to track objects in sports programs and select an appropriate camera view
EP1288858A1 (en) 2001-09-03 2003-03-05 Agfa-Gevaert AG Method for automatically detecting red-eye defects in photographic image data
EP1293933A1 (en) 2001-09-03 2003-03-19 Agfa-Gevaert AG Method for automatically detecting red-eye defects in photographic image data
US6993180B2 (en) 2001-09-04 2006-01-31 Eastman Kodak Company Method and system for automated grouping of images
US7027619B2 (en) * 2001-09-13 2006-04-11 Honeywell International Inc. Near-infrared method and system for use in face detection
US7262798B2 (en) 2001-09-17 2007-08-28 Hewlett-Packard Development Company, L.P. System and method for simulating fill flash in photography
US7298412B2 (en) 2001-09-18 2007-11-20 Ricoh Company, Limited Image pickup device, automatic focusing method, automatic exposure method, electronic flash control method and computer program
US7133070B2 (en) 2001-09-20 2006-11-07 Eastman Kodak Company System and method for deciding when to correct image-specific defects based on camera, scene, display and demographic data
US7110569B2 (en) 2001-09-27 2006-09-19 Koninklijke Philips Electronics N.V. Video based detection of fall-down and other events
US7324246B2 (en) * 2001-09-27 2008-01-29 Fujifilm Corporation Apparatus and method for image processing
US20030063102A1 (en) * 2001-10-01 2003-04-03 Gilles Rubinstenn Body image enhancement
US7130864B2 (en) 2001-10-31 2006-10-31 Hewlett-Packard Development Company, L.P. Method and system for accessing a collection of images in a database
KR100421221B1 (en) * 2001-11-05 2004-03-02 삼성전자주식회사 Illumination invariant object tracking method and image editing system adopting the method
US7162101B2 (en) * 2001-11-15 2007-01-09 Canon Kabushiki Kaisha Image processing apparatus and method
US7130446B2 (en) * 2001-12-03 2006-10-31 Microsoft Corporation Automatic detection and tracking of multiple individuals using multiple cues
US7688349B2 (en) 2001-12-07 2010-03-30 International Business Machines Corporation Method of detecting and tracking groups of people
US7092573B2 (en) * 2001-12-10 2006-08-15 Eastman Kodak Company Method and system for selectively applying enhancement to an image
TW535413B (en) 2001-12-13 2003-06-01 Mediatek Inc Device and method for processing digital video data
US7221809B2 (en) 2001-12-17 2007-05-22 Genex Technologies, Inc. Face recognition system and method
US7167519B2 (en) * 2001-12-20 2007-01-23 Siemens Corporate Research, Inc. Real-time video object generation for smart cameras
JP2003189168A (en) * 2001-12-21 2003-07-04 Nec Corp Camera for mobile phone
US7035467B2 (en) * 2002-01-09 2006-04-25 Eastman Kodak Company Method and system for processing images for themed imaging services
US7289664B2 (en) 2002-01-17 2007-10-30 Fujifilm Corporation Method of detecting and correcting the red eye
JP2003219225A (en) 2002-01-25 2003-07-31 Nippon Micro Systems Kk Device for monitoring moving object image
US7362354B2 (en) 2002-02-12 2008-04-22 Hewlett-Packard Development Company, L.P. Method and system for assessing the photo quality of a captured image in a digital still camera
JP4271964B2 (en) * 2002-03-04 2009-06-03 三星電子株式会社 Face recognition method and apparatus using component-based PCA / ICA
US7110597B2 (en) * 2002-03-18 2006-09-19 Intel Corporation Correcting digital images using unique subjects
US7082211B2 (en) * 2002-05-31 2006-07-25 Eastman Kodak Company Method and system for enhancing portrait images
US7146026B2 (en) 2002-06-04 2006-12-05 Hewlett-Packard Development Company, L.P. Image correction system and method
US6959109B2 (en) 2002-06-20 2005-10-25 Identix Incorporated System and method for pose-angle estimation
US6801642B2 (en) 2002-06-26 2004-10-05 Motorola, Inc. Method and apparatus for limiting storage or transmission of visual information
US8599266B2 (en) 2002-07-01 2013-12-03 The Regents Of The University Of California Digital processing of video images
JP3700687B2 (en) * 2002-07-08 2005-09-28 カシオ計算機株式会社 Camera device and subject photographing method
US7227976B1 (en) 2002-07-08 2007-06-05 Videomining Corporation Method and system for real-time facial image enhancement
US7020337B2 (en) * 2002-07-22 2006-03-28 Mitsubishi Electric Research Laboratories, Inc. System and method for detecting objects in images
US7110575B2 (en) * 2002-08-02 2006-09-19 Eastman Kodak Company Method for locating faces in digital color images
CN100477745C (en) 2002-08-09 2009-04-08 夏普株式会社 Image combination device and image combination method
US7035462B2 (en) * 2002-08-29 2006-04-25 Eastman Kodak Company Apparatus and method for processing digital images having eye color defects
US20040041121A1 (en) 2002-08-30 2004-03-04 Shigeyoshi Yoshida Magnetic loss material and method of producing the same
EP1398733A1 (en) 2002-09-12 2004-03-17 GRETAG IMAGING Trading AG Texture-based colour correction
JP3761169B2 (en) * 2002-09-30 2006-03-29 松下電器産業株式会社 Mobile phone
US7194114B2 (en) 2002-10-07 2007-03-20 Carnegie Mellon University Object finder for two-dimensional images, and system for determining a set of sub-classifiers composing an object finder
KR100473598B1 (en) 2002-11-04 2005-03-11 삼성전자주식회사 System and method for detecting veiled face image
US7154510B2 (en) 2002-11-14 2006-12-26 Eastman Kodak Company System and method for modifying a portrait image in response to a stimulus
GB2395778A (en) 2002-11-29 2004-06-02 Sony Uk Ltd Face detection
GB2395264A (en) 2002-11-29 2004-05-19 Sony Uk Ltd Face detection in images
US7394969B2 (en) 2002-12-11 2008-07-01 Eastman Kodak Company System and method to compose a slide show
JP3954484B2 (en) 2002-12-12 2007-08-08 株式会社東芝 Image processing apparatus and program
US7082157B2 (en) 2002-12-24 2006-07-25 Realtek Semiconductor Corp. Residual echo reduction for a full duplex transceiver
JP4178949B2 (en) 2002-12-27 2008-11-12 富士ゼロックス株式会社 Image processing apparatus, image processing method, and program thereof
CN100465985C (en) 2002-12-31 2009-03-04 佳能株式会社 Human eye detecting method, apparatus, system and storage medium
JP4218348B2 (en) 2003-01-17 2009-02-04 オムロン株式会社 Imaging device
JP4204336B2 (en) * 2003-01-30 2009-01-07 富士通株式会社 Facial orientation detection device, facial orientation detection method, and computer program
US7120279B2 (en) 2003-01-30 2006-10-10 Eastman Kodak Company Method for face orientation determination in digital color images
US7162076B2 (en) * 2003-02-11 2007-01-09 New Jersey Institute Of Technology Face detection method and apparatus
JP2004274720A (en) 2003-02-18 2004-09-30 Fuji Photo Film Co Ltd Data conversion apparatus and data conversion program
US7039222B2 (en) 2003-02-28 2006-05-02 Eastman Kodak Company Method and system for enhancing portrait images that are processed in a batch mode
US7306337B2 (en) * 2003-03-06 2007-12-11 Rensselaer Polytechnic Institute Calibration-free gaze tracking under natural head movement
US7508961B2 (en) 2003-03-12 2009-03-24 Eastman Kodak Company Method and system for face detection in digital images
JP4461789B2 (en) * 2003-03-20 2010-05-12 オムロン株式会社 Image processing device
US20040228505A1 (en) 2003-04-14 2004-11-18 Fuji Photo Film Co., Ltd. Image characteristic portion extraction method, computer readable medium, and data collection and processing device
US7609908B2 (en) 2003-04-30 2009-10-27 Eastman Kodak Company Method for adjusting the brightness of a digital image utilizing belief values
US20040223649A1 (en) 2003-05-07 2004-11-11 Eastman Kodak Company Composite imaging method and system
US7224850B2 (en) * 2003-05-13 2007-05-29 Microsoft Corporation Modification of red-eye-effect in digital image
EP1482724B1 (en) 2003-05-19 2007-07-11 STMicroelectronics S.A. Image processing method for digital images with exposure correction by recognition of skin areas of the subject.
JP2004350130A (en) 2003-05-23 2004-12-09 Fuji Photo Film Co Ltd Digital camera
US7616233B2 (en) 2003-06-26 2009-11-10 Fotonation Vision Limited Perfecting of digital image capture parameters within acquisition devices using face detection
US7440593B1 (en) 2003-06-26 2008-10-21 Fotonation Vision Limited Method of improving orientation and color balance of digital images using face detection information
US9129381B2 (en) * 2003-06-26 2015-09-08 Fotonation Limited Modification of post-viewing parameters for digital images using image region or feature information
US7620218B2 (en) * 2006-08-11 2009-11-17 Fotonation Ireland Limited Real-time face tracking with reference images
US7574016B2 (en) * 2003-06-26 2009-08-11 Fotonation Vision Limited Digital image processing using face detection information
US7680342B2 (en) 2004-08-16 2010-03-16 Fotonation Vision Limited Indoor/outdoor classification in digital images
US7792335B2 (en) 2006-02-24 2010-09-07 Fotonation Vision Limited Method and apparatus for selective disqualification of digital images
US7269292B2 (en) 2003-06-26 2007-09-11 Fotonation Vision Limited Digital image adjustable compression and resolution using face detection information
US7362368B2 (en) 2003-06-26 2008-04-22 Fotonation Vision Limited Perfecting the optics within a digital image acquisition device using face detection
US7536036B2 (en) 2004-10-28 2009-05-19 Fotonation Vision Limited Method and apparatus for red-eye detection in an acquired digital image
US8948468B2 (en) 2003-06-26 2015-02-03 Fotonation Limited Modification of viewing parameters for digital images using face detection information
WO2007142621A1 (en) 2006-06-02 2007-12-13 Fotonation Vision Limited Modification of post-viewing parameters for digital images using image region or feature information
US7565030B2 (en) 2003-06-26 2009-07-21 Fotonation Vision Limited Detecting orientation of digital images using face detection information
US7689009B2 (en) 2005-11-18 2010-03-30 Fotonation Vision Ltd. Two stage detection for photographic eye artifacts
US7315630B2 (en) 2003-06-26 2008-01-01 Fotonation Vision Limited Perfecting of digital image rendering parameters within rendering devices using face detection
US20060093238A1 (en) 2004-10-28 2006-05-04 Eran Steinberg Method and apparatus for red-eye detection in an acquired digital image using face recognition
US7636486B2 (en) 2004-11-10 2009-12-22 Fotonation Ireland Ltd. Method of determining PSF using multiple instances of a nominally similar scene
US7920723B2 (en) 2005-11-18 2011-04-05 Tessera Technologies Ireland Limited Two stage detection for photographic eye artifacts
US7606417B2 (en) 2004-08-16 2009-10-20 Fotonation Vision Limited Foreground/background segmentation in digital images with differential exposure calculations
US7317815B2 (en) 2003-06-26 2008-01-08 Fotonation Vision Limited Digital image processing composition using face detection information
US7471846B2 (en) 2003-06-26 2008-12-30 Fotonation Vision Limited Perfecting the effect of flash within an image acquisition devices using face detection
US7844076B2 (en) 2003-06-26 2010-11-30 Fotonation Vision Limited Digital image processing using face detection and skin tone information
US7587085B2 (en) 2004-10-28 2009-09-08 Fotonation Vision Limited Method and apparatus for red-eye detection in an acquired digital image
US7190829B2 (en) * 2003-06-30 2007-03-13 Microsoft Corporation Speedup of face detection in digital images
US7274822B2 (en) * 2003-06-30 2007-09-25 Microsoft Corporation Face annotation for photo management
EP1499111B1 (en) 2003-07-15 2015-01-07 Canon Kabushiki Kaisha Image sensing apparatus, image processing apparatus, and control method thereof
US7689033B2 (en) * 2003-07-16 2010-03-30 Microsoft Corporation Robust multi-view face detection methods and apparatuses
US20050140801A1 (en) 2003-08-05 2005-06-30 Yury Prilutsky Optimized performance and performance for red-eye filter method and apparatus
US8520093B2 (en) 2003-08-05 2013-08-27 DigitalOptics Corporation Europe Limited Face tracker and partial face tracker for red-eye filter method and apparatus
US20050031224A1 (en) 2003-08-05 2005-02-10 Yury Prilutsky Detecting red eye filter and apparatus using meta-data
JP2005094741A (en) * 2003-08-14 2005-04-07 Fuji Photo Film Co Ltd Image pickup device and image synthesizing method
JP3946676B2 (en) * 2003-08-28 2007-07-18 株式会社東芝 Captured image processing apparatus and method
JP2005078376A (en) * 2003-08-29 2005-03-24 Sony Corp Object detection device, object detection method, and robot device
US7362210B2 (en) 2003-09-05 2008-04-22 Honeywell International Inc. System and method for gate access control
US7590305B2 (en) 2003-09-30 2009-09-15 Fotonation Vision Limited Digital camera with built-in lens calibration table
US7424170B2 (en) * 2003-09-30 2008-09-09 Fotonation Vision Limited Automated statistical self-calibrating detection and removal of blemishes in digital images based on determining probabilities based on image analysis of single images
US7295233B2 (en) 2003-09-30 2007-11-13 Fotonation Vision Limited Detection and removal of blemishes in digital images utilizing original images of defocused scenes
US7369712B2 (en) 2003-09-30 2008-05-06 Fotonation Vision Limited Automated statistical self-calibrating detection and removal of blemishes in digital images based on multiple occurrences of dust in images
JP2005128956A (en) 2003-10-27 2005-05-19 Pentax Corp Subject determining program and digital camera
CN100440944C (en) 2003-11-11 2008-12-03 精工爱普生株式会社 Image processing device, image processing method, program thereof, and recording medium
US7274832B2 (en) * 2003-11-13 2007-09-25 Eastman Kodak Company In-plane rotation invariant object detection in digitized images
US7596247B2 (en) 2003-11-14 2009-09-29 Fujifilm Corporation Method and apparatus for object recognition using probability models
JP2005182771A (en) 2003-11-27 2005-07-07 Fuji Photo Film Co Ltd Image editing apparatus, method, and program
JP2006033793A (en) 2004-06-14 2006-02-02 Victor Co Of Japan Ltd Tracking video reproducing apparatus
JP4442330B2 (en) 2004-06-17 2010-03-31 株式会社ニコン Electronic camera and electronic camera system
ATE476715T1 (en) 2004-06-21 2010-08-15 Google Inc SINGLE IMAGE BASED MULTI-BIOMETRIC SYSTEM AND METHOD
JP4574249B2 (en) 2004-06-29 2010-11-04 キヤノン株式会社 Image processing apparatus and method, program, and imaging apparatus
US7457477B2 (en) 2004-07-06 2008-11-25 Microsoft Corporation Digital photography with flash/no flash extension
CA2575211C (en) * 2004-07-30 2012-12-11 Euclid Discoveries, Llc Apparatus and method for processing video data
KR100668303B1 (en) 2004-08-04 2007-01-12 삼성전자주식회사 Method for detecting face based on skin color and pattern matching
JP4757559B2 (en) 2004-08-11 2011-08-24 富士フイルム株式会社 Apparatus and method for detecting components of a subject
US7119838B2 (en) * 2004-08-19 2006-10-10 Blue Marlin Llc Method and imager for detecting the location of objects
US7502498B2 (en) 2004-09-10 2009-03-10 Available For Licensing Patient monitoring apparatus
JP4408779B2 (en) 2004-09-15 2010-02-03 キヤノン株式会社 Image processing device
US7333963B2 (en) 2004-10-07 2008-02-19 Bernard Widrow Cognitive memory and auto-associative neural network based search engine for computer and network located images and photographs
US7542600B2 (en) * 2004-10-21 2009-06-02 Microsoft Corporation Video image quality
JP4383399B2 (en) 2004-11-05 2009-12-16 富士フイルム株式会社 Detection target image search apparatus and control method thereof
US7734067B2 (en) 2004-12-07 2010-06-08 Electronics And Telecommunications Research Institute User recognition system and method thereof
DE102004062315A1 (en) 2004-12-20 2006-06-29 Mack Ride Gmbh & Co Kg Water ride
US20060006077A1 (en) 2004-12-24 2006-01-12 Erie County Plastics Corporation Dispensing closure with integral piercing unit
US8503800B2 (en) * 2007-03-05 2013-08-06 DigitalOptics Corporation Europe Limited Illumination detection using classifier chains
US7315631B1 (en) 2006-08-11 2008-01-01 Fotonation Vision Limited Real-time face tracking in a digital image acquisition device
US7715597B2 (en) 2004-12-29 2010-05-11 Fotonation Ireland Limited Method and component for image recognition
CN100358340C (en) 2005-01-05 2007-12-26 张健 Digital-camera capable of selecting optimum taking opportune moment
JP4755490B2 (en) 2005-01-13 2011-08-24 オリンパスイメージング株式会社 Blur correction method and imaging apparatus
US7454058B2 (en) * 2005-02-07 2008-11-18 Mitsubishi Electric Research Lab, Inc. Method of extracting and searching integral histograms of data samples
US7620208B2 (en) * 2005-02-09 2009-11-17 Siemens Corporate Research, Inc. System and method for detecting features from images of vehicles
JP4639869B2 (en) 2005-03-14 2011-02-23 オムロン株式会社 Imaging apparatus and timer photographing method
US20060203106A1 (en) 2005-03-14 2006-09-14 Lawrence Joseph P Methods and apparatus for retrieving data captured by a media device
JP4324170B2 (en) 2005-03-17 2009-09-02 キヤノン株式会社 Imaging apparatus and display control method
US7801328B2 (en) 2005-03-31 2010-09-21 Honeywell International Inc. Methods for defining, detecting, analyzing, indexing and retrieving events using video image processing
JP4519708B2 (en) 2005-05-11 2010-08-04 富士フイルム株式会社 Imaging apparatus and method, and program
JP2006318103A (en) 2005-05-11 2006-11-24 Fuji Photo Film Co Ltd Image processor, image processing method, and program
JP4906034B2 (en) 2005-05-16 2012-03-28 富士フイルム株式会社 Imaging apparatus, method, and program
US7612794B2 (en) 2005-05-25 2009-11-03 Microsoft Corp. System and method for applying digital make-up in video conferencing
US8320660B2 (en) 2005-06-03 2012-11-27 Nec Corporation Image processing system, 3-dimensional shape estimation system, object position/posture estimation system and image generation system
JP2006350498A (en) 2005-06-14 2006-12-28 Fujifilm Holdings Corp Image processor and image processing method and program
JP2007006182A (en) 2005-06-24 2007-01-11 Fujifilm Holdings Corp Image processing apparatus and method therefor, and program
JP4661413B2 (en) * 2005-07-11 2011-03-30 富士フイルム株式会社 Imaging apparatus, number of shots management method and number of shots management program
US20070018966A1 (en) 2005-07-25 2007-01-25 Blythe Michael M Predicted object location
US7606392B2 (en) * 2005-08-26 2009-10-20 Sony Corporation Capturing and processing facial motion data
JP4429241B2 (en) 2005-09-05 2010-03-10 キヤノン株式会社 Image processing apparatus and method
JP4799101B2 (en) 2005-09-26 2011-10-26 富士フイルム株式会社 Image processing method, apparatus, and program
JP2007094549A (en) 2005-09-27 2007-04-12 Fujifilm Corp Image processing method, device and program
US7555149B2 (en) * 2005-10-25 2009-06-30 Mitsubishi Electric Research Laboratories, Inc. Method and system for segmenting videos using face detection
US20070098303A1 (en) * 2005-10-31 2007-05-03 Eastman Kodak Company Determining a particular person from a collection
JP4626493B2 (en) 2005-11-14 2011-02-09 ソニー株式会社 Image processing apparatus, image processing method, program for image processing method, and recording medium recording program for image processing method
US7599577B2 (en) 2005-11-18 2009-10-06 Fotonation Vision Limited Method and apparatus of correcting hybrid flash artifacts in digital images
US7643659B2 (en) * 2005-12-31 2010-01-05 Arcsoft, Inc. Facial feature detection on mobile devices
US7953253B2 (en) * 2005-12-31 2011-05-31 Arcsoft, Inc. Face detection on mobile devices
JP4657934B2 (en) 2006-01-23 2011-03-23 富士フイルム株式会社 Face detection method, apparatus and program
WO2007095553A2 (en) 2006-02-14 2007-08-23 Fotonation Vision Limited Automatic detection and correction of non-red eye flash defects
WO2007095556A2 (en) 2006-02-14 2007-08-23 Fotonation Vision Limited Digital image acquisition device with built in dust and sensor mapping capability
EP1987436B1 (en) 2006-02-14 2015-12-09 FotoNation Limited Image blurring
US7551754B2 (en) 2006-02-24 2009-06-23 Fotonation Vision Limited Method and apparatus for selective rejection of digital images
US7804983B2 (en) 2006-02-24 2010-09-28 Fotonation Vision Limited Digital image acquisition control and correction method and apparatus
US8265351B2 (en) 2006-05-05 2012-09-11 Parham Aarabi Method, system and computer program product for automatic and semi-automatic modification of digital images of faces
US7724952B2 (en) 2006-05-15 2010-05-25 Microsoft Corporation Object matting using flash and no-flash images
US7539533B2 (en) 2006-05-16 2009-05-26 Bao Tran Mesh network monitoring appliance
IES20070229A2 (en) 2006-06-05 2007-10-03 Fotonation Vision Ltd Image acquisition method and apparatus
DE602007012246D1 (en) 2006-06-12 2011-03-10 Tessera Tech Ireland Ltd PROGRESS IN EXTENDING THE AAM TECHNIQUES FROM GRAYSCALE TO COLOR PICTURES
US7515740B2 (en) 2006-08-02 2009-04-07 Fotonation Vision Limited Face recognition with combined PCA-based datasets
WO2008102205A2 (en) 2006-08-09 2008-08-28 Fotonation Vision Limited Detection of airborne flash artifacts using preflash image
US7403643B2 (en) 2006-08-11 2008-07-22 Fotonation Vision Limited Real-time face tracking in a digital image acquisition device
US8055067B2 (en) 2007-01-18 2011-11-08 DigitalOptics Corporation Europe Limited Color segmentation
JP4655054B2 (en) * 2007-02-26 2011-03-23 富士フイルム株式会社 Imaging device
ATE472140T1 (en) 2007-02-28 2010-07-15 Fotonation Vision Ltd SEPARATION OF DIRECTIONAL ILLUMINATION VARIABILITY IN STATISTICAL FACIAL MODELING BASED ON TEXTURE SPACE DECOMPOSITIONS
US8175382B2 (en) * 2007-05-10 2012-05-08 Microsoft Corporation Learning image enhancement
US7991285B2 (en) * 2008-01-08 2011-08-02 Sony Ericsson Mobile Communications Ab Using a captured background image for taking a photograph

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4047187A (en) * 1974-04-01 1977-09-06 Canon Kabushiki Kaisha System for exposure measurement and/or focus detection by means of image sensor
US7583294B2 (en) * 2000-02-28 2009-09-01 Eastman Kodak Company Face detecting camera and method
US7050607B2 (en) * 2001-12-08 2006-05-23 Microsoft Corp. System and method for multi-view face detection
US20040022435A1 (en) * 2002-07-30 2004-02-05 Canon Kabushiki Kaisha Image processing apparatus and method and program storage medium
US20050104848A1 (en) * 2003-09-25 2005-05-19 Kabushiki Kaisha Toshiba Image processing device and method

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130129145A1 (en) * 2011-11-22 2013-05-23 Cywee Group Limited Orientation correction method for electronic device used to perform facial recognition and electronic device thereof
US8971574B2 (en) * 2011-11-22 2015-03-03 Ulsee Inc. Orientation correction method for electronic device used to perform facial recognition and electronic device thereof
US10623630B2 (en) 2016-08-01 2020-04-14 Samsung Electronics Co., Ltd Method of applying a specified effect to an area of an image and electronic device supporting the same
US10825150B2 (en) 2017-10-31 2020-11-03 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and device, electronic device and computer-readable storage medium

Also Published As

Publication number Publication date
US8989443B2 (en) 2015-03-24
US7630527B2 (en) 2009-12-08
US20080013800A1 (en) 2008-01-17
US20120155709A1 (en) 2012-06-21
US9008374B2 (en) 2015-04-14
US20130314525A1 (en) 2013-11-28
US20100220899A1 (en) 2010-09-02
US8761449B2 (en) 2014-06-24
US20130315488A1 (en) 2013-11-28
US7912245B2 (en) 2011-03-22
US20080013799A1 (en) 2008-01-17
US7986811B2 (en) 2011-07-26
US20120075504A1 (en) 2012-03-29
US7440593B1 (en) 2008-10-21
US8498446B2 (en) 2013-07-30
US8391645B2 (en) 2013-03-05
US20140036109A1 (en) 2014-02-06

Similar Documents

Publication Publication Date Title
US8391645B2 (en) Detecting orientation of digital images using face detection information
US8081844B2 (en) Detecting orientation of digital images using face detection information
CN108230252B (en) Image processing method and device and electronic equipment
US8494286B2 (en) Face detection in mid-shot digital images
US8836777B2 (en) Automatic detection of vertical gaze using an embedded imaging device
US8797448B2 (en) Rapid auto-focus using classifier chains, MEMS and multiple object focusing
US8861806B2 (en) Real-time face tracking with reference images
US8675991B2 (en) Modification of post-viewing parameters for digital images using region or feature information
US8659697B2 (en) Rapid auto-focus using classifier chains, MEMS and/or multiple object focusing
US7403643B2 (en) Real-time face tracking in a digital image acquisition device
JP4957922B2 (en) Image direction determination apparatus, image direction determination method, and image direction determination program
US20160073040A1 (en) Method for image segmentation
WO2007142621A1 (en) Modification of post-viewing parameters for digital images using image region or feature information
JP2012530994A (en) Method and apparatus for half-face detection
US11315360B2 (en) Live facial recognition system and method
US20220019771A1 (en) Image processing device, image processing method, and storage medium
KR20150011714A (en) Device for determining orientation of picture
JP2008234377A (en) Image sorting device and method, and program
IES84977Y1 (en) Face detection in mid-shot digital images
JP2003208615A (en) Image processing device and program
IE20080161U1 (en) Face detection in mid-shot digital images

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION