US6421463B1 - Trainable system to search for objects in images - Google Patents

Trainable system to search for objects in images

Info

Publication number
US6421463B1
US6421463B1
Authority
US
United States
Prior art keywords
wavelet
image
coefficients
images
template
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
US09/282,742
Inventor
Tomaso Poggio
Michael Oren
Constantine P. Papageorgiou
Pawan Sinha
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Massachusetts Institute of Technology
Original Assignee
Massachusetts Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Massachusetts Institute of Technology filed Critical Massachusetts Institute of Technology
Priority to US09/282,742
Assigned to MASSACHUSETTS INSTITUTE OF TECHNOLOGY reassignment MASSACHUSETTS INSTITUTE OF TECHNOLOGY ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SINHA, PAWAN, OREN, MICHAEL, PAPAGEORGIOU, CONSTANTINE P., POGGIO, TOMASO
Assigned to NAVY, SECRETARY OF THE, UNITED STATES OF AMERICA reassignment NAVY, SECRETARY OF THE, UNITED STATES OF AMERICA CONFIRMATORY INSTRUMENT Assignors: MASSACHUSETTS INSTITUTE OF TECHNOLOGY
Assigned to NATIONAL SCIENCE FOUNDATION reassignment NATIONAL SCIENCE FOUNDATION CONFIRMATORY LICENSE (SEE DOCUMENT FOR DETAILS). Assignors: MASSACHUSETTS INSTITUTE OF TECHNOLOGY
Application granted
Publication of US6421463B1
Anticipated expiration
Status: Expired - Fee Related


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/42 - Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V10/431 - Frequency domain transformation; Autocorrelation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/11 - Technique with transformation invariance effect

Definitions

  • This invention relates generally to image processing systems and more particularly to systems for detecting objects in images.
  • an analog or continuous parameter image such as a still photograph or a frame in a video sequence may be represented as a matrix of digital values and stored in a storage device of a computer or other digital processing device.
  • When an image is represented in this way, it is generally referred to as a digital image. It is desirable to digitize an image such that the image may be digitally processed by a processing device.
  • Images which illustrate items or scenes recognizable by a human typically contain at least one object such as a person's face, an entire person, a car, etc.
  • Some images, referred to as “cluttered” images, contain more than one object of the same type and/or more than one type of object.
  • cluttered images contain more than one type or class of object (e.g. pedestrians as one class and cars as a different class) as well as multiple instances of objects of the same type (e.g. multiple pedestrians walking on a sidewalk).
  • object detection refers to the process of detecting a particular object or a particular type of object contained within an image.
  • an object class description is important since the object detection process requires a system to differentiate between a particular object class and all other possible types of objects in the rest of the world. This is in contrast to pattern classification, in which it is only necessary to decide between a relatively small number of classes.
  • the intra-class variability itself is significant and difficult to model. Since it is not known how many instances of the class are presented in any particular image or scene, if any, the detection problem cannot easily be solved using methods such as maximum-a-posteriori probability (MAP) or maximum likelihood (ML) methods. Consequently, the classification of each pattern in the image must be performed independently. This makes the decision process susceptible to missed instances of the class and to false positives. Thus, in an object detection process, it is desirable for the class description to have large discriminative power thereby enabling the processing system to recognize particular object types in a variety of different images including cluttered and uncluttered images.
  • one approach to detect objects utilizes motion and explicit segmentation of the image. Such approaches have been used, for example, to detect people within an image.
  • One problem with this approach is that it is possible that an object which is of the type intended to be detected is not moving. Thus, in this case, the utilization of motion would not aid in the detection of an object.
  • Another approach to detecting objects in an image is to utilize trainable object detection.
  • Such an approach has been utilized to detect faces in cluttered scenes.
  • the face detection system utilizes models of face and non-face patterns in a high dimensional space and derives a statistical model for a particular class such as the class of frontal human faces.
  • Frontal human faces despite their variability, share similar patterns (shape and the spatial layout of facial features) and their color space is relatively constrained.
  • the ratio template technique detects faces in cluttered scenes by utilizing a relatively small set of relationships between face regions.
  • the set of relationships are collectively referred to as a ratio template and provide a constraint for face detection.
  • the ratio template encodes the ordinal structure of the brightness distribution on an object such as a face.
  • the ratio template consists of a set of inequality relationships between the average intensities of a few different object-regions.
  • the ratio template consists of a set of inequality relationships between the average intensities of a few different face-regions.
  • This technique utilizes the concept that while the absolute intensity values of different regions may change dramatically under varying illumination conditions, their mutual ordinal relationships (binarized ratios) remain largely unaffected.
  • the forehead is typically brighter than the eye-socket regions for all but the most contrived lighting setups.
  • the ratio template technique overcomes some but not all of the problems associated with detecting objects having significant variability in the patterns and colors within the boundaries of the object and with detection of such objects in the absence of constraints on the image background.
  • an object detection system includes an image preprocessor for moving a window across the image and a classifier coupled to the preprocessor for classifying the portion of the image within the window.
  • the classifier includes a wavelet template generator which generates a wavelet template that defines the shape of an object with a subset of the wavelet coefficients of the image.
  • the wavelet template generator generates a wavelet template which includes a set of regular regions of different scales that correspond to the support of a subset of significant wavelet functions. The relationships between different regions are expressed as constraints on the values of the wavelet coefficients.
  • the wavelet template defines an object as a set of regions and relationships among the regions. Use of a wavelet basis to represent the template yields both a computationally efficient technique and an effective learning scheme.
  • By using a wavelet template that defines the shape of an object in terms of a subset of the wavelet coefficients of the image, the system can detect highly non-rigid objects such as people and other objects with a high degree of variability in size, shape, color, and texture.
  • the wavelet template is invariant to changes in color and texture and can be used to robustly define a rich and complex class of objects such as people.
  • the system utilizes a model that is automatically learned from examples and thus can avoid the use of motion and explicit image segmentation to detect objects in an image.
  • the system further includes a training system coupled to the classifier and including a database including both positive and negative examples; and a quadratic programming solver.
  • the system utilizes a general paradigm for object detection.
  • the system is trainable and utilizes example-based models.
  • the system is reconfigurable and extendible to a wide variety of object classes.
  • a wavelet template includes a set of regular regions of different scales that correspond to the support of a subset of significant wavelet functions of an image.
  • the relationships between different regions are expressed as constraints on the values of the wavelet coefficients.
  • the wavelet template can compactly express the structural commonality of a class of objects and is computationally efficient. It is learnable from a set of examples and provides an effective tool for the challenging problem of detecting pedestrians in cluttered scenes.
  • a learnable wavelet template provides a framework that is extensible to the detection of complex object classes including but not limited to the pedestrian object class.
  • the wavelet template is an extension of the ratio template and addresses issues not addressed by the ratio template in the context of pedestrian detection.
  • the success of the wavelet template for pedestrian detection comes from its ability to capture high-level knowledge about the object class (structural information expressed as a set of constraints on the wavelet coefficients) and incorporate it into the low-level process of interpreting image intensities. Attempts to directly apply low-level techniques such as edge detection and region segmentation are likely to fail in the images which include highly non-rigid objects having a high degree of variability in size, shape, color, and texture since these methods are not robust, are sensitive to spurious details, and give ambiguous results. Using the wavelet template, only significant information that characterizes the object class, as obtained in a learning phase, is evaluated and used.
  • the approach of the present invention as applied to a pedestrian template is learned from examples and then used for classification, ideally in a template matching scheme. It is important to realize that this is not the only interpretation of the technique.
  • An alternative, and perhaps more general, utilization of the technique includes the step of learning the template as a dimensionality reduction stage. Using all the wavelet functions that describe a window of 128×64 pixels would yield vectors of very high dimensionality. The training of a classifier with such a high dimensionality would in turn require an example set which may be too large to utilize in practical systems using present day technology.
  • the template learning stage serves to select the basis functions relevant for this task and to reduce their number considerably.
  • in one embodiment, the twenty-nine basis functions identified in the template learning stage are used.
  • a classifier such as a support vector machine (SVM) can then be trained on a small example set.
  • learning the pedestrian detection task consists of two learning steps: (1) dimensionality reduction, that is, task-dependent basis selection and (2) training the classifier.
  • In this interpretation, a template in the strict sense of the word is neither learned nor used. It should be appreciated of course that in other applications and embodiments, it may be desirable not to reduce the number of basis functions but instead to use all available basis functions. In this case, all of the basis functions are provided to the classifier.
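As an illustrative sketch of this two-step interpretation, the following Python fragment selects a reduced basis from normalized coefficients and trains a support vector classifier on the reduced vectors. The selection rule (keeping coefficients whose ensemble average over positive examples is far from the random-pattern baseline of 1), the 0.5/1.5 thresholds, and the use of scikit-learn's SVC are assumptions for illustration, not a prescribed implementation.

```python
import numpy as np
from sklearn.svm import SVC

def select_basis(pos_coeffs, lo=0.5, hi=1.5):
    """Stage 1: dimensionality reduction (task-dependent basis selection).

    pos_coeffs: array of shape (n_examples, n_coeffs) holding normalized
    wavelet coefficients of the positive training windows. Keeps the
    coefficients whose ensemble average signals consistent "change"
    (well above 1) or "no-change" (well below 1)."""
    avg = pos_coeffs.mean(axis=0)
    return np.flatnonzero((avg > hi) | (avg < lo))  # e.g. 29 for pedestrians

def train_classifier(coeffs, labels, keep):
    """Stage 2: train a classifier on the reduced feature vectors."""
    svm = SVC(kernel="poly", degree=2)  # kernel choice is illustrative
    svm.fit(coeffs[:, keep], labels)
    return svm
```

A novel window would then be classified via svm.predict(window_coeffs[keep].reshape(1, -1)).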
  • an object detection system includes an optical flow processor which receives frames from a video sequence and computes the optical flow between images in the frames and a discontinuity detector coupled to the optical flow processor.
  • the discontinuity detector detects discontinuities in the flow field that indicate probable motion of objects relative to the background in the frame.
  • a detection system is coupled to the discontinuity detector and receives information indicating which regions of an image or frame are likely to include objects having motion.
  • an object detection system which utilizes motion information to detect objects is provided.
  • the frames may be consecutive frames in a video sequence.
  • the discontinuity detector detects discontinuities in the flow field that indicate probable motion of objects relative to the background and the detected regions of discontinuity are grown using morphological operators, to define the full regions of interest. In these regions of motion, the likely class of objects is limited, thus strictness of the classifier can be relaxed.
  • FIG. 1 is a block diagram of an image processing system utilizing a wavelet template generator
  • FIG. 2 is a block diagram of a wavelet template image processing system for detecting pedestrians in an image utilizing a bootstrapping technique
  • FIG. 2A is a plot of Pedestrian Detection Rate vs. False Detection Rate
  • FIG. 2B is a subset of training images for training a wavelet template image processing system for detecting pedestrians
  • FIG. 3 is an image diagrammatically illustrating a dictionary of basis functions which encode differences in the intensities among different regions of an image
  • FIG. 3A is a diagrammatical representation of a basis function
  • FIGS. 3B-3H are a series of diagrams illustrating ensemble average values of wavelet coefficients for pedestrians coded using gray level coding
  • FIG. 3I is a diagram of a pedestrian image having coefficients disposed thereon
  • FIG. 4 is a block diagram of an image processing system architecture
  • FIG. 5 is a diagrammatical representation of a face image having a predetermined number of basis coefficients disposed thereover;
  • FIGS. 5A-5F are a series of diagrams illustrating ensemble average values of wavelet coefficients for face images coded using gray level coding
  • FIG. 5G is a set of training images for training a wavelet template image processing system for detecting faces.
  • FIGS. 6-6C are a series of images showing the sequence of steps to utilize motion in the detection of objects.
  • An analog or continuous parameter image such as a still photograph may be represented as a matrix of digital values and stored in a storage device of a computer or other digital processing device.
  • the matrix of digital data values is generally referred to as a “digital image” or more simply an “image” and may be stored in a digital data storage device, such as a memory for example, as an array of numbers representing the spatial distribution of energy at different wavelengths in a scene.
  • an image sequence such as a view of a moving roller-coaster for example, may be converted to a digital video signal as is generally known.
  • the digital video signal is provided from a sequence of discrete digital images or frames. Each frame may be represented as a matrix of digital data values which may be stored in a storage device of a computer or other digital processing device.
  • a matrix of digital data values is generally referred to as an “image frame” or more simply an “image” or a “frame.”
  • Each of the images in the digital video signal may be stored in a digital data storage device, such as a memory for example, as an array of numbers representing the spatial distribution of energy at different wavelengths in a scene in a manner similar to the manner in which an image of a still photograph is stored.
  • each of the numbers in the array corresponds to a digital word (e.g. an eight-bit binary value) typically referred to as a “picture element” or a “pixel” or as “image data.”
  • the image may be divided into a two dimensional array of pixels with each of the pixels represented by a digital word.
  • Reference is also sometimes made herein to an image as a two-dimensional pixel array.
  • An example of an array size is an array having 512 rows and 512 columns (denoted 512×512). Specific reference is sometimes made herein to operation on arrays having a particular size (e.g. 128×64, 32×32, 16×16, etc.).
  • One of ordinary skill in the art will of course recognize that the techniques described herein are applicable to various sizes and shapes of pixel arrays including irregularly shaped pixel arrays.
  • a scene is an image or a single representative frame of video in which the contents and the associated relationships within the image can be assigned a semantic meaning.
  • a still image may be represented, for example, as a pixel array having 512 rows and 512 columns.
  • An object is an identifiable entity in a scene in a still image or a moving or non-moving entity in a video image.
  • a scene may correspond to an entire image while a boat might correspond to an object in the scene.
  • a scene typically includes many objects and image regions while an object corresponds to a single entity within a scene.
  • An image region or more simply a region is a portion of an image. For example, if an image is provided as a 32×32 pixel array, a region may correspond to a 4×4 portion of the 32×32 pixel array.
  • an object detection system 10 includes a resize and preprocessing processor 12 which receives an input signal corresponding to at least a portion of a digital image 14 and provides an output signal (a feature vector) to an input port of a classifier 16 which may be provided, for example, as a support vector machine (SVM) 16 .
  • SVM support vector machine
  • the classifier 16 provides the class information to a detection system 18 which detects objects in particular images and provides output signals to an output/display system 20 .
  • the system 10 can also include a training system 22 which in turn includes an image database 24 and a quadratic programming (QP) solver 26 .
  • the training system 22 provides one or more training samples to the classifier 16 .
  • basis functions having a significant correlation with object characteristics are identified.
  • Various classification techniques well known to those of ordinary skill in the art can be used to learn the relationships between the wavelet coefficients that define a particular class such as a pedestrian class, for example.
  • the detection system 10 detects objects in arbitrary positions in the image and in different scales. To accomplish this task, the system is trained to detect an object centered in a window of a predetermined size.
  • the window may, for example, be provided as a 128×64 pixel window.
  • the system is able to detect objects at arbitrary positions by shifting the 128×64 window throughout the image, thereby scanning all possible locations in the image.
  • the scanning step is combined with the step of iteratively resizing the image to achieve multi-scale detection.
  • the image is scaled from 0.2 to 1.5 times its original size, at increments of 0.1.
  • a transform computation for the whole image is performed and shifting is performed in the coefficient space.
  • a shift of one coefficient in the finer scale corresponds to a shift of four pixels in the window and a shift in the coarse scale corresponds to a shift of eight pixels. Since most of the coefficients in the wavelet template are at the finer scale (the coarse scale coefficients undergo a relatively small change with a shift of four pixels), an effective spatial resolution of four pixels is achieved by working in the wavelet coefficient space.
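A minimal Python sketch of this multi-scale scanning procedure follows. The classify and resize routines are hypothetical placeholders, and for simplicity the sketch slides the window in pixel space with a stride of four pixels, mimicking the effective spatial resolution of shifting one coefficient at the finer scale, rather than actually shifting in wavelet-coefficient space.

```python
import numpy as np

def detect(image, classify, resize, win_h=128, win_w=64, stride=4):
    """Scan a 128x64 window over the image, rescaled from 0.2 to 1.5
    times its original size at increments of 0.1.

    classify(window) -> True for a detection; resize(image, s) returns
    the image scaled by the factor s."""
    detections = []
    for s in np.arange(0.2, 1.5 + 1e-9, 0.1):
        scaled = resize(image, s)
        h, w = scaled.shape[:2]
        for y in range(0, h - win_h + 1, stride):
            for x in range(0, w - win_w + 1, stride):
                if classify(scaled[y:y + win_h, x:x + win_w]):
                    # map the window back to original image coordinates
                    detections.append((x / s, y / s, win_w / s, win_h / s))
    return detections
```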
  • images of same class objects from different views are stored in the database and used.
  • frontal and rear images of people from outdoor and indoor scenes can be used.
  • positive example images (i.e. images which include people) and negative example images (i.e. images which do not include people) are collected.
  • the initial non-people images in the training database (i.e. the negative image examples) may correspond to patterns from nature or other scenes not containing people.
  • the combined set of positive and negative examples forms the initial training database for the classifier.
  • one problem is how to learn which are the relevant coefficients that express structure common to the entire object class and which are the relationships that define the class.
  • the learning can be divided into a two stage process.
  • the first stage includes identifying a relatively small subset of basis functions that capture the structure of a class.
  • the second stage of the process includes using a classifier to derive a precise class model from the subset of basis functions.
  • the system can detect objects in arbitrary positions and in different scales in an image.
  • the system detects objects at arbitrary positions in an image by scanning all possible locations in the image. This is accomplished by shifting a detection window (see e.g. FIGS. 1 and 4). This is combined with iteratively re-sizing the image to achieve multi-scale detection.
  • faces were detected from a minimum size of 19×19 pixels to 5 times this size by scaling the novel image from 0.2 to 2 times its original size. This can be done in increments, such as increments of 0.1.
  • instead of recomputing the wavelet coefficients for every window in the image, the transform can be computed for the whole image and the shifting can be done in the coefficient space.
  • various classification techniques can be used to learn the relationships between the wavelet coefficients that define a particular class such as the pedestrian class.
  • the system detects people in arbitrary positions in the image and in different scales. To accomplish this task, the system is trained to detect a pedestrian centered in a 128×64 pixel window. Once a training stage is completed, the system is able to detect pedestrians at arbitrary positions by shifting the 128×64 window, thereby scanning all possible locations in the image. This is combined with iteratively resizing the image to achieve multi-scale detection; in accordance with the present invention, scaling of the images ranges from 0.2 to 1.5 times the original image size. Such scaling may be done at increments of 0.1.
  • the image database includes both positive and negative image examples.
  • a positive image example is an image which includes the object of interest.
  • a negative image example refers to an image which does not include the object of interest.
  • the detection system is a pedestrian detection system
  • a positive image would include a pedestrian while a negative image would not include a pedestrian.
  • the database of images 32 would thus include frontal and rear images of people from outdoor and indoor scenes.
  • the database 32 would also include an initial set of non-people images.
  • Such non-people images could correspond to patterns from nature or other scenes not containing people.
  • the combined set of positive and negative examples forming the initial training database 32 is provided to a classifier 34.
  • Classifier 34 provides information (i.e. a likelihood estimate of the object belonging to the class) to the detection system 30 which allows the detection system 30 to receive a new image 38 and to detect objects of interest in the image.
  • the objects of interest are pedestrians 40 a - 40 c as shown.
  • objects 40 a and 40 c detected by system 30 do not correspond to pedestrians.
  • these objects correspond to so-called false positive images 42 a , 42 b which are provided to classifier 34 and identified as false positive images.
  • Classifier 34 receives the false positive images 42 a , 42 b and uses the images as additional learning examples.
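The bootstrapping cycle just described can be summarized by a short sketch. The run_detector helper, which returns feature vectors of windows the current classifier wrongly accepts in scenes known to contain no true instances, is a hypothetical stand-in.

```python
def bootstrap(classifier, pos, neg, scenes, run_detector, rounds=3):
    """Retrain the classifier, folding false positives back in as
    additional negative examples (the bootstrapping cycle)."""
    for _ in range(rounds):
        X = pos + neg
        y = [1] * len(pos) + [0] * len(neg)
        classifier.fit(X, y)
        false_positives = [fv for scene in scenes
                           for fv in run_detector(classifier, scene)]
        if not false_positives:
            break
        neg.extend(false_positives)  # mistakes become new negatives
    return classifier
```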
  • the template learning stage includes the identification of the significant coefficients that characterize the object class (e.g. the pedestrian class). These coefficients are used as the feature vector for various classification methods.
  • the simplest classification scheme is to use a basic template matching measure. Normalized template coefficients are divided into two categories: coefficients above 1 (indicating strong change) and below 1 (weak change). For every novel window, the wavelet coefficients are compared to the pedestrian template. The matching value is the ratio of coefficients in agreement. While this basic template matching scheme is relatively simple, it performs relatively well in detecting relatively complex objects such as pedestrians.
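A sketch of this baseline measure, assuming the coefficients have already been normalized and restricted to the template's significant positions; the 0.7 agreement threshold reported later in the text is used as the default.

```python
import numpy as np

def template_match(window_coeffs, template_coeffs, threshold=0.7):
    """Ratio-of-agreement matching on normalized wavelet coefficients.

    A coefficient agrees when window and template fall in the same
    category: above 1 (strong change) or below 1 (weak change)."""
    agreement = np.mean((window_coeffs > 1.0) == (template_coeffs > 1.0))
    return agreement >= threshold
```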
  • a relatively sophisticated classifier 34 which will learn the relationship between the coefficients from given sets of positive and negative examples is preferably used.
  • the classifier can learn more refined relationships than the simple template matching schemes and therefore can provide more accurate detection.
  • the classification technique used is the support vector machine (SVM).
  • unlike classification techniques such as multi-layer perceptrons (MLPs) which minimize the training error, the SVM machinery uses structural risk minimization which minimizes a bound on the generalization error and therefore should perform better on novel data.
  • Another interesting aspect of the SVM is that its decision surface depends only on the inner product of the feature vectors. This leads to an important extension since the Euclidean inner product can be replaced by any symmetric positive-definite kernel K(x,y). This use of a kernel is equivalent to mapping the feature vectors to a high-dimensional space, thereby significantly increasing the discriminative power of the classifier.
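For concreteness, the following sketch shows the form of such a kernelized decision function; the polynomial kernel is one common choice of symmetric positive-definite kernel K(x,y) and is used here purely as an example.

```python
import numpy as np

def poly_kernel(x, y, degree=2):
    """An example symmetric positive-definite kernel: K(x,y) = (x.y + 1)^d."""
    return (np.dot(x, y) + 1.0) ** degree

def svm_decision(x, support_vectors, alphas, labels, bias, kernel=poly_kernel):
    """The decision value depends on x only through kernel evaluations
    against the support vectors: f(x) = sign(sum_i a_i y_i K(x_i, x) + b)."""
    s = sum(a * y * kernel(sv, x)
            for a, y, sv in zip(alphas, labels, support_vectors))
    return np.sign(s + bias)
```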
  • the system was operated using a database of 564 positive image examples and 597 negative image examples. The system then undergoes the bootstrapping cycle described above. In one embodiment in which pedestrians were detected, the support vector system undergoes three bootstrapping steps, ending up with a total of 4597 negative examples. For the template matching version a threshold of 0.7 (70% matching) was empirically found to yield good results.
  • Out-of-sample performance was evaluated over a test set consisting of 72 images for both the template matching scheme and the support vector classifier.
  • the test images contain a total of 165 pedestrians in frontal or near-frontal poses; 24 of these pedestrians are only partially observable (e.g. with body regions that are indistinguishable from the background). Since the system was not trained with partially observable pedestrians, it is expected that the system would not be able to detect these instances. To give a fair account of the system, statistics are presented for both the total set and the set of 141 “high quality” pedestrian images. Results of the tests are presented in Table 1 for representative systems using template matching and support vectors.
  • the template matching system has a pedestrian detection rate of 52.7%, with a false positive rate of 1 for every 5,000 windows examined.
  • the success of such a straightforward template matching measure, which is much less powerful than the SVM classifier, suggests that the template learning scheme extracts non-trivial structural regularity within the pedestrian class.
  • Referring to FIG. 2A, a plot of Pedestrian Detection Rate vs. False Detection Rate (an ROC curve) is shown.
  • Curve 46 (solid line) is over the entire test set while curve 44 is over a “high quality” test set. Examination of the curves 44 , 46 illustrates, for example, that if a tolerance of one false positive for every 15,000 windows examined exists, the system can achieve a detection rate of 69.6%, and as high as 81.6% on a “high quality” image set. It should be noted that the support vector classifier with the bootstrapping performs better than a “naive” template matching scheme.
  • the classifier can also be trained to handle side views in a manner substantially the same as that described herein for training of front and rear views.
  • Referring to FIG. 2B, a plurality of typical images of people 48 which may be stored in the database 32 (FIG. 2) are shown. Examination of these images illustrates the difficulties of pedestrian detection as evidenced by the significant variability in the patterns and colors within the boundaries of the body.
  • the wavelet coefficients can be interpreted as indicating an almost uniform area, i.e. “no-change”, if their absolute value is relatively small, or as indicating “strong change” if their absolute value is relatively large.
  • the wavelet template sought to be identified consists solely of wavelet coefficients (either vertical, horizontal or corner) whose types (“change”/“no-change”) are both clearly identified and consistent along the ensemble of pedestrian images; these comprise the “important” coefficients.
  • the basic analysis to identify the template consists of two steps: first, the wavelet coefficients are normalized relative to the rest of the coefficients in the patterns; second, the averages of the normalized coefficients are analyzed along the ensemble.
  • a relatively large number of images can be used in the template learning process.
  • a set of 564 color images of people similar to those shown in FIG. 2B are used in the template learning.
  • Each of the images is scaled and clipped to the dimensions 128×64 such that the people are centered and approximately the same size (the distance from the shoulders to feet is about 80 pixels).
  • restriction is made to the use of wavelets at scales of 32×32 pixels (one array of 15×5 coefficients for each wavelet class) and 16×16 pixels (29×13 for each class).
  • a quadruple dense Haar transform is computed and the coefficient value corresponding to the largest absolute value among the three channels is selected.
  • the normalization step computes the average of each coefficient's class ({vertical, horizontal, corner} × {16, 32}) over all the pedestrian patterns and divides every coefficient by its corresponding class average. Separate computations of the averages for each class are made since the power distribution between the different classes may vary.
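A sketch of these two steps in Python; the dictionary layout keyed by (orientation, scale) is an assumed data organization, not one given in the text.

```python
import numpy as np

def normalize_and_average(coeffs_by_class):
    """coeffs_by_class maps a class key, e.g. ('vertical', 32), to an
    array of shape (n_patterns, n_coeffs_in_class).

    Step 1: divide every coefficient by the average of its class, taken
    over all pedestrian patterns, so differing power between classes is
    factored out. Step 2: average the normalized coefficients over the
    ensemble; values well above 1 mark consistent "change" and values
    well below 1 mark consistent "no-change"."""
    ensemble_avg = {}
    for key, c in coeffs_by_class.items():
        normalized = c / c.mean()                    # step 1
        ensemble_avg[key] = normalized.mean(axis=0)  # step 2
    return ensemble_avg
```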
  • Tables 2A and 2B show the average coefficient values for the set of vertical Haar coefficients of scale 32×32 for both the non-pedestrian (Table 2A) and pedestrian (Table 2B) classes.
  • Table 2A shows that the process of averaging the coefficients within the pattern and then in the ensemble does not create spurious patterns; the average values of these non-pedestrian coefficients are near 1 since these are random images that do not share any common pattern.
  • Table 2B shows that the pedestrian averages have a clear pattern, with strong response (values over 1.5) in the coefficients corresponding to the sides of the body and weak response (values less than 0.5) in the coefficients along the center of the body.
  • a gray level coding scheme can be used to visualize the patterns in the different classes and values of coefficients.
  • the values can be displayed in the proper spatial layout. With this technique, coefficients close to 1 are gray, stronger coefficients are darker, and weaker coefficients are lighter.
  • Haar wavelet representation has also been used in prior art techniques for image database retrieval where the largest wavelet coefficients are used as a measure of similarity between two images.
  • a wavelet representation is used to capture the structural similarities between various instances of the class.
  • Referring to FIG. 3A, three types of 2-dimensional Haar wavelets 52-56 are depicted. These types include basis functions which capture change in intensity along the horizontal direction, the vertical direction and the diagonals (or corners). Since the wavelets that the standard transform generates have irregular support, a non-standard two-dimensional DWT is used where, at a given scale, the transform is applied to each dimension sequentially before proceeding to the next scale. The results are Haar wavelets with square support at all scales. Also depicted in FIG. 3A is a quadruple density 2D Haar basis 58.
  • the spatial sampling of the standard Haar basis is not dense enough for all applications.
  • the Haar basis is not dense enough for a pedestrian detection application.
  • the distance between two neighboring wavelets at level n is 2^n.
  • a set of redundant basis functions, or an over-complete dictionary, where the distance between the wavelets at scale n is ¼·2^n, is required. This is referred to as a quadruple density dictionary.
  • the even scaling coefficients of the previous level are kept and the quadruple transform is repeated on this set only.
  • the odd scaling coefficients are dropped off. Since only the even coefficients are carried along at all the scales, this avoids an “explosion” in the number of coefficients, yet provides a dense and uniform sampling of the wavelet coefficients at all the scales.
  • the time complexity is O(n) in the number of pixels n.
  • the Haar wavelets thus provide a natural set of basis functions which encode differences in average intensities between different regions.
  • the quadruple transform is used.
  • the quadruple transform yields an over-complete set of basis functions.
  • the standard Haar transform shifts each wavelet by 2^n
  • the quadruple density transform shifts the wavelet by ¼·2^n in each direction.
  • the use of this quadruple density transform results in the overcomplete dictionary of basis functions that facilitate the definition of complex constraints on the object patterns.
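One plausible realization of the quadruple density transform is sketched below: coefficients of support s×s are evaluated on a grid with spacing s/4 rather than the standard spacing s. The integral-image formulation, shown here for the vertical wavelet only, is an implementation convenience assumed for the sketch.

```python
import numpy as np

def integral_image(img):
    """Cumulative-sum table so any box sum costs four lookups."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def box_sum(ii, y, x, h, w):
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def vertical_haar_quad(img, s):
    """Vertical Haar coefficients of support s x s sampled at quadruple
    density: neighboring wavelets are s/4 apart instead of s apart."""
    ii = integral_image(img.astype(np.float64))
    step = s // 4                      # 8 pixels at s=32, 4 pixels at s=16
    ys = range(0, img.shape[0] - s + 1, step)
    xs = range(0, img.shape[1] - s + 1, step)
    out = np.empty((len(ys), len(xs)))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            left = box_sum(ii, y, x, s, s // 2)
            right = box_sum(ii, y, x + s // 2, s, s // 2)
            out[i, j] = left - right   # difference of neighboring regions
    return out
```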
  • the ratio template defines a set of constraints on the appearance of an object by defining a set of regions and a set of relationships on their average intensities.
  • the relationships can require, for example, that the ratio of intensities between two specific regions falls within a certain range.
  • the issues of learning these relationships, using the template for detection, and its efficient computation is addressed by establishing the ratio template in the natural framework of Haar wavelets.
  • Each wavelet coefficient describes the relationship between the average intensities of two neighboring regions. If the transform on the image intensities is computed, the Haar coefficients specify the intensity differences between the regions; computing the transform on the log of the image intensities produces coefficients that represent the log of the ratio of the intensities.
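A small worked example of this point: the same two-region difference applied to raw intensities yields an intensity difference, while applied to log intensities it yields the log of the ratio of the regions' (geometric-mean) intensities, the illumination-tolerant quantity the ratio template relies on.

```python
import numpy as np

left = np.array([100.0, 110.0])   # brighter region
right = np.array([50.0, 55.0])    # each pixel exactly half its counterpart

diff = left.mean() - right.mean()                      # 52.5, an intensity difference
log_ratio = np.log(left).mean() - np.log(right).mean()
assert np.isclose(log_ratio, np.log(2.0))              # log of the 2:1 brightness ratio
```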
  • the wavelet template can describe regions with different shapes by using combinations of neighboring wavelets with overlapping support and wavelets of different scales.
  • the wavelet template is also computationally efficient since the transform is computed once for the whole image and different sets of coefficients are examined for different spatial locations.
  • FIGS. 3B-3H the ensemble average values of the wavelet coefficients coded using gray level are shown in images 60 - 72 . Coefficients having values above the template average are darker while those below the average are lighter.
  • FIG. 3B shows the vertical coefficients of random images and as expected, this figure is uniformly gray. The corresponding images for the horizontal and corner coefficients (not shown here) are similar.
  • FIGS. 3C-3E show vertical, horizontal and corner coefficients of scale 32×32 images of people. The coefficients of the images with people show clear patterns with the different classes of wavelet coefficients being tuned to different types of structural information.
  • the vertical wavelets, FIG. 3C, capture the sides of the pedestrians.
  • the horizontal wavelets, FIG. 3D, respond to the line from shoulder to shoulder and to a weaker belt line.
  • the corner wavelets, FIG. 3E, are better tuned to corners, for example, the shoulders, hands and feet.
  • FIGS. 3F-3H show vertical, horizontal and corner coefficients, respectively, of scale 16×16 images of people.
  • the wavelets of finer scale in FIGS. 3F-3H provide better spatial resolution of the body's overall shape, and smaller scale details such as the head and extremities appear clearer.
  • Two similar statistical analyses, using (a) the wavelets of the log of the intensities and (b) the sigmoid function as a “soft threshold” on the normalized coefficients, yield results that are similar to those of the intensity differencing wavelets. It should be noted that a basic measure such as the ensemble average provides clear identification of the template as can be seen from FIGS. 3B-3H.
  • the significant wavelet bases for pedestrian detection that were uncovered during the learning strategy are shown overlayed on an example image of a pedestrian 74 .
  • the template derived from the learning uses a set of 29 coefficients that are consistent along the ensemble (FIGS. 3B-3H) either as indicators of “change” or “no-change.” There are 6 vertical and 1 horizontal coefficients at the scale of 32×32 and 14 vertical and 8 horizontal at the scale of 16×16. These coefficients serve as the feature vector for the ensuing classification problem.
  • FIGS. 3B-3I it can be seen that the coefficients of people show clear patterns with the different classes of wavelet coefficients being tuned to different types of structural information.
  • a window 82 (e.g. a 128×64 pixel window) moves across an image 80.
  • the first stage of learning results in a basis selection 84 and in the second stage of learning, the selected bases are provided to a classifier 86 which may, for example, be provided as an SVM classifier.
  • a set of images of the object class of interest is used.
  • where the object class is faces, a set of grey-scale images of a predetermined face size is used.
  • for example, a set of 2429 grey-scale images of face size 19×19, including a core set of faces with some small angular rotations to improve generalization, may be used.
  • wavelets at scales having dimensions selected to correspond to typical features of the object of interest may be used.
  • wavelets at scales of 4×4 pixels and 2×2 pixels can be used since their dimensions correspond to typical facial features for this size of face image.
  • the basic analysis in identifying the important coefficients includes two steps. Since the power distribution of different types of coefficients may vary, the first step is to compute the average of each coefficient class ({vertical, horizontal, diagonal} × {2×2, 4×4}, for a total of six classes) and normalize every coefficient by its corresponding class average.
  • the second step is to average the normalized coefficients over the entire set of examples.
  • the normalization has the property that the average value of coefficients of random patterns will be 1. If the average value of a coefficient is much greater than 1, this indicates that the coefficient is encoding a boundary between two regions that is consistent along the examples of the class. Similarly, if the average value of a coefficient is much smaller than 1, that coefficient encodes a uniform region.
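Selecting the "important" coefficients then amounts to ranking the ensemble averages by their distance from the random-pattern baseline of 1. The |log| score and top-k rule below are illustrative assumptions; the text reports that 37 coefficients were retained for faces and 29 for pedestrians.

```python
import numpy as np

def top_k_coefficients(ensemble_avg, k):
    """Rank coefficients by |log(average)|, their distance from the
    random-pattern baseline of 1, and keep the k most consistent ones
    (k=37 was used for faces, k=29 for pedestrians)."""
    score = np.abs(np.log(ensemble_avg))
    return np.argsort(score)[::-1][:k]
```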
  • various classification techniques can be used to learn the relationships between the wavelet coefficients that define the object class.
  • the system can be trained using the bootstrapping technique described above in conjunction with FIG. 2 .
  • FIGS. 5A-5F are a series of images 92 a - 92 f used to illustrate the steps to determine significant wavelet bases for face detection that are uncovered through the learning strategy of the present invention.
  • the so-identified significant basis functions are disposed over an image 90 .
  • FIG. 5G shows several exemplary face images 94 used for training from which the ensemble average values of the wavelet coefficients of FIGS. 5A-5F are generated.
  • the images in FIG. 5G are gray level images of size 19×19 pixels.
  • the coefficients' values are coded using grey-scale where each coefficient, or basis function, is drawn as a distinct square in the image.
  • the arrangement of squares corresponds to the spatial location of the basis functions, where strong coefficients (relatively large average values) are coded by darker grey levels and weak coefficients (relatively small average values) are coded by lighter grey levels.
  • a basis function corresponds to a single square in each image and not the entire image.
  • the different types of wavelets capture various facial features. For example, vertical, horizontal and diagonal wavelets capture eyes, nose and mouth. In other applications (e.g. objects other than faces) the different wavelets should capture various features of the particular object.
  • FIGS. 5A-5F illustrate ensemble average values of the wavelet coefficients for faces coded using color.
  • Each basis function is displayed as a single square in the images above. Coefficients whose values are close to the average value of 1 are coded gray, the ones which are above the average are coded using red, and those below the average are coded using blue. It can be seen from observing FIGS. 5A-5F that there are strong features in the eye areas and the nose. Also, the cheek area is an area of almost uniform intensity, i.e. below average coefficients.
  • FIGS. 5A-5C are vertical, horizontal and diagonal coefficients, respectively, of scale 4×4 of images of faces.
  • FIGS. 5D-5F are vertical, horizontal and diagonal coefficients, respectively, of scale 2×2 of images of faces.
  • FIG. 5 shows a typical human face from the training database with the significant 37 coefficients drawn in the proper configuration.
  • For the task of pedestrian detection, in one example a database of 924 color images of people was used; several such images are shown in FIG. 2B above. A similar analysis of the average values of the coefficients was done for the pedestrian class, and FIGS. 3B-3H show grey-scale coding similar to FIGS. 5A-5F for the face class. It should be noted that for the pedestrian class, there are no strong internal patterns as in the face class. Rather, the significant basis functions are along the exterior boundary of the class, indicating a different type of significant visual information. Through the same type of analysis as used for the face class, for the pedestrian class, 29 significant coefficients are chosen from the initial, overcomplete set of 1326 wavelet coefficients. These basis functions are shown overlayed on an example pedestrian in FIG. 3I.
  • Referring to FIGS. 6-6C, the sequence of steps in a system which utilizes motion information is shown.
  • FIG. 6 illustrates static detection results
  • FIGS. 6A and 6B illustrate full motion regions
  • FIG. 6C illustrates improved detection results using the motion information.
  • motion information can be utilized to enhance the robustness of the detection.
  • the optical flow between consecutive images is first computed.
  • discontinuities in the flow field that indicate probable motion of objects relative to the background are detected.
  • the detected regions of discontinuity are grown using morphological operators, to define the full regions of interest 106 a - 106 d . In these regions of motion, the likely class of objects is limited, thus strictness of the classifier can be relaxed.
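A plausible realization of this motion module using modern tools is sketched below; the Farneback optical flow routine from OpenCV, the deviation-from-median test for flow discontinuities, and the dilation kernel size are all assumptions, as the text does not prescribe them.

```python
import cv2
import numpy as np

def motion_regions(prev_gray, next_gray, mag_thresh=1.0, kernel_size=15):
    """Flag pixels whose flow departs from the dominant (background)
    motion, then grow them into full regions of interest."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    background = np.median(flow.reshape(-1, 2), axis=0)   # dominant motion
    deviation = np.linalg.norm(flow - background, axis=2)
    mask = (deviation > mag_thresh).astype(np.uint8)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (kernel_size, kernel_size))
    return cv2.dilate(mask, kernel)   # morphological growing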
  • FIGS. 6-6C illustrate how the motion cues enhance the performance of the system. For example, without the use of motion cues, in FIG. 6 the pedestrian 102 is not detected. However, using the motion cues from two successive frames in a sequence (FIGS. 6A and 6B), pedestrian 102 is detected in FIG. 6C.
  • In FIG. 6B, the areas of motion are identified using the technique described above and correspond to regions 106 a - 106 d. It should be noted that pedestrian 102 falls within region 106 a.
  • Table 3 shows the performance of the pedestrian detection system with the motion-based extensions, compared to the base system.
  • the base system correctly detects 360 (43.5%) of the pedestrians in the test sequence, with a false detection rate of 1 per 236,500 windows.
  • the system enhanced with the motion module detects 445 (53.8%) of the pedestrians, a 23.7% increase in detection accuracy, while maintaining a false detection rate of 1 per 90,000 windows. It is important to reiterate that the detection accuracy for non-moving objects is not compromised; in the areas of the image where there is no motion, the classifier simply runs as before.
  • the majority of the false positives in the motion enhanced system are partial body detections, i.e., detections with the head cut off, which were still counted as false detections. Taking this factor into account, the false detection rate is even lower.
  • the relaxation paradigm has difficulties when there are a large number of moving bodies in the frame or when the pedestrian motion is very small when compared to the camera motion. Based on these results, it is believed that integration of a trained classifier with the module that provides motion cues could be extended to other systems as well.
  • aspects of this invention pertain to specific “method functions” implementable on computer systems.
  • programs defining these functions can be delivered to a computer in many forms; including, but not limited to: (a) information permanently stored on non-writable storage media (e.g., read only memory devices within a computer or CD-ROM disks readable by a computer I/O attachment); (b) information alterably stored on writable storage media (e.g., floppy disks and hard drives); or (c) information conveyed to a computer through communication media such as telephone networks. It should be understood, therefore, that such media, when carrying such information, represent alternate embodiments of the present invention.

Abstract

A trainable object detection system and technique for detecting objects such as people in static or video images of cluttered scenes is described. The described system and technique can be used to detect highly non-rigid objects with a high degree of variability in size, shape, color, and texture. The system learns from examples and does not rely on any a priori (hand-crafted) models or on motion. The technique utilizes a wavelet template that defines the shape of an object in terms of a subset of the wavelet coefficients of the image. It is invariant to changes in color and texture and can be used to robustly define a rich and complex class of objects such as people. The invariant properties and computational efficiency of the wavelet template make it an effective tool for object detection.

Description

This application claims benefit of provisional appln. No. 60/080,358 filed Apr. 1, 1998.
FIELD OF THE INVENTION
This invention relates generally to image processing systems and more particularly to systems for detecting objects in images.
BACKGROUND OF THE INVENTION
As is known in the art, an analog or continuous parameter image such as a still photograph or a frame in a video sequence may be represented as a matrix of digital values and stored in a storage device of a computer or other digital processing device. When an image is represented in this way, it is generally referred to as a digital image. It is desirable to digitize an image such that the image may be digitally processed by a processing device.
Images which illustrate items or scenes recognizable by a human typically contain at least one object such as a person's face, an entire person, a car, etc. Some images, referred to as “cluttered” images, contain more than one object of the same type and/or more than one type of object. In a single image or picture of a city street, for example, a number of objects such as people walking on a sidewalk, street signs, light posts, buildings and cars may all be visible within the image. Thus, an image may contain more than one type or class of object (e.g. pedestrians as one class and cars as a different class) as well as multiple instances of objects of the same type (e.g. multiple pedestrians walking on a sidewalk).
As is also known, object detection refers to the process of detecting a particular object or a particular type of object contained within an image. In the object detection process, an object class description is important since the object detection process requires a system to differentiate between a particular object class and all other possible types of objects in the rest of the world. This is in contrast to pattern classification, in which it is only necessary to decide between a relatively small number of classes.
Furthermore, in defining or modeling complicated classes of objects (e.g., faces, pedestrians, etc . . . ) the intra-class variability itself is significant and difficult to model. Since it is not known how many instances of the class are presented in any particular image or scene, if any, the detection problem cannot easily be solved using methods such as maximum-a-posteriori probability (MAP) or maximum likelihood (ML) methods. Consequently, the classification of each pattern in the image must be performed independently. This makes the decision process susceptible to missed instances of the class and to false positives. Thus, in an object detection process, it is desirable for the class description to have large discriminative power thereby enabling the processing system to recognize particular object types in a variety of different images including cluttered and uncluttered images.
One problem, therefore, with the object detection process arises due to difficulties in specifying appropriate characteristics to include in an object class. Characteristics used to specify an object class are referred to as a class description.
To help overcome the difficulties and limitations of object detection due to class descriptions, one approach to detect objects utilizes motion and explicit segmentation of the image. Such approaches have been used, for example, to detect people within an image. One problem with this approach, however, is that it is possible that an object which is of the type intended to be detected is not moving. Thus, in this case, the utilization of motion would not aid in the detection of an object.
Another approach to detecting objects in an image is to utilize trainable object detection. Such an approach has been utilized to detect faces in cluttered scenes. The face detection system utilizes models of face and non-face patterns in a high dimensional space and derives a statistical model for a particular class such as the class of frontal human faces. Frontal human faces, despite their variability, share similar patterns (shape and the spatial layout of facial features) and their color space is relatively constrained.
Such an approach, without a flexible scheme to characterize the object class, will not be well suited to provide optimum performance unless the objects such as faces have similar patterns (shape and the spatial layout of facial features) and relatively constrained color spaces. Thus, such an approach is not well-suited to detection of those types of objects, such as pedestrians, which typically have dissimilar patterns and relatively unconstrained color spaces.
The detection of objects, such as pedestrians for example, having significant variability in the patterns and colors within the boundaries of the object can be further complicated by the absence of constraints on the image background. Given these problems, direct analysis of pixel characteristics (e.g., intensity, color and texture) is not adequate to reliably and repeatedly detect objects.
One technique, sometimes referred to as the ratio template technique, detects faces in cluttered scenes by utilizing a relatively small set of relationships between face regions. The set of relationships are collectively referred to as a ratio template and provide a constraint for face detection. The ratio template encodes the ordinal structure of the brightness distribution on an object such as a face. The ratio template consists of a set of inequality relationships between the average intensities of a few different object-regions. For example, as applied to faces, the ratio template consists of a set of inequality relationships between the average intensities of a few different face-regions.
This technique utilizes the concept that while the absolute intensity values of different regions may change dramatically under varying illumination conditions, their mutual ordinal relationships (binarized ratios) remain largely unaffected. Thus, for instance, the forehead is typically brighter than the eye-socket regions for all but the most contrived lighting setups.
The ratio template technique overcomes some but not all of the problems associated with detecting objects having significant variability in the patterns and colors within the boundaries of the object and with detection of such objects in the absence of constraints on the image background.
Nevertheless, it would be desirable to provide a technique to reliably and repeatedly detect objects, such as pedestrians, which have significant variability in patterns and colors within the boundaries of the object and which can detect objects even in the absence of constraints on the image background. It would also be desirable to provide a formalization of a template structure in terms of simple primitives, a rigorous learning scheme capable of working with real images, and also to provide a technique to apply the ratio template concept to relatively complex object classes such as pedestrians. It would further be desirable to provide a technique and architecture for object detection which is trainable and which may also be used to detect people in static or video images of cluttered scenes. It would further be desirable to provide a system which can detect highly non-rigid objects with a high degree of variability in size, shape, color, and texture and which does not rely on any a priori (hand-crafted) models or on changes in position of objects between frames in a video sequence.
SUMMARY OF THE INVENTION
In accordance with the present invention, an object detection system includes an image preprocessor for moving a window across the image and a classifier coupled to the preprocessor for classifying the portion of the image within the window. The classifier includes a wavelet template generator which generates a wavelet template that defines the shape of an object with a subset of the wavelet coefficients of the image. The wavelet template generator generates a wavelet template which includes a set of regular regions of different scales that correspond to the support of a subset of significant wavelet functions. The relationships between different regions are expressed as constraints on the values of the wavelet coefficients. With this particular arrangement, a system which is trainable and which detects objects in static or video images of cluttered scenes is provided. The wavelet template defines an object as a set of regions and relationships among the regions. Use of a wavelet basis to represent the template yields both a computationally efficient technique and an effective learning scheme. By using a wavelet template that defines the shape of an object in terms of a subset of the wavelet coefficients of the image, the system can detect highly non-rigid objects such as people and other objects with a high degree of variability in size, shape, color, and texture. The wavelet template is invariant to changes in color and texture and can be used to robustly define a rich and complex class of objects such as people. The system utilizes a model that is automatically learned from examples and thus can avoid the use of motion and explicit image segmentation to detect objects in an image. The system further includes a training system, coupled to the classifier, which includes a database of both positive and negative examples and a quadratic programming solver. The system utilizes a general paradigm for object detection. The system is trainable and utilizes example-based models. Furthermore, the system is reconfigurable and extendible to a wide variety of object classes.
In accordance with a further aspect of the present invention, a wavelet template includes a set of regular regions of different scales that correspond to the support of a subset of significant wavelet functions of an image. The relationships between different regions are expressed as constraints on the values of the wavelet coefficients. The wavelet template can compactly express the structural commonality of a class of objects and is computationally efficient. It is learnable from a set of examples and provides an effective tool for the challenging problem of detecting pedestrians in cluttered scenes. With this particular technique, a learnable wavelet template provides a framework that is extensible to the detection of complex object classes including but not limited to the pedestrian object class. The wavelet template is an extension of the ratio template and addresses, in the context of pedestrian detection, some of the issues not addressed by the ratio template. By using a wavelet basis to represent the template, a computationally efficient technique for detecting objects as well as an effective learning scheme is provided.
The success of the wavelet template for pedestrian detection comes from its ability to capture high-level knowledge about the object class (structural information expressed as a set of constraints on the wavelet coefficients) and incorporate it into the low-level process of interpreting image intensities. Attempts to directly apply low-level techniques such as edge detection and region segmentation are likely to fail in images which include highly non-rigid objects having a high degree of variability in size, shape, color, and texture, since these methods are not robust, are sensitive to spurious details, and give ambiguous results. Using the wavelet template, only significant information that characterizes the object class, as obtained in a learning phase, is evaluated and used.
The approach of the present invention as applied to a pedestrian template is learned from examples and then used for classification, ideally in a template matching scheme. It is important to realize that this is not the only interpretation of the technique. An alternative, and perhaps more general, utilization of the technique includes the step of learning the template as a dimensionality reduction stage. Using all the wavelet functions that describe a window of 128×64 pixels would yield vectors of very high dimensionality. The training of a classifier with such a high dimensionality would in turn require an example set which may be too large to utilize in practical systems using present day technology.
The template learning stage serves to select the basis functions relevant for this task and to reduce their number considerably. In one particular embodiment, twenty-nine basis functions are used. A classifier, such as a support vector machine (SVM), can then be trained on a small example set. From this point of view, learning the pedestrian detection task consists of two learning steps: (1) dimensionality reduction, that is, task-dependent basis selection and (2) training the classifier. In this interpretation, a template in the strict sense of the word is neither learned nor used. It should be appreciated of course that in other applications and embodiments, it may be desirable not to reduce the number of basis functions but instead to use all available basis functions. In this case, all of the basis functions are provided to the classifier.
In accordance with a still further aspect of the present invention, an object detection system includes an optical flow processor which receives frames from a video sequence and computes the optical flow between images in the frames and a discontinuity detector coupled to the optical flow processor. The discontinuity detector detects discontinuities in the flow field that indicate probable motion of objects relative to the background in the frame. A detection system is coupled to the discontinuity detector and receives information indicating which regions of an image or frame are likely to include objects having motion. With this particular arrangement an object detection system which utilizes motion information to detect objects is provided. The frames may be consecutive frames in a video sequence. The discontinuity detector detects discontinuities in the flow field that indicate probable motion of objects relative to the background and the detected regions of discontinuity are grown using morphological operators, to define the full regions of interest. In these regions of motion, the likely class of objects is limited, thus strictness of the classifier can be relaxed.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing features of the invention, as well as the invention itself may be more fully understood from the following detailed description of the drawings, in which:
FIG. 1 is a block diagram of an image processing system utilizing a wavelet template generator;
FIG. 2 is a block diagram of a wavelet template image processing system for detecting pedestrians in an image utilizing a bootstrapping technique;
FIG. 2A is a plot of Pedestrian Detection Rate vs. False Detection Rate;
FIG. 2B is a subset of training images for training a wavelet template image processing system for detecting pedestrians;
FIG. 3 is an image diagrammatically illustrating a dictionary of basis functions which encode differences in the intensities among different regions of an image;
FIG. 3A is a diagrammatical representation of a basis function;
FIGS. 3B-3H are a series of diagrams illustrating ensemble average values of wavelet coefficients for pedestrians coded using gray level coding;
FIG. 3I is a diagram of a pedestrian image having coefficients disposed thereon;
FIG. 4 is a block diagram of an image processing system architecture;
FIG. 5 is a diagrammatical representation of a face image having a predetermined number of basis coefficients disposed thereover;
FIGS. 5A-5F are a series of diagrams illustrating ensemble average values of wavelet coefficients for face images coded using gray level coding;
FIG. 5G is a set of training images for training a wavelet template image processing system for detecting faces; and
FIGS. 6-6C are a series of images showing the sequence of steps to utilize motion in the detection of objects.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
Terminology
Before describing an object detection system and the operations performed to generate a wavelet template, some introductory concepts and terminology are explained.
An analog or continuous parameter image such as a still photograph may be represented as a matrix of digital values and stored in a storage device of a computer or other digital processing device. Thus, as described herein, the matrix of digital data values is generally referred to as a “digital image” or more simply an “image” and may be stored in a digital data storage device, such as a memory for example, as an array of numbers representing the spatial distribution of energy at different wavelengths in a scene.
Similarly, an image sequence such as a view of a moving roller-coaster for example, may be converted to a digital video signal as is generally known. The digital video signal is provided from a sequence of discrete digital images or frames. Each frame may be represented as a matrix of digital data values which may be stored in a storage device of a computer or other digital processing device. Thus in the case of video signals, as described herein, a matrix of digital data values is generally referred to as an “image frame” or more simply an “image” or a “frame.” Each of the images in the digital video signal may be stored in a digital data storage device, such as a memory for example, as an array of numbers representing the spatial distribution of energy at different wavelengths in a scene in a manner similar to the manner in which an image of a still photograph is stored.
Whether provided from a still photograph or a video sequence, each of the numbers in the array corresponds to a digital word (e.g. an eight-bit binary value) typically referred to as a “picture element” or a “pixel” or as “image data.” The image may be divided into a two dimensional array of pixels with each of the pixels represented by a digital word.
Reference is also sometimes made herein to an image as a two-dimensional pixel array. An example of an array size is an array having 512 rows and 512 columns (denoted 512×512). Specific reference is sometimes made herein to operation on arrays having a particular size (e.g. 128×64, 32×32, 16×16, etc.). One of ordinary skill in the art will of course recognize that the techniques described herein are applicable to various sizes and shapes of pixel arrays including irregularly shaped pixel arrays.
A scene is an image or a single representative frame of video in which the contents and the associated relationships within the image can be assigned a semantic meaning. A still image may be represented, for example, as a pixel array having 512 rows and 512 columns. An object is an identifiable entity in a scene in a still image or a moving or non-moving entity in a video image. For example, a scene may correspond to an entire image while a boat might correspond to an object in the scene. Thus, a scene typically includes many objects and image regions while an object corresponds to a single entity within a scene.
An image region or more simply a region is a portion of an image. For example, if an image is provided as a 32×32 pixel array, a region may correspond to a 4×4 portion of the 32×32 pixel array.
Before describing the processing to be performed by and on networks, it should be appreciated that, in an effort to promote clarity, reference is sometimes made herein to operation on images which include pedestrians and faces. It should be noted, however, that the techniques described herein are not limited to use of detection or classification of pedestrians or faces in images. Rather, the techniques described herein can also be used to detect and/or classify a wide variety of objects within images including but not limited to pedestrians, faces, automobiles, animals, and other objects. Accordingly, those of ordinary skill in the art will appreciate that the description and processing taking place on objects which are pedestrians, or images which include objects which are pedestrians, could equally be taking place on objects which are not pedestrians.
Referring now to FIG. 1, an object detection system 10 includes a resize and preprocessing processor 12 which receives an input signal corresponding to at least a portion of a digital image 14 and provides an output signal (a feature vector) to an input port of a classifier 16 which may be provided, for example, as a support vector machine (SVM) 16.
The classifier 16 provides the class information to a detection system 18 which detects objects in particular images and provides output signals to an output/display system 20.
The system 10 can also include a training system 22 which in turn includes an image database 24 and a quadratic programming (QP) solver 26. During a training operation, the training system 22 provides one or more training samples to the classifier 16.
During the training, basis functions having a significant correlation with object characteristics are identified. Various classification techniques well known to those of ordinary skill in the art can be used to learn the relationships between the wavelet coefficients that define a particular class such as a pedestrian class, for example.
The detection system 10 detects objects in arbitrary positions in the image and in different scales. To accomplish this task, the system is trained to detect an object centered in a window of a predetermined size. The window may, for example, be provided as a 128×64 pixel window. Once the training stage is completed, the system is able to detect objects at arbitrary positions by shifting the 128×64 window throughout the image, thereby scanning all possible locations in the image. The scanning step is combined with the step of iteratively resizing the image to achieve multi-scale detection. In one particular embodiment, the image is scaled from 0.2 to 1.5 times its original size, at increments of 0.1.
At any given scale, rather than recomputing the wavelet coefficients for every window in the image, a transform computation for the whole image is performed and shifting is performed in the coefficient space. A shift of one coefficient in the finer scale corresponds to a shift of four pixels in the window and a shift in the coarse scale corresponds to a shift of eight pixels. Since most of the coefficients in the wavelet template are at the finer scale (the coarse scale coefficients undergo a relatively small change with a shift of four pixels), an effective spatial resolution of four pixels is achieved by working in the wavelet coefficient space.
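The scan just described may be sketched as follows. This is an illustrative pixel-domain approximation (the patented system shifts in wavelet-coefficient space rather than resampling each window); classify_window stands in for the trained classifier, and the four-pixel stride mirrors the effective spatial resolution noted above.

import numpy as np
from PIL import Image

def detect(image, classify_window, win_h=128, win_w=64, stride=4):
    detections = []
    # scale the image from 0.2 to 1.5 times its original size in 0.1 steps
    for scale in np.arange(0.2, 1.5 + 1e-9, 0.1):
        w, h = int(image.width * scale), int(image.height * scale)
        if w < win_w or h < win_h:
            continue
        resized = np.asarray(image.resize((w, h)).convert("L"), dtype=float)
        # shift the detection window over all positions at this scale
        for top in range(0, h - win_h + 1, stride):
            for left in range(0, w - win_w + 1, stride):
                window = resized[top:top + win_h, left:left + win_w]
                if classify_window(window):
                    # map the hit back to original-image coordinates
                    detections.append((top / scale, left / scale, scale))
    return detections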
To train the system, images of same class objects from different views are stored in the database and used. For example, to detect pedestrians, frontal and rear images of people from outdoor and indoor scenes can be used. Also, positive example images (i.e. images which include people) and negative example images (i.e. images which do not include people) can be stored in the database and used for training. The initial non-people images in the training database (i.e. the negative image examples) are patterns from natural scenes not containing people. The combined set of positive and negative examples forms the initial training database for the classifier. A key issue with the training of detection systems is that, while the examples of the target class, in this case pedestrians, are well defined, there are no typical examples of non-pedestrians. The main idea in overcoming the problem of defining this extremely large negative class is the use of “bootstrapping” training. After the initial training, the system processes arbitrary images that do not contain any people. Any detections are clearly identified as false positives and are added to the database of negative examples and the classifier is then retrained with this larger set of data. These iterations of the bootstrapping procedure allow the classifier to construct an incremental refinement of the non-pedestrian class until satisfactory performance is achieved. This bootstrapping technique is illustrated in conjunction with FIG. 2 below.
Thus given an object class, one problem is how to learn which are the relevant coefficients that express structure common to the entire object class and which are the relationships that define the class. To solve this problem the learning can be divided into a two stage process. The first stage includes identifying a relatively small subset of basis functions that capture the structure of a class. The second stage of the process includes using a classifier to derive a precise class model from the subset of basis functions.
Once the learning stages are complete, the system can detect objects in arbitrary positions and in different scales in an image. The system detects objects at arbitrary positions in an image by scanning all possible locations in the image. This is accomplished by shifting a detection window (see e.g. FIGS. 1 and 4). This is combined with iteratively re-sizing the image to achieve multi-scale detection.
In one example of detecting face images, faces were detected from a minimum size of 19×19 pixels to 5 times this size by scaling the novel image from 0.2 to 2 times its original size. This can be done in increments such as increments of 0.1. At any given scale, instead of recomputing the wavelet coefficients for every window in the image, the transform can be computed for the whole image and the shifting can be done in the coefficient space.
Once the important basis functions are identified, various classification techniques can be used to learn the relationships between the wavelet coefficients that define a particular class such as the pedestrian class. The system detects people in arbitrary positions in the image and in different scales. To accomplish this task, the system is trained to detect a pedestrian centered in a 128×64 pixel window. Once a training stage is completed, the system is able to detect pedestrians at arbitrary positions by shifting the 128×64 window, thereby scanning all possible locations in the image. This is combined with iteratively resizing the image to achieve multi-scale detection; in accordance with the present invention, scaling of the images ranges from 0.2 to 1.5 times the original image size. Such scaling may be done at increments of 0.1.
At any given scale, instead of recomputing the wavelet coefficients for every window in the image, the transform is computed for the whole image and the shifting is done in the coefficient space. A shift of one coefficient in the finer scale corresponds to a shift of 4 pixels in the window and a shift in the coarse scale corresponds to a shift of 8 pixels. Since most of the coefficients in the wavelet template are at the finer scale (the coarse scale coefficients hardly change with a shift of 4 pixels), an effective spatial resolution of 4 pixels is achieved by working in the wavelet coefficient space.
Referring now to FIG. 2, to train a detection system 30, a database of images 32 is used. The image database includes both positive and negative image examples. A positive image example is an image which includes the object of interest. A negative image example, on the other hand, refers to an image which does not include the object of interest. Thus, in the case where the detection system is a pedestrian detection system, a positive image would include a pedestrian while a negative image would not include a pedestrian.
Considering detection system 30 as a pedestrian detection system, the database of images 32 would thus include frontal and rear images of people from outdoor and indoor scenes. The database 32 would also include an initial set of non-people images. Such non-people images could correspond to patterns from nature or other scenes not containing people. The combined set of positive and negative examples forms the initial training database 32, which is provided to a classifier 34.
Classifier 34 provides information (i.e. a likelihood estimate of the object belonging to the class) to the detection system 30 which allows the detection system 30 to receive a new image 38 and to detect objects of interest in the image. In this particular example, the objects of interest are pedestrians 40 a-40 c as shown. As also can be seen, two of the regions detected by system 30 do not correspond to pedestrians. These detections correspond to so-called false positive images 42 a, 42 b which are provided to classifier 34 and identified as false positive images. Classifier 34 receives the false positive images 42 a, 42 b and uses the images as additional learning examples.
One important issue with the training of detection systems is that, while the examples of the target class, in this case pedestrians, are well defined, there are no typical examples of non-pedestrians. The main idea in overcoming the problem of defining this extremely large negative class is the use of “bootstrapping” training.
In bootstrap training, after the initial training, the system is presented with arbitrary images that do not contain any people. Any detections are clearly identified as false positives and are added to the database of negative examples, and the classifier 34 is then retrained with this larger set of data. These iterations of the bootstrapping procedure allow the classifier 34 to construct an incremental refinement of the non-pedestrian class until satisfactory performance is achieved.
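The bootstrapping cycle may be summarized as follows; train and detect_false_positives are assumed callables (the patent defines no such routines), and the sketch simply records the incremental growth of the negative set.

def bootstrap(train, detect_false_positives, positives, negatives,
              people_free_images, n_rounds=3):
    # initial training on the starting database
    classifier = train(positives, negatives)
    for _ in range(n_rounds):
        # every detection in a people-free image is a false positive
        false_positives = [w for img in people_free_images
                           for w in detect_false_positives(classifier, img)]
        if not false_positives:
            break
        # enlarge the negative set and retrain
        negatives = negatives + false_positives
        classifier = train(positives, negatives)
    return classifier, negatives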
Described below in conjunction with FIG. 2A is the template learning stage which includes the identification of the significant coefficients that characterize the object class (e.g. the pedestrian class). These coefficients are used as the feature vector for various classification methods.
The simplest classification scheme is to use a basic template matching measure. Normalized template coefficients are divided into two categories: coefficients above 1 (indicating strong change) and coefficients below 1 (indicating weak change). For every novel window, the wavelet coefficients are compared to the pedestrian template. The matching value is the ratio of coefficients in agreement. While this basic template matching scheme is relatively simple, it performs relatively well in detecting complex objects such as pedestrians.
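A minimal sketch of this matching measure, assuming the template and window coefficients have already been normalized as described elsewhere herein:

import numpy as np

def template_match(window_coeffs, template_coeffs, threshold=0.7):
    # binarize both sets of coefficients at 1: above 1 means "strong
    # change", below 1 means "weak change"
    strong_template = np.asarray(template_coeffs) > 1.0
    strong_window = np.asarray(window_coeffs) > 1.0
    # matching value: ratio of coefficients whose categories agree
    agreement = np.mean(strong_template == strong_window)
    return agreement >= threshold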
Instead of using a classifier 34 which utilizes a relatively simple template matching paradigm, a relatively sophisticated classifier 34 which will learn the relationship between the coefficients from given sets of positive and negative examples is preferably used. The classifier can learn more refined relationships than the simple template matching schemes and therefore can provide more accurate detection.
In one embodiment, the classification technique used is the support vector machine (SVM). This technique has several features that make it particularly attractive. Traditional training techniques for classifiers, such as multi-layer perceptrons (MLPs), use empirical risk minimization and only guarantee minimum error over the training set. In contrast, the SVM machinery uses structural risk minimization which minimizes a bound on the generalization error and therefore should perform better on novel data. Another interesting aspect of the SVM is that its decision surface depends only on the inner product of the feature vectors. This leads to an important extension since the Euclidean inner product can be replaced by any symmetric positive-definite kernel K(x,y). This use of a kernel is equivalent to mapping the feature vectors to a high-dimensional space, thereby significantly increasing the discriminative power of the classifier. In the pedestrian classification problem, for example, it has been found that using a polynomial of degree two as the kernel provides good results.
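By way of a present-day sketch, such a classifier can be rendered with scikit-learn standing in for the quadratic programming machinery (the library is not part of the patent, and the random feature vectors below merely take the place of real 29-coefficient examples):

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 29))    # stand-in 29-coefficient feature vectors
y_train = rng.integers(0, 2, size=200)  # 1 = pedestrian, 0 = non-pedestrian

# polynomial kernel of degree two, as found to work well for pedestrians
clf = SVC(kernel="poly", degree=2)
clf.fit(X_train, y_train)

# thresholding this output trades detection rate against false positives
scores = clf.decision_function(X_train[:5])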
It should be noted that, from the viewpoint of the classification task, one could use the entire set of coefficients as a feature vector. It should also be noted, however, that using all the wavelet functions that describe a window of 128×64 pixels, numbering in the thousands, would yield vectors of very high dimensionality. The training of a classifier with such a high dimensionality would in turn require a relatively large example set. The template learning stage, to be described below, serves to select the basis functions relevant for this task and to reduce their number considerably. In one embodiment directed toward pedestrian detection, twenty-nine basis functions are used.
To evaluate the system performance, the system was operated using a database of 564 positive image examples and 597 negative image examples. The system then undergoes the bootstrapping cycle described above. In one embodiment in which pedestrians were detected, the support vector system undergoes three bootstrapping steps, ending up with a total of 4597 negative examples. For the template matching version a threshold of 0.7 (70% matching) was empirically found to yield good results.
Out-of-sample performance was evaluated over a test set consisting of 72 images for both the template matching scheme and the support vector classifier. The test images contain a total of 165 pedestrians in frontal or near-frontal poses; 24 of these pedestrians are only partially observable (e.g. with body regions that are indistinguishable from the background). Since the system was not trained with partially observable pedestrians, it is expected that the system would not be able to detect these instances. To give a fair account of the system, statistics are presented for both the total set and the set of 141 “high quality” pedestrian images. Results of the tests are presented in Table 1 for representative systems using template matching and support vectors.
TABLE 1

                     Detection Rate    Detection Rate for     False Positive Rate
                                       High Quality Class     (per window)
Template Matching    52.7%             61.7%                  1:5,000
SVM                  69.7%             81.6%                  1:15,000
As can be seen in Table 1, the template matching system has a pedestrian detection rate of 52.7%, with a false positive rate of 1 for every 5,000 windows examined. The success of such a straightforward template matching measure, which is much less powerful than the SVM classifier, suggests that the template learning scheme extracts non-trivial structural regularity within the pedestrian class.
For the more sophisticated system with the support vector classifier, a more thorough analysis can be performed. In general, the performance of any detection system exhibits a tradeoff between the rate of detection and the rate of false positives. Performance drops as more stringent restrictions are imposed on the rate of false positives. To capture this tradeoff, the sensitivity of the system is varied by thresholding the output and evaluating a Receiver Operating Characteristic (ROC) curve.
Referring briefly to FIG. 2A, a plot of Pedestrian Detection Rate vs. False Detection Rate (an ROC curve) is shown. Curve 46 (solid line) is over the entire test set while curve 44 is over a “high quality” test set. Examination of the curves 44, 46 illustrates, for example, that if one false positive for every 15,000 windows examined can be tolerated, the system can achieve a detection rate of 69.6%, and as high as 81.6% on a “high quality” image set. It should be noted that the support vector classifier with the bootstrapping performs better than a “naive” template matching scheme.
Although training using only frontal and rear views of pedestrians is discussed above, it should be appreciated that the classifier can also be trained to handle side views in a manner substantially the same as that herein described for training of front and rear views.
Referring now to FIG. 2B, a plurality of typical images of people 48 which may be stored in the database 32 (FIG. 2) are shown. Examination of these images illustrates the difficulties of pedestrian detection as evidenced by the significant variability in the patterns and colors within the boundaries of the body.
As can be observed in FIG. 2B, there are no consistent patterns in the color and texture of pedestrians or their backgrounds in arbitrary cluttered scenes in unconstrained environments. This lack of clearly discernible interior features is circumvented by relying on (1) differences in the intensity between pedestrian bodies and their backgrounds and (2) consistencies within regions inside the body boundaries. The wavelet coefficients can be interpreted as indicating an almost uniform area, i.e. “no-change”, if their absolute value is relatively small, or as indicating “strong change” if their absolute value is relatively large. The wavelet template sought to be identified consists solely of wavelet coefficients (either vertical, horizontal or corner) whose types (“change”/“no-change”) are both clearly identified and consistent along the ensemble of pedestrian images; these comprise the “important” coefficients.
The basic analysis to identify the template consists of two steps: first, the wavelet coefficients are normalized relative to the rest of the coefficients in the patterns; second, the averages of the normalized coefficients are analyzed along the ensemble. A relatively large number of images can be used in the template learning process. In one embodiment, a set of 564 color images of people similar to those shown in FIG. 2B is used in the template learning.
Each of the images is scaled and clipped to the dimensions 128×64 such that the people are centered and approximately the same size (the distance from the shoulders to feet is about 80 pixels). In this analysis, restriction is made to the use of wavelets at scales of 32×32 pixels (one array of 15×5 coefficients for each wavelet class) and 16×16 pixels (29×13 for each class). For each color channel (RGB) of every image, a quadruple dense Haar transform is computed and the coefficient value corresponding to the largest absolute value among the three channels is selected. The normalization step computes the average of each coefficient's class ({vertical, horizontal, corner}×{16,32}) over all the pedestrian patterns and divides every coefficient by its corresponding class average. Separate computations of the averages for each class are made since the power distribution between the different classes may vary.
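The two normalization steps may be sketched as follows; the array layout (images × coefficient classes × coefficients per class), with the per-channel maximum already taken, is an assumption made for illustration:

import numpy as np

def ensemble_averages(coeffs):
    # coeffs: shape (n_images, n_classes, n_coeffs_per_class), one class
    # per {vertical, horizontal, corner} x {16x16, 32x32} combination
    mags = np.abs(coeffs)
    # step 1: average each class over all coefficients and patterns,
    # then divide every coefficient by its class average
    class_avg = mags.mean(axis=(0, 2), keepdims=True)
    normalized = mags / class_avg   # random patterns now average near 1
    # step 2: average the normalized coefficients along the ensemble
    return normalized.mean(axis=0)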
To begin specifying the template, a calculation is made of the average of each normalized coefficient over the set of pedestrians. A base set of 597 color images of natural scenes of size 128×64 that do not contain people was gathered to compare with the pedestrian patterns and is processed as above. Tables 2A and 2B show the average coefficient values for the set of vertical Haar coefficients of scale 32×32 for both the non-pedestrian (Table 2A) and pedestrian (Table 2B) classes.
Table 2A shows that the process of averaging the coefficients within the pattern and then in the ensemble does not create spurious patterns; the average values of these non-pedestrian coefficients are near 1 since these are random images that do not share any common pattern. The pedestrian averages, on the other hand, show a clear pattern, with strong response (values over 1.5) in the coefficients corresponding to the sides of the body and weak response (values less than 0.5) in the coefficients along the center of the body.
TABLE 2A
1.18 1.14 1.16 1.09 1.11
1.13 1.06 1.11 1.06 1.07
1.07 1.01 1.05 1.03 1.05
1.07 0.97 1.00 1.00 1.05
1.06 0.99 0.98 0.98 1.04
1.03 0.98 0.95 0.94 1.01
0.98 0.97 0.96 0.91 0.98
0.98 0.96 0.98 0.94 0.99
1.01 0.94 0.98 0.96 1.01
1.01 0.95 0.95 0.96 1.00
0.99 0.95 0.92 0.93 0.98
1.00 0.94 0.91 0.92 0.96
1.00 0.92 0.93 0.92 0.96
Table 2B shows that the pedestrian averages have a clear pattern, with strong response (values over 1.5) in the coefficients corresponding to the sides of the body and weak response (values less than 0.5) in the coefficients along the center of the body.
TABLE 2B
0.62 0.74 0.60 0.75 0.66
0.76 0.92 0.54 0.88 0.81
1.07 1.11 0.52 1.04 1.15
1.38 1.17 0.48 1.08 1.47
1.65 1.27 0.48 1.15 1.71
1.62 1.24 0.48 1.11 1.63
1.44 1.27 0.46 1.20 1.44
1.27 1.38 0.46 1.34 1.27
1.18 1.51 0.46 1.48 1.18
1.09 1.54 0.45 1.52 1.08
0.94 1.38 0.42 1.39 0.93
0.74 1.08 0.36 1.11 0.72
0.52 0.74 0.29 0.77 0.50
A gray level coding scheme can be used to visualize the patterns in the different classes and values of coefficients. The values can be displayed in the proper spatial layout. With this technique, coefficients close to 1 are gray, stronger coefficients are darker, and weaker coefficients are lighter.
Referring now to FIGS. 3 and 3A, it is desirable to locate an image representation which captures the relationship between average intensities of neighboring regions. To accomplish this, a family of basis functions, such as the Haar wavelets, which encode such relationships along different orientations can be used.
The Haar wavelet representation has also been used in prior art techniques for image database retrieval where the largest wavelet coefficients are used as a measure of similarity between two images.
In accordance with the present invention, however, a wavelet representation is used to capture the structural similarities between various instances of the class. In FIG. 3A, three types of 2-dimensional Haar wavelets 52-56 are depicted. These types include basis functions which capture change in intensity along the horizontal direction, the vertical direction and the diagonals (or corners). Since the wavelets that the standard transform generates have irregular support, a non-standard two-dimensional DWT is used where, at a given scale, the transform is applied to each dimension sequentially before proceeding to the next scale. The results are Haar wavelets with square support at all scales. Also depicted in FIG. 3A is a quadruple density 2D Haar basis 58.
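Since each Haar wavelet is a signed combination of box averages, the three response types at a given square support can be computed with simple region sums. A minimal sketch, with caller-supplied position and support size s (the sign conventions here are arbitrary, not prescribed by the patent):

import numpy as np

def box_sum(img, top, left, h, w):
    return img[top:top + h, left:left + w].sum()

def haar_responses(img, top, left, s):
    half = s // 2
    # change along the horizontal direction (a vertical structure)
    vertical = (box_sum(img, top, left + half, s, half)
                - box_sum(img, top, left, s, half))
    # change along the vertical direction (a horizontal structure)
    horizontal = (box_sum(img, top + half, left, half, s)
                  - box_sum(img, top, left, half, s))
    # diagonal (corner) response
    diagonal = (box_sum(img, top, left, half, half)
                - box_sum(img, top, left + half, half, half)
                - box_sum(img, top + half, left, half, half)
                + box_sum(img, top + half, left + half, half, half))
    return vertical, horizontal, diagonal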
The spatial sampling of the standard Haar basis is not dense enough for all applications. For example, the Haar basis is not dense enough for a pedestrian detection application. For a 1-dimensional transform, the distance between two neighboring wavelets at level n (with support of size 2^n) is 2^n. For better spatial resolution, a set of redundant basis functions, or an over-complete dictionary, in which the distance between the wavelets at scale n is ¼·2^n, is required. This is referred to as a quadruple density dictionary.
As can be observed, the straightforward approach of shifting the signal and recomputing the DWT will not generate the desired dense sampling. However, one can observe that in the standard wavelet transform, after the scaling and wavelet coefficients are convolved with the corresponding filters, there is a step of downsampling. If the wavelet coefficients are not downsampled, wavelets with double density are generated, where wavelets of level n are centered every ½·2^n. A quadruple density dictionary is generated by computing the scaling coefficients with double density rather than by downsampling them. The next step is to calculate double density wavelet coefficients on the two sets of scaling coefficients—even and odd—separately. By interleaving the results of the two transforms, quadruple density wavelet coefficients are provided.
For the next scale, only the even scaling coefficients of the previous level are kept and the quadruple transform is repeated on this set only. The odd scaling coefficients are dropped. Since only the even coefficients are carried along at all the scales, this avoids an “explosion” in the number of coefficients, yet provides a dense and uniform sampling of the wavelet coefficients at all the scales. As with the regular DWT, the time complexity is O(n) in the number of pixels n. The extension to the 2-dimensional transform is straightforward and, after reading the present disclosure, is within the level of one of ordinary skill in the art.
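For illustration, a simplified 1-dimensional version of the quadruple-density sampling can be computed directly from a cumulative sum, evaluating each Haar wavelet of support 2^n at shifts of ¼·2^n; this sketch deliberately sidesteps the recursive even/odd interleaving described above and should not be read as the patented algorithm itself.

import numpy as np

def quad_density_haar_1d(signal, n):
    support = 2 ** n
    half = support // 2
    step = max(support // 4, 1)   # quarter-support spacing
    csum = np.concatenate(([0.0], np.cumsum(signal)))
    coeffs = []
    for start in range(0, len(signal) - support + 1, step):
        left = csum[start + half] - csum[start]
        right = csum[start + support] - csum[start + half]
        coeffs.append(right - left)   # Haar difference of the two halves
    return np.asarray(coeffs)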
The Haar wavelets thus provide a natural set of basis functions which encode differences in average intensities between different regions. To achieve the spatial resolution necessary for detection and to increase the expressive power of the model, the quadruple transform is used. As mentioned above, the quadruple transform yields an overcomplete set of basis functions. Thus, whereas for a wavelet of size 2^n the standard Haar transform shifts each wavelet by 2^n, the quadruple density transform shifts the wavelet by ¼·2^n in each direction. The use of this quadruple density transform results in the overcomplete dictionary of basis functions that facilitates the definition of complex constraints on the object patterns. Moreover, there is no loss of computational efficiency with respect to the standard wavelet transform.
The ratio template defines a set of constraints on the appearance of an object by defining a set of regions and a set of relationships on their average intensities. The relationships can require, for example, that the ratio of intensities between two specific regions falls within a certain range. The issues of learning these relationships, using the template for detection, and computing the template efficiently are addressed by establishing the ratio template in the natural framework of Haar wavelets. Each wavelet coefficient describes the relationship between the average intensities of two neighboring regions. If the transform on the image intensities is computed, the Haar coefficients specify the intensity differences between the regions; computing the transform on the log of the image intensities produces coefficients that represent the log of the ratio of the intensities. Furthermore, the wavelet template can describe regions with different shapes by using combinations of neighboring wavelets with overlapping support and wavelets of different scales. The wavelet template is also computationally efficient since the transform is computed once for the whole image and different sets of coefficients are examined for different spatial locations.
Referring now to FIGS. 3B-3H, the ensemble average values of the wavelet coefficients coded using gray level are shown in images 60-72. Coefficients having values above the template average are darker while those below the average are lighter. FIG. 3B shows the vertical coefficients of random images and, as expected, this figure is uniformly gray. The corresponding images for the horizontal and corner coefficients (not shown here) are similar. In contrast, FIGS. 3C-3E show vertical, horizontal and corner coefficients of scale 32×32 images of people. The coefficients of the images with people show clear patterns, with the different classes of wavelet coefficients being tuned to different types of structural information. The vertical wavelets, FIG. 3C, capture the sides of the pedestrians. The horizontal wavelets, FIG. 3D, respond to the line from shoulder to shoulder and to a weaker belt line. The corner wavelets, FIG. 3E, are better tuned to corners, for example, the shoulders, hands and feet. FIGS. 3F-3H show vertical, horizontal and corner coefficients, respectively, of scale 16×16 images of people. The wavelets of finer scale in FIGS. 3F-3H provide better spatial resolution of the body's overall shape, and smaller scale details such as the head and extremities appear clearer. Two similar statistical analyses, using (a) the wavelets of the log of the intensities and (b) the sigmoid function as a “soft threshold” on the normalized coefficients, yield results that are similar to the intensity differencing wavelets. It should be noted that a basic measure such as the ensemble average provides clear identification of the template as can be seen from FIGS. 3B-3H.
Referring now to FIG. 3I, the significant wavelet bases for pedestrian detection that were uncovered during the learning strategy are shown overlaid on an example image of a pedestrian 74. In this particular example, the template derived from the learning uses a set of 29 coefficients that are consistent along the ensemble (FIGS. 3B-3H), either as indicators of “change” or “no-change.” There are 6 vertical and 1 horizontal coefficients at the scale of 32×32 and 14 vertical and 8 horizontal at the scale of 16×16. These coefficients serve as the feature vector for the ensuing classification problem.
In FIGS. 3B-3I, it can be seen that the coefficients of people show clear patterns with the different classes of wavelet coefficients being tuned to different types of structural information. The vertical wavelets capture the sides of the pedestrians. The horizontal wavelets respond to the line from shoulder to shoulder and to a weaker belt line. The corner wavelets are better tuned to corners, for example, the shoulders, hands and feet. The wavelets of finer scale provide better spatial resolution of the body's overall shape and smaller scale details such as the head and extremities appear clearer.
Referring now to FIG. 4, a graphical illustration of learning is shown. A window 82 (e.g. a 128×64 pixel window) moves across an image 80. The first stage of learning results in a basis selection 84 and in the second stage of learning, the selected bases are provided to a classifier 86 which may, for example, be provided as an SVM classifier.
To learn the significant basis functions, a set of images of the object class of interest is used. In one particular example where the object class is faces, a set of grey-scale images of a predetermined face size is used. For example, a set of 2429 grey-scale images of faces of size 19×19, including a core set of faces with some small angular rotations (to improve generalization), may be used. For the wavelet coefficient analysis, wavelets at scales having dimensions selected to correspond to typical features of the object of interest may be used. For example, in the case where the objects of interest are faces and the images have face sizes on the order of 19×19 pixels, wavelets at scales of 4×4 pixels and 2×2 pixels can be used since their dimensions correspond to typical facial features for this size of face image. Thus, in the above example, there exists a total of 173 coefficients.
The basic analysis in identifying the important coefficients includes two steps. Since the power distribution of different types of coefficients may vary, the first step is to compute the class average of each coefficient class ({vertical, horizontal, diagonal} × {2×2, 4×4}, for a total of six classes) and normalize every coefficient by its corresponding class average.
The second step is to average the normalized coefficients over the entire set of examples. The normalization has the property that the average value of coefficients of random patterns will be 1. If the average value of a coefficient is much greater than 1, this indicates that the coefficient is encoding a boundary between two regions that is consistent along the examples of the class. Similarly, if the average value of a coefficient is much smaller than 1, that coefficient encodes a uniform region.
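The selection criterion implied by this analysis can be sketched as follows; the 1.5 and 0.5 cut-offs echo the pedestrian tables above and are illustrative, not values prescribed by the patent:

import numpy as np

def significant_coefficients(ensemble_avg, hi=1.5, lo=0.5):
    # keep coefficients that consistently encode a boundary (average
    # well above 1) or a uniform region (average well below 1)
    avg = np.asarray(ensemble_avg)
    return np.flatnonzero((avg > hi) | (avg < lo))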
Once the important basis functions have been identified, various classification techniques can be used to learn the relationships between the wavelet coefficients that define the object class. The system can be trained using the bootstrapping technique described above in conjunction with FIG. 2.
With reference now to FIGS. 5-5F, the first stage of learning, which results in a basis selection, is illustrated. FIGS. 5A-5F are a series of images 92 a-92 f used to illustrate the steps to determine significant wavelet bases for face detection that are uncovered through the learning strategy of the present invention. In FIG. 5, the so-identified significant basis functions are disposed over an image 90.
FIG. 5G shows several exemplary face images 94 used for training from which the ensemble average values of the wavelet coefficients of FIGS. 5A-5F are generated. The images in FIG. 5G are gray-level images of size 19×19 pixels.
In FIGS. 5-5F, the coefficients' values are coded using grey-scale where each coefficient, or basis function, is drawn as a distinct square in the image. The arrangement of squares corresponds to the spatial location of the basis functions, where strong coefficients (relatively large average values) are coded by darker grey levels and weak coefficients (relatively small average values) are coded by lighter grey levels. It is important to note that in FIGS. 5A-5F, a basis function corresponds to a single square in each image and not the entire image. It should be noted that in this particular example, the different types of wavelets capture various facial features. For example, vertical, horizontal and diagonal wavelets capture eyes, nose and mouth. In other applications (e.g. objects other than faces) the different wavelets should capture various features of the particular object.
FIGS. 5A-5F illustrate ensemble average values of the wavelet coefficients for faces coded using color. Each basis function is displayed as a single square in the images above. Coefficients whose values are close to the average value of 1 are coded gray, those above the average are coded using red, and those below the average are coded using blue. The strong features in the eye areas and the nose can be seen by observing FIGS. 5A-5F. Also, the cheek area is an area of almost uniform intensity, i.e. below average coefficients. FIGS. 5A-5C are vertical, horizontal and diagonal coefficients, respectively, of scale 4×4 of images of faces. FIGS. 5D-5F are vertical, horizontal and diagonal coefficients, respectively, of scale 2×2 of images of faces.
From the statistical analysis, a set of 37 coefficients can be derived, from both the coarse and finer scales, that capture the significant features of the face. These significant bases include 12 vertical, 14 horizontal and 3 diagonal coefficients at the scale of 4×4 and 3 vertical, 2 horizontal and 3 corner coefficients at the scale of 2×2. FIG. 5 shows a typical human face from the training database with the significant 37 coefficients drawn in the proper configuration.
For the task of pedestrian detection, in one example a database of 924 color images of people was used. Several such images are shown in FIG. 2B above. A similar analysis of the average values of the coefficients was done for the pedestrian class, and FIGS. 3B-3H show the grey-scale coding similar to FIGS. 5A-5F for the face class. It should be noted that for the pedestrian class, there are no strong internal patterns as in the face class. Rather, the significant basis functions are along the exterior boundary of the class, indicating a different type of significant visual information. Through the same type of analysis as used for the face class, 29 significant coefficients are chosen for the pedestrian class from the initial, overcomplete set of 1326 wavelet coefficients. These basis functions are shown overlaid on an example pedestrian in FIG. 3I.
It should be noted that from the viewpoint of the classification task, the whole set of coefficients could be used as a feature vector. As mentioned above, however, using all wavelet functions that describe a window of 128×64 pixels in the case of pedestrians, for example, would yield vectors of very high dimensionality. The training of a classifier with such a high dimensionality, on the order of 1000, would in turn require a relatively large example set, which makes such an approach somewhat impractical using current technology (i.e. current commercially available microprocessor speeds, etc.). This dimensionality reduction stage serves to select the basis functions relevant for this task and to reduce considerably the number of basis functions required.
To evaluate the face detection system performance, a database of 2429 positive examples and 100 negative examples was used. Several systems were trained using different penalties for misclassification. The systems undergo the bootstrapping cycle discussed in detail above in conjunction with FIG. 2 to arrive at between 4500 and 9500 negative examples. Out-of-sample performance was evaluated using a set of 131 faces, and the rate of false detection was determined by running the system over approximately 900,000 patterns from images of natural scenes that do not contain either faces or people. With this arrangement, if one false detection is allowed per 7,500 windows examined, the rate of correctly detected faces reaches 75%. It is also seen in such a system that higher penalties for missed positive examples may result in better performance.
Referring now to FIGS. 6-6C, the sequence of steps in a system which utilizes motion information is shown. FIG. 6 illustrates static detection results, FIGS. 6A and 6B illustrate full motion regions, and FIG. 6C illustrates improved detection results using the motion information.
In the case of video sequences, motion information can be utilized to enhance the robustness of the detection. Using the pedestrian detection system as an example, the optical flow between consecutive images is first computed. Next, discontinuities in the flow field that indicate probable motion of objects relative to the background are detected. The detected regions of discontinuity are grown using morphological operators to define the full regions of interest 106 a-106 d. In these regions of motion, the likely class of objects is limited, thus the strictness of the classifier can be relaxed.
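One way to realize this motion module with present-day tools is sketched below; OpenCV's Farneback flow is used as a stand-in (the patent does not prescribe a particular optical flow algorithm), and the magnitude threshold and structuring-element size are assumptions:

import cv2
import numpy as np

def motion_regions(prev_gray, next_gray, mag_thresh=2.0):
    # dense optical flow between two consecutive grayscale frames
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    # discontinuities in the flow field mark probable object motion
    mask = (magnitude > mag_thresh).astype(np.uint8)
    # grow the detected regions with a morphological operator
    mask = cv2.dilate(mask, np.ones((15, 15), np.uint8), iterations=2)
    return mask   # non-zero where the classifier's strictness may be relaxed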
It is important to observe that, unlike most people detection systems, it is not necessary to assume a static camera nor is it necessary to recover camera ego-motion. Rather, the dynamic motion information is used to assist the classifier. Additionally, the use of motion information does not compromise the ability of the system to detect non-moving people.
Examination of FIGS. 6-6C illustrates how the motion cues enhance the performance of the system. For example, without the use of motion cues, in FIG. 6 the pedestrian 102 is not detected. However, using the motion cues from two successive frames in a sequence (FIGS. 6A, 6B), pedestrian 102 is detected in FIG. 6C.
In FIG. 6B the areas of motion are identified using the technique described above and correspond to regions 106 a-106 d. It should be noted that pedestrian 102 falls within region 106 a.
The system is tested over a sequence of 208 frames. The results of the test are summarized in Table 3 below. Table 3 shows the performance of the pedestrian detection system with the motion-based extensions, compared to the base system. Out of a possible 827 pedestrians in the video sequence—including side views for which the system is not trained—the base system correctly detects 360 (43.5%) of them with a false detection rate of 1 per 236,500 windows. The system enhanced with the motion module detects 445 (53.8%) of the pedestrians, a 23.7% increase in detection accuracy, while maintaining a false detection rate of 1 per 90,000 windows. It is important to reiterate that the detection accuracy for non-moving objects is not compromised; in the areas of the image where there is no motion, the classifier simply runs as before. Furthermore, the majority of the false positives in the motion-enhanced system are partial body detections, i.e., a detection with the head cut off, which were still counted as false detections. Taking this factor into account, the false detection rate is even lower.
TABLE 3

                     Detection Rate    False Positive Rate (per window)
Base system          43.5%             1:236,500
Motion Extension     53.8%             1:90,000
The relaxation paradigm has difficulties when there are a large number of moving bodies in the frame or when the pedestrian motion is very small when compared to the camera motion. Based on these results, it is believed that integration of a trained classifier with the module that provides motion cues could be extended to other systems as well.
As indicated heretofore, aspects of this invention pertain to specific “method functions” implementable on computer systems. Those skilled in the art should readily appreciate that programs defining these functions can be delivered to a computer in many forms; including, but not limited to: (a) information permanently stored on non-writable storage media (e.g., read only memory devices within a computer or CD-ROM disks readable by a computer I/O attachment); (b) information alterably stored on writable storage media (e.g., floppy disks and hard drives); or (c) information conveyed to a computer through communication media such as telephone networks. It should be understood, therefore, that such media, when carrying such information, represent alternate embodiments of the present invention.
Having described preferred embodiments of the invention, it will now become apparent to one of ordinary skill in the art that other embodiments incorporating their concepts may be used. It is felt, therefore, that the invention should not be limited to the disclosed embodiments, but rather should be limited only by the spirit and scope of the appended claims.

Claims (14)

What is claimed is:
1. A system for processing an image, the system comprising:
an image database which includes a set of example images at least some of which include objects of interest; and
a classifier, coupled to said image database, to receive an image from said database and to process said image, said classifier including a wavelet template generator, said wavelet template generator comprising:
(1) a wavelet scale selector to select at least one wavelet scale which corresponds to at least one feature of the object of interest;
(2) a wavelet coefficient processor for computing wavelet coefficients at each of the at least one wavelet scales; and
(3) a normalization processor to receive the wavelet coefficients and to normalize the wavelet coefficients such that an average value of the wavelet coefficients of a random pattern is a predetermined value.
2. The system of claim 1 further comprising:
an image preprocessor coupled to receive images from said image database and to provide at least a portion of an image to said classifier, said image pre-processor for moving a window across an image selected from the database; and
a resizing preprocessor for scaling an image from a first size to a second size at a predetermined increment and for providing each of the scaled images to said classifier.
3. The system of claim 1 further comprising a training system coupled to said classifier, said training system comprising:
an image database containing a first plurality of positive example images and a second plurality of negative example images; and
a quadratic programming solver wherein said training system provides negative example images to said classifier and any object detected by said classifier in the negative example images are identified as false positive images and are added to the second plurality of negative example images.
4. The system of claim 3 further comprising:
an image retrieval device coupled to said image database for retrieving images from said image database;
a relationship processor, coupled to said image retrieval device, said relationship processor for identifying relationships between image characteristics of images retrieved from said database;
a wavelet template generator, coupled to said relationship processor, said wavelet template generator for encoding relationships between characteristics which are consistent between images retrieved from said database as a wavelet image template; and
an image detector for applying the wavelet image template to images in said image database to detect images belonging to a particular class of images stored in said image database.
5. The image processing system of claim 4 wherein said image detector detects novel images having relative relationships between selected image regions thereof which are consistent with the relative relationships encoded in the wavelet image template.
6. The image processing system of claim 5 wherein said wavelet template generator comprises:
a wavelet scale selector to select at least one wavelet scale which corresponds to at least one feature of the object of interest;
a wavelet coefficient processor for computing wavelet coefficients at each of the at least one wavelet scales; and
a normalization processor to receive the wavelet coefficients and to normalize the wavelet coefficients such that an average value of the wavelet coefficients of a random pattern is a predetermined value.
7. A method of generating a model for use in an image processing system, the method comprising the steps of:
(a) providing a set of example images at least some of which include objects of interest to a classifier;
(b) selecting at least one wavelet scale which corresponds to at least one feature of the object of interest;
(c) computing wavelet coefficients at each of the at least one wavelet scales; and
(d) normalizing the wavelet coefficients such that an average value of coefficients of a random pattern is a predetermined value.
8. The method of claim 7 wherein the step of normalizing the wavelet coefficients includes the steps of:
(d1) computing a class average value for each wavelet coefficient;
(d2) normalizing each wavelet coefficient by its class average; and
(d3) averaging the normalized coefficients over example images to provide a normalized average value for each coefficient.
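One plausible reading of steps (d1) through (d3), offered only as a sketch: the "class average" is taken to be the scalar average over all coefficients of one orientation and scale across the example set, so that after normalization a coefficient behaving like one from a random pattern averages to 1.

```python
# Illustrative sketch only: coeff_magnitudes holds the coefficient
# magnitudes of one class (one orientation and scale) for every example
# image, with shape (num_examples, num_coefficients).
import numpy as np

def normalized_template(coeff_magnitudes):
    class_average = coeff_magnitudes.mean()        # (d1) class average value
    normalized = coeff_magnitudes / class_average  # (d2) normalize by it
    return normalized.mean(axis=0)                 # (d3) average over examples
```

Under this reading, normalized averages well above 1 mark coefficient locations that consistently straddle a boundary of the object, values well below 1 mark consistently uniform regions, and values near 1 carry no more information than a random pattern.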
9. The method of claim 8 wherein the step of averaging the normalized coefficients over example images includes the step of averaging the normalized coefficients over the entire set of example images.
10. The method of claim 7 further comprising the steps of:
comparing normalized wavelet coefficient values; and
selecting coefficient values which capture one or more significant characteristics of the object of interest.
11. The method of claim 10 wherein the step of selecting coefficient values includes the step of selecting a number of coefficients which is less than the total number of coefficients.
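As a sketch of how the selection recited in claims 10 and 11 might be realized: coefficients whose normalized class average deviates markedly from the random-pattern value of 1 are kept, and the rest are discarded. The thresholds below are assumptions, not values taken from the specification.

```python
# Illustrative sketch only: keep the coefficients whose normalized class
# average deviates markedly from the random-pattern value of 1.
import numpy as np

def select_significant(template, low=0.5, high=1.5):
    """Indices of a subset of coefficients, fewer than the total number."""
    template = np.asarray(template)
    return np.flatnonzero((template <= low) | (template >= high))
```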
12. The method of claim 11 wherein the wavelet coefficients correspond to vertical, horizontal and diagonal wavelet coefficients.
13. The method of claim 10 wherein the step of selecting at least one wavelet scale which corresponds to at least one feature of the object of interest includes the step of selecting a plurality of wavelet scales, each of the wavelet scales being selected to correspond to a corresponding one of a plurality of characteristics of the object.
14. The method of claim 9 wherein the example images correspond to grey-scale example images.
US09/282,742 1998-04-01 1999-03-31 Trainable system to search for objects in images Expired - Fee Related US6421463B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/282,742 US6421463B1 (en) 1998-04-01 1999-03-31 Trainable system to search for objects in images

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US8035898P 1998-04-01 1998-04-01
US09/282,742 US6421463B1 (en) 1998-04-01 1999-03-31 Trainable system to search for objects in images

Publications (1)

Publication Number Publication Date
US6421463B1 true US6421463B1 (en) 2002-07-16

Family

ID=26763408

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/282,742 Expired - Fee Related US6421463B1 (en) 1998-04-01 1999-03-31 Trainable system to search for objects in images

Country Status (1)

Country Link
US (1) US6421463B1 (en)

Cited By (97)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020067857A1 (en) * 2000-12-04 2002-06-06 Hartmann Alexander J. System and method for classification of images and videos
US20020102024A1 (en) * 2000-11-29 2002-08-01 Compaq Information Technologies Group, L.P. Method and system for object detection in digital images
US20020131617A1 (en) * 2000-12-07 2002-09-19 Pelly Jason Charles Apparatus for detecting and recovering data
US20030128876A1 (en) * 2001-12-13 2003-07-10 Kabushiki Kaisha Toshiba Pattern recognition apparatus and method therefor
US6628834B2 (en) * 1999-07-20 2003-09-30 Hewlett-Packard Development Company, L.P. Template matching system for images
US6671391B1 (en) * 2000-05-26 2003-12-30 Microsoft Corp. Pose-adaptive face detection system and process
US20040005083A1 (en) * 2002-03-26 2004-01-08 Kikuo Fujimura Real-time eye detection and tracking under various light conditions
US20040022423A1 (en) * 2002-08-02 2004-02-05 Eastman Kodak Company Method for locating faces in digital color images
US20040047492A1 (en) * 2002-06-20 2004-03-11 Robert Muise Target detection system using trained and untrained detection and methods therefor
US20040062441A1 (en) * 2000-12-06 2004-04-01 Jerome Meniere Method for detecting new objects in an illuminated scene
US20040066966A1 (en) * 2002-10-07 2004-04-08 Henry Schneiderman Object finder for two-dimensional images, and system for determining a set of sub-classifiers composing an object finder
US20040101200A1 (en) * 2002-11-26 2004-05-27 Larry Peele Method and apparatus for image processing to detect changes in a scene
WO2004044830A1 (en) 2002-11-12 2004-05-27 Nokia Corporation Region-of-interest tracking method and device for wavelet-based video coding
US20040120572A1 (en) * 2002-10-31 2004-06-24 Eastman Kodak Company Method for using effective spatio-temporal image recomposition to improve scene classification
US6760724B1 (en) * 2000-07-24 2004-07-06 Lucent Technologies Inc. Approximate query processing using wavelets
US6766058B1 (en) * 1999-08-04 2004-07-20 Electro Scientific Industries Pattern recognition using multiple templates
US20040151371A1 (en) * 2003-01-30 2004-08-05 Eastman Kodak Company Method for face orientation determination in digital color images
US20040179719A1 (en) * 2003-03-12 2004-09-16 Eastman Kodak Company Method and system for face detection in digital images
US20040246336A1 (en) * 2003-06-04 2004-12-09 Model Software Corporation Video surveillance system
US20040246114A1 (en) * 2003-06-05 2004-12-09 Stefan Hahn Image processing system for a vehicle
US20040252864A1 (en) * 2003-06-13 2004-12-16 Sarnoff Corporation Method and apparatus for ground detection and removal in vision systems
US20040258279A1 (en) * 2003-06-13 2004-12-23 Sarnoff Corporation Method and apparatus for pedestrian detection
US20050036676A1 (en) * 2003-06-30 2005-02-17 Bernd Heisele Systems and methods for training component-based object identification systems
US20050036649A1 (en) * 2001-08-23 2005-02-17 Jun Yokono Robot apparatus, face recognition method, and face recognition apparatus
KR100474771B1 (en) * 2002-08-29 2005-03-10 엘지전자 주식회사 Face detection method based on templet
WO2005038701A1 (en) * 2003-09-26 2005-04-28 Siemens Aktiengesellschaft Method for producing and/or updating learning and/or random test samples
US20050100192A1 (en) * 2003-10-09 2005-05-12 Kikuo Fujimura Moving object detection using low illumination depth capable computer vision
US20050123216A1 (en) * 2003-10-20 2005-06-09 The Regents Of The University Of California. 3D wavelet-based filter and method
US20050129274A1 (en) * 2001-05-30 2005-06-16 Farmer Michael E. Motion-based segmentor detecting vehicle occupants using optical flow method to remove effects of illumination
US6944332B1 (en) * 1999-04-20 2005-09-13 Microsoft Corporation Method and system for searching for images based on color and shape of a selected image
US20050248654A1 (en) * 2002-07-22 2005-11-10 Hiroshi Tsujino Image-based object detection apparatus and method
US20060062478A1 (en) * 2004-08-16 2006-03-23 Grandeye, Ltd., Region-sensitive compression of digital video
US20060147108A1 (en) * 2005-01-04 2006-07-06 Samsung Electronics Co., Ltd. Apparatus and method for detecting heads in input image
US20060177097A1 (en) * 2002-06-14 2006-08-10 Kikuo Fujimura Pedestrian detection and tracking with night vision
KR100624481B1 (en) 2004-11-17 2006-09-18 삼성전자주식회사 Method for tracking face based on template
US20060280341A1 (en) * 2003-06-30 2006-12-14 Honda Motor Co., Ltd. System and method for face recognition
US20070183655A1 (en) * 2006-02-09 2007-08-09 Microsoft Corporation Reducing human overhead in text categorization
US20070201747A1 (en) * 2006-02-28 2007-08-30 Sanyo Electric Co., Ltd. Object detection apparatus
US20080044086A1 (en) * 2006-08-15 2008-02-21 Fuji Xerox Co., Ltd. Image processing system, image processing method, computer readable medium and computer data signal
US20080056611A1 (en) * 2006-08-30 2008-03-06 Honeywell International, Inc. Neurophysiologically driven high speed image triage system and method
KR100813167B1 (en) 2006-06-09 2008-03-17 삼성전자주식회사 Method and system for fast and accurate face detection and face detection learning
US20080232696A1 (en) * 2007-03-23 2008-09-25 Seiko Epson Corporation Scene Classification Apparatus and Scene Classification Method
WO2008118977A1 (en) * 2007-03-26 2008-10-02 Desert Research Institute Data analysis process
US20080273795A1 (en) * 2007-05-02 2008-11-06 Microsoft Corporation Flexible matching with combinational similarity
US20080279456A1 (en) * 2007-05-08 2008-11-13 Seiko Epson Corporation Scene Classification Apparatus and Scene Classification Method
WO2008152208A1 (en) * 2007-06-15 2008-12-18 Virtual Air Guitar Company Oy Image sampling in stochastic model-based computer vision
US20090010495A1 (en) * 2004-07-26 2009-01-08 Automotive Systems Laboratory, Inc. Vulnerable Road User Protection System
US20090019044A1 (en) * 2007-07-13 2009-01-15 Kabushiki Kaisha Toshiba Pattern search apparatus and method thereof
US7505621B1 (en) 2003-10-24 2009-03-17 Videomining Corporation Demographic classification using image components
US20090129683A1 (en) * 2006-05-10 2009-05-21 Nikon Corporation Object Recognition Apparatus,Computer Readable Medium Storing Object Recognition Program, and Image Retrieval Service Providing Method
US20090141007A1 (en) * 2007-11-29 2009-06-04 Honeywell International, Inc. Dynamic calibration of physiologically driven image triage systems
US20090214118A1 (en) * 2008-02-25 2009-08-27 Honeywell International, Inc. Target specific image scaling for effective rapid serial visual presentation
US20090252380A1 (en) * 2008-04-07 2009-10-08 Toyota Jidosha Kabushiki Kaisha Moving object trajectory estimating device
US20100008589A1 (en) * 2006-10-11 2010-01-14 Mitsubishi Electric Corporation Image descriptor for image recognition
US7769759B1 (en) * 2003-08-28 2010-08-03 Biz360, Inc. Data classification based on point-of-view dependency
US20100205177A1 (en) * 2009-01-13 2010-08-12 Canon Kabushiki Kaisha Object identification apparatus and method for identifying object
US20100239154A1 (en) * 2008-09-24 2010-09-23 Canon Kabushiki Kaisha Information processing apparatus and method
US20100272350A1 (en) * 2009-04-27 2010-10-28 Morris Lee Methods and apparatus to perform image classification based on pseudorandom features
US20100281361A1 (en) * 2009-04-30 2010-11-04 Xerox Corporation Automated method for alignment of document objects
US20100310153A1 (en) * 2007-10-10 2010-12-09 Mitsubishi Electric Corporation Enhanced image identification
US20110002531A1 (en) * 2009-07-01 2011-01-06 Honda Motor Co., Ltd. Object Recognition with 3D Models
US20110090359A1 (en) * 2009-10-20 2011-04-21 Canon Kabushiki Kaisha Image recognition apparatus, processing method thereof, and computer-readable storage medium
US7987111B1 (en) 2006-10-30 2011-07-26 Videomining Corporation Method and system for characterizing physical retail spaces by determining the demographic composition of people in the physical retail spaces utilizing video image analysis
US20120148160A1 (en) * 2010-07-08 2012-06-14 Honeywell International Inc. Landmark localization for facial imagery
US20120281907A1 (en) * 2011-05-06 2012-11-08 Toyota Motor Engin. & Manufact. N.A.(TEMA) Real-time 3d point cloud obstacle discriminator apparatus and associated methodology for training a classifier via bootstrapping
US8401250B2 (en) 2010-02-19 2013-03-19 MindTree Limited Detecting objects of interest in still images
US20130156343A1 (en) * 2011-12-20 2013-06-20 Harman International Industries, Incorporated System for communicating relationship data associated with image characteristics
US20130202152A1 (en) * 2012-02-06 2013-08-08 GM Global Technology Operations LLC Selecting Visible Regions in Nighttime Images for Performing Clear Path Detection
CN103313015A (en) * 2013-06-19 2013-09-18 海信集团有限公司 Image processing method and image processing device
US20130336579A1 (en) * 2012-06-15 2013-12-19 Vufind, Inc. Methods for Efficient Classifier Training for Accurate Object Recognition in Images and Video
US20140002650A1 (en) * 2012-06-28 2014-01-02 GM Global Technology Operations LLC Wide baseline binocular object matching method using minimal cost flow network
US20140010410A1 (en) * 2011-03-17 2014-01-09 Nec Corporation Image recognition system, image recognition method, and non-transitory computer readable medium storing image recognition program
US20140169628A1 (en) * 2011-07-14 2014-06-19 Bayerische Motoren Werke Aktiengesellschaft Method and Device for Detecting the Gait of a Pedestrian for a Portable Terminal
US8761446B1 (en) * 2009-03-10 2014-06-24 Google Inc. Object detection with false positive filtering
US8799201B2 (en) 2011-07-25 2014-08-05 Toyota Motor Engineering & Manufacturing North America, Inc. Method and system for tracking objects
CN104200209A (en) * 2014-08-29 2014-12-10 南京烽火星空通信发展有限公司 Image text detecting method
CN104361355A (en) * 2014-12-02 2015-02-18 威海北洋电气集团股份有限公司 Infrared-based automatic human-vehicle classification method and channel device
CN104834926A (en) * 2015-04-09 2015-08-12 孙晓航 Method and system for character zone extraction
US20150362989A1 (en) * 2014-06-17 2015-12-17 Amazon Technologies, Inc. Dynamic template selection for object detection and tracking
CN105335716A (en) * 2015-10-29 2016-02-17 北京工业大学 Improved UDN joint-feature extraction-based pedestrian detection method
US20160063673A1 (en) * 2014-09-03 2016-03-03 Samsung Electronics Co., Ltd. Display apparatus and controller and method of controlling the same
US20160086033A1 (en) * 2014-09-19 2016-03-24 Bendix Commercial Vehicle Systems Llc Advanced blending of stitched images for 3d object reproduction
US9536178B2 (en) 2012-06-15 2017-01-03 Vufind, Inc. System and method for structuring a large scale object recognition engine to maximize recognition accuracy and emulate human visual cortex
US9955910B2 (en) 2005-10-14 2018-05-01 Aranz Healthcare Limited Method of monitoring a surface feature and apparatus therefor
US10013527B2 (en) 2016-05-02 2018-07-03 Aranz Healthcare Limited Automatically assessing an anatomical surface feature and securely managing information related to the same
EP3355243A1 (en) * 2017-01-30 2018-08-01 Canon Kabushiki Kaisha Information processing apparatus, information processing method, and program
USRE47420E1 (en) 2001-03-02 2019-06-04 Advanced Micro Devices, Inc. Performance and power optimization via block oriented performance measurement and control
US10874302B2 (en) 2011-11-28 2020-12-29 Aranz Healthcare Limited Handheld skin measuring or monitoring device
US10916013B2 (en) * 2018-03-14 2021-02-09 Volvo Car Corporation Method of segmentation and annotation of images
US11064219B2 (en) * 2018-12-03 2021-07-13 Cloudinary Ltd. Image format, systems and methods of implementation thereof, and image processing
US11116407B2 (en) 2016-11-17 2021-09-14 Aranz Healthcare Limited Anatomical surface assessment methods, devices and systems
US11210797B2 (en) 2014-07-10 2021-12-28 Slyce Acquisition Inc. Systems, methods, and devices for image matching and object recognition in images using textures
USD939980S1 (en) 2019-06-17 2022-01-04 Guard, Inc. Data and sensor system hub
US11294624B2 (en) 2015-03-02 2022-04-05 Slyce Canada Inc. System and method for clustering data
US20220417429A1 (en) * 2021-06-28 2022-12-29 International Business Machines Corporation Privacy-protecting multi-pass street-view photo-stitch
US11587059B1 (en) 2015-03-20 2023-02-21 Slyce Canada Inc. System and method for instant purchase transactions via image recognition
US11903723B2 (en) 2017-04-04 2024-02-20 Aranz Healthcare Limited Anatomical surface assessment methods, devices and systems

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5659692A (en) 1992-01-13 1997-08-19 Massachusetts Institute Of Technology Computer method and apparatus for video conferencing
US5412738A (en) 1992-08-11 1995-05-02 Istituto Trentino Di Cultura Recognition system, particularly for recognising people
US5325475A (en) 1992-09-09 1994-06-28 Massachusetts Institute Of Technology Computer method and apparatus for matching between line drawings
US5598488A (en) 1993-09-13 1997-01-28 Massachusetts Institute Of Technology Object movement estimator using one-dimensional optical flow
US5642431A (en) 1995-06-07 1997-06-24 Massachusetts Institute Of Technology Network-based system and method for detection of faces and the like
US5870502A (en) * 1996-04-08 1999-02-09 The Trustees Of Columbia University In The City Of New York System and method for a multiresolution transform of digital image information
US5841473A (en) * 1996-07-26 1998-11-24 Software For Image Compression, N.V. Image sequence compression and decompression
US6173068B1 (en) * 1996-07-29 2001-01-09 Mikos, Ltd. Method and apparatus for recognizing and classifying individuals based on minutiae
US6081612A (en) * 1997-02-28 2000-06-27 Electro Optical Sciences Inc. Systems and methods for the multispectral imaging and characterization of skin tissue
US6148106A (en) * 1998-06-30 2000-11-14 The United States Of America As Represented By The Secretary Of The Navy Classification of images using a dictionary of compressed time-frequency atoms

Non-Patent Citations (20)

* Cited by examiner, † Cited by third party
Title
B. Boser, I. Guyon, and V. Vapnik. A training algorithm for optimal margin classifiers. In Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory, pp. 144-152. ACM, 1992.
B. Moghaddam and A. Pentland. Probabilistic visual learning for object detection. Technical Report 326, Media Laboratory, Massachusetts Institute of Technology, 1995.
C. Jacobs, A. Finkelstein, and D. Salesin. Fast multiresolution image querying. SIGGRAPH 95, Aug. 1995. University of Washington, TR-95-01-06.
C.P. Papageorgiou, M. Oren and T. Poggio. A General Framework for Object Detection. Computer Vision and Pattern Recognition. 1998.
E. Stollnitz, T. DeRose, and D. Salesin. Wavelets for computer graphics: A primer. University of Washington, TR-94-09-11, Sep. 1994, pp. 1-40.
Gortler et al, "Hierarchical and Variational Geometric Modeling with Wavelets", Apr. 1995; ACM Paper ISBN: 0-89791-736-7, pp. 35-42/205.*
H. Rowley, S. Baluja, and T. Kanade. Human face detection in visual scenes. Technical Report CMU-CS-95-158, School of Computer Science, Carnegie Mellon University, Jul./Nov. 1995.
H.-J. Chen and Y. Shirai. Detecting multiple image motions by exploiting temporal coherence of apparent motions. Computer Vision and Pattern Recognition, pp. 899-902, 1994.
K. Rohr. Incremental recognition of pedestrians from image sequences. Computer Vision and Pattern Recognition, pp. 8-13, 1993.
K.-K. Sung and T. Poggio. Example-based learning for view-based human face detection. A.I. Memo 1521, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Dec. 1994.
M. Leung and Y.H. Yang. A region based approach for human body analysis. Pattern Recognition, 20(3):321-39, 1987.
M. Leung and Y.H. Yang. Human body motion segmentation in a complex case. Pattern Recognition, 20(1):55-64, 1987.
M. Oren, C. Papageorgiou, P. Sinha, E. Osuna, and T. Poggio. Pedestrian detection using wavelet templates. In Computer Vision and Pattern Recognition, pp. 193-199, 1997.
N.R. Adam and A. Gangopadhyay. Content-Based Retrieval in Digital Libraries. In IEEE Computer Magazine, pp. 93-95.
E. Osuna, R. Freund, and F. Girosi. Support vector machines: Training and applications. MIT CBCL Memo, Mar. 1997.
R. Vaillant, C. Monrocq, and Y. L. Cun. Original approach for the localization of objects in images. IEE Proc.-Vis. Image Signal Processing, 141(4), Aug. 1994, pp. 245-250.
S. Mallat. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(7):674-93, Jul. 1989.
T. Tsukiyama and Y. Shirai. Detection of the movements of persons from a sparse sequence of tv images. Pattern Recognition, 18(3/4):207-13, 1985.
Tolig et al, "Wavelet Neural Network for Classification of Transient Signals", Sep. 1997; IEEE Paper ISBN: 0-7803-4173-2, pp. 161-166.*
Vrhel et al, "Rapid Computation of the Continuous Wavelet Transform by Oblique Projections", Apr. 1997; IEEE Paper, vol. 45, Issue 4, pp. 891-900.*

Cited By (185)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6944332B1 (en) * 1999-04-20 2005-09-13 Microsoft Corporation Method and system for searching for images based on color and shape of a selected image
US7194127B2 (en) 1999-04-20 2007-03-20 Microsoft Corporation Method and system for searching for images based on color and shape of a selected image
US6628834B2 (en) * 1999-07-20 2003-09-30 Hewlett-Packard Development Company, L.P. Template matching system for images
US6766058B1 (en) * 1999-08-04 2004-07-20 Electro Scientific Industries Pattern recognition using multiple templates
US6671391B1 (en) * 2000-05-26 2003-12-30 Microsoft Corp. Pose-adaptive face detection system and process
US6760724B1 (en) * 2000-07-24 2004-07-06 Lucent Technologies Inc. Approximate query processing using wavelets
US20020102024A1 (en) * 2000-11-29 2002-08-01 Compaq Information Technologies Group, L.P. Method and system for object detection in digital images
US7099510B2 (en) * 2000-11-29 2006-08-29 Hewlett-Packard Development Company, L.P. Method and system for object detection in digital images
US7171042B2 (en) * 2000-12-04 2007-01-30 Intel Corporation System and method for classification of images and videos
US20020067857A1 (en) * 2000-12-04 2002-06-06 Hartmann Alexander J. System and method for classification of images and videos
US20040062441A1 (en) * 2000-12-06 2004-04-01 Jerome Meniere Method for detecting new objects in an illuminated scene
US7302081B2 (en) * 2000-12-06 2007-11-27 Vision Iq Method for detecting new objects in an illuminated scene
US7085395B2 (en) * 2000-12-07 2006-08-01 Sony United Kingdom Limited Apparatus for detecting and recovering data
US20020131617A1 (en) * 2000-12-07 2002-09-19 Pelly Jason Charles Apparatus for detecting and recovering data
USRE47420E1 (en) 2001-03-02 2019-06-04 Advanced Micro Devices, Inc. Performance and power optimization via block oriented performance measurement and control
USRE48819E1 (en) 2001-03-02 2021-11-16 Advanced Micro Devices, Inc. Performance and power optimization via block oriented performance measurement and control
US20050129274A1 (en) * 2001-05-30 2005-06-16 Farmer Michael E. Motion-based segmentor detecting vehicle occupants using optical flow method to remove effects of illumination
US7369686B2 (en) * 2001-08-23 2008-05-06 Sony Corporation Robot apparatus, face recognition method, and face recognition apparatus
US20050036649A1 (en) * 2001-08-23 2005-02-17 Jun Yokono Robot apparatus, face recognition method, and face recognition apparatus
US20030128876A1 (en) * 2001-12-13 2003-07-10 Kabushiki Kaisha Toshiba Pattern recognition apparatus and method therefor
US7200270B2 (en) * 2001-12-13 2007-04-03 Kabushiki Kaisha Toshiba Pattern recognition apparatus and method using distributed model representation of partial images
US20040005083A1 (en) * 2002-03-26 2004-01-08 Kikuo Fujimura Real-time eye detection and tracking under various light conditions
US7206435B2 (en) * 2002-03-26 2007-04-17 Honda Giken Kogyo Kabushiki Kaisha Real-time eye detection and tracking under various light conditions
US7139411B2 (en) * 2002-06-14 2006-11-21 Honda Giken Kogyo Kabushiki Kaisha Pedestrian detection and tracking with night vision
US20060177097A1 (en) * 2002-06-14 2006-08-10 Kikuo Fujimura Pedestrian detection and tracking with night vision
US7421090B2 (en) * 2002-06-20 2008-09-02 Lockheed Martin Corporation Target detection system using trained and untrained detection and methods therefor
US20040047492A1 (en) * 2002-06-20 2004-03-11 Robert Muise Target detection system using trained and untrained detection and methods therefor
US20050248654A1 (en) * 2002-07-22 2005-11-10 Hiroshi Tsujino Image-based object detection apparatus and method
US7295684B2 (en) * 2002-07-22 2007-11-13 Honda Giken Kogyo Kabushiki Kaisha Image-based object detection apparatus and method
US20040022423A1 (en) * 2002-08-02 2004-02-05 Eastman Kodak Company Method for locating faces in digital color images
US7110575B2 (en) * 2002-08-02 2006-09-19 Eastman Kodak Company Method for locating faces in digital color images
KR100474771B1 (en) * 2002-08-29 2005-03-10 엘지전자 주식회사 Face detection method based on templet
US7194114B2 (en) 2002-10-07 2007-03-20 Carnegie Mellon University Object finder for two-dimensional images, and system for determining a set of sub-classifiers composing an object finder
US20040066966A1 (en) * 2002-10-07 2004-04-08 Henry Schneiderman Object finder for two-dimensional images, and system for determining a set of sub-classifiers composing an object finder
US7313268B2 (en) * 2002-10-31 2007-12-25 Eastman Kodak Company Method for using effective spatio-temporal image recomposition to improve scene classification
US20040120572A1 (en) * 2002-10-31 2004-06-24 Eastman Kodak Company Method for using effective spatio-temporal image recomposition to improve scene classification
EP2405382A1 (en) 2002-11-12 2012-01-11 Nokia Corporation Region-of-interest tracking method and device for wavelet-based video coding
US6757434B2 (en) 2002-11-12 2004-06-29 Nokia Corporation Region-of-interest tracking method and device for wavelet-based video coding
WO2004044830A1 (en) 2002-11-12 2004-05-27 Nokia Corporation Region-of-interest tracking method and device for wavelet-based video coding
US7149361B2 (en) * 2002-11-26 2006-12-12 Lockheed Martin Corporation Method and apparatus for image processing to detect changes in a scene
US20040101200A1 (en) * 2002-11-26 2004-05-27 Larry Peele Method and apparatus for image processing to detect changes in a scene
US20040151371A1 (en) * 2003-01-30 2004-08-05 Eastman Kodak Company Method for face orientation determination in digital color images
US7120279B2 (en) * 2003-01-30 2006-10-10 Eastman Kodak Company Method for face orientation determination in digital color images
US20040179719A1 (en) * 2003-03-12 2004-09-16 Eastman Kodak Company Method and system for face detection in digital images
US7508961B2 (en) * 2003-03-12 2009-03-24 Eastman Kodak Company Method and system for face detection in digital images
US8605155B2 (en) 2003-06-04 2013-12-10 Model Software Corporation Video surveillance system
US7859564B2 (en) 2003-06-04 2010-12-28 Model Software Corporation Video surveillance system
US20040246336A1 (en) * 2003-06-04 2004-12-09 Model Software Corporation Video surveillance system
US20040246114A1 (en) * 2003-06-05 2004-12-09 Stefan Hahn Image processing system for a vehicle
US20040258279A1 (en) * 2003-06-13 2004-12-23 Sarnoff Corporation Method and apparatus for pedestrian detection
US20040252864A1 (en) * 2003-06-13 2004-12-16 Sarnoff Corporation Method and apparatus for ground detection and removal in vision systems
US7957562B2 (en) 2003-06-13 2011-06-07 Sri International Method and apparatus for ground detection and removal in vision systems
US6956469B2 (en) * 2003-06-13 2005-10-18 Sarnoff Corporation Method and apparatus for pedestrian detection
US7068815B2 (en) * 2003-06-13 2006-06-27 Sarnoff Corporation Method and apparatus for ground detection and removal in vision systems
US20060210117A1 (en) * 2003-06-13 2006-09-21 Peng Chang Method and apparatus for ground detection and removal in vision systems
US7783082B2 (en) 2003-06-30 2010-08-24 Honda Motor Co., Ltd. System and method for face recognition
US20060280341A1 (en) * 2003-06-30 2006-12-14 Honda Motor Co., Ltd. System and method for face recognition
US7734071B2 (en) * 2003-06-30 2010-06-08 Honda Motor Co., Ltd. Systems and methods for training component-based object identification systems
US20050036676A1 (en) * 2003-06-30 2005-02-17 Bernd Heisele Systems and methods for training component-based object identification systems
WO2005002921A3 (en) * 2003-07-02 2005-03-10 Sarnoff Corp Method and apparatus for pedestrian detection
US7769759B1 (en) * 2003-08-28 2010-08-03 Biz360, Inc. Data classification based on point-of-view dependency
US20110125747A1 (en) * 2003-08-28 2011-05-26 Biz360, Inc. Data classification based on point-of-view dependency
WO2005038701A1 (en) * 2003-09-26 2005-04-28 Siemens Aktiengesellschaft Method for producing and/or updating learning and/or random test samples
US20070084641A1 (en) * 2003-09-26 2007-04-19 Siemens Aktiengesellschaft Method for producing and/or updating learning and/or random test samples
US7366325B2 (en) 2003-10-09 2008-04-29 Honda Motor Co., Ltd. Moving object detection using low illumination depth capable computer vision
US20050100192A1 (en) * 2003-10-09 2005-05-12 Kikuo Fujimura Moving object detection using low illumination depth capable computer vision
US7412103B2 (en) * 2003-10-20 2008-08-12 Lawrence Livermore National Security, Llc 3D wavelet-based filter and method
US20050123216A1 (en) * 2003-10-20 2005-06-09 The Regents Of The University Of California. 3D wavelet-based filter and method
US7505621B1 (en) 2003-10-24 2009-03-17 Videomining Corporation Demographic classification using image components
US9330321B2 (en) 2004-07-26 2016-05-03 Tk Holdings, Inc. Method of processing an image of a visual scene
US8509523B2 (en) 2004-07-26 2013-08-13 Tk Holdings, Inc. Method of identifying an object in a visual scene
US20090010495A1 (en) * 2004-07-26 2009-01-08 Automotive Systems Laboratory, Inc. Vulnerable Road User Protection System
US8594370B2 (en) 2004-07-26 2013-11-26 Automotive Systems Laboratory, Inc. Vulnerable road user protection system
US20060062478A1 (en) * 2004-08-16 2006-03-23 Grandeye, Ltd., Region-sensitive compression of digital video
KR100624481B1 (en) 2004-11-17 2006-09-18 삼성전자주식회사 Method for tracking face based on template
US7869629B2 (en) 2005-01-04 2011-01-11 Samsung Electronics Co., Ltd. Apparatus and method for detecting heads in input image
US20060147108A1 (en) * 2005-01-04 2006-07-06 Samsung Electronics Co., Ltd. Apparatus and method for detecting heads in input image
KR100695136B1 (en) 2005-01-04 2007-03-14 삼성전자주식회사 Face detection method and apparatus in image
US9955910B2 (en) 2005-10-14 2018-05-01 Aranz Healthcare Limited Method of monitoring a surface feature and apparatus therefor
US10827970B2 (en) 2005-10-14 2020-11-10 Aranz Healthcare Limited Method of monitoring a surface feature and apparatus therefor
US7894677B2 (en) * 2006-02-09 2011-02-22 Microsoft Corporation Reducing human overhead in text categorization
US20070183655A1 (en) * 2006-02-09 2007-08-09 Microsoft Corporation Reducing human overhead in text categorization
US7974441B2 (en) * 2006-02-28 2011-07-05 Sanyo Electric Co., Ltd. Object detection apparatus for detecting a specific object in an input image
US20070201747A1 (en) * 2006-02-28 2007-08-30 Sanyo Electric Co., Ltd. Object detection apparatus
US20090129683A1 (en) * 2006-05-10 2009-05-21 Nikon Corporation Object Recognition Apparatus,Computer Readable Medium Storing Object Recognition Program, and Image Retrieval Service Providing Method
US8379990B2 (en) * 2006-05-10 2013-02-19 Nikon Corporation Object recognition apparatus, computer readable medium storing object recognition program, and image retrieval service providing method
KR100813167B1 (en) 2006-06-09 2008-03-17 삼성전자주식회사 Method and system for fast and accurate face detection and face detection learning
US20080044086A1 (en) * 2006-08-15 2008-02-21 Fuji Xerox Co., Ltd. Image processing system, image processing method, computer readable medium and computer data signal
US8077977B2 (en) * 2006-08-15 2011-12-13 Fuji Xerox Co., Ltd. Image processing system, image processing method, computer readable medium and computer data signal
US7835581B2 (en) * 2006-08-30 2010-11-16 Honeywell International Inc. Neurophysiologically driven high speed image triage system and method
US20080056611A1 (en) * 2006-08-30 2008-03-06 Honeywell International, Inc. Neurophysiologically driven high speed image triage system and method
US20100008589A1 (en) * 2006-10-11 2010-01-14 Mitsubishi Electric Corporation Image descriptor for image recognition
US8655103B2 (en) * 2006-10-11 2014-02-18 Mitsubishi Electric Corporation Deriving an image representation using frequency components of a frequency representation
US7987111B1 (en) 2006-10-30 2011-07-26 Videomining Corporation Method and system for characterizing physical retail spaces by determining the demographic composition of people in the physical retail spaces utilizing video image analysis
US20080232696A1 (en) * 2007-03-23 2008-09-25 Seiko Epson Corporation Scene Classification Apparatus and Scene Classification Method
US20100104191A1 (en) * 2007-03-26 2010-04-29 Mcgwire Kenneth C Data analysis process
WO2008118977A1 (en) * 2007-03-26 2008-10-02 Desert Research Institute Data analysis process
US8615133B2 (en) 2007-03-26 2013-12-24 Board Of Regents Of The Nevada System Of Higher Education, On Behalf Of The Desert Research Institute Process for enhancing images based on user input
US7957596B2 (en) 2007-05-02 2011-06-07 Microsoft Corporation Flexible matching with combinational similarity
US20080273795A1 (en) * 2007-05-02 2008-11-06 Microsoft Corporation Flexible matching with combinational similarity
US20080279456A1 (en) * 2007-05-08 2008-11-13 Seiko Epson Corporation Scene Classification Apparatus and Scene Classification Method
US20100202659A1 (en) * 2007-06-15 2010-08-12 Haemaelaeinen Perttu Image sampling in stochastic model-based computer vision
WO2008152208A1 (en) * 2007-06-15 2008-12-18 Virtual Air Guitar Company Oy Image sampling in stochastic model-based computer vision
US20090019044A1 (en) * 2007-07-13 2009-01-15 Kabushiki Kaisha Toshiba Pattern search apparatus and method thereof
US8510311B2 (en) * 2007-07-13 2013-08-13 Kabushiki Kaisha Toshiba Pattern search apparatus and method thereof
US8515158B2 (en) * 2007-10-10 2013-08-20 Mitsubishi Electric Corporation Enhanced image identification
US20100310153A1 (en) * 2007-10-10 2010-12-09 Mitsubishi Electric Corporation Enhanced image identification
US20090141007A1 (en) * 2007-11-29 2009-06-04 Honeywell International, Inc. Dynamic calibration of physiologically driven image triage systems
US8271074B2 (en) 2007-11-29 2012-09-18 Honeywell International Inc. Dynamic calibration of physiologically driven image triage systems
US7991195B2 (en) 2008-02-25 2011-08-02 Honeywell International Inc. Target specific image scaling for effective rapid serial visual presentation
US20090214118A1 (en) * 2008-02-25 2009-08-27 Honeywell International, Inc. Target specific image scaling for effective rapid serial visual presentation
US8615109B2 (en) 2008-04-07 2013-12-24 Toyota Jidosha Kabushiki Kaisha Moving object trajectory estimating device
US20090252380A1 (en) * 2008-04-07 2009-10-08 Toyota Jidosha Kabushiki Kaisha Moving object trajectory estimating device
US20110235864A1 (en) * 2008-04-07 2011-09-29 Toyota Jidosha Kabushiki Kaisha Moving object trajectory estimating device
US20100239154A1 (en) * 2008-09-24 2010-09-23 Canon Kabushiki Kaisha Information processing apparatus and method
CN102165488A (en) * 2008-09-24 2011-08-24 佳能株式会社 Information processing apparatus for selecting characteristic feature used for classifying input data
CN102165488B (en) * 2008-09-24 2013-11-06 佳能株式会社 Information processing apparatus for selecting characteristic feature used for classifying input data
US8189906B2 (en) * 2008-09-24 2012-05-29 Canon Kabushiki Kaisha Information processing apparatus and method
US8819015B2 (en) * 2009-01-13 2014-08-26 Canon Kabushiki Kaisha Object identification apparatus and method for identifying object
US20100205177A1 (en) * 2009-01-13 2010-08-12 Canon Kabushiki Kaisha Object identification apparatus and method for identifying object
US9104914B1 (en) 2009-03-10 2015-08-11 Google Inc. Object detection with false positive filtering
US8761446B1 (en) * 2009-03-10 2014-06-24 Google Inc. Object detection with false positive filtering
US20100272350A1 (en) * 2009-04-27 2010-10-28 Morris Lee Methods and apparatus to perform image classification based on pseudorandom features
US8351712B2 (en) 2009-04-27 2013-01-08 The Nielsen Company (US), LLC Methods and apparatus to perform image classification based on pseudorandom features
US8818112B2 (en) 2009-04-27 2014-08-26 The Nielsen Company (Us), Llc Methods and apparatus to perform image classification based on pseudorandom features
US8271871B2 (en) * 2009-04-30 2012-09-18 Xerox Corporation Automated method for alignment of document objects
US20100281361A1 (en) * 2009-04-30 2010-11-04 Xerox Corporation Automated method for alignment of document objects
US20110002531A1 (en) * 2009-07-01 2011-01-06 Honda Motor Co., Ltd. Object Recognition with 3D Models
US8422797B2 (en) 2009-07-01 2013-04-16 Honda Motor Co., Ltd. Object recognition with 3D models
US20110090359A1 (en) * 2009-10-20 2011-04-21 Canon Kabushiki Kaisha Image recognition apparatus, processing method thereof, and computer-readable storage medium
US8643739B2 (en) * 2009-10-20 2014-02-04 Canon Kabushiki Kaisha Image recognition apparatus, processing method thereof, and computer-readable storage medium
US8401250B2 (en) 2010-02-19 2013-03-19 MindTree Limited Detecting objects of interest in still images
US20120148160A1 (en) * 2010-07-08 2012-06-14 Honeywell International Inc. Landmark localization for facial imagery
US20140010410A1 (en) * 2011-03-17 2014-01-09 Nec Corporation Image recognition system, image recognition method, and non-transitory computer readable medium storing image recognition program
US9600745B2 (en) * 2011-03-17 2017-03-21 Nec Corporation Image recognition system, image recognition method, and non-transitory computer readable medium storing image recognition program
US20120281907A1 (en) * 2011-05-06 2012-11-08 Toyota Motor Engin. & Manufact. N.A.(TEMA) Real-time 3d point cloud obstacle discriminator apparatus and associated methodology for training a classifier via bootstrapping
US8605998B2 (en) * 2011-05-06 2013-12-10 Toyota Motor Engineering & Manufacturing North America, Inc. Real-time 3D point cloud obstacle discriminator apparatus and associated methodology for training a classifier via bootstrapping
US20140169628A1 (en) * 2011-07-14 2014-06-19 Bayerische Motoren Werke Aktiengesellschaft Method and Device for Detecting the Gait of a Pedestrian for a Portable Terminal
US9286689B2 (en) * 2011-07-14 2016-03-15 Bayerische Motoren Werke Aktiengesellschaft Method and device for detecting the gait of a pedestrian for a portable terminal
US8799201B2 (en) 2011-07-25 2014-08-05 Toyota Motor Engineering & Manufacturing North America, Inc. Method and system for tracking objects
US11850025B2 (en) 2011-11-28 2023-12-26 Aranz Healthcare Limited Handheld skin measuring or monitoring device
US10874302B2 (en) 2011-11-28 2020-12-29 Aranz Healthcare Limited Handheld skin measuring or monitoring device
US20130156343A1 (en) * 2011-12-20 2013-06-20 Harman International Industries, Incorporated System for communicating relationship data associated with image characteristics
US8948449B2 (en) * 2012-02-06 2015-02-03 GM Global Technology Operations LLC Selecting visible regions in nighttime images for performing clear path detection
US20130202152A1 (en) * 2012-02-06 2013-08-08 GM Global Technology Operations LLC Selecting Visible Regions in Nighttime Images for Performing Clear Path Detection
US20130336579A1 (en) * 2012-06-15 2013-12-19 Vufind, Inc. Methods for Efficient Classifier Training for Accurate Object Recognition in Images and Video
US9536178B2 (en) 2012-06-15 2017-01-03 Vufind, Inc. System and method for structuring a large scale object recognition engine to maximize recognition accuracy and emulate human visual cortex
US8811727B2 (en) * 2012-06-15 2014-08-19 Moataz A. Rashad Mohamed Methods for efficient classifier training for accurate object recognition in images and video
US9228833B2 (en) * 2012-06-28 2016-01-05 GM Global Technology Operations LLC Wide baseline binocular object matching method using minimal cost flow network
US20140002650A1 (en) * 2012-06-28 2014-01-02 GM Global Technology Operations LLC Wide baseline binocular object matching method using minimal cost flow network
CN103530868B (en) * 2012-06-28 2016-10-05 通用汽车环球科技运作有限责任公司 Wide baseline binocular object matching method using minimal cost flow network
CN103530868A (en) * 2012-06-28 2014-01-22 通用汽车环球科技运作有限责任公司 Wide baseline binocular object matching method using minimal cost flow network
CN103313015A (en) * 2013-06-19 2013-09-18 海信集团有限公司 Image processing method and image processing device
CN103313015B (en) * 2013-06-19 2018-08-10 海信集团有限公司 Image processing method and device
US20150362989A1 (en) * 2014-06-17 2015-12-17 Amazon Technologies, Inc. Dynamic template selection for object detection and tracking
US11210797B2 (en) 2014-07-10 2021-12-28 Slyce Acquisition Inc. Systems, methods, and devices for image matching and object recognition in images using textures
CN104200209A (en) * 2014-08-29 2014-12-10 南京烽火星空通信发展有限公司 Image text detecting method
CN104200209B (en) * 2014-08-29 2017-11-03 南京烽火星空通信发展有限公司 Image text detection method
US9904980B2 (en) * 2014-09-03 2018-02-27 Samsung Electronics Co., Ltd. Display apparatus and controller and method of controlling the same
US20160063673A1 (en) * 2014-09-03 2016-03-03 Samsung Electronics Co., Ltd. Display apparatus and controller and method of controlling the same
KR20160028286A (en) * 2014-09-03 2016-03-11 삼성전자주식회사 Display apparatus, mobile and method for controlling the same
US20160086033A1 (en) * 2014-09-19 2016-03-24 Bendix Commercial Vehicle Systems Llc Advanced blending of stitched images for 3d object reproduction
US10055643B2 (en) * 2014-09-19 2018-08-21 Bendix Commercial Vehicle Systems Llc Advanced blending of stitched images for 3D object reproduction
CN104361355A (en) * 2014-12-02 2015-02-18 威海北洋电气集团股份有限公司 Infrared-based automatic human-vehicle classification method and channel device
CN104361355B (en) * 2014-12-02 2018-02-23 威海北洋电气集团股份有限公司 Infrared-based automatic human-vehicle classification method and channel device
US11294624B2 (en) 2015-03-02 2022-04-05 Slyce Canada Inc. System and method for clustering data
US11587059B1 (en) 2015-03-20 2023-02-21 Slyce Canada Inc. System and method for instant purchase transactions via image recognition
CN104834926B (en) * 2015-04-09 2018-10-02 深圳市天阿智能科技有限责任公司 Character zone extraction method and system
CN104834926A (en) * 2015-04-09 2015-08-12 孙晓航 Method and system for character zone extraction
CN105335716B (en) * 2015-10-29 2019-03-26 北京工业大学 Improved UDN joint-feature extraction-based pedestrian detection method
CN105335716A (en) * 2015-10-29 2016-02-17 北京工业大学 Improved UDN joint-feature extraction-based pedestrian detection method
US11250945B2 (en) 2016-05-02 2022-02-15 Aranz Healthcare Limited Automatically assessing an anatomical surface feature and securely managing information related to the same
US11923073B2 (en) 2016-05-02 2024-03-05 Aranz Healthcare Limited Automatically assessing an anatomical surface feature and securely managing information related to the same
US10777317B2 (en) 2016-05-02 2020-09-15 Aranz Healthcare Limited Automatically assessing an anatomical surface feature and securely managing information related to the same
US10013527B2 (en) 2016-05-02 2018-07-03 Aranz Healthcare Limited Automatically assessing an anatomical surface feature and securely managing information related to the same
US11116407B2 (en) 2016-11-17 2021-09-14 Aranz Healthcare Limited Anatomical surface assessment methods, devices and systems
US11019251B2 (en) 2017-01-30 2021-05-25 Canon Kabushiki Kaisha Information processing apparatus, image capturing apparatus, information processing method, and recording medium storing program
EP3355243A1 (en) * 2017-01-30 2018-08-01 Canon Kabushiki Kaisha Information processing apparatus, information processing method, and program
US11903723B2 (en) 2017-04-04 2024-02-20 Aranz Healthcare Limited Anatomical surface assessment methods, devices and systems
US10916013B2 (en) * 2018-03-14 2021-02-09 Volvo Car Corporation Method of segmentation and annotation of images
US11064219B2 (en) * 2018-12-03 2021-07-13 Cloudinary Ltd. Image format, systems and methods of implementation thereof, and image processing
USD939980S1 (en) 2019-06-17 2022-01-04 Guard, Inc. Data and sensor system hub
USD957966S1 (en) 2019-06-17 2022-07-19 Guard, Inc. Tile sensor unit
US20220417429A1 (en) * 2021-06-28 2022-12-29 International Business Machines Corporation Privacy-protecting multi-pass street-view photo-stitch
US11558550B1 (en) * 2021-06-28 2023-01-17 International Business Machines Corporation Privacy-protecting multi-pass street-view photo-stitch

Similar Documents

Publication Publication Date Title
US6421463B1 (en) Trainable system to search for objects in images
Oren et al. Pedestrian detection using wavelet templates
Lin et al. Shape-based human detection and segmentation via hierarchical part-template matching
Torralba Contextual priming for object detection
Garcia et al. Face detection using quantized skin color regions merging and wavelet packet analysis
Amit et al. A computational model for visual selection
US8682029B2 (en) Rule-based segmentation for objects with frontal view in color images
Porikli et al. Object detection and tracking
Garcia et al. A neural architecture for fast and robust face detection
CN110929593A (en) Real-time significance pedestrian detection method based on detail distinguishing and distinguishing
Wang et al. Human action recognition with depth cameras
Kpalma et al. An overview of advances of pattern recognition systems in computer vision
Oren et al. A trainable system for people detection
Taylor et al. Pose-sensitive embedding by nonlinear NCA regression
Keren Recognizing image “style” and activities in video using local features and naive Bayes
Papageorgiou Object and pattern detection in video sequences
Rasche Computer Vision
Gu et al. Visual Saliency Detection Based Object Recognition.
Anjum et al. A new approach to face recognition using dual dimension reduction
Zaki et al. Multiple object detection and localisation system using automatic feature selection
Abdallah Investigation of new techniques for face detection
KR102395866B1 (en) Method and apparatus for object recognition and detection of camera images using machine learning
Mondal Hog Feature-A Survey
Nayak et al. A Hybrid Model for Frontal View Human Face Detection and Recognition
Delakis et al. Robust face detection based on convolutional neural networks

Legal Events

Date Code Title Description
AS Assignment

Owner name: MASSACHUSETTS INSTITUTE OF TECHNOLOGY, MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:POGGIO, TOMASO;OREN, MICHAEL;PAPAGEORGIOU, CONSTANTINE P.;AND OTHERS;REEL/FRAME:010143/0343;SIGNING DATES FROM 19990326 TO 19990329

AS Assignment

Owner name: NAVY, SECRETARY OF THE, UNITED STATES OF AMERICA,

Free format text: CONFIRMATORY INSTRUMENT;ASSIGNOR:MASSACHUSETTS INSTITUTE OF TECHNOLOGY;REEL/FRAME:010468/0324

Effective date: 19990730

AS Assignment

Owner name: NATIONAL SCIENCE FOUNDATION, VIRGINIA

Free format text: CONFIRMATORY LICENSE;ASSIGNOR:MASSACHUSETTS INSTITUTE OF TECHNOLOGY;REEL/FRAME:013079/0690

Effective date: 20020320

CC Certificate of correction
FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

FPAY Fee payment

Year of fee payment: 4

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20100716