US20150125036A1 - Extraction of Video Fingerprints and Identification of Multimedia Using Video Fingerprinting - Google Patents

Extraction of Video Fingerprints and Identification of Multimedia Using Video Fingerprinting

Info

Publication number
US20150125036A1
Authority
US
United States
Prior art keywords: interest, processor, region, fingerprint, matching
Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number
US14/594,278
Inventor
Sergiy Bilobrov
Current Assignee: Excalibur IP LLC; Altaba Inc (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Yahoo! Inc.
Priority date (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Application filed by Yahoo! Inc.; priority to US 14/594,278
Publication of US 2015/0125036 A1
Assigned to Excalibur IP, LLC (assignor: Yahoo! Inc.)
Assigned to Yahoo! Inc. (assignor: Excalibur IP, LLC)
Assigned to Excalibur IP, LLC (assignor: Yahoo! Inc.)
Assigned to Auditude, Inc. (assignor: Sergiy Bilobrov)
Assigned to IntoNow, Inc. (assignor: Auditude, Inc.)
Assigned to Yahoo! Inc. (assignor: IntoNow, Inc.)
Current legal status: Abandoned

Classifications

    • G06K 9/00067
    • G06K 9/00624
    • G06K 9/46
    • G06K 9/4604
    • G06K 2009/4666
    • G06T 1/00 (general purpose image data processing)
    • G06T 1/0021 (image watermarking)
    • G06T 7/00 (image analysis)
    • G06T 7/20 (analysis of motion)
    • G06V 40/10 (human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands)
    • G06V 40/12 (fingerprints or palmprints)
    • G06V 40/1347 (preprocessing; feature extraction)

Definitions

  • Once candidate regions of interest have been detected (see the detection methods in the Description below), the fingerprinting algorithm selects the required number of regions of interest to analyze. This selection may be based on the size, stability, and duration of the regions.
  • The selected regions of interest may have different shapes and orientations. The most typical shape of a region of interest is a rectangle; another typical shape is a trapezoid, which can easily be transformed into a rectangle. Rectangular (and trapezoidal) shapes are easy to identify by their edges.
  • The orientation of a rectangular ROI is determined by the orientation of its edges. While some regions of interest (such as picture-in-picture) naturally have rectangular boundaries, others (such as logos) may have an irregular shape. Irregular objects may be extracted and padded to a rectangular shape. The size and orientation of the enclosing rectangle are selected to guarantee that the rectangle contains the object entirely while the padded area is minimal.
  • The resulting region of interest has two sides oriented vertically and two sides oriented horizontally, and can be represented as a matrix I with N columns and M rows, where N and M are the width and height of the ROI, respectively. The distance between fingerprints X and Y reflects the differences between the corresponding spatio-temporal regions A and B.
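As an illustrative sketch only (the per-column-mean feature and array sizes here are placeholder assumptions, not the patent's actual features), a normalized ROI can be treated as an M x N intensity matrix and two fingerprints compared by a simple vector distance:

```python
import numpy as np

def fingerprint_distance(x: np.ndarray, y: np.ndarray) -> float:
    """Distance between two fingerprints; smaller means more similar."""
    return float(np.linalg.norm(np.asarray(x, float) - np.asarray(y, float)))

# A normalized ROI is a matrix I with M rows and N columns (here 120 x 160).
region_a = np.random.rand(120, 160)
region_b = region_a + 0.01 * np.random.randn(120, 160)   # mildly distorted copy

# Toy fingerprints: per-column means (a real system would use the
# spectral column features described below).
fp_a, fp_b = region_a.mean(axis=0), region_b.mean(axis=0)
print(fingerprint_distance(fp_a, fp_b))   # small for matching regions
```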
  • A region of interest in anamorphic format containing the wide-screen version will yield different characteristics than a masked version of the same content. If the masked version is contained in the main region of interest representing the whole video frame, the region detection algorithm will most likely trim the top and bottom blank areas. However, if the masked version of the content is embedded into another video stream (picture-in-picture), the region recognition algorithm may detect the outer boundaries of the area, producing a region of interest that does contain the blank areas of the masked image. As shown in FIGS. 2-4, the normalized and scaled regions of the anamorphic and masked versions of the same content have identical horizontal scale and different vertical scale.
  • Direct comparison of these two versions is possible column by column, provided that every column is processed using a scale- and shift-invariant transform.
  • One possible process for this purpose is the Fourier-Mellin transform. The Mellin transform is scale invariant and can be approximated by calculating the magnitude of the FFT of resampled data represented on a logarithmic scale. An additional FFT magnitude calculation may be used to provide shift invariance.
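A minimal sketch of this approximation, assuming a single 1-D pixel column as input and illustrative sample counts; resampling on a logarithmic grid turns a moderate rescaling into an approximate shift, which the FFT magnitude then discards:

```python
import numpy as np

def mellin_like_descriptor(column: np.ndarray, n_samples: int = 64) -> np.ndarray:
    """Approximate scale invariance: resample the column on a logarithmic
    grid, then keep FFT magnitudes (a second magnitude step could be added
    for shift invariance). Phase information is deliberately discarded."""
    m = len(column)
    log_grid = np.logspace(0.0, np.log10(m - 1), n_samples)   # positions 1..m-1
    resampled = np.interp(log_grid, np.arange(m), column)
    return np.abs(np.fft.rfft(resampled))

col = np.random.rand(240)
# Stretch the column by ~1.25x; descriptors stay close despite the rescale.
stretched = np.interp(np.linspace(0, 239, 300), np.arange(240), col)
print(np.linalg.norm(mellin_like_descriptor(col) - mellin_like_descriptor(stretched)))
```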
  • One disadvantage of this approach is that it is based on the magnitude spectrum and completely ignores the phase of the processed series. Different series of values may produce similar magnitude spectra, so any method based on spectral magnitude alone becomes indiscriminative, and alternative methods must be used to generate fingerprints.
  • A possible alternative to the magnitude spectrum is to use low-frequency spectral coefficients, which are not as sensitive as mid- to high-frequency coefficients to moderate shift and scale variations. The low-frequency coefficients of any spectral or wavelet transform can be used instead of the FFT to increase the robustness of the fingerprints to shift and scale variations.
  • The fingerprinting algorithm extends the number of samples and the resolution of the low-frequency coefficients by increasing the size of the processed data buffer. The algorithm places the pixel values of a column x_i consisting of m pixels into a larger processing buffer containing k*m elements; the remainder of the buffer is padded with null values, and the low-frequency spectral coefficients of the data in the buffer are calculated. This may be accomplished using the FFT, DCT, or any other suitable transform.
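A short sketch of the padded-buffer step (the buffer factor k, coefficient count, and choice of the DCT are illustrative assumptions); zero-padding to k*m samples interpolates the spectrum, sampling the low band k times more finely:

```python
import numpy as np
from scipy.fft import dct

def low_frequency_features(column: np.ndarray, k: int = 4, n_coeffs: int = 16) -> np.ndarray:
    """Place an m-pixel column x_i at the start of a k*m buffer, pad the
    remainder with zeros, and keep the first low-frequency DCT coefficients.
    Any spectral transform (FFT, wavelet) could stand in for the DCT."""
    m = len(column)
    buffer = np.zeros(k * m)
    buffer[:m] = column
    return dct(buffer, type=2, norm="ortho")[:n_coeffs]

print(low_frequency_features(np.random.rand(120)).shape)   # (16,)
```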
  • An alternative method relies on filling the larger processing buffer with values selected from multiple columns. The content may be fragmented into more (or fewer) than four columns; the algorithm is not limited to any particular number of fragments. In the four-column case, the first four columns are selected and reconstructed as a single vertical strip of larger size. This may be achieved by extracting the columns one by one from the original region of interest, in order from left to right, and placing them in a single buffer containing 4*m elements. This processing buffer forms a larger base for feature extraction along the vertical axis. The feature vector for the set of the first four columns is constructed, and the processing algorithm then selects the next set of four columns; the calculation of vertical feature vectors repeats until all columns of the region are processed. The processing group thus includes four columns organized from left to right; the content of every column is ordered from top to bottom; and on each iteration the algorithm shifts by one column to the right.
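A sketch of this sliding four-column grouping, with group size and coefficient count assumed for illustration:

```python
import numpy as np
from scipy.fft import dct

def vertical_feature_vectors(roi: np.ndarray, group: int = 4, n_coeffs: int = 16) -> np.ndarray:
    """For each window of `group` adjacent columns, concatenate the columns
    left to right (each read top to bottom) into one buffer of group*M
    samples, extract low-frequency DCT features, then shift one column right."""
    m, n = roi.shape
    vectors = []
    for i in range(n - group + 1):
        buffer = roi[:, i:i + group].T.reshape(-1)   # columns i..i+3, top to bottom
        vectors.append(dct(buffer, type=2, norm="ortho")[:n_coeffs])
    return np.asarray(vectors)

print(vertical_feature_vectors(np.random.rand(120, 40)).shape)   # (37, 16)
```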
  • An alternative method for calculating vertical feature vectors is to arrange the columns in alternating fashion. The processing group may again consist of four columns organized from left to right, but the content of every odd column is ordered from top to bottom while the content of every even column is ordered in reverse (from bottom to top), so the processing buffer contains four columns with alternating pixel order. The DCT of the mirrored version, arranged in the same alternating fashion, has odd coefficients identical to those of the original version, while the even DCT coefficients have the same absolute value but the opposite sign. This allows both original and horizontally mirrored targets to be matched using the same set of vertical feature vectors in the reference database.
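This mirror symmetry can be verified numerically. A sketch (note it uses 0-based coefficient indexing, so the sign pattern appears as (-1)^k; the patent's odd/even wording reflects 1-based counting):

```python
import numpy as np
from scipy.fft import dct

def pack_alternating(roi: np.ndarray) -> np.ndarray:
    """Concatenate columns left to right, reversing every second column."""
    cols = [roi[:, j] if j % 2 == 0 else roi[:, j][::-1] for j in range(roi.shape[1])]
    return np.concatenate(cols)

roi = np.random.rand(8, 4)
mirrored = roi[:, ::-1]                     # horizontal mirror of the image

a = dct(pack_alternating(roi), type=2, norm="ortho")
b = dct(pack_alternating(mirrored), type=2, norm="ortho")

# Mirroring reverses the packed buffer, so b[k] == (-1)**k * a[k]:
# one parity of coefficients is identical, the other only flips sign.
assert np.allclose(b, (-1.0) ** np.arange(len(a)) * a)
```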
  • The vertical feature vectors constructed from low-frequency spectral coefficients may be insensitive to moderate vertical scale and shift changes, and the significant horizontal overlap of the column groups makes the produced features insensitive to horizontal shifts of columns within the processing group. However, these features are still sensitive to changes in horizontal scale: the vertical feature vectors of the target and reference content will match only if they have a similar horizontal scale covering the same spread of the captured scene. The horizontal scale of the vertical feature vectors depends on the column width and is, correspondingly, inversely proportional to the number of columns. In turn, the number of columns that provides the required horizontal resolution may depend on the image format and its horizontal extent.
  • FIG. 4 shows how a fragment of a wide-screen image matches its full-screen version.
  • The wide-screen image X is decomposed into m vectors X[1..m], and the full-screen image Y into n vectors Y[1..n] of the same width. A subset X[i..i+n] containing n vectors of the wide-screen image X matches the set Y[1..n] of its full-screen version Y.
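A sketch of this matching step under assumed vector counts: the n full-screen vectors are slid across the m wide-screen vectors and the best-scoring offset is kept:

```python
import numpy as np

def best_offset(wide: np.ndarray, full: np.ndarray) -> tuple[int, float]:
    """Slide the n full-screen vectors Y[1..n] across the m wide-screen
    vectors X[1..m] and return the offset i minimizing the total distance
    to the subset X[i..i+n]."""
    m, n = len(wide), len(full)
    scores = [float(np.linalg.norm(wide[i:i + n] - full)) for i in range(m - n + 1)]
    i = int(np.argmin(scores))
    return i, scores[i]

wide_fp = np.random.rand(24, 16)                          # m = 24 feature vectors
full_fp = wide_fp[5:17] + 0.01 * np.random.randn(12, 16)  # sub-region, n = 12
print(best_offset(wide_fp, full_fp))                      # offset ~ 5
```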
  • If the image format and its aspect ratio are known, the image can be divided directly into a number of vectors with the required resolution. If information about the format of the target video is not accessible, or the available information is unreliable (for example, due to improper encoding or intentional manipulation), a multi-resolution classifier consisting of several feature sets with different horizontal scales may be constructed. The one-dimensional horizontal multi-resolution approach is employed to scale and isolate commonalities between the vertical feature vectors of content. For every processed image, the algorithm extracts multiple ordered sets of vertical feature vectors with different horizontal scales: it divides the image into a specified number of equal-width columns, calculates the vertical feature vectors, and then fragments the original image into an incrementally larger number of equal columns. As seen in the example of FIG. 5, the number of columns added, as well as the number of vertical feature vectors produced at each step, may change linearly.
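A sketch of the linear layering (the layer count, base column count, and strip-averaging are illustrative assumptions):

```python
import numpy as np

def split_into_columns(roi: np.ndarray, n_cols: int) -> np.ndarray:
    """Average each of n_cols equal-width vertical strips into one column."""
    edges = np.linspace(0, roi.shape[1], n_cols + 1).astype(int)
    return np.stack([roi[:, a:b].mean(axis=1) for a, b in zip(edges, edges[1:])], axis=1)

def multi_resolution_layers(roi: np.ndarray, n_layers: int = 4, base: int = 8) -> dict:
    """Layer s divides the image into base*s equal columns (8, 16, 24, ...),
    so the number of columns grows linearly from layer to layer; each layer's
    column set is then fed to the vertical feature extractor."""
    return {base * s: split_into_columns(roi, base * s) for s in range(1, n_layers + 1)}

layers = multi_resolution_layers(np.random.rand(120, 160))
print({k: v.shape for k, v in layers.items()})
```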
  • Alternatively, the scaling algorithm may employ a non-linear scaling method, which uses a different system of fragmentation: the columns within the scale layers may be added on a non-linear basis. Whether linear or non-linear, multi-resolution classifiers may be composed for a series of frames in the content. This means that each selected frame is stretched and analyzed by the algorithm, and a database for each permutation is constructed, recorded, and associated with every region of interest within the analyzed video sequence.
  • In one embodiment, the classifiers are composed only for key frames of the analyzed video. This approach is also applicable to static images and series of images (slide shows). Since the robustness of a single classifier may be insufficient for unique and reliable identification of a single video frame, a sequence of multiple classifiers may be calculated based on various properties of the image. In an embodiment, the series of classifiers is produced by cyclic shifts of the data in the processing buffers.
  • In another embodiment, the classifiers are composed for a series of consecutive frames of the analyzed video.
  • In one embodiment, the size of the produced sequence of classifiers is reduced using tuple differential coding. A Huffman code table with variable-size code words can be used to further reduce the size of the sequence.
  • The differential coding takes into account local changes of the extracted features in time. A more robust feature representation can be obtained by employing long-term analysis across multiple frames. In one embodiment, the difference encoding comprises calculating the difference between the feature vectors of the current frame and the averaged feature vectors calculated over a number of preceding frames.
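A sketch of that difference encoding, with an assumed averaging window:

```python
import numpy as np

def differential_encode(features: np.ndarray, window: int = 8) -> np.ndarray:
    """Replace each frame's feature vector with its difference from the
    average of up to `window` preceding frames. The small residuals are
    then well suited to a variable-length (e.g. Huffman) code."""
    encoded = np.empty_like(features, dtype=float)
    for t in range(len(features)):
        start = max(0, t - window)
        baseline = features[start:t].mean(axis=0) if t > start else np.zeros(features.shape[1])
        encoded[t] = features[t] - baseline
    return encoded

print(differential_encode(np.random.rand(30, 16)).shape)   # (30, 16)
```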
  • An alternative method of long-term processing of the time series of feature vectors comprises a linear transformation and de-correlation of the values. The coefficients of such a transformation can be obtained during a training phase. Alternatively, the Karhunen-Loève transform, or a simplified approximation of it such as the DCT, can be used.
  • An embodiment of the invention uses a non-linear time scale, which increases the robustness of the generated fingerprints to variations in the playback speed of the video content. The feature values within the processing time window are re-sampled non-uniformly according to the selected non-linear scale. In one embodiment, the series of features is sampled logarithmically and then de-correlated by applying a DCT.
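A sketch of the logarithmic resampling and DCT de-correlation of one feature's time series (sample counts assumed):

```python
import numpy as np
from scipy.fft import dct

def log_time_descriptor(series: np.ndarray, n_samples: int = 32, n_keep: int = 12) -> np.ndarray:
    """Resample one feature's time series on a logarithmic grid, then
    de-correlate with a DCT. A change of playback speed scales the time
    axis, which log sampling converts into an approximate shift."""
    t = len(series)
    log_grid = np.logspace(0.0, np.log10(t), n_samples) - 1.0   # spans 0..t-1
    resampled = np.interp(log_grid, np.arange(t), series)
    return dct(resampled, type=2, norm="ortho")[:n_keep]

print(log_time_descriptor(np.random.rand(128)).shape)   # (12,)
```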
  • In one embodiment, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described.
  • Embodiments of the invention may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, tangible computer-readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus. Furthermore, any computing systems referred to in the specification may include a single processor, or may be architectures employing multiple processor designs for increased computing capability.
  • Embodiments of the invention may also relate to a product that is produced by a computing process described herein. Such a product may comprise information resulting from a computing process, where the information is stored on a non-transitory, tangible computer-readable storage medium, and may include any embodiment of a computer program product or other data combination described herein.

Abstract

A video fingerprinting algorithm extracts characteristic features from regions of interest in a media object, such as a video signal. The regions of interest contain the perceptually important parts of the video signal. A fingerprint may be extracted from a target media object, and the fingerprint of the target media content may then be matched against multiple regions of interest of known reference fingerprints. This matching may allow identification of complex scenes, inserts, and different versions of the same content presented in, for example, different formats of the media object.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of U.S. Provisional Application No. 61/152,642, filed Feb. 13, 2009, which is incorporated by reference in its entirety.
  • BACKGROUND
  • This invention relates generally to the identification and comparison of multimedia materials containing a series of visual images.
  • Audio and video fingerprinting techniques are often used for identification of multimedia content. A digital fingerprint is a compact representation of the characteristic features of multimedia content that can be used to categorize the content and distinguish it from perceptually different materials. The characteristic features of audio and video fingerprints should be robust, withstanding typical content distortions, noise, digital compression, and filtering. At the same time, these distinguishing characteristics should ensure minimal false positive and false negative results, which lead to incorrect identification.
  • Unlike audio content, video content is three-dimensional, consisting of a two-dimensional image plane and a time axis. Due to the spatial nature of video content, it is subject to 2D transformations and distortions. During content production, the editing cycle may produce multiple versions of the same material with various spatial representations. Some video content may appear perceptually similar to the human eye, yet contain significantly different spatial composition, which results in varying image pixel values. Typical examples of these variations are the wide-screen and full-screen editions of the same video content. However, variations can also arise from cropping, rotation, and affine transformations introduced by compression or copying, for example, when video content projected on a movie screen is recorded from varying angles.
  • During the production cycle of the video content, multiple methods of recording are currently used. Common methods of content recording are shown in FIG. 2. The first category is based on recording a scene using apparatus with different zoom and aspect ratio, placing emphasis on a certain portion of a larger image. Typically, this produces two distinct versions of video content. The first format is full-screen, where only a confined part of the larger image is displayed on the screen. The visual image is produced by zooming on a chosen region of the larger image, then expanding the image to fit the typical television screen, usually with an aspect ratio of 4:3. A second possible format is the wide-screen format, where the camera records the entire wider scene. While this format displays the entire scene, the produced image may be compressed horizontally or zoomed to fit the video frame commonly used for video capture, such as film. One sub-category of this format is the anamorphic wide-screen display, which fits a wide-screen image into a standard full-screen frame, compressing the visual content horizontally while maintaining unchanged vertical resolution. Another sub-category is the masked wide-screen format, where the whole image is resized proportionally and padded on the top and bottom by black bars. Visual display may vary greatly between the full-screen and wide-screen versions of the same visual content, with the wide-screen format displaying a greater range of horizontal visual content while maintaining an identical or similar vertical range.
  • In addition to the spatial distortions and changes, the edited movie may also contain overlays, logos, banners, closed captions, and fragments of other movies embedded as picture-in-picture. Human viewers usually ignore these irrelevant parts of the visual content and concentrate on the perceptually significant elements, regardless of the video format and its aspect ratio. Existing video fingerprinting techniques are unable to differentiate between perceptually relevant content and insignificant elements and insets when extracting the characteristic features of a series of visual images. Lacking this perceptual selectivity, existing fingerprinting algorithms must analyze the whole visual image when determining regions for identification, treating perceptually relevant and irrelevant regions alike.
  • SUMMARY
  • One method of detecting regions of interest for fingerprinting comprises analyzing key video frames and continuous scenes. Key frames usually represent the beginning of a new scene, where the image content changes rapidly. The set of pixels within key frames where content differs usually represents the regions of interest. The comparison of key frames also allows identification of the static parts of the movie that are common to all scenes in a particular time interval. If elements do not change their shape and location on the screen over a continuous sequence of key frames, then most likely these objects are not part of the actual visual content and were embedded into the movie stream afterwards. Such elements may include logos, banners, information bars, and blank areas around the actual image. The fingerprint generation process can crop or shade these elements during the pre-processing stage and thereby minimize their contribution to the produced feature vectors of the video content. Errors in the detection and removal of such static objects have less effect on the quality of the generated fingerprints, as the algorithm may remove identical elements from similar content to produce comparable fingerprints for all copies of the content.
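A minimal sketch of this static-element detection, assuming grayscale key frames stacked in a NumPy array; the variance threshold is an illustrative tuning parameter:

```python
import numpy as np

def static_element_mask(key_frames: np.ndarray, threshold: float = 1e-4) -> np.ndarray:
    """True where a pixel barely changes across key frames. Pixels that stay
    constant across scene boundaries (logos, bars, blank padding) are likely
    overlaid elements that can be cropped or shaded before fingerprinting."""
    return key_frames.var(axis=0) < threshold

frames = np.random.rand(20, 120, 160)   # 20 key frames of a 120x160 video
frames[:, :12, :] = 0.0                 # simulated letterbox bar on top
mask = static_element_mask(frames)
print(mask[:12].all(), mask[12:].any())  # True False (bar detected, scene kept)
```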
  • In addition to detecting the aforementioned static elements, another method may be used to detect the boundaries of actual scenes within the movie. Some movies may contain multiple regions of interest covering parts of the movie that change independently. Some of them may represent distinct elements added to the video as a result of editing, including picture-in-picture, animated logos, split screens, etc. Analysis of the slow-changing movie frames, which lie between key frames, allows detection of content that was embedded into the movie during the processing and editing stages. This detection method is based on comparing the level of local changes with the level of changes in the whole image. An abrupt change of a localized image area against the slow variation of the entire picture suggests that the localized area is changing independently from the rest of the image, and vice versa. Such localized variation is characteristic of production-stage editing additions, such as picture-in-picture or animated logos.
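A block-based sketch of the local-versus-global comparison (block size and the abruptness ratio are assumed parameters):

```python
import numpy as np

def independently_changing_blocks(prev: np.ndarray, curr: np.ndarray,
                                  block: int = 16, ratio: float = 4.0) -> np.ndarray:
    """Flag blocks whose mean frame-to-frame change greatly exceeds the
    whole-image change: a localized abrupt change against a slowly varying
    picture suggests independently updating content (picture-in-picture,
    animated logos)."""
    diff = np.abs(curr.astype(float) - prev.astype(float))
    global_level = diff.mean() + 1e-12
    h, w = diff.shape
    flags = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            local = diff[by*block:(by+1)*block, bx*block:(bx+1)*block].mean()
            flags[by, bx] = local > ratio * global_level
    return flags
```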
  • Detailed analysis of the local changes can provide long-term statistics for a set of pixels gathered along the time axis, motion trajectories, contour lines, and gradient vectors. The time axis is perpendicular to the image plane and characterizes how each image pixel changes independently in time; it can be considered the third dimension of the video content. The gradient vector points in the direction of the greatest change of pixel intensity in 2D (spatial gradient) or 3D (spatio-temporal gradient). Large spatial gradient values are typical for the edges of captured and embedded objects as well as regions of interest. Repeated detection, across multiple frames, of a common shape bounded by large gradient values and by divergence of the gradient field suggests that the bounded area is a region of interest.
  • An alternate method of detecting the regions of interest is based on analysis of motion trajectories. These trajectories point in the direction of the smallest changes in the 3D spatio-temporal space, that is, in the direction of movement of individual objects and elements of the video content. There are numerous algorithms for motion estimation based on spatio-temporal filtering, edge detection, point correspondence, cross-correlation, etc. Motion estimation is widely used in video compression algorithms to increase efficiency and can be obtained at low computational cost directly from the compressed video stream. Some motion detection algorithms isolate individual moving objects and trace their position in time. The objects in the original content may move across the image plane (objects moving across the scene, camera pan) and change size (objects moving towards or away from the camera, camera zoom). Thus, the original (natural) objects move across the image plane and have continuous motion trajectories. Gaps in an object's motion trajectory within a scene indicate that the object was covered by another object located in front of it or by the frame boundary. While such shielding and interruption of motion trajectories are common in natural scenes, the appearance, shape, and location of the moving and shielding objects vary from scene to scene. Over the long term, the gaps in motion vectors, and the statistics of changes over a series of scenes, are distributed randomly and uniformly across the region, breaking consistently only at image boundaries.
  • The largest region reaching the image boundaries represents the main region of interest of the movie. In contrast to natural scene objects, secondary content added to the movie during an editing phase is limited to a static position and localized in a relatively small area within the larger region of interest representing the whole image. The changes and movement of embedded objects are contained in the same area (usually rectangular), and this area does not vary in size. In general, internal parts of the Region of Interest (ROI) that include statistically admissible gaps or breaks in motion and gradient trajectories are likely areas where the original video image was altered. Moreover, an ROI may contain another ROI. Each of these ROIs may be extracted and processed separately in order of size, starting from the largest. In the case of multiple ROIs, the number of regions selected for fingerprinting within the visual image may depend on the required level of detail to be contained in the fingerprint.
  • To obtain an acceptable level of similarity between the target and database content, the content may first be scaled properly. Once the ROIs are isolated, the visual content may be normalized and transformed into images with the same resolution and aspect ratio for further evaluation. However, possible errors in the detection of ROI boundaries may cause significant variations in the actual content aspect ratio and image resolution, even after the ROIs are normalized and converted into images of the same size and resolution. The size, orientation, and aspect ratio of the selected regions of interest, including the main (largest) region of interest, may depend greatly on the format of the processed video stream. For example, visual content can be stored anamorphically on the master copy, yet displayed through a lens to produce a wide-screen end result. This can result in the fingerprint database reference being coded in the horizontally condensed anamorphic format while the subject of the fingerprint analysis is the full-screen version with similar horizontal extent (and vice versa).
  • Typically, information about the pixel aspect ratio and the aspect ratio of the movie frame is stored in the movie stream, though this information is often misleading. This may lead to errors while decoding and expanding the anamorphic content to its original dimensions. For example, the full-screen version contains only a part of the larger wide-screen version. In general, the exact locations of the matching sub-regions of the full-screen and wide-screen versions of the same content are not known. The full-screen version may represent any sub-region of the wider version, not only its central part. Furthermore, the horizontal position of the matching regions may change from one scene to another, or even vary gradually within a scene due to horizontal panning.
  • Proper comparison of the normalized regions may involve a method that is invariant to moderate variations in scale, aspect ratio, and shift of the compared images. In one embodiment, to achieve scale- and shift-invariant matching, a comparison of the log-scaled magnitudes of FFT coefficients is performed. This approach, also known as the Fourier-Mellin transformation, is based solely on the magnitude of the spectral components and completely ignores the phase. However, as shown by Oppenheim, the phase makes a useful contribution to image reconstruction from spectral data, and thus can be used for reliable identification of the visual information. Feature vectors produced without accounting for the phase are not discriminative, which can result in a high rate of false positives. At the same time, feature vectors computed taking the phase into consideration are robust, but sensitive to scale and shift variations. An alternate approach is to implement a multi-resolution pyramid, which incorporates an algorithm that compares the visual content in the library with the target visual content at different scales and resolutions. This approach may involve significant computational costs and memory requirements to store all the data necessary to compare images with different resolutions and aspect ratios.
  • In accordance with embodiments of the invention, a fingerprinting method combines the shift and scale invariance of the spectral approach with the robustness of methods based on a multi-resolution pyramid. The shift and scale invariance is achieved by using low-frequency spectral coefficients to extract features in one direction while applying the multi-resolution approach in the other direction. In one embodiment, using the low-frequency spectral coefficients, the fingerprinting algorithm can identify and isolate common traits of video content that has undergone a moderate transformation along the vertical axis. Robustness to significant scale and shift variations in the horizontal direction is achieved by fragmenting the video content at different scales, followed by in-depth analysis of the individual fragments. Instead of a complex 3D multi-resolution pyramid comprising scale, horizontal-shift, and vertical-shift dimensions, the algorithm may use a simpler triangular-base representation of scale and shift in only one (horizontal) direction. This approach reduces the spatial complexity of the search algorithm from the O(n³) typical for a 3D multi-resolution pyramid to O(n²).
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a schematic drawing of a process for extracting and using a fingerprint from a media object, in accordance with an embodiment of the invention.
  • FIG. 2 is a schematic drawing of a system for capturing and saving a video signal into various video formats.
  • FIG. 3 illustrates an example process in which a frame from a video signal is scaled differently based on different underlying formats.
  • FIG. 4 illustrates an example process in which a frame from a video signal is scaled with different horizontal scaling.
  • FIG. 5 is a drawing of a multi-resolution pyramid for matching a test fingerprint to a reference fingerprint, in accordance with an embodiment of the invention.
  • The figures depict various embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.
  • DETAILED DESCRIPTION
  • Described herein are systems and methods for identification and quantitative classification of visual content by means of extraction of its distinguishing features and matching them with corresponding features of reference content. These features are calculated based on the exclusive characteristics of that content and presented in compact form—a digital fingerprint. This fingerprint can be matched against a set of reference fingerprints (e.g., reference digital fingerprints stored in a database) to determine the identity and relative quality of the video content based on the distance between the query and database fingerprints. Any of a variety of matching techniques may be used, as appropriate, such as those described in U.S. patent application Ser. No. 10/132,091, filed Apr. 24, 2002, or U.S. patent application Ser. No. 10/830,962, filed Apr. 22, 2004, each of which is incorporated by reference herein. Due to the nature of the fingerprint extraction algorithms, the results of embodiments of the invention do not suffer from degradation of the video content due to editing, distortions, moderate rotation, or affine transformations.
  • Embodiments of the invention enable the extraction of characteristic information from a media object as well as the matching or identification of the media object using that extracted characteristic information. As illustrated in FIG. 1, a frame 105 of a media signal (e.g., a frame from a video signal) taken from a media object 100 is input into a fingerprint extraction algorithm 110. The media object 100 may be provided by any of a wide variety of sources. Based on one or more frames 105, the fingerprint extraction algorithm 110 generates one or more fingerprints 115 that are characteristic of the frames 105. Serving as a distinguishing identifier, the fingerprint 115 provides information relating to the identity or other characteristics of the sequence of frames 105 of the media object 100. In particular, one or more fingerprints 115 for the media object 100 may allow the media object 100 to be uniquely identified. Embodiments of the fingerprint extraction algorithm 110 are described in more detail below.
  • Once generated, the extracted fingerprint 115 can then be used in a further process or stored on a medium for later use. For example, the fingerprint 115 can be used by a fingerprint matching algorithm 120, which compares the fingerprint 115 with entries in a fingerprint database 125 (e.g., a collection of fingerprints from known sources) to determine the identity of the media object 100 from which the fingerprint 115 was generated.
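The FIG. 1 flow can be summarized in a toy sketch; the per-frame mean used as a "fingerprint" here is a deliberate placeholder for the extraction algorithm 110, not the patent's actual features:

```python
import numpy as np

def extract_fingerprint(frames: np.ndarray) -> np.ndarray:
    """Stand-in for fingerprint extraction algorithm 110 (toy feature)."""
    return frames.mean(axis=(1, 2))        # one coarse value per frame

def match_fingerprint(fp: np.ndarray, database: dict) -> str:
    """Stand-in for matching algorithm 120 against fingerprint database 125."""
    return min(database, key=lambda name: np.linalg.norm(database[name] - fp))

reference = {"clip_a": np.linspace(0.2, 0.8, 30), "clip_b": np.linspace(0.9, 0.1, 30)}
query = np.random.rand(30, 64, 64) * 0.01 + np.linspace(0.2, 0.8, 30)[:, None, None]
print(match_fingerprint(extract_fingerprint(query), reference))   # clip_a
```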
  • The media object 100 may originate from any of a wide variety of sources, depending on the application of the fingerprinting system. In one embodiment, the media object 100 is sampled from a broadcast received from a media broadcaster and digitized. Alternatively, a media broadcaster may transmit audio and/or video in digital form, obviating the need to digitize it. Types of media broadcasters include, but are not limited to, radio transmitters, satellite transmitters, and cable operators. In another embodiment, a media server retrieves audio files from a media library and transmits a digital broadcast over a network (e.g., the Internet) for use by the fingerprint extraction algorithm 110. A streaming Internet radio broadcast is one example of this type of architecture, where media, advertisements, and other content is delivered to an individual or to a group of users. In another embodiment, the fingerprint extraction algorithm 110 receives the media object 100 from a client computer that has access to a storage device containing media object files. The client computer retrieves an individual media object file from the storage and sends the file to the fingerprint extraction algorithm 110 for generating one or more fingerprints 115 from the file. The fingerprint extraction algorithm 110 may be performed by the client computer or by a remote server coupled to the client computer over a network.
• Embodiments of the video fingerprinting algorithm extract characteristic features of multiple regions of interest containing the most important and perceptually essential parts of the visual images. The fingerprints of each region of interest of target content may be matched against multiple regions of reference content, thus allowing identification of complex scenes, inserts, and different versions of the same content presented in wide-screen and full-screen formats.
• In one embodiment of a method for identifying the region of interest, the variation of the video content is calculated and analyzed over a given period of time. The fingerprinting algorithm calculates long-term statistics of the changes in pixels across multiple frames and identifies the areas of maximum variation of the pixel values. Once the areas of maximum variation are determined, the boundaries and orientation of each area are estimated. If the area has a distinct rectangular shape, its orientation may be defined by the angle between its sides and the vertical or horizontal axis. If the selected area has an irregular shape, its orientation may be calculated as the orientation of the smallest possible circumscribed rectangle covering the entire area of interest. Since the region orientation is ambiguous with respect to 90-degree rotation, the orientation is defined as the smallest angle by which the region has to be rotated to align its sides with the vertical and horizontal axes.
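By way of illustration only, the following is a minimal sketch of this variance-based region detection, assuming the sampled video is available as a NumPy array `frames` of shape (T, H, W) holding T grayscale frames; the function name and the 90th-percentile threshold are illustrative choices, not part of the claimed method.

```python
import numpy as np

def variance_roi(frames: np.ndarray, quantile: float = 0.90):
    """Bounding box (top, bottom, left, right) of the area whose pixel
    values vary the most over the analyzed period of time."""
    variance = frames.astype(np.float64).var(axis=0)    # long-term per-pixel variation
    mask = variance >= np.quantile(variance, quantile)  # keep the most active pixels
    rows, cols = np.nonzero(mask)
    return rows.min(), rows.max(), cols.min(), cols.max()
```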
• In another embodiment, the fingerprinting algorithm identifies the spatial and spatio-temporal gradient vectors of the sampled video content and isolates the areas of maximum value. This may be achieved by calculating the divergence of the gradient field over an interval of frames. The divergence of the spatio-temporal gradient, div G, is a scalar value that is invariant under orthogonal transformations and thus independent of region orientation. The maxima of this invariant concentrate along the edges of regions of interest, which can be used to isolate such regions.
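As a rough sketch under the same assumptions (a (T, H, W) grayscale array), the divergence of a gradient field is the Laplacian of the volume, which can be accumulated over the frame interval:

```python
import numpy as np

def gradient_divergence(frames: np.ndarray) -> np.ndarray:
    """div G for the spatio-temporal gradient G = (dI/dt, dI/dy, dI/dx);
    as the divergence of a gradient field this is the Laplacian of the
    volume, invariant under orthogonal transformations."""
    volume = frames.astype(np.float64)
    gt, gy, gx = np.gradient(volume)
    div = (np.gradient(gt, axis=0) + np.gradient(gy, axis=1)
           + np.gradient(gx, axis=2))
    return np.abs(div).mean(axis=0)   # maxima concentrate along region edges
```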
• Another method of detecting the regions of interest is to analyze the image along contour lines. Contour lines connect image points with equal values and are therefore perpendicular to the spatial gradient. The analog of a contour line in spatio-temporal space is a level set, or level surface, that connects points with similar values in 3D space. Typically, contour lines have a higher density along object boundaries. Areas repeatedly bounded by continuous contour lines within a given time interval likely represent regions of interest. The same analysis applies to points of discontinuity where contour lines fracture due to overlaps: artificial objects embedded into the video stream disrupt the contours of the natural objects they intersect, causing such discontinuities. Identifying these points may be important for isolating the regions of interest, and is accomplished through a filtration process performed over a series of frames. Once the filtration is complete, the points of maximization form a distinct border around the region of interest. For example, if the maximized points form a rectangular perimeter, then the region of interest for fingerprinting lies within that rectangle.
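One loose interpretation of this filtration step, offered only as a sketch, is to count how often each pixel lies on a strong contour across the frame series; the gradient-magnitude proxy for contour density and the percentile threshold are assumptions:

```python
import numpy as np

def persistent_contour_map(frames: np.ndarray, quantile: float = 0.95) -> np.ndarray:
    """Per-pixel count of how often the pixel sits on a strong contour
    (high gradient magnitude); points that recur across the series of
    frames trace the border of the region of interest."""
    counts = np.zeros(frames.shape[1:], dtype=np.int32)
    for frame in frames.astype(np.float64):
        gy, gx = np.gradient(frame)
        magnitude = np.hypot(gx, gy)
        counts += magnitude >= np.quantile(magnitude, quantile)
    return counts
```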
• In another embodiment, the isolation of the region of interest is achieved by tracking the motion trajectories of the video content. The motion trajectories are the changes in the pixels over time for a series of frames. When a motion trajectory is broken, it forms a disruption point. These disruption points occur where the motion trajectories are interrupted by overlaid editing changes, such as picture-in-picture, logos, banners, and closed captions. The fingerprinting algorithm identifies the highest concentration of disruption points and uses a filtration system to form a boundary along these locations. This boundary delineates the region of interest.
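A hedged sketch of one way to surface such disruption points, using OpenCV's Farneback dense optical flow as a stand-in for the trajectory tracking (the description does not prescribe a particular flow algorithm); `prev_frame` and `next_frame` are assumed to be consecutive 8-bit grayscale frames:

```python
import cv2
import numpy as np

def disruption_map(prev_frame: np.ndarray, next_frame: np.ndarray) -> np.ndarray:
    """Dense optical flow between two frames; sharp spatial breaks in the
    flow magnitude approximate points where motion trajectories are
    interrupted by overlays such as logos or banners."""
    flow = cv2.calcOpticalFlowFarneback(prev_frame, next_frame, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.hypot(flow[..., 0], flow[..., 1])
    gy, gx = np.gradient(magnitude)
    return np.hypot(gx, gy)

# Summing disruption_map over many frame pairs and thresholding the result
# approximates the filtration that outlines the overlaid region.
```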
• Once the regions of interest are identified, they are transformed into a proper format that can be used for fingerprinting. The fingerprinting algorithm selects the required number of regions of interest to be analyzed. This selection may be based on the size, stability, and length of these regions. The selected regions of interest may have different shapes and orientations. The most typical shape of a region of interest is a rectangle; another typical shape is a trapezoid, which can easily be transformed into a rectangle. Rectangular (as well as trapezoidal) shapes are easy to identify by their edges, and the orientation of a rectangular ROI is determined by the orientation of its edges. While some regions of interest (like picture-in-picture) naturally have rectangular boundaries, others (like logos) may have an irregular shape. Any irregular object may be extracted and padded to a rectangular shape. The size and orientation of the circumscribed rectangle are selected to guarantee that the produced rectangle contains the object entirely and that the padded area is minimal.
• Once the key regions of interest are isolated, they are transformed into rectangular regions of identical size and orientation. This may be done using standard methods of image rotation, scale, and skew transformation. In one embodiment, each produced region of interest has two sides oriented vertically and two sides oriented horizontally, resembling a matrix with N columns and M rows, where N and M are the width and height of the ROI, respectively. The series of video frames bounded by a region of interest over the time interval [t1, t2, . . . , tT] is represented as a series of images I=[I1, I2, . . . , IT]. The fingerprint generation process extracts distinguishing features from the input spatio-temporal sequence I and maps the formed feature vector into an output sequence X=[X1, X2, . . . , XT]. One region of interest A is matched to another region B by computing the distance between their fingerprints X=F(A) and Y=F(B); this distance reflects the differences between the corresponding spatio-temporal regions A and B.
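A minimal sketch of this matching step, assuming both regions have already been normalized to identical size and orientation; the low-frequency 2-D DCT block used here is only a stand-in for the feature extractor F:

```python
import numpy as np
from scipy.fft import dctn

def fingerprint(region: np.ndarray, coeffs: int = 4) -> np.ndarray:
    """Map an ROI sequence of shape (T, M, N) to a feature sequence
    X = [X1, ..., XT] (one vector per frame)."""
    return np.stack([dctn(frame, norm='ortho')[:coeffs, :coeffs].ravel()
                     for frame in region.astype(np.float64)])

def region_distance(region_a: np.ndarray, region_b: np.ndarray) -> float:
    """Distance between regions A and B via their fingerprints
    X = F(A) and Y = F(B)."""
    x, y = fingerprint(region_a), fingerprint(region_b)
    return float(np.linalg.norm(x - y, axis=1).mean())
```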
• Even when the regions of interest are scaled to the same size, this may not assure a flawless base for comparison. A region of interest in anamorphic format containing the wide-screen version will yield different characteristics than a masked version of the same content. If the masked version is contained in the main region of interest representing the whole video frame, the region detection algorithm will most likely trim the top and bottom blank areas. However, if the masked version of the content is embedded into another video stream (picture-in-picture), the region recognition algorithm may detect the outer boundaries of the area, producing a region of interest that does contain blank areas of the masked image. As shown in FIGS. 2-4, the normalized and scaled regions of anamorphic and masked versions of the same content have identical horizontal scale but different vertical scale. In this case, a direct comparison of the two versions is possible column by column, assuming that every column is processed using a scale- and shift-invariant transform. One possible process in this situation is the Fourier-Mellin transform. The Mellin transform is scale invariant and can be approximated by calculating the magnitude of the FFT of resampled data represented in logarithmic scale; an additional FFT magnitude calculation may be used to provide shift invariance. One disadvantage of this approach is that it is based on the magnitude spectrum and completely ignores the phase of the processed series. Different series of values may produce similar magnitude spectra, which means that any method based on spectral magnitude becomes indiscriminative, and alternative methods must be used to generate fingerprints. A possible alternative to the magnitude spectrum is to use low-frequency spectral coefficients, which are not as sensitive as the mid-to-high-frequency coefficients to moderate shift and scale variations. Alternatively, the low-frequency coefficients of any spectral or wavelet transform can be used rather than the FFT to increase the robustness of the fingerprints to shift and scale variations.
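The following sketch, offered only as an approximation of the transform chain described above, first takes an FFT magnitude for shift invariance and then warps that spectrum onto a logarithmic axis before a second FFT magnitude for scale invariance; the bin count is an assumption:

```python
import numpy as np

def shift_scale_invariant(column: np.ndarray, bins: int = 64) -> np.ndarray:
    """Mellin-like descriptor for one image column: the first magnitude
    spectrum discards shifts; resampling it logarithmically turns scaling
    into a shift, which the second magnitude spectrum then discards."""
    spectrum = np.abs(np.fft.rfft(column.astype(np.float64)))
    log_idx = np.logspace(0, np.log10(len(spectrum) - 1), bins)
    warped = np.interp(log_idx, np.arange(len(spectrum)), spectrum)
    return np.abs(np.fft.rfft(warped))
```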
• In another embodiment, the fingerprinting algorithm extends the number of samples and the resolution of the low-frequency coefficients by increasing the size of the processed data buffer. The column feature extractor divides the entire video image into n columns Xn=[x1, x2, . . . , xn]. The algorithm places the pixel values from a column xi consisting of m pixels into a larger processing buffer containing k*m elements. Once this is accomplished, the remainder of the processing buffer is padded with null values and the low-frequency spectral coefficients of the data in the buffer are calculated. This may be accomplished through the use of the FFT, the DCT, or any other suitable transform.
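For example (a sketch, with k and the coefficient count chosen arbitrarily), the zero-padded buffer and low-frequency DCT might look like:

```python
import numpy as np
from scipy.fft import dct

def column_features(column: np.ndarray, k: int = 4, n_coeffs: int = 8) -> np.ndarray:
    """Place the m pixel values of one column into a k*m buffer, pad the
    remainder with zeros, and keep only the low-frequency DCT coefficients;
    the longer buffer improves their frequency resolution."""
    m = len(column)
    buffer = np.zeros(k * m)
    buffer[:m] = column
    return dct(buffer, norm='ortho')[:n_coeffs]
```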
• An alternative method relies on filling the larger processing buffer with values selected from multiple columns. For the sake of example, suppose the content is fragmented into more than four columns, though the algorithm is not limited to any particular number of fragments. The first four columns are selected and then reconstructed as a single vertical strip of larger size. This may be achieved by extracting the columns one by one from the original region of interest, in order from left to right, and placing them in a single buffer containing 4*m elements. This processing buffer forms a larger base for feature extraction on the vertical axis. Once the data in the processing buffer is transformed into the frequency domain, the feature vector for the set of the first four columns is constructed, and the processing algorithm selects the next set of four columns. The calculation of the vertical feature vectors recurs until all columns of the region are processed.
• The number of columns in the group selected for calculation of the vertical feature vector, as well as the way the groups overlap, may vary. In one embodiment of the described technique, the processing group includes four columns organized from left to right; the content of every column is ordered from top to bottom; and on each iteration the algorithm shifts by one column to the right. Thus, processing groups overlap with neighboring groups by 75%, and an image consisting of n′ columns yields n=n′−3 vertical feature vectors, as sketched below.
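The sketch below assumes, beyond what is stated above, that each column strip is first collapsed to a one-dimensional top-to-bottom profile by averaging across its width; the column and coefficient counts are illustrative:

```python
import numpy as np
from scipy.fft import dct

def vertical_feature_vectors(image: np.ndarray, n_cols: int = 16,
                             group: int = 4, n_coeffs: int = 8) -> np.ndarray:
    """Divide the image into n_cols equal columns and emit one low-frequency
    DCT feature vector per group of `group` adjacent columns, shifting by
    one column per iteration (75% overlap for group = 4)."""
    strips = np.array_split(image.astype(np.float64), n_cols, axis=1)
    profiles = [s.mean(axis=1) for s in strips]   # top-to-bottom profile per column
    vectors = [dct(np.concatenate(profiles[i:i + group]), norm='ortho')[:n_coeffs]
               for i in range(n_cols - group + 1)]
    return np.array(vectors)                      # n_cols - 3 vectors for group = 4
```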
• An alternative method for calculating vertical feature vectors is to arrange the columns in alternating fashion. The processing group may consist of four columns organized from left to right; the content of every odd column is ordered from top to bottom, and the content of every even column is ordered in reverse (from bottom to top). Thus, the processing buffer contains four columns with alternating pixel order. Once the processing buffer is filled with column data in this alternating fashion, a spectral transform such as the DCT is performed on the buffer data and separate sets of odd and even low-frequency coefficients are selected. One possible advantage of this approach is that it allows the extraction of comparable feature vectors when the video content is flipped (mirrored) in the horizontal direction. The DCT of the mirrored version arranged in alternating fashion has identical odd coefficients to the original version, while the even DCT coefficients have the same absolute value but the opposite sign. This allows matching both original and horizontally mirrored targets using the same set of vertical feature vectors in the reference database.
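The mirror-matching property can be checked numerically. In the sketch below (`c1` through `c4` are stand-in column profiles), the alternating buffer of the mirrored group is exactly the reversal of the original buffer, and the DCT-II of a reversed sequence negates every other coefficient while leaving the rest unchanged:

```python
import numpy as np
from scipy.fft import dct

def alternating_buffer(profiles):
    """Concatenate four column profiles, reversing every second one
    (bottom-to-top) as described above."""
    return np.concatenate([p if i % 2 == 0 else p[::-1]
                           for i, p in enumerate(profiles)])

rng = np.random.default_rng(0)
c1, c2, c3, c4 = rng.random((4, 32))             # hypothetical column profiles
original = alternating_buffer([c1, c2, c3, c4])
mirrored = alternating_buffer([c4, c3, c2, c1])  # horizontal flip reverses column order
assert np.allclose(mirrored, original[::-1])
x, y = dct(original, norm='ortho'), dct(mirrored, norm='ortho')
assert np.allclose(y, x * (-1.0) ** np.arange(len(x)))  # alternate coefficients flip sign
```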
• As mentioned above, the vertical feature vectors constructed of low-frequency spectral coefficients may be insensitive to moderate vertical scale and shift changes. In addition to the vertical shift and scale invariance, significant horizontal overlap of the column groups makes the produced features insensitive to horizontal shifts of columns within the processing group. At the same time, these features remain sensitive to changes in horizontal scale: the vertical feature vectors of the target and reference content will match only if they have a similar horizontal scale covering the same spread of the captured scene. The horizontal scale of the vertical feature vectors depends on the column width and is, correspondingly, inversely proportional to the number of columns. In turn, the number of columns that provides the required horizontal resolution may depend on the image format and its horizontal extent.
• The example illustrated in FIG. 4 shows how a fragment of a wide-screen image matches its full-screen version. The wide-screen image X is decomposed into m vectors Xm[1,m] and the full-screen image Y into n vectors Yn[1,n] of the same width. A sub-set Xm[i,i+n] containing n vectors of the wide-screen image X matches the set Yn[1,n] of its full-screen version Y. If the image format and its aspect ratio are known, the image can be divided directly into the number of vectors giving the required resolution. If information about the format of the target video is not accessible, or the available information is not reliable (for example, due to improper encoding or intentional manipulation), a multi-resolution classifier consisting of several feature sets with different horizontal scales may be constructed.
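As a sketch of this fragment matching, assuming the two sets of vertical feature vectors have already been extracted at the same horizontal scale as arrays of shape (m, d) and (n, d), the alignment offset i can be found by exhaustive search:

```python
import numpy as np

def best_offset(wide_vectors: np.ndarray, full_vectors: np.ndarray) -> int:
    """Slide the n full-screen vectors across the m wide-screen vectors and
    return the offset whose sub-set Xm[i, i+n] is closest to Yn[1, n]."""
    m, n = len(wide_vectors), len(full_vectors)
    costs = [np.linalg.norm(wide_vectors[i:i + n] - full_vectors)
             for i in range(m - n + 1)]
    return int(np.argmin(costs))
```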
• In an embodiment of the invention, a one-dimensional horizontal multi-resolution approach is employed to scale and isolate commonalities between the vertical feature vectors of content. For every processed image, the algorithm extracts multiple ordered sets of vertical feature vectors with different horizontal scales. The algorithm divides every image into a specified number of columns of the same width and calculates the vertical feature vectors. It then fragments the original image into an incrementally larger number of equal columns. As seen in the example of FIG. 5, the number of columns added, as well as the number of vertical feature vectors produced on each step, may change linearly. The resulting multi-resolution classifier X={X1, X2, . . . , XL} containing L layers may be presented in the form of a triangle. Each row of the classifier triangle contains an n-tuple Xn=[x1, x2, . . . , xn] of feature vectors with scale S=1:n′, where n′ is the number of columns used to produce the nth layer. By employing the multi-scale classifiers of the target content and reference content, the algorithm can cross-reference all possible horizontal permutations that may be encountered due to editing or varied formatting.
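A compact sketch of the triangle-shaped classifier; the layer count and the linear column increments are illustrative, and `extract` may be any column-wise feature extractor, such as the `vertical_feature_vectors` sketch shown earlier:

```python
import numpy as np

def multi_resolution_classifier(image: np.ndarray, extract,
                                layers: int = 4, base_cols: int = 4,
                                step: int = 4) -> list:
    """Layer l re-divides the image into base_cols + step*l equal columns
    and stores that layer's n-tuple of vertical feature vectors, so the
    classifier can be laid out as a triangle of feature sets."""
    return [extract(image, base_cols + step * layer) for layer in range(layers)]

# Hypothetical usage with the earlier sketch:
# classifier = multi_resolution_classifier(
#     image, lambda img, n: vertical_feature_vectors(img, n_cols=n))
```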
• In an embodiment of the invention, the scaling algorithm employs a non-linear scaling method. The non-linear scaling approach uses a different system of fragmentation: the columns within the scale layers may be added on a non-linear basis. Regardless of the approach, linear or non-linear, multi-resolution classifiers may be composed for a series of frames in the content. This means that each selected frame is stretched and analyzed by the algorithm, and a database for each permutation is constructed, recorded, and associated with every region of interest within the analyzed video sequence.
• In an embodiment of the invention, the classifiers are composed only for key frames of the analyzed video. This approach is also applicable to static images and to series of images (slide shows). Since the robustness of a single classifier may be insufficient for unique and reliable identification of a single video frame, a sequence of multiple classifiers may be calculated based on various properties of the image. In an embodiment, the series of classifiers is produced by cyclic shifts of the data in the processing buffers.
• In an embodiment of this invention, the classifiers are composed for a series of consecutive frames of the analyzed video. The size of the produced sequence of classifiers is reduced using tuple differential coding. In an embodiment of the invention, the difference between corresponding values of consecutive classifiers is quantized and stored as an integer number of fixed size. Alternatively, a Huffman code table with code words of variable size could be used to further reduce the size of the sequence. The differential coding takes into account local changes of the extracted features in time; a more robust feature representation can be obtained by employing long-term analysis across multiple frames. In an embodiment of the invention, the difference encoding comprises calculating the difference between the feature vectors of the current frame and the averaged feature vectors calculated for a number of preceding frames.
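A sketch of both flavors of the differential coding described above; the quantization scale, integer width, and window length are assumptions:

```python
import numpy as np

def differential_encode(features: np.ndarray, scale: float = 16.0) -> np.ndarray:
    """Quantize frame-to-frame differences between consecutive classifiers
    and store each as a fixed-size 8-bit integer."""
    deltas = np.diff(features.astype(np.float64), axis=0) * scale
    return np.clip(np.rint(deltas), -128, 127).astype(np.int8)

def difference_vs_mean(features: np.ndarray, window: int = 8) -> np.ndarray:
    """Long-term variant: difference between the current feature vector and
    the average feature vector of the preceding `window` frames."""
    feats = features.astype(np.float64)
    out = np.zeros_like(feats)
    for t in range(1, len(feats)):
        start = max(0, t - window)
        out[t] = feats[t] - feats[start:t].mean(axis=0)
    return out
```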
• An alternative method of long-term processing of a time series of feature vectors comprises linear transformation and de-correlation of the values. Coefficients of such a transformation can be obtained during a training phase. Alternatively, the Karhunen-Loève transform, or a simplified approximation of it such as the DCT, can be used. An embodiment of the invention uses a non-linear time scale, which increases the robustness of the generated fingerprints to variations in the playback speed of the video content. The feature values within the processing time window are re-sampled non-uniformly according to the selected non-linear scale. In one embodiment, the series of features is sampled logarithmically and then de-correlated by applying the DCT.
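A sketch of this non-linear time scale, assuming each feature dimension is processed as a one-dimensional time series over the analysis window; the DCT stands in for the trained or Karhunen-Loève de-correlation:

```python
import numpy as np
from scipy.fft import dct

def long_term_features(series: np.ndarray, bins: int = 16) -> np.ndarray:
    """Resample one feature's time series on a logarithmic axis (tolerating
    moderate playback-speed changes) and de-correlate with a DCT."""
    t = len(series)
    log_t = np.logspace(0, np.log10(t - 1), bins)   # non-uniform sample times
    resampled = np.interp(log_t, np.arange(t), series.astype(np.float64))
    return dct(resampled, norm='ortho')
```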
  • The foregoing description of the embodiments of the invention has been presented for the purpose of illustration; it is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above disclosure.
• Some portions of this description describe the embodiments of the invention in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like. Furthermore, it has also proven convenient at times to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, hardware, or any combinations thereof.
  • Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In one embodiment, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described.
  • Embodiments of the invention may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus. Furthermore, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
  • Embodiments of the invention may also relate to a product that is produced by a computing process described herein. Such a product may comprise information resulting from a computing process, where the information is stored on a non-transitory, tangible computer readable storage medium and may include any embodiment of a computer program product or other data combination described herein.
  • Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments of the invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.

Claims (21)

1-30. (canceled)
31. A method comprising:
receiving, by a processor executing a fingerprint extraction algorithm, a frame of a media signal obtained from a media object;
extracting, by the processor, characteristic features of multiple regions of interest of the media object;
isolating, by the processor, each region of interest;
transforming, by the processor, each region of interest into a rectangular region;
calculating, by the processor, a fingerprint of each region of interest; and
matching, by the processor, each fingerprint with reference fingerprints.
32. The method of claim 31, wherein the fingerprint of each region of interest provides information relating to an identity or other characteristics of each region of interest.
33. The method of claim 31, wherein the matching of each fingerprint with reference fingerprints further comprises matching the fingerprint of each region of interest against multiple regions of reference content.
34. The method of claim 31, further comprising selecting, by the processor, a required number of regions of interest that will be analyzed.
35. The method of claim 34, wherein the selecting is based on a characteristic selected from a group of characteristic types consisting of size, stability, or length of these regions of interest.
36. The method of claim 31, wherein an orientation of the rectangular region of interest is determined by orientation of its edges.
37. The method of claim 31, wherein each rectangular region has an identical size and orientation.
38. The method of claim 37, wherein each rectangular region having an identical size and orientation comprises converting each rectangular region to have the identical size and orientation through a method selected from a group consisting of image rotation, scale, or skew transformations.
39. A computing device comprising:
a processor;
a storage medium for tangibly storing thereon program logic for execution by the processor, the program logic comprising:
frame receiving logic executed by the processor for receiving a frame of a media signal obtained from a media object;
extracting logic executed by the processor for extracting characteristic features of multiple regions of interest of the media object;
isolating logic executed by the processor for isolating each region of interest;
transforming logic executed by the processor for transforming each region of interest into a rectangular region;
calculating logic executed by the processor for calculating a fingerprint of each region of interest; and
matching logic executed by the processor for matching each fingerprint with reference fingerprints.
40. The computing device of claim 39, wherein the fingerprint of each region of interest provides information relating to an identity or other characteristics of each region of interest.
41. The computing device of claim 39, wherein the matching logic for matching each fingerprint with reference fingerprints further comprises reference content matching logic executed by the processor for matching the fingerprint of each region of interest against multiple regions of reference content.
42. The computing device of claim 39, further comprising selecting logic executed by the processor for selecting a required number of regions of interest that will be analyzed.
43. The computing device of claim 42, wherein the selecting is based on a characteristic selected from a group of characteristic types consisting of size, stability, or length of these regions of interest.
44. The computing device of claim 39, wherein an orientation of the rectangular region of interest is determined by orientation of its edges.
45. The computing device of claim 39, wherein each rectangular region has an identical size and orientation.
46. The computing device of claim 45, wherein each rectangular region having an identical size and orientation comprises converting logic executed by the processor for converting each rectangular region to have the identical size and orientation through a method selected from a group consisting of image rotation, scale, or skew transformations.
47. A non-transitory computer-readable storage medium tangibly storing computer program instructions capable of being executed by a computer processor, the computer program instructions defining the steps of:
receiving, by the processor executing a fingerprint extraction algorithm, a frame of a media signal obtained from a media object;
extracting, by the processor, characteristic features of multiple regions of interest of the media object;
isolating, by the processor, each region of interest;
transforming, by the processor, each region of interest into a rectangular region;
calculating, by the processor, a fingerprint of each region of interest; and
matching, by the processor, each fingerprint with reference fingerprints.
48. The medium of claim 47, wherein the fingerprint of each region of interest provides information relating to an identity or other characteristics of each region of interest.
49. The medium of claim 47, wherein the matching of each fingerprint with reference fingerprints further comprises matching the fingerprint of each region of interest against multiple regions of reference content.
50. The medium of claim 47, further comprising selecting, by the processor, a required number of regions of interest that will be analyzed.
US14/594,278 2009-02-13 2015-01-12 Extraction of Video Fingerprints and Identification of Multimedia Using Video Fingerprinting Abandoned US20150125036A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/594,278 US20150125036A1 (en) 2009-02-13 2015-01-12 Extraction of Video Fingerprints and Identification of Multimedia Using Video Fingerprinting

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US15264209P 2009-02-13 2009-02-13
US12/706,658 US8934545B2 (en) 2009-02-13 2010-02-16 Extraction of video fingerprints and identification of multimedia using video fingerprinting
US14/594,278 US20150125036A1 (en) 2009-02-13 2015-01-12 Extraction of Video Fingerprints and Identification of Multimedia Using Video Fingerprinting

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12/706,658 Continuation US8934545B2 (en) 2009-02-13 2010-02-16 Extraction of video fingerprints and identification of multimedia using video fingerprinting

Publications (1)

Publication Number Publication Date
US20150125036A1 true US20150125036A1 (en) 2015-05-07

Family

ID=42560907

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/706,658 Expired - Fee Related US8934545B2 (en) 2009-02-13 2010-02-16 Extraction of video fingerprints and identification of multimedia using video fingerprinting
US14/594,278 Abandoned US20150125036A1 (en) 2009-02-13 2015-01-12 Extraction of Video Fingerprints and Identification of Multimedia Using Video Fingerprinting

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12/706,658 Expired - Fee Related US8934545B2 (en) 2009-02-13 2010-02-16 Extraction of video fingerprints and identification of multimedia using video fingerprinting

Country Status (1)

Country Link
US (2) US8934545B2 (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7516074B2 (en) * 2005-09-01 2009-04-07 Auditude, Inc. Extraction and matching of characteristic fingerprints from audio signals
US9510044B1 (en) * 2008-06-18 2016-11-29 Gracenote, Inc. TV content segmentation, categorization and identification and time-aligned applications
US8913851B1 (en) * 2011-04-29 2014-12-16 Google Inc. Fingerprinting image using points of interest for robust image identification
CN102684827B (en) * 2012-03-02 2015-07-29 华为技术有限公司 Data processing method and data processing equipment
US8620021B2 (en) 2012-03-29 2013-12-31 Digimarc Corporation Image-related methods and arrangements
US9202255B2 (en) 2012-04-18 2015-12-01 Dolby Laboratories Licensing Corporation Identifying multimedia objects based on multimedia fingerprint
US9146990B2 (en) 2013-01-07 2015-09-29 Gracenote, Inc. Search and identification of video content
US10554707B2 (en) 2013-08-13 2020-02-04 Imvision Software Technologies Ltd. Method and system for self-detection and efficient transmission of real-time popular recorded over-the-top streams over communication networks
US9432731B2 (en) * 2013-07-17 2016-08-30 Imvision Software Technologies Ltd. Method and system for detecting live over the top (OTT) streams in communications networks
US9674252B2 (en) 2013-07-17 2017-06-06 Imvision Software Technologies Ltd. System and method for efficient delivery of repetitive multimedia content
US10977298B2 (en) * 2013-11-08 2021-04-13 Friend for Media Limited Identifying media components
US9832353B2 (en) 2014-01-31 2017-11-28 Digimarc Corporation Methods for encoding, decoding and interpreting auxiliary data in media signals
US9990693B2 (en) * 2014-04-29 2018-06-05 Sony Corporation Method and device for rendering multimedia content
CN105447929B (en) * 2014-08-29 2017-11-21 北京浪奇捷联科技开发有限公司 A kind of recognition methods of object passage path and system
US11170215B1 (en) * 2016-04-28 2021-11-09 Reality Analytics, Inc. System and method for discriminating and demarcating targets of interest in a physical scene
US20170372142A1 (en) * 2016-06-27 2017-12-28 Facebook, Inc. Systems and methods for identifying matching content
US9972060B2 (en) * 2016-09-08 2018-05-15 Google Llc Detecting multiple parts of a screen to fingerprint to detect abusive uploading videos
CN106875422B (en) * 2017-02-06 2022-02-25 腾讯科技(上海)有限公司 Face tracking method and device
JP7073634B2 (en) * 2017-06-09 2022-05-24 富士フイルムビジネスイノベーション株式会社 Electronic devices and programs
US10440413B2 (en) 2017-07-31 2019-10-08 The Nielsen Company (Us), Llc Methods and apparatus to perform media device asset qualification
US10946745B2 (en) * 2017-08-30 2021-03-16 Texas Instruments Incorporated GPU-less instrument cluster system with full asset sweep
CN110709841B (en) * 2017-12-13 2023-09-12 谷歌有限责任公司 Method, system and medium for detecting and converting rotated video content items
US11380115B2 (en) * 2019-06-04 2022-07-05 Idemia Identity & Security USA LLC Digital identifier for a document

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7174293B2 (en) * 1999-09-21 2007-02-06 Iceberg Industries Llc Audio identification system and method
US7194752B1 (en) * 1999-10-19 2007-03-20 Iceberg Industries, Llc Method and apparatus for automatically recognizing input audio and/or video streams

Patent Citations (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4135147A (en) * 1976-09-10 1979-01-16 Rockwell International Corporation Minutiae pattern matcher
US4047154A (en) * 1976-09-10 1977-09-06 Rockwell International Corporation Operator interactive pattern processing system
US4907074A (en) * 1985-10-31 1990-03-06 Canon Kabushiki Kaisha Image pickup apparatus having color separation filters and forming line-sequential luminance and color-difference signals
US6332041B1 (en) * 1993-07-19 2001-12-18 Sharp Kabushiki Kaisha Feature-region extraction method and feature-region extraction circuit
US5732146A (en) * 1994-04-18 1998-03-24 Matsushita Electric Industrial Co., Ltd. Scene change detecting method for video and movie
US5768151A (en) * 1995-02-14 1998-06-16 Sports Simulation, Inc. System for determining the trajectory of an object in a sports simulator
US20040046896A1 (en) * 1995-05-26 2004-03-11 Canon Kabushiki Kaisha Image processing apparatus and method
US6282307B1 (en) * 1998-02-23 2001-08-28 Arch Development Corporation Method and system for the automated delineation of lung regions and costophrenic angles in chest radiographs
US6400831B2 (en) * 1998-04-02 2002-06-04 Microsoft Corporation Semantic video object segmentation and tracking
US6529206B1 (en) * 1998-07-13 2003-03-04 Sony Corporation Image processing apparatus and method, and medium therefor
US6697433B1 (en) * 1998-10-23 2004-02-24 Mitsubishi Denki Kabushiki Kaisha Image decoding apparatus
US20040213446A1 (en) * 1999-10-12 2004-10-28 Soheil Shams System and method for automatically processing microarrays
US6512941B1 (en) * 1999-11-25 2003-01-28 Koninklijke Philips Electronics N.V. MR method for exciting the nuclear magnetization in a limited volume
US20030095710A1 (en) * 2001-11-16 2003-05-22 Mitutoyo Corporation. Systems and methods for boundary detection in images
US20030165193A1 (en) * 2002-03-01 2003-09-04 Hsiao-Ping Chen Method for abstracting multiple moving objects
US7265777B2 (en) * 2002-03-01 2007-09-04 Huper Laboratories Co., Ltd. Method for abstracting multiple moving objects
US20060291690A1 (en) * 2003-05-21 2006-12-28 Roberts David K Digital fingerprints and watermarks for images
US20110128444A1 (en) * 2003-07-25 2011-06-02 Gracenote, Inc. Method and device for generating and detecting fingerprints for synchronizing audio and video
US7793318B2 (en) * 2003-09-12 2010-09-07 The Nielsen Company, LLC (US) Digital video signature apparatus and methods for use with video program identification systems
US20070211958A1 (en) * 2003-11-12 2007-09-13 Michael Khazen Method and Means for Image Processing
US7916909B2 (en) * 2003-11-12 2011-03-29 The Institute Of Cancer Research Method and means for image processing
US20050197724A1 (en) * 2004-03-08 2005-09-08 Raja Neogi System and method to generate audio fingerprints for classification and storage of audio clips
US20050265460A1 (en) * 2004-05-27 2005-12-01 Samsung Electronics Co., Ltd. Apparatus and method for detecting letter box, and MPEG decoding device having the same
US8204108B2 (en) * 2004-05-27 2012-06-19 Samsung Electronics Co., Ltd. Apparatus and method for detecting letter box, and MPEG decoding device having the same
US7692817B2 (en) * 2004-06-23 2010-04-06 Sharp Kabushiki Kaisha Image processing method, image processing apparatus, image forming apparatus, computer program product and computer memory product for carrying out image processing by transforming image data to image data having spatial frequency components
US7502063B2 (en) * 2004-08-09 2009-03-10 Aptina Imaging Corporation Camera with scalable resolution
US20100177209A1 (en) * 2004-08-11 2010-07-15 Hsuan-Hsien Lee Interactive device capable of improving image processing
US8019132B2 (en) * 2005-08-09 2011-09-13 Nec Corporation System for recognizing fingerprint image, method and program for the same
US20090324199A1 (en) * 2006-06-20 2009-12-31 Koninklijke Philips Electronics N.V. Generating fingerprints of video signals
WO2007148264A1 (en) * 2006-06-20 2007-12-27 Koninklijke Philips Electronics N.V. Generating fingerprints of video signals
US20090034871A1 (en) * 2007-07-31 2009-02-05 Renato Keshet Method and system for enhancing image signals and other signals to increase perception of depth
US7906968B2 (en) * 2007-11-16 2011-03-15 Universitaetsklinikum Freiburg NMR tomography method based on NBSEM with 2D spatial encoding by two mutually rotated multipole gradient fields
US20090154806A1 (en) * 2007-12-17 2009-06-18 Jane Wen Chang Temporal segment based extraction and robust matching of video fingerprints
US20100007797A1 (en) * 2008-07-08 2010-01-14 Zeitera, Llc Digital Video Fingerprinting Based on Resultant Weighted Gradient Orientation Computation
US20100061587A1 (en) * 2008-09-10 2010-03-11 Yahoo! Inc. System, method, and apparatus for video fingerprinting

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150007243A1 (en) * 2012-02-29 2015-01-01 Dolby Laboratories Licensing Corporation Image Metadata Creation for Improved Image Processing and Content Delivery
US9819974B2 (en) * 2012-02-29 2017-11-14 Dolby Laboratories Licensing Corporation Image metadata creation for improved image processing and content delivery
US20160182771A1 (en) * 2014-12-23 2016-06-23 Electronics And Telecommunications Research Institute Apparatus and method for generating sensory effect metadata
US9936107B2 (en) * 2014-12-23 2018-04-03 Electronics And Telecommunications Research Institite Apparatus and method for generating sensory effect metadata
US20170124379A1 (en) * 2015-10-28 2017-05-04 Xiaomi Inc. Fingerprint recognition method and apparatus
US9904840B2 (en) * 2015-10-28 2018-02-27 Xiaomi Inc. Fingerprint recognition method and apparatus
US10936877B2 (en) * 2016-12-15 2021-03-02 Google Llc Methods, systems, and media for detecting two-dimensional videos placed on a sphere in abusive spherical video content by tiling the sphere
US20200117908A1 (en) * 2016-12-15 2020-04-16 Google Llc Methods, systems, and media for detecting two-dimensional videos placed on a sphere in abusive spherical video content by tiling the sphere
JP2019192162A (en) * 2018-04-27 2019-10-31 株式会社日立製作所 Data accumulation system and data searching method
US10909381B2 (en) 2018-05-21 2021-02-02 Google Llc Methods, systems, and media for detecting two-dimensional videos placed on a sphere in abusive spherical video content
US11810353B2 (en) 2018-05-21 2023-11-07 Google Llc Methods, systems, and media for detecting two-dimensional videos placed on a sphere in abusive spherical video content
CN109766850A (en) * 2019-01-15 2019-05-17 西安电子科技大学 Fingerprint image matching method based on Fusion Features
US11328095B2 2020-01-07 2022-05-10 Attestiv Inc. Perceptual video fingerprinting
US11640659B2 (en) 2020-01-15 2023-05-02 General Electric Company System and method for assessing the health of an asset
US11641495B2 (en) * 2020-12-07 2023-05-02 Roku, Inc. Use of video frame format as basis for differential handling of automatic content recognition and associated action

Also Published As

Publication number Publication date
US20100211794A1 (en) 2010-08-19
US8934545B2 (en) 2015-01-13

Similar Documents

Publication Publication Date Title
US8934545B2 (en) Extraction of video fingerprints and identification of multimedia using video fingerprinting
Kwon et al. CAT-Net: Compression artifact tracing network for detection and localization of image splicing
EP2126789B1 (en) Improved image identification
Qureshi et al. A bibliography of pixel-based blind image forgery detection techniques
EP0720114B1 (en) Method and apparatus for detecting and interpreting textual captions in digital video signals
EP2198376B1 (en) Media fingerprints that reliably correspond to media content
US8655103B2 (en) Deriving an image representation using frequency components of a frequency representation
EP2366170B1 (en) Media fingerprints that reliably correspond to media content with projection of moment invariants
JP2009542081A (en) Generate fingerprint for video signal
US8995708B2 (en) Apparatus and method for robust low-complexity video fingerprinting
KR101191516B1 (en) Enhanced image identification
CN107135401A (en) Key frame extraction method and system
Li et al. Effective and efficient video text extraction using key text points
JP5199349B2 (en) High performance image identification
Ng et al. Classifying photographic and photorealistic computer graphic images using natural image statistics
Dubey Edge based text detection for multi-purpose application
Gopakumar A survey on image splice forgery detection and localization techniques
Ouali et al. Robust video fingerprints using positions of salient regions
Leon et al. Video identification using video tomography
Zeppelzauer et al. Analysis of historical artistic documentaries
Hsia et al. A High-Performance Videotext Detection Algorithm
Chaisorn et al. A simplified ordinal-based method for video signature
Shedge et al. Image Forgery Detection and Localization
Crandall Extraction of unconstrained caption text from general-purpose video
Chua et al. Detection of objects in video in contrast feature domain

Legal Events

Date Code Title Description
AS Assignment

Owner name: EXCALIBUR IP, LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:YAHOO! INC.;REEL/FRAME:038383/0466

Effective date: 20160418

AS Assignment

Owner name: YAHOO! INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:EXCALIBUR IP, LLC;REEL/FRAME:038951/0295

Effective date: 20160531

AS Assignment

Owner name: EXCALIBUR IP, LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:YAHOO! INC.;REEL/FRAME:038950/0592

Effective date: 20160531

AS Assignment

Owner name: AUDITUDE, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BILOBROV, SERGIY;REEL/FRAME:043062/0657

Effective date: 20100216

Owner name: INTONOW, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AUDITUDE, INC.;REEL/FRAME:043062/0745

Effective date: 20110112

Owner name: YAHOO! INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTONOW, INC.;REEL/FRAME:043062/0812

Effective date: 20110413

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION