WO2001050737A2 - Method and apparatus for reducing false positives in cut detection - Google Patents
- Publication number
- WO2001050737A2 (PCT/EP2000/012864)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- frames
- luminance values
- luminance
- change
- frame
- Prior art date
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/11—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information not detectable on the record carrier
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7847—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content
- G06F16/785—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content using colour or luminescence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7847—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content
- G06F16/7864—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content using domain-transform features, e.g. DCT or wavelet transform coefficients
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/49—Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
- G11B27/28—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/142—Detection of scene cut or scene change
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/179—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scene or a shot
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
- H04N19/87—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving scene cut or scene change detection in combination with video compression
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/147—Scene change detection
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B2220/00—Record carriers by type
- G11B2220/60—Solid state media
- G11B2220/65—Solid state media wherein solid state memory is used for storing indexing information or metadata
Definitions
- the present invention is in general related to an apparatus that detects significant scenes of a source video and selects representative keyframes therefrom.
- the present invention in particular relates to determining whether a detected scene change is really a scene change or merely a uniform change in intensity of the image such as when camera flashes occur during a news broadcast etc.
- Video content analysis uses automatic and semi-automatic methods to extract information that describes contents of the recorded material.
- Video content indexing and analysis extracts structure and meaning from visual cues in the video.
- a video clip is taken from a TV program or a home video by selecting frames which reflect the different scenes in a video.
- Zhang may produce skewed results if the differences between respective blocks of two frames are approximately the same with respect to color or intensity. In such a case the system may detect a scene change when in fact the change is only due to camera flashes which occur during a news broadcast.
- what is needed is a system that will create a visual index for a video source, previously recorded or while being recorded, that is usable and more accurate in selecting significant keyframes, while providing a usable amount of information for a user.
- This system will detect scene changes and select a key frame from each scene but ignore the detection of scene changes and the selection of key frames where the changes between two frames result from only a substantially uniform change in luminance of substantially all blocks or macroblocks within the frame.
- Figure 1 illustrates a video archival process
- Figures 2A and 2B are block diagrams of devices used in creating a visual index in accordance with a preferred embodiment of the invention ;
- Figure 3 illustrates a frame, a macroblock, and several blocks
- Figure 4 illustrates several DCT coefficients of a block
- Figure 5 illustrates a macroblock and several blocks with DCT coefficients
- Two phases exist in the video content indexing process: archival and retrieval.
- video content is analyzed during a video analysis process and a visual index is created.
- automatic significant scene detection is a process of identifying scene changes, i.e., "cuts" (video cut detection or segmentation detection) and identifying static scenes (static scene detection).
- a particular representative frame called a keyframe is extracted. Therefore, it is important that scene changes be identified correctly; otherwise too many keyframes will be chosen for a single scene, or too few keyframes for multiple scene changes.
- Uniform luminance detection is the process of identifying a change in luminance between two frames and is explained in further detail below.
- a video archival process is shown in Figure 1 for a source tape with previously recorded source video, which may include audio and/or text, although a similar process may be followed for other storage devices with previously saved visual information, such as an MPEG file.
- a visual index is created based on the source video.
- Figure 1 illustrates an example of the first process (for previously recorded source tape) for a videotape.
- the source video is rewound, if required, by a playback/recording device such as a VCR.
- the source video is played back. Signals from the source video are received by a television, a VCR or other processing device.
- a media processor in the processing device or an external processor receives the video signals and formats the video signals into frames representing pixel data (frame grabbing).
- a host processor separates each frame into blocks, and transforms the blocks and their associated data to create DCT (discrete cosine transform) coefficients; performs significant scene detection, uniform change in luminance detection and keyframe selection; and builds and stores keyframes as a data structure in a memory, disk or other storage medium.
- the source tape is rewound to its beginning and in step 106, the source tape is set to record information.
- the data structure is transferred from the memory to the source tape, creating the visual index. The tape may then be rewound to view the visual index. (Instead of a tape, any storage medium can be used or the index could be stored and/or created at the server.)
- Steps 103 and 104 are more specifically illustrated in Figures 2A and 2B.
- Video exists either in analog (continuous data) or digital (discrete data) form.
- the present example operates in the digital domain and thus uses digital form for processing.
- the source video or video signal is a series of individual images or video frames displayed at a rate high enough (in this example 30 frames per second) so the displayed sequence of images appears as a continuous picture stream.
- These video frames may be uncompressed (NTSC or raw video) or compressed data in a format such as MPEG-1, MPEG-2, MPEG-4, Motion JPEG, or the like.
- the information in an uncompressed video is first segmented into frames in a media processor 202, using a frame grabbing technique 204 such as present on the Intel Smart Video Recorder III.
- the decoded signal is next supplied to a dequantizer 218 which dequantizes the decoded signal using data from the table specifier 216. Although shown as occurring in the media processor 203, these steps (steps 214-218) may occur in either the media processor 203, host processor 211 or even another external device depending upon the devices used.
- the DCT coefficients could be delivered directly to the host processor. In all these approaches, processing may be performed in real time.
- the host processor 210, which may be, for example, an Intel® Pentium chip or other processor or multiprocessor, a Philips® Trimedia chip or any other multimedia processor, a computer, an enhanced VCR, record/playback device, or television, or any other processor, performs significant scene detection, keyframe selection, and building and storing a data structure in an index memory, such as, for example, a hard disk, file, tape, DVD, or other storage medium.
- the present invention attempts to detect when a scene of a video has changed or a static scene has occurred.
- a scene may represent one or more related images.
- significant scene detection two consecutive frames are compared and, if the frames are determined to be significantly different, a scene change is determined to have occurred between the two frames; and if determined to be significantly alike, processing is performed to determine if a static scene has occurred.
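The two-stage decision just described — compare consecutive frames, flag a cut if they differ significantly, and check for a static scene if they are alike — can be sketched as follows. The summed DC-value difference as the significance measure and the two thresholds are illustrative assumptions; the patent leaves the exact significance test to the implementation:

```python
def classify_frame_pair(dc1, dc2, cut_thresh=2000, static_thresh=50):
    """Classify the transition between two consecutive frames using their
    per-block DC values (one DC value per 8x8 block).

    cut_thresh / static_thresh are hypothetical tuning values, not taken
    from the patent text.
    """
    total_diff = sum(abs(a - b) for a, b in zip(dc1, dc2))
    if total_diff > cut_thresh:
        # Still subject to the uniform-luminance check described below.
        return "scene change candidate"
    if total_diff < static_thresh:
        return "static scene candidate"
    return "same scene"

print(classify_frame_pair([100] * 4, [100] * 4))   # static scene candidate
print(classify_frame_pair([0] * 4, [1000] * 4))    # scene change candidate
```

A real detector would aggregate over all blocks of the frame and tune both thresholds per application.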
- uniform luminance change detection if a scene change has been detected then the luminance values of the two frames are compared and if a uniform change in luminance is the only major change between the two frames then it is determined that a scene change has not occurred between the two frames.
- Fig. 2A shows an example of a host processor 210 with luminance change detector 240.
- the DCT blocks are provided by macroblock creator 206 and DCT transformer 220.
- Fig. 2B shows an example of host processor 211 with significant scene detector 230 and luminance change detector 240.
- the DCT blocks are provided by dequantizer 218.
- the significant scene processor 230 detects scene changes between two frames, and the luminance detector 240 then determines whether a scene change has in fact occurred or whether the differences between the two frames are due to a uniform change in luminance. If a scene change occurred, a keyframe is selected and provided to frame memory 234 and then to the index memory 260. If a uniform change in luminance is detected, another keyframe is not selected from this same scene.
- the present invention addresses the concern where two frames are compared and there is a substantial difference detected between two frames. There are many reasons why this substantial difference may not be due to a scene change.
- the video may be a news broadcast where the videographer is taping a press briefing. During this press briefing many camera flashes occur, causing the luminance between two frames to change. Instead of this being detected as a scene change and another keyframe chosen, the present invention detects the uniform change in luminance and treats it as an image from the same scene. Similarly, if the lights are turned on in a room, or the lights flash in a disco, a scene change should not be detected, as the difference between the two frames is merely a uniform change in luminance.
- the present method and device use comparisons of DCT (Discrete Cosine Transform) coefficients.
- each received frame 302 is processed individually in the host processor 210 to create 8 x 8 blocks 440.
- the host processor 210 processes each 8 x 8 block which contains spatial information, using a discrete cosine transformer 220 to extract DCT coefficients and create the macroblock 308.
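As an illustration of this transform step, a direct (unoptimized) 2-D DCT-II of an 8x8 block can be written as below. The orthonormal scaling is one common convention, not necessarily the one used in the described device; production encoders use fast factorizations instead of this quadruple loop:

```python
import math

def dct2_8x8(block):
    """Direct 2-D DCT-II of an 8x8 block of pixel values.

    Coefficient [0][0] is the DC value (proportional to the block's mean
    luminance); the remaining 63 coefficients are the AC values.
    """
    N = 8
    def c(k):  # orthonormal scale factor
        return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = c(u) * c(v) * s
    return out

# A flat block of luminance 100: all energy lands in the DC coefficient.
flat = [[100.0] * 8 for _ in range(8)]
coeffs = dct2_8x8(flat)
print(round(coeffs[0][0]))  # 800 (8 * 100 under orthonormal scaling)
```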
- the DCT coefficients may be extracted after dequantization and need not be processed by a discrete cosine transformer. Additionally, as previously discussed, DCT coefficients may be automatically extracted depending upon the devices used.
- the DCT transformer provides each of the blocks 440 (Figure 4), Y1, Y2, Y3,
- each block contains DC information (DC value) and the remaining DCT coefficients contain AC information (AC values).
- the AC values increase in frequency in a zig-zag order from the right of the DC value, to the DCT coefficient just beneath the DC value, as partially shown in Figure 4.
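The scan order referenced here can be generated programmatically. This sketch assumes the conventional JPEG-style zig-zag traversal, which matches the description above: first the coefficient to the right of the DC value, then the one just beneath it, and so on by increasing spatial frequency:

```python
def zigzag_order(n=8):
    """Return the (row, col) pairs of an n x n block in zig-zag scan order,
    starting at the DC value (0, 0) and visiting AC values along successive
    anti-diagonals."""
    return sorted(
        ((r, c) for r in range(n) for c in range(n)),
        # Primary key: anti-diagonal index r + c.
        # Within a diagonal, odd diagonals run top-to-bottom (by row),
        # even diagonals run bottom-to-top (by column).
        key=lambda rc: (rc[0] + rc[1],
                        rc[0] if (rc[0] + rc[1]) % 2 else rc[1]),
    )

order = zigzag_order()
print(order[:6])  # [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
```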
- the Y values are the luminance values.
- processing is limited to detecting the change in DC values between corresponding blocks of two frames, which produces results more quickly and limits processing without a significant loss in efficiency; however, one skilled in the art could instead compare the luminance of corresponding macroblocks, or use any other method that detects a change in luminance.
- the method and device in accordance with a preferred embodiment of the instant invention compares the DC values of respective blocks of two frames to determine whether a substantially uniform change in luminance has occurred.
- this computation takes the absolute value of the difference between the DC coefficient of each block in the first frame and the corresponding DC coefficient in the second frame. Each difference is then compared to diffmin and diffmax to track the minimum and maximum differences between corresponding DC coefficients of the two frames. If the difference between the maximum difference (diffmax) and the minimum difference (diffmin) is less than a certain threshold, then all DC values have changed by approximately the same amount, indicating a uniform change in luminance.
- the threshold value is chosen anywhere between 0 and 10% of the final diffmax value, but depending on the application this threshold may vary.
- a keyframe is not chosen from both frame sequences. It should be noted that other methods of detecting changes in luminance can be used, such as histograms, wavelets, etc.; the invention is not limited to the embodiment described above.
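The diffmin/diffmax test described above can be sketched as follows, using the 0-10%-of-diffmax threshold mentioned in the text (the exact fraction is application-dependent):

```python
def is_uniform_luminance_change(dc_prev, dc_curr, rel_thresh=0.10):
    """Decide whether a detected cut is really just a uniform luminance
    shift (e.g. a camera flash).

    dc_prev / dc_curr hold the DC values of corresponding blocks in the two
    frames; rel_thresh is the threshold as a fraction of the final diffmax.
    """
    diffs = [abs(a - b) for a, b in zip(dc_prev, dc_curr)]
    diffmin, diffmax = min(diffs), max(diffs)
    # If every block changed by nearly the same amount, the change is
    # uniform and the cut is a false positive.
    return (diffmax - diffmin) <= rel_thresh * diffmax if diffmax else True

# A camera flash: every block brightens by the same ~200.
prev = [100, 120, 140, 160]
flash = [dc + 200 for dc in prev]
print(is_uniform_luminance_change(prev, flash))  # True -> suppress keyframe

# A real cut: blocks change by very different amounts.
cut = [500, 90, 400, 10]
print(is_uniform_luminance_change(prev, cut))    # False -> keep the cut
```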
- the ratios of the luminance changes compared to the ratios of the chrominance changes could be used to determine the change in luminance, or any other formula for determining luminance change.
- Figs. 6A-D illustrate two scenarios where a scene change is detected but the difference between the two frames is merely a change in luminance.
- Fig. 6A is an example of an image during a camera flash.
- Fig. 6B shows this same image after the camera flash.
- a top view of a disco scene is shown in Fig. 6C during a time period when the lights are off.
- Fig. 6D shows this same scene when the lights are on.
- the present invention is shown using DCT coefficients; however, one may instead use representative values such as wavelet coefficients, histograms etc. or a function which operates on a sub-area of the image to give representative values for that sub-area.
- the present invention has been described with reference to a video indexing system, however it pertains in general to detecting a uniform change in luminance between two frames and therefore can be used as a search device to detect scenes where there are camera flashes, or alternatively as an archival method to pick representative frames.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2001550991A JP2003519971A (en) | 1999-12-30 | 2000-12-15 | Method and apparatus for reducing false positives in cut detection |
EP00991976A EP1180307A2 (en) | 1999-12-30 | 2000-12-15 | Method and apparatus for reducing false positives in cut detection |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US47708599A | 1999-12-30 | 1999-12-30 | |
US09/477,085 | 1999-12-30 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2001050737A2 true WO2001050737A2 (en) | 2001-07-12 |
WO2001050737A3 WO2001050737A3 (en) | 2001-11-15 |
Family
ID=23894478
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2000/012864 WO2001050737A2 (en) | 1999-12-30 | 2000-12-15 | Method and apparatus for reducing false positives in cut detection |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP1180307A2 (en) |
JP (1) | JP2003519971A (en) |
CN (1) | CN1252982C (en) |
WO (1) | WO2001050737A2 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2001050339A2 (en) | 1999-12-30 | 2001-07-12 | Koninklijke Philips Electronics N.V. | Method and apparatus for detecting fast motion scenes |
EP1668903A1 (en) * | 2003-09-12 | 2006-06-14 | Nielsen Media Research, Inc. | Digital video signature apparatus and methods for use with video program identification systems |
US7333712B2 (en) | 2002-02-14 | 2008-02-19 | Koninklijke Philips Electronics N.V. | Visual summary for scanning forwards and backwards in video content |
CN102724385A (en) * | 2012-06-21 | 2012-10-10 | 浙江宇视科技有限公司 | Intelligent video analysis method and device |
US9316841B2 (en) | 2004-03-12 | 2016-04-19 | Koninklijke Philips N.V. | Multiview display device |
CN108769458A (en) * | 2018-05-08 | 2018-11-06 | 东北师范大学 | A kind of deep video scene analysis method |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100825737B1 (en) * | 2005-10-11 | 2008-04-29 | 한국전자통신연구원 | Method of Scalable Video Coding and the codec using the same |
CN100428801C (en) * | 2005-11-18 | 2008-10-22 | 清华大学 | Switching detection method of video scene |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0395268A2 (en) * | 1989-04-27 | 1990-10-31 | Sony Corporation | Motion dependent video signal processing |
US5767922A (en) * | 1996-04-05 | 1998-06-16 | Cornell Research Foundation, Inc. | Apparatus and process for detecting scene breaks in a sequence of video frames |
WO1998055943A2 (en) * | 1997-06-02 | 1998-12-10 | Koninklijke Philips Electronics N.V. | Significant scene detection and frame filtering for a visual indexing system |
US5920360A (en) * | 1996-06-07 | 1999-07-06 | Electronic Data Systems Corporation | Method and system for detecting fade transitions in a video signal |
US5969755A (en) * | 1996-02-05 | 1999-10-19 | Texas Instruments Incorporated | Motion based event detection system and method |
-
2000
- 2000-12-15 WO PCT/EP2000/012864 patent/WO2001050737A2/en not_active Application Discontinuation
- 2000-12-15 EP EP00991976A patent/EP1180307A2/en not_active Withdrawn
- 2000-12-15 JP JP2001550991A patent/JP2003519971A/en not_active Withdrawn
- 2000-12-15 CN CNB008070067A patent/CN1252982C/en not_active Expired - Fee Related
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0395268A2 (en) * | 1989-04-27 | 1990-10-31 | Sony Corporation | Motion dependent video signal processing |
US5969755A (en) * | 1996-02-05 | 1999-10-19 | Texas Instruments Incorporated | Motion based event detection system and method |
US5767922A (en) * | 1996-04-05 | 1998-06-16 | Cornell Research Foundation, Inc. | Apparatus and process for detecting scene breaks in a sequence of video frames |
US5920360A (en) * | 1996-06-07 | 1999-07-06 | Electronic Data Systems Corporation | Method and system for detecting fade transitions in a video signal |
WO1998055943A2 (en) * | 1997-06-02 | 1998-12-10 | Koninklijke Philips Electronics N.V. | Significant scene detection and frame filtering for a visual indexing system |
Non-Patent Citations (2)
Title |
---|
DIMITROVA N ET AL: "VIDEO KEYFRAME EXTRACTION AND FILTERING: A KEYFRAME IS NOT A KEYFRAME TO EVERYONE" PROCEEDINGS OF THE 6TH. INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT. CIKM '97. LAS VEGAS, NOV. 10 - 14, 1997, PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT. CIKM, NEW YORK, ACM, US, vol. CONF. 6, 10 November 1997 (1997-11-10), pages 113-120, XP000775302 ISBN: 0-89791-970-X * |
MANDAL M K ET AL: "IMAGE INDEXING USING MOMENTS AND WAVELETS" IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, IEEE INC. NEW YORK, US, vol. 42, no. 3, 1 August 1996 (1996-08-01), pages 557-564, XP000638539 ISSN: 0098-3063 * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2001050339A2 (en) | 1999-12-30 | 2001-07-12 | Koninklijke Philips Electronics N.V. | Method and apparatus for detecting fast motion scenes |
US7333712B2 (en) | 2002-02-14 | 2008-02-19 | Koninklijke Philips Electronics N.V. | Visual summary for scanning forwards and backwards in video content |
EP1668903A1 (en) * | 2003-09-12 | 2006-06-14 | Nielsen Media Research, Inc. | Digital video signature apparatus and methods for use with video program identification systems |
EP1668903A4 (en) * | 2003-09-12 | 2011-01-05 | Nielsen Media Res Inc | Digital video signature apparatus and methods for use with video program identification systems |
US8020180B2 (en) | 2003-09-12 | 2011-09-13 | The Nielsen Company (Us), Llc | Digital video signature apparatus and methods for use with video program identification systems |
US8683503B2 (en) | 2003-09-12 | 2014-03-25 | The Nielsen Company(Us), Llc | Digital video signature apparatus and methods for use with video program identification systems |
US9015742B2 (en) | 2003-09-12 | 2015-04-21 | The Nielsen Company (Us), Llc | Digital video signature apparatus and methods for use with video program identification systems |
US9316841B2 (en) | 2004-03-12 | 2016-04-19 | Koninklijke Philips N.V. | Multiview display device |
CN102724385A (en) * | 2012-06-21 | 2012-10-10 | 浙江宇视科技有限公司 | Intelligent video analysis method and device |
CN108769458A (en) * | 2018-05-08 | 2018-11-06 | 东北师范大学 | A kind of deep video scene analysis method |
Also Published As
Publication number | Publication date |
---|---|
CN1349711A (en) | 2002-05-15 |
CN1252982C (en) | 2006-04-19 |
JP2003519971A (en) | 2003-06-24 |
WO2001050737A3 (en) | 2001-11-15 |
EP1180307A2 (en) | 2002-02-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6496228B1 (en) | Significant scene detection and frame filtering for a visual indexing system using dynamic thresholds | |
EP0944874B1 (en) | Significant scene detection and frame filtering for a visual indexing system | |
US6125229A (en) | Visual indexing system | |
US6766098B1 (en) | Method and apparatus for detecting fast motion scenes | |
JP4942883B2 (en) | Method for summarizing video using motion and color descriptors | |
KR100915847B1 (en) | Streaming video bookmarks | |
US6469749B1 (en) | Automatic signature-based spotting, learning and extracting of commercials and other video content | |
US7159117B2 (en) | Electronic watermark data insertion apparatus and electronic watermark data detection apparatus | |
US5719643A (en) | Scene cut frame detector and scene cut frame group detector | |
JP5005154B2 (en) | Apparatus for reproducing an information signal stored on a storage medium | |
KR20030026529A (en) | Keyframe Based Video Summary System | |
Faernando et al. | Scene change detection algorithms for content-based video indexing and retrieval | |
EP1180307A2 (en) | Method and apparatus for reducing false positives in cut detection | |
KR100812041B1 (en) | A method for auto-indexing using a process of detection of image conversion | |
Yoon et al. | Real-time video indexing and non-linear video browsing for digital TV receivers with persistent storage | |
Lee et al. | Automatic video summarizing tool using MPEG-7 descriptors for STB |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 00807006.7 Country of ref document: CN |
|
AK | Designated states |
Kind code of ref document: A2 Designated state(s): CN JP |
|
AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2000991976 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2001 550991 Country of ref document: JP Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
AK | Designated states |
Kind code of ref document: A3 Designated state(s): CN JP |
|
AL | Designated countries for regional patents |
Kind code of ref document: A3 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR |
|
WWP | Wipo information: published in national office |
Ref document number: 2000991976 Country of ref document: EP |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: 2000991976 Country of ref document: EP |