WO2007138858A1 - Video special effect detection device, special effect detection method, special effect detection program, and video playback device
- Publication number: WO2007138858A1 (PCT/JP2007/060035)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image boundary
- frame
- boundary line
- special effect
- frame section
Classifications
- G06T 7/00 — Image analysis
- H04N 5/2628 — Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
- G06T 7/246 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- H04N 5/147 — Scene change detection
- H04N 5/2622 — Signal amplitude transition in the zone between image portions, e.g. soft edges
- H04N 5/2624 — Studio circuits for special effects, for obtaining an image which is composed of whole input images, e.g. splitscreen
- H04N 5/2625 — Studio circuits for special effects, for obtaining an image which is composed of images from a temporal image sequence, e.g. for a stroboscopic effect
Definitions
- Video special effect detection device, special effect detection method, special effect detection program, and video playback device
- The present invention relates to a video special effect detection device, special effect detection method, and special effect detection program for detecting special effects included in video, and in particular to a device, method, and program that can detect video switching caused by a gradual spatial change of the image. Examples of such video switching are wipes and DVE (digital video effects).
- a special effect of video is a kind of video switching.
- Special video effects include wipe and DVE (Digital Video Effect).
- FIGS. 1A and 1B are explanatory diagrams each showing an example of a wipe.
- FIGS. 2A to 2I are explanatory diagrams showing examples of DVEs.
- A special effect is video switching that is intentionally inserted by a video editor, unlike a cut, which is the commonly used instantaneous video switching.
- Special effects are used at places where the video is semantically important or where the editor wants to make an impression, for example at the start of a new corner or topic, or at the turning point of a scene. Therefore, by detecting special effects, important information for understanding the content and structure of the video can be acquired.
- JP-A-8-237549 (paragraphs 0011-0016) and JP-A-2005-237002 (paragraphs 0031-0035) describe methods of detecting video switching by gradual change, including wipes, using inter-frame feature difference values (frame difference values). These methods detect a section in which the frame feature value changes gradually. In the method described in JP-A-8-237549, a video transition due to gradual change is detected when frames whose inter-frame difference value is equal to or greater than a threshold for detecting a gentle change continue and the accumulated inter-frame difference value exceeds another, larger threshold. The amount of change in luminance value between pixels is used as the inter-frame difference value.
- In the method described in JP-A-2005-237002, a wipe is detected when frames whose inter-frame difference value is equal to or greater than the threshold for detecting a gradual change continue, while in the preceding and succeeding sections frames whose inter-frame difference value falls below a certain threshold continue.
- JP-A-7-288840 (paragraph 0011) and JP-A-11-252501 (paragraphs 0012-0020) describe methods for detecting a wipe.
- Wipe has the property that the video after switching gradually replaces the area of the video before switching, and the video after switching eventually replaces the entire area of the video before switching.
- In these methods, a uniformly changing wipe is detected using this property of the wipe.
- Specifically, the image change area in each frame is obtained based on pixel difference values between adjacent frames, and wipes are detected by evaluating the total image change area obtained as the logical sum of the image change areas of multiple consecutive frames.
- Japanese Laid-Open Patent Publication No. 11-252509 (paragraphs 0053-0057) also describes a method for detecting a wipe. In the method described in JP-A-11-252509, it is determined that the possibility of a wipe is high when the frame average of the prediction error increases.
- Yoshihiko Kawai, Noboru Babaguchi, and Tadahiro Kitahashi describe a DVE detection method in "A method for detecting replay scenes focusing on digital video effects in broadcast sports video" (IEICE Transactions D-II, Vol. J84-D-II, No. 2, pp. 432-435, February 2001).
- In this method, DVE patterns are registered in advance, and a pattern similar to a registered DVE pattern is detected as a DVE by comparing the registered patterns with the video.
- The methods described in JP-A-7-288840 and JP-A-11-252501 detect a wipe using the property of a wipe that changes uniformly, and can detect wipes separately from video changes other than special effects. However, since a DVE switches video with complicated image transformations, it is extremely difficult to detect DVEs using this property of wipes. As a result, wipes and DVEs of all patterns cannot be detected universally.
- The method described in JP-A-11-252509 determines that the possibility of a wipe is high when the frame average of the prediction error increases. However, since the frame average of the prediction error increases not only in the case of a wipe, video changes other than wipes may be erroneously detected.
- Japanese Laid-Open Patent Publication No. 6-259561 discloses a calculation device for calculating the moving speed and moving direction of a target in a moving image with high accuracy and high speed.
- Japanese Laid-Open Patent Publication No. 9-245167 discloses an image matching method for quickly matching complex images.
- Japanese Patent No. 3585977 discloses a moving region detection apparatus that can accurately determine the position of a moving object by image processing even when the shadow of the moving object falls on the floor surface.
- An object of the present invention is to provide a special effect detection device, a special effect detection method, and a special effect detection program that can detect special effects included in video universally, without depending on the pattern of the special effect, and with high accuracy, without erroneously detecting video changes other than special effects.
- One feature of the present invention is that it focuses on the fact that, regardless of the pattern, a frame constituting a special effect contains a boundary line (referred to as an image boundary line) between the two images existing in the frame.
- A video special effect detection apparatus according to the present invention detects a special effect included in the video by extracting the image boundary line, which is the boundary line between the two images existing in a frame.
- Preferably, the video special effect detection apparatus includes an image boundary line extraction unit that extracts, from each frame of the input video, an image boundary line between the two images existing in the frame and outputs image boundary line description information, which is information describing the image boundary line, and a special effect detection unit that detects a frame section including a special effect using the image boundary line description information of each frame and outputs special effect frame section information, which is information for specifying that frame section.
- the special effect is typically video switching with a wipe or digital video effect.
- An image boundary may also include a line in a frame that moves in conjunction with the boundary between two images present in the frame.
- Preferably, the image boundary line extraction unit includes an image boundary line candidate pixel detection unit that detects, from each frame of the input video, image boundary line candidate pixels, which are candidates for the pixels constituting the image boundary line, and outputs, for each frame, image boundary line candidate pixel information, which is information for specifying those pixels, and a line extraction unit that extracts, as an image boundary line, a line formed by the image boundary line candidate pixels indicated by the image boundary line candidate pixel information of each frame and outputs, for each frame, image boundary line description information, which is information describing the image boundary line.
- The image boundary line candidate pixel detection unit can detect, as an image boundary line candidate pixel, a pixel that satisfies any one of the following conditions, or a combination of several of them: the pixel is an edge, the pixel has a large inter-frame pixel difference value, or the pixel belongs to a region where motion vectors vary.
- the line extraction unit may extract a straight line formed by the image boundary line candidate pixels as an image boundary line using Hough transform.
- Preferably, the special effect detection unit includes an image boundary line possessing frame section detection unit that determines, using the image boundary line description information of each frame, whether each frame has an image boundary line, detects a frame section in which frames having image boundary lines continue as a frame section including a special effect, and outputs special effect frame section information, which is information for specifying that frame section.
- As described above, the video special effect detection apparatus of the present invention extracts an image boundary line, which is the boundary line between two images existing in a frame, and detects a frame section including a special effect based on the extracted image boundary line.
- The image boundary line is commonly included in frames constituting a special effect, regardless of the pattern, and is not included in frames of video changes other than special effects, such as camera motion. For this reason, special effects can be detected universally, without depending on the pattern, and with high accuracy, without erroneously detecting video changes other than special effects.
- Preferably, the special effect detection unit includes an image boundary line continuous movement frame section detection unit that detects a frame section in which the image boundary line indicated by the image boundary line description information of each frame continuously moves as a frame section including a special effect, and outputs special effect frame section information, which is information for specifying that frame section.
- The image boundary line continuous movement frame section detection unit may represent the parameters describing the image boundary line of each frame as a feature point in a parameter space, and detect a frame section in which the feature point representing the image boundary line moves continuously over time in the parameter space as a frame section including a special effect.
- As described above, this video special effect detection apparatus detects a frame section in which the image boundary line continuously moves as a frame section including a special effect.
- In a special effect, the image boundary line moves continuously between frames.
- For this reason, special effects can be detected universally, without depending on the pattern, and with high accuracy, without erroneously detecting video changes other than special effects.
- Moreover, compared with a configuration that detects special effects based only on the presence or absence of image boundary lines, special effects can be detected with higher accuracy.
- Preferably, the special effect detection unit includes an image boundary line combination extraction unit that extracts a combination of a plurality of image boundary lines indicated by the image boundary line description information of each frame and outputs, for each frame, image boundary line combination information, which is information describing the combination of image boundary lines, and an image boundary line combination possessing frame section detection unit that determines, using the image boundary line combination information of each frame, whether each frame has a combination of image boundary lines, detects a frame section in which frames having a combination of image boundary lines continue as a frame section including a special effect, and outputs special effect frame section information, which is information for specifying that frame section.
- Alternatively, the special effect detection unit preferably includes the image boundary line combination extraction unit described above and an image boundary line combination continuous movement frame section detection unit that detects a frame section in which the combination of image boundary lines indicated by the image boundary line combination information of each frame continuously moves as a frame section including a special effect, and outputs special effect frame section information, which is information for specifying that frame section.
- the image boundary line combination extraction unit may extract a combination of image boundary lines in which the plurality of image boundary lines form a quadrangle or a part of the quadrangle.
- The image boundary line combination continuous movement frame section detection unit may represent the parameters describing each image boundary line of the image boundary line combination of each frame as feature points in a parameter space, and detect a frame section in which each feature point moves continuously with time in the parameter space as a frame section including a special effect.
- As described above, this video special effect detection apparatus extracts a combination of image boundary lines from each frame and detects a frame section including a special effect based on the extracted combination of image boundary lines.
- The image frame formed by a combination of image boundary lines is included in frames constituting a DVE, among the special effects, and is not included in frames of video changes other than special effects. For this reason, DVEs can be detected with high accuracy without erroneously detecting video changes other than special effects.
- Moreover, DVEs can be detected with higher accuracy than with a configuration that detects special effects based only on a single image boundary line.
- The effect of the present invention is that special effects included in video can be detected universally, without depending on the pattern of the special effect, and with high accuracy, without erroneously detecting video changes other than special effects.
- This is because the image boundary line extraction unit extracts from each frame an image boundary line that is commonly included in frames constituting a special effect and not included in frames of video changes other than special effects, and the special effect detection unit detects the frame section including the special effect based on the extracted image boundary line.
- FIG. 1 (A) and (B) are explanatory diagrams showing examples of wipes.
- FIG. 2 (A) to (I) are explanatory diagrams showing examples of DVEs.
- FIG. 3 is a block diagram showing a first embodiment of the special effect detection apparatus according to the present invention.
- FIGS. 4A to 4F are explanatory diagrams showing examples of image boundary lines.
- FIG. 5 is an explanatory diagram showing an example of a block for calculating a motion vector variation degree and its motion vector.
- FIG. 6 is an explanatory diagram showing an example of a frame section in which frames having image boundary lines are continuous.
- FIG. 7 is a flowchart showing the operation of the first exemplary embodiment.
- FIG. 8 is a block diagram showing a second embodiment of the special effect detection device according to the present invention.
- FIGS. 9A to 9C are explanatory diagrams showing an example of how the image boundary line continuously moves between frames.
- FIG. 10 is an explanatory diagram illustrating a trajectory in which feature points representing parameters describing image boundary lines continuously move with time in the parameter space.
- FIG. 11 is a flowchart showing the operation of the second exemplary embodiment.
- FIG. 12 is a block diagram showing a third embodiment of the special effect detection device according to the present invention.
- FIGS. 13A to 13F are explanatory diagrams showing examples of combinations of image boundary lines forming an image frame.
- FIG. 14 is a flowchart showing the operation of the third exemplary embodiment.
- FIG. 15 is a block diagram showing a fourth embodiment of the special effect detection apparatus according to the present invention.
- FIG. 16 is an explanatory view exemplifying a state in which feature points representing respective image boundary lines in the combination of image boundary lines continuously move with time in the parameter space.
- FIG. 17 is a flowchart showing the operation of the fourth exemplary embodiment.
- FIG. 18 is a block diagram showing a fifth embodiment of the special effect detection device according to the present invention.
- FIG. 19 is an explanatory diagram showing that the edge direction of the pixels constituting the image boundary line is perpendicular to the direction of the image boundary line.
- FIG. 20 is a block diagram showing a sixth embodiment of the special effect detection apparatus according to the present invention.
- FIG. 21 is a block diagram showing a seventh embodiment of the special effect detection apparatus according to the present invention.
- FIG. 22 is a block diagram showing an eighth embodiment of the special effect detection apparatus according to the present invention.
- FIG. 23 is a block diagram showing a ninth embodiment of the special effect detection apparatus according to the present invention.
- FIG. 24 is a block diagram showing a tenth embodiment of the special effect detection apparatus according to the present invention.
- FIG. 3 is a block diagram showing a first embodiment of the special effect detection device according to the present invention.
- the special effect detection device according to the first exemplary embodiment of the present invention includes an image boundary line extraction unit 11 and a special effect detection unit 21.
- the special effect detection device is realized by an information processing device such as a computer that executes processing according to a program stored in a recording medium. The same applies to the following embodiments.
- The image boundary line extraction unit 11 extracts, from each frame of the input video, an image boundary line, which is the boundary line between two images existing in the frame, and outputs image boundary line description information, which is information describing the extracted image boundary line.
- An image boundary line is a boundary line between two images before and after switching that coexist in a frame that constitutes a special effect.
- In a special effect, the images before and after switching are switched while spatially coexisting. For this reason, a frame constituting a special effect has an image boundary line.
- FIGS. 4A to 4F are explanatory diagrams showing examples of image boundary lines.
- reference numeral 9 denotes an image boundary line.
- The image boundary line does not have to be exactly the boundary line between the two images existing in the frame.
- The image boundary line may be a line in the frame that moves in conjunction with the boundary line between the two images existing in the frame. Note that this description of the image boundary line applies to all embodiments hereinafter.
- the image boundary line extraction unit 11 includes an image boundary line candidate pixel detection unit 111 and a line extraction unit 112.
- the image boundary line candidate pixel detection unit 111 detects image boundary line candidate pixels, which are candidates for pixels constituting the image boundary line, from each frame of the input video. Then, the image boundary line candidate pixel detection unit 111 outputs image boundary line candidate pixel information, which is information for specifying the detected image boundary line candidate pixel, for each frame.
- As the pixels, the pixels constituting each frame of the input video may be used as they are, or pixels newly obtained by image processing such as arbitrary resolution conversion may be used. Further, as the frames of the input video, all frames constituting the input video may be used, or a subset obtained by arbitrary sampling may be used. This applies to all subsequent embodiments.
- the image boundary line candidate pixel detection unit 111 detects a pixel that matches the properties of the pixels constituting the image boundary line in the special effect.
- One property of the pixels constituting the image boundary line is that they are edges, that is, pixels at locations where the brightness of the image changes rapidly.
- This is because the image boundary line is the boundary between two different images.
- Edge detection operators such as Prewitt, Sobel, Roberts, Robinson, Kirsch, and Laplacian described in the “New Image Analysis Handbook” may be applied to each pixel in the image to detect pixels that are edges.
- a pixel that is an edge may be detected using Canny's edge detection method described in “A Computational Approach to Edge Detection”. Pixels that are edges detected in this way can be used as image boundary line candidate pixels.
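As a minimal sketch of such edge-based candidate detection, the Sobel kernels below are standard, but the threshold value and the grayscale NumPy-array interface are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def edge_candidate_pixels(frame, threshold=100.0):
    """Mark pixels whose Sobel gradient magnitude exceeds a threshold
    as image boundary line candidate pixels (threshold is illustrative)."""
    f = frame.astype(float)
    # Sobel kernels for horizontal and vertical brightness gradients
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = f.shape
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    # Correlate interior pixels with the 3x3 kernels
    for dy in range(3):
        for dx in range(3):
            gx[1:h-1, 1:w-1] += kx[dy, dx] * f[dy:h-2+dy, dx:w-2+dx]
            gy[1:h-1, 1:w-1] += ky[dy, dx] * f[dy:h-2+dy, dx:w-2+dx]
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold
```

In practice a library implementation of a Sobel or Canny operator would typically replace this hand-rolled convolution.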
- Another property of the pixels constituting the image boundary line is that they have a large inter-frame pixel difference value. This is because the image boundary line moves between frames.
- Specifically, a difference value of pixel values is obtained between each pixel of the frame and the corresponding pixel of a frame adjacent to the frame. Then, a pixel whose difference value is larger than a certain threshold value can be taken as a pixel having a large inter-frame pixel difference value.
- The difference values may be obtained against an adjacent frame in only one direction (for example, the previous frame or the next frame), or against the adjacent frames in both directions (for example, the previous frame and the next frame), in which case a pixel for which both difference values are larger than the threshold may be taken as a pixel having a large inter-frame pixel difference value.
- a signal value described by an arbitrary color system may be used as the pixel value.
- a pixel having a large inter-frame pixel difference value thus detected can be set as an image boundary line candidate pixel.
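A hedged sketch of the bidirectional inter-frame difference check described above; the threshold value and the grayscale-array interface are assumptions:

```python
import numpy as np

def frame_difference_candidates(prev_frame, frame, next_frame, threshold=30.0):
    """Mark pixels whose value differs strongly from BOTH the previous
    and the next frame (the bidirectional variant described in the text;
    the threshold value is an illustrative assumption)."""
    d_prev = np.abs(frame.astype(float) - prev_frame.astype(float))
    d_next = np.abs(frame.astype(float) - next_frame.astype(float))
    return (d_prev > threshold) & (d_next > threshold)
```

Requiring both differences to exceed the threshold suppresses pixels that change only momentarily in one direction, such as noise confined to a single adjacent frame.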
- A pixel having either one of the above two properties may be used as an image boundary line candidate pixel, but it is preferable that a pixel having both of the above properties be an image boundary line candidate pixel.
- pixels having the respective properties may be obtained separately, and pixels having both properties may be used as image boundary line candidate pixels.
- a pixel having one of the properties may be obtained first, and a pixel having the other property may be detected and used as an image boundary line candidate pixel.
- Another property of the pixels constituting the image boundary line is that they belong to a region where motion vectors vary. This is because the pixels that make up the image boundary line are on the moving boundary between two images.
- A region where motion vectors vary is a region in which the directions and magnitudes of the motion vectors of mutually close points are not aligned.
- Specifically, the motion vector of a pixel or small region and the motion vectors of a plurality of surrounding pixels or small regions are calculated, and a pixel or small region whose calculated motion vector variation degree is equal to or greater than a certain value can be regarded as belonging to a region where motion vectors vary.
- The image boundary line candidate pixel detection unit 111 can, for example, obtain the average vector of a plurality of target motion vectors and use the average of the inter-vector distances between each motion vector and the average vector as the degree of motion vector variation.
- When the degree of motion vector variation is calculated in this way, it is 0 if the directions and magnitudes of the target motion vectors are all the same, and it increases as their directions and magnitudes differ.
- A method for calculating a motion vector is described, for example, in pp. 1495-1498 of the “New Image Analysis Handbook”.
- FIG. 5 is an explanatory diagram showing a total of nine blocks, consisting of a certain block (or pixel) and its surrounding blocks, together with their motion vectors. These motion vectors are expressed by Equation (1), and their average is expressed by Equation (2).
- The motion vector variation degree V can be calculated, as shown in Equation (3), as the average inter-vector distance between the motion vectors expressed by Equation (1) and the average vector expressed by Equation (2).
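The variation degree described by Equations (1) to (3) can be sketched as follows; the two-dimensional vector representation and the function name are illustrative:

```python
import numpy as np

def motion_vector_variation(vectors):
    """Variation degree V of a set of motion vectors: the average
    Euclidean distance between each vector and their mean vector
    (a sketch of Equations (1)-(3) as described in the text)."""
    v = np.asarray(vectors, dtype=float)   # shape (n, 2); n = 9 in FIG. 5
    mean = v.mean(axis=0)                  # Eq. (2): average motion vector
    return float(np.linalg.norm(v - mean, axis=1).mean())  # Eq. (3)
```

Identical vectors give V = 0, and V grows as the directions or magnitudes of the vectors diverge, matching the behavior described above.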
- The image boundary line candidate pixel detection unit 111 calculates the degree of motion vector variation for each block (or pixel).
- The image boundary line candidate pixel detection unit 111 can then detect all pixels belonging to a block (or a pixel) whose calculated motion vector variation degree is equal to or greater than a certain value as pixels belonging to a region where motion vectors vary. Note that the method of detecting pixels belonging to a region where motion vectors vary described here is an example, and the present invention is not limited to this method. The pixels thus detected can be used as image boundary line candidate pixels.
- pixels belonging to the region where the motion vectors vary may be used as image boundary line candidate pixels as they are.
- However, it is preferable to use, as image boundary line candidate pixels, pixels that belong to a region where motion vectors vary and that also have either one or both of the two properties described above.
- the image boundary line candidate pixel detection unit 111 may extract pixels around the detected image boundary line candidate pixels by expansion processing, and add the peripheral pixels to the image boundary line candidate pixels.
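The expansion processing mentioned above can be sketched as a 3x3 binary dilation; the neighbourhood size is an assumption, since the patent does not specify one:

```python
import numpy as np

def dilate_candidates(mask):
    """3x3 binary dilation: add every pixel in the 8-neighbourhood of a
    candidate pixel to the candidate set (one step of the expansion
    processing; the 3x3 neighbourhood is an illustrative choice)."""
    padded = np.pad(mask, 1)               # zero (False) border
    out = np.zeros_like(mask)
    for dy in range(3):
        for dx in range(3):
            out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out
```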
- the image boundary line candidate pixel information may be any information as long as the information specifies the image boundary line candidate pixels detected for each frame.
- the image boundary line candidate pixel information may be binary image information in which all pixels in the frame are expressed as binary values as to whether they are image boundary line candidate pixels. Further, the image boundary line candidate pixel information may be a list indicating the positions of all detected image boundary line candidate pixels.
- The line extraction unit 112 receives the image boundary line candidate pixel information of each frame output from the image boundary line candidate pixel detection unit 111 and, for each frame, extracts a line formed by the image boundary line candidate pixels indicated by the image boundary line candidate pixel information as an image boundary line. Then, the line extraction unit 112 outputs, for each frame, image boundary line description information describing the extracted image boundary line.
- The lines formed by the image boundary line candidate pixels extracted by the line extraction unit 112 may be limited to straight lines. However, since there are, though rarely, special effects that include image boundary lines other than straight lines, such as curves, the extracted lines should not be limited to straight lines when such special effects are to be detected.
- As a method for extracting the lines formed by the image boundary line candidate pixels, any method that extracts lines from a set of pixels may be used. An example of a method for extracting lines is described in pp. 1246-1260 of the “New Image Analysis Handbook”.
- Hough transform is a method of extracting figures (for example, straight lines, circles, ellipses, parabolas) that can be described by parameters from an image by voting on the parameter space.
- the Hough transform is particularly effective as a method for extracting a straight line.
- the straight line extraction method based on the Hough transform is described, for example, on pp. 1254-1256 of the "New Edition Image Analysis Handbook". In the Hough transform taking image boundary line candidate pixels as input, votes are cast in the parameter space for all the straight lines passing through each image boundary line candidate pixel, and the line extraction unit 112 extracts the straight lines that collect a large number of votes.
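The voting scheme just described can be sketched as follows. This is an illustrative implementation rather than the patent's own code; the number of angle bins and the vote threshold are assumed values.

```python
import math

def hough_lines(candidate_pixels, width, height, theta_bins=180, vote_threshold=25):
    """For every image boundary line candidate pixel, vote for all straight
    lines x*cos(theta) + y*sin(theta) = rho passing through it, then keep the
    (rho, theta) bins that collect many votes. Thresholds are assumptions."""
    rho_offset = int(math.hypot(width, height))  # shift rho so bin keys are non-negative
    votes = {}
    for x, y in candidate_pixels:
        for t in range(theta_bins):
            theta = t * math.pi / theta_bins
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            key = (rho + rho_offset, t)
            votes[key] = votes.get(key, 0) + 1
    return [(rho - rho_offset, t * math.pi / theta_bins)
            for (rho, t), v in votes.items() if v >= vote_threshold]
```

For a vertical line of candidate pixels at x = 10, the bin near (rho = 10, theta = 0) collects one vote per pixel and stands out as a peak.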
- the line extraction unit 112 can use, as an image boundary line, a straight line extracted by the Hough transform with the image boundary line candidate pixels as input.
- since the Hough transform can extract any figure that can be described by parameters, it can also be applied when an image boundary line other than a straight line, such as a curve, is to be extracted.
- for example, the generalized Hough transform described on pp. 1256-1258 of the "New Edition Image Analysis Handbook" can detect figures of arbitrary shape, so using the generalized Hough transform allows image boundary lines of arbitrary shape to be extracted.
- the line extraction unit 112 may verify, for each extracted image boundary line, whether or not the image boundary line candidate pixels constituting it are present continuously along the line. When the image boundary line candidate pixels are not continuous, the line extraction unit 112 may judge the image boundary line to be inappropriate and exclude it. For example, the line extraction unit 112 may measure the length over which image boundary line candidate pixels continue along the image boundary line, and exclude image boundary lines for which that length is equal to or less than a certain threshold.
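The continuity check described above can be sketched as below: sampling candidate-pixel flags along the extracted line and measuring the longest continuous run is one simple realization (the sampled-flag representation and the idea of thresholding the run length are assumptions).

```python
def longest_candidate_run(flags):
    """flags[i] is True when the i-th pixel sampled along the extracted
    image boundary line is an image boundary line candidate pixel; return
    the longest continuous run of candidate pixels."""
    best = current = 0
    for f in flags:
        current = current + 1 if f else 0
        best = max(best, current)
    return best
```

A line whose longest run is at or below a length threshold would then be excluded as inappropriate.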
- the image boundary line description information is information describing the image boundary line extracted in each frame.
- for example, when the image boundary line is a straight line, the image boundary line description information may be a set of parameters describing the straight line.
- for example, ρ is the length of the perpendicular dropped from the origin of the (x, y) coordinate system defined on the frame to the straight line, and θ is the angle that the perpendicular makes with the horizontal axis (x axis); the two-dimensional parameter (ρ, θ) may be used as the image boundary line description information.
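As an illustration of how the (ρ, θ) parameter determines the pixels of the described boundary line, the following sketch enumerates the frame pixels lying on the straight line x·cos(θ) + y·sin(θ) = ρ (the half-pixel tolerance is an assumption):

```python
import math

def pixels_on_line(rho, theta, width, height, tol=0.5):
    """Return the frame pixels (x, y) lying on the straight line
    x*cos(theta) + y*sin(theta) = rho, within a half-pixel tolerance."""
    c, s = math.cos(theta), math.sin(theta)
    return [(x, y) for y in range(height) for x in range(width)
            if abs(x * c + y * s - rho) < tol]
```

With θ = 0 this yields a vertical line at x = ρ; with θ = π/2, a horizontal line at y = ρ.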
- the image boundary line description information may also be information identifying all the pixels constituting the image boundary line. Further, when the image boundary line description information is supplied only to the image boundary line possessing frame section detection unit 211 described later (as in the first embodiment), then, corresponding to the processing executed by the image boundary line possessing frame section detection unit 211, the image boundary line description information may be binary information indicating whether or not each frame contains an image boundary line. Note that this description of the image boundary line description information applies to all the embodiments hereinafter.
- the special effect detection unit 21 detects a frame section including a special effect using the image boundary line description information of each frame output from the image boundary line extraction unit 11, and outputs special effect frame section information, which is information for specifying the detected frame section.
- the special effect detection unit 21 includes an image boundary line possessing frame section detection unit 211.
- the image boundary line possessing frame section detection unit 211 determines, using the image boundary line description information of each frame output from the image boundary line extraction unit 11, whether or not each frame has an image boundary line. The image boundary line possessing frame section detection unit 211 then detects a frame section in which frames having image boundary lines continue as a frame section including a special effect, and outputs special effect frame section information, which is information for specifying the detected frame section.
- the frame section detected here need not be a frame section composed of a plurality of frames; there is no problem even if a single frame having an image boundary line is detected as a frame section including a special effect.
- for example, N may be set as the minimum number of frames of the frame section to be detected, and a frame section in which frames having image boundary lines continue for N frames or more may be detected.
- FIG. 6 is an explanatory diagram showing an example of a frame section in which frames having image boundary lines are continuous.
- FIG. 6 shows the frame sequence of the video in time series, where 1 denotes a frame having an image boundary line and 0 denotes a frame having no image boundary line. In this example, frames that do not contain an image boundary line are allowed in the middle of the frame section.
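The section detection illustrated in FIG. 6 might look as follows; the minimum section length and the number of tolerated gap frames are assumed parameters, not values from the text.

```python
def boundary_line_sections(has_line, min_frames=3, max_gap=1):
    """Scan a per-frame 0/1 sequence (1 = frame has an image boundary line)
    and return (start, end) frame indices of sections in which frames having
    image boundary lines continue, tolerating up to max_gap consecutive
    frames without a boundary line inside a section."""
    sections, start, end, gap = [], None, None, 0
    for i, v in enumerate(has_line):
        if v:
            if start is None:
                start = i
            end, gap = i, 0
        elif start is not None:
            gap += 1
            if gap > max_gap:
                # section ended; keep it only if long enough
                if end - start + 1 >= min_frames:
                    sections.append((start, end))
                start = None
    if start is not None and end - start + 1 >= min_frames:
        sections.append((start, end))
    return sections
```

On the FIG. 6-style sequence 0110111000100, this returns the single section spanning frames 1 through 6, since the isolated 1 near the end is too short.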
- the special effect frame section information is information for specifying a frame section including the detected special effect, and is information indicating the first frame and the last frame of the frame section, for example. Note that the description of the special effect frame section information described here is applicable to all embodiments thereafter.
- the special effect frame section information output as described above can be used for input video playback control. That is, in addition to the above-described configuration, a video playback control device that performs playback control of input video based on special effect frame section information output by the special effect detection device can be provided.
- in a video playback device including such a special effect detection device and video playback control device, playback can be controlled, for example, using the frame section indicated by the special effect frame section information as a playback start point candidate or a playback end point candidate.
- for example, the video playback control device may use an arbitrary frame within the frame section indicated by the special effect frame section information as a playback start point candidate, and perform cued playback from that start point candidate frame in response to a user's playback instruction (for example, a remote control operation).
- suppose that the first frame and the last frame of the frame section indicated by the special effect frame section information are frame F1 and frame F2, respectively.
- the video before the special effect frame section is video A, and the video after it is video B.
- the video playback control device may play back up to frame F2 when there is a playback instruction for video A, and start playback from frame F1 when there is a playback instruction for video B.
- special effects are used at semantically important parts of the video, or at parts the editor particularly wants to emphasize, such as the start of a new corner, the start of a topic, or a scene transition point. Therefore, by performing playback control using the special effect frame section information output by the special effect detection device, the video can be viewed in semantically meaningful units such as topics and corners. This allows the user to quickly access the part of the video he or she wants to view, enabling efficient viewing.
- the point described here, that the special effect frame section information output by the special effect detection device can be used for playback control of the input video, applies to all the embodiments hereinafter. In other words, in addition to the special effect detection device of any of the embodiments, a video playback control device that performs playback control of the input video based on the special effect frame section information output by that special effect detection device can be provided.
- a new frame is acquired from the input video and supplied to the image boundary line candidate pixel detection unit 111 (step A01).
- immediately after the start of processing, the start frame of the video is acquired as the new frame.
- the image boundary line candidate pixel detection unit 111 detects image boundary line candidate pixels from the frame, and outputs image boundary line candidate pixel information specifying the detected image boundary line candidate pixels (step A02).
- the line extraction unit 112 extracts a line formed by the image boundary line candidate pixels indicated by the image boundary line candidate pixel information as an image boundary line, and outputs image boundary line description information describing the extracted image boundary line (step A03).
- the image boundary line possessing frame section detection unit 211 newly detects, using the image boundary line description information output up to the current frame, a frame section in which frames having image boundary lines continue (step A04). To prevent duplicate detection of frame sections, it is sufficient, for example, to detect a frame section only when the frame section having image boundary lines ends at the current frame.
- if a new frame section in which frames having image boundary lines continue is detected, the process proceeds to step A05; otherwise, it proceeds to step A06.
- the image boundary line possessing frame section detection unit 211 sets the detected frame section as a frame section including a special effect, and outputs special effect frame section information specifying that frame section (step A05).
- finally, it is determined whether or not the current frame is the end frame (step A06). If it is the end frame, the process ends; if not, the process returns to step A01, acquires the next frame of the video as a new frame, and continues. In this way, the processing from steps A01 to A06 is repeated until the end frame is reached.
- the first embodiment uses the property that an image boundary line is generally included in the frames constituting a special effect, regardless of the effect's pattern, and is not included in frames of video changes other than special effects, such as camera motion.
- in the first embodiment, the image boundary line extraction unit 11 extracts image boundary lines from each frame, and the special effect detection unit 21 detects frame sections including special effects based on the extracted image boundary lines. Therefore, the first embodiment has the effect that special effects can be detected generically, without depending on their pattern, and with high accuracy, without erroneously detecting video changes other than special effects.
- FIG. 8 is a block diagram showing a second embodiment of the special effect detection device according to the present invention.
- the special effect detection device according to the second exemplary embodiment of the present invention includes an image boundary line extraction unit 11 and a special effect detection unit 22.
- the second embodiment differs from the first embodiment in that the special effect detection unit 21 in the first embodiment shown in FIG. 3 is replaced with a special effect detection unit 22. Since the image boundary line extraction unit 11 is the same as in the first embodiment, its description is omitted here.
- the special effect detection unit 22 uses the image boundary line description information of each frame output from the image boundary line extraction unit 11 to detect a frame section including a special effect and to output special effect frame section information, which is information for specifying that frame section, but its configuration differs from that of the special effect detection unit 21 in the first embodiment.
- the special effect detection unit 22 includes an image boundary continuous moving frame section detection unit 221.
- the image boundary line continuous movement frame section detection unit 221 detects a frame section in which the image boundary line indicated by the image boundary line description information of each frame output from the image boundary line extraction unit 11 moves continuously, as a frame section including a special effect. The image boundary line continuous movement frame section detection unit 221 then outputs special effect frame section information, which is information for specifying the detected frame section.
- in a special effect, the image boundary line 9 moves continuously between frames.
- the fact that the image boundary line 9 continuously moves means that the image boundary line 9 moves in the frame while gradually changing its position and inclination with time.
- the vertical image boundary line 9 continuously moves across the frame from the left to the right.
- the image boundary line 9 on the lower side gradually moves toward the bottom of the frame.
- the image boundary line 9 on the left side gradually moves from the left to the right of the frame.
- for example, a parameter describing the image boundary line may be represented as a feature point in a parameter space, and a frame section in which the feature point representing the image boundary line moves continuously in the parameter space may be detected.
- a specific example is shown using the two-dimensional parameter (ρ, θ) described in the first embodiment as the parameter describing the image boundary line.
- FIG. 10 is an explanatory diagram in which the image boundary lines extracted by the image boundary line extraction unit 11 in a frame section including a special effect are expressed as two-dimensional parameters (ρ, θ) and plotted as feature points in the (ρ, θ) parameter space. Since the image boundary line in a special effect moves continuously between frames, the feature points representing the parameters describing the image boundary line also trace a trajectory that moves continuously over time in the parameter space, as shown in FIG. 10. To extract a frame section in which the feature points representing image boundary lines move continuously over time in the parameter space, the continuity between feature points may be determined by evaluating the distance between them in the parameter space. For example, the distance in the parameter space between the feature points representing the image boundary lines extracted in adjacent frames is calculated, and if the distance is within a certain range, the image boundary lines of those frames are determined to be continuous.
- the image boundary continuous moving frame section detection unit 221 sequentially performs this process between adjacent frames.
- when the number of frames over which the feature points are determined to be continuous exceeds a certain number, the image boundary line continuous movement frame section detection unit 221 can detect that frame section as a frame section in which the image boundary line moves continuously.
- the image boundary line continuous movement frame section detection unit 221 may perform feature point prediction when determining the continuity between feature points in the parameter space. For example, the image boundary line continuous movement frame section detection unit 221 calculates a predicted point for the feature point of a certain frame (referred to as the current frame) from the feature points of frames preceding the current frame, and can determine that the feature points are continuous if the distance between the predicted point and the feature point actually extracted from the current frame is within a certain threshold.
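A minimal sketch of this prediction-based continuity test, assuming simple linear extrapolation from the two preceding feature points and a Euclidean distance threshold (both the extrapolation scheme and the threshold value are assumptions, since the text leaves the prediction method open):

```python
import math

def continuous_by_prediction(prev2, prev1, current, threshold=5.0):
    """Extrapolate the current frame's (rho, theta) feature point linearly
    from the feature points of the two preceding frames, and accept
    continuity when the actually extracted point lies within `threshold`
    of the prediction."""
    predicted = (2 * prev1[0] - prev2[0], 2 * prev1[1] - prev2[1])
    distance = math.hypot(current[0] - predicted[0], current[1] - predicted[1])
    return distance <= threshold
```

A boundary line sweeping at a steady rate (ρ = 10, 20, 30, ...) is accepted, while a sudden jump is rejected.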
- the image boundary line continuous movement frame section detection unit 221 may also allow a certain number of outliers when determining the continuity of feature points in the parameter space.
- in a special effect, the image boundary line 9 normally moves continuously from one end of the frame to the other end.
- the edge of the frame refers to the area inside the frame near the outer frame of the frame.
- the image boundary line usually appears first at one end of the frame, moves continuously through the frame over time, and finally disappears at the other end of the frame.
- therefore, the image boundary line continuous movement frame section detection unit 221 may detect, from among the frame sections in which the image boundary line moves continuously, a frame section in which the image boundary line moves continuously from one end of the frame to the other end, as a frame section including a special effect.
- to determine whether an image boundary line exists at the end of the frame, the image boundary line continuous movement frame section detection unit 221 calculates, for example, the distance from the outer frame of the frame to the image boundary line. When the distance is within a certain threshold, it can determine that the image boundary line exists at the end of the frame, and when the distance is equal to or greater than the threshold, it can determine that it does not.
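One way to realize this frame-edge test, taking the boundary line as a set of pixels; the particular distance measure (the smallest, over the four borders, of the line's farthest pixel from that border) and the threshold value are assumptions:

```python
def distance_to_outer_frame(line_pixels, width, height):
    """Distance from the frame's outer border to a boundary line: for each
    of the four borders take the line pixel farthest from it, then take the
    border for which that farthest distance is smallest. A small value means
    the whole line hugs one border, i.e. lies at the end of the frame."""
    left = max(x for x, _ in line_pixels)
    right = max(width - 1 - x for x, _ in line_pixels)
    top = max(y for _, y in line_pixels)
    bottom = max(height - 1 - y for _, y in line_pixels)
    return min(left, right, top, bottom)

def at_frame_end(line_pixels, width, height, threshold=8):
    """Thresholded judgement as described in the text (threshold assumed)."""
    return distance_to_outer_frame(line_pixels, width, height) <= threshold
```

A vertical line at x = 5 in a 100-pixel-wide frame is judged to be at the frame end, while one at x = 50 is not.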
- a special effect is a gradual change in which the videos before and after the switch change while gradually altering their spatial occupancy. Therefore, of the two image areas divided by the image boundary line (for a vertical image boundary line, the left and right image areas; for a horizontal image boundary line, the upper and lower image areas), the image area whose area decreases over time belongs to the video before the switch, and the image area whose area increases over time belongs to the video after the switch.
- the image area whose area is decreasing in time is not similar to the frame of the video after switching, and is partially similar to the frame of the video before switching.
- the image area whose area is increasing in time is not similar to the video frame before switching, but is partially similar to the video frame after switching.
- the image boundary line continuous movement frame section detection unit 221 may detect a frame section in which the image boundary line moves continuously as a frame section including a special effect only when the frame section further satisfies this property. In other words, for a detected frame section in which the image boundary line moves continuously, when, of the two image areas into which each frame is divided by its image boundary line, the image area whose area decreases over time is partially similar to the frames before the frame section and the image area whose area increases over time is partially similar to the frames after the frame section, the frame section may be detected as a frame section including a special effect.
- the frame before/after the frame section may be the frame immediately before/after the frame section, or may be a frame a predetermined number of frames before/after it.
- the frames before/after the frame section may each be a plurality of frames, for example, the N frames (N being a number of frames) before/after the frame section.
- any method may be used to determine the similarity between an image region and a frame.
- for example, the similarity (or distance) between an image area and a frame may be calculated using the statistical properties (image features) of the pixels contained in each, and whether or not the image area and the frame are similar may be determined by thresholding that value.
- the statistical properties (image features) of pixels include, for example, luminance or color histograms, luminance or color averages, variances, and texture information.
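A sketch of such a similarity computation using one of the listed features, a normalized luminance histogram compared by histogram intersection (the bin count and the choice of intersection as the similarity measure are assumptions):

```python
def luminance_histogram(pixels, bins=8):
    """Normalized histogram of 0-255 luminance values."""
    h = [0.0] * bins
    for v in pixels:
        h[min(v * bins // 256, bins - 1)] += 1
    return [count / len(pixels) for count in h]

def region_frame_similarity(region_pixels, frame_pixels, bins=8):
    """Histogram-intersection similarity between an image area and a frame:
    1.0 for identical distributions, 0.0 for disjoint ones. Thresholding
    this value gives the similar / not-similar decision in the text."""
    ha = luminance_histogram(region_pixels, bins)
    hb = luminance_histogram(frame_pixels, bins)
    return sum(min(a, b) for a, b in zip(ha, hb))
```

Because the histograms are normalized, the image area and the frame can have different pixel counts, which is what allows an area to be judged "partially similar" to a whole frame.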
- the image boundary line continuous movement frame section detection unit 221 may perform the similarity determination for each frame, and detect the frame section as a frame section including a special effect when the proportion of frames satisfying the above property exceeds a certain ratio.
- alternatively, instead of calculating the similarity for each frame, the image boundary line continuous movement frame section detection unit 221 may calculate similarities over the entire frame section (the similarity between the increasing image area and the frames before the frame section, the similarity between the increasing image area and the frames after the frame section, the similarity between the decreasing image area and the frames before the frame section, and the similarity between the decreasing image area and the frames after the frame section), determine whether or not the above property is satisfied for the frame section as a whole, and, if it is satisfied, detect the frame section as a frame section including a special effect.
- when determining the similarity between the image areas divided by the image boundary line and the frames before and after the frame section, the image boundary line continuous movement frame section detection unit 221 need not use the entire image area; it may determine the similarity with the frames before and after the frame section using only part of each image area. For example, the image boundary line continuous movement frame section detection unit 221 may use only the part of each image area close to the image boundary line. Further, the image boundary line continuous movement frame section detection unit 221 may use only the part of each image area sandwiched between the image boundary line of the current frame and the image boundary line of an adjacent frame.
- a new frame is acquired from the input video and supplied to the image boundary line candidate pixel detection unit 111 (step B01).
- immediately after the start of processing, the start frame of the video is acquired as the new frame.
- the image boundary line candidate pixel detection unit 111 detects image boundary line candidate pixels from the frame, and outputs image boundary line candidate pixel information specifying the detected image boundary line candidate pixels (step B02).
- the line extraction unit 112 extracts, as an image boundary line, a line formed by the image boundary line candidate pixels indicated by the image boundary line candidate pixel information, and outputs image boundary line description information describing the extracted image boundary line (step B03).
- the image boundary line continuous movement frame section detection unit 221 newly detects, using the image boundary line description information output up to the current frame, a frame section in which the image boundary line indicated by that information moves continuously (step B04). To prevent duplicate detection of frame sections, it is sufficient, for example, for the image boundary line continuous movement frame section detection unit 221 to detect a frame section only when the frame section in which the image boundary line moves continuously ends at the current frame. If a new frame section in which the image boundary line moves continuously is detected, the process proceeds to step B05; otherwise, it proceeds to step B06.
- the image boundary line continuous movement frame section detection unit 221 sets the detected frame section as a frame section including a special effect, and outputs special effect frame section information specifying that frame section (step B05). Finally, it is determined whether or not the current frame is the end frame (step B06). If it is the end frame, the process ends. If the current frame is not the end frame, the process returns to step B01, acquires the next frame of the video as a new frame, and continues. In this way, the processing from steps B01 to B06 is repeated until the end frame is reached.
- the second embodiment uses the property that, in a special effect, the image boundary line moves continuously between frames. Since a frame section in which the image boundary line moves continuously is detected as a frame section including a special effect, the second embodiment, like the first embodiment, has the effect that special effects can be detected generically, without depending on their pattern, and with high accuracy, without erroneously detecting video changes other than special effects.
- furthermore, the second embodiment detects the special effect based not merely on the presence or absence of an image boundary line but on whether or not the image boundary line moves continuously. Therefore, the second embodiment can detect special effects with higher accuracy than the first embodiment, which detects them based on the presence or absence of an image boundary line.
- FIG. 12 is a block diagram showing a third embodiment of the special effect detection device according to the present invention.
- the special effect detection device according to the third exemplary embodiment of the present invention includes an image boundary line extraction unit 11 and a special effect detection unit 23.
- the third embodiment differs from the first embodiment in that the special effect detection unit 21 in the first embodiment shown in FIG. 3 is replaced with a special effect detection unit 23. Since the image boundary line extraction unit 11 is the same as in the first embodiment, its description is omitted here.
- the special effect detection unit 23 uses the image boundary line description information of each frame output by the image boundary line extraction unit 11 to detect a frame section including a special effect and to output special effect frame section information, which is information for specifying that frame section, but its configuration differs from that of the special effect detection unit 21 in the first embodiment.
- the special effect detection unit 23 includes an image boundary line combination extraction unit 231 and an image boundary line combination possession frame section detection unit 232.
- the image boundary line combination extraction unit 231 extracts a combination of a plurality of image boundary lines indicated by the image boundary line description information of each frame output from the image boundary line extraction unit 11.
- the image boundary line combination extraction unit 231 then outputs, for each frame, image boundary line combination information, which is information describing the extracted combination of image boundary lines.
- the combination of image boundary lines is desirably a combination that forms, within a frame constituting a DVE among special effects, an image frame indicating the display area of a further video superimposed on top of the two videos before and after the switch.
- FIGS. 13A to 13F are explanatory diagrams showing examples of combinations of image boundary lines 9 forming the image frame as described above.
- for example, the combination of image boundary lines extracted by the image boundary line combination extraction unit 231 may be limited to combinations of image boundary lines forming a rectangle.
- however, the combination of image boundary lines extracted by the image boundary line combination extraction unit 231 need not be limited to combinations of image boundary lines forming a rectangle.
- the image frame formed by the image boundary lines 9 does not necessarily fit within the frame, and therefore the combination of image boundary lines extracted by the image boundary line combination extraction unit 231 need not be a combination of image boundary lines forming a closed figure.
- the combination of image boundary lines extracted by the image boundary line combination extraction unit 231 may also be a combination of image boundary lines 9 forming part of a rectangle (two or more sides), as shown in FIGS. 13E and 13F.
- an example is shown here of a method of extracting combinations of image boundary lines in which the combinations extracted by the image boundary line combination extraction unit 231 are limited to combinations of image boundary lines forming a rectangle or part of a rectangle.
- for each frame, a combination of image boundary lines forming a rectangle (or part of a rectangle) that satisfies predetermined conditions may be searched for from among all the combinations of the plurality of image boundary lines extracted in that frame by the image boundary line extraction unit 11.
- the conditions that can be predetermined here include the size of the rectangle, the positions of the intersections of the image boundary lines, and the angles at which the image boundary lines intersect. These conditions can be set, for example, by examining the rectangular image frames of special effects contained in videos prepared for learning.
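A sketch of such a condition-constrained search, assuming the lines are straight lines in (ρ, θ) form and taking a minimum rectangle side length as the predetermined condition (both the angle grouping tolerance and the size condition are assumed values):

```python
import math

def rectangle_combinations(lines, min_side=20, angle_tol=0.1):
    """From extracted (rho, theta) straight lines, enumerate combinations of
    two near-vertical (theta ~ 0) and two near-horizontal (theta ~ pi/2)
    lines whose spacings both satisfy a minimum rectangle side length."""
    vertical = sorted(r for r, t in lines if abs(t) < angle_tol)
    horizontal = sorted(r for r, t in lines if abs(t - math.pi / 2) < angle_tol)
    combos = []
    for i, x1 in enumerate(vertical):
        for x2 in vertical[i + 1:]:
            if x2 - x1 < min_side:
                continue
            for j, y1 in enumerate(horizontal):
                for y2 in horizontal[j + 1:]:
                    if y2 - y1 >= min_side:
                        combos.append(((x1, x2), (y1, y2)))
    return combos
```

Conditions on intersection positions or intersection angles, as mentioned above, would be added as further filters inside the innermost loop.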
- the image boundary line combination information describes a combination of image boundary lines extracted in each frame.
- the image boundary line combination information is, for example, a set of the image boundary line description information describing each image boundary line of the extracted combination of image boundary lines (see the description of the image boundary line description information above).
- corresponding to the processing executed by the image boundary line combination possessing frame section detection unit 232, the image boundary line combination information may be binary information indicating whether or not each frame has a combination of image boundary lines. The description of the image boundary line combination information given here applies to all the embodiments hereinafter.
- the image boundary line combination possessing frame section detection unit 232 determines, using the image boundary line combination information of each frame output by the image boundary line combination extraction unit 231, whether or not each frame has a combination of image boundary lines.
- the image boundary line combination possessing frame section detection unit 232 then detects a frame section in which frames having combinations of image boundary lines continue as a frame section including a special effect, and outputs special effect frame section information, which is information for specifying that frame section.
- in the frame section detected here, in which frames having combinations of image boundary lines continue, not all frames need to have a combination of image boundary lines; the section may be allowed to include a certain number of frames without a combination of image boundary lines in its middle.
- also, the frame section detected here need not be a frame section composed of a plurality of frames; there is no problem even if a single frame having a combination of image boundary lines is detected as a frame section including a special effect.
- the method of detecting a frame section in which frames having combinations of image boundary lines continue may be the same as the method, described for the image boundary line possessing frame section detection unit 211 of the first embodiment, of detecting a frame section in which frames having image boundary lines continue.
- the image boundary line combination possessing frame section detection unit 232 may further analyze, within a detected frame section in which frames having combinations of image boundary lines continue, the temporal change in the area of the figure formed by the combination of image boundary lines of each frame. The image boundary line combination possessing frame section detection unit 232 may then detect the above frame section as a frame section including a special effect only when the area satisfies a certain criterion of temporal change. For example, the image boundary line combination possessing frame section detection unit 232 may detect the above frame section as a frame section including a special effect when the area of the figure formed by the combination of image boundary lines of each frame monotonically increases or monotonically decreases over time.
- this is because, in a DVE among special effects, the area of the image frame, formed by the combination of image boundary lines, that indicates the display area of the video superimposed on the two videos before and after the switch usually increases monotonically (e.g., (A), (D), (F), (H) in FIG. 2) or decreases monotonically (e.g., (B), (C), (E), (G) in FIG. 2).
- Therefore, if a frame section in which frames having combinations of image boundary lines continue is detected as a frame section including a special effect only in such a case, the DVE among special effects can be detected more accurately.
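- The monotonicity criterion above can be sketched as follows. This is a minimal illustration, not the patented procedure itself; the function name and the jitter tolerance `tol` are assumptions introduced here.

```python
def is_monotonic(areas, tol=0.0):
    """True if the per-frame areas increase or decrease monotonically.

    `areas` holds, for each frame of the section, the area of the
    figure formed by the combination of image boundary lines.
    `tol` allows small non-monotonic jitter from extraction noise.
    """
    if len(areas) < 2:
        return False
    diffs = [b - a for a, b in zip(areas, areas[1:])]
    increasing = all(d >= -tol for d in diffs)
    decreasing = all(d <= tol for d in diffs)
    return increasing or decreasing
```

A section would pass this test when, for example, the superimposed image frame grows steadily during a wipe, and fail it when the area fluctuates as in ordinary camera motion.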
- A new frame is acquired from the input video and supplied to the image boundary line candidate pixel detection unit 111 (step C01).
- When step C01 is executed for the first time, the new frame is set as the start frame.
- The image boundary line candidate pixel detection unit 111 detects image boundary line candidate pixels from the frame and outputs image boundary line candidate pixel information specifying the detected image boundary line candidate pixels (step C02).
- The line extraction unit 112 extracts a line formed by the image boundary line candidate pixels indicated by the image boundary line candidate pixel information as an image boundary line, and outputs image boundary line description information describing the extracted image boundary line (step C03).
- The image boundary line combination extraction unit 231 extracts a combination of a plurality of image boundary lines indicated by the image boundary line description information and outputs image boundary line combination information describing the extracted combination of image boundary lines (step C04).
- The image boundary line combination possessing frame section detection unit 232 uses the image boundary line combination information output up to the current frame to newly detect a frame section in which frames having a combination of image boundary lines continue (step C05). To prevent duplication of detected frame sections, the frame section may be detected, for example, only when the frame section having the image boundary line combination ends at the current frame. If a new frame section in which frames having a combination of image boundary lines continue is detected, the process proceeds to step C06; otherwise, it proceeds to step C07.
- The image boundary line combination possessing frame section detection unit 232 sets the detected frame section as a frame section including a special effect and outputs special effect frame section information identifying the frame section (step C06). Finally, it is determined whether or not the current frame is the end frame (step C07). If it is the end frame, the process ends. If not, the process returns to step C01, the next frame of the video is acquired as a new frame, and the processing continues. Thus, steps C01 to C07 are executed until the end frame is reached.
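- The frame-by-frame loop of steps C01 to C07 can be sketched as follows. The four callables are hypothetical stand-ins for the units 111, 112, 231 and 232 described above, and the tracker interface is an assumption introduced for illustration.

```python
def detect_special_effect_sections(frames,
                                   detect_candidate_pixels,   # step C02 (unit 111)
                                   extract_boundary_lines,    # step C03 (unit 112)
                                   extract_line_combinations, # step C04 (unit 231)
                                   section_tracker):          # steps C05-C06 (unit 232)
    """Loop corresponding to steps C01-C07.

    `section_tracker.update(index, combos)` is assumed to return a
    completed (start, end) section, or None, once a run of frames
    having an image boundary line combination has ended.
    """
    sections = []
    for index, frame in enumerate(frames):                  # C01 / C07
        pixels = detect_candidate_pixels(frame)             # C02
        lines = extract_boundary_lines(pixels)              # C03
        combos = extract_line_combinations(lines)           # C04
        finished = section_tracker.update(index, combos)    # C05
        if finished is not None:                            # C06
            sections.append(finished)
    return sections
```

Detecting a section only when its run of combination-bearing frames ends, as the text suggests, keeps each section from being reported more than once.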
- The image frame formed by the combination of image boundary lines is included in the frames constituting the DVE among special effects, but is not included in frames of video changes other than special effects.
- A combination of image boundary lines is extracted from each frame, and a frame section including a special effect is detected based on the extracted combination of image boundary lines. Therefore, the DVE among special effects can be detected with high accuracy without erroneously detecting video changes other than special effects.
- Furthermore, a special effect is detected based on a combination of a plurality of image boundary lines. Therefore, compared to the first embodiment, in which the special effect is detected based only on a single image boundary line, the DVE among special effects can be detected with higher accuracy.
- FIG. 15 is a block diagram showing a fourth embodiment of the special effect detection device according to the present invention.
- the special effect detection device according to the fourth exemplary embodiment of the present invention includes an image boundary line extraction unit 11 and a special effect detection unit 24.
- The fourth embodiment differs from the first embodiment in that the special effect detection unit 21 in the first embodiment shown in FIG. 3 is replaced with a special effect detection unit 24. Since the image boundary line extraction unit 11 is the same as the image boundary line extraction unit 11 in the first embodiment, its description is omitted here.
- Like the special effect detection unit 21, the special effect detection unit 24 uses the image boundary line description information of each frame output by the image boundary line extraction unit 11 to detect a frame section including a special effect, and outputs special effect frame section information, which is information specifying the frame section. However, its configuration differs from that of the special effect detection unit 21 in the first embodiment.
- the special effect detection unit 24 includes an image boundary line combination extraction unit 231 and an image boundary line combination continuous moving frame section detection unit 241. Since the image boundary line combination extraction unit 231 is the same as the image boundary line combination extraction unit 231 in the third embodiment, the description thereof is omitted here.
- The image boundary line combination continuous movement frame section detection unit 241 detects a frame section in which the combination of image boundary lines indicated by the image boundary line combination information output by the image boundary line combination extraction unit 231 continuously moves, thereby detecting a frame section including a special effect, and outputs special effect frame section information, which is information specifying the frame section.
- the frame section in which the combination of image boundary lines continuously moves refers to the frame section in which each image boundary line of the combination of image boundary lines continuously moves. However, not all image boundary lines in the combination of image boundary lines need to move continuously.
- Even when only some of the image boundary lines in the combination move continuously, the image boundary line combination continuous movement frame section detection unit 241 may detect the frame section as a frame section in which the combination of image boundary lines continuously moves.
- One detection method is to represent the parameters describing each image boundary line of the combination of image boundary lines extracted from each frame as feature points in a parameter space, detect a frame section in which each feature point continuously moves with time in the parameter space, and detect that frame section as a frame section in which the combination of image boundary lines continuously moves.
- FIG. 16 is an explanatory drawing exemplifying how the feature points representing the image boundary lines of a combination of image boundary lines continuously move with time in the parameter space in a frame section including a special effect. Even if only the feature points representing some of the image boundary lines in the combination move continuously over time, the frame section may be detected as a frame section in which the combination of image boundary lines continuously moves.
- The method of detecting a frame section in which the feature points representing each image boundary line continuously move in the parameter space is the same as the method described for the image boundary line continuous movement frame section detection unit 221 in the second embodiment.
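- The parameter-space check can be sketched as follows. This is an illustrative sketch, not the source's exact method: each line is assumed to be described by (rho, theta) parameters, and the step threshold and fraction are hypothetical.

```python
import math

def moves_continuously(tracks, max_step, min_fraction=1.0):
    """Check continuous movement of feature points in (rho, theta) space.

    `tracks` is a list of per-line trajectories; each trajectory is a
    list of (rho, theta) points, one per frame.  A trajectory counts
    as continuous if every frame-to-frame step is at most `max_step`,
    and the combination counts as continuously moving if at least
    `min_fraction` of the trajectories are continuous (the text allows
    only part of the lines to move continuously).
    """
    def step(p, q):
        return math.hypot(q[0] - p[0], q[1] - p[1])

    def continuous(traj):
        return all(step(p, q) <= max_step for p, q in zip(traj, traj[1:]))

    ok = sum(1 for traj in tracks if continuous(traj))
    return ok >= min_fraction * len(tracks)
```

Setting `min_fraction` below 1.0 mirrors the allowance that only some image boundary lines of the combination need to move continuously.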
- The image boundary line combination continuous movement frame section detection unit 241 may detect, as a frame section including a special effect, a frame section in which the combination of image boundary lines continuously moves from one end of the frame to the other end. That is, the unit 241 may detect, as a frame section including a special effect, a frame section in which each image boundary line of the combination continuously moves from one end of the frame to the other end. Note that the method described in the second embodiment may be used to detect a frame section in which each image boundary line moves from one end of the frame to the other end. Also, it is not always necessary for all image boundary lines of the combination to move continuously from one end of the frame to the other end.
- Even when only some of the image boundary lines of the combination move continuously from one end of the frame to the other end, the image boundary line combination continuous movement frame section detection unit 241 may detect the frame section as one in which the combination of image boundary lines continuously moves from one end of the frame to the other end.
- The image boundary line combination continuous movement frame section detection unit 241 may further analyze, in a frame section in which the detected combination of image boundary lines continuously moves, the temporal change of the area of the figure formed by the combination of image boundary lines of each frame. Then, the unit 241 may detect the above-described frame section as a frame section including a special effect only when the temporal change of the area satisfies a certain criterion. For example, the unit 241 may detect the frame section as a frame section including a special effect when the area of the figure formed by the combination of image boundary lines of each frame monotonically increases or decreases with time.
- In the DVE among special effects, of the two images before and after switching, the area of the image frame formed by the combination of image boundary lines, which indicates the display area of the image superimposed on the other, usually increases monotonically (for example, (A), (D), (F), (H) in FIG. 2) or decreases monotonically (for example, (B), (C), (E), (G) in FIG. 2).
- Therefore, if the frame section in which the combination of image boundary lines continuously moves is detected as a frame section including a special effect only when the area of the figure formed by the combination of image boundary lines of each frame monotonically increases or decreases with time, the DVE among special effects can be detected more accurately.
- Since the special effect is a video change in which the video before and after switching is gradually switched while the spatial occupation ratio gradually changes, of the two image areas divided by the combination of image boundary lines (the areas inside and outside the combination), the image area whose area decreases with time belongs to the video before switching, and the image area whose area increases with time belongs to the video after switching. For this reason, the image area whose area decreases with time is not similar to frames of the video after switching but is partially similar to frames of the video before switching. Likewise, the image area whose area increases with time is not similar to frames of the video before switching but is partially similar to frames of the video after switching.
- The image boundary line combination continuous movement frame section detection unit 241 may detect a frame section in which the combination of image boundary lines continuously moves as a frame section including a special effect when the frame section satisfies this property. That is, the unit 241 may further compare, of the two image areas into which each frame of the detected frame section is divided by the combination of image boundary lines of that frame, the image area whose area decreases with time against frames before the frame section and the image area whose area increases with time against frames after the frame section, detecting the frame section as one including a special effect only when the similarities described above hold.
- The frames before/after the frame section may be the frames immediately before/after the frame section, or may be frames a predetermined number of frames before/after it.
- The frames before/after the frame section may each be a plurality of frames, for example, N frames before/after the frame section (N is the number of frames).
- A new frame is acquired from the input video and supplied to the image boundary line candidate pixel detection unit 111 (step D01).
- When step D01 is executed for the first time, the new frame is set as the start frame.
- The image boundary line candidate pixel detection unit 111 detects image boundary line candidate pixels from the frame and outputs image boundary line candidate pixel information specifying the detected image boundary line candidate pixels (step D02).
- the line extraction unit 112 extracts a line formed by the image boundary line candidate pixel indicated by the image boundary line candidate pixel information as an image boundary line, and outputs image boundary line description information describing the extracted image boundary line (Step D03).
- The image boundary line combination extraction unit 231 extracts a combination of a plurality of image boundary lines indicated by the image boundary line description information and outputs image boundary line combination information describing the extracted combination of image boundary lines (step D04).
- The image boundary line combination continuous movement frame section detection unit 241 uses the image boundary line combination information output up to the current frame to newly detect a frame section in which the combination of image boundary lines indicated by the information continuously moves (step D05). To prevent duplication of detected frame sections, the unit 241 may, for example, detect the frame section only when the frame section in which the combination of image boundary lines continuously moves ends at the current frame. If a new frame section in which the combination of image boundary lines continuously moves is detected, the process proceeds to step D06; otherwise, it proceeds to step D07.
- The image boundary line combination continuous movement frame section detection unit 241 sets the frame section as a frame section including a special effect and outputs special effect frame section information identifying the frame section (step D06). Finally, it is determined whether or not the current frame is the end frame (step D07); if it is the end frame, the process ends. If not, the process returns to step D01, the next frame of the video is acquired as a new frame, and the processing continues. In this way, steps D01 to D07 are executed until the end frame is reached.
- In the fourth embodiment, a frame section in which the combination of image boundary lines continuously moves is detected as a frame section including a special effect. For this reason, while having the effects of the third embodiment, the fourth embodiment can detect the DVE among special effects with higher accuracy than the third embodiment, in which a frame section in which frames having a combination of image boundary lines continue is detected as a frame section including a special effect.
- FIG. 18 is a block diagram showing a fifth embodiment of the special effect detection device according to the present invention.
- the special effect detection device according to the fifth exemplary embodiment of the present invention includes an image boundary line extraction unit 12 and a special effect detection unit 21.
- The fifth embodiment differs from the first embodiment in that the image boundary line extraction unit 11 in the first embodiment shown in FIG. 3 is replaced with an image boundary line extraction unit 12.
- Although the configuration in which the image boundary line extraction unit 11 in the first embodiment is replaced is illustrated here, the image boundary line extraction unit 11 in the second, third, or fourth embodiment may be replaced instead.
- Like the image boundary line extraction unit 11, the image boundary line extraction unit 12 extracts, from each frame of the input video, an image boundary line that is a boundary line between two images existing in the frame, and outputs image boundary line description information, which is information describing the extracted image boundary line. However, its configuration differs from that of the image boundary line extraction unit 11 in the first embodiment.
- The image boundary line extraction unit 12 includes an image boundary line candidate pixel detection unit 111, an edge direction calculation unit 121, and a weighted Hough transform unit 122. Since the image boundary line candidate pixel detection unit 111 is the same as the image boundary line candidate pixel detection unit 111 in the first embodiment, its description is omitted here.
- The edge direction calculation unit 121 receives as input the image boundary line candidate pixel information of each frame output from the image boundary line candidate pixel detection unit 111 and calculates the edge direction of each image boundary line candidate pixel indicated by the information. Then, the edge direction calculation unit 121 outputs the calculated edge direction of each image boundary line candidate pixel for each frame.
- Here, the edge direction is the density gradient direction of the image, and any method may be used to calculate it. An example of an edge direction calculation method is described, for example, on p. 1232 of the "New Image Analysis Handbook".
- The weighted Hough transform unit 122 receives as inputs the image boundary line candidate pixel information of each frame output from the image boundary line candidate pixel detection unit 111 and the edge direction of each image boundary line candidate pixel of each frame output from the edge direction calculation unit 121.
- For each frame, in the straight line extraction method using the Hough transform with the image boundary line candidate pixels as input, the weighted Hough transform unit 122 performs voting while controlling the voting weight so that the weight increases as the angle formed by the direction of the straight line receiving the vote and the edge direction of the image boundary line candidate pixel approaches the vertical, and extracts a straight line.
- the weighted Hough transform unit 122 sets the extracted straight line as an image boundary line.
- the weighted Hough transform unit 122 outputs image boundary line description information describing the extracted image boundary line for each frame.
- In the Hough transform of the first embodiment, the voting weight of each image boundary line candidate pixel is uniform. The weighted Hough transform unit 122 differs from the first embodiment in that the voting weight is increased as the angle formed by the direction of the straight line receiving the vote and the edge direction of the image boundary line candidate pixel approaches the vertical.
- As a voting weight calculation method, for example, when the angle formed by the direction of the straight line receiving the vote and the edge direction of the image boundary line candidate pixel is θ (where 0 ≤ θ ≤ π/2), θ can be used as the weight of the vote.
- the voting weight W may be calculated as in equation (4).
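- The weighted voting scheme can be sketched as follows. This is an illustrative sketch: since equation (4) is not reproduced here, sin of the line/edge angle is used as one plausible weight that grows toward the vertical, and the function name and quantisation parameters are assumptions.

```python
import math

def weighted_hough_votes(candidates, rho_res=1.0, theta_steps=180):
    """Accumulate weighted Hough votes for candidate pixels.

    `candidates` is a list of (x, y, edge_dir) tuples, edge_dir in
    radians.  For each quantised line angle theta (the normal
    direction of the line), the line direction is theta + pi/2; the
    vote weight grows as the angle between the line direction and the
    pixel's edge direction approaches pi/2 (perpendicular).
    """
    acc = {}
    for k in range(theta_steps):
        theta = k * math.pi / theta_steps          # normal direction of the line
        line_dir = theta + math.pi / 2.0
        for x, y, edge_dir in candidates:
            # angle between line direction and edge direction, folded to [0, pi/2]
            diff = abs(line_dir - edge_dir) % math.pi
            angle = min(diff, math.pi - diff)
            weight = math.sin(angle)               # maximal when perpendicular
            rho = x * math.cos(theta) + y * math.sin(theta)
            cell = (round(rho / rho_res), k)
            acc[cell] = acc.get(cell, 0.0) + weight
    return acc
```

Pixels whose edge direction is perpendicular to a candidate line direction contribute full-strength votes to that line's accumulator cell, so true image boundary lines outvote incidental alignments of unrelated edge pixels.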
- the special effect detection unit 21 is the same as the special effect detection unit 21 in the first embodiment, and a description thereof will be omitted here.
- The edge direction of the pixels constituting an image boundary line has the property of being perpendicular to the direction of the image boundary line. The fifth embodiment uses this property: an image boundary line is extracted by a Hough transform that increases the voting weight as the angle formed by the direction of the straight line receiving the vote and the edge direction of the image boundary line candidate pixel approaches the vertical. Therefore, compared with the first embodiment, the fifth embodiment can extract image boundary lines with higher accuracy, and as a result can detect special effects with higher accuracy.
- FIG. 20 is a block diagram showing a sixth embodiment of the special effect detection device according to the present invention.
- the special effect detection apparatus according to the sixth embodiment of the present invention includes an image boundary line extraction unit 13 and a special effect detection unit 21.
- The sixth embodiment differs from the first embodiment in that the image boundary line extraction unit 11 in the first embodiment shown in FIG. 3 is replaced with an image boundary line extraction unit 13.
- Although the configuration in which the image boundary line extraction unit 11 in the first embodiment is replaced is illustrated here, the image boundary line extraction unit 11 in the second, third, or fourth embodiment may be replaced instead.
- Like the image boundary line extraction unit 11, the image boundary line extraction unit 13 extracts, from each frame of the input video, an image boundary line that is a boundary line between two images existing in the frame, and outputs image boundary line description information, which is information describing the extracted image boundary line. However, the image boundary line extraction unit 13 differs in configuration from the image boundary line extraction unit 11 in the first embodiment.
- the image boundary line extraction unit 13 includes an image boundary line candidate pixel detection unit 111, a line extraction unit 112, an edge direction calculation unit 131, and an image boundary line filtering unit 132. Since the image boundary line candidate pixel detection unit 111 is the same as the image boundary line candidate pixel detection unit 111 in the first embodiment, the description thereof is omitted here. Since the line extraction unit 112 is the same as the line extraction unit 112 in the first embodiment, description thereof is omitted here.
- The edge direction calculation unit 131 receives as input the image boundary line description information of each frame output from the line extraction unit 112 and calculates the edge direction of each image boundary line candidate pixel constituting the image boundary line indicated by the information. Then, the edge direction calculation unit 131 outputs, for each frame, the edge direction of each image boundary line candidate pixel constituting each calculated image boundary line. Here, it is not necessary to calculate the edge directions of all image boundary line candidate pixels constituting the image boundary line.
- the edge direction calculation unit 131 may calculate the edge direction only for arbitrarily sampled image boundary line candidate pixels.
- The method of calculating the edge direction may be arbitrary. An example of an edge direction calculation method is described, as noted above, on p. 1232 of the "New Image Analysis Handbook".
- The image boundary line filtering unit 132 receives as inputs the image boundary line description information of each frame output from the line extraction unit 112 and the edge direction of each image boundary line candidate pixel constituting the image boundary lines of each frame output from the edge direction calculation unit 131.
- The image boundary line filtering unit 132 outputs the image boundary line description information when it is statistically determined that the angle formed by the direction of the image boundary line indicated by the description information and the edge direction of each image boundary line candidate pixel constituting the image boundary line is close to vertical. In other cases, the image boundary line filtering unit 132 does not output the image boundary line description information.
- For example, the angles formed by the direction of the image boundary line and the edge direction of each image boundary line candidate pixel constituting the image boundary line are calculated, and when the ratio of image boundary line candidate pixels for which the absolute value of the difference between the calculated angle and the angle indicating the vertical direction (π/2) is equal to or smaller than a certain threshold value is equal to or larger than another threshold value, it is statistically determined that the angle formed by the direction of the image boundary line and the edge direction of each image boundary line candidate pixel constituting it is close to vertical.
- Alternatively, the angles formed by the direction of the image boundary line and the edge direction of each image boundary line candidate pixel constituting it are calculated, and when the average of the absolute values of the differences between the calculated angles and the angle indicating the vertical direction (π/2), or the average of the squares of the differences, is equal to or smaller than a certain threshold value, it is statistically determined that the angle formed by the direction of the image boundary line and the edge direction of each image boundary line candidate pixel constituting it is close to vertical.
- the special effect detection unit 21 is the same as the special effect detection unit 21 in the first embodiment, and a description thereof will be omitted here.
- The sixth embodiment uses the property that, under ideal conditions, the edge direction of the pixels constituting an image boundary line is perpendicular to the direction of the image boundary line.
- An image boundary line extracted by the line extraction unit 112 is removed unless it is statistically determined that the angle formed by the direction of the image boundary line and the edge direction of each image boundary line candidate pixel constituting it is close to vertical. Therefore, erroneous detection of lines that are not image boundary lines as image boundary lines can be reduced. As a result, special effects can be detected with higher accuracy.
- In the sixth embodiment, the edge direction is calculated only for the image boundary line candidate pixels constituting the image boundary lines extracted by the line extraction unit 112. Therefore, the sixth embodiment has the effect that the amount of computation can be reduced compared with the fifth embodiment.
- FIG. 21 is a block diagram showing a seventh embodiment of the special effect detection device according to the present invention.
- the special effect detection device according to the seventh exemplary embodiment of the present invention includes an image boundary line extraction unit 14 and a special effect detection unit 21.
- The seventh embodiment differs from the first embodiment in that the image boundary line extraction unit 11 in the first embodiment shown in FIG. 3 is replaced with an image boundary line extraction unit 14.
- Although the configuration in which the image boundary line extraction unit 11 in the first embodiment is replaced is illustrated here, the image boundary line extraction unit 11 in the second, third, or fourth embodiment may be replaced instead.
- Like the image boundary line extraction unit 11, the image boundary line extraction unit 14 extracts, from each frame of the input video, an image boundary line that is a boundary line between two images existing in the frame, and outputs image boundary line description information, which is information describing the extracted image boundary line. However, the image boundary line extraction unit 14 differs in configuration from the image boundary line extraction unit 11 in the first embodiment.
- the image boundary line extraction unit 14 includes an image boundary line candidate pixel detection unit 111, a line extraction unit 112, a motion vector calculation unit 141, and an image boundary line filtering unit 142. Since the image boundary line candidate pixel detection unit 111 is the same as the image boundary line candidate pixel detection unit 111 in the first embodiment, the description thereof is omitted here. Since the line extraction unit 112 is the same as the line extraction unit 112 in the first embodiment, the description thereof is omitted here.
- The motion vector calculation unit 141 receives the image boundary line description information of each frame output from the line extraction unit 112 and calculates the motion vectors of a plurality of points on the image boundary line indicated by the information. Then, the motion vector calculation unit 141 outputs, for each frame, the calculated motion vectors of the plurality of points on each image boundary line.
- the method for calculating the motion vector may be arbitrary. An example of a motion vector calculation method is described in, for example, “New Image Analysis Handbook”, pp. 1495-1498.
- The image boundary line filtering unit 142 receives as inputs the image boundary line description information of each frame output from the line extraction unit 112 and the motion vectors of the plurality of points on each image boundary line of each frame output from the motion vector calculation unit 141. Then, the image boundary line filtering unit 142 outputs the image boundary line description information when the directions and magnitudes of the motion vectors of the plurality of points on the image boundary line indicated by the description information are not uniform. In other cases, the image boundary line filtering unit 142 does not output the image boundary line description information.
- For example, where m_j (j = 1, …, N) are the motion vectors of N points on the image boundary line and m̄ is their mean, the directions and magnitudes may be judged not to be uniform when the variance V = (1/N) Σ_j |m̄ − m_j|² exceeds a threshold (equation (7)).
- the special effect detection unit 21 is the same as the special effect detection unit 21 in the first embodiment, and a description thereof will be omitted here.
- Since the image boundary line is a moving boundary between two images, the directions and magnitudes of the motion vectors of a plurality of points on the image boundary line are not uniform.
- the seventh embodiment uses this property.
- An image boundary line extracted by the line extraction unit 112 is removed when the directions and magnitudes of the motion vectors of a plurality of points on it are uniform. Therefore, erroneous detection of lines that are not image boundary lines as image boundary lines can be reduced. As a result, special effects can be detected with higher accuracy.
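- The variance test can be sketched as follows. The function name and threshold are illustrative; the variance form follows the reconstruction of equation (7) given above.

```python
def motion_vectors_not_uniform(vectors, threshold):
    """Judge non-uniformity of motion vectors at points on a line.

    `vectors` is a list of (vx, vy) motion vectors.  The mean vector
    is computed and the average squared deviation from it taken as the
    variance V; the directions and magnitudes are judged not to be
    uniform when V exceeds `threshold`.
    """
    n = len(vectors)
    mx = sum(v[0] for v in vectors) / n
    my = sum(v[1] for v in vectors) / n
    var = sum((v[0] - mx) ** 2 + (v[1] - my) ** 2 for v in vectors) / n
    return var > threshold
```

A line lying inside a single rigidly moving image yields near-zero variance and is filtered out, while a true boundary between two differently moving images yields large variance and is kept.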
- FIG. 22 is a block diagram showing an eighth embodiment of the special effect detection device according to the present invention.
- the special effect detection device according to the eighth exemplary embodiment of the present invention includes a gradual change section detection unit 3, an image boundary line extraction unit 11, and a special effect detection unit 21.
- The eighth embodiment differs from the first embodiment in that it includes a gradual change section detection unit 3 in addition to the configuration of the first embodiment shown in FIG. 3.
- Although the combination of the configuration of the first embodiment and the gradual change section detection unit 3 is illustrated here, the configuration of another embodiment may be combined with the gradual change section detection unit 3 instead.
- The gradual change section detection unit 3 extracts a feature amount from each frame of the input video and compares the extracted feature amounts to detect a gradual change section in which the video changes slowly.
- the gradual change section detection unit 3 supplies the detected frame sequence of the gradual change section as an input to the image boundary line extraction unit 11.
- The feature amount extracted from each frame may be arbitrary.
- Methods of detecting a gradual change section based on comparison of feature amounts extracted from each frame are described in Japanese Patent Application Laid-Open No. 8-237549, Japanese Patent Application Laid-Open No. 2005-237002, and "Automatic Partitioning of Full-Motion Video". The methods described in these documents may be used, or other methods of detecting the gradual change section based on comparison of feature amounts may be used.
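- One common signature of a gradual change, used in this family of methods, is that each frame-to-frame feature difference is small while the cumulative change over a window is large. The following is a rough sketch of that idea under assumed names and thresholds, not the specific method of the documents cited above.

```python
def detect_gradual_sections(features, step_thresh, span_thresh, window):
    """Find sections where each frame-to-frame feature difference is
    at most `step_thresh` but the cumulative change over at least
    `window` frames reaches `span_thresh`.

    `features` is one scalar feature per frame (e.g. mean intensity or
    a histogram distance to a reference); returns (start, end) frame
    index pairs.
    """
    sections = []
    start = None
    for i in range(1, len(features)):
        small_step = abs(features[i] - features[i - 1]) <= step_thresh
        if small_step and start is None:
            start = i - 1
        if (not small_step or i == len(features) - 1) and start is not None:
            end = i if small_step else i - 1
            if (end - start >= window and
                    abs(features[end] - features[start]) >= span_thresh):
                sections.append((start, end))
            start = None
    return sections
```

An abrupt cut produces one large step and no section; a static scene produces small steps but no cumulative span; only a slow drift satisfies both conditions.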
- the special effect is detected only from the gradual change sections of the input video. Therefore, the eighth embodiment has the effect that special effects in the video can be detected faster than in the other embodiments, which detect special effects from the input video directly.
- FIG. 23 is a block diagram showing a ninth embodiment of the special effect detection apparatus according to the present invention.
- the special effect detection device according to the ninth exemplary embodiment of the present invention includes an image boundary line extraction unit 11, a special effect detection unit 21, and a frame comparison unit 4.
- the ninth embodiment differs from the first embodiment in that it includes a frame comparison unit 4 in addition to the configuration of the first embodiment shown in FIG. Although the combination of the configuration of the first embodiment and the frame comparison unit 4 is illustrated here, a combination of the configuration of another embodiment and the frame comparison unit 4 may also be used.
- the frame comparison unit 4 receives the special effect frame section information output from the special effect detection unit 21, acquires from the input video the frames before and after the frame section indicated by that information, and extracts feature quantities from the acquired frames. The frame comparison unit 4 then compares the extracted feature quantities to determine whether or not the video has switched across the frame section. When it determines that the video has switched, the frame comparison unit 4 outputs the special effect frame section information; otherwise, it does not.
- the frames acquired before and after the frame section need not be the single frames immediately preceding and following it.
- frames a predetermined number of positions before and after the section may be used instead.
- multiple frames may also be acquired on each side of the frame section, for example N frames before and N frames after it.
- in that case, the frame comparison unit 4 may compare the feature quantities of the multiple frames before and after the frame section to determine whether or not the video has switched across it.
- the feature quantity to be extracted from each frame may be arbitrary.
- to compare the feature quantities, the distance (or similarity) between them is calculated.
- the distance or similarity between feature quantities may be calculated by any method.
- the threshold is preferably set by examining the distances (or similarities) between the feature quantities of frames before and after video switches in videos prepared for learning.
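A minimal sketch of such a frame comparison, assuming greyscale frames, 16-bin intensity histograms as the feature quantity, and an L1 distance with a learned threshold — all choices of this sketch rather than of the invention:

```python
import numpy as np

def switched(video, section, n=3, threshold=0.5):
    """Compare intensity histograms of n frames before and n frames
    after a detected special-effect frame section; report a switch only
    when the mean inter-histogram distance exceeds a threshold tuned on
    training videos. `video` is a list of greyscale frames (2-D arrays),
    `section` an (start, end) pair of frame indices."""
    start, end = section

    def hist(frame):
        h, _ = np.histogram(frame, bins=16, range=(0, 256))
        return h / h.sum()                        # normalised histogram

    before = [hist(video[i]) for i in range(max(0, start - n), start)]
    after = [hist(video[i])
             for i in range(end + 1, min(len(video), end + 1 + n))]
    # Mean pairwise L1 distance over every before/after pair.
    d = np.mean([np.abs(b - a).sum() for b in before for a in after])
    return d > threshold

def filter_detections(video, sections, **kw):
    """Keep only the detected sections across which the video switched."""
    return [s for s in sections if switched(video, s, **kw)]
```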
- in the ninth embodiment, the special effect frame section information output by the special effect detection unit 21 is discarded when it is determined that the video has not switched before and after the frame section indicated by that information. Therefore, the ninth embodiment has the effect of reducing erroneous detection of sections that do not contain a special effect.
- FIG. 24 is a block diagram showing a tenth embodiment of the special effect detection device according to the present invention.
- the special effect detection apparatus according to the tenth embodiment of the present invention includes an image boundary line extraction unit 11, a special effect detection unit 21, and a filtering unit 5.
- the tenth embodiment differs from the first embodiment in that it includes a filtering unit 5 in addition to the configuration of the first embodiment shown in FIG.
- the combination of the configuration of the first embodiment and the filtering unit 5 is illustrated here, but a combination of the configuration of another embodiment and the filtering unit 5 may also be used.
- the filtering unit 5 receives the special effect frame section information output from the special effect detection unit 21 and outputs it under a restriction that limits the number of frame sections containing a detected special effect within an arbitrary time interval. For example, with a time interval length L, the filtering unit 5 restricts the output so that at most one frame section indicated by the special effect frame section information falls within any interval of length L in the video, and outputs only the special effect frame section information indicating the restricted frame sections.
- the method of restriction is arbitrary. For example, frame sections with a longer section length, as indicated by the special effect frame section information, may be given priority.
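One possible realization of this restriction, assuming sections are (start, end) frame pairs and approximating "at most one section per window of length L" by requiring section centres to be at least L apart, with longer sections given priority:

```python
def limit_detections(sections, window):
    """Keep at most one detected special-effect frame section per time
    window of length `window`, giving priority to longer sections.
    Each section is a (start, end) pair of frame indices."""
    kept = []
    # Consider longer sections first so they win inside a window.
    for s in sorted(sections, key=lambda t: t[1] - t[0], reverse=True):
        centre = (s[0] + s[1]) / 2
        # Accept the section only if no kept section is closer than
        # one window length (centre-to-centre).
        if all(abs(centre - (k[0] + k[1]) / 2) >= window for k in kept):
            kept.append(s)
    return sorted(kept)
```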
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/227,648 US20090263023A1 (en) | 2006-05-25 | 2007-05-16 | Video special effect detection device, video special effect detection method, video special effect detection program, and video replay device |
JP2008517827A JPWO2007138858A1 (ja) | 2006-05-25 | 2007-05-16 | 映像の特殊効果検出装置、特殊効果検出方法、特殊効果検出プログラム及び映像再生装置 |
EP07743470A EP2028619A4 (en) | 2006-05-25 | 2007-05-16 | VIDEO IMAGE SPECIAL EFFECT DETECTION DEVICE, SPECIAL EFFECT DETECTION PROCEDURE, SPECIAL EFFECT DETECTION PROGRAM, AND VIDEO IMAGE RECORDING DEVICE |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006-145694 | 2006-05-25 | ||
JP2006145694 | 2006-05-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2007138858A1 true WO2007138858A1 (ja) | 2007-12-06 |
Family
ID=38778378
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2007/060035 WO2007138858A1 (ja) | 2006-05-25 | 2007-05-16 | 映像の特殊効果検出装置、特殊効果検出方法、特殊効果検出プログラム及び映像再生装置 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20090263023A1 (ja) |
EP (1) | EP2028619A4 (ja) |
JP (1) | JPWO2007138858A1 (ja) |
KR (1) | KR20090006861A (ja) |
CN (1) | CN101454803A (ja) |
WO (1) | WO2007138858A1 (ja) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2216983A1 (en) * | 2009-02-05 | 2010-08-11 | Mitsubishi Electric R&D Centre Europe B.V. | Detection of wipe transitions in video |
JP2012518857A (ja) * | 2009-02-25 | 2012-08-16 | 本田技研工業株式会社 | 内側距離形状関係を使用する身体特徴検出及び人間姿勢推定 |
JP2012155399A (ja) * | 2011-01-24 | 2012-08-16 | Denso Corp | 境界検出装置、および境界検出プログラム |
WO2013008746A1 (ja) * | 2011-07-14 | 2013-01-17 | 株式会社メガチップス | 直線検出装置および直線検出方法 |
JP2013020590A (ja) * | 2011-07-14 | 2013-01-31 | Mega Chips Corp | 直線検出装置および直線検出方法 |
JP2013029967A (ja) * | 2011-07-28 | 2013-02-07 | Mega Chips Corp | 直線検出装置および直線検出方法 |
JP2018110945A (ja) * | 2018-04-19 | 2018-07-19 | コニカミノルタ株式会社 | 画像処理装置及び照射野認識方法 |
CN112906553A (zh) * | 2021-02-09 | 2021-06-04 | 北京字跳网络技术有限公司 | 图像处理方法、装置、设备及介质 |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9413477B2 (en) | 2010-05-10 | 2016-08-09 | Microsoft Technology Licensing, Llc | Screen detector |
US9508011B2 (en) * | 2010-05-10 | 2016-11-29 | Videosurf, Inc. | Video visual and audio query |
US9311708B2 (en) | 2014-04-23 | 2016-04-12 | Microsoft Technology Licensing, Llc | Collaborative alignment of images |
JP2011247957A (ja) * | 2010-05-24 | 2011-12-08 | Toshiba Corp | パターン検査方法および半導体装置の製造方法 |
JP5113881B2 (ja) * | 2010-06-03 | 2013-01-09 | 株式会社デンソー | 車両周辺監視装置 |
US20120017150A1 (en) * | 2010-07-15 | 2012-01-19 | MySongToYou, Inc. | Creating and disseminating of user generated media over a network |
US8705866B2 (en) * | 2010-12-07 | 2014-04-22 | Sony Corporation | Region description and modeling for image subscene recognition |
US8503792B2 (en) * | 2010-12-17 | 2013-08-06 | Sony Corporation | Patch description and modeling for image subscene recognition |
US9744421B2 (en) | 2011-06-27 | 2017-08-29 | Swing Profile Limited | Method of analysing a video of sports motion |
JP5871571B2 (ja) | 2011-11-11 | 2016-03-01 | 株式会社Pfu | 画像処理装置、矩形検出方法及びコンピュータプログラム |
JP5854774B2 (ja) * | 2011-11-11 | 2016-02-09 | 株式会社Pfu | 画像処理装置、直線検出方法及びコンピュータプログラム |
JP5822664B2 (ja) | 2011-11-11 | 2015-11-24 | 株式会社Pfu | 画像処理装置、直線検出方法及びコンピュータプログラム |
JP2013232829A (ja) * | 2012-05-01 | 2013-11-14 | Sony Corp | 画像処理装置、画像処理方法およびプログラム |
JP2014092899A (ja) * | 2012-11-02 | 2014-05-19 | Fuji Xerox Co Ltd | 画像処理装置及び画像処理プログラム |
KR102121530B1 (ko) * | 2013-04-25 | 2020-06-10 | 삼성전자주식회사 | 영상을 디스플레이 하는 방법 및 그 장치 |
JP6676873B2 (ja) * | 2014-09-22 | 2020-04-08 | カシオ計算機株式会社 | 画像処理装置、画像処理方法及びプログラム |
US10484601B2 (en) * | 2015-08-31 | 2019-11-19 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and storage medium |
CN109474850B (zh) * | 2018-11-29 | 2021-07-20 | 北京字节跳动网络技术有限公司 | 运动像素视频特效添加方法、装置、终端设备及存储介质 |
CN109600559B (zh) * | 2018-11-29 | 2021-07-23 | 北京字节跳动网络技术有限公司 | 一种视频特效添加方法、装置、终端设备及存储介质 |
CN110298662B (zh) * | 2019-07-04 | 2022-03-22 | 中国工商银行股份有限公司 | 交易重复提交的自动化检测方法及装置 |
CN110503010B (zh) * | 2019-08-06 | 2022-05-06 | 北京达佳互联信息技术有限公司 | 素材显示方法、装置以及电子设备、存储介质 |
CN110675425B (zh) * | 2019-08-22 | 2020-12-15 | 腾讯科技(深圳)有限公司 | 一种视频边框识别方法、装置、设备及介质 |
CN110913205B (zh) * | 2019-11-27 | 2022-07-29 | 腾讯科技(深圳)有限公司 | 视频特效的校验方法及装置 |
CN111182364B (zh) * | 2019-12-27 | 2021-10-19 | 杭州小影创新科技股份有限公司 | 一种短视频版权检测方法及系统 |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06259561A (ja) | 1993-03-08 | 1994-09-16 | Mitsubishi Electric Corp | 動画像中の視標速度計算装置および視標追従装置 |
JPH07288840A (ja) | 1994-04-18 | 1995-10-31 | Matsushita Electric Ind Co Ltd | 映像変化点検出方法 |
JPH08237549A (ja) | 1994-06-27 | 1996-09-13 | Natl Univ Of Singapore | 自動ビデオ区分とキーフレーム抽出用システム |
JPH09245167A (ja) | 1996-03-08 | 1997-09-19 | Glory Ltd | 画像照合方法及び装置 |
JPH11252509A (ja) | 1998-03-05 | 1999-09-17 | Kdd Corp | 動画像のカット点検出装置 |
JPH11252501A (ja) | 1998-03-04 | 1999-09-17 | Hitachi Ltd | 動画像の特殊効果検出装置 |
JP3585977B2 (ja) | 1995-03-07 | 2004-11-10 | 松下電器産業株式会社 | 動領域検出装置 |
JP2005237002A (ja) | 2005-02-17 | 2005-09-02 | Kddi Corp | 動画像のカット点検出方法 |
JP2006053921A (ja) * | 2004-08-09 | 2006-02-23 | Microsoft Corp | ダイナミックプログラミングによる境界マッティング |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6195458B1 (en) * | 1997-07-29 | 2001-02-27 | Eastman Kodak Company | Method for content-based temporal segmentation of video |
-
2007
- 2007-05-16 EP EP07743470A patent/EP2028619A4/en not_active Withdrawn
- 2007-05-16 KR KR1020087028653A patent/KR20090006861A/ko not_active Application Discontinuation
- 2007-05-16 WO PCT/JP2007/060035 patent/WO2007138858A1/ja active Application Filing
- 2007-05-16 US US12/227,648 patent/US20090263023A1/en not_active Abandoned
- 2007-05-16 CN CNA2007800192616A patent/CN101454803A/zh active Pending
- 2007-05-16 JP JP2008517827A patent/JPWO2007138858A1/ja not_active Withdrawn
Non-Patent Citations (10)
Title |
---|
"Detection of Replay Scenes in Broadcasted Sports Video by Focusing on Digital Video Effects", THE TRANSACTIONS OF INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS, February 2001 (2001-02-01), pages 432 - 435 |
"Handbook of Image Analysis, New Edition", pages: 1228 - 1246 |
"Handbook of Image Analysis, New Edition", pages: 1232 |
"Handbook of Image Analysis, New Edition", pages: 1246 - 1260 |
"Handbook of Image Analysis, New Edition", pages: 1254 - 1256 |
"Handbook of Image Analysis, New Edition", pages: 1256 - 1258 |
"Handbook of Image Analysis, New Edition", pages: 1495 - 1498 |
H. J. ZHANG; A. KANKANHALLI; S. W. SMOLIAR: "Automatic Partitioning of Full-Motion Video", MULTIMEDIA SYSTEMS 1, 1993, pages 10 - 28, XP000572496, DOI: doi:10.1007/BF01210504 |
HAMPAPUR A. ET AL.: "Digital Video Segmentation", PROCEEDINGS OF THE SECOND ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, ACM PRESS, 1994, pages 357 - 364, XP003019802 * |
JOHN CANNY: "A Computational Approach to Edge Detection", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 8, no. 6, November 1986 (1986-11-01), pages 679 - 698 |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2216983A1 (en) * | 2009-02-05 | 2010-08-11 | Mitsubishi Electric R&D Centre Europe B.V. | Detection of wipe transitions in video |
JP2012518857A (ja) * | 2009-02-25 | 2012-08-16 | 本田技研工業株式会社 | 内側距離形状関係を使用する身体特徴検出及び人間姿勢推定 |
US9904845B2 (en) | 2009-02-25 | 2018-02-27 | Honda Motor Co., Ltd. | Body feature detection and human pose estimation using inner distance shape contexts |
JP2012155399A (ja) * | 2011-01-24 | 2012-08-16 | Denso Corp | 境界検出装置、および境界検出プログラム |
WO2013008746A1 (ja) * | 2011-07-14 | 2013-01-17 | 株式会社メガチップス | 直線検出装置および直線検出方法 |
JP2013020590A (ja) * | 2011-07-14 | 2013-01-31 | Mega Chips Corp | 直線検出装置および直線検出方法 |
US9195902B2 (en) | 2011-07-14 | 2015-11-24 | Megachips Corporation | Straight line detection apparatus and straight line detection method |
JP2013029967A (ja) * | 2011-07-28 | 2013-02-07 | Mega Chips Corp | 直線検出装置および直線検出方法 |
JP2018110945A (ja) * | 2018-04-19 | 2018-07-19 | コニカミノルタ株式会社 | 画像処理装置及び照射野認識方法 |
CN112906553A (zh) * | 2021-02-09 | 2021-06-04 | 北京字跳网络技术有限公司 | 图像处理方法、装置、设备及介质 |
Also Published As
Publication number | Publication date |
---|---|
JPWO2007138858A1 (ja) | 2009-10-01 |
EP2028619A1 (en) | 2009-02-25 |
EP2028619A4 (en) | 2010-10-20 |
US20090263023A1 (en) | 2009-10-22 |
CN101454803A (zh) | 2009-06-10 |
KR20090006861A (ko) | 2009-01-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2007138858A1 (ja) | 映像の特殊効果検出装置、特殊効果検出方法、特殊効果検出プログラム及び映像再生装置 | |
Zabih et al. | A feature-based algorithm for detecting and classifying production effects | |
EP1147669B1 (en) | Video signal processing method and apparatus by feature points extraction in the compressed domain. | |
JP3656036B2 (ja) | Mpeg圧縮ビデオ環境でのディゾルブ/フェード検出方法 | |
CN112990191B (zh) | 一种基于字幕视频的镜头边界检测与关键帧提取方法 | |
US7027509B2 (en) | Hierarchical hybrid shot change detection method for MPEG-compressed video | |
US7046731B2 (en) | Extracting key frames from a video sequence | |
US8184947B2 (en) | Electronic apparatus, content categorizing method, and program therefor | |
TW200401569A (en) | Method and apparatus for motion estimation between video frames | |
US7340096B2 (en) | Method for identification of tokens in video sequences | |
WO2004044846A2 (en) | A method of and system for detecting uniform color segments | |
JP2010503006A (ja) | 適応的なビデオ呈示のための方法および装置 | |
JP2010503006A5 (ja) | ||
US8306123B2 (en) | Method and apparatus to improve the convergence speed of a recursive motion estimator | |
JP5503507B2 (ja) | 文字領域検出装置およびそのプログラム | |
US20090180670A1 (en) | Blocker image identification apparatus and method | |
JP2009212605A (ja) | 情報処理方法、情報処理装置及びプログラム | |
EP1460835B1 (en) | Method for identification of tokens in video sequences | |
Gao et al. | To accelerate shot boundary detection by reducing detection region and scope | |
Ren et al. | Detection of dirt impairments from archived film sequences: survey and evaluations | |
Zhang et al. | Efficient video object segmentation using adaptive background registration and edge-based change detection techniques | |
KR102343029B1 (ko) | 모션벡터 기반 분기처리를 이용한 압축영상의 영상분석 처리 방법 | |
EP1752891A2 (en) | Method and apparatus for establishing and browsing a hierarchical video camera motion transition graph. | |
Iwamoto et al. | Detection of wipes and digital video effects based on a pattern-independent model of image boundary line characteristics | |
Wang et al. | A motion-insensitive dissolve detection method with SURF |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 200780019261.6 Country of ref document: CN |
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 07743470 Country of ref document: EP Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase |
Ref document number: 2008517827 Country of ref document: JP |
WWE | Wipo information: entry into national phase |
Ref document number: 12227648 Country of ref document: US Ref document number: 1020087028653 Country of ref document: KR |
WWE | Wipo information: entry into national phase |
Ref document number: 2007743470 Country of ref document: EP |
NENP | Non-entry into the national phase |
Ref country code: DE |