CN104469546A - Video clip processing method and device - Google Patents

Video clip processing method and device Download PDF

Info

Publication number
CN104469546A
CN104469546A (application CN201410812127.3A)
Authority
CN
China
Prior art keywords
video segment
fragment
cutting
color histogram
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410812127.3A
Other languages
Chinese (zh)
Other versions
CN104469546B (en)
Inventor
龚云波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuxi Tvmining Juyuan Media Technology Co Ltd
Original Assignee
Wuxi Tvmining Juyuan Media Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuxi Tvmining Juyuan Media Technology Co Ltd filed Critical Wuxi Tvmining Juyuan Media Technology Co Ltd
Priority to CN201410812127.3A priority Critical patent/CN104469546B/en
Publication of CN104469546A publication Critical patent/CN104469546A/en
Application granted granted Critical
Publication of CN104469546B publication Critical patent/CN104469546B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration by the use of histogram techniques

Abstract

The invention discloses a video clip processing method and device. The method includes: obtaining the video segments produced by splitting a video, together with the segmentation accuracy of each segment; splitting any unqualified segment whose segmentation accuracy is below a preset threshold into fragments, where the fragments include at least the head fragment and/or the tail fragment of the unqualified segment; computing the color-histogram similarity between the head fragment and the preceding video segment, and/or between the tail fragment and the following video segment; and, when a similarity exceeds a preset threshold, merging the head fragment into the preceding segment or the tail fragment into the following segment. By comparing the color histograms of the head and tail of an unqualified segment with those of its neighbors, and merging each into whichever neighbor has a similar histogram, the method and device correct the segmentation of the video.

Description

Method and apparatus for processing video segments
Technical field
The present invention relates to the field of video processing and, more specifically, to a method and apparatus for processing video segments.
Background technology
Whether on a computer or on a mobile terminal such as a smartphone or tablet, video playback is one of the functions users rely on most.
To give users a flexible viewing experience, video providers usually need to split a video into multiple segments, for example by content. A half-hour news program, for instance, can be split into independent news stories, making it easy for users to pick exactly the stories they want to watch.
Video can be split by manual editing. To improve efficiency, it can also be split automatically using cues such as faces, voices, or captions. For example, faces and voices identify people, and the identified people determine roles; in a news program these include the studio anchor, the field reporter, and interviewed guests. Role information can be combined with other cues such as titles to locate content switches, which mark the split points: a change of studio anchor usually indicates a content switch, whereas a handover from the studio anchor to a field reporter usually stays within one story.
However, when video is split in these ways, some of the resulting segments may be unsatisfactory. How to process segments whose segmentation is unsatisfactory is therefore a problem that urgently needs solving.
Summary of the invention
In view of this, the embodiments of the present invention aim to provide a method and apparatus for processing video segments that can effectively handle segments whose segmentation is unqualified.
To this end, an embodiment of the present invention provides a method for processing video segments, comprising the following steps:
obtaining the video segments produced by splitting a video, together with the segmentation accuracy of each segment;
splitting any unqualified video segment whose segmentation accuracy is below a first preset threshold into fragments, where the fragments include at least the head fragment and/or the tail fragment of the unqualified segment;
computing a first color-histogram similarity between the head fragment of the unqualified segment and the video segment preceding it, and/or a second color-histogram similarity between the tail fragment of the unqualified segment and the video segment following it;
when the first color-histogram similarity exceeds a second preset threshold, merging the head fragment into the preceding video segment; when the second color-histogram similarity exceeds a third preset threshold, merging the tail fragment into the following video segment.
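Taken together, the steps above amount to a simple decision rule. The following sketch is illustrative only: the function name, the scalar stand-ins for histograms, and the idea of passing the similarity measure as a parameter are assumptions for demonstration, not details fixed by the patent.

```python
# Illustrative sketch of the claimed correction rule. All names and the
# scalar "histograms" are assumptions; the patent fixes no concrete API.

def correct_segment(prev_hist, head_hist, tail_hist, next_hist,
                    accuracy, t_accuracy, t_head, t_tail, similarity):
    """Decide whether the head/tail fragments of a segment should be
    merged into the neighbouring segments."""
    merge_head = merge_tail = False
    if accuracy < t_accuracy:                      # segment is "unqualified"
        if similarity(head_hist, prev_hist) > t_head:
            merge_head = True                      # head joins the preceding segment
        if similarity(tail_hist, next_hist) > t_tail:
            merge_tail = True                      # tail joins the following segment
    return merge_head, merge_tail
```

A qualified segment (accuracy at or above the first threshold) is left untouched; only an unqualified one has its head and tail checked against its neighbors.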
In an embodiment of the present invention, splitting an unqualified video segment whose segmentation accuracy is below the first preset threshold into fragments comprises:
computing the color-histogram similarity between adjacent video frames within the unqualified segment;
assigning adjacent frames whose computed color-histogram similarity exceeds a fourth preset threshold to the same fragment.
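A minimal sketch of this frame-grouping step, under the assumption that frame histograms can be compared by a caller-supplied similarity function; the names and data shapes are illustrative, not taken from the patent.

```python
def group_frames(histograms, similarity, threshold):
    """Assign consecutive frames whose colour-histogram similarity exceeds
    `threshold` to the same fragment; a drop in similarity starts a new
    fragment. Returns a list of fragments, each a list of frame indices."""
    if not histograms:
        return []
    fragments = [[0]]                              # first frame opens a fragment
    for i in range(1, len(histograms)):
        if similarity(histograms[i - 1], histograms[i]) > threshold:
            fragments[-1].append(i)                # similar: same fragment
        else:
            fragments.append([i])                  # dissimilar: new fragment
    return fragments
```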
In an embodiment of the present invention, obtaining the segmentation accuracy of each video segment comprises:
obtaining the segmentation features used for each video segment;
computing the segmentation accuracy of each video segment from the obtained features and their preset weights.
In an embodiment of the present invention, the segmentation accuracy is:
P(y|x) = exp(Σ_i λ_i f_i(x, y)) / Σ_y exp(Σ_i λ_i f_i(x, y))
where f_i(x, y) is a binary model feature function, x is the feature vector of the video segment, y is the class the segment belongs to, λ_i is the weight of the i-th segmentation feature, and Σ_i λ_i = 1.
In an embodiment of the present invention, the segmentation features comprise one or more of the following: face, voice, title, and color histogram.
In an embodiment of the present invention, computing a color-histogram similarity comprises: treating each color histogram as a vector and computing the distance between the two vectors.
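The patent does not name a particular vector distance, so the choice of cosine similarity in the sketch below is an assumption; any monotone histogram distance could be substituted.

```python
import math

def cosine_similarity(h1, h2):
    """Treat two colour histograms (equal-length lists of non-negative bin
    counts) as vectors and return their cosine similarity: 1.0 means the
    vectors point in the same direction, 0.0 that they are orthogonal."""
    dot = sum(a * b for a, b in zip(h1, h2))
    n1 = math.sqrt(sum(a * a for a in h1))
    n2 = math.sqrt(sum(b * b for b in h2))
    return dot / (n1 * n2) if n1 and n2 else 0.0
```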
An embodiment of the present invention also provides an apparatus for processing video segments, comprising:
an acquisition module, for obtaining the video segments produced by splitting a video, together with the segmentation accuracy of each segment;
a splitting module, for splitting any unqualified video segment whose segmentation accuracy is below a first preset threshold into fragments, where the fragments include at least the head fragment and/or the tail fragment of the unqualified segment;
a computing module, for computing a first color-histogram similarity between the head fragment of the unqualified segment and the video segment preceding it, and/or a second color-histogram similarity between the tail fragment of the unqualified segment and the video segment following it;
a processing module, for merging the head fragment into the preceding video segment when the first color-histogram similarity exceeds a second preset threshold, and merging the tail fragment into the following video segment when the second color-histogram similarity exceeds a third preset threshold.
In an embodiment of the present invention, the splitting module comprises:
a first computing unit, for computing the color-histogram similarity between adjacent video frames within the unqualified segment;
a processing unit, for assigning adjacent frames whose computed color-histogram similarity exceeds a fourth preset threshold to the same fragment.
In an embodiment of the present invention, the acquisition module comprises:
an acquiring unit, for obtaining the segmentation features used for each video segment;
a second computing unit, for computing the segmentation accuracy of each video segment from the obtained features and their preset weights.
In an embodiment of the present invention, the segmentation accuracy is:
P(y|x) = exp(Σ_i λ_i f_i(x, y)) / Σ_y exp(Σ_i λ_i f_i(x, y))
where f_i(x, y) is a binary model feature function, x is the feature vector of the video segment, y is the class the segment belongs to, λ_i is the weight of the i-th segmentation feature, and Σ_i λ_i = 1.
In an embodiment of the present invention, the segmentation features comprise one or more of the following: face, voice, title, and color histogram.
The technical solution provided by the embodiments of the present invention can have the following beneficial effect:
by comparing the color histograms of the head and/or tail of an unqualified video segment with those of the adjacent segments, and merging the head or tail into whichever neighboring segment has a similar color histogram, the segmentation of the video is corrected.
Further features and advantages of the embodiments of the present invention are set forth in the description that follows; in part they become apparent from the description, or can be learned by practicing the invention. The objects and other advantages of the invention can be realized and obtained by the structures particularly pointed out in the written description, the claims, and the accompanying drawings.
The technical solution of the embodiments of the present invention is described in further detail below through the drawings and embodiments.
Brief description of the drawings
The drawings provide a further understanding of the embodiments of the present invention and form part of the specification. Together with the embodiments, they serve to explain the invention; they do not limit the embodiments of the invention. In the drawings:
Fig. 1 is a flowchart of a method for processing video segments in one embodiment of the invention.
Fig. 2 is a flowchart of a method for processing video segments in one embodiment of the invention.
Fig. 3 is a flowchart of a method for processing video segments in one embodiment of the invention.
Fig. 4 is a structural diagram of an apparatus for processing video segments in one embodiment of the invention.
Fig. 5 is a structural diagram of the splitting module of the apparatus for processing video segments in one embodiment of the invention.
Fig. 6 is a structural diagram of the acquisition module of the apparatus for processing video segments in one embodiment of the invention.
Detailed description
The preferred embodiments of the present invention are described below with reference to the drawings. It should be understood that the preferred embodiments described herein only illustrate and explain the embodiments of the invention and are not intended to limit them.
Fig. 1 is a flowchart of the method for processing video segments in an embodiment of the present invention. The method comprises:
Step S11: obtain the video segments produced by splitting a video, together with the segmentation accuracy of each segment.
Step S12: split any unqualified video segment whose segmentation accuracy is below a first preset threshold into fragments, where the fragments include at least the head fragment and/or the tail fragment of the unqualified segment.
Step S13: compute a first color-histogram similarity between the head fragment of the unqualified segment and the preceding video segment, and/or a second color-histogram similarity between the tail fragment and the following video segment.
Step S14: when the first color-histogram similarity exceeds a second preset threshold, merge the head fragment into the preceding video segment; when the second color-histogram similarity exceeds a third preset threshold, merge the tail fragment into the following video segment.
In this embodiment, by comparing the color histograms of the head and/or tail of an unqualified video segment with those of the adjacent segments, and merging the head or tail into whichever neighboring segment has a similar color histogram, the segmentation of the video is corrected.
Fig. 2 shows another embodiment of the method for processing video segments, in which the unqualified video segment is split into fragments according to color histograms. This embodiment comprises the following steps:
Step S21: obtain the video segments produced by splitting a video, together with the segmentation accuracy of each segment.
Step S22: determine the unqualified video segments from each segment's segmentation accuracy and the first preset threshold.
Step S23: compute the color-histogram similarity between adjacent video frames within an unqualified segment.
Computing the color-histogram similarity between two frames means treating each frame's color histogram as a vector and computing the distance between the two vectors.
Step S24: assign adjacent frames whose computed color-histogram similarity exceeds the fourth preset threshold to the same fragment.
Step S25: compute the first color-histogram similarity between the head fragment of the unqualified segment and the preceding video segment.
Step S26: judge whether the first color-histogram similarity exceeds the second preset threshold; if so, perform step S27.
Step S27: merge the head fragment of the unqualified segment into the preceding video segment.
Step S28: compute the second color-histogram similarity between the tail fragment of the unqualified segment and the following video segment.
Step S29: judge whether the second color-histogram similarity exceeds the third preset threshold; if so, perform step S210.
Step S210: merge the tail fragment of the unqualified segment into the following video segment.
Note that steps S25-S27 and S28-S210 need not be executed in the order given: they can run in parallel, or S28-S210 can run before S25-S27. In other embodiments of the invention, only steps S25-S27 or only steps S28-S210 may be executed. For example, when the unqualified segment is the last segment of the video, only steps S25-S27 are performed; when it is the first segment of the video, only steps S28-S210 are performed.
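The dispatch described in this paragraph can be stated compactly: the head check needs a preceding segment and the tail check needs a following one, and the two checks are otherwise independent. The function name and return shape below are illustrative assumptions.

```python
def checks_for_position(is_first, is_last):
    """Return which merge checks apply to a segment: the head check
    (steps S25-S27) is skipped for the first segment of the video, the
    tail check (steps S28-S210) for the last; otherwise both run, in
    either order or in parallel."""
    return {"head_check": not is_first, "tail_check": not is_last}
```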
Fig. 3 shows another embodiment of the method for processing video segments, which includes computing the segmentation accuracy of each video segment. This embodiment comprises the following steps:
Step S31: obtain the video segments produced by splitting a video.
Step S32: obtain the segmentation features used for each video segment.
Step S33: compute the segmentation accuracy of each video segment from the obtained features and their preset weights.
In an embodiment of the present invention, the segmentation accuracy is computed as:
P(y|x) = exp(Σ_i λ_i f_i(x, y)) / Σ_y exp(Σ_i λ_i f_i(x, y))
where f_i(x, y) is a binary model feature function whose value is determined by the class y the segment belongs to and the segment's feature vector x: f_i(x, y) is 1 when x and y satisfy the segment's classification condition. For example, if a segment was split using the face feature, then x is the face feature vector and y indicates whether the segment can be split using the face feature; if so, f_i is 1. Which part of a video can be split by which feature or features is preset. λ_i is the weight of the i-th segmentation feature, and Σ_i λ_i = 1.
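The formula above is a softmax over weighted binary features. The sketch below follows that reading; the dictionary-based interface and the example class labels are assumptions made for illustration.

```python
import math

def segmentation_accuracy(feature_values, weights):
    """Compute P(y|x) = exp(sum_i w_i * f_i(x, y)) normalised over all
    candidate classes y. `feature_values` maps each class y to its list
    of binary feature values f_i(x, y); `weights` are the lambda_i,
    assumed to sum to 1."""
    scores = {y: math.exp(sum(w * f for w, f in zip(weights, fs)))
              for y, fs in feature_values.items()}
    total = sum(scores.values())                   # softmax denominator
    return {y: s / total for y, s in scores.items()}
```

Because the denominator sums over every candidate class, the returned values always add up to 1, so they can be read directly as probabilities.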
The segmentation features can comprise one or more of the following: face, voice, title, and color histogram.
Step S34: determine the unqualified video segments from each segment's segmentation accuracy and the first preset threshold.
The following steps are then performed for each unqualified video segment:
Step S35: compute the color-histogram similarity between adjacent video frames within the unqualified segment.
Computing the color-histogram similarity between two frames means treating each frame's color histogram as a vector and computing the distance between the two vectors.
Step S36: assign adjacent frames whose computed color-histogram similarity exceeds the fourth preset threshold to the same fragment.
Step S37: compute the first color-histogram similarity between the head fragment of the unqualified segment and the preceding video segment, and the second color-histogram similarity between the tail fragment and the following video segment.
Step S38: judge whether the first color-histogram similarity exceeds the second preset threshold, and whether the second color-histogram similarity exceeds the third preset threshold; when the first exceeds the second preset threshold, perform step S39; when the second exceeds the third preset threshold, perform step S310.
Step S39: merge the head fragment of the unqualified segment into the preceding video segment.
Step S310: merge the tail fragment of the unqualified segment into the following video segment.
Correspondingly, as shown in Fig. 4, an embodiment of the present invention also provides an apparatus for processing video segments, comprising:
an acquisition module 401, for obtaining the video segments produced by splitting a video, together with the segmentation accuracy of each segment;
a splitting module 402, for splitting any unqualified video segment whose segmentation accuracy is below a first preset threshold into fragments, where the fragments include at least the head fragment and/or the tail fragment of the unqualified segment;
a computing module 403, for computing a first color-histogram similarity between the head fragment of the unqualified segment and the preceding video segment, and/or a second color-histogram similarity between the tail fragment and the following video segment;
a processing module 404, for merging the head fragment into the preceding video segment when the first color-histogram similarity exceeds the second preset threshold, and merging the tail fragment into the following video segment when the second color-histogram similarity exceeds the third preset threshold.
As shown in Fig. 5, the splitting module 402 comprises:
a first computing unit 4021, for computing the color-histogram similarity between adjacent video frames within the unqualified segment;
a processing unit 4022, for assigning adjacent frames whose computed color-histogram similarity exceeds the fourth preset threshold to the same fragment.
As shown in Fig. 6, the acquisition module 401 comprises:
an acquiring unit 4011, for obtaining the segmentation features used for each video segment;
a second computing unit 4012, for computing the segmentation accuracy of each video segment from the obtained features and their preset weights.
The segmentation accuracy is:
P(y|x) = exp(Σ_i λ_i f_i(x, y)) / Σ_y exp(Σ_i λ_i f_i(x, y))
where f_i(x, y) is a binary model feature function, x is the feature vector of the video segment, y is the class the segment belongs to, λ_i is the weight of the i-th segmentation feature, and Σ_i λ_i = 1.
The segmentation features comprise one or more of the following: face, voice, title, and color histogram.
Those skilled in the art will appreciate that embodiments of the invention may be provided as a method, a system, or a computer program product. Accordingly, the invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Furthermore, the invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage and optical storage) containing computer-usable program code.
The invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data-processing device to produce a machine, such that the instructions executed by the processor create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data-processing device to work in a particular way, such that the instructions stored in the memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data-processing device, such that a series of operational steps are performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Obviously, those skilled in the art can make various changes and modifications to the invention without departing from its spirit and scope. If these changes and modifications fall within the scope of the claims of the invention and their technical equivalents, the invention is intended to cover them as well.

Claims (11)

1. A method for processing video segments, characterized by comprising the following steps:
obtaining the video segments produced by splitting a video, together with the segmentation accuracy of each segment;
splitting any unqualified video segment whose segmentation accuracy is below a first preset threshold into fragments, where the fragments include at least the head fragment and/or the tail fragment of the unqualified segment;
computing a first color-histogram similarity between the head fragment of the unqualified segment and the video segment preceding it, and/or a second color-histogram similarity between the tail fragment of the unqualified segment and the video segment following it;
when the first color-histogram similarity exceeds a second preset threshold, merging the head fragment into the preceding video segment; when the second color-histogram similarity exceeds a third preset threshold, merging the tail fragment into the following video segment.
2. The method according to claim 1, characterized in that splitting an unqualified video segment whose segmentation accuracy is below the first preset threshold into fragments comprises:
computing the color-histogram similarity between adjacent video frames within the unqualified segment;
assigning adjacent frames whose computed color-histogram similarity exceeds a fourth preset threshold to the same fragment.
3. The method according to claim 1, characterized in that obtaining the segmentation accuracy of each video segment comprises:
obtaining the segmentation features used for each video segment;
computing the segmentation accuracy of each video segment from the obtained features and their preset weights.
4. The method according to claim 3, characterized in that the segmentation accuracy is:
P(y|x) = exp(Σ_i λ_i f_i(x, y)) / Σ_y exp(Σ_i λ_i f_i(x, y))
where f_i(x, y) is a binary model feature function, x is the feature vector of the video segment, y is the class the segment belongs to, λ_i is the weight of the i-th segmentation feature, and Σ_i λ_i = 1.
5. The method according to claim 3, characterized in that the segmentation features comprise one or more of the following: face, voice, title, and color histogram.
6. The method according to claim 1, characterized in that computing a color-histogram similarity comprises: treating each color histogram as a vector and computing the distance between the two vectors.
7. An apparatus for processing video segments, characterized by comprising:
an acquisition module, for obtaining the video segments produced by splitting a video, together with the segmentation accuracy of each segment;
a splitting module, for splitting any unqualified video segment whose segmentation accuracy is below a first preset threshold into fragments, where the fragments include at least the head fragment and/or the tail fragment of the unqualified segment;
a computing module, for computing a first color-histogram similarity between the head fragment of the unqualified segment and the video segment preceding it, and/or a second color-histogram similarity between the tail fragment of the unqualified segment and the video segment following it;
a processing module, for merging the head fragment into the preceding video segment when the first color-histogram similarity exceeds a second preset threshold, and merging the tail fragment into the following video segment when the second color-histogram similarity exceeds a third preset threshold.
8. The apparatus according to claim 7, characterized in that the splitting module comprises:
a first computing unit, for computing the color-histogram similarity between adjacent video frames within the unqualified segment;
a processing unit, for assigning adjacent frames whose computed color-histogram similarity exceeds a fourth preset threshold to the same fragment.
9. The apparatus according to claim 7, characterized in that the acquisition module comprises:
an acquiring unit, for obtaining the segmentation features used for each video segment;
a second computing unit, for computing the segmentation accuracy of each video segment from the obtained features and their preset weights.
10. The apparatus according to claim 9, characterized in that the segmentation accuracy is:
P(y|x) = exp(Σ_i λ_i f_i(x, y)) / Σ_y exp(Σ_i λ_i f_i(x, y))
where f_i(x, y) is a binary model feature function, x is the feature vector of the video segment, y is the class the segment belongs to, λ_i is the weight of the i-th segmentation feature, and Σ_i λ_i = 1.
11. The apparatus according to claim 9, characterized in that the segmentation features comprise one or more of the following: face, voice, title, and color histogram.
CN201410812127.3A 2014-12-22 2014-12-22 A kind of method and apparatus for handling video segment Expired - Fee Related CN104469546B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410812127.3A CN104469546B (en) 2014-12-22 2014-12-22 A kind of method and apparatus for handling video segment

Publications (2)

Publication Number Publication Date
CN104469546A true CN104469546A (en) 2015-03-25
CN104469546B CN104469546B (en) 2017-09-15

Family

ID=52914791

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410812127.3A Expired - Fee Related CN104469546B (en) 2014-12-22 2014-12-22 A kind of method and apparatus for handling video segment

Country Status (1)

Country Link
CN (1) CN104469546B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090066845A1 (en) * 2005-05-26 2009-03-12 Takao Okuda Content Processing Apparatus, Method of Processing Content, and Computer Program
CN102685398A (en) * 2011-09-06 2012-09-19 天脉聚源(北京)传媒科技有限公司 News video scene generating method
CN103426176A (en) * 2013-08-27 2013-12-04 重庆邮电大学 Video shot detection method based on histogram improvement and clustering algorithm

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105718871A (en) * 2016-01-18 2016-06-29 成都索贝数码科技股份有限公司 Video host identification method based on statistics
CN105718871B (en) * 2016-01-18 2017-11-28 成都索贝数码科技股份有限公司 A kind of video host's recognition methods based on statistics

Also Published As

Publication number Publication date
CN104469546B (en) 2017-09-15

Similar Documents

Publication Publication Date Title
US20230077355A1 (en) Tracker assisted image capture
US20170161953A1 (en) Processing method and device for collecting sensor data
CN104038848A (en) Video processing method and video processing device
CN104185088B (en) A kind of method for processing video frequency and device
CN104572219A (en) Photographing mode switching method and photographing mode switching device
CN103458321A (en) Method and device for loading subtitles
CN104469516A (en) Webpage video processing method and device of Android system
CN104053048A (en) Method and device for video localization
CN108364338B (en) Image data processing method and device and electronic equipment
CN104994404A (en) Method and device for obtaining keywords for video
WO2016202306A1 (en) Video processing method and device
CN104822087B (en) A kind of processing method and processing device of video-frequency band
CN113014957B (en) Video shot segmentation method and device, medium and computer equipment
CN102737383A (en) Camera movement analyzing method and device in video
CN103986981A (en) Recognition method and device of scenario segments of multimedia files
CN103634691A (en) Method and system for editing icons on television terminal
CN105530534A (en) Video clipping method and apparatus
CN103108128A (en) Method, system and mobile terminal of automatic focusing
CN104469546A (en) Video clip processing method and device
CN111914682A (en) Teaching video segmentation method, device and equipment containing presentation file
CN111970560A (en) Video acquisition method and device, electronic equipment and storage medium
CN103092929A (en) Method and device for generation of video abstract
CN110622517A (en) Video processing method and device
CN114339304A (en) Live video processing method and device and storage medium
CN104469545A (en) Method and device for verifying splitting effect of video clip

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A method and device for processing video clip

Effective date of registration: 20210104

Granted publication date: 20170915

Pledgee: Inner Mongolia Huipu Energy Co.,Ltd.

Pledgor: WUXI TVMINING MEDIA SCIENCE & TECHNOLOGY Co.,Ltd.

Registration number: Y2020990001517

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170915