CN1331099C - Content based image recognition method - Google Patents

Content based image recognition method

Info

Publication number
CN1331099C
CN1331099C CNB2004100350849A CN200410035084A
Authority
CN
China
Prior art keywords
image
point
interest
area
skin
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2004100350849A
Other languages
Chinese (zh)
Other versions
CN1691054A (en)
Inventor
谭铁牛
胡卫明
杨金锋
王谦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CNB2004100350849A priority Critical patent/CN1331099C/en
Publication of CN1691054A publication Critical patent/CN1691054A/en
Application granted granted Critical
Publication of CN1331099C publication Critical patent/CN1331099C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Abstract

The present invention relates to a content-based image recognition method comprising the following steps: first, the image is divided into a grid; region growing is then performed from each grid node as the initial position to obtain the skin-color information around each node in the image; interest points and regions of interest in the image are determined by a mechanism of mutual voting between grid cell regions and nodes; on the basis of the regions of interest and interest points, the main contour of the human body in the image is extracted by moving points; finally, the contour information and the local information inside the contour are extracted to generate a feature vector with which the image is recognized and its character judged. The invention overcomes the low speed, low efficiency and strong device dependency of existing sensitive-image recognition techniques such as color-histogram matching, wavelet-transform contour matching, skin-color texture description and image central-moment matching. It also solves the problem of classifying bikini images, nude images and face images, further extending sensitive-image recognition technology and opening up wide application prospects.

Description

Content-based image recognition method
Technical field
The present invention relates to the field of pattern recognition, and in particular to a content-based image recognition method.
Background technology
With the rapid development of modern Internet technology, the network has penetrated the global economy and social life to a depth, and with an influence, far beyond people's expectations. Network information security has gradually become a very important problem, and the influence of harmful content on society, especially on minors, has drawn wide public concern, so information-filtering technology has become an urgent theoretical and practical need. In the United States these problems attracted public attention as early as 1994, when American society was troubled by harmful network information such as online pornography, violence, hate speech and sexual harassment, and many news outlets, newspapers and magazines were filled with alarm over pornographic websites, ugly online groups and online sexual assault. The large amount of harmful content on the network led Congress to pass two laws, the Communications Decency Act (CDA) and the Child Online Protection Act (COPA). With these laws as the legal basis, the American software industry developed its own content-blocking filter software and established the Platform for Internet Content Selection (PICS). In 1999 Congress subsequently passed the Children's Internet Protection Act (CIPA) to protect young people from the harm of harmful network information.
On the detection of sensitive content, some universities abroad (Berkeley, Iowa, Stanford) have carried out exploratory analyses of sensitive pictures on the network. Fleck and Forsyth [D.A. Forsyth, M.M. Fleck, Body plans, Proc. IEEE Conference on Computer Vision and Pattern Recognition, 1997, pp. 678-863] detect human skin and link the individual skin regions into a group to recognize whether a picture contains nude content. James Ze Wang [J.Z. Wang, G. Wiederhold, O. Firschein, System for screening objectionable images, Computer Communications Journal, Elsevier Science, 1998, 21(15), pp. 1355-1360] uses the WIPE (Wavelet Image Pornography Elimination) method to recognize and filter sensitive pictures; this method combines the Daubechies wavelet transform, normalized central moments and color histograms to form a semantic matching vector for image classification. Jones and Rehg [M.J. Jones, J.M. Rehg, Statistical color models with application to skin detection, Proc. the International Conference on Computer Vision and Pattern Recognition, 1999, pp. 274-280] studied statistical skin-color models in depth: they first gathered a large number of images from the Internet, marked the skin-color regions by hand, trained a skin-color model from these images as training samples, and finally used skin color as the main information to detect sensitive images. Bosson et al. [A. Bosson, G.C. Cawley, Y. Chan, R. Harvey, Non-retrieval: Blocking pornographic images, Proc. the International Conference on Image and Video Retrieval, 2002, pp. 50-60] treat detected skin-color regions as small elliptical blocks, extract from each ellipse features such as the area center, axis lengths and eccentricity, and classify these features. In addition, there are some general content-based image retrieval systems, such as IBM's QBIC, Attrasoft's ImageFinder, and MWLabs' IMatch. It is worth mentioning that in 1999 four scientists of the image and multimedia indexing group of the French National Institute for Research in Computer Science and Automation (INRIA) founded the company LookThatUp, whose image filtering and retrieval products lead the industry; LookThatUp's Image-Filter uses advanced recognition algorithms to filter images on the network in real time.
In 2001 Europe launched the NetProtect plan. Running from 1 January 2001 to 1 May 2002, it was jointly developed by research institutions including EADS Matra Systemes et Information of France, Red Educativa of Spain, Matra Global Netservices of France, Hyertech of Greece, and Sail Labs of Germany. The goal of the NetProtect plan is to establish uniform technical standards for European Internet information-filtering tools, realizing cross-region, cross-language filtering of harmful Internet information.
Existing domestic anti-pornography software includes the Meiping Anti-Porn Expert released by the Meiping software studio, the "Forbidden City Anti-Porn Bodyguard" released by ZiJinCheng.NET, the "Escort" (protecting young people, like cherished flowers, from the harm of electronic pornography and drugs), the "Piercing Eye" anti-pornography software developed by USTC iFlytek Information Technology Co., and Tsinghua University's "Five-Element Bodyguard" anti-pornography software, among others. It must be pointed out that none of these domestic filtering tools for harmful network information achieves the desired effect, whether technically or in filtering method. In particular, the rapid development of network applications in China in recent years makes the network's influence on society, family and education ever more far-reaching, so the filtering of harmful network information will face unprecedented pressure. It should be emphasized that although Internet harmful-information filtering technology has received wide attention and study worldwide, many difficulties remain in harmful-information recognition; in particular, pornographic-image recognition and filtering based on image content still lacks effective algorithms and classification methods. How to develop more robust and accurate sensitive-image recognition technology therefore remains a challenge.
Summary of the invention
The purpose of this invention is to provide a content-based image recognition method; the technical problem it solves is to use the local information and body-posture information that a sensitive picture expresses to recognize sensitive images.
To achieve the above object, a content-based image recognition method comprises the steps of:
first dividing the image into a grid;
then performing region growing with each grid node as the initial position, to obtain the skin-color information around each node in the image;
determining the interest points and regions of interest in the image by a mechanism of mutual voting between grid cell regions and nodes;
on the basis of the regions of interest and interest points, extracting the contour of the human torso in the image by moving points;
finally, extracting the contour information and the local information inside the contour to generate a feature vector with which the image is recognized and its character judged.
The present invention is a novel sensitive-image recognition technique. It overcomes the internationally recognized difficulties of existing sensitive-image recognition techniques, such as color-histogram matching, wavelet-transform contour matching, skin-color texture description and image central-moment matching, which suffer from low speed, low efficiency and strong device dependency. At the same time it solves the difficult problem of classifying bikini images, nude images and face images, so that sensitive-image recognition technology is further extended and wide application prospects are opened up.
Description of drawings
Fig. 1 shows the geometric division of an image, where (a) shows regions and points, (b) the relation between a region and its surrounding subregions, and (c) the relation between a point and its surrounding subregions;
Fig. 2 shows region growing, where (a) shows the four unit vectors and (b) the growth direction determined by the composite vector;
Fig. 3 shows region voting, where (a) shows the voting results and (b) the regions of interest, the black part being non-target regions;
Fig. 4 shows the several stages of extracting the torso contour and local information;
Fig. 5 shows the initial curve and reference items, where (a) shows the initial curve composed of interest points, (b) the reference vectors and reference points, and (c) the motion patterns of points;
Fig. 6 shows the topological structure of the image classification scheme;
Fig. 7 shows the image recognition and judgment flow.
Embodiment
The principal features of the present invention are: 1) a novel image grid division and region-growing technique, which extracts the skin-color information in an image quickly and effectively; 2) interest points and regions of interest obtained on the basis of grid-cell and node voting, a mode that shortens the time needed to obtain target regions and reduces computation cost; 3) comprehensive use of the regions of interest and interest points as the initial information for torso-contour extraction, with the torso contour generated by optimizing the pixel point set, a process that obtains the local information of the image while also expressing the posture of the human body; 4) extraction of the posture features, contour features and local features of the human body and establishment of a fast image classification method.
Detailed explanations of each technical aspect involved in this invention are given below.
Image grid division
Dividing an image suitably is a method often adopted in image processing. Here the purpose of dividing the image is both to save time in low-level processing and to locate the regions of interest conveniently. The division method is shown in Fig. 1(a): the image is divided into 4 x 4 = 16 equal regions, each denoted a_ij, where i, j = 1, 2, 3, 4. Four subregions are also marked on the four corners of each region. For an image, every region is thus associated with its four corresponding corner subregions and corner points, and every point connects the four surrounding regions at its corner. We number these regions, subregions and points respectively, as shown in Figs. 1(a), (b) and (c). From Figs. 1(b) and (c) we can define the following two matrices:
$$A = \begin{pmatrix} p_{ij}(4) & p_{i,j+1}(3) \\ p_{i+1,j}(2) & p_{i+1,j+1}(1) \end{pmatrix}, \qquad P = \begin{pmatrix} p_{ij}(1) & p_{ij}(2) \\ p_{ij}(3) & p_{ij}(4) \end{pmatrix} \qquad (1)$$
A represents the relation between a region and the subregions surrounding it; P represents the relation between a corner point and the subregions around it. The entire image can then be represented by regions, points and subregions, which greatly reduces computational complexity and lays the foundation for the following steps.
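As an illustration, the 4 x 4 division and the node-region association underlying the matrices A and P can be sketched as follows (a minimal sketch; the function names and the 0-based indexing are our own, not from the patent):

```python
import numpy as np

def grid_nodes(height, width, n=4):
    """Divide an image into an n x n grid and return the (n+1) x (n+1)
    node coordinates, as in Fig. 1(a)."""
    ys = np.linspace(0, height, n + 1).astype(int)
    xs = np.linspace(0, width, n + 1).astype(int)
    return [[(y, x) for x in xs] for y in ys]

def regions_around_node(i, j, n=4):
    """Indices of the (up to four) grid regions sharing node (i, j).
    An interior node touches exactly four regions, as encoded by P."""
    cand = [(i - 1, j - 1), (i - 1, j), (i, j - 1), (i, j)]
    return [(a, b) for a, b in cand if 0 <= a < n and 0 <= b < n]

nodes = grid_nodes(480, 640)
print(len(nodes), len(nodes[0]))        # 5 5  (5 x 5 nodes for a 4 x 4 grid)
print(len(regions_around_node(2, 2)))   # 4   (interior node)
print(len(regions_around_node(0, 0)))   # 1   (corner node)
```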
Region growing
Considering that network images vary widely in size, we adopt a new region-growing method that grows with functional blocks rather than pixels as the basic unit, and uses the skin-color distribution already obtained to detect the skin-color information in a subregion. Starting from a point p_ij, a 6 x 6 functional block centered on it is determined. Four unit vectors are then determined, each pointing from the central point p_ij to one of the four corners of the block, as shown in Fig. 2(a). The superposition of these four vectors synthesizes a new vector that determines the growth direction of the block, as shown in Fig. 2(b). Different directions have different weights; to determine these weights, a block is divided into four sub-blocks, the skin-color area ratio of each sub-block is computed, and this ratio is selected as the weight of the corresponding unit vector. For example,
$$\vec{s}_g = w_i \vec{s}_i + w_j \vec{s}_j \qquad (2)$$
where the w's are weights. The whole growing process terminates only when the subregion boundary is reached or all four weights decay to 0. Let S_grow denote the total area of the grown region and S_skin the skin-color area. Experiments show that although S_grow includes non-skin-color area inside the grown region, S_grow is more effective than S_skin when describing the attributes of regions and points. We therefore define
$$p_{ij}(e) = S_{grow} / S_{sub} \quad (e = 1, 2, 3, 4) \qquad (3)$$
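A rough sketch of this block-wise growing step follows (an illustrative approximation, not the patent's exact procedure: the binary skin mask, the block stepping and the iteration cap are our assumptions; the patent gives only the principle of corner unit vectors weighted by sub-block skin ratios):

```python
import numpy as np

def skin_ratio(mask, y, x, h, w):
    """Fraction of skin pixels in a sub-block of the binary skin mask."""
    block = mask[y:y + h, x:x + w]
    return block.mean() if block.size else 0.0

def grow_from_node(mask, y0, x0, block=6, max_steps=50):
    """Grow from node (y0, x0) in 6x6 blocks. Each step moves along the
    resultant of the four corner unit vectors weighted by the skin ratio
    of the four sub-blocks (eq. 2); growing stops at the boundary or when
    all four weights vanish. Returns S_grow as the number of pixels visited."""
    H, W = mask.shape
    corners = [(-1, -1), (-1, 1), (1, -1), (1, 1)]  # unit directions of Fig. 2(a)
    y, x = y0, x0
    visited = set()
    for _ in range(max_steps):
        half = block // 2
        ys, xs = y - half, x - half
        if ys < 0 or xs < 0 or ys + block > H or xs + block > W:
            break  # hit the subregion/image boundary
        # weights: skin-color area ratio of the 4 sub-blocks (3x3 each)
        ws = [skin_ratio(mask, ys + (dy > 0) * half, xs + (dx > 0) * half, half, half)
              for dy, dx in corners]
        if all(w == 0 for w in ws):
            break  # all four weights decayed to 0
        visited.update((ys + i, xs + j) for i in range(block) for j in range(block))
        # composite growth vector, eq. (2) / Fig. 2(b)
        gy = sum(w * dy for w, (dy, dx) in zip(ws, corners))
        gx = sum(w * dx for w, (dy, dx) in zip(ws, corners))
        if gy == 0 and gx == 0:
            break  # composite vector is zero: growth has converged
        y += block * int(np.sign(gy))
        x += block * int(np.sign(gx))
    return len(visited)
```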
Generation of interest points and regions of interest
Undeniably, the regions of an image containing no skin-color information are redundant for sensitive-image recognition, and effectively excluding these redundant regions reduces the time needed to extract useful information from the image. Intuitively, the regions richest in skin color should be the ones the algorithm is most interested in, so we adopt a voting mechanism based on skin-color information to obtain the regions of interest. We define the value of a_ij as
$$a_{ij} = p_{ij}(4) + p_{i,j+1}(3) + p_{i+1,j}(2) + p_{i+1,j+1}(1) \qquad (4)$$
Here each component on the right-hand side is given by equation (3). These variables expressing region attributes are then normalized:
$$\alpha_{ij} = a_{ij} / M \qquad (5)$$
where M = max(a_11, a_12, ..., a_ij, ..., a_44), M ≠ 0. To highlight the differences between regions, the possible values of a point p_ij must also be determined. We define the value of p_ij as
$$p_{ij} = \begin{cases} 1\ (\text{true}), & \text{if } \sum_{e=1}^{4} p_{ij}(e) \ge \alpha \\ 0\ (\text{false}), & \text{if } \sum_{e=1}^{4} p_{ij}(e) < \alpha \end{cases} \qquad (6)$$
where α is a threshold. Points falling on the image boundary are all given the value 0, because from the viewpoint of photography the object to be described should be located near the middle of the image. The points whose value is 1 are selected as interest points; the total number of interest points is then
$$N = \sum_{i=2}^{4} \sum_{j=2}^{4} p_{ij} \qquad (7)$$
Considering the relation between points and regions, the values of the interest points can be used to vote: each interest point casts one vote for each of its surrounding regions. If all internal points of the image are true, the voting result is as in Fig. 3(a); the final vote count of each region depends on the four points located at its corners:
$$v_{score}(ij) = p_{ij} + p_{i,j+1} + p_{i+1,j} + p_{i+1,j+1} \qquad (8)$$
The final score of a region is

$$S_{region}(i,j) = \alpha_{ij} + v_{score}(ij) \qquad (9)$$
We sort the regions in descending order of score, select the top N as regions of interest and exclude the other regions, so that the target area stands out. Some results are shown in Fig. 3(b). The regions of interest can express the posture information of the human body, and higher-scoring regions are likely to contain more important information and more suspicious content. Focusing the study on regions of interest both allows the body posture to be described further and reduces computational complexity.
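The voting pipeline of equations (4)-(9) can be sketched as follows (0-based indexing, with e = 0..3 standing for subregions (1)..(4); the default threshold value is our assumption, since the patent leaves α unspecified):

```python
import numpy as np

def region_scores(p, alpha=0.5):
    """Score the 4x4 grid regions by skin-based voting, eqs. (4)-(9).
    p[i, j, e] holds p_ij(e) of eq. (3) for the 5x5 grid nodes.
    Returns (scores, node_values)."""
    n = 4
    a = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # eq. (4): each region collects one component from each corner node
            a[i, j] = p[i, j, 3] + p[i, j + 1, 2] + p[i + 1, j, 1] + p[i + 1, j + 1, 0]
    M = a.max()
    a_norm = a / M if M > 0 else a                     # eq. (5), guarding M != 0
    node = np.zeros((n + 1, n + 1), dtype=int)
    # eq. (6): boundary nodes keep value 0; interior nodes are thresholded
    node[1:-1, 1:-1] = p[1:-1, 1:-1].sum(axis=2) >= alpha
    # eq. (8): each region receives one vote per corner interest point
    v = node[:-1, :-1] + node[:-1, 1:] + node[1:, :-1] + node[1:, 1:]
    return a_norm + v, node                            # eq. (9)
```

Ranking `scores` in descending order and keeping the top N entries then yields the regions of interest.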
Extraction of the torso contour and local information
For judging the attributes of a sensitive image, we consider that the information contained in the torso can describe the character of the image, so extracting the object contour and the local information inside it is the core content of this part. Figs. 4(a)-(d) show the contour-extraction process. First, we design a skin-color edge detector to detect skin-color boundary points, and connect the interest points with higher weights to form a closed curve, as shown in Fig. 4(a). Next, the skin-color boundary points that fall outside this closed curve but within a certain distance of it are collected and connected into another closed curve, as shown in Fig. 4(b). The positions of all non-skin-color boundary points on this curve are then adjusted; the optimized curve is the torso contour curve, as shown in Fig. 4(c). Finally, we use a point-growing technique to detect the non-skin-color regions inside the contour and obtain local information such as the area and position of the non-skin-color regions, as shown in Fig. 4(d). Here we select an image containing four interest points to describe our algorithm concretely, as shown in Fig. 5.
First define Q as the set of all skin-color boundary points. Obviously some points of this set lie on the contour boundary, so initial information about the contour can be obtained from Q. Connecting adjacent interest points yields a closed curve C_r, as shown in Fig. 5(a):
$$C_r = l_1' + l_2' + l_3' + l_4' \qquad (10)$$
Outside the curve C_r, draw a parallel curve C_r1 at distance λ from C_r, where λ is a threshold. Define Q_1 as the set of skin-color boundary points clipped between these two curves; obviously Q_1 is a subset of Q, and its size depends on the threshold λ. Connecting all the points contained in Q_1 together with the interest points gives a curve that roughly describes the contour. Define D as the set of all points on this curve, and then the difference set E = D − Q_1; E thus contains both the skin-color points inside the contour and the non-skin-color points outside it. Next we need to adjust the positions of the points not on the contour, so that the curve more closely approaches the real torso contour.
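Collecting the skin-color boundary points lying within distance λ of the initial curve C_r (the band between C_r and its parallel curve) might look like the following sketch (names are hypothetical, and the inside/outside test is omitted for brevity):

```python
import math

def seg_dist(p, a, b):
    """Distance from point p to the segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def band_points(boundary_pts, polygon, lam):
    """Skin-color boundary points within distance lam of the closed
    polyline `polygon` (the curve C_r of eq. 10) -- the set Q_1."""
    def d(p):
        n = len(polygon)
        return min(seg_dist(p, polygon[i], polygon[(i + 1) % n]) for i in range(n))
    return [p for p in boundary_pts if d(p) <= lam]
```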
For adjusting point positions, we define several basic reference items. First, the midpoints of the four line segments connecting adjacent interest points are selected as reference points, and from each interest point two vectors are drawn pointing to its two adjacent reference points; we call these reference vectors, as shown in Fig. 5(b). A reference point determines which zone a point to be adjusted belongs to, and the reference vector determines its motion pattern. These reference items are significant for our algorithm, because in the initial stage every point to be adjusted moves with respect to its corresponding reference vector, either toward it or away from it.
Next, we illustrate the concrete motion patterns of the points to be adjusted by Fig. 5(c). Different search plans are adopted for skin-color and non-skin-color points to be adjusted. Let p_0 denote a skin-color point and p_0' a non-skin-color point; their motion patterns are expressed respectively as follows.
For a skin-color point p_0,
[Equation (11) is given only as an image in the original document.]
where

$$\theta_0 = \arccos\big( [s_0, R] \,/\, (\|s_0\| \cdot \|R\|) \big), \qquad r_0 = s_0 - \big( \|s_0\| \cos\theta_0 / \|R\| \big) R \qquad (12)$$
For non-colour of skin point p ' 0,
[Equations (13) and (14) are given only as images in the original document.]
It can be seen that a skin-color point, moving along a straight line, can not only quickly detect nearby skin-color boundary points but also locate skin-color boundaries farther away, while a non-skin-color point, moving along a circular arc, can better fit the bending of the torso. Although skin-color and non-skin-color points have different motion patterns, their purpose is the same: to detect new skin-color boundary points falling outside the initial contour, and thus to approach the real torso contour to some extent. The objective function is defined as
$$F(p_n) = g_1(p_n) + g_2(p_n) + g_3(p_n) \qquad (15)$$
The first term on the right indicates whether the point is a skin-color edge point, the second whether it is an edge point (the edges being obtained by a Sobel edge detector), and the third whether the pixel is a skin-color point. Define
$$f_1 = F(p_{n+1}) - F(p_n) \qquad (16)$$

$$f_2 = F(p_n) - F(p_{n-1}) \qquad (17)$$
If f_1 ≠ 0 and f_2 ≠ 0, the point with the largest objective-function value among the three points p_{n−1}, p_n and p_{n+1} is selected as the target contour point, replacing its corresponding initial point in the set E. The curve after final optimization is shown in Fig. 4(d); we call it the torso contour, and many important features of the human body are contained within it.
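The replacement rule of equations (15)-(17) can be sketched generically as one refinement pass (F stands for the scoring function of eq. (15) and is supplied by the caller; the single-pass structure is our simplification):

```python
def optimize_contour(points, F):
    """One pass of the contour refinement of eqs. (15)-(17): for each
    interior point p_n, if F differs on both sides (f1 != 0 and f2 != 0),
    replace p_n's slot by whichever of p_{n-1}, p_n, p_{n+1} maximizes F."""
    out = list(points)
    for n in range(1, len(points) - 1):
        f1 = F(points[n + 1]) - F(points[n])   # eq. (16)
        f2 = F(points[n]) - F(points[n - 1])   # eq. (17)
        if f1 != 0 and f2 != 0:
            cand = [points[n - 1], points[n], points[n + 1]]
            out[n] = max(cand, key=F)          # keep the best-scoring point
    return out
```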
Next, the local information can be extracted inside the contour. The axis of the contour is first obtained without difficulty; a point-growing method is then used to detect the non-skin-color regions on both sides of the axis, so that the area and position information of these non-skin-color regions can be obtained. We call this non-skin-color information occupying the interior of the contour the local information.
Image feature extraction and character judgment
According to the discussion above, the information contained in the four corner regions of the image is in fact discarded; this is reasonable, because the keynote of an image is generally described by its interior. It follows that an image contains at most 9 interest points. According to the number of interest points, all images are divided into 9 major classes, and each major class is further divided into 3 subclasses according to the aspect ratio of the image; all images are thus divided into 27 classes, as shown in Fig. 6. These 27 classes can distinguish a total of
$$N_c = 3 \times \left( C_{12}^{0} + C_{12}^{1} + \cdots + C_{12}^{8} + C_{12}^{9} \right) = 12051 \qquad (18)$$
kinds of images, so these classes are sufficient to describe the various differences among sensitive images. For each image we extract the following features: first, the geometric information of the regions of interest; second, the scores of the regions of interest and the position of the highest-scoring region; third, the angles between adjacent non-skin-color regions inside the contour; and finally, the position of the largest non-skin-color region inside the contour, determined by the ratio of the distances from the region center to the two ends of the axis. These features are expanded and arranged into the following one-dimensional vector
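The count stated in equation (18) can be checked directly:

```python
from math import comb

# Eq. (18): 3 aspect-ratio subclasses times the number of
# configurations counted as C(12,0) + C(12,1) + ... + C(12,9).
n_c = 3 * sum(comb(12, k) for k in range(10))
print(n_c)  # 12051
```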
$$v = [w_0, w_1, \ldots, w_i, \ldots, w_n]^T \qquad (19)$$
Here the contour information itself is not used as a classification feature, because a normal face image may also have contour information similar to that of a sensitive image. In addition, the area of the non-skin-color regions outside the contour is not used as a classification feature either, because this feature is unstable when judging sensitive images.
For image recognition and matching we adopt the nearest-neighbor method with a cosine similarity measure, described as
$$g(v, v_i) = \arg\min_{v_i \in C_i} d(v, v_i), \qquad d(v, v_i) = 1 - \frac{v^T v_i}{\|v\| \, \|v_i\|} \qquad (20)$$
The binary classification function is
$$G(v) = g(v, v^+) - g(v, v^-) \qquad (21)$$
where v^+ and v^- represent positive and negative templates respectively. In the recognition process, we first judge the class of the test image, then extract its features and match them in feature space against the positive and negative samples of the training set of that class, judging with equation (21). Because the image is classified before the feature comparison, the number of comparisons, and hence the computational complexity, is reduced. The whole judgment process is shown in Fig. 7.
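Equations (20) and (21) can be sketched as follows (a minimal sketch; the template sets and test vectors are purely illustrative):

```python
import numpy as np

def cos_dist(v, u):
    """d(v, u) = 1 - cosine similarity, eq. (20)."""
    return 1.0 - float(np.dot(v, u) / (np.linalg.norm(v) * np.linalg.norm(u)))

def G(v, positives, negatives):
    """Binary decision of eq. (21): nearest-positive distance minus
    nearest-negative distance. G(v) < 0 means v is closer to the
    positive (sensitive) templates than to the negative ones."""
    g_pos = min(cos_dist(v, p) for p in positives)
    g_neg = min(cos_dist(v, n) for n in negatives)
    return g_pos - g_neg

pos = [np.array([1.0, 0.0]), np.array([0.9, 0.1])]
neg = [np.array([0.0, 1.0])]
print(G(np.array([1.0, 0.05]), pos, neg) < 0)  # True: closer to the positives
```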

Claims (7)

1. A content-based image recognition method, comprising the steps of:
first dividing an image into a grid;
then performing region growing with each grid node as the initial position, to obtain the skin-color information around each node in the image;
determining the interest points and regions of interest in the image by a mechanism of mutual voting between grid cell regions and nodes;
on the basis of the regions of interest and interest points, extracting the contour of the human torso in the image by moving points;
finally, extracting the contour information and the local information inside the contour to generate a feature vector with which the image is recognized and its character judged.
2. The method according to claim 1, characterized in that said region growing comprises the steps of:
determining functional blocks;
dividing each functional block into 4 sub-blocks.
3. The method according to claim 2, characterized in that the different growth directions of said functional block have different weights.
4. The method according to claim 1, characterized in that said mechanism of mutual voting comprises:
assigning the value 0 to every point falling on the image boundary;
for a grid node falling inside the image region, assigning the value 1 if the ratio of the total grown area to the total subregion area in its surrounding subregions is greater than or equal to a preset threshold, and the value 0 otherwise;
determining the final score of a region jointly by the score values of the four grid nodes at its corners and its own carried value.
5. The method according to claim 1, characterized in that extracting the contour of the torso in the image comprises:
detecting skin-color boundary points, and connecting the interest points contained in a region of interest to form a closed curve;
collecting the skin-color boundary points falling outside said closed curve but within a certain distance of it, and connecting these points to form another closed curve;
adjusting the positions of all non-skin-color boundary points on said other closed curve to obtain the optimized torso contour curve;
detecting the non-skin-color regions inside the contour to obtain local information.
6. The method according to claim 5, characterized in that said local information comprises the area and position of the non-skin-color regions.
7. The method according to claim 1, characterized in that recognizing the image and judging its character comprises extracting the following features:
the geometric position information of the regions of interest;
the scores of the regions of interest and the position of the highest-scoring region;
the angles between adjacent non-skin-color regions inside the contour;
the position of the largest non-skin-color region inside the contour.
CNB2004100350849A 2004-04-23 2004-04-23 Content based image recognition method Expired - Fee Related CN1331099C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2004100350849A CN1331099C (en) 2004-04-23 2004-04-23 Content based image recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2004100350849A CN1331099C (en) 2004-04-23 2004-04-23 Content based image recognition method

Publications (2)

Publication Number Publication Date
CN1691054A CN1691054A (en) 2005-11-02
CN1331099C true CN1331099C (en) 2007-08-08

Family

ID=35346484

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2004100350849A Expired - Fee Related CN1331099C (en) 2004-04-23 2004-04-23 Content based image recognition method

Country Status (1)

Country Link
CN (1) CN1331099C (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100397400C (en) * 2006-02-10 2008-06-25 华为技术有限公司 Graphic retrieve method
CN101030244B (en) * 2006-03-03 2010-08-18 中国科学院自动化研究所 Automatic identity discriminating method based on human-body physiological image sequencing estimating characteristic
CN100412884C (en) * 2006-04-10 2008-08-20 中国科学院自动化研究所 Human face quick detection method based on local description
JP5058575B2 (en) * 2006-12-12 2012-10-24 キヤノン株式会社 Image processing apparatus, control method therefor, and program
CN101334845B (en) * 2007-06-27 2010-12-22 中国科学院自动化研究所 Video frequency behaviors recognition method based on track sequence analysis and rule induction
JP5432241B2 (en) * 2008-04-07 2014-03-05 コーニンクレッカ フィリップス エヌ ヴェ Mesh collision avoidance
CN101763502B (en) * 2008-12-24 2012-07-25 中国科学院自动化研究所 High-efficiency method and system for sensitive image detection
EP2378761A1 (en) * 2009-01-14 2011-10-19 Panasonic Corporation Image pickup device and image pickup method
CN101763634B (en) * 2009-08-03 2011-12-14 北京智安邦科技有限公司 simple objective classification method and device
CN101923652B (en) * 2010-07-23 2012-05-30 华中师范大学 Pornographic picture identification method based on joint detection of skin colors and featured body parts
CN102609715B (en) * 2012-01-09 2015-04-08 江西理工大学 Object type identification method combining plurality of interest point testers
CN103065126B (en) * 2012-12-30 2017-04-12 信帧电子技术(北京)有限公司 Cross-scene re-identification method for human body images
US9305208B2 (en) * 2013-01-11 2016-04-05 Blue Coat Systems, Inc. System and method for recognizing offensive images
CN105303152B (en) * 2014-07-15 2019-03-22 中国人民解放军理工大学 A human body re-identification method
CN107358150B (en) * 2017-06-01 2020-08-18 深圳赛飞百步印社科技有限公司 Object frame identification method and device and high-speed shooting instrument
US11140108B1 (en) 2020-05-18 2021-10-05 International Business Machines Corporation Intelligent distribution of media data in a computing environment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4805223A (en) * 1985-04-22 1989-02-14 The Quantum Fund Limited Skin-pattern recognition method and device
US20030152262A1 (en) * 2002-02-11 2003-08-14 Fei Mao Method and system for recognizing and selecting a region of interest in an image
US20030179931A1 (en) * 2002-03-19 2003-09-25 Hung-Ming Sun Region-based image recognition method
JP2003308530A (en) * 2002-04-15 2003-10-31 Canon I-Tech Inc Image recognizer

Also Published As

Publication number Publication date
CN1691054A (en) 2005-11-02

Similar Documents

Publication Publication Date Title
CN1331099C (en) Content based image recognition method
Srihari Automatic indexing and content-based retrieval of captioned images
CN100530222C (en) Image matching method
CN101763507B (en) Face recognition method and face recognition system
CN107330396A A pedestrian re-identification method based on multi-attribute and multi-strategy fusion learning
CN109670528A Paired-sample random occlusion data augmentation method for pedestrian re-identification tasks
CN106023065A (en) Tensor hyperspectral image spectrum-space dimensionality reduction method based on deep convolutional neural network
CN109063724A An enhanced generative adversarial network and target sample recognition method
CN106021442B An Internet news summary extraction method
CN109543602A A pedestrian re-identification method based on multi-view image feature decomposition
CN104268593A (en) Multiple-sparse-representation face recognition method for solving small sample size problem
CN101923652A (en) Pornographic picture identification method based on joint detection of skin colors and featured body parts
CN107506786A An attribute classification and recognition method based on deep learning
CN106503672A A method for recognizing abnormal behaviour of the elderly
CN106469181A A user behavior pattern analysis method and device
CN106203356A A face recognition method based on convolutional network feature extraction
CN106570183A (en) Color picture retrieval and classification method
CN103020265A (en) Image retrieval method and system
CN108509939A A bird recognition method based on deep learning
CN100461217C Image texture segmentation method based on a complexity measure
CN109800600A Ocean big data sensitivity assessment system and protection method for privacy requirements
CN109145971A A one-shot learning method based on an improved matching network model
CN110414483A A face recognition method and system based on deep neural networks and random forests
CN103984954B (en) Image combining method based on multi-feature fusion
CN106874825A Face detection training method, detection method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20070808

Termination date: 20180423