CN104574285B - Method for automatically removing dark under-eye circles from an image - Google Patents
Method for automatically removing dark under-eye circles from an image Download PDF Info
- Publication number
- CN104574285B CN104574285B CN201310503624.0A CN201310503624A CN104574285B CN 104574285 B CN104574285 B CN 104574285B CN 201310503624 A CN201310503624 A CN 201310503624A CN 104574285 B CN104574285 B CN 104574285B
- Authority
- CN
- China
- Prior art keywords
- image
- pixel
- value
- color
- black
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The invention belongs to the field of digital image processing and discloses a method for automatically removing dark under-eye circles from an image. The contour regions of the left and right eye bags are first drawn and recorded as an eye-bag contour map; a face-region image is then obtained, the brightness of each pixel in the face region is extracted, and the result is recorded as image B. Image B is successively passed through a mean filter, a high-contrast-retain step and a brightening algorithm to obtain image C. Image B is also mean-filtered and histogram-analysed, and image D is then computed from the eye-bag contour map and image C; image D is Gaussian-blurred. The face region is likewise Gaussian-blurred to obtain a face Gaussian map. Finally image D, the face region and the face Gaussian map are combined to produce the result image. The scheme removes dark circles automatically, minimises the difficulty and the learning threshold of user operation, can be deployed on a variety of portable digital devices, dispenses with human-computer interaction devices such as mouse and keyboard, keeps hardware cost low, and is convenient and fast.
Description
Technical field
The present invention relates to a digital image processing method.
Background technology
The wide adoption of portable digital devices has turned photography into a low-threshold skill, so that people can record their own activities at any time for household, business and other purposes. Portrait photography is one of the most common uses. Personal portraits captured by digital devices offer good editability, and for aesthetic reasons the dark circles under the eyes in a personal portrait often need technical treatment. Although such delicate retouching can normally be done with digital image-processing software, it demands considerable skill, involves many time-consuming steps, and may even require professional human-computer interaction tools such as a mouse or stylus. This approach is therefore hard to popularise on mainstream portable digital devices, leaving many users helpless in the face of dark circles. Providing a method that automatically removes dark circles from the faces in a picture is thus a pressing problem.
Summary of the invention
Existing digital image-processing methods for treating dark under-eye circles in personal portraits are time-consuming, demand skill and hardware, and are hard to popularise on portable terminals. To address these defects, the present invention proposes a digital image processing method that provides automatic treatment of dark circles in digital images with convenience, speed, a low learning threshold and low cost. Its technical scheme is as follows:
A method for automatically removing dark under-eye circles from an image, with the following steps:
1) Receive a digital image A;
2) Obtain the exact positions of the face and the eyes, draw the contour regions of the left and right eye bags, and record them as an eye-bag contour map, as follows:
Determine the face region in digital image A, and compute the detailed eye regions from the ratio of an eye's length to the face; then draw with Bezier curves and straight lines to obtain the left and right eye-bag regions.
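The contour construction described above can be sketched as follows. This is a minimal pure-Python illustration, not the patent's exact drawing routine: the control-point placement and the `depth` of the convex bulge are assumptions, since the patent only prescribes straight segments joined by convex Bezier curves.

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    s = 1.0 - t
    x = s**3 * p0[0] + 3 * s**2 * t * p1[0] + 3 * s * t**2 * p2[0] + t**3 * p3[0]
    y = s**3 * p0[1] + 3 * s**2 * t * p1[1] + 3 * s * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

def eye_bag_outline(left, right, depth, samples=32):
    """Trace a convex arc between the two endpoints of a horizontal
    segment; two such arcs plus the straight upper and lower segments
    close the eye-bag contour."""
    # Control points pushed downward by `depth` give the outward bulge
    # (illustrative choice; any convex placement would satisfy the text).
    c1 = (left[0] + (right[0] - left[0]) / 3.0, left[1] + depth)
    c2 = (left[0] + 2 * (right[0] - left[0]) / 3.0, right[1] + depth)
    return [cubic_bezier(left, c1, c2, right, i / (samples - 1))
            for i in range(samples)]
```

Rasterising the closed outline (two straight segments plus two such arcs per eye) yields the eye-bag contour map used in step 5.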
3) Extract the brightness of each pixel in the face region to form a new image B. The extraction method is as follows:
For each pixel of the face region, compute the maximum and minimum of its channel values; their sum divided by 2 is the brightness value of the pixel.
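The brightness of step 3 is the lightness component of the HSL colour model; a direct sketch (integer division is an assumption, the patent only says the sum is divided by 2):

```python
def pixel_brightness(r, g, b):
    """Step-3 brightness: (max + min) / 2 of the RGB channels."""
    return (max(r, g, b) + min(r, g, b)) // 2

def lightness_image(pixels):
    """Map an image given as rows of (r, g, b) tuples to image B."""
    return [[pixel_brightness(*px) for px in row] for row in pixels]
```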
4) Successively apply a mean filter, high-contrast retain and a brightening algorithm to image B to obtain image C, as follows:
The mean filter takes a 3x3 template centred on the target pixel of the image being processed, i.e. the target pixel itself together with its 8 nearest neighbours, and replaces the target pixel's original value with the average of the colour values of all pixels in the template.
High-contrast retain subtracts the Gaussian-blurred value of the target pixel from its original value and adds 128. Here Gaussian blur transforms each pixel of the image with weights drawn from the normal distribution:

G(r) = (1 / (2*pi*sigma^2)) * e^(-r^2 / (2*sigma^2))

where r is the blur radius, r^2 = u^2 + v^2, sigma is the standard deviation of the normal distribution, u is the horizontal offset of the target pixel from the original pixel position, and v is the vertical offset; the blur radius r lies in [6, 12].
The brightening algorithm maps colours through a brightening lookup table:

colorResult = arrayLight[color];

where arrayLight is the brightening table of size 256 with arrayLight[i] >= i, and color is the original colour value of each pixel of the image being processed.
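The three passes of step 4 can be sketched as follows, on single-channel images held as 2-D lists of 0-255 integers. The border handling of the mean filter and the gamma-style brightening table are assumptions: the patent only requires an average over the 3x3 template and a table with arrayLight[i] >= i.

```python
def mean_filter(img):
    """Mean filter of step 4: a 3x3 template centred on each pixel
    (the pixel itself plus its nearest neighbours; fewer at the
    borders), averaged to replace the original value."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) // len(vals)
    return out

def high_contrast_retain(img, blurred):
    """High-contrast retain: original minus its Gaussian-blurred value
    plus 128, clamped to the 0-255 range."""
    return [[max(0, min(255, p - b + 128))
             for p, b in zip(row, brow)]
            for row, brow in zip(img, blurred)]

# A brightening table satisfying arrayLight[i] >= i; the gamma value
# 0.8 is an illustrative choice, not specified by the patent.
arrayLight = [min(255, round(255 * (i / 255) ** 0.8)) for i in range(256)]
```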
5) Mean-filter image B as in step 4), then perform histogram statistics, and then compute image D from the eye-bag contour map and image C, as follows:
The mean filtering uses the method of step 4).
The histogram statistics: preset an array hist[256] of size 256 with all values initialised to 0, then use the colour value of each pixel of image B as an index into the array and increment that entry:

hist[color] = hist[color] + 1;

where color is the colour value of each pixel of the image being processed.
The computation with the eye-bag contour map and image C is:
Preset a threshold value threshold between 32 and 200;
For each pixel, if the eye-bag contour map is black at that pixel, set the colour value of that pixel in image D to 255; otherwise, if the colour value of the mean-filtered image B at that pixel is less than threshold, set the colour value of that pixel in image D to 255; otherwise assign the colour value of that pixel in image C to image D.
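The per-pixel selection that builds image D can be sketched as below. Treating "black" in the contour map as the value 0 is an assumption; the inputs are equally sized 2-D lists of 0-255 values.

```python
def build_image_d(contour, b_filtered, img_c, threshold=128):
    """Step-5 rule: pixels where the contour map is black, or where
    the mean-filtered image B is darker than `threshold`, are forced
    to 255; every other pixel copies image C."""
    h, w = len(contour), len(contour[0])
    d = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if contour[y][x] == 0:              # black in the contour map
                d[y][x] = 255
            elif b_filtered[y][x] < threshold:  # dark pixel in filtered B
                d[y][x] = 255
            else:
                d[y][x] = img_c[y][x]
    return d
```

Because image D later acts as the blending alpha, the 255 entries are exactly the pixels where the original face is kept unchanged in step 8.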
6) Gaussian-blur image D using the Gaussian blur of step 4), with the blur radius r in [2, 8], to obtain image E.
7) Gaussian-blur the face region using the Gaussian blur of step 4), with the blur radius in [1, 6], to obtain a face Gaussian map.
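Steps 4), 6) and 7) all rely on the same normal-distribution blur with different radii. A separable 1-D kernel sampled from the normal density can be sketched as below; applying it along rows and then columns gives the 2-D Gaussian blur. The sigma heuristic is an assumption, since the patent fixes only the radius ranges.

```python
import math

def gaussian_kernel_1d(radius, sigma=None):
    """Sample exp(-u^2 / (2*sigma^2)) at integer offsets in
    [-radius, radius] and normalise so the weights sum to 1."""
    if sigma is None:
        sigma = radius / 2.0  # common heuristic, not fixed by the patent
    weights = [math.exp(-(u * u) / (2.0 * sigma * sigma))
               for u in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]
```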
8) Compute the result image from image E, the face region and the face Gaussian map, as follows:
a) Blend the face region with the face Gaussian map, using the colour values of image E as the transparency:

alpha = colorE / 255.0;
colorResult = colorFace * alpha + (1.0 - alpha) * colorFaceGauss;

where alpha is the normalised colour value of image E; colorE is the colour value of image E; colorResult is the blended result; colorFace is the colour value of the face region; and colorFaceGauss is the colour value of the face Gaussian map.
b) Blend the value obtained in a) with the face region once more, using a preset transparency, to obtain the result image:

colorResultAll = colorResult * textureAlpha + (1.0 - textureAlpha) * colorFace;

where colorResultAll is the pixel value in the result image; colorResult is the value computed in step a); textureAlpha is the preset transparency, in the range [0.2, 0.8]; and colorFace is the colour value of the face region.
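The two blends of step 8 compose per pixel as follows (single-channel floats for brevity):

```python
def blend_pixel(color_e, color_face, color_face_gauss, texture_alpha=0.5):
    """Two-stage blend of step 8: first mix the face and its
    Gaussian-blurred copy using image E as a per-pixel alpha, then
    mix that result back with the face using a fixed transparency."""
    alpha = color_e / 255.0
    color_result = color_face * alpha + (1.0 - alpha) * color_face_gauss
    return color_result * texture_alpha + (1.0 - texture_alpha) * color_face
```

Where image E is 255 (outside the eye-bag region, or dark pixels protected in step 5) the output equals the original face value, so only the eye-bag interior receives the softened blur.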
The scheme may preferably be improved as follows:
In a preferred embodiment, step 1) includes a face-detection step: when a face is detected, the position of the face region is obtained and step 2) is executed; otherwise all steps terminate.
In a preferred embodiment, an eye-detection step follows the face-detection step: when eyes are detected, their exact positions are obtained and step 2) is executed; otherwise all steps terminate.
In a preferred embodiment, the threshold value threshold in step 5) is 128.
In a preferred embodiment, the transparency textureAlpha in step 8) is preset to 0.5.
In a preferred embodiment, the blur radius r in step 4) is 8.
In a preferred embodiment, the blur radius r in step 6) is 5.
In a preferred embodiment, the blur radius of the Gaussian blur in step 7) is 3.
In a preferred embodiment, the eye-bag contour map uses two horizontal line segments as its upper and lower edges, and the two ends of the two segments are each joined by two horizontally convex Bezier curves.
The beneficial effects of this scheme are:
Dark circles are removed automatically. On the one hand, this saves operating steps and avoids the cumbersome flow of processing digital images by hand, minimising the difficulty and the learning threshold for the user; the method can be popularised on a variety of portable digital devices, dispenses with human-computer interaction devices such as mouse and keyboard, and keeps hardware cost low. On the other hand, the final effect is obtained quickly with simple menu operations; the waiting time is notably short, making the method convenient and fast.
Embodiment
1) Receive a digital image A;
2) Obtain the exact positions of the face and the eyes with face-detection and eye-detection methods.
Face recognition is performed first; various existing methods can be used, for example P. Viola and M. Jones, "Rapid Object Detection using a Boosted Cascade of Simple Features", Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001). The approximate position of the face region is obtained from this detection.
For eye detection, various known techniques can likewise be used, for example the method in "An Algorithm for Real Time Eye Detection in Face Images" (T. D'Orazio, M. Leo, G. Cicirelli, A. Distante). The exact positions of the left and right eyes are obtained from the eye detection.
In this embodiment the face region starts at (41, 29) and is 418 wide and 482 high; the centre of the left eye is at (97, 128) and the centre of the right eye at (390, 118). The contour regions of the left and right eye bags are drawn and recorded as an eye-bag contour map, as follows:
Determine the face region in digital image A, and compute the detailed eye regions from the ratio of an eye's length to the face, here 0.33 (in practice this ratio lies between 0.25 and 0.4); then draw with Bezier curves and straight lines to obtain the left and right eye-bag regions:
Two horizontal line segments form the upper and lower edges of the eye-bag contour map; the two ends of the two segments are then each joined by two horizontally convex Bezier curves.
3) Extract the brightness of each pixel in the face region: for each pixel, compute the maximum and minimum of its channel values and take half their sum as the brightness, forming a new image B;
4) Successively apply the mean filter, high-contrast retain and the brightening algorithm to image B to obtain image C. The mean filter takes a 3x3 template centred on the target pixel, i.e. the target pixel itself together with its 8 nearest neighbours, and replaces the target pixel's original value with the average of the colour values of all pixels in the template.
High-contrast retain subtracts the Gaussian-blurred value of the target pixel from its original value and adds 128; the Gaussian blur transforms each pixel with weights drawn from the normal distribution:

G(r) = (1 / (2*pi*sigma^2)) * e^(-r^2 / (2*sigma^2))

where r is the blur radius, r^2 = u^2 + v^2, sigma is the standard deviation of the normal distribution, u is the horizontal offset of the target pixel from the original pixel position, and v is the vertical offset; here the blur radius r is 8.
The brightening algorithm maps colours through the brightening lookup table:

colorResult = arrayLight[color];

where arrayLight is the brightening table of size 256 with arrayLight[i] >= i, and color is the original colour value of each pixel of the image being processed.
5) Mean-filter image B as in step 4), then perform histogram statistics, and then compute image D from the eye-bag contour map and image C. The histogram statistics: preset an array hist[256] of size 256 with all values initialised to 0, then use the colour value of each pixel of image B as an index into the array and increment that entry:

hist[color] = hist[color] + 1;

where color is the colour value of each pixel of the image being processed.
The computation with the eye-bag contour map and image C is: a threshold value threshold of 48 is preset; for each pixel, if the eye-bag contour map is black at that pixel, set the colour value of that pixel in image D to 255; otherwise, if the colour value of the mean-filtered image B at that pixel is less than threshold, set the colour value of that pixel in image D to 255; otherwise assign the colour value of that pixel in image C to image D.
6) Gaussian-blur image D using the method of step 4), here with blur radius 5, to obtain image E;
7) Gaussian-blur the face region using the method of step 4), here with blur radius 3, to obtain a face Gaussian map;
8) Compute the result image from image E, the face region and the face Gaussian map, as follows:
a) Blend the face region with the face Gaussian map, using the colour values of image E as the transparency:

alpha = colorE / 255.0;
colorResult = colorFace * alpha + (1.0 - alpha) * colorFaceGauss;

where alpha is the normalised colour value of image E; colorE is the colour value of image E; colorResult is the blended result; colorFace is the colour value of the face region; and colorFaceGauss is the colour value of the face Gaussian map.
b) Blend the value obtained in a) with the face region once more, using a preset transparency, to obtain the result image:

colorResultAll = colorResult * textureAlpha + (1.0 - textureAlpha) * colorFace;

where colorResultAll is the pixel value in the result image; colorResult is the value computed in step a); textureAlpha is the preset transparency, here 0.55; and colorFace is the colour value of the face region.
Verification shows that this embodiment markedly weakens the dark tones of the dark-circle area, in particular of the eye-bag area, even making them hard to discern visually, achieving a good effect.
Claims (9)
1. one kind dispels the black-eyed method of image automatically, it is characterised in that:Step is as follows:
1) a digital picture A is received;
2) particular location of face and eyes is obtained, left eye bag, the contour area of right eye bag is drawn, is designated as a pouch profile diagram;
3) brightness of each pixel of human face region is extracted, a new image B is made;Extracting method is as follows:
Calculate the maxima and minima of each pixel pixel value of the human face region;Then maxima and minima is added
Arriving and again divided by 2, resulting value is the brightness value of each pixel;
4) successively carry out mean filter, high contrast to image B to retain, highlight algorithm and obtain image C, method is as follows:
The mean filter, then be that the template removes object pixel to a template on pending image to object pixel
Itself, include around it closest to 8 pixels, then with the color value of the entire pixels in the template average to replace
The object pixel its pixel value originally;
The high contrast retains, then is that the target pixel value that the target pixel value of pending image is subtracted into Gaussian Blur adds again
Upper 128;Wherein Gaussian Blur is the conversion that each pixel in image is calculated with normal distribution, is:
Wherein r is blur radius, r2=u2+v2, σ is the standard deviation of normal distribution, and u is target pixel points phase on the horizontal axis
To the deviant of preimage vegetarian refreshments position, v is target pixel points on a vertical axis with respect to the deviant of preimage vegetarian refreshments position;This is obscured
Radius r scopes are [6,12],
It is described to highlight algorithm, it is that, using the mapping for highlighting mapping table progress color, the mapping equation for highlighting mapping table is:
ColorResult=arrayLight [color];
Wherein, arrayLight is highlights mapping table, and its size is 256, wherein arrayLight [i] >=i;Color is to wait to locate
Manage the priming color value of each pixel of image;
5) image B is first carried out after the mean filter processing, then carries out statistics with histogram, so with the pouch profile diagram with
And image C calculate obtaining image D, method is as follows:
Mean filter processing is with method in 4);
The statistics with histogram is:First preset a size and be 256 array hist [256], and it is 0 to initialize all values,
Then then the color value of image B each pixel is indexed automatic+1 as the index of array:
Hist [color]=hist [color]+1;
Color is the color value of each pixel of pending image;
Computational methods with the pouch profile diagram and image C are:
A threshold value threshold is preset, between 32 to 200;
Whether in the pixel be black, if then setting the color of the pixel on image D if then judging the pouch profile diagram
It is worth for 255;Otherwise just then judge whether color values of the image B after mean filter on the pixel is less than threshold value
Threshold, if less than if, then sets the color value of the pixel on image D as 255, otherwise just by the pixel on image C
The color value of point is assigned to image D;
6) Gaussian Blur is carried out to image D;The method of Gaussian Blur herein is with the Gaussian Blur of mode in step 4), and it is obscured
Radius r scope is [2,8], obtains image E;
7) Gaussian Blur is carried out to the human face region and obtains a face Gauss map, the same step 4) of method of Gaussian Blur herein
The Gaussian Blur of middle mode, the scope of blur radius is [1,6];
8) described image E, human face region, face Gauss map are calculated, obtains result figure;The calculation procedure is as follows:
A) human face region and the face Gauss map are carried out as transparency according to the color value of image E this figure saturating
Lightness is mixed, and its formula is:
Alpha=colorE/255.0;
ColorResult=colorFace* alpha+ (1.0-alpha) * colorFaceGauss;
The result that wherein alpha is normalized for image E color value;ColorE is image E color value;
ColorResult is mixed result;ColorFace is the color value on the human face region;ColorFaceGauss is
Color value in the face Gauss map;
B) mixed value is then subjected to transparency blending further according to default transparency and human face region again and obtains result figure;
The formula of wherein transparency blending is:
ColorResultAll=colorResult* textureAlpha+ (1.0-textureAlpha) * colorFace;
Wherein colorResultAll is the value of the pixel in result figure;ColorResult is the end value that step a) is calculated;
TextureAlpha is default transparency, and scope is [0.2,0.8];ColorFace is the color value on the human face region.
2. one kind dispels the black-eyed method of image automatically according to claim 1, it is characterised in that:The step 1) middle tool
Have a Face datection step, when detecting face, obtain the regional location of face and perform step 2), otherwise terminate all steps
Suddenly.
3. one kind dispels the black-eyed method of image automatically according to claim 2, it is characterised in that:The Face datection step
There is an eyes detecting step after rapid:When detecting eyes, obtain the particular location of eyes and perform step 2), otherwise terminate
All steps.
4. one kind dispels the black-eyed method of image automatically according to claim 1, it is characterised in that:The step 5) in
Threshold value threshold is 128.
5. one kind dispels the black-eyed method of image automatically according to claim 1, it is characterised in that:The step 8) in
The transparency textureAlpha is preset as 0.5.
6. one kind dispels the black-eyed method of image automatically according to claim 1, it is characterised in that:The step 4) in
The blur radius r is 8.
7. one kind dispels the black-eyed method of image automatically according to claim 1, it is characterised in that:The step 6) in
The blur radius r is 5.
8. one kind dispels the black-eyed method of image automatically according to claim 1, it is characterised in that:The step 7) in it is high
This fuzzy described blur radius is 3.
9. one kind dispels the black-eyed method of image automatically according to any one of claim 1 to 8, it is characterised in that:Institute
State pouch profile drawing drawing method as follows:Using lower edges of two horizontal line sections up and down as the pouch profile diagram;Use again
The Bezier of two horizontal evaginations respectively connects the two ends of the section of two lines up and down.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310503624.0A CN104574285B (en) | 2013-10-23 | 2013-10-23 | Method for automatically removing dark under-eye circles from an image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104574285A CN104574285A (en) | 2015-04-29 |
CN104574285B true CN104574285B (en) | 2017-09-19 |
Family
ID=53090268
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310503624.0A Active CN104574285B (en) | 2013-10-23 | 2013-10-23 | Method for automatically removing dark under-eye circles from an image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104574285B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021142996A1 (en) * | 2020-01-13 | 2021-07-22 | 五邑大学 | Point cloud denoising method, system, and device employing image segmentation, and storage medium |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105023252A (en) * | 2015-07-14 | 2015-11-04 | 厦门美图网科技有限公司 | Method and system for enhancement processing of beautified image and shooting terminal |
CN105608722B (en) * | 2015-12-17 | 2018-08-31 | 成都品果科技有限公司 | It is a kind of that pouch method and system are gone based on face key point automatically |
CN106875332A (en) * | 2017-01-23 | 2017-06-20 | 深圳市金立通信设备有限公司 | A kind of image processing method and terminal |
CN107392841B (en) * | 2017-06-16 | 2020-04-24 | Oppo广东移动通信有限公司 | Method and device for eliminating black eye in face area and terminal |
CN107862673B (en) * | 2017-10-31 | 2021-08-24 | 北京小米移动软件有限公司 | Image processing method and device |
CN108665498B (en) * | 2018-05-15 | 2023-05-12 | 北京市商汤科技开发有限公司 | Image processing method, device, electronic equipment and storage medium |
CN109325924B (en) * | 2018-09-20 | 2020-12-04 | 广州酷狗计算机科技有限公司 | Image processing method, device, terminal and storage medium |
CN111047533B (en) * | 2019-12-10 | 2023-09-08 | 成都品果科技有限公司 | Beautifying method and device for face image |
CN111462003B (en) * | 2020-03-20 | 2022-08-23 | 稿定(厦门)科技有限公司 | Face image processing method, medium, device and apparatus |
CN113673270B (en) * | 2020-04-30 | 2024-01-26 | 北京达佳互联信息技术有限公司 | Image processing method and device, electronic equipment and storage medium |
CN111915478B (en) * | 2020-07-14 | 2023-06-23 | 厦门真景科技有限公司 | Beautifying method, device and equipment based on edge protection blurring and computer readable storage medium |
CN112348736B (en) * | 2020-10-12 | 2023-03-28 | 武汉斗鱼鱼乐网络科技有限公司 | Method, storage medium, device and system for removing black eye |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6681058B1 (en) * | 1999-04-15 | 2004-01-20 | Sarnoff Corporation | Method and apparatus for estimating feature values in a region of a sequence of images |
CN101404728A (en) * | 2008-11-12 | 2009-04-08 | 深圳市迅雷网络技术有限公司 | Digital image exposure regulation method and apparatus |
CN101916370A (en) * | 2010-08-31 | 2010-12-15 | 上海交通大学 | Method for processing non-feature regional images in face detection |
US8447132B1 (en) * | 2009-12-09 | 2013-05-21 | CSR Technology, Inc. | Dynamic range correction based on image content |
- 2013-10-23: application CN201310503624.0A filed in China; patent CN104574285B; status Active
Non-Patent Citations (3)
Title |
---|
"Personal photo enhancement using example images"; Neel Joshi et al.; ACM Transactions on Graphics; 31 March 2010; Vol. 29, No. 2; pp. 397-408 *
"Design and Implementation of an Image Beautification System Based on Face Recognition"; Ye Longbao; China Master's Theses Full-text Database, Information Science and Technology; 15 July 2012 (No. 07); pp. 1-50 *
"Face Beautification Algorithm Based on Iterative Multi-level Median Filtering"; Han Jingliang et al.; Computer Applications and Software; 31 May 2010; Vol. 27, No. 5; pp. 227-229 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104574285B (en) | Method for automatically removing dark under-eye circles from an image | |
CN103927719B (en) | Picture processing method and device | |
US8983152B2 (en) | Image masks for face-related selection and processing in images | |
CN109639982A (en) | A kind of image denoising method, device, storage medium and terminal | |
CN105243371B (en) | A kind of detection method, system and the camera terminal of face U.S. face degree | |
CN106326823B (en) | Method and system for obtaining head portrait in picture | |
KR20210149848A (en) | Skin quality detection method, skin quality classification method, skin quality detection device, electronic device and storage medium | |
CN103268475A (en) | Skin beautifying method based on face and skin color detection | |
WO2022063023A1 (en) | Video shooting method, video shooting apparatus, and electronic device | |
CN105046661B (en) | A kind of method, apparatus and intelligent terminal for lifting video U.S. face efficiency | |
CN103440633B (en) | A kind of digital picture dispels the method for spot automatically | |
KR20120070985A (en) | Virtual experience system based on facial feature and method therefore | |
CN104679242A (en) | Hand gesture segmentation method based on monocular vision complicated background | |
CN104599297A (en) | Image processing method for automatically blushing human face | |
CN106530309A (en) | Video matting method and system based on mobile platform | |
WO2022135574A1 (en) | Skin color detection method and apparatus, and mobile terminal and storage medium | |
WO2023056950A1 (en) | Image processing method and electronic device | |
CN110378847A (en) | Face image processing process, device, medium and electronic equipment | |
CN112036209A (en) | Portrait photo processing method and terminal | |
CN106097261A (en) | Image processing method and device | |
CN109447031A (en) | Image processing method, device, equipment and storage medium | |
CN109389076A (en) | Image partition method and device | |
CN107846555A (en) | Automatic shooting method, device, user terminal and computer-readable storage medium based on gesture identification | |
CN114187166A (en) | Image processing method, intelligent terminal and storage medium | |
CN108564537B (en) | Image processing method, image processing device, electronic equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||