CN103530900A - Three-dimensional face model modeling method, face tracking method and equipment - Google Patents

Three-dimensional face model modeling method, face tracking method and equipment

Info

Publication number
CN103530900A
Authority
CN
China
Prior art keywords
frame
expression
model
face
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201210231897.XA
Other languages
Chinese (zh)
Other versions
CN103530900B (en)
Inventor
沈晓璐
冯雪涛
张辉
金培亭
金智渊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Samsung Telecommunications Technology Research Co Ltd
Samsung Electronics Co Ltd
Original Assignee
Beijing Samsung Telecommunications Technology Research Co Ltd
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Samsung Telecommunications Technology Research Co Ltd, Samsung Electronics Co Ltd filed Critical Beijing Samsung Telecommunications Technology Research Co Ltd
Priority to CN201210231897.XA priority Critical patent/CN103530900B/en
Priority to KR1020130043463A priority patent/KR102024872B1/en
Priority to US13/936,001 priority patent/US20140009465A1/en
Publication of CN103530900A publication Critical patent/CN103530900A/en
Application granted granted Critical
Publication of CN103530900B publication Critical patent/CN103530900B/en
Expired - Fee Related
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 - Animation
    • G06T 13/20 - 3D [Three Dimensional] animation
    • G06T 13/40 - 3D animation of characters, e.g. humans, animals or virtual beings
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 - Finite element generation, e.g. wire-frame surface description, tesselation
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/60 - Type of objects
    • G06V 20/64 - Three-dimensional objects
    • G06V 20/653 - Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 - Detection; Localisation; Normalisation
    • G06V 40/167 - Detection; Localisation; Normalisation using comparisons between temporally consecutive images
    • G06V 40/174 - Facial expression recognition
    • G06V 40/176 - Dynamic expression

Abstract

Provided are a three-dimensional face model modeling method, a face tracking method, and corresponding equipment. The modeling method comprises: taking a preset standard three-dimensional face model as the working model and setting the specified start frame to the first frame; performing face tracking, based on the working model, from the specified start frame of a plurality of continuously input video frames, extracting face feature points, facial expression parameters and head pose parameters from the tracked frames, and generating, according to predetermined conditions, tracking results corresponding to a predetermined number of video frames; updating the working model based on the generated tracking results and, if upon updating it is determined that the difference between the appearance parameters of the updated working model and those of the working model before the update is not smaller than a predetermined limit value, continuing face tracking on subsequent video frames until all video frames have been tracked; and outputting the working model.

Description

Modeling method of three-dimensional face model, face tracking method, and equipment
Technical field
The present application relates to a modeling method for a three-dimensional face model, a face tracking method, and equipment for carrying out the modeling method and the face tracking method. In particular, it relates to performing face tracking and three-dimensional face modeling concurrently on continuously input video frames of a face, thereby providing a three-dimensional face model that more closely approximates the user's face and/or outputting high-precision facial expression information.
Background art
Taking continuous video as input, existing face tracking/modeling techniques can output results of differing complexity, for example: expression parameter categories and their intensities, two-dimensional face shape, low-resolution three-dimensional face shape, or high-resolution three-dimensional face shape.
Existing face tracking/modeling techniques fall broadly into three classes: recognition-based techniques, fitting-based techniques, and reconstruction-based techniques.
However, existing face tracking/modeling techniques suffer from the following problems. First, some techniques depend on additional equipment such as binocular cameras or depth cameras. Second, most techniques require the user's cooperation, such as manually marking key points for user registration, or keeping a neutral or fixed expression before use or during modeling. In addition, some techniques cannot output high-resolution tracking results and cannot avoid the influence of appearance and pose on the precision of the expression parameters.
Summary of the invention
An object of the present invention is to provide a modeling method for a personalized three-dimensional face model that performs face tracking iteratively on continuously input video frames of a face and updates the three-dimensional face model based on the face tracking results, thereby providing a three-dimensional face model that more closely approximates the user's face.
Another object of the present invention is to provide a face tracking method that performs face tracking iteratively on continuously input video frames of a face and updates the three-dimensional face model based on the tracking results, thereby outputting high-precision facial expression information.
According to an aspect of the present invention, a modeling method for a three-dimensional face model is provided, the three-dimensional face model comprising information on three-dimensional shape, appearance parameters, expression parameters and head pose. Taking a preset standard three-dimensional face model as the working model and setting the specified start frame to the first frame, the modeling method performs the following operations: a) performing face tracking, based on the working model, from the specified start frame of a plurality of continuously input video frames; extracting face feature points, expression parameters and head pose parameters from the tracked video frames; and generating, according to predetermined conditions, tracking results corresponding to a predetermined number of video frames, the tracking results comprising each tracked video frame together with the face feature points, expression parameters and head pose parameters extracted from it; b) updating the working model based on the generated tracking results, wherein, if upon updating the working model it is determined that the difference between the appearance parameters of the updated working model and the appearance parameters before the update is not smaller than a predetermined limit value, and the video frame following the predetermined number of video frames is not the last of the continuously input video frames, taking the first video frame after the predetermined number of video frames as the specified start frame and performing step a); and c) outputting the working model.
The three-dimensional face model can be represented as
$S(a, e, q) = T\left(\sum_i a_i S_i^a + \sum_j e_j S_j^e;\, q\right)$,
where S is the three-dimensional shape, a is the appearance component, e is the expression component, q is the head pose, and T(S; q) denotes the function that rotates and/or translates the three-dimensional shape S according to the head pose q.
The standard three-dimensional face model can comprise: an average shape $S_0$, appearance components $S_i^a$ (i = 1:N), expression components $S_j^e$ (j = 1:M), and a standard head pose $q_0$, where each $S_i^a$ represents one mode of variation of face appearance and each $S_j^e$ represents one mode of variation of facial expression.
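For illustration only, a minimal numpy sketch of evaluating such a shape model follows; the (V, 3) vertex-array layout, the six-parameter Euler-angle pose q = (rx, ry, rz, tx, ty, tz), and the explicit addition of the average shape $S_0$ are assumptions made here for concreteness, not details fixed by the present description.

```python
import numpy as np

def euler_to_matrix(rx, ry, rz):
    """Rotation matrix from Euler angles in radians (Z*Y*X convention)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def model_shape(S0, Sa, Se, a, e, q):
    """Evaluate S(a, e, q) = T(S0 + sum_i a_i S_i^a + sum_j e_j S_j^e; q).

    S0: (V, 3) mean shape; Sa: (N, V, 3) appearance components;
    Se: (M, V, 3) expression components; a: (N,) appearance coefficients;
    e: (M,) expression coefficients; q: (rx, ry, rz, tx, ty, tz) head pose.
    """
    shape = S0 + np.tensordot(a, Sa, axes=1) + np.tensordot(e, Se, axes=1)
    R = euler_to_matrix(*q[:3])
    return shape @ R.T + np.asarray(q[3:])  # T(S; q): rotate, then translate
```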
The standard three-dimensional face model can be trained offline in advance using a series of three-dimensional face data.
Step a) and step b) can be performed in parallel.
The processing of updating the working model based on the generated tracking results in step b) can comprise: selecting, from the generated tracking results, the video frame closest to a neutral expression as the neutral expression frame; extracting the face outline from the neutral expression frame according to the face feature points in the neutral expression frame; and updating the working model based on the feature points in the neutral expression frame and the extracted face outline.
In the processing of selecting the neutral expression frame, the neutral expression frame can be selected, from the tracking results corresponding to the predetermined number T of video frames, as follows: computing expression parameters $e_t = (e_t^1, \ldots, e_t^K)$ for each tracked video frame t, where K is the number of kinds of expression parameters; taking the most frequently occurring value $\tilde{e}^k$ of each kind of expression parameter as the neutral expression value; and selecting as the neutral expression frame a video frame for which the deviations of all K expression parameters from the corresponding neutral expression values are all smaller than the respective predetermined thresholds.
An active contour model algorithm can be used to extract the face outline from the neutral expression frame.
The processing of updating the working model based on the feature points in the neutral expression frame and the extracted face outline can comprise: updating the head pose q of the working model to the head pose of the neutral expression frame; setting the expression component e of the working model to 0; and correcting the appearance component a of the working model by matching the working model S(a, e, q) to the feature point positions of the neutral expression frame and matching the face outline computed from the working model S(a, e, q) to the face outline extracted from the neutral expression frame.
A plurality of modeling units can be used to carry out the processing of step b) for different iterations alternately, with the processing of the different iterations merged.
In step a), the generated tracking results can also be output through an input/output interface.
In step b), when the updating of the working model is completed, the updated working model can also be output through the input/output interface.
In step a), the predetermined number can be determined according to the input rate of the plurality of continuously input video frames, the noise quality, or the accuracy requirement of the tracking, and the predetermined number can be constant or variable.
In step a), one of the following methods can be used to obtain the face feature points, expression parameters and head pose parameters based on the working model: the Active Appearance Model (AAM) method, the Active Shape Model (ASM) method, or the Composite Constraint AAM method.
If, in step a), the face tracking processing returns a failure result, the appearance component a of the working model can be reset to 0 and the method can return to the beginning of step a).
According to another aspect of the present invention, a face tracking method based on a three-dimensional face model is provided, the three-dimensional face model comprising information on three-dimensional shape, appearance parameters, expression parameters and head pose. Taking a preset standard three-dimensional face model as the working model, setting a modeling flag to 1, and setting the specified start frame to the first frame, the face tracking method performs the following operations: a) performing face tracking, based on the working model, from the specified start frame of a plurality of continuously input video frames; extracting face feature points, expression parameters and head pose parameters from the tracked video frames; generating, according to predetermined conditions, tracking results corresponding to a predetermined number of video frames, the tracking results comprising each tracked video frame together with the face feature points, expression parameters and head pose parameters extracted from it; and outputting the tracking results; b) if the modeling flag is 1, updating the working model based on the generated tracking results, wherein, upon updating the working model, if it is determined that the difference between the new appearance parameters and the appearance parameters before the update is smaller than the predetermined limit value, setting the modeling flag to 0; and c) if the video frame following the predetermined number of video frames is not the last of the continuously input video frames, taking the first video frame after the predetermined number of video frames as the specified start frame and performing step a).
The three-dimensional face model can be represented as $S(a, e, q) = T\left(\sum_i a_i S_i^a + \sum_j e_j S_j^e;\, q\right)$, where S is the three-dimensional shape, a is the appearance component, e is the expression component, q is the head pose, and T(S; q) denotes the function that rotates and/or translates the three-dimensional shape S according to the head pose q.
The standard three-dimensional face model can comprise: an average shape $S_0$, appearance components $S_i^a$ (i = 1:N), expression components $S_j^e$ (j = 1:M), and a standard head pose $q_0$, where each $S_i^a$ represents one mode of variation of face appearance and each $S_j^e$ represents one mode of variation of facial expression.
The standard three-dimensional face model can be trained offline in advance using a series of three-dimensional face data.
Step a) and step b) can be performed in parallel.
The processing of updating the working model based on the generated tracking results in step b) can comprise: selecting, from the generated tracking results, the video frame closest to a neutral expression as the neutral expression frame; extracting the face outline from the neutral expression frame according to the face feature points in the neutral expression frame; and updating the working model based on the feature points in the neutral expression frame and the extracted face outline.
In the processing of selecting the neutral expression frame, the neutral expression frame can be selected, from the tracking results corresponding to the predetermined number T of video frames, as follows: computing expression parameters $e_t = (e_t^1, \ldots, e_t^K)$ for each tracked video frame t, where K is the number of kinds of expression parameters; taking the most frequently occurring value $\tilde{e}^k$ of each kind of expression parameter as the neutral expression value; and selecting as the neutral expression frame a video frame for which the deviations of all K expression parameters from the corresponding neutral expression values are all smaller than the respective predetermined thresholds.
An active contour model algorithm can be used to extract the face outline from the neutral expression frame.
The processing of updating the working model based on the feature points in the neutral expression frame and the extracted face outline can comprise: updating the head pose q of the working model to the head pose of the neutral expression frame; setting the expression component e of the working model to 0; and correcting the appearance component a of the working model by matching the working model S(a, e, q) to the feature point positions of the neutral expression frame and matching the face outline computed from the working model S(a, e, q) to the face outline extracted from the neutral expression frame.
A plurality of modeling units can be used to carry out the processing of step b) for different iterations alternately, with the processing of the different iterations merged.
After the execution of step c) is completed, the working model can also be output.
In step a), the predetermined number can be determined according to the input rate of the plurality of continuously input video frames, the noise quality, or the accuracy requirement of the tracking, and the predetermined number is constant or variable.
In step a), one of the following methods can be used to obtain the face feature points, expression parameters and head pose parameters based on the working model: the Active Appearance Model (AAM) method, the Active Shape Model (ASM) method, or the Composite Constraint AAM method.
If, in step a), the face tracking processing returns a failure result, the appearance component a of the working model can be reset to 0 and the method can return to the beginning of step a).
According to another aspect of the present invention, a modeling apparatus for a three-dimensional face model is provided, the three-dimensional face model comprising information on three-dimensional shape, appearance parameters, expression parameters and head pose. The modeling apparatus comprises: a first module for taking a preset standard three-dimensional face model as the working model, setting the specified start frame to the first frame, and controlling a second module to process a plurality of continuously input video frames; the second module, for performing face tracking based on the working model from the specified start frame of the plurality of continuously input video frames, extracting face feature points, expression parameters and head pose parameters from the tracked video frames, and generating, according to predetermined conditions, tracking results corresponding to a predetermined number of video frames, the tracking results comprising each tracked video frame together with the face feature points, expression parameters and head pose parameters extracted from it; a third module for updating the working model based on the generated tracking results, wherein, if upon updating the working model it is determined that the difference between the new appearance parameters and the appearance parameters before the update is not smaller than the predetermined limit value, and the video frame following the predetermined number of video frames is not the last of the continuously input video frames, the third module takes the first video frame after the predetermined number of video frames as the specified start frame and controls the second module to continue processing the continuously input video frames; and a fourth module for outputting the working model.
The three-dimensional face model can be represented as $S(a, e, q) = T\left(\sum_i a_i S_i^a + \sum_j e_j S_j^e;\, q\right)$, where S is the three-dimensional shape, a is the appearance component, e is the expression component, q is the head pose, and T(S; q) denotes the function that rotates and/or translates the three-dimensional shape S according to the head pose q.
The standard three-dimensional face model can comprise: an average shape $S_0$, appearance components $S_i^a$ (i = 1:N), expression components $S_j^e$ (j = 1:M), and a standard head pose $q_0$, where each $S_i^a$ represents one mode of variation of face appearance and each $S_j^e$ represents one mode of variation of facial expression.
The modeling apparatus can also comprise: a training module for training, offline in advance, the standard three-dimensional face model using a series of three-dimensional face data.
The second module and the third module can operate in parallel.
The third module can comprise: a fifth module for selecting, from the generated tracking results, the video frame closest to a neutral expression as the neutral expression frame; a sixth module for extracting the face outline from the neutral expression frame according to the face feature points in the neutral expression frame; and a seventh module for updating the working model based on the feature points in the neutral expression frame and the extracted face outline.
The fifth module can select the neutral expression frame as follows: computing expression parameters $e_t = (e_t^1, \ldots, e_t^K)$ for each tracked video frame t, where K is the number of kinds of expression parameters; taking the most frequently occurring value $\tilde{e}^k$ of each kind of expression parameter as the neutral expression value; and selecting as the neutral expression frame a video frame for which the deviations of all K expression parameters from the corresponding neutral expression values are all smaller than the respective predetermined thresholds.
The sixth module can use an active contour model algorithm to extract the face outline from the neutral expression frame.
The processing by which the seventh module updates the working model based on the feature points in the neutral expression frame and the extracted face outline can comprise: updating the head pose q of the working model to the head pose of the neutral expression frame; setting the expression component e of the working model to 0; and correcting the appearance component a of the working model by matching the working model S(a, e, q) to the feature point positions of the neutral expression frame and matching the face outline computed from the working model S(a, e, q) to the face outline extracted from the neutral expression frame.
The third module can comprise a plurality of modeling units that alternately carry out the modeling processing of different iterations, with the third module merging the modeling processing of the different iterations.
The second module can also output the generated tracking results.
When the third module completes the updating of the working model, it can also output the updated working model through the input/output interface.
The second module can determine the predetermined number according to the input rate of the plurality of continuously input video frames, the noise quality, or the accuracy requirement of the tracking, and the predetermined number can be constant or variable.
The second module can use one of the following methods to obtain the face feature points, expression parameters and head pose parameters based on the working model: the Active Appearance Model (AAM) method, the Active Shape Model (ASM) method, or the Composite Constraint AAM method.
If the face tracking processing performed by the second module returns a failure result, the appearance component a of the working model can be reset to 0 and face tracking can be restarted from the specified start frame of the plurality of continuously input video frames.
According to another aspect of the present invention, face tracking equipment based on a three-dimensional face model is provided, the three-dimensional face model comprising information on three-dimensional shape, appearance parameters, expression parameters and head pose. The face tracking equipment comprises: a first module for taking a preset standard three-dimensional face model as the working model, setting a modeling flag to 1, and setting the specified start frame to the first frame; a second module for performing face tracking based on the working model from the specified start frame of a plurality of continuously input video frames, extracting face feature points, expression parameters and head pose parameters from the tracked video frames, generating, according to predetermined conditions, tracking results corresponding to a predetermined number of video frames, the tracking results comprising each tracked video frame together with the face feature points, expression parameters and head pose parameters extracted from it, and outputting the tracking results; a third module configured, if the modeling flag is 1, to update the working model based on the generated tracking results, wherein, upon updating the working model, if it is determined that the difference between the new appearance parameters and the appearance parameters before the update is smaller than the predetermined limit value, the modeling flag is set to 0; and a fourth module configured, if the video frame following the predetermined number of video frames is not the last of the continuously input video frames, to take the first video frame after the predetermined number of video frames as the specified start frame and control the second module to continue processing the continuously input video frames.
The three-dimensional face model can be represented as $S(a, e, q) = T\left(\sum_i a_i S_i^a + \sum_j e_j S_j^e;\, q\right)$, where S is the three-dimensional shape, a is the appearance component, e is the expression component, q is the head pose, and T(S; q) denotes the function that rotates and/or translates the three-dimensional shape S according to the head pose q.
The standard three-dimensional face model can comprise: an average shape $S_0$, appearance components $S_i^a$ (i = 1:N), expression components $S_j^e$ (j = 1:M), and a standard head pose $q_0$, where each $S_i^a$ represents one mode of variation of face appearance and each $S_j^e$ represents one mode of variation of facial expression.
The face tracking equipment can also comprise: a training module for training, offline in advance, the standard three-dimensional face model using a series of three-dimensional face data.
The second module and the third module can operate in parallel.
The third module can comprise: a fifth module for selecting, from the generated tracking results, the video frame closest to a neutral expression as the neutral expression frame; a sixth module for extracting the face outline from the neutral expression frame according to the face feature points in the neutral expression frame; and a seventh module for updating the working model based on the feature points in the neutral expression frame and the extracted face outline.
The fifth module can select the neutral expression frame as follows: computing expression parameters $e_t = (e_t^1, \ldots, e_t^K)$ for each tracked video frame t, where K is the number of kinds of expression parameters; taking the most frequently occurring value $\tilde{e}^k$ of each kind of expression parameter as the neutral expression value; and selecting as the neutral expression frame a video frame for which the deviations of all K expression parameters from the corresponding neutral expression values are all smaller than the respective predetermined thresholds.
The sixth module can use an active contour model algorithm to extract the face outline from the neutral expression frame.
The processing by which the seventh module updates the working model based on the feature points in the neutral expression frame and the extracted face outline can comprise: updating the head pose q of the working model to the head pose of the neutral expression frame; setting the expression component e of the working model to 0; and correcting the appearance component a of the working model by matching the working model S(a, e, q) to the feature point positions of the neutral expression frame and matching the face outline computed from the working model S(a, e, q) to the face outline extracted from the neutral expression frame.
The third module can comprise a plurality of modeling units that alternately carry out the modeling processing of different iterations, with the third module merging the modeling processing of the different iterations.
When the third module completes the updating of the working model, it can also output the updated working model through the input/output interface.
The second module can determine the predetermined number according to the input rate of the plurality of continuously input video frames, the noise quality, or the accuracy requirement of the tracking, and the predetermined number is constant or variable.
The second module can use one of the following methods to obtain the face feature points, expression parameters and head pose parameters based on the working model: the Active Appearance Model (AAM) method, the Active Shape Model (ASM) method, or the Composite Constraint AAM method.
If the face tracking processing performed by the second module returns a failure result, the face tracking equipment can reset the appearance component a of the working model to 0 and restart face tracking from the specified start frame of the plurality of continuously input video frames.
Brief description of the drawings
The above and other objects and features of the present invention will become clearer from the following description taken in conjunction with the accompanying drawings, in which:
Fig. 1A is a flowchart illustrating a modeling method for a three-dimensional face model according to an exemplary embodiment of the present invention;
Fig. 1B is a flowchart illustrating the processing of updating the working model in the modeling method according to an exemplary embodiment of the present invention;
Fig. 1C is a flowchart illustrating a face tracking method according to an exemplary embodiment of the present invention;
Fig. 2 schematically illustrates a three-dimensional face model built from a generic face;
Fig. 3 exemplarily illustrates extracting the face outline from a video frame using feature points;
Fig. 4 exemplarily illustrates feature point matching and outline matching between the three-dimensional face model and the face outline;
Fig. 5A and Fig. 5B are schematic diagrams illustrating a modeling apparatus/face tracking equipment according to exemplary embodiments of the present invention.
Detailed description of embodiments
Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings.
The modeling method for a three-dimensional face model and the face tracking method of the present invention can be implemented on a general-purpose computer or a dedicated processor. The methods take continuously input video frames of a face (covering a period of time, within a few minutes) as input, use a preset high-precision standard three-dimensional face model as the working model, perform face tracking on the input video frames based on the working model, and then update/correct the working model using the tracking results of a predetermined number of frames. After the working model is updated/corrected, face tracking continues on the subsequent video frames, until a three-dimensional face model reaching the preset limit value is obtained or the face tracking and the updating/correction of the working model have been completed for all input video frames. On demand, face tracking results containing fine expression and head pose information are output during this process, and/or the resulting personalized three-dimensional face model is output afterwards.
The continuously input video frames can be video frames extracted and processed from a digital video stream captured by a single ordinary digital camera, or can be a series of photographs taken continuously with a digital camera. The continuously input video frames can be input, through an input/output interface, to the general-purpose computer or dedicated processor implementing the modeling method and face tracking method of the present invention.
Fig. 2 schematically illustrates a three-dimensional face model built from an arbitrary face. The three-dimensional face model includes, but is not limited to, information on the three-dimensional shape, appearance parameters, expression parameters and head pose of the face. In the present invention, the three-dimensional face model is expressed as
$S(a, e, q) = T\left(\sum_i a_i S_i^a + \sum_j e_j S_j^e;\, q\right)$,
where S is the three-dimensional shape, a is the appearance component, e is the expression component, q is the head pose, and T(S; q) denotes the function that rotates and/or translates the three-dimensional shape S according to the head pose q.
According to an exemplary embodiment of the present invention, the standard three-dimensional face model is trained offline in advance using high-density face surface data of different expressions and poses. According to other embodiments of the invention, the standard three-dimensional face model can also be obtained using other existing methods, or, as required, a three-dimensional face model with standard facial features can be defined as the standard three-dimensional face model.
The standard three-dimensional face model comprises:
Average shape $S_0$: the mean of all training samples.
Appearance components $S_i^a$ (i = 1:N): each component expresses one mode of variation of the face in appearance.
Expression components $S_j^e$ (j = 1:M): each component expresses one mode of variation of the face in expression.
Standard head pose $q_0$: the standard three-dimensional position and rotation angles describing the face.
Fig. 1A is a flowchart illustrating the modeling method for a three-dimensional face model according to an exemplary embodiment of the present invention.
Referring to Fig. 1A, when the modeling method for a three-dimensional face model of the present invention starts, at step S110 the preset standard three-dimensional face model is taken as the working model, and the specified start frame is set to the first of the continuously input video frames. The standard three-dimensional face model is a three-dimensional face model trained in advance using a series of face data of various expressions and poses.
At step S120, face tracking is performed, based on the working model, from the specified start frame of the plurality of continuously input video frames; face feature points, expression parameters and head pose parameters are extracted from the tracked video frames; and tracking results corresponding to a predetermined number of video frames are generated according to predetermined conditions, the tracking results comprising each tracked video frame together with the face feature points, expression parameters and head pose parameters extracted from it. According to an exemplary embodiment of the present invention, the predetermined number is determined according to the input rate of the continuously input video frames, the noise quality, or the accuracy requirement of the tracking. In addition, the predetermined number can be constant or variable.
According to an exemplary embodiment of the present invention, at step S120 the generated tracking results are also output through the input/output interface.
According to an exemplary embodiment of the present invention, at step S120 one of the following methods is used to obtain the face feature points, expression parameters and head pose parameters based on the working model: the Active Appearance Model (AAM) method, the Active Shape Model (ASM) method, or the Composite Constraint AAM method.
Thereafter, at step S130, the working model is updated based on the tracking results generated at step S120. The concrete processing of updating the working model is described in detail later with reference to Fig. 1B.
According to an exemplary embodiment of the present invention, at step S130, when the updating of the working model is completed, the updated working model is also output through the input/output interface.
In addition, at step S140, if upon updating the working model it is determined that the difference between the appearance parameters of the updated working model and the appearance parameters of the working model before the update is not smaller than the predetermined limit value, and the video frame following the predetermined number of video frames is not the last of the continuously input video frames, then at step S150 the first video frame after the predetermined number of video frames is taken as the specified start frame, and the method returns to step S120 to continue face tracking from the specified start frame based on the updated working model.
If upon updating the working model it is determined that the difference between the appearance parameters of the updated working model and the appearance parameters of the working model before the update is smaller than the predetermined limit value, or the video frame following the predetermined number of video frames is the last of the continuously input video frames, step S160 is performed. That is to say, updating of the working model can be stopped when it is determined that the best three-dimensional face model satisfying the predetermined condition has been built, or when the processing of all video frames has been completed.
At step S160, the updated working model is output as the personalized three-dimensional face model.
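The control flow of steps S110 to S160 can be summarized by the following sketch; the helper names track_batch (step S120), update_model (step S130) and appearance_difference (the comparison of step S140) are hypothetical placeholders, not functions defined by the present description.

```python
def build_personalized_model(frames, standard_model, batch_size, limit):
    working = standard_model                 # S110: preset standard model as working model
    start = 0                                # specified start frame = first frame
    while start < len(frames):
        batch = frames[start:start + batch_size]       # predetermined number of frames
        results = track_batch(working, batch)          # S120: feature points, expression, pose
        updated = update_model(working, results)       # S130: update from the tracking results
        converged = appearance_difference(updated, working) < limit
        working = updated
        if converged:                        # S140: best model satisfying the condition
            break                            # stop updating (modeling method of Fig. 1A)
        start += batch_size                  # S150: next specified start frame
    return working                           # S160: output the personalized model
```

In the face tracking method of Fig. 1C, described below, the loop would instead keep tracking after convergence and merely stop calling the model update.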
Fig. 1B exemplarily illustrates the processing carried out in step S130 of Fig. 1A.
Referring to Fig. 1B, at step S132 the video frame closest to a neutral expression among the tracking results generated at step S120 is selected as the neutral expression frame. According to a preferred embodiment of the invention, in the processing of step S132, assuming the tracking results correspond to the predetermined number T of video frames, expression parameters $e_t = (e_t^1, \ldots, e_t^K)$ are computed for each tracked video frame t, where K is the number of kinds of expression parameters; the most frequently occurring value $\tilde{e}^k$ of each kind of expression parameter is taken as the neutral expression value; then, a video frame for which the deviations of all K expression parameters from the corresponding neutral expression values are all smaller than the respective predetermined thresholds is selected as the neutral expression frame.
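A minimal sketch of this selection rule follows, assuming the expression parameters are stored as a (T, K) array and are quantized so that a most-frequent value is well defined; the array layout and the tie-breaking rule among qualifying frames are assumptions made for illustration.

```python
import numpy as np

def select_neutral_frame(expr, thresholds):
    """expr: (T, K) expression parameters of the T tracked frames;
    thresholds: (K,) per-parameter deviation limits.
    Returns the index of a frame whose K parameters all lie near the modes."""
    T, K = expr.shape
    neutral = np.empty(K)
    for k in range(K):
        values, counts = np.unique(expr[:, k], return_counts=True)
        neutral[k] = values[np.argmax(counts)]   # most frequent value = neutral value
    ok = np.all(np.abs(expr - neutral) < thresholds, axis=1)
    candidates = np.nonzero(ok)[0]
    if candidates.size == 0:
        return None                              # no frame qualifies as neutral
    # among qualifying frames, pick the one closest to neutral overall
    return candidates[np.argmin(np.abs(expr[candidates] - neutral).sum(axis=1))]
```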
At step S135, the face outline is extracted from the neutral expression frame according to the face feature points in the neutral expression frame. From step S120, information including the face feature points, expression parameters and head pose parameters has already been extracted for each tracked video frame. According to an exemplary embodiment of the present invention, an active contour model algorithm is used to extract the face outline from the neutral expression frame selected at step S132.
Figs. 3A to 3C exemplarily illustrate the processing of extracting the face outline from a video frame using the face feature points. According to an exemplary embodiment of the present invention, when the outline is extracted from the video frame of Fig. 3A, the feature points of the video frame are taken as reference (Fig. 3B), and an active contour model algorithm is used to extract the face outline from the video frame, as shown in Fig. 3C. The face outline of the neutral expression frame can be extracted similarly in the present invention.
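As one concrete possibility, scikit-image's active_contour can play the role of the active contour model algorithm; initializing the snake by densifying the boundary feature points into a polyline, as below, is an assumption of this sketch rather than the procedure of the present description.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def extract_face_outline(frame_rgb, feature_points, n_snake_points=200):
    """frame_rgb: (H, W, 3) image; feature_points: (P, 2) (row, col) landmarks
    along the face boundary, used to initialize the snake."""
    gray = gaussian(rgb2gray(frame_rgb), sigma=3)
    # densify the landmark polyline into an initial snake
    t = np.linspace(0, 1, n_snake_points)
    s = np.linspace(0, 1, len(feature_points))
    init = np.column_stack([np.interp(t, s, feature_points[:, 0]),
                            np.interp(t, s, feature_points[:, 1])])
    return active_contour(gray, init, alpha=0.015, beta=10, gamma=0.001)
```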
Thereafter, at step S138, the working model is updated based on the feature points in the neutral expression frame and the extracted face outline. Specifically, the head pose q of the working model is updated to the head pose of the neutral expression frame; the expression component e of the working model is set to 0; and the appearance component a of the working model is corrected by matching the working model S(a, e, q) to the feature point positions of the neutral expression frame and matching the face outline computed from the working model S(a, e, q) to the face outline extracted from the neutral expression frame. Fig. 4B illustrates adjusting the working model so that its feature points coincide with the feature points of the neutral expression frame of Fig. 4A; Fig. 4D illustrates the processing of adjusting the working model, correcting its appearance parameters, so that it coincides as closely as possible with the extracted face outline.
In the process of correcting the appearance parameters expressed by the appearance components, the values of the appearance parameters before the correction and after the correction are recorded, to serve as the basis for the comparison of step S140.
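To make the feature-point part of this correction concrete, the sketch below fits the appearance component a by ridge-regularized linear least squares under an assumed scaled-orthographic projection. The projection model, the pose layout q = (rx, ry, rz, tx, ty, s) with scale s, the given landmark-to-vertex correspondence, and the omission of the outline-matching term are all simplifications made for illustration; it reuses the euler_to_matrix helper from the earlier sketch.

```python
import numpy as np
# reuses euler_to_matrix from the earlier shape-model sketch

def fit_appearance(S0, Sa, q, landmarks_2d, vertex_ids, reg=1e-3):
    """S0: (V, 3) mean shape; Sa: (N, V, 3) appearance components;
    q: (rx, ry, rz, tx, ty, s) pose with scale s; landmarks_2d: (P, 2);
    vertex_ids: (P,) indices of the model vertices matched to the landmarks."""
    R2 = euler_to_matrix(*q[:3])[:2]               # scaled-orthographic projection rows
    s, t = q[5], np.asarray(q[3:5])
    base = s * (S0[vertex_ids] @ R2.T) + t         # projected mean shape (e = 0)
    # linear effect of each appearance component on the projected landmarks
    A = np.stack([s * (Sa[i][vertex_ids] @ R2.T) for i in range(len(Sa))])
    A = A.reshape(len(Sa), -1).T                   # (2P, N) system matrix
    b = (landmarks_2d - base).ravel()
    # ridge regularization keeps a near 0 where the landmark evidence is weak
    return np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ b)
```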
Steps S120 to S150 of Fig. 1A perform face tracking and model updating iteratively on the continuously input video frames. According to an exemplary embodiment of the present invention, the processing of steps S120 and S130 is performed in parallel. By performing face tracking with the current working model to extract face feature points, expression parameters and head pose parameters, and further updating the working model using the extracted face feature points and head pose parameters together with the corresponding video frames, a personalized face model closer to the user's can be obtained. At the same time, the face tracking result of each input video frame, including the expression parameters, appearance parameters and head pose, can also be output.
Fig. 1C illustrates the processing of the face tracking method according to an exemplary embodiment of the present invention.
Since the main purpose of the face tracking method is to output face tracking results, in Fig. 1C, once the best model satisfying the predetermined condition is obtained, the working model is no longer updated, but face tracking of the subsequent video frames continues.
Referring to Fig. 1C, when the face tracking method of the present invention starts, at step S110' the preset standard three-dimensional face model is taken as the working model, the specified start frame is set to the first of the continuously input video frames, and the variable indicating whether updating of the working model should continue (for example, a modeling flag) is set to 1 (or "yes"). The standard three-dimensional face model is a three-dimensional face model trained in advance using a series of face data of various expressions and poses.
The processing of step S120 in Fig. 1C is essentially the same as that in Fig. 1A, but in Fig. 1C, after the face tracking of the predetermined number of video frames is completed, steps S125 and S128 are performed. At step S125, the tracking result of each tracked video frame, including the expression parameters, appearance parameters and head pose, is output.
At step S128, it is determined whether updating of the working model should continue (whether the modeling flag is 1). If the modeling flag is 1, step S130 is performed. The processing of step S130 is identical to the processing of step S130 described for Fig. 1A.
At step S140', if upon updating the working model it is determined that the difference between the appearance parameters of the updated working model and the appearance parameters of the working model before the update is not smaller than the predetermined limit value, then at step S150 the first video frame after the predetermined number of video frames is taken as the specified start frame, and the method returns to step S120 to continue face tracking from the specified start frame based on the updated working model.
On the other hand, if it is determined that the difference between the appearance parameters of the updated working model and the appearance parameters of the working model before the update is smaller than or equal to the predetermined limit value, then at step S145 the modeling flag indicating whether updating of the working model should continue is set to 0 (or "no"). That is to say, it is determined at this point, according to the present invention, that a three-dimensional face model sufficiently close to the user has been built. To reduce the amount of computation, the equipment of the present invention is instructed not to update the working model any further.
Thereafter, at step S148, it is determined whether the video frame following the predetermined number of video frames is the last of the continuously input video frames. If it is not the last, step S150 is performed: the first video frame after the predetermined number of video frames is taken as the specified start frame, and the method returns to step S120.
If the video frame following the predetermined number of video frames is the last of the continuously input video frames, the processing of the face tracking method of the present invention ends.
According to an exemplary embodiment of the present invention, before the processing of the face tracking method of the present invention ends, the working model of the final update is output.
As can be seen, by performing face tracking with the current working model to extract face feature points, expression parameters and head pose parameters, and further updating the working model using the extracted face feature points and head pose parameters together with the corresponding video frames, face tracking can be performed iteratively based on a personalized face model ever closer to the user's, and each increasingly accurate face tracking result, including the expression parameters, appearance parameters and head pose, can be output.
Fig. 5A illustrates the schematic structure of equipment implementing the modeling method for a three-dimensional face model and/or the face tracking method according to an exemplary embodiment of the present invention.
The equipment comprises a tracking unit and a modeling unit. The tracking unit performs steps S110 to S120 of Fig. 1A, or steps S110' to S125 of Fig. 1C, and the modeling unit performs steps S130 to S150 of Fig. 1A or Fig. 1C.
Referring to Fig. 5A, the tracking unit performs face tracking on video frame 0 to video frame $t_2 - 1$ using the working model (initially the standard three-dimensional face model $M_0$), and outputs tracking results comprising video frame 0 to video frame $t_2 - 1$ and the face feature points, expression parameters and head pose parameters extracted from each video frame (result 0 to result $t_2 - 1$ in Fig. 5A). The tracking results are provided to the modeling unit and, as required, can be output to the user through the input/output interface.
The modeling unit updates the working model based on the tracking results output by the tracking unit (result 0 to result $t_2 - 1$); the updating of the working model has been described in detail above with reference to Fig. 1A and Fig. 1B. For convenience of description, the working model after this update is called $M_1$.
Thereafter, according to the predetermined rule, the tracking unit performs face tracking on video frames $t_2 - 1$ to $t_3$ based on the updated working model $M_1$ (as described above with reference to Fig. 1A), and outputs the tracking results (result $t_2 - 1$ to result $t_3$). The modeling unit updates the working model $M_1$ based on result $t_2 - 1$ to result $t_3$. Face tracking and working model updating are carried out iteratively in this way, until it is determined that a qualified best model has been obtained or the processing of all input video frames has been completed. The tracking unit and the modeling unit can operate in parallel.
According to a preferred embodiment of the invention, the equipment can comprise a training unit for training, offline in advance, the standard three-dimensional face model used as working model $M_0$, using a series of three-dimensional face data.
Fig. 5B illustrates the schematic structure of equipment implementing the modeling method for a three-dimensional face model and/or the face tracking method according to another exemplary embodiment of the present invention.
Unlike the equipment of Fig. 5A, the equipment of Fig. 5B comprises a plurality of modeling units (for example, modeling unit A and modeling unit B); the plurality of modeling units alternately carry out the processing of step S130 for different iterations, and the processing of the different iterations is merged.
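One possible reading of this arrangement is sketched below, with a two-worker pool standing in for modeling units A and B and successive iterations overlapped; treating "merging" as a user-supplied merge function over model parameters is purely an assumption of this sketch, not a procedure given by the present description.

```python
from concurrent.futures import ThreadPoolExecutor

def alternating_update(working, result_batches, update_model, merge):
    """result_batches: tracking results per iteration; update_model and merge
    are caller-supplied (hypothetical) model-update and model-merge functions."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        pending = None
        for results in result_batches:              # one iteration of step S130 each
            future = pool.submit(update_model, working, results)
            if pending is not None:                 # merge the earlier iteration's
                working = merge(working, pending.result())  # finished update
            pending = future                        # units A/B alternate via the pool
        if pending is not None:
            working = merge(working, pending.result())
    return working
```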
While the present invention has been shown and described with reference to preferred embodiments, it will be understood by those skilled in the art that various modifications and transformations may be made to these embodiments without departing from the spirit and scope of the present invention as defined by the claims.

Claims (15)

1. A modeling method for a three-dimensional face model, the three-dimensional face model comprising information on three-dimensional shape, appearance parameters, expression parameters and head pose, the modeling method comprising the following steps:
taking a preset standard three-dimensional face model as the working model and setting the specified start frame to the first frame, performing the following operations:
a) performing face tracking, based on the working model, from the specified start frame of a plurality of continuously input video frames; extracting face feature points, expression parameters and head pose parameters from the tracked video frames; and generating, according to predetermined conditions, tracking results corresponding to a predetermined number of video frames, the tracking results comprising each tracked video frame together with the face feature points, expression parameters and head pose parameters extracted from it;
b) updating the working model based on the generated tracking results, wherein, if upon updating the working model it is determined that the difference between the appearance parameters of the updated working model and the appearance parameters before the update is not smaller than a predetermined limit value, and the video frame following the predetermined number of video frames is not the last of the continuously input video frames, taking the first video frame after the predetermined number of video frames as the specified start frame and performing step a);
c) outputting the working model.
2. The modeling method as claimed in claim 1, wherein the three-dimensional face model is represented as
$S(a, e, q) = T\left(\sum_i a_i S_i^a + \sum_j e_j S_j^e;\, q\right)$,
where S is the three-dimensional shape, a is the appearance component, e is the expression component, q is the head pose, and T(S; q) denotes the function that rotates and/or translates the three-dimensional shape S according to the head pose q.
3. The modeling method as claimed in claim 2, wherein the standard three-dimensional face model comprises: an average shape $S_0$, appearance components $S_i^a$, expression components $S_j^e$, and a standard head pose $q_0$,
where, for i = 1:N, each $S_i^a$ represents one mode of variation of face appearance, and, for j = 1:M, each $S_j^e$ represents one mode of variation of facial expression.
4. The modeling method as claimed in claim 3, further comprising:
training the standard three-dimensional face model offline in advance using a series of three-dimensional face data.
5. The modeling method as claimed in claim 2, wherein step a) and step b) are performed in parallel.
6. The modeling method as claimed in claim 3, wherein the processing of updating the working model based on the generated tracking results in step b) comprises:
selecting, from the generated tracking results, the video frame closest to a neutral expression as the neutral expression frame;
extracting the face outline from the neutral expression frame according to the face feature points in the neutral expression frame;
updating the working model based on the feature points in the neutral expression frame and the extracted face outline.
7. The modeling method as claimed in claim 6, wherein, in the processing of selecting the neutral expression frame, the neutral expression frame is selected from the tracking results corresponding to the predetermined number T of video frames as follows:
computing expression parameters $e_t = (e_t^1, \ldots, e_t^K)$ for each tracked video frame t, where K is the number of kinds of expression parameters;
taking the most frequently occurring value $\tilde{e}^k$ of each kind of expression parameter as the neutral expression value;
selecting as the neutral expression frame a video frame for which the deviations of all K expression parameters from the corresponding neutral expression values are all smaller than the respective predetermined thresholds.
8. modeling method as claimed in claim 7, wherein, is used active contour model algorithm to extract people's face outline from neutrality expression frame.
9. The modeling method as claimed in claim 8, wherein the processing of updating the working model based on the feature points in the neutral expression frame and the extracted face contour comprises:
updating the head pose q of the working model to the head pose of the neutral expression frame;
setting the expression component e of the working model to 0;
correcting the appearance component a of the working model by matching the working model S(a, e, q) to the feature point positions of the neutral expression frame, and by matching the face contour computed from the working model S(a, e, q) to the face contour extracted from the neutral expression frame.
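With e fixed to 0 and q fixed to the neutral frame's pose, the correction of a reduces to a linear least-squares fit against the observed landmarks. The sketch below assumes an orthographic camera and a known landmark-to-vertex correspondence, neither of which is specified in the claim; the contour term is omitted for brevity.

    import numpy as np

    def refit_appearance(points2d, landmark_idx, S0, Sa, R, t, reg=1e-3):
        # points2d: (L, 2) feature points of the neutral expression frame
        # landmark_idx: model vertices matched to those feature points
        P = R[:2]                                         # orthographic projection rows
        base = (S0[landmark_idx] @ P.T + t[:2]).ravel()   # projected mean landmarks
        A = np.stack([(Sa[i][landmark_idx] @ P.T).ravel()
                      for i in range(len(Sa))], axis=1)   # (2L, N) effect of each S_i^a
        b = points2d.ravel() - base
        return np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ b)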
10. The modeling method as claimed in claim 2, wherein a plurality of modeling units are used to alternately perform the processing of step b) for different iterations, and the processing of the different iterations is merged.
11. The modeling method as claimed in claim 1, wherein, in step a), the generated tracking result is also output through an input/output interface.
12. The modeling method as claimed in claim 1, wherein, in step b), when the update of the working model is completed, the updated working model is also output through the input/output interface.
13. The modeling method as claimed in claim 2, wherein, in step a), said predetermined number is determined according to the input rate of the plurality of continuously input video frames, the noise level, or the accuracy requirement of the tracking, and said predetermined number is constant or variable.
14. The modeling method as claimed in claim 2, wherein, in step a), the facial feature points, expression parameters and head pose parameters are obtained based on the working model using one of the following methods: the active appearance model (AAM), the active shape model (ASM), and the composite constraint active appearance model (Composite Constraint AAM).
15. The modeling method as claimed in claim 13, wherein, if the face tracking processing in step a) returns a failure result, the appearance component a of the working model is reset to 0 and the method returns to the beginning of step a).
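The recovery rule of claim 15 amounts to falling back to the mean face and retrying. A minimal sketch, with WorkingModel and track_batch as hypothetical stand-ins:

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class WorkingModel:
        a: np.ndarray    # appearance component
        e: np.ndarray    # expression component

    def track_with_recovery(model, frames, track_batch):
        results = track_batch(model, frames)      # step a) on the current batch
        if results is None:                       # tracker signalled failure
            model.a = np.zeros_like(model.a)      # reset appearance component a to 0
            results = track_batch(model, frames)  # return to the beginning of step a)
        return results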
CN201210231897.XA 2012-07-05 2012-07-05 Modeling method, face tracking method and the equipment of three-dimensional face model Expired - Fee Related CN103530900B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201210231897.XA CN103530900B (en) 2012-07-05 2012-07-05 Modeling method, face tracking method and the equipment of three-dimensional face model
KR1020130043463A KR102024872B1 (en) 2012-07-05 2013-04-19 Method and apparatus for modeling 3d face, method and apparatus for tracking face
US13/936,001 US20140009465A1 (en) 2012-07-05 2013-07-05 Method and apparatus for modeling three-dimensional (3d) face, and method and apparatus for tracking face

Publications (2)

Publication Number Publication Date
CN103530900A true CN103530900A (en) 2014-01-22
CN103530900B CN103530900B (en) 2019-03-19

Family

ID=49932878

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210231897.XA Expired - Fee Related CN103530900B (en) 2012-07-05 2012-07-05 Modeling method, face tracking method and the equipment of three-dimensional face model

Country Status (2)

Country Link
KR (1) KR102024872B1 (en)
CN (1) CN103530900B (en)

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100292810B1 (en) * 1999-03-19 2001-06-15 윤덕용 A Real time face tracking technique using face's color model and ellipsoid approximation model
WO2001029767A2 (en) * 1999-10-21 2001-04-26 Koninklijke Philips Electronics N.V. System and method for three-dimensional modeling
US6845171B2 (en) * 2001-11-19 2005-01-18 Microsoft Corporation Automatic sketch generation
EP1811456B1 (en) * 2004-11-12 2011-09-28 Omron Corporation Face feature point detector and feature point detector
US7755619B2 (en) * 2005-10-13 2010-07-13 Microsoft Corporation Automatic 3D face-modeling from video
JP4999570B2 (en) * 2007-06-18 2012-08-15 キヤノン株式会社 Facial expression recognition apparatus and method, and imaging apparatus
KR100896643B1 (en) * 2007-06-18 2009-05-08 에스케이 텔레콤주식회사 Method and system for modeling face in three dimension by means of aam, and apparatus applied to the same
CN101159064B (en) * 2007-11-29 2010-09-01 腾讯科技(深圳)有限公司 Image generation system and method for generating image
KR20110021330A (en) * 2009-08-26 2011-03-04 삼성전자주식회사 Apparatus and method for creating third dimension abatar
US8339459B2 (en) * 2009-09-16 2012-12-25 Microsoft Corporation Multi-camera head pose tracking
KR101615719B1 (en) * 2009-09-18 2016-04-27 삼성전자주식회사 Apparatus and method for extracting user's third dimension facial expression

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070122001A1 (en) * 2005-11-30 2007-05-31 Microsoft Corporation Real-time Bayesian 3D pose tracking
US20090153569A1 (en) * 2007-12-17 2009-06-18 Electronics And Telecommunications Research Institute Method for tracking head motion for 3D facial model animation from video stream
CN101499128A (en) * 2008-01-30 2009-08-05 中国科学院自动化研究所 Three-dimensional human face action detecting and tracing method based on video stream
CN101777116A (en) * 2009-12-23 2010-07-14 中国科学院自动化研究所 Method for analyzing facial expressions on basis of motion tracking
CN102479388A (en) * 2010-11-22 2012-05-30 北京盛开互动科技有限公司 Expression interaction method based on face tracking and analysis

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105608710A (en) * 2015-12-14 2016-05-25 四川长虹电器股份有限公司 Non-rigid face detection and tracking positioning method
CN105608710B (en) * 2015-12-14 2018-10-19 四川长虹电器股份有限公司 A kind of non-rigid Face datection and tracking positioning method
CN108229246A (en) * 2016-12-14 2018-06-29 上海交通大学 Real-time three-dimensional human face posture method for tracing based on vehicle computing machine platform
CN108345821B (en) * 2017-01-24 2022-03-08 成都理想境界科技有限公司 Face tracking method and device
CN108345821A (en) * 2017-01-24 2018-07-31 成都理想境界科技有限公司 Face tracking method and apparatus
CN108734735B (en) * 2017-04-17 2022-05-31 佳能株式会社 Object shape tracking device and method, and image processing system
CN108734735A (en) * 2017-04-17 2018-11-02 佳能株式会社 Object shapes tracks of device and method and image processing system
CN111279413A (en) * 2017-10-26 2020-06-12 斯纳普公司 Joint audio and video facial animation system
CN107958479A (en) * 2017-12-26 2018-04-24 南京开为网络科技有限公司 A kind of mobile terminal 3D faces augmented reality implementation method
CN108765464A (en) * 2018-05-31 2018-11-06 山东工商学院 Low-rank re-detection context long time-tracking method and system based on residual compensation
CN110796083A (en) * 2019-10-29 2020-02-14 腾讯科技(深圳)有限公司 Image display method, device, terminal and storage medium
CN110796083B (en) * 2019-10-29 2023-07-04 腾讯科技(深圳)有限公司 Image display method, device, terminal and storage medium
CN112017230A (en) * 2020-09-07 2020-12-01 浙江光珀智能科技有限公司 Three-dimensional face model modeling method and system
CN113112596A (en) * 2021-05-12 2021-07-13 北京深尚科技有限公司 Face geometric model extraction and 3D face reconstruction method, device and storage medium
CN113112596B (en) * 2021-05-12 2023-10-24 北京深尚科技有限公司 Face geometric model extraction and 3D face reconstruction method, equipment and storage medium
CN116152900A (en) * 2023-04-17 2023-05-23 腾讯科技(深圳)有限公司 Expression information acquisition method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
KR20140009013A (en) 2014-01-22
CN103530900B (en) 2019-03-19
KR102024872B1 (en) 2019-09-24

Similar Documents

Publication Publication Date Title
CN103530900A (en) Three-dimensional face model modeling method, face tracking method and equipment
US11423189B2 (en) System for automated generative design synthesis using data from design tools and knowledge from a digital twin
CN108027878B (en) Method for face alignment
Weichel et al. ReForm: integrating physical and digital design through bidirectional fabrication
CN113228115A (en) Transformation of grid geometry into watertight boundary representation
KR100940862B1 (en) Head motion tracking method for 3d facial model animation from a video stream
US20190066351A1 (en) Motion retargeting method for character animation and apparatus thererof
US10109083B2 (en) Local optimization for curvy brush stroke synthesis
CN104978764A (en) Three-dimensional face mesh model processing method and three-dimensional face mesh model processing equipment
US8473255B2 (en) Method and aids for modelling 3D objects
CN111739167B (en) 3D human head reconstruction method, device, equipment and medium
EP4044118A1 (en) A computer-implemented method, data processing apparatus and computer program for generating three-dimensional pose-estimation data
CN112132970B (en) Natural texture synthesis system and method for 3D printing
US11334690B2 (en) Method for transforming the computer-aided design model of an object
CN115769214A (en) Computer aided design and manufacturing generated design shape optimization with limited size fatigue damage
CN113129447A (en) Three-dimensional model generation method and device based on single hand-drawn sketch and electronic equipment
Wang et al. Improved surface reconstruction using high-frequency details
CN111316283B (en) Gesture recognition method and device
CN109063272B (en) Design method of flexible drilling template
CN111179284B (en) Interactive image segmentation method, system and terminal
CN113487737A (en) Reverse modeling preassembly method based on BIM and holographic visual point cloud fusion
CN113590800A (en) Training method and device of image generation model and image generation method and device
CN111695489A (en) Method and device for verifying modeling route, unmanned vehicle and storage medium
WO2020193972A1 (en) Facial analysis
WO2014130417A1 (en) Multi disciplinary engineering design using image recognition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190319