WO2012167475A1 - Method and device for generating body animation - Google Patents


Info

Publication number
WO2012167475A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature point
image
feature
action
module
Application number
PCT/CN2011/077083
Other languages
French (fr)
Chinese (zh)
Inventor
董兰芳
陈家辉
李德旭
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to PCT/CN2011/077083
Priority to CN201180001326.0A
Publication of WO2012167475A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/20: 3D [Three Dimensional] animation
    • G06T 13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/80: 2D [Two Dimensional] animation, e.g. using sprites

Definitions

  • The present invention relates to the field of animation technology, and in particular to a method and apparatus for generating body animation.

Background Art
  • Image-based body animation refers to techniques in which a computer processes images containing a human body to generate an animation of that body.
  • Existing image-based body animation techniques fall mainly into two categories: those based on a three-dimensional model and those based on an image series.
  • In the 3D-model-based approach, the human body in the input image is first mapped onto a three-dimensional human body model resembling it, and the animation is then generated by driving that model to perform a specified action.
  • The advantage of this approach is that, once a 3D human body model is obtained, an animation of any action can be generated.
  • an embodiment of the present invention provides a method and apparatus for generating a body animation.
  • the technical solution is as follows:
  • A method of generating body animation comprises: acquiring an initial position of a feature point of the body in an image;
  • reading an action sequence of the feature point, and calculating the position of the feature point in the current frame according to the action sequence; deforming the image of the body region in the image according to the initial position of the feature point and its position in the current frame;
  • overlaying the image of the deformed body region on the background image in the image to generate the animation.
  • An apparatus for generating body animation comprises: an obtaining module, configured to acquire an initial position of a feature point of the body in the image;
  • a calculation module, configured to read an action sequence of the feature point and calculate the position of the feature point in the current frame according to the action sequence;
  • a deformation module, configured to deform the image of the body region in the image according to the initial position of the feature point acquired by the obtaining module and the position of the feature point in the current frame calculated by the calculation module;
  • a generating module, configured to overlay the image of the body region deformed by the deformation module on the background image in the image to generate the animation.
  • By reading the action sequence of the feature points, the position of each feature point in the current frame is calculated from the action sequence, and the image is then deformed from the initial positions of the feature points to the current-frame positions, generating a two-dimensional body animation.
  • Because the method uses two-dimensional image deformation, no three-dimensional model needs to be built, which reduces the workload. The body's motion is driven by the action sequence, so a single image can be driven to perform an arbitrary motion
  • simply by modifying the action sequence; there is no need, as in the prior art, to analyze and cluster body motions in a large number of images to acquire different motion types. The method is therefore simple to implement, with a small amount of calculation.
  • FIG. 1 is a flowchart of a method for generating a body animation according to Embodiment 1 of the present invention
  • FIG. 2 is a flow chart of a method for modeling shape parameters according to Embodiment 2 of the present invention.
  • FIG. 3 is a schematic diagram of an upper body image of a human body according to a second embodiment of the present invention.
  • FIG. 4 is a schematic diagram of feature points of a body according to a second embodiment of the present invention.
  • FIG. 5 is a schematic diagram showing the division of a body region according to Embodiment 2 of the present invention.
  • FIG. 6a is a schematic diagram of a first image restoration provided by Embodiment 2 of the present invention.
  • FIG. 6b is a schematic diagram of a second image restoration according to Embodiment 2 of the present invention.
  • FIG. 6c is a schematic diagram of a third image restoration provided by Embodiment 2 of the present invention.
  • FIG. 7 is a flowchart of a method for pre-defining an action sequence according to Embodiment 2 of the present invention.
  • FIG. 8 is a flowchart of a method for generating a body animation according to Embodiment 2 of the present invention.
  • FIG. 9 is a schematic diagram of forming an auxiliary feature line according to Embodiment 2 of the present invention.
  • FIG. 10 is a schematic structural diagram of an apparatus for generating a body animation according to Embodiment 3 of the present invention.
  • FIG. 11 is a schematic structural diagram of another apparatus for generating a body animation according to Embodiment 3 of the present invention.
  • FIG. 12 is a schematic structural diagram of another apparatus for generating a body animation according to Embodiment 3 of the present invention.
  • FIG. 13 is a schematic structural diagram of another apparatus for generating a body animation according to Embodiment 3 of the present invention.

Detailed Description
  • Embodiments of the present invention provide a method for generating body animation, which can generate an animation of the corresponding body for any given image containing a human body, an animal, or a cartoon character.
  • The method flow includes: acquiring an initial position of a feature point of the body in the image; reading an action sequence of the feature point and calculating the position of the feature point in the current frame according to the action sequence; deforming the image of the body region according to the initial position and the current-frame position of the feature point;
  • overlaying the image of the deformed body region on the background image in the image to generate the animation.
  • By reading the action sequence of the feature points, the method provided by this embodiment calculates the position of each feature point of the shape in the current frame from the action sequence, and then deforms the image from the initial positions of the acquired feature points to the current-frame positions, generating a two-dimensional body animation. Because the method uses two-dimensional image deformation, no three-dimensional model needs to be built, which reduces the workload. The body's motion is driven by the action sequence, so modifying the action sequence is all that is needed to drive a single image to perform an arbitrary motion; there is no need, as in the prior art, to analyze and cluster body motions in a large number of images to obtain different motion types. The method is therefore simple to implement, with a small amount of calculation.
  • Embodiment 2
  • Embodiments of the present invention provide a method for generating body animation, which can generate an animation of the shape in any given image, where the shape can be a human body, an animal, or a cartoon character.
  • The method can generate an upper-body animation of a human body, an animal, or a cartoon character, and can also generate a whole-body animation.
  • In the following description, an animation is generated only for an upper-body image of a human body by way of example, but the present invention is not limited thereto.
  • In the method provided by this embodiment, after the user or the system inputs the original input image, shape parameter modeling is first performed on the image, and a 2D (two-dimensional) animation is then generated according to the built model; see FIG. 2.
  • the method flow of shape parameter modeling includes:
  • The original input image is scanned and the shape in it is analyzed to obtain the feature points and body regions predefined for that shape.
  • The predefined feature points and body regions are scaled and displayed at suitable positions in the image. Because this automatic positioning is inexact, the displayed feature points and body regions sometimes do not fall exactly on the corresponding parts of the body, so the user is asked to position them precisely, as described in step 203.
  • The user can drag the feature points and body regions one by one to the appropriate positions on the body, and can also correct the fold lines of the body regions; the operation is simple. After the user finishes the revision, the user-positioned feature points and body-region positions are saved.
  • Feature points can be added or removed: the more feature points, the more detailed the animation effect; the fewer the feature points, the faster the animation is generated.
  • the division of the body region is as shown in FIG. 5, and is divided into three parts: a body region (region 0), a left-hand region (region 2), and a right-hand region (region 1).
  • More body regions can also be defined, for example separating the upper arm from the lower arm so that the images of the upper arm and the lower arm can be deformed separately.
  • the purpose of selecting the three regions by using the fold lines is to separate the body from the background while dividing the body into three parts for image deformation.
  • the action sequence of the feature points read in the 2D animation is predefined, see Figure 7.
  • the method of predefining the sequence of actions includes:
  • the action basic element is used to indicate the change of the position of the feature point and the number of frames of the change.
  • The method provided by this embodiment predefines four action basic elements: a static basic element, a translation basic element, a rotation basic element, and a convergence basic element.
  • Static basic element: indicates that the position of the corresponding feature point is unchanged; it has no parameters.
  • Translation basic element: indicates that the corresponding feature point shifts by a given displacement over a number of frames. It has three parameters: the two-dimensional displacement (x and y) of the feature point's movement, and the number of frames the movement lasts. Many motions are translation-based; shrugging, for example, shifts feature points 6 and 7 in FIG. 4 upward by a certain displacement. Translation can also produce the exaggerated effects of two-dimensional animation: moving feature points 6 and 7 in FIG. 4 outward, for instance, makes the arms appear to bulk up suddenly.
  • The unit of translation is defined with respect to a default image size of 1000*1000.
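One plausible way to apply such reference-grid displacements to an image of a different size is to scale them proportionally; the scaling rule and function names below are an assumption, not stated explicitly in the patent:

```c
#include <assert.h>

/* Displacements in action definitions are expressed on a 1000x1000
 * reference grid; scale them to the actual image resolution. */
static const double REF_SIZE = 1000.0;

double scale_dx(double dx_ref, int image_width)  { return dx_ref * image_width  / REF_SIZE; }
double scale_dy(double dy_ref, int image_height) { return dy_ref * image_height / REF_SIZE; }
```

With this rule, the "t 0 -20 5" displacement of action 3 becomes -10 pixels on a 500-pixel-tall image.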
  • The data structure of the translation basic element can be designed as follows:
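The patent text does not reproduce the structure itself; a sketch consistent with the three parameters just listed (the struct name is illustrative, while the field names iX, iY, and iTime match those referenced later in step 803):

```c
#include <assert.h>

/* Translation primitive: total 2D displacement plus duration in frames. */
typedef struct STR_TRANSLATE {
    int iX;     /* total displacement in the x direction (1000x1000 grid units) */
    int iY;     /* total displacement in the y direction */
    int iTime;  /* number of frames the movement lasts */
} STR_TRANSLATE;
```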
  • Rotation basic element: indicates that the corresponding feature point rotates around another feature point over a number of frames. It has four parameters: the feature point rotated around, the rotation angle in the 2D image plane, the rotation angle in the plane perpendicular to the image, and the number of frames the rotation lasts.
  • The feature point rotated around is also called the rotation base point and is identified by its feature-point number.
  • Most of the hand movements in an animation are related to rotation, which is also consistent with the bone movements of the body.
  • The data structure of the rotation basic element can be designed as follows:
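Again the structure itself is not reproduced in the text; a sketch consistent with the four parameters above (the struct name and iBase field are illustrative; fTheta, fZ, and iTime match the fields referenced later in step 803):

```c
#include <assert.h>

/* Rotation primitive: base point, two rotation angles, duration in frames. */
typedef struct STR_ROTATE {
    int   iBase;   /* number of the feature point rotated around (rotation base point) */
    float fTheta;  /* rotation angle in the 2D image plane, in degrees */
    float fZ;      /* rotation angle in the plane perpendicular to the image, in degrees */
    int   iTime;   /* number of frames the rotation lasts */
} STR_ROTATE;
```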
  • Convergence basic element: indicates that the corresponding feature point moves toward another feature point over a number of frames. It has three parameters: the target feature point, the movement ratio, and the number of frames the movement lasts.
  • The movement ratio is the fraction of the distance from the feature point to the target feature point that is covered. If the ratio is 1, the feature point moves to the target feature point's position; if the ratio is 0.5, it moves to the midpoint between the feature point and the target feature point.
  • The convergence element compensates for a limitation of translation, which is defined by an exact displacement and therefore cannot reach a position that is not known in advance. Placing the left hand on the right shoulder, for example, cannot be expressed as a translation because the coordinates of the right shoulder are not known beforehand.
  • The data structure of the convergence basic element can be designed as follows:
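A sketch consistent with the three parameters above, together with the position implied by the movement ratio (all names here are illustrative; the structure is not reproduced in the patent text):

```c
#include <assert.h>

/* Convergence primitive: target point, fraction of the distance to cover,
 * duration in frames. */
typedef struct STR_MEET {
    int   iTarget; /* number of the target feature point */
    float fRatio;  /* 1.0 reaches the target, 0.5 reaches the midpoint */
    int   iTime;   /* number of frames the movement lasts */
} STR_MEET;

/* Final coordinate of a converging point along the segment to its target
 * (applied per axis). */
double meet_pos(double from, double target, float ratio) {
    return from + (target - from) * ratio;
}
```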
  • In one action, each feature point corresponds to exactly one action basic element. Predefining an action from the basic elements and the feature points of the shape therefore means confirming, one by one, how the position of each feature point changes during the action. When the position of a feature point is confirmed to be unchanged, its change is represented by a static basic element;
  • when the feature point is to shift by a known displacement, its change is represented by a translation basic element; when it is to rotate around another feature point, by a rotation basic element;
  • and when it is to move toward another feature point, its change is represented by a convergence basic element.
  • the following uses the shape in Figure 4 as an example.
  • The data structure used to define an action can be designed as follows:

    typedef struct STR_ACTION {
        Bool bStill[14];   /* feature point does not move */
        /* ... */
    } STR_ACTION;
  • The [14] indicates that the array has 14 elements, because there are 14 feature points in FIG. 4. For each feature point, it is confirmed which action basic element describes its position change in the action, and the change and its parameters are recorded in the array element corresponding to that feature point for that basic element.
  • The number of frames an action lasts is determined by finding, over all feature points, the maximum number of frames during which any feature point's position changes; this maximum is used as the action's frame count.
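Taking the maximum of the per-feature-point frame counts can be sketched as (function name illustrative; stationary points contribute 0):

```c
#include <assert.h>

/* An action lasts as long as its slowest feature point: return the maximum
 * of the per-feature-point frame counts (0 for stationary points). */
int action_frames(const int *frames_per_point, int n_points) {
    int max = 0;
    for (int i = 0; i < n_points; i++)
        if (frames_per_point[i] > max)
            max = frames_per_point[i];
    return max;
}
```

For action 1 below, points 10 and 11 each move for 10 frames and the rest are stationary, so the action lasts 10 frames.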
  • The content of a file representing an action can then be written down, where the static basic element is denoted s, the translation basic element t, the rotation basic element r, and the convergence basic element m.
  • Three actions for the shape in FIG. 4 are listed below:
  • ssssssssssr 8 90 60 10 r 9 -90 60 10 ss // action 1
  • ssssssssssssm 13 0.5 8 m 12 0.5 8 // action 2
  • sssssssssst 0 -20 5 t 0 -20 5 t 0 -20 5 t 0 -20 5 // action 3
  • In action 1, the first ten s indicate that feature points 0 to 9 do not move;
  • "r 8 90 60 10" indicates that feature point 10 rotates around feature point 8 within 10 frames, by 90 degrees in the 2D plane of the image.
  • In action 2, the first twelve s indicate that feature points 0 to 11 do not move; "m 13 0.5 8" indicates that feature point 12 moves, within 8 frames, to the midpoint of the line connecting feature points 12 and 13, and "m 12 0.5 8" indicates that feature point 13 moves, within 8 frames, to the midpoint of the line connecting feature points 13 and 12.
  • the actual animation effect of action 2 is to make the fingertips of both hands contact within 8 frames.
  • In action 3, the first ten s indicate that feature points 0 to 9 do not move, and the last four "t 0 -20 5" entries indicate that feature points 10 to 13 shift upward by 20 units within 5 frames.
  • The actual animation effect of action 3 is to move the left and right hands up.
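These action strings follow a simple token format: s, t dx dy n, r base theta z n, or m target ratio n, one primitive per feature point. A minimal parser sketch (struct and function names are illustrative, not from the patent):

```c
#include <assert.h>
#include <stdio.h>

/* One parsed primitive for a single feature point. Kinds follow the
 * letters used in the action files: s, t, r, m. */
typedef struct {
    char  kind;  /* 's', 't', 'r' or 'm' */
    float p[4];  /* up to four numeric parameters */
} Primitive;

/* Parse the primitive at the front of the string. Returns a pointer just
 * past the consumed text, or NULL on a malformed primitive. */
const char *parse_primitive(const char *s, Primitive *out) {
    int consumed = 0;
    while (*s == ' ') s++;
    switch (*s) {
    case 's': /* stationary: no parameters */
        out->kind = 's';
        return s + 1;
    case 't': /* t dx dy nframes */
        if (sscanf(s, "t %f %f %f%n", &out->p[0], &out->p[1], &out->p[2], &consumed) == 3) {
            out->kind = 't';
            return s + consumed;
        }
        return NULL;
    case 'r': /* r base theta z nframes */
        if (sscanf(s, "r %f %f %f %f%n", &out->p[0], &out->p[1], &out->p[2], &out->p[3], &consumed) == 4) {
            out->kind = 'r';
            return s + consumed;
        }
        return NULL;
    case 'm': /* m target ratio nframes */
        if (sscanf(s, "m %f %f %f%n", &out->p[0], &out->p[1], &out->p[2], &consumed) == 3) {
            out->kind = 'm';
            return s + consumed;
        }
        return NULL;
    }
    return NULL;
}
```

Calling parse_primitive 14 times on one action line yields the per-feature-point primitives of that action.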
  • This step can predefine multiple actions for the input image.
  • Predefined actions are then combined into an action sequence for the feature points. Specifically, an action sequence is obtained by combining several predefined actions. For example, an action sequence read for the body in FIG. 4 is as follows:
  • The action sequence is named act1 and is decomposed into five actions.
  • This step can pre-combine multiple motion sequences for the input image.
  • the method for generating the body animation includes:
  • the sequence of actions of the animation to be generated is read into the memory, and the sequence of actions can be divided into several actions.
  • This step 801 can be performed before the step 802 or before the step 803, which is not specifically limited in the embodiment of the present invention.
  • the initial position of the feature points of the shape in the image is obtained in the following two ways:
  • the position of the feature point of the shape in the image saved at the completion of the previous motion is obtained, and the position of the acquired feature point is taken as the initial position of the feature point.
  • The initial position of a feature point can be defined in two ways: either the position of the feature point before any animation has been applied, that is, the position fixed by the user in the original input image during shape parameter modeling,
  • or the position of the feature point saved when the previous action completed. If the former is used, the generated animation resembles broadcast gymnastics: every action in the action sequence deforms
  • from the shape's initial feature-point positions. If the latter is used, the generated animation has coherent, continuous actions: each action in the sequence deforms from the feature-point positions
  • at which the previous action finished.
  • the method provided by the embodiment of the present invention does not limit the initial position of which feature point is adopted, and may adopt any one of the above two methods to obtain the initial position of the feature point.
  • the action basic element corresponding to each feature point in the current frame is obtained from the action sequence; and the position of each feature point in the current frame is calculated according to the parameter in the action basic element corresponding to each feature point in the current frame.
  • the current frame refers to a frame that is about to arrive.
  • For the translation basic element, the offset of feature point j at the current frame is strTrans[j].iX * iTime / strTrans[j].iTime in the x direction and strTrans[j].iY * iTime / strTrans[j].iTime in the y direction, where:
  • strTrans[j].iX is the total displacement of feature point j in the x direction;
  • strTrans[j].iY is the total displacement of feature point j in the y direction;
  • strTrans[j].iTime is the number of frames over which feature point j shifts;
  • iTime is the index of the current frame within the element's duration.
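The linear interpolation implied by the strTrans fields can be sketched as (the function and type names are illustrative; the per-frame rule mirrors the one given below for rotation):

```c
#include <assert.h>

typedef struct { double x, y; } Point;

/* Position of a feature point at frame iTime of a translation with total
 * displacement (totalX, totalY) lasting nFrames frames: linear
 * interpolation from the initial position. */
Point translate_at_frame(Point initial, int totalX, int totalY,
                         int nFrames, int iTime) {
    Point p;
    p.x = initial.x + (double)totalX * iTime / nFrames;
    p.y = initial.y + (double)totalY * iTime / nFrames;
    return p;
}
```

At iTime == nFrames the point has covered the whole displacement; intermediate frames land proportionally along the way.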
  • For the rotation basic element, first calculate the rotation vector V0 (from the rotation base point to the feature point); then calculate the vector V1 obtained by rotating V0 in the 2D plane of the image by the angle strRotate[j].fTheta * iTime / strRotate[j].iTime; then calculate the vector V2 obtained by further rotating in the plane perpendicular to the image by the angle strRotate[j].fZ * iTime / strRotate[j].iTime. The rotation base point plus V2 is the new position of feature point j.
  • Here iTime is at most equal to the number of frames for which the action basic element lasts.
  • the position of the current frame feature point is calculated before each frame starts.
  • Some feature points are associated: when one moves, the positions of its associated feature points must also be recalculated. Taking FIG. 4 as an example,
  • when feature point 8 is rotated, feature point 12 also needs to be rotated at the same time.
  • the associated feature point group includes:
  • 804 Deform the image of the body region in the image according to the initial position of the feature point and the position of the feature point in the current frame;
  • Before deformation, the feature points of the shape in the image may first be restored to their initial positions according to the acquired initial positions; each frame in the animation is then deformed from this initial-position state, rather than
  • from the image produced by the deformation of the previous frame.
  • Deforming the image of the shape region according to the initial position and the current-frame position of the feature points specifically includes: connecting the feature points according to their initial positions and the relationships between them, to obtain the initial feature lines; connecting the feature points according to their current-frame positions and the same relationships, to obtain the current-frame feature lines; and, according to the initial feature lines and the current-frame feature lines, applying
  • feature-line-based image deformation to the image of each shape region.
  • The embodiment uses feature-line-based image deformation because the movement of humans and animals is driven by the bones, a role the feature lines naturally play.
  • Because the image deformation is continuous, not every pixel needs to be computed during deformation: the values of uncomputed pixels can be obtained by interpolation from pixels that have already been computed.
  • An image of the deformed body region is overlaid on the background image in the image to generate an animation.
  • Steps 801 to 805 above generate one frame of the animation; steps 802 to 805 are then executed cyclically to generate each subsequent frame, and looping continuously produces a continuous body animation.
  • Auxiliary feature lines may be added on the outer side of the body region, that is, along the contour of the body, so that the shape is surrounded (possibly not completely) by auxiliary feature lines.
  • The region inside the auxiliary feature lines, and the region outside but close to them, then exhibit no spreading of the deformation.
  • The feature lines of the body region are 0-1, 1-2, 1-3, 2-4, 3-5, 0-6, and 0-7.
  • The feature lines of the left-hand region are 6-8, 8-10, and 10-12; copying them to the two sides gives new feature lines 6'-8', 6"-8", 8'-10', 8"-10", 10'-12', and 10"-12".
  • The feature lines of the right-hand region are 7-9, 9-11, and 11-13; copying them to each side gives new feature lines 7'-9', 7"-9", 9'-11', 9"-11", 11'-13', and 11"-13".
  • For example, 6-8, 8-10, and 10-12 can each be copied to one side to obtain 6'-8_1, 8_2-10_1, and 10_2-12'; intersecting 6'-8_1 with 8_2-10_1 then gives 8', and intersecting 8_2-10_1 with 10_2-12' gives 10'.
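Joining consecutive offset copies of the hand feature lines reduces to intersecting two lines given by endpoint pairs; a sketch (type and function names illustrative):

```c
#include <assert.h>

typedef struct { double x, y; } P2;

/* Intersection of the infinite lines through (a1,a2) and (b1,b2).
 * Returns 1 and writes the point to *out, or 0 if the lines are parallel. */
int line_intersect(P2 a1, P2 a2, P2 b1, P2 b2, P2 *out) {
    double dax = a2.x - a1.x, day = a2.y - a1.y;
    double dbx = b2.x - b1.x, dby = b2.y - b1.y;
    double den = dax * dby - day * dbx;  /* 2D cross product of directions */
    if (den == 0.0) return 0;            /* parallel lines: no single point */
    double t = ((b1.x - a1.x) * dby - (b1.y - a1.y) * dbx) / den;
    out->x = a1.x + t * dax;
    out->y = a1.y + t * day;
    return 1;
}
```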
  • The posture of the body in the original input image supplied by the user or the system may be arbitrary; it is not specifically limited.
  • The feature points and body regions of any posture can be positioned using the method provided by this embodiment,
  • and with a predefined action sequence a body animation can be generated for it.
  • The preferred original input image is one of a body standing upright with the hands hanging naturally. If the input image shows a different
  • posture, it may first be deformed by the method of this embodiment into an image of a natural upright stance; the deformed image is saved as a default image for subsequent deformation when animations are generated, and the action sequences are defined against this default image.
  • In summary, the method provided by this embodiment locates the feature points and body regions of a shape in an input image containing a body or an animal, and performs image restoration on the body-region images to repair the background image.
  • The feature-point position of each frame is calculated from the action sequence that is read in, and the image is deformed frame by frame from the initial feature-point positions, generating a two-dimensional body animation. The method
  • uses two-dimensional image deformation, calculating a new feature-point position for each frame from the action sequence, forming feature lines, and deforming to form the image of the new frame; the animation is thus produced without a three-dimensional model, reducing the workload.
  • The method also proposes four basic action elements for two-dimensional animation, which are combined into actions and action sequences and used to drive the body's motion.
  • Embodiments of the present invention provide an apparatus for generating body animation, which can generate an animation of the shape in any given image, where the shape can be a human body, an animal, or a cartoon character.
  • the device can be used to generate upper body animations of human bodies, animals, cartoons, etc., as well as full-body animations. Referring to Figure 10, the device includes:
  • the obtaining module 1001 is configured to acquire an initial position of a feature point of the shape in the image;
  • the calculation module 1002 is configured to read an action sequence of the feature point, and calculate a position of the feature point in the current frame according to the action sequence;
  • the deformation module 1003 is configured to perform image deformation on the image of the shape region in the image according to the initial position of the feature point acquired by the acquisition module 1001 and the position of the feature point in the current frame calculated by the calculation module 1002;
  • the generating module 1004 is configured to cover an image of the deformed region of the deforming module 1003 on the background image in the image to generate an animation.
  • the device further includes:
  • the first pre-defined module 1005 is configured to scan the original input image before acquiring the initial position of the feature point of the shape in the image, to obtain a feature point and a body region predefined for the shape in the image; For the specific implementation process of the module 1005, refer to step 201 in the second embodiment, and details are not described herein.
  • The processing module 1006 is configured to display the feature points and body regions predefined by the first pre-defined module 1005 in the image, and to save the positions to which the user precisely drags the displayed feature points and body regions on the corresponding parts of the body.
  • the specific implementation process of the processing module 1006 is shown in steps 202 and 203 in the second embodiment, and details are not described herein again.
  • the apparatus further includes:
  • the repairing module 1007 is configured to perform image restoration on the image according to the position of the physical region accurately positioned by the user saved by the processing module 1006, and obtain an image and a background image of each physical region.
  • For the specific implementation process of the repairing module 1007, refer to step 204 in the second embodiment, and details are not described herein again.
  • the device further includes:
  • the second pre-defined module 1008 is configured to pre-define an action basic element before the calculation module 1002 reads the action sequence of the feature point, where the action basic element is used to indicate a change in the position of the feature point and a number of frames in which the change lasts.
  • The action basic elements include a static basic element, a translation basic element, a rotation basic element, and a convergence basic element.
  • the third pre-defined module 1009 is configured to pre-define the action according to the action basic element predefined by the second pre-defined module 1008 and the feature point of the feature, where each feature point corresponds to one action basic element; the third predefined For the specific implementation process of the module 1009, refer to step 702 in the second embodiment, and details are not described herein again.
  • the combination module 1010 is configured to combine the actions defined by the third pre-defined module 1009 into the action sequence of the feature points.
  • the specific implementation process of the combination module 1010 refer to step 703 in the second embodiment, and details are not described herein again.
  • The third pre-defined module 1009 is specifically configured to confirm, one by one, the position change of each feature point in the action. When the position of a feature point is confirmed to be unchanged, its change is represented by a static basic element;
  • when the feature point is confirmed to shift by a known displacement, its change is represented by a translation basic element,
  • whose parameters include the two-dimensional displacement of the feature point's movement and the number of frames the movement lasts. When the feature point is confirmed to rotate around another feature point, its change is represented by a rotation basic element, whose parameters include the feature point rotated around, the rotation angle in the 2D image plane, the rotation angle in the plane perpendicular to the image, and the number of frames the rotation lasts. When the feature point is confirmed to move toward another feature point, its change is represented by a convergence basic element, whose parameters include the target feature point, the movement ratio, and the number of frames the movement lasts.
  • the calculating module 1002 includes:
  • An acquiring unit configured to obtain, from the action sequence, an action basic element corresponding to each feature point in the current frame, where the current frame refers to a frame to be reached;
  • a calculating unit configured to calculate a position of each feature point in the current frame according to a parameter in an action basic element corresponding to each feature point in the current frame acquired by the acquiring unit.
  • for the specific implementation process of the calculation module 1002, refer to step 803 in the second embodiment, and details are not described herein again.
  • the obtaining module 1001 is specifically configured to obtain the positions of the feature points of the body in the original input image and take them as the initial positions of the feature points; or
  • the obtaining module 1001 is specifically configured to obtain the positions of the feature points of the body in the image saved when the previous action was completed and take them as the initial positions of the feature points.
  • for the specific implementation process of the obtaining module 1001, refer to step 802 in the second embodiment, and details are not described herein again.
  • the deformation module 1003 is specifically configured to connect the feature points according to their initial positions and the associations between them to obtain initial feature lines; to connect the feature points according to their positions in the current frame and the associations between them to obtain current-frame feature lines; and to perform feature-line-based image deformation on the image of each body region according to the initial feature lines and the current-frame feature lines; for the specific implementation process of the deformation module 1003, refer to step 804 in the second embodiment, and details are not described herein again.
  • the body in the original input image input by the user or the system may be in any posture, which is not specifically limited; the feature points and body regions of a body in any posture can be located by the method provided in the embodiment of the present invention, and an action sequence can be predefined for it to generate a body animation.
  • preferably, the original input image is an image of a body standing upright with the hands hanging naturally; if an image of a different posture is input, it may first be deformed by the method provided in the embodiment of the present invention into an image of a body standing upright with the hands hanging naturally, the deformed image is saved as the default image for subsequent deformation when the animation is generated, and the action sequence is generated according to this default image.
  • in the embodiment of the present invention, the feature points and body regions of a body are located from an input image containing a human body or an animal, image repair is performed on the body regions to restore the background, and the repaired image is used as the background of the animation; the position of each feature point in every frame is calculated according to the action sequence that is read in, and frame-by-frame image deformation is performed based on the initial positions of the feature points, thereby generating a two-dimensional body animation.
  • because the method uses a two-dimensional image deformation technique that calculates the new position of each feature point in every frame according to the action sequence, forms feature lines, and then deforms the image into the new frame, an animation effect is produced without requiring a three-dimensional model, which reduces the workload; in addition, the method proposes four action basic elements for two-dimensional animation, which are combined to form actions and action sequences, and the body is driven to act by the action sequence.
  • in practical applications, the above functions may be assigned to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to perform all or part of the functions described above.
  • the device for generating a body animation provided by the above embodiment belongs to the same concept as the method for generating a body animation; its specific implementation process is described in detail in the method embodiments, and details are not described herein again.
  • a person skilled in the art may understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.

Abstract

A method and device for generating body animation are provided, relating to the field of animation technology. The method comprises: acquiring the initial positions of the feature points of a body in an image; reading the action sequences of the feature points and calculating the positions of the feature points in the current frame according to the action sequences; deforming the image of the body region in the image according to the initial positions of the feature points and their positions in the current frame; and overlaying the deformed image of the body region on the background image of the image to generate the animation. The animation is generated using two-dimensional image deformation technology without creating a three-dimensional model, thus reducing the workload. In addition, since a single image can be driven to form a body animation of any action by modifying the action sequences, no analysis or clustering of a large number of images is needed; the method is therefore easy to apply and computationally light.

Description

Method and device for generating body animation

Technical Field
The present invention relates to the field of animation technology, and in particular to a method and device for generating a body animation.

Background
Image body animation technology refers to techniques in which a computer processes images containing a human body to generate a human-body animation. At present, image body animation technology mainly comprises techniques based on three-dimensional models and techniques based on image series.
Technique based on a three-dimensional model: first, the human body in the input image is mapped to obtain a three-dimensional human model similar to that body; then an animation is generated by driving the three-dimensional human model to perform specified actions. The advantage of this method is that once the three-dimensional human model has been obtained, an animation of any action can be generated.
Technique based on an image series: first, several action images of the human body are obtained from a number of input images; then image morphing is performed between these action images to obtain the animation effect. The advantages of this method are a small amount of computation and a fast animation speed.
In the process of implementing the present invention, the inventors found that the prior art has at least the following problems: for the technique based on a three-dimensional model, the process of mapping the human body in an image into a three-dimensional human model is cumbersome and labor-intensive; for the technique based on an image series, because the animation is obtained by morphing between the actions in the input images, only animations of actions already present in the images can be produced.

Summary of the Invention
In order to reduce the workload of generating a body animation, and to be able to generate body animations of various actions from a single image, embodiments of the present invention provide a method and device for generating a body animation. The technical solutions are as follows:
In one aspect, a method for generating a body animation is provided, the method comprising:
acquiring the initial positions of the feature points of a body in an image;
reading an action sequence of the feature points, and calculating the positions of the feature points in the current frame according to the action sequence;
deforming the image of the body region in the image according to the initial positions of the feature points and the positions of the feature points in the current frame; and
overlaying the deformed image of the body region on the background image in the image to generate an animation.
In another aspect, a device for generating a body animation is provided, the device comprising:
an obtaining module, configured to acquire the initial positions of the feature points of a body in an image;
a calculation module, configured to read an action sequence of the feature points and to calculate the positions of the feature points in the current frame according to the action sequence;
a deformation module, configured to deform the image of the body region in the image according to the initial positions of the feature points acquired by the obtaining module and the positions of the feature points in the current frame calculated by the calculation module; and
a generation module, configured to overlay the image of the body region deformed by the deformation module on the background image in the image to generate an animation.
The beneficial effects of the technical solutions provided by the embodiments of the present invention are as follows: the action sequence of the feature points is read, the positions of the feature points in the current frame are calculated according to the action sequence, and the image is then deformed from the initial positions of the feature points to the positions in the current frame, thereby generating a two-dimensional body animation. Because the method uses a two-dimensional image deformation technique and does not need to build a three-dimensional model, the workload is reduced. The method drives the body to act through the action sequence, so a single image can be driven to form a body animation of any action simply by modifying the action sequence, without analyzing and clustering the body actions in a large number of images to obtain different types of actions as in the prior art; the method is therefore simple to implement and computationally light.

Brief Description of the Drawings
In order to describe the technical solutions in the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Figure 1 is a flowchart of a method for generating a body animation according to Embodiment 1 of the present invention;
Figure 2 is a flowchart of a method for body-parameter modeling according to Embodiment 2 of the present invention;
Figure 3 is a schematic diagram of an upper-body image of a human body according to Embodiment 2 of the present invention;
Figure 4 is a schematic diagram of the feature points of a body according to Embodiment 2 of the present invention;
Figure 5 is a schematic diagram of the division of body regions according to Embodiment 2 of the present invention;
Figure 6a is a schematic diagram of the first image repair according to Embodiment 2 of the present invention;
Figure 6b is a schematic diagram of the second image repair according to Embodiment 2 of the present invention;
Figure 6c is a schematic diagram of the third image repair according to Embodiment 2 of the present invention;
Figure 7 is a flowchart of a method for predefining an action sequence according to Embodiment 2 of the present invention;
Figure 8 is a flowchart of a method for generating a body animation according to Embodiment 2 of the present invention;
Figure 9 is a schematic diagram of forming auxiliary feature lines according to Embodiment 2 of the present invention;
Figure 10 is a schematic structural diagram of a device for generating a body animation according to Embodiment 3 of the present invention;
Figure 11 is a schematic structural diagram of another device for generating a body animation according to Embodiment 3 of the present invention;
Figure 12 is a schematic structural diagram of yet another device for generating a body animation according to Embodiment 3 of the present invention;
Figure 13 is a schematic structural diagram of yet another device for generating a body animation according to Embodiment 3 of the present invention.

Detailed Description
To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Embodiment 1
The embodiment of the present invention provides a method for generating a body animation. For any given image containing a body such as a human body, an animal, or a cartoon character, the method can generate an animation of the corresponding body. Referring to Figure 1, the method flow includes:
101: Acquire the initial positions of the feature points of the body in the image.
102: Read the action sequence of the feature points, and calculate the positions of the feature points in the current frame according to the action sequence.
103: Deform the image of the body region in the image according to the initial positions of the feature points and the positions of the feature points in the current frame.
104: Overlay the deformed image of the body region on the background image in the image to generate an animation.
In the method provided by the embodiment of the present invention, the action sequence of the feature points is read, the positions of the feature points of the body in the current frame are calculated according to the action sequence, and the image is then deformed from the initial positions of the feature points to the positions in the current frame, thereby generating a two-dimensional body animation. Because the method uses a two-dimensional image deformation technique and does not need to build a three-dimensional model, the workload is reduced. The method drives the body to act through the action sequence, so a single image can be driven to form a body animation of any action simply by modifying the action sequence, without analyzing and clustering the body actions in a large number of images to obtain different types of actions as in the prior art; the method is therefore simple to implement and computationally light.

Embodiment 2
The embodiment of the present invention provides a method for generating a body animation. For any given image, the method can generate an animation of the body in the image, where the body may be one of a human body, an animal, and a cartoon character. The method can generate either an upper-body animation or a whole-body animation of a human body, an animal, or a cartoon character; in the embodiment of the present invention, generating an animation for an upper-body image of a human body is taken as an example for description, but the invention is not limited thereto.
In the method provided by the embodiment of the present invention, after the user or the system inputs an original input image, body-parameter modeling is first performed on the image, and a 2D (two-dimensional) animation is then generated according to the built model. Referring to Figure 2, the flow of the body-parameter modeling method includes:
201: Scan the original input image to obtain the feature points and body regions predefined for the body in the image. Suppose the original input image is an upper-body image of a human body, as shown in Figure 3. To make the subsequent annotations in the image clearer, Figure 3 shows only the outline of the upper-body image; the image details inside the outline are not shown, whereas in practice it would be a real upper-body image of a human body.
Specifically, the original input image is scanned and the body characteristics in the image are analyzed to obtain the feature points and body regions predefined for the body.
202: Display the predefined feature points and body regions in the image.
Specifically, the predefined feature points and body regions are scaled and displayed at suitable positions in the image. However, because this positioning is sometimes not accurate enough, the displayed feature points and body regions may not correspond exactly to the corresponding positions of the body, so the user is asked to position them precisely, as described in step 203.
203: Save the positions obtained after the user drags the displayed predefined feature points and body regions to the corresponding positions on the body in the image for precise positioning.
During positioning, the user can drag the feature points and body regions one by one to suitable positions on the body, and can also correct the polylines of the body regions, which is easy to operate. After the user completes the revision, the positions of the feature points and body regions located by the user are saved.
In the embodiment of the present invention, there are 14 feature points in total, as shown in Figure 4. Feature points may be added or removed: the more feature points, the more detailed the generated animation; the fewer feature points, the faster the animation is generated.
In the embodiment of the present invention, the body regions are divided as shown in Figure 5 into three parts: the body region (region 0), the left-hand region (region 2), and the right-hand region (region 1). More body regions may be added, for example separating the upper arm from the lower arm, so that images in which the upper and lower arms overlap can be handled. The purpose of selecting these three regions with polylines is to separate the body from the background and, at the same time, to divide the body into three parts for image deformation.
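The document does not specify how individual pixels are assigned to the regions enclosed by the polylines; one conventional choice, shown here purely as an illustration, is the even-odd ray-crossing test. The function below is a hypothetical helper, not part of the patented method.

```c
#include <assert.h>
#include <stdbool.h>

typedef struct { double x, y; } Vertex;

/* Even-odd ray-crossing test: returns true if the point (px, py) lies inside
 * the closed polyline verts[0..n-1]. Could be used to rasterize the body
 * regions selected in step 203 into per-region masks. */
static bool point_in_region(double px, double py, const Vertex *verts, int n)
{
    bool inside = false;
    for (int i = 0, j = n - 1; i < n; j = i++) {
        /* count crossings of the edge verts[j]->verts[i] with a
         * horizontal ray cast from the query point */
        if ((verts[i].y > py) != (verts[j].y > py)) {
            double xcross = verts[j].x + (py - verts[j].y) *
                            (verts[i].x - verts[j].x) /
                            (verts[i].y - verts[j].y);
            if (px < xcross)
                inside = !inside;
        }
    }
    return inside;
}
```

Running this once per region polyline classifies each pixel as region 0, 1, 2, or background.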
204: Perform image repair on the image according to the saved positions of the body regions precisely located by the user, to obtain the image of each body region and the background image.
A fast image-repair algorithm based on the average gray value of the image, commonly used in the prior art, may be adopted for the image repair, but the repair is not limited thereto.
Specifically, when image repair is performed, as many repair passes are run as there are body regions divided in the image, each pass repairing a different body region. Multiple repairs are needed because the arm regions of the body in the image may block the body region, or parts of the body may block the background image; multiple repairs restore the image information of every layer, so that during image deformation the arms, the body, and the background are separate and do not affect one another.
For example, when image repair is performed on Figure 5 with the body regions already divided, three repairs are needed in total. First, image repair is performed on region 2 in Figure 5 to obtain Figure 6a, in which the information of the background region and the body region blocked by the left-hand region is restored. Then image repair is performed on region 1 in Figure 6a to obtain Figure 6b, in which the information of the background region and the body region blocked by the left-hand and right-hand regions is restored. Finally, image repair is performed on region 0 in Figure 6b to obtain Figure 6c, in which the information of the background region blocked by the left-hand region, the right-hand region, and the body region is restored. Figure 6c is thus the final background image, and every frame of the animation uses Figure 6c as its background.
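The fast repair algorithm referred to above is not reproduced in this document; the fragment below is only a greatly simplified stand-in that conveys the idea of filling a masked region from surrounding average values: each unknown pixel repeatedly takes the mean of its already-known 4-neighbours until the hole is filled. Array sizes and names are illustrative.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define W 8
#define H 8

/* Simplified average-value fill: every masked (unknown) pixel is assigned
 * the mean of its known 4-neighbours; passes repeat so that the fill grows
 * inward from the hole boundary until no masked pixel remains reachable. */
static void repair_region(unsigned char img[H][W], unsigned char mask[H][W])
{
    bool changed = true;
    while (changed) {
        changed = false;
        for (int y = 0; y < H; y++) {
            for (int x = 0; x < W; x++) {
                if (!mask[y][x])
                    continue;
                int sum = 0, cnt = 0;
                if (x > 0     && !mask[y][x - 1]) { sum += img[y][x - 1]; cnt++; }
                if (x < W - 1 && !mask[y][x + 1]) { sum += img[y][x + 1]; cnt++; }
                if (y > 0     && !mask[y - 1][x]) { sum += img[y - 1][x]; cnt++; }
                if (y < H - 1 && !mask[y + 1][x]) { sum += img[y + 1][x]; cnt++; }
                if (cnt > 0) {
                    img[y][x] = (unsigned char)(sum / cnt);
                    mask[y][x] = 0;   /* the pixel is now treated as known */
                    changed = true;
                }
            }
        }
    }
}
```

Running such a fill once per body-region mask, one region at a time, mirrors the three successive repair passes of Figures 6a to 6c.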
Further, after the body-parameter modeling of the original input image is completed, 2D animation generation can proceed. In the embodiment of the present invention, the action sequence of the feature points that is read in when generating the 2D animation is defined in advance. Referring to Figure 7, the method for predefining an action sequence includes:
701: Predefine the action basic elements.
An action basic element is used to represent the change in a feature point's position and the number of frames the change lasts. The method provided by the embodiment of the present invention predefines four action basic elements: the static basic element, the translation basic element, the rotation basic element, and the convergence basic element.
Static basic element: indicates that the position of the corresponding feature point is unchanged; it has no parameters.
Translation basic element: indicates that the corresponding feature point is to be translated by a certain displacement over a number of frames. It has three parameters: the two-dimensional displacement x and y of the feature point's movement, and the number of frames the movement lasts. Many actions are related to translation; shrugging, for example, translates feature points 6 and 7 in Figure 4 upward by a certain displacement. Translation can also achieve the exaggerated effects of two-dimensional animation: moving feature points 6 and 7 in Figure 4 outward, for instance, makes the arms suddenly look stronger. The unit of translation is calculated for a default image size of 1000*1000; for input images of other sizes, the coordinates are converted according to the default image size and the actual size of the input image, that is, if the size of the input image is iWidth*iHeight, the translation amounts are scaled as x *= iWidth/1000 and y *= iHeight/1000.
The data structure of the translation basic element can be designed as follows:
typedef struct STR_TRANS    // translation basic element
{
    bool bTrans;    // whether this is a translation
    int iX;         // translation in x
    int iY;         // translation in y
    int iTime;      // number of frames the translation lasts
} STR_TRANS;
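As a hedged sketch of how a translation basic element might be evaluated per frame (the document gives the total displacement and the frame count but does not spell out the per-frame schedule, so an even split is assumed here), together with the 1000*1000 coordinate conversion described above:

```c
#include <assert.h>
#include <stdbool.h>

typedef struct STR_TRANS    /* translation basic element, as above */
{
    bool bTrans;    /* whether this is a translation */
    int iX;         /* translation in x (1000x1000 default units) */
    int iY;         /* translation in y (1000x1000 default units) */
    int iTime;      /* number of frames the translation lasts */
} STR_TRANS;

/* Position of a feature point at frame `frame` (1..iTime) of a translation,
 * rescaled for an iWidth x iHeight input image and advanced linearly
 * (the linear per-frame schedule is an assumption). */
static void trans_at_frame(const STR_TRANS *t, int iWidth, int iHeight,
                           double x0, double y0, int frame,
                           double *x, double *y)
{
    double sx = (double)t->iX * iWidth / 1000.0;   /* x *= iWidth / 1000  */
    double sy = (double)t->iY * iHeight / 1000.0;  /* y *= iHeight / 1000 */
    double f = (double)frame / (double)t->iTime;   /* assumed even split  */
    *x = x0 + f * sx;
    *y = y0 + f * sy;
}
```

For example, the element "t 0 -20 5" used in action 3 below moves a point 20 default units upward over 5 frames.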
Rotation basic element: indicates that the corresponding feature point is to rotate by a certain angle around another feature point over a number of frames. It has four parameters: the surrounded feature point, the rotation angle on the two-dimensional plane of the image, the rotation angle in the direction perpendicular to the two-dimensional plane, and the number of frames the rotation lasts. The surrounded feature point, also called the rotation base point, is represented by the number of that feature point. Most hand movements in an animation are related to rotation, which is also consistent with the skeletal movements of a body. The data structure of the rotation basic element can be designed as follows:
typedef struct STR_ROTATE    // rotation basic element
{
    bool bRotate;    // whether this is a rotation
    int iO;          // rotation base point
    float fTheta;    // rotation angle on the two-dimensional plane
    float fZ;        // rotation angle perpendicular to the two-dimensional plane
    int iTime;       // number of frames the rotation lasts
} STR_ROTATE;
Convergence basic element: indicates that the corresponding feature point is to move toward the position of another feature point over a number of frames. It has three parameters: the target feature point, the movement ratio, and the number of frames the movement lasts. The movement ratio is the percentage of the distance between the feature point and the target feature point that the feature point moves: a ratio of 1 moves the feature point to the position of the target feature point, while a ratio of 0.5 moves it to the midpoint between the two. The convergence operation compensates for a limitation of the translation operation, which defines an exact displacement and therefore cannot reach a position that is not known in advance; for example, placing the left hand on the right shoulder cannot be achieved precisely by translation alone, because the coordinates of the right shoulder are not known beforehand. The data structure of the convergence basic element can be designed as follows:
typedef struct STR_MIX    // convergence basic element
{
    bool bMix;      // whether this is a convergence
    int iO;         // target feature point
    float fRate;    // movement ratio
    int iTime;      // number of frames the convergence lasts
} STR_MIX;
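The convergence basic element moves a point a fraction fRate of the way to its target; the sketch below assumes the movement advances linearly over the iTime frames (the per-frame schedule is not specified above). Names are illustrative.

```c
#include <assert.h>

typedef struct { double x, y; } Vec2;

/* Position at frame `frame` (1..iTime) of a feature point p converging on
 * `target`: at the final frame it has covered the fraction fRate of the
 * distance between the two points (fRate = 1 reaches the target, 0.5 the
 * midpoint), advancing linearly in between (assumed schedule). */
static Vec2 mix_at_frame(Vec2 p, Vec2 target, double fRate,
                         int iTime, int frame)
{
    double f = fRate * (double)frame / (double)iTime;
    Vec2 r = { p.x + f * (target.x - p.x),
               p.y + f * (target.y - p.y) };
    return r;
}
```

Because the target position is looked up at run time, the element reaches positions, such as the other shoulder, whose coordinates are not known when the action is authored.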
702: Predefine actions according to the action basic elements and the feature points of the body.
Specifically, in one action each feature point corresponds to one action basic element. Therefore, the specific steps of predefining an action according to the action basic elements and the feature points of the body are to confirm, one by one, the position change of each feature point of the body in the action:
when it is confirmed that the position of a feature point is unchanged, the position change of the feature point is represented by a static basic element;
when it is confirmed that a feature point is to be translated, the position change of the feature point is represented by a translation basic element;
when it is confirmed that a feature point is to rotate around another feature point, the position change of the feature point is represented by a rotation basic element;
when it is confirmed that a feature point is to move toward another feature point, the position change of the feature point is represented by a convergence basic element.
Taking the body in Figure 4 as an example, the data structure used to define an action can be designed as follows:

typedef struct STR_ACTION    // action
{
    bool bStill[14];             // feature point stays still
    STR_TRANS strTrans[14];      // feature point translation
    STR_ROTATE strRotate[14];    // feature point rotation
    STR_MIX strMix[14];          // feature point convergence
    int iTime;                   // number of frames the action lasts
} STR_ACTION;
Here, [14] indicates that each array has 14 array elements, because there are 14 feature points in Figure 4. For each feature point, it is confirmed one by one which action basic element should represent its position change in the action, and the change and its parameters are recorded in the array element corresponding to that feature point in the corresponding action basic element array.
对于该动作持续的帧数, 具体为找出各特征点中位置变化的最大持续帧数, 将该最大 持续帧数作为该动作持续的帧数。  The number of frames in which the motion continues is specifically to find the maximum number of consecutive frames in which the position changes in each feature point, and the maximum number of consecutive frames is used as the number of frames in which the motion continues.
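The nested structures STR_TRANS, STR_ROTATE and STR_MIX are not spelled out in the original text. A minimal C sketch is given below, assuming field names taken from the per-frame formulas quoted later in this description (iX, iY, iTime, fTheta, fZ, fRate); the remaining names and layouts are illustrative assumptions. It also implements the "maximum duration over all feature points" rule for the action's frame count described above.

```c
#include <assert.h>
#include <stdbool.h>

#define NUM_POINTS 14  /* the figure in FIG. 4 has 14 feature points */

/* Assumed layouts; only iX, iY, iTime, fTheta, fZ, fRate appear in the text. */
typedef struct { int iX, iY; int iTime; } STR_TRANS;                   /* translation */
typedef struct { int iBase; float fTheta, fZ; int iTime; } STR_ROTATE; /* rotation    */
typedef struct { int iTarget; float fRate; int iTime; } STR_MIX;       /* convergence */

typedef struct STR_ACTION {
    bool bStill[NUM_POINTS];          /* feature point stationary  */
    STR_TRANS strTrans[NUM_POINTS];   /* feature point translation */
    STR_ROTATE strRotate[NUM_POINTS]; /* feature point rotation    */
    STR_MIX strMix[NUM_POINTS];       /* feature point convergence */
    int iTime;                        /* frames the action lasts   */
} STR_ACTION;

/* Frame count of the action = the maximum duration, in frames, of the
 * position change of any feature point, as described in the text. */
int action_duration(const STR_ACTION *a) {
    int max = 0;
    for (int j = 0; j < NUM_POINTS; j++) {
        if (a->strTrans[j].iTime > max)  max = a->strTrans[j].iTime;
        if (a->strRotate[j].iTime > max) max = a->strRotate[j].iTime;
        if (a->strMix[j].iTime > max)    max = a->strMix[j].iTime;
    }
    return max;
}
```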
Further, once the data structure of an action has been defined, the file content representing that action can be written. Let s denote the still basic element, t the translation basic element, r the rotation basic element, and m the convergence basic element. Three actions of the figure in FIG. 4 are listed below:

    s s s s s s s s s s r 8 90 60 10 r 9 -90 60 10 s s          // action 1
    s s s s s s s s s s s s m 13 0.5 8 m 12 0.5 8                // action 2
    s s s s s s s s s s t 0 -20 5 t 0 -20 5 t 0 -20 5 t 0 -20 5 // action 3

In action 1, the first ten s entries indicate that feature points 0 to 9 do not move; "r 8 90 60 10" indicates that within 10 frames feature point 10 rotates around feature point 8 by 90 degrees in the two-dimensional plane of the image and by 60 degrees perpendicular to that plane; "r 9 -90 60 10" indicates that within 10 frames feature point 11 rotates around feature point 9 by -90 degrees in the plane and by 60 degrees perpendicular to it; and the last two s entries indicate that feature points 12 and 13 do not move. The animation actually produced by action 1 swings the left and right hands to the chest within 10 frames.
In action 2, the first twelve s entries indicate that feature points 0 to 11 do not move; "m 13 0.5 8" indicates that within 8 frames feature point 12 moves to the midpoint of the line connecting feature points 12 and 13; "m 12 0.5 8" indicates that within 8 frames feature point 13 moves to the midpoint of the same line. The animation actually produced by action 2 brings the fingertips of the two hands into contact within 8 frames.
In action 3, the first ten s entries indicate that feature points 0 to 9 do not move, and the four "t 0 -20 5" entries indicate that feature points 10 to 13 are displaced upward by 20 within 5 frames. The animation actually produced by action 3 moves the left and right hands upward.
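The textual action format above is regular enough to parse mechanically. The following is a hypothetical C parser: the token letters s/t/r/m and their parameter counts come from the text, while the type names, the flat parameter array, and the error handling are illustrative assumptions.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define NUM_POINTS 14

typedef enum { PRIM_STILL, PRIM_TRANS, PRIM_ROTATE, PRIM_MIX } PrimType;

typedef struct {
    PrimType type;
    float p[4];  /* parameters in the order they follow the type letter */
} Primitive;

/* Parse one action line ("s", "t dx dy frames", "r base theta z frames",
 * "m target rate frames" per point). Returns 0 on success, -1 if malformed. */
int parse_action(const char *line, Primitive out[NUM_POINTS]) {
    char buf[512];
    strncpy(buf, line, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';
    char *tok = strtok(buf, " ");
    for (int j = 0; j < NUM_POINTS; j++) {
        if (!tok) return -1;
        int nparams = 0;
        switch (tok[0]) {
            case 's': out[j].type = PRIM_STILL;  nparams = 0; break;
            case 't': out[j].type = PRIM_TRANS;  nparams = 3; break;
            case 'r': out[j].type = PRIM_ROTATE; nparams = 4; break;
            case 'm': out[j].type = PRIM_MIX;    nparams = 3; break;
            default:  return -1;
        }
        for (int k = 0; k < nparams; k++) {
            tok = strtok(NULL, " ");
            if (!tok || sscanf(tok, "%f", &out[j].p[k]) != 1) return -1;
        }
        tok = strtok(NULL, " ");
    }
    return 0;
}
```

For example, parsing the action 1 line yields PRIM_STILL for points 0 to 9, two PRIM_ROTATE entries for points 10 and 11 with their four parameters, and PRIM_STILL for points 12 and 13.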
This step may predefine multiple actions for the input image.
703: Combine the predefined actions into an action sequence for the feature points. Specifically, an action sequence is obtained by combining several predefined actions. For example, one action sequence read for the figure in FIG. 4 is as follows:
    #act1#5
    s s s s s s s s s s r 8 90 60 10 r 9 -90 60 10 s s
    s s s s s s s s s s s s m 13 0.5 8 m 12 0.5 8
    s s s s s s s s s s t 0 -20 5 t 0 -20 5 t 0 -20 5 t 0 -20 5
    s s s s s s s s s s t 0 20 5 t 0 20 5 t 0 20 5 t 0 20 5
    s s s s s s s s s s t 0 -20 5 t 0 -20 5 t 0 -20 5 t 0 -20 5

Here the action sequence is named act1 and decomposes into five actions.
This step may combine multiple action sequences in advance for the input image.
Further, after the figure parameter modeling is completed and the action sequences are predefined, referring to FIG. 8, the method of generating a figure animation includes:
801: Read the action sequence of the feature points.
Specifically, the action sequence of the animation to be generated is read into memory; the sequence can be divided into several actions. Step 801 may be performed either before step 802 or before step 803, which is not specifically limited in this embodiment of the present invention.
802: Obtain the initial positions of the feature points of the figure in the image.
Specifically, the initial positions of the feature points of the figure can be obtained in either of two ways:
obtain the positions of the feature points of the figure in the original input image, and take those positions as the initial positions of the feature points; or
obtain the positions of the feature points of the figure in the image saved when the previous action was completed, and take those positions as the initial positions of the feature points.
As can be seen from these two ways, the initial position of a feature point can be defined in two senses. One is the position of the feature point before the image has undergone any animation, that is, the position located by the user in the original input image during the initial figure parameter modeling; the other is the position of the feature point of the figure in the image saved when the previous complete action in the action sequence finished.
In the method provided by this embodiment of the present invention, if the former definition is used, the animation subsequently generated resembles calisthenics, in which every action in the sequence is obtained by deforming from the figure's initial feature point positions; if the latter definition is used, the animation subsequently generated consists of continuous motions, in which every action in the sequence is obtained by deforming from the feature point positions at the completion of the previous action. The method provided by this embodiment does not limit which definition is used; either of the two ways above may be adopted to obtain the initial positions of the feature points.
803: Calculate the positions of the feature points in the current frame according to the action sequence.
Specifically, obtain from the action sequence the action basic element corresponding to each feature point in the current frame, and calculate the position of each feature point in the current frame according to the parameters in that basic element. Here, the current frame refers to the frame that is about to be rendered.
For example, first determine which frame the current frame is: initialize the frame counter iTime = 0 at the beginning of an action; after each frame of animation is completed, iTime = iTime + 1; when iTime reaches the number of frames the current action lasts, reset iTime to zero and start counting again.
Once the current frame number is known, calculate the position of each feature point of the figure in the current frame, denoting a feature point by j:
For the translation basic element, the translation amount is:

    (strTrans[j].iX * iTime / strTrans[j].iTime,
     strTrans[j].iY * iTime / strTrans[j].iTime)

See the definition of the data structure above: strTrans[j].iX is the total displacement of feature point j in the x direction, strTrans[j].iY is its total displacement in the y direction, strTrans[j].iTime is the number of frames its translation lasts, and iTime is the current frame number.
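As a sketch, the per-frame translation offset is a linear interpolation of the total displacement. The field names follow the formula above; the struct layout and integer arithmetic are assumptions.

```c
#include <assert.h>

typedef struct { int iX, iY; int iTime; } STR_TRANS; /* assumed layout */

/* Offset of a feature point at frame iTime, per the formula in the text:
 * (iX * iTime / iTime_total, iY * iTime / iTime_total). */
void trans_offset(const STR_TRANS *t, int iTime, int *dx, int *dy) {
    if (iTime > t->iTime) iTime = t->iTime;  /* clamp, as the text describes */
    *dx = t->iX * iTime / t->iTime;
    *dy = t->iY * iTime / t->iTime;
}
```

For the "t 0 -20 5" primitive of action 3, frame 2 yields an offset of (0, -8), and frames at or beyond the fifth yield the full (0, -20).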
For the rotation basic element, first compute the rotation vector V0; then compute the vector V1 obtained by rotating V0 in the two-dimensional plane of the image, the rotation angle being strRotate[j].fTheta * iTime / strRotate[j].iTime; then compute the vector V2 obtained by rotating perpendicular to that plane, the rotation angle being strRotate[j].fZ * iTime / strRotate[j].iTime. The rotation base point plus V2 is the new position of feature point j. For the meanings of the parameters, see the definition of the data structure above.
For the convergence basic element, first compute the total movement vector V0; the movement vector of the current frame is then V1 = V0 * strMix[j].fRate * iTime / strMix[j].iTime, where the meanings of the parameters are as defined in the data structure above.
In the above calculations, if iTime is greater than the number of frames the basic element lasts, iTime is taken to be equal to that number of frames.
The positions of the feature points of the current frame must be calculated before each frame begins. During this calculation some feature points are associated, and the positions of the associated feature points must be recalculated as well; for example, in FIG. 4, if feature point 10 rotates around feature point 8, feature point 12 must rotate with it. In the example shown in FIG. 4, the associated feature point groups include:
feature point groups associated in translation: 6 (8, 10, 12), 7 (9, 11, 13); feature point groups associated in rotation: 8 (10, 12), 9 (11, 13);
feature point groups associated in convergence: 10 (12), 11 (13), 12 (10), 13 (11).
804: Deform the image of the figure regions in the image according to the initial positions of the feature points and their positions in the current frame.
Optionally, the feature points of the figure in the image may first be restored to their initial positions according to the obtained initial positions; every frame of the animation is deformed from this initial-position state, which is obtained by image deformation before the frame is rendered.
Further, deforming the image of the figure regions according to the initial positions of the feature points and their positions in the current frame specifically includes: connecting the feature points according to their initial positions and the associations between them to obtain initial feature lines; connecting the feature points according to their positions in the current frame and the associations between them to obtain current-frame feature lines; and performing feature-line-based image deformation on the image of each figure region according to the initial feature lines and the current-frame feature lines. The reason this embodiment adopts feature-line-based image deformation is that the motion of figures such as human bodies and animals is driven by bones.
In addition, because image deformation is continuous, not every pixel needs to be computed during deformation; the values of uncomputed pixels can be obtained by interpolating from the pixels that have already been computed.
805: Overlay the deformed image of the figure regions on the background image of the image to generate the animation.
Steps 801 to 805 above are the processing flow for generating one frame of animation; to generate the next frame, steps 802 to 805 are executed again, and continuous figure animation is produced by repeating this loop.
When feature-line-based image deformation is performed with relatively few feature lines, regions far from the feature lines and regions with large deformation amplitudes are prone to deformation spreading; for example, in testing, deformation spreading occurred when an arm rotated by more than 90 degrees. To prevent deformation spreading during image deformation, auxiliary feature lines can be added, on top of the feature lines described above, outside the figure regions, that is, on the contour of the figure, so that the auxiliary feature lines surround the figure (possibly not completely). Deformation spreading does not occur in the parts of the figure inside these auxiliary feature lines, nor in the outside regions close to them.
For example, referring to FIG. 9, the feature lines of the body region are 0-1, 1-2, 1-3, 2-4, 3-5, 0-6 and 0-7. The feature lines of the left-hand region are 6-8, 8-10 and 10-12; copying each of them to the two sides yields the new feature lines 6'-8', 6''-8'', 8'-10', 8''-10'', 10'-12' and 10''-12''. The feature lines of the right-hand region are 7-9, 9-11 and 11-13; copying each of them to the two sides yields the new feature lines 7'-9', 7''-9'', 9'-11', 9''-11'', 11'-13' and 11''-13''. The copy operation for the left-hand feature lines is as follows: first compute the length L1 of line 6-8 and take L = L1/5, where L is the distance between a copied feature line and the original. Then, for each left-hand feature line to be copied, translate it by L in each of the two directions perpendicular to it, obtaining two new feature lines. Then find the intersection of adjacent new feature lines on the same side; that intersection is the vertex of the new feature lines corresponding to the original vertex. For example, copying 6-8, 8-10 and 10-12 to one side gives 6'-8_1, 8_2-10_1 and 10_2-12'; the intersection of 6'-8_1 and 8_2-10_1 gives 8', and the intersection of 8_2-10_1 and 10_2-12' gives 10'.
In this embodiment of the present invention, the posture of the figure in the original input image provided by the user or the system may be arbitrary and is not specifically limited; for a figure in any posture, the method provided by this embodiment can be used to locate the feature points and figure regions, predefine suitable action sequences, and generate figure animation. However, to make the figure animation smoother, support more actions, achieve better results, and keep the predefinition of action sequences simple, the original input image may be required to be an image of a figure standing at attention with both hands hanging naturally. If the input figure image is not in this posture, the method provided by this embodiment can first deform the image into one, save the deformed image as the default image from which subsequent animation frames are deformed, and generate the action sequences from this default image.
The method provided by this embodiment of the present invention inputs an image containing a figure such as a human body or an animal, locates the feature points and figure regions of the figure on it, performs image inpainting on the figure regions, and uses the repaired background image as the background of the animation. The feature point positions of each frame are calculated from the action sequence that is read in, and the image is deformed frame by frame on the basis of the initial feature point positions, thereby generating a two-dimensional figure animation. Because the method uses two-dimensional image deformation, computing the new positions of the feature points in each frame from the action sequence, forming feature lines, and then deforming to produce the image of the new frame, no three-dimensional model needs to be built, which reduces the workload. By proposing four action basic elements for two-dimensional figure animation, which are combined into actions and action sequences and which drive the figure through the action sequence, the method can drive a single image to perform arbitrary actions merely by modifying the action sequence, without analyzing and clustering the figure motions in a large number of images to obtain different types of actions as in the prior art; it is therefore simple to implement and computationally light. In addition, the translation basic element can produce some of the exaggerated effects of two-dimensional figure animation, such as the shoulder feature points suddenly translating outward so that the figure suddenly appears stronger, as mentioned in the embodiment above.

Embodiment 3
This embodiment of the present invention provides a device for generating figure animation. For any given image, the device can generate an animation of the figure in the image, where the figure may be a human body, an animal, or a cartoon character. The device can generate either upper-body or full-body animations of such figures. Referring to FIG. 10, the device includes:
an obtaining module 1001, configured to obtain the initial positions of the feature points of the figure in the image;
a calculation module 1002, configured to read the action sequence of the feature points and calculate the positions of the feature points in the current frame according to the action sequence;
a deformation module 1003, configured to deform the image of the figure regions in the image according to the initial positions of the feature points obtained by the obtaining module 1001 and the positions of the feature points in the current frame calculated by the calculation module 1002;
a generating module 1004, configured to overlay the image of the figure regions deformed by the deformation module 1003 on the background image of the image to generate the animation.
Further, referring to FIG. 11, the device also includes:
a first predefining module 1005, configured to scan the original input image before the obtaining module 1001 obtains the initial positions of the feature points, to obtain the feature points and figure regions predefined for the figure in the image (for the specific implementation, see step 201 in Embodiment 2, not repeated here);
a processing module 1006, configured to display the feature points and figure regions predefined by the first predefining module 1005 on the image, and to save the precisely located positions after the user drags the displayed predefined feature points and figure regions onto the corresponding positions of the figure in the image (for the specific implementation, see steps 202 and 203 in Embodiment 2, not repeated here).
Still further, referring to FIG. 12, the device also includes:
a repair module 1007, configured to perform image inpainting on the image according to the positions of the figure regions precisely located by the user and saved by the processing module 1006, to obtain the image of each figure region and the background image (for the specific implementation, see step 204 in Embodiment 2, not repeated here).
Still further, referring to FIG. 13, the device also includes:
a second predefining module 1008, configured to predefine action basic elements before the calculation module 1002 reads the action sequence of the feature points, each action basic element representing a change in a feature point's position and the number of frames the change lasts, the action basic elements including a still basic element, a translation basic element, a rotation basic element, and a convergence basic element (for the specific implementation, see step 701 in Embodiment 2, not repeated here);
a third predefining module 1009, configured to predefine actions according to the action basic elements predefined by the second predefining module 1008 and the feature points of the figure, each feature point corresponding to one action basic element in an action (for the specific implementation, see step 702 in Embodiment 2, not repeated here);
a combination module 1010, configured to combine the actions predefined by the third predefining module 1009 into an action sequence of the feature points (for the specific implementation, see step 703 in Embodiment 2, not repeated here).
Specifically, the third predefining module 1009 is configured to confirm, one by one, the position change of each feature point of the figure in the action: when it is confirmed that the position of a feature point does not change, represent its position change by a still basic element; when it is confirmed that a feature point is to be translated, represent its position change by a translation basic element, whose parameters include the two-dimensional displacement of the feature point and the number of frames the movement lasts; when it is confirmed that a feature point is to rotate around another feature point, represent its position change by a rotation basic element, whose parameters include the feature point rotated around, the rotation angle in the two-dimensional plane of the image, the rotation angle perpendicular to that plane, and the number of frames the rotation lasts; when it is confirmed that a feature point is to move toward another feature point, represent its position change by a convergence basic element, whose parameters include the target feature point, the movement ratio, and the number of frames the movement lasts.
Further, the calculation module 1002 includes:
an obtaining unit, configured to obtain from the action sequence the action basic element corresponding to each feature point in the current frame, where the current frame refers to the frame that is about to be rendered;
a calculating unit, configured to calculate the position of each feature point in the current frame according to the parameters in the action basic element, corresponding to each feature point in the current frame, obtained by the obtaining unit.
For the specific implementation of the calculation module 1002, see step 803 in Embodiment 2, not repeated here.
Optionally, the obtaining module 1001 is specifically configured to obtain the positions of the feature points of the figure in the original input image and take them as the initial positions of the feature points; or
the obtaining module 1001 is specifically configured to obtain the positions of the feature points of the figure in the image saved when the previous action was completed and take them as the initial positions of the feature points.
For the specific implementation of the obtaining module 1001, see step 802 in Embodiment 2, not repeated here.
Still further, the deformation module 1003 is specifically configured to connect the feature points according to their initial positions and the associations between them to obtain initial feature lines; connect the feature points according to their positions in the current frame and the associations between them to obtain current-frame feature lines; and perform feature-line-based image deformation on the image of each figure region according to the initial feature lines and the current-frame feature lines. For the specific implementation of the deformation module 1003, see step 804 in Embodiment 2, not repeated here.
In this embodiment of the present invention, the posture of the figure in the original input image provided by the user or the system may be arbitrary and is not specifically limited; for a figure in any posture, the device provided by this embodiment can be used to locate the feature points and figure regions, predefine suitable action sequences, and generate figure animation. However, to make the figure animation smoother, support more actions, achieve better results, and keep the predefinition of action sequences simple, the original input image may be required to be an image of a figure standing at attention with both hands hanging naturally. If the input figure image is not in this posture, the method provided by this embodiment can first deform the image into one, save the deformed image as the default image from which subsequent animation frames are deformed, and generate the action sequences from this default image.
In summary, the embodiments of the present invention take as input an image containing a body such as a human or an animal, locate the feature points and body regions of the body on it, and perform image inpainting on the body regions; the repaired background image serves as the background of the animation. The position of each feature point in every frame is calculated from the action sequence that is read in, and frame-by-frame image deformation is performed based on the initial positions of the feature points, thereby generating a two-dimensional body animation. Because the method uses a two-dimensional image deformation technique, computing the new positions of the feature points in each frame from the action sequence, forming feature lines, and then deforming the image into a new frame to produce the animation effect, no three-dimensional model needs to be built, which reduces the workload. The method proposes four action basic elements for two-dimensional body animation, which are combined to form actions and action sequences, and the body is driven through the action sequence; a single image can therefore be driven to perform any action simply by modifying the action sequence, without analyzing and clustering body motions in a large number of images, as in the prior art, to obtain different types of motion. The method is therefore simple to implement and computationally light. In addition, the translation basic element can produce some of the exaggerated effects of two-dimensional body animation, such as the example above in which the shoulder feature points are suddenly translated outward so that the body suddenly appears stronger. It should be noted that, when the apparatus for generating body animation provided in the above embodiment generates an animation, the division into the functional modules described above is merely illustrative; in practice, these functions may be assigned to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to perform all or part of the functions described above. In addition, the apparatus for generating body animation provided in the above embodiment and the method embodiments for generating body animation belong to the same concept; for the specific implementation, refer to the method embodiments, and details are not repeated here.
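The four action basic elements summarized above (static, translation, rotation, convergence) can be sketched as small position-update rules. In the sketch below the class names, parameter layout, and linear interpolation over the frame count are assumptions for illustration, and the out-of-plane rotation angle the document also mentions is omitted for brevity.

```python
import math

class Static:
    """The feature point stays where it is for `frames` frames."""
    def __init__(self, frames):
        self.frames = frames
    def position(self, start, frame, points):
        return start

class Translate:
    """Move by the 2-D displacement (dx, dy) over `frames` frames."""
    def __init__(self, dx, dy, frames):
        self.dx, self.dy, self.frames = dx, dy, frames
    def position(self, start, frame, points):
        t = min(frame, self.frames) / self.frames
        return (start[0] + self.dx * t, start[1] + self.dy * t)

class Rotate:
    """Orbit the feature point `center_id` by `angle` radians in the image
    plane over `frames` frames (the out-of-plane angle is omitted here)."""
    def __init__(self, center_id, angle, frames):
        self.center_id, self.angle, self.frames = center_id, angle, frames
    def position(self, start, frame, points):
        cx, cy = points[self.center_id]
        a = self.angle * min(frame, self.frames) / self.frames
        x, y = start[0] - cx, start[1] - cy
        return (cx + x * math.cos(a) - y * math.sin(a),
                cy + x * math.sin(a) + y * math.cos(a))

class Converge:
    """Close `ratio` of the gap to the target feature point over `frames` frames."""
    def __init__(self, target_id, ratio, frames):
        self.target_id, self.ratio, self.frames = target_id, ratio, frames
    def position(self, start, frame, points):
        tx, ty = points[self.target_id]
        t = self.ratio * min(frame, self.frames) / self.frames
        return (start[0] + (tx - start[0]) * t, start[1] + (ty - start[1]) * t)

def frame_positions(initial, action, frame):
    """Positions of all feature points at the given (1-based) frame of one
    action, where `action` maps each point id to its action basic element."""
    return {pid: elem.position(initial[pid], frame, initial)
            for pid, elem in action.items()}
```

For instance, the exaggeration mentioned above, in which the shoulder feature points are suddenly translated outward over a single frame, would correspond to something like `Translate(dx, 0, 1)` applied to each shoulder point.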
The serial numbers of the above embodiments of the present invention are for description only and do not imply any ranking of the embodiments.
A person of ordinary skill in the art will understand that all or part of the steps of the above embodiments may be implemented in hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present invention shall fall within its protection scope.

Claims

1. A method for generating a body animation, characterized in that the method comprises:
acquiring initial positions of feature points of a body in an image;
reading an action sequence of the feature points, and calculating positions of the feature points in a current frame according to the action sequence; performing image deformation on images of body regions in the image according to the initial positions of the feature points and the positions of the feature points in the current frame; and
overlaying the images of the deformed body regions on a background image in the image, to generate an animation.
2. The method according to claim 1, characterized in that, before acquiring the initial positions of the feature points of the body in the image, the method further comprises:
scanning an original input image to obtain feature points and body regions predefined for the body in the image;
displaying the predefined feature points and body regions on the image; and
saving the positions obtained after the user drags the displayed predefined feature points and body regions to the corresponding positions on the body in the image for precise positioning.
3. The method according to claim 2, characterized in that, after saving the positions obtained after the user drags the displayed predefined feature points and body regions to the corresponding positions on the body in the image for precise positioning, the method further comprises: performing image inpainting on the image according to the saved positions of the body regions precisely positioned by the user, to obtain images of the body regions and a background image.
4. The method according to claim 1, characterized in that, before reading the action sequence of the feature points, the method further comprises:
predefining action basic elements, wherein an action basic element represents a change in the position of a feature point and the number of frames over which the change lasts, and the action basic elements comprise a static basic element, a translation basic element, a rotation basic element, and a convergence basic element;
predefining actions according to the action basic elements and the feature points of the body, wherein each feature point corresponds to one action basic element in an action; and
combining the predefined actions into the action sequence of the feature points.
5. The method according to claim 4, characterized in that predefining actions according to the action basic elements and the feature points of the body comprises:
confirming, one by one, the position change of each feature point of the body in the action;
when it is confirmed that the position of the feature point does not change, representing the position change of the feature point with the static basic element; when it is confirmed that the feature point is to be translated, representing the position change of the feature point with the translation basic element, wherein parameters of the translation basic element comprise a two-dimensional displacement of the feature point and the number of frames the movement lasts;
when it is confirmed that the feature point is to rotate around another feature point, representing the position change of the feature point with the rotation basic element, wherein parameters of the rotation basic element comprise the feature point being orbited, a rotation angle in the two-dimensional plane of the image, a rotation angle in the direction perpendicular to the two-dimensional plane, and the number of frames the rotation lasts; and
when it is confirmed that the feature point is to move toward another feature point, representing the position change of the feature point with the convergence basic element, wherein parameters of the convergence basic element comprise a target feature point, a movement ratio, and the number of frames the movement lasts.
6. The method according to claim 5, characterized in that calculating the positions of the feature points in the current frame according to the action sequence comprises:
obtaining, from the action sequence, the action basic element corresponding to each feature point in the current frame; and
calculating the position of each feature point in the current frame according to the parameters in the action basic element corresponding to that feature point.
7. The method according to claim 1, characterized in that acquiring the initial positions of the feature points of the body in the image comprises:
acquiring positions of the feature points of the body in an original input image, and using the acquired positions as the initial positions of the feature points;
or,
acquiring positions of the feature points of the body in an image saved when the previous action was completed, and using the acquired positions as the initial positions of the feature points.
8. The method according to claim 1, characterized in that performing image deformation on the images of the body regions in the image according to the initial positions of the feature points and the positions of the feature points in the current frame comprises:
connecting the feature points according to their initial positions and the associations between the feature points, to obtain initial feature lines;
connecting the feature points according to their positions in the current frame and the associations between the feature points, to obtain current-frame feature lines; and
performing feature-line-based image deformation on the image of each body region in the image according to the initial feature lines and the current-frame feature lines.
9. The method according to any one of claims 1 to 8, characterized in that the body in the image is one of a human body, an animal, and a cartoon figure.
10. An apparatus for generating a body animation, characterized in that the apparatus comprises:
an acquiring module, configured to acquire initial positions of feature points of a body in an image;
a calculating module, configured to read an action sequence of the feature points and calculate positions of the feature points in a current frame according to the action sequence;
a deformation module, configured to perform image deformation on images of body regions in the image according to the initial positions of the feature points acquired by the acquiring module and the positions of the feature points in the current frame calculated by the calculating module; and
a generating module, configured to overlay the images of the body regions deformed by the deformation module on a background image in the image, to generate an animation.
11. The apparatus according to claim 10, characterized in that the apparatus further comprises:
a first predefining module, configured to scan an original input image before the acquiring module acquires the initial positions of the feature points of the body in the image, to obtain feature points and body regions predefined for the body in the image; and
a processing module, configured to display the feature points and body regions predefined by the first predefining module on the image, and to save the positions obtained after the user drags the displayed predefined feature points and body regions to the corresponding positions on the body in the image for precise positioning.
12. The apparatus according to claim 11, characterized in that the apparatus further comprises:
an inpainting module, configured to perform image inpainting on the image according to the positions, saved by the processing module, of the body regions precisely positioned by the user, to obtain images of the body regions and a background image.
13. The apparatus according to claim 10, characterized in that the apparatus further comprises:
a second predefining module, configured to predefine action basic elements before the calculating module reads the action sequence of the feature points, wherein an action basic element represents a change in the position of a feature point and the number of frames over which the change lasts, and the action basic elements comprise a static basic element, a translation basic element, a rotation basic element, and a convergence basic element;
a third predefining module, configured to predefine actions according to the action basic elements predefined by the second predefining module and the feature points of the body, wherein each feature point corresponds to one action basic element in an action; and
a combining module, configured to combine the actions predefined by the third predefining module into the action sequence of the feature points.
14. The apparatus according to claim 13, characterized in that the third predefining module is specifically configured to: confirm, one by one, the position change of each feature point of the body in the action; when it is confirmed that the position of the feature point does not change, represent the position change of the feature point with the static basic element; when it is confirmed that the feature point is to be translated, represent the position change of the feature point with the translation basic element, wherein parameters of the translation basic element comprise a two-dimensional displacement of the feature point and the number of frames the movement lasts; when it is confirmed that the feature point is to rotate around another feature point, represent the position change of the feature point with the rotation basic element, wherein parameters of the rotation basic element comprise the feature point being orbited, a rotation angle in the two-dimensional plane of the image, a rotation angle in the direction perpendicular to the two-dimensional plane, and the number of frames the rotation lasts; and when it is confirmed that the feature point is to move toward another feature point, represent the position change of the feature point with the convergence basic element, wherein parameters of the convergence basic element comprise a target feature point, a movement ratio, and the number of frames the movement lasts.
15. The apparatus according to claim 14, characterized in that the calculating module comprises:
an obtaining unit, configured to obtain, from the action sequence, the action basic element corresponding to each feature point in the current frame; and
a calculating unit, configured to calculate the position of each feature point in the current frame according to the parameters in the action basic element, obtained by the obtaining unit, corresponding to that feature point.
16. The apparatus according to claim 10, characterized in that the acquiring module is specifically configured to: acquire positions of the feature points of the body in an original input image, and use the acquired positions as the initial positions of the feature points; or acquire positions of the feature points of the body in an image saved when the previous action was completed, and use the acquired positions as the initial positions of the feature points.
17. The apparatus according to claim 10, characterized in that the deformation module is specifically configured to: connect the feature points according to their initial positions and the associations between the feature points, to obtain initial feature lines; connect the feature points according to their positions in the current frame and the associations between the feature points, to obtain current-frame feature lines; and perform feature-line-based image deformation on the image of each body region in the image according to the initial feature lines and the current-frame feature lines.
18. The apparatus according to any one of claims 10 to 17, characterized in that the body in the image is one of a human body, an animal, and a cartoon figure.
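Claims 4 to 7 together describe combining predefined actions into an action sequence, evaluating it frame by frame, and taking the positions saved when one action completes as the initial positions for the next. A self-contained sketch of that sequencing layer, restricted to static and translation elements and with all data structures assumed for illustration, might be:

```python
def play_sequence(initial, sequence):
    """Yield feature-point positions for every frame of an action sequence.

    Each action maps a point id to ("static" | "translate", (dx, dy), frames);
    when an action finishes, its end positions become the next action's
    initial positions (cf. claim 7).
    """
    points = dict(initial)
    for action in sequence:
        total = max(frames for (_kind, _delta, frames) in action.values())
        start = dict(points)  # initial positions for this action
        for frame in range(1, total + 1):
            for pid, (kind, (dx, dy), frames) in action.items():
                if kind == "translate":
                    t = min(frame, frames) / frames
                    points[pid] = (start[pid][0] + dx * t,
                                   start[pid][1] + dy * t)
                # "static" leaves the point at its start position
            yield dict(points)
```

For example, `list(play_sequence({"hand": (0.0, 0.0)}, [{"hand": ("translate", (2.0, 0.0), 2)}, {"hand": ("translate", (0.0, 2.0), 2)}]))` yields four frames ending with the hand at `(2.0, 2.0)`.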
PCT/CN2011/077083 2011-07-12 2011-07-12 Method and device for generating body animation WO2012167475A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2011/077083 WO2012167475A1 (en) 2011-07-12 2011-07-12 Method and device for generating body animation
CN201180001326.0A 2011-07-12 2011-07-12 Method and device for generating body animation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2011/077083 WO2012167475A1 (en) 2011-07-12 2011-07-12 Method and device for generating body animation

Publications (1)

Publication Number Publication Date
WO2012167475A1 true WO2012167475A1 (en) 2012-12-13

Family

ID=47295365

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2011/077083 WO2012167475A1 (en) 2011-07-12 2011-07-12 Method and device for generating body animation

Country Status (2)

Country Link
CN (1) CN103052973B (en)
WO (1) WO2012167475A1 (en)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111597979B (en) * 2018-12-17 2023-05-12 北京嘀嘀无限科技发展有限公司 Target object clustering method and device
CN110473248A (en) * 2019-08-16 2019-11-19 上海索倍信息科技有限公司 A kind of measurement method using picture construction human 3d model
CN110490958B (en) * 2019-08-22 2023-09-01 腾讯科技(深圳)有限公司 Animation drawing method, device, terminal and storage medium
CN113556600B (en) * 2021-07-13 2023-08-18 广州虎牙科技有限公司 Drive control method and device based on time sequence information, electronic equipment and readable storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI220234B (en) * 2003-10-21 2004-08-11 Ind Tech Res Inst A method to simulate animated images for an object
US20070035541A1 (en) * 2005-07-29 2007-02-15 Michael Isner Three-dimensional animation of soft tissue of characters using controls associated with a surface mesh
CN101082985A (en) * 2006-12-15 2007-12-05 浙江大学 Decompounding method for three-dimensional object shapes based on user easy interaction
WO2008141125A1 (en) * 2007-05-10 2008-11-20 The Trustees Of Columbia University In The City Of New York Methods and systems for creating speech-enabled avatars
CN101354795A (en) * 2008-08-28 2009-01-28 北京中星微电子有限公司 Method and system for driving three-dimensional human face cartoon based on video
US20090153569A1 (en) * 2007-12-17 2009-06-18 Electronics And Telecommunications Research Institute Method for tracking head motion for 3D facial model animation from video stream
CN101473352A (en) * 2006-04-24 2009-07-01 索尼株式会社 Performance driven facial animation
CN101777195A (en) * 2010-01-29 2010-07-14 浙江大学 Three-dimensional face model adjusting method
CN101826217A (en) * 2010-05-07 2010-09-08 上海交通大学 Rapid generation method for facial animation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102074033B (en) * 2009-11-24 2015-07-29 新奥特(北京)视频技术有限公司 A kind of animation method and device


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106251389A (en) * 2016-08-01 2016-12-21 北京小小牛创意科技有限公司 The method and apparatus making animation
WO2018024089A1 (en) * 2016-08-01 2018-02-08 北京小小牛创意科技有限公司 Animation creation method and device

Also Published As

Publication number Publication date
CN103052973A (en) 2013-04-17
CN103052973B (en) 2015-12-02

Similar Documents

Publication Publication Date Title
US11348314B2 (en) Fast and deep facial deformations
CN111194550B (en) Processing 3D video content
CN101303772A (en) Method for modeling non-linear three-dimensional human face based on single sheet image
KR101148101B1 (en) Method for retargeting expression
US11928778B2 (en) Method for human body model reconstruction and reconstruction system
WO2012167475A1 (en) Method and device for generating body animation
CN112734890A (en) Human face replacement method and device based on three-dimensional reconstruction
CN104778736A (en) Three-dimensional garment animation generation method driven by single video content
EP1627282A2 (en) Rig baking
US11769309B2 (en) Method and system of rendering a 3D image for automated facial morphing with a learned generic head model
Orvalho et al. Transferring the rig and animations from a character to different face models
CN115272608A (en) Human hand reconstruction method and equipment
CN115457171A (en) Efficient expression migration method adopting base expression space transformation
Li et al. Animating cartoon faces by multi‐view drawings
Huang et al. Detail-preserving controllable deformation from sparse examples
CN111862287A (en) Eye texture image generation method, texture mapping method, device and electronic equipment
CN115908664B (en) Animation generation method and device for man-machine interaction, computer equipment and storage medium
US20230196702A1 (en) Object Deformation with Bindings and Deformers Interpolated from Key Poses
CN116958450B (en) Human body three-dimensional reconstruction method for two-dimensional data
Ono et al. 3D character model creation from cel animation
US20210074076A1 (en) Method and system of rendering a 3d image for automated facial morphing
Niiro et al. Assembling a Pipeline for 3D Face Interpolation
CN117456099A (en) Method for generating character rendering model from a group of action sequences based on key frames
Yao et al. Neural Radiance Field-based Visual Rendering: A Comprehensive Review
Haetinger Fast iterative inversion for non-linear vector fields applied to images, videos and volumes

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 201180001326.0

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11867231

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11867231

Country of ref document: EP

Kind code of ref document: A1