WO2002017234A1 - Apparatus and method for generating synthetic face image based on shape information about face image - Google Patents


Info

Publication number
WO2002017234A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
information
texture
shape
face image
Prior art date
Application number
PCT/KR2001/001167
Other languages
French (fr)
Inventor
Seong Whan Lee
Chang-Yu Lu
Bon Woo Hwang
Original Assignee
Virtualmedia Co., Ltd
Priority date
Filing date
Publication date
Application filed by Virtualmedia Co., Ltd filed Critical Virtualmedia Co., Ltd
Priority to JP2002521224A priority Critical patent/JP2004506996A/en
Priority to AU2001269581A priority patent/AU2001269581A1/en
Publication of WO2002017234A1 publication Critical patent/WO2002017234A1/en


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration

Definitions

  • the present invention relates to an apparatus and method for generating a synthetic face image, and more particularly, to an apparatus and method for generating a synthetic face image based on shape information about an input face image.
  • A face image has generally been used as a medium that shows personal character most fully and allows easy and natural communication.
  • Such a face image is used in applications such as exit and entry control/security systems, criminal searching/montage composing systems, computer interfacing, animations, and games.
  • Representative applications using technology of synthesizing face images are generation of a character image and makeup design.
  • A caricature of a face image, which is a sort of character image, is produced based on the features of a person's face. Accordingly, a caricature of a face image is not only used for producing animations or entertainment programs but is also used as a symbol or icon representing a person, as a unique signature in PC communications or e-mail, and as an avatar of a user in virtual reality.
  • An image processing technique using digital filters can be used only for images picked up in an environment in which lighting and backgrounds are limited, that is, for simple two-dimensional images in which an object is not discriminated from the background, so the quality of output images depends on lighting or other changes in the environment. No proper methods for supplementing this technique have been proposed. Moreover, since shape information about an object is not separately generated in a conventional caricature generation method, it is very complex to perform a revising operation such as exaggerating the features of the face in a generated caricature or changing a facial expression, and it is nearly impossible to restore a face image or extend the generated caricature to a three-dimensional avatar.
  • It is an object of the present invention to provide an apparatus and method for generating a synthetic face image based on shape information about a face image, in which shape information about a face is extracted from an input face image, and a synthetic face image is generated based on the extracted shape information, so that a user can obtain a natural and elaborate caricature image, view a makeup design image using his/her face image in advance, easily add a variety of accessories to the synthetic image, easily change the synthetic image, and view the result of adding accessories to or changing the synthetic image in real time.
  • An apparatus for synthesizing a new face image based on shape information about an input face image includes a user interfacing device for receiving face image information and a user control command, transmitting the same to an image processing device, receiving synthesized face image information from the image processing device, and outputting or storing the synthesized face image information according to the user control command; and an image processing device for extracting shape information, which is represented by a deformation field with respect to a predetermined reference image, and texture information, which is color or brightness information about an input image mapped on the reference image, about an input face image from face image information transmitted from the user interfacing device, and for transforming a texture image selected from among texture images, which are stored in an image database in advance and have the same shape as that of the reference image, or an image generated by performing a weighted summation of the selected texture image and a texture image reflecting the extracted texture information, using the shape information about the input face image according to the user control command, thereby generating a synthetic face image.
  • The method includes the steps of (a) extracting shape information, which is represented by a deformation field with respect to shape information about a predetermined reference image, and texture information, which is color or brightness information about an input image mapped on the reference image, about an input face image from input face image information; and (b) transforming a texture image selected from among texture images, which are stored in an image database in advance and have the same shape as that of the reference image, or an image generated by performing a weighted summation of the selected texture image and a texture image reflecting the extracted texture information, using the shape information about the input face image according to a user control command, thereby generating a synthetic face image.
  • FIG. 1A is a block diagram of the functional configuration of a first embodiment of a synthetic face image generating apparatus according to the present invention;
  • FIG. 1B is a block diagram of the functional configuration of a second embodiment of a synthetic face image generating apparatus according to the present invention;
  • FIG. 2 is a block diagram of the mechanical configuration of a computer system in which the first and second embodiments of the present invention are implemented;
  • FIG. 3 is a basic flowchart of a procedure of generating a synthetic face image according to the present invention.
  • FIG. 4 is a detailed flowchart of the step of extracting face information shown in FIG. 3;
  • FIG. 5 is a flowchart of a procedure of generating a caricature image according to a face image synthesizing method performed by an apparatus for generating a synthetic face image according to the present invention;
  • FIG. 6 is a flowchart of a procedure of generating a caricature image according to a sample image replacing method performed by an apparatus for generating a synthetic face image according to the present invention.
  • FIG. 7 is a flowchart of a makeup design procedure performed by an apparatus for generating a synthetic face image according to the present invention.
  • FIGS. 1A and 1B are block diagrams of a first embodiment 1 and a second embodiment 40, respectively, of a synthetic face image generating apparatus based on shape information about a face image according to the present invention.
  • The first embodiment 1 shown in FIG. 1A includes one or more user interfacing devices 10a and 10b, a communication network 20, and an image processing device 30, thereby operating in a network environment.
  • The second embodiment 40 shown in FIG. 1B includes a user interfacing device 50 and an image processing device 60, thereby operating in a single computer system.
  • The user interfacing devices 10a and 10b and the image processing device 30 of the first embodiment 1, and the second embodiment 40, are realized as a computer system 70 including a computer 72 with at least one central processing unit (CPU) 74 and a memory unit 73, an input device 75, and an output device 76, as shown in FIG. 2.
  • the elements of the computer system 70 are connected to one another through one or more bus structures 77.
  • The CPU 74 includes an arithmetic logic unit (ALU) 741 for performing arithmetical and logical operations, a register set 742 for temporarily storing data and commands, and a control unit 743 for controlling the operations of the computer system 70.
  • the CPU 74 used in the present invention is not restricted to a particular structure manufactured by a particular company but can be any type of processor having the above basic structure.
  • the memory unit 73 includes a high-speed main memory 731 and an auxiliary memory 732 for storing data for a long time.
  • the main memory 731 may include a random access memory (RAM) semiconductor chip and a read only memory (ROM) semiconductor chip.
  • the auxiliary memory 732 may include a floppy disc, a hard disc, a CD-ROM, a flash memory, and a device for storing data using electricity, magnetism, light or other recording medium.
  • the main memory 731 may include a video display memory for displaying an image through a display device. It will be understood by those skilled in the field of the present invention that the memory unit 73 can include a variety of replaceable elements that have different storing performance.
  • the input device 75 may include a keyboard, a mouse, and a physical converter (for example, a microphone).
  • the output device 76 may include a display unit, a printer, and a physical converter (for example, a speaker).
  • a network interface, a modem, or the like can be used as an input/output device.
  • the computer system 70 is provided with an operating system and at least one application program.
  • An operating system is a series of software controlling the operations of the computer system 70 and assignment of resources.
  • An application program is a series of software performing work, which is requested by a user, using available computer resources through the operating system. This software is stored in the memory unit 73. Consequently, a computer-basis synthetic face image generating apparatus according to the present invention is realized as the combination of the computer system 70 and one or more application programs which are installed and operate in the computer system 70.
  • The first embodiment 1 shown in FIG. 1A has the same functions as the second embodiment 40 shown in FIG. 1B, with the exception that the first embodiment 1 further includes communication processors 14 and 31 for data transmission through the communication network 20. Thus, the present invention will be described on the basis of the first embodiment 1.
  • each of the user interfacing devices 10a and 10b receives face image information and a user control command from a user, receives a synthesized image in response to the user control command, and revises, stores or outputs the synthesized image.
  • Each of the user interfacing devices 10a and 10b includes an image information input unit 11, a user command input unit 12, an input/output controller 13, a communication processor 14, an image revision unit 15, an image storage unit 16, and an output unit 17.
  • the image information input unit 11 receives face image information from a user and can be realized as, for example, a scanner or a digital camera.
  • the image information input unit 11 may include a plurality of cameras for receiving images picked up at different angles and a camera supplementary device such as a light adjustor. Since the image information input unit 11 as an element of the present invention should be considered in the functional aspect, the image information input unit 11 should be largely construed as including not only the input device 75 of FIG. 2 but also the auxiliary memory 732 of FIG. 2 which stores face image information in advance.
  • the user command input unit 12 receives user control commands (for example, user information, a face image synthesis control signal, and an image revision control signal) from a user.
  • the user command input unit 12 can be realized as a device such as a keyboard, a mouse or a touch screen through which a user can input a command or information.
  • the input/output controller 13 controls face image information input through the image information input unit 11 and a user control command input through the user command input unit 12 to be transmitted to the image processing device 30 through the communication processor 14.
  • the input/output controller 13 also receives new image information which is synthesized by the image processing device 30 in response to the user control command and controls the new image information to be revised, stored or output.
  • the communication processor 14 is connected to the input/output controller 13 and transmits data to or receives data from the image processing device 30 through the communication network 20.
  • The communication processor 14 can be realized as a device including a serial/parallel port, a universal serial bus (USB) port or an IEEE1394 port for internal connection, and an Ethernet card for transmitting and receiving data including image information through the Internet.
  • the image revision unit 15 is connected to the input/output controller 13 and revises the angle, size or texture of an image corresponding to image information synthesized by and transmitted from the image processing device 30 in response to a user control command input through the user command input unit 12.
  • the image storage unit 16 corresponds to the auxiliary memory 732 of FIG. 2 and stores new image information synthesized by and transmitted from the image processing device 30 or image information revised by the image revision unit 15 under the control of the input/output controller 13.
  • the output unit 17 corresponds to the output device 76 of FIG. 2. Under the control of the input/output controller, the output unit 17 displays user interface display information used for receiving a user control command required when the image processing device 30 synthesizes a new image and displays or prints new image information synthesized by and transmitted from the image processing device 30 or image information revised by the image revision unit 15.
  • the communication network 20 transmits data between one or more user interfacing devices 10a and 10b and the image processing device 30 in the first embodiment 1 of the present invention shown in FIG. 1A.
  • the communication network 20 can be realized as one of a variety of types of networks such as wire/wireless Internet, local area networks and private lines.
  • the image processing device 30 processes image information transmitted from one or more user interfacing devices 10a and 10b, and synthesizes a new image based on the image information in response to a user control command, and transmits the synthesized image to a corresponding user interfacing device.
  • The image processing device 30 includes a communication processor 31, an image processor 32, and an image database (DB) 33.
  • the communication processor 31 transmits data to and receives data from one or more user interfacing devices 10a and 10b.
  • The communication processor 31 can be realized as a device including an Ethernet card for transmitting and receiving data including image information through the Internet, and a serial/parallel port, a universal serial bus (USB) port or an IEEE1394 port for internal connection.
  • The image processor 32 extracts shape information about an input face image, indicated as a deformation field with respect to a reference image, and texture information, i.e., color or brightness information, about the input face image mapped on the reference image from face image information transmitted from the user interfacing devices 10a and 10b, analyzes a user control command transmitted from the user interfacing devices 10a and 10b to interpret the user's request, and synthesizes a new face image using the extracted shape information, the extracted texture information and a variety of images stored in the image DB 33 according to the interpreted request.
  • The image processor 32 includes a face information extractor 321, a face image synthesizer 322, a partial image replacing unit 323, and an accessory image adding unit 324.
  • The face information extractor 321 extracts, from face image information transmitted from the user interfacing devices 10a and 10b, shape information about an input face image, indicated as a deformation field with respect to a reference image, and texture information, i.e., color or brightness information, of the input face image mapped on the reference image using the shape information.
  • The face image synthesizer 322 synthesizes a new face image by transforming a texture image selected from among texture images stored in the image DB 33 in response to a user control command, or transforming an image, which is generated by performing a weighted summation of the selected texture image and a texture image reflecting texture information extracted by the face information extractor 321, using shape information about an input image, which is extracted by the face information extractor 321.
  • the partial image replacing unit 323 replaces a portion or the entire area of a new face image synthesized by the face image synthesizer 322 with a sample image having the highest similarity among sample images stored in the image DB 33.
  • the accessory image adding unit 324 adds an accessory image, which is selected from among accessory images stored in the image DB 33 according to a user control command, to a face image synthesized by the face image synthesizer 322.
  • the image DB 33 previously stores image information necessary for processing an input face image in the image processor 32.
  • The image DB 33 includes a face model DB 331, an additional image DB 332, a sample image DB 333, a makeup image DB 334, and an accessory image DB 335.
  • the face model DB 331 stores various types of information (a shape average, a texture average, shape eigenvectors, and texture eigenvectors which are previously obtained with respect to a plurality of model face images) used for extracting shape information and texture information, based on a reference image, from an input face image in the face information extractor 321.
  • the various types of information stored in the face model DB 331 will be described in detail later when FIG. 4 is described.
  • The additional image DB 332 stores information about caricature images which have the same shape as that of a reference image but have different styles of texture information, such as an animation style, a sketch style, and a watercolor painting style.
  • the sample image DB 333 stores information about various caricature sample images including changes in a shape and a facial expression at each particular portion of a face image.
  • the makeup image DB 334 stores information about makeup design images which have the same shape as that of a reference image and are expressed with texture information expressing a variety of sample makeups.
  • the accessory image DB 335 stores information about images such as glasses, hairstyles, hats, earrings, and bodies which can be added to a synthesized face image.
  • one or more user interfacing devices 10a and 10b are connected to a single image processing device 30 through the communication processors 14 and 31 and the communication network 20.
  • the user interfacing device 50 and the image processing device 60 can be integrated and operated within a single computer system 70.
  • the face information extractor 321 or 621 of the image processor 32 or 62 receives an input face image from the user interfacing device 10a, 10b or 50 in step S10, and extracts shape information about the input face image with respect to a predetermined reference image and texture information, i.e., color or brightness information, of the input face image mapped on the reference image in step S11.
  • In step S12, the face image synthesizer 322 or 622 of the image processor 32 or 62 synthesizes a new face image using the texture information based on the reference image and the shape information about the input face image extracted by the face information extractor 321 or 621, in response to a user control command (a face image synthesis control signal) received from the user interfacing device 10a, 10b or 50.
  • The face image synthesizer 322 or 622 restores the shape of the input face image using the extracted shape information and warps the extracted texture information onto the restored shape, thereby synthesizing the user's face image.
  • By changing the texture information based on the reference image which is used for synthesizing a face image, different new images having the shape of the input face image can be synthesized.
  • the synthesized face image is transmitted to the user interfacing device 10a, 10b or 50 and displayed through the output unit 17 or 57.
  • the user command input unit 12 or 52 of the user interfacing device 10a, 10b or 50 receives a user control command indicating whether to change the shape information about the displayed face image from the user in step S13.
  • If a change is requested, the shape information about the input face image is changed according to the user control command (for example, a control signal instructing a partial transform, such as enlarging or reducing a particular portion of the displayed face image by dragging the portion with a mouse, or a control signal instructing an entire transform, such as exaggerating the entire face using a slide bar), and the operation goes back to step S12 to synthesize a new face image.
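  • The "entire transform" above can be sketched as a simple scaling of the deformation field: because the shape information is stored as per-pixel displacements from the reference image, multiplying the field by a slider-controlled factor greater than 1 amplifies every deviation from the average face. This is an illustrative sketch only, not the patented implementation; the array layout (H x W x 2) and the factor name alpha are assumptions.

```python
import numpy as np

def exaggerate_shape(shape_field: np.ndarray, alpha: float) -> np.ndarray:
    """Scale a deformation field (H x W x 2, per-pixel dy/dx offsets
    from the reference image) by a slider-controlled factor.

    alpha = 1.0 leaves the face unchanged; alpha > 1.0 exaggerates
    every deviation from the reference (average) face; alpha < 1.0
    moves the face toward the average.
    """
    return alpha * shape_field

# Example: a 2x2 field with a single displaced point.
field = np.zeros((2, 2, 2))
field[0, 0] = [3.0, -1.0]                  # this point sits off the reference
print(exaggerate_shape(field, 1.5)[0, 0])  # the offset grows to [4.5, -1.5]
```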
  • the accessory image adding unit 324 or 624 adds various accessory images stored in the image DB 33 or 63 to the face image synthesized in step S12, or the partial image replacing unit 323 or 623 replaces a particular portion of the face image synthesized in step S12 with various sample images stored in the image DB 33 or 63, thereby adding various additional effects to the synthesized face image in step S14. Thereafter, the face image synthesized by the image processing device 30 or 60 is transmitted to the user interfacing device 10a, 10b or 50 and displayed for the user.
  • the image revision unit 15 or 55 finally revises the synthesized face image in response to a user control command (an image revision control signal) received through the user command input unit 12 or 52 in step S15.
  • the synthesized face image revised by the image revision unit 15 or 55 is stored in the image storage unit 16 or 56, or displayed or printed by the output unit 17 or 57 in step S16.
  • The face information extracting step S11 of FIG. 3 can be summarized as a procedure of obtaining shape information S_in and texture information T_in from an input face image based on a face model.
  • shape information about a face image is expressed by a deformation field with respect to a reference image
  • texture information about a face image is expressed by color or brightness information about an input image mapped on the reference image.
  • Texture information T_in about a face image is defined as the color or brightness value of each point on an input image with respect to each corresponding point p_i (i = 1, ..., n) on a reference image.
  • a synthetic image obtained using a shape average and a texture average is used as a reference image in an embodiment of the present invention
  • a reference image which can be used in the present invention is not restricted to the above embodiment. Any one among m face images prepared previously can be used as a reference image.
  • Face models stored in the face model DB 331 or 631 are previously obtained according to the following procedure.
  • The shape average S̄, the texture average T̄, the shape eigenvectors s_i (i = 1, ..., m-1), and the texture eigenvectors t_i (i = 1, ..., m-1) are stored in the face model DB 331 or 631 and used for extracting shape information and texture information about an input face image.
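  • As an illustration, the stored averages and eigenvectors can be computed offline with a standard principal component analysis over the m model faces. The sketch below assumes each model face has already been flattened into a vector; the patent does not prescribe a particular eigen-decomposition routine, so the SVD approach here is an assumption.

```python
import numpy as np

def build_face_model(vectors: np.ndarray):
    """PCA over m model faces.

    vectors: (m, d) array, one flattened shape (or texture) vector per
    model face.  Returns the average vector and the m-1 eigenvectors
    later used for linear decomposition / superposition.
    """
    m = vectors.shape[0]
    mean = vectors.mean(axis=0)
    centered = vectors - mean
    # SVD of the centered data yields the eigenvectors of the covariance
    # matrix without forming the d x d matrix explicitly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigvecs = vt[: m - 1]          # at most m-1 informative directions
    return mean, eigvecs

rng = np.random.default_rng(0)
shapes = rng.normal(size=(10, 64))      # 10 toy model faces, d = 64
shape_avg, shape_eigvecs = build_face_model(shapes)
print(shape_eigvecs.shape)              # (9, 64)
```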
  • In step S111 of FIG. 4, the input face image is normalized. More specifically, predetermined feature points (for example, the central point of each pupil and the central point of the lips) are extracted from the input face image, and then the input face image is moved up, down, to the left, and to the right, and its size is adjusted, so that the extracted feature points of the input face image are located at the corresponding feature points of a reference image.
  • Such an image normalizing step can be automatically performed through predetermined software or can be manually performed in response to a control command input by the user. Since a detailed procedure thereof is beyond the scope of the present invention, a description thereof will be omitted.
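  • One way such a normalization could be implemented is a least-squares scale-and-translation fit that moves the extracted feature points onto the reference feature points; this is a hypothetical sketch, since the patent leaves the alignment routine to predetermined software or manual control.

```python
import numpy as np

def normalize_points(points: np.ndarray, ref_points: np.ndarray):
    """Find the scale s and translation t that best move the extracted
    feature points (e.g. pupil centers, lip center) onto the reference
    feature points, in the least-squares sense.

    points, ref_points: (k, 2) arrays of (x, y) coordinates.
    Returns (s, t); the whole image would then be resampled with the
    same transform: p' = s * p + t.
    """
    c_in, c_ref = points.mean(axis=0), ref_points.mean(axis=0)
    d_in = points - c_in
    d_ref = ref_points - c_ref
    s = np.sum(d_in * d_ref) / np.sum(d_in * d_in)   # least-squares scale
    t = c_ref - s * c_in
    return s, t

# Toy check: the input points are the reference points shrunk by half
# and shifted, so the fit should recover a scale of about 2.
ref = np.array([[30.0, 40.0], [70.0, 40.0], [50.0, 80.0]])
inp = 0.5 * ref + np.array([10.0, 5.0])
s, t = normalize_points(inp, ref)
print(s)   # approximately 2.0, undoing the half-size input
```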
  • In step S112, shape information is estimated. More specifically, a hierarchical, gradient-based optical flow algorithm (Lucas and Kanade) is applied to the normalized input face image obtained in step S111 and to the reference image (or a synthetic texture estimate image T_in,j-1 having the same shape as that of the reference image), thereby estimating shape information S_in_pre,j (the value of the positional difference between points on the normalized input face image and corresponding points on the reference image) based on the reference image.
  • The shape information S_in_pre,j obtained according to the hierarchical, gradient-based optical flow algorithm in step S112 may include an error value due to light or shadow in the input face image. Accordingly, in step S113, compensation is performed on the shape information S_in_pre,j.
  • The error value is compensated for by sequentially performing linear decomposition based on the shape eigenvectors s_i (i = 1, ..., m-1) and linear superposition with respect to the shape information S_in_pre,j, thereby obtaining the error-compensated shape information S_in_correct,j.
  • The resulting shape information S_in,j is a weighted sum of the estimated shape information S_in_pre,j obtained in step S112 and the compensated shape information S_in_correct,j obtained in step S113, as calculated by Equation (2).
  • In step S114, backward warping is performed. More specifically, the input face image is transformed into the shape of the reference image using the shape information S_in,j obtained in step S113. This procedure is referred to as "backward warping".
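  • A minimal sketch of backward warping, under the assumption that the deformation field stores, for each reference-image pixel, the (dy, dx) offset to the corresponding point on the input image; nearest-neighbour sampling is used here for brevity, since the patent does not fix an interpolation scheme.

```python
import numpy as np

def backward_warp(image: np.ndarray, shape_field: np.ndarray) -> np.ndarray:
    """Transform the (normalized) input image into the reference frame.

    For every pixel (y, x) of the reference image, the deformation field
    gives the offset (dy, dx) to the corresponding point on the input
    image, so we sample the input there (nearest neighbour for brevity).
    """
    h, w = shape_field.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(ys + shape_field[..., 0]).astype(int), 0, image.shape[0] - 1)
    src_x = np.clip(np.rint(xs + shape_field[..., 1]).astype(int), 0, image.shape[1] - 1)
    return image[src_y, src_x]

# A field of all (0, 1) offsets samples every pixel one step to the right.
img = np.arange(16.0).reshape(4, 4)
field = np.zeros((4, 4, 2))
field[..., 1] = 1.0
print(backward_warp(img, field)[0])   # [1. 2. 3. 3.] (clipped at the right edge)
```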
  • In step S115, linear decomposition is performed on texture information about the backward-warped image based on the texture eigenvectors t_i (i = 1, ..., m-1), and then linear superposition is performed on the result of the linear decomposition, thereby obtaining texture information T_in,j about the input face image.
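  • The linear decomposition and superposition used in steps S113 and S115 amount to projecting the measured vector onto the span of the model eigenvectors and rebuilding it from the resulting coefficients, which suppresses components (such as lighting or shadow artifacts) that no combination of model faces can produce. A sketch, assuming orthonormal eigenvectors stored as rows:

```python
import numpy as np

def decompose_superpose(vec, mean, eigvecs):
    """Constrain a measured shape/texture vector to the face model.

    Linear decomposition: project (vec - mean) onto the orthonormal
    eigenvectors (rows of eigvecs) to get coefficients.  Linear
    superposition: rebuild the vector from those coefficients.
    Components outside the span of the model faces are dropped.
    """
    coeffs = eigvecs @ (vec - mean)        # decomposition
    return mean + eigvecs.T @ coeffs       # superposition

# Toy model: 2 orthonormal eigenvectors in a 4-D space.
mean = np.zeros(4)
eigvecs = np.array([[1.0, 0.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0, 0.0]])
noisy = np.array([2.0, -1.0, 5.0, 3.0])    # last two dims are "error"
print(decompose_superpose(noisy, mean, eigvecs))   # [ 2. -1.  0.  0.]
```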
  • Such a repetition is performed until the vector norm of the change between successive estimates becomes sufficiently small.
  • The input face image can be restored using the shape information S_in and the texture information T_in about the input face image, which are obtained based on the reference image.
  • That is, the texture information T_in about the input face image obtained based on the reference image is transformed according to the shape information S_in about the input face image obtained based on the reference image, thereby synthesizing the input face image.
  • Representative examples in which the characteristics of such a synthetic image can be utilized are generation of a caricature of a face image and makeup design. A method of generating a caricature of a face image is divided into a face image synthesizing method and a sample image replacement method.
  • a procedure of generating a caricature image using the face image synthesizing method performed in the synthetic face image generating apparatuses 1 and 40 based on the shape information about a face image according to the present invention will be described with reference to FIG. 5.
  • The face information extractor 321 or 621 of the image processor 32 or 62 receives an input face image from the user interfacing device 10a, 10b or 50 in step S20, and extracts shape information S_in and texture information T_in about the input face image based on a predetermined reference image in step S21.
  • The face image synthesizer 322 or 622 presents different styles of caricature images (for example, an animation style, a sketch style, and a watercolor painting style) stored in the additional image DB 332 or 632 to a user through the user interfacing device 10a, 10b or 50 so that the user can select a desired style of caricature in step S22.
  • different styles of caricature images stored in the additional image DB 332 or 632 have the same shape as that of the reference image.
  • In step S23, the face image synthesizer 322 or 622 transforms the selected style of caricature image, or an image generated by performing a weighted summation of the selected caricature image and an image reflecting the texture information T_in about the input face image, using the shape information S_in about the input face image, thereby generating a caricature image reflecting the shape information about the user's face.
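  • The weighted summation in step S23 can be sketched as a per-pixel blend of two textures that are both in the reference shape; the weight w and the toy texture values below are assumptions, and the final warp by S_in is only indicated in the comment.

```python
import numpy as np

def blend_textures(style_texture, user_texture, w):
    """Weighted summation of a selected caricature-style texture and the
    texture extracted from the user's face.  Both textures are in the
    reference shape, so they can be mixed pixel-by-pixel; w = 1.0 keeps
    only the style image, w = 0.0 keeps only the user's texture.  The
    blended texture would then be warped by the user's shape information
    S_in (not shown here) to produce the final caricature.
    """
    return w * style_texture + (1.0 - w) * user_texture

sketch = np.full((2, 2), 200.0)   # toy "sketch style" texture
user = np.full((2, 2), 100.0)     # toy user texture
print(blend_textures(sketch, user, 0.25))   # every pixel 125.0
```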
  • In step S24, the synthesized caricature image is transmitted to the user interfacing device 10a, 10b or 50 and displayed through the output unit 17 or 57, and the user command input unit 12 or 52 of the user interfacing device 10a, 10b or 50 receives a user control command indicating whether to change the shape information about the displayed caricature image from the user.
  • If a change is requested, the shape information S_in is changed according to the user control command (for example, a control signal instructing a partial transform, such as enlarging or reducing a particular portion of the displayed face image by dragging the portion with a mouse, or a control signal instructing an entire transform, such as exaggerating the entire face using a slide bar), and the operation goes back to step S22 to synthesize a new caricature image.
  • the accessory image adding unit 324 or 624 retrieves various accessory images (glasses, hairstyles, hats, earrings, and bodies) stored in the accessory image DB 335 or 635, and adds the same to the caricature image.
  • When the accessory image adding unit 324 or 624 adds an accessory image, a more natural result can be obtained by automatically performing a size and position adjustment using the shape information S_in extracted in step S21.
  • the partial image replacement unit 323 or 623 replaces a particular portion of the caricature image with a sample image retrieved from the sample image DB 333 or 633, thereby expressing happiness, sadness or anger on the caricature image.
  • a moving picture effect can be accomplished by using animation frames expressing changes in the face.
Thereafter, the caricature image synthesized by the image processing device 30 or 60 is transmitted to the user interfacing device 10a, 10b or 50 and displayed for the user. The image revision unit 15 or 55 finally revises the synthesized caricature image in response to a user control command (an image revision control signal) received through the user command input unit 12 or 52 in step S15. The synthesized caricature image revised by the image revision unit 15 or 55 is stored in the image storage unit 16 or 56, or displayed or printed by the output unit 17 or 57, in step S16.
The caricature image obtained through such a method can be immediately used for a particular purpose. Alternatively, the caricature image can be used as a draft when an artist produces a caricature, thereby increasing productivity during manual work.
Hereinafter, a procedure of generating a caricature image using the sample image replacement method performed in the synthetic face image generating apparatuses 1 and 40 based on the shape information about a face image according to the present invention will be described with reference to FIG. 6.
The flowchart of FIG. 6 further includes steps S35 and S36 in addition to the steps shown in FIG. 5. Thus, descriptions of steps S30 through S34 and S37 through S39 will be omitted. The method of FIG. 6 replaces a part or the entire portion of a caricature image, synthesized according to the method shown in FIG. 5, with a sample image prepared in the sample image DB 333 or 633.
Sample images stored in the sample image DB 333 or 633 are formed based on the result of performing a statistical analysis on shape information about different faces. The sample image formation method can be divided into two cases: a case where transformation of a sample image is allowed, and a case where it is not. When transformation of a sample image is not allowed, a part or the entire portion of a caricature image is replaced only with a previously formed sample image to synthesize a new caricature image. Accordingly, this method has the advantage of accomplishing high definition, but is disadvantageous in that it is difficult to substantially reflect the shape of the input image and as many sample images as possible must be prepared in advance.
A method of measuring the difference D used in step S35 of FIG. 6 can be expressed by Equation (3). Here, the terms of Equation (3) are, for i = 1, ..., n: the shape information about the input image; the difference T̃_in between the texture information about the input image and the texture information about a reference image; and the difference T̃_ref between the texture information about the sample image and the texture information about the reference image.
The partial image replacing unit 323 or 623 measures the difference D between the input image and each sample image in step S35, and replaces a part or the entire portion of the caricature image with the sample image having the minimum difference in step S36.
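Equation (3) combines, per sample, the coefficient differences defined above into a single scalar D. As a sketch, assuming D is a weighted sum of squared differences between the input image's coefficients and each sample's (the exact weighting in Equation (3) may differ):

```python
import numpy as np

def best_sample(input_coeffs, sample_coeffs, weights=None):
    """Pick the sample image whose coefficients are closest to the input.

    input_coeffs:  (k,) vector describing the input image (shape
                   coefficients plus texture-difference coefficients).
    sample_coeffs: (num_samples, k) matrix, one row per sample image.
    weights:       optional per-coefficient weights; a weighted sum of
                   squared differences is assumed here for illustration.
    Returns the index of the minimum-difference sample.
    """
    if weights is None:
        weights = np.ones_like(input_coeffs)
    diffs = sample_coeffs - input_coeffs    # broadcast over sample rows
    d = (weights * diffs ** 2).sum(axis=1)  # one difference D per sample
    return int(np.argmin(d))
```

The partial image replacing unit would then substitute the sample image at the returned index into the caricature.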
The flowchart of FIG. 7 is the same as that of FIG. 5, with the exception that FIG. 7 includes steps S42 through S45 instead of steps S22 through S24 of FIG. 5. Thus, descriptions of the other steps S40, S41, and S46 through S48 will be omitted.
In step S42, the face image synthesizer 322 or 622 presents a variety of sample makeup images stored in the makeup image DB 334 or 634 to a user through the user interfacing device 10a, 10b or 50 so that the user can select a desired sample makeup image.
The sample makeup images have the same shape as that of a reference image.
In step S43, the face image synthesizer 322 or 622 transforms the selected sample makeup image, or an image generated by performing a weighted summation of the selected sample makeup image and an image reflecting the texture information T_in extracted from the input face image in step S41, using the shape information S_in extracted from the input face image in step S41, thereby synthesizing a face image in which the selected sample makeup image is applied to the user's face image.
In step S44, the synthesized face image reflecting the sample makeup image is transmitted to the user interfacing device 10a, 10b or 50 and displayed through the output unit 17 or 57, and the user command input unit 12 or 52 of the user interfacing device 10a, 10b or 50 receives from the user a user control command instructing revision of the makeup of the displayed face image.
The face image synthesizer 322 or 622 then revises the face image reflecting the sample makeup image according to the user control command, and the revised face image is transmitted to the user interfacing device 10a, 10b or 50 and displayed through the output unit 17 or 57.
In step S45, the user command input unit 12 or 52 of the user interfacing device 10a, 10b or 50 receives from the user a user control command indicating whether the user is satisfied with the displayed face image. When the user is satisfied, the procedure proceeds to step S46 of adding an accessory. Otherwise, the procedure returns to step S42.
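The weighted summation used when applying a sample makeup image is a per-pixel blend of two textures that share the reference shape, so it can be sketched in one line (the weight parameter is illustrative; the patent does not fix its value):

```python
import numpy as np

def blend_makeup(makeup_texture, input_texture, weight):
    """Weighted summation of a sample makeup texture and the input
    texture T_in, both defined on the reference shape.

    weight: contribution of the makeup image in [0, 1]; 1.0 uses the
            sample makeup texture unchanged.
    """
    return weight * makeup_texture + (1.0 - weight) * input_texture
```

Re-running the blend with a different weight is one simple way to realize the revision loop in which the user adjusts the makeup until satisfied.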
As described above, according to the present invention, shape information about an input face image, which is represented by a deformation field with respect to a reference image, is extracted from the input face image, and images which have the same shape as that of the reference image but have different types of texture information are used together with the extracted shape information, thereby synthesizing natural new images which reflect the shape of the input face image and have high quality regardless of the state of the input face image. Accordingly, the present invention can be effectively utilized in a variety of fields such as generation of character images, virtual makeup design, making of montages for criminal search, animations and games.
In the case of makeup design, a user can easily create and check a makeup design on the user's own face image and can easily revise the makeup design partially or entirely.
In addition, the present invention can be easily applied to a variety of applications which need a face image, such as realization of an avatar in virtual reality based on shape information, restoration of a three-dimensional face image, and video chatting.

Abstract

An apparatus and method for generating a synthetic face image based on shape information about an input face image are provided. The apparatus extracts shape information, which is represented by a deformation field with respect to a predetermined reference image, and texture information, which is color or brightness information about an input image mapped on the reference image, about the input face image from face image information transmitted from a user interfacing device, and transforms a variety of face images, which are stored in an image database in advance and have the same shape as that of the reference image, using the shape information about the input face image according to a user control command, thereby synthesizing face images reflecting the shape information about the input face image. Accordingly, natural new images reflecting the shape of the input face image and having high quality regardless of the state of the input face image can be synthesized by using images, which have the same shape information as that about the reference image but have different types of texture information, and the extracted shape information about the input face image.

Description

APPARATUS AND METHOD FOR GENERATING SYNTHETIC FACE IMAGE BASED ON SHAPE
INFORMATION ABOUT FACE IMAGE
Technical Field The present invention relates to an apparatus and method for generating a synthetic face image, and more particularly, to an apparatus and method for generating a synthetic face image based on shape information about an input face image.
A face image has generally been used as a medium of showing personal character most fully and allowing easy and natural communication. Such a face image is used in applications such as exit and entry control/security systems, criminal searching/montage composing systems, computer interfacing, animations, and games. Representative applications using technology of synthesizing face images are generation of a character image and makeup design.
A caricature of a face image, which is a sort of character image, is produced based on the features of a person's face. Accordingly, a caricature of a face image is not only used for producing animations or entertainment programs but is also used as a symbol or icon representing a person, as a unique signature in PC communications or e-mail, and as an avatar of a user in virtual reality.
Background Art
In order to generate caricatures, conventionally, professional painters manually draw caricatures, or face images are automatically processed using digital filters. A method of processing an image using digital filters gives an input image the overall effect of a manually created caricature by realizing effects such as watercolor painting or charcoal drawing through combinations of filters applied to the input image. When a professional painter draws a caricature manually, a natural and highly refined caricature can be obtained, but it takes a long time and it is difficult to maintain consistent quality. Accordingly, this method is restricted to particular conditions. An image processing technique using digital filters can be used only for images picked up in an environment in which lighting and backgrounds are limited; for simple two-dimensional images in which an object is not discriminated from the background, the quality of output images depends on lighting and other changes in the environment. No proper methods for supplementing this technique have been proposed. Moreover, since shape information about an object is not separately generated in a conventional caricature generation method, it is very complex to perform a revising operation such as exaggerating the features of the face in a generated caricature or changing a facial expression, and it is nearly impossible to restore a face image or extend the generated caricature to a three-dimensional avatar.
In the case of makeup design, conventionally, a consumer determines a style indirectly based on pictures of models wearing makeup. Recently, a makeup designing method using a computer has been introduced. This method allows a user to apply products to sample model images in various ways but does not provide the natural effects that can be obtained when directly applying makeup to the face image of a consumer. In other words, even if the same makeup product is used, its color can differ depending on various and complex conditions such as shades and reflected light due to illumination and the features of the shape of the face, so it is nearly impossible to naturally derive the makeup effects on the face image of a consumer from the makeup effects on a sample model image.
Disclosure of the Invention To solve the above problems, it is an object of the present invention to provide an apparatus and method for generating a synthetic face image based on shape information about a face image, in which shape information about a face is extracted from an input face image, and a synthetic face image is generated based on the extracted shape information, so that a user can obtain a natural and elaborate caricature image, view a makeup design image using his/her face image in advance, easily add a variety of accessories to the synthetic image, easily change the synthetic image, and view the result image of adding accessories to or changing the synthetic image in real time.
To achieve the object of the invention, there is provided an apparatus for synthesizing a new face image based on shape information about an input face image. The apparatus includes a user interfacing device for receiving face image information and a user control command, transmitting the same to an image processing device, receiving synthesized face image information from the image processing device, and outputting or storing the synthesized face image information according to the user control command; and an image processing device for extracting shape information, which is represented by a deformation field with respect to a predetermined reference image, and texture information, which is color or brightness information about an input image mapped on the reference image, about an input face image from face image information transmitted from the user interfacing device, and for transforming a texture image selected from among texture images, which are stored in an image database in advance and have the same shape as that of the reference image, or an image generated by performing a weighted summation of the selected texture image and a texture image reflecting the extracted texture information, using the shape information about the input face image according to the user control command, thereby generating a synthetic face image. There is also provided a method for synthesizing a new face image based on shape information about an input face image.
The method includes the steps of (a) extracting shape information, which is represented by a deformation field with respect to shape information about a predetermined reference image, and texture information, which is color or brightness information about an input image mapped on the reference image about an input face image from input face image information; and (b) transforming a texture image selected from among texture images, which are stored in an image database in advance and have the same shape as that of the reference image, or an image generated by performing a weighted summation of the selected texture image and a texture image reflecting the extracted texture information, using the shape information about the input face image according to a user control command, thereby generating a synthetic face image.
Brief Description of the Drawings
FIG. 1 A is a block diagram of the functional configuration of a first embodiment of a synthetic face image generating apparatus according to the present invention; FIG. 1 B is a block diagram of the functional configuration of a second embodiment of a synthetic face image generating apparatus according to the present invention;
FIG. 2 is a block diagram of the mechanical configuration of a computer system in which the first and second embodiments of the present invention are implemented;
FIG. 3 is a basic flowchart of a procedure of generating a synthetic face image according to the present invention;
FIG. 4 is a detailed flowchart of the step of extracting face information shown in FIG. 3; FIG. 5 is a flowchart of a procedure of generating a caricature image according to a face image synthesizing method performed by an apparatus for generating a synthetic face image according to the present invention;
FIG. 6 is a flowchart of a procedure of generating a caricature image according to a sample image replacing method performed by an apparatus for generating a synthetic face image according to the present invention; and
FIG. 7 is a flowchart of a makeup design procedure performed by an apparatus for generating a synthetic face image according to the present invention.
Best mode for carrying out the Invention
Hereinafter, embodiments of the present invention will be described in detail with reference to the attached drawings. FIGS. 1A and 1B are block diagrams of a first embodiment 1 and a second embodiment 40, respectively, of a synthetic face image generating apparatus based on shape information about a face image according to the present invention. The first embodiment 1 shown in FIG. 1A includes one or more user interfacing devices 10a and 10b, a communication network 20, and an image processing device 30, thereby operating in a network environment. The second embodiment 40 shown in FIG. 1B includes a user interfacing device 50 and an image processing device 60, thereby operating in a single computer system.
The user interfacing devices 10a and 10b and the image processing device 30 of the first embodiment 1, and the second embodiment 40, are realized as a computer system 70 including a computer 72 with at least one central processing unit (CPU) 74 and a memory unit 73, an input device 75, and an output device 76, as shown in FIG. 2. The elements of the computer system 70 are connected to one another through one or more bus structures 77. The CPU 74 includes an arithmetic logic unit (ALU) 741 for performing arithmetical and logical operations, a register set 742 for temporarily storing data and commands, and a control unit 743 for controlling the operations of the computer system 70. The CPU 74 used in the present invention is not restricted to a particular structure manufactured by a particular company but can be any type of processor having the above basic structure.
The memory unit 73 includes a high-speed main memory 731 and an auxiliary memory 732 for storing data for a long time. The main memory 731 may include a random access memory (RAM) semiconductor chip and a read only memory (ROM) semiconductor chip. The auxiliary memory 732 may include a floppy disc, a hard disc, a CD-ROM, a flash memory, and a device for storing data using electricity, magnetism, light or other recording medium. The main memory 731 may include a video display memory for displaying an image through a display device. It will be understood by those skilled in the field of the present invention that the memory unit 73 can include a variety of replaceable elements that have different storing performance.
The input device 75 may include a keyboard, a mouse, and a physical converter (for example, a microphone). The output device 76 may include a display unit, a printer, and a physical converter (for example, a speaker). Alternatively, a network interface, a modem, or the like can be used as an input/output device.
The computer system 70 is provided with an operating system and at least one application program. An operating system is a series of software controlling the operations of the computer system 70 and the assignment of resources. An application program is a series of software performing work, which is requested by a user, using available computer resources through the operating system. This software is stored in the memory unit 73. Consequently, a computer-based synthetic face image generating apparatus according to the present invention is realized as the combination of the computer system 70 and one or more application programs which are installed and operate in the computer system 70. The first embodiment 1 shown in FIG. 1A has the same functions as the second embodiment 40 shown in FIG. 1B, with the exception that the first embodiment 1 further includes communication processors 14 and 31 for data transmission through the communication network 20. Thus, the present invention will be described on the basis of the first embodiment 1.
Referring to FIG. 1A, each of the user interfacing devices 10a and 10b receives face image information and a user control command from a user, receives a synthesized image in response to the user control command, and revises, stores or outputs the synthesized image. Each of the user interfacing devices 10a and 10b includes an image information input unit 11 , a user command input unit 12, an input/output controller 13, a communication processor 14, an image revision unit 15, an image storage unit 16, and an output unit 17.
The image information input unit 11 receives face image information from a user and can be realized as, for example, a scanner or a digital camera. In addition, the image information input unit 11 may include a plurality of cameras for receiving images picked up at different angles and a camera supplementary device such as a light adjustor. Since the image information input unit 11 as an element of the present invention should be considered in the functional aspect, the image information input unit 11 should be largely construed as including not only the input device 75 of FIG. 2 but also the auxiliary memory 732 of FIG. 2 which stores face image information in advance.
The user command input unit 12 receives user control commands (for example, user information, a face image synthesis control signal, and an image revision control signal) from a user. The user command input unit 12 can be realized as a device such as a keyboard, a mouse or a touch screen through which a user can input a command or information. The input/output controller 13 controls face image information input through the image information input unit 11 and a user control command input through the user command input unit 12 to be transmitted to the image processing device 30 through the communication processor 14. The input/output controller 13 also receives new image information which is synthesized by the image processing device 30 in response to the user control command and controls the new image information to be revised, stored or output.
The communication processor 14 is connected to the input/output controller 13 and transmits data to or receives data from the image processing device 30 through the communication network 20. For example, the communication processor 14 can be realized as a device including a serial/parallel port, a universal serial bus (USB) port or an IEEE1394 port for internal connection, and an Ethernet card for transmitting and receiving data including image information through Internet.
The image revision unit 15 is connected to the input/output controller 13 and revises the angle, size or texture of an image corresponding to image information synthesized by and transmitted from the image processing device 30 in response to a user control command input through the user command input unit 12.
The image storage unit 16 corresponds to the auxiliary memory 732 of FIG. 2 and stores new image information synthesized by and transmitted from the image processing device 30 or image information revised by the image revision unit 15 under the control of the input/output controller 13. The output unit 17 corresponds to the output device 76 of FIG. 2. Under the control of the input/output controller, the output unit 17 displays user interface display information used for receiving a user control command required when the image processing device 30 synthesizes a new image and displays or prints new image information synthesized by and transmitted from the image processing device 30 or image information revised by the image revision unit 15.
The communication network 20 transmits data between one or more user interfacing devices 10a and 10b and the image processing device 30 in the first embodiment 1 of the present invention shown in FIG. 1A. The communication network 20 can be realized as one of a variety of types of networks such as wire/wireless Internet, local area networks and private lines.
In the first embodiment 1 of the present invention shown in FIG. 1A, the image processing device 30 processes image information transmitted from one or more user interfacing devices 10a and 10b, synthesizes a new image based on the image information in response to a user control command, and transmits the synthesized image to the corresponding user interfacing device. The image processing device 30 includes a communication processor 31, an image processor 32, and an image database (DB) 33.
The communication processor 31 transmits data to and receives data from one or more user interfacing devices 10a and 10b. Corresponding to the communication processor 14 in each of the user interfacing devices 10a and 10b, the communication processor 31 can be realized as a device including an Ethernet card for transmitting and receiving data including image information through the Internet and a serial/parallel port, a universal serial bus (USB) port or an IEEE1394 port for internal connection. The image processor 32 extracts, from face image information transmitted from the user interfacing devices 10a and 10b, shape information about an input face image, indicated as a deformation field with respect to a reference image, and texture information, i.e., color or brightness information, about the input face image mapped on the reference image; analyzes a user control command transmitted from the user interfacing devices 10a and 10b to interpret the user's request; and synthesizes a new face image using the extracted shape information, the extracted texture information and a variety of images stored in the image DB 33 according to the interpreted request. The image processor 32 includes a face information extractor 321, a face image synthesizer 322, a partial image replacing unit 323, and an accessory image adding unit 324.
The face information extractor 321 extracts from face image information transmitted from the user interfacing devices 10a and 10b shape information about an input face image, indicated as a deformation field with respect to a reference image, and texture information, i.e., color or brightness information, of the input face image mapped on the reference image using the shape information.
The face image synthesizer 322 synthesizes a new face image by transforming a texture image selected from among texture images stored in the image DB 33 in response to a user control command or transforming an image, which is generated by performing a weighted summation of the selected texture image and a texture image reflecting texture information extracted by the face information extractor 321 , using shape information about an input image, which is extracted by the face information extractor 321.
The partial image replacing unit 323 replaces a portion or the entire area of a new face image synthesized by the face image synthesizer 322 with a sample image having the highest similarity among sample images stored in the image DB 33. The accessory image adding unit 324 adds an accessory image, which is selected from among accessory images stored in the image DB 33 according to a user control command, to a face image synthesized by the face image synthesizer 322. The image DB 33 previously stores image information necessary for processing an input face image in the image processor 32. The image DB 33 includes a face model DB 331 , an additional image DB 332, a sample image DB 333, a makeup image DB 334, and an accessory image DB 335. The face model DB 331 stores various types of information (a shape average, a texture average, shape eigenvectors, and texture eigenvectors which are previously obtained with respect to a plurality of model face images) used for extracting shape information and texture information, based on a reference image, from an input face image in the face information extractor 321. The various types of information stored in the face model DB 331 will be described in detail later when FIG. 4 is described.
The additional image DB 332 stores information about caricature images which have the same shape as that of a reference image but have different styles of texture information such as an animation style, a sketch style and a watercolor printing style.
The sample image DB 333 stores information about various caricature sample images including changes in a shape and a facial expression at each particular portion of a face image. The makeup image DB 334 stores information about makeup design images which have the same shape as that of a reference image and are expressed with texture information expressing a variety of sample makeups.
The accessory image DB 335 stores information about images such as glasses, hairstyles, hats, earrings, and bodies which can be added to a synthesized face image.
In the first embodiment 1 of the present invention shown in FIG. 1A, one or more user interfacing devices 10a and 10b are connected to a single image processing device 30 through the communication processors 14 and 31 and the communication network 20. Alternatively, as in the second embodiment 40 of the present invention shown in FIG. 1 B, the user interfacing device 50 and the image processing device 60 can be integrated and operated within a single computer system 70.
The basic operations of the synthetic face image generating apparatuses 1 and 40 based on shape information about a face image according to the present invention will be described with reference to FIG. 3.
The face information extractor 321 or 621 of the image processor 32 or 62 receives an input face image from the user interfacing device 10a, 10b or 50 in step S10, and extracts shape information about the input face image with respect to a predetermined reference image and texture information, i.e., color or brightness information, of the input face image mapped on the reference image in step S11.
In step S12, the face image synthesizer 322 or 622 of the image processor 32 or 62 synthesizes a new face image using the texture information based on the reference image and the shape information about the input face image extracted by the face information extractor 321 or 621 , in response to a user control command (a face image synthesis control signal) received from the user interfacing device 10a, 10b or 50. In other words, the face image synthesizer 322 or 622 restores the shape of the input face image using the shape information extracted from the input face image and warps the restored shape of the input face image using the extracted texture information, thereby synthesizing a user's face image. Here, by appropriately changing or replacing the texture information based on the reference image, which is used for synthesizing a face image, different new images having the shape of the input face image can be synthesized.
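The restore-and-warp operation described above can be sketched as a backward warp of a reference-shaped texture through the extracted deformation field. This is an illustrative simplification (a dense per-pixel field and nearest-neighbour sampling), not the patent's exact procedure:

```python
import numpy as np

def warp_texture(texture, deformation):
    """Warp a reference-shaped texture image into the input face's shape.

    texture:     (H, W) or (H, W, C) texture defined on the reference image.
    deformation: (H, W, 2) field giving, for each output pixel, the (dy, dx)
                 offset of the corresponding reference-image point (the
                 shape information extracted from the input image).
    """
    h, w = texture.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    # Backward mapping: look up where each output pixel comes from.
    src_y = np.clip(np.rint(yy + deformation[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xx + deformation[..., 1]).astype(int), 0, w - 1)
    return texture[src_y, src_x]
```

Swapping in a caricature-style or makeup texture for the input texture here is what yields the different synthesized images having the shape of the input face.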
The synthesized face image is transmitted to the user interfacing device 10a, 10b or 50 and displayed through the output unit 17 or 57. The user command input unit 12 or 52 of the user interfacing device 10a, 10b or 50 receives a user control command indicating whether to change the shape information about the displayed face image from the user in step S13.
When the user control command instructs to change the shape information, the shape information about the input face image is changed according to the user control command (for example, a control signal instructing partial transform such as enlarging or reducing a particular portion of the displayed face image by dragging the portion with a mouse or a control signal instructing entire transform such as exaggerating the entire face using a slide bar), and the operation goes back to step S12 to synthesize a new face image.
When the user does not want to change the shape information in step S13, in response to an additional user control command, the accessory image adding unit 324 or 624 adds various accessory images stored in the image DB 33 or 63 to the face image synthesized in step S12, or the partial image replacing unit 323 or 623 replaces a particular portion of the face image synthesized in step S12 with various sample images stored in the image DB 33 or 63, thereby adding various additional effects to the synthesized face image in step S14. Thereafter, the face image synthesized by the image processing device 30 or 60 is transmitted to the user interfacing device 10a, 10b or 50 and displayed for the user. The image revision unit 15 or 55 finally revises the synthesized face image in response to a user control command (an image revision control signal) received through the user command input unit 12 or 52 in step S15. The synthesized face image revised by the image revision unit 15 or 55 is stored in the image storage unit 16 or 56, or displayed or printed by the output unit 17 or 57 in step S16.
The face information extracting step S11 of FIG. 3 can be summarized as a procedure of obtaining shape information S_in and texture information T_in from an input face image based on a face model.
In the present invention, shape information about a face image is expressed by a deformation field with respect to a reference image, and texture information about a face image is expressed by color or brightness information about an input image mapped on the reference image. In other words, shape information S about a face image is defined as the positional difference between each point p_i on a reference image (i=1, ..., n; n is the number of predetermined points on the reference image) and each corresponding point on the face image. Texture information T about a face image is defined as the color or brightness value of each point on an input image with respect to each corresponding point p_i (i=1, ..., n) on the reference image. While a synthetic image obtained using a shape average and a texture average is used as a reference image in an embodiment of the present invention, a reference image which can be used in the present invention is not restricted to the above embodiment. Any one among the m face images prepared previously can be used as a reference image.
Face models stored in the face model DB 331 or 631 are previously obtained according to the following procedure. First, shape information S_j (j=1, ..., m) and texture information T_j (j=1, ..., m) are extracted from the previously prepared m face images based on a reference image. Thereafter, a shape average S̄ composed of the averages of the shape information S_j (j=1, ..., m) at each point p_i (i=1, ..., n) and a texture average T̄ composed of the averages of the texture information T_j (j=1, ..., m) at each point p_i (i=1, ..., n) are obtained:

S̄ = (1/m)·Σ_{j=1}^{m} S_j,   T̄ = (1/m)·Σ_{j=1}^{m} T_j

Then a covariance C_S of the shape differences S̃_j = S_j − S̄ (j=1, ..., m) and a covariance C_T of the texture differences T̃_j = T_j − T̄ (j=1, ..., m) are obtained.
These obtained values are processed by principal component analysis, thereby obtaining the shape eigenvectors s_i (i=1, ..., m−1) and texture eigenvectors t_i (i=1, ..., m−1) of the covariances of the m face models. Therefore, a face image can be expressed by Equation (1) based on the shape eigenvectors s_i (i=1, ..., m−1) and the texture eigenvectors t_i (i=1, ..., m−1).
S = S̄ + Σ_{i=1}^{m−1} α_i·s_i,   T = T̄ + Σ_{i=1}^{m−1} β_i·t_i   ...(1)

(Here, α, β ∈ R^{m−1}, and m is the number of face models.) Through such a procedure, the shape average S̄, the texture average T̄, the shape eigenvectors s_i (i=1, ..., m−1), and the texture eigenvectors t_i (i=1, ..., m−1) are stored in the face model DB 331 or 631 and used for extracting shape information and texture information about an input face image.
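As an illustration (not part of the patent text), the model-building procedure above can be sketched in Python with NumPy. The eigenvectors of the shape and texture covariances are obtained here via an SVD of the centered data matrices, which yields the same principal components; the array layouts (one example per row) are assumptions.

```python
import numpy as np

def build_face_model(shapes, textures):
    """PCA face model from m prepared faces.

    shapes:   (m, 2n) deformation fields w.r.t. the reference image
    textures: (m, n)  brightness values mapped on the reference image
    Returns the averages and the leading m-1 shape/texture eigenvectors.
    """
    m = shapes.shape[0]
    S_bar = shapes.mean(axis=0)                 # shape average
    T_bar = textures.mean(axis=0)               # texture average
    # Principal components of the covariance of the differences,
    # computed via SVD of the centered data (rows of Vt = eigenvectors).
    _, _, s_vecs = np.linalg.svd(shapes - S_bar, full_matrices=False)
    _, _, t_vecs = np.linalg.svd(textures - T_bar, full_matrices=False)
    return S_bar, T_bar, s_vecs[:m - 1], t_vecs[:m - 1]
```

A face in the model is then reconstructed, as in Equation (1), as the average plus a coefficient-weighted sum of the returned eigenvectors.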
The face information extracting step S11 of FIG. 3 will be described in detail with reference to FIG. 4. In step S111 of FIG. 4, the input face image is normalized. More specifically, predetermined feature points (for example, a central point of each pupil and a central point of a lip) are extracted from an input face image, and then the input face image is moved up, down, to the left and to the right and the size thereof is adjusted so that the positions of the extracted feature points of the input face image can be located at the corresponding feature points of a reference image. Such an image normalizing step can be automatically performed through predetermined software or can be manually performed in response to a control command input by the user. Since a detailed procedure thereof is beyond the scope of the present invention, a description thereof will be omitted.
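For illustration only, the translation-and-scale alignment of step S111 can be sketched as follows; the choice of feature points and the least-squares fit are assumptions about one possible implementation (step S111 moves the image and adjusts its size, with no rotation).

```python
import numpy as np

def normalize_face(points, ref_points):
    """Fit an isotropic scale and a translation that map detected
    feature points (e.g. pupil centres, lip centre) onto their
    reference positions."""
    p = np.asarray(points, dtype=float)
    r = np.asarray(ref_points, dtype=float)
    pc, rc = p.mean(axis=0), r.mean(axis=0)
    # Ratio of point-cloud spreads gives the scale factor.
    scale = np.linalg.norm(r - rc) / np.linalg.norm(p - pc)
    offset = rc - scale * pc
    return scale, offset        # apply to a point x as: scale * x + offset
```

The returned transform is then applied to the whole input image so its feature points land on the corresponding reference points.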
In step S112, shape information is estimated. More specifically, a hierarchical, gradient-based optical flow algorithm (Lucas and Kanade) is applied to the normalized input face image obtained in step S111 and to the reference image (or a synthetic texture estimation image T_in_k having the same shape as that of the reference image), thereby estimating shape information S_in_pre_1 (the value of the positional difference between points on the normalized input face image and corresponding points on the reference image) based on the reference image. By using the hierarchical, gradient-based optical flow algorithm, the correspondence between two similar images can be expressed by an optical flow using the intensity of the two images.
The shape information S_in_pre_1 obtained according to the hierarchical, gradient-based optical flow algorithm in step S112 may include an error value due to a light or shadow in the input face image. Accordingly, in step S113, compensation is performed on the shape information S_in_pre_1.
More specifically, the error value is compensated for by sequentially performing linear decomposition based on the shape eigenvectors s_i (i=1, ..., m−1) and linear superposition with respect to the shape information S_in_pre_1, thereby obtaining the error-compensated shape information S_in_correct_1. Here, to increase the degree of freedom in transform, it is preferable to use as the result value of the shape information a weighted sum S_in_1 of the estimated shape information S_in_pre_1 obtained in step S112 and the compensated shape information S_in_correct_1 obtained in step S113, as calculated by Equation (2).
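A minimal sketch of the step S113 compensation together with the Equation (2) blend, assuming the rows of `s_vecs` are the orthonormal shape eigenvectors:

```python
import numpy as np

def compensate_shape(S_pre, S_bar, s_vecs, w=0.5):
    """Linear decomposition of the estimated deformation onto the shape
    eigenvectors, linear superposition back into a deformation field,
    then the Equation (2) weighted sum with 0 <= w <= 1."""
    coeffs = s_vecs @ (S_pre - S_bar)        # linear decomposition
    S_correct = S_bar + s_vecs.T @ coeffs    # linear superposition
    return w * S_pre + (1.0 - w) * S_correct
```

Because the eigenvector basis spans only the model subspace, the reconstruction discards deformation components (e.g. those caused by lighting) that the m face models cannot express.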
S_in_1 = W·S_in_pre_1 + (1 − W)·S_in_correct_1   ...(2)

(Here, 0 ≤ W ≤ 1.) In step S114, backward warping is performed. More specifically, the input face image is transformed into the reference image using the shape information S_in_1 obtained in step S113. This procedure is referred to as "backward warping".
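Backward warping fetches, for each pixel position in the reference frame, the input-image pixel indicated by the deformation field. A nearest-neighbour sketch, with the flow stored as per-pixel (dy, dx) displacements (an assumed layout):

```python
import numpy as np

def backward_warp(image, flow):
    """Warp `image` into the reference shape: for every reference pixel,
    sample the input at the position displaced by the deformation field
    (nearest-neighbour sampling for brevity)."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(ys + flow[..., 0]), 0, h - 1).astype(int)
    src_x = np.clip(np.rint(xs + flow[..., 1]), 0, w - 1).astype(int)
    return image[src_y, src_x]
```

A production implementation would use bilinear interpolation instead of rounding to the nearest source pixel.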
In step S115, linear decomposition is performed on texture information about the backward warped image based on the texture eigenvectors t_i (i=1, ..., m−1), and then linear superposition is performed on the result of the linear decomposition, thereby obtaining texture information T_in_1 about the input face image.
Thereafter, the normalized input face image obtained in step S111 is replaced with the input face image transformed into the reference image in step S114, and the reference image is replaced with a texture image having the same shape as that of the reference image. Then, steps S112 through S116 are repeated, thereby obtaining S_in_k (k=1, ...). In other words, in a k-th repetition, S_in_1, S_in_pre_1, and S_in_correct_1 in steps S112 and S113 are replaced with S_in_k, S_in_pre_k, and S_in_correct_k, respectively, and S_in_1 in step S114 is replaced with S_in (= S_in_1 + ... + S_in_k). In addition, T_in_1 in step S115 is replaced with T_in_k. The T_in_k obtained in the last repetition becomes the final texture information T_in about the input face image. Such a repetition is performed until the vector norm |S_in_k| of S_in_k is less than a predetermined threshold value or until the number of repetitions is equal to or greater than a predetermined number, thereby obtaining shape information S_in about the input face image based on the reference image in step S117.
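The repetition logic of steps S112 through S117 can be sketched as follows; `estimate_step` and `compensate_step` are hypothetical callables standing in for the optical-flow estimate and the eigenvector compensation described above.

```python
import numpy as np

def iterate_shape(estimate_step, compensate_step, eps=1e-3, max_iters=20):
    """Accumulate the per-iteration deformation increments S_in_k until
    the increment's vector norm falls below a threshold or a repetition
    limit is reached (the loop over steps S112-S116)."""
    S_in = None
    for _ in range(max_iters):
        S_k = compensate_step(estimate_step(S_in))
        S_in = S_k if S_in is None else S_in + S_k
        if np.linalg.norm(S_k) < eps:
            break
    return S_in
```

Each pass re-estimates the residual deformation against the previously warped image, so the accumulated S_in converges toward the full deformation field.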
Thereafter, the input face image can be restored using the shape information S_in and the texture information T_in about the input face image, which are obtained based on the reference image. In other words, the texture information T_in about the input face image obtained based on the reference image is transformed according to the shape information S_in about the input face image obtained based on the reference image, thereby synthesizing the input face image. Representative examples in which the characteristics of such a synthetic image can be utilized are generation of a caricature of a face image and makeup design. A method of generating a caricature of a face image is divided into a face image synthesizing method and a sample image replacement method.
A procedure of generating a caricature image using the face image synthesizing method performed in the synthetic face image generating apparatuses 1 and 40 based on the shape information about a face image according to the present invention will be described with reference to FIG. 5.
The face information extractor 321 or 621 of the image processor 32 or 62 receives an input face image from the user interfacing device 10a, 10b or 50 in step S20, and extracts shape information S_in and texture information T_in about the input face image based on a predetermined reference image in step S21.
Thereafter, the face image synthesizer 322 or 622 presents different styles of caricature images (for example, an animation style, a sketch style, and a watercolor painting style) stored in the additional image DB 332 or 632 to a user through the user interfacing device 10a, 10b or 50 so that the user can select a desired style of caricature in step S22. Here, the different styles of caricature images stored in the additional image DB 332 or 632 have the same shape as that of the reference image.
In step S23, the face image synthesizer 322 or 622 synthesizes the selected style of caricature image, or an image generated by performing a weighted summation of the selected caricature image and an image reflecting the texture information T_in about the input face image, with the shape information S_in about the input face image, thereby generating a caricature image reflecting the shape information about the user's face. In step S24, the synthesized caricature image is transmitted to the user interfacing device 10a, 10b or 50 and displayed through the output unit 17 or 57, and the user command input unit 12 or 52 of the user interfacing device 10a, 10b or 50 receives a user control command indicating whether to change the shape information about the displayed caricature image from the user.
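The weighted summation of step S23 reduces to a per-pixel blend of two textures that share the reference shape (the blended result is then deformed by S_in); a sketch, with the default weight value as an assumption:

```python
import numpy as np

def blend_reference_textures(style_texture, input_texture, w=0.7):
    """Weighted summation of a selected caricature texture and the
    texture reflecting T_in, both mapped on the reference shape."""
    style_texture = np.asarray(style_texture, dtype=float)
    input_texture = np.asarray(input_texture, dtype=float)
    return w * style_texture + (1.0 - w) * input_texture
```

Setting w = 1 uses the selected style texture alone; smaller w lets more of the user's own texture show through the caricature.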
When the user control command instructs to change the shape information, the shape information S_in is changed according to the user control command (for example, a control signal instructing a partial transform such as enlarging or reducing a particular portion of the displayed face image by dragging the portion with a mouse, or a control signal instructing an entire transform such as exaggerating the entire face using a slide bar), and the operation goes back to step S22 to synthesize a new caricature image. In step S25, in response to an additional user control command, the accessory image adding unit 324 or 624 retrieves various accessory images (glasses, hairstyles, hats, earrings, and bodies) stored in the accessory image DB 335 or 635, and adds them to the caricature image. When the accessory image adding unit 324 or 624 adds an accessory image, a more natural result can be obtained by automatically performing a size and position adjustment using the shape information S_in extracted in step S21. In addition, the partial image replacing unit 323 or 623 replaces a particular portion of the caricature image with a sample image retrieved from the sample image DB 333 or 633, thereby expressing happiness, sadness or anger on the caricature image. Moreover, a moving picture effect can be accomplished by using animation frames expressing changes in the face.
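The automatic size and position adjustment for accessories can be driven by anchor points recovered through S_in; a sketch in which the anchors (e.g. the two pupil centres when fitting a pair of glasses) and the sizing rule are assumptions:

```python
import numpy as np

def place_accessory(ref_anchors, face_anchors, ref_size):
    """Scale an accessory by the ratio of inter-anchor distances and
    centre it at the midpoint of the face anchors."""
    ref_anchors = np.asarray(ref_anchors, dtype=float)
    face_anchors = np.asarray(face_anchors, dtype=float)
    scale = (np.linalg.norm(face_anchors[1] - face_anchors[0])
             / np.linalg.norm(ref_anchors[1] - ref_anchors[0]))
    return ref_size * scale, face_anchors.mean(axis=0)
```

The accessory image is then resized by the returned factor and pasted centred on the returned position.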
Thereafter, the caricature image synthesized by the image processing device 30 or 60 is transmitted to the user interfacing device 10a, 10b or 50 and displayed for the user. The image revision unit 15 or 55 finally revises the synthesized caricature image in response to a user control command (an image revision control signal) received through the user command input unit 12 or 52 in step S15. The synthesized caricature image revised by the image revision unit 15 or 55 is stored in the image storage unit 16 or 56, or displayed or printed by the output unit 17 or 57 in step S16.
The caricature image obtained through such a method can be immediately used for a particular purpose. Alternatively, the caricature image can be used as a draft when an artist produces a caricature, thereby increasing productivity during manual treatment.
A procedure of generating a caricature image using the sample image replacement method performed in the synthetic face image generating apparatuses 1 and 40 based on the shape information about a face image according to the present invention will be described with reference to FIG. 6.
FIG. 6 further includes steps S35 and S36 in addition to the steps shown in FIG. 5. Thus, descriptions of the other steps S30 through S34 and S37 through S39 will be omitted. In other words, the method of FIG. 6 replaces a part or the entire portion of a caricature image, synthesized according to the method shown in FIG. 5, with a sample image prepared in the sample image DB 333 or 633.
Sample images stored in the sample image DB 333 or 633 are formed based on the result of performing a statistical analysis on shape information about different faces. The sample image formation method can be divided into two cases: a case where transformation of a sample image is admitted, and a case where transformation of a sample image is not admitted.
According to the sample image formation method in the case where transformation of a sample image is admitted, a normalized sample image is formed, and the size and shape of the sample image are transformed based on the shape information S_in extracted from the input face image in step S31. This method has the advantages of substantially reflecting the shape of the input face image and requiring only a small number of sample images, but has the disadvantages of distorting the sample image through transformation and deteriorating the entire picture quality.
When using the sample image formation method in the case where transformation of a sample image is not admitted, a part or the entire portion of a caricature image is replaced with only a previously formed sample image to synthesize a new caricature image. Accordingly, this method has the advantage of accomplishing high definition, but is disadvantageous in that it is difficult to substantially reflect the shape of the input image, and as many sample images as possible should be prepared in advance.
A method of measuring a difference D used in step S35 of FIG. 6 can be expressed by Equation (3).
D = W·Σ_{i=1}^{n} (C_si − C'_si) + (1 − W)·Σ_{i=1}^{n} (C_ti − C'_ti)   ...(3)

Here, 0 ≤ W ≤ 1, C_si (i=1, ..., n) is shape information about the input image, C'_si (i=1, ..., n) is shape information about a sample image, C_ti (i=1, ..., n) is the difference between texture information about the input image and the texture information T_ref about a reference image, and C'_ti (i=1, ..., n) is the difference between texture information about the sample image and the texture information T_ref about the reference image. Instead of immediately applying shape information and texture information to C_si, C'_si, C_ti, and C'_ti, coefficients of the eigenvectors obtained by linearly analyzing the shape information and the texture information, as shown in Equation (1), can be used. Here, each coefficient has a dimension of (m−1).
The partial image replacing unit 323 or 623 measures the difference D between the input image and each sample image in step S35 and replaces a part or the entire portion of the caricature image with the sample image having the minimum difference in step S36.
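The selection of steps S35 and S36 can be sketched over eigenvector coefficients as in Equation (3); squared differences are used inside the sums here as an assumption, since the exact form of the published summand is ambiguous in this copy.

```python
import numpy as np

def pick_sample(coeffs_in, coeffs_samples, w=0.5):
    """Return the index (and distance) of the sample minimising the
    weighted sum of shape- and texture-coefficient differences."""
    cs_in, ct_in = coeffs_in
    best_idx, best_d = -1, np.inf
    for idx, (cs, ct) in enumerate(coeffs_samples):
        d = (w * np.sum((cs_in - cs) ** 2)
             + (1.0 - w) * np.sum((ct_in - ct) ** 2))
        if d < best_d:
            best_idx, best_d = idx, d
    return best_idx, best_d
```

The weight w trades off shape fidelity against texture fidelity when ranking the candidate sample images.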
When a caricature image is generated according to the sample image replacement method, instead of the entire generated caricature image, the codes of the sample images with which a part or the entire portion of the caricature image will be replaced are compressed and transmitted in a low-speed communication environment, thereby markedly increasing the compression ratio.

A procedure of designing a makeup performed in the synthetic face image generating apparatuses 1 and 40 based on the shape information about a face image according to the present invention will be described with reference to FIG. 7.
FIG. 7 is the same as FIG. 5, with the exception that FIG. 7 includes steps S42 through S45 instead of steps S22 through S24 of FIG. 5. Thus, descriptions of the other steps S40, S41, and S46 through S48 will be omitted.
In step S42, the face image synthesizer 322 or 622 presents a variety of sample makeup images stored in the makeup image DB 334 or 634 to a user through the user interfacing device 10a, 10b or 50 so that the user can select a desired sample makeup image. Here, the sample makeup images have the same shape as that of a reference image.
In step S43, the face image synthesizer 322 or 622 transforms the selected sample makeup image, or an image generated by performing a weighted summation of the selected sample makeup image and an image reflecting the texture information T_in extracted from the input face image in step S41, using the shape information S_in extracted from the input face image in step S41, thereby synthesizing a face image in which the selected sample makeup image is applied to the user's face image. In step S44, the synthesized face image reflecting the sample makeup image is transmitted to the user interfacing device 10a, 10b or 50 and displayed through the output unit 17 or 57, and the user command input unit 12 or 52 of the user interfacing device 10a, 10b or 50 receives a user control command instructing revision of the makeup of the displayed face image from the user. The face image synthesizer 322 or 622 revises the face image reflecting the sample makeup image according to the user control command, and the revised face image is transmitted to the user interfacing device 10a, 10b or 50 and displayed through the output unit 17 or 57. Thereafter, in step S45, the user command input unit 12 or 52 of the user interfacing device 10a, 10b or 50 receives from the user a user control command indicating whether the user is satisfied with the displayed face image. Here, when the user control command indicates "satisfied", the procedure proceeds to step S46 of adding an accessory. In contrast, when the user control command indicates "not satisfied", the procedure returns to step S42.
While this invention has been particularly shown and described with reference to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in forms and details may be made therein without departing from the spirit and scope of the invention. The above embodiments have been used in a descriptive sense only and not for purpose of limitation. Therefore, the scope of the invention will be defined by the appended claims not by the above description, and it should be construed that modifications made within the scope of the invention are covered by the present invention.
Industrial Applicability
According to the present invention, first, shape information about an input face image, represented by a deformation field with respect to a reference image, is extracted from the input face image, and images which have the same shape as that of the reference image but have different types of texture information are used together with the extracted shape information, thereby synthesizing natural, high-quality new images reflecting the shape of the input face image regardless of the state of the input face image. Accordingly, the present invention can be effectively utilized in a variety of fields such as generation of character images, virtual makeup design, making of montages for criminal searches, animations, and games.
Second, in the case of generating caricature images, different caricatures including the characteristics of the shape of a user's face can be immediately generated, and a part or the entire portion of a generated caricature can be exaggerated or transformed. In addition, by using information about the shape of a user's face, a complex image revision process can be simplified and automated, thereby improving productivity in generating characters.
Third, in the case of makeup design, a user can easily create and check a makeup design on the user's own face image and can easily revise the makeup design partially or entirely.
Fourth, the result of adding a variety of accessories to a newly synthesized face image can be immediately checked. In addition, the present invention can be easily applied to a variety of applications that require a face image, such as realization of an avatar in virtual reality based on shape information, restoration of a three-dimensional face image, and video chatting.

Claims

What is claimed is:
1. An apparatus for synthesizing a new face image based on shape information about an input face image, the apparatus comprising: a user interfacing device for receiving face image information and a user control command, transmitting the same to an image processing device, receiving synthesized face image information from the image processing device, and outputting or storing the synthesized face image information according to the user control command; and an image processing device for extracting shape information and texture information about an input face image from face image information transmitted from the user interfacing device, the shape information being represented by a deformation field with respect to a predetermined reference image, the texture information being color or brightness information about an input image mapped on the reference image, and for transforming a texture image selected from among texture images, which are stored in an image database in advance and have the same shape as that of the reference image, or an image generated by performing a weighted summation of the selected texture image and a texture image reflecting the extracted texture information, using the shape information about the input face image according to the user control command, thereby generating a synthetic face image.
2. The apparatus of claim 1 , wherein the user interfacing device and the image processing device are realized within a single computer system.
3. The apparatus of claim 1 , wherein the user interfacing device and the image processing device are separately realized in two or more computer systems, respectively, the apparatus further comprising a communication network for transmitting data and receiving data between the user interfacing device and the image processing device.
4. The apparatus of claim 2 or 3, wherein the image processing device comprises: a face information extractor for extracting the shape information and the texture information about the input face image from the face image information transmitted from the user interfacing device, the shape information being represented by a deformation field with respect to the predetermined reference image, the texture information being color or brightness information about the input image mapped on the reference image; a face image synthesizer for transforming the texture image selected from among the texture images, which are stored in the image database in advance and have the same shape as that of the reference image, or the image generated by performing a weighted summation of the selected texture image and the texture image reflecting the extracted texture information, using the shape information about the input face image according to the user control command, thereby generating the synthetic face image; and an image database for storing information about the reference image and texture information about a variety of images having the same shape as that of the reference image.
5. The apparatus of claim 4, wherein the image database comprises: a face model database for storing a shape average, a texture average, shape eigenvectors, and texture eigenvectors which are obtained by performing a principal component analysis on a shape average, a texture average, a covariance of a shape difference, and a covariance of a texture difference which are obtained from reference image based shape information and texture information which is extracted from a plurality of face images; an additional image database for storing information about different styles of caricature images having the same shape as that of the reference image; and a makeup image database for storing information about a variety of makeup design images having the same shape as that of the reference image.
6. The apparatus of claim 5, wherein the face information extractor comprises: a normalization module for normalizing the input face image with respect to the reference image; a shape information estimation module for estimating the reference image based shape information by performing a hierarchical, gradient-based optical flow algorithm on the normalized input face image and the reference image; a shape information compensation module for compensating for an error value of the shape information estimated in the shape information estimation module by performing linear decomposition and linear superposition on the estimated shape information based on the shape eigenvectors stored in the face model database; a backward warping module for transforming the shape of the input face image into the shape of the reference image using the compensated shape information; a texture information determining module for determining the texture information about the input face image by performing linear decomposition and linear superposition on texture information about the backward warped image based on the texture eigenvectors stored in the face model database; and a repetition module for repeating the above modules until a predetermined condition is satisfied, thereby generating the shape information about the input face image based on the reference image.
7. The apparatus of claim 5, wherein the image database further comprises a sample image database for storing information about caricature sample images having different facial expressions at each particular portion of a face image, and the image processing device further comprises a partial image replacing unit for replacing a part or the entire portion of the synthetic face image generated by the face image synthesizer with a sample image having a highest similarity to the synthetic face image among the sample images stored in the sample image database.
8. The apparatus of claim 5, wherein the image database further comprises an accessory image database for storing information about a variety of accessory images which can be added to the synthetic face image, and the image processing device further comprises an accessory image adding unit for adding an accessory image, which is selected from among the accessory images stored in the accessory image database according to a user control command, to the synthetic face image generated by the face image synthesizer.
9. A method for synthesizing a new face image based on shape information about an input face image, the method comprising the steps of:
(a) extracting shape information and texture information from an input face image, the shape information being represented by a deformation field with respect to a predetermined reference image, the texture information being color or brightness information about an input image mapped on the reference image; and
(b) transforming a texture image selected from among texture images, which are stored in an image database in advance and have the same shape as that of the reference image, or an image generated by performing a weighted summation of the selected texture image and a texture image reflecting the extracted texture information, using the shape information about the input face image according to a user control command, thereby generating a synthetic face image.
10. The method of claim 9, wherein step (a) comprises sub steps of:
(a1) normalizing the input face image with respect to the reference image;
(a2) estimating the reference image based shape information by performing a hierarchical, gradient-based optical flow algorithm on the normalized input face image and the reference image;
(a3) compensating for an error value of the shape information estimated in step (a2) by performing linear decomposition and linear superposition on the estimated shape information based on shape eigenvectors stored in the image database in advance;
(a4) transforming the shape of the input face image into the shape of the reference image using the shape information compensated in step (a3);
(a5) determining the texture information about the input face image by performing linear decomposition and linear superposition on texture information about the image transformed in step (a4) based on texture eigenvectors stored in the image database in advance; and
(a6) reflecting the results of steps (a4) and (a5) and repeating steps (a2) through (a5) until a predetermined condition is satisfied, thereby generating the shape information and the texture information about the input face image based on the reference image.
11. The method of claim 10, wherein step (a) further comprises the step of: (aO) generating the shape eigenvectors and the texture eigenvectors by performing a principal component analysis on a shape average, a texture average, a covariance of a shape difference, and a covariance of a texture difference which are obtained from reference image based shape information and texture information which is extracted from a plurality of model face images.
12. The method of claim 9, wherein step (b) comprises the sub steps of:
(b1) selecting a caricature image from among different styles of caricature images, which are stored in the image database and have the same shape as that of the reference image, according to a user control command; and
(b2) synthesizing the caricature image selected in step (b1) or an image generated by performing a weighted summation of the selected caricature image and an image reflecting the extracted texture information with the shape information about the input face image, thereby generating a caricature image reflecting the shape information about the input face image.
13. The method of claim 12, wherein step (b) further comprises the sub step of:
(b3) when changing of the shape information is determined by a user control command, changing the shape information about the input face image according to a user control command controlling a change in the shape information and repeating steps (b1) and (b2).
14. The method of claim 12, further comprising the step of:
(c) replacing a part or the entire portion of the caricature image synthesized in step (b) with a sample image having a highest similarity to the synthesized caricature image among sample images stored in the image database.
15. The method of claim 14, wherein the similarity is determined in accordance with a weighted sum of differences in shape information between the synthesized caricature image and the sample image and the sum of differences in texture information between the synthesized caricature image and the sample image, or a weighted sum of the sum of differences between eigenvector coefficients representing the shape information about the synthesized caricature image and eigenvector coefficients representing the shape information about the sample image and the sum of differences between eigenvector coefficients representing the texture information about the synthesized caricature image and eigenvector coefficients representing the texture information about the sample image.
16. The method of claim 12 or 14, further comprising the step of:
(d) adding an accessory image, which is selected from among accessory images stored in the image database according to a user control command, to the synthesized caricature image.
17. The method of claim 16, wherein the position and size of the accessory image added in step (d) is determined in accordance with the shape information extracted in step (a) about the input face image.
18. The method of claim 9, wherein step (b) comprises the sub steps of:
(b1) selecting a makeup design image from among different styles of makeup design images, which are stored in the image database and have the same shape as that of the reference image, according to a user control command;
(b2) synthesizing the makeup design image selected in step (b1) or an image generated by performing a weighted summation of the selected makeup design image and an image reflecting the extracted texture information with the shape information about the input face image, thereby generating a makeup design image reflecting the shape information about the input face image; and
(b3) changing texture information about the reference image according to a user control command instructing revision of a makeup.
19. The method of claim 18, further comprising the step of:
(c) adding an accessory image, which is selected from among accessory images stored in the image database according to a user control command, to the synthesized makeup design image.
20. The method of claim 19, wherein the position and size of the accessory image added in step (c) is determined in accordance with the shape information extracted in step (a) about the input face image.
PCT/KR2001/001167 2000-08-22 2001-07-07 Apparatus and method for generating synthetic face image based on shape information about face image WO2002017234A1 (en)


Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020000048616A KR20000064110A (en) 2000-08-22 2000-08-22 Device and method for automatic character generation based on a facial image
KR2000/0048616 2000-08-22

Publications (1)

Publication Number Publication Date
WO2002017234A1 true WO2002017234A1 (en) 2002-02-28

Family

ID=19684433

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2001/001167 WO2002017234A1 (en) 2000-08-22 2001-07-07 Apparatus and method for generating synthetic face image based on shape information about face image

Country Status (5)

Country Link
JP (1) JP2004506996A (en)
KR (2) KR20000064110A (en)
CN (1) CN1447955A (en)
AU (1) AU2001269581A1 (en)
WO (1) WO2002017234A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1510973A3 (en) * 2003-08-29 2006-08-16 Samsung Electronics Co., Ltd. Method and apparatus for image-based photorealistic 3D face modeling
CN100345165C (en) * 2003-08-29 2007-10-24 Samsung Electronics Co., Ltd. Method and apparatus for image-based photorealistic 3D face modeling
US20080086380A1 (en) * 2005-07-01 2008-04-10 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Alteration of promotional content in media works
CN101847268A (en) * 2010-04-29 2010-09-29 北京中星微电子有限公司 Cartoon human face image generation method and device based on human face images
US7835568B2 (en) 2003-08-29 2010-11-16 Samsung Electronics Co., Ltd. Method and apparatus for image-based photorealistic 3D face modeling
US9215512B2 (en) 2007-04-27 2015-12-15 Invention Science Fund I, Llc Implementation of media content alteration
US9230601B2 (en) 2005-07-01 2016-01-05 Invention Science Fund I, Llc Media markup system for content alteration in derivative works
US9426387B2 (en) 2005-07-01 2016-08-23 Invention Science Fund I, Llc Image anonymization
US9583141B2 (en) 2005-07-01 2017-02-28 Invention Science Fund I, Llc Implementing audio substitution options in media works
US10762665B2 (en) 2018-05-23 2020-09-01 Perfect Corp. Systems and methods for performing virtual application of makeup effects based on a source image
US11594071B2 (en) 2018-12-03 2023-02-28 Chanel Parfums Beaute Method for simulating the realistic rendering of a makeup product

Families Citing this family (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100385896B1 (en) * 1999-12-28 2003-06-02 김남규 Method and Apparatus for Providing and Using of 3 Dimensional Image Which Represents the User in Cyber-Space
KR20010091743A (en) * 2000-03-17 2001-10-23 박호성 A formation method of an automatic caricature
KR20010092618A (en) * 2000-03-22 2001-10-26 이민호 Automatic generation and output of caricature of a face using image information
KR20020014176A (en) * 2000-08-16 2002-02-25 김세진 Apparatus and method for instant photographing and characterizing user's feature
KR20000064110A (en) * 2000-08-22 2000-11-06 이성환 Device and method for automatic character generation based on a facial image
KR20010000426A (en) * 2000-09-28 2001-01-05 김용환 Method of Intelligent Image Interface
KR20020057447A (en) * 2001-01-04 2002-07-11 심한억 The Method of Making a 3D Animation Movie By Controling 3D Character Directly
KR100407685B1 (en) * 2001-01-12 2003-12-01 윤경현 Method for representing Color paper mosaic using computer
KR100422470B1 (en) * 2001-02-15 2004-03-11 비쥬텍쓰리디(주) Method and apparatus for replacing a model face of moving image
KR20020069595A (en) * 2001-02-27 2002-09-05 강석령 System and method for producing caricatures
KR20020082328A (en) * 2001-04-20 2002-10-31 김장휘 The techknowledge maken my animation on network
KR20010079219A (en) * 2001-06-23 2001-08-22 조한수 Story board game system using video and its method
KR20030042403A (en) * 2001-11-22 2003-05-28 조윤석 Facial character manufacturing method by fitting facial edgeline
KR100473593B1 (en) * 2002-05-03 2005-03-08 삼성전자주식회사 Apparatus and method for producing three-dimensional caricature
KR100912872B1 (en) * 2002-10-09 2009-08-19 삼성전자주식회사 Apparatus and method for producing three-dimensional caricature
CN1313979C (en) 2002-05-03 2007-05-02 三星电子株式会社 Apparatus and method for generating 3-D cartoon
KR20030091306A (en) * 2002-05-27 2003-12-03 이채헌 The dynamic character and image character making system and the method using face components relationship on face image.
KR20040009460A (en) * 2002-07-23 2004-01-31 주식회사 페이스쓰리디 System and method for constructing three dimensional montaged geometric face
KR20040049759A (en) * 2002-12-07 2004-06-12 김창모 Caricature to the contents of mobile phone,PDA or Internet
KR101028257B1 (en) * 2003-06-16 2011-04-11 엘지전자 주식회사 Avatar editing method for mobile communication device
KR100791034B1 (en) * 2004-09-02 2008-01-03 (주)제니텀 엔터테인먼트 컴퓨팅 Method of Hair-Style Shaping based-on Face Recognition and apparatus thereof
KR100764130B1 (en) * 2005-03-29 2007-10-05 (주)제니텀 엔터테인먼트 컴퓨팅 Method of virtual face shaping based on automatic face extraction and apparatus thereof
JP5035524B2 (en) * 2007-04-26 2012-09-26 花王株式会社 Facial image composition method and composition apparatus
KR100929561B1 (en) * 2007-08-31 2009-12-03 (주)에프엑스기어 Specialized video contents providing system reflecting user-specified facial image / audio data
KR100967895B1 (en) * 2007-08-31 2010-07-06 (주)에프엑스기어 The system which provide a specialized teaching contents where the data which the user designates is reflected
KR100929564B1 (en) * 2007-08-31 2009-12-03 (주)에프엑스기어 Specialized virtual avatar providing system reflecting user specified facial image
KR100902995B1 (en) * 2007-10-23 2009-06-15 에스케이 텔레콤주식회사 Method for making face image of golden ratio, and apparatus applied to the same
KR100952382B1 (en) * 2009-07-29 2010-04-14 숭실대학교산학협력단 Animation automatic generating apparatus of user-based and its method
CN102054287B (en) * 2009-11-09 2015-05-06 腾讯科技(深圳)有限公司 Facial animation video generating method and device
JP2011228936A (en) * 2010-04-20 2011-11-10 Shiseido Co Ltd Moving image transmission system, transmitter, receiver, moving image management device, transmission program, reception program, and moving image management program
WO2012043910A1 (en) * 2010-10-01 2012-04-05 엘지전자 주식회사 Image display device and image displaying method thereof
KR101862128B1 (en) * 2012-02-23 2018-05-29 삼성전자 주식회사 Method and apparatus for processing video information including face
JP2014016746A (en) * 2012-07-06 2014-01-30 Sony Computer Entertainment Inc Image processing apparatus and image processing method
KR101374313B1 (en) * 2012-08-14 2014-03-13 주식회사 바른기술 An apparatus for transmitting simplified motion information excluding background images and displaying the information by utilizing avatar and the methods thereof
KR101494880B1 (en) 2012-11-07 2015-02-25 한국과학기술연구원 Apparatus and method for generating cognitive avatar
KR101418878B1 (en) * 2013-04-22 2014-07-17 명지대학교 산학협력단 System for generating montage using facial feature and method therefor
KR101635730B1 (en) * 2014-10-08 2016-07-20 한국과학기술연구원 Apparatus and method for generating montage, recording medium for performing the method
KR102288280B1 (en) 2014-11-05 2021-08-10 삼성전자주식회사 Device and method to generate image using image learning model
CN104616330A (en) * 2015-02-10 2015-05-13 广州视源电子科技股份有限公司 Image generation method and device
CN104751408B (en) * 2015-03-26 2018-01-19 广东欧珀移动通信有限公司 The method of adjustment and device of face head portrait
CN105184249B (en) 2015-08-28 2017-07-18 百度在线网络技术(北京)有限公司 Method and apparatus for face image processing
CN105427238B (en) * 2015-11-30 2018-09-04 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN107705240B (en) * 2016-08-08 2021-05-04 阿里巴巴集团控股有限公司 Virtual makeup trial method and device and electronic equipment
US10860841B2 (en) 2016-12-29 2020-12-08 Samsung Electronics Co., Ltd. Facial expression image processing method and apparatus
CN108492344A (en) * 2018-03-30 2018-09-04 中国科学院半导体研究所 A kind of portrait-cartoon generation method
KR102400609B1 (en) * 2021-06-07 2022-05-20 주식회사 클레온 A method and apparatus for synthesizing a background and a face by using deep learning network
KR102623592B1 (en) 2022-01-06 2024-01-11 (주)키미티즈 system and method for manufacturing smart character design
CN115171199B (en) * 2022-09-05 2022-11-18 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
KR102627033B1 (en) * 2023-05-08 2024-01-19 주식회사 알마로꼬 System and method for generating participatory content using artificial intelligence technology

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5696892A (en) * 1992-07-10 1997-12-09 The Walt Disney Company Method and apparatus for providing animation in a three-dimensional computer generated virtual world using a succession of textures derived from temporally related source images
US5774591A (en) * 1995-12-15 1998-06-30 Xerox Corporation Apparatus and method for recognizing facial expressions and facial gestures in a sequence of images
KR20000063344A (en) * 2000-06-26 2000-11-06 김성호 Facial Caricaturing method

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04199474A (en) * 1990-11-29 1992-07-20 Matsushita Electric Ind Co Ltd Face picture synthetic device
JP3533717B2 (en) * 1994-09-27 2004-05-31 松下電器産業株式会社 Image synthesis device
JP3799633B2 (en) * 1995-06-16 2006-07-19 セイコーエプソン株式会社 Face image processing method and face image processing apparatus
JP2918499B2 (en) * 1996-09-17 1999-07-12 株式会社エイ・ティ・アール人間情報通信研究所 Face image information conversion method and face image information conversion device
US6661906B1 (en) * 1996-12-19 2003-12-09 Omron Corporation Image creating apparatus
JP3551668B2 (en) * 1996-12-20 2004-08-11 オムロン株式会社 Portrait transmission device, portrait communication device and method
KR20010091743A (en) * 2000-03-17 2001-10-23 박호성 A formation method of an automatic caricature
KR20000037042A (en) * 2000-04-06 2000-07-05 김정렬 Automatic character producing system
KR100376760B1 (en) * 2000-07-05 2003-03-19 케이포테크놀로지 주식회사 Method for manufacturing caricature
KR20000059236A (en) * 2000-07-24 2000-10-05 조경식 On the internet, the way of putting photo image on 3D-modeling, bringing each layer object, painting user's face, wearing a wig, putting on glasses, ....etc
KR20000064110A (en) * 2000-08-22 2000-11-06 이성환 Device and method for automatic character generation based on a facial image


Also Published As

Publication number Publication date
KR20000064110A (en) 2000-11-06
KR100407111B1 (en) 2003-11-28
KR20020015642A (en) 2002-02-28
AU2001269581A1 (en) 2002-03-04
JP2004506996A (en) 2004-03-04
CN1447955A (en) 2003-10-08

Similar Documents

Publication Publication Date Title
WO2002017234A1 (en) Apparatus and method for generating synthetic face image based on shape information about face image
Blanz et al. A morphable model for the synthesis of 3D faces
US11861936B2 (en) Face reenactment
US6556196B1 (en) Method and apparatus for the processing of images
CN111127304B (en) Cross-domain image conversion
US5995119A (en) Method for generating photo-realistic animated characters
US20050180657A1 (en) System and method for image-based surface detail transfer
US11024060B1 (en) Generating neutral-pose transformations of self-portrait images
CN108734749A (en) The visual style of image converts
US20030234871A1 (en) Apparatus and method of modifying a portrait image
JPH1091809A (en) Operating method for function arithmetic processor control machine
Chang et al. Transferable videorealistic speech animation
CN110853119A (en) Robust reference picture-based makeup migration method
CN109376698A (en) Human face model building and device, electronic equipment, storage medium, product
US20240029345A1 (en) Methods and system for generating 3d virtual objects
Zeng et al. Avatarbooth: High-quality and customizable 3d human avatar generation
JP2024504063A (en) Computing platform for facilitating augmented reality experiences using third-party assets
Xu et al. RelightGAN: Instance-level generative adversarial network for face illumination transfer
WO2020188924A1 (en) Information processing device, search method, and non-transitory computer-readable medium having program stored thereon
JP2002525764A (en) Graphics and image processing system
KR20010082779A (en) Method for producing avatar using image data and agent system with the avatar
JP6714145B2 (en) Synthetic image generation device, synthetic image generation method and program thereof
Benamira et al. Interpretable Disentangled Parametrization of Measured BRDF with $\beta $-VAE
WO2005076225A1 (en) Posture and motion analysis using quaternions
JP2723070B2 (en) User interface device with human image display

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2002521224

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 018144128

Country of ref document: CN

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase