US8391642B1 - Method and system for creating a custom image - Google Patents
- Publication number
- US8391642B1 (application US12/263,256)
- Authority
- US
- United States
- Prior art keywords
- image
- face
- extracted
- model
- template
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/28—Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/772—Determining representative reference patterns, e.g. averaging or distorting patterns; Generating dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
- G06V40/173—Classification, e.g. re-identification; recognising unknown faces across different face tracks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/04—Indexing scheme for image data processing or generation, in general involving 3D image data
Definitions
- FIG. 1 is a flow diagram conceptually illustrating a process for creating an image in accordance with one or more embodiments of the present invention.
- FIG. 2 is a block diagram conceptually illustrating aspects of a system for creating an image in accordance with one or more embodiments of the present invention.
- FIG. 3 is a block diagram conceptually illustrating a work-flow process for creating an image in accordance with one or more embodiments of the present invention.
- FIG. 4 illustrates transforming a 2D face image using a 3D model in accordance with one or more embodiments of the present invention.
- FIG. 5 is a block diagram illustrating a system overview for an image customization process in accordance with one or more embodiments of the present invention.
- FIG. 1 is a flow diagram illustrating a process for producing a customized image in accordance with disclosed embodiments.
- An image template is created that includes a customizable region in block 100.
- a 2D object is extracted from a source image.
- the source image is a photograph of a person
- the object is the person's face.
- images other than a person's face are used as the customizable region and source image object. For example, photographs of articles of clothing could be customized and incorporated into a template image.
- FIG. 2 is a block diagram conceptually illustrating a system 200 for producing a customized image in accordance with embodiments of the disclosed invention.
- a computer system 210 (which could include a plurality of networked computer systems) has access to a memory 212 , such as an integral memory and/or a memory accessible via a network connection, for example.
- the memory 212 stores an image template with a customizable region.
- a software program is stored on the memory 212 or on another memory accessible by the computer 210, such that the computer 210 is programmed to execute the method described in conjunction with FIG. 1. Accordingly, a 2D object is extracted from a source image 214 corresponding to the customizable region of the image template.
- the source image 214, such as a photograph, can be provided by a user communicating with the computer system 210 via the internet.
- the 2D object is transformed using a 3D model, and the transformed 2D object is merged into the image template to create a customized image 216 .
- FIG. 3 is a flow diagram illustrating an exemplary work-flow process in which a photograph of a character is customized to include the image of a user's face.
- a staged photograph 222 is provided to a content authoring process 224 .
- the staged photograph includes the character 230 and a stand-in subject 232 whose face will be replaced by a user's face.
- the customizable region is the face of the stand-in subject 232 .
- the content authoring process 224 analyzes the staged photograph 222 to extract meta-data, such as facial orientation, face size, visible skin regions, etc. of the stand-in subject 232 in block 110 of the process 224.
- the image template 220 is created by combining the source photograph 222 with the meta-data.
- FIG. 4 illustrates an example of the transformation process.
- the 2D source face object 250 with a first orientation is applied to a 3D face model 252, along with an image of the desired second facial orientation 254 in accordance with the image template.
- the source 2D object is transformed to the second orientation using the 3D model 252, as shown in image 256.
- FIG. 5 illustrates an example of a system overview for the image customization process.
- a staged photograph creator 270 provides the image template 220 to the computer system 210 providing the customization services. Users access content provider or photograph services websites 272 to select the desired image templates and upload their source photographs 214 , which are provided to the customization services system 210 . The user's image is transformed and merged with the template image as disclosed herein. The customized image 216 can then be sent to print service providers 280 for customizing the desired article, such as a clothing article, souvenir item, etc., and shipped to the user via desired distribution services 282 .
Abstract
A method and system for producing an image includes creating an image template having a customizable region and extracting a 2D object from a source image. The 2D object is transformed using a 3D model in response to the customizable region, and the transformed 2D object is merged into the image template to create a customized image.
Description
This Application claims the benefit of U.S. Provisional patent application Ser. No. 61/052,534, filed May 12, 2008, which is hereby incorporated by reference in its entirety.
Many users desire self-expression and immersive experiences. For example, one area of expression is referred to as “personalized licensed merchandise,” where licensed properties such as art, celebrities, fictional characters, sports figures, toys/games, etc. are personalized with users' images or other expressions.
Users' attempts to combine their images with images of favorite characters, actors, sports stars, etc. have often been unsatisfactory. Known photo editing programs, for example, are expensive, complicated to use, require extensive manual effort, and do not support sophisticated 3D models to enable objects to be rotated to the proper orientation. This can make providing such customized offerings at an attractive price point difficult. Other services offer images of characters that can be added to clothing, posters, calendars, mugs, etc. along with customized text, but do not allow the user's image to be combined with the images.
For these and other reasons, there is a need for the present invention.
A method and system for producing an image are disclosed. Exemplary embodiments include creating an image template having a customizable region and extracting a 2D object from a source image. The 2D object is transformed using a 3D model in response to the customizable region, and the transformed 2D object is merged into the image template to create a customized image.
Embodiments of the invention are better understood with reference to the following drawings. The elements of the drawings are not necessarily to scale relative to each other. Like reference numerals designate corresponding similar parts.
In the following Detailed Description, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration specific embodiments in which the invention may be practiced. In this regard, directional terminology, such as "top," "bottom," "front," "back," "leading," "trailing," etc., is used with reference to the orientation of the Figure(s) being described. Because components of embodiments of the present invention can be positioned in a number of different orientations, the directional terminology is used for purposes of illustration and is in no way limiting. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present invention. The following Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of the present invention is defined by the appended claims.
A system and method are disclosed that, for example, enable content owners to create region-based templates that users can personalize with their own images and add to various types of merchandise. The systems and methods can be deployed as web services, thus enabling content owners and users to easily customize images.
In embodiments where the customizable region and source images include faces, the template could include an image such as a photograph of a celebrity or character, with a stand-in whose face would be replaced by the user's face. Using processes such as digital matting techniques, the stand-in's face region is identified so that the angular orientation and displaying zoom factor can be determined for the customizable region. The template is typically stored as a file that can include at least some of the following: the staged photograph along with meta-data that describes the facial orientation, the display zoom factor, the lighting conditions, the positions of visible skin regions on the stand-in's body, etc., as well as a bitmap that shows which pixels can be altered.
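A template file of the kind described above could be organized as in the following sketch. All field names and the JSON layout are illustrative assumptions, not taken from the patent:

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class FaceTemplate:
    # Hypothetical on-disk layout mirroring the meta-data listed above:
    # facial orientation, display zoom, lighting, visible skin regions,
    # and a bitmap marking which pixels may be altered.
    staged_photo: str             # path to the staged photograph
    yaw_deg: float                # stand-in's facial orientation
    pitch_deg: float
    roll_deg: float
    zoom_factor: float            # display zoom factor for the face region
    light_direction: list         # unit vector toward the light source
    ambient_intensity: float
    skin_regions: list = field(default_factory=list)  # [x, y, w, h] boxes
    mask_bitmap: str = ""         # path to 1-bit alterable-pixel image

def save_template(tpl, path):
    with open(path, "w") as f:
        json.dump(asdict(tpl), f, indent=2)

def load_template(path):
    with open(path) as f:
        return FaceTemplate(**json.load(f))
```

Storing the meta-data as plain JSON next to the staged photograph keeps the template readable by the web-service components the document mentions.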
In some implementations, multiple templates are created. For example, different stand-ins may be used to represent categories for various gender, age, height, weight, etc. Creating multiple templates gives the user an option of choosing a gender and size, for example, that is appropriate to an actual body type.
The source image is provided, for example, from a user via a website that showcases specific licensed content of interest to the user. The user provides the 2D source image, typically a frontal source photograph. The source photograph is used to extract the desired 2D object—the image of the user's face. As illustrated in block 104, the 2D object is transformed using a 3D model. In this way, the extracted object is adjusted in response to the customizable region of the image template. For instance, where the source image is the user's photograph, the user's face image can be scaled and oriented to match that of the template photograph. The transformed 2D object is then merged into the image template to create a composite image in block 106. The composite image can then be formatted to desired merchandise and routed to a merchandise service provider.
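The merge step of block 106 can be sketched with the template's alterable-pixel bitmap acting as a mask. The function below is a minimal illustration, assuming the face image has already been scaled and oriented to line up with the template:

```python
import numpy as np

def merge_into_template(template_img, face_img, alterable_mask):
    # template_img, face_img: HxWx3 uint8 arrays of the same size (the
    # face image is assumed already transformed and positioned to match
    # the template's customizable region).
    # alterable_mask: HxW array, nonzero where the template's bitmap
    # marks pixels as replaceable.
    out = template_img.copy()
    m = np.asarray(alterable_mask, dtype=bool)
    out[m] = face_img[m]
    return out
```

A production system would blend edges rather than copy pixels hard, but the bitmap-driven selection is the core of the merge.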
The image template 220 and the source photograph 214 are used by a content "mash-up" or customization process 240 to merge the image of the user's face with the image template 220. The source photograph 214 is provided to the computer 210 that executes the customization process 240, for example, by the user over the internet. In block 120, the source photograph 214 is analyzed and information regarding the object to be extracted—the image of the user's face—is identified. Information such as the facial orientation and facial region are extracted from the source photograph 214. In block 122, the 2D image of the user's face is transformed using a 3D model.
Suitable face detection techniques may be used to identify the face image in the source photograph 214. A 3D morphable model can be used to fit the 2D face target area into a 3D face model. Face features and texture are extracted and mapped to the 3D model. The 3D face model can be used to generate virtual views of the user's face in various orientations, and a modified 2D face image is created from the 3D model with a given angular position and size within a certain range. The position and orientation of the light source as well as the ambient intensity are configured to match the lighting conditions found in the staged photograph 222.
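Matching the staged photograph's lighting when rendering the 2D view could be approximated with a directional light plus an ambient term. This is an illustrative Lambertian sketch, not the patent's specific method; the light direction and ambient intensity are assumed to come from the template's meta-data:

```python
import numpy as np

def lambert_shade(normals, albedo, light_dir, ambient):
    # normals: Nx3 unit surface normals of the face model's vertices.
    # albedo: per-vertex base brightness in [0, 1].
    # light_dir: unit vector toward the light source; ambient: ambient
    # intensity in [0, 1] (both from the template's lighting meta-data).
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    # Lambert's cosine law, clamped so back-facing vertices get no diffuse.
    diffuse = np.clip(np.asarray(normals, float) @ l, 0.0, None)
    shade = np.asarray(albedo, float) * (ambient + (1.0 - ambient) * diffuse)
    return np.clip(shade, 0.0, 1.0)
```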
Several suitable 3D modeling techniques for developing a 3D model for a particular face exist. For example, some techniques use a generic 3D face model with one or more 2D source face images. Several features of the face, such as the eyes, mouth, etc., and the associated locations around key points of the facial features are extracted and fit to the generic model, resulting in a 3D model of the desired face.
One algorithm for identifying such facial features combines techniques known as “coarse-to-fine searching” (face detection) and “global-to-local matching” (feature extraction). A set of multiresolution templates is built for the whole face and individual facial features. A resolution pyramid structure is also established for the input face image. This algorithm first tries to find the rough face location in the image at the lowest resolution by globally matching it with the face templates. The higher resolution images and templates are used to refine the face location. Then each facial feature is located using a combination of techniques, including image processing, template matching and deformable templates. Finally, a feedback procedure is provided to verify extracted features using the anthropometry of human faces and if necessary, the features will be rematched.
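The coarse-to-fine search over a resolution pyramid can be sketched as follows. This is a simplified illustration (plain subsampling and sum-of-squared-differences matching stand in for the paper-grade pyramid and matching criteria):

```python
import numpy as np

def ssd_match(image, tmpl):
    # Exhaustive sum-of-squared-differences template match; returns the
    # (row, col) placement with the lowest difference.
    ih, iw = image.shape
    th, tw = tmpl.shape
    best, pos = None, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            d = image[r:r + th, c:c + tw].astype(float) - tmpl
            s = float((d * d).sum())
            if best is None or s < best:
                best, pos = s, (r, c)
    return pos

def coarse_to_fine(image, tmpl, levels=2):
    # Build resolution pyramids (here by simple 2x subsampling), find the
    # rough face location at the coarsest level, then refine the estimate
    # within a small window at each finer level.
    pyr_img, pyr_tmpl = [np.asarray(image)], [np.asarray(tmpl)]
    for _ in range(levels):
        pyr_img.append(pyr_img[-1][::2, ::2])
        pyr_tmpl.append(pyr_tmpl[-1][::2, ::2])
    r, c = ssd_match(pyr_img[-1], pyr_tmpl[-1])
    for lvl in range(levels - 1, -1, -1):
        r, c = 2 * r, 2 * c                     # scale estimate up a level
        img, t = pyr_img[lvl], pyr_tmpl[lvl]
        th, tw = t.shape
        r0, c0 = max(r - 2, 0), max(c - 2, 0)
        r1 = min(r + 2, img.shape[0] - th)
        c1 = min(c + 2, img.shape[1] - tw)
        dr, dc = ssd_match(img[r0:r1 + th, c0:c1 + tw], t)
        r, c = r0 + dr, c0 + dc
    return r, c
```

Matching at low resolution first keeps the exhaustive search cheap; only a few candidate positions are re-scored at each higher resolution.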
Another suitable process involves providing several generic 3D face models, and one of the 3D models is selected based on similarity to the source face image. For example, 2D positions (x and y coordinates) of predetermined facial feature points are extracted from the source image. The z coordinates of the points are estimated and the most similar model is retrieved based on geometrical measurements. Once the appropriate 3D model has been selected, the facial features extracted from the source image are mapped to the 3D model to produce the 3D model of the source face. Using this 3D model, the source 2D image is transformed in response to the image template.
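Selecting the most similar generic model from geometric measurements might look like the sketch below, which compares normalized 2D feature-point layouts (the normalization and distance measure are assumptions; the patent does not specify them):

```python
import numpy as np

def select_model(source_pts, model_pts_list):
    # source_pts: Kx2 (x, y) feature-point coordinates from the source image.
    # model_pts_list: list of Kx2 arrays, one per generic 3D face model
    # (e.g. the models' feature points projected to 2D).
    # Points are normalized for translation and scale before comparison so
    # only the geometric arrangement matters.
    def normalize(p):
        p = np.asarray(p, float)
        p = p - p.mean(axis=0)          # remove translation
        return p / np.linalg.norm(p)    # remove scale
    s = normalize(source_pts)
    dists = [np.linalg.norm(s - normalize(m)) for m in model_pts_list]
    return int(np.argmin(dists))        # index of the most similar model
```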
Referring to FIG. 3, the generation of the transformed 2D source image with the desired orientation, size, etc. is illustrated in block 124. In block 126, the transformed 2D image is mapped to the customizable region of the image template, and in block 128, other areas of visible skin in the image template are adjusted to match the skin tones of the transformed face image.
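The skin-tone adjustment of block 128 could be approximated with a simple per-channel mean/standard-deviation color transfer from the face pixels to each visible skin region. This is one plausible technique, not necessarily the patent's:

```python
import numpy as np

def match_skin_tone(region, reference):
    # region: Nx3 float RGB samples from a visible skin region of the
    # template; reference: Mx3 samples from the transformed face image.
    # Shifts the region's per-channel statistics toward the face's.
    region = np.asarray(region, float)
    reference = np.asarray(reference, float)
    mu_r, sd_r = region.mean(0), region.std(0) + 1e-8   # avoid divide-by-0
    mu_f, sd_f = reference.mean(0), reference.std(0)
    adjusted = (region - mu_r) / sd_r * sd_f + mu_f
    return np.clip(adjusted, 0, 255)
```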
Applications for the customized images include personalized children's storybooks where the child's face is merged into stories with their favorite live or cartoon characters, and clothing, posters, mousepads, etc. that combine images of users' faces with their favorite music, TV, and sports celebrities.
In some implementations, attributes of the object image beyond its orientation and size, such as a face's drawing style, are changed. One example is a personalized children's storybook. If the storybook has a certain drawing style, such as a cartoon style, it is desirable for the object image to be processed and converted to that cartoon style as well. The process for doing this includes analyzing the style of the staged photo in terms of color and texture. If the staged photo is black and white, the object image to be merged also needs to be converted to black and white. If the staged photo has small color variations, then the color variations in the object image also need to be reduced. This analysis process can be done after the orientation and size of the template are computed and stored as meta-data (for example, block 110 of FIG. 3). The processing of the object image can be done as part of the transformation process represented in block 124 of FIG. 3.
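The two style rules just described (desaturate for a black-and-white staged photo; otherwise shrink color variation) can be sketched as follows. The saturation proxy and threshold are assumptions for illustration:

```python
import numpy as np

def match_style(obj_img, staged_img, sat_tolerance=8.0):
    # obj_img, staged_img: HxWx3 float RGB arrays in [0, 255].
    # If the staged photo is (near) black and white, desaturate the object
    # image; otherwise pull its color variation down to the staged level.
    staged = np.asarray(staged_img, float)
    obj = np.asarray(obj_img, float)
    # Per-pixel saturation proxy: spread between max and min channel.
    staged_sat = (staged.max(-1) - staged.min(-1)).mean()
    if staged_sat < sat_tolerance:              # staged photo is B&W
        gray = obj.mean(-1, keepdims=True)
        return np.repeat(gray, 3, axis=-1)
    obj_sat = (obj.max(-1) - obj.min(-1)).mean()
    if obj_sat > staged_sat:                    # reduce color variation
        gray = obj.mean(-1, keepdims=True)
        t = staged_sat / obj_sat
        return gray + t * (obj - gray)          # blend toward grayscale
    return obj
```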
Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that a variety of alternate and/or equivalent implementations may be substituted for the specific embodiments shown and described without departing from the scope of the present invention. This application is intended to cover any adaptations or variations of the specific embodiments discussed herein. Therefore, it is intended that this invention be limited only by the claims and the equivalents thereof.
Claims (17)
1. A method for producing an image, comprising:
creating an image template having a customizable region including an image of a face;
extracting a 2D object from a source image, including identifying a face in the source image and extracting features of the identified face;
creating a 3D model of the extracted 2D object, including fitting the extracted features to a 3D face model;
manipulating the 3D model in response to the customizable region to transform the extracted 2D object to a modified 2D object, including generating a plurality of virtual views of the extracted face in various orientations such that the modified 2D object is a reoriented image of the identified face having an angular position based on the customizable region; and
merging the transformed 2D object into the image template by a computer processor to create a customized image.
2. The method of claim 1, wherein creating the image template includes analyzing a staged image.
3. The method of claim 1, wherein creating the 3D model includes:
providing a plurality of generic 3D face models;
selecting one of the generic 3D face models based on the source image; and applying the extracted features to the selected 3D face model.
4. The method of claim 1, wherein creating the image template includes identifying visible skin regions, and wherein merging the transformed 2D image includes adjusting the visible skin regions to match skin tones of the image of the face in the source image.
5. The method of claim 1, wherein creating the image template includes analyzing a staged image.
6. The method of claim 5, wherein analyzing the staged image includes extracting meta-data regarding the customizable region.
7. The method of claim 1, wherein creating the image template includes receiving a staged image.
8. The method of claim 1, further comprising receiving the source image.
9. The method of claim 1, further comprising printing the customized image.
10. The method of claim 1, wherein fitting the extracted features includes fitting the extracted features to a 3D morphable model.
11. A system for customizing an image, comprising:
a computer system;
a memory accessible by the computer system storing an image template having a customizable region, wherein the customizable region includes an image of a face;
wherein the computer system is operable to:
identify a face in a source image;
extract a 2D object including the identified face from the source image corresponding to the customizable region;
extract features of the identified face;
create a 3D face model of the extracted 2D object;
fit the extracted features to the 3D face model;
manipulate the 3D face model in response to the customizable region to transform the extracted 2D object to a modified 2D object, including generating a plurality of virtual views of the extracted face in various orientations such that the modified 2D object is a reoriented image of the identified face having an angular position based on the customizable region; and
merge the transformed 2D object into the image template to create a customized image.
12. The system of claim 11, wherein the memory is integral to the computer system.
13. The system of claim 11, wherein the computer system is operable to identify visible skin regions in the image template and adjust the visible skin regions to match skin tones of the image of the face.
14. The system of claim 11, wherein the computer system is operable to analyze a staged image to create the image template.
15. The system of claim 11, further comprising a printer for receiving and printing the customized image.
16. A system for customizing an image, comprising:
a computer system;
a memory accessible by the computer system storing an image template having a customizable region, wherein the customizable region includes an image of a face;
wherein the computer system is operable to:
identify a face in a source image;
extract a 2D object including the identified face from the source image corresponding to the customizable region;
extract features of the identified face;
create a 3D face model of the extracted 2D object, including selecting one generic 3D face model from a plurality of generic 3D face models based on the source image, and applying the extracted features to the selected 3D face model;
manipulate the 3D face model in response to the customizable region to transform the extracted 2D object to a modified 2D object, including generating a plurality of virtual views of the extracted face in various orientations such that the modified 2D object is a reoriented image of the identified face having an angular position based on the customizable region; and
merge the transformed 2D object into the image template to create a customized image.
17. A machine readable medium storing a software program that when executed performs a method comprising:
creating an image template having a customizable region including an image of a face;
extracting a 2D object from a source image, including identifying a face in the source image and extracting features of the identified face;
creating a 3D face model of the extracted 2D object, including
providing a plurality of generic 3D face models;
selecting one of the generic 3D face models based on the source image; and
applying the extracted features to the selected 3D face model;
manipulating the 3D model in response to the customizable region to transform the extracted 2D object to a modified 2D object, including generating a plurality of virtual views of the extracted face in various orientations such that the modified 2D object is a reoriented image of the identified face having an angular position based on the customizable region; and
merging the transformed 2D object into the image template to create a customized image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/263,256 US8391642B1 (en) | 2008-05-12 | 2008-10-31 | Method and system for creating a custom image |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US5253408P | 2008-05-12 | 2008-05-12 | |
US12/263,256 US8391642B1 (en) | 2008-05-12 | 2008-10-31 | Method and system for creating a custom image |
Publications (1)
Publication Number | Publication Date |
---|---|
US8391642B1 true US8391642B1 (en) | 2013-03-05 |
Family
ID=47749070
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/263,256 Active 2031-03-07 US8391642B1 (en) | 2008-05-12 | 2008-10-31 | Method and system for creating a custom image |
Country Status (1)
Country | Link |
---|---|
US (1) | US8391642B1 (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5577179A (en) * | 1992-02-25 | 1996-11-19 | Imageware Software, Inc. | Image editing system |
US6181806B1 (en) * | 1993-03-29 | 2001-01-30 | Matsushita Electric Industrial Co., Ltd. | Apparatus for identifying a person using facial features |
US6366316B1 (en) * | 1996-08-30 | 2002-04-02 | Eastman Kodak Company | Electronic imaging system for generating a composite image using the difference of two images |
US6556196B1 (en) * | 1999-03-19 | 2003-04-29 | Max-Planck-Gesellschaft Zur Forderung Der Wissenschaften E.V. | Method and apparatus for the processing of images |
US6606096B2 (en) * | 2000-08-31 | 2003-08-12 | Bextech Inc. | Method of using a 3D polygonization operation to make a 2D picture show facial expression |
US20070237421A1 (en) * | 2006-03-29 | 2007-10-11 | Eastman Kodak Company | Recomposing photographs from multiple frames |
US20070258627A1 (en) * | 2001-12-17 | 2007-11-08 | Geng Z J | Face recognition system and method |
US20080143854A1 (en) * | 2003-06-26 | 2008-06-19 | Fotonation Vision Limited | Perfecting the optics within a digital image acquisition device using face detection |
US7440593B1 (en) * | 2003-06-26 | 2008-10-21 | Fotonation Vision Limited | Method of improving orientation and color balance of digital images using face detection information |
US7643671B2 (en) * | 2003-03-24 | 2010-01-05 | Animetrics Inc. | Facial recognition system and method |
US7711155B1 (en) * | 2003-04-14 | 2010-05-04 | Videomining Corporation | Method and system for enhancing three dimensional face modeling using demographic classification |
US7859551B2 (en) * | 1993-10-15 | 2010-12-28 | Bulman Richard L | Object customization and presentation system |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110157218A1 (en) * | 2009-12-29 | 2011-06-30 | Ptucha Raymond W | Method for interactive display |
US9253447B2 (en) * | 2009-12-29 | 2016-02-02 | Kodak Alaris Inc. | Method for group interactivity |
US20110157228A1 (en) * | 2009-12-29 | 2011-06-30 | Ptucha Raymond W | Method for group interactivity |
US10636326B2 (en) | 2010-10-01 | 2020-04-28 | Sony Corporation | Image processing apparatus, image processing method, and computer-readable storage medium for displaying three-dimensional virtual objects to modify display shapes of objects of interest in the real world |
US20120082341A1 (en) * | 2010-10-01 | 2012-04-05 | Yuichiro Takeuchi | Image processing apparatus, image processing method, and computer-readable storage medium |
US9536454B2 (en) * | 2010-10-01 | 2017-01-03 | Sony Corporation | Image processing apparatus, image processing method, and computer-readable storage medium |
US20140025190A1 (en) * | 2011-03-02 | 2014-01-23 | Andy Wu | Single-Action Three-Dimensional Model Printing Methods |
US8817332B2 (en) * | 2011-03-02 | 2014-08-26 | Andy Wu | Single-action three-dimensional model printing methods |
US20140168375A1 (en) * | 2011-07-25 | 2014-06-19 | Panasonic Corporation | Image conversion device, camera, video system, image conversion method and recording medium recording a program |
US20140085293A1 (en) * | 2012-09-21 | 2014-03-27 | Luxand, Inc. | Method of creating avatar from user submitted image |
US9314692B2 (en) * | 2012-09-21 | 2016-04-19 | Luxand, Inc. | Method of creating avatar from user submitted image |
US20140198177A1 (en) * | 2013-01-15 | 2014-07-17 | International Business Machines Corporation | Realtime photo retouching of live video |
US20160180441A1 (en) * | 2014-12-22 | 2016-06-23 | Amazon Technologies, Inc. | Item preview image generation |
US10083357B2 (en) | 2014-12-22 | 2018-09-25 | Amazon Technologies, Inc. | Image-based item location identification |
US9965793B1 (en) | 2015-05-08 | 2018-05-08 | Amazon Technologies, Inc. | Item selection based on dimensional criteria |
US20190072934A1 (en) * | 2017-09-01 | 2019-03-07 | Debbie Eunice Stevens-Wright | Parametric portraiture design and customization system |
US20190073115A1 (en) * | 2017-09-05 | 2019-03-07 | Crayola, Llc | Custom digital overlay kit for augmenting a digital image |
US11037281B2 (en) * | 2017-11-22 | 2021-06-15 | Tencent Technology (Shenzhen) Company Limited | Image fusion method and device, storage medium and terminal |
WO2020061622A1 (en) * | 2018-09-26 | 2020-04-02 | Oobee Doobee and Friends Pty Ltd | A storybook compilation system |
US11380050B2 (en) * | 2019-03-22 | 2022-07-05 | Tencent Technology (Shenzhen) Company Limited | Face image generation method and apparatus, device, and storage medium |
US11282275B1 (en) * | 2020-11-17 | 2022-03-22 | Illuni Inc. | Apparatus and method for generating storybook |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8391642B1 (en) | Method and system for creating a custom image | |
US11600033B2 (en) | System and method for creating avatars or animated sequences using human body features extracted from a still image | |
US20210224765A1 (en) | System and method for collaborative shopping, business and entertainment | |
US11450075B2 (en) | Virtually trying cloths on realistic body model of user | |
US10002337B2 (en) | Method for collaborative shopping | |
Huang et al. | Arcimboldo-like collage using internet images | |
US8422794B2 (en) | System for matching artistic attributes of secondary image and template to a primary image | |
US9959453B2 (en) | Methods and systems for three-dimensional rendering of a virtual augmented replica of a product image merged with a model image of a human-body feature | |
US8854395B2 (en) | Method for producing artistic image template designs | |
US8849853B2 (en) | Method for matching artistic attributes of a template and secondary images to a primary image | |
US8289340B2 (en) | Method of making an artistic digital template for image display | |
AU2017228685A1 (en) | Sketch2painting: an interactive system that transforms hand-drawn sketch to painting | |
US8345057B2 (en) | Context coordination for an artistic digital template for image display | |
US20130215116A1 (en) | System and Method for Collaborative Shopping, Business and Entertainment | |
Zhang et al. | Compositional model-based sketch generator in facial entertainment | |
US20110029914A1 (en) | Apparatus for generating artistic image template designs | |
CN102473318A (en) | Processing digital templates for image display | |
CN114930399A (en) | Image generation using surface-based neurosynthesis | |
US10911695B2 (en) | Information processing apparatus, information processing method, and computer program product | |
JP2010507854A (en) | Method and apparatus for virtual simulation of video image sequence | |
JP7278724B2 (en) | Information processing device, information processing method, and information processing program | |
EP3091510A1 (en) | Method and system for producing output images and method for generating image-related databases | |
US20210256174A1 (en) | Computer aided systems and methods for creating custom products | |
KR102178396B1 (en) | Method and apparatus for manufacturing image output based on augmented reality | |
US20180232781A1 (en) | Advertisement system and advertisement method using 3d model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment | Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; Assignors: PETRUSZKA, ADAM; LIN, QIAN; Reel/Frame: 021814/0424; Effective date: 2008-10-31 |
STCF | Information on status: patent grant | Free format text: PATENTED CASE |
FPAY | Fee payment | Year of fee payment: 4 |
MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); Entity status of patent owner: LARGE ENTITY; Year of fee payment: 8 |