US20100302275A1 - System and method for displaying selected garments on a computer-simulated mannequin - Google Patents

Info

Publication number
US20100302275A1
Authority
US
United States
Prior art keywords
garment
mannequin
garments
rendering
images
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/646,062
Inventor
Carlos Saldanha
Andrea M. Froncioni
Paul A. Kruszewski
Gregory J. Saumier-Finch
Caroline M. Trudeau
Fadi G. Bachaalani
Nader Morcos
Sylvain B. Cote
Patrick R. Guevin
Jean-Francois B. St.Arnaud
Serge Veillet
Louise L. Guay
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
My Virtual Model Inc
Original Assignee
My Virtual Model Inc
Application filed by My Virtual Model Inc
Priority to US12/646,062 (US20100302275A1)
Publication of US20100302275A1
Priority to US13/098,178 (US20110273444A1)
Priority to US13/350,716 (US20120188232A1)
Legal status: Abandoned

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 — TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S — TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S345/00 — Computer graphics processing and selective visual display systems
    • Y10S345/949 — Animation processing method
    • Y10S345/952 — Simulation

Abstract

A method and system for providing a computer-simulated environment for displaying a selected mannequin wearing a combination of selected garments. In one aspect, three-dimensional scenes containing mannequin and garment objects are created within a three-dimensional modeling environment, and a simulation is performed using a cloth simulator within the modeling environment to model the construction, draping, and collision of the garment with the mannequin. Rendering frames corresponding to a variety of garments, mannequins, garment dimensions, garment styles, wearing patterns, viewing angles, and other parameters are then generated, from which images can be rendered and displayed in accordance with user requests.

Description

    FIELD OF THE INVENTION

  • The present invention relates to methods and systems for producing images of computer-simulated clothing.
  • BACKGROUND
  • The concept of a computerized or simulated dressing environment is a user-operated display system that generates computer-simulated images of a human figure wearing one or more selected garments. The simulated human figure thus represents a virtual model or mannequin for modeling clothes. Such an environment should ideally provide the user with the capability of viewing the mannequin and garment from a plurality of viewpoints to give a three-dimensional experience. By allowing the user to also select in some manner the particular human figure that is to wear the garment, an individualized experience is provided that allows the user to see what selected clothes look like when worn by different people.
  • The degree to which the system takes into account the physical forces acting on a garment as it is worn determines in large part how visually realistic the computer-generated images are. Simulation of the draping and collision of a garment object with a mannequin using a three-dimensional modeling environment (e.g., Maya, manufactured by Alias Wavefront of Toronto, Canada) allows the rendering of a two-dimensional image of the mannequin and garment that is quite realistic in appearance. It is desirable in a simulated dressing environment, however, for a user to be able to select among a variety of different mannequins and/or garments for displaying. Accordingly, a simulated dressing environment could be implemented with a three-dimensional modeling environment simply by simulating particular dressing scenes in response to user inputs and then rendering two-dimensional images directly from the simulation scene. The massive amount of computation required to perform a collision and draping simulation for any particular mannequin and garment, however, makes three-dimensional modeling by itself an impractical way, in most commonly available computing environments, to generate the multiple images of different mannequins and garments needed to implement a dressing environment.
  • SUMMARY OF THE INVENTION
  • A primary aspect of the present invention is a method for efficiently producing images of a computer-simulated mannequin wearing a garment or garments, the geometries of which are defined by selected mannequin and garment parameter values. An image, as the term is used herein, includes any spatial function derived from a perspective projection of a three-dimensional scene either existing in the real world or as modeled by a computer. This definition includes not only the usual two-dimensional intensity image, such as that formed upon the human retina when viewing a scene in the real world or that captured on photographic film through a camera aperture, but also two-dimensional functions incorporating both intensity and phase information for use in wavefront reconstruction (i.e., holograms). The present invention primarily deals with digital images (i.e., discrete two-dimensional functions) derived from three-dimensional scenes by the process of rendering. An image should therefore be taken to mean any form of such rendered data that is capable of being represented internally by a computer and/or transmitted over a computer network. When referring specifically to a visually informative representation that can actually be perceived by the human eye, such as that produced on a computer display, the term visual image will be used.
  • In one embodiment, the present invention includes performing a draping and collision simulation of a garment with a mannequin within a three-dimensional simulation scene to generate a rendering frame from which an image of a mannequin wearing a garment can be rendered, and further includes generating rendering frames containing mannequins and garments as defined by selected parameter values by shape blending the mannequins and/or garments of previously generated rendering frames. Linear combinations of the parameter values of previously generated rendering frames (e.g., as produced by interpolating between such values) are thus used to generate rendering frames with the desired mannequin and garment, as sketched below.
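  • To make the shape-blending idea concrete, the following is a minimal sketch assuming two reference rendering frames whose garment meshes share the same topology; the function name and array layout are illustrative assumptions, not the patent's implementation:

        # Blend two reference rendering frames by linearly interpolating
        # corresponding mesh vertex positions (illustrative sketch only).
        import numpy as np

        def blend_frames(vertices_a: np.ndarray, vertices_b: np.ndarray,
                         t: float) -> np.ndarray:
            """Return blended (N, 3) vertex positions for t in [0, 1].

            vertices_a and vertices_b hold corresponding vertex positions
            from two reference rendering frames (e.g., a small and a large
            size of the same garment fitted on the same mannequin).
            """
            return (1.0 - t) * vertices_a + t * vertices_b

        # Example: a garment halfway between the two reference sizes.
        # blended = blend_frames(small_garment_verts, large_garment_verts, 0.5)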
  • In another embodiment, the invention includes the generation of a rendering frame containing a mannequin wearing a particular garment from a collision and draping simulation, and the further addition of garment constraints, corresponding to particular predefined shells around the mannequin, that mimic the way the garment behaves when worn with another particular garment. These garment constraints are defined so as to conform to various dressing conventions or rules relating to how clothes are worn, e.g., the wearing of a coat over a shirt. Rendering frames corresponding to different versions of a garment may thus be produced, where the information contained within separately generated rendering frames corresponding to particular versions of garments can then be used to produce a composite image of the garments worn in combination. For example, images can be rendered separately from each such rendering frame and layered upon one another in an appropriate order, or a composite image can be rendered using the depth information contained in each rendering frame. In this way, mixing and matching of garments on a mannequin is facilitated.
  • Another embodiment of the invention relates to a computerized dressing environment for displaying a selected garment worn by a selected mannequin, in which garment images rendered from a three-dimensional simulation scene are stored in a repository and displayed in accordance with user inputs. The garment images include images of a plurality of garments, including versions of garments, and renderings of each garment from a plurality of viewpoints so as to provide a three-dimensional experience to the user. In order to display a selected mannequin wearing selected multiple garments, garment images corresponding to particular versions are selected in accordance with versioning rules by a versioning rule interpreter. The appropriate garment images are then layered upon an image of a selected mannequin to create a composite image, with the layering order dictated by compositing rules derived from dressing conventions. Another embodiment of the invention relates to a method for efficiently populating such a garment image repository with garment images by using the methods described above.
  • Other objects, features, and advantages of the invention will become evident in light of the following detailed description of exemplary embodiments considered in conjunction with the referenced drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1A shows the panels of a garment object within a simulation scene.
  • FIG. 1B shows the initial frame of a simulation scene in which the garment is placed over the mannequin in a dressing pose.
  • FIG. 1C shows the final frame of a simulation scene after simulation of draping and collision of a garment with a mannequin and animation of the mannequin to a display pose.
  • FIG. 2 shows the frames of a simulation scene as a simulation progresses.
  • FIG. 3 shows the modifying of object parameters within a rendering frame and performance of a partial further simulation to generate a modified rendering frame.
  • FIG. 4 shows the rendering of garment images from rendering frames with different camera positions.
  • FIG. 5 shows a plurality of pre-rendered garment images and the process steps for storing the images in a repository.
  • FIG. 6 shows constraining shells defined around a mannequin that are used in defining particular versions of a garment.
  • FIGS. 7A through 7C show multiple versions of a garment as defined within a rendering frame.
  • FIGS. 8A and 8B show a depiction of the rendering frames for two garments and the corresponding garment images rendered therefrom as displayed in layers.
  • FIG. 9 shows a composite image made up of multiple garment images.
  • FIG. 10 is a block diagram showing the components of a system for populating a repository with images.
  • FIG. 11 is a block diagram of an implementation of a system for displaying selected images of garments worn by a mannequin over a network.
  • DETAILED DESCRIPTION OF THE INVENTION
  • The present invention is a system and method for efficiently providing a computer-simulated dressing environment in which a user is presented with an image of a selected human figure wearing selected clothing. In such an environment, a user selects parameter values that define the form of the human figure, referred to herein as a virtual mannequin, that is to wear the selected clothing. Such parameters may be actual body measurements that define in varying degrees of precision the form of the mannequin, or could be the selection of a particular mannequin from a population of mannequins available for presentation to the user. One type of user may input parameter values that result in a virtual mannequin that is most representative of the user's own body in order to more fully simulate the experience of actually trying on a selected garment. Other types of users may select mannequins on a different basis in order to obtain images for use in animated features or as an aid in the manufacturing of actual garments. The particular garment to be worn by the virtual mannequin is selected from a catalogue of available garments, where each garment may be further selected according to, e.g., style, color, or physical dimension.
  • In order to provide a more realistic representation of the physical fitting of the garment on the mannequin, an image of a virtual mannequin wearing selected garments is generated by using a three-dimensional modeling environment that provides a cloth simulation of the garment interacting with the mannequin. This provides a more visually accurate representation, presented to the user in the form of a two-dimensional image rendered from the three-dimensional model. The simulation is performed by constructing three-dimensional models of the garment and mannequin using vector or polygon-based graphics techniques, referred to as garment and mannequin objects, respectively, and placing the garment and mannequin objects together in a three-dimensional simulation scene. A scene in this context is a three-dimensional data structure that is made to contain one or more three-dimensional objects and defines their relative position and motion. Such a scene may be organized into a number of frames representing discrete points during a simulation or animation sequence. An image may be rendered from a frame by computing a perspective projection of the objects contained in the scene in accordance with a specified viewpoint and lighting condition, as illustrated by the sketch below.
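  • The following toy pinhole-camera projection, with hypothetical names throughout, illustrates how a frame's three-dimensional points map to two-dimensional image coordinates for a given viewpoint; a real renderer would also handle camera rotation, clipping, shading, and lighting:

        # Project world-space points to image coordinates for a camera at
        # camera_pos looking down the -Z axis (illustrative sketch only).
        import numpy as np

        def project_points(points: np.ndarray, camera_pos: np.ndarray,
                           focal_length: float = 1.0) -> np.ndarray:
            """Map (N, 3) world-space points to (N, 2) image coordinates."""
            rel = points - camera_pos      # camera-space coordinates
            z = -rel[:, 2]                 # depth in front of the camera
            return focal_length * rel[:, :2] / z[:, None]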
  • After constructing a simulation scene containing the mannequin and garment, the garment is fitted on the mannequin by simulating the draping and collision of the garment with the mannequin due to physical forces. Such a simulation may be facilitated by modeling garments as individual panels corresponding to the sewing patterns used to construct the actual garments, where the panels are closed surfaces bounded by curved or straight lines. Texture mapping may be used to map different cloth fabrics and colors, and ornamental details such as buttons, collars, and pockets, to the garment object in the simulation scene. One or more rendering frames are then created by performing a draping and collision simulation of the garment with the mannequin, which includes animating the mannequin from a dressing pose to a display pose. The animation takes place within the three-dimensional modeling system, which simulates motion and collision of the cloth making up the garment as the mannequin moves. A two-dimensional image for presentation to the user may then be rendered from the rendering frame in accordance with a selected camera position that determines the particular view that is rendered. In certain embodiments, the simulation may provide for a plurality of display poses by the mannequin, with rendering frames generated for each such display pose.
  • It is desirable for the simulated environment to have the capability of displaying a number of different mannequins wearing garments of different dimensions. One way of providing this functionality is to perform the simulation and rendering as described above separately and in real time for each selected mannequin and garment. Simulating the draping and collision of a garment with a mannequin is computationally intensive, however, and real-time simulation may thus not be practical in most situations. In order to reduce the computational overhead associated with displaying multiple mannequins or garments of selected dimensions, the simulation may instead be fully performed with representative mannequins and garments defined by reference parameters to generate three-dimensional reference rendering frames. Shape blending techniques are then used to modify the mannequin and/or garment parameters to the desired selected values by interpolating between the corresponding parameter values of reference rendering frames. In accordance with the invention, garment and/or mannequin parameter values corresponding to the desired changes are modified within a rendering frame, and a partial further simulation is performed that creates a new rendering frame containing the changed mannequin and/or garment. For example, the dimensions of the individual panels making up the garment may be changed, with the resulting panels then being blended together within the simulation environment. Similarly, the dimensions of a mannequin may be changed by blending the shapes of previously simulated mannequins. The parameters are thus keyframed within the simulation sequence, where keyframing, in this context, refers to assigning values to specific garment or mannequin parameters in a simulation scene and generating a new frame using a linear combination of parameter values (e.g., interpolation or extrapolation) generated from a previous simulation. In this way, a new rendering frame is generated that contains a mannequin with different measurements and/or a garment with different dimensions as selected by the user. Thus, the simulation need only be fully performed once with a representative garment and mannequin, with keyframing of parameter values within the three-dimensional modeling system being used to generate rendering frames containing a particular mannequin and garment as selected by a user. Simulating the modified garment interacting with the mannequin as the partial further simulation takes place requires much less computation than a complete resimulation of the draping and collision of a changed garment over a mannequin. Only when the user selects a garment or mannequin that cannot be generated by linearly combining parameters from a previously generated rendering frame does a full draping and collision simulation need to be performed, as the sketch following this paragraph illustrates.
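  • A minimal sketch of that decision, reusing the blend_frames helper from the earlier sketch and assuming rendering frames are cached and keyed by a single scalar parameter (all names hypothetical):

        # Reuse reference rendering frames via interpolation when the
        # requested parameter is bracketed by cached references; otherwise
        # run the full draping-and-collision simulation once and cache it.
        def get_rendering_frame(param, reference_frames, full_simulation):
            refs = sorted(reference_frames)        # cached parameter values
            for lo, hi in zip(refs, refs[1:]):
                if lo <= param <= hi:              # requested value bracketed
                    t = (param - lo) / (hi - lo)
                    return blend_frames(reference_frames[lo],
                                        reference_frames[hi], t)
            frame = full_simulation(param)         # expensive fallback
            reference_frames[param] = frame        # becomes a new reference
            return frame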
  • Another desirable feature of a simulated dressing environment is for the user to be able to display a mannequin wearing multiple selected garments (e.g., outfits). In one embodiment of the invention, images of a mannequin wearing multiple selected garments are generated by simulating the simultaneous draping and collision of multiple garments with the virtual mannequin in a single simulation scene to create a single rendering frame. In this embodiment, dressing rules may be used that dictate how garments should be layered in the simulation scene in accordance with dressing conventions. Changes to the mannequin and/or garment can then be made to the rendering frame by the keyframing and partial further simulation technique described above. The two-dimensional image of the mannequin wearing the multiple garments could then be rendered using the Z-coordinates (where the Z-coordinate represents depth in the three-dimensional model) of the mannequin and garment objects in the rendering frame. Such rendering using Z-coordinates may be performed, for example, based on individual pixels (Z-buffering) or by sorting individual polygons based upon a representative Z-coordinate.
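  • As a concrete illustration of per-pixel Z-buffering, the following sketch assumes each object in the rendering frame has already been rasterized to a color image plus a depth image; at every pixel, the nearest surface wins:

        # Composite rasterized layers using per-pixel depth (Z-buffering).
        import numpy as np

        def z_composite(layers):
            """layers: list of (color (H, W, 3), depth (H, W)) arrays, where
            smaller depth means closer to the camera; returns the composite."""
            color = np.zeros_like(layers[0][0])
            depth = np.full(layers[0][1].shape, np.inf)
            for layer_color, layer_depth in layers:
                closer = layer_depth < depth   # pixels where this layer wins
                color[closer] = layer_color[closer]
                depth[closer] = layer_depth[closer]
            return color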
  • As noted above, however, draping and collision simulation is computationally intensive, and even more so in the case of multiple garments, making real-time simulation of user-selected mannequins wearing selected multiple garments impractical in most situations. Therefore, in a presently preferred embodiment of the invention, two-dimensional images of mannequins and single garments are pre-rendered from rendering frames generated as described above and stored in a repository for later display in response to user inputs, where the garment images correspond to a plurality of different garments and views of such garments. The methods described above enable such a repository to be efficiently populated. In addition, in order to avoid the computational complexity of pre-rendering two-dimensional images corresponding to every possible combination of multiple garments on every possible mannequin, multiple versions of single garments may be defined which are then simulated and rendered into two-dimensional images, where the two-dimensional renderings of specific garment versions may then be combined with renderings of specific versions of other garments according to versioning rules. Such versions of garments enable the garment images rendered from separate simulations to be combined in a composite image.
  • Particular versions of particular garments are simulated and rendered into two-dimensional garment images in a manner that mimics the physical interaction between multiple garments in a simultaneous draping and collision simulation. An approximation to such a simulation is effected by creating each version of a garment such that the garment is constrained to reside within or outside of particular predefined shells defined around the mannequin. Different versions of a garment are created by first simulating the draping and collision of a representative garment with a mannequin as described above. Shells are then defined around the mannequin, and portions of the garment are constrained to reside either inside or outside of particular shells according to the particular version being created. Versioning rules then define which versions of the garment objects are to be used when particular multiple garments are selected to be worn together by the mannequin, as in the sketch below. Collisions of multiple garments with one another are thus resolved in a manner that allows single garments to be independently simulated and rendered for later combination into a composite image. Such combination may be performed by layering the images in a prescribed order or by using the depth information contained in the rendering frame of each garment.
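  • A versioning rule can be pictured as a lookup from a garment combination to the version of each garment to display; the table below is hypothetical and only echoes the shirt-and-jacket example used in the text:

        # Hypothetical versioning-rule table and interpreter.
        VERSIONING_RULES = {
            frozenset({"shirt", "jacket"}): {"shirt": "inside_shell_C",
                                             "jacket": "outside_shell_J"},
            frozenset({"shirt", "trousers"}): {"shirt": "tucked_in",
                                               "trousers": "default"},
        }

        def select_versions(selected_garments):
            """Return a {garment: version} mapping for the combination."""
            rule = VERSIONING_RULES.get(frozenset(selected_garments), {})
            return {g: rule.get(g, "default") for g in selected_garments}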
  • The pre-rendered two-dimensional garment images are then combinable into a composite display, with the particular version images to be used being chosen by a versioning rule interpreter that interprets the versioning rules. Such two-dimensional images of garment versions are generated for all of the possible mannequins and single garments that the user is allowed to select for display. A repository of two-dimensional images is thus created whose individual images can be layered upon one another in order to display a selected mannequin wearing selected multiple garments.
  • The two-dimensional images are layered upon one another in a prescribed order to create the final composite two-dimensional image presented to the user. The layering is performed using a rule-based interpreter that interprets compositing rules defining in what order specific garments should appear relative to other garments. Such compositing rules are based upon dressing rules that define how clothes are conventionally worn. For example, one such dressing rule is that jackets are worn over shirts, and the corresponding compositing rule would be that the rendering of a jacket should be layered on top of the rendering of a shirt, as in the sketch below.
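  • A compositing-rule interpreter can be as simple as sorting images by a layer priority derived from dressing conventions; the priorities here are invented for illustration:

        # Dressing conventions expressed as layer priorities: higher values
        # are drawn later and therefore appear on top.
        LAYER_PRIORITY = {"mannequin": 0, "trousers": 1, "shirt": 2, "jacket": 3}

        def compositing_order(names):
            """Sort image names so that later entries are drawn on top."""
            return sorted(names, key=lambda name: LAYER_PRIORITY[name])

        # compositing_order(["jacket", "mannequin", "shirt"])
        # -> ["mannequin", "shirt", "jacket"]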
  • Independently pre-rendering single garments also allows the computational overhead to be further reduced by generating a rendering frame with a representative mannequin and garment, and then modifying the garment and/or mannequin by keyframing the garment and/or mannequin parameter values in the rendering frame and performing a partial further simulation of the interaction of the modified garment with the mannequin, as described above. The two-dimensional images derived from the rendering frames may also include renderings from a plurality of camera positions. A user may then select a particular viewing perspective in which to view the selected mannequin wearing selected multiple garments, with the pre-rendered images used to make up the composite image being rendered from the camera position corresponding to that viewing perspective. The pre-rendering procedure can thus be performed for a population of mannequins and for a plurality of different garments and versions of garments at a plurality of camera positions, to generate a repository of two-dimensional garment images that may be combined together in response to user selection of garment and/or mannequin parameter values.
  • In another embodiment, a system for displaying a selected computer-simulated mannequin wearing a selected garment includes a user interface by which a user selects a mannequin image and one or more garments to be worn by the mannequin from a repository of pre-rendered garment images, the mannequin image and garment images then being combined to form a composite image. The system further includes a versioning rule interpreter for choosing among versions of the garment images in accordance with versioning rules that define which versions of particular garments are permitted when combined with another particular garment. Versions of garment images may also be defined which differ in a fitting characteristic (e.g., loose, snug, etc.) or a wearing style (e.g., shirt tucked in or out, sweater buttoned or unbuttoned, etc.). A compositing rule interpreter is provided for displaying the two-dimensional images of the versions of user-selected garments chosen by the versioning rule interpreter, and of a selected mannequin, in a layered order dictated by compositing rules.
  • In this manner, a repository of garment images is created which can be drawn upon to provide a simulated dressing environment for displaying a selected computer-simulated mannequin wearing selected garments. A user interface enables the user to select a particular mannequin (e.g., derived from specified body measurements) and particular garments to be worn by the mannequin. Certain embodiments may also allow the user to specify the viewpoint of the image eventually rendered to a display and/or the display pose of the mannequin. Exemplary applications of the dressing environment include its use as part of a computerized catalogue in which users select particular garments to be worn by particular mannequins, and as a tool for animators to generate images of dressed mannequins that can be incorporated in an animation sequence. The dressing environment can also be used to simulate the appearance of garments as an aid to the manufacture of actual garments from predefined sewing patterns.
  • The garment images are two-dimensional images of garments that are pre-rendered from three-dimensional rendering frames generated by simulating the draping and collision of a garment with a mannequin in a three-dimensional modeling environment. The repository contains garment images that differ according to garment type, style, dimensions, and the particular mannequin which is to be shown wearing the garment. Additionally, different versions of each garment are provided which are generated so as to be combinable with other garment images on a selected mannequin by layering the garment images on a two-dimensional image of the selected mannequin in a prescribed order. Versions of garments are also defined that differ according to a fitting characteristic (e.g., loose fit, snug fit, etc.) or a wearing style (e.g., buttoned, unbuttoned, tucked in or out, etc.). The repository contains the garment images rendered from a plurality of camera positions, so that a user is able to dress a selected mannequin with selected garments and view the mannequin from a plurality of angles. In certain embodiments, pre-rendered images corresponding to a plurality of mannequin display poses are also stored in the repository.
  • In an alternative embodiment, rendering frames are stored in the repository after extraction of the garment object, so that an image can be rendered from an arbitrary camera position. Because the displayed images are ultimately derived from three-dimensional simulations, a visually realistic experience is provided to the user, but in a much more efficient manner than would be the case if the simulations were performed in real time.
  • Referring to FIGS. 1A through 1C, three stages of the simulation process are shown in which objects corresponding to a garment and a mannequin are generated and placed within a three-dimensional scene.
  • FIG. 1A shows a garment object made up of a plurality of garment panels GP, where the panels can be defined with respect to shape and dimension so as to correspond to the sewing patterns used to construct an actual garment. A panel is defined as a region enclosed by two or more NURBS curves which are joined together and tessellated to form a garment. FIG. 1B shows the garment panels GP placed around a mannequin M in a dressing pose.
  • FIG. 1C shows the three-dimensional scene after the simulation process has completed. The garment panels GP have been joined together (corresponding to the stitching of sewing patterns) to form the garment G. The draping and collision of the garment G with the mannequin M due to physical forces has also been simulated, and the mannequin has been animated from the dressing pose to a display pose, with motion of the garment being concomitantly simulated.
  • FIG. 2 shows a number of representative frames F1 through F70 of the simulation scene as the simulation progresses. Frame F1 corresponds to the initial sewing position as previously depicted in FIG. 1B. Frames F2 and F3 show the progression of the draping and collision simulation, which culminates at frame F40, in which the completed garment G is fitted over the mannequin M in the dressing pose. The simulation further progresses to frame F70, where the mannequin M is animated to move to a display pose, moving the garment G along with it. Frame F70 thus forms a rendering frame from which a two-dimensional image of the garment G can be rendered and deposited into the repository as a garment image. In certain embodiments, rendering frames corresponding to a number of different display poses may be generated.
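  • To give a flavor of the computation each simulation frame involves, here is a deliberately toy mass-spring cloth step with a single sphere standing in for the mannequin; production cloth solvers, such as the one in the modeling environment, are far more elaborate:

        # One explicit-Euler step of a toy mass-spring cloth with a sphere
        # collider (unit particle masses; illustrative only).
        import numpy as np

        def cloth_step(pos, vel, springs, rest, dt=0.01, k=50.0, g=9.8,
                       center=np.zeros(3), radius=1.0):
            force = np.zeros_like(pos)
            force[:, 2] -= g                         # gravity
            for (i, j), r in zip(springs, rest):     # Hooke's-law springs
                d = pos[j] - pos[i]
                length = np.linalg.norm(d) + 1e-9
                f = k * (length - r) * d / length
                force[i] += f
                force[j] -= f
            vel = vel + dt * force
            pos = pos + dt * vel
            d = pos - center                         # sphere collision:
            dist = np.linalg.norm(d, axis=1)         # project penetrating
            hit = dist < radius                      # particles back out
            pos[hit] = center + d[hit] * (radius / dist[hit])[:, None]
            return pos, vel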
  • For each garment type available for selection, a rendering frame can be generated as described above, and a garment image corresponding to the garment type rendered therefrom. To efficiently provide for different garment dimensions and mannequins, a full draping and collision simulation starting with the garment panels is first performed for each garment type with a reference garment and a reference mannequin, to thereby generate a reference rendering frame. The mannequin and/or garment parameter values are then modified in the reference rendering frame, with the geometry of the scene then being updated by the cloth solver in accordance with the internal dynamic model of the modeling environment. The three-dimensional modeling environment generates the modified mannequin and/or garment objects as linear combinations of parameters calculated in the prior reference simulation, so that a full resimulation does not have to be performed; only a partial resimulation is needed to generate a new rendering frame containing the modified mannequin and/or garment.
  • FIG. 3 shows a number of representative frames F70 through F80 of a resimulation scene illustrating the parameter-modification and partial-resimulation process. Frame F70 is the reference rendering frame, having been previously generated with a reference mannequin and garment as described above, and from which garment images corresponding to the reference garment can be rendered. At this point, parameter values of the mannequin M or the garment G are modified while the simulation process is temporarily halted. Parameter values that can be modified at this point include various dimensions of the mannequin M as well as dimensions and shapes of the garment panels GP that make up the garment G. The simulation is then restarted with the modified parameter values and completes at frame F75. The three-dimensional modeling environment retains the information produced as a result of the reference simulation, so that the coordinates of the mannequin and garment objects at frame F75 are solved without doing a complete draping and collision simulation with the modified parameters. Frame F75 can then be employed as a rendering frame for the modified garment and/or mannequin, with a garment image rendered therefrom. Frame F76 of FIG. 3 shows how the garment and mannequin parameters can be further modified from those of the rendering frame at frame F75, with partial resimulation performed to generate a sequence of frames ending at frame F80. The procedure can then be repeated as needed in order to generate garment images corresponding to any number of modifications made to the garment and/or mannequin.
  • In this manner, the repository of garment images can be efficiently populated with garments of different dimensions suitable for layering on a mannequin chosen from a population of mannequins of different dimensions. The population of garment images in the repository includes renderings of each garment from a plurality of viewing angles in order to simulate the three-dimensional experience for the ultimate user.
  • FIG. 4 shows how garment images corresponding to different viewing perspectives are created from rendering frames by turning on different cameras for the rendering process. A camera, in this context, is the viewing position within the scene from which an image is rendered. Shown in the figure are a plurality of rendering frames H1 through H12, generated as described above for three different garments (i.e., garments differing according to type or garment parameter values) as fitted on three mannequins. Frames H1 through H4 are rendering frames generated for a particular garment and mannequin that differ only in the particular camera C1 through C4 which is turned on. Rendering the garment object from each of the four frames then produces four views of the garment, designated garment images DG1 through DG4 as shown in FIG. 5. Similarly, rendering the garment objects from frames H5 through H8 and frames H9 through H12 produces four perspective views of each of those garments, also shown in FIG. 5 as garment images DG5 through DG12.
  • FIG. 5 shows that the garment images DG1 through DG12 are two-dimensional graphics files that go through a sequence of steps before being stored in the repository. The files are named and catalogued at step 51 so as to be accessible when needed to generate a particular composite image. Image processing is performed at step 52 to convert the files to a desired image file format (e.g., JPEG, TIFF, GIF), which may or may not include data compression. Finally, the files are stored in the repository (e.g., located on a hard disk or other appropriate storage medium) at step 53. A sketch of this pipeline follows.
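  • The following hypothetical pipeline mirrors steps 51 through 53; the naming scheme and the render and to_jpeg helpers are assumptions for illustration, not the patent's implementation:

        # Render every frame from every camera, then name, convert, and store
        # the resulting garment images (steps 51-53).
        from pathlib import Path

        def populate_repository(frames, cameras, render, to_jpeg,
                                repo=Path("repository")):
            repo.mkdir(exist_ok=True)
            catalogue = {}
            for frame in frames:              # one frame per garment version
                for cam in cameras:           # one image per camera position
                    image = render(frame, cam)
                    key = (f"{frame.mannequin_id}_{frame.garment_id}_"
                           f"{frame.version}_cam{cam.id}")   # step 51: name
                    path = repo / f"{key}.jpg"               # step 52: format
                    path.write_bytes(to_jpeg(image))         # step 53: store
                    catalogue[key] = path
            return catalogue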
  • Versions of each garment image are created and stored in the repository in order to enable multiple garment images to be layered on a two-dimensional rendering of a mannequin, with the garments being rendered from rendering frames in an independent manner. Each version is defined to be combinable with one or more particular garments and is rendered from a rendering frame in which the garment is constrained to reside within or outside of particular predefined shells around the mannequin. The constraining shells serve to mimic the collisions with another garment that would take place were a simulation to be performed with that other garment. FIGS. 7A through 7C show three versions of a representative garment G in which portions of the garment in each version have been constrained to reside within or outside of particular shells. Garment images may then be rendered from the version rendering frame at a plurality of camera angles to correspond to different views of the garment version. Creating versions of garments at the level of the rendering frame, instead of in the two-dimensional garment image itself, permits large numbers of viewing-perspective renderings to be generated from a single version rendering frame in a consistent manner. A toy illustration of such a shell constraint follows.
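  • This sketch clamps garment vertices to one side of a spherical shell; the patent's shells follow the mannequin's shape, but a sphere keeps the illustration short (all names hypothetical):

        # Constrain garment vertices to lie inside or outside a spherical
        # shell around the mannequin (toy version of a shell constraint).
        import numpy as np

        def apply_shell(verts, center, radius, keep="outside"):
            d = verts - center
            dist = np.linalg.norm(d, axis=1)
            bad = dist < radius if keep == "outside" else dist > radius
            verts = verts.copy()
            verts[bad] = center + d[bad] * (radius / dist[bad])[:, None]
            return verts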
  • FIG. 8A shows a plurality of shells surrounding a mannequin M as seen from above, with portions of a jacket garment G3 and a shirt garment G1 constrained to reside within or outside of shells C and J. FIG. 8A thus represents what a combination of the two separate rendering frames containing garments G1 and G3 would look like. When the two garments are selected to be worn together, the versioning rule interpreter selects particular versions of those garments from the garment image repository in accordance with a versioning rule. In this example, the versioning rule would select the versions of the jacket G3 and shirt G1 that have been rendered from rendering frames with the garments constrained as shown in FIG. 8A, which ensures that any rendering of the jacket G3 will reside outside of a rendering, from the same camera angle, of the shirt G1.
  • FIG. 8B shows the two-dimensional garment images of garments G1 and G3 that have been retrieved from the repository in accordance with the versioning rules, together with a two-dimensional mannequin image M. The compositing rule interpreter displays the images in a layered order as defined by a compositing rule which, in this case, dictates that the jacket image G3 be layered on top of the shirt image G1, both of which are layered on top of the mannequin image M. FIG. 9 shows a composite image as would be presented to a user as a result of the layering process.
  • FIG. 10 shows in block diagram form the primary software components of an image generation system for populating a repository with images. A three-dimensional modeling environment 100, in conjunction with a cloth simulator 104, is used to simulate the draping and collision of a garment with a mannequin. A parameter input block 102 inputs user-defined parameters (e.g., from a display terminal) into the modeling environment in order to define the garment and mannequin parameters, as described above, for the simulation. A rendering frame generator 108 communicates with the modeling environment 100 in order to extract rendering frames therefrom; it also works with the modeling environment to perform shape blending upon reference rendering frames in order to generate frames with modified parameters without performing a full simulation. Versioning tools 106 are used within the rendering frame generator to create the particular versions of the garments that are combinable with other garments according to versioning rules. The versioning tools 106 interface with the three-dimensional modeling environment (e.g., as a C/C++ shared object library in conjunction with scripts written in a scripting language of the three-dimensional modeling environment, such as the Maya Embedded Language) and enable a user to define garment shells and associate simulation properties (e.g., collision offset, cloth stiffness, cloth thickness) with garments and mannequins within the simulation.
  • Images of garments and mannequins are rendered from the rendering frames at a selected viewpoint by the rendering engine 110. The images are then converted to a convenient file format, named and catalogued to enable access by the display system, and stored in the image repository 112.
  • FIG. 11 is a block diagram showing the software components of a networked implementation of the display system. The server side of the system includes an HTTP server 120 and a page generator 118 for generating the HTML (hypertext markup language) pages containing the composite images in accordance with the user request. Upon receiving a request from the user to display a particular mannequin wearing particular garments from a particular viewing perspective, the HTML page generator 118 (which may be, e.g., a common gateway interface script or a program communicating with the HTTP server via an application server layer) communicates with a versioning rule interpreter 114 in order to select the particular images to be retrieved from the image repository 112. The retrieved images are layered into a composite image that is embedded into an HTML page, with the layering dictated by a compositing rule interpreter 116 with which the page generator 118 also communicates. The HTML page containing the desired image is then transmitted by the HTTP server 120 over a network to the HTTP browser 124 that constitutes the user interface in this implementation. Such an implementation would be particularly suitable, for example, for use in an online internet catalogue in which the garment images are used to inform purchase decisions. A sketch of this request flow appears below.
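  • The following minimal sketch, reusing the hypothetical select_versions and compositing_order helpers from the earlier sketches and an assumed layer_images compositor, shows the shape of that request flow; it is not the patent's server code:

        # Serve a composite image for a requested mannequin, garment set,
        # and camera (illustrative request flow only).
        def handle_request(mannequin_id, garments, camera_id, repository):
            versions = select_versions(garments)       # versioning rules
            order = ["mannequin"] + compositing_order(garments)
            images = []
            for name in order:
                version = versions.get(name, "default")
                images.append(
                    repository[f"{mannequin_id}_{name}_{version}_cam{camera_id}"])
            composite = layer_images(images)           # layered per rules
            return f"<html><body><img src='{composite}'></body></html>"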
  • The user may establish a virtual identity by selecting a particular mannequin, naming the mannequin, and establishing other parameters that govern how the mannequin interacts with the dressing environment, as well as possibly with other virtual environments. Such information could, for example, be stored in the form of a cookie on the user's machine that is transmitted to the HTTP server upon connection with the user's browser.
  • An appropriate implementation of the display system shown in FIG. 11 (e.g., non-networked) can also be used to render images of mannequins wearing selected garments for use in animated features or as an aid to the garment design process.
  • In another embodiment, rendering frames rather than images are stored in the repository and retrieved for display in response to user requests. Selected objects such as garments are extracted from particular frames of simulation scenes containing selected garments and mannequins to generate rendering frames that are stored in the repository. The system retrieves the appropriate rendering frames according to versioning rules and renders a composite image from a selected viewpoint. The particular viewpoint presented to the user at any one time is a static image, but it may be updated rapidly enough to give the impression of a continuously changing viewpoint. The images are rendered from the frames either simultaneously, using the depth information contained therein, or separately from each frame, with the separately rendered images then being displayed in a layered order dictated by compositing rules. The functions of the system could be implemented on a stand-alone machine or distributed over a network, e.g., where rendering frames are downloaded to a Java applet executed by a web browser that renders the images displayed to the user.
  • In still other embodiments, available hardware performance may make it desirable to simulate the draping and collision of selected garments and mannequins in real time, in response to user requests. In that case, rendering frames are generated from the user-selected three-dimensional simulation scenes, and images for display to the user are then rendered. The simulation scene in this embodiment may be changed in accordance with user preferences, for example by animating the mannequin within the simulation to move from a dressing pose to a user-selected target pose before generating a rendering frame. Shape blending between previously generated rendering frames can be used to improve performance in generating rendering frames with modified garment and/or mannequin parameters. The garments can be simultaneously simulated in a single scene, or separate simulations can be performed for each garment, with the rendering frames generated therefrom being combined in accordance with versioning rules.


Description

    FIELD OF THE INVENTION
  • The present invention relates to methods and systems for producing images of computer-simulated clothing.
  • BACKGROUND
  • The concept of a computerized or simulated dressing environment is a user-operated display system that generates computer-simulated images of a human figure wearing one or more selected garments. The simulated human figure thus represents a virtual model or mannequin for modeling clothes. Such an environment should ideally provide the user with the capability of viewing the mannequin and garment from a plurality of viewpoints to give a three-dimensional experience. By allowing the user to also select in some manner the particular human figure that is to wear the garment, an individualized experience is provided that allows the user to see what selected clothes look like when worn by different people.
  • The degree to which the system takes into account the physical forces acting on a garment as it is worn determine in large part how visually realistic the computer-generated images are. Simulation of the draping and collision of a garment object with a mannequin using a three-dimensional modeling environment (e.g., Maya, manufactured by Alias Wavefront of Toronto, Canada) allows the rendering of a two-dimensional image of the mannequin and garment that is quite realistic in appearance. It is desirable in a simulated dressing environment, however, for a user to be able to select among a variety of different mannequins and/or garments for displaying. Accordingly, a simulated dressing environment could be implemented with a three-dimensional modeling environment simply by simulating particular dressing scenes in response to user inputs and then rendering two-dimensional images directly from the simulation scene. The massive amount of computation required to perform a collision and draping simulation for any particular mannequin and garment, however, makes three-dimensional modeling an impractical way by itself in most commonly available computing environments to generate the multiple images of different mannequins and garments needed to implement a dressing environment.
  • SUMMARY OF THE INVENTION
  • A primary aspect of the present invention is a method for efficiently producing images of a computer-simulated mannequin wearing a garment or garments, the geometries of which are defined by selected mannequin and garment parameter values. An image, as the term is used herein, includes any spatial function derived from a perspective projection of a three-dimensional scene either existing in the real world or as modeled by a computer. This definition includes not only the usual two-dimensional intensity image, such as that formed upon the human retina when viewing a scene in the real world or that captured on photographic film through a camera aperture, but also two-dimensional functions incorporating both intensity and phase information for use in wavefront reconstruction (i.e., holograms). The present invention primarily deals with digital images (i.e., discrete two-dimensional functions) derived from three-dimensional scenes by the process of rendering. An image should therefore be taken to mean any form of such rendered data that is capable of being represented internally by a computer and/or transmitted over a computer network. When referring specifically to a visually informative representation that can actually be perceived by the human eye, such as that produced on a computer display, the term visual image will be used.
  • In one embodiment, the present invention includes performing a draping and collision of a garment with a mannequin within a three-dimensional simulation scene to generate a rendering frame from which an image of a mannequin wearing a garment can be rendered, and further includes generating rendering frames containing mannequins and garments as defined by selected parameter values by shape blending the mannequins and/or garments of previously generated rendering frames. Linear combinations of the parameter values of previously generated rendering frames (e.g., as produced by interpolating between such values) are thus used to generate rendering frames with the desired mannequin and garment.
  • In another embodiment, the invention includes the generation of a rendering frame containing a mannequin wearing a particular garment from a collision and draping simulation and the further addition of garment constraints corresponding to particular predefined shells around the mannequin that mimic the way the garment behaves when worn with another particular garment. These garment constraints are defined so as to conform to various dressing conventions or rules relating to how clothes are worn, e.g., the wearing of a coat over a shirt. Rendering frames corresponding to different versions of a garment may thus be produced, where the information contained within separately generated rendering frames corresponding to particular versions of garments can then be used to produce a composite image of the garments worn in combination. For example, images can be rendered separately from each such rendering frame and layered upon one another in an appropriate order, or a composite image can be rendered using the depth information contained in each rendering frame. In this way, mixing and matching of garments on a mannequin is facilitated.
  • Another embodiment of the invention relates to a computerized dressing environment for displaying a selected garment worn by a selected mannequin in which garment images rendered from a three-dimensional simulation scene are stored in a repository and displayed in accordance with user inputs. The garment images include images of a plurality of garments, including versions of garments, and renderings of each garment from a plurality of viewpoints so as to provide a three-dimensional experience to the user. In order to display a selected mannequin wearing selected multiple garments, garment images corresponding to particular versions are selected in accordance with versioning rules by a versioning rule interpreter. The appropriate garment images are then layered upon an image of a selected mannequin to create a composite image. The layering order of the garment images is dictated by compositing rules derived from dressing conventions. Another embodiment of the invention relates to a method for efficiently populating such a garment image repository with garment images by using the methods described above.
  • Other objects, features, and advantages of the invention will become evident in light of the following detailed description of exemplary embodiments according to the present invention considered in conjunction with the referenced drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1A shows the panels of a garment object within a simulation scene.
  • FIG. 1B shows the initial frame of a simulation scene in which the garment is placed over the mannequin in a dressing pose.
  • FIG. 1C shows the final frame of a simulation scene after simulation of draping and collision of a garment with a mannequin and animation of the mannequin to a display pose.
  • FIG. 2 shows the frames of a simulation scene as a simulation progresses.
  • FIG. 3 shows the modifying of object parameters within a rendering frame and performance of a partial further simulation to generate a modified rendering frame.
  • FIG. 4 shows the rendering of garment images from rendering frames with different camera positions.
  • FIG. 5 shows a plurality of pre-rendered garment images and the process steps for storing the images in a repository.
  • FIG. 6 shows constraining shells defined around a mannequin that are used in defining particular versions of a garment.
  • FIGS. 7A through 7C show multiple versions of a garment as defined within a rendering frame.
  • FIGS. 8A and 8B show a depiction of the rendering frames for two garments and the corresponding garment images rendered therefrom as displayed in layers
  • FIG. 9 shows a composite image made up of multiple garment images.
  • FIG. 10 is a block diagram showing the components of a system for populating a repository with images.
  • FIG. 11 is a block diagram of an implementation of a system for displaying selected images of garments worn by a mannequin over a network.
  • DETAILED DESCRIPTION OF THE INVENTION
  • The present invention is a system and method for efficiently providing a computer-simulated dressing environment in which a user is presented with an image of a selected human figure wearing selected clothing. In such an environment, a user selects parameter values that define the form of the human figure, referred to herein as a virtual mannequin, that is to wear the selected clothing. Such parameters may be actual body measurements that define in varying degrees of precision the form of the mannequin or could be the selection of a particular mannequin from a population of mannequins available for presentation to the user. One type of user may input parameter values that result in a virtual mannequin that is most representative of the user's own body in order to more fully simulate the experience of actually trying on a selected garment. Other types of users may select mannequins on a different basis in order to obtain images such as for use in animated features or as an aid in the manufacturing of actual garments. The particular garment to be worn by the virtual mannequin is selected from a catalogue of available garments, where each garment may be further selected according to, e.g., style, color, or physical dimension.
  • In order to provide a more realistic representation of the physical fitting of the garment on the mannequin, an image of a virtual mannequin wearing selected garments is generated by using a three-dimensional modeling environment that provides a cloth simulation of the garment interacting with the mannequin. This provides a more visually accurate representation presented to the user in the form of a two-dimensional image rendered from the three-dimensional model. The simulation is performed by constructing three-dimensional models of the garment and mannequin using vector or polygon-based graphics techniques, referred to as garment and mannequin objects, respectively, and placing the garment and mannequin objects together in a three-dimensional simulation scene. A scene in this context is a three-dimensional data structure that is made to contain one or more three-dimensional objects and defines their relative position and motion. Such a scene may be organized into a number of frames representing discrete points during a simulation or animation sequence. An image may be rendered from a frame by computing a perspective projection of the objects contained in the scene in accordance with a specified viewpoint and lighting condition.
  • After constructing a simulation scene containing the mannequin and garment, the garment is fitted on the mannequin by simulating the draping and collision of the garment with the mannequin due to physical forces. Such a simulation may facilitated by modeling garments as individual panels corresponding to the sewing patterns used to construct the actual garments, where the panels are closed surfaces bounded by curved or straight lines. Texture mapping may be used to map different cloth fabrics and colors, and ornamental details such as buttons, collars, and pockets to the garment object in the simulation scene. One or more rendering frames are then created by performing a draping and collision simulation of the garment with the mannequin, which includes animating the mannequin from a dressing pose to a display pose. The animation takes place within the three-dimensional modeling system that simulates motion and collision of the cloth making up the garment as the mannequin moves. A two-dimensional image for presentation to the user may then be rendered from the rendering frame in accordance with a selected camera position that determines the particular view that is rendered. In certain embodiments, the simulation may provide for a plurality of display poses by the mannequin with rendering frames generated for each such display pose.
  • It is desirable for the simulated environment to have the capability of displaying a number of different mannequins wearing garments of different dimensions. One way of providing this functionality is to perform the simulation and rendering as described above separately and in real-time for each selected mannequin and garment. Simulating the draping and collision of a garment with a mannequin is computationally intensive, however, and real-time simulation may thus not be practical in most situations. In order to reduce the computational overhead associated with displaying multiple mannequins or garments of selected dimensions, the simulation may be fully performed with representative mannequins and garments defined by reference parameters to generate three-dimensional reference rendering frames. Shape blending techniques are used to modify the mannequin and/or garment parameters to desired selected values by interpolating between the corresponding parameter values of reference rendering frames. In accordance with the invention, garment and/or mannequin parameter values corresponding to the desired changes are modified within a rendering frame, and a partial further simulation is performed that creates a new rendering frame containing the changed mannequin and/or garment. For example, the dimensions of the individual panels making up the garment may be changed, with the resulting panels being then blended together within the simulation environment. Similarly, the dimensions of a mannequin may be changed by blending the shapes of previously simulated mannequins. The parameters are thus keyframed within the simulation sequence, where keyframing, in this context, refers to assigning values to specific garment or mannequin parameters in a simulation scene and generating a new frame using a linear combination of parameter values (e.g., interpolation or extrapolation) generated from a previous simulation. In this way, a new rendering frame is generated that contains a mannequin with different measurements and/or a garment with a different dimensions as selected by the user. Thus, the simulation need only be fully performed once with a representative garment and mannequin, with keyframing of parameter values within the three-dimensional modeling system being used to generate rendering frames containing a particular mannequin and garment as selected by a user. Simulation of the modified garment interacting with the mannequin as the partial further simulation takes place requires much less computation than a complete resimulation of the draping and collision of a changed garment over a mannequin. Only when the user selects a garment or mannequin that cannot be generated by linearly combining parameters from a previously generated rendering frame does a full draping and collision simulation need to be performed.
  • Another desirable feature of a simulated dressing environment is for the user to be able to display a mannequin wearing multiple selected garments (e.g., outfits). In one embodiment of the invention, images of a mannequin wearing multiple selected garments are generated by simulating the simultaneous draping and collision of multiple garments with the virtual mannequin in a single simulation scene to create a single rendering frame. In this embodiment, dressing rules may be used that dictate how garments should be layered in the simulation scene in accordance with dressing conventions. Changes to the mannequin and/or garment can then be made to the rendering frame by the keyframing and partial further simulation technique described above. The two-dimensional image of the mannequin wearing the multiple garments could then be rendered using the Z-coordinates (where the Z-coordinate represents depth in the three-dimensional model) of the mannequin and garment objects in the rendering frame. Such rendering using Z-coordinates may be performed, for example, on a per-pixel basis (Z-buffering) or by sorting individual polygons based upon a representative Z-coordinate.
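A per-pixel Z-buffer composite of the kind mentioned above can be sketched as follows; the array layout and helper name are assumptions for illustration, not the patent's implementation.

```python
# Sketch of Z-buffering: at each pixel, the object nearest the camera wins.
import numpy as np

def composite_by_depth(colors: list, depths: list) -> np.ndarray:
    """colors: list of HxWx3 images (mannequin, shirt, jacket, ...);
    depths: matching HxW Z-buffers, np.inf where the object is absent."""
    out = np.zeros_like(colors[0])
    zbuf = np.full(depths[0].shape, np.inf)
    for color, depth in zip(colors, depths):
        nearer = depth < zbuf          # pixels where this layer is in front
        out[nearer] = color[nearer]
        zbuf[nearer] = depth[nearer]
    return out
```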
  • As noted above, however, draping and collision simulation is computationally intensive, and even more so in the case of multiple garments, making it impractical in most situations to simulate user-selected mannequins wearing multiple selected garments in real time in order to render images therefrom. Therefore, in a presently preferred embodiment of the invention, two-dimensional images of mannequins and single garments are pre-rendered from rendering frames generated as described above and stored in a repository for later display in response to user inputs, where the garment images correspond to a plurality of different garments and views of such garments. The methods described above enable such a repository to be efficiently populated. In addition, in order to avoid the computational complexity of pre-rendering two-dimensional images corresponding to every possible combination of multiple garments on every possible mannequin, multiple versions of single garments may be defined which are then simulated and rendered into two-dimensional images, where the two-dimensional renderings of specific garment versions may then be combined with renderings of specific versions of other garments according to versioning rules. Such versions of garments enable garment images rendered from separate simulations to be combined in a composite image.
  • Particular versions of particular garments are simulated and rendered into two-dimensional garment images in a manner that mimics the physical interaction between multiple garments in a simultaneous draping and collision simulation. An approximation to such a simulation is effected by creating each version of a garment in a manner such that the garment is constrained to reside within or outside of particular predefined shells defined around the mannequin. Different versions of a garment are created by first simulating the draping and collision of a representative garment with a mannequin as described above. Shells are then defined around the mannequin, and portions of the garment are constrained to reside either inside or outside of particular shells according to the particular version being created. Versioning rules then define which versions of the garment objects are to be used when particular multiple garments are selected to be worn together by the mannequin. Collisions of multiple garments with one another are thus resolved in a manner that allows single garments to be independently simulated and rendered for later combination into a composite image. Such combination may be performed by layering the images in a prescribed order or by using the depth information contained in the rendering frame of each garment.
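One way to picture the shell constraint is as a clamp on each cloth vertex's offset distance from the mannequin surface. The sketch below is a simplification under that assumption, with purely illustrative names; an actual cloth solver would enforce such constraints within the simulation itself.

```python
# Illustrative sketch of constraining a cloth vertex to one side of a shell.
def constrain_to_shell(vertex_offset: float, shell_offset: float, side: str) -> float:
    """vertex_offset: vertex distance from the mannequin surface;
    shell_offset: the shell's distance from the mannequin surface;
    side: 'inside' or 'outside' the shell, per the version being created."""
    if side == "inside":
        return min(vertex_offset, shell_offset)
    if side == "outside":
        return max(vertex_offset, shell_offset)
    raise ValueError(f"unknown side: {side!r}")

# A shirt version meant to be worn under a jacket stays inside a shell, while
# the jacket version stays outside it, so their separate renderings never
# interpenetrate when later combined.
shirt_vertex = constrain_to_shell(2.3, shell_offset=2.0, side="inside")    # -> 2.0
jacket_vertex = constrain_to_shell(1.5, shell_offset=2.0, side="outside")  # -> 2.0
```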
  • The pre-rendered two-dimensional garment images are then combinable into a composite display, with the particular version images to be used being chosen by a version rule interpreter that interprets the versioning rules. Such two-dimensional images of garment versions are generated for all of the possible mannequins and single garments that the user is allowed to select for display. A repository of two-dimensional images is thus created where the individual images can be layered upon one another in order to display a selected mannequin wearing selected multiple garments. The two-dimensional images are layered upon one another in a prescribed order to create the final composite two-dimensional image presented to the user. The layering is performed using a rule-based interpreter that interprets compositing rules that define in what order specific garments should appear relative to other garments. Such compositing rules are based upon dressing rules that define how clothes are conventionally worn. For example, one such dressing rule is that jackets are worn over shirts, and the corresponding compositing rule would be that the rendering of a jacket should be layered on top of the rendering of a shirt.
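A compositing-rule interpreter of this kind can be sketched as a topological ordering over "worn over" relations. The rule set and garment names below are illustrative assumptions.

```python
# Sketch of deriving a bottom-to-top layering order from compositing rules.
from graphlib import TopologicalSorter  # standard library, Python 3.9+

def layering_order(selected: list, rules: list) -> list:
    """rules are (upper, lower) pairs, e.g. ('jacket', 'shirt') for the
    dressing rule 'jackets are worn over shirts'."""
    graph = {item: set() for item in selected}
    for upper, lower in rules:
        if upper in graph and lower in graph:
            graph[upper].add(lower)    # draw `lower` before `upper`
    return list(TopologicalSorter(graph).static_order())

order = layering_order(
    ["mannequin", "shirt", "jacket"],
    rules=[("shirt", "mannequin"), ("jacket", "shirt"), ("jacket", "mannequin")],
)
print(order)  # ['mannequin', 'shirt', 'jacket'] -- bottom layer first
```

A topological ordering lets garments with no rule between them (e.g., shoes and a hat) appear in any mutually consistent order while still honoring every dressing rule.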
  • Independently pre-rendering single garments also allows for the computational overhead to be further reduced by generating a rendering frame with a representative mannequin and garment, and then modifying the garment and/or mannequin by keyframing the garment and/or mannequin parameter values in a rendering frame and performing a partial further simulation of the interaction of the modified garment with the mannequin as described above. The two-dimensional images derived from the rendering frames may also include renderings from a plurality of camera positions. A user may then select a particular viewing perspective in which to view the selected mannequin wearing selected multiple garments, with the pre-rendered images used to make up the composite image being rendered from the camera position corresponding to that viewing perspective. The pre-rendering procedure can thus be performed for a population of mannequins and for a plurality of different garments and versions of garments at a plurality of camera positions to generate a repository of two-dimensional garment images that may be combined together in response to user selection of garment and/or mannequin parameter values.
  • In accordance with the invention, a system for displaying a selected computer-simulated mannequin wearing a selected garment includes a user interface by which a user selects a mannequin image and one or more garments to be worn by the mannequin from a repository of pre-rendered garment images, the mannequin image and garment images then being combined to form a composite image. The system further includes a versioning rule interpreter for choosing among versions of the garment images for display in accordance with versioning rules that define which versions of particular garments are permitted when combined with another particular garment. Versions of garment images may also be defined which differ in a fitting characteristic (e.g., loose, snug, etc.) or a wearing style (e.g., shirt tucked in or out, sweater buttoned or unbuttoned, etc.). A compositing rule interpreter is provided for displaying the two-dimensional images of versions of user-selected garments chosen by the versioning rule interpreter and of a selected mannequin in a layered order dictated by compositing rules.
  • In a presently preferred exemplary embodiment of the invention to be described further below, a repository of garment images is created which can be drawn upon to provide a simulated dressing environment for displaying a selected computer-simulated mannequin wearing selected garments. In such a system, a user interface enables the user to select a particular mannequin (e.g., derived from specified body measurements) and particular garments to be worn by the mannequin. Certain embodiments may allow the user to also specify the viewpoint of the image eventually rendered to a display and/or the display pose of the mannequin. Exemplary applications of the dressing environment include its use as part of a computerized catalogue in which users select particular garments to be worn by particular mannequins and as a tool for use by animators to generate images of dressed mannequins that can be incorporated in an animation sequence. The dressing environment can also be used to simulate the appearance of garments as an aid to the manufacture of actual garments from predefined sewing patterns.
  • In one embodiment, the garment images are two-dimensional images of garments that are pre-rendered from three-dimensional rendering frames generated by simulating the draping and collision of a garment with a mannequin in a three-dimensional modeling environment. The repository contains garment images that differ according to garment type, style, dimensions, and the particular mannequin which is to be shown wearing the garment. Additionally, different versions of each garment are provided which are generated so as to be combinable with other garment images on a selected mannequin by layering the garment images on a two-dimensional image of a selected mannequin in a prescribed order. Versions of garments are also defined that differ according to a fitting characteristic (e.g., loose fit, snug fit, etc.) or a wearing style (e.g., buttoned, unbuttoned, tucked in or out, etc.). Finally, the repository contains the garment images rendered from a plurality of camera positions. A user is thus able to dress a selected mannequin with selected garments and view the mannequin from a plurality of angles. In another embodiment, pre-rendered images corresponding to a plurality of mannequin display poses are also stored in the repository. In another alternate embodiment, rendering frames are stored in the repository after extraction of the garment object. After retrieving the appropriate garment from the repository (i.e., according to user selection and in accordance with versioning rules), an image can be rendered from an arbitrary camera position. Because the displayed images are ultimately derived from three-dimensional simulations, a visually realistic experience is provided to the user but in a much more efficient manner than would be the case if the simulations were performed in real time.
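As one hypothetical illustration of how such a repository might be indexed, each pre-rendered image could be keyed by the mannequin, garment, version, and camera that produced it. The naming scheme below is an assumption; the patent requires only that images be catalogued for retrieval.

```python
# Hypothetical repository keying: one file per (mannequin, garment, version, camera).
from pathlib import Path

def image_key(mannequin_id: str, garment_id: str, version: str, camera: int) -> Path:
    return Path("repository") / f"{mannequin_id}_{garment_id}_{version}_cam{camera}.png"

# e.g., the under-jacket version of shirt S1 on mannequin M2, front camera:
print(image_key("M2", "S1", "under_jacket", camera=1))
# repository/M2_S1_under_jacket_cam1.png
```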
  • During the simulation process, a three-dimensional simulation scene is created from which one or more three-dimensional rendering frames can be generated. Garment images are then rendered from the rendering frames. Referring first to FIGS. 1A through 1C, three stages of the simulation process are shown in which objects corresponding to a garment and a mannequin are generated and placed within a three-dimensional scene. FIG. 1A shows a garment object made up of a plurality of garment panels GP, where the panels can be defined with respect to shape and dimension so as to correspond to the sewing patterns used to construct an actual garment. In the Maya modeling environment, for example, a panel is defined as a region enclosed by two or more NURBS curves which are joined together and tessellated to form a garment. The garment panels GP and a mannequin M are then placed together in a three-dimensional scene as shown in FIG. 1B, where the mannequin is shown in a dressing pose and the garment panels are placed at positions around the mannequin appropriate for the subsequent simulation. FIG. 1C shows the three-dimensional scene after the simulation process has completed. During the simulation, the garment panels GP are joined together (i.e., corresponding to the stitching of sewing patterns) to form the garment G. The draping and collision of the garment G with the mannequin M due to physical forces is also simulated, and the mannequin is animated from the dressing pose to a display pose with motion of the garment being concomitantly simulated.
  • FIG. 2 shows a number of representative frames F1 through F70 of the simulation scene as the simulation progresses. Frame F1 corresponds to the initial sewing position as previously depicted in FIG. 1B, and frames F2 and F3 show the progression of the draping and collision simulation which culminates at frame F40 in which the completed garment G is fitted over the mannequin M in the dressing pose. The simulation further progresses to frame F70 where the mannequin M is animated to move to a display pose, moving the garment G along with it. Frame F70 thus forms a rendering frame from which a two-dimensional image of the garment G can be rendered and deposited into the repository as a garment image. As noted earlier, in one particular embodiment rendering frames corresponding to a number of different display poses may be generated.
  • For each type of garment G (i.e., shirt, pants, coat, etc.), a rendering frame can be generated as described above, and a garment image corresponding to the garment type is generated. Generating garment images that differ only with respect to certain garment parameter values, such as garment dimensions and style, or with respect to the mannequin parameter values that define the particular mannequin used in the draping and collision simulation, would otherwise be costly. To reduce this computational overhead, a full draping and collision simulation starting with the garment panels is first performed for each garment type with a reference garment and a reference mannequin, thereby generating a reference rendering frame. The mannequin and/or garment parameter values are then modified in the reference rendering frame, with the geometry of the scene then being updated by the cloth solver in accordance with the internal dynamic model of the modeling environment. The three-dimensional modeling environment generates the modified mannequin and/or garment objects as linear combinations of parameters calculated in the prior reference simulation so that a full resimulation does not have to be performed. Thus only a partial resimulation needs to be performed to generate a new rendering frame containing the modified mannequin and/or garment.
  • FIG. 3 shows a number of representative frames F70 through F80 of a resimulation scene showing the parameter modification and partial resimulation process. Frame F70 is the reference rendering frame, having been previously generated with a reference mannequin and garment as described above, and from which garment images corresponding to the reference garment can be rendered. At frame F71, parameter values of the mannequin M or the garment G are modified while the simulation process is temporarily halted. Such parameter values that can be modified at this point include various dimensions of the mannequin M as well as dimensions and shapes of the garment panels GP that make up the garment G. The simulation is then restarted with the modified parameter values and completes at frame F75. The three-dimensional modeling environment is able to retain the information produced as a result of the reference simulation so that the coordinates of the mannequin and garment objects at frame F75 are solved without doing a complete draping and collision simulation with the modified parameters. Frame F75 can then be employed as a rendering frame for the modified garment and/or mannequin, with a garment image rendered therefrom. Frame F76 of FIG. 3 shows how the garment and mannequin parameters can be further modified from those of the rendering frame in frame F75, with partial resimulation performed to generate a sequence of frames ending at frame F80. The procedure can then be repeated as needed to generate garment images corresponding to any number of modifications made to the garment and/or mannequin. In this way, the repository of garment images can be efficiently populated with garments of different dimensions suitable for layering on a mannequin chosen from a population of mannequins of different dimensions.
  • As noted above, the population of garment images in the repository includes renderings of each garment from a plurality of viewing angles in order to simulate the three-dimensional experience for the ultimate user. FIG. 4 shows how garment images corresponding to different viewing perspectives are created from rendering frames by turning on different cameras for the rendering process. (A camera in this context is the viewing position within the scene from which an image is rendered.) Shown in the figure are a plurality of rendering frames H1 through H12 generated as described above for three different garments (i.e., garments differing according to type or garment parameter values) as fitted on three mannequins. Frames H1 through H4 are rendering frames generated for a particular garment and mannequin that differ only in the particular camera C1 through C4 which is turned on. Rendering the garment object from each of the four frames then produces four views of the garment, designated garment images DG1 through DG4 as shown in FIG. 5. Similarly, rendering the garment objects from frames H5 through H8 and frames H9 through H12 produces four perspective views of each of those garments, also shown in FIG. 5 as garment images DG5 through DG12.
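Generating the per-view images amounts to rendering the same fitted garment once per camera position. The sketch below is a stand-in for that loop; the camera coordinates and the `render` callable are illustrative assumptions, not an actual modeling-environment API.

```python
# Sketch of producing one garment image per camera position (e.g., DG1..DG4).
CAMERAS = {1: (0, 0, 10), 2: (10, 0, 0), 3: (0, 0, -10), 4: (-10, 0, 0)}  # eye x,y,z

def render_views(frame, render) -> dict:
    """Return one garment image per camera for a given rendering frame."""
    return {cam_id: render(frame, eye=position) for cam_id, position in CAMERAS.items()}

# Usage with a placeholder renderer:
images = render_views("H1", render=lambda frame, eye: f"view of {frame} from {eye}")
print(images[1])  # view of H1 from (0, 0, 10)
```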
  • FIG. 5 shows that the garment images DG1 through DG12 are two-dimensional graphics files that go through a sequence of steps before being stored in the repository. First, the files are named and catalogued at step 51 so as to be accessible when needed to generate a particular composite image. Next, image processing is performed at step 52 to convert the files to a desired image file format (e.g., jpeg, tiff, gif) which may or may not include data compression. Finally, the files are stored in the repository (e.g., located on a hard disk or other appropriate storage medium) at step 53.
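A minimal sketch of steps 51 through 53 follows, assuming the Pillow imaging library for the conversion step. PNG is assumed for the stored format here because garment images need an alpha channel to be layered; the naming scheme reuses the hypothetical key from the earlier sketch.

```python
# Sketch of the name/convert/store pipeline for a rendered garment image.
from pathlib import Path
from PIL import Image  # assumes the Pillow library is installed

def catalogue_and_store(rendered_file: str, key: Path, fmt: str = "PNG") -> Path:
    dest = key.with_suffix("." + fmt.lower())       # step 51: name/catalogue by key
    image = Image.open(rendered_file)               # step 52: convert image format
    dest.parent.mkdir(parents=True, exist_ok=True)  # step 53: repository location
    image.save(dest, format=fmt)                    # PNG keeps the alpha channel
    return dest
```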
  • As noted above, a plurality of different versions of each garment image are created and stored in the repository in order to enable multiple garment images to be layered on a two-dimensional rendering of a mannequin, with the garments being rendered from rendering frames in an independent manner. Each version is defined to be combinable with one or more particular garments and is rendered from a rendering frame in which the garment is constrained to reside within or outside of particular predefined shells around the mannequin. The constraining shells serve to mimic the collisions with another garment that would take place were a simulation to be performed with that other garment. FIG. 6 shows a mannequin M around which are defined a plurality of shell regions (i.e., regions within or outside of particular shells) designated A through G that represent a plurality of offset distances from the mannequin. A version of a garment is constructed by constraining portions of the garment in a rendering frame to reside within or outside of particular shells. The particular constraints chosen for a version are designed to correspond to where the portions of the garment would reside were it to be collided with another particular garment in a simulation scene. FIGS. 7A through 7C show three versions of a representative garment G in which portions of the garment in each version have been constrained to reside within or outside of particular shells. Garment images may then be rendered from the version rendering frame at a plurality of camera angles to correspond to different views of the garment version. Creating versions of garments at the level of the rendering frame instead of in the two-dimensional garment image itself permits large numbers of viewing perspective renderings to be generated from a single version rendering frame in a consistent manner.
  • When a composite display showing the mannequin wearing multiple selected garments is to be generated by the dressing environment, a versioning rule interpreter selects particular versions of the garments to be displayed in accordance with predefined versioning rules. A compositing rule interpreter then displays the two-dimensional images of the selected garments and of a selected mannequin in a layered order dictated by compositing rules. To illustrate by way of example, FIG. 8A shows a plurality of shells surrounding a mannequin M as seen from above, with portions of a jacket garment G3 and a shirt garment G1 constrained to reside within or outside of shells C and J. FIG. 8A thus represents what a combination of the two separate rendering frames containing garments G1 and G3 would look like. When garments G1 and G3 are selected to be worn by the mannequin, the versioning rule interpreter selects particular versions of those garments from the garment image repository in accordance with a versioning rule. In this case, the versioning rule would select the versions of the jacket G3 and shirt G1 that have been rendered from rendering frames with the garments constrained as shown in FIG. 8A, which ensures that any rendering of the jacket G3 will lie outside a rendering of the shirt G1 made from the same camera angle. FIG. 8B shows the two-dimensional garment images of garments G1 and G3 that have been retrieved from the repository in accordance with the versioning rules and a two-dimensional mannequin image M. The compositing rule interpreter displays the images in a layered order as defined by a compositing rule which, in this case, dictates that the jacket image G3 will be layered on top of the shirt G1, both of which are layered on top of the mannequin image M. FIG. 9 shows a composite image as would be presented to a user as a result of the layering process.
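A versioning-rule interpreter for this jacket-and-shirt example can be sketched as a lookup over pairs of garments worn together. The rule table and version names below are illustrative assumptions.

```python
# Sketch of a versioning rule interpreter for garments worn together.
VERSIONING_RULES = {
    # (garment, worn_with) -> version to retrieve from the repository
    ("G1_shirt", "G3_jacket"): "inside_shell",   # shirt stays under the jacket
    ("G3_jacket", "G1_shirt"): "outside_shell",  # jacket stays over the shirt
}

def choose_versions(selected: list) -> dict:
    versions = {g: "default" for g in selected}  # standalone version by default
    for garment in selected:
        for other in selected:
            version = VERSIONING_RULES.get((garment, other))
            if version is not None:
                versions[garment] = version
    return versions

print(choose_versions(["G1_shirt", "G3_jacket"]))
# {'G1_shirt': 'inside_shell', 'G3_jacket': 'outside_shell'}
```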
  • The preferred embodiment has thus been described as a system and method in which images of garments and mannequins that have been pre-rendered from frames of three-dimensional simulation scenes are stored in a repository for selective retrieval in order to form composite images. FIG. 10 shows in block diagram form the primary software components of an image generation system for populating a repository with images. A three-dimensional modeling environment 100 is used in conjunction with a cloth simulator 104 to simulate the draping and collision of a garment with a mannequin. (An example of a three-dimensional modeling environment and cloth simulator is the aforementioned Maya and Maya Cloth.) A parameter input block 102 inputs user-defined parameters (e.g., from a display terminal) into the modeling environment in order to define the garment and mannequin parameters as described above for the simulation. A rendering frame generator 108 communicates with the modeling environment 100 in order to extract rendering frames therefrom. The rendering frame generator 108 also works with the modeling environment to perform shape blending upon reference rendering frames in order to generate frames with modified parameters without performing a full simulation. Versioning tools 106 are used within the rendering frame generator to create the particular versions of the garments that are combinable with other garments according to versioning rules. The versioning tools 106 interface with the three-dimensional modeling environment (e.g., as a C/C++ shared object library in conjunction with scripts written in a scripting language of the three-dimensional modeling environment such as the Maya Embedded Language) and enable a user to define garment shells and associate simulation properties (e.g., collision offset, cloth stiffness, cloth thickness) with garments and mannequins within the simulation. Images of garments and mannequins are rendered from the rendering frames at a selected viewpoint by the rendering engine 110. The images are then converted to a convenient file format, named and catalogued to enable access by the display system, and stored in the image repository 112.
  • Another aspect of the preferred exemplary embodiment described above is a display system for retrieving images from the image repository and combining the images into a composite image for displaying to a user. One possible implementation of such a display system is as a client and server communicating over a network, in which the client part of the system (i.e., the user interface) is a hypertext transfer protocol (http) or web browser that receives and displays the composite images of the clothed mannequins that the user requests. FIG. 11 is a block diagram showing the software components of such an implementation. The server side of the system includes an http server 120 and a page generator 118 for generating the html (hypertext markup language) pages containing the composite images in accordance with the user request. Upon receiving a request from the user to display a particular mannequin wearing particular garments from a particular viewing perspective, the html page generator 118 (which may be, e.g., a common gateway interface script or a program communicating with the http server via an application server layer) communicates with a versioning rule interpreter 114 in order to select the particular images retrieved from the image repository 112. Next, the retrieved images are layered into a composite image that is embedded into an html page, with the layering dictated by a compositing rule interpreter 116, with which the page generator 118 also communicates. The html page containing the desired image is then transmitted by the http server 120 over a network to the http browser 124 that is the user interface in this implementation. Such an implementation would be particularly suitable for use in an online internet catalogue, for example, in which the garment images are used to inform purchaser decisions. In this embodiment, the user may establish a virtual identity by selecting a particular mannequin, naming the mannequin, and establishing other parameters that govern how the mannequin interacts with the dressing environment as well as possibly other virtual environments. Such information could, for example, be stored in the form of a cookie on the user's machine which is transmitted to the http server upon connection with the user's browser.
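A minimal sketch of such a client/server display system is given below using Python's standard http.server module. The URL scheme is an assumption, version selection is omitted for brevity, and CSS stacking of the pre-rendered images in the returned page stands in for server-side compositing.

```python
# Sketch: an http server that returns an html page layering repository images.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Bottom-to-top layering, standing in for the compositing rule interpreter.
LAYER_ORDER = ["mannequin", "pants", "shirt", "jacket"]

class DressingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        mannequin = query.get("mannequin", ["M1"])[0]
        garments = query.get("garment", [])      # e.g. ?garment=shirt&garment=jacket
        selected = ["mannequin"] + [g for g in LAYER_ORDER if g in garments]
        layers = "".join(
            f'<img src="/repository/{mannequin}_{item}_cam1.png" '
            f'style="position:absolute;top:0;left:0;z-index:{z}">'
            for z, item in enumerate(selected)   # compositing order -> z-index
        )
        body = f"<html><body style='position:relative'>{layers}</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("", 8080), DressingHandler).serve_forever()
```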
  • Other implementations of the system shown in FIGS. 10 and 11 could be used by professional animators to generate images of clothed characters or by garment designers to generate images of garments as they are designed for actual manufacture. In those cases, the system could be implemented either over a network or as a stand-alone machine. Such users may be expected to use the image generation system of FIG. 10 to populate the image repository with images corresponding to their own garment designs. An appropriate implementation of the display system shown in FIG. 11 (e.g., non-networked) can then be used to render images of mannequins wearing selected garments that can be used in animated features or as an aid to the garment design process.
  • In another embodiment, rendering frames rather than images are stored in the repository and retrieved for display in response to user requests. Select objects such as garments are extracted from particular frames of simulation scenes containing select garments and mannequins to generate rendering frames that are stored in the repository. When a user selects a display of a particular mannequin and garment combination, the system retrieves the appropriate rendering frames according to versioning rules and renders a composite image from a selected viewpoint. The particular viewpoint presented to the user at any one time is a static image, but it may be updated rapidly enough to give the impression of a continuously changing viewpoint. The images are rendered from the frames either simultaneously using the depth information contained therein, or separately from each frame with the separately rendered images then being displayed in a layered order dictated by compositing rules. The functions of the system could be implemented on a stand-alone machine or distributed over a network, e.g., where rendering frames are downloaded to a Java applet executed by a web browser that renders the images displayed to the user.
  • In certain situations, available hardware performance may make it desirable to simulate the draping and collision of selected garments and mannequins in real time according to user requests. In such an embodiment, rendering frames are generated from the user-selected three-dimensional simulation scenes, and images for displaying to the user are then rendered. The simulation scene in this embodiment may be changed in accordance with user preferences, for example, by animating the mannequin within the simulation to move from a dressing pose to a user-selected target pose before generating a rendering frame. Shape blending between previously generated rendering frames can be used to improve performance in generating rendering frames with modified garment and/or mannequin parameters. In order to display the mannequin wearing multiple garments, the garments can be simultaneously simulated in a single scene, or separate simulations can be performed for each garment with the rendering frames generated therefrom being combined in accordance with versioning rules.
  • Although the invention has been described in conjunction with the foregoing specific embodiments, many alternatives, variations, and modifications will be apparent to those of ordinary skill in the art. Such alternatives, variations, and modifications are intended to fall within the scope of the following appended claims.

Claims (45)

1. A method for producing an image of a computer-simulated mannequin wearing a garment as defined by selected mannequin and garment parameter values, comprising:
generating objects corresponding to a representative mannequin and a garment placed in a simulation scene within a three-dimensional modeling environment;
simulating draping and collision of the garment with the mannequin within the simulation scene to generate a three-dimensional rendering frame of the mannequin wearing the garment;
constraining portions of the garment to reside within or outside of particular shells defined around the mannequin in the rendering frame; and,
rendering an image from the rendering frame.
2. The method of claim 1 wherein the rendered image is used to form a visual image on a computer display device.
3. The method of claim 1 further comprising generating rendering frames containing mannequin or garment objects as defined by selected parameter values by shape blending corresponding objects of previously generated rendering frames.
4. The method of claim 1 wherein the garment object comprises a plurality of garment panels that are connected together during the draping and collision simulation and further wherein the garment parameters include panel dimensions.
5. The method of claim 1 wherein two-dimensional images are rendered from a rendering frame using a plurality of camera positions.
6. The method of claim 1 further comprising performing a further partial simulation on the simulation scene within the modeling environment after constraining portions of the garment to reside within or outside of particular shells defined around the mannequin in the rendering frame.
7. The method of claim 1 further comprising generating a rendering frame containing the mannequin wearing multiple selected garments and wherein particular shells around the mannequin are defined such that collisions between the garments are prevented.
8. The method of claim 7 wherein specific versions of garments are defined that reside within or outside of particular shells and further wherein the versions of multiple garments used to generate the rendering frame are selected in accordance with versioning rules that define which versions of a particular garment are permitted when combined with another particular garment.
9. The method of claim 7 wherein separate rendering frames are generated for each garment.
10. The method of claim 9 wherein the separate rendering frames are combined into a composite two-dimensional image using Z-coordinates of the objects.
11. The method of claim 9 wherein the garments contained in the separate rendering frames are rendered into separate two-dimensional garment images that are layered upon a two-dimensional rendering of the mannequin to create a composite two-dimensional image.
12. The method of claim 11 further comprising layering the separate two-dimensional images on a two-dimensional image of the mannequin in accordance with a compositing rule that defines in what order specific garment images should be layered to thereby generate a composite two-dimensional image of the mannequin wearing the garments.
13. The method of claim 1 further comprising mapping texture objects to the garment objects in rendering frames wherein the texture objects are selected from a group consisting of colors, fabric patterns, buttons, collars, and ornaments.
14. The method of claim 1 wherein an image rendered from the rendering frame is transmitted over a network to a display device.
15. A processor-readable storage medium having processor-executable instructions for performing the method recited in claim 1.
16. A method for producing an image of a computer-simulated mannequin wearing a garment as defined by selected mannequin and garment parameter values, comprising:
generating objects corresponding to a representative mannequin and a garment placed in a simulation scene within a three-dimensional modeling environment;
simulating draping and collision of the garment with the mannequin within the simulation scene to generate a three-dimensional rendering frame of the mannequin wearing the garment;
generating rendering frames containing mannequin or garment objects as defined by selected parameter values by shape blending corresponding objects of previously generated rendering frames; and,
rendering an image from the rendering frame.
17. The method of claim 16 wherein the garment object comprises a plurality of garment panels that are connected together during the draping and collision simulation and further wherein the garment parameters include panel dimensions.
18. The method of claim 16 further comprising generating a rendering frame containing the mannequin wearing multiple selected garments and wherein particular shells around the mannequin are defined such that collisions between the garments are prevented.
19. A method for generating an image of a computer-simulated garment suitable for combining into a composite image of a selected computer-simulated mannequin wearing selected garments, comprising:
generating objects corresponding to a mannequin and a garment placed in a simulation scene within a three-dimensional modeling environment;
simulating draping and collision of the garment with the mannequin in the simulation scene to generate a three-dimensional rendering frame containing the mannequin wearing the garment;
constraining portions of the garment to reside within or outside of particular shells defined around the mannequin in the rendering frame; and,
rendering a garment image from the rendering frame.
20. The method of claim 19 further comprising rendering images of a plurality of versions of particular garments that are combinable into composite images in accordance with versioning rules, wherein a version of a garment is generated by constraining portions of the garment object within a rendering frame to reside within or outside of a particular shell defined around the mannequin.
21. The method of claim 20 further comprising generating rendering frames containing mannequin or garment objects as defined by selected parameter values by shape blending corresponding objects of previously generated rendering frames.
22. The method of claim 19 further comprising mapping texture objects to the garment object in a rendering frame before rendering the garment into a two-dimensional garment image.
23. The method of claim 19 further comprising rendering from a rendering frame a plurality of garment images corresponding to a plurality of camera positions.
24. The method of claim 20 wherein a garment in the rendering frame is modified in accordance with a selected garment parameter value by modifying the parameter in the rendering frame and performing a partial further simulation to simulate motion and collision of the modified garment with the mannequin.
25. The method of claim 24 wherein the garment model comprises a plurality of garment panels that are connected together during the draping and collision simulation and wherein the garment parameters include panel dimension parameters.
26. The method of claim 20 further comprising storing in a garment image repository garment images corresponding to a plurality of garment parameter values and created for a population of mannequins defined by a plurality of parameter values.
27. The method of claim 20 wherein the versions of particular garments that are rendered into garment images include versions differing by a fitting characteristic.
28. The method of claim 20 wherein the versions of particular garments that are rendered into garment images include versions differing by a wearing style.
29. A system for generating images of a computer-simulated mannequin wearing a garment as defined by selected mannequin and garment parameter values, comprising:
a user interface by which a user selects a mannequin and one or more garments to be worn by the mannequin, wherein the mannequin and garments selected may be further defined by specific mannequin and garment parameter values;
a three-dimensional modeling environment for generating objects corresponding to a representative mannequin and a garment placed in a simulation scene and for simulating draping and collision of the garment with the mannequin within the simulation scene to generate a three-dimensional rendering frame of the mannequin wearing the garment; and,
means for constraining portions of the garment to reside within or outside of particular shells defined around the mannequin in the rendering frame.
30. The system of claim 29 wherein particular shells around the mannequin are defined such that collisions between the garments are prevented when a rendering frame containing the mannequin wearing multiple selected garments is generated.
31. The system of claim 30 wherein specific versions of garments are defined that reside within or outside of particular shells and further wherein the versions of multiple garments used to generate the rendering frame are selected in accordance with versioning rules that define which versions of a particular garment are permitted when combined with another particular garment.
32. A system for generating images of a computer-simulated mannequin wearing a garment as defined by selected mannequin and garment parameter values, comprising:
a user interface by which a user selects a mannequin and one or more garments to be worn by the mannequin, wherein the mannequin and garments selected may be further defined by specific mannequin and garment parameter values;
a three-dimensional modeling environment for generating objects corresponding to a representative mannequin and a garment placed in a simulation scene and for simulating draping and collision of the garment with the mannequin within the simulation scene to generate a three-dimensional rendering frame of the mannequin wearing the garment; and,
means for generating a rendering frame containing mannequin or garment objects as defined by selected parameter values by shape blending corresponding objects of previously generated rendering frames.
33. The system of claim 32 further comprising means for constraining portions of the garment to reside within or outside of particular shells defined around the mannequin in the rendering frame.
34. A system for displaying a selected computer-simulated mannequin wearing a selected garment, comprising:
a user interface by which a user selects a mannequin and one or more garments to be worn by the mannequin, wherein the mannequin and garments selected may be further defined by specific mannequin and garment parameter values;
a repository containing a plurality of two-dimensional garment images and mannequin images as defined by specific parameters;
a compositing rule interpreter for displaying the two-dimensional images of user-selected garments and of a selected mannequin in a layered order dictated by compositing rules.
35. The system of claim 34 wherein the garment images contained in the repository are created by rendering an image from a three-dimensional simulation scene containing a mannequin wearing the garment.
36. The system of claim 34 further comprising a versioning rule interpreter for choosing among versions of the garment images for displaying in accordance with versioning rules that define which versions of particular garments are permitted when combined with another particular garment.
37. The system of claim 35 wherein the compositing rule interpreter displays two-dimensional images of versions of user-selected garments chosen by the versioning rule interpreter and of a selected mannequin in a layered order dictated by the compositing rules.
38. The system of claim 34 wherein the garment images are created by:
generating objects corresponding to a mannequin and a garment placed in a simulation scene within a three-dimensional modeling environment;
simulating draping and collision of the garment with the mannequin in the simulation scene to generate a three-dimensional rendering frame containing the mannequin wearing the garment;
constraining portions of the garment to reside within or outside of particular shells defined around the mannequin in the rendering frame; and,
rendering a two-dimensional garment image from the rendering frame.
39. The system of claim 34 wherein the mannequin parameters include a parameter corresponding to a body measurement.
40. The system of claim 34 wherein the mannequin parameters include a parameter designating selection of a particular mannequin from a population of mannequins.
41. The system of claim 34 wherein the garment parameters are selected from a group consisting of dimension, color, and style.
42. The system of claim 34 wherein the plurality of two-dimensional garment and mannequin images are rendered from a plurality of selectable camera angles.
43. The system of claim 34 wherein the user interface permits selection of versions of particular garments that are rendered into garment images that exhibit a particular wearing style.
44. A system for displaying a selected computer-simulated mannequin wearing a selected garment, comprising:
a user interface by which a user selects a mannequin and one or more garments to be worn by the mannequin, wherein the mannequin and garments selected may be further defined by specific mannequin and garment parameter values;
a repository containing a plurality of two-dimensional garment images and mannequin images as defined by specific parameters, wherein the images contained in the repository are created by rendering an image from a three-dimensional simulation scene containing a mannequin wearing the garment;
means for displaying the two-dimensional images of user-selected garments and of a selected mannequin in a layered order determined from depth information contained in the simulation scene.
45. The system of claim 44 wherein the plurality of two-dimensional garment and mannequin images are rendered from a plurality of selectable camera angles.
US12/646,062 1999-11-12 2009-12-23 System and method for displaying selected garments on a computer-simulated mannequin Abandoned US20100302275A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US12/646,062 US20100302275A1 (en) 1999-11-12 2009-12-23 System and method for displaying selected garments on a computer-simulated mannequin
US13/098,178 US20110273444A1 (en) 1999-11-12 2011-04-29 System and method for displaying selected garments on a computer-simulated mannequin
US13/350,716 US20120188232A1 (en) 1999-11-12 2012-01-13 System and method for displaying selected garments on a computer-simulated mannequin

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/439,225 US7663648B1 (en) 1999-11-12 1999-11-12 System and method for displaying selected garments on a computer-simulated mannequin
US12/646,062 US20100302275A1 (en) 1999-11-12 2009-12-23 System and method for displaying selected garments on a computer-simulated mannequin

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/439,225 Continuation US7663648B1 (en) 1999-11-12 1999-11-12 System and method for displaying selected garments on a computer-simulated mannequin

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/098,178 Continuation US20110273444A1 (en) 1999-11-12 2011-04-29 System and method for displaying selected garments on a computer-simulated mannequin

Publications (1)

Publication Number Publication Date
US20100302275A1 true US20100302275A1 (en) 2010-12-02

Family

ID=23743825

Family Applications (4)

Application Number Title Priority Date Filing Date
US09/439,225 Expired - Fee Related US7663648B1 (en) 1999-11-12 1999-11-12 System and method for displaying selected garments on a computer-simulated mannequin
US12/646,062 Abandoned US20100302275A1 (en) 1999-11-12 2009-12-23 System and method for displaying selected garments on a computer-simulated mannequin
US13/098,178 Abandoned US20110273444A1 (en) 1999-11-12 2011-04-29 System and method for displaying selected garments on a computer-simulated mannequin
US13/350,716 Abandoned US20120188232A1 (en) 1999-11-12 2012-01-13 System and method for displaying selected garments on a computer-simulated mannequin

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/439,225 Expired - Fee Related US7663648B1 (en) 1999-11-12 1999-11-12 System and method for displaying selected garments on a computer-simulated mannequin

Family Applications After (2)

Application Number Title Priority Date Filing Date
US13/098,178 Abandoned US20110273444A1 (en) 1999-11-12 2011-04-29 System and method for displaying selected garments on a computer-simulated mannequin
US13/350,716 Abandoned US20120188232A1 (en) 1999-11-12 2012-01-13 System and method for displaying selected garments on a computer-simulated mannequin

Country Status (3)

Country Link
US (4) US7663648B1 (en)
AU (1) AU2210801A (en)
WO (1) WO2001035342A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012118948A (en) * 2010-12-03 2012-06-21 Ns Solutions Corp Extended reality presentation device, and extended reality presentation method and program
WO2013177456A1 (en) * 2012-05-23 2013-11-28 1-800 Contacts, Inc. Systems and methods for adjusting a virtual try-on
ITMI20121398A1 (en) * 2012-08-07 2014-02-08 Bella Gabriele Di SYSTEM AND METHOD TO ASSOCIATE CLOTHING GARMENTS WITH HUMAN SOMATIC FEATURES OR OTHER CLOTHING GARMENTS, DISPLAYING THEM VIA THE WEB.
WO2015031164A1 (en) * 2013-08-30 2015-03-05 Glasses.Com Inc. Systems and methods for generating a 3-d model of a user using a rear-facing camera
US9167155B2 (en) 2012-04-02 2015-10-20 Fashion3D Sp. z o.o. Method and system of spacial visualisation of objects and a platform control system included in the system, in particular for a virtual fitting room
US9208608B2 (en) 2012-05-23 2015-12-08 Glasses.Com, Inc. Systems and methods for feature tracking
US9236024B2 (en) 2011-12-06 2016-01-12 Glasses.Com Inc. Systems and methods for obtaining a pupillary distance measurement using a mobile computing device
US9483853B2 (en) 2012-05-23 2016-11-01 Glasses.Com Inc. Systems and methods to display rendered images
CN107481095A (en) * 2017-07-26 2017-12-15 深圳市盛世华服信息有限公司 A kind of virtual Design of Popular Dress Ornaments method for customizing of 3D and custom-built system
US20180197331A1 (en) * 2015-08-14 2018-07-12 Metail Limited Method and system for generating an image file of a 3d garment model on a 3d body model
US20180240280A1 (en) * 2015-08-14 2018-08-23 Metail Limited Method and system for generating an image file of a 3d garment model on a 3d body model

Families Citing this family (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7418407B2 (en) * 1999-10-14 2008-08-26 Jarbridge, Inc. Method for electronic gifting using merging images
US7346543B1 (en) * 2000-02-24 2008-03-18 Edmark Tomima L Virtual showroom method
GB0101371D0 (en) 2001-01-19 2001-03-07 Virtual Mirrors Ltd Production and visualisation of garments
US20030050864A1 (en) * 2001-09-13 2003-03-13 Koninklijke Philips Electronics N.V. On-line method for aiding a customer in the purchase of clothes
GB0219623D0 (en) * 2002-08-22 2002-10-02 British Telecomm Method and system for virtual object generation
GB0220514D0 (en) 2002-09-04 2002-10-09 Depuy Int Ltd Acetabular cup spacer arrangement
US20050267614A1 (en) 2004-03-05 2005-12-01 Looney Michael T System and method of virtual modeling of thin materials
US7937253B2 (en) 2004-03-05 2011-05-03 The Procter & Gamble Company Virtual prototyping system and method
KR100511210B1 (en) * 2004-12-27 2005-08-30 주식회사지앤지커머스 Method for converting 2d image into pseudo 3d image and user-adapted total coordination method in use artificial intelligence, and service besiness method thereof
US7657340B2 (en) * 2006-01-31 2010-02-02 Dragon & Phoenix Software, Inc. System, apparatus and method for facilitating pattern-based clothing design activities
US8737704B2 (en) 2006-08-08 2014-05-27 The Procter And Gamble Company Methods for analyzing absorbent articles
US7979256B2 (en) 2007-01-30 2011-07-12 The Procter & Gamble Company Determining absorbent article effectiveness
US20090019053A1 (en) * 2007-07-13 2009-01-15 Yahoo! Inc. Method for searching for and marketing fashion garments online
US20090079743A1 (en) * 2007-09-20 2009-03-26 Flowplay, Inc. Displaying animation of graphic object in environments lacking 3d redndering capability
US8296648B2 (en) * 2008-10-28 2012-10-23 Vistaprint Technologies Limited Method and system for displaying variable shaped products on a computer display
US20100228646A1 (en) * 2009-03-05 2010-09-09 Robert Eric Heidel Integration of scanner/sensor apparatuses, web-based interfaces, and pull-supply chain management into product, clothing, apparel, shoe, and/or accessory markets
US9256974B1 (en) * 2010-05-04 2016-02-09 Stephen P Hines 3-D motion-parallax portable display software application
US11244223B2 (en) 2010-06-08 2022-02-08 Iva Sareen Online garment design and collaboration system and method
US10628666B2 (en) 2010-06-08 2020-04-21 Styku, LLC Cloud server body scan data system
US10628729B2 (en) 2010-06-08 2020-04-21 Styku, LLC System and method for body scanning and avatar creation
US11640672B2 (en) 2010-06-08 2023-05-02 Styku Llc Method and system for wireless ultra-low footprint body scanning
US8599196B2 (en) 2010-06-18 2013-12-03 Michael Massen System and method for generating computer rendered cloth
KR20120085476A (en) * 2011-01-24 2012-08-01 삼성전자주식회사 Method and apparatus for reproducing image, and computer-readable storage medium
US9241184B2 (en) * 2011-06-01 2016-01-19 At&T Intellectual Property I, L.P. Clothing visualization
US9244022B2 (en) 2011-06-16 2016-01-26 The Procter & Gamble Company Mannequins for use in imaging and systems including the same
US20130304604A1 (en) * 2011-11-02 2013-11-14 Michael Theodor Hoffman Systems and methods for dynamic digital product synthesis, commerce, and distribution
JP2013101528A (en) 2011-11-09 2013-05-23 Sony Corp Information processing apparatus, display control method, and program
US9305373B2 (en) * 2012-09-11 2016-04-05 Potential Dynamics Corp. Customized real-time media system and method
US9830423B2 (en) * 2013-03-13 2017-11-28 Abhishek Biswas Virtual communication platform for healthcare
US11694797B2 (en) * 2012-10-30 2023-07-04 Neil S. Davey Virtual healthcare communication platform
US9304652B1 (en) 2012-12-21 2016-04-05 Intellifect Incorporated Enhanced system and method for providing a virtual space
US10366175B2 (en) 2013-03-15 2019-07-30 3D Tech Llc System and method for automated manufacturing of custom apparel
US9836806B1 (en) * 2013-06-07 2017-12-05 Intellifect Incorporated System and method for presenting user progress on physical figures
US10743732B2 (en) 2013-06-07 2020-08-18 Intellifect Incorporated System and method for presenting user progress on physical figures
US9818224B1 (en) * 2013-06-20 2017-11-14 Amazon Technologies, Inc. Augmented reality images based on color and depth information
US20150134302A1 (en) 2013-11-14 2015-05-14 Jatin Chhugani 3-dimensional digital garment creation from planar garment photographs
US10366439B2 (en) 2013-12-27 2019-07-30 Ebay Inc. Regional item reccomendations
US9728097B2 (en) 2014-08-19 2017-08-08 Intellifect Incorporated Wireless communication between physical figures to evidence real-world activity and facilitate development in real and virtual spaces
US20160092956A1 (en) 2014-09-30 2016-03-31 Jonathan Su Garment size mapping
GB201420090D0 (en) * 2014-11-12 2014-12-24 Knyttan Ltd Image to item mapping
US20170046769A1 (en) * 2015-08-10 2017-02-16 Measur3D, Inc. Method and Apparatus to Provide A Clothing Model
US20200250892A1 (en) * 2015-08-10 2020-08-06 Measur3D, Llc Generation of Improved Clothing Models
US10474927B2 (en) * 2015-09-03 2019-11-12 Stc. Unm Accelerated precomputation of reduced deformable models
EP3156976B1 (en) * 2015-10-14 2019-11-27 Dassault Systèmes Computer-implemented method for defining seams of a virtual garment or furniture upholstery
US10218793B2 (en) * 2016-06-13 2019-02-26 Disney Enterprises, Inc. System and method for rendering views of a virtual space
US9856585B1 (en) * 2016-09-19 2018-01-02 Umm-Al-Qura University Circular loom of mannequin
US11478033B2 (en) 2016-11-06 2022-10-25 Global Apparel Partners Inc. Knitted textile methods
US11094136B2 (en) 2017-04-28 2021-08-17 Linden Research, Inc. Virtual reality presentation of clothing fitted on avatars
US11145138B2 (en) * 2017-04-28 2021-10-12 Linden Research, Inc. Virtual reality presentation of layers of clothing on avatars
US11948057B2 (en) * 2017-06-22 2024-04-02 Iva Sareen Online garment design and collaboration system and method
US10613710B2 (en) 2017-10-22 2020-04-07 SWATCHBOOK, Inc. Product simulation and control system for user navigation and interaction
JP7224112B2 (en) * 2018-05-21 2023-02-17 Juki株式会社 sewing system
NL2022937B1 (en) * 2019-04-12 2020-10-20 Yoox Net A Porter Group S P A Method and Apparatus for Accessing Clothing
WO2020079235A1 (en) * 2018-10-19 2020-04-23 Yoox Net-A-Porter Group S.P.A. Method and apparatus for accessing clothing
CN110211222B (en) * 2019-05-07 2023-08-01 谷东科技有限公司 AR immersion type tour guide method and device, storage medium and terminal equipment
CN110211213A (en) * 2019-06-11 2019-09-06 深圳市瑞云科技有限公司 Display systems and method are tried in electric business dress ornament simulation based on CG real-time rendering on
US11715266B2 (en) * 2021-05-10 2023-08-01 Infosys Limited Method and system for organizing a virtual showroom with one or more 3D images
CN116797699B (en) * 2023-08-28 2023-12-15 武汉博润通文化科技股份有限公司 Intelligent animation modeling method and system based on three-dimensional technology

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4539585A (en) * 1981-07-10 1985-09-03 Spackova Daniela S Previewer
US5680528A (en) * 1994-05-24 1997-10-21 Korszun; Henry A. Digital dressing room
US5850222A (en) * 1995-09-13 1998-12-15 Pixel Dust, Inc. Method and system for displaying a graphic image of a person modeling a garment
US5912675A (en) * 1996-12-19 1999-06-15 Avid Technology, Inc. System and method using bounding volumes for assigning vertices of envelopes to skeleton elements in an animation system
US5974400A (en) * 1994-11-17 1999-10-26 Hitachi, Ltd. Trying-on apparel virtually (electronically) while protecting private data using irreversible process
US5982389A (en) * 1996-06-17 1999-11-09 Microsoft Corporation Generating optimized motion transitions for computer animated objects
US6384819B1 (en) * 1997-10-15 2002-05-07 Electric Planet, Inc. System and method for generating an animatable character
US6462740B1 (en) * 1999-07-30 2002-10-08 Silicon Graphics, Inc. System for in-scene cloth modification
US6476804B1 (en) * 2000-07-20 2002-11-05 Sony Corporation System and method for generating computer animated graphical images of an exterior patch surface layer of material stretching over an understructure
US6573890B1 (en) * 1998-06-08 2003-06-03 Microsoft Corporation Compression of animated geometry using geometric transform coding
US6608631B1 (en) * 2000-05-02 2003-08-19 Pixar Animation Studios Method, apparatus, and computer program product for geometric warps and deformations
US6909431B1 (en) * 1999-03-01 2005-06-21 Lucas Digital Ltd. Position and shape control for cloth and soft body animation
US6968297B1 (en) * 1999-10-08 2005-11-22 Lectra Sa Method and device for simulating and representing the dressing of a mannequin

Family Cites Families (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4149246A (en) 1978-06-12 1979-04-10 Goldman Robert N System for specifying custom garments
US4261012A (en) 1979-06-18 1981-04-07 Maloomian Laurence G System and method for composite display
US4546434C1 (en) 1979-10-03 2002-09-17 Debbie A Gioello Method for designing apparel
US4598376A (en) 1984-04-27 1986-07-01 Richman Brothers Company Method and apparatus for producing custom manufactured items
CA1237528A (en) 1985-06-06 1988-05-31 Debbie Gioello Method for designing apparel
CA1274919A (en) 1985-07-27 1990-10-02 Akio Ohba Method of forming curved surfaces and the apparatus
US4926344A (en) 1988-03-16 1990-05-15 Minnesota Mining And Manufacturing Company Data storage structure of garment patterns to enable subsequent computerized prealteration
US4916624A (en) 1988-03-16 1990-04-10 Minnesota Mining And Manufacturing Company Computerized system for prealteration of garment pattern data
US4916634A (en) 1988-03-16 1990-04-10 Minnesota Mining And Manufacturing Company System for preparing garment pattern data to enable subsequent computerized prealteration
US4899448A (en) 1988-05-16 1990-02-13 Huang Ding S Basic formula for active sketch pattern drawing in upper body tailoring
US4984155A (en) 1988-08-29 1991-01-08 Square D Company Order entry system having catalog assistance
US4885844A (en) 1988-11-14 1989-12-12 Chun Joong H Computer aided custom tailoring with disposable measurement clothing
US5247610A (en) 1989-03-17 1993-09-21 Hitachi, Ltd. Method and apparatus for generating graphics
US5060171A (en) 1989-07-27 1991-10-22 Clearpoint Research Corporation A system and method for superimposing images
US4984721A (en) * 1989-09-07 1991-01-15 E.R.A. Display Co. Ltd. Garment hanger
US5383111A (en) 1989-10-06 1995-01-17 Hitachi, Ltd. Visual merchandizing (VMD) control method and system
US5163006A (en) 1990-02-15 1992-11-10 Michelle Deziel System for designing custom-made, formfitted clothing, such as bathing suits, and method therefor
US5495568A (en) * 1990-07-09 1996-02-27 Beavin; William C. Computerized clothing designer
US5504845A (en) * 1990-09-10 1996-04-02 Modacad, Inc. Method for remodeling and rendering three-dimensional surfaces
US5163007A (en) 1990-11-13 1992-11-10 Halim Slilaty System for measuring custom garments
US5341305A (en) 1991-05-02 1994-08-23 Gerber Garment Technology, Inc. A computerized pattern development system capable of direct designer input
JP2614691B2 (en) 1992-01-23 1997-05-28 旭化成工業株式会社 Method and apparatus for visualizing assembled shape of paper pattern
JP3117097B2 (en) 1992-01-28 2000-12-11 ソニー株式会社 Image conversion device
JPH0696100A (en) 1992-09-09 1994-04-08 Mitsubishi Electric Corp Remote transaction system
US5566867A (en) 1993-05-28 1996-10-22 Goray; Jill Customizable garment form system
US5551021A (en) 1993-07-30 1996-08-27 Olympus Optical Co., Ltd. Image storing managing apparatus and method for retrieving and displaying merchandise and customer specific sales information
US5530652A (en) 1993-08-11 1996-06-25 Levi Strauss & Co. Automatic garment inspection and measurement system
US5557527A (en) * 1993-08-31 1996-09-17 Shima Seiki Manufacturing Ltd. Knit design system and a method for designing knit fabrics
US5530793A (en) 1993-09-24 1996-06-25 Eastman Kodak Company System for custom imprinting a variety of articles with images obtained from a variety of different sources
JP2703173B2 (en) 1993-12-27 1998-01-26 株式会社ワールド Pattern making system
US5715314A (en) 1994-10-24 1998-02-03 Open Market, Inc. Network sales system
US5680314A (en) 1995-08-25 1997-10-21 Patterson; Douglas R. Garment sizing system
AU1328597A (en) * 1995-11-30 1997-06-19 Virtual Technologies, Inc. Tactile feedback man-machine interface device
US5774870A (en) 1995-12-14 1998-06-30 Netcentives, Inc. Fully integrated, on-line interactive frequency and award redemption program
US5937081A (en) 1996-04-10 1999-08-10 O'brill; Michael R. Image composition system and method of using same
DE19635753A1 (en) 1996-09-03 1998-04-23 Kaufhof Warenhaus Ag Virtual imaging device for selecting clothing from catalogue
US5897620A (en) 1997-07-08 1999-04-27 Priceline.Com Inc. Method and apparatus for the sale of airline-specified flight tickets
US5930769A (en) 1996-10-07 1999-07-27 Rose; Andrea System and method for fashion shopping
US6310627B1 (en) 1998-01-20 2001-10-30 Toyo Boseki Kabushiki Kaisha Method and system for generating a stereoscopic image of a garment
US5987929A (en) * 1998-04-20 1999-11-23 Bostani; Arman Method and apparatus for fabrication of composite and arbitrary three dimensional objects
US6113395A (en) * 1998-08-18 2000-09-05 Hon; David C. Selectable instruments with homing devices for haptic virtual reality medical simulation
US6307568B1 (en) * 1998-10-28 2001-10-23 Imaginarix Ltd. Virtual dressing over the internet
US6404426B1 (en) 1999-06-11 2002-06-11 Zenimax Media, Inc. Method and system for a computer-rendered three-dimensional mannequin

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012118948A (en) * 2010-12-03 2012-06-21 Ns Solutions Corp Augmented reality presentation device, augmented reality presentation method, and program
US9236024B2 (en) 2011-12-06 2016-01-12 Glasses.Com Inc. Systems and methods for obtaining a pupillary distance measurement using a mobile computing device
US9167155B2 (en) 2012-04-02 2015-10-20 Fashion3D Sp. z o.o. Method and system of spatial visualisation of objects and a platform control system included in the system, in particular for a virtual fitting room
US9378584B2 (en) 2012-05-23 2016-06-28 Glasses.Com Inc. Systems and methods for rendering virtual try-on products
AU2013266184C1 (en) * 2012-05-23 2018-08-23 Luxottica Retail North America Inc. Systems and methods for adjusting a virtual try-on
US9208608B2 (en) 2012-05-23 2015-12-08 Glasses.Com, Inc. Systems and methods for feature tracking
US9235929B2 (en) 2012-05-23 2016-01-12 Glasses.Com Inc. Systems and methods for efficiently processing virtual 3-D data
US9286715B2 (en) 2012-05-23 2016-03-15 Glasses.Com Inc. Systems and methods for adjusting a virtual try-on
US9311746B2 (en) 2012-05-23 2016-04-12 Glasses.Com Inc. Systems and methods for generating a 3-D model of a virtual try-on product
WO2013177456A1 (en) * 2012-05-23 2013-11-28 1-800 Contacts, Inc. Systems and methods for adjusting a virtual try-on
US9483853B2 (en) 2012-05-23 2016-11-01 Glasses.Com Inc. Systems and methods to display rendered images
US10147233B2 (en) 2012-05-23 2018-12-04 Glasses.Com Inc. Systems and methods for generating a 3-D model of a user for a virtual try-on product
AU2013266184B2 (en) * 2012-05-23 2018-02-22 Luxottica Retail North America Inc. Systems and methods for adjusting a virtual try-on
ITMI20121398A1 (en) * 2012-08-07 2014-02-08 Bella Gabriele Di System and method to associate clothing garments with human somatic features or other clothing garments, displaying them via the web.
WO2015031164A1 (en) * 2013-08-30 2015-03-05 Glasses.Com Inc. Systems and methods for generating a 3-d model of a user using a rear-facing camera
US20180240280A1 (en) * 2015-08-14 2018-08-23 Metail Limited Method and system for generating an image file of a 3d garment model on a 3d body model
US20180197331A1 (en) * 2015-08-14 2018-07-12 Metail Limited Method and system for generating an image file of a 3d garment model on a 3d body model
US10636206B2 (en) * 2015-08-14 2020-04-28 Metail Limited Method and system for generating an image file of a 3D garment model on a 3D body model
US10867453B2 (en) * 2015-08-14 2020-12-15 Metail Limited Method and system for generating an image file of a 3D garment model on a 3D body model
CN107481095A (en) * 2017-07-26 2017-12-15 深圳市盛世华服信息有限公司 3D virtual fashion apparel design customization method and customization system

Also Published As

Publication number Publication date
US20120188232A1 (en) 2012-07-26
AU2210801A (en) 2001-06-06
US20110273444A1 (en) 2011-11-10
US7663648B1 (en) 2010-02-16
WO2001035342A1 (en) 2001-05-17

Similar Documents

Publication Publication Date Title
US7663648B1 (en) System and method for displaying selected garments on a computer-simulated mannequin
US20200380333A1 (en) System and method for body scanning and avatar creation
US10628666B2 (en) Cloud server body scan data system
US11640672B2 (en) Method and system for wireless ultra-low footprint body scanning
US6310627B1 (en) Method and system for generating a stereoscopic image of a garment
US20090079743A1 (en) Displaying animation of graphic object in environments lacking 3D rendering capability
CN101477701B (en) Built-in true three-dimensional rendering process oriented to AutoCAD and 3DS MAX
US20070273711A1 (en) 3D graphics system and method
US20110298897A1 (en) System and method for 3D virtual try-on of apparel on an avatar
JP3314704B2 (en) Method of synthesizing image showing fitting state and virtual fitting system using the method
CN109934933B (en) Simulation method and image simulation system based on virtual reality
CN109523345A (en) WebGL virtual fitting system and method based on virtual reality technology
Li et al. In-home application (App) for 3D virtual garment fitting dressing room
CN106548392B (en) Virtual fitting implementation method based on WebGL technology
CN109325990A (en) Image processing method, image processing apparatus, and storage medium
JP3721418B2 (en) Method for controlling the level of detail displayed on a computer generated screen display of a composite structure
KR100900823B1 (en) An efficient real-time skin wrinkle rendering method and apparatus in character animation
Millan et al. Impostors and pseudo-instancing for GPU crowd rendering
KR100828935B1 (en) Method of Image-based Virtual Draping Simulation for Digital Fashion Design
WO2018182938A1 (en) Method and system for wireless ultra-low footprint body scanning
CA2289413C (en) System and method for displaying selected garments on a computer-simulated mannequin
Liu Computer 5G virtual reality environment 3D clothing design
Fondevilla et al. Fashion transfer: Dressing 3D characters from stylized fashion sketches
CN116982088A (en) Layered garment for conforming to underlying body and/or garment layers
JP2739447B2 (en) 3D image generator capable of expressing wrinkles

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION