US20120327084A1 - Layered Personalization - Google Patents

Layered Personalization

Info

Publication number
US20120327084A1
Authority
US
United States
Prior art keywords
layer
rendering
renderings
attribute
components
Prior art date
Legal status
Abandoned
Application number
US13/455,934
Inventor
Grant Thomas-Lepore
Iwao Hatanaka
Murali Menon
Current Assignee
Gemvara Inc
Original Assignee
Gemvara Inc
Priority date
Filing date
Publication date
Application filed by Gemvara Inc filed Critical Gemvara Inc
Priority to US13/455,934
Assigned to Gemvara, Inc. reassignment Gemvara, Inc. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HATANAKA, IWAO, THOMAS-LEPORE, GRANT
Assigned to Gemvara, Inc. reassignment Gemvara, Inc. CORRECTIVE ASSIGNMENT TO CORRECT THE RE-RECORD ASSIGNMENT TO ADD OMITTED INVENTOR. PREVIOUSLY RECORDED ON REEL 028260 FRAME 0148. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT OF ASSIGNORS' INTEREST. Assignors: HATANAKA, IWAO, MENON, MURALI, THOMAS-LEPORE, GRANT
Publication of US20120327084A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/50: Lighting effects
    • G06T 15/60: Shadow generation
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00: Computer-aided design [CAD]
    • G06F 2111/00: Details relating to CAD techniques
    • G06F 2111/20: Configuration CAD, e.g. designing by assembling or positioning modules selected from libraries of predesigned modules
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00: Commerce
    • G06Q 30/06: Buying, selling or leasing transactions
    • G06Q 30/0601: Electronic shopping [e-shopping]
    • G06Q 30/0621: Item configuration or customization

Definitions

  • Such techniques may be sufficient for non-customizable products or for products with very limited customizability. Such techniques are not, however, sufficient to convey to the consumer an accurate understanding of the appearance of a highly customizable product before the consumer finalizes the purchase decision. If the final appearance of the product is particularly important to the consumer, this inability to view an accurate representation of the final product, reflecting all customizations, may make the consumer unwilling to purchase such a product.
  • a computer system includes a three-dimensional model of an object such as a piece of jewelry.
  • the model is divided into multiple layers, each of which contains one or more components of the object.
  • Each layer is associated with one or more attribute types, each of which is associated with a corresponding plurality of possible attribute values.
  • the system pre-renders each layer with each possible attribute type and each possible attribute value for that type and layer.
  • the resulting layer renderings may be combined with each other to produce personalized renderings of the entire object without the need to pre-render all possible combinations of attribute values.
  • Responsibility for rendering the layers and the final complete object personalization may be divided between client and server in a variety of ways to increase efficiency.
  • a computer-implemented method is used in conjunction with a three-dimensional computer model of an object.
  • the model includes a plurality of layers, wherein each of the plurality of layers includes at least one corresponding component in the model.
  • Each of the plurality of layers is associated with at least one attribute.
  • the method includes: (A) rendering each of the plurality of layers with each of a plurality of values of the at least one attribute to produce a plurality of layer renderings; (B) receiving a first request for a first rendering of a personalized object specifying a plurality of attribute values; (C) selecting, from among the plurality of layer renderings, a subset of layer renderings corresponding to the specified plurality of attribute values; and (D) combining the selected subset of layer renderings to produce the first rendering of the personalized object.
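Steps (A)-(D) can be sketched in code. The following is an illustrative sketch only, not part of the patent text; the function names, the dictionary-based rendering cache, and the `render_layer`/`combine` callables are all assumptions:

```python
from typing import Any, Callable, Dict, Tuple

def prerender_layers(layers: Dict[str, list],
                     render_layer: Callable[[str, Any], Any]) -> Dict[Tuple[str, Any], Any]:
    """Step (A): render each layer once per possible attribute value.

    `layers` maps a layer name to its permissible attribute values;
    `render_layer(layer, value)` stands in for the expensive ray-tracing
    step and returns a 2D layer rendering."""
    cache = {}
    for layer, values in layers.items():
        for value in values:
            cache[(layer, value)] = render_layer(layer, value)
    return cache

def personalized_rendering(cache: Dict[Tuple[str, Any], Any],
                           request: Dict[str, Any],
                           combine: Callable[[list], Any]) -> Any:
    """Steps (B)-(D): given a request specifying one attribute value per
    layer, select the matching pre-rendered layers and combine them."""
    selected = [cache[(layer, value)] for layer, value in request.items()]
    return combine(selected)
```

With string stand-ins for images, every personalized request is then served by cache lookup and combination rather than by re-rendering the whole object.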
  • FIG. 1 shows a two-dimensional rendering of a three-dimensional model of a ring according to one embodiment of the present invention
  • FIG. 2 is a diagram of an object model representing an object, such as a ring, according to one embodiment of the present invention
  • FIG. 3 shows renderings of various layers of an object model using different attribute values according to one embodiment of the present invention
  • FIG. 4 is a flow chart of a method for creating renderings of layers of an object according to one embodiment of the present invention
  • FIG. 5 is a dataflow diagram of a system for performing the method of FIG. 4 according to one embodiment of the present invention
  • FIG. 6 illustrates an example of combining renderings of four layers to produce a customized view of an object according to one embodiment of the present invention
  • FIG. 7 is a dataflow diagram of a system for combining renderings of layers of an object to produce a rendering of the object as a whole according to one embodiment of the present invention
  • FIG. 8 is a flowchart of a method performed by the system of FIG. 7 according to one embodiment of the present invention.
  • FIG. 9 illustrates the use of a reference object to indicate the scale of a rendered object according to one embodiment of the present invention.
  • FIG. 10 illustrates a “fly-by” view of an object according to one embodiment of the present invention
  • FIG. 11 illustrates combining a ground plane containing shadows with a rendering of a layer of an object according to one embodiment of the present invention.
  • FIGS. 12A-D illustrate combining renderings of variable-shaped components with renderings of fixed-shape components according to embodiments of the present invention.
  • Embodiments of the present invention are directed to a method for efficiently generating componentized 2D (2 dimensional) rasterized views of an object, such as a ring or other piece of jewelry, from a 3D (3 dimensional) model of the object.
  • a 3D CAD (Computer Aided Design) model is used to represent a complete 3D geometry of the object.
  • the object is decomposed into components or parts that can be personalized on demand.
  • a ring may have a shank, center stone, side stones, and associated settings.
  • a user may want to change the type of center and side stones, or the metal types of the shank, center stone, and side stone settings.
  • Embodiments of the present invention personalize components of the ring or other object by structuring, labeling, and processing a 3D CAD model of the object to generate a tractable set of 2D views that can be combined on demand into a large combinatorial set of photorealistic, personalized object views.
  • a designer or other user may create a 3D model of an object, such as by using standard CAD software.
  • Referring to FIG. 1 , an example is shown of a two-dimensional rendering of a three-dimensional model 100 of an object, a ring in this example.
  • Referring to FIG. 2 , a diagram is illustrated of an object model 200 representing an object, such as a ring.
  • the particular ring object rendering 100 shown in FIG. 1 has seven components 102 a-g: a shank 102 a , center stone setting metal 102 b , center stone 102 c , a first pair of side stones 102 d-e, and a second pair of side stones 102 f-g .
  • the corresponding object model 200 shown in FIG. 2 contains components 202 a - g, which correspond to the components 102 a - g in the rendering 100 of FIG. 1 .
  • although the particular object model 200 shown in FIG. 2 contains seven components 202 a-g, this is merely an example; object models may contain any number of components.
  • the components in a particular object model may be selected in any manner.
  • the model may be decomposed into components that are relevant for a particular domain, such as personalization by a customer through a web site.
  • Components may, however, be selected from within the CAD model in any manner.
  • Components 202 a - g in the object model 200 may be grouped into m layers that may represent domain relevant characteristics of the object.
  • layer 204 a contains components 202 a - b
  • layer 204 b contains component 202 c
  • layer 204 c contains components 202 d - e
  • layer 204 d contains components 202 f - g.
  • the rendering 100 of the object model 200 may be divided into layers 104 a-d, where layer 104 a contains shank component 102 a and center stone setting metal component 102 b , layer 104 b contains center stone component 102 c , layer 104 c contains first side stone components 102 d-e, and layer 104 d contains second side stone components 102 f-g.
  • layer 104 c contains multiple side stones 102 d-e, to facilitate personalization of all of the side stones 102 d-e in the layer 104 c simultaneously.
  • if, for example, an object included 100 side stones, those side stones might be grouped into two layers of 50 stones each, so that the user could select features (such as stone types) for the two sub-sets independently.
  • CAD software may be used to facilitate the process of creating and managing layers.
  • many existing CAD packages allow the user to organize different components of a CAD model into custom-named groups (i.e. Metal 01, Gem 01, etc.).
  • custom-named groups may be created and used to represent layers in the object model.
  • Components may be added to the groups in order to add such components to layers in the object model.
  • the attributes for each layer may be loaded into the CAD system so that the CAD system may apply any applicable attribute to components in any particular layer.
  • Each of the layers 204 a-d in the object model 200 may have n attributes that describe physical properties of the object.
  • component 202 a has two attributes 206 a - b.
  • Each of the attributes 206 a - b has a type and a value (attribute 206 a has type 208 a and value 208 b; attribute 206 b has type 210 a and value 210 b ).
  • attribute types include, but are not limited to, color, material type (e.g., type of metal or stone), shape, size, and finish.
  • Each attribute type may have a corresponding permissible set or range of attribute values.
  • an attribute with a type of “metal type” may have permissible values such as “gold” and “silver,” while an attribute with a type of “size” may have permissible values which are floating point numbers ranging from 1 mm to 500 mm.
  • Each attribute may have any number of permissible attribute values.
  • components 202 b - g have their own attributes, although they are not shown in FIG. 2 .
  • Each component may have any number of attributes.
  • the value of n may vary from component to component.
  • attributes are associated with entire layers rather than individual components, in which case the attribute types and values associated with a particular layer are applied to all components within that layer. In this case, the value of n may vary from layer to layer.
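The layer/attribute structure described above might be modeled as follows. This is an illustrative sketch only; the class and field names are assumptions rather than anything specified by the patent:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Attribute:
    """An attribute has a type and a permissible set of values."""
    type: str                # e.g. "metal type" or "stone color"
    values: Tuple[str, ...]  # e.g. ("gold", "silver")

@dataclass
class Layer:
    """A layer groups one or more components; attributes associated with
    the layer apply to every component within it."""
    name: str
    components: List[str]
    attributes: List[Attribute] = field(default_factory=list)

@dataclass
class ObjectModel:
    """A model is divided into layers, each covering part of the object."""
    layers: List[Layer]
```

The number of attributes may then naturally vary from layer to layer, as the text notes.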
  • FIG. 3 illustrates a simplified example in which each of the layers 104 a - d from FIG. 1 has exactly one attribute, each of which has four possible values.
  • row 302 a illustrates four renderings 304 a - d of layer 104 a, representing four possible values of a “metal color” attribute
  • row 302 b illustrates four renderings 306 a - d of layer 104 d, representing four possible values of a “stone color” attribute
  • row 302 c illustrates four renderings 308 a - d of layer 104 c, representing four possible values of a “stone color” attribute
  • row 302 d illustrates four renderings 310 a - d of layer 104 b, representing four possible values of a “stone color” attribute.
  • the number of layers m may be variable for a particular object.
  • each layer in an object representing a bracelet may represent a distinct chain in the bracelet.
  • the user may be allowed to add and/or subtract chains from the bracelet, thereby adding and/or subtracting layers from the object model representing the bracelet.
  • an earring may have a variable number of posts, corresponding to a variable number of layers.
  • the use of variable layers is particularly useful for representing accessories in jewelry but may be used for any purpose. Certain layers within an object may be designated as required, in which case they may not be removed from the underlying object model. Other restrictions may be placed on layers, such as a maximum number of additional layers which may be added to a particular object model.
  • FIG. 5 illustrates one embodiment of a system 500 which performs the method 400 of FIG. 4 .
  • the system 500 includes a rendering engine 502 which enters a loop over each layer L in the object model 200 (step 402 ). Within this loop, the rendering engine 502 enters a loop over each possible combination A of values of attributes in layer L (step 404 ). The number of such combinations is equal to the product of the number of possible attribute values for each attribute type in layer L.
  • the method 400 applies the current combination of attribute values A to all components within the current layer L and renders the resulting components to produce a two-dimensional rendering of layer L (step 406 ).
  • the rendering engine 502 may render each layer L in any way, such as by using commercially available ray tracing software (e.g., VRay) by defining properties for physical materials (e.g., metal, gemstones) to produce “true-to-life” photo-realistic imagery.
  • although the final rendering for each layer may represent only objects in that layer, when the rendering engine 502 renders a particular layer L, it may render not only components in layer L, but also components in other layers, to make the final rendering of layer L more realistic.
  • the rendering engine 502 may first render the entire modeled object, so that any effects of other layers on the current layer L may be reflected in the rendering. Representations of components in layers other than the current layer L may then be removed from the rendering of layer L, to produce the final rendering for layer L which is stored in the layer renderings 504 . This may be accomplished, for example, through use of the alpha channel, which allows objects to be present in the scene and so affect light reflections, refractions, shadows, etc. without being saved in the final image file.
  • the rendering engine 502 repeats step 406 for all remaining combinations of attribute values within layer L (step 408 ).
  • the rendering engine 502 repeats steps 404 - 408 for the remaining layers in the object model (step 410 ).
  • a separate two-dimensional rendering is produced for each possible combination of attribute values within each layer. For example, in the case of the object model 200 shown in FIG. 2 , the rendering engine 502 produces layer renderings 504 , which contain a set of renderings 302 a of layer 204 a, a set of renderings 302 b of layer 204 b, a set of renderings 302 c of layer 204 c, and a set of renderings 302 d of layer 204 d.
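The nested loops of steps 402-410 can be sketched as below. This is illustrative only; the duck-typed `model`, `render`, and `store` arguments are assumptions, and `itertools.product` enumerates the attribute-value combinations within each layer:

```python
from itertools import product

def render_all_layers(model, render, store):
    """Sketch of FIG. 4: for each layer (step 402), iterate over every
    combination of that layer's attribute values (step 404), render the
    layer's components with those values (step 406), and store the
    resulting 2D layer rendering; the loops repeat until all combinations
    and all layers are exhausted (steps 408-410)."""
    for layer in model.layers:
        value_sets = [attr.values for attr in layer.attributes]
        for combo in product(*value_sets):
            store(layer.name, combo, render(layer, combo))
```

A layer with a "metal type" attribute of two values and a "finish" attribute of two values would thus be rendered four times, once per combination.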
  • the resulting 2D renderings 504 may be stored in any form, such as in individual image files on a hard disk drive or other storage medium. Information about the attributes and other data associated with the layer renderings 504 may also be stored in any form. Such data may, for example, be stored in the same files as those which contain the renderings 504 , or in separate files.
  • Not all attribute values may result in renderings that are distinct from each other. For example, changing a certain attribute value of a layer may merely affect the price of the components in the layer, but may not affect how the components in that layer are rendered. In other words, two distinct values of a particular attribute may result in the same rendering of the corresponding layer. In this case, it is not necessary to create separate, redundant renderings of the layer for both attribute values. Instead, a single rendering may be used to represent both attribute values.
  • redundant renderings may be eliminated in any of a variety of ways. For example, all renderings may first be produced using the method 400 of FIG. 4 . Redundant renderings may then be identified and consolidated, such that each set of two or more redundant renderings is reduced to a single representative rendering. When any of the renderings in the set is required for use in rendering the entire object, the representative rendering may be used.
  • redundancies may be identified before the redundant renderings are produced.
  • the method 400 may determine whether rendering the components in the current layer L using the current combination of attribute values A will produce a rendering that has already been produced by the method 400 . If so, the method 400 may refrain from producing the rendering again, and instead store a pointer or other record indicating that the previously-generated rendering should be used whenever a rendering of layer L using attribute values A is needed.
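One way to refrain from producing redundant renderings is to key the rendering cache on only the appearance-affecting attribute values, as sketched below. The `visual_key` helper and the cache shape are assumptions for illustration, not part of the patent:

```python
def render_with_dedup(layer_name, attr_values, visual_key, render, cache):
    """Render a layer only if no visually identical rendering exists.

    `visual_key(attr_values)` drops appearance-neutral attributes
    (e.g. price-only attributes), so two value combinations that would
    produce the same image share one cache entry; the shared entry acts
    as the "pointer" to the previously generated rendering."""
    key = (layer_name, visual_key(attr_values))
    if key not in cache:
        cache[key] = render(layer_name, attr_values)
    return cache[key]
```

Here a hypothetical "price" attribute changes cost but not appearance, so both values resolve to one rendering.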
  • the object model 200 may include data about the components 202 a - g in addition to the attributes which are used to create the layer renderings.
  • Such metadata 212 need not be treated by method 400 as an attribute for purposes of generating the possible combinations of attribute values in step 404 . More generally, the metadata 212 need not be used by the method 400 at all in generating the layer renderings in step 406 . Examples of such metadata include prices and SKUs of components.
  • although metadata 212 , associated with component 202 a , is shown in FIG. 2 for purposes of example, any kind and amount of metadata may be associated with any of the components 202 a-g in the object model 200 .
  • metadata may be associated with one or more of the layers 204 a - d, or with the object model 200 as a whole. Metadata may be assigned automatically and/or manually by a user.
  • the two-dimensional renderings 504 of different layers 204 a - d, once produced, may be combined with each other in any combination to form a large number of personalized views of the entire modeled object.
  • Referring to FIG. 7 , a system 700 is shown for creating such a rendering of the entire modeled object according to one embodiment of the present invention.
  • Referring to FIG. 8 , a flowchart is shown of a method 800 performed by the system 700 of FIG. 7 according to one embodiment of the present invention.
  • the system 700 includes a layer rendering selector 702 which selects one rendering from each of the sets 302 a - d ( FIG. 5 ) of layer renderings 504 to produce a set of selected layer renderings 704 ( FIG. 8 , step 802 ).
  • the selected layer renderings 704 include rendering 706 a from layer 204 a, rendering 706 b from layer 204 b, rendering 706 c from layer 204 c, and rendering 706 d from layer 204 d.
  • a layer rendering combiner 708 combines the selected layer renderings 704 together to form a two-dimensional rendering 710 of the entire modeled object ( FIG. 8 , step 804 ).
  • the object rendering 710 , like the individual layer renderings 504 , may be represented and stored as a raster image rather than as a three-dimensional model.
  • FIG. 6 illustrates an example in which rendering 304 c is selected from layer renderings 302 a (Layer 1); rendering 306 b is selected from layer renderings 302 b (Layer 2); rendering 308 a is selected from layer renderings 302 c (Layer 3); and rendering 310 c is selected from layer renderings 302 d (Layer 4).
  • layer renderings 304 c, 308 a, 310 c, and 306 b represent the selected layer renderings 706 a, 706 b, 706 c, and 706 d, respectively.
  • renderings 304 c, 308 a, 310 c, and 306 b are combined together to form a rendering 600 of the entire object modeled by the object model 200 , representing a particular combination of attribute values.
  • the rendering 600 in FIG. 6 is an example of the object rendering 710 in FIG. 7 .
  • a combination of the method 400 ( FIG. 4 ) and the method 800 ( FIG. 8 ) to produce object renderings such as the object rendering 600 shown in FIG. 6
  • the ring components may, for example, be assigned to 5 layers, where layer 1 is the shank, layer 2 is the center stone setting, layer 3 is the center stone, layer 4 is 50 alternating side stones, and layer 5 is the other 50 alternating side stones.
  • the shank layer has a “metal type” attribute with 10 possible values (representing 10 possible types of metal)
  • the center stone setting has a “metal type” attribute with 10 possible values
  • the center stone layer has a “gemstone type” attribute with 21 possible values
  • each of the two side stone layers has its own “gemstone type” attribute with 21 possible values.
  • a total of only 83 2D views (10+10+21+21+21) need to be rendered by method 400 to produce the layer renderings 504 shown in FIG. 5 .
  • This small number of renderings may be combined into 926,100 possible permutations, or “personalized” views.
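The arithmetic behind this example can be checked directly: the number of views to pre-render is the sum of the per-layer value counts, while the number of personalized permutations obtainable by combining one rendering per layer is their product:

```python
from math import prod

# One entry per customizable layer: shank, center stone setting,
# center stone, and two alternating side-stone layers.
values_per_layer = [10, 10, 21, 21, 21]

renderings_to_produce = sum(values_per_layer)  # one 2D view per attribute value
personalized_views = prod(values_per_layer)    # one rendering chosen per layer

print(renderings_to_produce, personalized_views)  # 83 926100
```

Pre-rendering thus scales additively with the attribute values while the space of personalized views grows multiplicatively.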
  • One advantage of embodiments of the present invention is that they may be used to produce a very large number of personalized object renderings by rendering only a very small number of renderings of layers of the object. This is important because the process 400 used in FIG. 4 to create each individual layer rendering—producing realistic, two-dimensional rasterized images of the layers from a three-dimensional CAD model—is resource intensive, requiring significant computer processing resources or significant time to perform. In contrast, the process 800 used in FIG. 8 to combine existing rasterized layer renderings together to produce a rasterized image of the entire modeled object is computationally inexpensive.
  • Embodiments of the present invention only need to perform the computationally-expensive process 400 of FIG. 4 a single time, to produce a relatively small number of layer renderings 504 .
  • this process 400 may, for example, be performed on a computer having significant computing resources, such as a server, graphics workstation, or cluster of such computers.
  • any number of realistic raster images of the entire object may be produced quickly, any number of times, by less-powerful computers using the method 800 of FIG. 8 .
  • Embodiments of the present invention therefore provide significant increases in the efficiency of generating realistic images of customized objects, without any loss of quality in such images, in comparison to previously-used techniques.
  • In the notation used herein, n i k refers to the number of possible values of the k th attribute on layer i, and m refers to the total number of layers. The number of layer renderings that must be pre-rendered is the sum, over all m layers, of the number of attribute-value combinations within each layer, while the number of personalized object views obtainable by combining them is the product of those per-layer counts over all m layers.
  • Embodiments of the present invention may be used to display information other than renderings of the modeled object 200 .
  • it can be useful when presenting 3D models in a 2D view to show physical scale with respect to a common reference object.
  • a dime or other coin may be used as a common reference object for jewelry models.
  • FIG. 9 illustrates such an example, in which a rendering 100 of a ring is combined with a rendering 902 of a dime to produce a rendering 904 which shows both the ring and the dime, rendered to scale.
  • the combined rendering 904 may, for example, be produced by combining together rasterized images of the ring and dime.
  • renderings may be created of the reference object at various spatial views, and the resulting renderings may be stored for later use in combining with renderings of other objects
  • the reference object may be rendered as semi-transparent so as not to obscure the primary object being rendered.
  • components may be combined in other ways to produce the final object rendering 710 .
  • the resulting pendant may be rendered to display the chain threaded through the bail(s) of the pendant.
  • the final rendering may reflect the size of the chain and of the bail(s) to accurately represent how the chain would appear if threaded through the bail(s).
  • a ring may be rendered as fitted to a model of a human hand.
  • a necklace may be displayed as fitted to a model of a human neck. Such renderings may accurately represent how such jewelry would appear when worn on particular parts of the human body.
  • a set of 2D views of the entire object from various perspectives may be generated to allow a 3D Virtual Reality (VR) “fly-by” of the object.
  • the sequence of 2D views that comprise the fly-by may, for example, include “camera” views of the object from different spatial locations.
  • An example of such views 1002 a-l is shown in FIG. 10 . As can be seen from FIG. 10 , if the views 1002 a-l were to be displayed on-screen in sequence, the result would be the appearance of flying around the ring to view it from different perspectives.
  • Such different camera views may be rendered using the techniques disclosed above, using any pre-selected combination of attribute values.
  • the different camera views 1002 a-l may include different personalized combinations of the object being rendered.
  • attribute values of one or more layers in the object may be varied from camera view to camera view.
  • the effect is to show attribute values (e.g., stone types/colors, metal types/colors) of the rendered object changing as the fly-by progresses. This process could also be used to generate an entire fly-by animation in the personalized configuration selected by the user.
  • One advantage of changing the attribute values in this way is that it allows the user to see not only the same object from different angles, but also different personalizations of the object, but without incurring the resource overhead (memory, processor time, and disk storage) required to render a complete fly-by for each distinct personalization of the object.
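The fly-by with varying personalization can be sketched as below: one frame per camera view, with each varied attribute cycling through its values as the sequence progresses. The function shape and the `lookup` accessor into the pre-rendered views are illustrative assumptions:

```python
from itertools import cycle

def flyby_frames(camera_views, attribute_cycles, lookup):
    """Build a fly-by frame sequence. `camera_views` lists the spatial
    camera positions; `attribute_cycles` maps attribute names to the
    values to rotate through across frames; `lookup(view, attrs)`
    fetches the pre-rendered combined image for that camera view and
    personalization."""
    cycles = {name: cycle(values) for name, values in attribute_cycles.items()}
    return [lookup(view, {name: next(c) for name, c in cycles.items()})
            for view in camera_views]
```

Because each frame is assembled from the same pool of layer renderings, no per-personalization fly-by needs to be rendered in advance.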
  • Shadows may be included in the final rendering 710 of the modeled object 200 to make the rendering 710 as realistic as possible. It is inefficient, however, to store separate renderings of the shadows created by components in a layer for every possible combination of attribute values for that layer, since changes in most attribute values (e.g., colors and materials) do not affect the shadows cast by the layer. Therefore, the shadows of all components having a fixed shape in an object may be rendered and stored in a single layer referred to herein as a “ground plane,” which may be thought of as a “shadow layer” because its purpose is to store shadows cast by fixed-shape components of the object. As a result, such shadows need not be stored in other renderings of those components.
  • the ground plane may be stored as a layer rendering (e.g., as a raster image) in addition to and separate from the layer renderings 504 .
  • multiple different ground plane layers may be created of diverse colors and patterns. The purpose of this is to allow the object to be displayed in different surroundings for aesthetic purposes.
  • FIG. 11 shows an example in which a ground plane 1102 is combined with an object rendering 1104 to produce a final object rendering 1106 which contains both the shadows from the ground plane 1102 and the components from the object rendering 1104 .
  • the shadows of those components whose shapes may vary may be handled differently from those with invariant shapes.
  • the shadows of variable-shape components may be rendered and stored within the layer renderings 504 of those components themselves (rather than in the ground plane). For example, if a particular component may have either a rectangular or oval shape, a rectangular version of the component and its shadow may be rendered and stored in one layer rendering, while the oval version of the component and its shadow may be rendered and stored in another layer rendering. If the rectangular version of the component is later selected for inclusion in the final object, the pre-rendering of the rectangular object and its shadow may be combined with the other selected components to produce the final object rendering.
  • Referring to FIGS. 12A-B , examples are shown in which renderings of a variable-shaped object are combined with renderings of an invariant-shaped object.
  • FIG. 12A illustrates an example in which a first rendering 1202 a of variable-shaped components includes the shadows of those components, and in which a rendering 1204 of invariant-shaped components does not include the shadows of those components.
  • the rendering 1202 a including the shadows it contains, is combined with the rendering 1204 , to produce final object rendering 1206 a.
  • a ground plane representing shadows of the invariant-shaped objects in rendering 1204 , could also be combined with renderings 1202 a and 1204 to produce final object rendering 1206 a.
  • FIG. 12B illustrates an example in which a second rendering 1202 b of the variable-shaped components from rendering 1202 a includes the shadows of those components.
  • the shadows in rendering 1202 b differ from those in rendering 1202 a.
  • the same rendering 1204 of the invariant-shaped objects is used.
  • the rendering 1202 b, including the shadows it contains, is combined with the rendering 1204 , to produce final object rendering 1206 b.
  • the variable-shaped component may be separated out into its own plane.
  • in FIGS. 12A and 12B , the renderings 1202 a and 1202 b include both variable-shaped components and their shadows.
  • the ground planes 1212 a and 1212 b solely contain shadows; i.e., they do not contain the variable-shaped components.
  • variable-shaped components have been separated out into planes 1215 a and 1215 b.
  • the fixed-shaped components are retained within their own plane 1214 in FIGS. 12C and 12D .
  • holes may be rendered at appropriate locations within the variable-shaped components in planes 1215 a and 1215 b so that the variable-shaped components appear to interact realistically with the fixed-shape components of plane 1214 when the fixed-shape and variable-shape components are combined together.
  • the holes are placed at locations where the fixed-shape components intersect the variable-shape components.
  • the ground (shadow) plane (layer 1212 a or layer 1212 b ) may be rendered first, i.e., at the “bottom” of the stack.
  • the fixed-shape components (layer 1214 ) may be rendered next, i.e., on “top” of the ground plane, in the “middle” of the stack.
  • the variable-shape components (e.g., layer 1215 a or 1215 b ) may be layered last, i.e., on “top” of the other two planes, at the “top” of the stack. This achieves a realistic three-dimensional effect in which the fixed-shape components appear to pass through the holes in the variable-shaped components in the resulting final object renderings 1216 a and 1216 b.
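The bottom-to-top layering described above (shadow plane, then fixed-shape components, then variable-shape components) amounts to repeated "over" alpha compositing. The sketch below is illustrative only, with images represented as flat lists of floating-point RGBA tuples rather than any particular image format:

```python
def composite_over(bottom, top):
    """Standard 'over' alpha compositing of two equal-sized RGBA images,
    each a flat list of (r, g, b, a) tuples with components in [0.0, 1.0]."""
    out = []
    for (br, bg, bb, ba), (tr, tg, tb, ta) in zip(bottom, top):
        a = ta + ba * (1 - ta)
        if a == 0:
            out.append((0.0, 0.0, 0.0, 0.0))
            continue
        blend = lambda t, b: (t * ta + b * ba * (1 - ta)) / a
        out.append((blend(tr, br), blend(tg, bg), blend(tb, bb), a))
    return out

def render_stack(ground_plane, fixed_layer, variable_layer):
    # Bottom-to-top order from FIGS. 12C-D: shadows first, then the
    # fixed-shape components, then the variable-shape components (whose
    # rendered holes let the fixed components appear to pass through).
    image = composite_over(ground_plane, fixed_layer)
    return composite_over(image, variable_layer)
```

A fully transparent pixel in the variable-shape plane (a "hole") leaves the fixed-shape layer beneath it visible, producing the pass-through effect described above.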
  • Each of the layer renderings 504 represents a particular layer rendered with one or more particular attribute values.
  • Each of the layer renderings 504 may be encoded with information such as the name of the layer and the names (e.g., types) of the attributes of the layer by, for example, saving the layer rendering in a file having a filename which includes text representing the layer and attribute name(s), so that the particular file which encodes a particular layer with particular attributes may be easily identified.
  • a filename may, for example, have a format such as: “&lt;design name&gt;_RenComp_&lt;layer name&gt;_&lt;attribute name&gt;_&lt;view&gt;”. Names of multiple attributes may be encoded within such a filename. Note that “&lt;view&gt;” represents the type of view of the layer rendering, such as front, side, or top.
  • the filename “Design_RenComp_CS_E_P” may be used to store a file containing a rendering of a layer containing an emerald (“E” in the filename) selected for the center stone layer (“CS” in the filename), rendered in perspective view (“P” in the filename).
  • the filename “Design_RenComp_SM_RG_P” may be used to store a file containing a rendering of a layer containing rose gold (“RG” in the filename) selected for the shank metal layer (“SM” in the filename), also rendered in perspective view (“P” in the filename). This encoding scheme may be used to facilitate combining the 2D layer renderings 504 into the final object rendering 710 .
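  • As a sketch, the layer-rendering filename scheme might be implemented as follows; the separator used when joining multiple attribute names is an assumption, since the format above does not specify one:

```python
def layer_rendering_filename(design, layer, attributes, view):
    """Encode a layer rendering's identity in its filename:
    <design name>_RenComp_<layer name>_<attribute name(s)>_<view>.
    Multiple attribute names are joined with '_' (an assumption)."""
    return "_".join([design, "RenComp", layer, "_".join(attributes), view])

# Emerald ("E") center stone layer ("CS"), perspective view ("P"):
cs_name = layer_rendering_filename("Design", "CS", ["E"], "P")
# Rose gold ("RG") shank metal layer ("SM"), perspective view:
sm_name = layer_rendering_filename("Design", "SM", ["RG"], "P")
```

This reproduces the two example filenames above, “Design_RenComp_CS_E_P” and “Design_RenComp_SM_RG_P”.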
  • the final object rendering 710 may be stored in a file having a filename which encodes information about which layers are represented in the final object rendering 710 .
  • a filename of the following form may be used: “&lt;design name&gt;_Ren_&lt;shank&gt;-&lt;shank2&gt;-&lt;center stone metal&gt;-&lt;side stone metal&gt;_&lt;center stone type&gt;-&lt;primary side stone type&gt;&lt;secondary side stone type&gt;_&lt;view&gt;”.
  • the filename “Design_Ren_YG--YG-_E-DE_P” may be used for a ring in which a yellow gold shank (“YG” in the filename) is combined with an Emerald center stone (“E” in the filename) with Diamond primary side stones and Emerald secondary side stones (“DE” in the filename) in a perspective view (“P” in the filename).
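  • A sketch of the final-rendering filename encoding, which reproduces the example filename above; the slot-joining logic is inferred from that single example, with unused slots left empty:

```python
def final_rendering_filename(design, shank, shank2, cs_metal, ss_metal,
                             cs_type, primary_ss, secondary_ss, view):
    """Encode the layers represented in a final object rendering. Empty
    slots produce runs of '-' as in the example filename."""
    metals = "-".join([shank, shank2, cs_metal, ss_metal])
    stones = cs_type + "-" + primary_ss + secondary_ss
    return "_".join([design, "Ren", metals, stones, view])

# Yellow gold shank and center stone metal, emerald center stone,
# diamond primary / emerald secondary side stones, perspective view:
name = final_rendering_filename("Design", "YG", "", "YG", "", "E", "D", "E", "P")
```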
  • the object model 200 may be used for other purposes.
  • One example is to calculate the price of a particular customized object (i.e., a personalized object reflecting a particular combination of attribute values). Such calculation may be performed, for example, by providing the attribute values of the object model 200 to a price calculation engine, which may use the attribute values (possibly in combination with information such as the current cost of particular types of gemstones, markup amounts, and discounts) to calculate the price of the entire personalized object.
  • Metadata, such as metadata 212, may be used in addition to, or instead of, the object model's attribute values to perform the price calculation.
  • Pricing for a particular component may be determined in any way, such as by calculating the price based on features (e.g., size, material) of the component, or simply by looking up the price of the component (such as by using the component's SKU as an index into a database). However the price is calculated, the resulting price may be displayed to the user as part of or in addition to the final object rendering 710 . As a result, the consumer may select a particular set of attribute values for each layer, and in response immediately see a photo-realistic rendering of the object along with its associated price.
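  • For illustration, a minimal price-calculation engine might sum per-component prices looked up by layer and attribute value, then apply markup and discount; every price, markup, and discount below is invented for the example:

```python
# Hypothetical per-component prices; a real system might instead look
# these up by SKU in a database.
COMPONENT_PRICES = {
    ("shank", "yellow gold"): 250.00,
    ("center stone", "emerald"): 400.00,
    ("side stones", "diamond"): 300.00,
}

def price_object(attribute_values, markup=1.25, discount=0.0):
    """Sum component prices for the selected attribute values, then
    apply markup and discount (illustrative figures only)."""
    base = sum(COMPONENT_PRICES[(layer, value)]
               for layer, value in attribute_values.items())
    return round(base * markup * (1.0 - discount), 2)

selection = {"shank": "yellow gold",
             "center stone": "emerald",
             "side stones": "diamond"}
price = price_object(selection, markup=1.25, discount=0.10)
```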
  • the server could perform the layer pre-rendering process 400 of FIG. 4 a single time. Then, when a user at one of the clients requests a personalized object having a particular combination of attribute values, the client could transmit the attribute values to the server over a network.
  • the server could transmit back to the client, over the network, the pre-rendered layer renderings corresponding to the selected attribute values.
  • the client could then perform the layer-combining process 800 of FIG. 8 to produce the final rendering of the personalized object, having the particular combination of attribute values selected by the user.
  • alternatively, the server could perform the layer-combining process 800 of FIG. 8 to produce the final rendering of the personalized object, having the particular combination of attribute values selected by the user.
  • the server could then transmit the personalized object rendering back to the client over a network.
  • the client could then simply display the personalized object rendering to the user.
  • the server may perform a one-time transmission of all of the layer renderings 504 to each of the clients. Then, when a user at a particular client makes a request for a particular personalized rendering having a particular combination of attribute values, the client may perform the layer-combining process 800 of FIG. 8 without the need to make a trip to the server.
  • the client computer need not perform the computationally-intensive layer rendering process 400 of FIG. 4 .
  • the client computer may be a relatively low-end computer, such as the kind typically used by home computer users, having a conventional web browsing client but lacking the CAD software and other software necessary to perform the layer-rendering process 400 of FIG. 4 .
  • the personalized rendering may be cached so that it may be displayed in response to subsequent requests for the same combination of attribute values, without needing to re-perform the layer-combining method 800 of FIG. 8 . If personalized views are created at the server, then such caching may be performed at the server. Additionally or alternatively, the server may transmit such personalized views to one or more of the clients so that subsequent requests at those clients may be serviced quickly, without the need for a trip to the server.
  • each client may transmit any personalized views it generates back to the server, so that subsequent requests made by the same or other clients may be serviced without the need to re-generate the same view.
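  • Caching personalized renderings by their attribute-value combination might be sketched as follows; `combine_layers` is a stand-in for the layer-combining process 800, and all names are hypothetical:

```python
render_calls = 0
_cache = {}

def combine_layers(attribute_values):
    """Stand-in for the layer-combining process; returns a fake rendering."""
    global render_calls
    render_calls += 1
    return "rendering:" + ",".join(
        f"{k}={v}" for k, v in sorted(attribute_values.items()))

def personalized_rendering(attribute_values):
    # A frozenset of items makes the cache key order-independent, so the
    # same combination of attribute values always hits the same entry.
    key = frozenset(attribute_values.items())
    if key not in _cache:
        _cache[key] = combine_layers(attribute_values)
    return _cache[key]

first  = personalized_rendering({"shank": "YG", "center stone": "E"})
repeat = personalized_rendering({"center stone": "E", "shank": "YG"})
# The repeat request is served from the cache without re-combining layers.
```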
  • certain personalized object views representing certain combinations of attribute values may be pre-generated into complete object renderings so that such renderings are ready to display immediately to users upon selection of those combinations of attribute values, without the need to perform the layer-rendering process 400 of FIG. 4 or the layer-combining process 800 of FIG. 8 .
  • the particular set of personalized object views to pre-render may be selected in any way. For example, certain attribute value combinations which are known or suspected to be highly desirable, such as white gold metal and diamond stone for use in a wedding ring, may be pre-rendered into final object renderings. When a user selects any such combination of attribute values, the corresponding object rendering may be displayed to the user immediately, merely by displaying the pre-generated object rendering to the user.
  • Combinations of attribute values to pre-render may also be selected, for example, using rules.
  • a particular rule might apply to a particular kind of jewelry or a particular model of ring.
  • an “engagement ring” rule might specify that it is preferred for engagement rings to have diamonds as the stone, and that certain colors should not be combined with certain other colors within an engagement ring.
  • Such a rule may then be used to automatically pre-render all component combinations which satisfy the rule.
  • Such pre-renderings may, for example, be generated at the merchant's site before deploying the system for use by users.
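  • Rule-driven selection of combinations to pre-render can be sketched by filtering the Cartesian product of attribute values through a predicate; the specific rule below (diamond stones only, rose gold excluded) is a hypothetical stand-in for an “engagement ring” rule:

```python
from itertools import product

METALS = ["yellow gold", "white gold", "rose gold"]
STONES = ["diamond", "emerald", "ruby"]

def engagement_ring_rule(metal, stone):
    """Hypothetical rule: engagement rings prefer diamond stones, and
    rose gold is (purely for illustration) excluded."""
    return stone == "diamond" and metal != "rose gold"

# Pre-render every attribute-value combination that satisfies the rule.
to_prerender = [(m, s) for m, s in product(METALS, STONES)
                if engagement_ring_rule(m, s)]
```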
  • pre-rendered combinations may be produced in any of a variety of ways. For example, they may be produced by rendering the entire object as a single scene, based on the individual attribute values selected by the user. As another example, pre-rendered combinations may be produced by combining together existing pre-renderings of the individual components selected by the user, using process 800 . The latter technique may be used to significantly reduce the amount of time necessary to produce popular pre-renderings.
  • a search facility may be provided through which the user may search for particular component combinations.
  • Search may be conducted in two ways: static search and dynamic search.
  • in a static search, only those combinations which have already been pre-rendered may be available for searching. Therefore, initially only the combinations which were pre-selected for pre-rendering when the system was initialized may be available for searching.
  • over time, users select particular combinations of components with particular attributes, also referred to herein as “particular variations”.
  • the renderings of such particular variations may be saved and added to the store of particular variations which are available for searching.
  • in a dynamic search, the system interrogates all attributes of an object to determine whether or not a component combination satisfies the search criteria. If the component combination matches the search criteria via attribute interrogation and the corresponding object does not yet exist, the object is created dynamically and returned in the search results. Note that dynamic search incurs more performance overhead than static search.
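  • The static/dynamic distinction might be sketched as follows, with a set of pre-rendered combinations standing in for the rendering store (all values hypothetical):

```python
prerendered = {("YG", "diamond"), ("WG", "diamond")}

def static_search(criteria):
    """Static search: only already pre-rendered combinations are searchable."""
    return [c for c in prerendered if criteria(c)]

def dynamic_search(criteria, all_combinations):
    """Dynamic search: interrogate every combination's attributes; any
    match that does not yet exist is created, then included in the results."""
    results = []
    for combo in all_combinations:
        if criteria(combo):
            if combo not in prerendered:
                prerendered.add(combo)   # stand-in for dynamic rendering
            results.append(combo)
    return results

is_emerald = lambda c: c[1] == "emerald"
static_hits = static_search(is_emerald)   # nothing pre-rendered matches
found = dynamic_search(is_emerald, [("YG", "emerald"), ("WG", "diamond")])
```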
  • data processed by embodiments of the present invention may be stored in any form.
  • three-dimensional design data may be stored in CAD files, which may be subdivided into any number of files, such as one file per design, one file per layer, or one file per component.
  • Meta-data, such as information about the type and number of components in a design, may be stored in the same file as the design data itself or in a separate file.
  • the method 400 uses all combinations of values of all attributes of each layer L to render the layer L, this is not a requirement of the present invention. Rather, for example, only a subset of the layer's attribute types may be used to render the layer. As another example, the method 400 may produce layer renderings for fewer than all possible values of an attribute. If a user subsequently requests a combination of attribute values for which not all required layer renderings were previously produced, any needed layer renderings may be produced in response to such a request, and then used to produce a final object rendering using the method 800 of FIG. 8 . Alternatively, for example, a layer rendering representing attribute values which are closest to those requested by the user may be selected, and then used to produce a final object rendering using the method 800 of FIG. 8 , thereby avoiding the need to produce additional layer renderings.
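  • The closest-value fallback might, for a numeric attribute such as size, be as simple as a nearest-neighbor lookup over the values already rendered (the sizes below are hypothetical):

```python
def closest_rendered_value(requested, rendered_values):
    """Fall back to the nearest attribute value for which a layer
    rendering already exists, avoiding an additional rendering pass."""
    return min(rendered_values, key=lambda v: abs(v - requested))

# Only sizes 2.0, 4.0 and 8.0 mm were pre-rendered; a request for
# 5.1 mm is served with the 4.0 mm layer rendering instead.
fallback = closest_rendered_value(5.1, [2.0, 4.0, 8.0])
```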
  • an online retail web site may allow a user to select any one of a plurality of objects, such as any one of a plurality of items of jewelry, and then customize the selected object for purchase using the techniques disclosed herein.
  • the techniques described above may be implemented, for example, in hardware, software tangibly stored on a computer-readable medium, firmware, or any combination thereof.
  • the techniques described above may be implemented in one or more computer programs executing on a programmable computer including a processor, a storage medium readable by the processor (including, for example, volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
  • Program code may be applied to input entered using the input device to perform the functions described and to generate output.
  • the output may be provided to one or more output devices.
  • Each computer program within the scope of the claims below may be implemented in any programming language, such as assembly language, machine language, a high-level procedural programming language, or an object-oriented programming language.
  • the programming language may, for example, be a compiled or interpreted programming language.
  • Each such computer program may be implemented in a computer program product tangibly embodied in a machine-readable storage device for execution by a computer processor.
  • Method steps of the invention may be performed by a computer processor executing a program tangibly embodied on a computer-readable medium to perform functions of the invention by operating on input and generating output.
  • Suitable processors include, by way of example, both general and special purpose microprocessors.
  • the processor receives instructions and data from a read-only memory and/or a random access memory.
  • Storage devices suitable for tangibly embodying computer program instructions include, for example, all forms of non-volatile memory, such as semiconductor memory devices, including EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROMs. Any of the foregoing may be supplemented by, or incorporated in, specially-designed ASICs (application-specific integrated circuits) or FPGAs (Field-Programmable Gate Arrays).
  • a computer can generally also receive programs and data from a storage medium such as an internal disk (not shown) or a removable disk.

Abstract

A computer system includes a three-dimensional model of an object such as a piece of jewelry. The model is divided into multiple layers, each of which contains one or more components of the object. Each layer is associated with one or more attribute types, each of which is associated with a corresponding plurality of possible attribute values. The system pre-renders each layer with each possible attribute type and each possible attribute value for that type and layer. The resulting layer renderings may be combined with each other to produce personalized renderings of the entire object without the need to pre-render all possible combinations of attribute values. Responsibility for rendering the layers and the final complete object personalization may be divided between client and server in a variety of ways to increase efficiency.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 13/111,773, filed on May 19, 2011, entitled, “Layered Personalization,” which is hereby incorporated by reference herein; which is a continuation of U.S. patent application Ser. No. 12/684,103, filed on Jan. 7, 2010, entitled, “Layered Personalization,” which is hereby incorporated by reference herein; which claims the benefit of U.S. Prov. Pat. App. Ser. No. 61/152,549, filed on Feb. 13, 2009, entitled, “Layered Personalization,” which is hereby incorporated by reference herein; and U.S. Prov. Pat. App. Ser. No. 61/230,192, filed on Jul. 31, 2009, entitled, “Layered Personalization,” which is hereby incorporated by reference herein.
  • BACKGROUND
  • Customers are increasingly demanding personal control over the products they purchase. For example, for many years computer retailers have provided consumers with the ability to specify the precise components of the computers they wish to purchase. In response to a particular customer's custom order, the retailer manufactures a single computer having the components specified by the customer, and then ships the custom-built computer to the consumer. This is an early example of what has now come to be known as “mass customization”—the manufacture and sale of highly-customizable mass-produced products, in quantities as small as one. Mass customization is now spreading to a wider and wider variety of products.
  • Purchasers of computers are primarily interested in the internal functionality of the computers they purchase, not their external appearance. Therefore, it is relatively unimportant for a purchaser of a computer to see what a customized computer will look like before completing the purchase.
  • This is not true, however, for many other products, such as jewelry, for which aesthetics are a primary component of the consumer's purchasing decision. Traditionally, product catalogs and web sites have been able to provide consumers with high-quality images of products offered because such products have not been customizable. Therefore, traditionally it has been sufficient to provide consumers with a single image of a non-customizable product before purchase. Even when products have been customizable, they have not been highly customizable. For example, in some cases it has been possible to select the product's color from among a small selection of offered colors. In this case, traditional catalogs and web sites might either display a single image of a product, alongside a palette of colors, or instead display separate images of the product, one in each color.
  • Such techniques may be sufficient for non-customizable products or for products with very limited customizability. Such techniques are not, however, sufficient to convey to the consumer an accurate understanding of the appearance of a highly customizable product before the consumer finalizes the purchase decision. If the final appearance of the product is particularly important to the consumer, this inability to view an accurate representation of the final product, reflecting all customizations, may make the consumer unwilling to purchase such a product.
  • Although one way to enable the consumer to view customized versions of a product for evaluation before purchase is to provide the consumer's computer with software for rendering any possible customized version of the product, doing so using existing techniques would require equipping each consumer's computer with powerful CAD software which is capable of producing realistic two-dimensional renderings of the product based on a three-dimensional CAD model. Few, if any, consumers would be willing to incur this cost and expense.
  • What is needed, therefore, are improved techniques for quickly generating and displaying a wide range of high-quality images of highly-customizable products.
  • SUMMARY
  • A computer system includes a three-dimensional model of an object such as a piece of jewelry. The model is divided into multiple layers, each of which contains one or more components of the object. Each layer is associated with one or more attribute types, each of which is associated with a corresponding plurality of possible attribute values. The system pre-renders each layer with each possible attribute type and each possible attribute value for that type and layer. The resulting layer renderings may be combined with each other to produce personalized renderings of the entire object without the need to pre-render all possible combinations of attribute values. Responsibility for rendering the layers and the final complete object personalization may be divided between client and server in a variety of ways to increase efficiency.
  • For example, in one embodiment of the present invention, a computer-implemented method is used in conjunction with a three-dimensional computer model of an object. The model includes a plurality of layers, wherein each of the plurality of layers includes at least one corresponding component in the model. Each of the plurality of layers is associated with at least one attribute. The method includes: (A) rendering each of the plurality of layers with each of a plurality of values of the at least one attribute to produce a plurality of layer renderings; (B) receiving a first request for a first rendering of a personalized object specifying a plurality of attribute values; (C) selecting, from among the plurality of layer renderings, a subset of layer renderings corresponding to the specified plurality of attribute values; and (D) combining the selected subset of layer renderings to produce the first rendering of the personalized object.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows a two-dimensional rendering of a three-dimensional model of a ring according to one embodiment of the present invention;
  • FIG. 2 is a diagram of an object model representing an object, such as a ring, according to one embodiment of the present invention;
  • FIG. 3 shows renderings of various layers of an object model using different attribute values according to one embodiment of the present invention;
  • FIG. 4 is a flow chart of a method for creating renderings of layers of an object according to one embodiment of the present invention;
  • FIG. 5 is a dataflow diagram of a system for performing the method of FIG. 4 according to one embodiment of the present invention;
  • FIG. 6 illustrates an example of combining renderings of four layers to produce a customized view of an object according to one embodiment of the present invention;
  • FIG. 7 is a dataflow diagram of a system for combining renderings of layers of an object to produce a rendering of the object as a whole according to one embodiment of the present invention;
  • FIG. 8 is a flowchart of a method performed by the system of FIG. 7 according to one embodiment of the present invention;
  • FIG. 9 illustrates the use of a reference object to indicate the scale of a rendered object according to one embodiment of the present invention;
  • FIG. 10 illustrates a “fly-by” view of an object according to one embodiment of the present invention;
  • FIG. 11 illustrates combining a ground plane containing shadows with a rendering of a layer of an object according to one embodiment of the present invention; and
  • FIGS. 12A-D illustrate combining renderings of variable-shaped components with renderings of fixed-shape components according to embodiments of the present invention.
  • DETAILED DESCRIPTION
  • Embodiments of the present invention are directed to a method for efficiently generating componentized 2D (2 dimensional) rasterized views of an object, such as a ring or other piece of jewelry, from a 3D (3 dimensional) model of the object. A 3D CAD (Computer Aided Design) model is used to represent a complete 3D geometry of the object. The object is decomposed into components or parts that can be personalized on demand.
  • For example, a ring may have a shank, center stone, side stones, and associated settings. To personalize the ring a user may want to change the type of center and side stones, or the metal types of the shank, center stone, and side stone settings. Embodiments of the present invention personalize components of the ring or other object by structuring, labeling, and processing a 3D CAD model of the object to generate a tractable set of 2D views that can be combined on demand into a large combinatorial set of photorealistic, personalized object views.
  • More specifically, in accordance with embodiments of the present invention, a designer or other user may create a 3D model of an object, such as by using standard CAD software. Referring to FIG. 1, an example is shown of a two-dimensional rendering of a three-dimensional model 100 of an object, a ring in this example. Referring to FIG. 2, a diagram is illustrated of an object model 200 representing an object, such as a ring.
  • The particular ring object rendering 100 shown in FIG. 1 has seven components 102 a-g: a shank 102 a, center stone setting metal 102 b, center stone 102 c, a first pair of side stones 102 d-e, and a second pair of side stones 102 f-g. Similarly, the corresponding object model 200 shown in FIG. 2 contains components 202 a-g, which correspond to the components 102 a-g in the rendering 100 of FIG. 1. Although the particular object model 200 shown in FIG. 2 contains seven components 202 a-g, this is merely an example; object models may contain any number of components.
  • The components in a particular object model may be selected in any manner. For example, the model may be decomposed into components that are relevant for a particular domain, such as personalization by a customer through a web site. Components may, however, be selected from within the CAD model in any manner.
  • Components 202 a-g in the object model 200 may be grouped into m layers that may represent domain relevant characteristics of the object. Although the example object model 200 shown in FIG. 2 contains four layers 204 a-d (i.e., m=4), object models may contain any number of layers, each of which may contain any number of components.
  • In the example object model 200 shown in FIG. 2, layer 204 a contains components 202 a-b, layer 204 b contains component 202 c, layer 204 c contains components 202 d-e, and layer 204 d contains components 202 f-g. Similarly, the rendering 100 of the object model 200 may be divided into layers 104 a-d, where layer 104 a contains shank component 102 a and center stone setting metal component 102 b, layer 104 b contains center stone component 102 c, layer 104 c contains first side stone components 102 d-e, and layer 104 d contains second side stone components 102 f-g.
  • Although components may be grouped within layers in any way, it may be particularly useful to group similar components together within a single layer. For example, layer 104 c contains multiple side stones 102 d-e, to facilitate personalization of all of the side stones 102 d-e in the layer 104 c simultaneously. As another example, if a ring were to contain 100 side stones, those side stones might be grouped into two layers of 50 stones each, so that the user could independently select features (such as stone types) for the two sub-sets independently. These are merely examples of ways in which components may be grouped into layers and do not constitute limitations of the present invention.
  • Features in existing CAD software may be used to facilitate the process of creating and managing layers. For example, many existing CAD packages allow the user to organize different components of a CAD model into custom-named groups (i.e. Metal 01, Gem 01, etc.). Such custom-named groups may be created and used to represent layers in the object model. Components may be added to the groups in order to add such components to layers in the object model. The attributes for each layer may be loaded into the CAD system so that the CAD system may apply any applicable attribute to components in any particular layer.
  • Each of the layers 204 a-d in the object model 200 may have n attributes that describe physical properties of the object. In the example shown in FIG. 2, component 202 a has two attributes 206 a-b. Each of the attributes 206 a-b has a type and a value (attribute 206 a has type 208 a and value 208 b; attribute 206 b has type 210 a and value 210 b). Examples of attribute types include, but are not limited to, color, material type (e.g., type of metal or stone), shape, size, and finish. Each attribute type may have a corresponding permissible set or range of attribute values. For example, an attribute with a type of “metal type” may have permissible values such as “gold” and “silver,” while an attribute with a type of “size” may have permissible values which are floating point numbers ranging from 1 mm to 500 mm. Each attribute may have any number of permissible attribute values.
  • For ease of illustration only the attributes of component 202 a are shown in FIG. 2. It should be assumed, however, that components 202 b-g have their own attributes, although they are not shown in FIG. 2.
  • Each component may have any number of attributes. In other words, the value of n may vary from component to component. In certain examples provided herein, attributes are associated with entire layers rather than individual components, in which case the attribute types and values associated with a particular layer are applied to all components within that layer. In this case, the value of n may vary from layer to layer.
  • FIG. 3 illustrates a simplified example in which each of the layers 104 a-d from FIG. 1 has exactly one attribute, each of which has four possible values. In particular, row 302 a illustrates four renderings 304 a-d of layer 104 a, representing four possible values of a “metal color” attribute; row 302 b illustrates four renderings 306 a-d of layer 104 d, representing four possible values of a “stone color” attribute; row 302 c illustrates four renderings 308 a-d of layer 104 c, representing four possible values of a “stone color” attribute; and row 302 d illustrates four renderings 310 a-d of layer 104 b, representing four possible values of a “stone color” attribute.
  • Although the examples shown in FIGS. 1-3 illustrate an object which has a fixed number of layers, this is not a requirement of the present invention. Alternatively, the number of layers m may be variable for a particular object. For example, each layer in an object representing a bracelet may represent a distinct chain in the bracelet. The user may be allowed to add and/or subtract chains from the bracelet, thereby adding and/or subtracting layers from the object model representing the bracelet. As another example, an earring may have a variable number of posts, corresponding to a variable number of layers. The use of variable layers is particularly useful for representing accessories in jewelry but may be used for any purpose. Certain layers within an object may be designated as required, in which case they may not be removed from the underlying object model. Other restrictions may be placed on layers, such as a maximum number of additional layers which may be added to a particular object model.
  • Once an object model, such as the object model 200 shown in FIG. 2, exists, embodiments of the present invention may render a set of 2D views of layers of the object model having all permissible attribute values. One embodiment of a method 400 for creating such renderings is shown in FIG. 4. FIG. 5 illustrates one embodiment of a system 500 which performs the method 400 of FIG. 4.
  • The system 500 includes a rendering engine 502 which enters a loop over each layer L in the object model 200 (step 402). Within this loop, the rendering engine 502 enters a loop over each possible combination A of values of attributes in layer L (step 404). The number of such combinations is equal to the product of the numbers of possible attribute values for each attribute type in layer L.
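  • For a hypothetical layer with a four-value metal attribute and a three-value finish attribute, the combinations enumerated by this inner loop are the Cartesian product of the per-attribute value sets:

```python
from itertools import product

# Hypothetical layer with two attribute types: 4 metals x 3 finishes.
metals   = ["yellow gold", "white gold", "rose gold", "silver"]
finishes = ["polished", "matte", "hammered"]

# Each element is one combination A of attribute values for the layer.
combinations = list(product(metals, finishes))
```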
  • The method 400 applies the current combination of attribute values A to all components within the current layer L and renders the resulting components to produce a two-dimensional rendering of layer L (step 406). The rendering engine 502 may render each layer L in any way, such as by using commercially available ray tracing software (e.g., VRay) by defining properties for physical materials (e.g., metal, gemstones) to produce “true-to-life” photo-realistic imagery.
  • Although the final rendering for each layer may represent only objects in that layer, when the rendering engine 502 renders a particular layer L, it may render not only components in layer L, but also components in other layers, to make the final rendering of layer L more realistic. For example, to produce a rendering of a particular layer L, the rendering engine 502 may first render the entire modeled object, so that any effects of other layers on the current layer L may be reflected in the rendering. Representations of components in layers other than the current layer L may then be removed from the rendering of layer L, to produce the final rendering for layer L which is stored in the layer renderings 504. This may be accomplished, for example, through use of the alpha channel, which allows objects to be present in the scene and so affect light reflections, refractions, shadows, etc. without being saved in the final image file.
  • The rendering engine 502 repeats step 406 for all remaining combinations of attribute values within layer L (step 408). The rendering engine 502 repeats steps 404-408 for the remaining layers in the object model (step 410). As a result of this process 400, a separate two-dimensional rendering is produced for each possible combination of attribute values within each layer. For example, in the case of the object model 200 shown in FIG. 2, which contains four layers 204 a-d, the rendering engine 502 produces layer renderings 504, which contains a set of renderings 302 a of layer 204 a, a set of renderings 302 b of layer 204 b, a set of renderings 302 c of layer 204 c, and a set of renderings 302 d of layer 204 d.
  • The resulting 2D renderings 504 may be stored in any form, such as in individual image files on a hard disk drive or other storage medium. Information about the attributes and other data associated with the layer renderings 504 may also be stored in any form. Such data may, for example, be stored in the same files as those which contain the renderings 504, or in separate files.
  • Not every attribute value results in a rendering distinct from those of other attribute values. For example, changing a certain attribute value of a layer may merely affect the price of the components in the layer, without affecting how the components in that layer are rendered. In other words, two distinct values of a particular attribute may result in the same rendering of the corresponding layer. In this case, it is not necessary to create separate, redundant renderings of the layer for both attribute values. Instead, a single rendering may be used to represent both attribute values.
  • Such redundant renderings may be eliminated in any of a variety of ways. For example, all renderings may first be produced using the method 400 of FIG. 4. Redundant renderings may then be identified and consolidated, such that each set of two or more redundant renderings is reduced to a single representative rendering. When any of the renderings in the set is required for use in rendering the entire object, the representative rendering may be used.
  • Alternatively, for example, redundancies may be identified before the redundant renderings are produced. For example, in step 406, the method 400 may determine whether rendering the components in the current layer L using the current combination of attribute values A will produce a rendering that has already been produced by the method 400. If so, the method 400 may refrain from producing the rendering again, and instead store a pointer or other record indicating that the previously-generated rendering should be used whenever a rendering of layer L using attribute values A is needed.
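One possible implementation of this skip-and-point strategy is a cache keyed only by the attributes that affect a layer's appearance; attributes that influence only price or other metadata are excluded from the key, so visually identical combinations share one stored rendering. The names (`make_memoized_renderer`, `visual_attrs`) are assumptions for illustration — the patent requires only "a pointer or other record" to the earlier rendering.

```python
def make_memoized_renderer(render_layer, visual_attrs):
    """Wrap a renderer so visually redundant combinations are rendered once.

    `visual_attrs` maps a layer name to the subset of its attributes that
    actually change the rendering; all other attributes are ignored when
    building the cache key.
    """
    cache = {}
    calls = []  # record of real render invocations, for illustration

    def render(layer, values):
        key = (layer, tuple(sorted(
            (k, v) for k, v in values.items() if k in visual_attrs[layer])))
        if key not in cache:          # only render unseen visual combinations
            calls.append(key)
            cache[key] = render_layer(layer, values)
        return cache[key]             # pointer to the shared rendering

    return render, calls

render, calls = make_memoized_renderer(
    lambda layer, values: f"rendering of {layer}",
    visual_attrs={"shank": {"metal"}})

# "warranty" does not affect appearance, so both calls share one rendering.
a = render("shank", {"metal": "gold", "warranty": "1yr"})
b = render("shank", {"metal": "gold", "warranty": "2yr"})
```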
  • The object model 200 may include data about the components 202 a-g in addition to the attributes which are used to create the layer renderings. An example of such metadata 212, associated with component 202 a, is shown in FIG. 2. Such metadata 212 need not be treated by method 400 as an attribute for purposes of generating the possible combinations of attribute values in step 404. More generally, the metadata 212 need not be used by the method 400 at all in generating the layer renderings in step 406. Examples of such metadata include prices and SKUs of components. Although only metadata 212, associated with component 202 a, is shown in FIG. 2 for purposes of example, any kind and amount of metadata may be associated with any of the components 202 a-g in the object model 200. Additionally or alternatively, metadata may be associated with one or more of the layers 204 a-d, or with the object model 200 as a whole. Metadata may be assigned automatically and/or manually by a user.
  • The two-dimensional renderings 504 of different layers 204 a-d, once produced, may be combined with each other in any combination to form a large number of personalized views of the entire modeled object. Referring to FIG. 7, a system 700 is shown for creating such a rendering of the entire modeled object according to one embodiment of the present invention. Referring to FIG. 8, a flowchart is shown of a method 800 performed by the system 700 of FIG. 7 according to one embodiment of the present invention. The system 700 includes a layer rendering selector 702 which selects one rendering from each of the sets 302 a-d (FIG. 5) of layer renderings 504 to produce a set of selected layer renderings 704 (FIG. 8, step 802). In the example shown in FIG. 7, the selected layer renderings 704 include rendering 706 a from layer 204 a, rendering 706 b from layer 204 b, rendering 706 c from layer 204 c, and rendering 706 d from layer 204 d. A layer rendering combiner 708 combines the selected layer renderings 704 together to form a two-dimensional rendering 710 of the entire modeled object (FIG. 8, step 804). The object rendering 710, like the individual layer renderings 504, may be represented and stored as a raster image rather than as a three-dimensional model.
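The patent does not prescribe a specific compositing method for the layer rendering combiner 708, but given the alpha-channel discussion above, stacking the selected layer renderings with a per-pixel "over" operator is one plausible sketch. The single-pixel model below (RGBA channels in 0.0-1.0) is an assumption for illustration; a real implementation would apply the same arithmetic across whole raster images.

```python
def over(top, bottom):
    """Composite one RGBA pixel over another (the Porter-Duff "over"
    operator), with all channels in the range 0.0-1.0."""
    tr, tg, tb, ta = top
    br, bg, bb, ba = bottom
    a = ta + ba * (1 - ta)                       # combined alpha
    if a == 0:
        return (0.0, 0.0, 0.0, 0.0)              # fully transparent result
    blend = lambda t, b: (t * ta + b * ba * (1 - ta)) / a
    return (blend(tr, br), blend(tg, bg), blend(tb, bb), a)

def combine_layers(pixels):
    """Composite a stack of single-pixel 'layer renderings', bottom first,
    starting from a fully transparent background."""
    out = (0.0, 0.0, 0.0, 0.0)
    for px in pixels:
        out = over(px, out)
    return out

# An opaque top layer completely hides the layers beneath it.
blue_opaque = (0.0, 0.0, 1.0, 1.0)
red_opaque = (1.0, 0.0, 0.0, 1.0)
result = combine_layers([blue_opaque, red_opaque])
```

Transparent regions of a layer rendering (alpha of 0) leave the layers beneath visible, which is what allows each stored layer rendering to contain only its own components.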
  • FIG. 6 illustrates an example in which rendering 304 c is selected from layer renderings 302 a (Layer 1); rendering 306 b is selected from layer renderings 302 b (Layer 2); rendering 308 a is selected from layer renderings 302 c (Layer 3); and rendering 310 c is selected from layer renderings 302 d (Layer 4). In this example, layer renderings 304 c, 306 b, 308 a, and 310 c represent the selected layer renderings 706 a, 706 b, 706 c, and 706 d, respectively. These renderings 304 c, 306 b, 308 a, and 310 c are combined together to form a rendering 600 of the entire object modeled by the object model 200, representing a particular combination of attribute values. The rendering 600 in FIG. 6 is an example of the object rendering 710 in FIG. 7.
  • To appreciate the benefits of using a combination of the method 400 (FIG. 4) and the method 800 (FIG. 8) to produce object renderings, such as the object rendering 600 shown in FIG. 6, consider a ring with a single shank, a single center stone, a center stone setting metal, and 100 side stones, for a total of 103 components (1 shank, 1 center stone, 1 center stone setting metal, and 100 side stones). The ring components may, for example, be assigned to 5 layers, where layer 1 is the shank, layer 2 is the center stone setting, layer 3 is the center stone, layer 4 is 50 alternating side stones, and layer 5 is the other 50 alternating side stones. Suppose the shank layer has a "metal type" attribute with 10 possible values (representing 10 possible types of metal), the center stone setting has a "metal type" attribute with 10 possible values, the center stone layer has a "gemstone type" attribute with 21 possible values, and each of the two side stone layers has its own "gemstone type" attribute with 21 possible values. In this case a total of only 83 2D views (10+10+21+21+21) need to be rendered by method 400 to produce the layer renderings 504 shown in FIG. 5. This small number of renderings, however, may be combined into 926,100 (10×10×21×21×21) possible permutations, or "personalized" views.
  • One advantage of embodiments of the present invention, therefore, is that they may be used to produce a very large number of personalized object renderings by rendering only a very small number of renderings of layers of the object. This is important because the process 400 used in FIG. 4 to create each individual layer rendering—producing realistic, two-dimensional rasterized images of the layers from a three-dimensional CAD model—is resource intensive, requiring significant computer processing resources or significant time to perform. In contrast, the process 800 used in FIG. 8 to combine existing rasterized layer renderings together to produce a rasterized image of the entire modeled object is computationally inexpensive.
  • Embodiments of the present invention only need to perform the computationally-expensive process 400 of FIG. 4 a single time, to produce a relatively small number of layer renderings 504. In a distributed computing environment, this process 400 may, for example, be performed on a computer having significant computing resources, such as a server, graphics workstation, or cluster of such computers. Then, once the layer renderings 504 have been produced, any number of realistic raster images of the entire object may be produced quickly, any number of times, by less-powerful computers using the method 800 of FIG. 8. Embodiments of the present invention, therefore, provide significant increases in efficiency of generating realistic images of customized objects, without any loss of quality of such images, in comparison to previously-used techniques.
  • More specifically, if n_i^k refers to the number of possible values of the kth attribute on layer i, and m refers to the total number of layers, then using this method, in general a total of Σ_{i=1}^{m} Π_k n_i^k 2D views would need to be rendered to produce Π_{i=1}^{m} Π_k n_i^k possible personalized views. This represents a significant reduction in the number of renderings that need to be performed to produce all possible personalized views of the entire object.
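These counts can be checked numerically with a short helper. The function name `rendering_counts` is hypothetical, and Python's `math.prod` (available since 3.8) is assumed; the input encodes each layer as a list of its attribute-value counts (the n_i^k of the text).

```python
from math import prod

def rendering_counts(layers):
    """Return (2D views to render, personalized views obtainable).

    `layers` is a list of lists: for each layer, the count of possible
    values of each of its attributes.  The renderings needed for one
    layer are the product over its attributes; the totals then follow
    the sum and product formulas in the text.
    """
    views = sum(prod(attr_counts) for attr_counts in layers)
    permutations = prod(prod(attr_counts) for attr_counts in layers)
    return views, permutations

# The five-layer ring example, with one attribute per layer:
views, permutations = rendering_counts([[10], [10], [21], [21], [21]])
# 83 layer renderings yield 926,100 personalized views
```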
  • Embodiments of the present invention may be used to display information other than renderings of the modeled object 200. For example, it can be useful when presenting 3D models in a 2D view to show physical scale with respect to a common reference object. For example, a dime or other coin may be used as a common reference object for jewelry models. FIG. 9 illustrates such an example, in which a rendering 100 of a ring is combined with a rendering 902 of a dime to produce a rendering 904 which shows both the ring and the dime, rendered to scale.
  • The combined rendering 904 may, for example, be produced by combining together rasterized images of the ring and dime. For example, renderings may be created of the reference object at various spatial views, and the resulting renderings may be stored for later use in combining with renderings of other objects. As shown in the example of FIG. 9, the reference object may be rendered as semi-transparent so as not to obscure the primary object being rendered.
  • Although in certain examples disclosed herein, different components are combined together merely by combining pre-rendered rasterized images of those components, components may be combined in other ways to produce the final object rendering 710. For example, if the user selects a particular chain for inclusion in a pendant, the resulting pendant may be rendered to display the chain threaded through the bail(s) of the pendant. The final rendering may reflect the size of the chain and of the bail(s) to accurately represent how the chain would appear if threaded through the bail(s).
  • As another example, a ring may be rendered as fitted to a model of a human hand. Similarly, a necklace may be displayed as fitted to a model of a human neck. Such renderings may accurately represent how such jewelry would appear when worn on particular parts of the human body.
  • In addition to the primary set of renderings 504 of layers of the object described above, a set of 2D views of the entire object from various perspectives may be generated to allow a 3D Virtual Reality (VR) "fly-by" of the object. The sequence of 2D views that comprise the fly-by may, for example, include "camera" views of the object from different spatial locations. An example of such views 1002 a-l is shown in FIG. 10. As can be seen from FIG. 10, if the views 1002 a-l were to be displayed on-screen in sequence, the result would be the appearance of flying around the ring to view it from different perspectives.
  • Such different camera views may be rendered using the techniques disclosed above, using any pre-selected combination of attribute values. Alternatively, for example, the different camera views 1002 a-l may include different personalized configurations of the object being rendered. In other words, attribute values of one or more layers in the object may be varied from camera view to camera view. When such camera views are displayed as an animation, the effect is to show attribute values (e.g., stone types/colors, metal types/colors) of the rendered object changing as the fly-by progresses. This process could also be used to generate an entire fly-by animation in the personalized configuration selected by the user. One advantage of changing the attribute values in this way is that it allows the user to see not only the same object from different angles, but also different personalizations of the object, without incurring the resource overhead (memory, processor time, and disk storage) required to render a complete fly-by for each distinct personalization of the object.
  • It is also desirable to include shadows in the final rendering 710 of the modeled object 200 to make the rendering 710 as realistic as possible. It is inefficient, however, to store separate renderings of the shadows created by components in a layer for every possible combination of attribute values for that layer, since changes in most attribute values (e.g., colors and materials) do not affect the shadows cast by the layer. Therefore, the shadows of all components having a fixed shape in an object may be rendered and stored in a single layer referred to herein as a “ground plane,” which may be thought of as a “shadow layer” because its purpose is to store shadows cast by fixed-shape components of the object. As a result, such shadows need not be stored in other renderings of those components. In particular, such shadows need not be stored in the layer renderings 504 (FIG. 5). Instead, the ground plane may be stored as a layer rendering (e.g., as a raster image) in addition to and separate from the layer renderings 504. Furthermore, multiple different ground plane layers may be created of diverse colors and patterns. The purpose of this is to allow the object to be displayed in different surroundings for aesthetic purposes.
  • When the entire object model 200 is later rendered, one or more of the ground planes may be combined with the selected layer renderings 704 as part of the process 800 (FIG. 8) performed by the system 700 of FIG. 7 to produce the final object rendering 710. FIG. 11 shows an example in which a ground plane 1102 is combined with an object rendering 1104 to produce a final object rendering 1106 which contains both the shadows from the ground plane 1102 and the components from the object rendering 1104.
  • The shadows of those components whose shapes may vary may be handled differently from those with invariant shapes. In particular, the shadows of variable-shape components may be rendered and stored within the layer renderings 504 of those components themselves (rather than in the ground plane). For example, if a particular component may have either a rectangular or oval shape, a rectangular version of the component and its shadow may be rendered and stored in one layer rendering, while the oval version of the component and its shadow may be rendered and stored in another layer rendering. If the rectangular version of the component is later selected for inclusion in the final object, the pre-rendering of the rectangular object and its shadow may be combined with the other selected components to produce the final object rendering.
  • One benefit of storing the shadows of invariant-shaped components in the ground plane and storing shadows of variable-shaped components in the individual renderings of those components' layers is that this scheme stores only as many different shadows as are necessary to produce accurate final object renderings. Referring to FIGS. 12A-B, examples are shown in which renderings of a variable-shaped object are combined with renderings of an invariant-shaped object. FIG. 12A illustrates an example in which a first rendering 1202 a of variable-shaped components includes the shadows of those components, and in which a rendering 1204 of invariant-shaped components does not include the shadows of those components. The rendering 1202 a, including the shadows it contains, is combined with the rendering 1204, to produce final object rendering 1206 a. Note that a ground plane, representing shadows of the invariant-shaped objects in rendering 1204, could also be combined with renderings 1202 a and 1204 to produce final object rendering 1206 a.
  • FIG. 12B illustrates an example in which a second rendering 1202 b of the variable-shaped components from rendering 1202 a includes the shadows of those components. The shadows in rendering 1202 b differ from those in rendering 1202 a. In FIG. 12B, the same rendering 1204 of the invariant-shaped objects is used. The rendering 1202 b, including the shadows it contains, is combined with the rendering 1204, to produce final object rendering 1206 b.
  • In order to achieve the realistic effect of visualizing the variable-shaped component as interfacing correctly with the fixed-shape component without requiring a second image of either component, the variable-shaped component may be separated out into its own plane. For example, recall that in FIGS. 12A and 12B, the renderings 1202 a and 1202 b include both the variable-shaped components and their shadows. Alternatively, for example, in the embodiment illustrated in FIGS. 12C and 12D, the ground planes 1212 a and 1212 b contain only shadows; i.e., they do not contain the variable-shaped components.
  • Instead, in the embodiment illustrated in FIGS. 12C and 12D, the variable-shaped components have been separated out into planes 1215 a and 1215 b. As in FIGS. 12A and 12B, the fixed-shaped components are retained within their own plane 1214 in FIGS. 12C and 12D. Note that holes may be rendered at appropriate locations within the variable-shaped components in planes 1215 a and 1215 b so that the variable-shaped components appear to interact realistically with the fixed-shape components of plane 1214 when the fixed-shape and variable-shape components are combined together. In particular, the holes are placed at locations where the fixed-shape components intersect the variable-shape components.
  • When the layers in FIG. 12C or FIG. 12D are combined, the ground (shadow) plane (layer 1212 a or layer 1212 b) may be rendered first, i.e., at the "bottom" of the stack. The fixed-shape components (layer 1214) may be rendered next, i.e., on "top" of the ground plane, in the "middle" of the stack. The variable-shape components (e.g., layer 1215 a or 1215 b) may be layered last, i.e., on "top" of the other two planes, at the "top" of the stack. This achieves a realistic three-dimensional effect in which the fixed-shape components appear to pass through the holes in the variable-shaped components in the resulting final object renderings 1216 a and 1216 b.
  • Each of the layer renderings 504 represents a particular layer rendered with one or more particular attribute values. Each of the layer renderings 504 may be encoded with information such as the name of the layer and the names (e.g., types) of the attributes of the layer by, for example, saving the layer rendering in a file having a filename which includes text representing the layer and attribute name(s), so that the particular file which encodes a particular layer with particular attributes may be easily identified. Such a filename may, for example, have a format such as: “<design name>_RenComp_<layer name>_<attribute name>_<view>”. Names of multiple attributes may be encoded within such a filename. Note that “<view>” represents the type of view of the layer rendering, such as front, side, or top.
  • For example, the filename “Design_RenComp_CS_E_P” may be used to store a file containing a rendering of a layer containing an emerald (“E” in the filename) selected for the center stone layer (“CS” in the filename), rendered in perspective view (“P” in the filename). As another example, the filename “Design_RenComp_SM_RG_P” may be used to store a file containing a rendering of a layer containing rose gold (“RG” in the filename) selected for the shank metal layer (“SM” in the filename), also rendered in perspective view (“P” in the filename). This encoding scheme may be used to facilitate combining the 2D layer renderings 504 into the final object rendering 710.
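The encoding scheme above can be sketched as a pair of build/parse helpers. The function names are hypothetical, and joining multiple attribute codes with underscores is an assumption; the text says only that multiple attribute names may be encoded within the filename.

```python
def layer_rendering_filename(design, layer, attributes, view):
    """Build "<design>_RenComp_<layer>_<attribute...>_<view>", the
    filename scheme described in the text (multiple attribute codes are
    assumed to be underscore-separated)."""
    return "_".join([design, "RenComp", layer, *attributes, view])

def parse_layer_rendering_filename(name):
    """Recover (design, layer, attribute codes, view) from such a name."""
    design, marker, layer, *attrs, view = name.split("_")
    assert marker == "RenComp", "not a layer-rendering filename"
    return design, layer, attrs, view

# The emerald center-stone example from the text, in perspective view:
fname = layer_rendering_filename("Design", "CS", ["E"], "P")
```

A parse helper like this is what lets the layer rendering combiner locate the correct file for each selected layer and attribute combination without a separate index.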
  • Similarly, the final object rendering 710 may be stored in a file having a filename which encodes information about which layers are represented in the final object rendering 710. For example, in the case of a ring, a filename of the following form may be used: “<design name>_Ren_<shank>-<shank2>-<center stone metal>-<side stone metal>_<center stone type>-<primary side stone type><secondary side stone type>_<view>”. For example, the filename “Design_Ren_YG--YG-_E-DE_P” may be used for a ring in which a yellow gold shank (“YG” in the filename) is combined with an Emerald center stone (“E” in the filename) with Diamond primary side stones and Emerald secondary side stones (“DE” in the filename) in a perspective view (“P” in the filename).
  • As mentioned above, not all information in the object model 200 need be used to generate distinct layer renderings. Rather, certain information in the object model 200 may be used for other purposes. One example is to calculate the price of a particular customized object (i.e., a personalized object reflecting a particular combination of attribute values). Such calculation may be performed, for example, by providing the attribute values of the object model 200 to a price calculation engine, which may use the attribute values (possibly in combination with information such as the current cost of particular types of gemstones, markup amounts, and discounts) to calculate the price of the entire personalized object. Metadata, such as metadata 212, may be used in addition to, or instead of, the object model's attribute values to perform the price calculation.
  • Pricing for a particular component may be determined in any way, such as by calculating the price based on features (e.g., size, material) of the component, or simply by looking up the price of the component (such as by using the component's SKU as an index into a database). However the price is calculated, the resulting price may be displayed to the user as part of or in addition to the final object rendering 710. As a result, the consumer may select a particular set of attribute values for each layer, and in response immediately see a photo-realistic rendering of the object along with its associated price.
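A minimal SKU-lookup pricing sketch follows. The function name, the markup/discount formula, and the SKU values are all illustrative assumptions; the patent leaves the calculation method open (prices may also be derived from component features such as size or material).

```python
def price_object(selected_components, price_table, markup=1.0, discount=0.0):
    """Price a personalized object by summing component prices looked up
    by SKU, then applying a markup factor and a flat discount.  Both the
    formula and all names are illustrative."""
    base = sum(price_table[c["sku"]] for c in selected_components)
    return base * markup - discount

# Hypothetical SKUs and prices:
price_table = {"SHANK-YG": 250.0, "STONE-E": 400.0}
components = [{"sku": "SHANK-YG"}, {"sku": "STONE-E"}]
total = price_object(components, price_table, markup=1.2, discount=30.0)
# (250 + 400) * 1.2 - 30 = 750.0
```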
  • Responsibility for performing different steps in the process of creating the personalized object rendering may be divided among computing devices and components in any of a variety of ways. For example, in a client-server system, the server could perform the layer pre-rendering process 400 of FIG. 4 a single time. Then, when a user at one of the clients requests a personalized object having a particular combination of attribute values, the client could transmit the attribute values to the server over a network.
  • In response, the server could transmit back to the client, over the network, the pre-rendered layer renderings corresponding to the selected attribute values. The client could then perform the layer-combining process 800 of FIG. 8 to produce the final rendering of the personalized object, having the particular combination of attribute values selected by the user.
  • As another example, in response to the request from the client, the server could perform the layer-combining process 800 of FIG. 8 to produce the final rendering of the personalized object, having the particular combination of attribute values selected by the user. The server could then transmit the personalized object rendering back to the client over the network. The client could then simply display the personalized object rendering to the user.
  • As yet another example, the server may perform a one-time transmission of all of the layer renderings 504 to each of the clients. Then, when a user at a particular client makes a request for a particular personalized rendering having a particular combination of attribute values, the client may perform the layer-combining process 800 of FIG. 8 without the need to make a trip to the server.
  • In any of these cases, the client computer need not perform the computationally-intensive layer rendering process 400 of FIG. 4. As a result, the client computer may be a relatively low-end computer, such as the kind typically used by home computer users, having a conventional web browsing client but lacking the CAD software and other software necessary to perform the layer-rendering process 400 of FIG. 4.
  • Once a particular personalized object rendering is produced, whether by a client or server, the personalized rendering may be cached so that it may be displayed in response to subsequent requests for the same combination of attribute values, without needing to re-perform the layer-combining method 800 of FIG. 8. If personalized views are created at the server, then such caching may be performed at the server. Additionally or alternatively, the server may transmit such personalized views to one or more of the clients so that subsequent requests at those clients may be serviced quickly, without the need for a trip to the server.
  • If the personalized object views are generated by the client machines, then such caching may, for example, be performed at each client as it generates each personalized view. Additionally or alternatively, each client may transmit any personalized views it generates back to the server, so that subsequent requests made by the same or other clients may be serviced without the need to re-generate the same view.
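Either side of the client-server split can implement this cache the same way: key the final rendering by the selected attribute values, and only invoke the layer-combining step on a miss. The class and attribute names below are illustrative; a plain dict stands in for whatever client- or server-side storage is used.

```python
class RenderingCache:
    """Cache final object renderings keyed by selected attribute values,
    so repeat requests skip the layer-combining step (method 800).
    `combine` is whatever callable produces the final rendering."""

    def __init__(self, combine):
        self.combine = combine
        self.store = {}
        self.misses = 0  # how many times method 800 actually ran

    def get(self, attribute_values):
        # Sort items so equivalent selections hash identically
        # regardless of the order the user chose them in.
        key = tuple(sorted(attribute_values.items()))
        if key not in self.store:
            self.misses += 1
            self.store[key] = self.combine(attribute_values)
        return self.store[key]

cache = RenderingCache(lambda values: f"rendering for {sorted(values)}")
first = cache.get({"metal": "YG", "stone": "E"})
second = cache.get({"stone": "E", "metal": "YG"})  # same combination, reordered
```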
  • To further increase the speed at which personalized object views may be displayed to users, certain personalized object views representing certain combinations of attribute values may be pre-generated into complete object renderings so that such renderings are ready to display immediately to users upon selection of those combinations of attribute values, without the need to perform the layer-rendering process 400 of FIG. 4 or the layer-combining process 800 of FIG. 8.
  • The particular set of personalized object views to pre-render may be selected in any way. For example, certain attribute value combinations which are known or suspected to be highly desirable, such as white gold metal and diamond stone for use in a wedding ring, may be pre-rendered into final object renderings. When a user selects any such combination of attribute values, the corresponding object rendering may be displayed to the user immediately, merely by displaying the pre-generated object rendering to the user.
  • Combinations of attribute values to pre-render may also be selected, for example, using rules. A particular rule, for example, might apply to a particular kind of jewelry or a particular model of ring. For example, an “engagement ring” rule might specify that it is preferred for engagement rings to have diamonds as the stone, and that certain colors should not be combined with certain other colors within an engagement ring. Such a rule may then be used to automatically pre-render all component combinations which satisfy the rule. Such pre-renderings may, for example, be generated at the merchant's site before deploying the system for use by users.
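Rule-driven selection of pre-render candidates can be sketched as a filter over the full Cartesian product of attribute values. The rules are modeled here as predicates over a combination dict; the "engagement ring" rule and all value names are illustrative paraphrases of the example in the text.

```python
from itertools import product

def combinations_satisfying(rules, attribute_domains):
    """Yield every attribute-value combination that passes all rules,
    as candidates for pre-rendering into complete object renderings."""
    names = sorted(attribute_domains)
    for combo in product(*(attribute_domains[n] for n in names)):
        values = dict(zip(names, combo))
        if all(rule(values) for rule in rules):
            yield values

# Hypothetical domains and an "engagement ring" rule preferring diamonds:
domains = {"stone": ["diamond", "ruby"], "metal": ["white gold", "yellow gold"]}
engagement_rules = [lambda v: v["stone"] == "diamond"]
to_prerender = list(combinations_satisfying(engagement_rules, domains))
# only the two diamond combinations survive the rule
```

Each surviving combination would then be fed to the layer-combining process (or rendered as a full scene) ahead of deployment.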
  • Such pre-rendered combinations may be produced in any of a variety of ways. For example, they may be produced by rendering the entire object as a single scene, based on the individual attribute values selected by the user. As another example, pre-rendered combinations may be produced by combining together existing pre-renderings of the individual components selected by the user, using process 800. The latter technique may be used to significantly reduce the amount of time necessary to produce popular pre-renderings.
  • A search facility may be provided through which the user may search for particular component combinations. Search may be conducted in two ways: static search and dynamic search. With static search, only those combinations which have already been pre-rendered are available for searching. Therefore, initially only those combinations which have been pre-selected for pre-rendering when the system is initialized may be available for searching. As users select particular combinations of components with particular attributes (also referred to herein as "particular variations"), the renderings of such particular variations may be saved and added to the store of particular variations which are available for searching. With dynamic search, the system interrogates all attributes of an object to determine whether or not a component combination satisfies the search criteria. If the component combination matches the search criteria via attribute interrogation and the corresponding object does not exist, the object is created dynamically and returned in the search results. Note that dynamic search incurs more performance overhead than static search.
  • It is to be understood that although the invention has been described above in terms of particular embodiments, the foregoing embodiments are provided as illustrative only, and do not limit or define the scope of the invention. Various other embodiments, including but not limited to the following, are also within the scope of the claims. For example, elements and components described herein may be further divided into additional components or joined together to form fewer components for performing the same functions. Furthermore, although particular embodiments of the present invention are described in connection with jewelry, the same techniques may be applied to any kind of object.
  • For example, data processed by embodiments of the present invention may be stored in any form. For example, three-dimensional design data may be stored in CAD files, which may be subdivided into any number of files, such as one file per design, one file per layer, or one file per component. Meta-data, such as information about the type and number of components in a design, may be stored in the same or different file from the design data itself.
  • Although in the example shown in FIG. 4, the method 400 uses all combinations of values of all attributes of each layer L to render the layer L, this is not a requirement of the present invention. Rather, for example, only a subset of the layer's attribute types may be used to render the layer. As another example, the method 400 may produce layer renderings for fewer than all possible values of an attribute. If a user subsequently requests a combination of attribute values for which not all required layer renderings were previously produced, any needed layer renderings may be produced in response to such a request, and then used to produce a final object rendering using the method 800 of FIG. 8. Alternatively, for example, a layer rendering representing attribute values which are closest to those requested by the user may be selected, and then used to produce a final object rendering using the method 800 of FIG. 8, thereby avoiding the need to produce additional layer renderings.
  • Although only a single object model 200 is shown in FIG. 2, the techniques disclosed herein may be used in systems including any number of object models. For example, an online retail web site may allow a user to select any one of a plurality of objects, such as any one of a plurality of items of jewelry, and then customize the selected object for purchase using the techniques disclosed herein.
  • The techniques described above may be implemented, for example, in hardware, software tangibly stored on a computer-readable medium, firmware, or any combination thereof. The techniques described above may be implemented in one or more computer programs executing on a programmable computer including a processor, a storage medium readable by the processor (including, for example, volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Program code may be applied to input entered using the input device to perform the functions described and to generate output. The output may be provided to one or more output devices.
  • Each computer program within the scope of the claims below may be implemented in any programming language, such as assembly language, machine language, a high-level procedural programming language, or an object-oriented programming language. The programming language may, for example, be a compiled or interpreted programming language.
  • Each such computer program may be implemented in a computer program product tangibly embodied in a machine-readable storage device for execution by a computer processor. Method steps of the invention may be performed by a computer processor executing a program tangibly embodied on a computer-readable medium to perform functions of the invention by operating on input and generating output. Suitable processors include, by way of example, both general and special purpose microprocessors. Generally, the processor receives instructions and data from a read-only memory and/or a random access memory. Storage devices suitable for tangibly embodying computer program instructions include, for example, all forms of non-volatile memory, such as semiconductor memory devices, including EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROMs. Any of the foregoing may be supplemented by, or incorporated in, specially-designed ASICs (application-specific integrated circuits) or FPGAs (Field-Programmable Gate Arrays). A computer can generally also receive programs and data from a storage medium such as an internal disk (not shown) or a removable disk. These elements will also be found in a conventional desktop or workstation computer as well as other computers suitable for executing computer programs implementing the methods described herein, which may be used in conjunction with any digital print engine or marking engine, display monitor, or other raster output device capable of producing color or gray scale pixels on paper, film, display screen, or other output medium.

Claims (1)

1. A method performed by at least one computer processor executing computer-readable computer program instructions tangibly stored on a first non-transitory computer-readable medium, the method for use with a three-dimensional computer model of an object, the model comprising a plurality of layers L, each of the plurality of layers comprising at least one corresponding component in the model, each of the plurality of layers being associated with at least one attribute, the method comprising:
(A) rendering each of the plurality of layers L with each of a plurality of values A of the at least one attribute to produce a plurality of layer renderings, comprising:
(A)(1) entering a first loop over each of the plurality of layers L;
(A)(2) entering a second loop over each of the plurality of values A of the at least one attribute; and
(A)(3) for each particular layer within the plurality of layers L and for each particular attribute value within the plurality of attribute values A, applying the particular attribute value to the particular layer to produce a rendering of the particular layer;
(B) storing the plurality of layer renderings on a second non-transitory computer-readable medium;
(C) receiving a first request for a first rendering of a personalized object specifying a plurality of attribute values;
(D) selecting, from among the plurality of stored layer renderings, a subset of layer renderings corresponding to the specified plurality of attribute values; and
(E) combining the selected subset of layer renderings to produce the first rendering of the personalized object.
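The pre-render-and-composite scheme of Claim 1 can be sketched as follows. This is a minimal illustration only, not the claimed implementation: `render_layer`, the string-valued "renderings", and the join-based composite in step (E) are hypothetical stand-ins for real rasterization and image blending of a 3-D model's layers.

```python
def render_layer(layer, attribute_value):
    """Step (A)(3): apply one attribute value to one layer.
    Hypothetical stand-in: a real system would rasterize the layer's
    components with the given material/color attribute applied."""
    return f"{layer}:{attribute_value}"

def prerender(layers, attribute_values):
    """Steps (A)-(B): render every (layer, value) combination up front
    and store the resulting layer renderings in a cache."""
    cache = {}
    for layer in layers:                 # (A)(1) outer loop over layers L
        for value in attribute_values:   # (A)(2) inner loop over values A
            cache[(layer, value)] = render_layer(layer, value)  # (A)(3)
    return cache

def render_personalized(cache, request):
    """Steps (C)-(E): given a request mapping each layer to a desired
    attribute value, select the matching stored layer renderings (D)
    and combine them into one rendering of the personalized object (E)."""
    selected = [cache[(layer, value)] for layer, value in request.items()]
    return "+".join(selected)  # stand-in for compositing, e.g. alpha-blending

# Usage: two layers (band, stone), three attribute values; a request
# picks one value per layer and is served from the pre-rendered cache.
cache = prerender(["band", "stone"], ["gold", "silver", "ruby"])
print(render_personalized(cache, {"band": "gold", "stone": "ruby"}))
```

The design point the claim captures is that rendering cost is paid once per (layer, value) pair rather than once per full personalized combination: two layers with three values each require only 6 pre-renderings, yet can serve all 9 combinations.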
US13/455,934 2009-02-13 2012-04-25 Layered Personalization Abandoned US20120327084A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/455,934 US20120327084A1 (en) 2009-02-13 2012-04-25 Layered Personalization

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US15254909P 2009-02-13 2009-02-13
US23019209P 2009-07-31 2009-07-31
US12/684,103 US20100169059A1 (en) 2009-02-13 2010-01-07 Layered Personalization
US13/111,773 US8194069B2 (en) 2009-02-13 2011-05-19 Layered personalization
US13/455,934 US20120327084A1 (en) 2009-02-13 2012-04-25 Layered Personalization

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/111,773 Continuation US8194069B2 (en) 2009-02-13 2011-05-19 Layered personalization

Publications (1)

Publication Number Publication Date
US20120327084A1 true US20120327084A1 (en) 2012-12-27

Family

ID=42285965

Family Applications (3)

Application Number Title Priority Date Filing Date
US12/684,103 Abandoned US20100169059A1 (en) 2009-02-13 2010-01-07 Layered Personalization
US13/111,773 Active US8194069B2 (en) 2009-02-13 2011-05-19 Layered personalization
US13/455,934 Abandoned US20120327084A1 (en) 2009-02-13 2012-04-25 Layered Personalization

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US12/684,103 Abandoned US20100169059A1 (en) 2009-02-13 2010-01-07 Layered Personalization
US13/111,773 Active US8194069B2 (en) 2009-02-13 2011-05-19 Layered personalization

Country Status (6)

Country Link
US (3) US20100169059A1 (en)
EP (1) EP2396774A4 (en)
JP (1) JP5542158B2 (en)
CN (1) CN102334143A (en)
CA (1) CA2785575C (en)
WO (1) WO2010093493A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11755790B2 (en) 2020-01-29 2023-09-12 America's Collectibles Network, Inc. System and method of bridging 2D and 3D assets for product visualization and manufacturing

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7069108B2 (en) 2002-12-10 2006-06-27 Jostens, Inc. Automated engraving of a customized jewelry item
MX2009007745A (en) 2007-01-18 2009-12-01 Jostens Inc System and method for generating instructions for customization.
MX2009009622A (en) 2007-03-12 2009-12-01 Jostens Inc Method for embellishment placement.
US8977377B2 (en) 2010-02-25 2015-03-10 Jostens, Inc. Method for digital manufacturing of jewelry items
US20120127198A1 (en) * 2010-11-22 2012-05-24 Microsoft Corporation Selection of foreground characteristics based on background
US11042923B2 (en) 2011-09-29 2021-06-22 Electronic Commodities Exchange, L.P. Apparatus, article of manufacture and methods for recommending a jewelry item
US20130226646A1 (en) * 2011-09-29 2013-08-29 Electronic Commodities Exchange Apparatus, Article of Manufacture, and Methods for In-Store Preview of an Online Jewelry Item
US10204366B2 (en) 2011-09-29 2019-02-12 Electronic Commodities Exchange Apparatus, article of manufacture and methods for customized design of a jewelry item
US10417686B2 (en) 2011-09-29 2019-09-17 Electronic Commodities Exchange Apparatus, article of manufacture, and methods for recommending a jewelry item
US8626601B2 (en) 2011-09-29 2014-01-07 Electronic Commodities Exchange, L.P. Methods and systems for providing an interactive communication session with a remote consultant
US9208265B2 (en) * 2011-12-02 2015-12-08 Jostens, Inc. System and method for jewelry design
AU2013215218B2 (en) * 2012-01-31 2015-04-23 Google Inc. Method for improving speed and visual fidelity of multi-pose 3D renderings
GB2499024A (en) * 2012-02-03 2013-08-07 Microgen Aptitude Ltd 3D integrated development environment(IDE) display
US9058605B2 (en) * 2012-04-20 2015-06-16 Taaz, Inc. Systems and methods for simulating accessory display on a subject
US9177533B2 (en) * 2012-05-31 2015-11-03 Microsoft Technology Licensing, Llc Virtual surface compaction
EP2717226A1 (en) 2012-10-04 2014-04-09 Colors of Eden Gmbh Method for generating personalized product views
WO2014103045A1 (en) * 2012-12-28 2014-07-03 楽天株式会社 Image processing device, image processing method, image processing program, and computer-readable recording medium with said program recorded thereon
US9582615B2 (en) 2013-01-16 2017-02-28 Jostens, Inc. Modeling using thin plate spline technology
US9165409B2 (en) 2013-02-15 2015-10-20 Micro*D, Inc. System and method for creating a database for generating product visualizations
US9224234B2 (en) 2013-02-15 2015-12-29 Micro*D, Inc. System and method for generating product visualizations
US9996854B2 (en) * 2013-06-28 2018-06-12 Aerva, Inc. Hierarchical systems, apparatus and methods for displaying context-aware content
USD789228S1 (en) 2013-11-25 2017-06-13 Jostens, Inc. Bezel for a ring
US10430851B2 (en) 2016-06-09 2019-10-01 Microsoft Technology Licensing, Llc Peripheral device customization
US10690158B2 (en) 2016-09-13 2020-06-23 Watchfire Signs, Llc Technologies for interlocking structures
CN106681770B (en) * 2016-12-29 2020-10-13 金蝶软件(中国)有限公司 Dynamic modification method and device for subassembly attributes in composite assembly
IT201700052301A1 (en) * 2017-05-15 2018-11-15 Casarotto Roberto S N C Di Casarotto Cristian E Denis "METHOD FOR THE DESIGN AND PRODUCTION OF PRECIOUS OBJECTS CUSTOMIZED BY REMOTE INTERACTIVE DEVICES"
KR102255404B1 (en) * 2018-03-15 2021-05-24 주식회사 비주얼 Method and electric apparatus for recommending jewelry product
EP3621019A1 (en) * 2018-09-07 2020-03-11 Giffits GmbH Computer-implemented method for generating supply and/or order data for an individual object
IL263348A (en) * 2018-11-28 2020-05-31 Berger Shechter Rachel Method and system for generating snowflake-based jewelry
KR102531172B1 (en) * 2020-07-30 2023-05-11 주식회사 비주얼 Electric apparatus for designing jewelry product and Method for designing jewelry product using the Electric apparatus
CN111930291A (en) * 2020-10-09 2020-11-13 广州宸祺出行科技有限公司 Method and system for realizing personalized shadow on Android platform

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060107203A1 (en) * 2004-11-15 2006-05-18 Microsoft Corporation Electronic document style matrix

Family Cites Families (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5398309A (en) * 1993-05-17 1995-03-14 Intel Corporation Method and apparatus for generating composite images using multiple local masks
US5649032A (en) * 1994-11-14 1997-07-15 David Sarnoff Research Center, Inc. System for automatically aligning images to form a mosaic image
US6016150A (en) * 1995-08-04 2000-01-18 Microsoft Corporation Sprite compositor and method for performing lighting and shading operations using a compositor to combine factored image layers
JPH10334157A (en) * 1997-05-29 1998-12-18 Denso Corp Article sale analytic system
JPH11259528A (en) * 1998-03-12 1999-09-24 Suehiro:Kk Product introduction and selection supporting system for ring and accessory
US6369830B1 (en) * 1999-05-10 2002-04-09 Apple Computer, Inc. Rendering translucent layers in a display system
CA2461038C (en) * 1999-11-15 2009-11-03 My Virtual Model Inc. System and method for displaying selected garments on a computer-simulated mannequin
US6727925B1 (en) * 1999-12-20 2004-04-27 Michelle Lyn Bourdelais Browser-based room designer
US7149665B2 (en) * 2000-04-03 2006-12-12 Browzwear International Ltd System and method for simulation of virtual wear articles on virtual models
US6825852B1 (en) * 2000-05-16 2004-11-30 Adobe Systems Incorporated Combining images including transparency by selecting color components
US7246044B2 (en) * 2000-09-13 2007-07-17 Matsushita Electric Works, Ltd. Method for aiding space design using network, system therefor, and server computer of the system
US20030109949A1 (en) * 2000-09-28 2003-06-12 Kenji Ikeda Commodity design creating and processing system
US6856323B2 (en) * 2001-04-09 2005-02-15 Weather Central, Inc. Layered image rendering
US6568455B2 (en) * 2001-04-10 2003-05-27 Robert M. Zieverink Jewelry making method using a rapid prototyping machine
JP2003150666A (en) * 2001-11-16 2003-05-23 Matsumura Gold & Silver Co Ltd Recording medium for jewelry design system
JP2003296617A (en) * 2002-04-03 2003-10-17 Yappa Corp Sales support system of ring accessary by 3d image
US7051040B2 (en) * 2002-07-23 2006-05-23 Lightsurf Technologies, Inc. Imaging system providing dynamic viewport layering
US20040243361A1 (en) * 2002-08-19 2004-12-02 Align Technology, Inc. Systems and methods for providing mass customization
US7069108B2 (en) * 2002-12-10 2006-06-27 Jostens, Inc. Automated engraving of a customized jewelry item
WO2005020129A2 (en) * 2003-08-19 2005-03-03 Bandalong Entertainment Customizable avatar and differentiated instant messaging environment
CN100388274C (en) * 2003-12-05 2008-05-14 英业达股份有限公司 Electron map manufacturing system and its method
US20050222862A1 (en) * 2004-03-30 2005-10-06 Kristen Guhde System and method for designing custom jewelry and accessories
BRPI0419068A (en) * 2004-11-19 2008-01-29 Edgenet Inc automatic method and system for object configuration
US20060274070A1 (en) * 2005-04-19 2006-12-07 Herman Daniel L Techniques and workflows for computer graphics animation system
JP4935275B2 (en) * 2006-03-14 2012-05-23 大日本印刷株式会社 Information providing system and information providing method, etc.
US7945343B2 (en) * 2006-12-18 2011-05-17 Nike, Inc. Method of making an article of footwear
TW200828043A (en) * 2006-12-29 2008-07-01 Cheng-Hsien Yang Terminal try-on simulation system and operating and applying method thereof
US20090021513A1 (en) * 2007-07-18 2009-01-22 Pixblitz Studios Inc. Method of Customizing 3D Computer-Generated Scenes
US7818217B2 (en) * 2007-07-20 2010-10-19 Nike, Inc. Method of customizing an article


Also Published As

Publication number Publication date
WO2010093493A9 (en) 2010-10-07
CA2785575A1 (en) 2010-08-19
EP2396774A4 (en) 2016-07-06
US20110216062A1 (en) 2011-09-08
CN102334143A (en) 2012-01-25
JP2012518226A (en) 2012-08-09
CA2785575C (en) 2013-04-09
WO2010093493A2 (en) 2010-08-19
US8194069B2 (en) 2012-06-05
US20100169059A1 (en) 2010-07-01
WO2010093493A3 (en) 2010-12-02
JP5542158B2 (en) 2014-07-09
EP2396774A2 (en) 2011-12-21

Similar Documents

Publication Publication Date Title
US8194069B2 (en) Layered personalization
US10089662B2 (en) Made-to-order direct digital manufacturing enterprise
US20200175580A1 (en) 3d imaging
US9817561B2 (en) Proposing visual display components for processing data
US10552897B2 (en) 3D imaging
US8117089B2 (en) System for segmentation by product category of product images within a shopping cart
US10586262B2 (en) Automated system and method for the customization of fashion items
EP1835466A2 (en) Method and apparatus for geometric data processing and a parts catalog system
CN107015659A (en) A kind of virtual try-in method of wrist-watch and system
US11244001B2 (en) Method for retrieving similar virtual material appearances
US9639924B2 (en) Adding objects to digital photographs
KR102031647B1 (en) System and Method for generating 3-Dimensional composited image of goods and packing box
CA3121348A1 (en) Systems and methods for generating three-dimensional models corresponding to product bundles
KR20200041120A (en) Method for recommending a bouquet
US11755790B2 (en) System and method of bridging 2D and 3D assets for product visualization and manufacturing
EP2717226A1 (en) Method for generating personalized product views
JP7097500B1 (en) Output program, output device and output method
KR20200047240A (en) Jewelry sale system and method using augmented reality
US20220288858A1 (en) Tray packing system for additive manufacturing
Gogousis et al. Caricature generation utilizing the notion of anti-face
KR20200047241A (en) Marker ring for augmented reality
JP2007172188A (en) System and program for supporting commodity development analysis
KR20070052955A (en) A bakery ordering method using image simulation

Legal Events

Date Code Title Description
AS Assignment

Owner name: GEMVARA, INC., MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:THOMAS-LEPORE, GRANT;HATANAKA, IWAO;REEL/FRAME:028260/0148

Effective date: 20100114

AS Assignment

Owner name: GEMVARA, INC., MASSACHUSETTS

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE RE-RECORD ASSIGNMENT TO ADD OMITTED INVENTOR. PREVIOUSLY RECORDED ON REEL 028260 FRAME 0148. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT OF ASSIGNORS' INTEREST;ASSIGNORS:THOMAS-LEPORE, GRANT;HATANAKA, IWAO;MENON, MURALI;REEL/FRAME:028280/0245

Effective date: 20100114

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION