US20120297309A1 - Method and system for presenting data over a network based on network user choices and collecting real-time data related to said choices


Info

Publication number
US20120297309A1
Authority
US
United States
Prior art keywords
character
data
user
attribute
user interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/298,095
Inventor
Ian N. Robb
Michael B. Madlener
Ken J. McGuire
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Treehouse Avatar Technologies Inc
Original Assignee
Treehouse Avatar Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed (per the “Global patent litigation dataset” by Darts-ip, licensed under a Creative Commons Attribution 4.0 International License)
Application filed by Treehouse Avatar Technologies Inc
Priority to US13/298,095
Publication of US20120297309A1
Assigned to TREEHOUSE SOLUTIONS, INC. (assignment of assignors' interest; assignors: MADLENER, MICHAEL B., ROBB, IAN N.)
Assigned to Treehouse Avatar Technologies Inc. (assignment of assignors' interest; assignor: TREEHOUSE SOLUTIONS INC.)

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 — Commerce

Definitions

  • the invention relates generally to an apparatus and method for presenting data over an information network based on choices made by the users of the network and collecting data related to the choices made by the users. More particularly, the invention relates to an apparatus and method for presenting audio presentations and visual image presentations to a network user based on choices made by the user while in a network site and collecting data related to the choices in real-time.
  • the term “visual image” is broadly defined as drawn, printed or modeled objects, characters or scenes, including stills, animation, motion, live action and video.
  • the term “character” is used to describe certain aspects and features of the invention; for example, the term “character-enabled” is often used.
  • the use of “character” instead of a collective “character, object or scene” is done for ease in readability of the specification and is not intended in any way to limit the scope of the invention.
  • the information and data made available over a network site is typically the same for each visitor to that network site.
  • each visitor to a web site is generally presented the same audio and visual image data contained within the various web pages comprising the web site.
  • Links presented on the web pages generally transfer the visitor to other web pages or in some cases to other web sites.
  • All in all, contemporary web sites are static in nature: they fail to take into consideration the individuality of their visitors and instead present each visitor a substantially identical audio/visual experience. As a result, visitors often become bored with a web site in a relatively short time, reducing both the time spent on the site and the likelihood of frequent, repeat visits.
  • market research is used for a variety of purposes, including: market strategy, product development, product adoption, program evaluation, price sensitivity, name and message testing, awareness, usage, attitude, and behavior tracking, advertising testing, market tracking, customer satisfaction, customer profiling and segmentation, corporate image studies, employee satisfaction, benchmarking and public opinion polls.
  • Qualitative research involves the more “touchy-feely” aspect of gauging tastes, preferences and opinions, and includes focus groups, on-line focus groups, one-on-one interviews and executive interviews.
  • Quantitative research involves the sampling of a base of respondents to enable the statistical inference of the data over a larger population. The data obtained is tabulated into useful categories that allow the researcher to draw statistically-sound conclusions.
  • Quantitative research methods include telephone surveys, mail surveys, intercept surveys and e-mail surveys.
  • the cost estimate for a market research firm to conduct, analyze and summarize a focus group of eight to ten people is between $4,000 and $6,000.
  • Market research firms also employ the Internet to conduct focus group studies. Some firms have a database of e-mail addresses of individuals who have agreed to be surveyed on an as-needed basis, while other firms purchase lists of e-mail addresses that fit a targeted profile. These focus groups are conducted by showing a user pictures of products or a concept and then posing a series of questions to the user. Those responses are then tabulated with the responses from other users. The costs associated with on-line focus groups are similar to regular focus groups.
  • the most common quantitative method suggested for teen-market analysis is the mall intercept.
  • in a mall intercept, interviewers stop mall shoppers who meet a certain targeted profile. These individuals are then interviewed for no more than twenty minutes and asked product and concept questions.
  • the cost to perform a mall-intercept study varies, depending on the number of respondents targeted, the malls involved, and the time required to conduct the surveys. For example, the cost of a mall intercept in which 1,000 responses are received from shoppers in several geographic regions throughout the US may be as high as $100,000.
  • the present invention is directed to an apparatus and method that employs selectable and modifiable animation to collect data related to the choices made by the users of an information network.
  • the invention in a first aspect, relates to a method having application within an information network having at least one character-enabled network site.
  • the method provides for the presentation of data to a network user based on choices made by the user while the user is within a character-enabled network site.
  • the method includes the step of creating a character having a plurality of attributes. Each attribute is selected by the user from a plurality of attributes presented to the user through a user interface to create a persona for the character.
  • Each attribute is defined by at least one of either audio data and/or visual image data.
  • An attribute may comprise one or more pieces of audio data, one or more pieces of visual image data or a combination of one or more pieces of audio data and visual image data.
  • the method further includes the step of providing to the user interface, at least one of either an audio presentation or a visual image presentation selected from a plurality of presentations based on the persona of the character created.
  • the present invention presents to the user a customized audio and/or visual image experience while the user is visiting the network site.
  • the method further comprises the step of storing persona data indicative of the selected attributes.
  • the present invention allows for the collection of user choices which may be indicative of the user's tastes, preferences and opinions.
  • the plurality of presentations may include passive presentations and interactive presentations, each in turn comprising one or both of a visual image displayed on the user interface and sound heard through the user interface.
  • when an interactive presentation is provided to the user interface, the method further includes the step of, in response to user interaction with the interactive presentation, providing to the user interface at least one of an audio presentation and a visual image presentation selected from the plurality of presentations.
  • the present invention allows for further customization of the audio/visual experience.
  • the method further includes the step of storing data indicative of user interaction with the interactive presentation.
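The character-creation and persona-storage steps of this first aspect can be sketched as a minimal data model. All names here (`Attribute`, `Character`, the asset file names) are illustrative assumptions, not terms from the specification:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Attribute:
    """One selectable attribute, defined by audio and/or visual image data."""
    name: str                 # e.g. "shirt"
    value: str                # e.g. "brand B"
    audio_data: tuple = ()    # references to audio assets
    visual_data: tuple = ()   # references to visual image assets

@dataclass
class Character:
    """A character whose persona is the set of user-selected attributes."""
    attributes: dict = field(default_factory=dict)

    def select(self, attribute: Attribute, store: list) -> None:
        # Adopt the attribute into the persona and record the choice
        # (the "storing persona data" step of the method).
        self.attributes[attribute.name] = attribute
        store.append((attribute.name, attribute.value))

# Example: a user builds a persona from the attribute choices presented.
store: list = []
character = Character()
character.select(Attribute("shirt", "brand B", visual_data=("shirt_b.swf",)), store)
character.select(Attribute("music", "trance", audio_data=("trance_loop.mp3",)), store)
```

The stored `(name, value)` pairs are what later makes the collected choices analyzable as market data.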
  • the invention in a second aspect, relates to an apparatus for presenting data to a network user based on choices made by the user while within a character-enabled network site.
  • the apparatus includes a character processor for creating a character having a plurality of attributes. Each attribute is selected by the user from a plurality of attributes presented to the user through a user interface to create a persona for the character. Each attribute is defined by audio data and/or visual image data.
  • the apparatus further includes a selection processor for providing to the user interface, at least one of either an audio presentation and/or a visual image presentation selected from a plurality of presentations based on the persona of the character created.
  • the invention in a third aspect, relates to a method having application within an information network having at least one character-enabled network site.
  • the method provides for the presentation of data to a network user based on choices made by the user while the user is within a character-enabled network site.
  • the method includes the step of associating a character with the user.
  • the character has a plurality of attributes, each defined by at least one of either audio data and/or visual image data.
  • the plurality of attributes collectively defines a character persona.
  • the method further includes the step of providing to the user interface, at least one interactive presentation selected from a plurality of presentations based on the character persona.
  • the interactive presentation is defined by audio data and/or visual image data.
  • Also included in the method is the step of, in response to user interaction with the interactive presentation, providing to the user interface at least one of another interactive presentation and a passive presentation.
  • the passive presentation is defined by at least one of audio data and visual image data.
  • the present invention takes into account the actions of the user, which are likely to be indicative of the tastes, preferences and opinions of the user, and customizes the audio/visual experience presented to the user accordingly.
  • the step of providing to the user interface at least one interactive presentation selected from a plurality of presentations based on the character persona includes the steps of linking the character persona with interactive presentations of interest; and selecting for presentation to the user interface those interactive presentations that are linked with the character persona.
  • the step of providing to the user interface at least one of another interactive presentation and a passive presentation in response to user interaction with the interactive presentation comprises the steps of linking the user interaction with other interactive presentations and passive presentations of interest; and selecting for presentation to the user interface those other interactive presentations and passive presentations that are linked with the user interaction.
  • the invention in a fourth aspect, relates to an apparatus for presenting data to a network user based on choices made by the user while within a character-enabled network site.
  • the apparatus includes a character processor for associating a character with the user.
  • the character has a plurality of attributes, each attribute defined by at least one of either audio data and/or visual image data.
  • the plurality of attributes collectively defines a character persona.
  • the character processor may comprise a user interface functioning in cooperation with site programs which may be resident in the character-enabled network site.
  • the apparatus further includes a selection processor for providing to the user interface, at least one interactive presentation selected from a plurality of presentations based on the character persona.
  • the interactive presentation is defined by audio data and/or visual image data.
  • the selection processor also, in response to user interaction with the interactive presentation, provides to the user interface at least one of another interactive presentation and a passive presentation.
  • the passive presentation is defined by at least one of either audio data and/or visual image data.
  • the selection processor may comprise site programs which may be resident in the character-enabled network site. These site programs operate in conjunction with various stored audio data/presentations and visual image data/presentations to provide the presentations to the user interface.
  • the invention in a fifth aspect, relates to a method that finds application within an information network having a database and at least one character-enabled network site accessible through a user interface with audio and visual image presentation capability.
  • the method is for obtaining and storing data indicative of one or more attribute selections made by a network user while within the character-enabled network site.
  • the method includes the steps of storing at least one of either audio data and/or visual image data of a plurality of characters, each character having at least one associated modifiable attribute.
  • the method further includes the step of storing at least one of either audio data and/or visual image data of at least one modification attribute.
  • the method also includes the step of presenting the plurality of characters to the user through the user interface for selection by the user.
  • upon selection of a character, the method includes the step of storing data indicative of the selected character in a database and presenting the at least one modification attribute to the user through the user interface for selection by the user. Upon selection of the modification attribute, the method further includes the step of storing data indicative of the selected modification attribute in the database.
  • the invention in a sixth aspect, relates to an apparatus for obtaining and storing data indicative of one or more attribute selections made by a network user through a user interface with audio and visual image presentation capability.
  • the apparatus includes a character memory storing at least one of either audio data and/or visual image data of a plurality of characters, each having at least one associated modifiable attribute.
  • the apparatus further includes an attribute memory for storing at least one of either audio data and/or visual image data of at least one modification attribute.
  • the apparatus also includes a processor for presenting the plurality of characters to the user through the user interface for selection by the user. Upon selection of a character, the processor presents the at least one modification attribute to the user for selection by the user.
  • the apparatus also includes a database for storing data indicative of the selected character and the selected at least one modification attribute.
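The character memory, attribute memory and database of these fifth and sixth aspects can be sketched with an in-memory table of user choices, loosely modeled on the kind of database table shown in FIG. 11. The schema and function names are assumptions for illustration:

```python
import sqlite3

# In-memory stand-in for the central database 24; the schema
# (user_id, character, attribute, value) is an invented example.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE choices (
    user_id   TEXT,
    character TEXT,
    attribute TEXT,
    value     TEXT)""")

def record_choice(user_id: str, character: str, attribute: str, value: str) -> None:
    """Store data indicative of a selected character or modification attribute."""
    db.execute("INSERT INTO choices VALUES (?, ?, ?, ?)",
               (user_id, character, attribute, value))

# Example: a user picks a pre-profiled character, then modifies its shirt brand.
record_choice("user-1", "skater", "character", "skater")
record_choice("user-1", "skater", "shirt_brand", "brand C")
rows = db.execute(
    "SELECT attribute, value FROM choices WHERE user_id = ? ORDER BY rowid",
    ("user-1",)).fetchall()
```

Because every selection is a row, the table can later be aggregated across users for the market-research analysis the specification describes.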
  • the invention in a seventh aspect, relates to a method finding application in an information network having at least one character-enabled network site.
  • the method is for sharing data among network users based on choices made by each of the users while within a character-enabled network site.
  • the method includes the steps of, for each user, creating a character having a plurality of attributes. Each attribute is selected by the user from a plurality of attributes presented to the user through a user interface to create a character profile. Each attribute is defined by at least one of either audio data and/or visual image data.
  • the method also includes the step of providing to at least one user interface, at least one of either an audio presentation and/or a visual image presentation indicative of at least one other character profile. Also included is the step of providing a communications link between the users.
  • FIG. 1 is a block diagram of an information network including a user side and a network-site side having character-enabled network sites operating in accordance with the present invention ;
  • FIG. 2 is a top-level flowchart depicting the process by which a network user explores the information network of FIG. 1 ;
  • FIG. 3 is a detailed flowchart depicting the process by which a user interacts with the character-enabled network sites of FIG. 1 ;
  • FIG. 4 depicts a page of an exemplary character-enabled network site having a collection of pre-profiled characters ;
  • FIG. 5 depicts a follow-up to the screen of FIG. 4 , in which one of the pre-profiled characters has been selected in order to gather additional information related to the persona of the character;
  • FIG. 6 depicts a follow-up screen to the screen of FIG. 5 , in which a detail of the selected pre-profiled character is presented and animated comments indicative of the character's persona are presented;
  • FIG. 7 depicts a follow-up screen to the screen of FIG. 6 , in which the remaining characters are dismissed and the opportunity to modify the selected pre-profiled character is presented;
  • FIG. 8 depicts a follow-up screen to the screen of FIG. 7 in which a roll-over of the shirt causes the shirt to highlight thereby indicating that the shirt may be modified;
  • FIG. 9 depicts a follow-up screen to the screen of FIG. 8 in which several choices with regard to the brand of shirt are presented;
  • FIG. 10 depicts a follow-up screen to the screen of FIG. 9 in which the shirt selected is displayed on the character;
  • FIG. 11 depicts an exemplary database table including records of choices made by network users.
  • FIG. 12 is a flow chart depicting the process of collecting and analyzing the data generated by users when exploring character-enabled network sites.
  • an information network including a user side 10 and a network-site side 12 interfacing through a network 14 .
  • the network 14 provides the means through which a user may access a plurality of network sites 16 a , 16 b and character-enabled network sites (“C-E sites”) 16 c , 16 d .
  • the features of the C-E sites 16 c , 16 d are described in detail below.
  • the network 14 may include, by way of example, but not necessarily by way of limitation, the Internet, Internet II, Intranets, and similar evolutionary versions of same.
  • the client side 10 includes a user interface 18 and network browser 20 through which a user may communicate with the network-site side 12 via the network 14 .
  • the user interface 18 may include a personal computer, network work station or any other similar device having a central processing unit (CPU) and monitor with at least one of audio presentation, i.e. sound, capability and visual image presentation, e.g. video, animation, etc., capability. Other devices may include portable communication devices that access the information network, such as cellular telephones or hand held devices, e.g., Palm Pilots.
  • the client side 10 further includes a graphical user interface (GUI) that facilitates communication between the client side and the network-site side 12 .
  • Client-side software may be resident in the user interface 18 . Alternatively, the client-side software may be network-based software capable of being accessed over the network 14 . For example, a user may be able to access the client-side software directly on the World-Wide-Web (“the Web”).
  • the network-site side 12 includes a plurality of network sites 16 a - 16 d and associated servers 22 a , 22 b . Also included on the network-site side 12 is a central database 24 for storing information and a search engine 26 .
  • the server 22 b houses a program memory 28 for storing the network-site software programs, i.e., “site programs”, which operate each of the C-E sites 16 c , 16 d in accordance with the invention. Also housed within the program memory 28 is the search engine software and database software.
  • the server 22 b also houses source data 30 for storing the data required by the site programs. While FIG. 1 depicts only one of each of these items, the information network may include any number of them.
  • the other server 22 a on the network-site side 12 includes similar memory and storage devices, which for ease of illustration are not depicted. The devices store the programs and data necessary to operate the network sites 16 a , 16 b associated with the server 22 a . In the exemplary information network of FIG. 1 , however, these network sites 16 a , 16 b are not configured to operate as character-enabled sites.
  • C-E sites 16 c , 16 d operate under the control of site programs housed in the program memory 28 .
  • the site programs are created in browser usable file formats, such as but not limited to JavaScript, Flash Animation (.SWF), HTML, dHTML, CGI, ASP and Cold Fusion, to present either one or both of audio data/presentations and visual image data/presentations to the user interface 18 .
  • the audio data and visual image data required by the site programs is stored in the source data 30 .
  • the site programs are designed to provide to the user interface 18 audio presentations and visual image presentations tailored to the “persona” of a character, as defined by a network user. These audio presentations and visual image presentations are selected from a plurality of presentations resident within the information network.
  • the “persona” of a character is defined by a number of attributes, which in turn are defined by at least one of audio data and visual image data.
  • “Attributes” as used herein means a quality or characteristic inherent or ascribed to a character, object, or scene. Character attributes may include physical characteristics, emotional characteristics, personal interests, opinions and preferences. Object and scene attributes generally include but are not limited to physical characteristics.
  • the persona of a character may be further defined by the actions of the character, as controlled by the user through the user interface 18 .
  • the “attribute” aspect of a character persona may be defined by a user in any of several ways.
  • the character may have a pre-determined persona which the user may choose to adopt.
  • the user may modify or customize the persona of a pre-profiled character.
  • the user may create his own character persona from scratch.
  • the “action” aspect of a character persona is defined by the user based on how the user interacts with the audio presentations and visual image presentations provided to the user interface.
  • the persona of a character determines the experience the user has on the C-E site 16 c , 16 d .
  • Different characters call up different audio presentations and visual image presentations.
  • different music, games, books, movies, and videos may be provided to the user interface 18 .
  • the present invention cross references or links character attributes and character actions to specific audio presentations or visual images presentations. This cross referencing or linking may be accomplished through a look-up table or through frame technology. Using the attributes and actions associated with a given character, the site program determines which audio presentation and visual image presentations to present to the user interface 18 .
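The cross-referencing just described — linking character attributes to specific presentations through a look-up table — can be sketched as follows; the table contents and asset names are hypothetical:

```python
# Hypothetical look-up table cross-referencing (attribute, value) pairs of a
# character persona to audio/visual presentations; all asset names are invented.
PRESENTATION_LINKS = {
    ("shoes", "athletic"): ["sports_highlights.swf", "sneaker_ad.mp3"],
    ("music", "trance"):   ["dj_event_banner.swf", "trance_mix.mp3"],
}

def presentations_for(persona: dict) -> list:
    """Select the presentations linked to the attributes of a persona."""
    selected = []
    for attribute_value in persona.items():
        selected.extend(PRESENTATION_LINKS.get(attribute_value, []))
    return selected

# A persona with athletic shoes and trance background music pulls in
# sports- and trance-related presentations.
persona = {"shoes": "athletic", "music": "trance"}
```

An attribute with no linked entries simply contributes nothing, so unrecognized personas fall through to whatever default content the site program provides.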
  • the site program in combination with the audio data and visual image data stored in the source data 30 define one or more pre-profiled characters.
  • the site program/data defines the characters such that each has his or her own persona.
  • An example of several characters is presented in FIG. 4 .
  • a detail of one of these characters is presented in FIG. 6 .
  • the user gets a quick glimpse of the character's persona in two ways. First, the user sees what the character looks like and how he is dressed. Second, as the user does a roll-over of each character, there is a visual or audio response that gives the user a sense of that character's personality.
  • the site programs are designed to provide to the user interface 18 audio presentations and visual image presentations directed toward the persona of a character.
  • the pre-defined attributes of the character determine the audio presentations and visual image presentations provided to the user interface 18 .
  • the site program/data provides the audio data or visual image data necessary to modify or change select attributes of a pre-profiled character.
  • the site program/data may present to the user a pre-profiled character of a human figure wearing a “brand A” shirt, while further presenting visual images representative of selectable attributes, e.g., brand B, brand C or brand D shirts.
  • the site program/data may provide for further modification of an attribute. For example, once the visual image data for a specific brand is presented and selected, the site program/data may present to the user the option of changing the style, size or color of the shirt.
  • the site program monitors the development of a customized character, notes the attributes modifications and selections made by the user and selects the audio presentations and visual image presentations provided to the user interface 18 accordingly. More specifically, the site program keeps track of the character attributes selected and modified by a user. Certain C-E site information is associated with certain character attributes and actions. For example, if a user decides that his character will wear athletic shoes then audio presentations and visual image presentations related to sports are provided to the user interface 18 . If the user selects trance music as background music to accompany his character then audio presentations and visual image presentations related to that type of music are provided to the user interface 18 .
  • the site program/data may allow the user to create a character from scratch. This may be done using commercially available animation programs such as Flash Animation (.swf) and Cold Fusion. Similar to the customized character, the site program monitors the development of a created character, notes the attributes of the created character and selects the audio presentations and visual image presentations provided at the user interface 18 accordingly.
  • the user interface 18 is provided with at least one of an audio presentation or a visual image presentation.
  • the presentations provided are selected from a plurality of presentations resident within the information network based on the persona of the character.
  • Exemplary audio presentations include background music, sound effects, dialog and character comments.
  • Exemplary visual presentations include background scenery, text-identified links, pictorial-identified links, pop-up menus and windows.
  • These presentations may be further categorized as being either passive or interactive.
  • Interactive presentations allow for the user to make an action-related choice via the user interface 18 .
  • the user interface 18 may be provided with a text-identified link that gives the user the choice to follow the link to another page on the C-E site or to another network site.
  • a pop-up window may appear on the user interface 18 asking the user a survey question.
  • Many other interactive presentations may be provided to the user interface 18 .
  • Passive presentations do not allow for user interaction.
  • An example of a passive presentation is a non-hyperlinked text or graphic.
  • the choices made by a user in response to the interactive presentations may be used to further define the persona of the character and to adjust the audio presentations and visual image presentations provided at the user interface 18 .
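The feedback loop just described — a response to an interactive presentation being stored, further defining the persona, and steering the next presentation — can be sketched with a small handler; the survey name and follow-up links are assumptions:

```python
# Hypothetical links from an interactive presentation's possible responses
# to follow-up presentations.
FOLLOW_UPS = {"survey_q1": {"yes": "thanks_banner", "no": "alt_survey"}}

def handle_interaction(persona: dict, presentation: str, choice: str, log: list):
    log.append((presentation, choice))   # store data indicative of the interaction
    persona[presentation] = choice       # the choice further defines the persona
    # Provide another presentation linked to this interaction, if any.
    return FOLLOW_UPS.get(presentation, {}).get(choice)

persona, log = {}, []
followup = handle_interaction(persona, "survey_q1", "yes", log)
```

A passive presentation, by contrast, would be served without any such handler, since it accepts no user interaction.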
  • a user enters a network site via the user interface 18 ( FIG. 1 ) and network browser 20 .
  • the network site entered may be a C-E site 16 c , 16 d accessed through the server 22 b and thus operating in accordance with the invention.
  • the network site 16 a , 16 b entered by a user may not offer the user the audio or visual image experience imparted by the invention.
  • at step S 3 ( FIG. 2 ), the user surfs the network site or the network.
  • at step S 4, upon entering a C-E site, the user is asked to associate with a character. Details related to character association are presented in the flow charts of FIG. 3 , which are described in detail below. In general, however, upon entering a character-enabled site the user is given the opportunity to choose from a group of pre-profiled characters or create a custom character. Each of the pre-profiled characters has a built-in profile corresponding to its personality. The user is further given the opportunity to adjust the profile of any of the given pre-profiled characters.
  • the user may be able to make choices regarding the pre-profiled character's hairstyle, ethnicity (skin tone), clothing (top, bottom, outerwear, fabric choice, brands, style, size, and color), eye wear, hat (style, fit, how to wear the hat), shoes, food/drinks to consume, vehicle to ride, accessories (cell phone, Palm Pilot) and background music.
  • once the user makes a choice, that choice is animated onto the character.
  • for example, if the user chooses a particular shoe for the character to wear from a group of four photos of shoes, that choice is transformed into an animated shoe.
  • the user makes a character selection.
  • the user is presented with a visual image display of a plurality of pre-profiled characters, each with a set of attributes ( FIG. 4 ).
  • a roll-over of each character highlights the character and may offer a sound bite indicative of the character's personality ( FIG. 5 ).
  • a continued roll-over of a character reveals a full figure of the character and audio or visual comments which further indicate the personality of the character ( FIG. 6 ).
  • the remaining characters are dismissed.
  • the character may be a previously-selected character which the user may have used in the past and which may be automatically associated with the user via the IP address plus cookie of the user's computer, or called up by the user from the database 24 .
  • the process for saving a character is described later.
  • the character may be one which is created by the user using any one of several well-known animation programs, such as Flash Animation or Cold Fusion. Data pertaining to the character selections made by a user are stored in the central database 24 at steps S 21 a , S 22 a and S 23 a.
  • At step S 24 the user is given the option to make attribute modifications. If the user does not want to modify his character, the user may begin to surf the network site and the network ( FIG. 2 , step S 5 ). If the user does want to modify his character, then any of a plurality of modifications may occur, depending on the options as defined by the site program/data. In one configuration, attribute modifications are controlled by a roll-over effect. As a user rolls over attributes, e.g., shirts, pants, hand-held devices, of a character, modifiable attributes highlight to indicate that choices are available ( FIG. 8 ).
  • The user may choose to modify his character's hair by selecting the color (step S 26 ) and length (step S 29 ). If the user chooses to modify the color, then at step S 27 the user is presented with a plurality of color choices. Once the selection is made, the selected choice is stored in the central database (step S 28 ). Likewise, if the user chooses to modify the length of hair, at step S 30 the user is presented with a plurality of length choices. Once the selection is made, the choice is stored in the central database (step S 31 ).
  • An example of an additional available modification is the option to change the shirt being worn by the character (step S 37 ).
  • At steps S 38 , S 39 , S 40 and S 41 the user is presented with a plurality of options regarding the brand ( FIG. 9 ), color, style and other options of the shirt. Once a choice is made by the user, the choice is displayed on the character ( FIG. 10 ). Selections made by the user are stored in the database 24 at steps S 42 , S 43 , S 44 and S 45 .
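The select-and-store loop of the modification steps above can be sketched as follows. This is a minimal illustration only: the function and option names, and the in-memory array standing in for the central database 24, are assumptions, not taken from the patent.

```javascript
// Minimal sketch of the select-and-store loop of steps S38-S45.
// The in-memory array stands in for the central database 24.
const centralDatabase = [];

// Options the site program might present for one modifiable attribute.
const shirtOptions = {
  brand: ['BrandA', 'BrandB', 'BrandC'],
  color: ['red', 'blue', 'green'],
  style: ['tee', 'polo', 'button-down'],
};

// Record one selection made by the user, as in storage steps S42-S45.
function storeSelection(characterId, attribute, option, choice) {
  const record = { characterId, attribute, option, choice };
  centralDatabase.push(record);
  return record;
}

// The user picks a brand and a color for the character's shirt.
storeSelection('char-001', 'shirt', 'brand', shirtOptions.brand[1]);
storeSelection('char-001', 'shirt', 'color', shirtOptions.color[0]);
```

Every presented option set and every recorded choice follows this same shape, which is what lets the single central database accumulate choices from any attribute on any C-E site.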
  • A character's persona may also be changed by adding attributes to the character. For example, at step S 32 the user is presented with the option of adding a hat to his character. If the user decides to have his character wear a hat then, at steps S 33 and S 34 , the user is also presented with options regarding the style and color of hat. Again, each selection made by a user is stored in the central database 24 at steps S 23 and S 24 .
  • At step S 46 the user decides if he wants to continue modifying his character. If the user decides to continue the modification process, the user proceeds to step S 47 where other character attributes may be changed, removed or added.
  • The number of available modifications which may be made to a character is within the control of the proprietor of the C-E site.
  • The character attributes available for modification are programmed into the site program and the necessary audio data and visual image data are stored in the data storage. By periodically revising the attribute selection, the site provides the user with new animation experiences.
  • The user may be rewarded for each choice made, for example, through the use of sound, e.g., “nice choice”, or character movement, e.g., hand clapping.
  • The user may decide to surf the network site in which the character was created.
  • The character accompanies the user as he navigates through the site.
  • The character may interact with the user through various comments and actions. For example, if the user is inactive within the site for a period of time, the character may start to tap his foot to entice the user to act.
  • Data regarding the portions of the network site visited by the user are stored in the database at step S 9 .
  • Data regarding the links selected by the user may be cross-referenced to the character and stored in the database.
  • The user has the option of further modifying his character's profile. Any modifications made to the character are stored in the central database 24 .
  • The user may choose to surf the network. This may be accomplished in several ways.
  • The C-E site in which the user currently resides may include links to other network sites. The user may choose to follow these links to the associated network sites.
  • The link from the C-E site 16 d may be to another C-E site 16 c or it may be to a network site 16 b that is not character-enabled.
  • The persona data of the character associated with the user may be transferred to the other C-E site.
  • The transfer of persona data may be accomplished by cookie sharing. For example, a string of JavaScript may be written to allow the other character-enabled site's 16 c cookie to recognize the cookie from the first C-E site 16 d.
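The cookie-sharing mechanism can be pictured as serializing the persona into a cookie value that a second site recognizes and decodes. The cookie name (`cePersona`) and the JSON encoding below are assumptions for illustration; in a browser, the string would be written to and read from `document.cookie`.

```javascript
// Sketch of persona-data transfer between C-E sites via a shared cookie
// value. The cookie name and encoding are illustrative assumptions.
function personaToCookie(persona) {
  return 'cePersona=' + encodeURIComponent(JSON.stringify(persona));
}

function personaFromCookie(cookieString) {
  // Find the cePersona value among the semicolon-separated cookie pairs.
  const match = cookieString.match(/(?:^|;\s*)cePersona=([^;]*)/);
  return match ? JSON.parse(decodeURIComponent(match[1])) : null;
}

// Site 16d writes the cookie; site 16c recognizes it and restores the persona.
const persona = { character: 'char-001', hair: 'brown', shirt: 'BrandB' };
const cookie = 'theme=dark; ' + personaToCookie(persona);
const restoredPersona = personaFromCookie(cookie);
```

If the second site finds no recognizable persona value, it simply falls back to the ordinary character-association flow for a new visitor.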
  • The links selected by the user and his associated character may be recorded in the central database 24 .
  • The central database 24 thus contains information as to the profile of the character and the links of interest to the character. This type of information may be beneficial to the proprietor of the network site as a means of determining the type of people who are visiting its network site.
  • Users of C-E sites may be able to share or exchange data.
  • The character-enabled sites may be configured to support a chat room or other virtual environment, wherein the various users may enter the room or environment under the guise of their character and communicate with each other via the user interface.
  • Character persona data is shared among visitors through, for example, JavaScript programming which presents data indicative of a character's persona to the audio/visual display of the user interface. This data may include a picture of the character, a sound bite from the character and/or a written description of the character. Communication between users is provided using well-known communications protocols such as those used by ICQ or AOL Instant Messenger.
  • Once the user is finished surfing the network site or the network, at step S 12 he is given the option of saving his character for future use. If the user selects to do so, then at step S 13 the user is asked to assign a name to his character. The user may also be asked to designate a password. Upon doing so, the user-assigned name is added to the central database and the attributes associated with the user's character, which are stored in the central database, are linked to the user-assigned name.
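The save-and-retrieve behavior of steps S 12 and S 13 can be sketched as linking the stored attributes to a user-assigned name guarded by an optional password. The object map standing in for the central database 24, and all field names, are illustrative assumptions.

```javascript
// Sketch of saving a character under a user-assigned name (steps S12-S13)
// and retrieving it later from another C-E site. The map stands in for
// the central database 24; names and fields are illustrative assumptions.
const savedCharacters = {};

function saveCharacter(name, password, attributes) {
  // Link the stored attributes to the user-assigned name.
  savedCharacters[name] = { password, attributes };
}

function retrieveCharacter(name, password) {
  const entry = savedCharacters[name];
  if (!entry || entry.password !== password) return null;
  return entry.attributes;
}

saveCharacter('SkaterKid', 'secret', { hair: 'long', shirt: 'BrandB' });
const restoredAttributes = retrieveCharacter('SkaterKid', 'secret');
```

Because the lookup key is the user-assigned name rather than a machine address, the same character can be called up from any C-E site that hosts the retrieval plug-in.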
  • The character created by the user may be retrieved from the central database 24 by the user through other C-E sites. This is accomplished by a plug-in, written for example in Java, located at the newly accessed C-E site. While within the new network site, the user may be able to further modify his character.
  • The plug-in also allows any changes a user makes to his character or any choices made on a network site to be stored in the central database 24 .
  • The central database 24 ( FIG. 1 ) comprises processes that gather, process and store data.
  • The database software may be implemented using Microsoft SQL7, Oracle8i or Access database programs.
  • The central database 24 comprises a plurality of tables which store data indicative of the activities occurring at each of the C-E sites. Such activities may include, but are not limited to, user selection and modification of characters, user navigation through a site, length of time at certain parts of a site, brand products selected and links followed. Essentially, each choice a user makes when within a C-E site is stored in the central database 24 .
  • An exemplary database table is shown in FIG. 11 .
  • The data stored at various points throughout the network exploration process (steps S 9 , S 10 , S 14 , S 21 a . . . S 45 ) are compiled in a main database table at step S 50 .
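The compilation of step S 50 can be sketched as grouping the per-step choice records under the character that made them. The record fields and step labels below are illustrative assumptions, not the table layout of FIG. 11.

```javascript
// Sketch of compiling per-step records into a main table (step S50).
// Record fields and step labels are illustrative assumptions.
const stepRecords = [
  { step: 'S28', characterId: 'char-001', choice: 'hair-color:brown' },
  { step: 'S43', characterId: 'char-001', choice: 'shirt-color:red' },
  { step: 'S43', characterId: 'char-002', choice: 'shirt-color:blue' },
];

// Group every recorded choice under the character that made it.
function compileMainTable(records) {
  const table = {};
  for (const r of records) {
    if (!table[r.characterId]) table[r.characterId] = [];
    table[r.characterId].push(r.choice);
  }
  return table;
}

const mainTable = compileMainTable(stepRecords);
```

The grouped form is what makes the later cross-referencing possible: every choice in a row can be read against the persona of the character that produced it.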
  • Outside parties, e.g., character-enabled site proprietors or customers, are given the opportunity to analyze the data.
  • The data may be analyzed using well-known market research techniques, including both qualitative and quantitative techniques, to develop taste, preference and opinion statistics of users.
  • The outside party is given the opportunity to combine the database data with third-party data, such as census data and income data.
  • The data is combined and analyzed.
  • The data is presented to the outside party.
  • The site program/data of a C-E site may be designed to provide a means of capturing data related to the identity, tastes, preferences and opinions of site users.
  • With regard to the identity of a user, by designing pre-profiled characters having a combination of attributes which define a character persona, the system is able to provide a means for determining the demographics of the users visiting a site. For example, if a user selects a pre-profiled character that is female, it is likely that the user is female. As a further example, if the pre-profiled character appears to be a certain age, the selected character is likely to be indicative of the age of the user. Additional character attributes may be indicative of user profession, income, geographic location and ethnicity.
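The demographic inference described above can be sketched as a tally over pre-profiled character selections. The profiles attached to each character below are illustrative assumptions: selecting the character is taken to imply the listed demographics.

```javascript
// Sketch of inferring user demographics from pre-profiled character
// selections. Character IDs and profile values are illustrative.
const preProfiledCharacters = {
  'char-001': { gender: 'female', ageGroup: 'teen' },
  'char-002': { gender: 'male', ageGroup: 'adult' },
};

// Tally the likely demographics of the users who selected each character.
function tallyDemographics(selections) {
  const tally = {};
  for (const id of selections) {
    for (const value of Object.values(preProfiledCharacters[id])) {
      tally[value] = (tally[value] || 0) + 1;
    }
  }
  return tally;
}

const demographicTally = tallyDemographics(['char-001', 'char-001', 'char-002']);
```

The tally is probabilistic rather than certain, matching the "it is likely" language above: the character stands as a proxy for the user, not as verified personal data.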
  • The present invention allows for the determination and collection of user information without asking the user to disclose personal information such as age, gender, name, e-mail address, etc.
  • The user may, however, give more personal information if he so chooses. For example, the geographic location of a user may be determined if the user chooses to provide his zip code.
  • The clothing, accessories, music and other attributes associated with a character with which a user identifies are likely to provide an indication of the general tastes, preferences and opinions of that user. Any attribute modifications made by the user provide further insight into the tastes, preferences and opinions of that user.
  • The present invention provides a means by which the tastes, preferences and opinions of a portion of the public, i.e., the users of character-enabled sites, may be monitored by manufacturers of consumer products. For example, a clothing manufacturer may use the system to test market a new style of shirt. The manufacturer would incorporate the animation software and animation data necessary to display a number of shirts of varying styles into an existing character-enabled site or, alternatively, establish its own character-enabled site.
  • The number of hits each specific shirt style experiences is tallied and stored in the central database 24 .
  • Each hit may also be cross-referenced to the persona of the character making the hit.
  • The system collects data indicative of the demographics of the users and the styles of shirts favored by the users who fall within a specific demographic.
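The test-marketing tally above amounts to a cross-tabulation of hits per shirt style against the demographic of the character making the hit. The style names and demographic labels below are illustrative assumptions.

```javascript
// Sketch of the test-marketing tally: hits per shirt style,
// cross-referenced to the demographic of the hitting character.
function crossTabulate(hits) {
  const counts = {};
  for (const { style, demographic } of hits) {
    const key = style + '/' + demographic;
    counts[key] = (counts[key] || 0) + 1;
  }
  return counts;
}

const shirtHits = [
  { style: 'polo', demographic: 'teen' },
  { style: 'polo', demographic: 'teen' },
  { style: 'tee', demographic: 'adult' },
];
const styleCounts = crossTabulate(shirtHits);
```

Because the counts accumulate as choices are made, the manufacturer can read the tallies at any moment, which is what makes the market feedback real-time rather than survey-delayed.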
  • Additional taste, preference and opinion data may be collected regarding the most popular color for each shirt by providing the user a palette of shirt colors from which to choose.
  • Taste, preference and opinion data may be collected on virtually any consumer product.
  • An automobile manufacturer may test market car options and accessories.
  • A beverage manufacturer may test market a new can design.
  • A cellular telephone manufacturer may gather information on preferred size, shape and color of cell phones.
  • The system of the present invention may be used to conduct opinion surveys on political issues and current events. For example, a user may be presented with animations representative of political figures and asked to choose which character he wants to be. A user may be presented with an animation of a character holding an empty can and asked to choose between dropping the can in the street or into a trash can.
  • The system of the present invention provides for the compilation and provision of data about a target audience.
  • The system provides the data necessary to determine market trends in real-time and forecast trends based on the popularity of certain profiles and choices made by users.
  • The system allows companies to test market products through specific profiles that are programmed into the system to thereby derive marketing answers in real-time.
  • Quick response time to trends is a crucial factor in determining the success of a marketing program.
  • The present invention provides for such a response.

Abstract

A character having a plurality of attributes is created by a network user while within a character-enabled network site. Each attribute is defined by at least one of either audio data and/or visual image data and is selected by the user from a plurality of attributes presented to the user through a user interface. The combination of attributes defines a persona for the character. At least one of either an audio presentation and/or a visual image presentation is provided to the user interface. The presentations presented are selected from a plurality of presentations based on the character's persona. Data related to character attributes are stored in a database. One or more of the presentations presented to the user may be interactive, in that it allows for the user to make choices.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of pending U.S. application Ser. No. 11/186,723, filed Jul. 20, 2005, which is a continuation of U.S. application Ser. No. 09/614,572, filed Jul. 12, 2000 and issued Oct. 4, 2005 as U.S. Pat. No. 6,952,716.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The invention relates generally to an apparatus and method for presenting data over an information network based on choices made by the users of the network and collecting data related to the choices made by the users. More particularly, the invention relates to an apparatus and method for presenting audio presentations and visual image presentations to a network user based on choices made by the user while in a network site and collecting data related to the choices in real-time. As used herein “visual image” is broadly defined as drawn, printed or modeled objects, characters or scenes, including still, animation, motion, live action and video. Throughout the specification, the term “character” is used to describe certain aspects and features of the invention, for example, the term “character-enabled” is often used. The use of “character” instead of a collective “character, object or scene” is done for ease in readability of the specification and is not intended in any way to limit the scope of the invention.
  • 2. Description of Related Art
  • The information and data made available over a network site is typically the same for each visitor to that network site. For example, in the context of the world-wide-web (“the web”), each visitor to a web site is generally presented the same audio and visual image data contained within the various web pages comprising the web site. Links presented on the web pages generally transfer the visitor to other web pages or in some cases to other web sites. All in all, contemporary web sites are static in nature in that they fail to take into consideration the individuality of their visitors and instead present to each visitor a substantially identical audio/visual experience. As a result, visitors to contemporary web sites often become bored with the web site in a relatively short time thereby reducing visitor time on a web site and the possibility of frequent, repeat visits by the user.
  • Hence, those concerned with increasing network site loyalty have sensed the need for an apparatus and method for presenting to network users audio data and visual image data that are indicative of the individuality of the network user. The present invention fulfills this need and others.
  • The collection of data related to the personal choices and preferences of an individual is essential for effective market research. The major purpose of market research is to minimize the risk to be undertaken by a company. By itself, market research is rarely conclusive, but instead is a useful tool to enable companies to make decisions that are more informed. Market research is used for a variety of purposes, including: market strategy, product development, product adoption, program evaluation, price sensitivity, name and message testing, awareness, usage, attitude, and behavior tracking, advertising testing, market tracking, customer satisfaction, customer profiling and segmentation, corporate image studies, employee satisfaction, bench marking and public opinion polls.
  • There are two basic types of market research, qualitative and quantitative. Qualitative research involves the more “touchy-feely” aspect of gauging tastes, preferences and opinions, and includes focus groups, on-line focus groups, one-on-one interviews and executive interviews. Quantitative research involves the sampling of a base of respondents to enable the statistical inference of the data over a larger population. The data obtained is tabulated into useful categories that allow the researcher to draw statistically-sound conclusions. Quantitative research includes telephone surveys, mail surveys, intercept surveys and e-mail surveys.
  • Current market research is expensive and often time consuming. For example, for a hypothetical manufacturing company to gauge the tastes, preferences and opinions of the teen market as a basis to improve product development and enhance revenues, it has been suggested that focus groups, on-line focus groups and mall intercepts are the best approaches.
  • The cost estimate for a market research firm to conduct, analyze and summarize a focus group with between eight to ten people is between $4,000 and $6,000. Market research firms also employ the Internet to conduct focus group studies. Some firms have a database of e-mail addresses of individuals who have agreed to be surveyed on an as-needed basis, while other firms purchase lists of e-mail addresses that fit a targeted profile. These focus groups are conducted by showing a user pictures of products or a concept and then posing a series of questions to the user. Those responses are then tabulated with the responses from other users. The costs associated with on-line focus groups are similar to regular focus groups.
  • The most common quantitative method suggested for teen-market analysis is mall intercepts. In a mall intercept, interviewers intercept mall shoppers that meet a certain targeted profile. These individuals are then interviewed for no more than twenty minutes and asked product and concept questions. The cost to perform a mall-intercept study varies, depending on the number of respondents targeted, the malls involved, and the time involved to conduct the surveys. For example, the cost of a mall intercept, in which 1,000 responses are received from shoppers in several geographic regions throughout the US may be as high as $100,000.
  • Hence, those concerned with collecting information related to user and consumer choices and preferences have sensed a need for an apparatus and method that enables a less expensive, more efficient and more reliable means of capturing specific and broad-base data on users, consumers and products. A need has also been felt for an apparatus and method of collecting market research data in real-time. The present invention clearly fulfills these needs and others.
  • SUMMARY OF THE INVENTION
  • Briefly, and in general terms, the present invention is directed to an apparatus and method that employs selectable and modifiable animation to collect data related to the choices made by the users of an information network.
  • In a first aspect, the invention relates to a method having application within an information network having at least one character-enabled network site. The method provides for the presentation of data to a network user based on choices made by the user while the user is within a character-enabled network site. In its basic form the method includes the step of creating a character having a plurality of attributes. Each attribute is selected by the user from a plurality of attributes presented to the user through a user interface to create a persona for the character. Each attribute is defined by at least one of either audio data and/or visual image data. An attribute may comprise one or more pieces of audio data, one or more pieces of visual image data or a combination of one or more pieces of audio data and visual image data. The method further includes the step of providing to the user interface, at least one of either an audio presentation or a visual image presentation selected from a plurality of presentations based on the persona of the character created.
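The character-creation step described in this aspect can be pictured as a simple data structure: a character is a collection of attributes, each defined by audio data and/or visual image data, and the combination of attributes forms the persona. The following is a minimal sketch; all function and field names are illustrative assumptions, not taken from the claims.

```javascript
// Sketch of the claimed character structure: each attribute carries audio
// data, visual image data, or both; the combination forms the persona.
function makeAttribute(name, { audio = null, image = null } = {}) {
  if (!audio && !image) {
    throw new Error('an attribute needs audio data, image data, or both');
  }
  return { name, audio, image };
}

function makeCharacter(attributes) {
  return {
    attributes,
    // The persona is the combination of the selected attributes.
    persona: attributes.map((a) => a.name),
  };
}

const character = makeCharacter([
  makeAttribute('hair', { image: 'hair_long.gif' }),
  makeAttribute('greeting', { audio: 'hello.wav' }),
]);
```

Presentations selected "based on the persona" then reduce to lookups keyed on this attribute combination.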
  • By providing audio and visual image presentations to the user interface based on the persona of the created character, the present invention presents to the user a customized audio and/or visual image experience while the user is visiting the network site.
  • In a more detailed facet of the invention, the method further comprises the step of storing persona data indicative of the selected attributes. By storing this data, the present invention allows for the collection of user choices which may be indicative of the user's tastes, preferences and opinions. In another detailed aspect, the plurality of presentations may include passive presentations and interactive presentations, each in turn comprising one or both of a visual image displayed on the user interface and sound heard through the user interface. In another detailed facet, when an interactive presentation is provided to the user interface, the method further includes the step of, in response to user interaction with the interactive presentation, providing to the user interface at least one of either an audio presentation and/or a visual image presentation selected from the plurality of presentations. By providing audio and/or visual image presentations to the user interface based on the response made by the user to an interactive presentation the present invention allows for further customization of the audio/visual experience. In yet another detailed aspect of the invention, the method further includes the step of storing data indicative of user interaction with the interactive presentation.
  • In a second aspect, the invention relates to an apparatus for presenting data to a network user based on choices made by the user while within a character-enabled network site. The apparatus includes a character processor for creating a character having a plurality of attributes. Each attribute is selected by the user from a plurality of attributes presented to the user through a user interface to create a persona for the character. Each attribute is defined by audio data and/or visual image data. The apparatus further includes a selection processor for providing to the user interface, at least one of either an audio presentation and/or a visual image presentation selected from a plurality of presentations based on the persona of the character created.
  • In a third aspect, the invention relates to a method having application within an information network having at least one character-enabled network site. The method provides for the presentation of data to a network user based on choices made by the user while the user is within a character-enabled network site. In its basic form the method includes the step of associating a character with the user. The character has a plurality of attributes, each defined by at least one of either audio data and/or visual image data. The plurality of attributes collectively defines a character persona. The method further includes the step of providing to the user interface, at least one interactive presentation selected from a plurality of presentations based on the character persona. The interactive presentation is defined by audio data and/or visual image data. Also included in the method is the step of, in response to user interaction with the interactive presentation, providing to the user interface at least one of another interactive presentation and a passive presentation. The passive presentation is defined by at least one of audio data and visual image data.
  • By providing one or more of either an interactive or a passive presentation to the user interface based on the responses and choices made by the user to an interactive presentation, the present invention takes into account the actions of the user, which are likely to be indicative of the tastes, preferences and opinions of the user, and customizes the audio/visual experience presented to the user accordingly.
  • In a detailed aspect of the invention, the step of providing to the user interface, at least one interactive presentation selected from a plurality of presentations based on the character persona includes the steps of linking the character persona with interactive presentations of interest; and selecting for presentation to the user interface those interactive presentations that are linked with the character persona. In another facet of the invention, the step of providing to the user interface at least one of another interactive presentation and a passive presentation in response to user interaction with the interactive presentation comprises the steps of linking the user interaction with other interactive presentations and passive presentations of interest; and selecting for presentation to the user interface, those other interactive presentations and passive presentations that are linked with the character persona.
  • In a fourth aspect, the invention relates to an apparatus for presenting data to a network user based on choices made by the user while within a character-enabled network site. The apparatus includes a character processor for associating a character with the user. The character has a plurality of attributes, each attribute defined by at least one of either audio data and/or visual image data. The plurality of attributes collectively defines a character persona. In a basic configuration of the apparatus the character processor may comprise a user interface functioning in cooperation with site programs which may be resident in the character-enabled network site. The apparatus further includes a selection processor for providing to the user interface, at least one interactive presentation selected from a plurality of presentations based on the character persona. The interactive presentation is defined by audio data and/or visual image data. The selection processor also, in response to user interaction with the interactive presentation, provides to the user interface at least one of another interactive presentation and a passive presentation. The passive presentation is defined by at least one of either audio data and/or visual image data. In a basic configuration of the apparatus the selection processor may comprise site programs which may be resident in the character-enabled network site. These site programs operate in conjunction with various stored audio data/presentations and visual image data/presentations to provide the presentations to the user interface.
  • In a fifth aspect, the invention relates to a method that finds application within an information network having a database and at least one character-enabled network site accessible through a user interface with audio and visual image presentation capability. The method is for obtaining and storing data indicative of one or more attribute selections made by a network user while within the character-enabled network site. The method includes the steps of storing at least one of either audio data and/or visual image data of a plurality of characters, each character having at least one associated modifiable attribute. For each modifiable attribute the method further includes the step of storing at least one of either audio data and/or visual image data of at least one modification attribute. The method also includes the step of presenting the plurality of characters to the user through the user interface for selection by the user. Upon selection of a character, the method includes the step of storing data indicative of the selected character in a database and presenting the at least one modification attribute to the user through the user interface for selection by the user. Upon selection of the modification attribute, the method further includes the step of storing data indicative of the selected modification attribute in the database.
  • In a sixth aspect, the invention relates to an apparatus for obtaining and storing data indicative of one or more attribute selections made by a network user through a user interface with audio and visual image presentation capability. The apparatus includes a character memory storing at least one of either audio data and/or visual image data of a plurality of characters, each having at least one associated modifiable attribute. For each modifiable attribute, the apparatus further includes an attribute memory for storing at least one of either audio data and/or visual image data of at least one modification attribute. The apparatus also includes a processor for presenting the plurality of characters to the user through the user interface for selection by the user. Upon selection of a character, the processor presents the at least one modification attribute to the user for selection by the user. Further included in the apparatus is a database for storing data indicative of the selected character and the selected at least one modification attribute.
  • In a seventh aspect, the invention relates to a method finding application in an information network having at least one character-enabled network site. The method is for sharing data among network users based on choices made by each of the users while within a character-enabled network site. The method includes the steps of, for each user, creating a character having a plurality of attributes. Each attribute is selected by the user from a plurality of attributes presented to the user through a user interface to create a character profile. Each attribute is defined by at least one of either audio data and/or visual image data. The method also includes the step of providing to at least one user interface, at least one of either an audio presentation and/or a visual image presentation indicative of at least one other character profile. Also included is the step of providing a communications link between the users.
  • These and other features and advantages of the invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, which illustrate by way of example the features of the invention.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of an information network including a user side and a network-site side having character-enabled network sites operating in accordance with the present invention;
  • FIG. 2 is a top-level flowchart depicting the process by which a network user explores the information network of FIG. 1;
  • FIG. 3 is a detailed flowchart depicting the process by which a user interacts with the character-enabled network sites of FIG. 1;
  • FIG. 4 depicts a page of an exemplary character-enabled network site having a collection of pre-profiled characters;
  • FIG. 5 depicts a follow-up to the screen of FIG. 4, in which one of the pre-profiled characters has been selected in order to gather additional information related to the persona of the character;
  • FIG. 6 depicts a follow-up screen to the screen of FIG. 5, in which a detail of the selected pre-profiled character is presented and animated comments indicative of the character's persona are presented;
  • FIG. 7 depicts a follow-up screen to the screen of FIG. 6, in which the remaining characters are dismissed and the opportunity to modify the selected pre-profiled character is presented;
  • FIG. 8 depicts a follow-up screen to the screen of FIG. 7 in which a roll-over of the shirt causes the shirt to highlight thereby indicating that the shirt may be modified;
  • FIG. 9 depicts a follow-up screen to the screen of FIG. 8 in which several choices with regard to the brand of shirt are presented;
  • FIG. 10 depicts a follow-up screen to the screen of FIG. 9 in which the shirt selected is displayed on the character;
  • FIG. 11 depicts an exemplary database table including records of choices made by network users; and
  • FIG. 12 is a flow chart depicting the process of collecting and analyzing the data generated by users when exploring character-enabled network sites.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • Referring now to the drawings, wherein like reference numerals denote like or corresponding parts throughout the drawing figures, and particularly to FIG. 1, there is shown an information network including a user side 10 and a network-site side 12 interfacing through a network 14. The network 14 provides the means through which a user may access a plurality of network sites 16 a, 16 b and character-enabled network sites (“C-E sites”) 16 c, 16 d. The features of the C-E sites 16 c, 16 d are described in detail below. The network 14 may include, by way of example, but not necessarily by way of limitation, the Internet, Internet II, Intranets, and similar evolutionary versions of same.
  • The client side 10 includes a user interface 18 and network browser 20 through which a user may communicate with the network-site side 12 via the network 14. The user interface 18 may include a personal computer, network work station or any other similar device having a central processing unit (CPU) and monitor with at least one of audio presentation, i.e. sound, capability and visual image presentation, e.g. video, animation, etc., capability. Other devices may include portable communication devices that access the information network, such as cellular telephones or hand held devices, e.g., Palm Pilots. The client side 10 further includes a graphical user interface (GUI) that facilitates communication between the client side and the network-site side 12. Client-side software may be resident in the user interface 18. Alternatively, the client-side software may be network-based software capable of being accessed over the network 14. For example, a user may be able to access the client-side software directly on the World-Wide-Web (“the Web”).
  • The network-site side 12 includes a plurality of network sites 16 a-16 d and associated servers 22 a, 22 b. Also included on the network-site side 12 is a central database 24 for storing information and a search engine 26. The server 22 b houses a program memory 28 for storing the network-site software programs, i.e., “site programs”, which operate each of the C-E sites 16 c, 16 d in accordance with the invention. Also housed within the program memory 28 is the search engine software and database software. The server 22 b also houses a source data 30 for storing the data required by the site programs. While FIG. 1 depicts only one server 22 b with two associated C-E sites, 16 c, 16 d, the information network may include any number of these items. The other server 22 a on the network-site side 12 includes similar memory and storage devices, which for ease of illustration are not depicted. The devices store the programs and data necessary to operate the network sites 16 a, 16 b associated with the server 22 a. In the exemplary information network of FIG. 1, however, these network sites 16 a, 16 b are not configured to operate as character-enabled sites.
  • In accordance with the invention, C-E sites 16 c, 16 d operate under the control of site programs housed in the program memory 28. The site programs are created in browser usable file formats, such as but not limited to JavaScript, Flash Animation (.SWF), HTML, dHTML, CGI, ASP and Cold Fusion, to present either one or both of audio data/presentations and visual image data/presentations to the user interface 18. The audio data and visual image data required by the site programs is stored in the source data 30.
  • The site programs are designed to provide to the user interface 18 audio presentations and visual image presentations tailored to the “persona” of a character, as defined by a network user. These audio presentations and visual image presentations are selected from a plurality of presentations resident within the information network. The “persona” of a character is defined by a number of attributes, which in turn are defined by at least one of audio data and visual image data. “Attributes” as used herein means a quality or characteristic inherent or ascribed to a character, object, or scene. Character attributes may include physical characteristics, emotional characteristics, personal interests, opinions and preferences. Object and scene attributes generally include but are not limited to physical characteristics. The persona of a character may be further defined by the actions of the character, as controlled by the user through the user interface 18.
  • In accordance with the present invention, the “attribute” aspect of a character persona may be defined by a user in any of several ways. For example, the character may have a pre-determined persona which the user may choose to adopt. Alternatively the user may modify or customize the persona of a pre-profiled character. Additionally, the user may create his own character persona from scratch. Each of these character development approaches is described more fully below. The “action” aspect of a character persona is defined by the user based on how the user interacts with the audio presentations and visual image presentations provided to the user interface.
  • The persona of a character determines the experience the user has on the C-E site 16 c, 16 d. Different characters call up different audio presentations and visual image presentations. For example, depending on the persona of the character selected, different music, games, books, movies, and videos may be provided to the user interface 18. The present invention cross references or links character attributes and character actions to specific audio presentations or visual images presentations. This cross referencing or linking may be accomplished through a look-up table or through frame technology. Using the attributes and actions associated with a given character, the site program determines which audio presentation and visual image presentations to present to the user interface 18.
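The cross-referencing of character attributes to specific presentations described above can be pictured as a simple look-up table. The following is a minimal Python sketch of that idea; the attribute keys, asset file names, and function name are illustrative assumptions, not taken from the specification:

```python
# Hypothetical look-up table cross-referencing character attributes to
# audio and visual presentation assets; every key and file name here is
# illustrative only.
ATTRIBUTE_PRESENTATIONS = {
    "athletic_shoes": {"audio": ["stadium_crowd.mp3"], "visual": ["sports_banner.swf"]},
    "trance_music": {"audio": ["trance_loop.mp3"], "visual": ["club_visuals.swf"]},
}

def presentations_for(attributes):
    """Gather every audio and visual presentation linked to the
    attributes that make up a character's persona."""
    audio, visual = [], []
    for attribute in attributes:
        entry = ATTRIBUTE_PRESENTATIONS.get(attribute, {})
        audio.extend(entry.get("audio", []))
        visual.extend(entry.get("visual", []))
    return audio, visual
```

A site program could call such a function after each attribute selection to decide what to send to the user interface 18; the specification also mentions frame technology as an alternative to a look-up table.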
  • With regard to pre-profiled characters, the site program in combination with the audio data and visual image data stored in the source data 30 define one or more pre-profiled characters. The site program/data defines the characters such that each has his or her own persona. An example of several characters is presented in FIG. 4. A detail of one of these characters is presented in FIG. 6. The user gets a quick glimpse of the character's persona in two ways. First, the user sees what the character looks like and how he is dressed. Second, as the user does a roll-over of each character, there is a visual or audio response that gives the user a sense of that character's personality.
  • As previously mentioned in accordance with the invention, the site programs are designed to provide to the user interface 18 audio presentations and visual image presentations directed toward the persona of a character. In the case of a pre-profiled character, the pre-defined attributes of the character determine the audio presentations and visual image presentations provided to the user interface 18.
  • With regard to customized characters, the site program/data provides the audio data or visual image data necessary to modify or change select attributes of a pre-profiled character. For example, as shown in FIG. 9, the site program/data may present to the user a pre-profiled character of a human figure wearing a “brand A” shirt, while further presenting visual images representative of selectable attributes, e.g., brand B, brand C or brand D shirts. As a subset of the attribute selections, the site program/data may provide for further modification of an attribute. For example, once the visual image data for a specific brand is presented and selected, the site program/data may present to the user the option of changing the style, size or color of the shirt.
  • As an additional feature of the present invention, the site program monitors the development of a customized character, notes the attributes modifications and selections made by the user and selects the audio presentations and visual image presentations provided to the user interface 18 accordingly. More specifically, the site program keeps track of the character attributes selected and modified by a user. Certain C-E site information is associated with certain character attributes and actions. For example, if a user decides that his character will wear athletic shoes then audio presentations and visual image presentations related to sports are provided to the user interface 18. If the user selects trance music as background music to accompany his character then audio presentations and visual image presentations related to that type of music are provided to the user interface 18.
  • With regard to created characters, the site program/data may allow the user to create a character from scratch. This may be done using commercially available animation programs such as Flash Animation (.swf) and Cold Fusion. Similar to the customized character, the site program monitors the development of a created character, notes the attributes of the created character and selects the audio presentations and visual image presentations provided at the user interface 18 accordingly.
  • As previously mentioned, when within a C-E site, the user interface 18 is provided with at least one of an audio presentation or a visual image presentation. The presentations provided are selected from a plurality of presentations resident within the information network based on the persona of the character. Exemplary audio presentations include background music, sound effects, dialog and character comments. Exemplary visual presentations include background scenery, text-identified links, pictorial-identified links, pop-up menus and windows.
  • These presentations may be further categorized as being either passive or interactive. Interactive presentations allow for the user to make an action-related choice via the user interface 18. For example, the user interface 18 may be provided with a text-identified link that gives the user the choice to follow the link to another page on the C-E site or to another network site. As another example, a pop-up window may appear on the user interface 18 asking the user a survey question. Many other interactive presentations may be provided to the user interface 18. Passive presentations, on the other hand, do not allow for user interaction. An example of a passive presentation is a non-hyperlinked text or graphic. As an additional feature of the present invention, the choices made by a user in response to the interactive presentations may be used to further define the persona of the character and to adjust the audio presentations and visual image presentations provided at the user interface 18.
  • In operation, with reference to FIG. 2, at steps S1 and S2, a user enters a network site via the user interface 18 (FIG. 1) and network browser 20. The network site entered may be a C-E site 16 c, 16 d accessed through the server 22 b and thus operating in accordance with the invention. Alternatively, the network site 16 a, 16 b entered by a user may not offer the user the audio or visual image experience imparted by the invention. In this situation the user, at step S3 (FIG. 2), surfs the network site or the network.
  • At step S4, upon entering a C-E site, the user is asked to associate with a character. Details related to character association are presented in the flow charts of FIG. 3, which are described in detail below. In general, however, upon entering a character-enabled site the user is given the opportunity to choose from a group of pre-profiled characters or create a custom character. Each of the pre-profiled characters has a built-in profile corresponding to its personality. The user is further given the opportunity to adjust the profile of any of the given pre-profiled characters. For example, the user may be able to make choices regarding the pre-profiled character's hairstyle, ethnicity (skin tone), clothing (top, bottom, outerwear, fabric choice, brands, style, size, and color), eye wear, hat (style, fit, how to wear the hat), shoes, food/drinks to consume, vehicle to ride, accessories (cell phone, Palm Pilot) and background music. As a user makes a choice, that choice is animated onto the character. As an example, when the user chooses a particular shoe for the character to wear from a group of four photos of shoes, that choice is transformed into an animated shoe.
  • With reference to FIG. 3, at steps S20-S23, the user makes a character selection. For example, at step S21, the user is presented with a visual image display of a plurality of pre-profiled characters, each with a set of attributes (FIG. 4). A roll-over of each character highlights the character and may offer a sound bite indicative of the character's personality (FIG. 5). A continued roll-over of a character reveals a full figure of the character and audio or visual comments which further indicate the personality of the character (FIG. 6). Upon selection of a character, the remaining characters are dismissed.
  • Alternatively, at step S22, the character may be a previously-selected character which the user may have used in the past and which may be automatically associated with the user via the IP address plus cookie of the user's computer, or called up by the user from the database 24. The process for saving a character is described later. In addition, at step S23, the character may be one which is created by the user using any one of several well-known animation programs, such as Flash Animation or Cold Fusion. Data pertaining to the character selections made by a user are stored in the central database 24 at steps S21 a, S22 a and S23 a.
  • Once the user has selected his new character or accessed his previously-used character, at step S24, the user is given the option to make attribute modifications. If the user does not want to modify his character, the user may begin to surf the network site and the network (FIG. 2, step S5). If the user does want to modify his character, then any of a plurality of modifications may occur, depending on the options as defined by the site program/data. In one configuration, attribute modifications are controlled by a roll-over effect. As a user rolls over attributes, e.g., shirts, pants, hand-held devices, of a character, modifiable attributes highlight to indicate that choices are available (FIG. 8). For example, at step S25, the user may choose to modify his character's hair by selecting the color (step S26) and length (step S29). If the user chooses to modify the color then at step S27 the user is presented with a plurality of color choices. Once the selection is made the selected choice is stored in the central database (step S28). Likewise, if the user chooses to modify the length of hair, at step S30 the user is presented with a plurality of length choices. Once the selection is made, the choice is stored in the central database (step S31). An example of an additional available modification is the option to change the shirt being worn by the character (step S37). If the user chooses to modify the shirt then at steps S38, S39, S40 and S41 the user is presented with a plurality of options regarding the brand (FIG. 9), color, style and other options of the shirt. Once a choice is made by the user, the choice is displayed on the character (FIG. 10). Selections made by the user are stored in the database 24 at steps S42, S43, S44 and S45.
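The modify-then-store loop of steps S25-S45 can be pictured as a character object that both applies and logs every selection. The class, method, and field names below are illustrative assumptions for the sake of the sketch, not part of the specification:

```python
class Character:
    """Minimal sketch of a customizable character whose every attribute
    choice is both applied (so it can be animated onto the character)
    and logged for storage in the central database."""

    def __init__(self, name, attributes=None):
        self.name = name
        self.attributes = dict(attributes or {})
        self.choice_log = []  # stands in for writes to the central database 24

    def modify(self, attribute, choice):
        # Apply the choice so the updated character can be displayed...
        self.attributes[attribute] = choice
        # ...and record it, as steps S28, S31 and S42-S45 record choices.
        self.choice_log.append((attribute, choice))
```

For example, a pre-profiled character wearing a "brand A" shirt would, after `modify("shirt", "brand B")`, display the brand B shirt while the choice is queued for the database.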
  • A character's persona may also be changed by adding attributes to the character. For example, at step S32 the user is presented with the option of adding a hat to his character. If the user decides to have his character wear a hat then, at steps S33 and S34, the user is also presented with options regarding the style and color of hat. Again, each selection made by a user is stored in the central database 24.
  • At step S46, the user decides if he wants to continue modifying his character. If the user decides to continue the modification process, the user proceeds to step S47, where other character attributes may be changed, removed or added. The number of available modifications which may be made to a character is within the control of the proprietor of the C-E site. The character attributes available for modification are programmed into the site program and the necessary audio data and visual image data are stored in the data storage. By periodically revising the attribute selection, the site provides the user with new animation experiences. As an incentive to get users to make modifications to their characters, the user may be rewarded for each choice made, for example, through the use of sound, e.g. “nice choice”, or character movement, e.g. hand clapping.
  • Returning to FIG. 2, once the user has exhausted all possible attribute modification options and has completed the customization of his character, at steps S5 and S6, the user may decide to surf the network site in which the character was created. The character accompanies the user as he navigates through the site. Depending on the site program/data, the character may interact with the user through various comments and actions. For example, if the user is inactive within the site for a period of time, the character may start to tap his foot to entice the user to act. Data regarding the portions of the network site visited by the user are stored in the database at step S9. For example, data regarding the links selected by the user may be cross-referenced to the character and stored in the database. As an additional feature, when the user is surfing the C-E site wherein his character was created, the user has the option of further modifying his character's profile. Any modifications made to the character are stored in the central database 24.
  • At step S7 the user may choose to surf the network. This may be accomplished in several ways. For example, the C-E site in which the user currently resides may include links to other network sites. The user may choose to follow these links to the associated network sites. With reference to FIG. 1, the link from the C-E site 16 d may be to another C-E site 16 c or it may be to a network site 16 b that is not character-enabled. If the user follows a link to another C-E site 16 c, the persona data of the character associated with the user may be transferred to the other C-E site. The transfer of persona data may be accomplished by cookie sharing. For example, a string of JavaScript may be written to allow the other C-E site's 16 c cookie to recognize the cookie from the first C-E site 16 d.
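The cookie-based transfer of persona data amounts to a serialize/deserialize round trip: the first C-E site writes the persona into a cookie-safe string, and the second site decodes it. The following is an illustrative Python analogue of the JavaScript approach the text describes, with invented function names, not the patented implementation:

```python
import base64
import json

def encode_persona_cookie(persona: dict) -> str:
    """Serialize a character persona into a cookie-safe string that a
    second character-enabled site could recognize and decode."""
    return base64.urlsafe_b64encode(json.dumps(persona).encode()).decode()

def decode_persona_cookie(value: str) -> dict:
    """Recover the persona data set by the first character-enabled site."""
    return json.loads(base64.urlsafe_b64decode(value.encode()).decode())
```

Because the encoding is a pure round trip, the second site sees exactly the attribute choices made on the first site and can render the same character.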
  • The links selected by the user and his associated character may be recorded in the central database 24. The central database 24 thus contains information as to the profile of the character and the links of interest to the character. This type of information may be beneficial to the proprietor of the network site as a means of determining the type of people who are visiting its network site.
  • As an additional aspect of the invention, users of C-E sites may be able to share or exchange data. For example, the character-enabled sites may be configured to support a chat room or other virtual environment, wherein the various users may enter the room or environment under the guise of their character and communicate with each other via the user interface. Character persona data is shared among visitors through, for example, JavaScript programming which presents data indicative of a character's persona to the audio/visual display of the user interface. This data may include a picture of the character, a sound bite from the character and/or a written description of the character. Communication between users is provided using well-known communications protocols such as that used by ICQ or AOL Instant Messenger.
  • Once the user is finished surfing the network site or the network, at step S12, he is given the option of saving his character for future use. If the user selects to do so then at step S13 the user is asked to assign a name to his character. The user may also be asked to designate a password. Upon doing so, the user-assigned name is added to the central database and the attributes associated with the user's character, which are stored in the central database, are linked to the user-assigned name.
  • In accordance with the present invention, the character created by the user may be retrieved from the central database 24 by the user through other C-E sites. This is accomplished by a plug-in, written for example in Java, located at the newly accessed C-E site. While within the new network site, the user may be able to further modify his character. The plug-in also allows any changes a user makes to his character or any choices made on a network site to be stored in the central database 24.
  • The central database 24 (FIG. 1) comprises processes that gather, process and store data. The database software may be implemented using Microsoft SQL7, Oracle8i or Access database programs. In an exemplary embodiment, the central database 24 comprises a plurality of tables which store data indicative of the activities occurring at each of the C-E sites. Such activities may include, but are not limited to, user selection and modifications of character, user navigation through a site, length of time at certain parts of a site, brand product selected and links followed. Essentially, each choice a user makes when within a C-E site is stored in the central database 24. An exemplary database table is shown in FIG. 11.
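A minimal sketch of such a choice table follows, using SQLite in place of the SQL7, Oracle8i or Access programs the text names; the table and column names are illustrative assumptions:

```python
import sqlite3

# In-memory stand-in for the central database 24.
conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE user_choices (
        character_id TEXT,   -- user-assigned character name
        site         TEXT,   -- which C-E site recorded the choice
        attribute    TEXT,   -- e.g. 'shirt', 'hair color'
        choice       TEXT,   -- e.g. 'brand B', 'red'
        recorded_at  TEXT DEFAULT CURRENT_TIMESTAMP
    )
    """
)

def record_choice(character_id, site, attribute, choice):
    """Store one user choice, mirroring the database writes at
    steps S28, S31 and S42-S45."""
    conn.execute(
        "INSERT INTO user_choices (character_id, site, attribute, choice) "
        "VALUES (?, ?, ?, ?)",
        (character_id, site, attribute, choice),
    )
```

Keeping one row per choice means the table can later be aggregated by attribute, by character or by site, which is the raw material for the market-research analysis described below.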
  • With reference to FIG. 12, the data stored at various points throughout the network exploration process (steps S9, S10, S14, S21 a . . . S45) is compiled in a main database table at step S50. At step S51 outside parties, e.g., character-enabled site proprietors or customers are given the opportunity to analyze the data. At step S52, the data may be analyzed, using well known market research techniques, including both qualitative and quantitative techniques to develop taste, preference and opinion statistics of users. At step S53, the outside party is given the opportunity to combine the database data with third-party data, such as census data and income data. At steps S54 and S55, the data is combined and analyzed. At step S56 the data, either analyzed or unanalyzed, is presented to the outside party.
  • In accordance with the present invention, the site program/data of a C-E site may be designed to provide a means of capturing data related to the identity, tastes, preferences and opinions of site users. With respect to the identity of a user, by designing pre-profiled characters having a combination of attributes which define a character persona, the system is able to provide a means for determining the demographics of the users visiting a site. For example, if a user selects a pre-profiled character that is female, it is likely that the user is female. As a further example, if the pre-profiled character appears to be a certain age, the selected character is likely to be indicative of the age of the user. Additional character attributes may be indicative of user profession, income, geographic location and ethnicity. It is significant to note that the present invention allows for the determination and collection of user information without asking the user to disclose personal information such as age, gender, name, e-mail address, etc. The user may, however, give more personal information if they choose. For example, the geographic location of a user may be determined if the user chooses to provide his zip code.
  • With respect to tastes, preferences and opinions, the clothing, accessories, music and other attributes associated with a character that a user identifies with are likely to provide an indication of the general tastes, preferences and opinions of that user. Any attribute modifications made by the user provide further insight into the tastes, preferences and opinions of that user. In this respect, the present invention provides a means by which the tastes, preferences and opinions of a portion of the public, i.e. the users of character-enabled sites, may be monitored by manufacturers of consumer products. For example, a clothing manufacturer may use the system to test market a new style of shirt. The manufacturer would incorporate the animation software and animation data necessary to display a number of shirts of varying styles into an existing character-enabled site or, alternatively, establish its own character-enabled site. The number of “hits” each specific shirt style experiences is tallied and stored in the central database 24. Each hit may also be cross referenced to the persona of the character making the hit. Thus the system collects data indicative of the demographics of the users and the styles of shirts favored by the users who fall within a specific demographic. Continuing with the shirt example, additional taste, preference and opinion data may be collected regarding the most popular color for each shirt by providing the user a palette of shirt colors from which to choose.
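The hit tallying and its cross-referencing to character personas can be sketched with simple counters; the function name and the demographic labels below are illustrative assumptions:

```python
from collections import Counter

def tally_hits(selections):
    """Tally shirt-style hits overall and cross-referenced to persona.

    selections: iterable of (shirt_style, persona_demographic) pairs,
    one per hit. Returns (hits per style, hits per style-and-demographic),
    mirroring the cross-referenced tallies stored in the central database.
    """
    per_style = Counter(style for style, _ in selections)
    per_style_and_demo = Counter(selections)
    return per_style, per_style_and_demo
```

From the second counter a manufacturer can read off, for example, which shirt style is favored by users whose characters suggest a particular demographic.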
  • The foregoing is merely one example of the market research capabilities provided by the present invention. Taste, preference and opinion data may be collected on virtually any consumer product. For example, an automobile manufacturer may test market car options and accessories, a beverage manufacturer may test market a new can design, a cellular telephone manufacturer may gather information on preferred size, shape and color of cell phones. Besides consumer products evaluations, the system of the present invention may be used to conduct opinion surveys on political issues and current events. For example, a user may be presented with animations representative of political figures and asked to choose which character he wants to be. A user may be presented with an animation of a character holding an empty can and asked to choose between dropping the can in the street or into a trash can.
  • Thus, the system of the present invention provides for the compilation and provision of data about a target audience. The system provides the data necessary to determine market trends in real-time and forecast trends based on the popularity of certain profiles and choices made by users. The system allows for companies to test market products through specific profiles that are programmed into the system to thereby derive marketing answers in real-time. Quick response time to trends is a crucial factor in determining the success of a marketing program. The present invention provides for such a response.
  • While this invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications, and variations will be apparent to those skilled in the art. Accordingly, preferred embodiments of the invention as set forth herein are intended to be illustrative, not limiting. Various changes may be made without departing from the spirit and scope of the invention as defined in the claims.

Claims (20)

1. A method of collecting data on-line in response to user choices made on-line, said method comprising:
storing a plurality of character data in a database;
storing a plurality of character-attribute data in a database;
linking the character attribute data with one or more of the character data;
presenting to a user interface, one or more character data defining one or more characters for selection by the user;
upon selection of a character, presenting in real time to the user interface, the selected character along with at least one of the character-attribute data linked to the selected character for selection by the user;
upon selection of a character attribute, presenting in real time to the user interface, the selected character including the selected character attribute; and
tallying the number of times the selected character attribute has been selected.
2. The method of claim 1 wherein the character data comprises at least one of audio data and visual image data.
3. The method of claim 1 wherein the character-attribute data comprises at least one of audio data and visual image data.
4. The method of claim 1 further comprising:
storing data in a database indicative of the selected character and selected character attribute, the selected character and selected character attributes collectively defining a character persona;
storing a plurality of character-persona data;
linking the character-persona data with one or more of the character-attribute data; and
presenting to the user interface, one or more character-persona data linked to the character persona.
5. The method of claim 4 wherein the character-persona data is different from the character data and the character-attribute data.
6. The method of claim 4 wherein the character-persona data comprises at least one of audio presentations and visual image presentations.
7. The method of claim 4 wherein the character-persona data comprise at least one link to a network site.
8. The method of claim 7 wherein the network site is able to present to the user interface the selected character including the selected character attribute.
9. An on-line data collection and presentation system comprising:
a plurality of character data; a plurality of character-attribute data linked with one or more of the character data; and
a processor programmed to:
present to a user interface, one or more of the character data defining one or more characters for selection by the user;
upon selection of a character, present in real time to the user interface, the selected character along with at least one of the character-attribute data linked to the selected character for selection by the user;
upon selection of a character attribute, present in real time to the user interface, the selected character including the selected character attribute; and
tally the number of times the selected character attribute has been selected.
10. The system of claim 9 wherein the character data comprises at least one of audio data and visual image data.
11. The system of claim 9 wherein the character-attribute data comprises at least one of audio data and visual image data.
12. The system of claim 9 further comprising a plurality of character persona data linked with one or more of the character-attribute data and the processor is further programmed to present to the user interface, one or more character-persona data linked to the character persona.
13. The system of claim 12 wherein the processor is further programmed to store data indicative of the selected character and selected character attribute collectively defining a character persona.
14. The system of claim 12 wherein the character-persona data comprises at least one of audio presentations and visual image presentations.
15. A method of communicating through an information network, said method comprising:
storing a plurality of character data in a database;
storing a plurality of character-attribute data in a database;
linking the character attribute data with one or more of the character data;
providing for the creation of on-line characters by:
presenting to a user interface one or more character data defining one or more characters for selection;
upon selection of a character, presenting in real time to the user interface, the selected character along with at least one of the character-attribute data linked to the selected character for selection; and
upon selection of a character attribute, presenting in real time to the user interface, the selected character including the selected character attribute;
tallying the number of times the selected character attribute has been selected; and
providing a communications link that allows a plurality of created characters to be presented on a common network site.
16. The method of claim 15 wherein the common network site is different from the network site through which at least one of the characters was created.
17. The method of claim 15 wherein the communications link allows data to be exchanged through the user interfaces associated with each respective character.
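Claims 15–17 add a communications link that presents characters created through different network sites on one common site and lets data pass between the user interfaces behind each character. A minimal sketch, with all names (`CommonSite`, `join`, `exchange`) assumed for illustration:

```python
# Illustrative sketch of the common-site communications link (claims 15-17).
# Class and method names are hypothetical.

class CommonSite:
    def __init__(self):
        self.characters = {}  # character name -> site it was created through
        self.inboxes = {}     # character name -> messages received

    def join(self, character, origin_site):
        """Present a created character on the common network site (claim 15/16).

        The origin site may differ from the common site itself.
        """
        self.characters[character] = origin_site
        self.inboxes[character] = []

    def exchange(self, sender, recipient, message):
        """Exchange data between the interfaces of two characters (claim 17)."""
        if recipient not in self.inboxes:
            raise KeyError("recipient not present on the common site")
        self.inboxes[recipient].append((sender, message))
```

Under this reading, claim 16's distinction is simply that `origin_site` need not equal the common site on which the characters are presented together.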
18. A method of navigating network sites on an information network, comprising the steps of:
a) presenting to a user interface, one or more characters for selection by a user;
b) upon selection of a character, presenting in real-time to the user interface, the selected character along with at least one character attribute for selection by said user;
c) upon selection of a character attribute, presenting in real-time to the user interface, the selected character including the selected character attribute;
d) storing data indicative of said selected character and said selected character attribute;
e) navigating a first character-enabled (CE) network site wherein said stored data indicative of said selected character and said selected character attribute accompanies said user during said navigation of said CE network site;
f) following a link to a second CE network site;
g) sharing said stored data indicative of said selected character and said selected character attribute with said second CE network site; and
h) navigating said second character-enabled (CE) network site, wherein said stored data indicative of said selected character and said selected character attribute accompanies said user during said navigation of said CE network site.
19. The method of claim 18 wherein said stored data indicative of said selected character and said selected character attribute is stored as a cookie and transferred to said second CE network site.
20. The method of claim 18 further comprising the step of analyzing said stored data indicative of said character, said character attribute, and the portions of said CE network site visited by said user to develop marketing statistics representative of said users.
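Claims 18–20 describe a persona (selected character plus attribute) that accompanies the user across character-enabled (CE) sites, stored for example as a cookie (claim 19), with visit data later aggregated into marketing statistics (claim 20). A hypothetical sketch, serializing the persona as a JSON cookie value; all names (`persona_cookie`, `CESite`, `attribute_stats`) are assumptions of this example:

```python
# Illustrative sketch of claims 18-20: a persona serialized as a cookie
# value shared between CE sites, with simple cross-site statistics.
import json

def persona_cookie(character, attribute):
    """Serialize the stored persona as a cookie value (claim 19)."""
    return json.dumps({"character": character, "attribute": attribute})

def read_persona(cookie):
    """Recover the character and attribute from the cookie value."""
    data = json.loads(cookie)
    return data["character"], data["attribute"]

class CESite:
    """A character-enabled network site that accepts a shared persona cookie."""
    def __init__(self, name):
        self.name = name
        self.visits = []  # (character, attribute) recorded per navigation

    def navigate(self, cookie):
        """Navigate the site with the persona accompanying the user."""
        self.visits.append(read_persona(cookie))

def attribute_stats(sites):
    """Aggregate visits across CE sites into marketing statistics (claim 20)."""
    counts = {}
    for site in sites:
        for _, attribute in site.visits:
            counts[attribute] = counts.get(attribute, 0) + 1
    return counts
```

Following a link from a first CE site to a second then amounts to passing the same cookie value to the second site's `navigate`, so the persona data "accompanies" the user as claimed.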
US13/298,095 2000-07-12 2011-11-16 Method and system for presenting data over a network based on network user choices and collecting real-time data related to said choices Abandoned US20120297309A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/298,095 US20120297309A1 (en) 2000-07-12 2011-11-16 Method and system for presenting data over a network based on network user choices and collecting real-time data related to said choices

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US09/614,572 US6952716B1 (en) 2000-07-12 2000-07-12 Method and system for presenting data over a network based on network user choices and collecting real-time data related to said choices
US11/186,723 US7860942B2 (en) 2000-07-12 2005-07-20 Method and system for presenting data over a network based on network user choices and collecting real-time data related to said choices
US12/956,382 US8180858B2 (en) 2000-07-12 2010-11-30 Method and system for presenting data over a network based on network user choices and collecting real-time data related to said choices
US13/298,095 US20120297309A1 (en) 2000-07-12 2011-11-16 Method and system for presenting data over a network based on network user choices and collecting real-time data related to said choices

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12/956,382 Continuation US8180858B2 (en) 2000-07-12 2010-11-30 Method and system for presenting data over a network based on network user choices and collecting real-time data related to said choices

Publications (1)

Publication Number Publication Date
US20120297309A1 true US20120297309A1 (en) 2012-11-22

Family

ID=35005211

Family Applications (4)

Application Number Title Priority Date Filing Date
US09/614,572 Expired - Lifetime US6952716B1 (en) 2000-07-12 2000-07-12 Method and system for presenting data over a network based on network user choices and collecting real-time data related to said choices
US11/186,723 Expired - Lifetime US7860942B2 (en) 2000-07-12 2005-07-20 Method and system for presenting data over a network based on network user choices and collecting real-time data related to said choices
US12/956,382 Expired - Lifetime US8180858B2 (en) 2000-07-12 2010-11-30 Method and system for presenting data over a network based on network user choices and collecting real-time data related to said choices
US13/298,095 Abandoned US20120297309A1 (en) 2000-07-12 2011-11-16 Method and system for presenting data over a network based on network user choices and collecting real-time data related to said choices

Family Applications Before (3)

Application Number Title Priority Date Filing Date
US09/614,572 Expired - Lifetime US6952716B1 (en) 2000-07-12 2000-07-12 Method and system for presenting data over a network based on network user choices and collecting real-time data related to said choices
US11/186,723 Expired - Lifetime US7860942B2 (en) 2000-07-12 2005-07-20 Method and system for presenting data over a network based on network user choices and collecting real-time data related to said choices
US12/956,382 Expired - Lifetime US8180858B2 (en) 2000-07-12 2010-11-30 Method and system for presenting data over a network based on network user choices and collecting real-time data related to said choices

Country Status (1)

Country Link
US (4) US6952716B1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070204228A1 (en) * 2006-02-24 2007-08-30 Brian Minear System and method of controlling a graphical user interface at a wireless device
US20100185520A1 (en) * 2002-06-20 2010-07-22 Linda Gottfried Method and system for sharing brand information
US20150304806A1 (en) * 2014-04-17 2015-10-22 Ebay Inc. Image customization to enhance transaction experience
US11204678B1 (en) * 2019-12-11 2021-12-21 Amazon Technologies, Inc. User interfaces for object exploration in virtual reality environments

Families Citing this family (46)

Publication number Priority date Publication date Assignee Title
US5553864A (en) 1992-05-22 1996-09-10 Sitrick; David H. User image integration into audiovisual presentation system and methodology
JP2001229209A (en) * 2000-02-14 2001-08-24 Nec Corp Design system
US7516196B1 (en) * 2000-03-21 2009-04-07 Nokia Corp. System and method for delivery and updating of real-time data
US6952716B1 (en) * 2000-07-12 2005-10-04 Treehouse Solutions, Inc. Method and system for presenting data over a network based on network user choices and collecting real-time data related to said choices
US7827488B2 (en) 2000-11-27 2010-11-02 Sitrick David H Image tracking and substitution system and methodology for audio-visual presentations
US7398233B1 (en) * 2001-06-15 2008-07-08 Harris Interactive, Inc. System and method for conducting product configuration research over a computer-based network
US7890387B2 (en) 2001-06-15 2011-02-15 Harris Interactive Inc. System and method for conducting product configuration research over a computer-based network
AU2003266002A1 (en) * 2002-05-06 2003-11-17 Benjamin M. Goldberg Localized audio networks and associated digital accessories
US20050075155A1 (en) * 2003-01-30 2005-04-07 David Sitrick Video architecture and methodology for family of related games
US20040255232A1 (en) * 2003-06-11 2004-12-16 Northwestern University Networked presentation system
KR100657065B1 (en) * 2004-01-29 2006-12-13 삼성전자주식회사 Device and method for character processing in wireless terminal
WO2005074596A2 (en) * 2004-01-30 2005-08-18 Yahoo! Inc. Method and apparatus for providing real-time notification for avatars
US7707520B2 (en) * 2004-01-30 2010-04-27 Yahoo! Inc. Method and apparatus for providing flash-based avatars
JP4257607B2 (en) * 2004-09-14 2009-04-22 ソニー株式会社 Information processing apparatus and method, and program
WO2006096776A2 (en) * 2005-03-07 2006-09-14 The University Of Georgia Research Foundation,Inc. Teleportation systems and methods in a virtual environment
US20070219849A1 (en) * 2005-11-28 2007-09-20 Voiceport, Llc Automated method, system, and program for generation of an audio survey
US20070174235A1 (en) * 2006-01-26 2007-07-26 Michael Gordon Method of using digital characters to compile information
US9098577B1 (en) * 2006-03-31 2015-08-04 Qurio Holdings, Inc. System and method for creating collaborative content tracks for media content
US7925723B1 (en) 2006-03-31 2011-04-12 Qurio Holdings, Inc. Collaborative configuration of a media environment
US20080120670A1 (en) * 2006-10-31 2008-05-22 Robert Easton System and method for tracking consumer activities within a digital magazine
US9530117B2 (en) * 2007-02-13 2016-12-27 International Business Machines Corporation Method and apparatus for transforming user requests and responses based on a persona
US20080201369A1 (en) * 2007-02-16 2008-08-21 At&T Knowledge Ventures, Lp System and method of modifying media content
US8156146B2 (en) * 2007-09-28 2012-04-10 Xcerion Aktiebolag Network file system
US9773247B1 (en) * 2007-12-07 2017-09-26 Jpmorgan Chase Bank, N.A. Adaptive and customizable account interface system and method
US8029359B2 (en) 2008-03-27 2011-10-04 World Golf Tour, Inc. Providing offers to computer game players
US8387094B1 (en) * 2009-04-09 2013-02-26 Tp Lab, Inc. Method and system to automatically select data network videos as television shows based on a persona
US10049379B2 (en) * 2009-06-12 2018-08-14 Rentrak Corporation Quantitative branding analysis
KR20100138700A (en) * 2009-06-25 2010-12-31 삼성전자주식회사 Method and apparatus for processing virtual world
US20150312298A1 (en) * 2011-03-24 2015-10-29 Kevin J. O'Keefe Method and system for information exchange and processing
US8762226B2 (en) * 2011-05-04 2014-06-24 Etsy, Inc. Item discovery tools and methods for shopping in an electronic commerce environment
US9141977B2 (en) 2011-09-07 2015-09-22 Elwha Llc Computational systems and methods for disambiguating search terms corresponding to network members
US9167099B2 (en) 2011-09-07 2015-10-20 Elwha Llc Computational systems and methods for identifying a communications partner
US9159055B2 (en) 2011-09-07 2015-10-13 Elwha Llc Computational systems and methods for identifying a communications partner
US9195848B2 (en) 2011-09-07 2015-11-24 Elwha, Llc Computational systems and methods for anonymized storage of double-encrypted data
US9491146B2 (en) 2011-09-07 2016-11-08 Elwha Llc Computational systems and methods for encrypting data for anonymous storage
US10546306B2 (en) 2011-09-07 2020-01-28 Elwha Llc Computational systems and methods for regulating information flow during interactions
US9928485B2 (en) 2011-09-07 2018-03-27 Elwha Llc Computational systems and methods for regulating information flow during interactions
US10523618B2 (en) 2011-09-07 2019-12-31 Elwha Llc Computational systems and methods for identifying a communications partner
US10546295B2 (en) 2011-09-07 2020-01-28 Elwha Llc Computational systems and methods for regulating information flow during interactions
US10606989B2 (en) 2011-09-07 2020-03-31 Elwha Llc Computational systems and methods for verifying personal information during transactions
US9747561B2 (en) 2011-09-07 2017-08-29 Elwha Llc Computational systems and methods for linking users of devices
US9432190B2 (en) 2011-09-07 2016-08-30 Elwha Llc Computational systems and methods for double-encrypting data for subsequent anonymous storage
US9690853B2 (en) * 2011-09-07 2017-06-27 Elwha Llc Computational systems and methods for regulating information flow during interactions
US10365816B2 (en) * 2013-08-21 2019-07-30 Intel Corporation Media content including a perceptual property and/or a contextual property
CN104731829B (en) * 2013-12-24 2019-06-21 腾讯科技(深圳)有限公司 A kind of interactive approach and device of network picture
US20160217496A1 (en) * 2015-01-23 2016-07-28 Disney Enterprises, Inc. System and Method for a Personalized Venue Experience

Citations (3)

Publication number Priority date Publication date Assignee Title
US5948061A (en) * 1996-10-29 1999-09-07 Double Click, Inc. Method of delivery, targeting, and measuring advertising over networks
US20020055833A1 (en) * 1999-08-23 2002-05-09 Deborah Sterling Systems and methods for virtual population mutual relationship management using electronic computer driven networks
US6692359B1 (en) * 1991-02-15 2004-02-17 America Online, Inc. Method of interfacing on a computer network by visual representations of users, method of interacting and computer network

Family Cites Families (31)

Publication number Priority date Publication date Assignee Title
TW266277B (en) * 1994-12-31 1995-12-21 Sega Of America Inc Videogame system and methods for enhanced processing and display of graphical character elements
JP3671259B2 (en) * 1995-05-31 2005-07-13 カシオ計算機株式会社 Display device
US5913040A (en) * 1995-08-22 1999-06-15 Backweb Ltd. Method and apparatus for transmitting and displaying information between a remote network and a local computer
US6577998B1 (en) * 1998-09-01 2003-06-10 Image Link Co., Ltd Systems and methods for communicating through computer animated images
US5884029A (en) 1996-11-14 1999-03-16 International Business Machines Corporation User interaction with intelligent virtual objects, avatars, which interact with other avatars controlled by different users
US5983190A (en) * 1997-05-19 1999-11-09 Microsoft Corporation Client server animation system for managing interactive user interface characters
US6100881A (en) * 1997-10-22 2000-08-08 Gibbons; Hugh Apparatus and method for creating interactive multimedia presentation using a shoot lost to keep track of audio objects of a character
US6078891A (en) * 1997-11-24 2000-06-20 Riordan; John Method and system for collecting and processing marketing data
US6466213B2 (en) 1998-02-13 2002-10-15 Xerox Corporation Method and apparatus for creating personal autonomous avatars
JP3028367B1 (en) 1998-10-08 2000-04-04 コナミ株式会社 Character expression method, recording medium, image display device, and video game device
US6448980B1 (en) * 1998-10-09 2002-09-10 International Business Machines Corporation Personalizing rich media presentations based on user response to the presentation
JP2000187738A (en) * 1998-10-12 2000-07-04 Fujitsu Ltd Picture generation device, database and storage medium
US6600725B1 (en) * 1998-12-16 2003-07-29 At&T Corp. Apparatus and method for providing multimedia conferencing services with selective information services
US6634949B1 (en) * 1999-02-26 2003-10-21 Creative Kingdoms, Llc Multi-media interactive play system
US6954902B2 (en) * 1999-03-31 2005-10-11 Sony Corporation Information sharing processing method, information sharing processing program storage medium, information sharing processing apparatus, and information sharing processing system
US7061493B1 (en) * 1999-04-07 2006-06-13 Fuji Xerox Co., Ltd. System for designing and rendering personalities for autonomous synthetic characters
JP2001014282A (en) * 1999-06-29 2001-01-19 Sony Corp Device and method for information processing and medium
US7080070B1 (en) * 1999-07-02 2006-07-18 Amazon Technologies, Inc. System and methods for browsing a database of items and conducting associated transactions
US6522333B1 (en) 1999-10-08 2003-02-18 Electronic Arts Inc. Remote communication through visual representations
US6772195B1 (en) 1999-10-29 2004-08-03 Electronic Arts, Inc. Chat clusters for a virtual world application
US7328171B2 (en) * 1999-11-12 2008-02-05 Hewlett-Packard Development Company, L.P. System and method for ordering consumer items in electronic commerce
US6727925B1 (en) * 1999-12-20 2004-04-27 Michelle Lyn Bourdelais Browser-based room designer
US7346543B1 (en) * 2000-02-24 2008-03-18 Edmark Tomima L Virtual showroom method
US6948131B1 (en) * 2000-03-08 2005-09-20 Vidiator Enterprises Inc. Communication system and method including rich media tools
US7149665B2 (en) * 2000-04-03 2006-12-12 Browzwear International Ltd System and method for simulation of virtual wear articles on virtual models
WO2001090869A1 (en) * 2000-05-02 2001-11-29 Macri Vincent J Processing system for interactive, personal and idiosyncratic control of images and devices
US6954728B1 (en) 2000-05-15 2005-10-11 Avatizing, Llc System and method for consumer-selected advertising and branding in interactive media
US6545682B1 (en) * 2000-05-24 2003-04-08 There, Inc. Method and apparatus for creating and customizing avatars using genetic paradigm
US7526440B2 (en) * 2000-06-12 2009-04-28 Walker Digital, Llc Method, computer product, and apparatus for facilitating the provision of opinions to a shopper from a panel of peers
US6901379B1 (en) * 2000-07-07 2005-05-31 4-D Networks, Inc. Online shopping with virtual modeling and peer review
US6952716B1 (en) 2000-07-12 2005-10-04 Treehouse Solutions, Inc. Method and system for presenting data over a network based on network user choices and collecting real-time data related to said choices

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
US6692359B1 (en) * 1991-02-15 2004-02-17 America Online, Inc. Method of interfacing on a computer network by visual representations of users, method of interacting and computer network
US5948061A (en) * 1996-10-29 1999-09-07 Double Click, Inc. Method of delivery, targeting, and measuring advertising over networks
US20020055833A1 (en) * 1999-08-23 2002-05-09 Deborah Sterling Systems and methods for virtual population mutual relationship management using electronic computer driven networks

Cited By (7)

Publication number Priority date Publication date Assignee Title
US20100185520A1 (en) * 2002-06-20 2010-07-22 Linda Gottfried Method and system for sharing brand information
US8725803B2 (en) * 2002-06-20 2014-05-13 Sinoeast Concept Limited Method and system for sharing brand information
US20070204228A1 (en) * 2006-02-24 2007-08-30 Brian Minear System and method of controlling a graphical user interface at a wireless device
US8582729B2 (en) * 2006-02-24 2013-11-12 Qualcomm Incorporated System and method of controlling a graphical user interface at a wireless device
US20150304806A1 (en) * 2014-04-17 2015-10-22 Ebay Inc. Image customization to enhance transaction experience
US9503845B2 (en) * 2014-04-17 2016-11-22 Paypal, Inc. Image customization to enhance transaction experience
US11204678B1 (en) * 2019-12-11 2021-12-21 Amazon Technologies, Inc. User interfaces for object exploration in virtual reality environments

Also Published As

Publication number Publication date
US8180858B2 (en) 2012-05-15
US20050273722A1 (en) 2005-12-08
US6952716B1 (en) 2005-10-04
US20110072109A1 (en) 2011-03-24
US7860942B2 (en) 2010-12-28

Similar Documents

Publication Publication Date Title
US8180858B2 (en) Method and system for presenting data over a network based on network user choices and collecting real-time data related to said choices
US7337127B1 (en) Targeted marketing system and method
Stewart et al. Media influences on marketing communications
US10755286B2 (en) Targeted marketing system and method
US10783528B2 (en) Targeted marketing system and method
US20090158170A1 (en) Automatic profile-based avatar generation
KR101865710B1 (en) System and method for directing content to users of a social networking engine
US9324117B2 (en) Method and system for dynamic web display
US20080097843A1 (en) Method of network merchandising incorporating contextual and personalized advertising
JP2002541557A (en) Online Method and Apparatus for Gathering Demographic Information About Users of World Wide Websites
JP2011039909A (en) Method and system for optimizing presentation information
KR100762829B1 (en) Intelligent searching service system and method in accordance with the customer?s personalized shopping preference
AU2019240635A1 (en) Targeted marketing system and method
US20020069119A1 (en) Method and system for interactive real-time creation of printed and electronic media with customized look and feel for individual users
Kuo et al. Personalization technology application to Internet content provider
CA2538712A1 (en) Rich media personal selling system
Stewart et al. The effects of media on marketing communications
KR102577767B1 (en) Location-based clothing recommendation advertisement service apparatus
KR100521752B1 (en) System and method for providing information of customer's purchase pattern to affiliated stores
Yamamoto et al. Enhanced IoT-Aware Online Shopping System
Andika The Effect Of Color Scheme On Purchase Intention With Attitude Towards Website As Mediation Variable (Study Case On Lazada. Co. Id In Malang)
Katrandjiev et al. Online Visual Merchandising Structural Elements And Optimization For Apparel Web Stores
Lundberg Web designers, don’t be afraid to use low-quality images in e-retail, unless you want to impress users: Purchase intent and attitudes on product listing pages with varying product image quality
Cho Effects of social networking sites (SNSs) on hyper media computer mediated environments (HCMEs)
Wood Personalization of the Web interface: Avatars as vehicles for visual persuasion in the online decision making process

Legal Events

Date Code Title Description
AS Assignment

Owner name: TREEHOUSE SOLUTIONS, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ROBB, IAN N.;MADLENER, MICHAEL B.;REEL/FRAME:033490/0050

Effective date: 20050110

AS Assignment

Owner name: TREEHOUSE AVATAR TECHNOLOGIES INC., CANADA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TREEHOUSE SOLUTIONS INC.;REEL/FRAME:033491/0940

Effective date: 20131017

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION