WO2007130693A2 - Methods and systems for processing an interchange of real time effects during video communication
- Publication number
- WO2007130693A2 (PCT/US2007/011143)
- Authority
- WIPO (PCT)
- Prior art keywords
- user
- video
- real
- time
- implemented method
Classifications
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- A63F13/213—Input arrangements for video game devices comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
- A63F13/215—Input arrangements for video game devices comprising means for detecting acoustic signals, e.g. using a microphone
- A63F13/42—Processing input control signals of video game devices by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
- A63F13/58—Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
- A63F13/65—Generating or modifying game content before or while executing the game program, automatically by game devices or servers from real world data, e.g. measurement in live racing competition
- A63F13/79—Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
- A63F13/87—Communicating with other players during game play, e.g. by e-mail or chat
- A63F2300/1081—Input via voice recognition
- A63F2300/1087—Input arrangements for converting player-generated signals into game device control signals comprising photodetecting means, e.g. a camera
- A63F2300/5553—Game data or player data management using player registration data: user representation in the game field, e.g. avatar
- A63F2300/6045—Methods for processing data by generating or executing the game program for mapping control signals received from the input arrangement into game commands
- A63F2300/65—Methods for processing data by generating or executing the game program for computing the condition of a game character
- A63F2300/69—Involving elements of the real world in the game world, e.g. measurement in live races, real video
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Business, Economics & Management (AREA)
- Computer Security & Cryptography (AREA)
- General Business, Economics & Management (AREA)
- Processing Or Creating Images (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Computer implemented methods for interactively modifying a video image or avatar image are provided. The video image or avatar can be transmitted or shared between a first user and a second user using a computer program that is executed on at least one computer in a computer network. The first user and the second user interact through respective computing systems that are at least partially executing the computer program. One method provides a video capture system, interfaced with the computer program, that captures substantially real-time video of the first user. The method continues by identifying components of the video image of the first user that can be modified using real-time effects in the captured real-time video. In another operation, the method identifies controller input from either the first user or the second user; the controller input detected by the computing system determines which of the identified components of the first user will be modified.
Description
METHODS AND SYSTEMS FOR PROCESSING AN INTERCHANGE OF REAL TIME EFFECTS DURING VIDEO COMMUNICATION
BACKGROUND OF THE INVENTION
1. Field of the Invention
[0001] The present invention relates generally to interactive multimedia entertainment and, more particularly, to interactive user control and manipulation of representations of users in a virtual space.
2. Description of the Related Art
[0002] The video game industry has seen many changes over the years. As computing power has expanded, developers of video games have likewise created game software that takes advantage of these increases in computing power. To this end, video game developers have been coding games that incorporate sophisticated operations and mathematics to produce a very realistic game experience.
[0003] Example gaming platforms include the Sony Playstation or Sony Playstation2 (PS2), each of which is sold in the form of a game console. As is well known, the game console is designed to connect to a monitor (usually a television) and enable user interaction through handheld controllers. The game console is designed with specialized processing hardware, including a CPU, a graphics synthesizer for processing intensive graphics operations, a vector unit for performing geometry transformations, and other glue hardware, firmware, and software. The game console is further designed with an optical disc tray for receiving game compact discs for local play through the game console. Online gaming is also possible, wherein a user can interactively play against or with other users over the Internet.
[0004] As game complexity continues to intrigue players, gaming software and hardware manufacturers have continued to innovate to enable additional interactivity. In reality, however, the way in which users interact with a game has not changed dramatically over the years. Commonly, users still play computer games using handheld controllers or interact with programs using mouse pointing devices.
[0005] Some computer programs define virtual worlds. A virtual world is a simulated environment in which users may interact with each other via one or more computer processors. Users may appear on a video screen in the form of representations referred to as avatars. The degree of interaction between the avatars and the simulated environment is implemented by one or more computer applications that govern such interactions as simulated physics, exchange of information between users, and the like. The nature of interactions among users of the virtual world is often limited by the constraints of the system implementing the virtual world.
[0006] In view of the foregoing, there is a need for methods and systems that enable more advanced user interactivity with game play.
SUMMARY
[0007] An invention is described for improving and enhancing verbal and non-verbal communications. The system improves and enhances verbal and non-verbal communication by automatic and user controlled application of Real-Time Effects (RTE) to audio and video input in a networked environment. The input can be for communications commonly referred to as "chat", and specifically for video based chat. Video chat can occur over a network, such as the Internet. The effects defined herein are applicable to both video and audio, and combinations thereof. During a chat session, persons involved in a discussion can interactively cause the application of an RTE on the video image of the person he/she is communicating with, or cause the application of an RTE on his/her own video image, in substantially real-time. An RTE, as will be described in greater detail below, is an effect that is selected by one of the participants of the chat, which can be applied and integrated into the video image and/or audio of one of the participants. The effect can take on many forms, such as video pixel patches that can be integrated into specific portions of one of the participants' faces, bodies, or surroundings. The video pixel patches are preferably applied in such a way that they integrate into the moving video frames, and therefore, the integration appears to be substantially done in real-time.
[0008] In one embodiment, a computer implemented method for interactively modifying a video image is disclosed. The video image can be transmitted between a first user and a second user using a computer program that is executed on at least one computer in a computer network. Additionally, the first user and the second user interact through respective computing systems that are at least partially executing the computer program. The method begins by providing a video capture system interfaced with the computer program that can be used to capture real-time video of the first user. The method continues by identifying
components of the video image of the first user that can be modified using real-time effects in the captured real-time video. In another operation, the method identifies controller input from either the first user or the second user. The controller input detected by the computing system is identified to determine which of the identified components of the first user will be modified. In response to the identified controller input, another operation of the method augments the real-time video captured of the first user by applying the real-time effects to the identified components of the first user. The method concludes by displaying the augmented real-time video of the first user on a screen connected to the computing system of one or both of the first and second users.
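Purely as an editorial illustration, the sequence of operations recited above (capture, identify components, read controller input, augment, display) might be organized as in the following Python sketch; every name in it is a hypothetical stand-in rather than part of the disclosure.

```python
"""Illustrative sketch (not part of the disclosure) of the method of
paragraph [0008]: capture -> identify -> controller input -> augment
-> display. All names here are hypothetical."""

from dataclasses import dataclass, field

@dataclass
class VideoFrame:
    pixels: list                          # stand-in for a pixel map
    effects: list = field(default_factory=list)

def capture_frame(user: str) -> VideoFrame:
    # Stand-in for the video capture system interfaced with the program.
    return VideoFrame(pixels=[[0] * 4 for _ in range(4)])

def find_modifiable_components(frame: VideoFrame) -> dict:
    # Stand-in for recognition/tracking: component names -> image regions.
    return {"eyebrows": (0, 0, 2, 1), "mouth": (1, 2, 3, 3)}

def poll_controller() -> tuple:
    # Stand-in for controller input from either user: (component, effect).
    return ("eyebrows", "angry_scowl")

def apply_rte(frame: VideoFrame, region, effect: str) -> VideoFrame:
    # Augment the captured video by attaching the effect to the region.
    frame.effects.append((effect, region))
    return frame

def display(frame: VideoFrame, screens) -> None:
    for screen in screens:
        print(f"[{screen}] frame with effects: {frame.effects}")

frame = capture_frame("first_user")
components = find_modifiable_components(frame)
component, effect = poll_controller()
frame = apply_rte(frame, components[component], effect)
display(frame, ["first_user_screen", "second_user_screen"])
```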
[0009] In another embodiment, a computer implemented method for interactively modifying a video image and audio is disclosed. The video image and audio can be transmitted between a first user and a second user using a computer program that is executed on at least one computer in a computer network to enable a chat communication. The first user and the second user can interact through respective computing systems that are at least partially executing the computer program. The method begins by providing a video and audio capture system on each of the respective computing systems of the first and second users. The video and audio capture system is interfaced with the computer program to enable the chat communication. The method continues by capturing real-time video and audio of the first user through the video and audio capture system connected to the computing system of the first user. In another operation, the method identifies components of the video image of the first user that can be modified using real-time effects in the captured real-time video. In yet another operation, the method identifies audio segments of audio captured by the video and audio capture system that can be modified using real-time effects. The method continues by identifying user input from either the first user or the second user to determine which of the identified audio segments of the first user will be modified. In response to the identified user input, another operation of the method applies the real-time effects to either one or both of the identified components of the first user or the audio segments. Another operation of the method outputs real-time video and audio of the first user on a screen and an audio output device connected to the computing system of one or both of the first and second users. The output real-time video and audio includes the applied real-time effects. As used herein, real-time means substantially real-time, and the delay should be minimal so as to not noticeably impact the flow of display data.
[0010] In yet another embodiment, a computer implemented method for interactively modifying a video image during chat communication in conjunction with game play over a network is disclosed. The method begins by providing a video and audio capture system on the respective computing systems of first and second users. The video and audio capture system is interfaced with the computer program to enable the chat communication. The method continues by capturing real-time video and audio of a first user through the video and audio capture system connected to the computing system of the first user. In another operation, the method identifies components of the video image of the first user that can be modified using real-time effects in the captured real-time video. In still another operation, the method identifies audio segments of audio captured by the video and audio capture system that can be modified using real-time effects. In yet another operation, the method identifies user input from either the first user or the second user; the identification of the user input determines which of the identified audio segments of the first user will be modified. The method continues by applying real-time effects to either one or both of the identified components of the first user or the audio segments in response to the identified user input. In another operation, the method outputs real-time video and audio of the first user on a screen and audio output equipment connected to the computing system of one or both of the first and second users. The output real-time video and audio includes the applied real-time effects. [0011] In still another embodiment, a computer implemented method for interactively animating an avatar in response to real world input is described. The avatar can be transmitted between a first user and a second user using a computer program that is executed on at least one computer in a computer network. Additionally, the first user and the second user each interact using a respective computing system that is at least partially executing the computer program. The method is initiated by identifying components of the avatar representing the first user that can be modified using real-time effects. The method continues by identifying controller input from either the first user or the second user, the controller input being detected by the computing system. The identification of the controller input determines which of the identified components of the avatar representing the first user will be modified. In response to the identified controller input, the real-time effects are applied to the identified components of the avatar representing the first user, and the avatar of the first user is augmented to reflect the application of the real-time effects. In another operation, the method displays the
augmented avatar of the first user on a screen connected to the computing system of one or both of the first and second users.
[0012] In another embodiment, a computer implemented method for automatically modifying an avatar image in substantial real-time in conjunction with communication over a network is disclosed. The method is initiated by providing a video and audio capture system on the respective computing systems of first and second users. The video and audio capture system is interfaced with the computer program to enable the real-time communication. In another operation, the method detects real-time changes in the facial expression of the first user in the captured video of the first user. In yet another operation, the method detects real-time changes in the vocal characteristics of the first user. Another operation automatically applies real-time effects to the avatar image that represents the first user in response to the monitored real-time video and audio of the first user. The method then outputs the avatar image that represents the first user, with the automatically applied real-time effect, on a screen connected to the computing system of one or both of the first and second users. [0013] The advantages of the present invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, illustrating by way of example the principles of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] The invention, together with further advantages thereof, may best be understood by reference to the following description taken in conjunction with the accompanying drawings. [0015] Figure 1 is a simplified block diagram of a high level overview of a system for improving and enhancing verbal and non-verbal communications in accordance with one embodiment of the present invention.
[0016] Figure 2 is an illustration of facial features recognized and tracked by the video recognition and tracking unit in accordance with one embodiment of the present invention. [0017] Figure 3 shows an example of recognition and tracking of facial features on a video image, in accordance with one embodiment of the invention.
[0018] Figure 4A shows example groups of the predefined video effects library in accordance with one embodiment of the present invention.
[0019] Figure 4B shows the partial contents of the custom video effects library in accordance with one embodiment of the present invention.
[0020] Figures 5A-5F provide an example of how a user would apply an RTE to a remote user in accordance with one embodiment of the present invention.
[0021] Figures 6A-6D illustrate how a user can apply an RTE to himself in order to emphasize an emotion in accordance with one embodiment of the present invention.
[0022] Figures 7A-7C show how a user (Marks) can apply an RTE to a user (m3rCy Flu5]-[) in accordance with one embodiment of the present invention.
[0023] Figure 8A demonstrates multiple video RTE and a graphical representation of verbal communication between the users in accordance with one embodiment of the present invention.
[0024] Figure 8B shows the result of a fire and smoke video effect when applied to a user's forearms and hands in accordance with one embodiment of the present invention.
[0025] Figure 8C shows the result of applying a water effect over the video feed in accordance with one embodiment of the present invention.
[0026] Figure 8D demonstrates an effect that superimposes butterflies on a user's video feed in accordance with one embodiment of the present invention.
[0027] Figure 8E demonstrates an effect where virtual spiders are interacting with a user in accordance with one embodiment of the present invention.
[0028] Figure 9A shows how the video chat with RTE could be implemented with online games in accordance with one embodiment of the present invention.
[0029] Figure 9B demonstrates how the video chat could complement an online text/voice chat application (e.g., instant messaging "IM") in accordance with one embodiment of the present invention.
[0030] Figure 10 shows how the RTE augments the video and audio output based on feedback from the user after the RTE is triggered in accordance with one embodiment of the present invention.
[0031] Figure 11 schematically illustrates the overall system architecture of the Sony®
Playstation 3® entertainment device, a console that may be compatible with controllers for implementing an avatar control system in accordance with one embodiment of the present invention.
[0032] Figure 12 schematically illustrates the architecture of the Cell processor, in accordance with one embodiment of the present invention.
[0033] Figures 13A-13K illustrate example avatar components, environments, and responses, in accordance with one embodiment.
DETAILED DESCRIPTION
[0034] An invention is described for improving and enhancing verbal and non-verbal communications. The system improves and enhances verbal and non-verbal communication by automatic and user controlled application of Real-Time Effects (RTE) to audio and video input in a networked environment. The input can be for communications commonly referred to as "chat", and specifically for video based chat. Video chat can occur over a network, such as the Internet. The effects defined herein are applicable to both video and audio, and combinations thereof. During a chat session, persons involved in a discussion can interactively cause the application of an RTE on the video image of the person he/she is communicating with, or cause the application of an RTE on his/her own video image, in substantially real-time. An RTE, as will be described in greater detail below, is an effect that is selected by one of the participants of the chat, which can be applied and integrated into the video image and/or audio of one of the participants. The effect can take on many forms, such as video pixel patches that can be integrated into specific portions of one of the participants' faces, bodies, or surroundings. The video pixel patches are preferably applied in such a way that they integrate into the moving video frames, and therefore, the integration appears to be substantially done in real-time. It will be obvious to one skilled in the art that the present invention may be practiced without some or all of these specific details. In other instances, well known process operations have not been described in detail in order not to unnecessarily obscure the present invention.
[0035] In another embodiment, the application of animations to an avatar may be provided. An avatar, an icon that can be selected by a user to represent him or her, can enhance the communications experience. The communications may be video chat or text chat and may be standalone applications or bundled with interactive applications such as videogames. During a chat session, the avatars for the people involved can interact with real world stimuli occurring around the chat participants based on input received from microphones and video cameras. Additionally, the chat participants can interactively cause the application of an RTE on an avatar of a person he/she is communicating with or cause the application of an RTE on his/her own avatar in substantially real-time. The computer system of embodiments defined herein may receive the output from a plurality of ambient microphones and a video camera, housed in an AV input, using a connection. The computer system can use the data from the ambient microphones and the video camera, along with a variety of other data, to display an image onto a video screen and output audio.
[0036] Figure 1 is a simplified block diagram of a high level overview of a system for improving and enhancing verbal and non-verbal communications in accordance with one embodiment of the present invention. As shown in Figure 1, a system 100 is capable of inputting data from at least one controller 102A, at least one ambient microphone 102B, at least one video camera 102C, and at least one player microphone 102D.
[0037] In one embodiment, the video camera captures image frames and digitizes the image frames to define a pixel map. Video output from the video camera 102C is initially fed into an input unit 104. The input unit 104 can be in the form of circuitry or a software-controlled driver. From the input unit 104, the video output from the video camera 102C is passed to a video capture unit 112 and further processed by a video recognition and tracking unit 116. The video recognition and tracking unit 116 is meant to recognize facial features and body parts of a user along with the movements of the user. Additionally, the video recognition and tracking unit 116 may be capable of capturing the background surroundings and other elements within the captured images. A frame processor 120 uses the output from the video recognition and tracking unit 116 and can augment the image with video from a video effects library 108. The video effects library 108 contains at least two libraries, shown as predefined video effects 108A and custom video effects 108B, which can be selectively applied by the user or automatically applied by the system 100. It is possible for the video effects library 108 to contain fewer or more libraries so long as the libraries contain predefined and custom video effects. In operation, the frame processor outputs data to a graphics processor/renderer 124 that computes and outputs the final images displayed to the user, shown as video out 132. The graphics processor/renderer 124 also feeds information regarding the state of the system 100 to a communications link 126.
[0038] The audio input from the ambient microphones 102B and the player microphones 102D may be initially passed through the input unit 104 and then captured by a sound capture unit 110 that may pass the captured data to a sound recognition unit 114. Sound data is then passed to a sound processor 118 that can also receive input from a sound effects library 106. The sound effects library 106 contains at least two libraries, shown as predefined sound effects 106A and custom sound effects 106B, that can be selectively applied by the user or automatically applied by the system 100. It is possible for the sound effects library to contain fewer or more libraries so long as it has predefined and custom audio effects. In one embodiment, the sound processor 118 outputs the final mixed sounds for the system 100, shown as audio out 130, and feeds information regarding the state of the system 100 to a communications link 126.
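To make the Figure 1 data flow easier to follow, here is a hypothetical Python sketch in which the numbered units are reduced to plain functions and the effects libraries 108 and 106 to dictionaries; it illustrates the described pipeline and is not actual system code.

```python
"""Hypothetical sketch of the Figure 1 video path; the numbered units
(104, 112, 116, 120, 124) become plain functions, the libraries become
dictionaries, and the entries are placeholders."""

# Effects libraries 108A/108B and 106A/106B as predefined/custom dicts.
video_effects = {"predefined": {"horns": "..."}, "custom": {"fangs": "..."}}
sound_effects = {"predefined": {"echo": "..."}, "custom": {"chipmunk": "..."}}

def input_unit(raw):                  # 104: circuitry or a software driver
    return raw

def video_capture(signal):            # 112: digitize frames into a pixel map
    return {"pixels": signal}

def recognize_and_track(frame):       # 116: facial features, body, background
    frame["tracked"] = ["eyes", "mouth", "background"]
    return frame

def frame_processor(frame, effect):   # 120: blend an effect from library 108
    frame["applied"] = effect
    return frame

def render(frame):                    # 124: compute final image (video out 132)
    return f"video out: {frame}"

camera_signal = "raw camera bytes"
frame = video_capture(input_unit(camera_signal))
frame = recognize_and_track(frame)
frame = frame_processor(frame, video_effects["predefined"]["horns"])
print(render(frame))
```

The audio path (sound capture unit 110, sound recognition unit 114, sound processor 118, library 106) would mirror the same chain for the microphone inputs.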
[0039] In one embodiment, the communications link 126 connects the system 100 to a network 128 that can connect the system 100 with a remote system 150 that is capable of interfacing with the system 100 and is operated by a remote user (not shown). Figure 1 shows the system 100 being connected to a single remote system 150 via the network 128, but it should be understood that a plurality of remote systems 150 and their corresponding users can be connected to the system 100 via the network 128. The remote system 150 is capable of understanding the state of the system 100 as reported by the sound processor 118 and the graphics processor/renderer 124. The remote system 150 combines the information regarding the state of the system 100 with input from the remote user before producing audio out 154 and video out 152.
[0040] The controllers 102A accept input from the users of the system 100, and the input may be processed simultaneously with the video and audio input. In one embodiment of the present invention, the user presses a button or combination of buttons on the controller to initiate or apply an RTE. In another embodiment, an RTE is automatically initiated when triggered by an event defined in the software being processed by the system 100. As noted above, the RTE is an effect that is applied to a video communication participant's image during video rendering. For example, the RTE can be applied to a portion of the participant's face, surroundings, etc., and the RTE is applied in substantially real-time. In such an example, the RTE may be applied such that the applied effect blends into the user's image or surroundings. The applied RTE may also be configured to track the movements of the user's image or surroundings, so that the RTE can change as the user changes. In this embodiment, allowing the RTE to be applied dynamically to a moving image and change with the image allows for a more realistic rendition of the RTE during a dynamic communication session between video chat participants.
[0041] Figure 2 is an illustration of facial features recognized and tracked by the video recognition and tracking unit 116 in accordance with one embodiment of the present
invention. As shown in Figure 2, the video image acquired from the video camera is analyzed to recognize and track the eyes 202 from the face of a user 200. In one embodiment, the video recognition and tracking unit 116 can recognize a user's eyes by overlaying a grid system 204 on the video image and searching for color contrast between the exposed sclera 206 and the iris 208 and/or surrounding skin tone. Once the eyes 202 are located, the video recognition and tracking unit 116 can use the grid system 204 and facial characteristics statistical data to recognize a mouth 210 within a mouth area 212 and eyebrows 212 within eyebrow areas 214. Recognizing and tracking the eyes 202, the eyebrows 212 and the mouth 210 of the user will enable improvement and enhancement of verbal and non-verbal communications of users of the system 100. Note that the present invention is not limited to the previously mentioned methods of tracking facial features and that other methods can be used to recognize and track facial features. It should also be noted that the system 100 is not limited to tracking facial features, but can also be applied to track arms, hands, torsos, legs, feet, and even portions of the surroundings. Additionally, the grid system 204 is provided as a simplistic example of the analysis of specific portions of a captured image. Finer grids may also be used, as well as pixel-by-pixel or pixel group analysis and comparisons with a database of recognition pixel data. Additionally, although specific description is provided regarding an image, it should be understood that the RTEs may be applied to multiple still images as well as the multiple images that make up a video sequence of images. The video may consist of individual full images or compressed video, such as those defined in any one of the currently available MPEG compression standards, which are incorporated by reference.
[0042] Figure 3 shows an example of recognition and tracking of facial features on a video image, in accordance with one embodiment of the invention. In one embodiment, a video capture system such as a video camera is used to capture image frames, and the image frames can be digitized by the camera to define a pixel map of the image frames. In one embodiment, pixel regions of the video image are used to identify characteristics of a user captured in the video image. Figure 3 shows pixel regions that can be used to identify and track an eyebrow area 214 and a mouth area 212 over one or more frames. Additional pixel regions that the system 100 could identify and track include pixel regions for ears 304, a nose 306 and areas of the head 302. In one example, the identified and tracked areas of a user can be subjected to an RTE stored in the video effects library 108 that will enhance the communications experience between users. Again, it is noted that the application of RTE is neither restricted nor limited to facial features.
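As an illustration of the grid-and-contrast idea (not the patented implementation), the following toy Python/NumPy sketch overlays a coarse grid on a synthetic grayscale frame and flags the cells where bright sclera-like and dark iris-like pixels meet:

```python
"""Toy illustration of the grid-based search described for Figure 2:
divide a grayscale image into grid cells and flag cells whose contrast
(here, standard deviation) is high, as expected where the white sclera
meets the darker iris. Thresholds and sizes are invented."""

import numpy as np

def high_contrast_cells(image: np.ndarray, grid: int = 4, thresh: float = 50.0):
    h, w = image.shape
    ch, cw = h // grid, w // grid
    hits = []
    for row in range(grid):
        for col in range(grid):
            cell = image[row * ch:(row + 1) * ch, col * cw:(col + 1) * cw]
            if cell.std() > thresh:       # strong light/dark mixture
                hits.append((row, col))
    return hits

# Synthetic 64x64 frame: mostly mid-gray "skin" with one bright/dark "eye".
frame = np.full((64, 64), 128.0)
frame[8:16, 8:24] = 255.0                 # sclera-like bright patch
frame[10:14, 14:18] = 20.0                # iris-like dark patch
print(high_contrast_cells(frame))         # cells covering the "eye" region
```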
[0043] In one embodiment, video chat is conducted at relatively low video resolutions. Because of the relatively low video resolution, a user's facial expression may not always be clearly conveyed. As will be shown in Figure 4A, the predefined video effects 108A can be grouped into different categories depending on the facial feature they modify. In one embodiment, the predefined video effects 108A are cartoonish animations blended with the real time video that overlay and distort a target user's facial features to, for example, ensure remote users understand the emotional state of the sender. Similarly, the predefined video effects could be used in a comical nature for taunting if a sender applies a predefined video effect to a remote user. Because cartoonish animations assist in conveying emotions to remote users, the communication experience between users of the video chat is enhanced.
[0044] Figure 4A shows example groups of the predefined video effects library 108A in accordance with one embodiment of the present invention. Though the graphics in Figure 4A are static, the present invention deals with animated graphics to create substantially real time video effects. The effects are substantially real time because there is a slight delay in the application of the effect due to the complex, but rapid, calculations necessary to apply the RTE. However, the end result is a video effect that appears to be substantially presented in real time during the video communication. If applied to the user, the RTE can follow the movement of the user, moving fluidly as if the selected RTE were attached to or integrated with the user receiving the RTE. The predefined video effects 108A are video effects that are loaded onto the system 100 by the system manufacturer, as part of software processed by the system 100, or downloaded over the network 128. Regardless of how the predefined video effects 108A are loaded on the system 100, they may be common among all users of systems capable of interfacing with the system 100.
[0045] For example, predefined video effects for the mouth area 212 could include smiles, frowns, puckers and grimaces. Predefined video effects for the eyebrow area 214 could include various eyebrow animations, including scowls and a variety of animations to express emotions of surprise or doubt. Applying dangling earrings or making pointy elf-like ears are effects that could be applied to the ear area 304, and morphing a clown nose or extending the nose "Pinocchio style" to imply lying could be applied to the nose area 306. Additionally, adding horns to the head area 302 is possible, along with a variety of hats and halos. More examples of RTE not shown in Figure 4A are effects applied to a user's eyes, such as making the eyes glow red, pop out of the head, or roll in an exaggerated manner. Additionally, RTE can be applied to a user's arms, legs, feet and even the area surrounding a user.
[0046] In another embodiment, there are predefined RTEs that enable a user to apply an entire theme to themselves or another user. For example, the groups defined in Figure 4A would be completely different and include categories such as girls themes, baby themes, animal themes and celebration themes. Possible girls themes include an RTE where a user is suddenly dressed in a "Little Bo Peep" outfit and standing in a field with sheep. Under baby themes there could be a selection where a user is shown with a pacifier in their mouth, wearing a diaper, with a crib in the background, accompanied by the sound of a baby crying. With the animal theme, a user's face could be superimposed on a jackass and enlarged front teeth placed in their mouth. Additionally, under celebrations, a user could have a party hat on top of their head, confetti and tickertape falling from the sky, and the sound of cheers in the background. The predefined video effects shown in Figure 4A and the examples listed are not inclusive of all of the potential effects. One skilled in the art should recognize that the potential for effects is unlimited and only constrained by a programmer's imagination and moral fiber.
[0047] Figure 4B shows the partial contents of the custom video effects library 108B in accordance with one embodiment of the present invention. The custom video effects 108B are unique to one particular user and are created or customized by the user. The user can create custom video effects by recording and editing video or by taking predefined video effects and modifying them to their liking. Examples of what users could create or modify include animations of the user sticking their tongue out or an animation depicting the user vomiting. The user could also create animations of themselves smiling and replacing their teeth with fangs or revealing that some teeth are missing. It would also be possible to sell or license custom video effects using a model similar to mobile phone ring-tones. For example, a user would be able to visit a website or custom video effect portal where they would be able to download custom video effects after paying a license or sales fee. The examples listed are intended to be possible custom effects and are not intended to be restrictive.
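One plausible organization of the video effects library 108, sketched in Python for illustration; the grouping follows paragraphs [0044]-[0047], while the entry names themselves are invented:

```python
"""Hypothetical layout of the video effects library 108: predefined
effects 108A grouped by the facial feature (or theme) they modify, plus
per-user custom effects 108B. Entries are placeholder names."""

video_effects_library = {
    "predefined": {                       # 108A: common to all systems
        "mouth":    ["smile", "frown", "pucker", "grimace"],
        "eyebrows": ["scowl", "surprise", "doubt"],
        "ears":     ["dangling_earrings", "pointy_elf_ears"],
        "nose":     ["clown_nose", "pinocchio_nose"],
        "head":     ["horns", "party_hat", "halo"],
        "themes":   ["little_bo_peep", "baby", "jackass", "celebration"],
    },
    "custom": {                           # 108B: unique to one user
        "mouth": ["tongue_out", "fangs", "missing_teeth"],
    },
}

def list_effects(feature: str):
    """Merge predefined and custom entries for one facial feature,
    tagging the custom ones so the UI can show a different icon/font."""
    effects = [(name, "predefined")
               for name in video_effects_library["predefined"].get(feature, [])]
    effects += [(name, "custom")
                for name in video_effects_library["custom"].get(feature, [])]
    return effects

print(list_effects("mouth"))
```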
[0048] Figures 5A-5F provide an example of how a user would apply an RTE to a remote user in accordance with one embodiment of the present invention. In this embodiment, the user 504, whose screen name (m3rCy Flu5]-[) is displayed under his live video image 502, will apply an RTE to a target user 508, whose screen name (Marks) is displayed under his live video image 506. Figures 5A-5F show, as examples, still photographs as representative of live video images, and it should be noted there could be more users engaged in the video chat connected by the network 128 rather than just the two users shown in the figures. In this embodiment, the user 504 would initiate the application of an RTE by selecting which user the RTE will be applied to, in this case, the target user 508. The controller buttons associated with symbols 510 allow the user 504 to scroll through available users because, as discussed above, it is possible for more users to be connected using the network 128. The user 504 selects the target user 508 by pressing the L1 button on his controller, at which point the user 504 will see the example RTE categories 512 shown in Figure 5B. In this example, the user 504 selects a video RTE by pressing L1. After selecting the video RTE, the user 504 may be allowed to choose from the options shown in Figure 5C. As previously mentioned, the system 100 can apply RTE to more than a user's face, but this example deals with applying an RTE to the face of the target user 508.
[0049] The user 504 can scroll through more possible video effects by pressing the controller buttons corresponding to the symbols 510. Selecting to apply an RTE to the eyebrows of the target user 508 brings the user 504 to the choices shown in Figure 5D. At this point, the user 504 selects the set of eyebrows he wants to apply to the target user 508. The user 504 can select from any of the eyebrows available in the predefined video effects 108A or his custom video effects 108B by scrolling through the possible selections. The system 100 can indicate which of the possible selections are from the custom video effects 108B by using different icons, colors, or fonts, or any combination thereof.
[0050] Continuing with this example the user 504 applies eyebrows 520 by pressing L2 which results in the graphic shown in Figure 5E. In this example the eyebrows 520 are shown on the video feed from the target user 508 as a preview since the user 504 still has the option to accept or cancel the application of the RTE. In a different embodiment there is a separate preview window that would allow the user 504 to see the target user 508 with the applied RTE. This would maintain one window where there is an unaltered video feed from the target user 508. In Figure 5F, the user 504 has canceled the application of the eyebrow RTE to the target user 508 and has returned to the previous menu where he can select different eyebrows from the video effects library.
[0051] Returning to Figure 5E, note that the user 504 has the option to add more effects. In particular, the user 504 can press L1 to add video or press R1 to add audio. This feature allows the user 504 to add multiple RTE and preview the effects before sending them to the remote users. While this embodiment has used the controller buttons shown in Figures 5A-5F, other embodiments can be manipulated using controllers responsive to relative positional movement and motion, capable of transmitting signals via a wired or wireless link. In another embodiment, specific buttons on a controller or specific voice commands can be used to dynamically apply a predetermined RTE. For example, the user can program a button to always apply horns over the other person's head. This feature could be considered a "hot button" that can be quickly pressed so the RTE immediately shows up, without having to navigate through multiple selection screens. Once a hot button is programmed, it may be reprogrammed on demand.
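The "hot button" behavior described above could be sketched as a simple reprogrammable mapping; the button and effect names below are illustrative only:

```python
"""Sketch of the 'hot button' idea in paragraph [0051]: a user-programmable
mapping from a controller button straight to an RTE, skipping the menus."""

hot_buttons = {}

def program_hot_button(button: str, effect: str) -> None:
    # Reprogrammable on demand: the latest assignment simply wins.
    hot_buttons[button] = effect

def on_button_press(button: str, target_user: str) -> None:
    effect = hot_buttons.get(button)
    if effect is not None:
        # Apply immediately, with no selection screens.
        print(f"applying '{effect}' to {target_user}")
    else:
        print(f"{button}: no hot button programmed, opening RTE menu")

program_hot_button("L3", "horns_over_head")
on_button_press("L3", "Marks")      # applies horns immediately
on_button_press("R3", "Marks")      # falls back to the normal menu
```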
[0052] Figures 6A-6D illustrate how a user 508 can apply an RTE to himself in order to emphasize an emotion in accordance with one embodiment of the present invention. Figure 6A shows that the user 508 (Marks) is conducting a video chat with the user 504 (m3rCy Flu5]-[). To apply an RTE to his own streaming video, the user 508 presses L1, which leads to Figure 6B. To progress from Figure 6B to 6C, the user 508 presses L1 to initiate the application of a video RTE. Having already discussed examples of how an RTE may be selected, Figure 6C summarizes the areas of the user's 508 face which may receive an applied RTE. Figure 6D shows the application of the RTE. In one embodiment, the video RTE will remain for a predetermined period of time. In another embodiment, the video RTE will stay in place, tracking the user's movements, until the initiator of the RTE cancels it. [0053] Figures 7A-7C show how a user 508 (Marks) can apply an RTE to a user 504 (m3rCy Flu5]-[) in accordance with one embodiment of the present invention. Figure 7A is where the user 508 selects to apply an RTE to the user 504. Since the process a user undertakes to select an RTE was previously covered, the steps have been omitted, and Figure 7B shows the user 508 selecting the RTE to apply to the user 504. From Figure 7C, the user 508 chooses to accept the RTE by selecting the icon labeled 710, resulting in the RTE being applied to the user 504. Once the user 508 accepts the RTE by pressing the corresponding button on his controller, the user 504 will have the RTE applied to his video image. All participants in the video chat will see the user 504 with the RTE applied. Because the video RTE was applied by the user 508 to the user 504, the duration of the video RTE will only be for a predetermined period of time. In another embodiment, the video RTE would stay in place until the sender, user 508, sends a command canceling the RTE.
[0054] In another embodiment, RTE will not be visible to all participants, because a user can choose not to allow RTE to be applied to their video stream given the potentially offensive nature of some RTE. Furthermore, predefined RTE can be assigned maturity ratings similar to movie ratings from the Motion Picture Association of America or the video game ratings conducted by the Entertainment Software Rating Board. This would allow the system 100 to filter incoming RTE to ensure that only predefined RTE within a specific rating are displayed. This feature would assist parents in making sure their children are not exposed to potentially offensive RTE.
[0055] To clarify the application of custom video effects, assume the user 508 selected a custom RTE from Figure 7B. In order for the custom video effects of the user 508 to be displayed on the screen of the user 504, the effect will need to be transmitted across the network 128. Once transmitted, the custom video effect can be cached in the system of the user 504, but the user 504 will not be able to voluntarily use the custom video effect. In one embodiment, if the system the user 504 is using has been configured to reject custom RTE, the RTE sent from the user 508 will not be displayed to the user 504. Figure 8A demonstrates multiple video RTE and a graphical representation of verbal communication between the users in accordance with one embodiment of the present invention. As previously discussed, these effects could be applied as a theme or individually selected by a user.
[0056] Figure 8B shows the result of a fire and smoke video effect when applied to a user's forearms and hands in accordance with one embodiment of the present invention. Recall that the effects are video effects, and the flames and smoke may, in one embodiment, follow the user's movement.
[0057] Figure 8C shows the result of applying a water effect over the video feed in accordance with one embodiment of the present invention. This effect can be applied to a specific area of the video feed or broadly to the entire viewable area.
[0058] Figure 8D demonstrates an effect that superimposes butterflies on a user's video feed in accordance with one embodiment of the present invention. In this example the butterflies are "in front" of the user. This example shows how effects can be placed in the user's environment and enables the themes, discussed above, to have backgrounds and foregrounds. [0059] Figure 8E demonstrates an effect where virtual spiders are interacting with a user in accordance with one embodiment of the present invention. This example shows how a user
can "pick up" and move a virtual spider. The virtual spiders, like an RTE, can also be applied to the image of the other person of the video chat.
[0060] While the discussion of the RTE performed by the system 100 has been primarily devoted to video processing, the system 100 is also capable of performing RTE on audio input. Figures 4A and 4B show predefined and custom video libraries, but they could also be shown as predefined and custom audio libraries. Just like their video counterparts, the effects from the audio libraries can be triggered by a user or automatically by the system. Similarly, the audio effect will be substantially real time because of the delay required to process the effect and the transmission delay across the network 128.
[0061] The audio RTE would be initiated in the same manner as the video RTE shown in Figures 5A-5F, except applying audio RTE instead of video RTE. For example, when choosing the RTE, the user applying the effect would hear a short preview that the other users could not hear. The duration of the audio RTE would also be configurable, ranging from automatically stopping the RTE after a few seconds to waiting for a user to cancel the effect. [0062] Examples of possible audio RTE include shifting the pitch of a user's voice up or down, and adding echo and reverberation. Modifications to a user's voice are not the only application of audio RTE. Pre-recorded sounds, similar to what disc jockeys use during radio broadcasts, would be available for users to initiate or could be automatically added by the system 100. The combination of audio and video RTE will make the communications experience using the system 100 much richer, more engaging and more interactive than other forms of communications. [0063] Figure 9A shows how the video chat with RTE could be implemented with online games in accordance with one embodiment of the present invention. This embodiment shows the two users off to the side of the game being played; however, their video images could be placed over the game screen 902 in order to maximize the game viewing area of a display. In this embodiment, the RTE can be initiated by any of the users or automatically by the system, depending on the software being manipulated.
[0064] User initiation of the RTE can be accomplished using the controller or by voice command. Automatic initiation of the RTE by the system would be triggered by the occurrence of specific events. For example, if the system 100 were running a baseball simulation and one user hit a grand slam, the video RTE would turn their eyes into animated dollar signs and the audio RTE would play a cash register "cha-ching" sound in the background. Conversely, their opponent could have animated tears placed on their face and the sound of a baby crying in the background.
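Automatic initiation, as in the baseball example above, reduces to a table that maps game events to video and audio RTE for each participant; the following sketch uses the names from that example but is otherwise invented:

```python
"""Sketch of automatic RTE initiation from paragraph [0064]: the game
reports events, and a table maps each event to video/audio RTEs for the
batter and the opponent. Event and effect names are illustrative."""

EVENT_RTES = {
    "grand_slam": {
        "batter":   {"video": "dollar_sign_eyes", "audio": "cash_register"},
        "opponent": {"video": "animated_tears",   "audio": "baby_crying"},
    },
}

def on_game_event(event: str, players: dict) -> None:
    for role, rte in EVENT_RTES.get(event, {}).items():
        user = players[role]
        # In the real system these would be blended into the live A/V feeds.
        print(f"{user}: apply video '{rte['video']}' + audio '{rte['audio']}'")

on_game_event("grand_slam", {"batter": "m3rCy Flu5]-[", "opponent": "Marks"})
```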
[0065] Figure 9B demonstrates how the video chat could complement an online text/voice chat application (e.g., instant messaging "IM") in accordance with one embodiment of the present invention. This application would be a step forward from current instant messaging and video messaging because of the ability to apply RTE to both the video and audio from other users. The benefit of this application is that it would allow people who have lost their voice the ability to communicate using an instant messenger typing system while expressively communicating their emotions using the video RTE. The application could also be used to convert voice chat into text chat or even output Braille to assist the hearing and sight impaired.
[0066] Figure 10 shows how the RTE augments the video and audio output based on feedback from the user after the RTE is triggered, in accordance with one embodiment of the present invention. The system 100 has a microphone 1002, a video camera 102C and a controller 102A attached. An RTE is triggered at time zero, either automatically or by the user. Once the RTE is triggered, the system 100 begins processing the effect and begins receiving feedback from the video camera 102C and the microphone 1002 regarding the user's facial expression and voice. At time 1, the video camera and microphone receive the image and sound shown at time 1 on their respective graphs 1004 and 1006. The RTE is processed and the system outputs the image and sound shown in graph 1008. Progressing to time 2, the user continues to speak, but as seen at time 2 on the graph 1006, the user is speaking louder than at time 1. Accordingly, the RTE modifies the output image, making the user's head slightly larger and opening the user's mouth slightly wider.
[0067] At time 3, the user continues to speak, but louder than at time 2, as shown in the graph 1006. Additionally, the user has furrowed his brow and opened his mouth wider, as shown in the graph 1004. The RTE receives this feedback and further increases the size of the user's head, makes the user's eyebrows bushier and opens the user's mouth even more. At this point an audio RTE could be implemented, such as making the user's voice deeper and more menacing or, conversely, gradually increasing in pitch. Finally, at time 4, the user has continued to become louder than at time 3, as indicated by the intensity on the graph 1006. The mouth of the user is wide open and the eyebrows indicate anger, as shown in the graph 1004. As shown on graph 1008, the RTE has increased the size of the user's head, made the eyebrows bushier, and really opened up the user's mouth. The user's eyes could also be animated with flames or simply turned a menacing shade of red to further convey anger.
[0068] Another example of where video feedback could be used to augment an RTE is in a vomiting RTE. After the triggering of the vomit RTE, the video camera 102C can monitor the user's mouth, only allowing vomit to spew forth if the user's mouth is open. Along the same vein, if the user's mouth remains closed, the RTE could be animated to show the user's cheeks expanding and their face turning green. The listed examples are not a complete list of the possible effects that could be enhanced by feedback from the video camera 102C and the microphone 1002. It should be understood that there are countless different embodiments where the use of a microphone and camera for feedback can enhance RTE.
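The feedback loop of Figure 10 can be summarized as a function from the monitored inputs (voice level, expression) to the RTE's parameters at each time step; the scaling rule in this sketch is invented for illustration:

```python
"""Sketch of the Figure 10 feedback loop: at each time step the
microphone level and a crude expression reading drive the parameters of
an already-triggered RTE, so the effect grows as the user gets louder
and angrier."""

def update_rte(loudness: float, brow_furrow: float) -> dict:
    """Map normalized inputs (0..1) to exaggerated RTE parameters."""
    return {
        "head_scale": 1.0 + 0.8 * loudness,      # louder -> bigger head
        "mouth_open": loudness,                  # louder -> wider mouth
        "brow_bushiness": brow_furrow,           # angrier -> bushier brows
        "eye_tint_red": 1.0 if (loudness > 0.9 and brow_furrow > 0.9) else 0.0,
    }

# Times 1..4 from graphs 1004/1006: voice and furrowed brow both ramp up.
samples = [(0.2, 0.0), (0.4, 0.1), (0.7, 0.6), (1.0, 1.0)]
for t, (loud, furrow) in enumerate(samples, start=1):
    print(f"t={t}:", update_rte(loud, furrow))
```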
[0069] Figure 11 schematically illustrates the overall system architecture of the Sony® Playstation 3® entertainment device, a console that may be compatible with controllers for implementing an avatar control system in accordance with one embodiment of the present invention. A system unit 1100 is provided, with various peripheral devices connectable to the system unit 1100. The system unit 1100 comprises: a Cell processor 1128; a Rambus® dynamic random access memory (XDRAM) unit 1126; a Reality Synthesizer graphics unit 1130 with a dedicated video random access memory (VRAM) unit 1132; and an I/O bridge 1134. The system unit 1100 also comprises a Blu Ray® Disk BD-ROM® optical disk reader 1140 for reading from a disk 1140a and a removable slot-in hard disk drive (HDD) 1136, accessible through the I/O bridge 1134. Optionally, the system unit 1100 also comprises a memory card reader 1138 for reading compact flash memory cards, Memory Stick® memory cards and the like, which is similarly accessible through the I/O bridge 1134.
[0070] The I/O bridge 1134 also connects to six Universal Serial Bus (USB) 2.0 ports 1124; a gigabit Ethernet port 1122; an IEEE 802.11b/g wireless network (Wi-Fi) port 1120; and a Bluetooth® wireless link port 1118 capable of supporting up to seven Bluetooth connections.
[0071] In operation, the I/O bridge 1134 handles all wireless, USB and Ethernet data, including data from one or more game controllers 1102. For example, when a user is playing a game, the I/O bridge 1134 receives data from the game controller 1102 via a Bluetooth link and directs it to the Cell processor 1128, which updates the current state of the game accordingly.
[0072] The wireless, USB and Ethernet ports also provide connectivity for other peripheral devices in addition to game controllers 1102, such as: a remote control 1104; a keyboard 1106; a mouse 1108; a portable entertainment device 1110 such as a Sony Playstation Portable® entertainment device; a video camera such as an EyeToy® video camera 1112; and a microphone headset 1114. Such peripheral devices may therefore in principle be connected to the system unit 1100 wirelessly; for example the portable entertainment device 1110 may communicate via a Wi-Fi ad-hoc connection, whilst the microphone headset 1114 may communicate via a Bluetooth link.
[0073] The provision of these interfaces means that the Playstation 3 device is also potentially compatible with other peripheral devices such as digital video recorders (DVRs), set-top boxes, digital cameras, portable media players, Voice over IP telephones, mobile telephones, printers and scanners.
[0074] In addition, a legacy memory card reader 1116 may be connected to the system unit via a USB port 1124, enabling the reading of memory cards 1148 of the kind used by the Playstation® or Playstation 2® devices.
[0075] In the present embodiment, the game controller 1102 is operable to communicate wirelessly with the system unit 1100 via the Bluetooth link. However, the game controller 1102 can instead be connected to a USB port, thereby also providing power by which to charge the battery of the game controller 1102. In addition to one or more analog joysticks and conventional control buttons, the game controller is sensitive to motion in six degrees of freedom, corresponding to translation and rotation in each axis. Consequently gestures and movements by the user of the game controller may be translated as inputs to a game in addition to or instead of conventional button or joystick commands. Optionally, other wirelessly enabled peripheral devices such as the Playstation™ Portable device may be used as a controller. In the case of the Playstation™ Portable device, additional game or control information (for example, control instructions or number of lives) may be provided on the screen of the device. Other alternative or supplementary control devices may also be used, such as a dance mat (not shown), a light gun (not shown), a steering wheel and pedals (not shown) or bespoke controllers, such as a single or several large buttons for a rapid-response quiz game (also not shown).
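As a rough illustration of the paragraph above, the sketch below maps hypothetical six-degree-of-freedom motion samples to game commands. The sample fields, thresholds and command names are assumptions made for illustration; they are not the console's actual controller API.

```python
# Hypothetical translation of 6-DOF controller motion into game inputs.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ControllerSample:
    dx: float     # translation deltas along each axis
    dy: float
    dz: float
    pitch: float  # rotation deltas about each axis
    yaw: float
    roll: float

def gesture_to_input(s: ControllerSample) -> Optional[str]:
    """Interpret a motion sample as a game command, if any."""
    if s.dz < -0.5:
        return "THRUST"    # a sharp push of the controller forward
    if abs(s.roll) > 0.8:
        return "STEER_LEFT" if s.roll > 0 else "STEER_RIGHT"
    if s.dy > 0.5:
        return "JUMP"      # a quick upward lift
    return None            # fall back to button or joystick commands
```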
[0076] The remote control 1104 is also operable to communicate wirelessly with the system unit 1100 via a Bluetooth link. The remote control 1104 comprises controls suitable for the
operation of the Blu-ray™ Disk BD-ROM reader 1140 and for the navigation of disk content.
[0077] The Blu-ray™ Disk BD-ROM reader 1140 is operable to read CD-ROMs compatible with the Playstation and PlayStation 2 devices, in addition to conventional pre-recorded and recordable CDs, and so-called Super Audio CDs. The reader 1140 is also operable to read DVD-ROMs compatible with the Playstation 2 and PlayStation 3 devices, in addition to conventional pre-recorded and recordable DVDs. The reader 1140 is further operable to read BD-ROMs compatible with the Playstation 3 device, as well as conventional pre-recorded and recordable Blu-ray Disks.
[0078] The system unit 1100 is operable to supply audio and video, either generated or decoded by the Playstation 3 device via the Reality Synthesizer graphics unit 1130, through audio and video connectors to a display and sound output device 1142 such as a monitor or television set having a display 1144 and one or more loudspeakers 1146. The audio connectors 1150 may include conventional analogue and digital outputs, whilst the video connectors 1152 may variously include component video, S-video, composite video and one or more High Definition Multimedia Interface (HDMI) outputs. Consequently, video output may be in formats such as PAL or NTSC, or in 720p, 1080i or 1080p high definition.

[0079] Audio processing (generation, decoding and so on) is performed by the Cell processor 1128. The Playstation 3 device's operating system supports Dolby® 5.1 surround sound, DTS® surround, and the decoding of 7.1 surround sound from Blu-ray® disks.
[0080] In the present embodiment, the video camera 1112 comprises a single charge coupled device (CCD), an LED indicator, and hardware-based real-time data compression and encoding apparatus, so that compressed video data may be transmitted in an appropriate format such as an intra-image based MPEG (Moving Picture Experts Group) standard for decoding by the system unit 1100. The camera LED indicator is arranged to illuminate in response to appropriate control data from the system unit 1100, for example to signify adverse lighting conditions. Embodiments of the video camera 1112 may variously connect to the system unit 1100 via a USB, Bluetooth or Wi-Fi communication port. Embodiments of the video camera may include one or more associated microphones and also be capable of transmitting audio data. In embodiments of the video camera, the CCD may have a resolution suitable for high-definition video capture. In use, images captured by the video camera may, for example, be incorporated within a game or interpreted as game control inputs.

[0081] In general, in order for successful data communication to occur with a peripheral device such as a video camera or remote control via one of the communication ports of the system unit 1100, an appropriate piece of software such as a device driver should be provided. Device driver technology is well known and will not be described in detail here, except to say that the skilled man will be aware that a device driver or similar software interface may be required in the present embodiment.
[0082] Referring now to Figure 12, the Cell processor 1128 has an architecture comprising four basic components: external input and output structures comprising a memory controller 1260 and a dual bus interface controller 1270A,B; a main processor referred to as the Power Processing Element 1250; eight co-processors referred to as Synergistic Processing Elements (SPEs) 1210A-H; and a circular data bus connecting the above components, referred to as the Element Interconnect Bus 1280. The total floating point performance of the Cell processor is 218 GFLOPS, compared with the 6.2 GFLOPS of the Playstation 2 device's Emotion Engine.

[0083] The Power Processing Element (PPE) 1250 is based upon a two-way simultaneous multithreading Power 970 compliant PowerPC core (PPU) 1255 running with an internal clock of 3.2 GHz. It comprises a 512 kB level 2 (L2) cache and a 32 kB level 1 (L1) cache. The PPE 1250 is capable of eight single-precision operations per clock cycle, translating to 25.6 GFLOPS at 3.2 GHz. The primary role of the PPE 1250 is to act as a controller for the Synergistic Processing Elements 1210A-H, which handle most of the computational workload. In operation, the PPE 1250 maintains a job queue, scheduling jobs for the Synergistic Processing Elements 1210A-H and monitoring their progress. Consequently, each Synergistic Processing Element 1210A-H runs a kernel whose role is to fetch a job, execute it and synchronize with the PPE 1250.
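The controller/worker split described in paragraph [0083] is a conventional job-queue pattern. The Python sketch below is a schematic analogue only, not Cell code: a scheduler plays the PPE's role while worker threads play the SPE kernels' role of fetching a job, executing it, and synchronizing back.

```python
# Schematic analogue of the PPE/SPE scheduling pattern described above.
import queue
import threading

jobs: queue.Queue = queue.Queue()
results: queue.Queue = queue.Queue()

def spe_kernel(worker_id: int) -> None:
    """Fetch a job, execute it, and report back (the SPE kernel's role)."""
    while True:
        job = jobs.get()
        if job is None:                            # sentinel: no more work
            break
        results.put((worker_id, job, job ** 2))    # stand-in computation

workers = [threading.Thread(target=spe_kernel, args=(i,)) for i in range(8)]
for w in workers:
    w.start()
for job in range(16):                              # the PPE's role: schedule jobs
    jobs.put(job)
for _ in workers:                                  # one sentinel per worker
    jobs.put(None)
for w in workers:
    w.join()
print(results.qsize(), "jobs completed")
```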
[0084] Each Synergistic Processing Element (SPE) 1210A-H comprises a respective Synergistic Processing Unit (SPU) 1220A-H, and a respective Memory Flow Controller (MFC) 1240A-H comprising in turn a respective Dynamic Memory Access Controller (DMAC) 1242A-H, a respective Memory Management Unit (MMU) 1244A-H and a bus interface (not shown). Each SPU 1220A-H is a RISC processor clocked at 3.2 GHz and comprising 256 kB local RAM 1230A-H, expandable in principle to 4 GB. Each SPE gives a theoretical 25.6 GFLOPS of single-precision performance. An SPU can operate on 4 single-precision floating point numbers, 4 32-bit integers, 8 16-bit integers, or 16 8-bit integers in a single clock cycle. In the same clock cycle it can also perform a memory operation. The SPU 1220A-H does not directly access the system memory XDRAM 1126; the 64-bit addresses formed by the SPU 1220A-H are passed to the MFC 1240A-H, which instructs its DMA controller 1242A-H to access memory via the Element Interconnect Bus 1280 and the memory controller 1260.
[0085] The Element Interconnect Bus (EIB) 1280 is a logically circular communication bus internal to the Cell processor 1128 which connects the above processor elements, namely the PPE 1250, the memory controller 1260, the dual bus interface 1270A,B and the 8 SPEs 1210A-H, totaling 12 participants. Participants can simultaneously read and write to the bus at a rate of 8 bytes per clock cycle. As noted previously, each SPE 1210A-H comprises a DMAC 1242A-H for scheduling longer read or write sequences. The EIB comprises four channels, two each in clockwise and anti-clockwise directions. Consequently, for twelve participants, the longest step-wise data flow between any two participants is six steps in the appropriate direction. The theoretical peak instantaneous EIB bandwidth for 12 slots is therefore 96 bytes per clock, in the event of full utilization through arbitration between participants. This equates to a theoretical peak bandwidth of 307.2 GB/s (gigabytes per second) at a clock rate of 3.2 GHz.
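The quoted figures can be checked directly: twelve participants moving 8 bytes per clock gives 96 bytes per clock, and at 3.2 GHz that is 307.2 GB/s.

```python
# Verifying the EIB peak-bandwidth arithmetic quoted above.
slots = 12                    # PPE + memory controller + dual bus interface + 8 SPEs
bytes_per_slot_per_clock = 8
clock_hz = 3.2e9

peak_bytes_per_clock = slots * bytes_per_slot_per_clock   # 96 bytes per clock
peak_bandwidth_gb_s = peak_bytes_per_clock * clock_hz / 1e9
print(peak_bandwidth_gb_s)    # 307.2 GB/s, matching the figure in the text
```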
[0086] The memory controller 1260 comprises an XDRAM interface 1262, developed by Rambus Incorporated. The memory controller interfaces with the Rambus XDRAM 1126 with a theoretical peak bandwidth of 25.6 GB/s.
[0087] The dual bus interface 1270A,B comprises a Rambus FlexIO® system interface 1272A,B. The interface is organized into 12 channels, each 8 bits wide, with five paths being inbound and seven outbound. This provides a theoretical peak bandwidth of 62.4 GB/s (36.4 GB/s outbound, 26 GB/s inbound) between the Cell processor and the I/O bridge 1134 via controller 1272A, and the Reality Synthesizer graphics unit 1130 via controller 1272B.

[0088] Data sent by the Cell processor 1128 to the Reality Synthesizer graphics unit 1130 will typically comprise display lists, being a sequence of commands to draw vertices, apply textures to polygons, specify lighting conditions, and so on.
[0089] Figure 13A is an example of a situation where the computer system 100 would be implemented in accordance with one embodiment of the present invention. In this example
the output from the computer system 100 has divided the screen 204b into a game area 206b and an avatar area 222b. The avatar area 222b, in this example, is displaying six different avatars 208b, 210b, 212b, 214b, 216b and 218b, representative of the six people engaged in the game play displayed in the game area 206b. In this situation the user 200b has a user avatar 216b displayed in the avatar area 222b. Note that the avatars shown in Figure 13A are representative of possible avatars, and an actual implementation can look significantly different from what is shown in Figure 13A.
[0090] In this embodiment, users can customize and modify their avatar. Figure 13B is an example of a shell 210'b used to create the avatar 210b in accordance with one embodiment of the present invention. Customization options for one embodiment can include height, face shape 264b, hair/accessories 262b, eyes 268b, eyebrows 266b, nose 270b, skin tone, clothing (based on shirts 278b, sleeves 276b, pants 282b, and shoes 284b), and environment (indicated by the cross-hatching). Figure 13C is an example of a customized avatar in accordance with one embodiment of the present invention. In this case a user has populated the shell 210'b from Figure 13B, resulting in the avatar 210b. The previous list of customizations is not exhaustive and is meant to convey that a user's avatar can be customized to closely match the user the avatar is meant to represent. Other customizations not listed that help an avatar resemble the user are within the scope of this disclosure.
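One hypothetical way to represent the customization options listed above is as a shell structure populated field by field, mirroring how the shell 210'b is populated to produce the avatar 210b. The field names and default values below are illustrative assumptions, not an actual data format.

```python
# Hypothetical avatar shell mirroring the customization options above.
from dataclasses import dataclass, field

@dataclass
class AvatarShell:
    height: float = 1.0
    face_shape: str = "oval"
    hair_accessories: str = "short hair"
    eyes: str = "brown"
    eyebrows: str = "thin"
    nose: str = "medium"
    skin_tone: str = "tan"
    clothing: dict = field(default_factory=lambda: {
        "shirt": "t-shirt", "sleeves": "short",
        "pants": "jeans", "shoes": "sneakers"})
    environment: str = "default"

# Populating the shell yields a customized avatar:
avatar_210b = AvatarShell(face_shape="round", hair_accessories="curly hair",
                          eyes="green")
```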
[0091] However, users can choose to make their avatar not representative of their physical self and instead be more playful when selecting their avatar. To this end, programmers can provide preset avatar themes that reflect different characteristics a user can express. Examples of preset avatar themes include a grumpy old man, ditzy cheerleader, tough action hero, geeky scientist, stoner wastoid, gangsta rapper, and a redneck. The example preset avatar themes are not exhaustive; they are provided as a guide to possible themes and are not meant to be restrictive.
[0092] The ability of the user's customized avatar to respond to external real-world stimuli can be a function of the ambient microphones and a video camera connected to the computer system 100, in addition to other possible input devices. In one embodiment the ambient microphones 102Bb and the video camera 102b are integrated into one unit, shown as the AV input 224b. The ambient microphones 102Bb are composed of an array of unidirectional microphones designed to pick up sounds from an environment associated with a user where the computer system 100 is located. This feature may be linked to the sound capture ability of the computer system 100, which is able to identify and recognize various sounds. For example, in one embodiment, the computer system 100 can recognize the sound of a door 250b opening and/or the sound of real-world music 260b playing in the background.

[0093] In one embodiment, the ambient microphones 102Bb help the computer system 100 locate the area in the room from which a sound is emanating, giving the avatars for the remote players the option to turn their heads in the direction of the real-world sound. Furthermore, depending on the type of sound identified and the chosen preset avatar theme, the reaction of the avatars can differ. For example, as shown in Figures 13D-13G, if the ambient microphones 102Bb detect music in the background environment, an avatar might automatically start dancing in time with the rhythm of the real-world music 260b. It is also possible that another avatar would automatically become agitated by the same music, and speech balloons filled with "Turn down the music!" would appear over the avatar's head. The speech balloons above that avatar could grow larger, and the avatar more agitated, the louder the real-world music 260b is played. Alternatively, the avatars could be given pre-recorded voices and actually "speak" in response to specific real-world stimuli. In the previous example an avatar could actually say, "Turn down the music!" if background music is detected. In another example, shown in Figures 13H-13K, a male opens the door 250b and walks into the room where the system 100b is located, and the various avatars could look toward the door or ask "Who just came in?" Upon the computer system 100 receiving the speech "Hey guys!", the computer system 100 is able to recognize the voice as male, and any female avatars could respond with catcalls and whistling in the direction of the male voice.
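The theme-dependent reactions described above amount to a lookup from an identified sound and an avatar's theme to a reaction. The sketch below is hypothetical; the sound labels, theme names and reactions are illustrative placeholders drawn loosely from the examples in paragraph [0093].

```python
# Hypothetical mapping from (identified sound, avatar theme) to a reaction.
REACTIONS = {
    ("music", "cheerleader"): "dance in time with the rhythm",
    ("music", "grumpy_old_man"): 'speech balloon: "Turn down the music!"',
    ("door_opening", "any"): "turn head toward the sound's direction",
    ("male_voice", "female"): "catcalls and whistling toward the voice",
}

def avatar_reaction(sound: str, theme: str) -> str:
    """Pick a reaction for an identified sound, given the avatar's theme."""
    return (REACTIONS.get((sound, theme))
            or REACTIONS.get((sound, "any"))
            or "no reaction")

print(avatar_reaction("music", "grumpy_old_man"))
print(avatar_reaction("door_opening", "cheerleader"))
```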
[0094] Beyond automated responses from the avatar based on a user-selected theme, the users could control the avatar in real time using voice commands input through a microphone or using the controller. The user 200b could change any of the adjustable characteristics of their avatar, including the avatar theme. The ability to manually modify the avatar in real time would give users the option of changing their avatar as often as people change their moods. Users could also manually control specific aspects of the avatar. For example, if a user wanted their avatar to have a specific facial expression, commands issued through the controller or special voice commands would initiate that facial expression. A user could also initiate Real-Time Effects (RTE) that would be imposed on, or incorporated into, their avatar. For example, a user could light the hair of their avatar on fire or make the eyes of their avatar glow red.
[0095] Embodiments may include capturing depth data to better identify the real-world user and to direct activity of an avatar or scene. The object can be something the person is holding or can also be the person's hand. In this description, the terms "depth camera" and "three-dimensional camera" refer to any camera that is capable of obtaining distance or depth information as well as two-dimensional pixel information. For example, a depth camera can utilize controlled infrared lighting to obtain distance information. Another exemplary depth camera can be a stereo camera pair, which triangulates distance information using two standard cameras. Similarly, the term "depth sensing device" refers to any type of device that is capable of obtaining distance information as well as two-dimensional pixel information.

[0096] Recent advances in three-dimensional imagery have opened the door for increased possibilities in real-time interactive computer animation. In particular, new "depth cameras" provide the ability to capture and map the third dimension in addition to normal two-dimensional video imagery. With the new depth data, embodiments of the present invention allow the placement of computer-generated objects in various positions within a video scene in real time, including behind other objects.
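For the stereo camera pair mentioned in paragraph [0095], distance follows from the classic pinhole triangulation relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the two cameras, and d the disparity in pixels. The numbers in the sketch below are made up for illustration.

```python
# Textbook stereo triangulation: depth from disparity between two cameras.
def depth_from_disparity(focal_length_px: float,
                         baseline_m: float,
                         disparity_px: float) -> float:
    """Pinhole stereo relation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# e.g. 700 px focal length, 6 cm baseline, 35 px disparity -> 1.2 m away
print(depth_from_disparity(700.0, 0.06, 35.0))
```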
[0097] Moreover, embodiments of the present invention provide real-time interactive gaming experiences for users. For example, users can interact with various computer-generated objects in real time. Furthermore, video scenes can be altered in real time to enhance the user's game experience. For example, computer-generated costumes can be inserted over the user's clothing, and computer-generated light sources can be utilized to project virtual shadows within a video scene. Hence, using the embodiments of the present invention and a depth camera, users can experience an interactive game environment within their own living rooms. Similar to normal cameras, a depth camera captures two-dimensional data for a plurality of pixels that comprise the video image. These values are color values for the pixels, generally red, green, and blue (RGB) values for each pixel. In this manner, objects captured by the camera appear as two-dimensional objects on a monitor.
[0098] Embodiments of the present invention also contemplate distributed image processing configurations. For example, the invention is not limited to captured image and display image processing taking place in one or even two locations, such as in the CPU or in the CPU and one other element. For example, the input image processing can just as readily take place in an associated CPU, processor or device that can perform processing; essentially all of the image processing can be distributed throughout the interconnected system. Thus, the present invention is not limited to any specific image processing hardware circuitry and/or software. The embodiments described herein are also not limited to any specific combination of general hardware circuitry and/or software, nor to any particular source for the instructions executed by processing components.
[0099] With the above embodiments in mind, it should be understood that the invention may employ various computer-implemented operations involving data stored in computer systems. These operations include operations requiring physical manipulation of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. Further, the manipulations performed are often referred to in terms, such as producing, identifying, determining, or comparing.
[00100] The present invention may be used as presented herein or in combination with other user input mechanisms, including mechanisms that track the angular direction of the sound and/or mechanisms that track the position of the object actively or passively, mechanisms using machine vision, and combinations thereof. The object tracked may include ancillary controls or buttons that manipulate feedback to the system, and such feedback may include, but is not limited to, light emission from light sources, sound distortion means, or other suitable transmitters and modulators, as well as buttons, pressure pads, etc. that may influence the transmission or modulation of the same, encode state, and/or transmit commands from or to the device being tracked.
[00101] The invention may be practiced with other computer system configurations, including game consoles, gaming computers or computing devices, hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers and the like. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a network. For instance, on-line gaming systems and software may also be used.
[00103] Any of the operations described herein that form part of the invention are useful machine operations. The invention also relates to a device or an apparatus for performing these operations. The apparatus may be specially constructed for the required purposes, such as the carrier network discussed above, or it may be a general purpose computer selectively activated or configured by a computer program stored in the computer. In particular, various general purpose machines may be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations.
[00104] The invention can also be embodied as computer readable code on a computer readable medium. The computer readable medium may be any data storage device that can store data which can thereafter be read by a computer system. Examples of the computer readable medium include hard drives, network attached storage (NAS), read-only memory, random-access memory, FLASH-based memory, CD-ROMs, CD-Rs, CD-RWs, DVDs, magnetic tapes, and other optical and non-optical data storage devices. The computer readable medium can also be distributed over network-coupled computer systems so that the computer readable code may be stored and executed in a distributed fashion.

[00105] Although the foregoing invention has been described in some detail for purposes of clarity of understanding, it will be apparent that certain changes and modifications may be practiced within the scope of the embodiments defined herein. Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and the invention is not to be limited to the details given herein, but may be modified within the scope and equivalents of the described embodiments.

[00106] What is claimed is:
Claims
1. A computer implemented method for interactively modifying a video image, the video image transmitted between a first user and a second user using a computer program that is executed on at least one computer in a computer network and each of the first user and the second user interacting through a respective computing system that is at least partially executing the computer program, comprising:
providing a video capture system interfaced with the computer program;
capturing real-time video of the first user;
identifying components of the video image of the first user that can be modified using real-time effects in the captured real-time video;
identifying controller input from either the first user or the second user, the controller input being detected by the computing system, and the identification of the controller input determines which of the identified components of the first user will be modified;
applying real-time effects to the identified components of the first user in response to the identified controller input, and the real-time video captured of the first user is augmented with the real-time effects; and
displaying augmented real-time video of the first user on a screen connected to the computing system of one or both of the first and second users.
2. The computer implemented method of claim 1, wherein interactively modifying a video image includes:
identification of pixel regions of the video image to identify characteristics of the first user;
tracking the pixel regions over one or more frames; and
applying changes to pixel data contained in the pixel regions so that the video image is interactively modified.
3. The computer implemented method of claim 1, wherein the video capture system includes a camera that captures image frames and digitizes the image frames to define a pixel map of the image frames.
4. The computer implemented method of claim 2, wherein the identified components of the video image relate to the characteristics of the first user.
5. The computer implemented method of claim 4, wherein the characteristics of the first user include facial and body components, and the facial and body components are identified by recognizing characteristics that are common in facial and body components.
6. The computer implemented method of claim 5, wherein a location of eyes of the first user defines characteristics that are common facial components.
7. The computer implemented method of claim 1, wherein controller input is defined by user selection of one of button presses, motion, light indicators, relative positional movement, or a combination thereof.
8. The computer implemented method of claim 7, wherein the controller input detected by the computing system is via a wired or wireless link.
9. The computer implemented method of claim 1, wherein determining which of the identified components of the first user will be modified is assisted by mapping specific controller inputs to the identified components of the video image.
10. The computer implemented method of claim 2, wherein applying real-time effects to the identified components includes directing the application of changes to the pixel data contained in the pixel regions of the video image, the real-time effects including one of pre-rendered video animations, themes, custom video animations, and combinations thereof.
11. The computer implemented method of claim 10, wherein when the changes to the pixel data are filtered onto the captured real-time video of the first user, the captured real-time video of the first user is in an augmented state.
12. A computer implemented method for interactively modifying a video image and audio, the video image and audio transmitted between a first user and a second user using a computer program that is executed on at least one computer in a computer network to enable a chat communication, and each of the first user and the second user interacting through a respective computing system that is at least partially executing the computer program, comprising:
providing a video and audio capture system on each of the respective computing systems of the first and second users, the video and audio capture system being interfaced with the computer program to enable the chat communication;
capturing real-time video and audio of the first user through the video and audio capture system connected to the computing system of the first user;
identifying components of the video image of the first user that can be modified using real-time effects in the captured real-time video;
identifying audio segments of audio captured by the video and audio capture system that can be modified using real-time effects;
identifying user input from either the first user or the second user, and the identification of the user input determines which of the identified audio segments of the first user will be modified;
applying real-time effects to either one or both of the identified components of the first user or the audio segments in response to the identified user input; and
outputting real-time video and audio of the first user on a screen connected to the computing system of one or both of the first and second users, the output real-time video and audio including the applied real-time effects.
13. The computer implemented method of claim 12, wherein interactively modifying a video image includes:
identification of pixel regions of the video image to identify characteristics of the first user;
tracking the pixel regions over one or more frames; and
applying changes to pixel data contained in the pixel regions so that the video image is interactively modified.
14. The computer implemented method of claim 12, wherein the video capture system includes a camera that captures image frames and digitizes the image frames to define a pixel map of the image frames.
15. The computer implemented method of claim 13, wherein applying real-time effects to the identified components includes directing the application of changes to the pixel data contained in the pixel regions of the video image, the real-time effects including one of pre-rendered video animations, themes, custom video animations, and combinations thereof.
16. A computer implemented method for interactively modifying a video image during chat communication in conjunction with game play over a network, comprising:
providing a video and audio capture system on a respective computing system of first and second users, the video and audio capture system being interfaced with a computer program to enable the chat communication;
capturing real-time video and audio of the first user through the video and audio capture system connected to the computing system of the first user;
identifying components of the video image of the first user that can be modified using real-time effects in the captured real-time video;
identifying audio segments of audio captured by the video and audio capture system that can be modified using real-time effects;
identifying user input from either the first user or the second user, and the identification of the user input determines which of the identified components or audio segments of the first user will be modified;
applying real-time effects to either one or both of the identified components of the first user or the audio segments in response to the identified user input; and
outputting real-time video and audio of the first user on a screen connected to the computing system of one or both of the first and second users, the output real-time video and audio including the applied real-time effects.
17. The computer implemented method of claim 16, wherein interactively modifying a video image includes:
identification of pixel regions of the video image to identify characteristics of the first user;
tracking the pixel regions over one or more frames; and
applying changes to pixel data contained in the pixel regions so that the video image is interactively modified.
18. The computer implemented method of claim 16, wherein the video capture system includes a camera that captures image frames and digitizes the image frames to define a pixel map of the image frames.
19. The computer implemented method of claim 18, wherein user input is defined by user selection of one of button presses, motion, light indicators, relative positional movement, or a combination thereof.
20. The computer implemented method of claim 17, wherein applying real-time effects to the identified components includes directing the application of changes to the pixel data contained in the pixel regions of the video image, and the real-time effects including one of pre-rendered video animations, themes, custom video animations, and combinations thereof.
21. A computer implemented method for interactively animating an avatar in response to real world input, the avatar transmitted between a first user and a second user using a computer program that is executed on at least one computer in a computer network and each of the first user and the second user interacting through a respective computing system that is at least partially executing the computer program, comprising:
identifying components of the avatar representing the first user that can be modified using real-time effects;
identifying controller input from either the first user or the second user, the controller input being detected by the computing system, and the identification of the controller input determines which of the identified components of the avatar representing the first user will be modified;
applying the real-time effects to the identified components of the avatar representing the first user in response to the identified controller input, the avatar of the first user being augmented with the real-time effects; and
displaying the augmented avatar of the first user on a screen connected to the computing system of one or both of the first and second users.
22. The computer implemented method of claim 21, wherein interactively modifying an avatar image includes:
identification of pixel regions of the avatar to identify characteristics of the avatar representing the first user;
tracking the pixel regions over one or more frames; and
applying changes to pixel data contained in the pixel regions so that the avatar is interactively modified.
23. The computer implemented method as described in claim 21, further comprising:
providing a video capture system interfaced with the computer program;
capturing real-time video of the first user;
processing the real-time video to identify at least one facial expression of the first user; and
periodically updating a facial expression of the avatar representing the first user to correspond with the facial expression of the first user.
24. The computer implemented method of claim 23, wherein the video capture system includes a camera that captures image frames and digitizes the image frames to define a pixel map of the image frames.
25. The computer implemented method of claim 24, wherein a location of eyes of the first user are used to determine facial characteristics that define the facial expression of the first user.
26. The computer implemented method of claim 21, wherein controller input is defined by user selection of one of button presses, motion, light indicators, relative positional movement, or a combination thereof.
27. The computer implemented method of claim 26, wherein the controller input detected by the computing system is via a wired or wireless link.
28. The computer implemented method of claim 21, wherein determining which of the identified components of the first user will be modified is assisted by mapping specific controller inputs to the identified components of the avatar.
29. The computer implemented method of claim 21, further comprising:
providing an audio capture system interfaced with the computer program;
capturing real-time audio of the ambient sounds within an environment associated with the first user;
processing the ambient sounds to identify particular sounds; and
automatically applying real-time effects to avatars representing the first user or the second user in response to the identified particular sounds within the ambient sounds.
30. The computer implemented method as described in claim 29, wherein the audio capture system includes an array of unidirectional microphones that capture sounds from different portions of the environment associated with the first user.
31. A computer implemented method for automatically modifying an avatar image in substantial real-time in conjunction with communication over a network, comprising:
providing a video and audio capture system on a respective computing system of a first and a second user, the video and audio capture system being interfaced with a computer program to enable the real-time communication;
detecting real-time changes in facial expression of the first user in the captured video of the first user;
detecting real-time changes in vocal characteristics of the first user;
automatically applying real-time effects to the avatar image representing the first user in response to the monitored real-time video and audio of the first user; and
outputting the avatar image representing the first user with the automatically applied real-time effect on a screen connected to the computing system of one or both of the first and second users.
32. The computer implemented method of claim 31, wherein the audio capture system includes a microphone that captures vocal sounds from the first user and digitizes the vocal sounds to define a vocal characteristic of the first user.
33. The computer implemented method of claim 31, wherein the video capture system includes a camera that captures image frames and digitizes the image frames to define a pixel map of the image frames.
34. The computer implemented method of claim 33, wherein the detecting real-time changes in facial expression includes:
identifying pixel regions of the image frames to identify facial characteristics of the first user; and
tracking and comparing the pixel regions over one or more frames.
35. The computer implemented method of claim 31, wherein detecting real-time changes in vocal characteristics includes:
capturing a first vocal characteristic of the first user at a first time;
capturing a second vocal characteristic of the first user at a second time;
comparing the first vocal characteristic to the second vocal characteristic; and
determining a magnitude for the real-time effects.
36. The computer implemented method of claim 35, further comprising: applying the real-time effects with the magnitude either progressively over time or at once.
37. The computer implemented method of claim 36, wherein when the real-time effects are applied progressively over time, a characteristic of the avatar changes in size, shape, or expression, or a combination thereof.
38. A computer readable media including program instructions for automatically modifying an avatar image in substantial real-time in conjunction with communication over a network, the computer readable media comprising:
program instructions for providing a video and audio capture system on a respective computing system of a first and a second user, the video and audio capture system being interfaced with a computer program to enable the real-time communication;
program instructions for detecting real-time changes in facial expression of the first user in the captured video of the first user;
program instructions for detecting real-time changes in vocal characteristics of the first user;
program instructions for automatically applying real-time effects to the avatar image representing the first user in response to the monitored real-time video and audio of the first user; and
program instructions for outputting the avatar image representing the first user with the automatically applied real-time effect on a screen connected to the computing system of one or both of the first and second users,
wherein detecting real-time changes in vocal characteristics includes program instructions for directing operations of:
(i) capturing a first vocal characteristic of the first user at a first time;
(ii) capturing a second vocal characteristic of the first user at a second time;
(iii) comparing the first vocal characteristic to the second vocal characteristic; and
(iv) determining a magnitude for the real-time effects.
39. The computer readable media as recited in claim 38, wherein the program instructions are at least partially stored on one of the computing systems of the first and second users and on a computing system coupled to the network.
40. The computer readable media as recited in claim 38, further comprising:
program instructions for applying the real-time effects with the magnitude either progressively over time or at once, wherein when the real-time effects are applied progressively over time, a characteristic of the avatar changes in size, shape, or expression, or a combination thereof.
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US74664006P | 2006-05-07 | 2006-05-07 | |
US60/746,640 | 2006-05-07 | | |
US74677306P | 2006-05-08 | 2006-05-08 | |
US74677706P | 2006-05-08 | 2006-05-08 | |
US60/746,777 | 2006-05-08 | | |
US60/746,773 | 2006-05-08 | | |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2007130693A2 (en) | 2007-11-15 |
WO2007130693A3 WO2007130693A3 (en) | 2008-03-06 |
Family
ID=38668402
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2007/011143 WO2007130693A2 (en) | 2006-05-07 | 2007-05-07 | Methods and systems for processing an interchange of real time effects during video communication |
PCT/US2007/011141 WO2007130691A2 (en) | 2006-05-07 | 2007-05-07 | Method for providing affective characteristics to computer generated avatar during gameplay |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2007/011141 WO2007130691A2 (en) | 2006-05-07 | 2007-05-07 | Method for providing affective characteristics to computer generated avatar during gameplay |
Country Status (4)
Country | Link |
---|---|
US (3) | US8766983B2 (en) |
EP (2) | EP2016562A4 (en) |
JP (1) | JP4921550B2 (en) |
WO (2) | WO2007130693A2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10217029B1 (en) | 2018-02-26 | 2019-02-26 | Ringcentral, Inc. | Systems and methods for automatically generating headshots from a plurality of still images |
Families Citing this family (478)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002073542A (en) | 2000-08-31 | 2002-03-12 | Sony Corp | Method for use reservation of server, reservation managing device and program storage medium |
US8543390B2 (en) * | 2004-10-26 | 2013-09-24 | Qnx Software Systems Limited | Multi-channel periodic signal enhancement system |
US8488023B2 (en) * | 2009-05-20 | 2013-07-16 | DigitalOptics Corporation Europe Limited | Identifying facial expressions in acquired digital images |
US7459624B2 (en) | 2006-03-29 | 2008-12-02 | Harmonix Music Systems, Inc. | Game controller simulating a musical instrument |
JP5330640B2 (en) * | 2006-05-09 | 2013-10-30 | 任天堂株式会社 | GAME PROGRAM, GAME DEVICE, GAME SYSTEM, AND GAME PROCESSING METHOD |
US8384665B1 (en) * | 2006-07-14 | 2013-02-26 | Ailive, Inc. | Method and system for making a selection in 3D virtual environment |
WO2008021091A2 (en) * | 2006-08-11 | 2008-02-21 | Packetvideo Corp. | 'system and method for delivering interactive audiovisual experiences to portable devices' |
US20080066137A1 (en) * | 2006-08-25 | 2008-03-13 | Sbc Knowledge Ventures, Lp | System and method of displaying system content |
US7966567B2 (en) * | 2007-07-12 | 2011-06-21 | Center'd Corp. | Character expression in a geo-spatial environment |
JP2010520565A (en) * | 2007-03-02 | 2010-06-10 | オーガニック・モーション | System and method for tracking and recording a three-dimensional object |
JP5149547B2 (en) * | 2007-06-07 | 2013-02-20 | 株式会社コナミデジタルエンタテインメント | GAME DEVICE, GAME DEVICE CONTROL METHOD, AND PROGRAM |
US8678896B2 (en) | 2007-06-14 | 2014-03-25 | Harmonix Music Systems, Inc. | Systems and methods for asynchronous band interaction in a rhythm action game |
EP2206540A1 (en) | 2007-06-14 | 2010-07-14 | Harmonix Music Systems, Inc. | System and method for simulating a rock band experience |
KR20090006371A (en) * | 2007-07-11 | 2009-01-15 | 야후! 인크. | Method and system for providing virtual co-presence to broadcast audiences in an online broadcasting system |
US8726194B2 (en) | 2007-07-27 | 2014-05-13 | Qualcomm Incorporated | Item selection using enhanced control |
JP5286267B2 (en) * | 2007-08-03 | 2013-09-11 | 株式会社キャメロット | Game device, game program, and object operation method |
WO2009029063A1 (en) * | 2007-08-24 | 2009-03-05 | Tc Digital Games Llc | System and methods for multi-platform trading card game |
US8308573B2 (en) * | 2007-08-31 | 2012-11-13 | Lava Two, Llc | Gaming device for multi-player games |
WO2009029113A1 (en) * | 2007-08-31 | 2009-03-05 | Vulano Group, Inc. | Transaction management system in a multicast or broadcast wireless communication network |
WO2009029105A1 (en) * | 2007-08-31 | 2009-03-05 | Vulano Group, Inc. | Virtual aggregation processor for incorporating reverse path feedback into content delivered on a forward path |
US8572176B2 (en) * | 2007-08-31 | 2013-10-29 | Lava Two, Llc | Forward path multi-media management system with end user feedback to distributed content sources |
TWI372645B (en) * | 2007-10-17 | 2012-09-21 | Cywee Group Ltd | An electronic game controller with motion-sensing capability |
US20090132361A1 (en) * | 2007-11-21 | 2009-05-21 | Microsoft Corporation | Consumable advertising in a virtual world |
US8028094B2 (en) * | 2007-12-04 | 2011-09-27 | Vixs Systems, Inc. | USB video card and dongle device with video encoding and methods for use therewith |
US7993190B2 (en) * | 2007-12-07 | 2011-08-09 | Disney Enterprises, Inc. | System and method for touch driven combat system |
US9211077B2 (en) * | 2007-12-13 | 2015-12-15 | The Invention Science Fund I, Llc | Methods and systems for specifying an avatar |
US8356004B2 (en) * | 2007-12-13 | 2013-01-15 | Searete Llc | Methods and systems for comparing media content |
US20090171164A1 (en) * | 2007-12-17 | 2009-07-02 | Jung Edward K Y | Methods and systems for identifying an avatar-linked population cohort |
US20090164458A1 (en) * | 2007-12-20 | 2009-06-25 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Methods and systems employing a cohort-linked avatar |
US8615479B2 (en) | 2007-12-13 | 2013-12-24 | The Invention Science Fund I, Llc | Methods and systems for indicating behavior in a population cohort |
US20090157751A1 (en) * | 2007-12-13 | 2009-06-18 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Methods and systems for specifying an avatar |
US20090157625A1 (en) * | 2007-12-13 | 2009-06-18 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Methods and systems for identifying an avatar-linked population cohort |
US20090157481A1 (en) * | 2007-12-13 | 2009-06-18 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Methods and systems for specifying a cohort-linked avatar attribute |
US20090156955A1 (en) * | 2007-12-13 | 2009-06-18 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Methods and systems for comparing media content |
US8069125B2 (en) * | 2007-12-13 | 2011-11-29 | The Invention Science Fund I | Methods and systems for comparing media content |
US20090164302A1 (en) * | 2007-12-20 | 2009-06-25 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Methods and systems for specifying a cohort-linked avatar attribute |
US20090157660A1 (en) * | 2007-12-13 | 2009-06-18 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Methods and systems employing a cohort-linked avatar |
US20090157813A1 (en) * | 2007-12-17 | 2009-06-18 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Methods and systems for identifying an avatar-linked population cohort |
US8195593B2 (en) | 2007-12-20 | 2012-06-05 | The Invention Science Fund I | Methods and systems for indicating behavior in a population cohort |
US8150796B2 (en) * | 2007-12-20 | 2012-04-03 | The Invention Science Fund I | Methods and systems for inducing behavior in a population cohort |
US9418368B2 (en) * | 2007-12-20 | 2016-08-16 | Invention Science Fund I, Llc | Methods and systems for determining interest in a cohort-linked avatar |
US20090164131A1 (en) * | 2007-12-20 | 2009-06-25 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Methods and systems for specifying a media content-linked population cohort |
US20090164503A1 (en) * | 2007-12-20 | 2009-06-25 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Methods and systems for specifying a media content-linked population cohort |
US9775554B2 (en) * | 2007-12-31 | 2017-10-03 | Invention Science Fund I, Llc | Population cohort-linked avatar |
EP2244797A4 (en) * | 2008-01-17 | 2011-06-15 | Vivox Inc | Scalable techniques for providing real-lime per-avatar streaming data in virtual reality systems thai employ per-avatar rendered environments |
US8719077B2 (en) * | 2008-01-29 | 2014-05-06 | Microsoft Corporation | Real world and virtual world cross-promotion |
US20090210301A1 (en) * | 2008-02-14 | 2009-08-20 | Microsoft Corporation | Generating customized content based on context data |
US20090215512A1 (en) * | 2008-02-25 | 2009-08-27 | Tc Websites Llc | Systems and methods for a gaming platform |
US20090227368A1 (en) * | 2008-03-07 | 2009-09-10 | Arenanet, Inc. | Display of notational object in an interactive online environment |
US8368753B2 (en) * | 2008-03-17 | 2013-02-05 | Sony Computer Entertainment America Llc | Controller with an integrated depth camera |
US20090241039A1 (en) * | 2008-03-19 | 2009-09-24 | Leonardo William Estevez | System and method for avatar viewing |
US8904430B2 (en) | 2008-04-24 | 2014-12-02 | Sony Computer Entertainment America, LLC | Method and apparatus for real-time viewer interaction with a media presentation |
US8099462B2 (en) * | 2008-04-28 | 2012-01-17 | Cyberlink Corp. | Method of displaying interactive effects in web camera communication |
US8875026B2 (en) | 2008-05-01 | 2014-10-28 | International Business Machines Corporation | Directed communication in a virtual environment |
US20090312100A1 (en) * | 2008-06-12 | 2009-12-17 | Harris Scott C | Face Simulation in Networking |
KR20090132346A (en) * | 2008-06-20 | 2009-12-30 | 삼성전자주식회사 | Apparatus and method for dynamically organizing community space in cyber space |
US8663013B2 (en) | 2008-07-08 | 2014-03-04 | Harmonix Music Systems, Inc. | Systems and methods for simulating a rock band experience |
US20120246585A9 (en) * | 2008-07-14 | 2012-09-27 | Microsoft Corporation | System for editing an avatar |
US8446414B2 (en) * | 2008-07-14 | 2013-05-21 | Microsoft Corporation | Programming APIS for an extensible avatar system |
US9324173B2 (en) * | 2008-07-17 | 2016-04-26 | International Business Machines Corporation | System and method for enabling multiple-state avatars |
US8957914B2 (en) | 2008-07-25 | 2015-02-17 | International Business Machines Corporation | Method for extending a virtual environment through registration |
US8384719B2 (en) * | 2008-08-01 | 2013-02-26 | Microsoft Corporation | Avatar items and animations |
US10166470B2 (en) | 2008-08-01 | 2019-01-01 | International Business Machines Corporation | Method for providing a virtual world layer |
US8305345B2 (en) * | 2008-08-07 | 2012-11-06 | Life Technologies Co., Ltd. | Multimedia playing device |
US20100035692A1 (en) * | 2008-08-08 | 2010-02-11 | Microsoft Corporation | Avatar closet/ game awarded avatar |
US8788957B2 (en) * | 2008-08-22 | 2014-07-22 | Microsoft Corporation | Social virtual avatar modification |
GB2463123A (en) * | 2008-09-09 | 2010-03-10 | Skype Ltd | Video communications system with game playing feature |
US8654251B2 (en) | 2008-09-11 | 2014-02-18 | University Of Malta | Method and apparatus for generating and transmitting synchronized video data |
US20100066750A1 (en) * | 2008-09-16 | 2010-03-18 | Motorola, Inc. | Mobile virtual and augmented reality system |
US9384469B2 (en) * | 2008-09-22 | 2016-07-05 | International Business Machines Corporation | Modifying environmental chat distance based on avatar population density in an area of a virtual world |
US20100077318A1 (en) * | 2008-09-22 | 2010-03-25 | International Business Machines Corporation | Modifying environmental chat distance based on amount of environmental chat in an area of a virtual world |
JP2010086178A (en) * | 2008-09-30 | 2010-04-15 | Fujifilm Corp | Image synthesis device and control method thereof |
US8133119B2 (en) * | 2008-10-01 | 2012-03-13 | Microsoft Corporation | Adaptation for alternate gaming input devices |
WO2010042449A2 (en) * | 2008-10-06 | 2010-04-15 | Vergence Entertainment Llc | System for musically interacting avatars |
US8683354B2 (en) * | 2008-10-16 | 2014-03-25 | At&T Intellectual Property I, L.P. | System and method for distributing an avatar |
US7980997B2 (en) | 2008-10-23 | 2011-07-19 | University Of Southern California | System for encouraging a user to perform substantial physical activity |
US9412126B2 (en) * | 2008-11-06 | 2016-08-09 | At&T Intellectual Property I, Lp | System and method for commercializing avatars |
US9262890B2 (en) * | 2008-11-07 | 2016-02-16 | Sony Computer Entertainment America Llc | Customizing player-generated audio in electronic games |
US9352219B2 (en) | 2008-11-07 | 2016-05-31 | Sony Interactive Entertainment America Llc | Incorporating player-generated audio in an electronic game |
WO2010060211A1 (en) * | 2008-11-28 | 2010-06-03 | Nortel Networks Limited | Method and apparatus for controling a camera view into a three dimensional computer-generated virtual environment |
US8988421B2 (en) * | 2008-12-02 | 2015-03-24 | International Business Machines Corporation | Rendering avatar details |
US8156054B2 (en) | 2008-12-04 | 2012-04-10 | At&T Intellectual Property I, L.P. | Systems and methods for managing interactions between an individual and an entity |
US9529423B2 (en) * | 2008-12-10 | 2016-12-27 | International Business Machines Corporation | System and method to modify audio components in an online environment |
US20100153858A1 (en) * | 2008-12-11 | 2010-06-17 | Paul Gausman | Uniform virtual environments |
US9741147B2 (en) * | 2008-12-12 | 2017-08-22 | International Business Machines Corporation | System and method to modify avatar characteristics based on inferred conditions |
US8214433B2 (en) * | 2008-12-15 | 2012-07-03 | International Business Machines Corporation | System and method to provide context for an automated agent to service multiple avatars within a virtual universe |
US9075901B2 (en) * | 2008-12-15 | 2015-07-07 | International Business Machines Corporation | System and method to visualize activities through the use of avatars |
JP2010142592A (en) * | 2008-12-22 | 2010-07-01 | Nintendo Co Ltd | Game program and game device |
CN102210158B (en) * | 2008-12-24 | 2014-04-16 | Lg电子株式会社 | An iptv receiver and method for controlling an application in the iptv receiver |
US8326853B2 (en) * | 2009-01-20 | 2012-12-04 | International Business Machines Corporation | Virtual world identity management |
JP5294318B2 (en) * | 2009-01-21 | 2013-09-18 | 任天堂株式会社 | Information processing program and information processing apparatus |
US8682028B2 (en) | 2009-01-30 | 2014-03-25 | Microsoft Corporation | Visual target tracking |
US8565476B2 (en) * | 2009-01-30 | 2013-10-22 | Microsoft Corporation | Visual target tracking |
US8267781B2 (en) | 2009-01-30 | 2012-09-18 | Microsoft Corporation | Visual target tracking |
US8294767B2 (en) | 2009-01-30 | 2012-10-23 | Microsoft Corporation | Body scan |
US8577084B2 (en) | 2009-01-30 | 2013-11-05 | Microsoft Corporation | Visual target tracking |
US8588465B2 (en) | 2009-01-30 | 2013-11-19 | Microsoft Corporation | Visual target tracking |
US8295546B2 (en) | 2009-01-30 | 2012-10-23 | Microsoft Corporation | Pose tracking pipeline |
US8866821B2 (en) | 2009-01-30 | 2014-10-21 | Microsoft Corporation | Depth map movement tracking via optical flow and velocity prediction |
US8577085B2 (en) | 2009-01-30 | 2013-11-05 | Microsoft Corporation | Visual target tracking |
US8565477B2 (en) * | 2009-01-30 | 2013-10-22 | Microsoft Corporation | Visual target tracking |
US9652030B2 (en) | 2009-01-30 | 2017-05-16 | Microsoft Technology Licensing, Llc | Navigation of a virtual plane using a zone of restriction for canceling noise |
US20110293144A1 (en) * | 2009-02-02 | 2011-12-01 | Agency For Science, Technology And Research | Method and System for Rendering an Entertainment Animation |
US9105014B2 (en) | 2009-02-03 | 2015-08-11 | International Business Machines Corporation | Interactive avatar in messaging environment |
US20100201693A1 (en) * | 2009-02-11 | 2010-08-12 | Disney Enterprises, Inc. | System and method for audience participation event with digital avatars |
KR101558553B1 (en) * | 2009-02-18 | 2015-10-08 | Samsung Electronics Co., Ltd. | Facial gesture cloning apparatus |
US9276761B2 (en) * | 2009-03-04 | 2016-03-01 | At&T Intellectual Property I, L.P. | Method and apparatus for group media consumption |
US10482428B2 (en) * | 2009-03-10 | 2019-11-19 | Samsung Electronics Co., Ltd. | Systems and methods for presenting metaphors |
US8773355B2 (en) * | 2009-03-16 | 2014-07-08 | Microsoft Corporation | Adaptive cursor sizing |
US8988437B2 (en) * | 2009-03-20 | 2015-03-24 | Microsoft Technology Licensing, Llc | Chaining animations |
US9256282B2 (en) * | 2009-03-20 | 2016-02-09 | Microsoft Technology Licensing, Llc | Virtual object manipulation |
US9489039B2 (en) * | 2009-03-27 | 2016-11-08 | At&T Intellectual Property I, L.P. | Systems and methods for presenting intermediaries |
FI20095371A (en) * | 2009-04-03 | 2010-10-04 | Aalto University Foundation | A method for controlling the device |
US20100253689A1 (en) * | 2009-04-07 | 2010-10-07 | Avaya Inc. | Providing descriptions of non-verbal communications to video telephony participants who are not video-enabled |
US8806337B2 (en) * | 2009-04-28 | 2014-08-12 | International Business Machines Corporation | System and method for representation of avatars via personal and group perception, and conditional manifestation of attributes |
US8253746B2 (en) * | 2009-05-01 | 2012-08-28 | Microsoft Corporation | Determine intended motions |
US9898675B2 (en) | 2009-05-01 | 2018-02-20 | Microsoft Technology Licensing, Llc | User movement tracking feedback to improve tracking |
US8340432B2 (en) | 2009-05-01 | 2012-12-25 | Microsoft Corporation | Systems and methods for detecting a tilt angle from a depth image |
US20100277470A1 (en) * | 2009-05-01 | 2010-11-04 | Microsoft Corporation | Systems And Methods For Applying Model Tracking To Motion Capture |
US8503720B2 (en) | 2009-05-01 | 2013-08-06 | Microsoft Corporation | Human body pose estimation |
US9015638B2 (en) * | 2009-05-01 | 2015-04-21 | Microsoft Technology Licensing, Llc | Binding users to a gesture based system and providing feedback to the users |
US8638985B2 (en) | 2009-05-01 | 2014-01-28 | Microsoft Corporation | Human body pose estimation |
US8942428B2 (en) | 2009-05-01 | 2015-01-27 | Microsoft Corporation | Isolate extraneous motions |
US8649554B2 (en) | 2009-05-01 | 2014-02-11 | Microsoft Corporation | Method to control perspective for a camera-controlled computer |
US9377857B2 (en) * | 2009-05-01 | 2016-06-28 | Microsoft Technology Licensing, Llc | Show body position |
US9498718B2 (en) * | 2009-05-01 | 2016-11-22 | Microsoft Technology Licensing, Llc | Altering a view perspective within a display environment |
US8181123B2 (en) * | 2009-05-01 | 2012-05-15 | Microsoft Corporation | Managing virtual port associations to users in a gesture-based computing environment |
US8161398B2 (en) * | 2009-05-08 | 2012-04-17 | International Business Machines Corporation | Assistive group setting management in a virtual world |
US20100295771A1 (en) * | 2009-05-20 | 2010-11-25 | Microsoft Corporation | Control of display objects |
US8352884B2 (en) | 2009-05-21 | 2013-01-08 | Sony Computer Entertainment Inc. | Dynamic reconfiguration of GUI display decomposition based on predictive model |
US20100306825A1 (en) * | 2009-05-27 | 2010-12-02 | Lucid Ventures, Inc. | System and method for facilitating user interaction with a simulated object associated with a physical location |
US9182814B2 (en) * | 2009-05-29 | 2015-11-10 | Microsoft Technology Licensing, Llc | Systems and methods for estimating a non-visible or occluded body part |
US20100302365A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Depth Image Noise Reduction |
US8509479B2 (en) | 2009-05-29 | 2013-08-13 | Microsoft Corporation | Virtual object |
US8465366B2 (en) | 2009-05-29 | 2013-06-18 | Harmonix Music Systems, Inc. | Biasing a musical performance input to a part |
US20100306685A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | User movement feedback via on-screen avatars |
US8176442B2 (en) * | 2009-05-29 | 2012-05-08 | Microsoft Corporation | Living cursor control mechanics |
US9383823B2 (en) | 2009-05-29 | 2016-07-05 | Microsoft Technology Licensing, Llc | Combining gestures beyond skeletal |
US8744121B2 (en) | 2009-05-29 | 2014-06-03 | Microsoft Corporation | Device for identifying and tracking multiple humans over time |
US9400559B2 (en) | 2009-05-29 | 2016-07-26 | Microsoft Technology Licensing, Llc | Gesture shortcuts |
US8625837B2 (en) | 2009-05-29 | 2014-01-07 | Microsoft Corporation | Protocol and format for communicating an image from a camera to a computing environment |
US8379101B2 (en) * | 2009-05-29 | 2013-02-19 | Microsoft Corporation | Environment and/or target segmentation |
US8145594B2 (en) * | 2009-05-29 | 2012-03-27 | Microsoft Corporation | Localized gesture aggregation |
US8320619B2 (en) * | 2009-05-29 | 2012-11-27 | Microsoft Corporation | Systems and methods for tracking a model |
US8803889B2 (en) | 2009-05-29 | 2014-08-12 | Microsoft Corporation | Systems and methods for applying animations or motions to a character |
US20100302138A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Methods and systems for defining or modifying a visual representation |
US8418085B2 (en) * | 2009-05-29 | 2013-04-09 | Microsoft Corporation | Gesture coach |
US20100306716A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Extending standard gestures |
US8542252B2 (en) * | 2009-05-29 | 2013-09-24 | Microsoft Corporation | Target digitization, extraction, and tracking |
US8856691B2 (en) * | 2009-05-29 | 2014-10-07 | Microsoft Corporation | Gesture tool |
US8661353B2 (en) * | 2009-05-29 | 2014-02-25 | Microsoft Corporation | Avatar integrated shared media experience |
US7914344B2 (en) * | 2009-06-03 | 2011-03-29 | Microsoft Corporation | Dual-barrel, connector jack and plug assemblies |
CN101930284B (en) * | 2009-06-23 | 2014-04-09 | Tencent Technology (Shenzhen) Co., Ltd. | Method, device and system for implementing interaction between video and virtual network scene |
US8390680B2 (en) * | 2009-07-09 | 2013-03-05 | Microsoft Corporation | Visual representation expression based on player expression |
US9159151B2 (en) * | 2009-07-13 | 2015-10-13 | Microsoft Technology Licensing, Llc | Bringing a visual representation to life via learned input from the user |
GB2471871B (en) * | 2009-07-15 | 2011-12-14 | Sony Comp Entertainment Europe | Apparatus and method for a virtual dance floor |
US20110025689A1 (en) * | 2009-07-29 | 2011-02-03 | Microsoft Corporation | Auto-Generating A Visual Representation |
US8275590B2 (en) | 2009-08-12 | 2012-09-25 | Zugara, Inc. | Providing a simulation of wearing items such as garments and/or accessories |
US9141193B2 (en) * | 2009-08-31 | 2015-09-22 | Microsoft Technology Licensing, Llc | Techniques for using human gestures to control gesture unaware programs |
JP5143287B2 (en) * | 2009-09-18 | 2013-02-13 | Toshiba Corporation | Relay device |
US8963829B2 (en) | 2009-10-07 | 2015-02-24 | Microsoft Corporation | Methods and systems for determining and tracking extremities of a target |
US7961910B2 (en) | 2009-10-07 | 2011-06-14 | Microsoft Corporation | Systems and methods for tracking a model |
US8564534B2 (en) * | 2009-10-07 | 2013-10-22 | Microsoft Corporation | Human tracking system |
US9981193B2 (en) | 2009-10-27 | 2018-05-29 | Harmonix Music Systems, Inc. | Movement based recognition and evaluation |
WO2011056657A2 (en) | 2009-10-27 | 2011-05-12 | Harmonix Music Systems, Inc. | Gesture-based user interface |
US8847878B2 (en) | 2009-11-10 | 2014-09-30 | Apple Inc. | Environment sensitive display tags |
US20110109617A1 (en) * | 2009-11-12 | 2011-05-12 | Microsoft Corporation | Visualizing Depth |
US8791787B2 (en) * | 2009-12-11 | 2014-07-29 | Sony Corporation | User personalization with bezel-displayed identification |
US8977972B2 (en) | 2009-12-31 | 2015-03-10 | Intel Corporation | Using multi-modal input to control multiple objects on a display |
WO2011096976A1 (en) * | 2010-02-05 | 2011-08-11 | Sony Computer Entertainment Inc. | Controller for interfacing with a computing program using position, orientation, or motion |
US9400695B2 (en) * | 2010-02-26 | 2016-07-26 | Microsoft Technology Licensing, Llc | Low latency rendering of objects |
US8874243B2 (en) | 2010-03-16 | 2014-10-28 | Harmonix Music Systems, Inc. | Simulating musical instruments |
KR20110107428A (en) * | 2010-03-25 | 2011-10-04 | Samsung Electronics Co., Ltd. | Digital apparatus and method for providing a user interface for content creation, and recording medium recording a program for executing the method |
TWI439960B (en) | 2010-04-07 | 2014-06-01 | Apple Inc | Avatar editing environment |
US9542038B2 (en) | 2010-04-07 | 2017-01-10 | Apple Inc. | Personalizing colors of user interfaces |
JP2011223531A (en) * | 2010-04-14 | 2011-11-04 | Sony Computer Entertainment Inc | Portable information terminal, network connection method, network connection system, and server |
US9100200B2 (en) | 2010-06-01 | 2015-08-04 | Genband Us Llc | Video augmented text chatting |
US8384770B2 (en) | 2010-06-02 | 2013-02-26 | Nintendo Co., Ltd. | Image display system, image display apparatus, and image display method |
US9245177B2 (en) * | 2010-06-02 | 2016-01-26 | Microsoft Technology Licensing, Llc | Limiting avatar gesture display |
US10843078B2 (en) * | 2010-06-07 | 2020-11-24 | Affectiva, Inc. | Affect usage within a gaming context |
US20110304629A1 (en) * | 2010-06-09 | 2011-12-15 | Microsoft Corporation | Real-time animation of facial expressions |
US20110306423A1 (en) * | 2010-06-10 | 2011-12-15 | Isaac Calderon | Multi purpose wireless game control console |
US20110306397A1 (en) | 2010-06-11 | 2011-12-15 | Harmonix Music Systems, Inc. | Audio and animation blending |
US8562403B2 (en) | 2010-06-11 | 2013-10-22 | Harmonix Music Systems, Inc. | Prompting a player of a dance game |
US9358456B1 (en) | 2010-06-11 | 2016-06-07 | Harmonix Music Systems, Inc. | Dance competition game |
EP2395769B1 (en) | 2010-06-11 | 2015-03-04 | Nintendo Co., Ltd. | Image display program, image display system, and image display method |
US8851994B2 (en) * | 2010-06-30 | 2014-10-07 | Sony Corporation | Game device, game control method, and game control program adapted to control game by using position and posture of input device |
US9024166B2 (en) | 2010-09-09 | 2015-05-05 | Harmonix Music Systems, Inc. | Preventing subtractive track separation |
JP5739674B2 (en) | 2010-09-27 | 2015-06-24 | Nintendo Co., Ltd. | Information processing program, information processing apparatus, information processing system, and information processing method |
US8854356B2 (en) * | 2010-09-28 | 2014-10-07 | Nintendo Co., Ltd. | Storage medium having stored therein image processing program, image processing apparatus, image processing system, and image processing method |
US9218316B2 (en) | 2011-01-05 | 2015-12-22 | Sphero, Inc. | Remotely controlling a self-propelled device in a virtualized environment |
US9114838B2 (en) | 2011-01-05 | 2015-08-25 | Sphero, Inc. | Self-propelled device for interpreting input from a controller device |
US9090214B2 (en) | 2011-01-05 | 2015-07-28 | Orbotix, Inc. | Magnetically coupled accessory for a self-propelled device |
US9429940B2 (en) | 2011-01-05 | 2016-08-30 | Sphero, Inc. | Self propelled device with magnetic coupling |
US10281915B2 (en) | 2011-01-05 | 2019-05-07 | Sphero, Inc. | Multi-purposed self-propelled device |
US8570320B2 (en) * | 2011-01-31 | 2013-10-29 | Microsoft Corporation | Using a three-dimensional environment model in gameplay |
US8942917B2 (en) | 2011-02-14 | 2015-01-27 | Microsoft Corporation | Change invariant scene recognition by an agent |
US8620113B2 (en) | 2011-04-25 | 2013-12-31 | Microsoft Corporation | Laser diode modes |
WO2012166072A1 (en) * | 2011-05-31 | 2012-12-06 | Echostar Ukraine, L.L.C. | Apparatus, systems and methods for enhanced viewing experience using an avatar |
US8760395B2 (en) | 2011-05-31 | 2014-06-24 | Microsoft Corporation | Gesture recognition techniques |
US9245368B2 (en) * | 2011-06-05 | 2016-01-26 | Apple Inc. | Device and method for dynamically rendering an animation |
US8884949B1 (en) | 2011-06-06 | 2014-11-11 | Thibault Lambert | Method and system for real time rendering of objects from a low resolution depth camera |
DE102012208748B4 (en) * | 2011-06-21 | 2023-07-13 | International Business Machines Corporation | Method and system for remote control of functions of a mouse pointer of a computer unit |
RU2455676C2 (en) | 2011-07-04 | 2012-07-10 | TRIDIVI LLC | Method of controlling device using gestures and 3D sensor for realising said method |
US8943396B2 (en) * | 2011-07-18 | 2015-01-27 | At&T Intellectual Property I, Lp | Method and apparatus for multi-experience adaptation of media content |
CN103826711A (en) * | 2011-07-22 | 2014-05-28 | Glitchsoft Corporation | Game enhancement system for gaming environment |
AU2012306059A1 (en) | 2011-09-08 | 2014-03-27 | Paofit Holdings Pte Ltd | System and method for visualizing synthetic objects within real-world video clip |
US9762524B2 (en) | 2011-09-28 | 2017-09-12 | Elwha Llc | Multi-modality communication participation |
US9906927B2 (en) | 2011-09-28 | 2018-02-27 | Elwha Llc | Multi-modality communication initiation |
US9699632B2 (en) | 2011-09-28 | 2017-07-04 | Elwha Llc | Multi-modality communication with interceptive conversion |
US9477943B2 (en) | 2011-09-28 | 2016-10-25 | Elwha Llc | Multi-modality communication |
US9002937B2 (en) | 2011-09-28 | 2015-04-07 | Elwha Llc | Multi-party multi-modality communication |
US9503550B2 (en) | 2011-09-28 | 2016-11-22 | Elwha Llc | Multi-modality communication modification |
US20130109302A1 (en) * | 2011-10-31 | 2013-05-02 | Royce A. Levien | Multi-modality communication with conversion offloading |
US9788349B2 (en) | 2011-09-28 | 2017-10-10 | Elwha Llc | Multi-modality communication auto-activation |
US8740706B2 (en) | 2011-10-25 | 2014-06-03 | Spielo International Canada Ulc | Gaming console having movable screen |
US8635637B2 (en) | 2011-12-02 | 2014-01-21 | Microsoft Corporation | User interface presenting an animated avatar performing a media reaction |
US9100685B2 (en) | 2011-12-09 | 2015-08-04 | Microsoft Technology Licensing, Llc | Determining audience state or interest using passive sensor data |
US10013787B2 (en) * | 2011-12-12 | 2018-07-03 | Faceshift Ag | Method for facial animation |
US20140364239A1 (en) * | 2011-12-20 | 2014-12-11 | Icelero Inc | Method and system for creating a virtual social and gaming experience |
US20130166274A1 (en) * | 2011-12-21 | 2013-06-27 | Avaya Inc. | System and method for managing avatars |
US20130203026A1 (en) * | 2012-02-08 | 2013-08-08 | Jpmorgan Chase Bank, Na | System and Method for Virtual Training Environment |
KR20130096538A (en) * | 2012-02-22 | 2013-08-30 | Samsung Electronics Co., Ltd. | Mobile communication terminal and method for generating contents data thereof |
US8638344B2 (en) * | 2012-03-09 | 2014-01-28 | International Business Machines Corporation | Automatically modifying presentation of mobile-device content |
US20130257877A1 (en) * | 2012-03-30 | 2013-10-03 | Videx, Inc. | Systems and Methods for Generating an Interactive Avatar Model |
US8898687B2 (en) | 2012-04-04 | 2014-11-25 | Microsoft Corporation | Controlling a media program based on a media reaction |
CA2775700C (en) | 2012-05-04 | 2013-07-23 | Microsoft Corporation | Determining a future portion of a currently presented media program |
US10155168B2 (en) | 2012-05-08 | 2018-12-18 | Snap Inc. | System and method for adaptable avatars |
KR20150012274A (en) | 2012-05-14 | 2015-02-03 | Orbotix, Inc. | Operating a computing device by detecting rounded objects in image |
US9827487B2 (en) | 2012-05-14 | 2017-11-28 | Sphero, Inc. | Interactive augmented reality using a self-propelled device |
US9292758B2 (en) * | 2012-05-14 | 2016-03-22 | Sphero, Inc. | Augmentation of elements in data content |
US9247306B2 (en) | 2012-05-21 | 2016-01-26 | Intellectual Ventures Fund 83 Llc | Forming a multimedia product using video chat |
US9456244B2 (en) * | 2012-06-25 | 2016-09-27 | Intel Corporation | Facilitation of concurrent consumption of media content by multiple users using superimposed animation |
US10056791B2 (en) | 2012-07-13 | 2018-08-21 | Sphero, Inc. | Self-optimizing power transfer |
US20140018169A1 (en) * | 2012-07-16 | 2014-01-16 | Zhong Yuan Ran | Self as Avatar Gaming with Video Projecting Device |
WO2014014238A1 (en) | 2012-07-17 | 2014-01-23 | Samsung Electronics Co., Ltd. | System and method for providing image |
US9779757B1 (en) * | 2012-07-30 | 2017-10-03 | Amazon Technologies, Inc. | Visual indication of an operational state |
US8976043B2 (en) * | 2012-08-20 | 2015-03-10 | Textron Innovations, Inc. | Illuminated sidestick controller, such as an illuminated sidestick controller for use in aircraft |
US9360932B1 (en) * | 2012-08-29 | 2016-06-07 | Intellect Motion Llc. | Systems and methods for virtually displaying real movements of objects in a 3D-space by means of 2D-video capture |
WO2014063724A1 (en) | 2012-10-22 | 2014-05-01 | Longsand Limited | Collaborative augmented reality |
JP6178066B2 (en) * | 2012-11-06 | 2017-08-09 | Sony Interactive Entertainment Inc. | Information processing apparatus, information processing method, program, and information storage medium |
US10410180B2 (en) * | 2012-11-19 | 2019-09-10 | Oath Inc. | System and method for touch-based communications |
US9857470B2 (en) | 2012-12-28 | 2018-01-02 | Microsoft Technology Licensing, Llc | Using photometric stereo for 3D environment modeling |
JP6134151B2 (en) * | 2013-02-04 | 2017-05-24 | Nintendo Co., Ltd. | Game system, game device, game processing method, and game program |
US9940553B2 (en) | 2013-02-22 | 2018-04-10 | Microsoft Technology Licensing, Llc | Camera/object pose from predicted coordinates |
US9721586B1 (en) | 2013-03-14 | 2017-08-01 | Amazon Technologies, Inc. | Voice controlled assistant with light indicator |
US20140278403A1 (en) * | 2013-03-14 | 2014-09-18 | Toytalk, Inc. | Systems and methods for interactive synthetic character dialogue |
US10220303B1 (en) | 2013-03-15 | 2019-03-05 | Harmonix Music Systems, Inc. | Gesture-based music game |
US10068363B2 (en) * | 2013-03-27 | 2018-09-04 | Nokia Technologies Oy | Image point of interest analyser with animation generator |
WO2014153689A1 (en) * | 2013-03-29 | 2014-10-02 | Intel Corporation | Avatar animation, social networking and touch screen applications |
US10275807B2 (en) | 2013-06-14 | 2019-04-30 | M2 Media Group | Systems and methods for generating customized avatars and customized online portals |
US9251405B2 (en) * | 2013-06-20 | 2016-02-02 | Elwha Llc | Systems and methods for enhancement of facial expressions |
US9678583B2 (en) * | 2013-07-23 | 2017-06-13 | University Of Kentucky Research Foundation | 2D and 3D pointing device based on a passive lights detection operation method using one camera |
US20150103184A1 (en) * | 2013-10-15 | 2015-04-16 | Nvidia Corporation | Method and system for visual tracking of a subject for automatic metering using a mobile device |
US9829882B2 (en) | 2013-12-20 | 2017-11-28 | Sphero, Inc. | Self-propelled device with center of mass drive system |
KR101827550B1 (en) | 2014-01-31 | 2018-02-08 | Empire Technology Development LLC | Augmented reality skin manager |
EP3100240B1 (en) | 2014-01-31 | 2018-10-31 | Empire Technology Development LLC | Evaluation of augmented reality skins |
WO2015116183A2 (en) | 2014-01-31 | 2015-08-06 | Empire Technology Development, Llc | Subject selected augmented reality skin |
WO2015116182A1 (en) | 2014-01-31 | 2015-08-06 | Empire Technology Development, Llc | Augmented reality skin evaluation |
US9928874B2 (en) | 2014-02-05 | 2018-03-27 | Snap Inc. | Method for real-time video processing involving changing features of an object in the video |
US10250537B2 (en) * | 2014-02-12 | 2019-04-02 | Mark H. Young | Methods and apparatuses for animated messaging between messaging participants represented by avatar |
US10147220B2 (en) * | 2014-03-14 | 2018-12-04 | Carnegie Mellon University | Precomputing data for an interactive system having discrete control inputs |
US9672416B2 (en) | 2014-04-29 | 2017-06-06 | Microsoft Technology Licensing, Llc | Facial expression tracking |
US9679212B2 (en) * | 2014-05-09 | 2017-06-13 | Samsung Electronics Co., Ltd. | Liveness testing methods and apparatuses and image processing methods and apparatuses |
JP2016018313A (en) * | 2014-07-07 | 2016-02-01 | Nintendo Co., Ltd. | Program, information processing apparatus, communication system, and communication method |
US9282287B1 (en) | 2014-09-09 | 2016-03-08 | Google Inc. | Real-time video transformations in video conferences |
CN105396289A (en) * | 2014-09-15 | 2016-03-16 | Zhangying Information Technology (Shanghai) Co., Ltd. | Method and device for achieving special effects in process of real-time games and multimedia sessions |
US9607573B2 (en) | 2014-09-17 | 2017-03-28 | International Business Machines Corporation | Avatar motion modification |
US20160110044A1 (en) * | 2014-10-20 | 2016-04-21 | Microsoft Corporation | Profile-driven avatar sessions |
US10068127B2 (en) * | 2014-12-19 | 2018-09-04 | Iris Id, Inc. | Automatic detection of face and thereby localize the eye region for iris recognition |
KR20160105321A (en) * | 2015-02-27 | 2016-09-06 | Immersion Corporation | Generating actions based on a user's mood |
US9558760B2 (en) * | 2015-03-06 | 2017-01-31 | Microsoft Technology Licensing, Llc | Real-time remodeling of user voice in an immersive visualization system |
US10116901B2 (en) | 2015-03-18 | 2018-10-30 | Avatar Merger Sub II, LLC | Background modification in video conferencing |
US9940637B2 (en) | 2015-06-05 | 2018-04-10 | Apple Inc. | User interface for loyalty accounts and private label accounts |
US10293260B1 (en) * | 2015-06-05 | 2019-05-21 | Amazon Technologies, Inc. | Player audio analysis in online gaming environments |
US9704298B2 (en) | 2015-06-23 | 2017-07-11 | Paofit Holdings Pte Ltd. | Systems and methods for generating 360 degree mixed reality environments |
EP3330579B1 (en) * | 2015-07-31 | 2019-06-19 | Nippon Piston Ring Co., Ltd. | Piston ring and manufacturing method thereof |
US9818228B2 (en) | 2015-08-07 | 2017-11-14 | Microsoft Technology Licensing, Llc | Mixed reality social interaction |
US9922463B2 (en) | 2015-08-07 | 2018-03-20 | Microsoft Technology Licensing, Llc | Virtually visualizing energy |
JP6604557B2 (en) * | 2015-08-10 | 2019-11-13 | Nippon ITF Inc. | Piston ring and engine |
US9843766B2 (en) * | 2015-08-28 | 2017-12-12 | Samsung Electronics Co., Ltd. | Video communication device and operation thereof |
US11138207B2 (en) | 2015-09-22 | 2021-10-05 | Google Llc | Integrated dynamic interface for expression-based retrieval of expressive media content |
US10474877B2 (en) * | 2015-09-22 | 2019-11-12 | Google Llc | Automated effects generation for animated content |
WO2017051962A1 (en) * | 2015-09-25 | 2017-03-30 | LG Electronics Inc. | Mobile terminal and control method thereof |
US10007332B2 (en) | 2015-09-28 | 2018-06-26 | Interblock D.D. | Electronic gaming machine in communicative control with avatar display from motion-capture system |
CN105791692B (en) * | 2016-03-14 | 2020-04-07 | Tencent Technology (Shenzhen) Co., Ltd. | Information processing method, terminal and storage medium |
US10339365B2 (en) | 2016-03-31 | 2019-07-02 | Snap Inc. | Automated avatar generation |
JP2017188833A (en) | 2016-04-08 | 2017-10-12 | Sony Corporation | Information processing device, information processing method, and program |
TWI581626B (en) * | 2016-04-26 | 2017-05-01 | Hon Hai Precision Industry Co., Ltd. | System and method for processing media files automatically |
US10474353B2 (en) | 2016-05-31 | 2019-11-12 | Snap Inc. | Application control using a gesture based trigger |
JP7140465B2 (en) * | 2016-06-10 | 2022-09-21 | Nintendo Co., Ltd. | Game program, information processing device, information processing system, game processing method |
US11580608B2 (en) | 2016-06-12 | 2023-02-14 | Apple Inc. | Managing contact information for communication applications |
US10607386B2 (en) | 2016-06-12 | 2020-03-31 | Apple Inc. | Customized avatars and associated framework |
WO2018005673A1 (en) | 2016-06-28 | 2018-01-04 | Against Gravity Corp. | Systems and methods providing temporary decoupling of user avatar synchronicity for presence enhancing experiences |
US10360708B2 (en) | 2016-06-30 | 2019-07-23 | Snap Inc. | Avatar based ideogram generation |
US10348662B2 (en) | 2016-07-19 | 2019-07-09 | Snap Inc. | Generating customized electronic messaging graphics |
CN109844735A (en) * | 2016-07-21 | 2019-06-04 | Magic Leap, Inc. | Technique for controlling a virtual image generation system using emotional states of a user |
US20180052512A1 (en) * | 2016-08-16 | 2018-02-22 | Thomas J. Overly | Behavioral rehearsal system and supporting software |
US10609036B1 (en) | 2016-10-10 | 2020-03-31 | Snap Inc. | Social media post subscribe requests for buffer user accounts |
US10198626B2 (en) | 2016-10-19 | 2019-02-05 | Snap Inc. | Neural networks for facial modeling |
US10432559B2 (en) | 2016-10-24 | 2019-10-01 | Snap Inc. | Generating and displaying customized avatars in electronic messages |
US10593116B2 (en) | 2016-10-24 | 2020-03-17 | Snap Inc. | Augmented reality object manipulation |
JP6746801B2 (en) * | 2016-12-09 | 2020-08-26 | Unity IPR ApS | Creation, broadcasting, and viewing of 3D content |
US10179291B2 (en) | 2016-12-09 | 2019-01-15 | Microsoft Technology Licensing, Llc | Session speech-to-text conversion |
US10311857B2 (en) * | 2016-12-09 | 2019-06-04 | Microsoft Technology Licensing, Llc | Session text-to-speech conversion |
JP6240301B1 (en) * | 2016-12-26 | 2017-11-29 | Colopl, Inc. | Method for communicating via virtual space, program for causing computer to execute the method, and information processing apparatus for executing the program |
KR101858168B1 (en) * | 2017-01-02 | 2018-05-16 | Maro Studio Co., Ltd. | System and method of providing interactive animation system based on remote-control using digital character |
US10242503B2 (en) | 2017-01-09 | 2019-03-26 | Snap Inc. | Surface aware lens |
US11616745B2 (en) | 2017-01-09 | 2023-03-28 | Snap Inc. | Contextual generation and selection of customized media content |
US10242477B1 (en) | 2017-01-16 | 2019-03-26 | Snap Inc. | Coded vision system |
US10951562B2 (en) | 2017-01-18 | 2021-03-16 | Snap Inc. | Customized contextual media content item generation |
US10454857B1 (en) | 2017-01-23 | 2019-10-22 | Snap Inc. | Customized digital avatar accessories |
US10438393B2 (en) * | 2017-03-16 | 2019-10-08 | Linden Research, Inc. | Virtual reality presentation of body postures of avatars |
US11069103B1 (en) | 2017-04-20 | 2021-07-20 | Snap Inc. | Customized user interface for electronic communications |
US11893647B2 (en) | 2017-04-27 | 2024-02-06 | Snap Inc. | Location-based virtual avatars |
US10212541B1 (en) | 2017-04-27 | 2019-02-19 | Snap Inc. | Selective location-based identity communication |
KR102515132B1 (en) | 2017-04-27 | 2023-03-28 | Snap Inc. | A geographic level representation of a user's location on a social media platform |
KR102549029B1 (en) | 2017-05-16 | 2023-06-29 | Apple Inc. | Emoji recording and sending |
DK179948B1 (en) * | 2017-05-16 | 2019-10-22 | Apple Inc. | Recording and sending Emoji |
US10861210B2 (en) | 2017-05-16 | 2020-12-08 | Apple Inc. | Techniques for providing audio and video effects |
US10679428B1 (en) | 2017-05-26 | 2020-06-09 | Snap Inc. | Neural network-based image stream modification |
CN107463367B (en) * | 2017-06-22 | 2021-05-18 | Beijing Xingxuan Technology Co., Ltd. | Transition animation realization method and device |
US10931728B1 (en) * | 2017-06-25 | 2021-02-23 | Zoosk, Inc. | System and method for user video chats with progressively clearer images |
US11122094B2 (en) | 2017-07-28 | 2021-09-14 | Snap Inc. | Software application manager for messaging applications |
US10586368B2 (en) | 2017-10-26 | 2020-03-10 | Snap Inc. | Joint audio-video facial animation system |
US10657695B2 (en) | 2017-10-30 | 2020-05-19 | Snap Inc. | Animated chat presence |
US10870056B2 (en) * | 2017-11-01 | 2020-12-22 | Sony Interactive Entertainment Inc. | Emoji-based communications derived from facial features during game play |
US11460974B1 (en) | 2017-11-28 | 2022-10-04 | Snap Inc. | Content discovery refresh |
CN114915606A (en) | 2017-11-29 | 2022-08-16 | Snap Inc. | Group stories in electronic messaging applications |
KR102517427B1 (en) | 2017-11-29 | 2023-04-03 | Snap Inc. | Graphic rendering for electronic messaging applications |
KR102614048B1 (en) * | 2017-12-22 | 2023-12-15 | Samsung Electronics Co., Ltd. | Electronic device and method for displaying object for augmented reality |
US10949648B1 (en) | 2018-01-23 | 2021-03-16 | Snap Inc. | Region-based stabilized face tracking |
US10726603B1 (en) | 2018-02-28 | 2020-07-28 | Snap Inc. | Animated expressive icon |
US10979752B1 (en) | 2018-02-28 | 2021-04-13 | Snap Inc. | Generating media content items based on location information |
US10613827B2 (en) * | 2018-03-06 | 2020-04-07 | Language Line Services, Inc. | Configuration for simulating a video remote interpretation session |
GB2571956B (en) | 2018-03-14 | 2022-04-27 | Sony Interactive Entertainment Inc | Head-mountable apparatus and methods |
CA3098833A1 (en) | 2018-03-22 | 2019-09-26 | Infinite Kingdoms Llc | Connected avatar technology |
US11310176B2 (en) | 2018-04-13 | 2022-04-19 | Snap Inc. | Content suggestion system |
CN112041891A (en) | 2018-04-18 | 2020-12-04 | Snap Inc. | Expression enhancing system |
US11722764B2 (en) | 2018-05-07 | 2023-08-08 | Apple Inc. | Creative camera |
US10375313B1 (en) | 2018-05-07 | 2019-08-06 | Apple Inc. | Creative camera |
DK180212B1 (en) | 2018-05-07 | 2020-08-19 | Apple Inc | User interface for creating avatar |
CN112512649A (en) * | 2018-07-11 | 2021-03-16 | Apple Inc. | Techniques for providing audio and video effects |
US11074675B2 (en) | 2018-07-31 | 2021-07-27 | Snap Inc. | Eye texture inpainting |
US11030813B2 (en) | 2018-08-30 | 2021-06-08 | Snap Inc. | Video clip object tracking |
US10896534B1 (en) | 2018-09-19 | 2021-01-19 | Snap Inc. | Avatar style transformation using neural networks |
US10636218B2 (en) | 2018-09-24 | 2020-04-28 | Universal City Studios Llc | Augmented reality for an amusement ride |
US10895964B1 (en) | 2018-09-25 | 2021-01-19 | Snap Inc. | Interface to display shared user groups |
JP6770562B2 (en) * | 2018-09-27 | 2020-10-14 | Colopl, Inc. | Program, virtual space provision method and information processing device |
US10904181B2 (en) | 2018-09-28 | 2021-01-26 | Snap Inc. | Generating customized graphics having reactions to electronic message content |
US10698583B2 (en) | 2018-09-28 | 2020-06-30 | Snap Inc. | Collaborative achievement interface |
US11245658B2 (en) | 2018-09-28 | 2022-02-08 | Snap Inc. | System and method of generating private notifications between users in a communication session |
US11189070B2 (en) | 2018-09-28 | 2021-11-30 | Snap Inc. | System and method of generating targeted user lists using customizable avatar characteristics |
JP6672414B1 (en) * | 2018-10-02 | 2020-03-25 | Square Enix Co., Ltd. | Drawing program, recording medium, drawing control device, drawing control method |
US10529155B1 (en) * | 2018-10-15 | 2020-01-07 | Alibaba Group Holding Limited | Employing pressure signatures for personal identification |
US11103795B1 (en) | 2018-10-31 | 2021-08-31 | Snap Inc. | Game drawer |
US10872451B2 (en) | 2018-10-31 | 2020-12-22 | Snap Inc. | 3D avatar rendering |
US10893236B2 (en) * | 2018-11-01 | 2021-01-12 | Honda Motor Co., Ltd. | System and method for providing virtual interpersonal communication |
US11176737B2 (en) | 2018-11-27 | 2021-11-16 | Snap Inc. | Textured mesh building |
US10902661B1 (en) | 2018-11-28 | 2021-01-26 | Snap Inc. | Dynamic composite user identifier |
US10861170B1 (en) | 2018-11-30 | 2020-12-08 | Snap Inc. | Efficient human pose tracking in videos |
US11199957B1 (en) | 2018-11-30 | 2021-12-14 | Snap Inc. | Generating customized avatars based on location information |
US11055514B1 (en) | 2018-12-14 | 2021-07-06 | Snap Inc. | Image face manipulation |
US10609332B1 (en) | 2018-12-21 | 2020-03-31 | Microsoft Technology Licensing, Llc | Video conferencing supporting a composite video stream |
US11516173B1 (en) | 2018-12-26 | 2022-11-29 | Snap Inc. | Message composition interface |
US11032670B1 (en) | 2019-01-14 | 2021-06-08 | Snap Inc. | Destination sharing in location sharing system |
US10939246B1 (en) | 2019-01-16 | 2021-03-02 | Snap Inc. | Location-based context information sharing in a messaging system |
US11107261B2 (en) | 2019-01-18 | 2021-08-31 | Apple Inc. | Virtual avatar animation based on facial feature movement |
US11294936B1 (en) | 2019-01-30 | 2022-04-05 | Snap Inc. | Adaptive spatial density based clustering |
US10656797B1 (en) | 2019-02-06 | 2020-05-19 | Snap Inc. | Global event-based avatar |
US10984575B2 (en) | 2019-02-06 | 2021-04-20 | Snap Inc. | Body pose estimation |
US10936066B1 (en) | 2019-02-13 | 2021-03-02 | Snap Inc. | Sleep detection in a location sharing system |
KR20200101208A (en) * | 2019-02-19 | 2020-08-27 | Samsung Electronics Co., Ltd. | Electronic device and method for providing user interface for editing of emoji in conjunction with camera function thereof |
US10964082B2 (en) | 2019-02-26 | 2021-03-30 | Snap Inc. | Avatar based on weather |
US10852918B1 (en) | 2019-03-08 | 2020-12-01 | Snap Inc. | Contextual information in chat |
US11868414B1 (en) | 2019-03-14 | 2024-01-09 | Snap Inc. | Graph-based prediction for contact suggestion in a location sharing system |
US11852554B1 (en) | 2019-03-21 | 2023-12-26 | Snap Inc. | Barometer calibration in a location sharing system |
US11166123B1 (en) | 2019-03-28 | 2021-11-02 | Snap Inc. | Grouped transmission of location data in a location sharing system |
US10674311B1 (en) | 2019-03-28 | 2020-06-02 | Snap Inc. | Points of interest in a location sharing system |
SG11202111323RA (en) * | 2019-03-29 | 2021-11-29 | Guangzhou Huya Information Technology Co Ltd | Live broadcast interaction method and apparatus, live broadcast system and electronic device |
CN114026877A (en) * | 2019-04-17 | 2022-02-08 | Maxell, Ltd. | Image display device and display control method thereof |
US10992619B2 (en) | 2019-04-30 | 2021-04-27 | Snap Inc. | Messaging system with avatar generation |
USD916809S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a transitional graphical user interface |
USD916871S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a transitional graphical user interface |
USD916810S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a graphical user interface |
USD916872S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a graphical user interface |
USD916811S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a transitional graphical user interface |
US10893385B1 (en) | 2019-06-07 | 2021-01-12 | Snap Inc. | Detection of a physical collision between two client devices in a location sharing system |
US11188190B2 (en) | 2019-06-28 | 2021-11-30 | Snap Inc. | Generating animation overlays in a communication session |
US11189098B2 (en) | 2019-06-28 | 2021-11-30 | Snap Inc. | 3D object camera customization system |
US11676199B2 (en) | 2019-06-28 | 2023-06-13 | Snap Inc. | Generating customizable avatar outfits |
US11307747B2 (en) | 2019-07-11 | 2022-04-19 | Snap Inc. | Edge gesture interface with smart interactions |
US11455081B2 (en) | 2019-08-05 | 2022-09-27 | Snap Inc. | Message thread prioritization interface |
US10911387B1 (en) | 2019-08-12 | 2021-02-02 | Snap Inc. | Message reminder interface |
US11559740B2 (en) * | 2019-09-13 | 2023-01-24 | Gree, Inc. | Video modification and transmission using tokens |
US11320969B2 (en) | 2019-09-16 | 2022-05-03 | Snap Inc. | Messaging system with battery level sharing |
US11425062B2 (en) | 2019-09-27 | 2022-08-23 | Snap Inc. | Recommended content viewed by friends |
CN110719415B (en) * | 2019-09-30 | 2022-03-15 | Shenzhen SenseTime Technology Co., Ltd. | Video image processing method and device, electronic equipment and computer readable medium |
US11080917B2 (en) | 2019-09-30 | 2021-08-03 | Snap Inc. | Dynamic parameterized user avatar stories |
US11218838B2 (en) | 2019-10-31 | 2022-01-04 | Snap Inc. | Focused map-based context information surfacing |
JP7046044B6 | 2019-11-08 | 2022-05-06 | GREE, Inc. | Computer programs, server devices and methods |
US11063891B2 (en) | 2019-12-03 | 2021-07-13 | Snap Inc. | Personalized avatar notification |
US11128586B2 (en) | 2019-12-09 | 2021-09-21 | Snap Inc. | Context sensitive avatar captions |
US11036989B1 (en) | 2019-12-11 | 2021-06-15 | Snap Inc. | Skeletal tracking using previous frames |
US11263817B1 (en) | 2019-12-19 | 2022-03-01 | Snap Inc. | 3D captions with face tracking |
US11227442B1 (en) | 2019-12-19 | 2022-01-18 | Snap Inc. | 3D captions with semantic graphical elements |
CN111162993B (en) * | 2019-12-26 | 2022-04-26 | Shanghai Lianshang Network Technology Co., Ltd. | Information fusion method and device |
US11140515B1 (en) | 2019-12-30 | 2021-10-05 | Snap Inc. | Interfaces for relative device positioning |
US11128715B1 (en) | 2019-12-30 | 2021-09-21 | Snap Inc. | Physical friend proximity in chat |
US11169658B2 (en) | 2019-12-31 | 2021-11-09 | Snap Inc. | Combined map icon with action indicator |
JP7066764B2 (en) * | 2020-01-22 | 2022-05-13 | GREE, Inc. | Computer programs, methods and server equipment |
KR20220133249A (en) | 2020-01-30 | 2022-10-04 | Snap Inc. | A system for creating media content items on demand |
US11356720B2 (en) | 2020-01-30 | 2022-06-07 | Snap Inc. | Video generation system to render frames on demand |
US11284144B2 (en) | 2020-01-30 | 2022-03-22 | Snap Inc. | Video generation system to render frames on demand using a fleet of GPUs |
US11036781B1 (en) | 2020-01-30 | 2021-06-15 | Snap Inc. | Video generation system to render frames on demand using a fleet of servers |
US11619501B2 (en) | 2020-03-11 | 2023-04-04 | Snap Inc. | Avatar based on trip |
US11217020B2 (en) | 2020-03-16 | 2022-01-04 | Snap Inc. | 3D cutout image modification |
US11818286B2 (en) | 2020-03-30 | 2023-11-14 | Snap Inc. | Avatar recommendation and reply |
US11625873B2 (en) | 2020-03-30 | 2023-04-11 | Snap Inc. | Personalized media overlay recommendation |
US11956190B2 (en) | 2020-05-08 | 2024-04-09 | Snap Inc. | Messaging system with a carousel of related entities |
DK202070625A1 (en) | 2020-05-11 | 2022-01-04 | Apple Inc | User interfaces related to time |
US11921998B2 (en) | 2020-05-11 | 2024-03-05 | Apple Inc. | Editing features of an avatar |
US11543939B2 (en) | 2020-06-08 | 2023-01-03 | Snap Inc. | Encoded image based messaging system |
US11922010B2 (en) | 2020-06-08 | 2024-03-05 | Snap Inc. | Providing contextual information with keyboard interface for messaging system |
US11356392B2 (en) | 2020-06-10 | 2022-06-07 | Snap Inc. | Messaging system including an external-resource dock and drawer |
US11580682B1 (en) | 2020-06-30 | 2023-02-14 | Snap Inc. | Messaging system with augmented reality makeup |
US11863513B2 (en) | 2020-08-31 | 2024-01-02 | Snap Inc. | Media content playback and comments management |
US11360733B2 (en) | 2020-09-10 | 2022-06-14 | Snap Inc. | Colocated shared augmented reality without shared backend |
US11470025B2 (en) | 2020-09-21 | 2022-10-11 | Snap Inc. | Chats with micro sound clips |
US11452939B2 (en) | 2020-09-21 | 2022-09-27 | Snap Inc. | Graphical marker generation system for synchronizing users |
US11910269B2 (en) | 2020-09-25 | 2024-02-20 | Snap Inc. | Augmented reality content items including user avatar to share location |
US11615592B2 (en) | 2020-10-27 | 2023-03-28 | Snap Inc. | Side-by-side character animation from realtime 3D body motion capture |
US11660022B2 (en) | 2020-10-27 | 2023-05-30 | Snap Inc. | Adaptive skeletal joint smoothing |
US11748931B2 (en) | 2020-11-18 | 2023-09-05 | Snap Inc. | Body animation sharing and remixing |
US11450051B2 (en) | 2020-11-18 | 2022-09-20 | Snap Inc. | Personalized avatar real-time motion capture |
US11734894B2 (en) | 2020-11-18 | 2023-08-22 | Snap Inc. | Real-time motion transfer for prosthetic limbs |
US11790531B2 (en) | 2021-02-24 | 2023-10-17 | Snap Inc. | Whole body segmentation |
US11809633B2 (en) | 2021-03-16 | 2023-11-07 | Snap Inc. | Mirroring device with pointing based navigation |
US11734959B2 (en) | 2021-03-16 | 2023-08-22 | Snap Inc. | Activating hands-free mode on mirroring device |
US11798201B2 (en) | 2021-03-16 | 2023-10-24 | Snap Inc. | Mirroring device with whole-body outfits |
US11908243B2 (en) | 2021-03-16 | 2024-02-20 | Snap Inc. | Menu hierarchy navigation on electronic mirroring devices |
US11544885B2 (en) | 2021-03-19 | 2023-01-03 | Snap Inc. | Augmented reality experience based on physical items |
US11562548B2 (en) | 2021-03-22 | 2023-01-24 | Snap Inc. | True size eyewear in real time |
GB2606344A (en) * | 2021-04-28 | 2022-11-09 | Sony Interactive Entertainment Europe Ltd | Computer-implemented method and system for generating visual adjustment in a computer-implemented interactive entertainment environment |
US11652960B2 (en) * | 2021-05-14 | 2023-05-16 | Qualcomm Incorporated | Presenting a facial expression in a virtual meeting |
US11636654B2 (en) | 2021-05-19 | 2023-04-25 | Snap Inc. | AR-based connected portal shopping |
US11776190B2 (en) | 2021-06-04 | 2023-10-03 | Apple Inc. | Techniques for managing an avatar on a lock screen |
US11941227B2 (en) | 2021-06-30 | 2024-03-26 | Snap Inc. | Hybrid search system for customizable media |
US11704626B2 (en) * | 2021-07-09 | 2023-07-18 | Prezi, Inc. | Relocation of content item to motion picture sequences at multiple devices |
US11854069B2 (en) | 2021-07-16 | 2023-12-26 | Snap Inc. | Personalized try-on ads |
JP7385289B2 | 2021-08-03 | 2023-11-22 | Frontier Channel Co., Ltd. | Programs and information processing equipment |
US11908083B2 (en) | 2021-08-31 | 2024-02-20 | Snap Inc. | Deforming custom mesh based on body mesh |
US11670059B2 (en) | 2021-09-01 | 2023-06-06 | Snap Inc. | Controlling interactive fashion based on body gestures |
US11673054B2 (en) | 2021-09-07 | 2023-06-13 | Snap Inc. | Controlling AR games on fashion items |
US11663792B2 (en) | 2021-09-08 | 2023-05-30 | Snap Inc. | Body fitted accessory with physics simulation |
US11900506B2 (en) | 2021-09-09 | 2024-02-13 | Snap Inc. | Controlling interactive fashion based on facial expressions |
US11734866B2 (en) | 2021-09-13 | 2023-08-22 | Snap Inc. | Controlling interactive fashion based on voice |
US11798238B2 (en) | 2021-09-14 | 2023-10-24 | Snap Inc. | Blending body mesh into external mesh |
US11836866B2 (en) | 2021-09-20 | 2023-12-05 | Snap Inc. | Deforming real-world object using an external mesh |
US11636662B2 (en) | 2021-09-30 | 2023-04-25 | Snap Inc. | Body normal network light and rendering control |
US11836862B2 (en) | 2021-10-11 | 2023-12-05 | Snap Inc. | External mesh with vertex attributes |
US11790614B2 (en) | 2021-10-11 | 2023-10-17 | Snap Inc. | Inferring intent from pose and speech input |
US20240096033A1 (en) * | 2021-10-11 | 2024-03-21 | Meta Platforms Technologies, Llc | Technology for creating, replicating and/or controlling avatars in extended reality |
US11651572B2 (en) | 2021-10-11 | 2023-05-16 | Snap Inc. | Light and rendering of garments |
US11763481B2 (en) | 2021-10-20 | 2023-09-19 | Snap Inc. | Mirror-based augmented reality experience |
US11960784B2 (en) | 2021-12-07 | 2024-04-16 | Snap Inc. | Shared augmented reality unboxing experience |
US11748958B2 (en) | 2021-12-07 | 2023-09-05 | Snap Inc. | Augmented reality unboxing experience |
US11880947B2 (en) | 2021-12-21 | 2024-01-23 | Snap Inc. | Real-time upper-body garment exchange |
US11887260B2 (en) | 2021-12-30 | 2024-01-30 | Snap Inc. | AR position indicator |
US11928783B2 (en) | 2021-12-30 | 2024-03-12 | Snap Inc. | AR position and orientation along a plane |
US11823346B2 (en) | 2022-01-17 | 2023-11-21 | Snap Inc. | AR body part tracking system |
US11954762B2 (en) | 2022-01-19 | 2024-04-09 | Snap Inc. | Object replacement system |
GB2616644A (en) * | 2022-03-16 | 2023-09-20 | Sony Interactive Entertainment Inc | Input system |
KR102509449B1 (en) * | 2022-05-25 | 2023-03-14 | TUBAn Co., Ltd. | Digital content providing method and server including counterpart custom character control |
WO2023235217A1 (en) * | 2022-06-03 | 2023-12-07 | Universal City Studios Llc | Smoothing server for processing user interactions to control an interactive asset |
US11870745B1 (en) | 2022-06-28 | 2024-01-09 | Snap Inc. | Media gallery sharing and management |
WO2024014266A1 (en) * | 2022-07-13 | 2024-01-18 | Sony Group Corporation | Control device, control method, information processing device, information processing method, and program |
US11893166B1 (en) | 2022-11-08 | 2024-02-06 | Snap Inc. | User avatar movement control using an augmented reality eyewear device |
Family Cites Families (61)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB304024A (en) | 1928-01-16 | 1929-01-17 | Herbert Miller | Improvements in or relating to parallel vices |
US5689575A (en) | 1993-11-22 | 1997-11-18 | Hitachi, Ltd. | Method and apparatus for processing images of facial expressions |
US5347306A (en) * | 1993-12-17 | 1994-09-13 | Mitsubishi Electric Research Laboratories, Inc. | Animated electronic meeting place |
US5491743A (en) * | 1994-05-24 | 1996-02-13 | International Business Machines Corporation | Virtual conference system and terminal apparatus therefor |
US6285380B1 (en) * | 1994-08-02 | 2001-09-04 | New York University | Method and system for scripting interactive animated actors |
US5572248A (en) * | 1994-09-19 | 1996-11-05 | Teleport Corporation | Teleconferencing method and system for providing face-to-face, non-animated teleconference environment |
US6430997B1 (en) * | 1995-11-06 | 2002-08-13 | Trazer Technologies, Inc. | System and method for tracking and assessing movement skills in multidimensional space |
US6219045B1 (en) * | 1995-11-13 | 2001-04-17 | Worlds, Inc. | Scalable virtual world chat client-server system |
US5880731A (en) * | 1995-12-14 | 1999-03-09 | Microsoft Corporation | Use of avatars with automatic gesturing and bounded interaction in on-line chat session |
US5983369A (en) * | 1996-06-17 | 1999-11-09 | Sony Corporation | Online simultaneous/altering-audio/video/voice data based service and support for computer systems |
US6400374B2 (en) * | 1996-09-18 | 2002-06-04 | Eyematic Interfaces, Inc. | Video superposition system and method |
IL121178A (en) * | 1997-06-27 | 2003-11-23 | Nds Ltd | Interactive game system |
JPH1133230A (en) * | 1997-07-16 | 1999-02-09 | Sega Enterp Ltd | Communication game system |
US6522312B2 (en) * | 1997-09-01 | 2003-02-18 | Canon Kabushiki Kaisha | Apparatus for presenting mixed reality shared among operators |
US6272231B1 (en) * | 1998-11-06 | 2001-08-07 | Eyematic Interfaces, Inc. | Wavelet-based facial motion capture for avatar animation |
US6215498B1 (en) * | 1998-09-10 | 2001-04-10 | Lionhearth Technologies, Inc. | Virtual command post |
US7073129B1 (en) * | 1998-12-18 | 2006-07-04 | Tangis Corporation | Automated selection of appropriate information based on a computer user's context |
US7055101B2 (en) * | 1998-12-18 | 2006-05-30 | Tangis Corporation | Thematic response to a computer user's context, such as by a wearable personal computer |
US6553138B2 (en) * | 1998-12-30 | 2003-04-22 | New York University | Method and apparatus for generating three-dimensional representations of objects |
US7120880B1 (en) * | 1999-02-25 | 2006-10-10 | International Business Machines Corporation | Method and system for real-time determination of a subject's interest level to media content |
US6466250B1 (en) * | 1999-08-09 | 2002-10-15 | Hughes Electronics Corporation | System for electronically-mediated collaboration including eye-contact collaboratory |
US6384829B1 (en) * | 1999-11-24 | 2002-05-07 | Fuji Xerox Co., Ltd. | Streamlined architecture for embodied conversational characters with reduced message traffic |
US6767287B1 (en) | 2000-03-16 | 2004-07-27 | Sony Computer Entertainment America Inc. | Computer system and method for implementing a virtual reality environment for a multi-player game |
US6854012B1 (en) * | 2000-03-16 | 2005-02-08 | Sony Computer Entertainment America Inc. | Data transmission protocol and visual display for a networked computer system |
US20020083179A1 (en) * | 2000-05-12 | 2002-06-27 | Shaw Venson M . | System and method of personalizing communication sessions based on user behavior |
JP3405708B2 (en) * | 2000-05-15 | 2003-05-12 | Asahi Kasei Corporation | Audio posting method |
US6894686B2 (en) * | 2000-05-16 | 2005-05-17 | Nintendo Co., Ltd. | System and method for automatically editing captured images for inclusion into 3D video game play |
US6795068B1 (en) * | 2000-07-21 | 2004-09-21 | Sony Computer Entertainment Inc. | Prop input device and method for mapping an object from a two-dimensional camera image to a three-dimensional space for controlling action in a game program |
JP4671011B2 (en) * | 2000-08-30 | 2011-04-13 | Sony Corporation | Effect adding device, effect adding method, effect adding program, and effect adding program storage medium |
US6867797B1 (en) * | 2000-10-27 | 2005-03-15 | Nortel Networks Limited | Animating images during a call |
US6731307B1 (en) * | 2000-10-30 | 2004-05-04 | Koninklijke Philips Electronics N.V. | User interface/entertainment device that simulates personal interaction and responds to user's mental state and/or personality |
AU2002232928A1 (en) * | 2000-11-03 | 2002-05-15 | Zoesis, Inc. | Interactive character system |
US6910186B2 (en) * | 2000-12-08 | 2005-06-21 | Kyunam Kim | Graphic chatting with organizational avatars |
KR20010082779A (en) | 2001-05-26 | 2001-08-31 | Lee Kyung-Hwan | Method for producing avatar using image data and agent system with the avatar |
US20020188959A1 (en) * | 2001-06-12 | 2002-12-12 | Koninklijke Philips Electronics N.V. | Parallel and synchronized display of augmented multimedia information |
US7444656B2 (en) * | 2001-08-02 | 2008-10-28 | Intellocity Usa, Inc. | Post production visual enhancement rendering |
JP2003248837A (en) * | 2001-11-12 | 2003-09-05 | Mega Chips Corp | Device and system for image generation, device and system for sound generation, server for image generation, program, and recording medium |
US6945870B2 (en) * | 2001-11-23 | 2005-09-20 | Cyberscan Technology, Inc. | Modular entertainment and gaming system configured for processing raw biometric data and multimedia response by a remote server |
AU2003201032A1 (en) | 2002-01-07 | 2003-07-24 | Stephen James Crampton | Method and apparatus for an avatar user interface system |
US7360234B2 (en) | 2002-07-02 | 2008-04-15 | Caption Tv, Inc. | System, method, and computer program product for selective filtering of objectionable content from a program |
US7225414B1 (en) * | 2002-09-10 | 2007-05-29 | Videomining Corporation | Method and system for virtual touch entertainment |
US20040085259A1 (en) * | 2002-11-04 | 2004-05-06 | Mark Tarlton | Avatar control using a communication device |
US7106358B2 (en) * | 2002-12-30 | 2006-09-12 | Motorola, Inc. | Method, system and apparatus for telepresence communications |
JP3950802B2 (en) * | 2003-01-31 | 2007-08-01 | NTT Docomo, Inc. | Face information transmission system, face information transmission method, face information transmission program, and computer-readable recording medium |
US20040179037A1 (en) * | 2003-03-03 | 2004-09-16 | Blattner Patrick D. | Using avatars to communicate context out-of-band |
US20070168863A1 (en) * | 2003-03-03 | 2007-07-19 | Aol Llc | Interacting avatars in an instant messaging communication session |
KR100514366B1 (en) | 2003-08-05 | 2005-09-13 | LG Electronics Inc. | Method for displaying avatar reflecting biorhythm |
JP2004118849A (en) * | 2003-09-25 | 2004-04-15 | Digital Passage Co., Ltd. | Interactive communication method and interactive communication system using communication line, and recording medium |
JP4559092B2 (en) * | 2004-01-30 | 2010-10-06 | NTT Docomo, Inc. | Mobile communication terminal and program |
JP2005230056A (en) * | 2004-02-17 | 2005-09-02 | Namco Ltd | Game device and program |
US20050266925A1 (en) * | 2004-05-25 | 2005-12-01 | Ongame E-Solutions Ab | System and method for an online duel game |
US7542040B2 (en) * | 2004-08-11 | 2009-06-02 | The United States Of America As Represented By The Secretary Of The Navy | Simulated locomotion method and apparatus |
WO2006039371A2 (en) * | 2004-10-01 | 2006-04-13 | Wms Gaming Inc. | Displaying 3d characters in gaming machines |
US20060178964A1 (en) * | 2005-02-04 | 2006-08-10 | Jung Edward K | Reporting a non-mitigated loss in a virtual world |
US20060248461A1 (en) * | 2005-04-29 | 2006-11-02 | Omron Corporation | Socially intelligent agent software |
US20080026838A1 (en) * | 2005-08-22 | 2008-01-31 | Dunstan James E | Multi-player non-role-playing virtual world games: method for two-way interaction between participants and multi-player virtual world games |
US7822607B2 (en) * | 2005-08-26 | 2010-10-26 | Palo Alto Research Center Incorporated | Computer application environment and communication system employing automatic identification of human conversational behavior |
US20070074114A1 (en) * | 2005-09-29 | 2007-03-29 | Conopco, Inc., D/B/A Unilever | Automated dialogue interface |
US7677974B2 (en) * | 2005-10-14 | 2010-03-16 | Leviathan Entertainment, Llc | Video game methods and systems |
US7775885B2 (en) * | 2005-10-14 | 2010-08-17 | Leviathan Entertainment, Llc | Event-driven alteration of avatars |
EP1984898A4 (en) * | 2006-02-09 | 2010-05-05 | Nms Comm Corp | Smooth morphing between personal video calling avatars |
2007
- 2007-05-07 EP EP07776884A patent/EP2016562A4/en not_active Ceased
- 2007-05-07 US US11/800,899 patent/US8766983B2/en active Active
- 2007-05-07 WO PCT/US2007/011143 patent/WO2007130693A2/en active Application Filing
- 2007-05-07 JP JP2009509822A patent/JP4921550B2/en active Active
- 2007-05-07 EP EP10154578A patent/EP2194509A1/en not_active Ceased
- 2007-05-07 WO PCT/US2007/011141 patent/WO2007130691A2/en active Application Filing
- 2007-05-07 US US11/801,036 patent/US20080001951A1/en not_active Abandoned
- 2007-05-07 US US11/800,843 patent/US8601379B2/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040240740A1 (en) * | 1998-05-19 | 2004-12-02 | Akio Ohba | Image processing device and method, and distribution medium |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10217029B1 (en) | 2018-02-26 | 2019-02-26 | Ringcentral, Inc. | Systems and methods for automatically generating headshots from a plurality of still images |
US10726305B2 (en) | 2018-02-26 | 2020-07-28 | Ringcentral, Inc. | Systems and methods for automatically generating headshots from a plurality of still images |
Also Published As
Publication number | Publication date |
---|---|
WO2007130691A3 (en) | 2008-11-20 |
EP2194509A1 (en) | 2010-06-09 |
JP4921550B2 (en) | 2012-04-25 |
WO2007130693A3 (en) | 2008-03-06 |
US20080001951A1 (en) | 2008-01-03 |
EP2016562A2 (en) | 2009-01-21 |
US20070268312A1 (en) | 2007-11-22 |
US8766983B2 (en) | 2014-07-01 |
US8601379B2 (en) | 2013-12-03 |
US20070260984A1 (en) | 2007-11-08 |
JP2009536406A (en) | 2009-10-08 |
WO2007130691A2 (en) | 2007-11-15 |
EP2016562A4 (en) | 2010-01-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8601379B2 (en) | Methods for interactive communications with real time effects and avatar environment interaction | |
US10636217B2 (en) | Integration of tracked facial features for VR users in virtual reality environments | |
US20080215974A1 (en) | Interactive user controlled avatar animations | |
US10195528B2 (en) | Systems for using three-dimensional object as controller in an interactive game | |
JP5756198B2 (en) | Interactive user-controlled avatar animation | |
US20180373413A1 (en) | Information processing method and apparatus, and program for executing the information processing method on computer | |
US20100060662A1 (en) | Visual identifiers for virtual world avatars | |
CN102129343B (en) | Directed performance in motion capture system | |
US8622831B2 (en) | Responsive cutscenes in video games | |
JP2000511368A (en) | System and method for integrating user image into audiovisual representation | |
WO2008106197A1 (en) | Interactive user controlled avatar animations | |
JP2006263122A (en) | Game apparatus, game system, game data processing method, program for game data processing method and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 07776886; Country of ref document: EP; Kind code of ref document: A2 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 07776886; Country of ref document: EP; Kind code of ref document: A2 |