US20120092475A1 - Method, Apparatus And System For Implementing Interaction Between A Video And A Virtual Network Scene - Google Patents

Method, Apparatus And System For Implementing Interaction Between A Video And A Virtual Network Scene

Info

Publication number
US20120092475A1
US20120092475A1 (application US13/334,765)
Authority
US
United States
Prior art keywords
user
video data
virtual network
video
network scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/334,765
Inventor
Zhuanke Li
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation: first worldwide family litigation filed (https://patents.darts-ip.com/?family=43354281&patent=US20120092475(A1)). "Global patent litigation dataset" by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License.
Application filed by Tencent Technology Shenzhen Co Ltd
Assigned to TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED. Assignment of assignors interest (see document for details). Assignors: LI, ZHUANKE
Publication of US20120092475A1
Legal status: Abandoned

Classifications

    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/12
    • A63F13/30 Interconnection arrangements between game servers and game devices; interconnection arrangements between game devices; interconnection arrangements between game servers
    • A63F13/42 Processing input control signals of video game devices by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/65 Generating or modifying game content automatically by game devices or servers from real world data, e.g. measurement in live racing competition
    • A63F13/655 Generating or modifying game content automatically by importing photos, e.g. of the player
    • A63F2300/1087 Input arrangements for converting player-generated signals into game device control signals comprising photodetecting means, e.g. a camera
    • A63F2300/1093 Input arrangements comprising photodetecting means using visible light
    • A63F2300/408 Peer to peer connection
    • A63F2300/577 Details of game services offered to the player for watching a game played by other players
    • A63F2300/6045 Methods for processing data by generating or executing the game program for mapping control signals received from the input arrangement into game commands
    • A63F2300/69 Involving elements of the real world in the game world, e.g. measurement in live races, real video
    • A63F2300/695 Imported photos, e.g. of the player
    • H04L12/1813 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast, for computer conferences, e.g. chat rooms
    • H04L12/1822 Conducting the conference, e.g. admission, detection, selection or grouping of participants, correlating users to one or more conference sessions, prioritising transmission
    • H04N21/42203 Input-only peripherals connected to specially adapted client devices: sound input device, e.g. microphone
    • H04N21/4223 Input-only peripherals connected to specially adapted client devices: cameras
    • H04N21/4781 End-user applications, supplemental services: games
    • H04N21/4788 End-user applications, supplemental services: communicating with other users, e.g. chatting
    • H04N7/147 Systems for two-way working between two video terminals: communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
    • H04N7/15 Conference systems
    • H04N7/157 Conference systems defining a virtual conference space and using avatars or agents

Definitions

  • the present disclosure relates to computer techniques, and more particularly, to a method, an apparatus and a system for implementing interaction between a video and a virtual network scene.
  • Various embodiments provide a method, an apparatus and a system for implementing interaction between a video and a virtual network scene, so as to increase the correlation between the video and a network service in the virtual network scene, realize interaction between the video and the network service and improve the user's experience.
  • a method for implementing interaction between a video and a virtual network scene includes: obtaining video data of a user; and displaying, on a client, a video corresponding to the video data in the virtual network scene.
  • an apparatus for implementing interaction between a video and a virtual network scene includes:
  • a collecting module to collect video data of a user
  • a displaying module to display a video corresponding to the video data of the user in a virtual network scene on a client of the user.
  • a system for implementing interaction between a video and a virtual network scene includes: a server to implement the interaction between the video and the virtual network scene, at least two clients to implement interaction between the video and the virtual network scene, wherein
  • each of the at least two clients is to collect video data of a user, display a video of the user in the virtual network scene on the client according to the video data of the user, recognize action information of the user according to the video data of the user, apply the action information of the user on the virtual network scene;
  • the server is to forward the video data between the clients, and control the virtual network scene after forwarding virtual network scene control information between the clients.
  • a computer-readable storage medium stores computer programs used for enabling one or more processors to obtain video data of a user and display, on a client, a video corresponding to the video data in the virtual network scene.
  • the correlation between the video and the network service in the virtual network scene is increased.
  • the interaction between the video and the network service is realized and the user's experience is improved.
  • FIG. 1 is a flowchart illustrating a method for implementing interaction between a video and a virtual network scene according to an example of the present disclosure.
  • FIG. 2 is a schematic diagram illustrating an application scene of the method for implementing the interaction between the video and the virtual network scene according to a first example of the present disclosure.
  • FIG. 3 is a flowchart illustrating a method for implementing interaction between the video and the virtual network scene according to the first example of the present disclosure.
  • FIG. 4 is a schematic diagram illustrating user interactions in the method for implementing the interaction between the video and the virtual network scene according to the first example of the present disclosure.
  • FIG. 5 is a schematic diagram illustrating an apparatus for implementing interaction between the video and the virtual network scene according to a second example of the present disclosure.
  • FIG. 6 is a schematic diagram illustrating a system for implementing interaction between the video and the virtual network scene according to a third example of the present disclosure.
  • the network service based on the virtual network scene in the existing technique can provide users only with interactions inside the virtual scene.
  • the provision mode is one-dimensional: it can provide neither an experience combined with reality nor an experience combined with the network service.
  • a user is able to see only the virtual people in the virtual scene, and cannot see the real identities and real looks of other persons participating in the network service. The user is also unable to let other people see himself/herself through the network during the network service, which makes interaction between the user's experience and the network service in the virtual network scene impossible and results in a poor user experience.
  • various embodiments provide a method for implementing the interaction between the video and the virtual network scene.
  • the method includes: obtaining video data of a user, and displaying, on a client, a video corresponding to the video data in the virtual network scene by embedding the video in the virtual network scene; or, displaying the video corresponding to the video data over the virtual network scene on the client according to the video data.
  • displaying the video corresponding to the video data over the virtual network scene on the client means that the video of the user floats on the virtual network scene when being displayed.
  • the virtual network scene includes but is not limited to network game, network meeting, etc.
  • when there are multiple users using the virtual network scene, the method includes the following steps:
  • video data of a current user is collected, and a video of the current user is displayed on clients of the current user and other users according to the collected video data;
  • action information of the current user is obtained according to the video data of the current user; and/or
  • the action information of the current user is applied on the virtual network scene to implement the interaction between the video of the current user and the virtual network scene.
  • the applying the action information of the current user on the virtual network scene to implement the interaction between the video and the virtual scene in step S4 includes: mapping an action of the current user to a virtual object of the current user in the virtual network scene according to the action information and a pre-configured mapping relationship between the current user and the virtual object; and/or controlling the virtual network scene according to the action information.
  • the process of obtaining the action information of the current user according to the collected video data of the current user in step S3 includes:
  • according to the collected video data of the current user, capturing facial video data of the current user and obtaining facial action information of the current user through a face detecting technique; and/or capturing body video data of the current user and obtaining body action information of the current user through motion analyzing and object tracing techniques.
  • an embodiment provides a method for implementing the interaction between the video and the virtual network scene.
  • FIG. 2 is a schematic diagram illustrating an application architecture provided by the example of the present disclosure.
  • the method provided by various embodiments includes the following steps.
  • Step 101, client A of user A collects video data of the user A.
  • client A may be a computer equipped with an apparatus capable of capturing video data (e.g. a camera), or a portable terminal equipped with such an apparatus.
  • the embodiment does not restrict the type of client A.
  • Step 102, client A recognizes the action of user A according to the collected video data of user A.
  • the video data corresponds to the action presented in front of the apparatus capable of capturing video data.
  • the action includes but is not limited to: facial action of the user and body action of the user.
  • if user A presents his/her face on client A, client A will capture facial video data of user A from the collected video data of user A through the face detecting technique. Based on the facial video data of user A, a facial action of user A can be obtained.
  • if user A presents his/her body action on client A, client A will capture body video data of user A from the collected video data of user A through the motion analyzing and object tracing techniques. Based on the body video data of user A, a body action of user A can be obtained.
  • Step 103, client A transmits the recognized action of user A to a pre-configured network server.
  • the network server may be a video game server.
  • Client A may transmit the recognized action of user A to the pre-configured network server by carrying values that represent specific actions. For example, with respect to facial actions, it is possible to configure that XX1 represents blink and XX2 represents frown, etc.
  • Step 104, the network server maps the action of user A to the virtual person a according to the recognized action of user A and a mapping relationship between user A and the virtual person a in the virtual network game scene.
  • the embodiment supposes that there is a network server (i.e. the video game server) to provide a virtual network game service to a plurality of users, wherein the network server saves the above mapping relationship.
  • after receiving the action of user A (which may be an action identifier) transmitted by client A, the network server applies the action to the virtual person a corresponding to user A.
  • suppose the network server recognizes that the action of user A is a blink (the identifier of this action is XX1); accordingly, the virtual person a in the virtual network game scene will also blink.
  • in a practical application, this may be implemented by motion analyzing and object tracing techniques.
  • client A recognizes the action of user A according to the video data of user A and transmits the action of user A to the network server.
  • client A may also transmit the obtained video data of user A to the network server.
  • the network server obtains the action information of user A according to the video data received. This example does not restrict which of the above two methods is adopted.
  • when transmitting the video data, it is also possible to encode and compress the video data in order to increase network transmission efficiency. The present disclosure places no restriction on such processing.
  • similarly, users B and C may realize the interaction between the video and the network game scene.
  • the user's experience is improved.
  • Each user may see real looks of other users participating in the network game on his/her local client.
  • each client may provide a function of self-exhibition to the user, i.e. the user may also see the real look of himself/herself on the local client.
  • the method provided by various embodiments may further include: client A receives a trigger signal transmitted by user A, and captures a screen of the video of user B and a screen of the video of user C currently seen by user A to obtain a current screenshot X of user B and a current screenshot Y of user C, wherein the current screenshot X and the current screenshot Y respectively contain facial information of user B and facial information of user C.
  • according to the facial information contained in the current screenshot X and in the current screenshot Y, client A calculates a matching degree between the facial information of user B and the facial information of user C to obtain a matching degree between the current screenshot X and the current screenshot Y. The calculated result may also be returned to each user to further improve the user's experience.
  • if a video meeting the pre-defined requirement is captured at the pre-defined sampling time, a positive response is returned to the user, e.g. the user is rewarded in the network game; otherwise, a negative response is returned to the user, e.g. the user is punished in the network game, such as being forbidden to perform a certain action in the game.
  • alternatively, an exhibition value is set for the user and an accumulation plan may be created: the larger the accumulated exhibition value, the more the user is considered willing to exhibit himself/herself. Or, the time that the user appears in the video may be accumulated (in particular, the time that the face of the user appears in the video): the longer the time, the more the user is considered willing to exhibit himself/herself.
  • it is also possible to configure a facial detection feature point in advance. According to the captured facial video data of the current user and the facial detection feature point, it is determined whether a detection value corresponding to the facial feature point of the current user can be obtained. If yes, a positive response is returned to the user, e.g. the user is rewarded in the network game; otherwise, a negative response is returned, e.g. the user is punished in the network game, such as being forbidden to perform a certain action. For example, suppose the facial detection feature point configured in advance is the nose; it is then determined, according to the captured facial video data of user A, whether a detection feature point of the nose is present.
  • if the feature point is detected, the user is rewarded in the network game; otherwise, the user is forbidden to perform a certain action in the network game, e.g. the user is forbidden to keep on playing, is punished with certain game scores, or is prompted to aim his/her face at the video capturing apparatus, etc.
  • the client may also provide items, such as figure items, whose locations change along with the face of the user through the object tracing technique (see the sketch below).
  • for example, user A selects a pair of glasses. After receiving a trigger signal indicating that user A has selected the glasses, client A adds the glasses selected by user A to the video window. In the video window, whether user A lowers or raises his/her head, the glasses always follow the position of the face of user A.
  • the items are configured for the user in advance or added from other sources by the user. The various embodiments place no restriction on this.
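  • The following is a minimal, illustrative sketch of such a face-following item using OpenCV face detection; the cascade file, the simple paste-style overlay and the eye-band geometry are assumptions for illustration, not details taken from the disclosure.

```python
import cv2

# Frontal-face Haar cascade that ships with OpenCV (availability assumed).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def overlay_item(frame, item):
    """Paste `item` (a small BGR image, e.g. glasses) over the eye region
    of every detected face, so the item follows the face between frames."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        band_h = h // 3                      # rough height of the eye band
        item_resized = cv2.resize(item, (w, band_h))
        y0 = y + h // 4                      # upper part of the face rectangle
        frame[y0:y0 + band_h, x:x + w] = item_resized
    return frame
```

  • In a fuller implementation, the object tracing technique would track the face between detections, and the item would be alpha-blended rather than pasted.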
  • various embodiments further enable the user to select one or more favorite objective video windows after seeing the video windows of other users.
  • the user may display an expression picture or play an animation in the selected objective video windows to express his/her feeling or an action that the user wants to perform.
  • user A selects an expression picture of laugh and selects to display this laugh picture in the video window of user B.
  • client A displays, after receiving a selection instruction triggered by user A, the laugh picture in the video window of user B through the network server.
  • the expression picture or the animation may be configured for the user in advance or may be defined by the user himself/herself.
  • each user may give a virtual gift to another user after seeing the video image of the other user.
  • An identifier is configured for each kind of virtual gift. For example, user A decides to give a virtual gift to user B after seeing the video image of user B (suppose there are two kinds of virtual gifts, where FFF denotes flower and WWW denotes drink).
  • An animation of sending the gift may be called at the video window of user A and an animation of receiving the gift may be called at the video window of user B (or it is also possible to call the animation only at one end). Accordingly, the interaction of sending a gift and receiving a gift may be realized through value transmission.
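  • A hedged sketch of how such gift values might be carried between clients follows; the JSON message format and field names are illustrative assumptions, since the disclosure only specifies that identifiers such as FFF and WWW are transmitted as values.

```python
import json

# Illustrative gift identifiers from the example: FFF = flower, WWW = drink.
GIFTS = {"FFF": "flower", "WWW": "drink"}

def send_gift(sock, sender, receiver, gift_id):
    """Transmit a gift as a small JSON value through the network server."""
    message = {"type": "gift", "from": sender, "to": receiver, "gift": gift_id}
    sock.sendall(json.dumps(message).encode() + b"\n")

def on_gift_message(message):
    """On each client, resolve the gift value and trigger the matching
    sending/receiving animation (represented here by a print)."""
    gift_name = GIFTS.get(message["gift"], "unknown gift")
    print(f"{message['from']} sends a {gift_name} to {message['to']}")
```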
  • the interaction between the video and the network service in the virtual network game scene is taken as an example.
  • the example may also be applied to interaction between an audio and the network service in the virtual network game scene, i.e., a client apparatus samples the voice of the user, recognizes the audio data to obtain the information the user wants to express, and applies that information to the virtual network game scene or to the virtual person in the virtual network game scene.
  • for example, client A obtains a “blink” command of user A, obtains the blink action through voice recognition and applies the blink action to the virtual person a of user A; for another example, client A obtains a “forward” command of user A, obtains the forward action through voice recognition and applies the forward action to the virtual person a of user A. The virtual person then performs the forward action in the virtual network game scene.
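  • The sketch below illustrates how recognized voice commands could be mapped to the same action identifiers used for video actions. `recognize_speech` is a placeholder returning a canned result, and the `MOVE_FORWARD` identifier is invented here; the disclosure names no speech recognition engine.

```python
# Illustrative mapping from spoken commands to action identifiers; "XX1"
# reuses the blink identifier from the video example above.
VOICE_COMMANDS = {"blink": "XX1", "forward": "MOVE_FORWARD"}

def recognize_speech(audio_bytes):
    """Placeholder for a real speech-to-text engine (assumption)."""
    return "blink"  # canned result so the sketch runs end to end

def handle_voice(audio_bytes, apply_action, user_id):
    """Map a recognized command to an action identifier and apply it to
    the user's virtual person via the supplied `apply_action` callable."""
    text = recognize_speech(audio_bytes)
    action_id = VOICE_COMMANDS.get(text.strip().lower())
    if action_id is not None:
        apply_action(user_id, action_id)
```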
  • FIG. 4 is a schematic diagram illustrating a user interaction process according to various embodiments.
  • the face detection technique includes but is not limited to the low-level feature detection method based on gray scale images in OpenCV, Haar feature detection methods, etc.
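  • As a concrete reference point, the following is a minimal sketch of the face detection step with OpenCV's Haar cascades, which the disclosure names among the usable techniques; the camera index and cascade file are illustrative assumptions.

```python
import cv2

# Frontal-face Haar cascade bundled with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

capture = cv2.VideoCapture(0)  # default camera; index 0 is an assumption
ok, frame = capture.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # The facial region would be cropped here and handed to the
        # action-recognition step (e.g. blink or frown detection).
        face_region = frame[y:y + h, x:x + w]
capture.release()
```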
  • a video window is embedded in the virtual network game scene.
  • face detection technique, motion analyzing technique and object tracing technique are adopted to implement the interaction between the video and the virtual network scene.
  • a video mix technique may also be adopted to improve the interaction by mixing the video with an animation.
  • the virtual network game scene is taken as an example.
  • the method provided by various embodiments may also be applied to other scenes such as a virtual network meeting scene.
  • the terms, descriptions and figures used herein are set forth by way of illustration only and are not meant as limitations.
  • various embodiments provide an apparatus for implementing the interaction between the video and the virtual network scene.
  • the apparatus is to obtain video data of the user, display, on a client, a video corresponding to the video data in the virtual network scene by embedding the video in the virtual network scene; or, display the video corresponding to the video data in the virtual network scene on the client according to the video data.
  • displaying the video in the virtual network scene here means that the video of the user floats on the virtual network scene when being displayed.
  • the apparatus includes:
  • a collecting module 501 to collect video data of the current user
  • a displaying module 502 to display a video of the current user on clients of the current user and other users according to the video data of the current user;
  • a recognizing module 503 to recognize action information of the current user according to the video data of the current user.
  • an interacting module 504 to apply the action information of the current user recognized by the recognizing module 503 to the virtual network scene to implement the interaction between the video of the current user and the virtual network scene.
  • the interacting module 504 includes:
  • a mapping unit to map a recognized action of the current user to a virtual person of the current user in the virtual network scene according to the action information of the current user recognized by the recognizing module 503 and a mapping relationship between the current user and the virtual person in the virtual network scene;
  • a controlling unit to control the virtual network scene according to the action information of the current user recognized by the recognizing module 503.
  • the recognizing module 503 includes:
  • a first recognizing unit to capture facial video data of the current user according to the video data of the current user collected by the collecting module 501 and recognize facial action information of the current user through a face detecting technique;
  • a second recognizing unit to capture body video data of the current user according to the video data of the current user collected by the collecting module 501 and recognize body action information of the current user through motion analyzing and object tracing techniques.
  • the apparatus further includes:
  • a first determining module to determine, at a pre-defined collecting time, whether the video data of the current user meeting a pre-defined requirement is captured
  • a first rewarding and punishing module to return a positive response to the current user when the first determining module determines that the video data of the current user meeting the pre-defined requirement is captured, and to return a negative response to the current user when the first determining module determines that such video data is not captured.
  • the apparatus further includes:
  • a first rewarding module to accumulate the time during which the facial video data of the current user can be captured, and to reward the current user according to the accumulated time;
  • a second determining module to obtain a detection value corresponding to a face detection feature point of the current user according to the facial video data of the current user and the face detection feature point defined in advance, and to return a positive or negative response to the current user according to the detection value.
  • the apparatus may further include:
  • a receiving module to receive a virtual item adding signal transmitted by the current user;
  • a selecting module to select an item that the current user wants to add after the item adding signal is received by the receiving module.
  • the displaying module 502 may display the collected video data of the current user and the item that the current user wants to add on the clients of the current user and other users.
  • the position of the item changes along with the position of the face of the current user.
  • the apparatus may further include:
  • a screen capturing module to receive a capture-screen signal triggered by the current user, and capture a screen of videos of at least two users displayed on the client of the current user to obtain at least two screenshots, wherein the screenshots include facial information of the users;
  • a processing module to calculate a matching degree between facial information of the at least two users according to the facial information included in the at least two screenshots obtained by the screen capturing module, and return a calculated result to each user.
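  • Purely as a structural illustration, the modules named above could be organized as in the following skeleton; the bodies are placeholders, since the disclosure defines responsibilities rather than an implementation.

```python
class CollectingModule:
    def collect(self):
        """Collect video data of the current user (e.g. from a camera)."""

class DisplayingModule:
    def display(self, video_data):
        """Show the user's video on the local client and on other clients."""

class RecognizingModule:
    def recognize(self, video_data):
        """Return action information (facial and/or body actions)."""

class InteractingModule:
    def apply(self, action_info, scene):
        """Map the action onto the user's virtual person, or control the
        virtual network scene, according to the recognized action."""
```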
  • the modules provided by the examples for implementing interaction between the video and the virtual network scene may be located in the same apparatus (for example, the collecting module, the displaying module, the recognizing module and the interacting module may be located in the client) or may be located in different apparatuses (for example, the collecting module and the displaying module are located in the client, whereas the recognizing module and the interacting module are located in the server).
  • the modules provided by the above example may be integrated into one module according to a requirement.
  • Each module in the above example may also be divided into several sub-modules.
  • various embodiments provide an apparatus for implementing the interaction between the video and the virtual network scene.
  • the apparatus provided by various embodiments is able to implement the interaction between the video and the virtual network scene.
  • the apparatus provided by various embodiments is able to implement the interaction between the video and the virtual network scene utilizing the face detecting technique, the motion analyzing and object tracing techniques and the action capturing technique. Interactions between users may be improved by mixing animations with the video utilizing video mix techniques.
  • various embodiments provide a system for implementing interaction between the video and the virtual network scene.
  • the system includes: a server 601 to implement interaction between the video and the virtual network scene, and a plurality of clients 602 to implement the interaction between the video and the virtual network scene.
  • Each of the clients 602 is to collect video data of a current user, display a video of the current user on clients 602 of the current user and other users according to the video data of the current user, obtain action information of the current user according to the collected video data of the current user, apply the action information of the current user to the virtual network scene to implement the interaction between the video of the current user and the virtual network scene.
  • the server 601 is to forward the video data of the clients 602, and control the virtual network scene after forwarding virtual scene control information between the clients 602.
  • the video data and the virtual scene control information may also be transmitted between the clients 602 in a P2P manner.
  • the video data and the virtual network data (e.g. virtual network game data) may be transmitted separately.
  • the system may include a client, a video server and a virtual scene server.
  • the client is to collect the video data of the current user, display the video of the current user and display videos of other users, obtain action information of the current user according to the collected video data of the current user, and display the virtual network scene.
  • the video server is to receive the video data collected by the client, forward the video data, receive the action information obtained by the client and apply the action information on the virtual network scene through the virtual scene server to implement the interaction between the video and the virtual network scene.
  • the virtual scene server is to execute the flow of the virtual network scene and apply the action information obtained by the video server on the virtual network scene to implement the interaction between the video and the virtual network scene. If the virtual scene is a network game, the virtual scene server is a game server.
  • alternatively, the system may include:
  • a client to collect the video data of the current user, display the video of the current user and display videos of other users, recognize action information of the current user according to the collected video data of the current user, and display the virtual network scene;
  • a video server to receive the video data collected by the client, forward the video data, receive the action information recognized by the client and apply the action information on the virtual network scene through a virtual scene server to implement the interaction between the video and the virtual network scene;
  • the virtual scene server to execute the flow of the virtual network scene and apply the action information obtained by the video server on the virtual network scene to implement the interaction between the video and the virtual network scene, wherein if the virtual scene is a network game, the virtual scene server is a game server; and
  • a P2P server to backup data for the video server and the virtual scene server to implement fault recovery and backup functions.
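  • A minimal sketch of the video server's forwarding role is given below: packets received from one client are re-sent to every other connected client. The asyncio usage, line-delimited framing and port number are assumptions; authentication, the backup function and the P2P fallback are omitted.

```python
import asyncio

clients = set()

async def handle_client(reader, writer):
    """Forward every packet received from one client to all other clients."""
    clients.add(writer)
    try:
        while data := await reader.readline():  # one packet per line (assumed)
            for other in clients:
                if other is not writer:
                    other.write(data)
                    await other.drain()
    finally:
        clients.discard(writer)

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 9000)
    async with server:
        await server.serve_forever()

# asyncio.run(main())  # start the relay; port 9000 is illustrative
```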
  • various embodiments provide a system for implementing interaction between the video and the virtual network scene.
  • the system provided by the various embodiments is able to implement the interaction between the video and the virtual network scene through embedding a video window in the virtual network game scene.
  • the system provided by various embodiments is able to implement the interaction between the video and the virtual network scene utilizing the face detection technique, the motion analyzing and object tracing techniques and the action capturing technique. Interactions between users may be improved by a video mix technique.
  • the technical solution provided by various embodiments is able to meet users' requirements on the network service in the virtual network scene, enable the user to see the real identities and real looks of other persons participating in the network service while enjoying the network service in the virtual network scene, and enable other users to see the user himself/herself during the network service.
  • The correlation between the video and the network service in the virtual network scene is increased.
  • Interaction between the video and the network service is realized.
  • “receiving” in the various embodiments may be understood as actively acquiring information from other modules or passively receiving information sent by other modules.
  • the modules in the apparatus provided by various embodiments may be arranged as described in the embodiment, or may be relocated into one or more other apparatuses of the embodiment.
  • the modules may be integrated into one module or may be divided into several sub-modules.
  • Some steps in various embodiments may be implemented by software programs stored in a computer-readable storage medium, e.g. CD or hard disk.
  • an example of the present disclosure provides a computer-readable storage medium which stores computer programs that enable one or more processors to execute the following steps:
  • obtaining video data of a user; displaying a video corresponding to the video data on a client by embedding the video in the virtual network scene; or, displaying the video corresponding to the video data in the virtual network scene of the client; or, controlling or affecting the virtual network scene according to the video data.

Abstract

Examples of the present disclosure provide a method, an apparatus and a system for implementing interaction between a video and a virtual network scene. The method includes: obtaining video data of a user; displaying, on a client, a video corresponding to the video data in the virtual network scene. Through associating the video and the network service in the virtual network scene, the correlation between the video and the network service in the virtual network scene is increased, interaction between the video and the network service is realized and the user's experience is improved.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of International Patent Application No. PCT/CN2010/072993 filed on May 20, 2010. This application claims the benefit and priority of Chinese Patent Application No. 200910150595.8, filed Jun. 23, 2009. The entire disclosures of the above applications are incorporated herein by reference.
  • FIELD
  • The present disclosure relates to computer techniques, and more particularly, to a method, an apparatus and a system for implementing interaction between a video and a virtual network scene.
  • BACKGROUND
  • This section provides background information related to the present disclosure which is not necessarily prior art.
  • With the rapid development of the Internet, users can enjoy services such as online office and online entertainment through the Internet. When implementing services including online office and online entertainment, existing techniques provide a virtual network scene in order to improve the user's experience and extend the scope of services provided by the Internet. For example, when multiple users play a network game, the users will feel as if they were in the game through the virtual network game scene, which greatly improves the user's experience and increases the users' satisfaction with the network service. The virtual network scene may be applied to, but is not limited to, network game, network meeting, etc.
  • SUMMARY
  • This section provides a general summary of the disclosure, and is not a comprehensive disclosure of its full scope or all of its features.
  • Various embodiments provide a method, an apparatus and a system for implementing interaction between a video and a virtual network scene, so as to increase the correlation between the video and a network service in the virtual network scene, realize interaction between the video and the network service and improve the user's experience.
  • According to one embodiment, a method for implementing interaction between a video and a virtual network scene is provided. The method includes:
  • obtaining video data of a user;
  • displaying, on a client, a video corresponding to the video data in the virtual network scene.
  • According to another embodiment, an apparatus for implementing interaction between a video and a virtual network scene is provided. The apparatus includes:
  • a collecting module, to collect video data of a user;
  • a displaying module, to display a video corresponding to the video data of the user in a virtual network scene on a client of the user.
  • According to another embodiment, a system for implementing interaction between a video and a virtual network scene is provided. The system includes: a server to implement the interaction between the video and the virtual network scene, at least two clients to implement interaction between the video and the virtual network scene, wherein
  • each of the at least two clients is to collect video data of a user, display a video of the user in the virtual network scene on the client according to the video data of the user, recognize action information of the user according to the video data of the user, apply the action information of the user on the virtual network scene;
  • the server is to forward the video data between the clients, and control the virtual network scene after forwarding virtual network scene control information between the clients.
  • According to still another embodiment, a computer-readable storage medium is provided. The computer-readable storage medium stores computer programs used for enabling one or more processors to
  • obtain video data of a user,
  • display, on a client, a video corresponding to the video data in the virtual network scene.
  • Advantages of the technical solution provided by various embodiments are as follows:
  • Through associating the video with the network service in the virtual network scene, the correlation between the video and the network service in the virtual network scene is increased. The interaction between the video and the network service is realized and the user's experience is improved.
  • Further areas of applicability will become apparent from the description provided herein. The description and specific examples in this summary are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.
  • DRAWINGS
  • The drawings described herein are for illustrative purposes only of selected embodiments and not all possible implementations, and are not intended to limit the scope of the present disclosure.
  • FIG. 1 is a flowchart illustrating a method for implementing interaction between a video and a virtual network scene according to an example of the present disclosure.
  • FIG. 2 is a schematic diagram illustrating an application scene of the method for implementing the interaction between the video and the virtual network scene according to a first example of the present disclosure.
  • FIG. 3 is a flowchart illustrating a method for implementing interaction between the video and the virtual network scene according to the first example of the present disclosure.
  • FIG. 4 is a schematic diagram illustrating user interactions in the method for implementing the interaction between the video and the virtual network scene according to the first example of the present disclosure.
  • FIG. 5 is a schematic diagram illustrating an apparatus for implementing interaction between the video and the virtual network scene according to a second example of the present disclosure.
  • FIG. 6 is a schematic diagram illustrating a system for implementing interaction between the video and the virtual network scene according to a third example of the present disclosure.
  • Corresponding reference numerals indicate corresponding parts throughout the several views of the drawings.
  • DETAILED DESCRIPTION
  • Example embodiments will now be described in further detail hereinafter with reference to accompanying drawings and examples to make the technical solution and merits therein clearer.
  • Reference throughout this specification to “one embodiment,” “an embodiment,” “specific embodiment,” or the like in the singular or plural means that one or more particular features, structures, or characteristics described in connection with an embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment,” “in a specific embodiment,” or the like in the singular or plural in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
  • The network service based on the virtual network scene in the existing technique can provide users only with interactions inside the virtual scene. The provision mode is one-dimensional: it can provide neither an experience combined with reality nor an experience combined with the network service. A user is able to see only the virtual people in the virtual scene, and cannot see the real identities and real looks of other persons participating in the network service. The user is also unable to let other people see himself/herself through the network during the network service, which makes interaction between the user's experience and the network service in the virtual network scene impossible and results in a poor user experience.
  • In order to increase the correlation between a video and a network service in a virtual network scene, implement the interaction between the video and the network service and improve the user's experience, various embodiments provide a method for implementing the interaction between the video and the virtual network scene. The method includes: obtaining video data of a user, and displaying, on a client, a video corresponding to the video data in the virtual network scene by embedding the video in the virtual network scene; or, displaying the video corresponding to the video data over the virtual network scene on the client according to the video data. Displaying the video over the virtual network scene on the client means that the video of the user floats on the virtual network scene when being displayed. Thus, the objective of implementing the interaction between the video and the virtual network scene is achieved. The virtual network scene includes but is not limited to network game, network meeting, etc.
  • As shown in FIG. 1, when there are multiple users using the virtual network scene, the method includes the following steps:
  • S1, video data of a current user is collected;
  • S2, a video of the current user is displayed on clients of the current user and other users according to the collected video data of the current user;
  • S3, action information of the current user is obtained according to the video data of the current user; and/or
  • S4, the action information of the current user is applied on the virtual network scene to implement the interaction between the video of the current user and the virtual network scene.
  • The applying the action information of the current user on the virtual network scene to implement the interaction between the video and the virtual scene in step S4 includes:
  • S4A, according to the action information of the current user and a pre-configured mapping relationship between the current user and a virtual object in the virtual network scene, map an action of the current user to the virtual object of the current user in the virtual network scene; and/or,
  • S4B, control the virtual network scene according to the action information of the current user.
  • The process of obtaining the action information of the current user according to the collected video data of the current user in step S3 includes:
  • according to the collected video data of the current user, capture facial video data of the current user and obtain facial action information of the current user through a face detecting technique; and/or
  • according to the collected video data of the current user, capture action video data of the current user and obtain the body action information of the current user through motion analyzing and object tracing techniques.
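  • As a sketch of how steps S1 to S4 might chain together on one client, consider the loop below; every collaborator (camera, scene, server, recognizer) is an assumed interface, not an API defined by the disclosure.

```python
def client_loop(camera, scene, server, recognize_action):
    """One client's S1-S4 cycle; all parameters are assumed interfaces."""
    while scene.active:
        frame = camera.read()                # S1: collect the user's video data
        scene.show_video(frame)              # S2: display it in the scene
        server.broadcast_video(frame)        #     and forward it to other clients
        action = recognize_action(frame)     # S3: obtain action information
        if action is not None:
            scene.apply_action(action)       # S4: apply it to the scene
```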
  • In the method for implementing the interaction between the video and the virtual network scene provided by various embodiments, through associating the video and the network service in the virtual network scene, the correlation between the video and the network service in the virtual network scene is increased, the interaction between the video and the network service is realized and the experience of the user is improved.
  • In order to make the technical solution of the method for implementing the interaction between the video and the virtual network scene provided by various embodiments clearer, embodiments are described in further detail. For facilitating the description, a frequently-used virtual network game scene is taken as an example virtual network scene in the following examples.
  • Example 1
  • In order to increase the relativity between the video and the network service in the virtual network game scene, implement the interaction between the video and the network service and improve the user's experience, an embodiment provides a method for implementing the interaction between the video and the virtual network scene.
  • As described above, for facilitating the description, this embodiment takes the virtual network game scene as an example. FIG. 2 is a schematic diagram illustrating an application architecture provided by the example of the present disclosure. As shown in FIG. 2, suppose the users in the virtual network game scene are user A, user B and user C, and the virtual objects corresponding to users A, B and C are virtual persons a, b and c. As shown in FIG. 3, the method provided by various embodiments includes the following steps.
  • Step 101, client A of user A collects video data of the user A.
  • In a practical application, client A may be a computer equipped with an apparatus capable of capturing video data (e.g. a camera), or a portable terminal equipped with such an apparatus. The embodiment does not restrict the type of client A.
  • Step 102, client A recognizes the action of user A according to the collected video data of user A.
  • The video data corresponds to the action presented in front of the apparatus capable of capturing video data. The action includes but is not limited to: facial actions of the user and body actions of the user.
  • If user A presents his/her face on client A, client A will capture facial video data of user A from the collected video data of user A through the face detecting technique. Based on the facial video data of user A, it is possible to obtain a face action of user A.
  • If user A presents his/her body action on client A, client A will capture body video data of user A from the collected video data of user A through the motion analyzing and object tracing techniques. Based on the body video data of user A, it is possible to obtain a body action of user A.
  • Step 103, client A transmits the recognized action of user A to a pre-configured network server.
  • The network server may be a video game server. Client A may transmit the recognized action of user A to the pre-configured network server by carrying values that represent specific actions. For example, with respect to facial actions, it is possible to configure that XX1 represents blink and XX2 represents frown, etc.
  • Step 104, the network server maps the action of user A to the virtual person a according to the recognized action of user A and a mapping relationship between user A and the virtual person a in the virtual network game scene.
  • As described above, there is a mapping relationship between the real user and the virtual person in the virtual network game scene. In order to increase the processing efficiency of the network service, as shown in FIG. 2, the embodiment supposes that there is a network server (i.e. the video game server) to provide a virtual network game service to a plurality of users, wherein the network server saves the above mapping relationship. Accordingly, after receiving the action of user A (which may be an action identifier) transmitted by client A, the network server applies the action to the virtual person a corresponding to user A. For example, suppose that the network server recognizes that the action of user A is a blink (the identifier of this action is XX1). Accordingly, the virtual person a in the virtual network game scene will also blink. In a practical application, this may be implemented by motion analyzing and object tracing techniques.
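  • The following sketch illustrates how the server-side mapping of step 104 could look: an action identifier such as XX1 is resolved against the saved user-to-virtual-person mapping and applied to the avatar. All class and function names are illustrative assumptions, not an API from the disclosure.

```python
class VirtualPerson:
    def __init__(self, name):
        self.name = name

    def blink(self):
        print(f"{self.name} blinks in the game scene")

    def frown(self):
        print(f"{self.name} frowns in the game scene")

# Mapping relationship saved on the network server: user -> virtual person.
user_to_avatar = {"userA": VirtualPerson("virtual person a")}

# Action identifiers agreed between client and server (XX1/XX2 as above).
action_handlers = {"XX1": VirtualPerson.blink, "XX2": VirtualPerson.frown}

def apply_action(user_id, action_id):
    """Apply a received action identifier to the sender's virtual person."""
    avatar = user_to_avatar[user_id]
    action_handlers[action_id](avatar)

apply_action("userA", "XX1")  # virtual person a blinks
```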
  • Through the above steps 101 to 104, the interaction between the video of the user and the virtual network game scene is realized. In addition, it is also possible to control the virtual network game scene according to the action information. In this example, client A recognizes the action of user A according to the video data of user A and transmits the action to the network server. In a practical application, in order to increase data processing efficiency, client A may instead transmit the obtained video data of user A to the network server, and the network server obtains the action information of user A according to the received video data. This example does not restrict which of the two methods is adopted. When transmitting the video data, the video data may also be encoded and compressed to increase network transmission efficiency; the present disclosure places no restriction on such processing.
  • Similarly, through the above steps 101-104, users B and C may realize the interaction between the video and the network game scene. Thus, the user's experience is improved. Each user may see real looks of other users participating in the network game on his/her local client. Further, each client may provide a function of self-exhibition to the user, i.e. the user may also see the real look of himself/herself on the local client.
  • In addition, in order to improve the user's experience, the method provided by various embodiments may further include: client A receives a trigger signal transmitted by user A, and captures a screen of the video of user B and a screen of the video of user C currently seen by user A to obtain a current screenshot X of user B and a current screenshot Y of user C, wherein the current screenshot X and the current screenshot Y respectively contain facial information of user B and facial information of user C. According to the facial information contained in the current screenshot X and the current screenshot Y, client A calculates a matching degree between the facial information of user B and the facial information of user C to obtain a matching degree between the two screenshots. The calculated result may be returned to each user to further improve the user's experience.
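  • The disclosure does not fix a metric for the matching degree, so the sketch below stands in with a normalized gray-scale histogram correlation between the two screenshots; any face-comparison method could be substituted.

```python
# Illustrative matching degree between two screenshots containing faces.
import cv2

def matching_degree(screenshot_x, screenshot_y):
    """Return a similarity score in [-1.0, 1.0] between two BGR screenshots."""
    hists = []
    for shot in (screenshot_x, screenshot_y):
        gray = cv2.cvtColor(shot, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
        cv2.normalize(hist, hist)
        hists.append(hist)
    return cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CORREL)
```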
  • In addition, in the method provided by various embodiments, in order to increase the correlation between the video and the network service in the virtual network game scene, the user may be sampled at a pre-defined sampling time when using the network service in the virtual network game scene, and it is determined whether a video meeting a pre-defined requirement is captured (in particular, whether the facial video of a particular user is captured). If yes, a positive response is returned to the user, e.g. the user is rewarded in the network game; otherwise, a negative response is returned to the user, e.g. the user is punished in the network game, such as being forbidden from performing a certain action in the network game.
  • In addition, if the user is sampled at the pre-defined sampling time when using the network service in the virtual network game scene, it is possible to determine whether the video meeting the pre-defined requirement is captured (in particular, whether the facial video of a particular user is captured). If yes, an exhibition value is set for the user and an accumulation plan may further be created. The larger the accumulated exhibition value, the more likely the user desires to exhibit himself/herself. Alternatively, the time that the user appears in the video may be accumulated (in particular, the time that the face of the user appears in the video); the longer the time, the more likely the user desires to exhibit himself/herself.
  • In addition, it is also possible to configure a facial detection feature point in advance. According to the captured facial video data of the current user and the facial detection feature point, it is determined whether a detection value corresponding to the facial feature point of the current user can be obtained. If yes, a positive response is returned to the user, e.g. the user is rewarded in the network game; otherwise, a negative response is returned, e.g. the user is punished in the network game, such as being forbidden from performing a certain action. For example, suppose the facial detection feature point configured in advance is the nose. Then it is possible to determine, according to the captured facial video data of user A, whether there is a detection feature point of the nose. If the detection value corresponding to the detection feature point can be obtained, the user is rewarded in the network game; otherwise, the user is forbidden from performing a certain action in the network game, e.g. the user is forbidden from continuing to play the network game, the user is punished with certain game scores, or the user is prompted to aim his/her face at the video capturing apparatus, etc.
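  • The sampling check, the exhibition-value accumulation and the feature-point check above can be folded into one routine; in the sketch below detect_feature_point is a hypothetical detector returning a detection value (or None), and the response strings stand in for game-server reward/punish logic.

```python
# A hedged sketch of sampling the user at pre-defined times.
class ExhibitionTracker:
    def __init__(self):
        self.exhibition_value = 0    # grows while the user's face is captured

    def sample(self, facial_video_data, detect_feature_point):
        """Called at each pre-defined sampling time with the captured facial
        video data, or None when no qualifying video was captured."""
        if facial_video_data is None:
            return "negative"        # punish, e.g. forbid an action in the game
        if detect_feature_point(facial_video_data) is None:
            return "negative"        # no detection value for the feature point
        self.exhibition_value += 1   # larger value: user self-exhibits more
        return "positive"            # reward the user in the network game
```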
  • In addition, in order to improve the user's experience, the client may also provide items, such as figure items, whose locations change along with the face of the user through the object tracing technique. For example, user A selects a pair of glasses. Accordingly, after receiving a trigger signal indicating that user A has selected the glasses, client A adds the glasses selected by user A to the video window. In the video window, whether user A lowers or raises his/her head, the glasses always follow the position of the face of user A. The items may be configured for the user in advance or added by the user from other locations; the various embodiments place no restriction on this.
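  • A minimal sketch of such a figure item, assuming the Haar face detection from the earlier sketch; pinning the glasses to the upper third of the detected face is an illustrative choice, and the item image is assumed to fit fully inside the frame.

```python
# Overlay a selected item so its position follows the user's face.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def overlay_item(frame, item_image):
    """Draw a figure item (e.g. glasses) over each detected face in a frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        item = cv2.resize(item_image, (w, h // 3))  # fit item to face width
        top = y + h // 4                            # pin near the eye line
        frame[top:top + h // 3, x:x + w] = item     # item follows the face
    return frame
```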
  • In addition, in order to improve the user's experience, various embodiments further enable the user to select one or more favorite objective video windows after seeing the video windows of other users. The user may display an expression picture or play an animation in the selected objective video windows to express his/her feeling or the action he/she wants to perform. For example, user A selects an expression picture of laughter and chooses to display it in the video window of user B. Accordingly, after receiving the selection instruction triggered by user A, client A displays the laughter picture in the video window of user B through the network server. The expression picture or animation may be configured for the user in advance or defined by the user himself/herself.
  • In addition, each user may give a virtual gift to another user after seeing the video image of the other user. An identifier is configured for each kind of virtual gift. For example, user A decides to give a virtual gift to user B after seeing the video image of user B (suppose there are two kinds of virtual gifts, wherein FFF denotes a flower and WWW denotes a drink). An animation of sending the gift may be played at the video window of user A and an animation of receiving the gift may be played at the video window of user B (or the animation may be played at only one end). Accordingly, the interaction of sending and receiving a gift may be realized through value transmission.
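  • Because each gift is just an identifier, the value transmission can be sketched as below; FFF and WWW follow the example above, while the catalog and the animation hook are illustrative.

```python
# A hedged sketch of gift interaction through identifier transmission.
GIFT_CATALOG = {"FFF": "flower", "WWW": "drink"}

def send_gift(sender, receiver, gift_id, play_animation):
    """Transmit a gift identifier and trigger the two window animations."""
    if gift_id not in GIFT_CATALOG:
        raise ValueError(f"unknown gift identifier: {gift_id}")
    play_animation(sender, "send", gift_id)       # at the sender's video window
    play_animation(receiver, "receive", gift_id)  # at the receiver's window
    return {"from": sender, "to": receiver, "gift": gift_id}
```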
  • In addition, the above example takes the interaction between the video and the network service in the virtual network game scene as an example. The example may also be applied to interaction between audio and the network service in the virtual network game scene, i.e., a client apparatus samples the voice of the user, recognizes the audio data to obtain the information that the user wants to express, and applies that information to the virtual network game scene or to the virtual person in the virtual network game scene. For example, client A obtains a "blink" command of user A, obtains the blink action through voice recognition and applies the blink action to the virtual person a of user A. For another example, client A obtains a "forward" command of user A, obtains the forward action through voice recognition and applies the forward action to the virtual person a of user A; the virtual person then performs the forward action in the virtual network game scene.
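  • Speech recognition itself is outside the scope of this sketch, so the function below starts from already-recognized text and maps it onto scene actions; the command table and the MOVE_F identifier are illustrative assumptions.

```python
# Map a recognized voice command onto the user's virtual person.
VOICE_COMMANDS = {
    "blink": "XX1",       # facial action, reusing the identifier above
    "forward": "MOVE_F",  # hypothetical identifier: avatar steps forward
}

def apply_voice_command(recognized_text, apply_user_action, user_id="A"):
    """Apply the information the user expressed by voice to the scene."""
    action_id = VOICE_COMMANDS.get(recognized_text.strip().lower())
    if action_id is not None:
        apply_user_action(user_id, action_id)  # e.g. the step-104 sketch above
```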
  • In view of the above, various embodiments provide a method for implementing interaction between a video and a virtual network scene, wherein the video includes but is not limited to image, voice, etc. FIG. 4 is a schematic diagram illustrating a user interaction process according to various embodiments. The face detection technique includes but is not limited to the low-level feature detection method based on gray-scale images in OpenCV, Haar feature detection methods, etc. In the method provided by various embodiments, a video window is embedded in the virtual network game scene. Thus, the interaction between the video and the virtual network scene is realized. In addition, in the method provided by various embodiments, the face detection technique, the motion analyzing technique and the object tracing technique are adopted to implement the interaction between the video and the virtual network scene. A video mix technique may also be adopted to improve the interaction by mixing the video with an animation.
  • In the above example, the virtual network game scene is taken as an example. The method provided by various embodiments may also be applied to other scenes such as a virtual network meeting scene. The terms, descriptions and figures used herein are set forth by way of illustration only and are not meant as limitations.
  • Example 2
  • Corresponding to the above method example, in order to increase the correlation between a video and a network service in the virtual network scene, implement the interaction between the video and the network service and improve the user's experience, various embodiments provide an apparatus for implementing the interaction between the video and the virtual network scene. The apparatus is to obtain video data of the user and display, on a client, a video corresponding to the video data in the virtual network scene by embedding the video in the virtual network scene; or, display the video corresponding to the video data in the virtual network scene on the client according to the video data. Here, displaying the video in the virtual network scene means that the video of the user floats over the virtual network scene when displayed. As shown in FIG. 5, the apparatus includes:
  • a collecting module 501, to collect video data of the current user;
  • a displaying module 502, to display a video of the current user on clients of the current user and other users according to the video data of the current user;
  • a recognizing module 503, to recognize action information of the current user according to the video data of the current user; and
  • an interacting module 504, to apply the action information of the current user recognized by the recognizing module 503 to the virtual network scene to implement the interaction between the video of the current user and the virtual network scene.
  • The interacting module 504 includes:
  • a mapping unit, to map a recognized action of the current user to a virtual person of the current user in the virtual network scene according to the action information of the current user recognized by the recognizing module 503 and a mapping relationship between the current user and the virtual person in the virtual network scene; and
  • a controlling unit, to control the virtual network scene according to the action information of the current user recognized by the recognizing module 503.
  • The recognizing module 503 includes:
  • a first recognizing unit, to capture facial video data of the current user according to the video data of the current user collected by the collecting module 501 and recognize facial action information of the current user through a face detecting technique; and/or
  • a second recognizing unit, to capture action video data of the current user according to the video data of the current user collected by the collecting module 501 and recognize body action information of the current user through motion analyzing and object tracing techniques.
  • The apparatus further includes:
  • a first determining module, to determine, at a pre-defined collecting time, whether the video data of the current user meeting a pre-defined requirement is captured;
  • a first rewarding and punishing module, to return a positive response to the current user when the first determining module determines that the video data of the current user meeting the pre-defined requirement is captured, and to return a negative response to the current user when the first determining module determines that such video data is not captured.
  • When the recognizing module 503 includes the first recognizing unit, the apparatus further includes:
  • a first rewarding module, to accumulate the time during which the facial video data of the current user can be captured, and to reward the current user according to the accumulated time; and/or
  • a second determining module, to obtain a detection value corresponding to a face detection feature point of the current user according to the facial video data of the current user and the face detection feature point defined in advance, and to return a positive or negative response to the current user according to the detection value.
  • Further, in order to improve the user's experience, the apparatus may further include:
  • a receiving module, to receive a virtual item adding signal transmitted by the current user;
  • a selecting module, to select an item that the current user wants to add after the item adding signal is received by the receiving module.
  • Accordingly, the displaying module 502 may display the collected video data of the current user and the item that the current user wants to add on the clients of the current user and other users.
  • When the item is displayed by the displaying module 502, the position of the item changes along with the position of the face of the current user.
  • In addition, in order to improve the user's experience, the apparatus may further include:
  • a screen capturing module, to receive a capture-screen signal triggered by the current user, and capture a screen of videos of at least two users displayed on the client of the current user to obtain at least two screenshots, wherein the screenshots include facial information of the users;
  • a processing module, to calculate a matching degree between the facial information of the at least two users according to the facial information of the users included in the at least two screenshots obtained by the screen capturing module, and return a calculated result to each user.
  • In practical applications, the modules provided by the examples for implementing interaction between the video and the virtual network scene may be located in the same apparatus (for example, the collecting module, the displaying module, the recognizing module and the interacting module may be located in the client) or may be located in different apparatuses (for example, the collecting module and the displaying module are located in the client, whereas the recognizing module and the interacting module are located in the server). The modules provided by the above example may be integrated into one module according to a requirement. Each module in the above example may also be divided into several sub-modules.
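  • Purely for illustration, the composition of modules 501-504 might be wired as below; each module is reduced to a callable and the bodies are placeholders, not the claimed implementation.

```python
# A structural sketch of the apparatus in FIG. 5.
class Apparatus:
    def __init__(self, collect, display, recognize, interact):
        self.collect = collect        # collecting module 501
        self.display = display        # displaying module 502
        self.recognize = recognize    # recognizing module 503
        self.interact = interact      # interacting module 504

    def run_once(self):
        video_data = self.collect()
        self.display(video_data)                  # show video on the clients
        action_info = self.recognize(video_data)  # facial and/or body actions
        self.interact(action_info)                # map to avatar / control scene
```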
  • In view of the above, various embodiments provide an apparatus for implementing the interaction between the video and the virtual network scene. The apparatus is able to implement this interaction utilizing the face detecting technique, the motion analyzing and object tracing techniques and the action capturing technique. Interaction between users may be improved by mixing the video with animations utilizing video mix techniques.
  • Example 3
  • Corresponding to the above example, in order to increase the correlation between the video and the network service in the virtual network game scene, implement interaction between the video and the network service and improve the user's experience, various embodiments provide a system for implementing interaction between the video and the virtual network scene. As shown in FIG. 6, the system includes: a server 601 to implement interaction between the video and the virtual network scene, and a plurality of clients 602 to implement the interaction between the video and the virtual network scene.
  • Each of the clients 602 is to collect video data of a current user, display a video of the current user on clients 602 of the current user and other users according to the video data of the current user, obtain action information of the current user according to the collected video data of the current user, apply the action information of the current user to the virtual network scene to implement the interaction between the video of the current user and the virtual network scene.
  • The server 601 is to forward the video data of the clients 602, and control the virtual network scene after forwarding virtual scene control information between the clients 602.
  • Optionally, besides being forwarded by the server 601, the video data and the virtual scene control information may also be transmitted between the clients 602 in a P2P manner. The video data and the virtual network data (e.g. virtual network game data) may be transmitted separately.
  • In particular, considering a practical application, in order to improve transmission efficiency of the network and save network transmission bandwidth, the system may include a client, a video server and a virtual scene server.
  • The client is to collect the video data of the current user, display the video of the current user and display videos of other users, obtain action information of the current user according to the collected video data of the current user, and display the virtual network scene.
  • The video server is to receive the video data collected by the client, forward the video data, receive the action information obtained by the client and apply the action information to the virtual network scene through the virtual scene server to implement the interaction between the video and the virtual network scene.
  • The virtual scene server is to execute the procedure of the virtual network scene and apply the action information obtained by the video server to the virtual network scene to implement the interaction between the video and the virtual network scene. If the virtual scene is a network game, the virtual scene server is a game server.
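  • The division of labor between the three roles might be sketched as follows; the transports, method names and the peers argument are illustrative only.

```python
# A message-flow sketch of the client / video server / virtual scene server split.
class VirtualSceneServer:            # a game server when the scene is a game
    def apply_action(self, user_id, action_info):
        print(f"scene applies {action_info} to the virtual person of {user_id}")

class VideoServer:
    def __init__(self, scene_server):
        self.scene_server = scene_server

    def on_client_upload(self, user_id, video_data, action_info, peers):
        for peer in peers:           # forward the video data to other clients
            peer.show_video(user_id, video_data)
        # hand the action information to the virtual scene server
        self.scene_server.apply_action(user_id, action_info)
```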
  • With respect to a current popular P2P network structure, the system provided by various embodiments may include:
  • a client, to collect the video data of the current user, display the video of the current user and display videos of other users, recognize action information of the current user according to the collected video data of the current user, and display the virtual network scene;
  • a video server, to receive the video data collected by the client, forward the video data, receive the action information recognized by the client and apply the action information to the virtual network scene through a virtual scene server to implement the interaction between the video and the virtual network scene;
  • the virtual scene server, to execute the procedure of the virtual network scene and apply the action information obtained by the video server to the virtual network scene to implement the interaction between the video and the virtual network scene, wherein if the virtual scene is a network game, the virtual scene server is a game server; and
  • a P2P server, to back up data for the video server and the virtual scene server to implement fault recovery and backup functions.
  • Various embodiments have no restriction to the architecture of the system. Any solution which implements interaction between the video and the virtual network scene by combining the video and the virtual network scene is within the protection scope of the present disclosure.
  • In view of the above, various embodiments provide a system for implementing interaction between the video and the virtual network scene. The system provided by the various embodiments is able to implement the interaction between the video and the virtual network scene through embedding a video window in the virtual network game scene. The system provided by various embodiments is able to implement the interaction between the video and the virtual network scene utilizing the face detection technique, the motion analyzing and object tracing techniques and the action capturing technique. Interactions between users may be improved by a video mix technique.
  • The technical solution provided by various embodiments is able to meet users' requirements for the network service in the virtual network scene, enable the user to see real identities and real looks of other persons participating in the network service while enjoying the network service in the virtual network scene, and enable other users to see the user himself/herself during the network service. The correlation between the video and the network service in the virtual network scene is increased, and the interaction between the video and the network service is realized.
  • The term "receiving" in the various embodiments may be understood as actively acquiring information from other modules or passively receiving information sent by other modules.
  • The drawings are example schematic diagrams. Not all the modules and flows in the drawings are necessary for implementing the various embodiments.
  • The modules in the apparatus provided by various embodiments may be arranged in the apparatus provided by various embodiments according to the description of the embodiment, or may be changed accordingly to be located in one or more apparatuses of the embodiment. The modules may be integrated into one module or may be divided into several sub-modules.
  • The above sequence numbers of the examples are merely used for facilitating the description and do not denote preference among the examples.
  • Example 4
  • Some steps in various embodiments may be implemented by software programs stored in a computer-readable storage medium, e.g. CD or hard disk.
  • Corresponding to the above examples, in order to increase the correlation between the video and the network service in the virtual network scene, implement interaction between the video and the network service and improve the user's experience, an example of the present disclosure provides a computer-readable storage medium which stores computer programs that enable one or more processors to execute the following steps:
  • obtaining video data of a user, displaying a video corresponding to the video data on a client by embedding the video in the virtual network scene; or, displaying the video corresponding to the video data in the virtual network scene of the client; or, controlling or affecting the virtual network scene according to the video data.
  • The programs further enable the one or more processors to execute the following steps:
  • collecting the video data of the current user, displaying the video of the current user on clients of the current user and other users; recognizing action information of the current user according to the video data of the current user, and applying the action information of the current user to the virtual network scene to implement the interaction between the video and the virtual network scene.
  • The programs further enable the one or more processors to execute the following steps:
  • capturing facial video data of the current user according to the video data of the current user through a face detecting technique, recognizing face action information of the user; and/or, capturing action video data of the current user according to the video data of the current user through motion analyzing and object tracing techniques, and recognizing body action information of the current user.
  • The programs further enable the one or more processors to execute the following steps:
  • mapping the action of the current user to a virtual person of the current user in the virtual network scene according to the action information of the current user and a pre-defined mapping relationship between the current user and the virtual person in the virtual network scene; and/or, controlling the virtual network scene according to the action information of the current user.
  • The foregoing description of the embodiments has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the disclosure.

Claims (28)

1. A method for implementing interaction between a video and a virtual network scene, comprising:
obtaining video data of a user;
displaying, on a client, a video corresponding to the video data in the virtual network scene.
2. The method of claim 1, further comprising:
recognizing action information of the user according to the collected video data of the user; and
applying the action information of the user on the virtual network scene.
3. The method of claim 2, wherein the recognizing the action information of the user according to the collected video data of the user comprises:
capturing facial video data of the user according to the collected video data of the user and recognizing facial action information of the user utilizing a face detecting technique; and/or
capturing action video data of the user according to collected video data of the user and recognizing body action information of the user utilizing motion analyzing and object tracing techniques.
4. The method of claim 2, wherein the applying the action information of the user on the virtual network scene comprises:
mapping an action of the user to a virtual person in the virtual network scene according to the action information of the user and a pre-defined mapping relationship between the user and the virtual person in the virtual network scene; and/or
controlling the virtual network scene according to the action information of the user.
5. The method of claim 2, further comprising:
determining, at a pre-defined collecting time, whether the video data of the user meeting a pre-defined requirement is captured, if the video data is captured, returning a positive response to the user; otherwise, returning a negative response to the user.
6. The method of claim 5, wherein the determining whether the video data of the user meeting the pre-defined requirement is captured comprises:
determining whether facial video data of the user is captured, if the facial video data of the user is captured, determining that the video data of the user meeting the pre-defined requirement is captured; otherwise, determining that the video data of the user meeting the pre-defined requirement is not captured.
7. The method of claim 3, further comprising:
when recognizing the facial action information of the user according to the facial video data of the user utilizing the face detecting technique,
accumulating the time that the facial video data of the user can be captured according to the facial video data of the user, rewarding the user according to the time accumulated; and/or
obtaining detection values of facial detecting feature points of the user according to the facial video data of the user and pre-defined facial detecting feature points; and returning a positive or negative response to the user according to the detection values.
8. The method of claim 5, wherein the returning the positive response to the user comprises: rewarding the user;
the returning the negative response to the user comprises: forbidding the user to use the virtual network scene.
9. The method of claim 2, further comprising:
receiving a virtual item adding signal transmitted by the user, selecting a virtual item that the user wants to add;
after displaying the collected video of the user, displaying the video data of the user and the virtual item that the user wants to add on the client.
10. The method of claim 9, wherein when the virtual item that the user wants to add is displayed, the virtual item moves with the position of the face of the user synchronously.
11. The method of claim 2, further comprising:
receiving a capture-screen signal triggered by the user;
capturing a screen of videos of at least two users displayed on the client of the user to obtain at least two screenshots, wherein the screenshots comprise facial information of the at least two users;
calculating a matching degree between the facial information of the at least two users according to the facial information comprised in the at least two screenshots, and returning a calculated result to the at least two users.
12. The method of claim 1, wherein the displaying the video corresponding to the video data in the virtual network scene comprises:
displaying the video corresponding to the video data by embedding the video in the virtual network scene; or
displaying the video corresponding to the video data in a window floating on the virtual network scene.
13. An apparatus for implementing interaction between a video and a virtual network scene, comprising:
a collecting module, to collect video data of a user;
a displaying module, to display a video corresponding to the video data of the user in a virtual network scene on a client of the user.
14. The apparatus of claim 13, further comprising:
a recognizing module, to recognize action information of the user according to the video data of the user collected by the collecting module, and
an interacting module, to apply the action information of the user recognized by the recognizing module to the virtual network scene.
15. The apparatus of claim 14, wherein the recognizing module comprises:
a first recognizing unit, to capture facial video data of the user according to the video data of the user collected by the collecting module and recognize facial action information of the user utilizing a face detecting technique; and/or
a second recognizing unit, to capture action video data of the user according to the video data of the user collected by the collecting module and recognize body action information of the user utilizing motion analyzing and object tracing techniques.
16. The apparatus of claim 14, wherein the interacting module comprises:
a mapping unit, to map an action of the user to a virtual person of the user in the virtual network scene according to the action information of the user recognized by the recognizing module and a pre-defined mapping relationship between the user and the virtual person in the virtual network scene;
a controlling unit, to control the virtual network scene according to the action information of the user recognized by the recognizing module.
17. The apparatus of claim 16, further comprising:
a first determining module, to determine whether video data of the user meeting a pre-defined requirement is captured at a pre-defined collecting time; and
a first rewarding and punishing module, to return a positive response to the user when the determining module determines that the video data of the user meeting the pre-defined requirement is captured and return a negative response to the user when the determining module determines that the video data of the user meeting the pre-defined requirement is not captured.
18. The apparatus of claim 16, wherein when the recognizing module comprises the first recognizing unit, the apparatus further comprises:
a first rewarding module, to accumulate the time that the facial video data of the user can be captured according to the facial video data of the user captured by the capturing module, and reward the user according to the time accumulated; and/or
a second determining module, to obtain a detection value of a facial detecting feature point of the user according to the facial video data of the user and the pre-defined facial detecting feature point, and return the positive or negative response to the user according to the detection value.
19. The apparatus of claim 14, further comprising:
a receiving module, to receive a virtual item adding signal transmitted by the user;
a selecting module, to select an item that the user wants to add after the receiving module receives the item adding signal;
the displaying module is further to display the video data of the user and the item that the user wants to add on the client of the user.
20. The apparatus of claim 19, wherein when the displaying module displays the item, the item moves with the position of the face of the user synchronously.
21. The apparatus of claim 14, further comprising:
a screen capturing module, to receive a capture-screen signal triggered by the user, capture a screen of videos of at least two users displayed on the client of the user to obtain at least two screenshots, wherein the screenshots comprise facial information of the at least two users;
a processing module, to calculate a matching degree between the facial information of the at least two users according to the facial information comprised in the at least two screenshots obtained by the screen capturing module, and return a calculated result to the user.
22. The apparatus of claim 13, wherein the displaying module is to display the video corresponding to the video data by embedding the video in the virtual network scene, or display the video corresponding to the video data in a window floating on the virtual network scene.
23. A system for implementing interaction between a video and a virtual network scene, comprising: a server to implement the interaction between the video and the virtual network scene, at least two clients to implement interaction between the video and the virtual network scene, wherein
each of the at least two clients is to collect video data of a user, display a video of the user in the virtual network scene on the client according to the video data of the user, recognize action information of the user according to the video data of the user, apply the action information of the user on the virtual network scene;
the server is to forward the video data between the clients, and control the virtual network scene after forwarding virtual network scene control information between the clients.
24. A computer-readable storage medium, comprising computer instructions executable by one or more processors, wherein the computer instructions enable the one or more processors to:
obtain video data of a user,
display, on a client, a video corresponding to the video data in the virtual network scene.
25. The computer readable storage medium of claim 24, further comprising computer instructions that enable the one or more processors to:
recognize action information of the user according to the video data of the user,
apply the action information of the user on the virtual network scene.
26. The computer readable storage medium of claim 25, further comprising computer instructions that enable the one or more processors to:
capture facial video data of the user according to the video data of the user and recognize face action information of the user through a face detecting technique; and/or,
capture action video data of the user according to the video data of the user and recognize body action information of the user through motion analyzing and object tracing techniques.
27. The computer readable storage medium of claim 25, further comprising computer instructions that enable the one or more processors to:
map the action of the user to a virtual person of the user in the virtual network scene according to the action information of the user and a pre-defined mapping relationship between the user and the virtual person in the virtual network scene; and/or,
control the virtual network scene according to the action information of the user.
28. The computer readable storage medium of claim 24, further comprising computer instructions that enable the one or more processors to:
display the video corresponding to the video data by embedding the video in the virtual network scene; or
display the video corresponding to the video data in a window floating on the virtual network scene.
US13/334,765 2009-06-23 2011-12-22 Method, Apparatus And System For Implementing Interaction Between A Video And A Virtual Network Scene Abandoned US20120092475A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN200910150595.8 2009-06-23
CN200910150595.8A CN101930284B (en) 2009-06-23 2009-06-23 Method, device and system for implementing interaction between video and virtual network scene
PCT/CN2010/072993 WO2010148848A1 (en) 2009-06-23 2010-05-20 Method, device and system for enabling interaction between video and virtual network scene

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2010/072993 Continuation WO2010148848A1 (en) 2009-06-23 2010-05-20 Method, device and system for enabling interaction between video and virtual network scene

Publications (1)

Publication Number Publication Date
US20120092475A1 true US20120092475A1 (en) 2012-04-19

Family

ID=43354281

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/648,209 Active 2032-10-30 US9247201B2 (en) 2009-06-23 2009-12-28 Methods and systems for realizing interaction between video input and virtual network scene
US13/334,765 Abandoned US20120092475A1 (en) 2009-06-23 2011-12-22 Method, Apparatus And System For Implementing Interaction Between A Video And A Virtual Network Scene

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12/648,209 Active 2032-10-30 US9247201B2 (en) 2009-06-23 2009-12-28 Methods and systems for realizing interaction between video input and virtual network scene

Country Status (6)

Country Link
US (2) US9247201B2 (en)
EP (1) EP2448200A4 (en)
CN (1) CN101930284B (en)
BR (1) BRPI1015566A2 (en)
RU (1) RU2518940C2 (en)
WO (1) WO2010148848A1 (en)

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103873492B (en) * 2012-12-07 2019-01-15 联想(北京)有限公司 A kind of electronic equipment and data transmission method
CN104104703B (en) * 2013-04-09 2018-02-13 广州华多网络科技有限公司 More people's audio-video-interactive method, client, server and systems
JP6070584B2 (en) * 2014-01-17 2017-02-01 ソニー株式会社 Information processing apparatus, information processing method, and program
TWI549728B (en) * 2014-06-27 2016-09-21 國立臺中科技大學 Management method for detecting multi-users' motions and system thereof
CN105472300B (en) * 2014-09-10 2019-03-15 易珉 Video interaction method, system and device
CN105472271A (en) * 2014-09-10 2016-04-06 易珉 Video interaction method, device and system
CN105472299B (en) * 2014-09-10 2019-04-26 易珉 Video interaction method, system and device
CN105472298B (en) * 2014-09-10 2019-04-23 易珉 Video interaction method, system and device
CN105472301B (en) * 2014-09-10 2019-03-15 易珉 Video interaction method, system and device
CN105396289A (en) * 2014-09-15 2016-03-16 掌赢信息科技(上海)有限公司 Method and device for achieving special effects in process of real-time games and multimedia sessions
CN106502554B (en) * 2015-09-08 2021-09-17 腾讯科技(深圳)有限公司 Display control method and device
GB2554633B (en) 2016-06-24 2020-01-22 Imperial College Sci Tech & Medicine Detecting objects in video data
CN105959718A (en) * 2016-06-24 2016-09-21 乐视控股(北京)有限公司 Real-time interaction method and device in video live broadcasting
CN106162369B (en) * 2016-06-29 2018-11-16 腾讯科技(深圳)有限公司 It is a kind of to realize the method, apparatus and system interacted in virtual scene
CN106375775B (en) * 2016-09-26 2020-12-11 广州华多网络科技有限公司 Virtual gift display method and device
CN107741809B (en) * 2016-12-21 2020-05-12 腾讯科技(深圳)有限公司 Interaction method, terminal, server and system between virtual images
CN106937154A (en) * 2017-03-17 2017-07-07 北京蜜枝科技有限公司 Process the method and device of virtual image
CN110036412A (en) * 2017-05-16 2019-07-19 苹果公司 Emoticon is recorded and is sent
DK180007B1 (en) 2017-05-16 2020-01-16 Apple Inc. RECORDING AND SENDING EMOJI
DK179948B1 (en) 2017-05-16 2019-10-22 Apple Inc. Recording and sending Emoji
GB201710840D0 (en) * 2017-07-05 2017-08-16 Jones Maria Francisca Virtual meeting participant response indication method and system
CN108134945B (en) * 2017-12-18 2021-03-19 阿里巴巴(中国)有限公司 AR service processing method, AR service processing device and terminal
DK201870374A1 (en) 2018-05-07 2019-12-04 Apple Inc. Avatar creation user interface
CN109325450A (en) * 2018-09-25 2019-02-12 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
US11107261B2 (en) 2019-01-18 2021-08-31 Apple Inc. Virtual avatar animation based on facial feature movement
WO2020147794A1 (en) * 2019-01-18 2020-07-23 北京市商汤科技开发有限公司 Image processing method and apparatus, image device and storage medium
CN111460871B (en) 2019-01-18 2023-12-22 北京市商汤科技开发有限公司 Image processing method and device and storage medium
CN110059661B (en) * 2019-04-26 2022-11-22 腾讯科技(深圳)有限公司 Action recognition method, man-machine interaction method, device and storage medium
CN110139115B (en) * 2019-04-30 2020-06-09 广州虎牙信息科技有限公司 Method and device for controlling virtual image posture based on key points and electronic equipment
CN111314730A (en) * 2020-02-25 2020-06-19 广州华多网络科技有限公司 Virtual resource searching method, device, equipment and storage medium for live video
CN111355988B (en) * 2020-03-31 2022-11-11 苏州科达科技股份有限公司 Business disaster recovery method, equipment and readable storage medium
JP2023529126A (en) 2020-06-08 2023-07-07 アップル インコーポレイテッド Presentation of avatars in a 3D environment
CN112511739B (en) * 2020-11-20 2022-05-06 上海盛付通电子支付服务有限公司 Interactive information generation method and equipment
CN113271425A (en) * 2021-04-19 2021-08-17 瑞芯微电子股份有限公司 Interaction system and method based on virtual equipment


Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09107534A (en) 1995-10-11 1997-04-22 Canon Inc Video conference equipment and video conference system
US6124862A (en) 1997-06-13 2000-09-26 Anivision, Inc. Method and apparatus for generating virtual views of sporting events
JP2000115521A (en) 1998-10-05 2000-04-21 Canon Inc Information processor and picture data processing method
US6414707B1 (en) * 1998-10-16 2002-07-02 At&T Corp. Apparatus and method for incorporating virtual video conferencing environments
US6754276B1 (en) 1999-09-20 2004-06-22 Matsushita Electric Industrial Co., Ltd. System stream creating apparatus which adjusts system clock reference based on total number of pictures to be stored and decoded during certain time period
JP2002007294A (en) 2000-06-22 2002-01-11 Canon Inc System and method for image distribution, and storage medium
US20040049547A1 (en) * 2001-12-21 2004-03-11 Matthews W. Donald Methods for providing information over networks responsive to digital device user requests
KR20030021197A (en) 2003-02-05 2003-03-12 김성태 The Image Simulation for the plastic consulting
US7499974B2 (en) 2003-09-30 2009-03-03 International Business Machines Corporation Instant message user management
WO2006025137A1 (en) * 2004-09-01 2006-03-09 Sony Computer Entertainment Inc. Image processor, game machine, and image processing method
US20060092178A1 (en) 2004-10-29 2006-05-04 Tanguay Donald O Jr Method and system for communicating through shared media
CN1328908C (en) * 2004-11-15 2007-07-25 北京中星微电子有限公司 A video communication method
US7676063B2 (en) 2005-03-22 2010-03-09 Microsoft Corp. System and method for eye-tracking and blink detection
KR100651206B1 (en) 2005-07-29 2006-12-01 주식회사 팬택 System and method for picture-taking of cellular phone using the face recognition algorithm
RU2284571C1 (en) 2005-08-25 2006-09-27 Максим Алексеевич Мишин Method for setting up interactive game for remote users
US20070203911A1 (en) 2006-02-07 2007-08-30 Fu-Sheng Chiu Video weblog
CN101098241A (en) 2006-06-26 2008-01-02 腾讯科技(深圳)有限公司 Method and system for implementing virtual image
CN101068314A (en) * 2006-09-29 2007-11-07 腾讯科技(深圳)有限公司 Network video frequency showing method and system
DE102007006847A1 (en) 2007-02-12 2008-08-14 Voice Trust Ag Digital method and arrangement for authentication of a user of a telecommunications or data network
CN100592783C (en) 2007-03-23 2010-02-24 腾讯科技(深圳)有限公司 A video communication system and method
CN101350845B (en) * 2007-07-20 2012-05-09 中兴通讯股份有限公司 Method for simulating talking scene of mobile phone visible telephone
CN101219029A (en) 2008-01-21 2008-07-16 胡清 Video terminal, display method and storage medium for providing buddhism bodhimandala scene
US9445045B2 (en) * 2008-06-23 2016-09-13 Alcatel Lucent Video conferencing device for a communications device and method of manufacturing and using the same
US20100064010A1 (en) * 2008-09-05 2010-03-11 International Business Machines Corporation Encouraging user attention during presentation sessions through interactive participation artifacts

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6278466B1 (en) * 1998-06-11 2001-08-21 Presenter.Com, Inc. Creating animation from a video
US20080141282A1 (en) * 2000-09-19 2008-06-12 Technion Research & Development Foundation Ltd. Control of interactions within virtual environmetns
US20030006957A1 (en) * 2000-11-10 2003-01-09 Victor Colantonio Method and system for automatically covering video display of sensitive information
US20040260823A1 (en) * 2003-06-17 2004-12-23 General Instrument Corporation Simultaneously transporting multiple MPEG-2 transport streams
US20060155642A1 (en) * 2004-08-19 2006-07-13 Leadpoint, Inc. Ranking system using instant post-transaction surveying of transaction judges
US20060210165A1 (en) * 2005-03-03 2006-09-21 Fuji Photo Film Co., Ltd. Image extracting apparatus, image extracting method, and image extracting program
US20060251383A1 (en) * 2005-05-09 2006-11-09 Microsoft Corporation Automatic video editing for real-time generation of multiplayer game show videos
US20070242066A1 (en) * 2006-04-14 2007-10-18 Patrick Levy Rosenthal Virtual video camera device with three-dimensional tracking and virtual object insertion
US8601379B2 (en) * 2006-05-07 2013-12-03 Sony Computer Entertainment Inc. Methods for interactive communications with real time effects and avatar environment interaction
US20080088698A1 (en) * 2006-10-11 2008-04-17 Cisco Technology, Inc. Interaction based on facial recognition of conference participants
US20080227546A1 (en) * 2007-03-12 2008-09-18 Roberts Thomas J Feedback controller
US20090079816A1 (en) * 2007-09-24 2009-03-26 Fuji Xerox Co., Ltd. Method and system for modifying non-verbal behavior for social appropriateness in video conferencing and other computer mediated communications
US20090098939A1 (en) * 2007-10-15 2009-04-16 Hamilton Ii Rick A Systems and methods for compensating participants of virtual environments
US20090182889A1 (en) * 2008-01-15 2009-07-16 Move Networks, Inc. System and method of managing multiple video players
US20090245747A1 (en) * 2008-03-25 2009-10-01 Verizon Data Services Llc Tv screen capture
US20100295771A1 (en) * 2009-05-20 2010-11-25 Microsoft Corporation Control of display objects

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Egocentric Navigation for Video Surveillance in 3D Virtual Environments, Haan et al., March 2009 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9084014B2 (en) 2013-02-20 2015-07-14 Samsung Electronics Co., Ltd. Method of providing user specific interaction using device and digital television(DTV), the DTV, and the user device
US9432738B2 (en) 2013-02-20 2016-08-30 Samsung Electronics Co., Ltd. Method of providing user specific interaction using device and digital television (DTV), the DTV, and the user device
US9848244B2 (en) 2013-02-20 2017-12-19 Samsung Electronics Co., Ltd. Method of providing user specific interaction using device and digital television (DTV), the DTV, and the user device
US9353476B2 (en) 2014-04-18 2016-05-31 Georgia-Pacific Containerboard Llc Method for recycling waste material with reduced odor emission
US20190335161A1 (en) * 2016-06-17 2019-10-31 Alexandre COURTÈS Image capture method and system using a virtual sensor
US11178387B2 (en) * 2016-06-17 2021-11-16 Alexandre COURTÈS Image capture method and system using a virtual sensor
US10332317B2 (en) 2016-10-25 2019-06-25 Microsoft Technology Licensing, Llc Virtual reality and cross-device experiences
WO2020042786A1 (en) * 2018-08-27 2020-03-05 阿里巴巴集团控股有限公司 Interactive method and device based on augmented reality
JP2022526512A (en) * 2019-11-28 2022-05-25 北京市商▲湯▼科技▲開▼▲發▼有限公司 Interactive object drive methods, devices, equipment, and storage media

Also Published As

Publication number Publication date
CN101930284A (en) 2010-12-29
EP2448200A4 (en) 2014-01-29
EP2448200A1 (en) 2012-05-02
BRPI1015566A2 (en) 2016-08-16
US20100322111A1 (en) 2010-12-23
US9247201B2 (en) 2016-01-26
CN101930284B (en) 2014-04-09
WO2010148848A1 (en) 2010-12-29
RU2012101502A (en) 2013-07-27
RU2518940C2 (en) 2014-06-10

Similar Documents

Publication Publication Date Title
US20120092475A1 (en) Method, Apparatus And System For Implementing Interaction Between A Video And A Virtual Network Scene
Wu et al. A dataset for exploring user behaviors in VR spherical video streaming
JP6369462B2 (en) Client device, control method, system, and program
US8874673B2 (en) Mobile terminal, server, and method for establishing communication channel using augmented reality (AR)
JP5492077B2 (en) Method and system for improving the appearance of a person on an RTP stream coming from a media terminal
EP3007452A1 (en) Display controller, display control method, and computer program
US20140192136A1 (en) Video chatting method and system
CN108702484A (en) Communication event
WO2017072534A2 (en) Communication system and method
US20220171960A1 (en) Reaction track for different spectator groups within an audience
CN115883853B (en) Video frame playing method, device, equipment and storage medium
KR20160139633A (en) An system and method for providing experiential contents using augmented reality
CN110716634A (en) Interaction method, device, equipment and display equipment
JP7202935B2 (en) Attention level calculation device, attention level calculation method, and attention level calculation program
CN113392690A (en) Video semantic annotation method, device, equipment and storage medium
CN110716641B (en) Interaction method, device, equipment and storage medium
CN114463470A (en) Virtual space browsing method and device, electronic equipment and readable storage medium
KR20160116677A (en) Method and computer program for consulting fitness, and consultant device
KR101535574B1 (en) System and method for providing social network emoticon using 3d character
Subramanyam et al. Evaluating the impact of tiled user-adaptive real-time point cloud streaming on vr remote communication
JP2024016017A (en) Information processing system, information processing device and program
KR101939130B1 (en) Methods for broadcasting media contents, methods for providing media contents and apparatus using the same
CN109525483A (en) The generation method of mobile terminal and its interactive animation, computer readable storage medium
CN110990607B (en) Method, apparatus, server and computer readable storage medium for screening game photos
CN114425162A (en) Video processing method and related device

Legal Events

Date Code Title Description
AS Assignment

Owner name: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED, CHI

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LI, ZHUANKE;REEL/FRAME:027434/0157

Effective date: 20111201

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION