US20130176302A1 - Virtual space moving apparatus and method - Google Patents

Virtual space moving apparatus and method

Info

Publication number
US20130176302A1
Authority
US
United States
Prior art keywords
virtual space
subject
movement
motion
accelerated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/739,693
Inventor
Moon-sik Jeong
Ivan Koryakovskiy
Sang-keun Jung
Kumar Nipun
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd
Assigned to SAMSUNG ELECTRONICS CO., LTD. (Assignment of Assignors Interest; see document for details). Assignors: JEONG, MOON-SIK; JUNG, SANG-KEUN; KORYAKOVSKIY, IVAN; NIPUN, KUMAR
Publication of US20130176302A1
Priority to US15/901,299 (published as US10853966B2)
Legal status: Abandoned

Classifications

    • H04N 5/222: Studio circuitry; studio devices; studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; cameras specially adapted for the electronic generation of special effects
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 2207/10028: Range image; depth image; 3D point clouds
    • G06T 2207/20044: Skeletonization; medial axis transform
    • A63F 13/42: Processing input control signals of video game devices by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F 13/57: Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game

Abstract

Provided is a virtual space moving apparatus and method. To this end, a Three-Dimensional (3D) image corresponding to a subject is skeletonized to generate skeletal data. Object data corresponding to the skeletal data is generated and mapped in a virtual space, and the mapped object data is displayed in the virtual space. Upon selection of an accelerated-movement mode for virtual space movement, motion of the subject is determined, and the object data is moved in the virtual space at a movement ratio previously set corresponding to the determined motion of the subject, such that a user can move as desired in the virtual space without being limited by a real space.

Description

    PRIORITY
  • This application claims priority under 35 U.S.C. §119(a) to a Korean Patent Application filed in the Korean Intellectual Property Office on Jan. 11, 2012 and assigned Serial No. 10-2012-0003432, the contents of which are incorporated herein by reference.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates generally to a virtual space moving apparatus and method, and more particularly, to a virtual space moving apparatus and method in which a user can move as desired in a virtual space, regardless of the size of the space afforded to the user for the movement.
  • 2. Description of the Related Art
  • Virtual reality systems have been developed in response to the growing need for simulators that allow a user to experience, in Three-Dimensional (3D) virtual reality, situations closely resembling real life. A virtual reality system may be used to provide a high sensation of reality in an electronically formed environment, such as a virtual reality game or a 3D game.
  • The virtual reality system enables the user to experience sensory inputs, such as visual, auditory, and tactile stimuli, variously formed in a virtually created space. These sensory inputs reproduce the sensory experiences of a virtual world to provide various sensations of reality.
  • Although a virtual environment is designed to reproduce the real world as faithfully as possible, it can also be more convenient to experience than the real world: a user in virtual reality can jump from one place in the virtual space to another, unlike in the real world, where movement must be continuous over time.
  • For example, a virtual space apparatus including a 3D camera renders the user's 3D image output from the 3D camera into object data in a virtual space that is one-to-one size-mapped to the real space in which the user moves, and displays the object data on a screen. Thus, the user can move in the virtual space.
  • As such, when the user moves in the virtual space in the conventional art, the object data corresponding to the user moves in the virtual space, which is one-to-one mapped to the real space, by the user's moving distance in the real space.
  • However, due to the limited size of the real space in which the user can move, the user in the conventional art has difficulty moving as desired in the virtual space.
  • SUMMARY OF THE INVENTION
  • Accordingly, the present invention provides a virtual space moving apparatus and method that allows a user to move as desired in a virtual space without being limited by a real space in which the user can move.
  • According to an aspect of the present invention, there is provided a virtual space moving apparatus including a skeletonization unit for skeletonizing a 3D image corresponding to a subject to generate skeletal data, a skeletal data processor for generating object data corresponding to the skeletal data and mapping the object data in a virtual space, and a Graphic User Interface (GUI) for displaying the mapped object data in the virtual space, in which upon selection of an accelerated-movement mode for virtual space movement, the skeletal data processor determines motion of the subject and moves the object data in the virtual space at a movement ratio previously set corresponding to the determined motion of the subject.
  • According to another aspect of the present invention, there is provided a virtual space moving method in a virtual space moving apparatus, the virtual space moving method including skeletonizing, upon input of a 3D image corresponding to a subject, the 3D image to generate skeletal data, generating object data corresponding to the skeletal data and mapping the generated object data in a virtual space, displaying the mapped object data in the virtual space, determining motion of the subject upon selection of an accelerated-movement mode for virtual space movement, and moving the object data in the virtual space at a movement ratio previously set corresponding to the determined motion of the subject.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The above and other features and advantages of an embodiment of the present invention will be more apparent from the following detailed description taken in conjunction with the accompanying drawings, in which:
  • FIG. 1 is a structural diagram of a virtual space moving apparatus according to an embodiment of the present invention;
  • FIG. 2 is a detailed structural diagram of a skeletal data processor according to an embodiment of the present invention;
  • FIG. 3 illustrates a process for moving in a virtual space without being limited by a real space in a virtual space moving apparatus according to an embodiment of the present invention;
  • FIG. 4 illustrates a process for mapping a plurality of accelerated-movement regions in a sensor-recognizable region in an accelerated-movement mode according to an embodiment of the present invention;
  • FIG. 5 illustrates a plurality of accelerated-movement regions according to an embodiment of the present invention;
  • FIG. 6 illustrates a process in which an object moves in a virtual space corresponding to user motion according to an embodiment of the present invention;
  • FIG. 7 illustrates object motion in a virtual space corresponding to user motion in a real space according to an embodiment of the present invention;
  • FIG. 8 illustrates movement in a virtual space according to an embodiment of the present invention; and
  • FIGS. 9 and 10 illustrate forms of a plurality of accelerated-movement regions according to an embodiment of the present invention.
  • DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
  • Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the following description and the accompanying drawings, well-known functions and structures will not be described for the sake of clarity and conciseness.
  • In the present invention, when a user moves in a state where a sensor-recognizable region of a 3D camera and a user motion region of a real space are identical, an object in a virtual space is moved based on accelerated-movement information of an accelerated-movement region corresponding to a user's position among a plurality of accelerated-movement regions which are previously set in the sensor-recognizable region, such that the user can freely move in the virtual space without being limited by a real space.
  • FIG. 1 is a structural diagram of a virtual space moving apparatus according to an embodiment of the present invention.
  • Referring to FIG. 1, the virtual space moving apparatus includes a 3D camera unit 10, a skeletonization unit 20, a skeletal data processor 30, and a Graphic User Interface (GUI) 40.
  • The 3D camera unit 10 converts a 3D image signal, including 3D position information of the x-axis, y-axis, and z-axis of a subject, into a 3D image, and senses motion of the subject. The 3D image corresponds to the subject. The 3D camera unit 10 may include a depth camera, a multi-view camera, or a stereo camera. While the subject (i.e., the user) is photographed and its motion sensed using a 3D camera here, a plurality of 2D cameras may be used instead, or the subject motion may be determined by further including and using a motion sensor. Likewise, while the virtual space moving apparatus includes the 3D camera unit 10 in an embodiment of the present invention, the 3D image may instead be received from an external server via a wired or wireless communication unit, or may have been stored in a memory embedded in or inserted into the virtual space moving apparatus.
  • The sensor-recognizable region of the 3D camera unit 10 refers to a region that can be recognized and photographed by the 3D camera unit 10, and this region is the same as a user motion region in a real space.
  • The skeletonization unit 20 recognizes an outline of the subject, separates a subject region and a background region from the 3D image based on the recognized outline, and skeletonizes the subject region to generate skeletal data. The skeletonization unit 20 outputs a plurality of optical signals to the sensor-recognizable region of the 3D camera unit 10 and recognizes the outline of the subject by recognizing the optical signals received after those signals are reflected from the user. The skeletonization unit 20 may also recognize the outline of the subject by using a pattern.
  • After recognizing the outline of the subject, the skeletonization unit 20 separates the subject region and the background region from the 3D image based on the recognized outline, and skeletonizes the separated subject region to generate skeletal data (or image). In the present invention, skeletonization involves expressing an object with a fully compressed skeletal line for recognition of the object.
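  • As a rough illustration of this step, the sketch below thins a depth frame to a one-pixel-wide skeletal line. The depth-threshold segmentation, the array format, and the use of scikit-image's skeletonize function are assumptions made for illustration; the patent itself recognizes the subject's outline from reflected optical signals rather than from a fixed threshold.

```python
# A minimal sketch of the skeletonization step, assuming the subject region
# can be separated from the background by a simple depth threshold.
# Requires numpy and scikit-image.
import numpy as np
from skimage.morphology import skeletonize

def skeletonize_subject(depth_frame: np.ndarray, max_depth_mm: float = 2000.0) -> np.ndarray:
    """Separate the subject region from the background and compress it to a skeletal line."""
    # Foreground mask: valid pixels closer than the threshold are taken as the subject region.
    subject_mask = (depth_frame > 0) & (depth_frame < max_depth_mm)
    # Morphological thinning expresses the region as a fully compressed skeleton.
    return skeletonize(subject_mask)
```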
  • The skeletal data processor 30 generates object data corresponding to the generated skeletal image and outputs the generated object data to the GUI 40.
  • The skeletal data processor 30 determines whether an accelerated-movement mode for accelerated movement of mapped object data in the virtual space is selected. If the accelerated-movement mode is selected, the skeletal data processor 30 maps a plurality of previously set accelerated-movement regions around a position of the subject. If the subject's position is moved, the skeletal data processor 30 identifies an accelerated-movement region including the moved position of the subject among the plurality of previously set accelerated-movement regions.
  • More specifically, the skeletal data processor 30 determines which of the plurality of accelerated-movement regions includes the position information of the skeletal data, such as its x-axis, y-axis, and z-axis coordinates. Each accelerated-movement region is set with a different movement ratio of object motion per subject motion. For example, if a particular accelerated-movement region is set to a movement ratio of 1:2 for object motion per subject motion, the object is moved in the virtual space with twice the motion of the subject.
  • The skeletal data processor 30 moves the object in the virtual space at the movement ratio previously set for the identified accelerated-movement region, and displays the moved object through the GUI 40, which maps and displays in the virtual space both the object generated by the skeletal data processor 30 and the object as it moves.
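  • The following sketch shows how such a movement ratio might be applied; the vector representation and the function names are illustrative assumptions, not the patent's implementation.

```python
# A minimal sketch of moving the object at a region's movement ratio,
# assuming the subject's step is given as a 3D displacement vector in
# real-space coordinates; all names here are illustrative.
import numpy as np

def move_object(object_pos: np.ndarray, subject_step: np.ndarray, ratio: float) -> np.ndarray:
    """Scale the subject's real-space step by the region's ratio and apply it to the object."""
    return object_pos + ratio * subject_step

# Example: in a 1:2 region, a 0.3 m step moves the object 0.6 m in the virtual space.
new_pos = move_object(np.zeros(3), np.array([0.3, 0.0, 0.0]), ratio=2.0)
```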
  • As such, the object is moved in the virtual space at a movement ratio of object motion in the virtual space corresponding to subject motion, allowing the user to move as desired in the virtual space without being limited by the real space.
  • FIG. 2 is a detailed structural diagram of the skeletal data processor 30 according to an embodiment of the present invention.
  • Referring to FIG. 2, the skeletal data processor 30 includes a motion recognizer 31, a 3D image processor 32, and an accelerated-movement mode executer 33.
  • The motion recognizer 31 recognizes motion of a skeletal image corresponding to subject motion, which is input through the skeletonization unit 20. For example, the motion recognizer 31 recognizes position movement of the skeletal image or a gesture such as a hand motion. More specifically, the motion recognizer 31 extracts depth information of the moving subject through the 3D camera unit 10 such as a 3D camera, and segments the depth information. Thereafter, the motion recognizer 31 recognizes a 3D space position of a head, a 3D space position of a hand, and 3D space positions of torso and legs by using a structure of skeletal data regarding a human body, thus implementing interaction with 3D contents. Although user motion is recognized based on motion of the skeletal image in the embodiment of FIG. 2, user motion may also be recognized by a separate motion sensor.
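  • As a concrete illustration, the sketch below tests for one such gesture from named joint positions; the joint names, the data layout, and the threshold are assumptions for illustration only.

```python
# A minimal sketch of a gesture test the motion recognizer might perform,
# assuming skeletal data arrives as a mapping from joint names to 3D
# positions; the joint names and the margin are illustrative assumptions.
from typing import Dict, Tuple

Joint = Tuple[float, float, float]  # (x, y, z) in sensor coordinates

def is_hand_raised(joints: Dict[str, Joint], margin: float = 0.1) -> bool:
    """Report a 'hand raised' gesture when either hand is above the head by a margin (metres)."""
    head_y = joints["head"][1]
    return any(joints[name][1] > head_y + margin for name in ("left_hand", "right_hand"))
```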
  • The 3D image processor 32 generates object data corresponding to the skeletal data generated by the skeletonization unit 20, and maps the generated object data to a particular position in the virtual space. For example, the 3D image processor 32 generates object data, such as a user's avatar, in the virtual space, and maps the generated avatar to a position in the virtual space corresponding to the user's position in the real space.
  • Thereafter, when the accelerated-movement mode is executed, the 3D image processor 32 moves the position of the object data in the virtual space at the movement ratio identified by the accelerated-movement mode executer 33 corresponding to the subject motion.
  • The accelerated-movement mode executer 33 determines whether the motion recognized by the motion recognizer 31 is motion previously set for selection of the accelerated-movement mode, and executes the accelerated-movement mode or a normal mode according to a result of the determination. The normal mode is a default operation mode in the virtual space moving apparatus, in which the real space and the virtual space are one-to-one mapped and thus subject motion and object motion one-to-one correspond to each other.
  • More specifically, if the recognized motion is for selecting the accelerated-movement mode, the accelerated-movement mode executer 33 maps a plurality of previously set accelerated-movement regions around the position of the skeletal data. Thereafter, if movement of the position of the skeletal data is recognized by the motion recognizer 31, the accelerated-movement mode executer 33 identifies an accelerated-movement region including the moved position information of the skeletal data among the plurality of mapped accelerated-movement regions. In this state, the accelerated-movement regions are mapped around the position of the skeletal data in the sensor-recognizable region of the 3D camera unit 10.
  • The accelerated-movement mode executer 33 outputs a movement ratio of object motion per user motion, which is set corresponding to the identified accelerated-movement region, to the 3D image processor 32.
  • If the recognized motion is not motion for selecting the accelerated-movement mode, i.e., is ordinary motion for moving a distance, then the accelerated-movement mode executer 33 performs the normal mode, in which object motion per subject motion is made at a movement ratio of 1:1.
  • As such, the present invention moves the object in the virtual space at a movement ratio of object motion in the virtual space, which is previously set corresponding to user motion, allowing the user to freely move as desired without being limited by the real space.
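  • The decision logic of the accelerated-movement mode executer can be sketched as follows, assuming concentric radial regions mapped around the position at which the mode was selected; the RadialRegion type, the radial containment test, and all names are illustrative assumptions.

```python
# A minimal sketch of the mode executer's ratio selection. In normal mode the
# ratio is 1:1; in accelerated-movement mode the ratio comes from the region
# containing the subject's current position.
import math
from dataclasses import dataclass
from typing import List, Tuple

Pos = Tuple[float, float]  # ground-plane (x, z) position in metres

@dataclass
class RadialRegion:
    outer_radius: float  # metres from the mode-entry position
    ratio: float         # object motion per subject motion (1:ratio)

def select_ratio(center: Pos, subject: Pos, regions: List[RadialRegion],
                 accelerated: bool) -> float:
    """Return the movement ratio of the region containing the subject, or 1.0 in normal mode."""
    if not accelerated:
        return 1.0  # normal mode: real space and virtual space are one-to-one mapped
    distance = math.dist(center, subject)
    for region in sorted(regions, key=lambda r: r.outer_radius):
        if distance <= region.outer_radius:
            return region.ratio
    return 1.0  # outside all mapped regions: fall back to one-to-one movement
```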
  • FIG. 3 illustrates a process for moving in the virtual space without being limited by the real space in the virtual space moving apparatus according to an embodiment of the present invention.
  • Upon input of a 3D image including x-axis, y-axis, and z-axis coordinates information of the subject through the 3D camera unit 10 in step 300, the skeletonization unit 20 recognizes the outline of the subject and separates a subject region and a background region from the 3D image based on the recognized outline in step 301. The skeletonization unit 20 outputs a plurality of optical signals to the sensor-recognizable region of the 3D camera unit 10 and recognizes the optical signals received after being reflected from the subject, thus recognizing the outline of the user.
  • The skeletonization unit 20 skeletonizes the separated subject region to generate skeletal data in step 302. In other words, the skeletonization unit 20, which has recognized the user's outline, separates the subject region and the background region from the 3D image based on the recognized outline, and generates the skeletal data by skeletonizing the separated subject region.
  • In step 303, the skeletal data processor 30 generates object data corresponding to the generated skeletal data.
  • In step 304, the skeletal data processor 30 maps the generated object data to a particular position in the virtual space and displays the generated object through the GUI 40.
  • In step 305, the skeletal data processor 30 determines whether the accelerated-movement mode is selected. If the accelerated-movement mode is selected, the skeletal data processor 30 proceeds to step 306; otherwise, it returns to step 300 to continuously receive a 3D image and perform steps 301 through 305. More specifically, this determination involves the skeletal data processor 30 checking whether the accelerated-movement mode for accelerated movement of the mapped object data in the virtual space is selected.
  • If the accelerated-movement mode is selected, the skeletal data processor 30 calculates position information of the skeletal data in step 306. More specifically, the skeletal data processor 30 maps the plurality of previously set accelerated-movement regions around the position of the subject, and if the position of the skeletal data is moved, the skeletal data processor 30 calculates the moved position information of the skeletal data.
  • In step 307, the skeletal data processor 30 detects an accelerated-movement region including the moved position information of the subject from the plurality of previously set accelerated-movement regions. Specifically, the skeletal data processor 30 determines which of the plurality of accelerated-movement regions includes the position information of the skeletal data, such as its x-axis, y-axis, and z-axis coordinates, corresponding to the motion of the subject.
  • In step 308, the skeletal data processor 30 moves the object in the virtual space at a movement ratio previously set corresponding to the detected accelerated-movement region, and displays the moved object in the virtual space through the GUI 40.
  • In step 309, the skeletal data processor 30 determines whether the accelerated-movement mode is terminated. If the accelerated-movement mode is terminated, the skeletal data processor 30 ends the process; otherwise, it returns to step 306 to calculate the position information of the skeletal data and performs steps 307 through 309.
  • As such, the object in the embodiment of FIG. 3 is moved in the virtual space at a previously set movement ratio of object motion in the virtual space corresponding to user motion, allowing the user to move as desired in the virtual space without being limited by the real space.
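  • The overall FIG. 3 flow can be sketched end to end as a replay over a recorded sequence of subject positions, reusing the select_ratio helper and RadialRegion type sketched above; the region layout and all names remain illustrative assumptions.

```python
# A minimal sketch of the FIG. 3 loop: track the subject's position (step 306),
# find its accelerated-movement region (step 307), and move the object at that
# region's ratio (step 308). Assumes Pos, RadialRegion, and select_ratio from
# the earlier sketch.
from typing import Iterable, List

def track_object(subject_positions: Iterable[Pos],
                 regions: List[RadialRegion]) -> List[Pos]:
    """Replay subject motion, scaling each step by the ratio of the region it lands in."""
    positions = iter(subject_positions)
    center = prev = next(positions)  # regions are mapped around the mode-entry position
    obj: Pos = (0.0, 0.0)
    trace = [obj]
    for pos in positions:
        ratio = select_ratio(center, pos, regions, accelerated=True)
        obj = (obj[0] + ratio * (pos[0] - prev[0]),
               obj[1] + ratio * (pos[1] - prev[1]))
        trace.append(obj)
        prev = pos
    return trace
```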
  • FIGS. 4 through 8 are diagrams for describing a process for movement in the virtual space without being limited by the real space by the virtual space moving apparatus according to an embodiment of the present invention.
  • FIG. 4 illustrates a process for mapping the plurality of accelerated-movement regions in the sensor-recognizable region in the accelerated-movement mode according to an embodiment of the present invention.
  • As shown in FIG. 4, assuming that a sensor-recognizable region 400 of the 3D camera unit 10 is identical to a subject motion space, the skeletal data processor 30 maps a plurality of accelerated-movement regions 401 around a position of the subject in the accelerated-movement mode.
  • FIG. 5 illustrates the plurality of accelerated-movement regions according to an embodiment of the present invention.
  • As shown in FIG. 5, there are five accelerated-movement regions: A1, A2, A3, A4, and A5. Although five regions are shown here, the plurality of accelerated-movement regions may include any number n of regions, where n > 0.
  • The plurality of accelerated-movement regions may be set as shown below in Table 1.
  • TABLE 1

    Accelerated-Movement Region | Set Value
    A1 | Object motion per subject motion has a movement ratio of 1:n
    A2 | Object motion per subject motion has a movement ratio of 1:2n
    A3 | Object motion per subject motion has a movement ratio of 1:7n
    A4 | Object motion per subject motion has a movement ratio of 1:30n
    A5 | Object motion per subject motion has a movement ratio of 1:100n
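  • Table 1 can be read as a simple lookup from region to ratio multiplier, as in the sketch below; taking n = 1 is purely an illustrative assumption, since the patent leaves n as a set value.

```python
# Table 1 as a lookup from accelerated-movement region to the factor by which
# object motion exceeds subject motion; n = 1 is an illustrative assumption.
N = 1
MOVEMENT_RATIOS = {"A1": 1 * N, "A2": 2 * N, "A3": 7 * N, "A4": 30 * N, "A5": 100 * N}
```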
  • Referring to Table 1 and FIGS. 6 and 7, the accelerated-movement mode will be described in detail.
  • FIG. 6 illustrates a process in which the object moves in the virtual space corresponding to user motion according to an embodiment of the present invention.
  • For example, when a user 600 situated in the accelerated-movement region A1 moves to a position 601, the skeletal data processor 30 may identify the position information of the user at position 601, that is, the position information of the skeletal data, and determine which of the plurality of accelerated-movement regions includes that position information. In other words, the skeletal data processor 30 determines in which one of A1 through A4 the x-axis, y-axis, and z-axis coordinates of the skeletal data are included.
  • Upon determining that the position of the user 601 is included in the accelerated-movement region A3, the skeletal data processor 30 moves an object in the virtual space at a movement ratio of 1:7n for object motion per user motion as set in Table 1.
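  • A short worked example of this FIG. 6 case, using the MOVEMENT_RATIOS lookup above; the 0.5 m step length is an illustrative assumption.

```python
# The user has moved into region A3, so object motion is made at 1:7n.
step = 0.5                           # subject's step in the real space (metres)
print(MOVEMENT_RATIOS["A3"] * step)  # 3.5: metres moved in the virtual space (n = 1)
```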
  • FIG. 7 illustrates object motion in the virtual space corresponding to user motion in the real space according to an embodiment of the present invention.
  • For example, if the user moves from the accelerated-movement region A1 to the accelerated-movement region A2 in the real space (the user's own motion being 1:1), the object may move at the previously set movement ratio of 1:2 in the virtual space.
  • FIG. 8 illustrates movement in the virtual space according to an embodiment of the present invention.
  • As the user moves a particular distance in the real space, the object in the virtual space moves from a position A to a position B, as shown in FIG. 8, regardless of a size or a form of the real space.
  • FIGS. 9 and 10 illustrate forms of the plurality of accelerated-movement regions according to an embodiment of the present invention.
  • While the plurality of accelerated-movement regions is implemented in a circular form in an embodiment of the present invention, the regions may also be configured in the form shown in FIG. 9, or in circular, rectangular, or triangular forms as shown in FIG. 10.
  • Therefore, the present invention moves the object in the virtual space at a previously set movement ratio of object motion in the virtual space corresponding to user motion, allowing the user to move as desired in the virtual space without being limited by the real space.
  • Moreover, according to the present invention, the object moves an accelerated-movement distance determined by the accelerated-movement region previously set for the user's motion in the virtual space, such that the user can move anywhere as desired in the virtual space, regardless of the size of the real space.
  • It can be seen that embodiments of the present invention can be implemented in hardware, software, or a combination of hardware and software. Such software may be stored, whether or not it is erasable or re-recordable, in a volatile or non-volatile storage device such as a Read-Only Memory (ROM); in a memory such as a Random Access Memory (RAM), a memory chip, a device, or an integrated circuit; or in an optically or magnetically recordable, machine-readable (e.g., computer-readable) storage medium such as a Compact Disc (CD), a Digital Versatile Disc (DVD), a magnetic disk, or a magnetic tape. The virtual space moving method according to the present invention can be implemented by a computer or a portable terminal that includes a controller and a memory, the memory being an example of a non-transitory machine-readable storage medium suitable for storing a program or programs including instructions for implementing the embodiments of the present invention. Therefore, the present invention includes a program including code for implementing the apparatus or method claimed in any claim, and a machine-readable storage medium storing such a program. The program may be transferred electronically through an arbitrary medium, such as a communication signal delivered over a wired or wireless connection, and the present invention properly includes equivalents thereof.
  • The present invention is not limited by the foregoing embodiments and the accompanying drawings because various substitutions, modifications, and changes can be made by those of ordinary skill in the art without departing from the technical spirit of the present invention.

Claims (13)

What is claimed is:
1. A virtual space moving apparatus comprising:
a skeletonization unit which skeletonizes a Three-dimensional (3D) image corresponding to a subject to generate skeletal data;
a skeletal data processor which generates object data corresponding to the skeletal data and maps the object data in a virtual space; and
a graphic user interface which displays the mapped object data in the virtual space,
wherein upon selection of an accelerated-movement mode for virtual space movement, the skeletal data processor determines motion of the subject and moves the object data in the virtual space at a movement ratio previously set corresponding to the determined motion of the subject.
2. The virtual space moving apparatus of claim 1, further comprising a 3D camera unit which outputs the 3D image corresponding to the subject.
3. The virtual space moving apparatus of claim 2, wherein the skeletonization unit separates a subject region and a background region from the 3D image by recognizing an outline of the subject, and generates the skeletal data by skeletonizing the separated subject region.
4. The virtual space moving apparatus of claim 3, wherein the skeletal data processor maps a plurality of set accelerated-movement regions to create a motion of the object in the virtual space per motion of the subject in a real space at a previously set movement ratio in a sensor-recognizable space of the 3D camera unit upon selection of the accelerated-movement mode.
5. The virtual space moving apparatus of claim 4, wherein when motion of the subject is sensed, the skeletal data processor calculates position information of the subject and detects an accelerated-movement region including the calculated position information of the subject from among the plurality of set accelerated-movement regions.
6. The virtual space moving apparatus of claim 5, wherein the skeletal data processor identifies a movement ratio of motion of the object in the virtual space per motion of the subject, which is previously set corresponding to the detected accelerated-movement region, and moves the object in the virtual space at the identified movement ratio.
7. The virtual space moving apparatus of claim 5, wherein for each of the plurality of accelerated-movement regions, a movement ratio is previously set corresponding to motion of the object in the virtual space per motion of the subject.
8. A virtual space moving method in a virtual space moving apparatus, the virtual space moving method comprising:
skeletonizing, upon input of a Three-Dimensional (3D) image corresponding to a subject, the 3D image to generate skeletal data;
generating object data corresponding to the skeletal data and mapping the generated object data in a virtual space;
displaying the mapped object data in the virtual space;
determining motion of the subject upon selection of an accelerated-movement mode for virtual space movement; and
moving the object data in the virtual space at a movement ratio previously set corresponding to the determined motion of the subject.
9. The virtual space moving method of claim 8, wherein generating the object data and mapping the object data in the virtual space comprises:
separating a subject region and a background region from the 3D image by recognizing an outline of the subject; and
generating the skeletal data by skeletonizing the separated subject region, and mapping the skeletal data in the virtual space.
10. The virtual space moving method of claim 9, further comprising mapping a plurality of set accelerated-movement regions to create motion of the object in the virtual space per motion of the subject in a real space at a previously set movement ratio in a sensor-recognizable space of a 3D camera unit, upon selection of the accelerated-movement mode.
11. The virtual space moving method of claim 10, wherein moving the object data in the virtual space comprises:
calculating position information of the subject, when motion of the subject is sensed; and
detecting an accelerated-movement region including the calculated position information of the subject from among the plurality of set accelerated-movement regions.
12. The virtual space moving method of claim 11, wherein moving the object data in the virtual space comprises:
identifying a movement ratio of motion of the object data in the virtual space per motion of the subject, which is previously set corresponding to the detected accelerated-movement region; and
moving the object data in the virtual space at the identified movement ratio.
13. The virtual space moving method of claim 12, wherein a movement ratio for each of the plurality of accelerated-movement regions is previously set corresponding to motion of the object in the virtual space per motion of the subject.
US13/739,693 2012-01-11 2013-01-11 Virtual space moving apparatus and method Abandoned US20130176302A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/901,299 US10853966B2 (en) 2012-01-11 2018-02-21 Virtual space moving apparatus and method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020120003432A KR101888491B1 (en) 2012-01-11 2012-01-11 Apparatus and method for moving in virtual reality
KR10-2012-0003432 2012-01-11

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/901,299 Continuation US10853966B2 (en) 2012-01-11 2018-02-21 Virtual space moving apparatus and method

Publications (1)

Publication Number Publication Date
US20130176302A1 true US20130176302A1 (en) 2013-07-11

Family

ID=48743599

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/739,693 Abandoned US20130176302A1 (en) 2012-01-11 2013-01-11 Virtual space moving apparatus and method
US15/901,299 Active 2033-04-02 US10853966B2 (en) 2012-01-11 2018-02-21 Virtual space moving apparatus and method

Family Applications After (1)

Application Number Title Priority Date Filing Date
US15/901,299 Active 2033-04-02 US10853966B2 (en) 2012-01-11 2018-02-21 Virtual space moving apparatus and method

Country Status (2)

Country Link
US (2) US20130176302A1 (en)
KR (1) KR101888491B1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6915575B2 (en) * 2018-03-29 2021-08-04 京セラドキュメントソリューションズ株式会社 Control device and monitoring system
JP7341674B2 (en) * 2019-02-27 2023-09-11 キヤノン株式会社 Information processing device, information processing method and program

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11146978A (en) * 1997-11-17 1999-06-02 Namco Ltd Three-dimensional game unit, and information recording medium
JP2010233671A (en) * 2009-03-30 2010-10-21 Namco Bandai Games Inc Program, information storage medium and game device
KR20100138702A (en) * 2009-06-25 2010-12-31 삼성전자주식회사 Method and apparatus for processing virtual world
US9032334B2 (en) * 2011-12-21 2015-05-12 Lg Electronics Inc. Electronic device having 3-dimensional display and method of operating thereof
US9552673B2 (en) * 2012-10-17 2017-01-24 Microsoft Technology Licensing, Llc Grasping virtual objects in augmented reality

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7225414B1 (en) * 2002-09-10 2007-05-29 Videomining Corporation Method and system for virtual touch entertainment
US7197126B2 (en) * 2003-05-26 2007-03-27 Hitachi, Ltd. Human communication system
US20060119576A1 (en) * 2004-12-06 2006-06-08 Naturalpoint, Inc. Systems and methods for using a movable object to control a computer
US20110025603A1 (en) * 2006-02-08 2011-02-03 Underkoffler John S Spatial, Multi-Modal Control Device For Use With Spatial Operating System
US20070222746A1 (en) * 2006-03-23 2007-09-27 Accenture Global Services Gmbh Gestural input for navigation and manipulation in virtual space
US8384665B1 (en) * 2006-07-14 2013-02-26 Ailive, Inc. Method and system for making a selection in 3D virtual environment
US20110227913A1 (en) * 2008-11-28 2011-09-22 Arn Hyndman Method and Apparatus for Controlling a Camera View into a Three Dimensional Computer-Generated Virtual Environment
US20100201693A1 (en) * 2009-02-11 2010-08-12 Disney Enterprises, Inc. System and method for audience participation event with digital avatars
US20100241998A1 (en) * 2009-03-20 2010-09-23 Microsoft Corporation Virtual object manipulation
US20110107216A1 (en) * 2009-11-03 2011-05-05 Qualcomm Incorporated Gesture-based user interface
US20110193939A1 (en) * 2010-02-09 2011-08-11 Microsoft Corporation Physical interaction zone for gesture-based user interfaces

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Mine, Virtual Environment Interaction Techniques, UNC Chapel Hill computer science technical report TR95-018 (1995): 507248-2. *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10699484B2 (en) 2016-06-10 2020-06-30 Dirtt Environmental Solutions, Ltd. Mixed-reality and CAD architectural design environment
US11270514B2 (en) 2016-06-10 2022-03-08 Dirtt Environmental Solutions Ltd. Mixed-reality and CAD architectural design environment
US11415980B2 (en) * 2016-07-29 2022-08-16 NEC Solution Innovators, Ltd. Moving object operation system, operation signal transmission system, moving object operation method, program, and recording medium
US11467572B2 (en) 2016-07-29 2022-10-11 NEC Solution Innovators, Ltd. Moving object operation system, operation signal transmission system, moving object operation method, program, and recording medium
CN108665492A (en) * 2018-03-27 2018-10-16 北京光年无限科技有限公司 A kind of Dancing Teaching data processing method and system based on visual human
CN108665492B (en) * 2018-03-27 2020-09-18 北京光年无限科技有限公司 Dance teaching data processing method and system based on virtual human
US11308694B2 (en) * 2019-06-25 2022-04-19 Sony Interactive Entertainment Inc. Image processing apparatus and image processing method

Also Published As

Publication number Publication date
US20180173302A1 (en) 2018-06-21
US10853966B2 (en) 2020-12-01
KR101888491B1 (en) 2018-08-16
KR20130082296A (en) 2013-07-19

Similar Documents

Publication Publication Date Title
US10853966B2 (en) Virtual space moving apparatus and method
US9910509B2 (en) Method to control perspective for a camera-controlled computer
EP3855288B1 (en) Spatial relationships for integration of visual images of physical environment into virtual reality
US10001844B2 (en) Information processing apparatus information processing method and storage medium
TWI567659B (en) Theme-based augmentation of photorepresentative view
US7292240B2 (en) Virtual reality presentation device and information processing method
EP2521097B1 (en) System and Method of Input Processing for Augmented Reality
US20190187876A1 (en) Three dimensional digital content editing in virtual reality
US8388146B2 (en) Anamorphic projection device
US20150138065A1 (en) Head-mounted integrated interface
US20150002419A1 (en) Recognizing interactions with hot zones
JP2021081757A (en) Information processing equipment, information processing methods, and program
US20170102791A1 (en) Virtual Plane in a Stylus Based Stereoscopic Display System
CN106980378B (en) Virtual display method and system
US11582409B2 (en) Visual-inertial tracking using rolling shutter cameras
US20130057574A1 (en) Storage medium recorded with program, information processing apparatus, information processing system, and information processing method
US8854358B2 (en) Computer-readable storage medium having image processing program stored therein, image processing apparatus, image processing method, and image processing system
US20150033157A1 (en) 3d displaying apparatus and the method thereof
EP3486749A1 (en) Provision of virtual reality content
US20190294314A1 (en) Image display device, image display method, and computer readable recording device
US20130040737A1 (en) Input device, system and method
US11195320B2 (en) Feed-forward collision avoidance for artificial reality environments
US20140092133A1 (en) Computer-readable medium, image processing device, image processing system, and image processing method
CN112237735A (en) Recording medium, object detection device, object detection method, and object detection system
US20230377279A1 (en) Space and content matching for augmented and mixed reality

Legal Events

Date Code Title Description
AS Assignment

Owner name: SAMSUNG ELECTRONICS CO., LTD., KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JEONG, MOON-SIK;KORYAKOVSKIY, IVAN;JUNG, SANG-KEUN;AND OTHERS;REEL/FRAME:029687/0770

Effective date: 20130110

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION