CN104916182B - An immersive virtual-reality maintenance and training simulation system - Google Patents

An immersive virtual-reality maintenance and training simulation system

Info

Publication number
CN104916182B
CN104916182B (application CN201510278780.0A)
Authority
CN
China
Prior art keywords
subsystem
point
user
node
target point
Prior art date
Legal status: Active
Application number
CN201510278780.0A
Other languages
Chinese (zh)
Other versions
CN104916182A (en)
Inventor
罗军
赵博
陈仁越
李澍
郭逸婧
李莉
张启程
皮赞
聂蓉梅
Current Assignee
China Academy of Launch Vehicle Technology CALT
Beijing Institute of Astronautical Systems Engineering
Original Assignee
China Academy of Launch Vehicle Technology CALT
Beijing Institute of Astronautical Systems Engineering
Priority date
Filing date
Publication date
Application filed by China Academy of Launch Vehicle Technology CALT and Beijing Institute of Astronautical Systems Engineering
Priority: CN201510278780.0A
Publication of CN104916182A
Application granted
Publication of CN104916182B
Legal status: Active
Anticipated expiration

Abstract

The invention discloses an immersive virtual-reality maintenance and training simulation system comprising: a position tracking subsystem, a three-dimensional model database, a node management subsystem, a node rendering subsystem, and a projection subsystem. The three-dimensional model database stores three-dimensional model data. The position tracking subsystem outputs the user's spatial position information to the node management subsystem. The node management subsystem sends the three-dimensional model and the user's spatial position information to the node rendering subsystem. The node rendering subsystem renders the three-dimensional model and sends the rendered image to the projection subsystem; based on the received eye-point position of the user's head and the hand-operation information, it computes the image that should appear within the user's field of view and sends the updated image to the projection subsystem. The projection subsystem receives and displays the image information. The invention can present training scenes in an intuitive manner, giving participants an immersive, on-the-spot experience.

Description

An immersive virtual-reality maintenance and training simulation system
Technical field
The invention belongs to the field of digital design and system simulation of space products, and relates to the application of virtual-reality technology to virtual maintenance.
Background art
Virtual reality (VR) technology is the product of the combined development of computer graphics, artificial intelligence, computer networking, and information processing. It uses computer technology to generate a simulated environment and, through various sensing devices, places the user inside that environment so that the user interacts with it directly and naturally. Virtual reality is a computer interface technology: in essence an advanced computer user interface that simultaneously offers the user intuitive, natural real-time interaction channels such as vision, hearing, and touch. It makes the user's operations as convenient as possible, lightening the user's burden and improving the operating efficiency of the whole system.
Many systems already use virtual-reality environments for virtual maintenance simulation, with applications in fields such as automobiles and aircraft. Their designs vary widely, as do the problems they face and solve. Although the most common maintenance simulation systems today are based on three-dimensional models, most display their results on an ordinary computer screen: they lack true three-dimensional immersion, fall short in realism, cannot reflect the actual scene authentically and effectively, cannot build an immersive three-dimensional virtual environment, and generalize poorly.
Summary of the invention
The problem solved by the present invention is: to overcome the shortcomings of the prior art by providing an immersive virtual-maintenance and virtual-reality system that presents training scenes and content in an intuitive, three-dimensional way, giving participants an immersive, on-the-spot experience.
The technical solution of the present invention is as follows.
An immersive virtual-reality maintenance and training simulation system comprises: a position tracking subsystem, a three-dimensional model database, a node management subsystem, a node rendering subsystem, and a projection subsystem.
The three-dimensional model database stores three-dimensional model data, from which the node management subsystem retrieves three-dimensional models.
The position tracking subsystem obtains the user's spatial position information and outputs it to the node management subsystem; the spatial position information comprises the eye-point position of the user's head and the user's hand-operation information.
The node management subsystem reads the three-dimensional model from the model database and passes it to the node rendering subsystem, together with the head eye-point position and hand-operation information of the user provided by the position tracking subsystem.
The node rendering subsystem renders the received three-dimensional model, within a preset display range, as four parts (left, center, right, and bottom) and sends the rendered images to the projection subsystem for display, so that the user sees the whole virtual three-dimensional model. Based on the received head eye-point position and hand-operation information, it computes the image that should appear within the user's field of view and sends the updated image to the projection subsystem.
The projection subsystem receives and displays the image information sent by the node rendering subsystem, so that the user sees the updated three-dimensional model image.
The position tracking subsystem comprises a position-processing host, optical cameras, and tracked rigid bodies.
The tracked rigid bodies are fixed to the user's major joints and near the head and eyes.
The optical cameras emit infrared light, collect the light reflected by the tracked rigid bodies, and pass the collected data to the position-processing host.
After receiving the data, the position-processing host determines the user's spatial position by position tracking and passes the spatial position information to the node management subsystem.
The projection subsystem comprises four stereo projectors and four display screens.
The four display screens are set at 90 degrees to one another and are arranged, relative to the user's position, as left, center, right, and bottom. The four projection screens each display one part of the virtual scene simultaneously, together forming a complete virtual environment.
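A head-tracked multi-wall display of this kind is usually rendered with one off-axis (asymmetric) view frustum per screen, recomputed each frame from the tracked head position. The sketch below is an illustrative reconstruction, not the patent's implementation: the function name, the head and screen coordinates, and the restriction to a screen lying in a plane of constant z are all assumptions.

```python
def off_axis_frustum(head, screen_center, half_w, half_h, near):
    """Asymmetric frustum extents (left, right, bottom, top) at the near
    plane for a screen lying in a plane z = const and facing +z.
    `head` and `screen_center` are (x, y, z) in the same tracker frame."""
    hx, hy, hz = head
    cx, cy, cz = screen_center
    d = hz - cz                    # perpendicular eye-to-screen distance
    assert d > 0, "head must be in front of the screen"
    k = near / d                   # similar triangles: screen edge -> near plane
    return ((cx - half_w - hx) * k,   # left
            (cx + half_w - hx) * k,   # right
            (cy - half_h - hy) * k,   # bottom
            (cy + half_h - hy) * k)   # top

# A 3.52 m x 2.20 m wall (the screen size given later in the embodiment)
# viewed from 2 m away with the head centered yields a symmetric frustum:
l, r, b, t = off_axis_frustum((0.0, 0.0, 2.0), (0.0, 0.0, 0.0), 1.76, 1.10, 0.1)
```

Each of the four rendering nodes would evaluate this with its own screen pose every frame (walls not perpendicular to z need a change of basis first) and feed the extents to its projection setup, e.g. glFrustum.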
Compared with the prior art, the present invention has the following advantages:
(1) The invention realizes an immersive stereoscopic three-dimensional virtual environment, giving users a very intuitive platform for virtual training and virtual maintenance. It helps personnel understand operations and training procedures, strengthens the sense of participation, and can significantly improve training effectiveness.
(2) Through the cooperation of the node management subsystem and the node rendering subsystem, the invention realizes coordinated display of stereoscopic three-dimensional scene images, laying the technical foundation for forming an immersive virtual scene. The design is ingenious, simple to implement, and highly general.
(3) Through the position tracking system, the invention captures the user's head and hand positions in real time. Management software further processes the user's position information so that the three-dimensional view responds in real time to changes in the user's position and operations. This realizes human-computer interaction and upgrades the user's experience of the virtual-reality scene from passive roaming and browsing to interactive operation, improving the realism and acceptance of virtual training.
Brief description of the drawings
Fig. 1 is the system composition diagram of the invention;
Fig. 2 is a schematic diagram of the position tracking process of the invention;
Fig. 3 is a schematic diagram of the coordinates of the target point, anchor point, and camera positions of the invention;
Fig. 4 is a structural schematic of the projection subsystem of the invention;
Fig. 5 is a structural schematic of the projection subsystem of the invention.
Detailed description of the embodiments
A specific embodiment of the invention is described below with reference to the accompanying drawings.
As shown in Fig. 1, the immersive virtual-reality maintenance and training simulation system of the invention comprises a position tracking subsystem, a three-dimensional model database, a node management subsystem, a node rendering subsystem, and a projection subsystem. The three-dimensional model database mainly stores three-dimensional model data, which the node management subsystem retrieves. To combine visual realism with fast display, the three-dimensional models use a triangular-patch representation, with surface materials such as metal or composite assigned according to the actual objects.
The position tracking subsystem obtains the user's spatial position information and outputs it to the node management subsystem; the spatial position information comprises the eye-point position of the user's head and the user's hand-operation information.
The position tracking subsystem comprises a position-processing host, optical cameras, and tracked rigid bodies.
The tracked rigid bodies are fixed to the user's major joints and near the head and eyes. Each tracked rigid body used to indicate the user's hand position or head position consists of two parts: a target point and an anchor point. The anchor point mainly assists in determining the target point's motion trajectory.
The optical cameras emit infrared light, collect the light reflected by the tracked rigid bodies, and pass the collected data to the position-processing host.
After receiving the data, the position-processing host determines the user's spatial position by position tracking and passes the spatial position information to the node management subsystem.
As shown in Figs. 2 and 3, position tracking is implemented as follows:
(1) Determine each reference-point position. A reference point is the spatial position of an optical camera; in this embodiment one optical camera is installed at each of four positions above the user's head: front-left, rear-left, front-right, and rear-right.
(2) Obtain the current distance from each reference point to the target point and to the anchor point at time K. At the initial time K the optical cameras emit infrared light; the anchor point and target point reflect it back to the cameras, which pass the returned information to the position-processing host. The host computes the current distance from each reference point to the target point and to the anchor point.
(3) Obtain the spatial coordinates of the target point and anchor point at time K by the maximum-likelihood method, as follows.
Let the coordinates of the $n$ ($n \geq 4$) reference points in three-dimensional space be $(x_1,y_1,z_1)$, $(x_2,y_2,z_2)$, ..., $(x_n,y_n,z_n)$, let the coordinate of the target point or anchor point be $X=(x,y,z)$, and let the distances from $X$ to the 1st, 2nd, ..., $n$-th reference points be $d_1, d_2, \dots, d_n$. Then, by the distance formula between two points in space:

$$\begin{cases}(x_1-x)^2+(y_1-y)^2+(z_1-z)^2=d_1^2\\(x_2-x)^2+(y_2-y)^2+(z_2-z)^2=d_2^2\\\quad\vdots\\(x_n-x)^2+(y_n-y)^2+(z_n-z)^2=d_n^2\end{cases}$$

Subtracting the last equation from each of the others and rearranging yields the non-homogeneous linear system:

$$\begin{cases}2(x_1-x_n)x+2(y_1-y_n)y+2(z_1-z_n)z=(x_1^2+y_1^2+z_1^2)-(x_n^2+y_n^2+z_n^2)-(d_1^2-d_n^2)\\\quad\vdots\\2(x_{n-1}-x_n)x+2(y_{n-1}-y_n)y+2(z_{n-1}-z_n)z=(x_{n-1}^2+y_{n-1}^2+z_{n-1}^2)-(x_n^2+y_n^2+z_n^2)-(d_{n-1}^2-d_n^2)\end{cases}$$

This can be written as $AX=b$, where

$$A=\begin{pmatrix}2(x_1-x_n)&2(y_1-y_n)&2(z_1-z_n)\\\vdots&\vdots&\vdots\\2(x_{n-1}-x_n)&2(y_{n-1}-y_n)&2(z_{n-1}-z_n)\end{pmatrix},\qquad b=\begin{pmatrix}(x_1^2+y_1^2+z_1^2)-(x_n^2+y_n^2+z_n^2)-(d_1^2-d_n^2)\\\vdots\\(x_{n-1}^2+y_{n-1}^2+z_{n-1}^2)-(x_n^2+y_n^2+z_n^2)-(d_{n-1}^2-d_n^2)\end{pmatrix}$$

The coordinate of the target point or anchor point $X$ is then

$$\hat{X}=(A^TA)^{-1}A^Tb.$$
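The least-squares solve $\hat X = (A^TA)^{-1}A^Tb$ can be exercised with a small numeric sketch. This is illustrative only: the four camera positions and measured distances are made-up test values, and the normal equations are solved by plain Gaussian elimination rather than any particular library.

```python
def locate(refs, dists):
    """Estimate a point's position from n >= 4 reference points (camera
    positions) and measured distances, via the linearized system AX = b
    and the normal equations (A^T A) X = A^T b."""
    n = len(refs)
    xn, yn, zn = refs[-1]
    dn = dists[-1]
    A, b = [], []
    for (xi, yi, zi), di in zip(refs[:-1], dists[:-1]):
        A.append([2 * (xi - xn), 2 * (yi - yn), 2 * (zi - zn)])
        b.append((xi*xi + yi*yi + zi*zi) - (xn*xn + yn*yn + zn*zn)
                 - (di*di - dn*dn))
    # Normal equations M x = v with M = A^T A (3x3) and v = A^T b.
    M = [[sum(A[k][i] * A[k][j] for k in range(n - 1)) for j in range(3)]
         for i in range(3)]
    v = [sum(A[k][i] * b[k] for k in range(n - 1)) for i in range(3)]
    for i in range(3):                      # Gaussian elimination, partial pivot
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        v[i], v[p] = v[p], v[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 3):
                M[r][c] -= f * M[i][c]
            v[r] -= f * v[i]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                     # back substitution
        x[i] = (v[i] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

# Four made-up camera positions with exact distances to the point (1, 2, 3):
pos = locate([(0, 0, 0), (4, 0, 0), (0, 4, 0), (0, 0, 4)],
             [14 ** 0.5, 22 ** 0.5, 14 ** 0.5, 6 ** 0.5])
```

With exact distances the linearized system is exact and `pos` recovers (1, 2, 3); with noisy measurements and more than four cameras the same solve gives the least-squares estimate.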
(4) Advance the time: K = K + i, i = i + t, where t is one sampling-interval duration and the initial value is i = 0.
(5) At time K, obtain the spatial coordinates of the target point and anchor point at time K by the maximum-likelihood method.
(6) Judge whether the anchor-point position obtained in step (5) has changed relative to the anchor-point position obtained in step (3). If it has not changed, go to step (7); if it has changed, the target point is considered to have translated, and the process goes to step (10).
(7) Judge whether the target-point position obtained in step (5) has changed relative to the target-point position obtained in step (3). If it has not changed, the target point is considered to have neither rotated nor translated, and the process goes to step (12); if it has changed, the target point is considered to have rotated about the anchor-vector axis, and the process goes to step (8).
(8) From the target-point position and the anchor-vector axis, compute the angle through which the target point has rotated about the anchor-vector axis.
(9) Update the target-point position after rotation about the anchor axis, then go to step (12).
The target-point position after rotation about the anchor axis is updated as follows.
The coordinates of a point after rotation about a vector in three-dimensional space are computed as follows. It is straightforward to compute a point's new coordinates after it rotates through a given angle about the X, Y, or Z axis; here the rotation axis is an arbitrary vector (x, y, z), namely the anchor-point vector. Concretely, the rotation is the matrix realized by the OpenGL function glRotatef(angle, x, y, z):

$$R=\begin{pmatrix}x^2(1-c)+c & xy(1-c)-zs & xz(1-c)+ys\\ yx(1-c)+zs & y^2(1-c)+c & yz(1-c)-xs\\ xz(1-c)-ys & yz(1-c)+xs & z^2(1-c)+c\end{pmatrix}$$

Here the coordinate axes are assumed to form a right-handed system, c = cos(angle) and s = sin(angle), and angle is measured counter-clockwise when viewed from the positive end of the vector (x, y, z) (by the right-hand rule: with the thumb pointing along the vector, the curled fingers give the positive direction). The vector (x, y, z) must be normalized to unit length and pass through the origin. The rotated coordinates of the target point are then:
$$\begin{aligned}x_1&=(x^2(1-c)+c)\,x_0+(xy(1-c)-zs)\,y_0+(xz(1-c)+ys)\,z_0\\ y_1&=(yx(1-c)+zs)\,x_0+(y^2(1-c)+c)\,y_0+(yz(1-c)-xs)\,z_0\\ z_1&=(xz(1-c)-ys)\,x_0+(yz(1-c)+xs)\,y_0+(z^2(1-c)+c)\,z_0\end{aligned}$$

where $(x_0,y_0,z_0)$ is the original target-point coordinate and $(x_1,y_1,z_1)$ is the new target-point coordinate after rotation.
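The three component equations above are, term by term, the standard rotation of a point about a unit axis (the axis-angle form used by OpenGL's glRotatef). A minimal numeric check, with made-up axis and angle values:

```python
import math

def rotate(point, axis, angle):
    """Rotate `point` about the unit vector `axis` (through the origin) by
    `angle` radians, counter-clockwise when viewed from the tip of the axis,
    using the component equations above."""
    x, y, z = axis                      # must be unit length
    x0, y0, z0 = point
    c, s = math.cos(angle), math.sin(angle)
    return (
        (x*x*(1-c) + c)   * x0 + (x*y*(1-c) - z*s) * y0 + (x*z*(1-c) + y*s) * z0,
        (y*x*(1-c) + z*s) * x0 + (y*y*(1-c) + c)   * y0 + (y*z*(1-c) - x*s) * z0,
        (x*z*(1-c) - y*s) * x0 + (y*z*(1-c) + x*s) * y0 + (z*z*(1-c) + c)   * z0,
    )

# Rotating (1, 0, 0) by 90 degrees about the z-axis should give (0, 1, 0):
p = rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2)
```

As the text notes, the axis must be normalized first; for an axis through the anchor point rather than the origin, translate the point by minus the anchor position, rotate, then translate back.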
(10) From the target-point position and the anchor-vector axis, compute the distances the target point has translated along the three coordinate axes.
(11) Update the target-point position.
(12) Judge whether the user is still using the system; if so, proceed to the next time instant by returning to step (4), otherwise terminate.
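Steps (4) through (12) amount to a per-sample decision loop: a moved anchor point signals translation, a moved target point alone signals rotation about the anchor axis, and no movement means the pose is unchanged. A minimal sketch of that decision, with an assumed noise tolerance `eps` that the patent does not specify:

```python
def classify_motion(anchor_prev, target_prev, anchor_now, target_now, eps=1e-6):
    """Classify one tracking sample per steps (6)-(7): 'translate' if the
    anchor point moved, 'rotate' if only the target point moved, else 'still'."""
    def moved(a, b):
        return max(abs(u - v) for u, v in zip(a, b)) > eps
    if moved(anchor_prev, anchor_now):
        return "translate"          # step (10): compute per-axis translation
    if moved(target_prev, target_now):
        return "rotate"             # step (8): compute angle about anchor axis
    return "still"                  # step (12): nothing to update

# Target moved while the anchor stayed put -> rotation about the anchor axis:
kind = classify_motion((0, 0, 0), (1, 0, 0), (0, 0, 0), (0, 1, 0))
```

A real tracker would tune `eps` to the optical system's noise floor so that jitter is not misread as motion.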
The node management subsystem reads the three-dimensional model from the model database and passes it to the node rendering subsystem, together with the head eye-point position and hand-operation information of the user provided by the position tracking subsystem.
The node rendering subsystem renders the received three-dimensional model, within a preset display range, as four parts (left, center, right, and bottom) and sends the rendered images to the projection subsystem for display, so that the user sees the whole virtual three-dimensional model. Based on the received head eye-point position and hand-operation information, it computes the image that should appear within the user's field of view and sends the updated image to the projection subsystem.
The projection subsystem receives and displays the image information sent by the node rendering subsystem, so that the user sees the updated three-dimensional model image. The projection subsystem comprises four stereo projectors and four display screens. The four screens are set at 90 degrees to one another and are arranged, relative to the user, as left, center (front), right, and bottom (ground). The four projection screens each display part of the virtual scene simultaneously, together forming a complete virtual environment. Each projector serves one screen; the four projectors jointly project the three-dimensional model from different viewing angles, together forming a complete immersive stereoscopic three-dimensional virtual environment. The user can wear stereo glasses to view the three-dimensional models in the virtual environment.
As shown in Figs. 4 and 5, in this embodiment each screen is 3520 mm × 2200 mm with a 16:10 display aspect ratio, and each projector's resolution is 1920 × 1200. Each rear-projection screen is a single sheet of ultra-flat, low-gain composite-glass substrate no less than 12 mm thick; the floor screen uses front projection with a wear-resistant white material. The four projection screens each display part of the virtual scene simultaneously, together forming a complete virtual environment.
The user watches the virtual scene inside the cubic region formed by the projection screens and can operate on the model as needed, using the 3D mouse of the position tracking subsystem. A tracked rigid body of the position tracking system is bound to the 3D mouse to indicate the user's hand position, and another is bound to the stereo glasses the user wears to indicate the head position. As the user operates in the display area, the optical cameras of the position tracking system capture the user's head and hand position data and send them to the position-processing host, which processes the position data and forwards it to the multi-channel scene-management software on the management-node computer.
Parts of the invention not described in detail belong to common knowledge well known to those skilled in the art.

Claims (4)

1. An immersive virtual-reality maintenance and training simulation system, characterized by comprising: a position tracking subsystem, a three-dimensional model database, a node management subsystem, a node rendering subsystem, and a projection subsystem;
the three-dimensional model database stores three-dimensional model data, from which the node management subsystem retrieves three-dimensional models;
the position tracking subsystem obtains the user's spatial position information and outputs it to the node management subsystem; the spatial position information comprises the eye-point position of the user's head and the user's hand-operation information;
the node management subsystem reads the three-dimensional model from the model database and passes it to the node rendering subsystem, together with the head eye-point position and hand-operation information of the user provided by the position tracking subsystem;
the node rendering subsystem renders the received three-dimensional model, within a preset display range, as four parts (left, center, right, and bottom) and sends the rendered images to the projection subsystem for display, so that the user sees the whole virtual three-dimensional model; based on the received head eye-point position and hand-operation information, it computes the image that should appear within the user's field of view and sends the updated image to the projection subsystem;
the projection subsystem receives and displays the image information sent by the node rendering subsystem, so that the user sees the updated three-dimensional model image.
2. The immersive virtual-reality maintenance and training simulation system according to claim 1, characterized in that: the position tracking subsystem comprises a position-processing host, optical cameras, and tracked rigid bodies;
the tracked rigid bodies are fixed to the user's major joints and near the head and eyes;
the optical cameras emit infrared light, collect the light reflected by the tracked rigid bodies, and pass the collected data to the position-processing host;
after receiving the data, the position-processing host determines the user's spatial position by position tracking and passes the spatial position information to the node management subsystem.
3. The immersive virtual-reality maintenance and training simulation system according to claim 1, characterized in that: the projection subsystem comprises four stereo projectors and four display screens;
the four display screens are set at 90 degrees to one another and are arranged, relative to the user's position, as left, center, right, and bottom; the four projection screens each display part of the virtual scene simultaneously, together forming a complete virtual environment.
4. The immersive virtual-reality maintenance and training simulation system according to claim 2, characterized in that the position tracking is implemented as follows:
(1) Determine each reference-point position. A reference point is the spatial position of an optical camera; one optical camera is installed at each of four positions above the user's head: front-left, rear-left, front-right, and rear-right.
(2) Obtain the current distance from each reference point to the target point and to the anchor point at time K. At the initial time K the optical cameras emit infrared light; the anchor point and target point reflect it back to the cameras, which pass the returned information to the position-processing host. The host computes the current distance from each reference point to the target point and to the anchor point.
(3) Obtain the spatial coordinates of the target point and anchor point at time K by the maximum-likelihood method, as follows:
Let the coordinates of the $n$ ($n \geq 4$) reference points in three-dimensional space be $(x_1,y_1,z_1)$, $(x_2,y_2,z_2)$, ..., $(x_n,y_n,z_n)$, let the coordinate of the target point or anchor point be $X=(x,y,z)$, and let the distances from $X$ to the 1st, 2nd, ..., $n$-th reference points be $d_1, d_2, \dots, d_n$. Then, by the distance formula between two points in space:

$$\begin{cases}(x_1-x)^2+(y_1-y)^2+(z_1-z)^2=d_1^2\\(x_2-x)^2+(y_2-y)^2+(z_2-z)^2=d_2^2\\\quad\vdots\\(x_n-x)^2+(y_n-y)^2+(z_n-z)^2=d_n^2\end{cases}$$

Subtracting the last equation from each of the others and rearranging yields the linear system:

$$\begin{cases}2(x_1-x_n)x+2(y_1-y_n)y+2(z_1-z_n)z=(x_1^2+y_1^2+z_1^2)-(x_n^2+y_n^2+z_n^2)-(d_1^2-d_n^2)\\\quad\vdots\\2(x_{n-1}-x_n)x+2(y_{n-1}-y_n)y+2(z_{n-1}-z_n)z=(x_{n-1}^2+y_{n-1}^2+z_{n-1}^2)-(x_n^2+y_n^2+z_n^2)-(d_{n-1}^2-d_n^2)\end{cases}$$

This can be written as $AX=b$, where

$$A=\begin{pmatrix}2(x_1-x_n)&2(y_1-y_n)&2(z_1-z_n)\\\vdots&\vdots&\vdots\\2(x_{n-1}-x_n)&2(y_{n-1}-y_n)&2(z_{n-1}-z_n)\end{pmatrix},\qquad b=\begin{pmatrix}(x_1^2+y_1^2+z_1^2)-(x_n^2+y_n^2+z_n^2)-(d_1^2-d_n^2)\\\vdots\\(x_{n-1}^2+y_{n-1}^2+z_{n-1}^2)-(x_n^2+y_n^2+z_n^2)-(d_{n-1}^2-d_n^2)\end{pmatrix}$$

The coordinate of the target point or anchor point $X$ is then

$$\hat{X}=(A^TA)^{-1}A^Tb;$$
(4) Advance the time: K = K + i, i = i + t, where t is one sampling-interval duration and the initial value is i = 0.
(5) At time K, obtain the spatial coordinates of the target point and anchor point at time K by the maximum-likelihood method.
(6) Judge whether the anchor-point position obtained in step (5) has changed relative to the anchor-point position obtained in step (3). If it has not changed, go to step (7); if it has changed, the target point is considered to have translated, and the process goes to step (10).
(7) Judge whether the target-point position obtained in step (5) has changed relative to the target-point position obtained in step (3). If it has not changed, the target point is considered to have neither rotated nor translated, and the process goes to step (12); if it has changed, the target point is considered to have rotated about the anchor-vector axis, and the process goes to step (8).
(8) From the target-point position and the anchor-vector axis, compute the angle through which the target point has rotated about the anchor-vector axis.
(9) Update the target-point position after rotation about the anchor axis, then go to step (12).
(10) From the target-point position and the anchor-vector axis, compute the distances the target point has translated along the three coordinate axes.
(11) Update the target-point position.
(12) Judge whether the user is still using the system; if so, proceed to the next time instant by returning to step (4), otherwise terminate.
CN201510278780.0A 2015-05-27 2015-05-27 An immersive virtual-reality maintenance and training simulation system Active CN104916182B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510278780.0A CN104916182B (en) 2015-05-27 2015-05-27 An immersive virtual-reality maintenance and training simulation system

Publications (2)

Publication Number Publication Date
CN104916182A CN104916182A (en) 2015-09-16
CN104916182B (en) 2017-07-28

Family

ID=54085215

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510278780.0A Active CN104916182B (en) An immersive virtual-reality maintenance and training simulation system

Country Status (1)

Country Link
CN (1) CN104916182B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070048702A1 (en) * 2005-08-25 2007-03-01 Jang Gil S Immersion-type live-line work training system and method
CN101510074A * 2009-02-27 2009-08-19 河北大学 High-presence intelligent-perception interactive motion system and implementation method
US20090213114A1 (en) * 2008-01-18 2009-08-27 Lockheed Martin Corporation Portable Immersive Environment Using Motion Capture and Head Mounted Display
CN204246844U * 2014-12-09 2015-04-08 新疆触彩动漫科技有限公司 Virtual-reality human-computer-interaction holographic-feedback head-mounted display device
CN104573230A (en) * 2015-01-06 2015-04-29 北京卫星环境工程研究所 Virtual human work task simulation analyzing system and method for spacecraft repair


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Prospects for virtual maintenance training systems in aviation maintenance; Yang Qiong; Value Engineering; 2012-12-31; pp. 31-32 *
Design and implementation of an aviation virtual maintenance system; Liu Beibei et al.; Computer Integrated Manufacturing Systems; 2011-11-30; vol. 17, no. 11; pp. 2324-2332 *

Also Published As

Publication number Publication date
CN104916182A (en) 2015-09-16


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant