US20070141538A1 - Simulator utilizing a high resolution visual display - Google Patents

Simulator utilizing a high resolution visual display

Info

Publication number
US20070141538A1
US 2007/0141538 A1 (application US 11/483,310)
Authority
US
United States
Prior art keywords
image
trainee
simulator
line
sight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/483,310
Inventor
Edward Quinn
Randall Wallace
Michael Vogel
Jason Seeliger
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lockheed Martin Corp
Original Assignee
Lockheed Martin Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lockheed Martin Corp filed Critical Lockheed Martin Corp
Priority to US 11/483,310
Assigned to Lockheed Martin Corporation. Assignors: Quinn, Edward W.; Seeliger, Jason L.; Vogel, Michael R.; Wallace, Randall W.
Publication of US 2007/0141538 A1

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00: Simulators for teaching or training purposes
    • G09B 9/02: Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B 9/08: Simulators for teaching or training purposes for teaching control of aircraft, e.g. Link trainer

Definitions

  • FIG. 9 is a plot showing characteristics of a transparency mask and an inverse mask
  • FIG. 10 is a plot showing characteristics of another transparency mask and another inverse mask.
  • FIG. 11 is a temporal depiction of two modes of use of the present invention.
  • a simulator of the present concept is generally designated by numeral 10 .
  • a simulator 10 may include: a projection surface 50 , a floor platform 100 , a mock instrumentation apparatus 150 , a head-mounted assembly 200 , and a projection system 250 .
  • the projection surface 50 may be any surface that is suitable for displaying a projected image.
  • the shape of the projection surface 50 may include, but is not limited to: a planar surface; a cylindrical surface; a spherical surface; a non-spherical, but generally continuous surface; or a faceted surface that is made up of either flat and/or curved surfaces.
  • the projection surface 50 may include several types of discontinuities, for example: projector holes 52 or a door aperture or other means for entering and/or leaving the simulator.
  • the projection surface 50 may be composed of any suitable material including, but not limited to: fiberglass, aluminum, foam, or any other naturally occurring or synthetic substance.
  • the projection surface 50 may include one or more layers.
  • the floor platform 100 may be attached to and/or intersect the projection surface 50 .
  • hydraulics and other mechanical apparatus may be attached to the underside of the floor platform 100 .
  • These motor- or hydraulic-controlled mechanical parts cause the simulator to pitch, roll, incline, decline, rotate, or otherwise move about. These parts may offer a more realistic simulation experience.
  • the underside of the floor platform may rest directly on a floor of a building or other structure.
  • Embodiments that omit the additional mechanical parts may be particularly desirable for customers who want an affordable, compact simulator that fits into a relatively small space.
  • the simulator 10 may include a mock instrumentation apparatus 150 , which may also be referred to as a cockpit, that is generally positioned upon and supported by the floor platform 100 .
  • a goal of the mock instrumentation apparatus 150 is to provide a user or trainee with realistic simulation controls and a realistic simulation environment.
  • the mock instrumentation apparatus 150 may include a skeletal canopy 152 and an instrumentation panel 154 .
  • the instrumentation panel 154 may generally attempt to emulate the controls and display of a particular aircraft.
  • the instrumentation panel 154 may include lights, dials, gauges, LCD or CRT screens, speakers, or any other type of output device commonly found in aircraft or vehicle instrument panels; as well as levers, buttons, keyboards, switches, microphones, or any other type of input device known in this or other arts.
  • the head mounted assembly may be best understood with reference to FIG. 2A and FIG. 2B .
  • the head-mounted assembly 200 may include a helmet shell 202 , a heads-up graphics generator 204 , a visor 206 , and a line-of-sight detection apparatus 208 .
  • the helmet shell 202 generally surrounds a trainee's head. It may be made of any substance, but is generally composed of a rigid material.
  • the heads-up graphics generator 204 may be included in or on the helmet shell 202 .
  • the heads-up graphics generator 204 generally provides the trainee with additional information, including, but not limited to: a plane's speed, remaining fuel, proximity of various targets, or any other relevant data.
  • the heads-up graphics generator 204 may include one or more projectors that are mounted on the exterior of the helmet shell 202 .
  • the heads-up graphics generator 204 may be one or more projectors mounted inside the helmet shell.
  • the heads-up graphics generator 204 may include Liquid-Crystal-Displays (LCDs) or other devices in the visor itself. In any event, the additional information is projected onto or otherwise generated to appear on a heads-up display surface.
  • the heads-up graphics generator 204 may be a stereoscopic display. Such displays often include two projectors, one corresponding to each of the trainee's eyes. A common technique used in such displays is to polarize light in one direction for the right eye and in another direction for the left eye. Filters positioned near either the generator or the display aid the trainee in viewing the images.
  • FIG. 3 and FIG. 4 illustrate two sample heads-up images that a user may see. While FIG. 3 and FIG. 4 suggest angular coverage limits that a trainee may see, these angular coverage limits are merely illustrative and do not limit the scope of the present concept. For example, although FIG. 4 specifies that in a particular embodiment the high resolution projector may have an angular coverage limit of 67 degrees horizontally and 33 degrees vertically, other embodiments may have any horizontal angular coverage limit as well as any vertical angular coverage limit.
  • the visor 206 is configured to allow the trainee to see at least a portion of the projection surface 50 .
  • the visor 206 is carried by or otherwise connected to the helmet shell 202 and includes an inner surface 210 and outer surface 212 , between which a thickness is defined.
  • the visor 206 may provide the aforementioned heads-up display surface for displaying the information provided by the heads-up graphics generator 204 .
  • the heads-up display surface is the outer surface 212 of the visor 206 .
  • a heads-up graphics generator 204 is located on the outside of the helmet shell 202 and projects an image onto the outer surface 212 of the visor 206 .
  • the heads-up display surface is the inner surface 210 of the visor 206 .
  • a heads-up graphics generator 204 is located on the inside of the helmet and projects the image onto the inner surface 210 of the visor 206 .
  • the heads-up display surface is found between the visor's outer surface 212 and the inner surface 210 .
  • An example of this embodiment might include a heads-up graphics generator 204 made up of LCDs that are located in the visor 206 itself.
  • the heads-up display surface will not be found on the visor 206 , but will be found in another region of the simulator 10 .
  • the heads-up display surface may be mounted on the mock instrumentation apparatus 150 .
  • the head mounted assembly 200 also includes a line-of-sight detection apparatus 208 , as shown in FIG. 5 of the drawings.
  • the line-of-sight detection apparatus 208 may be attached to or incorporated into the helmet shell 202 , or may be separate from the helmet shell 202 , but in such a way that movement of the shell is detected.
  • the line-of-sight detection apparatus 208 detects the trainee's head position and orientation, which correspond to the trainee's instantaneous line of sight, and outputs a signal that is representative of this information. As shown in FIG. 5 , the line-of-sight detection apparatus 208 outputs line-of-sight information 214 .
  • the line-of-sight detection apparatus 208 may be an off-the-shelf system, in which case the line-of-sight information 214 is likely transmitted from the line-of-sight detection apparatus 208 via a bus or a wireless signal. In other embodiments, the line-of-sight detection apparatus 208 may be more closely integrated into the simulator, in which case the line-of-sight information 214 may be utilized by high-level software routines. Indeed, the line-of-sight information 214 may be stored in various arrays, then utilized by a host-processor or other digital processing engine in combination with various memory units and hardware peripherals. In various embodiments, the line-of-sight detection apparatus 208 detects the position and orientation of a trainee's head.
  • position and orientation and/or “line-of-sight” may include information relating to: pitch, roll, and/or yaw of the trainee's head; and/or movement of the trainee's head in the x, y, and/or z directions.
  • one line-of-sight detection apparatus may include, for example, a series of accelerometers, microprocessors, memory, software, and other components that output data that is representative of the trainee's head position and orientation.
  • the line-of-sight detection apparatus 208 detects the position and orientation of the trainee's eye using eye-tracking techniques. These techniques generally use light or other electromagnetic radiation to determine where the trainee is looking.
  • This embodiment may include for example, microprocessors, memory, software, and other components that output data that is representative of where the trainee is looking.
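As an illustration only (not part of the patent's disclosure), the position-and-orientation output described above could be represented as a simple six-degree-of-freedom record with a helper that converts yaw and pitch into a gaze direction. The class name, axis conventions, and units are all assumptions:

```python
import math
from dataclasses import dataclass

@dataclass
class LineOfSight:
    # Head position (meters) and orientation (radians); hypothetical
    # encoding of the line-of-sight information 214.
    x: float
    y: float
    z: float
    pitch: float  # positive looking up
    roll: float
    yaw: float    # positive to the left, right-handed z-up frame

    def direction(self):
        """Unit gaze vector derived from yaw (azimuth) and pitch (elevation)."""
        cp = math.cos(self.pitch)
        return (cp * math.cos(self.yaw),
                cp * math.sin(self.yaw),
                math.sin(self.pitch))
```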
  • the information in the line-of-sight information 214 may be used in various ways.
  • FIG. 5 shows the line-of-sight information 214 being transmitted to the mock instrumentation apparatus 150 , an image generator 216 , and projection system 250 .
  • the mock instrumentation apparatus 150 may utilize the line-of-sight information 214 to determine the cockpit lighting intensity and/or the lighting intensity of the instrumentation panel 154 (see FIG. 1 ).
  • the cockpit lighting and/or instrumentation panel 154 may be “turned up” when the trainee looks into the cockpit.
  • the cockpit lighting and/or instrumentation panel 154 may be “turned down” when the trainee looks out of the cockpit.
  • the projection system 250 , based upon the line-of-sight information, increases (or, if desired, decreases) the lighting level of the projected image depending upon the desired training scenario.
  • the line-of-sight information 214 may aid in providing a trainee with a more realistic simulation experience that includes cockpit activities such as map reading in daylight scenes.
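The “turned up / turned down” behavior described above can be sketched as a simple ramp on head pitch. All threshold and brightness values here are illustrative assumptions, not values from the patent:

```python
def panel_brightness(pitch_deg, full_on=-30.0, full_off=0.0, lo=0.2, hi=1.0):
    """Ramp the cockpit / instrument-panel lighting with head pitch:
    brighter as the trainee looks down into the cockpit, dimmer when
    looking out at the projected scene.  Thresholds are assumptions."""
    if pitch_deg <= full_on:       # looking well down into the cockpit
        return hi
    if pitch_deg >= full_off:      # looking out at the projection surface
        return lo
    t = (full_off - pitch_deg) / (full_off - full_on)  # 0..1 as head tilts down
    return lo + t * (hi - lo)
```

A smooth ramp rather than a hard switch avoids a distracting pop in panel brightness as the trainee's head crosses the threshold.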
  • the line-of-sight information 214 may also be transmitted to the image generator 216 and/or the projection system 250 .
  • the image generator 216 and/or the projection system 250 may utilize the line-of-sight information 214 to aid in integrating a relatively high-resolution scene into a relatively low-resolution background scene, which will both be projected onto the projection surface 50 to form a final image.
  • the line-of-sight information 214 need not enter the projection system 250 directly from the line-of-sight detection apparatus 208 .
  • the image generator 216 may send the projection system 250 one or more image information signals 218 that include both image and control information.
  • the projection system is adapted to generate images that are compatible with the capabilities of the human eye.
  • the central portion of the human eye has a greater density of light receptors than the non-central regions of the eye.
  • Standard “low-resolution” projectors do not display images that have sufficient information to saturate the light receptors in the central (foveal) portion of the eye.
  • many trainees could discern finer detail than a standard “low-resolution” projector can display.
  • these same “low-resolution” projectors can display images that contain enough detail to saturate the non-central portion of the eye. Therefore, a goal of the present projection system is to provide a final image that: (1) gives the central portion of the eye the maximum level of detail that it can detect, and (2) keeps computing and other technology costs relatively low.
  • the projection system 250 may include one or more background projectors 252 for projecting a relatively low-resolution background, and one or more scene projectors 254 for projecting a relatively high-resolution scene.
  • the background projector(s) 252 project a low-resolution image.
  • the background projector(s) 252 are collectively arranged to minimize shadowing due to the trainee and the mock instrumentation apparatus 150 .
  • the background projector(s) may have a typical effective resolution that ranges from about 20 arc-minutes/optical line pair to about 3 arc-minutes/optical line pair. In a particular embodiment, the background projector(s) have an effective resolution of approximately 4 arc-minutes/optical line pair.
  • the scene projector(s) 254 collectively project a high-resolution image of a scene.
  • the scene projector 254 may be mounted on the mock instrumentation apparatus 150 above and behind the trainee's head. In other embodiments, the scene projector 254 may be located anywhere that allows it to directly or indirectly project an image onto the projection surface 50 .
  • Typical scene projectors may have an effective resolution that ranges from about 1 arc-minute/optical line pair to about 4 arc-minutes/optical line pair. In a particular embodiment, the scene projector has a resolution of approximately 2 arc-minutes/optical line pair, which corresponds to a generally accepted resolution for the foveal portion of the human eye.
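The arc-minutes-per-optical-line-pair figures above imply concrete pixel budgets. A rough calculation, under the common (here assumed) convention that one optical line pair spans two pixels:

```python
def pixels_across(fov_deg, arcmin_per_olp, pixels_per_olp=2):
    """Pixels a projector needs across a field of view to achieve a given
    effective resolution in arc-minutes per optical line pair.  Taking one
    line pair as two pixels is a rule of thumb, not stated in the patent."""
    line_pairs = fov_deg * 60.0 / arcmin_per_olp
    return line_pairs * pixels_per_olp

# Scene projector at FIG. 4's example coverage: 67 degrees horizontal at the
# 2 arc-min/OLP foveal figure -> 67*60/2 = 2010 line pairs -> 4020 pixels.
```

The same coverage at the background projector's 4 arc-min/OLP needs only half as many pixels, which is the economic point of splitting the display into a high-resolution scene and a low-resolution background.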
  • the scene projector(s) 254 may utilize a stereoscopic display.
  • Stereoscopic displays often display two images—one corresponding to each of the trainee's eyes.
  • a common technique used in such displays is to circularly polarize light in one direction for the right eye and to circularly polarize light in another direction for the left eye.
  • a filter, which is incorporated into the visor 206 , is then used to aid the trainee in viewing the images.
  • the stereoscopic display makes it appear to the trainee that he or she is viewing a three-dimensional image.
  • the scene projector 254 may be positioned to direct its light towards a mirror 256 .
  • the scene projector's light may then reflect off the mirror 256 and onto the projection surface 50 .
  • line-of-sight information 214 (see FIG. 5 ) and/or other information will be utilized to direct the mirror's position such that the high-resolution scene stays in the trainee's line-of-sight.
  • the mirror 256 may exist anywhere that it may reflect the scene projector's light onto the projection surface 50 .
  • the mirror 256 may, for example, be mounted on scaffolding extending from the mock instrumentation apparatus 150 .
  • the mirror 256 may pivot independently about a first axis and the scaffolding may rotate about a second axis.
  • FIG. 6 shows a block diagram illustrating aspects of the simulator 10 .
  • the image generator 216 receives image information from an image library 258 .
  • This image library 258 may be a software library and likely includes geometries, textures, and shapes that are based on polygons. Indeed, the image library may provide two different images of the same object, wherein one image of the object is used in high resolution scenes, and another image of the same object is used in low resolution scenes. Selection of which image is used may be based on a number of factors, such as the line-of-sight information or the like.
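The dual-model lookup described above (one model for high-resolution scenes, one for low-resolution scenes) might be driven by angular separation from the line of sight. This is a sketch only; the function name, the small-angle distance metric, and the 15-degree half-angle are assumptions:

```python
import math

def pick_model(obj_az_deg, obj_el_deg, los_az_deg, los_el_deg,
               scene_half_angle_deg=15.0):
    """Select the high- or low-resolution model of a library object based
    on whether it falls inside the high-resolution scene region centered
    on the trainee's line of sight.  Half-angle is illustrative."""
    # Wrap azimuth difference into [-180, 180) so 359 vs 1 degree is "close".
    d_az = (obj_az_deg - los_az_deg + 180.0) % 360.0 - 180.0
    d_el = obj_el_deg - los_el_deg
    separation = math.hypot(d_az, d_el)  # small-angle approximation
    return "high_res" if separation <= scene_half_angle_deg else "low_res"
```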
  • Based on how a trainee manipulates the cockpit controls 260 , the image generator 216 outputs an image signal 262 that is representative of the trainee's actions in the simulated environment.
  • the projector system 250 then utilizes the image signal 262 to generate and display a final image 264 on the projection surface 50 .
  • when the trainee commands a climb, for example, the image generator 216 outputs a corresponding image signal 262 , which results in the projection system 250 displaying a final image 264 that appears to show the trainee climbing in altitude.
  • FIG. 7 shows a block diagram with a line-of-sight detection apparatus 208 included in the simulator.
  • the line-of-sight-detection apparatus 208 generates line-of-sight information 214 .
  • the image generator 216 still may account for the image library 258 and the cockpit controls 260 , but now may additionally account for the line-of-sight information 214 . Based on one or more of these inputs (the image library 258 , the cockpit controls 260 and/or the line-of-sight information 214 ), the image generator 216 outputs a “low-resolution” background-image signal 266 and a “high-resolution” scene-image signal 268 .
  • One or more background projectors 252 receive the background-image signal 266 and project a “low-resolution” background image 272 .
  • one or more scene projectors 254 receive the scene-image signal 268 and project a “high-resolution” scene image 276 .
  • the background image 272 and the scene image 276 are merged on the projection surface 50 to form a final image.
  • the background image 272 has a circular region with lower intensity than the remainder of the background image.
  • the scene image 276 is circular and is displayed in the low-intensity, circular region in the background.
  • a circular scene-image 276 may be aligned with a low-intensity, circular background image to form a final image. This may be accomplished when the image generator 216 uses the line-of-sight information 214 to align the scene-image 276 with where the trainee is looking.
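Aligning the circular scene image with where the trainee is looking amounts to finding where the gaze ray meets the projection surface. For a spherical dome centered at the origin, a minimal sketch (all names assumed) solves the ray-sphere intersection:

```python
import math

def dome_point(eye, gaze_dir, dome_radius):
    """Point where the trainee's gaze ray from `eye` along unit vector
    `gaze_dir` meets a spherical projection surface of radius
    `dome_radius` centered at the origin: solve |eye + t*d|^2 = r^2
    for the positive root t."""
    ex, ey, ez = eye
    dx, dy, dz = gaze_dir                      # assumed unit length
    b = 2.0 * (ex * dx + ey * dy + ez * dz)
    c = ex * ex + ey * ey + ez * ez - dome_radius ** 2
    t = (-b + math.sqrt(b * b - 4.0 * c)) / 2.0  # a == 1 for a unit direction
    return (ex + t * dx, ey + t * dy, ez + t * dz)
```

The image generator would center the low-intensity circular hole in the background, and the scene projector (or its steering mirror) the high-resolution circle, on this point.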
  • various embodiments of the image generator 216 may include several sub-modules including: a background imaging module 278 , a scene imaging module 280 , a transparency mask 282 , and an inverse mask 284 .
  • the background imaging module 278 may generally include hardware and software components.
  • the background imaging module 278 receives input from the image library 258 and the cockpit controls 260 and creates a background-image signal 286 which is received by the transparency mask 282 .
  • the mask 282 modifies the background image signal 286 so that only the required signal information is sent on to the background projector 252 .
  • the scene imaging module 280 may generally include hardware and software components.
  • the scene imaging module receives input from the image library 258 and the cockpit control 260 and creates a scene-image signal 288 .
  • the inverse mask 284 then receives the scene-image signal 288 .
  • the mask 284 modifies the scene-image signal 288 so that only the required signal information is sent on to the scene projector 254 .
  • the scene projector 254 directly receives the scene-image signal 288 .
  • the scene projector 254 generates a corresponding signal which is then received by the inverse mask 284 , which in this embodiment is not maintained by the image generator 216 .
  • This embodiment allows for more processing capability of the generator 216 to be specifically directed to the background imaging 278 , the scene imaging 280 , and the transparency mask 282 .
  • Either of the systems shown in FIGS. 8A and 8B may provide for a computerized control 290 to facilitate merging of the background and scene images.
  • the computerized control 290 may be a stand-alone computer-processing device or it may be part of the other processing devices such as the image generator, the projection system or the like.
  • the control 290 receives the line of sight information 214 and generates a control signal 292 which is, in turn, received by the background projector 252 and/or the scene projector 254 .
  • the projectors 252 and 254 are able to utilize the line of sight information to assist in generating appropriate low resolution and high resolution images for projection.
  • the transparency mask 282 and inverse mask 284 may exist in several embodiments and are best understood with reference to FIGS. 9-10 .
  • the transparency mask and the inverse mask blend the high-resolution scene with the low-resolution background in the region of the final image substantially in the trainee's line-of-sight. This “blending” of the high-resolution scene with the low-resolution background image is designed such that the optical modulation transfer function (MTF) response of the human visual system is minimal.
  • FIG. 9 shows a particular embodiment, in which the transparency mask 282 and the inverse mask 284 work in tandem to display a final image with normalized light intensity.
  • the transparency mask 282 has a central region 312 that corresponds to the center of the high-resolution image, and edge regions 314 , 316 outside the central region 312 .
  • the transparency mask 282 provides a light intensity of about 0 in the central region 312 ; a light intensity of about 1 at the edge regions 314 , 316 ; and is generally piece-wise continuous therebetween. This piece-wise function may approximate a continuous function, or may in fact be continuous.
  • the light intensity provided by the transparency mask 282 has two inflection points 324 , 326 that exist between the central region and the edge regions 314 , 316 .
  • the inverse mask 284 also has a central region 318 , and edge regions 320 , 322 outside the central region 318 .
  • This inverse mask 284 provides for a light intensity of about 1 in the central region 318 ; and a light intensity of about 0 at the edge regions 320 , 322 ; and is generally piece-wise continuous therebetween.
  • This piece-wise function may approximate a continuous function or may in fact be continuous.
  • the light intensity provided by the inverse mask has two inflection points 328 , 330 that exist between the central region 318 and the edge regions 320 , 322 .
  • the inflection points 324 , 326 , 328 , 330 provided by both masks aid in “blending” the low-resolution background and the high-resolution scenes together. Because the images blend, the trainee is less likely to notice a distinct “ring” that might otherwise be observed between the background image and the scene image. Thus, negative training is minimized because the image looks realistic.
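One way to realize a mask pair with the FIG. 9 properties (roughly 0 and 1 plateaus, a piece-wise continuous ramp with an inflection point between them, and a normalized sum) is a cubic smoothstep over a normalized radius. The specific radii and the smoothstep choice are assumptions for illustration:

```python
def smoothstep(edge0, edge1, x):
    """Cubic ramp with zero slope at both ends; its midpoint is an
    inflection point, like points 324-330 in FIG. 9."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def inverse_mask(r, inner=0.6, outer=1.0):
    """Scene mask 284: intensity ~1 in the central region, ~0 at the
    edges.  `r` is normalized distance from the scene center."""
    return 1.0 - smoothstep(inner, outer, r)

def transparency_mask(r, inner=0.6, outer=1.0):
    """Background mask 282: the exact complement, so the projected pair
    sums to 1 everywhere and the final image intensity is normalized."""
    return 1.0 - inverse_mask(r, inner, outer)
```

Because the two masks are complements by construction, their sum is identically 1, which is the "normalized light intensity" property of this embodiment.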
  • FIG. 10 shows another embodiment of the transparency mask and the inverse mask.
  • the light intensity provided from a transparency mask 332 and an inverse mask 334 are super-imposed on one another.
  • the light intensity of the final image is not normalized.
  • the intensity of the central, high-resolution area is approximately double the intensity of the edge regions 338 , 340 .
  • the effective contrast between the high-resolution region and the background image is also increased.
  • This scheme solves a long-standing problem with projection domes known as the “integration effect.” Because light reflects off all portions of the dome, black regions often appear gray.
  • the present scheme causes black regions to appear darker because the effective contrast between the high-resolution region and the background image is also increased.
  • FIG. 10 illustrates a particular embodiment where the light intensity of the central, high resolution area is approximately double the intensity of the edge regions
  • the present concept is not limited to that embodiment.
  • the central region 336 may have a light intensity that is slightly greater or significantly greater. This range may be from a fraction of a percent to an order of magnitude or more.
  • the central region 336 may have a light intensity that is 1% greater than the edge regions 338 , 340
  • the central area 336 may have a light intensity that is 10-times the light intensity of the edge regions 338 , 340 .
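The non-normalized FIG. 10 embodiment can be sketched the same way: one plausible realization (an assumption, not the patent's exact curves) keeps the background contribution at 1 everywhere and lets the scene mask add intensity in the center, so the blended center is a chosen multiple of the edge intensity:

```python
def smoothstep(edge0, edge1, x):
    # Cubic ramp with zero slope at both ends (inflection at midpoint).
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def final_intensity(r, boost=2.0, inner=0.6, outer=1.0):
    """Non-normalized blend in the style of FIG. 10: background at 1
    everywhere plus a scene term of (boost - 1) at the center, so the
    central region is about `boost` times the edge intensity.  boost=2
    matches the patent's example; values from just over 1 up to about
    10 are contemplated.  Radii are illustrative assumptions."""
    return 1.0 + (boost - 1.0) * (1.0 - smoothstep(inner, outer, r))
```

Raising the center relative to the reflected ambient light from the rest of the dome is what makes black regions look darker, countering the "integration effect" described above.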
  • the scene projector may include two modes of operation: a “rapid” translation mode 350 , and a “slow” translation mode 352 .
  • In the “rapid” translation mode 350 , if the translation velocity of the head-mounted assembly 200 exceeds a predetermined level, the scene projector 254 may become misaligned and may turn off until it can re-align with the trainee's line-of-sight.
  • In the “slow” translation mode 352 , if the translation velocity of the head-mounted assembly 200 is within a predetermined level, the scene projector 254 slowly and accurately scans the dome or projection surface 50 , so as to correspond with the trainee's line-of-sight.
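The two-mode behavior reduces to a threshold test on head slew rate. The threshold value below is an illustrative assumption; the patent only says "a predetermined level":

```python
def scene_projector_mode(head_slew_deg_per_s, threshold_deg_per_s=60.0):
    """Pick between the two FIG. 11 modes: above a predetermined head-slew
    rate the scene projector blanks until it can re-align ('rapid' mode
    350); at or below it, the projector tracks the line of sight ('slow'
    mode 352).  The 60 deg/s threshold is assumed for illustration."""
    return "rapid" if head_slew_deg_per_s > threshold_deg_per_s else "slow"
```

Blanking during fast head motion avoids projecting a misaligned high-resolution circle that the mirror servo cannot keep centered on the trainee's gaze.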
  • the simulator 10 allows for movement of a high resolution area that is always positioned so that the focus of the trainee's vision is centered on the moving area.
  • the simulator is also advantageous in that a transparency mask and an inverse mask are used to blend the centered high resolution area which is surrounded by the low resolution area. This blending is essentially seamless and prevents, or at least significantly reduces, negative training.
  • Another advantage of the simulator 10 is that it allows for changes in the perceived cockpit lighting intensity based upon line of sight information associated with the pilot's head.
  • Still another advantage of the present invention is that it allows for use of different models of the same image from an image library, such as an enemy fighter, depending upon whether the image appears in the high resolution area or the low resolution area. Accordingly, a high resolution simulator can be realized with relatively low-cost system components.

Abstract

A simulator comprising a projection surface, a line-of-sight detection apparatus for detecting the orientation of a trainee's head, a projection system for projecting a high-resolution image and a low-resolution image onto the projection surface, and at least one mask for merging the high-resolution image and the low-resolution image in an area that is substantially in the trainee's line-of-sight.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of U.S. Provisional Application No. 60/697,652.
  • TECHNICAL FIELD
  • The present invention relates generally to simulators. More particularly, the present invention relates to simulators that provide a high resolution display.
  • BACKGROUND OF THE INVENTION
  • To obtain a realistic simulation, many flight simulators, particularly those for fighter and attack aircraft, use visual systems. These visual systems typically combine an image generation system (e.g., a computer image generator) with an image display subsystem, (e.g., cathode ray tubes or a digital panel projector and screen.)
  • For maximum training capability, these visual systems should allow a trainee to see a wide field of view. Further, the trainee should be able to search for hidden targets and other objects within the field of view, and discriminate one object from another. In effect, the ideal visual system would provide a high detail scene throughout the entire field of view.
  • While today's technology would allow a system to provide a high detail scene throughout the entire field of view, customers are not willing to pay the excessive cost associated with such a system. Thus, a visual system that has these capabilities is economically unfeasible.
  • In trying to develop an economical visual system that displays a high detail scene, prior art visual systems have had a major deficiency in that they conspicuously define or present an object within the field of view. In these prior art systems, if a particular scene is presented with a superior resolution with respect to the surrounding low detail background, then this variation will catch the trainee's attention in an unrealistic fashion and prevent him from discriminating objects. These particular objects thus “stand out” in an unrealistic fashion and result in negative training.
  • Thus, a need exists for an improved visual system that provides the desired realism that allows a trainee to interpret various visual cues without resulting in negative training.
  • SUMMARY OF THE INVENTION
  • In light of the foregoing, it is a first aspect of the present invention to provide a simulator utilizing a high-resolution visual display.
  • It is another aspect of the present invention to provide a simulator that includes a projection surface; a line-of-sight detection apparatus for detecting the orientation of a trainee's head; a projection system for projecting a high-resolution image and a low resolution image; and at least one mask for merging the high-resolution image and the low-resolution image in an area that is substantially in the trainee's line-of-sight on the projection surface.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • For a complete understanding of the objects, techniques and structure of the invention, reference should be made to the following detailed description and accompanying drawings, wherein:
  • FIG. 1 is a perspective drawing of a partially broken-away simulator made according to the concepts of the present invention;
  • FIG. 2A is a perspective view of a trainee wearing a head-mounted assembly utilized with the simulator;
  • FIG. 2B is a cross-sectional view of a helmet shell and visor of the head-mounted assembly;
  • FIG. 3 illustrates a sample field of view that a trainee might see;
  • FIG. 4 illustrates another sample field of view that a trainee might see;
  • FIG. 5 is a block diagram showing various components in the simulator that might utilize line-of-sight information;
  • FIG. 6 is a block diagram showing an image generator coupled to a projection system for projecting a final image onto a projection surface;
  • FIG. 7 is a block diagram showing a simulator with a line-of-sight detection apparatus as well as a background projector and a scene projector;
  • FIG. 8A is a block diagram showing an arrangement of various sub-modules that may exist in embodiments of the present invention;
  • FIG. 8B is a block diagram showing another arrangement of various sub-modules that may exist in embodiments of the present invention;
  • FIG. 9 is a plot showing characteristics of a transparency mask and an inverse mask;
  • FIG. 10 is a plot showing characteristics of another transparency mask and another inverse mask; and
  • FIG. 11 is a temporal depiction of two modes of use of the present invention.
  • BEST MODE FOR CARRYING OUT THE INVENTION
  • The present concept is best understood by referring to the drawings. Although the drawings are generally tailored towards a flight simulator, aspects of the present concept are equally applicable to land-vehicle simulators, water-vehicle simulators, or other various simulation environments.
  • As shown in FIG. 1, a simulator of the present concept is generally designated by the numeral 10. The simulator 10 may include: a projection surface 50, a floor platform 100, a mock instrumentation apparatus 150, a head-mounted assembly 200, and a projection system 250.
  • The projection surface 50 may be any surface that is suitable for displaying a projected image. The shape of the projection surface 50 may include, but is not limited to: a planar surface; a cylindrical surface; a spherical surface; a non-spherical, but generally continuous surface; or a faceted surface that is made up of either flat and/or curved surfaces. The projection surface 50 may include several types of discontinuities, for example: projector holes 52 or a door aperture or other means for entering and/or leaving the simulator. The projection surface 50 may be composed of any suitable material including, but not limited to: fiberglass, aluminum, foam, or any other naturally occurring or synthetic substance. The projection surface 50 may include one or more layers.
  • The floor platform 100 may be attached to and/or intersect the projection surface 50. In various embodiments, hydraulics and other mechanical apparatus may be attached to the underside of the floor platform 100. These motor- or hydraulically-controlled mechanical parts cause the simulator to pitch, roll, incline, decline, rotate, or otherwise move about. These parts may offer a more realistic simulation experience.
  • Other various embodiments will not include these mechanical parts. In these other embodiments, the underside of the floor platform may rest directly on a floor of a building or other structure. Embodiments that do not include the additional mechanical parts may be particularly desirable for customers who desire an affordable simulator that may fit into a relatively small space. In particular, when the floor platform rests directly on the floor of a building or other structure and the projection surface is non-spherical, customers may receive a particularly compact simulator.
  • The simulator 10 may include a mock instrumentation apparatus 150, which may also be referred to as a cockpit, that is generally positioned upon and supported by the floor platform 100. Although the drawings are generally tailored towards a mock instrumentation apparatus 150 relating to a flight simulator, aspects of the present concept are equally applicable to land-vehicle simulators, water-vehicle simulators, or other various simulation environments. A goal of the mock instrumentation apparatus 150 is to provide a user or trainee with realistic simulation controls and a realistic simulation environment.
  • The mock instrumentation apparatus 150 may include a skeletal canopy 152 and an instrumentation panel 154. The instrumentation panel 154 may generally attempt to emulate the controls and display of a particular aircraft. The instrumentation panel 154 may include lights, dials, gauges, LCD or CRT screens, speakers, or any other type of output device commonly found in aircraft or vehicle instrument panels; as well as levers, buttons, keyboards, switches, microphones, or any other type of input device known in this or other arts.
  • To make the simulation as realistic as possible, trainees may wear a head-mounted assembly 200. The head-mounted assembly may be best understood with reference to FIG. 2A and FIG. 2B. The head-mounted assembly 200 may include a helmet shell 202, a heads-up graphics generator 204, a visor 206, and a line-of-sight detection apparatus 208. The helmet shell 202 generally surrounds a trainee's head. It may be made of any substance, but is generally composed of a rigid material.
  • The heads-up graphics generator 204 may be included in or on the helmet shell 202. The heads-up graphics generator 204 generally provides the trainee with additional information, including, but not limited to: a plane's speed, remaining fuel, proximity of various targets, or any other relevant data. In one embodiment, the heads-up graphics generator 204 may include one or more projectors that are mounted on the exterior of the helmet shell 202. In other embodiments, the heads-up graphics generator 204 may be one or more projectors mounted inside the helmet shell. In still other embodiments, the heads-up graphics generator 204 may include Liquid-Crystal-Displays (LCDs) or other devices in the visor itself. In any event, the additional information is projected onto or otherwise generated to appear on a heads-up display surface.
  • In various embodiments, the heads-up graphics generator 204 may be a stereoscopic display. Such displays often include two projectors—one corresponding to each of the trainee's eyes. A common technique used in such displays is to polarize light in one direction for the right eye and to polarize light in another direction for the left eye. Filters positioned proximal to either the generator or the display aid the trainee in viewing the images.
  • FIG. 3 and FIG. 4 illustrate two sample heads-up images that a user may see. While FIG. 3 and FIG. 4 suggest angular coverage limits that a trainee may see, these angular coverage limits are merely illustrative and do not limit the scope of the present concept. For example, although FIG. 4 specifies that in a particular embodiment the high resolution projector may have an angular coverage limit of 67 degrees horizontally and 33 degrees vertically, other embodiments may have any horizontal angular coverage limit as well as any vertical angular coverage limit.
  • The visor 206 is configured to allow the trainee to see at least a portion of the projection surface 50. As shown in FIG. 2B, the visor 206 is carried by or otherwise connected to the helmet shell 202 and includes an inner surface 210 and outer surface 212, between which a thickness is defined. In addition to allowing the trainee to see at least a portion of the projection surface 50, the visor 206 may provide the aforementioned heads-up display surface for displaying the information provided by the heads-up graphics generator 204.
  • In various embodiments, the heads-up display surface is the outer surface 212 of the visor 206. One such embodiment is where a heads-up graphics generator 204 is located on the outside of the helmet shell 202 and projects an image onto the outer surface 212 of the visor 206.
  • In other embodiments, the heads-up display surface is the inner surface 210 of the visor 206. An example of this type of embodiment is where a heads-up graphics generator 204 is located on the inside of the helmet and projects the image onto the inner surface 210 of the visor 206. In still other embodiments, the heads-up display surface is found between the visor's outer surface 212 and the inner surface 210. An example of this embodiment might include a heads-up graphics generator 204 made up of LCDs that are located in the visor 206 itself. In other embodiments, the heads-up display surface will not be found on the visor 206, but will be found in another region of the simulator 10. For example, the heads-up display surface may be mounted on the mock instrumentation apparatus 150.
  • The head-mounted assembly 200 also includes a line-of-sight detection apparatus 208, as shown in FIG. 5 of the drawings. The line-of-sight detection apparatus 208 may be attached to or incorporated into the helmet shell 202, or may be separate from the helmet shell 202 but positioned such that movement of the shell is detected. Generally, the line-of-sight detection apparatus 208 detects the trainee's head position and orientation, which correspond to the trainee's instantaneous line of sight, and outputs a signal that is representative of this information. As shown in FIG. 5, the line-of-sight detection apparatus 208 outputs line-of-sight information 214.
  • The line-of-sight detection apparatus 208 may be an off-the-shelf system, in which case the line-of-sight information 214 is likely transmitted from the line-of-sight detection apparatus 208 via a bus or a wireless signal. In other embodiments, the line-of-sight detection apparatus 208 may be more closely integrated into the simulator, in which case the line-of-sight information 214 may be utilized by high-level software routines. Indeed, the line-of-sight information 214 may be stored in various arrays, then utilized by a host-processor or other digital processing engine in combination with various memory units and hardware peripherals. In various embodiments, the line-of-sight detection apparatus 208 detects the position and orientation of a trainee's head. The terms “position and orientation” and/or “line-of-sight” may include information relating to: pitch, roll, and/or yaw of the trainee's head; and/or movement of the trainee's head in the x, y, and/or z directions. Thus, one line-of-sight detection apparatus may include, for example, a series of accelerometers, microprocessors, memory, software, and other components that output data that is representative of the trainee's head position and orientation.
  • In an alternative embodiment, the line-of-sight detection apparatus 208 detects the position and orientation of the trainee's eye using eye-tracking techniques. These techniques generally use light or other electromagnetic radiation to determine where the trainee is looking. This embodiment may include for example, microprocessors, memory, software, and other components that output data that is representative of where the trainee is looking.
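As an illustration of how the pitch and yaw reported by the line-of-sight detection apparatus might be reduced to a usable gaze direction, the following sketch converts head orientation angles into a unit line-of-sight vector. The coordinate convention and function name are illustrative assumptions for this sketch, not part of the disclosed apparatus:

```python
import math

def gaze_vector(yaw_deg, pitch_deg):
    """Convert head yaw and pitch (degrees) into a unit line-of-sight vector.

    Convention (an assumption for this sketch): x points straight ahead,
    y points to the trainee's left, z points up. Head roll does not change
    the gaze direction, so it is omitted here.
    """
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    return (math.cos(pitch) * math.cos(yaw),
            math.cos(pitch) * math.sin(yaw),
            math.sin(pitch))
```

A vector of this form could then be intersected with the projection surface 50 to determine where the high-resolution scene should be centered.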
  • The information in the line-of-sight information 214 may be used in various ways. For example, FIG. 5 shows the line-of-sight information 214 being transmitted to the mock instrumentation apparatus 150, an image generator 216, and projection system 250.
  • In various embodiments, the mock instrumentation apparatus 150 may utilize the line-of-sight information 214 to determine the cockpit lighting intensity and/or lighting intensity of the instrumentation panel 154 (see FIG. 1). For example, the cockpit lighting and/or instrumentation panel 154 may be “turned up” when the trainee looks into the cockpit. Alternatively, the cockpit lighting and/or instrumentation panel 154 may be “turned down” when the trainee looks out of the cockpit. In other words, the projection system 250, based upon the line-of-sight information, increases (or, if desired, decreases) the lighting level of the projected image depending upon a desired training scenario. Thus, referring again to FIG. 5, the line-of-sight information 214 may aid in providing a trainee with a more realistic simulation experience that includes cockpit activities such as map reading in daylight scenes.
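The “turned up” / “turned down” behavior described above can be sketched as a simple test on the trainee's gaze pitch. The threshold angle and the two intensity values below are illustrative assumptions, not values from the disclosure:

```python
def cockpit_light_level(gaze_pitch_deg, threshold_deg=-20.0,
                        in_cockpit=1.0, out_of_cockpit=0.3):
    """Return a normalized panel lighting intensity.

    Gazing below the (assumed) threshold pitch counts as looking down
    into the cockpit, so the panel is "turned up"; otherwise the trainee
    is looking out at the projected scene and the panel is "turned down".
    """
    if gaze_pitch_deg < threshold_deg:
        return in_cockpit
    return out_of_cockpit
```

A real implementation would presumably ramp between the two levels rather than switch abruptly, but the threshold test captures the idea.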
  • The line-of-sight information 214 may also be transmitted to the image generator 216 and/or the projection system 250. As discussed in detail further herein, the image generator 216 and/or the projection system 250 may utilize the line-of-sight information 214 to aid in integrating a relatively high-resolution scene into a relatively low-resolution background scene, which will both be projected onto the projection surface 50 to form a final image. Note that in various embodiments, the line-of-sight information 214 need not enter the projection system 250 directly from the line-of-sight detection apparatus 208. In these embodiments, the image generator 216 may send the projection system 250 one or more image information signals 218 that include both image and control information.
  • The projection system is adapted to generate images that are compatible with the capabilities of the human eye. The central portion of the human eye has a greater density of light receptors than the non-central regions of the eye. Standard “low-resolution” projectors do not display images that have sufficient information to saturate the light receptors in the central (foveal) portion of the eye. Thus, in an area directly in the center of their field of view, many trainees could discern finer detail than a standard “low-resolution” projector can display. However, these same “low-resolution” projectors can display images that contain enough detail to saturate the non-central portion of the eye. Therefore, a goal of the present projection system is to provide a final image that: (1) gives the central portion of the eye the maximum level of detail that it can detect, and (2) keeps computing and other technology costs relatively low.
  • Referring again to FIG. 1, the projection system 250 may include one or more background projectors 252 for projecting a relatively low-resolution background, and one or more scene projectors 254 for projecting a relatively high-resolution scene.
  • The background projector(s) 252 project a low-resolution image. Ideally, the background projector(s) 252 are collectively arranged to minimize shadowing due to the trainee and the mock instrumentation apparatus 150. Generally, the background projector(s) may have a typical effective resolution that ranges from about 20 arc-minutes/optical line pair to about 3 arc-minutes/optical line pair. In a particular embodiment, the background projector(s) have an effective resolution of approximately 4 arc-minutes/optical line pair.
  • The scene projector(s) 254 collectively project a high-resolution image of a scene.
  • In one or more embodiments, the scene projector 254 may be mounted on the mock instrumentation apparatus 150 above and behind the trainee's head. In other embodiments, the scene projector 254 may be located anywhere that allows it to directly or indirectly project an image onto the projection surface 50. Typical scene projectors may have an effective resolution that ranges from about 1 arc-minute/optical line pair to about 4 arc-minutes/optical line pair. In a particular embodiment, the scene projector has a resolution of approximately 2 arc-minutes/optical line pair, which corresponds to a generally accepted resolution for the foveal portion of the human eye.
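The arc-minutes-per-optical-line-pair figures above imply a pixel budget for each projector. Assuming, as a rough sketch, that one optical line pair corresponds to two projected pixels (a simplification that ignores optical losses), the pixel count needed across a given angular coverage can be estimated as follows; the 67-degree coverage comes from FIG. 4 and the 2 arc-minute figure is the foveal resolution quoted above:

```python
def pixels_for_coverage(coverage_deg, arcmin_per_line_pair):
    """Estimate the pixels needed across an angular coverage.

    One optical line pair is taken to span two pixels -- an assumed
    simplification, since real projector optics lose some resolution.
    """
    line_pairs = coverage_deg * 60.0 / arcmin_per_line_pair
    return 2.0 * line_pairs

# A 67-degree scene projector at 2 arc-min/line pair needs about 4020
# pixels, while the same 67-degree slice of a 4 arc-min/line pair
# background projector needs only about 2010.
```

This factor-of-two (and the corresponding factor-of-four in pixel area) is why rendering the high-detail scene only in a small inset is so much cheaper than rendering it everywhere.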
  • The scene projector(s) 254 may utilize a stereoscopic display. Stereoscopic displays often display two images—one corresponding to each of the trainee's eyes. A common technique used in such displays is to circularly polarize light in one direction for the right eye and to circularly polarize light in another direction for the left eye. A filter, which is incorporated into the visor 206, is then used to aid the trainee in viewing the images. Often, the stereoscopic display makes it appear to the trainee that he or she is viewing a three-dimensional image.
  • In various embodiments, the scene projector 254 may be positioned to direct its light towards a mirror 256. The scene projector's light may then reflect off the mirror 256 and onto the projection surface 50. In such an embodiment, line-of-sight information 214 (see FIG. 5) and/or other information will be utilized to direct the mirror's position such that the high-resolution scene stays in the trainee's line-of-sight.
  • Generally, the mirror 256 may exist anywhere that it may reflect the scene projector's light onto the projection surface 50. In a particular embodiment, the mirror 256 may, for example, be mounted on scaffolding extending from the mock instrumentation apparatus 150. In a very particular embodiment, the mirror 256 may pivot independently about a first axis and the scaffolding may rotate about a second axis.
  • FIG. 6 shows a block diagram illustrating aspects of the simulator 10. As shown, the image generator 216 receives image information from an image library 258. This image library 258 may be a software library and likely includes geometries, textures, and shapes that are based on polygons. Indeed, the image library may provide two different images of the same object, wherein one image of the object is used in high resolution scenes, and another image of the same object is used in low resolution scenes. Selection of which image is used may be based on a number of factors, such as the line-of-sight information or the like. Based on how a trainee manipulates the cockpit controls 260, the image generator 216 outputs an image signal 262 that is representative of the trainee's actions in the simulated environment. The projection system 250 then utilizes the image signal 262 to generate and display a final image 264 on the projection surface 50. For example, if a trainee pulls back on the control stick in the cockpit, the image generator 216 outputs a corresponding image signal 262, which results in the projection system 250 displaying a final image 264 that appears to show the trainee climbing in altitude.
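The image library's use of two models of the same object can be sketched as a selection based on the angular distance between the object and the trainee's line of sight. The 16.5-degree half-angle below is an illustrative assumption (half of the 33-degree vertical coverage mentioned for FIG. 4), not a value specified by the disclosure:

```python
def select_model(angle_from_gaze_deg, inset_half_angle_deg=16.5):
    """Pick the high- or low-resolution model of an object.

    Objects falling inside the high-resolution inset (centered on the
    trainee's line of sight) use the detailed model; everything else
    uses the cheaper background model.
    """
    if angle_from_gaze_deg <= inset_half_angle_deg:
        return "high_resolution_model"
    return "low_resolution_model"
```

In practice the selection would also hinge on the other factors the passage mentions, but the gaze-angle test is the essential one.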
  • FIG. 7 shows a block diagram with a line-of-sight detection apparatus 208 included in the simulator. In this embodiment, the line-of-sight-detection apparatus 208 generates line-of-sight information 214. The image generator 216 still may account for the image library 258 and the cockpit controls 260, but now may additionally account for the line-of-sight information 214. Based on one or more of these inputs (the image library 258, the cockpit controls 260 and/or the line-of-sight information 214), the image generator 216 outputs a “low-resolution” background-image signal 266 and a “high-resolution” scene-image signal 268. One or more background projectors 252 receive the background-image signal 266 and project a “low-resolution” background image 272. Similarly, one or more scene projectors 254 receive the scene-image signal 268 and project a “high-resolution” scene image 276.
  • The background image 272 and the scene image 276 are merged on the projection surface 50 to form a final image. In a particular embodiment, the background image 272 has a circular region with lower intensity than the remainder of the background image. In this embodiment, the scene image 276 is circular and is displayed in the low-intensity, circular region in the background. In other words, a circular scene-image 276 may be aligned with a low-intensity, circular background image to form a final image. This may be accomplished when the image generator 216 uses the line-of-sight information 214 to align the scene-image 276 with where the trainee is looking.
  • As shown in FIG. 8A, various embodiments of the image generator 216 may include several sub-modules including: a background imaging module 278, a scene imaging module 280, a transparency mask 282, and an inverse mask 284. The background imaging module 278 may generally include hardware and software components. The background imaging module 278 receives input from the image library 258 and the cockpit controls 260 and creates a background-image signal 286 which is received by the transparency mask 282. The mask 282, as will be explained in further detail, modifies the background image signal 286 so that only the required signal information is sent on to the background projector 252.
  • Similarly, the scene imaging module 280 may generally include hardware and software components. The scene imaging module receives input from the image library 258 and the cockpit control 260 and creates a scene-image signal 288. The inverse mask 284 then receives the scene-image signal 288.
  • The mask 284, which will also be described in detail, modifies the scene-image signal 288 so that only the required signal information is sent on to the scene projector 254. In the alternative embodiment shown in FIG. 8B, the scene projector 254 directly receives the scene-image signal 288. The scene projector 254 generates a corresponding signal which is then received by the inverse mask 284, which in this embodiment is not maintained by the image generator 216. This embodiment allows more of the processing capability of the generator 216 to be specifically directed to the background imaging 278, the scene imaging 280, and the transparency mask 282.
  • Either of the systems shown in FIGS. 8A and 8B may provide for a computerized control 290 to facilitate merging of the background and scene images. The computerized control 290 may be a stand-alone computer-processing device or it may be part of the other processing devices such as the image generator, the projection system or the like. In any event, the control 290 receives the line of sight information 214 and generates a control signal 292 which is, in turn, received by the background projector 252 and/or the scene projector 254. As such, the projectors 252 and 254 are able to utilize the line of sight information to assist in generating appropriate low resolution and high resolution images for projection.
  • The transparency mask 282 and inverse mask 284 may exist in several embodiments and are best understood with reference to FIGS. 9-10. Generally, the transparency mask and the inverse mask blend the high-resolution scene with the low-resolution background in the region of the final image substantially in the trainee's line-of-sight. This “blending” of the high-resolution scene with the low-resolution background image is designed so that it produces a minimal response in the optical modulation transfer function (MTF) of the human visual system; in other words, the transition between the two images is essentially imperceptible to the trainee.
  • FIG. 9 shows a particular embodiment, in which the transparency mask 282 and the inverse mask 284 work in tandem to display a final image with normalized light intensity. The transparency mask 282 has a central region 312 that corresponds to the center of the high-resolution image, and edge regions 314, 316 outside the central region 312. The transparency mask 282 provides a light intensity of about 0 in the central region 312; a light intensity of about 1 at the edge regions 314, 316; and is generally piece-wise continuous therebetween. This piece-wise function may approximate a continuous function, or may in fact be continuous. The light intensity provided by the transparency mask 282 has two inflection points 324, 326 that exist between the central region 312 and the edge regions 314, 316.
  • Still referring to FIG. 9, the inverse mask 284 also has a central region 318, and edge regions 320, 322 outside the central region 318. This inverse mask 284 provides a light intensity of about 1 in the central region 318; a light intensity of about 0 at the edge regions 320, 322; and is generally piece-wise continuous therebetween. This piece-wise function may approximate a continuous function or may in fact be continuous. The light intensity provided by the inverse mask has two inflection points 328, 330 that exist between the central region 318 and the edge regions 320, 322. The inflection points 324, 326, 328, 330 provided by both masks aid in “blending” the low-resolution background and the high-resolution scene together. Because the images blend, the trainee is less likely to notice a distinct “ring” that might otherwise be observed between the background image and the scene image. Thus, negative training is minimized because the image looks realistic.
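The normalized mask pair of FIG. 9 can be sketched as two complementary radial profiles. A cubic smoothstep is used here as an illustrative transition function, chosen because it is continuous, has an inflection point between the center and the edges, and lets the two masks sum to exactly 1 everywhere, which is what yields the normalized final intensity described above:

```python
def smoothstep(edge0, edge1, r):
    """Cubic ease from 0 to 1 with an inflection at the midpoint."""
    t = max(0.0, min(1.0, (r - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def transparency_mask(r, r_inner, r_outer):
    """Background attenuation: about 0 at the center, about 1 at the edges."""
    return smoothstep(r_inner, r_outer, r)

def inverse_mask(r, r_inner, r_outer):
    """Scene attenuation: about 1 at the center, about 0 at the edges."""
    return 1.0 - smoothstep(r_inner, r_outer, r)
```

Because the two masks are exact complements, the combined intensity (background times transparency plus scene times inverse) stays constant across the blend annulus, so no ring is visible at the seam.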
  • FIG. 10 shows another embodiment of the transparency mask and the inverse mask. In FIG. 10, the light intensities provided by a transparency mask 332 and an inverse mask 334 are super-imposed on one another. As shown, the light intensity of the final image is not normalized. In fact, the intensity of the central, high-resolution area is approximately double the intensity of the edge regions 338, 340. Because the light intensity is increased in the central region 336, the effective contrast between the high-resolution region and the background image is also increased. This scheme addresses a long-standing problem with projection domes known as the “integration effect”: because light reflects off all portions of the dome, black regions often appear gray. The increased effective contrast of the present scheme causes these black regions to appear darker.
  • While FIG. 10 illustrates a particular embodiment where the light intensity of the central, high resolution area is approximately double the intensity of the edge regions, the present concept is not limited to that embodiment. Generally, relative to the edge regions 338, 340, the central region 336 may have a light intensity that is slightly greater or significantly greater. This range may be from a fraction of a percent to an order of magnitude or more. For example, in one embodiment the central region 336 may have a light intensity that is 1% greater than that of the edge regions 338, 340, while in another embodiment the central region 336 may have a light intensity that is 10 times that of the edge regions 338, 340.
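The non-normalized scheme of FIG. 10 can be sketched by applying a boost factor to the scene contribution. With unit-intensity sources and the default boost of 2, the center of the final image is twice as bright as the edges, matching the contrast behavior described above; the smoothstep transition is the same illustrative choice used for the FIG. 9 sketch, and the boost parameter is an assumption covering the 1% to 10-times range mentioned:

```python
def blended_intensity(r, r_inner, r_outer, boost=2.0):
    """Combined intensity of unit-brightness background and scene sources.

    The background is attenuated toward the center while the boosted
    scene fills it in, so the center ends up `boost` times brighter
    than the edge regions instead of matching them.
    """
    t = max(0.0, min(1.0, (r - r_inner) / (r_outer - r_inner)))
    s = t * t * (3.0 - 2.0 * t)          # background transparency profile
    return 1.0 * s + boost * (1.0 - s)   # background + boosted scene
```

Setting boost to 1.0 recovers the normalized FIG. 9 behavior, which shows how the two embodiments differ only in this single parameter.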
  • As illustrated in FIG. 11, the scene projector may include two modes of operation: a “rapid” translation mode 350, and a “slow” translation mode 352. In “rapid” translation mode 350, if the translation velocity of the head-mounted assembly 200 exceeds a predetermined level, the scene projector 254 may become misaligned and may turn off until it can re-align with the trainee's line-of-sight. In “slow” translation mode 352, if the translation velocity of the head-mounted-assembly 200 is within a predetermined level, the scene projector 254 slowly and accurately scans the dome or projection surface 50, so as to correspond with the trainee's line-of-sight.
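The two translation modes of FIG. 11 can be sketched as a threshold test on the head-mounted assembly's angular velocity. The 60-degree-per-second threshold and the function name are illustrative assumptions; the disclosure says only that the level is predetermined:

```python
def scene_projector_state(head_velocity_deg_s, threshold_deg_s=60.0):
    """Return (mode, projector_on) for the scene projector.

    Above the (assumed) threshold the projector cannot keep up with the
    head, so it blanks in "rapid" mode until it re-aligns with the
    trainee's line of sight. At or below the threshold it stays on in
    "slow" mode and scans the dome to track the line of sight.
    """
    if head_velocity_deg_s > threshold_deg_s:
        return ("rapid", False)
    return ("slow", True)
```

Blanking during fast head motion is a reasonable design choice because a misaligned high-resolution inset would itself be a conspicuous artifact and a source of negative training.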
  • Based on the foregoing description, it will be appreciated that a number of advantages are realized. Foremost, the simulator 10 allows for movement of a high resolution area that is always positioned so that the focus of the trainee's vision is centered on the moving area. The simulator is also advantageous in that a transparency mask and an inverse mask are used to blend the centered high resolution area with the low resolution area that surrounds it. This blending is essentially seamless and prevents, or at least significantly reduces, negative training. Another advantage of the simulator 10 is that it allows for changes in the perceived cockpit lighting intensity based upon line of sight information associated with the trainee's head. And still another advantage of the present invention is that it allows for use of different models of the same object from an image library, such as an enemy fighter, depending upon whether the object appears in the high resolution area or the low resolution area. Accordingly, a high resolution simulator can be realized with relatively low-cost system components.
  • Thus, it can be seen that the objects of the invention have been satisfied by the structure and its method for use presented above. While in accordance with the Patent Statutes, only the best mode and preferred embodiment has been presented and described in detail, it is to be understood that the invention is not limited thereto and thereby. Accordingly, for an appreciation of the true scope and breadth of the invention, reference should be made to the following claims.

Claims (11)

1. A simulator comprising:
a projection surface;
a line-of-sight detection apparatus for detecting the orientation of a trainee's head;
a projection system for projecting a high-resolution image and a low-resolution image onto the projection surface; and
at least one mask for merging the high-resolution image and the low-resolution image in an area that is substantially in the trainee's line-of-sight based upon said trainee's head orientation.
2. The simulator of claim 1, wherein:
the at least one mask includes a transparency mask that reduces the light intensity of a background image in an area that is substantially in the trainee's line-of-sight.
3. The simulator of claim 2, wherein the transparency mask includes:
a central region that provides a relatively low light intensity;
edge regions outside the central region that provide for a relatively high light intensity; and
wherein the light intensity of the background image is continuous between the central region and the edge regions.
4. The simulator of claim 2, wherein the transparency mask includes:
a central region that provides a relatively low light intensity;
edge regions outside the central region that provide for a relatively high light intensity; and
wherein the light intensity of the background image has at least one inflection point that exists between the central region and the edge regions.
5. The simulator of claim 1, wherein said projection system comprises:
a background projector which generates said low resolution image; and
a scene projector which generates said high resolution image.
6. The simulator according to claim 5, wherein said scene projector generates a first circularly polarized light in a first direction and a second circularly polarized light in a second direction opposite said first direction.
7. The simulator according to claim 6, further comprising:
a head-mounted assembly worn by the trainee which has a visor with a filter incorporated therein so that the trainee visualizes a three-dimensional image.
8. The simulator of claim 1, wherein:
the at least one mask includes an inverse mask that reduces the light intensity of a scene image in an area that is substantially in the trainee's line-of-sight.
9. The simulator of claim 1, further comprising:
a mock instrumentation apparatus oriented with respect to said projection surface, wherein an intensity of the high-resolution image is increased when said line-of-sight detection apparatus determines that the trainee's head orientation is directed toward said mock instrumentation apparatus.
10. The simulator of claim 1, further comprising:
an image generator for generating said high-resolution image and said low-resolution image, said line-of-sight detection apparatus providing line-of-sight information to said image generator to assist in determining boundaries of said high resolution image.
11. The simulator of claim 10, further comprising:
an image library providing image information to said image generator for the purpose of generating said high and low resolution images.
US11/483,310 2005-07-08 2006-07-07 Simulator utilizing a high resolution visual display Abandoned US20070141538A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/483,310 US20070141538A1 (en) 2005-07-08 2006-07-07 Simulator utilizing a high resolution visual display

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US69765205P 2005-07-08 2005-07-08
US11/483,310 US20070141538A1 (en) 2005-07-08 2006-07-07 Simulator utilizing a high resolution visual display

Publications (1)

Publication Number Publication Date
US20070141538A1 true US20070141538A1 (en) 2007-06-21

Family

ID=38174037

Family Applications (1)

Application Number Title Priority Date Filing Date
US11/483,310 Abandoned US20070141538A1 (en) 2005-07-08 2006-07-07 Simulator utilizing a high resolution visual display

Country Status (1)

Country Link
US (1) US20070141538A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4896210A (en) * 1987-11-16 1990-01-23 Brokenshire Daniel A Stereoscopic graphics display terminal with image data processing
US4963959A (en) * 1989-11-20 1990-10-16 Drewlo Kenneth G Three-dimensional cathode ray tube display
US5136675A (en) * 1990-12-20 1992-08-04 General Electric Company Slewable projection system with fiber-optic elements
US5490784A (en) * 1993-10-29 1996-02-13 Carmein; David E. E. Virtual reality system with enhanced sensory apparatus
US5539422A (en) * 1993-04-12 1996-07-23 Virtual Vision, Inc. Head mounted display system
US5696531A (en) * 1991-02-05 1997-12-09 Minolta Camera Kabushiki Kaisha Image display apparatus capable of combining image displayed with high resolution and image displayed with low resolution
US5784149A (en) * 1995-09-21 1998-07-21 Fuji Photo Film Co., Ltd. Film image processing method and apparatus
US6078427A (en) * 1998-12-01 2000-06-20 Kaiser Electro-Optics, Inc. Smooth transition device for area of interest head-mounted display

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070009862A1 (en) * 2005-07-08 2007-01-11 Quinn Edward W Simulator utilizing a non-spherical projection surface
US8241038B2 (en) * 2005-07-08 2012-08-14 Lockheed Martin Corporation Simulator utilizing a non-spherical projection surface
US20070238085A1 (en) * 2006-01-13 2007-10-11 Colvin Richard T Computer based system for training workers
US9224303B2 (en) 2006-01-13 2015-12-29 Silvertree Media, Llc Computer based system for training workers
US20090066858A1 (en) * 2007-09-10 2009-03-12 L-3 Communications Corporation Display system for high-definition projectors
US9188850B2 (en) * 2007-09-10 2015-11-17 L-3 Communications Corporation Display system for high-definition projectors
US9265458B2 (en) 2012-12-04 2016-02-23 Sync-Think, Inc. Application of smooth pursuit cognitive testing paradigms to clinical drug development
US9380976B2 (en) 2013-03-11 2016-07-05 Sync-Think, Inc. Optical neuroinformatics
WO2017048125A1 (en) * 2015-09-18 2017-03-23 Nederlandse Organisatie Voor Toegepast-Natuurwetenschappelijk Onderzoek Tno Night vision goggles aided flight simulator system and method
US10769959B2 (en) 2015-09-18 2020-09-08 Nederlandse Organisatie Voor Toegepast-Natuurwetenschappelijk Onderzoek Tno Night vision goggles aided flight simulator system and method
US10643381B2 (en) 2016-01-12 2020-05-05 Qualcomm Incorporated Systems and methods for rendering multiple levels of detail
US10643296B2 (en) 2016-01-12 2020-05-05 Qualcomm Incorporated Systems and methods for rendering multiple levels of detail
US11217204B2 (en) 2018-12-19 2022-01-04 Cae Inc. Dynamically adjusting image characteristics in real-time

Similar Documents

Publication Publication Date Title
US20070141538A1 (en) Simulator utilizing a high resolution visual display
EP1886179B1 (en) Combined head up display
US7719484B2 (en) Vehicle simulator having head-up display
US7200536B2 (en) Simulator
Furness III The super cockpit and its human factors challenges
US5582518A (en) System for restoring the visual environment of a pilot in a simulator
US6814578B2 (en) Visual display system and method for displaying images utilizing a holographic collimator
US8348440B2 (en) Vision system
CN107991777A (en) A kind of vehicle-mounted head-up-display system with error correction function
CA2496865C (en) Masked image projection system and method
US9470967B1 (en) Motion-based system using a constant vertical resolution toroidal display
CN207752239U (en) A kind of vehicle-mounted augmented reality head-up-display system
US7871270B2 (en) Deployable training device visual system
JP2000510612A (en) Large screen display with projected overlay and method of use
CN108152957A (en) A kind of vehicle-mounted head-up-display system and the error calibrating method based on the system
US20200066177A1 (en) Multi-view display device and manipulation simulation device
US20030164808A1 (en) Display system for producing a virtual image
CN207676046U (en) A kind of vehicle-mounted head-up-display system
CA2217639A1 (en) A visual display system having a large field of view
Kelly et al. Helmet-mounted area of interest
Waldelof et al. ACE: advanced cockpit technologies evaluation
Gudzbeler et al. Application of Collimated Projection Systems for the Purpose of Driving Simulators
Mariani Crewman's associate advanced technology demonstration
JPH1138530A (en) Display device for virtual environmental video
Tomilin Optical projection systems for simulators

Legal Events

Date Code Title Description
AS Assignment

Owner name: LOCKHEED MARTIN CORPORATION, MARYLAND

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:QUINN, EDWARD W.;WALLACE, RANDALL W.;VOGEL, MICHAEL R.;AND OTHERS;REEL/FRAME:018311/0296

Effective date: 20060913

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION