US20050264559A1 - Multi-plane horizontal perspective hands-on simulator - Google Patents

Multi-plane horizontal perspective hands-on simulator

Info

Publication number
US20050264559A1
US20050264559A1 (application US11/141,652)
Authority
US
United States
Prior art keywords
display
image
horizontal perspective
peripheral device
plane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/141,652
Inventor
Michael Vesely
Nancy Clemens
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Infinite Z Inc
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to US11/141,652 priority Critical patent/US20050264559A1/en
Publication of US20050264559A1 publication Critical patent/US20050264559A1/en
Assigned to INFINITE Z, LLC reassignment INFINITE Z, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CLEMENS, NANCY L., VESELY, MICHAEL A.
Assigned to INFINITE Z, INC. reassignment INFINITE Z, INC. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: INFINITE Z, LLC
Abandoned legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B30/40Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images giving the observer of a single two-dimensional [2D] image a perception of depth
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B30/50Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B30/50Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels
    • G02B30/56Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels by projecting aerial or floating images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/275Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H04N13/279Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/356Image reproducers having separate monoscopic and stereoscopic modes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/361Reproducing mixed stereoscopic images; Reproducing mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/363Image reproducers using image projection screens
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/366Image reproducers using viewer tracking
    • H04N13/383Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/388Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume
    • H04N13/395Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume with depth sampling, i.e. the volume being constructed from a stack or sequence of 2D image planes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S7/00Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30Control circuits for electronic adaptation of the sound field
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S7/00Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30Control circuits for electronic adaptation of the sound field
    • H04S7/302Electronic adaptation of stereophonic sound system to listener position or orientation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S7/00Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30Control circuits for electronic adaptation of the sound field
    • H04S7/302Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04S7/303Tracking of listener position or orientation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2499/00Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10General applications
    • H04R2499/15Transducers incorporated in visual displaying devices, e.g. televisions, computer displays, laptops
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S2420/00Techniques used stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/01Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S7/00Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30Control circuits for electronic adaptation of the sound field
    • H04S7/302Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04S7/303Tracking of listener position or orientation
    • H04S7/304For headphones

Definitions

  • This invention relates to a three-dimensional simulator system, and in particular to a multi-plane hands-on computer simulator system capable of operator interaction.
  • Three dimensional (3D) capable electronics and computing hardware devices and real-time computer-generated 3D computer graphics have been a popular area of computer science for the past few decades, with innovations in visual, audio and tactile systems. Much of the research in this area has produced hardware and software products that are specifically designed to generate greater realism and more natural computer-human interfaces. These innovations have significantly enhanced and simplified the end-user's computing experience.
  • The answer is three-dimensional illusions.
  • The two-dimensional pictures must provide a number of third-dimension cues to the brain to create the illusion of three-dimensional images.
  • This effect of third-dimension cues is realistically achievable because the brain is quite accustomed to it.
  • The three-dimensional real world is always converted into a two-dimensional (e.g. height and width) projected image at the retina, a concave surface at the back of the eye.
  • Through experience and perception, the brain generates depth information to form the three-dimensional visual image from two types of depth cues: monocular (one-eye perception) and binocular (two-eye perception).
  • Binocular depth cues are innate and biological, while monocular depth cues are learned and environmental.
  • the major binocular depth cues are convergence and retinal disparity.
  • the brain measures the amount of convergence of the eyes to provide a rough estimate of the distance since the angle between the line of sight of each eye is larger when an object is closer.
  • the disparity of the retinal images due to the separation of the two eyes is used to create the perception of depth.
  • the effect is called stereoscopy where each eye receives a slightly different view of a scene, and the brain fuses them together using these differences to determine the ratio of distances between nearby objects.
  • Binocular cues provide a very powerful perception of depth. However, depth cues perceived with only one eye, called monocular depth cues, can also create an impression of depth in a flat image.
  • The major monocular cues are overlapping, relative size, linear perspective, and light and shadow. When an object is viewed partially covered, this pattern of blocking is used as a cue to determine that the object is farther away. When two objects are known to be the same size and one appears smaller than the other, this pattern of relative size is used as a cue to assume that the smaller object is farther away.
  • The cue of relative size also provides the basis for the cue of linear perspective, where the farther away the lines are from the observer, the closer together they appear, since parallel lines in a perspective image appear to converge towards a single point. Light falling on an object from a certain angle can provide a cue for the form and depth of the object.
  • the distribution of light and shadow on objects is a powerful monocular cue for depth provided by the biologically correct assumption that light comes from above.
  • Perspective drawing is most often used to achieve the illusion of three dimension depth and spatial relationships on a flat (two dimension) surface, such as paper or canvas.
  • Three-dimensional objects are depicted on a two-dimensional plane but “trick” the eye into perceiving them as being in three-dimensional space.
  • The first theoretical treatise for constructing perspective, De Pictura, was published in the early 1400s by the architect Leone Battista Alberti. Since the introduction of his book, the details behind “general” perspective have been very well documented. However, the fact that there are a number of other types of perspective is not well known. Some examples are military, cavalier, isometric, and dimetric, as shown at the top of FIG. 1.
  • Central perspective, also called one-point perspective and shown in FIG. 1, is the simplest kind of “genuine” perspective construction, and is often taught in art and drafting classes for beginners.
  • FIG. 2 further illustrates central perspective.
  • In FIG. 2, the chess board and chess pieces look like three-dimensional objects, even though they are drawn on a two-dimensional flat piece of paper.
  • Central perspective has a central vanishing point, and rectangular objects are placed so their front sides are parallel to the picture plane. The depth of the objects is perpendicular to the picture plane. All parallel receding edges run towards a central vanishing point. The viewer looks towards this vanishing point with a straight view.
  • When an architect or artist creates a drawing using central perspective, they must use a single-eye view. That is, the artist creating the drawing captures the image by looking through only one eye, along a line of sight perpendicular to the drawing surface.
  • Central perspective is employed extensively in 3D computer graphics for a myriad of applications, such as scientific and data visualization, computer-generated prototyping, special effects for movies, medical imaging, and architecture, to name just a few.
  • One of the most common and well-known 3D computing applications is 3D gaming, which is used here as an example because the core concepts of 3D gaming extend to all other 3D computing applications.
  • FIG. 3 is a simple illustration, intended to set the stage by listing the basic components necessary to achieve a high level of realism in 3D software applications. At its highest level, 3D game development consists of four essential components:
  • a person using a 3D application is in fact running software in the form of a real-time computer-generated 3D graphics engine.
  • One of the engine's key components is the renderer. Its job is to take 3D objects that exist within computer-generated world coordinates x, y, z, and render (draw/display) them onto the computer monitor's viewing surface, which is a flat (2D) plane, with real world coordinates x, y.
  • FIG. 4 is a representation of what is happening inside the computer when running a 3D graphics engine.
  • Game play for a typical 3D game might begin with a computer-generated-3D earth and a computer-generated-3D satellite orbiting it.
  • the virtual world coordinate system enables the earth and satellite to be properly positioned in computer-generated x, y, z space.
  • the 3D graphics engine creates a fourth universal dimension for computer-generated time, t. For every tick of time t, the 3D graphics engine regenerates the satellite at its new location and orientation as it orbits the spinning earth. Therefore, a key job for a 3D graphics engine is to continuously synchronize and regenerate all 3D objects within all four computer-generated dimensions x, y, z, and t.
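  • A minimal sketch of this regeneration loop is shown below; the scene and render interfaces are illustrative assumptions, not part of the disclosed engine.

        # Illustrative fixed-step loop: for every tick of computer-generated time t,
        # all 3D objects are repositioned and the scene is re-rendered.
        import time

        def run_engine(scene, render, dt=1.0 / 60.0):
            t = 0.0
            while scene.running:
                scene.update(t)      # e.g. advance the satellite along its orbit
                render(scene)        # draw all objects for this tick
                t += dt              # the fourth, computer-generated dimension
                time.sleep(dt)       # crude pacing; a real engine synchronizes more carefully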
  • FIG. 5 is a conceptual illustration of what happens inside the computer when an end-user is playing, i.e. running, a first-person 3D application.
  • First-person means that the computer monitor is much like a window, through which the person playing the game views the computer-generated world.
  • the 3D graphics engine renders the scene from the point of view of the eye of a computer-generated person.
  • the computer-generated person can be thought of as a computer-generated or “virtual” simulation of the “real” person actually playing the game.
  • The boxed-in area in FIG. 5 conceptually represents how a 3D graphics engine minimizes the hardware's burden: it focuses computational resources on extremely small areas of information as compared to the 3D application's entire world. In this example, it is a “computer-generated” polar bear cub being observed by a “computer-generated” virtual person. Because the end user is running in first person, everything the computer-generated person sees is rendered onto the end-user's monitor, i.e. the end user is looking through the eye of the computer-generated person.
  • The computer-generated person is looking through only one eye; in other words, a one-eyed view.
  • the area that the computer-generated person sees with a one-eye view is called the “view volume”, and the computer-generated 3D objects within this view volume are what actually get rendered to the computer monitor's 2D viewing surface.
  • FIG. 6 illustrates a view volume in more detail.
  • a view volume is a subset of a “camera model”.
  • A camera model is a blueprint that defines the characteristics of both the hardware and software of a 3D graphics engine. Like a very complex and sophisticated automobile engine, a 3D graphics engine consists of so many parts that their camera models are often simplified to illustrate only the essential elements being referenced.
  • the camera model depicted in FIG. 6 shows a 3D graphics engine using central perspective to render computer-generated 3D objects to a computer monitor's vertical, 2D viewing surface.
  • the view volume shown in FIG. 6 is the same view volume represented in FIG. 5 .
  • the only difference is semantics because a 3D graphics engine calls the computer-generated person's one-eye view a camera point (hence camera model).
  • the element called near clip plane is the 2D plane onto which the x, y, z coordinates of the 3D objects within the view volume will be rendered.
  • Each projection line starts at the camera point and ends at an x, y, z coordinate point of a virtual 3D object within the view volume.
  • The 3D graphics engine determines where the projection line intersects the near clip plane, and the x, y point where this intersection occurs is rendered onto the near clip plane.
  • the near clip plane is displayed on the 2D viewing surface of the computer monitor, as shown in FIG. 6 .
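  • A minimal sketch of this projection step, assuming the camera point sits at the origin of camera coordinates and the near clip plane lies at z = -near (these conventions are illustrative, not taken from the patent):

        # Where the projection line from the camera point to an object point
        # crosses the near clip plane; the returned x, y is what gets rendered.
        def near_plane_hit(obj_point, near=1.0):
            x, y, z = obj_point          # object point inside the view volume (z < -near)
            t = -near / z                # parameter where the line reaches z = -near
            return (t * x, t * y)

        print(near_plane_hit((3.0, -1.5, -6.0), near=2.0))   # -> (1.0, -0.5)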
  • 3D central perspective projection, though offering a realistic 3D illusion, has some limitations in allowing the user to have hands-on interaction with the 3D display.
  • In horizontal perspective, the image appears distorted when viewed head-on, but displays a three-dimensional illusion when viewed from the correct viewing position.
  • The angle between the viewing surface and the line of vision is preferably 45°, but can be almost any angle, and the viewing surface is preferably horizontal (hence the name “horizontal perspective”), but it can be any surface, as long as the line of vision forms a non-perpendicular angle with it.
  • Horizontal perspective images offer a realistic three-dimensional illusion, but are little known, primarily due to the narrow viewing location (the viewer's eyepoint must coincide precisely with the image projection eyepoint) and the complexity involved in projecting the two-dimensional image or the three-dimensional model into the horizontal perspective image.
  • the present invention recognizes that the personal computer is perfectly suitable for horizontal perspective display. It is personal, thus it is designed for the operation of one person, and the computer, with its powerful microprocessor, is well capable of rendering various horizontal perspective images to the viewer. Further, horizontal perspective offers open space display of 3D images, thus allowing the hands-on interaction of the end users.
  • The present invention discloses a multi-plane hands-on simulator system comprising at least two display surfaces, one of which displays three-dimensional horizontal perspective images.
  • The other display surfaces can display two-dimensional images, or preferably three-dimensional central perspective images.
  • the display surfaces can have a curvilinear blending display section to merge the various images.
  • the multi-plane hands-on simulator can comprise various camera eyepoints, one for the horizontal perspective images, one for the central perspective images, and optionally one for the curvilinear blending display surface.
  • the multi-plane display surface can further adjust the various images to accommodate the position of the viewer.
  • the display can accept manual input such as a computer mouse, trackball, joystick, tablet, etc. to re-position the horizontal perspective images.
  • the display can also automatically re-position the images based on an input device automatically providing the viewer's viewpoint location.
  • The multi-plane hands-on simulator system can project horizontal perspective images into open space and includes a peripheral device that allows the end user to manipulate the images with hands or hand-held tools.
  • FIG. 1 shows the various perspective drawings.
  • FIG. 2 shows a typical central perspective drawing.
  • FIG. 3 shows 3D software application.
  • FIG. 4 shows 3D application running on PC.
  • FIG. 5 shows 3D application in first person.
  • FIG. 6 shows the central perspective camera model.
  • FIG. 7 shows the comparison of central perspective (Image A) and horizontal perspective (Image B).
  • FIG. 8 shows the central perspective drawing of three stacking blocks.
  • FIG. 9 shows the horizontal perspective drawing of three stacking blocks.
  • FIG. 10 shows the method of drawing a horizontal perspective drawing.
  • FIG. 11 shows a horizontal perspective display and a viewer input device.
  • FIG. 12 shows a horizontal perspective display, a computational device and a viewer input device.
  • FIG. 13 shows a computer monitor.
  • FIG. 14 shows a monitor's phosphor layer, indicating an incorrect location of an image.
  • FIG. 15 shows a monitor's viewing surface, indicating a correct location of an image.
  • FIG. 16 shows a reference plane's x, y, z coordinates.
  • FIG. 17 shows the location of an angled camera point.
  • FIG. 18 shows the mapping of the horizontal plane to a reference plane.
  • FIG. 19 shows the comfort plane.
  • FIG. 20 shows the hands-on volume.
  • FIG. 21 shows the inner plane.
  • FIG. 22 shows the bottom plane.
  • FIG. 23 shows the inner access volume.
  • FIG. 24 shows the angled camera mapped to the end-user's eye.
  • FIG. 25 shows mapping of the 3-d object onto the horizontal plane.
  • FIG. 26 shows the two-eye view.
  • FIG. 27 shows the simulation time of the horizontal perspective.
  • FIG. 28 shows the horizontal plane.
  • FIG. 29 shows the 3D peripherals.
  • FIG. 30 shows an open-access camera model.
  • FIG. 31 shows the concept of object recognition.
  • FIG. 32 shows the 3D audio combination with object recognition.
  • FIG. 33 shows another open access camera model.
  • FIG. 34 shows another open access camera model.
  • FIG. 35 shows the mapping of virtual attachments to end of tools.
  • FIG. 36 shows the multi-plane and multi-view device.
  • FIG. 37 shows an open access camera model.
  • FIG. 38 shows another multi-plane device.
  • the new and unique inventions described in this document build upon prior art by taking the current state of real-time computer-generated 3D computer graphics, 3D sound, and tactile computer-human interfaces to a whole new level of reality and simplicity. More specifically, these new inventions enable real-time computer-generated 3D simulations to coexist in physical space and time with the end-user and with other real-world physical objects. This capability dramatically improves upon the end-user's visual, auditory and tactile computing experience by providing direct physical interactions with 3D computer-generated objects and sounds.
  • the present invention discloses a multi-plane horizontal perspective hands-on simulator comprising at least two display surfaces, one of which capable of projecting three dimensional illusion based on horizontal perspective projection.
  • the present invention horizontal perspective hands-on simulator can be used to display and interact with three dimensional images and has obvious utility to many industrial applications such as manufacturing design reviews, ergonomic simulation, safety and training, video games, cinematography, scientific 3D viewing, and medical and other data displays.
  • Normally, as in central perspective, the plane of vision, at a right angle to the line of sight, is also the projected plane of the picture, and depth cues are used to give the illusion of depth to this flat image.
  • In horizontal perspective, the plane of vision remains the same, but the projected image is not on this plane. It is on a plane angled to the plane of vision; typically, the image would be on the ground-level surface. This means the image will be physically in the third dimension relative to the plane of vision.
  • Thus horizontal perspective can also be called horizontal projection.
  • In horizontal perspective, the object is to separate the image from the paper and fuse the image to the three-dimensional object that projects the horizontal perspective image.
  • Thus the horizontal perspective image must be distorted so that the visual image fuses to form the free-standing three-dimensional figure. It is also essential that the image is viewed from the correct eyepoint; otherwise, the three-dimensional illusion is lost.
  • Central perspective images have height and width and project an illusion of depth, so objects tend to be projected abruptly and the images appear to be in layers. In contrast, horizontal perspective images have actual depth and width, and the illusion gives them height, so there is usually a graduated shifting and the images appear to be continuous.
  • FIG. 7 compares key characteristics that differentiate central perspective and horizontal perspective.
  • Image A shows key pertinent characteristics of central perspective
  • Image B shows key pertinent characteristics of horizontal perspective.
  • In Image A, the real-life three-dimensional object (three blocks stacked slightly above each other) was drawn by the artist with one eye closed, viewing along a line of sight perpendicular to the vertical drawing plane.
  • the resulting image when viewed vertically, straight on, and through one eye, looks the same as the original image.
  • In Image B, the real-life three-dimensional object was drawn by the artist with one eye closed, viewing along a line of sight at 45° to the horizontal drawing plane.
  • the resulting image when viewed horizontally, at 45° and through one eye, looks the same as the original image.
  • One key difference between central perspective, shown in Image A, and horizontal perspective, shown in Image B, is the location of the display plane with respect to the projected three-dimensional image.
  • the display plane can be adjusted up and down, and therefore the projected image can be displayed in the open air above the display plane, i.e. a physical hand can touch (or more likely pass through) the illusion, or it can be displayed under the display plane, i.e. one cannot touch the illusion because the display plane physically blocks the hand.
  • This is the nature of horizontal perspective, and as long as the camera eyepoint and the viewer eyepoint are at the same place, the illusion is present.
  • With central perspective, the three-dimensional illusion is likely to be only inside the display plane, meaning one cannot touch it.
  • To display the illusion in open space, central perspective would need an elaborate display scheme such as surround image projection and a large volume.
  • FIGS. 8 and 9 illustrate the visual difference between using central and horizontal perspective. To experience this visual difference, first look at FIG. 8 , drawn with central perspective, through one open eye. Hold the piece of paper vertically in front of you, as you would a traditional drawing, perpendicular to your eye. You can see that central perspective provides a good representation of three dimension objects on a two dimension surface.
  • Now look at FIG. 9, drawn using horizontal perspective, by sitting at your desk and placing the paper flat (horizontally) on the desk in front of you. Again, view the image through only one eye. This puts your one open eye, called the eye point, at approximately a 45° angle to the paper, which is the angle the artist used to make the drawing. To get your open eye and its line of sight to coincide with the artist's, move your eye downward and forward, closer to the drawing, about six inches out and down and at a 45° angle. This will result in the ideal viewing experience, where the top and middle blocks will appear above the paper in open space.
  • Both central and horizontal perspective not only define the angle of the line of sight from the eye point; they also define the distance from the eye point to the drawing.
  • FIGS. 8 and 9 are drawn with an ideal location and direction for your open eye relative to the drawing surfaces.
  • the use of only one eye and the position and direction of that eye relative to the viewing surface are essential to seeing the open space three dimension horizontal perspective illusion.
  • FIG. 10 is an architectural-style illustration that demonstrates a method for making simple geometric drawings on paper or canvas utilizing horizontal perspective.
  • FIG. 10 is a side view of the same three blocks used in FIG. 9. It illustrates the actual mechanics of horizontal perspective. Each point that makes up the object is drawn by projecting the point onto the horizontal drawing plane. To illustrate this, FIG. 10 shows a few of the coordinates of the blocks being drawn on the horizontal drawing plane through projection lines. These projection lines start at the eye point (not shown in FIG. 10).
  • In FIG. 10, one of the three blocks appears below the horizontal drawing plane.
  • Points located below the drawing surface are also drawn onto the horizontal drawing plane, as seen from the eye point along the line of sight. Therefore, when the final drawing is viewed, objects not only appear above the horizontal drawing plane but may also appear below it, giving the appearance that they are receding into the paper. If you look again at FIG. 9, you will notice that the bottom box appears to be below, or to go into, the paper, while the other two boxes appear above the paper in open space.
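  • The projection just described can be sketched mathematically: each object point is carried along the line from the eye point until it meets the horizontal drawing plane. The Python sketch below assumes the drawing plane is z = 0 with z pointing up and uses an illustrative eye position; it is not the patent's own code.

        # Horizontal perspective projection of one point onto the plane z = 0,
        # along the projection line from the eye point through the object point.
        def project_horizontal(eye, point):
            ex, ey, ez = eye
            px, py, pz = point
            t = ez / (ez - pz)                        # where the line reaches z = 0
            return (ex + t * (px - ex), ey + t * (py - ey))

        eye = (0.0, -12.0, 12.0)                      # ~45 degree line of sight to the plane
        print(project_horizontal(eye, (0.0, 0.0, 3.0)))    # block corner above the plane
        print(project_horizontal(eye, (0.0, 0.0, -2.0)))   # corner below the plane ("into" the paper)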
  • the horizontal perspective display system promotes horizontal perspective projection viewing by providing the viewer with the means to adjust the displayed images to maximize the illusion viewing experience.
  • the horizontal perspective display is shown in FIG. 11 , comprising a real time electronic display 100 capable of re-drawing the projected image, together with a viewer's input device 102 to adjust the horizontal perspective image.
  • the horizontal perspective display can ensure the minimum distortion in rendering the three dimension illusion from the horizontal perspective method.
  • the input device can be manually operated where the viewer manually inputs his or her eyepoint location, or change the projection image eyepoint to obtain the optimum three dimensional illusion.
  • The input device can also be automatically operated, where the display automatically tracks the viewer's eyepoint and adjusts the projection image accordingly.
  • The horizontal perspective display removes the constraint of viewers keeping their heads in relatively fixed positions, a constraint that creates much difficulty in the acceptance of displays requiring a precise eyepoint location, such as horizontal perspective or hologram displays.
  • The horizontal perspective display system, shown in FIG. 12, can further comprise a computation device 110, in addition to the real-time electronic display device 100, and a projection image input device 112 providing input to the computation device 110 for calculating the projection images, so as to provide a realistic, minimum-distortion three-dimensional illusion to the viewer by making the viewer's eyepoint coincide with the projection image eyepoint.
  • the system can further comprise an image enlargement/reduction input device 115 , or an image rotation input device 117 , or an image movement device 119 to allow the viewer to adjust the view of the projection images.
  • the input device can be operated manually or automatically.
  • The input device can detect the position and orientation of the viewer's eyepoint, to compute and project the image onto the display according to the detection result.
  • The input device can be made to detect the position and orientation of the viewer's head along with the orientation of the eyeballs.
  • The input device can comprise an infrared detection system to detect the position of the viewer's head to allow the viewer freedom of head movement.
  • Other embodiments of the input device can use a triangulation method of detecting the viewer's eyepoint location, such as a CCD camera providing position data suitable for the head-tracking objectives of the invention.
  • the input device can be manually operated by the viewer, such as a keyboard, mouse, trackball, joystick, or the like, to indicate the correct display of the horizontal perspective display images.
  • the head or eye-tracking system can comprise a base unit and a head-mounted sensor on the head of the viewer.
  • the head-mounted sensor produces signals showing the position and orientation of the viewer in response to the viewer's head movement and eye orientation. These signals can be received by the base unit and are used to compute the proper three dimensional projection images.
  • The head or eye tracking system can use infrared cameras to capture images of the viewer's eyes. Using the captured images and other image-processing techniques, the position and orientation of the viewer's eyes can be determined and then provided to the base unit. The head and eye tracking can be done in real time, at a small enough time interval, to provide continuous tracking of the viewer's head and eyes.
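  • A minimal sketch of such a tracking loop is given below; the tracker and engine interfaces are assumptions for illustration only.

        # Each frame, read the viewer's eyepoint from the head/eye tracker and
        # re-render the horizontal perspective image from that eyepoint, keeping
        # the camera eyepoint and the viewer's eyepoint coincident.
        def tracking_loop(tracker, engine):
            while engine.running:
                eye = tracker.read_eyepoint()       # (x, y, z) relative to the display
                engine.set_camera_eyepoint(eye)
                engine.render_frame()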
  • The Hands-On Simulator employs the open space characteristics of horizontal perspective, together with a number of new computer hardware and software elements and processes, to create a “Hands-On Simulator”.
  • the Hands-On Simulator generates a totally new and unique computing experience in that it enables an end user to interact physically and directly (Hands-On) with real-time computer-generated 3D graphics (Simulations), which appear in open space above the viewing surface of a display device, i.e. in the end user's own physical space.
  • the computer hardware viewing surface is situated horizontally, such that the end-user's line of sight is at a 45° angle to the surface.
  • While the end user can experience hands-on simulations at viewing angles other than 45° (e.g. 55°, 30°, etc.), 45° is the optimal angle for the brain to recognize the maximum amount of spatial information in an open space image. Therefore, for simplicity's sake, we use “45°” throughout this document to mean “an approximate 45 degree angle”.
  • While a horizontal viewing surface is preferred, since it simulates the viewer's experience with the horizontal ground, any viewing surface could offer a similar three-dimensional illusion experience.
  • the horizontal perspective illusion can appear to be hanging from a ceiling by projecting the horizontal perspective images onto a ceiling surface, or appear to be floating from a wall by projecting the horizontal perspective images onto a vertical wall surface.
  • The hands-on simulations are generated within a 3D graphics engine's view volume, creating two new elements, the “Hands-On Volume” and the “Inner-Access Volume.”
  • The Hands-On Volume is situated on and above the physical viewing surface.
  • The Inner-Access Volume is located underneath the viewing surface, and simulations within this volume appear inside the physical viewing device.
  • simulations generated within the Inner-Access Volume do not share the same physical space with the end user and the images therefore cannot be directly, physically manipulated by hands or hand-held tools. That is, they are manipulated indirectly via a computer mouse or a joystick.
  • This disclosed Hands-On Simulator can lead to the end user's ability to directly, physically manipulate simulations because they co-inhabit the end-user's own physical space.
  • To accomplish this requires a new computing concept where computer-generated world elements have a 1:1 correspondence with their physical real-world equivalents; that is, a physical element and an equivalent computer-generated element occupy the same space and time. This is achieved by identifying and establishing a common “Reference Plane”, to which the new elements are synchronized.
  • Synchronization with the Reference Plane forms the basis to create the 1:1 correspondence between the “virtual” world of the simulations, and the “real” physical world.
  • The 1:1 correspondence ensures that images are properly displayed: what is on and above the viewing surface appears on and above the surface, in the Hands-On Volume; what is underneath the viewing surface appears below, in the Inner-Access Volume. Only if this 1:1 correspondence and synchronization to the Reference Plane are present can the end user physically and directly access and interact with simulations via their hands or hand-held tools.
  • the present invention simulator further includes a real-time computer-generated 3D-graphics engine as generally described above, but using horizontal perspective projection to display the 3D images.
  • One major difference between the present invention and prior art graphics engines is the projection display.
  • Existing 3D graphics engines use central perspective, and therefore a vertical plane, to render their view volume, while the present invention simulator requires a “horizontal” oriented rendering plane rather than a “vertical” oriented rendering plane to generate horizontal perspective open space images.
  • The horizontal perspective images offer far superior open space access compared with central perspective images.
  • One of the invented elements in the present invention hands-on simulator is the 1:1 correspondence of the computer-generated world elements and their physical real-world equivalents.
  • this 1:1 correspondence is a new computing concept that is essential for the end user to physically and directly access and interact with hands-on simulations.
  • This new concept requires the creation of a common physical Reference Plane, as well as the formula for deriving its unique x, y, z spatial coordinates. Determining the location and size of the Reference Plane and its specific coordinates requires understanding the following.
  • FIG. 13 contains a conceptual side view of a typical CRT-type viewing device.
  • the top layer of the monitor's glass surface is the physical “View Surface”, and the phosphor layer, where images are made, is the physical “Image Layer”.
  • the View Surface and the Image Layer are separate physical layers located at different depths or z coordinates along the viewing device's z axis.
  • To display an image the CRT's electron gun excites the phosphors, which in turn emit photons. This means that when you view an image on a CRT, you are looking along its z axis through its glass surface, like you would a window, and seeing the light of the image coming from its phosphors behind the glass.
  • In FIG. 14, we use the same architectural technique for drawing images with horizontal perspective as previously illustrated in FIG. 10.
  • the middle block in FIG. 14 does not correctly appear on the View Surface.
  • the bottom of the middle block is located correctly on the horizontal drawing/viewing plane, i.e. a piece of paper's View Surface.
  • the phosphor layer i.e. where the image is made, is located behind the CRT's glass surface. Therefore, the bottom of the middle block is incorrectly positioned behind or underneath the View Surface.
  • FIG. 15 shows the proper location of the three blocks on a CRT-type viewing device. That is, the bottom of the middle block is displayed correctly on the View Surface and not on the Image Layer. To make this adjustment the z coordinates of the View Surface and Image Layer are used by the Simulation Engine to correctly render the image. Thus the unique task of correctly rendering an open space image on the View Surface vs. the Image Layer is critical in accurately mapping the simulation images to the real world space.
  • FIG. 16 shows an example of a complete image being displayed on a viewing device's View Surface. That is, the blue image, including the bear cub, shows the entire image area, which is smaller than the viewing device's View Surface.
  • the Image Layer is given a z coordinate of 0.
  • The Reference Plane's z coordinate is equal to that of the View Surface, i.e. to the View Surface's distance along the z axis from the Image Layer.
  • the x and y coordinates, or size of the Reference Plane can be determined by displaying a complete image on the viewing device and measuring the length of its x and y axis.
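  • For illustration, the Reference Plane parameters described above could be recorded as follows; the numeric values are made-up examples, not measured device data.

        # Image Layer is the z origin; the Reference Plane sits on the View Surface.
        image_layer_z  = 0.0        # phosphor layer, by definition
        view_surface_z = 0.6        # measured/calibrated glass offset (example value)

        reference_plane = {
            "z": view_surface_z,            # same z as the View Surface
            "width": 34.0, "height": 27.0,  # x, y extent of a full displayed image (example)
        }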
  • The concept of a common physical Reference Plane is a new inventive concept; therefore, display manufacturers may not supply or even know its coordinates. Thus a “Reference Plane Calibration” procedure might need to be performed to establish the Reference Plane coordinates.
  • This calibration procedure provides the end user with a number of orchestrated images with which he or she interacts. The end-user's responses to these images provide feedback to the Simulation Engine so that it can identify the correct size and location of the Reference Plane. When the end user is satisfied and completes the procedure, the coordinates are saved in the end user's personal profile.
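  • One possible shape of such a calibration procedure is sketched below; the display, input, and profile interfaces are illustrative assumptions rather than the disclosed implementation.

        # The end user nudges orchestrated calibration images until they visually
        # line up with the physical View Surface; the resulting Reference Plane
        # coordinates are then saved to the user's personal profile.
        def calibrate_reference_plane(display, user_input, profile):
            plane = {"z": 0.0, "width": 0.0, "height": 0.0}
            while not user_input.done():
                plane = user_input.apply_adjustment(plane)   # small +/- steps
                display.show_calibration_image(plane)
            profile["reference_plane"] = plane
            return plane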
  • One element of the present invention horizontal perspective projection hands-on simulator is a computer-generated “Angled Camera” point, shown in FIG. 17 .
  • The camera point is initially located at an arbitrary distance from the Horizontal Plane and the camera's line of sight is oriented at a 45° angle looking through the center.
  • the position of the Angled Camera in relation to the end-user's eye is critical to generating simulations that appear in open space on and above the surface of the viewing device.
  • the computer-generated x, y, z coordinates of the Angled Camera point form the vertex of an infinite “pyramid”, whose sides pass through the x, y, z coordinates of the Reference/Horizontal Plane.
  • FIG. 18 illustrates this infinite pyramid, which begins at the Angled Camera point and extends through the Far Clip Plane.
  • The pyramid contains two unique view volumes, called the Hands-On Volume and the Inner-Access Volume, which are not shown in FIG. 18.
  • the dimensions of these volumes and the planes that define them are based on their locations within the pyramid.
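  • As a small illustrative sketch (the coordinate conventions, camera position, and plane size below are assumptions), the pyramid can be described by the camera vertex and the edge directions through the Reference/Horizontal Plane's corners:

        # The Angled Camera point is the pyramid's vertex; each edge runs from it
        # through one corner of the Reference/Horizontal Plane (z = 0) and on
        # toward the Far Clip Plane.
        camera = (0.0, -12.0, 12.0)            # example Angled Camera point
        half_w, half_h = 17.0, 12.75           # example half-extents of the plane

        corners = [(sx * half_w, sy * half_h, 0.0)
                   for sx in (-1.0, 1.0) for sy in (-1.0, 1.0)]
        edges = [tuple(c - e for c, e in zip(corner, camera)) for corner in corners]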
  • FIG. 19 illustrates a plane, called Comfort Plane, together with other display elements.
  • the Comfort Plane is one of six planes that define the new Hands-On Volume, and of these planes it is closest to the Angled Camera point and parallel to the Reference Plane.
  • the Comfort Plane is appropriately named because its location within the pyramid determines the end-user's personal comfort, i.e. how their eyes, head, body, etc. are situated while viewing and interacting with simulations.
  • the end user can adjust the location of the Comfort Plane based on their personal visual comfort through a “Comfort Plane Adjustment” procedure. This procedure provides the end user with orchestrated simulations within the Hands-On Volume, and enables them to adjust the location of the Comfort Plane within the pyramid relative to the Reference Plane. When the end user is satisfied and completes the procedure the location of the Comfort Plane is saved in the end-user's personal profiles.
  • the present invention simulator further defines a “Hands-On Volume”, shown in FIG. 20 .
  • The Hands-On Volume is where you can reach your hand in and physically “touch” a simulation. You can envision this by imagining you are sitting in front of a horizontally oriented computer monitor and using the Hands-On Simulator. If you place your hand several inches above the surface of the monitor, you are putting your hand inside both the physical and computer-generated Hands-On Volume at the same time.
  • The Hands-On Volume exists within the pyramid and is between and inclusive of the Comfort Plane and the Reference/Horizontal Plane.
  • the Inner-Access Volume exists below or inside the physical viewing device. For this reason, an end user cannot directly interact with 3D objects located within the Inner-Access Volume via their hand or hand-held tools. But they can interact in the traditional sense with a computer mouse, joystick, or other similar computer peripheral.
  • An “Inner Plane” is further defined, located immediately below and parallel to the Reference/Horizontal Plane within the pyramid, as shown in FIG. 21.
  • The Inner Plane and the Bottom Plane are two of the six planes within the pyramid that define the Inner-Access Volume.
  • The Bottom Plane, shown in FIG. 22, is also parallel to the Reference/Horizontal Plane and is one of the six planes that define the Inner-Access Volume (FIG. 23).
  • the end-user's preferred viewing distance to the bottom of the viewing pyramid determines the location of these planes.
  • One way the end user can adjust the location of the Bottom Plane is through a “Bottom Plane Adjustment” procedure. This procedure provides the end user with orchestrated simulations within the Inner-Access Volume and enables them to interact and adjust the location of the Bottom Plane relative to the physical Reference/Horizontal Plane. When the end user completes the procedure, the Bottom Plane's coordinates are saved in the end-user's personal profile.
  • For the end user to view open space images on their physical viewing device, it must be positioned properly, which usually means the physical Reference Plane is placed horizontal to the ground. Whatever the viewing device's position relative to the ground, the Reference/Horizontal Plane must be at approximately a 45° angle to the end-user's line of sight for optimum viewing.
  • One way the end user might perform this step is to position their CRT computer monitor on the floor in a stand, so that the Reference/Horizontal Plane is horizontal to the floor. This example uses a CRT-type computer monitor, but it could be any type of viewing device, placed at approximately a 45° angle to the end-user's line of sight.
  • the real-world coordinates of the “End-User's Eye” and the computer-generated Angled Camera point must have a 1:1 correspondence in order for the end user to properly view open space images that appear on and above the Reference/Horizontal Plane ( FIG. 24 ).
  • One way to do this is for the end user to supply the Simulation Engine with their eye's real-world x, y, z location and line-of-sight information relative to the center of the physical Reference/Horizontal Plane. For example, the end user tells the Simulation Engine that their physical eye will be located 12 inches up, and 12 inches back, while looking at the center of the Reference/Horizontal Plane.
  • the Simulation Engine maps the computer-generated Angled Camera point to the End-User's Eye point physical coordinates and line-of-sight.
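  • A worked example of this mapping, using the 12-inch figures above (the coordinate conventions are assumptions for illustration):

        import math

        # The end user reports an eyepoint 12" up and 12" back from the centre of
        # the Reference/Horizontal Plane; the Angled Camera point is set to the
        # same coordinates, giving a 45 degree line of sight to the plane centre.
        eye_up, eye_back = 12.0, 12.0
        camera_point = (0.0, -eye_back, eye_up)              # 1:1 with the physical eye
        angle = math.degrees(math.atan2(eye_up, eye_back))   # 45.0 degrees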
  • The present invention horizontal perspective hands-on simulator employs horizontal perspective projection to mathematically project the 3D objects within the Hands-On and Inner-Access Volumes.
  • the existence of a physical Reference Plane and the knowledge of its coordinates are essential to correctly adjusting the Horizontal Plane's coordinates prior to projection.
  • This adjustment to the Horizontal Plane enables open space images to appear to the end user on the View Surface vs. the Image Layer by taking into account the offset between the Image Layer and the View Surface, which are located at different values along the viewing device's z axis.
  • During projection, the three-dimensional x, y, z point of the object becomes a two-dimensional x, y point on the Horizontal Plane (see FIG. 25).
  • Projection lines often intersect more than one 3D object coordinate, but only one object x, y, z coordinate along a given projection line can become a Horizontal Plane x, y point.
  • the formula to determine which object coordinate becomes a point on the Horizontal Plane is different for each volume. For the Hands-On Volume it is the object coordinate of a given projection line that is farthest from the Horizontal Plane.
  • For the Inner-Access Volume, it is the object coordinate of a given projection line that is closest to the Horizontal Plane.
  • When a projection line intersects both volumes, the Hands-On Volume's 3D object point is used.
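  • The selection rule described above can be sketched as follows; the coordinate convention (Horizontal Plane at z = 0, Hands-On Volume above it, Inner-Access Volume below it) is an assumption for illustration.

        # Of all object points hit by one projection line, pick the one that
        # becomes the Horizontal Plane point for the given volume.
        def visible_point(hits, volume):
            # hits: (x, y, z) object points on one projection line, all in one volume
            if volume == "hands-on":
                return max(hits, key=lambda p: p[2])        # farthest above the plane
            return min(hits, key=lambda p: abs(p[2]))       # closest to the plane (Inner-Access)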
  • FIG. 25 is an illustration of the present invention Simulation Engine that includes the new computer-generated and real physical elements as described above. It also shows that a real-world element and its computer-generated equivalent are mapped 1:1 and together share a common Reference Plane.
  • the full implementation of this Simulation Engine results in a Hands-On Simulator with real-time computer-generated 3D-graphics appearing in open space on and above a viewing device's surface, which is oriented approximately 45° to the end-user's line-of-sight.
  • The Hands-On Simulator further involves adding completely new elements and processes to existing stereoscopic 3D computer hardware.
  • Multi-View provides the end user with multiple and/or separate left- and right-eye views of the same simulation.
  • the simulator further includes a new computer-generated “time dimension” element, called “SI-time”.
  • SI is an acronym for “Simulation Image” and is one complete image displayed on the viewing device.
  • SI-Time is the amount of time the Simulation Engine uses to completely generate and display one Simulation Image. This is similar to a movie projector, which displays an image 24 times a second; therefore, 1/24 of a second is required for one image to be displayed by the projector. But SI-Time is variable, meaning that depending on the complexity of the view volumes it could take 1/120th or 1/2 of a second for the Simulation Engine to complete just one SI.
  • The simulator also includes a new computer-generated “time dimension” element, called “EV-Time”, which is the amount of time used to generate one “Eye-View”. For example, say the Simulation Engine needs to create one left-eye view and one right-eye view to provide the end user with a stereoscopic 3D experience. If it takes the Simulation Engine 1/2 second to generate the left-eye view, then the first EV-Time period is 1/2 second. If it takes another 1/2 second to generate the right-eye view, then the second EV-Time period is also 1/2 second. Since the Simulation Engine was generating a separate left- and right-eye view of the same Simulation Image, the total SI-Time is one second: the first EV-Time was 1/2 second and the second EV-Time was also 1/2 second, making a total SI-Time of one second.
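  • The arithmetic of the example above, restated as a tiny sketch:

        # Two Eye-View times of 1/2 second each make one Simulation Image per second.
        ev_times = [0.5, 0.5]            # seconds: right-eye view, left-eye view
        si_time = sum(ev_times)          # 1.0 second of SI-Time
        print(si_time)                   # one complete Simulation Image per second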
  • FIG. 26 helps illustrate these two new time dimension elements. It is a conceptual drawing of what is occurring inside the Simulation Engine when it is generating a two-eye view of a Simulated Image.
  • the computer-generated person has both eyes open, a requirement for stereoscopic 3D viewing, and therefore sees the bear cub from two separate vantage points, i.e. from both a right-eye view and a left-eye view. These two separate views are slightly different and offset because the average person's eyes are about 2 inches apart. Therefore, each eye sees the world from a separate point in space and the brain puts them together to make a whole image. This is how and why we see the real world in stereoscopic 3D.
  • FIG. 27 is a very high-level Simulation Engine blueprint focusing on how the computer-generated person's two eye views are projected onto the Horizontal Plane and then displayed on a stereoscopic 3D capable viewing device.
  • FIG. 26 represents one complete SI-Time period. If we use the example from step 3 above, SI-Time takes one second. During this one second of SI-Time the Simulation Engine needs to generate two different eye views, because in this example the stereoscopic 3D viewing device requires a separate left- and right-eye view. There are existing stereoscopic 3D viewing devices that require more than a separate left- and right-eye view. But because the method described here can generate multiple views it works for these devices as well.
  • The illustration in the upper left of FIG. 27 shows the Angled Camera point for the right eye at time-element “EV-Time-1”, which means the first Eye-View time period, i.e. the first eye-view to be generated.
  • EV-Time-1 is the time period used by the Simulation Engine to complete the first eye (right-eye) view of the computer-generated person. This is the job for this step, which is within EV-Time-1, and using the Angled Camera at coordinate x, y, z, the Simulation Engine completes the rendering and display of the right-eye view of a given Simulation Image.
  • the Simulation Engine starts the process of rendering the computer-generated person's second eye (left-eye) view.
  • the illustration in the lower left of FIG. 27 shows the Angled Camera point for the left eye at time element “EV-Time-2”. That is, this second eye view is completed during EV-Time-2.
  • step 5 makes an adjustment to the Angled Camera point. This is illustrated in FIG. 27 by the left eye's x coordinate being incremented by two inches. This difference between the right eye's x value and the left eye's x+2′′ is what provides the two-inch separation between the eyes, which is required for stereoscopic 3D viewing.
  • the distances between people's eyes vary but in the above example we are using the average of 2 inches. It is also possible for the end user to supply the Simulation Engine with their personal eye separation value. This would make the x value for the left and right eyes highly accurate for a given end user and thereby improve the quality of their stereoscopic 3D view.
  • Once the Simulation Engine has incremented the Angled Camera point's x coordinate by two inches, or by the personal eye separation value supplied by the end user, it completes the rendering and display of the second (left-eye) view. This is done by the Simulation Engine within the EV-Time-2 period, using the Angled Camera point coordinate x+2″, y, z and the exact same Simulation Image rendered for the first eye view. This completes one SI-Time period.
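  • A minimal sketch of this eye-separation adjustment is given below, assuming the two-inch average discussed above (or a user-supplied personal value); the camera coordinates are made up for illustration.

```python
# Sketch only: deriving the left-eye Angled Camera point from the right-eye point by
# shifting the x coordinate by the eye-separation value.
EYE_SEPARATION_INCHES = 2.0               # average value; may be replaced per user

def left_eye_camera(right_eye_camera, eye_separation=EYE_SEPARATION_INCHES):
    x, y, z = right_eye_camera
    return (x + eye_separation, y, z)     # same y and z, x shifted by the separation

right_cam = (0.0, -10.0, 10.0)            # hypothetical right-eye Angled Camera point
print(left_eye_camera(right_cam))         # -> (2.0, -10.0, 10.0)
```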
  • the Simulation Engine continues to display the left- and right-eye images, as described above, until it needs to move to the next SI-Time period.
  • the job of this step is to determine if it is time to move to a new SI-Time period, and if it is, then increment SI-Time.
  • An example of when this may occur is if the bear cub moves his paw or any part of his body. Then a new and second Simulated Image would be required to show the bear cub in its new position. This new Simulated Image of the bear cub, in a slightly different location, gets rendered during a new SI-Time period, or SI-Time-2.
  • This new SI-time-2 period will have its own EV-Time-1 and EV-Time-2, and therefore the simulation steps described above will be repeated during SI-time-2.
  • This process of generating multiple views via the nonstop incrementing of SI-Time and its EV-Times continues as long as the Simulation Engine is generating real-time simulations in stereoscopic 3D.
  • Multi-View provides the end user with multiple and/or separate left- and right-eye views of the same simulation. Multi-View capability is a significant visual and interactive improvement over the single eye view.
  • the present invention also allows the viewer to move around the three dimensional display without suffering great distortion, since the display can track the viewer's eyepoint and re-display the images accordingly. This is in contrast to conventional prior art three dimensional image displays, where the image is projected and computed as seen from a single viewing point, so that any movement by the viewer away from the intended viewing point in space causes gross distortion.
  • the display system can further comprise a computer capable of re-calculating the projected image given the movement of the eyepoint location.
  • the horizontal perspective images can be very complex, tedious to create, or created in ways that are not natural for artists or cameras, and therefore require the use of a computer system for the tasks.
  • To display a three-dimensional image of an object with complex surfaces or to create animation sequences would demand a lot of computational power and time, and therefore it is a task well suited to the computer.
  • Three dimensional capable electronics and computing hardware devices and real-time computer-generated three dimensional computer graphics have advanced significantly in recent years, with marked innovations in visual, audio and tactile systems, and have produced excellent hardware and software products to generate realism and more natural computer-human interfaces.
  • the horizontal perspective display system of the present invention is not only in demand for entertainment media such as televisions, movies, and video games, but is also needed in various fields such as education (displaying three-dimensional structures) and technological training (displaying three-dimensional equipment).
  • It is further useful for three-dimensional image displays which can be viewed from various angles to enable observation of real objects using object-like images.
  • the horizontal perspective display system is also capable of substituting a computer-generated reality for the viewer's observation.
  • the systems may include audio, visual, motion and inputs from the user in order to create a complete experience of three dimensional illusions.
  • the input for the horizontal perspective system can be a two dimensional image, several images combined to form one single three dimensional image, or a three dimensional model.
  • the three dimensional image or model conveys much more information than a two dimensional image, and by changing the viewing angle, the viewer will get the impression of seeing the same object from different perspectives continuously.
  • the horizontal perspective display can further provide multiple views or “Multi-View” capability.
  • Multi-View provides the viewer with multiple and/or separate left- and right-eye views of the same simulation.
  • Multi-View capability is a significant visual and interactive improvement over the single eye view.
  • both the left eye and right eye images are fused by the viewer's brain into a single, three-dimensional illusion.
  • the problem of the discrepancy between accommodation and convergence of the eyes, which is inherent in stereoscopic images and leads to viewer eye fatigue when the discrepancy is large, can be reduced with the horizontal perspective display, especially for motion images, since the position of the viewer's gaze point changes when the display scene changes.
  • Multi-View devices that can be used in the present invention include methods with glasses, such as the anaglyph method, special polarized glasses, or shutter glasses, and methods without glasses, such as the parallax stereogram, the lenticular method, and the mirror method (concave and convex lenses).
  • a display image for the right eye and a display image for the left eye are respectively superimpose-displayed in two colors, e.g., red and blue, and observation images for the right and left eyes are separated using color filters, thus allowing a viewer to recognize a stereoscopic image.
  • the images are displayed using horizontal perspective technique with the viewer looking down at an angle.
  • the eyepoint of the projected images has to coincide with the eyepoint of the viewer, and therefore the viewer input device is essential in allowing the viewer to observe the three dimensional horizontal perspective illusion. Since the early days of the anaglyph method, there have been many improvements, such as to the spectrum of the red/blue glasses and display, generating much more realism and comfort for the viewers.
  • the left eye image and the right eye image are separated by the use of mutually extinguishing polarizing filters, such as orthogonal linear polarizers, circular polarizers, or elliptical polarizers.
  • the images are normally projected onto screens with polarizing filters and the viewer is then provided with corresponding polarized glasses.
  • the left and right eye images appear on the screen at the same time, but only the left eye polarized light is transmitted through the left eye lens of the eyeglasses and only the right eye polarized light is transmitted through the right eye lens.
  • Another way for stereoscopic display is the image sequential system.
  • the images are displayed sequentially between left eye and right eye images rather than superimposing them upon one another, and the viewer's lenses are synchronized with the screen display to allow the left eye to see only when the left image is displayed, and the right eye to see only when the right image is displayed.
  • the shuttering of the glasses can be achieved by mechanical shuttering or with liquid crystal electronic shuttering.
  • display images for the right and left eyes are alternately displayed on a CRT in a time sharing manner, and observation images for the right and left eyes are separated using time sharing shutter glasses which are opened/closed in a time sharing manner in synchronism with the display images, thus allowing an observer to recognize a stereoscopic image.
  • Another way to display stereoscopic images is by the optical method.
  • display images for the right and left eyes, which are separately displayed on a viewer, are superimpose-displayed as observation images in front of an observer using optical means such as prisms, mirrors, lenses, and the like, thus allowing the observer to recognize a stereoscopic image.
  • Large convex or concave lenses can also be used where two image projectors, projecting left eye and right eye images, are providing focus to the viewer's left and right eye respectively.
  • a variation of the optical method is the lenticular method where the images form on cylindrical lens elements or two dimensional array of lens elements.
  • FIG. 27 illustrates the horizontal perspective display, focusing on how the computer-generated person's two eye views are projected onto the Horizontal Plane and then displayed on a stereoscopic 3D capable viewing device.
  • FIG. 27 represents one complete display time period. During this display time period, the horizontal perspective display needs to generate two different eye views, because in this example the stereoscopic 3D viewing device requires a separate left- and right-eye view.
  • the illustration in the upper left of FIG. 27 shows the Angled Camera point for the right eye while the first (right) eye-view is being generated.
  • the horizontal perspective display starts the process of rendering the computer-generated person's second eye (left-eye) view.
  • the illustration in the lower left of FIG. 27 shows the Angled Camera point for the left eye after the completion of the first eye-view.
  • the horizontal perspective display makes an adjustment to the Angled Camera point. This is illustrated in FIG. 27 by the left eye's x coordinate being incremented by two inches. This difference between the right eye's x value and the left eye's x+2′′ is what provides the two-inch separation between the eyes, which is required for stereoscopic 3D viewing.
  • the distances between people's eyes vary, but in the above example we are using the average of 2 inches. It is also possible for the viewer to supply the horizontal perspective display with their personal eye separation value. This would make the x value for the left and right eyes highly accurate for a given viewer and thereby improve the quality of their stereoscopic 3D view.
  • the rendering continues by displaying the second (left-eye) view.
  • the horizontal perspective display continues to display the left- and right-eye images, as described above, until it needs to move to the next display time period.
  • An example of when this may occur is if the bear cub moves his paw or any part of his body. Then a new and second Simulated Image would be required to show the bear cub in its new position.
  • This new Simulated Image of the bear cub, in a slightly different location, gets rendered during a new display time period. This process of generating multiple views via the nonstop incrementing of display time continues as long as the horizontal perspective display is generating real-time simulations in stereoscopic 3D.
  • the display time is the amount of time the display uses to completely generate and display one image, and the display rate is the number of such images displayed per second. This is similar to a movie projector where 24 times a second it displays an image; therefore, 1/24 of a second is required for one image to be displayed by the projector. But the display time can be variable, meaning that depending on the complexity of the view volumes it could take 1/120th or ½ a second for the computer to complete just one display image. Since the display was generating separate left- and right-eye views of the same image, the total display time is twice the display time for one eye image.
  • FIG. 28 shows a horizontal plane as related to both central perspective and horizontal perspective.
  • the present invention hands-on simulator further includes technologies employed in computer “peripherals”.
  • FIG. 29 shows examples of such Peripherals with six degrees of freedom, meaning that their coordinate system enables them to interact at any given point in an (x, y, z) space.
  • the simulator creates a “Peripheral Open-Access Volume” for each Peripheral the end-user requires, such as the Space Glove in FIG. 29.
  • FIG. 30 is a high-level illustration of the Hands-On Simulation Tool, focusing on how a Peripheral's coordinate system is implemented within the Hands-On Simulation Tool.
  • the new Peripheral Open-Access Volume, which as an example in FIG. 30 is labeled “Space Glove,” is mapped one-to-one with the “Open-Access Real Volume” and “Open-Access Computer-generated Volume.”
  • the key to achieving a precise one-to-one mapping is to calibrate the Peripheral's volume with the Common Reference, which is the physical View surface, located at the viewing surface of the display device.
  • Some Peripherals provide a mechanism that enables the Hands-On Simulation Tool to perform this calibration without any end-user involvement. But if calibrating the Peripheral requires external intervention, then the end-user will accomplish this through an “Open-Access Peripheral Calibration” procedure. This procedure provides the end-user with a series of Simulations within the Hands-On Volume and a user-friendly interface that enables them to adjust the location of the Peripheral's volume until it is in perfect synchronization with the View surface. When the calibration procedure is complete, the Hands-On Simulation Tool saves the information in the end-user's personal profile.
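  • A hedged sketch of such a calibration step follows: a single offset aligning the Peripheral's coordinates with the Common Reference is computed and saved to a user profile. The one-point calibration, the function names, and the profile file name are assumptions for illustration only.

```python
# Illustrative calibration sketch: align a Peripheral's volume with the Common
# Reference (the physical View surface) using one known surface point.
import json

def calibrate(peripheral_point_on_surface, reference_point_on_surface):
    """Offset that maps peripheral coordinates onto the Common Reference."""
    return tuple(r - p for p, r in zip(peripheral_point_on_surface,
                                       reference_point_on_surface))

def to_reference(peripheral_point, offset):
    return tuple(p + o for p, o in zip(peripheral_point, offset))

# the end-user touches a known point on the View surface with the Peripheral
offset = calibrate((0.1, -0.2, 0.05), (0.0, 0.0, 0.0))
with open("user_profile.json", "w") as f:          # saved to the end-user's profile
    json.dump({"peripheral_offset": offset}, f)
print(to_reference((1.0, 1.0, 0.05), offset))
```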
  • the Hands-On Simulation Tool will continuously track and map the Peripheral's volume to the Open-Access Volumes.
  • the Hands-On Simulation Tool modifies each Hands-On Image it generates based on the data in the Peripheral's volume.
  • the end result of this process is the end-user's ability to use any given Peripheral to interact with Simulations within the Hands-On Volume generated in real-time by the Hands-On Simulation Tool.
  • with the peripherals linked to the simulator, the user can interact with the displayed model.
  • the Simulation Engine can get the inputs from the user through the peripherals and perform the desired manipulation.
  • the simulator can provide proper interaction and display.
  • the invention Hands-On Simulator then can generate a totally new and unique computing experience in that it enables an end user to interact physically and directly (Hands-On) with real-time computer-generated 3D graphics (Simulations), which appear in open space above the viewing surface of a display device, i.e. in the end user's own physical space.
  • the peripheral tracking can be done through camera triangulation or through infrared tracking devices.
  • the simulator can further include 3D audio devices for “SIMULATION RECOGNITION & 3D AUDIO”. This results in a new invention in the form of a Hands-On Simulation Tool with its Camera Model, Horizontal Multi-View Device, Peripheral Devices, Frequency Receiving/Sending Devices, and Handheld Devices as described below.
  • Triangulation is a process employing trigonometry, sensors, and frequencies to “receive” data from simulations in order to determine their precise location in space. It is for this reason that triangulation is a mainstay of the cartography and surveying industries where the sensors and frequencies they use include but are not limited to cameras, lasers, radar, and microwave.
  • 3D Audio also uses triangulation, but in the opposite way: 3D Audio “sends” or projects data in the form of sound to a specific location. But whether you are sending or receiving data, the location of the simulation in three-dimensional space is determined by triangulation with frequency receiving/sending devices.
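  • As an illustration of the triangulation idea (not the patent's algorithm), the Python sketch below estimates a 3D position as the least-squares intersection of rays observed from several known sensor positions; the sensor positions and target are invented numbers.

```python
# Simplified triangulation sketch: locate a point from rays seen by known sensors.
import numpy as np

def triangulate(origins, directions):
    """origins[i] is a sensor position; directions[i] points from it toward the target."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, float) / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)    # projector onto the plane normal to the ray
        A += P
        b += P @ np.asarray(o, float)
    return np.linalg.solve(A, b)

origins = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]   # three sensor positions
target = np.array([0.3, 0.4, 0.5])                              # point to be located
directions = [target - np.asarray(o, float) for o in origins]   # ideal, noise-free rays
print(triangulate(origins, directions))                         # ~ [0.3, 0.4, 0.5]
```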
  • the device can effectively emulate the position of the sound source.
  • the sounds reaching the ears will need to be isolated to avoid interference.
  • the isolation can be accomplished by the use of earphones or the like.
  • FIG. 31 shows an end-user looking at a Hands-On Image of a bear cub. Since the cub appears in open space above the viewing surface, the end-user can reach in and manipulate the cub by hand or with a handheld tool. It is also possible for the end-user to view the cub from different angles, as they would in real life. This is accomplished through the use of triangulation, where the three real-world cameras continuously send images from their unique angles of view to the Hands-On Simulation Tool. This camera data of the real world enables the Hands-On Simulation Tool to locate, track, and map the end-user's body and other real-world simulations positioned within and around the computer monitor's viewing surface ( FIG. 32 ).
  • FIG. 33 also shows the end-user viewing and interacting with the bear cub, but it includes 3D sounds emanating from the cub's mouth.
  • To accomplish this level of audio quality requires physically combining each of the three cameras with a separate speaker, as shown in FIG. 32 .
  • the cameras' data enables the Hands-On Simulation Tool to use triangulation in order to locate, track, and map the end-user's “left and right ear”. And since the Hands-On Simulation Tool is generating the bear cub as a computer-generated Hands-On Image it knows the exact location of the cub's mouth.
  • the Hands-On Simulation Tool uses triangulation to send data, modifying the spatial characteristics of the audio and making it appear that 3D sound is emanating from the cub's computer-generated mouth.
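  • A toy sketch of such audio spatialization is shown below, deriving a per-ear gain and arrival delay from the distance between the sound source and each tracked ear. The inverse-square attenuation and the positions are illustrative assumptions, not the patent's audio pipeline.

```python
# Toy 3D-audio sketch: per-ear gain and delay from source-to-ear distance.
import math

SPEED_OF_SOUND = 343.0                    # meters per second

def ear_parameters(source, ear):
    d = math.dist(source, ear)
    gain = 1.0 / max(d, 0.1) ** 2         # simple inverse-square attenuation
    delay = d / SPEED_OF_SOUND            # arrival delay in seconds
    return gain, delay

cub_mouth = (0.0, 0.1, 0.3)               # position of the computer-generated cub's mouth
left_ear, right_ear = (-0.09, 0.0, 0.6), (0.09, 0.0, 0.6)   # tracked ear positions
print("left :", ear_parameters(cub_mouth, left_ear))
print("right:", ear_parameters(cub_mouth, right_ear))
```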
  • these new camera/speaker devices are attached to, or placed near, a viewing device such as a computer monitor, as previously shown in FIG. 32, and together they cover a real-world x, y, z space.
  • Triangulation works by separating and positioning each camera/speaker device such that their individual frequency receiving/sending volumes overlap and cover the exact same area of space. If you have three widely spaced frequency receiving/sending volumes covering the exact same area of space, then any simulation within that space can be accurately located. The next step creates a new element in the Open-Access Camera Model for this real-world space; in FIG. 33 it is labeled “real frequency receiving/sending volume”.
  • FIG. 34 is a simplified illustration of the complete Open-Access Camera Model and will assist in explaining each of the additional steps required to accomplish the scenarios described in FIGS. 32 and 33 above.
  • the simulator then performs simulation recognition by continuously locating and tracking the end-user's “left and right eye” and their “line-of-sight”, continuously mapping the real-world left- and right-eye coordinates into the Open-Access Camera Model precisely where they are in real space, and continuously adjusting the computer-generated camera coordinates to match the real-world eye coordinates that are being located, tracked, and mapped.
  • This enables the real-time generation of Simulations within the Hands-On Volume based on the exact location of the end-user's left and right eyes, allowing the end-user to freely move their head and look around the Hands-On Image without distortion.
  • the simulator then performs simulation recognition by continuously locating and tracking the end-user's “left and right ear” and their “line-of-hearing”, continuously mapping the real-world left- and right-ear coordinates into the Open-Access Camera Model precisely where they are in real space, and continuously adjusting the 3D Audio coordinates to match the real-world ear coordinates that are being located, tracked, and mapped.
  • This enables the real-time generation of Open-Access sounds based on the exact location of the end-user's left and right ears, allowing the end-user to freely move their head and still hear Open-Access sounds emanating from their correct location.
  • the simulator then performs simulation recognition by continuously locating and tracking the end-user's “left and right hand” and their “digits,” i.e. fingers and thumbs, continuously mapping the real-world left- and right-hand coordinates into the Open-Access Camera Model precisely where they are in real space, and continuously adjusting the Hands-On Image coordinates to match the real-world hand coordinates that are being located, tracked, and mapped.
  • This enables the real-time generation of Simulations within the Hands-On Volume based on the exact location of the end-user's left and right hands, allowing the end-user to freely interact with Simulations within the Hands-On Volume.
  • the simulator then performs simulation recognition by continuously locating and tracking “handheld tools”, continuously mapping these real-world handheld tool coordinates into the Open-Access Camera Model precisely where they are in real space, and continuously adjusting the Hands-On Image coordinates to match the real-world handheld tool coordinates that are being located, tracked, and mapped.
  • This enables the real-time generation of Simulations within the Hands-On Volume based on the exact location of the handheld tools, allowing the end-user to freely interact with Simulations within the Hands-On Volume.
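  • The tracking-and-adjusting steps above can be summarized in a short per-frame loop. The tracker and engine objects and their method names below are hypothetical stand-ins, shown only to illustrate the flow of mapping real-world coordinates into the camera model each frame.

```python
# Hypothetical per-frame flow (illustrative only): tracked real-world coordinates are
# mapped 1:1 into the camera model so rendered Simulations and 3D sounds stay
# registered with the end-user. `tracker` and `engine` are assumed interfaces.
def simulation_loop(tracker, engine, frames=3):
    for _ in range(frames):
        eyes = tracker.locate("eyes")         # real-world left/right eye coordinates
        ears = tracker.locate("ears")         # real-world left/right ear coordinates
        hands = tracker.locate("hands")       # hands, digits, and handheld tools
        engine.set_camera_points(eyes)        # cameras follow the real eyepoints
        engine.set_audio_targets(ears)        # 3D audio follows the real ears
        engine.apply_hand_interaction(hands)  # hands and tools manipulate Simulations
        engine.render_and_play()              # draw the Hands-On Image and play sounds
```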
  • FIG. 35 is intended to assist in further explaining unique discoveries regarding the new Open-Access Camera Model and handheld tools.
  • FIG. 35 is a simulation of an end-user interacting with a Hands-On Image using a handheld tool.
  • the scenario being illustrated is the end-user visualizing large amounts of financial data as a number of interrelated Open-Access 3D simulations.
  • the end-user can probe and manipulate the Open-Access simulations by using a handheld tool, which in FIG. 35 looks like a pointing device.
  • a “computer-generated attachment” is mapped in the form of an Open-Access computer-generated simulation onto the tip of a handheld tool, which in FIG. 35 appears to the end-user as a computer-generated “eraser”.
  • the end-user can of course request that the Hands-On Simulation Tool map any number of computer-generated attachments to a given handheld tool. For example, there can be different computer-generated attachments with unique visual and audio characteristics for cutting, pasting, welding, painting, smearing, pointing, grabbing, etc. And each of these computer-generated attachments would act and sound like the real device they are simulating when they are mapped to the tip of the end-user's handheld tool.
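  • A small sketch of this attachment mapping follows; the attachment catalog and the returned structure are invented for illustration. The point is simply that the computer-generated attachment inherits the tracked tool-tip coordinates one-to-one.

```python
# Illustrative sketch: map a computer-generated attachment onto a tracked tool tip.
ATTACHMENTS = {"eraser": "erase", "brush": "paint", "torch": "weld"}   # assumed catalog

def place_attachment(tool_tip_xyz, attachment="eraser"):
    if attachment not in ATTACHMENTS:
        raise ValueError(f"unknown attachment: {attachment}")
    # the attachment simply inherits the tracked tool-tip coordinates
    return {"attachment": attachment,
            "action": ATTACHMENTS[attachment],
            "position": tuple(tool_tip_xyz)}

print(place_attachment((0.12, 0.30, 0.05), "eraser"))
```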
  • FIG. 36 illustrates an example of the present invention Multi-Plane display in which the Multi-Plane display is a computer monitor that is approximately “L” shaped when open.
  • the end-user views the L-shaped computer monitor from its concave side and at approximately a 45° angle to the bottom of the “L,” as shown in FIG. 36 .
  • From the end-user's point of view the entire L-shaped computer monitor appears as one single and seamless viewing surface.
  • the edge between the two display segments is preferably smoothly joined, and can also have a curvilinear projection to connect the two displays of horizontal perspective and central perspective.
  • the Multi-Plane display can be made with one or more physical viewing surfaces.
  • the vertical leg of the “L” can be one physical viewing surface, such as a flat panel display.
  • the horizontal leg of the “L” can be a separate flat panel display.
  • the edge of the two display segments can be a non-display segment, and therefore the two viewing surfaces are not continuous.
  • Each leg of a Multi-Plane display is called a viewing plane. As can be seen in the upper left of FIG. 36, there is a vertical viewing plane and a horizontal viewing plane: a central perspective image is generated on the vertical plane, a horizontal perspective image is generated on the horizontal plane, and the two images are then blended where the planes meet, as illustrated in the lower right of FIG. 36.
  • FIG. 36 also illustrates that a Multi-Plane display is capable of generating multiple views. Meaning that it can display single-view images, i.e. a one-eye perspective like the simulation in the upper left, and/or multi-view images, i.e. separate right and left eye views like the simulation in the lower right. And when the L-shaped computer monitor is not being used by the end-user it can be closed and look like the simulation in the lower left.
  • FIG. 37 is a simplified illustration of the present invention Multi-Plane display.
  • the upper right of FIG. 37 is an example of a single-view image of a bear cub that is displayed on an L-shaped computer monitor.
  • Normally a single-view or one-eye image would be generated with only one camera point, but as you can see there are at least two camera points for the Multi-Plane display even though this is a single-view example.
  • One camera point is for the horizontal perspective image, which is displayed on the horizontal surface, and the other camera point is for the central perspective image, which is displayed on the vertical surface.
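  • To illustrate the two camera points, the sketch below projects the same scene point twice under assumed coordinates (horizontal surface at z = 0, vertical surface at y = 0, made-up eyepoints): once toward the horizontal-perspective camera point and once toward the central-perspective one.

```python
# Minimal dual-projection sketch for a Multi-Plane display (assumed coordinates).
import numpy as np

def project_onto_plane(eye, point, plane_axis, plane_value=0.0):
    """Intersect the projection line eye -> point with the plane axis = plane_value."""
    eye, point = np.asarray(eye, float), np.asarray(point, float)
    d = point - eye
    t = (plane_value - eye[plane_axis]) / d[plane_axis]
    return eye + t * d

scene_point = (0.2, 0.4, 0.3)
horizontal_eye = (0.0, -0.5, 0.5)         # hypothetical eyepoint for the horizontal leg
vertical_eye = (0.0, -0.8, 0.2)           # hypothetical eyepoint for the vertical leg
print("horizontal image:", project_onto_plane(horizontal_eye, scene_point, plane_axis=2))
print("central image:   ", project_onto_plane(vertical_eye, scene_point, plane_axis=1))
```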
  • the vertical viewing plane of the L-shaped monitor is the display surface for the central perspective images, and thus there is a need to define another common reference plane for this surface.
  • the common reference plane is the plane where the images are displayed, and the computer needs to keep track of this plane for the synchronization of the locations of the displayed images and the real physical locations.
  • the multi-plane display system can further include a curvilinear connection display section to blend the horizontal perspective and the central perspective images together at the location of the seam in the “L,” as shown at the bottom of FIG. 37 .
  • the multi-plane display system can continuously update and display what appears to be a single L-shaped image on the L-shaped Multi-Plane device.
  • the multi-plane display system can comprise multiple display surfaces together with multiple curvilinear blending sections as shown in FIG. 38 .
  • the multiple display surfaces can be a flat wall, multiple adjacent flat walls, a dome, and a curved wraparound panel.
  • the present invention multi-plane display system thus can simultaneously project a plurality of three dimensional images onto multiple display surfaces, one of which is a horizontal perspective image. Further, it can be a stereoscopic multiple display system allowing viewers to use their stereoscopic vision for three dimensional image presentation.
  • Since the multi-plane display system comprises at least two display surfaces, various requirements need to be addressed to ensure high fidelity in the three dimensional image projection.
  • the display requirements are typically: geometric accuracy, to ensure that objects and features of the image are correctly positioned; edge-match accuracy, to ensure continuity between display surfaces; no blending variation, to ensure that there is no variation in luminance in the blending section of the various display surfaces; and field of view, to ensure a continuous image from the eyepoint of the viewer.
  • since the blending section of the multi-plane display system is preferably a curved surface, some distortion correction could be applied in order for the image projected onto the blending section surface to appear correct to the viewer.
  • There are various solutions for providing distortion correction to a display system, such as using a test pattern image, designing the image projection system for the specific curved blending display section, using special video hardware, or utilizing a piecewise-linear approximation for the curved blending section.
  • Still another distortion correction solution for curved surface projection is to automatically compute the image distortion correction for any given position of the viewer eyepoint and the projector.
  • When the multi-plane display system comprises more than one display surface, care should be taken to minimize the seams and gaps between the edges of the respective displays.
  • the overlapped image is calculated by an image processor to ensure that the projected pixels in the overlapped areas are adjusted to form the proper displayed images.
  • Other solutions are to control the degree of intensity reduction in the overlapping areas to create a smooth transition from the image of one display surface to the next.
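  • One simple form of such intensity control is a linear cross-fade across the overlap, sketched below with made-up dimensions; real systems may use more elaborate blending curves.

```python
# Illustrative edge-blending sketch: ramp intensity across the overlap of two displays
# so that the combined brightness stays constant from one surface to the next.
def blend_weights(width, overlap):
    """Per-column weights for display A (fades out) and display B (fades in)."""
    a, b = [], []
    for col in range(width):
        if col < width - overlap:
            wa = 1.0
        else:
            wa = (width - 1 - col) / max(overlap - 1, 1)   # linear ramp 1 -> 0
        a.append(wa)
        b.append(1.0 - wa)
    return a, b

wa, wb = blend_weights(width=8, overlap=4)
print([round(x, 2) for x in wa])          # display A weights
print([round(x, 2) for x in wb])          # display B weights
```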

Abstract

The present invention discloses a multi-plane hands-on simulator system comprising at least two display surfaces, one of which displays a three dimensional horizontal perspective image. Further, the display surfaces can have a curvilinear blending display section to merge the various images. The multi-plane hands-on simulator can comprise various camera eyepoints, one for the horizontal perspective images, and optionally one for the curvilinear blending display surface. The multi-plane display surface can further adjust the various images to accommodate the position of the viewer. The multi-plane hands-on simulator system can project horizontal perspective images into open space and includes a peripheral device that allows the end user to manipulate the images with hands or hand-held tools.

Description

  • This application claims priority from U.S. provisional applications Ser. No. 60/576,187 filed Jun. 1, 2004, entitled “Multi plane horizontal perspective display”; Ser. No. 60/576,189 filed Jun. 1, 2004, entitled “Multi plane horizontal perspective hand on simulator”; Ser. No. 60/576,182 filed Jun. 1, 2004, entitled “Binaural horizontal perspective display”; and Ser. No. 60/576,181 filed Jun. 1, 2004, entitled “Binaural horizontal perspective hand on simulator”, which are incorporated herein by reference.
  • This application is related to co-pending application Ser. No. 11/098,681 filed Apr. 4, 2005, entitled “Horizontal projection display”; Ser. No. 11/098,685 filed Apr. 4, 2005, entitled “Horizontal projection display”, Ser. No. 11/098,667 filed Apr. 4, 2005, entitled “Horizontal projection hands-on simulator”; Ser. No. 11/098,682 filed Apr. 4, 2005, entitled “Horizontal projection hands-on simulator”; “Multi plane horizontal perspective display” filed May 27, 2005; “Multi plane horizontal perspective hand on simulator” filed May 27, 2005; “Binaural horizontal perspective display” filed May 27, 2005; and “Binaural horizontal perspective hand on simulator” filed May 27, 2005.
  • FIELD OF INVENTION
  • This invention relates to a three-dimensional simulator system, and in particular, to a multi-plane hands-on computer simulator system capable of operator interaction.
  • BACKGROUND OF THE INVENTION
  • Three dimensional (3D) capable electronics and computing hardware devices and real-time computer-generated 3D computer graphics have been a popular area of computer science for the past few decades, with innovations in visual, audio and tactile systems. Much of the research in this area has produced hardware and software products that are specifically designed to generate greater realism and more natural computer-human interfaces. These innovations have significantly enhanced and simplified the end-user's computing experience.
  • Ever since humans began to communicate through pictures, they faced a dilemma of how to accurately represent the three-dimensional world they lived in. Sculpture was used to successfully depict three-dimensional objects, but was not adequate to communicate spatial relationships between objects and within environments. To do this, early humans attempted to “flatten” what they saw around them onto two-dimensional, vertical planes (e.g. paintings, drawings, tapestries, etc.). Scenes where a person stood upright, surrounded by trees, were rendered relatively successfully on a vertical plane. But how could they represent a landscape, where the ground extended out horizontally from where the artist was standing, as far as the eye could see?
  • The answer is three dimensional illusions. The two dimensional pictures must provide a number of cues of the third dimension to the brain to create the illusion of three dimensional images. This effect of third-dimension cues can be realistically achieved because the brain is quite accustomed to it. The three dimensional real world is always and already converted into a two dimensional (e.g. height and width) projected image at the retina, a concave surface at the back of the eye. And from this two dimensional image, the brain, through experience and perception, generates the depth information to form the three dimensional visual image from two types of depth cues: monocular (one eye perception) and binocular (two eye perception). In general, binocular depth cues are innate and biological while monocular depth cues are learned and environmental.
  • The major binocular depth cues are convergence and retinal disparity. The brain measures the amount of convergence of the eyes to provide a rough estimate of the distance since the angle between the line of sight of each eye is larger when an object is closer. The disparity of the retinal images due to the separation of the two eyes is used to create the perception of depth. The effect is called stereoscopy where each eye receives a slightly different view of a scene, and the brain fuses them together using these differences to determine the ratio of distances between nearby objects.
  • Binocular cues provide a very powerful perception of depth. However, there are also depth cues with only one eye, called monocular depth cues, to create an impression of depth on a flat image. The major monocular cues are: overlapping, relative size, linear perspective, and light and shadow. When an object is viewed partially covered, this pattern of blocking is used as a cue to determine that the object is farther away. When two objects are known to be the same size and one appears smaller than the other, this pattern of relative size is used as a cue to assume that the smaller object is farther away. The cue of relative size also provides the basis for the cue of linear perspective where the farther away the lines are from the observer, the closer together they will appear since parallel lines in a perspective image appear to converge towards a single point. The light falling on an object from a certain angle could provide the cue for the form and depth of an object. The distribution of light and shadow on objects is a powerful monocular cue for depth provided by the biologically correct assumption that light comes from above.
  • Perspective drawing, together with relative size, is most often used to achieve the illusion of three dimension depth and spatial relationships on a flat (two dimension) surface, such as paper or canvas. Through perspective, three dimension objects are depicted on a two dimension plane, but “trick” the eye into appearing to be in three dimension space. The first theoretical treatise for constructing perspective, De Pictura, was published in the early 1400's by the architect, Leone Battista Alberti. Since the introduction of his book, the details behind “general” perspective have been very well documented. However, the fact that there are a number of other types of perspectives is not well known. Some examples are military, cavalier, isometric, and dimetric, as shown at the top of FIG. 1.
  • Of special interest is the most common type of perspective, called central perspective, shown at the bottom left of FIG. 1. Central perspective, also called one-point perspective, is the simplest kind of “genuine” perspective construction, and is often taught in art and drafting classes for beginners. FIG. 2 further illustrates central perspective. Using central perspective, the chess board and chess pieces look like three dimension objects, even though they are drawn on a two dimensional flat piece of paper. Central perspective has a central vanishing point, and rectangular objects are placed so their front sides are parallel to the picture plane. The depth of the objects is perpendicular to the picture plane. All parallel receding edges run towards a central vanishing point. The viewer looks towards this vanishing point with a straight view. When an architect or artist creates a drawing using central perspective, they must use a single-eye view. That is, the artist creating the drawing captures the image by looking through only one eye, which is perpendicular to the drawing surface.
  • The vast majority of images, including central perspective images, are displayed, viewed and captured in a plane perpendicular to the line of vision. Viewing the images at an angle different from 90° would result in image distortion, meaning a square would be seen as a rectangle when the viewing surface is not perpendicular to the line of vision.
  • Central perspective is employed extensively in 3D computer graphics, for a myriad of applications, such as scientific, data visualization, computer-generated prototyping, special effects for movies, medical imaging, and architecture, to name just a few. One of the most common and well-known 3D computing applications is 3D gaming, which is used here as an example, because the core concepts used in 3D gaming extend to all other 3D computing applications.
  • FIG. 3 is a simple illustration, intended to set the stage by listing the basic components necessary to achieve a high level of realism in 3D software applications. At its highest level, 3D game development consists of four essential components:
      • 1. Design: Creation of the game's story line and game play
      • 2. Content: The objects (figures, landscapes, etc.) that come to life during game play
      • 3. Artificial Intelligence (AI): Controls interaction with the content during game play
      • 4. Real-time computer-generated 3D graphics engine (3D graphics engine):
        • Manages the design, content, and AI data. Decides what to draw, and how to draw it, then renders (displays) it on a computer monitor
  • A person using a 3D application, such as a game, is in fact running software in the form of a real-time computer-generated 3D graphics engine. One of the engine's key components is the renderer. Its job is to take 3D objects that exist within computer-generated world coordinates x, y, z, and render (draw/display) them onto the computer monitor's viewing surface, which is a flat (2D) plane, with real world coordinates x, y.
  • FIG. 4 is a representation of what is happening inside the computer when running a 3D graphics engine. Within every 3D game there exists a computer-generated 3D “world.” This world contains everything that could be experienced during game play. It also uses the Cartesian coordinate system, meaning it has three spatial dimensions x, y, and z. These three dimensions are referred to as “virtual world coordinates”. Game play for a typical 3D game might begin with a computer-generated-3D earth and a computer-generated-3D satellite orbiting it. The virtual world coordinate system enables the earth and satellite to be properly positioned in computer-generated x, y, z space.
  • As they move through time, the satellite and earth must stay properly synchronized. To accomplish this, the 3D graphics engine creates a fourth universal dimension for computer-generated time, t. For every tick of time t, the 3D graphics engine regenerates the satellite at its new location and orientation as it orbits the spinning earth. Therefore, a key job for a 3D graphics engine is to continuously synchronize and regenerate all 3D objects within all four computer-generated dimensions x, y, z, and t.
  • FIG. 5 is a conceptual illustration of what happens inside the computer when an end-user is playing, i.e. running, a first-person 3D application. First-person means that the computer monitor is much like a window, through which the person playing the game views the computer-generated world. To generate this view, the 3D graphics engine renders the scene from the point of view of the eye of a computer-generated person. The computer-generated person can be thought of as a computer-generated or “virtual” simulation of the “real” person actually playing the game.
  • While running a 3D application the real person, i.e. the end-user, views only a small segment of the entire 3D world at any given time. This is done because it is computationally expensive for the computer's hardware to generate the enormous number of 3D objects in a typical 3D application, the majority of which the end-user is not currently focused on. Therefore, a critical job for the 3D graphics engine is to minimize the computer hardware's computational burden by drawing/rendering as little information as absolutely necessary during each tick of computer-generated time t.
  • The boxed-in area in FIG. 5 conceptually represents how a 3D graphics engine minimizes the hardware's burden. It focuses computational resources on extremely small areas of information as compared to the 3D applications entire world. In this example, it is a “computer-generated” polar bear cub being observed by a “computer-generated” virtual person. Because the end user is running in first-person everything the computer-generated person sees is rendered onto the end-user's monitor, i.e. the end user is looking through the eye of the computer-generated person.
  • In FIG. 5 the computer-generated person is looking through only one eye; in other words, a one-eyed view. This is because the 3D graphics engine's renderer uses central perspective to draw/render 3D objects onto a 2D surface, which requires viewing through only one eye. The area that the computer-generated person sees with a one-eye view is called the “view volume”, and the computer-generated 3D objects within this view volume are what actually get rendered to the computer monitor's 2D viewing surface.
  • FIG. 6 illustrates a view volume in more detail. A view volume is a subset of a “camera model”. A camera model is a blueprint that defines the characteristics of both the hardware and software of a 3D graphics engine. Like a very complex and sophisticated automobile engine, a 3D graphics engine consists of so many parts that camera models are often simplified to illustrate only the essential elements being referenced.
  • The camera model depicted in FIG. 6 shows a 3D graphics engine using central perspective to render computer-generated 3D objects to a computer monitor's vertical, 2D viewing surface. The view volume shown in FIG. 6, although more detailed, is the same view volume represented in FIG. 5. The only difference is semantics because a 3D graphics engine calls the computer-generated person's one-eye view a camera point (hence camera model).
  • Every component of a camera model is called an “element”. In our simplified camera model, the element called the near clip plane is the 2D plane onto which the x, y, z coordinates of the 3D objects within the view volume will be rendered. Each projection line starts at the camera point, and ends at an x, y, z coordinate point of a virtual 3D object within the view volume. The 3D graphics engine then determines where the projection line intersects the near clip plane, and the x and y point where this intersection occurs is rendered onto the near clip plane. Once the 3D graphics engine's renderer completes all necessary mathematical projections, the near clip plane is displayed on the 2D viewing surface of the computer monitor, as shown in FIG. 6.
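  • The projection just described can be sketched as follows. This is a minimal illustration assuming a camera at the origin looking down the +z axis and a near clip plane at z = near; it is not the engine's actual renderer.

```python
# Minimal central-perspective sketch: intersect the projection line from the camera
# point (origin) through an object point with the near clip plane at z = near.
def project_to_near_clip_plane(point, near=1.0):
    x, y, z = point
    if z <= 0:
        return None                       # behind the camera: nothing to render
    scale = near / z                      # similar triangles along the projection line
    return (x * scale, y * scale)         # the (x, y) hit on the near clip plane

print(project_to_near_clip_plane((2.0, 1.0, 4.0)))   # -> (0.5, 0.25)
```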
  • The basis of prior art 3D computer graphics is the central perspective projection. 3D central perspective projection, though offering a realistic 3D illusion, has some limitations in allowing the user to have hands-on interaction with the 3D display.
  • There is a little-known class of images that we call “horizontal perspective,” where the image appears distorted when viewed head on, but displays a three dimensional illusion when viewed from the correct viewing position. In horizontal perspective, the angle between the viewing surface and the line of vision is preferably 45° but can be almost any angle, and the viewing surface is preferably horizontal (hence the name “horizontal perspective”), but it can be any surface, as long as the line of vision forms a non-perpendicular angle to it.
  • Horizontal perspective images offer a realistic three dimensional illusion, but are little known primarily due to the narrow viewing location (the viewer's eyepoint has to coincide precisely with the image projection eyepoint), and the complexity involved in projecting the two dimensional image or the three dimensional model into the horizontal perspective image.
  • The generation of horizontal perspective images requires considerably more expertise than that of conventional perpendicular images. Conventional perpendicular images can be produced directly from the viewer or camera point. One need simply open one's eyes or point the camera in any direction to obtain the images. Further, with much experience in viewing three dimensional depth cues from perpendicular images, viewers can tolerate a significant amount of distortion generated by deviations from the camera point. In contrast, the creation of a horizontal perspective image does require much manipulation. A conventional camera, by projecting the image onto the plane perpendicular to the line of sight, would not produce a horizontal perspective image. Making a horizontal drawing requires much effort and is very time consuming. Further, since humans have limited experience with horizontal perspective images, the viewer's eye must be positioned precisely where the projection eyepoint is to avoid image distortion. And therefore horizontal perspective, with its difficulties, has received little attention.
  • SUMMARY OF THE INVENTION
  • The present invention recognizes that the personal computer is perfectly suitable for horizontal perspective display. It is personal, thus it is designed for the operation of one person, and the computer, with its powerful microprocessor, is well capable of rendering various horizontal perspective images to the viewer. Further, horizontal perspective offers open space display of 3D images, thus allowing the hands-on interaction of the end users.
  • Thus the present invention discloses a multi-plane hands-on simulator system comprising at least two display surfaces, one of which displays three dimensional horizontal perspective images. The other display surfaces can display two dimensional images, or preferably three dimensional central perspective images. Further, the display surfaces can have a curvilinear blending display section to merge the various images. The multi-plane hands-on simulator can comprise various camera eyepoints, one for the horizontal perspective images, one for the central perspective images, and optionally one for the curvilinear blending display surface. The multi-plane display surface can further adjust the various images to accommodate the position of the viewer. By changing the displayed images to keep the camera eyepoints of the horizontal perspective and central perspective images in the same position as the viewer's eye point, the viewer's eye is always positioned at the proper viewing position to perceive the three dimensional illusion, thus minimizing the viewer's discomfort and distortion. The display can accept manual input such as a computer mouse, trackball, joystick, tablet, etc. to re-position the horizontal perspective images. The display can also automatically re-position the images based on an input device automatically providing the viewer's viewpoint location. The multi-plane hands-on simulator system can project horizontal perspective images into open space and includes a peripheral device that allows the end user to manipulate the images with hands or hand-held tools.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows the various perspective drawings.
  • FIG. 2 shows a typical central perspective drawing.
  • FIG. 3 shows 3D software application.
  • FIG. 4 shows 3D application running on PC.
  • FIG. 5 shows 3D application in first person.
  • FIG. 6 shows central perspective camera model.
  • FIG. 7 shows the comparison of central perspective (Image A) and horizontal perspective (Image B).
  • FIG. 8 shows the central perspective drawing of three stacking blocks.
  • FIG. 9 shows the horizontal perspective drawing of three stacking blocks.
  • FIG. 10 shows the method of drawing a horizontal perspective drawing.
  • FIG. 11 shows a horizontal perspective display and a viewer input device.
  • FIG. 12 shows a horizontal perspective display, a computational device and a viewer input device.
  • FIG. 13 shows a computer monitor.
  • FIG. 14 shows a monitor's phosphor layer indicating of an incorrect location of image.
  • FIG. 15 shows a monitor's viewing surface indicating of a correct location of image.
  • FIG. 16 shows a reference plane x, y, z coordinates.
  • FIG. 17 shows the location of an angled camera point.
  • FIG. 18 shows the mapping of the horizontal plane to a reference plane.
  • FIG. 19 shows the comfort plane.
  • FIG. 20 shows the hands-on volume.
  • FIG. 21 shows the inner plane.
  • FIG. 22 shows the bottom plane.
  • FIG. 23 shows the inner access volume.
  • FIG. 24 shows the angled camera mapped to the end-user's eye.
  • FIG. 25 shows mapping of the 3-d object onto the horizontal plane.
  • FIG. 26 shows the two-eye view.
  • FIG. 27 shows the simulation time of the horizontal perspective.
  • FIG. 28 shows the horizontal plane.
  • FIG. 29 shows the 3D peripherals.
  • FIG. 30 shows an open-access camera model.
  • FIG. 31 shows the concept of object recognition.
  • FIG. 32 shows the 3D audio combination with object recognition.
  • FIG. 33 shows another open access camera model.
  • FIG. 34 shows another open access camera model.
  • FIG. 35 shows the mapping of virtual attachments to end of tools.
  • FIG. 36 shows the multi-plane and multi-view device.
  • FIG. 37 shows an open access camera model.
  • FIG. 38 shows another multi-plane device.
  • DETAILED DESCRIPTION OF THE INVENTION
  • The new and unique inventions described in this document build upon prior art by taking the current state of real-time computer-generated 3D computer graphics, 3D sound, and tactile computer-human interfaces to a whole new level of reality and simplicity. More specifically, these new inventions enable real-time computer-generated 3D simulations to coexist in physical space and time with the end-user and with other real-world physical objects. This capability dramatically improves upon the end-user's visual, auditory and tactile computing experience by providing direct physical interactions with 3D computer-generated objects and sounds. This unique ability is useful in nearly every conceivable industry including, but not limited to, electronics, computers, biometrics, medical, education, games, movies, science, legal, financial, communication, law enforcement, national security, military, print media, television, advertising, trade show, data visualization, computer-generated reality, animation, CAD/CAE/CAM, productivity software, operating systems, and more.
  • The present invention discloses a multi-plane horizontal perspective hands-on simulator comprising at least two display surfaces, one of which is capable of projecting a three dimensional illusion based on horizontal perspective projection.
  • In general, the present invention horizontal perspective hands-on simulator can be used to display and interact with three dimensional images and has obvious utility to many industrial applications such as manufacturing design reviews, ergonomic simulation, safety and training, video games, cinematography, scientific 3D viewing, and medical and other data displays.
  • Horizontal perspective is a little-known perspective, of which we found only two books that describe its mechanics: Stereoscopic Drawing (©1990) and How to Make Anaglyphs (©1979, out of print). Although these books describe this obscure perspective, they do not agree on its name. The first book refers to it as a “free-standing anaglyph,” and the second, a “phantogram.” Another publication called it “projective anaglyph” (U.S. Pat. No. 5,795,154 by G. M. Woods, Aug. 18, 1998). Since there is no agreed-upon name, we have taken the liberty of calling it “horizontal perspective.” Normally, as in central perspective, the plane of vision, at right angle to the line of sight, is also the projected plane of the picture, and depth cues are used to give the illusion of depth to this flat image. In horizontal perspective, the plane of vision remains the same, but the projected image is not on this plane. It is on a plane angled to the plane of vision. Typically, the image would be on the ground level surface. This means the image will be physically in the third dimension relative to the plane of vision. Thus horizontal perspective can be called horizontal projection.
  • In horizontal perspective, the object is to separate the image from the paper, and fuse the image to the three dimensional object that projects the horizontal perspective image. Thus the horizontal perspective image must be distorted so that the visual image fuses to form the free-standing three dimensional figure. It is also essential that the image be viewed from the correct eye point, otherwise the three dimensional illusion is lost. Central perspective images have height and width and project an illusion of depth, so the objects are usually abruptly projected and the images appear to be in layers. In contrast, horizontal perspective images have actual depth and width, and the illusion gives them height, so there is usually a graduated shifting and the images appear to be continuous.
  • FIG. 7 compares key characteristics that differentiate central perspective and horizontal perspective. Image A shows key pertinent characteristics of central perspective, and Image B shows key pertinent characteristics of horizontal perspective.
  • In other words, in Image A, the real-life three dimension object (three blocks stacked slightly above each other) was drawn by the artist closing one eye, and viewing along a line of sight perpendicular to the vertical drawing plane. The resulting image, when viewed vertically, straight on, and through one eye, looks the same as the original image.
  • In Image B, the real-life three dimension object was drawn by the artist closing one eye, and viewing along a line of sight 45° to the horizontal drawing plane. The resulting image, when viewed horizontally, at 45° and through one eye, looks the same as the original image.
  • One major difference between central perspective, shown in Image A, and horizontal perspective, shown in Image B, is the location of the display plane with respect to the projected three dimensional image. In the horizontal perspective of Image B, the display plane can be adjusted up and down, and therefore the projected image can be displayed in the open air above the display plane, i.e. a physical hand can touch (or more likely pass through) the illusion, or it can be displayed under the display plane, i.e. one cannot touch the illusion because the display plane physically blocks the hand. This is the nature of horizontal perspective, and as long as the camera eyepoint and the viewer eyepoint are at the same place, the illusion is present. In contrast, in the central perspective of Image A, the three dimensional illusion is likely to be only inside the display plane, meaning one cannot touch it. To bring the three dimensional illusion outside of the display plane and allow the viewer to touch it, central perspective would require an elaborate display scheme, such as surround image projection with a large volume.
  • FIGS. 8 and 9 illustrate the visual difference between using central and horizontal perspective. To experience this visual difference, first look at FIG. 8, drawn with central perspective, through one open eye. Hold the piece of paper vertically in front of you, as you would a traditional drawing, perpendicular to your eye. You can see that central perspective provides a good representation of three dimension objects on a two dimension surface.
  • Now look at FIG. 9, drawn using horizontal perspective, by sitting at your desk and placing the paper lying flat (horizontally) on the desk in front of you. Again, view the image through only one eye. This puts your one open eye, called the eye point, at approximately a 45° angle to the paper, which is the angle that the artist used to make the drawing. To get your open eye and its line-of-sight to coincide with the artist's, move your eye downward and forward closer to the drawing, about six inches out and down and at a 45° angle. This will result in the ideal viewing experience where the top and middle blocks will appear above the paper in open space.
  • Again, the reason your one open eye needs to be at this precise location is because both central and horizontal perspective not only define the angle of the line of sight from the eye point; they also define the distance from the eye point to the drawing. This means that FIGS. 8 and 9 are drawn with an ideal location and direction for your open eye relative to the drawing surfaces. However, unlike central perspective, where deviations from the position and direction of the eye point create little distortion, when viewing a horizontal perspective drawing the use of only one eye and the position and direction of that eye relative to the viewing surface are essential to seeing the open space three dimensional horizontal perspective illusion.
  • FIG. 10 is an architectural-style illustration that demonstrates a method for making simple geometric drawings on paper or canvas utilizing horizontal perspective. FIG. 10 is a side view of the same three blocks used in FIG. 9. It illustrates the actual mechanics of horizontal perspective. Each point that makes up the object is drawn by projecting the point onto the horizontal drawing plane. To illustrate this, FIG. 10 shows a few of the coordinates of the blocks being drawn on the horizontal drawing plane through projection lines. These projection lines start at the eye point (not shown in FIG. 10 due to scale), intersect a point on the object, then continue in a straight line to where they intersect the horizontal drawing plane, which is where they are physically drawn as a single dot on the paper. When an architect repeats this process for each and every point on the blocks, as seen along the line-of-sight from the eye point to the drawing surface, the horizontal perspective drawing is complete, and looks like FIG. 9.
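  • To make the mechanics above concrete, the following sketch (illustrative only, with an assumed eye point and assumed block coordinates) projects a few object points onto a horizontal drawing plane at z = 0 along straight lines from the eye point, exactly as the projection lines in FIG. 10 do:

```python
# Minimal sketch: horizontal perspective projection of 3D points onto a
# horizontal drawing plane at z = 0, along straight lines from the eye point.
# The eye point and block coordinates below are illustrative assumptions.

def project_to_horizontal_plane(eye, point):
    """Intersect the line from `eye` through `point` with the plane z = 0."""
    ex, ey, ez = eye
    px, py, pz = point
    if ez == pz:
        raise ValueError("line of sight is parallel to the drawing plane")
    t = ez / (ez - pz)          # parameter where the line crosses z = 0
    return (ex + t * (px - ex), ey + t * (py - ey))  # (x, y) dot on the paper

# Eye point placed above and in front of the plane, roughly a 45-degree view.
eye_point = (0.0, -10.0, 10.0)

# A few corner points of stacked blocks (x, y, z); z > 0 is above the paper.
block_corners = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0), (0.0, 0.0, 2.0)]

for corner in block_corners:
    print(corner, "->", project_to_horizontal_plane(eye_point, corner))
```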
  • Notice that in FIG. 10, one of the three blocks appears below the horizontal drawing plane. With horizontal perspective, points located below the drawing surface are also drawn onto the horizontal drawing plane, as seen from the eye point along the line-of-sight. Therefore when the final drawing is viewed, objects not only appear above the horizontal drawing plane, but may also appear below it as well, giving the appearance that they are receding into the paper. If you look again at FIG. 9, you will notice that the bottom box appears to be below, or go into, the paper, while the other two boxes appear above the paper in open space.
  • Horizontal perspective images require considerably more expertise to create than central perspective images. Even though both methods seek to provide the viewer a three dimensional illusion from a two dimensional image, central perspective images directly produce the three dimensional landscape as seen from the viewer or camera point. In contrast, the horizontal perspective image appears distorted when viewed head-on, but this distortion has to be precisely rendered so that, when viewed from the precise location, the horizontal perspective produces a three dimensional illusion.
  • The horizontal perspective display system promotes horizontal perspective projection viewing by providing the viewer with the means to adjust the displayed images to maximize the illusion viewing experience. By employing the computation power of the microprocessor and a real time display, the horizontal perspective display shown in FIG. 11 comprises a real time electronic display 100 capable of re-drawing the projected image, together with a viewer's input device 102 to adjust the horizontal perspective image. By re-displaying the horizontal perspective image so that its projection eyepoint coincides with the eyepoint of the viewer, the horizontal perspective display can ensure minimum distortion in rendering the three dimensional illusion from the horizontal perspective method. The input device can be manually operated, where the viewer manually inputs his or her eyepoint location, or changes the projection image eyepoint, to obtain the optimum three dimensional illusion. The input device can also be automatically operated, where the display automatically tracks the viewer's eyepoint and adjusts the projection image accordingly. The horizontal perspective display removes the constraint that viewers keep their heads in relatively fixed positions, a constraint that creates much difficulty in the acceptance of precise-eyepoint displays such as horizontal perspective or hologram displays.
  • The horizontal perspective display system, shown in FIG. 12, can further comprise a computation device 110 in addition to the real time electronic display device 100, with a projection image input device 112 providing input to the computation device 110 for calculating the projection images for display, thus providing a realistic, minimum-distortion three dimensional illusion to the viewer by making the viewer's eyepoint coincide with the projection image eyepoint. The system can further comprise an image enlargement/reduction input device 115, an image rotation input device 117, or an image movement device 119 to allow the viewer to adjust the view of the projection images.
  • The input device can be operated manually or automatically. The input device can detect the position and orientation of the viewer's eyepoint, so that the image is computed and projected onto the display according to the detection result. Alternatively, the input device can be made to detect the position and orientation of the viewer's head along with the orientation of the eyeballs. The input device can comprise an infrared detection system to detect the position of the viewer's head and thereby allow the viewer freedom of head movement. Other embodiments of the input device can use a triangulation method of detecting the viewer's eyepoint location, such as a CCD camera providing position data suitable for the head tracking objectives of the invention. The input device can also be manually operated by the viewer, such as a keyboard, mouse, trackball, joystick, or the like, to indicate the correct display of the horizontal perspective display images.
  • The head or eye-tracking system can comprise a base unit and a head-mounted sensor on the head of the viewer. The head-mounted sensor produces signals showing the position and orientation of the viewer in response to the viewer's head movement and eye orientation. These signals can be received by the base unit and are used to compute the proper three dimensional projection images. The head or eye tracking system can comprise infrared cameras to capture images of the viewer's eyes. Using the captured images and other techniques of image processing, the position and orientation of the viewer's eyes can be determined and then provided to the base unit. The head and eye tracking can be done in real time, at a small enough time interval to provide continuous tracking of the viewer's head and eyes.
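  • As a rough illustration of how automatic tracking might drive the re-display, the sketch below polls a hypothetical tracker and re-renders whenever the detected eyepoint moves; the read_eyepoint and render_horizontal_perspective functions are placeholder assumptions, not components disclosed here:

```python
# Minimal sketch of automatic eyepoint tracking driving the re-display.
# `read_eyepoint` and `render_horizontal_perspective` are placeholders for a
# tracker and a renderer; both are assumptions for illustration only.
import time

def read_eyepoint():
    """Placeholder: return the tracked viewer eyepoint as (x, y, z)."""
    return (0.0, -12.0, 12.0)

def render_horizontal_perspective(eyepoint):
    """Placeholder: re-draw the horizontal perspective image for this eyepoint."""
    print("re-rendering for eyepoint", eyepoint)

def tracking_loop(frames=120, interval_s=1.0 / 60.0, tolerance=0.1):
    last = None
    for _ in range(frames):
        eyepoint = read_eyepoint()
        # Re-display whenever the viewer's eyepoint has moved noticeably, so the
        # projection eyepoint stays coincident with the viewer's eyepoint.
        if last is None or max(abs(a - b) for a, b in zip(eyepoint, last)) > tolerance:
            render_horizontal_perspective(eyepoint)
            last = eyepoint
        time.sleep(interval_s)

tracking_loop(frames=3)
```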
  • The invention described in this document employs the open space characteristics of horizontal perspective, together with a number of new computer hardware and software elements and processes, to create a “Hands-On Simulator”. In the simplest terms, the Hands-On Simulator generates a totally new and unique computing experience in that it enables an end user to interact physically and directly (Hands-On) with real-time computer-generated 3D graphics (Simulations), which appear in open space above the viewing surface of a display device, i.e. in the end user's own physical space.
  • For the end user to experience these unique hands-on simulations, the computer hardware viewing surface is situated horizontally, such that the end-user's line of sight is at a 45° angle to the surface. Typically, this means that the end user is standing or seated vertically, and the viewing surface is horizontal to the ground. Note that although the end user can experience hands-on simulations at viewing angles other than 45° (e.g. 55°, 30°, etc.), 45° is the optimal angle for the brain to recognize the maximum amount of spatial information in an open space image. Therefore, for simplicity's sake, we use “45°” throughout this document to mean “an approximate 45 degree angle”. Further, while a horizontal viewing surface is preferred, since it simulates the viewer's experience with the horizontal ground, any viewing surface could offer a similar three dimensional illusion experience. The horizontal perspective illusion can appear to be hanging from a ceiling by projecting the horizontal perspective images onto a ceiling surface, or appear to be floating from a wall by projecting the horizontal perspective images onto a vertical wall surface.
  • The hands-on simulations are generated within a 3D graphics engine's view volume, creating two new elements, the “Hands-On Volume” and the “Inner-Access Volume.” The Hands-On Volume is situated on and above the physical viewing surface. Thus the end user can directly, physically manipulate simulations because they co-inhabit the end-user's own physical space. This 1:1 correspondence allows accurate and tangible physical interaction by touching and manipulating simulations with hands or hand-held tools. The Inner-Access Volume is located underneath the viewing surface, and simulations within this volume appear inside the physical viewing device. Thus simulations generated within the Inner-Access Volume do not share the same physical space with the end user, and the images therefore cannot be directly, physically manipulated by hands or hand-held tools. That is, they are manipulated indirectly via a computer mouse or a joystick.
  • This disclosed Hands-On Simulator can lead to the end user's ability to directly, physically manipulate simulations because they co-inhabit the end-user's own physical space. To accomplish this requires a new computing concept where computer-generated world elements have a 1:1 correspondence with their physical real-world equivalents; that is, a physical element and an equivalent computer-generated element occupy the same space and time. This is achieved by identifying and establishing a common “Reference Plane”, to which the new elements are synchronized.
  • Synchronization with the Reference Plane forms the basis to create the 1:1 correspondence between the “virtual” world of the simulations and the “real” physical world. Among other things, the 1:1 correspondence ensures that images are properly displayed: what is on and above the viewing surface appears on and above the surface, in the Hands-On Volume; what is underneath the viewing surface appears below, in the Inner-Access Volume. Only if this 1:1 correspondence and synchronization to the Reference Plane are present can the end user physically and directly access and interact with simulations via their hands or hand-held tools.
  • The present invention simulator further includes a real-time computer-generated 3D-graphics engine as generally described above, but using horizontal perspective projection to display the 3D images. One major difference between the present invention and prior art graphics engines is the projection display. Existing 3D-graphics engines use central perspective, and therefore a vertical plane, to render the view volume, while the present invention simulator requires a “horizontal” oriented rendering plane, rather than a “vertical” oriented rendering plane, to generate horizontal perspective open space images. The horizontal perspective images offer far superior open space access compared to central perspective images.
  • One of the invented elements in the present invention hands-on simulator is the 1:1 correspondence of the computer-generated world elements and their physical real-world equivalents. As noted in the introduction above, this 1:1 correspondence is a new computing concept that is essential for the end user to physically and directly access and interact with hands-on simulations. This new concept requires the creation of a common physical Reference Plane, as well as the formula for deriving its unique x, y, z spatial coordinates. Determining the location and size of the Reference Plane and its specific coordinates requires understanding the following.
  • A computer monitor or viewing device is made of many physical layers, individually and together having thickness or depth. To illustrate this, FIG. 13 contains a conceptual side-view of a typical CRT-type viewing device. The top layer of the monitor's glass surface is the physical “View Surface”, and the phosphor layer, where images are made, is the physical “Image Layer”. The View Surface and the Image Layer are separate physical layers located at different depths or z coordinates along the viewing device's z axis. To display an image, the CRT's electron gun excites the phosphors, which in turn emit photons. This means that when you view an image on a CRT, you are looking along its z axis through its glass surface, as you would through a window, and seeing the light of the image coming from its phosphors behind the glass.
  • With a viewing device's z axis in mind, let's display an image on that device using horizontal perspective. In FIG. 14 we use the same architectural technique for drawing images with horizontal perspective as previously illustrated in FIG. 10. By comparing FIG. 14 and FIG. 10 you can see that the middle block in FIG. 14 does not correctly appear on the View Surface. In FIG. 10 the bottom of the middle block is located correctly on the horizontal drawing/viewing plane, i.e. a piece of paper's View Surface. But in FIG. 14, the phosphor layer, i.e. where the image is made, is located behind the CRT's glass surface. Therefore, the bottom of the middle block is incorrectly positioned behind or underneath the View Surface.
  • FIG. 15 shows the proper location of the three blocks on a CRT-type viewing device. That is, the bottom of the middle block is displayed correctly on the View Surface and not on the Image Layer. To make this adjustment the z coordinates of the View Surface and Image Layer are used by the Simulation Engine to correctly render the image. Thus the unique task of correctly rendering an open space image on the View Surface vs. the Image Layer is critical in accurately mapping the simulation images to the real world space.
  • It is now clear that a viewing device's View Surface is the correct physical location to present open space images. Therefore, the View Surface, i.e. the top of the viewing device's glass surface, is the common physical Reference Plane. But only a subset of the View Surface can be the Reference Plane because the entire View Surface is larger than the total image area. FIG. 16 shows an example of a complete image being displayed on a viewing device's View Surface. That is, the blue image, including the bear cub, shows the entire image area, which is smaller than the viewing device's View Surface.
  • Many viewing devices enable the end user to adjust the size of the image area by adjusting its x and y values. Of course these same viewing devices do not provide any knowledge of, or access to, the z axis information, because it is a completely new concept and to date only required for the display of open space images. But all three coordinates, x, y, and z, are essential to determine the location and size of the common physical Reference Plane. The formula for this is as follows: the Image Layer is given a z coordinate of 0. The View Surface lies at a distance along the z axis from the Image Layer, and the Reference Plane's z coordinate is equal to that of the View Surface, i.e. its distance from the Image Layer. The x and y coordinates, or size, of the Reference Plane can be determined by displaying a complete image on the viewing device and measuring the length of its x and y axes.
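  • The formula above can be restated as a small calculation; the glass thickness and measured image size in this sketch are example values only:

```python
# Minimal sketch of the Reference Plane formula described above.
# The Image Layer is assigned z = 0; the View Surface lies at the measured
# distance from the Image Layer along the z axis; the Reference Plane takes
# the View Surface's z coordinate and the measured x, y size of the image area.

image_layer_z = 0.0
view_surface_offset = 0.5                  # example: glass thickness in inches
image_area_x, image_area_y = 13.0, 10.0    # example: measured image size in inches

reference_plane = {
    "z": image_layer_z + view_surface_offset,   # equals the View Surface z
    "x_size": image_area_x,
    "y_size": image_area_y,
}
print(reference_plane)
```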
  • The concept of the common physical Reference Plane is a new inventive concept. Therefore, display manufacturers may not supply or even know its coordinates. Thus a “Reference Plane Calibration” procedure might need to be performed to establish the Reference Plane coordinates. This calibration procedure provides the end user with a number of orchestrated images with which s/he interacts. The end-user's response to these images provides feedback to the Simulation Engine so that it can identify the correct size and location of the Reference Plane. When the end user is satisfied and completes the procedure, the coordinates are saved in the end user's personal profile.
  • With some viewing devices the distance between the View Surface and the Image Layer is quite short. But no matter how small or large the distance, it is critical that all Reference Plane x, y, and z coordinates are determined as closely as technically possible.
  • After the mapping of the “computer-generated” horizontal perspective projection display plane (Horizontal Plane) to the “physical” Reference Plane x, y, z coordinates, the two elements coexist and are coincident in time and space; that is, the computer-generated Horizontal Plane now shares the real-world x, y, z coordinates of the physical Reference Plane, and they exist at the same time.
  • You can envision this unique mapping of a computer-generated element and a physical element occupying the same space and time by imagining you are sitting in front of a horizontally oriented computer monitor and using the Hands-On Simulator. By placing your finger on the surface of the monitor, you would touch the Reference Plane (a portion of the physical View Surface) and the Horizontal Plane (computer-generated) at exactly the same time. In other words, when touching the physical surface of the monitor, you are also “touching” its computer-generated equivalent, the Horizontal Plane, which has been created and mapped by the Simulation Engine to the same location and time.
  • One element of the present invention horizontal perspective projection hands-on simulator is a computer-generated “Angled Camera” point, shown in FIG. 17. The camera point is initially located at an arbitrary distance from the Horizontal Plane, and the camera's line-of-sight is oriented at a 45° angle looking through the center. The position of the Angled Camera in relation to the end-user's eye is critical to generating simulations that appear in open space on and above the surface of the viewing device.
  • Mathematically, the computer-generated x, y, z coordinates of the Angled Camera point form the vertex of an infinite “pyramid”, whose sides pass through the x, y, z coordinates of the Reference/Horizontal Plane. FIG. 18 illustrates this infinite pyramid, which begins at the Angled Camera point and extends through the Far Clip Plane. New planes within the pyramid run parallel to the Reference/Horizontal Plane and, together with the sides of the pyramid, define two new view volumes. These unique view volumes are called the Hands-On Volume and the Inner-Access Volume, and are not shown in FIG. 18. The dimensions of these volumes and the planes that define them are based on their locations within the pyramid.
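  • The pyramid can be sketched numerically as follows; the camera point, plane corners, and Far Clip Plane position are illustrative assumptions, not prescribed values:

```python
# Minimal sketch: the Angled Camera point as the vertex of a pyramid whose
# edges pass through the corners of the Reference/Horizontal Plane and
# continue to the Far Clip Plane. All coordinates here are illustrative.

def pyramid_edges(camera, plane_corners, far_clip_z):
    edges = []
    for corner in plane_corners:
        # Direction from the camera vertex through a plane corner.
        d = tuple(c - v for c, v in zip(corner, camera))
        # Extend the edge until it reaches the Far Clip Plane (z = far_clip_z).
        t = (far_clip_z - camera[2]) / d[2]
        far_point = tuple(v + t * di for v, di in zip(camera, d))
        edges.append((camera, far_point))
    return edges

angled_camera = (0.0, -10.0, 10.0)                 # roughly a 45-degree view of the plane center
corners = [(-5.0, -5.0, 0.0), (5.0, -5.0, 0.0),
           (5.0, 5.0, 0.0), (-5.0, 5.0, 0.0)]      # Reference/Horizontal Plane corners
for edge in pyramid_edges(angled_camera, corners, far_clip_z=-10.0):
    print(edge)
```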
  • FIG. 19 illustrates a plane, called Comfort Plane, together with other display elements. The Comfort Plane is one of six planes that define the new Hands-On Volume, and of these planes it is closest to the Angled Camera point and parallel to the Reference Plane. The Comfort Plane is appropriately named because its location within the pyramid determines the end-user's personal comfort, i.e. how their eyes, head, body, etc. are situated while viewing and interacting with simulations. The end user can adjust the location of the Comfort Plane based on their personal visual comfort through a “Comfort Plane Adjustment” procedure. This procedure provides the end user with orchestrated simulations within the Hands-On Volume, and enables them to adjust the location of the Comfort Plane within the pyramid relative to the Reference Plane. When the end user is satisfied and completes the procedure the location of the Comfort Plane is saved in the end-user's personal profiles.
  • The present invention simulator further defines a “Hands-On Volume”, shown in FIG. 20. The Hands-On Volume is where you can reach your hand in and physically “touch” a simulation. You can envision this by imagining you are sitting in front of a horizontally oriented computer monitor and using the Hands-On Simulator. If you place your hand several inches above the surface of the monitor, you are putting your hand inside both the physical and the computer-generated Hands-On Volume at the same time. The Hands-On Volume exists within the pyramid, between and inclusive of the Comfort Plane and the Reference/Horizontal Plane.
  • Whereas the Hands-On Volume exists on and above the Reference/Horizontal Plane, the Inner-Access Volume exists below or inside the physical viewing device. For this reason, an end user cannot directly interact with 3D objects located within the Inner-Access Volume via their hand or hand-held tools. But they can interact in the traditional sense with a computer mouse, joystick, or other similar computer peripheral. An “Inner Plane” is further defined, located immediately below and parallel to the Reference/Horizontal Plane within the pyramid, as shown in FIG. 21. The Inner Plane and the Bottom Plane are two of the six planes within the pyramid that define the Inner-Access Volume. The Bottom Plane (shown in FIG. 22) is farthest away from the Angled Camera point, but it is not to be mistaken for the Far Clip Plane. The Bottom Plane is also parallel to the Reference/Horizontal Plane and is one of the six planes that define the Inner-Access Volume (FIG. 23). You can envision the Inner-Access Volume by imagining you are sitting in front of a horizontally oriented computer monitor and using the Hands-On Simulator. If you pushed your hand through the physical surface and placed your hand inside the monitor (which of course is not possible), you would be putting your hand inside the Inner-Access Volume.
  • The end-user's preferred viewing distance to the bottom of the viewing pyramid determines the location of these planes. One way the end user can adjust the location of the Bottom Plane is through a “Bottom Plane Adjustment” procedure. This procedure provides the end user with orchestrated simulations within the Inner-Access Volume and enables them to interact with and adjust the location of the Bottom Plane relative to the physical Reference/Horizontal Plane. When the end user completes the procedure, the Bottom Plane's coordinates are saved in the end-user's personal profile.
  • For the end user to view open space images on their physical viewing device it must be positioned properly, which usually means the physical Reference Plane is placed horizontally to the ground. Whatever the viewing device's position relative to the ground, the Reference/Horizontal Plane must be at approximately a 45° angle to the end-user's line-of-sight for optimum viewing. One way the end user might perform this step is to position their CRT computer monitor on the floor in a stand, so that the Reference/Horizontal Plane is horizontal to the floor. This example uses a CRT-type computer monitor, but it could be any type of viewing device, placed at approximately a 45° angle to the end-user's line-of-sight.
  • The real-world coordinates of the “End-User's Eye” and the computer-generated Angled Camera point must have a 1:1 correspondence in order for the end user to properly view open space images that appear on and above the Reference/Horizontal Plane (FIG. 24). One way to do this is for the end user to supply the Simulation Engine with their eye's real-world x, y, z location and line-of-sight information relative to the center of the physical Reference/Horizontal Plane. For example, the end user tells the Simulation Engine that their physical eye will be located 12 inches up, and 12 inches back, while looking at the center of the Reference/Horizontal Plane. The Simulation Engine then maps the computer-generated Angled Camera point to the End-User's Eye point physical coordinates and line-of-sight.
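  • A minimal sketch of this 1:1 mapping, reusing the 12-inch example above (all values in inches, with the Reference/Horizontal Plane center taken as the origin):

```python
# Minimal sketch of the 1:1 mapping: the Angled Camera point takes on the
# End-User's Eye coordinates relative to the center of the Reference/Horizontal
# Plane. The 12-inch values repeat the example in the text; units are inches.

reference_plane_center = (0.0, 0.0, 0.0)
eye_up, eye_back = 12.0, 12.0        # end-user supplied: 12 inches up, 12 inches back

angled_camera_point = (reference_plane_center[0],
                       reference_plane_center[1] - eye_back,
                       reference_plane_center[2] + eye_up)
line_of_sight_target = reference_plane_center    # camera looks at the plane center
print(angled_camera_point, "->", line_of_sight_target)
```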
  • The present invention horizontal perspective hands-on simulator employs the horizontal perspective projection to mathematically project the 3D objects to the Hands-On and Inner-Access Volumes. The existence of a physical Reference Plane and the knowledge of its coordinates are essential to correctly adjusting the Horizontal Plane's coordinates prior to projection. This adjustment to the Horizontal Plane enables open space images to appear to the end user on the View Surface rather than the Image Layer, by taking into account the offset between the Image Layer and the View Surface, which are located at different values along the viewing device's z axis.
  • As a projection line in either the Hands-On or Inner-Access Volume intersects both an object point and the offset Horizontal Plane, the three dimensional x, y, z point of the object becomes a two-dimensional x, y point on the Horizontal Plane (see FIG. 25). Projection lines often intersect more than one 3D object coordinate, but only one object x, y, z coordinate along a given projection line can become a Horizontal Plane x, y point. The formula that determines which object coordinate becomes a point on the Horizontal Plane is different for each volume. For the Hands-On Volume it is the object coordinate of a given projection line that is farthest from the Horizontal Plane. For the Inner-Access Volume it is the object coordinate of a given projection line that is closest to the Horizontal Plane. In case of a tie, i.e. if a 3D object point from each volume occupies the same 2D point of the Horizontal Plane, the Hands-On Volume's 3D object point is used.
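  • The selection rule above can be sketched in a few lines of code; the coordinates are illustrative, and the Horizontal Plane is assumed to sit at z = 0 after the offset adjustment, with positive z in the Hands-On Volume and negative z in the Inner-Access Volume:

```python
# Minimal sketch of the visibility rule described above: along one projection
# line, the Hands-On Volume contributes the object point farthest from the
# Horizontal Plane, the Inner-Access Volume contributes the closest, and a tie
# is resolved in favor of the Hands-On Volume.

def visible_point(points_on_line):
    """points_on_line: list of (x, y, z); z >= 0 is Hands-On, z < 0 is Inner-Access."""
    hands_on = [p for p in points_on_line if p[2] >= 0]
    inner = [p for p in points_on_line if p[2] < 0]
    candidates = []
    if hands_on:
        candidates.append(max(hands_on, key=lambda p: abs(p[2])))   # farthest from plane
    if inner:
        candidates.append(min(inner, key=lambda p: abs(p[2])))      # closest to plane
    # If both volumes contribute along this line, prefer the Hands-On point.
    return candidates[0] if candidates else None

print(visible_point([(1.0, 2.0, 3.0), (1.0, 2.0, 1.5), (1.0, 2.0, -0.5)]))
```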
  • FIG. 25 is an illustration of the present invention Simulation Engine that includes the new computer-generated and real physical elements as described above. It also shows that a real-world element and its computer-generated equivalent are mapped 1:1 and together share a common Reference Plane. The full implementation of this Simulation Engine results in a Hands-On Simulator with real-time computer-generated 3D-graphics appearing in open space on and above a viewing device's surface, which is oriented approximately 45° to the end-user's line-of-sight.
  • The Hands-On Simulator further involves adding completely new elements and processes to existing stereoscopic 3D computer hardware. The result is a Hands-On Simulator with multiple views or “Multi-View” capability. Multi-View provides the end user with multiple and/or separate left- and right-eye views of the same simulation.
  • To provide motion, or time-related simulation, the simulator further includes a new computer-generated “time dimension” element, called “SI-Time”. SI is an acronym for “Simulation Image”, which is one complete image displayed on the viewing device. SI-Time is the amount of time the Simulation Engine uses to completely generate and display one Simulation Image. This is similar to a movie projector that displays an image 24 times a second; there, 1/24 of a second is required for one image to be displayed by the projector. But SI-Time is variable, meaning that depending on the complexity of the view volumes it could take 1/120th or ½ of a second for the Simulation Engine to complete just one SI.
  • The simulator also includes a new computer-generated “time dimension” element, called “EV-Time”, which is the amount of time used to generate one “Eye-View”. For example, let's say that the Simulation Engine needs to create one left-eye view and one right-eye view for purposes of providing the end user with a stereoscopic 3D experience. If it takes the Simulation Engine ½ a second to generate the left-eye view, then the first EV-Time period is ½ a second. If it takes another ½ second to generate the right-eye view, then the second EV-Time period is also ½ second. Since the Simulation Engine was generating a separate left and right eye view of the same Simulation Image, the total SI-Time is one second. That is, the first EV-Time was ½ second and the second EV-Time was also ½ second, making a total SI-Time of one second.
  • FIG. 26 helps illustrate these two new time dimension elements. It is a conceptual drawing of what is occurring inside the Simulation Engine when it is generating a two-eye view of a Simulated Image. The computer-generated person has both eyes open, a requirement for stereoscopic 3D viewing, and therefore sees the bear cub from two separate vantage points, i.e. from both a right-eye view and a left-eye view. These two separate views are slightly different and offset because the average person's eyes are about 2 inches apart. Therefore, each eye sees the world from a separate point in space and the brain puts them together to make a whole image. This is how and why we see the real world in stereoscopic 3D.
  • FIG. 27 is a very high-level Simulation Engine blueprint focusing on how the computer-generated person's two eye views are projected onto the Horizontal Plane and then displayed on a stereoscopic 3D capable viewing device. FIG. 26 represents one complete SI-Time period. If we use the example from step 3 above, SI-Time takes one second. During this one second of SI-Time the Simulation Engine needs to generate two different eye views, because in this example the stereoscopic 3D viewing device requires a separate left- and right-eye view. There are existing stereoscopic 3D viewing devices that require more than a separate left- and right-eye view. But because the method described here can generate multiple views it works for these devices as well.
  • The illustration in the upper left of FIG. 27 shows the Angled Camera point for the right eye at time-element “EV-Time-1”, which means the first Eye-View time period or the first eye-view to be generated. So in FIG. 27, EV-Time-1 is the time period used by the Simulation Engine to complete the first eye (right-eye) view of the computer-generated person. This is the job for this step, which is within EV-Time-1, and using the Angled Camera at coordinate x, y, z, the Simulation Engine completes the rendering and display of the right-eye view of a given Simulation Image.
  • Once the first eye (right-eye) view is complete, the Simulation Engine starts the process of rendering the computer-generated person's second eye (left-eye) view. The illustration in the lower left of FIG. 27 shows the Angled Camera point for the left eye at time element “EV-Time-2”. That is, this second eye view is completed during EV-Time-2. But before the rendering process can begin, step 5 makes an adjustment to the Angled Camera point. This is illustrated in FIG. 27 by the left eye's x coordinate being incremented by two inches. This difference between the right eye's x value and the left eye's x+2″ is what provides the two-inch separation between the eyes, which is required for stereoscopic 3D viewing.
  • The distances between people's eyes vary but in the above example we are using the average of 2 inches. It is also possible for the end user to supply the Simulation Engine with their personal eye separation value. This would make the x value for the left and right eyes highly accurate for a given end user and thereby improve the quality of their stereoscopic 3D view.
  • Once the Simulation Engine has incremented the Angled Camera point's x coordinate by two inches, or by the personal eye separation value supplied by the end user, it completes the rendering and display of the second (left-eye) view. This is done by the Simulation Engine within the EV-Time-2 period using the Angled Camera point coordinate x+2″, y, z and the exact same Simulation Image rendered for the first eye view. This completes one SI-Time period.
  • Depending on the stereoscopic 3D viewing device used, the Simulation Engine continues to display the left- and right-eye images, as described above, until it needs to move to the next SI-Time period. The job of this step is to determine whether it is time to move to a new SI-Time period, and if it is, to increment SI-Time. An example of when this may occur is if the bear cub moves his paw or any part of his body. Then a new and second Simulated Image would be required to show the bear cub in its new position. This new Simulated Image of the bear cub, in a slightly different location, gets rendered during a new SI-Time period, or SI-Time-2. This new SI-Time-2 period will have its own EV-Time-1 and EV-Time-2, and therefore the simulation steps described above will be repeated during SI-Time-2. This process of generating multiple views via the nonstop incrementing of SI-Time and its EV-Times continues as long as the Simulation Engine is generating real-time simulations in stereoscopic 3D.
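  • The following sketch summarizes the loop described above, rendering a right-eye and then a left-eye view per SI-Time period and offsetting the second eye by the eye separation; the render_eye_view function and camera coordinates are placeholder assumptions:

```python
# Minimal sketch of the Multi-View loop described above: each SI-Time period
# renders one eye view per EV-Time period, offsetting the Angled Camera point's
# x coordinate by the eye separation (2 inches by default) for the second eye.
# `render_eye_view` is a placeholder, not an actual engine call.

def render_eye_view(camera_point, si_index, eye_label):
    print(f"SI-Time-{si_index} {eye_label} view from {camera_point}")

def run_simulation(si_periods=2, eye_separation=2.0,
                   base_camera=(0.0, -12.0, 12.0)):
    for si in range(1, si_periods + 1):          # each pass is one SI-Time period
        # EV-Time-1: right-eye view from the base Angled Camera point.
        render_eye_view(base_camera, si, "right-eye")
        # EV-Time-2: left-eye view, x incremented by the eye separation.
        left_camera = (base_camera[0] + eye_separation,
                       base_camera[1], base_camera[2])
        render_eye_view(left_camera, si, "left-eye")

run_simulation()
```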
  • The above steps describe the new and unique elements and processes that make up the Hands-On Simulator with Multi-View capability. Multi-View provides the end user with multiple and/or separate left- and right-eye views of the same simulation. Multi-View capability is a significant visual and interactive improvement over the single eye view.
  • The present invention also allows the viewer to move around the three dimensional display without suffering great distortion, since the display can track the viewer's eyepoint and re-display the images correspondingly. This is in contrast to conventional prior art three dimensional image displays, where the image is projected and computed as seen from a single viewing point, so that any movement by the viewer away from the intended viewing point in space causes gross distortion.
  • The display system can further comprise a computer capable of re-calculating the projected image given the movement of the eyepoint location. The horizontal perspective images can be very complex, tedious to create, or created in ways that are not natural for artists or cameras, and therefore require the use of a computer system for the task. Displaying a three-dimensional image of an object with complex surfaces, or creating animation sequences, would demand a lot of computational power and time, and is therefore a task well suited to the computer. Three dimensional capable electronics and computing hardware devices and real-time computer-generated three dimensional computer graphics have advanced significantly recently, with marked innovations in visual, audio and tactile systems, and have produced excellent hardware and software products to generate realism and more natural computer-human interfaces.
  • The horizontal perspective display system of the present invention is not only in demand for entertainment media such as televisions, movies, and video games, but is also needed in various fields such as education (displaying three-dimensional structures) and technological training (displaying three-dimensional equipment). There is an increasing demand for three-dimensional image displays, which can be viewed from various angles to enable observation of real objects using object-like images. The horizontal perspective display system is also capable of substituting a computer-generated reality for the viewer's observation. The system may include audio, visual, motion and inputs from the user in order to create a complete experience of three dimensional illusion.
  • The input for the horizontal perspective system can be a two dimensional image, several images combined to form one single three dimensional image, or a three dimensional model. The three dimensional image or model conveys much more information than a two dimensional image, and by changing the viewing angle, the viewer will get the impression of seeing the same object from different perspectives continuously.
  • The horizontal perspective display can further provide multiple views or “Multi-View” capability. Multi-View provides the viewer with multiple and/or separate left- and right-eye views of the same simulation. Multi-View capability is a significant visual and interactive improvement over the single eye view. In Multi-View mode, both the left eye and right eye images are fused by the viewer's brain into a single, three-dimensional illusion. The discrepancy between accommodation and convergence of the eyes, which is inherent in stereoscopic images and leads to viewer eye fatigue when the discrepancy is large, can be reduced with the horizontal perspective display, especially for motion images, since the position of the viewer's gaze point changes when the display scene changes.
  • In Multi-View mode, the objective is to simulate the actions of the two eyes to create the perception of depth, namely that the left eye and the right eye see slightly different images. Thus Multi-View devices that can be used in the present invention include methods with glasses, such as the anaglyph method, special polarized glasses, or shutter glasses, and methods without glasses, such as a parallax stereogram, a lenticular method, and a mirror method (concave and convex lenses).
  • In the anaglyph method, a display image for the right eye and a display image for the left eye are respectively superimpose-displayed in two colors, e.g., red and blue, and observation images for the right and left eyes are separated using color filters, thus allowing a viewer to recognize a stereoscopic image. The images are displayed using the horizontal perspective technique, with the viewer looking down at an angle. As with the one-eye horizontal perspective method, the eyepoint of the projected images has to coincide with the eyepoint of the viewer, and therefore the viewer input device is essential in allowing the viewer to observe the three dimensional horizontal perspective illusion. Since the early days of the anaglyph method, there have been many improvements, such as in the spectrum of the red/blue glasses and display, to generate much more realism and comfort for viewers.
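  • As a rough illustration of the color-filter separation, the sketch below composes an anaglyph by taking the red channel from the left-eye image and the remaining channels from the right-eye image (one common assignment; the pixel values are illustrative only):

```python
# Minimal sketch of the red/blue anaglyph idea: the left-eye image supplies the
# red channel and the right-eye image supplies the green and blue channels, so
# colored filters separate the two views again at the viewer's eyes. Images
# here are small lists of RGB tuples purely for illustration.

def make_anaglyph(left_pixels, right_pixels):
    """left_pixels/right_pixels: equal-length lists of (r, g, b) tuples."""
    return [(l[0], r[1], r[2]) for l, r in zip(left_pixels, right_pixels)]

left = [(200, 10, 10), (120, 120, 120)]    # left-eye view (illustrative)
right = [(10, 10, 200), (100, 100, 100)]   # right-eye view (illustrative)
print(make_anaglyph(left, right))
```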
  • In the polarized glasses method, the left eye image and the right eye image are separated by the use of mutually extinguishing polarizing filters, such as orthogonal linear polarizers, circular polarizers, or elliptical polarizers. The images are normally projected onto screens with polarizing filters and the viewer is then provided with corresponding polarized glasses. The left and right eye images appear on the screen at the same time, but only the left eye polarized light is transmitted through the left eye lens of the eyeglasses and only the right eye polarized light is transmitted through the right eye lens.
  • Another way to provide a stereoscopic display is the image sequential system. In such a system, the images are displayed sequentially, alternating between left eye and right eye images rather than superimposing them upon one another, and the viewer's lenses are synchronized with the screen display to allow the left eye to see only when the left image is displayed, and the right eye to see only when the right image is displayed. The shuttering of the glasses can be achieved by mechanical shuttering or with liquid crystal electronic shuttering. In the shutter glasses method, display images for the right and left eyes are alternately displayed on a CRT in a time sharing manner, and observation images for the right and left eyes are separated using time sharing shutter glasses which are opened and closed in synchronism with the display images, thus allowing an observer to recognize a stereoscopic image.
  • Another way to display stereoscopic images is the optical method. In this method, display images for the right and left eyes, which are separately displayed to the viewer using optical means such as prisms, mirrors, lenses, and the like, are superimpose-displayed as observation images in front of the observer, thus allowing the observer to recognize a stereoscopic image. Large convex or concave lenses can also be used, where two image projectors, projecting the left eye and right eye images, provide focus to the viewer's left and right eyes respectively. A variation of the optical method is the lenticular method, where the images are formed on cylindrical lens elements or a two dimensional array of lens elements.
  • FIG. 27 illustrates the horizontal perspective display, focusing on how the computer-generated person's two eye views are projected onto the Horizontal Plane and then displayed on a stereoscopic 3D capable viewing device. FIG. 27 represents one complete display time period. During this display time period, the horizontal perspective display needs to generate two different eye views, because in this example the stereoscopic 3D viewing device requires a separate left- and right-eye view. There are existing stereoscopic 3D viewing devices that require more than a separate left- and right-eye view, and because the method described here can generate multiple views it works for these devices as well.
  • The illustration in the upper left of FIG. 27 shows the Angled Camera point for the right eye after the first (right) eye view has been generated. Once the first (right) eye view is complete, the horizontal perspective display starts the process of rendering the computer-generated person's second (left) eye view. The illustration in the lower left of FIG. 27 shows the Angled Camera point for the left eye after the completion of this rendering. But before the rendering process can begin, the horizontal perspective display makes an adjustment to the Angled Camera point. This is illustrated in FIG. 27 by the left eye's x coordinate being incremented by two inches. This difference between the right eye's x value and the left eye's x+2″ is what provides the two-inch separation between the eyes, which is required for stereoscopic 3D viewing. The distances between people's eyes vary, but in the above example we are using the average of 2 inches. It is also possible for the viewer to supply the horizontal perspective display with their personal eye separation value. This would make the x value for the left and right eyes highly accurate for a given viewer and thereby improve the quality of their stereoscopic 3D view.
  • Once the horizontal perspective display has incremented the Angled Camera point's x coordinate by two inches, or by the personal eye separation value supplied by the viewer, the rendering continues by displaying the second (left-eye) view.
  • Depending on the stereoscopic 3D viewing device used, the horizontal perspective display continues to display the left- and right-eye images, as described above, until it needs to move to the next display time period. An example of when this may occur is if the bear cub moves his paw or any part of his body. Then a new and second Simulated Image would be required to show the bear cub in its new position. This new Simulated Image of the bear cub, in a slightly different location, gets rendered during a new display time period. This process of generating multiple views via the nonstop incrementing of display time continues as long as the horizontal perspective display is generating real-time simulations in stereoscopic 3D.
  • By rapidly displaying the horizontal perspective images, a three dimensional illusion of motion can be realized. Typically, 30 to 60 images per second would be adequate for the eye to perceive motion. For stereoscopy, the same display rate is needed for superimposed images, and twice that amount would be needed for the time sequential method.
  • The display rate is the number of images per second that the display can completely generate and display, and the display time is the time used to generate and display one image. This is similar to a movie projector that displays an image 24 times a second; there, 1/24 of a second is required for one image to be displayed by the projector. But the display time could be variable, meaning that depending on the complexity of the view volumes it could take 1/12 or ½ of a second for the computer to complete just one display image. Since the display generates a separate left and right eye view of the same image, the total display time is twice the display time for one eye image.
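  • The frame-budget arithmetic can be stated directly; the 30 images-per-second figure below is just the lower end of the range mentioned above:

```python
# Minimal sketch of the frame-budget arithmetic above: a target rate per eye,
# and a time-sequential stereo display therefore needing twice as many images
# per second in total.

def stereo_display_rate(images_per_second_per_eye):
    per_eye_time = 1.0 / images_per_second_per_eye    # seconds to show one eye view
    total_rate = 2 * images_per_second_per_eye        # left + right, shown sequentially
    return per_eye_time, total_rate

print(stereo_display_rate(30))   # -> (0.033... seconds per eye view, 60 images/s total)
```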
  • FIG. 28 shows a horizontal plane as related to both central perspective and horizontal perspective.
  • The present invention hands-on simulator further includes technologies employed in computer “peripherals”. FIG. 29 shows examples of such Peripherals with six degrees of freedom, meaning that their coordinate system enables them to interact at any given point in an (x, y, z) space. The simulator creates a “Peripheral Open-Access Volume,” for each Peripheral the end-user requires, such as the Space Glove in FIG. 29. FIG. 30 is a high-level illustration of the Hands-On Simulation Tool, focusing on how a Peripheral's coordinate system is implemented within the Hands-On Simulation Tool.
  • The new Peripheral Open-Access Volume, which as an example in FIG. 30 is labeled “Space Glove,” is mapped one-to-one with the “Open-Access Real Volume” and “Open-Access Computer-generated Volume.” The key to achieving a precise one-to-one mapping is to calibrate the Peripheral's volume with the Common Reference, which is the physical View surface, located at the viewing surface of the display device.
  • Some Peripherals provide a mechanism that enables the Hands-On Simulation Tool to perform this calibration without any end-user involvement. But if calibrating the Peripheral requires external intervention, then the end-user will accomplish this through an “Open-Access Peripheral Calibration” procedure. This procedure provides the end-user with a series of Simulations within the Hands-On Volume and a user-friendly interface that enables them to adjust the location of the Peripheral's volume until it is in perfect synchronization with the View Surface. When the calibration procedure is complete, the Hands-On Simulation Tool saves the information in the end-user's personal profile.
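  • A minimal sketch of such a calibration, assuming the end-user touches a known point on the View Surface so that a constant offset can be derived and then applied to every raw Peripheral reading (all coordinates are illustrative):

```python
# Minimal sketch of calibrating a Peripheral's volume to the Common Reference
# (the View Surface): a stored offset, derived from the end-user touching a
# known calibration target, is applied to every raw peripheral coordinate so
# that peripheral space and Reference Plane space line up 1:1.

def calibration_offset(raw_touch, known_target):
    """Offset that maps a raw peripheral reading onto the known target point."""
    return tuple(t - r for t, r in zip(known_target, raw_touch))

def to_reference_plane(raw_point, offset):
    return tuple(p + o for p, o in zip(raw_point, offset))

# The end-user touches the center of the View Surface (0, 0, 0) but the Space
# Glove reports (0.3, -0.2, 0.1); the resulting offset is saved in the profile.
offset = calibration_offset((0.3, -0.2, 0.1), (0.0, 0.0, 0.0))
print(to_reference_plane((1.3, 0.8, 0.1), offset))   # peripheral -> reference space
```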
  • Once the Peripheral's volume is precisely calibrated to the View surface, the next step in the process can be taken. The Hands-On Simulation Tool will continuously track and map the Peripheral's volume to the Open-Access Volumes. The Hands-On Simulation Tool modifies each Hands-On Image it generates based on the data in the Peripheral's volume. The end result of this process is the end-user's ability to use any given Peripheral to interact with Simulations within the Hands-On Volume generated in real-time by the Hands-On Simulation Tool.
  • With the peripherals linking to the simulator, the user can interact with the display model. The Simulation Engine can get the inputs from the user through the peripherals, and manipulate the desired action. With the peripherals properly matched with the physical space and the display space, the simulator can provide proper interaction and display. The invention Hands-On Simulator then can generate a totally new and unique computing experience in that it enables an end user to interact physically and directly (Hands-On) with real-time computer-generated 3D graphics (Simulations), which appear in open space above the viewing surface of a display device, i.e. in the end user's own physical space. The peripheral tracking can be done through camera triangulation or through infrared tracking devices.
  • The simulator can further include 3D audio devices for “SIMULATION RECOGNITION & 3D AUDIO”. This results in a new invention in the form of a Hands-On Simulation Tool with its Camera Model, Horizontal Multi-View Device, Peripheral Devices, Frequency Receiving/Sending Devices, and Handheld Devices as described below.
  • Object Recognition is a technology that uses cameras and/or other sensors to locate simulations by a method called triangulation. Triangulation is a process employing trigonometry, sensors, and frequencies to “receive” data from simulations in order to determine their precise location in space. It is for this reason that triangulation is a mainstay of the cartography and surveying industries, where the sensors and frequencies used include, but are not limited to, cameras, lasers, radar, and microwave. 3D Audio also uses triangulation, but in the opposite way: 3D Audio “sends” or projects data in the form of sound to a specific location. But whether data is being sent or received, the location of the simulation in three-dimensional space is determined by triangulation with frequency receiving/sending devices. By changing the amplitudes and phase angles of the sound waves reaching the user's left and right ears, the device can effectively emulate the position of the sound source. The sounds reaching the ears need to be isolated to avoid interference. The isolation can be accomplished by the use of earphones or the like.
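  • As a rough sketch of the amplitude-and-phase idea, the code below scales and delays the signal for each ear according to its distance from the computed source position; the positions, the speed-of-sound constant, and the inverse-distance model are simplifying assumptions rather than the actual 3D Audio processing:

```python
# Minimal sketch: given the positions of a simulated sound source and the
# end-user's two ears, attenuate each ear's signal with the inverse of its
# distance to the source and delay it by the travel time, so the sound appears
# to emanate from the source location. All values are illustrative.

import math

SPEED_OF_SOUND = 13500.0   # inches per second, approximate

def ear_gain_and_delay(source, ear):
    distance = math.dist(source, ear)
    gain = 1.0 / max(distance, 1.0)          # simple inverse-distance attenuation
    delay_s = distance / SPEED_OF_SOUND      # arrival time at this ear
    return gain, delay_s

cub_mouth = (2.0, 4.0, 6.0)                  # source location (illustrative)
left_ear, right_ear = (-1.0, -12.0, 12.0), (1.0, -12.0, 12.0)
print("left: ", ear_gain_and_delay(cub_mouth, left_ear))
print("right:", ear_gain_and_delay(cub_mouth, right_ear))
```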
  • FIG. 31 shows an end-user looking at a Hands-On Image of a bear cub. Since the cub appears in open space above the viewing surface, the end-user can reach in and manipulate the cub by hand or with a handheld tool. It is also possible for the end-user to view the cub from different angles, as they would in real life. This is accomplished through the use of triangulation, where the three real-world cameras continuously send images from their unique angles of view to the Hands-On Simulation Tool. This camera data of the real world enables the Hands-On Simulation Tool to locate, track, and map the end-user's body and other real-world simulations positioned within and around the computer monitor's viewing surface (FIG. 32).
  • FIG. 33 also shows the end-user viewing and interacting with the bear cub, but it includes 3D sounds emanating from the cub's mouth. To accomplish this level of audio quality requires physically combining each of the three cameras with a separate speaker, as shown in FIG. 32. The cameras' data enables the Hands-On Simulation Tool to use triangulation in order to locate, track, and map the end-user's “left and right ear”. And since the Hands-On Simulation Tool is generating the bear cub as a computer-generated Hands-On Image, it knows the exact location of the cub's mouth. By knowing the exact location of the end-user's ears and the cub's mouth, the Hands-On Simulation Tool uses triangulation to send data, by modifying the spatial characteristics of the audio, making it appear that 3D sound is emanating from the cub's computer-generated mouth.
  • A new frequency receiving/sending device is created by combining a video camera with an audio speaker, as previously shown in FIG. 31. Note that other sensors and/or transducers may be used as well.
  • These new camera/speaker devices are attached to, or placed near, a viewing device, such as a computer monitor as previously shown in FIG. 32. This results in each camera/speaker device having a unique and separate “real-world” (x, y, z) location, line-of-sight, and frequency receiving/sending volume. To understand these parameters, think of using a camcorder and looking through its view finder. When you do this, the camera has a specific location in space, is pointed in a specific direction, and all the visual frequency information you see or receive through the view finder is its “frequency receiving volume”.
  • Triangulation works by separating and positioning each camera/speaker device such that their individual frequency receiving/sending volumes overlap and cover the exact same area of space. If you have three widely spaced frequency receiving/sending volumes covering the exact same area of space, then any simulation within that space can be accurately located. The next step creates a new element in the Open-Access Camera Model for this real-world space, and in FIG. 33 it is labeled “real frequency receiving/sending volume”.
  • Now that this real frequency receiving/sending volume exists it must be calibrated to the Common Reference, which of course is the real View Surface. The next step is the automatic calibration of the real frequency receiving/sending volume to the real View Surface. This is an automated procedure that is continuously performed by the Hands-On Simulation Tool in order to keep the camera/speaker devices correctly calibrated even when they are accidentally bumped or moved by the end-user, which is likely to occur.
  • FIG. 34 is a simplified illustration of the complete Open-Access Camera Model and will assist in explaining each of the additional steps required to accomplish the scenarios described in FIGS. 32 and 33 above.
  • The simulator then performs simulation recognition by continuously locating and tracking the end-user's “left and right eye” and their “line-of-sight”, continuously map the real-world left and right eye coordinates into the Open-Access Camera Model precisely where they are in real space, and continuously adjust the computer-generated cameras coordinates to match the real-world eye coordinates that are being located, tracked, and mapped. This enables the real-time generation of Simulations within the Hands-On Volume based on the exact location of the end-user's left and right eye. Allowing the end-user to freely move their head and look around the Hands-On Image without distortion.
  • The simulator then performs simulation recognition by continuously locating and tracking the end-user's “left and right ear” and their “line-of-hearing,” continuously mapping the real-world left- and right-ear coordinates into the Open-Access Camera Model precisely where they are in real space, and continuously adjusting the 3D Audio coordinates to match the real-world ear coordinates that are being located, tracked, and mapped. This enables the real-time generation of Open-Access sounds based on the exact location of the end-user's left and right ears, allowing the end-user to freely move their head and still hear Open-Access sounds emanating from their correct location.
  • The simulator then performs simulation recognition by continuously locating and tracking the end-user's “left and right hand” and their “digits,” i.e. fingers and thumbs, continuously mapping the real-world left- and right-hand coordinates into the Open-Access Camera Model precisely where they are in real space, and continuously adjusting the Hands-On Image coordinates to match the real-world hand coordinates that are being located, tracked, and mapped. This enables the real-time generation of Simulations within the Hands-On Volume based on the exact location of the end-user's left and right hands, allowing the end-user to freely interact with Simulations within the Hands-On Volume.
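As a simple illustration of detecting such an interaction, the sketch below tests whether a tracked fingertip lies inside a Simulation's bounding sphere; the function name, the bounding-sphere model, and the coordinates are assumptions made for the example.

```python
import math

def touches(fingertip_xyz, simulation_center_xyz, simulation_radius):
    """True when a tracked fingertip is inside a Simulation's bounding
    sphere, i.e. the end-user is 'touching' the Hands-On Image."""
    return math.dist(fingertip_xyz, simulation_center_xyz) <= simulation_radius

# Fingertip tracked at (0.02, 0.28, 0.12); bounding sphere of radius 0.15 m.
print(touches((0.02, 0.28, 0.12), (0.0, 0.3, 0.1), 0.15))   # True -> start interaction
```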
  • The simulator then performs simulation recognition by continuously locating and tracking “handheld tools”, continuously mapping these real-world handheld-tool coordinates into the Open-Access Camera Model precisely where they are in real space, and continuously adjusting the Hands-On Image coordinates to match the real-world handheld-tool coordinates that are being located, tracked, and mapped. This enables the real-time generation of Simulations within the Hands-On Volume based on the exact location of the handheld tools, allowing the end-user to freely interact with Simulations within the Hands-On Volume.
  • FIG. 35 is intended to assist in further explaining unique discoveries regarding the new Open-Access Camera Model and handheld tools. FIG. 35 is a simulation of an end-user interacting with a Hands-On Image using a handheld tool. The scenario being illustrated is the end-user visualizing large amounts of financial data as a number of interrelated Open-Access 3D simulations. The end-user can probe and manipulate the Open-Access simulations by using a handheld tool, which in FIG. 35 looks like a pointing device.
  • A “computer-generated attachment” is mapped in the form of an Open-Access computer-generated simulation onto the tip of a handheld tool, which in FIG. 35 appears to the end-user as a computer-generated “eraser”. The end-user can of course request that the Hands-On Simulation Tool map any number of computer-generated attachments to a given handheld tool. For example, there can be different computer-generated attachments with unique visual and audio characteristics for cutting, pasting, welding, painting, smearing, pointing, grabbing, etc. And each of these computer-generated attachments would act and sound like the real device they are simulating when they are mapped to the tip of the end-user's handheld tool.
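A minimal sketch of placing such an attachment at the tool tip is shown below, assuming the tracker reports the tip position and the tool's pointing direction; the function name and the offset value are illustrative only.

```python
import numpy as np

def attachment_position(tool_tip_xyz, tool_direction, attachment_offset):
    """Place a computer-generated attachment (e.g. an 'eraser') a small
    distance beyond the tip of a tracked handheld tool, along its axis.

    A real system would also carry the attachment's visual and audio
    behaviour (cutting, pasting, welding, painting, ...).
    """
    d = np.asarray(tool_direction, float)
    d = d / np.linalg.norm(d)
    return np.asarray(tool_tip_xyz, float) + attachment_offset * d

# Tool tip tracked at (0.1, 0.25, 0.15), pointing toward the display surface.
print(attachment_position((0.1, 0.25, 0.15), (0.0, -1.0, -0.5), 0.02))
```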
  • The present invention further discloses a Multi-Plane display comprising a horizontal perspective display together with a non-horizontal central perspective display. FIG. 36 illustrates an example of the present invention Multi-Plane display in which the Multi-Plane display is a computer monitor that is approximately “L” shaped when open. The end-user views the L-shaped computer monitor from its concave side and at approximately a 45° angle to the bottom of the “L,” as shown in FIG. 36. From the end-user's point of view the entire L-shaped computer monitor appears as one single and seamless viewing surface. The bottom leg of the “L,” positioned horizontally, shows the horizontal perspective image, and the other leg of the “L” shows the central perspective image. The edge between the two display segments is preferably smoothly joined and can also have a curvilinear projection connecting the horizontal perspective and central perspective displays.
  • The Multi-Plane display can be made with one or more physical viewing surfaces. For example, the vertical leg of the “L” can be one physical viewing surface, such as a flat panel display, and the horizontal leg of the “L” can be a separate flat panel display. The edge between the two display segments can be a non-display segment, in which case the two viewing surfaces are not continuous. Each leg of a Multi-Plane display is called a viewing plane, and as you can see in the upper left of FIG. 36 there is a vertical viewing plane and a horizontal viewing plane, where a central perspective image is generated on the vertical plane and a horizontal perspective image is generated on the horizontal plane; the two images are then blended where the planes meet, as illustrated in the lower right of FIG. 36.
  • FIG. 36 also illustrates that a Multi-Plane display is capable of generating multiple views, meaning that it can display single-view images, i.e. a one-eye perspective like the simulation in the upper left, and/or multi-view images, i.e. separate right- and left-eye views like the simulation in the lower right. And when the L-shaped computer monitor is not being used by the end-user it can be closed and look like the simulation in the lower left.
  • FIG. 37 is a simplified illustration of the present invention Multi-Plane display. In the upper right of FIG. 37 is an example of a single-view image of a bear cub that is displayed on an L-shaped computer monitor. Normally a single-view or one-eye image would be generated with only one camera point, but as you can see there are at least two camera points for the Multi-Plane display even though this is a single-view example. This is because each viewing plane of a Multi-Plane device requires its own rendering perspective. One camera point is for the horizontal perspective image, which is displayed on the horizontal surface, and the other camera point is for the central perspective image, which is displayed on the vertical surface.
  • Generating both the horizontal perspective and central perspective images requires the creation of two camera eyepoints (which can be the same or different), shown in FIG. 37 as two different and separate camera points labeled OSI and CPI. The vertical viewing plane of the L-shaped monitor, as shown at the bottom of FIG. 37, is the display surface for the central perspective images, and thus there is a need to define another common reference plane for this surface. As discussed above, the common reference plane is the plane where the images are displayed, and the computer needs to keep track of this plane to synchronize the locations of the displayed images with the real physical locations. With the L-shaped Multi-Plane device and its two display surfaces, the Simulation can generate the three dimensional images: a horizontal perspective image using the OSI camera eyepoint, and a central perspective image using the CPI camera eyepoint.
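To make the two-eyepoint idea concrete, the sketch below projects the same 3-D point onto an assumed horizontal plane (z = 0) from an OSI-style eyepoint and onto an assumed vertical plane (y = 0) from a CPI-style eyepoint; the coordinate conventions and numeric values are assumptions made for the example, not taken from the disclosure.

```python
import numpy as np

def project_to_plane(point, eyepoint, plane_axis, plane_value=0.0):
    """Perspective-project `point` onto an axis-aligned plane as seen from
    `eyepoint` (the intersection of the eye-to-point ray with the plane)."""
    p, e = np.asarray(point, float), np.asarray(eyepoint, float)
    t = (plane_value - e[plane_axis]) / (p[plane_axis] - e[plane_axis])
    return e + t * (p - e)

point = np.array([0.05, 0.20, 0.10])      # a vertex of the 3-D model

# The OSI eyepoint renders the horizontal perspective image on the horizontal
# plane z = 0; the CPI eyepoint renders the central perspective image on the
# vertical plane y = 0. The two eyepoints can be the same or different.
osi_eye = np.array([0.0, 0.45, 0.45])     # roughly 45 degrees above the horizontal leg
cpi_eye = np.array([0.0, 0.60, 0.20])     # facing the vertical leg

print(project_to_plane(point, osi_eye, plane_axis=2))   # lands on the horizontal leg
print(project_to_plane(point, cpi_eye, plane_axis=1))   # lands on the vertical leg
```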
  • The multi-plane display system can further include a curvilinear connection display section to blend the horizontal perspective and the central perspective images together at the location of the seam in the “L,” as shown at the bottom of FIG. 37. The multi-plane display system can continuously update and display what appears to be a single L-shaped image on the L-shaped Multi-Plane device.
  • Furthermore, the multi-plane display system can comprise multiple display surfaces together with multiple curvilinear blending sections, as shown in FIG. 38. The multiple display surfaces can be a flat wall, multiple adjacent flat walls, a dome, or a curved wraparound panel.
  • The present invention multi-plane display system thus can simultaneously project a plurality of three dimensional images onto multiple display surfaces, one of which is a horizontal perspective image. Further, it can be a stereoscopic multiple display system, allowing viewers to use their stereoscopic vision for three dimensional image presentation.
  • Since the multi-plane display system comprises at least two display surfaces, various requirements need to be addressed to ensure high fidelity in the three dimensional image projection. The display requirements are typically geometric accuracy, to ensure that objects and features of the image are correctly positioned; edge match accuracy, to ensure continuity between display surfaces; no blending variation, to ensure no variation in luminance in the blending section of the various display surfaces; and field of view, to ensure a continuous image from the eyepoint of the viewer.
  • Since the blending section of the multi-plane display system is preferably a curved surface, some distortion correction could be applied in order for the image projected onto the blending section surface to appear correct to the viewer. There are various solutions for providing distortion correction to a display system, such as using a test pattern image, designing the image projection system for the specific curved blending display section, using special video hardware, or utilizing a piecewise-linear approximation for the curved blending section. Still another distortion correction solution for the curved surface projection is to automatically compute the image distortion correction for any given position of the viewer eyepoint and the projector.
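As an illustration of the piecewise-linear option, the sketch below remaps one image row across the blending section using a small table of control points (which a real system would recompute for each viewer eyepoint and projector position); all names and values are assumptions made for the example.

```python
import numpy as np

def piecewise_linear_warp(row, control_out, control_src):
    """Warp one image row across the curved blending section.

    The curve is approximated by a few straight segments: `control_out`
    lists output pixel positions and `control_src` the source positions
    the viewer should see there.
    """
    out_x = np.arange(len(row), dtype=float)
    src_x = np.interp(out_x, control_out, control_src)   # piecewise-linear map
    return np.interp(src_x, np.arange(len(row), dtype=float), row)

row = np.linspace(0.0, 1.0, 11)                 # a simple test-pattern gradient
print(piecewise_linear_warp(row, [0, 5, 10], [0, 6.5, 10]))
```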
  • Since the multi-plane display system comprises more than one display surface, care should be taken to minimize the seams and gaps between the edges of the respective displays. To avoid seam or gap problems, there can be at least two image generators generating adjacent, overlapped portions of an image. The overlapped image is calculated by an image processor to ensure that the projected pixels in the overlapped areas are adjusted to form the proper displayed images. Another solution is to control the degree of intensity reduction in the overlap to create a smooth transition from the image of one display surface to the next.
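The intensity-reduction approach can be illustrated with a simple linear cross-fade over the overlapping columns, as in the sketch below; the function name, the overlap width, and the test values are assumptions made for the example.

```python
import numpy as np

def blend_overlap(image_a, image_b, overlap):
    """Cross-fade two adjacent display images over `overlap` columns.

    image_a's right edge and image_b's left edge show the same content;
    a linear intensity ramp in the shared columns hides the seam.
    """
    ramp = np.linspace(1.0, 0.0, overlap)               # weight for image_a
    seam = image_a[:, -overlap:] * ramp + image_b[:, :overlap] * (1.0 - ramp)
    return np.hstack([image_a[:, :-overlap], seam, image_b[:, overlap:]])

a = np.full((4, 10), 0.8)    # left display surface
b = np.full((4, 10), 0.6)    # right display surface
print(blend_overlap(a, b, overlap=4).shape)   # (4, 16)
```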

Claims (20)

1. A method for 3-D horizontal perspective simulation by horizontal perspective projection, the horizontal perspective projection comprising displaying horizontal perspective images according to a predetermined projection eyepoint, the method comprising the steps of:
displaying a 3-D image onto an open space of a first display surface using horizontal perspective;
displaying a second image onto a second display; and
manipulating the display image on the first display surface by touching the 3-D image with a peripheral device.
2. A method as in claim 1 further comprising the step of taking an input from the second display and providing output to the first horizontal perspective display.
3. A method as in claim 1 further comprising a step of tracking the physical peripheral device to the 3-D image.
4. A method as in claim 3 wherein tracking the peripheral device comprises tracking a tip of the peripheral device.
5. A method as in claim 3 wherein the peripheral device tracking comprises inputting the position of the peripheral device to the processing unit.
6. A method as in claim 3 wherein the peripheral device tracking comprises a step of triangulation or infrared tracking.
7. A method as in claim 1 further comprising a step of calibrating the physical peripheral device to the 3-D image.
8. A method as in claim 7 wherein the calibration step comprises manually inputting a reference coordinate.
9. A method as in claim 7 wherein the calibration step comprises automatically inputting a reference coordinate through a calibration procedure.
10. A method as in claim 1 further comprising a step of displaying a third image onto a third curvilinear display, the curvilinear display blending the first display and the second display.
11. A method as in claim 1 wherein the horizontal perspective display is a stereoscopic horizontal perspective display using horizontal perspective to display a stereoscopic 3-D image.
12. A method as in claim 1 wherein the horizontal perspective display further displays a portion of the 3-D image onto an inner-access volume, whereby the image portion in the inner-access volume cannot be touched by the peripheral device.
13. A method as in claim 1 further comprising a step of automatic or manual eyepoint tracking for the horizontal perspective display.
14. A method as in claim 1 further comprising a step of zooming, rotating or moving the 3-D image.
15. A method as in claim 1 wherein manipulating the display image by the peripheral device comprises tracking a tip of the peripheral device.
16. A method as in claim 15 wherein the manipulation comprises the action of modifying the display image or the action of generating a different image.
17. A 3-D simulation method using a 3-D horizontal perspective simulator system, the 3-D horizontal perspective simulator system comprising
a processing unit;
a first horizontal perspective display using horizontal perspective to display a 3-D image onto an open space;
a second display showing information related to the 3-D image;
a peripheral device to manipulate the display image by touching the 3-D image; and
a peripheral device tracking unit for mapping the peripheral device to the 3-D image;
the method comprising
calibrating the peripheral device;
displaying a first 3-D image onto an open space of the first display surface using horizontal perspective;
displaying a second image onto the second display;
tracking the peripheral device; and
manipulating the display image by touching the 3-D image with the peripheral device.
18. A method as in claim 17 further comprising a step of displaying a third image onto a third curvilinear display, the curvilinear display blending the first display and the second display.
19. A 3-D simulation method using a multi-view 3-D horizontal perspective simulator system, the multi-view 3-D horizontal perspective simulator system comprising
a processing unit;
a first stereoscopic horizontal perspective display using horizontal perspective to display a stereoscopic 3-D image onto an open space; and
a second display showing information related to the 3-D image;
a peripheral device to manipulate the display image by touching the 3-D image; and
a peripheral device tracking unit for mapping the peripheral device to the 3-D image;
the method comprising
displaying a first stereoscopic 3-D image onto an open space of the first display surface using horizontal perspective;
displaying a second image onto the second display;
tracking the peripheral device; and
manipulating the display image by touching the 3-D image with a peripheral device.
20. A method as in claim 19 further comprising a step of displaying a third image onto a third curvilinear display, the curvilinear display blending the first display and the second display.
US11/141,652 2004-06-01 2005-05-31 Multi-plane horizontal perspective hands-on simulator Abandoned US20050264559A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/141,652 US20050264559A1 (en) 2004-06-01 2005-05-31 Multi-plane horizontal perspective hands-on simulator

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US57618104P 2004-06-01 2004-06-01
US57618204P 2004-06-01 2004-06-01
US57618704P 2004-06-01 2004-06-01
US57618904P 2004-06-01 2004-06-01
US11/141,652 US20050264559A1 (en) 2004-06-01 2005-05-31 Multi-plane horizontal perspective hands-on simulator

Publications (1)

Publication Number Publication Date
US20050264559A1 true US20050264559A1 (en) 2005-12-01

Family

ID=35462954

Family Applications (8)

Application Number Title Priority Date Filing Date
US11/141,606 Abandoned US20050264857A1 (en) 2004-06-01 2005-05-31 Binaural horizontal perspective display
US11/141,541 Abandoned US20050275914A1 (en) 2004-06-01 2005-05-31 Binaural horizontal perspective hands-on simulator
US11/141,652 Abandoned US20050264559A1 (en) 2004-06-01 2005-05-31 Multi-plane horizontal perspective hands-on simulator
US11/141,640 Abandoned US20050281411A1 (en) 2004-06-01 2005-05-31 Binaural horizontal perspective display
US11/141,649 Active 2028-04-22 US7796134B2 (en) 2004-06-01 2005-05-31 Multi-plane horizontal perspective display
US11/141,651 Abandoned US20050264558A1 (en) 2004-06-01 2005-05-31 Multi-plane horizontal perspective hands-on simulator
US11/141,650 Abandoned US20050275915A1 (en) 2004-06-01 2005-05-31 Multi-plane horizontal perspective display
US11/141,540 Abandoned US20050275913A1 (en) 2004-06-01 2005-05-31 Binaural horizontal perspective hands-on simulator

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US11/141,606 Abandoned US20050264857A1 (en) 2004-06-01 2005-05-31 Binaural horizontal perspective display
US11/141,541 Abandoned US20050275914A1 (en) 2004-06-01 2005-05-31 Binaural horizontal perspective hands-on simulator

Family Applications After (5)

Application Number Title Priority Date Filing Date
US11/141,640 Abandoned US20050281411A1 (en) 2004-06-01 2005-05-31 Binaural horizontal perspective display
US11/141,649 Active 2028-04-22 US7796134B2 (en) 2004-06-01 2005-05-31 Multi-plane horizontal perspective display
US11/141,651 Abandoned US20050264558A1 (en) 2004-06-01 2005-05-31 Multi-plane horizontal perspective hands-on simulator
US11/141,650 Abandoned US20050275915A1 (en) 2004-06-01 2005-05-31 Multi-plane horizontal perspective display
US11/141,540 Abandoned US20050275913A1 (en) 2004-06-01 2005-05-31 Binaural horizontal perspective hands-on simulator

Country Status (5)

Country Link
US (8) US20050264857A1 (en)
EP (2) EP1781893A1 (en)
JP (2) JP2008506140A (en)
KR (2) KR20070052261A (en)
WO (2) WO2005118998A1 (en)

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010062117A2 (en) 2008-11-26 2010-06-03 Samsung Electronics Co., Ltd. Immersive display system for interacting with three-dimensional content
US7796134B2 (en) 2004-06-01 2010-09-14 Infinite Z, Inc. Multi-plane horizontal perspective display
US7907167B2 (en) 2005-05-09 2011-03-15 Infinite Z, Inc. Three dimensional horizontal perspective workstation
WO2011032217A1 (en) 2009-09-16 2011-03-24 Sydac Pty Ltd Visual presentation system
US20120002828A1 (en) * 2010-06-30 2012-01-05 Sony Corporation Audio processing device, audio processing method, and program
US20120200495A1 (en) * 2009-10-14 2012-08-09 Nokia Corporation Autostereoscopic Rendering and Display Apparatus
EP2541947A3 (en) * 2011-06-28 2013-04-17 Kabushiki Kaisha Toshiba Medical image processing apparatus
US20130135310A1 (en) * 2011-11-24 2013-05-30 Thales Method and device for representing synthetic environments
ITTO20111150A1 (en) * 2011-12-14 2013-06-15 Univ Degli Studi Genova PERFECT THREE-DIMENSIONAL STEREOSCOPIC REPRESENTATION OF VIRTUAL ITEMS FOR A MOVING OBSERVER
EP2521362A3 (en) * 2011-05-06 2013-09-18 Kabushiki Kaisha Toshiba Medical image processing apparatus
US8704879B1 (en) 2010-08-31 2014-04-22 Nintendo Co., Ltd. Eye tracking enabling 3D viewing on conventional 2D display
US8717423B2 (en) 2005-05-09 2014-05-06 Zspace, Inc. Modifying perspective of stereoscopic images based on changes in user viewpoint
US8717360B2 (en) 2010-01-29 2014-05-06 Zspace, Inc. Presenting a view within a three dimensional scene
EP2418866A3 (en) * 2010-08-11 2014-05-21 LG Electronics Inc. Method for operating image display apparatus
US8786529B1 (en) 2011-05-18 2014-07-22 Zspace, Inc. Liquid crystal variable drive voltage
US20140282267A1 (en) * 2011-09-08 2014-09-18 Eads Deutschland Gmbh Interaction with a Three-Dimensional Virtual Scenario
EP2429202A3 (en) * 2010-09-13 2014-10-29 LG Electronics Inc. Image display apparatus and method for operating image display apparatus
EP2957997A1 (en) * 2014-06-20 2015-12-23 Funai Electric Co., Ltd. Image display device
US9265458B2 (en) 2012-12-04 2016-02-23 Sync-Think, Inc. Application of smooth pursuit cognitive testing paradigms to clinical drug development
US9380976B2 (en) 2013-03-11 2016-07-05 Sync-Think, Inc. Optical neuroinformatics
EP3074954A4 (en) * 2013-11-26 2017-06-14 Yoav Shefi Method and system for constructing a virtual image anchored onto a real-world object
CN107333121A (en) * 2017-06-27 2017-11-07 山东大学 The immersion solid of moving view point renders optical projection system and its method on curve screens
US10139721B1 (en) * 2017-05-23 2018-11-27 Hae-Yong Choi Apparatus for synthesizing spatially separated images
EP2747430B1 (en) * 2012-12-18 2020-10-21 Samsung Electronics Co., Ltd 3D display device for displaying 3D image using at least one of gaze direction of user or gravity direction
EP4068768A4 (en) * 2019-12-05 2023-08-02 Beijing Ivisual 3D Technology Co., Ltd. 3d display apparatus and 3d image display method

Families Citing this family (122)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2482240A1 (en) * 2004-09-27 2006-03-27 Claude Choquet Body motion training and qualification system and method
JP4046121B2 (en) * 2005-03-24 2008-02-13 セイコーエプソン株式会社 Stereoscopic image display apparatus and method
WO2006106522A2 (en) * 2005-04-07 2006-10-12 Visionsense Ltd. Method for reconstructing a three- dimensional surface of an object
US20060285832A1 (en) * 2005-06-16 2006-12-21 River Past Corporation Systems and methods for creating and recording digital three-dimensional video streams
KR101309176B1 (en) * 2006-01-18 2013-09-23 삼성전자주식회사 Apparatus and method for augmented reality
JP4940671B2 (en) * 2006-01-26 2012-05-30 ソニー株式会社 Audio signal processing apparatus, audio signal processing method, and audio signal processing program
US10994358B2 (en) 2006-12-20 2021-05-04 Lincoln Global, Inc. System and method for creating or modifying a welding sequence based on non-real world weld data
US9937577B2 (en) 2006-12-20 2018-04-10 Lincoln Global, Inc. System for a welding sequencer
US9104195B2 (en) 2006-12-20 2015-08-11 Lincoln Global, Inc. Welding job sequencer
GB2447060B (en) * 2007-03-01 2009-08-05 Magiqads Sdn Bhd Method of creation of a virtual three dimensional image to enable its reproduction on planar substrates
WO2008108274A1 (en) * 2007-03-02 2008-09-12 Nec Corporation Image display device
WO2008108389A1 (en) * 2007-03-07 2008-09-12 Nec Corporation Image display
JP2008226400A (en) * 2007-03-15 2008-09-25 Sony Computer Entertainment Inc Audio reproducing system and audio reproducing method
US8269822B2 (en) * 2007-04-03 2012-09-18 Sony Computer Entertainment America, LLC Display viewing system and methods for optimizing display view based on active tracking
US8400493B2 (en) * 2007-06-25 2013-03-19 Qualcomm Incorporated Virtual stereoscopic camera
US8355019B2 (en) * 2007-11-02 2013-01-15 Dimension Technologies, Inc. 3D optical illusions from off-axis displays
US20090125801A1 (en) * 2007-11-10 2009-05-14 Cherif Atia Algreatly 3D windows system
US8452052B2 (en) * 2008-01-21 2013-05-28 The Boeing Company Modeling motion capture volumes with distance fields
JP4991621B2 (en) * 2008-04-17 2012-08-01 キヤノン株式会社 Imaging device
CN101610360A (en) * 2008-06-19 2009-12-23 鸿富锦精密工业(深圳)有限公司 The camera head of automatically tracking sound source
KR101595104B1 (en) 2008-07-10 2016-02-17 리얼 뷰 이미징 리미티드 Broad viewing angle displays and user interfaces
US8657605B2 (en) 2009-07-10 2014-02-25 Lincoln Global, Inc. Virtual testing and inspection of a virtual weldment
US8915740B2 (en) 2008-08-21 2014-12-23 Lincoln Global, Inc. Virtual reality pipe welding simulator
US8851896B2 (en) 2008-08-21 2014-10-07 Lincoln Global, Inc. Virtual reality GTAW and pipe welding simulator and setup
US9330575B2 (en) 2008-08-21 2016-05-03 Lincoln Global, Inc. Tablet-based welding simulator
US9483959B2 (en) 2008-08-21 2016-11-01 Lincoln Global, Inc. Welding simulator
US8911237B2 (en) 2008-08-21 2014-12-16 Lincoln Global, Inc. Virtual reality pipe welding simulator and setup
US8884177B2 (en) 2009-11-13 2014-11-11 Lincoln Global, Inc. Systems, methods, and apparatuses for monitoring weld quality
US8747116B2 (en) 2008-08-21 2014-06-10 Lincoln Global, Inc. System and method providing arc welding training in a real-time simulated virtual reality environment using real-time weld puddle feedback
US8834168B2 (en) * 2008-08-21 2014-09-16 Lincoln Global, Inc. System and method providing combined virtual reality arc welding and three-dimensional (3D) viewing
US9280913B2 (en) 2009-07-10 2016-03-08 Lincoln Global, Inc. Systems and methods providing enhanced education and training in a virtual reality environment
US9318026B2 (en) 2008-08-21 2016-04-19 Lincoln Global, Inc. Systems and methods providing an enhanced user experience in a real-time simulated virtual reality welding environment
US9196169B2 (en) 2008-08-21 2015-11-24 Lincoln Global, Inc. Importing and analyzing external data using a virtual reality welding system
WO2010029707A2 (en) * 2008-09-12 2010-03-18 株式会社 東芝 Image projection system and image projection method
US20100156907A1 (en) * 2008-12-23 2010-06-24 Microsoft Corporation Display surface tracking
FR2942096B1 (en) * 2009-02-11 2016-09-02 Arkamys METHOD FOR POSITIONING A SOUND OBJECT IN A 3D SOUND ENVIRONMENT, AUDIO MEDIUM IMPLEMENTING THE METHOD, AND ASSOCIATED TEST PLATFORM
US8274013B2 (en) 2009-03-09 2012-09-25 Lincoln Global, Inc. System for tracking and analyzing welding activity
TW201039099A (en) * 2009-04-22 2010-11-01 Song-Yuan Hu Electronic device
US8279269B2 (en) * 2009-04-29 2012-10-02 Ke-Ou Peng Mobile information kiosk with a three-dimensional imaging effect
US20100306825A1 (en) * 2009-05-27 2010-12-02 Lucid Ventures, Inc. System and method for facilitating user interaction with a simulated object associated with a physical location
CN102450001B (en) * 2009-05-29 2015-08-12 惠普开发有限公司 Reduce the method for the viewpoint associated artifact in multi-projector system, equipment and optical projection system
US8269902B2 (en) * 2009-06-03 2012-09-18 Transpacific Image, Llc Multimedia projection management
KR101176612B1 (en) * 2009-06-11 2012-08-23 신닛뽄세이테쯔 카부시키카이샤 Process for producing thick high-strength steel plate with excellent toughness of heat-affected zone in high heat input welding and thick high-strength steel plate with excellent toughness of heat-affected zone in high heat input welding
CN102804789B (en) 2009-06-23 2015-04-29 Lg电子株式会社 Receiving system and method of providing 3D image
KR20120039703A (en) 2009-07-07 2012-04-25 엘지전자 주식회사 Method for displaying three-dimensional user interface
US9773429B2 (en) 2009-07-08 2017-09-26 Lincoln Global, Inc. System and method for manual welder training
US9221117B2 (en) 2009-07-08 2015-12-29 Lincoln Global, Inc. System for characterizing manual welding operations
US10748447B2 (en) 2013-05-24 2020-08-18 Lincoln Global, Inc. Systems and methods providing a computerized eyewear device to aid in welding
US9011154B2 (en) 2009-07-10 2015-04-21 Lincoln Global, Inc. Virtual welding system
US9294751B2 (en) 2009-09-09 2016-03-22 Mattel, Inc. Method and system for disparity adjustment during stereoscopic zoom
US8569655B2 (en) 2009-10-13 2013-10-29 Lincoln Global, Inc. Welding helmet with integral user interface
CN102577410B (en) 2009-10-16 2016-03-16 Lg电子株式会社 The method of instruction 3D content and the device of processing signals
US8569646B2 (en) 2009-11-13 2013-10-29 Lincoln Global, Inc. Systems, methods, and apparatuses for monitoring weld quality
US9468988B2 (en) 2009-11-13 2016-10-18 Lincoln Global, Inc. Systems, methods, and apparatuses for monitoring weld quality
US8736670B2 (en) * 2009-12-07 2014-05-27 Photon-X, Inc. 3D visualization system
US8532962B2 (en) 2009-12-23 2013-09-10 Honeywell International Inc. Approach for planning, designing and observing building systems
DE102010009737A1 (en) * 2010-03-01 2011-09-01 Institut für Rundfunktechnik GmbH Method and arrangement for reproducing 3D image content
US20110234591A1 (en) * 2010-03-26 2011-09-29 Microsoft Corporation Personalized Apparel and Accessories Inventory and Display
US8581905B2 (en) * 2010-04-08 2013-11-12 Disney Enterprises, Inc. Interactive three dimensional displays on handheld devices
US9733699B2 (en) * 2010-04-13 2017-08-15 Dean Stark Virtual anamorphic product display with viewer height detection
TWI407773B (en) * 2010-04-13 2013-09-01 Nat Univ Tsing Hua Method and system for providing three dimensional stereo image
US8315443B2 (en) 2010-04-22 2012-11-20 Qualcomm Incorporated Viewpoint detector based on skin color area and face area
US8995678B2 (en) 2010-04-30 2015-03-31 Honeywell International Inc. Tactile-based guidance system
US8990049B2 (en) 2010-05-03 2015-03-24 Honeywell International Inc. Building structure discovery and display from various data artifacts at scene
US8538687B2 (en) 2010-05-04 2013-09-17 Honeywell International Inc. System for guidance and navigation in a building
JP2011259373A (en) * 2010-06-11 2011-12-22 Sony Corp Stereoscopic image display device and stereoscopic image display method
JP5488306B2 (en) * 2010-07-29 2014-05-14 船井電機株式会社 projector
KR101695819B1 (en) * 2010-08-16 2017-01-13 엘지전자 주식회사 A apparatus and a method for displaying a 3-dimensional image
JP5451894B2 (en) * 2010-09-22 2014-03-26 富士フイルム株式会社 Stereo imaging device and shading correction method
US9001053B2 (en) 2010-10-28 2015-04-07 Honeywell International Inc. Display system for controlling a selector symbol within an image
KR20120065774A (en) * 2010-12-13 2012-06-21 삼성전자주식회사 Audio providing apparatus, audio receiver and method for providing audio
ES2767882T3 (en) 2010-12-13 2020-06-18 Lincoln Global Inc Welding learning system
WO2012088285A2 (en) 2010-12-22 2012-06-28 Infinite Z, Inc. Three-dimensional tracking of a user control device in a volume
US8773946B2 (en) 2010-12-30 2014-07-08 Honeywell International Inc. Portable housings for generation of building maps
US8982192B2 (en) 2011-04-07 2015-03-17 Her Majesty The Queen In Right Of Canada As Represented By The Minister Of Industry, Through The Communications Research Centre Canada Visual information display on curvilinear display surfaces
US9766698B2 (en) 2011-05-05 2017-09-19 Nokia Technologies Oy Methods and apparatuses for defining the active channel in a stereoscopic view by using eye tracking
US9560314B2 (en) 2011-06-14 2017-01-31 Microsoft Technology Licensing, Llc Interactive and shared surfaces
US9342928B2 (en) 2011-06-29 2016-05-17 Honeywell International Inc. Systems and methods for presenting building information
US9451232B2 (en) * 2011-09-29 2016-09-20 Dolby Laboratories Licensing Corporation Representation and coding of multi-view images using tapestry encoding
US9606992B2 (en) * 2011-09-30 2017-03-28 Microsoft Technology Licensing, Llc Personal audio/visual apparatus providing resource management
EP2768396A2 (en) 2011-10-17 2014-08-27 Butterfly Network Inc. Transmissive imaging and related apparatus and methods
US9106903B2 (en) 2011-11-18 2015-08-11 Zspace, Inc. Head tracking eyewear system
WO2013074997A1 (en) 2011-11-18 2013-05-23 Infinite Z, Inc. Indirect 3d scene positioning control
JP6017795B2 (en) * 2012-02-10 2016-11-02 任天堂株式会社 GAME PROGRAM, GAME DEVICE, GAME SYSTEM, AND GAME IMAGE GENERATION METHOD
JP2014006674A (en) * 2012-06-22 2014-01-16 Canon Inc Image processing device, control method of the same and program
US20160093233A1 (en) 2012-07-06 2016-03-31 Lincoln Global, Inc. System for characterizing manual welding operations on pipe and other curved structures
US9767712B2 (en) 2012-07-10 2017-09-19 Lincoln Global, Inc. Virtual reality pipe welding simulator and setup
US20140125763A1 (en) * 2012-11-07 2014-05-08 Robert Nathan Cohen 3d led output device and process for emitting 3d content output for large screen applications and which is visible without 3d glasses
US9667889B2 (en) 2013-04-03 2017-05-30 Butterfly Network, Inc. Portable electronic devices with integrated imaging capabilities
US10019130B2 (en) 2013-04-21 2018-07-10 Zspace, Inc. Zero parallax drawing within a three dimensional display
EP2806404B1 (en) * 2013-05-23 2018-10-10 AIM Sport AG Image conversion for signage
US10930174B2 (en) 2013-05-24 2021-02-23 Lincoln Global, Inc. Systems and methods providing a computerized eyewear device to aid in welding
US9426598B2 (en) 2013-07-15 2016-08-23 Dts, Inc. Spatial calibration of surround sound systems including listener position estimation
US20150072323A1 (en) 2013-09-11 2015-03-12 Lincoln Global, Inc. Learning management system for a real-time simulated virtual reality welding training environment
US10083627B2 (en) 2013-11-05 2018-09-25 Lincoln Global, Inc. Virtual reality and real welding training system and method
US9836987B2 (en) 2014-02-14 2017-12-05 Lincoln Global, Inc. Virtual reality pipe welding simulator and setup
US9667951B2 (en) 2014-02-18 2017-05-30 Cisco Technology, Inc. Three-dimensional television calibration
US9875573B2 (en) 2014-03-17 2018-01-23 Meggitt Training Systems, Inc. Method and apparatus for rendering a 3-dimensional scene
JP6687543B2 (en) 2014-06-02 2020-04-22 リンカーン グローバル,インコーポレイテッド System and method for hand welding training
JP2016012889A (en) * 2014-06-30 2016-01-21 株式会社リコー Image projection system
US20160048369A1 (en) * 2014-08-15 2016-02-18 Beam Authentic, LLC Systems for Displaying Media on Display Devices
US10416947B2 (en) 2014-07-28 2019-09-17 BEAM Authentic Inc. Mountable display devices
US10235807B2 (en) 2015-01-20 2019-03-19 Microsoft Technology Licensing, Llc Building holographic content using holographic tools
US20170084084A1 (en) * 2015-09-22 2017-03-23 Thrillbox, Inc Mapping of user interaction within a virtual reality environment
CN106067967B (en) * 2016-06-29 2018-02-09 汇意设计有限公司 Deformable Volumetric hologram implementation method
EP3319066A1 (en) 2016-11-04 2018-05-09 Lincoln Global, Inc. Magnetic frequency selection for electromagnetic position tracking
US10913125B2 (en) 2016-11-07 2021-02-09 Lincoln Global, Inc. Welding system providing visual and audio cues to a welding helmet with a display
US10878591B2 (en) 2016-11-07 2020-12-29 Lincoln Global, Inc. Welding trainer utilizing a head up display to display simulated and real-world objects
US10503456B2 (en) 2017-05-05 2019-12-10 Nvidia Corporation Method and apparatus for rendering perspective-correct images for a tilted multi-display environment
CN107193372B (en) * 2017-05-15 2020-06-19 杭州一隅千象科技有限公司 Projection method from multiple rectangular planes at arbitrary positions to variable projection center
US10373536B2 (en) 2017-05-26 2019-08-06 Jeffrey Sherretts 3D signage using an inverse cube illusion fixture
US10997872B2 (en) 2017-06-01 2021-05-04 Lincoln Global, Inc. Spring-loaded tip assembly to support simulated shielded metal arc welding
CA3028794A1 (en) * 2018-01-04 2019-07-04 8259402 Canada Inc. Immersive environment with digital environment to enhance depth sensation
US11557223B2 (en) 2018-04-19 2023-01-17 Lincoln Global, Inc. Modular and reconfigurable chassis for simulated welding training
US11475792B2 (en) 2018-04-19 2022-10-18 Lincoln Global, Inc. Welding simulator with dual-user configuration
CN110858464A (en) * 2018-08-24 2020-03-03 财团法人工业技术研究院 Multi-view display device and control simulator
CN109407329B (en) * 2018-11-06 2021-06-25 三亚中科遥感研究所 Space light field display method and device
WO2020102803A1 (en) * 2018-11-16 2020-05-22 Geomni, Inc. Systems and methods for generating augmented reality environments from two-dimensional drawings
IL264032B (en) * 2018-12-30 2020-06-30 Elbit Systems Ltd Systems and methods for reducing image artefacts in binocular displays
US11367361B2 (en) * 2019-02-22 2022-06-21 Kyndryl, Inc. Emulating unmanned aerial vehicle (UAV)
US11508131B1 (en) 2019-11-08 2022-11-22 Tanzle, Inc. Generating composite stereoscopic images
WO2024043772A1 (en) * 2022-08-23 2024-02-29 Samsung Electronics Co., Ltd. Method and electronic device for determining relative position of one or more objects in image

Citations (94)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US1592034A (en) * 1924-09-06 1926-07-13 Macy Art Process Corp Process and method of effective angular levitation of printed images and the resulting product
US4182053A (en) * 1977-09-14 1980-01-08 Systems Technology, Inc. Display generator for simulating vehicle operation
US4291380A (en) * 1979-05-14 1981-09-22 The Singer Company Resolvability test and projection size clipping for polygon face display
US4677576A (en) * 1983-06-27 1987-06-30 Grumman Aerospace Corporation Non-edge computer image generation system
US4763280A (en) * 1985-04-29 1988-08-09 Evans & Sutherland Computer Corp. Curvilinear dynamic image generation system
US4984179A (en) * 1987-01-21 1991-01-08 W. Industries Limited Method and apparatus for the perception of computer-generated imagery
US5079699A (en) * 1987-11-27 1992-01-07 Picker International, Inc. Quick three-dimensional display
US5276785A (en) * 1990-08-02 1994-01-04 Xerox Corporation Moving viewpoint with respect to a target in a three-dimensional workspace
US5287437A (en) * 1992-06-02 1994-02-15 Sun Microsystems, Inc. Method and apparatus for head tracked display of precomputed stereo images
US5327285A (en) * 1990-06-11 1994-07-05 Faris Sadeg M Methods for manufacturing micropolarizers
US5381127A (en) * 1993-12-22 1995-01-10 Intel Corporation Fast static cross-unit comparator
US5381158A (en) * 1991-07-12 1995-01-10 Kabushiki Kaisha Toshiba Information retrieval apparatus
US5400177A (en) * 1993-11-23 1995-03-21 Petitto; Tony Technique for depth of field viewing of images with improved clarity and contrast
US5438623A (en) * 1993-10-04 1995-08-01 The United States Of America As Represented By The Administrator Of National Aeronautics And Space Administration Multi-channel spatialization system for audio signals
US5515079A (en) * 1989-11-07 1996-05-07 Proxima Corporation Computer input system and method of using same
US5537144A (en) * 1990-06-11 1996-07-16 Revfo, Inc. Electro-optical display system for visually displaying polarized spatially multiplexed images of 3-D objects for use in stereoscopically viewing the same with high image quality and resolution
US5652617A (en) * 1995-06-06 1997-07-29 Barbour; Joel Side scan down hole video tool having two camera
US5745164A (en) * 1993-11-12 1998-04-28 Reveo, Inc. System and method for electro-optically producing and displaying spectrally-multiplexed images of three-dimensional imagery for use in stereoscopic viewing thereof
US5795154A (en) * 1995-07-07 1998-08-18 Woods; Gail Marjorie Anaglyphic drawing device
US5862229A (en) * 1996-06-12 1999-01-19 Nintendo Co., Ltd. Sound generator synchronized with image display
US5880733A (en) * 1996-04-30 1999-03-09 Microsoft Corporation Display system and method for displaying windows of an operating system to provide a three-dimensional workspace for a computer system
US5945985A (en) * 1992-10-27 1999-08-31 Technology International, Inc. Information system for interactive access to geographic information
US5956046A (en) * 1997-12-17 1999-09-21 Sun Microsystems, Inc. Scene synchronization of multiple computer displays
US6028593A (en) * 1995-12-01 2000-02-22 Immersion Corporation Method and apparatus for providing simulated physical interactions within computer generated environments
US6034717A (en) * 1993-09-23 2000-03-07 Reveo, Inc. Projection display system for viewing displayed imagery over a wide field of view
US6064354A (en) * 1998-07-01 2000-05-16 Deluca; Michael Joseph Stereoscopic user interface method and apparatus
US6069649A (en) * 1994-08-05 2000-05-30 Hattori; Tomohiko Stereoscopic display
US6072495A (en) * 1997-04-21 2000-06-06 Doryokuro Kakunenryo Kaihatsu Jigyodan Object search method and object search system
US6100903A (en) * 1996-08-16 2000-08-08 Goettsche; Mark T Method for generating an ellipse with texture and perspective
US6108005A (en) * 1996-08-30 2000-08-22 Space Corporation Method for producing a synthesized stereoscopic image
US6139434A (en) * 1996-09-24 2000-10-31 Nintendo Co., Ltd. Three-dimensional image processing apparatus with enhanced automatic and user point of view control
US6195205B1 (en) * 1991-12-18 2001-02-27 Reveo, Inc. Multi-mode stereoscopic imaging system
US6198524B1 (en) * 1999-04-19 2001-03-06 Evergreen Innovations Llc Polarizing system for motion visual depth effects
US6208346B1 (en) * 1996-09-18 2001-03-27 Fujitsu Limited Attribute information presenting apparatus and multimedia system
US6211848B1 (en) * 1998-05-15 2001-04-03 Massachusetts Institute Of Technology Dynamic holographic video with haptic interaction
US6226008B1 (en) * 1997-09-04 2001-05-01 Kabushiki Kaisha Sega Enterprises Image processing device
US6241609B1 (en) * 1998-01-09 2001-06-05 U.S. Philips Corporation Virtual environment viewpoint control
US6252707B1 (en) * 1996-01-22 2001-06-26 3Ality, Inc. Systems for three-dimensional viewing and projection
US6346938B1 (en) * 1999-04-27 2002-02-12 Harris Corporation Computer-resident mechanism for manipulating, navigating through and mensurating displayed image of three-dimensional geometric model
US6351280B1 (en) * 1998-11-20 2002-02-26 Massachusetts Institute Of Technology Autostereoscopic display system
US20020041327A1 (en) * 2000-07-24 2002-04-11 Evan Hildreth Video-based image control system
US6373482B1 (en) * 1998-12-23 2002-04-16 Microsoft Corporation Method, system, and computer program product for modified blending between clip-map tiles
US6392689B1 (en) * 1991-02-21 2002-05-21 Eugene Dolgoff System for displaying moving images pseudostereoscopically
US20020080094A1 (en) * 2000-12-22 2002-06-27 Frank Biocca Teleportal face-to-face system
US20020113752A1 (en) * 1998-04-20 2002-08-22 Alan Sullivan Multi-planar volumetric display system and method of operation using psychological vision cues
US6452593B1 (en) * 1999-02-19 2002-09-17 International Business Machines Corporation Method and system for rendering a virtual three-dimensional graphical display
US20020140698A1 (en) * 2001-03-29 2002-10-03 Robertson George G. 3D navigation techniques
US20030006943A1 (en) * 2000-02-07 2003-01-09 Seiji Sato Multiple-screen simultaneous displaying apparatus, multi-screen simultaneous displaying method, video signal generating device, and recorded medium
US20030011535A1 (en) * 2001-06-27 2003-01-16 Tohru Kikuchi Image display device, image displaying method, information storage medium, and image display program
US6529210B1 (en) * 1998-04-08 2003-03-04 Altor Systems, Inc. Indirect object manipulation in a simulation
US6556197B1 (en) * 1995-11-22 2003-04-29 Nintendo Co., Ltd. High performance low cost video game system with coprocessor providing high speed efficient 3D graphics and digital audio signal processing
US20030085866A1 (en) * 2000-06-06 2003-05-08 Oliver Bimber Extended virtual table: an optical extension for table-like projection systems
US20030085896A1 (en) * 2001-11-07 2003-05-08 Freeman Kyle G. Method for rendering realistic terrain simulation
US6593924B1 (en) * 1999-10-04 2003-07-15 Intel Corporation Rendering a non-photorealistic image
US6614427B1 (en) * 1999-02-01 2003-09-02 Steve Aubrey Process for making stereoscopic images which are congruent with viewer space
US6618049B1 (en) * 1999-11-30 2003-09-09 Silicon Graphics, Inc. Method and apparatus for preparing a perspective view of an approximately spherical surface portion
US6680735B1 (en) * 2000-10-04 2004-01-20 Terarecon, Inc. Method for correcting gradients of irregular spaced graphic data
US6690337B1 (en) * 1999-06-09 2004-02-10 Panoram Technologies, Inc. Multi-panel video display
US20040037459A1 (en) * 2000-10-27 2004-02-26 Dodge Alexandre Percival Image processing apparatus
US6715620B2 (en) * 2001-10-05 2004-04-06 Martin Taschek Display frame for album covers
US20040066384A1 (en) * 2002-09-06 2004-04-08 Sony Computer Entertainment Inc. Image processing method and apparatus
US20040066376A1 (en) * 2000-07-18 2004-04-08 Max Donath Mobility assist device
US6734847B1 (en) * 1997-10-30 2004-05-11 Dr. Baldeweg Gmbh Method and device for processing imaged objects
US20040125103A1 (en) * 2000-02-25 2004-07-01 Kaufman Arie E. Apparatus and method for volume processing and rendering
US20040130525A1 (en) * 2002-11-19 2004-07-08 Suchocki Edward J. Dynamic touch screen amusement game controller
US20040135744A1 (en) * 2001-08-10 2004-07-15 Oliver Bimber Virtual showcases
US20040135780A1 (en) * 2002-08-30 2004-07-15 Nims Jerry C. Multi-dimensional images system for digital image input and output
US20040164956A1 (en) * 2003-02-26 2004-08-26 Kosuke Yamaguchi Three-dimensional object manipulating apparatus, method and computer program
US20040169649A1 (en) * 2000-12-11 2004-09-02 Namco Ltd. Method, apparatus, storage medium, program, and program product for generating image data of virtual three-dimensional space
US20040196359A1 (en) * 2002-05-28 2004-10-07 Blackham Geoffrey Howard Video conferencing terminal apparatus with part-transmissive curved mirror
US20040208358A1 (en) * 2002-11-12 2004-10-21 Namco Ltd. Image generation system, image generation method, program, and information storage medium
US20050024331A1 (en) * 2003-03-26 2005-02-03 Mimic Technologies, Inc. Method, apparatus, and article for force feedback based on tension control and tracking through cables
US20050030308A1 (en) * 2001-11-02 2005-02-10 Yasuhiro Takaki Three-dimensional display method and device therefor
US20050057579A1 (en) * 2003-07-21 2005-03-17 Young Mark J. Adaptive manipulators
US20050093859A1 (en) * 2003-11-04 2005-05-05 Siemens Medical Solutions Usa, Inc. Viewing direction dependent acquisition or processing for 3D ultrasound imaging
US20050093876A1 (en) * 2002-06-28 2005-05-05 Microsoft Corporation Systems and methods for providing image rendering using variable rate source sampling
US6898307B1 (en) * 1999-09-22 2005-05-24 Xerox Corporation Object identification method and system for an augmented-reality display
US20050151742A1 (en) * 2003-12-19 2005-07-14 Palo Alto Research Center, Incorporated Systems and method for turning pages in a three-dimensional electronic document
US20050156881A1 (en) * 2002-04-11 2005-07-21 Synaptics, Inc. Closed-loop sensor on a solid-state object position detector
US20050162447A1 (en) * 2004-01-28 2005-07-28 Tigges Mark H.A. Dynamic width adjustment for detail-in-context lenses
US6943754B2 (en) * 2002-09-27 2005-09-13 The Boeing Company Gaze tracking system, eye-tracking assembly and an associated method of calibration
US20050219695A1 (en) * 2004-04-05 2005-10-06 Vesely Michael A Horizontal perspective display
US20050219240A1 (en) * 2004-04-05 2005-10-06 Vesely Michael A Horizontal perspective hands-on simulator
US20050219693A1 (en) * 2004-04-02 2005-10-06 David Hartkop Scanning aperture three dimensional display device
US6956576B1 (en) * 2000-05-16 2005-10-18 Sun Microsystems, Inc. Graphics system using sample masks for motion blur, depth of field, and transparency
US20050231532A1 (en) * 2004-03-31 2005-10-20 Canon Kabushiki Kaisha Image processing method and image processing apparatus
US20060126926A1 (en) * 2004-11-30 2006-06-15 Vesely Michael A Horizontal perspective representation
US20060170652A1 (en) * 2005-01-31 2006-08-03 Canon Kabushiki Kaisha System, image processing apparatus, and information processing method
US7102635B2 (en) * 1998-07-17 2006-09-05 Sensable Technologies, Inc. Systems and methods for sculpting virtual objects in a haptic virtual reality environment
US20060221071A1 (en) * 2005-04-04 2006-10-05 Vesely Michael A Horizontal perspective display
US20070035511A1 (en) * 2005-01-25 2007-02-15 The Board Of Trustees Of The University Of Illinois. Compact haptic and augmented virtual reality system
US20070040905A1 (en) * 2005-08-18 2007-02-22 Vesely Michael A Stereoscopic display using polarized eyewear
US20070043466A1 (en) * 2005-08-18 2007-02-22 Vesely Michael A Stereoscopic display using polarized eyewear
US20070109296A1 (en) * 2002-07-19 2007-05-17 Canon Kabushiki Kaisha Virtual space rendering/display apparatus and virtual space rendering/display method

Family Cites Families (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4795248A (en) * 1984-08-31 1989-01-03 Olympus Optical Company Ltd. Liquid crystal eyeglass
US5361386A (en) * 1987-12-04 1994-11-01 Evans & Sutherland Computer Corp. System for polygon interpolation using instantaneous values in a variable
US4853592A (en) * 1988-03-10 1989-08-01 Rockwell International Corporation Flat panel display having pixel spacing and luminance levels providing high resolution
US5168531A (en) * 1991-06-27 1992-12-01 Digital Equipment Corporation Real-time recognition of pointing information from video
JPH07325934A (en) * 1992-07-10 1995-12-12 Walt Disney Co:The Method and equipment for provision of graphics enhanced to virtual world
US5574835A (en) * 1993-04-06 1996-11-12 Silicon Engines, Inc. Bounding box and projections detection of hidden polygons in three-dimensional spatial databases
WO1995009405A1 (en) * 1993-09-28 1995-04-06 Namco Ltd. Clipping processor, three-dimensional simulator and clipping processing method
US5686975A (en) * 1993-10-18 1997-11-11 Stereographics Corporation Polarel panel for stereoscopic displays
US5684460A (en) * 1994-04-22 1997-11-04 The United States Of America As Represented By The Secretary Of The Army Motion and sound monitor and stimulator
AUPN003894A0 (en) 1994-12-13 1995-01-12 Xenotech Research Pty Ltd Head tracking system for stereoscopic display apparatus
US5574836A (en) * 1996-01-22 1996-11-12 Broemmelsiek; Raymond M. Interactive display apparatus and method with viewer position compensation
US6317127B1 (en) 1996-10-16 2001-11-13 Hughes Electronics Corporation Multi-user real-time augmented reality system and method
US6115022A (en) * 1996-12-10 2000-09-05 Metavision Corporation Method and apparatus for adjusting multiple projected raster images
US6047201A (en) * 1998-04-02 2000-04-04 Jackson, Iii; William H. Infant blood oxygen monitor and SIDS warning device
US20020163482A1 (en) 1998-04-20 2002-11-07 Alan Sullivan Multi-planar volumetric display system including optical elements made from liquid crystal having polymer stabilized cholesteric textures
US6215403B1 (en) * 1999-01-27 2001-04-10 International Business Machines Corporation Wireless monitoring system
US6417867B1 (en) * 1999-05-27 2002-07-09 Sharp Laboratories Of America, Inc. Image downscaling using peripheral vision area localization
US6669346B2 (en) * 2000-05-15 2003-12-30 Darrell J. Metcalf Large-audience, positionable imaging and display system for exhibiting panoramic imagery, and multimedia content featuring a circularity of action
US6643124B1 (en) * 2000-08-09 2003-11-04 Peter J. Wilk Multiple display portable computing devices
US6553256B1 (en) * 2000-10-13 2003-04-22 Koninklijke Philips Electronics N.V. Method and apparatus for monitoring and treating sudden infant death syndrome
US6593934B1 (en) * 2000-11-16 2003-07-15 Industrial Technology Research Institute Automatic gamma correction system for displays
US20020180727A1 (en) * 2000-11-22 2002-12-05 Guckenberger Ronald James Shadow buffer control module method and software construct for adjusting per pixel raster images attributes to screen space and projector features for digital warp, intensity transforms, color matching, soft-edge blending, and filtering for multiple projectors and laser projectors
GB2375699B (en) 2001-05-16 2003-08-13 Nibble Ltd Information management system and method
IL159013A0 (en) 2001-05-22 2004-05-12 Yoav Shefi Method and system for displaying visual content in a virtual three-dimensional space
US7259747B2 (en) 2001-06-05 2007-08-21 Reactrix Systems, Inc. Interactive video display system
TW579019U (en) 2001-06-13 2004-03-01 Eturbotouch Technology Inc Flexible current type touch film
US6478432B1 (en) 2001-07-13 2002-11-12 Chad D. Dyner Dynamically generated interactive real imaging device
CA2386702A1 (en) 2002-05-17 2003-11-17 Idelix Software Inc. Computing the inverse of a pdt distortion
US7190331B2 (en) 2002-06-06 2007-03-13 Siemens Corporate Research, Inc. System and method for measuring the registration accuracy of an augmented reality system
AU2003303111A1 (en) 2002-11-29 2004-07-29 Bracco Imaging, S.P.A. Method and system for scaling control in 3d displays
US7495638B2 (en) 2003-05-13 2009-02-24 Research Triangle Institute Visual display with increased field of view
US20050248566A1 (en) 2004-04-05 2005-11-10 Vesely Michael A Horizontal perspective hands-on simulator
US20050264857A1 (en) 2004-06-01 2005-12-01 Vesely Michael A Binaural horizontal perspective display
US20060250390A1 (en) 2005-04-04 2006-11-09 Vesely Michael A Horizontal perspective display
JP4738870B2 (en) 2005-04-08 2011-08-03 キヤノン株式会社 Information processing method, information processing apparatus, and remote mixed reality sharing apparatus
US7907167B2 (en) 2005-05-09 2011-03-15 Infinite Z, Inc. Three dimensional horizontal perspective workstation
US20060252979A1 (en) 2005-05-09 2006-11-09 Vesely Michael A Biofeedback eyewear system

Patent Citations (99)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US1592034A (en) * 1924-09-06 1926-07-13 Macy Art Process Corp Process and method of effective angular levitation of printed images and the resulting product
US4182053A (en) * 1977-09-14 1980-01-08 Systems Technology, Inc. Display generator for simulating vehicle operation
US4291380A (en) * 1979-05-14 1981-09-22 The Singer Company Resolvability test and projection size clipping for polygon face display
US4677576A (en) * 1983-06-27 1987-06-30 Grumman Aerospace Corporation Non-edge computer image generation system
US4763280A (en) * 1985-04-29 1988-08-09 Evans & Sutherland Computer Corp. Curvilinear dynamic image generation system
US4984179A (en) * 1987-01-21 1991-01-08 W. Industries Limited Method and apparatus for the perception of computer-generated imagery
US5079699A (en) * 1987-11-27 1992-01-07 Picker International, Inc. Quick three-dimensional display
US5515079A (en) * 1989-11-07 1996-05-07 Proxima Corporation Computer input system and method of using same
US6384971B1 (en) * 1990-06-11 2002-05-07 Reveo, Inc. Methods for manufacturing micropolarizers
US5327285A (en) * 1990-06-11 1994-07-05 Faris Sadeg M Methods for manufacturing micropolarizers
US5537144A (en) * 1990-06-11 1996-07-16 Revfo, Inc. Electro-optical display system for visually displaying polarized spatially multiplexed images of 3-D objects for use in stereoscopically viewing the same with high image quality and resolution
US5276785A (en) * 1990-08-02 1994-01-04 Xerox Corporation Moving viewpoint with respect to a target in a three-dimensional workspace
US6392689B1 (en) * 1991-02-21 2002-05-21 Eugene Dolgoff System for displaying moving images pseudostereoscopically
US5381158A (en) * 1991-07-12 1995-01-10 Kabushiki Kaisha Toshiba Information retrieval apparatus
US6195205B1 (en) * 1991-12-18 2001-02-27 Reveo, Inc. Multi-mode stereoscopic imaging system
US5287437A (en) * 1992-06-02 1994-02-15 Sun Microsystems, Inc. Method and apparatus for head tracked display of precomputed stereo images
US5945985A (en) * 1992-10-27 1999-08-31 Technology International, Inc. Information system for interactive access to geographic information
US6034717A (en) * 1993-09-23 2000-03-07 Reveo, Inc. Projection display system for viewing displayed imagery over a wide field of view
US5438623A (en) * 1993-10-04 1995-08-01 The United States Of America As Represented By The Administrator Of National Aeronautics And Space Administration Multi-channel spatialization system for audio signals
US5745164A (en) * 1993-11-12 1998-04-28 Reveo, Inc. System and method for electro-optically producing and displaying spectrally-multiplexed images of three-dimensional imagery for use in stereoscopic viewing thereof
US5400177A (en) * 1993-11-23 1995-03-21 Petitto; Tony Technique for depth of field viewing of images with improved clarity and contrast
US5381127A (en) * 1993-12-22 1995-01-10 Intel Corporation Fast static cross-unit comparator
US6069649A (en) * 1994-08-05 2000-05-30 Hattori; Tomohiko Stereoscopic display
US5652617A (en) * 1995-06-06 1997-07-29 Barbour; Joel Side scan down hole video tool having two camera
US5795154A (en) * 1995-07-07 1998-08-18 Woods; Gail Marjorie Anaglyphic drawing device
US6556197B1 (en) * 1995-11-22 2003-04-29 Nintendo Co., Ltd. High performance low cost video game system with coprocessor providing high speed efficient 3D graphics and digital audio signal processing
US6028593A (en) * 1995-12-01 2000-02-22 Immersion Corporation Method and apparatus for providing simulated physical interactions within computer generated environments
US6252707B1 (en) * 1996-01-22 2001-06-26 3Ality, Inc. Systems for three-dimensional viewing and projection
US5880733A (en) * 1996-04-30 1999-03-09 Microsoft Corporation Display system and method for displaying windows of an operating system to provide a three-dimensional workspace for a computer system
US5862229A (en) * 1996-06-12 1999-01-19 Nintendo Co., Ltd. Sound generator synchronized with image display
US6100903A (en) * 1996-08-16 2000-08-08 Goettsche; Mark T Method for generating an ellipse with texture and perspective
US6108005A (en) * 1996-08-30 2000-08-22 Space Corporation Method for producing a synthesized stereoscopic image
US6208346B1 (en) * 1996-09-18 2001-03-27 Fujitsu Limited Attribute information presenting apparatus and multimedia system
US6139434A (en) * 1996-09-24 2000-10-31 Nintendo Co., Ltd. Three-dimensional image processing apparatus with enhanced automatic and user point of view control
US6072495A (en) * 1997-04-21 2000-06-06 Doryokuro Kakunenryo Kaihatsu Jigyodan Object search method and object search system
US6226008B1 (en) * 1997-09-04 2001-05-01 Kabushiki Kaisha Sega Enterprises Image processing device
US6734847B1 (en) * 1997-10-30 2004-05-11 Dr. Baldeweg Gmbh Method and device for processing imaged objects
US5956046A (en) * 1997-12-17 1999-09-21 Sun Microsystems, Inc. Scene synchronization of multiple computer displays
US6241609B1 (en) * 1998-01-09 2001-06-05 U.S. Philips Corporation Virtual environment viewpoint control
US6529210B1 (en) * 1998-04-08 2003-03-04 Altor Systems, Inc. Indirect object manipulation in a simulation
US20020113752A1 (en) * 1998-04-20 2002-08-22 Alan Sullivan Multi-planar volumetric display system and method of operation using psychological vision cues
US6211848B1 (en) * 1998-05-15 2001-04-03 Massachusetts Institute Of Technology Dynamic holographic video with haptic interaction
US6064354A (en) * 1998-07-01 2000-05-16 Deluca; Michael Joseph Stereoscopic user interface method and apparatus
US7102635B2 (en) * 1998-07-17 2006-09-05 Sensable Technologies, Inc. Systems and methods for sculpting virtual objects in a haptic virtual reality environment
US6351280B1 (en) * 1998-11-20 2002-02-26 Massachusetts Institute Of Technology Autostereoscopic display system
US6373482B1 (en) * 1998-12-23 2002-04-16 Microsoft Corporation Method, system, and computer program product for modified blending between clip-map tiles
US6614427B1 (en) * 1999-02-01 2003-09-02 Steve Aubrey Process for making stereoscopic images which are congruent with viewer space
US6452593B1 (en) * 1999-02-19 2002-09-17 International Business Machines Corporation Method and system for rendering a virtual three-dimensional graphical display
US6198524B1 (en) * 1999-04-19 2001-03-06 Evergreen Innovations LLC Polarizing system for motion visual depth effects
US6346938B1 (en) * 1999-04-27 2002-02-12 Harris Corporation Computer-resident mechanism for manipulating, navigating through and mensurating displayed image of three-dimensional geometric model
US6690337B1 (en) * 1999-06-09 2004-02-10 Panoram Technologies, Inc. Multi-panel video display
US6898307B1 (en) * 1999-09-22 2005-05-24 Xerox Corporation Object identification method and system for an augmented-reality display
US6593924B1 (en) * 1999-10-04 2003-07-15 Intel Corporation Rendering a non-photorealistic image
US6618049B1 (en) * 1999-11-30 2003-09-09 Silicon Graphics, Inc. Method and apparatus for preparing a perspective view of an approximately spherical surface portion
US20030006943A1 (en) * 2000-02-07 2003-01-09 Seiji Sato Multiple-screen simultaneous displaying apparatus, multi-screen simultaneous displaying method, video signal generating device, and recorded medium
US20040125103A1 (en) * 2000-02-25 2004-07-01 Kaufman Arie E. Apparatus and method for volume processing and rendering
US6956576B1 (en) * 2000-05-16 2005-10-18 Sun Microsystems, Inc. Graphics system using sample masks for motion blur, depth of field, and transparency
US20030085866A1 (en) * 2000-06-06 2003-05-08 Oliver Bimber Extended virtual table: an optical extension for table-like projection systems
US20040066376A1 (en) * 2000-07-18 2004-04-08 Max Donath Mobility assist device
US20020041327A1 (en) * 2000-07-24 2002-04-11 Evan Hildreth Video-based image control system
US6680735B1 (en) * 2000-10-04 2004-01-20 Terarecon, Inc. Method for correcting gradients of irregular spaced graphic data
US20040037459A1 (en) * 2000-10-27 2004-02-26 Dodge Alexandre Percival Image processing apparatus
US6912490B2 (en) * 2000-10-27 2005-06-28 Canon Kabushiki Kaisha Image processing apparatus
US20040169649A1 (en) * 2000-12-11 2004-09-02 Namco Ltd. Method, apparatus, storage medium, program, and program product for generating image data of virtual three-dimensional space
US20020080094A1 (en) * 2000-12-22 2002-06-27 Frank Biocca Teleportal face-to-face system
US20020140698A1 (en) * 2001-03-29 2002-10-03 Robertson George G. 3D navigation techniques
US6987512B2 (en) * 2001-03-29 2006-01-17 Microsoft Corporation 3D navigation techniques
US20030011535A1 (en) * 2001-06-27 2003-01-16 Tohru Kikuchi Image display device, image displaying method, information storage medium, and image display program
US20040135744A1 (en) * 2001-08-10 2004-07-15 Oliver Bimber Virtual showcases
US6715620B2 (en) * 2001-10-05 2004-04-06 Martin Taschek Display frame for album covers
US20050030308A1 (en) * 2001-11-02 2005-02-10 Yasuhiro Takaki Three-dimensional display method and device therefor
US20030085896A1 (en) * 2001-11-07 2003-05-08 Freeman Kyle G. Method for rendering realistic terrain simulation
US20050156881A1 (en) * 2002-04-11 2005-07-21 Synaptics, Inc. Closed-loop sensor on a solid-state object position detector
US20040196359A1 (en) * 2002-05-28 2004-10-07 Blackham Geoffrey Howard Video conferencing terminal apparatus with part-transmissive curved mirror
US20050093876A1 (en) * 2002-06-28 2005-05-05 Microsoft Corporation Systems and methods for providing image rendering using variable rate source sampling
US20070109296A1 (en) * 2002-07-19 2007-05-17 Canon Kabushiki Kaisha Virtual space rendering/display apparatus and virtual space rendering/display method
US20040135780A1 (en) * 2002-08-30 2004-07-15 Nims Jerry C. Multi-dimensional images system for digital image input and output
US20040066384A1 (en) * 2002-09-06 2004-04-08 Sony Computer Entertainment Inc. Image processing method and apparatus
US6943754B2 (en) * 2002-09-27 2005-09-13 The Boeing Company Gaze tracking system, eye-tracking assembly and an associated method of calibration
US20040208358A1 (en) * 2002-11-12 2004-10-21 Namco Ltd. Image generation system, image generation method, program, and information storage medium
US20040130525A1 (en) * 2002-11-19 2004-07-08 Suchocki Edward J. Dynamic touch screen amusement game controller
US20040164956A1 (en) * 2003-02-26 2004-08-26 Kosuke Yamaguchi Three-dimensional object manipulating apparatus, method and computer program
US20050024331A1 (en) * 2003-03-26 2005-02-03 Mimic Technologies, Inc. Method, apparatus, and article for force feedback based on tension control and tracking through cables
US20050057579A1 (en) * 2003-07-21 2005-03-17 Young Mark J. Adaptive manipulators
US20050093859A1 (en) * 2003-11-04 2005-05-05 Siemens Medical Solutions Usa, Inc. Viewing direction dependent acquisition or processing for 3D ultrasound imaging
US20050151742A1 (en) * 2003-12-19 2005-07-14 Palo Alto Research Center, Incorporated Systems and method for turning pages in a three-dimensional electronic document
US20050162447A1 (en) * 2004-01-28 2005-07-28 Tigges Mark H.A. Dynamic width adjustment for detail-in-context lenses
US20050231532A1 (en) * 2004-03-31 2005-10-20 Canon Kabushiki Kaisha Image processing method and image processing apparatus
US20050219693A1 (en) * 2004-04-02 2005-10-06 David Hartkop Scanning aperture three dimensional display device
US20050219695A1 (en) * 2004-04-05 2005-10-06 Vesely Michael A Horizontal perspective display
US20050219240A1 (en) * 2004-04-05 2005-10-06 Vesely Michael A Horizontal perspective hands-on simulator
US20050219694A1 (en) * 2004-04-05 2005-10-06 Vesely Michael A Horizontal perspective display
US20060126926A1 (en) * 2004-11-30 2006-06-15 Vesely Michael A Horizontal perspective representation
US20060126927A1 (en) * 2004-11-30 2006-06-15 Vesely Michael A Horizontal perspective representation
US20070035511A1 (en) * 2005-01-25 2007-02-15 The Board Of Trustees Of The University Of Illinois Compact haptic and augmented virtual reality system
US20060170652A1 (en) * 2005-01-31 2006-08-03 Canon Kabushiki Kaisha System, image processing apparatus, and information processing method
US20060221071A1 (en) * 2005-04-04 2006-10-05 Vesely Michael A Horizontal perspective display
US20070040905A1 (en) * 2005-08-18 2007-02-22 Vesely Michael A Stereoscopic display using polarized eyewear
US20070043466A1 (en) * 2005-08-18 2007-02-22 Vesely Michael A Stereoscopic display using polarized eyewear

Cited By (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7796134B2 (en) 2004-06-01 2010-09-14 Infinite Z, Inc. Multi-plane horizontal perspective display
US9684994B2 (en) 2005-05-09 2017-06-20 Zspace, Inc. Modifying perspective of stereoscopic images based on changes in user viewpoint
US7907167B2 (en) 2005-05-09 2011-03-15 Infinite Z, Inc. Three dimensional horizontal perspective workstation
US8717423B2 (en) 2005-05-09 2014-05-06 Zspace, Inc. Modifying perspective of stereoscopic images based on changes in user viewpoint
US9292962B2 (en) 2005-05-09 2016-03-22 Zspace, Inc. Modifying perspective of stereoscopic images based on changes in user viewpoint
EP2356540A2 (en) * 2008-11-26 2011-08-17 Samsung Electronics Co., Ltd. Immersive display system for interacting with three-dimensional content
WO2010062117A2 (en) 2008-11-26 2010-06-03 Samsung Electronics Co., Ltd. Immersive display system for interacting with three-dimensional content
EP2356540A4 (en) * 2008-11-26 2014-09-17 Samsung Electronics Co Ltd Immersive display system for interacting with three-dimensional content
WO2011032217A1 (en) 2009-09-16 2011-03-24 Sydac Pty Ltd Visual presentation system
EP2478492B1 (en) 2009-09-16 2019-03-20 Sydac Pty Ltd Visual presentation system
EP2478492A4 (en) * 2009-09-16 2017-08-02 Sydac Pty Ltd Visual presentation system
US8970478B2 (en) * 2009-10-14 2015-03-03 Nokia Corporation Autostereoscopic rendering and display apparatus
US20120200495A1 (en) * 2009-10-14 2012-08-09 Nokia Corporation Autostereoscopic Rendering and Display Apparatus
US8717360B2 (en) 2010-01-29 2014-05-06 Zspace, Inc. Presenting a view within a three dimensional scene
US9202306B2 (en) 2010-01-29 2015-12-01 Zspace, Inc. Presenting a view within a three dimensional scene
US9824485B2 (en) 2010-01-29 2017-11-21 Zspace, Inc. Presenting a view within a three dimensional scene
US9351092B2 (en) * 2010-06-30 2016-05-24 Sony Corporation Audio processing device, audio processing method, and program
US20120002828A1 (en) * 2010-06-30 2012-01-05 Sony Corporation Audio processing device, audio processing method, and program
EP2418866A3 (en) * 2010-08-11 2014-05-21 LG Electronics Inc. Method for operating image display apparatus
US9098112B2 (en) 2010-08-31 2015-08-04 Nintendo Co., Ltd. Eye tracking enabling 3D viewing on conventional 2D display
US10372209B2 (en) 2010-08-31 2019-08-06 Nintendo Co., Ltd. Eye tracking enabling 3D viewing
US10114455B2 (en) 2010-08-31 2018-10-30 Nintendo Co., Ltd. Eye tracking enabling 3D viewing
US8704879B1 (en) 2010-08-31 2014-04-22 Nintendo Co., Ltd. Eye tracking enabling 3D viewing on conventional 2D display
EP2429202A3 (en) * 2010-09-13 2014-10-29 LG Electronics Inc. Image display apparatus and method for operating image display apparatus
US9020219B2 (en) 2011-05-06 2015-04-28 Kabushiki Kaisha Toshiba Medical image processing apparatus
EP2521362A3 (en) * 2011-05-06 2013-09-18 Kabushiki Kaisha Toshiba Medical image processing apparatus
US9134556B2 (en) 2011-05-18 2015-09-15 Zspace, Inc. Liquid crystal variable drive voltage
US8786529B1 (en) 2011-05-18 2014-07-22 Zspace, Inc. Liquid crystal variable drive voltage
US9958712B2 (en) 2011-05-18 2018-05-01 Zspace, Inc. Liquid crystal variable drive voltage
EP2541947A3 (en) * 2011-06-28 2013-04-17 Kabushiki Kaisha Toshiba Medical image processing apparatus
US9492122B2 (en) 2011-06-28 2016-11-15 Kabushiki Kaisha Toshiba Medical image processing apparatus
US20140282267A1 (en) * 2011-09-08 2014-09-18 Eads Deutschland Gmbh Interaction with a Three-Dimensional Virtual Scenario
US20130135310A1 (en) * 2011-11-24 2013-05-30 Thales Method and device for representing synthetic environments
WO2013088390A1 (en) 2011-12-14 2013-06-20 Universita' Degli Studi Di Genova Improved three-dimensional stereoscopic rendering of virtual objects for a moving observer
ITTO20111150A1 (en) * 2011-12-14 2013-06-15 Univ Degli Studi Genova Improved three-dimensional stereoscopic representation of virtual objects for a moving observer
US9265458B2 (en) 2012-12-04 2016-02-23 Sync-Think, Inc. Application of smooth pursuit cognitive testing paradigms to clinical drug development
EP2747430B1 (en) * 2012-12-18 2020-10-21 Samsung Electronics Co., Ltd 3D display device for displaying 3D image using at least one of gaze direction of user or gravity direction
US9380976B2 (en) 2013-03-11 2016-07-05 Sync-Think, Inc. Optical neuroinformatics
EP3074954A4 (en) * 2013-11-26 2017-06-14 Yoav Shefi Method and system for constructing a virtual image anchored onto a real-world object
US9841844B2 (en) * 2014-06-20 2017-12-12 Funai Electric Co., Ltd. Image display device
US20150370415A1 (en) * 2014-06-20 2015-12-24 Funai Electric Co., Ltd. Image display device
EP2957997A1 (en) * 2014-06-20 2015-12-23 Funai Electric Co., Ltd. Image display device
US10139721B1 (en) * 2017-05-23 2018-11-27 Hae-Yong Choi Apparatus for synthesizing spatially separated images
CN107333121A (en) * 2017-06-27 2017-11-07 山东大学 Immersive stereoscopic rendering projection system and method for a moving viewpoint on a curved screen
EP4068768A4 (en) * 2019-12-05 2023-08-02 Beijing Ivisual 3D Technology Co., Ltd. 3d display apparatus and 3d image display method

Also Published As

Publication number Publication date
US7796134B2 (en) 2010-09-14
WO2005118998A8 (en) 2006-04-27
WO2005119376A2 (en) 2005-12-15
WO2005119376A3 (en) 2006-04-27
KR20070052261A (en) 2007-05-21
US20050264858A1 (en) 2005-12-01
US20050281411A1 (en) 2005-12-22
US20050275913A1 (en) 2005-12-15
JP2008506140A (en) 2008-02-28
US20050264558A1 (en) 2005-12-01
US20050275914A1 (en) 2005-12-15
KR20070052260A (en) 2007-05-21
US20050275915A1 (en) 2005-12-15
WO2005118998A1 (en) 2005-12-15
US20050264857A1 (en) 2005-12-01
JP2008507006A (en) 2008-03-06
EP1781893A1 (en) 2007-05-09
EP1759379A2 (en) 2007-03-07

Similar Documents

Publication Publication Date Title
US20050264559A1 (en) Multi-plane horizontal perspective hands-on simulator
US20050219240A1 (en) Horizontal perspective hands-on simulator
US9684994B2 (en) Modifying perspective of stereoscopic images based on changes in user viewpoint
EP1740998A2 (en) Horizontal perspective hands-on simulator
US7907167B2 (en) Three dimensional horizontal perspective workstation
US20060126927A1 (en) Horizontal perspective representation
US20070291035A1 (en) Horizontal Perspective Representation
US20050248566A1 (en) Horizontal perspective hands-on simulator
US20060221071A1 (en) Horizontal perspective display
JP2009211718A (en) Image forming system, image forming method, program, and information storage medium
US20060250390A1 (en) Horizontal perspective display
JP3579683B2 (en) Method for producing stereoscopic printed matter, and stereoscopic printed matter
WO2006121955A2 (en) Horizontal perspective display

Legal Events

Date Code Title Description
AS Assignment

Owner name: INFINITE Z, LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:VESELY, MICHAEL A.;CLEMENS, NANCY L.;REEL/FRAME:019456/0438

Effective date: 20060317

AS Assignment

Owner name: INFINITE Z, INC., CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:INFINITE Z, LLC;REEL/FRAME:019464/0909

Effective date: 20061026

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION