WO2000020936A1 - Control system for variably operable devices - Google Patents

Control system for variably operable devices Download PDF

Info

Publication number
WO2000020936A1
Authority
WO
WIPO (PCT)
Prior art keywords
calibration
pan
data
point
sensors
Prior art date
Application number
PCT/CA1999/000904
Other languages
French (fr)
Inventor
Will N. Bauer
Rafael Lozano-Hemmer
Original Assignee
Acoustic Positioning Research Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Acoustic Positioning Research Inc. filed Critical Acoustic Positioning Research Inc.
Priority to AU58456/99A priority Critical patent/AU5845699A/en
Publication of WO2000020936A1 publication Critical patent/WO2000020936A1/en

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00Programme-control systems
    • G05B19/02Programme-control systems electric
    • G05B19/18Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form
    • G05B19/401Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form characterised by control arrangements for measuring, e.g. calibration and initialisation, measuring workpiece for machining purposes
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/18Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using ultrasonic, sonic, or infrasonic waves
    • G01S5/30Determining absolute distances from a plurality of spaced points of known location

Abstract

A calibration system is provided for variably operable devices such as robotic lamps or music systems so that a controlling system knows the 3D position of the device and its angular orientation. The calibration system includes sensors for emissions from the device at each of at least four calibration points located, for example, at corners of a square rug or foldable frame. The sensors are connected to a data transmission device which passes data to a computer for evaluation. A method of calibrating the devices includes scanning each of the sensors with an emitting device until the respective sensor indicates that the device is pointed accurately at the sensor.

Description

CONTROL SYSTEM FOR VARIABLY OPERABLE DEVICES
TECHNICAL FIELD
This invention relates to a control system for variably operable devices, notably lighting and sound devices in response to movement of an object within an area.
BACKGROUND ART
Robotic or "intelligent" lamps can be remotely controlled by an industry communications standard called "DMX-512". This is a high speed serial data protocol which allows control of parameters such as the pan and tilt angle at which the light beam is projected, beam intensity, colour selection, beam width (iris), focus, and light pattern ("gobo" selection) among others. Increasingly these lamps are used in conjunction with computer software running on external PC computers and/or lighting desks to enhance their capabilities. Some of this external software simulates the three dimensional ("3D") environment in which the lamps function, allowing programming of lighting effects to occur "off-line" (i.e. without the need for a theatre and lights). This is possible since the software provides a 3D virtual environment visualizing how the lamps will look when they are used in real life.
Other software/hardware systems are those such as Gesture and Media System (GAMS) technology for electronic media control as for example described and claimed in U.S. Patent No. 5,107,746 issued to Will Bauer on April 28,
1992, U.S. Patent No. 5,214,615 issued to Will Bauer on May 25, 1993 and U.S. Patent No. 5,412,619 issued to Will Bauer on May 2, 1995. Other software/hardware systems are also those such as virtual sound positioning systems which rely on knowledge of 3D speaker placement to generate the effect of 3D sound in a room and 3D systems which follow the movements of a performer, allowing sound and lighting media to automatically track and respond to performers' movements in various ways. These 3D systems must be calibrated when they are set up so as to give accurate measurements of 3D position. Similarly, virtual sound systems must be calibrated with information about the location of each of the many speakers used so they can project the correct amounts of sonic energy from each speaker to the correct regions of the auditorium.
As interest in using intelligent lamps to respond to 3D cues and movements grows, there is also an increasing need for the lamp itself to be calibrated so that the controlling system knows the lamp's 3D position (X, Y, and Z axes) and its angular orientation (pitch, yaw and roll rotational axes). Presently, calibrating the lamp is done either by direct entry of laboriously measured sets of these six coordinates or by pointing the lamp at four calibration points and deriving the six coordinates (X, Y, Z, pitch, yaw and roll) which define the lamp's position/orientation from the pan/tilt measurements of the lamp when pointed at these four points. Such a system is described and claimed in PCT/CA98/00684 in the name of Will Bauer.
While useful, the pointing procedure as described in PCT/CA98/00684 is presently slow; one must measure the distances between the four points exactly and then take the time to individually point each lamp at each of the four points. Additionally, the process almost invariably involves two people. One person controls the lamps (or the 3D positioning system if that is what is being calibrated) while the other must stand by the calibration points and direct the pointing of the lamps with sufficient accuracy (normally, the location of the control system for the lights/3D tracker is too remote to allow accurate viewing of the position of the lamps or 3D tracker sensors).
The present invention is aimed at providing a faster and less labour intensive way of performing this calibration.
An intelligent lamp can respond to 3D information in a variety of ways. For many of the responses, it is necessary to know the lamp's 3D position and orientation. Coordinates of the X, Y and Z axes of three dimensional space plus pitch, yaw and roll angles for the orientation in space give a six degree of freedom ("6DOF") description of the light's state. This information establishes a coordinate system which completely describes the lamp and is necessary to calculate the way in which the lamp responds to incoming 3D information. For example, for a light beam to follow an object moving in three dimensions, one must know the 3D coordinates of the object and the 6DOF coordinates of the lamp in order to correctly calculate the pan/tilt angles necessary to point the lamp at the moving object. Similarly, with a 3D tracking system such as GAMS or a virtual sound positioning system, a 3D frame of reference must be established and used to describe the positions of 3D objects while they are being tracked. This may be done in a manner similar to that of the lamp by placing a detector at each of three or four calibration points and measuring the distance from the ranging elements of the 3D tracking system or virtual sound positioning system to each of these calibration points. The radial distances thus measured may then be used to calculate the 3D positions of each of the ranging elements (for example, ultrasonic speakers) relative to the coordinate system established by the calibration points and thus establish a 3D frame of reference for referring measurements made by each ranging element to the common coordinate system.
Currently, with normal intelligent lighting fixtures, it is possible to calculate the lamp's position and orientation by pointing the lamp at four reference points located in a common plane (usually the stage floor) and measuring the pan/tilt angles required to point the lamp at each of the four points. This gives enough information that one skilled in the art can calculate the position and orientation of the lamp. A problem with this process is that it is quite time consuming to point each lamp to each of the four points. Often there are many lamps for which this must be done and it can take an inordinate amount of time.
DISCLOSURE OF THE INVENTION
Accordingly the invention provides, as an improvement in the above described process, the provision of a set of sensors at each of the four calibration points, each of the sensor sets being connected to a data transmission means to gather data from each of the sensors and transmit this data to a remote computer. Such transmission may be by any desirable means such as a cable, radio transmission, or light-based data transmission. The format of the data transmission may be any asynchronous/synchronous data protocol, particularly one such as DMX-512 (a serial data protocol commonly used in the lighting industry).
The sensor sets may be mounted at corners of a square on a (nominally) three to five metre square "rug" -- an easily rollable sheet of stiff fabric or plastic upon which is printed a plan of an X - Y coordinate plane, with mounting clips for the sensor sets at several different distances from the X - Y origin. Because the mounting clips are available at several different distances, it may be possible to use the rug in partially obstructed areas while still retaining precise knowledge of the spacing of the sensor sets relative to each other. The spacing of the sensors must be great enough to allow accuracy when calculating 3D positions, hence the nominal spacing of 3 - 5 metres mentioned earlier.
Alternatively, the sensor sets may be connected by a foldable frame which, when unfolded, releasably latches rigidly in place. When thus latched, the frame would establish a two dimensional set of coordinate axes with the sensor sets at known positions relative to each other. The actual size of the frame might be adjustable in steps to allow for calibration in areas that were smaller or larger in size. Other variations are also possible.
In use, the operator simply chooses a centre point for the coordinate system, aligns one axis of the frame along a desired heading, and connects the data transmission means to the remote computer system capable of controlling the lamps to be calibrated and to the ranging elements of any 3D virtual sound or tracking system(s) for which calibration is desired. Once connected, the computer, through a suitable computer program, directs the lamps to shine a beam of light or other emission and to first make a coarse scan throughout their range of pan/tilt values until the beam is detected by one of the sensor sets. The beam then scans in a more refined path to determine accurately the whereabouts of the edges of the cone of light, thus making possible the calculation of the pan/tilt values to centre the light cone on the calibration point. This scanning may be repeated until the lamp has found each of the four calibration points and accurately measured the pan/tilt values corresponding to the edges of the light beam for each calibration point. Similarly, for the 3D tracking system(s) or virtual sound systems, the remote computer may activate each ranging element, resulting in range information from each ranging element being detected at each of the four calibration point 3D tracking sensors. These sets of data, gathered automatically by software running on a remote PC computer, greatly speed and simplify the determination of 3D position/orientation since the detection of the data can be done more quickly than is possible manually and since no manpower is required during the calibration process.
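By way of illustration, the two-stage search might be sketched as follows. The helper callables set_pan_tilt() and beam_detected() are assumptions standing in for the DMX-512 lamp control and the calibration-point sensor feedback; the step sizes are likewise illustrative and not taken from the patent.

```python
def coarse_scan(set_pan_tilt, beam_detected, step=5.0):
    """Sweep the lamp's full pan/tilt range until a calibration-point
    photosensor reports the beam."""
    pan = 0.0
    while pan < 360.0:
        tilt = -90.0
        while tilt <= 90.0:
            set_pan_tilt(pan, tilt)
            if beam_detected():
                return pan, tilt
            tilt += step
        pan += step
    return None

def pan_edges(set_pan_tilt, beam_detected, pan_hit, tilt_hit, fine=0.1):
    """From a coarse hit, walk the pan angle outward in both directions
    until the light/dark transitions are found; the same procedure applies
    to tilt. The centred value is the angular average of the two edges."""
    lo = pan_hit
    set_pan_tilt(lo, tilt_hit)
    while beam_detected():
        lo -= fine
        set_pan_tilt(lo, tilt_hit)
    hi = pan_hit
    set_pan_tilt(hi, tilt_hit)
    while beam_detected():
        hi += fine
        set_pan_tilt(hi, tilt_hit)
    return lo, hi
```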
BRIEF DESCRIPTION OF THE DRAWINGS
An embodiment of the invention will now be described by way of example with reference to the drawings in which:
Figure 1 shows an arrangement of a system utilizing a single spot lamp for a performer on a large area;
Figure 1A shows an arrangement similar to that of Figure 1 utilizing a lighting system comprising a plurality of lamps;
Figure 1B shows an arrangement similar to that of Figure 1 utilizing a lighting system, a music system and other systems;
Figure 2 illustrates a movable base having sensors; Figure 3 illustrates a block diagram showing the operation of the base of Figure 2;
Figure 4 is a geometric presentation of the position of the 3D tracking system, virtual sound system, or pan/tilt computer controlled media relative to the centre point and the four calibration points;
Figure 5 is another geometric figure;
Figure 6 is a detail of Figure 5;
Figure 7 is a further geometric figure; and
Figures 8A, 8B and 8C are further geometric figures used in calculating the location of a 3D point.
MODES OF CARRYING OUT THE INVENTION
In Figures 1 and 1A a stage area 10A has a frame of reference 10B established by calibration points C1, C2, C3 and C4 of a removable mat or framework 10C. A further origin reference point is marked C. The virtual position 12 of a performer may be tracked by lamps under computer control which are capable of a wide range of pan and tilt angles.
The pan and tilt angles of the lamps are measured by computer 15 and used to calculate the 6DOF position/orientation of the light(s) 16 in accordance with an algorithm. Figure 1A shows a lighting system having a plurality of lamps 16.
Figure 1B shows a somewhat similar system to that of Figures 1 and 1A but including a music system 16A and another unspecified variably operable system 16B. The distances from the music system and other systems are measured by the computer 15 and used to calculate the 3D positions of those systems.
Figure 2 illustrates a base 18 to be superimposed on stage area 10A for calibration of points 1, 2, 3 and 4. The base 18 may be a rug or foldable frame having sensors 20, transmission means 22 and a printed orthogonal X - Y axis grid 24. The rug or frame may be square, having any convenient size, but, possibly, the side length may be from 3 to 5 metres. It may have an orthogonal grid printed on it as X and Y axes. The sensor set 20 for each calibration point may comprise a photosensor which can detect the presence and intensity of light coming from a lamp whose calibration is desired, plus a microphone for detecting ranging signals coming from a speaker used in a 3D tracking system or virtual sound system. The photosensors are light-to-frequency converters such as the TSL320 chip, which converts light intensity directly into a digital signal, eliminating the need for signal conditioning and A/D conversion. The sensors 20 may be attached to the rug or frame 18 by means of mounting clips 21. As may be best seen from Figure 3, the sensors 20 from each calibration point are wired to the transmission means 22, which may be a data encoder consisting of: a microcontroller 28 such as the Atmel 89C52 or 8535 equipped with a built-in serial data port capable of DMX-512 data reception and transmission; buffer circuitry 34 for the DMX data reception and transmission; a tone encoder 26 such as is normally used in touch-tone phones; a multiplexer 30 which can be switched by the microcontroller to select each of the microphones in turn; a high-pass filter 32 which allows only frequencies greater than about 17 KHz to pass through; a "frame start" oscillator which can be selected via the analog switch to indicate the start of a microphone data transmission frame; and a mixer 38 to mix the tone encoder output with that coming from the ultrasonic microphones 19. There is also a user interface 36 connected to the microcontroller 28 which allows the user to set the DMX base address and other useful parameters. The signal from the mixer may be transmitted by radio.
Light detection proceeds as follows: each of the photosensors produces a digital TTL signal which changes frequency in proportion to the amount of light falling on the sensor. These four signals are monitored by the microcontroller. When there is a sudden change in a sensor's light level going from dark to light, the microcontroller detects the frequency shift and encodes this change in two ways. Firstly, it changes the DMX channel corresponding to that particular sensor to a value of 255. Secondly, it makes the tone encoder emit a tone unique to that sensor. Similarly, when a transition from light to dark is detected, a DMX value of 127 is sent along with a second tone unique to that sensor. Thus for each of the photosensors, a separate indication is given when a change from dark to light or light to dark is detected, and each of these indications is conveyed in two ways: via DMX transmission and via tone encoding.
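The transition-encoding rule can be summarized in a few lines. The sketch below is an assumption-level model of the microcontroller logic described above, not firmware from the patent; the frequency-shift threshold is invented for illustration, while the DMX values 255 and 127 are those given in the text.

```python
DARK_TO_LIGHT_DMX = 255   # DMX value sent on a dark-to-light transition
LIGHT_TO_DARK_DMX = 127   # DMX value sent on a light-to-dark transition

def encode_transition(prev_freq_hz, freq_hz, threshold_hz=1000.0):
    """Map a photosensor frequency change to its DMX report. The sensor
    frequency rises with light level, so a large jump up means the beam
    has arrived and a large jump down means it has left. Returns the DMX
    value to place on the sensor's channel, or None if nothing changed.
    (A matching sensor-unique tone would be emitted alongside.)"""
    if freq_hz - prev_freq_hz > threshold_hz:
        return DARK_TO_LIGHT_DMX
    if prev_freq_hz - freq_hz > threshold_hz:
        return LIGHT_TO_DARK_DMX
    return None
```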
Sound detection proceeds as follows: under the direction of the microcontroller, each of the four microphones is selected in turn by the analog multiplexer switch and its signal is routed through the high-pass filter to the mixer, where it is mixed with any signals coming from the tone encoder. Additionally, a fifth source (the frame start oscillator or "FSO") is also switched by the microcontroller. This serves as an easily recognisable signal which indicates the order in which the microphones are being switched. For example, a two second burst of the FSO could indicate that the next microphone to be switched was mic number 1. A half second burst of the FSO could serve to demarcate the switching between microphones two, three, and four respectively. The high-pass filter serves a dual purpose. Firstly, it removes any audible noise artifacts from the microphone signal that might interfere with pulse timing (which for accuracy's sake uses high sound frequencies, i.e. those greater than about 17 KHz). Secondly, it allows the tone encoder signal to be frequency multiplexed with the ultrasonic pulses since the only frequencies present below the filter's cut-off will be due to the tone encoder.
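On the receiving side, the FSO bursts can be folded into the active microphone index with a small state machine. A minimal sketch follows, assuming the example durations given above (a two second burst before microphone 1, half second bursts between the others); the tolerance value is an illustrative assumption.

```python
def classify_fso_burst(duration_s, tol=0.15):
    """Classify a frame-start-oscillator burst by its duration: a ~2 s
    burst announces that microphone 1 is next; a ~0.5 s burst advances
    the multiplexer to the following microphone."""
    if abs(duration_s - 2.0) <= tol:
        return "frame_start"       # next microphone is number 1
    if abs(duration_s - 0.5) <= tol:
        return "advance"           # switch to the next microphone in turn
    return "unknown"

def track_active_mic(bursts):
    """Fold a sequence of burst durations into the active microphone index.
    Example: durations [2.0, 0.5, 0.5, 0.5] yield mics 1, 2, 3, 4."""
    mic = None
    for d in bursts:
        kind = classify_fso_burst(d)
        if kind == "frame_start":
            mic = 1
        elif kind == "advance" and mic is not None:
            mic = mic % 4 + 1
        yield mic
```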
Detection of the sound pulses and the subsequent conversion of this into ranging information is dealt with by the 3D tracking system or virtual sound system; we shall not elaborate the details here because they are incidental to this invention.
Detection of the tone/DMX encoded light signals by the remote PC is also a simple matter for one skilled in the art.
In order to calculate the correct positions/orientations for the lamp based on its pan/tilt angles when pointed at the four calibration points, it is necessary to determine which pan/tilt angle pair results in the photosensors being exactly in the centre of the cone of light that is being projected by the lamp. This is a relatively simple matter of finding the pan/tilt values where the edges of the light cone are just barely touching the photosensor and then taking an angular average of these two values for both pan and tilt.
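For instance, the centring computation reduces to averaging the two edge angles; the hedged sketch below uses a circular mean so that a pair of pan values straddling the 0/360 degree wrap-around is still handled correctly.

```python
import math

def centre_angle(edge_a_deg, edge_b_deg):
    """Angular average of the two angles at which the beam edge just
    touches the photosensor; the circular mean handles a pair that
    straddles the 0/360 degree wrap-around."""
    a, b = math.radians(edge_a_deg), math.radians(edge_b_deg)
    return math.degrees(math.atan2(math.sin(a) + math.sin(b),
                                   math.cos(a) + math.cos(b))) % 360.0
```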
Once the data has been transmitted to the remote PC, the position of the 3D tracking system's ranging elements, the virtual sound system's speakers, and/or the position/orientation of the lamp can be calculated.
An exemplary calculation follows with reference to Figures 4 - 8 of the drawings. It is first desired to calculate the 3D X,Y,Z coordinate position of object "O" plus its three orientation angles (theta, phi, and gamma).
The available information is: a) Distances d_1c, d_2c, d_13, d_23, d_14, and d_24. b) Pan and tilt angles β, p for the line segments R_1, R_2, R_3 and R_4 connecting point O with points 1, 2, 3 and 4 respectively, i.e. a line drawn through point O and point 1 can be expressed in radial coordinates (with point O as the coordinate origin) as (R_1, pan angle, tilt angle) where R_1 is the distance between point O and point 1 and "pan angle" and "tilt angle" are the pan and tilt angles around two orthogonal axes.
This problem can be solved in four steps: a) Calculate the X,Y,Z coordinates of points 1, 2, 3 and 4 given the coordinate origin point C at (0,0,0). b) Solve for radial distances R_1, R_2, R_3 and R_4. c) Solve for the X,Y,Z 3D coordinates of point O. d) Solve for the orientation angles (theta, phi, and gamma) of the object at point O.
The solution of each of these steps is exemplified below: a) Calculate the X,Y,Z coordinates of points 1, 2, 3 and 4
Consider Figure 3 comprising four points lying in a single plane (as will be explained below, three points are not enough to solve the problem and a fourth point is needed to obtain a solution) .
Points 1 and 2 are collinear with the centre point C and may be defined as lying along the Y axis of the coordinate system with origin C. Points 3 and 4 can be anywhere as long as point 3 is on the left side of the Y axis, point 4 is on the right side of it, and no three of points 1, 2, 3, and 4 are collinear. Distances d_1c, d_2c, d_13, d_23, d_14 and d_24 are known. Since points 1 and 2 lie along the Y axis, then, by inspection, their X,Y,Z coordinates are point 1 = (0, d_1c, 0) and point 2 = (0, -d_2c, 0), assuming for convenience that the plane is at a height of Z = 0 and is parallel to the X-Y plane. Also obvious is the fact that d_12 = d_1c + d_2c. From the theorem of Pythagoras,
Y_3 = Y_1 - (d_12^2 + d_13^2 - d_23^2) / (2*d_12)
and similarly,
Y_4 = Y_1 - (d_12^2 + d_14^2 - d_24^2) / (2*d_12)
Knowing which quadrant of the coordinate system point 3 is in, the X coordinate of point 3 can be determined as

X_3 = -sqrt(d_13^2 - (Y_3 - Y_1)^2)

(negative, since point 3 lies on the left side of the Y axis) and the X coordinate of point 4 as

X_4 = +sqrt(d_14^2 - (Y_4 - Y_1)^2)
Minor variations on this solution are obvious to one skilled in the art and will not be elaborated upon. X,Y,Z coordinates for points 3 and 4 have thus been found, being:

point 3 = (X_3, Y_3, 0)

and

point 4 = (X_4, Y_4, 0).
It should be noted that the calibration plane was arbitrarily assumed to be located at Z = 0. In fact, it could be located at any constant Z value "Zcal" without compromising the calculations which follow. The only difference would be that the calculated Z coordinate of the object "O" would have to have Zcal added to it at the end of these calculations.
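The whole of step a) can be expressed compactly. The sketch below follows the equations above with the stated sign conventions (point 3 on the negative X side, point 4 on the positive) and is illustrative rather than code from the patent.

```python
import math

def calibration_point_coords(d1c, d2c, d13, d23, d14, d24):
    """Place the four coplanar calibration points at Z = 0 from the six
    measured distances: points 1 and 2 on the Y axis, point 3 on the
    negative-X side, point 4 on the positive-X side."""
    y1 = d1c
    d12 = d1c + d2c
    y3 = y1 - (d12**2 + d13**2 - d23**2) / (2 * d12)
    y4 = y1 - (d12**2 + d14**2 - d24**2) / (2 * d12)
    x3 = -math.sqrt(d13**2 - (y3 - y1)**2)   # point 3 left of the Y axis
    x4 = math.sqrt(d14**2 - (y4 - y1)**2)    # point 4 right of the Y axis
    return (0.0, y1, 0.0), (0.0, -d2c, 0.0), (x3, y3, 0.0), (x4, y4, 0.0)
```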
b) Solve for radial distances R_1, R_2, R_3, and R_4
Given are pan angles beta (β_n) and tilt angles rho (p_n) obtained by pointing the tracking head 14 at each of calibration points 1, 2, 3, and 4 (i.e. n = 1, 2, 3, 4). Angles Z_12, Z_13, Z_14, and Z_23 can be calculated. With some manipulation, the expression
Cos(Z_mn) = Cos(p_n)*Cos(p_m)*Cos(β_n - β_m) + Sin(p_n)*Sin(p_m)

or

Z_mn = Cos^-1[Cos(p_n)*Cos(p_m)*Cos(β_n - β_m) + Sin(p_n)*Sin(p_m)]
can be obtained. Now, from the law of cosines it can be written that:
d_12^2 = R_1^2 + R_2^2 - 2*R_1*R_2*Cos(Z_12)

d_13^2 = R_1^2 + R_3^2 - 2*R_1*R_3*Cos(Z_13)

d_14^2 = R_1^2 + R_4^2 - 2*R_1*R_4*Cos(Z_14)

d_23^2 = R_2^2 + R_3^2 - 2*R_2*R_3*Cos(Z_23)
Due to the nature of these equations, they are not amenable to normal "closed-form" solution techniques even though there is enough information (3 equations in 3 unknowns) that a solution is possible. As one example of a solution, R_1 can be solved for numerically by using a computer to calculate a great many values of solutions of the quadratic equations:
i) R_1^2 + R_2^2 - 2*R_1*R_2*Cos(Z_12) - d_12^2 = 0 = F(R_2) with R_1 fixed.

ii) R_1^2 + R_3^2 - 2*R_1*R_3*Cos(Z_13) - d_13^2 = 0 = F(R_3) with R_1 fixed.

iii) R_1^2 + R_4^2 - 2*R_1*R_4*Cos(Z_14) - d_14^2 = 0 = F(R_4) with R_1 fixed.
It is clear from this that three points would not be enough to uniquely determine R_1, since a quadratic equation's solution has two roots, which would result in two possible positions along the axis through point O and point 1. Equations i) through iii) can be solved for a succession of R_1 values, obtaining pairs of roots R_2a, R_2b, R_3a, R_3b, R_4a, and R_4b for R_2, R_3, and R_4 from the quadratic equations for each R_1 value. All possible trios of root values obtained from each R_1 value can now be tested. The error test function can be defined as the sum of the absolute values of the functions F(R_2root), F(R_3root), and F(R_4root):

E = |F(R_2root)| + |F(R_3root)| + |F(R_4root)|

One of these trios (for the right R_1 value) will give a very low error, E. E should ideally be zero since with perfect root and R_1 values, F(R_2) = F(R_3) = F(R_4) = 0.

The possible root trios are:

(R_2a, R_3a, R_4a), (R_2a, R_3a, R_4b), (R_2a, R_3b, R_4a), (R_2a, R_3b, R_4b),
(R_2b, R_3a, R_4a), (R_2b, R_3a, R_4b), (R_2b, R_3b, R_4a), (R_2b, R_3b, R_4b)
To solve numerically for R_1, R_2, R_3 and R_4: a) Find the maximum R_1 value that could possibly yield a solution, i.e. the largest R_1 which yields two real roots for one of equations i), ii), or iii). b) Cycle through R_1 values between R_1max and zero, evaluating the error function E for each trial R_1 value. c) Iterate over a steadily narrowing range of R_1 values with a successively smaller "step" increment for R_1 values until the value of E falls low enough for the accuracy desired. d) Pick the R_1 value with the lowest E and designate the solution to be "R_1lowE" plus the root trio ("R_2lowE", "R_3lowE" and "R_4lowE") that yielded the lowest E value.
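A coarse pass of this search might look as follows. One reading is applied here: since the roots of equations i) through iii) satisfy those equations exactly, this sketch scores each root trio against the leftover d_23 and d_24 distance constraints (with Cos(Z_23) and Cos(Z_24) computed from the pan/tilt angles as above). The refinement passes of step c) would simply re-run the loop over a narrowing R_1 range around the best value.

```python
import math

def solve_radials(d12, d13, d14, d23, d24,
                  cZ12, cZ13, cZ14, cZ23, cZ24, steps=5000):
    """Coarse numeric search for (R1, R2, R3, R4). For each trial R1 the
    quadratics i)-iii) give root pairs for R2, R3, R4; every positive root
    trio is scored and the lowest-error combination wins."""

    def root_pair(r1, cz, dist):
        # Roots of Rn^2 - 2*r1*cz*Rn + (r1^2 - dist^2) = 0.
        disc = (r1 * cz) ** 2 - (r1 ** 2 - dist ** 2)
        if disc < 0:
            return ()
        s = math.sqrt(disc)
        return tuple(r for r in (r1 * cz + s, r1 * cz - s) if r > 0)

    # Step a): the largest R1 still yielding real roots for one of i)-iii).
    r1_max = max(d / math.sqrt(max(1e-12, 1.0 - cz * cz))
                 for d, cz in ((d12, cZ12), (d13, cZ13), (d14, cZ14)))

    best_err, best = float("inf"), None
    for i in range(1, steps + 1):                    # step b): cycle R1 values
        r1 = r1_max * i / steps
        for r2 in root_pair(r1, cZ12, d12):
            for r3 in root_pair(r1, cZ13, d13):
                for r4 in root_pair(r1, cZ14, d14):
                    e = (abs(r2 * r2 + r3 * r3 - 2 * r2 * r3 * cZ23 - d23 * d23)
                         + abs(r2 * r2 + r4 * r4 - 2 * r2 * r4 * cZ24 - d24 * d24))
                    if e < best_err:                 # step d): keep lowest E
                        best_err, best = e, (r1, r2, r3, r4)
    return best
```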
c) Solve for the X,Y,Z 3D coordinates of point O
Consider the tetrahedron of Figure 4.
It is desired to solve for X_0, Y_0, Z_0. Given are R_1, R_2, R_3 plus (X,Y,Z) coordinates for points 1, 2, and 3. It can be written:
R_1^2 = (X_0 - X_1)^2 + (Y_0 - Y_1)^2 + (Z_0 - Z_1)^2

R_2^2 = (X_0 - X_2)^2 + (Y_0 - Y_2)^2 + (Z_0 - Z_2)^2

R_3^2 = (X_0 - X_3)^2 + (Y_0 - Y_3)^2 + (Z_0 - Z_3)^2
These equations can be solved easily by normal methods. A particularly simple and useful case is when Z_1 = Z_2 = Z_3. The following is obtained:
i) X_0 = [R_n^2 - R_m^2 + X_m^2 - X_n^2 + Y_m^2 - Y_n^2 - 2*Y_0*(Y_m - Y_n)] / (2*(X_m - X_n))

ii) Y_0 = [R_r^2 - R_q^2 + X_q^2 - X_r^2 + Y_q^2 - Y_r^2 - 2*X_0*(X_q - X_r)] / (2*(Y_q - Y_r))

iii) Z_0 = sqrt(R_s^2 - (X_0 - X_s)^2 - (Y_0 - Y_s)^2) + Z_s, for s = 1, 2 or 3.
Clearly, a pair of points (m,n) or (q,r) must be chosen such that (X_m - X_n) ≠ 0 and (Y_q - Y_r) ≠ 0. An easy way to do this is to pick the pairs so that they are both on diagonals as shown in Figure 5. In Figure 5, choose 1,3 as the X_0 "m,n" pair and pick 2,4 as the Y_0 "q,r" pair.
Substituting and solving equations i) and ii) the following is obtained:
iv) Y_0 = C / [1 + B*(X_q - X_r)/(Y_q - Y_r)]

where C = (R_r^2 - R_q^2 + X_q^2 - X_r^2 + Y_q^2 - Y_r^2 - 2*A*(X_q - X_r)) / (2*(Y_q - Y_r)),
B = -(Y_m - Y_n) / (X_m - X_n), and
A = (R_n^2 - R_m^2 + X_m^2 - X_n^2 + Y_m^2 - Y_n^2) / (2*(X_m - X_n))
X_0 can be obtained by substituting the Y_0 of equation iv) into equation i), at which point Z_0 is determined by equation iii).
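Steps i) through iv) translate directly into code. The sketch below is illustrative: it assumes the diagonal pairing given above (points 1,3 as the "m,n" pair and 2,4 as the "q,r" pair) and an object above the calibration plane, so the positive square root is taken in equation iii).

```python
import math

def locate_object(points, R):
    """Solve equations i)-iv) for (X0, Y0, Z0). 'points' maps 1..4 to the
    (X, Y, Z) calibration coordinates (equal Z) and 'R' maps 1..4 to the
    radial distances from the object."""
    m, n, q, r = 1, 3, 2, 4
    Xm, Ym, _ = points[m]; Xn, Yn, _ = points[n]
    Xq, Yq, _ = points[q]; Xr, Yr, _ = points[r]
    # Equation i) rearranged as X0 = A + B*Y0.
    A = (R[n]**2 - R[m]**2 + Xm**2 - Xn**2 + Ym**2 - Yn**2) / (2 * (Xm - Xn))
    B = -(Ym - Yn) / (Xm - Xn)
    C = (R[r]**2 - R[q]**2 + Xq**2 - Xr**2 + Yq**2 - Yr**2
         - 2 * A * (Xq - Xr)) / (2 * (Yq - Yr))
    Y0 = C / (1 + B * (Xq - Xr) / (Yq - Yr))         # equation iv)
    X0 = A + B * Y0
    Xs, Ys, Zs = points[1]                           # s = 1 in equation iii)
    Z0 = math.sqrt(max(0.0, R[1]**2 - (X0 - Xs)**2 - (Y0 - Ys)**2)) + Zs
    return X0, Y0, Z0
```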
d) Solve for the orientation angles (theta (θ), gamma (γ) and phi (φ)) of the object at point O.
To solve for the angular orientation of the object at point O, the following information is available: pan and tilt angles β and p respectively, obtained by pointing the object at the four points 1, 2, 3, 4 mentioned earlier in this document.
The algorithm for the solution of this sub-objective is as follows: First, the Cartesian coordinates can be calculated using the object's frame of reference and knowledge of the measured R distances and β and p angles, i.e. X",Y",Z" of the calibration points are calculated relative to the object at the origin of its coordinate system.
Secondly, the rotation angle theta θ can be calculated based on equations involving the object frame of reference based coordinates of the calibration points (X",Y",Z") and the known X,Y,Z coordinates of the calibration points.
Thirdly, the tilt angle gamma γ can be calculated based on the theta θ value plus more equations comparing the object frame of reference coordinates with the (X,Y,Z) frame of reference of the calibration points .
Fourthly and finally, the pan angle phi φ can be calculated based on both theta θ and gamma γ. This is done numerically because there is often too much error propagation for a closed-form calculation to handle. The numeric solution (where the best phi φ is chosen to make the predicted calibration point locations match the actual ones) allows phi φ to be "tuned" to compensate out some of the errors made in calculating theta θ and gamma γ which makes the whole solution work much better.
The solution is based on the fact that one can rotationally transform the object O's position between the (X,Y,Z) coordinate system established via the four calibration points and an (X",Y",Z") coordinate system having point O as its centre. The angles required to properly transform the known (X,Y,Z) coordinates of the calibration points into the (X",Y",Z") coordinates calculable from the β_n, p_n pan/tilt angles are the theta θ, gamma γ, and phi φ being sought.
To perform the transformation, the object is taken through three rotations in a particular sequence. The transformation equations between (X,Y,Z) coordinates and (X",Y",Z") coordinates as the rotations are applied, in this order, are:
a) Rotation about Z axis:
i) X' = X*Cos(-θ) +Y*Sin(-θ) ii) Y' = -X*Sin(-θ) +Y*Cos(-θ)
b) Rotation about X' axis:
i) Y" = Y'*Cos(γ) +Z*Sin(γ) ii) Z' = -Y'*Sin(γ) +Z*Cos(γ)
c) Rotation about Y' axis:
i) X" = X'*Cos (φ) +Z'*Sin(φ) ii) Z" = -X'*Sin(φ) +Z'*Cos(φ)
Note that the (X,Y,Z) coordinates given here are really (X_cn - X_0, Y_cn - Y_0, Z_cn - Z_0), i.e. the (X,Y,Z) position relative to the object as the origin of the X,Y,Z coordinate system.
There is enough information to calculate four sets of (X",Y",Z") coordinates, i.e. the calibration point positions as seen from a frame of reference whose origin is the location of the object O. Clearly, angles β and p can be expressed (for n = 1, 2, 3, 4 signifying calibration points 1, 2, 3, 4 respectively):
β_n = Tan^-1[(X_0 - X_n) / (Z_0 - Z_n)]
and
p_n = Cos^-1[(Z_0 - Z_n) * sqrt(1 + Tan^2(β_n)) / R_n]
This can be rewritten for (X",Y",Z") coordinates as

p_n = Cos^-1[Z"_n * sqrt(1 + Tan^2(β_n)) / R_n]

or

Z"_n = R_n * Cos(p_n) / sqrt(1 + Tan^2(β_n))
We can get the sign of Z"_n right by checking β_n and making Z"_n = -Z"_n if Cos(β_n) > 0.
Similarly, for X"_n,

X"_n = -Z"_n * Tan(β_n)

and it follows that

Y"_n = R_n * Sin(p_n)

which also gives the sign of Y"_n.
Thus (X"_1, Y"_1, Z"_1), (X"_2, Y"_2, Z"_2), (X"_3, Y"_3, Z"_3), and (X"_4, Y"_4, Z"_4) can be calculated using angles p_1..p_4, β_1..β_4, and radial values R_1..R_4 (the radial distances from the object to the four calibration points).
Relative distance coordinates (X_r1, Y_r1, Z_r1), (X_r2, Y_r2, Z_r2), (X_r3, Y_r3, Z_r3), (X_r4, Y_r4, Z_r4) can then be defined as the positions relative to the object at point O, i.e.

X_rn = X_n - X_0

Y_rn = Y_n - Y_0

Z_rn = Z_n - Z_0 for n = 1, 2, 3, 4
Given all of this, we can solve b) i) for angle theta as

θ = Tan^-1[J/K]

where

J = (Y"_2 - (Z_r2/Z_r1)*Y"_1) * (Y_r3 - (Z_r3/Z_r1)*Y_r1) - (Y"_3 - (Z_r3/Z_r1)*Y"_1) * (Y_r2 - (Z_r2/Z_r1)*Y_r1)

and

K = (Y"_2 - (Z_r2/Z_r1)*Y"_1) * ((Z_r3/Z_r1)*X_r1 - X_r3) - (Y"_3 - (Z_r3/Z_r1)*Y"_1) * ((Z_r2/Z_r1)*X_r1 - X_r2)
Angle gamma γ can then be solved using equation b) i) to obtain:

Cos(γ) = (Y"_2*Z_r1 - Z_r2*Y"_1) / (Z_r1*Y'_2 - Z_r2*Y'_1)

and

Sin(γ) = (Y"_2*Y'_1 - Y'_2*Y"_1) / (Z_r2*Y'_1 - Z_r1*Y'_2)

Knowing both the Sin and Cos of gamma γ allows the sign of the angle to be set properly by making it equal to -(π + old γ) if Cos(γ) < 0, after which gamma γ is easily obtained as:

γ = Cos^-1[Cos(adjusted γ)]
Having obtained theta θ and gamma γ, we can now solve for phi φ numerically, since it has often been found that a calculated value of phi φ produces too much error due to error propagation through the theta θ and gamma γ calculations. This can be done by taking all of the equations a) through c) i) and ii) and computing (X"_ntest, Y"_ntest, Z"_ntest) using angles theta θ, gamma γ and a "test" angle phi φ_test which ranges over the full range of possible phi φ values. The "best" phi φ value will be the one that minimizes the error function

E_phi = Σ (|X"_n - X"_ntest| + |Y"_n - Y"_ntest| + |Z"_n - Z"_ntest|), summed over n = 1, 2, 3, 4

As is usual in numeric solutions, phi φ values can be iterated over successively smaller ranges with finer incremental steps until the desired level of accuracy is reached. Having completed the last step of the algorithm, a solution to the problem has been reached. The X,Y,Z coordinates of the object have been computed, plus its theta θ, gamma γ and phi φ angular orientation offsets, and there is complete information to be able to direct the object (through remote control of pan/tilt angle positioning) to point at any (X,Y,Z) position such as a moving performer.
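The rotation sequence a) through c) and the numeric phi search can be sketched as below. The angular step count and search range are illustrative assumptions, and, as noted above, a real implementation would repeat best_phi() over successively narrower ranges.

```python
import math

def rotate_to_object_frame(x, y, z, theta, gamma, phi):
    """Apply rotations a) to c) above to a relative coordinate
    (Xn - X0, Yn - Y0, Zn - Z0), yielding test values (X'', Y'', Z'')."""
    x1 = x * math.cos(-theta) + y * math.sin(-theta)   # a) rotate about Z
    y1 = -x * math.sin(-theta) + y * math.cos(-theta)
    y2 = y1 * math.cos(gamma) + z * math.sin(gamma)    # b) rotate about X'
    z1 = -y1 * math.sin(gamma) + z * math.cos(gamma)
    x2 = x1 * math.cos(phi) + z1 * math.sin(phi)       # c) rotate about Y'
    z2 = -x1 * math.sin(phi) + z1 * math.cos(phi)
    return x2, y2, z2

def best_phi(rel_pts, obj_pts, theta, gamma, steps=3600):
    """Numeric phi search: choose the phi that minimizes E_phi over the
    four calibration points. rel_pts holds (Xrn, Yrn, Zrn); obj_pts holds
    the (X''n, Y''n, Z''n) derived from the measured pan/tilt angles."""
    best_err, best = float("inf"), 0.0
    for i in range(steps):
        phi = -math.pi + 2.0 * math.pi * i / steps
        e = 0.0
        for (rx, ry, rz), (ox, oy, oz) in zip(rel_pts, obj_pts):
            tx, ty, tz = rotate_to_object_frame(rx, ry, rz, theta, gamma, phi)
            e += abs(ox - tx) + abs(oy - ty) + abs(oz - tz)
        if e < best_err:
            best_err, best = e, phi
    return best
```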

Claims

CLAIMS :
1. A tracking system for controlling at least one variably operable tracking device (16, 16A, 16B) utilizing a high speed serial data protocol in conjunction with a computer (15) programmed to modify the parameters to control the device in response at least to the three dimensional position of the device and to its orientation; the device (16, 16A, 16B) itself being calibrated to provide data relating to its three dimensional position and orientation by directing an emission outlet of the device at each one of at least four calibration points (C1, C2, C3, C4) located in a common plane and calculating its 3D position and orientation from pan/tilt angle data for the device at each of the calibration points, characterized in that: a set of sensors (24) for detecting emissions from the device is provided at each of the calibration points, each set of sensors being connected to a data transmission means (22) to gather data from the sensors (24) and transmit it to the computer to calculate, from said data and from the distances between the calibration points (C1, C2, C3, C4), the six degrees of freedom of the device.
2. A system as claimed in claim 1 characterized in that the device is selected from robotic lamps (16) and speakers (16A).
3. A system as claimed in claim 1 in which the calibration points and sensor sets are mounted on mounting clips at corners of a square on a sheet (18) of stiff fabric or plastic upon which is printed a plan of an X - Y coordinate plane.
4. A system as claimed in claim 3 in which mounting clips (21) are provided to define squares of different sizes.
5. A system as claimed in claim 3 in which the mounting clips (21) are movable to define squares of different sizes.
6. A system as claimed in claim 1 in which the calibration points and sensor sets are provided on a foldable frame (18) which is adjustable between a fully folded position and an unfolded use position, the frame (18) being releasably latchable in its unfolded use position.
7. A method of calibrating a variably operable tracking device (16, 16A, 16B) to provide information to a computer (15) regarding its three dimensional position and orientation for control of the device, by establishing pan/tilt angle data at each of at least four calibration points (C1, C2, C3, C4) and calculating the three dimensional position and orientation of the device (16, 16A, 16B) from said pan/tilt data; characterized in that: pan/tilt angle data is established by providing a set of sensors (20) for detecting an emission beam from the device at each calibration point; scanning the region of each calibration point with the device until the sensor set (20) indicates that emission from the device (16, 16A, 16B) is detected at the calibration point; transmitting pan/tilt data between the computer (15) and each calibration point (C1, C2, C3, C4); and calibrating the device in terms of its six degrees of freedom from said pan/tilt data and from the distances between the calibration points.
8. A method as claimed in claim 7 in which said scanning is performed in two steps, the first step being a coarse scan until emission is detected by one of the sensors, and the second step being a more refined scan to determine the edges of a cone of the beam, from which the centring of the beam on the calibration point is calculated.
PCT/CA1999/000904 1998-10-02 1999-09-30 Control system for variably operable devices WO2000020936A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU58456/99A AU5845699A (en) 1998-10-02 1999-09-30 Control system for variably operable devices

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CA2,249,761 1998-10-02
CA 2249761 CA2249761A1 (en) 1998-10-02 1998-10-02 Control system for variably operable devices

Publications (1)

Publication Number Publication Date
WO2000020936A1

Family

ID=4162887

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA1999/000904 WO2000020936A1 (en) 1998-10-02 1999-09-30 Control system for variably operable devices

Country Status (3)

Country Link
AU (1) AU5845699A (en)
CA (1) CA2249761A1 (en)
WO (1) WO2000020936A1 (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0420500A2 (en) * 1989-09-26 1991-04-03 Cyber Scientific Incorporated Acoustic digitizing system
GB2267160A (en) * 1992-05-21 1993-11-24 Flying Pig Systems Limited Light system configuration
EP0591899A1 (en) * 1992-10-08 1994-04-13 Ushio U-Tech Inc. Automatic control system for lighting projector
US5668537A (en) * 1993-11-12 1997-09-16 Chansky; Leonard M. Theatrical lighting control network
US5504477A (en) * 1993-11-15 1996-04-02 Wybron, Inc. Tracking system
US5412619A (en) * 1994-04-14 1995-05-02 Bauer; Will Three-dimensional displacement of a body with computer interface
WO1999005857A1 (en) * 1997-07-21 1999-02-04 Bauer Will N Virtual positioning media control system
WO1999055122A1 (en) * 1998-04-16 1999-10-28 Bauer Will N 3d ready lamp

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1514053A2 (en) * 2002-05-21 2005-03-16 Lightspace Corporation Interactive modular system
EP1514053A4 (en) * 2002-05-21 2009-11-18 Lightspace Corp Interactive modular system
WO2007057069A1 (en) * 2005-11-18 2007-05-24 Valeo Schalter Und Sensoren Gmbh System for detecting objects in the vicinity of a vehicle

Also Published As

Publication number Publication date
CA2249761A1 (en) 2000-04-02
AU5845699A (en) 2000-04-26

Similar Documents

Publication Publication Date Title
US11022284B2 (en) Computer-controlled lighting system
JP4812170B2 (en) Position measuring device, optical transmission method, and optical transmitter
CN100549919C (en) Device network with optional target
US6153836A (en) Adjustable area coordinate position data-capture system
US7423666B2 (en) Image pickup system employing a three-dimensional reference object
JP5544042B2 (en) Method and apparatus for controlling a laser tracker using a gesture
US7436522B2 (en) Method for determining the 3D coordinates of the surface of an object
WO1999005857A1 (en) Virtual positioning media control system
WO1998044316A9 (en) Adjustable area coordinate position data-capture system
CN101852607A (en) Rotary laser visual linear array space identification and positioning system
WO2001065206A2 (en) Low cost 2d position measurement system and method
US20170315228A1 (en) Rf in-wall image registration using position indicating markers
US20170315227A1 (en) Manipulation of 3-d rf imagery and on-wall marking of detected structure
CN108572369A (en) A kind of micro mirror scanning probe device and detection method
CN103175504B (en) Optical system
CN110376550B (en) Three-dimensional space positioning method and system based on position compensation
CN201764965U (en) Rotary type laser visual linear array space recognition positioning system
WO2000020936A1 (en) Control system for variably operable devices
WO2013059720A1 (en) Apparatus and method for measuring room dimensions
EP2115388B1 (en) Method of determining the flatness of a foundation to which a building structure, machinery or equipment is to be mounted
WO2017189687A1 (en) Optical image capture with position registration and rf in-wall composite image
CN206541028U (en) The single image sensor indoor visible light alignment system measured based on non-angled
JPH0248069B2 (en)
AU3766500A (en) Calibration of optical transmitter for position measurement systems
JP7078486B2 (en) Angle detection system and angle detection method

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH GM HR HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: CA

122 Ep: pct application non-entry in european phase