WO2007029997A1 - Display system utilising multiple display units - Google Patents

Display system utilising multiple display units

Info

Publication number
WO2007029997A1
Authority
WO
WIPO (PCT)
Prior art keywords
colour
image
display unit
input
display
Prior art date
Application number
PCT/MY2006/000008
Other languages
French (fr)
Inventor
Liang Shing Ng
Original Assignee
Continuum Science And Technologies Sdn Bhd
Priority date
Filing date
Publication date
Application filed by Continuum Science And Technologies Sdn Bhd filed Critical Continuum Science And Technologies Sdn Bhd
Publication of WO2007029997A1 publication Critical patent/WO2007029997A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F3/1446Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display display composed of modules, e.g. video walls
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/12Synchronisation between the display unit and other units, e.g. other display units, video-disc players
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3141Constructional details thereof
    • H04N9/3147Multi-projection systems
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00Control of display operating conditions
    • G09G2320/02Improving the quality of display appearance
    • G09G2320/0233Improving the luminance or brightness uniformity across the screen
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00Control of display operating conditions
    • G09G2320/06Adjustment of display parameters
    • G09G2320/0693Calibration of display systems
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2360/00Aspects of the architecture of display systems
    • G09G2360/14Detecting light within display terminals, e.g. using a single or a plurality of photosensors
    • G09G2360/145Detecting light within display terminals, e.g. using a single or a plurality of photosensors the light originating from the display screen
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/001Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background

Definitions

  • the present invention relates to display methods, particularly to a method for displaying high resolution images utilizing a plurality of display units, with each display unit displaying a sub-image and combining the sub-images to form a complete, seamless, high resolution image.
  • CRT cathode ray tube
  • LCDs Liquid Crystal Displays
  • plasma displays have also become popular nowadays due to improvements in the manufacturing process leading to reduced costs. However, they are still limited to screen sizes of below 50 inches; larger displays are extremely expensive. For even larger displays, a conventional optical projection display is normally used.
  • Optical projection displays have the advantage of being able to vary the display size simply by adjusting the zoom lens (in some models) or by adjusting the distance between display unit and screen.
  • HDTV high-definition television
  • displays having resolutions of 100 dots per inch over a 30 × 40 inch display are desired.
  • Such images include 12 mega-pixels of displayed information.
  • displays having such capabilities do not exist with conventional technologies. It is further desirable that such large display devices be easily transported and set up, and that they are available at a reasonable cost.
  • Front-projection systems have great difficulty in projecting a flawless combined image in the seam areas, which are usually sought to be minimized by involved, time-consuming and complex set-up and alignment procedures. Even with rear-projection systems, the mullions of their respective diffuser panels leave a visible image-less seam.
  • differences in resolution, geometry, brightness, intensity, and colour between the portions of the combined image, or sub-images, produced by the various display units making up a larger display can produce noticeable variations in the displayed image. Such effects are well known and easily seen, for example, in the jumbo television displays often used at sporting arenas, concerts and outdoor events.
  • United States Patent 6,456,339 describes a method to correct images from multiple display units by utilising image blending and positional correction of each display unit.
  • the described method is computationally very intensive and would require a powerful computer to implement since it relies on almost real time correction of reproduced images via a feedback loop.
  • the problem is especially acute for moving images, since the processor would be loaded with doing multiple tasks at the same time.
  • United States Patent 6,611,241 also describes another method to seamlessly display high resolution images from multiple sub-images.
  • the invention utilises a client-server model. While broadly similar to United States Patent 6,456,339 above, it utilises commercially available special-purpose multimedia processors in the clients to achieve its computational requirements. It does not, however, address the issue of displaying moving images specifically. It assumes that all clients would be able to simultaneously display the same frame of a moving image. This is not necessarily the case, especially if a client is heavily loaded with processing tasks.
  • the system can be a centralized coordinated system or decentralized coordinated system.
  • An example of a centralized coordinated system comprises a control terminal, a master image server, a plurality of image servers, each image server being associated with a display unit.
  • a 2 x 2 display arrangement would require a master image server, 3 image servers and 4 display units, as the master image server itself can also be the control terminal.
  • a video playback program running on each image server and on the master image server performs the necessary tasks required for generating sub-images and modifying appropriate pixel characteristics of the sub-images before they are displayed through the display units.
  • the clients can be any standard personal computers (PCs) or any media devices in communication with each other that are capable of playing back multimedia files while the display units can be any front or rear projectors.
  • a specific feature of the present invention is the capability of the system to synchronize the display of moving video images, e.g. from an HDTV source, so that each image server displays the same video frame at the same time.
  • GUI Graphical User Interface
  • Figure 1 shows an exemplary embodiment of the system.
  • Figure 2a shows a first type of test pattern displayed side by side.
  • Figure 2b shows a plot of output colour value vs. input colour value for two different display units displaying the test patterns illustrated in Figure 2a.
  • Figure 2c shows a normalised plot for output colours for two display units according to Figure 2b.
  • Figure 3a shows a second type of test pattern displayed side by side.
  • Figure 3b shows a plot of output colour value vs. input colour value for two different display units displaying the test patterns illustrated in Figure 3a.
  • Figure 3c shows two normalised plots for output colours for two display units according to Figure 3b.
  • Figure 4a shows the method for obtaining colour primaries and mapping dominant input colour components.
  • Figure 4b shows plot for the output of two display units when they are switched off.
  • Figure 4c shows plot for the output of two display units when the left display unit is switched on but no input has been applied yet.
  • Figure 4d shows plot for the output of two display units when the right display unit is switched on but no input has been applied yet.
  • Figure 4e shows plot for the output of two display units when they are switched on but no input has been applied yet.
  • Figure 4f shows plot for the output of two display units when an x-level of red colour input has been applied.
  • Figure 4g shows a Commission Internationale de l'Eclairage (CIE) chromaticity diagram where the colour of a pixel is represented by P and the colour primaries are represented by Ri, Gi, Bi for two different display units.
  • CIE Commission Internationale de l'Eclairage
  • Figure 5 shows overlapping areas in an exemplary display with four display units.
  • Figure 6 shows a typical display of a distributed GUI system.
  • FIG 1 shows an exemplary embodiment of the system.
  • a master image server (100) is connected to multiple image servers (110).
  • Each image server (110) is connected to a display unit (120), such as a projector that can be used for both front projection and back projection.
  • Both master image server (100) and image server (110) can be standard personal computers (PCs) or other devices capable of playing back multimedia files and communicating with similar devices via a means of communication such as a network.
  • the system is capable of reproducing a tiled image on a large screen (130), as shown in Figure 1, by means of front or back projection.
  • a sensor, in this case a charge-coupled device (CCD) digital camera (140), is used for capturing images of test patterns for colour correction of each sub-image.
  • the master image server (100) and image servers (110) are in communication via a means of data communication, such as an Ethernet local area network (LAN), and shown physically connected via a standard 10/100 Mbps hub (150).
  • LAN Ethernet local area network
  • 150 10/
  • Image servers (110) are apparatus for playing back still images or moving images.
  • the still image or moving image files are stored on mass storage devices on image servers (110).
  • the image server (110) can be a computer or similar hardware capable of playing back still images or moving images. They must also be capable of communicating with other similar devices via a means of data communication.
  • Image servers are also capable of reading from different mass storage devices such as hard disk drives, Video CD (VCD), Digital Video Disc (DVD) and other currently available magnetically or optically readable storage media. All the image servers can have identical hardware configurations. However, for the working of one embodiment of the invention, one of the image servers (110) can be dedicated as a master image server (100), as will be further disclosed later.
  • one of the image servers (110) may function as a control terminal and it has two video display cards.
  • One of the video display cards is used for displaying a sub-image onto the projection screen while the other video display card is connected to a separate display unit for displaying the whole tiled image.
  • the control terminal (160) is a separate workstation connected to other image servers via a means of communications, such as a hub (150) in a LAN network.
  • the workstation (160) can function as a control terminal to enable a user to view the entire tiled image on the video monitor of the workstation (160).
  • Either the image server with two video display cards or the workstation (160) can also be used by a user as a control terminal to control the entire operation of displaying the whole tiled image.
  • the control terminal (160) can also be further extended to perform other functions such as displaying graphical user interface (GUI) objects as will be further disclosed later.
  • GUI graphical user interface
  • All hardware used is off-the-shelf and easily obtainable.
  • a CCD digital camera (140) is used to capture images of a test pattern produced by the display units (120), one of which is designated as the reference display unit.
  • the test pattern (210) is reproduced side by side on a screen (130) by each display unit and captured by the camera (140).
  • the increase of input colour is from left to right only.
  • relative positions along the bottom edge of this test pattern represents the input colour values.
  • Two or more images from each display unit (120) can be captured by the camera (140) at a time. The number of test pattern images that can be simultaneously captured depends on the resolution of the camera. This is repeated until test patterns from all display units are captured.
  • the captured images are downloaded into a computer and analysed.
  • the colour of the output image on the screen that is captured by the CCD camera (220) is compared against input colour (222) provided by the test pattern (210) file to the graphics card.
  • the intensities (220) of Red (224a, 226a), Green (224b, 226b) and Blue (224c, 226c) components reproduced on screen are plotted against their input colour values (222).
  • An example of a typical captured plot (212) for two display units (224, 226) is shown in Figure 2b. It can be seen that, for the same input colour values, such as red (224a, 226a), the reproduced output colours on screen from different display units are different. This would be true even if both display units were the same model and belonged to the same production batch.
  • the display unit with the smallest dynamic range is designated the reference display unit; in this case it is the left display unit (224).
  • a straight line (224d) is drawn from the point marked X to X'.
  • a straight line (226d) is also drawn for red colour graph (226a) of the right display unit (226) from point marked Y to Y'.
  • the gradients of the two lines (224d, 226d) are not necessarily the same. This can be more clearly seen in Fig. 2c on a typical normalised plot (300a) which has a common pixel input axis (222a).
  • a red colour input (e.g. 122) must be adjusted by the correct amount to the intended value (i.e. 130).
  • a second method for correcting the output colour for each display unit according to the designated reference display unit is to use a second test pattern (310a), as shown in Fig. 3a, and perform most of the steps outlined earlier. Instead of having all three R, G, B colour components, only a single colour component is used at a time and its input colour value is increased stepwise in the second test pattern. Preferably each band of input colour value is equally wide.
  • The input colour value of one of the colour components, e.g. red, obtained from the second type of test pattern (310a) is plotted as indicated by (324a1) and (326a1) in a plot (312a) of output colour value (320) versus input colour (322), as shown in Figure 3b.
  • the plots of input colour values for the left display unit (324) and the right display unit (326) increase stepwise instead of increasing continuously.
  • the width of uniform input colour value may be set to be as wide as needed to obtain the most reliable average value for each output colour value and they appear as steps in Figure 3b.
  • the difference between the highest input colour value and the lowest input colour value in a single test pattern may be smaller than the full input colour value range. Either of these variations may require more than one test pattern of the second type to cover the full input colour value range. If more than one test pattern of the second type is used for the abovementioned reason, the plots from each second-type test pattern are joined together. After that, best-fit straight lines (324d1, 326d1) are drawn through the midpoints of the steps in Figure 3b and reproduced on the left-hand side of Figure 3c. Alternatively, the midpoints of the steps in Figure 3b are joined to adjacent midpoints with straight lines (324d2, 326d2) to obtain the normalised plot as shown on the right-hand side of Figure 3c.
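The best-fit step above can be sketched in a few lines of Python. This is an illustrative reconstruction, not code from the patent; the function name is invented:

```python
import numpy as np

def fit_response_line(midpoints_in, midpoints_out):
    # Best-fit straight line through the step midpoints measured from
    # the second test pattern (cf. lines 324d1, 326d1 in Figure 3c).
    slope, intercept = np.polyfit(midpoints_in, midpoints_out, 1)
    return slope, intercept
```

The alternative piecewise variant (324d2, 326d2) would simply connect adjacent midpoints with straight segments instead of fitting a single line.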
  • the corrected values are obtained from the new plots in a similar manner of mapping as outlined for the first embodiment of the colour correction method disclosed earlier.
  • a given output colour value such as 100 in this example
  • a red colour input (e.g. 122) in an original image file played back by the right display unit must be increased or decreased by the correct amount to the intended value (i.e. 130) so that the right display unit will reproduce the same output as the left display unit.
  • the differences between the second and the first methods of colour correction are the test pattern used and the way the straight lines are obtained. It is noted that the straight lines (324d1, 326d1) or the plots of points joined with lines (324d2, 326d2) define the mapping relationships for correcting display unit colours.
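One way such a mapping relationship could be applied in software is sketched below; the lookup-table approach, linear interpolation, and function name are assumptions for illustration, not taken from the patent. The measured response of the unit under correction is inverted against the reference unit's response, so that each input level is replaced by the input level that reproduces the reference output:

```python
import numpy as np

def build_correction_lut(ref_response, unit_response):
    # ref_response and unit_response are (input_levels, measured_outputs)
    # pairs from the test-pattern captures. For each 8-bit input value,
    # find the input to the unit under correction that reproduces the
    # reference display's output at that value.
    inputs = np.arange(256)
    ref_out = np.interp(inputs, ref_response[0], ref_response[1])
    # Invert the corrected unit's curve: output level -> required input.
    lut = np.interp(ref_out, unit_response[1], unit_response[0])
    return np.clip(np.round(lut), 0, 255).astype(np.uint8)
```

With a dimmer right display unit, such a table maps an input of 122 up to roughly 130, matching the worked example in the text.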
  • input colour of this test pattern can be altered on the spot when the calibration method exemplified in Fig.
  • Figures 4b to 4f are plots of output colour values for a test pattern produced from each display unit.
  • Each R, G and B label refers to red, green and blue component respectively.
  • a CCD digital camera (140), as shown in Figure 1, is now also used to capture the displayed image of test patterns placed side by side by different display units.
  • the test pattern file is altered on board the image server (110) and the input colour value is fed to the graphics device control card.
  • This technique uses a commercially available camera, preferably a CCD camera to capture the display image of the test pattern.
  • test pattern from each display unit is displayed side-by-side and captured by the camera in a single frame.
  • the characteristics and the output colour level of different colour components of the displayed test patterns can then be compared with each other.
  • a great number of measurements can be derived which would otherwise be impossible or would require the use of expensive detection or measurement devices.
  • the CCD camera has detector voltage biasing and signal processing circuits that automatically maximize the contrast between the lowest intensity and the highest intensity detected. For this reason, some images of reproduced test patterns wherein the input colour values are relatively high are captured by the camera (140) with background lighting instead of in total darkness, while some images wherein the input colour values are relatively low are captured in total darkness.
  • the background lighting helps to prevent the camera from being driven to saturation at high output colour values. Saturation of the camera (140) will happen when the camera is exposed to relatively high intensity light against a totally dark background (i.e. a large difference of intensities).
  • detector saturation has set in when a recorded output colour value in the images no longer increases (e.g. the maximum value 255 for an 8-bit-per-colour-component CCD camera) even though the input colour values can still be further increased.
  • Typical results obtained from steps (600), (602), (604) and (606) are shown in Figures 4b, 4c, 4d and 4e respectively.
  • the respective auxiliary output colour components between the two display units however can be different.
  • Figure 4f shows two typical plots of output colour for the same sets of input colour values. In each case, the test pattern reproduced by one display unit initially has the same r, g and b input values as the test pattern reproduced by the other display unit. These figures show the test pattern before the input colour component is adjusted as in step (612).
  • the reference display unit, e.g. R1
  • when the dominant output colour component approaches saturation, the background light is turned on and the change in output colour is noted, as shown in step (616).
  • steps (612), (614) and (616) are carried out for each dominant input colour values.
  • the auxiliary colour component output values (following the example given, i.e. green and blue) are also recorded if they are present (which will most likely be the case).
  • R2, G2, B2 and iv) the output colour component values for the reference display unit, i.e. R1, G1, B1, are noted.
  • the procedure from step 612 to 618 is repeated for other colour components (continuing from the example i.e. green and blue) for the same display unit.
  • Other display units that will be used are also subjected to the same procedure from step 600 to 618 for all the input colour components.
  • the ratio of the red input colour component to the sum of all input colour components, p_r, for a new given input colour (simply referred to as the "red input colour ratio" hereinafter) and the ratio of the green input colour component to the sum of all input colour components, p_g (simply referred to as the "green input colour ratio" hereinafter), for the same input colour are calculated.
  • CIE Commission Internationale de l'Eclairage
  • the corresponding equations for the display unit under correction can be obtained from equations (3a) and (3b) again by substituting the reference display unit colour primaries (R(x,1), R(y,1), G(x,1), G(y,1), B(x,1), B(y,1)) with those (R(x,2), R(y,2), G(x,2), G(y,2), B(x,2), B(y,2)) of each display unit under correction.
  • the input colour values for the display unit under correction are obtained from its own red and green colour ratios, p(r,2) and p(g,2), and the value of one of the input colour components, such as r2, by using equations (1a, 1b) again to solve for the other two colour component values, g2 and b2.
  • any set of the colour component values can be used.
  • the input colour component that has the largest value is chosen for convenience of computation. It is conceivable that all the steps in the calibration can be automated so that a comprehensive set of matched input colour values and the related colour primaries can be obtained. Furthermore, these vast sets of values can be stored in different types of programmable read-only memories (PROMs) on board a specially designed graphics device control card for runtime colour correction. This functionality can also be performed on any other hardware that controls a display unit.
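Since a colour component ratio is simply that component divided by the sum of all components, recovering the remaining components from the ratios and one known component reduces to small arithmetic. The sketch below illustrates that relationship only; the function name and the choice of red as the known component are assumptions for illustration:

```python
def solve_components(p_r, p_g, r):
    # p_r = r / (r + g + b) and p_g = g / (r + g + b). Given the red
    # component value r and both ratios, recover green and blue.
    total = r / p_r          # total = r + g + b
    g = p_g * total
    b = total - r - g
    return g, b
```

In the calibration flow, the largest known component would be passed in, and the returned values give the other two corrected components for the display unit under correction.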
  • PROMs programmable read-only memories
  • colour correction as described in the previous section is applied to individual pixels.
  • the correction value makes the reproduced intensity of each colour pixel (Red, Green or Blue as the case may be) uniform across the whole display.
  • a blue screen would look uniformly blue across the entire 4 sub-images, except in areas where the sub-images overlap, which would obviously be brighter.
  • a sub-image needs to be blended with adjacent sub-image so that the overlapping regions appear seamless.
  • Image blending requires the ability to correct at least the intensity of the overlapping pixels of each sub-image.
  • the image correction software running on the image servers (110) must adjust the intensity of the overlapped pixels such that these artefacts do not appear during image playback.
  • a first image blending method is used, whereby the intensity of the displayed pixels is linearly reduced across the overlap region. For example, if the overlapped region (410) is 20 pixels wide between display 1 and display 2, the pixel intensity from display 1 would reduce from 100% at (430) to 0% at (440), in steps of 5%. For wider overlapping regions, the pixel intensities would be reduced in smaller steps.
  • The image blending method can subsequently be modified to reduce intensity in a nonlinear fashion across the overlap region.
  • the step sizes are non-uniform and may vary from one another.
  • the size of the step for any pixel in the overlapped region can be arbitrary or depends on the intensity variation profile that is adopted. Intensities of pixels at the corners of the overlapping images (420) are further attenuated in a similar manner in comparison to the pixels at the edges of the overlapping images (410) so that the overall image appears seamless. In essence, the colour intensity of any overlapping pixels is reduced so that the desired intensity is reproduced where the pixels overlap.
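The linear ramp described above (100% down to 0% across a 20-pixel-wide overlap, in 5% steps) can be sketched as follows; the function name is illustrative, not from the patent:

```python
def overlap_weights(width=20):
    # Display 1's weight falls linearly from 100% to 0% across the
    # overlap while display 2's rises complementarily, so the summed
    # intensity stays constant across the seam. width=20 gives 5% steps.
    w1 = [1.0 - i / width for i in range(width + 1)]
    w2 = [1.0 - w for w in w1]
    return w1, w2
```

A nonlinear profile, as in the modified method, would replace the linear expression with any monotone ramp whose two weights still sum to one at every pixel.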
  • a video play back program that runs on the master image server in a centralized coordination setup sends out a frame synchronization message from time to time, preferably after every several frames have elapsed, to synchronize image frame displayed by each image server so that each image frame displayed has the same frame number.
  • A similar video playback program runs on each image server to receive the frame synchronization message. If any image server is not displaying the image frame with the frame number contained in the message (whether ahead of or behind said frame number), it skips to the image frame with that frame number, as requested by the master image server. In this manner, frame synchronization errors cannot accumulate.
  • one of the image servers performs the same function of sending out the synchronization message to synchronize the image frames displayed by the image servers.
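The skip-to-frame behaviour on the receiving side can be sketched as below; the class and method names, and the idea of counting frames locally, are assumptions for illustration rather than the patent's implementation:

```python
class ImageServerPlayer:
    """Playback state on one image server; on receiving a frame
    synchronization message from the master, jump straight to the
    master's frame number instead of drifting, so errors cannot
    accumulate."""

    def __init__(self):
        self.current_frame = 0

    def display_next_frame(self):
        # Normal playback: advance one frame.
        self.current_frame += 1

    def on_sync_message(self, master_frame):
        # Whether ahead of or behind the master, skip to its frame.
        if self.current_frame != master_frame:
            self.current_frame = master_frame
```

The master would broadcast `on_sync_message` updates every several frames, as the text describes, so each correction is small and invisible during playback.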
  • the present invention is not limited to displaying images from a single source only. With multiple display units, it can also display multiple images from different sources. In this case, each image server is loaded with different image files.
  • a distributed graphical user interface (GUI) model is adopted to further enable the system to display GUI objects apart from images.
  • the display system further comprises a workstation (160) as a control terminal in connection with the image servers (100, 110), which are in communication with each other. With the addition of this control terminal (160), the full tiled image can be displayed thereon.
  • the distributed GUI system is distributedly executed on the image servers and the control terminal where each image server or the control terminal runs a part of the distributed GUI in a coordinated fashion.
  • one of the image servers (100, 110) can be implemented as the control terminal instead of the additional workstation (160).
  • the image server such as the master image server (100) can be implemented as the control terminal, which is preferably the image server that has the most computing resources.
  • GUI objects can be manipulated by input device means such as a keyboard, mouse or other pointing devices connected to the control terminal (160). States of GUI objects are distributedly stored in the memory of the control terminal (160) and also in the memory of the image servers (100, 110).
  • a GUI object may be a pointer (500), an icon or a window.
  • the control terminal sends updated cursor coordinates to one of the image servers, in this case a first image server (110a), when the user moves the mouse or pointing device.
  • the first image server (110a) will calculate the correct cursor location and display it on the display area of its display unit.
  • the first image server (110a) informs a second image server (110b) of the states of the GUI object, which includes alerting the second image server (110b), which is controlling a second display unit, and sending the mouse coordinates to it.
  • the second image server (110b) takes over the function of displaying the GUI object based upon the GUI object state reported by the first image server (110a). In this manner, the cursor will appear to move seamlessly across display areas.
  • the user can either view a snapshot of the aggregate of display areas or only some or part of the aggregate in the control terminal (160). The same principle of operation applies to other GUI objects such as icons or windows.
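One plausible way to decide which image server owns a cursor update in a tiled layout is sketched below; the tile width, function name, and routing arithmetic are assumptions for illustration (the patent does not specify them):

```python
TILE_W = 1024  # assumed width of each display unit's area, in pixels

def route_cursor(x, y):
    # Map a global cursor position to (image server index, local
    # coordinates). The control terminal would send the update to the
    # owning server, which hands the GUI object state off to its
    # neighbour when the cursor crosses a tile boundary.
    server = x // TILE_W
    return server, (x % TILE_W, y)
```

Under this scheme, a cursor at global x = 1030 belongs to the second server in a row of tiles, which then renders it near its left edge, giving the seamless hand-off the text describes.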
  • the display system can accept three-dimensional input.
  • a two-dimensional input device such as a joystick, mouse, trackball, or touch screen can be suitably adapted by software or hardware methods to accept three-dimensional input.
  • Three-dimensional input devices such as multiple cameras for detecting 3D motion, 3D motion sensors or haptics can also be used as the pointing device to move GUI objects in the third dimension, i.e. the perspective axis (530).
  • a perspective point or an angle of view must be first predetermined.
  • the perspective axis (530) will be set by fixing the angle of view.
  • GUI objects will be reduced or enlarged in size according to their location along the perspective axis (530), based on coordinate information received from the three-dimensional pointing device when it is moved in the depth dimension.
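A simple pinhole-style scaling rule is one way to realise this size change along the perspective axis; the formula and the `view_distance` parameter below are assumptions, since the text only states that objects shrink and grow with depth once the angle of view is fixed:

```python
def perspective_scale(base_size, depth, view_distance=1000.0):
    # Objects farther along the perspective axis (530) appear smaller;
    # view_distance stands in for the predetermined angle of view
    # (the fixed perspective point mentioned in the text).
    return base_size * view_distance / (view_distance + depth)
```

With this rule an object at depth 0 keeps its base size, and one pushed a full `view_distance` into the scene is drawn at half size.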

Abstract

The object of this invention is to develop a display system that is affordable for widespread use by utilising standard, off-the-shelf components, yet is at the same time capable of displaying still and moving images with high resolution and brightness. The system comprises a plurality of image servers, each image server being associated with a display unit, and a program for generating sub-images from a source image and applying appropriate corrections to the sub-images before they are sent to the display units. Additionally, a distributed GUI-based operating system can be operated across multiple displays.

Description

Display System Utilising Multiple Display Units
Field of Invention
The present invention relates to display methods, particularly to a method for displaying high resolution images utilizing a plurality of display units, with each display unit displaying a sub-image and combining the sub-images to form a complete, seamless, high resolution image.
Background of the Invention
The most widely used display is the cathode ray tube ("CRT") display such as is employed in television receivers, computer displays and the like. However, CRT screen sizes above 36 inches are rare, due to the high manufacturing cost and weight of the display. Liquid Crystal Displays (LCDs) and plasma displays have also become popular lately due to improvements in the manufacturing process leading to reduced costs. However, they are still limited to screen sizes below 50 inches; larger displays are extremely expensive. For even larger displays, a conventional optical projection display is normally used. Optical projection displays have the advantage of being able to vary the display size simply by adjusting the zoom lens (in some models) or by adjusting the distance between display unit and screen.
In addition to the desire for large image size, there is also a need for high image resolution along with large size. This need is evident, for example, with regard to high-definition television (HDTV) systems and displays used in industrial and military applications. For high-definition displays of maps and charts, or of surveillance images, displays having resolutions of 100 dots per inch over a 30 × 40 inch display are desired. Such images include 12 megapixels of displayed information. Unfortunately, displays having such capabilities do not exist with conventional technologies. It is further desirable that such large display devices be easily transported and set up, and that they be available at a reasonable cost. However, single-projector systems suffer from two major drawbacks. First, a larger reproduced image on screen corresponds to a dimmer image, since the display unit lamp power is fixed and must be spread over a larger screen area. Secondly, the reproduced image resolution is also fixed and dependent on the display unit used. A high-resolution source image, such as one taken with a modern 8-megapixel digital camera, will be reproduced at display unit resolution (typically less than 1 megapixel), thus resulting in a blurry image on screen. High-resolution display units are extremely expensive.
Rather than relying on a single display unit, discrete tiling systems that combine several single projection systems to create a display that scales linearly in both resolution and brightness are known. However, existing projection display technology suffers from two significant visible drawbacks.
Front-projection systems have great difficulty in projecting a flawless combined image in the seam areas, which are usually sought to be minimized by involved and time-consuming complex set-up and alignment procedures. Even with rear-projection systems, the mullions of their respective diffuser panels leave a visible image-less seam. In any of the foregoing arrangements, differences in resolution, geometry, brightness, intensity, and colour between the portions of the combined image, or sub-images, produced by the various display units making up a larger display can produce noticeable variations in the displayed image. Such effects are well known and easily seen, for example, in the jumbo television displays often used at sporting arenas, concerts and outdoor events.
Efforts to overcome these problems are known. United States Patent 6,456,339 describes a method to correct images from multiple display units by utilising image blending and positional correction of each display unit. However, the described method is computationally very intensive and would require a powerful computer to implement, since it relies on almost real-time correction of reproduced images via a feedback loop. The problem is especially acute for moving images, since the processor would be loaded with multiple tasks at the same time. United States Patent 6,611,241 describes another method to seamlessly display high resolution images from multiple sub-images. The invention utilises a client-server model. While broadly similar to United States Patent 6,456,339 above, it utilises commercially available special-purpose multimedia processors in the clients to achieve its computational requirements. It does not, however, specifically address the issue of displaying moving images. It assumes that all clients would be able to simultaneously display the same frame of a moving image. This is not necessarily the case, especially if a client is heavily loaded with processing tasks.
Among the hurdles to overcome in practically realizing a large display area are limitations in screen size and resolution. Most modern computer operating systems, such as Windows XP, are GUI based. This makes it easier for users to operate the system since they do not have to remember arcane commands. However, a limitation of GUI-based systems is the lack of 'screen real estate'. This has led to the ever-increasing size of computer monitors, from 14 inches a decade ago to 17 and 19-inch models today, to allow more objects to be displayed. Displays above 20 inches are starting to make their way to the market. Furthermore, resolution has not increased much with monitor size. Therefore life-size and realistic objects cannot be properly displayed, as graininess will be visible to viewers when objects are enlarged.
Summary of the Invention
It is the object of this invention to develop a display system that is affordable for widespread use by utilising standard, off the shelf components, yet at the same time be capable of displaying still and moving images with high resolution and brightness.
The system can be a centralized coordinated system or a decentralized coordinated system. An example of a centralized coordinated system comprises a control terminal, a master image server and a plurality of image servers, each image server being associated with a display unit. Thus a 2 x 2 display arrangement would require a master image server, 3 image servers and 4 display units, as the master image server itself can also be the control terminal. A video playback program running on each image server and the master image server performs the necessary tasks required for generating sub-images and modifying appropriate pixel characteristics of the sub-images before they are displayed through the display units. The clients can be any standard personal computers (PCs) or any media devices in communication with each other that are capable of playing back multimedia files, while the display units can be any front or rear projectors.
In a decentralized coordinated system, similar software and hardware are used except that none of the image servers is designated as a master image server. Coordination is realized by implementing an inter-server communications protocol on all the image servers.
A specific feature of the present invention is the capability of the system to synchronize the display of moving video images, e.g. from an HDTV source, so that each image server displays the same video frame at the same time.
Another feature of the present invention is the ability to operate a Graphical User Interface (GUI) based operating system across multiple screens. This gives the user an extremely large screen to work on. Consequently life-size and realistic objects can be displayed.
Brief Description of the Drawings
The invention will now be described by an example of its embodiment together with references to the accompanying drawings, where:
Figure 1 shows an exemplary embodiment of the system.
Figure 2a shows a first type of test pattern displayed side by side. Figure 2b shows a plot of output colour value vs. input colour value for two different display units displaying the test patterns illustrated in Figure 2a.
Figure 2c shows a normalised plot for output colours for two display units according to Figure 2b.
Figure 3a shows a second type of test pattern displayed side by side.
Figure 3b shows a plot of output colour value vs. input colour value for two different display units displaying the test patterns illustrated in Figure 3a.
Figure 3c shows two normalised plots for output colours for two display units according to Figure 3b.
Figure 4a shows the method for obtaining colour primaries and mapping dominant input colour components.
Figure 4b shows a plot of the output of two display units when they are switched off.
Figure 4c shows a plot of the output of two display units when the left display unit is switched on but no input has been applied yet.
Figure 4d shows a plot of the output of two display units when the right display unit is switched on but no input has been applied yet.
Figure 4e shows a plot of the output of two display units when both are switched on but no input has been applied yet.
Figure 4f shows a plot of the output of two display units when an x-level of red colour input has been applied. Figure 4g shows a Commission Internationale de l'Éclairage (CIE) chromaticity diagram where the colour of a pixel is represented by P and the colour primaries are represented by Ri, Gi, Bi for two different display units.
Figure 5 shows overlapping areas in an exemplary display with four display units.
Figure 6 shows a typical display of a distributed GUI system.
Detailed Description of the Preferred Embodiment
System Overview
Figure 1 shows an exemplary embodiment of the system. In an embodiment that uses centralized coordination, a master image server (100) is connected to multiple image servers (110). Each image server (110) is connected to a display unit (120), such as a projector that can be used for both front projection and back projection. Both the master image server (100) and the image servers (110) can be standard personal computers (PCs) or other devices capable of playing back multimedia files and communicating with similar devices via a means of communication such as a network. The system is capable of reproducing a tiled image on a large screen (130), as shown in Figure 1, by means of front or back projection. A sensor, in this case a charge-coupled device (CCD) digital camera (140), is used for capturing images of test patterns for colour correction of each sub-image. The master image server (100) and image servers (110) are in communication via a means of data communication, such as an Ethernet local area network (LAN), and are shown physically connected via a standard 10/100 Mbps hub (150).
In an embodiment that uses decentralized coordination, similar software and hardware are used except that none of the image servers is designated as a master image server. Coordination is realized by implementing an inter-server communications protocol on all the image servers. Image servers (110) are apparatus for playing back still images or moving images. The still image or moving image files are stored on mass storage devices on the image servers (110). An image server (110) can be a computer or similar hardware capable of playing back still images or moving images. They must also be capable of communicating with other similar devices via a means of data communication. Image servers are also capable of reading from different mass storage devices such as hard disk drives, Video CD (VCD), Digital Video Disc (DVD) and other currently available magnetically or optically readable storage media. All the image servers can have identical hardware configurations. However, for the working of one embodiment of the invention, one of the image servers (110) can be dedicated as a master image server (100), as will be further disclosed later.
In an embodiment of the invention, one of the image servers (110) may function as a control terminal; it has two video display cards. One video display card is used for displaying a sub-image onto the projection screen while the other video display card is connected to a separate display unit for displaying the whole tiled image. In another embodiment of the invention, which is easier to execute practically, the control terminal (160) is a separate workstation connected to the other image servers via a means of communications, such as a hub (150) in a LAN network. The workstation (160) can function as a control terminal to enable a user to view the entire tiled image on the video monitor of the workstation (160). Either the image server with two video display cards or the workstation (160) can also be used by a user as a control terminal to control the entire operation of displaying the whole tiled image. The control terminal (160) can also be further extended to perform other functions such as displaying graphical user interface (GUI) objects, as will be further disclosed later. All hardware used is off the shelf and easily obtainable.
Correcting Display Unit Output Colours
Before first use, the output characteristics of each display unit (120) used have to be measured. In the preferred embodiment, a CCD digital camera (140) is used to capture images of a test pattern produced by the display units (120), one of which is designated as the reference display unit. The test pattern (210), as shown in Fig 2a, is reproduced side by side on a screen (130) by each display unit and captured by the camera (140). The test pattern is composed of a colour plot with each red R, green G and blue B input colour equal to one another (R = G = B) and increasing at the same increment (increasing in grey scale) continually in a fixed direction on the test pattern. Preferably, a white band (R = G = B = 255 in the case of a 24-bit colour system) is disposed next to where all the colour components are equal to zero, so as to indicate the start of black colour wherein all input colour values R = G = B = 0. On the other side of the plot is the edge of the colour plot where R, G, B are at their maximum values. In the test pattern (210) exemplified in Figure 2a according to the present description, the input colour increases from left to right only. Thus for this particular test pattern (210), relative positions along the bottom edge of the test pattern represent the input colour values. Two or more images from each display unit (120) can be captured by the camera (140) at a time. The number of test pattern images that can be simultaneously captured depends on the resolution of the camera. This is repeated until test patterns from all display units are captured.
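The first test pattern (a grey-scale gradient preceded by a white marker band) could be generated as in the sketch below. The function name, row-based pixel representation and default sizes are hypothetical, not taken from the patent:

```python
def make_test_pattern(width=256, white_band=8, max_val=255):
    """Build one row of the first test pattern: a grey-scale gradient
    (R = G = B) increasing left to right, preceded by a white band that
    marks the start of black (R = G = B = 0).
    Returns a list of (r, g, b) tuples; replicate the row to form a 2-D image."""
    row = [(max_val, max_val, max_val)] * white_band   # white marker band
    for x in range(width):
        v = round(x * max_val / (width - 1))           # 0 .. max_val, left to right
        row.append((v, v, v))                          # grey: all components equal
    return row

row = make_test_pattern()
print(row[8], row[-1])   # first gradient pixel is black, last is full white
```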
The captured images are downloaded into a computer and analysed. With reference to Figures 2a and 2b, the colour of the output image on the screen that is captured by the CCD camera (220) is compared against the input colour (222) provided by the test pattern (210) file to the graphics card. The intensities (220) of the Red (224a, 226a), Green (224b, 226b) and Blue (224c, 226c) components reproduced on screen are plotted against their input colour values (222). An example of a typical captured plot (212) for two display units (224, 226) is shown in Figure 2b. It can be seen that, for the same input colour values, such as red colour (224a, 226a), the reproduced output colours on screen from different display units are different. This would be true even if both display units were the same model and belonged to the same production batch.
To derive the correction values, the display unit with the smallest dynamic range is designated the reference display unit; in this case it is the left display unit (224). Taking the red colour (224a) as an example, a straight line (224d) is drawn from the point marked X to X'. Similarly, a straight line (226d) is also drawn for the red colour graph (226a) of the right display unit (226) from the point marked Y to Y'. The gradients of the two lines (224d, 226d) are not necessarily the same. This can be more clearly seen in Fig. 2c on a typical normalised plot (300a) which has a common pixel input axis (222a). As can be seen, for a given output colour value (such as 100 in this example), the two display units require different input colours, in this case red input colour R = 122 for the left display unit (224) and red input colour R = 130 for the right display unit (226) respectively. In order to reproduce the same output on the right display unit, the red colour input (e.g. 122) of an original image file played back by the right display unit must be increased or decreased by the correct amount to the intended value (i.e. 130) so that the right display unit will reproduce the same output as the left display unit. This ensures all display units reproduce the same output colour, as each reproduced image to be played later was intended to be part of the entire tiled image.
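The mapping between the two normalised straight-line responses can be sketched as follows, assuming each response has been reduced to a gradient and intercept. The line values below are chosen only to reproduce the 122 → 130 example in the text:

```python
def corrected_input(value, ref_line, target_line):
    """Map an input colour value meant for the reference display unit to
    the input the target unit needs to reproduce the same output colour.
    Each line is (gradient, intercept) of the normalised response
    output = gradient * input + intercept."""
    g_ref, c_ref = ref_line
    g_tgt, c_tgt = target_line
    out = g_ref * value + c_ref      # output the reference unit produces
    return (out - c_tgt) / g_tgt     # input the target unit needs for that output

# Illustrative lines where output 100 needs input 122 (left) / 130 (right),
# matching the example in the text.
left = (100 / 122, 0.0)
right = (100 / 130, 0.0)
print(round(corrected_input(122, left, right)))   # 130
```

In practice one such mapping would be derived per colour component per display unit, with the reference unit's own mapping being the identity.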
A second method for correcting the output colour of each display unit according to the designated reference display unit is to use a second test pattern (310a), as shown in Fig. 3a, and perform most of the steps outlined earlier. Instead of having all three R, G, B colour components, only a single colour component is used at a time and its input colour value is increased stepwise in the second test pattern. Preferably each band of input colour value is equally wide. The input colour value of one of the colour components, e.g. red, obtained from the second type of test pattern (310a) is plotted as indicated by (324a1) and (326a1) in a plot (312a) of output colour value (320) versus input colour (322), as shown in Figure 3b. The plots of input colour values for the left display unit (324) and the right display unit (326) increase stepwise instead of continuously.
The width of uniform input colour value may be set as wide as needed to obtain the most reliable average value for each output colour value; these widths appear as steps in Figure 3b. The difference between the highest input colour value and the lowest input colour value in a single test pattern may be smaller than the full input colour value range. Either of these variations may require more than one test pattern of the second type to cover the full input colour value range. If more than one test pattern of the second type is used for the abovementioned reasons, then the plots from each second-type test pattern are joined together. After that, best-fit straight lines (324d1, 326d1) are drawn through the midpoints of the steps in Figure 3b and reproduced on the left hand side of Figure 3c. Alternatively, the midpoints of the steps in Figure 3b are joined to adjacent midpoints with straight lines (324d2, 326d2) to obtain the normalized plot as shown on the right hand side of Figure 3c.
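Fitting the best-fit straight line (324d1, 326d1) through the step midpoints can be sketched with ordinary least squares; the function name and midpoint values below are illustrative:

```python
def fit_line(points):
    """Least-squares straight line through the midpoints of the steps
    measured from the second (stepwise) test pattern.
    points: list of (input_value, output_value) midpoints.
    Returns (gradient, intercept) of output = gradient * input + intercept."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    gradient = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - gradient * sx) / n
    return gradient, intercept

# Midpoints lying exactly on output = 0.8 * input + 2 recover that line.
g, c = fit_line([(0, 2), (50, 42), (100, 82), (150, 122)])
print(round(g, 3), round(c, 3))   # 0.8 2.0
```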
After obtaining the plots (324d1, 326d1) or (324d2, 326d2), the corrected values are obtained from the new plots in a similar manner of mapping as outlined for the first embodiment of the colour correction method disclosed earlier. E.g. for a given output colour value (such as 100 in this example), the two display units require different input colours, in this case red input colour R = 122 for the left display unit (324) and red input colour R = 130 for the right display unit (326) respectively. In order to reproduce the same output on the right display unit (326), the red colour input (e.g. 122) of an original image file played back by the right display unit must be increased or decreased by the correct amount to the intended value (i.e. 130) so that the right display unit will reproduce the same output as the left display unit.
The differences between the second and the first colour correction methods are the test pattern used and the way of obtaining the straight lines. It is noted that the straight lines (324d1, 326d1), or the plots of points joined with lines (324d2, 326d2), define the mapping relationships for correcting display unit colours.
Still another method, as exemplified in Figure 4a, for obtaining a set of colour primaries for each display unit (Ri, Gi, Bi where i = 1, 2, 3... denotes each display unit) can be used for correcting a display unit's output colour. In this embodiment, the test pattern (not shown) is a uniform display of a single input colour such as red (r = x, g = 0, b = 0) instead of a changing gradient of input colour value as with earlier embodiments of the colour correction method. In the uniform display of a single input colour, that input colour is known as the dominant component or colour while the other colour components are known as auxiliary components or colours. Furthermore, the input colour of this test pattern can be altered on the spot while the calibration method exemplified in Fig. 4a is being carried out. The purpose of the steps outlined in Figure 4a is firstly, i) to determine the dominant input colour component of a display unit that produces a dominant output colour component matching that of a reference display unit for a given dominant input colour component of the reference display unit, and secondly, ii) to record the corresponding output colour component values (R, G, B) for each input colour test pattern for each display unit. Colour primaries Ri, Gi and Bi for each display unit are determined from ii), as will be further explained later.
Figures 4b to 4f are plots of output colour values for a test pattern produced from each display unit. Each R, G and B label refers to the red, green and blue component respectively. A CCD digital camera (140) as shown in Figure 1 is again used to capture the displayed image of the test patterns placed side by side by different display units. In the case of a projector (120) linked up to an image server (110) as shown in Figure 1, the test pattern file is altered on board the image server (110) and the input colour value is inputted to the graphics device control card.
This technique uses a commercially available camera, preferably a CCD camera, to capture the displayed image of the test pattern. According to this technique, the test pattern from each display unit is displayed side by side and captured by the camera in a single frame. The characteristics and the output colour levels of the different colour components of the displayed test patterns can then be compared with each other. As a result, a great deal of measurements can be derived which would otherwise be impossible or would require the use of expensive detection or measurement devices.
A CCD camera has detector voltage biasing and signal processing circuits that automatically maximize the contrast between the lowest intensity and the highest intensity detected. For this reason, some images of reproduced test patterns wherein the input colour values are relatively high are captured by the camera (140) with background lighting instead of in total darkness, while some images wherein the input colour values are relatively low are captured in total darkness. The background lighting helps to prevent the camera from being driven to saturation at high output colour values. Saturation of the camera (140) will happen when the camera is exposed to relatively high intensity light against a totally dark background (i.e. a large difference of intensities). One knows that detector saturation has set in when a recorded output colour value in the images does not increase anymore (e.g. maximum value 255 for an 8-bit-per-component CCD camera) even though the input colour values are still being increased.
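The saturation check described above (recorded output stops increasing while the input keeps rising) might be automated roughly as below; the function name and the 8-bit ceiling of 255 are illustrative assumptions:

```python
def find_saturation_onset(inputs, recorded, ceiling=255):
    """Return the last input colour value before the camera saturates,
    i.e. before the recorded output pins at its ceiling while the input
    keeps rising. Returns None if no saturation is observed."""
    for i in range(1, len(recorded)):
        if recorded[i] <= recorded[i - 1] and recorded[i] >= ceiling:
            return inputs[i - 1]
    return None

# Recorded values pin at 255 from input 200 onward -> onset reported at 200,
# the point at which background lighting should be switched on.
print(find_saturation_onset([100, 150, 200, 250], [120, 180, 255, 255]))  # 200
```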
Before reproducing the test pattern on each display unit, a preliminary reading of any light produced by the display unit before the test pattern is reproduced (that is, with no input fed to the display unit: r = 0, g = 0, b = 0) is desirable. The CCD camera (140) captures any light emitted from the display units that is reflected off the screen (130) (cross-reference Figs. 1 and 5) when i) the display units are turned off (600); ii) either one of them is turned on (602, 604) but with input colour components set to zero (i.e. r = 0, g = 0, b = 0); and iii) both display units are turned on (606), as shown in Figure 4a. Typical results obtained from steps (600), (602), (604) and (606) are shown in Figures 4b, 4c, 4d and 4e respectively. These figures show the output colour produced by each display unit, derived from images captured by the CCD camera (140).
Then, test patterns having the same input colour component values (e.g. r = x, g = 0, b = 0) are reproduced by each display unit, as shown in step (612) of Figure 4a. One of the input colour component values (following the given example, i.e. r = x) of the test pattern of the display unit under calibration, such as display unit 2, is adjusted as shown in step (614) so that its dominant output colour component becomes the same as that of the reference display unit. The respective auxiliary output colour components of the two display units can however be different. Figure 4f shows two typical plots of output colour for the same sets of input colour values. In each case, the test pattern reproduced by one display unit initially has the same r, g and b input values as the test pattern reproduced by the other display unit. These plots show the test patterns before the input colour component is adjusted in step (614).
When there is a difference between the two dominant output colour components, such as red, between the two display units, the input colour value (e.g. r2 = x) of the display unit under calibration (e.g. display unit 2) is altered (614) until its dominant output colour component (e.g. R2) is the same as that of the reference display unit (e.g. R1). The new input colour value r2 for display unit 2 which matches the unchanged input colour value r1 for the reference display unit is noted.
During this exercise, when the dominant output colour component approaches saturation, the background light is turned on and the change in output colour is noted, as shown in step (616). As indicated by step (618), steps (612), (614) and (616) are carried out for each dominant input colour value. For each dominant input colour component value (such as red) that is recorded, the auxiliary colour component output values (following the example given, i.e. green and blue) are also recorded if they are present (which will most likely be the case). Four sets of values are noted: i) the dominant input colour component values r2 of display unit 2; ii) the dominant input colour component values r1 of the reference display unit; iii) the output colour component values for display unit 2, i.e. R2, G2, B2; and iv) the output colour component values for the reference display unit, i.e. R1, G1, B1. The procedure from step (612) to (618) is repeated for the other colour components (continuing the example, i.e. green and blue) for the same display unit. For example, when the blue primary is being calibrated, blue will be the dominant input colour component and set to non-zero while the auxiliary colour components, e.g. red and green, will be set to zero (b = x, r = 0, g = 0). Other display units that will be used are also subjected to the same procedure from step (600) to (618) for all the input colour components.
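The adjust-until-matched loop of steps (612) to (618) could be automated roughly as in the sketch below, assuming a monotonically increasing input-output response. Here `measure` stands in for capturing the displayed test pattern with the camera, and the bisection search strategy is an assumption, not the patent's stated method:

```python
def match_dominant_component(measure, target_output, lo=0, hi=255):
    """Find the dominant input colour value (e.g. r2) whose measured
    dominant output component matches the reference unit's output,
    by bisection over the input range. 'measure' maps an input value to
    the camera-recorded output and is assumed monotonic."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if measure(mid) < target_output:
            lo = mid
        else:
            hi = mid
    # return whichever endpoint lands closer to the target output
    if abs(measure(lo) - target_output) <= abs(measure(hi) - target_output):
        return lo
    return hi

# Hypothetical unit whose output is 0.9 * input; reference output is 117.
r2 = match_dominant_component(lambda r: 0.9 * r, 117.0)
print(r2)   # 130
```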
It is also noted that for almost all display units, there are instances where the auxiliary output colour components change as well when the input colour component under correction is being altered. In this case, when the red input colour component is being altered, green and blue will be the auxiliary colour components. This is a device-dependent problem which results in each display unit having a different colour gamut. Besides differences in colour gamut, the resultant output colour will also be different for the same input colour component values. Needless to say, it is not possible to obtain an output colour in a pure colour component. With all these facts in mind, colour is corrected according to a chromaticity correction approach that is different from the direct mapping of each colour component as outlined earlier. According to this approach, the ratio of the red input colour component to the sum of all input colour components, pr, for a new given input colour (simply referred to as the "red input colour ratio" hereinafter) and the ratio of the green input colour component to the sum of all input colour components, pg (simply referred to as the "green input colour ratio" hereinafter), for the same input colour are calculated. The red input colour ratio, pr, and the green input colour ratio, pg, are calculated according to equations (1a, 1b):

p(r,i) = ri / (ri + gi + bi) ... (1a)
p(g,i) = gi / (ri + gi + bi) ... (1b)

wherein p(r,i) and p(g,i) correspond to the weighting factors for the vectors (R − B) and (G − B), where R, G, B are the colour primaries on a Commission Internationale de l'Éclairage (CIE) chromaticity diagram as illustrated in Figure 4g, and the subscript i = 1, 2, 3... denotes each display unit.
Then, red and green output colour ratios are calculated in order to derive the colour primaries of both a reference display unit (R(x,1), R(y,1), G(x,1), G(y,1), B(x,1), B(y,1)) and a display unit under correction (e.g. display unit 2: R(x,2), R(y,2), G(x,2), G(y,2), B(x,2), B(y,2)) according to equations (2a, 2b):

C(x,i) = Ri / (Ri + Gi + Bi) ... (2a)
C(y,i) = Gi / (Ri + Gi + Bi) ... (2b)

wherein C is one of the red R, green G or blue B colour primaries and i = 1, 2, 3... denotes different display units. It must be pointed out that the colour primaries differ from one display unit to another.
Then, pr, pg and the colour primaries are used to represent the input colour inputted to the reference display unit as a point with coordinates (Px, Py) from equations (3a) and (3b):

Px = B(x,1) + p(r,1)(R(x,1) − B(x,1)) + p(g,1)(G(x,1) − B(x,1)) ... (3a)
Py = B(y,1) + p(r,1)(R(y,1) − B(y,1)) + p(g,1)(G(y,1) − B(y,1)) ... (3b)

wherein Px and Py correspond to the x, y coordinates of the output colour for the reference display unit on the same Commission Internationale de l'Éclairage (CIE) chromaticity diagram.
By representing the reference display unit input colour (r1, g1, b1) as a point with coordinates (Px, Py) on a CIE chromaticity diagram, other display units (e.g. display units 2, 3... and so on) can refer to it to determine their respective input colours (ri, gi, bi) for the purpose of colour correction. Based upon Px and Py, the red and green input colour ratios p(r,i) and p(g,i) for a display unit under correction (e.g. display units 2, 3... and so on) can be obtained from equations (3a) and (3b) again by substituting the reference display unit colour primaries (R(x,1), R(y,1), G(x,1), G(y,1), B(x,1), B(y,1)) with those (R(x,i), R(y,i), G(x,i), G(y,i), B(x,i), B(y,i)) of each display unit under correction. That is:

Px = B(x,i) + p(r,i)(R(x,i) − B(x,i)) + p(g,i)(G(x,i) − B(x,i))
Py = B(y,i) + p(r,i)(R(y,i) − B(y,i)) + p(g,i)(G(y,i) − B(y,i))

and solve for p(r,i) and p(g,i).
Finally, the input colour values for the display unit under correction (e.g. display unit 2) are obtained from its own red and green input colour ratios, p(r,2) and p(g,2), and the value of one of its input colour components, such as r2, by using equations (1a, 1b) again:

p(r,2) = r2 / (r2 + g2 + b2)
p(g,2) = g2 / (r2 + g2 + b2)

and solving for the other two input colour component values, g2 and b2.
It is necessary that one of the input colour components (r, g or b) on a display unit that matches that on the reference display unit be known beforehand. This was already obtained during the calibration process outlined in Figure 4a, in step (614).
Since input values of each colour component for each display unit have been matched with the reference display unit, any set of the colour component values can be used.
Preferably, the input colour component that has the largest value is chosen for convenience of computation. It is conceivable that all the steps in the calibration can be automated so that a comprehensive set of matched input colour values and the related colour primaries can be obtained. Furthermore, these vast sets of values can be stored in different types of programmable read-only memories (PROMs) on board a specially designed graphics device control card for runtime colour correction. This functionality can also be performed on any other hardware that controls a display unit.
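Runtime correction from such stored values might look like the sketch below, where each PROM is modelled as a 256-entry per-channel lookup table; the function names and the +8 red shift are arbitrary illustrative assumptions:

```python
def apply_lut(pixels, lut_r, lut_g, lut_b):
    """Runtime colour correction: replace each pixel's components via
    per-channel lookup tables (256 entries each), mimicking what a
    PROM-backed graphics control card would do for every frame."""
    return [(lut_r[r], lut_g[g], lut_b[b]) for r, g, b in pixels]

identity = list(range(256))
shift_r = [min(255, v + 8) for v in identity]   # hypothetical red correction

out = apply_lut([(122, 40, 200)], shift_r, identity, identity)
print(out)   # [(130, 40, 200)]
```

On real hardware the table lookup would happen per pixel in the display pipeline rather than in software, but the mapping is the same.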
Image Blending
Display units in general exhibit different display characteristics, such as slightly different colour outputs or light intensities, which vary across different makes of display unit as well as within the same make or even the same model. Before combining images from these display units, it is necessary to correct the colour of the images so that they appear to be the same. This is done by applying colour correction as described in the previous section to individual pixels. The correction value makes the reproduced intensity of each colour pixel (Red, Green or Blue as the case may be) uniform across the whole display. Thus a blue screen would look uniformly blue across the entire 4 sub-images, except in areas where the sub-images overlap, which would obviously be brighter.
Therefore, a sub-image needs to be blended with adjacent sub-images so that the overlapping regions appear seamless. Image blending requires the ability to correct at least the intensity of the overlapping pixels of each sub-image. Referring to Fig. 5, there are at least 2 regions in the display screen (130) where overlaps occur in a 2 x 2 display unit configuration, i.e. along the overlap edges (410) of adjacent sub-images, and at the overlap corner (420) of the sub-images. Thus the image correction software running on the image servers (110) must adjust the intensity of the overlapped pixels such that these artefacts do not appear during image playback. This requirement is addressed by image processing that adjusts or pre-distorts each sub-image, on a pixel-by-pixel basis, and controls the intensity of the pixels that make up the sub-image. A first image blending method is used, whereby the intensity of the displayed pixels is linearly reduced across the overlap region. For example, if the overlapped region (410) is 20 pixels wide between display 1 and display 2, the pixel intensity from display 1 would reduce from 100% at (430) to 0% at (440), in steps of 5%. For wider overlapping regions, the pixel intensities would be reduced in smaller steps.
The image blending method can subsequently be modified to reduce intensity in a nonlinear fashion across the overlap region. In that case the sizes of the steps are non-uniform and may vary from one to another. The size of the step for any pixel in the overlapped region can be arbitrary or can depend on the intensity variation profile that is adopted. Intensities of pixels at the corner of the overlapping images (420) are further attenuated in a similar manner in comparison to the pixels at the edges of the overlapping images (410) so that the overall image appears seamless. In essence, the colour intensity of any overlapping pixel is reduced so that the desired intensity is reproduced where the pixels overlap.
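The two weight profiles described above can be sketched as simple functions of position across an overlap `width` pixels wide; the raised-cosine form below is one possible nonlinear profile, chosen here only as an illustration:

```python
import math

def linear_weight(i, width):
    """Weight of display 1's pixel i steps into the overlap: 100% at the
    start of the overlap, 0% at the far edge, in equal steps."""
    return 1.0 - i / width

def cosine_weight(i, width):
    """An illustrative nonlinear (raised-cosine) profile: same endpoints,
    but smaller steps near the edges of the overlap and larger steps in
    the middle."""
    return 0.5 * (1.0 + math.cos(math.pi * i / width))

# For a 20-pixel overlap the linear ramp drops in steps of 5%, as in the
# worked example above: 1.00, 0.95, 0.90, ..., 0.00.
ramp = [linear_weight(i, 20) for i in range(21)]
```

For the corner region (420), where four sub-images overlap, the weight from the horizontal ramp and the vertical ramp would be multiplied so that the summed intensity still matches the non-overlapping areas.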
Present performance limits of personal computers do not allow the splitting of the source image file and the modification of pixel intensities in real time for moving images; however, these tasks can be done in real time for still images. Thus video files must be pre-processed with correct pixel intensity values and copied to the image servers (110) beforehand. It is conceivable that as computing performance continues to improve, such tasks may be done in real time in the future.
Synchronization
In the display of moving images, it is extremely important that sub-images sent by the image servers are synchronized at the frame level so that they all show the image frame corresponding to the same frame number at the same time. In a two-hour show, for example, a few frame slips in the beginning might not be too noticeable. However, as these frame slips accumulate towards the end of the show, each display unit could be displaying image frames depicting very different images. To prevent this from happening, a video playback program that runs on the master image server in a centralized coordination setup sends out a frame synchronization message from time to time, preferably after every several frames have elapsed, to synchronize the image frame displayed by each image server so that each image frame displayed has the same frame number. A similar video playback program runs on each image server to receive the frame synchronization message. If any image server is not displaying the image frame according to the particular frame number contained in the message (whether ahead of or behind said frame number), it skips to the image frame according to the frame number contained in the synchronization message as requested by the master image server. In this manner, frame synchronization errors cannot accumulate.
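The skip-to-frame rule can be sketched as follows; the class and field names are assumptions for illustration, not identifiers from this disclosure:

```python
SYNC_INTERVAL = 30  # assumed: master sends a sync message every 30 frames

class PlaybackServer:
    """Minimal model of one image server's playback position."""

    def __init__(self):
        self.current_frame = 0

    def advance(self):
        # Normal playback: move to the next frame.
        self.current_frame += 1

    def on_sync_message(self, master_frame):
        # If this server is ahead of or behind the master's frame number,
        # jump straight to that frame, so slips cannot accumulate.
        if self.current_frame != master_frame:
            self.current_frame = master_frame
```

Because every correction is absolute (jump to the master's frame number) rather than relative, a server that misses one synchronization message is still fully corrected by the next one.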
In a decentralized coordination setup, one of the image servers performs the same function of sending out the synchronization message to synchronize the image frames displayed by the image servers.
Multiple Displays from Multiple Sources
The present invention is not limited to displaying images from a single source. With multiple display units, it can also display multiple images from different sources. In this case, each image server is loaded with different image files.
Distributed GUI
In this display system, a distributed graphical user interface (GUI) model is adopted to further enable the system to display GUI objects apart from images. The display system further comprises a workstation (160) as a control terminal in connection with the image servers (100, 110), which are in communication with each other. With the addition of this control terminal (160), the full tiled image can be displayed thereon. The distributed GUI system is executed distributedly on the image servers and the control terminal, where each image server or the control terminal runs a part of the distributed GUI in a coordinated fashion. As explained earlier, depending on the computing resources available to any particular image server, one of the image servers (100, 110) can be implemented as the control terminal instead of the additional workstation (160), preferably the image server that has the most computing resources, such as the master image server (100).
A user can interact with the GUI objects to be displayed by input device means such as a keyboard, mouse or other pointing devices connected to the control terminal (160). States of GUI objects are stored distributedly in the memory of the control terminal (160) and also in the memory of the image servers (100, 110).
With reference to Fig. 6, a GUI object may be a pointer (500), an icon or a window. In the case of a mouse cursor, the control terminal sends updated cursor coordinates to one of the image servers, in this case a first image server (110a), when the user moves the mouse or pointing device. The first image server (110a) will calculate the correct cursor location and display it on the display area of its display unit. When the GUI object is moving from the display area of the first display unit to the second display unit, the first image server (110a) informs a second image server (110b) of the states of the GUI object, which includes alerting the second image server (110b), which is controlling a second display unit, and sending the mouse coordinates to it. Meanwhile, the second image server (110b) takes over the function of displaying the GUI object based upon the state of the GUI object communicated by the first image server (110a). In this manner, the cursor will appear to move seamlessly across display areas. The user can view either a snapshot of the aggregate of display areas or only some or part of the aggregate in the control terminal (160). The same principle of operation applies to other GUI objects such as icons or windows.
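The handoff decision reduces to determining which image server's display area contains the cursor's global coordinate. A sketch for a 2 x 2 tiled display; the per-display resolution is an assumed value, and a real implementation would also account for the overlap regions:

```python
TILE_W, TILE_H = 1024, 768  # assumed resolution of each display unit

def owning_server(x, y):
    """Return the (column, row) of the image server whose display area
    contains global coordinate (x, y), plus the coordinate local to
    that display area."""
    col, row = x // TILE_W, y // TILE_H
    return (col, row), (x % TILE_W, y % TILE_H)

# As the cursor crosses from display (0, 0) into display (1, 0), the
# first server would forward the cursor state to the second, which then
# takes over drawing it.
```

For example, a cursor at global x = 1100 falls in the second column, 76 pixels into that display's area.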
In addition, with an appropriate pointing device connected to the control terminal (160), the display system can accept three-dimensional input. Besides using a keyboard, a two-dimensional input device such as a joystick, mouse, track ball, or touch screen can be suitably adapted by software or hardware methods to accept three-dimensional input. Three-dimensional input devices such as multiple cameras for detecting 3D motion, 3D motion sensors or haptics can also be used as the pointing device to move GUI objects in the third dimension, i.e. along the perspective axis (530). Before displaying the GUI objects and other display objects that need to take the perspective axis into account, a perspective point or an angle of view must first be predetermined. The perspective axis (530) is set by fixing the angle of view. GUI objects will be reduced and enlarged in size according to their location along the perspective axis (530), based on coordinate information received from the three-dimensional pointing device when it is moved in the depth dimension.
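One way to realise the size change along the perspective axis is a simple pinhole-style scaling; the focal length and depth convention below are assumptions for illustration only, since the disclosure leaves the scaling law to the fixed angle of view:

```python
FOCAL = 500.0  # assumed distance from the viewpoint to the screen plane

def perspective_scale(base_size, depth):
    """Apparent size of a GUI object `depth` units behind the screen
    plane; depth 0 leaves the object at its base size, and larger
    depths shrink it."""
    return base_size * FOCAL / (FOCAL + depth)
```

Under these assumed numbers, an object 500 units deep would be drawn at half its base size; moving it back toward depth 0 enlarges it again.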
While the present invention has been described in terms of the foregoing exemplary embodiments, variations within the scope and spirit of the present invention as defined by the following claims will be apparent to those skilled in the art. For example, display systems of greater or fewer numbers of display units than shown in the exemplary embodiments herein may be constructed in accordance with the principles of the present invention.

Claims
1. A system for displaying high resolution images comprising: a plurality of display units; and at least one image server, characterized in that one said display unit is connected to one said image server, each said display unit displays a sub-image on a screen, said sub-images are combined to create at least one high resolution image, said image servers are connected to each other and working in synchronization with one another to display said sub-images.
2. The system as claimed in claim 1, wherein the system is working in centralized coordination and one of said image servers is a master image server.
3. The system as claimed in claim 1, wherein the system is working in decentralized coordination and said image servers coordinate with each other using an inter- server communications protocol.
4. The system as claimed in claim 1, wherein said image servers are connected together by a means for connecting devices located in a local geographical area.
5. The system as claimed in claim 4, wherein said means for connecting devices located in local geographical area is a local area network-type (LAN-type) means of connection.
6. The system as claimed in claim 1, wherein said image servers are computer central processing units (CPUs).
7. The system as claimed in claim 1, wherein said image servers are hardware for playing back images, each said hardware are adapted for communication with a plurality of similarly adapted hardware.
8. The system as claimed in claim 2, wherein said master image server runs a video playback program that sends out from time to time a frame synchronization message to said image servers.
9. The system as claimed in claim 3, wherein one of said image servers runs a video playback program that sends out from time to time a frame synchronization message to other said image servers.
10. The system as claimed in claim 8 or 9, wherein said synchronization message carries a frame number corresponding to the image frame to be displayed by all said image servers.
11. The system as claimed in claim 10, wherein each said image server, except the image server that sends out said synchronization message, runs a video playback program that receives said synchronization message.
12. The system as claimed in claim 1, wherein input colour of each said sub-images is corrected so that all display units reproduce the same output colour for an intended colour.
13. The system as claimed in claim 1, wherein intensity of overlapping pixels of said sub-images is corrected so that overlapping region of said high resolution image appears to be seamless with respect to non-overlapping regions of said high resolution image.
14. The system as claimed in claim 1, wherein any said overlapping pixels have reduced intensity so that said overlapping regions will have the same intensity as said non-overlapping regions.
15. The system as claimed in claim 1, further comprising a control terminal in communication with the system.
16. The system as claimed in claim 15, wherein said control terminal is a workstation.
17. The system as claimed in claim 15, wherein said control terminal is a master image server, said master image server further includes an additional video display card in connection with an additional display unit for displaying the whole high resolution image.
18. The system as claimed in claim 15, wherein said control terminal is one of said image server, said image server functioning as control terminal further includes an additional video display card in connection with an additional display unit for displaying the whole high resolution image.
19. The system as claimed in claim 15, wherein said control terminal and said image servers run a distributed graphical user interface operating system on board for displaying graphical user interface objects.
20. The system as claimed in claim 19 wherein states of said graphical user interface objects are distributedly stored in said image servers and said control terminal.
21. The system as claimed in claim 19, wherein a first display unit and a second display unit among any of the said display units is each connected to a first image server and a second image server respectively, said first image server informs said second image server and other relevant said image servers when any said graphical user interface object moves from the display area of said first display unit to the display area of said second display unit and said second image server takes over the function of displaying portions of said graphical user interface object that falls in said display area of said second display unit.
22. The system as claimed in claim 19, wherein said control terminal is connected to an input device to allow a user to interact with said graphical user interface object from said control terminal.
23. The system as claimed in claim 22, wherein said input device allows three- dimensional input.
24. The system as claimed in claim 23, wherein said input device is a three-dimensional input device.
25. The system as claimed in claim 23, wherein said input device is a two-dimensional input device.
26. The system as claimed in claim 23, wherein said input device is a keyboard.
27. The system as claimed in claim 19, wherein said graphical user interface object reduces and enlarges in size proportionally according to its location along a preset perspective axis when said graphical user interface object moves along said perspective axis.
28. A method for displaying high resolution images, comprising the steps of: composing said high resolution images from a plurality of sub-images according to the number of image servers available; storing each said sub-image on each said image server wherein each said image server controls a display unit connected to respective said image server; displaying each said sub-images by means of respective said image server and said display unit; and synchronizing image frame displayed by respective said display unit.
29. The method as claimed in claim 28, wherein one of said image servers runs a video playback program that sends a synchronization message containing a current frame number from time to time and said step of synchronizing image frame is carried out by sending said synchronization message to other said image servers so that all said display units display the image frame according to said current frame number.
30. The method as claimed in claim 28, further comprising the step of correcting colours displayed by each said display unit.
31. The method as claimed in claim 30, wherein said step of correcting colours displayed includes the steps of measuring simultaneously output colour characteristics of two or more display units, designating one said display unit as a reference display unit and determining the corresponding input colour values for matched output colour between each said display unit.
32. The method as claimed in claim 31, wherein said step of measuring simultaneously output colour characteristics of two or more display units includes displaying simultaneously test patterns from the reference display unit and one or more said display units and capturing the images of the displayed test patterns with an image capturing device in the same frame for comparisons of the characteristics of the displayed test patterns and inferring characteristics of the display units from the comparisons made.
33. The method as claimed in claim 32, wherein said image capturing device is a charge-coupled device (CCD) camera.
34. The method as claimed in claim 31, wherein said step of measuring output colour characteristics includes capturing the images of a first type of test pattern comprising red input colour component value that is equal to both green and blue input colour component values at every pixel, said input colour components of different values are distributed spatially at different locations within the display area of each display unit.
35. The method as claimed in claim 31, wherein said step of measuring output colour characteristics includes capturing the images of a second type of test pattern comprising only a dominant input colour component value at every pixel, said input colour component of different values are distributed spatially at different locations within the display area of each display unit.
36. The method as claimed in claims 34 and 35, wherein said step of determining corresponding input colour values includes the steps of obtaining a best fit line of a plot of the output colour versus the input colour, wherein the input colour component is increasing along a specific direction within the display area of each said test pattern image, and mapping said input colour value of one of said display units to the input colour value of the reference display unit for the same value of output colour with the aid of the best fit lines corresponding to each said display unit.
37. The method as claimed in claim 35, wherein said step of determining corresponding input colour values includes the steps of joining a point at a step of the output value with a similar point before and after said point to obtain a plot of the output colour versus the input colour, wherein the input colour component is increasing along a specific direction within the display area of each said test pattern image, and mapping said input colour value of one of said display unit to input colour value of reference display unit for the same value of output colour with the aid of the plot corresponding to each said display unit.
38. The method as claimed in claim 37, wherein the x-coordinate of said point is the midpoint of the step and the y-coordinate of said point is a mean value of the output colour.
39. The method as claimed in claim 31, wherein said step of measuring output colour characteristics includes capturing the images of a third type of test pattern comprising single input colour component at a single input value throughout the entire third type of test pattern.
40. The method as claimed in claim 39, wherein said step of correcting colours displayed includes the steps of adjusting dominant input colour component of said third test pattern type displayed by each said display unit so that dominant output colour component of said third test pattern type displayed by each said display unit is the same as dominant output colour component of said third test pattern type displayed by said reference display unit, and obtaining colour primaries from output colour components corresponding to each adjusted dominant input colour component value for each said display unit and reference display unit.
41. The method as claimed in claim 40, wherein said steps of adjusting dominant input colour component value and obtaining colour primaries from output colour components corresponding to each adjusted dominant input colour component value are repeated for all colour components.
42. The method as claimed in claim 41, wherein said steps of correcting colours displayed include representing input colour of the reference display unit as a point on a Commission Internationale de l'Eclairage (CIE) chromaticity diagram, wherein coordinates of the point are derived from the red colour ratio and green colour ratio of said input colour of the reference display unit and the colour primaries of the reference display unit according to CIE chromaticity equations, deriving the red colour ratio and green colour ratio of input colour to be corrected for said display units under correction from said point and the colour primaries of said display units under correction according to CIE chromaticity equations, obtaining one of the corrected input colour component values from the adjusted input colour component values derived in claim 40, and obtaining the other corrected input colour component values from the red colour ratio and the green colour ratio of input colour to be corrected and the corrected input colour value obtained in claim 40.
43. The method as claimed in claim 31, wherein the display unit among all said display units that has the smallest range of output colour values is designated as the reference display unit in said step of designating the reference display unit.
44. The method as claimed in claim 28, further comprising the step of blending said sub-image that overlaps other adjacent sub-images.
45. The method as claimed in claim 44 wherein said step of blending includes the steps of reducing the input colour values of pixels in the overlapping regions of each said sub-images according to predetermined weightages wherein said predetermined weightages are chosen so that the resultant intensity of said overlapping region is the same as intensity of non-overlapping region of said sub-images.
46. The method as claimed in claim 28, further comprising the steps of: storing locally states of one or more graphical user interface object in the memory of a control terminal in connection with said image servers and in the memory of said image servers; sending from said control terminal to relevant said image servers updates of said states of said graphical user interface object that is moving; calculating coordinates of said moving graphical user interface object; and displaying said graphical user interface object on said display units wherein when said moving graphical user interface object moves from display area controlled by a first said image server to display area controlled by a second said image server, said first image server sends coordinates of said moving graphical user interface object to said second image server and other relevant said image servers to ensure a seamless transition between said display areas.
47. The method as claimed in claim 46, further comprising the step of enlarging and reducing the size of a graphical user interface object according to a preset perspective axis when said control terminal receives coordinate information related to a depth dimension from a three-dimensional pointing device connected to said control terminal.
PCT/MY2006/000008 2005-09-05 2006-09-05 Display system utilising multiple display units WO2007029997A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
MYPI20054154 2005-09-05
MYPI20054154 2005-09-05

Publications (1)

Publication Number Publication Date
WO2007029997A1 true WO2007029997A1 (en) 2007-03-15

Family

ID=37836070

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/MY2006/000008 WO2007029997A1 (en) 2005-09-05 2006-09-05 Display system utilising multiple display units

Country Status (1)

Country Link
WO (1) WO2007029997A1 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100171765A1 (en) * 2008-12-29 2010-07-08 Lg Electronics Inc. Digital television and method of displaying contents using the same
CN103543596A (en) * 2012-07-12 2014-01-29 Cjcgv株式会社 Multi-projection system
CN103677706A (en) * 2012-09-24 2014-03-26 中国人民解放军海军航空工程学院 Multi simulator comprehensive display control method
EP2953341A3 (en) * 2014-02-24 2016-03-16 Samsung Electronics Co., Ltd Display device, mobile device, system including the same, and image quality matching method thereof
DE102010009208B4 (en) * 2010-02-25 2016-12-15 Michael Schweizer display system
WO2019091565A1 (en) 2017-11-10 2019-05-16 Ses-Imagotag Gmbh System for synchronized video playback on a number of playback devices
CN110910814A (en) * 2018-08-28 2020-03-24 杭州海康威视数字技术股份有限公司 LED module correction method and device, LED display screen and storage medium
CN110989949A (en) * 2019-11-13 2020-04-10 浙江大华技术股份有限公司 Method and device for special-shaped splicing display
WO2021134396A1 (en) * 2019-12-31 2021-07-08 西安诺瓦星云科技股份有限公司 Playing method, device and system and computer-readable storage medium
JP7384077B2 (en) 2020-03-09 2023-11-21 株式会社リコー multi-projection system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030115263A1 (en) * 2001-09-28 2003-06-19 De Tran Network projector interface system
JP2004507954A (en) * 2000-06-13 2004-03-11 Panoram Technologies, Inc. Method and apparatus for seamless integration of multiple video projectors
US6814448B2 (en) * 2000-10-05 2004-11-09 Olympus Corporation Image projection and display device
US20050117126A1 (en) * 2003-12-01 2005-06-02 Seiko Epson Corporation Front projection type multi-projection display

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004507954A (en) * 2000-06-13 2004-03-11 Panoram Technologies, Inc. Method and apparatus for seamless integration of multiple video projectors
US6814448B2 (en) * 2000-10-05 2004-11-09 Olympus Corporation Image projection and display device
US20030115263A1 (en) * 2001-09-28 2003-06-19 De Tran Network projector interface system
US20050117126A1 (en) * 2003-12-01 2005-06-02 Seiko Epson Corporation Front projection type multi-projection display

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100171765A1 (en) * 2008-12-29 2010-07-08 Lg Electronics Inc. Digital television and method of displaying contents using the same
US9077935B2 (en) * 2008-12-29 2015-07-07 Lg Electronics Inc. Digital television and method of displaying contents using the same
DE102010009208B4 (en) * 2010-02-25 2016-12-15 Michael Schweizer display system
CN103543596B (en) * 2012-07-12 2014-12-31 Cjcgv株式会社 Multi-projection system
EP2685311A3 (en) * 2012-07-12 2014-05-28 CJ CGV Co., Ltd. Multi-projection system
US9298071B2 (en) 2012-07-12 2016-03-29 Cj Cgv Co., Ltd. Multi-projection system
CN103543596A (en) * 2012-07-12 2014-01-29 Cjcgv株式会社 Multi-projection system
CN103677706A (en) * 2012-09-24 2014-03-26 中国人民解放军海军航空工程学院 Multi simulator comprehensive display control method
EP2953341A3 (en) * 2014-02-24 2016-03-16 Samsung Electronics Co., Ltd Display device, mobile device, system including the same, and image quality matching method thereof
US9799251B2 (en) 2014-02-24 2017-10-24 Samsung Electronics Co., Ltd. Display device, mobile device, system including the same, and image quality matching method thereof
WO2019091565A1 (en) 2017-11-10 2019-05-16 Ses-Imagotag Gmbh System for synchronized video playback on a number of playback devices
CN110910814A (en) * 2018-08-28 2020-03-24 杭州海康威视数字技术股份有限公司 LED module correction method and device, LED display screen and storage medium
CN110989949A (en) * 2019-11-13 2020-04-10 浙江大华技术股份有限公司 Method and device for special-shaped splicing display
CN110989949B (en) * 2019-11-13 2023-04-11 浙江大华技术股份有限公司 Method and device for special-shaped splicing display
WO2021134396A1 (en) * 2019-12-31 2021-07-08 西安诺瓦星云科技股份有限公司 Playing method, device and system and computer-readable storage medium
JP7384077B2 (en) 2020-03-09 2023-11-21 株式会社リコー multi-projection system

Similar Documents

Publication Publication Date Title
WO2007029997A1 (en) Display system utilising multiple display units
US9479769B2 (en) Calibration of a super-resolution display
Raskar et al. A low-cost projector mosaic with fast registration
JP3714163B2 (en) Video display system
Bimber et al. Embedded entertainment with smart projectors
US8508615B2 (en) View projection matrix based high performance low latency display pipeline
Bimber et al. The visual computing of projector-camera systems
Cotting et al. Embedding imperceptible patterns into projected images for simultaneous acquisition and display
US8102332B2 (en) Intensity scaling for multi-projector displays
US9554105B2 (en) Projection type image display apparatus and control method therefor
US7215362B2 (en) Auto-calibration of multi-projector systems
Harville et al. Practical methods for geometric and photometric correction of tiled projector
US20080253685A1 (en) Image and video stitching and viewing method and system
US20090122195A1 (en) System and Method for Combining Image Sequences
US9654750B2 (en) Image processing system, image processing apparatus, and image processing method to respectively display images obtained by dividing one image on first and the second display media
EP1421795A1 (en) Multi-projector mosaic with automatic registration
US8454171B2 (en) Method for determining a video capture interval for a calibration process in a multi-projector display system
JP2007295559A (en) Video processing and display
JP2011082798A (en) Projection graphic display device
US20190281266A1 (en) Control apparatus, readable medium, and control method
JP2006074805A (en) Multi-projection video display device
JP3757979B2 (en) Video display system
Majumder et al. Using a camera to capture and correct spatial photometric variation in multi-projector displays
Summet et al. Shadow elimination and blinding light suppression for interactive projected displays
JP2020191589A (en) Projection device, projection method, program, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC - FORM EPO 1205A DATED 28-07-2008

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC, EPO FORM 1205A, SENT ON 28/07/08.

122 Ep: pct application non-entry in european phase

Ref document number: 06799447

Country of ref document: EP

Kind code of ref document: A1