WO2007107948A1 - Video transmission over a data link with limited capacity - Google Patents

Video transmission over a data link with limited capacity

Info

Publication number
WO2007107948A1
Application number
PCT/IB2007/050945
Authority
WIPO (PCT)
Other languages
French (fr)
Inventors
Bas Driesen, Henk Huijgen
Original Assignee
Koninklijke Philips Electronics N.V.
Prior art keywords
data, video, frames, video signal, pixels

Classifications

    • H04N19/156: Adaptive coding controlled by the availability of hardware or computational resources, e.g. encoding based on power-saving criteria
    • H04N19/164: Adaptive coding controlled by feedback from the receiver or from the transmission channel
    • H04N19/59: Predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
    • H04N19/90: Coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N21/4112: Peripherals receiving signals from specially adapted client devices having fewer capabilities than the client, e.g. thin client having less processing power or no tuning capabilities
    • H04N21/440263: Reformatting operations of video signals for household redistribution, storage or real-time display by altering the spatial resolution, e.g. for displaying on a connected PDA
    • H04N7/163: Authorising the user terminal, e.g. by paying; registering the use of a subscription channel, e.g. billing, by receiver means only


Abstract

A video transmission system (1) comprises a transmitting station (10), a receiving station (30), and a data link (20) coupling these two stations. The transmitting station receives a source video signal (SS) for frames having size X1,Y1, X1 and Y1 indicating the number of pixels in the horizontal and vertical dimension of the frames, each pixel being associated with nb bits of digital data; the pixel data are compressed such that the number of bits of the compressed data per frame is equal to X*Y*nb < X1*Y1*nb, wherein X and Y correspond to the horizontal and vertical dimension of the frames of a target standard video format; the compressed data are formatted into an uncompressed target video signal corresponding to said target standard video format; the uncompressed target video signal is transmitted over the data link (20), which preferably is an HDMI link.

Description

Video transmission over a data link with limited capacity
FIELD OF THE INVENTION
The present invention relates in general to a method for data transmission, particularly the transmission of video images over a data link between a transmitting station and a receiving station, which link may be wired but which particularly may be a wireless link.
In a particular example, the transmitting station receives video from a video source, for instance a video player or a television receiver, and the receiving station is a display device or screen, and the invention will be specifically explained for this example, but it is noted that this should not be considered as limiting the scope of the present invention.
BACKGROUND OF THE INVENTION
It is generally known that a digital video signal contains a certain amount of bits per second, on average, the precise amount depending on a number of factors, one of these factors being the video format. If this video signal is to be transferred over a data link, the data link should have a data transmission capacity that is at least equal to the average data rate (bit rate) of the video signal. In practice, however, a data link has a limited data transmission capacity, which may depend on the transmission protocol used and on the hardware used. In general, it can be said that a wireless link has a more limited capacity as compared to a wired link. On the other hand, video formats have been developed for increasing image quality, thus corresponding to an increased data rate.
If, in a transmission system comprising the combination of a transmitting station, a link, and a receiving station, the video signal to be transmitted has a data rate less than the capacity of the system, there is basically no problem: the signal can be transmitted without loss of quality. Problems arise if the video signal to be transmitted has a data rate higher than the capacity of the system.
For this problem, solutions have already been developed in the form of compression techniques, such as MPEG. Such compression techniques typically involve a loss of image quality, but they are designed such that the loss of information is hardly perceptible to the human eye. The present invention aims to provide a different solution.
SUMMARY OF THE INVENTION
According to the present invention, the data of the video images are packed in a standard video format of a reduced size, this size being chosen such that the corresponding data rate is less than the transmission capacity of the link of the system. This conversion results in a "normal" video signal that can be transferred over the data link without loss. It is noted that the format of the transmission signal is a standard format, to ensure that the hardware components involved are capable of handling it: since the format used is a normal format, they can be expected to handle it. Of course, if the
"normal" video signal as received by the receiving station is directly sent to a display screen, it would not result in "sensible" images, because the signal still contains transformed data. Thus, the receiving station first needs to unpack the received signal and generate a signal of the original format, which can be applied to the display screen. Further advantageous elaborations are mentioned in the dependent claims.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects, features and advantages of the present invention will be further explained by the following description of one or more preferred embodiments with reference to the drawings, in which same reference numerals indicate same or similar parts, and in which:
Figure 1 is a block diagram schematically illustrating a video transmission system according to the present invention;
Figure 2 illustrates the YUV segments after addition of CRC bytes according to the present invention;
Figure 3 illustrates multiplexing on a segment basis;
Figure 4 illustrates multiplexing on a pixel basis.
DETAILED DESCRIPTION OF THE INVENTION
Figure 1 is a block diagram schematically illustrating an exemplary embodiment of a video transmission system 1 according to the present invention. The system 1 comprises a first station 10, a second station 30, and a data link 20 coupling these two stations. In this embodiment, the second station is a display station. It comprises a receiving section 31 receiving data from the link 20, a processing section 32 for processing the data received from the link 20 and generating a video signal for display, and a display device 33, for instance an LCD screen. It is noted, however, that the second station may also be a different type of station, for instance a recording station.
The first station 10 is a transmitting station, having an input for receiving a digital video signal from a video source VS. The video source may for instance be a generating device for generating video signals, such as a video camera. It may also be a reading device such as a video player for reading stored video signals from a storage device such as an optical disc. It may also itself be a receiving device for receiving broadcast signals. The origin of the video signal is not relevant for understanding or implementing the present invention. The digital video signal as provided by the video source VS will hereinafter be indicated as source signal SS.
In a particular embodiment, the system 1 as a whole may be considered to constitute a split-architecture type of television set, where the electronics is incorporated in the first station 10, to which all external connections are made. The display device may be connected to the first station through a single link only. Inside the first station 10, all the TV-related processing is done, while the display device needs to perform some basic backend processing only.
As should be clear to a person skilled in the art, a digital video signal contains a sequence of frames, each frame being constituted by a predefined number of pixels arranged in a rectangular array. In the following, the "size" of a frame will be defined as the number of pixels per frame. Frame size has been standardized, but there exists a relatively large number of standard sizes, some being more commonly used than others. By way of example, some standard sizes are listed below, in which the first number indicates the number of pixels in the horizontal direction while the second number indicates the number of pixels in the vertical direction; further, the indication between brackets indicates the video format in which the frame size is typically implemented.
720 x 480 (DV NTSC / VGA), 768 x 576 (PAL), 1024 x 768 (XGA), 1280 x 720p (HDTV), 1920 x 1080p (HDTV), 2560 x 1440p.
For each pixel, the video signal contains information regarding color and brightness. Normally, the brightness coding takes a predetermined number of bits per pixel (bpp), and the color coding takes a predetermined number of bpp. It should be clear that the image quality, expressed as a combination of the frame size and the color and brightness resolution, has a large influence on the number of bits that need to be processed. For instance, in the case of color and brightness being coded as 8 bpp each, and in the case of a frame repetition rate of 50 frames per second (fps), an XGA signal of size 1024 x 768 requires 629 Mbps, whereas an HDTV signal of size 1920 x 1080 requires 1659 Mbps. The recently developed size 2560 x 1440 even requires 2950 Mbps. The above example of calculation has not taken into account any additional information that needs to accompany the "bare" video information.
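The figures above follow directly from frame size, bit depth and frame rate. As a minimal illustration (a Python sketch, not part of the patent; it assumes 16 bpp and 50 fps as in the text and ignores blanking and other ancillary data):

    # Raw bit rate of an uncompressed video signal, in Mbps (10^6 bit/s).
    def raw_bitrate_mbps(width, height, bpp=16, fps=50):
        return width * height * bpp * fps / 1e6

    for w, h in [(1024, 768), (1280, 720), (1920, 1080), (2560, 1440)]:
        print(f"{w} x {h}: {raw_bitrate_mbps(w, h):.0f} Mbps")
    # 1024 x 768 -> 629, 1280 x 720 -> 737, 1920 x 1080 -> 1659, 2560 x 1440 -> 2949 (~2950)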
In a practical implementation, it is preferred that the capacity of the link is as high as possible. Further, it is preferred that the video signal on the link is copy-protected to prevent illegal copying of the content. Therefore, a suitable and preferred format of the link is HDMI. In an HDMI link, there are 3 video channels available for 3 video components (R, G, B), there is an audio channel available, and there is a bidirectional control channel available, as should be known to a person skilled in the art.
If the transmitting station 10 were to transmit the video signal directly, the above numbers would be the data rate transmitted over the link 20. However, transmission links have a predetermined transmission capacity, which has an upper limit depending, inter alia, on the type of link (electrical wire, optical fiber, wireless), but which in practice may be less than the upper limit because of circumstantial conditions (such as, for instance, the length of the link, or the link protocol itself, but also link devices in the chain can have limitations). Thus, in practice it may turn out that the link 20 is not capable of handling the data stream. For instance, if the link 20 has a capacity of 800 Mbps, the HDTV signal of size 1280 x 720 can be transmitted without problems but the size 1920 x 1080 or higher cannot be transmitted without transmission errors.
Thus, the transmitting station 10 needs to perform some data processing such that the video information can be transmitted with a reduced data rate. A commonly used method for doing this is compression; well-known compression schemes are, for instance, MPEG2 and MPEG4. With such compression, however, loss of information occurs to some extent, possibly resulting in video artefacts on display. Further, the compression factor achieved is not constant but depends on picture content. In the following example, it will be assumed that the input signal SS has a size 1920 x 1080.
The first station 10 comprises a conversion block 11 for converting received RGB signals of input signal SS to YUV signals, because generally a better compression can be achieved in YUV space as compared to RGB space.
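The patent does not specify which RGB-to-YUV conversion matrix the conversion block 11 uses; the sketch below applies the common BT.601 coefficients purely as an illustration.

    # Illustrative RGB -> YUV conversion for one 8-bit pixel (BT.601 coefficients assumed).
    def rgb_to_yuv(r, g, b):
        y = 0.299 * r + 0.587 * g + 0.114 * b
        u = -0.169 * r - 0.331 * g + 0.500 * b + 128   # chroma offset keeps U, V unsigned
        v = 0.500 * r - 0.419 * g - 0.081 * b + 128
        clamp = lambda x: max(0, min(255, int(round(x))))
        return clamp(y), clamp(u), clamp(v)

    print(rgb_to_yuv(255, 0, 0))   # pure red: low Y, V near its maximum
    print(rgb_to_yuv(0, 0, 255))   # pure blue: U near its maximum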
The first station 10 further comprises a compression block 12 for compressing the video data using a Differential Pulse Code Modulation (DPCM) method. The method is intra-field and compresses the image on a line basis. An inter-field compression method would introduce too much latency. An advantage of the DPCM method is, besides its low complexity, that the compression factor may be guaranteed. Other methods, like JPEG, do not deliver a guaranteed compression factor.
In the compression method according to the present invention, each line is divided into segments of Np pixels, separately for each color component. A segment is formed by Nb consecutive bytes on a line. Compression may be done segment-wise. Each color component could be compressed with a different factor, as long as the overall compression factor is still achieved.
The segment size of the compression method is selectable, but best results are reached when using a segment size of around 128 pixels. In a practical embodiment, the segment size will be selected such that the active number of pixels per line is a multiple of the segment size. If, as in the example, the video source is 1920 x 1080p, the segment size could be taken as 128.
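The compression itself is the near-lossless scheme referenced further below; purely as an illustration of line-based, segment-wise DPCM with a guaranteed rate, the following sketch re-quantises each pixel's prediction error (relative to its left neighbour) to a fixed number of bits. It is an illustration, not the patent's algorithm.

    # Fixed-rate DPCM over one segment: first sample verbatim (8 bits), every further
    # sample coded as a quantised difference of err_bits bits, so the compressed size
    # of the segment is known in advance (guaranteed compression factor).
    def dpcm_encode_segment(pixels, err_bits=5):
        step = 1 << (8 - err_bits)                       # quantisation step for the error
        lo, hi = -(1 << (err_bits - 1)), (1 << (err_bits - 1)) - 1
        codes, recon = [pixels[0]], pixels[0]
        for p in pixels[1:]:
            q = max(lo, min(hi, round((p - recon) / step)))
            codes.append(q)
            recon = max(0, min(255, recon + q * step))   # track the decoder-side value
        return codes                                     # 8 + err_bits * (len(pixels) - 1) bits

    def dpcm_decode_segment(codes, err_bits=5):
        step = 1 << (8 - err_bits)
        out = [codes[0]]
        for q in codes[1:]:
            out.append(max(0, min(255, out[-1] + q * step)))
        return out

    segment = list(range(100, 228))                      # one 128-pixel segment (a ramp)
    print(dpcm_decode_segment(dpcm_encode_segment(segment))[:8])

With err_bits = 5, a 128-pixel segment costs 8 + 127 * 5 = 643 bits, i.e. just over 80 bytes, of the same order as the 84-byte Y budget used in the example further below.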
In case the link 20 is a wireless link, the transmission may not be error-free. The wireless link already features an RS (Reed-Solomon) or LDPC (Low-Density-Parity- Check) error correction mechanism. It will be hard to improve the bit error rate by adding another correction method on top of these. Therefore, the present invention proposes to use an error detection method in combination with error concealment at the display.
As error detection, a 16-bit (2 bytes) cyclic redundancy checksum (CRC) is proposed. Such a CRC will detect 99.99% of the errors. In the first station 10, a CRC block 13 calculates a CRC per Y-segment. It is possible to do the same per U-segment and per V-segment, but in a preferred embodiment the checksum calculation for U and V is combined to limit the overhead on CRC bits. Such a CRC will be able to detect 99% of the errors, but, since an erroneous U or V segment will be less visible than an erroneous Y segment, this is an acceptable solution. Thus, in this embodiment, there are two bytes per Y-segment used for CRC and two bytes per combination of U and V segments used for CRC, in other words a total of 4 bytes of CRC per YUV segment.
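As a sketch of this step (the CRC polynomial is not given in the patent; the CRC-16/CCITT routine from Python's binascii module is used here merely as an example):

    import binascii

    # One 16-bit CRC per compressed Y segment, and one 16-bit CRC over the
    # concatenation of the corresponding compressed U and V segments.
    def segment_crcs(y_seg: bytes, u_seg: bytes, v_seg: bytes):
        crc_y = binascii.crc_hqx(y_seg, 0)
        crc_uv = binascii.crc_hqx(u_seg + v_seg, 0)
        return crc_y.to_bytes(2, "big"), crc_uv.to_bytes(2, "big")

    # Dummy compressed segments, just to show the call; real data comes from block 12.
    crc_y, crc_uv = segment_crcs(bytes(84), bytes(43), bytes(43))
    print(crc_y.hex(), crc_uv.hex())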
For more information on the compression, reference is made to the presentation "Lossless and Fine-Granularity Scalable Near-Lossless Color Image Compression" by R.J. van der Vleuten and S. Egner at the 25th Symposium on Information Theory in the Benelux, 2-4 June 2004, Kerkrade, the Netherlands, of which the proceedings are available as ISBN 90-71048-20-9, the contents of this presentation being incorporated herein by reference.
The compression is performed to such degree that the resulting data rate is less than or equal to the capacity of the link 20. It is now possible, in principle, to generate a straight train of data for transmission over the link 20. However, it is very well possible that the equipment of the link 20 is not capable of handling such unorganized data train. In order to overcome this problem, the present invention proposes to pack the data train into a standard video format of a size that can be handled by the link 20. This video format will be indicated as target format, and the resulting video signal will be indicated as target signal TS.
To this end, the first station 10 further comprises a formatter block 14, receiving the compressed source signal CSS from the compression block 12, possibly provided with CRC. The formatter block 14 has information regarding the link 20, more specifically information regarding the data rate capacity of the link 20, and the formatter block 14 is designed to select a target standard video format having a data rate less than the data rate capacity of the link 20. In a possible embodiment, this selection is predefined in the first station 10. In another possible embodiment, the formatter block 14 is provided with a memory (not shown) containing a table of possible video formats, and the formatter block 14 is designed to select a suitable format from this memory. If more than one standard video format is possible, the formatter block 14 will select, from the candidate formats, the one having the largest number of pixels per frame.
For instance, assume that the link 20 is capable of transmitting the following formats:
720 x 480, 768 x 576, 1024 x 768, 1280 x 720p.
In such a case, although the link 20 is capable of transmitting the formats 1024 x 768 and lower, the formatter block 14 will select the format 1280 x 720p as target format. With a source format of 1920 x 1080, the required compression factor would then be at least 2.25. In this exemplary situation, the compression block 12 could perform the compression as follows:
Y: 128 bytes compressed to 84 bytes.
U: 128 bytes compressed to 43 bytes.
V: 128 bytes compressed to 43 bytes.
This leads to an overall compression factor of (128 + 128 + 128) / (84 + 43 + 43) = 2.26.
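The selection rule and the arithmetic of this example can be summarised in a few lines (a sketch; the format list and byte counts are the example values from the text above):

    # Pick, from the formats the link can carry, the one with the most pixels per frame,
    # then derive the minimum compression factor for a 1920 x 1080 source.
    link_formats = [(720, 480), (768, 576), (1024, 768), (1280, 720)]
    source = (1920, 1080)

    target = max(link_formats, key=lambda wh: wh[0] * wh[1])        # (1280, 720)
    min_factor = source[0] * source[1] / (target[0] * target[1])    # 2.25

    # Per-segment byte budget of the worked example: 3 x 128 bytes in, 84 + 43 + 43 bytes out.
    achieved = (128 + 128 + 128) / (84 + 43 + 43)                   # ~2.26, just above 2.25
    print(target, round(min_factor, 2), round(achieved, 2))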
Figure 2 illustrates the resulting YUV segments after addition of CRC bytes; it is noted that the addition of CRC bytes reduces the number of video content bytes. In the following, two possible embodiments for the multiplexing process performed by the formatting block 14 are described.
In a first embodiment, with reference to figure 3, the multiplexing is performed on a segment basis. The segments are multiplexed over the three available channels (R,G,B). The R-channel contains YUV segment 1, 4, 7 etc. The G-channel contains YUV segment 2, 5, 8 etc. The B-channel contains YUV segment 3, 6, 9 etc. The disadvantage of this segment multiplexing is that segments need to be buffered at encoder as well as decoder side. The CRC is considered part of the segment.
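A minimal sketch of this round-robin distribution (segment numbering as in figure 3; names are illustrative):

    # Distribute consecutive YUV segments (CRC included) over the R, G and B channels.
    def mux_segments(yuv_segments):
        channels = {"R": [], "G": [], "B": []}
        for i, seg in enumerate(yuv_segments):
            channels["RGB"[i % 3]].append(seg)    # R: 1,4,7,...  G: 2,5,8,...  B: 3,6,9,...
        return channels

    demo = [bytes([n]) * 4 for n in range(1, 10)]  # nine dummy segments
    print({ch: [s[0] for s in segs] for ch, segs in mux_segments(demo).items()})
    # {'R': [1, 4, 7], 'G': [2, 5, 8], 'B': [3, 6, 9]}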
In a second embodiment, with reference to figure 4, the multiplexing is performed on a pixel basis. A first V-segment V1 is written as pixel data in the B-channel. A first U-segment U1 is written as pixel data in the G-channel. A first Y-segment Y1 is written as pixel data in the R-channel. Second, third and fourth Y-segments Y2, Y3, Y4 are written as pixel data in the R, G, B channels, respectively. And so on. Thus, one channel (here: the R channel) will contain only Y-segments, one channel (here: the G channel) will contain alternately U and Y segments, and one channel (here: the B channel) will contain alternately V and Y segments.
Pixel based multiplexing will limit the use of buffers, but adds additional control logic. Both first station 10 and second station 30 need to know the way of multiplexing. If the number of Y pixels is not a multiple of the number of U or V pixels, pixel multiplexing may be difficult. The CRC is considered part of the segment.
The first station 10 further comprises a transmission format modelling block 15; in the case of an HDMI link, this would be a block for outputting an HDMI signal. Apart from the video information received from the formatter 14, the transmission format modelling block 15 also receives the audio data received at the input of the first station 10. Further, the first station 10 may also generate a CEC message for transmission over the CEC channel of the HDMI link, as will be explained later.
The first station 10 further comprises a link transmission block 16, which receives the HDMI signal from the transmission format modelling block 15 and converts this signal to a signal suitable for actual transmission over the link 20. Depending on the physical realization of the link 20, the transmission block 16 generates electrical pulses or an optical signal or a wireless signal. Control will be a bidirectional communication. In the case of wireless transmission, technologies that could be used for the high-speed forward link are UWB or 60 GHz communications. For low-speed bi-directional control data, another wireless technology can be used.
In the second station 30, the receiving section 31 receives the HDMI signals in the physical form determined by the physical realization of the link 20, and retrieves the HDMI signal, which contains the video data, audio data and CEC control data.
The second station 30 further comprises an HDMI receiver 34, which receives the HDMI signal from the receiving section 31, and which outputs the CEC control data, the audio data, and the video data.
The second station 30 further comprises a de-formatter block 35, which receives the video data from the HDMI receiver 34, and which performs the inverse operation of the formatter block 14 on the (standard) video signal as received, such as to provide the compressed Y, U, V data together with possible CRC data. If, on transmission, an error correcting code like FEC has been used, errors may be corrected in this block. If an error detecting code like CRC has been used, errors cannot be corrected here. The error detection information (CRC data) is then sent to the error concealment block 37 further in the chain.
The second station 30 further comprises a de-compressor block 36, which receives the compressed Y, U, V data from the de-formatter block 35, and which performs the inverse operation of the compression block 12, such as to provide uncompressed Y, U, V data.
If an error detection code has been used, errors cannot be corrected. Since a bit error could lead to an error of Nb consecutive bytes on the screen, preferably some way of error concealment is performed. To this end, the second station 30 further comprises an error concealment block 37, receiving the uncompressed Y, U, V data from the de-compressor block 36, and receiving the CRC data (if any) from the de-formatter block 35. The error concealment block 37 may perform any of the following (or other) methods if an error is detected in a segment:
* The erroneous segment is replaced by the segment directly above on the previous line.
* The erroneous segment is replaced by the average of the segment directly above on the previous line and the segment directly below on the next line.
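Both options can be sketched as follows (an illustration only; segments holds the already de-compressed per-line segments and bad holds the (line, segment) positions whose CRC check failed; names are not taken from the patent):

    # Replace a flagged segment by the segment directly above, or by the average of the
    # segments directly above and below when both neighbours are available.
    def conceal(segments, bad, use_average=True):
        for line, col in bad:
            above = segments[line - 1][col] if line > 0 else None
            below = segments[line + 1][col] if line + 1 < len(segments) else None
            if use_average and above is not None and below is not None:
                segments[line][col] = [(a + b) // 2 for a, b in zip(above, below)]
            elif above is not None:
                segments[line][col] = list(above)
        return segments

    rows = [[[10] * 4], [[0] * 4], [[30] * 4]]      # three lines, one segment per line
    print(conceal(rows, bad={(1, 0)})[1][0])        # -> [20, 20, 20, 20]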
The second station 30 further comprises a backconversion block 38 for converting the YUV signals received from the error concealment block 37 to RGB signals. The backconversion block 38 provides its RGB output signals to the display device 33, which also receives the audio data and the possible CEC data from the HDMI receiver 34. It is noted that, in practice, the circuit blocks 31, 34, 35, 36, 37, 38 will be accommodated within a housing of the display device 33 (monitor or the like). It is further noted that the RGB signals may be sent to the display device 33 directly, if the display device is a type accepting RGB signals, or may first be packed in a standard video signal, if the display device is a type expecting to receive digital video signals. In principle, the first station 10 and the second station 30 of the system 1 form a matching set. This means that the second station "knows" what kind of compression is performed by the first station, and it means that the first station in turn "knows" this.
However, in practice it may happen that the user connects a different display device (monitor) to the first station 10. If the video transmitted by the first station 10 is compressed while the display device does not know this, the user cannot view "recognizable" video. Also, it is possible that the first station 10 receives source signals of different sizes. If it receives a source signal of size 1280 x 720, it does not need to perform any compression, whereas if it receives a source signal of size 2560 x 1440 it needs to perform compression with a higher compression factor. In other words, the compression factor actually applied may not always be the same.
In such cases, the first station 10 needs to communicate with the second station 30, which communication can take place over the CEC channel. This communication is initiated by the first station 10, for instance on power-up, or at the occasion of the HDMI cable being connected, or each time a video transmission is started, by the first station 10 sending a CEC message. This CEC message may, for instance, contain the compression factor. The first station 10 may for instance receive any of the four following types of answer from the second station 30:
1) The second station 30 is not the standard station belonging to the first station 10, and it has not installed any CEC facility; in that case, the second station 30 does not respond at all.
2) The second station 30 is not the standard station belonging to the first station 10, but it has installed CEC facility; in that case, the second station 30 may respond by sending a <feature abort> message, indicating that it does not support the compression feature.
3) The first station 10 communicates the compression factor, and the second station 30, whether or not it is the standard station belonging to the first station 10, has installed CEC facility and understands the message from the first station 10; in that case, the second station 30 will send an acknowledgement message that it recognizes the compression factor.
4) The second station 30 is the standard station belonging to the first station 10, and the first station 10 performs a predefined standard compression; in that case, the second station 30 will send an acknowledgement message.
In the situations 3) and 4), processing may take place as described earlier. However, in the situations 1) and 2), it is preferred that the first station 10 does not proceed as described, because it is highly likely that the second station 30 will not, or not correctly, decompress the video signal received over the link 20, so that the display of the video signal will not result in viewable images. Nevertheless, the first station 10 has to take some fallback action in order to reduce the data rate, and according to the invention the first station 10 will do so in a manner which will at least result in viewable images without decompression. In a first possible fallback method, the first station 10 will scale the video to a resolution that can be handled by the second station 30 as well as by the link 20.
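The decision logic that follows from the four cases above can be summarised as a small sketch (response labels and mode names are illustrative, not CEC opcodes):

    # Decide how the first station 10 proceeds after announcing itself over CEC.
    def choose_mode(cec_response):
        if cec_response in ("ack_compression_factor", "ack_standard"):  # cases 3) and 4)
            return "compressed"      # pack and transmit the compressed target signal
        return "fallback"            # cases 1), 2) or anything unknown: scale or crop instead

    for response in (None, "feature_abort", "ack_compression_factor", "ack_standard"):
        print(response, "->", choose_mode(response))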
In a second possible fallback method, the first station 10 will discard some of the pixels of the frames in order to generate a target video signal with the suitable target size. In the present example, where the source video signal has a size of 1920 x 1080 pixels while the target video signal has a size of 1280 x 720 pixels, the first station 10 may discard or ignore the first 180 lines of each frame and the last 180 lines of each frame, so that it will transmit only data from lines 181 to 900. Further, of each line, the first station 10 may discard or ignore the first 320 pixels and the last 320 pixels, so that it will transmit only the data corresponding to pixels 321 to 1600. Thus, a video signal will result having a size of 1280 x 720 pixels, which on display without decompression will result in a viewable image of 1280 pixels wide and 720 pixels high, corresponding to a central portion of the original image. The price of this fallback method is that upper and lower horizontal bands of 180 lines in height, and left and right vertical bands of 320 pixels in width, are lost along the edges of the original picture, but at least the user can view video.
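A sketch of this cropping fallback with the numbers of the example (1920 x 1080 source, 1280 x 720 target):

    # Keep the central 1280 x 720 region: drop 180 lines at top and bottom and
    # 320 pixels at the left and right of every remaining line.
    def crop_center(frame, src=(1920, 1080), dst=(1280, 720)):
        top = (src[1] - dst[1]) // 2       # 180 lines
        left = (src[0] - dst[0]) // 2      # 320 pixels
        return [row[left:left + dst[0]] for row in frame[top:top + dst[1]]]

    frame = [[(x, y) for x in range(1920)] for y in range(1080)]
    cropped = crop_center(frame)
    print(len(cropped), len(cropped[0]))   # 720 1280, i.e. lines 181..900, pixels 321..1600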
Summarizing, the present invention provides a video transmission system 1 which comprises a transmitting station 10, a receiving station 30, and a data link 20 coupling these two stations. The transmitting station receives a source video signal SS for frames having size X1,Y1, X1 and Y1 indicating the number of pixels in the horizontal and vertical dimension of the frames, each pixel being associated with nb bits of digital data. The pixel data are compressed such that the number of bits of the compressed data per frame is equal to X*Y*nb < X1*Y1*nb, wherein X and Y correspond to the horizontal and vertical dimension of the frames of a target standard video format. The compressed data are formatted into an uncompressed target video signal corresponding to said target standard video format. The uncompressed target video signal is transmitted over the data link 20, which preferably is an HDMI link.
In this invention, the following innovative ideas are incorporated:
* use a 16-bit CRC for Y and an 8-bit CRC for U and V; more generally, use a CRC for Y which is twice as large as the CRC for U and V (see the sketch after this list);
* format the compressed video in a standard HDMI video format;
* use error detection and concealment;
* crop the image when the display is not compatible with the first station 10;
* use a vendor-specific CEC message to detect the presence of the decoder at the receiver side.
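Purely by way of illustration of the first idea in the list above, the asymmetric protection of luma and chroma could be sketched as follows; the use of the CRC-16/CCITT polynomial 0x1021 for Y and the CRC-8 polynomial 0x07 for U and V is an assumption for illustration, the description only requiring that the Y CRC be twice as wide as the U/V CRC:

    def crc(data, poly, width):
        """Straightforward bitwise MSB-first CRC over a byte sequence.

        poly is the generator polynomial without its leading 1; the
        register starts at 0 and no final XOR or reflection is applied.
        """
        reg = 0
        top = 1 << (width - 1)
        mask = (1 << width) - 1
        for byte in data:
            reg ^= byte << (width - 8)
            for _ in range(8):
                reg = ((reg << 1) ^ poly) & mask if reg & top else (reg << 1) & mask
        return reg

    def protect_segment(y_bytes, u_bytes, v_bytes):
        # 16-bit CRC for luma, 8-bit CRC for each chroma component,
        # i.e. the luma CRC is twice as large as the chroma CRCs.
        crc_y = crc(y_bytes, poly=0x1021, width=16)   # CRC-16/CCITT (assumed)
        crc_u = crc(u_bytes, poly=0x07, width=8)      # CRC-8 (assumed)
        crc_v = crc(v_bytes, poly=0x07, width=8)
        return crc_y, crc_u, crc_v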
It is to be particularly noted that the signal transferred over the link 20 is in all aspects a standard video signal of size X,Y, which can be received and processed by any standard video processing apparatus, up to and including a display device. Particularly, this standard video signal contains pixel data coding for X*Y pixels, which upon display will result in pixel color and pixel brightness of an image having a size of X pixels wide and Y pixels high. This is a difference with respect to the prior art, where a compressed video signal does not have the format of a standard video signal and thus cannot be displayed without a decompression process. However, in the standard video signal according to the present invention, the data in pixel x,y do not correspond to one actual pixel in the image intended for display. The image intended for display has a size of X1 pixels wide and Y1 pixels high, wherein X1>X or Y1>Y, or both, thus containing X1*Y1 pixels. The data coding for these X1*Y1 pixels is compressed so that the number of bits is reduced, and the compressed data is redistributed over the X*Y pixels of the standard video signal used for transmission. Thus, this standard video signal can be considered as constituting a transport vehicle of size X*Y, containing compressed data of an original signal of size X1*Y1. Thus, the pixel data are compressed, but the video signal as such is an uncompressed video signal.
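A minimal sketch of this "transport vehicle" idea, assuming nb = 24 so that each container pixel carries three bytes of compressed data (the function name and the zero-padding choice are assumptions for illustration):

    def pack_into_container(compressed: bytes, X=1280, Y=720, nb=24):
        """Redistribute the compressed data of an X1*Y1 frame over the
        X*Y pixels of a standard container frame of nb bits per pixel.

        The container is an ordinary video frame as far as the link is
        concerned; only a receiver that knows the scheme can recover the
        original X1*Y1 image from it.
        """
        bytes_per_pixel = nb // 8
        capacity = X * Y * bytes_per_pixel
        if len(compressed) > capacity:
            raise ValueError("compressed frame does not fit the container")
        # Pad to full capacity so every container pixel is defined.
        payload = compressed.ljust(capacity, b"\x00")
        # One container "pixel" = bytes_per_pixel consecutive payload bytes.
        pixels = [payload[i:i + bytes_per_pixel]
                  for i in range(0, capacity, bytes_per_pixel)]
        # Rows of the container frame.
        return [pixels[r * X:(r + 1) * X] for r in range(Y)]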
While the invention has been illustrated and described in detail in the drawings and foregoing description, it should be clear to a person skilled in the art that such illustration and description are to be considered illustrative or exemplary and not restrictive. The invention is not limited to the disclosed embodiments; rather, several variations and modifications are possible within the protective scope of the invention as defined in the appended claims. For instance, instead of using the CEC protocol, the first station 10 may also use a parameter in the monitor's EDID information.
Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.
In the above, the present invention has been explained with reference to block diagrams, which illustrate functional blocks of the device according to the present invention. It is to be understood that one or more of these functional blocks may be implemented in hardware, where the function of such functional block is performed by individual hardware components, but it is also possible that one or more of these functional blocks are implemented in software, so that the function of such functional block is performed by one or more program lines of a computer program or a programmable device such as a microprocessor, microcontroller, digital signal processor, etc.

CLAIMS:
1. Method for transmitting video data, the method comprising the steps of:
- receiving an original video signal for frames having size X1,Y1, X1 indicating the number of pixels in the horizontal dimension of the frames and Y1 indicating the number of pixels in the vertical dimension of the frames, each pixel being associated with nb bits of digital data, so that the number of bits per frame is equal to X1*Y1*nb;
- from this original video signal, extracting the pixel data;
- compressing the pixel data such that the number of bits of the compressed data per frame is equal to X*Y*nb < X1*Y1*nb, wherein X and Y are chosen such as to correspond to the horizontal and vertical dimension, respectively, of the frames of a target standard video format;
- from the compressed data, calculating target pixel data of X*Y pixels per frame, and using these target pixel data to generate an uncompressed target video signal corresponding to said target standard video format;
- transmitting the uncompressed target video signal.
2. Method according to claim 1, wherein said target standard video format is selected from the group comprising 720x480, 768x576, 1024x768, 1280x720, 1920x1080, 2560x1440.
3. Method according to claim 1, wherein the uncompressed target video signal is transmitted over an HDMI link.
4. Method according to claim 1, wherein the step of compressing the pixel data comprises segment based multiplexing or pixel based multiplexing.
5. Method according to claim 1, further comprising the steps of:
- before transmission of the video data, transmitting a control message to a receiver;
- receiving a return message back from the receiver;
- in response to the received return message, deciding to proceed with compressing the video data and transmitting the compressed video data.
6. Method according to claim 5, wherein, if no timely return message from the receiver is received back, or if the return message indicates that the receiver will not be capable of decompressing the compressed video data, it is decided, instead of compressing the pixel data of all original pixels, to select the pixel data of X*Y of the original pixels, and to use these pixel data to generate the uncompressed target video signal.
7. Method for receiving video data which has been transmitted using the transmission method according to claim 1, the receiving method comprising the steps of:
- receiving a video signal for frames having size X,Y, X indicating the number of pixels in the horizontal dimension of the frames and Y indicating the number of pixels in the vertical dimension of the frames, each pixel being associated with nb bits of digital data, so that the number of bits per frame is equal to X*Y*nb;
- from this received video signal, extracting the pixel data;
- decompressing the pixel data such that the number of bits of the decompressed data per frame is equal to X1*Y1*nb > X*Y*nb, wherein X1 and Y1 correspond to the horizontal and vertical dimension, respectively, of the frames of an original standard video format;
- from the decompressed data, calculating original pixel data of X1*Y1 pixels per frame, and using these original pixel data to generate an uncompressed original video signal corresponding to said original standard video format.
8. Standard video data signal, for frames having size X,Y, X indicating the number of pixels in the horizontal dimension of the frames and Y indicating the number of pixels in the vertical dimension of the frames, each pixel being associated with nb bits of digital data, so that the number of bits per frame is equal to X*Y*nb; wherein the pixel data are compressed data corresponding to original pixel data of an original standard video format in which the frames have size X1,Y1, wherein X1>X and/or Y1>Y.
9. Transmitting station (10) for a video transmission system (1), comprising:
- an input for receiving a source video signal (SS) for frames having size X1,Y1, X1 indicating the number of pixels in the horizontal dimension of the frames and Y1 indicating the number of pixels in the vertical dimension of the frames, each pixel being associated with nb bits of digital data, so that the number of bits per frame is equal to X1*Y1*nb;
- a compression block (12) for compressing the pixel data such that the number of bits of the compressed data per frame is equal to X*Y*nb < X1*Y1*nb, wherein X and Y are chosen such as to correspond to the horizontal and vertical dimension, respectively, of the frames of a target standard video format;
- a formatting block for formatting the compressed data into an uncompressed target video signal corresponding to said target standard video format;
- transmission means (15, 16) for transmitting the uncompressed target video signal over a data link (20).
10. Transmitting station according to claim 9, wherein the transmission means (15, 16) comprise a transmission format modeling block (15) for translating the uncompressed target video signal to a signal format corresponding to the data link (20).
11. Transmitting station according to claim 10, wherein the transmission means (15, 16) further receive an audio signal contained in the source video signal, and combine this audio signal in the target video signal to be transmitted.
12. Transmitting station according to claim 10, further comprising means for generating a control signal (CEC), wherein the transmission means (15, 16) further receive the control signal and combine this control signal in the target video signal to be transmitted.
13. Transmitting station according to claim 9, further comprising a conversion block (11) arranged before the compression block (12), for converting received RGB signals of the input signal (SS) to YUV signals.
14. Transmitting station according to claim 9, wherein the signal format is HDMI.
15. Transmitting station according to claim 9, adapted for performing the method of any of claims 1-6.
16. Receiving station (30) for a video transmission system (1), comprising:
- a receiving section (31) for receiving a video signal for frames having size X,Y, X indicating the number of pixels in the horizontal dimension of the frames and Y indicating the number of pixels in the vertical dimension of the frames, each pixel being associated with nb bits of digital data, so that the number of bits per frame is equal to X*Y*nb;
- a deformatter (35) for retrieving the pixel data from the video signal;
- a decompressor (36) for decompressing the pixel data such that the number of bits of the decompressed data per frame is equal to X1*Y1*nb > X*Y*nb, wherein X1 and Y1 correspond to the horizontal and vertical dimension, respectively, of the frames of an original standard video format;
17. Receiving station according to claim 16, further comprising backconversion means (38), for converting the decompressed YUV data back into RGB data.
18. Receiving station according to claim 16, further comprising a display device (33) for receiving the uncompressed original video signal.
19. Receiving station according to claim 16, further comprising error concealment means (37) arranged between the decompressor (36) and the conversion means (38), and receiving an error correction signal (CRC) from the deformatter (35); wherein the error concealment means (37) are adapted, in response to receiving an error correction signal (CRC) indicating a possibly defective segment of pixel data, to replace the possibly defective segment by a replacement segment of data.
20. Receiving station according to claim 19, wherein the replacement segment contains the pixel data of the segment directly above the possibly defective segment.
21. Receiving station according to claim 19, wherein the error concealment means (37) are adapted to calculate the pixel data of the replacement segment as the average of the pixel data of the segment directly above the possibly defective segment and the pixel data of the segment directly below the possibly defective segment.
22. Video transmission system (1), comprising a transmitting station (10), a receiving station (30), and a data link (20) coupling these two stations, wherein the transmitting station (10) is implemented in accordance with any of the previous claims 9-15.
23. Video transmission system according to claim 22, wherein the receiving station (30) is implemented in accordance with any of the previous claims 16-21.
24. Video transmission system according to claim 22, wherein the data link (20) is an HDMI link.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP06111504.4 2006-03-21
EP06111504 2006-03-21

Publications (1)

Publication Number Publication Date
WO2007107948A1 true WO2007107948A1 (en) 2007-09-27

Family

ID=38294213

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2007/050945 WO2007107948A1 (en) 2006-03-21 2007-03-19 Video transmission over a data link with limited capacity

Country Status (1)

Country Link
WO (1) WO2007107948A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5353059A (en) * 1992-01-09 1994-10-04 Sony United Kingdom Ltd. Data error concealment
US5537157A (en) * 1993-04-21 1996-07-16 Kinya Washino Multi-format audio/video production system
US20040008767A1 (en) * 2002-07-09 2004-01-15 Nec Corporation Video image data compression archiver and method for video image data compression

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
VAN DER VLEUTEN R J: "Lossless and Fine-Granularity Scalable Near-Lossless Color Image Compression", SYMPOSIUM ON INFORMATION THEORY IN THE BENELUX, XX, XX, 2 June 2004 (2004-06-02), pages 209 - 216, XP008081930 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113573098A (en) * 2021-07-06 2021-10-29 杭州海康威视数字技术股份有限公司 Image transmission method and device and electronic equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07735171

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1), EPO FORM 1205A SENT ON 15/12/08.

122 Ep: pct application non-entry in european phase

Ref document number: 07735171

Country of ref document: EP

Kind code of ref document: A1