US20060038878A1 - Data transmission method and data transmission system - Google Patents

Data transmission method and data transmission system

Info

Publication number
US20060038878A1
Authority
US
United States
Prior art keywords
network
data
signals
terminals
mcu
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/260,358
Inventor
Masatoshi Takashima
Hideyuki Narita
Haruyoshi Murayama
Yoshiyuki Ito
Daisuke Hiranaka
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to US11/260,358
Publication of US20060038878A1
Status: Abandoned

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/14 Systems for two-way working
    • H04N7/15 Conference systems
    • H04N7/152 Multipoint control units therefor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 Network streaming of media packets
    • H04L65/70 Media network packetisation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 Network streaming of media packets
    • H04L65/75 Media network packet handling
    • H04L65/765 Media network packet handling intermediate
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/236 Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
    • H04N21/23608 Remultiplexing multiplex streams, e.g. involving modifying time stamps or remapping the packet identifiers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/236 Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
    • H04N21/2365 Multiplexing of several video streams
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/266 Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N21/2665 Gathering content from different sources, e.g. Internet and satellite

Definitions

  • the present invention relates to a data transmission method and a data transmission system for transmitting a plurality of streams over a network when communicating, streaming, etc. among a plurality of terminals.
  • FIG. 1 is a view of an example of a television (TV) conference system.
  • a conference is simultaneously carried out by using five terminals of a terminal 1 to a terminal 5 with cameras CMR mounted thereon.
  • the terminals 1 to 5 are connected via switches SW 1 to SW 4 , routers RT 1 to RT 3 , and an ISDN network NTW 1 .
  • the signals (video and audio) from the terminals 1 to 5 are assembled at a multipoint control unit (MCU) 6 where they are combined to the signal to be reproduced at each terminal.
  • MCU multipoint control unit
  • the MCU 6 has mainly two functions. One is that of a block of a multipoint controller (MC) 6 A for controlling which terminals are attending the conference, while the other is that of a multipoint processor (MP) 6 B for combining signals assembled from multiple points for every terminal.
  • MC multipoint controller
  • MP multipoint processor
  • FIGS. 2A and 2B are views of the structure of data flowing over the network and an amount of transmission in the TV conference system of FIG. 1 .
  • signals (A 1 , V 1 ) transmitted from the terminal 1 pass through the switch SW 1 , router RT 1 , ISDN network NTW 1 , router RT 2 , and switch SW 4 to be transmitted to the MCU 6 .
  • the signals transmitted from the terminals 2 , 3 , 4 , and 5 are transmitted to the MCU 6 .
  • the signals assembled at the MCU 6 are combined as follows for every terminal.
  • A denotes audio
  • V denotes video
  • (,) of (A 1 ,V 1 ) indicates that each signal is separated
  • (-) of (A 1 - 2 - 3 - 4 ) indicates that the signals are combined.
  • “Combined” means that the signals are added in a baseband state (for example PCM) in the case of the audio.
  • the signals are combined to one having the same image size by reducing the sizes of the images in the baseband (pixel) state and joining the plurality of images with each other in one frame.
  • the data structure of the signal flowing over the network shown in FIG. 2A becomes as shown in FIG. 2B .
  • the data has the same amount of information before and after the composition.
  • the audio and video are formed into different packets and multiplexed (MUX) in packet units. Further, data is also multiplexed in addition to the audio and video.
  • MUX multiplexed
  • FIG. 3 is a view of the topology in the case where the TV conference system is applied to wireless telephones.
  • FIG. 3 is a view of an example of the configuration of multipoint communication.
  • the case where five terminals MT (Mobile Terminal) 1 to MT 5 communicate is shown.
  • the terminals MT 1 to MT 5 are connected via mobile base stations (MBS) 11 A to 11 D arranged in the network, mobile switching centers (MSC) 13 A to 13 C with the MCUs 12 A to 12 C connected thereto, and further gateway mobile switching centers (GMSC) 14 A to 14 E having home location registers (HLR).
  • MBS mobile base stations
  • MSC mobile switching centers
  • GMSC gateway mobile switching centers
  • the center portion is a network wherein the GMSCs 14 A to 14 E are connected in a so-called mesh state (for example circuit switched network or a packet switching network).
  • a great difference from the TV conference system resides in that there are many MCUs in the network, and the MCU located nearest each terminal multiplexes the signals of the multiple points.
  • in the MCU, in the same way as the time of the TV conference, there are the function of an MC and the function of an MP.
  • one MC among the plurality of MCUs controls one communication, while a plurality of MPs are controlled by this one MC and perform the multiplexing.
  • FIGS. 4A and 4B are views of the structure of the data flowing in the network and the amount of transmission in the multipoint communication of FIG. 3 .
  • the signals of the multiple points must all be transferred to a plurality of MCUs 12 A to 12 C. Accordingly, the signals (A 1 ,V 1 ) transmitted from for example the terminal MT 1 are transmitted to the MCU 12 A, MCU 12 B, and MCU 12 C.
  • the data structure of the signals (A 1 , V 1 ) becomes as shown in FIG. 4B .
  • the channel is narrow, so, unlike the time of the TV conference system, the image sent from each terminal is transmitted matching with the size after composition.
  • the data structure of this signal is indicated by numeral 15 in FIG. 4B .
  • the structure of the data flowing at this layer becomes the format as indicated by reference numeral 16 in FIG. 4B .
  • the amount of transmission also becomes 15 times this data structure.
  • the MCU side can multiplex the data in packet units without composition at the baseband level. This situation will be shown in FIGS. 5A and 5B .
  • this data structure becomes as shown in FIG. 5B .
  • the example of FIG. 5B shows the situation where the data is multiplexed in packet units.
  • FIG. 6 is a view of an example of the configuration of a conventional MCU used for multipoint communication.
  • the MCU 12 inserts delay units DLY 1 to DLY 5 for the signals to match their phases, then demultiplexes the plurality of signals at the demultiplexers DMX 1 to DMX 5 provided in the MP, passes them through a switcher (buffer) BF, and combines them at the multiplexers MX 1 to MX 5 for every terminal.
  • This delay amount and the demultiplexing and multiplexing at the MP are performed according to instructions of the MC.
  • FIGS. 7A, 7B and 7 C and FIGS. 8A, 8B and 8 C are views for explaining a situation where the video and audio are encoded and decoded.
  • FIG. 7A indicates a vertical synchronization signal V Sync.
  • the bold lines represent frames.
  • This frame is an access unit of the video. Generally, this is used as the unit for compression of the amount of information.
  • An I-picture is a picture compressed utilizing the correlation within a frame, while a P-picture is a picture compressed utilizing the correlation among frames.
  • the numerals after the picture type indicate the sequence of input frames.
  • the picture input as in 2 ) in FIG. 7A is encoded at a time 4 ).
  • FIG. 7A indicates the image of the buffer existing inside the encoder.
  • An inverse form to a virtual decoder buffer (VBV buffer) is described rather than the operation of the actual buffer. This corresponds to a virtual buffer existing inside a controller for controlling the rate.
  • this buffer is instantaneously generated when the encoding is terminated.
  • the bold line shows this situation.
  • FIG. 7A indicates the value of an STC (system time clock) when each access unit of the video is input to the encoder.
  • STC system time clock
  • This STC illustrates an absolute clock in a telephone network. All systems and terminals are assumed as operating with the same clock and time.
  • FIG. 7A indicates a DTS (decoding time stamp) which indicates the timing when the access unit finished being encoded at 5) starts being decoded at the reproduction side.
  • DTS decoding time stamp
  • This value is transmitted together when the access units of the video are formed into packets and multiplexed. Accordingly, for 10 pictures, a value such as STC_V 6 is transmitted. When the system reaches this time, the decoding is started.
  • in audio, unlike video, there is no concept of discrete access units such as frames. However, the audio is fetched in the form of access units for every certain number of samples.
  • FIG. 7B and FIG. 8A show the situation where an AAU (audio access unit) is input into the encoder.
  • 7) is the time when the AAU is input.
  • 9) is the time when the encoding is actually carried out, while 10) indicates the situation where data is generated in the virtual buffer at the instant when the encoding is completed.
  • 11) is the timing when each AAU is decoded. This value is multiplexed together with the AAU and transmitted to the decoder side.
  • the bit stream (compressed signal) generated in the buffer in 5 ) of FIG. 7A starts to be transmitted while the state of the buffer on the decoder side is monitored.
  • the data is accumulated in the decoder buffer.
  • 13) of FIG. 7C and FIG. 8B indicates the timing when the decoding is carried out matching with the time of the STC of 15).
  • the decoding is ideally instantaneously completed and, simultaneously with the completion of the decoding, the data is output as shown in 14 ).
  • the time from the instant when the signal is input to the encoder (terminal) to when the signal is output from the decoder (terminal) is defined as the end-to-end delay. Namely, that time is shown in 15) of FIG. 7C and FIG. 8B . This is the same for all access units of both video and audio.
  • the audio is transmitted with a delay so as to match the end-to-end-delay of the video.
  • the data is accumulated in the decoder buffer.
  • the timing of decoding is determined for every AAU shown in 17 ) matching with the value of the STC of 19 ) in FIG. 8C .
  • the decoding is instantaneously completed matching with this.
  • the data is output from the decoder immediately after that.
  • the information concerning the video and audio are synchronized by transmitting a time stamp such as a DTS. Further, they are controlled so that no underflow or overflow of the buffer occurs in the system.
  • by utilizing the DTS shown in FIGS. 7A to 7 C and FIGS. 8A to 8 C, it is possible to achieve synchronization among multiple points. This situation is shown in FIG. 9 .
  • the signals of the terminals MT 1 and MT 2 reach the MCU 12 A without passing through the GMSC 14 .
  • the signals of the terminals MT 3 , MT 4 , and MT 5 reach the MCU 12 A after passing through the GMSC 14 .
  • the signals of the terminal MT 3 (T 3 -AU 1 , AU 2 ), terminal MT 4 (T 4 -AU 1 , AU 2 ), and terminal MT 5 (T 5 -AU 1 , AU 2 ) arrive delayed in comparison with those of the terminal MT 1 (T 1 -AU 1 , AU 2 ) and the terminal MT 2 (T 2 -AU 1 , AU 2 ) as shown in the situation of the time difference of the packets transmitted from the terminals indicated by symbol TM 1 in FIG. 9 .
  • the MCU 12 A analyzes this DTS from each packet, controls the delay units in the MCU to match the phases of the signals from the terminals, then multiplexes and combines the signals.
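  • As a rough illustration of this phase matching (the terminal names, clock units, and packet contents below are invented for the example, not taken from the patent), a per-terminal delay can be derived from the DTS carried in each arriving packet so that all streams are aligned to the slowest arrival before multiplexing:

      # Hypothetical sketch of DTS-based phase alignment in an MCU.
      # arrivals: terminal id -> list of (arrival_time, dts) pairs, both expressed
      # on the shared system time clock (STC) that all terminals are assumed to use.
      def alignment_delays(arrivals):
          # How late each source runs: arrival time minus the time stamp it carries.
          lateness = {
              term: max(arr - dts for arr, dts in pairs)
              for term, pairs in arrivals.items()
          }
          slowest = max(lateness.values())
          # Faster sources are held in a delay unit until the slowest catches up.
          return {term: slowest - late for term, late in lateness.items()}

      print(alignment_delays({
          "MT1": [(100, 90), (200, 190)],   # direct path, about 10 units late
          "MT3": [(130, 90), (230, 190)],   # via the GMSC, about 40 units late
      }))                                   # -> {'MT1': 30, 'MT3': 0}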
  • the bandwidth is often not guaranteed. Therefore, it is an area where the quality of service (QoS) is low.
  • QoS quality of service
  • FIG. 10 is a view of an example of the configuration of a multipoint communication system utilizing only a network having a low QoS.
  • the terminals are indicated by MT 1 to MT 4 in the same way as the above.
  • 21 A to 21 C denote MBSs
  • 22 A and 22 B denote MSCs
  • 23 A and 23 B denote MCUs
  • 24 A and 24 B denote packet switching networks
  • 25 A and 25 B denote Internet exchanges (IX)
  • 26 denotes the Internet.
  • the signals originating from the terminals are all transmitted to the packet switching networks 24 A and 24 B at the MSCs 22 A and 22 B.
  • the MCUs 23 A and 23 B for multiplexing the signals of the multiple points are arranged in this packet switching networks.
  • the MCU 23 A preparing the signals to be transmitted to the terminals MT 1 and MT 2 receives the signals from the terminals MT 1 , MT 2 , MT 3 , and MT 4 , multiplexes them, and sends them to the terminals MT 1 and MT 2 .
  • the data of the terminals MT 3 and MT 4 are transmitted through the Internet 26 , so the transmission delay is greatly influenced in accordance with the state of congestion of the network.
  • an RTCP (real-time control protocol) is used to measure the RTT (round trip time), and the amount of the data transmitted over the network is controlled to ease the congestion state so as to avoid congestion.
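  • As a toy illustration of such rate control (the thresholds, names, and scaling policy here are invented for the example and are not specified by the patent), a sender might scale its rate from RTT samples like those an RTCP report could provide:

      # Illustrative only: scale the send rate from round-trip-time samples,
      # such as an RTCP receiver report might provide. All values are arbitrary.
      def adjust_rate(rate_kbps, rtt_ms, baseline_rtt_ms, max_kbps=384.0):
          if rtt_ms > 2.0 * baseline_rtt_ms:          # congestion building up
              return rate_kbps * 0.75                 # back off
          if rtt_ms < 1.2 * baseline_rtt_ms:          # network looks healthy
              return min(rate_kbps * 1.05, max_kbps)  # probe upward, capped
          return rate_kbps                            # otherwise hold steady

      rate = 384.0
      for rtt in (60, 65, 180, 200, 70):              # simulated RTT samples (ms)
          rate = adjust_rate(rate, rtt, baseline_rtt_ms=60)
      print(round(rate, 1))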
  • the continuity is more important in the audio.
  • when a plurality of signals (for example audio and video) are transmitted through a plurality of transmission lines, the delay values of the signals flowing over the transmission lines differ, so if the signals are recombined as they are, the plurality of signals will end up out of phase.
  • this will result in lip-sync deviation and an extremely strange feeling.
  • the signals could become even more out of phase than with lip-sync deviation.
  • a large delay unit becomes necessary somewhere in the system.
  • delivery of a continuous signal is made possible by this method.
  • An object of the present invention is to provide a data transmission method and a data transmission system capable of reducing traffic of signals flowing over the entire network.
  • a second object of the present invention is to provide a data transmission method and a data transmission system not requiring a large delay unit in the MCU performing the multiplexing and composition and therefore capable of reducing the hardware size.
  • a third object of the present invention is to provide a data transmission method and a data transmission system achieving an improvement of a utilization efficiency of the transmission bands and a reduction of the transmission cost.
  • a fourth object of the present invention is to provide a data transmission method and a data transmission system capable of synchronizing a plurality of signals transmitted over different bands.
  • a fifth object of the present invention is to provide a data transmission method and a data transmission system capable of avoiding the trouble of an enormous amount of signals being accumulated in the transmission line and the data not being updated for a long time.
  • a data transmission method for transmitting video data and audio data among multiple points from a plurality of terminals arranged in a network comprising transmitting the video data by multiplexing it as a stream encoded for every point and transmitting the audio data by combining at least one audio signal in a baseband in the network.
  • a data transmission method for transmitting data among multiple points from a plurality of terminals arranged in a network comprising shifting data in accordance with transmission delays when transmitting data at multiple points to the terminals.
  • identical packets are transmitted given different time stamps in accordance with the transmission delays in the network.
  • a data transmission method for transmitting a plurality of data streams among multiple points from a plurality of terminals arranged in a network, comprising transmitting each of the plurality of data streams through a network having a different property and recombining them after transmission over the networks and transmitting them to the terminals.
  • a network having a superior property is defined as a master network, and the others are defined as slave networks
  • the delay values of the slave networks are monitored based on the master network as the standard and the transmission of data through a slave network is restricted when that slave network has more than a certain delay in comparison with the master network.
  • when the data transmitted over the slave network employs a compression method utilizing correlation among access units, the data transmitted to the network is controlled for every unit of interruption of the correlation.
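  • For example, with a compression method using I-pictures and P-pictures, controlling the data for every unit of interruption of the correlation means that thinning may only start at an I-picture, so the receiver never gets a P-picture whose reference frame was dropped. A hypothetical sketch (the picture labels and the restriction policy are illustrative only):

      # Hypothetical sketch: when restriction is requested, start skipping only at
      # an access unit that breaks the prediction chain (an I-picture).
      def filter_stream(pictures, restrict):
          """pictures: iterable of (picture_type, payload); restrict: flags aligned
          with pictures saying whether transmission should currently be limited."""
          sent, skipping = [], False
          for (ptype, payload), limit in zip(pictures, restrict):
              if ptype == "I":           # the inter-frame correlation is interrupted
                  skipping = limit       # here, so the skip decision may change
              if not skipping:
                  sent.append((ptype, payload))
          return sent

      gop = [("I", "i0"), ("P", "p1"), ("P", "p2"), ("I", "i3"), ("P", "p4")]
      # restriction requested from the second picture onward:
      print(filter_stream(gop, [False, True, True, True, True]))
      # -> [('I', 'i0'), ('P', 'p1'), ('P', 'p2')]  (dropping starts at the next I)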
  • a data transmission method for transmitting a plurality of data streams having different degrees of importance among multiple points from a plurality of terminals arranged in a network comprising demultiplexing the plurality of data streams having different degrees of importance in the middle of the transmission line, transmitting data where continuity is regarded as important through a network having a higher quality of service, transmitting data for which discontinuity is permitted through a network having a lower quality of service, combining the plurality of data transmitted through the different networks again before the data arrive at the destination terminals, and transmitting the same to the terminals.
  • a data transmission system for transmitting video data and audio data among multiple points from a plurality of terminals arranged in a network, comprising a device for transmitting the video data by multiplexing it as a stream encoded for every point and transmitting the audio data by combining at least one audio signal in a baseband in the network.
  • a data transmission system for transmitting data among multiple points from a plurality of terminals arranged in a network, comprising a device for shifting data in accordance with transmission delays when transmitting data at multiple points to the terminals.
  • the device transmits identical packets given different time stamps in accordance with the transmission delays in the network.
  • a data transmission system for transmitting a plurality of data streams among multiple points from a plurality of terminals arranged in a network, comprising a plurality of networks having different properties, a first device for transmitting each of the plurality of data streams through a network having a different property, and a second device for recombining them after transmission over the networks and transmitting them to the terminals.
  • the first device monitors delay values of the slave networks based on the master network as the standard and restricts the transmission of data through a slave network when that slave network has more than a certain delay in comparison with the master network.
  • the first device controls the data transmitted to the network for every unit of interruption of the correlation.
  • when restricting the transmission of data to a slave network, the first device transmits data for restricting a frame rate and a bit rate from the network to the terminals.
  • a data transmission system for transmitting a plurality of data streams having different degrees of importance among multiple points from a plurality of terminals arranged in a network, comprising a first network having a higher quality of service, a second network having a lower quality of service than the first network, a first device for demultiplexing the plurality of data streams having different degrees of importance in the middle of the transmission line, transmitting data where continuity is regarded as important through the first network, transmitting data for which discontinuity is permitted through the second network, and a second device for combining the plurality of data transmitted through the different networks again before the data arrive at the destination terminals and transmitting the same to the terminals.
  • when combining and transmitting a plurality of data (signals), the system adds only the information concerning the audio at the baseband (PCM) to obtain a signal of one channel. It transmits the video by bundling a plurality of channels while keeping the packet form.
  • PCM pulse code modulation
  • the information concerning the audio is greatly reduced in size by assembling the same in one channel.
  • since the amount of information of the video is determined in accordance with the image size, even if the images are returned to the baseband and combined, the amount of information is not reduced. On the contrary, a high performance is required in order to return the images to their original form and combine them.
  • when transmitting the multiplexed signals to the terminals, by adding only the information concerning the audio at the baseband (PCM) and transmitting signals multiplexed by bundling a plurality of channels as the video while keeping the packet form, the amount of information flowing over the transmission lines is reduced.
  • the data transmitted from the multiple points are deliberately shifted in accordance with the transmission delays for reproduction and display instead of matching the phases of the signals input to the terminals at the same time.
  • a plurality of data signals having different degrees of importance are demultiplexed in the middle of the transmission lines, the signals where continuity is regarded as important (for example, information concerning the audio) are transmitted through the network having a higher QoS (quality of service), while signals for which discontinuity can be permitted (for example video) are transmitted through the network having a lower QoS.
  • before the signals arrive at the destination terminals, they are recombined and delivered to the terminals.
  • when the signals transmitted through the network having a lower QoS (for example the video) are delayed relative to the signals transmitted through the network having a high QoS (for example the audio), the timing of the display is shifted on the receiver side, that is, the value of the time stamp (for example the DTS) is delayed by that amount.
  • further, the transmission to the network is restricted on the transmitter side.
  • as the method for control, there are a method of lowering the bit rate and a method of lowering the frame rate.
  • in these cases, the end-to-end delay of the system becomes longer.
  • FIG. 1 is a view of an example of a television (TV) conference system
  • FIGS. 2A and 2B are views of the structure of the data flowing over a network in the TV conference system of FIG. 1 and the amount of transmission;
  • FIG. 3 is a view of a topology in a case where the TV conference system is applied to wireless telephones (example of the configuration of the multipoint communication);
  • FIGS. 4A and 4B are views of the structure of the data flowing in the network in the multipoint communication of FIG. 3 and the amount of transmission;
  • FIGS. 5A and 5B are views of another example of the structure of the data flowing in the network in the multipoint communication of FIG. 3 and the amount of transmission;
  • FIG. 6 is a view of an example of the configuration of a conventional MCU used for multipoint communication
  • FIGS. 7A to 7 C are views for explaining a situation where a video and an audio are encoded and decoded
  • FIGS. 8A to 8 C are views for explaining the situation where a video and an audio are encoded and decoded
  • FIG. 9 is a view for explaining a flow and a timing of the signals in a case where a DTS shown in FIGS. 7A to 7 C and FIGS. 8A to 8 C are utilized for the multipoint communication;
  • FIG. 10 is a view of an example of the configuration of multipoint communication utilizing only a network having a low QoS
  • FIG. 11 is a view for explaining a first embodiment of a data transmission system employing a data transmission method according to the present invention and a view of a signal transmission state of a case of multipoint communication;
  • FIG. 12 is a view for explaining the first embodiment of the data transmission system employing the data transmission method according to the present invention and a view of a state where signals in the case of multipoint communication are reproduced and displayed at terminals;
  • FIGS. 13A to 13 E are views for explaining a second embodiment of the data transmission system employing the data transmission method according to the present invention and a view of the situation of adding information concerning an audio at a baseband;
  • FIGS. 14A and 14B are views for explaining the second embodiment of the data transmission system employing the data transmission method according to the present invention, in which FIG. 14A is a view of an example of the configuration of a data transmission system 40 in a case where such multiplexed signals are transmitted among MCUs and between the MCU and the terminals, and FIG. 14B is a view of a data structure and an amount of transmission in the system of FIG. 14A ;
  • FIGS. 15A and 15B are views for explaining the second embodiment of the data transmission system employing the data transmission method according to the present invention, in which FIG. 15A is a view of an example of the configuration of a data transmission system 40 A in a case where the multiplexed signals are transmitted among MCUs and between the MCU and the terminals, and the MCU is not a layer of an MSC, but the layer of a GMSC, and FIG. 15B is a view of a data structure and an amount of transmission in the system of FIG. 15A ;
  • FIG. 16 is a view for explaining a third embodiment of the data transmission system employing the data transmission method according to the present invention and a view of a first example of the configuration thereof;
  • FIG. 17 is a view for explaining the third embodiment of the data transmission system employing the data transmission method according to the present invention and a view of a second example of the configuration thereof;
  • FIG. 18 is a view for explaining the third embodiment of the data transmission system employing the data transmission method according to the present invention and a view of a third example of the configuration thereof;
  • FIGS. 19A to 19 C are explanatory views of monitoring and control of a transmission delay according to a fourth embodiment.
  • FIG. 20 is a flowchart of monitoring and control of the transmission delay according to the fourth embodiment.
  • FIG. 11 and FIG. 12 are views for explaining a first embodiment of a data transmission system employing a data transmission method according to the present invention.
  • FIG. 11 shows a signal transmission state in a case of multipoint communication
  • FIG. 12 shows a state where the signals in the case of multipoint communication are reproduced and displayed at the terminals.
  • a data transmission system 30 according to the first embodiment is configured based on the following characteristics.
  • MT 31 to MT 35 denote mobile terminals (hereinafter, simply referred to as terminals), 31 A to 31 C denote MBSs (mobile base stations), 32 A to 32 C denote MSCs (mobile switching centers), 33 A to 33 C denote MCUs, and 34 denotes the gateway mobile switching center (GMSC).
  • MBSs mobile base stations
  • MSCs mobile switching centers
  • GMSC gateway mobile switching center
  • FIG. 11 shows the situation when the signals of the terminals MT 31 to MT 35 are transmitted to the MCU 33 A.
  • an audio signal A 1 and a video signal V 1 from the terminal MT 31 and an audio signal A 2 and a video signal V 2 from the terminal MT 32 pass through the MBS 31 A and the MSC 32 A, but do not pass through the GMSC 34 , and arrive at the MCU 33 A.
  • an audio signal A 3 and a video signal V 3 from the terminal MT 33 pass through the MBS 31 B, MSC 32 B, and the MCU 33 B and further pass through the GMSC 34 and arrive at the MCU 33 A
  • an audio signal A 4 and a video signal V 4 from the terminal MT 34 and an audio signal A 5 and a video signal V 5 from the terminal MT 35 pass through the MBS 31 C, MSC 32 C, and the MCU 33 C and further pass through the GMSC 34 and arrive at the MCU 33 A.
  • portion indicated by the symbol MT 31 of FIG. 11 indicates the situation of the time difference of the packets transmitted from the terminals MT 31 to MT 35 .
  • T 1 -AU 1 , T 1 -AU 2 , . . . , T 1 -AU 5 denote packet signals from the terminal MT 31
  • T 2 -AU 1 , T 2 -AU 2 , . . . , T 2 -AU 5 denote packet signals from the terminal MT 32
  • T 3 -AU 1 , T 3 -AU 2 denote packet signals from the terminal MT 33
  • T 4 -AU 1 , T 4 -AU 2 , . . . , T 4 -AU 4 denote packet signals from the terminal MT 34
  • T 5 -AU 1 , T 5 -AU 2 , . . . , T 5 -AU 4 denote packet signals from the terminal MT 35 .
  • T 31 denotes the timing of the transmission of each packet signal.
  • portion indicated by the symbol MT 32 in FIG. 11 indicates the situation where the transmitted packet signals are reproduced at the terminals MT 31 and MT 32 and displayed.
  • T 32 denotes the timing of reproduction and display.
  • the signals (A 3 , V 3 ) of the terminal MT 33 transmitted from the MCU 33 B to the MCU 33 A are transmitted delayed by exactly the delay 1 .
  • DTS → DTS + delay 1
  • the values of the DTSs of the signals (A 4 , V 4 ), (A 5 , V 5 ) of the terminals MT 34 and MT 35 transmitted from the MCU 33 C to the MCU 33 A are replaced as follows.
  • DTS → DTS + delay 2
  • in the MCU 33 A, it becomes possible to multiplex the signals sent from the terminals MT 33 , MT 34 , and MT 35 , without delaying the signals sent from the terminals MT 31 and MT 32 , and send them out via the MSC 32 A and MBS 31 A. They are reproduced and displayed at the terminals MT 31 and MT 32 according to the designated times.
  • the portion indicated by the symbol MT 32 indicates the situation where the signals are reproduced and displayed at the terminals MT 31 and MT 32
  • the portion indicated by the symbol MT 33 indicates a situation where the signals are reproduced and displayed at the terminal MT 33
  • the portion indicated by the symbol MT 34 indicates a situation where the signals are reproduced and displayed at the terminals MT 34 and MT 35 .
  • T 32 to T 34 denote the timings of the reproduction and display.
  • the timing when the packet signals (T 1 -AU 1 , T 1 -AU 2 , . . . T 1 -AU 5 ) of the terminal MT 31 are displayed differs according to the terminal.
  • the signals (A 1 , V 1 ) of the terminal MT 31 sent from the MCU 33 A to the MCU 33 A (MCU 1 ), MCU 33 B (MCU 2 ), and MCU 33 C (MCU 3 ) are transmitted so that the DTSs are replaced by the following three types according to the delay values of the transmission lines.
  • each of the MCU 33 A, MCU 33 B, and MCU 33 C is shown divided into two, but physically the parts are the same. Accordingly, a signal transmitted to itself will not pass through the GMSC 34 .
  • the display times at the terminals can be controlled by controlling the DTSs in accordance with where they are transmitted.
  • the multiplexing can be simply and smoothly achieved.
  • a signal from a near position can be output in the shortest time.
  • Quick display of even a signal from a far position is enabled after only the transmission delay.
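  • A minimal sketch of this time-stamp rewriting (the packet fields and delay figures are invented for the example; the source only specifies that the DTS is increased by the transmission delay toward each destination):

      # Hypothetical sketch of the first embodiment's time-stamp handling: the same
      # packet is fanned out to several MCUs, each copy carrying a DTS increased by
      # the transmission delay measured toward that MCU (DTS -> DTS + delay).
      def fan_out(packet, delays_to_mcus):
          copies = {}
          for mcu, delay in delays_to_mcus.items():
              copy = dict(packet)
              copy["dts"] = packet["dts"] + delay
              copies[mcu] = copy
          return copies

      pkt = {"source": "MT31", "dts": 1000, "payload": "T1-AU1"}
      print(fan_out(pkt, {"MCU1": 0, "MCU2": 40, "MCU3": 55}))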
  • FIGS. 13A to 13 E, FIGS. 14A and 14B , and FIGS. 15A and 15B are views for explaining a second embodiment of a data transmission system employing a data transmission method according to the present invention.
  • a data transmission system 30 A according to the second embodiment is configured based on the following characteristics.
  • FIGS. 13A to 13 E are views of the situation when adding the information concerning the audio at the baseband.
  • FIG. 13A shows the information concerning the audio
  • FIG. 13B shows the video information
  • FIGS. 13C and 13E show the data structures
  • FIG. 13D shows the flow of the signals.
  • the amount of information is greatly influenced by the image size. Therefore, even if they are combined at the baseband, the amount of information does not change so much.
  • the signals are multiplexed in packet units as encoded.
  • the amount of the information flowing over the transmission lines can be reduced.
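  • A simplified sketch of this handling in the MCU (the PCM samples and video packets are reduced to small lists and all names are illustrative; a real MCU would of course decode, mix, and re-encode the audio):

      # Hypothetical sketch: the audio of all other points is added as baseband PCM
      # into a single channel, while the video streams stay in encoded packet form
      # and are simply bundled together for the destination terminal.
      def combine_for_terminal(audio_pcm_by_point, video_packets_by_point, dest):
          mixed = None
          for point, samples in audio_pcm_by_point.items():
              if point == dest:
                  continue                  # a terminal does not get its own audio back
              mixed = list(samples) if mixed is None else [a + b for a, b in zip(mixed, samples)]
          bundle = {p: pkts for p, pkts in video_packets_by_point.items() if p != dest}
          return mixed, bundle              # one audio channel plus N video packet streams

      audio = {"MT31": [1, 2, 3], "MT32": [10, 20, 30], "MT33": [100, 200, 300]}
      video = {"MT31": ["v1-au1"], "MT32": ["v2-au1"], "MT33": ["v3-au1"]}
      print(combine_for_terminal(audio, video, dest="MT31"))
      # -> ([110, 220, 330], {'MT32': ['v2-au1'], 'MT33': ['v3-au1']})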
  • FIG. 14A is a view of an example of the configuration of a data transmission system 40 in a case when transmitting such multiplexed signals among the MCUs and between the MCU and the terminals described above, while FIG. 14B shows the data structure and the amount of transmission in the system of FIG. 14A .
  • the terminals are indicated by the symbols MT 31 to MT 35 in the same way as FIG. 11 and FIG. 12 .
  • 41A to 41 E denote MBSs
  • 42 A to 42 C denote MSCs
  • 43 A to 43 C denote MCUs
  • 44 A to 44 C denote GMSCs.
  • the MBSs 41 A and 41 B, MCU 43 A, and GMSC 44 A are connected to the MSC 42 A
  • the MBS 41 C, MCU 43 B, and GMSC 44 B are connected to the MSC 42 B
  • the MBS 41 D, MBS 41 E, MCU 43 C, and the GMSC 44 C are connected to the MSC 42 C.
  • the signals (A 1 , V 1 ) of the terminal MT 31 and the signals (A 2 , V 2 ) of the terminal MT 32 are multiplexed (A 1 - 2 , V 1 , 2 ), that is, transformed to the data structure as indicated by a symbol X 6 in Fig. 14 B, pass through the GMSCs 44 A to 44 C, MSC 42 B, and MSC 42 C, and are transmitted to the MCU 43 B and the MCU 43 C.
  • the signals are transformed to (A 2 - 3 - 4 - 5 , V 2 , 3 , 4 , 5 ) and (A 1 - 3 - 4 - 5 , V 1 , 3 , 4 , 5 ), that is, the data structure as indicated by numeral 411 in FIG. 14B , and transmitted.
  • the amount of transmission of the data flowing through the network is reduced in comparison with the conventional data transmission system shown in FIG. 4A .
  • FIG. 15A shows the case where the multiplexed signals are transmitted among the MCUs and between the MCU and the terminals as described above and shows an example of the configuration of a data transmission system 40 A in a case where the MCU is not a layer of the MSC, but the layer of the GMSC, while FIG. 15B shows the data structure and the amount of transmission in the system of FIG. 15A .
  • the transfer of the data between the MCU 43 A (MCU 1 ) and the MCU 43 B (MCU 2 ) in the data transmission system 40 A of FIG. 15A becomes as follows.
  • the amount of information of the signals flowing over the entire network can be reduced, and the traffic can be reduced.
  • FIG. 16 , FIG. 17 , and FIG. 18 are views for explaining a third embodiment of a data transmission system employing a data transmission method according to the present invention.
  • the data transmission system according to the third embodiment is configured based on the following characteristics.
  • the signals where continuity is regarded as important are transmitted over a network having a higher QoS (quality of service), while the signals for which discontinuity can be permitted (for example the video) are transmitted over a network having a lower QoS.
  • a network having a high QoS includes a circuit switched network at present, while a network having a low QoS includes a packet switching network.
  • the information concerning the audio is transmitted to the circuit switched network, and the information concerning the video is transmitted to the packet switching network.
  • the information concerning the audio has a smaller amount of information in comparison with the video, but the continuity is regarded as important. Conversely, in the video, the amount of information is large, but the continuity is not regarded as so important in comparison with the audio.
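  • The routing decision itself can be stated very compactly; a sketch (the stream descriptors and network labels are placeholders, not names used by the patent):

      # Hypothetical sketch: streams whose continuity matters are sent over the
      # high-QoS (e.g. circuit switched) network, the rest over the low-QoS
      # (packet switching) network, and they are recombined before the terminals.
      def route(streams):
          high_qos, low_qos = [], []
          for s in streams:
              (high_qos if s["continuity_important"] else low_qos).append(s["name"])
          return {"circuit_switched": high_qos, "packet_switched": low_qos}

      print(route([
          {"name": "audio", "continuity_important": True},
          {"name": "video", "continuity_important": False},
      ]))   # -> {'circuit_switched': ['audio'], 'packet_switched': ['video']}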
  • FIG. 16 is a view of a first example of the configuration of a data transmission system according to the third embodiment.
  • the terminals are indicated by the symbols MT 31 to MT 34 in the same way as FIG. 11 and FIG. 12 .
  • 51A to 51 C denote MBSs
  • 52 A and 52 B denote MSCs
  • 53 A and 53 B denote MCUs
  • 54 denotes a circuit switched network
  • 55 denotes a packet switching network.
  • GMSCs 541 and 542 having home location registers (HLR) are arranged.
  • the MBSs 51 A and 51 B, MCU 53 A, GMSC 541 of the circuit switched network 54 , and the packet switching network 55 are connected to the MSC 52 A, while the MBS 51 C, MCU 53 B, GMSC 542 of the circuit switched network 54 , and the packet switching network 55 are connected to the MSC 52 B.
  • the MSC 52 A when transferring signals containing information concerning for example the video and the audio from the terminal MT 31 and MT 32 side to the terminal MT 33 and MT 34 side, under the control of the MCU 53 A, the MSC 52 A transmits the information concerning the audio where continuity is regarded as important to the circuit switched network 54 having a higher QoS (quality of service) and transmits the video signal for which discontinuity can be permitted to the packet switching network 55 having a low QoS.
  • QoS quality of service
  • the information concerning the audio and the video signals transmitted through the circuit switched network 54 and the packet switching network 55 are combined to a single signal at the MSC 52 B and transmitted via the MBS 51 C to the terminals MT 33 and MT 34 .
  • in terms of the cost for the amount of transmission, the packet switching network 55 is currently more expensive than the circuit switched network 54 . However, if CoS (class of service) is introduced in the future, it is projected that the cost will be lowered all at once in the “best effort” region.
  • CoS class of service
  • FIG. 17 is a view of a second example of the configuration of a data transmission system according to the third embodiment.
  • the difference of this data transmission system 50 A according to the second example of the configuration from the first example of the configuration of FIG. 16 resides in that the Internet 56 is utilized for the network having a low QoS, and the Internet 56 is connected to the packet switching networks 55 A and 55 B via Internet exchanges (IX) 57 A and 57 B.
  • IX Internet exchanges
  • the information where continuity is regarded as important is transmitted to the circuit switched network 54 , while the information for which the continuity is not regarded as so important is transmitted through transmission lines formed by the packet switching networks 55 A and 55 B and the Internet 56 .
  • FIG. 18 is a view of a third example of the configuration of the data transmission system according to the third embodiment.
  • This third example of the configuration assumes the case of international roaming.
  • the data transmission system 50 B is configured by the packet switching networks 55 A and 55 B of the systems A and B connected by the Internet 56 .
  • FIGS. 19A to 19 C and FIG. 20 are views for explaining a fourth embodiment of the present invention.
  • the transmission delays are monitored and controlled as explained below.
  • the delay between the signals transmitted over the network having a high QoS (for example the audio) and the signals transmitted over the network having a low QoS (for example the video) is monitored, and the transmission to the network having a low QoS is restricted when necessary.
  • as the method for control, there are the method of lowering the bit rate and the method of lowering the frame rate.
  • in these cases, the end-to-end delay of the system becomes longer.
  • FIGS. 19A to 19 C are explanatory views of the monitoring and the control of the transmission delays according to the fourth embodiment, while FIG. 20 is a flowchart of the monitoring and the control of the transmission delays according to the fourth embodiment.
  • 1) indicates a situation where the video is input to a terminal
  • 2) indicates a situation where the input signals are encoded in units of access units
  • 3) indicates a situation where the audio is input to a terminal
  • 4) indicates a situation where the input audio signals are encoded in units of access units
  • 5) and 6) indicate situations where the video and audio are reproduced and displayed at the same timing according to the end-to-end delay.
  • as for the signals transmitted over the network having a high QoS (for example the audio) and the signals transmitted over the network having a low QoS (for example the video), the delay value of the network having a low QoS is observed and monitored.
  • when it becomes larger than the estimated end-to-end delay value, it is possible to add this delay value to the DTS of the video transmitted to the network having a low QoS on the receiver side so as to display the data with a delay by that amount at the terminal.
  • when it is decided at step ST 2 that the delay value is larger than that of the previous time, it is decided whether or not the delay value is larger in comparison with the estimated end-to-end delay (ST 3 ).
  • when it is decided at step ST 3 that the delay value is larger than the estimated end-to-end delay value, it is assumed that the delay value is increasing and is exceeding the permissible value, the DTSs of the signals flowing through the network having a low QoS are replaced, and the transmission of the signals to the network having a low QoS is controlled (ST 4 ).
  • when it is decided at step ST 3 that the delay value is smaller than the estimated end-to-end delay value, it is assumed that the delay value is increasing but not exceeding the permissible value and the transmission of the signals to the network having a low QoS is controlled (ST 5 ).
  • when it is decided at step ST 2 that the delay value is smaller than that of the previous time, it is decided whether or not the delay value is larger than the estimated end-to-end delay value (ST 6 ).
  • when it is decided at step ST 6 that the delay value is larger than the estimated end-to-end delay value, it is assumed that the delay value is decreasing and exceeds the permissible value, the DTSs of the signals flowing through the network having a low QoS are replaced, and the control of the transmission of the signals to the network having a low QoS is eased (ST 7 ).
  • when it is decided at step ST 6 that the delay value is smaller than the estimated end-to-end delay value, it is assumed that the delay value is decreasing and does not exceed the permissible value, the DTSs of the signals flowing through the network having a low QoS are returned to the original values, and the transmission of the signals to the network having a low QoS is returned to the original level (ST 8 ).
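  • The decision logic of FIG. 20 can be summarized roughly as follows (a sketch only; the step numbers follow the flowchart described above, while the concrete actions stored in the state dictionary are placeholders):

      # Hypothetical sketch of the FIG. 20 monitoring loop (steps ST2 to ST8).
      def control_low_qos(delay, previous_delay, estimated_e2e, state):
          if delay > previous_delay:                          # ST2: delay increasing
              if delay > estimated_e2e:                       # ST3
                  state["dts_offset"] = delay - estimated_e2e # ST4: shift the DTSs
                  state["restrict"] = True                    #      and restrict sending
              else:
                  state["restrict"] = True                    # ST5: restrict sending only
          else:                                               # delay decreasing
              if delay > estimated_e2e:                       # ST6
                  state["dts_offset"] = delay - estimated_e2e # ST7: keep the DTS shift,
                  state["restrict"] = False                   #      ease the restriction
              else:
                  state["dts_offset"] = 0                     # ST8: DTSs and transmission
                  state["restrict"] = False                   #      back to original levels
          return state

      state = {"dts_offset": 0, "restrict": False}
      for prev, cur in [(50, 80), (80, 140), (140, 90), (90, 60)]:
          state = control_low_qos(cur, prev, estimated_e2e=100, state=state)
          print(cur, state)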
  • according to the fourth embodiment, it becomes possible to synchronize a plurality of signals (for example the audio and the video) transmitted over different bands.
  • the traffic of the signals flowing over the entire network can be reduced.
  • a large delay unit becomes unnecessary in the MCU for performing the multiplexing and composition, so the size of the hardware can be reduced.
  • the delay among multiple points at the time of multipoint communication can be made as short as possible.
  • all signals can be continuously transmitted without interpolation or thinning for the more important signals and the signals where continuity is regarded as important (for example the audio). Further, it becomes possible to keep the total transmission cost low by utilizing bands having a low QoS for the signals having a lower degree of importance and for which continuity is not regarded as important (for example the video).
  • the utilization efficiency can be improved from the viewpoint of the effective utilization of the bands.
  • the trouble of an enormous amount of signals building up in the transmission lines and the data not being updated for a long time can be avoided.

Abstract

A data transmission method and a data transmission system not requiring a large delay unit for multiplexing and composition and capable of reducing the hardware scale, wherein, when transmitting data among multiple points from a plurality of terminals arranged in a network and the data at multiple points are transmitted to the terminals, identical packets are given different time stamps in the network in accordance with the transmission delays, whereby data shifted in accordance with the transmission delays is transmitted.

Description

    RELATED APPLICATION DATA
  • The present application claims priority to Japanese Application No. P2000-081851 filed Mar. 17, 2000, which application is incorporated herein by reference to the extent permitted by law.
  • BACKGROUND OF THE INVENTION
  • The present invention relates to a data transmission method and a data transmission system for transmitting a plurality of streams over a network when communicating, streaming, etc. among a plurality of terminals.
  • Below, an explanation will be made of a conventional method for transmitting a plurality of streams over a network when communicating, streaming, etc. among a plurality of terminals in relation to the drawings.
  • FIG. 1 is a view of an example of a television (TV) conference system.
  • In this TV conference system, a conference is simultaneously carried out by using five terminals of a terminal 1 to a terminal 5 with cameras CMR mounted thereon.
  • The terminals 1 to 5 are connected via switches SW1 to SW4, routers RT1 to RT3, and an ISDN network NTW1.
  • The signals (video and audio) from the terminals 1 to 5 are assembled at a multipoint control unit (MCU) 6 where they are combined to the signal to be reproduced at each terminal.
  • The MCU 6 has mainly two functions. One is that of a block of a multipoint controller (MC) 6A for controlling which terminals are attending the conference, while the other is that of a multipoint processor (MP) 6B for combining signals assembled from multiple points for every terminal.
  • FIGS. 2A and 2B are views of the structure of data flowing over the network and an amount of transmission in the TV conference system of FIG. 1.
  • As shown in FIG. 2A, signals (A1, V1) transmitted from the terminal 1 pass through the switch SW1, router RT1, ISDN network NTW1, router RT2, and switch SW4 to be transmitted to the MCU 6.
  • Similarly, the signals transmitted from the terminals 2, 3, 4, and 5 are transmitted to the MCU 6. The signals assembled at the MCU 6 are combined as follows for every terminal.
      • Terminal 1: (A2-3-4-5, V2-3-4-5)
      • Terminal 2: (A1-3-4-5, V1-3-4-5)
      • Terminal 3: (A1-2-4-5, V1-2-4-5)
      • Terminal 4: (A1-2-3-5, V1-2-3-5)
      • Terminal 5: (A1-2-3-4, V1-2-3-4)
  • Here, A denotes audio, and V denotes video. Further, (,) of (A1,V1) indicates that each signal is separated, and (-) of (A1-2-3-4) indicates that the signals are combined. “Combined” means that the signals are added in a baseband state (for example PCM) in the case of the audio.
  • In the case of the video, it means that the signals are combined to one having the same image size by reducing the sizes of the images in the baseband (pixel) state and joining the plurality of images with each other in one frame.
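  • As a rough illustration of this video composition (the nested lists stand in for baseband pixel data and the sizes are arbitrary), four source frames can be reduced to quarter area and tiled into one frame of the original size:

      # Hypothetical sketch: shrink four source frames to half width and height and
      # tile them 2x2, so the composed frame has the same image size as each input.
      def shrink_half(frame):
          # Naive decimation: keep every second pixel in both directions.
          return [row[::2] for row in frame[::2]]

      def tile_2x2(frames):
          a, b, c, d = [shrink_half(f) for f in frames]
          top = [ra + rb for ra, rb in zip(a, b)]
          bottom = [rc + rd for rc, rd in zip(c, d)]
          return top + bottom

      def flat(value):                       # a 4x4 "image" of one constant value
          return [[value] * 4 for _ in range(4)]

      for row in tile_2x2([flat(1), flat(2), flat(3), flat(4)]):
          print(row)   # 1s and 2s in the top half, 3s and 4s in the bottom half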
  • The data structure of the signal flowing over the network shown in FIG. 2A becomes as shown in FIG. 2B.
  • Namely, the data has the same amount of information before and after the composition. The audio and video are formed into different packets and multiplexed (MUX) in packet units. Further, data is also multiplexed in addition to the audio and video.
  • When arranged in this way, it is understood that, as the amount of information of the signals flowing over the network of the TV conference system, signals of 20 times the data structure flow in all layers.
  • Next, a case where the TV conference system is applied to wireless telephones will be considered.
  • FIG. 3 is a view of the topology in the case where the TV conference system is applied to wireless telephones. In other words, FIG. 3 is a view of an example of the configuration of multipoint communication. In this example as well, the case where five terminals MT (Mobile Terminal) 1 to MT5 communicate is shown.
  • The terminals MT1 to MT5 are connected via mobile base stations (MBS) 11A to 11D arranged in the network, mobile switching centers (MSC) 13A to 13C with the MCUs 12A to 12C connected thereto, and further gateway mobile switching centers (GMSC) 14A to 14E having home location registers (HLR).
  • The center portion is a network wherein the GMSCs 14A to 14E are connected in a so-called mesh state (for example circuit switched network or a packet switching network).
  • A great difference from the TV conference system resides in that there are many MCUs in the network, and the MCU located nearest each terminal multiplexes the signals of the multiple points.
  • That is, in the MCU, in the same way as the time of the TV conference, there are the function of an MC and the function of an MP. However, one MC among the plurality of MCUs controls one communication, while a plurality of MPs are controlled by this one MC and perform the multiplexing.
  • FIGS. 4A and 4B are views of the structure of the data flowing in the network and the amount of transmission in the multipoint communication of FIG. 3.
  • As shown in FIG. 4A, unlike the TV conference system, there are a plurality of MCUs, so the signals of the multiple points must all be transferred to a plurality of MCUs 12A to 12C. Accordingly, the signals (A1,V1) transmitted from for example the terminal MT1 are transmitted to the MCU 12A, MCU 12B, and MCU 12C.
  • The data structure of the signals (A1, V1) becomes as shown in FIG. 4B. The channel is narrow, so, unlike the time of the TV conference system, the image sent from each terminal is transmitted matching with the size after composition.
  • Further, when looking at the MCU 12A, two patterns are combined in the following way from the collected five signals for the terminals MT1 and MT2:
      • MT1: (A2-3-4-5, V2-3-4-5)
      • MT2: (A1-3-4-5, V1-3-4-5)
  • The data structure of this signal is indicated by numeral 15 in FIG. 4B. This becomes the same as the combined one in the TV system. Note that, due to a difference in the thicknesses of the wireless or other channels, the size of the images, quality of audio, etc. are different from those of the TV conference utilizing an ISDN network.
  • In this way, behind the existence of the GMSCs, since the composed signals do not flow over the network, the structure of the data flowing at this layer becomes the format as indicated by reference numeral 16 in FIG. 4B. The amount of transmission also becomes 15 times this data structure.
  • In this way, it is understood that the amount of the data flowing over the entire network is improved a little in comparison with the TV conference by arranging a plurality of MCUs.
  • Further, by giving the terminal side the function of simultaneously decoding a plurality of streams, the MCU side can multiplex the data in packet units without composition at the baseband level. This situation will be shown in FIGS. 5A and 5B.
  • In this case, looking at the MCU 12A, the signals combined for the terminals MT1 and MT2 become as follows:
      • MT1: (A2, 3, 4, 5, V2, 3, 4, 5)
      • MT2: (A1, 3, 4, 5, V1, 3, 4, 5)
  • This data structure becomes as shown in FIG. 5B. The example of FIG. 5B shows the situation where the data is multiplexed in packet units.
  • Next, an explanation will be given of the operation of an MCU in multipoint communication.
  • FIG. 6 is a view of an example of the configuration of a conventional MCU used for multipoint communication.
  • Note that, in this example, the explanation will be made treating the three existing MCUs as one MCU 12.
  • There is a time difference by which each of the signals collected from the terminals MT1 to MT5 reaches the MCU 12.
  • In order to make these constant, the MCU 12 inserts delay units DLY1 to DLY5 for the signals to match their phases, then demultiplexes the plurality of signals at the demultiplexers DMX1 to DMX5 provided in the MP, passes them through a switcher (buffer) BF, and combines them at the multiplexers MX1 to MX5 for every terminal.
  • This delay amount and the demultiplexing and multiplexing at the MP are performed according to instructions of the MC.
  • Next, how this delay time is controlled will be explained.
  • FIGS. 7A, 7B and 7C and FIGS. 8A, 8B and 8C are views for explaining a situation where the video and audio are encoded and decoded.
  • (Explanation of Video Encoding)
  • First, an explanation will be made of the video encoding in relation to FIG. 7A.
  • 1) in FIG. 7A indicates a vertical synchronization signal V Sync. The bold lines represent frames. This frame is an access unit of the video and is generally used as the unit for compression of the amount of information. Further, according to the method of compression, there may be I-pictures and P-pictures. An I-picture is a picture compressed utilizing the correlation within a frame, while a P-picture is a picture compressed utilizing the correlation among frames. The numerals after the picture type indicate the sequence of input frames.
  • The picture input as shown in 2) in FIG. 7A is encoded at the time shown in 4).
  • 5) in FIG. 7A indicates the image of the buffer existing inside the encoder. An inverse form to a virtual decoder buffer (VBV buffer) is described rather than the operation of the actual buffer. This corresponds to a virtual buffer existing inside a controller for controlling the rate.
  • Accordingly, this buffer is instantaneously generated when the encoding is terminated. The bold line shows this situation.
  • 3) in FIG. 7A indicates the value of an STC (system time clock) when each access unit of the video is input to the encoder. This STC represents an absolute clock in the telephone network. All systems and terminals are assumed to be operating with the same clock and time.
  • 6) in FIG. 7A indicates a DTS (decoding time stamp), which indicates the timing when the access unit whose encoding was completed at 5) starts being decoded at the reproduction side.
  • This value is transmitted together when the access units of the video are formed into packets and multiplexed. Accordingly, for the I0 picture, a value such as STC_V6 is transmitted. When the system reaches this time, the decoding is started.
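  • The decode timing described above can be pictured with the following minimal Python sketch (illustrative only; the function name and data layout are assumptions): each access unit carries its DTS, and decoding is started only once the system time clock has reached that value.

```python
import heapq

def decode_schedule(access_units):
    """access_units: (dts, label) pairs as they are demultiplexed from the stream.
    Each access unit starts being decoded only when the system time clock (STC)
    has reached its DTS; the function returns the resulting (time, label) order."""
    queue = list(access_units)
    heapq.heapify(queue)        # earliest DTS first
    stc, out = 0.0, []
    while queue:
        dts, label = heapq.heappop(queue)
        stc = max(stc, dts)     # wait until the STC reaches the DTS
        out.append((stc, label))
    return out

# Usage sketch: video access units whose DTS values were carried in the packets.
print(decode_schedule([(6.0, "I0"), (7.0, "P1"), (8.0, "P2")]))
```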
  • (Explanation of Encoding of Audio Related Information)
  • Next, an explanation will be made of the audio encoding in relation to FIG. 7B and FIG. 8A.
  • In audio, unlike video, there is no concept of discrete access units such as frames. However, the audio is fetched in the form of access units each consisting of a certain number of samples.
  • 8) in FIG. 7B and FIG. 8A show the situation where an AAU (audio access unit) is input into the encoder. 7) is the time when the AAU is input. 9) is the time when the encoding is actually carried out, while 10) indicates the situation where data is generated in the virtual buffer at the instant when the encoding is completed. 11) is the timing when each AAU is decoded. This value is multiplexed together with the AAU and transmitted to the decoder side.
  • (Explanation of Video Decoding)
  • Next, an explanation will be made of the video decoding in relation to FIG. 7C and FIG. 8B.
  • The bit stream (compressed signal) generated in the buffer in 5) of FIG. 7A starts to be transmitted while the state of the buffer on the decoder side is monitored. The data is accumulated in the decoder buffer.
  • This situation is shown in 12) of FIG. 7C and FIG. 8B. Here, the state of the virtual buffer (VBV buffer) is illustrated.
  • 13) of FIG. 7C and FIG. 8B indicate the timing when the decoding is carried out matching with the time of the STC of 15). Here, it is supposed that the decoding is ideally instantaneously completed and, simultaneously with the completion of the decoding, the data is output as shown in 14).
  • Here, the time from the instant when a signal is input to the encoder (terminal) to when it is output from the decoder (terminal) is defined as the end-to-end delay. Namely, that time is shown in 15) of FIG. 7C and FIG. 8B. This is the same for all access units, both video and audio.
  • The state where the video and audio become out of phase is defined as “lip-sync deviation”. Deviation between the same video or between the same audio is defined as “jitter”.
  • (Explanation of Decoding of Audio Related Information)
  • Next, an explanation will be made of the audio decoding in relation to FIG. 8C.
  • As shown in 16) in FIG. 8C, the audio is transmitted with a delay so as to match the end-to-end delay of the video. The data is accumulated in the decoder buffer.
  • The timing of decoding is determined for every AAU shown in 17) matching with the value of the STC of 19) in FIG. 8C. The decoding is instantaneously completed matching with this. The data is output from the decoder immediately after that.
  • As described above, the information concerning the video and audio are synchronized by transmitting a time stamp such as a DTS. Further, they are controlled so that no underflow or overflow of the buffer occurs in the system.
  • By utilizing the DTS shown in FIGS. 7A to 7C and FIGS. 8A to 8C, it is possible to achieve synchronization among multiple points. This situation is shown in FIG. 9.
  • In the example of FIG. 9, the signals of the terminals MT1 and MT2 reach the MCU 12A without passing through the GMSC 14.
  • Contrary to this, the signals of the terminals MT3, MT4, and MT5 reach the MCU 12A after passing through the GMSC 14.
  • Accordingly, as shown by the time difference of the packets transmitted from the terminals indicated by the symbol TM1 in FIG. 9, the signals of the terminal MT3 (T3-AU1, AU2), terminal MT4 (T4-AU1, AU2), and terminal MT5 (T5-AU1, AU2) arrive delayed in comparison with those of the terminal MT1 (T1-AU1, AU2) and the terminal MT2 (T2-AU1, AU2).
  • The MCU 12A analyzes this DTS from each packet, controls the delay units in the MCU to match the phases of the signals from the terminals, then multiplexes and combines the signals.
  • In this way, it becomes possible to make the phases of the signals from all of the terminals completely match at each of the terminals MT1 and MT2 as shown in the situation of reproduction and display at each terminal shown by the reference symbol TM2 in FIG. 9.
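  • A hypothetical sketch of this phase matching is given below (Python, with assumed names and delay values, not the patent's own implementation): for access units that carry the same DTS, the MCU compares the arrival times and holds back the early arrivals until the latest one has arrived.

```python
from collections import defaultdict

def required_delays(observations):
    """observations: (terminal, dts, arrival_time) triples seen at the MCU.
    Access units carrying the same DTS must be released together, so the early
    arrivals are held back until the latest one for that DTS has arrived."""
    by_dts = defaultdict(dict)
    for terminal, dts, arrival in observations:
        by_dts[dts][terminal] = arrival
    delays = defaultdict(float)
    for per_terminal in by_dts.values():
        release = max(per_terminal.values())
        for terminal, arrival in per_terminal.items():
            delays[terminal] = max(delays[terminal], release - arrival)
    return dict(delays)

# Usage sketch matching FIG. 9: AU1 of every terminal carries the same DTS,
# but MT3 to MT5 arrive late because they pass through the GMSC.
obs = [("MT1", 6.0, 10.0), ("MT2", 6.0, 11.0),
       ("MT3", 6.0, 40.0), ("MT4", 6.0, 41.0), ("MT5", 6.0, 42.0)]
print(required_delays(obs))   # MT1 and MT2 need roughly 30 units of buffering
```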
  • Further, in recent years, Internet telephone and other services using the Internet have been started.
  • On the Internet, the bandwidth is often not guaranteed. Therefore, it is an area where the quality of service (QoS) is low. When using such a network, it is necessary to monitor the state of congestion and control the signal transmitted to the network in accordance with that state.
  • FIG. 10 is a view of an example of the configuration of a multipoint communication system utilizing only a network having a low QoS.
  • As a network having a low QoS, here, the case of utilizing the Internet is shown.
  • In FIG. 10, the terminals are indicated by MT1 to MT4 in the same way as the above. Further, 21A to 21C denote MBSs, 22A and 22B denote MSCs, 23A and 23B denote MCUs, 24A and 24B denote packet switching networks, 25A and 25B denote Internet exchanges (IX), and 26 denotes the Internet.
  • The signals originating from the terminals are all transmitted to the packet switching networks 24A and 24B at the MSCs 22A and 22B. The MCUs 23A and 23B for multiplexing the signals of the multiple points are arranged in these packet switching networks.
  • The MCU 23A, which prepares the signals to be transmitted to the terminals MT1 and MT2, receives the signals from the terminals MT1, MT2, MT3, and MT4, multiplexes them, and sends them to the terminals MT1 and MT2.
  • Here, the data of the terminals MT3 and MT4 are transmitted through the Internet 26, so the transmission delay is greatly influenced by the state of congestion of the network.
  • At this time, in order to confirm the congestion, an RTCP (real-time control protocol) is utilized to monitor the RTT (round trip time).
  • When the RTT widely fluctuates by more than the amount of allowable end-to-end jitter, the amount of the data transmitted over the network is controlled to ease the congestion state so as to avoid congestion.
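  • One possible form of this control is sketched below in Python (illustrative only; the thresholds, rates, and function name are assumptions, and the RTCP reporting itself is not modeled): when the fluctuation of the measured RTT exceeds the allowable jitter, the sending rate is cut back, and otherwise it is allowed to recover gradually.

```python
def adjust_send_rate(rtt_samples_ms, current_rate_kbps, allowed_jitter_ms,
                     backoff=0.75, recover=1.05, max_rate_kbps=384):
    """Sketch of the congestion control described above: the RTT reported via
    RTCP is monitored, and when its fluctuation exceeds the allowable end-to-end
    jitter the amount of data put onto the network is cut back; otherwise the
    rate is allowed to creep back up toward its maximum."""
    jitter = max(rtt_samples_ms) - min(rtt_samples_ms)
    if jitter > allowed_jitter_ms:
        return max(current_rate_kbps * backoff, 32.0)        # ease the congestion
    return min(current_rate_kbps * recover, max_rate_kbps)   # slowly restore the rate

# Usage sketch with recent RTT measurements in milliseconds.
print(adjust_send_rate([120, 180, 310], 256, allowed_jitter_ms=100))  # backs off
print(adjust_send_rate([120, 130, 125], 192, allowed_jitter_ms=100))  # recovers
```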
  • Summarizing the problem to be solved by the invention, there are the following problems.
  • (Problem 1)
  • Conventionally, all of the signals of the multiple points have been gathered at the MCU (multipoint control unit) for combining the signals of the multiple points which then composed the signals required for each terminal. For this reason, many signals had to be transmitted over the network.
  • (Problem 2)
  • Conventionally, when combining signals of multiple points, in order to match the times of the signals of the multiple points, the times taken for the transfer were canceled and the phases matched by inserting delay. In order to realize this, delay units compensating for large delays were necessary.
  • (Problem 3)
  • When transferring a plurality of signals such as video and audio signals among multiple points, signals of greater importance and signals of lesser importance from the viewpoint of continuity are frequently mixed together among the plurality of signals.
  • For example, when comparing the video and audio, the continuity is more important in the audio.
  • These signals are transmitted over bands having the same QoS, so the transmission cost becomes high.
  • Further, from the viewpoint of effective utilization of the bands, the utilization efficiency was low.
  • (Problem 4)
  • When utilizing different bands, a plurality of signals (for example audio and video) are transmitted through a plurality of transmission lines. At this time, since the delay values of the signals flowing over the transmission lines differ, if the signals are recombined as they are, the plurality of signals will end up out of phase. In the case of audio and video, this results in lip-sync deviation and an extremely strange feeling. In some cases, the signals could become even more out of phase than with lip-sync deviation.
  • (Problem 5)
  • When communicating by utilizing only a network having a low QoS, there is a possibility of large jitters or large delay occurring in accordance with the state of congestion of the network.
  • In order to enlarge the permissible value of such jitter in the network, a large delay unit (buffer) becomes necessary somewhere in the system. In one-way streaming, delivery of a continuous signal is made possible by this method.
  • Further, if a large delay is inserted in two-way communication, a deviation occurs in the responses to each other and conversation ends up becoming impossible.
  • Further, if a state of congestion occurs in the network, the audio will be interrupted. Not only is it then difficult to use this system as a communication tool, but there is also the problem that once congestion occurs, the system cannot be restored for a long time.
  • SUMMARY OF THE INVENTION
  • An object of the present invention is to provide a data transmission method and a data transmission system capable of reducing traffic of signals flowing over the entire network.
  • A second object of the present invention is to provide a data transmission method and a data transmission system not requiring a large delay unit in the MCU performing the multiplexing and composition and therefore capable of reducing the hardware size.
  • A third object of the present invention is to provide a data transmission method and a data transmission system achieving an improvement of a utilization efficiency of the transmission bands and a reduction of the transmission cost.
  • A fourth object of the present invention is to provide a data transmission method and a data transmission system capable of synchronizing a plurality of signals transmitted over different bands.
  • A fifth object of the present invention is to provide a data transmission method and a data transmission system capable of avoiding the trouble of an enormous amount of signals being accumulated in the transmission line and the data not being updated for a long time.
  • According to a first aspect of the invention, there is provided a data transmission method for transmitting video data and audio data among multiple points from a plurality of terminals arranged in a network, comprising transmitting the video data by multiplexing it as a stream encoded for every point and transmitting the audio data by combining at least one audio signal in a baseband in the network.
  • According to a second aspect of the invention, there is provided a data transmission method for transmitting data among multiple points from a plurality of terminals arranged in a network, comprising shifting data in accordance with transmission delays when transmitting data at multiple points to the terminals.
  • In the present invention, identical packets are transmitted given different time stamps in accordance with the transmission delays in the network.
  • According to a third aspect of the invention, there is provided a data transmission method for transmitting a plurality of data streams among multiple points from a plurality of terminals arranged in a network, comprising transmitting each of the plurality of data streams through a network having a different property and recombining them after transmission over the networks and transmitting them to the terminals.
  • Preferably, when a network having a superior property is defined as a master network, and the others are defined as slave networks, the delay values of the slave networks are monitored based on the master network as the standard and the transmission of data through a slave network is restricted when that slave network has more than a certain delay in comparison with the master network.
  • Further, preferably, when restricting the transmission of data to a slave network, if the data transmitted over the slave network employs a compression method utilizing correlation among access units, the data transmitted to the network is controlled for every unit of interruption of the correlation.
  • Further, in the present invention, when restricting the transmission of data to a slave network, data for restricting a frame rate and a bit rate is transmitted from the network to the terminals.
  • According to a fourth aspect of the invention, there is provided a data transmission method for transmitting a plurality of data streams having different degrees of importance among multiple points from a plurality of terminals arranged in a network, comprising demultiplexing the plurality of data streams having different degrees of importance in the middle of the transmission line, transmitting data where continuity is regarded as important through a network having a higher quality of service, transmitting data for which discontinuity is permitted through a network having a lower quality of service, combining the plurality of data transmitted through the different networks again before the data arrive at the destination terminals, and transmitting the same to the terminals.
  • According to a fifth aspect of the invention, there is provided a data transmission system for transmitting video data and audio data among multiple points from a plurality of terminals arranged in a network, comprising a device for transmitting the video data by multiplexing it as a stream encoded for every point and transmitting the audio data by combining at least one audio signal in a baseband in the network.
  • According to a sixth aspect of the invention, there is provided a data transmission system for transmitting data among multiple points from a plurality of terminals arranged in a network, comprising a device for shifting data in accordance with transmission delays when transmitting data at multiple points to the terminals.
  • In the present invention, the device transmits identical packets given different time stamps in accordance with the transmission delays in the network.
  • According to a seventh aspect of the invention, there is provided a data transmission system for transmitting a plurality of data streams among multiple points from a plurality of terminals arranged in a network, comprising a plurality of networks having different properties, a first device for transmitting each of the plurality of data streams through a network having a different property, and a second device for recombining them after transmission over the networks and transmitting them to the terminals.
  • Preferably, when a network having a superior property is defined as a master network, and the others are defined as slave networks, the first device monitors delay values of the slave networks based on the master network as the standard and restricts the transmission of data through a slave network when that slave network has more than a certain delay in comparison with the master network.
  • Further, preferably, when restricting the transmission of data to a slave network, if the data transmitted over the slave network employs a compression method utilizing correlation among access units, the first device controls the data transmitted to the network for every unit of interruption of the correlation.
  • Further, in the present invention, when restricting the transmission of data to a slave network, the first device transmits data for restricting a frame rate and a bit rate from the network to the terminals.
  • According to an eighth aspect of the invention, there is provided a data transmission system for transmitting a plurality of data streams having different degrees of importance among multiple points from a plurality of terminals arranged in a network, comprising a first network having a higher quality of service, a second network having a lower quality of service than the first network, a first device for demultiplexing the plurality of data streams having different degrees of importance in the middle of the transmission line, transmitting data where continuity is regarded as important through the first network, transmitting data for which discontinuity is permitted through the second network, and a second device for combining the plurality of data transmitted through the different networks again before the data arrive at the destination terminals and transmitting the same to the terminals.
  • According to the present invention, when combining and transmitting a plurality of data (signals), the system adds only the information concerning the audio at the baseband (PCM) to obtain a signal of one channel. It transmits the video by bundling a plurality of channels while keeping the packet form.
  • At that time, the information concerning the audio is greatly reduced in size by assembling the same in one channel. Note that, in the video, since the amount of information is determined in accordance with the image size, even if the images are returned to the baseband and combined, the amount of information is not reduced. On the contrary, a high performance is required in order to return the images to their original form and combine them.
  • Further, by sending the data signals as described above to each other when transferring the data required for multiplexing, the amount of information of the signals flowing among the MCUs is reduced.
  • Further, when transmitting the multiplexed signals to the terminals, by adding only the information concerning the audio at the baseband (PCM) and transmitting signals multiplexed by bundling a plurality of channels as the video while keeping the packet form, the amount of information flowing over the transmission lines is reduced.
  • Due to this, the traffic of the signals flowing over the entire network can be reduced.
  • Further, according to the present invention, the data transmitted from the multiple points are deliberately shifted in accordance with the transmission delays for reproduction and display instead of matching the phases of the signals input to the terminals at the same time.
  • In this case, when for example transmitting the same access units to a plurality of multipoint control devices, they are transmitted by adding different delay values to the time stamps in accordance with the transmission delays.
  • Further, according to the present invention, a plurality of data signals having different degrees of importance (for example the video and the audio) are demultiplexed in the middle of the transmission lines, the signals where continuity is regarded as important (for example, information concerning the audio) are transmitted through the network having a higher QoS (quality of service), while signals for which discontinuity can be permitted (for example video) are transmitted through the network having a lower QoS.
  • Further, before the signals arrive at the destination terminals, they are recombined and delivered to the terminals.
  • Further, the signals transmitted through the network having a high QoS (for example the audio) are used as the reference, and the signals transmitted through the network having a lower QoS (for example the video) are multiplexed and combined matching with the display time of the former and transmitted to the target terminals.
  • Further, according to the present invention, where the signals transmitted over the network having a lower QoS (for example the video) are delayed more than a certain predetermined level in comparison with the signals transmitted over the network having a high QoS (for example the audio), the timing of the display is shifted on the receiver side. For this purpose, the value of the time stamp (for example DTS) is delayed by that amount.
  • Further, where the signals transmitted over the network having a lower QoS (for example the video) are delayed more than a certain predetermined level in comparison with the signals transmitted over the network having a high QoS (for example the audio), the transmission to the network is restricted on the transmitter side.
  • As the method for control, there are a method of lowering the bit rate and a method of lowering the frame rate.
  • Further, when the congestion of the network having a low QoS is not eased, the end-to-end delay of the system is lengthened.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • These and other objects and features of the present invention will become clearer from the following description of the preferred embodiments given with reference to the accompanying drawings, in which:
  • FIG. 1 is a view of an example of a television (TV) conference system;
  • FIGS. 2A and 2B are views of the structure of the data flowing over a network in the TV conference system of FIG. 1 and the amount of transmission;
  • FIG. 3 is a view of a topology in a case where the TV conference system is applied to wireless telephones (example of the configuration of the multipoint communication);
  • FIGS. 4A and 4B are views of the structure of the data flowing in the network in the multipoint communication of FIG. 3 and the amount of transmission;
  • FIGS. 5A and 5B are views of another example of the structure of the data flowing in the network in the multipoint communication of FIG. 3 and the amount of transmission;
  • FIG. 6 is a view of an example of the configuration of a conventional MCU used for multipoint communication;
  • FIGS. 7A to 7C are views for explaining a situation where a video and an audio are encoded and decoded;
  • FIGS. 8A to 8C are views for explaining the situation where a video and an audio are encoded and decoded;
  • FIG. 9 is a view for explaining a flow and a timing of the signals in a case where a DTS shown in FIGS. 7A to 7C and FIGS. 8A to 8C are utilized for the multipoint communication;
  • FIG. 10 is a view of an example of the configuration of multipoint communication utilizing only a network having a low QoS;
  • FIG. 11 is a view for explaining a first embodiment of a data transmission system employing a data transmission method according to the present invention and a view of a signal transmission state of a case of multipoint communication;
  • FIG. 12 is a view for explaining the first embodiment of the data transmission system employing the data transmission method according to the present invention and a view of a state where signals in the case of multipoint communication are reproduced and displayed at terminals;
  • FIGS. 13A to 13E are views for explaining a second embodiment of the data transmission system employing the data transmission method according to the present invention and a view of the situation of adding information concerning an audio at a baseband;
  • FIGS. 14A and 14B are views for explaining the second embodiment of the data transmission system employing the data transmission method according to the present invention, in which FIG. 14A is a view of an example of the configuration of a data transmission system 40 in a case where such multiplexed signals are transmitted among MCUs and between the MCU and the terminals, and FIG. 14B is a view of a data structure and an amount of transmission in the system of FIG. 14A;
  • FIGS. 15A and 15B are views for explaining the second embodiment of the data transmission system employing the data transmission method according to the present invention, in which FIG. 15A is a view of an example of the configuration of a data transmission system 40A in a case where the multiplexed signals are transmitted among MCUs and between the MCU and the terminals, and the MCU is not a layer of an MSC, but the layer of a GMSC, and FIG. 15B is a view of a data structure and an amount of transmission in the system of FIG. 15A;
  • FIG. 16 is a view for explaining a third embodiment of the data transmission system employing the data transmission method according to the present invention and a view of a first example of the configuration thereof;
  • FIG. 17 is a view for explaining the third embodiment of the data transmission system employing the data transmission method according to the present invention and a view of a second example of the configuration thereof;
  • FIG. 18 is a view for explaining the third embodiment of the data transmission system employing the data transmission method according to the present invention and a view of a third example of the configuration thereof;
  • FIGS. 19A to 19C are explanatory views of monitoring and control of a transmission delay according to a fourth embodiment; and
  • FIG. 20 is a flowchart of monitoring and control of the transmission delay according to the fourth embodiment.
  • DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • Below, preferred embodiments will be described with reference to the accompanying drawings.
  • First Embodiment
  • FIG. 11 and FIG. 12 are views for explaining a first embodiment of a data transmission system employing a data transmission method according to the present invention. FIG. 11 shows a signal transmission state in a case of multipoint communication, while FIG. 12 shows a state where the signals in the case of multipoint communication are reproduced and displayed at the terminals.
  • A data transmission system 30 according to the first embodiment is configured based on the following characteristics.
  • 1) The signals transmitted from multiple points are deliberately shifted for reproduction and display in accordance with the transmission delays instead of matching the phases of the data (signals) input to the terminals at the same time.
  • 2) In order to realize 1), when transmitting the same access units to a plurality of MCUs (multipoint control devices), different delay values are added to the DTS (decoding time stamp) in accordance with the transmission delays and transmitted.
  • In FIG. 11 and FIG. 12, MT31 to MT35 denote mobile terminals (hereinafter, simply referred to as terminals), 31A to 31C denote MBSs (mobile base stations), 32A to 32C denote MSCs (mobile switching centers), 33A to 33C denote MCUs, and 34 denotes the gateway mobile switching center (GMSC).
  • FIG. 11 shows the situation when the signals of the terminals MT31 to MT35 are transmitted to the MCU 33A.
  • Specifically, an audio signal A1 and a video signal V1 from the terminal MT31 and an audio signal A2 and a video signal V2 from the terminal MT32 pass through the MBS 31A and the MSC 32A, but do not pass through the GMSC 34, and arrive at the MCU 33A.
  • As opposed to this, an audio signal A3 and a video signal V3 from the terminal MT33 pass through the MBS 31B, MSC 32B, and the MCU 33B, while an audio signal A4 and a video signal V4 from the terminal MT34 and an audio signal A5 and a video signal V5 from the terminal MT35 pass through the MBS 31C, MSC 32C, and the MCU 33C; all of these further pass through the GMSC 34 and arrive at the MCU 33A.
  • Further, the portion indicated by the symbol MT31 of FIG. 11 indicates the situation of the time difference of the packets transmitted from the terminals MT31 to MT35.
  • (T1-AU1, T1-AU2, . . . , T1-AU5) denote packet signals from the terminal MT31, (T2-AU1, T2-AU2, . . . , T2-AU5) denote packet signals from the terminal MT32, (T3-AU1, T3-AU2) denote packet signals from the terminal MT33, (T4-AU1, T4-AU2, . . . , T4-AU4) denote packet signals from the terminal MT34, and (T5-AU1, T5-AU2, . . . , T5-AU4) denote packet signals from the terminal MT35.
  • Note that, T31 denotes the timing of the transmission of each packet signal.
  • Further, the portion indicated by the symbol MT32 in FIG. 11 indicates the situation where the transmitted packet signals are reproduced at the terminals MT31 and MT32 and displayed.
  • Note that, T32 denotes the timing of reproduction and display.
  • In the data transmission system 30 having the above configuration, the signals (A3, V3) of the terminal MT33 transmitted from the MCU 33B to the MCU 33A arrive delayed by exactly delay1.
  • For this reason, the value of the DTS of the signals (A3, V3) transmitted to the MCU 33A is set up as follows.
    DTS=DTS+delay1
  • Note that since a transmission line having a high QoS is assumed, the delay value is known in advance.
  • Similarly, in the MCU 33C, the values of the DTSs of the signals (A4, V4) and (A5, V5) of the terminals MT34 and MT35 transmitted from the MCU 33C to the MCU 33A are replaced as follows.
    DTS=DTS+delay2
  • By this, in the MCU 33A, it becomes possible to multiplex the signals sent from the terminals MT33, MT34, and MT35, without delaying the signals sent from the terminals MT31 and MT32, and send them out via the MSC 32A and MBS 31A. They are reproduced and displayed at the terminals MT31 and MT32 according to the designated times.
  • Similarly, the situation of the reproduction and display at the terminals MT31 to MT35 is shown in FIG. 12.
  • In FIG. 12, the portion indicated by the symbol MT32 indicates the situation where the signals are reproduced and displayed at the terminals MT31 and MT32, the portion indicated by the symbol MT33 indicates a situation where the signals are reproduced and displayed at the terminal MT33, and the portion indicated by the symbol MT34 indicates a situation where the signals are reproduced and displayed at the terminals MT34 and MT35.
  • Note that, T32 to T34 denote the timings of the reproduction and display.
  • In the example of FIG. 12, the timing when the packet signals (T1-AU1, T1-AU2, . . . T1-AU5) of the terminal MT31 are displayed differs according to the terminal.
  • In order to realize this, the signals (A1, V1) of the terminal MT31 sent from the MCU 33A to the MCU 33A (MCU 1), MCU 33B (MCU 2), and MCU 33C (MCU 3) are transmitted so that the DTSs are replaced by the following three types according to the delay values of the transmission lines.
      • MCU 1-->MCU 1: DTS=DTS+0
      • MCU 1-->MCU 2: DTS=DTS+delay1
      • MCU 1-->MCU 3: DTS=DTS+delay2
  • Note that, in FIG. 12, each of the MCU 33A, MCU 33B, and MCU 33C is shown divided into two, but physically the parts are the same. Accordingly, a signal transmitted to itself will not pass through the GMSC 34.
  • In this way, according to the present first embodiment, even for the same signals, the display times at the terminals can be controlled by controlling the DTSs in accordance with where they are transmitted. By this, in each MCU, the multiplexing can be achieved simply and smoothly.
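  • A minimal Python sketch of this DTS control follows (illustrative; the delay values, names, and data layout are assumptions): the same access units are forwarded to each MCU with the known path delay added to their DTS.

```python
# Assumed one-way delays of the links from MCU 1 to each destination MCU
# (the text calls these delay1 and delay2); the numbers are hypothetical.
PATH_DELAY_MS = {"MCU1": 0.0, "MCU2": 15.0, "MCU3": 25.0}

def forward_with_dts_offset(access_units, destination):
    """Return copies of the (dts, payload) access units with the transmission
    delay toward the destination MCU added to the DTS, i.e. DTS = DTS + delay."""
    offset = PATH_DELAY_MS[destination]
    return [(dts + offset, payload) for dts, payload in access_units]

# Usage sketch: the signals of terminal MT31 forwarded to the three MCUs.
aus = [(100.0, "T1-AU1"), (133.3, "T1-AU2")]
for dest in PATH_DELAY_MS:
    print(dest, forward_with_dts_offset(aus, dest))
```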
  • Further, a signal from a near position can be output in the shortest time. Quick display of even a signal from a far position is enabled after only the transmission delay.
  • Accordingly, overall, communication with the shortest delay value becomes possible.
  • Second Embodiment
  • FIGS. 13A to 13E, FIGS. 14A and 14B, and FIGS. 15A and 15B are views for explaining a second embodiment of a data transmission system employing a data transmission method according to the present invention.
  • A data transmission system 30A according to the second embodiment is configured based on the following characteristics.
  • 1) When combining and transmitting a plurality of signals, only the information concerning the audio is added at the baseband (PCM) to obtain a signal of one channel. The video is transmitted by bundling a plurality of channels while keeping the packet form.
  • 2) When transferring the data required for the multiplexing among the MCUs, by sending the signals as in 1) to each other, the amount of information of the signals flowing among the MCUs is reduced.
  • 3) When transmitting the multiplexed signals from the MCU to the terminals, by transmitting the signals multiplexed as in 1), the amount of the information flowing through the transmission lines is reduced.
  • 4) By combining 2) and 3), the traffic of the signals flowing over the entire network can be reduced.
  • FIGS. 13A to 13E are views of the situation when adding the information concerning the audio at the baseband.
  • FIG. 13A shows the information concerning the audio, FIG. 13B shows the video information, FIGS. 13C and 13E show the data structures, and FIG. 13D shows the flow of the signals.
  • As shown in FIG. 13A, by adding the audio, audio of a plurality of channels can be converted to one channel. When it is not necessary to demultiplex and reproduce the same later, the amount of information can be reduced by combining them.
  • In the video, spatial information cannot be superimposed at the same position. Therefore, the videos are viewed by placing them together in a composite image.
  • However, the amount of information is greatly influenced by the image size. Therefore, even if they are combined at the baseband, the amount of information does not change so much.
  • Therefore, in the video, as shown in FIG. 13B, the signals are multiplexed in packet units as encoded.
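  • The following Python sketch (illustrative only; the sample values and function names are assumptions) contrasts the two operations: the audio channels are added at the baseband into a single PCM channel, while the video streams are simply bundled in packet units as encoded.

```python
def mix_audio_baseband(channels):
    """Add the decoded PCM samples of several audio channels into one channel
    (16-bit samples clipped to the valid range), as pictured in FIG. 13A."""
    length = max(len(c) for c in channels)
    mixed = []
    for i in range(length):
        s = sum(c[i] for c in channels if i < len(c))
        mixed.append(max(-32768, min(32767, s)))
    return mixed

def bundle_video_packets(streams):
    """Keep each encoded video stream as it is and simply bundle the packets of
    the selected channels, as pictured in FIG. 13B (no baseband composition)."""
    return [pkt for stream in streams for pkt in stream]

# Usage sketch: two short PCM channels and two packetized video streams.
print(mix_audio_baseband([[1000, -2000, 30000], [500, 500, 5000]]))
print(bundle_video_packets([["V1-pkt1", "V1-pkt2"], ["V2-pkt1"]]))
```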
  • In this way, when the MCUs transmit the signals they have fetched to each other, transmitting them in such a combined state reduces the amount of information flowing through the network and thus reduces the traffic.
  • Further, even when transmitting the signals from the MCU to the terminals, by performing similar multiplexing, the amount of the information flowing over the transmission lines can be reduced.
  • FIG. 14A is a view of an example of the configuration of a data transmission system 40 in a case when transmitting such multiplexed signals among the MCUs and between the MCU and the terminals described above, while FIG. 14B shows the data structure and the amount of transmission in the system of FIG. 14A.
  • In the data transmission system 40 of FIG. 14A, the terminals are indicated by the symbols MT31 to MT35 in the same way as FIG. 11 and FIG. 12.
  • Further, in FIG. 14A, 41A to 41E denote MBSs, 42A to 42C denote MSCs, 43A to 43C denote MCUs, and 44A to 44C denote GMSCs.
  • Further, the MBSs 41A and 41B, MCU 43A, and GMSC 44A are connected to the MSC 42A, the MBS 41C, MCU 43B, and GMSC 44B are connected to the MSC 42B, and the MBS 41D, MBS 41E, MCU 43C, and the GMSC 44C are connected to the MSC 42C.
  • In the data transmission system 40 of FIG. 14A, when looking at the MCU 43A, the signals (A1, V1) of the terminal MT31 and the signals (A2, V2) of the terminal MT32 are multiplexed into (A1-2, V1, 2), that is, transformed to the data structure indicated by the symbol X6 in FIG. 14B, pass through the GMSCs 44A to 44C, MSC 42B, and MSC 42C, and are transmitted to the MCU 43B and the MCU 43C.
  • Further, for the terminals MT31 and MT32, the signals are transformed to (A2-3-4-5, V2, 3, 4, 5) and (A1-3-4-5, V1, 3, 4, 5), that is, the data structure as indicated by numeral 411 in FIG. 14B, and transmitted.
  • In this way, in the data transmission system 40, the amount of transmission of the data flowing through the network is reduced in comparison with the conventional data transmission system shown in FIG. 4A.
  • Further, FIG. 15A shows the case where the multiplexed signals are transmitted among the MCUs and between the MCU and the terminals as described above and shows an example of the configuration of a data transmission system 40A in a case where the MCU is not a layer of the MSC, but the layer of the GMSC, while FIG. 15B shows the data structure and the amount of transmission in the system of FIG. 15A.
  • The transfer of the data between the MCU 43A (MCU 1) and the MCU 43B (MCU 2) in the data transmission system 40A of FIG. 15A becomes as follows.
      • MCU 1-->MCU 2: (A1-2, V1,2): Data structure of numeral 421 of FIG. 15B
      • MCU 2-->MCU 1: (A3-4-5, V3,4,5): Data structure of numeral 422 of FIG. 15B
  • In this way, in the data transmission system 40A as well, the amount of information of the signals flowing over the entire network can be reduced, and the traffic can be reduced.
  • Third Embodiment
  • FIG. 16, FIG. 17, and FIG. 18 are views for explaining a third embodiment of a data transmission system employing a data transmission method according to the present invention.
  • The data transmission system according to the third embodiment is configured based on the following characteristics.
  • The signals where continuity is regarded as important (for example the information concerning the audio) are transmitted over a network having a higher QoS (quality of service), while the signals for which discontinuity can be permitted (for example the video) are transmitted over a network having a lower QoS.
  • A network having a high QoS includes a circuit switched network at present, while a network having a low QoS includes a packet switching network.
  • Therefore, in the third embodiment, the information concerning the audio is transmitted to the circuit switched network, and the information concerning the video is transmitted to the packet switching network.
  • The information concerning the audio has a smaller amount of information in comparison with the video, but the continuity is regarded as important. Conversely, in the video, the amount of information is large, but the continuity is not regarded as so important in comparison with the audio.
  • FIG. 16 is a view of a first example of the configuration of a data transmission system according to the third embodiment.
  • In this data transmission system 50 as well, the terminals are indicated by the symbols MT31 to MT34 in the same way as FIG. 11 and FIG. 12.
  • Further, in FIG. 16, 51A to 51C denote MBSs, 52A and 52B denote MSCs, 53A and 53B denote MCUs, 54 denotes a circuit switched network, and 55 denotes a packet switching network.
  • In the circuit switched network 54, GMSCs 541 and 542 having home location registers (HLR) are arranged.
  • The MBSs 51A and 51B, MCU 53A, GMSC 541 of the circuit switched network 54, and the packet switching network 55 are connected to the MSC 52A, while the MBS 51C, MCU 53B, GMSC 542 of the circuit switched network 54, and the packet switching network 55 are connected to the MSC 52B.
  • In this data transmission system 50, when transferring signals containing information concerning for example the video and the audio from the terminal MT31 and MT32 side to the terminal MT33 and MT34 side, under the control of the MCU 53A, the MSC 52A transmits the information concerning the audio where continuity is regarded as important to the circuit switched network 54 having a higher QoS (quality of service) and transmits the video signal for which discontinuity can be permitted to the packet switching network 55 having a low QoS.
  • Then, the information concerning the audio and the video signals transmitted through the circuit switched network 54 and the packet switching network 55 are combined to a single signal at the MSC 52B and transmitted via the MBS 51C to the terminals MT33 and MT34.
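  • A simplified Python sketch of this routing follows (illustrative; the packet layout and names are assumptions): the mixed stream is demultiplexed by importance, the audio part is handed to the high-QoS path, the video part to the low-QoS path, and the two are recombined in time-stamp order before delivery to the terminals.

```python
def split_by_importance(packets):
    """Demultiplex a mixed stream in the middle of the transmission line: audio
    (continuity important) is routed to the high-QoS circuit switched network,
    video to the low-QoS packet switching network.  A packet is assumed to be a
    (kind, dts, payload) triple."""
    audio = [p for p in packets if p[0] == "A"]
    video = [p for p in packets if p[0] == "V"]
    return audio, video

def recombine(audio, video):
    """Recombine the two partial streams at the receiving MSC, ordered again by
    their time stamps, before delivery to the destination terminals."""
    return sorted(audio + video, key=lambda p: p[1])

# Usage sketch.
stream = [("A", 0.0, "A-AU1"), ("V", 0.0, "V-AU1"),
          ("A", 33.3, "A-AU2"), ("V", 33.3, "V-AU2")]
audio_path, video_path = split_by_importance(stream)
print(recombine(audio_path, video_path))
```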
  • In this way, according to the data transmission system 50 of the third embodiment, since the information concerning the audio, where continuity is regarded as important, is allocated to a switching network having a high QoS but narrow bandwidth, and the information concerning the video, for which continuity is not regarded as so important, is allocated to a switching network having a low QoS but wide bandwidth, there are the advantages that the transmission cost can be greatly reduced and effective utilization of the network becomes possible.
  • When comparing the amount of transmission, the packet switching network 55 is more expensive than the circuit switched network 54. However, if CoS (class of service) is introduced in the future, it is projected that the cost will be lowered all at once in the “best effort” region.
  • FIG. 17 is a view of a second example of the configuration of a data transmission system according to the third embodiment.
  • The difference of this data transmission system 50A according to the second example of the configuration from the first example of the configuration of FIG. 16 resides in that the Internet 56 is utilized for the network having a low QoS, and the Internet 56 is connected to the packet switching networks 55A and 55B via Internet exchanges (IX) 57A and 57B.
  • In this data transmission system 50A as well, the information where continuity is regarded as important is transmitted to the circuit switched network 54, while the information for which the continuity is not regarded as so important is transmitted through transmission lines formed by the packet switching networks 55A and 55B and the Internet 56.
  • In the second example of the configuration, similar effects to the effects of the first example of configuration can be obtained.
  • FIG. 18 is a view of a third example of the configuration of the data transmission system according to the third embodiment.
  • This third example of the configuration assumes the case of international roaming.
  • Specifically, systems A and B resembling FIG. 16 are present in for example two countries. The data transmission system 50B is configured by the packet switching networks 55A and 55B of the systems A and B connected by the Internet 56.
  • Note that, in this third example of the configuration, as a network having a low QoS and a low cost, a path is formed that passes through the packet switching network from the layer of the GMSC, passes through the packet switching network of the other country via the Internet, and returns to the circuit switched network.
  • In the third example of the configuration as well, similar effects to the effects of the first example of the configuration mentioned above can be obtained.
  • Fourth Embodiment
  • FIGS. 19A to 19C and FIG. 20 are views for explaining a fourth embodiment of the present invention.
  • In the fourth embodiment, the data transmission system used is one, shown in FIG. 16 to FIG. 18, which allocates the information concerning the audio, where continuity is regarded as important, to a switching network having a high QoS but narrow bandwidth and allocates the information concerning the video, for which continuity is not regarded as so important, to a switching network having a low QoS but wide bandwidth.
  • Further, in the fourth embodiment, the transmission delays are monitored and controlled as explained below.
  • 1) Using the signals transmitted over the network having a high QoS (for example the audio) as a reference, the signals transmitted over the network having a low QoS (for example the video) are multiplexed and combined matching with the display time and transmitted to the intended terminals.
  • 2) When the signals transmitted over the network having a low QoS (for example the video) are delayed more than a certain predetermined level in comparison with the signals transmitted over the network having a high QoS (for example, the audio), the timings of the display are shifted at the receiver side. For this purpose, the values of the time stamp (for example DTS) are delayed by that amount.
  • 3) When the signals transmitted over the network having a low QoS (for example the video) are delayed more than a certain predetermined level in comparison with the signals transmitted over the network having a high QoS (for example the audio), the transmission to the network is controlled at the transmitter side.
  • As the method for control, there are the method of lowering the bit rate and the method of lowering the frame rate.
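  • When the video employs inter-frame compression, the frame rate should be lowered only at the points where the correlation is interrupted, as noted earlier. The following Python sketch (illustrative only; the GOP layout and function name are assumptions) drops whole groups of pictures rather than individual P-pictures.

```python
def thin_at_gop_boundaries(pictures, keep_every_nth_gop=2):
    """Reduce the frame rate of inter-frame coded video by discarding whole
    groups of pictures (an I picture and the P pictures that depend on it),
    so that the correlation chain is never broken mid-way."""
    gops, current = [], []
    for pic in pictures:              # pic is e.g. "I0", "P1", "P2", ...
        if pic.startswith("I") and current:
            gops.append(current)
            current = []
        current.append(pic)
    if current:
        gops.append(current)
    kept = [g for i, g in enumerate(gops) if i % keep_every_nth_gop == 0]
    return [pic for g in kept for pic in g]

# Usage sketch: every second GOP is dropped when the low-QoS network is congested.
seq = ["I0", "P1", "P2", "I3", "P4", "P5", "I6", "P7", "P8"]
print(thin_at_gop_boundaries(seq))   # ['I0', 'P1', 'P2', 'I6', 'P7', 'P8']
```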
  • Where the congestion of the network having a low QoS is not eased, the end-to-end delay of the system is lengthened.
  • FIGS. 19A to 19C are explanatory views of the monitoring and the control of the transmission delays according to the fourth embodiment, while FIG. 20 is a flowchart of the monitoring and the control of the transmission delays according to the fourth embodiment.
  • In FIG. 19A, 1) indicates a situation where the video is input to a terminal, 2) indicates a situation where the input signals are encoded in units of access units, 3) indicates a situation where the audio is input to a terminal, 4) indicates a situation where the input audio signals are encoded in units of access units, and 5) and 6) indicate situations where the video and audio are reproduced and displayed at the same timing according to the end-to-end delay.
  • In order to enable this, using the signals transmitted over the network having a high QoS (for example the audio) as the reference, the signals transmitted over the network having a low QoS (for example the video) are multiplexed and combined matching with the display time and transmitted to the intended terminals.
  • Further, as shown in FIGS. 19B and 19C, the delay value of the network having a low QoS is observed and monitored. When it becomes larger than the estimated end-to-end delay value, this delay value can be added to the DTS of the video transmitted over the network having a low QoS so that the receiver side displays the data at the terminal with a delay of that amount.
  • Further, where congestion occurs in the network having a low QoS, in order to quickly ease it, it is possible to decide whether the delay value is increasing and whether it is larger or smaller than the estimated end-to-end delay value, and thereby have the receiver side replace the value of the DTS as previously mentioned and have the transmitter side control the information to be transmitted to the network.
  • The flow of the above series of operations is shown in the flowchart of FIG. 20.
  • Namely, first, the delay values among the MCUs in the network having a low QoS are observed (ST1).
  • Next, it is decided whether or not a delay value is larger than that of the previous time (ST2).
  • When it is decided at step ST2 that a delay value is larger than that of the previous time, it is decided whether or not the delay value is larger in comparison with the estimated end-to-end delay (ST3).
  • When it is decided at step ST3 that the delay value is larger than the estimated end-to-end delay value, it is assumed that the delay value is increasing and exceeds the permissible value, so the DTSs of the signals flowing through the network having a low QoS are replaced and the transmission of the signals to the network having a low QoS is controlled (ST4).
  • When it is decided at step ST3 that the delay value is smaller than the estimated end-to-end delay value, it is assumed that the delay value is increasing but not exceeding the permissible value and the transmission of the signals to the network having a low QoS is controlled (ST5).
  • Further, when it is decided at step ST2 that the delay value is smaller than that of the previous time, it is decided whether or not the delay value is larger than the estimated end-to-end delay value (ST6).
  • When it is decided at step ST6 that the delay value is larger than the estimated end-to-end delay value, it is assumed that the delay value is decreasing and exceeds the permissible value, the DTSs of the signals flowing through the network having a low QoS are replaced, and the control of the transmission of the signals to the network having a low QoS is eased (ST7).
  • When it is decided at step ST6 that the delay value is smaller than the estimated end-to-end delay value, it is assumed that the delay value is decreasing and does not exceed the permissible value, so the DTSs of the signals flowing through the network having a low QoS are returned to their original values and the transmission of the signals to the network having a low QoS is returned to its original level (ST8).
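  • The decision flow of FIG. 20 can be summarized by the following Python sketch (illustrative only; the actions at ST4, ST5, ST7, and ST8 are only named, not implemented).

```python
def monitor_low_qos_delay(prev_delay, delay, estimated_e2e_delay):
    """Sketch of the flow of FIG. 20.  'delay' is the delay observed among the
    MCUs over the low-QoS network (ST1); the return value names the branch that
    would be taken.  The actions themselves (rewriting the DTSs, easing or
    restoring the transmission restriction) are only named here."""
    if delay > prev_delay:                       # ST2: delay is increasing
        if delay > estimated_e2e_delay:          # ST3
            return "ST4: replace DTSs and restrict transmission to the low-QoS network"
        return "ST5: restrict transmission to the low-QoS network"
    if delay > estimated_e2e_delay:              # ST6: delay is decreasing
        return "ST7: replace DTSs and ease the transmission restriction"
    return "ST8: restore the original DTSs and the original transmission level"

# Usage sketch (delays in ms, estimated end-to-end delay 200 ms).
for prev, now in [(150, 260), (150, 180), (260, 230), (230, 150)]:
    print(prev, "->", now, ":", monitor_low_qos_delay(prev, now, 200))
```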
  • According to the fourth embodiment, it becomes possible to synchronize a plurality of signals (for example the audio and the video) transmitted over different bands.
  • Summarizing the effects of the invention, as explained above, according to the present invention, the traffic of the signals flowing over the entire network can be reduced.
  • Further, according to the present invention, a large delay unit becomes unnecessary in the MCU for performing the multiplexing and composition, so the size of the hardware can be reduced.
  • Further, the delay among multiple points at the time of multipoint communication can be made as short as possible.
  • Further, according to the present invention, for more important signals where continuity is regarded as important (for example the audio), all of the signals can be transmitted continuously without interpolation or thinning. Further, it becomes possible to keep the total transmission cost low by utilizing bands having a low QoS for signals of lower importance for which continuity is not regarded as important (for example the video).
  • Further, the utilization efficiency can be improved from the viewpoint of the effective utilization of the bands.
  • Further, according to the present invention, it becomes possible to synchronize a plurality of signals (for example the audio and the video) transmitted over different bands.
  • Further, according to the present invention, the trouble of an enormous amount of signals building up in the transmission lines and the data not being updated for a long time can be avoided.
  • While the invention has been described with reference to specific embodiments chosen for purposes of illustration, it should be apparent that numerous modifications could be made thereto by those skilled in the art without departing from the basic concept and scope of the invention.

Claims (4)

1-7. (canceled)
8. A data transmission method for transmitting a plurality of data streams having different degrees of importance among multiple points from a plurality of terminals arranged in a network, comprising
demultiplexing said plurality of data streams having different degrees of importance in the middle of the transmission line,
transmitting data where continuity is regarded as important through a network having a higher quality of service and transmitting data for which discontinuity is permitted through a network having a lower quality of service, and
combining the plurality of data transmitted through the different networks again before the data arrive at the destination terminals and transmitting the same to the terminals.
9-15. (canceled)
16. A data transmission system for transmitting a plurality of data streams having different degrees of importance among multiple points from a plurality of terminals arranged in a network, comprising;
a first network having a higher quality of service,
a second network having a lower quality of service than the first network,
a first device for demultiplexing said plurality of data streams having different degrees of importance in the middle of the transmission line, transmitting data where continuity is regarded as important through the first network, transmitting data for which discontinuity is permitted through the second network, and a second device for combining the plurality of data transmitted through the different networks again before the data arrive at the destination terminals and transmitting the same to the terminals.
US11/260,358 2000-03-17 2005-10-27 Data transmission method and data trasmission system Abandoned US20060038878A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/260,358 US20060038878A1 (en) 2000-03-17 2005-10-27 Data transmission method and data trasmission system

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JPP2000-081851 2000-03-17
JP2000081851A JP4228505B2 (en) 2000-03-17 2000-03-17 Data transmission method and data transmission system
US09/811,099 US6987526B2 (en) 2000-03-17 2001-03-16 Data transmission method and data transmission system
US11/260,358 US20060038878A1 (en) 2000-03-17 2005-10-27 Data transmission method and data trasmission system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/811,099 Continuation US6987526B2 (en) 2000-03-17 2001-03-16 Data transmission method and data transmission system

Publications (1)

Publication Number Publication Date
US20060038878A1 true US20060038878A1 (en) 2006-02-23

Family

ID=18598723

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/811,099 Expired - Fee Related US6987526B2 (en) 2000-03-17 2001-03-16 Data transmission method and data transmission system
US11/260,358 Abandoned US20060038878A1 (en) 2000-03-17 2005-10-27 Data transmission method and data trasmission system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/811,099 Expired - Fee Related US6987526B2 (en) 2000-03-17 2001-03-16 Data transmission method and data transmission system

Country Status (4)

Country Link
US (2) US6987526B2 (en)
EP (1) EP1146740A3 (en)
JP (1) JP4228505B2 (en)
CN (1) CN1314762A (en)

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040224708A1 (en) * 2003-05-09 2004-11-11 Brabenac Charles L. Reducing interference from closely proximate wireless units
US20070206556A1 (en) * 2006-03-06 2007-09-06 Cisco Technology, Inc. Performance optimization with integrated mobility and MPLS
US7453835B1 (en) * 2005-04-14 2008-11-18 At&T Intellectual Property Ii, L.P. Arrangement for overlaying optical (such as FSO) and radio frequency (such as WiMAX) communications networks
US20090207233A1 (en) * 2008-02-14 2009-08-20 Mauchly J William Method and system for videoconference configuration
US20090216581A1 (en) * 2008-02-25 2009-08-27 Carrier Scott R System and method for managing community assets
US20090256901A1 (en) * 2008-04-15 2009-10-15 Mauchly J William Pop-Up PIP for People Not in Picture
US20100082557A1 (en) * 2008-09-19 2010-04-01 Cisco Technology, Inc. System and method for enabling communication sessions in a network environment
US20100225732A1 (en) * 2009-03-09 2010-09-09 Cisco Technology, Inc. System and method for providing three dimensional video conferencing in a network environment
US20100283829A1 (en) * 2009-05-11 2010-11-11 Cisco Technology, Inc. System and method for translating communications between participants in a conferencing environment
US20100302345A1 (en) * 2009-05-29 2010-12-02 Cisco Technology, Inc. System and Method for Extending Communications Between Participants in a Conferencing Environment
US20110037636A1 (en) * 2009-08-11 2011-02-17 Cisco Technology, Inc. System and method for verifying parameters in an audiovisual environment
US20110228096A1 (en) * 2010-03-18 2011-09-22 Cisco Technology, Inc. System and method for enhancing video images in a conferencing environment
USD682854S1 (en) 2010-12-16 2013-05-21 Cisco Technology, Inc. Display screen for graphical user interface
US8542264B2 (en) 2010-11-18 2013-09-24 Cisco Technology, Inc. System and method for managing optics in a video environment
US8599934B2 (en) 2010-09-08 2013-12-03 Cisco Technology, Inc. System and method for skip coding during video conferencing in a network environment
US8599865B2 (en) 2010-10-26 2013-12-03 Cisco Technology, Inc. System and method for provisioning flows in a mobile network environment
US8670019B2 (en) 2011-04-28 2014-03-11 Cisco Technology, Inc. System and method for providing enhanced eye gaze in a video conferencing environment
US8682087B2 (en) 2011-12-19 2014-03-25 Cisco Technology, Inc. System and method for depth-guided image filtering in a video conference environment
US8692862B2 (en) 2011-02-28 2014-04-08 Cisco Technology, Inc. System and method for selection of video data in a video conference environment
US8699457B2 (en) 2010-11-03 2014-04-15 Cisco Technology, Inc. System and method for managing flows in a mobile network environment
US8723914B2 (en) 2010-11-19 2014-05-13 Cisco Technology, Inc. System and method for providing enhanced video processing in a network environment
US8730297B2 (en) 2010-11-15 2014-05-20 Cisco Technology, Inc. System and method for providing camera functions in a video environment
US8786631B1 (en) 2011-04-30 2014-07-22 Cisco Technology, Inc. System and method for transferring transparency information in a video environment
US8896655B2 (en) 2010-08-31 2014-11-25 Cisco Technology, Inc. System and method for providing depth adaptive video conferencing
US8902244B2 (en) 2010-11-15 2014-12-02 Cisco Technology, Inc. System and method for providing enhanced graphics in a video environment
US8934026B2 (en) 2011-05-12 2015-01-13 Cisco Technology, Inc. System and method for video coding in a dynamic environment
US8947493B2 (en) 2011-11-16 2015-02-03 Cisco Technology, Inc. System and method for alerting a participant in a video conference
US9111138B2 (en) 2010-11-30 2015-08-18 Cisco Technology, Inc. System and method for gesture interface control
US9143725B2 (en) 2010-11-15 2015-09-22 Cisco Technology, Inc. System and method for providing enhanced graphics in a video environment
US9313452B2 (en) 2010-05-17 2016-04-12 Cisco Technology, Inc. System and method for providing retracting optics in a video conferencing environment
US9338394B2 (en) 2010-11-15 2016-05-10 Cisco Technology, Inc. System and method for providing enhanced audio in a video environment
US9681154B2 (en) 2012-12-06 2017-06-13 Patent Capital Group System and method for depth-guided filtering in a video conference environment
US9843621B2 (en) 2013-05-17 2017-12-12 Cisco Technology, Inc. Calendaring activities based on communication processing

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7068299B2 (en) 2001-10-26 2006-06-27 Tandberg Telecom As System and method for graphically configuring a video call
US6677978B2 (en) * 2002-01-31 2004-01-13 Forgent Networks, Inc. Method and system for automated call graph layout
JPWO2004019521A1 (en) 2002-07-31 2005-12-15 Sharp Corporation Data communication device, intermittent communication method thereof, program describing the method, and recording medium recording the program
US7761876B2 (en) * 2003-03-20 2010-07-20 Siemens Enterprise Communications, Inc. Method and system for balancing the load on media processors based upon CPU utilization information
KR100548383B1 (en) * 2003-07-18 2006-02-02 LG Electronics Inc. Digital video signal processing apparatus of mobile communication system and method thereof
CN100466671C (en) * 2004-05-14 2009-03-04 Huawei Technologies Co., Ltd. Method and device for speech switching
US7400340B2 (en) * 2004-11-15 2008-07-15 Starent Networks, Corp. Data mixer for portable communications devices
JP4894858B2 (en) * 2006-11-06 2012-03-14 Panasonic Corporation Receiver
US8169949B1 (en) * 2006-12-07 2012-05-01 Sprint Communications Company L.P. Audio/video/media handoff split and re-providing
DE102010007497A1 (en) 2010-02-09 2011-08-11 Thüringisches Institut für Textil- und Kunststoff-Forschung e.V., 07407 Heat-storing moldings
CN103210656B (en) * 2011-03-09 2016-08-17 Hitachi Maxell, Ltd. Image transmitting device, image transmitting method, video receiving device, and image receiving method
WO2013132289A1 (en) * 2012-03-06 2013-09-12 Nokia Corporation Re-selection optimization for packet and circuit switched connections
US9426423B2 (en) * 2012-11-01 2016-08-23 Polycom, Inc. Method and system for synchronizing audio and video streams in media relay conferencing
WO2015100290A1 (en) * 2013-12-23 2015-07-02 Yost David Arthur System for intelligible audio conversation over unreliable digital transmission media
CN106454474B (en) * 2016-10-08 2019-08-06 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Multimedia synchronous playback method, apparatus, and system
JP7092049B2 (en) * 2019-01-17 2022-06-28 Nippon Telegraph and Telephone Corporation Multipoint control method, apparatus, and program

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2646910B2 (en) * 1991-10-15 1997-08-27 Matsushita Electric Industrial Co., Ltd. Multipoint conference system
US5689553A (en) * 1993-04-22 1997-11-18 At&T Corp. Multimedia telecommunications network and service
DE69532640T2 (en) * 1994-09-16 2005-03-10 SBC Technology Resources, Inc., Austin Adaptive multiport video design and bridge system
US5844600A (en) * 1995-09-15 1998-12-01 General Datacomm, Inc. Methods, apparatus, and systems for transporting multimedia conference data streams through a transport network
US20010054071A1 (en) * 2000-03-10 2001-12-20 Loeb Gerald E. Audio/video conference system for electronic caregiving

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5796440A (en) * 1996-02-29 1998-08-18 Rupinski; Frederick A. Baseband video/audio/data transceiver
US5877821A (en) * 1997-01-30 1999-03-02 Motorola, Inc. Multimedia input and control apparatus and method for multimedia communications

Cited By (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7729711B2 (en) * 2003-05-09 2010-06-01 Intel Corporation Reducing interference from closely proximate wireless units
US20040224708A1 (en) * 2003-05-09 2004-11-11 Brabenac Charles L. Reducing interference from closely proximate wireless units
US7453835B1 (en) * 2005-04-14 2008-11-18 At&T Intellectual Property Ii, L.P. Arrangement for overlaying optical (such as FSO) and radio frequency (such as WiMAX) communications networks
US8472415B2 (en) 2006-03-06 2013-06-25 Cisco Technology, Inc. Performance optimization with integrated mobility and MPLS
US20070206556A1 (en) * 2006-03-06 2007-09-06 Cisco Technology, Inc. Performance optimization with integrated mobility and MPLS
US8797377B2 (en) 2008-02-14 2014-08-05 Cisco Technology, Inc. Method and system for videoconference configuration
US20090207233A1 (en) * 2008-02-14 2009-08-20 Mauchly J William Method and system for videoconference configuration
US20090216581A1 (en) * 2008-02-25 2009-08-27 Carrier Scott R System and method for managing community assets
US20090256901A1 (en) * 2008-04-15 2009-10-15 Mauchly J William Pop-Up PIP for People Not in Picture
US8390667B2 (en) 2008-04-15 2013-03-05 Cisco Technology, Inc. Pop-up PIP for people not in picture
US8694658B2 (en) 2008-09-19 2014-04-08 Cisco Technology, Inc. System and method for enabling communication sessions in a network environment
US20100082557A1 (en) * 2008-09-19 2010-04-01 Cisco Technology, Inc. System and method for enabling communication sessions in a network environment
US20100225732A1 (en) * 2009-03-09 2010-09-09 Cisco Technology, Inc. System and method for providing three dimensional video conferencing in a network environment
US8659637B2 (en) 2009-03-09 2014-02-25 Cisco Technology, Inc. System and method for providing three dimensional video conferencing in a network environment
US20100283829A1 (en) * 2009-05-11 2010-11-11 Cisco Technology, Inc. System and method for translating communications between participants in a conferencing environment
US20100302345A1 (en) * 2009-05-29 2010-12-02 Cisco Technology, Inc. System and Method for Extending Communications Between Participants in a Conferencing Environment
US8659639B2 (en) * 2009-05-29 2014-02-25 Cisco Technology, Inc. System and method for extending communications between participants in a conferencing environment
US9204096B2 (en) 2009-05-29 2015-12-01 Cisco Technology, Inc. System and method for extending communications between participants in a conferencing environment
US20110037636A1 (en) * 2009-08-11 2011-02-17 Cisco Technology, Inc. System and method for verifying parameters in an audiovisual environment
US9082297B2 (en) 2009-08-11 2015-07-14 Cisco Technology, Inc. System and method for verifying parameters in an audiovisual environment
US20110228096A1 (en) * 2010-03-18 2011-09-22 Cisco Technology, Inc. System and method for enhancing video images in a conferencing environment
US9225916B2 (en) 2010-03-18 2015-12-29 Cisco Technology, Inc. System and method for enhancing video images in a conferencing environment
US9313452B2 (en) 2010-05-17 2016-04-12 Cisco Technology, Inc. System and method for providing retracting optics in a video conferencing environment
US8896655B2 (en) 2010-08-31 2014-11-25 Cisco Technology, Inc. System and method for providing depth adaptive video conferencing
US8599934B2 (en) 2010-09-08 2013-12-03 Cisco Technology, Inc. System and method for skip coding during video conferencing in a network environment
US8599865B2 (en) 2010-10-26 2013-12-03 Cisco Technology, Inc. System and method for provisioning flows in a mobile network environment
US9331948B2 (en) 2010-10-26 2016-05-03 Cisco Technology, Inc. System and method for provisioning flows in a mobile network environment
US8699457B2 (en) 2010-11-03 2014-04-15 Cisco Technology, Inc. System and method for managing flows in a mobile network environment
US9338394B2 (en) 2010-11-15 2016-05-10 Cisco Technology, Inc. System and method for providing enhanced audio in a video environment
US8730297B2 (en) 2010-11-15 2014-05-20 Cisco Technology, Inc. System and method for providing camera functions in a video environment
US8902244B2 (en) 2010-11-15 2014-12-02 Cisco Technology, Inc. System and method for providing enhanced graphics in a video environment
US9143725B2 (en) 2010-11-15 2015-09-22 Cisco Technology, Inc. System and method for providing enhanced graphics in a video environment
US8542264B2 (en) 2010-11-18 2013-09-24 Cisco Technology, Inc. System and method for managing optics in a video environment
US8723914B2 (en) 2010-11-19 2014-05-13 Cisco Technology, Inc. System and method for providing enhanced video processing in a network environment
US9111138B2 (en) 2010-11-30 2015-08-18 Cisco Technology, Inc. System and method for gesture interface control
USD682854S1 (en) 2010-12-16 2013-05-21 Cisco Technology, Inc. Display screen for graphical user interface
US8692862B2 (en) 2011-02-28 2014-04-08 Cisco Technology, Inc. System and method for selection of video data in a video conference environment
US8670019B2 (en) 2011-04-28 2014-03-11 Cisco Technology, Inc. System and method for providing enhanced eye gaze in a video conferencing environment
US8786631B1 (en) 2011-04-30 2014-07-22 Cisco Technology, Inc. System and method for transferring transparency information in a video environment
US8934026B2 (en) 2011-05-12 2015-01-13 Cisco Technology, Inc. System and method for video coding in a dynamic environment
US8947493B2 (en) 2011-11-16 2015-02-03 Cisco Technology, Inc. System and method for alerting a participant in a video conference
US8682087B2 (en) 2011-12-19 2014-03-25 Cisco Technology, Inc. System and method for depth-guided image filtering in a video conference environment
US9681154B2 (en) 2012-12-06 2017-06-13 Patent Capital Group System and method for depth-guided filtering in a video conference environment
US9843621B2 (en) 2013-05-17 2017-12-12 Cisco Technology, Inc. Calendaring activities based on communication processing

Also Published As

Publication number Publication date
EP1146740A2 (en) 2001-10-17
EP1146740A3 (en) 2006-03-08
US6987526B2 (en) 2006-01-17
CN1314762A (en) 2001-09-26
JP2001268080A (en) 2001-09-28
US20020015108A1 (en) 2002-02-07
JP4228505B2 (en) 2009-02-25

Similar Documents

Publication Publication Date Title
US6987526B2 (en) Data transmission method and data transmission system
US8046815B2 (en) Optical network for bi-directional wireless communication
CN104737514B (en) Method and apparatus for distributing media content services
CN1941916B (en) Method and system for synchronizing packet data streams
KR101374408B1 (en) Method and system for synchronizing the output of terminals
Lindbergh The H.324 multimedia communication standard
JP2004536529A (en) Method and apparatus for continuously receiving frames from a plurality of video channels and alternately transmitting individual frames containing information about each of the video channels to each of a plurality of participants in a video conference
RU2634206C2 (en) Device and method for switching media streams in real time
AU2005259240A1 (en) Method for transmitting packets in a transmission system
JP2007325109A (en) Distribution server, network camera, distribution method, and program
KR102519381B1 (en) Method and apparatus for synchronously switching audio and video streams
US20060161676A1 (en) Apparatus for IP streaming capable of smoothing multimedia stream
JP2013062819A (en) Video signal communication system and communication method thereof
Kunić et al. Analysis of television technology transformation from SDI to IP production
JP4737266B2 (en) Data transmission method and data transmission system
JP4737265B2 (en) Data transmission method and data transmission system
Yamauchi et al. Audio and video over IP technology
Verbiest et al. Variable bit rate video coding in ATM networks
Johanson Designing an environment for distributed real-time collaboration
EP2068528A1 (en) Method and system for synchronizing the output of end-terminals
Parker An overview of new video techniques
JP4491448B2 (en) Call transfer method and call transfer system
EP2912817B1 (en) A method and apparatus for distributing media content services
JPS6148282A (en) Picture communication system
Nakanishi et al. An experiment on media synchronization of MPEG video and voice in a wireless LAN

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: EXPRESSLY ABANDONED -- DURING EXAMINATION