US20050259623A1 - Delivery of information over a communication channel - Google Patents

Delivery of information over a communication channel

Info

Publication number
US20050259623A1
US20050259623A1 (application US 11/129,625)
Authority
US
United States
Prior art keywords
bit rate
channels
data
constant bit
physical layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/129,625
Inventor
Harinath Garudadri
Phoom Sagetong
Sanjiv Nanda
Stein Lundby
Current Assignee
Qualcomm Inc
Original Assignee
Qualcomm Inc
Priority date
Filing date
Publication date
Application filed by Qualcomm Inc
Priority to US11/129,625
Assigned to QUALCOMM INCORPORATED (assignors: GARUDADRI, HARINATH; LUNDBY, STEIN ARNE; NANDA, SANJIV; SAGETONG, PHOOM)
Publication of US20050259623A1
Priority to US14/499,954 (published as US10034198B2)
Legal status: Abandoned

Classifications

    • H: ELECTRICITY; H04: ELECTRIC COMMUNICATION TECHNIQUE (H04L transmission of digital information, e.g. telegraphic communication; H04N pictorial communication, e.g. television; H04W wireless communication networks)
    • H04W28/06 Optimizing the usage of the radio link, e.g. header compression, information sizing, discarding information
    • H04W28/065 Optimizing the usage of the radio link using assembly or disassembly of packets
    • H04W4/06 Selective distribution of broadcast services, e.g. multimedia broadcast multicast service [MBMS]
    • H04W72/044 Wireless resource allocation based on the type of the allocated resource
    • H04W72/1263 Mapping of traffic onto schedule, e.g. scheduled allocation or multiplexing of flows
    • H04W80/00 Wireless network protocols or protocol adaptations to wireless operation
    • H04W84/04 Large scale networks; Deep hierarchical networks
    • H04W88/02 Terminal devices
    • H04W88/181 Transcoding devices; Rate adaptation devices
    • H04L47/36 Flow control; Congestion control by determining packet size, e.g. maximum transfer unit [MTU]
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/1101 Session protocols
    • H04L65/75 Media network packet handling
    • H04L65/764 Media network packet handling at the destination
    • H04L65/80 Responding to QoS
    • H04L69/04 Protocols for data compression, e.g. ROHC
    • H04L69/161 Implementation details of TCP/IP or UDP/IP stack architecture; Specification of modified or new header fields
    • H04L69/166 IP fragmentation; TCP segmentation
    • H04L69/22 Parsing or analysis of headers
    • H04L69/321 Interlayer communication protocols or service data unit [SDU] definitions; Interfaces between layers
    • H04L9/40 Network security protocols
    • H04N19/102 Adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/115 Selection of the code volume for a coding unit prior to coding
    • H04N19/124 Quantisation
    • H04N19/152 Data rate or code amount at the encoder output by measuring the fullness of the transmission buffer
    • H04N19/164 Feedback from the receiver or from the transmission channel
    • H04N19/174 Adaptive coding where the coding unit is an image region, the region being a slice, e.g. a line of blocks or a group of blocks
    • H04N19/61 Transform coding in combination with predictive coding
    • H04N21/2381 Adapting the multiplex stream to a specific network, e.g. an Internet Protocol [IP] network
    • H04N21/41407 Specialised client platforms embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
    • H04N21/44004 Processing of video elementary streams involving video buffer management, e.g. video decoder buffer or video display buffer
    • H04N21/4788 Supplemental services communicating with other users, e.g. chatting
    • H04N21/6131 Network physical structure/signal processing adapted to the downstream path, involving transmission via a mobile phone network
    • H04N21/6181 Network physical structure/signal processing adapted to the upstream path, involving transmission via a mobile phone network
    • H04N21/6437 Real-time Transport Protocol [RTP]
    • H04N21/64707 Control signaling for transferring content from a first network to a second network, e.g. between IP and wireless

Definitions

  • The present invention relates generally to delivery of information over a communication system and, more specifically, to partitioning of information units to match a physical layer packet of a constant bit rate communication link.
  • Multimedia data can be in different formats and at different data rates, and the various communication networks use different mechanisms for transmission of real-time data over their respective communication channels.
  • Wireless communication systems have many applications including, for example, cellular telephones, paging, wireless local loops, personal digital assistants (PDAs), Internet telephony, and satellite communication systems.
  • A particularly important application is cellular telephone systems for mobile subscribers.
  • The term “cellular” system encompasses both cellular and personal communications services (PCS) frequencies.
  • Such systems use multiple access techniques including frequency division multiple access (FDMA), time division multiple access (TDMA), and code division multiple access (CDMA), under standards such as Advanced Mobile Phone Service (AMPS), Global System for Mobile (GSM), General Packet Radio Service (GPRS), Enhanced Data GSM Environment (EDGE), IS-95A, IS-95B, ANSI J-STD-008, and wideband CDMA (WCDMA).
  • There is increasing demand for streaming media, such as video, multimedia, and Internet Protocol (IP) data.
  • Customers desire to be able to receive video, such as a teleconference or television broadcasts, on their cell phone or other portable wireless communication device.
  • Other examples of the type of data that customers desire to receive with their wireless communication device include multimedia multicast/broadcast and Internet access.
  • A multimedia data source can produce data at a constant bit rate (CBR) or a variable bit rate (VBR).
  • Likewise, the communication channel can transmit data at a CBR or a VBR. Table 1 below lists various combinations of data sources and communication channels.
  • Communication channels typically transmit data in chunks, which we refer to as physical layer packets or physical layer frames.
  • The data generated by the multimedia source may be a continuous stream of bytes, such as a voice signal encoded using the mu-law or A-law. More frequently, the data generated by the multimedia source consists of groups of bytes, called data packets.
  • An MPEG-4 video encoder compresses visual information as a sequence of information units, which we refer to herein as video frames. Visual information is encoded at a constant video frame rate, typically 25 or 30 Hz, and must be rendered at the same rate by the decoder.
  • The video frame period is the time between two video frames and can be computed as the inverse of the video frame rate; for example, a video frame period of 40 ms corresponds to a video frame rate of 25 Hz.
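The inverse relationship stated above can be sketched with a tiny helper (an illustration with made-up names, not part of the patent):

```python
def video_frame_period_ms(frame_rate_hz: float) -> float:
    """The video frame period is the inverse of the video frame rate."""
    return 1000.0 / frame_rate_hz

# 25 Hz -> 40 ms period; 10 frames per second -> 100 ms period.
print(video_frame_period_ms(25))   # 40.0
print(video_frame_period_ms(10))   # 100.0
```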
  • Each video frame is encoded into a variable number of data packets, and all the data packets are transmitted to the decoder. If a portion of a data packet is lost, that packet becomes unusable by the decoder. On the other hand, the decoder may reconstitute the video frame even if some of the data packets are lost, but at the cost of some quality degradation in the resulting video sequence.
  • Each data packet therefore contains part of the description of the video frame, and the number of packets therefore varies from one video frame to another.
  • The communication system resources are efficiently utilized if the communication channel data rate is at least as fast as the source data rate, or if the two data rates are otherwise matched.
  • If the constant data rate of the source is the same as the constant data rate of the channel, then the resources of the channel can be fully utilized, and the source data can be transmitted with no delay.
  • If the source produces data at a variable rate and the channel transmits at a variable rate, then as long as the channel data rate can support the source data rate, the two data rates can be matched and, again, the resources of the channel are fully utilized and all of the source data can be transmitted with no delay.
  • Otherwise, the channel resources may not be as efficiently utilized as possible.
  • Statistical multiplexing gain (SMG) results when the same communication channel can be used, or multiplexed, between multiple users. For example, when a communication channel is used to transmit voice, the speaker does not usually talk continuously. That is, there will be a “talk” spurt from the speaker followed by silence (listening). If the ratio of time of the “talk” spurt to the silence were, for example, 1:1, then on average the same communication channel could be multiplexed and could support two users.
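The voice example above reduces to a simple formula: the gain is the inverse of each user's activity factor. A minimal sketch (illustrative names, not from the patent):

```python
def statistical_multiplexing_gain(activity_factor: float) -> float:
    """Average number of users one channel can support when each user
    is active only a fraction (0 < activity_factor <= 1) of the time."""
    return 1.0 / activity_factor

# A 1:1 talk-to-silence ratio means an activity factor of 0.5,
# so on average the channel can be shared by two users.
print(statistical_multiplexing_gain(0.5))  # 2.0
```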
  • Consider a variable bit rate stream, such as a multimedia data stream like video, transmitted over a communication channel that has a constant bit rate, such as a wireless radio channel with a constant bit rate assignment.
  • In this case, delay is typically introduced between the source and the communication channel, creating “spurts” of data so that the communication channel can be efficiently utilized.
  • The variable rate data stream is stored in a buffer and delayed long enough so that the output of the buffer can be emptied at a constant data rate, to match the channel's fixed data rate.
  • The buffer needs to store, or delay, enough data so that it is able to maintain a constant output without “emptying” the buffer, so the CBR communication channel is fully utilized and the communication channel's resources are not wasted.
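Under a simplified model (one variable-size frame arrives per period, the channel drains a constant number of bytes per period; the model and names are illustrative assumptions, not the patent's algorithm), the minimum amount of pre-buffered data is the worst-case shortfall of arrivals versus drain:

```python
def required_prebuffer_bytes(frame_sizes, channel_bytes_per_period):
    """Bytes that must be buffered before draining starts so that a
    constant-rate channel never empties the buffer mid-stream."""
    arrived = drained = deficit = 0
    for size in frame_sizes:
        arrived += size                      # one frame arrives per period
        drained += channel_bytes_per_period  # channel drains at constant rate
        deficit = max(deficit, drained - arrived)
    return deficit

# Frames of 100/300/100/100 bytes over a 150-bytes-per-period channel
# need 50 bytes pre-buffered to avoid underflow.
print(required_prebuffer_bytes([100, 300, 100, 100], 150))  # 50
```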
  • The encoder periodically generates video frames according to the video frame period.
  • Video frames consist of data packets, and the total amount of data in a video frame is variable.
  • The video decoder must render the video frames at the same video frame rate used by the encoder in order to ensure an acceptable result for the viewer.
  • The transmission of video frames, which have a variable amount of data, at a constant video frame rate over a constant rate communication channel can result in inefficiency. For example, if the total amount of data in a video frame is too large to be transmitted within the video frame period at the bit rate of the channel, then the decoder may not receive the entire frame in time to render it according to the video frame rate.
  • A traffic shaping buffer is used to smooth such large variations for delivery over a constant rate channel. This introduces a delay in rendering the video if a constant video frame rate is to be maintained by the decoder.
  • Another problem is that if data from multiple video frames is contained in a single physical layer packet, then the loss of a single physical layer packet results in degradation of multiple video frames. Even for the situations when the data packets are close to the physical layer packet sizes, the loss of one physical layer packet can result in the degradation of multiple video frames.
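The multi-frame degradation described above can be illustrated with a sketch that uses a hypothetical packing model (frame data packed back-to-back into fixed-size physical layer packets, with no alignment):

```python
def frames_hit_by_lost_plp(frame_sizes, plp_size, lost_index):
    """Count how many video frames have bytes inside one lost
    physical layer packet when frames are packed back-to-back."""
    lost_start = lost_index * plp_size
    lost_end = lost_start + plp_size
    hit = offset = 0
    for size in frame_sizes:
        # A frame is degraded if any of its bytes fall in the lost packet.
        if offset < lost_end and offset + size > lost_start:
            hit += 1
        offset += size
    return hit

# Three 100-byte frames over 150-byte packets: losing the first packet
# degrades two frames; with frame-aligned 100-byte packets, only one.
print(frames_hit_by_lost_plp([100, 100, 100], 150, 0))  # 2
print(frames_hit_by_lost_plp([100, 100, 100], 100, 0))  # 1
```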
  • Embodiments disclosed herein address the above stated needs by providing methods and apparatus for transmitting information units over a constant bit rate communication channel.
  • The techniques include partitioning the information units into data packets, wherein the sizes of the data packets are selected to match the physical layer packet sizes of a communication channel. For example, the number of bytes contained in each information unit may vary over time, and the number of bytes that each physical layer packet of a communication channel can carry may vary independently.
  • The techniques partition the information units, thereby creating a plurality of data packets. For example, an encoder may be constrained such that it encodes the information units into data packets of sizes that do not exceed, or “match,” the physical layer packet sizes of the communication channel. The data packets are then assigned to the physical layer packets of the communication channel.
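A minimal sketch of the partitioning constraint (illustrative names; the patent leaves the encoder's rate control unspecified): split an information unit so that no data packet exceeds the physical layer packet payload, letting each data packet map onto exactly one physical layer packet:

```python
def partition_information_unit(unit_bytes, plp_payload_bytes):
    """Split one information unit (e.g. an encoded video frame) into
    data packets no larger than the physical layer packet payload."""
    packets = []
    remaining = unit_bytes
    while remaining > 0:
        take = min(remaining, plp_payload_bytes)
        packets.append(take)
        remaining -= take
    return packets

# A 700-byte frame over 300-byte physical layer packets:
print(partition_information_unit(700, 300))  # [300, 300, 100]
```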
  • The term “multimedia frame,” for video, is used herein to mean a video frame that can be displayed/rendered on a display device after decoding.
  • A video frame can be further divided into independently decodable units. In video parlance, these are called “slices.”
  • In the case of audio and speech, the term “multimedia frame” is used herein to mean information in a time window over which speech or audio is compressed for transport and decoding at the receiver.
  • The term “information unit interval” is used herein to represent the time duration of the multimedia frame described above. For example, in the case of video, the information unit interval is 100 milliseconds for 10 frames per second video.
  • For speech, the information unit interval is typically 20 milliseconds in cdma2000, GSM, and WCDMA.
  • Typically, audio/speech frames are not further divided into independently decodable units, while video frames are further divided into slices that are independently decodable.
  • The phrases “multimedia frame,” “information unit interval,” etc. refer to multimedia data of video, audio, and speech.
  • GSM Global System for Mobile Communication
  • GPRS General Packet Radio Service
  • EDGE Enhanced Data GSM Environment
  • CDMA Code Division Multiple Access
  • TIA/EIA-95-B IS-95
  • TIA/EIA-98-C IS-98
  • HRPD High Rate Packet Data
  • WCDMA Wideband CDMA
  • aspects include determining possible physical layer packet sizes of at least one available constant bit rate communication channel.
  • Information units are partitioned, thereby creating a plurality of data packets such that the size of an individual data packet does not exceed, or is matched to, one of the physical layer packets of at least one of the constant bit rate communication channels.
  • the data packets are then encoded and assigned to the physical layer packets of the matched constant bit rate communication channel.
  • Encoding information can include a source encoder equipped with a rate control module capable of generating partitions of varying size.
  • information units are encoded into a stream of data packets that are transmitted over one or more constant bit rate channels.
  • information units may be encoded into different sized data packets, and different combinations of constant bit rate channels, with different available physical layer packet sizes, may be used to transmit the data packets.
  • an information unit may include video data that is included in video frames of different sizes, and thus different combinations of fixed bit rate communication channel physical layer packets may be selected to accommodate the transmission of the different sized video frames.
  • Other aspects include determining a physical layer packet size and an available data rate of a plurality of constant bit rate communication channels. Then, information units are assigned to data packets, wherein individual data packet sizes are selected to be a size that fits into a physical layer packet of one of the individual constant bit rate communication channels. A combination of individual constant bit rate channels may be selected such that the physical layer packet sizes match the variable bit rate data stream packet sizes. Different combinations of constant bit rate channels, for example one or more, may be selected depending on the variable bit rate data stream.
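  • The channel-combination selection described above can be sketched as a small search. This is a hedged illustration; the channel names and per-slot capacities below are assumptions, not values from the text.

```python
from itertools import chain, combinations

# Hypothetical per-slot payload capacities, in bytes, for three CBR channels.
CHANNELS = {"DCCH": 171, "SCH_1x": 171, "SCH_2x": 360}

def select_channels(frame_size):
    """Return the channel combination with the least total capacity that
    still fits the frame, or None if no combination is large enough."""
    subsets = chain.from_iterable(
        combinations(CHANNELS, r) for r in range(1, len(CHANNELS) + 1))
    feasible = [s for s in subsets
                if sum(CHANNELS[c] for c in s) >= frame_size]
    return min(feasible, key=lambda s: sum(CHANNELS[c] for c in s), default=None)

print(select_channels(100))  # a single channel suffices for a small frame
print(select_channels(300))  # a larger frame needs a combination of channels
```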
  • Another aspect is an encoder configured to accept information units.
  • the information units are then partitioned into data packets wherein the sizes of individual data packets do not exceed, or are matched to, a physical layer packet size of one of the available constant bit rate communication channels.
  • Another aspect is a decoder configured to accept data streams from a plurality of constant bit rate communication channels.
  • the data streams are decoded and the decoded data streams are accumulated into a variable bit rate data stream.
  • examples of constant bit rate communication channels include GSM, GPRS, EDGE, or standards based on CDMA such as TIA/EIA-95-B (IS-95), TIA/EIA-98-C (IS-98), IS-2000, HRPD, and Wideband CDMA (WCDMA).
  • FIG. 1 is an illustration of portions of a communication system 100 constructed in accordance with the present invention.
  • FIG. 2 is a block diagram illustrating an exemplary packet data network and various air interface options for delivering packet data over a wireless network in the FIG. 1 system.
  • FIG. 3 is a block diagram illustrating two radio frames 302 and 304 in the FIG. 1 system utilizing the GSM air interface.
  • FIG. 4 is a chart illustrating an example of variation in frame sizes for a typical video sequence in the FIG. 1 system.
  • FIG. 5 is a block diagram illustrating buffering delay used to support the transmission of frames of various sizes to be transmitted over a CBR channel in the FIG. 1 system.
  • FIG. 6 is a graph illustrating buffering delay introduced by streaming a variable bit rate (VBR) multimedia stream over a CBR channel in the FIG. 1 system.
  • VBR variable bit rate
  • FIG. 7 is a bar graph illustrating buffer delay Δb in milliseconds, for various 50 frame sequence video clips encoded with a nominal rate of 64 kbps and constant Qp for AVC/H.264 and MPEG-4 in the system.
  • FIG. 8 is a bar graph illustrating the visual quality, as represented by the well understood objective metric “peak signal to noise ratio” (PSNR), of the sequences illustrated in FIG. 7 .
  • PSNR peak signal to noise ratio
  • FIG. 9 is a diagram illustrating various levels of encapsulation present when transmitting multimedia data, such as video data, over a wireless link using the RTP/UDP/IP protocol in the system.
  • FIG. 10 is a diagram illustrating an example of the allocation of application data packets, such as multimedia data packets, into physical layer data packets in the system.
  • FIG. 11 illustrates an example of encoding application layer packets in accordance with the EBR technique in the system.
  • FIG. 12 is a block diagram illustrating one embodiment of a codec transmitting a VBR data stream through an IP/UDP/RTP network, such as the Internet.
  • FIG. 13 is a bar graph illustrating the relative drop in peak signal to noise ratio (PSNR) for various examples of encoded video sequences, using different encoding techniques, when the channel packet loss is 1%.
  • FIG. 14 is a bar graph illustrating the relative drop in peak signal to noise ratio (PSNR) when the channel loss is 5% for various examples of encoded video sequences.
  • FIG. 15 is a bar graph illustrating the percentage of defective data packets received for the encoded video sequences of FIG. 13 .
  • FIG. 16 is a bar graph illustrating the percentage of defective data packets received for the encoded video sequences of FIG. 14 .
  • FIG. 17 is a graph illustrating the PSNR of a sample encoded video sequence versus bit rate for four different cases.
  • FIG. 18 is a graph illustrating the PSNR of another encoded video sequences versus bit rate for four different cases.
  • FIG. 19 is a graph illustrating the transmission plan for an AVC/H.264 stream of average rate 64 kbps.
  • FIG. 20 is a flow diagram illustrating an embodiment of a method of transmitting data.
  • FIG. 21 is a flow diagram illustrating another embodiment of a method of transmitting data.
  • FIG. 22 is a block diagram of a wireless communication device, or a mobile station (MS), constructed in accordance with an exemplary embodiment of the present invention.
  • multimedia frame, for video, is used herein to mean a video frame that can be displayed/rendered on a display device after decoding. A video frame can be further divided into independently decodable units. In video parlance, these are called “slices”.
  • multimedia frame is used herein to mean information in a time window over which speech or audio is compressed for transport and decoding at the receiver.
  • information unit interval is used herein to represent the time duration of the multimedia frame described above.
  • information unit interval is 100 milliseconds in the case of 10 frames per second video.
  • information unit interval is typically 20 milliseconds in cdma2000, GSM and WCDMA.
  • the techniques include partitioning the information units into data packets wherein the size of the data packets are selected to match physical layer data packet sizes of a communication channel. For example, the information units may occur at a constant rate and the communication channels may transmit physical layer data packets at a different rate.
  • the techniques describe partitioning the information units, thereby creating a plurality of data packets. For example, an encoder may be constrained such that it encodes the information units into sizes that match physical layer packet sizes of the communication channel. The encoded data packets are then assigned to the physical layer data packets of the communication channel.
  • the information units may include a variable bit rate data stream, multimedia data, video data, and audio data.
  • the communication channels include GSM, GPRS, EDGE, or standards based on CDMA such as TIA/EIA-95-B (IS-95), TIA/EIA-98-C (IS-98), IS2000, HRPD, cdma2000, Wideband CDMA (WCDMA), and others.
  • aspects include determining possible physical layer packet sizes of at least one available constant bit rate communication channel.
  • Information units are partitioned, thereby creating a plurality of data packets such that the size of an individual data packet is matched to one of the physical layer packets of at least one of the constant bit rate communication channels.
  • the data packets are then encoded and assigned to the physical layer packets of the matched constant bit rate communication channel.
  • information units are encoded into a stream of data packets that are transmitted over one or more constant bit rate channels.
  • the information units may be encoded into different sized data packets, and different combinations of constant bit rate channels, with different available physical layer packet sizes, may be used to transmit the data packets.
  • an information unit may include video data that is included in frames of different sizes, and thus different combinations of fixed bit rate communication channel physical layer packets may be selected to accommodate the transmission of the different sized video frames.
  • Other aspects include determining a physical layer packet size and an available data rate of a plurality of constant bit rate communication channels. Then, information units are assigned to data packets, wherein individual data packet sizes are selected to be a size that fits into a physical layer packet of one of the individual constant bit rate communication channels. A combination of individual constant bit rate channels may be selected such that the physical layer packet sizes match the variable bit rate data stream packet sizes. Different combinations of constant bit rate channels, for example one or more, may be selected depending on the variable bit rate data stream.
  • Another aspect is an encoder configured to accept information units.
  • the information units are then partitioned into data packets wherein the size of individual data packets is matched to a physical layer packet size of one of the available constant bit rate communication channels.
  • Another aspect is a decoder configured to accept data streams from a plurality of constant bit rate communication channels.
  • the data streams are decoded and the decoded data streams are accumulated into a variable bit rate data stream.
  • Examples of information units include variable bit rate data streams, multimedia data, video data, and audio data.
  • the information units may occur at a constant repetition rate.
  • the information units may be frames of video data.
  • Examples of constant bit rate communication channels include CDMA channels, GSM channels, GPRS channels, and EDGE channels.
  • Examples of protocols and formats for transmitting information units such as variable bit rate data, multimedia data, video data, speech data, or audio data, from a content server or source on the wired network to a mobile are also provided.
  • the techniques described are applicable to any type of multimedia applications, such as unicast streaming, conversational and broadcast streaming applications.
  • the techniques can be used to transmit multimedia data, such as video data (such as a content server on wireline streaming to a wireless mobile), as well as other multimedia applications such as broadcast/multicast services, or audio and conversational services such as video telephony between two mobiles.
  • FIG. 1 shows a communication system 100 constructed in accordance with the present invention.
  • the communication system 100 includes infrastructure 101 , multiple wireless communication devices (WCD) 104 and 105 , and landline communication devices 122 and 124 .
  • the WCDs will also be referred to as mobile stations (MS) or mobiles.
  • MS mobile stations
  • WCDs may be either mobile or fixed.
  • the landline communication devices 122 and 124 can include, for example, serving nodes, or content servers, that provide various types of multimedia data such as streaming data.
  • MSs can transmit streaming data, such as multimedia data.
  • the infrastructure 101 may also include other components, such as base stations 102 , base station controllers 106 , mobile switching centers 108 , a switching network 120 , and the like.
  • in some embodiments the base station 102 is integrated with the base station controller 106, and in other embodiments the base station 102 and the base station controller 106 are separate components.
  • Different types of switching networks 120 may be used to route signals in the communication system 100 , for example, IP networks, or the public switched telephone network (PSTN).
  • PSTN public switched telephone network
  • forward link refers to the signal path from the infrastructure 101 to a MS
  • reverse link refers to the signal path from a MS to the infrastructure.
  • MSs 104 and 105 receive signals 132 and 136 on the forward link and transmit signals 134 and 138 on the reverse link.
  • signals transmitted from a MS 104 and 105 are intended for reception at another communication device, such as another remote unit, or a landline communication device 122 and 124 , and are routed through the IP network or switching network.
  • signals initiated in the infrastructure 101 may be broadcast to a MS 105 .
  • a content provider may send multimedia data, such as streaming multimedia data, to a MS 105 .
  • a communication device such as a MS or a landline communication device, may be both an initiator of and a destination for the signals.
  • Examples of a MS 104 include cellular telephones, wireless communication enabled personal computers, personal digital assistants (PDAs), and other wireless devices.
  • the communication system 100 may be designed to support one or more wireless standards.
  • the standards may include standards referred to as Global System for Mobile Communication (GSM), General Packet Radio Service (GPRS), Enhanced Data GSM Environment (EDGE), TIA/EIA-95-B (IS-95), TIA/EIA-98-C (IS-98), IS2000, HRPD, cdma2000, Wideband CDMA (WCDMA), and others.
  • FIG. 2 is a block diagram illustrating an exemplary packet data network and various air interface options for delivering packet data over a wireless network.
  • the techniques described may be implemented in a packet switched data network 200 such as the one illustrated in FIG. 2 .
  • the packet switched data network system may include a wireless channel 202 , a plurality of recipient nodes or MS 204 , a sending node or content server 206 , a serving node 208 , and a controller 210 .
  • the sending node 206 may be coupled to the serving node 208 via a network 212 such as the Internet.
  • the serving node 208 may comprise, for example, a packet data serving node (PDSN) or a Serving GPRS Support Node (SGSN) and a Gateway GPRS Support Node (GGSN).
  • the serving node 208 may receive packet data from the sending node 206 , and serve the packets of information to the controller 210 .
  • the controller 210 may comprise, for example, a Base Station Controller/Packet Control Function (BSC/PCF) or Radio Network Controller (RNC).
  • BSC/PCF Base Station Controller/Packet Control Function
  • RNC Radio Network Controller
  • the controller 210 communicates with the serving node 208 over a Radio Access Network (RAN).
  • RAN Radio Access Network
  • the controller 210 communicates with the serving node 208 and transmits the packets of information over the wireless channel 202 to at least one of the recipient nodes 204 , such as an MS.
  • the serving node 208 or the sending node 206 may also include an encoder for encoding a data stream, or a decoder for decoding a data stream, or both.
  • the encoder could encode a video stream and thereby produce variable-sized frames of data, and the decoder could receive variable-sized frames of data and decode them. Because the frames are of various sizes but the video frame rate is constant, a variable bit rate stream of data is produced.
  • a MS may include an encoder for encoding a data stream, or a decoder for decoding a received data stream, or both.
  • codec is used to describe the combination of an encoder and a decoder.
  • data such as multimedia data
  • data from the sending node 206, which is connected to the network, or Internet 212, can be sent to a recipient node, or MS 204, via the serving node, or Packet Data Serving Node (PDSN) 208, and a controller, or Base Station Controller/Packet Control Function (BSC/PCF) 210.
  • the wireless channel 202 interface between the MS 204 and the BSC/PCF 210 is an air interface and, typically, can use many channels for signaling and bearer, or payload, data.
  • the air interface 202 may operate in accordance with any of a number of wireless standards.
  • the standards may include standards based on TDMA, such as Global System for Mobile Communication (GSM), General Packet Radio Service (GPRS), Enhanced Data GSM Environment (EDGE), or standards based on CDMA such as TIA/EIA-95-B (IS-95), TIA/EIA-98-C (IS-98), IS2000, HRPD, cdma2000, Wideband CDMA (WCDMA), and others.
  • data can be transmitted on multiple channels, for example, on a fundamental channel (FCH), generally used to transmit voice, a dedicated control channel (DCCH), a supplemental channel (SCH), and a packet data channel (PDCH) as well as other channels.
  • FCH fundamental channel
  • DCCH dedicated control channel
  • SCH supplemental channel
  • PDCH packet data channel
  • the FCH provides a communication channel for the transmission of speech at multiple fixed rates, e.g., full rate, half rate, quarter rate and 1/8th rate.
  • the FCH provides these rates and when a user's speech activity requires less than the full rate to achieve a target voice quality, the system reduces interference to other users in the system by using one of the lower data rates.
  • the benefit of lowering source rate in order to increase the system capacity is well known in CDMA networks.
  • DCCH is similar to FCH but provides only full rate traffic at one of two fixed rates: 9.6 kbps in radio configuration three (RC3), and 14.4 kbps in radio configuration five (RC5). This is called the 1× traffic rate.
  • SCH can be configured to provide traffic rates of 1×, 2×, 4×, 8× and 16× in cdma2000. When there is no data to be transmitted, both DCCH and SCH can cease transmission, that is, not transmit any data (also referred to as dtx), to reduce interference to the other users in the system or to stay within the transmit power budget of the base station transmitter.
  • the FCH and DCCH channels provide a constant delay and low data packet loss for communication of data, for example, to enable conversational services.
  • the SCH and PDCH channels provide multiple fixed bit rate channels providing higher bandwidths, for example, 300 kbps to 3 Mbps, than the FCH and DCCH.
  • the SCH and PDCH also have variable delays because these channels are shared among many users. In the case of SCH, multiple users are multiplexed in time, which introduces different amounts of delay depending on the system load.
  • the bandwidth and delay depend on, for example, the radio conditions, negotiated Quality of Service (QoS), and other scheduling considerations. Similar channels are available in systems based on TIA/EIA-95-B (IS-95), TIA/EIA-98-C (IS-98), IS2000, HRPD, UMTS, and Wideband CDMA (WCDMA).
  • FCH provides multiple fixed bit data rates (full, half, quarter and 1/8 rate) to conserve the power required by a voice user.
  • a voice encoder, or vocoder will use a lower data rate when the time-frequency structure of a signal to be transmitted permits higher compression without unduly compromising the quality.
  • This technique is commonly referred to as source controlled variable bit rate vocoding.
  • in cdma2000, there are multiple fixed bit rate channels available for transmitting data.
  • the communication channels are divided into a continuous stream of “slots.”
  • the communication channels may be divided into 20 ms segments or time slots. This is also called “Transmit Time Interval” (TTI).
  • TTI Transmit Time Interval
  • Data transmitted during these time slots is assembled into packets, where the size of the data packet depends on the available data rate, or bandwidth, of the channel.
  • a data packet may be transmitted on the DCCH channel and a different data packet may simultaneously be transmitted on the SCH channel.
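  • The relationship between channel rate, TTI, and physical layer packet size described above can be illustrated numerically. The 9.6 kbps 1× rate and 20 ms TTI come from the text; exact over-the-air payload sizes in a deployed system would differ because of physical layer overhead, so treat this as a sketch.

```python
TTI_S = 0.020  # 20 ms transmit time interval ("slot")

def phy_packet_bits(rate_bps):
    """Bits carried by one physical layer packet at a fixed channel rate."""
    return int(rate_bps * TTI_S)

RATE_1X = 9600  # 1x traffic rate in RC3, bits per second
for mult in (1, 2, 4, 8, 16):
    print(f"SCH {mult}x: {phy_packet_bits(RATE_1X * mult)} bits per 20 ms slot")
```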
  • FIG. 3 is a block diagram illustrating two radio frames 302 and 304 in the GSM air interface.
  • the GSM air interface radio frames 302 and 304 are each divided into eight timeslots. Individual timeslots are assigned to particular users in the system.
  • GSM transmission and reception use two different frequencies, and the forward link and reverse link are offset by three timeslots.
  • a downlink radio frame 302 begins at time t0 and would be transmitted at one frequency, and an uplink radio frame 304 would be transmitted at a different frequency.
  • the downlink radio frame 302 is offset by three time slots, TS0-TS2, from the uplink radio frame. Having an offset between the downlink and uplink radio frames allows wireless communication devices, or terminals, to be able to operate without having to be able to transmit and receive at the same time.
  • there are GSM terminals that can receive multiple timeslots during the same radio frame. These are called “multislot classes” and can be found in Annex B of 3GPP TS 45.002, incorporated herein in its entirety. Thus, in a system based on GSM, GPRS, or EDGE, there are multiple fixed time slots available for transmitting data.
  • video data is generally captured at a constant frame rate by a sensor, such as a camera.
  • a multimedia transmitter generally requires a finite processing time with an upper bound to encode the video stream.
  • a multimedia receiver generally requires a finite processing time with an upper bound to decode the video stream.
  • Transporting multimedia content typically incurs delays. Some of these delays are due to codec settings and some are due to network settings such as radio-link protocol (RLP) transmissions that allow, among other things, the re-transmission and re-ordering of packets sent over the air interface, etc.
  • RLP radio-link protocol
  • An objective methodology to assess the delay of multimedia transmissions is to observe the encoded stream. For example, a transmission cannot be decoded until a complete, independently decodable packet has been received. Thus, delay is affected by the size of the packets and the rate of transmission.
  • For example, if a packet is 64 kbytes in size and it is transmitted over a 64 kbyte per second channel, then the packet cannot be decoded, and must be delayed, for 1 second until the entire packet is received. All packets that are received would need to be delayed enough to accommodate the largest packet, so that packets can be decoded at a constant rate. For example, if video packets of varying size are transmitted, a receiver would need to delay, or buffer, all of the received packets by an amount equal to the delay needed to accommodate the largest packet size. The delay would permit the decoded video to be rendered, or displayed, at a constant rate. If the maximum packet size is not known ahead of time, then estimates of the maximum packet size, and associated delay, can be made based on the parameters used during the encoding of the packets.
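  • The example above can be worked through directly: decoding delay is the time to receive the largest complete packet at the channel rate. This is a sketch of the arithmetic only; the list of packet sizes is hypothetical.

```python
def decode_delay_s(packet_bytes, channel_bytes_per_s):
    """Time until a packet is fully received and can be decoded."""
    return packet_bytes / channel_bytes_per_s

# 64 kbyte packet over a 64 kbyte/s channel -> 1 second, as in the text.
print(decode_delay_s(64_000, 64_000))  # 1.0

# With varying packet sizes, the receiver buffers every packet by the delay
# of the largest one, so the stream can be decoded at a constant rate.
packet_sizes = [8_000, 24_000, 64_000, 12_000]  # hypothetical sizes
buffer_delay = max(decode_delay_s(p, 64_000) for p in packet_sizes)
print(buffer_delay)  # 1.0
```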
  • the technique just described can be used in assessing the delay for any video codec (H.263, AVC/H.264, MPEG-4, etc.). Further, given that only video decoders are normatively specified by the Motion Picture Expert Group (MPEG) and the International Telecommunication Union (ITU), it is useful to have an objective measure that can be used to estimate the delays introduced by different encoder implementations for mobiles in typical wireless deployments.
  • MPEG Motion Picture Expert Group
  • ITU International Telecommunication Union
  • video streams will have more delay than other types of data in multimedia services, for example, more delay than speech, audio, timed text, etc. Because of the longer delay typically experienced by a video stream, other multimedia data that needs to be synchronized with the video data will usually need to be intentionally delayed in order to maintain synchronization with video.
  • multimedia data frames are encoded or decoded using information from a previous reference multimedia data frame.
  • video codecs implementing the MPEG-4 standard will encode and decode different types of video frames.
  • video is typically encoded into an “I” frame and a “P” frame.
  • An I frame is self-contained, that is, it includes all of the information needed to render, or display, one complete frame of video.
  • a P frame is not self-contained and will typically contain differential information relative to the previous frame, such as motion vectors and differential texture information.
  • I frames are about 8 to 10 times larger than P frames, depending on the content and encoder settings.
  • Encoding and decoding of multimedia data introduces delays that may depend on the processing resources available.
  • a typical implementation of this type of scheme may utilize a ping-pong buffer to allow the processing resources to simultaneously capture or display one frame and process another.
  • Video encoders such as H.263, AVC/H.264, MPEG-4, etc. are inherently variable rate in nature because of predictive coding and also due to the use of variable length coding (VLC) of many parameters.
  • VLC variable length coding
  • Real time delivery of variable rate bitstreams over circuit switched networks and packet switched networks is generally accomplished by traffic shaping with buffers at the sender and receiver. Traffic shaping buffers introduce additional delay, which is typically undesirable. For example, additional delay can be annoying during teleconferencing when there is a delay between when a person speaks and when another person hears the speech.
  • video data typically has desired frame rates of 15 fps, 10 fps, or 7.5 fps.
  • An upper bound on the time allowed for an encoder and decoder to process the data while maintaining the desired frame rate results in upper bounds of 66.6 ms, 100 ms, and 133.3 ms for frame rates of 15 fps, 10 fps, and 7.5 fps, respectively.
  • Consistent quality at an encoder can be achieved by setting the encoder “quantization parameter” (Qp) to a constant value, or by allowing it to vary only slightly around a target Qp.
  • FIG. 4 is a chart illustrating an example of variation in frame sizes for a typical video sequence entitled “Carphone.”
  • the Carphone sequence is a standard video sequence that is well known to those in the art; it provides a “common” video sequence for use in evaluating various techniques, such as video compression, error correction, and transmission.
  • FIG. 4 shows an example of the variation in frame size, in bytes, for a sample number of frames of Carphone data encoded using MPEG4 and AVC/H.264 encoding techniques indicated by references 402 and 404 respectively.
  • a desired quality of encoding can be achieved by setting the encoder parameter “Qp” to a desired value.
  • CBR constant bit rate
  • the variations in frame size would need to be “smoothed out” to maintain a constant, or negotiated, QoS bitrate.
  • this “smoothing out” of the variations in frame size results in the introduction of additional delay, commonly called buffering delay Δb.
  • FIG. 5 is a block diagram illustrating how buffering delay can be used to support the transmission of frames of various sizes to be transmitted over a CBR channel.
  • data frames of varying size 502 enter the buffer 504 .
  • the buffer 504 will store a sufficient number of frames of data so that data frames that are a constant size can be output from the buffer 506 for transmission over a CBR channel 508 .
  • a buffer of this type is commonly referred to as a “leaky bucket” buffer.
  • a “leaky bucket” buffer outputs data at a constant rate, like a bucket with a hole in the bottom.
  • the bucket needs to maintain a sufficient amount of water in the bucket to prevent the bucket from running dry when the rate of the water entering the bucket falls to less than the rate of the leak. Likewise, the bucket needs to be large enough so that the bucket does not overflow when the rate of the water entering the bucket exceeds the rate of the leak.
  • the buffer 504 works in a similar way to the bucket, and the amount of data that the buffer needs to store to prevent buffer underflow results in a delay corresponding to the length of time that the data stays in the buffer.
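  • The “leaky bucket” behavior can be sketched as a frame-by-frame simulation. The frame sizes below are hypothetical; the drain rate corresponds to a 64 kbps channel at 10 fps (8000 bytes/s ÷ 10 = 800 bytes per frame interval).

```python
def max_buffer_occupancy(frame_sizes_bytes, drain_bytes_per_frame):
    """Track buffer occupancy frame by frame; the peak occupancy (divided
    by the channel rate) indicates the buffering delay required."""
    occupancy = 0
    peak = 0
    for size in frame_sizes_bytes:
        occupancy += size  # a variable-size frame enters the buffer
        occupancy = max(occupancy - drain_bytes_per_frame, 0)  # constant leak
        peak = max(peak, occupancy)
    return peak

frames = [400, 3200, 800, 600, 2400]  # hypothetical bytes per video frame
print(max_buffer_occupancy(frames, 800))  # 3800
```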
  • FIG. 6 is a graph illustrating buffering delay introduced by streaming a variable bit rate (VBR) multimedia stream over a CBR channel in the FIG. 1 system.
  • a video signal is encoded using a VBR encoding scheme, MPEG-4, producing a VBR stream.
  • the number of bytes in the VBR stream is illustrated in FIG. 6 by a line 602 representing the cumulative, or total, number of bytes required to transmit a given number of video frames.
  • the MPEG4 stream is encoded at an average bit rate of 64 kbps and is transmitted over a 64 kbps CBR channel.
  • the number of bytes that are transmitted by the CBR channel is represented by a constant slope line 604 corresponding to the constant transmission rate of 64 kbps.
  • the display, or playout, 606 at the decoder needs to be delayed.
  • the delay is 10 frames, or 1 second, for a desired display rate of 10 fps.
  • a constant rate of 64 kbps was used for the channel, but if an MPEG4 stream that has an average data rate of 64 kbps is transmitted over a 32 kbps CBR channel, the buffering delay would increase with the length of the sequence. For example, for the 50-frame sequence illustrated in FIG. 6, the buffering delay would increase to 2 seconds.
  • the denominator in Equation 5 represents the average data rate for the entire session duration I.
  • the denominator is C.
  • the above analyses can also be used to estimate nominal encoder buffer sizes required to avoid overflow at the encoder, by computing max{Be(i)} for all i in a set of exemplar sequences.
  • FIG. 7 is a bar graph illustrating buffer delay Δb in milliseconds, for various 50 frame sequence video clips encoded with a nominal rate of 64 kbps and constant Qp for AVC/H.264 and MPEG-4.
  • the MPEG-4 frame sequence of FIG. 6 is represented by a bar 702 indicating a buffer delay of 1000 ms.
  • the same video sequence encoded using AVC/H.264 is represented by a bar 704 indicating a buffer delay of 400 ms. Additional examples of 50-frame video clip sequences are shown in FIG. 7, where the buffer delay associated with each sequence, encoded with both MPEG-4 and AVC/H.264, is indicated.
  • FIG. 8 is a bar graph illustrating the video quality, as represented by peak signal to noise ratio (PSNR), of the sequences illustrated in FIG. 7 .
  • PSNR peak signal to noise ratio
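  • For reference, PSNR as used in FIGS. 7-8 and in the comparisons below follows the standard definition. A minimal sketch, assuming 8-bit samples given as flat lists (not code from the patent), is:

```python
import math

def psnr(reference, reconstructed, peak=255):
    """Peak signal to noise ratio, in dB, between two equal-length sample lists."""
    mse = sum((a - b) ** 2 for a, b in zip(reference, reconstructed)) / len(reference)
    if mse == 0:
        return float("inf")  # identical signals
    return 10 * math.log10(peak ** 2 / mse)

print(round(psnr([100, 120, 140], [101, 119, 141]), 2))  # -> 48.13
```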
  • Transmission delay δt depends on the number of retransmissions used and a certain constant time for a given network. It can be assumed that δt has a nominal value when no retransmissions are used; for example, it may be assumed that δt has a nominal value of 40 ms. If retransmissions are used, the Frame Erasure Rate (FER) drops, but the delay increases. The delay depends, at least in part, on the number of retransmissions and the associated overhead delays.
  • FER Frame Erasure Rate
  • FIG. 9 is a diagram illustrating various levels of encapsulation present when transmitting multimedia data, such as video data, over a wireless link using the RTP/UDP/IP protocol.
  • a video codec generates a payload 902 that includes information describing a video frame.
  • the payload 902 may be made up of several video packets (not depicted).
  • the payload 902 includes a Slice_Header (SH) 904 .
  • an application layer data packet 905 consists of the video data 902 and the associated Slice_Header 904 .
  • additional header information may be added. For example, a real-time protocol (RTP) header 906 , a user datagram protocol (UDP) header 908 , and an Internet protocol (IP) header 910 may be added.
  • RTP real-time protocol
  • UDP user datagram protocol
  • IP Internet protocol
  • a point to point protocol (PPP) header 912 is added to provide framing information for serializing the packets into a continuous stream of bits.
  • a radio link protocol for example, RLP in cdma2000 or RLC in W-CDMA, then packs the stream of bits into RLP packets 914 .
  • the radio-link protocol allows, among other things, the re-transmission and re-ordering of packets sent over the air interface.
  • the air interface MAC-layer takes one or more RLP packets 914 , packs them into MUX layer packet 916 , and adds a multiplexing header (MUX) 918 .
  • a physical layer channel coder then adds a checksum (CRC) 920 to detect decoding errors, and a tail part 922, forming a physical layer packet 925 .
  • CRC checksum
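  • The overhead added by the FIG. 9 encapsulation layers can be tallied in a short sketch. The RTP, UDP, and IPv4 figures below are the standard minimum header sizes; the PPP framing figure is an illustrative assumption, not a value from the patent, and RLP/MUX/CRC overhead is omitted here.

```python
# Standard minimum header sizes in bytes; the PPP overhead is an assumption.
HEADERS = {"RTP": 12, "UDP": 8, "IPv4": 20, "PPP": 4}

def transported_size(payload_bytes):
    """Bytes handed to the radio link layer for one application packet."""
    return payload_bytes + sum(HEADERS.values())

print(transported_size(60))  # a 60-byte slice grows to 104 bytes -> 104
```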
  • the successive uncoordinated encapsulations illustrated in FIG. 9 have several consequences for the transmission of multimedia data.
  • One such consequence is that there may be a mismatch between application layer data packets 905 and physical layer packets 925 .
  • because portions of a single application layer data packet 905 may be included in more than one physical layer data packet 925, losing one physical layer packet 925 can result in the loss of an entire application layer packet 905, since the entire application layer data packet 905 is needed for it to be properly decoded.
  • Another consequence is that if portions of more than one application layer data packet 905 are included in a physical layer data packet 925, then the loss of a single physical layer data packet 925 can result in the loss of more than one application layer data packet 905.
  • FIG. 10 is a diagram illustrating an example of conventional allocation of application data packets 905 such as multimedia data packets, into physical layer data packets 925 .
  • the application data packets can be multimedia data packets; for example, each data packet 1002 and 1004 can represent a video frame.
  • the uncoordinated encapsulations illustrated in FIG. 10 can result in a physical layer packet having data that is from a single application data packet or from more than one application data packet.
  • a first physical layer data packet 1006 can include data from a single application layer packet 1002
  • a second physical layer data packet 1008 can include data from more than one application data packet 1002 and 1004 .
  • if the first physical layer data packet 1006 is "lost", or corrupted during transmission, then a single application layer data packet 1002 is lost.
  • if the second physical layer packet 1008 is lost, then two application data packets 1002 and 1004 are lost.
  • the loss of the first physical layer data packet 1006 results in the loss of a single video frame.
  • loss of the second physical layer data packet results in the loss of both video frames: because portions of both video frames are lost, neither of the video frames can be properly decoded, or recovered, by a decoder.
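  • The loss amplification of FIG. 10 can be illustrated with a short sketch (the identifiers follow the figure's reference numerals, but the mapping itself is hypothetical): each physical layer packet carries pieces of one or more application packets, and losing it destroys every application packet it touches.

```python
def lost_app_packets(mapping, lost_phys):
    """mapping: physical packet id -> set of application packet ids it carries."""
    lost = set()
    for p in lost_phys:
        lost |= mapping[p]  # every app packet touched by a lost phys packet dies
    return lost

# Uncoordinated packing as in FIG. 10: packet 1008 straddles frames 1002 and 1004.
uncoordinated = {1006: {1002}, 1008: {1002, 1004}}
print(lost_app_packets(uncoordinated, [1008]))  # both video frames are unusable

# One-to-one (EBR-style) packing: each loss costs exactly one slice.
matched = {0: {"slice0"}, 1: {"slice1"}, 2: {"slice2"}}
print(lost_app_packets(matched, [1]))  # only slice1 is lost
```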
  • EBR explicit bit rate control
  • an encoder may be constrained, or configured, to output bytes at time i (previously denoted R(i)) that match "the capacity" of the physical channel used to deliver the data stream in any over-the-air standard, such as GSM, GPRS, EDGE, TIA/EIA-95-B (IS-95), TIA/EIA-98-C (IS-98), cdma2000, Wideband CDMA (WCDMA), and others.
  • the encoder may be constrained so that it produces data packets that are sized to match, i.e., contain the same number of bytes as or fewer than, the physical layer data packets of the communication channel.
  • the encoder can be constrained so that each application layer data packet that it outputs is independently decodable.
  • Simulations of the EBR technique on an AVC/H.264 reference encoder show that there is no perceivable loss in quality when the encoder is constrained in accordance with the EBR techniques, provided an adequate number of explicit rates is used to constrain the VBR encoding. Constraints for some example channels are described below.
  • multimedia encoders may generate multimedia frames of variable size.
  • some multimedia frames may include all of the information needed to fully render the frame content, while other frames may include only information about changes to the content relative to previously rendered content.
  • video frames may typically be of two types: I or P frames.
  • I frames are self-contained, similar to JPEG files, in that each I frame contains all the information needed to render, or display, one complete frame.
  • P frames typically include information relative to the previous frame, such as, differential information relative to the previous frame and motion vectors.
  • a P frame is not self-contained and cannot render, or display, a complete frame without reliance on a previous frame; in other words, a P frame cannot be self-decoded.
  • the word “decoded” is used to mean full reconstruction for displaying a frame.
  • I frames are typically larger than P frames, for example, about 8 to 10 times larger depending on the content and encoder settings.
  • each frame of data can be partitioned into portions, or “slices”, such that each slice can be independently decoded, as described further below.
  • a frame of data may be contained in a single slice, in other cases a frame of data is divided into multiple slices.
  • the frame of data is video information
  • the video frame may be included within a single independently decodable slice, or the frame may be divided into more than one independently decodable slice.
  • each encoded slice is configured so that the size of the slice matches an available size of a communication channel physical layer data packet. If the encoder is encoding video information then each slice is configured such that the size of each video slice matches an available size of a physical layer packet. In other words, frame slice sizes are matched to physical layer packet sizes.
  • An advantage of making slices a size that matches an available communication channel physical layer data packet size is that there is a one-to-one correspondence between application packets and physical layer data packets. This helps alleviate some of the problems associated with uncoordinated encapsulation, as illustrated in FIG. 10. Thus, if a physical layer data packet is corrupted, or lost, during transmission, only the corresponding slice is lost. Also, if each slice of a frame is independently decodable, then the loss of one slice of a frame will not prevent the decoding of the other slices of the frame.
  • if a video frame is divided into five slices, such that each slice is independently decodable and matched to a physical layer data packet, then corruption, or loss, of one of the physical layer data packets results in the loss of only the corresponding slice; the physical layer packets that are successfully transmitted can still be successfully decoded.
  • although the entire video frame may not be decoded, portions of it may be.
  • four of the five video slices will be successfully decoded, thereby allowing the video frame to be rendered, or displayed, albeit at reduced quality.
  • the DCCH channel can be configured to support multiple, fixed, data rates.
  • the DCCH can support data transmission rates of either 9.60 kbps or 14.4 kbps depending on the selected rate set (RS), RS1 and RS2 respectively.
  • the SCH channel can also be configured to support multiple, fixed data rates, depending on the SCH radio configuration (RC).
  • the SCH supports multiples of 9.6 kbps when configured in RC3 and multiples of 14.4 kbps when configured in RC5.
  • Table 2 illustrates possible physical layer data packet sizes for the DCCH and SCH channels in a communication system based on cdma2000.
  • the first column identifies a case, or possible configuration.
  • the second and third columns are the DCCH rate set and SCH radio configuration respectively.
  • the fourth column has four entries. The first is the dtx case where no data is sent on either DCCH or SCH.
  • the second is the physical layer data packet size of a 20 ms time slot for the DCCH channel.
  • the third entry is the physical layer data packet size of a 20 ms time slot for the SCH channel.
  • the fourth entry is the physical layer data packet size of a 20 ms time slot for a combination of the DCCH and SCH channels.
  • each slice 902 has its own slice header 904 .
  • the SCH for Case 1 is configured as 2× in RC3.
  • RC3 corresponds to a base data rate of 9.6 Kbps, and the 2× means that the channel data rate is two times the base data rate.
  • the second entry in the fourth column of Table 2 for Case 1 is 40.
  • the third entry in the fourth column of Table 2 for Case 1 is the sum of the first and second entries, or 60.
  • Case 9 is similar to Case 1.
  • the DCCH is configured RS1, corresponding to a physical layer packet size of 20 bytes.
  • the SCH channel in Case 9 is configured as 2× RC5.
  • RC5 corresponds to a base data rate of 14.4 Kbps, and the 2× means that the channel data rate is two times the base data rate.
  • the second entry in the fourth column of Table 2, for Case 9 is 64.
  • the third entry in the fourth column of Table 2 for Case 9 is the sum of the first and second entries, or 84.
  • The other entries in Table 2 are determined in a similar manner, where RS2 corresponds to the DCCH having a data rate of 14.4 Kbps, corresponding to 36 bytes within a 20 msec time slot, of which 31 are available to the application layer. It is noted that dtx operation, with a zero payload size, where no data is transmitted on either channel, is available for all cases. When the user data can be transmitted in fewer than the available physical layer slots (of 20 ms each), dtx is used in the subsequent slots, reducing the interference to other users in the system.
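  • The fourth-column entries of Table 2 can be reproduced with a small sketch. The 20-byte RS1 DCCH packet and the 20-bytes-per-1× RC3 SCH packet follow the text above; the 32-bytes-per-1× RC5 figure is inferred from the Case 9 values (2× RC5 → 64 bytes) and should be treated as an assumption.

```python
def packet_size_options(dcch_bytes, sch_bytes):
    """Table 2 fourth-column entries: dtx, DCCH only, SCH only, DCCH + SCH."""
    return [0, dcch_bytes, sch_bytes, dcch_bytes + sch_bytes]

print(packet_size_options(20, 2 * 20))  # Case 1: RS1 + 2x RC3 -> [0, 20, 40, 60]
print(packet_size_options(20, 2 * 32))  # Case 9: RS1 + 2x RC5 -> [0, 20, 64, 84]
```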
  • a set of CBR channels can behave similarly to a VBR channel. That is, configuring the multiple fixed rate channels can make a CBR channel behave as a pseudo-VBR channel.
  • Techniques that take advantage of the pseudo-VBR channel include determining possible physical layer data packet sizes corresponding to a CBR channel's bit rate from a plurality of available constant bit rate communication channels, and encoding a variable bit rate stream of data, thereby creating a plurality of data packets such that the size of each of the data packets is matched to one of the physical layer data packet sizes.
  • the configuration of the communication channels is established at the beginning of a session and then either not changed throughout the session or only changed infrequently.
  • the SCH discussed in the above example is generally set to a configuration and remains in that configuration throughout the entire session. That is, the SCH described is a fixed rate SCH.
  • the channel configuration can be changed dynamically during the session.
  • a variable rate SCH (V-SCH) can change its configuration for each time slot. That is, during one time slot a V-SCH can be configured in one configuration, such as 2× RC3, and in the next time slot the V-SCH can be configured to a different configuration, such as 16× RC3 or any other possible configuration of V-SCH.
  • V-SCH provides additional flexibility, and can improve system performance in EBR techniques.
  • application layer packets are selected so that they fit into one of the available physical layer data packets. For example, if the DCCH and SCH are configured as RS1 and 2× RC3, as illustrated in Case 1 of Table 2, then the application layer slices would be selected to fit into 0 byte, 20 byte, 40 byte, or 60 byte packets. Likewise, if the channels were configured as RS1 and 16× RC3, as illustrated in Case 4 of Table 2, then the application layer slices would be selected to fit into 0 byte, 20 byte, 320 byte, or 340 byte packets. If a V-SCH channel were used, then it would be possible to change between two different configurations for each slice.
  • For example, if the DCCH is configured as RS1 and the V-SCH is configured in RC3, then it is possible to change between any of the V-SCH configurations 2× RC3, 4× RC3, 8× RC3, or 16× RC3, corresponding to Cases 1-4 of Table 2. Selection between these various configurations provides physical layer data packets of 0, 20, 40, 60, 80, 100, 160, 180, 320, or 340 bytes, as illustrated in Cases 1-4 of Table 2. Thus, in this example, using a V-SCH channel allows application layer slices to be selected to fit into any of the ten different physical layer data packet sizes listed in Cases 1-4 of Table 2. In the case of cdma2000, the size of the data delivered is estimated by the MS; this process is called "blind detection."
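  • Per-slice packet selection over a V-SCH can then be sketched as a best-fit search over the available sizes. The size list is the Cases 1-4 set quoted above; the function itself is an illustrative sketch, not part of any standard.

```python
SIZES = [0, 20, 40, 60, 80, 100, 160, 180, 320, 340]  # bytes, Cases 1-4 of Table 2

def best_fit(slice_bytes, sizes=SIZES):
    """Smallest available physical layer packet that can hold the slice."""
    candidates = [s for s in sizes if s >= slice_bytes]
    if not candidates:
        raise ValueError("slice too large for any single configuration")
    return min(candidates)

print(best_fit(55))   # -> 60 (DCCH + SCH packet of Case 1)
print(best_fit(200))  # -> 320 (SCH packet of Case 4)
```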
  • DCH data channel (Wideband CDMA)
  • DCH can support different physical layer packet sizes.
  • DCH can support rates of 0 to nx in multiples of 40 octets, where 'nx' corresponds to the maximum allocated rate of the DCH channel.
  • Typical values of nx include 64 kbps, 128 kbps and 256 kbps.
  • with explicit indication, the size of the data delivered can be indicated using additional signaling, thereby eliminating the need to do blind detection.
  • the size of the data packet delivered may be indicated using “Transport Format Combination Indicator” (TFCI), so that the MS does not have to do blind detection, thereby reducing the computational burden on the MS, when packets of variable sizes are used as in EBR.
  • TFCI Transport Format Combination Indicator
  • the EBR concepts described are applicable to both blind detection and explicit indication of the packet sizes.
  • a combination of constant bit rate communication channels can transmit a VBR data stream with performance similar to, and in some cases superior to, a VBR communication channel.
  • a variable bit rate data stream is encoded into a stream of data packets that are of a size that matches the physical layer data packet size of available communication channels, and are then transmitted over a combination of constant bit rate channels.
  • as the bit rate of the variable bit rate data stream changes, the stream may be encoded into different sized data packets, and different combinations of constant bit rate channels may be used to transmit the data packets.
  • variable bit rate data can be efficiently transmitted over a constant bit rate channel by assigning data packets to at least one of the constant bit rate communication channels so as to match the aggregate bit rate of the constant bit rate communication channels to the bit rate of the variable bit rate stream.
  • the encoder can be constrained so as to limit the total number of bits used to represent the variable bit rate data stream to a pre-selected maximum number of bits. That is, if the variable bit rate data stream is a frame of multimedia data, such as video, the frame may be divided into slices, where the slices are selected such that each slice can be independently decoded and the number of bits in each slice is limited to a pre-selected number of bits. For example, if the DCCH and SCH channels are configured as RS1 and 2× RC3 respectively (Case 1 in Table 2), then the encoder can be constrained so that a slice will be no larger than 20, 40, or 60 bytes.
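  • As an illustrative sketch of that constraint (the greedy split below is an assumption; a real encoder makes this decision inside rate control while keeping each slice independently decodable), a frame's byte budget can be cut into slices no larger than the Case 1 sizes:

```python
ALLOWED = [60, 40, 20]  # Case 1 payload sizes in bytes, largest first

def partition_frame(frame_bytes):
    """Greedily split a frame's bytes into slices that each fit a packet size."""
    slices = []
    remaining = frame_bytes
    while remaining > 0:
        # largest allowed size that the remaining bytes can fill, else smallest
        size = next((s for s in ALLOWED if s <= remaining), ALLOWED[-1])
        slices.append(min(size, remaining))
        remaining -= slices[-1]
    return slices

print(partition_frame(150))  # -> [60, 60, 20, 10]; the 10-byte tail rides in a 20-byte packet
```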
  • the cdma2000 packet data channel (PDCH) can be used to transmit multimedia data.
  • the PDCH has different data rates available on the forward PDCH (F-PDCH) and the reverse PDCH (R-PDCH).
  • F-PDCH forward PDCH
  • R-PDCH reverse PDCH
  • the F-PDCH has slightly less bandwidth available than the R-PDCH.
  • the R-PDCH can be limited so that it is the same as the F-PDCH.
  • One way to limit the R-PDCH bandwidth is to limit the application data packet sizes sent on the R-PDCH to those supported by the F-PDCH and then add "stuffing bits" for the remaining bits in the R-PDCH physical layer packet.
  • if stuffing bits are added to the R-PDCH data packets so as to match the F-PDCH data packets, then the R-PDCH data packets can be used on the F-PDCH forward link with minimal change, for example, by simply dropping the stuffing bits.
  • Table 3 lists possible physical layer data packet sizes for the F-PDCH and R-PDCH for four possible data rate cases, one for each value of n, and the number of “stuffing bits” that will be added to the R-PDCH.
  • Table 3: n = 1: 45-byte packet, 0 stuffing bits; n = 2: 90 bytes, 24 stuffing bits; n = 4: 180 bytes, 72 stuffing bits; n = 8: 360 bytes, 168 stuffing bits (packet sizes apply to both F-PDCH and R-PDCH).
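  • A sketch of that stuffing arithmetic, using the Table 3 figures directly (the payload split shown is illustrative; the patent specifies only the packet sizes and stuffing bit counts):

```python
STUFFING_BITS = {1: 0, 2: 24, 4: 72, 8: 168}   # per Table 3
PACKET_BYTES = {1: 45, 2: 90, 4: 180, 8: 360}  # per Table 3

def r_pdch_payload_bits(n):
    """Bits of an n-rate R-PDCH packet left for application data after stuffing."""
    return PACKET_BYTES[n] * 8 - STUFFING_BITS[n]

for n in (1, 2, 4, 8):
    print(n, r_pdch_payload_bits(n))  # 1 360 / 2 696 / 4 1368 / 8 2712
```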
  • the techniques of matching multimedia data, such as video slices, to an available size of a physical layer packet can be performed in systems based on other over the air standards.
  • the multimedia frames, such as video slices can be sized to match the available timeslots.
  • many GSM, GPRS and EDGE devices are capable of receiving multiple timeslots.
  • an encoded stream of frames can be constrained so that the video slices are matched to the physical packets.
  • the multimedia data can be encoded so that packet sizes match an available size of a physical layer packet, such as the GSM timeslot, and the aggregate data rate of the physical layer packets used supports the data rate of the multimedia data.
  • when an encoder of multimedia data streams operates in an EBR mode, it generates multimedia slices matched to the physical layer, and therefore there is no loss in compression efficiency as compared to true VBR mode.
  • a video codec operating in accordance with the EBR technique generates video slices matched to the particular physical layer over which the video is transmitted.
  • each physical packet loss in the wireless link results in the loss of exactly one application layer packet.
  • FIG. 11 illustrates an example of encoding application layer packets in accordance with the EBR technique.
  • application layer packets may be of various sizes.
  • the physical layer packets may also be of various sizes, for example, the physical layer may be made up of channels that use different sizes of physical layer data packets.
  • a single application layer packet can be encoded so that it is transmitted within multiple physical layer packets. In the example shown in FIG. 11, a single application layer packet 1102 is encoded into two physical layer packets 1110 and 1112 .
  • if the DCCH and SCH are configured as RS1 and 2× RC3 respectively (Case 1 in Table 2) and the application data packet is 60 bytes, then it could be transmitted over the two physical layer packets corresponding to the DCCH and SCH packet combination.
  • a single application layer packet can be encoded into any number of physical layer packets corresponding to available communication channels.
  • in a second example illustrated in FIG. 11, a single application layer packet 1104 is encoded into a single physical layer packet 1114 .
  • if the application layer data packet is 40 bytes, it could be transmitted using just the SCH physical layer data packet in Case 1 of Table 2. In both of these examples, the loss of a single physical layer packet results in the loss of only a single application layer packet.
  • in a third example illustrated in FIG. 11, multiple application layer packets are encoded into a single physical layer packet 1116 .
  • two application layer packets 1106 and 1108 are encoded and transmitted in a single physical layer packet. It is envisioned that more than two application layer packets may be encoded to fit within a single physical layer packet.
  • a drawback to this example is that the loss of a single physical layer packet 1116 would result in the loss of multiple application layer packets 1106 and 1108 .
  • FIG. 12 is a block diagram illustrating one embodiment of a codec transmitting a VBR data stream through an IP/UDP/RTP network, such as the Internet.
  • the codec generates an application layer data packet 1202 that includes a payload, or slice, 1204 and a slice header 1206 .
  • the application layer data packet 1202 passes through the network, where IP/UDP/RTP header information 1208 is appended to it.
  • the packet then passes through the wireless network where an RLP header 1210 and a MUX header 1212 are appended to the packet.
  • the codec selects a size for the slice 1204 so that the slice and all associated headers fit into the physical layer data packet, or payload, 1216 .
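  • That size budget can be sketched as follows. The 40-byte IP/UDP/RTP figure is the standard uncompressed minimum (20 + 8 + 12 bytes); the slice header, RLP, and MUX sizes are illustrative assumptions, not values from the patent.

```python
def max_slice_bytes(phys_payload, slice_header=1, ip_udp_rtp=40, rlp=2, mux=1):
    """Largest slice that, with all FIG. 12 headers, fits the physical payload."""
    budget = phys_payload - (slice_header + ip_udp_rtp + rlp + mux)
    if budget <= 0:
        raise ValueError("physical payload too small for the header stack")
    return budget

print(max_slice_bytes(340))  # Case 4 combined packet leaves a 296-byte slice budget
```

In practice, header compression on the wireless link would shrink the 40-byte IP/UDP/RTP portion considerably, enlarging the slice budget.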
  • FIG. 13 is a bar graph illustrating the relative drop in peak signal to noise ratio (PSNR) for various examples of encoded video sequences, using a true VBR transmission channel, and using an EBR transmission utilizing DCCH plus SCH, and PDCH, when the channel packet loss is 1%.
  • the video sequences illustrated in FIG. 13 are standard video sequences that are well known to those in the art and are used to provide "common" video sequences for evaluating various techniques, such as video compression, error correction, and transmission.
  • the true VBR 1302 sequences have the largest PSNR drop followed by the EBR using PDCH 1306 and then the EBR using DCCH plus SCH 1304 .
  • FIG. 13 illustrates that when a transmission channel experiences 1% packet loss the distortion, as measured by PSNR, for the VBR sequence is more severe than for the EBR sequences.
  • FIG. 14 is a bar graph illustrating the relative drop in peak signal to noise ratio (PSNR) when the channel loss is 5% for various examples of standard encoded video sequences, using a true VBR 1402 , EBR using DCCH plus SCH 1404 , and EBR using PDCH 1406 .
  • VBR 1402 sequence suffered approximately a 2.5 dB drop in PSNR
  • EBR using PDCH 1406 and EBR using DCCH plus SCH 1404 suffered drops in PSNR of approximately 1.4 and 0.8 dB respectively.
  • FIGS. 13 and 14 illustrate that as the transmission channel packet loss increases, the distortion, as measured by PSNR, is more severe for the VBR sequence than for the EBR sequences.
  • FIG. 15 is a bar graph illustrating the percentage of defective macroblocks received for the encoded video sequences of FIG. 13 , using a true VBR 1502 , EBR using DCCH and SCH 1504 , and EBR using PDCH 1506 , when the channel packet loss is 1%.
  • FIG. 16 is a bar graph illustrating the percentage of defective macroblocks received for the encoded video sequences of FIG. 14 , using a true VBR 1602 , EBR using DCCH and SCH 1604 , and EBR using PDCH 1606 , when the channel packet loss is 5%. Comparison of these graphs shows that in both cases the percentage of defective macroblocks is greater in the VBR sequences than in the EBR sequences.
  • FIG. 17 is a graph illustrating the rate distortion of one of the standard encoded video sequences, entitled "Foreman." As shown in FIG. 17, four different cases are illustrated showing the PSNR versus bit rate. The first two cases show the video sequence encoded using VBR 1702 and 1704. The next two cases show the video sequence encoded using EBR15, where EBR15 is EBR using DCCH plus SCH configured as RS2 and 8× in RC5 respectively, as listed in Case 15 in Table 2 above. The VBR and EBR data streams are transmitted over a "clean" channel 1702 and 1706 and a "noisy" channel 1704 and 1708 .
  • VBR encoded sequence that is transmitted over a clean channel 1702 has the highest PSNR for all bit rates.
  • EBR15 encoded sequence that is transmitted over a clean channel 1706 has nearly the same PSNR performance, or rate distortion, for all bit rates.
  • This example illustrates that when there are no packets lost during transmission there can be sufficient granularity in an EBR encoding configuration to have nearly equal performance to a true VBR encoding configuration.
  • when the VBR encoded sequence is transmitted over a noisy channel 1704 , the PSNR drops significantly, over 3 dB, across all bit rates.
  • when the EBR15 encoded sequence is transmitted over the same noisy channel 1708 , its PSNR performance degrades over all bit rates, but its performance only drops about 1 dB.
  • the PSNR performance of an EBR15 encoded sequence is about 2 dB higher than that of a VBR encoded sequence transmitted over the same noisy channel.
  • as FIG. 17 shows, in a clean channel the rate distortion performance of EBR15 encoding is comparable to VBR encoding, and when the channel becomes noisy, the rate distortion performance of EBR15 encoding is superior to VBR encoding.
  • FIG. 18 is a graph, similar to FIG. 17, illustrating the rate distortion curves of another encoded video sequence, entitled "Carphone." Again, four different cases are illustrated showing the PSNR versus bit rate. The first two cases show the video sequence encoded using VBR 1802 and 1804. The next two cases show the video sequence encoded using EBR15, where EBR15 is EBR using DCCH plus V-SCH configured as RS2 and 8× in RC5 respectively, as listed in Case 15 in Table 2 above. The VBR and EBR data streams are transmitted over a "clean" channel 1802 and 1806 , and a "noisy" channel 1804 and 1808 .
  • the PSNR performance of the EBR15 encoded sequence transmitted over a clean channel 1806 exceeds the performance of the VBR sequence over the clean channel 1802 .
  • the PSNR performance of the EBR15 sequence over the noisy channel 1808 exceeds that of the VBR sequence transmitted over the noisy channel 1804 by about 1.5 dB.
  • using the Carphone sequence, in both a clean and a noisy channel the rate distortion performance of EBR15 encoding was superior, as measured by PSNR, to that of VBR encoding.
  • EBR encoding also improves latency performance. For example, using EBR, video slices can be transmitted over a wireless channel without traffic shaping buffers at the encoder and the decoder. For real time services this is a significant benefit, as the overall user experience can be enhanced.
  • the denominator in the above represents the average data rate for the entire session duration I.
  • the denominator is C.
  • FIG. 19 illustrates the transmission example for a typical EBR stream encoded at an average rate of 64 kbps.
  • the cumulative bytes versus frame number is shown for the source 1902 , the transmission 1904 and the display 1906 of a multimedia stream.
  • the buffering delay is 0, but delays due to encoding, decoding and transmission are still present. However, these delays are typically much smaller than the VBR buffering delay.
  • FIG. 20 is a flow diagram illustrating an embodiment of a method of transmitting data.
  • Flow begins in block 2002 .
  • Flow then continues to block 2004 .
  • possible physical layer packet sizes of available communication channels are determined. For example, if the DCCH and SCH channels are used then the configuration of these radio channels will establish the physical layer packet sizes available, as illustrated in Table 2, above.
  • Flow then continues to block 2006 where an information unit, for example a frame of a variable bit rate data stream is received. Examples of the variable bit rate data streams include a multimedia stream, such as a video stream.
  • Flow then continues to block 2008 .
  • the information units are partitioned into slices.
  • the partitions, or slices are selected such that their size does not exceed the size of one of the possible physical layer packet sizes.
  • the partitions can be sized such that the size of each partition is no larger than at least one of the available physical layer packet sizes.
  • encoding can be performed by a source encoder equipped with a rate control module capable of generating partitions of varying size. Then, in block 2012 , it is determined whether all of the partitions of the frame have been encoded and assigned to a physical layer packet.
  • block 2014 it is determined if the flow of information has terminated, such as at the end of a session. If the flow of information has not terminated, a negative outcome at block 2014 , flow continues to block 2006 and the next information unit is received. Returning to block 2014 , if the flow of information has terminated such as the end of a session, an affirmative outcome at 2014 , then flow continues to block 2016 and the process stops.
  • FIG. 21 is a flow diagram illustrating another embodiment of a method of transmitting data.
  • Flow begins in block 2102 .
  • Flow then continues to block 2104 .
  • possible physical layer packet sizes of available communication channels are determined. For example, if the DCCH and SCH channels are used then the configuration of these radio channels will establish the physical layer packet sizes available, as illustrated in Table 2, above.
  • Flow then continues to block 2106 where an information unit is received.
  • the information unit may be variable bit rate data such as a multimedia stream, or video stream.
  • in block 2108 it is determined whether it is desirable to reconfigure the communication channels' configuration. If a communication channel that can be reconfigured during a session, such as a V-SCH channel, is being used, it may be desirable to change the channel configuration during the session. For example, if frames of data have more data than can be transmitted over the current configuration of communication channels, it may be desirable to change the configuration to a higher bandwidth so that the communication channel can support more data. If, in block 2108 , it is decided that it is not desired to reconfigure the communication channels, a negative outcome at block 2108 , the flow continues to block 2110 . In block 2110 the information unit is partitioned such that the partition sizes do not exceed the size of one of the possible physical layer packet sizes.
  • a desired physical layer packet size is determined. For example, the received information unit may be analyzed and the size of a data packet needed to transmit the entire unit may be determined.
  • a desired communication channel configuration is determined. For example, the various physical layer packet sizes of different configurations of the available communication channels can be determined and a configuration that has physical layer packets that are large enough to accommodate the information unit may be selected. The communication channels are then reconfigured accordingly.
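  • The configuration decision just described can be sketched as choosing the cheapest configuration whose largest packet accommodates the information unit. The configuration labels and sizes follow Cases 1-4 of Table 2; the selection rule itself is an illustrative assumption.

```python
CONFIGS = {  # V-SCH multiplier -> largest combined DCCH + SCH packet, bytes
    "2x RC3": 60, "4x RC3": 100, "8x RC3": 180, "16x RC3": 340,
}

def choose_config(unit_bytes):
    """Smallest configuration that can carry the unit in one packet, else None."""
    fitting = {k: v for k, v in CONFIGS.items() if v >= unit_bytes}
    if not fitting:
        return None  # the unit must instead be partitioned across packets
    return min(fitting, key=fitting.get)

print(choose_config(150))  # -> 8x RC3
print(choose_config(50))   # -> 2x RC3
```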
  • the partition is encoded and assigned to a physical layer data packet.
  • encoding the information can include using a source encoder equipped with a rate control module capable of generating partitions of varying size. Flow then continues to block 2118.
  • block 2118 it is determined if all of the partitions of the information unit have been encoded and assigned to a physical layer packet. If they have not, a negative outcome at block 2118 , then flow continues to block 2110 and the next partition is encoded and assigned to a physical layer packet. Returning to block 2118 , if all of the partitions of the information unit have been encoded and assigned to a physical layer packet, an affirmative outcome at block 2118 , then flow continues to block 2120 .
  • block 2120 it is determined if the information flow has terminated, such as at the end of a session. If the information flow has not terminated, a negative outcome at block 2120 , then flow continues to block 2106 and the next information unit is received. Returning to block 2120 , if the information flow is terminated, an affirmative outcome at block 2120 , then flow continues to block 2122 and the process stops.
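As a rough illustration of the partitioning step in this flow (block 2110), the greedy sketch below splits an information unit so that no partition exceeds an available physical layer packet payload size. This is a hypothetical Python sketch; the function name and the example payload sizes are illustrative placeholders, not values taken from Table 2 or the cdma2000 specification.

```python
def partition_unit(unit_size, packet_sizes):
    """Split an information unit of unit_size bytes into partitions,
    each no larger than one of the available physical layer packet
    payload sizes (cf. block 2110 of FIG. 21)."""
    sizes = sorted(packet_sizes, reverse=True)  # try largest payloads first
    partitions = []
    remaining = unit_size
    while remaining > 0:
        # Largest payload that the remaining data can fill; the tail
        # goes into the smallest available packet (padded on the air).
        fit = next((s for s in sizes if s <= remaining), sizes[-1])
        partitions.append(min(fit, remaining))
        remaining -= partitions[-1]
    return partitions
```

For instance, a 200-byte unit against assumed payload sizes of 21, 45, 93 and 189 bytes would be split into a 189-byte partition and an 11-byte tail carried in the smallest packet.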
  • FIG. 22 is a block diagram of a wireless communication device, or a mobile station (MS), constructed in accordance with an exemplary embodiment of the present invention.
  • the communication device 2202 includes a network interface 2206 , codec 2208 , a host processor 2210 , a memory device 2212 , a program product 2214 , and a user interface 2216 .
  • Signals from the infrastructure are received by the network interface 2206 and sent to the host processor 2210 .
  • the host processor 2210 receives the signals and, depending on the content of the signal, responds with appropriate actions. For example, the host processor 2210 may decode the received signal itself, or it may route the received signal to the codec 2208 for decoding. In another embodiment, the received signal is sent directly to the codec 2208 from the network interface 2206 .
  • the network interface 2206 may be a transceiver and an antenna to interface to the infrastructure over a wireless channel. In another embodiment, the network interface 2206 may be a network interface card used to interface to the infrastructure over landlines.
  • the codec 2208 may be implemented as a digital signal processor (DSP), or a general processor such as a central processing unit (CPU).
  • Both the host processor 2210 and the codec 2208 are connected to a memory device 2212 .
  • the memory device 2212 may be used to store data during operation of the WCD, as well as store program code that will be executed by the host processor 2210 or the DSP 2208 .
  • the host processor, codec, or both may operate under the control of programming instructions that are temporarily stored in the memory device 2212 .
  • the host processor 2210 and codec 2208 also can include program storage memory of their own. When the programming instructions are executed, the host processor 2210 or codec 2208 , or both, perform their functions, for example decoding or encoding multimedia streams.
  • the programming steps implement the functionality of the respective host processor 2210 and codec 2208 , so that the host processor and codec can each be made to perform the functions of decoding or encoding content streams as desired.
  • the programming steps may be received from a program product 2214 .
  • the program product 2214 may store the programming steps, and transfer them into the memory 2212 for execution by the host processor, codec, or both.
  • the program product 2214 may be semiconductor memory chips, such as RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, as well as other storage devices such as a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art that may store computer readable instructions. Additionally, the program product 2214 may be a source file including the program steps, which is received from the network, stored into memory, and then executed. In this way, the processing steps necessary for operation in accordance with the invention may be embodied on the program product 2214. In FIG. 22, the exemplary storage medium is shown coupled to the host processor 2210 such that the host processor may read information from, and write information to, the storage medium. Alternatively, the storage medium may be integral to the host processor 2210.
  • the user interface 2216 is connected to both the host processor 2210 and the codec 2208 .
  • the user interface 2216 may include a display and a speaker used to output multimedia data to the user.
  • DSP digital signal processor
  • ASIC application specific integrated circuit
  • FPGA field programmable gate array
  • a general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
  • a processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
  • a software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium.
  • the storage medium may be integral to the processor.
  • the processor and the storage medium may reside in an ASIC.
  • the ASIC may reside in a user terminal.
  • the processor and the storage medium may reside as discrete components in a user terminal.

Abstract

Methods and apparatus are described for transmitting information units over a plurality of constant bit rate communication channels. The techniques include encoding the information units, thereby creating a plurality of data packets. The encoding is constrained such that the data packet sizes match physical layer packet sizes of the communication channels. The information units may include a variable bit rate data stream, multimedia data, video data, and audio data. The communication channels include CDMA channels, WCDMA channels, GSM channels, GPRS channels, and EDGE channels.

Description

    CLAIM OF PRIORITY UNDER 35 U.S.C. §119
  • The present application for patent claims priority to U.S. Provisional Application No. 60/571,673, entitled “Multimedia Packets Carried by CDMA Physical Layer Products”, filed May 13, 2004, and assigned to the assignee hereof and hereby expressly incorporated by reference herein.
  • REFERENCE TO CO-PENDING APPLICATIONS FOR PATENT
  • The present application for patent is related to the following co-pending U.S. patent applications:
      • “Method And Apparatus For Allocation Of Information To Channels Of A Communication System”, having Attorney Docket No. 030166U2, filed concurrently herewith, assigned to the assignee hereof, and expressly incorporated in its entirety by reference herein; and
      • “Header Compression Of Multimedia Data Transmitted Over A Wireless Communication System”, having Attorney Docket No. 030166U3, filed concurrently herewith, assigned to the assignee hereof, and expressly incorporated in its entirety by reference herein; and
      • “Synchronization Of Audio And Video Data In A Wireless Communication System”, having Attorney Docket No. 030166U4, filed concurrently herewith, assigned to the assignee hereof, and expressly incorporated in its entirety by reference herein.
    BACKGROUND
  • I. Field
  • The present invention relates generally to delivery of information over a communication system, and more specifically, to partitioning of information units to match a physical layer packet of a constant bit rate communication link.
  • II. Background
  • Demand for the delivery of multimedia data over various communication networks is increasing. For example, consumers desire the delivery of video over various communication channels, such as the Internet, wire-line and radio networks. Multimedia data can be different formats and data rates, and the various communication networks use different mechanisms for transmission of real time data over their respective communication channels.
  • One type of communication network that has become commonplace is mobile radio networks for wireless communications. Wireless communication systems have many applications including, for example, cellular telephones, paging, wireless local loops, personal digital assistants (PDAs), Internet telephony, and satellite communication systems. A particularly important application is cellular telephone systems for mobile subscribers. As used herein, the term “cellular” system encompasses both cellular and personal communications services (PCS) frequencies. Various over-the-air interfaces have been developed for such cellular telephone systems including frequency division multiple access (FDMA), time division multiple access (TDMA), and code division multiple access (CDMA).
  • Different domestic and international standards have been established to support the various air interfaces including, for example, Advanced Mobile Phone Service (AMPS), Global System for Mobile (GSM), General Packet Radio Service (GPRS), Enhanced Data GSM Environment (EDGE), Interim Standard 95 (IS-95) and its derivatives, IS-95A, IS95B, ANSI J-STD-008 (often referred to collectively herein as IS-95), and emerging high-data-rate systems such as cdma2000, Universal Mobile Telecommunications Service (UMTS), and wideband CDMA (WCDMA). These standards are promulgated by the Telecommunication Industry Association (TIA), 3rd Generation partnership Project (3GPP), European Telecommunication Standards Institute (ETSI), and other well-known standards bodies.
  • Users, or customers, of mobile radio networks, such as cellular telephone networks, would like to receive streaming media such as video, multimedia, and Internet Protocol (IP) over a wireless communication link. For example, customers desire to be able to receive video, such as a teleconference or television broadcasts, on their cell phone or other portable wireless communication device. Other examples of the type of data that customers desire to receive with their wireless communication device include multimedia multicast/broadcast and Internet access.
  • There are different types of sources of multimedia data and different types of communication channels on which it is desired to transmit the streaming data. For example, a multimedia data source can produce data at a constant bit rate (CBR) or a variable bit rate (VBR). In addition, the communication channel can transmit data at a CBR or a VBR. Table 1 below lists various combinations of data sources and communication channels.
    TABLE 1
    Source  Channel  Example
    CBR     CBR      mu-law or A-law on PSTN
    VBR     VBR      MPEG-4 video over wire-line IP network; cdma2000
                     variable rate vocoders such as the 13K vocoder, EVRC,
                     and SMV over the fundamental channel (FCH)
    CBR     VBR      AMR streaming on cdma2000 FCH
    VBR     CBR      Compressed video over circuit switched wireless
                     networks (3G-324M)
  • Communication channels typically transmit data in chunks, which we refer to as physical layer packets or physical layer frames. The data generated by the multimedia source may be a continuous stream of bytes, such as a voice signal encoded using the mu-law or A-law. More frequently, the data generated by the multimedia source consists of groups of bytes, called data packets. For example, an MPEG-4 video encoder compresses visual information as a sequence of information units, which we refer to herein as video frames. Visual information is typically encoded by the encoder at a constant video frame rate, typically 25 or 30 Hz, and must be rendered at the same rate by the decoder. The video frame period is the time between two video frames and can be computed as the inverse of the video frame rate; for example, a video frame period of 40 ms corresponds to a video frame rate of 25 Hz. Each video frame is encoded into a variable number of data packets, and all the data packets are transmitted to the decoder. If a portion of a data packet is lost, that packet becomes unusable by the decoder. On the other hand, the decoder may reconstitute the video frame even if some of the data packets are lost, but at the cost of some quality degradation in the resulting video sequence. Each data packet therefore contains part of the description of the video frame, and the number of packets therefore varies from one video frame to another.
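The frame-period relationship described above is simple arithmetic; a minimal sketch (the function name is ours, supplied for illustration):

```python
def frame_period_ms(frame_rate_hz):
    """Video frame period (ms) is the inverse of the video frame rate (Hz)."""
    return 1000.0 / frame_rate_hz
```

So a 25 Hz sequence has a 40 ms frame period, and 10 frames per second corresponds to a 100 ms period.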
  • In the case when a source produces data at a constant bit rate and a communication channel transmits data at a constant rate, the communication system resources are efficiently utilized, assuming that the communication channel data rate is at least as fast as the source data rate, or if the two data rates are otherwise matched. In other words, if the constant data rate of the source is the same as the constant data rate of the channel, then the resources of the channel can be fully utilized, and the source data can be transmitted with no delay. Likewise, if the source produces data at a variable rate and the channel transmits at a variable rate, then as long as the channel data rate can support the source data rate, the two data rates can be matched and, again, the resources of the channel are fully utilized and all of the source data can be transmitted with no delay.
  • If the source produces data at a constant data rate and the channel is a variable data rate channel, then the channel resources may not be as efficiently utilized as possible. For example, in this mismatched case the statistical multiplexing gain (SMG) is less than that of a CBR source on a matched CBR channel. Statistical multiplexing gain results when the same communication channel can be used, or multiplexed, between multiple users. For example, when a communication channel is used to transmit voice, the speaker does not usually talk continuously. That is, there will be a “talk” spurt from the speaker followed by silence (listening). If the ratio of time for the “talk” spurt to the silence were, for example, 1:1, then on average the same communication channel could be multiplexed and could support two users. But in the case where the data source has a constant data rate and is delivered over a variable rate channel, there is no SMG because there is no time when the communication channel can be used by another user. That is, there is no break during “silence” for a CBR source.
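The talk/silence example above can be stated numerically. A minimal sketch, assuming the idealized model in the text (a user occupies the channel only during talk spurts):

```python
def statistical_multiplexing_gain(talk_fraction):
    """Average number of users one channel can serve when each user
    occupies it only for talk_fraction of the time (idealized SMG)."""
    return 1.0 / talk_fraction
```

With a 1:1 talk-to-silence ratio (talk_fraction = 0.5) the channel supports two users on average, as in the example above; a CBR source has talk_fraction = 1 and hence no multiplexing gain.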
  • The last case noted in Table 1 above is the situation when the source of multimedia data is a variable bit rate stream, such as a multimedia data stream like video, and it is transmitted over a communication channel that has a constant bit rate, such as a wireless radio channel with a constant bit rate assignment. In this case, delay is typically introduced between the source and the communication channel, creating “spurts” of data so that the communication channel can be efficiently utilized. In other words, the variable rate data stream is stored in a buffer and delayed long enough so that the output of the buffer can be emptied at a constant data rate, to match the channel's fixed data rate. The buffer needs to store, or delay, enough data so that it is able to maintain a constant output without “emptying” the buffer, so the CBR communication channel is fully utilized and the communication channel's resources are not wasted.
  • The encoder periodically generates video frames according to the video frame period. Video frames consist of data packets, and the total amount of data in a video frame is variable. The video decoder must render the video frames at the same video frame rate used by the encoder in order to ensure an acceptable result for the viewer. The transmission of video frames, which have a variable amount of data, at a constant video frame rate and over a constant rate communication channel can result in inefficiency. For example, if the total amount of data in a video frame is too large to be transmitted within the video frame period at the bit rate of the channel, then the decoder may not receive the entire frame in time to render it according to the video frame rate. In practice, a traffic shaping buffer is used to smooth such large variations for delivery over a constant rate channel. This introduces a delay in rendering the video, if a constant video frame rate is to be maintained by the decoder.
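The initial buffering delay that a traffic shaping buffer must introduce can be sketched as follows. This is a simplified model supplied for illustration, not the patent's method: frames are produced one per frame period, the channel drains bits at a constant rate, and the returned value is the smallest initial delay that lets every frame arrive by its render deadline.

```python
def required_buffer_delay_ms(frame_bits, period_ms, channel_bps):
    """Smallest initial delay D (ms) such that frame i, produced at
    time i*period_ms and sent over a constant-rate channel, is fully
    received by its render deadline D + i*period_ms."""
    delay, finish = 0.0, 0.0
    for i, bits in enumerate(frame_bits):
        start = max(finish, i * period_ms)   # cannot send before produced
        finish = start + bits * 1000.0 / channel_bps
        delay = max(delay, finish - i * period_ms)
    return delay
```

For example, uniform 2560-bit frames at a 40 ms period on a 64 kbps channel need only one frame period of delay, while doubling the first frame to 5120 bits pushes the required delay to 80 ms.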
  • Another problem is that if data from multiple video frames is contained in a single physical layer packet, then the loss of a single physical layer packet results in degradation of multiple video frames. Even for the situations when the data packets are close to the physical layer packet sizes, the loss of one physical layer packet can result in the degradation of multiple video frames.
  • There is therefore a need in the art for techniques and apparatus that can improve the transmission of variable data rate multimedia data over constant data rate channels.
  • SUMMARY
  • Embodiments disclosed herein address the above stated needs by providing methods and apparatus for transmitting information units over a constant bit rate communication channel. The techniques include partitioning the information units into data packets, wherein the sizes of the data packets are selected to match physical layer data packet sizes of a communication channel. For example, the number of bytes contained in each information unit may vary over time, and the number of bytes that each physical layer data packet of the communication channels can carry may vary independently. The techniques describe partitioning the information units, thereby creating a plurality of data packets. For example, an encoder may be constrained such that it encodes the information units into data packets of sizes that do not exceed, or “match”, the physical layer packet sizes of the communication channel. The data packets are then assigned to the physical layer data packets of the communication channel.
  • The phrase “multimedia frame”, for video, is used herein to mean a video frame that can be displayed/rendered on a display device after decoding. A video frame can be further divided into independently decodable units. In video parlance, these are called “slices.” In the case of audio and speech, the term “multimedia frame” is used herein to mean information in a time window over which speech or audio is compressed for transport and decoding at the receiver. The phrase “information unit interval” is used herein to represent the time duration of the multimedia frame described above. For example, in the case of video, the information unit interval is 100 milliseconds in the case of 10 frames per second video. Further, as an example, in the case of speech, the information unit interval is typically 20 milliseconds in cdma2000, GSM and WCDMA. From this description, it should be evident that, typically, audio/speech frames are not further divided into independently decodable units, and typically video frames are further divided into slices that are independently decodable. It should be evident from the context when the phrases “multimedia frame”, “information unit interval”, etc. refer to multimedia data of video, audio and speech.
  • The techniques can be used with various over-the-air interfaces, such as, Global System for Mobile Communication (GSM), General Packet Radio Service (GPRS), Enhanced Data GSM Environment (EDGE), or standards based on CDMA such as TIA/EIA-95-B (IS-95), TIA/EIA-98-C (IS-98), IS-2000, HRPD, Wideband CDMA (WCDMA), and others.
  • Aspects include determining possible physical layer packet sizes of at least one available constant bit rate communication channel. Information units are partitioned, thereby creating a plurality of data packets such that the size of an individual data packet does not exceed, or is matched to, one of the physical layer packets of at least one of the constant bit rate communication channels. The data packets are then encoded and assigned to the physical layer packets of the matched constant bit rate communication channel. Encoding the information can include using a source encoder equipped with a rate control module capable of generating partitions of varying size.
  • Using the techniques described, information units are encoded into a stream of data packets that are transmitted over one or more constant bit rate channels. As the information units vary in size, they may be encoded into different sized data packets, and different combinations of constant bit rate channels, with different available physical layer packet sizes, may be used to transmit the data packets. For example, an information unit may include video data that is included in video frames of different sizes, and thus different combinations of fixed bit rate communication channel physical layer packets may be selected to accommodate the transmission of the different sized video frames.
  • Other aspects include determining a physical layer packet size and an available data rate of a plurality of constant bit rate communication channels. Then, information units are assigned to data packets, wherein individual data packet sizes are selected to be a size that fits into a physical layer packet of one of the individual constant bit rate communication channels. A combination of individual constant bit rate channels may be selected such that the physical layer packet sizes match the variable bit rate data stream packet sizes. Different combinations of constant bit rate channels, for example one or more, may be selected depending on the variable bit rate data stream.
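One way to read the channel-combination selection above is as a greedy cover of the frame by per-channel physical layer payloads. The sketch below is hypothetical: the channel names and payload sizes are illustrative placeholders, not cdma2000 configuration values, and a real selection would also weigh rate and power constraints.

```python
# Illustrative per-packet payload sizes (bytes); hypothetical values.
CHANNELS = {"FCH": 21, "DCCH": 21, "SCH": 93}

def choose_channels(frame_bytes):
    """Greedily pick channels whose combined physical layer packet
    payloads cover one frame, largest payloads first."""
    chosen, remaining = [], frame_bytes
    for name, size in sorted(CHANNELS.items(), key=lambda kv: -kv[1]):
        if remaining <= 0:
            break
        chosen.append(name)
        remaining -= size
    return chosen, max(0, -remaining)   # (channels used, padding bytes)
```

Under these assumed sizes, a 100-byte frame would take the 93-byte SCH payload plus one 21-byte packet, leaving 14 bytes of padding.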
  • Another aspect is an encoder configured to accept information units. The information units are then partitioned into data packets, wherein the sizes of individual data packets do not exceed, or are matched to, a physical layer packet size of one of the available constant bit rate communication channels.
  • Another aspect is a decoder configured to accept data streams from a plurality of constant bit rate communication channels. The data streams are decoded and the decoded data streams are accumulated into a variable bit rate data stream.
  • Examples of constant bit rate communication channels include GSM, GPRS, EDGE, or standards based on CDMA such as TIA/EIA-95-B (IS-95), TIA/EIA-98-C (IS-98), IS-2000, HRPD, and Wideband CDMA (WCDMA).
  • Other features and advantages of the present invention should be apparent from the following description of exemplary embodiments, which illustrate, by way of example, aspects of the invention.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is an illustration of portions of a communication system 100 constructed in accordance with the present invention.
  • FIG. 2 is a block diagram illustrating an exemplary packet data network and various air interface options for delivering packet data over a wireless network in the FIG. 1 system.
  • FIG. 3 is a block diagram illustrating two radio frames 302 and 304 in the FIG. 1 system utilizing the GSM air interface.
  • FIG. 4 is a chart illustrating an example of variation in frame sizes for a typical video sequence in the FIG. 1 system.
  • FIG. 5 is a block diagram illustrating buffering delay used to support the transmission of frames of various sizes to be transmitted over a CBR channel in the FIG. 1 system.
  • FIG. 6 is a graph illustrating buffering delay introduced by streaming a variable bit rate (VBR) multimedia stream over a CBR channel in the FIG. 1 system.
  • FIG. 7 is a bar graph illustrating buffer delay Δb in milliseconds, for various 50-frame sequence video clips encoded with a nominal rate of 64 kbps and constant Qp for AVC/H.264 and MPEG-4 in the system.
  • FIG. 8 is a bar graph illustrating the visual quality, as represented by the well understood objective metric “peak signal to noise ratio” (PSNR), of the sequences illustrated in FIG. 7.
  • FIG. 9 is a diagram illustrating various levels of encapsulation present when transmitting multimedia data, such as video data, over a wireless link using the RTP/UDP/IP protocol in the system.
  • FIG. 10 is a diagram illustrating an example of the allocation of application data packets, such as multimedia data packets, into physical layer data packets in the system.
  • FIG. 11 illustrates an example of encoding application layer packets in accordance with the EBR technique in the system.
  • FIG. 12 is a block diagram illustrating one embodiment of a codec transmitting a VBR data stream through an IP/UDP/RTP network, such as the Internet.
  • FIG. 13 is a bar graph illustrating the relative drop in peak signal to noise ratio (PSNR) for various examples of encoded video sequences, using different encoding techniques and with a channel packet loss of 1%.
  • FIG. 14 is a bar graph illustrating the relative drop in peak signal to noise ratio (PSNR) when the channel loss is 5% for various examples of encoded video sequences.
  • FIG. 15 is a bar graph illustrating the percentage of defective data packets received for the encoded video sequences of FIG. 13.
  • FIG. 16 is a bar graph illustrating the percentage of defective data packets received for the encoded video sequences of FIG. 14.
  • FIG. 17 is a graph illustrating the PSNR of a sample encoded video sequence versus bit rate for four different cases.
  • FIG. 18 is a graph illustrating the PSNR of another encoded video sequence versus bit rate for four different cases.
  • FIG. 19 is a graph illustrating the transmission plan for an AVC/H.264 stream of average rate 64 kbps.
  • FIG. 20 is a flow diagram illustrating an embodiment of a method of transmitting data.
  • FIG. 21 is a flow diagram illustrating another embodiment of a method of transmitting data.
  • FIG. 22 is a block diagram of a wireless communication device, or a mobile station (MS), constructed in accordance with an exemplary embodiment of the present invention.
  • DETAILED DESCRIPTION
  • The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments.
  • The word “streaming” is used herein to mean real time delivery of multimedia data of a continuous nature, such as audio, speech or video information, over dedicated and shared channels in conversational, unicast and broadcast applications. The phrase “multimedia frame”, for video, is used herein to mean a video frame that can be displayed/rendered on a display device after decoding. A video frame can be further divided into independently decodable units. In video parlance, these are called “slices”. In the case of audio and speech, the term “multimedia frame” is used herein to mean information in a time window over which speech or audio is compressed for transport and decoding at the receiver. The phrase “information unit interval” is used herein to represent the time duration of the multimedia frame described above. For example, in the case of video, the information unit interval is 100 milliseconds in the case of 10 frames per second video. Further, as an example, in the case of speech, the information unit interval is typically 20 milliseconds in cdma2000, GSM and WCDMA. From this description, it should be evident that, typically, audio/speech frames are not further divided into independently decodable units, and typically video frames are further divided into slices that are independently decodable. It should be evident from the context when the phrases “multimedia frame”, “information unit interval”, etc. refer to multimedia data of video, audio and speech.
  • Techniques for transmitting information units over a plurality of constant bit rate communication channels are described. The techniques include partitioning the information units into data packets, wherein the sizes of the data packets are selected to match physical layer data packet sizes of a communication channel. For example, the information units may occur at a constant rate and the communication channels may transmit physical layer data packets at a different rate. The techniques describe partitioning the information units, thereby creating a plurality of data packets. For example, an encoder may be constrained such that it encodes the information units into sizes that match physical layer packet sizes of the communication channel. The encoded data packets are then assigned to the physical layer data packets of the communication channel. The information units may include a variable bit rate data stream, multimedia data, video data, and audio data. The communication channels include GSM, GPRS, EDGE, or standards based on CDMA such as TIA/EIA-95-B (IS-95), TIA/EIA-98-C (IS-98), IS-2000, HRPD, cdma2000, Wideband CDMA (WCDMA), and others.
  • Aspects include determining possible physical layer packet sizes of at least one available constant bit rate communication channel. Information units are partitioned, thereby creating a plurality of data packets such that the size of an individual data packet is matched to one of the physical layer packets of at least one of the constant bit rate communication channels. The data packets are then encoded and assigned to the physical layer packets of the matched constant bit rate communication channel. In this way, information units are encoded into a stream of data packets that are transmitted over one or more constant bit rate channels. As the information units vary, they may be encoded into different sized data packets, and different combinations of constant bit rate channels, with different available physical layer packet sizes, may be used to transmit the data packets. For example, an information unit may include video data that is included in frames of different sizes, and thus different combinations of fixed bit rate communication channel physical layer packets may be selected to accommodate the transmission of the different sized video frames.
  • Other aspects include determining a physical layer packet size and an available data rate of a plurality of constant bit rate communication channels. Then, information units are assigned to data packets, wherein individual data packet sizes are selected to be a size that fits into a physical layer packet of one of the individual constant bit rate communication channels. A combination of individual constant bit rate channels may be selected such that the physical layer packet sizes match the variable bit rate data stream packet sizes. Different combinations of constant bit rate channels, for example one or more, may be selected depending on the variable bit rate data stream.
  • Another aspect is an encoder configured to accept information units. The information units are then partitioned into data packets, wherein the size of individual data packets is matched to a physical layer packet size of one of the available constant bit rate communication channels.
  • Another aspect is a decoder configured to accept data streams from a plurality of constant bit rate communication channels. The data streams are decoded and the decoded data streams are accumulated into a variable bit rate data stream.
  • Examples of information units include variable bit rate data streams, multimedia data, video data, and audio data. The information units may occur at a constant repetition rate. For example, the information units may be frames of video data. Examples of constant bit rate communication channels include CDMA channels, GSM channels, GPRS channels, and EDGE channels.
  • Examples of protocols and formats for transmitting information units, such as variable bit rate data, multimedia data, video data, speech data, or audio data, from a content server or source on the wired network to a mobile are also provided. The techniques described are applicable to any type of multimedia application, such as unicast streaming, conversational, and broadcast streaming applications. For example, the techniques can be used to transmit multimedia data, such as video data (such as a content server on a wireline streaming to a wireless mobile), as well as other multimedia applications such as broadcast/multicast services, or audio and conversational services such as video telephony between two mobiles.
  • FIG. 1 shows a communication system 100 constructed in accordance with the present invention. The communication system 100 includes infrastructure 101, multiple wireless communication devices (WCD) 104 and 105, and landline communication devices 122 and 124. The WCDs will also be referred to as mobile stations (MS) or mobiles. In general, WCDs may be either mobile or fixed. The landline communication devices 122 and 124 can include, for example, serving nodes, or content servers, that provide various types of multimedia data such as streaming data. In addition, MSs can transmit streaming data, such as multimedia data.
  • The infrastructure 101 may also include other components, such as base stations 102, base station controllers 106, mobile switching centers 108, a switching network 120, and the like. In one embodiment, the base station 102 is integrated with the base station controller 106, and in other embodiments the base station 102 and the base station controller 106 are separate components. Different types of switching networks 120 may be used to route signals in the communication system 100, for example, IP networks, or the public switched telephone network (PSTN).
  • The term “forward link” or “downlink” refers to the signal path from the infrastructure 101 to a MS, and the term “reverse link” or “uplink” refers to the signal path from a MS to the infrastructure. As shown in FIG. 1, MSs 104 and 105 receive signals 132 and 136 on the forward link and transmit signals 134 and 138 on the reverse link. In general, signals transmitted from a MS 104 and 105 are intended for reception at another communication device, such as another remote unit, or a landline communication device 122 and 124, and are routed through the IP network or switching network. For example, if the signal 134 transmitted from an initiating WCD 104 is intended to be received by a destination MS 105, the signal is routed through the infrastructure 101 and a signal 136 is transmitted on the forward link to the destination MS 105. Likewise, signals initiated in the infrastructure 101 may be broadcast to a MS 105. For example, a content provider may send multimedia data, such as streaming multimedia data, to a MS 105. Typically, a communication device, such as a MS or a landline communication device, may be both an initiator of and a destination for the signals.
  • Examples of a MS 104 include cellular telephones, wireless communication enabled personal computers, and personal digital assistants (PDA), and other wireless devices. The communication system 100 may be designed to support one or more wireless standards. For example, the standards may include standards referred to as Global System for Mobile Communication (GSM), General Packet Radio Service (GPRS), Enhanced Data GSM Environment (EDGE), TIA/EIA-95-B (IS-95), TIA/EIA-98-C (IS-98), IS2000, HRPD, cdma2000, Wideband CDMA (WCDMA), and others.
  • FIG. 2 is a block diagram illustrating an exemplary packet data network and various air interface options for delivering packet data over a wireless network. The techniques described may be implemented in a packet switched data network 200 such as the one illustrated in FIG. 2. As shown in the example of FIG. 2, the packet switched data network system may include a wireless channel 202, a plurality of recipient nodes or MS 204, a sending node or content server 206, a serving node 208, and a controller 210. The sending node 206 may be coupled to the serving node 208 via a network 212 such as the Internet.
  • The serving node 208 may comprise, for example, a packet data serving node (PDSN) or a Serving GPRS Support Node (SGSN) and a Gateway GPRS Support Node (GGSN). The serving node 208 may receive packet data from the sending node 206, and serve the packets of information to the controller 210. The controller 210 may comprise, for example, a Base Station Controller/Packet Control Function (BSC/PCF) or Radio Network Controller (RNC). In one embodiment, the controller 210 communicates with the serving node 208 over a Radio Access Network (RAN). The controller 210 communicates with the serving node 208 and transmits the packets of information over the wireless channel 202 to at least one of the recipient nodes 204, such as an MS.
  • In one embodiment, the serving node 208 or the sending node 206, or both, may also include an encoder for encoding a data stream, or a decoder for decoding a data stream, or both. For example, the encoder could encode a video stream and thereby produce variable-sized frames of data, and the decoder could receive variable-sized frames of data and decode them. Because the frames are of various sizes, but the video frame rate is constant, a variable bit rate stream of data is produced. Likewise, a MS may include an encoder for encoding a data stream, or a decoder for decoding a received data stream, or both. The term "codec" is used to describe the combination of an encoder and a decoder.
  • In one example illustrated in FIG. 2, data, such as multimedia data, from the sending node 206, which is connected to the network, or Internet 212, can be sent to a recipient node, or MS 204, via the serving node, or Packet Data Serving Node (PDSN) 208, and the controller, or Base Station Controller/Packet Control Function (BSC/PCF) 210. The wireless channel 202 interface between the MS 204 and the BSC/PCF 210 is an air interface and, typically, can use many channels for signaling and bearer, or payload, data.
  • Air Interface
  • The air interface 202 may operate in accordance with any of a number of wireless standards. For example, the standards may include standards based on TDMA, such as Global System for Mobile Communication (GSM), General Packet Radio Service (GPRS), Enhanced Data GSM Environment (EDGE), or standards based on CDMA such as TIA/EIA-95-B (IS-95), TIA/EIA-98-C (IS-98), IS2000, HRPD, cdma2000, Wideband CDMA (WCDMA), and others.
  • In a system based on cdma2000, data can be transmitted on multiple channels, for example, on a fundamental channel (FCH), generally used to transmit voice, a dedicated control channel (DCCH), a supplemental channel (SCH), and a packet data channel (PDCH) as well as other channels.
  • The FCH provides a communication channel for the transmission of speech at multiple fixed rates, e.g., full rate, half rate, quarter rate and ⅛th rate. Because the FCH provides these rates, when a user's speech activity requires less than the full rate to achieve a target voice quality, the system can reduce interference to other users in the system by using one of the lower data rates. The benefit of lowering the source rate in order to increase system capacity is well known in CDMA networks.
  • DCCH is similar to FCH but provides only full rate traffic at one of two fixed rates: 9.6 kbps in radio configuration three (RC3), and 14.4 kbps in radio configuration five (RC5). This is called 1× traffic rate. SCH can be configured to provide traffic rates of 1×, 2×, 4×, 8× and 16× in cdma2000. When there is no data to be transmitted, both DCCH and SCH can cease transmission, that is, not transmit any data, also referred to as discontinuous transmission (dtx), to ensure reduced interference to the other users in the system or to stay within the transmit power budget of the base station transmitter. The PDCH can be configured to transmit data packets that are n*45 bytes, where n={1,2,4,8}.
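  • As a small worked example of the PDCH packet sizes just described (n*45 bytes for n = 1, 2, 4, 8), the following sketch computes the allowed sizes and picks the smallest physical layer packet that fits a payload:

```python
# PDCH physical layer packets are n*45 bytes with n in {1, 2, 4, 8}
# (per the text above); this helper picks the smallest size that fits.
PDCH_SIZES = [n * 45 for n in (1, 2, 4, 8)]  # [45, 90, 180, 360] bytes

def smallest_pdch_packet(payload_bytes):
    for size in PDCH_SIZES:
        if size >= payload_bytes:
            return size
    return None  # payload too large for one packet; it must be split

print(smallest_pdch_packet(60))   # 90: a 60-byte payload rides in a 2*45 packet
```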
  • The FCH and DCCH channels provide a constant delay and low data packet loss for communication of data, for example, to enable conversational services. The SCH and PDCH channels provide multiple fixed bit rate channels providing higher bandwidths, for example, 300 kbps to 3 Mbps, than the FCH and DCCH. The SCH and PDCH also have variable delays because these channels are shared among many users. In the case of SCH, multiple users are multiplexed in time, which introduces different amounts of delay depending on the system load. In the case of PDCH, the bandwidth and delay depend on, for example, the radio conditions, negotiated Quality of Service (QoS), and other scheduling considerations. Similar channels are available in systems based on TIA/EIA-95-B (IS-95), TIA/EIA-98-C (IS-98), IS2000, HRPD, UMTS, and Wideband CDMA (WCDMA).
  • It is noted that FCH provides multiple fixed bit data rates (full, half, quarter and ⅛) to conserve power required by a voice user. Typically, a voice encoder, or vocoder will use a lower data rate when the time-frequency structure of a signal to be transmitted permits higher compression without unduly compromising the quality. This technique is commonly referred to as source controlled variable bit rate vocoding. Thus, in a system based on TIA/EIA-95-B (IS-95), TIA/EIA-98-C (IS-98), IS2000, HRPD, UMTS, or cdma2000 there are multiple fixed bit rate channels available for transmitting data.
  • In a system based on CDMA, such as cdma2000, the communication channels are divided into a continuous stream of "slots." For example, the communication channels may be divided into 20 ms segments or time slots. This is also called the "Transmit Time Interval" (TTI). Data transmitted during these time slots is assembled into packets, where the size of the data packet depends on the available data rate, or bandwidth, of the channel. Thus, during any individual time slot, separate data packets may be transmitted simultaneously, each over its respective communication channel. For example, during a single time slot, a data packet may be transmitted on the DCCH channel and a different data packet may simultaneously be transmitted on the SCH channel.
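  • The per-slot packet size follows directly from the channel rate and the 20 ms TTI. A minimal sketch (the helper itself is illustrative; the rates echo the DCCH and SCH figures above):

```python
# Per-slot capacity follows from the rate and the 20 ms transmit time
# interval: bytes_per_slot = (rate in bits/s) * 0.020 s / 8 bits per byte.
TTI_S = 0.020  # 20 ms time slot

def bytes_per_slot(rate_bps):
    return int(rate_bps * TTI_S / 8)

print(bytes_per_slot(9_600))    # 24: DCCH at 1x (RC3) carries 24 bytes per slot
print(bytes_per_slot(153_600))  # 384: an SCH at 16x of 9.6 kbps
```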
  • Likewise, in a system based on GSM, or GPRS, or EDGE, data can be transmitted between the BSC 210 and MS 204 using multiple time slots within a frame. FIG. 3 is a block diagram illustrating two radio frames 302 and 304 in the GSM air interface. As shown in FIG. 3, the GSM air interface radio frames 302 and 304 are each divided into eight timeslots. Individual timeslots are assigned to particular users in the system. In addition, GSM transmission and reception use two different frequencies, and the forward link and reverse link are offset by three timeslots. For example, in FIG. 3 a downlink radio frame 302 begins at time t0 and would be transmitted at one frequency, and an uplink radio frame 304 would be transmitted at a different frequency. The downlink radio frame 302 is offset by three time slots, TS0-TS2, from the uplink radio frame. Having an offset between the downlink and uplink radio frames allows wireless communication devices, or terminals, to operate without having to transmit and receive at the same time.
  • Advancements in GSM wireless communication devices, or terminals, have resulted in GSM terminals that can receive multiple timeslots during the same radio frames. These are called “multislot classes” and can be found in Annex B of 3GPP TS 45.002, incorporated herein in its entirety. Thus, in a system based on GSM, or GPRS, or EDGE there are multiple fixed time slots available for transmitting data.
  • VBR Multimedia Characteristics
  • Variable Bit Rate (VBR) multimedia data, such as video, usually includes common characteristics. For example, video data is generally captured at a constant frame rate by a sensor, such as a camera. A multimedia transmitter generally requires a finite processing time with an upper bound to encode the video stream. A multimedia receiver generally requires a finite processing time with an upper bound to decode the video stream.
  • It is generally desirable to reconstruct multimedia frames at the same frame rate at which they were produced. For example, in the case of video it is desirable to display the reconstructed video frames at the same rate at which the video was captured at a sensor or camera. Having the reconstruction and capture rates the same makes it easier to synchronize with other multimedia elements; for example, synchronizing a video stream with an accompanying audio, or speech, stream is simplified.
  • In the case of video, from a human perception point of view, it is usually desirable to maintain a consistent level of quality. It is generally more annoying, and taxing, for a person to process a continuous multimedia stream with fluctuations in quality than to process a multimedia stream of consistent quality. For example, it is usually annoying to a person to process a video stream that includes quality artifacts such as freeze frames and blockiness.
  • Delay Considerations
  • Transporting multimedia content, for example audio/video, typically incurs delays. Some of these delays are due to codec settings and some are due to network settings, such as radio link protocol (RLP) transmissions that allow, among other things, the re-transmission and re-ordering of packets sent over the air interface. An objective methodology to assess the delay of multimedia transmissions is to observe the encoded stream. For example, a transmission cannot be decoded until a complete, independently decodable, packet has been received. Thus, delay can be affected by the size of the packets and the rate of transmission.
  • For example, if a packet is 64 kbytes in size, and it is transmitted over a 64 kbytes per second channel, then the packet cannot be decoded, and must be delayed, for 1 sec until the entire packet is received. All packets that are received would need to be delayed enough to accommodate the largest packet, so that packets can be decoded at a constant rate. For example, if video packets of varying size are transmitted, a receiver would need to delay, or buffer, all of the received packets by an amount equal to the delay needed to accommodate the largest packet size. The delay would permit the decoded video to be rendered, or displayed, at a constant rate. If the maximum packet size is not known ahead of time, then estimates of the maximum packet size, and associated delay, can be made based on the parameters used during the encoding of the packets.
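  • The delay arithmetic in this example can be sketched in a few lines (packet sizes other than the 64-kbyte worst case are illustrative):

```python
# Decode delay for a packet is its size divided by the channel rate; the
# receiver must buffer by the worst-case delay so that frames can be
# decoded at a constant rate.
def packet_delay_s(packet_bytes, channel_bytes_per_s):
    return packet_bytes / channel_bytes_per_s

packet_sizes = [16_000, 64_000, 8_000]   # bytes; 64 kbytes is the largest
channel_rate = 64_000                    # bytes per second, as in the text
worst = max(packet_delay_s(p, channel_rate) for p in packet_sizes)
print(worst)  # 1.0 -> buffer all packets by 1 second
```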
  • The technique just described can be used in assessing the delay for any video codec (H.263, AVC/H.264, MPEG-4, etc.). Further, given that only video decoders are normatively specified by the Motion Picture Expert Group (MPEG) and the International Telecommunication Union (ITU), it is useful to have an objective measure that can be used to estimate the delays introduced by different encoder implementations for mobiles in typical wireless deployments.
  • In general, video streams will have more delay than other types of data in multimedia services, for example, more delay than speech, audio, timed text, etc. Because of the longer delay typically experienced by a video stream, other multimedia data that needs to be synchronized with the video data will usually need to be intentionally delayed in order to maintain synchronization with video.
  • Encoder/Decoder Delays
  • In some multimedia encoding techniques, multimedia data frames are encoded or decoded using information from a previous reference multimedia data frame. For example, video codecs implementing the MPEG-4 standard will encode and decode different types of video frames. In MPEG-4, video is typically encoded into "I" frames and "P" frames. An I frame is self-contained, that is, it includes all of the information needed to render, or display, one complete frame of video. A P frame is not self-contained and will typically contain differential information relative to the previous frame, such as motion vectors and differential texture information. Typically, I frames are about 8 to 10 times larger than a P frame, depending on the content and encoder settings. Encoding and decoding of multimedia data introduces delays that may depend on the processing resources available. A typical implementation of this type of scheme may utilize a ping-pong buffer to allow the processing resources to simultaneously capture or display one frame and process another.
  • Video encoders such as H.263, AVC/H.264, MPEG-4, etc. are inherently variable rate in nature because of predictive coding and also due to the use of variable length coding (VLC) of many parameters. Real time delivery of variable rate bitstreams over circuit switched networks and packet switched networks is generally accomplished by traffic shaping with buffers at the sender and receiver. Traffic shaping buffers introduces additional delay which is typically undesirable. For example, additional delay can be annoying during teleconferencing when there is delay between when a person speaks and when another person hears the speech.
  • The encoder and decoder delays can affect the amount of time that the encoders and decoders have to process multimedia data. For example, an upper bound on the time allowed for an encoder and decoder to process data and maintain a desired frame rate is given by:
    Δe + Δd ≤ 1/f  Eq. 1
    where Δe and Δd represent the encoder and decoder delays, respectively; and
    f is the desired frame rate, in frames per second (fps), for a given service.
  • For example, video data typically has desired frame rates of 15 fps, 10 fps, or 7.5 fps. The upper bound on the time allowed for an encoder and decoder to process the data while maintaining the desired frame rate is then 66.7 ms, 100 ms, and 133 ms for frame rates of 15 fps, 10 fps, and 7.5 fps, respectively.
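  • Equation 1 and the bounds above can be checked with a few lines (a straightforward evaluation of 1/f, nothing more):

```python
# Upper bound on combined encoder + decoder processing time per frame
# (Eq. 1): delta_e + delta_d <= 1/f for a desired frame rate f.
def processing_budget_ms(fps):
    return 1000.0 / fps

for fps in (15, 10, 7.5):
    print(f"{fps} fps -> {processing_budget_ms(fps):.1f} ms")
# 15 fps -> 66.7 ms, 10 fps -> 100.0 ms, 7.5 fps -> 133.3 ms
```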
  • Rate Control Buffer Delay
  • In general, to maintain a consistent perceptual quality of a multimedia service, a different number of bits may be required for different frames. For example, a video codec may need to use a different number of bytes to encode an I frame than a P frame to maintain a consistent quality. Thus, maintaining consistent quality at a constant frame rate results in the video stream being a variable bit rate stream. Consistent quality at an encoder can be achieved by setting the encoder "Quantization parameter" (Qp) to a constant value, or by allowing it to vary only slightly around a target Qp.
  • FIG. 4 is a chart illustrating an example of variation in frame sizes for a typical video sequence entitled "Carphone." The Carphone sequence is a standard video sequence, well known to those in the art, that provides a "common" video sequence for use in evaluating various techniques, such as video compression, error correction, and transmission. FIG. 4 shows an example of the variation in frame size, in bytes, for a sample number of frames of Carphone data encoded using MPEG-4 and AVC/H.264 encoding techniques, indicated by references 402 and 404 respectively. A desired quality of encoding can be achieved by setting the encoder parameter Qp to a desired value. In FIG. 4, the Carphone data is encoded using an MPEG-4 encoder with Qp=33 and using an AVC/H.264 encoder with Qp=33. When the encoded data streams illustrated in FIG. 4 are to be transmitted over a constant bit rate (CBR) channel, such as a typical wireless radio channel, the variations in frame size would need to be "smoothed out" to maintain a constant, or negotiated, QoS bitrate. Typically, this "smoothing out" of the variations in frame size results in the introduction of additional delay, commonly called buffering delay Δb.
  • FIG. 5 is a block diagram illustrating how a buffer can be used to support the transmission of frames of various sizes over a CBR channel. As shown in FIG. 5, data frames of varying size 502 enter the buffer 504. The buffer 504 will store a sufficient number of frames of data so that data frames of a constant size can be output from the buffer 506 for transmission over a CBR channel 508. A buffer of this type is commonly referred to as a "leaky bucket" buffer. A "leaky bucket" buffer outputs data at a constant rate, like a bucket with a hole in the bottom. If the rate at which water enters the bucket varies, then the bucket needs to maintain a sufficient amount of water to prevent it from running dry when the rate of the water entering falls below the rate of the leak. Likewise, the bucket needs to be large enough that it does not overflow when the rate of the water entering exceeds the rate of the leak. The buffer 504 works in a similar way, and the amount of data that the buffer needs to store to prevent buffer underflow results in a delay corresponding to the length of time that the data stays in the buffer.
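  • The leaky bucket behavior can be sketched as a simple simulation (frame sizes and drain rate are illustrative; occupancy is tracked per frame tick, and the peak occupancy indicates the buffer size, and hence delay, required):

```python
# "Leaky bucket" sketch: variable-size frames enter the buffer; a constant
# number of bytes drains per frame tick. Occupancy is clamped at zero (no
# underflow), and the peak occupancy sets the required buffer size.
def simulate_leaky_bucket(frame_sizes, drain_per_tick):
    occupancy, peak = 0, 0
    for size in frame_sizes:
        occupancy += size                               # frame arrives
        occupancy = max(occupancy - drain_per_tick, 0)  # constant-rate leak
        peak = max(peak, occupancy)
    return peak

frames = [200, 50, 50, 400, 50]            # bytes, illustrative VBR frames
print(simulate_leaky_bucket(frames, 150))  # 250: peak backlog in bytes
```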
  • FIG. 6 is a graph illustrating buffering delay introduced by streaming a variable bit rate (VBR) multimedia stream over a CBR channel in the FIG. 1 system. As illustrated in FIG. 6, a video signal is encoded using a VBR encoding scheme, MPEG-4, producing a VBR stream. The number of bytes in the VBR stream is illustrated in FIG. 6 by a line 602 representing the cumulative, or total, number of bytes required to transmit a given number of video frames. In this example, the MPEG-4 stream is encoded at an average bit rate of 64 kbps and is transmitted over a 64 kbps CBR channel. The number of bytes that are transmitted by the CBR channel is represented by a constant slope line 604 corresponding to the constant transmission rate of 64 kbps.
  • To avoid buffer underflow at the decoder, due to insufficient data received at the decoder to allow a full video frame to be decoded, the display, or playout, 606 at the decoder needs to be delayed. In this example, the delay is 10 frames, or 1 second, for a desired display rate of 10 fps. In this example, a constant rate of 64 kbps was used for the channel, but if an MPEG4 stream that has an average data rate 64 kbps is transmitted over a 32 kbps CBR channel, the buffering delay would increase with the length of the sequence. For example, for the 50-frame sequence illustrated in FIG. 6, the buffering delay would increase to 2 seconds.
  • In general, the buffering delay Δb due to buffer underflow constraints can be computed as follows:
      B(i) = Σ_{j=0..i} R(j) − Σ_{j=0..i} C(j), with B(i) ≥ 0  Eq. 2
      C(i) = BW(i)/(f × 8)  Eq. 3
    where:
      • B(i)=Buffer occupancy at the encoder in bytes at time i (video frame #i)
      • R(i)=Encoder output in bytes at time i (video frame #i)
      • C(i)=Number of bytes that can be transmitted in one frame tick i
      • f=Desired number of frames per second
      • BW(i)=Available bandwidth at time i
        Note that for the special case of CBR transmission,
        C(i) = C ∀ i  Eq. 4
  • To avoid decoder buffer underflow, or buffer starvation, during the entire presentation, playout has to be delayed by the time required to transmit the maximum buffer occupancy at the encoder. Thus, the buffering delay can be represented as:
      Δb = max_i { Be(i) / [(1/I) Σ_{i=1..I} C(i)] }  Eq. 5
  • The denominator in Equation 5 represents the average data rate for the entire session duration I. For a CBR channel assignment, the denominator is C. The above analysis can also be used to estimate the nominal encoder buffer size required to avoid overflow at the encoder, by computing max{Be(i)} over all i for a set of exemplar sequences.
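  • Equations 2 through 5 can be sketched directly (frame sizes are illustrative, and the delay is expressed in frame ticks rather than seconds):

```python
# Sketch of Eq. 2-5: cumulative encoder output minus cumulative channel
# capacity gives the encoder buffer occupancy B(i); the buffering delay is
# the peak occupancy divided by the average channel rate over the session.
def buffering_delay(R, C):
    """R[i]: encoder bytes at frame i; C[i]: channel bytes per frame tick."""
    B, cum = [], 0
    for r, c in zip(R, C):
        cum = max(cum + r - c, 0)   # Eq. 2, with the B(i) >= 0 constraint
        B.append(cum)
    avg_rate = sum(C) / len(C)      # denominator of Eq. 5
    return max(B) / avg_rate        # delay in frame ticks

# CBR channel: C(i) = C for all i (Eq. 4). Values are illustrative.
R = [800, 100, 100, 100, 100]   # a large I frame followed by small P frames
C = [200] * 5
print(buffering_delay(R, C))    # 3.0 frame ticks of playout delay
```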
  • MPEG-4 and AVC/H.264 Buffer Delay Example
  • FIG. 7 is a bar graph illustrating buffer delay Δb, in milliseconds, for various 50-frame video clips encoded at a nominal rate of 64 kbps and constant Qp for AVC/H.264 and MPEG-4. As shown in FIG. 7, the MPEG-4 frame sequence of FIG. 6 is represented by a bar 702 indicating a buffer delay of 1000 ms. The same video sequence encoded using AVC/H.264 is represented by a bar 704 indicating a buffer delay of 400 ms. Additional examples of 50-frame video clips are shown in FIG. 7, where the buffer delay associated with each sequence, encoded with both MPEG-4 and AVC/H.264, is indicated.
  • FIG. 8 is a bar graph illustrating the video quality, as represented by peak signal to noise ratio (PSNR), of the sequences illustrated in FIG. 7. As shown in FIG. 8, the Carphone sequence encoded using MPEG-4 with Qp=15 is represented by a bar 802 indicating a PSNR of about 28 dB. The same sequence encoded using AVC/H.264 with Qp=33 is indicated by a bar 804 indicating a PSNR of about 35 dB.
  • Transmission Channel Delay
  • Transmission delay Δt depends on the number of retransmissions used and a certain constant time for a given network. It can be assumed that Δt has a nominal value when no retransmissions are used. For example, it may be assumed that Δt has a nominal value of 40 ms when no retransmissions are used. If retransmissions are used, the Frame Erasure Rate (FER) drops, but the delay will increase. The delay will depend, at least in part, on the number of retransmissions and associated overhead delays.
  • Error Resiliency Considerations
  • When transmitting RTP streams over a wireless link, or channel, there will generally be some residual packet losses because RTP streams are delay sensitive, and ensuring 100% reliable transmission by means of a re-transmission protocol, such as RLP or RLC, is not practical. To assist in understanding the effect of channel errors, a description of various protocols, such as the RTP/UDP/IP protocol, is provided below. FIG. 9 is a diagram illustrating the various levels of encapsulation present when transmitting multimedia data, such as video data, over a wireless link using the RTP/UDP/IP protocol.
  • As shown in FIG. 9, a video codec generates a payload 902 that includes information describing a video frame. The payload 902 may be made up of several video packets (not depicted). The payload 902 includes a Slice_Header (SH) 904. Thus, an application layer data packet 905 consists of the video data 902 and the associated Slice_Header 904. As the payload passes through a network, such as the Internet, additional header information may be added. For example, a real-time protocol (RTP) header 906, a user datagram protocol (UDP) header 908, and an Internet protocol (IP) header 910 may be added. These headers provide information used to route the payload from its source to its destination.
  • Upon entering the wireless network, a point to point protocol (PPP) header 912 is added to provide framing information for serializing the packets into a continuous stream of bits. A radio link protocol, for example, RLP in cdma2000 or RLC in W-CDMA, then packs the stream of bits into RLP packets 914. The radio-link protocol allows, among other things, the re-transmission and re-ordering of packets sent over the air interface. Finally, the air interface MAC-layer takes one or more RLP packets 914, packs them into MUX layer packet 916, and adds a multiplexing header (MUX) 918. A physical layer channel coder then adds a checksum (CRC) 920 to detect decoding errors, and a tail part 922 forming a physical layer packet 925.
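  • As a rough illustration of the cost of this encapsulation, the minimum RTP, UDP, and IPv4 header sizes (12, 8, and 20 bytes respectively) already consume a substantial fraction of a small packet, before the PPP, RLP, and MUX framing is counted:

```python
# Minimum header sizes added in front of the payload by the wired-network
# encapsulation (RTP 12 bytes, UDP 8 bytes, IPv4 20 bytes). Wireless-side
# PPP/RLP/MUX framing would add further overhead not counted here.
HEADERS = {"RTP": 12, "UDP": 8, "IPv4": 20}

def overhead_fraction(payload_bytes):
    h = sum(HEADERS.values())        # 40 bytes of headers in total
    return h / (h + payload_bytes)

print(round(overhead_fraction(160), 3))  # 0.2 -> 20% of the packet is headers
```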
  • The successive uncoordinated encapsulations illustrated in FIG. 9 have several consequences for the transmission of multimedia data. One such consequence is that there may be a mismatch between application layer data packets 905 and physical layer packets 925. Because portions of a single application layer data packet 905 may be included in more than one physical layer data packet 925, and the entire application layer data packet 905 is needed for it to be properly decoded, losing one physical layer packet 925 can result in the loss of an entire application layer packet 905. Another consequence is that if portions of more than one application layer data packet 905 are included in a physical layer data packet 925, then the loss of a single physical layer data packet 925 can result in the loss of more than one application layer data packet 905.
  • FIG. 10 is a diagram illustrating an example of conventional allocation of application data packets 905, such as multimedia data packets, into physical layer data packets 925. Shown in FIG. 10 are two application data packets 1002 and 1004. The application data packets can be multimedia data packets; for example, each data packet 1002 and 1004 can represent a video frame. The uncoordinated encapsulations illustrated in FIG. 10 can result in a physical layer packet having data from a single application data packet or from more than one application data packet. As shown in FIG. 10, a first physical layer data packet 1006 can include data from a single application layer packet 1002, while a second physical layer data packet 1008 can include data from more than one application data packet 1002 and 1004. In this example, if the first physical layer data packet 1006 is "lost", or corrupted during transmission, then a single application layer data packet 1002 is lost. On the other hand, if the second physical layer packet 1008 is lost, then two application data packets 1002 and 1004 are lost.
  • For example, if the application layer data packets are two successive video frames, then the loss of the first physical layer data packet 1006 results in the loss of a single video frame. But loss of the second physical layer data packet results in the loss of both video frames: because portions of both video frames are lost, neither of the video frames can be properly decoded, or recovered, by a decoder.
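  • The loss amplification illustrated in FIG. 10 can be sketched as follows. Application packets are laid end to end across fixed-size physical layer packets, and an application packet counts as lost if any of its bytes fall in the lost physical layer packet (the sizes used are illustrative):

```python
# Sketch of FIG. 10's loss amplification: losing one physical layer packet
# destroys every application packet that had any bytes inside it.
def lost_app_packets(app_sizes, phys_size, lost_phys_index):
    lost, offset = set(), 0
    for app_idx, size in enumerate(app_sizes):
        first = offset // phys_size              # first physical packet touched
        last = (offset + size - 1) // phys_size  # last physical packet touched
        if first <= lost_phys_index <= last:
            lost.add(app_idx)
        offset += size
    return sorted(lost)

# Two application packets of 120 and 100 bytes over 100-byte physical packets:
print(lost_app_packets([120, 100], 100, 0))  # [0]: only the first frame lost
print(lost_app_packets([120, 100], 100, 1))  # [0, 1]: both straddle packet 1
```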
  • Explicit Bit Rate (EBR) Control
  • Use of a technique referred to as explicit bit rate control (EBR), rather than CBR or VBR, can improve the transmission of a VBR source over a CBR channel. In EBR, information units are partitioned into data packets such that the size of the data packets matches the size of an available physical layer packet. For example, a VBR stream of data, such as video data, may be partitioned into data packets so that the application layer data packets match the physical layer data packets of the communication channel that the data is going to be transported over. For example, in EBR an encoder may be constrained, or configured, to output bytes at time i (previously denoted R(i)) that match "the capacity" of the physical channel used to deliver the data stream in any over-the-air standard, such as GSM, GPRS, EDGE, TIA/EIA-95-B (IS-95), TIA/EIA-98-C (IS-98), cdma2000, Wideband CDMA (WCDMA), and others. In addition, the encoder may be constrained so that it produces data packets that are sized the same as, or smaller than, the physical layer data packets of the communication channel, i.e., the same number of bytes or fewer. Also, the encoder can be constrained so that each application layer data packet that it outputs is independently decodable. Simulations of the EBR technique on an AVC/H.264 reference encoder show that there is no perceivable loss in quality when the encoder is constrained in accordance with the EBR techniques, provided an adequate number of explicit rates is used to constrain the VBR encoding. Constraints for some example channels are described below.
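  • A minimal sketch of the EBR constraint follows. The explicit rate set below is illustrative (it happens to reuse the n*45-byte pattern described earlier, but any rate set negotiated for the channel could be substituted):

```python
# EBR sketch: constrain each application layer packet so its size matches
# one of the available physical layer packet sizes (equal or smaller,
# never larger). The rate set below is an illustrative assumption.
EXPLICIT_RATES = [45, 90, 180, 360]  # allowed physical layer payload sizes

def ebr_target_size(desired_bytes):
    """Largest explicit rate not exceeding the encoder's desired output."""
    fitting = [s for s in EXPLICIT_RATES if s <= desired_bytes]
    return max(fitting) if fitting else min(EXPLICIT_RATES)

print(ebr_target_size(200))  # 180: the encoder is told to emit 180 bytes
print(ebr_target_size(30))   # 45: below the smallest rate, pad up to 45
```

The encoder's rate control is then steered toward the returned target rather than left free-running, which is what keeps application packets aligned with physical layer packets.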
  • Multimedia Encoding and Decoding
  • As noted, multimedia encoders, for example video encoders, may generate multimedia frames of variable size. For example, in some compression techniques, each new multimedia frame may include all of the information needed to fully render the frame content, while other frames may include information about changes to the content relative to a previously fully rendered frame. For example, as noted above, in a system based on MPEG-4 compression techniques, video frames may typically be of two types: I frames or P frames. I frames are self-contained, similar to JPEG files, in that each I frame contains all the information needed to render, or display, one complete frame. In contrast, P frames typically include information relative to the previous frame, such as differential information and motion vectors. Therefore, because P frames rely on previous frames, a P frame is not self-contained and cannot be used to render, or display, a complete frame without reliance on a previous frame; in other words, a P frame cannot be self-decoded. Here, the word "decoded" is used to mean full reconstruction for displaying a frame. Typically, I frames are larger than P frames, for example, about 8 to 10 times larger depending on the content and encoder settings.
• In general, each frame of data can be partitioned into portions, or "slices", such that each slice can be independently decoded, as described further below. In one case, a frame of data may be contained in a single slice; in other cases a frame of data is divided into multiple slices. For example, if the frame of data is video information, then the video frame may be included within a single independently decodable slice, or the frame may be divided into more than one independently decodable slice. In one embodiment, each encoded slice is configured so that the size of the slice matches an available size of a communication channel physical layer data packet. If the encoder is encoding video information, then each slice is configured such that the size of each video slice matches an available size of a physical layer packet. In other words, frame slice sizes are matched to physical layer packet sizes.
• An advantage of making slices a size that matches an available communication channel physical layer packet size is that there is a one-to-one correspondence between the application layer packets and the physical layer data packets. This helps alleviate some of the problems associated with uncoordinated encapsulation as illustrated in FIG. 10. Thus, if a physical layer data packet is corrupted, or lost, during transmission, only the corresponding slice is lost. Also, if each slice of a frame is independently decodable, then the loss of one slice of a frame will not prevent the decoding of the other slices of the frame. For example, if a video frame is divided into five slices, such that each slice is independently decodable and matched to a physical layer data packet, then corruption, or loss, of one of the physical layer data packets will result in the loss of only the corresponding slice, and the physical layer packets that are successfully transmitted can be successfully decoded. Thus, although the entire video frame may not be decoded, portions of it may be. In this example, four of the five video slices will be successfully decoded, thereby allowing the video frame to be rendered, or displayed, albeit at reduced quality.
• For example, if video slices are communicated from a sending node to a MS, in a system based on cdma2000, using the DCCH and SCH channels, then the video slices will be sized to match these available channels. As noted above, the DCCH can be configured to support multiple fixed data rates. In a system based on cdma2000, for example, the DCCH can support data transmission rates of either 9.6 kbps or 14.4 kbps depending on the selected rate set (RS), RS1 and RS2 respectively. The SCH can also be configured to support multiple fixed data rates, depending on the SCH radio configuration (RC). The SCH supports multiples of 9.6 kbps when configured in RC3 and multiples of 14.4 kbps when configured in RC5. The SCH data rates are:
    SCH DATA RATE=(n*RC data rate)  Eq. 6
    where n=1, 2, 4, 8, or 16 depending on the channel configuration.
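The rates given by Eq. 6 can be sketched as follows. This is an illustrative helper, not part of any cdma2000 implementation; the function name is hypothetical:

```python
# Sketch of Eq. 6: the SCH data rate is a multiple of the radio-configuration
# base rate (RC3 -> 9600 bps, RC5 -> 14400 bps, per the text).
BASE_RATE_BPS = {"RC3": 9600, "RC5": 14400}

def sch_data_rate_bps(rc, n):
    """SCH DATA RATE = n * RC base rate, with n in {1, 2, 4, 8, 16}."""
    if n not in (1, 2, 4, 8, 16):
        raise ValueError("n must be 1, 2, 4, 8, or 16")
    return n * BASE_RATE_BPS[rc]
```

For example, 2× in RC3 gives 19200 bps and 16× in RC5 gives 230400 bps.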
  • Table 2, below, illustrates possible physical layer data packet sizes for the DCCH and SCH channels in a communication system based on cdma2000. The first column identifies a case, or possible configuration. The second and third columns are the DCCH rate set and SCH radio configuration respectively. The fourth column has four entries. The first is the dtx case where no data is sent on either DCCH or SCH. The second is the physical layer data packet size of a 20 ms time slot for the DCCH channel. The third entry is the physical layer data packet size of a 20 ms time slot for the SCH channel. The fourth entry is the physical layer data packet size of a 20 ms time slot for a combination of the DCCH and SCH channels.
TABLE 2
Possible Physical Layer Packet Sizes
for Combinations of DCCH and SCH

      DCCH           SCH             Physical Layer Packet Sizes (bytes)
Case  Configuration  Configuration   dtx   DCCH   SCH   DCCH + SCH
 1    RS1            2× in RC3        0     20     40       60
 2    RS1            4× in RC3        0     20     80      100
 3    RS1            8× in RC3        0     20    160      180
 4    RS1            16× in RC3       0     20    320      340
 5    RS2            2× in RC3        0     31     40       71
 6    RS2            4× in RC3        0     31     80      111
 7    RS2            8× in RC3        0     31    160      191
 8    RS2            16× in RC3       0     31    320      351
 9    RS1            2× in RC5        0     20     64       84
10    RS1            4× in RC5        0     20    128      148
11    RS1            8× in RC5        0     20    256      276
12    RS1            16× in RC5       0     20    512      532
13    RS2            2× in RC5        0     31     64       95
14    RS2            4× in RC5        0     31    128      159
15    RS2            8× in RC5        0     31    256      287
16    RS2            16× in RC5       0     31    512      543
  • It should be noted that there is a tradeoff to be considered when an application layer data packet is too large to fit into the DCCH or SCH physical layer data packets and instead a combined DCCH plus SCH packet is going to be used. A tradeoff in deciding to encode an application layer data packet so that it is sized to fit into a combined DCCH plus SCH data packet size, versus making two packets, is that a larger application layer packet, or slice, generally produces better compression efficiency, while smaller slices generally produce better error resiliency. For example, a larger slice generally requires less overhead. Referring to FIG. 9, each slice 902 has its own slice header 904. Thus, if two slices are used instead of one, there are two slice headers added to the payload, resulting in more data needed to encode the packet and thereby reducing compression efficiency. On the other hand, if two slices are used, one transmitted on the DCCH and the other transmitted on the SCH, then corruption, or loss, of only one of either the DCCH or SCH data packets would still allow recovery of the other data packet, thereby improving error resiliency.
• To help in understanding Table 2, the derivations of Cases 1 and 9 will be explained in detail. In Case 1, DCCH is configured as RS1, corresponding to a data rate of 9.6 kbps. Because the channels are divided into 20 ms time slots, within an individual time slot the amount of data, or physical layer packet size, that can be transmitted on DCCH configured as RS1 is:
  9600 bits/sec * 20 msec = 192 bits = 24 bytes  Eq. 7
  Because of additional overhead that is added to the physical layer packet, for example, RLP for error correction, only 20 bytes are available for the application layer data packet, which includes the slice and the slice header. Thus, the second entry in the fourth column of Table 2 for Case 1 is 20.
• The SCH for Case 1 is configured as 2× in RC3. RC3 corresponds to a base data rate of 9.6 kbps, and the 2× means that the channel data rate is two times the base data rate. Thus, within an individual time slot the amount of data, or physical layer packet size, that can be transmitted on SCH configured as 2× RC3 is:
  2 * 9600 bits/sec * 20 msec = 384 bits = 48 bytes  Eq. 8
  Here, because of additional overhead that is added to the physical layer packet, only 40 bytes are available for the application layer data packet, which includes the slice and the slice header. Thus, the third entry in the fourth column of Table 2 for Case 1 is 40. The fourth entry in the fourth column of Table 2 for Case 1 is the sum of the second and third entries, or 60.
• Case 9 is similar to Case 1. In both cases the DCCH is configured as RS1, corresponding to a physical layer packet size of 20 bytes. The SCH in Case 9 is configured as 2× in RC5. RC5 corresponds to a base data rate of 14.4 kbps, and the 2× means that the channel data rate is two times the base data rate. Thus, within an individual time slot the amount of data, or physical layer packet size, that can be transmitted on SCH configured as 2× RC5 is:
  2 * 14400 bits/sec * 20 msec = 576 bits = 72 bytes  Eq. 9
  Here, because of additional overhead that is added to the physical layer packet, only 64 bytes are available for the application layer data packet, which includes the slice and the slice header. Thus, the third entry in the fourth column of Table 2 for Case 9 is 64. The fourth entry in the fourth column of Table 2 for Case 9 is the sum of the second and third entries, or 84.
• The other entries in Table 2 are determined in a similar manner, where RS2 corresponds to the DCCH having a data rate of 14.4 kbps, corresponding to 36 bytes within a 20 msec time slot, of which 31 are available to the application layer. It is noted that the dtx operation, with zero payload size, is available in all cases; in dtx no data is transmitted on either channel. When the user data can be transmitted in fewer than the available physical layer slots (of 20 ms each), dtx is used in the subsequent slots, reducing the interference to other users in the system.
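The derivations above can be collected into a short sketch that reproduces the Table 2 entries. The 20- and 31-byte DCCH payloads are the values stated in the text; the per-multiple SCH payloads (20 bytes per 1× of RC3, 32 bytes per 1× of RC5) are inferred from the table entries, and the function name is hypothetical:

```python
# Sketch reproducing Table 2 from the worked examples in the text.
# Raw 20 ms slot sizes follow Eqs. 7-9 (rate * 0.020 s / 8 bits per byte);
# usable payloads after RLP overhead are the values stated in the text.
DCCH_PAYLOAD = {"RS1": 20, "RS2": 31}        # raw 24 and 36 bytes, less overhead
SCH_PAYLOAD_PER_X = {"RC3": 20, "RC5": 32}   # usable bytes per 1x of the base rate

def table2_sizes(rate_set, rc, n):
    """Return (dtx, DCCH, SCH, DCCH+SCH) payload sizes in bytes for one case."""
    dcch = DCCH_PAYLOAD[rate_set]
    sch = SCH_PAYLOAD_PER_X[rc] * n
    return (0, dcch, sch, dcch + sch)
```

For instance, `table2_sizes("RS1", "RC3", 2)` yields the Case 1 entries (0, 20, 40, 60).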
• As illustrated in Table 2 above, by configuring the multiple fixed data rate channels available, for example DCCH and SCH, a set of CBR channels can behave similarly to a VBR channel. That is, configuring the multiple fixed rate channels can make a CBR channel behave as a pseudo-VBR channel. Techniques that take advantage of the pseudo-VBR channel include determining the possible physical layer data packet sizes corresponding to the bit rates of a plurality of available constant bit rate communication channels, and encoding a variable bit rate stream of data to create a plurality of data packets such that the size of each of the data packets is matched to one of the physical layer data packet sizes.
• In one embodiment, the configuration of the communication channels is established at the beginning of a session and then either not changed throughout the session or only changed infrequently. For example, the SCH discussed in the above example is generally set to a configuration and remains in that configuration throughout the entire session. That is, the SCH described is a fixed rate SCH. In another embodiment, the channel configuration can be changed dynamically during the session. For example, a variable rate SCH (V-SCH) can change its configuration for each time slot. That is, during one time slot a V-SCH can be configured in one configuration, such as 2×RC3, and in the next time slot the V-SCH can be configured to a different configuration, such as 16×RC3 or any other possible configuration of V-SCH. A V-SCH provides additional flexibility, and can improve system performance in EBR techniques.
• If the configuration of the communication channel is fixed for the entire session, then application layer packets, or slices, are selected so that they fit into one of the available physical layer data packets. For example, if the DCCH and SCH are configured as RS1 and 2×RC3, as illustrated in Case 1 in Table 2, then the application layer slices would be selected to fit into either 0 byte, 20 byte, 40 byte, or 60 byte packets. Likewise, if the channels were configured as RS1 and 16×RC3, as illustrated in Case 4 of Table 2, then the application layer slices would be selected to fit into either 0 byte, 20 byte, 320 byte, or 340 byte packets. If a V-SCH channel were used, then it is possible to change between different configurations for each slice. For example, if the DCCH is configured as RS1 and the V-SCH is configured in RC3, then it is possible to change between any of the V-SCH configurations 2×RC3, 4×RC3, 8×RC3, or 16×RC3, corresponding to Cases 1-4 of Table 2. Selection between these various configurations provides physical layer data packets of 0 byte, 20 byte, 40 byte, 60 byte, 80 byte, 100 byte, 160 byte, 180 byte, 320 byte, or 340 byte, as illustrated in Cases 1-4 of Table 2. Thus, in this example, using a V-SCH channel allows application layer slices to be selected to fit into any of the ten different physical layer data packet sizes listed in Cases 1-4 of Table 2. In the case of cdma2000, the size of the data delivered is estimated by the MS, and this process is called "Blind Detection."
• A similar technique can be used in Wideband CDMA (WCDMA) using a Data Channel (DCH). The DCH, similarly to the V-SCH, supports different physical layer packet sizes. For example, the DCH can support rates of 0 to nx in multiples of 40 octets, where 'nx' corresponds to the maximum allocated rate of the DCH channel. Typical values of nx include 64 kbps, 128 kbps and 256 kbps. In a technique referred to as "Explicit Indication" the size of the data delivered can be indicated using additional signaling, thereby eliminating the need to do blind detection. For example, in the case of WCDMA, the size of the data packet delivered may be indicated using the "Transport Format Combination Indicator" (TFCI), so that the MS does not have to do blind detection, thereby reducing the computational burden on the MS when packets of variable sizes are used, as in EBR. The EBR concepts described are applicable to both blind detection and explicit indication of the packet sizes.
• By selecting application layer data packets so that they fit into the physical layer data packets, a combination of constant bit rate communication channels, with their aggregate data rate, can transmit a VBR data stream with performance similar to, and in some cases superior to, a VBR communication channel. In one embodiment, a variable bit rate data stream is encoded into a stream of data packets that are of a size that matches the physical layer data packet size of available communication channels, and the data packets are then transmitted over a combination of constant bit rate channels. In another embodiment, as the bit rate of the variable bit rate data stream varies, it may be encoded into different sized data packets, and different combinations of constant bit rate channels may be used to transmit the data packets.
• For example, different frames of video data may be of different sizes, and thus different combinations of fixed bit rate communication channels may be selected to accommodate the transmission of the different sized video frames. In other words, variable bit rate data can be efficiently transmitted over constant bit rate channels by assigning data packets to at least one of the constant bit rate communication channels so as to match the aggregate bit rate of the constant bit rate communication channels to the bit rate of the variable bit rate stream.
• Another aspect is that the encoder can be constrained so as to limit the total number of bits used to represent the variable bit rate data stream to a pre-selected maximum number of bits. That is, if the variable bit rate data stream is a frame of multimedia data, such as video, the frame may be divided into slices, where the slices are selected such that each slice can be independently decoded and the number of bits in the slice is limited to a pre-selected number of bits. For example, if the DCCH and SCH channels are configured as RS1 and 2×RC3 respectively (Case 1 in Table 2), then the encoder can be constrained so that a slice will be no larger than 20 bytes, 40 bytes, or 60 bytes.
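The size constraint just described can be sketched as choosing the smallest available physical layer payload that holds a slice. The helper below is a hypothetical illustration using the Case 1 sizes from Table 2; a real EBR encoder would enforce the limit during rate control rather than after the fact:

```python
# Sketch: constrain a slice to the smallest available physical layer payload.
# Sizes below are the non-dtx payloads for Case 1 of Table 2 (RS1, 2x RC3).
CASE1_SIZES = (20, 40, 60)  # bytes: DCCH only, SCH only, DCCH + SCH

def target_packet_size(slice_bytes, sizes=CASE1_SIZES):
    """Return the smallest payload that fits the slice, or None if none fits."""
    for size in sorted(sizes):
        if slice_bytes <= size:
            return size
    return None  # slice must be re-encoded smaller or split
```

A 18-byte slice would be assigned the 20-byte DCCH payload, a 55-byte slice the 60-byte DCCH plus SCH payload, and a 61-byte slice would have to be re-encoded.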
• In another embodiment, EBR can be used to transmit multimedia data over the cdma2000 packet data channel (PDCH). The PDCH can be configured to transmit data packets that are n*45 bytes, where n={1, 2, 4, 8}. Again, when using the PDCH, the multimedia data, for example video data, can be partitioned into "slices" that match the available physical layer packet sizes. In cdma2000, the PDCH has different data rates available for the forward PDCH (F-PDCH) and the reverse PDCH (R-PDCH); the F-PDCH has slightly less bandwidth available than the R-PDCH. While this difference in bandwidth can be taken advantage of, in some cases it is advantageous to limit the R-PDCH to the same bandwidth as the F-PDCH. For example, if a first MS transmits a video stream to a second MS, the video stream will be transmitted by the first MS on the R-PDCH and received by the second MS on the F-PDCH. If the first MS used the entire bandwidth of the R-PDCH, then some of the data stream would have to be removed to conform to the bandwidth of the F-PDCH transmission to the second MS. To alleviate difficulties associated with reformatting the transmission from the first MS so that it can be transmitted to the second MS on a channel with a smaller bandwidth, the bandwidth of the R-PDCH can be limited so that it is the same as that of the F-PDCH. One way to do this is to limit the application data packet sizes sent on the R-PDCH to those supported by the F-PDCH and then add "stuffing bits" for the remaining bits in the R-PDCH physical layer packet. In other words, if stuffing bits are added to the R-PDCH data packets so as to match the F-PDCH data packets, then the R-PDCH data packets can be used on the F-PDCH forward link with minimal change, for example, by just dropping the stuffing bits.
  • Using the technique just described, Table 3 lists possible physical layer data packet sizes for the F-PDCH and R-PDCH for four possible data rate cases, one for each value of n, and the number of “stuffing bits” that will be added to the R-PDCH.
TABLE 3
Possible Physical Layer Packet Sizes for PDCH
and "Stuffing Bits" for R-PDCH

     Physical Layer Packet Size (bytes)   R-PDCH
n    F-PDCH and R-PDCH                    Stuffing bits
1     45                                    0
2     90                                   24
4    180                                   72
8    360                                  168
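The stuffing-bit scheme of Table 3 can be sketched as follows. The per-n values are taken directly from the table; the function name is hypothetical:

```python
# Sketch: pad an R-PDCH payload with the Table 3 "stuffing bits" so the
# same packet can be forwarded on the F-PDCH by simply dropping the pad.
STUFFING_BITS = {1: 0, 2: 24, 4: 72, 8: 168}   # keyed by n, from Table 3
PACKET_BYTES = {1: 45, 2: 90, 4: 180, 8: 360}  # n * 45 bytes

def r_pdch_frame_bits(n):
    """Total R-PDCH physical layer bits for a given n: payload plus stuffing."""
    return PACKET_BYTES[n] * 8 + STUFFING_BITS[n]
```

For n=2, the R-PDCH physical layer frame carries 90 bytes of F-PDCH-compatible payload plus 24 stuffing bits, for 744 bits in total.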
• As with EBR using DCCH plus SCH, when a multimedia stream, such as a video stream, is partitioned into slices, smaller slice sizes generally improve error resiliency but may compromise compression efficiency. Likewise, if larger slices are used, in general there will be an increase in compression efficiency, but system performance may degrade due to lost packets, because the loss of an individual packet results in the loss of more data.
• Likewise, the technique of matching multimedia data, such as video slices, to an available size of a physical layer packet can be performed in systems based on other over-the-air standards. For example, in a system based on GSM, GPRS, or EDGE, the multimedia frames, such as video slices, can be sized to match the available timeslots. As noted above, many GSM, GPRS and EDGE devices are capable of receiving multiple timeslots. Thus, depending on the number of timeslots available, an encoded stream of frames can be constrained so that the video slices are matched to the physical layer packets. In other words, the multimedia data can be encoded so that packet sizes match an available size of a physical layer packet, such as the GSM timeslot, and the aggregate data rate of the physical layer packets used supports the data rate of the multimedia data.
  • EBR Performance Considerations
  • As noted, when an encoder of multimedia data streams operates in an EBR mode it generates multimedia slices matched to the physical layer, and therefore there is no loss in compression efficiency as compared to true VBR mode. For example, a video codec operating in accordance with the EBR technique generates video slices matched to the particular physical layer over which the video is transmitted. In addition, there are benefits with respect to error resilience, lower latency, and lower transmission overhead. Details of these benefits are explained further below.
  • Performance in Channel Errors
• As discussed in reference to FIG. 10, in a conventional encapsulation, when a physical layer packet is lost, more than one application layer packet may be lost. In the EBR technique, each physical layer packet loss in the wireless link results in the loss of exactly one application layer packet.
• FIG. 11 illustrates an example of encoding application layer packets in accordance with the EBR technique. As noted above, application layer packets may be of various sizes. As discussed in Tables 2 and 3, the physical layer packets may also be of various sizes; for example, the physical layer may be made up of channels that use different sizes of physical layer data packets. In the example of FIG. 11, there are four application layer packets 1102, 1104, 1106, and 1108 and four physical layer packets 1110, 1112, 1114, and 1116 illustrated. Three different examples of matching the application layer packets to the physical layer packets are illustrated. First, a single application layer packet can be encoded so that it is transmitted within multiple physical layer packets. In the example shown in FIG. 11, a single application layer packet 1102 is encoded into two physical layer packets 1110 and 1112. For example, if DCCH and SCH are configured as RS1 and 2×RC3 respectively (Case 1 in Table 2) and the application layer data packet is 60 bytes, then it could be transmitted over the two physical layer packets corresponding to the DCCH and SCH packet combination. It is envisioned that a single application layer packet can be encoded into any number of physical layer packets corresponding to available communication channels. A second example illustrated in FIG. 11 is that a single application layer packet 1104 is encoded into a single physical layer packet 1114. For example, if the application layer data packet is 40 bytes, it could be transmitted using just the SCH physical layer data packet in Case 1 of Table 2. In both of these examples the loss of a single physical layer packet results in the loss of only a single application layer packet.
• A third example illustrated in FIG. 11 is that multiple application layer packets can be encoded into a single physical layer packet 1116. In the example shown in FIG. 11, two application layer packets 1106 and 1108 are encoded and transmitted in a single physical layer packet. It is envisioned that more than two application layer packets may be encoded to fit within a single physical layer packet. A drawback to this example is that the loss of the single physical layer packet 1116 would result in the loss of multiple application layer packets 1106 and 1108. However, there may be tradeoffs, such as full utilization of the physical layer, that would warrant encoding multiple application layer packets to be transmitted within a single physical layer packet.
• FIG. 12 is a block diagram illustrating one embodiment of a codec transmitting a VBR data stream through an IP/UDP/RTP network, such as the Internet. As shown in FIG. 12, the codec generates an application layer data packet 1202 that includes a payload, or slice, 1204 and a slice header 1206. The application layer data packet 1202 passes through the network, where IP/UDP/RTP header information 1208 is appended to it. The packet then passes through the wireless network, where an RLP header 1210 and a MUX header 1212 are appended to the packet. Because the sizes of the IP/UDP/RTP header 1208, RLP header 1210, and MUX header 1212 are known, the codec selects a size for the slice 1204 so that the slice and all associated headers fit into the physical layer data packet, or payload, 1216.
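The slice-sizing rule of FIG. 12 amounts to subtracting the known header sizes from the physical layer payload. In the sketch below the header sizes are purely illustrative placeholders, not values from the text; on a real air link the IP/UDP/RTP headers would typically be compressed:

```python
# Sketch of the slice budget in FIG. 12: the codec knows every header that
# will be prepended, so it sizes the slice to fill what remains of the
# physical layer payload. Header sizes passed in are illustrative only.
def max_slice_bytes(physical_payload_bytes, header_bytes):
    """Largest slice (including its slice header) that still fits."""
    budget = physical_payload_bytes - sum(header_bytes)
    if budget <= 0:
        raise ValueError("headers alone exceed the physical layer payload")
    return budget
```

For example, a 180-byte PDCH payload with hypothetical 4-byte compressed IP/UDP/RTP, 2-byte RLP, and 1-byte MUX headers would leave a 173-byte slice budget.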
  • FIG. 13 is a bar graph illustrating the relative drop in peak signal to noise ratio (PSNR) for various examples of encoded video sequences, using a true VBR transmission channel, and using an EBR transmission utilizing DCCH plus SCH, and PDCH, when the channel packet loss is 1%. The video sequences illustrated in FIG. 13 are standard video sequences, that are well known to those in the art, and are used to provide “common” video sequences for use in evaluating various techniques, such as video compression, error correction and transmission. As shown in FIG. 13, the true VBR 1302 sequences have the largest PSNR drop followed by the EBR using PDCH 1306 and then the EBR using DCCH plus SCH 1304. For example, in the Carphone sequence the true VBR 1302 sequence suffered approximately a 1.5 dB drop in PSNR, while the EBR using PDCH 1306 and EBR using DCCH and SCH 1304 suffered drops in PSNR of approximately 0.8 and 0.4 dB respectively. FIG. 13 illustrates that when a transmission channel experiences 1% packet loss the distortion, as measured by PSNR, for the VBR sequence is more severe than for the EBR sequences.
• FIG. 14, similar to FIG. 13, is a bar graph illustrating the relative drop in peak signal to noise ratio (PSNR) when the channel packet loss is 5% for various examples of standard encoded video sequences, using true VBR 1402, EBR using DCCH plus SCH 1404, and EBR using PDCH 1406. As shown in FIG. 14, the true VBR 1402 sequences have the largest PSNR drop, followed by the EBR using PDCH 1406 and then the EBR using DCCH plus SCH 1404. For example, in the Carphone sequence the true VBR 1402 sequence suffered approximately a 2.5 dB drop in PSNR, while the EBR using PDCH 1406 and EBR using DCCH plus SCH 1404 suffered drops in PSNR of approximately 1.4 and 0.8 dB respectively. Comparing FIGS. 14 and 13 illustrates that as the transmission channel packet loss increases, the distortion, as measured by PSNR, for the VBR sequences is more severe than for the EBR sequences.
• FIG. 15 is a bar graph illustrating the percentage of defective macroblocks received for the encoded video sequences of FIG. 13, using true VBR 1502, EBR using DCCH and SCH 1504, and EBR using PDCH 1506, when the channel packet loss is 1%. FIG. 16 is a bar graph illustrating the percentage of defective macroblocks received for the encoded video sequences of FIG. 14, using true VBR 1602, EBR using DCCH and SCH 1604, and EBR using PDCH 1606, when the channel packet loss is 5%. Comparison of these graphs shows that in both cases the percentage of defective macroblocks is greater in the VBR sequences than in the EBR sequences. It is noted that in EBR, because the slices are matched to the physical layer packet size, the percentage of defective slices should be the same as the packet loss rate. However, because slices can include different numbers of macroblocks, the loss of one data packet, corresponding to one slice, can result in a different number of defective macroblocks than the loss of a different data packet corresponding to a different slice that includes a different number of macroblocks.
• FIG. 17 is a graph illustrating the rate distortion of one of the standard encoded video sequences, entitled "Foreman." As shown in FIG. 17, four different cases are illustrated showing PSNR versus bit rate. The first two cases show the video sequence encoded using VBR 1702 and 1704. The next two cases show the video sequence encoded using EBR15, where EBR15 is EBR using DCCH plus SCH configured as RS2 and 8× in RC5 respectively, as listed in Case 15 in Table 2 above. The VBR and EBR data streams are transmitted over a "clean" channel 1702 and 1706 and a "noisy" channel 1704 and 1708. As noted above, in a clean channel there are no packets lost during transmission, and a noisy channel loses 1% of the data packets. As shown in FIG. 17, the VBR encoded sequence that is transmitted over a clean channel 1702 has the highest PSNR for all bit rates. But the EBR15 encoded sequence that is transmitted over a clean channel 1706 has nearly the same PSNR performance, or rate distortion, for all bit rates. Thus, there is a very small drop in performance between VBR and EBR15 encoding when the transmission channel is clean. This example illustrates that when there are no packets lost during transmission there can be sufficient granularity in an EBR encoding configuration to have nearly equal performance to a true VBR encoding configuration.
• When the VBR encoded sequence is transmitted over a noisy channel 1704, the PSNR drops significantly, over 3 dB, across all bit rates. But when the EBR15 encoded sequence is transmitted over the same noisy channel 1708, although its PSNR performance degrades over all bit rates, its performance only drops about 1 dB. Thus, when transmitting over a noisy channel, the PSNR performance of an EBR15 encoded sequence is about 2 dB higher than that of a VBR encoded sequence transmitted over the same noisy channel. As FIG. 17 shows, in a clean channel the rate distortion performance of EBR15 encoding is comparable to VBR encoding, and when the channel becomes noisy the rate distortion performance of EBR15 encoding is superior to VBR encoding.
• FIG. 18 is a graph, similar to FIG. 17, illustrating the rate distortion curves of another encoded video sequence, entitled "Carphone." Again, four different cases are illustrated showing PSNR versus bit rate. The first two cases show the video sequence encoded using VBR 1802 and 1804. The next two cases show the video sequence encoded using EBR15, where EBR15 is EBR using DCCH plus V-SCH configured as RS2 and 8× in RC5 respectively, as listed in Case 15 in Table 2 above. The VBR and EBR data streams are transmitted over a "clean" channel 1802 and 1806, and a "noisy" channel 1804 and 1808. In this example, the PSNR performance of the EBR15 encoded sequence transmitted over a clean channel 1806 exceeds the performance of the VBR sequence over the clean channel 1802. The PSNR performance of the EBR15 sequence over the noisy channel 1808 exceeds that of the VBR sequence transmitted over the noisy channel 1804 by about 1.5 dB. In this example, using the Carphone sequence in both a clean and a noisy channel resulted in the rate distortion performance of EBR15 encoding being superior, as measured by PSNR, to that of the VBR encoding.
  • Latency Considerations
• Use of EBR encoding improves latency performance. For example, using EBR, video slices can be transmitted over a wireless channel without traffic shaping buffers at the encoder and the decoder. For real time services this is a significant benefit, as the overall user experience can be enhanced.
  • To illustrate the buffering delay due to the variable bitrate (VBR) nature of video encoding, consider a transmission plan for a typical sequence encoded at an average bit rate of 64 kbps and transmitted over a 64 kbps CBR channel, shown in FIG. 6. In order to avoid buffer underflow at the decoder, the display, represented by curve 608, needs to be delayed. In this example, the delay is 10 frames or 1 second for a desired display rate of 10 fps.
• The delay Δb due to buffer underflow constraints can be computed as follows:
  B(i) = Σ_{j=0}^{i} R(j) − Σ_{j=0}^{i} C(j), with B(i) ≥ 0
  C(i) = BW(i)/(f*8)  Eq. 10
    where
    • B(i)=Buffer occupancy at the encoder in bytes at frame i
    • R(i)=Encoder output in bytes for frame i
    • C(i)=No. of bytes that can be transmitted in frame interval i
    • f=Desired number of frames per second
    • BW(i)=Available bandwidth in bits at frame interval i
Note that for the special case of CBR transmission, C(i) = C ∀ i.
• In order to avoid decoder buffer starvation during the entire presentation, play out has to be delayed by the time required to transmit the maximum buffer occupancy at the encoder.
  Δb = max_i { B(i) / [ (1/I) Σ_{i=1}^{I} C(i) ] }  Eq. 11
• The denominator in the above represents the average data rate for the entire session duration I. For a CBR channel assignment, the denominator is C. For the EBR case, if the aggregate channel bandwidth for a given 100-ms duration is greater than the frame size, i.e., C(i) ≥ R(i) ∀ i ∈ I, there is no buffering delay. It then follows that the buffer occupancy at the encoder is 0, as data can be transmitted as it arrives. That is,
    B(i)=R(i)−C(i)=0.  Eq. 12
  • Note that video frames typically span multiple MAC layer frames K (slots). If it is possible to vary C(i) over the K slots so that all of R(i) can be transmitted, then the delay Δb due to buffering is 0, as B(i) is 0.
Δb = max{B(i)/C(i)} ∀ i  Eq. 13
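The delay computation of Eqs. 10-12 can be sketched as follows, using the recursive, clamped form of the encoder buffer occupancy B(i); R and C hold per-frame-interval byte counts, and the function name is hypothetical:

```python
# Sketch of Eqs. 10-12: encoder buffer occupancy B(i) and playout delay.
# R[i] is the encoder output in bytes for frame i; C[i] is the number of
# bytes the channel can carry in frame interval i.
def playout_delay(R, C):
    """Delay in frame intervals; 0 when C(i) >= R(i) for all i (Eq. 12)."""
    occupancy, peak = 0, 0
    for r, c in zip(R, C):
        # B(i) >= 0: the buffer cannot go negative when the channel is idle.
        occupancy = max(occupancy + r - c, 0)
        peak = max(peak, occupancy)
    avg_rate = sum(C) / len(C)  # denominator of Eq. 11; equals C for CBR
    return peak / avg_rate
```

With a CBR channel, C(i) is constant; a burst of large I frames raises B(i) and hence the delay, as in the FIG. 6 example, while a channel that always keeps up (C(i) ≥ R(i)) gives zero delay.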
  • FIG. 19 illustrates the transmission example for a typical EBR stream encoded at an average rate of 64 kbps. In FIG. 19 the cumulative bytes versus frame number is shown for the source 1902, the transmission 1904 and the display 1906 of a multimedia stream. In the example of FIG. 19, buffering delay is 0, but delays due to encoding, decoding and transmission are still present. However, these delays are typically much smaller when compared to the VBR buffering delay.
  • FIG. 20 is a flow diagram illustrating an embodiment of a method of transmitting data. Flow begins in block 2002. Flow then continues to block 2004. In block 2004 possible physical layer packet sizes of available communication channels are determined. For example, if the DCCH and SCH channels are used then the configuration of these radio channels will establish the physical layer packet sizes available, as illustrated in Table 2, above. Flow then continues to block 2006 where an information unit, for example a frame of a variable bit rate data stream is received. Examples of the variable bit rate data streams include a multimedia stream, such as a video stream. Flow then continues to block 2008.
  • In block 2008 the information units are partitioned into slices. The partitions, or slices, are selected such that their size does not exceed the size of one of the possible physical layer packet sizes. For example, the partitions can be sized such that the size of each partition is no larger than at least one of the available physical layer packet sizes. Flow then continues to block 2010 where the partition is encoded and assigned to a physical layer packet. For example, the encoding can be performed by a source encoder equipped with a rate control module capable of generating partitions of varying size. Then, in block 2012 it is determined if all of the partitions of the frame have been encoded and assigned to a physical layer packet. If they have not, a negative outcome at block 2012, then flow continues to block 2010 and the next partition is encoded and assigned to a physical layer packet. Returning to block 2012, if all of the partitions of the frame have been encoded and assigned to a physical layer packet, an affirmative outcome at block 2012, then flow continues to block 2014.
  • In block 2014 it is determined if the flow of information has terminated, such as at the end of a session. If the flow of information has not terminated, a negative outcome at block 2014, flow continues to block 2006 and the next information unit is received. Returning to block 2014, if the flow of information has terminated such as the end of a session, an affirmative outcome at 2014, then flow continues to block 2016 and the process stops.
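The partitioning step of FIG. 20 (block 2008) can be sketched as a simple greedy loop. This is an illustrative sketch, not the patent's encoder: it cuts each slice at the largest available packet size, and the packet sizes used below are hypothetical values standing in for a concrete channel configuration such as the one in Table 2.

```python
def partition_information_unit(unit_len, packet_sizes):
    """Split an information unit of unit_len bytes into slices, each sized
    so as not to exceed at least one available physical layer packet size.

    Greedy sketch: repeatedly cut the largest slice that fits the largest
    available packet. packet_sizes are hypothetical illustrative values.
    """
    largest = max(packet_sizes)
    slices = []
    remaining = unit_len
    while remaining > 0:
        take = min(remaining, largest)  # each slice fits a physical layer packet
        slices.append(take)
        remaining -= take
    return slices

# A 400-byte frame over hypothetical 171/80/40-byte packets:
print(partition_information_unit(400, [171, 80, 40]))  # [171, 171, 58]
```

A real rate-controlled encoder would instead shape each slice during encoding so it fits a target packet size, rather than cutting an already-encoded unit.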
  • FIG. 21 is a flow diagram illustrating another embodiment of a method of transmitting data. Flow begins in block 2102. Flow then continues to block 2104. In block 2104 possible physical layer packet sizes of available communication channels are determined. For example, if the DCCH and SCH channels are used then the configuration of these radio channels will establish the physical layer packet sizes available, as illustrated in Table 2, above. Flow then continues to block 2106 where an information unit is received. For example, the information unit may be variable bit rate data such as a multimedia stream, or video stream. Flow then continues to block 2108.
  • In block 2108 it is determined whether it is desirable to reconfigure the communication channels. If a communication channel is being used that can be reconfigured during a session, such as a V-SCH channel, it may be desirable to change the channel configuration during a session. For example, if frames of data contain more data than can be transmitted over the current configuration of the communication channels, it may be desirable to change the configuration to a higher bandwidth so that the communication channels can support more data. In block 2108, if it is determined that reconfiguration of the communication channels is not desired, a negative outcome at block 2108, then flow continues to block 2110. In block 2110 the information unit is partitioned into sizes such that their size does not exceed the size of one of the possible physical layer packet sizes. Returning to block 2108, if it is determined that reconfiguration of the communication channel is desired, an affirmative outcome at block 2108, flow continues to block 2112. In block 2112 a desired physical layer packet size is determined. For example, the received information unit may be analyzed and the size of a data packet needed to transmit the entire unit may be determined. Flow then continues to block 2114. In block 2114 a desired communication channel configuration is determined. For example, the various physical layer packet sizes of different configurations of the available communication channels can be determined, and a configuration whose physical layer packets are large enough to accommodate the information unit may be selected. The communication channels are then reconfigured accordingly. Flow then continues to block 2110 where the information unit is partitioned into sizes such that their size matches the size of one of the possible physical layer packet sizes of the reconfigured communication channels. Flow then continues to block 2116.
In block 2116 the partition is encoded and assigned to a physical layer data packet. For example, the encoding can be performed by a source encoder equipped with a rate control module capable of generating partitions of varying size. Flow then continues to block 2118.
  • In block 2118 it is determined if all of the partitions of the information unit have been encoded and assigned to a physical layer packet. If they have not, a negative outcome at block 2118, then flow continues to block 2110 and the next partition is encoded and assigned to a physical layer packet. Returning to block 2118, if all of the partitions of the information unit have been encoded and assigned to a physical layer packet, an affirmative outcome at block 2118, then flow continues to block 2120.
  • In block 2120 it is determined if the information flow has terminated, such as at the end of a session. If the information flow has not terminated, a negative outcome at block 2120, then flow continues to block 2106 and the next information unit is received. Returning to block 2120, if the information flow is terminated, an affirmative outcome at block 2120, then flow continues to block 2122 and the process stops.
  • FIG. 22 is a block diagram of a wireless communication device, or a mobile station (MS), constructed in accordance with an exemplary embodiment of the present invention. The communication device 2202 includes a network interface 2206, codec 2208, a host processor 2210, a memory device 2212, a program product 2214, and a user interface 2216.
  • Signals from the infrastructure are received by the network interface 2206 and sent to the host processor 2210. The host processor 2210 receives the signals and, depending on the content of the signal, responds with appropriate actions. For example, the host processor 2210 may decode the received signal itself, or it may route the received signal to the codec 2208 for decoding. In another embodiment, the received signal is sent directly to the codec 2208 from the network interface 2206.
  • In one embodiment, the network interface 2206 may be a transceiver and an antenna to interface to the infrastructure over a wireless channel. In another embodiment, the network interface 2206 may be a network interface card used to interface to the infrastructure over landlines. The codec 2208 may be implemented as a digital signal processor (DSP), or a general processor such as a central processing unit (CPU).
  • Both the host processor 2210 and the codec 2208 are connected to a memory device 2212. The memory device 2212 may be used to store data during operation of the WCD, as well as to store program code that will be executed by the host processor 2210 or the DSP 2208. For example, the host processor, codec, or both, may operate under the control of programming instructions that are temporarily stored in the memory device 2212. The host processor 2210 and codec 2208 also can include program storage memory of their own. When the programming instructions are executed, the host processor 2210 or codec 2208, or both, perform their functions, for example decoding or encoding multimedia streams. Thus, the programming steps implement the functionality of the respective host processor 2210 and codec 2208, so that the host processor and codec can each be made to perform the functions of decoding or encoding content streams as desired. The programming steps may be received from a program product 2214. The program product 2214 may store the programming steps and transfer them into the memory 2212 for execution by the host processor, codec, or both.
  • The program product 2214 may be semiconductor memory chips, such as RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, as well as other storage devices such as a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art that may store computer readable instructions. Additionally, the program product 2214 may be a source file including the program steps that is received from the network, stored into memory, and then executed. In this way, the processing steps necessary for operation in accordance with the invention may be embodied on the program product 2214. In FIG. 22, the exemplary storage medium is shown coupled to the host processor 2210 such that the host processor may read information from, and write information to, the storage medium. Alternatively, the storage medium may be integral to the host processor 2210.
  • The user interface 2216 is connected to both the host processor 2210 and the codec 2208. For example, the user interface 2216 may include a display and a speaker used to output multimedia data to the user.
  • Those of skill in the art will recognize that the steps of a method described in connection with an embodiment may be interchanged without departing from the scope of the invention.
  • Those of skill in the art would also understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
  • Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
  • The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
  • The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
  • The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (71)

1. A method of transmitting information in a wireless communication system, the method comprising:
determining possible physical layer packet sizes of a plurality of available constant bit rate communication channels; and
establishing constraints for partitioning information units such that the partitions are sized so as not to exceed the physical layer packet size of at least one of the available physical layer packet sizes provided by the plurality of available constant bit rate communication channels.
2. A method as defined in claim 1, wherein partitioning information comprises a source encoder equipped with a rate controlled module capable of generating partitions of varying size.
3. A method as defined in claim 1, wherein the information units comprise a variable bit rate data stream.
4. A method as defined in claim 1, wherein the information units comprise multimedia data.
5. A method as defined in claim 1, wherein the information units comprise video data.
6. A method as defined in claim 1, wherein the information units comprise audio data.
7. A method as defined in claim 1, wherein the constant bit rate communication channels are CDMA channels.
8. A method as defined in claim 7, wherein the constant bit rate communication channels include a supplemental channel.
9. A method as defined in claim 7, wherein the constant bit rate communication channel includes a dedicated control channel.
10. A method as defined in claim 7, wherein the constant bit rate communication channel includes a packet data channel.
11. A method as defined in claim 1, wherein the constant bit rate communication channels are GSM channels.
12. A method as defined in claim 1, wherein the constant bit rate communication channels are EDGE channels.
13. A method as defined in claim 1, wherein the constant bit rate communication channels are GPRS channels.
14. A method as defined in claim 1, wherein the constraints are used during encoding of the information units.
15. A method as defined in claim 1, wherein the information units occur at a constant interval.
16. A method of transmitting information in a wireless communication system, the method comprising:
determining available physical layer packet sizes for a plurality of available constant bit rate communication channels; and
encoding an information unit into data packets, wherein individual data packet sizes are selected so as not to exceed one of the physical layer packet sizes of the available constant bit rate communication channels.
17. A method as defined in claim 16, wherein encoding information comprises a source encoder equipped with a rate controlled module capable of generating partitions of varying size.
18. A method as defined in claim 16, wherein the information unit comprises a multimedia stream.
19. A method as defined in claim 16, wherein the information unit comprises video data.
20. A method as defined in claim 16, wherein the information unit comprises audio data.
21. A method as defined in claim 16, wherein the constant bit rate communication channels are CDMA channels.
22. A method as defined in claim 21, wherein the constant bit rate communication channels include a supplemental channel.
23. A method as defined in claim 21, wherein the constant bit rate communication channel includes a dedicated control channel.
24. A method as defined in claim 21, wherein the constant bit rate communication channel includes a packet data channel.
25. A method as defined in claim 16, wherein the constant bit rate communication channels are GSM channels.
26. A method as defined in claim 16, wherein the constant bit rate communication channels are EDGE channels.
27. A method as defined in claim 16, wherein the constant bit rate communication channels are GPRS channels.
28. A method as defined in claim 16, wherein the information units occur at a constant interval.
29. A wireless communication device comprising:
a receiver configured to accept a plurality of constant bit rate communication channels; and
a decoder configured to accept the received plurality of constant bit rate communication channels and to decode the constant bit rate channels, wherein the decoded constant bit rate channels are accumulated to produce a variable bit rate stream of data.
30. A wireless communication device as defined in claim 29, wherein the decoder estimates a size of data packets received from the communication channels.
31. A wireless communication device as defined in claim 29, wherein a size of data packets received from the communication channels is indicated in additional signaling.
32. A wireless communication device as defined in claim 29, wherein the variable bit rate stream is a multimedia stream.
33. A wireless communication device as defined in claim 29, wherein the variable bit rate stream comprises video data.
34. A wireless communication device as defined in claim 29, wherein the variable bit rate stream comprises audio data.
35. A wireless communication device as defined in claim 29, wherein the plurality of constant bit rate channels are CDMA channels.
36. A wireless communication device as defined in claim 29, wherein the plurality of constant bit rate channels are GSM channels.
37. A wireless communication device as defined in claim 29, wherein the plurality of constant bit rate channels are GPRS channels.
38. A wireless communication device as defined in claim 29, wherein the plurality of constant bit rate channels are EDGE channels.
39. A wireless communication device comprising:
a controller configured to determine a set of physical layer packet sizes from a plurality of available constant bit rate communication channels; and
an encoder configured to partition information units into data packets, wherein an individual data packet size is selected not to exceed at least one of the size of the physical layer packets of the available constant bit rate communication channels.
40. A wireless communication device as defined in claim 39, wherein the encoder further comprises a rate controlled module capable of generating partitions of varying size.
41. A wireless communication device as defined in claim 39, further comprising a transmitter configured to transmit the physical layer packets.
42. A wireless communication device as defined in claim 39, wherein the information units comprise a variable bit rate stream.
43. A wireless communication device as defined in claim 39, wherein the information units comprise multimedia data.
44. A wireless communication device as defined in claim 39, wherein the information units comprise video data.
45. A wireless communication device as defined in claim 39, wherein the plurality of constant bit rate channels are CDMA channels.
46. A wireless communication device as defined in claim 39, wherein the plurality of constant bit rate channels are GSM channels.
47. A wireless communication device as defined in claim 39, wherein the plurality of constant bit rate channels are GPRS channels.
48. A wireless communication device as defined in claim 39, wherein the plurality of constant bit rate channels are EDGE channels.
49. An encoder in a wireless communication system, the encoder configured to accept information units and partition the information units into data packets, wherein the data packets are sized so as not to exceed at least one physical layer packet size of an available constant bit rate communication channel.
50. An encoder as defined in claim 49, wherein the information units occur at a constant rate.
51. An encoder as defined in claim 49, wherein the information units comprise a variable rate data stream.
52. An encoder as defined in claim 49, wherein the information units comprise multimedia data.
53. An encoder as defined in claim 49, wherein the information units comprise video data.
54. An encoder as defined in claim 49, wherein the information units comprise audio data.
55. An encoder as defined in claim 49, wherein the constant bit rate communication channels are CDMA channels.
56. An encoder as defined in claim 49, wherein the constant bit rate communication channels are GSM channels.
57. An encoder as defined in claim 49, wherein the constant bit rate communication channels are GPRS channels.
58. An encoder as defined in claim 49, wherein the constant bit rate communication channels are EDGE channels.
59. An encoder as defined in claim 46, wherein the encoder is constrained such that the aggregate of the packets is limited to a pre-selected maximum number of bits.
60. A decoder in a wireless communication system, the decoder configured to accept data streams from a plurality of constant bit rate communication channels, decode the data streams and accumulate the decoded plurality of data streams into a variable bit rate data stream.
61. A decoder as defined in claim 60, wherein a size of data packets received from the communication channels is estimated.
62. A decoder as defined in claim 60, wherein a size of data packets received from the communication channels is indicated in additional signaling.
63. A decoder as defined in claim 60 wherein the variable bit rate stream is a multimedia stream.
64. A decoder as defined in claim 60, wherein the variable bit rate stream is a video stream.
65. A decoder as defined in claim 60, wherein the variable bit rate stream is an audio stream.
66. A decoder as defined in claim 60, wherein the constant bit rate communication channels are CDMA channels.
67. A decoder as defined in claim 60, wherein the constant bit rate communication channels are GSM channels.
68. A decoder as defined in claim 60, wherein the constant bit rate communication channels are GPRS channels.
69. A decoder as defined in claim 60, wherein the constant bit rate communication channels are EDGE channels.
70. A computer readable media embodying a method of encoding data, the method comprising partitioning information units thereby creating a plurality of data packets wherein each data packet is sized not to exceed the size of at least one physical layer packet size from a set of physical layer packet sizes corresponding to available constant bit rate communication channels.
71. A computer readable media embodying a method of decoding broadcast content, the method comprising:
accepting data streams from a plurality of constant bit rate communication channels; and
decoding the data streams and accumulating the decoded plurality of data streams into a variable bit rate data stream.

Priority Applications (2)

Application Number Priority Date Filing Date Title
US11/129,625 US20050259623A1 (en) 2004-05-13 2005-05-13 Delivery of information over a communication channel
US14/499,954 US10034198B2 (en) 2004-05-13 2014-09-29 Delivery of information over a communication channel

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US57167304P 2004-05-13 2004-05-13
US11/129,625 US20050259623A1 (en) 2004-05-13 2005-05-13 Delivery of information over a communication channel

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/499,954 Division US10034198B2 (en) 2004-05-13 2014-09-29 Delivery of information over a communication channel

Publications (1)

Publication Number Publication Date
US20050259623A1 true US20050259623A1 (en) 2005-11-24

Family

ID=34969576

Family Applications (6)

Application Number Title Priority Date Filing Date
US11/129,687 Expired - Fee Related US8855059B2 (en) 2004-05-13 2005-05-13 Method and apparatus for allocation of information to channels of a communication system
US11/129,635 Expired - Fee Related US9717018B2 (en) 2004-05-13 2005-05-13 Synchronization of audio and video data in a wireless communication system
US11/129,625 Abandoned US20050259623A1 (en) 2004-05-13 2005-05-13 Delivery of information over a communication channel
US11/129,735 Expired - Fee Related US8089948B2 (en) 2004-05-13 2005-05-13 Header compression of multimedia data transmitted over a wireless communication system
US14/466,702 Active 2026-01-11 US9674732B2 (en) 2004-05-13 2014-08-22 Method and apparatus for allocation of information to channels of a communication system
US14/499,954 Expired - Fee Related US10034198B2 (en) 2004-05-13 2014-09-29 Delivery of information over a communication channel

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US11/129,687 Expired - Fee Related US8855059B2 (en) 2004-05-13 2005-05-13 Method and apparatus for allocation of information to channels of a communication system
US11/129,635 Expired - Fee Related US9717018B2 (en) 2004-05-13 2005-05-13 Synchronization of audio and video data in a wireless communication system

Family Applications After (3)

Application Number Title Priority Date Filing Date
US11/129,735 Expired - Fee Related US8089948B2 (en) 2004-05-13 2005-05-13 Header compression of multimedia data transmitted over a wireless communication system
US14/466,702 Active 2026-01-11 US9674732B2 (en) 2004-05-13 2014-08-22 Method and apparatus for allocation of information to channels of a communication system
US14/499,954 Expired - Fee Related US10034198B2 (en) 2004-05-13 2014-09-29 Delivery of information over a communication channel

Country Status (14)

Country Link
US (6) US8855059B2 (en)
EP (9) EP1751956B1 (en)
JP (5) JP4361585B2 (en)
KR (6) KR101068055B1 (en)
CN (5) CN1985477B (en)
AT (4) ATE508567T1 (en)
BR (4) BRPI0510952B1 (en)
CA (6) CA2566124C (en)
DE (4) DE602005023983D1 (en)
ES (4) ES2354079T3 (en)
MX (4) MXPA06013193A (en)
MY (3) MY141497A (en)
TW (4) TWI381681B (en)
WO (4) WO2005114943A2 (en)

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050259694A1 (en) * 2004-05-13 2005-11-24 Harinath Garudadri Synchronization of audio and video data in a wireless communication system
US20070127399A1 (en) * 2005-12-05 2007-06-07 Nec Corporation Packet joining method, program, and apparatus
US20070169152A1 (en) * 2005-12-30 2007-07-19 Daniel Roodnick Data and wireless frame alignment for error reduction
US20070230492A1 (en) * 2006-03-28 2007-10-04 Fujitsu Limited Frame multiplexing device
US20070238477A1 (en) * 2006-04-06 2007-10-11 Furrer Nathaniel M Method and apparatus to facilitate communication resource allocation for supergroups
US20070274217A1 (en) * 2006-05-25 2007-11-29 Huawei Technologies Co., Ltd. Method for Establishing HRPD Network Packet Data Service in 1x Network
US20080025249A1 (en) * 2006-07-28 2008-01-31 Qualcomm Incorporated 1xEVDO WIRELESS INTERFACE TO ENABLE COMMUNICATIONS VIA A SATELLITE RELAY
US20080095247A1 (en) * 2004-11-17 2008-04-24 Michihiro Ohno Transmitter, Receiver And Communication System
US20090052589A1 (en) * 2007-04-20 2009-02-26 Samsung Electronics Co., Ltd. Transport stream generating device, transmitting device, receiving device, and a digital broadcast system having the same, and method thereof
US20090161593A1 (en) * 2006-05-15 2009-06-25 Godor Istvan Wireless Multicast for Layered Media
US20090235055A1 (en) * 2008-03-11 2009-09-17 Fujitsu Limited Scheduling apparatus and scheduling method
US20090298510A1 (en) * 2006-04-03 2009-12-03 Beon Joon Kim Method of performing scheduling in a wired or wireless communication system and apparatus thereof
US20100208710A1 (en) * 2007-10-19 2010-08-19 Jin Sam Kwak Method for generating control channel and decoding control channel, base station and mobile station thereof
US20110002378A1 (en) * 2009-07-02 2011-01-06 Qualcomm Incorporated Coding latency reductions during transmitter quieting
US20110002377A1 (en) * 2009-07-02 2011-01-06 Qualcomm Incorporated Transmitter quieting and null data encoding
US20110002347A1 (en) * 2009-07-06 2011-01-06 Samsung Electronics Co. Ltd. Method and system for encoding and decoding medium access control layer packet
US20110002399A1 (en) * 2009-07-02 2011-01-06 Qualcomm Incorporated Transmitter quieting and reduced rate encoding
US7970345B2 (en) * 2005-06-22 2011-06-28 Atc Technologies, Llc Systems and methods of waveform and/or information splitting for wireless transmission of information to one or more radioterminals over a plurality of transmission paths and/or system elements
US20110182257A1 (en) * 2010-01-26 2011-07-28 Qualcomm Incorporated White space spectrum commmunciation device with multiplexing capabilties
US20130268691A1 (en) * 2012-04-10 2013-10-10 Cable Television Laboratories, Inc. Redirecting web content
US20130322522A1 (en) * 2011-02-16 2013-12-05 British Telecommunications Public Limited Company Compact cumulative bit curves
US20140082203A1 (en) * 2010-12-08 2014-03-20 At&T Intellectual Property I, L.P. Method and apparatus for capacity dimensioning in a communication network
US20140142955A1 (en) * 2012-11-19 2014-05-22 Apple Inc. Encoding Digital Media for Fast Start on Digital Media Players
US20140376640A1 (en) * 2011-05-04 2014-12-25 Cavium, Inc. Low Latency Rate Control System and Method
US20150071363A1 (en) * 2012-05-22 2015-03-12 Huawei Technologies Co., Ltd. Method and apparatus for assessing video quality
US10069591B2 (en) 2007-01-04 2018-09-04 Qualcomm Incorporated Method and apparatus for distributed spectrum sensing for wireless communication
US20230034716A1 (en) * 2021-07-27 2023-02-02 Korea Aerospace Research Institute Method and system for transmitting multiple data

Families Citing this family (168)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7136395B2 (en) * 2000-11-30 2006-11-14 Telefonaktiebolaget L M Ericsson (Publ) Method and system for transmission of headerless data packets over a wireless link
US7599371B1 (en) * 2004-06-09 2009-10-06 Cisco Technology, Inc. System and method for optimizing data transport in a communications system
FI20040817A0 (en) * 2004-06-14 2004-06-14 Nokia Corp Transfer of packing parameters in a mobile communication system
US7664057B1 (en) * 2004-07-13 2010-02-16 Cisco Technology, Inc. Audio-to-video synchronization system and method for packet-based network video conferencing
US20060062312A1 (en) * 2004-09-22 2006-03-23 Yen-Chi Lee Video demultiplexer and decoder with efficient data recovery
US7804850B2 (en) * 2004-10-01 2010-09-28 Nokia Corporation Slow MAC-e for autonomous transmission in high speed uplink packet access (HSUPA) along with service specific transmission time control
CN101073237B (en) * 2004-11-30 2012-02-01 艾利森电话股份有限公司 Method for delivering multimedia files
US7675872B2 (en) * 2004-11-30 2010-03-09 Broadcom Corporation System, method, and apparatus for displaying pictures
US7764713B2 (en) * 2005-09-28 2010-07-27 Avaya Inc. Synchronization watermarking in multimedia streams
US8102878B2 (en) * 2005-09-29 2012-01-24 Qualcomm Incorporated Video packet shaping for video telephony
US9692537B2 (en) * 2005-10-18 2017-06-27 Avago Technologies General Ip (Singapore) Pte. Ltd. System, method, and apparatus for jitter reduction in a video decoder system
WO2007050259A2 (en) * 2005-10-21 2007-05-03 Thomson Licensing Method and apparatus for audio and video synchronization timestamp rollover correction
US8514711B2 (en) * 2005-10-21 2013-08-20 Qualcomm Incorporated Reverse link lower layer assisted video error control
US8548048B2 (en) * 2005-10-27 2013-10-01 Qualcomm Incorporated Video source rate control for video telephony
US8842555B2 (en) * 2005-10-21 2014-09-23 Qualcomm Incorporated Methods and systems for adaptive encoding of real-time information in packet-switched wireless communication systems
US8406309B2 (en) 2005-10-21 2013-03-26 Qualcomm Incorporated Video rate adaptation to reverse link conditions
US7839948B2 (en) * 2005-12-02 2010-11-23 Qualcomm Incorporated Time slicing techniques for variable data rate encoding
US8014389B2 (en) * 2005-12-06 2011-09-06 Lippershy Celestial Llc Bidding network
CN101346995A (en) * 2005-12-23 2009-01-14 皇家飞利浦电子股份有限公司 Splitting of a data stream
US8953596B2 (en) * 2006-01-06 2015-02-10 Qualcomm Incorporated Conserving network capacity by releasing QoS resources
KR100754736B1 (en) * 2006-02-10 2007-09-03 삼성전자주식회사 Method and apparatus for reproducing image frames in video receiver system
US8284713B2 (en) * 2006-02-10 2012-10-09 Cisco Technology, Inc. Wireless audio systems and related methods
KR100728038B1 (en) * 2006-03-03 2007-06-14 삼성전자주식회사 Method and apparatus for transmitting data on plc network by aggregating data
US7876695B2 (en) * 2006-03-07 2011-01-25 Telefonaktiebolaget Lm Ericsson (Publ) Communication station and method providing flexible compression of data packets
US7920469B2 (en) * 2006-06-15 2011-04-05 Alcatel-Lucent Usa Inc. Indicating a variable control channel structure for transmissions in a cellular system
US20070297454A1 (en) * 2006-06-21 2007-12-27 Brothers Thomas J Systems and methods for multicasting audio
US20070299983A1 (en) * 2006-06-21 2007-12-27 Brothers Thomas J Apparatus for synchronizing multicast audio and video
US7584495B2 (en) * 2006-06-30 2009-09-01 Nokia Corporation Redundant stream alignment in IP datacasting over DVB-H
US20080025312A1 (en) * 2006-07-28 2008-01-31 Qualcomm Incorporated Zero-header compression for improved communications
US8060651B2 (en) * 2006-08-17 2011-11-15 Sharp Laboratories Of America, Inc. Systems and methods for adaptively packetizing data partitions for transport over a network
US8644314B2 (en) * 2006-09-07 2014-02-04 Kyocera Corporation Protocol and method of VIA field compression in session initiation protocol signaling for 3G wireless networks
US8379733B2 (en) 2006-09-26 2013-02-19 Qualcomm Incorporated Efficient video packetization methods for packet-switched video telephony applications
US8069412B2 (en) * 2006-10-17 2011-11-29 At&T Intellectual Property I, L.P. Methods, systems, and products for mapping facilities data
US8484059B2 (en) 2006-10-17 2013-07-09 At&T Intellectual Property I, L.P. Methods, systems, and products for surveying facilities
US20080101476A1 (en) * 2006-11-01 2008-05-01 Qualcomm Incorporated Video coding rate adaptation to reduce packetization overhead
CN101179484A (en) * 2006-11-09 2008-05-14 华为技术有限公司 Method and system of synchronizing different media stream
CN100450163C (en) * 2006-11-30 2009-01-07 中兴通讯股份有限公司 A video and audio synchronization playing method for mobile multimedia broadcasting
US7889191B2 (en) 2006-12-01 2011-02-15 Semiconductor Components Industries, Llc Method and apparatus for providing a synchronized video presentation without video tearing
US7953118B2 (en) * 2006-12-08 2011-05-31 Microsoft Corporation Synchronizing media streams across multiple devices
EP1936887B1 (en) 2006-12-19 2015-12-16 Innovative Sonic Limited Method of improving continuous packet connectivity in a wireless communications system and related apparatus
KR100946893B1 (en) * 2007-01-03 2010-03-09 삼성전자주식회사 Method for scheduling downlink packet in mobile communication system and apparatus thereof
WO2008086509A2 (en) * 2007-01-10 2008-07-17 Qualcomm Incorporated Content- and link-dependent coding adaptation for multimedia telephony
KR100861594B1 (en) * 2007-04-23 2008-10-07 주식회사 케이티프리텔 Apparatus and method for controlling multimedia data rate
US8873453B2 (en) 2007-05-14 2014-10-28 Sigma Group, Inc. Method and apparatus for wireless transmission of high data rate streams
US8671302B2 (en) * 2007-05-14 2014-03-11 Picongen Wireless, Inc. Method and apparatus for wireless clock regeneration
CN100574283C (en) * 2007-06-12 2009-12-23 华为技术有限公司 Uplink and downlink transmission method and aggregation node
EP2023521A1 (en) * 2007-07-17 2009-02-11 Alcatel Lucent System and method for improving the use of radio spectrum in transmission of data
CN101094406B (en) * 2007-07-23 2010-09-29 北京中星微电子有限公司 Method and device for transferring video data stream
GB0715281D0 (en) 2007-08-07 2007-09-12 Nokia Siemens Networks Oy Reduced transmission time interval
US7826360B1 (en) 2007-08-27 2010-11-02 Marvell International Ltd. Adjusting transmission rates during packet expansion using in band signaling
JP4410277B2 (en) * 2007-08-28 2010-02-03 富士通株式会社 Semiconductor device and method for controlling semiconductor device
KR100916469B1 (en) 2007-08-29 2009-09-08 엘지이노텍 주식회사 Media Apparatus and Method for Control Thereof
US9521186B2 (en) 2007-09-13 2016-12-13 International Business Machines Corporation Method and system for file transfer over a messaging infrastructure
ES2675947T3 (en) * 2007-10-02 2018-07-13 Nokia Technologies Oy IP MTU control based on multi-radio planning
EP2206368A1 (en) * 2007-10-04 2010-07-14 Telefonaktiebolaget LM Ericsson (PUBL) Inter-system handoff using circuit switched bearers for serving general packet radio service support nodes
KR100918961B1 (en) 2007-10-09 2009-09-25 강릉원주대학교산학협력단 Method for compressing dynamic area and deciding ncb in wireless communication network
US8797850B2 (en) * 2008-01-10 2014-08-05 Qualcomm Incorporated System and method to adapt to network congestion
US9705935B2 (en) * 2008-01-14 2017-07-11 Qualcomm Incorporated Efficient interworking between circuit-switched and packet-switched multimedia services
US20090185534A1 (en) * 2008-01-18 2009-07-23 Futurewei Technologies, Inc. Method and Apparatus for Transmitting a Packet Header
US9357233B2 (en) * 2008-02-26 2016-05-31 Qualcomm Incorporated Video decoder error handling
CN101257366B (en) * 2008-03-27 2010-09-22 华为技术有限公司 Encoding and decoding method, communication system and equipment
US20090268732A1 (en) * 2008-04-29 2009-10-29 Thomson Licensing Channel change tracking metric in multicast groups
JP5847577B2 (en) * 2008-05-07 2016-01-27 デジタル ファウンテン, インコーポレイテッド High quality stream protection over broadcast channels using symbolic identifiers derived from lower level packet structures
EP2292013B1 (en) * 2008-06-11 2013-12-04 Koninklijke Philips N.V. Synchronization of media stream components
US20100003928A1 (en) * 2008-07-01 2010-01-07 Motorola, Inc. Method and apparatus for header compression for cdma evdo systems
US20100027524A1 (en) * 2008-07-31 2010-02-04 Nokia Corporation Radio layer emulation of real time protocol sequence number and timestamp
JP2010081212A (en) * 2008-09-25 2010-04-08 Mitsubishi Electric Corp Sound transmission apparatus
US8966543B2 (en) * 2008-09-29 2015-02-24 Nokia Corporation Method and system to enable adaptation between physical bearers and OMA-BCAST
JP5135147B2 (en) * 2008-09-29 2013-01-30 富士フイルム株式会社 Video file transmission server and operation control method thereof
WO2010068151A1 (en) * 2008-12-08 2010-06-17 Telefonaktiebolaget L M Ericsson (Publ) Device and method for synchronizing received audio data with video data
US8204038B2 (en) * 2009-01-13 2012-06-19 Mediatek Inc. Method for efficient utilization of radio resources in wireless communications system
JP4650573B2 (en) 2009-01-22 2011-03-16 ソニー株式会社 COMMUNICATION DEVICE, COMMUNICATION SYSTEM, PROGRAM, AND COMMUNICATION METHOD
US8560718B2 (en) 2009-03-03 2013-10-15 Ronald R. Davenport, JR. Wired Internet network system for the Internet video streams of radio stations
CN102473188B (en) * 2009-07-27 2015-02-11 国际商业机器公司 Method and system for transformation of logical data objects for storage
CN101998508B (en) * 2009-08-14 2013-08-28 华为技术有限公司 Data encapsulation method and device
WO2011027936A1 (en) * 2009-09-03 2011-03-10 SK Telecom Co., Ltd. System and method for compressing and decompressing header information of transport layer mounted on near field communication-based protocol, and device applied thereto
EP2302845B1 (en) 2009-09-23 2012-06-20 Google, Inc. Method and device for determining a jitter buffer level
KR101757771B1 (en) * 2009-12-01 2017-07-17 삼성전자주식회사 Apparatus and method for tranmitting a multimedia data packet using cross layer optimization
US8780720B2 (en) 2010-01-11 2014-07-15 Venturi Ip Llc Radio access network load and condition aware traffic shaping control
KR20110090596A (en) * 2010-02-04 2011-08-10 삼성전자주식회사 Method and apparatus for correcting interarrival jitter
EP2362653A1 (en) 2010-02-26 2011-08-31 Panasonic Corporation Transport stream packet header compression
CN101877643B (en) * 2010-06-29 2014-12-10 中兴通讯股份有限公司 Multipoint sound-mixing distant view presenting method, device and system
US8630412B2 (en) 2010-08-25 2014-01-14 Motorola Mobility Llc Transport of partially encrypted media
US8477050B1 (en) 2010-09-16 2013-07-02 Google Inc. Apparatus and method for encoding using signal fragments for redundant transmission of data
US20120243602A1 (en) * 2010-09-23 2012-09-27 Qualcomm Incorporated Method and apparatus for pipelined slicing for wireless display
US8736700B2 (en) * 2010-09-30 2014-05-27 Apple Inc. Techniques for synchronizing audio and video data in an image signal processing system
US9351286B2 (en) * 2010-12-20 2016-05-24 Yamaha Corporation Wireless audio transmission method
US8964783B2 (en) * 2011-01-21 2015-02-24 Qualcomm Incorporated User input back channel for wireless displays
US9413803B2 (en) 2011-01-21 2016-08-09 Qualcomm Incorporated User input back channel for wireless displays
US9582239B2 (en) 2011-01-21 2017-02-28 Qualcomm Incorporated User input back channel for wireless displays
KR101616009B1 (en) * 2011-01-21 2016-04-27 퀄컴 인코포레이티드 User input back channel for wireless displays
US10135900B2 (en) 2011-01-21 2018-11-20 Qualcomm Incorporated User input back channel for wireless displays
US9787725B2 (en) 2011-01-21 2017-10-10 Qualcomm Incorporated User input back channel for wireless displays
US20130022032A1 (en) * 2011-01-26 2013-01-24 Qualcomm Incorporated Systems and methods for communicating in a network
US8838680B1 (en) 2011-02-08 2014-09-16 Google Inc. Buffer objects for web-based configurable pipeline media processing
JP2012222530A (en) * 2011-04-06 2012-11-12 Sony Corp Receiving device and method, and program
EP2547062B1 (en) 2011-07-14 2016-03-16 Nxp B.V. Media streaming with adaptation
CN102325261A (en) * 2011-09-14 2012-01-18 上海交通大学 Synchronization method for eliminating inter-video jitter in a stereo video capture and synthesis system
CN102521294A (en) * 2011-11-30 2012-06-27 苏州奇可思信息科技有限公司 Remote education lesson teaching method based on voice frequency touch type courseware
US20130155918A1 (en) * 2011-12-20 2013-06-20 Nokia Siemens Networks Oy Techniques To Enhance Header Compression Efficiency And Enhance Mobile Node Security
CN103179094B (en) * 2011-12-22 2019-10-01 南京中兴软件有限责任公司 Sending, receiving method, sending device and the reception device of IP packet head
CN103179449B (en) * 2011-12-23 2016-03-02 联想(北京)有限公司 The player method of media file, electronic equipment and virtual machine architecture
US8687654B1 (en) * 2012-01-05 2014-04-01 Google Inc. Method to packetize an encoded video frame
GB2498992B (en) * 2012-02-02 2015-08-26 Canon Kk Method and system for transmitting video frame data to reduce slice error rate
US20130223412A1 (en) * 2012-02-24 2013-08-29 Qualcomm Incorporated Method and system to improve frame early termination success rate
EP2648418A1 (en) * 2012-04-05 2013-10-09 Thomson Licensing Synchronization of multimedia streams
US9204095B2 (en) * 2012-05-04 2015-12-01 Hong Jiang Instant communications system having established communication channels between communication devices
CN102665140B (en) * 2012-05-16 2014-04-09 哈尔滨工业大学深圳研究生院 RTP (real-time transport protocol) packaging method of AVS (audio video coding standard) video frame
US8872946B2 (en) 2012-05-31 2014-10-28 Apple Inc. Systems and methods for raw image processing
US9142012B2 (en) 2012-05-31 2015-09-22 Apple Inc. Systems and methods for chroma noise reduction
US9077943B2 (en) 2012-05-31 2015-07-07 Apple Inc. Local image statistics collection
US9031319B2 (en) 2012-05-31 2015-05-12 Apple Inc. Systems and methods for luma sharpening
US8817120B2 (en) 2012-05-31 2014-08-26 Apple Inc. Systems and methods for collecting fixed pattern noise statistics of image data
US9743057B2 (en) 2012-05-31 2017-08-22 Apple Inc. Systems and methods for lens shading correction
US9332239B2 (en) 2012-05-31 2016-05-03 Apple Inc. Systems and methods for RGB image processing
US9105078B2 (en) 2012-05-31 2015-08-11 Apple Inc. Systems and methods for local tone mapping
US9025867B2 (en) 2012-05-31 2015-05-05 Apple Inc. Systems and methods for YCC image processing
US8917336B2 (en) 2012-05-31 2014-12-23 Apple Inc. Image signal processing involving geometric distortion correction
US11089247B2 (en) 2012-05-31 2021-08-10 Apple Inc. Systems and method for reducing fixed pattern noise in image data
US8953882B2 (en) 2012-05-31 2015-02-10 Apple Inc. Systems and methods for determining noise statistics of image data
US9014504B2 (en) 2012-05-31 2015-04-21 Apple Inc. Systems and methods for highlight recovery in an image signal processor
US8863307B2 (en) 2012-06-05 2014-10-14 Broadcom Corporation Authenticating users based upon an identity footprint
TWI513320B (en) * 2012-06-25 2015-12-11 Hon Hai Prec Ind Co Ltd Video conferencing device and lip synchronization method thereof
CN103827964B (en) * 2012-07-05 2018-01-16 松下知识产权经营株式会社 Coding/decoding system, decoding apparatus, code device and decoding method
US9668161B2 (en) * 2012-07-09 2017-05-30 Cisco Technology, Inc. System and method associated with a service flow router
KR101947000B1 (en) 2012-07-17 2019-02-13 삼성전자주식회사 Apparatus and method for delivering transport characteristics of multimedia data in broadcast system
US20140192200A1 (en) * 2013-01-08 2014-07-10 Hii Media Llc Media streams synchronization
US20140310735A1 (en) * 2013-04-12 2014-10-16 Codemate A/S Flat rate billing of content distribution
US9532043B2 (en) * 2013-08-02 2016-12-27 Blackberry Limited Wireless transmission of real-time media
FR3011155A1 (en) * 2013-09-26 2015-03-27 Orange METHODS FOR SYNCHRONIZATION, STREAM GENERATION, COMPUTER PROGRAMS, STORAGE MEDIA, CORRESPONDING RESTITUTION, EXECUTION AND GENERATION DEVICES.
US20150195326A1 (en) * 2014-01-03 2015-07-09 Qualcomm Incorporated Detecting whether header compression is being used for a first stream based upon a delay disparity between the first stream and a second stream
US9282171B2 (en) * 2014-03-06 2016-03-08 Qualcomm Incorporated Context establishment in marginal grant conditions
US9369724B2 (en) * 2014-03-31 2016-06-14 Microsoft Technology Licensing, Llc Decoding and synthesizing frames for incomplete video data
WO2015179821A1 (en) 2014-05-22 2015-11-26 Kyocera Corporation Assignment of communication resources in an unlicensed frequency band to equipment operating in a licensed frequency band
CN103986941A (en) * 2014-05-28 2014-08-13 深圳市智英实业发展有限公司 Wireless audio and video transmission system
EP3016432B1 (en) * 2014-10-30 2018-07-04 Vodafone IP Licensing limited Content compression in mobile network
US10129839B2 (en) * 2014-12-05 2018-11-13 Qualcomm Incorporated Techniques for synchronizing timing of wireless streaming transmissions to multiple sink devices
KR102349450B1 (en) * 2014-12-08 2022-01-10 삼성전자주식회사 Method and Apparatus For Providing Integrity Authentication Data
US9692709B2 (en) 2015-06-04 2017-06-27 Oracle International Corporation Playout buffering of encapsulated media
KR102402881B1 (en) 2015-06-05 2022-05-27 한화테크윈 주식회사 Surveillance system
US9929879B2 (en) 2015-06-09 2018-03-27 Oracle International Corporation Multipath support of real-time communications
CN104980955A (en) * 2015-06-19 2015-10-14 重庆市音乐一号科技有限公司 Method for improving transfer rate of Wi-Fi Display
WO2017008263A1 (en) * 2015-07-15 2017-01-19 Mediatek Singapore Pte. Ltd. Conditional binary tree block partitioning structure
CN105245273B (en) * 2015-08-27 2017-12-12 桂林理工大学 Illumination-balanced RS232 and VLC communication protocol conversion method
EP3369212B1 (en) * 2015-10-26 2020-12-09 Telefonaktiebolaget LM Ericsson (PUBL) Length control for packet header sampling
WO2017074811A1 (en) * 2015-10-28 2017-05-04 Microsoft Technology Licensing, Llc Multiplexing data
GB201519090D0 (en) * 2015-10-28 2015-12-09 Microsoft Technology Licensing Llc Multiplexing data
CN106817350A (en) * 2015-11-30 2017-06-09 中兴通讯股份有限公司 Message processing method and device
US11924826B2 (en) * 2015-12-10 2024-03-05 Qualcomm Incorporated Flexible transmission unit and acknowledgment feedback timeline for efficient low latency communication
US10332534B2 (en) * 2016-01-07 2019-06-25 Microsoft Technology Licensing, Llc Encoding an audio stream
KR101700370B1 (en) * 2016-06-08 2017-01-26 삼성전자주식회사 Method and apparatus for correcting interarrival jitter
KR102497216B1 (en) * 2017-05-10 2023-02-07 삼성전자 주식회사 Image Processing Device and Image Processing Method Performing Slice-based Compression
US10367750B2 (en) * 2017-06-15 2019-07-30 Mellanox Technologies, Ltd. Transmission and reception of raw video using scalable frame rate
GB2564644B (en) * 2017-07-12 2020-12-16 Canon Kk Method and system of encoding a data stream according to a variable bitrate mode
WO2019037121A1 (en) * 2017-08-25 2019-02-28 SZ DJI Technology Co., Ltd. Systems and methods for synchronizing frame timing between physical layer frame and video frame
EP3493535B1 (en) * 2017-11-29 2020-09-09 Mitsubishi Electric R & D Centre Europe B.V. Method for controlling a video encoder of a video camera installed on a moving conveyance
EP3902294A4 (en) 2017-12-13 2022-09-14 Fiorentino, Ramon Interconnected system for high-quality wireless transmission of audio and video between electronic consumer devices
US10437745B2 (en) * 2018-01-05 2019-10-08 Denso International America, Inc. Mobile de-whitening
US10608947B2 (en) * 2018-02-26 2020-03-31 Qualcomm Incorporated Per-flow jumbo MTU in NR systems
KR102011806B1 (en) * 2018-04-12 2019-08-19 주식회사 넷커스터마이즈 Traffic accelerating methods based on UDP-based Data Transfer protocol
DE102018212655A1 (en) * 2018-07-30 2020-01-30 Conti Temic Microelectronic Gmbh Detection of the intention to move a pedestrian from camera images
US10834296B2 (en) * 2018-09-12 2020-11-10 Roku, Inc. Dynamically adjusting video to improve synchronization with audio
CN109618240A (en) * 2018-10-26 2019-04-12 安徽清新互联信息科技有限公司 Wireless Multi-Channel adaptive equilibrium method for real-time audio and video transmission
AU2020344540A1 (en) * 2019-09-10 2022-04-28 Sonos, Inc. Synchronizing playback of audio information received from other networks
CN111064541B (en) * 2019-12-18 2021-05-11 中国南方电网有限责任公司超高压输电公司 Method for multiplexing high-low speed data transmission channel
CN111131917B (en) * 2019-12-26 2021-12-28 国微集团(深圳)有限公司 Real-time audio frequency spectrum synchronization method and playing device
CN111866753B (en) * 2020-06-02 2021-06-29 中山大学 Digital transmission broadcast communication method and system
US20220294741A1 (en) * 2020-06-08 2022-09-15 Sky Peak Technologies, Inc. Content shaping and routing in a network

Citations (61)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5537409A (en) * 1993-07-16 1996-07-16 Pioneer Electronic Corporation Synchronizing system for time-divided video and audio signals
US5541852A (en) * 1994-04-14 1996-07-30 Motorola, Inc. Device, method and system for variable bit-rate packet video communications
US5559608A (en) * 1989-10-25 1996-09-24 Nec Corporation Method of digitally compressed video and audio data
US5570372A (en) * 1995-11-08 1996-10-29 Siemens Rolm Communications Inc. Multimedia communications with system-dependent adaptive delays
US5583652A (en) * 1994-04-28 1996-12-10 International Business Machines Corporation Synchronized, variable-speed playback of digitally recorded audio and video
US5717464A (en) * 1995-12-18 1998-02-10 Divicom, Inc. Rate control for a video encoder
US5729534A (en) * 1995-01-09 1998-03-17 Nokia Mobile Phones Limited Dynamic allocation of radio capacity in a TDMA system
US5844600A (en) * 1995-09-15 1998-12-01 General Datacomm, Inc. Methods, apparatus, and systems for transporting multimedia conference data streams through a transport network
US5867230A (en) * 1996-09-06 1999-02-02 Motorola Inc. System, device, and method for streaming a multimedia file encoded at a variable bitrate
US5898695A (en) * 1995-03-29 1999-04-27 Hitachi, Ltd. Decoder for compressed and multiplexed video and audio data
US6041067A (en) * 1996-10-04 2000-03-21 Matsushita Electric Industrial Co., Ltd. Device for synchronizing data processing
US6058141A (en) * 1995-09-28 2000-05-02 Digital Bitcasting Corporation Varied frame rate video
US6085270A (en) * 1998-06-17 2000-07-04 Advanced Micro Devices, Inc. Multi-channel, multi-rate isochronous data bus
US6108626A (en) * 1995-10-27 2000-08-22 Cselt-Centro Studi E Laboratori Telecomunicazioni S.P.A. Object oriented audio coding
US6111916A (en) * 1997-02-07 2000-08-29 Texas Instruments Incorporated Error resilient encoding
US6181711B1 (en) * 1997-06-26 2001-01-30 Cisco Systems, Inc. System and method for transporting a compressed video and data bit stream over a communication channel
US20010008535A1 (en) * 2000-01-14 2001-07-19 U.S. Philips Corporation Interconnection of audio/video devices
US20010021197A1 (en) * 1998-06-01 2001-09-13 Tantivy Communications, Inc. Dynamic bandwidth allocation for multiple access communication using session queues
US20020075831A1 (en) * 2000-12-20 2002-06-20 Lucent Technologies Inc. Method and apparatus for communicating heterogeneous data traffic
US20020105976A1 (en) * 2000-03-10 2002-08-08 Frank Kelly Method and apparatus for deriving uplink timing from asynchronous traffic across multiple transport streams
US20020131426A1 (en) * 2000-06-22 2002-09-19 Mati Amit Scalable virtual channel
US20020137521A1 (en) * 2000-12-01 2002-09-26 Samsung Electronics Co. Ltd. Scheduling method for high-rate data service in a mobile communication system
US20020150123A1 (en) * 2001-04-11 2002-10-17 Cyber Operations, Llc System and method for network delivery of low bit rate multimedia content
US6473442B1 (en) * 1999-04-12 2002-10-29 Telefonaktiebolaget Lm Ericsson (Publ) Communications system and method for matching and balancing the bit rates of transport channels to the bit rate of a physical channel
US6473404B1 (en) * 1998-11-24 2002-10-29 Connect One, Inc. Multi-protocol telecommunications routing optimization
US6496504B1 (en) * 1998-08-06 2002-12-17 Ricoh Company, Ltd. Smart allocation of bandwidth for multiple independent calls on a digital network
US20020194606A1 (en) * 2001-06-14 2002-12-19 Michael Tucker System and method of communication between videoconferencing systems and computer systems
US20030021298A1 (en) * 2001-07-30 2003-01-30 Tomokazu Murakami Data multiplexing method, data recorded medium, data recording apparatus and data recording program
US20030039465A1 (en) * 2001-04-20 2003-02-27 France Telecom Research And Development L.L.C. Systems for selectively associating cues with stored video frames and methods of operating the same
US6535043B2 (en) * 2000-05-26 2003-03-18 Lattice Semiconductor Corp Clock signal selection system, method of generating a clock signal and programmable clock manager including same
US6536043B1 (en) * 1996-02-14 2003-03-18 Roxio, Inc. Method and systems for scalable representation of multimedia data for progressive asynchronous transmission
US6535557B1 (en) * 1998-12-07 2003-03-18 The University Of Tokyo Method and apparatus for coding moving picture image
US20030053484A1 (en) * 2001-09-18 2003-03-20 Sorenson Donald C. Multi-carrier frequency-division multiplexing (FDM) architecture for high speed digital service
US6564382B2 (en) * 2000-08-16 2003-05-13 Koninklijke Philips Electronics N.V. Method for playing multimedia applications
US6584125B1 (en) * 1997-12-22 2003-06-24 Nec Corporation Coding/decoding apparatus, coding/decoding system and multiplexed bit stream
US20030140347A1 (en) * 1999-12-22 2003-07-24 Viktor Varsa Method for transmitting video images, a data transmission system, a transmitting video terminal, and a receiving video terminal
US20030208615A1 (en) * 2002-05-02 2003-11-06 Canon Kabushiki Kaisha Method and device for adjusting the maximum size of the information sequences transmitted in a telecommunication network
US6647006B1 (en) * 1998-06-10 2003-11-11 Nokia Networks Oy High-speed data transmission in a mobile system
US20030224806A1 (en) * 2002-06-03 2003-12-04 Igal Hebron System and method for network data quality measurement
US6680955B1 (en) * 1999-08-20 2004-01-20 Nokia Networks Oy Technique for compressing a header field in a data packet
US6704281B1 (en) * 1999-01-15 2004-03-09 Nokia Mobile Phones Ltd. Bit-rate control in a multimedia device
US20040052209A1 (en) * 2002-09-13 2004-03-18 Luis Ortiz Methods and systems for jitter minimization in streaming media
US20040057446A1 (en) * 2002-07-16 2004-03-25 Nokia Corporation Method for enabling packet transfer delay compensation in multimedia streaming
US20040078744A1 (en) * 2002-10-17 2004-04-22 Yongbin Wei Method and apparatus for transmitting and receiving a block of data in a communication system
US20050047417A1 (en) * 2003-08-26 2005-03-03 Samsung Electronics Co., Ltd. Apparatus and method for multimedia reproduction using output buffering in a mobile communication terminal
US20050094655A1 (en) * 2001-03-06 2005-05-05 Microsoft Corporation Adaptive queuing
US20050105615A1 (en) * 2003-11-13 2005-05-19 Khaled El-Maleh Selective and/or scalable complexity control for video codecs
US20050172154A1 (en) * 2004-01-29 2005-08-04 Chaoticom, Inc. Systems and methods for providing digital content and caller alerts to wireless network-enabled devices
US20050220071A1 (en) * 2004-03-19 2005-10-06 Telefonaktiebolaget L M Ericsson (Publ) Higher layer packet framing using RLP
US20050226262A1 (en) * 2004-03-31 2005-10-13 Shining Hsieh Audio buffering system and method of buffering audio in a multimedia receiver
US6956875B2 (en) * 2002-06-19 2005-10-18 Atlinks Usa, Inc. Technique for communicating variable bit rate data over a constant bit rate link
US6968091B2 (en) * 2001-09-18 2005-11-22 Emc Corporation Insertion of noise for reduction in the number of bits for variable-length coding of (run, level) pairs
US20050259613A1 (en) * 2004-05-13 2005-11-24 Harinath Garudadri Method and apparatus for allocation of information to channels of a communication system
US7016337B1 (en) * 1999-03-02 2006-03-21 Cisco Technology, Inc. System and method for multiple channel statistical re-multiplexing
US7043749B1 (en) * 1998-02-27 2006-05-09 Tandberg Telecom As Audio-video packet synchronization at network gateway
US7068708B2 (en) * 2002-12-13 2006-06-27 Motorola, Inc. Method and receiving unit for demodulating a multi-path signal
US20060285654A1 (en) * 2003-04-14 2006-12-21 Nesvadba Jan Alexis D System and method for performing automatic dubbing on an audio-visual stream
US20070092224A1 (en) * 2003-09-02 2007-04-26 Sony Corporation Content receiving apparatus, video/audio output timing control method, and content provision system
US20080002669A1 (en) * 2001-09-14 2008-01-03 O'brien Ray Packet voice gateway
US7391717B2 (en) * 2003-06-30 2008-06-24 Microsoft Corporation Streaming of variable bit rate multimedia content
US7453843B2 (en) * 2001-12-11 2008-11-18 Texas Instruments Incorporated Wireless bandwidth aggregator

Family Cites Families (68)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4948019A (en) 1989-03-31 1990-08-14 Rodum Roland K Collapsible clothes hanger
WO1993008531A1 (en) 1991-10-22 1993-04-29 Cae, Inc. Synchronous parallel electronic timing generator
AU5632394A (en) 1993-03-05 1994-09-08 Sony Corporation Apparatus and method for reproducing a prediction-encoded video signal
JP3003839B2 (en) * 1993-11-08 2000-01-31 エヌ・ティ・ティ移動通信網株式会社 CDMA communication method and apparatus
US5510842A (en) 1994-05-04 1996-04-23 Matsushita Electric Corporation Of America Parallel architecture for a high definition television video decoder having multiple independent frame memories
US5646693A (en) 1994-11-04 1997-07-08 Cismas; Sorin Memory utilization for video decoding and display with 3:2 pull-down
KR0137701B1 (en) * 1994-12-13 1998-05-15 양승택 PES packetizing apparatus of MPEG-2 system
US5914717A (en) 1995-07-21 1999-06-22 Microsoft Methods and system for providing fly out menus
KR0164184B1 (en) 1995-08-31 1999-01-15 배순훈 Encoding controlling apparatus of a mpeg compression disc
KR970012585U (en) 1995-09-21 1997-04-25 Car sun visor
JPH09312656A (en) * 1996-03-21 1997-12-02 Sony Corp Transmitter and method therefor
KR100204043B1 (en) * 1996-11-28 1999-06-15 정선종 Method for making stream channel for audio/video data in distributed processing environment
DE19652708C2 (en) * 1996-12-18 1999-08-12 Schott Glas Process for producing a filled plastic syringe body for medical purposes
US6154780A (en) * 1996-12-18 2000-11-28 Intel Corporation Method and apparatus for transmission of a flexible and error resilient video bitstream
KR100223298B1 (en) * 1997-02-12 1999-10-15 서평원 Terminal interfacing apparatus of b-isdn
US6577610B1 (en) 1997-06-30 2003-06-10 Spacenet, Inc. Flex slotted Aloha transmission system and method
US6124895A (en) 1997-10-17 2000-09-26 Dolby Laboratories Licensing Corporation Frame-based audio coding with video/audio data synchronization by dynamic audio frame alignment
US5913190A (en) 1997-10-17 1999-06-15 Dolby Laboratories Licensing Corporation Frame-based audio coding with video/audio data synchronization by audio sample rate conversion
US6192257B1 (en) 1998-03-31 2001-02-20 Lucent Technologies Inc. Wireless communication terminal having video image capability
JPH11298878A (en) 1998-04-08 1999-10-29 Nec Corp Image scrambling method and device therefor
US6577631B1 (en) 1998-06-10 2003-06-10 Merlot Communications, Inc. Communication switching module for the transmission and control of audio, video, and computer data over a single network fabric
US6728263B2 (en) * 1998-08-18 2004-04-27 Microsoft Corporation Dynamic sizing of data packets
US6295453B1 (en) 1998-10-07 2001-09-25 Telefonaktiebolaget Lm Ericsson (Publ) Multi-full rate channel assignment for a cellular telephone system
JP3454175B2 (en) 1998-12-24 2003-10-06 日本ビクター株式会社 Image information transmission device
KR100335441B1 (en) 1999-05-01 2002-05-04 윤종용 Multiplexing video decoding apparatus and method
KR100352981B1 (en) 1999-05-21 2002-09-18 유혁 Apparatus and Method for streaming of MPEG-1 data
KR100608042B1 (en) 1999-06-12 2006-08-02 삼성전자주식회사 Encoding method for radio transceiving of multimedia data and device thereof
US6262829B1 (en) 1999-07-29 2001-07-17 Hewlett-Packard Co. Method of digital grayscale control using modulation of a slow-acting light source
CA2397398C (en) 2000-01-14 2007-06-12 Interdigital Technology Corporation Wireless communication system with selectively sized data transport blocks
US6996069B2 (en) * 2000-02-22 2006-02-07 Qualcomm, Incorporated Method and apparatus for controlling transmit power of multiple channels in a CDMA communication system
JP2001245268A (en) 2000-02-29 2001-09-07 Toshiba Corp Contents transmitting system and content processor
US7061936B2 (en) * 2000-03-03 2006-06-13 Ntt Docomo, Inc. Method and apparatus for packet transmission with header compression
CN1223196C (en) 2000-04-14 2005-10-12 索尼公司 Decoder and decoding method, recorded medium and program
US7680912B1 (en) 2000-05-18 2010-03-16 thePlatform, Inc. System and method for managing and provisioning streamed data
US7292772B2 (en) 2000-05-29 2007-11-06 Sony Corporation Method and apparatus for decoding and recording medium for a coded video stream
US7149549B1 (en) 2000-10-26 2006-12-12 Ortiz Luis M Providing multiple perspectives for a venue activity through an electronic hand held device
US6529527B1 (en) 2000-07-07 2003-03-04 Qualcomm, Inc. Method and apparatus for carrying packetized voice and data in wireless communication networks
JP4337244B2 (en) 2000-07-25 2009-09-30 ソニー株式会社 MPEG image stream decoding apparatus and decoding method
WO2002015591A1 (en) 2000-08-16 2002-02-21 Koninklijke Philips Electronics N.V. Method of playing multimedia data
KR100818069B1 (en) 2000-08-23 2008-04-01 코닌클리케 필립스 일렉트로닉스 엔.브이. Communication system and device
SE517245C2 (en) 2000-09-14 2002-05-14 Ericsson Telefon Ab L M Synchronization of audio and video signals
US6747964B1 (en) 2000-09-15 2004-06-08 Qualcomm Incorporated Method and apparatus for high data rate transmission in a wireless communication system
US6859500B2 (en) * 2001-03-20 2005-02-22 Telefonaktiebolaget Lm Ericsson Run-length coding of non-coded macroblocks
US20030016702A1 (en) 2001-03-30 2003-01-23 Bender Paul E. Method and system for maximizing standby time in monitoring a control channel
US7230941B2 (en) 2001-04-26 2007-06-12 Qualcomm Incorporated Preamble channel decoding
JP4647149B2 (en) * 2001-08-06 2011-03-09 独立行政法人情報通信研究機構 Transport stream transmitter and receiver
US7327789B2 (en) 2001-08-06 2008-02-05 Matsushita Electric Industrial Co., Ltd. Decoding apparatus, decoding method, decoding program, and decoding program storage medium
US6847006B2 (en) * 2001-08-10 2005-01-25 Semiconductor Energy Laboratory Co., Ltd. Laser annealing apparatus and semiconductor device manufacturing method
US7075946B2 (en) 2001-10-02 2006-07-11 Xm Satellite Radio, Inc. Method and apparatus for audio output combining
US20040190609A1 (en) 2001-11-09 2004-09-30 Yasuhiko Watanabe Moving picture coding method and apparatus
US7292690B2 (en) 2002-01-02 2007-11-06 Sony Corporation Video scene change detection
DE10300048B4 (en) 2002-01-05 2005-05-12 Samsung Electronics Co., Ltd., Suwon Image coding method for motion picture expert groups, involves image quantizing data in accordance with quantization parameter, and coding entropy of quantized image data using entropy coding unit
US7130313B2 (en) 2002-02-14 2006-10-31 Nokia Corporation Time-slice signaling for broadband digital broadcasting
US7596179B2 (en) 2002-02-27 2009-09-29 Hewlett-Packard Development Company, L.P. Reducing the resolution of media data
FI114679B (en) 2002-04-29 2004-11-30 Nokia Corp Random start points in video encoding
US8699505B2 (en) * 2002-05-31 2014-04-15 Qualcomm Incorporated Dynamic channelization code allocation
US7486678B1 (en) 2002-07-03 2009-02-03 Greenfield Networks Multi-slice network processor
CN1221132C (en) * 2002-07-30 2005-09-28 华为技术有限公司 Device and method for realizing conversion between various VF flow formats
TW569556B (en) 2002-10-04 2004-01-01 Avid Electronics Corp Adaptive differential pulse-code modulation compression encoding/decoding method capable of fast recovery and apparatus thereof
JP2004226272A (en) 2003-01-23 2004-08-12 Seiko Epson Corp Method and apparatus for detecting stain defect
US8483189B2 (en) 2003-03-10 2013-07-09 Panasonic Corporation OFDM signal transmission method, transmission apparatus, and reception apparatus
US7535876B2 (en) 2003-04-01 2009-05-19 Alcatel-Lucent Usa Inc. Method of flow control for HSDPA and HSUPA
US7400889B2 (en) 2003-04-01 2008-07-15 Telefonaktiebolaget Lm Ericsson (Publ) Scalable quality broadcast service in a mobile wireless communication network
US20050138251A1 (en) 2003-12-18 2005-06-23 Fanning Blaise B. Arbitration of asynchronous and isochronous requests
US7599435B2 (en) 2004-01-30 2009-10-06 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Video frame encoding and decoding
US7558221B2 (en) 2004-02-13 2009-07-07 Seiko Epson Corporation Method and system for recording videoconference data
US7530089B1 (en) 2004-03-29 2009-05-05 Nortel Networks Limited System and method for improving video quality using a constant bit rate data stream
CN100576820C (en) * 2004-05-07 2009-12-30 艾格瑞系统有限公司 The mac header that uses with the frame set compresses

Patent Citations (65)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6023552A (en) * 1989-10-25 2000-02-08 Nec Corporation Data recording system
US5559608A (en) * 1989-10-25 1996-09-24 Nec Corporation Method of digitally compressed video and audio data
US5537409A (en) * 1993-07-16 1996-07-16 Pioneer Electronic Corporation Synchronizing system for time-divided video and audio signals
US5541852A (en) * 1994-04-14 1996-07-30 Motorola, Inc. Device, method and system for variable bit-rate packet video communications
US5583652A (en) * 1994-04-28 1996-12-10 International Business Machines Corporation Synchronized, variable-speed playback of digitally recorded audio and video
US5729534A (en) * 1995-01-09 1998-03-17 Nokia Mobile Phones Limited Dynamic allocation of radio capacity in a TDMA system
US5898695A (en) * 1995-03-29 1999-04-27 Hitachi, Ltd. Decoder for compressed and multiplexed video and audio data
US5844600A (en) * 1995-09-15 1998-12-01 General Datacomm, Inc. Methods, apparatus, and systems for transporting multimedia conference data streams through a transport network
US6058141A (en) * 1995-09-28 2000-05-02 Digital Bitcasting Corporation Varied frame rate video
US6108626A (en) * 1995-10-27 2000-08-22 Cselt-Centro Studi E Laboratori Telecomunicazioni S.P.A. Object oriented audio coding
US5570372A (en) * 1995-11-08 1996-10-29 Siemens Rolm Communications Inc. Multimedia communications with system-dependent adaptive delays
US5717464A (en) * 1995-12-18 1998-02-10 Divicom, Inc. Rate control for a video encoder
US6536043B1 (en) * 1996-02-14 2003-03-18 Roxio, Inc. Method and systems for scalable representation of multimedia data for progressive asynchronous transmission
US5867230A (en) * 1996-09-06 1999-02-02 Motorola Inc. System, device, and method for streaming a multimedia file encoded at a variable bitrate
US6041067A (en) * 1996-10-04 2000-03-21 Matsushita Electric Industrial Co., Ltd. Device for synchronizing data processing
US6111916A (en) * 1997-02-07 2000-08-29 Texas Instruments Incorporated Error resilient encoding
US6891854B2 (en) * 1997-06-26 2005-05-10 Cisco Technology, Inc. System and method for transporting a compressed video and data bit stream over a communication channel
US6181711B1 (en) * 1997-06-26 2001-01-30 Cisco Systems, Inc. System and method for transporting a compressed video and data bit stream over a communication channel
US6584125B1 (en) * 1997-12-22 2003-06-24 Nec Corporation Coding/decoding apparatus, coding/decoding system and multiplexed bit stream
US7043749B1 (en) * 1998-02-27 2006-05-09 Tandberg Telecom As Audio-video packet synchronization at network gateway
US20010021197A1 (en) * 1998-06-01 2001-09-13 Tantivy Communications, Inc. Dynamic bandwidth allocation for multiple access communication using session queues
US6647006B1 (en) * 1998-06-10 2003-11-11 Nokia Networks Oy High-speed data transmission in a mobile system
US6085270A (en) * 1998-06-17 2000-07-04 Advanced Micro Devices, Inc. Multi-channel, multi-rate isochronous data bus
US6496504B1 (en) * 1998-08-06 2002-12-17 Ricoh Company, Ltd. Smart allocation of bandwidth for multiple independent calls on a digital network
US6473404B1 (en) * 1998-11-24 2002-10-29 Connect One, Inc. Multi-protocol telecommunications routing optimization
US6535557B1 (en) * 1998-12-07 2003-03-18 The University Of Tokyo Method and apparatus for coding moving picture image
US6704281B1 (en) * 1999-01-15 2004-03-09 Nokia Mobile Phones Ltd. Bit-rate control in a multimedia device
US7016337B1 (en) * 1999-03-02 2006-03-21 Cisco Technology, Inc. System and method for multiple channel statistical re-multiplexing
US6473442B1 (en) * 1999-04-12 2002-10-29 Telefonaktiebolaget Lm Ericsson (Publ) Communications system and method for matching and balancing the bit rates of transport channels to the bit rate of a physical channel
US6680955B1 (en) * 1999-08-20 2004-01-20 Nokia Networks Oy Technique for compressing a header field in a data packet
US20030140347A1 (en) * 1999-12-22 2003-07-24 Viktor Varsa Method for transmitting video images, a data transmission system, a transmitting video terminal, and a receiving video terminal
US20010008535A1 (en) * 2000-01-14 2001-07-19 U.S. Philips Corporation Interconnection of audio/video devices
US20020105976A1 (en) * 2000-03-10 2002-08-08 Frank Kelly Method and apparatus for deriving uplink timing from asynchronous traffic across multiple transport streams
US6535043B2 (en) * 2000-05-26 2003-03-18 Lattice Semiconductor Corp Clock signal selection system, method of generating a clock signal and programmable clock manager including same
US20020131426A1 (en) * 2000-06-22 2002-09-19 Mati Amit Scalable virtual channel
US6564382B2 (en) * 2000-08-16 2003-05-13 Koninklijke Philips Electronics N.V. Method for playing multimedia applications
US20020137521A1 (en) * 2000-12-01 2002-09-26 Samsung Electronics Co. Ltd. Scheduling method for high-rate data service in a mobile communication system
US20020075831A1 (en) * 2000-12-20 2002-06-20 Lucent Technologies Inc. Method and apparatus for communicating heterogeneous data traffic
US20050094655A1 (en) * 2001-03-06 2005-05-05 Microsoft Corporation Adaptive queuing
US20020150123A1 (en) * 2001-04-11 2002-10-17 Cyber Operations, Llc System and method for network delivery of low bit rate multimedia content
US20030039465A1 (en) * 2001-04-20 2003-02-27 France Telecom Research And Development L.L.C. Systems for selectively associating cues with stored video frames and methods of operating the same
US20020194606A1 (en) * 2001-06-14 2002-12-19 Michael Tucker System and method of communication between videoconferencing systems and computer systems
US20030021298A1 (en) * 2001-07-30 2003-01-30 Tomokazu Murakami Data multiplexing method, data recorded medium, data recording apparatus and data recording program
US20080002669A1 (en) * 2001-09-14 2008-01-03 O'brien Ray Packet voice gateway
US6968091B2 (en) * 2001-09-18 2005-11-22 Emc Corporation Insertion of noise for reduction in the number of bits for variable-length coding of (run, level) pairs
US20030053484A1 (en) * 2001-09-18 2003-03-20 Sorenson Donald C. Multi-carrier frequency-division multiplexing (FDM) architecture for high speed digital service
US7453843B2 (en) * 2001-12-11 2008-11-18 Texas Instruments Incorporated Wireless bandwidth aggregator
US20030208615A1 (en) * 2002-05-02 2003-11-06 Canon Kabushiki Kaisha Method and device for adjusting the maximum size of the information sequences transmitted in a telecommunication network
US20030224806A1 (en) * 2002-06-03 2003-12-04 Igal Hebron System and method for network data quality measurement
US6956875B2 (en) * 2002-06-19 2005-10-18 Atlinks Usa, Inc. Technique for communicating variable bit rate data over a constant bit rate link
US20040057446A1 (en) * 2002-07-16 2004-03-25 Nokia Corporation Method for enabling packet transfer delay compensation in multimedia streaming
US20040052209A1 (en) * 2002-09-13 2004-03-18 Luis Ortiz Methods and systems for jitter minimization in streaming media
US20040078744A1 (en) * 2002-10-17 2004-04-22 Yongbin Wei Method and apparatus for transmitting and receiving a block of data in a communication system
US7068708B2 (en) * 2002-12-13 2006-06-27 Motorola, Inc. Method and receiving unit for demodulating a multi-path signal
US20060285654A1 (en) * 2003-04-14 2006-12-21 Nesvadba Jan Alexis D System and method for performing automatic dubbing on an audio-visual stream
US7391717B2 (en) * 2003-06-30 2008-06-24 Microsoft Corporation Streaming of variable bit rate multimedia content
US20050047417A1 (en) * 2003-08-26 2005-03-03 Samsung Electronics Co., Ltd. Apparatus and method for multimedia reproduction using output buffering in a mobile communication terminal
US20070092224A1 (en) * 2003-09-02 2007-04-26 Sony Corporation Content receiving apparatus, video/audio output timing control method, and content provision system
US20050105615A1 (en) * 2003-11-13 2005-05-19 Khaled El-Maleh Selective and/or scalable complexity control for video codecs
US20050172154A1 (en) * 2004-01-29 2005-08-04 Chaoticom, Inc. Systems and methods for providing digital content and caller alerts to wireless network-enabled devices
US20050220071A1 (en) * 2004-03-19 2005-10-06 Telefonaktiebolaget L M Ericsson (Publ) Higher layer packet framing using RLP
US20050226262A1 (en) * 2004-03-31 2005-10-13 Shining Hsieh Audio buffering system and method of buffering audio in a multimedia receiver
US20050259694A1 (en) * 2004-05-13 2005-11-24 Harinath Garudadri Synchronization of audio and video data in a wireless communication system
US20050259613A1 (en) * 2004-05-13 2005-11-24 Harinath Garudadri Method and apparatus for allocation of information to channels of a communication system
US8089948B2 (en) * 2004-05-13 2012-01-03 Qualcomm Incorporated Header compression of multimedia data transmitted over a wireless communication system

Cited By (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10034198B2 (en) 2004-05-13 2018-07-24 Qualcomm Incorporated Delivery of information over a communication channel
US20050259613A1 (en) * 2004-05-13 2005-11-24 Harinath Garudadri Method and apparatus for allocation of information to channels of a communication system
US20050259694A1 (en) * 2004-05-13 2005-11-24 Harinath Garudadri Synchronization of audio and video data in a wireless communication system
US8855059B2 (en) 2004-05-13 2014-10-07 Qualcomm Incorporated Method and apparatus for allocation of information to channels of a communication system
US9717018B2 (en) 2004-05-13 2017-07-25 Qualcomm Incorporated Synchronization of audio and video data in a wireless communication system
US20080095247A1 (en) * 2004-11-17 2008-04-24 Michihiro Ohno Transmitter, Receiver And Communication System
US7970345B2 (en) * 2005-06-22 2011-06-28 Atc Technologies, Llc Systems and methods of waveform and/or information splitting for wireless transmission of information to one or more radioterminals over a plurality of transmission paths and/or system elements
US8542704B2 (en) * 2005-12-05 2013-09-24 Nec Corporation Packet joining method, program, and apparatus
US20070127399A1 (en) * 2005-12-05 2007-06-07 Nec Corporation Packet joining method, program, and apparatus
US20070169152A1 (en) * 2005-12-30 2007-07-19 Daniel Roodnick Data and wireless frame alignment for error reduction
US20070230492A1 (en) * 2006-03-28 2007-10-04 Fujitsu Limited Frame multiplexing device
US7961744B2 (en) * 2006-03-28 2011-06-14 Fujitsu Limited Frame multiplexing device
US20090298510A1 (en) * 2006-04-03 2009-12-03 Beon Joon Kim Method of performing scheduling in a wired or wireless communication system and apparatus thereof
US8059534B2 (en) * 2006-04-03 2011-11-15 Lg Electronics Inc. Method of performing scheduling in a wired or wireless communication system and apparatus thereof
WO2007117862A2 (en) * 2006-04-06 2007-10-18 Motorola, Inc. Method and apparatus to facilitate communication resource allocation for supergroups
WO2007117862A3 (en) * 2006-04-06 2008-02-21 Motorola Inc Method and apparatus to facilitate communication resource allocation for supergroups
US20070238477A1 (en) * 2006-04-06 2007-10-11 Furrer Nathaniel M Method and apparatus to facilitate communication resource allocation for supergroups
US7684816B2 (en) 2006-04-06 2010-03-23 Motorola, Inc. Method and apparatus to facilitate communication resource allocation for supergroups
AU2007235124B2 (en) * 2006-04-06 2010-07-15 Motorola Solutions, Inc. Method and apparatus to facilitate communication resource allocation for supergroups
US8125903B2 (en) * 2006-05-15 2012-02-28 Telefonaktiebolaget Lm Ericsson (Publ) Wireless multicast for layered media
US20090161593A1 (en) * 2006-05-15 2009-06-25 Godor Istvan Wireless Multicast for Layered Media
US20070274217A1 (en) * 2006-05-25 2007-11-29 Huawei Technologies Co., Ltd. Method for Establishing HRPD Network Packet Data Service in 1x Network
US8139522B2 (en) * 2006-05-25 2012-03-20 Huawei Technologies Co., Ltd. Method for establishing HRPD network packet data service in 1x network
US20080025249A1 (en) * 2006-07-28 2008-01-31 Qualcomm Incorporated 1xEVDO WIRELESS INTERFACE TO ENABLE COMMUNICATIONS VIA A SATELLITE RELAY
US10069591B2 (en) 2007-01-04 2018-09-04 Qualcomm Incorporated Method and apparatus for distributed spectrum sensing for wireless communication
US20090052589A1 (en) * 2007-04-20 2009-02-26 Samsung Electronics Co., Ltd. Transport stream generating device, transmitting device, receiving device, and a digital broadcast system having the same, and method thereof
US8325830B2 (en) * 2007-04-20 2012-12-04 Samsung Electronics Co., Ltd. Transport stream generating device, transmitting device, receiving device, and a digital broadcast system having the same, and method thereof
US20100208710A1 (en) * 2007-10-19 2010-08-19 Jin Sam Kwak Method for generating control channel and decoding control channel, base station and mobile station thereof
US8462740B2 (en) * 2007-10-19 2013-06-11 Lg Electronics Inc. Method for generating control channel and decoding control channel, base station and mobile station thereof
US20090235055A1 (en) * 2008-03-11 2009-09-17 Fujitsu Limited Scheduling apparatus and scheduling method
US8271985B2 (en) * 2008-03-11 2012-09-18 Fujitsu Limited Scheduling apparatus and scheduling method
US9112618B2 (en) 2009-07-02 2015-08-18 Qualcomm Incorporated Coding latency reductions during transmitter quieting
US20110002399A1 (en) * 2009-07-02 2011-01-06 Qualcomm Incorporated Transmitter quieting and reduced rate encoding
US20110002377A1 (en) * 2009-07-02 2011-01-06 Qualcomm Incorporated Transmitter quieting and null data encoding
US8902995B2 (en) 2009-07-02 2014-12-02 Qualcomm Incorporated Transmitter quieting and reduced rate encoding
US20110002378A1 (en) * 2009-07-02 2011-01-06 Qualcomm Incorporated Coding latency reductions during transmitter quieting
US8958475B2 (en) 2009-07-02 2015-02-17 Qualcomm Incorporated Transmitter quieting and null data encoding
US20110002347A1 (en) * 2009-07-06 2011-01-06 Samsung Electronics Co. Ltd. Method and system for encoding and decoding medium access control layer packet
US8750332B2 (en) * 2009-07-06 2014-06-10 Samsung Electronics Co., Ltd. Method and system for encoding and decoding medium access control layer packet
US20110182257A1 (en) * 2010-01-26 2011-07-28 Qualcomm Incorporated White space spectrum communication device with multiplexing capabilities
US20140082203A1 (en) * 2010-12-08 2014-03-20 At&T Intellectual Property I, L.P. Method and apparatus for capacity dimensioning in a communication network
US9935994B2 (en) 2010-12-08 2018-04-03 AT&T Intellectual Property I, L.P. Method and apparatus for capacity dimensioning in a communication network
US9270725B2 (en) * 2010-12-08 2016-02-23 At&T Intellectual Property I, L.P. Method and apparatus for capacity dimensioning in a communication network
US8848785B2 (en) * 2011-02-16 2014-09-30 British Telecommunications Public Limited Company Compact cumulative bit curves
US20130322522A1 (en) * 2011-02-16 2013-12-05 British Telecommunications Public Limited Company Compact cumulative bit curves
US9445107B2 (en) * 2011-05-04 2016-09-13 Cavium, Inc. Low latency rate control system and method
US20140376640A1 (en) * 2011-05-04 2014-12-25 Cavium, Inc. Low Latency Rate Control System and Method
US9098596B2 (en) * 2012-04-10 2015-08-04 Cable Television Laboratories, Inc. Redirecting web content
US20130268691A1 (en) * 2012-04-10 2013-10-10 Cable Television Laboratories, Inc. Redirecting web content
US20150071363A1 (en) * 2012-05-22 2015-03-12 Huawei Technologies Co., Ltd. Method and apparatus for assessing video quality
US10045051B2 (en) * 2012-05-22 2018-08-07 Huawei Technologies Co., Ltd. Method and apparatus for assessing video quality
US20140142955A1 (en) * 2012-11-19 2014-05-22 Apple Inc. Encoding Digital Media for Fast Start on Digital Media Players
US20230034716A1 (en) * 2021-07-27 2023-02-02 Korea Aerospace Research Institute Method and system for transmitting multiple data
US11811795B2 (en) * 2021-07-27 2023-11-07 Korea Aerospace Research Institute Method and system for transmitting multiple data

Also Published As

Publication number Publication date
BRPI0510952B1 (en) 2019-09-03
EP2182734A1 (en) 2010-05-05
CN1985477A (en) 2007-06-20
EP1751987B1 (en) 2010-10-06
US10034198B2 (en) 2018-07-24
WO2005114950A1 (en) 2005-12-01
US20050259690A1 (en) 2005-11-24
ATE426988T1 (en) 2009-04-15
US8855059B2 (en) 2014-10-07
KR101068055B1 (en) 2011-09-28
TW201145943A (en) 2011-12-16
JP4554680B2 (en) 2010-09-29
KR100870215B1 (en) 2008-11-24
TWI353759B (en) 2011-12-01
EP1751956A2 (en) 2007-02-14
CA2771943A1 (en) 2005-12-01
ATE508567T1 (en) 2011-05-15
DE602005011611D1 (en) 2009-01-22
KR101049701B1 (en) 2011-07-15
CN102984133B (en) 2016-11-23
CA2566124C (en) 2014-09-30
MY142161A (en) 2010-10-15
BRPI0510952A (en) 2007-11-20
BRPI0510962A (en) 2007-11-20
DE602005013517D1 (en) 2009-05-07
BRPI0510961A (en) 2007-11-20
MXPA06013186A (en) 2007-02-14
ES2366192T3 (en) 2011-10-18
US20150016427A1 (en) 2015-01-15
US20140362740A1 (en) 2014-12-11
WO2005114943A2 (en) 2005-12-01
KR20090039809A (en) 2009-04-22
ATE484157T1 (en) 2010-10-15
CN1977516A (en) 2007-06-06
KR20070013330A (en) 2007-01-30
KR20070014201A (en) 2007-01-31
WO2005114943A3 (en) 2006-01-19
ES2318495T3 (en) 2009-05-01
JP2007537683A (en) 2007-12-20
KR20070023731A (en) 2007-02-28
CN102984133A (en) 2013-03-20
CA2566124A1 (en) 2005-12-01
EP2262304A1 (en) 2010-12-15
US9674732B2 (en) 2017-06-06
EP3331246A1 (en) 2018-06-06
CA2811040A1 (en) 2005-12-01
MY139431A (en) 2009-09-30
EP2592836A1 (en) 2013-05-15
EP2214412A3 (en) 2012-11-14
EP1751987A1 (en) 2007-02-14
JP2011142616A (en) 2011-07-21
EP1751955A1 (en) 2007-02-14
ATE417436T1 (en) 2008-12-15
CN1969562B (en) 2011-08-03
US8089948B2 (en) 2012-01-03
TWI381681B (en) 2013-01-01
EP2262304B1 (en) 2012-08-22
JP2007537682A (en) 2007-12-20
TW200618544A (en) 2006-06-01
US9717018B2 (en) 2017-07-25
TW200618564A (en) 2006-06-01
KR100906586B1 (en) 2009-07-09
US20050259613A1 (en) 2005-11-24
ES2354079T3 (en) 2011-03-09
EP1751956B1 (en) 2011-05-04
MXPA06013211A (en) 2007-03-01
CN1985477B (en) 2012-11-07
MXPA06013193A (en) 2007-02-14
TWI394407B (en) 2013-04-21
ES2323011T3 (en) 2009-07-03
CA2566126A1 (en) 2005-12-01
CA2566125C (en) 2012-01-24
CA2565977C (en) 2013-06-11
WO2005115009A1 (en) 2005-12-01
KR100918596B1 (en) 2009-09-25
JP5356360B2 (en) 2013-12-04
EP1757027A1 (en) 2007-02-28
EP1751955B1 (en) 2009-03-25
BRPI0510953A (en) 2007-11-20
EP2182734B1 (en) 2013-12-18
US20050259694A1 (en) 2005-11-24
EP1757027B1 (en) 2008-12-10
MXPA06013210A (en) 2007-02-28
EP2214412A2 (en) 2010-08-04
KR20070014200A (en) 2007-01-31
CA2566125A1 (en) 2005-12-01
TW200623737A (en) 2006-07-01
CN1973515A (en) 2007-05-30
JP4361585B2 (en) 2009-11-11
CN1973515B (en) 2013-01-09
CA2565977A1 (en) 2005-12-01
CA2771943C (en) 2015-02-03
WO2005114919A1 (en) 2005-12-01
CN1969562A (en) 2007-05-23
DE602005023983D1 (en) 2010-11-18
JP2007537681A (en) 2007-12-20
DE602005027837D1 (en) 2011-06-16
KR20080084866A (en) 2008-09-19
CN1977516B (en) 2010-12-01
MY141497A (en) 2010-04-30
JP4448171B2 (en) 2010-04-07
KR100871305B1 (en) 2008-12-01
JP2007537684A (en) 2007-12-20

Similar Documents

Publication Publication Date Title
US10034198B2 (en) Delivery of information over a communication channel
TWI416900B (en) Method and apparatus for allocation of information to channels of a communication system

Legal Events

Date Code Title Description
AS Assignment

Owner name: QUALCOMM INCORPORATED, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GARUDADRI, HARINATH;SAGETONG, PHOOM;NANDA, SANJIV;AND OTHERS;REEL/FRAME:016629/0145;SIGNING DATES FROM 20050720 TO 20050725

STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION