WO2001063855A1 - Packet scheduling in umts using several calculated transfer rates - Google Patents

Packet scheduling in umts using several calculated transfer rates

Info

Publication number
WO2001063855A1
Authority
WO
WIPO (PCT)
Prior art keywords
rate
score
bandwidth
flows
channel
Prior art date
Application number
PCT/SE2001/000406
Other languages
French (fr)
Inventor
Göran SCHULTZ
Janne Peisa
Toomas Wigell
Reijo Matinmikko
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US09/698,785 external-priority patent/US6850540B1/en
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to AU2001236302A priority Critical patent/AU2001236302A1/en
Priority to EP01908560A priority patent/EP1264445A1/en
Publication of WO2001063855A1 publication Critical patent/WO2001063855A1/en
Priority to FI20070077U priority patent/FI7776U1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/54Store-and-forward switching systems 
    • H04L12/56Packet switching systems
    • H04L12/5601Transfer mode dependent, e.g. ATM
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/90Buffering arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q11/00Selecting arrangements for multiplex systems
    • H04Q11/04Selecting arrangements for multiplex systems for time-division multiplexing
    • H04Q11/0428Integrated services digital network, i.e. systems for transmission of different types of digitised signals, e.g. speech, data, telecentral, television signals
    • H04Q11/0478Provisions for broadband connections
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W72/00Local resource management
    • H04W72/50Allocation or scheduling criteria for wireless resources
    • H04W72/54Allocation or scheduling criteria for wireless resources based on quality criteria
    • H04W72/543Allocation or scheduling criteria for wireless resources based on quality criteria based on requested quality, e.g. QoS
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/54Store-and-forward switching systems 
    • H04L12/56Packet switching systems
    • H04L12/5601Transfer mode dependent, e.g. ATM
    • H04L2012/5629Admission control
    • H04L2012/5631Resource management and allocation
    • H04L2012/5632Bandwidth allocation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/54Store-and-forward switching systems 
    • H04L12/56Packet switching systems
    • H04L12/5601Transfer mode dependent, e.g. ATM
    • H04L2012/5638Services, e.g. multimedia, GOS, QOS
    • H04L2012/5646Cell characteristics, e.g. loss, delay, jitter, sequence integrity
    • H04L2012/5652Cell construction, e.g. including header, packetisation, depacketisation, assembly, reassembly
    • H04L2012/5653Cell construction, e.g. including header, packetisation, depacketisation, assembly, reassembly using the ATM adaptation layer [AAL]
    • H04L2012/5656Cell construction, e.g. including header, packetisation, depacketisation, assembly, reassembly using the ATM adaptation layer [AAL] using the AAL2

Definitions

  • the present invention relates in general to the field of communications systems, and in particular, by way of example but not limitation, to scheduling packets of data/informational flows having differing priority levels in a communications system.
  • wireless networks Access to and use of wireless networks is becoming increasingly important and popular for business, social, and recreational purposes. Users of wireless networks now rely on them for both voice and data communications. Furthermore, an ever increasing number of users demand both an increasing array of services and capabilities as well as greater bandwidth for activities such as Internet surfing. To address and meet the demands for new services and greater bandwidth, the wireless communications industry constantly strives to improve the number of services and the throughput of their wireless networks. Expanding and improving the infrastructure necessary to provide additional services and higher bandwidth is an expensive and manpower-intensive undertaking. Moreover, high-bandwidth data streams will eventually be demanded by consumers to support features such as real-time audiovisual downloads and live audio-visual communication between two or more people.
  • next generation wireless system(s) instead of attempting to upgrade existing system(s).
  • the wireless communications industry intends to continue to improve the capabilities of the technology upon which it relies and that it makes available to its customers by deploying next generation system(s).
  • Protocols for a next-generation standard that is designed to meet the developing needs of wireless customers are being standardized by the 3rd Generation Partnership Project (3GPP).
  • 3GPP 3rd Generation Partnership Project
  • the set of protocols is known collectively as the Universal Mobile Telecommunications System (UMTS).
  • the network 100 includes a core network 120 and a UMTS Terrestrial Radio Access Network (UTRAN) 130.
  • the UTRAN 130 is composed of, at least partially, a number of Radio Network Controllers (RNCs) 140, each of which may be coupled to one or more neighboring Node Bs 150.
  • RNCs Radio Network Controllers
  • Each Node B 150 is responsible for a given geographical cell and the controlling RNC 140 is responsible for routing user and signaling data between that Node B 150 and the core network 120. All of the RNCs 140 may be directly or indirectly coupled to one another.
  • the UMTS network 100 also includes multiple user equipments (UEs) 110.
  • UE may include, for example, mobile stations, mobile terminals, laptops/personal digital assistants (PDAs) with wireless links, etc.
  • PDAs personal digital assistants
  • data transmissions and/or access requests compete for bandwidth based on first come, first served and/or random paradigms.
  • Each mobile station, and its associated transmissions, typically acquires access to a network using some type of request (e.g., a message) prior to establishing a connection.
  • the mobile station receives a predetermined transmission bandwidth that is usually mandated by the air interface requirements of the relevant system.
  • transmission bandwidth is variable, more flexible, and somewhat separated from the physical channel maximum mandated by the air interface requirements of UMTS.
  • QoS quality of service
  • a Medium Access Control (MAC) layer schedules packet transmission of various data flows to meet stipulated criteria, including permitted transport format combinations (TFCs) from a TFC set (TFCS).
  • TFCs transport format combinations
  • TFCS TFC set
  • WFQ weighted fair queuing
  • QoS class transport block set size
  • TBSS transport block set size
  • QoS transport format combinations
  • second embodiment(s) memory requirements are reduced by selecting a TFC based on guaranteed rate transmission rates, QoS class, TBSS, and queue fill levels, without accommodating backlogs corresponding to previously unsatisfied requirements.
  • a scheduling method for providing bandwidth to entities in a communications system includes the steps of: calculating a first transfer rate for multiple flows; calculating a second transfer rate for the multiple flows; ascertaining a quality of service (QoS) for each flow of the multiple flows; and assigning bandwidth to each flow of the multiple flows responsive to the first transfer rate, the second transfer rate, and the QoS for each flow of the multiple flows.
  • the first transfer rate may correspond to a guaranteed rate transfer rate
  • the second transfer rate may correspond to a weighted fair queuing (WFQ) transfer rate.
  • WFQ weighted fair queuing
  • the first and second transfer rates may correspond to aggregated transfer rates over the multiple flows.
  • a scheduling method for providing bandwidth to entities in a communications system includes the steps of: ascertaining a quality of service (QoS) class that is associated with each channel of multiple channels; ascertaining a guaranteed rate transmission rate for each channel; ascertaining a queue fill level of a queue that corresponds to each channel; calculating a first score for each channel responsive to the QoS class, the guaranteed rate transmission rate, and the queue fill level.
  • QoS quality of service
  • an additional step of calculating a second score for each channel responsive to the guaranteed rate transmission rate and the queue fill level is included.
  • FIG. 1 illustrates an exemplary wireless communications system with which the present invention may be advantageously employed
  • FIG. 2 illustrates a protocol model for an exemplary next-generation system with which the present invention may be advantageously employed
  • FIG. 3 illustrates a view of an exemplary second layer architecture of an exemplary next-generation system in accordance with the present invention
  • FIG. 4 illustrates an exemplary method in flowchart form for allocating bandwidth resources to data flow streams between entities in the exemplary second layer architecture of FIG. 3;
  • FIG. 5 illustrates an exemplary environment for scheduling data flows in accordance with the present invention
  • FIG. 6 illustrates an exemplary method in flowchart form for scheduling data flows in accordance with the present invention
  • FIG. 7 illustrates another view of the exemplary second layer architecture of an exemplary next-generation system in accordance with the present invention
  • FIG. 8 illustrates another exemplary method in flowchart form for scheduling data flows in accordance with the present invention.
  • FIGS. 1-8 of the drawings, like numerals being used for like and corresponding parts of the various drawings.
  • Aspects of the UMTS are used to describe a preferred embodiment of the present invention.
  • the principles of the present invention are applicable to other wireless communication standards (or systems), especially those in which communication is packet-based.
  • FIG. 2 a protocol model for an exemplary next-generation system with which the present invention may be advantageously employed is illustrated generally at 200.
  • the "Uu” indicates the interface between UTRAN 130 and the UE 110
  • “Iub” indicates the interface between the RNC 140 and a Node B 150 (where "Node B" is a generalization of, for example, a Base Transceiver Station (BTS))
  • RABs Radio Access Bearers
  • a UE 110 is allocated one or more RABs, each of which is capable of carrying a flow of user or signaling data.
  • RABs are mapped onto respective logical channels.
  • MAC Media Access Control
  • a set of logical channels is mapped in turn onto a transport channel, of which there are two types: a "common” transport channel which is shared by different UEs 110 and a “dedicated” transport channel which is allocated to a single UE 110 (thus leading to the terms “MAC-c" and "MAC-d”).
  • FACH One type of common channel is the FACH.
  • a basic characteristic of a FACH is that it is possible to send one or more fixed size packets per transmission time interval (e.g., 10, 20, 40, or 80 ms).
  • Several transport channels e.g., FACHs
  • S-CCPCH Secondary Common Control Physical CHannel
  • a UE 110 registers with an RNC 140 via a Node B 150, that RNC 140 acts at least initially as both the serving and the controlling RNC 140 for the UE 110.
  • the serving RNC 140 may subsequently differ from the controlling RNC 140 in a UMTS network 100, but the presence or absence of this condition is not particularly relevant here.
  • the RNC 140 both controls the air interface radio resources and terminates the layer 3 intelligence (e.g., the Radio Resource Control (RRC) protocol), thus routing data associated with the UE 110 directly to and from the core network
  • RRC Radio Resource Control
  • the MAC-c entity in the RNC 140 transfers MAC- c Packet Data Units (PDUs) to the peer MAC-c entity at the UE 110 using the services of the FACH Frame Protocol (FACH FP) entity between the RNC 140 and the Node B 150.
  • the FACH FP entity adds header information to the MAC-c PDUs to form
  • FACH FP PDUs which are transported to the Node B 150 over an AAL2 (or other transport mechanism) connection.
  • An interworking function at the Node B 150 interworks the FACH frame received by the FACH FP entity into the PHY entity.
  • an important task of the MAC-c entity is the scheduling of packets (MAC PDUs) for transmission over the air interface. If it were the case that all packets received by the MAC-c entity were of equal priority (and of the same size), then scheduling would be a simple matter of queuing the received packets and sending them on a first come first served basis (e.g., first-in, first-out (FIFO)).
  • FIFO first-in, first-out
  • UMTS defines a framework in which different Quality of Services (QoSs) may be assigned to different RABs.
  • Packets corresponding to a RAB that has been allocated a high QoS should be transmitted over the air interface at a high priority whilst packets corresponding to a RAB that has been allocated a low QoS should be transmitted over the air interface at a lower priority.
  • Priorities may be determined at the MAC entity (e.g., MAC-c or MAC-d) on the basis of RAB parameters.
  • UMTS deals with the question of priority by providing at the controlling RNC 140 a set of queues for each FACH.
  • the queues may be associated with respective priority levels.
  • An algorithm which is defined for selecting packets from the queues in such a way that packets in the higher priority queues are (on average) dealt with more quickly than packets in the lower priority queues, is implemented. The nature of this algorithm is complicated by the fact that the FACHs that are sent on the same physical channel are not independent of one another. More particularly, a set of Transport Format Combinations (TFCs) is defined for each S-CCPCH, where each TFC includes a transmission time interval, a packet size, and a total transmission size (indicating the number of packets in the transmission) for each FACH. The algorithm should select for the FACHs a TFC which matches one of those present in the TFC set in accordance with UMTS protocols.
  • TFCs Transport Format Combinations
  • a packet received at the controlling RNC 140 is placed in a queue (for transmission on a FACH), where the queue corresponds to the priority level attached to the packet as well as to the size of the packet.
  • the FACH is mapped onto a S-CCPCH at a Node B 150 or other corresponding node of the UTRAN 130.
  • the packets for transmission on the FACH are associated with either a Dedicated Control CHannel (DCCH) or to a Dedicated Traffic CHannel (DTCH).
  • DCCH Dedicated Control CHannel
  • DTCH Dedicated Traffic CHannel
  • each FACH is arranged to carry only one size of packets. However, this is not necessary, and it may be that the packet size that can be carried by a given FACH varies from one transmission time interval to another.
  • the UE 110 may communicate with the core network 120 of the UMTS system 100 via separate serving and controlling (or drift)
  • RNCs 140 within the UTRAN 130 e.g., when the UE 110 moves from an area covered by the original serving RNC 140 into a new area covered by a controlling/drift RNC 140 (not specifically shown). Signaling and user data packets destined for the
  • UE 110 are received at the MAC-d entity of the serving RNC 140 from the core network 120 and are "mapped" onto logical channels, namely a Dedicated Control CHannel (DCCH) and a Dedicated traffic CHannel (DTCH), for example.
  • the MAC- d entity constructs MAC Service Data Units (SDUs), which include a payload section containing logical channel data and a MAC header containing, inter alia, a logical channel identifier.
  • SDUs MAC Service Data Units
  • the MAC-d entity passes the MAC SDUs to the FACH FP entity.
  • This FACH FP entity adds a further FACH FP header to each MAC SDU, where the FACH FP header includes a priority level that has been allocated to the MAC SDU by an RRC entity.
  • the RRC is notified of available priority levels, together with an identification of one or more accepted packet sizes for each priority level, following the entry of a UE 110 into the coverage area of the drift RNC 140.
  • the FACH FP packets are sent to a peer FACH FP entity at the drift RNC 140 over an AAL2 (or other) connection.
  • the peer FACH FP entity decapsulates the MAC-d SDU and identifies the priority contained in the FACH FP header.
  • the SDU and associated priority are passed to the MAC-c entity at the controlling RNC 140.
  • the MAC-c layer is responsible for scheduling SDUs for transmission on the
  • each SDU is placed in a queue corresponding to its priority and size. For example, if there are 16 priority levels, there will be 16 queue sets for each FACH, with the number of queues in each of the 16 queue sets depending upon the number of packet sizes accepted for the associated priority. As described hereinabove, SDUs are selected from the queues for a given FACH in accordance with some predefined algorithm (e.g., so as to satisfy the TFC requirements of the physical channel).
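  • As a rough illustration of this queue organization (a sketch only; the class and variable names below are illustrative, not taken from the patent), the MAC-c queues for a FACH can be keyed by the pair (priority level, packet size):

```python
from collections import defaultdict, deque

NUM_PRIORITIES = 16  # illustrative; matches the example of 16 priority levels

class FachQueues:
    """One FIFO queue per (priority level, accepted packet size) pair for a FACH."""

    def __init__(self):
        self.queues = defaultdict(deque)  # (priority, packet_size_bits) -> deque of SDUs

    def enqueue(self, sdu: bytes, priority: int, packet_size: int) -> None:
        self.queues[(priority, packet_size)].append(sdu)

    def dequeue(self, priority: int, packet_size: int):
        q = self.queues.get((priority, packet_size))
        return q.popleft() if q else None

# Example: an 80-bit (10-byte) SDU arriving with priority 3
fach = FachQueues()
fach.enqueue(b"\x00" * 10, priority=3, packet_size=80)
```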
  • the scheme described hereinbelow with reference to FIGS. 3 and 4 relates to data transmission in a telecommunications network and in particular, though not necessarily, to data transmission in a UMTS.
  • the 3GPP is currently in the process of standardizing a new set of protocols for mobile telecommunications systems.
  • the set of protocols is known collectively as the UMTS.
  • FIG. 3 a view of an exemplary second layer architecture of an exemplary next-generation system in accordance with the present invention is illustrated generally at 300.
  • the exemplary second layer architecture 300 illustrates a simplified UMTS layer 2 protocol structure which is involved in the communication between mobile stations (e.g. mobile telephones), or more broadly UEs 110, and Radio Network Controllers (RNCs) 140 of a UMTS network 100.
  • the RNCs 140 are analogous to the Base Station Controllers (BSCs) of existing GSM mobile telecommunications networks, communicating with the mobile stations via Node Bs 150.
  • BSCs Base Station Controllers
  • the layer 2 structure of the exemplary second layer architecture 300 includes a set of Radio Access Bearers (RABs) 305 that make available radio resources (and services) to user applications.
  • RABs Radio Access Bearers
  • Data flows (e.g., in the form of segments) from the RABs 305 are passed to respective Radio Link Control (RLC) entities 310, which amongst other tasks buffer the received data segments.
  • RLC Radio Link Control
  • RABs 305 are mapped onto respective logical channels 315.
  • a Medium Access Control (MAC) entity 320 receives data transmitted in the logical channels 315 and further maps the data from the logical channels 315 onto a set of transport channels 325.
  • MAC Medium Access Control
  • the transport channels 325 are finally mapped to a single physical transport channel 330, which has a total bandwidth (e.g., of approximately 2 Mbits/sec) allocated to it by the network.
  • depending on whether a physical channel is used exclusively by one mobile station or is shared between many mobile stations, it is referred to as either a "dedicated physical channel" or a "common channel".
  • a MAC entity connected to a dedicated physical channel is known as MAC-d; there is preferably one MAC-d entity for each mobile station.
  • a MAC entity connected to a common channel is known as MAC-c; there is preferably one MAC-c entity for each cell.
  • the bandwidth of a transport channel 325 is not directly restricted by the capabilities of the physical layer 330, but is rather configured by a Radio Resource Controller (RRC) entity 335 using Transport Formats (TFs).
  • RRC Radio Resource Controller
  • TFs Transport Formats
  • the RRC entity 335 defines one or several Transport Block (TB) sizes.
  • TB Transport Block Size
  • PDU MAC Protocol Data Unit
  • TBS Transport Block Set
  • MAC entity can transmit to the physical layer in a single transmission time interval (TTI).
  • TTI transmission time interval
  • TFC Transport Format Combination
  • {TF1 (80, 80)}
  • the MAC entity 320 has to decide how much data to transmit on each transport channel 325 connected to it. These transport channels 325 are not independent of one another, and are later multiplexed onto a single physical channel 330 at the physical layer 330 (as discussed hereinabove).
  • the RRC entity 335 has to ensure that the total transmission capability on all transport channels 325 does not exceed the transmission capability of the underlying physical channel 330. This is accomplished by giving the MAC entity 320 a Transport Format Combination Set (TFCS), which contains the allowed Transport Format Combinations for all transport channels.
  • TFCS Transport Format Combination Set
  • TFCS Transport Format Combination Set
  • the RRC entity 335 has to restrict the total transmission rate by not allowing all combinations of the TFs.
  • An example of this would be a TFCS as follows: [{(80, 0), (80, 0)}, {(80, 0), (80, 80)}, {(80, 0),
  • TFCI Transport Format Combination indicator
  • the TFCI would correspond to the second TFC, which is {(80, 0), (80, 80)}, meaning that nothing is transmitted from the first transport channel and a single packet of 80 bits is transmitted from the second transport channel.
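  • To make the TFC/TFCI notation above concrete, the following sketch (an assumed data layout, not the patent's own code) represents each transport format as a (packet size, total bits) pair, a TFC as one such pair per transport channel, and the TFCI as the index into the allowed set; the third entry extends the truncated example and is purely assumed:

```python
# Each transport format is (packet_size_bits, total_bits_this_TTI);
# a TFC lists one TF per transport channel.  The first two entries mirror
# the example TFCS in the text; the third is an assumed continuation.
TFCS = [
    ((80, 0), (80, 0)),    # TFCI 0: nothing is sent on either channel
    ((80, 0), (80, 80)),   # TFCI 1: one 80-bit packet on the second channel
    ((80, 80), (80, 80)),  # TFCI 2: one 80-bit packet on each channel (assumed)
]

def bits_per_channel(tfci: int) -> list[int]:
    """Total bits carried on each transport channel for a given TFCI."""
    return [total for (_size, total) in TFCS[tfci]]

print(bits_per_channel(1))  # -> [0, 80]
```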
  • TCP Transmission Control Protocol
  • IP Internet Protocol
  • ATM Asynchronous Transfer Mode
  • GPS Generalized Processor Sharing
  • WFQ Weighted Fair Queuing
  • rate_i = weight_i / (sum_of_all_active_weights) * maximum_rate.
  • GPS could be applied to the MAC entity in UMTS, with the weighting for each input flow being determined (by the RRC entity) on the basis of certain RAB parameters, which are allocated to the corresponding RAB by the network.
  • RAB parameter may equate to a Quality of Service (QoS) or Guaranteed rate allocated to a user for a particular network service.
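  • A minimal sketch of this GPS share computation, assuming weights derived from RAB parameters as described above (flow names and numeric values are illustrative):

```python
def gps_rates(weights: dict[str, float], active: set[str], max_rate: float) -> dict[str, float]:
    """rate_i = weight_i / sum_of_all_active_weights * max_rate for active flows."""
    total = sum(weights[f] for f in active)
    return {f: (weights[f] / total) * max_rate if f in active else 0.0
            for f in weights}

# Example: three flows share a 2 Mbit/s channel; only two are currently active.
rates = gps_rates({"dcch": 1.0, "dtch_a": 2.0, "dtch_b": 1.0},
                  active={"dcch", "dtch_a"}, max_rate=2_000_000)
print(rates)  # dcch gets 1/3 of the bandwidth, dtch_a gets 2/3, dtch_b gets 0
```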
  • TFC Transport Format Combination
  • TFCS TFC Set
  • Embodiments of this scheme allow the TFC selection process for a subsequent frame to take into account any backlogs which exist for the input flows. The tendency is to adjust the selected TFC to reduce the backlogs. Such a backlog may exist due to the finite number of data transmission possibilities provided for by the TFCS.
  • Nodes at which the method of this scheme may be employed include mobile stations (such as mobile telephones and communicator type devices) (or more generally UEs) and Radio Network Controllers (RNCs).
  • the input flows to the MAC entity are provided by respective Radio Network Controllers
  • each RLC entity provides buffering for the associated data flow.
  • the step of computing a fair share of resources for an input flow is carried out by a Radio Network Controller (RNC) entity.
  • RNC Radio Network Controller
  • the step of computing a fair share of resources for an input flow includes the step of determining the weighting given to that flow as a fraction of the sum of the weights given to all of the input flows. The fair share may then be determined by multiplying the total output bandwidth by the determined fraction. Also preferably, this step may involve using the Generalised Processor Sharing (GPS) mechanism.
  • GPS Generalised Processor Sharing
  • the weighting for a data flow may be defined by one or more Radio Access Bearer (RAB) parameters allocated to a RAB by the UMTS network, where the RAB is associated with each MAC input flow.
  • RAB Radio Access Bearer
  • the method further includes the step of adding the value of the backlog counter to the computed fair share for that flow and selecting a TFC on the basis of the resulting sums for all of the input flows.
  • the difference may be subtracted from the backlog counter for the input flow.
  • UMTS Media Access Control
  • MAC Media Access Control
  • TFCS TFC Set
  • second processor means for adding to a backlog counter associated with each input flow the difference between the data transmission rate for the flow resulting from the selected TFC and the determined fair share, if the data transmission rate is less than the determined fair share, where the first processor means is arranged to take into account the value of the backlog counters when selecting a TFC for the subsequent frame of the output data flow.
  • the first and second processor means are provided by a Radio Network Controller (RNC) entity.
  • RNC Radio Network Controller
  • a simplified UMTS layer 2 includes one Radio Resource Control (RRC) entity, a Medium Access Control (MAC) entity for each mobile station, and a Radio Link Control (RLC) entity for each Radio Access Bearer (RAB).
  • RRC Radio Resource Control
  • MAC Medium Access Control
  • RLC Radio Link Control
  • the MAC entity performs scheduling of outgoing data packets, while the RLC entities provide buffers for respective input flows.
  • the RRC entity sets a limit on the maximum amount of data that can be transmitted from each flow by assigning a set of allowed Transport Format Combinations (TFC) to each MAC (referred to as a TFC Set or TFCS), but each MAC must independently decide how much data is transmitted from each flow by choosing the best available Transport Format Combination (TFC) from the TFCS .
  • TFC Transport Format Combinations
  • the flowchart 400 is a flow diagram of a method of allocating bandwidth resources to, for example, the input flow streams of a MAC entity of the layer 2 of FIG. 3.
  • an exemplary method in accordance with the flowchart 400 may follow the following steps. First, input flows are received at RLCs and the data is buffered (step 405). Information on buffer fill levels is passed to the MAC entity (step 410). After the information on buffer fill levels is passed, the fair MAC bandwidth share for each input flow is computed (step 415).
  • the computed fair share of each is then adjusted by adding the contents of an associated backlog counter to the respective computed fair share (step 420).
  • a TFC is selected from the TFC set to most closely match the adjusted fair shares (step 425).
  • the RLC is next instructed to deliver packets to the MAC entity according to the selected TFC (step 430).
  • the MAC entity may also schedule packets in accordance with the selected TFC (step 435). After packet scheduling, the traffic channels may be transported on the physical channel(s) (step 440). Once packet traffic has been transported, the backlog counters should be updated (step 445). The process may continue (via arrow 450) when new input flows are received at the RLCs, which buffer the data (at step 405).
  • MAC entity on a per Transmission Time Interval (TTI) basis, the optimal distribution of available bandwidth using the Generalised Processor Sharing (GPS) approach (see, e.g., the article by A. K. Parekh et al. referenced hereinabove) and by keeping track of how far behind each flow is from the optimal bandwidth allocation using respective backlog counters.
  • GPS Generalised Processor Sharing
  • the available bandwidth is distributed to flows by using the standard GPS weights, which may be calculated by the RRC using the RAB parameters.
  • the method may first calculate the GPS distribution for the input flows and add to the GPS values the current respective backlogs. This is performed once for each 10ms TTI and results in a fair transmission rate for each flow. However, this rate may not be optimal as it may happen that there is not enough data to be sent in all buffers. In order to achieve optimal throughput as well as fairness, the fair GPS distribution is reduced so as to not exceed the current buffer fill level or the maximum allowed rate for any logical channel. A two step rating process is then carried out.
  • TFCs Transport Format Combinations
  • each TFC being scored according to how close it comes to sending out the optimal rate. In practice this is done by simply counting how much of the fair configuration a TFC fails to send (if a given TFC can send all packets at the fair rate, it is given a score of zero) and then considering only the TFCs which have the lowest scores. The closest match is chosen and used to determine the amount of packets sent from each queue. TFCs having an equal score are given a bonus score according to how many extra bits they can send (this can be further weighted by a Quality of Service rating in order to ensure that the excess capacity goes to the bearer with the highest quality class).
  • the final selection is based on a two-level scoring: the TFC with the lowest score is taken. If there are several TFCs with an equal score, the one with the highest bonus score is chosen. This ensures that the rate for each TTI is maximized. Fairness is achieved by checking that if the chosen TFC does not give all flows at least their determined fair rate, the missing bits are added to a backlog counter of the corresponding flow and the selection is repeated for the next TTI. If any of the flows has nothing to transmit, the backlog is set to zero.
  • This algorithm can be shown to provide bandwidth (and, under certain assumptions, delay bounds) that is close to that of GPS. However, it remains fair and maintains isolation between all flows. It is also computationally simpler than Weighted Fair Queuing algorithms because it utilizes the fact that the MAC layer can transmit on several transport channels at the same time. This results in optimal or close to optimal utilization of the radio interface in the UMTS radio link.
  • the following pseudo-code is an outline of an exemplary algorithm for implementing the scheme described hereinabove with reference to FIGS. 3 and 4:
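  • The patent's own pseudo-code listing is not reproduced in this text; the following Python sketch is an assumed outline of the backlog-adjusted GPS selection described with reference to FIGS. 3 and 4 (all names, the exact scoring, and the backlog update are illustrative, not the patent's code):

```python
def select_tfc(tfcs, fair_rates, backlogs, queue_fill):
    """Pick the TFC whose per-flow bit counts best match the adjusted fair rates.

    tfcs:       list of TFCs; each TFC is a tuple of bits-per-flow for this TTI
    fair_rates: GPS share (bits per TTI) for each flow
    backlogs:   bits each flow is behind its fair allocation
    queue_fill: bits currently buffered for each flow
    """
    # Adjusted fair rate: GPS share plus backlog, capped by what is actually buffered.
    target = [min(f + b, q) for f, b, q in zip(fair_rates, backlogs, queue_fill)]

    best = best_score = best_bonus = None
    for tfci, tfc in enumerate(tfcs):
        # Score: how many of the target bits this TFC fails to send (0 is ideal).
        score = sum(max(t - sent, 0) for t, sent in zip(target, tfc))
        # Bonus: extra bits beyond the target, used to break ties
        # (the text notes this can additionally be weighted by QoS class).
        bonus = sum(max(sent - t, 0) for t, sent in zip(target, tfc))
        if best is None or score < best_score or (score == best_score and bonus > best_bonus):
            best, best_score, best_bonus = tfci, score, bonus

    # Backlog update: bits of the fair share the chosen TFC could not carry;
    # a flow with nothing to transmit has its backlog reset to zero.
    sent = tfcs[best]
    new_backlogs = [0 if q == 0 else max(t - s, 0)
                    for t, s, q in zip(target, sent, queue_fill)]
    return best, new_backlogs
```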
  • a model of a MAC-c entity 500 is illustrated as being in communication with common RLC entities, MAC-d entities, and common transport channels (e.g., RACH/FACH).
  • RLC entities e.g., MAC-c entities
  • MAC-d entities e.g., MAC-d entities
  • common transport channels e.g., RACH/FACH.
  • MAC-c scheduler 505 schedules the forwarding of packets (or more generally segments) from QoS buffers 510, which receive MAC PDUs from MAC-d entities.
  • the MAC layer of UMTS schedules packets in a manner such that the total QoS provided to the end user fulfills the guarantees given when the corresponding RAB was established.
  • One aspect of this scheduling is the requirement that each flow receive the agreed-upon QoS. Because it is possible to multiplex several input flows (e.g., logical channels) onto one output channel (e.g., a transport channel), previously-known scheduling algorithms for the UMTS MAC-layer are not directly applicable.
  • a two-level scheduling algorithm is applied, which enables the implementation of fair scheduling in environments in which the MAC needs to perform multiplexing.
  • the two-level scheduling enables the provision of an arbitrary QoS to all flows that are multiplexed onto a single output channel.
  • the MAC-c entity 500 may be incorporated in, and thus the principles of the present invention may be applied with, the UMTS MAC layer in an RNC, a UE, etc.
  • relevant parameters for each logical channel are first received as input.
  • a backlog counter (value) for each logical channel is maintained. In order to apply a fair queuing mechanism, these parameters are converted to GPS weights.
  • weights for each transport channel are calculated by adding the weights for each logical channel to be multiplexed onto each corresponding transport channel.
  • scheduling is performed by choosing the best TFC according to, for example, the original GPS- based scheduling method.
  • the TBSS given to a transport channel is distributed to corresponding logical channels by using, for example, essentially a similar process as in step 3 for choosing the TFC. It should be noted that this is now simpler because there are no longer any restrictions on the available TFCs.
  • the backlog (value) is updated for each logical channel. This guarantees that each logical channel will get its respective fair share of the total bandwidth, regardless of multiplexing.
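  • As a sketch of the weight aggregation in the steps above (data shapes and channel names are assumed for illustration only):

```python
from collections import defaultdict

def aggregate_weights(lch_weights: dict[str, float],
                      lch_trch_map: dict[str, str]) -> dict[str, float]:
    """Sum the GPS weights of all logical channels multiplexed onto each transport channel."""
    trch_weights: dict[str, float] = defaultdict(float)
    for lch, weight in lch_weights.items():
        trch_weights[lch_trch_map[lch]] += weight
    return dict(trch_weights)

# Example: two logical channels multiplexed onto one FACH, one onto another.
print(aggregate_weights({"dcch": 1.0, "dtch_a": 2.0, "dtch_b": 1.0},
                        {"dcch": "fach1", "dtch_a": "fach1", "dtch_b": "fach2"}))
# -> {'fach1': 3.0, 'fach2': 1.0}
```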
  • tfcs [trch] [tfci] A two-dimensional array containing the TFCS . Each element of the array is a vector containing two integers, the TBS and the TBSS. It is assumed that the TFCS is stored in such a way that the most significant index is the Transport Channel Identifier.
  • max_rate The maximum rate that can be transmitted on all transport channels. Note that this is not typically the same as the sum of the maximum rates on each transport channel, as the transport capability on FACH or DCH channels is limited by the transport capability of the physical common channel. This is preferably calculated directly from the TFCS every time the TFCS is modified and/or limited.
  • trch_max_rate [trch] An array that contains the maximum rate for each transport channel. This parameter, while actually optional, is used to ensure that if the guaranteed rate is higher than the maximum transport rate, then the backlog for the respective flow is not accumulated and the excess data rate can be given to other flow(s). This parameter is preferably calculated directly from the TFCS every time the
  • TFCS is modified and/or limited.
  • lch_qos_class [lch] An array containing the QoS class for each input flow ("logical channel"). This array is preferably re-computed when new input flows are added or old flows are removed.
  • lch_guar_rate[lch] An array containing the guaranteed rate for each input flow ("logical channel"). This array is preferably re-computed when new input flows are added or old flows are removed.
  • lch_trch_map An array containing the transport channel indicator for each input flow ("logical channel"). This array defines how the input flows are multiplexed to transport channels, and thus provides a mapping from logical channel to corresponding transport channel. This parameter is preferably re-computed when new input flows are added or old flows are removed.
  • lch_queue_fill [lch] An array containing the number of packets in the input buffer for each incoming flow. This is the maximum number of packets that can be transmitted from this incoming flow ("logical channel") in this TTI. If more than this number is requested, then the RLC can provide padding, but for packets in QoS buffers (e.g., QoS buffers 510) this is not possible. This parameter is preferably updated before each scheduling decision.
  • lch_pu_size [lch] An array containing the size of the packets in the input buffers for each incoming flow. This parameter may be updated only when the size of the packets/PDUs change, or when new channels are added.
  • trch_qos_class An array containing the maximum QoS class of all input flows ("logical channels") multiplexed to a given transport channel. This array is preferably re-computed whenever lch_qos_class or lch_trch_map is changed.
  • trch_guar_rate An array containing the sum of guaranteed rate of all input flows ("logical channels") multiplexed to a given transport channel. This array is preferably re-computed whenever lch_guar_rate is changed.
  • trch_queue_fill An array containing the total number of bits that can be transmitted from any transport channel. This array is preferably updated for every scheduling decision.
  • This exemplary version of the exemplary scheduling algorithm preferably employs two (2) "external" arrays, which may be stored in memory between the scheduling decisions. Both of these arrays are updated once per scheduling decision: 1. lch_gr_backlog[lch]: An array containing the current guaranteed rate backlog (i.e., how far behind the guaranteed rate this flow is) for each logical channel. This backlog may be specified in bits.
  • lch_wfq_backlog An array containing the current fair queuing backlog (i.e., how far behind the WFQ scheduling this flow is) for each logical channel. This backlog may be specified in bits.
  • two (2) more backlog arrays are preferably calculated for each scheduling decision:
  • trch_gr_backlog An array containing the sum of all current guaranteed rate backlogs of the logical channels multiplexed to a given transport channel.
  • trch_wfq_backlog An array containing the sum of all current fair queuing backlogs of the logical channels multiplexed to a given transport channel.
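  • A sketch of how the per-transport-channel arrays above could be derived from the per-logical-channel arrays for each scheduling decision (the conversion of queue fill from packets to bits via lch_pu_size is an assumption; dictionaries stand in for the arrays):

```python
from collections import defaultdict

def aggregate_per_trch(lch_trch_map, lch_qos_class, lch_guar_rate,
                       lch_queue_fill, lch_pu_size,
                       lch_gr_backlog, lch_wfq_backlog):
    """Build the per-transport-channel arrays from the per-logical-channel arrays.

    Mirrors the definitions above: trch_qos_class is the maximum QoS class,
    trch_guar_rate and the backlogs are sums, and trch_queue_fill is the total
    number of bits that could be transmitted (packets * packet size, assumed).
    """
    trch_qos_class = defaultdict(int)
    trch_guar_rate = defaultdict(int)
    trch_queue_fill = defaultdict(int)
    trch_gr_backlog = defaultdict(int)
    trch_wfq_backlog = defaultdict(int)

    for lch, trch in lch_trch_map.items():
        trch_qos_class[trch] = max(trch_qos_class[trch], lch_qos_class[lch])
        trch_guar_rate[trch] += lch_guar_rate[lch]
        trch_queue_fill[trch] += lch_queue_fill[lch] * lch_pu_size[lch]
        trch_gr_backlog[trch] += lch_gr_backlog[lch]
        trch_wfq_backlog[trch] += lch_wfq_backlog[lch]

    return (dict(trch_qos_class), dict(trch_guar_rate), dict(trch_queue_fill),
            dict(trch_gr_backlog), dict(trch_wfq_backlog))
```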
  • the flowchart 600 indicates in some detail an exemplary method for employing a two-step scheduling algorithm.
  • the two-step scheduling algorithm operates responsive to both the guaranteed rates and the fair queuing amounts of each flow.
  • the two-step scheduling algorithm (i) selects a TFCI based on three variables and (ii) allocates the resulting TBSS in an order responsive to each flow's QoS.
  • the exemplary method works by first updating the trch_gr_backlog and trch_wfq_backlog counters and the trch_queue_fill memory.
  • tfc_gr [trch] = trch_guar_rate [trch] + trch_gr_backlog [trch]
  • the tfc_gr is a transport format combination that would transmit enough bits from all incoming flows in order to give each their respective guaranteed rate.
  • the backlog value ensures that if any flow cannot transmit its guaranteed rate, then its share of the bandwidth is increased.
  • two special cases should be noted. First, if the tfc_gr indicates a transmission rate that is larger than the maximum rate for any transport channel (e.g., if tfc_gr [trch] > trch_max_rate [trch]), then the tfc_gr value is preferably reduced to the value of trch_max_rate.
  • the tfc_gr value is preferably reduced to the trch_queue_fill value. This ensures that no unnecessary padding is requested. (It also ensures that if any flow has nothing to send, then nothing will be requested.)
  • the tfc_wfq variable is similar to the TFC that gives a fair queuing result according to the QoS classes. However, the calculation of the tfc_wfq variable is slightly more complicated than for the tfc_gr variable.
  • the tfc_wfq [trch] is preferably further modified to ensure that the WFQ scheduling does not request more bandwidth than that defined by the trch_max_rate value and/or the trch_queue_fill value (e.g., in bits).
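  • A sketch of the tfc_gr target computation just described, including the two capping rules (channel names and values are illustrative; tfc_wfq would be capped in the same way after its own, more involved, fair-queuing computation):

```python
def gr_target(trch_guar_rate, trch_gr_backlog, trch_max_rate, trch_queue_fill):
    """tfc_gr[trch] = trch_guar_rate[trch] + trch_gr_backlog[trch],
    capped by the channel's maximum rate and by what is actually queued."""
    return {trch: min(trch_guar_rate[trch] + trch_gr_backlog[trch],
                      trch_max_rate[trch], trch_queue_fill[trch])
            for trch in trch_guar_rate}

# Example (bits per TTI, values illustrative):
print(gr_target(trch_guar_rate={"fach1": 160, "fach2": 80},
                trch_gr_backlog={"fach1": 80, "fach2": 0},
                trch_max_rate={"fach1": 320, "fach2": 80},
                trch_queue_fill={"fach1": 200, "fach2": 0}))
# -> {'fach1': 200, 'fach2': 0}
```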
  • the TFCS is scanned through and every TFC is given three scores according to (i) how close the TFC is to tfc_gr, (ii) how close the TFC is to tfc_wfq, and (iii) how much of the excess bandwidth the TFC allocates to flows with different QoS classes. (Step 615.)
  • the scores are determined as follows:
  • these three scores are ranked in a defined priority.
  • the TFCI that maximizes the gr_score is selected.
  • the TFCI that maximizes the wfq_score is selected.
  • the TFCI with the maximum bonus_score is chosen. This three-tiered selection process ensures that all the guaranteed rates are served first. If this is not possible, then the flows with the highest quality of service class are scheduled because the score is multiplied by "qos_class".
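  • A sketch of this three-tiered selection; the exact score formulas are not spelled out in this text, so the gr_score, wfq_score, and bonus_score computations below are assumptions consistent with the description (target bits actually carried count toward a score, excess bits are weighted by QoS class):

```python
def choose_tfci(tfcs, tfc_gr, tfc_wfq, trch_qos_class):
    """Scan the TFCS and pick a TFCI by the tiered scoring described above.

    tfcs:    {tfci: {trch: bits_this_TTI}}
    tfc_gr:  per-transport-channel guaranteed-rate target (bits)
    tfc_wfq: per-transport-channel fair-queuing target (bits)
    """
    def scores(tfc):
        # gr_score / wfq_score: how many of the target bits this TFC carries.
        gr = sum(min(tfc.get(t, 0), tfc_gr[t]) for t in tfc_gr)
        wfq = sum(min(tfc.get(t, 0), tfc_wfq[t]) for t in tfc_wfq)
        # bonus_score: excess bits weighted by the channel's QoS class.
        bonus = sum(max(tfc.get(t, 0) - tfc_gr[t], 0) * trch_qos_class[t] for t in tfc_gr)
        return (gr, wfq, bonus)

    # Highest gr_score wins; ties broken by wfq_score, then by bonus_score.
    return max(tfcs, key=lambda tfci: scores(tfcs[tfci]))
```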
  • transport channels are analyzed only one at a time, the situation is analogous to that of an IP/ATM router, where several flows of different QoS classes share a single output channel. This suggests that a well-tested method like WFQ may be employed for multiplexing several logical channels to a single transport channel.
  • two backlog counters are already present. These two backlog counters can ensure a guaranteed rate and a fair allocation on average for each logical channel, so a simpler alternative is available.
  • the TBSS is divided between logical channels by a three-stage process. (Step 620.) First, check if the TBSS is smaller than the trch_guar_rate.
  • (Step 625.) This may be accomplished by checking if any logical channel has transmitted less than lch_guar_rate and by adding the difference to gr_backlog. A similar procedure may be applied to and for wfq_backlog.
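  • A sketch of this TBSS division and backlog update; the two-pass, QoS-ordered distribution is an assumed realization of the three-stage process described above, and all names are illustrative:

```python
def distribute_tbss(tbss_bits, lchs, lch_qos_class, lch_guar_rate,
                    lch_queue_bits, lch_gr_backlog):
    """Divide a transport channel's TBSS among its logical channels and
    update the guaranteed-rate backlogs (wfq_backlog would be handled alike)."""
    grant = {lch: 0 for lch in lchs}
    remaining = tbss_bits
    ordered = sorted(lchs, key=lambda l: lch_qos_class[l], reverse=True)

    # Pass 1: serve guaranteed rates (never more than is actually queued).
    for lch in ordered:
        give = min(lch_guar_rate[lch], lch_queue_bits[lch], remaining)
        grant[lch] += give
        remaining -= give
    # Pass 2: hand leftover capacity to the highest QoS classes first.
    for lch in ordered:
        give = min(lch_queue_bits[lch] - grant[lch], remaining)
        grant[lch] += give
        remaining -= give

    # Backlog update: bits of the guaranteed rate a channel did not receive;
    # a channel with nothing queued has its backlog reset to zero.
    for lch in lchs:
        short = max(lch_guar_rate[lch] - grant[lch], 0)
        lch_gr_backlog[lch] = 0 if lch_queue_bits[lch] == 0 else lch_gr_backlog[lch] + short
    return grant
```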
  • FIG. 7 another view of the exemplary second layer architecture of an exemplary next-generation system in accordance with the present invention is illustrated generally at 700.
  • the exemplary second layer architecture 700 includes additional details regarding elements of, and interrelationships between, various aspects of the second layer architecture of, for example, the Universal Mobile Telecommunications System (UMTS).
  • UMTS Universal Mobile Telecommunications System
  • RRC element 705 is connected to one or more Radio Link Controllers (RLCs) 710.
  • RLCs 710 includes at least one RLC Packet Data Unit (PDU) Buffer 715.
  • the RLCs 710 are connected to respective common channel Medium Access Control (MAC-c) element(s)/layer 720 or dedicated channel Medium Access Control (MAC-d) element(s)/layer 725.
  • MAC-c Medium Access Control
  • MAC-d Medium Access Control
  • the MAC-c, MAC-d, and RLC layers of UMTS may be located, for example, in a Radio Network Controller (RNC) 140 (of FIG. 1) of the UTRAN 130, a User Equipment (UE) 110, etc.
  • RNC Radio Network Controller
  • the MAC layer of UMTS preferably schedules packets so that the total Quality of Service (QoS) provided to the end user fulfills the guarantees given when the Radio Access Bearer (RAB) 730 was established.
  • QoS Quality of Service
  • RAB Radio Access Bearer
  • One resulting issue is guaranteeing (e.g., different) guaranteed bit rates to services having different QoS classes. It is preferable to guarantee that, if possible, all flows are given their guaranteed bit rate regardless of their QoS class. If this is not possible (e.g., due to high demand), then the flows with the higher (or highest) QoS classes are preferably given their respective guaranteed rates.
  • Certain embodiment(s) of the present invention approach this problem of providing all flows a guaranteed bit rate by following a two-step scheduling process in a scheduler 735 located in the MAC layer.
  • This two-level scheduling process guarantees that, if at all possible, all flows receive their guaranteed bit rates and also ensures that the guaranteed bit rates of the higher (and highest) priority flows are maintained as long as possible.
  • these embodiment(s) may be implemented in the RNC node, the UE (node), etc.
  • In each TTI, the MAC entity has to decide how much data to transmit on each transport channel connected to it. These transport channels are not independent of one another, and are later multiplexed onto a single physical channel at the physical layer
  • the RRC 705 entity has to ensure that the total transmission capability on all transport channels does not exceed the transmission capability of the underlying physical channel. This is done by giving the MAC entity a TFCS, which contains the allowed TFCs for all transport channels.
  • FIG. 8 another exemplary method in flowchart form for scheduling data flows in accordance with the present invention is illustrated generally at 800.
  • the scheduling process in the MAC layer includes the selection of a TFC from a TFCS using a two-step scoring process. This selection may be performed once for each TTI. Initially, several parameters are obtained for each logical channel.
  • the QoS Class for each logical channel may be obtained from the corresponding RAB parameter.
  • the QoS Class value may be obtained directly from the RAB parameter called "QoS Class", or it may alternatively be calculated from one or more RAB parameters using any suitable formula.
  • the Guaranteed Rate for each logical channel may also be obtained from the corresponding RAB parameter.
  • the Guaranteed Rate value may be obtained directly from the "Guaranteed Rate" RAB parameter, or calculated from preassigned fair queuing weights using the GPS formula (as presented in "A Generalised Processor Sharing Approach to Flow Control in Integrated Services Networks: The Single Node Case", A. K. Parekh, R. G. Gallager, published in IEEE/ACM Transactions On Networking, Vol. 1, No. 3, June 1993).
  • the Queue Fill Level corresponds to the number of packets in the input buffer for the corresponding incoming flow ("logical channel").
  • the TFC that has the greatest Score is selected to determine the bandwidth distribution. If two or more TFCs have equal Scores, the TFC with the highest Bonus Score is selected therefrom. (Step 820.)
  • This exemplary procedure from flowchart 800 ensures that if there is a TFC that transmits at least the guaranteed rate for each flow, then that TFC is chosen. This exemplary procedure also attempts to maximize the amount of data being transmitted from the highest QoS class(es). (It should be noted that it is assumed that the TFCs are ordered within the TFCS such that the TBSS for each logical channel increases with increasing TFCI.)
  • int maxTrch = tfcs.Length(); int tfc, tfci, qf, gr, rate, trch, trchGl; int tfcToUse;
  • bits_to_send = min(tbss, qf); /* Give score according to real bits that can be sent, but not more than the queue fill */
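  • Only fragments of the patent's listing for this procedure survive above; the following sketch is an assumed reconstruction of the backlog-free selection of FIG. 8, with the Score and Bonus Score formulas being assumptions consistent with the description (QoS-weighted bits up to the guaranteed rate, then QoS-weighted excess):

```python
def choose_tfci_no_backlog(tfcs, lch_guar_rate, lch_queue_fill, lch_pu_size, lch_qos_class):
    """Backlog-free TFC selection sketched from the description of FIG. 8.

    tfcs: {tfci: {lch: tbss_bits}} -- bits offered to each logical channel by the TFC
    """
    def score_pair(tfc):
        score = bonus = 0
        for lch, tbss in tfc.items():
            qf_bits = lch_queue_fill[lch] * lch_pu_size[lch]
            bits_to_send = min(tbss, qf_bits)           # real bits that can be sent
            toward_gr = min(bits_to_send, lch_guar_rate[lch])
            score += lch_qos_class[lch] * toward_gr      # Score: guaranteed-rate bits, QoS-weighted
            bonus += lch_qos_class[lch] * (bits_to_send - toward_gr)  # Bonus Score: excess bits
        return (score, bonus)

    # Greatest Score wins; equal Scores are broken by the greatest Bonus Score.
    return max(tfcs, key=lambda tfci: score_pair(tfcs[tfci]))
```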
  • Although these embodiment(s) are not fair, they still provide the guaranteed rate transfer rate to all service classes. Specifically, these embodiment(s) are optimized to provide best quality of service to flows having the highest QoS class(es), while still providing a minimum level of service to all flows. Furthermore, because no backlog memory need be updated each TTI, they can be faster to execute, even though they cannot guarantee fairness over the long run.

Abstract

Methods, systems, and arrangements enable packet scheduling in accordance with quality of service (QoS) constraints for data flows. In a Universal Mobile Telecommunications System (UMTS) network environment (100), for example, a Medium Access Control (MAC) layer (320) schedules packet transmission of various data flows to meet stipulated criteria, including permitted transport format combinations (TFCs) from a TFC set (TFCS). In first embodiment(s) (600), the TFC is selected based on guaranteed rate transmission rates, weighted fair queuing (WFQ) transmission rates (610, 805), QoS class, transport block set size (TBSS) (615), and optionally queue fill levels. These first embodiment(s) also further refine the selection process using backlog memories corresponding to previously unmet guaranteed and/or fair transmission rates. In second embodiment(s) (800), memory requirements are reduced by selecting a TFC based on guaranteed rate transmission rates, QoS class, TBSS, and queue fill levels, without accommodating backlogs.

Description

PACKET SCHEDULING IN UMTS USING SEVERAL CALCULATED TRANSFER RATES
CROSS-REFERENCES TO RELATED APPLICATIONS
This Non-provisional Application for Patent claims the benefit of priority from, and hereby incorporates by reference the entire disclosure of, co-pending U.S.
Provisional Application for Patent Serial No. 60/185,005, (Attorney Docket No. 34646-00459USPL) filed February 25, 2000. Co-pending U.S. Provisional Applications for Patent Serial Nos. 60/184,975 (Attorney Docket No. 34646- 00458USPL) and 60/185,003 (Attorney Docket No. 34646-00460USPL), both filed on February 25, 2000, are also hereby incorporated by reference in their entirety herein.
This Non-provisional Application for Patent is related by subject matter to U.S.
Non-provisional Applications for Patent Nos. 09/698,786 (Attorney Docket No. 34646-
00458USPT) and 09/698,672 (Attorney Docket No. 34646-00460USPT), both of which are filed on even date herewith. These two U.S. Non-provisional Applications for Patent are also hereby incorporated by reference in their entirety herein.
BACKGROUND OF THE INVENTION Technical Field of the Invention The present invention relates in general to the field of communications systems, and in particular, by way of example but not limitation, to scheduling packets of data/informational flows having differing priority levels in a communications system.
Description of Related Art Access to and use of wireless networks is becoming increasingly important and popular for business, social, and recreational purposes. Users of wireless networks now rely on them for both voice and data communications. Furthermore, an ever increasing number of users demand both an increasing array of services and capabilities as well as greater bandwidth for activities such as Internet surfing. To address and meet the demands for new services and greater bandwidth, the wireless communications industry constantly strives to improve the number of services and the throughput of their wireless networks. Expanding and improving the infrastructure necessary to provide additional services and higher bandwidth is an expensive and manpower-intensive undertaking. Moreover, high-bandwidth data streams will eventually be demanded by consumers to support features such as real-time audiovisual downloads and live audio-visual communication between two or more people. In the future, it will therefore become necessary and/or more cost-effective to introduce next generation wireless system(s) instead of attempting to upgrade existing system(s). To that end, the wireless communications industry intends to continue to improve the capabilities of the technology upon which it relies and that it makes available to its customers by deploying next generation system(s). Protocols for a next-generation standard that is designed to meet the developing needs of wireless customers are being standardized by the 3rd Generation Partnership Project (3GPP). The set of protocols is known collectively as the Universal Mobile Telecommunications
System (UMTS).
Referring now to FIG. 1, an exemplary wireless communications system with which the present invention may be advantageously employed is illustrated generally at 100. In a UMTS network 100, the network 100 includes a core network 120 and a UMTS Terrestrial Radio Access Network (UTRAN) 130. The UTRAN 130 is composed of, at least partially, a number of Radio Network Controllers (RNCs) 140, each of which may be coupled to one or more neighboring Node Bs 150. Each Node B 150 is responsible for a given geographical cell and the controlling RNC 140 is responsible for routing user and signaling data between that Node B 150 and the core network 120. All of the RNCs 140 may be directly or indirectly coupled to one another. A general outline of the UTRAN 130 is given in Technical Specification TS 25.401 V2.0.0 (1999-09) of the 3rd Generation Partnership Project, 3GPP, which is hereby incorporated by reference in its entirety herein. The UMTS network 100 also includes multiple user equipments (UEs) 110. UE may include, for example, mobile stations, mobile terminals, laptops/personal digital assistants (PDAs) with wireless links, etc. In conventional wireless systems, data transmissions and/or access requests compete for bandwidth based on first come, first served and/or random paradigms. Each mobile station, and its associated transmissions, typically acquire access to a network using some type of request (e.g., a message) prior to establishing a connection. Once the mobile station has established a connection, the mobile station receives a predetermined transmission bandwidth that is usually mandated by the air interface requirements of the relevant system. In a UMTS network, on the other hand, transmission bandwidth is variable, more flexible, and somewhat separated from the physical channel maximum mandated by the air interface requirements of UMTS. However, certain guaranteed bandwidth and/or quality of service (QoS) requirements must be provided to the UEs. There is therefore a need to ensure that the guaranteed bandwidth and/or QoS is provided to each respective UE in the variable and flexible environment of UMTS.
SUMMARY OF THE INVENTION
The above-identified deficiencies, as well as others, that are associated with existing schemes are remedied by the methods, systems, and arrangements of the present invention. For example, as heretofore unrecognized, it would be beneficial to be able to handle specified guaranteed bandwidth and QoS requirements when multiplexing more than one incoming data flow onto a single output channel. In fact, it would be beneficial if a two-level scheduling mechanism was employed in order to maintain guaranteed bit rates to the extent practicable as queued input flows are multiplexed onto a single output flow.
Methods, systems, and arrangements in accordance with certain embodiment(s) of the present invention enable packet scheduling in accordance with quality of service
(QoS) constraints for data flows in communications systems. In a Universal Mobile Telecommunications System (UMTS) network environment, for example, a Medium Access Control (MAC) layer schedules packet transmission of various data flows to meet stipulated criteria, including permitted transport format combinations (TFCs) from a TFC set (TFCS). In first embodiment(s), the TFC is selected based on guaranteed rate transmission rates, weighted fair queuing (WFQ) transmission rates, QoS class, transport block set size (TBSS), and optionally queue fill levels. These first embodiment(s) also further refine the selection process using backlog memories corresponding to previously unmet guaranteed and/or fair transmission rates. In second embodiment(s), memory requirements are reduced by selecting a TFC based on guaranteed rate transmission rates, QoS class, TBSS, and queue fill levels, without accommodating backlogs corresponding to previously unsatisfied requirements.
In certain first embodiment(s), a scheduling method for providing bandwidth to entities in a communications system includes the steps of: calculating a first transfer rate for multiple flows; calculating a second transfer rate for the multiple flows; ascertaining a quality of service (QoS) for each flow of the multiple flows; and assigning bandwidth to each flow of the multiple flows responsive to the first transfer rate, the second transfer rate, and the QoS for each flow of the multiple flows. In a preferred embodiment, the first transfer rate may correspond to a guaranteed rate transfer rate, and the second transfer rate may correspond to a weighted fair queuing (WFQ) transfer rate. In another preferred embodiment, the first and second transfer rates may correspond to aggregated transfer rates over the multiple flows. In certain second embodiment(s), a scheduling method for providing bandwidth to entities in a communications system includes the steps of: ascertaining a quality of service (QoS) class that is associated with each channel of multiple channels; ascertaining a guaranteed rate transmission rate for each channel; ascertaining a queue fill level of a queue that corresponds to each channel; calculating a first score for each channel responsive to the QoS class, the guaranteed rate transmission rate, and the queue fill level. In a preferred embodiment, an additional step of calculating a second score for each channel responsive to the guaranteed rate transmission rate and the queue fill level is included.
The above-described and other features of the present invention are explained in detail hereinafter with reference to the illustrative examples shown in the accompanying drawings. Those skilled in the art will appreciate that the described embodiments are provided for purposes of illustration and understanding and that numerous equivalent embodiments are contemplated herein. BRIEF DESCRIPTION OF THE DRAWINGS
A more complete understanding of the methods, systems, and arrangements of the present invention may be had by reference to the following detailed description when taken in conjunction with the accompanying drawings wherein: FIG. 1 illustrates an exemplary wireless communications system with which the present invention may be advantageously employed;
FIG. 2 illustrates a protocol model for an exemplary next-generation system with which the present invention may be advantageously employed;
FIG. 3 illustrates a view of an exemplary second layer architecture of an exemplary next-generation system in accordance with the present invention;
FIG. 4 illustrates an exemplary method in flowchart form for allocating bandwidth resources to data flow streams between entities in the exemplary second layer architecture of FIG. 3;
FIG. 5 illustrates an exemplary environment for scheduling data flows in accordance with the present invention;
FIG. 6 illustrates an exemplary method in flowchart form for scheduling data flows in accordance with the present invention;
FIG. 7 illustrates another view of the exemplary second layer architecture of an exemplary next-generation system in accordance with the present invention; and FIG. 8 illustrates another exemplary method in flowchart form for scheduling data flows in accordance with the present invention.
DETAILED DESCRIPTION OF THE DRAWINGS
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular circuits, logic modules (implemented in, for example, software, hardware, firmware, some combination thereof, etc.), techniques, etc. in order to provide a thorough understanding of the invention. However, it will be apparent to one of ordinary skill in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known methods, devices, logical code (e.g., hardware, software, firmware, etc.), etc. are omitted so as not to obscure the description of the present invention with unnecessary detail.
A preferred embodiment of the present invention and its advantages are best understood by referring to FIGS. 1-8 of the drawings, like numerals being used for like and corresponding parts of the various drawings. Aspects of the UMTS are used to describe a preferred embodiment of the present invention. However, it should be understood that the principles of the present invention are applicable to other wireless communication standards (or systems), especially those in which communication is packet-based. Referring now to FIG. 2, a protocol model for an exemplary next-generation system with which the present invention may be advantageously employed is illustrated generally at 200. In the protocol model 200 (e.g., for a Forward Access CHannel (FACH) transport channel type), the "Uu" indicates the interface between UTRAN 130 and the UE 110, and "Iub" indicates the interface between the RNC 140 and a Node B 150 (where "Node B" is a generalization of, for example, a Base
Transceiver Station (BTS)). User and signaling data may be carried between an RNC 140 and a UE 110 using Radio Access Bearers (RABs) (as illustrated hereinbelow with reference to FIG. 3). Typically, a UE 110 is allocated one or more RABs, each of which is capable of carrying a flow of user or signaling data. RABs are mapped onto respective logical channels. At the Media Access Control (MAC) layer, a set of logical channels is mapped in turn onto a transport channel, of which there are two types: a "common" transport channel which is shared by different UEs 110 and a "dedicated" transport channel which is allocated to a single UE 110 (thus leading to the terms "MAC-c" and "MAC-d"). One type of common channel is the FACH. A basic characteristic of a FACH is that it is possible to send one or more fixed size packets per transmission time interval (e.g., 10, 20, 40, or 80 ms). Several transport channels (e.g., FACHs) are in turn mapped at the physical layer onto a Secondary Common Control Physical CHannel (S-CCPCH) for transmission over the air interface between a Node B 150 and a UE 110. When a UE 110 registers with an RNC 140 via a Node B 150, that RNC 140 acts at least initially as both the serving and the controlling RNC 140 for the UE 110. (The serving RNC 140 may subsequently differ from the controlling RNC 140 in a UMTS network 100, but the presence or absence of this condition is not particularly relevant here.) The RNC 140 both controls the air interface radio resources and terminates the layer 3 intelligence (e.g., the Radio Resource Control (RRC) protocol), thus routing data associated with the UE 110 directly to and from the core network
120.
It should be understood that the MAC-c entity in the RNC 140 transfers MAC-c Packet Data Units (PDUs) to the peer MAC-c entity at the UE 110 using the services of the FACH Frame Protocol (FACH FP) entity between the RNC 140 and the Node B 150. The FACH FP entity adds header information to the MAC-c PDUs to form
FACH FP PDUs which are transported to the Node B 150 over an AAL2 (or other transport mechanism) connection. An interworking function at the Node B 150 interworks the FACH frame received by the FACH FP entity into the PHY entity.
In an exemplary aspect of the scenario illustrated in FIG. 2, an important task of the MAC-c entity is the scheduling of packets (MAC PDUs) for transmission over the air interface. If it were the case that all packets received by the MAC-c entity were of equal priority (and of the same size), then scheduling would be a simple matter of queuing the received packets and sending them on a first come first served basis (e.g., first-in, first-out (FIFO)). However, UMTS defines a framework in which different Quality of Services (QoSs) may be assigned to different RABs. Packets corresponding to a RAB that has been allocated a high QoS should be transmitted over the air interface at a high priority whilst packets corresponding to a RAB that has been allocated a low QoS should be transmitted over the air interface at a lower priority. Priorities may be determined at the MAC entity (e.g., MAC-c or MAC-d) on the basis of RAB parameters.
UMTS deals with the question of priority by providing at the controlling RNC 140 a set of queues for each FACH. The queues may be associated with respective priority levels. An algorithm, which is defined for selecting packets from the queues in such a way that packets in the higher priority queues are (on average) dealt with more quickly than packets in the lower priority queues, is implemented. The nature of this algorithm is complicated by the fact that the FACHs that are sent on the same physical channel are not independent of one another. More particularly, a set of Transport Format Combinations (TFCs) is defined for each S-CCPCH, where each TFC includes a transmission time interval, a packet size, and a total transmission size (indicating the number of packets in the transmission) for each FACH. The algorithm should select for the FACHs a TFC which matches one of those present in the TFC set in accordance with UMTS protocols.
Preferably, a packet received at the controlling RNC 140 is placed in a queue (for transmission on a FACH), where the queue corresponds to the priority level attached to the packet as well as to the size of the packet. The FACH is mapped onto a S-CCPCH at a Node B 150 or other corresponding node of the UTRAN 130. In an alternative preference, the packets for transmission on the FACH are associated with either a Dedicated Control CHannel (DCCH) or a Dedicated Traffic CHannel (DTCH). It should be noted that, preferably, each FACH is arranged to carry only one size of packets. However, this is not necessary, and it may be that the packet size that can be carried by a given FACH varies from one transmission time interval to another.
As alluded to hereinabove, the UE 110 may communicate with the core network 120 of the UMTS system 100 via separate serving and controlling (or drift)
RNCs 140 within the UTRAN 130 (e.g., when the UE 110 moves from an area covered by the original serving RNC 140 into a new area covered by a controlling/drift RNC 140) (not specifically shown). Signaling and user data packets destined for the
UE 110 are received at the MAC-d entity of the serving RNC 140 from the core network 120 and are "mapped" onto logical channels, namely a Dedicated Control CHannel (DCCH) and a Dedicated Traffic CHannel (DTCH), for example. The MAC-d entity constructs MAC Service Data Units (SDUs), which include a payload section containing logical channel data and a MAC header containing, inter alia, a logical channel identifier. The MAC-d entity passes the MAC SDUs to the FACH FP entity. This FACH FP entity adds a further FACH FP header to each MAC SDU, where the FACH FP header includes a priority level that has been allocated to the MAC SDU by an RRC entity. The RRC is notified of available priority levels, together with an identification of one or more accepted packet sizes for each priority level, following the entry of a UE 110 into the coverage area of the drift RNC 140. The FACH FP packets are sent to a peer FACH FP entity at the drift RNC 140 over an AAL2 (or other) connection. The peer FACH FP entity decapsulates the MAC-d SDU and identifies the priority contained in the FACH FP header. The SDU and associated priority are passed to the MAC-c entity at the controlling RNC 140. The MAC-c layer is responsible for scheduling SDUs for transmission on the
FACHs. More particularly, each SDU is placed in a queue corresponding to its priority and size. For example, if there are 16 priority levels, there will be 16 queue sets for each FACH, with the number of queues in each of the 16 queue sets depending upon the number of packet sizes accepted for the associated priority. As described hereinabove, SDUs are selected from the queues for a given FACH in accordance with some predefined algorithm (e.g., so as to satisfy the TFC requirements of the physical channel).
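By way of illustration only, one possible organization of the per-FACH queue sets described above is sketched below in C-style code, with one queue set per priority level and one queue per accepted packet size. The names and sizes used (NUM_PRIORITIES, MAX_SIZES_PER_PRIORITY, enqueue_sdu, and so forth) are hypothetical and do not appear in the UMTS specifications or elsewhere herein; the accepted packet sizes would be configured per priority level as described above.

#include <stddef.h>

#define NUM_PRIORITIES          16   /* e.g., 16 priority levels               */
#define MAX_SIZES_PER_PRIORITY   4   /* accepted packet sizes per priority     */
#define QUEUE_DEPTH             64   /* illustrative queue depth               */

typedef struct {
    size_t sdu_len[QUEUE_DEPTH];
    int    count;
} sdu_queue_t;

/* One queue set per priority level; one queue per accepted packet size. */
typedef struct {
    int         accepted_size[MAX_SIZES_PER_PRIORITY];  /* configured sizes */
    sdu_queue_t queue[MAX_SIZES_PER_PRIORITY];
} queue_set_t;

static queue_set_t fach_queues[NUM_PRIORITIES];

/* Place an SDU in the queue matching its priority and size; returns 0 on
 * success, -1 if the size is not accepted for that priority or the queue
 * is full. */
int enqueue_sdu(int priority, size_t sdu_len)
{
    queue_set_t *set = &fach_queues[priority];
    for (int i = 0; i < MAX_SIZES_PER_PRIORITY; i++) {
        if ((size_t)set->accepted_size[i] == sdu_len &&
            set->queue[i].count < QUEUE_DEPTH) {
            set->queue[i].sdu_len[set->queue[i].count++] = sdu_len;
            return 0;
        }
    }
    return -1;
}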
The scheme described hereinbelow with reference to FIGS. 3 and 4 relates to data transmission in a telecommunications network and in particular, though not necessarily, to data transmission in a UMTS.
As noted hereinabove, the 3GPP is currently in the process of standardizing a new set of protocols for mobile telecommunications systems. The set of protocols is known collectively as the UMTS. With reference to FIG. 3, a view of an exemplary second layer architecture of an exemplary next-generation system in accordance with the present invention is illustrated generally at 300. Specifically, by way of example only, the exemplary second layer architecture 300 illustrates a simplified UMTS layer 2 protocol structure which is involved in the communication between mobile stations (e.g., mobile telephones), or more broadly UEs 110, and Radio Network Controllers (RNCs) 140 of a UMTS network 100. The RNCs 140 are analogous to the Base Station Controllers (BSCs) of existing GSM mobile telecommunications networks, communicating with the mobile stations via Node Bs 150.
The layer 2 structure of the exemplary second layer architecture 300 includes a set of Radio Access Bearers (RABs) 305 that make available radio resources (and services) to user applications. For each mobile station there may be one or several RABs 305. Data flows (e.g., in the form of segments) from the RABs 305 are passed to respective Radio Link Control (RLC) entities 310, which amongst other tasks buffer the received data segments. There is one RLC entity 310 for each RAB 305. In the RLC layer, RABs 305 are mapped onto respective logical channels 315. A Medium Access Control (MAC) entity 320 receives data transmitted in the logical channels 315 and further maps the data from the logical channels 315 onto a set of transport channels 325. The transport channels 325 are finally mapped to a single physical transport channel 330, which has a total bandwidth (e.g., of <2Mbits/sec) allocated to it by the network. Depending on whether a physical channel is used exclusively by one mobile station or is shared between many mobile stations, it is referred to as either a "dedicated physical channel" or a "common channel". A MAC entity connected to a dedicated physical channel is known as MAC-d; there is preferably one MAC-d entity for each mobile station. A MAC entity connected to a common channel is known as MAC-c; there is preferably one MAC-c entity for each cell.
The bandwidth of a transport channel 325 is not directly restricted by the capabilities of the physical layer 330, but is rather configured by a Radio Resource Controller (RRC) entity 335 using Transport Formats (TFs). For each transport channel 325, the RRC entity 335 defines one or several Transport Block (TB) sizes. Each Transport Block size directly corresponds to an allowed MAC Protocol Data Unit (PDU) and tells the MAC entity what packet sizes it can use to transmit data to the physical layer. In addition to block size, the RRC entity 335 informs the MAC entity 320 of a Transport Block Set (TBS) size, which is the total number of bits the
MAC entity can transmit to the physical layer in a single transmission time interval (TTI). The TB size and TBS size, together with some additional information relating to the allowed physical layer configuration, form a TF. An example of a TF is (TB=80 bits, TBS=160 bits), which means that the MAC entity 320 can transmit two 80 bit packets in a single TTI. Thus, this TF can be written as TF=(80, 160). The RRC entity
335 also informs the MAC entity of all possible TFs for a given transport channel. This combination of TFs is called a Transport Format Combination (TFC). An example of a TFC is {TF1=(80, 80), TF2=(80, 160)}. In this example, the MAC entity can choose to transmit one or two PDUs in one TTI on the particular transport channel in question; in both cases, the PDUs have a size of 80 bits. In each TTI, the MAC entity 320 has to decide how much data to transmit on each transport channel 325 connected to it. These transport channels 325 are not independent of one another, and are later multiplexed onto a single physical channel 330 at the physical layer 330 (as discussed hereinabove). The RRC entity 335 has to ensure that the total transmission capability on all transport channels 325 does not exceed the transmission capability of the underlying physical channel 330. This is accomplished by giving the MAC entity 320 a Transport Format Combination Set (TFCS), which contains the allowed Transport Format Combinations for all transport channels. By way of example, consider a MAC entity 320 which has two transport channels 325 that are further multiplexed onto a single physical channel 330, which has a transport capacity of 160 bits per transmission time interval (It should be understood that, in practice, the capacity will be much greater than 160). The RRC entity 335 could decide to assign three transport formats TF1=(80, 0), TF2=(80, 80) and TF3=(80, 160) to both transport channels 325. Clearly, however, the MAC entity
320 cannot choose to transmit on both transport channels 325 at the same time using TF3 as this would result in the need to transmit 320 bits on the physical channel 330, which has only a capability to transmit 160 bits. The RRC entity 335 has to restrict the total transmission rate by not allowing all combinations of the TFs. An example of this would be a TFCS as follows [{(80, 0), (80, 0)}, {(80, 0), (80, 80)}, {(80, 0),
(80, 160)}, {(80, 80), (80, 0)}, {(80, 80), (80, 80)}, {(80, 160), (80, 0)}], where the transport format of transport channel "1" is given as the first element of each element pair, and the transport format of transport channel "2" is given as the second element. As the MAC entity 320 can only choose one of these allowed transport format combinations from the transport format combination set, it is not possible to exceed the capability of the physical channel 330.
An element of the TFCS is pointed out by a Transport Format Combination Indicator (TFCI), which is the index of the corresponding TFC. For example, in the previous example there are 6 different TFCs, meaning that the TFCI can take any value between 1 and 6. A TFCI value of 2 would correspond to the second TFC, which is {(80, 0), (80, 80)}, meaning that nothing is transmitted from the first transport channel and a single packet of 80 bits is transmitted from the second transport channel.
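By way of illustration only, the TFCS of the preceding example may be represented as a simple table indexed by the TFCI, so that the MAC entity can never select a combination that exceeds the 160-bit capacity of the physical channel. The following C-style sketch merely restates the six allowed combinations listed above; the type and function names (tf_t, tfcs, tfc_total_bits) are hypothetical.

#define NUM_TRCH 2
#define NUM_TFC  6

typedef struct { int tb_bits; int tbs_bits; } tf_t;   /* (TB size, TBS size) */

/* tfcs[tfci - 1][trch]: TFCI values 1..6 map to indices 0..5 here. */
static const tf_t tfcs[NUM_TFC][NUM_TRCH] = {
    { {80,   0}, {80,   0} },
    { {80,   0}, {80,  80} },
    { {80,   0}, {80, 160} },
    { {80,  80}, {80,   0} },
    { {80,  80}, {80,  80} },
    { {80, 160}, {80,   0} },
};

/* Total bits a given TFC would send in one TTI; by construction this never
 * exceeds the 160-bit physical channel capacity of the example. */
int tfc_total_bits(int tfci)
{
    int total = 0;
    for (int trch = 0; trch < NUM_TRCH; trch++)
        total += tfcs[tfci - 1][trch].tbs_bits;
    return total;
}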
It is of course necessary to share the total available bandwidth between the logical channels 315. The decision to distribute the bandwidth to different transport channels is made by the MAC entity 320 for each transmission time interval by choosing a suitable TFCI. This sharing of bandwidth can be done in several ways, for example by giving an absolute preference to flows which are considered to be more important than others. This would be the easiest method to implement, but can result in a very unfair distribution of the bandwidth. Specifically, it is possible that flows that have lower priorities are not allowed to transmit for prolonged periods of time.
This can result in extremely poor performance if the flow control mechanism of a lower priority flow reacts to this. A typical example of such a flow control mechanism can be found in the present day Transmission Control Protocol (TCP) protocol used in the Internet. In existing technologies, such as Internet Protocol (IP) and Asynchronous Transfer Mode (ATM) networks, provision is made for allocating resources on a single output channel to multiple input flows. However, the algorithms used to share out the resources in such systems are not directly applicable to UMTS where multiple input flows are transmitted on respective logical output channels.
Sharing resources between multiple input data flows is referred to as Generalized Processor Sharing (GPS). This GPS, when employed in systems having only a single output channel, is known as Weighted Fair Queuing (WFQ) and is described in a paper entitled "A Generalised Processor Sharing Approach to Flow Control in Integrated Services Networks: The Single Node Case", A. K. Parekh, R. G. Gallager, published in IEEE/ACM Transactions On Networking, Vol. 1, No. 3, June 1993, pp. 344-357. Stated simply, GPS involves calculating a GPS weight for each input flow on the basis of certain parameters associated with the flow. The weights calculated for all of the input flows are added together, and the total available output bandwidth is divided amongst the input flows depending upon the weight of each flow as a fraction of the total weight according to, for example, the following formula: rate_i = weight_i / (sum_of_all_active_weights) * maximum_rate. GPS could be applied to the MAC entity in UMTS, with the weighting for each input flow being determined (by the RRC entity) on the basis of certain RAB parameters, which are allocated to the corresponding RAB by the network. In particular, an RAB parameter may equate to a Quality of Service (QoS) or Guaranteed rate allocated to a user for a particular network service.
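By way of illustration only, the GPS formula quoted above may be evaluated as in the following C-style sketch, in which only flows with queued data ("active" flows) contribute to the weight sum. The array and constant names (weight, active, MAX_FLOWS) are hypothetical.

#define MAX_FLOWS 8   /* illustrative number of input flows */

/* Divide max_rate among the active flows in proportion to their weights:
 * rate_i = weight_i / (sum of all active weights) * max_rate. */
void gps_rates(const double weight[MAX_FLOWS], const int active[MAX_FLOWS],
               double max_rate, double rate[MAX_FLOWS])
{
    double weight_sum = 0.0;
    for (int i = 0; i < MAX_FLOWS; i++)
        if (active[i])
            weight_sum += weight[i];

    for (int i = 0; i < MAX_FLOWS; i++)
        rate[i] = (active[i] && weight_sum > 0.0)
                      ? weight[i] / weight_sum * max_rate
                      : 0.0;
}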
Continuing now with the scheme described herein with reference to FIGS. 3 and 4, it was realized that there are difficulties involved in applying GPS directly to bandwidth allocation in a UMTS network, as GPS assumes that data can be sent on the MAC entity logical channels in infinitely small blocks. This is not possible in UMTS, as UMTS relies upon Transport Format Combinations Sets (TFCSs) as the basic mechanism defining how much data can be sent in each TTI. If GPS is to be employed in UMTS, it is necessary to select the TFC (from the TFCS) which most closely matches the bandwidth allocated to an input flow by GPS. The result of this approach is that the actual amount of data sent for an input stream in a given frame either may fall below the optimized rate or may exceed that optimized rate. In the former case, a backlog of unsent data may build up for the input flow.
It is an object of the scheme described herein with reference to FIGS. 3 and 4 to overcome, or at least mitigate, the disadvantage noted in the preceding paragraph. This and other objects are achieved at least in part by maintaining a backlog counter which keeps track of the backlog of unsent data for a given input flow to the MAC entity. The backlog is taken into account when determining an appropriate TFC for that input flow for a subsequent frame. According to a first aspect of this scheme, there is provided a method of allocating transmission resources at a Media Access Control (MAC) entity of a node of a Universal Mobile Telecommunications System (UMTS), the method including the following steps for each frame of an output data flow: computing for each input flow to the MAC entity a fair share of the available output bandwidth of the MAC entity; selecting a Transport Format Combination (TFC) from a TFC Set (TFCS) on the basis of the bandwidth share computed for the input flows, where the TFC includes a Transport Format allocated to each input flow; and for each input flow, if the allocated TF results in a data transmission rate which is less than the determined fair distribution, adding the difference to a backlog counter for the input flow, where the value of the backlog counter(s) is taken into account when selecting a TFC for the subsequent frame of the output data flow. Embodiments of this scheme allow the TFC selection process for a subsequent frame to take into account any backlogs which exist for the input flows. The tendency is to adjust the selected TFC to reduce the backlogs. Such a backlog may exist due to the finite number of data transmission possibilities provided for by the TFCS. Nodes at which the method of this scheme may be employed include mobile stations (such as mobile telephones and communicator type devices) (or more generally UEs) and Radio Network Controllers (RNCs). Preferably, the input flows to the MAC entity are provided by respective Radio
Link Control (RLC) entities. Also preferably, each RLC entity provides buffering for the associated data flow. Also preferably, the step of computing a fair share of resources for an input flow is carried out by a Radio Network Controller (RNC) entity. Also preferably, the step of computing a fair share of resources for an input flow includes the step of determining the weighting given to that flow as a fraction of the sum of the weights given to all of the input flows. The fair share may then be determined by multiplying the total output bandwidth by the determined fraction. Also preferably, this step may involve using the Generalised Processor Sharing (GPS) mechanism. The weighting for a data flow may be defined by one or more Radio Access Bearer (RAB) parameters allocated to a RAB by the UMTS network, where the RAB is associated with each MAC input flow. Also preferably, in the event that the backlog counter for a given input flow has a positive value, the method further includes the step of adding the value of the backlog counter to the computed fair share for that flow and selecting a TFC on the basis of the resulting sums for all of the input flows.
In certain embodiments of the scheme described herein with reference to FIGS. 3 and 4, where, for a given input flow, the allocated TF results in a data transmission rate that is more than the determined fair distribution, the difference may be subtracted from the backlog counter for the input flow. According to a second aspect of this scheme, there is provided a node of a Universal Mobile Telecommunications System
(UMTS), the node including: a Media Access Control (MAC) entity for receiving a plurality of input data flows; first processor means for computing for each input flow to the MAC entity a fair share of the available output bandwidth of the MAC entity and for selecting a Transport Format Combination (TFC), from a TFC Set (TFCS), on the basis of the bandwidth share computed for the input flows, where the TFC includes a Transport Format allocated to each input flow; second processor means for adding to a backlog counter associated with each input flow the difference between the data transmission rate for the flow resulting from the selected TFC and the determined fair share, if the data transmission rate is less than the determined fair share, where the first processor means is arranged to take into account the value of the backlog counters when selecting a TFC for the subsequent frame of the output data flow. Preferably, the first and second processor means are provided by a Radio Network Controller (RNC) entity.
As is described herein with reference to FIG. 3, a simplified UMTS layer 2 includes one Radio Resource Control (RRC) entity, a Medium Access Control (MAC) entity for each mobile station, and a Radio Link Control (RLC) entity for each Radio
Access Bearer (RAB). The MAC entity performs scheduling of outgoing data packets, while the RLC entities provide buffers for respective input flows. The RRC entity sets a limit on the maximum amount of data that can be transmitted from each flow by assigning a set of allowed Transport Format Combinations (TFCs) to each MAC (referred to as a TFC Set or TFCS), but each MAC must independently decide how much data is transmitted from each flow by choosing the best available Transport Format Combination (TFC) from the TFCS.
With reference now to FIG. 4, an exemplary method in flowchart form for allocating bandwidth resources to data flow streams between entities in the exemplary second layer architecture of FIG. 3 is illustrated generally at 400. The flowchart 400 is a flow diagram of a method of allocating bandwidth resources to, for example, the input flow streams of a MAC entity of the layer 2 of FIG. 3. Generally, an exemplary method in accordance with the flowchart 400 may follow the following steps. First, input flows are received at RLCs and the data is buffered (step 405). Information on buffer fill levels is passed to the MAC entity (step 410). After the information on buffer fill levels is passed, the fair MAC bandwidth share for each input flow is computed (step 415). The computed fair share of each is then adjusted by adding the contents of an associated backlog counter to the respective computed fair share (step 420). Once the computed fair shares have been adjusted, a TFC is selected from the TFC set to most closely match the adjusted fair shares (step 425). The RLC is next instructed to deliver packets to the MAC entity according to the selected TFC (step
430). The MAC entity may also schedule packets in accordance with the selected TFC (step 435). After packet scheduling, the traffic channels may be transported on the physical channel(s) (step 440). Once packet traffic has been transported, the backlog counters should be updated (step 445). The process may continue (via arrow 450) when new input flows are received at the RLCs, which buffer the data (at step 405).
Furthermore, certain embodiment(s) of the scheme operate by calculating at the
MAC entity, on a per Transmission Time Interval (TTI) basis, the optimal distribution of available bandwidth using the Generalised Processor Sharing (GPS) approach (see, e.g., the article by A. K. Parekh et al. referenced hereinabove) and by keeping track of how far behind each flow is from the optimal bandwidth allocation using respective backlog counters. The available bandwidth is distributed to flows by using the standard GPS weights, which may be calculated by the RRC using the RAB parameters.
The method may first calculate the GPS distribution for the input flows and add to the GPS values the current respective backlogs. This is performed once for each 10ms TTI and results in a fair transmission rate for each flow. However, this rate may not be optimal as it may happen that there is not enough data to be sent in all buffers. In order to achieve optimal throughput as well as fairness, the fair GPS distribution is reduced so as to not exceed the current buffer fill level or the maximum allowed rate for any logical channel. A two step rating process is then carried out.
First, the set of fair rates computed for all of the input flows is compared against possible Transport Format Combinations (TFCs) in turn, with each TFC being scored according to how close it comes to sending out the optimal rate. In practice this is done by simply counting how much of the fair configuration a TFC fails to send (if a given TFC can send all packets at the fair rate, it is given a score of zero) and then considering only the TFCs which have the lowest scores. The closest match is chosen and used to determine the amount of packets sent from each queue. TFCs having an equal score are given a bonus score according to how many extra bits they can send (this can be further weighted by a Quality of Service rating in order to ensure that the excess capacity goes to the bearer with the highest quality class). The final selection is based on a two-level scoring: the TFC with the lowest score is taken. If there are several TFCs with an equal score, the one with the highest bonus score is chosen. This ensures that the rate for each TTI is maximized. Fairness is achieved by checking that if the chosen TFC does not give all flows at least their determined fair rate, the missing bits are added to a backlog counter of the corresponding flow and the selection is repeated for the next TTI. If any of the flows has nothing to transmit, the backlog is set to zero.
This algorithm can be shown to provide bandwidth (and, under certain assumptions, delay bounds) that is close to that of GPS. However, it remains fair and maintains isolation between all flows. It is also computationally simpler than Weighted Fair Queuing algorithms because it utilizes the fact that the MAC layer can transmit on several transport channels at the same time. This results in optimal or close to optimal utilization of the radio interface in the UMTS radio link. The following pseudo-code is an outline of an exemplary algorithm for implementing the scheme described hereinabove with reference to FIGS. 3 and 4:
/*
 * GPS-based TFC selection. Schedules packets by optimizing the throughput
 * while still keeping the fairness (i.e., the guaranteed rates).
 * queue_fill_state[], weight_vector[], trch_max_rate[], QoS_vector[],
 * tfcs[][][], maxrate, MAX_TRCH and MAX_TFCI are assumed to be maintained
 * elsewhere (e.g., configured by the RRC).
 */
int sched_gps() {
    double weight, weight_sum;
    double score, bonus_score;
    double min_score = HUGE_NUMBER;
    double max_bonus_score = 0;
    int tfci, rate, trch;
    int tfc_to_use = 0;
    /* The backlog persists between TTIs (hence declared static here). */
    static double backlog[MAX_TRCH];
    double gps_req[MAX_TRCH];
    double gps_req_comp[MAX_TRCH];

    /* First calculate the sum of the weights of all active queues. */
    weight_sum = 0;
    for (trch = 0; trch < MAX_TRCH; trch++) {
        if (queue_fill_state[trch] > 0) {
            weight_sum += weight_vector[trch];
        }
    }

    /* Then calculate the fair distribution of the available bandwidth
     * using GPS. Modify the GPS scheduling, reducing the rate if there
     * is not enough data in the buffers or if the scheduled rate is
     * higher than the maximum rate for a given logical channel. */
    for (trch = 0; trch < MAX_TRCH; trch++) {
        if (queue_fill_state[trch] == 0) {
            backlog[trch] = 0;
        }
        /* Here we calculate how many bits we should send on each channel
         * according to GPS. */
        gps_req[trch] = 0;
        gps_req_comp[trch] = 0;
        if (queue_fill_state[trch] > 0) {
            weight = weight_vector[trch];
            gps_req[trch] = weight / weight_sum * maxrate + backlog[trch];
            gps_req_comp[trch] = gps_req[trch];
            if (gps_req_comp[trch] > queue_fill_state[trch]) {
                gps_req_comp[trch] = queue_fill_state[trch];
            }
            if (gps_req_comp[trch] > trch_max_rate[trch]) {
                gps_req_comp[trch] = trch_max_rate[trch];
            }
        }
    }

    /* Now we have our basis for selecting the TFC. Score all available
     * TFCs by calculating how far they are from the modified GPS result.
     * If there are several TFCs that can send the whole GPS result (or
     * are equally close), choose the one that maximises the throughput of
     * the highest QoS class. Note that the TFCIs are assumed to be in
     * increasing order regarding the bandwidth usage. */
    for (tfci = 0; tfci < MAX_TFCI; tfci++) {
        rate = 0;
        score = bonus_score = 0;
        for (trch = 0; trch < MAX_TRCH; trch++) {
            int tbss = tfcs[trch][tfci][1];   /* transport block set size */
            rate += tbss;
            if (tbss < gps_req_comp[trch]) {
                score += gps_req_comp[trch] - tbss;
            } else if (tbss <= queue_fill_state[trch]) {
                bonus_score += QoS_vector[trch] * (tbss - gps_req_comp[trch]);
            }
        }
        if (score < min_score) {
            tfc_to_use = tfci;
            min_score = score;
            max_bonus_score = bonus_score;
        } else if (score == min_score && bonus_score > max_bonus_score) {
            tfc_to_use = tfci;
            max_bonus_score = bonus_score;
        }
    }

    /* Now we have chosen the TFC to use. Update the backlog and output
     * the right TFCI. */
    for (trch = 0; trch < MAX_TRCH; trch++) {
        int tbss = tfcs[trch][tfc_to_use][1];
        if (tbss < queue_fill_state[trch]) {
            if (gps_req[trch] == gps_req_comp[trch]) {
                backlog[trch] = gps_req[trch] - tbss;
                if (backlog[trch] < 0) backlog[trch] = 0;
            } else {
                backlog[trch] = 0;
            }
        }
    }
    return tfc_to_use;
}
Referring now to FIG. 5, an exemplary environment for scheduling data flows in accordance with the present invention is illustrated generally at 500. A model of a MAC-c entity 500 is illustrated as being in communication with common RLC entities, MAC-d entities, and common transport channels (e.g., RACH/FACH). A
MAC-c scheduler 505 schedules the forwarding of packets (or more generally segments) from QoS buffers 510, which receive MAC PDUs from MAC-d entities. As alluded to hereinabove, the MAC layer of UMTS schedules packets in a manner such that the total QoS provided to the end user fulfills the guarantees given when the corresponding RAB was established. One aspect of this scheduling is the requirement that each flow receive the agreed-upon QoS. Because it is possible to multiplex several input flows (e.g., logical channels) onto one output channel (e.g., a transport channel), previously-known scheduling algorithms for the UMTS MAC-layer are not directly applicable. In accordance with certain embodiment(s) of the present invention, a two-level scheduling algorithm is applied, which enables the implementation of fair scheduling in environments in which the MAC needs to perform multiplexing. The two-level scheduling enables the provision of an arbitrary QoS to all flows that are multiplexed onto a single output channel. It should be noted that the MAC-c entity 500 may be incorporated in, and thus the principles of the present invention may be applied with, the UMTS MAC layer in an RNC, a UE, etc. In accordance with certain embodiment(s) of the present invention, relevant parameters for each logical channel are first received as input. A backlog counter (value) for each logical channel is maintained. In order to apply a fair queuing mechanism, these parameters are converted to GPS weights. There may be one or alternatively several different levels of weights for each flow. Second, weights for each transport channel are calculated by adding the weights for each logical channel to be multiplexed onto each corresponding transport channel. Third, scheduling is performed by choosing the best TFC according to, for example, the original GPS-based scheduling method. Fourth, the TBSS given to a transport channel is distributed to corresponding logical channels by using, for example, essentially a similar process as in step 3 for choosing the TFC. It should be noted that this is now simpler because there are no longer any restrictions on the available TFCs. Fifth, the backlog (value) is updated for each logical channel. This guarantees that each logical channel will get its respective fair share of the total bandwidth, regardless of multiplexing. Certain embodiment(s) in accordance with the present invention are described below in the context of variables that approximate a pseudo-code format. It should be noted that this description assumes that all logical channels to be multiplexed onto a single transport channel have an equal TBS. However, it should be understood that generalization to the case of unequal block sizes may be made by one of ordinary skill in the art after reading and understanding the principles of the present invention.
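By way of illustration only, the second of the five steps described above (summing the weights of the logical channels multiplexed onto each transport channel) may be computed as in the following C-style sketch. The mapping array corresponds to the lch_trch_map parameter defined hereinbelow; the remaining names and sizes are hypothetical.

#define MAX_LCH  16   /* illustrative sizes only */
#define MAX_TRCH  8

/* Sum the GPS weight of every logical channel onto the transport channel
 * it is multiplexed to; lch_trch_map[lch] gives the transport channel of
 * logical channel lch. */
void aggregate_weights(const double lch_weight[MAX_LCH],
                       const int lch_trch_map[MAX_LCH], int num_lch,
                       double trch_weight[MAX_TRCH])
{
    int lch, trch;
    for (trch = 0; trch < MAX_TRCH; trch++)
        trch_weight[trch] = 0.0;
    for (lch = 0; lch < num_lch; lch++)
        trch_weight[lch_trch_map[lch]] += lch_weight[lch];
}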
The following parameters are advantageously provided as input:
1. tfcs[trch][tfci]: A two-dimensional array containing the TFCS. Each element of the array is a vector containing two integers, the TBS and the TBSS. It is assumed that the TFCS is stored in such a way that the most significant index is the Transport Channel Identifier.
2. max_rate: The maximum rate that can be transmitted on all transport channels. Note that this is not typically the same as the sum of the maximum rates on each transport channel, as the transport capability on FACH or DCH channels is limited by the transport capability of the physical common channel. This is preferably calculated directly from the TFCS every time the TFCS is modified and/or limited. 3. trch_max_rate[trch]: An array that contains the maximum rate for each transport channel. This parameter, while actually optional, is used to ensure that if the guaranteed rate is higher than the maximum transport rate, then the backlog for the respective flow is not accumulated and the excess data rate can be given to other flow(s). This parameter is preferably calculated directly from the TFCS every time the
TFCS is modified and/or limited.
4. lch_qos_class[lch]: An array containing the QoS class for each input flow ("logical channel"). This array is preferably re-computed when new input flows are added or old flows are removed. 5. lch_guar_rate[lch]: An array containing the guaranteed rate for each input flow ("logical channel"). This array is preferably re-computed when new input flows are added or old flows are removed.
6. lch_trch_map[lch]: An array containing the transport channel indicator for each input flow ("logical channel"). This array defines how the input flows are multiplexed to transport channels, and thus provides a mapping from logical channel to corresponding transport channel. This parameter is preferably re-computed when new input flows are added or old flows are removed.
7. lch_queue_fill[lch]: An array containing the number of packets in the input buffer for each incoming flow. This is the maximum number of packets that can be transmitted from this incoming flow ("logical channel") in this TTI. If more than this number is requested, then the RLC can provide padding, but for packets in QoS buffers (e.g., QoS buffers 510) this is not possible. This parameter is preferably updated before each scheduling decision.
8. lch_pu_size[lch]: An array containing the size of the packets in the input buffers for each incoming flow. This parameter may be updated only when the size of the packets/PDUs change, or when new channels are added.
From the above eight (8) parameters, the following three (3) additional parameters may be calculated:
1. trch_qos_class[trch]: An array containing the maximum QoS class of all input flows ("logical channels") multiplexed to a given transport channel. This array is preferably re-computed whenever lch_qos_class or lch_trch_map is changed. 2. trch_guar_rate[trch]: An array containing the sum of guaranteed rate of all input flows ("logical channels") multiplexed to a given transport channel. This array is preferably re-computed whenever lch_guar_rate is changed.
3. trch_queue_fill[trch]: An array containing the total number of bits that can be transmitted from any transport channel. This array is preferably updated for every scheduling decision. (An exemplary computation of these three derived arrays is sketched below.)
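By way of illustration only, these three derived arrays may be computed from the per-logical-channel input parameters as in the following C-style sketch, in which the queue fill level is converted to bits using lch_pu_size. The function name and the array sizes are hypothetical; the array names follow the parameters defined hereinabove.

#define MAX_LCH  16   /* illustrative sizes only */
#define MAX_TRCH  8

/* Recompute trch_qos_class (maximum QoS class), trch_guar_rate (sum of
 * guaranteed rates), and trch_queue_fill (total buffered bits) for every
 * transport channel from the per-logical-channel parameters. */
void derive_trch_parameters(int num_lch,
                            const int    lch_trch_map[MAX_LCH],
                            const double lch_qos_class[MAX_LCH],
                            const double lch_guar_rate[MAX_LCH],
                            const int    lch_queue_fill[MAX_LCH],
                            const int    lch_pu_size[MAX_LCH],
                            double trch_qos_class[MAX_TRCH],
                            double trch_guar_rate[MAX_TRCH],
                            double trch_queue_fill[MAX_TRCH])
{
    int lch, trch;
    for (trch = 0; trch < MAX_TRCH; trch++) {
        trch_qos_class[trch] = 0.0;
        trch_guar_rate[trch] = 0.0;
        trch_queue_fill[trch] = 0.0;
    }
    for (lch = 0; lch < num_lch; lch++) {
        trch = lch_trch_map[lch];
        if (lch_qos_class[lch] > trch_qos_class[trch])
            trch_qos_class[trch] = lch_qos_class[lch];
        trch_guar_rate[trch] += lch_guar_rate[lch];
        trch_queue_fill[trch] += (double) lch_queue_fill[lch] * lch_pu_size[lch];
    }
}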
This exemplary version of the exemplary scheduling algorithm preferably employs two (2) "external" arrays, which may be stored in memory in between the scheduling decisions. Both of these arrays are updated once per scheduling decision: 1. lch_gr_backlog[lch]: An array containing the current guaranteed rate backlog (i.e., how far behind the guaranteed rate this flow is) for each logical channel. This backlog may be specified in bits.
2. lch_wfq_backlog[lch]: An array containing the current fair queuing backlog (i.e., how far behind the WFQ scheduling this flow is) for each logical channel. This backlog may be specified in bits.
From the above two (2) backlog arrays, two (2) more backlog arrays are preferably calculated for each scheduling decision:
1. trch_gr_backlog [trch]: An array containing the sum of all current guaranteed rate backlogs of the logical channels multiplexed to a given transport channel.
2. trch_wfq_backlog[trch]: An array containing the sum of all current fair queuing backlogs of the logical channels multiplexed to a given transport channel.
Referring now to FIG. 6, an exemplary method in flowchart form for scheduling data flows in accordance with the present invention is illustrated generally at 600. The flowchart 600 indicates in some detail an exemplary method for employing a two-step scheduling algorithm. The two-step scheduling algorithm operates responsive to both the guaranteed rates and the fair queuing amounts of each flow. The two-step scheduling algorithm (i) selects a TFCI based on three variables and (ii) allocates the resulting TBSS in an order responsive to each flow's QoS. The exemplary method works by first updating the trch_gr_backlog and trch_wfq_backlog counters and the trch_queue_fill memory. (Step 605.) Next, two reference (e.g., so-called "optimal") transport format combinations, tfc_gr and tfc_wfq, are calculated. (Step 610.) The tfc_gr is the sum of the guaranteed rate and any possible guaranteed rate backlog for all logical channels multiplexed to this transport channel: tfc_gr[trch] = trch_guar_rate[trch] + trch_gr_backlog[trch]
Thus the tfc_gr is a transport format combination that would transmit enough bits from all incoming flows in order to give each their respective guaranteed rate. The backlog value ensures that if any flow cannot transmit its guaranteed rate, then its share of the bandwidth is increased. In order to provide optimal service, two special cases should be noted. First, if the tfc_gr indicates a transmission rate that is larger than the maximum rate for any transport channel (e.g., if tfc_gr[trch] > trch_max_rate[trch]), then the tfc_gr value is preferably reduced to the value of trch_max_rate. Second, if the tfc_gr value is greater than the number of bits that are buffered for this transport channel (e.g., if tfc_gr[trch] > trch_queue_fill[trch]), then the tfc_gr value is preferably reduced to the trch_queue_fill value. This ensures that no unnecessary padding is requested. (It also ensures that if any flow has nothing to send, then nothing will be requested.)
The tfc_wfq variable is similar to the TFC that gives a fair queuing result according to the QoS classes. However, the calculation of the tfc_wfq variable is slightly more complicated than for the tfc_gr variable. First, the sum of the QoS classes of all active flows is calculated (a flow may be defined as "active" if it has at least one packet to send): qos_sum = ∑ trch_qos_class[trch], where the sum is over all the transport channels that have trch_queue_fill[trch] > 0. Second, the fair scheduling can then be calculated by: tfc_wfq[trch] = max_rate * trch_qos_class[trch] / qos_sum. The fair scheduling TFC should also be further modified by taking into account any possible backlog: tfc_wfq[trch] = tfc_wfq[trch] + trch_wfq_backlog[trch]. As is explained hereinabove with respect to tfc_gr and the providing of optimal service, the tfc_wfq[trch] value is preferably further modified to ensure that the WFQ scheduling does not request more bandwidth than that defined by the trch_max_rate value and/or the trch_queue_fill value (e.g., in bits).
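By way of illustration only, one possible computation of the two reference transport format combinations, consistent with the description above, is given in the following C-style sketch; it applies the caps discussed hereinabove (the per-transport-channel maximum rate and the buffered data). The helper min3 and the function name are hypothetical, while the array names follow the parameters defined hereinabove.

static double min3(double a, double b, double c)
{
    double m = (a < b) ? a : b;
    return (m < c) ? m : c;
}

/* Compute the guaranteed-rate reference (tfc_gr) and the fair-queuing
 * reference (tfc_wfq) for every transport channel, each capped by the
 * channel maximum rate and by the number of buffered bits. */
void reference_tfcs(int num_trch, double max_rate,
                    const double trch_guar_rate[], const double trch_gr_backlog[],
                    const double trch_qos_class[], const double trch_wfq_backlog[],
                    const double trch_max_rate[], const double trch_queue_fill[],
                    double tfc_gr[], double tfc_wfq[])
{
    /* Sum of QoS classes over active transport channels (queue fill > 0). */
    double qos_sum = 0.0;
    for (int t = 0; t < num_trch; t++)
        if (trch_queue_fill[t] > 0)
            qos_sum += trch_qos_class[t];

    for (int t = 0; t < num_trch; t++) {
        /* Guaranteed-rate reference plus backlog, capped. */
        tfc_gr[t] = min3(trch_guar_rate[t] + trch_gr_backlog[t],
                         trch_max_rate[t], trch_queue_fill[t]);

        /* QoS-weighted fair share plus backlog, similarly capped. */
        double fair = (qos_sum > 0.0)
                          ? max_rate * trch_qos_class[t] / qos_sum
                          : 0.0;
        tfc_wfq[t] = min3(fair + trch_wfq_backlog[t],
                          trch_max_rate[t], trch_queue_fill[t]);
    }
}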
Once the two reference TFCs have been calculated, the TFCS is scanned through and every TFC is given three scores according to (i) how close the TFC is to tfc_gr, (ii) how close the TFC is to tfc_wfq, and (iii) how much of the excess bandwidth the TFC allocates to flows with different QoS classes. (Step 615.) The scores are determined as follows:
gr_score = ∑ qos_class[trch] * min(tbss, tfc_gr);
wfq_score = ∑ qos_class[trch] * min(tbss, tfc_wfq); and
bonus_score = ∑ qos_class[trch] * min(trch_queue_fill, tbss - max(tfc_gr, tfc_wfq)).
Thus the gr_score and the wfq_score increase up to a maximum that is reached when tbss >= tfc_gr and tfc_wfq, respectively, while the bonus_score continues to increase as long as tbss < trch_queue_fill.
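By way of illustration only, the three scores for a single candidate TFC may be evaluated as in the following C-style sketch, in which tbss[trch] denotes the transport block set size that the candidate TFC assigns to each transport channel. The helpers dmin and dmax and the function name are hypothetical, and the sketch guards against a negative excess so that a candidate falling short of the reference rates simply earns no bonus, which is one possible reading of the formulas above.

static double dmin(double a, double b) { return (a < b) ? a : b; }
static double dmax(double a, double b) { return (a > b) ? a : b; }

/* Score one candidate TFC against the two reference rates: the gr_score
 * and wfq_score reward bits up to tfc_gr and tfc_wfq, respectively, while
 * the bonus_score rewards excess capacity, weighted by the QoS class and
 * limited by the data actually buffered. */
void score_tfc(int num_trch, const double tbss[],
               const double trch_qos_class[], const double trch_queue_fill[],
               const double tfc_gr[], const double tfc_wfq[],
               double *gr_score, double *wfq_score, double *bonus_score)
{
    *gr_score = *wfq_score = *bonus_score = 0.0;
    for (int t = 0; t < num_trch; t++) {
        *gr_score  += trch_qos_class[t] * dmin(tbss[t], tfc_gr[t]);
        *wfq_score += trch_qos_class[t] * dmin(tbss[t], tfc_wfq[t]);

        double excess = tbss[t] - dmax(tfc_gr[t], tfc_wfq[t]);
        if (excess > 0.0)
            *bonus_score += trch_qos_class[t] * dmin(trch_queue_fill[t], excess);
    }
}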
In certain embodiment(s), these three scores are ranked in a defined priority. First, the TFCI that maximizes the gr_score is selected. Second, if there are several TFCIs with the same gr_score, then the TFCI that maximizes the wfq_score is selected. Third, if there are still several choices left (i.e., several TFCIs have the same wfq_score as well as the same gr_score), then the TFCI with the maximum bonus_score is chosen. This three-tiered selection process ensures that all the guaranteed rates are served first. If this is not possible, then the flows with the highest quality of service class are scheduled because the score is multiplied by "qos_class". If all guaranteed rates can be provided, then a fair allocation is tried as well. If this is not possible, as much bandwidth as possible is given to flows with the highest priorities. Finally, if there is any excess bandwidth after fair scheduling (e.g., typically because one of the higher priority flows has only a few bits buffered for this TTI), the excess bandwidth is given to the flow which has the highest priority and still has data to send. Once the TFCI (and thus the amount of data to be transmitted on each transport channel) has been selected, the amount of data to be transmitted from each transport channel is mapped to the logical channels using that particular transport channel. This, at least in principle, is a separate scheduling problem in which the TBSS is allocated to flows that are multiplexed to corresponding transport channels. If transport channels are analyzed only one at a time, the situation is analogous to that of an IP/ATM router, where several flows of different QoS classes share a single output channel. This suggests that a well-tested method like WFQ may be employed for multiplexing several logical channels to a single transport channel. However, in accordance with the principles of the present invention, two backlog counters are already present. These two backlog counters can ensure a guaranteed rate and a fair allocation on average for each logical channel, so a simpler alternative is available. Specifically, the TBSS is divided between logical channels by a three-stage process. (Step 620.) First, check if the TBSS is smaller than the trch_guar_rate. If so, give the flow with the highest priority its lch_guar_rate bits, give the flow with the second highest priority the second lch_guar_rate, etc. until the whole TBSS has been allocated. Second, if the TBSS is larger than the trch_guar_rate, all flows are given their respective guaranteed rate. Third, check if the TBSS is smaller than tfc_wfq. If it is, first allocate to the flow with the highest priority its respective fair share (e.g., lch_qos_class / qos_sum * max_rate), then second allocate to the flow with the second highest priority its respective fair share, etc. until the whole TBSS has been allocated. Finally, if the TBSS is larger than the tfc_wfq, all flows can automatically receive their respective fair share, and the excess bandwidth may be given to the flow or flows with the highest priority or priorities. The appropriate TFCI has been determined as well as how much data should be requested from each input flow. However, it should also be ensured that each logical channel will, on average, receive both the guaranteed rate and its respective fair allocation of the bandwidth. (Step 625.) This may be accomplished by checking if any logical channel has transmitted less than its lch_guar_rate and by adding the difference to the gr_backlog.
A similar procedure may be applied for the wfq_backlog.
If any flow transmits less than lch_qos_class / qos_sum * max_rate bits, the difference is added to the wfq_backlog. It should be noted that if any flow transmits all the packets it had (previously) buffered, then its backlog is re-set to zero. This "zeroing" of the backlog guarantees that no flow can accumulate excess backlog and take advantage of it later at the expense of other flows. Referring now to FIG. 7, another view of the exemplary second layer architecture of an exemplary next-generation system in accordance with the present invention is illustrated generally at 700. The exemplary second layer architecture 700 includes additional details regarding elements of, and interrelationships between, various aspects of the second layer architecture of, for example, the Universal Mobile Telecommunications System (UMTS). Each illustrated Radio Resource Control
(RRC) element 705 is connected to one or more Radio Link Controllers (RLCs) 710. Each illustrated RLC 710 includes at least one RLC Packet Data Unit (PDU) Buffer 715. The RLCs 710 are connected to respective common channel Medium Access Control (MAC-c) element(s)/layer 720 or dedicated channel Medium Access Control (MAC-d) element(s)/layer 725. The MAC-c, MAC-d, and RLC layers of UMTS may be located, for example, in a Radio Network Controller (RNC) 140 (of FIG. 1) of the UTRAN 130, a User Equipment (UE) 110, etc.
As noted hereinabove, the MAC layer of UMTS preferably schedules packets so that the total Quality of Service (QoS) provided to the end user fulfills the guarantees given when the Radio Access Bearer (RAB) 730 was established. One resulting issue is guaranteeing (e.g., different) guaranteed bit rates to services having different QoS classes. It is preferable to guarantee that, if possible, all flows are given their guaranteed bit rate regardless of their QoS class. If this is not possible (e.g., due to high demand), then the flows with the higher (or highest) QoS classes are preferably given their respective guaranteed rates. Certain embodiment(s) of the present invention approach this problem of providing all flows a guaranteed bit rate by following a two-step scheduling process in a scheduler 735 located in the MAC layer.
This two-level scheduling process guarantees that, if at all possible, all flows receive their guaranteed bit rates and also ensures that the guaranteed bit rates of the higher (and highest) priority flows are maintained as long as possible. Advantageously, these embodiment(s) may be implemented in the RNC node, the UE (node), etc.
In each TTI, the MAC entity has to decide how much data to transmit on each transport channel connected to it. These transport channels are not independent of one another, and are later multiplexed onto a single physical channel at the physical layer
(as discussed hereinabove). The RRC 705 entity has to ensure that the total transmission capability on all transport channels does not exceed the transmission capability of the underlying physical channel. This is done by giving the MAC entity a TFCS, which contains the allowed TFCs for all transport channels. Referring now to FIG. 8, another exemplary method in flowchart form for scheduling data flows in accordance with the present invention is illustrated generally at 800. For the exemplary flowchart 800, the scheduling process in the MAC layer includes the selection of a TFC from a TFCS using a two-step scoring process. This selection may be performed once for each TTI. Initially, several parameters are obtained for each logical channel. (Step 805.) The QoS Class for each logical channel may be obtained from the corresponding RAB parameter. The QoS Class value may be obtained directly from the RAB parameter called "QoS Class", or it may alternatively be calculated from one or more RAB parameters using any suitable formula. The Guaranteed Rate for each logical channel may also be obtained from the corresponding RAB parameter.
The Guaranteed Rate value may be obtained directly from the "Guaranteed Rate" RAB parameter, calculated from preassigned fair queuing weights using the GPS formula (as presented in "A Generalised Processor Sharing Approach to Flow Control in Integrated Services Networks: The Single Node Case", A. K. Parekh, R. G. Gallager, published in IEEE/ACM Transactions On Networking, Vol. 1, No. 3, June
1993, pp. 344-357), or it may alternatively be calculated from one or more RAB parameters using any suitable formula. If the "Guaranteed Rate" RAB parameter is not applicable or is otherwise unsatisfactory, a zero (0) value may optionally be assigned to this parameter. (In the following description, it is assumed that the Guaranteed Rate is expressed as bits per 10 ms.) The Queue Fill Level corresponds to a number of
PDUs queued for each logical channel, and it may be obtained from the RLC entity. For each logical channel for each TFC in the TFCS, two scores are calculated by the following formulas (Step 810.):
(1) Score_lch = QoS Class * min(TBSS, Guaranteed Rate, Queue Fill Level); and
(2) If min(Queue Fill Level, TBSS) > Guaranteed Rate, then Bonus_score_lch = QoS Class * [min(Queue Fill Level, TBSS) - Guaranteed Rate];
otherwise, Bonus_score_lch = 0.
For each TFC in the TFCS, two other scores are calculated using the following formulas (Step 815.):
(1) Score = Sum(Score_lch); and
(2) Bonus_score = Sum(Bonus_score_lch).
The TFC that has the greatest Score is selected to determine the bandwidth distribution. If two or more TFCs have equal Scores, the TFC with the highest Bonus_score is selected therefrom. (Step 820.) This exemplary procedure from flowchart 800 ensures that if there is a TFC that transmits at least the guaranteed rate for each flow, then that TFC is chosen. This exemplary procedure also attempts to maximize the amount of data being transmitted from the highest QoS class(es). (It should be noted that it is assumed that the TFCs are ordered within the TFCS such that the TBSS for each logical channel increases with increasing TFCI.)
The following pseudo-code is an outline of an exemplary algorithm for implementing the scheme described hereinabove with reference to FIGS. 7 and 8:
/* Absolute-priority-based TFC selection (no backlog memory). Selects the
 * TFC with the greatest Score; ties are broken using the Bonus_score.
 * queueFillStateMemory[] and min() are assumed to be available elsewhere. */
int sched_abs_prio(const REALVECTOR_t &GuarRateVect, const VECTOR_t &tfcs,
                   const REALVECTOR_t &QoSin, const INTVECTOR_t &PuSizein) {
    int maxLch = tfcs.Length();
    int tfc, lch, qf, gr, rate, bits_to_send;
    int tfcToUse = 0;

    /* how many TFCs, supposing that the first LCH is always used */
    int maxTFC = ((VECTOR_t &) tfcs[0]).Length();

    double score = 0, bonus_score = 0;
    double max_score = 0;
    double max_bonus_score = 0;

    for (tfc = 0; tfc < maxTFC; tfc++) {
        score = bonus_score = 0;
        rate = 0;

        /* loop through all logical channels of this MS for this TFC */
        for (lch = 0; lch < maxLch; lch++) {
            /* count the score for this TFC */
            int tbs  = ((INTVECTOR_t &) ((VECTOR_t &) tfcs[lch])[tfc])[0];
            int tbss = ((INTVECTOR_t &) ((VECTOR_t &) tfcs[lch])[tfc])[1];
            rate += tbss;

            qf = PuSizein[lch] * queueFillStateMemory[lch];
            gr = GuarRateVect[lch] / 100;
            bits_to_send = min(tbss, qf);

            /* Give score according to the real bits that can be sent, but
             * not for more than the guaranteed rate. */
            score += QoSin[lch] * min(gr, bits_to_send);

            /* If the real bits that can be sent exceed the guaranteed rate,
             * give a bonus score for the bits sent over the guaranteed rate. */
            if (bits_to_send >= gr) {
                bonus_score += QoSin[lch] * (bits_to_send - gr);
            }
        }
        if (score > max_score) {
            tfcToUse = tfc;
            max_score = score;
            max_bonus_score = bonus_score;
        } else if (score == max_score && bonus_score > max_bonus_score) {
            tfcToUse = tfc;
            max_bonus_score = bonus_score;
        }
    }
    return (tfcToUse);
}
The various principles and embodiment(s) of the present invention therefore describe and enable the provisioning of bandwidth allocation to entities in a communications system. With respect to embodiment(s) described hereinabove with reference to FIGS. 5 and 6, they provide fair queuing for a mixed service scenario in which it is desirable (or necessary) to multiplex several services to a single transport channel. This is typically necessary in a MAC-c entity, but it may also be beneficial in a MAC-d entity (e.g., in order to save transport channels). With respect to embodiment(s) described hereinabove with reference to FIGS. 7 and 8, they provide another alternative that can be especially advantageous if only limited memory is available (because the backlog memory is not necessary) or when fairness is not required. Even though these embodiment(s) are not fair, they still provide the guaranteed rate transfer rate to all service classes. Specifically, these embodiment(s) are optimized to provide best quality of service to flows having the highest QoS class(es), while still providing a minimum level of service to all flows. Furthermore, because no backlog memory need be updated each TTI, they can be faster to execute, even though they cannot guarantee fairness over the long run.
Although preferred embodiment(s) of the methods, systems, and arrangements of the present invention have been illustrated in the accompanying Drawings and described in the foregoing Detailed Description, it will be understood that the present invention is not limited to the embodiment(s) disclosed, but is capable of numerous rearrangements, modifications, and substitutions without departing from the spirit and scope of the present invention as set forth and defined by the following claims.

Claims

WHAT IS CLAIMED IS:
1. A scheduling method for providing bandwidth to entities in a communications system, comprising the steps of: calculating a first transfer rate for a plurality of flows; calculating a second transfer rate for said plurality of flows; ascertaining a quality of service (QoS) for each flow of said plurality of flows; and assigning bandwidth to each flow of said plurality of flows responsive to said first transfer rate, said second transfer rate, and said quality of service (QoS) for each flow of said plurality of flows.
2. The method according to Claim 1, wherein said first transfer rate comprises a guaranteed rate transfer rate, and said second transfer rate comprises a weighted fair queuing (WFQ) transfer rate.
3. The method according to Claim 1, wherein said first and second transfer rates comprise aggregated transfer rates over said plurality of flows.
4. A method for allocating channel resources in a communications system, comprising the steps of: calculating a guaranteed rate transfer rate for a plurality of flows; calculating a weighted fair queuing (WFQ) transfer rate for said plurality of flows; scoring three scores in three predetermined categories for each transport format combination in a transport format combination set; selecting a transport format combination index based on said three scores; and multiplexing a transport block set size associated with the selected transport format combination index using a three stage process.
5. A method according to Claim 4, wherein said step of calculating a guaranteed rate transfer rate for a plurality of flows comprises the step of summing a guaranteed rate of a transport channel and a guaranteed rate backlog, said guaranteed rate backlog capable of being zero.
6. A method according to Claim 4, wherein said step of calculating a weighted fair queuing (WFQ) transfer rate for said plurality of flows comprises the step of calculating said weighted fair queuing (WFQ) transfer rate responsive to a maximum rate of all transport channels, a maximum quality of service class, a sum of quality of service classes for all active flows of said plurality of flows, and a fair queuing rate backlog, said fair queuing rate backlog capable of being zero.
7. A method according to Claim 4, wherein said step of scoring three scores in three predetermined categories for each transport format combination comprises the steps of: determining a guaranteed rate score based on a sum of quality of service classes and a minimum between a transport block set size and said guaranteed rate transfer rate; determining a weighted fair queuing (WFQ) score based on said sum of quality of service classes and a minimum between said transport block set size and said weighted fair queuing (WFQ) transfer rate; and determining a bonus score responsive to, at least in part, said sum of quality of service classes, a total number of bits that may be transmitted from any transport channel, said transport block set size, said guaranteed rate transfer rate, and said weighted fair queuing (WFQ) transfer rate.
8. A method according to Claim 7, wherein said step of selecting a transport format combination index based on said three scores comprises the steps of: selecting a particular transport format combination index that corresponds to a maximum of said guaranteed rate score; if no single maximum of said guaranteed rate score exists, selecting a particular transport format combination index that corresponds to said maximum of said guaranteed rate score and to a maximum of said weighted fair queuing (WFQ) score; and if no single maximum of both said guaranteed rate score and said weighted fair queuing (WFQ) score exists, selecting a particular transport format combination index that corresponds to said maximum of said guaranteed rate score and to a maximum of said weighted fair queuing (WFQ) score and to a maximum of said bonus score.
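One illustrative, non-limiting way to realize the prioritized selection of Claims 7 and 8 in code is a lexicographic comparison of the three scores, as in the brief C++ sketch below. The structure and names (TfcScores, select_tfci) are assumptions introduced here for clarity, and the per-TFC scores are presumed to have been computed already; under one natural reading of Claim 7, the guaranteed rate score is a sum over the flows of the QoS class times the minimum of the transport block set size and the guaranteed rate transfer rate, and the WFQ score is formed analogously.

#include <cstddef>
#include <tuple>
#include <vector>

struct TfcScores {
    double gr_score;     // guaranteed rate score (Claim 7)
    double wfq_score;    // weighted fair queuing (WFQ) score
    double bonus_score;  // bonus score
};

// Lexicographic maximum: highest guaranteed rate score first, then the
// highest WFQ score, then the highest bonus score (one reading of Claim 8).
int select_tfci(const std::vector<TfcScores>& s) {
    int best = 0;
    for (std::size_t i = 1; i < s.size(); ++i) {
        if (std::tie(s[i].gr_score, s[i].wfq_score, s[i].bonus_score) >
            std::tie(s[best].gr_score, s[best].wfq_score, s[best].bonus_score))
            best = static_cast<int>(i);
    }
    return best;
}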
9. A method according to Claim 4, wherein said step of multiplexing a transport block set size associated with the selected transport format combination index using a three stage process comprises the step of: if said transport block set size is less than said guaranteed rate transfer rate, giving each flow its respective guaranteed rate in quality of service order until said transport block set size is exhausted; if, on the other hand, said transport block set size is greater than said guaranteed rate transfer rate, giving all flows their respective guaranteed rates and if said transport block set size is less than said weighted fair queuing (WFQ) transfer rate, giving each flow its respective fair share until said transport block set size is exhausted; and if, on the other hand, said transport block set size is greater than said weighted fair queuing (WFQ) transfer rate, giving all flows their respective fair shares, with at least one flow having a highest priority receiving excess bandwidth.
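As a rough illustration of the three stage multiplexing of Claim 9, the sketch below grants bandwidth in quality of service order: first guaranteed rates, then fair shares, then any excess to the highest priority flow(s) with data remaining. Granting in order until the transport block set size is exhausted makes the explicit comparisons of the claim unnecessary in code. The Flow structure, the function name distribute, and the per-TTI bit quantities are assumptions made for this sketch only.

#include <algorithm>
#include <cstddef>
#include <vector>

struct Flow {
    double qos;         // QoS class (higher means more important)
    int    guar;        // guaranteed rate share of this TTI, in bits
    int    fair;        // WFQ fair share of this TTI, in bits
    int    queued;      // bits waiting in this flow's queue
    int    granted = 0; // bits granted so far in this TTI
};

// Illustrative three stage split of a transport block set size (tbss, in bits).
void distribute(std::vector<Flow>& flows, int tbss) {
    std::vector<std::size_t> order(flows.size());
    for (std::size_t i = 0; i < order.size(); ++i) order[i] = i;
    // visit flows in descending QoS order
    std::sort(order.begin(), order.end(),
              [&](std::size_t a, std::size_t b) { return flows[a].qos > flows[b].qos; });

    auto grant = [&](Flow& f, int want) {
        int give = std::min({want, f.queued - f.granted, tbss});
        if (give > 0) { f.granted += give; tbss -= give; }
    };

    // Stage 1: guaranteed rates, in QoS order, until the TBSS is exhausted.
    for (auto i : order) grant(flows[i], flows[i].guar);
    // Stage 2: top flows up to their fair shares, again in QoS order.
    for (auto i : order) grant(flows[i], flows[i].fair - flows[i].granted);
    // Stage 3: any remaining bits go to the highest QoS flows with data left.
    for (auto i : order) grant(flows[i], flows[i].queued - flows[i].granted);
}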
10. A method according to Claim 4, further comprising the step of: updating a guaranteed rate backlog and a fair queuing rate backlog for each flow that has pending data to send and did not receive either its respective guaranteed rate or its respective fair share.
11. A scheduling system for providing bandwidth to entities in a communications system, comprising: means for calculating a first transfer rate for a plurality of flows; means for calculating a second transfer rate for said plurality of flows; means for ascertaining a quality of service (QoS) for each flow of said plurality of flows; and means for assigning bandwidth to each flow of said plurality of flows responsive to said first transfer rate, said second transfer rate, and said quality of service (QoS) for each flow of said plurality of flows.
12. A receiver entity for providing bandwidth to a plurality of transmitter entities in a communications system, comprising: a plurality of buffers, each buffer of said plurality of buffers being associated with a quality of service (QoS) level and including one or more packets; a scheduler, said scheduler in operative communication with said plurality of buffers to receive said quality of service (QoS) and said one or more packets from each buffer, each buffer transmitting at least one packet of said one or more packets in accordance with at least one instruction from said scheduler, said scheduler configured to: calculate a guaranteed rate transfer rate for said plurality of buffers; calculate a weighted fair queuing (WFQ) transfer rate for said plurality of buffers; and assign bandwidth via said at least one instruction to each buffer of said plurality of buffers responsive to said guaranteed rate transfer rate, said weighted fair queuing (WFQ) transfer rate, and said quality of service (QoS) associated with each buffer of said plurality of buffers.
13. A receiver entity according to Claim 12, wherein said scheduler is further configured to assign bandwidth responsive to a predetermined transport format combination set.
14. A receiver entity according to Claim 12, wherein the receiver entity comprises a portion of at least one of a radio network controller node and a user equipment.
15. A scheduling method for providing bandwidth to entities in a communications system, comprising the steps of: ascertaining an associated quality of service (QoS) for each flow of a plurality of flows; updating a guaranteed rate backlog memory and a weighted fair queuing (WFQ) backlog memory for each flow of said plurality of flows; calculating a guaranteed rate transfer rate for said plurality of flows; calculating a weighted fair queuing (WFQ) transfer rate for said plurality of flows; and assigning bandwidth to each flow of said plurality of flows responsive to (i) said guaranteed rate transfer rate; (ii) said weighted fair queuing (WFQ) transfer rate; and (iii) said guaranteed rate backlog memory, said weighted fair queuing (WFQ) backlog memory, and said associated quality of service (QoS) for said each flow of said plurality of flows.
16. A method according to Claim 15, further comprising the step of: scoring each bandwidth distribution option of a plurality of bandwidth distribution options responsive to (i) said associated quality of service (QoS) for said each flow of said plurality of flows and said guaranteed rate transfer rate and (ii) said associated quality of service (QoS) for said each flow of said plurality of flows and said weighted fair queuing (WFQ) transfer rate.
17. A method according to Claim 16, further comprising the step of: selecting a bandwidth distribution option from said plurality of bandwidth distribution options by placing a higher priority on scores determined responsive to said guaranteed rate transfer rate.
18. A method according to Claim 15, wherein said step of assigning bandwidth to each flow of said plurality of flows comprises the step of: allocating bandwidth from a selected bandwidth distribution option by giving said each flow of said plurality of flows its respective guaranteed rate transfer rate in an order determined by the respective associated quality of services (QoSs) of said each flow and by giving said each flow its respective weighted fair queuing (WFQ) transfer rate in an order determined by the respective associated quality of services (QoSs) of said each flow.
19. A method according to Claim 18, wherein said step of assigning bandwidth to each flow of said plurality of flows further comprises the step of: sending any remaining packets from said each flow of said plurality of flows in an order determined by the respective associated quality of services (QoSs) of said each flow.
20. A method according to Claim 15, wherein said step of updating a guaranteed rate backlog memory and a weighted fair queuing (WFQ) backlog memory for each flow of said plurality of flows comprises the step of: increasing said guaranteed rate backlog memory and said weighted fair queuing (WFQ) backlog memory for said each flow of said plurality of flows that was not permitted to transfer a number of packet or packets that equals each respective flow's guaranteed rate transfer rate and weighted fair queuing (WFQ) transfer rate, respectively, during a previous transmission time interval (TTI).
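The backlog updates of Claims 10 and 20 can be pictured as carrying a flow's shortfall into the next transmission time interval. The minimal C++ sketch below assumes per-flow bookkeeping fields (sent, guar, fair and the two backlogs, all in bits per TTI) that are introduced here purely for illustration, and the amount added (the shortfall) is one natural choice rather than something dictated by the claim language.

#include <vector>

struct FlowState {
    bool has_pending;  // data still queued after this TTI
    int  guar;         // guaranteed rate for one TTI, in bits
    int  fair;         // WFQ fair share for one TTI, in bits
    int  sent;         // bits actually transmitted this TTI
    int  gr_backlog  = 0;
    int  wfq_backlog = 0;
};

// Increase the backlogs of every flow that still has data pending but was
// not permitted to send its guaranteed rate or its fair share this TTI.
void update_backlogs(std::vector<FlowState>& flows) {
    for (FlowState& f : flows) {
        if (!f.has_pending) continue;
        if (f.sent < f.guar) f.gr_backlog  += f.guar - f.sent;
        if (f.sent < f.fair) f.wfq_backlog += f.fair - f.sent;
    }
}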
21. A method for allocating channel resources in a communications system, comprising the steps of: ascertaining a quality of service class for each logical channel of a plurality of logical channels; ascertaining a guaranteed rate for said each logical channel; ascertaining a queue fill level for said each logical channel; calculating a first and a second score for each of a plurality of transport format combinations of a transport format combination set; selecting a transport format combination of said plurality of transport format combinations that has a highest first score.
22. The method of Claim 21, further comprising the step of: if multiple transport format combinations of said plurality of transport format combinations have an equally high first score, selecting a transport format combination from said multiple transport format combinations that has a highest second score.
23. The method of Claim 21, wherein said step of ascertaining a quality of service class for each logical channel of a plurality of logical channels comprises the step of analyzing one or more radio bearer parameters.
24. The method of Claim 21, wherein said step of ascertaining a guaranteed rate for said each logical channel comprises the step of analyzing one or more radio bearer parameters.
25. The method of Claim 21, wherein said step of ascertaining a queue fill level for said each logical channel comprises the step of obtaining a number of protocol data units for each logical channel from a radio link control entity.
26. The method of Claim 21, wherein said step of calculating said first score comprises the steps of: determining a logical channel score responsive to a quality of service class and a minimum of a transport block set size, a guaranteed rate, and a queue fill level; repeating said step of determining a logical channel score for each of a plurality of logical channels of a transport format combination; and determining said first score by summing a plurality of logical channel scores that correspond to said plurality of logical channels.
27. The method of Claim 21, wherein said step of calculating said second score comprises the steps of: determining a bonus score responsive to a quality of service class, a guaranteed rate, and a minimum of a transport block set size and a queue fill level if said minimum is greater than said guaranteed rate, and determining said bonus score to be zero if said minimum is not greater than said guaranteed rate; repeating said step of determining a bonus score for each of a plurality of logical channels of a transport format combination; and determining said second score by summing a plurality of bonus scores that correspond to said plurality of logical channels.
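By way of a purely illustrative numeric example of Claims 26 and 27 (the values are invented here): suppose a transport format combination offers a logical channel of quality of service class 3 a transport block set size of 600 bits, the channel's guaranteed rate is 300 bits per TTI, and 1200 bits are queued. The minimum of the transport block set size, the guaranteed rate, and the queue fill level is 300 bits, so the logical channel score is 3 * 300 = 900. Because the minimum of the transport block set size and the queue fill level (600 bits) exceeds the guaranteed rate, the bonus score is 3 * (600 - 300) = 900. Summing such per-channel values over all logical channels of the combination yields the first and second scores for that combination.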
28. A scheduling method for providing bandwidth to entities in a communications system, comprising the steps of: ascertaining a quality of service (QoS) class that is associated with each channel of a plurality of channels; ascertaining a guaranteed rate transmission rate for said each channel; ascertaining a queue fill level of a queue that corresponds to said each channel; calculating a first score for said each channel responsive to said quality of service (QoS) class, said guaranteed rate transmission rate, and said queue fill level.
29. A method according to Claim 28, further comprising the step of: calculating a second score for said each channel responsive to said guaranteed rate transmission rate and said queue fill level.
30. A method according to Claim 29, further comprising the steps of: repeating said steps of calculating a first score and calculating a second score for each bandwidth distribution option of a plurality of bandwidth distribution options; calculating a third score and a fourth score responsive to said first score and said second score, respectively, and based on said step of repeating.
31. A method according to Claim 30, wherein said step of calculating a third score and a fourth score further comprises the steps of: calculating said third score by summing a number of first scores and calculating said fourth score by summing said number of second scores, said number corresponding to the number of bandwidth distribution options of said plurality of bandwidth distribution options.
32. A method according to Claim 30, further comprising the steps of: selecting at least one bandwidth distribution option from said plurality of bandwidth distribution options by determining a highest third score and said at least one bandwidth distribution option corresponding thereto; and if said at least one bandwidth distribution option corresponds to more than one bandwidth distribution option, selecting a bandwidth distribution option from said at least one bandwidth distribution option that corresponds to a highest fourth score.
33. A scheduling system for providing bandwidth to entities in a communications system, comprising: means for ascertaining a quality of service (QoS) class that is associated with each channel of a plurality of channels; means for ascertaining a guaranteed rate transmission rate for said each channel; means for ascertaining a queue fill level of a queue that corresponds to said each channel; means for calculating a first score for said each channel responsive to said quality of service (QoS) class, said guaranteed rate transmission rate, and said queue fill level; and means for calculating a second score for said each channel responsive to said guaranteed rate transmission rate and said queue fill level.
34. A receiver entity for providing bandwidth to a plurality of transmitter entities in a communications system, comprising: a plurality of buffers, each buffer of said plurality of buffers being associated with a quality of service (QoS) level and a guaranteed rate transfer rate, said each buffer including one or more packets defining a queue fill level; a scheduler, said scheduler in operative communication with said plurality of buffers to receive said quality of service (QoS), said guaranteed rate transfer rate, and said queue fill level, said each buffer transmitting at least one packet of said one or more packets in accordance with at least one instruction from said scheduler, said scheduler configured to: calculate a channel score responsive to said quality of service (QoS), said guaranteed rate transfer rate, and said queue fill level for said each buffer; calculate a channel bonus score responsive to said guaranteed rate transfer rate and said queue fill level for said each buffer; and assign bandwidth via said at least one instruction to said each buffer of said plurality of buffers responsive to said channel score for said each buffer and said channel bonus score for said each buffer.
35. A receiver entity according to Claim 34, wherein said scheduler is further configured to assign bandwidth responsive to a plurality of transport block set sizes (TBSSs) from a predetermined bandwidth distribution option set.
36. A receiver entity according to Claim 34, wherein the receiver entity comprises a portion of at least one of a radio network controller node and a user equipment.
37. A scheduling method for providing bandwidth to entities in a communications system, comprising the steps of: calculating a plurality of first scores, each first score of said plurality of first scores corresponding to a bandwidth distribution option of a plurality of bandwidth distribution options, said plurality of first scores calculated responsive to Quality of Service (QoS) levels; calculating a plurality of second scores, each second score of said plurality of second scores corresponding to a bandwidth distribution option of said plurality of bandwidth distribution options; determining whether there is a highest first score from among said plurality of first scores; if so, selecting the bandwidth distribution option corresponding to said highest first score; if not, identifying a group of second scores from said plurality of second scores that correspond to a group of first scores that are higher than all other first scores of said plurality of first scores; and selecting the bandwidth distribution option corresponding to a highest second score from said group of second scores.
38. A method according to Claim 37, further comprising the step of: distributing bandwidth to said entities in accordance with the selected bandwidth distribution option.
39. A method according to Claim 37, wherein each of said bandwidth distribution options comprises a transport format combination and said plurality of bandwidth distribution options comprises a transport format combination set.
40. A method for allocating channel resources in a communications system, comprising the steps of: ascertaining a quality of service class for each logical channel of a plurality of logical channels; ascertaining a guaranteed rate for said each logical channel of said plurality of logical channels; and assigning bandwidth to said each logical channel of said plurality of logical channels responsive to said quality of service class and said guaranteed rate for respective ones of said each logical channel of said plurality of logical channels.
41. A method according to Claim 40, further comprising the step of: ascertaining a queue fill level for said each logical channel of said plurality of logical channels; and wherein said step of assigning bandwidth to said each logical channel of said plurality of logical channels responsive to said quality of service class and said guaranteed rate for respective ones of said each logical channel of said plurality of logical channels comprises the step of assigning bandwidth to said each logical channel of said plurality of logical channels responsive to said queue fill level for respective ones of said each logical channel of said plurality of logical channels.
42. A method according to Claim 41, wherein said step of ascertaining a queue fill level for said each logical channel of said plurality of logical channels comprises the step of obtaining a number of protocol data units for said each logical channel of said plurality of logical channels from a predetermined entity.
43. A method according to Claim 40, wherein said each logical channel of said plurality of logical channels comprises an information flow.
44. A method according to Claim 40, wherein said step of ascertaining a quality of service class for each logical channel of a plurality of logical channels comprises the step of analyzing at least one radio bearer parameter.
45. A method according to Claim 40, wherein said step of ascertaining a guaranteed rate for said each logical channel of said plurality of logical channels comprises the step of analyzing at least one radio bearer parameter.
46. A method according to Claim 40, wherein said step of ascertaining a guaranteed rate for said each logical channel of said plurality of logical channels comprises the step of ascertaining said guaranteed rate for said each logical channel of said plurality of logical channels responsive to a respective ratio of fair queuing rates.
47. A method according to Claim 46, wherein said respective ratio of fair queuing rates comprises a ratio of a respective fair queuing rate for a respective logical channel of said plurality of logical channels to a total fair queuing rate of said plurality of logical channels.
48. A method according to Claim 40, wherein said step of ascertaining a guaranteed rate for said each logical channel of said plurality of logical channels comprises the step of ascertaining said guaranteed rate for said each logical channel of said plurality of logical channels responsive to a maximum possible transmission rate.
49. A method according to Claim 40, wherein said step of ascertaining a guaranteed rate for said each logical channel of said plurality of logical channels comprises the step of ascertaining said guaranteed rate for said each logical channel of said plurality of logical channels in accordance with the following equation: rate_i = weight_i / (sum_of_all_active_weights) * maximum_rate, where said rate_i comprises a respective said guaranteed rate for said each logical channel of said plurality of logical channels, said weight_i comprises a respective fair queuing rate for said each logical channel of said plurality of logical channels, said sum_of_all_active_weights comprises a sum of said respective fair queuing rates for said each logical channel for all active logical channels of said plurality of logical channels, and said maximum_rate comprises a maximum possible transmission rate.
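As a purely illustrative numeric example of this equation (the figures are invented here): with three active logical channels having fair queuing weights of 1, 2, and 5 and a maximum possible transmission rate of 384 kbps, the sum of all active weights is 8, so the channels obtain guaranteed rates of (1/8) * 384 = 48 kbps, (2/8) * 384 = 96 kbps, and (5/8) * 384 = 240 kbps, which together exhaust the maximum rate.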
50. A method for allocating channel resources in a communications system, comprising the steps of: ascertaining a quality of service class for each logical channel of a plurality of logical channels; ascertaining a guaranteed rate for said each logical channel of said plurality of logical channels based, at least in part, on an equation, said equation including a product of a maximum rate and a ratio, said ratio being a quotient of a weight of a respective one of said each logical channel of said plurality of logical channels and a total weight of said plurality of logical channels; and assigning bandwidth to respective ones of said each logical channel of said plurality of logical channels responsive to said quality of service class and said guaranteed rate for said respective ones of said each logical channel of said plurality of logical channels.
51. A method according to Claim 50, wherein said each logical channel of said plurality of logical channels comprises an information flow.
52. A method according to Claim 50, further comprising the step of: ascertaining a queue fill level for said each logical channel of said plurality of logical channels; and wherein said step of assigning bandwidth to respective ones of said each logical channel of said plurality of logical channels responsive to said quality of service class and said guaranteed rate for said respective ones of said each logical channel of said plurality of logical channels comprises the step of assigning bandwidth to said respective ones of said each logical channel of said plurality of logical channels responsive to said queue fill level for said respective ones of said each logical channel of said plurality of logical channels.
PCT/SE2001/000406 2000-02-25 2001-02-23 Packet scheduling in umts using several calculated transfer rates WO2001063855A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
AU2001236302A AU2001236302A1 (en) 2000-02-25 2001-02-23 Packet scheduling in umts using several calculated transfer rates
EP01908560A EP1264445A1 (en) 2000-02-25 2001-02-23 Packet scheduling in umts using several calculated transfer rates
FI20070077U FI7776U1 (en) 2000-02-25 2007-02-26 Packet scheduling in UMTS using multiple calculated transfer rates

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US18500500P 2000-02-25 2000-02-25
US60/185,005 2000-02-25
US09/698,785 US6850540B1 (en) 1999-10-28 2000-10-27 Packet scheduling in a communications system
US09/698,785 2000-10-27

Publications (1)

Publication Number Publication Date
WO2001063855A1 (en) 2001-08-30

Family

ID=26880688

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2001/000406 WO2001063855A1 (en) 2000-02-25 2001-02-23 Packet scheduling in umts using several calculated transfer rates

Country Status (4)

Country Link
EP (1) EP1264445A1 (en)
AU (1) AU2001236302A1 (en)
FI (1) FI7776U1 (en)
WO (1) WO2001063855A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0859492A2 (en) * 1997-02-07 1998-08-19 Lucent Technologies Inc. Fair queuing system with adaptive bandwidth redistribution
US5914950A (en) * 1997-04-08 1999-06-22 Qualcomm Incorporated Method and apparatus for reverse link rate scheduling
EP1030484A2 (en) * 1999-01-29 2000-08-23 Nortel Networks Corporation Data link layer quality of service for UMTS

Cited By (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SG108277A1 (en) * 2000-09-28 2005-01-28 Ntt Docomo Inc Wireless communication apparatus and wireless channel assignment method
US7130636B2 (en) 2000-09-28 2006-10-31 Ntt Docomo, Inc. Wireless communication apparatus and wireless channel assignment method
US7872980B2 (en) 2000-12-29 2011-01-18 Nokia Corporation Determination of a bit rate
WO2002054685A1 (en) * 2000-12-29 2002-07-11 Nokia Corporation Determination of bit rate
US7355972B2 (en) 2001-05-08 2008-04-08 Huawei Technologies Co., Ltd. Scheduling method of realizing the quality of service of router in integrated service
WO2002091757A1 (en) * 2001-05-08 2002-11-14 Huawei Technologies Co., Ltd. A scheduling method of realizing the quality of service of router in integrated service
GB2381996B (en) * 2001-10-30 2004-10-06 Hewlett Packard Co Network bandwidth optimization method and system
GB2381996A (en) * 2001-10-30 2003-05-14 Hewlett Packard Co Network bandwidth optimization method and system
WO2003085903A1 (en) * 2002-04-09 2003-10-16 Nokia Corporation Packet scheduling of real time packet data
WO2003098881A1 (en) * 2002-05-15 2003-11-27 Nokia Corporation Method for establishing an l2cap channel dedicated for data flow transmission in bluetooth networks
GB2390779B (en) * 2002-07-12 2006-02-22 Fujitsu Ltd Packet scheduling
GB2390779A (en) * 2002-07-12 2004-01-14 Fujitsu Ltd Packet scheduling
US11229032B2 (en) 2002-07-15 2022-01-18 Wi-Lan Inc. Apparatus, system and method for the transmission of data with different QoS attributes
US10779288B2 (en) 2002-07-15 2020-09-15 Wi-Lan Inc. Apparatus, system and method for the transmission of data with different QoS attributes
EP1554820A2 (en) * 2002-10-24 2005-07-20 Motorola, Inc. Method and apparatus for wirelessly communicating different information streams
WO2004038976A2 (en) 2002-10-24 2004-05-06 Motorola, Inc. Method and apparatus for wirelessly communicating different information streams
EP1554820B1 (en) * 2002-10-24 2013-11-27 Motorola Mobility LLC Method and apparatus for wirelessly communicating different information streams
CN100373837C (en) * 2003-05-21 2008-03-05 华为技术有限公司 Transmission form combined set configuration method used for code division multiple access communication system
EP1551202A3 (en) * 2003-12-31 2006-06-14 STMicroelectronics Asia Pacific Pte Ltd System and method for selecting an optimal transport format combination using progressive candidate set reduction
EP1551202A2 (en) * 2003-12-31 2005-07-06 STMicroelectronics Asia Pacific Pte Ltd System and method for selecting an optimal transport format combination using progressive candidate set reduction
US7525925B2 (en) 2003-12-31 2009-04-28 Stmicroelectronics Asia Pacific Pte. Ltd. System and method for selecting an optimal transport format combination using progressive set reduction
US9942878B2 (en) 2004-01-09 2018-04-10 Intel Corporation Transport format combination selection in a wireless transmit/receive unit
TWI392256B (en) * 2004-01-09 2013-04-01 Intel Corp Transport format combination selection in a wireless transmit/receive unit
WO2006037492A1 (en) * 2004-10-01 2006-04-13 Matsushita Electric Industrial Co. Ltd. Quality-of-service (qos)-aware scheduling for uplink transmission on dedicated channels
EP1643690A1 (en) * 2004-10-01 2006-04-05 Matsushita Electric Industrial Co., Ltd. Quality-of-Service (QoS)-aware scheduling for uplink transmissions on dedicated channels
EP1892901A3 (en) * 2004-10-01 2011-07-13 Panasonic Corporation Quality-of-service (qos)-aware scheduling for uplink transmission on dedicated channels
US7948936B2 (en) 2004-10-01 2011-05-24 Panasonic Corporation Quality-of-service (QoS)-aware scheduling for uplink transmission on dedicated channels
CN100369502C (en) * 2004-11-19 2008-02-13 大唐移动通信设备有限公司 Direct proportion fair dispatch method for base station selective service terminal
US7362726B2 (en) 2004-12-15 2008-04-22 Matsushita Electric Industrial Co., Ltd. Support of guaranteed bit-rate traffic for uplink transmissions
US7899011B2 (en) 2004-12-15 2011-03-01 Panasonic Corporation Support of guaranteed bit-rate traffic for uplink transmissions
EP1672941A1 (en) * 2004-12-15 2006-06-21 Matsushita Electric Industrial Co., Ltd. Support of guaranteed bit-rate traffic for uplink transmissions
WO2006063642A1 (en) * 2004-12-15 2006-06-22 Matsushita Electric Industrial Co., Ltd. Support of guaranteed bit-rate traffic for uplink transmissions
EP1677463A1 (en) * 2004-12-30 2006-07-05 Research In Motion Limited Method and apparatus for selecting a transport format combination
US7630316B2 (en) 2004-12-30 2009-12-08 Research In Motion Limited Method and apparatus for selecting a transport format combination
JP2012075094A (en) * 2005-06-27 2012-04-12 Qualcomm Inc Block-based assignment of quality of service precedence value
WO2007002723A3 (en) * 2005-06-27 2007-05-10 Qualcomm Inc Block-based assignment of quality of service precedence values
WO2007002723A2 (en) * 2005-06-27 2007-01-04 Qualcomm Incorporated Block-based assignment of quality of service precedence values
US7826418B2 (en) 2005-06-27 2010-11-02 Qualcomm Incorporated Block-based assignment of quality of service precedence values
WO2008011604A3 (en) * 2006-07-21 2008-05-22 Qualcomm Inc Efficiently assigning precedence values to new and existing qos filters
US7870231B2 (en) 2006-07-21 2011-01-11 Qualcomm Incorporated Efficiently assigning precedence values to new and existing QoS filters
WO2010107348A1 (en) * 2009-03-19 2010-09-23 Telefonaktiebolaget L M Ericsson (Publ) Hspa relative bit-rate aimd-based qos profiling
US8964551B2 (en) 2009-03-19 2015-02-24 Telefonaktiebolaget L M Ericsson (Publ) HSPA relative bit-rate AIMD-based QoS profiling
US8687576B2 (en) 2010-09-03 2014-04-01 Telefonaktiebolaget Lm Ericsson (Publ) Dynamic bandwidth allocation control in a multi-access radio communication system
WO2012030271A1 (en) * 2010-09-03 2012-03-08 Telefonaktiebolaget L M Ericsson (Publ) Scheduling multiple users on a shared communication channel in a wireless communication system
CN110958503A (en) * 2019-12-03 2020-04-03 锐捷网络股份有限公司 Bandwidth distribution device and method
CN110958503B (en) * 2019-12-03 2022-03-18 锐捷网络股份有限公司 Bandwidth distribution device and method
FR3106710A1 (en) * 2020-01-28 2021-07-30 Naval Group DATA FLOW EXCHANGE MANAGEMENT MODULE IN AN EXCHANGE ARCHITECTURE FOR MOBILE DEVICE TRAINING
WO2021151994A1 (en) * 2020-01-28 2021-08-05 Naval Group Module for managing exchanges of data streams in an exchange architecture for a formation of mobile machines

Also Published As

Publication number Publication date
AU2001236302A1 (en) 2001-09-03
EP1264445A1 (en) 2002-12-11
FIU20070077U0 (en) 2007-02-26
FI7776U1 (en) 2008-02-29

Similar Documents

Publication Publication Date Title
US6850540B1 (en) Packet scheduling in a communications system
US6826193B1 (en) Data transmission in a telecommunications network
EP1264445A1 (en) Packet scheduling in umts using several calculated transfer rates
WO2001063856A1 (en) Flow control between transmitter and receiver entities in a communications system
US7031254B2 (en) Rate control system and method for a link within a wireless communications system
KR101012683B1 (en) Video packets over a wireless link under varying delay and bandwidth conditions
EP2227885B1 (en) Compressed buffer status reports in lte
US8837285B2 (en) Method and apparatus for initializing, preserving, and reconfiguring token buckets
US6459687B1 (en) Method and apparatus for implementing a MAC coprocessor in a communication system
US6879561B1 (en) Method and system for wireless packet scheduling with per packet QoS support and link adaptation
US7039013B2 (en) Packet flow control method and device
JP4549599B2 (en) Method and apparatus for synchronizing and transmitting data in a wireless communication system
JP3866963B2 (en) Method and system for scheduling multiple data flows to coordinate quality of service in a CDMA system
US9642156B2 (en) Transmitting radio node and method therein for scheduling service data flows
JP2007507934A (en) Harmonized data flow control and buffer sharing in UMTS
US20090059929A1 (en) Scheduling method and apparatus for high speed video stream service in communication system
EP1209940A1 (en) Method and system for UMTS packet transmission scheduling on uplink channels
US6961589B2 (en) Method of transmitting between a base station in an access network and an access network controller of a telecommunications system
US20110047271A1 (en) Method and system for allocating resources
EP1264447A1 (en) Overload handling in a communications system
WO2008066345A1 (en) Packet scheduler and packet scheduling method

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 2001908560

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2001908560

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: JP

WWE Wipo information: entry into national phase

Ref document number: U20070077

Country of ref document: FI

WWW Wipo information: withdrawn in national office

Ref document number: 2001908560

Country of ref document: EP