US20140341025A1 - Systems and methods for dynamic congestion management in communications networks - Google Patents

Systems and methods for dynamic congestion management in communications networks

Info

Publication number
US20140341025A1
US20140341025A1 (application US 14/310,671)
Authority
US
United States
Prior art keywords
node
congested
determining
response
traffic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/310,671
Inventor
Robert E. Denman
Frederick C. Kemmerer
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ribbon Communications Operating Co Inc
Original Assignee
Genband US LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Genband US LLC filed Critical Genband US LLC
Priority to US 14/310,671
Publication of US20140341025A1
Assigned to GENBAND INC. reassignment GENBAND INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DENMAN, ROBERT E., KEMMERER, FREDERICK C.
Assigned to GENBAND US LLC reassignment GENBAND US LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GENBAND INC.
Assigned to SILICON VALLEY BANK, AS ADMINISTRATIVE AGENT reassignment SILICON VALLEY BANK, AS ADMINISTRATIVE AGENT PATENT SECURITY AGREEMENT Assignors: GENBAND US LLC
Assigned to SILICON VALLEY BANK, AS ADMINISTRATIVE AGENT reassignment SILICON VALLEY BANK, AS ADMINISTRATIVE AGENT CORRECTIVE ASSIGNMENT TO CORRECT PATENT NO. 6381239 PREVIOUSLY RECORDED AT REEL: 039269 FRAME: 0234. ASSIGNOR(S) HEREBY CONFIRMS THE PATENT SECURITY AGREEMENT. Assignors: GENBAND US LLC
Assigned to GENBAND US LLC reassignment GENBAND US LLC TERMINATION AND RELEASE OF PATENT SECURITY AGREEMENT Assignors: SILICON VALLEY BANK, AS ADMINISTRATIVE AGENT
Assigned to RIBBON COMMUNICATIONS OPERATING COMPANY, INC. reassignment RIBBON COMMUNICATIONS OPERATING COMPANY, INC. MERGER (SEE DOCUMENT FOR DETAILS). Assignors: GENBAND US LLC
Legal status: Abandoned


Classifications

    • H04L 47/22: Traffic shaping (flow control; congestion control in data switching networks)
    • H04L 41/0816: Configuration setting characterised by the conditions triggering a change of settings, the condition being an adaptation, e.g. in response to network events
    • H04L 43/08: Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L 43/0882: Utilisation of link capacity
    • H04W 28/0226: Traffic management, e.g. flow control or congestion control, based on location or mobility
    • H04L 43/16: Threshold monitoring
    • H04L 47/127: Avoiding congestion; recovering from congestion by using congestion prediction
    • H04L 47/20: Traffic policing
    • H04L 47/2416: Real-time traffic

Definitions

  • the presently disclosed subject matter relates to communications networks. Particularly, the presently disclosed subject matter relates to dynamic congestion management in communications networks.
  • systems and methods for dynamic congestion management in communications networks may, during times of congestion, dynamically limit bandwidth usage of subscribers using more than their fair share of bandwidth.
  • Such congestion management may be dynamically or automatically implemented so as to address congestion at various points in a network hierarchy.
  • a method can include determining traffic statistics of at least one node in a communications network. The method can also include determining whether the at least one node is congested based on the traffic statistics. Further, the method can include dynamically changing or provisioning a traffic shaping rule for application to the at least one node in response to determining that the at least one node is congested.
  • the presently disclosed subject matter provides: automated detection and mitigation of network congestion by dynamic provisioning of DPI-enabled traffic management policies, so as to address network congestion events in a more timely fashion; and automated evaluation of (possibly repeated) dynamic policy changes and their effectiveness, so as to expedite the provisioning of any necessary, corresponding statically provisioned policies that may mitigate future congestion events in real time.
  • FIG. 1 is a block diagram of an exemplary communications network 100 in which the presently disclosed subject matter may be deployed for dynamic congestion management in accordance with embodiments of the present disclosure
  • FIG. 2 is a flowchart of an example method for dynamic congestion management in accordance with embodiments of the present disclosure
  • FIG. 3 is a flowchart of another example method for dynamic congestion management in accordance with embodiments of the present disclosure
  • FIG. 4 is a block diagram of an example system for dynamic congestion management in accordance with an embodiment of the present disclosure
  • FIG. 5 is a block diagram of an example system 500 for tapping into GGSN signaling according to embodiments of the present disclosure
  • FIG. 6 is a graph showing throughput-capacity utilization over a period of time
  • FIG. 7 is a block diagram of an example network hierarchy showing nodes identified as targets for dynamic congestion policies according to embodiments of the present disclosure
  • FIG. 8 is a graph showing an example of monitoring of throughput-capacity utilization instigating hysteresis techniques to back out a dynamic policy change according to embodiments of the present disclosure.
  • FIG. 9 is a block diagram of an example system for tuning policies according to embodiments of the present disclosure.
  • FIG. 1 illustrates a block diagram of an exemplary communications network 100 in which the presently disclosed subject matter may be deployed for dynamic congestion management in accordance with embodiments of the present disclosure.
  • a DPI module 102 having a TPM function 104 is deployed behind a GGSN 106 on Gi (GGSN-to-PDN (public data network) interface).
  • the TPM policy of the TPM function 104 is not controlled by a policy, charging, and rules function (PCRF).
  • Various embodiments for dynamically managing congestion at one or more nodes in accordance with the present disclosure may be implemented by the TPM function 104 of the DPI module 102; however, any other suitable function of another suitable device or component may be used for dynamically managing congestion at one or more nodes.
  • the network 100 includes various other communications networks such as, but not limited to, the Internet 108 , a packet core network 110 , and a radio access network (RAN) 112 .
  • Computing devices 114 - 128 may utilize the Internet 108 , the packet core network 110 , and the RAN 112 for accessing various computing services or content.
  • the Internet 108 may be communicatively connected to servers 130 - 138 that are configured to provide computing services to devices such as the computing devices 114 - 128 .
  • the server 130 may provide a video subscription service
  • the server 132 may provide an Internet search service
  • the server 134 may provide a peer-to-peer file-transfer service
  • the server 136 may provide a video sharing service
  • the server 138 may provide a video subscription service.
  • Network traffic between the computing devices 114 - 128 and the servers 130 - 138 may be managed and handled by nodes of the Internet 108 , the packet core network 110 , and the RAN 112 .
  • the Internet may include various network nodes for handling the transmission of data between the servers 130 - 138 and the GGSN 106 .
  • the packet core network 110 may include network nodes for handling the transmission of data between the GGSN 106 and serving GPRS support nodes (SGSNs) 140 , which may communicate with radio network controllers (RNCs) 142 for the transmission of data.
  • the RAN 112 may include backhaul network nodes for handling the transmission of data between the RNCs 142 and NodeBs 144 .
  • Each RNC 142 is configured to control one or more NodeBs 144 that are connected to it. These networks and nodes may be targeted for dynamic congestion management of network traffic between the computing devices 114-128 and the servers 130-138 or other components in the network 100 in accordance with embodiments of the present disclosure.
  • examples described herein involve a mobile communications network; however, any other suitable communications network may be used to implement system and method embodiments of the presently disclosed subject matter.
  • the presently disclosed subject matter may also be applied to fixed broadband (e.g., DSL technologies (xDSL), fiber-to-the-home (FTTH), and the like), cable networks, or any other suitable type of communications network.
  • a computing device should be broadly construed. It can include any type of device capable of communicating with other devices, network nodes, and/or networks.
  • a computing device may be a mobile device such as, for example, but not limited to, a smart phone, a feature (cell) phone, a pager, a personal digital assistant (PDA), a tablet, a mobile computer, or some other device with a wireless or cellular network interface card (NIC).
  • a computing device can also include any type of conventional computer, for example, a desktop computer or a laptop computer.
  • a typical mobile computing device is a wireless data access-enabled device (e.g., an iPHONE® smart phone, a BLACKBERRY® smart phone, a NEXUS ONE™ smart phone, an iPAD® device, or the like) that is capable of sending and receiving data in a wireless manner using protocols like the Internet Protocol, or IP, or the wireless application protocol, or WAP.
  • Wireless data access is supported by many wireless networks, including, but not limited to, CDPD, CDMA, GSM, PDC, PHS, TDMA, FLEX, ReFLEX, iDEN, TETRA, DECT, DataTAC, Mobitex, EDGE, UMTS, HSPA, WiMAX, LTE, LTE Advanced, and other 2G, 3G and 4G technologies, and it operates with many handheld device operating systems, such as PalmOS, EPOC, Windows CE, FLEXOS, OS/9, JavaOS, iOS and Android.
  • these devices use graphical displays and can access the Internet (or other communications network) on so-called mini- or micro-browsers, which are web browsers optimized for small displays and which may accommodate the reduced memory constraints of many wireless devices.
  • the mobile device is a cellular telephone or smart phone that operates over GPRS, which is a data technology for GSM networks.
  • a given mobile device can communicate with another such device via many different types of message transfer techniques, including SMS (short message service), enhanced SMS (EMS), multi-media message (MMS), email, WAP, paging, or other known or later-developed wireless data formats.
  • FIG. 2 illustrates a flowchart of an example method for dynamic congestion management in accordance with embodiments of the present disclosure.
  • This method may be implemented, for example, by the TPM function 104 of the system 100 shown in FIG. 1 during times of network traffic congestion, for periodically and dynamically augmenting traffic-management policies.
  • this method may be partially or entirely automated by the systems and devices described herein.
  • this method may be implemented by any suitable component or node, such as a DPI system, configured for dynamic provisioning of congestion management policies as disclosed herein.
  • the DPI module may be implemented by one or more other components such as, but not limited to, a DPI engine, a statistics storage unit, and a subscriber manager.
  • the subscriber manager may associate user identities, serving network nodes and device types with the IP addresses of computing devices.
  • the method includes determining traffic statistics of one or more nodes in a communications network (step 200 ).
  • the TPM function 104 of the DPI module 102 may determine traffic statistics including, but not limited to, a QoE score for the aggregate of one or more computing devices being served by a given network node, such as computing devices 114 , 116 , and 118 served by NodeB 144 .
  • the QoE score may be derived from detected packet drops and retransmissions occurring in the context of Transmission Control Protocol (TCP) connections with computing devices 114, 116, and 118.
  • TPM function 104 may determine at least one of aggregate nodal downlink bandwidth, aggregate nodal uplink bandwidth, or aggregate nodal downlink and uplink bandwidth of traffic exchanged with computing devices 114, 116, and 118. Collecting statistics for nodal QoE scores or nodal aggregate bandwidth may prove useful in subsequently inferring nodal congestion.
  • nodal bandwidth statistics may be collected per user, per application, per device, or per some combination of the preceding.
  • nodal bandwidth statistics may be collected per application per user.
  • the method of FIG. 2 includes determining whether the node(s) are congested based on the traffic statistics (step 202). For instance, TPM function 104 may determine that one of the NodeBs 144 is congested as depicted in FIG. 1. Continuing an aforementioned example, the TPM function 104 may determine that the node's aggregate QoE score falls below a predefined threshold, and thereby ascertain that the node is likely congested. In another example, the TPM function 104 may assess the aggregate bandwidth of traffic exchanged with computing devices served by a network node against an engineered link capacity for this node. Such an engineered capacity is hereinafter referred to as a link-utilization threshold. If the aggregate bandwidth exceeds a provisioned, nodal, link-utilization threshold, TPM function 104 may infer that the node is congested.
  • the method of FIG. 2 includes dynamically changing or provisioning a set of one or more traffic shaping rules for application to the node(s) in response to determining that the node(s) are congested.
  • the TPM function 104 may dynamically change or provision a traffic shaping rule for application to the congested NodeB 144 in response to determining that the NodeB is congested.
  • the traffic shaping rules applied to manage nodal congestion may vary in consideration of at least one of service plans associated with users that are served by the congested node, limitations on congestion management permitted by the regulatory environment in which the node resides, and operator-specific policies.
  • TPM function 104 may shape nodal traffic of lower-tier (e.g., “bronze”) users per their service plan before iteratively impacting the traffic of higher-tier (e.g., “silver” and “gold”) users.
  • TPM function 104 may selectively throttle other traffic to ensure that the user enjoys an optimal gaming or VoIP experience.
  • TPM function 104 may during nodal congestion shape traffic of applications which do not directly contribute to the network operator's revenues, in order to provide a better QoE for applications which do contribute to the operator's revenues.
  • TPM function 104 shaping traffic of applications which are less sensitive to delay or packet loss, in order to provide a better QoE for users of applications which are more sensitive.
  • TPM function 104 may shape peer-to-peer (P2P) and other file-transfer traffic, as well as downloads of software updates, in order to provide other users with a better VoIP and web browsing experience.
  • TPM function 104 applying traffic-shaping rules to drop packets of high-definition (HD) streamed video destined for computing devices with low-resolution displays, so as to trigger feedback from the computing device to the video head-end, such that lower-definition video is streamed, the end points assuming network congestion.
  • TPM function 104 may enforce “usage fairness” traffic-shaping policies that are application-agnostic but which effectively target users that use more than their “fair share” of bandwidth during nodal congestion, so as to limit the detrimental impact of such “bandwidth abusive” users on other users' QoE.
  • FIG. 3 illustrates a flowchart of another example method for dynamic congestion management in accordance with embodiments of the present disclosure.
  • This method may be implemented, for example, by the TPM function 104 of the system 100 shown in FIG. 1 during times of network traffic congestion for periodically and dynamically augmenting traffic-management policies.
  • this method may be partially or entirely automated by the systems and devices described herein.
  • this method may be implemented by any suitable component or node, such as a DPI system, configured for dynamic provisioning of congestion management policies as disclosed herein.
  • the method includes implementing statically provisioned, baseline traffic-shaping policies (step 300 ).
  • the TPM function 104 shown in FIG. 1 may implement a baseline traffic management policy such as, but not limited to, one of the following: ensuring that downlink traffic's bandwidth does not exceed the network's engineered downlink bandwidth capacity, dropping (policing) traffic associated with illegal peer-to-peer file sharing, or prioritizing premium (e.g., “gold”) users' traffic over others' traffic.
  • the method of FIG. 3 includes periodically auditing nodal traffic statistics (step 302 ).
  • the TPM function 104 may audit nodal traffic statistics every 15 minutes as described for step 202 of FIG. 2 .
  • every 15 minutes TPM function 104 may examine nodal aggregate QoE scores or the aggregate traffic bandwidth supported by network nodes.
  • the method of FIG. 3 includes determining whether there is nodal congestion (step 304 ). For example, as described for step 202 of FIG. 2 , TPM function 104 may assess whether nodal aggregate QoE scores have fallen below a provisioned QoE threshold, or whether aggregate traffic bandwidth exceeds a nodal link-utilization or throughput-capacity threshold. For congested nodes, problematic users and applications may be identified.
  • the method may dynamically augment or provision shaping rules or policies for congested nodes (step 306 ).
  • rules or policies may selectively throttle any combination of users, applications, nodal bandwidth usage, and device types.
  • TPM function 104 may apply traffic-shaping policies, such as those described for step 204 of FIG. 2 .
  • Policies may be applied to congested nodes at a lowest level in a network hierarchy that is experiencing congestion, since addressing congestion at lower-level nodes may alleviate congestion at nodes higher in the network hierarchy.
  • the method may back out any last dynamic rule set changes (step 308 ).
  • hysteresis techniques may be employed to minimize “ping-ponging” between congested and uncongested nodal states.
  • the method may log any dynamic rule set changes.
  • Telecommunications products often support time-stamped logging of various system events, including provisioning changes.
  • TPM function 104 may augment an existing log to record dynamic provisioning of traffic-management policies, accounting for both their application to congested nodes and their disablement or removal from network nodes that are no longer congested. Logging of dynamic policy changes related to congested nodes, together with collection of nodal traffic statistics, allows subsequent analysis of patterns in and effectiveness of these dynamic, congestion-management policies.
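  • For illustration only, a minimal sketch of such a time-stamped provisioning log follows; the record fields, file path, and function name are assumptions rather than details taken from the disclosure. Each entry records whether a dynamic policy was applied to a congested node or later disabled/removed:

        # Hypothetical time-stamped log of dynamic policy changes, recording
        # both application to congested nodes and later disablement/removal.
        # Field names and the JSON-lines format are assumptions.
        import json
        import time

        def log_policy_change(log_path: str, node: str, policy_id: str, action: str) -> None:
            """Append one provisioning event; action is 'apply' or 'remove'."""
            entry = {"ts": time.time(), "node": node,
                     "policy": policy_id, "action": action}
            with open(log_path, "a") as f:
                f.write(json.dumps(entry) + "\n")

        log_policy_change("/tmp/tpm_policy_changes.log", "NodeB-144",
                          "shape-p2p-256k", "apply")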
  • FIG. 4 illustrates a block diagram of an example system 400 for dynamic congestion management in accordance with an embodiment of the present disclosure.
  • the system 400 may include a subscriber manager 402 , a statistics storage unit 404 , and a DPI engine 406 configured for inline traffic analysis and management. These components may be operable together for implementing dynamic congestion management in accordance with embodiments of the present disclosure.
  • the DPI engine 406 may be positioned behind a GGSN 408 and may manage traffic between one or more subscriber computing devices 410 and the Internet.
  • the DPI engine 406 may provide inline traffic classification—e.g., identify the applications associated with traffic flows—and correlate traffic flows with one or more traffic-management policies, as will be understood.
  • the DPI engine 406 may collect traffic statistics and store the statistics at the statistics storage unit 404 .
  • the subscriber manager 402 may access the statistics storage unit 404 to retrieve statistics that inform congestion detection and dynamic policy creation. Further, the subscriber manager 402 may provide user, location, and device awareness via analysis of signaling traffic that it taps directly or that the DPI engine 406 replicates and tunnels to it.
  • By user, location, and device awareness, we mean that the IP address of a computing device 410 may be associated, via various signaling, with a user identity, with network elements that carry traffic exchanged with said computing device 410, and with the type of computing device 410. For example, and as is further elucidated in FIG. 5, subscriber manager 402 may examine signaling exchanged with GGSN 408 to provide these associations with the IP address of computing device 410.
  • the subscriber manager 402 may have a script that is periodically invoked to pull statistics from the statistics storage unit 404 .
  • the subscriber manager 402 may subsequently analyze the statistics to identify one or more congested nodes and associated users (or subscribers), device types, and applications.
  • the subscriber manager 402 may dynamically create or modify one or more policies (e.g., traffic-shaping or traffic-management rules) to mitigate congestion and push the policy changes to (i.e., provision the policies on) the DPI engine 406 for enforcement.
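  • As a rough sketch of such a periodic script (the statistics-store and DPI-engine interfaces are not specified in the disclosure, so the classes and method names below are assumptions), the audit might look like this:

        # Hypothetical periodic audit: pull statistics, detect congested
        # nodes, and push dynamic policies to the DPI engine for enforcement.
        class StatsStore:                    # stand-in for statistics storage 404
            def fetch_node_stats(self) -> dict:
                return {"NodeB-144": {"util": 0.85}}

        class DpiEngine:                     # stand-in for DPI engine 406
            def provision(self, node: str, policy: dict) -> None:
                print(f"provisioning {policy} on {node}")

        UTIL_THRESHOLD = 0.80                # assumed link-utilization threshold

        def audit_once(stats: StatsStore, dpi: DpiEngine) -> None:
            for node, s in stats.fetch_node_stats().items():
                if s["util"] > UTIL_THRESHOLD:
                    dpi.provision(node, {"action": "shape_heavy_users"})

        audit_once(StatsStore(), DpiEngine())  # a scheduler would invoke this periodically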
  • FIG. 5 illustrates a block diagram of an example system 500 for tapping into GGSN signaling according to embodiments of the present disclosure.
  • the subscriber manager 402 and DPI engine 406 may be operable together for tapping signaling traffic exchanged by GGSN 106 with a network authentication, authorization, and accounting (AAA) component 502, a policy and charging rules function (PCRF) component 504, or a serving GPRS support node (SGSN) 140.
  • the AAA component 502 and the PCRF component 504 may each be suitably implemented within a server or other computing device.
  • the GGSN may exchange RADIUS or Diameter signaling with AAA 502 , Diameter signaling with PCRF 504 , and/or GTP-C signaling with SGSN 140 .
  • Subscriber manager 402 may directly tap into this signaling, as depicted in FIG. 5 .
  • DPI engine 406 may detect such signaling, and tunnel a copy of the signaling to subscriber manager 402 .
  • subscriber manager 402 may associate the IP addresses of computing devices 114 and 116 with respective user identities, with network nodes that carry traffic exchanged with said computing devices, and with the respective types of said computing devices.
  • the signaling thus enables subscriber manager 402 to become user, location, and device aware, such that traffic-management policies (e.g., traffic-shaping or policing rules) may be dynamically provisioned that are specific to users, network nodes, or device types.
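  • A minimal sketch of the association the subscriber manager might maintain once such signaling has been parsed follows; the record layout and function name are assumptions:

        # Hypothetical association table built from tapped RADIUS/Diameter/GTP-C
        # signaling: device IP address -> user identity, serving nodes, device type.
        subscriber_context: dict = {}

        def on_session_signaling(ip: str, user: str,
                                 serving_nodes: list, device_type: str) -> None:
            subscriber_context[ip] = {"user": user,
                                      "nodes": serving_nodes,  # e.g. GGSN/SGSN/RNC/NodeB path
                                      "device": device_type}

        on_session_signaling("10.20.30.40", "imsi-310150123456789",
                             ["GGSN-106", "SGSN-140", "RNC-142", "NodeB-144"],
                             "smartphone")
        print(subscriber_context["10.20.30.40"]["nodes"][-1])  # serving NodeB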
  • nodal congestion may be determined based on throughput- or link-capacity utilization.
  • FIG. 6 illustrates a graph showing nodal throughput-capacity utilization over a period of time.
  • A congestion threshold, sometimes corresponding to an engineering limit, is shown together with bandwidth usage. Congestion may be inferred when the bandwidth usage is measured to be above said predefined threshold. For example, with reference to FIG. 6, congestion may be detected at 11 p.m., when nodal throughput-capacity utilization exceeds the congestion threshold of 80%.
  • Provisioned objects representing network nodes may have a throughput-capacity or link-capacity property.
  • the objects may be provisioned for both downlink and uplink throughput or link capacities.
  • throughput or link utilization may be assessed and reported relative to thresholds.
  • a subscriber manager script such as a script implemented by the subscriber manager 402 shown in FIG. 4 , may periodically compare users' aggregate bandwidth versus the throughput capacity of a node serving them. Nodal congestion (or the cessation of such congestion) may be determined by use of the thresholds.
  • System and method embodiments of the presently disclosed subject matter may employ throughput-capacity or link-utilization thresholds and/or QoE-metric thresholds for determining nodal congestion.
  • FIG. 7 illustrates a block diagram of an example network hierarchy 700 showing nodes identified as targets for dynamic congestion policies according to embodiments of the present disclosure.
  • the “circled” NodeBs 144 and RNC 142 indicate targeted nodes.
  • Provisioned QoE-score and/or link-utilization thresholds indicating congestion may vary between levels in the network hierarchy. Higher level thresholds may be more stringent than lower level thresholds, because more traffic is at stake.
  • The node at the higher level is referred to as the parent node, and the node at the lower level as the child node. For example, each depicted SGSN 140 is a child node with respect to the GGSN 106 but a parent node with respect to its subtending RNCs 142.
  • a goal in selecting target nodes for dynamic policy changes is to minimize the policy changes for nodes that are not experiencing congestion. Another goal is to judiciously and iteratively apply policy changes to both effectively manage congestion and impact a minimal subset of congested nodes.
  • If a parent node is congested but has one or more congested child nodes to which congestion-management policies could be applied, then the children are candidates for dynamic policies rather than the parent node, since managing congestion at the children may concomitantly address congestion at the parent node. If a congested node has no congested child and congestion-management policies remain that could be applied to the congested node, then that node is targeted for policy changes.
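  • The target-selection rule just described can be sketched as a recursion over the node hierarchy; the data layout below is an assumption for illustration, and for brevity the sketch omits the “policies remain that could be applied” qualifier:

        # Hypothetical recursive target selection: prefer congested children;
        # target a congested node itself only when it has no congested child.
        def select_targets(node: str, children: dict, congested: set) -> list:
            if node not in congested:
                return []
            congested_kids = [c for c in children.get(node, []) if c in congested]
            if not congested_kids:
                return [node]            # no congested child: target this node
            targets = []
            for kid in congested_kids:   # recurse toward the lowest congested level
                targets.extend(select_targets(kid, children, congested))
            return targets

        tree = {"RNC-142": ["NodeB-144a", "NodeB-144b"]}
        print(select_targets("RNC-142", tree, congested={"RNC-142", "NodeB-144a"}))
        # -> ['NodeB-144a']: the congested child is targeted, not the parent RNC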
  • Dynamic policies may include policies specific to “bandwidth abusive” users who are moving between congested cells. After determining that such users are moving between congested cells, policies specific to such users that are applied to nodes at lower levels in the network hierarchy may be replaced with a policy at the GGSN. (In this case, policies could also be applied to an RNC 142 or SGSN 140 that serves the set of congested cells.) Further, exceptions may be applied for roaming “bandwidth hogs” whose traffic is anchored at a congested GGSN 106, since the GGSN is the only node serving such users that is under the network operator's control and to which the operator can apply congestion-management policies.
  • a provisioned interval between periodic audits (step 302) should allow time for determining an effect of one or more implemented, dynamic, congestion-management policies. It is further noted that the provisioned interval between audits could be different before and after congestion is detected.
  • Periodic audits enable iterative policy application in the management of congestion. Policies may be iteratively applied to both a given congested node and, as described above in connection with FIG. 7 , congested nodes in a network hierarchy. Iteration may also be employed in the removal or disablement of congestion-management policies that have been applied to formerly congested nodes.
  • FIG. 7 depicts only one example network applicable to the presently disclosed subject matter. However, the present subject matter may be applied to any suitable type of network as will be understood by those of skill in the art.
  • Traffic statistics may provide nodal breakout data of bandwidth usage within a network hierarchy, identifying as well the users, applications, and device types that are consuming bandwidth. Nodal uplink and downlink throughput or link capacities may be provisioned at subscriber manager 402, or subscriber manager 402 may obtain such capacities from a network management system (NMS).
  • hysteresis techniques may be used to back out dynamic policy changes. Such techniques may minimize “ping-ponging” between congested and non-congested states.
  • Hysteresis may be embodied in several exemplary forms, as illustrated in the sketch following this list.
  • hysteresis techniques may include applying two thresholds to trigger entering and leaving a congested state. A higher link-utilization or throughput-capacity threshold may be used to enter a nodal congested state, and a lower threshold used to leave the state. In contrast, a lower QoE-score threshold may be used to enter a nodal congested state, and a higher threshold used to exit this state.
  • dynamic policies may be maintained for multiple, consecutive audit intervals without congestion before they are finally removed.
  • a per-node stack of dynamic rule-set changes may be kept per audit.
  • iteratively applied policy changes may be backed out in reverse order.
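  • A combined sketch of the hysteresis forms listed above follows: dual thresholds (an assumed 80% to enter congestion and 75% to leave, matching the FIG. 8 example), a required number of consecutive clear audits before backout, and a per-node stack so that iteratively applied rule sets are backed out in reverse order. The class and numeric values are illustrative assumptions:

        # Hypothetical per-node hysteresis controller for dynamic rule sets.
        ENTER_UTIL, LEAVE_UTIL, CLEAR_AUDITS = 0.80, 0.75, 2   # assumed values

        class NodeHysteresis:
            def __init__(self) -> None:
                self.rule_stack = []   # dynamic rule sets, most recent on top
                self.clear_count = 0   # consecutive audits below LEAVE_UTIL

            def audit(self, utilization: float) -> None:
                if utilization > ENTER_UTIL:
                    self.clear_count = 0
                    self.rule_stack.append(f"rule-set-{len(self.rule_stack) + 1}")
                    print("applied", self.rule_stack[-1])
                elif utilization < LEAVE_UTIL and self.rule_stack:
                    self.clear_count += 1
                    if self.clear_count >= CLEAR_AUDITS:
                        print("backed out", self.rule_stack.pop())  # reverse order
                        self.clear_count = 0

        n = NodeHysteresis()
        for u in [0.85, 0.90, 0.74, 0.73]:   # two congested audits, then two clear audits
            n.audit(u)                       # backs out rule-set-2 on the second clear audit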
  • FIG. 8 illustrates a graph showing an example of nodal link utilization undergoing hysteresis techniques to back out a dynamic policy change according to embodiments of the present disclosure.
  • Congestion-management policy changes may be applied at 11 p.m., when nodal bandwidth exceeds a congestion threshold of 80%; such changes may be backed out at 1:00 a.m. (i.e., at the second-to-last diamond shape in “Bandwidth Usage”), after the second audit below the 75% non-congestion threshold.
  • dynamic rule-set (i.e., congestion-management policy) changes may be logged to enable pattern analysis.
  • Such logs, in conjunction with traffic statistics, may allow evaluation of the effectiveness of dynamic policies.
  • pattern analysis and evaluation of policy effectiveness may be automated.
  • the monitoring of logs and traffic statistics may be periodic, or aperiodic and triggered by a recent (or latest) congestion event in the network.
  • the pattern may be a degenerate pattern of one set of at least one dynamically applied policy.
  • Persistent patterns of effective, dynamically installed policies may suggest the need to augment the baseline of statically provisioned policies. For example, if a dynamically installed policy is consistently applied to manage nodal congestion during the data busy hour, then the policy may be statically provisioned. Further, a time-of-day condition may be added to the policy. This allows policies to be enforced in real time rather than during congestion audits.
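  • A sketch of this promotion heuristic follows; the recurrence threshold and the shape of the log records are assumptions, not details from the disclosure:

        # Hypothetical promotion of a recurring, effective dynamic policy to a
        # statically provisioned policy carrying a time-of-day condition.
        from collections import Counter

        def promote_candidates(change_log: list, min_recurrences: int = 5) -> list:
            """change_log: records like {'node', 'policy', 'hour', 'effective'}."""
            seen = Counter((r["node"], r["policy"], r["hour"])
                           for r in change_log if r.get("effective"))
            return [{"node": n, "policy": p, "static": True, "active_hour": h}
                    for (n, p, h), count in seen.items() if count >= min_recurrences]

        log = [{"node": "NodeB-144", "policy": "shape-p2p",
                "hour": 21, "effective": True}] * 6   # same busy-hour policy, six nights
        print(promote_candidates(log))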
  • FIG. 9 illustrates a block diagram of an example system 900 for tuning statically provisioned, congestion-management policies according to embodiments of the present disclosure.
  • the system 900 may include: a congestion audit node 402, a system 406 for managing traffic per installed policies, a system 906 for storing dynamic policy-change logs, a traffic statistics storage system 404, and a system 910 for assessing the effectiveness of repeatedly applied dynamic policies.
  • static policies may be provisioned at one or more nodes at 912 .
  • the system 406 may manage traffic based on the statically and dynamically provisioned policies.
  • the system 406 may report traffic statistics, before and after dynamic policies are applied and removed, to the statistics storage system 404 for storage.
  • congestion-audit node 402 may determine whether one or more nodes are congested, provision or remove/disable dynamic congestion-management policies on system 406 for enforcement, and log dynamic policy changes to the system 906 .
  • Assessment system 910, retrieving dynamic policy changes from logging storage 906 and traffic statistics from statistics storage 404, may detect patterns of dynamic policy changes and assess the effectiveness of dynamic policies in managing congestion.
  • Assessment system 910 may provide a report on its analysis, and the report may inform (manual or automated) deliberation on whether dynamically applied policies should have corresponding static policies provisioned on traffic-management system 406 . Such static policies may be provisioned at 912 .
  • aspects of the present subject matter may be embodied as a system, method or computer program product. Accordingly, aspects of the present subject matter may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present subject matter may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium (including, but not limited to, non-transitory computer readable storage media).
  • a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present subject matter may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages, or assembly language that is specific to the instruction execution system.
  • the program code may be compiled and the resulting object code executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.

Abstract

Systems and methods for dynamic congestion management in communications networks are disclosed herein. According to an aspect, a method can include determining traffic statistics of at least one node in a communications network. The method can also include determining whether the at least one node is congested based on the traffic statistics. Further, the method can include dynamically changing or provisioning a set of at least one traffic shaping rule for application to the at least one node in response to determining that the at least one node is congested.

Description

    CROSS REFERENCE TO RELATED APPLICATION
  • The present application claims priority to the commonly owned U.S. Provisional Patent Application No. 61/420,272, titled SYSTEMS AND METHODS FOR DYNAMIC CONGESTION MANAGEMENT IN COMMUNICATIONS NETWORKS and filed Dec. 6, 2010, which is incorporated herein by reference in its entirety.
  • TECHNICAL FIELD
  • The presently disclosed subject matter relates to communications networks. Particularly, the presently disclosed subject matter relates to dynamic congestion management in communications networks.
  • BACKGROUND
  • During times of communications network congestion, subscribers using more than their fair share of bandwidth can impact the quality of experience (QoE) of all active subscribers. In addition, certain applications may unnecessarily consume a large portion of bandwidth during times of congestion, thereby impacting the responsiveness and QoE for more interactive applications. These problems may exist even where traffic and policy management (TPM) systems have been deployed.
  • Provisioning of deep packet inspection (DPI)-enabled traffic management policies to address data network congestion is often an imprecise, iterative science: policies are statically provisioned, results are observed, conclusions drawn regarding the need for further policy changes, and the cycle may then begin again. When congestion occurs despite enforcement of policies currently in place, manual provisioning of policy changes may be required; but this often occurs after the congestion has passed or, at best, with some delay in response to an alarm being raised. Sometimes it is the reoccurring pattern of network congestion that predicates manual provisioning changes, but the network operations staff must first recognize the pattern and assess what changes are needed.
  • Accordingly, there is a continuing need for improving systems and methods for congestion management in communications networks.
  • SUMMARY
  • This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
  • Disclosed herein are systems and methods for dynamic congestion management in communications networks. According to an aspect, systems and methods disclosed herein may, during times of congestion, dynamically limit bandwidth usage of subscribers using more than their fair share of bandwidth. Such congestion management may be dynamically or automatically implemented so as to address congestion at various points in a network hierarchy.
  • According to an aspect, a method can include determining traffic statistics of at least one node in a communications network. The method can also include determining whether the at least one node is congested based on the traffic statistics. Further, the method can include dynamically changing or provisioning a traffic shaping rule for application to the at least one node in response to determining that the at least one node is congested.
  • The presently disclosed subject matter provides: automated detection and mitigation of network congestion by dynamic provisioning of DPI-enabled traffic management policies, so as to address network congestion events in a more timely fashion; and automated evaluation of (possibly repeated) dynamic policy changes and their effectiveness, so as to expedite the provisioning of any necessary, corresponding statically provisioned policies that may mitigate future congestion events in real time.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The foregoing summary, as well as the following detailed description of preferred embodiments, is better understood when read in conjunction with the appended drawings. For the purposes of illustration, there is shown in the drawings exemplary embodiments; however, the presently disclosed subject matter is not limited to the specific methods and instrumentalities disclosed. In the drawings:
  • FIG. 1 is a block diagram of an exemplary communications network 100 in which the presently disclosed subject matter may be deployed for dynamic congestion management in accordance with embodiments of the present disclosure;
  • FIG. 2 is a flowchart of an example method for dynamic congestion management in accordance with embodiments of the present disclosure;
  • FIG. 3 is a flowchart of another example method for dynamic congestion management in accordance with embodiments of the present disclosure;
  • FIG. 4 is a block diagram of an example system for dynamic congestion management in accordance with an embodiment of the present disclosure;
  • FIG. 5 is a block diagram of an example system 500 for tapping into GGSN signaling according to embodiments of the present disclosure;
  • FIG. 6 is a graph showing throughput-capacity utilization over a period of time;
  • FIG. 7 is a block diagram of an example network hierarchy showing nodes identified as targets for dynamic congestion policies according to embodiments of the present disclosure;
  • FIG. 8 is a graph showing an example of monitoring of throughput-capacity utilization instigating hysteresis techniques to back out a dynamic policy change according to embodiments of the present disclosure; and
  • FIG. 9 is a block diagram of an example system for tuning policies according to embodiments of the present disclosure.
  • DETAILED DESCRIPTION
  • The presently disclosed subject matter is described with specificity to meet statutory requirements. However, the description itself is not intended to limit the scope of this patent. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or elements similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the term “step” may be used herein to connote different aspects of methods employed, the term should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described.
  • FIG. 1 illustrates a block diagram of an exemplary communications network 100 in which the presently disclosed subject matter may be deployed for dynamic congestion management in accordance with embodiments of the present disclosure. Referring to FIG. 1, a DPI module 102 having a TPM function 104 is deployed behind a GGSN 106 on Gi (GGSN-to-PDN (public data network) interface). In this example, the TPM policy of the TPM function 104 is not controlled by a policy, charging, and rules function (PCRF). Various embodiments for dynamically managing congestion at one or more nodes in accordance with the present disclosure may be implemented by the TPM function 104 of the DPI module 102; however, any other suitable function of another suitable device or component may be used for dynamically managing congestion at one or more nodes.
  • In this example, the network 100 includes various other communications networks such as, but not limited to, the Internet 108, a packet core network 110, and a radio access network (RAN) 112. Computing devices 114-128 may utilize the Internet 108, the packet core network 110, and the RAN 112 for accessing various computing services or content. For example, the Internet 108 may be communicatively connected to servers 130-138 that are configured to provide computing services to devices such as the computing devices 114-128. For example, the server 130 may provide a video subscription service, the server 132 may provide an Internet search service, the server 134 may provide a peer-to-peer file-transfer service, the server 136 may provide a video sharing service, and the server 138 may provide a video subscription service.
  • Network traffic between the computing devices 114-128 and the servers 130-138 may be managed and handled by nodes of the Internet 108, the packet core network 110, and the RAN 112. For example, the Internet may include various network nodes for handling the transmission of data between the servers 130-138 and the GGSN 106. The packet core network 110 may include network nodes for handling the transmission of data between the GGSN 106 and serving GPRS support nodes (SGSNs) 140, which may communicate with radio network controllers (RNCs) 142 for the transmission of data. Further, the RAN 112 may include backhaul network nodes for handling the transmission of data between the RNCs 142 and NodeBs 144. Each RNC 142 is configured to control one or more NodeBs 144 that are connected to it. These networks and nodes may be targeted for dynamic congestion management of network traffic between the computing devices 114-128 and the servers 130-138 or other components in the network 100 in accordance with embodiments of the present disclosure.
  • It is noted that examples described herein involve a mobile communications network; however, any other suitable communications network may be used to implement system and method embodiments of the presently disclosed subject matter. For example, the presently disclosed subject matter may also be applied to fixed broadband (e.g., DSL technologies (xDSL), fiber-to-the-home (FTTH), and the like), cable networks, or any other suitable type of communications network.
  • As referred to herein, the term “computing device” should be broadly construed. It can include any type of device capable of communicating with other devices, network nodes, and/or networks. For example, a computing device may be a mobile device such as, for example, but not limited to, a smart phone, a feature (cell) phone, a pager, a personal digital assistant (PDA), a tablet, a mobile computer, or some other device with a wireless or cellular network interface card (NIC). A computing device can also include any type of conventional computer, for example, a desktop computer or a laptop computer. A typical mobile computing device is a wireless data access-enabled device (e.g., an iPHONE® smart phone, a BLACKBERRY® smart phone, a NEXUS ONE™ smart phone, an iPAD® device, or the like) that is capable of sending and receiving data in a wireless manner using protocols like the Internet Protocol, or IP, or the wireless application protocol, or WAP. This allows users to access information via wireless devices, such as smart phones, mobile phones, pagers, two-way radios, communicators, and the like. Wireless data access is supported by many wireless networks, including, but not limited to, CDPD, CDMA, GSM, PDC, PHS, TDMA, FLEX, ReFLEX, iDEN, TETRA, DECT, DataTAC, Mobitex, EDGE, UMTS, HSPA, WiMAX, LTE, LTE Advanced, and other 2G, 3G and 4G technologies, and it operates with many handheld device operating systems, such as PalmOS, EPOC, Windows CE, FLEXOS, OS/9, JavaOS, iOS and Android. Typically, these devices use graphical displays and can access the Internet (or other communications network) on so-called mini- or micro-browsers, which are web browsers optimized for small displays and which may accommodate the reduced memory constraints of many wireless devices. In a representative embodiment, the mobile device is a cellular telephone or smart phone that operates over GPRS, which is a data technology for GSM networks. In addition to a conventional voice communication, a given mobile device can communicate with another such device via many different types of message transfer techniques, including SMS (short message service), enhanced SMS (EMS), multi-media message (MMS), email, WAP, paging, or other known or later-developed wireless data formats. Although many of the examples provided herein are implemented on smart phones, the examples may similarly be implemented on any suitable electronic device, such as a computer.
  • FIG. 2 illustrates a flowchart of an example method for dynamic congestion management in accordance with embodiments of the present disclosure. This method may be implemented, for example, by the TPM function 104 of the system 100 shown in FIG. 1 during times of network traffic congestion, for periodically and dynamically augmenting traffic-management policies. In another example, this method may be partially or entirely automated by the systems and devices described herein. In an example, this method may be implemented by any suitable component or node, such as a DPI system, configured for dynamic provisioning of congestion management policies as disclosed herein. It is noted that the DPI module may be implemented by one or more other components such as, but not limited to, a DPI engine, a statistics storage unit, and a subscriber manager. In one embodiment, described below, the subscriber manager may associate user identities, serving network nodes, and device types with the IP addresses of computing devices.
  • Referring to FIG. 2, the method includes determining traffic statistics of one or more nodes in a communications network (step 200). For example, the TPM function 104 of the DPI module 102 may determine traffic statistics including, but not limited to, a QoE score for the aggregate of one or more computing devices being served by a given network node, such as computing devices 114, 116, and 118 served by NodeB 144. The QoE score, for example, may be derived from detected packet drops and retransmissions occurring in the context of Transmission Control Protocol (TCP) connections with computing devices 114, 116, and 118. As an alternative or in addition to determining an aggregate QoE score for NodeB 144, TPM function 104 may determine at least one of aggregate nodal downlink bandwidth, aggregate nodal uplink bandwidth, or aggregate nodal downlink and uplink bandwidth of traffic exchanged with computing devices 114, 116, and 118. Collecting statistics for nodal QoE scores or nodal aggregate bandwidth may prove useful in subsequently inferring nodal congestion.
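  • The disclosure does not prescribe a scoring formula, so the following is only a minimal sketch of how an aggregate nodal QoE score might be derived from TCP drop and retransmission counts; the weighting and the 0-to-5 scale are assumptions:

        # Hypothetical aggregate QoE score for a node, derived from TCP-level
        # impairment counts. The formula and 0 (worst) .. 5 (best) scale are assumed.
        def nodal_qoe_score(packets_sent: int, drops: int, retransmits: int) -> float:
            if packets_sent == 0:
                return 5.0                      # idle node: no evidence of impairment
            impairment = (drops + retransmits) / packets_sent
            return max(0.0, 5.0 * (1.0 - min(impairment * 10, 1.0)))

        # Aggregate over the TCP connections of all devices served by the node.
        per_device = [(10_000, 120, 340), (4_000, 15, 60)]  # (sent, drops, retransmits)
        totals = [sum(column) for column in zip(*per_device)]
        print(round(nodal_qoe_score(*totals), 2))           # ~3.09 on the assumed scale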
  • Additional statistics may be collected for possible use in deriving policies to manage nodal congestion. For example, nodal bandwidth statistics may be collected per user, per application, per device, or per some combination of the preceding. For example, nodal bandwidth statistics may be collected per application per user.
  • The method of FIG. 2 includes determining whether the node(s) are congested based on the traffic statistics (step 202). For instance, TPM function 104 may determine that one of the NodeBs 144 is congested as depicted in FIG. 1. Continuing an aforementioned example, the TPM function 104 may determine that the node's aggregate QoE score falls below a predefined threshold, and thereby ascertain that the node is likely congested. In another example, the TPM function 104 may assess the aggregate bandwidth of traffic exchanged with computing devices served by a network node against an engineered link capacity for this node. Such an engineered capacity is hereinafter referred to as a link-utilization threshold. If the aggregate bandwidth exceeds a provisioned, nodal, link-utilization threshold, TPM function 104 may infer that the node is congested.
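  • The two congestion tests just described can be sketched as follows; the threshold values and parameter names are illustrative assumptions:

        # Hypothetical congestion test: a QoE score below a provisioned floor,
        # or aggregate bandwidth above a provisioned link-utilization threshold.
        QOE_FLOOR = 3.0              # assumed provisioned QoE threshold
        LINK_UTIL_THRESHOLD = 0.80   # assumed fraction of engineered link capacity

        def node_is_congested(qoe_score: float, aggregate_bps: float,
                              engineered_capacity_bps: float) -> bool:
            if qoe_score < QOE_FLOOR:
                return True
            return aggregate_bps > LINK_UTIL_THRESHOLD * engineered_capacity_bps

        print(node_is_congested(qoe_score=3.4, aggregate_bps=85e6,
                                engineered_capacity_bps=100e6))  # True: 85% > 80%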
  • The method of FIG. 2 includes dynamically changing or provisioning a set of one or more traffic shaping rules for application to the node(s) in response to determining that the node(s) are congested (step 204). For example, the TPM function 104 may dynamically change or provision a traffic shaping rule for application to the congested NodeB 144 in response to determining that the NodeB is congested. The traffic shaping rules applied to manage nodal congestion may vary in consideration of at least one of service plans associated with users that are served by the congested node, limitations on congestion management permitted by the regulatory environment in which the node resides, and operator-specific policies. For example, where users subscribe to service plans that are tiered on the basis of allowed bandwidth or traffic volume, TPM function 104 may shape nodal traffic of lower-tier (e.g., “bronze”) users per their service plan before iteratively impacting the traffic of higher-tier (e.g., “silver” and “gold”) users. Or where a user is served by the congested node and subscribes to a plan offering a premium gaming or voice over internet protocol (VoIP) experience, TPM function 104 may selectively throttle other traffic to ensure that the user enjoys an optimal gaming or VoIP experience. In another embodiment, where “net neutrality” concerns do not prohibit targeting specific applications, TPM function 104 may during nodal congestion shape traffic of applications which do not directly contribute to the network operator's revenues, in order to provide a better QoE for applications which do contribute to the operator's revenues. Another example would be TPM function 104 shaping traffic of applications which are less sensitive to delay or packet loss, in order to provide a better QoE for users of applications which are more sensitive. For instance, TPM function 104 may shape peer-to-peer (P2P) and other file-transfer traffic, as well as downloads of software updates, in order to provide other users with a better VoIP and web browsing experience. Yet another example would be TPM function 104 applying traffic-shaping rules to drop packets of high-definition (HD) streamed video to computing devices with low-resolution displays, so as to trigger feedback from the computing device to the video head-end, such that lower-definition video is streamed because the end points assume network congestion. Where “net neutrality” concerns do prohibit targeting specific applications, TPM function 104 may enforce “usage fairness” traffic-shaping policies that are application-agnostic but which effectively target users that use more than their “fair share” of bandwidth during nodal congestion, so as to limit the detrimental impact of such “bandwidth abusive” users on other users' QoE.
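  • As a sketch of the tier-ordered shaping just described, the following hypothetical helper selects the next set of shaping rules, impacting “bronze” users before “silver” and “gold” users; the tier names, ordering, and rate caps are illustrative assumptions only.

    TIER_ORDER = ["bronze", "silver", "gold"]  # shape lower tiers first
    SHAPED_RATE_BPS = {"bronze": 500_000, "silver": 1_000_000, "gold": 2_000_000}

    def next_shaping_rules(users_by_tier, already_shaped_tiers):
        # Return per-user rate caps for the lowest tier not yet shaped,
        # so that successive audit intervals iteratively widen the impact.
        for tier in TIER_ORDER:
            if tier not in already_shaped_tiers and users_by_tier.get(tier):
                rules = [{"user": u, "max_bps": SHAPED_RATE_BPS[tier]}
                         for u in users_by_tier[tier]]
                return tier, rules
        return None, []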
  • FIG. 3 illustrates a flowchart of another example method for dynamic congestion management in accordance with embodiments of the present disclosure. This method may be implemented, for example, by the TPM function 104 of the system 100 shown in FIG. 1 during times of network traffic congestion, for periodically and dynamically augmenting traffic-management policies. In another example, this method may be partially or entirely automated by the systems and devices described herein. In an example, this method may be implemented by any suitable component or node, such as a DPI system, configured for dynamic provisioning of congestion management policies as disclosed herein.
  • Referring to FIG. 3, the method includes implementing statically provisioned, baseline traffic-shaping policies (step 300). For example, the TPM function 104 shown in FIG. 1 may implement a baseline traffic management policy such as, but not limited to, one of the following: ensuring that downlink traffic's bandwidth does not exceed the network's engineered downlink bandwidth capacity, dropping (policing) traffic associated with illegal peer-to-peer file sharing, or prioritizing premium (e.g., “gold”) users' traffic over others' traffic.
  • The method of FIG. 3 includes periodically auditing nodal traffic statistics (step 302). For example, the TPM function 104 may audit nodal traffic statistics every 15 minutes as described for step 202 of FIG. 2. For instance, every 15 minutes TPM function 104 may examine nodal aggregate QoE scores or the aggregate traffic bandwidth supported by network nodes.
  • The method of FIG. 3 includes determining whether there is nodal congestion (step 304). For example, as described for step 202 of FIG. 2, TPM function 104 may assess whether nodal aggregate QoE scores have fallen below a provisioned QoE threshold, or whether aggregate traffic bandwidth exceeds a nodal link-utilization or throughput-capacity threshold. For congested nodes, problematic users and applications may be identified.
  • In response to determining that there is nodal congestion at step 304, the method may dynamically augment or provision shaping rules or policies for congested nodes (step 306). Such rules or policies may selectively throttle any combination of users, applications, nodal bandwidth usage, and device types. For example, TPM function 104 may apply traffic-shaping policies, such as those described for step 204 of FIG. 2. Policies may be applied to congested nodes at a lowest level in a network hierarchy that is experiencing congestion, since addressing congestion at lower-level nodes may alleviate congestion at nodes higher in the network hierarchy.
  • In response to determining that there is no longer congestion at a node to which shaping rules or policies were dynamically applied, the method may back out any last dynamic rule set changes (step 308). As expounded in connection with FIG. 8, hysteresis techniques may be employed to minimize “ping ponging” between congested and uncongested nodal states.
  • At step 310, the method may log any dynamic rule set changes. Telecommunications products often support time-stamped logging of various system events, including provisioning changes. TPM function 104 may augment an existing log to record dynamic provisioning of traffic-management policies, accounting for both their application to congested nodes and their disablement or removal from network nodes that are no longer congested. Logging of dynamic policy changes related to congested nodes, together with collection of nodal traffic statistics, allows subsequent analysis of patterns in and effectiveness of these dynamic, congestion-management policies.
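  • The overall method of FIG. 3 might be expressed schematically as the loop below. The collection, enforcement, and logging callables are assumed interfaces supplied by the surrounding system; only the control flow (audit, apply, back out, log) follows steps 302 through 310 above.

    import time

    def audit_loop(nodes, collect_stats, is_congested, make_rules,
                   provision_rules, remove_rules, log_change,
                   interval_s=15 * 60):
        applied = {node: [] for node in nodes}  # per-node stack of dynamic rule sets
        while True:
            stats = collect_stats(nodes)                    # step 302: audit statistics
            for node in nodes:
                if is_congested(stats[node]):               # step 304: congestion test
                    rules = make_rules(node, stats[node])
                    provision_rules(node, rules)            # step 306: augment policies
                    applied[node].append(rules)
                    log_change("apply", node, rules)        # step 310: log the change
                elif applied[node]:
                    rules = applied[node].pop()             # step 308: back out last change
                    remove_rules(node, rules)
                    log_change("back_out", node, rules)
            time.sleep(interval_s)                          # provisioned audit interval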
  • FIG. 4 illustrates a block diagram of an example system 400 for dynamic congestion management in accordance with an embodiment of the present disclosure. Referring to FIG. 4, the system 400 may include a subscriber manager 402, a statistics storage unit 404, and a DPI engine 406 configured for inline traffic analysis and management. These components may be operable together for implementing dynamic congestion management in accordance with embodiments of the present disclosure. For example, the DPI engine 406 may be positioned behind a GGSN 408 and may manage traffic between one or more subscriber computing devices 408 and the Internet 410. Further, the DPI engine 406 may provide inline traffic classification—e.g., identify the applications associated with traffic flows—and correlate traffic flows with one or more traffic-management policies, as will be understood. Further, the DPI engine 406 may collect traffic statistics and store the statistics at the statistics storage unit 404.
  • The subscriber manager 402 may access the statistics storage unit 404 to retrieve statistics that inform congestion detection and dynamic policy creation. Further, the subscriber manager 402 may provide user, location, and device awareness via analysis of signaling traffic that it taps directly or that the DPI engine 406 replicates and tunnels to it. By user, location, and device awareness, we mean that the IP address of a computing device 410 may be associated via various signaling with a user identity, with network elements that carry traffic exchanged with said computing device 410, and with the type of computing device 410. For example, and as is further elucidated in FIG. 5, subscriber manager 402 may examine signaling exchanged with GGSN 408 to provide these associations with the IP address of computing device 410. In one embodiment, the subscriber manager 402 may have a script that is periodically invoked to pull statistics from the statistics storage unit 404. The subscriber manager 402 may subsequently analyze the statistics to identify one or more congested nodes and associated users (or subscribers), device types, and applications. In response to determining that one or more nodes are congested, the subscriber manager 402 may dynamically create or modify one or more policies (e.g., traffic-shaping or traffic-management rules) to mitigate congestion and push the policy changes to (i.e., provision the policies on) the DPI engine 406 for enforcement.
  • FIG. 5 illustrates a block diagram of an example system 500 for tapping into GGSN signaling according to embodiments of the present disclosure. Referring to FIG. 5, the subscriber manager 402 and DPI engine 406 may be operable together for tapping signaling traffic exchanged by GGSN 106 with a network authentication, authorization, and accounting (AAA) component 502, a policy and charging rules function (PCRF) component 504, or a serving GPRS support node (SGSN) 140. The AAA component 502 and the PCRF component 504 may each be suitably implemented within a server or other computing device. The GGSN may exchange RADIUS or Diameter signaling with AAA 502, Diameter signaling with PCRF 504, and/or GTP-C signaling with SGSN 140. Subscriber manager 402 may directly tap into this signaling, as depicted in FIG. 5. Alternatively, DPI engine 406 may detect such signaling and tunnel a copy of the signaling to subscriber manager 402. By examining the constituent parts of this signaling, subscriber manager 402 may associate the IP addresses of computing devices 114 and 116 with respective user identities, with network nodes that carry traffic exchanged with said computing devices, and with the respective types of said computing devices. The signaling thus enables subscriber manager 402 to become user, location, and device aware, such that traffic-management policies (e.g., traffic-shaping or policing rules) may be dynamically provisioned that are specific to users, network nodes, or device types.
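  • The associations derived from such signaling might be represented as in the following sketch, keyed by IP address so that DPI flow records can be joined to subscriber context; all field names and values are hypothetical.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SubscriberContext:
        ip_address: str
        user_id: str                  # e.g., learned from RADIUS accounting
        serving_nodes: List[str] = field(default_factory=list)  # NodeB/RNC/SGSN/GGSN path
        device_type: str = "unknown"  # e.g., inferred from GTP-C signaling

    # Example (hypothetical) entry joining an IP address to its context.
    context_by_ip = {
        "10.0.0.7": SubscriberContext(
            ip_address="10.0.0.7",
            user_id="user-114",
            serving_nodes=["NodeB-144", "RNC-142", "SGSN-140", "GGSN-106"],
            device_type="smartphone"),
    }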
  • Detecting Nodal Congestion
  • According to embodiments of the present disclosure, nodal congestion may be determined based on throughput- or link-capacity utilization. For example, FIG. 6 illustrates a graph showing nodal throughput-capacity utilization over a period of time. Referring to FIG. 6, a congestion threshold, sometimes corresponding to an engineering limit, is shown together with bandwidth usage. Congestion may be inferred when the bandwidth usage is measured to be above said predefined threshold. For example, with reference to FIG. 6, congestion may be detected at 11 p.m., when nodal throughput-capacity utilization exceeds the congestion threshold of 80%. Provisioned objects representing network nodes may have a throughput-capacity or link-capacity property. The objects may be provisioned for both downlink and uplink throughput or link capacities. As a result, throughput or link utilization may be assessed and reported relative to thresholds. A subscriber manager script, such as a script implemented by the subscriber manager 402 shown in FIG. 4, may periodically compare users' aggregate bandwidth versus the throughput capacity of a node serving them. Nodal congestion (or the cessation of such congestion) may be determined by use of the thresholds. System and method embodiments of the presently disclosed subject matter may employ throughput-capacity or link utilization and/or QoE metrics thresholds for determining nodal congestion.
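  • Provisioned node objects of the kind described might be sketched as follows, with separate downlink and uplink capacities; the 80% default mirrors the congestion threshold shown in FIG. 6, while the class itself is purely illustrative.

    from dataclasses import dataclass

    @dataclass
    class NodeCapacity:
        downlink_bps: float
        uplink_bps: float
        utilization_threshold: float = 0.80  # congestion threshold, as in FIG. 6

        def is_congested(self, measured_dl_bps, measured_ul_bps):
            # Congestion is inferred when either direction exceeds its
            # provisioned throughput-capacity utilization threshold.
            return (measured_dl_bps > self.utilization_threshold * self.downlink_bps
                    or measured_ul_bps > self.utilization_threshold * self.uplink_bps)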
  • Identifying Nodes as Targets for Dynamic Congestion Policies
  • FIG. 7 illustrates a block diagram of an example network hierarchy 700 showing nodes identified as targets for dynamic congestion policies according to embodiments of the present disclosure. The “circled” NodeBs 144 and RNC 142 indicate targeted nodes. Provisioned QoE-score and/or link-utilization thresholds indicating congestion may vary between levels in the network hierarchy. Higher-level thresholds may be more stringent than lower-level thresholds, because more traffic is at stake. With respect to two immediately connected nodes in the hierarchy, we refer to the node at the higher level as the parent node, and the node at the lower level as the child node. For example, each of the depicted SGSNs 140 is a child node with respect to GGSN 106, but a parent node with respect to subtending RNCs 142.
  • A goal in selecting target nodes for dynamic policy changes is to minimize the policy changes for nodes that are not experiencing congestion. Another goal is to judiciously and iteratively apply policy changes to both effectively manage congestion and impact a minimal subset of congested nodes. Thus, as depicted in FIG. 7, if a parent node is congested but has one or more congested children nodes to which congestion-management policies could be applied, then the children are candidates for dynamic policies rather than the parent node, since managing congestion at the children may concomitantly address congestion at the parent node. If a congested node has no congested child and congestion-management policies remain that could be applied to the congested node, then that node is targeted for policy changes.
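  • The target-selection rule just described can be sketched as a recursive walk of the hierarchy; the tree accessors below are assumed interfaces, not part of the disclosure.

    def select_targets(node, is_congested, children_of):
        # Returns the congested nodes that should receive dynamic policies:
        # a congested parent defers to its congested children, so policies
        # land at the lowest congested level of the hierarchy.
        if not is_congested(node):
            return []
        congested_children = [c for c in children_of(node) if is_congested(c)]
        if not congested_children:
            return [node]  # no congested child: target this node itself
        targets = []
        for child in congested_children:
            targets.extend(select_targets(child, is_congested, children_of))
        return targets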
  • Exceptions for which policies may be applied “globally” at GGSN 106 include policies specific to “bandwidth abusive” users who are moving between congested cells. After determining that such users are moving between congested cells, policies specific to such users that are applied to nodes at lower levels in the network hierarchy may be replaced with a policy at the GGSN. (In this case, policies could also be applied to an RNC 142 or SGSN 140 that serves the set of congested cells.) Further, exceptions may be applied for roaming “bandwidth hogs” whose traffic is anchored at a congested GGSN 106, since the GGSN is the only node serving such users that is under the network operator's control and to which the operator can apply congestion-management policies.
  • It is also noted that a provisioned interval between periodic audits 302 should allow time for determining an effect of one or more implemented, dynamic, congestion-management policies. It is further noted that the provisioned interval between audits could be different before and after congestion is detected. Periodic audits enable iterative policy application in the management of congestion. Policies may be iteratively applied to both a given congested node and, as described above in connection with FIG. 7, congested nodes in a network hierarchy. Iteration may also be employed in the removal or disablement of congestion-management policies that have been applied to formerly congested nodes.
  • It is noted that FIG. 7 depicts only one example network applicable to the presently disclosed subject matter. However, the present subject matter may be applied to any suitable type of network as will be understood by those of skill in the art.
  • Gathering Data to Inform Dynamic Policies for Congested Nodes
  • Traffic statistics may provide nodal breakout data of bandwidth usage within a network hierarchy, identifying as well the users, applications, and device types that are consuming bandwidth. Nodal uplink and downlink throughput- or link-capacities may be provisioned at subscriber manager 402, or subscriber manager 402 may obtain such capacities from a network management system (NMS).
  • Backing Out Dynamic Policy Changes
  • According to embodiments of the present disclosure, hysteresis techniques may be used to back out dynamic policy changes. Such techniques may minimize “ping-ponging” between congested and non-congested states. Hysteresis may be embodied in several exemplary forms. First, hysteresis techniques may include applying two thresholds to trigger entering and leaving a congested state. A higher link-utilization or throughput-capacity threshold may be used to enter a nodal congested state, and a lower threshold used to leave the state. In contrast, a lower QoE-score threshold may be used to enter a nodal congested state, and a higher threshold used to exit this state. Second, dynamic policies may be maintained for multiple, consecutive audit intervals without congestion before they are finally removed. Finally, and by way of example, a per-node stack of dynamic rule-set changes may be kept per audit. In this example, iteratively applied policy changes may be backed out in reverse order.
  • FIG. 8 illustrates a graph showing an example of nodal link utilization undergoing hysteresis techniques to back out a dynamic policy change according to embodiments of the present disclosure. Referring to FIG. 8, whereas congestion-management policy changes may be applied at 11 p.m. when nodal bandwidth exceeds a congestion threshold of 80%, such changes may be backed out at 1:00 a.m. (i.e., at the second-to-last diamond shape in “Bandwidth Usage”), after the second audit below the 75% non-congestion threshold.
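  • A per-node state machine implementing this hysteresis might look like the sketch below; the 80% entry threshold, 75% exit threshold, and two-audit clearance count follow the FIG. 8 example, while the class itself is an assumption for illustration.

    ENTER_THRESHOLD = 0.80   # utilization above this enters the congested state
    EXIT_THRESHOLD = 0.75    # utilization below this counts toward leaving it
    CLEAR_AUDITS_REQUIRED = 2

    class CongestionState:
        def __init__(self):
            self.congested = False
            self.clear_audits = 0

        def update(self, utilization):
            # Returns True when dynamic policy changes should be backed out.
            if not self.congested:
                if utilization > ENTER_THRESHOLD:
                    self.congested = True
                    self.clear_audits = 0
                return False
            if utilization < EXIT_THRESHOLD:
                self.clear_audits += 1
                if self.clear_audits >= CLEAR_AUDITS_REQUIRED:
                    self.congested = False
                    return True
            else:
                self.clear_audits = 0  # a high reading resets the clearance count
            return False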
  • Tuning Congestion Management in the Network
  • In accordance with embodiments of the present disclosure, dynamic rule-set (i.e., congestion-management policy) changes may be logged to enable pattern analysis. Such logs, in conjunction with traffic statistics, may allow evaluation of the effectiveness of dynamic policies. Such pattern analysis and evaluation of policy effectiveness may be automated. The monitoring of logs and traffic statistics may be periodic, or aperiodic and triggered by a recent (or latest) congestion event in the network. The pattern may be a degenerate pattern of one set of at least one dynamically applied policy.
  • Persistent patterns of effective, dynamically installed policies may suggest the need to augment the baseline of statically provisioned policies. For example, if a dynamically installed policy is consistently applied to manage nodal congestion during the data busy hour, then the policy may be statically provisioned. Further, a time-of-day condition may be added to the policy. This allows policies to be enforced in real time rather than during congestion audits.
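  • One hypothetical way to detect such persistent patterns in the dynamic-policy log is shown below; the log layout, recurrence threshold, and time-of-day window format are all assumptions for illustration.

    from collections import Counter

    def suggest_static_policies(policy_log, min_occurrences=5):
        # policy_log: iterable of (timestamp, node, policy_id) "apply" events,
        # where timestamp is a datetime. Policies that recur at the same hour
        # are candidates for static provisioning with a time-of-day condition.
        counts = Counter((node, policy_id, ts.hour)
                         for ts, node, policy_id in policy_log)
        return [{"node": node, "policy": policy_id,
                 "time_of_day": f"{hour:02d}:00-{(hour + 1) % 24:02d}:00"}
                for (node, policy_id, hour), n in counts.items()
                if n >= min_occurrences]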
  • FIG. 9 illustrates a block diagram of an example system 900 for tuning statically provisioned, congestion-management policies according to embodiments of the present disclosure. Referring to FIG. 9, the system 900 may include: a congestion audit node 402, a system 406 for managing traffic per installed policies, a system 906 for storing dynamic policy-change logs, a traffic statistics storage system 404, and a system 910 for assessing effectiveness of repeatedly applied dynamic policies. In an example, static policies may be provisioned at one or more nodes at 912. The system 406 may manage traffic based on the statically and dynamically provisioned policies. The system 406 may report traffic statistics, before and after dynamic policies are applied and removed, to the statistics storage system 404 for storage. Using traffic statistics retrieved from statistics storage 404, congestion-audit node 402 may determine whether one or more nodes are congested, provision or remove/disable dynamic congestion-management policies on system 406 for enforcement, and log dynamic policy changes to the system 906. Assessment system 910, retrieving dynamic policy changes from logging storage 906 and traffic statistics from statistics storage 404, may detect patterns of dynamic policy changes, and assess the effectiveness of dynamic policies in managing congestion. Assessment system 910 may provide a report on its analysis, and the report may inform (manual or automated) deliberation on whether dynamically applied policies should have corresponding static policies provisioned on traffic-management system 406. Such static policies may be provisioned at 912.
  • As will be appreciated by one skilled in the art, aspects of the present subject matter may be embodied as a system, method or computer program product. Accordingly, aspects of the present subject matter may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present subject matter may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium (including, but not limited to, non-transitory computer readable storage media). A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present subject matter may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages, or assembly language that is specific to the instruction execution system. The program code may be compiled and the resulting object code executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • Aspects of the present subject matter are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the subject matter. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present subject matter. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
  • The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the subject matter. As used herein, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
  • The corresponding structures, materials, acts, and equivalents of all means or steps plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present subject matter has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the subject matter in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the subject matter. The embodiment was chosen and described in order to best explain the principles of the presently disclosed subject matter and the practical application, and to enable others of ordinary skill in the art to understand the presently disclosed subject matter for various embodiments with various modifications as are suited to the particular use contemplated.
  • The descriptions of the various embodiments of the present subject matter have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (20)

What is claimed is:
1. A method for dynamic congestion management in a communications network, the method comprising:
determining traffic statistics of at least one node in a communications network;
determining whether the at least one node is congested based on the traffic statistics;
in response to determining that the at least one node is congested, dynamically changing or provisioning a set of at least one traffic shaping rule for application to the at least one node;
in response to determining that the at least one node is congested, determining whether the at least one node has a congested child node; and
in response to determining that the congested at least one node has a congested child node, applying the dynamic traffic shaping rule to the child node.
2. The method of claim 1, further comprising backing out a last dynamic rule-set change or provisioning in response to determining that the at least one node is not congested.
3. The method of claim 1, further comprising automatically implementing each of the steps.
4. The method of claim 1, wherein the steps are periodically implemented.
5. The method of claim 1, wherein the steps are iteratively implemented.
6. The method of claim 1, further comprising logging the change or provisioning of the set of at least one traffic shaping rule.
7. The method of claim 6, further comprising using the logged change or provisioning of the set of at least one traffic shaping rule to tune congestion management in the communications network.
8. The method of claim 7, wherein using the logged change or provisioning comprises assessing at least one of persistent patterns and effectiveness of congestion management resulting from dynamic changes to traffic shaping rules, in order to inform any provisioning of static rules.
9. The method of claim 1, further comprising implementing a statically provisioned traffic shaping policy for the at least one node.
10. The method of claim 1, wherein determining whether the at least one node is congested comprises using one or both of a Quality of Experience (QoE) score and a throughput-capacity utilization threshold of the at least one node to determine congestion.
11. The method of claim 1, further comprising:
in response to determining that the at least one node is congested, determining whether the at least one node does not have a congested child node; and
in response to determining that the congested at least one node does not have a congested child node, applying the traffic shaping rule to the congested at least one node.
12. The method of claim 1, wherein changing or provisioning a traffic shaping rule comprises using a usage fairness technique.
13. The method of claim 1, wherein the communications network is a mobile network, wherein the method further comprises:
determining whether a user served by the at least one node uses a disproportionate share of bandwidth compared to other users and has moved between congested cells; and
in response to determining that the user served by the at least one node uses a disproportionate share of bandwidth and has moved between congested cells, provisioning a set of at least one global traffic shaping rule for application to the user.
14. A system for dynamic congestion management in a communications network, the system comprising:
at least a processor and memory configured to:
determine traffic statistics of at least one node in a communications network;
determine whether the at least one node is congested based on the traffic statistics;
dynamically change or provision a set of at least one traffic shaping rule for application to the at least one node in response to determining that the at least one node is congested;
in response to determining that the at least one node is congested, determine whether the at least one node has a congested child node; and
in response to determining that the congested at least one node has a congested child node, apply the dynamic traffic shaping rule to the child node.
15. The system of claim 14, wherein the at least a processor and memory are configured to back out a last dynamic rule-set change or provisioning in response to determining that the at least one node is not congested.
16. The system of claim 14, wherein the at least a processor and memory are configured to automatically determine traffic statistics, determine whether the at least one node is congested, and dynamically change or provision the set of at least one traffic shaping rule.
17. The system of claim 14, wherein the at least a processor and memory are configured to periodically determine traffic statistics, determine whether the at least one node is congested, and dynamically change or provision the set of at least one traffic shaping rule.
18. A computer program product for dynamic congestion management in a communications network, the computer program product comprising:
a non-transitory computer readable storage medium having computer readable program code for execution by a processor, the computer readable program code comprising:
computer readable program code configured to determine traffic statistics of at least one node in a communications network;
computer readable program code configured to determine whether the at least one node is congested based on the traffic statistics; and
computer readable program code configured to dynamically change or provision a set of at least one traffic shaping rule for application to the at least one node in response to determining that the at least one node is congested;
computer readable program code configured to, in response to determining that the at least one node is congested, determine whether the at least one node has a congested child node; and
computer readable program code configured to, in response to determining that the congested at least one node has a congested child node, apply the dynamic traffic shaping rule to the child node.
19. The computer program product of claim 18, further comprising:
computer readable program code configured to, in response to determining that the at least one node is congested, determine whether the at least one node does not have a congested child node; and
computer readable program code configured to, in response to determining that the congested at least one node does not have a congested child node, apply the traffic shaping rule to the congested at least one node.
20. The computer program product of claim 18, further comprising:
computer readable program code configured to determine whether a user served by the at least one node uses a disproportionate share of bandwidth compared to other users and has moved between congested cells; and
computer readable program code configured to, in response to determining that the user served by the at least one node uses a disproportionate share of bandwidth and has moved between congested cells, provision a set of at least one global traffic shaping rule for application to the user.
US14/310,671 2010-12-06 2014-06-20 Systems and methods for dynamic congestion management in communications networks Abandoned US20140341025A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/310,671 US20140341025A1 (en) 2010-12-06 2014-06-20 Systems and methods for dynamic congestion management in communications networks

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US42027210P 2010-12-06 2010-12-06
US13/312,436 US8773981B2 (en) 2010-12-06 2011-12-06 Systems and methods for dynamic congestion management in communications networks
US14/310,671 US20140341025A1 (en) 2010-12-06 2014-06-20 Systems and methods for dynamic congestion management in communications networks

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/312,436 Continuation US8773981B2 (en) 2010-12-06 2011-12-06 Systems and methods for dynamic congestion management in communications networks

Publications (1)

Publication Number Publication Date
US20140341025A1 true US20140341025A1 (en) 2014-11-20

Family

ID=46162151

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/312,436 Active 2032-04-29 US8773981B2 (en) 2010-12-06 2011-12-06 Systems and methods for dynamic congestion management in communications networks
US14/310,671 Abandoned US20140341025A1 (en) 2010-12-06 2014-06-20 Systems and methods for dynamic congestion management in communications networks

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/312,436 Active 2032-04-29 US8773981B2 (en) 2010-12-06 2011-12-06 Systems and methods for dynamic congestion management in communications networks

Country Status (1)

Country Link
US (2) US8773981B2 (en)

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010070699A1 (en) * 2008-12-15 2010-06-24 富士通株式会社 Data transmission method
US20150222939A1 (en) * 2010-10-28 2015-08-06 Avvasi Inc. System for monitoring a video network and methods for use therewith
US8953443B2 (en) 2011-06-01 2015-02-10 At&T Intellectual Property I, L.P. Method and apparatus for providing congestion management for a wireless communication network
US8804530B2 (en) * 2011-12-21 2014-08-12 Cisco Technology, Inc. Systems and methods for gateway relocation
KR20130093746A (en) * 2011-12-27 2013-08-23 한국전자통신연구원 Network bandwidth distribution device and method thereof
US9106513B2 (en) * 2012-03-23 2015-08-11 Microsoft Technology Licensing, Llc Unified communication aware networks
US9167021B2 (en) 2012-03-30 2015-10-20 Citrix Systems, Inc. Measuring web browsing quality of experience in real-time at an intermediate network node
US20130263167A1 (en) * 2012-03-30 2013-10-03 Bytemobile, Inc. Adaptive Traffic Management in Cellular Wireless Networks
US11469914B2 (en) * 2012-08-10 2022-10-11 Viasat, Inc. System, method and apparatus for subscriber user interfaces
US10136355B2 (en) 2012-11-26 2018-11-20 Vasona Networks, Inc. Reducing signaling load on a mobile network
WO2014127812A1 (en) * 2013-02-20 2014-08-28 Nokia Solutions And Networks Oy Adapting pcc rules to user experience
US10362081B2 (en) 2013-08-30 2019-07-23 Citrix Systems, Inc. Methods and systems for quantifying the holistic quality of experience for internet multimedia
US9515938B2 (en) 2013-10-24 2016-12-06 Microsoft Technology Licensing, Llc Service policies for communication sessions
MX368605B (en) * 2013-11-12 2019-10-09 Vasona Networks Inc Congestion in a wireless network.
US10341881B2 (en) 2013-11-12 2019-07-02 Vasona Networks, Inc. Supervision of data in a wireless network
US10039028B2 (en) 2013-11-12 2018-07-31 Vasona Networks Inc. Congestion in a wireless network
EP3108664B1 (en) * 2014-02-20 2019-05-01 Markport Limited Enhanced traffic management during signaling storms
US9628485B2 (en) * 2014-08-28 2017-04-18 At&T Intellectual Property I, L.P. Facilitating peering between devices in wireless communication networks
US10229395B2 (en) 2015-06-25 2019-03-12 Bank Of America Corporation Predictive determination and resolution of a value of indicia located in a negotiable instrument electronic image
US10373128B2 (en) 2015-06-25 2019-08-06 Bank Of America Corporation Dynamic resource management associated with payment instrument exceptions processing
US10049350B2 (en) 2015-06-25 2018-08-14 Bank Of America Corporation Element level presentation of elements of a payment instrument for exceptions processing
US10115081B2 (en) * 2015-06-25 2018-10-30 Bank Of America Corporation Monitoring module usage in a data processing system
FR3049150B1 (en) * 2016-03-21 2019-06-07 Societe Francaise Du Radiotelephone-Sfr METHOD AND SYSTEM FOR OPTIMIZING INDIVIDUALIZED FLOW WITHIN A TELECOMMUNICATIONS NETWORK
US10631198B2 (en) 2017-11-14 2020-04-21 T-Mobile Usa, Inc. Data congestion management system and method
US11044208B2 (en) * 2017-11-27 2021-06-22 Hughes Network Systems, Llc System and method for maximizing throughput using prioritized efficient bandwidth sharing
KR20200083582A (en) * 2017-11-27 2020-07-08 오팡가 네트웍스, 인크. Systems and methods for accelerating or decelerating data transmission network protocols based on real-time transmission network congestion conditions
JP7059149B2 (en) * 2018-08-24 2022-04-25 株式会社東芝 Wireless communication equipment, wireless communication systems, wireless communication methods and programs
US11399054B2 (en) * 2018-11-29 2022-07-26 Arista Networks, Inc. Method and system for profiling teleconference session quality in communication networks
CN111050360B (en) * 2019-11-21 2022-12-30 京信网络系统股份有限公司 Uplink data distribution method, device, base station and computer readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030023774A1 (en) * 2001-06-14 2003-01-30 Gladstone Philip J. S. Stateful reference monitor
US20060080417A1 (en) * 2004-10-12 2006-04-13 International Business Machines Corporation Method, system and program product for automated topology formation in dynamic distributed environments
US20070067264A1 (en) * 2005-09-19 2007-03-22 Tektronix, Inc. System and method of forwarding end user correlated user and control plane or network states to OSS system
US20090232001A1 (en) * 2008-03-11 2009-09-17 Cisco Technology, Inc. Congestion Control in Wireless Mesh Networks
US20100017506A1 (en) * 2008-07-18 2010-01-21 Apple Inc. Systems and methods for monitoring data and bandwidth usage
US20120039175A1 (en) * 2010-08-11 2012-02-16 Alcatel-Lucent Usa Inc. Enabling a distributed policy architecture with extended son (extended self organizing networks)
US20120113807A1 (en) * 2010-11-09 2012-05-10 Cisco Technology, Inc. Affecting Node Association Through Load Partitioning
US20120155276A1 (en) * 2010-12-17 2012-06-21 Cisco Technology Inc. Dynamic Expelling of Child Nodes in Directed Acyclic Graphs in a Computer Network

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6882709B1 (en) 1999-04-14 2005-04-19 General Instrument Corporation Enhanced broadband telephony services
US7694127B2 (en) 2003-12-11 2010-04-06 Tandberg Telecom As Communication systems for traversing firewalls and network address translation (NAT) installations
US7835506B2 (en) 2004-10-12 2010-11-16 Cox Communications, Inc. Method and system for real-time notification and disposition of voice services in a cable services network
US7636578B1 (en) 2004-12-30 2009-12-22 Sprint Communications Company L.P. Method and system to provide text messages via a host device connected to a media-delivery network
US8194640B2 (en) 2004-12-31 2012-06-05 Genband Us Llc Voice over IP (VoIP) network infrastructure components and method
US8045700B2 (en) 2006-10-25 2011-10-25 At&T Intellectual Property I, L.P. System and method of providing voice communication
US8971883B2 (en) 2006-11-07 2015-03-03 Qualcomm Incorporated Registration timer adjustment based on wireless network quality
US7561575B2 (en) 2006-11-14 2009-07-14 Cisco Technology, Inc. Mechanisms for providing intelligent throttling on a nat session border controller
US7626929B2 (en) 2006-12-28 2009-12-01 Genband Inc. Methods and apparatus for predictive call admission control within a media over internet protocol network
US7774481B2 (en) 2006-12-29 2010-08-10 Genband Us Llc Methods and apparatus for implementing a pluggable policy module within a session over internet protocol network
US20080168503A1 (en) 2007-01-08 2008-07-10 General Instrument Corporation System and Method for Selecting and Viewing Broadcast Content Based on Syndication Streams
US8018918B2 (en) 2007-06-29 2011-09-13 Genband Us Llc Methods and apparatus for dual-tone multi-frequency signal conversion within a media over internet protocol network


Also Published As

Publication number Publication date
US8773981B2 (en) 2014-07-08
US20120140624A1 (en) 2012-06-07

Similar Documents

Publication Publication Date Title
US8773981B2 (en) Systems and methods for dynamic congestion management in communications networks
JP6335293B2 (en) Method and system for LTE multi-carrier load balancing based on user traffic profile
EP2959707B1 (en) Network security system and method
US9026851B2 (en) System and method for intelligent troubleshooting of in-service customer experience issues in communication networks
US20210185071A1 (en) Providing security through characterizing mobile traffic by domain names
JP5461689B2 (en) Method and system for targeted offers to mobile users
US8335161B2 (en) Systems and methods for network congestion management using radio access network congestion indicators
US8885466B1 (en) Systems for selective activation of network management policies of mobile devices in a mobile network
US20150295808A1 (en) System and method for dynamically monitoring, analyzing, managing, and alerting packet data traffic and applications
US10863383B2 (en) Tagging and metering network traffic originating from tethered stations
US9787484B2 (en) Adapting PCC rules to user experience
US9300685B2 (en) Detecting altered applications using network traffic data
US9191444B2 (en) Intelligent network management of network-related events
US20150333986A1 (en) Predicting video engagement from wireless network measurements
US20140269279A1 (en) Triggering congestion control for radio aware applications
US20150215187A1 (en) Data Services in a Computer System
US11689426B2 (en) System and method for applying CMTS management policies based on individual devices
US20190372897A1 (en) Systems and methods for congestion measurements in data networks via qos availability
WO2020109853A1 (en) Optimized resource management based on predictive analytics
Theera-Ampornpunt et al. Using big data for more dependability: a cellular network tale
US20120315893A1 (en) Intelligent network management of subscriber-related events
CN110324802B (en) Flow charging method and MME
US20240107314A1 (en) Fake network-utilization detection for independent cellular access points
WO2023179871A1 (en) Technique for triggering congestion mitigation actions in an o-ran-compliant communication network
WO2013174416A1 (en) Network usage event data record handling

Legal Events

Date Code Title Description
AS Assignment

Owner name: GENBAND INC., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DENMAN, ROBERT E.;KEMMERER, FREDERICK C.;REEL/FRAME:034949/0680

Effective date: 20111205

AS Assignment

Owner name: GENBAND US LLC, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:GENBAND INC.;REEL/FRAME:035076/0432

Effective date: 20140709

AS Assignment

Owner name: SILICON VALLEY BANK, AS ADMINISTRATIVE AGENT, CALIFORNIA

Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:GENBAND US LLC;REEL/FRAME:039269/0234

Effective date: 20160701


STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: SILICON VALLEY BANK, AS ADMINISTRATIVE AGENT, CALIFORNIA

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT PATENT NO. 6381239 PREVIOUSLY RECORDED AT REEL: 039269 FRAME: 0234. ASSIGNOR(S) HEREBY CONFIRMS THE PATENT SECURITY AGREEMENT;ASSIGNOR:GENBAND US LLC;REEL/FRAME:041422/0080

Effective date: 20160701

Owner name: SILICON VALLEY BANK, AS ADMINISTRATIVE AGENT, CALIFORNIA

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE REMOVE PATENT NO. 6381239 PREVIOUSLY RECORDED AT REEL: 039269 FRAME: 0234. ASSIGNOR(S) HEREBY CONFIRMS THE PATENT SECURITY AGREEMENT;ASSIGNOR:GENBAND US LLC;REEL/FRAME:041422/0080

Effective date: 20160701


AS Assignment

Owner name: GENBAND US LLC, TEXAS

Free format text: TERMINATION AND RELEASE OF PATENT SECURITY AGREEMENT;ASSIGNOR:SILICON VALLEY BANK, AS ADMINISTRATIVE AGENT;REEL/FRAME:044986/0303

Effective date: 20171221

AS Assignment

Owner name: RIBBON COMMUNICATIONS OPERATING COMPANY, INC., MASSACHUSETTS

Free format text: MERGER;ASSIGNOR:GENBAND US LLC;REEL/FRAME:053223/0260

Effective date: 20191220