WO2010081365A1 - Processing method and system for preventing congestion - Google Patents

Processing method and system for preventing congestion Download PDF

Info

Publication number
WO2010081365A1
Authority
WO
WIPO (PCT)
Prior art keywords
queue
information
output end
size
input
Prior art date
Application number
PCT/CN2009/075635
Other languages
French (fr)
Chinese (zh)
Inventor
赖伟
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Publication of WO2010081365A1 publication Critical patent/WO2010081365A1/en

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/50 Overload detection or protection within a single switching element

Definitions

  • The present invention relates to the field of communications, and in particular to a processing method and system for preventing congestion.
  • In essence, the Internet is a collection of heterogeneous networks connected by core routers, and as network services develop the capacity of core routers will continue to grow. FIG. 1 is a schematic diagram of the architecture of a current core router.
  • The core switching module (comprising the switching access chip and the switching chip in FIG. 1) is the bridge connecting the input end (Ingress) and the output end (Egress), and is the core device with which the router implements packet forwarding.
  • Typically, a Weighted Random Early Discard (WRED) detection function is applied at each input end of the core switching network to avoid congestion on the input side.
  • WRED detection processes the packets received at the input end according to queue parameters, weights, or buffer occupancy, either discarding them or adding them to the corresponding queue.
  • WRED detection can regulate the traffic sent from the input end, preventing the entire network from entering a congestion state in which all packets would be discarded.
  • WRED detection algorithms are by now relatively mature. However, because input-based WRED detection acts only on the input end and cannot be applied to the output end, packet congestion may still occur at the output end, and once it does, packets may have to be discarded.
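For concreteness, the following is a minimal sketch of a WRED-style drop decision. The linear drop-probability curve, the thresholds, and the parameter names are illustrative assumptions; the text above does not fix a particular WRED parameterization.

```python
import random

def wred_should_drop(avg_queue_len, min_th, max_th, max_drop_prob):
    """Classic WRED decision: no drops below min_th, certain drop above max_th,
    and a linearly increasing drop probability in between.
    All parameters are illustrative; real deployments tune them per queue and
    per drop precedence (the "weighted" part of WRED)."""
    if avg_queue_len < min_th:
        return False
    if avg_queue_len >= max_th:
        return True
    drop_prob = max_drop_prob * (avg_queue_len - min_th) / (max_th - min_th)
    return random.random() < drop_prob

# Example: a committed-bandwidth (high-priority) packet uses laxer thresholds
# than an excess-bandwidth (low-priority) packet sharing the same output.
print(wred_should_drop(avg_queue_len=70, min_th=60, max_th=120, max_drop_prob=0.1))  # high priority
print(wred_should_drop(avg_queue_len=70, min_th=30, max_th=80,  max_drop_prob=0.5))  # low priority
```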
  • FIG. 2 shows congestion occurring at an output end of the core switching network. As shown in FIG. 2, two data streams of the same priority level (Flow A and Flow B) arrive from device A and device B respectively, each with a rate of 1G.
  • The two streams are output simultaneously to a single port (the switching access chip shown on the right side of FIG. 2), and that output port also has a capacity of 1G. Assume the packets in Flow A are committed bandwidth and the packets in Flow B are excess bandwidth, that is, the packets in Flow A have a higher priority than the packets in Flow B. Under the current processing, half of the packets in Flow A will be discarded. Discarding high-priority data in this way is clearly detrimental to the operation and performance of the system.
  • In view of the problem that the output end cannot correctly process packets when packet congestion occurs there, an improved anti-congestion scheme is provided. An anti-congestion processing method is applied to a system layer (for example, the core switching network described above), where the system layer includes an input end and an output end.
  • The anti-congestion processing method includes: the input end sends the size information of each queue to be sent to the corresponding output end; the input end receives and saves the queue information from each output end, where the queue information includes the port information of the output end and queue size information, the queue size information indicating the sum of the sizes of the queues at that output end; and, according to the queue information and a predetermined rule, the input end processes the received packets waiting to be enqueued.
  • The predetermined rule is to perform weighted random early discard detection on the queue that a received packet is to join.
  • The size information of each queue to be sent can be delivered to the corresponding output end in either of two ways: the input end sends it periodically, or the input end sends it whenever the size of a queue changes.
  • After the size information of each queue to be sent has been delivered to the corresponding output end, each output end feeds the queue information back to the input end, either periodically or aperiodically. In addition, if the queue size indicated by the queue information most recently fed back by an output end is larger than the queue size indicated by the previously saved queue information, the previously saved queue information is updated with the latest feedback.
  • The processing that the input end performs on a received packet waiting to be enqueued includes: determining, from the destination address of the packet, the queue that the packet needs to join, and further determining the output end corresponding to that queue; obtaining the queue information of that output end; and discarding or enqueuing the packet according to the queue size indicated by the queue information and the result of weighted random early discard detection.
  • An anti-congestion system is also provided, including an input end and an output end, where the input end includes a first receiving module for receiving packets.
  • The input end further includes: a first sending module, configured to send the size information of each queue to be sent to the corresponding output end; a second receiving module, configured to receive and save the queue information from each output end, where the queue information includes the port information of the output end and queue size information, the queue size information indicating the sum of the sizes of the queues at that output end; and a processing module, configured to process the packets to be enqueued received by the first receiving module according to a predetermined rule and the queue information received by the second receiving module.
  • The output end includes: a third receiving module, configured to receive the information sent by the first sending module; a computing module, configured to calculate the queue size information of the output end from the information received by the third receiving module; and a second sending module, configured to send the queue size information calculated by the computing module to the second receiving module.
  • The input end may further include: an update module, configured to update the previously saved queue information; and a first timer, configured to periodically send the size information of each queue to be sent to the output end.
  • The output end may further include: a second timer, configured to periodically feed the queue information back to the input end.
  • FIG. 1 is a schematic structural diagram of a core router in the related art
  • FIG. 2 is a schematic diagram of a congestion state of an output port in a core switching network in the related art
  • FIG. 3 is a flowchart of an anti-congestion processing method according to an embodiment of the present invention;
  • FIG. 4 is a block diagram of an anti-congestion system according to an embodiment of the present invention; and
  • FIG. 5 is a schematic structural diagram of an anti-congestion system according to an embodiment of the present invention.
  • FIG. 3 is a flowchart of an anti-congestion processing method according to an embodiment of the present invention.
  • the anti-congestion processing method includes the following steps (steps S302-S306).
  • Step S302 the input end sends the size information of each queue to be sent to the corresponding one or more output ends.
  • In a specific implementation, the input end can send the size information in either of two ways. (Method 1) The size information of each queue to be sent is sent periodically to the corresponding output end; that is, a timer is set at the input end, and when the timer expires the size information of each queue to be sent is sent to the corresponding output end.
  • (Method 2) When the size of a queue changes, the size information of each queue to be sent is sent to the corresponding output end.
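A small sketch of the two notification modes just described, timer-driven and change-driven. The cell payload layout and the send_cell transport hook are assumptions made for the example only; a real implementation would also manage and cancel the timer.

```python
import threading

class IngressQueueNotifier:
    """Reports per-queue sizes to the corresponding egress, either on a timer
    (Method 1) or whenever a queue size changes (Method 2)."""

    def __init__(self, send_cell, period_s=None):
        self.queue_sizes = {}        # (egress_id, queue_id) -> size
        self.send_cell = send_cell   # assumed transport hook: send_cell(egress_id, payload)
        if period_s is not None:     # Method 1: periodic timer
            self._tick(period_s)

    def _tick(self, period_s):
        self.report_all()
        threading.Timer(period_s, self._tick, args=(period_s,)).start()

    def report_all(self):
        for (egress_id, queue_id), size in self.queue_sizes.items():
            self.send_cell(egress_id, {"queue_id": queue_id, "size": size})

    def on_queue_size_change(self, egress_id, queue_id, new_size):
        # Method 2: notify the egress as soon as the size changes.
        self.queue_sizes[(egress_id, queue_id)] = new_size
        self.send_cell(egress_id, {"queue_id": queue_id, "size": new_size})
```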
  • Step S304 the input end respectively receives and saves the queue information from each output end, wherein the queue information may include port information of the output end and queue size information, and the queue size information indicates a sum of the size of each queue of the output end.
  • After receiving the information sent by the input end in step S302, each output end first calculates the queue size information that needs to be processed on its own side (that is, the queue information mentioned above), and then feeds the queue information back to each input end periodically or aperiodically, so that the input end can know in advance the amount of data traffic that needs to be processed at the output end.
  • In a specific implementation, the input side may maintain a Destination Port Queue Size Table, which contains the address of each output end together with the queue information fed back by that output end and is used to track the traffic of each output end. Using an address in the table, the input side can look up the queue information fed back by the output end corresponding to that address.
  • The initial value of each output end's queue information in the table can be set to 0, to be updated as queue information is received from the output ends. If the queue size indicated by the most recently fed-back queue information is larger than the queue size indicated by the previously saved queue information, the previously saved queue information is updated with the latest feedback. That is, each time the input end receives queue information (which may be in the form of a cell), it compares the queue size indicated by the received queue information with the value stored for the corresponding address in the table; if the received value is larger, the table entry is updated with the received queue information, otherwise the table is left unchanged.
  • Step S306, according to the queue information and the predetermined rule, the input end processes the received packets waiting to be enqueued.
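A minimal sketch of the Destination Port Queue Size Table just described. The dictionary layout and method names are assumptions; the update rule (entries start at 0 and keep the larger of the stored and newly reported sizes) follows the text above.

```python
class DestPortQueueSizeTable:
    """Ingress-side table: egress address -> last reported total queue size."""

    def __init__(self, egress_addresses):
        # The initial value for every egress is 0, as described above.
        self.table = {addr: 0 for addr in egress_addresses}

    def on_queue_size_cell(self, egress_addr, reported_size):
        # Update only when the newly reported size exceeds the stored value;
        # otherwise the table is left unchanged.
        if reported_size > self.table.get(egress_addr, 0):
            self.table[egress_addr] = reported_size

    def lookup(self, egress_addr):
        return self.table.get(egress_addr, 0)
```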
  • The predetermined rule here can preferably be implemented with the weighted random early discard detection technique of the prior art; that technique is by now relatively mature and is not described further here.
  • Specifically, when the input end receives a packet, it first determines from the destination address carried by the packet the queue that the packet needs to join, and further determines the output end corresponding to that queue, thereby obtaining the queue information of that output end; then, according to the queue size indicated by the queue information and combined with weighted random early discard detection, the packet is discarded or enqueued.
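Putting the pieces together, here is a sketch of the per-packet decision at the input end. route_to_queue, enqueue, and the WRED thresholds are placeholders for whatever a real implementation provides; wred_should_drop and DestPortQueueSizeTable refer to the earlier sketches and are passed in as parameters.

```python
def process_ingress_packet(packet, dest_table, route_to_queue, wred_should_drop, enqueue):
    """Drop or enqueue one packet using the egress queue size fed back earlier.

    packet          : object with a .dest_addr attribute (assumed)
    dest_table      : DestPortQueueSizeTable instance (see sketch above)
    route_to_queue  : maps a destination address to (egress_addr, queue_id) (assumed)
    wred_should_drop: WRED decision function (see earlier sketch)
    enqueue         : places the packet on the local virtual queue (assumed)
    """
    egress_addr, queue_id = route_to_queue(packet.dest_addr)   # step 1: find queue and egress
    egress_backlog = dest_table.lookup(egress_addr)            # step 2: fetch fed-back queue size
    if wred_should_drop(egress_backlog, min_th=30_000, max_th=120_000, max_drop_prob=0.5):
        return "dropped"                                       # step 3: WRED says drop
    enqueue(queue_id, packet)
    return "enqueued"
```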
  • In this way, received packets are processed in advance at the input end, which relieves packet congestion at the output end ahead of time and thus avoids the problem of packets being handled incorrectly at the output end.
  • As can be seen from the above description, by having the input end send the size information of each queue to be sent to the corresponding output end, each output end can feed back the amount of traffic it needs to process, so that the input end can process received packets (discarding them or adding them to the corresponding queue) based on the feedback from the output end combined with weighted random early discard detection.
  • The following describes the embodiment further from the system perspective, that is, from the input side and the output side separately.
  • (1) Input side. Step 1: the input side sends the size of each queue (in the form of cells) through the core switching network to each output end, to notify each output end of the amount of traffic it will have to process. This step corresponds to step S302 above.
  • Step 2: the input side sets up the Destination Port Queue Size Table and sets the values in the table according to the received "port queue size cells". Each time, the input side compares the queue size indicated in a received port queue size cell with the value stored for the corresponding address; if the received value is larger, the value in the corresponding address is updated with the indicated queue size, otherwise it is left unchanged. This step corresponds to step S304 above.
  • Step 3: the input side processes packets waiting to be enqueued. Using the destination address carried by a packet, it looks up the queue size of the corresponding output end in the Destination Port Queue Size Table and processes the packet in combination with the weighted random early discard technique, so as to avoid congestion at the output end. This step corresponds to step S306 above.
  • (2) Output side. Step 1: the output end can maintain a local Output Port Queue Size Table. Since one output end can have multiple queues corresponding to different input ends, one output end may receive multiple "queue size cells"; the output end adds up the queue sizes indicated by these cells to obtain the total queue size and writes this value into the Output Port Queue Size Table.
  • Step 2: the value in the Output Port Queue Size Table is carried in a "port queue size cell" and sent through the core switching network to each input end, to notify each input end of the amount of traffic the output end has to process locally.
  • Step 3: the output end optimizes the Output Port Queue Size Table, that is, it resets the table to 0 periodically or aperiodically. Similar to step 2 on the input side, the output end compares the newly calculated total queue size with the value in the table each time; if the calculated value is larger, the table is updated with it, otherwise the value in the table is left unchanged.
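A sketch of the egress-side bookkeeping described in steps 1 to 3. The helper names and the reset trigger are assumed; the behaviour (sum the per-ingress queue size cells, keep the running maximum, report it back in a cell, and periodically reset to 0) mirrors the steps above.

```python
class OutputPortQueueSizeTable:
    """Egress-side table: sums queue size cells from all ingresses and feeds the
    total back, keeping only the maximum seen since the last reset."""

    def __init__(self, send_cell_to_ingresses):
        self.total = 0
        self.latest = {}                          # ingress_id -> last reported size
        self.send_cell = send_cell_to_ingresses   # assumed transport hook

    def on_queue_size_cell(self, ingress_id, size):
        self.latest[ingress_id] = size
        candidate = sum(self.latest.values())     # Step 1: total over all ingress queues
        if candidate > self.total:                # keep the larger value, like the ingress table
            self.total = candidate

    def report(self):
        # Step 2: carry the table value in a "port queue size cell" to every ingress.
        self.send_cell({"total_queue_size": self.total})

    def reset(self):
        # Step 3: periodic or aperiodic optimisation, set the table back to 0.
        self.total = 0
```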
  • It should be noted that, for convenience of description, the technical solution of the method embodiment above is shown and described as a sequence of steps that can be executed in a computer system such as a set of computer-executable instructions; although a logical order is shown, in some cases the steps may be performed in an order different from the one described here.
  • In an embodiment of the present invention, an anti-congestion system is also provided, which can preferably be used to implement the method provided by the foregoing method embodiment.
  • FIG. 4 is a block diagram of the anti-congestion system according to this embodiment.
  • As shown in FIG. 4, the anti-congestion system includes an input end 1 and an output end 2.
  • In practice, one input end can correspond to multiple output ends, and one output end can correspond to multiple input ends. For convenience of description, one output end and one input end are illustrated in FIG. 4 as an example.
  • The input end 1 includes: a first receiving module 10, a first sending module 12, a second receiving module 14, and a processing module 16.
  • The output end 2 includes: a third receiving module 20, a computing module 22, and a second sending module 24. Each module is described below.
  • The first receiving module 10 is configured to receive packets. The first sending module 12 is configured to send the size information of each queue to be sent to the corresponding output end.
  • The second receiving module 14 is configured to receive and save the queue information from each output end, where the queue information includes the port information of the output end and queue size information, the queue size information indicating the sum of the sizes of the queues at that output end.
  • The processing module 16 is connected to the first receiving module 10 and the second receiving module 14, and is configured to process the packets to be enqueued received by the first receiving module 10 according to a predetermined rule and the queue information received by the second receiving module 14.
  • The third receiving module 20 is configured to receive the information sent by the first sending module 12.
  • The computing module 22 is connected to the third receiving module 20 and is configured to calculate the queue size information of the output end from the information received by the third receiving module 20.
  • The second sending module 24 is connected to the computing module 22 and is configured to send the queue size information of the output end calculated by the computing module 22 to the second receiving module 14.
  • the predetermined rule here may be the weighted random early discard detection technique in the above method embodiment.
  • The input end 1 may further include an update module (not shown) for updating the previously saved queue information.
  • In addition, the input end 1 and the output end 2 may each include a timer (not shown), where the timer of the input end 1 is used to periodically send the size information of each queue to be sent to the output end, and the timer of the output end 2 is used to periodically feed the queue information back to the input end.
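The module split of FIG. 4 can be pictured as two cooperating objects. The skeleton below only shows the wiring; all class and method names are illustrative, the timers are indicated only in comments, and the processing module body is left empty (see the earlier sketches for the WRED decision).

```python
class InputEnd:
    """Input end 1 of FIG. 4; method names mirror the module names in the text."""

    def __init__(self, send_to_fabric):
        self.saved_queue_info = {}             # egress port -> total queue size (state behind module 14)
        self.send_to_fabric = send_to_fabric   # assumed transport hook
        # A periodic first timer would call first_sending_module() here.

    def first_receiving_module(self, packet):
        self.processing_module(packet)

    def first_sending_module(self, queue_sizes):
        # Send the size of every queue to be sent to the corresponding output end.
        for egress, size in queue_sizes.items():
            self.send_to_fabric(egress, {"queue_size": size})

    def second_receiving_module(self, egress, total_size):
        # Update-module behaviour: keep the larger of the saved and newly reported values.
        if total_size > self.saved_queue_info.get(egress, 0):
            self.saved_queue_info[egress] = total_size

    def processing_module(self, packet):
        pass  # WRED decision against saved_queue_info, as in the earlier sketches


class OutputEnd:
    """Output end 2 of FIG. 4."""

    def __init__(self, send_to_fabric):
        self.per_ingress = {}                  # ingress -> last reported queue size
        self.send_to_fabric = send_to_fabric
        # A periodic second timer would call second_sending_module() here.

    def third_receiving_module(self, ingress, queue_size):
        self.per_ingress[ingress] = queue_size

    def computing_module(self):
        return sum(self.per_ingress.values())  # sum of all queue sizes at this output end

    def second_sending_module(self):
        self.send_to_fabric("all-ingress-ports", {"queue_size": self.computing_module()})
```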
  • As can be seen from the above description, the first sending module 12 and the second sending module 24 enable the output end and the input end to interact, so that the input end can process received packets accordingly; this relieves packet congestion at the output end in advance and thus avoids the problem of packets being handled incorrectly at the output end.
  • FIG. 5 is a schematic structural diagram of an anti-congestion system according to an embodiment of the present invention. As shown in FIG. 5, the structures of the input side and the output side are described in detail below in conjunction with actual applications.
  • (1) Input side. The virtual destination port queue module 501, corresponding to the second receiving module 14, is configured to store the Destination Port Queue Size Table, so that for a packet waiting to be enqueued, the corresponding destination port queue size can be looked up in the virtual destination port queue module 501 according to the destination address carried by the packet.
  • The weighted random early discard module 502, corresponding to the processing module 16, is configured to select a drop policy according to the queue number of the queue the packet is to join and the corresponding destination port queue size, and to process the received packet according to the result of the drop policy; if the result of the drop policy is to discard, the packet is discarded.
  • The virtual queue module 503 is configured to send the size information of each queue, in cell format, to the output end corresponding to that queue.
  • The sending module 504 on the input side is responsible for sending packets and queue size information (in cell format) to the switch chip for switching. The virtual queue module 503 and the sending module 504 correspond to the first sending module 12 described above.
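The text does not define the on-wire layout of a queue size cell; the fixed little-endian layout below is purely an assumed example of how the virtual queue module 503 might encode one.

```python
import struct

CELL_FMT = "<BHIQ"   # assumed layout: cell type, ingress port, queue id, queue size
CELL_TYPE_QUEUE_SIZE = 0x01

def pack_queue_size_cell(ingress_port, queue_id, queue_size):
    return struct.pack(CELL_FMT, CELL_TYPE_QUEUE_SIZE, ingress_port, queue_id, queue_size)

def unpack_queue_size_cell(raw):
    cell_type, ingress_port, queue_id, queue_size = struct.unpack(CELL_FMT, raw)
    assert cell_type == CELL_TYPE_QUEUE_SIZE
    return ingress_port, queue_id, queue_size

cell = pack_queue_size_cell(ingress_port=3, queue_id=12, queue_size=65536)
print(unpack_queue_size_cell(cell))   # (3, 12, 65536)
```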
  • (2) Output side. The cell classification module 505 is responsible for classifying the various cells received from the switch chip.
  • The virtual destination port queue module 506, corresponding to the third receiving module 20 and the computing module 22, is configured to update the Output Port Queue Size Table with the received queue size information according to the output end that the queue size information corresponds to, and may send the value in the Output Port Queue Size Table, in cell format, to the switch chip periodically or aperiodically.
  • The port virtual queue module 507 is configured to enqueue the packets sent by the input end while they wait to be output; the port module 508 is configured to schedule the output of packets to complete the packet switching process.
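A sketch of how the egress-side modules of FIG. 5 might cooperate. The classification tag, the method names on the passed-in objects, and the scheduling loop are assumptions for illustration.

```python
def egress_dispatch(cell, virtual_dest_port_queue_506, port_virtual_queue_507):
    """Cell classification module 505: route queue-size cells to the table logic
    and data cells to the per-port output queue."""
    if cell.get("type") == "queue_size":             # assumed tag set by the ingress
        virtual_dest_port_queue_506.on_queue_size_cell(cell["ingress_id"], cell["size"])
    else:
        port_virtual_queue_507.append(cell)          # data packet: wait to be scheduled out

def port_module_508(port_virtual_queue_507, transmit):
    """Port module 508: schedule queued packets out to complete the switching process."""
    while port_virtual_queue_507:
        transmit(port_virtual_queue_507.pop(0))
```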
  • In summary, through the above embodiments of the present invention, the input end sends the size information of each queue to be sent to the corresponding output end, and the output end then returns queue information to the input end, so that the input end can process received packets based on that queue information.
  • Compared with the prior art, the interaction between the input end and the output end allows packet congestion at the output end to be relieved in advance, at the input end, thereby avoiding the problem of packets being handled incorrectly at the output end.
  • the above modules or steps of the present invention can be implemented by a general-purpose computing device, which can be concentrated on a single computing device or distributed over a network composed of multiple computing devices.
  • the invention is not limited to any specific combination of hardware and software.
  • The above are only preferred embodiments of the present invention and are not intended to limit it; those skilled in the art can make various modifications and variations to the present invention. Any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the present invention are intended to be included within its scope of protection.

Abstract

A processing method and system for preventing congestion are provided. In the method, the input end sends the size information of each queue to be sent to the corresponding output end; the input end receives and saves the queue information from each output end, where the queue information includes the port information of the output end and queue size information, the queue size information indicating the sum of the sizes of the queues at that output end; and the input end processes received packets waiting to be enqueued based on the queue information and a predetermined rule. With the present invention, packet congestion at the output end is relieved in advance, and the problem of packets being handled incorrectly at the output end is thus avoided.

Description


Claims

1. An anti-congestion processing method, applied to a system layer, wherein the system layer includes an input end and an output end, characterized in that the method comprises:
   the input end sending size information of each queue to be sent to the corresponding output end;
   the input end respectively receiving and saving queue information from each output end, wherein the queue information includes port information of the output end and queue size information, the queue size information indicating a sum of the sizes of the queues at the output end; and
   the input end processing received packets to be enqueued according to the queue information and a predetermined rule.

2. The method according to claim 1, characterized in that the input end sending the size information of each queue to be sent to the corresponding output end comprises one of the following:
   the input end periodically sending the size information of each queue to be sent to the corresponding output end;
   the input end sending the size information of each queue to be sent to the corresponding output end when the size of a queue changes.

3. The method according to claim 2, characterized in that, after the input end sends the size information of each queue to be sent to the corresponding output end, the method further comprises:
   each output end feeding the queue information back to the input end periodically or aperiodically.

4. The method according to claim 3, characterized by further comprising:
   if the queue size indicated by the queue information most recently fed back by the output end is larger than the queue size indicated by the previously saved queue information, updating the previously saved queue information with the most recently fed-back queue information.

5. The method according to claim 1, characterized in that the predetermined rule is: performing weighted random early discard detection on the queue that a received packet is to join.

6. The method according to claim 5, characterized in that the input end processing the received packets to be enqueued according to the queue information and the predetermined rule comprises:
   determining, according to a destination address of a packet to be enqueued, the queue that the packet needs to join, and further determining the output end corresponding to that queue;
   obtaining the queue information of the corresponding output end; and
   discarding or enqueuing the packet according to the queue size indicated by the queue information and the result of the weighted random early discard detection.

7. An anti-congestion system, comprising an input end and an output end, the input end including a first receiving module for receiving packets, characterized in that
   the input end further includes:
      a first sending module, configured to send size information of each queue to be sent to the corresponding output end;
      a second receiving module, configured to respectively receive and save queue information from each output end, wherein the queue information includes port information of the output end and queue size information, the queue size information indicating a sum of the sizes of the queues at the output end; and
      a processing module, configured to process packets to be enqueued received by the first receiving module according to a predetermined rule and the queue information received by the second receiving module;
   and the output end includes:
      a third receiving module, configured to receive information sent by the first sending module;
      a computing module, configured to calculate queue size information of the output end according to the information received by the third receiving module; and
      a second sending module, configured to send the queue size information of the output end calculated by the computing module to the second receiving module.

8. The system according to claim 7, characterized in that the input end further includes an update module, configured to update the previously saved queue information.

9. The system according to claim 7, characterized in that the input end further includes a first timer, configured to periodically send the size information of each queue to be sent to the output end.

10. The system according to claim 7, characterized in that the output end further includes a second timer, configured to periodically feed the queue information back to the input end.
PCT/CN2009/075635 2009-01-16 2009-12-16 Processing method and system for preventing congestion WO2010081365A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN200910005402XA CN101783763B (en) 2009-01-16 2009-01-16 Congestion prevention processing method and system
CN200910005402.X 2009-01-16

Publications (1)

Publication Number Publication Date
WO2010081365A1 true WO2010081365A1 (en) 2010-07-22

Family

ID=42339430

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2009/075635 WO2010081365A1 (en) 2009-01-16 2009-12-16 Processing method and system for preventing congestion

Country Status (2)

Country Link
CN (1) CN101783763B (en)
WO (1) WO2010081365A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104125157B (en) * 2013-04-25 2017-07-07 联发科技股份有限公司 Process circuit and processing method based on Random early detection
CN103312566B (en) * 2013-06-28 2016-05-18 盛科网络(苏州)有限公司 The method that detection messages port is congested and device
CN112751778A (en) * 2019-10-30 2021-05-04 阿里巴巴集团控股有限公司 Data transmission control method and device, congestion detection and device and server system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5768257A (en) * 1996-07-11 1998-06-16 Xylan Corporation Input buffering/output control for a digital traffic switch
CN1272992A (en) * 1998-06-16 2000-11-08 阿尔卡塔尔公司 Digital traffic switch with credit-based buffer control
CN1878144A (en) * 2006-07-14 2006-12-13 华为技术有限公司 Multi-queue flow control method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6404752B1 (en) * 1999-08-27 2002-06-11 International Business Machines Corporation Network switch using network processor and methods
KR100716184B1 (en) * 2006-01-24 2007-05-10 삼성전자주식회사 Apparatus and method for a queue management of network processor

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5768257A (en) * 1996-07-11 1998-06-16 Xylan Corporation Input buffering/output control for a digital traffic switch
CN1272992A (en) * 1998-06-16 2000-11-08 阿尔卡塔尔公司 Digital traffic switch with credit-based buffer control
CN1878144A (en) * 2006-07-14 2006-12-13 华为技术有限公司 Multi-queue flow control method

Also Published As

Publication number Publication date
CN101783763B (en) 2012-06-06
CN101783763A (en) 2010-07-21

Similar Documents

Publication Publication Date Title
EP3516833B1 (en) Methods, systems, and computer readable media for discarding messages during a congestion event
CN109039936B (en) Transmission rate control method, device, sending equipment and receiving equipment
US20210297350A1 (en) Reliable fabric control protocol extensions for data center networks with unsolicited packet spraying over multiple alternate data paths
US8953631B2 (en) Interruption, at least in part, of frame transmission
US20210297351A1 (en) Fabric control protocol with congestion control for data center networks
US20050213507A1 (en) Dynamically provisioning computer system resources
US9276866B2 (en) Tuning congestion notification for data center networks
WO2014141006A1 (en) Scalable flow and congestion control in a network
WO2006069511A1 (en) A method for preventing ip multicast data stream to overload communication system by distinguishing all kinds of services
WO2015172668A1 (en) Method and device for determining congestion window in network
US11223568B2 (en) Packet processing method and apparatus
WO2017097201A1 (en) Data transmission method, transmission device and receiving device
CN111464452A (en) Fast congestion feedback method based on DCTCP
CN107566293B (en) Method and device for limiting message speed
CN113726671B (en) Network congestion control method and related products
WO2010081365A1 (en) Processing method and system for preventing congestion
CN111224888A (en) Method for sending message and message forwarding equipment
US9537764B2 (en) Communication apparatus, control apparatus, communication system, communication method, method for controlling communication apparatus, and program
CN111431812A (en) Message forwarding control method and device
Hu et al. Dynamic queuing sharing mechanism for per-flow quality of service control
US20210297343A1 (en) Reliable fabric control protocol extensions for data center networks with failure resilience
WO2016061985A1 (en) Packet processing method, device, and system
EP2897332A1 (en) Controller, communication system, communication method and program
US20070280685A1 (en) Method of Optimising Connection Set-Up Times Between Nodes in a Centrally Controlled Network
Minkenberg et al. Design and performance of speculative flow control for high-radix datacenter interconnect switches

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09838152

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09838152

Country of ref document: EP

Kind code of ref document: A1