WO2008028376A1 - A method for improving the fair efficiency in the resilient packet ring and the apparatus thereof - Google Patents


Info

Publication number
WO2008028376A1
Authority
WO
WIPO (PCT)
Prior art keywords
data packet
packet
rate
queue
site
Prior art date
Application number
PCT/CN2007/000588
Other languages
French (fr)
Chinese (zh)
Inventor
Feng Liu
Original Assignee
Zte Corporation
Priority date
Filing date
Publication date
Application filed by Zte Corporation filed Critical Zte Corporation
Publication of WO2008028376A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/42 Loop networks
    • H04L12/437 Ring fault isolation or reconfiguration

Definitions

  • The present invention relates to the field of network communication, and in particular to a method and apparatus for improving fairness efficiency in a Resilient Packet Ring (RPR) network.

Background Art
  • Ethernet adopts a best-effort transmission mechanism, which has good scalability and is well suited to today's bursty data services.
  • QoS: Quality of Service
  • SDH: Synchronous Digital Hierarchy
  • Optical Ethernet RPR: Resilient Packet Ring
  • Optical Ethernet RPR combines the advantages of Ethernet and SDH. It defines a Resilient Packet Ring medium access control layer that is independent of the physical layer. It has the advantages of an SDH network: a dual-ring structure in which services are transmitted on the two rings respectively, with a fast protection mechanism completing protection within 50 ms. At the same time it has the characteristics of Ethernet: it is convenient and flexible, supports plug and play and automatic network discovery, and can adapt to bursty data services. In addition, RPR provides spatial reuse, statistical multiplexing, and service classification.
  • Spatial reuse makes full use of the idle bandwidth between any stations on the ring network to improve bandwidth utilization, while statistical multiplexing adjusts each station's fair add traffic through the fairness algorithm so that the ring bandwidth is shared.
  • RPR defines three classes of service: Class A, Class B, and Class C. Class A service guarantees bandwidth, delay, and jitter, meeting the needs of audio and video services. Class B service provides a rated (committed) rate: traffic within that rate is guaranteed, while traffic beyond it is carried best-effort. Class C service has no guaranteed ring bandwidth and is transmitted best-effort, meeting the needs of data services.
  • The services added to the RPR ring can thus be divided into two types: fair services (Class C service, and the portion of Class B service exceeding its rated rate) and unfair services (Class A service, and Class B service within its specified rate range). The unfair portion of Class B service has a guaranteed rate: as long as the added traffic does not exceed the guaranteed rate, it is placed on the ring without restriction, but it is never allowed to exceed the guaranteed rate.
  • The guaranteed rate of each station is fixed, and the sum of the guaranteed rates of all stations is not greater than the total ring rate, ensuring that every station's guaranteed rate can actually be achieved. The difference between the total ring rate and the sum of the guaranteed rates is the remaining rate, which is shared by all stations on the ring. If a station's guaranteed rate is unused and idle, it too can be shared by all stations on the ring.
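The rate budgeting just described can be shown with a small worked example (the numbers and station names are hypothetical, chosen only for illustration):

```python
# Hypothetical rate budget for a ring with three stations carrying
# guaranteed (unfair-service) traffic; units are arbitrary (e.g. Mbit/s).
total_loop_rate = 1000
guaranteed = {"S1": 200, "S2": 150, "S3": 100}

# The sum of guaranteed rates must not exceed the total loop rate,
# so that every station can actually reach its guaranteed rate.
assert sum(guaranteed.values()) <= total_loop_rate

# The remainder is the rate shared fairly by all stations on the ring.
remaining_rate = total_loop_rate - sum(guaranteed.values())
print(remaining_rate)  # 550
```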
  • The technique of the present invention concerns only fair services and does not consider unfair services; the data flows described below are all fair services.
  • The fair frame is defined in the RPR technical standard, which recommends using the fairness algorithm to measure ring traffic and control each station's add rate ("upper ring" rate, i.e. the rate at which a station adds traffic onto the ring), ensuring fairness among stations.
  • Fair services can then share the remaining ring bandwidth fairly, regardless of station position and traffic volume, avoiding the situation in which some stations occupy more ring bandwidth and others less, and finally achieving fair distribution of the ring bandwidth among all stations.
  • In the fairness algorithm, each station counts the packet traffic on the ring and handles four fair rate values — received, calculated, used, and sent — to achieve fair rate control over the entire ring, ensuring that every station accesses the ring bandwidth fairly.
  • the four fair rate values are:
  • Received fair rate: the fair rate received from the downstream station; it constrains the local station's add rate to not exceed this received fair rate.
  • Calculated fair rate: within each fair period, the station computes the recommended fair rate of the local station through a set of fairness formulas, based on the ring traffic, the idle bandwidth capacity, and the station's own add rate.
  • Under fairness-algorithm control, when the recommended rate calculated by one station is smaller than the recommended rates of the other stations, the other stations on the ring select this minimum rate for control, and that station is called the blocking point.
  • The meaning of the blocking point is that the data traffic passing through this station is too large and congestion occurs, so the packets added at this station are restricted; at this time the recommended rate calculated by this station is the smallest on the ring.
  • Used fair rate: the fair rate a station actually uses, equal to the smaller of the received fair rate and the station's own recommended fair rate. That is, the fair rate actually used can exceed neither the received fair rate nor the station's recommended fair rate; it can only be the smaller of the two.
  • Sent fair rate: by counting the traffic added at the station and the traffic passing through it, the station can determine whether its upstream stations are related to the downstream congestion. If the upstream stations have nothing to do with the downstream congestion — the packets arriving from upstream leave the ring here and are not transmitted further along the ring — then the station sends its own recommended fair rate. If the upstream stations are related to the downstream congestion, the station sends its used fair rate, and the upstream add rate must not exceed this fair rate.
  • The station uses the used fair rate to control the add rate of its fair services, ensuring that its actual add rate is not greater than the rate allowed by the blocking point (when the station is related to the blocking point), and avoiding the occupation of excessive ring bandwidth.
  • Each station receives, calculates, uses, and sends the four fair values at the same time, performing these operations once per fair period. Through the synchronization of all stations, these operations achieve a balance of fair services on the ring. Because of the double-ring structure, each station has two such sets of logic controlling the two rings; the two rings work independently and have no relationship with each other. In the following, the outer ring is taken as an example.
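As a sketch only, the per-period bookkeeping at a single station might be modeled as follows; the function and parameter names are illustrative and are not taken from the RPR standard:

```python
def fair_period_step(received_rate, calculated_rate, related_to_congestion):
    """One fair-period operation at a station.

    received_rate:         fair rate received from the downstream station
    calculated_rate:       recommended fair rate this station computed from
                           ring traffic, idle bandwidth, and its own add rate
    related_to_congestion: True if this station's upstream traffic
                           contributes to the downstream congestion

    Returns (use_rate, send_rate).
    """
    # The rate actually used can exceed neither the received fair rate
    # nor the locally calculated recommended rate.
    use_rate = min(received_rate, calculated_rate)

    # If unrelated to the downstream congestion, advertise only the local
    # recommendation upstream; otherwise propagate the minimum of the two.
    if related_to_congestion:
        send_rate = min(received_rate, calculated_rate)
    else:
        send_rate = calculated_rate
    return use_rate, send_rate

# Example: a station receives rate 3 from downstream but calculates 5 itself.
print(fair_period_step(3, 5, True))   # (3, 3): uses and forwards the minimum
```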
  • The specific operation process of the station fairness algorithm is as follows:
  • Taking station S2 as an example: the packets added at S2 travel on the outer ring (ring 0), while the downstream station S3 sends its recommended rate F3 to S2 over the other ring (ring 1).
  • F3 is used to constrain S2's add rate, which cannot exceed the recommended fair rate F3 given by the downstream station S3.
  • Station S2 also calculates its own recommended rate F.
  • The used fair rate of S2 can exceed neither its own recommended rate F nor the downstream recommended fair rate F3; it is the smaller of the two.
  • Station S2 sends a recommended rate F2 to the upstream station S1. If S2's upstream stations are not related to the downstream congestion, S2 sends its own recommended rate F directly, i.e. F2 equals F. If S2's upstream stations are related to the downstream blocking, then F2 equals the minimum of F and F3, and the add rate of the upstream station S1 must not exceed the recommended rate F2.
  • FIG. 2 shows the control process of the fair algorithm:
  • The fair rate received by station S5 from downstream is F6, and its own calculated rate is F5. Because F5 < F6, station S5 selects F5 to control its add rate. Because its upstream stations are related to the downstream congestion, S5 sends the used rate F5 to station S4.
  • Station S4 receives the fair rate F5 from S5, and its own calculated recommended rate is F4. Because F4 < F5, station S4 selects the rate F4 to control its add rate. Because its upstream stations are related to the downstream congestion, S4 sends F4 to the upstream station S3.
  • Station S3 receives the fair rate F4 from S4 and calculates its own fair rate F3. Because F3 > F4, station S3 selects F4 to control its add rate and, since its upstream stations are related to the downstream congestion, sends F4 to the upstream station S2. Similarly, S2 calculates its fair rate F2, but because F2 > F4, S2 selects F4 to control its add rate and sends F4 to S1. And so on: F1 > F4, and station S1 sends F4 to S10.
  • Stations S1, S2, and S3 are all related to the downstream blocking, and F1 > F4, F2 > F4, F3 > F4, so all three select the downstream minimum recommended fair rate F4 — the blocking point's recommended rate — to control their add rates and send it to their respective upstream stations. The maximum add rate of stations S1, S2, and S3 is therefore F4.
  • S4, S5, and S6 are all blocking points, but F4 is the smallest, meaning the blocking at S4 is the most severe; therefore all upstream stations related to S4 use F4 rather than F5 or F6.
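The propagation above can be traced numerically; the rate values below are hypothetical, chosen only so that F4 is the ring minimum:

```python
# Hypothetical rate values satisfying F1, F2, F3 > F4 (the blocking point's
# recommended rate); smaller values mean heavier congestion.
F4 = 4
local_recommended = {"S3": 7, "S2": 8, "S1": 9}

advertised = F4                      # S4 sends its recommended rate upstream
used = {}
for station in ["S3", "S2", "S1"]:   # upstream order, all in the blocked domain
    # Each station uses the smaller of its own recommendation and the
    # advertised downstream rate, and forwards that value upstream.
    used[station] = min(local_recommended[station], advertised)
    advertised = used[station]

# Every station in the blocked domain ends up controlled by F4.
print(used)   # {'S3': 4, 'S2': 4, 'S1': 4}
```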
  • When the fair rate used by a station equals the fair rate of the blocking point, the station is related to the blocking point: its add traffic contributes to the congestion at the blocking point. The stations along the transmission path that use the same fair value form a blocked domain. As shown in Figure 2, stations S10, S1, S2, S3, and S4 all use the same fair rate value and belong to one blocked domain. The following discussion concerns the case of a single blocked domain.
  • The fairness algorithm requires all stations in the blocked domain to control their add rates according to the recommended rate of the blocking point. This ensures that every station can share the ring bandwidth, avoiding the unfairness of some stations occupying more ring bandwidth while others occupy less.
  • When the fairness algorithm performs control, however, it controls the total of all traffic added to the ring: add traffic that does not need to be controlled is controlled as well. Stations S1–S3 can only select F4 to limit their add rates, which protects the add rate at station S4 from becoming too small and causing unfairness.
  • Suppose station S1 has a traffic flow Flow sent from S1 to S3, as shown in Figure 2. Since S1 uses the downstream fair rate F4 to control its add bandwidth, the maximum rate of the flow Flow can only be F4. Yet the blocking occurs at station S4, and Flow does not pass through S4: it leaves the ring at S3 and terminates there. The current implementation of the fairness algorithm is therefore flawed: the blocking point's fair rate F4 is used to control a traffic flow that does not pass through the blocking point, so the rate of that flow cannot be increased.

Summary of the Invention
  • The technical problem to be solved by the present invention is to provide a method and apparatus for improving fairness efficiency in a Resilient Packet Ring, solving the problem that fair traffic not passing through the blocking point is controlled by the blocking point's fair rate, so that its traffic cannot be increased.
  • An apparatus for improving fairness efficiency in a Resilient Packet Ring, comprising:
  • a queue management circuit, for queuing the actual storage addresses of data packets in one column of a queue in the order in which the packets arrive, and queuing the destination station addresses of the packets, in the same queued order, in another column of the queue;
  • a scanning circuit connected to the queue management circuit, for scanning the destination station address of each data packet starting from the first packet in the queue;
  • an analysis circuit connected to the queue management circuit and the scanning circuit, for analyzing, from the received distance between the blocking point and the local station and from the destination station address scanned by the scanning circuit, whether the packet corresponding to that destination address passes through the blocking point, and for controlling the scanning mode of the scanning circuit according to the analysis result;
  • a recommended fair rate control circuit connected to the queue management circuit, for controlling the add rate of data packets according to the station's recommended fair rate, so that the total add rate does not exceed the recommended fair rate, and for allowing packets that do not pass through the blocking point onto the ring while the total add rate does not exceed the recommended fair rate;
  • a use fair rate control circuit connected to the queue management circuit, for controlling the add rate of data packets according to the station's used fair rate, so that the add rate of packets passing through the blocking point does not exceed the used fair rate;
  • a calculation circuit connected to the analysis circuit and the use fair rate control circuit, for extracting the time to live associated with the used fair rate, obtaining from it the distance between the blocking point and the station, and providing that distance information to the analysis circuit.
  • The apparatus for improving fairness in the Resilient Packet Ring further includes a scheduler connected to the queue management circuit, for selecting, when the station has multiple queues, the queue whose packets perform the ring-add operation. The packets controlled by the recommended fair rate control circuit include both packets that do not pass through the blocking point and packets that do; the packets controlled by the use fair rate control circuit are the packets that pass through the blocking point.
  • In the apparatus for improving fairness efficiency in the Resilient Packet Ring of the present invention, when the used fair rate is generated by the local station, the time to live is 255; when the used fair rate is sent by a downstream station, the time to live is less than 255.
  • The distance between the blocking point and the local station is the difference between 255 and the time to live: when the difference is 0, the blocking point is the local station; when the difference is N, the blocking point is the Nth station downstream of the local station, where N is a natural number greater than or equal to 1.
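A minimal sketch of this distance computation, assuming the time to live is read from the received fair frame:

```python
def blocking_point_distance(time_to_live):
    """Hops from the local station to the blocking point.

    A fair frame is generated with a time to live of 255 and decremented
    by one per station, so the distance is 255 minus the received value:
    0 means the blocking point is the local station itself, and N >= 1
    means it is the Nth station downstream.
    """
    return 255 - time_to_live

print(blocking_point_distance(255))  # 0: blocking point is the local station
print(blocking_point_distance(252))  # 3: e.g. S4 as seen from S1 in Figure 2
```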
  • A method for improving fairness efficiency in a Resilient Packet Ring using the apparatus of the present invention, characterized in that it comprises:
  • Step 1: the queue management circuit queues the actual storage addresses of data packets in one column of the queue, and queues the destination station addresses of the packets, in the same queued order, in another column of the queue;
  • Step 2: the calculation circuit extracts the time to live associated with the used fair rate, and obtains the distance between the blocking point and the station from the time to live;
  • Step 3: the analysis circuit analyzes, from that distance and the destination station address scanned by the scanning circuit, whether the packet corresponding to the destination address passes through the blocking point, and controls the scanning mode of the scanning circuit according to the analysis result;
  • Step 4: the recommended fair rate control circuit controls the add rate of data packets so that the total add rate does not exceed the recommended fair rate, and allows packets that do not pass through the blocking point onto the ring while the total add rate does not exceed the recommended fair rate; the use fair rate control circuit controls the add rate of data packets so that the add rate of packets passing through the blocking point does not exceed the used fair rate.
  • The analysis circuit controls the scanning circuit to continue scanning the queue until a packet that does not pass through the blocking point is found or the tail of the queue is reached.
  • When the first data packet in the queue passes through the blocking point and the current add rate of packets passing through the blocking point is greater than the used fair rate, the first packet enters a waiting state until the add rate drops; during this time, packets that do not pass through the blocking point are allowed onto the ring.
  • After a packet that does not pass through the blocking point has been added to the ring, the queue management circuit deletes it from the queue to which it belongs, and the scanning circuit continues to scan subsequent data packets.
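The scanning behaviour of Steps 3 and 4 can be sketched as follows; the packet model (a tuple of storage address and destination distance in hops) is an illustrative simplification, not the patent's actual data layout:

```python
def select_ring_packet(queue, blocking_distance, head_may_add):
    """Pick the index of the next packet allowed onto the ring, or None.

    queue:             list of (storage_address, dest_distance) tuples in
                       queued order; dest_distance is the number of hops
                       to the packet's destination station
    blocking_distance: hops to the blocking point (from the time to live)
    head_may_add:      True while the used fair rate still permits packets
                       that pass through the blocking point to be added
    """
    if not queue:
        return None
    if head_may_add:
        return 0                # normal in-order transmission from the head
    # The head packet is barred: scan forward for the first packet whose
    # destination lies before the blocking point; it may be added out of
    # order, since it is controlled only by the local recommended rate.
    for i, (_, dest_distance) in enumerate(queue):
        if dest_distance < blocking_distance:
            return i
    return None                 # scanned to the tail; nothing eligible

# Head packet (dest 5 hops away) passes the blocking point (3 hops away)
# and is barred; the second packet (dest 2 hops away) may jump the queue.
q = [(0x10, 5), (0x11, 2), (0x12, 6)]
print(select_ring_packet(q, 3, False))  # 1
```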
  • The present invention thus removes the drawback that fair traffic not passing through the blocking point is controlled by the blocking point's fair rate, so that its traffic could not be increased.
  • Figure 1 is a schematic diagram of fairness algorithm control;
  • Figure 2 is a schematic diagram of the control process of the fairness algorithm;
  • Figure 3 is a schematic diagram of the fair frame structure;
  • Figure 4 is a schematic diagram of the apparatus managing ring-add data packets at each station;
  • Figure 5 is a schematic diagram of the apparatus of the present invention;
  • Figure 6 is a flow chart of the method of the present invention.

Preferred Embodiment of the Invention
  • FIG. 3 shows a schematic diagram of the fair frame structure.
  • Each station uses the blocking point's fair rate to control all packets added within the blocked domain, so that even those add packets of the local station that do not pass through the blocking point are controlled by that fair rate.
  • The blocking point's fair rate is carried in a fair frame, which is transmitted from the downstream station to the upstream stations.
  • A station that uses the blocking point's fair rate (i.e. one that finds no free bandwidth after measuring the ring condition) forwards the blocking point's fair rate further upstream.
  • The frame structure carrying the fair rate is shown in Figure 3.
  • The frame contains: time to live (timeToLive), ring control (baseRingControl), source address (sourceMacAddress), fairness control header (fairnessControlHeader), fairness control value (fairnessControlValue), and frame check sequence (frameCheckSequence).
  • Time to live: gives the number of stations on the ring the recommended fair rate has passed. When the fair rate is generated, the value is 255; if the forwarded fair rate remains the same, the value is decremented by one at each station it passes.
  • Source address: gives the address of the station that generated the fair rate, i.e. the station address of the blocking point.
  • Fairness control value: conveys the recommended fair value of the downstream station, that is, the fair rate of the blocking point, which is used to control the add traffic of the upstream stations.
  • When an upstream station continues to forward this fair value, it keeps the fair value and the source address unchanged and decrements the time to live by 1. Other stations can therefore learn the source station address of the fair value, and from the size of the time to live they can determine the distance to that source station, that is, the distance to the blocking point.
  • In the example of Figure 2, station S4 calculates the fair rate F4 with F4 < F5, so S4 sends F4 to S3; the fair frame it sends carries the value F4 and the station information of S4, with a time to live of 255.
  • Station S3 decrements the time to live from 255 to 254 and, because F3 > F4, continues to send F4 to station S2: the fair frame still carries the rate F4 and the source address S4, with a time to live of 254 after the decrement.
  • The fair rate sent on to station S1 is still F4, with the time to live becoming 253; the fair rate sent on to station S10 is still F4, with the time to live becoming 252.
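This relay behaviour can be mirrored in a short sketch; the field names follow the frame layout of Figure 3, but the dictionary representation is an illustrative stand-in for the actual frame encoding:

```python
# Fair frame as generated by the blocking point S4 in the Figure 2 example.
frame_from_s4 = {
    "timeToLive": 255,
    "sourceMacAddress": "S4",      # the blocking point
    "fairnessControlValue": "F4",  # the blocking point's recommended rate
}

def relay(frame):
    """Forward the fair value and source address unchanged, decrementing
    the time to live by one at each station that relays the frame."""
    out = dict(frame)
    out["timeToLive"] -= 1
    return out

at_s3 = relay(frame_from_s4)   # time to live 254 after S3's decrement
at_s2 = relay(at_s3)           # time to live 253
at_s1 = relay(at_s2)           # time to live 252
print(255 - at_s1["timeToLive"])   # 3: S1 is three stations from S4
```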
  • If the data flow Flow added at station S1 does not pass through the blocking point S4, it need not be controlled by F4; it only needs to be controlled by the rate F1 calculated at the local station. Because F1 > F4, the add bandwidth of the data flow Flow increases, which improves the efficiency of packet transmission.
  • The specific implementation is as follows. At each station, queue management is used to classify all packets to be added to the ring and queue them by category.
  • Packets in the same queue are added to the ring in queued order, while packets in different queues have no ordering relationship between them; the add order among queues is controlled by the scheduler.
  • The scheduler determines which queues' packets may currently be added to the ring and which may not. It selects a queue according to priority; once a queue is selected, its packets are added in queued order. Whether a packet can actually be added also depends on whether the queue is controlled by the fairness algorithm: a queue not under fairness control (e.g. unfair services) adds packets directly.
  • The present invention concerns only fair services, so the queues here are controlled by the fairness algorithm: if the add rate is below the fair rate, packets may be added; if it exceeds the fair rate, they may not, as shown in Figure 4.
  • Figure 4 gives a schematic diagram of the apparatus that manages the ring-add data packets at each station: all packets to be added are classified and queued in their own queues by class, waiting to be sent onto the ring.
  • Each queue stores the actual storage addresses of the packets to be added, in order: the head pointer indicates the storage address of the packet that has been queued longest, and the tail pointer indicates the storage address of the packet queued most recently.
  • A queue may store the actual content of the data packets, or only their actual storage addresses; in the latter case the packet content is stored elsewhere, for example in a memory module. The present invention is described taking as an example the case where the queue stores the packets' storage addresses: only the storage address is kept in the queue, the address stands for the packet content, and queuing the address in the queue means that the packet itself is queued. When the content of a packet is needed, its storage address is first found in the queue, and the content is then looked up indirectly through that address.
  • The fairness algorithm controls whether the add rate of the packets exceeds the fair rate; if it does, packets are forbidden from being added to the ring.
  • The scheduler selects which queue's packets may currently be added to the ring. It is used in multi-queue devices, where it indicates which queue may perform the ring-add operation.
  • The fairness algorithm control and the scheduler together determine whether the packets in a queue are added to the ring. If the scheduler selects the queue (the queue may perform the add operation) and the fairness algorithm permits, the first packet can be added, followed by the second, third, and subsequent packets. Even if the scheduler selects the queue, when the traffic is too large and the fairness algorithm forbids adding, the first packet cannot be added; and if the first packet cannot be added, none of the packets behind it can, because under queue management packets are sent onto the ring strictly in order: while an earlier packet has not been sent, the packets after it cannot be sent.
  • Because the fairness algorithm enforces this in-order sending without regard to whether a packet passes through the blocking point, a later packet that does not pass through the blocking point may be unable to enter the ring because of an earlier packet, which reduces packet transmission efficiency.
  • Figure 5 is a schematic diagram of the apparatus of the present invention. A column carrying the packets' destination station addresses and several attached circuits are added, so that the queue management structure becomes the apparatus structure shown in Figure 5.
  • The apparatus comprises: a queue management circuit 51, a scanning circuit 52, an analysis circuit 53, a recommended fair rate control circuit 54, a use fair rate control circuit 55, a calculation circuit 56, and a scheduler 57.
  • The queue management circuit 51 queues the data packets (in practice, by their storage addresses) and simultaneously queues their destination station addresses: one column in each queue stores the storage addresses of the queued packets, and the other column holds the corresponding destination station addresses in the same order. The storage address is used to find where a packet is stored; the destination station address indicates the station to which the packet is sent and is used to analyze whether the packet passes through the blocking point.
  • The scanning circuit 52 scans the destination station address of each packet; at the request of the analysis circuit 53, it starts from the first packet and scans until the first packet that does not pass through the blocking point is found, or until the tail of the queue is reached.
  • The analysis circuit 53 analyzes the destination station address scanned by the scanning circuit 52 and, from the distance between the blocking point and the local station provided by the calculation circuit 56, determines whether the corresponding packet passes through the blocking point. If it does, the analysis circuit requires the scanning circuit 52 to continue scanning backward from that point until the first packet that does not pass through the blocking point is found, or the tail of the queue is reached.
  • The recommended fair rate control circuit 54 controls the add rate of packets according to the station's recommended fair rate: the total add rate of all add packets cannot exceed the recommended fair rate. If the current total add rate does not exceed the station's recommended fair rate, packets that do not pass through the blocking point are allowed onto the ring (such packets are not controlled by the use fair rate control circuit). This control applies both to packets that pass through the blocking point and to packets that do not.
  • The calculation circuit 56 obtains the time to live associated with the used fair rate, calculates the distance of the blocking point from the station according to the time to live, and provides the distance information to the analysis circuit 53, which uses it to analyze whether packets pass through the blocking point.
  • If the used fair rate is generated by the local station, the time to live is 255 and the blocking point is the local station; if it is generated by another station, the time to live is less than 255, indicating that the blocking point lies somewhere downstream, at a distance equal to the difference between 255 and the time to live. In Figure 2, the distance extracted at station S1 is 3 (the time to live starts at 255 at station S4 and has been reduced to 252 by the time it reaches S1), indicating that the blocking point is 3 stations away from S1. The calculation circuit 56 computes the difference between 255 and the time to live to obtain this distance, and the analysis circuit 53 performs its analysis using the distance information.
  • The scheduler 57 selects which queue's packets may be added to the ring. It is meaningful only when there are multiple queues; if the station has only one queue, the scheduler is omitted.
  • The head pointer indicates the first packet to be sent: the packet queued earliest, which should be sent first. Suppose the queue is rate-controlled by the fairness algorithm, the used fair rate of the station is F4, the first packet passes through the blocking point, and the current add traffic through the blocking point is greater than F4; then that packet must be barred from the ring until the add traffic drops. While the first packet is barred, the scanning circuit 52 scans the destination station address of each packet from the head of the queue and analyzes whether each destination lies beyond the blocking point.
  • A packet that does not pass through the blocking point need not be controlled by the used fair rate F4; it only needs to be controlled by the recommended rate F1 of the local station.
  • In the figure, the first queued packet is controlled by the used fair rate F4 and is not allowed onto the ring, but the second packet behind it does not pass through the blocking point and need only be controlled by the recommended fair rate F1. If F1 permits, the second packet can be inserted onto the ring, making full use of the idle time while the first packet waits in the queue. Because F1 > F4, such packets can be added at the rate F1, and the add rate increases.
  • The scanning circuit 52 scans from the head of the queue until it finds the first packet that does not pass through the blocking point, or reaches the end of the queue without finding one and stops there. When the packet at the head of the queue cannot be put on the ring because of fair-rate control, the first packet that does not pass through the blocking point is allowed onto the ring in the idle time; a packet that does not pass through the blocking point jumps the queue and is not bound by the queue order.
  • A packet that jumps the queue onto the ring is not controlled by the use fair rate of this station (F4 at station S1 in the figure) but only by the recommended fair rate computed by this station (F1 at station S1 in the figure). Because the station's own computed recommended rate is used instead of the use fair rate, the rate of packets added to the ring increases and ring transmission efficiency improves.
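The head-of-queue scan described above can be sketched as follows. This is a simplified model, not the patent's circuit: stations are represented by their hop count downstream from this station, the function name is mine, and it assumes a packet that leaves the ring at or before the blocking point does not pass through it.

```python
def first_non_blocking_packet(dest_hops, blocking_distance):
    """Scan destination hop counts from the head of the queue.

    dest_hops[i] is the number of hops from this station to packet i's
    destination; a packet passes through the blocking point when its
    destination lies strictly beyond it.  Returns the index of the first
    packet that does NOT pass through the blocking point, or None if every
    queued packet passes through it (scan reached the end of the queue).
    """
    for i, hops in enumerate(dest_hops):
        if hops <= blocking_distance:  # leaves the ring at or before the blocking point
            return i
    return None

# Blocking point 3 hops away (S4 with rate F4 in the figure): the first two
# packets travel past it, the third leaves the ring at the 3rd station.
print(first_non_blocking_packet([5, 4, 3, 6], 3))  # -> 2
```

The returned index is the packet that may jump the queue onto the ring under the recommended rate F1 while the head packet waits under F4.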
  • The delete operation is implemented by the queue management function of the queue management circuit 51.
  • The scanning circuit 52 continues scanning subsequent packets until the next packet that does not pass through the blocking point is found. Because network traffic changes dynamically, the blocking point shifts as traffic on the ring changes: an original blocking point may become a non-blocking point, and an original non-blocking point may become a new blocking point. When the blocking point moves, the distance between the blocking point and the station changes.
  • When the blocking-point distance increases, packets already scanned from the head of the queue that passed through the old blocking point may no longer pass through the new one; these newly non-blocking packets may then jump the queue onto the ring. When the distance decreases, a packet that passed through the old blocking point still passes through the new one, so scanning can simply continue as before with the new distance information, ignoring already-scanned packets; only the case where the distance between the blocking point and the station increases needs special handling.
  • Upon detecting an increase in the blocking-point distance, the analysis circuit 53 requests the scanning circuit 52 to rescan from the head of the queue to re-determine the position of the first packet that does not pass through the blocking point.
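The re-scan decision just described can be sketched as a single rule. This is an illustration only (the patent implements it in analysis circuit 53 and scanning circuit 52; the function and parameter names are assumptions):

```python
def update_scan_position(scan_pos, old_distance, new_distance):
    """Where should scanning continue after the blocking-point distance changes?

    If the distance grew, packets already scanned as 'passing through the
    blocking point' may no longer pass through it, so scanning restarts from
    the head of the queue (index 0).  If the distance shrank or is unchanged,
    previously 'passing' packets still pass through, so the scan continues
    from its current position using the new distance.
    """
    if new_distance > old_distance:
        return 0        # rescan from the head of the queue
    return scan_pos     # keep scanning forward as before

print(update_scan_position(7, old_distance=3, new_distance=5))  # -> 0
print(update_scan_position(7, old_distance=3, new_distance=2))  # -> 7
```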
  • When the scanning circuit 52 points at the first packet, the first packet in the queue does not pass through the blocking point: it can be put on the ring directly, rate-controlled by this station's recommended rate instead of the use fair rate. This guarantees that the packet consumes none of the use fair rate, only the recommended fair rate.
  • The head pointer then points to the next packet, and the scanning circuit 52 scans onward from that packet to find the new first packet that does not pass through the blocking point.
  • Step 61: in the packet queue, the queue management circuit queues the storage addresses in the order of the data packets and correspondingly stores each packet's destination station address;
  • the order here is the order in which the packets arrive at the queue management circuit.
  • Step 62: while the stations use the fair rate, the calculation circuit extracts the time-to-live carried with the use fair rate and obtains the distance of the blocking point from this station;
  • Step 63: the scanning circuit starts scanning from the destination station address of the first packet and outputs each packet's destination station address;
  • Step 64: the analysis circuit analyzes the packet's destination station address and determines, from the distance of the blocking point from this station, whether the packet passes through the blocking point; if it does, the scanning circuit is required to continue scanning until a packet that does not pass through the blocking point is found or the end of the queue is reached;
  • Step 65: perform upper-ring rate control on the packets; when the first packet cannot be put on the ring because it passes through the blocking point, a packet that does not pass through the blocking point is allowed to jump the queue onto the ring.
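Steps 61 through 65 can be strung together in one compressed sketch. This is a hypothetical simplification under stated assumptions, not the patent's circuitry: destinations are modeled as hop counts, all parameter names are mine, and a packet is taken to pass through the blocking point when its destination lies strictly beyond it.

```python
def try_add_to_ring(dest_hops, ttl, use_fair_rate, over_block_rate,
                    recommended_rate, total_add_rate):
    """Return the queue index of the packet allowed onto the ring next,
    or None if no packet may be added yet.

    dest_hops       -- per-packet hop count to its destination (step 61's queue)
    ttl             -- time-to-live carried with the use fair rate (step 62)
    over_block_rate -- current add rate of packets passing the blocking point
    total_add_rate  -- current add rate of all packets from this station
    """
    if not dest_hops:
        return None
    distance = 255 - ttl                      # step 62: blocking-point distance
    if dest_hops[0] > distance and over_block_rate >= use_fair_rate:
        # Head packet passes through the blocking point and is rate-blocked.
        # Steps 63-64: scan for the first packet that avoids the blocking point.
        for i, hops in enumerate(dest_hops):
            if hops <= distance:
                # Step 65: it may jump the queue if the recommended rate allows.
                return i if total_add_rate < recommended_rate else None
        return None                           # every queued packet passes through
    return 0                                  # head packet may be sent normally
```

For example, with TTL 252 (distance 3), a queue of destinations [5, 4, 3] hops away, and the blocking traffic already at the use fair rate, the third packet (index 2) jumps the queue when the total add rate is below the recommended rate.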
  • The queue content consists of two parts, the storage address of the data packet and the destination station address of the data packet, queued in the order of the packets;
  • the storage address of a packet gives the packet's storage location;
  • the packet is fetched according to its storage address;
  • the destination station address of a packet indicates the station to which the packet is sent;
  • the storage address of a packet and its destination station address are kept in one-to-one correspondence by their queuing order;
  • the fair rate used by the station refers to the use fair rate adopted by the station to control its upper-ring rate;
  • The distance from the blocking point to the station is the difference between the time-to-live and 255.
  • When the difference is 0, the blocking point is this station; when the difference is 1, the blocking point is the next station adjacent to this station, one span away; and so on, so that when the difference is a natural number N, the blocking point is the Nth station after this station.
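The TTL-to-distance rule above can be written out directly (a minimal illustration; the function name is hypothetical, not from the patent):

```python
def blocking_point_distance(ttl: int) -> int:
    """Distance of the blocking point from this station.

    The use fair rate leaves the blocking point carrying TTL 255, and the
    TTL is decremented once per station, so the distance is 255 - TTL:
    0 means this station itself is the blocking point, N means the
    blocking point is the Nth station downstream.
    """
    if not 0 <= ttl <= 255:
        raise ValueError("TTL must be in 0..255")
    return 255 - ttl

# The example from the text: S4 emits TTL 255, S1 receives TTL 252.
print(blocking_point_distance(252))  # -> 3: blocking point is 3 stations away
print(blocking_point_distance(255))  # -> 0: this station is the blocking point
```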
  • "The packet cannot be put on the ring because it passes through the blocking point" means that the first packet passes through the blocking point and the current upper-ring rate of packets passing through the blocking point is greater than the fair rate, so the first packet cannot be put on the ring and must wait for the rate to fall;
  • "allowing packets that do not pass through the blocking point onto the ring" means that when the upper-ring rate of all added packets is less than this station's own recommended fair rate (that is, the recommended fair rate used at this station) and the first packet is waiting to be added, packets that do not pass through the blocking point are allowed to jump the queue onto the ring;
  • after a packet that does not pass through the blocking point has jumped the queue onto the ring, it is deleted from the queue, and the scanning circuit continues scanning subsequent packets to prepare for the next queue jump.
  • The present invention applies a fairness algorithm with statistical multiplexing and spatial reuse, making full use of the residual bandwidth on the ring so that all stations on the ring access that bandwidth fairly.
  • While keeping the fairness algorithm unchanged, the present invention determines the location of the blocking station by analyzing the information the fairness algorithm provides. The destination station address of each packet under queue management is analyzed to determine whether the packet passes through the blocking point. Under fairness-algorithm control, if the first packet in the queue cannot be sent out, a queued packet that does not pass through the blocking point is allowed to be inserted onto the ring, improving the transmission efficiency of fair upper-ring traffic.
  • The present invention overcomes the drawback that using the blocking point's fair rate to control fair traffic that does not pass through the blocking point prevents that traffic from being increased.

Abstract

A method for improving fairness efficiency in a resilient packet ring, and a device therefor, include: a queue management circuit, for queuing the storage addresses of packets in order of arrival and correspondingly storing the packets' destination station addresses; a calculating circuit (15), for extracting the time-to-live carried with the use fair rate to obtain the distance from the congestion point to the current station; a scanning circuit (6), for scanning the packets' destination station addresses starting from the first packet; and an analysis circuit (7), for analyzing each destination station address and determining, from the distance from the congestion point to the current station, whether the packet passes through the congestion point. If it does, the scanning circuit (6) is required to continue scanning until a packet that does not pass through the congestion point is found or the end of the queue is reached. Upper-ring rate control is performed on the packets; when the first packet cannot be put on the ring because it passes through the congestion point, a packet that does not pass through the congestion point is permitted to jump the queue onto the ring.

Description

Method and device for improving fairness efficiency in a resilient packet ring

Technical field

The present invention relates to the field of network communication, and in particular to a method and device for improving fairness efficiency in a resilient packet ring (RPR) network.

Background
The rapid development of Ethernet technology has made data services the dominant communication traffic, and data networks have become the direction of development for future communication networks. Ethernet uses a best-effort transmission mechanism, offers good scalability, and suits today's bursty data services; however, its QoS (Quality of Service) is not guaranteed and its protection-switching capability is poor. SDH (Synchronous Digital Hierarchy) equipment has a switching time of less than 50 ms, multiple protection modes, and good QoS, but SDH uses fixed transmission bandwidth, so its efficiency in carrying IP data services is low and wasteful. SDH is therefore not the best choice for carrying data services.
Optical Ethernet RPR (Resilient Packet Ring) combines the advantages of Ethernet and SDH. It defines an independent layer, the resilient packet ring media access control layer, which has the advantages of an SDH network: a dual-ring structure that carries traffic on both rings, with a fast protection mechanism providing 50 ms protection switching. At the same time it has the characteristics of Ethernet: convenience and flexibility, plug and play, and automatic network discovery, so it can adapt to bursty data services. In addition, it offers spatial reuse, statistical multiplexing, and service classification. Spatial reuse makes full use of the idle bandwidth between any stations on the ring to improve bandwidth utilization; statistical multiplexing uses a fairness algorithm to regulate each station's fair upper-ring traffic so that the ring bandwidth is shared. Through service classification, the RPR ring provides three service classes: class A, class B, and class C. Class A guarantees service bandwidth with minimal delay and jitter, meeting the needs of audio and video services; class C does not guarantee ring bandwidth and is delivered best-effort, meeting the needs of data services.
Depending on whether upper-ring traffic is controlled by the fairness algorithm, RPR ring services fall into two classes: fair services (class C traffic, and class B traffic exceeding its committed rate) and non-fair services (class A traffic, and class B traffic within its committed rate). Non-fair services have a guaranteed rate: as long as such traffic does not exceed the guaranteed rate, it is put on the ring directly without restriction, but it is never allowed to exceed the guaranteed rate. When the RPR ring is established, the guaranteed rate of each station is determined such that the sum of all stations' guaranteed rates does not exceed the total ring rate, ensuring that every station's guaranteed rate can be realized. The difference between the total ring rate and the guaranteed rates is the residual rate, which is shared by all stations on the ring. If a station's guaranteed rate is unused and idle, it too can be shared by all stations on the ring. The technique of the present invention addresses only fair services; non-fair services are not considered, and all data flows discussed below are fair services.
To ensure that all stations fairly share the upper-ring rate that can be shared on the ring, the RPR standard defines fairness frames and recommends a fairness algorithm that measures ring traffic and controls each station's upper-ring rate, so that every station's fair traffic obtains a fair share of the residual ring bandwidth. This prevents some stations from taking more ring bandwidth and others less because of differences in station position or traffic load, ultimately achieving fair allocation of ring bandwidth among all stations. The RPR standard defines the length of the fairness period as 100 μs. In each fairness period, every station measures the packet traffic on the ring and receives, calculates, uses, and sends four fair rate values, implementing fair rate control over the whole ring and guaranteeing each station a fair share of the ring bandwidth. The four fair rate values are:
Receive fair rate: the fair rate received from the downstream station; this station's upper-ring rate must not exceed the receive fair rate.

Recommended fair rate of this station: based on the transit traffic passing through this station during a fairness period, the idle bandwidth capacity, and this station's upper-ring rate, the station computes its recommended fair rate using a set of fairness formulas. Under fairness-algorithm control, when the recommended rate computed by one station is smaller than the recommended rates of all other stations, the other stations on the ring adopt this minimum rate for control; that station is then called the blocking point. A blocking point means that the data traffic passing through this station (this point, for short) is so large that congestion occurs and this station's upper-ring packets are restricted; at this time the recommended rate computed by this station is the minimum.

Use fair rate of this station: the fair rate this station actually uses, equal to the smaller of this station's receive fair rate and its own recommended fair rate; the rate actually used can exceed neither of the two.

Send fair rate: by measuring its transit and upper-ring traffic, the station determines whether the upstream station has any relationship with the downstream congestion. If the upstream station has nothing to do with the downstream congestion, because the packets coming from upstream have already left the ring and no longer travel around it, this station sends its own recommended fair rate; if the upstream station is related to the downstream congestion, this station sends its use fair rate, requiring that the upstream station's upper-ring rate not exceed this fair rate.

This station uses the use fair rate to control the upper-ring rate of fair traffic, ensuring that its actual upper-ring rate is not greater than the upper-ring rate of the blocking point (when this station is related to the blocking point), so that it does not take more than its share of ring bandwidth.
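The two selection rules in the definitions above reduce to simple comparisons. The sketch below restates them in code (function and parameter names are mine, not the patent's; the rates are abstract numbers):

```python
def use_fair_rate(receive_fair_rate, recommended_fair_rate):
    """The use fair rate is the smaller of the fair rate received from
    downstream and this station's own recommended fair rate."""
    return min(receive_fair_rate, recommended_fair_rate)

def send_fair_rate(related_to_downstream_congestion, use_rate, recommended_rate):
    """If this station's upstream is related to the downstream congestion,
    forward the use fair rate; otherwise advertise this station's own
    recommended fair rate."""
    return use_rate if related_to_downstream_congestion else recommended_rate

print(use_fair_rate(100, 80))            # -> 80: the smaller value wins
print(send_fair_rate(True, 80, 100))     # -> 80: congestion-related, forward use rate
print(send_fair_rate(False, 80, 100))    # -> 100: unrelated, advertise own rate
```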
In actual operation, every station simultaneously receives, calculates, uses, and sends these four fair values, once per fairness period. All stations perform these operations synchronously, balancing the fair upper-ring traffic on the ring. Because of the dual-ring structure, each station has two such sets, one controlling each ring; the two rings operate independently of each other. In this description the outer ring is used as the example. The fairness algorithm at a station operates as follows:
In Fig. 1, taking station S2 as an example: packets added at S2 travel on the outer ring (ring 0), while the downstream station S3 sends its recommended fair rate F3 on the other ring, the inner ring (ring 1). F3 constrains S2's upper-ring rate not to exceed the recommended fair rate given by downstream station S3. Meanwhile S2 computes its own recommended rate F; S2's use fair rate can exceed neither its own recommended rate F nor the downstream recommended fair rate F3, taking the smaller of the two. S2 also sends a recommended rate F2 to its upstream station S1: if S2's upstream is unrelated to the downstream congestion, S2 sends its own recommended rate F directly, i.e. F2 equals F; if S2's upstream is related to the downstream blocking station, F2 equals the minimum of F and F3, requiring that upstream station S1's upper-ring rate not exceed F2.
Fig. 2 shows the control process of the fairness algorithm. Station S5 receives fair rate F6 from S6 while computing its own fair rate F5; because F5 < F6, S5 uses F5 to control its upper-ring rate. Because its upstream is related to the downstream congestion, S5 sends its use rate F5 to S4. S4 receives F5 from S5 while computing its own recommended rate F4; because F4 < F5, S4 uses F4 to control its upper-ring rate and, its upstream being related to the congestion, sends F4 to upstream station S3. S3 receives F4 and computes its own fair rate F3; because F3 > F4, S3 uses F4 to control its upper-ring rate and, its upstream being related to the congestion, forwards F4 to upstream station S2. Likewise S2 computes its fair rate F2, but because F2 > F4, S2 uses F4 and sends F4 to S1; and so on: F1 > F4, so S1 sends F4 on to S10. Because stations S1, S2, and S3 are all related to the downstream congestion, and F1 > F4, F2 > F4, F3 > F4, they all adopt the downstream minimum recommended fair rate F4, the blocking point's recommended rate, to control their own upper-ring rates, and send it to their respective upstream stations; hence the maximum upper-ring rate of S1, S2, and S3 is F4. In Fig. 2, S4, S5, and S6 are all blocking points, but F4 is the smallest and its congestion the most severe, so all upstream stations related to S4 use F4 rather than F5 or F6.
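The upstream propagation in Fig. 2 can be reproduced with a toy model. The sketch below walks the stations from most-downstream to most-upstream, assuming every station is related to the downstream congestion; the numeric rates are arbitrary stand-ins for F6..F1, not values from the patent.

```python
def propagate_fair_rates(local_rates):
    """Each station uses min(its own recommended rate, the rate received
    from downstream) and forwards that minimum upstream.  local_rates is
    given in downstream-to-upstream order; returns the use fair rate
    adopted at each station in the same order."""
    used = []
    downstream = float("inf")        # the most-downstream station receives nothing
    for rate in local_rates:
        use = min(rate, downstream)
        used.append(use)
        downstream = use             # forwarded to the next upstream station
    return used

# Stand-ins for F6..F1 at S6..S1, with F4 the minimum as in Fig. 2.
rates = [60, 50, 40, 70, 80, 90]     # S6, S5, S4, S3, S2, S1
print(propagate_fair_rates(rates))   # -> [60, 50, 40, 40, 40, 40]
```

The output mirrors the figure: once the minimum F4 appears at S4, every upstream station (S3, S2, S1) adopts it.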
If the fair rate used by a station equals the fair rate of the blocking point, the station is related to the blocking point and influences its congestion; the transmission path of the stations using the same fair value constitutes a congestion domain. In Fig. 2, stations S10, S1, S2, S3, and S4 all use the same fair rate value and belong to one congestion domain. The discussion below concerns the case of a single congestion domain.
The fairness algorithm requires all stations in the congestion domain to limit their upper-ring rates to the blocking point's recommended rate, which guarantees that the stations share the ring bandwidth and prevents the unfairness of some stations taking more bandwidth while others take less. But this control method has a defect: when the fairness algorithm exerts control, it limits the total of all upper-ring data traffic, so traffic that need not be controlled is controlled as well. In Fig. 2, because F1 > F4, F2 > F4, F3 > F4, stations S1 through S3 could themselves add traffic at rates greater than the bandwidth F4; but because F4 is small and congestion occurs at S4, these three stations can only use F4 to limit their upper-ring rates, to keep S4's own upper-ring rate from becoming unfairly small. Suppose station S1 has a traffic flow, Flow, sent from S1 to S3, as shown in Fig. 2. Because S1 uses the downstream fair rate F4 to limit its upper-ring bandwidth, Flow's maximum rate can only be F4. In fact, however, the congestion occurs at station S4, and Flow does not pass through S4: it leaves the ring at S3 and goes no farther. The defect of the current fairness-algorithm implementation is that the blocking point's fair rate F4 is used to control the flow Flow even though it does not pass through the blocking point, so Flow's upper-ring traffic cannot be increased.

Summary of the invention
The technical problem to be solved by the present invention is to provide a method and device for improving fairness efficiency in a resilient packet ring, to overcome the drawback that using the blocking point's fair rate to control fair traffic flows that do not pass through the blocking point prevents that traffic from being increased.
A device for improving fairness efficiency in a resilient packet ring, characterized by comprising:

a queue management circuit, for queuing the actual storage addresses of data packets in one column of a queue in the order in which the packets arrive, and queuing the packets' destination station addresses in another column of the queue in the packets' queuing order;

a scanning circuit, connected to the queue management circuit, for scanning the destination station address of each packet starting from the first packet in the queue;

an analysis circuit, connected to the queue management circuit and the scanning circuit, for analyzing, according to the received information on the distance between the blocking point and this station and the destination station address scanned out by the scanning circuit, whether the packet corresponding to that destination station address passes through the blocking point, and for controlling the scanning mode of the scanning circuit according to the analysis result;

a recommended-fair-rate control circuit, connected to the queue management circuit, for controlling the upper-ring rate of the packets according to this station's recommended fair rate, so that the total upper-ring rate does not exceed the recommended fair rate, and for allowing packets that do not pass through the blocking point onto the ring when the total upper-ring rate does not exceed the recommended fair rate;

a use-fair-rate control circuit, connected to the queue management circuit, for controlling the upper-ring rate of the packets according to this station's use fair rate, so that the upper-ring rate of packets passing through the blocking point does not exceed the use fair rate; and

a calculation circuit, connected to the analysis circuit and the use-fair-rate control circuit, for extracting a time-to-live according to the use fair rate, obtaining from it the distance between the blocking point and this station, and providing that distance information to the analysis circuit.

The device for improving fairness efficiency in the resilient packet ring further comprises a scheduler, connected to the queue management circuit, for selecting, when this station has multiple queues, the queue whose packets are to undergo the upper-ring operation. The packets controlled by the recommended-fair-rate control circuit include packets that do not pass through the blocking point, while the packets controlled by the use-fair-rate control circuit include both packets that pass through the blocking point and packets that do not.

In the device of the present invention for improving fairness efficiency in the resilient packet ring, when the use fair rate is generated by this station, the time-to-live is 255; when the use fair rate is sent by a downstream station of this station, the time-to-live is less than 255.

The distance between the blocking point and this station is the difference between the time-to-live and 255; when the difference is 0, the blocking point is this station, and when the difference is N, the blocking point is the Nth station after this station, where N is a natural number greater than or equal to 1.
一种利用本发明所述装置实现弹性分组环中提高公平效率的方法,其特 征在于, 包括:  A method for improving fairness efficiency in an elastic packet ring by using the device of the present invention, characterized in that it comprises:
步骤一,由队列管理电路在队列的一列中将数据包的实际存放地址进行 排队,并在所述队列的另一列中按照所述数椐包的排队顺序对所 数据包的 目的站点地址进行排队;  Step 1: The queue management circuit queues the actual storage address of the data packet in one column of the queue, and queues the destination site address of the data packet according to the queue order of the data packet in another column of the queue. ;
步驟二, 由计算电路根据使用公平速率提取一生存时间,根据该生存时 间得到阻塞点与本站点之间的距离;  Step 2: The calculation circuit extracts a lifetime according to the use fair rate, and obtains a distance between the blocking point and the site according to the survival time;
Step 3: the analysis circuit determines, from the distance and the destination station addresses obtained by the scanning circuit, whether the data packet corresponding to each destination station address passes through the blocking point, and controls the scanning behavior of the scanning circuit according to the result; and
Step 4: the recommended fair rate control circuit controls the ring-insertion rate of data packets so that the total ring-insertion rate does not exceed the recommended fair rate, and, while the total ring-insertion rate is below the recommended fair rate, allows data packets that do not pass through the blocking point onto the ring; the usage fair rate control circuit controls the ring-insertion rate of data packets so that the ring-insertion rate of data packets passing through the blocking point does not exceed the usage fair rate.
In Step 3, when the data packet corresponding to the destination station address passes through the blocking point, the analysis circuit directs the scanning circuit to continue scanning the queue until a data packet that does not pass through the blocking point is found or the tail of the queue is reached.
In Step 4, when the first data packet in the queue passes through a blocking point and the current ring-insertion rate of data packets passing through that blocking point is greater than the usage fair rate, the first data packet enters a waiting state until the rate drops.
In Step 4, when the ring-insertion rate of all inserted data packets is less than the recommended fair rate of the local station and the first data packet is waiting to be inserted, a data packet that does not pass through the blocking point is allowed to jump the queue and be inserted onto the ring.
In Step 4, after a data packet that does not pass through the blocking point jumps the queue and is inserted onto the ring, the queue management circuit deletes it from the queue to which it belongs, and the scanning circuit continues scanning the data packets behind it.
The present invention overcomes the drawback that using the fair rate of the blocking point to control fair traffic that does not pass through the blocking point prevents traffic throughput from being increased.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of fairness algorithm control;
Fig. 2 is a schematic diagram of the control process of the fairness algorithm;
Fig. 3 is a schematic diagram of the fairness frame structure;
Fig. 4 is a schematic diagram of the device by which each station manages ring-insertion data packets;
Fig. 5 is a schematic diagram of the device of the present invention;
Fig. 6 is a flowchart of the method of the present invention.

Preferred Embodiments of the Invention
Fig. 3 is a schematic diagram of the fairness frame structure. In applications of the fairness algorithm, each station uses the fair rate of the blocking point to control all ring-insertion data packets in the blocked domain, so that packets inserted at the local station that do not pass through the blocking station are also controlled by that fair rate, reducing the ring-insertion rate and transfer efficiency. In practice, the fair rate of the blocking point is carried in fairness frames passed from downstream stations to upstream stations. When a station's own recommended rate is greater than the fair rate of the blocking point, the station adopts the fair rate of the blocking point (having found, from loop statistics, that there is no idle bandwidth) and keeps forwarding that fair rate upstream. The fairness frame structure that carries the fair rate is shown in Fig. 3; the frame contains: time to live (timeToLive), ring control (baseRingControl), source station address (sourceMacAddress), fairness control header (fairnessControlHeader), fairness control value (fairnessControlValue), and frame check sequence (frameCheckSequence).
A fairness frame has three key parts: the time to live, the source address, and the fairness control value. Time to live: indicates how many stations the recommended fair rate has traversed on the ring; when the fair rate is generated, this value is 255. If the forwarded fair rate remains unchanged, the value is decremented by 1 at each station it passes.
Source address: the address of the station that generated the fair rate, i.e., the station address of the blocking point. Fairness control value: carries the recommended fair value of the downstream station, i.e., the fair rate of the blocking point, used to control the ring-insertion data flows of upstream stations.
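As a rough illustration (not part of the patent; field names follow the text, not any actual implementation), the fairness frame fields listed above can be modeled as a simple record:

```python
from dataclasses import dataclass

@dataclass
class FairnessFrame:
    """Sketch of the fairness frame fields named in the text."""
    time_to_live: int            # timeToLive: 255 when the fair rate is generated
    base_ring_control: int       # baseRingControl
    source_mac_address: str      # sourceMacAddress: address of the blocking point
    fairness_control_header: int # fairnessControlHeader
    fairness_control_value: int  # fairnessControlValue: the advertised fair rate
    frame_check_sequence: int    # frameCheckSequence

# A newly generated fair rate starts with time_to_live = 255; here the
# blocking point "S4" advertises a hypothetical fair rate of 100.
frame = FairnessFrame(255, 0, "S4", 0, 100, 0)
assert frame.time_to_live == 255
```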
When a new fair rate is generated and propagated upstream, the initial value of the time to live is 255. An upstream station that continues to forward this fair value keeps the fair value and the source address unchanged and decrements the time to live by 1. A station receiving the fair value therefore knows the source station address of the fair value, and from the time to live it can determine its distance from the source station, that is, its distance from the blocking point. In Fig. 2, station S4 calculates a fair rate F4 with F4 < F5, so S4 sends F4 to S3; the fairness frame it sends contains F4 and S4's station information, with a time to live of 255. After receiving the fairness frame from S4, station S3 decrements the time to live from 255 to 254. Because F3 > F4, S3 forwards F4 to S2; the rate in the fairness frame is still F4, the source station address is still S4, and the time to live is 254 after the decrement. Likewise, S2 decrements after receiving, so the fairness frame sent to S1 still carries F4 with a time to live of 253; S1 decrements after receiving, so the frame sent to S10 still carries F4 with a time to live of 252.
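The TTL bookkeeping in this example can be sketched as follows (a minimal illustration only; it assumes, as in the text, that the own rates F3, F2, and F1 all exceed the advertised rate F4, so each station forwards the frame unchanged apart from the decrement):

```python
def forward_ttl(received_ttl):
    """A station that adopts the downstream fair rate keeps the fair value
    and source address unchanged and decrements the time to live by 1."""
    return received_ttl - 1

# S4 originates the fairness frame with TTL 255 and sends it to S3;
# S3, S2, and S1 each decrement before forwarding upstream.
ttl = 255
for station in ("S3", "S2", "S1"):
    ttl = forward_ttl(ttl)
assert ttl == 252  # the frame S1 sends on to S10 carries TTL 252
```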
From the time to live, a station can determine the distance between itself and the station that generated the fair value, that is, its distance from the blocking point. Given the destination station of a packet to be inserted at the local station, if the packet does not pass through the blocking point — as with data flow Flow inserted at station S1 in Fig. 2, which does not pass through blocking point S4 — the flow need not be controlled by F4, but only by F1 as calculated by the local station. Because F1 > F4, the ring-insertion bandwidth of flow Flow increases, improving packet transfer efficiency. The specific implementation is as follows. At each station, queue management is used to classify all packets to be inserted onto the ring and queue them by class; packets within the same queue are inserted onto the ring in order. There is no ordering relationship between packets in different queues; the insertion order among queues is controlled by a scheduler, which decides which queue's packets may currently be inserted onto the ring and which may not. The scheduler selects a queue according to its own priorities; once a queue is selected, packets within that queue are inserted in order. Whether a packet can actually be inserted also depends on whether its queue is subject to the fairness algorithm: if not, it is inserted directly (e.g., non-fair traffic). The present invention concerns only fair traffic, so the queues are controlled by the fairness algorithm. When a queue is controlled by the fairness algorithm, its packets may be inserted if the ring-insertion rate is below the fair rate; if the ring-insertion rate exceeds the fair rate, insertion must wait until the rate drops. This is shown in Fig. 4.
Fig. 4 is a schematic diagram of the device by which each station manages ring-insertion data packets. All data to be inserted onto the ring is classified and queued by class, awaiting transmission. Each queue stores the actual storage addresses of the packets in order of arrival; the head pointer indicates the storage address of the packet that has queued longest, and the tail pointer indicates the storage address of the packet that has queued shortest. A queue may store either the full contents of the packets or only their actual storage addresses; in the latter case the packet contents may be kept elsewhere, such as in a memory module. The present invention is described using the case where the queue stores only the actual storage address of each packet: the storage address stands for the packet's contents, and queuing the storage addresses amounts to queuing the packets themselves. When the packet contents are needed, the actual storage address is first found in the queue and the contents are retrieved indirectly through it. The fairness algorithm controls whether the ring-insertion rate exceeds the fair rate; if it does, packet insertion is forbidden. The scheduler selects which queue's data may currently be inserted onto the ring; it is used only in devices with multiple queues, where it indicates which queue may perform ring insertion. Fairness control and the scheduler together determine whether the packets in a queue are inserted onto the ring. If the scheduler selects a queue (so that the queue may perform ring insertion) and the fairness algorithm permits insertion, the first packet may be inserted, followed by the second, third, and so on. Even if the scheduler selects the queue, if the fairness algorithm finds the traffic too heavy and forbids insertion, the first packet cannot be inserted; and if the first packet cannot be inserted, none of the packets behind it can be either. In this queue management scheme, the queue is controlled by the fairness algorithm and packets are inserted strictly in order: if an earlier packet has not been sent, later packets cannot be sent. Because packets are sent in order without regard to whether they pass through the blocking point, a later packet that does not pass through the blocking point may be held back because of the packet in front of it, reducing packet transfer efficiency.
Fig. 5 is a schematic diagram of the device of the present invention. In the present invention, besides the column storing the packets' storage addresses, each queue is given an additional column holding the packets' destination station addresses, together with other auxiliary circuits, so that the queue management structure becomes the device structure shown in Fig. 5.
The device structure includes: a queue management circuit 51, a scanning circuit 52, an analysis circuit 53, a recommended fair rate control circuit 54, a usage fair rate control circuit 55, a calculation circuit 56, and a scheduler 57.
The queue management circuit 51 queues the data packets (in practice, by queuing their storage addresses) and correspondingly queues the packets' destination station addresses.
In the device structure of the present invention, one column of each queue stores the storage addresses of the queued packets, while another column stores their destination station addresses in the same order. The storage address is used to locate a packet; the destination station address indicates which station the packet is sent to and is used to determine whether the packet passes through the blocking point.
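The two-column queue just described can be sketched as follows (an illustrative model only; the class and field names are not from the patent):

```python
from collections import deque

class RingQueue:
    """Sketch of the two-column queue: one column holds each packet's
    actual storage address, the other holds, in the same order, its
    destination station address."""
    def __init__(self):
        self.storage_addresses = deque()     # where each packet body is stored
        self.destination_stations = deque()  # which station each packet is sent to

    def enqueue(self, storage_address, destination_station):
        # The two columns are kept in one-to-one correspondence.
        self.storage_addresses.append(storage_address)
        self.destination_stations.append(destination_station)

q = RingQueue()
q.enqueue(0x1000, "S4")  # hypothetical storage address and destination
q.enqueue(0x2000, "S3")
assert list(q.destination_stations) == ["S4", "S3"]
```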
The scanning circuit 52 scans the destination station address of each packet; as directed by the analysis circuit 53, it starts scanning from the first packet until it finds the first packet that does not pass through the blocking point, or reaches the tail of the queue.
The analysis circuit 53 analyzes the destination station address of the packet scanned by the scanning circuit 52 and, using the distance between the blocking point and the local station provided by the calculation circuit 56, determines whether the corresponding packet passes through the blocking point. If it does, the analysis circuit directs the scanning circuit 52 to continue scanning from that point until it finds the first packet that does not pass through the blocking point, or reaches the tail of the queue.
The recommended fair rate control circuit 54 controls the ring-insertion rate of packets according to the local station's recommended fair rate, requiring that the total ring-insertion rate of all inserted packets not exceed the recommended fair rate; if the current total ring-insertion rate does not exceed the local station's recommended fair rate, packets that do not pass through the blocking point are allowed onto the ring (such packets are not controlled by the usage fair rate control circuit). In the present invention, this control applies to packets both passing and not passing through the blocking point.
The usage fair rate control circuit 55 controls the ring-insertion rate of packets according to the local station's usage fair rate, requiring that the ring-insertion rate of packets passing through the blocking point not exceed the usage fair rate. In the present invention, this control applies to all packets passing through the blocking point.
The calculation circuit 56 obtains the time to live associated with the usage fair rate, calculates from it the distance between the blocking point and the local station, and provides that distance to the analysis circuit 53, which uses it to determine whether packets pass through the blocking point.
If the usage fair rate was generated by the local station, the time to live is 255; if it was generated by another station, the time to live is less than 255.
If the time to live of the fairness frame applied by the local station is 255, the blocking point is the local station itself; if the time to live is less than 255, the blocking point is some station downstream, and its distance from the local station equals the difference between 255 and the time to live. In this example the distance extracted at station S1 is 3 (the time to live sent by S4 was 255 and has been decremented to 252 by the time it is applied at S1), indicating that the blocking point is 3 stations away from S1.
The calculation circuit 56 computes the difference between the time to live and 255 to obtain the distance of the blocking point from the local station, and the analysis circuit 53 performs its analysis using this distance.
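The distance computation performed by the calculation circuit can be sketched in a few lines (illustrative names only):

```python
def blocking_point_distance(time_to_live):
    """Distance from the local station to the blocking point, as described
    in the text: the difference between 255 and the applied TTL. 0 means
    the local station itself is the blocking point; N means the Nth
    downstream station."""
    return 255 - time_to_live

# In the example, S1 applies the fair rate with TTL 252, so the blocking
# point (S4) is 3 stations downstream.
assert blocking_point_distance(252) == 3
assert blocking_point_distance(255) == 0  # locally generated fair rate
```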
The scheduler 57 selects which queue's packets may perform ring insertion. The scheduler is needed only when there are multiple queues; if the local station has only one queue, the scheduler is omitted.
Taking station S1 as an example: in operation, the head pointer indicates the first packet to be sent, which is the first packet queued and must be sent first. Suppose the queue is rate-controlled by the fairness algorithm, the station's usage fair rate is F4, the first packet passes through the blocking point, and the current ring-insertion traffic exceeds the usage fair rate F4. Then the packet must be held back and may not be inserted until the ring-insertion traffic drops. While the first packet is held back, the scanning circuit 52 scans the destination station address of each packet starting from the head of the queue, checking whether each destination lies beyond the blocking point. On scanning the second packet, it finds that the second packet's destination does not lie beyond blocking point S4 but terminates before it — in this example, data flow Flow terminates at station S3 — so that packet need not be controlled by the usage fair rate F4 and is subject only to the local recommended rate F1. In this situation the usage fair rate F4 holds back the first queued packet, but the second packet, which does not pass through the blocking point, is not controlled by F4 and is subject only to the recommended fair rate F1. If the recommended fair rate F1 permits, the second packet may jump the queue and be inserted onto the ring, making full use of the idle insertion time. Because F1 > F4, this packet can be inserted at rate F1, and the ring-insertion speed increases.
Initially, the scanning circuit 52 scans from the head of the queue until it finds the first packet that does not pass through the blocking point, or reaches the tail without finding one, and stops at that position. If the first queued packet cannot be inserted because of usage fair rate control, the first packet that does not pass through the blocking point is allowed, during the idle time, to jump the queue and be inserted onto the ring, without regard to queuing order. A packet inserted in this way is not controlled by the local usage fair rate (in the figure, S1's usage fair rate is F4) but only by the recommended fair rate calculated by the local station (in the figure, S1's recommended fair rate is F1). Because the recommended rate calculated by the local station is greater than the usage fair rate, the ring-insertion rate of packets increases, improving the transfer efficiency of the ring.
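The scan just described can be sketched as follows. This is an assumption-laden illustration: the function and parameter names are invented, and a packet is assumed to "pass" the blocking point only when its destination lies strictly beyond it.

```python
def scan_for_non_blocking(destinations, hops_to, distance, start=0):
    """Sketch of the scanning circuit: starting from index `start`,
    return the index of the first queued packet whose destination does
    not lie beyond the blocking point, or None if the tail is reached.
    `hops_to[s]` is the hop count from the local station to station s;
    `distance` comes from the fairness frame TTL."""
    for index in range(start, len(destinations)):
        if hops_to[destinations[index]] <= distance:
            return index  # terminates at or before the blocking point
    return None           # scanned to the tail without finding one

# S1's example: blocking point S4 is 3 hops away; the first packet goes
# beyond S4 (to S5), the second terminates at S3 before the blocking point.
hops = {"S3": 2, "S4": 3, "S5": 4}
assert scan_for_non_blocking(["S5", "S3"], hops, 3) == 1
```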
After a packet that has jumped the queue is inserted onto the ring, it must be deleted from the queue to prevent it from being inserted again, which would cause a duplicate-insertion error. The deletion is performed by the queue management function of the queue management circuit 51. After the deletion, the scanning circuit 52 continues scanning the subsequent packets until it finds the next packet that does not pass through the blocking point. Because network traffic changes dynamically, the blocking point moves as traffic on the ring changes: a former blocking point may become non-blocking, and a former non-blocking point may become a new blocking point. When the blocking point moves, the distance between the blocking point and the local station changes. If the distance to the blocking point increases, some already-scanned packets that previously passed through the blocking point may now no longer pass through it, and these newly non-blocking packets may again jump the queue. If the distance to the blocking point decreases, packets that previously passed through the blocking point still pass through it, so the already-scanned packets can be ignored and scanning continues as before, simply using the new distance. In practice, therefore, only the case where the distance between the blocking point and the local station increases needs to be handled. The analysis circuit 53 detects when the blocking-point distance increases; upon detecting an increase, it directs the scanning circuit 52 to rescan from the head of the queue and redetermine the position of the first packet that does not pass through the blocking point. In operation, if the scanning circuit 52 points at the first packet, the first packet in the queue does not pass through the blocking point and may be inserted directly; its insertion is counted against the local recommended rate rather than the usage fair rate. This guarantees that the packet consumes only the recommended fair rate, not the usage fair rate. In this case, after the first packet is inserted, the head pointer moves to the next packet, and the scanning circuit 52 resumes scanning from that next packet to find the new first packet that does not pass through the blocking point.
Fig. 6 is a flowchart of the method of the present invention, which comprises the following steps. Step 61: in the packet queue, the queue management circuit queues the storage addresses in the order the packets arrive and correspondingly stores the packets' destination station addresses;
here, the order refers to the order in which the packets arrive at the queue management circuit. Step 62: when the station uses a fair rate, the calculation circuit extracts the time to live associated with the usage fair rate and derives the distance of the blocking point from the local station;
Step 63: the scanning circuit starts scanning from the destination station address of the first packet, producing each packet's destination station address;
Step 64: the analysis circuit analyzes the packet's destination station address and, from the distance of the blocking point from the local station, determines whether the packet passes through the blocking point; if it does, the scanning circuit is directed to continue scanning until a packet that does not pass through the blocking point is found, or the tail of the queue is reached;
Step 65: ring-insertion rate control is performed on the packets; when the first packet cannot be inserted because it passes through the blocking point, a packet that does not pass through the blocking point is allowed to jump the queue and be inserted onto the ring.
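Steps 61 through 65 can be drawn together in one decision routine. This is a sketch under stated assumptions, not the patent's implementation: all names are illustrative, the measured rates are taken as inputs, and a packet is assumed to cross the blocking point only when its destination lies strictly beyond it.

```python
def select_packet_to_send(queue, distance, rate_all, rate_blocked,
                          recommended_rate, usage_rate, hops_to):
    """Choose which queued packet (if any) may be inserted onto the ring.
    `queue` holds (storage_address, destination) pairs in arrival order;
    `distance` is the blocking-point distance from the fairness frame TTL;
    `rate_all` / `rate_blocked` are the measured insertion rates of all
    packets and of packets crossing the blocking point."""
    if not queue:
        return None
    _, first_dest = queue[0]
    if hops_to[first_dest] <= distance:
        # Head packet does not cross the blocking point: only the
        # recommended fair rate applies.
        return 0 if rate_all < recommended_rate else None
    if rate_blocked < usage_rate:
        return 0  # head packet crosses the blocking point but may still go
    # Head packet is held back by the usage fair rate: let a packet that
    # does not cross the blocking point jump the queue, if the
    # recommended rate allows.
    if rate_all < recommended_rate:
        for i, (_, dest) in enumerate(queue):
            if hops_to[dest] <= distance:
                return i
    return None

hops = {"S3": 2, "S5": 4}
queue = [(0x1000, "S5"), (0x2000, "S3")]
# Head packet crosses the blocking point and the usage fair rate is
# exceeded, but the overall rate is below the recommended rate: the
# second packet (terminating at S3) jumps the queue.
chosen = select_packet_to_send(queue, 3, rate_all=4, rate_blocked=6,
                               recommended_rate=8, usage_rate=5, hops_to=hops)
assert chosen == 1
```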
In step 61 above, the queued content of a queue consists of two parts — the storage address of each packet and the destination station address of each packet — queued in the order the packets arrive;
the storage address gives the location where the packet is stored, and when the packet is to be sent it is fetched from that address; the destination station address gives the station to which the packet is sent;
the storage addresses and the destination station addresses are kept (in order) in one-to-one correspondence;
the storage-address part and the destination-address part may be stored separately so they can be operated on independently, provided the one-to-one correspondence is maintained. In step 62 above, the fair rate used by the station is the usage fair rate that the local station has adopted and by which it performs ring-insertion rate control;
when the usage fair rate is generated by the local station itself, its time to live is 255; when the usage fair rate was sent by another station downstream, its time to live is less than 255; and
the distance of the blocking point from the local station is the difference between the time to live and 255. When the difference is 0, the blocking point is the local station; when the difference is 1, the blocking point is the next adjacent station downstream, one span away from the local station; and so on — when the difference is a natural number N, the blocking point is the Nth station after the local station.
In step 65 above, a packet that cannot be inserted because it passes through the blocking point means that the first packet passes through the blocking point and the current ring-insertion rate of packets passing through the blocking point is greater than the usage fair rate, so the first packet cannot be inserted and must wait for the rate to drop;
allowing a packet that does not pass through the blocking point onto the ring means that when the ring-insertion rate of all inserted packets is less than the local station's own recommended fair rate (i.e., the recommended fair rate used at this station) and the first packet is waiting, a packet that does not pass through the blocking point may jump the queue and be inserted; and
after a packet that does not pass through the blocking point jumps the queue and is inserted, it is deleted from the queue, and the scanning circuit continues scanning the subsequent packets in preparation for the next queue-jumping insertion.
The present invention applies the fairness algorithm for statistical multiplexing and spatial reuse, making full use of the spare bandwidth on the ring so that all stations on the ring share that bandwidth fairly. While keeping the fairness algorithm unchanged, the present invention determines the position of the blocking station by analyzing the information the fairness algorithm provides, and analyzes the destination station address of each packet under queue management to determine whether the packet passes through the blocking point. Under fairness control, if the first packet in the queue cannot be sent, later queued packets that do not pass through the blocking point are allowed to jump the queue and be inserted onto the ring, improving the transfer efficiency of fair traffic on the ring. The present invention thus overcomes the drawback that using the fair rate of the blocking point to control fair traffic that does not pass through the blocking point prevents traffic throughput from being increased.
Of course, the present invention may also have various other embodiments. Without departing from the spirit and essence of the present invention, those skilled in the art may make various corresponding changes and modifications in accordance with the present invention, but all such changes and modifications shall fall within the scope of protection of the claims appended hereto.

Industrial Applicability
The present invention applies a fairness algorithm to perform statistical multiplexing and spatial multiplexing, making full use of the remaining bandwidth on the ring so that all stations on the ring share that ring bandwidth fairly. While leaving the fairness algorithm unchanged, the invention determines the location of the blocked station by analyzing the information the fairness algorithm provides. The destination station address of each data packet under queue management is analyzed to determine whether the packet passes through the blocking point. Under the control of the fairness algorithm, if the first data packet in the queue cannot be sent, data packets behind it that do not pass through the blocking point are allowed to jump the queue and be added to the ring, thereby improving the delivery efficiency of fair add traffic.
The present invention overcomes the drawback that using the blocking point's fair rate to throttle fair traffic not passing through the blocking point prevents that traffic's throughput from being increased.

Claims

1. An apparatus for improving fairness efficiency in a resilient packet ring, comprising:
a queue management circuit, configured to queue, in one column of a queue, the actual storage addresses of data packets in their order of arrival, and to queue, in another column of the queue, the destination station addresses of the data packets in the same order;
a scanning circuit, connected to the queue management circuit and configured to scan the destination station address of each data packet, starting from the first data packet in the queue;
an analysis circuit, connected to the queue management circuit and the scanning circuit, and configured to analyze, based on received distance information between the blocking point and this station and on the destination station addresses obtained by the scanning circuit, whether the data packet corresponding to each destination station address passes through the blocking point, and to control the scanning behavior of the scanning circuit according to the analysis result;
a recommended-fair-rate control circuit, connected to the queue management circuit and configured to control the ring-add rate of the data packets according to this station's recommended fair rate, such that the total add rate does not exceed the recommended fair rate, and to allow data packets that do not pass through the blocking point onto the ring when the total add rate does not exceed the recommended fair rate;
a used-fair-rate control circuit, connected to the queue management circuit and configured to control the ring-add rate of the data packets according to this station's used fair rate, such that the add rate of data packets passing through the blocking point does not exceed the used fair rate; and
a calculation circuit, connected to the analysis circuit and the used-fair-rate control circuit, and configured to extract a time-to-live value according to the used fair rate, derive from that time-to-live the distance between the blocking point and this station, and provide the distance information to the analysis circuit.
2. The apparatus for improving fairness efficiency in a resilient packet ring according to claim 1, further comprising a scheduler, connected to the queue management circuit and configured to select, when this station has multiple queues, the queue from which a data packet is taken for the ring-add operation.
3. The apparatus for improving fairness efficiency in a resilient packet ring according to claim 1 or 2, wherein the data packets controlled by the recommended-fair-rate control circuit include data packets that do not pass through the blocking point, while the data packets controlled by the used-fair-rate control circuit include both data packets that pass through the blocking point and data packets that do not.
4. The apparatus for improving fairness efficiency in a resilient packet ring according to claim 1 or 2, wherein, when the used fair rate is generated by this station itself, the time-to-live is 255; and when the used fair rate is sent by a downstream station of this station, the time-to-live is less than 255.
5. The apparatus for improving fairness efficiency in a resilient packet ring according to claim 4, wherein the distance between the blocking point and this station is the difference between 255 and the time-to-live; when the difference is 0, the blocking point is this station itself, and when the difference is N, the blocking point is the Nth station after this station, where N is a natural number greater than or equal to 1.
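The TTL-to-distance rule of claims 4 and 5 can be written out directly. This small helper is an illustrative reading of the claims, not code from the patent:

```python
def blocking_point_distance(ttl):
    """Distance, in stations downstream, from this station to the blocking
    point, per the rule that a locally generated used-fair-rate message
    carries a time-to-live of 255 and each downstream hop decrements it.
    """
    if not 0 <= ttl <= 255:
        raise ValueError("TTL must be in 0..255")
    return 255 - ttl  # 0 means this station itself is the blocking point

# A used fair rate generated locally (TTL 255) means congestion is here;
# a TTL of 252 means the message came from the 3rd station downstream.
```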
6. A method for improving fairness efficiency in a resilient packet ring using the apparatus of claim 1, comprising:
step one: queuing, by the queue management circuit, the actual storage addresses of data packets in one column of a queue, and queuing, in another column of the queue, the destination station addresses of the data packets in the same order;
step two: extracting, by the calculation circuit, a time-to-live value according to the used fair rate, and deriving from that time-to-live the distance between the blocking point and this station;
step three: analyzing, by the analysis circuit, based on the distance and on the destination station addresses obtained by the scanning circuit, whether the data packet corresponding to each destination station address passes through the blocking point, and controlling the scanning behavior of the scanning circuit according to the analysis result; and
step four: controlling the ring-add rate of data packets through the recommended-fair-rate control circuit, such that the total add rate does not exceed the recommended fair rate, and allowing data packets that do not pass through the blocking point onto the ring when the total add rate does not exceed the recommended fair rate; and controlling the ring-add rate of data packets through the used-fair-rate control circuit, such that the add rate of data packets passing through the blocking point does not exceed the used fair rate.
7. The method for improving fairness efficiency in a resilient packet ring according to claim 6, wherein, in step three, when the data packet corresponding to the destination station address passes through the blocking point, the analysis circuit controls the scanning circuit to continue scanning the queue until a data packet that does not pass through the blocking point is found or the end of the queue is reached.
8. The method for improving fairness efficiency in a resilient packet ring according to claim 6 or 7, wherein, in step four, when the first data packet in the queue passes through a blocking point and the current add rate of data packets passing through that blocking point is greater than the used fair rate, the first data packet enters a rate-reduced state, waiting to be added to the ring.
9. The method for improving fairness efficiency in a resilient packet ring according to claim 8, wherein, in step four, when the add rates of all data packets being added to the ring are less than this station's recommended fair rate and the first data packet is waiting to be added, data packets that do not pass through the blocking point are allowed to jump the queue and be added to the ring.
10. The method for improving fairness efficiency in a resilient packet ring according to claim 9, wherein, in step four, after a data packet that does not pass through the blocking point jumps the queue and is added to the ring, the queue management circuit removes that data packet from the queue to which it belongs, and the scanning circuit continues to scan the data packets behind it.
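Putting claims 6 through 10 together, a simplified end-to-end sketch of one ring-add decision might look like the following. All names are invented for illustration, and the two fairness-rate comparisons are reduced to booleans rather than modeled as rate counters:

```python
from collections import deque

def add_decision(queue, ttl, used_rate_exceeded, below_recommended_rate):
    """One ring-add decision following steps two to four of the method.

    queue: deque of (storage_address, hops_to_destination) in arrival order.
    ttl: time-to-live carried by the used-fair-rate message.
    used_rate_exceeded: True if traffic through the blocking point currently
        exceeds the used fair rate (so a head packet crossing it must wait).
    below_recommended_rate: True if the total add rate is below this
        station's recommended fair rate (so queue-jumping is permitted).
    Returns the chosen queue entry, or None if nothing may be added now.
    """
    if not queue:
        return None
    distance = 255 - ttl  # step two: locate the blocking point
    _, head_hops = queue[0]
    head_blocked = head_hops >= distance and used_rate_exceeded
    if not head_blocked:
        return queue.popleft()  # head packet is added to the ring normally
    if not below_recommended_rate:
        return None  # no headroom under the recommended fair rate
    # steps three and four: scan past the head for a bypass candidate
    for i in range(1, len(queue)):
        _, hops = queue[i]
        if hops < distance:  # does not pass the blocking point
            entry = queue[i]
            del queue[i]  # claim 10: remove the jumped packet from its queue
            return entry
    return None  # every waiting packet crosses the blocking point
```

For example, with a blocking point 3 hops away (TTL 252) and the head packet destined 5 hops out while the used fair rate is exceeded, a second packet destined only 2 hops out would be returned and deleted from the queue, while the head packet keeps waiting.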
PCT/CN2007/000588 2006-08-30 2007-02-16 A method for improving the fair efficiency in the resilient packet ring and the apparatus thereof WO2008028376A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN200610112716.6 2006-08-30
CN2006101127166A CN101136829B (en) 2006-08-30 2006-08-30 Method and device for improving equitable efficiency of elastic grouping ring

Publications (1)

Publication Number Publication Date
WO2008028376A1 true WO2008028376A1 (en) 2008-03-13

Family

ID=39156822

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2007/000588 WO2008028376A1 (en) 2006-08-30 2007-02-16 A method for improving the fair efficiency in the resilient packet ring and the apparatus thereof

Country Status (2)

Country Link
CN (1) CN101136829B (en)
WO (1) WO2008028376A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6445707B1 (en) * 1999-04-21 2002-09-03 Ems Technologies Canada, Limited Broadcast rate control allocation (BRCA) for congestion avoidance in satellite ATM networks
US6526060B1 (en) * 1997-12-05 2003-02-25 Cisco Technology, Inc. Dynamic rate-based, weighted fair scheduler with explicit rate feedback option

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2003303508A1 (en) * 2003-01-02 2004-07-29 Zte Corporation A method for distributing dynamic liink bandwith for resilient packet ring
KR100560748B1 (en) * 2003-11-11 2006-03-13 삼성전자주식회사 method for bandwidth allocation using resilient packet ring fairness mechanism
CN1290295C (en) * 2003-12-10 2006-12-13 北京邮电大学 Method for ensuring fair sharing of bandwidth among sites on resilient packet ring

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FANG H. ET AL.: "A New RPR Fairness Algorithm Based on Rate Estimation", HIGH TECHNOLOGY LETTERS, vol. 14, no. 9, 1 September 2004 (2004-09-01), pages 1 - 6 *

Also Published As

Publication number Publication date
CN101136829A (en) 2008-03-05
CN101136829B (en) 2010-08-18


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07711004

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

NENP Non-entry into the national phase

Ref country code: RU

122 Ep: pct application non-entry in european phase

Ref document number: 07711004

Country of ref document: EP

Kind code of ref document: A1