WO2009097788A1 - Cache data processing method, apparatus, and system - Google Patents

Cache data processing method, apparatus, and system

Info

Publication number
WO2009097788A1
WO2009097788A1 PCT/CN2009/070224 CN2009070224W WO2009097788A1
Authority
WO
WIPO (PCT)
Prior art keywords
read
memory
data
memories
data processing
Prior art date
Application number
PCT/CN2009/070224
Other languages
English (en)
French (fr)
Inventor
Qin Zheng
Haiyan Luo
Yunfeng Bian
Hui Lu
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. filed Critical Huawei Technologies Co., Ltd.
Publication of WO2009097788A1 publication Critical patent/WO2009097788A1/zh
Priority to US12/779,745 priority Critical patent/US20100220589A1/en

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/90 Buffering arrangements
    • H04L49/901 Buffering arrangements using storage descriptor, e.g. read or write pointers
    • H04L49/9042 Separate storage for different parts of the packet, e.g. header and payload
    • H04L49/9047 Buffering arrangements including multiple buffers, e.g. buffer pools
    • H04L49/9084 Reactions to storage capacity overflow
    • H04L49/9089 Reactions to storage capacity overflow replacing packets in a storage arrangement, e.g. pushout
    • H04L49/9094 Arrangements for simultaneous transmit and receive, e.g. simultaneous reading/writing from/to the storage element

Definitions

  • The present invention relates to the field of communications technologies, and in particular to a cache data processing method, apparatus, and system. Background Art
  • Packet buffering is one of the key technologies in modern communication equipment. Its main function is to buffer packets when traffic is congested, so as to avoid or reduce traffic loss. As port speeds keep increasing, high-end communication equipment typically adopts multi-way parallel packet buffering to obtain a packet buffer bandwidth that matches the port speed.
  • FIG. 1 is a schematic structural diagram of an existing cache data processing system. The system consists of N parallel memories. Packets arriving from a port are distributed by the enqueue controller, through the storage controller, to the individual memories for buffering, while the control information of each packet enters a packet queue. The dequeue scheduler schedules packet control information out of the packet queue and, through the storage controller, reads the packet data from the corresponding memory and sends it to the next-level device. In the figure, A denotes the data channel and B denotes the control channel.
  • Because the dequeue scheduler can only read packet data from the memory selected by the enqueue controller, all the packets scheduled out within a certain period may happen to reside in a single memory, so that the dequeue bandwidth of the packet buffer is only 1/N of its design capacity. A key problem to be solved in such a multi-way parallel packet buffering system is therefore how to balance the read and write bandwidth of each memory.
  • At present, the read/write bandwidth of the memories is balanced in two ways. First, small-granularity parallel storage across memories: each packet is split at the minimum storage granularity of each memory (for example, 32 bits) and stored across multiple memories, so that on dequeue every packet is read out of multiple memories, reducing the imbalance of the dequeue bandwidth. Second, multi-packet dequeue: multiple packets are allowed to be scheduled out of one queue at a time, and on enqueue the packets of the same queue are stored into multiple memories in order, so that on dequeue the packet data is distributed relatively evenly over the memories, improving the balance of read bandwidth among them.
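The first prior-art technique above, small-granularity striping across memories, can be sketched as follows. This is a minimal illustration, not the patent's own method: the function name, the list-of-lists memory model, and the 4-byte (32-bit) granularity are assumptions chosen to mirror the example in the text.

```python
def stripe_packet(packet: bytes, num_memories: int, granularity: int = 4):
    """Split one packet at the minimum storage granularity (e.g. 32 bits =
    4 bytes) and spread the pieces round-robin over N memories, so that on
    dequeue the packet is read back from several memories at once."""
    memories = [[] for _ in range(num_memories)]
    pieces = [packet[i:i + granularity]
              for i in range(0, len(packet), granularity)]
    for i, piece in enumerate(pieces):
        memories[i % num_memories].append(piece)  # round-robin placement
    return memories
```

Note how small pieces spread evenly, which is exactly what balances read bandwidth, and also why each individual memory access becomes small and inefficient for DRAM, the drawback the embodiments address.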
  • Embodiments of the present invention provide a cache data processing method, apparatus, and system, which improve the read/write efficiency of the memories and the balance of read/write bandwidth among multiple memories, thereby improving system performance.
  • An embodiment of the present invention provides a cache data processing method, which specifically includes: splicing data packets that enter the same queue;
  • dividing the spliced data packet into multiple data units at a predetermined granularity; and
  • storing the divided data units in multiple memories.
  • The above method improves the read/write efficiency of the memories and the balance of the read/write bandwidth among multiple memories, thereby improving system performance.
  • An embodiment of the present invention provides a cache data processing device, which specifically includes: a splicing module, configured to splice data packets that enter the same queue;
  • a segmentation module, configured to divide the spliced data packet into multiple data units at a predetermined granularity; and
  • a storage module, configured to store the divided data units in the multiple memories.
  • The above device improves the read/write efficiency of the memories and the balance of the read/write bandwidth among multiple memories.
  • An embodiment of the present invention provides a cache data processing system, which specifically includes: an enqueue controller, configured to splice data packets that enter the same queue;
  • a storage controller, configured to divide the spliced data packet into multiple data units at a predetermined granularity and to control distribution of the divided data units; and
  • multiple parallel memories, configured to hold the divided data units, the data units being stored across the multiple memories.
  • The above system improves the read/write efficiency of the memories and the balance of the read/write bandwidth among multiple memories, thereby improving system performance.
  • FIG. 1 is a schematic structural diagram of an existing cache data processing system;
  • FIG. 2 is a flowchart of an embodiment of a cache data processing method according to the present invention;
  • FIG. 3 is a schematic structural diagram of an embodiment of a cache data processing apparatus according to the present invention; and
  • FIG. 4 is a schematic structural diagram of an embodiment of a cache data processing system according to the present invention. Detailed Description
  • FIG. 2 is a flowchart of an embodiment of a cache data processing method according to the present invention.
  • The method specifically includes:
  • Step 101: Splice the data packets that enter the same queue.
  • The packets entering the same queue are spliced up to a predetermined length. A status entry is set up for the data entering the same queue; the entry maintains each queue that is being spliced and records the accumulated packet length of that queue.
  • When the accumulated length reaches the predetermined length, one spliced packet is complete; the predetermined length is set according to conditions such as the number of memories. In addition, when splicing the last packet entering the queue makes the accumulated length exceed the predetermined length, the splicing operation is likewise completed.
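The per-queue splicing state described above can be sketched as a small state machine. All names are illustrative, not from the patent, and the decision to also flush on the final packet of a queue is an assumption made for completeness:

```python
class QueueSplicer:
    """Per-queue status entry for Step 101: accumulate packets until a
    predetermined length is reached, then emit one spliced packet."""

    def __init__(self, predetermined_length: int):
        self.predetermined_length = predetermined_length
        self.buffer = bytearray()    # bytes spliced so far for this queue
        self.spliced_packets = []    # completed spliced packets

    def enqueue(self, packet: bytes, last_packet: bool = False) -> None:
        self.buffer += packet
        # A splice completes when the accumulated length reaches the
        # predetermined length, or when the final packet arrives (the
        # patent completes the splice when the last packet pushes the
        # length past the threshold; flushing any remainder is our choice).
        if len(self.buffer) >= self.predetermined_length or (
                last_packet and self.buffer):
            self.spliced_packets.append(bytes(self.buffer))
            self.buffer = bytearray()
```

For example, with a predetermined length of 64 bytes, three 30-byte packets produce one 90-byte spliced packet (the third packet pushes the length past the threshold).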
  • Step 102: Divide the spliced data packet into multiple data units at a predetermined granularity. The predetermined granularity may be determined as required, for example according to the packet size and the number of memories.
  • Step 103: Store the divided data units in multiple memories.
  • Before storing the divided data units in the multiple memories, the method may further include: comparing the lengths of the write queues of the memories and selecting the memory with the shortest write queue as the first memory for storing a complete divided data unit.
  • The shorter a memory's write queue, the less traffic is being written to that memory, so selecting the memory with the shortest write queue effectively balances the write bandwidth among the memories. In addition, so that the divided data units can be read out of the memories quickly and conveniently, they may be stored uniformly at the same address across multiple memories, or uniformly at the same address across the memories consecutive to the first memory.
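The Step 103 placement policy can be sketched as follows. This is a simplified model under stated assumptions: the function name, the representation of write queues as Python lists, and the explicit `address` parameter (standing in for "the same address across consecutive memories") are all illustrative.

```python
def distribute_units(units, write_queues, address):
    """Pick the memory with the shortest write queue as the first memory,
    then place the data units in consecutive memories (wrapping around),
    all at the same address, so the spliced packet is easy to read back."""
    n = len(write_queues)
    # The memory with the shortest write queue carries the least write
    # traffic, which balances write bandwidth among the memories.
    first = min(range(n), key=lambda m: len(write_queues[m]))
    placement = []
    for i, unit in enumerate(units):
        mem = (first + i) % n  # consecutive memories, wrapping around
        write_queues[mem].append((address, unit))
        placement.append(mem)
    return first, placement
```

With three memories whose write queues hold 1, 0, and 2 pending writes, memory 1 starts the stripe and four units land on memories 1, 2, 0, 1.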
  • After Step 103, a reading process may also be included: according to a read request, data is read from the memories that hold the read data requested. If the requested read data is stored in the memories consecutive to the first memory, the data must also be read from each of the corresponding memories.
  • Further, when the predetermined length is not an integer multiple of the predetermined granularity, dequeue operations can cause some imbalance of read bandwidth among the memories.
  • To improve the read-bandwidth balance, when the read requests sent to one memory require more data than that memory's read bandwidth can supply,
  • an on-chip cache can be used to hold the requested data in excess of the bandwidth.
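The role of the on-chip cache can be sketched with a per-cycle read model. This is an assumption-laden simplification: the function name, the request lists, and measuring bandwidth as "requests per cycle" are illustrative, not the patent's definitions.

```python
from collections import deque

def serve_reads(requests_per_memory, read_bandwidth):
    """Each memory serves at most `read_bandwidth` read requests per cycle;
    requests beyond that are absorbed by an on-chip overflow cache instead
    of stalling the dequeue path."""
    served = []
    overflow_cache = deque()  # on-chip cache holding the excess requests
    for requests in requests_per_memory:
        served.extend(requests[:read_bandwidth])        # within bandwidth
        overflow_cache.extend(requests[read_bandwidth:])  # absorbed on chip
    return served, overflow_cache
```

In a later cycle the overflow cache would be drained first, smoothing out the transient imbalance the text describes.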
  • Thus, in the cache data processing method of this embodiment, the data entering the same queue is spliced into a large data packet and the divided data units are stored in multiple memories, which improves the read/write efficiency of the memories and the balance of read/write bandwidth among them, thereby improving system performance.
  • FIG. 3 is a schematic structural diagram of an embodiment of a cache data processing apparatus according to the present invention.
  • The apparatus specifically includes: a splicing module 111, configured to splice data packets that enter the same queue; a segmentation module 112, configured to divide the spliced data packet into multiple data units at a predetermined granularity; and a storage module 113, configured to store the divided data units in multiple memories.
  • The apparatus may further include: a selection module, configured to compare the lengths of the write queues of the memories and select the memory with the shortest write queue as the first memory for storing the divided data units; and a reading module, configured to read data from the storage module according to a read request.
  • The storage module may specifically be a uniform storage module, configured to store the divided data units uniformly at the same address across multiple memories.
  • The uniform storage module may specifically be a uniform consecutive storage module, configured to store the divided data units at the same address across the memories consecutive to the first memory.
  • The reading module may specifically be a consecutive reading module, configured to read, according to a read request, the requested read data from the memories consecutive to the first memory.
  • The cache data processing apparatus splices the data entering the same queue into a large data packet with the splicing module, divides the packet into data units with the segmentation module, and stores the divided data units in multiple memories with the storage module, or stores them uniformly at a uniform address across multiple memories with the uniform storage module. The apparatus may further read data from the storage module holding the data units with the reading module. This improves the read/write efficiency of the memories and the balance of read/write bandwidth among them.
  • FIG. 4 is a schematic structural diagram of an embodiment of a cache data processing system according to the present invention.
  • The system specifically includes: an enqueue controller 1, configured to splice data packets that enter the same queue; a storage controller 2, configured to divide the spliced data packet into multiple data units at a predetermined granularity and to control distribution of the divided data units; and multiple parallel memories 3, configured to hold the divided data units, the data units being stored across the multiple memories.
  • Each divided data packet is stored into the memories in units of fixed-length cells. The cell length should be large enough to guarantee the read/write efficiency of each memory 3. Taking a 32-bit-wide dynamic random-access memory (DRAM) as an example, the cell length may be set to 512 bits, and each cell is stored in the same bank of the DRAM to avoid the impact of bank-switching timing constraints on read/write efficiency. On enqueue, all cells other than the first cannot freely select the memory they are written to.
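The cell-sizing arithmetic behind the 512-bit example above can be worked through in a few lines. The function name and default parameters are illustrative; the values 512 bits and 32 bits are the example given in the text.

```python
def cells_for_packet(packet_bits, cell_bits=512, bus_width_bits=32):
    """Illustrative cell-sizing arithmetic: a 512-bit cell on a 32-bit-wide
    DRAM is transferred in 16 back-to-back beats, a burst long enough to
    amortize per-access overhead within one bank."""
    beats_per_cell = cell_bits // bus_width_bits  # transfers per cell
    num_cells = -(-packet_bits // cell_bits)      # ceiling division
    return num_cells, beats_per_cell
```

For a 1500-byte (12000-bit) spliced packet this gives 24 cells of 16 beats each, so every memory access stays a long, efficient burst.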
  • To improve the write-bandwidth balance among the memories as far as possible, the storage controller 2 includes: a comparison module 21, configured to compare the lengths of the write queues of the memories;
  • a selection module 22, configured to select the memory with the shortest write queue as the first memory for storing the divided data units; and a distribution module 23, configured to distribute the divided data units starting from the selected first memory.
  • To effectively absorb write-bandwidth imbalance, each memory 3 includes: a first cache module, configured to store the data traffic in excess of the bandwidth when the traffic sent by the enqueue controller to that memory exceeds the memory's write bandwidth.
  • The foregoing embodiment may further include: a dequeue scheduler 4, configured to read data, according to a read request, from the memories that hold the requested read data. In addition, the storage controller may specifically be a consecutive storage controller, configured to divide the spliced data packet into multiple data units at a predetermined granularity and to distribute the divided data units to the memories consecutive to the first memory.
  • The dequeue scheduler may specifically be a consecutive dequeue scheduler, configured to read, according to a read request, the requested read data from the memories consecutive to the first memory. Because the divided data units are stored across multiple memories, the balance of write bandwidth is guaranteed; and because the dequeue scheduler can only read packets from the memories selected by the enqueue controller, the balance of read bandwidth is guaranteed as well.
  • However, when the read requests sent to a memory selected by the enqueue controller require more data than that memory's read bandwidth, read-bandwidth imbalance can still arise. To improve this, each memory further includes: a second cache module, configured to store the requested data in excess of the bandwidth in that case.
  • In the above embodiment, the enqueue controller splices the data entering the same queue into data packets, the spliced packets are divided into data units at a predetermined, relatively large granularity, and multiple parallel memories hold the data units.
  • An on-chip cache absorbs the data requested in excess of a memory's read bandwidth, thereby improving the read/write efficiency of the memories, improving the balance of read/write bandwidth among multiple memories, and improving system performance.

Description

Cache data processing method, apparatus, and system

Technical Field
The present invention relates to the field of communications technologies, and in particular to a cache data processing method, apparatus, and system.

Background Art
Packet buffering is one of the indispensable key technologies in modern communication equipment. Its main function is to buffer packets when traffic is congested, so as to avoid or reduce traffic loss. As port speeds keep increasing, high-end communication equipment usually adopts multi-way parallel packet buffering to obtain a packet buffer bandwidth that matches the port speed. As shown in FIG. 1, a schematic structural diagram of an existing cache data processing system, the system consists of N parallel memories. Packets arriving from a port are distributed by the enqueue controller, through the storage controller, to the individual memories for buffering, while the control information of each packet enters a packet queue. The dequeue scheduler schedules packet control information out of the packet queue and, through the storage controller, reads the packet data from the corresponding memory and sends it to the next-level device, where A denotes the data channel and B denotes the control channel.
Because the dequeue scheduler can only read packet data from the memory selected by the enqueue controller, all the packets scheduled out within a certain period may happen to reside in a single memory, so that the dequeue bandwidth of the packet buffer is only 1/N of its design capacity. A key problem to be solved in such a multi-way parallel packet buffering system is therefore how to balance the read and write bandwidth of each memory.
At present, the read/write bandwidth of the memories is balanced in two ways. First, small-granularity parallel storage across memories: each packet is split at the minimum storage granularity of each memory (for example, 32 bits) and stored across multiple memories, so that on dequeue every packet is read out of multiple memories, reducing the imbalance of the dequeue bandwidth. Second, multi-packet dequeue: multiple packets are allowed to be scheduled out of one queue at a time, and on enqueue the packets of the same queue are stored into multiple memories in order, so that on dequeue the packet data is distributed relatively evenly over the memories, improving the balance of read bandwidth among them.
However, with the first method, for a general-purpose DRAM, small-granularity storage reduces the read/write efficiency of each memory and therefore the effective bandwidth of the whole packet buffer. With the second method, multi-packet dequeue scheduling is relatively complex to implement; moreover, when a larger storage granularity is adopted to raise the effective bandwidth of each memory, both the space efficiency and the bandwidth efficiency of the memories drop considerably, and because packets must be stored into the memories in order on enqueue, the write-bandwidth imbalance among the memories also increases.

Summary of the Invention
Embodiments of the present invention provide a cache data processing method, apparatus, and system to improve the read/write efficiency of the memories and the balance of read/write bandwidth among multiple memories, thereby improving system performance.
An embodiment of the present invention provides a cache data processing method, which specifically includes: splicing data packets that enter the same queue;

dividing the spliced data packet into multiple data units at a predetermined granularity; and

storing the divided data units in multiple memories.
The above method improves the read/write efficiency of the memories and the balance of read/write bandwidth among multiple memories, thereby improving system performance.
An embodiment of the present invention provides a cache data processing apparatus, which specifically includes: a splicing module, configured to splice data packets that enter the same queue;

a segmentation module, configured to divide the spliced data packet into multiple data units at a predetermined granularity; and a storage module, configured to store the divided data units in multiple memories.

The above apparatus improves the read/write efficiency of the memories and the balance of read/write bandwidth among multiple memories.
An embodiment of the present invention provides a cache data processing system, which specifically includes: an enqueue controller, configured to splice data packets that enter the same queue;

a storage controller, configured to divide the spliced data packet into multiple data units at a predetermined granularity and to control distribution of the divided data units; and multiple parallel memories, configured to hold the divided data units, the data units being stored across the multiple memories.

The above system improves the read/write efficiency of the memories and the balance of read/write bandwidth among multiple memories, thereby improving system performance.
The technical solution of the present invention is described in further detail below with reference to the accompanying drawings and embodiments.

Brief Description of the Drawings

FIG. 1 is a schematic structural diagram of an existing cache data processing system;

FIG. 2 is a flowchart of an embodiment of a cache data processing method according to the present invention;

FIG. 3 is a schematic structural diagram of an embodiment of a cache data processing apparatus according to the present invention;

FIG. 4 is a schematic structural diagram of an embodiment of a cache data processing system according to the present invention.

Detailed Description
As shown in FIG. 2, a flowchart of an embodiment of a cache data processing method according to the present invention, the method specifically includes:

Step 101: Splice the data packets that enter the same queue.

The packets entering the same queue are spliced up to a predetermined length. A status entry is set up for the data entering the same queue; the entry maintains each queue that is being spliced and records the accumulated packet length of that queue. When the accumulated length of a queue reaches the predetermined length, one spliced packet is complete; the predetermined length is set according to conditions such as the number of memories. In addition, when the length after splicing the last packet entering the queue exceeds the predetermined length, the splicing operation is likewise completed.
Step 102: Divide the spliced data packet into multiple data units at a predetermined granularity. The predetermined granularity may be determined as required, for example according to the packet size and the number of memories.

Step 103: Store the divided data units in multiple memories.
Before storing the divided data units in the multiple memories, the method may further include: first comparing the lengths of the write queues of the memories and selecting the memory with the shortest write queue as the first memory for storing a complete divided data unit. The shorter a memory's write queue, the less traffic is being written to that memory, so selecting the memory with the shortest write queue effectively balances the write bandwidth among the memories. In addition, so that the divided data units can be read out of the memories quickly and conveniently, they may be stored uniformly at the same address across multiple memories, or uniformly at the same address across the memories consecutive to the first memory.
In addition, a reading process may follow Step 103: according to a read request, data is read from the memories that hold the read data requested. If the requested read data is stored in the memories consecutive to the first memory, the data also has to be read from each of the corresponding memories.
Further, when the predetermined length is not an integer multiple of the predetermined granularity, dequeue operations cause some imbalance of read bandwidth among the memories. To improve the read-bandwidth balance, when the read requests sent to one memory require more data than the memory's read bandwidth, an on-chip cache can be used to store the requested data in excess of the bandwidth.
Thus, the cache data processing method of this embodiment splices the data entering the same queue into a large data packet and stores the divided data units in multiple memories, which markedly improves the read/write efficiency of the memories and the balance of read/write bandwidth among them, thereby improving system performance.
As shown in FIG. 3, a schematic structural diagram of an embodiment of a cache data processing apparatus according to the present invention, the apparatus specifically includes: a splicing module 111, configured to splice data packets that enter the same queue; a segmentation module 112, configured to divide the spliced data packet into multiple data units at a predetermined granularity; and a storage module 113, configured to store the divided data units in multiple memories.
The apparatus may further include: a selection module, configured to compare the lengths of the write queues of the memories and select the memory with the shortest write queue as the first memory for storing the divided data units; and a reading module, configured to read data from the storage module according to a read request. The storage module may specifically be a uniform storage module, configured to store the divided data units uniformly at the same address across multiple memories; the uniform storage module may specifically be a uniform consecutive storage module, configured to store the divided data units uniformly at the same address across the memories consecutive to the first memory; and the reading module may specifically be a consecutive reading module, configured to read, according to a read request, the requested read data from the memories consecutive to the first memory.
The above cache data processing apparatus splices the data entering the same queue into a large data packet with the splicing module, divides the packet into data units with the segmentation module, and stores the divided data units in multiple memories with the storage module, or stores the data units uniformly at a uniform address across multiple memories with the uniform storage module. The apparatus may further read data from the storage module holding the data units with the reading module. This markedly improves the read/write efficiency of the memories and the balance of read/write bandwidth among them.
As shown in FIG. 4, a schematic structural diagram of an embodiment of a cache data processing system according to the present invention, the system specifically includes: an enqueue controller 1, configured to splice data packets that enter the same queue; a storage controller 2, configured to divide the spliced data packet into multiple data units at a predetermined granularity and to control distribution of the divided data units; and multiple parallel memories 3, configured to hold the divided data units, the data units being stored across the multiple memories.
Each divided data packet is stored into the memories in units of fixed-length cells. The cell length should be large enough to guarantee the read/write efficiency of each memory 3. Taking a 32-bit-wide dynamic random-access memory (DRAM) as an example, the cell length may be set to 512 bits, and each cell is stored in the same bank of the DRAM to avoid the impact of bank-switching timing constraints on read/write efficiency. On enqueue, all cells other than the first cannot freely select the memory they are written to. To improve the write-bandwidth balance among the memories as far as possible, the storage controller 2 includes: a comparison module 21, configured to compare the lengths of the write queues of the memories; a selection module 22, configured to select the memory with the shortest write queue as the first memory for storing the divided data units; and a distribution module 23, configured to distribute the divided data units starting from the selected first memory.
In addition, to effectively absorb write-bandwidth imbalance, each memory 3 includes: a first cache module, configured to store the data traffic in excess of the bandwidth when the traffic sent by the enqueue controller to that memory exceeds the memory's write bandwidth. Further, the embodiment may include: a dequeue scheduler 4, configured to read data, according to a read request, from the memories that hold the requested read data. The storage controller may specifically be a consecutive storage controller, configured to divide the spliced data packet into multiple data units at a predetermined granularity and to distribute the divided data units to the memories consecutive to the first memory. The dequeue scheduler may specifically be a consecutive dequeue scheduler, configured to read, according to a read request, the requested read data from the memories consecutive to the first memory. Because the divided data units are stored across multiple memories, the balance of write bandwidth is guaranteed; and because the dequeue scheduler can only read packets from the memories selected by the enqueue controller, the balance of read bandwidth is guaranteed as well.
However, when the read requests sent to the memories selected by the enqueue controller require more data than a memory's read bandwidth, read-bandwidth imbalance can still arise. To improve this, each memory further includes: a second cache module, configured to store the requested data in excess of the bandwidth when the read requests sent to a memory selected by the enqueue controller exceed that memory's read bandwidth.
In the above embodiment, the enqueue controller splices the data entering the same queue into data packets, the spliced packets are divided into data units at a predetermined, relatively large granularity, multiple parallel memories hold the data units, and an on-chip cache absorbs the data requested in excess of a memory's read bandwidth. This improves the read/write efficiency of the memories and the balance of read/write bandwidth among them, improving system performance.
Finally, it should be noted that the above embodiments are merely intended to illustrate, rather than to limit, the technical solution of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments, or make equivalent replacements of some of the technical features therein, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims

Claims
1. A cache data processing method, comprising:

splicing data packets that enter the same queue;

dividing the spliced data packet into multiple data units at a predetermined granularity; and

storing the divided data units in multiple memories.
2. The cache data processing method according to claim 1, wherein before the storing of the divided data units in multiple memories, the method further comprises: comparing the lengths of the write queues of the memories, and selecting the memory with the shortest write queue as the first memory for storing the divided data units.
3. The cache data processing method according to claim 1 or 2, wherein after the storing of the divided data units in multiple memories, the method further comprises: reading data, according to a read request, from the memories that hold the read data requested by the read request.
4. The cache data processing method according to claim 1, wherein the splicing of the data packets that enter the same queue specifically comprises:

splicing the data packets that enter the same queue up to a predetermined length.
5. The cache data processing method according to claim 2, wherein the storing of the divided data units in multiple memories specifically comprises:

storing the divided data units uniformly at the same address across multiple memories.
6. The cache data processing method according to claim 5, wherein the storing of the divided data units uniformly at the same address across multiple memories specifically comprises:

storing the divided data units uniformly at the same address across the memories consecutive to the first memory.
7. The cache data processing method according to claim 3, wherein the reading of the data according to the read request specifically comprises:

reading, according to the read request, the requested read data from the memories consecutive to the first memory.
8. The cache data processing method according to claim 7, wherein when the data that a read request sent to one memory requires exceeds the read bandwidth of that memory, the requested data in excess of the bandwidth is stored.
9. A cache data processing apparatus, comprising:

a splicing module, configured to splice data packets that enter the same queue;

a segmentation module, configured to divide the spliced data packet into multiple data units at a predetermined granularity; and a storage module, configured to store the divided data units in multiple memories.
10. The cache data processing apparatus according to claim 9, further comprising: a selection module, configured to compare the lengths of the write queues of the memories and select the memory with the shortest write queue as the first memory for storing the divided data units.
11. The cache data processing apparatus according to claim 10, further comprising: a reading module, configured to read data from the storage module according to a read request.
12. The cache data processing apparatus according to claim 11, wherein the storage module is specifically a uniform storage module, configured to store the divided data units uniformly at the same address across multiple memories.
13. The cache data processing apparatus according to claim 12, wherein the uniform storage module is specifically a uniform consecutive storage module, configured to store the divided data units uniformly at the same address across the memories consecutive to the first memory.
14. The cache data processing apparatus according to claim 13, wherein the reading module is specifically a consecutive reading module, configured to read, according to a read request, the requested read data from the memories consecutive to the first memory.
15. A cache data processing system, comprising:

an enqueue controller, configured to splice data packets that enter the same queue;

a storage controller, configured to divide the spliced data packet into multiple data units at a predetermined granularity and to control distribution of the divided data units; and

multiple parallel memories, configured to hold the divided data units, the data units being stored across the multiple memories.
16. The cache data processing system according to claim 15, wherein the storage controller comprises:

a comparison module, configured to compare the lengths of the write queues of the memories;

a selection module, configured to select the memory with the shortest write queue as the first memory for storing the divided data units; and

a distribution module, configured to distribute the divided data units starting from the selected first memory.
17. The cache data processing system according to claim 15 or 16, further comprising:

a first cache module, configured to store the data traffic in excess of the bandwidth when the data traffic sent by the enqueue controller to a memory exceeds the write bandwidth of that memory.
18. The cache data processing system according to claim 17, further comprising: a dequeue scheduler, configured to read data, according to a read request, from the memories that hold the read data requested by the read request.
19. The cache data processing system according to claim 18, wherein the storage controller is specifically a consecutive storage controller, configured to divide the spliced data packet into multiple data units at a predetermined granularity and to distribute the divided data units to the memories consecutive to the first memory.
20. The cache data processing system according to claim 19, wherein the dequeue scheduler is specifically a consecutive dequeue scheduler, configured to read, according to a read request, the requested read data from the memories consecutive to the first memory.
21. The cache data processing system according to claim 20, further comprising: a second cache module, configured to store the requested read data in excess of the bandwidth when the read requests sent to a memory selected by the enqueue controller require more data than the read bandwidth of that memory.
PCT/CN2009/070224 2008-02-04 2009-01-20 Cache data processing method, apparatus, and system WO2009097788A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/779,745 US20100220589A1 (en) 2008-02-04 2010-05-13 Method, apparatus, and system for processing buffered data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2008100576966A CN101222444B (zh) 2008-02-04 2008-02-04 Cache data processing method, apparatus, and system
CN200810057696.6 2008-02-04

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/779,745 Continuation US20100220589A1 (en) 2008-02-04 2010-05-13 Method, apparatus, and system for processing buffered data

Publications (1)

Publication Number Publication Date
WO2009097788A1 true WO2009097788A1 (zh) 2009-08-13

Family

ID=39632026

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2009/070224 WO2009097788A1 (zh) 2008-02-04 2009-01-20 Cache data processing method, apparatus, and system

Country Status (3)

Country Link
US (1) US20100220589A1 (zh)
CN (1) CN101222444B (zh)
WO (1) WO2009097788A1 (zh)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101222444B (zh) * 2008-02-04 2011-11-09 华为技术有限公司 缓存数据处理方法、装置及系统
CN102684976B (zh) * 2011-03-10 2015-07-22 中兴通讯股份有限公司 一种基于ddr sdram进行数据读写的方法、装置及系统
US9158602B2 (en) * 2012-05-21 2015-10-13 International Business Machines Corporation Processing posted receive commands in a parallel computer
CN103425437B (zh) * 2012-05-25 2016-05-25 华为技术有限公司 初始写入地址选择方法和装置
US9240870B2 (en) 2012-10-25 2016-01-19 Telefonaktiebolaget L M Ericsson (Publ) Queue splitting for parallel carrier aggregation scheduling
CN103475451A (zh) * 2013-09-10 2013-12-25 江苏中科梦兰电子科技有限公司 一种适合前向纠错和加密应用的数据报网络传输方法
CN104581398B (zh) * 2013-10-15 2019-03-15 富泰华工业(深圳)有限公司 缓存数据管理系统及方法
CN105573711B (zh) * 2014-10-14 2019-07-19 深圳市中兴微电子技术有限公司 一种数据缓存方法及装置
WO2017088180A1 (zh) * 2015-11-27 2017-06-01 华为技术有限公司 向队列存储数据的方法、装置及设备
CN106326029A (zh) * 2016-08-09 2017-01-11 浙江万胜智能科技股份有限公司 一种用于电力仪表的数据存储方法
CN108881062A (zh) * 2017-05-12 2018-11-23 深圳市中兴微电子技术有限公司 一种数据包传输方法和设备
CN109802897B (zh) * 2017-11-17 2020-12-01 华为技术有限公司 一种数据传输方法及通信设备
US10686910B2 (en) * 2018-02-02 2020-06-16 Servicenow, Inc. Distributed queueing in a remote network management architecture

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001013241A1 (en) * 1999-08-19 2001-02-22 Sun Microsystems, Inc. Scalable-remembered-set garbage collection
CN1633089A (zh) * 2003-12-24 2005-06-29 华为技术有限公司 混合粒度虚级联的实现方法
CN101021436A (zh) * 2007-03-14 2007-08-22 中国兵器工业第二○五研究所 用于瞬态光强测试的同步触发控制方法
CN101222444A (zh) * 2008-02-04 2008-07-16 华为技术有限公司 缓存数据处理方法、装置及系统

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6147996A (en) * 1995-08-04 2000-11-14 Cisco Technology, Inc. Pipelined multiple issue packet switch
US7389510B2 (en) * 2003-11-06 2008-06-17 International Business Machines Corporation Load balancing of servers in a cluster
US7188219B2 (en) * 2004-01-30 2007-03-06 Micron Technology, Inc. Buffer control system and method for a memory system having outstanding read and write request buffers
US20050198459A1 (en) * 2004-03-04 2005-09-08 General Electric Company Apparatus and method for open loop buffer allocation
US20070055788A1 (en) * 2005-08-11 2007-03-08 Andrew Dunshea Method for forwarding network file system requests and responses between network segments
US8055386B2 (en) * 2005-08-22 2011-11-08 Trane International Inc. Building automation system data management
US20080170571A1 (en) * 2007-01-12 2008-07-17 Utstarcom, Inc. Method and System for Synchronous Page Addressing in a Data Packet Switch

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001013241A1 (en) * 1999-08-19 2001-02-22 Sun Microsystems, Inc. Scalable-remembered-set garbage collection
CN1633089A (zh) * 2003-12-24 2005-06-29 华为技术有限公司 混合粒度虚级联的实现方法
CN101021436A (zh) * 2007-03-14 2007-08-22 中国兵器工业第二○五研究所 用于瞬态光强测试的同步触发控制方法
CN101222444A (zh) * 2008-02-04 2008-07-16 华为技术有限公司 缓存数据处理方法、装置及系统

Also Published As

Publication number Publication date
CN101222444A (zh) 2008-07-16
CN101222444B (zh) 2011-11-09
US20100220589A1 (en) 2010-09-02

Similar Documents

Publication Publication Date Title
WO2009097788A1 (zh) Cache data processing method, apparatus, and system
US8225026B2 (en) Data packet access control apparatus and method thereof
US8472457B2 (en) Method and apparatus for queuing variable size data packets in a communication system
US7904677B2 (en) Memory control device
JP4299536B2 (ja) Dramベースのランダム・アクセス・メモリ・サブシステムでツリー・アクセスに関する性能を改善するためのマルチ・バンク・スケジューリング
TWI301367B (en) Compact packet switching node storage architecture employing double data rate synchronous dynamic ram
US10248350B2 (en) Queue management method and apparatus
KR102082020B1 (ko) 다수의 링크된 메모리 리스트들을 사용하기 위한 방법 및 장치
WO2016011894A1 (zh) 报文处理方法和装置
US9170753B2 (en) Efficient method for memory accesses in a multi-core processor
EP2913963A1 (en) Data buffering system and method for ethernet device
US20050025140A1 (en) Overcoming access latency inefficiency in memories for packet switched networks
WO2009111971A1 (zh) 缓存数据写入系统及方法和缓存数据读取系统及方法
US7822915B2 (en) Memory controller for packet applications
US20200259766A1 (en) Packet processing
US10067868B2 (en) Memory architecture determining the number of replicas stored in memory banks or devices according to a packet size
Lin et al. Two-stage fair queuing using budget round-robin
WO2012163019A1 (zh) 降低数据类芯片外挂ddr功耗的方法及数据类芯片系统
US8345701B1 (en) Memory system for controlling distribution of packet data across a switch
US11720279B2 (en) Apparatus and methods for managing packet transfer across a memory fabric physical layer interface
EP3771164B1 (en) Technologies for providing adaptive polling of packet queues
WO2003088047A1 (en) System and method for memory management within a network processor architecture
US10067690B1 (en) System and methods for flexible data access containers
US20230396561A1 (en) CONTEXT-AWARE NVMe PROCESSING IN VIRTUALIZED ENVIRONMENTS
US20230367713A1 (en) In-kernel cache request queuing for distributed cache

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09709041

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09709041

Country of ref document: EP

Kind code of ref document: A1