WO2008069531A1 - Method of accelerating i/o between user memory and disk using pci memory - Google Patents

Method of accelerating I/O between user memory and disk using PCI memory

Info

Publication number
WO2008069531A1
WO2008069531A1
Authority
WO
WIPO (PCT)
Prior art keywords
memory
disk
pci
data
user
Prior art date
Application number
PCT/KR2007/006219
Other languages
French (fr)
Inventor
Song Woo Sok
Kap Dong Kim
Chang Soo Kim
Hag Young Kim
Original Assignee
Electronics And Telecommunications Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020070098180A external-priority patent/KR20080051021A/en
Application filed by Electronics And Telecommunications Research Institute filed Critical Electronics And Telecommunications Research Institute
Publication of WO2008069531A1 publication Critical patent/WO2008069531A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00: Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14: Handling requests for interconnection or transfer
    • G06F13/20: Handling requests for interconnection or transfer for access to input/output bus
    • G06F13/28: Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601: Interfaces specially adapted for storage systems
    • G06F3/0602: Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061: Improving I/O performance
    • G06F3/0613: Improving I/O performance in relation to throughput
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601: Interfaces specially adapted for storage systems
    • G06F3/0628: Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655: Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656: Data buffering arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601: Interfaces specially adapted for storage systems
    • G06F3/0668: Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671: In-line storage system
    • G06F3/0673: Single storage device
    • G06F3/0674: Disk device


Abstract

The present invention relates to a method of accelerating I/O between a user memory and a disk using a PCI memory. The method comprises: allocating the PCI memory connected to a disk, when a user program requests for disk I/O; determining whether the request for the disk I/O is a writing request or a reading request; when the request for the disk I/O is the reading request, reading data of a block of the disk into the PCI memory and transmitting the data read into the PCI memory to the user memory using a DMA (Direct Memory Access) scheme; and when the request for the disk I/O is the writing request, transmitting data of the user memory to the PCI memory using the DMA scheme and storing the data of the PCI memory in the block of the disk. As a result, while minimizing a load of a CPU, the I/O between the disk and the user memory can be performed at high speed using a network storage card.

Description

Description
METHOD OF ACCELERATING I/O BETWEEN USER MEMORY AND DISK USING PCI MEMORY
Technical Field
[1] The present invention relates to a method of accelerating I/O between a user memory and a disk, and more particularly, to a method of accelerating I/O between a user memory and a disk using a PCI memory, which inputs or outputs data between the user memory and the disk in a DMA scheme by using the PCI memory of an Internet server computing system.
[2] This work was supported by the IT R&D program of MIC/IITA [Project No.
2005-S-405-02, Project Name: A Development of the Next Generation Internet Server Technology].
[3]
Background Art
[4] Recently, with the advent of high-speed networks and various multimedia terminals, multimedia streaming services have been increasingly demanded. Therefore, the performance of dedicated multimedia servers that store and stream multimedia data also needs to be increased.
[5] For this reason, hardware and software for enhancing multimedia streaming performance have been widely researched. As hardware examples, various disk arrays and cache devices for enhancing the I/O speed of disk storage devices storing multimedia data and of I/O devices have been researched.
[6]
[7] As a conventional technology, Korea Patent Application Publication No. 2004-0056309 discloses a technique entitled "A network-storage apparatus for high-speed streaming data transmission through network."
[8] As one of the techniques for accelerating disk I/O and network transmission, the network-storage card (hereinafter referred to as an "NS card") in the above Patent Document is constructed by integrating disk I/O devices, a network controller, and a memory into one peripheral component interconnect (PCI) card, so that the NS card performs zero-copy transmission to transmit data of the disk through the network.
[9]
[10] The above Patent Document also discloses a method of allocating a block of the PCI memory, that is, the memory on the NS card, to a user program. More specifically, the PCI memory of the NS card has physical PCI addresses, so that the memory is accessible through a PCI bus. When the user program requests allocation of a PCI memory block, the PCI memory is mapped onto a user memory area, so that the user program can access the PCI memory.
[11] Therefore, when the user program requests for the disk I/O using the allocated PCI memory block, a device driver identifies the PCI memory to perform zero-copy input and output.
[12] As a result, in this memory access architecture, whenever the user program manipulates or refers to data in the PCI memory, the access occurs through the PCI bus.
[13]
Disclosure of Invention Technical Problem
[14] Therefore, in the case where the user program manipulates and refers to data through the PCI bus, the performance is degraded in comparison with the case where the user program manipulates and refers to data in a local memory directly connected to it, without the PCI bus.
[15] That is, in the conventional technique, when the data of the disk is streamed through the network without data manipulation, high-speed streaming performance can be obtained. However, when the data needs to be manipulated and referred to by the user program, there is a problem in that the performance is degraded.
[16]
[17] On the other hand, among the network transmission and control methods used for multimedia streaming, there are protocols that directly use the multimedia data stored on the disk. Recently, however, protocols that refer to, change, and transmit some data fields have been widely used to provide users with a convenient use environment. For example, in a video-on-demand (VOD) service, the time stamp field in the streaming data is manipulated to provide a fast forward (FF) service or a fast rewind (FR) service.
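The kind of per-packet field manipulation mentioned above can be sketched as follows. The packet layout (a 4-byte big-endian time stamp at a fixed offset, as in RTP-like headers) and the rescaling rule are assumptions for illustration; real streaming formats define their own offsets and widths.

```c
#include <assert.h>
#include <stdint.h>

#define TS_OFFSET 4  /* assumed offset of the time stamp field */

/* Read the 4-byte big-endian time stamp from a packet header. */
static uint32_t read_ts(const uint8_t *pkt)
{
    return ((uint32_t)pkt[TS_OFFSET] << 24) |
           ((uint32_t)pkt[TS_OFFSET + 1] << 16) |
           ((uint32_t)pkt[TS_OFFSET + 2] << 8) |
           (uint32_t)pkt[TS_OFFSET + 3];
}

static void write_ts(uint8_t *pkt, uint32_t ts)
{
    pkt[TS_OFFSET]     = (uint8_t)(ts >> 24);
    pkt[TS_OFFSET + 1] = (uint8_t)(ts >> 16);
    pkt[TS_OFFSET + 2] = (uint8_t)(ts >> 8);
    pkt[TS_OFFSET + 3] = (uint8_t)ts;
}

/* Fast forward: compress each time stamp by `speed` before
 * transmission so the player renders frames sooner. */
static void fast_forward(uint8_t *pkt, unsigned speed)
{
    write_ts(pkt, read_ts(pkt) / speed);
}
```

It is precisely this rewrite of a field in every outgoing packet that forces the server to touch the data, and thus makes the location of the buffer (local memory versus PCI memory) matter for performance.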
[18] Therefore, in the conventional technique, there is a problem in that the performance of an application that manipulates the data and then streams it is seriously degraded.
[19]
Technical Solution
[20] An aspect of the present invention provides a method of accelerating disk I/O between a user memory and a disk using a PCI memory, which a user program can use to load streaming data into the user memory, so that the user program can manipulate and refer to the data at high speed.
[22] According to an aspect of the present invention, there is provided a method of accelerating I/O between a user memory and a disk using a PCI (Peripheral Component Interconnect) memory of an Internet server computing system, the method comprising: allocating the PCI memory connected to a disk, when a user program requests disk I/O; determining whether the request for the disk I/O is a writing request or a reading request; when the request for the disk I/O is the reading request, reading data of a block of the disk into the PCI memory and transmitting the data read into the PCI memory to the user memory using a DMA (Direct Memory Access) scheme; and when the request for the disk I/O is the writing request, transmitting data of the user memory to the PCI memory in the DMA scheme and storing the data of the PCI memory in the block of the disk.
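The claimed flow (allocate PCI memory, branch on read/write, move data in two hops) can be sketched in C. Plain buffers stand in for the disk block, the NS card's PCI memory, and the user memory, and memcpy() stands in for the disk controller and the DMA engine; all names are illustrative, not from the patent.

```c
#include <assert.h>
#include <string.h>

#define BLK 512  /* assumed disk block size */

enum io_req { IO_READ, IO_WRITE };

struct sim {
    unsigned char disk_block[BLK]; /* block on the disk 600     */
    unsigned char pci_mem[BLK];    /* buffer on the NS card 500 */
};

/* One disk I/O request, with the PCI memory used as the buffer. */
static void disk_io(struct sim *s, enum io_req req, unsigned char *user_mem)
{
    /* Step 1: an unused PCI memory block on the NS card is
     * allocated (here s->pci_mem is simply reused). */
    if (req == IO_READ) {
        /* Read path: disk block -> PCI memory (disk controller),
         * then PCI memory -> user memory (DMA controller). */
        memcpy(s->pci_mem, s->disk_block, BLK);
        memcpy(user_mem, s->pci_mem, BLK);
    } else {
        /* Write path: user memory -> PCI memory (DMA controller),
         * then PCI memory -> disk block (disk controller). */
        memcpy(s->pci_mem, user_mem, BLK);
        memcpy(s->disk_block, s->pci_mem, BLK);
    }
}
```

The extra hop through the PCI memory is the cost the Advantageous Effects section acknowledges; the gain is that the data ends up in the user memory, where subsequent manipulation is fast.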
[23]
[24] In the above aspect, when the user program requests the disk I/O, the allocating of the PCI memory connected to the disk may comprise: searching for an NS card (Network-Storage card) connected to the disk; and allocating an unused PCI memory on the searched NS card.
[25] In addition, when the request for the disk I/O is the reading request, the transmitting the data read into the PCI memory to the user memory using the DMA scheme may comprise determining whether or not the data is loaded on the PCI memory and, if the data is loaded on the PCI memory, transmitting the data loaded on the PCI memory to the user memory.
[26]
[27] In addition, when the request for the disk I/O is the reading request, the transmitting the data read into the PCI memory to the user memory using the DMA scheme may comprise transmitting the data read into the PCI memory to the user memory using a DMA controller included in an I/O processor on the NS card.
[28] In addition, when the request for the disk I/O is the writing request, the transmitting the data of the user memory to the PCI memory may comprise transmitting the data of the user memory to the PCI memory by using a DMA controller included in an I/O processor on the NS card.
[29]
[30] In addition, when the request for the disk I/O is the writing request, the storing the data of the PCI memory in the block of the disk may comprise, if the transmission of the data of the user memory to the PCI memory is completed, storing the data of the PCI memory in the block of the disk.
[31]
Advantageous Effects
[32] As described above, in the method of accelerating I/O between a user memory and a disk using a PCI memory according to the present invention, the data of the disk is input or output to or from the user memory using the PCI memory as a buffer. Although the performance is degraded by the addition of a DMA step between the user memory and the PCI memory in comparison with the zero-copy disk I/O of the conventional NS card, the user program can access the data at high speed when it manipulates and refers to the data.
[33]
[34] Further, as described above, since CPU interrupts are minimized in comparison with a general disk I/O, the method of accelerating I/O between the user memory and the disk using the PCI memory according to the present invention can perform the disk I/O at high speed by applying the zero-copy I/O of the NS card while reducing the load on the CPU.
[35]
Brief Description of the Drawings
[36] Fig. 1 is a schematic view showing a configuration of an Internet server computing system according to an embodiment of the present invention;
[37] Fig. 2 is a flowchart showing a method of accelerating I/O between a user memory and a disk using a PCI memory in an Internet server computing system according to an embodiment of the present invention;
[38] Fig. 3 is a flowchart showing a method of reading data from a disk into a user memory using a PCI memory according to an embodiment of the present invention; and
[39] Fig. 4 is a flowchart showing a method of writing data in a user memory on a disk using a PCI memory according to an embodiment of the present invention.
[40]
Best Mode for Carrying Out the Invention
[41] Exemplary embodiments of the invention will be described below in detail, with reference to the accompanying drawings, so that those skilled in the art can easily implement them. However, in describing the principle of operation of the preferred embodiments in detail, detailed descriptions of related known functions or structures will be omitted when they would obscure the gist of the present invention.
[42]
[43] Also, parts with similar functions and operations are denoted by the same reference numerals throughout the drawings.
[44] Fig. 1 is a schematic view showing a configuration of an Internet server computing system according to an embodiment of the present invention.
[45] Referring to Fig. 1, in the Internet server computing system, user programs 100 are connected to a user memory 300 and a peripheral device bus (representatively, a peripheral component interconnect (PCI) bus) 400 through a memory controller hub (MCH) 200. The user programs 100 access a network storage card (hereinafter referred to as an "NS card") 500 through the connected PCI bus 400.
[46]
[47] Here, the NS card 500 of the Internet server computing system operates in accordance with the interface of the PCI bus 400. The NS card 500 is connected to a disk 600 and to network devices such as Ethernet.
[48] That is, the disk controller 520, the PCI memory 530, and the TOE (TCP offload engine) 540 constituting the NS card 500 access the PCI bus 400 through a PCI bridge 510.
[49]
[50] On the other hand, detailed descriptions of components of the Internet server computing system that can be derived from well-known technology will be omitted.
[51] Next, in the Internet server computing system with the aforementioned configuration, a method of accelerating I/O between the user memory 300 and the disk 600 using the PCI memory 530 will be described with reference to the drawings.
[52] Fig. 2 is a flowchart showing a method of accelerating I/O between the user memory and the disk using the PCI memory in the Internet server computing system according to an embodiment of the present invention.
[53] Referring to Fig. 2, when the user program 100 requests I/O on an arbitrary disk 600 (S101), the Internet server computing system checks the NS card 500 connected to the corresponding disk 600 (S102) and allocates a PCI memory 530 not used by the corresponding NS card 500 (S103).
[54]
[55] The Internet server computing system determines whether the request for the disk I/O of the user program 100 is a writing request or a reading request (S104). In the case of the reading request, the Internet server computing system reads the data of the requested block of the corresponding disk 600 into the PCI memory 530 (S105). Subsequently, the Internet server computing system transmits the data read into the PCI memory 530 to the user memory 300 using the DMA scheme of the I/O processor on the NS card (S106).
[56] However, in the case where the I/O request of the user program 100 is the writing request, the Internet server computing system transmits the data in the user memory 300 to the allocated PCI memory 530 using the DMA scheme of the I/O processor on the NS card (S107) and stores the data of the PCI memory 530 in the block of the corresponding disk 600 (S108).
[58] Fig. 3 is a flowchart showing a method of reading data of a disk into a user memory using a PCI memory according to an embodiment of the present invention.
[59] As shown in Fig. 3, when the user program 100 attempts to read data of the disk into the user memory 300 using the PCI memory 530 in the Internet server computing system, the user program first allocates an arbitrary user memory 300 (S201) and requests reading of data from a block of an arbitrary disk 600 into the allocated user memory 300 (S202).
[60] Then, when the user program 100 requests the data reading, the Internet server computing system checks the NS card 500 connected to the disk 600 from which the data reading is requested (S203) and allocates a PCI memory 530 not used by the checked NS card 500 (S204).
[61] Then, the Internet server computing system reads the data of the block of the disk 600 whose reading was requested by the user program 100 into the allocated PCI memory 530 (S205).
[62]
[63] Next, when the data is loaded on the PCI memory 530 (S206), the Internet server computing system transmits the data loaded on the PCI memory 530 to the user memory 300 using a DMA controller included in an I/O processor on the NS card 500 (S207).
[64] Here, when the Internet server computing system transmits the data loaded on the PCI memory 530 to the user memory 300 by using the DMA controller on the NS card 500, the transfer is performed in a DMA scheme.
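Steps S206 and S207 can be sketched as follows: the host waits until the requested block is present in the PCI memory, then programs the DMA controller of the NS card's I/O processor to move it to user memory. The descriptor layout and the synchronous "engine" below are illustrative assumptions; a real DMA engine copies asynchronously and signals completion with an interrupt.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* One DMA transfer descriptor (illustrative layout). */
struct dma_desc {
    const void *src;  /* source: PCI memory address    */
    void       *dst;  /* destination: user memory      */
    size_t      len;
    bool        done; /* completion flag, set by engine */
};

/* Stand-in for the NS card's DMA engine. */
static void dma_submit(struct dma_desc *d)
{
    memcpy(d->dst, d->src, d->len);
    d->done = true;
}

/* S206: transfer only once the block is loaded in PCI memory;
 * S207: PCI memory -> user memory via the DMA controller. */
static bool read_to_user(const unsigned char *pci_mem, bool data_loaded,
                         unsigned char *user_mem, size_t len)
{
    if (!data_loaded)
        return false;
    struct dma_desc d = { pci_mem, user_mem, len, false };
    dma_submit(&d);
    return d.done;
}
```

Because the copy is carried out by the card's DMA controller rather than by the host CPU, the CPU is free during the transfer, which is the source of the reduced CPU load claimed above.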
[65] Fig. 4 is a flowchart showing a method of writing data in a user memory on a disk using a PCI memory according to an embodiment of the present invention.
[66] As shown in Fig. 4, when a user program 100 attempts to write data of a user memory 300 to a disk using a PCI memory 530 in an Internet server computing system, the user program first allocates an arbitrary user memory 300 (S301) and requests writing of data to a block of an arbitrary disk 600 from the allocated user memory 300 (S302).
[67] Then, when the data writing is requested by the user program 100, the Internet server computing system checks the NS card 500 connected to the disk 600 to which the data writing is requested (S303), and allocates an unused PCI memory 530 on the checked NS card 500 (S304).
[68] Then, the Internet server computing system transmits the data in the user memory 300 to the allocated PCI memory 530 in a DMA scheme through the DMA controller included in the I/O processor on the NS card 500 (S305).
[69]
[70] Next, when data is loaded on the PCI memory 530 (S306), the data loaded on the PCI memory 530 is stored in a block of the writing-requested disk (S307).
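The ordering in steps S305 to S307 can be sketched as two phases: the disk write is committed only after the user-memory-to-PCI-memory DMA has completed. Buffers and the completion flag are illustrative stand-ins, not the patent's data structures.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* S305: DMA transfer, user memory -> PCI memory
 * (synchronous stand-in for the NS card's DMA controller). */
static void dma_to_pci(unsigned char *pci_mem,
                       const unsigned char *user_mem, size_t len)
{
    memcpy(pci_mem, user_mem, len);
}

/* S306-S307: once the DMA completion is signalled, the I/O
 * processor stores the PCI memory contents to the requested
 * disk block; until then the disk write must wait. */
static bool commit_to_disk(unsigned char *disk_block,
                           const unsigned char *pci_mem,
                           bool dma_done, size_t len)
{
    if (!dma_done)
        return false;
    memcpy(disk_block, pci_mem, len);
    return true;
}
```

Splitting the write into these two phases is what lets the completion check in S306 guarantee that only fully transferred data ever reaches the disk block.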
[71] The present invention described above is not limited to the earlier described embodiments and attached drawings; it will be apparent to those skilled in the art that various changes and modifications can be made without departing from the scope of the invention.

Claims

Claims
[1] A method of accelerating I/O between a user memory and a disk using a PCI
(Peripheral Component Interconnect) memory of an Internet server computing system comprising: allocating the PCI memory connected to a disk, when a user program requests for disk I/O; determining whether the request for the disk I/O is a reading request or a writing request; when the request for the disk I/O is the reading request, reading data of a block of the disk into the PCI memory and transmitting the data read into the PCI memory to the user memory using a DMA (Direct Memory Access) scheme; and when the request for the disk I/O is the writing request, transmitting data of the user memory to the PCI memory using the DMA scheme and storing the data of the PCI memory in the block of the disk.
[2] The method of claim 1, wherein, when the user program requests the disk I/O, the allocating the PCI memory connected to the disk comprises: searching for an NS card (Network-Storage card) connected to the disk; and allocating an unused PCI memory on the searched NS card.
[3] The method of claim 1, wherein, when the request for the disk I/O is the reading request, the transmitting the data read into the PCI memory to the user memory using the DMA scheme comprises determining whether or not the data is loaded on the PCI memory and, if the data is loaded on the PCI memory, transmitting the data loaded on the PCI memory to the user memory.
[4] The method of claim 1, wherein, when the request for the disk I/O is the reading request, the transmitting the data read into the PCI memory to the user memory using the DMA scheme comprises transmitting the data read into the PCI memory to the user memory by using a DMA controller included in an I/O processor on the NS card.
[5] The method of claim 1, wherein, when the request for the disk I/O is the writing request, the transmitting the data of the user memory to the PCI memory comprises transmitting the data of the user memory to the PCI memory by using a DMA controller included in an I/O processor on the NS card.
[6] The method of claim 1, wherein, when the request for the disk I/O is the writing request, the storing the data of the PCI memory in the block of the disk comprises, if the transmission of the data of the user memory to the PCI memory is completed, storing the data of the PCI memory in the block of the disk.
PCT/KR2007/006219 2006-12-04 2007-12-04 Method of accelerating i/o between user memory and disk using pci memory WO2008069531A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR10-2006-0121623 2006-12-04
KR20060121623 2006-12-04
KR1020070098180A KR20080051021A (en) 2006-12-04 2007-09-28 Method for accelerating i/o between user memory and disk using pci memory
KR10-2007-0098180 2007-09-28

Publications (1)

Publication Number Publication Date
WO2008069531A1 true WO2008069531A1 (en) 2008-06-12

Family

ID=39492349

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2007/006219 WO2008069531A1 (en) 2006-12-04 2007-12-04 Method of accelerating i/o between user memory and disk using pci memory

Country Status (1)

Country Link
WO (1) WO2008069531A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010037406A1 (en) * 1997-10-14 2001-11-01 Philbrick Clive M. Intelligent network storage interface system
US20030126348A1 (en) * 2001-12-29 2003-07-03 Lg Electronics Inc. Multi-processing memory duplication system
KR20050065133A (en) * 2003-12-24 2005-06-29 한국전자통신연구원 Network card having zero-copy transmission function, server and method thereof
KR20060024746A (en) * 2004-09-14 2006-03-17 한국과학기술원 Contents delivery accelerator apparatus for increasing transmission efficiency between disks of server and network



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07851210

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07851210

Country of ref document: EP

Kind code of ref document: A1