US7152231B1 - High speed interprocess communication - Google Patents

High speed interprocess communication

Info

Publication number
US7152231B1
US7152231B1 (application US09/431,449)
Authority
US
United States
Prior art keywords
message
data
location
memory
accumulated
Prior art date
Legal status
Expired - Lifetime
Application number
US09/431,449
Inventor
Anthony P. Galluscio
William L. Holt
Douglas M. Dyer
Albert T. Montroy
Current Assignee
Commstech LLC
Original Assignee
Harris Exigent Inc
Priority date
Filing date
Publication date
Family has litigation
US case filed in Texas Western District Court: https://portal.unifiedpatents.com/litigation/Texas%20Western%20District%20Court/case/6%3A19-cv-00296
First worldwide family litigation filed: https://patents.darts-ip.com/?family=37526658
Priority to US09/431,449 priority Critical patent/US7152231B1/en
Application filed by Harris Exigent Inc filed Critical Harris Exigent Inc
Assigned to EXIGENT INTERNATIONAL, INC. reassignment EXIGENT INTERNATIONAL, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DYER, DOUGLAS M., GALLUSCIO, ANTHONY P., HOLT, WILLIAM L., MONTROY, ALBERT T.
Assigned to HARRIS-EXIGENT, INC. reassignment HARRIS-EXIGENT, INC. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: EXIGENT INTERNATIONAL, INC.
Publication of US7152231B1 publication Critical patent/US7152231B1/en
Application granted granted Critical
Assigned to HARRIS TECHNICAL SERVICES CORPORATION reassignment HARRIS TECHNICAL SERVICES CORPORATION MERGER (SEE DOCUMENT FOR DETAILS). Assignors: HARRIS-EXIGENT, INC.
Assigned to HARRIS IT SERVICES CORPORATION reassignment HARRIS IT SERVICES CORPORATION MERGER AND CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: HARRIS TECHNICAL SERVICES CORPORATION, MULTIMAX INCORPORATED
Assigned to HARRIS CORPORATION reassignment HARRIS CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HARRIS IT SERVICES CORPORATION
Assigned to COMMSTECH LLC reassignment COMMSTECH LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HARRIS CORPORATION

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/54: Interprogram communication
    • G06F9/544: Buffers; Shared memory; Pipes
    • G06F9/546: Message passing systems or structures, e.g. queues
    • G06F2209/00: Indexing scheme relating to G06F9/00
    • G06F2209/54: Indexing scheme relating to G06F9/54
    • G06F2209/548: Queue

Definitions

  • FIG. 1 is a schematic illustration of a traditional IPC architecture.
  • FIG. 2 is a schematic illustration of a high-speed IPC architecture in accordance with the inventive arrangements.
  • FIG. 3 is a process diagram showing the passing of data using a high-speed IPC architecture in accordance with the inventive arrangements.
  • FIG. 4 is a schematic representation of a memory offset.
  • FIG. 5 is a flow chart illustrating an algorithm for high-speed IPC.
  • FIG. 1 illustrates the commonality between the three traditional mechanisms for IPC.
  • Two processes 1, 2 communicate using a shared memory space 5 contained within an operating system kernel 4.
  • Process 1 can copy n bytes of data 6 from a user memory space 3 into a shared memory space 5 in the operating system kernel 4.
  • Process 2 can copy the same n bytes of data 6 from the shared memory space 5 in the operating system kernel 4 into user memory space 3. Therefore, FIG. 1 shows a minimum overhead of two system calls and 2n byte copies to communicate n bytes of data between two processes.
  • A method for high speed IPC can provide extremely fast IPC both by communicating message data in a shared region of random access memory (RAM) external to the operating system kernel and by limiting the movement of the data. Processes are notified of the location of the message data rather than actually receiving a copy of the message data. The recipient process subsequently can read or manipulate (process) the message data in place. As a result, the number of data copies necessary for high speed IPC is minimized.
  • A method for high speed IPC utilizes a message list in a message buffer for storing memory offsets set by atomic assignment. Each memory offset can denote a location in the shared region of RAM where a process attached to the shared region can manipulate data stored therein.
  • The message list can be implemented using the common data structure, “queue”. When one process messages another, the process need only insert a memory offset to the message data in the recipient's message queue.
  • FIG. 2 provides a high-level perspective of the relationships between the message data 10, the operating system kernel 14, and two processes 11, 12 using high speed IPC. From FIG. 2, it will be apparent to one skilled in the art that the message data 10 resides in a shared region of RAM 15 common to both processes 11, 12. Still, one skilled in the art will further recognize that although the shared region of RAM is common to both processes 11, 12, each process 11, 12 can maintain a virtual memory page therein. That is, each process 11, 12 can maintain a different and distinct memory map of the shared region of user RAM 15.
  • Both processes 11, 12 can reconcile each other's memory mapping into the shared region of RAM 15 by communicating to one another the location of data in the shared RAM relative to a commonly known address.
  • High speed IPC permits the use of the shared region of RAM 15 despite differing memory maps among the processes 11, 12.
  • Message passing does not require storing message data 10 in operating system kernel space 14. Therefore, in the preferred embodiment, system calls are not required to write and read the data 10.
  • The elevated risk associated with utilizing operating system kernel space 14 is eliminated.
  • The inventive method avoids the risk of a process losing CPU control upon invoking the system call required to read or write the data 10.
  • The inventive IPC mechanism can provide the level of service required for real-time applications.
  • FIG. 3 illustrates an exemplary conveyance of data 30 between two processes 21, 22 using the inventive method for high speed IPC.
  • The invention is not limited in this regard. Rather, the invention can include more than two processes communicating through a shared message store.
  • The conveyance consists of four essential steps. Initially, a first process 22 and a second process 21 can attach to a small message buffer 25 from a configured pool of message buffers 24 in a shared region of RAM.
  • The pool of message buffers 24 can include small 25, medium 26, and large 27 buffers.
  • The invention is not limited in this regard. Rather, any number or type of message buffers will suffice for operation of the present invention.
  • For instance, a “first-fit” allocation strategy, a “best-fit” strategy, or an “approximate-fit” strategy can suffice. Still, the preferred combination of all buffers and the management thereof can optimize memory utilization while reducing the cost of memory management.
  • The first process 22 can accumulate message data 30 in a location in the small message buffer 25. Subsequently, the first process 22 can notify a second process 21 of the location of the data 30 in the small message buffer by adding the location of the data 30 into a message list. Specifically, the first process 22 can insert a memory offset 29 of the message data 30 into a message queue 23 associated with the second process 21. As shown in FIG. 4, a memory offset B represents the number of bytes C from the beginning A of a buffer D, in which data E can be located. In consequence of using memory offsets, rather than absolute addresses (pointers), two processes can reference a single piece of data in a common region of RAM, despite having different memory maps of the memory region.
  • The memory offset 29 can indicate to each process 21, 22 the number of bytes from a common address of the small message buffer 25 in which the message data 30 can be located.
  • Message queues 23, 28 preferably are created in the shared region of RAM.
  • Each message queue 23, 28 is a list of messages which can be represented by the common data structure, “queue”, which, in the preferred embodiment, can handle integer values in a first-in-first-out (FIFO) order.
  • Each message queue 23, 28, alternatively referred to as an “inbox”, can contain an administrative area having variables for administering the queue of integer offsets. Those variables may include variables for tracking the position of the front and rear elements of the queue and the queue size.
  • The first process 22 can access the message queue 23 of the second process 21 by addressing the message queue 23 by name.
  • The first process 22 can either have a priori knowledge of the name of the message queue 23, or the first process 22 can rely on a naming service.
  • The first process 22 can cross-reference in a naming service the process identification number corresponding to the second process 21 with the location of the message queue 23 of the second process 21.
  • The naming service can be as simple as a file that contains names of message queues mapped to processes.
  • The naming of message queues can depend on the nature of the specific operating system. For instance, in the Windows NT operating system, the operating system names the message queue. In contrast, the Unix operating system uses integer identifiers to identify a message queue.
  • The memory offset 29 can be logically inserted in the message queue 23 of the second process 21, but advantageously, because the memory offset 29 can be internally represented as an integer, the memory offset 29 can be physically assigned to a data member in a node in the message queue 23 using a simple integer assignment available, for instance, in the C, C++ or Java programming languages.
  • The mechanism for assignment can vary depending on the implementation of the queue data structure.
  • The first process 22 can calculate the address of the first element in the message queue 23, and can make an atomic assignment of the memory offset 29 to that address.
  • Such a C-style assignment statement can atomically assign the memory offset 29 of the message data 30 to the address of the first element in message queue 23.
  • The atomic assignment can be contrasted with the case of copying a data message using a memory copy, such as “memcpy”, which performs an assignment for each byte in the data message.
  • The front_of_queue variable can be stored at a pre-determined location, as specified by the message queue structure, in the beginning of the small message buffer 25. Still, one skilled in the art will recognize that the message queue 23 needn't be stored in the small message buffer 25. Rather, the message queue 23 can be stored in another message buffer, into which access can be provided using any of the traditional IPC techniques. Alternatively and advantageously, access to the message buffer could occur using high speed IPC.
  • The second process 21 can identify the memory offset 29 placed in the corresponding message queue 23. Specifically, the second process 21 can poll the message queue 23 waiting for a new memory offset 29 to arrive. Alternatively, the first process 22 can signal the second process 21 that new message data 30 has arrived. Either mechanism can be acceptable depending upon specific application requirements.
  • The second process 21 can manipulate the accumulated message data 30 in place in the small message buffer 25 at the memory location denoted by the memory offset 29.
  • The second process 21 can use the accumulated data 30 in accordance with the unique data requirements of the second process 21.
  • The second process 21 can release the small message buffer 25 using conventional memory management techniques.
  • FIG. 5 is a flow chart describing a method for high speed IPC.
  • The flow chart depicts a single process which can communicate with another process using the inventive method.
  • The method begins in step 100, where a process can attempt to attach to a message buffer in a shared region of RAM, exclusive of the operating system kernel.
  • A suitable locking mechanism is a mutex, which allows an atomic check and set of a variable that protects a shared region. If one process has the mutex, other processes are blocked from accessing the shared region until the mutex is released.
  • If the process is the first activated process in the system, a message buffer in the shared region of RAM is created in step 104 and configured in step 106.
  • The process creating the shared region of RAM obtains a mutex and releases the mutex only when the shared region is created and configured. The release of the mutex acts as notification to other interested processes that the shared region of RAM is ready for use.
  • Configuring the shared region of RAM can include naming the shared region, initializing the shared region variables in an administrative area, and sizing the buffer pools.
  • The shared region may be configured using a stored configuration that is merely retrieved by the process and applied to the shared region.
  • The process can create a message queue in the shared region corresponding to the process.
  • The message queue can be used to store incoming memory offsets, placed in the message queue by other processes.
  • The process can perform normal intraprocess operations until a need for IPC arises, either where the process is a recipient or a sender of a message, as determined in decision step 112.
  • In step 122, the first process can obtain a memory offset to free memory space in the message buffer.
  • Obtaining a memory offset to free memory requires the use of a memory management mechanism for allocating buffers in a shared region of user memory.
  • A buffer pool allocator for high speed IPC can abstract the details of managing memory offsets into the message buffer.
  • The buffer pool allocator for high speed IPC can be implemented using techniques well known in the art and thoroughly documented in Kernighan and Ritchie, “The C Programming Language: 2nd Edition”, pp. 185–189, incorporated herein by reference.
  • The first process can accumulate message data for the benefit of the second process, with the writing beginning at the location corresponding to the memory offset.
  • The first process can place the memory offset in the message queue corresponding to the second process.
  • The placement of the memory offset can be an atomic assignment to an integer location in the shared region of RAM. The act of placing the memory offset in the message queue is tantamount to notifying the second process of an attempt at IPC.
  • The second process can identify a memory offset in the message queue corresponding to the second process.
  • The second process can retrieve the memory offset and, in step 118, use the memory offset to access the data accumulated by the first process at the appropriate location in the message buffer.
  • The second process need only release the buffer using the above-identified buffer pool allocator.
  • The significant differences between the inventive method and traditional IPC mechanisms include the present method's use of a shared region of RAM to store accumulated data. As a result of the use of the shared region, the inventive method does not require operating system calls to write and read accumulated data. In addition, because the present method uses a shared region of RAM instead of a memory region in the operating system kernel, the reconfiguration of the shared region does not require the rebooting of the operating system. Finally, high speed IPC provides a faster and safer mechanism for IPC in that the overhead associated with IPC is minimized from two system calls and 2n bytes of data movement to a minimal n bytes of data movement.

Abstract

A method for high speed interprocess communications comprises four steps. Initially, first and second processes can be attached to a message buffer in a shared region of user memory. In addition, each process can have a corresponding message queue. In a preferred embodiment, the attaching step comprises the step of attaching first and second processes to a message buffer in a shared region of user memory exclusive of operating system kernel space. Second, message data from the first process can be accumulated in a location in the message buffer. Third, a memory offset corresponding to the location in the message buffer can be placed in the message queue of the second process. Finally, the accumulated data at the location corresponding to the offset can be used in the second process. Consequently, the accumulated message data is transferred from the first process to the second process with minimal data transfer overhead.

Description

CROSS REFERENCE TO RELATED APPLICATIONS
(Not Applicable)
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
(Not Applicable)
BACKGROUND OF THE INVENTION
1. Technical Field
This invention relates to the field of network interprocess communications and more particularly to a system and method for high speed interprocess communications.
2. Description of the Related Art
Interprocess communications (IPC) includes both process synchronization and messaging. Messaging, or message passing, can be accomplished using pipes, sockets, or message queues. Pipes provide a byte stream mechanism for transferring data between processes. System calls, for example read and write calls, provide the underlying mechanisms for reading and writing data to a pipe. As the writing process writes to a pipe, bytes are copied from the sending process into a shared data page. Subsequently, when the reading process reads, the bytes are copied out of the shared data page to the destination process. Sending a message using a pipe consumes a minimum of two memory copies moving a total of 2n bytes, where n is the number of bytes in the message. Sockets provide an abstraction mechanism for application programs which can simplify access to communications protocols. Although network communications provide the primary impetus for sockets, sockets may be used for IPC, as well. Still, the transmission of data between processes using a socket can consume time necessary to perform the abstraction overhead, two system calls, for example readv() and writev(), and a minimum of two memory copies moving, in total, 2n bytes. A message queue is an IPC mechanism that is typically managed by an operating system kernel. Typically, the implementation of a message queue is hidden.
Traditional IPC mechanisms can provide IPC between two processes using virtual memory in a shared memory space contained within an operating system kernel. The use of virtual memory implies that, although all processes share the same physical memory space, each process can map a region of the shared memory space differently from other processes. Thus, data residing at one address in the shared memory space can differ physically from the data residing at the same address in the same shared memory space as interpreted by the memory map of a different process.
In traditional IPC, a first process can copy n bytes of data from user memory space into a shared memory space in the operating system kernel. Subsequently, using a system call to the operating system kernel, a second process can copy the same n bytes of data from the shared memory space into the user memory space. Therefore, traditional IPC mechanisms require a minimum overhead of 2n byte copies to communicate n bytes of data between the two processes.
In addition, all methods of traditional IPC require some type of interaction with the operating system kernel. In particular, traditional IPC mechanisms require a minimum of two system calls to the operating system kernel. Moving data in and out of an operating system kernel can include some risk. Specifically, not only must each process move n bytes of data, but each process risks losing CPU control upon invoking the system call required to read or write the data, respectively.
Analogously, computer scientists have recognized the unnecessary expense of passing a message from one process to another. In fact, legacy third-generation programming languages which provide dynamic memory allocation, for example Fortran, inefficiently pass data between processes by copying the data stored in one region of memory, and storing the data in a different region of memory. Subsequently, the recipient function can process the data before returning a copy of the same using the same mechanism. Recognizing the inefficiencies of this type of message passing, computer scientists have adopted pointer passing as an alternative to data passing when messaging a process. In pointer passing, a recipient process receives only an address of a location in memory of the message data. Subsequently, the recipient can manipulate the data at the passed address, in place, without the need for excessive data copies.
Still, in third-generation languages which have adopted pointer passing, for example C or C++, the communicating processes ultimately share one memory mapping of a shared memory space for passing data. In fact, in the absence of a single memory map of shared memory space, present methods of pointer passing become unworkable because data residing at an address in one memory space is not equivalent to the data residing at the same address in another memory space. Therefore, traditional pointer passing cannot be used to resolve the inefficiencies of traditional IPC in which different processes have different memory maps of a shared region of user memory by virtue of the virtual memory scheme associated with network IPC.
In view of the inefficiencies of traditional IPC, traditional mechanisms for IPC are not suitable for real time command and control systems which can require fail-safe and extremely fast conveyancing of information between processes. For example, copying data can be expensive in terms of processor overhead and time delay. In addition, moving data in and out of an operating system kernel can include some risk. Thus, present IPC mechanisms do not provide the level of service required for real-time applications.
SUMMARY OF THE INVENTION
In a preferred embodiment, a method for high speed interprocess communications can comprise four steps. Initially, first and second processes can be attached to a message buffer in a shared region of user memory (RAM). Moreover, message lists corresponding to each of the processes can be established in the shared region. In particular, the attaching step can comprise the steps of: detecting a previously created shared region of user RAM; if a shared region of RAM is not detected, creating and configuring a shared region of user memory for storing accumulated data; and, attaching to the created and configured shared region of RAM. In a preferred embodiment, the attaching step comprises the step of attaching the first and second processes to a message buffer in a shared region of RAM exclusive of operating system kernel space. Message data from the first process can be accumulated in a location in the message buffer.
Advantageously, the message list can be implemented as a message queue using the common data structure, “queue”. As a result, subsequent to the accumulating step, a memory offset corresponding to the location in the message buffer can be added to the message queue of the second process. The adding step can comprise the steps of: retrieving a memory offset in the message buffer corresponding to the location of message data accumulated by the first process; and, inserting the memory offset in the message queue corresponding to the second process. Moreover, the inserting step can comprise the step of atomically assigning the memory offset to an integer location in the message queue corresponding to the second process.
Finally, the accumulated message data at the location corresponding to the memory offset can be processed in the second process. The processing step can comprise the steps of: identifying a memory offset in the message list corresponding to the second process; processing in the second process message data stored at a location in the message buffer corresponding to the memory offset; and, releasing the message buffer. Consequently, the accumulated message data is transferred from the first process to the second process with minimal data transfer overhead.
Viewed from a system architecture standpoint, a method for configuring high speed interprocess communications between first and second processes can include several steps. Initially, the method can include disposing a message buffer in a shared region of RAM shared between first and second processes. In particular, the disposing step can comprise the steps of: creating and configuring a message buffer in a shared region of RAM exclusive of operating system kernel space; and, creating a message list in the shared region for each process, whereby the message list can store memory offsets of message data stored in the message buffer.
The inventive method can include the step of accumulating message data from the first process in a location in the message buffer and adding the memory offset to a message list corresponding to the second process. Advantageously, the message list can be implemented as a message queue. In consequence, the adding step can comprise the steps of: retrieving a memory offset in the message buffer, the memory offset corresponding to the location of message data accumulated by the first process; and, inserting the memory offset in the message queue corresponding to the second process. Moreover, the inserting step can comprise the step of atomically assigning the memory offset to an integer location in the message queue corresponding to the second process.
Finally, the method can include processing in the second process the accumulated message data stored in the message buffer at a location corresponding to the memory offset. In particular, the processing step comprises the steps of: identifying a memory offset in the message list corresponding to the second process; using in the second process accumulated message data at a location in the message buffer corresponding to the memory offset; and, releasing the message buffer. Thus, as a result of the inventive method, the accumulated message data can be transferred from the first process to the second process with minimal data transfer overhead.
BRIEF DESCRIPTION OF THE DRAWINGS
There are presently shown in the drawings embodiments which are presently preferred, it being understood, however, that the invention is not limited to the precise arrangements and instrumentalities shown.
FIG. 1 is a schematic illustration of a traditional IPC architecture.
FIG. 2 is a schematic illustration of a high-speed IPC architecture in accordance with the inventive arrangements.
FIG. 3 is a process diagram showing the passing of data using a high-speed IPC architecture in accordance with the inventive arrangements.
FIG. 4 is a schematic representation of a memory offset.
FIG. 5 is a flow chart illustrating an algorithm for high-speed IPC.
DETAILED DESCRIPTION OF THE INVENTION
The traditional interprocess communication (IPC) architecture includes drawbacks which preclude the use of traditional IPC mechanisms in real time command and control systems which can require fail-safe and extremely fast conveyance of information between processes. FIG. 1 illustrates the commonality among the three traditional mechanisms for IPC. As shown in the figure, two processes 1, 2 communicate using a shared memory space 5 contained within an operating system kernel 4. In particular, process 1 can copy n bytes of data 6 from a user memory space 3 into a shared memory space 5 in the operating system kernel 4. Subsequently, using a system call, process 2 can copy the same n bytes of data 6 from the shared memory space 5 in the operating system kernel 4 into user memory space 3. Therefore, FIG. 1 shows a minimum overhead of two system calls and 2n byte copies to communicate n bytes of data between two processes.
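A minimal POSIX sketch of this two-copy path, assuming a pipe as the kernel-mediated channel (any of the three traditional mechanisms behaves analogously): the sender's write() copies n bytes from user space into the kernel, and the receiver's read() copies the same n bytes back out.

```c
#include <assert.h>
#include <string.h>
#include <unistd.h>

/* Sketch of the traditional IPC path of FIG. 1: the sender's write()
 * copies n bytes from user space into a kernel buffer, and the
 * receiver's read() copies the same n bytes back out -- two system
 * calls and 2n bytes moved for one n-byte message. */
enum { N = 1024 };

ssize_t send_traditional(int fds[2], const char *msg, size_t n) {
    return write(fds[1], msg, n);   /* system call #1: user -> kernel copy */
}

ssize_t recv_traditional(int fds[2], char *out, size_t n) {
    return read(fds[0], out, n);    /* system call #2: kernel -> user copy */
}
```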
In contrast, a method for high speed IPC can provide extremely fast IPC both by communicating message data in a shared region of random access memory (RAM) external to the operating system kernel and by limiting the movement of the data. Processes are notified of the location of the message data rather than actually receiving a copy of the message data. The recipient process subsequently can read or manipulate (process) the message data in place. As a result, the number of data copies necessary for high speed IPC is minimized. Notably, a method for high speed IPC utilizes a message list in a message buffer for storing memory offsets set by atomic assignment. Each memory offset can denote a location in the shared region of RAM where a process attached to the shared region can manipulate data stored therein. Specifically, the message list can be implemented using the common data structure, “queue”. When one process messages another, the process need only insert a memory offset to the message data in the recipient's message queue.
FIG. 2 provides a high-level perspective of the relationships between the message data 10, the operating system kernel 14, and two processes 11, 12 using high speed IPC. From FIG. 2, it will be apparent to one skilled in the art that the message data 10 resides in a shared region of RAM 15 common to both processes 11, 12. Still, one skilled in the art will further recognize that although the shared region of RAM is common to both processes 11, 12, each process 11, 12 can maintain a virtual memory page therein. That is, each process 11, 12 can maintain a different and distinct memory map of the shared region of user RAM 15. However, unlike prior art network IPC where processes do not reconcile differing memory maps of shared RAM, in the preferred embodiment, both processes 11, 12 can reconcile each other's memory mapping into the shared region of RAM 15 by communicating to one another the location of data in the shared RAM relative to a commonly known address.
Capitalizing on this reconciliation, high speed IPC permits the use of the shared region of RAM 15 despite differing memory maps among the processes 11, 12. As a result, in the preferred embodiment, message passing does not require storing message data 10 in operating system kernel space 14. Therefore, in the preferred embodiment, system calls are not required to write and read the data 10. Thus, the elevated risk associated with utilizing operating system kernel space 14 is eliminated. Specifically, the inventive method avoids the risk of a process losing CPU control upon invoking the system call required to read or write the data 10. Hence, the inventive IPC mechanism can provide the level of service required for real-time applications.
FIG. 3 illustrates an exemplary conveyance of data 30 between two processes 21, 22 using the inventive method for high speed IPC. Notwithstanding, the invention is not limited in this regard. Rather, the invention can include more than two processes communicating through a shared message store. The conveyance consists of four essential steps. Initially, a first process 22 and a second process 21 can attach to a small message buffer 25 from a configured pool of message buffers 24 in a shared region of RAM. In the preferred embodiment, the pool of message buffers 24 can include small 25, medium 26 and large buffers 27. However, the invention is not limited in this regard. Rather, any number or type of message buffers will suffice for operation of the present invention. For instance, buffers managed under a "first-fit", "best-fit" or "approximate-fit" allocation strategy can also suffice. Still, the preferred combination of all buffers and the management thereof can optimize memory utilization while reducing the cost of memory management.
The first process 22 can accumulate message data 30 in a location in the small message buffer 25. Subsequently, the first process 22 can notify a second process 21 of the location of the data 30 in the small message buffer by adding the location of the data 30 into a message list. Specifically, the first process 22 can insert a memory offset 29 of the message data 30 into a message queue 23 associated with the second process 21. As shown in FIG. 4, a memory offset B represents the number of bytes C from the beginning A of a buffer D, in which data E can be located. In consequence of using memory offsets, rather than absolute addresses (pointers), two processes can reference a single piece of data in a common region of RAM, despite having different memory maps of the memory region. Hence, although the first process 22 and the second process 21 may have differing memory maps of the small message buffer 25, the memory offset 29 can indicate to each process 21, 22 the number of bytes from a common address of the small message buffer 25 in which the message data 30 can be located.
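The offset arithmetic of FIG. 4 can be sketched as follows. The two "mappings" of the shared buffer are simulated here with two copies of the same buffer image at different addresses; the helper names are illustrative assumptions.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch: two processes may map the same shared buffer at
 * different virtual addresses, so they exchange offsets (bytes from the
 * buffer beginning, point A in FIG. 4) instead of raw pointers. */

/* Sender side: convert its pointer into an offset from its base. */
static size_t to_offset(const char *base, const char *data) {
    return (size_t)(data - base);
}

/* Receiver side: resolve the offset against its own base address. */
static char *from_offset(char *base, size_t offset) {
    return base + offset;
}
```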
Message queues 23, 28 preferably are created in the shared region of RAM. Each message queue 23, 28 is a list of messages which can be represented by the common data structure, “queue”, which, in the preferred embodiment, can handle integer values in a first-in-first-out (FIFO) order. Each message queue 23, 28, alternatively referred to as an “inbox”, can contain an administrative area having variables for administering the queue of integer offsets. Those variables may include variables for tracking the position of the front and rear elements of the queue and the queue size.
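The "inbox" described above can be sketched as an administrative area (front, rear, size) followed by a fixed array of integer offsets handled in FIFO order. Field names and the capacity are illustrative assumptions, not taken from the patent.

```c
#include <assert.h>

/* Sketch of a message queue ("inbox") laid out so the whole structure
 * can live in the shared region of RAM. */
#define INBOX_CAPACITY 32

struct inbox {
    int front;                    /* index of the oldest element */
    int rear;                     /* index of the next free slot */
    int size;                     /* current number of queued offsets */
    int offsets[INBOX_CAPACITY];  /* FIFO of integer memory offsets */
};

static int inbox_put(struct inbox *q, int offset) {
    if (q->size == INBOX_CAPACITY)
        return -1;                          /* queue full */
    q->offsets[q->rear] = offset;           /* single integer assignment */
    q->rear = (q->rear + 1) % INBOX_CAPACITY;
    q->size++;
    return 0;
}

static int inbox_get(struct inbox *q, int *offset) {
    if (q->size == 0)
        return -1;                          /* queue empty */
    *offset = q->offsets[q->front];
    q->front = (q->front + 1) % INBOX_CAPACITY;
    q->size--;
    return 0;
}
```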
The first process 22 can access the message queue 23 of the second process 21 by addressing the message queue 23 by name. The first process 22 can either have a priori knowledge of the name of the message queue 23, or the first process 22 can rely on a naming service. Specifically, the first process 22 can cross-reference in a naming service the process identification number corresponding to the second process 21 with the location of the message queue 23 of the second process 21. The naming service can be as simple as a file that contains names of message queues mapped to processes. The naming of message queues can depend on the nature of the specific operating system. For instance, in the Windows NT operating system, the operating system names the message queue. In contrast, the Unix operating system uses integer identifiers to identify a message queue.
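The simplest naming service mentioned above, a mapping of process identification numbers to message queue names, can be sketched as a table (in practice, a file). The entries and names here are invented for illustration.

```c
#include <assert.h>
#include <string.h>

/* Sketch of a naming service: cross-reference a process identification
 * number with the name of that process's message queue. */
struct name_entry {
    int pid;                 /* process identification number */
    const char *queue_name;  /* name of that process's inbox */
};

static const char *lookup_queue(const struct name_entry *table, int n, int pid) {
    for (int i = 0; i < n; i++)
        if (table[i].pid == pid)
            return table[i].queue_name;
    return NULL;  /* unknown process */
}
```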
The memory offset 29 can be logically inserted in the message queue 23 of the second process 21, but advantageously, because the memory offset 29 can be internally represented as an integer, the memory offset 29 can be physically assigned to a data member in a node in the message queue 23 using a simple integer assignment available, for instance, in the C, C++ or Java programming languages. The mechanism for assignment can vary depending on the implementation of the queue data structure. However, as an example, the first process 22 can calculate the address of the first element in the message queue 23, and can make an atomic assignment of the memory offset 29 to that address.
Specifically, in C-syntax, the physical assignment can consist of “*(inbox_address+front_of_queue)=offset_of_message data”. This C-style statement can atomically assign the memory offset 29 of the message data 30 to the address of the first element in message queue 23. The assignment can be atomic in that a single instruction is required, e.g. “newValue=5”. In the case of an atomic assignment, the entire integer memory offset 29 can be written to the message queue 23 using the single instruction. The atomic assignment can be contrasted with the case of copying a data message using a memory copy, such as “memcpy”, which performs an assignment for each byte in the data message.
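The C-style statement above can be spelled out as a small function; the parameter names are taken directly from the statement, and the surrounding function is an illustrative wrapper.

```c
#include <assert.h>

/* inbox_address points at the start of the queue's integer slots in the
 * shared region, and front_of_queue indexes the slot to fill. Writing
 * the whole offset with one integer store contrasts with memcpy, which
 * would move a payload one byte at a time. */
static void post_offset(int *inbox_address, int front_of_queue,
                        int offset_of_message_data) {
    *(inbox_address + front_of_queue) = offset_of_message_data;  /* one store */
}
```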
The front_of_queue variable can be stored at a pre-determined location, as specified by the message queue structure in the beginning of the small message buffer 25. Still, one skilled in the art will recognize that the message queue 23 needn't be stored in the small message buffer 25. Rather, the message queue 23 can be stored in another message buffer, into which access can be provided using any of the traditional IPC techniques. Alternatively and advantageously, access to the message buffer could occur using high speed IPC.
Finally, the second process 21 can identify the memory offset 29 placed in the corresponding message queue 23. Specifically, the second process 21 can poll the message queue 23 waiting for a new memory offset 29 to arrive. Alternatively, the first process 22 can signal the second process 21 that new message data 30 has arrived. Either mechanism can be acceptable depending upon specific application requirements.
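The polling alternative described above can be sketched as a bounded loop that spins on an inbox slot until a memory offset other than the empty marker appears. The marker value and try bound are illustrative assumptions.

```c
#include <assert.h>

/* Receiver-side polling: check the slot repeatedly for a new offset,
 * up to max_tries attempts. */
static int poll_for_offset(volatile const int *slot, int empty_marker,
                           int max_tries) {
    for (int i = 0; i < max_tries; i++) {
        int v = *slot;
        if (v != empty_marker)
            return v;     /* a new offset has arrived */
    }
    return empty_marker;  /* gave up; caller may block or retry */
}
```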
Having identified the memory offset 29, the second process 21 can manipulate the accumulated message data 30 in place in the small message buffer 25 corresponding to the memory location denoted by the memory offset 29. The second process 21 can use the accumulated data 30 in accordance with the unique data requirements of the second process 21. When finished, the second process 21 can release the small message buffer 25 using conventional memory management techniques.
FIG. 5 is a flow chart describing a method for high speed IPC. The flow chart depicts a single process which can communicate with another process using the inventive method. As shown in the drawings, the method begins in step 100 where a process can attempt to attach to a message buffer in a shared region of RAM, exclusive of the operating system kernel. One skilled in the art will recognize that each attempt to attach to the message buffer will include moderation by a locking mechanism in order to prevent a logic race condition. One such example of a locking mechanism is a mutex which allows an atomic check and set of a variable that protects a shared region. If one process has the mutex, other processes are blocked from accessing the shared region until the mutex is released.
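The "atomic check and set" at the heart of the mutex can be sketched with a lock flag in the shared region, claimed by an atomic exchange. This is a simplified stand-in; a production system would use a real inter-process mutex (for example, a PTHREAD_PROCESS_SHARED pthread_mutex_t).

```c
#include <assert.h>
#include <stdatomic.h>

/* A process that exchanges the flag and reads back 0 owns the shared
 * region; any other process sees 1 and is blocked until release. */
static atomic_int region_lock = 0;

static int try_acquire(atomic_int *lock) {
    return atomic_exchange(lock, 1) == 0;  /* 1 on success, 0 if held */
}

static void release(atomic_int *lock) {
    atomic_store(lock, 0);
}
```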
In decision step 102, if the process is the first activated process in the system, in step 104, a message buffer in the shared region of RAM is created and, in step 106, configured. Preferably, the process creating the shared region of RAM obtains a mutex and releases the mutex only when the shared region is created and configured. The release of the mutex acts as notification to other interested processes that the shared region of RAM is ready for use.
Configuring the shared region of RAM can include naming the shared region, initializing the shared region variables in an administrative area, and sizing the buffer pools. Notably, the shared region may be configured using a stored configuration that is merely retrieved by the process and applied to the shared region.
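On POSIX systems, steps 104 and 106 might be sketched as follows: create a named shared region outside the kernel's message-passing facilities, size it, and initialize an administrative header. The header layout and the region name are illustrative assumptions, not taken from the patent.

```c
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Administrative area for the shared region: name, pool sizing, and an
 * initialization flag set last, once configuration completes. */
struct region_admin {
    char name[32];        /* name of the shared region */
    size_t pool_bytes;    /* bytes reserved for the buffer pools */
    int initialized;      /* set last, after configuration completes */
};

static struct region_admin *create_region(const char *name, size_t total) {
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0) return NULL;
    if (ftruncate(fd, (off_t)total) != 0) { close(fd); return NULL; }
    void *base = mmap(NULL, total, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                          /* the mapping survives the close */
    if (base == MAP_FAILED) return NULL;
    struct region_admin *admin = base;
    strncpy(admin->name, name, sizeof(admin->name) - 1);
    admin->pool_bytes = total - sizeof(*admin);
    admin->initialized = 1;             /* akin to releasing the mutex */
    return admin;
}
```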
Whether the process creates and configures a new shared region or attaches to a previously created shared region, in step 108, the process can create a message queue in the shared region corresponding to the process. In particular, the message queue can be used to store incoming memory offsets, placed in the message queue by other processes. Having attached to a message buffer and created a message queue, in step 110, the process can perform normal intraprocess operations until a need for IPC arises, either where the process is a recipient or sender of a message, as determined in decision step 112.
If the process is a first process attempting to transmit data to a second process, the first process, in step 122 can obtain a memory offset to free memory space in the message buffer. One skilled in the art will recognize that obtaining a memory offset to free memory requires the use of a memory management mechanism for allocating buffers in a shared region of user memory. Still, one skilled in the art will further recognize the widespread availability of memory management mechanisms suitable for accomplishing the same. For example, just as the “malloc( )” function included as part of the ANSI C standard library can abstract the details of memory management, a buffer pool allocator for high speed IPC can abstract the details of managing memory offsets into the message buffer. The buffer pool allocator for high speed IPC can be implemented using techniques well known in the art and thoroughly documented in Kernighan and Ritchie, “The C Programming Language: 2nd Edition”, pp. 185–189, incorporated herein by reference.
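A buffer pool allocator that hands out integer offsets rather than pointers, in the spirit of the K&R allocator cited above, can be sketched with a free list over a fixed pool of equal-sized buffers. Sizes and names are assumptions for the example.

```c
#include <assert.h>
#include <stddef.h>

/* Fixed pool of equal-sized buffers, tracked with a free list of
 * buffer indices; allocations return byte offsets into the pool. */
#define POOL_BUFFERS 8
#define BUFFER_BYTES 256

struct buffer_pool {
    char storage[POOL_BUFFERS * BUFFER_BYTES];
    int free_list[POOL_BUFFERS];   /* indices of free buffers */
    int free_count;
};

static void pool_init(struct buffer_pool *p) {
    p->free_count = POOL_BUFFERS;
    for (int i = 0; i < POOL_BUFFERS; i++)
        p->free_list[i] = i;
}

/* Returns the byte offset of a free buffer, or -1 if the pool is empty. */
static long pool_alloc(struct buffer_pool *p) {
    if (p->free_count == 0) return -1;
    return (long)p->free_list[--p->free_count] * BUFFER_BYTES;
}

/* Releases the buffer at the given offset back to the pool (step 120). */
static void pool_free(struct buffer_pool *p, long offset) {
    p->free_list[p->free_count++] = (int)(offset / BUFFER_BYTES);
}
```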
Subsequently, in step 124, the first process can accumulate message data for the benefit of the second process with the writing beginning at the location corresponding to the memory offset. When finished accumulating the message data in the message buffer, in steps 126 and 128, the first process can place the memory offset in the message queue corresponding to the second process. Significantly, the placement of the memory offset can be an atomic assignment to an integer location in the shared region of RAM. The act of placing the memory offset in the message queue is tantamount to notifying the second process of an attempt at IPC.
Correspondingly, if the process is a second process receiving a request for IPC from a first process, in step 114, the second process can identify a memory offset in the message queue corresponding to the second process. In step 116, the second process can retrieve the memory offset, and in step 118, the second process can use the memory offset to access the data accumulated by the first process at an appropriate location in the message buffer. Significantly, because the accumulated data is stored in a shared region of user memory, it is not necessary for the second process to copy the accumulated data to a different memory space. Rather, in step 120, when finished using the data, the second process need only release the buffer using the above-identified buffer pool allocator.
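The send path (steps 122 through 128) and the receive path (steps 114 through 120) can be sketched end to end in one address space: the "shared region" is a plain array, the inbox is a one-slot queue, and all names and sizes are illustrative. The key property is that the receiver reads the message in place; the payload is never copied after the sender writes it.

```c
#include <assert.h>
#include <string.h>

/* Single-process simulation of FIG. 5's send and receive paths. */
enum { REGION_BYTES = 1024, MSG_OFFSET = 128, EMPTY = -1 };

static char shared_region[REGION_BYTES];
static int inbox_slot = EMPTY;        /* one-element message queue */

static void hsipc_send(const char *msg) {
    /* steps 122-124: accumulate message data at a free offset */
    strcpy(shared_region + MSG_OFFSET, msg);
    /* steps 126-128: atomic integer assignment notifies the receiver */
    inbox_slot = MSG_OFFSET;
}

static const char *hsipc_receive(void) {
    /* steps 114-118: identify the offset and use the data in place */
    if (inbox_slot == EMPTY) return NULL;
    const char *msg = shared_region + inbox_slot;
    inbox_slot = EMPTY;               /* step 120 stands in for buffer release */
    return msg;
}
```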
As illustrated in FIG. 4, the significant differences between the inventive method and traditional IPC mechanisms include the present method's use of a shared region of RAM to store accumulated data. As a result of the use of the shared region, the inventive method does not require operating system calls to write and read accumulated data. In addition, because the present method uses a shared region of RAM instead of a memory region in the operating system kernel, the reconfiguration of the shared region does not require the rebooting of the operating system. Finally, high speed IPC provides a faster and safer mechanism for IPC in that the overhead associated with IPC is minimized from two system calls and 2n bytes of data movement to a minimal n bytes of data movement.

Claims (14)

1. A method for high speed interprocess communications comprising the steps of:
detecting a previously created shared region of RAM;
if a shared region of RAM is not detected, creating and configuring a shared region of RAM for storing accumulated data;
attaching first and second processes to a message buffer in the shared region of random access memory (RAM) exclusive of operating system kernel space, each said process having a message list that is a message queue;
accumulating message data from said first process in a location in said message buffer;
said first process adding to said message list of said second process a memory offset corresponding to said location in said message buffer; and,
manipulating in said second process said accumulated data at said location corresponding to said offset,
whereby said accumulated message data is transferred from said first process to said second process with minimal data transfer overhead.
2. The method according to claim 1, wherein the adding step comprises the steps of:
retrieving a memory offset in said message buffer corresponding to said location of data accumulated by said first process; and,
inserting said memory offset in said message queue corresponding to said second process.
3. The method according to claim 2, wherein the inserting step comprises the step of atomically assigning said memory offset to an integer location in said message queue corresponding to said second process.
4. The method according to claim 1, wherein said manipulating step comprises the steps of:
identifying a memory offset in said message list corresponding to said second process;
processing in said second process message data stored at a location in said message buffer corresponding to said memory offset; and,
releasing said message buffer.
5. The method according to claim 1, further comprising the step of locking said accumulated data to prevent said first process from accessing said accumulated data while said accumulated data is being manipulated.
6. A method for configuring high speed interprocess communications between first and second processes comprising the steps of:
creating and configuring a message buffer in a shared region of RAM exclusive of operating system kernel space and shared between said first and second processes;
accumulating message data from said first process in a location in said message buffer;
creating a message list in said shared region of RAM, whereby said message list is a message queue and can store memory offsets of message data stored in said message buffer;
said first process adding to said message list corresponding to said second process a memory offset corresponding to said location in said message buffer; and,
manipulating in said second process said accumulated message data stored in said message buffer at a location corresponding to said offset,
whereby said accumulated message data is transferred from said first process to said second process with minimal data transfer overhead.
7. The method according to claim 6, wherein the adding step comprises the steps of:
retrieving a memory offset in said message buffer, said memory offset corresponding to said location of said message data accumulated by said first process; and,
inserting said memory offset in said message queue corresponding to said second process.
8. The method according to claim 7, wherein the inserting step comprises the step of atomically assigning said memory offset to an integer location in said message queue corresponding to said second process.
9. The method according to claim 6, wherein said manipulating step comprises the steps of:
identifying a memory offset in said message list corresponding to said second process;
processing in said second process said accumulated message data at a location in said message buffer corresponding to said memory offset; and,
releasing said message buffer.
10. A computer apparatus programmed with a set of instructions stored in a fixed medium for high speed interprocess communications, said programmed computer apparatus comprising:
means for detecting a previously created shared region of RAM;
means for creating and configuring a shared region in RAM for storing accumulated data if a previously created shared region of RAM is not detected by said detecting means;
means for attaching first and second processes to a message buffer in the shared region of random access memory (RAM) exclusive of operating system kernel space, each said process having a message list that is a message queue;
means for accumulating message data from said first process in a location in said message buffer;
means for said first process to add to said message list of said second process a memory offset corresponding to said location in said message buffer; and,
means for manipulating in said second process said accumulated data at said location corresponding to said offset.
11. The computer apparatus according to claim 10, wherein the adding means comprises:
means for retrieving a memory offset in said message buffer corresponding to said location of data accumulated by said first process; and,
means for inserting said memory offset in said message queue corresponding to said second process.
12. The computer apparatus according to claim 11, wherein the inserting means comprises means for atomically assigning said memory offset to an integer location in said message queue corresponding to said second process.
13. The computer apparatus according to claim 10, wherein said manipulating means comprises:
means for identifying a memory offset in said message list corresponding to said second process;
means for using in said second process message data at a location in said message buffer corresponding to said memory offset; and,
means for releasing said message buffer.
14. The computer apparatus according to claim 10, wherein said accumulated data is locked to prevent said first process from accessing said accumulated data while said accumulated data is being manipulated.
US09/431,449 1999-11-01 1999-11-01 High speed interprocess communication Expired - Lifetime US7152231B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/431,449 US7152231B1 (en) 1999-11-01 1999-11-01 High speed interprocess communication


Publications (1)

Publication Number Publication Date
US7152231B1 true US7152231B1 (en) 2006-12-19

Family

ID=37526658

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/431,449 Expired - Lifetime US7152231B1 (en) 1999-11-01 1999-11-01 High speed interprocess communication

Country Status (1)

Country Link
US (1) US7152231B1 (en)

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050086661A1 (en) * 2003-10-21 2005-04-21 Monnie David J. Object synchronization in shared object space
US20050086237A1 (en) * 2003-10-21 2005-04-21 Monnie David J. Shared queues in shared object space
US20050086662A1 (en) * 2003-10-21 2005-04-21 Monnie David J. Object monitoring system in shared object space
US20050097567A1 (en) * 2003-10-21 2005-05-05 Monnie David J. Shared listeners in shared object space
US20050160135A1 (en) * 2004-01-15 2005-07-21 Atsushiro Yokoro Method and system for managing programs for distributed processing systems
US20060230359A1 (en) * 2005-04-07 2006-10-12 Ilja Fischer Methods of forwarding context data upon application initiation
US20070014295A1 (en) * 2005-07-15 2007-01-18 Microsoft Corporation Handle passing using an inter-process communication
US20070094463A1 (en) * 2005-10-25 2007-04-26 Harris Corporation, Corporation Of The State Of Delaware Mobile wireless communications device providing data management and security features and related methods
US20090119347A1 (en) * 2007-11-02 2009-05-07 Gemstone Systems, Inc. Data replication method
US20100185585A1 (en) * 2009-01-09 2010-07-22 Gemstone Systems, Inc. Preventing pauses in algorithms requiring pre-image information concerning modifications during data replication
US8209704B1 (en) * 2008-03-28 2012-06-26 Emc Corporation Techniques for user space and kernel space communication
US20140053165A1 (en) * 2012-08-17 2014-02-20 Elektrobit Automotive Gmbh Configuration technique for an electronic control unit with intercommunicating applications
US20150212867A1 (en) * 2014-01-30 2015-07-30 Vmware, Inc. User Space Function Execution from a Kernel Context for Input/Output Filtering
US9274861B1 (en) * 2014-11-10 2016-03-01 Amazon Technologies, Inc. Systems and methods for inter-process messaging
US20170344408A1 (en) * 2016-05-27 2017-11-30 Huawei Technologies Co., Ltd. Method and System of Performing Inter-Process Communication Between OS-Level Containers In User Space
US20170366492A1 (en) * 2016-06-20 2017-12-21 Huawei Technologies Co., Ltd. System and Method for Messaging Between Operating System Containers
US20190012115A1 (en) * 2017-07-07 2019-01-10 Seagate Technology Llc Runt Handling Data Storage System
US10419329B2 (en) 2017-03-30 2019-09-17 Mellanox Technologies Tlv Ltd. Switch-based reliable multicast service
WO2020040964A1 (en) * 2018-08-24 2020-02-27 Apple Inc. Methods and apparatus for control of a jointly shared memory-mapped region
EP3693856A1 (en) * 2019-02-11 2020-08-12 Siemens Aktiengesellschaft Computer system and method for transmitting a message in a computer system
US10798224B2 (en) 2018-03-28 2020-10-06 Apple Inc. Methods and apparatus for preventing packet spoofing with user space communication stacks
US11171884B2 (en) 2019-03-13 2021-11-09 Mellanox Technologies Tlv Ltd. Efficient memory utilization and egress queue fairness
US11275631B1 (en) * 2019-09-30 2022-03-15 Amazon Technologies, Inc. Systems, methods, and apparatuses for using shared memory for data between processes
US11403154B1 (en) 2019-09-30 2022-08-02 Amazon Technologies, Inc. Systems, methods and apparatuses for running multiple machine learning models on an edge device
US11477123B2 (en) 2019-09-26 2022-10-18 Apple Inc. Methods and apparatus for low latency operation in user space networking
US11558348B2 (en) 2019-09-26 2023-01-17 Apple Inc. Methods and apparatus for emerging use case support in user space networking
US11606302B2 (en) 2020-06-12 2023-03-14 Apple Inc. Methods and apparatus for flow-based batching and processing
US11775359B2 (en) 2020-09-11 2023-10-03 Apple Inc. Methods and apparatuses for cross-layer processing
US11799986B2 (en) 2020-09-22 2023-10-24 Apple Inc. Methods and apparatus for thread level execution in non-kernel space
US11829303B2 (en) 2019-09-26 2023-11-28 Apple Inc. Methods and apparatus for device driver operation in non-kernel space
US11876719B2 (en) 2021-07-26 2024-01-16 Apple Inc. Systems and methods for managing transmission control protocol (TCP) acknowledgements
US11882051B2 (en) 2021-07-26 2024-01-23 Apple Inc. Systems and methods for managing transmission control protocol (TCP) acknowledgements
US11954540B2 (en) 2021-09-10 2024-04-09 Apple Inc. Methods and apparatus for thread-level execution in non-kernel space

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5428781A (en) * 1989-10-10 1995-06-27 International Business Machines Corp. Distributed mechanism for the fast scheduling of shared objects and apparatus
US5434975A (en) * 1992-09-24 1995-07-18 At&T Corp. System for interconnecting a synchronous path having semaphores and an asynchronous path having message queuing for interprocess communications
US5504901A (en) * 1989-09-08 1996-04-02 Digital Equipment Corporation Position independent code location system
US5652885A (en) * 1993-05-25 1997-07-29 Storage Technology Corporation Interprocess communications system and method utilizing shared memory for message transfer and datagram sockets for message control
US5797005A (en) * 1994-12-30 1998-08-18 International Business Machines Corporation Shared queue structure for data integrity
US5802341A (en) * 1993-12-13 1998-09-01 Cray Research, Inc. Method for the dynamic allocation of page sizes in virtual memory
US5913058A (en) * 1997-09-30 1999-06-15 Compaq Computer Corp. System and method for using a real mode bios interface to read physical disk sectors after the operating system has loaded and before the operating system device drivers have loaded
US5991845A (en) * 1996-10-21 1999-11-23 Lucent Technologies Inc. Recoverable spin lock system
US6148377A (en) * 1996-11-22 2000-11-14 Mangosoft Corporation Shared memory computer networks
US6181707B1 (en) * 1997-04-04 2001-01-30 Clear Com Intercom system having unified control and audio data transport
US6275912B1 (en) * 1998-06-30 2001-08-14 Microsoft Corporation Method and system for storing data items to a storage device
US6442619B1 (en) * 1997-12-31 2002-08-27 Alcatel Usa Sourcing, L.P. Software architecture for message processing in a distributed architecture computing system
US6754666B1 (en) * 1999-08-19 2004-06-22 A2I, Inc. Efficient storage and access in a database management system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BEA, "BEA MessageQ Introduction to Message Queuing", May 1997, Version 4.0, Edition 2.0, pp. 1-20. *
"Linux, Real-Time Linux, & IPC", Dr. Dobb's Journal, Nov. 1999.

Cited By (74)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7689986B2 (en) 2003-10-21 2010-03-30 Gemstone Systems, Inc. Shared listeners in shared object space
US9189263B1 (en) 2003-10-21 2015-11-17 Pivotal Software, Inc. Object synchronization in shared object space
US20080066081A1 (en) * 2003-10-21 2008-03-13 Gemstone Systems, Inc. Object monitoring system in shared object space
US20050097567A1 (en) * 2003-10-21 2005-05-05 Monnie David J. Shared listeners in shared object space
US20080072238A1 (en) * 2003-10-21 2008-03-20 Gemstone Systems, Inc. Object synchronization in shared object space
US20050086237A1 (en) * 2003-10-21 2005-04-21 Monnie David J. Shared queues in shared object space
US8201187B2 (en) 2003-10-21 2012-06-12 Vmware, Inc. Object monitoring system in shared object space
US20050086661A1 (en) * 2003-10-21 2005-04-21 Monnie David J. Object synchronization in shared object space
US20050086662A1 (en) * 2003-10-21 2005-04-21 Monnie David J. Object monitoring system in shared object space
US8171491B2 (en) 2003-10-21 2012-05-01 Vmware, Inc. Object synchronization in shared object space
US7543301B2 (en) * 2003-10-21 2009-06-02 Gemstone Systems, Inc. Shared queues in shared object space
US8205199B2 (en) * 2004-01-15 2012-06-19 Hitachi, Ltd. Method and system for associating new queues with deployed programs in distributed processing systems
US20050160135A1 (en) * 2004-01-15 2005-07-21 Atsushiro Yokoro Method and system for managing programs for distributed processing systems
US20060230359A1 (en) * 2005-04-07 2006-10-12 Ilja Fischer Methods of forwarding context data upon application initiation
US7823160B2 (en) * 2005-04-07 2010-10-26 Sap Aktiengesellschaft Methods of forwarding context data upon application initiation
US20070014295A1 (en) * 2005-07-15 2007-01-18 Microsoft Corporation Handle passing using an inter-process communication
US7870558B2 (en) * 2005-07-15 2011-01-11 Microsoft Corporation Handle passing using an inter-process communication
US20070094463A1 (en) * 2005-10-25 2007-04-26 Harris Corporation, Corporation Of The State Of Delaware Mobile wireless communications device providing data management and security features and related methods
US8443158B2 (en) * 2005-10-25 2013-05-14 Harris Corporation Mobile wireless communications device providing data management and security features and related methods
US20090119347A1 (en) * 2007-11-02 2009-05-07 Gemstone Systems, Inc. Data replication method
US8005787B2 (en) 2007-11-02 2011-08-23 Vmware, Inc. Data replication method
US20110184911A1 (en) * 2007-11-02 2011-07-28 Vmware, Inc. Data replication method
US8180729B2 (en) 2007-11-02 2012-05-15 Vmware, Inc. Data replication method
US8209704B1 (en) * 2008-03-28 2012-06-26 Emc Corporation Techniques for user space and kernel space communication
US9720995B1 (en) 2009-01-09 2017-08-01 Pivotal Software, Inc. Preventing pauses in algorithms requiring pre-image information concerning modifications during data replication
US8645324B2 (en) 2009-01-09 2014-02-04 Pivotal Software, Inc. Preventing pauses in algorithms requiring pre-image information concerning modifications during data replication
US10303700B1 (en) 2009-01-09 2019-05-28 Pivotal Software, Inc. Preventing pauses in algorithms requiring pre-image information concerning modifications during data replication
US9128997B1 (en) 2009-01-09 2015-09-08 Pivotal Software, Inc. Preventing pauses in algorithms requiring pre-image information concerning modifications during data replication
US20100185585A1 (en) * 2009-01-09 2010-07-22 Gemstone Systems, Inc. Preventing pauses in algorithms requiring pre-image information concerning modifications during data replication
US20140053165A1 (en) * 2012-08-17 2014-02-20 Elektrobit Automotive Gmbh Configuration technique for an electronic control unit with intercommunicating applications
US9235456B2 (en) * 2012-08-17 2016-01-12 Elektrobit Automotive Gmbh Configuration technique for an electronic control unit with intercommunicating applications
US9542224B2 (en) * 2014-01-30 2017-01-10 Vmware, Inc. User space function execution from a kernel context for input/output filtering from a thread executing in the user space
US20150212867A1 (en) * 2014-01-30 2015-07-30 Vmware, Inc. User Space Function Execution from a Kernel Context for Input/Output Filtering
US9934067B2 (en) 2014-01-30 2018-04-03 Vmware, Inc. Synchronous user space function execution from a kernel context
US9274861B1 (en) * 2014-11-10 2016-03-01 Amazon Technologies, Inc. Systems and methods for inter-process messaging
US9569291B1 (en) * 2014-11-10 2017-02-14 Amazon Technologies, Inc. Systems and methods for inter-process messaging
US20170344408A1 (en) * 2016-05-27 2017-11-30 Huawei Technologies Co., Ltd. Method and System of Performing Inter-Process Communication Between OS-Level Containers In User Space
CN109196837B (en) * 2016-05-27 2021-01-15 华为技术有限公司 Method and system for inter-process communication between OS level containers in user space
CN109196837A (en) * 2016-05-27 2019-01-11 华为技术有限公司 The method and system of interprocess communication is carried out in user's space between OS grades of containers
US10599494B2 (en) * 2016-05-27 2020-03-24 Huawei Technologies Co., Ltd. Method and system of performing inter-process communication between OS-level containers in user space
US20170366492A1 (en) * 2016-06-20 2017-12-21 Huawei Technologies Co., Ltd. System and Method for Messaging Between Operating System Containers
US10305834B2 (en) * 2016-06-20 2019-05-28 Huawei Technologies Co., Ltd. System and method for messaging between operating system containers
CN109314726A (en) * 2016-06-20 2019-02-05 华为技术有限公司 The system and method communicated between operating system container
CN109314726B (en) * 2016-06-20 2021-07-09 华为技术有限公司 System and method for communication between operating system containers
US10419329B2 (en) 2017-03-30 2019-09-17 Mellanox Technologies Tlv Ltd. Switch-based reliable multicast service
US20190012115A1 (en) * 2017-07-07 2019-01-10 Seagate Technology Llc Runt Handling Data Storage System
US10564890B2 (en) * 2017-07-07 2020-02-18 Seagate Technology Llc Runt handling data storage system
US11178259B2 (en) 2018-03-28 2021-11-16 Apple Inc. Methods and apparatus for regulating networking traffic in bursty system conditions
US11368560B2 (en) 2018-03-28 2022-06-21 Apple Inc. Methods and apparatus for self-tuning operation within user space stack architectures
US10819831B2 (en) 2018-03-28 2020-10-27 Apple Inc. Methods and apparatus for channel defunct within user space stack architectures
US11843683B2 (en) 2018-03-28 2023-12-12 Apple Inc. Methods and apparatus for active queue management in user space networking
US11824962B2 (en) 2018-03-28 2023-11-21 Apple Inc. Methods and apparatus for sharing and arbitration of host stack information with user space communication stacks
US11792307B2 (en) 2018-03-28 2023-10-17 Apple Inc. Methods and apparatus for single entity buffer pool management
US11095758B2 (en) 2018-03-28 2021-08-17 Apple Inc. Methods and apparatus for virtualized hardware optimizations for user space networking
US11146665B2 (en) 2018-03-28 2021-10-12 Apple Inc. Methods and apparatus for sharing and arbitration of host stack information with user space communication stacks
US11159651B2 (en) 2018-03-28 2021-10-26 Apple Inc. Methods and apparatus for memory allocation and reallocation in networking stack infrastructures
US10798224B2 (en) 2018-03-28 2020-10-06 Apple Inc. Methods and apparatus for preventing packet spoofing with user space communication stacks
US11178260B2 (en) 2018-03-28 2021-11-16 Apple Inc. Methods and apparatus for dynamic packet pool configuration in networking stack infrastructures
WO2020040964A1 (en) * 2018-08-24 2020-02-27 Apple Inc. Methods and apparatus for control of a jointly shared memory-mapped region
US10846224B2 (en) 2018-08-24 2020-11-24 Apple Inc. Methods and apparatus for control of a jointly shared memory-mapped region
EP3693856A1 (en) * 2019-02-11 2020-08-12 Siemens Aktiengesellschaft Computer system and method for transmitting a message in a computer system
WO2020164991A1 (en) 2019-02-11 2020-08-20 Siemens Aktiengesellschaft Method for transmitting a message in a computing system, and computing system
US11171884B2 (en) 2019-03-13 2021-11-09 Mellanox Technologies Tlv Ltd. Efficient memory utilization and egress queue fairness
US11477123B2 (en) 2019-09-26 2022-10-18 Apple Inc. Methods and apparatus for low latency operation in user space networking
US11558348B2 (en) 2019-09-26 2023-01-17 Apple Inc. Methods and apparatus for emerging use case support in user space networking
US11829303B2 (en) 2019-09-26 2023-11-28 Apple Inc. Methods and apparatus for device driver operation in non-kernel space
US11403154B1 (en) 2019-09-30 2022-08-02 Amazon Technologies, Inc. Systems, methods and apparatuses for running multiple machine learning models on an edge device
US11275631B1 (en) * 2019-09-30 2022-03-15 Amazon Technologies, Inc. Systems, methods, and apparatuses for using shared memory for data between processes
US11606302B2 (en) 2020-06-12 2023-03-14 Apple Inc. Methods and apparatus for flow-based batching and processing
US11775359B2 (en) 2020-09-11 2023-10-03 Apple Inc. Methods and apparatuses for cross-layer processing
US11799986B2 (en) 2020-09-22 2023-10-24 Apple Inc. Methods and apparatus for thread level execution in non-kernel space
US11876719B2 (en) 2021-07-26 2024-01-16 Apple Inc. Systems and methods for managing transmission control protocol (TCP) acknowledgements
US11882051B2 (en) 2021-07-26 2024-01-23 Apple Inc. Systems and methods for managing transmission control protocol (TCP) acknowledgements
US11954540B2 (en) 2021-09-10 2024-04-09 Apple Inc. Methods and apparatus for thread-level execution in non-kernel space

Similar Documents

Publication Publication Date Title
US7152231B1 (en) High speed interprocess communication
US6658490B1 (en) Method and system for multi-threaded processing
US5652885A (en) Interprocess communications system and method utilizing shared memory for message transfer and datagram sockets for message control
JP6238898B2 (en) System and method for providing and managing message queues for multi-node applications in a middleware machine environment
US6629152B2 (en) Message passing using shared memory of a computer
Kamrad et al. Distributed communications
EP0840935B1 (en) A method and apparatus for transporting messages between processors in a multiple processor system
JP4755390B2 (en) Method and apparatus for controlling the flow of data between data processing systems via a memory
US6622193B1 (en) Method and apparatus for synchronizing interrupts in a message passing queue oriented bus system
US6647423B2 (en) Direct message transfer between distributed processes
US7337275B2 (en) Free list and ring data structure management
US7779165B2 (en) Scalable method for producer and consumer elimination
US7234004B2 (en) Method, apparatus and program product for low latency I/O adapter queuing in a computer system
US6385658B2 (en) Method and apparatus for synchronized message passing using shared resources
US6032179A (en) Computer system with a network interface which multiplexes a set of registers among several transmit and receive queues
JP4755391B2 (en) Method and apparatus for controlling the flow of data between data processing systems via a memory
EP0582666A1 (en) Method and apparatus for buffering data within stations of a communication network
CA2415043A1 (en) A communication multiplexor for use with a database system implemented on a data processing system
US6684281B1 (en) Fast delivery of interrupt message over network
US5388222A (en) Memory subsystem command input queue having status locations for resolving conflicts
US6944863B1 (en) Queue bank repository and method for sharing limited queue banks in memory
US5386514A (en) Queue apparatus and mechanics for a communications interface architecture
JPH0587854B2 (en)
Abrossimov et al. Virtual memory management in Chorus
US20230161641A1 (en) Compact NUMA-aware Locks

Legal Events

Date Code Title Description
AS Assignment

Owner name: EXIGENT INTERNATIONAL, INC., FLORIDA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GALLUSCIO, ANTHONY P.;HOLT, WILLIAM L.;DYER, DOUGLAS M.;AND OTHERS;REEL/FRAME:010505/0204

Effective date: 19991101

AS Assignment

Owner name: HARRIS-EXIGENT, INC., FLORIDA

Free format text: CHANGE OF NAME;ASSIGNOR:EXIGENT INTERNATIONAL, INC.;REEL/FRAME:014260/0801

Effective date: 20010611

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

REMI Maintenance fee reminder mailed
FEPP Fee payment procedure

Free format text: PAT HOLDER NO LONGER CLAIMS SMALL ENTITY STATUS, ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: STOL); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 8

SULP Surcharge for late payment

Year of fee payment: 7

SULP Surcharge for late payment
AS Assignment

Owner name: HARRIS IT SERVICES CORPORATION, VIRGINIA

Free format text: MERGER AND CHANGE OF NAME;ASSIGNORS:HARRIS TECHNICAL SERVICES CORPORATION;MULTIMAX INCORPORATED;REEL/FRAME:036230/0488

Effective date: 20080613

Owner name: HARRIS TECHNICAL SERVICES CORPORATION, VIRGINIA

Free format text: MERGER;ASSIGNOR:HARRIS-EXIGENT, INC.;REEL/FRAME:036244/0105

Effective date: 20050630

AS Assignment

Owner name: HARRIS CORPORATION, FLORIDA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HARRIS IT SERVICES CORPORATION;REEL/FRAME:036244/0943

Effective date: 20150803

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.)

FEPP Fee payment procedure

Free format text: 11.5 YR SURCHARGE- LATE PMT W/IN 6 MO, LARGE ENTITY (ORIGINAL EVENT CODE: M1556); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 12

AS Assignment

Owner name: COMMSTECH LLC, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HARRIS CORPORATION;REEL/FRAME:047551/0415

Effective date: 20181031