US9270532B2 - Resource command messages and methods - Google Patents

Resource command messages and methods

Info

Publication number
US9270532B2
Authority
US
United States
Prior art keywords
resource
command
nodes
message
command message
Prior art date
Legal status
Active, expires
Application number
US11/246,721
Other versions
US20070083662A1 (en
Inventor
Mark Adams
Thomas Earl Ludwig
Charles William Frank
Nicholas J. Witchey
Current Assignee
Rateze Remote Mgmt LLC
Original Assignee
Rateze Remote Mgmt LLC
Priority date
Filing date
Publication date
Assigned to ZETERA CORPORATION: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ADAMS, MARK; FRANK, CHARLES WILLIAM; LUDWIG, THOMAS EARL; WITCHEY, NICHOLAS J.
Priority to US11/246,721
Application filed by Rateze Remote Mgmt LLC
Publication of US20070083662A1
Assigned to CORTRIGHT FAMILY TRUST, DATED MAY 13, 1998: SECURITY AGREEMENT. Assignors: ZETERA CORPORATION
Assigned to THE FRANK REVOCABLE LIVING TRUST OF CHARLES W. FRANK AND KAREN L. FRANK: SECURITY AGREEMENT. Assignors: ZETERA CORPORATION
Assigned to WARBURG PINCUS PRIVATE EQUITY VIII, L.P.: SECURITY AGREEMENT. Assignors: ZETERA CORPORATION
Assigned to ZETERA CORPORATION: RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: CORTRIGHT FAMILY TRUST, DATED MAY 13, 1998
Assigned to ZETERA CORPORATION: RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: THE FRANK REVOCABLE LIVING TRUST OF CHARLES W. FRANK AND KAREN L. FRANK
Assigned to ZETERA CORPORATION: RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: WARBURG PINCUS PRIVATE EQUITY VIII, L.P.
Assigned to RATEZE REMOTE MGMT. L.L.C.: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ZETERA CORPORATION
Priority to US14/876,743 (US11601334B2)
Publication of US9270532B2
Application granted
Priority to US18/104,264 (US11848822B2)
Priority to US18/463,189 (US20230421447A1)
Active legal status
Adjusted expiration legal status

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08: Configuration management of networks or network elements
    • H04L 41/0896: Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 5/00: Arrangements affording multiple use of the transmission path
    • H04L 5/003: Arrangements for allocating sub-channels of the transmission path
    • H04L 5/0053: Allocation of signaling, i.e. of overhead other than pilot signals

Definitions

  • the field of the invention is computing resource command messaging and resource devices.
  • Resource devices manage computing resources. There are a myriad of examples of such computing resources, extending from data storage on disk drives in storage arrays to connection processing within high volume content servers in server farms.
  • resource devices comprise a plurality of resource nodes, thereby forming a distributed resource that appears as a single, logical resource device from the perspective of a resource consumer.
  • the resource nodes responsible for managing the physical resources must function efficiently, especially in complex environments comprising many resource consumers and resource devices. The efficiency depends on desirable characteristics including scalability, high performance, load balancing, low response times (responsiveness), or other characteristics that could require optimization.
  • Resource nodes are typically either managed by external management systems, or have a management layer imposed on them, which unfortunately introduces extra overhead beyond the core responsibility of resource management.
  • a resource device comprising a plurality of resource nodes exacerbates the external management problem.
  • As resource nodes become loaded, they typically inform resource consumers of their state, shift resource requests to other resource nodes, or perform other out-of-band management communications to maximize performance, resulting in excessive out-of-band communication.
  • Such systems suffer from scalability issues because, as new resource nodes are added to the environment, the management “chatter” increases and consumes a larger fraction of communication and processing bandwidth, which negatively impacts performance. Due to such coarse-grained scalability, the cost to incrementally enhance the capability of the system increases, and the cost to replicate the entire system becomes prohibitive.
  • a resource node should not require information regarding other resource nodes to perform its primary responsibility of managing a resource.
  • autonomous resource nodes should determine when to process commands from a resource consumer based upon, or at least determined as a factor of, (a) information relating to the resource node and (b) any information supplied by the resource consumer within the message comprising the command.
  • resource consumers supply information regarding their desired urgency or importance for having a command processed.
  • benefits of scalability, performance, or responsiveness are achieved naturally, without imposing additional functionality, because resource consumers are able to adjust their resource command messages based upon the interactions with all the resource nodes to gain higher performance.
  • Resource nodes that are able to determine when to handle commands sent to them result in several advantages.
  • each individual resource node focuses on its main responsibilities rather than on other non-resource centric tasks; therefore, the resource node functions with a higher efficiency than a similar resource node that has additional management tasks to perform.
  • multiple resource consumers are able to interact with multiple resource nodes of a resource device without an extraneous arbitrator. This results in improved response time because each resource node is able to determine independently which resource consumer, if any, deserves attention.
  • If one redundant resource node is fully loaded, another redundant resource node is able to service requests without external intervention. Additionally, any of the redundant resource nodes is capable of providing a valid response to a resource consumer; therefore, the responsiveness of the system is higher than that of a resource device without redundant resource nodes. Fourth, the scalability of such an environment is high because each resource node is independent, does not require additional information from resource consumers or other resource nodes, and can integrate into the environment easily.
  • resource command messages and resource devices comprising one or more resource nodes that autonomously determine when to process resource command messages based upon the contents of the command message and on information associated with the resource node.
  • One aspect of the invention is directed toward a resource command message comprising a command and command parameters comprising an indication of the command's urgency or the command's importance.
  • Resource consumers construct resource command messages to interact with resource nodes composing a resource device.
  • a resource node processes resource command messages based upon the urgency or importance of the resource command message in addition to information centric to the resource node.
  • a resource device can comprise a plurality of resource nodes where each resource node has an ability to operate independently of all other nodes and each resource node is able to receive the resource command message.
  • the urgency or importance within a resource command message includes relative or absolute values.
  • the present invention is directed toward a method of processing resource command messages.
  • the method includes interpreting command urgency or command importance information within the resource command message and combining the information along with resource node information to establish when the command within the resource command message will be processed.
  • the method also includes a step of determining the ordering in which commands in a command queue are processed based upon when the command is to be processed. Through the determination, the command could be processed immediately, delayed in processing, never processed, or could have its processing order changed relative to other commands sent previously or subsequently.
  • the method includes processing resource command messages by more than one resource node that composes a resource device.
  • the present invention is directed toward a method of accessing a resource device through creating a resource command message that includes a command and command parameters comprising at least one of a command urgency or command importance.
  • the method also includes sending the resource command message to a resource device and determining when to process the command within the resource command message.
  • sending resource command messages include multicasting to at least some of the resource nodes.
  • resource nodes within a resource device can operate as autonomous entities, each responsible for its own individual resources.
  • Resource consumers acquire resources from resource nodes to fulfill their individual functions, and are also autonomous entities.
  • As resource consumers require resources, they send resource command messages to the resource device with an indication of the urgency of the command or the importance of the command in order to acquire the resources, to reserve the resources, to use the resource, or to interact with the resource in other ways.
  • Because resource nodes are autonomous and service requests from multiple resource consumers, the resource nodes fold information regarding their state, history, capabilities, or other relevant information together with their interpretation of the urgency or importance information to decide how or when to process the command.
  • the phrase “when to process” means autonomously handling the processing of a command and should be interpreted broadly, including time-based processing, order of processing, or other process-handling concepts.
  • a resource device comprises a plurality of resource nodes, where each resource node is responsible for all or some fraction of the resource and also functions independently of all other nodes, devices, or consumers.
  • where resource nodes provide redundant resources, resource consumers send resource command messages to some or all of the resource nodes and, given the current conditions of the network or loading, the most capable resource node will respond.
  • other resource nodes interpret additional resource command messages or resource command responses as instructions to suspend or stop processing of previously unprocessed commands to reduce multiple responses.
  • Resource device means a logical device that is addressable, in whole or in part, on a communication path, and provides access to a commodity used as a computing resource by a resource consumer.
  • Logical resource devices are contemplated to include physical devices or virtual devices.
  • Physical resource devices include computers, monitors, hard disk drives, power supplies, or other physical elements.
  • Virtual resource devices include addressable video displays, logical storage volumes, a web server farm with a URL, or other abstractions of physical elements. Resource consumers interpret each resource device as a coherent whole device, regardless of its actual physical or virtual structure.
  • Resource consumer means an entity that requires access or control over a commodity to perform its desired functions.
  • Resource consumers include computers, applications, users, web server gateways, or other entities that are able to communicate with resource nodes over a communication path; therefore, resource consumers are also addressable. It is contemplated that a resource device can at times function as a resource consumer.
  • Resource node means a portion of a resource device that represents a fraction of a larger resource device, up to and including the complete resource device. Resource nodes can also operate as independent, addressable entities on the communication path. Contemplated resource nodes include logical partitions that combine with other logical partitions to form a logical volume from the perspective of resource consumers, addressable video frames, individual web servers in a server farm, or other constituent elements.
  • teachings herein may be advantageously employed by developers and producers of computing resources including storage devices, or media content servers to create efficient, scalable systems that deliver high performance and fast response.
  • FIG. 1 represents an environment where resource consumers interact with a resource device comprising multiple resource nodes.
  • FIG. 2 represents a schematic of a possible physical embodiment of a resource node.
  • FIG. 3 represents a schematic of a possible resource command message stored in a computer readable memory.
  • FIG. 4 represents a schematic of a possible resource node command queue.
  • FIG. 5 represents a schematic of possible steps for processing a resource command message.
  • FIG. 6 represents a schematic of possible steps for accessing a resource device.
  • FIG. 1 represents an environment where resource consumers interact with a resource device comprising one or more resource nodes.
  • Resource device 110 comprises one or more resource nodes 100 A through 100 N.
  • Each individual resource node is communicatively coupled to one or more resource consumers 120 A through 120 P through communication path 115 .
  • many resource consumers interact with many resource devices.
  • Resource consumers 120 A through 120 P operate independently of each other and do not require information from other entities beyond resource nodes 100 A through 100 N to interact with the desired resources managed by resource nodes 100 A through 100 N.
  • Resource consumers 120 A through 120 P comprise a combination of hardware, software, or firmware that includes instructions within a computer readable memory programmed to interact with resource device 110 , and to access the resources managed by resource nodes 100 A through 100 N.
  • resource consumer comprises a computer running an application or an operating system that desires access to a resource.
  • a resource consumer comprises a workstation with a driver that provides for communications between the workstation's operating system and resource nodes 100 A through 100 N. The driver also provides the operating system with enough information regarding resource device 110 that resource device 110 appears as a locally connected device. For example, a Windows® computer wishes to mount a logical volume for storage.
  • the Windows computer includes a driver that accepts I/O commands from the file system and transforms them into messages transferred over a network to the logical partitions composing the logical volume, in a manner that is transparent to the file system or applications accessing the logical volume.
  • the logical volume appears as a locally attached disk drive.
  • resource consumers 120 A through 120 P are contemplated to comprise applications that directly interact with resource nodes 100 A through 100 N.
  • a gateway to a web site could represent a resource consumer that accesses a distributed web server farm where an individual web server represents a resource node.
  • resource consumers 120 A through 120 P operate independently of each other, they interact with resource nodes 100 A through 100 N collectively or individually.
  • resource consumers 120 A through 120 P do not require information from a system external to the resource consumers 120 A through 120 P or resource nodes 100 A through 100 N, including name servers, metadata servers, or other extraneous systems.
  • resource consumers 120 A through 120 P comprise the ability to discover resource nodes 100 A through 100 N. The ability to discover includes sending a broadcast message over communication path 115 to which resource nodes 100 A through 100 N respond with their individual names.
  • resource consumers 120 A through 120 P use name resolution to convert responses from resource nodes 100 A through 100 N into addresses on communication path 115 .
  • One skilled in the art of network programming will appreciate there are numerous ways to conduct discovery and name resolution including SSDP, DNS, WINS, or others.
  • resource consumers 120 A through 120 P send resource command messages addressed to resource device 110 .
  • the resource command messages can be addressed to resource device 110 in whole or in part.
  • resource command messages are sent to resource nodes 100 A through 100 N collectively through multicast where resource device 110 is addressed in whole, although it is contemplated that unicast messaging, where resource device 110 is addressed in part, is also possible.
  • multicast means sending a single message over communication path 115 where two or more of resource nodes 100 A through 100 N receive the message without requiring a resource consumer to consume bandwidth on communication path 115 by sending more than one copy of the message to each resource node. It is also contemplated that resource device 110 can be addressed simultaneously through multicast and unicast messaging.
  • Resource consumers 120 A through 120 P each construct resource command messages that comprise command parameters regarding their individual specific needs. It is contemplated that at least a portion of the resource command message will reside in a memory as it is constructed.
  • the term “memory” means any hardware that stores information, no matter where the memory is located or how the information is stored.
  • the command parameters include the resource consumer's sense of urgency or importance relative to having their need satisfied. Urgency gives a sense of the timing constraints while importance gives a sense of priority desired by the individual resource consumer.
  • Resource nodes 100 A through 100 N use the urgency or importance command parameters and other command parameters to aid in the determination of when to process the resource command message.
  • resource consumers determine their urgency or importance based upon their own internal information or based upon information gathered from responses from resource nodes.
  • command parameters include command identifiers used to correlate a group of related resource command messages.
  • Resource consumers 120 A through 120 P each comprise the ability to receive more than one response from a single resource command message.
  • resource device 110 comprises redundant resources managed by resource nodes 100 A through 100 N
  • more than one of resource node 100 A through 100 N responds to a message.
  • Multiple responses are expected because each resource node functions independently from other nodes and does not know if a response has already been generated. However, multiple responses are quenched due to proper handling of urgency or importance information.
  • resource consumers 120 A through 120 P employ a slow start algorithm to avoid congestion to ensure efficient use of bandwidth and to reduce multiple responses from resource nodes.
  • resource consumers 120 A through 120 P determine which of resource nodes 100 A through 100 N are likely to respond first; then each individual resource consumer 120 A through 120 P is able to adjust its urgency or importance information independently to aid in the reduction of multiple responses.
  • a slow start algorithm could break large command messages into smaller command messages, and send the smaller messages slowly. As responses are received, the algorithm begins sending larger messages more quickly. Slow start ensures networking equipment with small buffers is not flooded with large packets. If they become flooded, network performance drops.
  • a slow start provides resource consumers an opportunity to detect which resource nodes are initially more responsive. As packets are sent slowly at first, a window is provided to allow multiple responses from the resource nodes. Resource consumers can use the multiple responses to establish a preferred provider of the resource. Preferred provider information can then be used to quench multiple responses as the communication speeds up.
  • Resource device 110 comprises one or more resource nodes as indicated by resource nodes 100 A through 100 N. Although FIG. 1 depicts a single resource device, it is contemplated that multiple resource devices coexist on communication path 115 .
  • Resource device 110 is accessible by one or more of resource consumers 120 A through 120 P; therefore, resource device 110 can be a shared resource.
  • resource device 110 includes information residing on resource nodes 100 A through 100 N to indicate when resource device 110 is privately owned or shared among resource consumers 120 A through 120 P.
  • Resource device 110 comprises an identifier used by resource consumers 120 A through 120 P to differentiate resource device 110 from other resource devices on communication path 115 .
  • the identifier comprises a name stored in the memory of resource nodes 100 A through 100 N wherein the name is resolvable to an address on communication path 115 .
  • resource nodes 100 A through 100 N respond with a name that comprises the name of resource device 110 , indicating they belong to resource device 110 .
  • the name resolves to an IP address which can include a unicast or multicast address. It is contemplated resource consumers 120 A through 120 P can address resource device 110 through a single address, preferably an IP multicast address.
  • resource device 110 comprises redundant resource nodes where two or more of resource nodes 100 A through 100 N manage duplicate resources.
  • resource device 110 represents a logical volume used by resource consumers 120 A through 120 P to store data
  • resource node 100 A and resource node 100 B could represent logical partitions that mirror the same stored data.
  • resource device 110 represents a logical web server where each of resource nodes 100 A through 100 N is an individual server and has equivalent ability to process incoming connections requesting content.
  • As an example of a resource device with redundant resource nodes, consider a storage array implemented based upon Zetera™ technology where a logical volume (a resource device) is virtualized as a plurality of IP-addressable logical partitions (resource nodes).
  • the logical volume represents a single virtual disk with logical block addresses (LBAs) ranging from 1 to a maximum value of MAX.
  • Each logical partition is responsible for a set of LBAs, not necessarily continuous or contiguous, wherein the collection of logical partitions cover the entire range of LBAs, 1 to MAX.
  • two or more logical partitions are redundant when they are responsible for an identical set of LBAs, thereby producing a mirror of the data.
  • Workstations mount the logical volume as if it were a locally connected disk.
  • a driver handles all communications with the logical partitions over a network, sending command messages via multicast to all the logical partitions using a single address.
  • Another example of a resource device with redundant nodes is a web server farm where each server is able to serve identical content to browsers.
  • a gateway sends requests coming from the Internet via command messages to the servers collectively.
  • the first server to respond handles the connections.
  • resource device 110 could represent other computing resources including processor bandwidth, displays, memory, servable content, connection handling, network bandwidth, or other computing related resources.
  • Communication path 115 provides support for addressing and data transport among resource consumers 120 A through 120 P and resource nodes 100 A through 100 N. It is contemplated that communication path 115 is not under the direct control of the resource nodes or resource consumers; however, it is contemplated resource consumers 120 A through 120 P or resource nodes 100 A through 100 N could alter the behavior of communication path 115 . In addition, it is contemplated that communication path 115 comprises characteristics that render it unreliable.
  • communication path 115 comprises a packet switched network comprising Ethernet communication transporting an internet protocol.
  • resource consumers 120 A through 120 P and resource nodes 100 A through 100 N acquire IP addresses through DHCP.
  • FIG. 2 represents a possible physical embodiment of a resource node.
  • Resource node 200 receives resource command messages from resource consumers over communication path 115 .
  • Processing unit 210 receives the resource command messages and processes the commands within the message through the use of command queue 230 stored in memory 220 .
  • the command from the message is placed in command queue 230 as represented by commands 233 A through 233 N.
  • Processing unit 210 processes commands 233 A through 233 N according to resource node information stored in memory 220 including command queue 230 or resource node data 240 .
  • processing unit 210 accesses resources 260 A through 260 M over resource communication path 215 .
  • resource node information stored in memory 220 comprises sufficient information to allow resource node 200 to function independently of other resource nodes and to focus on its main set of responsibilities.
  • one element of hardware comprising processing unit 210 and memory 220 services one or more resource nodes.
  • a disk drive with a data storage resource could be adapted with a memory and processing unit to offer a number of logical partitions, each with their own IP address and each responsible for a set of LBAs.
  • a rack-mount enclosure supporting a plurality of disk drives could include one or more CPUs forming processing unit 210 and could include one or more RAM modules forming memory 220 . The rack-mount enclosure could then offer many logical partitions that have responsibility across the plurality of disk drives.
  • resource node 200 could represent a single resource. For example, a logical partition with an address could be responsible for one complete disk drive.
  • Resource communication path 215 provides the addressing and data transfer between processing unit 210 and resource 260 A through 260 M.
  • resource communication path 215 comprises a disk drive communication bus. Examples of disk buses include ATA, SCSI, Fibre Channel, USB, or others existing or yet to be invented. It is also contemplated that resource communication path 215 could include a packet switched network. For example, in the case where resource node 200 is a content server, resource communication path 215 could be an IP network to a storage array that houses content.
  • Resource node 200 determines when to process commands 233 A through 233 N based upon interpreting the urgency or importance information found in each resource command message and on interpreting resource node information stored in memory 220 .
  • Resource node 200 uses information about itself to make an assertion of a proper way to handle commands autonomously.
  • Information about resource node 200 includes ability to process commands, capacity, loading, command queue ordering, previous commands stored in command queue, or other relevant information that impacts servicing resource command messages from resource consumers. For example, if resource node 200 is functioning at 100% capacity servicing many resource consumers, it can determine that it will not service a current resource command message by silently discarding it while processing its current load. The resource consumer whose resource command message was dropped can attempt another command, possibly adjusting the message's urgency or importance, or can wait for another resource node to respond.
  • resource node data 240 includes information for use by resource consumers to construct an understanding of the overall resource device including the name of the resource device to which the resource node belongs, the name of the resource node, the role the resource node plays in the resource device, attributes, or other resource node information. This implies the resource node data 240 also represents resource device information.
  • resource node 200 focuses on handling its responsibilities without performing extraneous tasks to enhance desirable characteristics of the resource device. This allows resource node 200 to fully utilize its capabilities toward servicing requests without negatively impacting performance or responsiveness. Furthermore, duplicates of resource node 200 provide enhanced capabilities from the perspective of resource consumers.
  • Redundant resource nodes are resource nodes that provide access to nearly identical resources. Redundant resource nodes can be differentiated by resource node data 240 , name or address, for example. However, each redundant resource node has responsibility for the same type of resource and has equivalent ability to service resource command messages subject to their loading, capabilities, or other abilities.
  • An example of redundant resource nodes includes logical partitions that have responsibility for the same set of LBAs within a logical volume but on different disks or two web servers capable of serving identical content. In a preferred embodiment, redundant resource nodes can participate in the same multicast group where a resource consumer is able to address them simultaneously.
  • resource consumers send resource command messages to the resource nodes of a resource device without regard to which resource nodes will actually process the resource command message.
  • a resource command message will potentially be processed substantially in parallel by the redundant resource nodes.
  • substantially in parallel means at least two resource nodes process the resource command message within ten seconds of each other due to the timing characteristics of the communication path and the resource nodes. Timing characteristics include latency, node loading, or other parameters that affect the processing time including those directly imposed by the resource consumer or resource nodes.
  • redundant resource nodes can generate multiple responses to resource command messages, which potentially consume bandwidth.
  • resource nodes and resource consumers interact in a manner that attempts to quench multiple responses.
  • resource consumers can initiate an exchange of multiple resource command messages expecting multiple responses.
  • the resource consumer selects a preferred provider from among the responding resource nodes, and then includes the preferred provider information in subsequent resource command message urgency. If a resource node is a preferred provider, it processes the resource command message normally. If a resource node is not a preferred provider, it delays processing. When the preferred provider responds, the resource consumer sends its next message. The non-preferred provider resource nodes receive the next message and cancel a previously sent pending command. It is also contemplated that the current command could take over the previous command's position in the command queue.
  • resource command messages can comprise command identifiers that are used to identify a group of related commands. In that situation, if a resource node has a command in its command queue and receives an additional related command, the resource node can interpret this sequence of events as an instruction to suspend the processing of the previous command, including deleting the command, thereby reducing the number of potential multiple responses.
  • Resource node 200 can execute commands or reserve resources for future use based upon the command and command parameters in a resource command message. Executing a command provides for actually servicing resource command messages. Reserving resources allows resource consumers to aggregate the abilities of multiple resource nodes.
  • FIG. 3 represents a possible schematic of a resource command message.
  • Resource command message 300 comprises command 320 having command parameters 330 to be processed by a resource node.
  • resource consumers address resource command message 300 to a resource device or a resource node via resource destination address 310 .
  • Resource command message 300 also optionally includes data 340 .
  • data 340 is present if command 320 indicates a write command to a disk drive where data 340 represents the target data to be written.
  • resource command 320 comprises command urgency 335 or command importance information 337 .
  • resource command 320 comprises command identifiers 333 .
  • the term “indicates” means something that can be resolved to something else.
  • the wording “command 320 indicates a write command” means that “command 320 can be resolved to a write command.”
  • a resource consumer constructs resource command message 300 in a computer readable memory wherein at least a portion of resource command message 300 resides. Once constructed, resource command message 300 is sent over the communication path coupling the resource consumer to resource nodes. It is contemplated that resource command message 300 could also be sent while being constructed.
  • resource command message 300 is encapsulated into a datagram and sent over a packet switched network.
  • resource command message 300 is sent using User Datagram Protocol (UDP) as a transport.
  • UDP has reduced processing overhead relative to Transmission Control Protocol (TCP), and lends itself to the atomic command structure where information from one command is unnecessary in the processing of another command.
  • Contemplated commands include conducting I/O processing, reading data, writing data, allocating a resource, reserving a resource, managing a resource, checking status of a resource, conducting an inventory of a resource, logging resource events, locking a resource, or other resource related operation.
  • Resource nodes use command parameters 330 coupled with their own information to determine when to process command 320 .
  • Command Identifier 333 comprises information to group two or more related commands. It is contemplated command identifier 333 comprises a value unique to a grouping of commands. Commands are grouped for a number of reasons. For example, when a file system requests file data comprising a large number of LBAs to be read from a logical volume comprised of a plurality of mirrored logical partitions, a driver breaks the request into individual resource command messages for each LBA or for related groups of LBAs. Each mirrored logical partition could respond to each resource command message, generating multiple responses.
  • command identifier 333 comprises an ID number or a sequence number.
  • command identifier 333 represents a series of bid-response transactions. For example, a web server gateway may have a number of connections that require attention beyond the capability of a single web server. The gateway sends resource command message 300 with the number of connections in data 340 and with command identifier 333 to all the web servers operating as resource nodes. Each web server capable of responding reserves its capacity and sends a response. The gateway aggregates the responses, sending a subsequent command with the same command identifier 333 instructing the participating web servers to handle the connections. Furthermore, the non-participating web servers interpret the subsequent command as an instruction to stop processing the commands with the same command identifier 333 .
  • Urgency 335 (used here as a noun) comprises information relating to the timing of processing command 320 . It is contemplated resource nodes infer from urgency 335 the actual timing for when a command is to be processed and the ordering of commands in a command queue. Contemplated urgencies include relative timing information or absolute timing information. Relative timing information includes specifying a desire for processing within a time window. Absolute timing information includes specifying a specific time to be processed from the resource consumer's perspective or the resource node's perspective.
  • Resource nodes fold urgency 335 together with their own information as well.
  • urgency 335 includes a resource consumer's preferred provider.
  • the resource node that matches the preferred provider infers a higher urgency than a resource node that does not match the preferred provider.
  • a preferred provider resource node processes the command normally whereas a non-preferred provider resource node processes the command with a delay.
  • Importance 337 (used here as a noun) comprises information relating to the priority of processing command 320 . It is contemplated priority includes relative priority or absolute priority. Relative priority includes quality of service (QoS) information. Absolute priority includes discrete levels possibly associated with a command queue. It is contemplated that resource nodes process resource command messages from multiple resource consumers and use importance information to help resolve the ordering of commands to be processed.
  • Resource nodes use command parameters including urgency 335 or importance 337 to determine a final ordering of commands to be processed.
  • FIG. 4 represents a possible schematic of a resource node's command queue.
  • Command queue 400 comprises one or more command positions 415 A through 415 Z where the number of positions depends on the implementation of the resource node.
  • command queue should be interpreted broadly to encompass any ordering of commands for processing.
  • Example command queues include those ordered by time, ordered by priority, first come first served, having just a pending command and one executing command, or other orderings determined by a resource node.
  • Resource nodes determine the ordering or the reordering of commands based upon when to process the command. Once the ordering is determined based upon the resource node information, command urgency or importance, the resource node will reorder the queue by placing the command in command queue 400 at an appropriate position.
  • position should be interpreted broadly to encompass the concept of command ordering relative to other commands, pending or executing. Resource nodes comprise the ability to manipulate command queue 400 . Furthermore, the ordering could indicate that the resource node might never process the command; therefore, the command is not placed in the queue at all. This concept also includes circumstances where the resource node is so loaded that it cannot process incoming messages at all. Consequently, the concept of a resource node determining “when” to process a command includes ignoring a resource command message.
  • command queue 400 generally represents a first-come-first-served queue where the resource node modifies command positions based upon QoS, preferred provider information, or command identifier.
  • FIG. 5 represents a set of possible steps employed by a resource node to process command queue messages.
  • Resource consumers send resource command messages to one or more resource nodes; therefore, the steps presented in FIG. 5 occur substantially in parallel when more than one resource node, preferably redundant nodes, receives the resource command message.
  • a resource node receives a resource command message.
  • the resource command message could be addressed to the individual node or addressed to a set of resource nodes collectively.
  • the resource node receives the resource command message at an IP address, unicast or multicast. It is contemplated that the resource node could be loaded where it is unable to receive the resource command message. If so, either another resource node processes it, or the resource consumer attempts to send the resource message again.
  • the resource node begins the evaluation of the resource command message.
  • the resource node interprets the urgency information within the resource command message, if applicable.
  • Urgency information includes direct or indirect information.
  • Direct information comprises references to a time when the command should be processed.
  • direct information includes stating the resource consumer's desired urgency as an absolute time or a relative time.
  • Indirect information comprises references where the resource node infers the time based upon the urgency information. For example, when the resource command message includes preferred provider information, the resource node can alter when the command will be processed.
  • the resource node continues with the evaluation of the resource command message by interpreting the importance information, if applicable.
  • the importance information includes direct or indirect information.
  • Direct information includes absolute or relative priority information.
  • Indirect information includes QoS information.
  • QoS information informs the resource node to preferentially process commands over others to enhance performance.
  • Contemplated resource node information includes loading information, capabilities, previous commands, commands in the command queue, or other resource node centric information.
  • the resource node combines its resource node information along with the information interpreted from the urgency or importance information to establish when the command in the resource command message should be processed.
  • the resource node determines if the command should be processed at all. If not, the resource node silently discards the command message at step 535 .
  • the resource node autonomously determines if the resource command message is discarded and the resource consumer assumes responsibility for ensuring its resource needs are met. It is contemplated the resource node discards the command when it is fully loaded, when its command queue is full, when its resources are reserved, or other reasons where the resource node does not wish to process the command. Once discarded, the resource node again waits to receive additional resource command messages at step 500 .
  • If the resource node determines that the command should be processed, it determines if the command should be delayed at step 543 .
  • the command could be delayed for several reasons, including that the resource node is not a preferred provider or that a resource consumer specifically requests a time for the command to be processed. If the command is to be delayed, at step 545 the resource node determines the amount of time for the command to be delayed. It is also contemplated the resource node could accelerate processing of a command by canceling an executing command in favor of a current command.
  • the resource node determines if a pending command should be suspended. Pending commands are suspended if the command is no longer valid as determined by information within the command parameters of the command. If the current command identifies itself, through a command identifier, as part of a group to which a pending command belongs, the resource node can interpret the current command as an instruction to suspend the pending command at step 555 . Suspending includes further delaying the pending command from being processed, halting the pending command from being processed, removing the pending command from the command queue, deleting the pending command, or other actions that result in altering the pending command's processing time.
  • the resource node has completed its determination on when the command should be processed and the resource node places the command in the queue of commands.
  • the command queue ordering is modified by the resource node based upon priority, urgency, or command identifier.
  • the resource node places the command in an absolute position or a relative position within the command queue. If the command queue has a set number of positions, an absolute position represents a specific index into a standard queue, for example. Examples of absolute positions include the currently executing command position, the first position, or the last position.
  • a relative position represents a position, possibly ordered by time or priority, relative to other commands in the queue.
  • the resource node executes the command when appropriate. Furthermore, if applicable, the resource node will send a resource command response message to the resource consumer at step 575 .
  • the response includes acknowledgement that the command has been processed, requested data, or an indication of the ability to process the command.
  • the resource node reserves at least a portion of the requested allocation of resources for the resource consumer and informs the resource consumer of the indication of its ability. For example, if a resource consumer requests to store 100 gigabytes of data, the resource node could respond with an indication that it is able to store 50 gigabytes. The resource node could also reserve the 50 gigabytes to allow the resource consumer to aggregate other resource nodes' abilities to achieve the 100 gigabytes.
  • a resource command response message could be received by other resource nodes and could be interpreted as an instruction to suspend processing of the command in the resource command message.
  • step 570 could execute as a parallel thread or task to the message handling steps.
  • the resource node steps illustrated in FIG. 5 are stored in a computer-readable medium as a series of instructions to be executed on a processing unit.
  • a plurality of resource nodes processes a resource command message substantially in parallel.
  • the plurality of resource nodes processes the resource command message within three seconds of each other.
  • FIG. 6 represents a set of possible steps employed by a resource consumer and a resource node to enable access to a resource.
  • Resource consumers send resource command messages to a resource device comprising one or more resource nodes.
  • a resource device comprising one or more resource nodes.
  • one or more resource consumers perform the steps independently of each other, possibly interacting with the same resource nodes.
  • a resource consumer begins the process of constructing a resource command message in a computer readable memory.
  • the resource consumer establishes its desired sense of urgency associated with the command in the resource command message.
  • the resource consumer establishes the importance of the command. Both steps 600 and 605 occur, if applicable, for the current resource command message.
  • the resource consumer optionally assigns a command identifier that signifies how the current command relates to previous commands or subsequent commands. Steps 600 , 605 , or 610 can occur in any desirable order.
  • the resource consumer constructs the resource command message based upon the command, command parameters including the command identifier, urgency, or importance.
  • the resource consumer sends the resource command message to a resource device.
  • the resource command message is formed into one or more packets and sent over a packet switched network.
  • the packets are sent using UDP.
  • When the resource consumer sends the resource command message, it is preferable that it sends the message to a group of resource nodes, or to all of them collectively.
  • the resource command message is sent via multicast where each resource node is a member of a multicast group whose address represents the resource device. It is contemplated that resource command messages are sent slowly at first to avoid congestion on the communication path coupling the resource consumers and the resource nodes.
  • One skilled in the art of network protocols, including TCP, will appreciate a slow start for congestion avoidance.
  • the resource node receives the resource command message and begins processing the message.
  • multiple resource nodes are able to receive the same resource message.
  • multiple resource nodes are equally able to process the command and respond back to the resource consumer who sent the resource command message.
  • the resource node utilizes the urgency, importance, or command identifier information as well as information regarding itself to determine when the command should be processed.
  • the resource node determines if a previous command should be suspended from processing at step 633 . If so, at step 635 , the previous command is suspended, otherwise the current command is placed in a queue of commands at step 640 . Once the command's turn for processing arrives, the resource node executes the command at step 645 and sends an appropriate response at step 650 .
  • the resource consumer could receive multiple responses from multiple resource nodes where the resource nodes offer redundant capabilities. If so, the resource consumer selects a preferred resource node among the plurality of nodes. In an especially preferred embodiment, the preferred resource node is selected based upon which of the redundant nodes responds first.
  • Each resource consumer interacting with a resource device comprising a plurality of resource nodes is able to have a completely different preferred provider.
  • the preferred provider is able to change as conditions in the environment change. Consequently, at any given time, resource consumers experience solid performance, load balancing, or responsiveness naturally without imposing extraneous management.
  • the steps presented in FIG. 6 are stored in a computer readable medium in the form of instructions to be executed on a processing unit.
  • Resource consumers and resource devices comprising one or more resource nodes realize a number of advantages as a natural result of employing resource command messages.
  • Resources scale naturally as additional resource devices or resource nodes are added to the system. Each individual resource node focuses on its main responsibilities and on processing resource command messages; therefore, the nodes are autonomous, allowing the system to scale at an atomic level up to the ability of the communication path to handle resource command messages.
  • the bandwidth of the communication path is more efficiently utilized because all traffic is relevant to accessing the resource rather than system management or maintenance.
  • incremental costs are reduced because if the resource system requires further capabilities individual resource nodes can be added as opposed to replicating an entire resource system.
  • Resource consumers send resource command messages to the resource nodes collectively, thereby allowing more than one resource node to respond. Given different loading across each resource node, the resource node most able to respond responds the quickest, resulting in a fast response time.
  • multiple resource nodes, not necessarily redundant nodes, process resource command messages substantially in parallel, providing higher performance to the resource consumer.
  • Resource consumers use importance information to indicate to a resource node the priority that should be considered for processing the command. Importance information aids in the handling of QoS data. Multiple responses are reduced through a slow start for congestion avoidance to limit consumption of bandwidth.
  • resource consumers each have their own view of the resource nodes and independently select a preferred provider when working with redundant resource nodes to aid in securing fastest response times and reduced multiple messages.
  • Load balancing is achieved as a natural result across redundant resource nodes because each node functions independently allowing each node to handle as much traffic as they are designed to handle.
  • Resource consumers have no a priori preference as to which resource node services their requests; however, the resource consumer can bias which node is preferred to reduce multiple responses. Even though a resource consumer could have a preferred provider, it can change the preferred provider based upon how other resource nodes respond through continued interactions. Therefore, loading is balanced across nodes. As additional nodes are added to the system to reduce loading, resource consumers are able to cycle through preferred nodes if required so that multiple resource consumers effectively share resource nodes.

Abstract

Resource command messages comprise commands and command urgency or importance information that is interpreted by a resource device and is coupled with information relating to the resource device to determine when to process the command within the resource command message. Resource devices comprising a plurality of resource nodes provide increased performance, responsiveness, and load balancing by multiple resource nodes processing the same resource command message in parallel.

Description

FIELD OF THE INVENTION
The field of the invention is computing resource command messaging and resource devices.
BACKGROUND OF THE INVENTION
Resource devices manage computing resources. There are a myriad of examples of such computing resources, extending from data storage on disk drives in storage arrays to connection processing within high volume content servers in server farms. Typically, resource devices comprise a plurality of resource nodes, thereby forming a distributed resource that appears as a single, logical resource device from the perspective of a resource consumer.
The resource nodes responsible for managing the physical resources, whether the resource nodes manage a data storage resource or are individual web servers responsible for connection and content resources, must function efficiently, especially in complex environments comprising many resource consumers and resource devices. The efficiency depends on desirable characteristics including scalability, high performance, load balancing, low response times (responsiveness), or other characteristics that could require optimization. Resource nodes are typically either managed by external management systems, or have a management layer imposed on them, which unfortunately introduces extra overhead beyond the core responsibility of resource management. A resource device comprising a plurality of resource nodes exacerbates the external management problem. For example, as resource nodes become loaded, they typically inform resource consumers of their state, shift resource requests to other resource nodes, or perform other out-of-band management communications to maximize performance, resulting in excessive out-of-band communication. Such systems suffer from scalability issues because, as new resource nodes are added to the environment, the management “chatter” increases and consumes a larger fraction of communication and processing bandwidth, which negatively impacts performance. Due to such coarse-grained scalability, the cost to incrementally enhance the capability of the system increases, and the cost to replicate the entire system becomes prohibitive.
To reduce costs and achieve other desirable results, it is advantageous to design a system in which each individual resource node functions independently of other resource nodes or external management systems. In other words, a resource node should not require information regarding other resource nodes to perform its primary responsibility of managing a resource. To service resource consumers who desire access to a resource, autonomous resource nodes should determine when to process commands from a resource consumer based upon, or at least as a function of, (a) information relating to the resource node and (b) any information supplied by the resource consumer within the message comprising the command.
Beyond supplying command information, resource consumers supply information regarding their desired urgency or importance for having a command processed. When resource consumers understand the behavior of the autonomous resource nodes, benefits of scalability, performance, or responsiveness are achieved naturally, without imposing additional functionality, because resource consumers are able to adjust their resource command messages based upon the interactions with all the resource nodes to gain higher performance.
Current related art attempts to provide efficient access to computing resources by focusing on activities external to resource consumer-resource node interactions rather than on the natural behavior resulting from their interactions. The related art imposes additional functionality to optimize desirable characteristics. For example, related art might require resource consumers to communicate with other resource consumers, require resource nodes to communicate with other resource nodes, or require a management system within the communication path to manage communications between resource consumers and resource nodes.
Sun Microsystems' U.S. Pat. No. 5,506,969 titled “Method and apparatus for bus bandwidth management” teaches how to efficiently schedule bus accesses from multiple applications to peripheral modules on a high-speed bus. Although the patent describes how the bus between the applications and modules is managed, it requires a bus management system rather than allowing the individual module's and application's behavior to have the desired performance result. The applications do not employ resource command messages comprising urgency or importance information that allows the modules to determine when to process requests.
Hewlett Packard Development Company's U.S. Pat. No. 6,886,035 titled “Dynamic load balancing of a network of client and server computers” teaches how client computers optimize throughput between themselves and a resource through the use of redirection. Redirection requires a host on the network to be knowledgeable of other hosts on the network beyond itself. The scalability of the system is reduced because each additional element added to the system must be managed and incorporated into the system to ensure it has sufficient knowledge for redirection. The load balancing is not achieved naturally as it would be through the use of urgency or importance information within command messages.
EMC Corporation's U.S. Pat. No. 6,904,470 titled “Device selection by a disk adapter scheduler” teaches how to efficiently schedule resource I/O requests based upon urgency and priority of requests. The patent describes how a main scheduler determines what type of scheduler should be used to manage various I/O tasks directed toward logical volumes managed by a disk adapter. Although logical volumes are associated with physical data storage resources, they present a coherent virtual device. Therefore, each disk adapter is responsible for a device rather than a resource and imposes additional management capabilities to ensure load balancing, performance, and other qualities across the logical volumes. The '470 patent does not address the autonomous behavior of a resource node whose behavior naturally results from the use of urgency or importance information used along with the resource node information to determine when tasks are issued.
None of the related art addresses the need for autonomous resource nodes whose behavior naturally results in desired characteristics including scalability, high performance, load balancing, or responsiveness. To fully realize the benefits of autonomous resource nodes, a solution would preferably include the following characteristics:
    • Resource nodes determine when to handle commands from resource consumers based upon information relating to the command and upon the resource node's own information
    • Messages sent from resource consumers to resource nodes indicate the resource consumer's sense of urgency or importance related to processing the message or the command within the message
Resource nodes that are able to determine when to handle commands sent to them result in several advantages. First, each individual resource node focuses on its main responsibilities rather than on other non-resource centric tasks; therefore, the resource node functions with a higher efficiency than a similar resource node that has additional management tasks to perform. Second, multiple resource consumers are able to interact with multiple resource nodes of a resource device without an extraneous arbitrator. This results in improved response time because each resource node is able to determine independently which resource consumer deserves attention (if applicable). Third, when multiple resource nodes provide access to redundant resources, and the resource nodes are addressed simultaneously, the collection of resource nodes automatically load balances because each resource node functions to its fullest capabilities. If one redundant resource node is fully loaded, another redundant resource node is able to service requests without external intervention. Additionally, any of the redundant resource nodes is capable of providing a valid response to a resource consumer; therefore, the responsiveness of the system is higher than that of a resource device without redundant resource nodes. Fourth, the scalability of such an environment is high because each resource node is independent and does not require additional information from resource consumers or other resource nodes and can integrate into the environment easily. Although only a few advantages are presented, other contemplated advantages are naturally inherent in the presented subject matter.
Thus, there remains a considerable need for methods and apparatus that provide for resource command messages and resource devices comprising one or more resource nodes that autonomously determine when to process resource command messages based upon the contents of the command message and on information associated with the resource node.
SUMMARY OF THE INVENTION
One aspect of the invention is directed toward a resource command message comprising a command and command parameters comprising an indication of the command's urgency or the command's importance. Resource consumers construct resource command messages to interact with resource nodes composing a resource device. A resource node processes resource command messages based upon the urgency or importance of the resource command message in addition to information centric to the resource node. Furthermore, a resource device can comprise a plurality of resource nodes where each resource node has an ability to operate independently of all other nodes and each resource node is able to receive the resource command message. The urgency or importance within a resource command message includes relative or absolute values.
In another aspect, the present invention is directed toward a method of processing resource command messages. The method includes interpreting command urgency or command importance information within the resource command message and combining the information along with resource node information to establish when the command within the resource command message will be processed. The method also includes a step of determining the ordering in which commands in a command queue are processed based upon when the command is to be processed. Through this determination, the command could be processed immediately, delayed in processing, never processed, or could have its processing order changed relative to other commands sent previously or subsequently. Furthermore, the method includes processing resource command messages by more than one resource node that composes a resource device.
In still yet another aspect, the present invention is directed toward a method of accessing a resource device through creating a resource command message that includes a command and command parameters comprising at least one of a command urgency or command importance. The method also includes sending the resource command message to a resource device and determining when to process the command within the resource command message. When a resource device comprises a plurality of resource nodes, sending the resource command message includes multicasting to at least some of the resource nodes.
In preferred embodiments, resource nodes within a resource device can operate as autonomous entities, each responsible for its own individual resources. Resource consumers acquire resources from resource nodes to fulfill their individual functions, and are also autonomous entities. As resource consumers require resources, they send resource command messages to the resource device with an indication of the urgency of the command or the importance of the command to acquire the resources, to reserve the resources, to use the resource, or to interact with the resource in other ways. Because resource nodes are autonomous and service requests from multiple resource consumers, the resource nodes fold information regarding their state, history, capabilities, or other relevant information together with their interpretation of the urgency or importance information to decide how or when to process the command. As used herein, the phrase “when to process” means autonomously handling the processing of a command and should be interpreted broadly including time based processing, order of processing, or other process handling concepts.
It is contemplated that resource consumers and resource nodes can communicate over a path that is outside the control of the consumers or nodes. To ensure high performance or reliability, in a preferred embodiment, a resource device comprises a plurality of resource nodes, where each resource node is responsible for all or some fraction of the resource and also functions independently of all other nodes, devices, or consumers. When resource nodes provide redundant resources, resource consumers send resource command messages to some or all of the resource nodes, and, given the current conditions of the network or node loading, the most capable resource node will respond. Furthermore, other resource nodes interpret additional resource command messages or resource command responses as instructions to suspend or stop processing of previously unprocessed commands to reduce multiple responses. Through autonomous operation of resource nodes coupled with resource command urgency or importance, an overall load-balanced system is achieved without requiring out-of-band communications.
Glossary
The following descriptions refer to terms used within this document. The terms are provided to ensure clarity when discussing the various aspects of the inventive subject matter without implied limitations.
“Resource device” means a logical device that is addressable, in whole or in part, on a communication path, and provides access to a commodity used as a computing resource by a resource consumer. Logical resource devices are contemplated to include physical devices or virtual devices. Physical resource devices include computers, monitors, hard disk drives, power supplies, or other physical elements. Virtual resource devices include addressable video displays, logical storage volumes, a web server farm with a URL, or other abstractions of physical elements. Resource consumers interpret each resource device as a coherent whole device, regardless of its actual physical or virtual structure.
“Resource consumer” means an entity that requires access or control over a commodity to perform its desired functions. Resource consumers include computers, applications, users, web server gateways, or other entities that are able to communicate with resource nodes over a communication path; therefore, resource consumers are also addressable. It is contemplated that a resource device can at times function as a resource consumer.
“Resource node” means a portion of a resource device that represents a fraction of a larger resource device, up to and including the complete resource device. Resource nodes can also operate as independent, addressable entities on the communication path. Contemplated resource nodes include logical partitions that combine with other logical partitions to form a logical volume from the perspective of resource consumers, addressable video frames, individual web servers in a server farm, or other constituent elements.
The teachings herein may be advantageously employed by developers and producers of computing resources, including storage devices or media content servers, to create efficient, scalable systems that deliver high performance and fast response.
Various objects, features, aspects, and advantages of the present invention will become more apparent from the following detailed description of the preferred embodiments of the invention, along with the accompanying drawings in which like numerals represent like components.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 represents an environment where resource consumers interact with a resource device comprising multiple resource nodes.
FIG. 2 represents a schematic of a possible physical embodiment of a resource node.
FIG. 3 represents a schematic of a possible resource command message stored in a computer readable memory.
FIG. 4 represents a schematic of a possible resource node command queue.
FIG. 5 represents a schematic of possible steps for processing a resource command message.
FIG. 6 represents a schematic of possible steps for accessing a resource device.
DETAILED DESCRIPTION
The following detailed description refers to examples based upon disks in a storage array and web servers in a server farm, and illustrates the applicability of the inventive subject matter. Although these two examples are used, a myriad of other examples could be provided, so no implied limitations should be drawn from these examples.
FIG. 1 represents an environment where resource consumers interact with a resource device comprising one or more resource nodes. Resource device 110 comprises one or more resource nodes 100A through 100N. Each individual resource node is communicatively coupled to one or more resource consumers 120A through 120P through communication path 115. In a preferred embodiment many resource consumers interact with many resource devices.
Resource Consumers
Resource consumers 120A through 120P operate independently of each other and do not require information from other entities beyond resource nodes 100A through 100N to interact with the desired resources managed by resource nodes 100A through 100N.
Resource consumers 120A through 120P comprise a combination of hardware, software, or firmware that includes instructions within a computer readable memory programmed to interact with resource device 110, and to access the resources managed by resource nodes 100A through 100N. In a preferred embodiment, a resource consumer comprises a computer running an application or an operating system that desires access to a resource. In a yet more preferred embodiment, a resource consumer comprises a workstation with a driver that provides for communications between the workstation's operating system and resource nodes 100A through 100N. The driver also provides the operating system with enough information regarding resource device 110 that resource device 110 appears as a locally connected device. For example, a Windows® computer wishes to mount a logical volume for storage. The Windows computer includes a driver that accepts I/O commands from the file system and transforms them into messages transferred over a network to the logical partitions composing the logical volume, in a manner that is transparent to the file system or applications accessing the logical volume. The logical volume appears as a locally attached disk drive.
Alternatively, resource consumers 120A through 120P are contemplated to comprise applications that directly interact with resource nodes 100A through 100N. For example, a gateway to a web site could represent a resource consumer that accesses a distributed web server farm where an individual web server represents a resource node.
Although resource consumers 120A through 120P operate independently of each other, they interact with resource nodes 100A through 100N collectively or individually. In addition, resource consumers 120A through 120P do not require information from a system external to the resource consumers 120A through 120P or resource nodes 100A through 100N, including name servers, metadata servers, or other extraneous systems. In a preferred embodiment, it is contemplated that resource consumers 120A through 120P comprise the ability to discover resource nodes 100A through 100N. The ability to discover includes sending a broadcast message over communication path 115 to which resource nodes 100A through 100N respond with their individual names. Furthermore, in a preferred embodiment resource consumers 120A through 120P use name resolution to convert responses from resource nodes 100A through 100N into addresses on communication path 115. One skilled in the art of network programming will appreciate there are numerous ways to conduct discovery and name resolution including SSDP, DNS, WINS, or others.
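As a non-limiting illustration only, the following Python sketch shows one way a resource consumer might perform discovery over a communication path such as communication path 115. The wire format (a “DISCOVER” datagram answered with a node name), the port, and the timeout are assumptions made for this sketch and are not prescribed by the description.

import socket

# Sketch of discovery, assuming a hypothetical wire format: the consumer broadcasts a
# "DISCOVER" datagram and collects name replies from any resource nodes that answer
# within a short window; each reply's source address resolves the node's name.
def discover_resource_nodes(port=7000, wait_seconds=2.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(wait_seconds)
    sock.sendto(b"DISCOVER", ("255.255.255.255", port))
    nodes = {}                                  # node name -> (ip, port) of the responder
    try:
        while True:
            data, addr = sock.recvfrom(1500)
            nodes[data.decode("utf-8", "replace")] = addr
    except socket.timeout:
        pass                                    # no further responses within the window
    finally:
        sock.close()
    return nodes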
Once resource consumers 120A through 120P have established communications with resource nodes 100A through 100N, resource consumers 120A through 120P send resource command messages addressed to resource device 110. The resource command messages can be addressed to resource device 110 in whole or in part. In a preferred embodiment, resource command messages are sent to resource nodes 100A through 100N collectively through multicast where resource device 110 is addressed in whole, although it is contemplated that unicast messaging where resource device 110 is addressed in part is also possible. In this context “multicast” means sending a single message over communication path 115 where two or more of resource nodes 100A through 100N receive the message without requiring a resource consumer to consume bandwidth on communication path 115 by sending more than one copy of the message to each resource node. It is also contemplated that resource device 110 can be addressed simultaneously through multicast and unicast messaging.
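The following minimal sketch illustrates the multicast behavior described above: one datagram placed on the wire reaches every node joined to the group. The group address and port shown are hypothetical placeholders for an address representing resource device 110.

import socket

# Sketch: one datagram sent to a multicast group reaches every resource node that has
# joined the group, so the consumer sends a single copy of the message.
GROUP, PORT = "239.1.1.1", 7001  # hypothetical group address for the resource device

def send_to_resource_device(payload: bytes) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # keep to the local network
    sock.sendto(payload, (GROUP, PORT))   # one copy on the wire, many receivers
    sock.close()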
Resource consumers 120A through 120P each construct resource command messages that comprise command parameters regarding their individual specific needs. It is contemplated that at least a portion of the resource command message will reside in a memory as it is constructed. As used herein, the term “memory” means any hardware that stores information, no matter where the memory is located or how the information is stored. The command parameters include the resource consumer's sense of urgency or importance relative to having their need satisfied. Urgency gives a sense of the timing constraints while importance gives a sense of priority desired by the individual resource consumer. Resource nodes 100A through 100N use the urgency or importance command parameters and other command parameters to aid in the determination of when to process the resource command message. In a preferred embodiment, resource consumers determine their urgency or importance based upon their own internal information or based upon information gathered from responses from resource nodes. Furthermore, in a more preferred embodiment, command parameters include command identifiers used to correlate a group of related resource command messages.
Resource consumers 120A through 120P each comprise the ability to receive more than one response from a single resource command message. In cases where resource device 110 comprises redundant resources managed by resource nodes 100A through 100N, more than one of resource nodes 100A through 100N may respond to a message. Multiple responses are expected because each resource node functions independently from other nodes and does not know if a response has already been generated. However, multiple responses are quenched due to proper handling of urgency or importance information.
In a preferred embodiment, resource consumers 120A through 120P employ a slow start algorithm to avoid congestion, to ensure efficient use of bandwidth, and to reduce multiple responses from resource nodes. By initially sending small resource command messages slowly, resource consumers 120A through 120P determine which of resource nodes 100A through 100N are likely to respond first; then each individual resource consumer 120A through 120P is able to adjust its urgency or importance information independently to aid in the reduction of multiple responses. For example, a slow start algorithm could break large command messages into smaller command messages, and send the smaller messages slowly. As responses are received, the algorithm begins sending larger messages more quickly. Slow start ensures networking equipment with small buffers is not flooded with large packets; if the equipment becomes flooded, network performance drops. In addition, a slow start provides resource consumers an opportunity to detect which resource nodes are initially more responsive. As packets are sent slowly at first, a window is provided to allow multiple responses from the resource nodes. Resource consumers can use the multiple responses to establish a preferred provider of the resource. Preferred provider information can then be used to quench multiple responses as the communication speeds up.
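As an illustrative sketch only, the slow-start behavior could look like the following. The send() callable, the growth factor, and the retry limit are assumptions; send() is assumed to transmit one batch and return the name of the first node to respond (or None on silence).

# Slow-start sketch (hypothetical transport): start with small, infrequent messages,
# note which node answers first, then grow the batch size as responses arrive.
def slow_start_send(commands, send, max_chunk=64, max_retries=5):
    chunk, i, retries = 1, 0, 0
    preferred_provider = None
    while i < len(commands):
        batch = commands[i:i + chunk]
        responder = send(batch, preferred_provider)
        if responder is None:
            retries += 1
            if retries > max_retries:
                raise TimeoutError("no resource node responded")
            chunk = 1                                # back off and retry slowly
            continue
        preferred_provider = responder               # bias later messages toward this node
        chunk = min(chunk * 2, max_chunk)            # grow the window as responses arrive
        i += len(batch)
        retries = 0
    return preferred_provider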
Resource Devices
Resource device 110 comprises one or more resource nodes as indicated by resource nodes 100A through 100N. Although FIG. 1 depicts a single resource device, it is contemplated that multiple resource devices coexist on communication path 115.
Resource device 110 is accessible by one or more of resource consumers 120A through 120P; therefore, resource device 110 can be a shared resource. In a preferred embodiment, resource device 110 includes information residing on resource nodes 100A through 100N to indicate when resource device 110 is privately owned or shared among resource consumers 120A through 120P.
Resource device 110 comprises an identifier used by resource consumers 120A through 120P to differentiate resource device 110 from other resource devices on communication path 115. In a preferred embodiment, the identifier comprises a name stored in the memory of resource nodes 100A through 100N wherein the name is resolvable to an address on communication path 115. When resource consumers 120A through 120P issue discovery requests, resource nodes 100A through 100N respond with a name that comprises the name of resource device 110, indicating that they belong to resource device 110. In an especially preferred embodiment, the name resolves to an IP address which can include a unicast or multicast address. It is contemplated resource consumers 120A through 120P can address resource device 110 through a single address, preferably an IP multicast address.
In a preferred environment, resource device 110 comprises redundant resource nodes where two or more of resource nodes 100A through 100N manage duplicate resources. For example, if resource device 110 represents a logical volume used by resource consumers 120A through 120P to store data, resource node 100A and resource node 100B could represent logical partitions that mirror the same stored data. Yet another example includes a case where resource device 110 represents a logical web server where each of resource nodes 100A through 100N is an individual server with equivalent ability to process incoming connections requesting content.
As an example of a resource device with redundant resource nodes, consider a storage array implemented based upon Zetera™ technology where a logical volume, a resource device, is virtualized as a plurality of IP addressable logical partitions, resource nodes. The logical volume represents a single virtual disk with logical block addresses (LBA) ranging from 1 to a maximum value of MAX. Each logical partition is responsible for a set of LBAs, not necessarily continuous or contiguous, wherein the collection of logical partitions cover the entire range of LBAs, 1 to MAX. Furthermore, two or more logical partitions are redundant when they are responsible for an identical set of LBAs; thereby producing a mirror of the data. Workstations mount the logical volume as if it were a locally connected disk. A driver handles all communications with the logical partitions over a network sending command messages via multicast to all the logical partitions using a single address.
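The relationship between a logical volume and its logical partitions can be pictured with the small sketch below. The partition names, sizes, and data structures are illustrative assumptions; the point is only that partitions with identical LBA sets are mirrors and that the partitions together cover LBAs 1 to MAX.

# Sketch of a logical volume virtualized as logical partitions, each owning a set of LBAs.
MAX_LBA = 1000

partitions = {
    "partition-A": set(range(1, 501)),
    "partition-B": set(range(1, 501)),              # mirror of partition-A (identical LBA set)
    "partition-C": set(range(501, MAX_LBA + 1)),
}

def covers_volume(parts, max_lba):
    return set().union(*parts.values()) == set(range(1, max_lba + 1))

def mirrors_of(parts, name):
    return [other for other, lbas in parts.items() if other != name and lbas == parts[name]]

assert covers_volume(partitions, MAX_LBA)
assert mirrors_of(partitions, "partition-A") == ["partition-B"]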
Another example of a resource device with redundant nodes is a web server farm where each server is able to serve identical content to browsers. A gateway sends requests coming from the Internet via command messages to the servers collectively. The first server to respond handles the connections.
It is contemplated resource device 110 could represent other computing resources including processor bandwidth, displays, memory, servable content, connection handling, network bandwidth, or other computing related resources.
Communication Path
Communication path 115 provides support for addressing and data transport among resource consumers 120A through 120P and resource nodes 100A through 100N. It is contemplated that communication path 115 is not under the direct control of the resource nodes or resource consumers; however, it is contemplated resource consumers 120A through 120P or resource nodes 100A through 100N could alter the behavior of communication path 115. In addition, it is contemplated that communication path 115 comprises characteristics that render it unreliable.
In a preferred embodiment, communication path 115 comprises a packet switched network comprising Ethernet communication transporting an internet protocol. In the preferred embodiment, resource consumers 120A through 120P and resource nodes 100A through 100N acquire IP addresses through DHCP.
Resource Nodes
FIG. 2 represents a possible physical embodiment of a resource node. Resource node 200 receives resource command messages from resource consumers over communication path 115. Processing unit 210 receives the resource command messages and processes the commands within the message through the use of command queue 230 stored in memory 220. The command from the message is placed in command queue 230 as represented by commands 233A through 233N. Processing unit 210 processes commands 233A through 233N according to resource node information stored in memory 220 including command queue 230 or resource node data 240. As processing unit 210 processes commands 233A through 233N, processing unit 210 accesses resources 260A through 260M over resource communication path 215.
It is contemplated that resource node information stored in memory 220 comprises sufficient information to allow resource node 200 to function independently of other resource nodes and to focus on its main set of responsibilities. In a preferred embodiment, one element of hardware comprising processing unit 210 and memory 220 services one or more resource nodes. For example, a disk drive with a data storage resource could be adapted with a memory and processing unit to offer a number of logical partitions, each with their own IP address and each responsible for a set of LBAs. Alternatively, a rack-mount enclosure supporting a plurality of disk drives could include one or more CPUs forming processing unit 210 and one or more RAM modules forming memory 220. The rack-mount enclosure could then offer many logical partitions that have responsibility across the plurality of disk drives. It is also contemplated that resource node 200 could represent a single resource. For example, a logical partition with an address could be responsible for one complete disk drive.
Resource communication path 215 provides the addressing and data transfer between processing unit 210 and resources 260A through 260M. In a preferred embodiment, resource communication path 215 comprises a disk drive communication bus. Examples of disk buses include ATA, SCSI, Fibre Channel, USB, or others existing or yet to be invented. It is also contemplated that resource communication path 215 could include a packet switched network. For example, in the case where resource node 200 is a content server, resource communication path 215 could be an IP network to a storage array that houses content.
Resource node 200 determines when to process commands 233A through 233N based upon interpreting the urgency or importance information found in each resource command message and on interpreting resource node information stored in memory 220. Resource node 200 uses information about itself to make an assertion of a proper way to handle commands autonomously. Information about resource node 200 includes its ability to process commands, capacity, loading, command queue ordering, previous commands stored in the command queue, or other relevant information that impacts servicing resource command messages from resource consumers. For example, if resource node 200 is functioning at 100% capacity servicing many resource consumers, it can determine that it will not service a current resource command message by silently discarding it while processing its current load. The resource consumer whose resource command message was dropped can attempt another command, possibly adjusting the message's urgency or importance, or can wait for another resource node to respond.
Information relating to resource node 200 stored in memory 220 can advantageously comprise instructions and data that determine the behavior of resource node 200. In an especially preferred embodiment, resource node data 240 includes information for use by resource consumers to construct an understanding of the overall resource device including the name of the resource device to which the resource node belongs, the name of the resource node, the role the resource node plays in the resource device, attributes, or other resource node information. This implies that resource node data 240 also represents resource device information.
In a preferred embodiment, resource node 200 focuses on handling its responsibilities without performing extraneous tasks to enhance desirable characteristics of the resource device. This allows resource node 200 to fully utilize its capabilities toward servicing requests without negatively impacting performance or responsiveness. Furthermore, duplicates of resource node 200 provide enhanced capabilities from the perspective of resource consumers.
Redundant Resource Nodes
Redundant resource nodes are resource nodes that provide access to nearly identical resources. Redundant resource nodes can be differentiated by resource node data 240, name or address, for example. However, each redundant resource node has responsibility for the same type of resource and has equivalent ability to service resource command messages subject to their loading, capabilities, or other abilities. An example of redundant resource nodes includes logical partitions that have responsibility for the same set of LBAs within a logical volume but on different disks or two web servers capable of serving identical content. In a preferred embodiment, redundant resource nodes can participate in the same multicast group where a resource consumer is able to address them simultaneously.
In a preferred embodiment, resource consumers send resource command messages to the resource nodes of a resource device without regard to which resource nodes will actually process the resource command message. In the case of redundant resource nodes, a resource command message will potentially be processed substantially in parallel by the redundant resource nodes. As used herein, “substantially in parallel” means at least two resource nodes process the resource command message within ten seconds of each other due to the timing characteristics of the communication path and the resource nodes. Timing characteristics include latency, node loading, or other parameters that affect the processing time including those directly imposed by the resource consumer or resource nodes.
It is contemplated that redundant resource nodes can generate multiple responses to resource command messages, which potentially consume bandwidth. In a preferred embodiment, resource nodes and resource consumers interact in a manner that attempts to quench multiple responses. It is also contemplated that resource consumers can initiate an exchange of multiple resource command messages expecting multiple responses. In a preferred embodiment, the resource consumer selects a preferred provider from among the responding resource nodes, and then includes the preferred provider information in the urgency of subsequent resource command messages. If a resource node is a preferred provider, it processes the resource command message normally. If a resource node is not a preferred provider, it delays processing. When the preferred provider responds, the resource consumer sends its next message. The non-preferred-provider resource nodes receive the next message and cancel a previously sent pending command. It is also contemplated that the current command could take over the previous command's position in the command queue.
It is contemplated that resource command messages can comprise command identifiers that are used to identify a group of related commands. In that situation, if a resource node has a command in its command queue and receives an additional related command, the resource node can interpret this sequence of events as an instruction to suspend the processing of the previous command, including deleting the command, thereby reducing the number of potential multiple responses.
Resource node 200 can execute commands or reserve resources for future use based upon the command and command parameters in a resource command message. Executing a command provides for actual servicing of resource command messages. Reserving resources allows resource consumers to aggregate abilities of multiple resource nodes.
Resource Command Messages
FIG. 3 represents a possible schematic of a resource command message. Resource command message 300 comprises command 320 having command parameters 330 to be processed by a resource node. In a preferred embodiment, resource consumers address resource command message 300 to a resource device or a resource node via resource destination address 310. Resource command message 300 also optionally includes data 340. For example, data 340 is present if command 320 indicates a write command to a disk drive where data 340 represents the target data to be written. In a preferred embodiment, resource command 320 comprises command urgency 335 or command importance information 337. In yet a more preferred embodiment, resource command 320 comprises command identifiers 333. As used herein the term “indicates” means something that can be resolved to something else. Thus, the wording “command 320 indicates a write command” means that “command 320 can be resolved to a write command.”
A resource consumer constructs resource command message 300 in a computer readable memory wherein at least a portion of resource command message 300 resides. Once constructed, resource command message 300 is sent over the communication path coupling the resource consumer to resource nodes. It is contemplated that resource command message 300 could also be sent while being constructed. In a preferred embodiment, resource command message 300 is encapsulated into a datagram and sent over a packet switched network. In an especially preferred embodiment, resource command message 300 is sent using User Datagram Protocol (UDP) as a transport. UDP has reduced processing overhead relative to Transmission Control Protocol (TCP), and lends itself to the atomic command structure where information from one command is unnecessary in the processing of another command. Contemplated commands include conducting I/O processing, reading data, writing data, allocating a resource, reserving a resource, managing a resource, checking status of a resource, conducting an inventory of a resource, logging resource events, locking a resource, or other resource related operation. Resource nodes use command parameters 330 coupled with their own information to determine when to process command 320.
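One possible in-memory layout and serialization of resource command message 300 is sketched below. The field names mirror FIG. 3, but the JSON-over-UDP encoding, the Python types, and the port are assumptions made for illustration; the description does not prescribe a wire format.

import json
import socket
from dataclasses import dataclass, asdict
from typing import Optional

# Sketch of resource command message 300 held in a computer readable memory.
@dataclass
class ResourceCommandMessage:
    destination: str                  # resource destination address 310 (node or device address)
    command: str                      # command 320, e.g. "read", "write", "reserve"
    command_id: Optional[int] = None  # command identifier 333, groups related commands
    urgency: Optional[str] = None     # urgency 335, relative or absolute timing information
    importance: Optional[int] = None  # importance 337, e.g. a QoS level
    data: Optional[str] = None        # data 340, e.g. the payload of a write command

def send_resource_command(msg: ResourceCommandMessage, port: int = 7001) -> None:
    payload = json.dumps(asdict(msg)).encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP transport
    sock.sendto(payload, (msg.destination, port))
    sock.close()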
Command Identifier
Command identifier 333 comprises information to group two or more related commands. It is contemplated command identifier 333 comprises a value unique to a grouping of commands. Commands are grouped for a number of reasons. For example, when a file system requests file data comprising a large number of LBAs to be read from a logical volume comprised of a plurality of mirrored logical partitions, a driver breaks the request into individual resource command messages for each LBA or for related groups of LBAs. Each mirrored logical partition could respond to each resource command message, generating multiple responses. However, when a resource node detects a new read command within the command group identified by command identifier 333, the resource node suspends processing of the previous command, reducing the potential of a multiple response to the previous resource command message. It is also contemplated that a resource node could halt the processing of a currently executing command, or could suspend the response of a command that has already been processed. In a preferred embodiment, command identifier 333 comprises an ID number or a sequence number.
It is also contemplated that command identifier 333 can represent a series of bid-response transactions. For example, a web server gateway could have a number of connections requiring attention beyond the capability of a single web server. The gateway sends resource command message 300 with the number of connections in data 340 and with command identifier 333 to all the web servers operating as resource nodes. Each web server capable of responding reserves its capacity and sends a response. The gateway aggregates the responses, sending a subsequent command with the same command identifier 333 instructing the participating web servers to handle the connections. Furthermore, the non-participating web servers interpret the subsequent command as an instruction to stop processing the commands with the same command identifier 333.
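A small sketch of how a node might act on command identifier 333 appears below. The queue representation (a plain list of dictionaries) and the field names are assumptions; the point is only that a newly arrived command in a group suspends earlier pending commands in the same group.

# Sketch: treat a newly arrived command carrying a command identifier as an instruction
# to suspend earlier pending commands in the same group before queueing the new command.
def enqueue_with_group_suspension(pending, new_cmd):
    group = new_cmd.get("command_id")
    if group is not None:
        pending[:] = [cmd for cmd in pending if cmd.get("command_id") != group]
    pending.append(new_cmd)

pending = [{"command": "read", "command_id": 42}]
enqueue_with_group_suspension(pending, {"command": "read", "command_id": 42})
assert len(pending) == 1   # the earlier command in group 42 was suspended (removed)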
Urgency
Urgency 335 (used here as a noun) comprises information relating to the timing of processing command 320. It is contemplated resource nodes infer from urgency 335 the actual timing for when a command is to be processed and the ordering of commands in a command queue. Contemplated urgencies include relative timing information or absolute timing information. Relative timing information includes specifying a desire for processing within a time window. Absolute timing information includes specifying a specific time to be processed from the resource consumer's perspective or the resource node's perspective.
Resource nodes fold urgency 335 together with their own information as well. In a preferred embodiment, urgency 335 includes a resource consumer's preferred provider. A resource node that matches the preferred provider infers a higher urgency than a resource node that does not match the preferred provider. For example, a preferred-provider resource node processes the command normally whereas a non-preferred-provider resource node processes the command with a delay. This approach provides several benefits: multiple responses are reduced, conserving bandwidth, and another resource node is allowed to take over as preferred provider if the original preferred provider is unable to respond fast enough, thereby ensuring high responsiveness.
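As an illustrative sketch only, the preferred-provider interpretation of urgency 335 could be folded into a processing delay as follows. The hold-off value and field names are assumptions.

# Sketch: a node matching the preferred provider infers higher urgency and processes
# immediately; any other node adds a hold-off and stands ready to take over.
NON_PREFERRED_HOLD_OFF_SECONDS = 0.050   # illustrative delay

def processing_delay(node_name, urgency):
    preferred = urgency.get("preferred_provider") if urgency else None
    if preferred is None or preferred == node_name:
        return 0.0                           # process normally
    return NON_PREFERRED_HOLD_OFF_SECONDS    # delay; respond only if the preferred node does not

assert processing_delay("node-A", {"preferred_provider": "node-A"}) == 0.0
assert processing_delay("node-B", {"preferred_provider": "node-A"}) > 0.0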
Importance
Importance 337 (used here as a noun) comprises information relating to the priority of processing command 320. It is contemplated priority includes relative priority or absolute priority. Relative priority includes quality of service (QoS) information. Absolute priority includes discrete levels possibly associated with a command queue. It is contemplated that resource nodes process resource command messages from multiple resource consumers and use importance information to help resolve the ordering of commands to be processed.
Resource nodes use command parameters including urgency 335 or importance 337 to determine a final ordering of commands to be processed.
Command Queue
FIG. 4 represents a possible schematic of a resource node's command queue. Command queue 400 comprises one or more command positions 415A through 415Z where the number of positions depends on the implementation of the resource node.
Although FIG. 4 presents a common representation of a command queue, one ordinarily skilled in the art will recognize there are many possible ways to order the processing of a set of commands, even ways that are not data structures. As used herein, “command queue” should be interpreted broadly to encompass any ordering of commands for processing. Example command queues include those ordered by time, ordered by priority, first come first served, having just a pending command and one executing command, or other orderings determined by a resource node.
Resource nodes determine the ordering or the reordering of commands based upon when to process the command. Once the ordering is determined based upon the resource node information and the command urgency or importance, the resource node will reorder the queue by placing the command in command queue 400 at an appropriate position. As used herein, “position” should be interpreted broadly to encompass the concept of command ordering relative to other commands, pending or executing. Resource nodes comprise the ability to manipulate command queue 400. Furthermore, the ordering could indicate that the resource node might never process the command; therefore, the command is not placed in the queue at all. This concept also includes circumstances where the resource node is so loaded it cannot process incoming messages at all. Consequently, the concept of a resource node determining “when” to process a command includes ignoring a resource command message.
In a preferred embodiment, command queue 400 generally represents a first come first serve queue where the resource node modifies command positions based upon QoS, preferred provider information, or command identifier.
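The sketch below shows one possible embodiment of such a queue: importance decides the order, and arrival breaks ties so the queue degenerates to first come, first served within a QoS level. The class structure and field choices are illustrative assumptions, not a required implementation of command queue 400.

import heapq
import itertools

# Sketch of a command queue ordered by importance, with arrival order as the tie-breaker.
class CommandQueue:
    def __init__(self):
        self._heap = []
        self._arrival = itertools.count()

    def place(self, command, importance=0):
        # Negate importance so that larger importance values pop first.
        heapq.heappush(self._heap, (-importance, next(self._arrival), command))

    def next_command(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = CommandQueue()
q.place({"command": "read"}, importance=1)
q.place({"command": "write"}, importance=5)
assert q.next_command()["command"] == "write"   # higher importance is processed first
assert q.next_command()["command"] == "read"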
Processing Resource Command Messages
FIG. 5 represents a set of possible steps employed by a resource node to process resource command messages. Resource consumers send resource command messages to one or more resource nodes; therefore, the steps presented in FIG. 5 occur substantially in parallel when more than one resource node, preferably redundant nodes, receives the resource command message.
At step 500, a resource node receives a resource command message. The resource command message could be addressed to the individual node or addressed to a set of resource nodes collectively. In a preferred embodiment, the resource node receives the resource command message at an IP address, unicast or multicast. It is contemplated that the resource node could be so loaded that it is unable to receive the resource command message. If so, either another resource node processes the message, or the resource consumer attempts to send the resource command message again.
At step 505 the resource node begins the evaluation of the resource command message. The resource node interprets the urgency information within the resource command message, if applicable. Urgency information includes direct or indirect information. Direct information comprises references to a time when the command should be processed. For example, direct information includes stating the resource consumer's desired urgency as an absolute time or a relative time. Indirect information comprises references where the resource node infers the time based upon the urgency information. For example, when the resource command message includes preferred provider information, the resource node can alter when the command will be processed.
At step 510 the resource node continues with the evaluation of the resource command message by interpreting the importance information, if applicable. As in the step for interpreting the urgency information, the importance information includes direct or indirect information. Direct information includes absolute or relative priority information. Indirect information includes QoS information. QoS information informs the resource node to preferentially process some commands over others to enhance performance.
At step 515 the resource node gathers relevant information regarding itself to make a final determination on when the command within the resource command message should be processed. Contemplated resource node information includes loading information, capabilities, previous commands, commands in the command queue, or other resource node centric information.
One ordinarily skilled in the art will recognize the ordering of the previous steps is alterable and does not necessarily have to be followed in the order presented.
At step 520 the resource node combines its resource node information along with the information interpreted from the urgency or importance information to establish when the command in the resource command message should be processed. At step 533 the resource node determines if the command should be processed at all. If not, the resource node silently discards the command message at step 535. In a preferred embodiment, the resource node autonomously determines if the resource command message is discarded and the resource consumer assumes responsibility for ensuring its resource needs are met. It is contemplated the resource node discards the command when it is fully loaded, when its command queue is full, when its resources are reserved, or for other reasons where the resource node does not wish to process the command. Once the command is discarded, the resource node again waits to receive additional resource command messages at step 500.
If the resource node determines that the command should be processed, it determines if the command should be delayed at step 543. The command could be delayed for several reasons, including that the resource node is not a preferred provider or that a resource consumer specifically requests a time for the command to be processed. If the command is to be delayed, at step 545 the resource node determines the amount of time for the command to be delayed. It is also contemplated the resource node could accelerate processing of a command by canceling an executing command in favor of a current command.
After handling the conditions for the command processing, at step 553, the resource node determines if a pending command should be suspended. Pending commands are suspended if the command is no longer valid as determined by information within the command parameters of the command. If the current command identifies itself as part of a group through a command identifier to which a pending command belongs, the resource node can interpret the current command as an instruction to suspend the pending command at step 555. Suspending includes further delaying the pending command from being processed, halting the pending command from being processed, removing the pending command from the command queue, deleting the pending command, or other actions that result in altering the pending command's processing time.
At step 565, the resource node has completed its determination on when the command should be processed and the resource node places the command in the queue of commands. In a preferred embodiment, the command queue ordering is modified by the resource node based upon priority, urgency, or command identifier. One ordinarily skilled in the art will recognize there are many ways to embody a command queue other than those presented. It is contemplated the resource node places the command in an absolute position or a relative position within the command queue. If the command queue has a set number of positions, an absolute position represents a specific index into a standard queue, for example. Examples of absolute positions include the currently executing command position, the first position, or the last position. A relative position represents a position, possibly ordered by time or priority, relative to other commands in the queue.
At step 570 the resource node executes the command when appropriate. Furthermore, if applicable, the resource node will send a resource command response message to the resource consumer at step 575. In a preferred embodiment, the response includes an acknowledgement that the command is processed, requested data, or an indication of ability to process the command. In yet a more preferred embodiment, the resource node reserves at least a portion of the requested allocation of resources for the resource consumer and informs the resource consumer of the indication of its ability. For example, if a resource consumer requests to store 100 gigabytes of data, the resource node could respond with an indication that it is able to store 50 gigabytes. The resource node could also reserve the 50 gigabytes to allow the resource consumer to aggregate other resource nodes' abilities to achieve the 100 gigabytes.
It is also contemplated that a resource command response message could be received by other resource nodes and could be interpreted as an instruction to suspend processing of the command in the resource command message. One ordinarily skilled in the art of software or firmware development will appreciate that step 570 could execute as a parallel thread or task to the message handling steps.
In a preferred embodiment, the resource node steps illustrated in FIG. 5 are stored in a computer-readable medium as a series of instructions to be executed on a processing unit. One ordinarily skilled in the art of firmware or software development will recognize there are many possible ways to implement the steps, all of which fall within the scope of the inventive subject matter. In yet another preferred embodiment, it is contemplated that a plurality of resource nodes processes a resource command message substantially in parallel. In a more preferred embodiment, the plurality of resource nodes processes the resource command message within three seconds of each other.
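As a non-limiting illustration, the decision points of FIG. 5 can be tied together for a single node as in the sketch below. The field names, saturation threshold, fixed hold-off, and load model are assumptions made for this sketch, not a definitive embodiment of the steps.

import time

# Sketch of the FIG. 5 flow for one node: discard, delay, suspend related commands, enqueue.
def handle_resource_command_message(node, msg):
    # Steps 505-515: interpret urgency and importance, gather node-centric information.
    preferred = (msg.get("urgency") or {}).get("preferred_provider")

    # Steps 533-535: silently discard when saturated; the consumer retries or waits
    # for another (redundant) node to answer.
    if node["load"] >= 1.0 or len(node["queue"]) >= node["queue_limit"]:
        return None

    # Steps 543-545: a non-preferred node delays processing so the preferred node usually
    # answers first (a real node would schedule the delay rather than sleep).
    if preferred is not None and preferred != node["name"]:
        time.sleep(0.05)

    # Steps 553-555: a command sharing a command identifier suspends earlier pending
    # commands belonging to the same group.
    if msg.get("command_id") is not None:
        node["queue"][:] = [c for c in node["queue"]
                            if c.get("command_id") != msg["command_id"]]

    # Step 565: place the command; higher importance sorts toward the front of the queue.
    node["queue"].append(msg)
    node["queue"].sort(key=lambda c: -c.get("importance", 0))
    return "queued"

# Usage with an illustrative node state:
node = {"name": "node-A", "load": 0.2, "queue": [], "queue_limit": 8}
handle_resource_command_message(node, {"command": "read", "importance": 3, "command_id": 7})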
Accessing Resource Devices
FIG. 6 represents a set of possible steps employed by a resource consumer and a resource node to enable access to a resource. Resource consumers send resource command messages to a resource device comprising one or more resource nodes. In a preferred embodiment, it is contemplated that one or more resource consumers perform the steps independently of each other, possibly interacting with the same resource nodes.
At step 600 a resource consumer begins the process of constructing a resource command message in a computer readable memory. The resource consumer establishes its desired sense of urgency associated with the command in the resource command message. At step 605 the resource consumer establishes the importance of the command. Both steps 600 and 605 occur, if applicable, for the current resource command message. At step 610, the resource consumer optionally assigns a command identifier that signifies how the current command relates to previous commands or subsequent commands. Steps 600, 605, or 610 can occur in any desirable order.
At step 615 the resource consumer constructs the resource command message based upon the command, command parameters including the command identifier, urgency, or importance.
At step 620 the resource consumer sends the resource command message to a resource device. In a preferred embodiment, the resource command message is formed into one or more packets and sent over a packet switched network. In an especially preferred embodiment, the packets are sent using UDP. Furthermore, when the resource consumer sends the resource command message, it is preferable that the resource consumer sends the message to a group of resource nodes or all of them collectively. In a preferred embodiment, the resource command message is sent via multicast where each resource node is a member of a multicast group whose address represents the resource device. It is contemplated that resource command messages are sent slowly at first to avoid congestion on the communication path coupling the resource consumers and the resource nodes. One ordinarily skilled in the art of network protocols, including TCP, will appreciate a slow start for congestion avoidance.
At step 625, the resource node receives the resource command message and begins processing the message. In a preferred embodiment, multiple resource nodes are able to receive the same resource command message. Furthermore, in a yet more preferable embodiment, multiple resource nodes are equally able to process the command and respond back to the resource consumer who sent the resource command message.
At step 630, the resource node utilizes the urgency, importance, or command identifier information as well as information regarding itself to determine when the command should be processed. The resource node determines if a previous command should be suspended from processing at step 633. If so, at step 635, the previous command is suspended, otherwise the current command is placed in a queue of commands at step 640. Once the command's turn for processing arrives, the resource node executes the command at step 645 and sends an appropriate response at step 650.
In a preferred embodiment, at step 655, the resource consumer could receive multiple responses from multiple resource nodes where the resource nodes offer redundant capabilities. If so, the resource consumer selects a preferred resource node among the plurality of nodes. In an especially preferred embodiment, the preferred resource node is selected based upon which of the redundant nodes responds first. Each resource consumer interacting with a resource device comprising a plurality of resource nodes is able to have a completely different preferred provider. Furthermore, the preferred provider is able to change as conditions in the environment change. Consequently, at any given time, resource consumers experience solid performance, load balancing, or responsiveness naturally without imposing extraneous management.
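A minimal sketch of that selection, assuming the consumer records the responder name and round-trip time for each response, follows. The function name and data shapes are hypothetical.

# Sketch: the consumer treats whichever redundant node answered first as its preferred
# provider, and revises the choice whenever a different node starts responding sooner.
def update_preferred_provider(current, responses):
    # responses: list of (node_name, round_trip_seconds) observed for one command message.
    if not responses:
        return current          # keep the existing choice when nothing new has arrived
    return min(responses, key=lambda r: r[1])[0]

assert update_preferred_provider(None, [("node-A", 0.004), ("node-B", 0.002)]) == "node-B"
assert update_preferred_provider("node-B", []) == "node-B"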
In a preferred embodiment, the steps presented in FIG. 6 are stored in a computer readable medium in the form of instructions to be executed on a processing unit.
Advantages
Resource consumers and resource devices comprising one or more resource nodes realize a number of advantages as a natural result of employing resource command messages.
Resources scale naturally as additional resource devices or resource nodes are added to the system. Each individual resource node focuses on its main responsibilities and on processing resource command messages; therefore, the nodes are autonomous, allowing the system to scale at an atomic level up to the ability of the communication path to handle resource command messages. The bandwidth of the communication path is more efficiently utilized because all traffic is relevant to accessing the resource rather than to system management or maintenance. Furthermore, incremental costs are reduced because, if the resource system requires further capabilities, individual resource nodes can be added as opposed to replicating an entire resource system.
Both performance and responsiveness of the resources increase as additional redundant nodes are added to the system. Resource consumers send resource command messages to the resource nodes collectively, thereby allowing more than one resource node to respond. Given different loading across each resource node, the resource node most able to respond answers the quickest, resulting in a fast response time. In addition, multiple resource nodes, not necessarily redundant nodes, process resource command messages substantially in parallel, providing higher performance to the resource consumer. Resource consumers use importance information to indicate to a resource node the priority that should be considered for processing the command. Importance information aids in the handling of QoS data. Multiple responses are reduced through a slow start for congestion avoidance to limit consumption of bandwidth. In addition, resource consumers each have their own view of the resource nodes and independently select a preferred provider when working with redundant resource nodes to aid in securing the fastest response times and reduced multiple messages.
Load balancing is achieved as a natural result across redundant resource nodes because each node functions independently, allowing it to handle as much traffic as it is designed to handle. A resource consumer has no a priori preference as to which resource node services its requests; however, it can bias which node is preferred in order to reduce multiple responses. Even though a resource consumer may have a preferred provider, it can change that provider based on how other resource nodes respond through continued interactions. Loading is therefore balanced across the nodes. As additional nodes are added to the system to reduce loading, resource consumers can cycle through preferred nodes as required so that multiple resource consumers effectively share the resource nodes.
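A small sketch of this re-selection behavior, assuming the consumer records the latency of each response it observes; the switch margin and the class structure are illustrative assumptions rather than anything prescribed by the disclosure.

```python
# Illustrative sketch only: biasing toward a preferred provider while allowing
# it to change as other redundant nodes respond faster over time.
class ProviderSelector:
    def __init__(self, switch_margin: float = 0.2):
        self.latency = {}              # node address -> most recent latency (seconds)
        self.preferred = None
        self.switch_margin = switch_margin

    def observe(self, node, seconds: float):
        self.latency[node] = seconds
        if self.preferred is None:
            self.preferred = node      # no a priori preference: first responder wins
            return
        # Switch only when another node is clearly faster, to avoid flapping.
        best = min(self.latency, key=self.latency.get)
        if (best != self.preferred and
                self.latency[best] < (1 - self.switch_margin) * self.latency[self.preferred]):
            self.preferred = best
```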
Thus, specific compositions and methods of resource command messages have been disclosed. It should be apparent, however, to those skilled in the art that many more modifications besides those already described are possible without departing from the inventive concepts herein. The inventive subject matter, therefore, is not to be restricted except in the spirit of the disclosure. Moreover, in interpreting the disclosure all terms should be interpreted in the broadest possible manner consistent with the context. In particular the terms “comprises” and “comprising” should be interpreted as referring to the elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps can be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced.

Claims (17)

What is claimed is:
1. An article of manufacture comprising:
a non-transitory computer-readable medium; and
instructions within the computer-readable medium that, when executed, cause a node:
to transmit, in a first multicast message, a first resource command message to a plurality of resource nodes over a communication path;
to receive responses to the first resource command message from the plurality of resource nodes;
to select a preferred provider from the plurality of resource nodes based at least in part on the responses received from the plurality of resource nodes; and
to transmit, in a second multicast message, a second resource command message to the plurality of resource nodes, the second resource command message including a command and an indication of the preferred provider selected from the plurality of resource nodes, said second resource command message to instruct the preferred provider to process the command with a first urgency and to instruct another resource node of the plurality of resource nodes to process the command with a second urgency that is less than the first urgency.
2. The article of manufacture of claim 1, wherein each of the responses from the plurality of resource nodes includes identical content.
3. The article of manufacture of claim 1, wherein the plurality of resource nodes comprise a plurality of redundant resource nodes that manage duplicate resources.
4. The article of manufacture of claim 1, wherein the second resource command message has a command urgency that includes the indication of the preferred provider.
5. The article of manufacture of claim 1, wherein the first resource command message includes a command urgency that indicates a relative time to process a command in the first resource command message.
6. The article of manufacture of claim 1, wherein the first resource command message includes a command urgency that indicates an absolute time to process a command in the first resource command message.
7. The article of manufacture of claim 1, wherein the first resource command message includes a command importance that indicates an absolute priority to process a command in the first resource command message.
8. The article of manufacture of claim 1, wherein the second resource command message includes a command identifier that relates the second resource command message to the first resource command message.
9. A method comprising:
transmitting, in a first multicast message, a first resource command message to a plurality of resource nodes;
receiving responses to the first resource command message from the plurality of resource nodes;
selecting a preferred provider from the plurality of resource nodes based at least in part on the responses received from the plurality of resource nodes; and
transmitting, in a second multicast message, a second resource command message to the plurality of resource nodes, the second resource command message including a command and an indication of the preferred provider selected from the plurality of resource nodes, said second resource command message to instruct the preferred provider to process the command with a first urgency and to instruct another resource node of the plurality of resource nodes to process the command with a second urgency that is less than the first urgency.
10. The method of claim 9, wherein each of the responses from the plurality of resource nodes includes identical content.
11. The method of claim 9, wherein the plurality of resource nodes comprise a plurality of redundant resource nodes that manage duplicate resources.
12. The method of claim 9, further comprising:
providing the second resource command message with a command urgency that includes the indication of the preferred provider.
13. The method of claim 9, further comprising:
providing the first resource command message with a command urgency that indicates a relative time to process a command within the first resource command message.
14. The method of claim 9, further comprising:
providing the first resource command message with a command urgency that indicates an absolute time to process a command within the first resource command message.
15. The method of claim 9, further comprising:
providing the first resource command message with a command importance that indicates an absolute priority to process a command within the first resource command message.
16. The method of claim 9, further comprising:
providing the second resource command message with a command identifier that relates the second resource command message to the first resource command message.
17. The method of claim 9, further comprising:
transmitting a broadcast discovery message;
receiving a discovery response from a first resource node of the plurality of resource nodes, the discovery response including a name of the first resource node; and
determining, based at least in part on the name, an address to be used in sending communications to the first resource node over a communication path.
US11/246,721 2005-10-06 2005-10-06 Resource command messages and methods Active 2031-05-01 US9270532B2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US11/246,721 US9270532B2 (en) 2005-10-06 2005-10-06 Resource command messages and methods
US14/876,743 US11601334B2 (en) 2005-10-06 2015-10-06 Resource command messages and methods
US18/104,264 US11848822B2 (en) 2005-10-06 2023-01-31 Resource command messages and methods
US18/463,189 US20230421447A1 (en) 2005-10-06 2023-09-07 Resource command messages and methods

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US11/246,721 US9270532B2 (en) 2005-10-06 2005-10-06 Resource command messages and methods

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/876,743 Continuation US11601334B2 (en) 2005-10-06 2015-10-06 Resource command messages and methods

Publications (2)

Publication Number Publication Date
US20070083662A1 US20070083662A1 (en) 2007-04-12
US9270532B2 true US9270532B2 (en) 2016-02-23

Family

ID=37912115

Family Applications (4)

Application Number Title Priority Date Filing Date
US11/246,721 Active 2031-05-01 US9270532B2 (en) 2005-10-06 2005-10-06 Resource command messages and methods
US14/876,743 Active 2026-01-22 US11601334B2 (en) 2005-10-06 2015-10-06 Resource command messages and methods
US18/104,264 Active US11848822B2 (en) 2005-10-06 2023-01-31 Resource command messages and methods
US18/463,189 Pending US20230421447A1 (en) 2005-10-06 2023-09-07 Resource command messages and methods

Family Applications After (3)

Application Number Title Priority Date Filing Date
US14/876,743 Active 2026-01-22 US11601334B2 (en) 2005-10-06 2015-10-06 Resource command messages and methods
US18/104,264 Active US11848822B2 (en) 2005-10-06 2023-01-31 Resource command messages and methods
US18/463,189 Pending US20230421447A1 (en) 2005-10-06 2023-09-07 Resource command messages and methods

Country Status (1)

Country Link
US (4) US9270532B2 (en)

US7072986B2 (en) 2001-11-07 2006-07-04 Hitachi, Ltd. System and method for displaying storage system topology
US6775673B2 (en) 2001-12-19 2004-08-10 Hewlett-Packard Development Company, L.P. Logical volume-level migration in a partition-based distributed file system
US6775672B2 (en) 2001-12-19 2004-08-10 Hewlett-Packard Development Company, L.P. Updating references to a migrated object in a partition-based distributed file system
US6772161B2 (en) 2001-12-19 2004-08-03 Hewlett-Packard Development Company, L.P. Object-level migration in a partition-based distributed file system
US20030118053A1 (en) 2001-12-26 2003-06-26 Andiamo Systems, Inc. Methods and apparatus for encapsulating a frame for transmission in a storage area network
US20030152041A1 (en) * 2002-01-10 2003-08-14 Falk Herrmann Protocol for reliable, self-organizing, low-power wireless network for security and building automation systems
US7296050B2 (en) 2002-01-18 2007-11-13 Hewlett-Packard Development Company L.P. Distributed computing system and method
US6934799B2 (en) 2002-01-18 2005-08-23 International Business Machines Corporation Virtualization of iSCSI storage
US20030161312A1 (en) 2002-02-27 2003-08-28 International Business Machines Corporation Apparatus and method of maintaining two-byte IP identification fields in IP headers
US20030182349A1 (en) 2002-03-21 2003-09-25 James Leong Method and apparatus for decomposing I/O tasks in a raid system
US7149769B2 (en) 2002-03-26 2006-12-12 Hewlett-Packard Development Company, L.P. System and method for multi-destination merge in a storage area network
US6683883B1 (en) 2002-04-09 2004-01-27 Sancastle Technologies Ltd. ISCSI-FCP gateway
US6912622B2 (en) 2002-04-15 2005-06-28 Microsoft Corporation Multi-level cache architecture and cache management method for peer-to-peer name resolution protocol
US6895461B1 (en) 2002-04-22 2005-05-17 Cisco Technology, Inc. Method and apparatus for accessing remote storage using SCSI and an IP network
US7188194B1 (en) 2002-04-22 2007-03-06 Cisco Technology, Inc. Session-based target/LUN mapping for a storage area network and associated method
US7146427B2 (en) 2002-04-23 2006-12-05 Lsi Logic Corporation Polling-based mechanism for improved RPC timeout handling
US20030202510A1 (en) 2002-04-26 2003-10-30 Maxxan Systems, Inc. System and method for scalable switch fabric for computer network
US20030204611A1 (en) 2002-04-29 2003-10-30 Mccosh John C. Communications tester and method of using same
US6732171B2 (en) 2002-05-31 2004-05-04 Lefthand Networks, Inc. Distributed network storage system with virtualization
US20050144199A2 (en) 2002-05-31 2005-06-30 Lefthand Networks, Inc. Distributed Network Storage System With Virtualization
US7111303B2 (en) 2002-07-16 2006-09-19 International Business Machines Corporation Virtual machine operating system LAN
JP2004054562A (en) 2002-07-19 2004-02-19 Nec Corp Method of controlling input and output for network file system
US7263108B2 (en) 2002-08-06 2007-08-28 Netxen, Inc. Dual-mode network storage systems and methods
US6741554B2 (en) 2002-08-16 2004-05-25 Motorola Inc. Method and apparatus for reliably communicating information packets in a wireless communication network
US20040047367A1 (en) 2002-09-05 2004-03-11 Litchfield Communications, Inc. Method and system for optimizing the size of a variable buffer
US7243144B2 (en) 2002-09-26 2007-07-10 Hitachi, Ltd. Integrated topology management method for storage and IP networks
US7428584B2 (en) 2002-10-07 2008-09-23 Hitachi, Ltd. Method for managing a network including a storage system
US7152069B1 (en) 2002-10-15 2006-12-19 Network Appliance, Inc. Zero copy writes through use of mbufs
US20040078465A1 (en) 2002-10-17 2004-04-22 Coates Joshua L. Methods and apparatus for load balancing storage nodes in a distributed storage area network system
US7120666B2 (en) 2002-10-30 2006-10-10 Riverbed Technology, Inc. Transaction accelerator for client-server communication systems
US7184424B2 (en) 2002-11-12 2007-02-27 Zetera Corporation Multiplexing storage element interface
US20040181476A1 (en) 2003-03-13 2004-09-16 Smith William R. Dynamic network resource brokering
US20040184455A1 (en) 2003-03-19 2004-09-23 Institute For Information Industry System and method used by a gateway for processing fragmented IP packets from a private network
US7181521B2 (en) 2003-03-21 2007-02-20 Intel Corporation Method and system for selecting a local registry master from among networked mobile devices based at least in part on abilities of the mobile devices
US6904470B1 (en) 2003-03-26 2005-06-07 Emc Corporation Device selection by a disk adapter scheduler
US20050058131A1 (en) 2003-07-29 2005-03-17 Samuels Allen R. Wavefront detection and disambiguation of acknowledgments
WO2005017738A1 (en) 2003-08-13 2005-02-24 Fujitsu Limited Print control method, print controller and print control program
US20060126118A1 (en) 2003-08-13 2006-06-15 Fujitsu Limited Print control method, print control apparatus, and computer product
US7415018B2 (en) 2003-09-17 2008-08-19 Alcatel Lucent IP Time to Live (TTL) field used as a covert channel
US7526577B2 (en) 2003-09-19 2009-04-28 Microsoft Corporation Multiple offload of network state objects with support for failover events
US20080279106A1 (en) 2003-10-03 2008-11-13 3Com Corporation Switching fabrics and control protocols for them
US7436789B2 (en) 2003-10-09 2008-10-14 Sarnoff Corporation Ad Hoc wireless node and network
US20050102522A1 (en) 2003-11-12 2005-05-12 Akitsugu Kanda Authentication device and computer system
US20050166022A1 (en) 2004-01-28 2005-07-28 Hitachi, Ltd. Method and apparatus for copying and backup in storage systems
US20070110047A1 (en) 2004-01-30 2007-05-17 Sun-Kwon Kim Method of collecting and searching for access route of information resource on internet and computer readable medium stored thereon program for implementing the same
US20050198371A1 (en) 2004-02-19 2005-09-08 Smith Michael R. Interface bundles in virtual network devices
US7447209B2 (en) 2004-03-09 2008-11-04 The University Of North Carolina Methods, systems, and computer program products for modeling and simulating application-level traffic characteristics in a network based on transport and network layer header information
JP2005265914A (en) 2004-03-16 2005-09-29 Ricoh Co Ltd Zoom lens, camera and personal digital assistance
US20050246401A1 (en) 2004-04-30 2005-11-03 Edwards John K Extension of write anywhere file system layout
US20050267929A1 (en) 2004-06-01 2005-12-01 Hitachi, Ltd. Method of dynamically balancing workload of a storage system
US20050270856A1 (en) 2004-06-03 2005-12-08 Inphase Technologies, Inc. Multi-level format for information storage
US20050286517A1 (en) 2004-06-29 2005-12-29 Babbar Uppinder S Filtering and routing of fragmented datagrams in a data network
US20060036602A1 (en) 2004-08-13 2006-02-16 Unangst Marc J Distributed object-based storage system that stores virtualization maps in object attributes
US20060077902A1 (en) 2004-10-08 2006-04-13 Kannan Naresh K Methods and apparatus for non-intrusive measurement of delay variation of data traffic on communication networks
US20060133365A1 (en) 2004-12-16 2006-06-22 Shankar Manjunatha Method, system and article for improved network performance by avoiding IP-ID wrap-arounds causing data corruption on fast networks
US20060168345A1 (en) 2005-01-21 2006-07-27 Microsoft Corporation Resource identifier zone translation
US20080181158A1 (en) 2005-03-24 2008-07-31 Nokia Corporation Notification of a Receiving Device About a Forthcoming Transmission Session
US20070101023A1 (en) 2005-10-28 2007-05-03 Microsoft Corporation Multiple task offload to a peripheral device

Non-Patent Citations (26)

* Cited by examiner, † Cited by third party
Title
"Computer Networking Essentials" Copyright 2001, Cisco Systems, Inc., 2001.
"Limited distributed DASD Checksum, a RAID Hybrid" IBM Technical Disclosure Bulletin, vol. 35, No. 4a, Sep. 1992, pp. 404-405, XP000314813 Armonk, NY, USA.
B. Quinn et al. IP Multicast Applications: Challenges and Solutions. Sep. 2001. Network Working Group, RFC 3170.
Bruschi and Rosti, "Secure multicast in wireless networks of mobile hosts: protocols and issues", Mobile Networks and Applications, vol. 7, issue 6 (Dec. 2002), pp. 503-511.
Canadian Office action for 2,632,889 mailed Aug. 6, 2010.
Chavez, A. et al., "Challenger: A Multi-Agent System for Distributed Resource Allocation", Proceedings of the First International Conference on Autonomous Agents, Marina Del Rey, CA, vol. Conf. 1, Feb. 5, 1997, pp. 323-332, ACM, New York, US. *
Chavez, "A Multi-Agent System for Distributed Resource Allocation", MIT Media Lab, XP-002092534.
Chinese Office action for 200580052247.7 mailed Apr. 6, 2010.
Chinese Office action for 200580052247.7 mailed Nov. 18, 2010.
European Office action for 05 804 426.4 mailed Nov. 12, 2008.
European Office action for EP 05 804 426.4 mailed Jul. 31, 2008.
Gibson, Garth; File Server Scaling with Network-Attached Secure Disks; Joint Int'l Conference on Measurement & Modeling of Computer Systems Proceedings of the 1997 ACM SIGMETRICS Int'l Conference on Measurement & Modeling of Computer Systems; pp. 272-284; 1997.
International Preliminary Report on Patentability for PCT/US2005/036026 mailed Apr. 9, 2008.
International Search Report for Application No. PCT/US02/40205 dated May 12, 2003.
International Search Report for PCT/US2005/036026 mailed Jul. 7, 2006.
Japanese Office action for Application No. 2008-535508, mailed Apr. 5, 2011.
Kim et al., "Internet Multicast Provisioning Issues for Hierarchical Architecture", Dept of Computer Science, Chung-Nam National University, Daejeon, Korea, Ninth IEEE International Conference, pp. 401-404., IEEE, published Oct. 12, 2001.
Lee and Thekkath, "Petal: Distributed Virtual Disks", Systems Research Center.
Lee et al. "A Comparison of Two Distributed Disk Systems" Digital Systems Research Center-Research Report SRC-155, Apr. 30, 1998, XP002368118.
Lee et al. "Petal: Distributed Virtual Disks", 7th International Conference on Architectural Support for Programming Languages and Operation Systems. Cambridge, MA., Oct. 1-5, 1996. International Conference on Architectural Support for Programming Languages and Operation Systems (ASPLOS), New, vol. Conf. 7, Oct. 1, 1996, pp. 84-92, XP000681711, ISBN: 0-89791-767-7.
Lin, J.C. and Paul, S., "RMTP: a reliable multicast transport protocol," Proceedings of IEEE INFOCOM '96, vol. 3, pp. 1414-1424, 1996.
PCT International Search Report for PCT App. No. PCT/US05/01542 dated Aug. 25, 2008.
Satran et al. "Internet Small Computer Systems Interface (iSCSI)" IETF Standard, Internet Engineering Task Force, IETF, CH, Apr. 2004, XP015009500, ISSN: 000-0003.
Satran et al., iSCSI, Internet Draft draft-ietf-ips-iscsi-19.txt.
Thomas E. Anderson, Michael D. Dahlin, Jeanna M. Neefe, David A. Patterson, Drew S. Roselli, and Randolph Y. Wang, Serverless network file systems. Dec. 1995. In Proceedings of the 15th Symposium on Operating Systems Principles.
VMWare Workstation User's Manual, VMWare, Inc., p. 1-420, XP002443319; www.vmware.com/pdf/ms32-manual.pdf; p. 18-21; p. 214-216; p. 273-282, 1998-2002.

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10021008B1 (en) * 2015-06-29 2018-07-10 Amazon Technologies, Inc. Policy-based scaling of computing resource groups
US10148592B1 (en) 2015-06-29 2018-12-04 Amazon Technologies, Inc. Prioritization-based scaling of computing resources

Also Published As

Publication number Publication date
US11601334B2 (en) 2023-03-07
US20230421447A1 (en) 2023-12-28
US20230171161A1 (en) 2023-06-01
US20160036641A1 (en) 2016-02-04
US20070083662A1 (en) 2007-04-12
US11848822B2 (en) 2023-12-19

Similar Documents

Publication Publication Date Title
US11848822B2 (en) Resource command messages and methods
US11539753B2 (en) Network-accessible service for executing virtual machines using client-provided virtual machine images
US9355036B2 (en) System and method for operating a system to cache a networked file system utilizing tiered storage and customizable eviction policies based on priority and tiers
US8099615B2 (en) Method and system for power management in a virtual machine environment without disrupting network connectivity
US7826359B2 (en) Method and system for load balancing using queued packet information
US10148744B2 (en) Random next iteration for data update management
EP3198806B1 (en) Network communications using pooled memory in rack-scale architecture
US9264369B2 (en) Technique for managing traffic at a router
US20030236837A1 (en) Content delivery system providing accelerated content delivery
US20030236919A1 (en) Network connected computing system
US20030229702A1 (en) Server network controller including packet forwarding and method therefor
WO2010019629A2 (en) Distributed load balancer
US20110302287A1 (en) Quality of service control
US10810143B2 (en) Distributed storage system and method for managing storage access bandwidth for multiple clients
WO2021120633A1 (en) Load balancing method and related device
US8051213B2 (en) Method for server-directed packet forwarding by a network controller based on a packet buffer threshold
WO2007043999A1 (en) Resource command messages and methods
US10999364B1 (en) Emulation of memory access transport services
Fesehaye (Latest Version) SCDA: SLA-aware Cloud Datacenter Architecture for Efficient Content Storage and Retrieval

Legal Events

Date Code Title Description
AS Assignment

Owner name: ZETERA CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ADAMS, MARK;LUDWIG, THOMAS EARL;FRANK, CHARLES WILLIAM;AND OTHERS;REEL/FRAME:017112/0899

Effective date: 20051005

AS Assignment

Owner name: CORTRIGHT FAMILY TRUST, DATED MAY 13, 1998, CALIFORNIA

Free format text: SECURITY AGREEMENT;ASSIGNOR:ZETERA CORPORATION;REEL/FRAME:019453/0845

Effective date: 20070615

AS Assignment

Owner name: THE FRANK REVOCABLE LIVING TRUST OF CHARLES W. FRANK AND KAREN L. FRANK

Free format text: SECURITY AGREEMENT;ASSIGNOR:ZETERA CORPORATION;REEL/FRAME:019583/0681

Effective date: 20070711

AS Assignment

Owner name: WARBURG PINCUS PRIVATE EQUITY VIII, L.P., NEW YORK

Free format text: SECURITY AGREEMENT;ASSIGNOR:ZETERA CORPORATION;REEL/FRAME:019927/0793

Effective date: 20071001

AS Assignment

Owner name: ZETERA CORPORATION, CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:THE FRANK REVOCABLE LIVING TRUST OF CHARLES W. FRANK AND KAREN L. FRANK;REEL/FRAME:020823/0949

Effective date: 20080418

Owner name: ZETERA CORPORATION, CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:WARBURG PINCUS PRIVATE EQUITY VIII, L.P.;REEL/FRAME:020824/0074

Effective date: 20080418

Owner name: ZETERA CORPORATION, CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CORTRIGHT FAMILY TRUST, DATED MAY 13, 1998;REEL/FRAME:020824/0215

Effective date: 20080418

Owner name: ZETERA CORPORATION, CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CORTRIGHT FAMILY TRUST, DATED MAY 13, 1998;REEL/FRAME:020824/0376

Effective date: 20080418

AS Assignment

Owner name: RATEZE REMOTE MGMT. L.L.C., DELAWARE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ZETERA CORPORATION;REEL/FRAME:020866/0888

Effective date: 20080415

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8