US20120215741A1 - LDAP Replication Priority Queuing Mechanism - Google Patents


Info

Publication number
US20120215741A1
Authority
US
United States
Prior art keywords
update request
update
requests
request
assigned
Prior art date
2006-12-06
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/462,220
Inventor
Jack Poole
Timothy Culver
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AT&T Mobility II LLC
Original Assignee
Cingular Wireless II LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
2012-05-02
Publication date
2012-08-23
Application filed by Cingular Wireless II LLC
Priority to US 13/462,220 (US20120215741A1)
Assigned to CINGULAR WIRELESS II, LLC (assignment of assignors interest; see document for details). Assignors: CULVER, TIMOTHY; POOLE, JACK
Publication of US20120215741A1
Current legal status: Abandoned

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/1095: Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • H04L 67/50: Network services
    • H04L 67/60: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L 67/61: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources, taking into account QoS or priority requirements
    • H04L 67/55: Push-based network services

Abstract

A replication priority queuing system prioritizes replication requests in accordance with a predetermined scheme. An exemplary system includes a Replication Priority Queue Manager that receives update requests and assigns a priority based upon business rules and stores the requests in associated storage means. A Replication Decision Engine retrieves the requests from storage and determines a destination for the update based upon predetermined replication rules, and sends the update to the destination.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This application is a continuation of U.S. application Ser. No. 11/567,234, filed Dec. 6, 2006, the entirety of which is herein incorporated by reference.
  • BACKGROUND
  • Telecommunications network providers must manage large volumes of data. For example, a telecommunications network provider may store millions of records of customer data on database server networks that are heavily accessed and frequently updated. This customer data can include customer identifications, passwords, addresses, preferences, etc. which must be accessible by a variety of different users. A common method for managing such data includes creating directory listings using the Lightweight Directory Access Protocol (LDAP). LDAP is a TCP/IP compliant protocol that provides for the quick access and update of directory listings. LDAP-supported systems have been implemented in a variety of contexts such as web browsers and email programs.
  • These LDAP directory listings are stored on database server networks that typically include multiple server tiers, each tier having one or more servers. For example, a server network can include a master server tier, a HUB server tier, and a proxy server tier, among other tiers. Each server tier is located at different proximities from a user. For example, a proxy server may be located in close proximity to a user whereas a higher level master server may be located further from the user. Generally, the closer the data is stored to a user, the quicker the response to a user query. Thus, in an effort to avoid delays, data that is frequently accessed by a user is typically stored on a server in close proximity to the user. For example, data associated with common user queries can be stored on a proxy server near the user.
  • When a modification to the data is made by a client application, the data is typically updated throughout the server network depending upon the particular characteristics of the update. A typical update involves providing the modification to a master server and then propagating the modification throughout the server network. The master server serves as a gatekeeper for data updates, ensuring data integrity. The modification, also referred to herein as an update, can then be sent to other servers as required. For example, when a user creates a new password for use on the communications network, this change is updated on the server network. If the user has multiple access points to the network, the new password must be made available to servers serving those access points so that the user can log in from all the access points. For example, the new password can be sent to a master server associated with the user along with an update request requesting that the server network be updated with the new password. The master server receives the user's new password and update request and updates the network by propagating the new password throughout the server network as required. This propagation of the modification or data update to other servers is often referred to as “replication.” By replicating data from the master server to other servers, the modification is “pushed” to server tiers closer to the user, thereby enabling the network to provide the user with a quick and accurate response from multiple server locations. Thus, a modification of the network triggers an update request requesting that the server network be updated to reflect the modification. This update request may be referred to as a replication request herein.
  • This update or replication process allows other servers in addition to a master server to respond to user requests. To maintain control over the replication process and ensure data integrity, LDAP servers are typically arranged in a master-slave arrangement. Replication requests are thus first sent to the master server, and then updates are sent to server destinations, or “replicated,” as required.
  • As discussed above, in order to update data throughout the server network, data is replicated across various server tiers so that multiple servers can provide up-to-date data. Problems can arise, however, when data is not efficiently updated at the master server or efficiently replicated to other servers. Under prior art LDAP schemes, updates are processed on a first-in-first-out basis without regard to business decisions or priorities. But many of these updates do not require immediate replication throughout the server network. For example, a modification changing a user's mailing address will not immediately affect a user's use of the telecommunications system. However, the change of a password can significantly affect a user's ability to access the network if it is not immediately replicated. In addition, large numbers of replication requests are frequently stored as batch update requests. These batch requests can require a large amount of resources to process. Under the present first-in-first-out approach, if a large batch file is received at an LDAP server prior to the aforementioned password update request, the batch request would be processed first, thereby resulting in the delay of the replication of the password request. This delay is undesirable as it can affect the user's ability to access the network. Thus, there are a variety of update requests which can be received by the master server for which immediate replication throughout the server network is desirable. Problems can therefore arise with the prior art first-in-first-out replication method. For example, large batch files of low priority may be received prior to more important requests, which will be delayed as the system processes or replicates the earlier batch files.
  • Thus, it is desirable to have an improved method of updating server networks, and more particularly of processing update or replication requests and replicating data on a server network, that overcomes these difficulties and allows high priority requests to be processed in a more timely manner.
  • SUMMARY OF THE INVENTION
  • As required, exemplary embodiments of the present invention are disclosed herein. These embodiments should be viewed with the knowledge that they are only examples of the invention and that the invention may be embodied in many various and alternative forms. The figures are not to scale and some features may be exaggerated or minimized to show details of particular elements, while related elements may have been eliminated to prevent obscuring novel aspects. Well known structures and functions have not been shown or described in detail to avoid unnecessarily obscuring the description of the embodiments of the invention. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present invention.
  • The present invention provides systems and methods for processing database requests in accordance with predetermined schemes, thereby allowing more important or time critical updates to be processed prior to less important updates. By requests it is meant update requests received by a master server that require the replication of data across a network. The requests can include modifications, updates, or other operations but will be referred to generically herein as a request, an update request, or a replication request. In an exemplary embodiment of a system of the invention, a Replication Management System (RMS) is provided which is adapted to prioritize requests based upon a predetermined scheme. An update request can be received by the RMS, assigned a particular priority level, and stored in a designated priority queue. The update request can then be executed in accordance with the priority queue to which it is assigned. For example, an update request can be assigned to one of 5 priority queues and processed according to priority. In one exemplary embodiment of the invention the RMS can comprise a Replication Priority Queue Manager (RPQM) which receives update requests from a server network. The RMS can further comprise storage means for storing update requests and a Replication Decision Engine (RDE) for executing update requests in accordance with the predetermined rules and a request's particular assigned priority level. For example, all requests having the highest priority, designated as priority 1, can be stored in a first queue at a first storage means, requests having the next highest priority, designated as priority 2, can be stored in a second queue at a second storage means, and so on. In the exemplary embodiment discussed herein, five priority levels and five associated queues are employed, but it is contemplated that any number of priority levels or queues can be used.
  • While in the exemplary embodiments the update requests are shown as being stored in different physical storage means such as different databases, update requests could be stored in a single physical structure but flagged as a particular priority in some way. The Replication Decision Engine processes the update requests in accordance with these priority levels. In one exemplary embodiment the RDE simply processes the updates for each storage means sequentially by priority level. For example, the RDE processes all of the priority 1 level requests stored in a first storage means queue, and if all the priority 1 requests have been satisfied (i.e., when the first storage means is empty), then the RDE processes the priority 2 level requests until either another priority 1 request is received or all priority 2 level requests are executed. This process can continue through the various priority levels. For example, once all priority 2 and priority 1 level requests are completed, the RDE can begin processing the next highest priority level, level 3. Thus, updates in the highest priority queue are processed first, and then the updates in the other queues are processed in order of decreasing priority. The RDE system can continuously monitor the receipt of new requests so that if a new higher priority request is received, such as a level 1 priority request, the new higher priority request can be processed prior to an existing lower priority request. It is contemplated that variations of the scheme can be implemented; for example, more tiers can be employed and various methods used for determining a priority scheme. The priority level of an update request can also be dynamic. For example, an update request can increase in priority as it ages.
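  • The five-queue arrangement and priority-ordered processing described above can be illustrated with a short sketch, shown below. The Python code is a minimal illustration only; the class and method names are invented for this sketch and do not come from the patent.

```python
from collections import deque

class ReplicationManagementSystem:
    """Illustrative sketch: one FIFO queue ("storage means") per priority level, 1 = highest."""

    def __init__(self, levels: int = 5):
        # One queue per priority level; real storage means could be separate databases.
        self.queues = {level: deque() for level in range(1, levels + 1)}

    def enqueue(self, update_request, priority: int) -> None:
        # Called by the RPQM after it has decided the request's priority level.
        self.queues[priority].append(update_request)

    def next_request(self):
        # Called by the RDE: take the oldest request from the highest-priority
        # non-empty queue, scanning priority 1 first, then 2, and so on.
        for level in sorted(self.queues):
            if self.queues[level]:
                return self.queues[level].popleft()
        return None  # every queue is empty
```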
  • In an exemplary embodiment of the invention, a method of prioritizing update requests comprises: receiving an update request at a Replication Priority Queue Manager; determining a priority for the request based upon a predetermined prioritization scheme; and storing the update request in a priority queue for retrieval by a Replication Decision Engine. In an exemplary embodiment of a method of the invention, a method of replicating an update or modification is provided which comprises: receiving an update request; determining a priority of the request in accordance with a predetermined scheme; storing the request in a priority queue in accordance with its assigned priority; and retrieving the update request in accordance with its assigned queue. The step of executing the update request in accordance with its assigned queue can comprise determining a destination for the request and sending the request to that destination.
  • An exemplary method of executing the replication requests comprises: retrieving a request from a queue; comparing the request with predetermined replication rules to determine a destination for the request; and sending the update request to the destination.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows a communications system in accordance with an exemplary embodiment of the invention.
  • FIG. 2 shows a Replication Management System in accordance with an exemplary embodiment of the invention.
  • FIG. 3 shows a communications system in accordance with an exemplary embodiment of the invention.
  • FIG. 4 shows a method of processing update requests in accordance with an exemplary embodiment of the invention.
  • FIG. 5 shows a method of executing an update request in accordance with an exemplary embodiment of the invention.
  • FIG. 6 shows a method of replicating in accordance with an exemplary embodiment of the invention.
  • FIG. 7 shows a system flow diagram of a method of processing replication requests in accordance with an exemplary embodiment of the invention.
  • DETAILED DESCRIPTION
  • The embodiments of the present invention disclosed herein are merely examples of the present invention which may be embodied in various and alternative forms. The figures are not to scale and some features may be exaggerated or minimized to show details of particular elements, while related elements may have been eliminated to prevent obscuring novel aspects. Therefore, the structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present invention.
  • Embodiments of the invention provide methods and systems for efficiently processing update requests to a listing directory. The system provides for the processing of update requests in accordance with a predetermined scheme so as to provide for the efficient updating of a directory. Furthermore, the present invention provides systems and methods for replicating updates throughout a server system.
  • In one exemplary embodiment a Replication Management System (RMS) is provided that receives an update request and assigns a queue value to the update request depending upon the predetermined scheme. The update request is then stored in a corresponding queue associated with its assigned priority. Updates are then executed in accordance with the assigned priority. In an exemplary embodiment, a Replication Management System comprises a Replication Priority Queue Manager (RPQM) and a Replication Decision Engine (RDE).
  • Turning to the figures where like reference numerals represent like features throughout, FIG. 1 shows a communications system 100 in accordance with an exemplary embodiment of the invention. The communications system 100 includes a server network 103 adapted to receive update requests 104 from a client application 105. By update request it is meant a request sent to the server network requesting an update to the server network 103 that is typically triggered by a modification made by a client application. The update request can include a request to replicate the modification to servers in addition to a master server and/or all LDAP servers in a server network, and may be referred to herein as a replication request. In the exemplary embodiments shown, the client application 105 can be an email application residing on a cellular telephone, but it should be understood that the client application could be one of a variety of applications.
  • The server network 103 is adapted to receive the update request and communicate with a Replication Management System 107 as will be described in more detail below. The Replication Management System (RMS) 107 receives the update requests 104 from the server network 103 and assigns each update request a priority level based upon a predetermined scheme. Each update request is then assigned to a queue based upon its assigned priority. The update requests are then processed in accordance with the particular queue in which they are located.
  • FIG. 2 shows a Replication Management System (RMS) 107 in accordance with an exemplary embodiment of the invention. In this example, the RMS 107 comprises a Replication Priority Queue Manager (RPQM) 201, a plurality of storage means 203A-203E, and a Replication Decision Engine (RDE) 205. Although shown as outside the server network 103, it is contemplated that the RMS 107 could be part of the server network 103, such as residing on a master server as discussed in more detail below.
  • The RPQM 201 can include an instruction module 210 including instructions for determining a priority of a request in accordance with a predetermined scheme, and a processor 212 communicatively coupled to the instruction module 210 and adapted for executing the instructions. The instruction module can include memory for storing the instructions. Though shown as a processor 212 and an instruction module 210 in FIG. 2, the RPQM 201 can be in the form of hardware, software, or firmware.
  • The RPQM 201 is communicatively coupled to the server network 103 so that when the server network 103 receives an update request, the request can be sent from the server network 103 to the RPQM 201. When the RPQM 201 receives the update request from the server network 103, it determines a priority for the update request in accordance with a predetermined scheme. For example, if the update is of high priority it can be assigned a priority level 1 whereas if the update is of low priority it can be assigned a priority level of 5. The update request is then stored in one of the storage means 203A-E in accordance with its assigned priority level.
  • FIG. 3 shows an exemplary method 300 of operation that can be practiced by the RPQM 201. At step 310 an update request is received by the RPQM 201. At step 320 the RPQM 201 assigns a priority level to the update request in accordance with a predetermined scheme. The particular scheme employed by the RPQM 201 can vary and can be periodically modified. The scheme can be based upon a variety of factors such as, by way of example and not limitation, the field or record which will be updated, the identity of the requester of the update, the time the update is requested, the age of the update request, and the identity of the application being modified. For example, an update request to change a password field can be given a higher priority than an update request to modify a billing address field; an update request from a third party content provider can be given a higher priority than an update request from a party that does not provide content; an update request associated with a premium application can be assigned a higher priority than an update request associated with a non-premium application; and an update request submitted during daytime hours can be assigned a higher priority than a request submitted during nighttime hours. In addition, an update request can be assigned a higher priority when it ages beyond a predetermined time threshold. At step 330 the RPQM 201 stores the update request in the storage means 203A-E in accordance with its assigned priority. This process is repeated for each request received. The RPQM 201 can continue to monitor the status of the requests and can change a request's priority level over time. For example, if the selected scheme includes a rule which increases a request's priority as it ages, the RPQM 201 can monitor the age of the requests and reassign an update request's priority accordingly. For example, an update request that was assigned a priority level 2 and stored in storage means 203B can be moved up to a priority level 1 and moved to storage means 203A if the update request ages beyond a predetermined time threshold. This helps prevent the situation in which a low priority request that has already been received is never processed due to continuously incoming higher priority requests.
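  • A rule-based assignment and an age-based promotion of the kind described in the preceding paragraph could look roughly like the sketch below, which builds on the ReplicationManagementSystem sketch given earlier. The specific rules, the request field names, and the 300-second threshold are hypothetical illustrations, not the patent's actual scheme.

```python
import time

def assign_priority(request: dict) -> int:
    """Hypothetical prioritization scheme: returns 1 (highest) through 5 (lowest)."""
    if request.get("field") == "password":
        return 1  # security-sensitive fields replicate ahead of everything else
    if request.get("requester_type") == "content_provider":
        return 2  # third-party content providers ahead of non-providers
    if request.get("premium_application"):
        return 3  # premium applications ahead of non-premium ones
    hour = time.localtime(request["received_at"]).tm_hour
    return 4 if 8 <= hour < 20 else 5  # daytime requests ahead of nighttime requests

def promote_aged_requests(rms, max_age_seconds: float = 300.0) -> None:
    """Move requests that have aged past a threshold up one priority level, so that
    low-priority requests are not starved by a stream of higher-priority work."""
    now = time.time()
    for level in sorted(rms.queues):
        if level == 1:
            continue  # already in the highest-priority queue
        aged = [r for r in rms.queues[level] if now - r["received_at"] > max_age_seconds]
        for request in aged:
            rms.queues[level].remove(request)
            rms.queues[level - 1].append(request)
```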
  • The Replication Decision Engine (RDE) 205 processes the update requests stored by the RPQM 201. In one exemplary embodiment, the RDE 205 simply executes the update requests in accordance with their storage location. Because the RPQM 201 has stored the update requests in locations (storage means 203A-E) according to their priority, the RDE 205 can simply progress through the different storage means in order of each storage means' priority. For example, storage means 203A can be used to store update requests having priority level 1, storage means 203B can be used to store update requests having priority level 2, and so on. The RDE 205 can then process the requests stored in storage means 203A, then 203B, and so on, effectively processing the update requests in order of priority. It is contemplated that the RDE 205 can also be provided with a scheme for processing the update requests within each storage means 203A-E, such as on a first-in-first-out basis. The RDE 205 can also continually check whether a higher priority update request has been received while the RDE 205 is processing lower level requests. If that is the case, then the RDE 205 can stop executing a lower level request to execute the newly received higher level update request. Of course, the scheme of the RDE 205 should be compatible with the scheme used by the RPQM 201 in assigning the priority levels, to ensure that higher priority requests are processed prior to lower priority requests.
  • FIG. 4 shows an exemplary method of processing the update requests. At step 402 the RDE 205 determines whether the queue for the highest priority update requests, queue 1, is empty. This can be done by determining whether the storage means 203A associated with the priority 1 update requests is empty. If the priority 1 queue is not empty, i.e., there are priority 1 update requests to be processed, the RDE 205 processes those update requests at step 404. If at step 402 it is determined that the priority 1 queue is empty, then at step 406 the RDE 205 determines whether the priority 2 queue is empty. If the priority 2 queue is not empty, then the RDE 205 processes update requests in the priority 2 queue at step 408. After the priority 2 queue updates are processed, the RDE 205 again goes to step 402 to determine whether the priority 1 queue is empty. In this example, the RDE 205 processes all of the priority 2 queue updates before again determining whether the priority 1 queue is empty. It is contemplated, however, that the RDE 205 can be provided with various rules for monitoring whether there are update requests in the priority 1 queue. For example, the RDE 205 can continuously monitor the priority 1 queue and, if an update request is found, immediately execute that update.
  • If no update requests were found at step 406, then at step 410 the RDE 205 determines whether there are any priority 3 updates to process. If so, then at step 412 the priority 3 requests are processed. If not, then at step 414 the RDE 205 determines whether there are priority 4 update requests to process and either processes the priority 4 update requests at step 416 or looks for priority 5 requests at step 418. If priority 5 requests are found, they are processed at step 420. As shown in FIG. 4, the RDE 205 continues to determine whether a higher priority update request is found and, if so, processes that request.
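  • A minimal version of the FIG. 4 loop, again building on the earlier sketch, might look like the following. This variant re-checks the queues from priority 1 downward after every request, which is one of the monitoring behaviors the text contemplates rather than the only possible one.

```python
def process_requests(rms, execute) -> None:
    """Drain the queues in priority order. Because next_request() always scans from
    priority 1 downward, a newly arrived high-priority request is serviced before
    any remaining lower-priority requests."""
    while True:
        request = rms.next_request()
        if request is None:
            break          # queues 1 through 5 are all empty
        execute(request)   # hand the request to the replication step (see the FIG. 5 sketch below)
```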
  • While FIG. 4 shows an exemplary method 400 of how the RDE 205 determines which update request to process, FIG. 5 shows an exemplary method 500 of executing a particular update request. To execute a particular update request the RDE 205 retrieves the request from the storage means 203A-E. The RDE 205 then reads the request and matches the request against predetermined replication rules to determine a final destination for the update request. Depending upon the particular request, the destination can be another server such as, by way of example and not limitation, another LDAP master server, an LDAP hub server, or an LDAP proxy server.
  • Thus, as shown in an exemplary method of processing an update request in FIG. 5, at step 510 the RDE 205 retrieves an update request. At step 520 the RDE 205 determines the desired destination of the update. This can be done by comparing the request with predetermined replication rules. Once the appropriate destination is determined, at step 530 the RDE 205 sends the update to the appropriate destination thereby replicating the update throughout the server network. The present invention thus provides methods and systems for processing update requests and replicating data in accordance with predetermined business rules.
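  • The destination-matching step of FIG. 5 could be sketched as follows. The rule representation used here (a predicate plus a list of destinations) and the function names are assumptions made for illustration, not the patent's replication rule format.

```python
def determine_destinations(request: dict, replication_rules: list) -> list:
    """Compare a request against predetermined replication rules and collect the
    destination servers to which the update should be replicated."""
    destinations = []
    for rule in replication_rules:
        if rule["matches"](request):  # hypothetical per-rule predicate
            destinations.extend(rule["destinations"])
    return destinations

def execute_update(request: dict, replication_rules: list, send) -> None:
    """FIG. 5 in miniature: determine where the update belongs, then push it there."""
    for destination in determine_destinations(request, replication_rules):
        send(request, destination)  # e.g., an LDAP modify issued against that server
```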
  • FIG. 6 shows an exemplary flow diagram of processing a request by a server network. As seen in FIG. 7, a server network 700 can include two branches 710, 712 that serve different locations of the server network 700. Each server branch 710, 712 includes servers that form a part of different server tiers, including an LDAP client server tier 702, an LDAP proxy server tier 704, an LDAP consumer/hub server tier 706, and an LDAP master server tier 708.
  • At step 602 a real time LDAP modification is made at the LDAP client server tier 702 and received by a LDAP proxy server 716 at an LDAP proxy tier 704. This modification includes a replication request. For example, a request is made to send updated data to a LDAP consumer server 720 at the LDAP consumer/hub server tier 706.
  • At step 604 the replication request is sent to the Replication Priority Queue Manager (RPQM) 201 that resides at a primary master server 718. As discussed above, the RPQM 201 receives the replication request at step 606 and at step 608 determines a priority for the replication request using a predetermined scheme of business rules stored in the instruction module 210. At step 610 the update is stored in a storage means 203A-E. In this example it will be assumed that the replication request is assigned a priority level of 2 and stored in storage means 203B.
  • As discussed above, the Replication Decision Engine (RDE) 205 retrieves the request from the storage means 203B in step 612 after processing the updates from the higher priority queue of storage means 203A. At step 614 the RDE 205 matches the replication rules with the replication request to determine the destination associated with the replication request. In this case, because the destination is an LDAP consumer server 722 on the LDAP consumer/hub tier 706, the update is replicated to the LDAP consumer server 722 at step 616. With the update now replicated to LDAP consumer server 722, client LDAP searches 724 can be satisfied as the LDAP proxy server 716 of the LDAP proxy tier 704 performs a search on the updated LDAP consumer server 722 of the LDAP consumer/hub tier 706. While in this exemplary embodiment the update was sent to an LDAP consumer server 722 on the server branch 710 at which the request was received, it is contemplated that the update can be replicated to other servers on the server network 700 such as LDAP hub servers 718, LDAP consumer servers 720, and LDAP proxy servers 716. Furthermore, whereas the Replication Management System 750 is shown associated with a primary master tier server 718, it is contemplated that an RMS 750 could be used on other servers.
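  • Tying the earlier sketches together, a hypothetical end-to-end run might look like the following. It simply exercises the illustrative functions defined above (with a print call standing in for the actual replication step) and does not reproduce the patent's figures or data.

```python
if __name__ == "__main__":
    import time

    rms = ReplicationManagementSystem()

    password_change = {"field": "password", "received_at": time.time()}
    address_change = {"field": "billing_address", "received_at": time.time()}

    # RPQM role: classify each incoming request and store it in the matching queue.
    rms.enqueue(address_change, assign_priority(address_change))    # lands in a low-priority queue
    rms.enqueue(password_change, assign_priority(password_change))  # lands in the priority 1 queue

    # RDE role: drain the queues in priority order. The password change replicates
    # first even though the billing address change arrived earlier.
    process_requests(rms, execute=lambda req: print("replicating", req["field"]))
```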
  • Again, the illustrated and described embodiments of the present invention contained herein are examples set forth for a clear understanding of the invention and are not intended to be interpreted as limitations. Variations and modifications may be made to the above-described embodiments, and the embodiments may be combined, without departing from the scope of the claims.

Claims (20)

1. A method, comprising:
receiving, at a replication management system, a plurality of update requests for replicating data;
assigning, at the replication management system, a first priority level to each update request based upon an age of the update request;
monitoring, at the replication management system, a status of each update request to determine whether the age of the update request has exceeded a predetermined time threshold; and
assigning, at the replication management system, a second, higher priority level to each update request for which it is determined that the age of the update request has exceeded the predetermined time threshold.
2. The method of claim 1, further comprising:
storing, at the replication management system, each update request in one of a plurality of assigned storage locations, wherein the storage locations are assigned to each update request in accordance with a priority level last assigned to the update request.
3. The method of claim 2, wherein if an update request is assigned the second, higher priority level, storing the update request includes moving the update request from a first storage location of the plurality of ordered storage locations to a second storage location.
4. The method of claim 3, wherein the first storage location corresponds to the first priority level, and the second storage location corresponds to the second priority level.
5. The method of claim 2, further comprising processing, at the replication management system, each update request in order based upon the assigned storage locations of the update requests.
6. The method of claim 2, further comprising replicating data according to each of the update requests in order based upon the assigned storage locations of the update requests.
7. The method of claim 2, further comprising determining a destination server among a plurality of servers in a server network that corresponds to each update request.
8. The method of claim 7, further comprising replicating data according to each update request from a master server to the destination server corresponding to the update request.
9. The method of claim 8, wherein the data according to each of the update requests is replicated to the destination servers corresponding to each of the update requests in order based upon the assigned storage locations of the update requests.
10. An apparatus, comprising:
a processor; and
a memory storing instructions which, when executed by the processor, cause the processor to perform a method, comprising:
receiving a plurality of update requests for replicating data;
assigning, to each update request, a first priority level based upon an age of the update request;
monitoring a status of each update request to determine whether the age of the update request has exceeded a predetermined time threshold; and
assigning a second, higher priority level to each update request for which it is determined that the age of the update request has exceeded the predetermined time threshold.
11. The apparatus of claim 10, wherein the instructions further cause the processor to store each update request in one of a plurality of assigned storage locations in accordance with a priority level last assigned to the update request.
12. The apparatus of claim 11, wherein if an update request is assigned the second, higher priority level, the instructions further cause the processor to move the update request from a first storage location of the plurality of assigned storage locations to a second storage location.
13. The apparatus of claim 12, wherein the first storage location corresponds to the first priority level, and the second storage location corresponds to the second, higher priority level.
14. The apparatus of claim 11, wherein data according to each of the update requests is replicated in order based upon the assigned storage locations of the update requests.
15. The apparatus of claim 14, wherein the data according to each of the update requests is replicated to destination servers corresponding to each of the update requests in order based on the assigned storage locations of the update requests.
16. A non-transitory computer readable medium comprising instructions which, when executed by a processor, cause the processor to perform a method comprising:
receiving a plurality of update requests for replicating data;
assigning, to each update request, a first priority level based upon an age of the update request;
monitoring a status of each update request to determine whether the age of the update request has exceeded a predetermined time threshold; and
assigning a second, higher priority level to each update request for which it is determined that the age of the update request has exceeded the predetermined time threshold.
17. The non-transitory computer readable medium of claim 16, wherein the instructions further cause the processor to store each update request in one of a plurality of assigned storage locations in accordance with a priority level last assigned to the update request.
18. The non-transitory computer readable medium of claim 17, wherein if an update request is assigned the second, higher priority level, the instructions further cause the processor to move the update request from a first storage location of the plurality of assigned storage locations to a second storage location.
19. The non-transitory computer readable medium of claim 18, wherein the first storage location corresponds to the first priority level, and the second storage location corresponds to the second, higher priority level.
20. The non-transitory computer readable medium of claim 17, wherein the data according to each of the update requests is replicated to destination servers corresponding to each of the update requests in order based on the assigned storage locations of the update requests.
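The following is a minimal, illustrative sketch of the aging-based priority queuing behavior recited in claims 1-9: update requests start at a first priority level, a monitor moves any request whose age exceeds a predetermined threshold to a second, higher-priority storage location, and requests are then processed and replicated to their destination servers in order of their assigned storage locations. It is not the patented implementation; every identifier in it (ReplicationManager, UpdateRequest, AGE_THRESHOLD_SECONDS, the example DNs and replica names) is hypothetical.

```python
# Illustrative sketch only -- not the patented implementation. All names here
# (ReplicationManager, UpdateRequest, AGE_THRESHOLD_SECONDS, the example DNs
# and replica names) are hypothetical.
import time
from collections import deque
from dataclasses import dataclass, field
from typing import Optional

AGE_THRESHOLD_SECONDS = 5.0  # assumed value for the "predetermined time threshold"


@dataclass
class UpdateRequest:
    entry_dn: str        # LDAP entry whose data is to be replicated
    destination: str     # destination server corresponding to this update request
    received_at: float = field(default_factory=time.monotonic)

    def age(self) -> float:
        return time.monotonic() - self.received_at


class ReplicationManager:
    """Two assigned storage locations: a queue for the first priority level and
    a queue for the second, higher priority level."""

    def __init__(self) -> None:
        self.normal_queue = deque()  # first priority level (age-based)
        self.high_queue = deque()    # second, higher priority level

    def receive(self, request: UpdateRequest) -> None:
        # Each incoming update request starts at the first priority level.
        self.normal_queue.append(request)

    def monitor(self) -> None:
        # Promote any request whose age has exceeded the threshold by moving it
        # from the first storage location to the second.
        remaining = deque()
        while self.normal_queue:
            request = self.normal_queue.popleft()
            if request.age() > AGE_THRESHOLD_SECONDS:
                self.high_queue.append(request)
            else:
                remaining.append(request)
        self.normal_queue = remaining

    def next_request(self) -> Optional[UpdateRequest]:
        # Requests are processed in order of their assigned storage location:
        # the higher-priority queue is drained before the normal one.
        if self.high_queue:
            return self.high_queue.popleft()
        if self.normal_queue:
            return self.normal_queue.popleft()
        return None


def replicate_all(manager: ReplicationManager) -> None:
    # Replicate data for each update request from the master server to the
    # destination server corresponding to that request.
    request = manager.next_request()
    while request is not None:
        print(f"replicating {request.entry_dn} to {request.destination}")
        request = manager.next_request()


if __name__ == "__main__":
    mgr = ReplicationManager()
    mgr.receive(UpdateRequest("uid=alice,ou=users,dc=example,dc=com", "replica-1"))
    mgr.receive(UpdateRequest("uid=bob,ou=users,dc=example,dc=com", "replica-2"))
    mgr.monitor()        # would move aged requests to the higher-priority queue
    replicate_all(mgr)
```

Using two ordinary FIFO queues, one per priority level, keeps the "move between storage locations" step of claims 3-4 and 12-13 explicit; a single heap keyed on priority would be an equally valid, but less literal, rendering of the same idea.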
US13/462,220 2006-12-06 2012-05-02 LDAP Replication Priority Queuing Mechanism Abandoned US20120215741A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/462,220 US20120215741A1 (en) 2006-12-06 2012-05-02 LDAP Replication Priority Queuing Mechanism

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/567,234 US8190561B1 (en) 2006-12-06 2006-12-06 LDAP replication priority queuing mechanism
US13/462,220 US20120215741A1 (en) 2006-12-06 2012-05-02 LDAP Replication Priority Queuing Mechanism

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/567,234 Continuation US8190561B1 (en) 2006-12-06 2006-12-06 LDAP replication priority queuing mechanism

Publications (1)

Publication Number Publication Date
US20120215741A1 true US20120215741A1 (en) 2012-08-23

Family

ID=46092206

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/567,234 Expired - Fee Related US8190561B1 (en) 2006-12-06 2006-12-06 LDAP replication priority queuing mechanism
US13/462,220 Abandoned US20120215741A1 (en) 2006-12-06 2012-05-02 LDAP Replication Priority Queuing Mechanism

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/567,234 Expired - Fee Related US8190561B1 (en) 2006-12-06 2006-12-06 LDAP replication priority queuing mechanism

Country Status (1)

Country Link
US (2) US8190561B1 (en)

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2028813A1 (en) * 2007-07-02 2009-02-25 British Telecommunications public limited company Method of synchronizing intermittently connected mobile terminals
SE533007C2 (en) * 2008-10-24 2010-06-08 Ilt Productions Ab Distributed data storage
US9305069B2 (en) 2010-02-09 2016-04-05 Google Inc. Method and system for uploading data into a distributed storage system
US20110196900A1 (en) * 2010-02-09 2011-08-11 Alexandre Drobychev Storage of Data In A Distributed Storage System
US8380659B2 (en) 2010-02-09 2013-02-19 Google Inc. Method and system for efficiently replicating data in non-relational databases
US8615485B2 (en) 2010-02-09 2013-12-24 Google, Inc. Method and system for managing weakly mutable data in a distributed storage system
US8886602B2 (en) 2010-02-09 2014-11-11 Google Inc. Location assignment daemon (LAD) for a distributed storage system
US8874523B2 (en) * 2010-02-09 2014-10-28 Google Inc. Method and system for providing efficient access to a tape storage system
US8862617B2 (en) * 2010-02-09 2014-10-14 Google Inc. System and method for replicating objects in a distributed storage system
EP2387200B1 (en) 2010-04-23 2014-02-12 Compuverde AB Distributed data storage
US8769138B2 (en) 2011-09-02 2014-07-01 Compuverde Ab Method for data retrieval from a distributed data storage system
US8650365B2 (en) 2011-09-02 2014-02-11 Compuverde Ab Method and device for maintaining data in a data storage system comprising a plurality of data storage nodes
US8645978B2 (en) 2011-09-02 2014-02-04 Compuverde Ab Method for data maintenance
US9021053B2 (en) 2011-09-02 2015-04-28 Compuverde Ab Method and device for writing data to a data storage system comprising a plurality of data storage nodes
US8997124B2 (en) 2011-09-02 2015-03-31 Compuverde Ab Method for updating data in a distributed data storage system
US9626378B2 (en) 2011-09-02 2017-04-18 Compuverde Ab Method for handling requests in a storage system and a storage node for a storage system
CN104081801A (en) * 2012-01-27 2014-10-01 惠普发展公司,有限责任合伙企业 Intelligent edge device
US9081840B2 (en) 2012-09-21 2015-07-14 Citigroup Technology, Inc. Methods and systems for modeling a replication topology
GB201303081D0 (en) * 2013-02-21 2013-04-10 Postcode Anywhere Europ Ltd Common service environment
CN104580306B (en) * 2013-10-21 2018-02-16 北京计算机技术及应用研究所 A kind of multiple terminals backup services system and its method for scheduling task
US10152490B2 (en) * 2015-12-29 2018-12-11 Successfactors, Inc. Sequential replication with limited number of objects
US10768830B1 (en) * 2018-07-16 2020-09-08 Amazon Technologies, Inc. Streaming data service with isolated read channels
US10691378B1 (en) 2018-11-30 2020-06-23 International Business Machines Corporation Data replication priority management
US11086726B2 (en) * 2019-12-11 2021-08-10 EMC IP Holding Company LLC User-based recovery point objectives for disaster recovery
US11494304B2 (en) * 2020-03-13 2022-11-08 International Business Machines Corporation Indicating extents of tracks in mirroring queues based on information gathered on tracks in extents in cache
US11425196B1 (en) * 2021-11-18 2022-08-23 International Business Machines Corporation Prioritizing data replication packets in cloud environment

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3658420B2 (en) * 1994-04-14 2005-06-08 株式会社日立製作所 Distributed processing system
US5956714A (en) * 1997-08-13 1999-09-21 Southwestern Bell Telephone Company Queuing system using a relational database
US6073199A (en) * 1997-10-06 2000-06-06 Cisco Technology, Inc. History-based bus arbitration with hidden re-arbitration during wait cycles
US5907681A (en) * 1997-10-20 1999-05-25 International Business Machines Corporation Intelligent method, apparatus and computer program product for automated refreshing of internet web pages
JP4105293B2 (en) * 1998-06-30 2008-06-25 富士通株式会社 Network monitoring system, monitoring device and monitored device
US6338092B1 (en) * 1998-09-24 2002-01-08 International Business Machines Corporation Method, system and computer program for replicating data in a distributed computed environment
US6708187B1 (en) * 1999-06-10 2004-03-16 Alcatel Method for selective LDAP database synchronization
US7028264B2 (en) * 1999-10-29 2006-04-11 Surfcast, Inc. System and method for simultaneous display of multiple information sources
US7136857B2 (en) * 2000-09-01 2006-11-14 Op40, Inc. Server system and method for distributing and scheduling modules to be executed on different tiers of a network
US20020091815A1 (en) * 2001-01-10 2002-07-11 Center 7, Inc. Methods for enterprise management from a central location using intermediate systems
JP4294879B2 (en) * 2001-02-05 2009-07-15 株式会社日立製作所 Transaction processing system having service level control mechanism and program therefor
US6880028B2 (en) * 2002-03-18 2005-04-12 Sun Microsystems, Inc Dynamic request priority arbitration
US20050144189A1 (en) * 2002-07-19 2005-06-30 Keay Edwards Electronic item management and archival system and method of operating the same
CA2497825A1 (en) * 2002-09-10 2004-03-25 Exagrid Systems, Inc. Method and apparatus for server share migration and server recovery using hierarchical storage management
EP2148475A2 (en) * 2002-11-27 2010-01-27 RGB Networks, Inc. apparatus and method for dynamic channel mapping and optimized scheduling of data packets
GB0229572D0 (en) * 2002-12-19 2003-01-22 Cognima Ltd Quality of service provisioning
US7415467B2 (en) * 2003-03-06 2008-08-19 Ixion, Inc. Database replication system
US7657574B2 (en) * 2005-06-03 2010-02-02 Microsoft Corporation Persistent storage file change tracking
US20070219816A1 (en) * 2005-10-14 2007-09-20 Leviathan Entertainment, Llc System and Method of Prioritizing Items in a Queue
US20080208651A1 (en) * 2006-08-24 2008-08-28 Scott Johnston Lead disbursement system and method
US7774356B2 (en) * 2006-12-04 2010-08-10 Sap Ag Method and apparatus for application state synchronization

Patent Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4620276A (en) * 1983-06-02 1986-10-28 International Business Machines Corporation Method and apparatus for asynchronous processing of dynamic replication messages
US4965716A (en) * 1988-03-11 1990-10-23 International Business Machines Corporation Fast access priority queue for managing multiple messages at a communications node or managing multiple programs in a multiprogrammed data processor
US5701495A (en) * 1993-09-20 1997-12-23 International Business Machines Corporation Scalable system interrupt structure for a multi-processing system
EP0650135A2 (en) * 1993-10-22 1995-04-26 International Business Machines Corporation Data capture variable priority method and system for managing varying processing capacities
US5937205A (en) * 1995-12-06 1999-08-10 International Business Machines Corporation Dynamic queue prioritization by modifying priority value based on queue's level and serving less than a maximum number of requests per queue
US5905876A (en) * 1996-12-16 1999-05-18 Intel Corporation Queue ordering for memory and I/O transactions in a multiple concurrent transaction computer system
US6157963A (en) * 1998-03-24 2000-12-05 Lsi Logic Corp. System controller with plurality of memory queues for prioritized scheduling of I/O requests from priority assigned clients
US6981044B1 (en) * 1998-06-08 2005-12-27 Thomson Licensing S.A. Domestic system resource access priority management method and device for the implementation thereof
US6658485B1 (en) * 1998-10-19 2003-12-02 International Business Machines Corporation Dynamic priority-based scheduling in a message queuing system
US6973464B1 (en) * 1999-11-15 2005-12-06 Novell, Inc. Intelligent replication method
US6816458B1 (en) * 2000-09-13 2004-11-09 Harris Corporation System and method prioritizing message packets for transmission
US6449701B1 (en) * 2000-09-20 2002-09-10 Broadcom Corporation Out of order associative queue in two clock domains
US20020184455A1 (en) * 2000-09-20 2002-12-05 Broadcom Corporation Out of order associative queue in two clock domains
US20020105924A1 (en) * 2001-02-08 2002-08-08 Shuowen Yang Apparatus and methods for managing queues on a mobile device system
US20060221945A1 (en) * 2003-04-22 2006-10-05 Chin Chung K Method and apparatus for shared multi-bank memory in a packet switching system
US20050050578A1 (en) * 2003-08-29 2005-03-03 Sony Corporation And Sony Electronics Inc. Preference based program deletion in a PVR
US8463627B1 (en) * 2003-12-16 2013-06-11 Ticketmaster Systems and methods for queuing requests and providing queue status
US20050135398A1 (en) * 2003-12-22 2005-06-23 Raman Muthukrishnan Scheduling system utilizing pointer perturbation mechanism to improve efficiency
EP1585282A1 (en) * 2004-04-08 2005-10-12 Research In Motion Limited Message send queue reordering based on priority
US20060080486A1 (en) * 2004-10-07 2006-04-13 International Business Machines Corporation Method and apparatus for prioritizing requests for information in a network environment
US20060095610A1 (en) * 2004-10-29 2006-05-04 International Business Machines Corporation Moving, resizing, and memory management for producer-consumer queues
US20080015968A1 (en) * 2005-10-14 2008-01-17 Leviathan Entertainment, Llc Fee-Based Priority Queuing for Insurance Claim Processing

Cited By (73)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170118278A1 (en) * 2012-10-25 2017-04-27 International Business Machines Corporation Work-load management in a client-server infrastructure
US11330047B2 (en) * 2012-10-25 2022-05-10 International Business Machines Corporation Work-load management in a client-server infrastructure
US20160148217A1 (en) * 2013-06-19 2016-05-26 Yong Jin Kim Application sharing service method and apparatus applied thereto
US10353893B2 (en) 2015-05-14 2019-07-16 Deephaven Data Labs Llc Data partitioning and ordering
US9805084B2 (en) 2015-05-14 2017-10-31 Walleye Software, LLC Computer data system data source refreshing using an update propagation graph
US9613018B2 (en) 2015-05-14 2017-04-04 Walleye Software, LLC Applying a GUI display effect formula in a hidden column to a section of data
US9619210B2 (en) 2015-05-14 2017-04-11 Walleye Software, LLC Parsing and compiling data system queries
US9639570B2 (en) 2015-05-14 2017-05-02 Walleye Software, LLC Data store access permission system with interleaved application of deferred access control filters
US10496639B2 (en) 2015-05-14 2019-12-03 Deephaven Data Labs Llc Computer data distribution architecture
US9679006B2 (en) 2015-05-14 2017-06-13 Walleye Software, LLC Dynamic join processing using real time merged notification listener
US9690821B2 (en) 2015-05-14 2017-06-27 Walleye Software, LLC Computer data system position-index mapping
US9710511B2 (en) 2015-05-14 2017-07-18 Walleye Software, LLC Dynamic table index mapping
US9760591B2 (en) 2015-05-14 2017-09-12 Walleye Software, LLC Dynamic code loading
US10540351B2 (en) 2015-05-14 2020-01-21 Deephaven Data Labs Llc Query dispatch and execution architecture
US9836494B2 (en) 2015-05-14 2017-12-05 Illumon Llc Importation, presentation, and persistent storage of data
US9836495B2 (en) 2015-05-14 2017-12-05 Illumon Llc Computer assisted completion of hyperlink command segments
US9886469B2 (en) 2015-05-14 2018-02-06 Walleye Software, LLC System performance logging of complex remote query processor query operations
US9898496B2 (en) 2015-05-14 2018-02-20 Illumon Llc Dynamic code loading
US9934266B2 (en) 2015-05-14 2018-04-03 Walleye Software, LLC Memory-efficient computer system for dynamic updating of join processing
US10003673B2 (en) 2015-05-14 2018-06-19 Illumon Llc Computer data distribution architecture
US10002153B2 (en) 2015-05-14 2018-06-19 Illumon Llc Remote data object publishing/subscribing system having a multicast key-value protocol
US10002155B1 (en) 2015-05-14 2018-06-19 Illumon Llc Dynamic code loading
US10019138B2 (en) 2015-05-14 2018-07-10 Illumon Llc Applying a GUI display effect formula in a hidden column to a section of data
US10069943B2 (en) 2015-05-14 2018-09-04 Illumon Llc Query dispatch and execution architecture
US10176211B2 (en) 2015-05-14 2019-01-08 Deephaven Data Labs Llc Dynamic table index mapping
US10198466B2 (en) 2015-05-14 2019-02-05 Deephaven Data Labs Llc Data store access permission system with interleaved application of deferred access control filters
US10198465B2 (en) 2015-05-14 2019-02-05 Deephaven Data Labs Llc Computer data system current row position query language construct and array processing query language constructs
US10212257B2 (en) 2015-05-14 2019-02-19 Deephaven Data Labs Llc Persistent query dispatch and execution architecture
US11687529B2 (en) 2015-05-14 2023-06-27 Deephaven Data Labs Llc Single input graphical user interface control element and method
US11663208B2 (en) 2015-05-14 2023-05-30 Deephaven Data Labs Llc Computer data system current row position query language construct and array processing query language constructs
US10242041B2 (en) 2015-05-14 2019-03-26 Deephaven Data Labs Llc Dynamic filter processing
US10241960B2 (en) 2015-05-14 2019-03-26 Deephaven Data Labs Llc Historical data replay utilizing a computer system
US10452649B2 (en) 2015-05-14 2019-10-22 Deephaven Data Labs Llc Computer data distribution architecture
US11556528B2 (en) 2015-05-14 2023-01-17 Deephaven Data Labs Llc Dynamic updating of query result displays
US10242040B2 (en) 2015-05-14 2019-03-26 Deephaven Data Labs Llc Parsing and compiling data system queries
US10346394B2 (en) 2015-05-14 2019-07-09 Deephaven Data Labs Llc Importation, presentation, and persistent storage of data
US9612959B2 (en) 2015-05-14 2017-04-04 Walleye Software, LLC Distributed and optimized garbage collection of remote and exported table handle links to update propagation graph nodes
US11514037B2 (en) 2015-05-14 2022-11-29 Deephaven Data Labs Llc Remote data object publishing/subscribing system having a multicast key-value protocol
US9672238B2 (en) 2015-05-14 2017-06-06 Walleye Software, LLC Dynamic filter processing
US9613109B2 (en) 2015-05-14 2017-04-04 Walleye Software, LLC Query task processing based on memory allocation and performance criteria
US10552412B2 (en) 2015-05-14 2020-02-04 Deephaven Data Labs Llc Query task processing based on memory allocation and performance criteria
WO2016183540A1 (en) * 2015-05-14 2016-11-17 Walleye Software, LLC Method and system for data source refreshing
US10565194B2 (en) 2015-05-14 2020-02-18 Deephaven Data Labs Llc Computer system for join processing
US10565206B2 (en) 2015-05-14 2020-02-18 Deephaven Data Labs Llc Query task processing based on memory allocation and performance criteria
US10572474B2 (en) 2015-05-14 2020-02-25 Deephaven Data Labs Llc Computer data system data source refreshing using an update propagation graph
US10621168B2 (en) 2015-05-14 2020-04-14 Deephaven Data Labs Llc Dynamic join processing using real time merged notification listener
US10642829B2 (en) 2015-05-14 2020-05-05 Deephaven Data Labs Llc Distributed and optimized garbage collection of exported data objects
US11263211B2 (en) 2015-05-14 2022-03-01 Deephaven Data Labs, LLC Data partitioning and ordering
US10678787B2 (en) 2015-05-14 2020-06-09 Deephaven Data Labs Llc Computer assisted completion of hyperlink command segments
US10691686B2 (en) 2015-05-14 2020-06-23 Deephaven Data Labs Llc Computer data system position-index mapping
US11249994B2 (en) 2015-05-14 2022-02-15 Deephaven Data Labs Llc Query task processing based on memory allocation and performance criteria
US11238036B2 (en) 2015-05-14 2022-02-01 Deephaven Data Labs, LLC System performance logging of complex remote query processor query operations
US11151133B2 (en) 2015-05-14 2021-10-19 Deephaven Data Labs, LLC Computer data distribution architecture
US10915526B2 (en) 2015-05-14 2021-02-09 Deephaven Data Labs Llc Historical data replay utilizing a computer system
US10922311B2 (en) 2015-05-14 2021-02-16 Deephaven Data Labs Llc Dynamic updating of query result displays
US10929394B2 (en) 2015-05-14 2021-02-23 Deephaven Data Labs Llc Persistent query dispatch and execution architecture
US11023462B2 (en) 2015-05-14 2021-06-01 Deephaven Data Labs, LLC Single input graphical user interface control element and method
CN107239544A (en) * 2017-06-05 2017-10-10 山东浪潮云服务信息科技有限公司 The implementation method and device of a kind of distributed storage
US10866943B1 (en) 2017-08-24 2020-12-15 Deephaven Data Labs Llc Keyed row selection
US11574018B2 (en) 2017-08-24 2023-02-07 Deephaven Data Labs Llc Computer data distribution architecture connecting an update propagation graph through multiple remote query processing
US11449557B2 (en) 2017-08-24 2022-09-20 Deephaven Data Labs Llc Computer data distribution architecture for efficient distribution and synchronization of plotting processing and data
US10783191B1 (en) 2017-08-24 2020-09-22 Deephaven Data Labs Llc Computer data distribution architecture for efficient distribution and synchronization of plotting processing and data
US10657184B2 (en) 2017-08-24 2020-05-19 Deephaven Data Labs Llc Computer data system data source having an update propagation graph with feedback cyclicality
US10198469B1 (en) 2017-08-24 2019-02-05 Deephaven Data Labs Llc Computer data system data source refreshing using an update propagation graph having a merged join listener
US10909183B2 (en) 2017-08-24 2021-02-02 Deephaven Data Labs Llc Computer data system data source refreshing using an update propagation graph having a merged join listener
US11941060B2 (en) 2017-08-24 2024-03-26 Deephaven Data Labs Llc Computer data distribution architecture for efficient distribution and synchronization of plotting processing and data
US11126662B2 (en) 2017-08-24 2021-09-21 Deephaven Data Labs Llc Computer data distribution architecture connecting an update propagation graph through multiple remote query processors
US10241965B1 (en) 2017-08-24 2019-03-26 Deephaven Data Labs Llc Computer data distribution architecture connecting an update propagation graph through multiple remote query processors
US11860948B2 (en) 2017-08-24 2024-01-02 Deephaven Data Labs Llc Keyed row selection
US10002154B1 (en) 2017-08-24 2018-06-19 Illumon Llc Computer data system data source having an update propagation graph with feedback cyclicality
WO2020033150A1 (en) * 2018-08-08 2020-02-13 Micron Technology, Inc. Quality of service control for read operations in memory systems
US11023166B2 (en) 2018-08-08 2021-06-01 Micron Technology, Inc. Quality of service control for read operations in memory systems
US20220092083A1 (en) * 2019-03-04 2022-03-24 Hitachi Vantara Llc Asynchronous storage management in a distributed system

Also Published As

Publication number Publication date
US8190561B1 (en) 2012-05-29

Similar Documents

Publication Publication Date Title
US8190561B1 (en) LDAP replication priority queuing mechanism
US8392586B2 (en) Method and apparatus to manage transactions at a network storage device
US10924438B2 (en) Techniques for handling message queues
US7571168B2 (en) Asynchronous file replication and migration in a storage network
US8862644B2 (en) Data distribution system
US20160292249A1 (en) Dynamic replica failure detection and healing
US8775373B1 (en) Deleting content in a distributed computing environment
US8856068B2 (en) Replicating modifications of a directory
US8364635B2 (en) Temporary session data storage
US10817203B1 (en) Client-configurable data tiering service
US20090182855A1 (en) Method using a hashing mechanism to select data entries in a directory for use with requested operations
CN106464669B (en) Intelligent file prefetching based on access patterns
CN108008918A (en) Data processing method, memory node and distributed memory system
US10579597B1 (en) Data-tiering service with multiple cold tier quality of service levels
CN111200657A (en) Method for managing resource state information and resource downloading system
CN103607424A (en) Server connection method and server system
US11256719B1 (en) Ingestion partition auto-scaling in a time-series database
US20220374407A1 (en) Multi-tenant partitioning in a time-series database
US8417679B1 (en) Fast storage writes
US10242102B2 (en) Network crawling prioritization
CN107786668B (en) Weight caching website method based on CDN (content delivery network)
CN106708636A (en) Cluster-based data caching method and apparatus
US11714566B2 (en) Customizable progressive data-tiering service
US11914590B1 (en) Database request router improving server cache utilization
US11336739B1 (en) Intent-based allocation of database connections

Legal Events

Date Code Title Description
AS Assignment

Owner name: CINGULAR WIRELESS II, LLC, GEORGIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:POOLE, JACK;CULVER, TIMOTHY;REEL/FRAME:028143/0365

Effective date: 20061128

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION