WO2000062199A2 - Database management architecture - Google Patents

Database management architecture Download PDF

Info

Publication number
WO2000062199A2
WO2000062199A2 PCT/US2000/010184
Authority
WO
WIPO (PCT)
Prior art keywords
database
queries
query
processes
database server
Prior art date
Application number
PCT/US2000/010184
Other languages
French (fr)
Other versions
WO2000062199A9 (en)
WO2000062199A3 (en)
Inventor
Jeffrey A. Deverin
Jonathan M. Liss
Avinash Kachhy
Richard Sedlak
Original Assignee
Tyco Submarine Systems, Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tyco Submarine Systems, Ltd. filed Critical Tyco Submarine Systems, Ltd.
Publication of WO2000062199A2 publication Critical patent/WO2000062199A2/en
Publication of WO2000062199A3 publication Critical patent/WO2000062199A3/en
Publication of WO2000062199A9 publication Critical patent/WO2000062199A9/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28Databases characterised by their database models, e.g. relational or object models
    • G06F16/284Relational databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5021Priority
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/506Constraint

Definitions

  • the present invention relates generally to database manager architectures and more particularly to a database manager architecture for use in a high reliability network environment, such as Undersea Network Management Equipment.
  • the current Undersea Network Management Equipment (UNME) database architecture and implementation is inadequate to service the high performance demands of the Fiber Link Around the Globe (FLAG) customer.
  • the database architecture is based on a single-threaded, single process. Queries that yield large result sets and require processing time in excess of one minute create significant bottlenecks, which affect the service of other critical processes. Handling of large data set queries is based on an architecture designed to ensure that the user interface would continue processing while waiting for query results to be returned from the database. Another important reason for this architecture was that the Intelligent Process Control (IPC) Manager process cannot pass data through notifications that exceed any reasonable data structure size (e.g., in excess of 200-300K). Moreover, the existing architecture relies on a dynamic notification number (DNN).
  • DNN dynamic notification number
  • This DNN is requested of the database and inserted into a query notification, which is constructed by the client process.
  • the DNN is then used to connect to the query requester and pass the data back to that process.
  • Another side effect of the current architecture is that the database Server and client processes may deadlock if the client who requested the large query attempts to query the database for another query before the database is finished processing the first query.
  • One possibility for this deadlock might be that the database Server is trying to connect to the client to return the query results at the same time the client is attempting to connect to the database Server.
  • the IPC Manager manages multiple processes servicing the same requests using a distribution mechanism, whereby the process selected for servicing a particular request is simply moved to the end of the IPC Manager process service queue after it is assigned. The next service request is then assigned to the next process in the process service queue.
  • the first problem with this approach is that if only one process is configured to provide the requested service then that process will be immediately assigned the task of providing service. However, the assigned process may be unavailable and the system will exhibit behavior similar to the current problem.
  • the second problem is that any given process may service many types of requests.
  • the process service queue is organized by service number and not by process. Therefore, the same process may be selected to handle a different request although it is busy. Again, the system will exhibit the undesirable behavior discussed above.
  • the third problem is that IPC Manager's current implementation relies on sending the potential service provider a "health check" message. If a potential service provider does not respond to the "health check," which often happens when a process is busy, the IPC Manager will remove that process from all of its management lists and that process will no longer be considered for servicing requests.
  • SMS Start and Monitor Server
  • splitting of the database Server process is not a good solution for the above mentioned problems for essentially the same reasons discussed in relation to the other approaches.
  • the basic premise of this approach is to move each of the large query service providers to their own individual processes. For instance, the alarm summary, event history, and performance history queries will each be handled by their own processes.
  • the complexity of extracting these service providers and creating new processes creates significant management complexity.
  • Second, some queries are requested very rarely and do not warrant the constant processing time required for those programs to sit idling.
  • the process could be split between the three basic queries; however, the performance history query encompasses the Add/Drop Multiplexer (ADM), Submarine Lightwave Terminating Equipment (SLTE), and Power Feed Equipment (PFE) network elements, and these are further subdivided into their performance items.
  • ADM Add/Drop Multiplexer
  • SLTE Submarine Lightwave Terminating Equipment
  • PFE Power Feed Equipment
  • the present invention is therefore directed to the problem of developing an architecture for managing a database process that can process many large queries simultaneously without creating bottlenecks and do so in a highly reliable manner sufficient to meet the requirements of Undersea Network Management Equipment.
  • the present invention solves this problem by splitting the database process into at least two processes, a database server interface process and a database engine process.
  • the present invention is integrated into the existing architecture of the database manager.
  • the database process is split into two processes.
  • the database Server interface remains the same and a separate database Engine process is constructed, to which the database server interface forwards requests for service.
  • the database Server continues to receive the query notifications as before; however, the process just queues the query structure it receives and passes the query to the database Engine when it becomes available.
  • the present invention allows the database Server to service small queries it receives, such as requests for configuration data, which are critical. Also, the present invention eliminates the deadlocking problem and allows a configuration of multiple database Engines to service several large queries simultaneously.
  • the database Engine process of the present invention does not actively receive requests from the database Server or any other process of the present invention.
  • a database apparatus for processing queries from a plurality of processes includes a database server and at least one database engine.
  • the database server interfaces with the plurality of processes, processes predetermined types of queries from any processes within the plurality of processes, provides responses to the predetermined types of queries to those processes within the plurality of processes that originated the predetermined types of queries, and forwards all other queries for further processing.
  • the database engine is coupled to the database server, receives all other queries from the database server, processes them and provides results to those queries directly to the processes that originated them.
  • the database engine can be one or a plurality of database engines. If there is more than one database engine, then the database server assigns the queries to the database engines in a predetermined order, e.g., a first-in-first-out order to each of the plurality of database engines in cyclic fashion.
  • the database engines signal the database server upon completion of processing a query.
  • the database engine signals the database server upon elapse of a predetermined time interval during which the database engine has remained idle. This ensures that the database Server will not inadvertently remove this database engine from its list of available database engines.
  • the database server places all queries not processed by the database server in a queue and assigns the queries in the queue to each of the plurality of database engines in a predetermined manner, e.g., a random assignment, a circular queue, a ring buffer type, etc.
  • a method for interfacing a database process with a plurality of processes that generate a plurality of database queries, each query in a form of a query structure, includes the steps of: a) splitting the database process into a database server process and a database engine process; b) using the database server process to interface with all processes generating the plurality of database queries; c) queuing a query structure received by the database server process; and d) passing the queued query structure received by the database server process to the database engine process when the database engine process becomes available.
  • the database server queues additional queries.
  • an apparatus for interfacing a database process with a plurality of processes that generate a plurality of database queries includes an interfacing means, a queuing means, a forwarding means, and a servicing means.
  • the interfacing means interfaces with all processes generating the plurality of database queries.
  • the queuing means queues a query received by the interface means.
  • the forwarding means forwards the queued queries received by the interfacing means for further servicing.
  • the servicing means services the query forwarded by the forwarding means.
  • the servicing means can include a plurality of servicing means.
  • the above apparatus can include means for assigning a plurality of query structures received from the plurality of processes by the interfacing means to the plurality of servicing means in a predetermined order.
  • each of the plurality of servicing means includes means for requesting work from the assigning means when the servicing means becomes available to service another query.
  • the above apparatus can include means for queueing a query from one of the plurality of processes if all of the plurality of servicing means are not available.
  • the assigning means can include means for creating a list of available servicing means among the plurality of servicing means that are available to service queries.
  • the means for interfacing can include means for servicing small queries without forwarding the small queries.
  • each of the servicing means can include means for requesting work from the assigning means after a predetermined time has elapsed and the servicing means has remained idle throughout the predetermined time.
  • FIG 1 depicts a prior art implementation of the database manager architecture interfacing with two processes, each generating a database query.
  • FIG 2A depicts one embodiment of the database manager architecture according to the present invention, with two processes each generating a database query.
  • FIG 2B depicts another embodiment of the database manager architecture according to the present invention, with two processes each generating a database query and two database engines for processing the queries.
  • FIG 3 depicts a flow chart of one embodiment of the method of the present invention.
  • the database server process 13 is split into two separate processes - a database server interface process 23 and a database engine process 24.
  • the database server interface process 23 acts as an interface to all of the processes 21, 22 generating queries; however, upon receipt of a large query, the database server interface process 23 passes this large query to the database engine process 24. This prevents the above mentioned bottleneck from occurring. If the query is small and critical, the database server 23 quickly services the query to prevent any delay. In this embodiment, the database server 23 queues the queries it receives, and sends them one at a time to the database engine process 24. Upon completion of processing a query, the database engine process transmits the results directly to the process that originated the query.
  • the database engine 24 Upon successful transmission of the results to the originating process, the database engine 24 transmits a request for work signal to the database server interface process 23, which causes the database server interface process 23 to send the next queued query, if any.
  • query A was received first by the database server interface process 23, it would place query A in its queue and then forward query A to the database engine 24 when query A reached the top of the queue and the database engine 24 transmitted a request for work signal to the database server interface process 23.
  • Query B would also be placed in the queue behind query A, and would then be transmitted to the database engine 24 when it reached the top of the queue and the database server interface process 23 received a request for work from the database engine 24.
  • the database engine process completes processing of query A, it forwards the results directly to process A 21 without passing through the database server interface process 23. Only after these results are successfully transmitted to process A, will the database engine 24 request work from the database server interface process 23.
  • the dotted lines in FIG 2A indicate the temporary connection between the processes A 21 and B 22. In this case, the temporary connection to process A 21 occurs before the temporary connection to process B 22.
  • the database engine process 24 can be split into multiple database engine processes 24a, 24b.
  • process A 21 generates a large query
  • process B 22 also generates a large query.
  • the database server interface 23 receives the query from Process A first, and passes this query to the first available database engine, e.g., database engine 24a. While the database server interface 23 is acting on the query from Process A 21, Process B 22 attempts to query the database; however, a brief conflict is encountered. Fortunately, as the time involved for the database server interface 23 to pass a query to the database engine 24a is extremely small, this delay is hardly noticeable. The database server interface 23 then becomes available to act on the query from Process B 22.
  • the database server interface 23 passes the query to the next available database engine 24b, in this case.
  • database engine 24a transmits the results of the query A sent by process A 21 directly to process A without passing the results to the database server interface 23.
  • database engine 24b transmits the results of query B sent by process B 22 directly to process B 22 without sending the results to the database server interface 23.
  • two separate processes 21, 22 with large simultaneous queries received service without creating a bottleneck.
  • the number of database engines is only limited by the ability of the database server to address, and manage these database engines.
  • the database server interface process queues the received queries that it does not process and assigns these queued queries to the database engines in a manner that is predetermined.
  • An example of such an assignment includes forming a circular order of the database engines and assigning the queued queries to the circular order of database engines in a first-in-first-out manner.
  • Another possibility is to assign the queries in a random fashion, which prevents the same queries from being processed by the same database engines.
  • Yet another example is to assign the queries to a predefined order of the database engines, in which the database engines are ordered by capability, high powered ones first. This can ensure the fastest processing at all times if there are database engines with varying levels of capability.
  • the database server interface process queues the query to prevent bottlenecking from occurring.
  • the database server interface process creates a list of available processes. The list is updated every time the database server interface process receives a signal (e.g., a request for work) from one of the database engines that it is available to process additional queries. This occurs on at least two occasions, first, when the database engine has remained idle for a predetermined interval, and second, when the database engine completes processing of a previously assigned query.
  • a signal e.g., a request for work
  • FIG 3 depicts a flow chart of the present invention responding to multiple queries.
  • the database server interface 23 first determines whether the new query is a critical query (i.e., either by a predetermined type, or other indicator) (step 33), and if the new query is a critical query, then the database server determines if the new query is sufficiently small for the database server to process the new query without causing a bottleneck (step 34). If the answers to both of these questions are YES, then the database server processes the new query (step 37) and sends the results to the originating process (step 40).
  • a critical query i.e., either by a predetermined type, or other indicator
  • the database server places the new query in a first-in-first-out (FIFO) queue (step 35) and waits for more queries. If the new query is critical but large, the database server places the new query at the beginning of the queue (step 36). Steps 38-42 depict the assignment of queries from the queue.
  • the database server assigns the queries from the queue as database engines become available (step 38). This determination is possible because the database engines transmit a signal to the database server upon completing processing of a query (step 41), or at the end of a time-out interval during which they have remained idle (step 42). In all cases, the results of the query are transmitted directly to the originating process without passing through the database server, unless of course the database server processed the query (step 40). Summary
  • the present invention outlined herein satisfies these needs without any major rework of existing processes and integrates well into the existing implementation.
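The routing logic of FIG 3 (steps 33-37, with the queueing of steps 35-36) can be sketched in a few lines. This is an illustrative sketch only: the size threshold, query fields, and names below are assumptions, not from the patent.

```python
from collections import deque

SMALL_LIMIT = 1000  # assumed record-count threshold for a "small" query

pending = deque()   # FIFO of queries awaiting a database engine
served_inline = []  # queries the server interface answers itself

def receive(query):
    """Route a new query per steps 33-36 of FIG 3."""
    if query["critical"] and query["size"] <= SMALL_LIMIT:
        served_inline.append(query)    # step 37: server processes it directly
    elif query["critical"]:
        pending.appendleft(query)      # step 36: critical-but-large jumps the queue
    else:
        pending.append(query)          # step 35: ordinary FIFO entry

receive({"name": "config", "critical": True, "size": 10})
receive({"name": "event_history", "critical": False, "size": 20000})
receive({"name": "alarm_summary", "critical": True, "size": 20000})

print([q["name"] for q in served_inline])  # the small critical query
print([q["name"] for q in pending])        # alarm_summary placed ahead of event_history
```

The queued entries are then drained by steps 38-42 as engines report themselves available.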

Abstract

An architecture for a database process that can process many large queries simultaneously without creating bottlenecks uses a separate database server process to act as the interface with the processes generating the database queries, and a database engine process to process the queries. The database server receives the query notifications and queues the query structures it receives; it then passes each query to the database engine when the engine becomes available. The present invention allows the database server to service small queries it receives, such as requests for configuration data, which are critical, and allows a configuration of multiple database engines to service several large queries simultaneously. The database engine process does not actively receive requests from the database server. Rather, it requests work from the database server as it becomes available, employing a time-out mechanism when the database engine is idle. According to another aspect of the present invention, the database processes perform their own load balancing.

Description

DATABASE MANAGEMENT ARCHITECTURE
RELATED APPLICATIONS
The present application is related to U.S. Patent Application No. 08/_ filed concurrently herewith by the same inventor, and entitled "Method and Apparatus for Managing Communications Between Multiple Processes," which has been assigned to the same assignee, and which is hereby incorporated by reference herein including the drawings, as if repeated in its entirety in this application.
BACKGROUND OF THE INVENTION
The present invention relates generally to database manager architectures and more particularly to a database manager architecture for use in a high reliability network environment, such as Undersea Network Management Equipment.
The current Undersea Network Management Equipment (UNME) database architecture and implementation is inadequate to service the high performance demands of the Fiber Link Around the Globe (FLAG) customer. Currently, the database architecture is based on a single threaded, single process. Queries that yield large result sets and require processing time in excess of one minute create significant bottlenecks, which affect the service of other critical processes. Handling of large data set queries is based on an architecture designed to ensure that the user interface would continue processing while waiting for query results to be returned from the database. Another important reason for this architecture was that the Intelligent Process Control (IPC) Manager process cannot pass data through notifications that exceed any reasonable data structure size (e.g., in excess of 200-300K). Moreover, the existing architecture relies on a dynamic notification number (DNN). This DNN is requested of the database and inserted into a query notification, which is constructed by the client process. The DNN is then used to connect to the query requester and pass the data back to that process. Although this solution is efficient and solves the user interface's problem, it is insufficient from the database manager architectural standpoint because the database blocks all other queries, large and small, from receiving service until the database is finished processing the current query. In some cases, large queries can take many minutes to process. For example, an alarm summary query resulting in 20,000 records retrieved can take up to twelve minutes of the database Server's process time.
Another side effect of the current architecture is that the database Server and client processes may deadlock if the client who requested the large query attempts to query the database for another query before the database is finished processing the first query. One possibility for this deadlock might be that the database Server is trying to connect to the client to return the query results at the same time the client is attempting to connect to the database Server.
Possible reasons for the delay in processing time include inefficiencies in the Sybase software, Rogue Wave's DBTools.h++ libraries, third party Structured Query Language (SQL) code, or all three. One must assume that the Sybase and Rogue Wave software is as efficient as possible. Furthermore, there is nothing that can easily be done about these packages other than to implement one's own libraries, which is not cost effective. While attempting to improve the quality and efficiency of the third party SQL code will help somewhat, this cannot eliminate all the potential bottlenecks because it does not address the problem created when several users simultaneously request service from the database Server, for example when several users request alarm summary, event history, or performance history data simultaneously.
There have been several proposals put forth to date in an effort to correct these problems, which include load balancing of the IPC Manager process, multi-threading of the database Server process, forking of the database Server process, and splitting of the database Server into multiple processes to service specific queries. Unfortunately, each of these proposals remains inadequate to solve the above mentioned problems for one reason or another. The following discussion outlines the inadequacies of each of these proposals.
Load Balancing of the IPC Manager
This approach will certainly alleviate bottlenecks for the database architecture as well as the entire UNME software system. The basic premise is that the IPC Manager manages multiple processes servicing the same requests using a distribution mechanism, whereby the process selected for servicing a particular request is simply moved to the end of the IPC Manager process service queue after it is assigned. The next service request is then assigned to the next process in the process service queue. Unfortunately, there are many problems with this approach. The first problem with this approach is that if only one process is configured to provide the requested service then that process will be immediately assigned the task of providing service. However, the assigned process may be unavailable and the system will exhibit behavior similar to the current problem.
The second problem is that any given process may service many types of requests. According to the above proposal, the process service queue is organized by service number and not by process. Therefore, the same process may be selected to handle a different request although it is busy. Again, the system will exhibit the undesirable behavior discussed above.
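The distribution mechanism and these first two problems can be sketched in a few lines; the process and service names below are illustrative, not from the patent. Because the service queue is keyed by service number rather than by process, the rotation for one service has no knowledge that the same process was just assigned work under a different service number:

```python
from collections import deque

# One rotation of candidate processes per service number.
service_queues = {
    "alarm_summary": deque(["proc_a", "proc_b"]),
    "event_history": deque(["proc_a"]),  # only one process offers this service
}

busy = set()

def assign(service):
    """Round-robin: take the head process and move it to the tail.
    Note there is no check of whether that process is actually free."""
    q = service_queues[service]
    proc = q[0]
    q.rotate(-1)
    return proc

# proc_a is assigned an alarm summary and becomes busy...
busy.add(assign("alarm_summary"))
# ...yet the event-history rotation still selects proc_a, because each
# service number keeps its own independent queue (and it is the sole provider).
second = assign("event_history")
print(second, second in busy)  # proc_a True
```

The already-busy process is re-selected, reproducing the bottleneck the scheme was meant to remove.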
The third problem is that IPC Manager's current implementation relies on sending the potential service provider a "health check" message. If a potential service provider does not respond to the "health check," which often happens when a process is busy, the IPC Manager will remove that process from all of its management lists and that process will no longer be considered for servicing requests.
Multi-threading of the Database Server Process
While on its surface multi-threading of the database Server process appears ideal for handling the above mentioned problems, it has its limitations. The basic premise of this approach is that UNIX™ threads are created and run whenever a query request is received. This approach also yields some problems. One problem is that UNIX™ threads are considered light weight processes (LWP). The problem here is that "light weight" is probably insufficient.
Forking of the Database Server Process
Forking of the database Server process is also a potential solution for the above mentioned problems. The basic premise of this approach is not unlike the multithreading approach, except that the UNIX™ system call fork is used instead of the thread libraries. Although this approach has not been tried for the database Server process, it has been tested in the Session Manager process successfully. However, the one problem found with using fork is that the Start and Monitor Server (SMS) software can no longer monitor and manage the forked process. SMS monitors processes according to their UNIX™ process identification (PID) numbers. If a process calls fork, then two PIDs will exist for that process, which can cause the SMS difficulty when it attempts to shut down a process normally.
Splitting of the Database Server Process
As with the above mentioned solutions, splitting of the database Server process is not a good solution for the above mentioned problems for essentially the same reasons discussed in relation to the other approaches. The basic premise of this approach is to move each of the large query service providers to their own individual processes. For instance, the alarm summary, event history, and performance history queries will each be handled by their own processes. However, the complexity of extracting these service providers and creating new processes creates significant management complexity. First, many more processes and their associated source code must be maintained under configuration control. Second, some queries are requested very rarely and do not warrant the constant processing time required for those programs to sit idling. Third, if two clients request the same service in a small time period, then the same bottleneck will occur that this solution was trying to avoid. And finally, where should the split in the database Server process be made?
For example, the process could be split between the three basic queries; however, the performance history query encompasses the Add/Drop Multiplexer (ADM), Submarine Lightwave Terminating Equipment (SLTE), and Power Feed Equipment (PFE) network elements, and these are further subdivided into their performance items. Where should the dividing stop, if the process is not subdivided into those fourteen separate queries that are considered large queries?
The present invention is therefore directed to the problem of developing an architecture for managing a database process that can process many large queries simultaneously without creating bottlenecks and do so in a highly reliable manner sufficient to meet the requirements of Undersea Network Management Equipment.
SUMMARY OF THE INVENTION
The present invention solves this problem by splitting the database process into at least two processes, a database server interface process and a database engine process.
The present invention is integrated into the existing architecture of the database manager. The database process is split into two processes. The database Server interface remains the same and a separate database Engine process is constructed, to which the database server interface forwards requests for service. The database Server continues to receive the query notifications as before; however, the process just queues the query structure it receives and passes the query to the database Engine when it becomes available. In addition, the present invention allows the database Server to service small queries it receives, such as requests for configuration data, which are critical. Also, the present invention eliminates the deadlocking problem and allows a configuration of multiple database Engines to service several large queries simultaneously.
The database Engine process of the present invention does not actively receive requests from the database Server or any other process of the present invention. Rather, the database Engine process of the present invention requests work from the database Server as it becomes available, employing a time-out mechanism when the database Engine is idle. This aspect of the present invention prevents the IPC Manager from removing the database Engine process from its process queues because the IPC Manager will not attempt to connect to the process. The database Engine process of the present invention eliminates the need to fork or multi-thread the database Server because the work is accomplished by a separate process. Furthermore, according to another aspect of the present invention, it is not necessary to invest time to add a load balancing feature to IPC Manager because the database processes perform their own load balancing.
According to the present invention, a database apparatus for processing queries from a plurality of processes includes a database server and at least one database engine.
The database server interfaces with the plurality of processes, processes predetermined types of queries from any of those processes, provides responses to those queries to the processes that originated them, and forwards all other queries for further processing. The database engine is coupled to the database server; it receives all other queries from the database server, processes them, and provides the results directly to the processes that originated them.
According to another aspect of the present invention, the database engine can be one or a plurality of database engines. If there is more than one database engine, then the database server assigns the queries to the database engines in a predetermined order, e.g., a first-in-first-out order to each of the plurality of database engines in cyclic fashion.
According to yet another aspect of the present invention, the database engines signal the database server upon completion of processing a query. In addition, a database engine signals the database server upon elapse of a predetermined time interval during which the database engine has remained idle. This ensures that the database server will not inadvertently remove that database engine from its list of available database engines. According to another aspect of the present invention, the database server places all queries not processed by the database server in a queue and assigns the queries in the queue to each of the plurality of database engines in a predetermined manner, e.g., a random assignment, a circular queue, a ring buffer, etc.
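The server-side behaviour summarized above can be illustrated with a short sketch. This is not the patented implementation: the class and method names are invented for the example, and the `is_small_critical` test is left abstract because the patent leaves the "predetermined types" of queries open (requests for configuration data being one named example).

```python
from collections import deque

class DatabaseServerInterface:
    """Illustrative sketch of the split: small, critical queries
    (e.g. requests for configuration data) are answered directly by
    the server; all other queries are queued for a separate
    database engine process."""

    def __init__(self, is_small_critical, serve_directly):
        self.is_small_critical = is_small_critical
        self.serve_directly = serve_directly
        self.pending = deque()  # FIFO queue of queries awaiting an engine

    def receive(self, query):
        if self.is_small_critical(query):
            # Serviced by the server itself, so critical requests are
            # never stuck behind a long-running query.
            return self.serve_directly(query)
        self.pending.append(query)  # held until an engine asks for work
        return None

    def next_work(self):
        # Called when a database engine requests work; oldest query first.
        return self.pending.popleft() if self.pending else None
```

In use, a configuration request is answered at once, while large queries simply accumulate until an engine pulls them off the queue.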
According to the present invention, a method for interfacing a database process with a plurality of processes that generate a plurality of database queries, each query in a form of a query structure, includes the steps of: a) splitting the database process into a database server process and a database engine process; b) using the database server process to interface with all processes generating the plurality of database queries; c) queuing a query structure received by the database server process; and d) passing the queued query structure received by the database server process to the database engine process when the database engine process becomes available. In this case, if there are no database engines available, the database server queues additional queries.
According to the present invention, an apparatus for interfacing a database process with a plurality of processes that generate a plurality of database queries includes an interfacing means, a queuing means, a forwarding means, and a servicing means. The interfacing means interfaces with all processes generating the plurality of database queries. The queuing means queues a query received by the interfacing means. The forwarding means forwards the queued queries received by the interfacing means for further servicing. Finally, the servicing means services the queries forwarded by the forwarding means. In this embodiment, the servicing means can include a plurality of servicing means.
According to one aspect of the present invention, the above apparatus can include means for assigning a plurality of query structures received from the plurality of processes by the interfacing means to the plurality of servicing means in a predetermined order.
According to another aspect of the present invention, each of the plurality of servicing means includes means for requesting work from the assigning means when the servicing means becomes available to service another query.
According to yet another aspect of the present invention, the above apparatus can include means for queueing a query from one of the plurality of processes if all of the plurality of servicing means are not available.
According to a further aspect of the present invention, the assigning means can include means for creating a list of available servicing means among the plurality of servicing means that are available to service queries. According to another aspect of the present invention, the means for interfacing can include means for servicing small queries without forwarding the small queries.
Finally, according to yet another aspect of the present invention, each of the servicing means can include means for requesting work from the assigning means after a predetermined time has elapsed and the servicing means has remained idle throughout the predetermined time.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG 1 depicts a prior art implementation of the database manager architecture interfacing with two processes, each generating a database query.
FIG 2A depicts one embodiment of the database manager architecture according to the present invention, with two processes each generating a database query.
FIG 2B depicts another embodiment of the database manager architecture according to the present invention, with two processes each generating a database query and two database engines for processing the queries.
FIG 3 depicts a flow chart of one embodiment of the method of the present invention.
DETAILED DESCRIPTION

As shown in FIG 1, the prior art implementation of the architecture of the database server 13 causes a bottleneck when two processes 11, 12 attempt to query the database server 13 with large queries. Process A 11 first sends its query to the database server 13, which then attempts to service the query. If the query is large, this can take several minutes, during which time process B 12 cannot receive a response to its query, even if its query is critical and requires very little processing time.
Referring to FIG 2A, according to the present invention, the database server process 13 is split into two separate processes: a database server interface process 23 and a database engine process 24. The database server interface process 23 acts as an interface to all of the processes 21, 22 generating queries; however, upon receipt of a large query, the database server interface process 23 passes the query to the database engine process 24. This prevents the above-mentioned bottleneck from occurring. If a query is small and critical, the database server interface 23 services it quickly to prevent any delay.

In this embodiment, the database server interface 23 queues the queries it receives and sends them one at a time to the database engine process 24. Upon completion of processing a query, the database engine process transmits the results directly to the process that originated the query. Upon successful transmission of the results to the originating process, the database engine 24 transmits a request-for-work signal to the database server interface process 23, which causes the database server interface process 23 to send the next queued query, if any.

In the above example, as query A was received first by the database server interface process 23, it would be placed in the queue and then forwarded to the database engine 24 when it reached the top of the queue and the database engine 24 transmitted a request-for-work signal. Query B would be placed in the queue behind query A, and would then be transmitted to the database engine 24 when it reached the top of the queue and the database server interface process 23 received a request for work from the database engine 24. Once the database engine process completes processing of query A, it forwards the results directly to process A 21 without passing through the database server interface process 23.
Only after these results are successfully transmitted to process A will the database engine 24 request work from the database server interface process 23. The dotted lines in FIG 2A indicate the temporary connections to processes A 21 and B 22. In this case, the temporary connection to process A 21 occurs before the temporary connection to process B 22.
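The request-for-work mechanism just described — the engine pulls work rather than being connected to, re-requests work after an idle time-out, and returns results directly to the originator — can be modelled with a short sketch. The function and parameter names are invented for this example, and a thread with `queue.Queue` objects stands in for the separate processes and inter-process communication of the patent.

```python
import queue
import threading

def engine_loop(work_queue, request_work, idle_timeout=5.0, stop=None):
    """Illustrative database engine loop: it pulls queries from the
    server and signals availability, instead of being connected to."""
    request_work()  # initial request-for-work signal to the server
    while stop is None or not stop.is_set():
        try:
            query, reply_to = work_queue.get(timeout=idle_timeout)
        except queue.Empty:
            # Idle time-out elapsed: signal the server again so it keeps
            # this engine on its list of available engines.
            request_work()
            continue
        result = f"result of {query}"  # stand-in for real query processing
        reply_to.put(result)           # results go directly to the originator
        request_work()                 # only now ask the server for more work
```

Note that the server is never contacted mid-query: the engine asks for work only after delivering its results or after sitting idle for the time-out interval.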
Referring to FIG 2B, according to another aspect of the present invention, the database engine process 24 can be split into multiple database engine processes 24a, 24b. In this architecture, process A 21 generates a large query, and process B 22 also generates a large query. The database server interface 23 receives the query from Process A first and passes this query to the first available database engine, e.g., database engine 24a. While the database server interface 23 is acting on the query from Process A 21, Process B 22 attempts to query the database and encounters a brief conflict. Fortunately, because the time needed for the database server interface 23 to pass a query to the database engine 24a is extremely small, this delay is hardly noticeable. The database server interface 23 then becomes available to act on the query from Process B 22. Upon recognizing that the query from Process B 22 is also a large query, the database server interface 23 passes it to the next available database engine, in this case database engine 24b. As soon as each of the database engines 24a, 24b completes its processing, it transmits the results directly to the process that originated the query. Thus, as shown by dotted lines, database engine 24a transmits the results of query A sent by process A 21 directly to process A without passing the results to the database server interface 23. Similarly, database engine 24b transmits the results of query B sent by process B 22 directly to process B 22 without sending the results to the database server interface 23. Thus, two separate processes 21, 22 with large simultaneous queries receive service without creating a bottleneck. It should be noted that while only two database engines are depicted for simplicity, the number of database engines is limited only by the ability of the database server to address and manage these database engines.
According to the present invention, if there are multiple database engines, the database server interface process queues the received queries that it does not itself process and assigns these queued queries to the database engines in a predetermined manner. One example of such an assignment is to form a circular order of the database engines and assign the queued queries to that circular order in a first-in-first-out manner. Another possibility is to assign the queries randomly, which prevents the same queries from always being handled by the same database engines. Yet another example is to assign the queries to a predefined order of the database engines, ordered by capability with the most powerful first. This can ensure the fastest processing at all times when the database engines have varying levels of capability.
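The three assignment orders mentioned above might be expressed as interchangeable selection strategies, sketched below. This is illustrative only: the engine representation (plain strings, or dicts with a `capability` field) and the function names are assumptions made for the example.

```python
import random
from itertools import cycle

def round_robin(engines):
    """Assign queries to engines in a fixed circular (cyclic FIFO) order."""
    ring = cycle(engines)
    return lambda: next(ring)

def random_assignment(engines):
    """Assign each query to a randomly chosen engine, so the same
    queries are not always handled by the same engines."""
    return lambda: random.choice(engines)

def capability_order(engines, is_available):
    """Walk a list pre-sorted by capability (most powerful first) and
    pick the first engine that is currently available."""
    ranked = sorted(engines, key=lambda e: e["capability"], reverse=True)
    return lambda: next(e for e in ranked if is_available(e))
```

Because each strategy is just a "pick the next engine" callable, the server's dispatch code need not change when the predetermined manner of assignment changes.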
If no database engine is available, the database server interface process queues the query to prevent a bottleneck from occurring. To properly assign the queries to the database engines, the database server interface process creates a list of available engine processes. The list is updated every time the database server interface process receives a signal (e.g., a request for work) from one of the database engines indicating that it is available to process additional queries. This occurs on at least two occasions: first, when the database engine has remained idle for a predetermined interval, and second, when the database engine completes processing of a previously assigned query.
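The availability list and its update-on-signal behaviour might be modelled as follows. The `EngineRegistry` name and its methods are invented for this sketch, and a direct method call stands in for the inter-process signal described in the patent.

```python
from collections import deque

class EngineRegistry:
    """Sketch of the availability list described above: engines enter
    the list when they signal (request work), and leave it when a
    queued query is assigned to them."""

    def __init__(self):
        self.available = deque()  # engines currently free to take work
        self.backlog = deque()    # queries waiting for an engine

    def on_engine_signal(self, engine):
        # Sent on (at least) two occasions: the engine completed a
        # query, or its idle time-out elapsed. Idle re-signals must
        # not duplicate the engine on the list.
        if engine not in self.available:
            self.available.append(engine)
        self._assign()

    def submit(self, query):
        # If no engine is available, the query simply waits in the
        # backlog instead of bottlenecking at a busy engine.
        self.backlog.append(query)
        self._assign()

    def _assign(self):
        while self.backlog and self.available:
            self.available.popleft().process(self.backlog.popleft())
```

An engine is popped off the available list when it receives a query and reappears only when it next signals, which mirrors the request-for-work cycle described above.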
FIG 3 depicts a flow chart of the present invention responding to multiple queries. When a new query is received by the database server interface 23 (step 32), the database server interface 23 first determines whether the new query is a critical query (i.e., by a predetermined type or other indicator) (step 33). If the new query is critical, the database server then determines whether it is sufficiently small for the database server to process without causing a bottleneck (step 34). If the answers to both of these questions are YES, the database server processes the new query (step 37) and sends the results to the originating process (step 40). If the new query is not critical, the database server places it in a first-in-first-out (FIFO) queue (step 35) and waits for more queries. If the new query is critical but large, the database server places it at the beginning of the queue (step 36).

Steps 38-42 depict the assignment of queries from the queue. The database server assigns the queries from the queue as database engines become available (step 38). This determination is possible because the database engines transmit a signal to the database server upon completing processing of a query (step 41), or at the end of a time-out interval during which they have remained idle (step 42). In all cases, the results of a query are transmitted directly to the originating process without passing through the database server, unless of course the database server itself processed the query (step 40).

Summary
Because of the high-availability requirements for the UNME software, the efficiency of the database architecture must be increased. The present invention outlined herein satisfies this need without any major rework of existing processes and integrates well into the existing implementation.

Claims

WHAT IS CLAIMED IS:
1. An apparatus for processing queries to a database from a plurality of processes, comprising: a) a database server interfacing with the plurality of processes, processing predetermined types of queries from any processes within the plurality of processes, providing responses to the predetermined types of queries to those processes within the plurality of processes that originated the predetermined types of queries, and forwarding all other queries for further processing; and b) at least one database engine being coupled to the database server, receiving said all other queries from the database server, processing said all other queries received from the database server and providing results to said all other queries directly to those originating processes within the plurality of processes that originated said all other queries.
2. The apparatus according to claim 1, wherein said at least one database engine further comprises a plurality of database engines, each of said database engines being coupled to the database server, said database server assigning said all other queries to the plurality of database engines in a predetermined order.
3. The apparatus according to claim 2, wherein the predetermined order comprises a first-in-first-out order to each of the plurality of database engines in cyclic fashion.
4. The apparatus according to claim 2, wherein each of the plurality of database engines signals the database server upon completion of processing a query.
5. The apparatus according to claim 2, wherein each of the plurality of database engines signals the database server upon elapse of a predetermined time interval during which said each of the plurality of database engines has been idle.
6. The apparatus according to claim 1, wherein the predetermined query type includes a critical query.
7. The apparatus according to claim 1, wherein the predetermined query type includes a small, critical query.
8. The apparatus according to claim 1, wherein the predetermined query type includes a request for configuration information.
9. The apparatus according to claim 2, wherein the database server places all queries not processed by the database server in a queue and assigns the queries in the queue to each of the plurality of database engines in a predetermined manner.
10. The apparatus according to claim 9, wherein the predetermined manner includes a random assignment.
11. A method for interfacing a database process with a plurality of processes that generate a plurality of database queries, each query in a form of a query structure, comprising the steps of: a) splitting the database process into a database server interface process and a database engine process; b) using the database server interface process to interface with all processes generating the plurality of database queries; c) queuing a query structure received by the database server interface process; and d) passing the queued query structure received by the database server interface process to the database engine process when the database engine process becomes available.
12. The method according to claim 11, further comprising the step of splitting the database engine process into a plurality of database engine processes.
13. The method according to claim 12, further comprising the step of assigning a plurality of query structures received from the plurality of processes to the plurality of database engine processes in a predetermined order.
14. The method according to claim 13, wherein the predetermined order includes a random assignment.
15. The method according to claim 13, wherein the predetermined order includes a circular order.
16. The method according to claim 12, further comprising the step of sending a signal to the database server process from an available database engine process of the plurality of database engine processes when the available database engine process becomes available to process another query.
17. The method according to claim 12, further comprising the step of queueing a query from one of the plurality of processes if all of the plurality of database engine processes are not available.
18. The method according to claim 12, further comprising the step of creating a list of available database engine processes within the plurality of database engine processes that are available to process queries.
19. The method according to claim 11, further comprising the step of servicing small queries received by the database server process with the database server process.
20. The method according to claim 11, further comprising the step of servicing a request for configuration data received by the database server process with the database server process.
21. The method according to claim 11, further comprising the step of splitting the database engine process into a plurality of database engine processes to service several queries simultaneously.
22. The method according to claim 11, further comprising the step of requesting work from the database server process by the database engine process as the database engine process becomes available.
23. The method according to claim 11, further comprising the step of requesting work from the database server process by the database engine process after a predetermined time has elapsed and the database engine process is idle.
24. The method according to claim 11, further comprising the step of requesting work from the database server process by an idle database engine process of the plurality of database engine processes after a predetermined time has elapsed.
25. An apparatus for interfacing a database process with a plurality of processes that generate a plurality of database queries, comprising: a) means for interfacing with all processes generating the plurality of database queries; b) means for queuing a query received by the interface means; c) means for forwarding the queued query received by the interfacing means for further servicing; and d) means for servicing a query forwarded by the forwarding means.
26. The apparatus according to claim 25, wherein the servicing means comprises a plurality of means for servicing query structures.
27. The apparatus according to claim 26, further comprising means for assigning a plurality of query structures received from the plurality of processes by the interfacing means to the plurality of servicing means in a predetermined order.
28. The apparatus according to claim 27, wherein each of the plurality of servicing means includes means for requesting work from the assigning means when the servicing means becomes available to service another query.
29. The apparatus according to claim 26, further comprising means for rejecting a query from one of the plurality of processes if all of the plurality of servicing means are not available.
30. The apparatus according to claim 27, wherein the assigning means further comprises means for creating a list of available servicing means among the plurality of servicing means that are available to service queries.
31. The apparatus according to claim 26, wherein the means for interfacing includes means for servicing small queries without forwarding the small queries.
32. The apparatus according to claim 27, wherein each of the servicing means includes means for requesting work from the assigning means after a predetermined time has elapsed and servicing means has remained idle throughout the predetermined time.
PCT/US2000/010184 1999-04-14 2000-04-14 Database management architecture WO2000062199A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US29133499A 1999-04-14 1999-04-14
US09/291,334 1999-04-14

Publications (3)

Publication Number Publication Date
WO2000062199A2 true WO2000062199A2 (en) 2000-10-19
WO2000062199A3 WO2000062199A3 (en) 2001-05-10
WO2000062199A9 WO2000062199A9 (en) 2002-06-27

Family

ID=23119889

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/010184 WO2000062199A2 (en) 1999-04-14 2000-04-14 Database management architecture

Country Status (1)

Country Link
WO (1) WO2000062199A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100426270B1 (en) * 2002-05-21 2004-04-08 이승룡 A multimedia streaming server and an Interconnection Method of multimedia streaming system and multimedia database

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4769772A (en) * 1985-02-28 1988-09-06 Honeywell Bull, Inc. Automated query optimization method using both global and parallel local optimizations for materialization access planning for distributed databases
US5694593A (en) * 1994-10-05 1997-12-02 Northeastern University Distributed computer database system and method
US5742816A (en) * 1995-09-15 1998-04-21 Infonautics Corporation Method and apparatus for identifying textual documents and multi-mediafiles corresponding to a search topic

Also Published As

Publication number Publication date
WO2000062199A9 (en) 2002-06-27
WO2000062199A3 (en) 2001-05-10

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): CA IL JP

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
AK Designated states

Kind code of ref document: A3

Designated state(s): CA IL JP

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

AK Designated states

Kind code of ref document: C2

Designated state(s): CA IL JP

AL Designated countries for regional patents

Kind code of ref document: C2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

COP Corrected version of pamphlet

Free format text: PAGES 1/4-4/4, DRAWINGS, REPLACED BY NEW PAGES 1/4-4/4; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase in:

Ref country code: JP