WO2000062199A2 - Database management architecture - Google Patents
- Publication number
- WO2000062199A2 (PCT/US2000/010184)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- database
- queries
- query
- processes
- database server
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5038—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/28—Databases characterised by their database models, e.g. relational or object models
- G06F16/284—Relational databases
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5021—Priority
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/506—Constraint
Definitions
- The present invention relates generally to database manager architectures, and more particularly to a database manager architecture for use in a high-reliability network environment, such as Undersea Network Management Equipment.
- The current Undersea Network Management Equipment (UNME) database architecture and implementation is inadequate to service the high-performance demands of the Fiber Link Around the Globe (FLAG) customer.
- The database architecture is based on a single-threaded, single process. Queries that yield large result sets and require processing time in excess of one minute create significant bottlenecks, which affect the service of other critical processes. Handling of large data-set queries is based on an architecture designed to ensure that the user interface would continue processing while waiting for query results to be returned from the database. Another important reason for this architecture was that the Intelligent Process Control (IPC) Manager process cannot pass data through notifications that exceed any reasonable data structure size (e.g., in excess of 200-300K). Moreover, the existing architecture relies on a dynamic notification number (DNN).
- This DNN is requested of the database and inserted into a query notification, which is constructed by the client process.
- The DNN is then used to connect to the query requester and pass the data back to that process.
- Another side effect of the current architecture is that the database Server and client processes may deadlock if the client that requested the large query attempts to submit another query before the database is finished processing the first one.
- One possibility for this deadlock is that the database Server is trying to connect to the client to return the query results at the same time the client is attempting to connect to the database Server.
- The IPC Manager manages multiple processes servicing the same requests using a distribution mechanism, whereby the process selected for servicing a particular request is simply moved to the end of the IPC Manager process service queue after it is assigned. The next service request is then assigned to the next process in the process service queue.
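The rotation described above can be sketched as follows. The class, method names, and service numbers are hypothetical; this illustrates only the distribution mechanism, not the actual IPC Manager implementation:

```python
from collections import deque

# Sketch of the IPC Manager distribution mechanism: the process chosen to
# service a request is rotated to the end of the service queue for that
# service number, so successive requests are spread across processes.
class IpcManager:
    def __init__(self, service_map):
        # service_map: service number -> ordered list of process names
        self.queues = {svc: deque(procs) for svc, procs in service_map.items()}

    def assign(self, service):
        queue = self.queues[service]
        process = queue[0]   # next process in the service queue
        queue.rotate(-1)     # move it to the end after assignment
        return process

mgr = IpcManager({7: ["dbserver", "histserver"]})
print([mgr.assign(7) for _ in range(3)])  # ['dbserver', 'histserver', 'dbserver']
```

Note how the sketch exposes the first problem described below: with a single configured process, `assign` returns that process unconditionally, whether or not it is busy.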
- The first problem with this approach is that if only one process is configured to provide the requested service, then that process will be immediately assigned the task of providing service. However, the assigned process may be unavailable, and the system will exhibit behavior similar to the current problem.
- The second problem is that any given process may service many types of requests.
- The process service queue is organized by service number and not by process. Therefore, the same process may be selected to handle a different request although it is busy. Again, the system will exhibit the undesirable behavior discussed above.
- The third problem is that the IPC Manager's current implementation relies on sending the potential service provider a "health check" message. If a potential service provider does not respond to the "health check," which often happens when a process is busy, the IPC Manager will remove that process from all of its management lists, and that process will no longer be considered for servicing requests.

Multi-threading of the Database Server Process
- Splitting the database Server process in this way is not a good solution to the above-mentioned problems, for essentially the same reasons discussed in relation to the other approaches.
- The basic premise of this approach is to move each of the large query service providers to its own individual process. For instance, the alarm summary, event history, and performance history queries would each be handled by their own processes.
- First, extracting these service providers and creating new processes introduces significant management complexity.
- Second, some queries are requested very rarely and do not warrant dedicating constantly running processes that sit idle.
- Third, the process could be split between the three basic queries; however, the performance history query encompasses the Add/Drop Multiplexer (ADM), Submarine Lightwave Terminating Equipment (SLTE), and Power Feed Equipment (PFE) network elements, which are further subdivided into their performance items.
- The present invention is therefore directed to the problem of developing an architecture for managing a database process that can process many large queries simultaneously without creating bottlenecks, and do so in a highly reliable manner sufficient to meet the requirements of Undersea Network Management Equipment.
- The present invention solves this problem by splitting the database process into at least two processes: a database server interface process and a database engine process.
- The present invention is integrated into the existing architecture of the database manager.
- The database process is split into two processes.
- The database Server interface remains the same, and a separate database Engine process is constructed, to which the database server interface forwards requests for service.
- The database Server continues to receive the query notifications as before; however, the process simply queues the query structure it receives and passes the query to the database Engine when the engine becomes available.
- The present invention allows the database Server to service the small queries it receives, such as requests for configuration data, which are critical. It also eliminates the deadlocking problem and allows a configuration of multiple database Engines to service several large queries simultaneously.
- The database Engine process of the present invention does not passively receive requests pushed from the database Server or any other process; rather, it requests work when it becomes available.
- A database apparatus for processing queries from a plurality of processes includes a database server and at least one database engine.
- The database server interfaces with the plurality of processes, processes predetermined types of queries from any processes within the plurality of processes, provides responses to the predetermined types of queries to those processes within the plurality of processes that originated them, and forwards all other queries for further processing.
- The database engine is coupled to the database server, receives all other queries from the database server, processes them, and provides the results of those queries directly to the processes that originated them.
- There can be one database engine or a plurality of database engines. If there is more than one database engine, the database server assigns the queries to the database engines in a predetermined order, e.g., a first-in-first-out order to each of the plurality of database engines in cyclic fashion.
- The database engines signal the database server upon completion of processing a query.
- The database engine also signals the database server upon the elapse of a predetermined time interval during which the database engine has remained idle. This ensures that the database Server will not inadvertently remove that database engine from its list of available database engines.
- The database server places all queries not processed by the database server in a queue and assigns the queries in the queue to each of the plurality of database engines in a predetermined manner, e.g., a random assignment, a circular queue, a ring-buffer type, etc.
- A method for interfacing a database process with a plurality of processes that generate a plurality of database queries, each query in the form of a query structure, includes the steps of: a) splitting the database process into a database server process and a database engine process; b) using the database server process to interface with all processes generating the plurality of database queries; c) queuing a query structure received by the database server process; and d) passing the queued query structure received by the database server process to the database engine process when the database engine process becomes available.
- The database server queues additional queries.
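Steps (a) through (d) can be sketched roughly as follows. All class and method names are hypothetical, and the real processes communicate via IPC notifications rather than direct method calls; this only shows the split, the queueing, and the hand-off:

```python
from collections import deque

# Minimal sketch of the claimed method: the server process interfaces with
# clients and queues query structures (steps b, c); the engine process is
# handed the next queued query when it becomes available (step d).
class DatabaseEngine:
    def execute(self, query):
        return f"results of {query}"

class DatabaseServer:
    def __init__(self, engine):
        self.engine = engine       # step (a): two separate roles
        self.queue = deque()

    def receive(self, query):      # steps (b), (c): interface and queue
        self.queue.append(query)

    def engine_available(self):    # step (d): pass a queued query onward
        if self.queue:
            return self.engine.execute(self.queue.popleft())
        return None

server = DatabaseServer(DatabaseEngine())
server.receive("SELECT * FROM alarm_summary")
print(server.engine_available())  # results of SELECT * FROM alarm_summary
```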
- An apparatus for interfacing a database process with a plurality of processes that generate a plurality of database queries includes an interfacing means, a queuing means, a forwarding means, and a servicing means.
- The interfacing means interfaces with all processes generating the plurality of database queries.
- The queuing means queues a query received by the interfacing means.
- The forwarding means forwards the queued queries received by the interfacing means for further servicing.
- The servicing means services the query forwarded by the forwarding means.
- The servicing means can include a plurality of servicing means.
- The above apparatus can include means for assigning a plurality of query structures received from the plurality of processes by the interfacing means to the plurality of servicing means in a predetermined order.
- Each of the plurality of servicing means includes means for requesting work from the assigning means when the servicing means becomes available to service another query.
- The above apparatus can include means for queuing a query from one of the plurality of processes if none of the plurality of servicing means is available.
- The assigning means can include means for creating a list of those servicing means among the plurality of servicing means that are available to service queries.
- The means for interfacing can include means for servicing small queries without forwarding the small queries.
- Each of the servicing means can include means for requesting work from the assigning means after a predetermined time has elapsed during which the servicing means has remained idle.
- FIG 1 depicts a prior art implementation of the database manager architecture interfacing with two processes, each generating a database query.
- FIG 2A depicts one embodiment of the database manager architecture according to the present invention, with two processes each generating a database query.
- FIG 2B depicts another embodiment of the database manager architecture according to the present invention, with two processes each generating a database query and two database engines for processing the queries.
- FIG 3 depicts a flow chart of one embodiment of the method of the present invention.
- The database server process 13 is split into two separate processes: a database server interface process 23 and a database engine process 24.
- The database server interface process 23 acts as an interface to all of the processes 21, 22 generating queries; however, upon receipt of a large query, it passes this large query to the database engine process 24. This prevents the above-mentioned bottleneck from occurring. If the query is small and critical, the database server 23 services it quickly to prevent any delay. In this embodiment, the database server 23 queues the queries it receives and sends them one at a time to the database engine process 24. Upon completion of processing a query, the database engine process transmits the results directly to the process that originated the query.
- Upon successful transmission of the results to the originating process, the database engine 24 transmits a request-for-work signal to the database server interface process 23, which causes the database server interface process 23 to send the next queued query, if any.
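The request-for-work exchange can be illustrated as follows. All names are hypothetical, and direct method calls stand in for the inter-process connections; the point is that results bypass the server interface, and only after a successful delivery does the engine ask for more work:

```python
from collections import deque

# Sketch of the pull-based protocol: the server interface only queues and
# hands out work; the engine delivers results directly to the originator,
# then requests the next queued query.
class Originator:
    def __init__(self):
        self.results = []
    def deliver(self, result):
        self.results.append(result)

class ServerInterface:
    def __init__(self):
        self.queue = deque()
    def submit(self, originator, query):
        self.queue.append((originator, query))
    def request_work(self):                 # called by an available engine
        return self.queue.popleft() if self.queue else None

class Engine:
    def __init__(self, server):
        self.server = server
    def run_once(self):
        item = self.server.request_work()
        if item is None:
            return False
        originator, query = item
        originator.deliver(f"rows for {query}")  # bypasses the server
        return True

srv = ServerInterface()
a = Originator()
srv.submit(a, "query A")
eng = Engine(srv)
while eng.run_once():
    pass
print(a.results)  # ['rows for query A']
```

Because the engine never connects to the server while the server is connecting to it, the sketch also shows why the deadlock described earlier cannot arise in this arrangement.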
- If query A was received first by the database server interface process 23, it would place query A in its queue and then forward query A to the database engine 24 when query A reached the top of the queue and the database engine 24 transmitted a request-for-work signal to the database server interface process 23.
- Query B would also be placed in the queue behind query A, and would then be transmitted to the database engine 24 when it reached the top of the queue and the database server interface process 23 received a request for work from the database engine 24.
- When the database engine process completes processing of query A, it forwards the results directly to process A 21 without passing through the database server interface process 23. Only after these results are successfully transmitted to process A will the database engine 24 request work from the database server interface process 23.
- The dotted lines in FIG 2A indicate the temporary connection between the processes A 21 and B 22. In this case, the temporary connection to process A 21 occurs before the temporary connection to process B 22.
- The database engine process 24 can be split into multiple database engine processes 24a, 24b.
- Process A 21 generates a large query.
- Process B 22 also generates a large query.
- The database server interface 23 receives the query from Process A first and passes this query to the first available database engine, e.g., database engine 24a. While the database server interface 23 is acting on the query from Process A 21, Process B 22 attempts to query the database; however, a brief conflict is encountered. Fortunately, as the time required for the database server interface 23 to pass a query to the database engine 24a is extremely small, this delay is hardly noticeable. The database server interface 23 then becomes available to act on the query from Process B 22.
- The database server interface 23 passes the query to the next available database engine, in this case database engine 24b.
- Database engine 24a transmits the results of query A sent by process A 21 directly to process A without passing the results to the database server interface 23.
- Database engine 24b transmits the results of query B sent by process B 22 directly to process B 22 without sending the results to the database server interface 23.
- Thus, two separate processes 21, 22 with simultaneous large queries receive service without creating a bottleneck.
- The number of database engines is limited only by the ability of the database server to address and manage these database engines.
- The database server interface process queues the received queries that it does not process itself and assigns these queued queries to the database engines in a predetermined manner.
- One example of such an assignment is to form a circular order of the database engines and assign the queued queries to the database engines in that circular order, in a first-in-first-out manner.
- Another possibility is to assign the queries in a random fashion, which prevents the same queries from always being processed by the same database engines.
- Yet another example is to assign the queries according to a predefined order of the database engines, in which the database engines are ordered by capability, with the most powerful ones first. This can ensure the fastest processing at all times when the database engines have varying levels of capability.
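The three assignment orders described above can be sketched as follows. Engine names and the capability ordering are hypothetical; the patent leaves the concrete policy open:

```python
import random
from itertools import cycle

# Three assignment policies for distributing queued queries across engines:
# circular first-in-first-out, random, and capability-ordered.
engines = ["engine_fast", "engine_medium", "engine_slow"]

def round_robin(engines):
    return cycle(engines)              # circular order, FIFO assignment

def random_pick(engines, rng=random.Random(0)):
    while True:
        yield rng.choice(engines)      # avoids pinning queries to engines

def by_capability(engines, available):
    # Prefer the most capable engine that is currently available;
    # `engines` is assumed ordered from most to least capable.
    for e in engines:
        if e in available:
            return e
    return None

rr = round_robin(engines)
print([next(rr) for _ in range(4)])
# ['engine_fast', 'engine_medium', 'engine_slow', 'engine_fast']
print(by_capability(engines, {"engine_medium", "engine_slow"}))  # engine_medium
```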
- If no database engine is available, the database server interface process queues the query to prevent bottlenecking from occurring.
- To determine which database engines are available, the database server interface process creates a list of available processes. The list is updated every time the database server interface process receives a signal (e.g., a request for work) from one of the database engines indicating that it is available to process additional queries. This occurs on at least two occasions: first, when the database engine has remained idle for a predetermined interval, and second, when the database engine completes processing of a previously assigned query.
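Maintaining that availability list can be sketched as follows (class and engine names are hypothetical). The same request-for-work signal covers both occasions, so an idle engine is never mistakenly dropped from the list:

```python
# Sketch of the list of available database engines: an engine is added
# whenever it signals a request for work (on query completion or after an
# idle time-out), and removed when a query is assigned to it.
class AvailabilityList:
    def __init__(self):
        self.available = set()

    def on_request_for_work(self, engine):
        # Sent on query completion OR after the idle time-out interval.
        self.available.add(engine)

    def assign(self, engine):
        self.available.discard(engine)   # busy engines are not selectable

avail = AvailabilityList()
avail.on_request_for_work("engine-1")    # finished a query
avail.on_request_for_work("engine-2")    # idle heartbeat
avail.assign("engine-1")
print(sorted(avail.available))  # ['engine-2']
```

Note the contrast with the IPC Manager's health-check scheme criticized earlier: here a busy engine is simply absent from the list until it asks for work again, rather than being permanently removed for failing to answer a probe.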
- FIG 3 depicts a flow chart of the present invention responding to multiple queries.
- Upon receiving a new query, the database server interface 23 first determines whether the new query is a critical query (i.e., either by a predetermined type or another indicator) (step 33). If the new query is a critical query, the database server then determines whether the new query is sufficiently small for the database server to process without causing a bottleneck (step 34). If the answers to both of these questions are YES, the database server processes the new query (step 37) and sends the results to the originating process (step 40).
- Otherwise, the database server places the new query in a queue (step 35) of the first-in-first-out (FIFO) type and waits for more queries. If the new query is critical but large, the database server places the new query at the beginning of the queue (step 36). Steps 38-42 depict the assignment of queries from the queue.
- The database server assigns the queries from the queue as database engines become available (step 38). This determination is possible because the database engines transmit a signal to the database server upon completing the processing of a query (step 41), or at the end of a time-out interval during which they have remained idle (step 42). In all cases, the results of the query are transmitted directly to the originating process without passing through the database server, unless of course the database server itself processed the query (step 40).

Summary
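The dispatch decision of FIG 3 can be sketched as follows. The size threshold and field names are assumptions for illustration; the patent leaves the criteria for "critical" and "small" open:

```python
from collections import deque

# Sketch of steps 33-37: critical and small -> serviced by the server
# itself; critical but large -> head of the FIFO queue; all other
# queries -> tail of the queue.
SMALL_LIMIT = 100  # hypothetical size threshold

def dispatch(query, queue, serve_now):
    if query["critical"] and query["size"] <= SMALL_LIMIT:
        serve_now(query)             # steps 33, 34, 37, 40
    elif query["critical"]:
        queue.appendleft(query)      # step 36: head of the queue
    else:
        queue.append(query)          # step 35: tail of the queue

served, queue = [], deque()
dispatch({"name": "config", "critical": True, "size": 10}, queue, served.append)
dispatch({"name": "history", "critical": False, "size": 5000}, queue, served.append)
dispatch({"name": "alarm", "critical": True, "size": 5000}, queue, served.append)
print([q["name"] for q in queue])   # ['alarm', 'history']
print([q["name"] for q in served])  # ['config']
```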
- The present invention outlined herein satisfies these needs without any major rework of existing processes and integrates well into the existing implementation.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US29133499A | 1999-04-14 | 1999-04-14 | |
US09/291,334 | 1999-04-14 |
Publications (3)
Publication Number | Publication Date |
---|---|
WO2000062199A2 true WO2000062199A2 (en) | 2000-10-19 |
WO2000062199A3 WO2000062199A3 (en) | 2001-05-10 |
WO2000062199A9 WO2000062199A9 (en) | 2002-06-27 |
Family
ID=23119889
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2000/010184 WO2000062199A2 (en) | 1999-04-14 | 2000-04-14 | Database management architecture |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2000062199A2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100426270B1 (en) * | 2002-05-21 | 2004-04-08 | 이승룡 | A multimedia streaming server and an Interconnection Method of multimedia streaming system and multimedia database |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4769772A (en) * | 1985-02-28 | 1988-09-06 | Honeywell Bull, Inc. | Automated query optimization method using both global and parallel local optimizations for materialization access planning for distributed databases |
US5694593A (en) * | 1994-10-05 | 1997-12-02 | Northeastern University | Distributed computer database system and method |
US5742816A (en) * | 1995-09-15 | 1998-04-21 | Infonautics Corporation | Method and apparatus for identifying textual documents and multi-mediafiles corresponding to a search topic |
- 2000-04-14: PCT application PCT/US2000/010184 filed (WO2000062199A2, active Application Filing)
Also Published As
Publication number | Publication date |
---|---|
WO2000062199A9 (en) | 2002-06-27 |
WO2000062199A3 (en) | 2001-05-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6195682B1 (en) | Concurrent server and method of operation having client-server affinity using exchanged client and server keys | |
US7315616B2 (en) | System and method for maintaining real-time agent information for multi-channel communication queuing | |
US7406515B1 (en) | System and method for automated and customizable agent availability and task assignment management | |
US8365205B2 (en) | Adaptive communication application programming interface | |
US8190743B2 (en) | Most eligible server in a common work queue environment | |
US6424993B1 (en) | Method, apparatus, and computer program product for server bandwidth utilization management | |
EP1430412A4 (en) | Asynchronous message push to web browser | |
US6324567B2 (en) | Method and apparatus for providing multiple commands to a server | |
US8024744B2 (en) | Method and system for off-loading user queries to a task manager | |
JPH10187639A (en) | High-availability computer server system | |
US8037153B2 (en) | Dynamic partitioning of messaging system topics | |
CN111510474A (en) | Data transmission method based on message middleware and related equipment | |
CN102571568A (en) | Method and device for processing task | |
US20070201673A1 (en) | System and method for multi-channel communication queuing | |
CN110096381B (en) | Method, device, equipment and medium for realizing remote procedure call | |
CN116737395A (en) | Asynchronous information processing system and method | |
WO2000062199A2 (en) | Database management architecture | |
CN111597033A (en) | Task scheduling method and device | |
CN112714181A (en) | Data transmission method and device | |
JPH1023005A (en) | Multi-cast distribution method and system | |
EP1182892B1 (en) | A short message method and system | |
KR100460493B1 (en) | EMS and controlling method therefore | |
KR19990053527A (en) | Client-server communication method using multilevel priority queue | |
CN116016546A (en) | Method and system for preheating resource files in batches in CDN | |
CN116991618A (en) | Information processing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A2 Designated state(s): CA IL JP |
|
AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
AK | Designated states |
Kind code of ref document: A3 Designated state(s): CA IL JP |
|
AL | Designated countries for regional patents |
Kind code of ref document: A3 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
|
AK | Designated states |
Kind code of ref document: C2 Designated state(s): CA IL JP |
|
AL | Designated countries for regional patents |
Kind code of ref document: C2 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
|
COP | Corrected version of pamphlet |
Free format text: PAGES 1/4-4/4, DRAWINGS, REPLACED BY NEW PAGES 1/4-4/4; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE |
|
122 | Ep: pct application non-entry in european phase | ||
NENP | Non-entry into the national phase in: |
Ref country code: JP |