US20090013209A1 - Apparatus for connection management and the method therefor - Google Patents
- Publication number
- US20090013209A1 (application Ser. No. 12/170,616)
- Authority
- US
- United States
- Prior art keywords
- session
- job
- available
- network connection
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5055—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering software capabilities, i.e. software resources associated or available to the machine
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1415—Saving, restoring, recovering or retrying at system level
- G06F11/1443—Transmit or communication errors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/14—Session management
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/40—Network security protocols
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5011—Pool
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5014—Reservation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/503—Resource availability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0896—Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
Definitions
- the present invention relates generally to data processing systems, and in particular, to bulk data distributions within networked data processing systems.
- a source (a source system)
- endpoints (a multiplicity of endpoint systems)
- Such large data transfers may occur within a network, for example, to distribute software updates.
- the system administrator may need to allocate a specific period of time for the data transfer to more efficiently utilize network resources. This may typically occur when the communication load on the system is lowest, usually at night when most endpoint users are not working at their stations.
- the system administrator may load the bulk data and the corresponding transfer instructions onto the network system's source, or server, in preparation for the transfer.
- the server will push the data while ensuring that the bulk data is successfully transferred to each of the desired endpoint locations.
- a portion of the system server is dedicated to the data transfer and thus unavailable for other networking tasks.
- network bandwidth demands are concomitantly increased. This complicates scalability of the bulk distribution systems.
- a connection scheduling method determines if a job is available for scheduling. It is also determined if a session for effecting an execution of the job is available. The session is included in a pool of sessions, in which the pool of sessions has a preselected one of a set of priority levels. The preselected priority level corresponds to a priority level of the job being scheduled for execution. If available, the session is launched to effect the execution of the job.
- a data processing system for connection scheduling.
- the system contains circuitry operable for determining if a job is available for scheduling. Also included is circuitry operable for determining, in response to the circuitry operable for determining if a job is available, if a session is available.
- the session is included in a pool of sessions, the pool of sessions having a preselected one of a set of priority levels corresponding to a priority level of the job.
- the session effects an execution of the job.
- the system also has circuitry operable for launching the session to effect the execution of the job, if the session is available.
- a computer program product embodied in a machine readable storage medium.
- the program product for job scheduling includes instructions for determining if a job is available for scheduling.
- the program product also contains instructions for determining, in response to the instructions for determining if the job is available, if a session is available, wherein the session is included in a pool of sessions, the pool of sessions having a preselected one of a set of priority levels corresponding to a priority level of the job.
- the session effects an execution of the available job.
- the program product also contains instructions for launching the session to effect the execution of the job, if the session is available.
- FIG. 1 illustrates, in block diagram form, a data processing network in accordance with an embodiment of the present invention
- FIG. 2 illustrates, in block diagram form, a data processing system implemented in accordance with an embodiment of the present invention
- FIG. 3A illustrates, in flowchart form, a connection management thread in accordance with an embodiment of the present invention
- FIG. 3B illustrates, in flowchart form, a session thread in accordance with an embodiment of the present invention
- FIG. 3C illustrates, in flowchart form, an error handling thread which may be used with the session thread of FIG. 3B ;
- FIG. 3D illustrates, in flowchart form, a retry timer thread, in accordance with an embodiment of the present invention
- FIG. 4 illustrates, in flowchart form, a methodology implemented to determine priority resource availability in accordance with an embodiment of the present invention.
- FIG. 5 schematically illustrates a repeater connection list which may be used in an embodiment of the present invention.
- the present invention is a method and apparatus for managing connections in a system for distributing and collecting data between an originating source system and a plurality of endpoint systems (which may also be referred to as “endpoint nodes” or simply “endpoints”).
- the method and apparatus provide a mechanism for managing a plurality of sessions, or threads, for sending a distribution to or receiving results information from the corresponding target machine. Sessions are allocated in accordance with a preselected distribution priority. Each distribution priority level has a predetermined number of sessions available to it in a corresponding sessions pool. By scheduling distributions based on their priorities, large, low priority distributions will no longer "bottleneck" small, high priority distributions.
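The pool-per-priority arrangement described above can be sketched as follows. This is an illustrative model only: the `SessionPools` class, the `try_acquire`/`release` interface, and the pool sizes are assumptions, since the patent does not specify concrete counts.

```python
import threading

# Hypothetical pool sizes per priority level; illustrative assumptions,
# not values taken from the patent.
POOL_SIZES = {"high": 4, "medium": 8, "low": 16}

class SessionPools:
    """One bounded pool of sessions per distribution priority level."""

    def __init__(self, sizes=POOL_SIZES):
        # A semaphore per priority models the fixed number of sessions
        # reserved for that level.
        self._pools = {prio: threading.Semaphore(n) for prio, n in sizes.items()}

    def try_acquire(self, priority):
        """Take a session token from the given pool without blocking."""
        return self._pools[priority].acquire(blocking=False)

    def release(self, priority):
        """Return a session token to the pool it came from."""
        self._pools[priority].release()
```

Bounding each pool with a semaphore captures the key property of the scheme: a flood of low priority distributions can exhaust only the low priority pool, leaving the high priority pool's sessions untouched.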
- the present invention has an originating source system followed by repeaters.
- the use of repeaters allows data to be delivered essentially simultaneously to a large number of machines.
- the present invention can be scaled to handle more destinations by adding repeaters.
- numerous specific details are set forth to provide a thorough understanding of the present invention. However, it will be obvious to those skilled in the art that the present invention may be practiced without such specific details. In other instances, well-known circuits have been shown in block diagram form in order not to obscure the present invention in unnecessary detail. For the most part, details concerning timing considerations and the like have been omitted inasmuch as such details are not necessary to obtain a complete understanding of the present invention and are within the skills of persons of ordinary skill in the relevant art.
- FIG. 1 illustrates a communications network 100 .
- the subsequent discussion and description of FIG. 1 are provided to illustrate an exemplary environment used by the present invention.
- the network system 100 includes source system 101 , one or more fan out/collector nodes, or, repeaters 110 , 111 , 118 , 119 , and a plurality of endpoints 112 - 117 . Additionally, certain repeaters, such as 118 and 119 , are directly connected to one or more endpoints, in the exemplary embodiment of FIG. 1 , endpoints 112 - 114 or 115 - 117 , respectively, and may be referred to as “gateway” repeaters (or, simply, “gateways”).
- Source system 101 provides distribution services with respect to resources 112 - 117 .
- source system 101 and endpoints 112 - 117 interface with repeaters 110 and 111 using the same methodologies as repeaters 110 and 111 interface with, for example, repeaters 118 and 119 .
- source system 101 and endpoints 112 - 117 each may include a "repeater."
- a repeater may be a logical element that may be, but is not necessarily associated with a physical, stand-alone hardware device in network 100 .
- Repeater 110 may be the primary repeater through which resources 112 - 114 receive their data transfers, and repeater 111 , likewise, may primarily service endpoints 115 - 117 .
- the connection management methodologies described below in conjunction with FIGS. 3A-3D may be performed by repeaters 110 , 111 , 118 and 119 . It would be understood by an artisan of ordinary skill that additional repeaters may be inserted into the network and may be arranged in a multi-level hierarchy according to the demands imposed by the network size. Gateway repeaters 118 and 119 are such repeaters in the exemplary embodiment of FIG. 1 .
- network system 100 provides cross connections in order to provide redundant, parallel communication paths should the primary communication path to the endpoint become unavailable.
- endpoint 114 has a primary pathway to source system 101 through repeaters 118 and 110 .
- a source system such as source system 101 may also be referred to as a source node.
- source system 101 can transfer bulk data to endpoint 114 via an alternative pathway through repeaters 118 and 111 .
- endpoint 114 may receive data via repeaters 111 and 119 .
- Source system 101 maintains database 120 for storing information used in managing a data distribution.
- a data processing system 200 which may be used to implement a source system such as system 101 , repeaters, such as repeaters 110 , 111 , 118 , or 119 or endpoints, such as endpoints 112 - 117 , executing the methodology of the present invention.
- the system has a central processing unit (CPU) 210 , which is coupled to various other components by system bus 212 .
- Read only memory (“ROM”) 216 is coupled to the system bus 212 and includes a basic input/output system (“BIOS”) that controls certain basic functions of the data processing system 200 .
- Random access memory (“RAM”) 214 , I/O adapter 218 , and communications adapter 234 are also coupled to the system bus 212 .
- I/O adapter 218 may be a small computer system interface (“SCSI”) adapter that communicates with a disk storage device 220 .
- Disk storage device 220 may be used to hold database 120 , FIG. 1 .
- Communications adapter 234 interconnects bus 212 with the network as well as outside networks enabling the data processing system to communicate with other such systems.
- Input/Output devices are also connected to system bus 212 via user interface adapter 222 and display adapter 236 .
- Keyboard 224 , track ball 232 , mouse 226 and speaker 228 are all interconnected to bus 212 via user interface adapter 222 .
- Display monitor 238 is connected to system bus 212 by display adapter 236 . In this manner, a user is capable of inputting to the system through the keyboard 224 , trackball 232 or mouse 226 and receiving output from the system via speaker 228 and display 238 .
- Implementations of the invention include implementations as a computer system programmed to execute the method or methods described herein, and as a computer program product.
- sets of instructions for executing the method or methods are resident in the random access memory 214 of one or more computer systems configured generally as described above.
- the set of instructions may be stored as a computer program product in another computer memory, for example, in disk drive 220 (which may include a removable memory such as an optical disk or floppy disk for eventual use in the disk drive 220 ).
- the computer program product can also be stored at another computer and transmitted when desired to the user's work station by a network or by an external network such as the Internet.
- the physical storage of the sets of instructions physically changes the medium upon which it is stored so that the medium carries computer readable information.
- the change may be electrical, magnetic, chemical, biological, or some other physical change. While it is convenient to describe the invention in terms of instructions, symbols, characters, or the like, the reader should remember that all of these and similar terms should be associated with the appropriate physical elements.
- the invention may be described in terms such as comparing, validating, selecting, identifying, or other terms that could be associated with a human operator. However, no action by a human operator is desirable in any of the operations described.
- the operations described are, in large part, machine operations processing electrical signals to generate other electrical signals.
- Thread 300 may be used by repeaters, such as repeaters 110 , 111 , 118 , and 119 of network 100 , FIG. 1 .
- Distributions or results information to be transferred by a repeater are enqueued in an output "job" queue in accordance with the assigned priority of the distribution. (Distributions targeted for ultimate delivery to an endpoint and results information for a report-to system may collectively be referred to simply as "jobs.") The transfer of distributions and results information is discussed in the commonly owned co-pending U.S. Patent Application entitled "An Apparatus and Method for Distributing and Collecting Bulk Data Between a Large Number of Machines," incorporated herein by reference.
- the distribution or results information may be assigned one of three priority levels, low, medium, or high, in an embodiment of the present invention. Distributions are enqueued in order of priority.
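Enqueuing distributions in priority order, as just described, can be modeled with a heap-based queue. The `JobQueue` name, the numeric ranks, and the FIFO tie-breaking within a level are illustrative assumptions, not details from the patent.

```python
import heapq
import itertools

# Lower rank = higher priority; the three levels come from the text,
# the numeric encoding is an illustrative assumption.
PRIORITY_RANK = {"high": 0, "medium": 1, "low": 2}

class JobQueue:
    """Output job queue ordered by distribution priority, FIFO within a level."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # preserves arrival order per level

    def enqueue(self, job, priority):
        heapq.heappush(self._heap, (PRIORITY_RANK[priority], next(self._counter), job))

    def dequeue(self):
        # Pop the highest-priority, earliest-arrived job.
        return heapq.heappop(self._heap)[2]

    def __len__(self):
        return len(self._heap)
```

Because jobs are dequeued strictly in priority order, the scheduler can stop at the first job for which no session is available, as the later text notes for step 312.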
- step 302 the job queue is locked, and, while the job queue is not empty, step 304 , jobs are scheduled until the queue is exhausted.
- step 306 the output queue is unlocked, whereby new distributions received by the repeater may be enqueued for sending to a target repeater or end point, as appropriate.
- Scheduling is initiated, when, in step 308 , the output job queue is no longer empty.
- Step 308 constitutes an event loop, wherein scheduling is initiated in response to an event such as a "running job completed" event. After a running job completes, a session, as described below, becomes available. A "new job" event, signaling that a new job has arrived at the repeater performing thread 300 , will also initiate scheduling.
- step 307 it is determined if a current job is ready for scheduling. If not, in step 308 thread 300 proceeds to the next job.
- a job may be determined to be ready for scheduling by determining the job state. If the job state has a first predetermined value, which in an embodiment of the present invention may be referred to as “WAITING”, then the job is ready for dispatch. The setting of the job state for a particular job will be described below in conjunction with steps 382 - 384 , and 386 , FIG. 3C .
- step 310 the session pools are searched.
- distributions may have one of three priorities, high, medium, or low.
- Each repeater has a pool of sessions, or threads, which are run to transfer data to a target system, either a target repeater or a target endpoint, as appropriate for the particular distribution.
- a logical connection to the target system is established for a new job unless the connection is already present because of an ongoing distribution to the same target.
- a connection can have multiple logical sessions associated with it.
- Each new session between repeaters or a repeater and an endpoint will establish a network channel or path to transfer data, so between a repeater and an endpoint or another repeater, there can be multiple sessions to execute jobs. That is, there may be parallel data transfer to the same target.
- Each distribution to or results information from a target machine “consumes” a session from the pool of sessions available to the distribution or results information.
- Each repeater has a pool of sessions allocated for each priority level.
- a session is used by thread 300 to run a job on a preselected target. That is, a session initiates a data transfer through an established network channel and waits for the target to finish processing data. Jobs are dispatched in order of priority, with higher priority jobs being dispatched preferentially over lower priority jobs. A higher priority job may obtain a session from the pool allocated to its priority, and from successively lower priority pools as well.
- a higher priority distribution may use sessions from its own pool first. If no sessions are found, then the distribution looks for sessions in the lower priority pools. This may be understood by referring now to FIG. 4 which illustrates in detail a methodology for performing step 310 in accordance with this protocol.
- step 410 it is determined if the distribution has a high priority level. If not, then in step 420 , it is determined if the distribution has a medium priority level. If not, then the distribution has a low priority, step 430 , and, in step 435 , it is determined if a low priority session is available. If a low priority session is available, then in step 450 , methodology 400 signals that a session is available. Conversely, if no low priority sessions are available in step 435 , in step 440 methodology 400 signals that no session is available.
- step 415 it is determined if a high priority session is available. If so, then methodology 400 proceeds to step 450 . Otherwise, if a high priority session is unavailable, that is, fully used by other jobs, then in step 425 it is determined if a medium priority session is available. Again, if a session is available, then step 450 is performed; otherwise, in step 435 , it is determined if a session allocated to the low priority pool is available. If so, step 435 , then step 450 is performed; otherwise, no sessions are available and methodology 400 proceeds to step 440 .
- If, in step 410 , it has been determined that the job is not a high priority distribution, it is determined if the job has a medium priority, step 420 . If not, it must again be a low priority job, step 430 , previously described. Otherwise, in step 420 it is a medium priority job, and in step 425 it is determined if a medium priority session is available. As before, if no medium priority sessions are available, it is determined if a low priority session is available, step 435 . In this manner, a job with a given priority level can use the number of sessions reserved for its priority level plus any sessions allocated to lower priority levels. If no sessions are available at the assigned priority or lower, the methodology signals no available sessions, step 440 , as previously described.
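The search order of FIG. 4 — a job's own pool first, then each successively lower priority pool — can be sketched as a simple cascade. The `CountingPools` stand-in and function names below are illustrative assumptions; only the search order comes from the patent.

```python
# Priority levels from highest to lowest, matching the patent's three levels.
LEVELS = ["high", "medium", "low"]

class CountingPools:
    """Minimal stand-in for the per-priority session pools (illustrative)."""

    def __init__(self, counts):
        self.counts = dict(counts)

    def try_acquire(self, level):
        if self.counts[level] > 0:
            self.counts[level] -= 1
            return True
        return False

def find_session(pools, job_priority):
    """FIG. 4 search: try the job's own pool, then each lower-priority
    pool. Returns the level that supplied a session (the step 450
    outcome), or None when nothing is available (step 440)."""
    start = LEVELS.index(job_priority)
    for level in LEVELS[start:]:
        if pools.try_acquire(level):
            return level
    return None
```

Note that the cascade only runs downward: a low priority job can never borrow a session reserved for the high or medium pools, which is what keeps small urgent distributions from being starved.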
- If, in step 312 , it is determined a session is available to the job, as reported in step 450 , FIG. 4 , then the session is reserved in step 314 . Otherwise, if it is reported not available, step 440 , FIG. 4 , step 312 proceeds by the "No" branch to step 307 . Because, as previously described, jobs are enqueued in priority order, the unavailability of a session for the current job also means that the succeeding jobs cannot be scheduled, because they have a priority that is the same as or lower than that of the current job.
- Thread 300 then loops in step 301 for an event indicating that a session has become available, which then triggers thread 300 via the "yes" branch in step 301 .
- an “UNREACHABLE” state job may become available, whereby the state goes to “WAITING”, which will also trigger scheduling.
- connection object or simply connection
- the connection list contains one or more connection objects that are a logical representation of a communication channel, or path, between the repeater running the thread and the target.
- a connection object maintains the status of the channel. For example, if the channel breaks, the status of the connection may change from a first value, say "UP", to a second value, say "DOWN."
- the connection object also includes all of the active sessions associated with the target.
- a new connection object is created, and in step 320 a new session is created and run.
- a connection can have multiple sessions associated with it, and each session is a thread.
- FIG. 5 is a repeater connection list with three connections, C- 0 , C- 1 and C- 2 .
- Connection C- 0 has two active sessions, S- 0 and S- 1 , running Job 0 and Job 1 , respectively, for the target, repeater R- 1 .
- connection C- 1 has two sessions, S- 2 and S- 3 running Job 2 and Job 3 , respectively on a target endpoint, E- 2 .
- An exclusive session, S- 4 runs Job 4 on connection C- 2 , for target endpoint E- 3 . No other jobs will run on connection C- 2 until S- 4 ends. If, however, in step 316 a connection exists, it is determined if the connection is being exclusively used by a job, step 322 . If the existing connection is exclusive, then the session reserved in step 314 is released, step 324 and thread 300 proceeds to step 308 to schedule the next job.
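The connection list of FIG. 5 can be modeled as follows. The attribute names, the `exclusive` flag handling, and the helper functions are illustrative assumptions; the structure (connections per target, each carrying one or more sessions) follows the figure.

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    job: str  # the job this session thread is running

@dataclass
class Connection:
    target: str                # target repeater or endpoint, e.g. "R-1"
    status: str = "UP"         # channel status tracked by the connection object
    exclusive: bool = False    # an exclusive connection admits a single session
    sessions: list = field(default_factory=list)

    def can_add_session(self):
        """A non-exclusive connection may carry multiple parallel sessions;
        an exclusive one refuses new sessions until its session ends."""
        return not (self.exclusive and self.sessions)

def find_connection(connection_list, target):
    """Reuse an existing connection to the target, as in step 316;
    None means a new connection object must be created."""
    for conn in connection_list:
        if conn.target == target:
            return conn
    return None
```

Reusing an existing connection while multiplying sessions over it is what allows parallel transfers to the same target without re-establishing the channel.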
- step 320 launches a session, which, in an embodiment of the present invention, may be session 350 , FIG. 3B .
- step 352 it is determined if the target system is accessible.
- the target system may be inaccessible if, for example, the target system is unavailable or a network outage has occurred. If a target is inaccessible, in step 354 a retry thread is launched. This will be discussed subsequently in conjunction with FIG. 3C below.
- thread 350 ends.
- step 356 the job is executed, where data is transferred to the target and, while the target is processing the data, the session waits and then posts results information to the repeater performing the session. Recall that repeaters transfer results information to one or more report-to nodes, as discussed in detail in the commonly owned co-pending U.S. Patent Application entitled "An Apparatus and Method for Distributing and Collecting Bulk Data Between a Large Number of Machines," (Attorney Docket No. AT9-99-274) incorporated herein by reference. It is determined, in step 358 , if the distribution is complete. If not, thread 350 returns to step 352 . Otherwise, on completion of the execution of the distribution, in step 360 the session is released and the thread ends in step 362 .
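Session thread 350 (FIG. 3B) reduces to a small loop: check target accessibility, execute the job, and release the session on completion. The sketch below passes each flowchart step in as a hypothetical callable, since the actual network operations are not specified here.

```python
def run_session(target_accessible, execute_job, launch_retry, release_session):
    """Sketch of session thread 350 (FIG. 3B). Each parameter is an
    assumed callable standing in for one step of the flowchart."""
    while True:
        if not target_accessible():      # step 352: is the target reachable?
            launch_retry()               # step 354: hand off to the retry thread
            return "retry"               # thread 350 ends
        done = execute_job()             # step 356: transfer data, await results
        if done:                         # step 358: distribution complete?
            release_session()            # step 360: return session to its pool
            return "completed"           # step 362: thread ends
```

The loop back to the accessibility check mirrors step 358's "No" branch returning to step 352, so a multi-segment distribution re-verifies the target before each pass.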
- FIG. 3C describes retry thread 370 , which may be launched in response to a distribution failure arising from the unavailability of the target in step 354 above.
- the session used to run the job is returned to the corresponding pool.
- thread 370 releases the session, whereby it is returned to the pool having the priority level of the session.
- step 372 it is determined if a fatal error has occurred. For example, if the distribution segment, in executing the distribution in step 356 above, is too large to fit into the available disk space, a fatal error will result. If, in step 372 , a fatal error has occurred, in step 374 the job state is set to "FAILED".
- a results information segment is built which includes the job state.
- results information is generated by repeaters and endpoint systems and transmitted to one or more report-to systems.
- the corresponding results information that is sent to one or more preselected report-to systems in accordance with the methodologies described in the aforesaid co-pending U.S. patent application may be generated in step 376 .
- If the error was non-fatal, step 372 proceeds by the "No" branch. If, in step 382 , the target system was unavailable on the first attempt to connect to the target system, in step 384 the job state is set to "UNREACHABLE". Otherwise, the connection has broken during the execution of the distribution, and in step 386 the job state is set to "INTERRUPTED".
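The state transitions of steps 372-386 classify a failure by its kind and by when it occurred. A minimal sketch, with the boolean inputs as assumed parameters:

```python
def classify_failure(fatal, failed_on_first_connect):
    """Job-state transition of steps 372-386 (FIG. 3C). Returns the new
    job state: FAILED for fatal errors, UNREACHABLE when the target was
    never reached, INTERRUPTED when the connection broke mid-transfer."""
    if fatal:                      # step 372: e.g. segment exceeds disk space
        return "FAILED"            # step 374
    if failed_on_first_connect:    # step 382: target down from the start
        return "UNREACHABLE"       # step 384
    return "INTERRUPTED"           # step 386: channel broke during execution
```

Distinguishing UNREACHABLE from INTERRUPTED matters later, at step 389, where an already-unreachable job that hits the retry cutoff is failed outright.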
- step 388 it is then determined if a retry cutoff has been reached.
- Each connection has a predetermined connection retry time interval that is used to supply a maximum amount of time over which retries for failed jobs will be attempted.
- Also in step 388 it is determined if an application specified "no retry." A user, for example, may wish to take corrective action quickly rather than wait for a predetermined number of retries to elapse before receiving notification that the distribution has failed. If the connection retry interval has elapsed, or if "no retry" is specified, in step 388 the "Yes" branch is followed.
- step 389 the job state is tested, and if “UNREACHABLE,” then the job fails and the job state is set to “FAILED” in step 374 . Otherwise, step 389 proceeds to step 390 .
- step 390 the job state is set to “UNREACHABLE” and a login callback method is registered with the corresponding gateway of the target endpoint system. The login callback will be invoked and a login notification thereby provided to the repeater performing management thread 300 . Thread 370 proceeds to step 379 , discussed below.
- step 388 proceeds by the “No” branch to step 392 and in step 392 a retry timer thread is launched, bypassing step 390 .
- step 388 proceeds by the “No” branch.
- Retry timer thread 340 is illustrated in FIG. 3D . In step 342 the retry timer is started. In step 344 it is determined if the retry timer has expired. If not, thread 340 loops until the timer expires; then, in step 346 , on expiration of the timer, the job state is set to "WAITING". In step 348 , timer thread 340 ends.
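Retry timer thread 340 amounts to a timed wait followed by a state change. A sketch, where the job dictionary, interval value, and notification callback are illustrative assumptions:

```python
import threading

def retry_timer(job, interval_seconds, on_waiting):
    """Sketch of retry timer thread 340 (FIG. 3D): after the timer
    expires, the job state is set to WAITING so that scheduling
    thread 300 can pick the job up again."""
    def body():
        threading.Event().wait(interval_seconds)  # steps 342-344: run to expiry
        job["state"] = "WAITING"                  # step 346
        on_waiting(job)                           # notify the scheduler (thread 300)

    t = threading.Thread(target=body, daemon=True)
    t.start()                                     # step 348 is the thread exiting
    return t
```

Setting the state to "WAITING" is what re-qualifies the job at step 307 of the connection management thread, so the retry is driven entirely by the state machine rather than by re-enqueuing the job.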
- step 392 thread 370 proceeds to step 379 .
- step 379 it is determined if the job state is "WAITING" or "FAILED." The job state may be set to "WAITING" in step 346 , FIG. 3D . If so, then thread 370 notifies thread 300 , FIG. 3A , signaling an event, and terminates, step 385 . Otherwise, in step 381 thread 370 loops until the endpoint logs in and, in step 383 , the job state is set equal to "WAITING," and thread 370 again notifies thread 300 in step 308 .
- step 307 of thread 300 determines that the distribution that launched error handling thread 370 is ready for scheduling, and then initiates a session to execute the distribution in steps 310 - 320 , as previously described.
Abstract
An apparatus and method for scheduling data distributions to, or results information from (collectively, "jobs"), a plurality of data processing systems via a network. A connection to a target system is created. For each distribution, a session, which is an independent thread, is allocated from one of a plurality of pools of sessions and launched to effect execution of the job. Each pool corresponds to a predetermined priority level, and the session is allocated from the pool having the same priority level as the priority level of the job being scheduled. A connection supports a multiplicity of independent threads. In the event of an error, the session is released, and the scheduling of the aborted job is retried after a predetermined retry interval expires. After expiry of the retry interval, a callback method is invoked when the target system on which the scheduled job is executed becomes accessible.
Description
- This application is a continuation of application Ser. No. 09/438,436, filed Nov. 12, 1999, status allowed.
- Related subject matter may be found in the following commonly assigned, co-pending U.S. Patent Applications, both of which are hereby incorporated by reference herein.
- Ser. No. 09/460,855, entitled “APPARATUS FOR DATA DEPOTING AND METHOD THEREFOR”
- Ser. No. 09/460,853, entitled “APPARATUS FOR RELIABLY RESTARTING INTERRUPTED DATA TRANSFER AT LAST SUCCESSFUL TRANSFER POINT AND METHOD THEREFOR”
- Ser. No. 09/438,437, entitled “AN APPARATUS AND METHOD FOR DISTRIBUTING AND COLLECTING BULK DATA BETWEEN A LARGE NUMBER OF MACHINES” and filed concurrently herewith;
- Ser. No. 09/458,268, entitled “COMPUTER NETWORK CONTROL SYSTEMS AND METHODS” and filed concurrently herewith;
- Ser. No. 09/460,852 entitled “METHODS OF DISTRIBUTING DATA IN A COMPUTER NETWORK AND SYSTEMS USING THE SAME”
- Ser. No. 09/458,269, entitled “SYSTEMS AND METHODS FOR REAL TIME PROGRESS MONITORING IN A COMPUTER NETWORK”;
- Ser. No. 09/460,851, entitled “APPARATUS FOR AUTOMATICALLY GENERATING RESTORE PROCESS DURING SOFTWARE DEPLOYMENT AND METHOD THEREFOR”, and
- Ser. No. 09/460,854, entitled “AN APPARATUS FOR JOURNALING DURING SOFTWARE DEPLOYMENT AND METHOD THEREFOR”.
- The present invention relates generally to data processing systems, and in particular, to bulk data distributions within networked data processing systems.
- Present day data processing systems are often configured in large multi-user networks. Management of such networks may typically include the need to transfer bulk data to an endpoint system from a source system (or, simply, “a source”) and the collection of information, for example, error reports from a multiplicity of endpoints systems (or, simply, “endpoints”).
- Such large data transfers may occur within a network, for example, to distribute software updates. The system administrator may need to allocate a specific period of time for the data transfer to more efficiently utilize network resources. This may typically occur when the communication load on the system is lowest, usually at night when most endpoint users are not working at their stations. The system administrator may load the bulk data and the corresponding transfer instructions onto the network system's source, or server, in preparation for the transfer. At the predetermined time set by the administrator, the server will push the data while ensuring that the bulk data is successfully transferred to each of the desired endpoint locations. However, during the transfer a portion of the system server is dedicated to the data transfer and thus unavailable for other networking tasks. Moreover, as the number of endpoints which must be simultaneously serviced by the bulk data distribution increases, network bandwidth demands are concomitantly increased. This complicates scalability of the bulk distribution systems.
- Therefore, a need exists in the art for a bulk distribution mechanism that can transfer large amounts of data between network connected subsystems (or nodes) while maintaining scalability. Additionally, there is a need in such distribution mechanisms for methods and apparatus to distribute bulk data to a multiplicity of endpoints and to collect bulk data, including large log files, from the endpoints.
- The aforementioned needs are addressed by the present invention. Accordingly, there is provided, in a first form, a connection scheduling method. The method determines if a job is available for scheduling. It is also determined if a session for effecting an execution of the job is available. The session is included in a pool of sessions, in which the pool of sessions has a preselected one of a set of priority levels. The preselected priority level corresponds to a priority level of the job being scheduled for execution. If available, the session is launched to effect the execution of the job.
- There is also provided, in a second form, a data processing system for connection scheduling. The system contains circuitry operable for determining if a job is available for scheduling. Also included is circuitry operable for determining, in response to the circuitry operable for determining if a job is available, if a session is available. The session is included in a pool of sessions, the pool of sessions having a preselected one of a set of priority levels corresponding to a priority level of the job. The session effects an execution of the job. The system also has circuitry operable for launching the session to effect the execution of the job, if the session is available.
- Additionally, there is provided, in a third form, a computer program product embodied in a machine readable storage medium. The program product for job scheduling includes instructions for determining if a job is available for scheduling. The program product also contains instructions for determining, in response to the instructions for determining if the job is available, if a session is available, wherein the session is included in a pool of sessions, the pool of sessions having a preselected one of a set of priority levels corresponding to a priority level of the job. The session effects an execution of the available job. The program product also contains instructions for launching the session to effect the execution of the job, if the session is available.
- For a more complete understanding of the present invention, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
-
FIG. 1 illustrates, in block diagram form, a data processing network in accordance with an embodiment of the present invention; -
FIG. 2 illustrates, in block diagram form, a data processing system implemented in accordance with an embodiment of the present invention; -
FIG. 3A illustrates, in flowchart form, a connection management thread in accordance with an embodiment of the present invention; -
FIG. 3B illustrates, in flowchart form, a session thread in accordance with an embodiment of the present invention; -
FIG. 3C illustrates, in flowchart form, an error handling thread which may be used with the session thread of FIG. 3B; -
FIG. 3D illustrates, in flowchart form, a retry timer thread, in accordance with an embodiment of the present invention; -
FIG. 4 illustrates, in flowchart form, a methodology implemented to determine priority resource availability in accordance with an embodiment of the present invention; and -
FIG. 5 schematically illustrates a repeater connection list which may be used in an embodiment of the present invention. - The present invention is a method and apparatus for managing connections in a system for distributing and collecting data between an originating source system and a plurality of endpoint systems (which may also be referred to as “endpoint nodes” or simply “endpoints”). The method and apparatus provide a mechanism for managing a plurality of sessions, or threads, for sending a distribution to, or receiving results information from, the corresponding target machine. Sessions are allocated in accordance with a preselected distribution priority. Each distribution priority level has a predetermined number of sessions available to it in a corresponding session pool. By scheduling distributions based on their priorities, large, low priority distributions no longer “bottleneck” small, high priority distributions.
- According to the principles of the present invention, the present invention has an originating source system followed by repeaters. The use of repeaters allows data to be delivered essentially simultaneously to a large number of machines. The present invention can be scaled to handle more destinations by adding repeaters. In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention. However, it will be obvious to those skilled in the art that the present invention may be practiced without such specific details. In other instances, well-known circuits have been shown in block diagram form in order not to obscure the present invention in unnecessary detail. For the most part, details concerning timing considerations and the like have been omitted inasmuch as such details are not necessary to obtain a complete understanding of the present invention and are within the skills of persons of ordinary skill in the relevant art.
- A more detailed description of the implementation of the present invention will subsequently be provided. Prior to that discussion, an environment in which the present invention may be implemented will be described in greater detail.
-
FIG. 1 illustrates a communications network 100. The subsequent discussion and description of FIG. 1 are provided to illustrate an exemplary environment used by the present invention. - The network system 100 includes
source system 101 and one or more fan out/collector nodes, or repeaters, such as repeaters 110 and 111 of FIG. 1. Repeaters 110 and 111 directly service endpoints 112-114 or 115-117, respectively, and may be referred to as “gateway” repeaters (or, simply, “gateways”). -
Source system 101 provides distribution services with respect to resources 112-117. Note that source system 101 and endpoints 112-117 interface to repeaters 110 and 111; repeaters 110 and 111, source system 101 and endpoints 112-117 each may include a “repeater.” In other words, as an artisan of ordinary skill would recognize, a repeater may be a logical element that may be, but is not necessarily, associated with a physical, stand-alone hardware device in network 100. Repeater 110 may be the primary repeater through which resources 112-114 receive their data transfers, and repeater 111, likewise, may primarily service endpoints 115-117. The connection management methodologies described below in conjunction with FIGS. 3A-3D may be performed by the repeaters, including gateway repeaters, of FIG. 1. - However, network system 100 provides cross connections in order to provide redundant, parallel communication paths should the primary communication path to the endpoint become unavailable. For example, in
FIG. 1, endpoint 114 has a primary pathway to source system 101 through repeater 110. (In this context, source system 101 may also be referred to as a source node.) Should repeater 110 become unavailable, source system 101 can transfer bulk data to endpoint 114 via an alternative pathway through repeater 111. In other words, endpoint 114 may receive data via repeaters 110 or 111. Source system 101 maintains database 120 for storing information used in managing a data distribution. - Referring next to
FIG. 2, an example is shown of a data processing system 200 which may be used to implement a source system, such as system 101, or repeaters, such as repeaters 110 and 111, with components coupled to system bus 212. Read only memory (“ROM”) 216 is coupled to the system bus 212 and includes a basic input/output system (“BIOS”) that controls certain basic functions of the data processing system 200. Random access memory (“RAM”) 214, I/O adapter 218, and communications adapter 234 are also coupled to the system bus 212. I/O adapter 218 may be a small computer system interface (“SCSI”) adapter that communicates with a disk storage device 220. Disk storage device 220 may be used to hold database 120, FIG. 1. Communications adapter 234 interconnects bus 212 with the network as well as outside networks, enabling the data processing system to communicate with other such systems. Input/Output devices are also connected to system bus 212 via user interface adapter 222 and display adapter 236. Keyboard 224, track ball 232, mouse 226 and speaker 228 are all interconnected to bus 212 via user interface adapter 222. Display monitor 238 is connected to system bus 212 by display adapter 236. In this manner, a user is capable of inputting to the system through the keyboard 224, trackball 232 or mouse 226 and receiving output from the system via speaker 228 and display 238. - Implementations of the invention include implementations as a computer system programmed to execute the method or methods described herein, and as a computer program product. According to the computer system implementation, sets of instructions for executing the method or methods are resident in the
random access memory 214 of one or more computer systems configured generally as described above. Until required by the computer system, the set of instructions may be stored as a computer program product in another computer memory, for example, in disk drive 220 (which may include a removable memory such as an optical disk or floppy disk for eventual use in the disk drive 220). Further, the computer program product can also be stored at another computer and transmitted when desired to the user's work station by a network or by an external network such as the Internet. One skilled in the art would appreciate that the physical storage of the sets of instructions physically changes the medium upon which it is stored so that the medium carries computer readable information. The change may be electrical, magnetic, chemical, biological, or some other physical change. While it is convenient to describe the invention in terms of instructions, symbols, characters, or the like, the reader should remember that all of these and similar terms should be associated with the appropriate physical elements. - Note that the invention may describe terms such as comparing, validating, selecting, identifying, or other terms that could be associated with a human operator. However, for at least a number of the operations described herein which form part of at least one of the embodiments, no action by a human operator is desirable. The operations described are, in large part, machine operations processing electrical signals to generate other electrical signals.
- Refer now to
FIG. 3A illustrating a connection management thread 300 which may be used in an embodiment of the present invention. Thread 300 may be used by repeaters, such as repeaters 110 and 111 of FIG. 1. Distributions or results information to be transferred by a repeater are enqueued in an output “job” queue in accordance with the assigned priority of the distribution. (Distributions targeted for ultimate delivery to an endpoint and results information for a report-to system may collectively be referred to simply as “jobs.”) The transfer of distributions and results information is discussed in the commonly owned co-pending U.S. Patent Application entitled “An Apparatus and Method for Distributing and Collecting Bulk Data between a Large Number of Machines,” incorporated herein by reference. As described therein, the distribution or results information may be assigned one of three priority levels, low, medium, or high, in an embodiment of the present invention. Distributions are enqueued in order of priority. - In
step 302, the job queue is locked, and, while the job queue is not empty, step 304, jobs are scheduled until the queue is exhausted. When the queue is exhausted, in step 306 the output queue is unlocked, whereby new distributions received by the repeater may be enqueued for sending to a target repeater or endpoint, as appropriate. Scheduling is initiated when, in step 308, the output job queue is no longer empty. Step 308 constitutes an event loop, wherein scheduling is initiated in response to an event such as a “running job completed” event. After a running job completes, a session, as described below, becomes available. A “new job” event, signaling that a new job has arrived at the repeater performing thread 300, will also initiate scheduling. -
thread 300 proceeds through the queue to schedule distributions for sending to a target repeater or endpoint as appropriate. In step 307 it is determined if a current job is ready for scheduling. If not, instep 308thread 300 proceeds to the next job. A job may be determined to be ready for scheduling by determining the job state. If the job state has a first predetermined value, which in an embodiment of the present invention may be referred to as “WAITING”, then the job is ready for dispatch. The setting of the job state for a particular job will be described below in conjunction with steps 382-384, and 386,FIG. 3C . - If, in step 307, the current job is ready for scheduling, in
step 310 the session pools are searched. As indicated hereinabove, distributions may have one of three priorities, high, medium, or low. Each repeater has a pool of sessions, or threads, which are run to transfer data to a target system, either a target repeater or an target endpoint, as appropriate for the particular distribution. As discussed further below, a logical connection to the target system is established for a new job unless the connection is already present because of an ongoing distribution to the same target. A connection can have multiple logical sessions associated with it. Each new session between repeaters or a repeater and an endpoint will establish a network channel or path to transfer data, so between a repeater and an endpoint or another repeater, there can be multiple sessions to execute jobs. That is, there may be parallel data transfer to the same target. - Each distribution to or results information from a target machine “consumes” a session from the pool of sessions available to the distribution or results information. Each repeater has a pool of sessions allocated for each priority level. A session is used by
thread 300 to run a job on a preselected target. That is, a session initiates a data transfer through an establish network channel and waits for the target to finish processing data. Jobs are dispatched in order of priority, with higher priority jobs being dispatched preferentially over lower priority jobs. A higher priority job may obtain the session from the pool allocated to its priority, and successively lower priority pools as well. - A higher priority distribution may use sessions from its own pool first. If no sessions are found, then the distribution looks for sessions in the lower priority pools. This may be understood by referring now to
FIG. 4 which illustrates in detail a methodology for performingstep 310 in accordance with this protocol. In step 410, it is determined if the distribution has a high priority level. If not, then instep 420, it is determined if the distribution has a medium priority level. If not, then the distribution has a low priority,step 430 and, instep 435, it is determined if a low priority to low 10 priority session is available. If a low priority session is available, then instep 450, methodology 500 signals that a session is available. Conversely, if no low priority sessions are available instep 435, instep 440 methodology 400 signals that no session is available. - Returning to step 410, if the job is determined to have a high priority, then in
step 415 it is determined if a high priority session is available. If so, then methodology 400 proceeds to step 450. Otherwise, if a high priority session is unavailable, that is, fully used by other jobs, then instep 425 it is determined if a medium priority session is available Again, if session is available, then step 450 is performed; otherwise, instep 435, it is determined if session allocated to the low priority pool is available. If so,step 435, then step 450 is performed; otherwise, no connections are available and methodology 400 proceeds to step 440. - Similarly, if in step 410, it has been determined that the job is not a high priority distribution, it is determined if the job has a medium priority,
step 420. If not, it must again be a low priority job.step 430, previously described. Otherwise, instep 420 it is a medium priority job, and instep 425 it is determined if a medium priority session is available. As before, if no medium priority sessions are available, it is determined if a low priority session is available,step 435. In this manner, a with a given priority level can use the number of sessions reserved for its priority level plus any sessions allocated to lower priority levels. If no sessions are available at the assigned priority or lower, the methodology signals no available sessions,step 440, as previously described. - Returning to
FIG. 3A, if, in step 312, it is determined a session is available to the job, as reported in step 450, FIG. 4, then the session is reserved in step 314. Otherwise, if it is reported not available, step 440, FIG. 4, step 312 proceeds by the “No” branch to step 307. Because, as previously described, jobs are enqueued in priority order, the unavailability of a session for the current job also means that the succeeding jobs cannot be scheduled, because they have a priority that is the same as or lower than that of the current job. Thread 300 then loops in step 308 for an event indicating that a session has become available, which then triggers thread 300 via the “Yes” branch in step 308. Similarly, as discussed below, an “UNREACHABLE” state job may become available, whereby the state goes to “WAITING”, which will also trigger scheduling. - It is then determined in
thread 300 if a connection object, or simply connection, has been established for the target system (which may also be referred to as a target node). Thus, in step 316, a connection list is searched to determine if a connection has been established for the target node. The connection list contains one or more connection objects that are a logical representation of a communication channel, or path, between the repeater running the thread and the target. A connection object maintains the status of the channel. For example, if the channel breaks, the status of the connection may change from a first value, say “UP”, to a second value, say “DOWN.” The connection object also includes all of the active sessions associated with the target. - If an existing connection does not exist, in step 318 a new connection object is created, and in step 320 a new session is created and run. Recall that a connection can have multiple sessions associated with it, and each session is a thread. For example, schematically shown in
FIG. 5 is a repeater connection list with three connections, C-0, C-1 and C-2. Connection C-0 has two active sessions, S-0 and S-1, running Job 0 and Job 1, respectively, for the target, repeater R-1. Similarly, connection C-1 has two sessions, S-2 and S-3, running Job 2 and Job 3, respectively, on a target endpoint, E-2. An exclusive session, S-4, runs Job 4 on connection C-2, for target endpoint E-3. No other jobs will run on connection C-2 until S-4 ends. If, however, in step 316 a connection exists, it is determined if the connection is being exclusively used by a job, step 322. If the existing connection is exclusive, then the session reserved in step 314 is released, step 324, and thread 300 proceeds to step 308 to schedule the next job. - Returning to
FIG. 3A, the creation of a new session, in step 320, launches a session, which, in an embodiment of the present invention, may be session 350, FIG. 3B. In step 352 it is determined if the target system is accessible. The target system may be inaccessible if, for example, the target system is unavailable or due to a network outage. If a target is inaccessible, in step 354 a retry thread is launched. This will be discussed subsequently in conjunction with FIG. 3C below. In step 362, thread 350 ends. If, however, in step 352 the target is accessible, in step 356 the job is executed: data is transferred to the target and, while the target is processing the data, the session waits and then posts results information to the repeater performing the session. Recall that repeaters transfer results information to one or more report-to nodes, as discussed in detail in the commonly owned co-pending U.S. Patent Application entitled “An Apparatus and Method for Distributing and Collecting Bulk Data Between a Large Number of Machines,” (Attorney Docket No. AT9-99-274) incorporated herein by reference. It is determined, in step 358, if the distribution is complete. If not, thread 350 returns to step 352. Otherwise, on completion of the execution of the distribution, in step 360 the session is released and the thread ends in step 362. - Refer now to
FIG. 3C describing retry thread 370, which may be launched in response to a distribution failure arising from the unavailability of the target in step 354 above. When an error occurs, the session used to run the job is returned to the corresponding pool. In step 371, thread 370 releases the session, whereby it is returned to the pool having the priority level of the session. In step 372 it is determined if a fatal error has occurred. For example, if the distribution segment, in executing the distribution in step 356 above, is too large to fit into the available disk space, a fatal error will result. If, in step 372, it is determined that a fatal error has occurred, in step 374 the job state is set to “FAILED”. In step 376, a results information segment is built which includes the job state. As described in the commonly owned co-pending U.S. patent application entitled “An Apparatus and Method for Distributing and Collecting Bulk Data between a Large Number of Machines,” incorporated herein by reference, results information is generated by repeaters and endpoint systems and transmitted to one or more report-to systems. In the event of an error, the corresponding results information that is sent to one or more preselected report-to systems in accordance with the methodologies described in the aforesaid co-pending U.S. patent application may be generated in step 376. - Returning to step 372, if the error was
non-fatal, step 372 proceeds by the “No” branch. If, in step 382, the target system was unavailable on the first attempt to connect to the target system, in step 384 the job state is set to “UNREACHABLE”. Otherwise, the connection has broken during the execution of the distribution, and in step 386 the job state is set to “INTERRUPTED”. - It is then determined in
step 388 if a retry cutoff has been reached. Each connection has a predetermined connection retry time interval that is used to supply a maximum amount of time over which retries for failed jobs will be attempted. Additionally, in step 388 it is determined if an application specified “no retry.” A user, for example, may wish to take corrective action quickly rather than wait for a predetermined number of retries to elapse before receiving notification that the distribution has failed. If the connection retry interval has elapsed or “no retry” is specified, in step 388 the “Yes” branch is followed. In step 389, the job state is tested, and if “UNREACHABLE,” then the job fails and the job state is set to “FAILED” in step 374. Otherwise, step 389 proceeds to step 390. - In
step 390 the job state is set to “UNREACHABLE” and a login callback method is registered with the corresponding gateway of the target endpoint system. The login callback will be invoked and a login notification thereby provided to the repeater performing management thread 300. Thread 370 proceeds to step 379, discussed below. - If the target system is a repeater rather than an endpoint, there is no retry cutoff because there is no login event from repeaters. Thus, if the target system is a repeater,
step 388 proceeds by the “No” branch to step 392, and in step 392 a retry timer thread is launched, bypassing step 390. Likewise, if the target is an endpoint system and the retry cutoff has not expired, step 388 proceeds by the “No” branch. - Referring now to
FIG. 3D, there is illustrated therein retry timer thread 340. In step 342 the retry timer is started. In step 344 it is determined if the retry timer has expired. If not, thread 340 loops until the timer expires and then, in step 346, on expiration of the timer, the job state is set to “WAITING”. In step 348, timer thread 340 ends. - Returning to
FIG. 3C, after launching the retry timer thread in step 392, thread 370 proceeds to step 379. In step 379, it is determined if the job state is “WAITING” or “FAILED.” The job state may be set to “WAITING” in step 346, FIG. 3D. If so, then thread 370 notifies thread 300, FIG. 3A, signaling an event, and terminates, step 385. Otherwise, in step 381 thread 370 loops until the endpoint logs in and, in step 383, the job state is set equal to “WAITING,” and thread 370 again notifies thread 300, step 308. - Returning to
FIG. 3A, after the job state is set to “WAITING” in step 346, FIG. 3D, or in step 383, FIG. 3C, depending on the path followed in step 379, FIG. 3C, step 307 of thread 300 then determines that the distribution that launched error handling thread 370 is ready for scheduling, and then initiates a session to execute the distribution in steps 310-320, as previously described. - Although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.
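Taken together, retry thread 370 (FIG. 3C) and retry timer thread 340 (FIG. 3D) amount to a small state machine over the job states. The sketch below condenses that flow under one reading of the figures; the function and parameter names are hypothetical, and `notify` stands in for the event that wakes connection management thread 300:

```python
import threading
from enum import Enum

class JobState(Enum):
    WAITING = "WAITING"
    UNREACHABLE = "UNREACHABLE"
    INTERRUPTED = "INTERRUPTED"
    FAILED = "FAILED"

def handle_error(job, fatal, first_attempt, cutoff_reached,
                 target_is_repeater, retry_interval, notify):
    """Returns the retry Timer when one is launched, else None."""
    if fatal:
        job.state = JobState.FAILED                # steps 372/374
        return None
    # Steps 382-386: classify the non-fatal failure.
    job.state = (JobState.UNREACHABLE if first_attempt
                 else JobState.INTERRUPTED)
    # Step 388: repeaters have no retry cutoff (no login event from repeaters).
    if cutoff_reached and not target_is_repeater:
        if job.state is JobState.UNREACHABLE:      # step 389
            job.state = JobState.FAILED
        else:
            job.state = JobState.UNREACHABLE       # step 390: a login callback
            # would be registered with the endpoint's gateway here; on login,
            # the state returns to WAITING and thread 300 is notified.
        return None
    # Step 392 / FIG. 3D: launch the retry timer thread.
    def on_expiry():
        job.state = JobState.WAITING               # step 346
        notify()                                   # re-triggers scheduling
    timer = threading.Timer(retry_interval, on_expiry)
    timer.start()
    return timer
```

On expiry of the timer the job re-enters the “WAITING” state, so thread 300's next scheduling pass picks it up, as described for step 307.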
Claims (23)
1-11. (canceled)
12. A data processing system for connection scheduling within a network comprising a plurality of nodes, comprising:
circuitry operable for determining if a job is available for scheduling;
circuitry operable for determining, in response to said circuitry operable for determining if said job is available, if a session is available, wherein said session is included in a pool of sessions, said pool of sessions having a preselected one of a set of priority levels corresponding to a priority level of said job and wherein said session effects an execution of said job;
circuitry operable for creating a network connection to a target system for said execution of said job, wherein said target system is another node of the networked data processing system;
circuitry operable for launching said session to effect said execution of said job, if said session is available; and
circuitry operable for launching an error handling thread in response to an error condition, said error handling thread releasing said session.
13. The system of claim 12 wherein said session comprises a thread.
14. (canceled)
15. The system of claim 12 further comprising circuitry operable for determining if said network connection is an existing network connection, and wherein said circuitry operable for creating said network connection is operated if said network connection is not an existing network connection, and wherein said session is launched using said existing network connection if said network connection is an existing network connection such that said existing network connection supports multiple logical sessions.
16. (canceled)
17. The system of claim 12 further comprising circuitry operable for changing value of a job state from a first value to a second value in response to said launching of said error handling thread.
18. The system of claim 17 wherein said first value signals that said job is available for scheduling.
19. The system of claim 12 further comprising circuitry operable for retrying said steps of determining if a job is available for scheduling, determining if a session is available, and launching said session, in response to an error condition.
20. The system of claim 19 wherein said circuitry operable for retrying is operated until a predetermined time interval has elapsed.
21. The system of claim 20 further comprising circuitry operable for registering a callback method in response to an expiry of said predetermined time interval.
22. The system of claim 21 wherein said circuitry operable for determining if a job is available for scheduling, determining if a session is available, and launching said session are operated in response to an invoking of said callback method by said target system.
23. A computer program product embodied in a machine readable storage medium, the program product for job scheduling comprising instructions for:
determining if a job is available for scheduling;
determining, in response to instructions for determining if said job is available, if a session is available, wherein said session is included in a pool of sessions, said pool of sessions having a preselected one of a set of priority levels corresponding to a priority level of said job and wherein said session effects an execution of said job;
creating a network connection to a target system for said execution of said job, wherein said target system is another node of the networked data processing system;
launching said session to effect said execution of said job, if said session is available; and
launching an error handling thread in response to an error condition, said error handling thread releasing said session.
24. The program product of claim 23 wherein said session comprises a thread.
25. (canceled)
26. The program product of claim 23 further comprising instructions for determining if said network connection is an existing network connection, and wherein said instructions for creating said network connection are performed if said connection is not an existing network connection, and wherein said session is launched using said existing network connection if said network connection is an existing network connection such that said existing network connection supports multiple logical sessions.
27. (canceled)
28. The program product of claim 23 further comprising instructions for changing value of a job state from a first value to a second value in response to said launching of said error handling thread.
29. The program product of claim 28 wherein said first value signals that said job is available for scheduling.
30. The program product of claim 23 further comprising programming for retrying said steps of determining if a job is available for scheduling, determining if a session is available, and launching said session, in response to an error condition.
31. The program product of claim 30 wherein said instructions for retrying are repeated until a predetermined time interval has elapsed.
32. The program product of claim 31 further comprising instructions for registering a callback method in response to an expiry of said predetermined time interval.
33. The program product of claim 32 wherein said instructions for determining if a job is available for scheduling, determining if a session is available, and launching said session are executed in response to an invoking of said callback method by said target system.
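Claims 30–33 together describe a bounded retry: repeat the scheduling attempt until a predetermined interval elapses, then register a callback so the target system can re-trigger the same steps later. A sketch under assumed names (`schedule_with_retry`, `try_schedule`, `register_callback`), since the claims leave the mechanics open:

```python
import time

def schedule_with_retry(try_schedule, register_callback, timeout, interval=0.01):
    """Retry the scheduling steps until success or until the predetermined
    time interval elapses; on expiry, register a callback that the target
    system may later invoke to re-run those steps (per claims 30-33)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if try_schedule():             # determine job / determine session / launch
            return True
        time.sleep(interval)           # back off before the next attempt
    register_callback(try_schedule)    # invoked by the target when resources free up
    return False
```

When the callback is later invoked, it simply re-executes the same determine-job, determine-session, and launch steps, which is why registering the attempt function itself suffices in this sketch.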
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/170,616 US20090013209A1 (en) | 1999-11-12 | 2008-07-10 | Apparatus for connection management and the method therefor |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/438,436 US7418506B1 (en) | 1999-11-12 | 1999-11-12 | Apparatus for connection management and the method therefor |
US12/170,616 US20090013209A1 (en) | 1999-11-12 | 2008-07-10 | Apparatus for connection management and the method therefor |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/438,436 Continuation US7418506B1 (en) | 1999-11-12 | 1999-11-12 | Apparatus for connection management and the method therefor |
Publications (1)
Publication Number | Publication Date |
---|---|
US20090013209A1 true US20090013209A1 (en) | 2009-01-08 |
Family
ID=39711314
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/438,436 Expired - Fee Related US7418506B1 (en) | 1999-11-12 | 1999-11-12 | Apparatus for connection management and the method therefor |
US12/170,616 Abandoned US20090013209A1 (en) | 1999-11-12 | 2008-07-10 | Apparatus for connection management and the method therefor |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/438,436 Expired - Fee Related US7418506B1 (en) | 1999-11-12 | 1999-11-12 | Apparatus for connection management and the method therefor |
Country Status (1)
Country | Link |
---|---|
US (2) | US7418506B1 (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7418506B1 (en) * | 1999-11-12 | 2008-08-26 | International Business Machines Corporation | Apparatus for connection management and the method therefor |
US7043758B2 (en) * | 2001-06-15 | 2006-05-09 | Mcafee, Inc. | Scanning computer files for specified content |
JP4603256B2 (en) * | 2003-12-01 | 2010-12-22 | 日本電気株式会社 | User authentication system |
US8611378B2 (en) | 2007-05-29 | 2013-12-17 | Red Hat, Inc. | Message handling multiplexer |
US8505028B2 (en) * | 2007-05-30 | 2013-08-06 | Red Hat, Inc. | Flow control protocol |
US7921227B2 (en) * | 2007-05-30 | 2011-04-05 | Red Hat, Inc. | Concurrent stack |
US7992153B2 (en) * | 2007-05-30 | 2011-08-02 | Red Hat, Inc. | Queuing for thread pools using number of bytes |
US7733863B2 (en) * | 2007-05-30 | 2010-06-08 | Red Hat, Inc. | Out of band messages |
US8886787B2 (en) * | 2009-02-26 | 2014-11-11 | Microsoft Corporation | Notification for a set of sessions using a single call issued from a connection pool |
US9621964B2 (en) * | 2012-09-30 | 2017-04-11 | Oracle International Corporation | Aborting data stream using a location value |
US10802890B2 (en) * | 2015-07-20 | 2020-10-13 | Oracle International Corporation | System and method for multidimensional search with a resource pool in a computing environment |
US10313477B2 (en) * | 2015-07-20 | 2019-06-04 | Oracle International Corporation | System and method for use of a non-blocking process with a resource pool in a computing environment |
Citations (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4642758A (en) * | 1984-07-16 | 1987-02-10 | At&T Bell Laboratories | File transfer scheduling arrangement |
US5414845A (en) * | 1992-06-26 | 1995-05-09 | International Business Machines Corporation | Network-based computer system with improved network scheduling system |
US5442730A (en) * | 1993-10-08 | 1995-08-15 | International Business Machines Corporation | Adaptive job scheduling using neural network priority functions |
US5630128A (en) * | 1991-08-09 | 1997-05-13 | International Business Machines Corporation | Controlled scheduling of program threads in a multitasking operating system |
US5644768A (en) * | 1994-12-09 | 1997-07-01 | Borland International, Inc. | Systems and methods for sharing resources in a multi-user environment |
US5737498A (en) * | 1995-07-11 | 1998-04-07 | Beckman Instruments, Inc. | Process automation method and apparatus |
US5742829A (en) * | 1995-03-10 | 1998-04-21 | Microsoft Corporation | Automatic software installation on heterogeneous networked client computer systems |
US5812469A (en) * | 1996-12-31 | 1998-09-22 | Logic Vision, Inc. | Method and apparatus for testing multi-port memory |
US5887141A (en) * | 1994-12-02 | 1999-03-23 | Xcellenet, Inc. | Systems for work assignment and distribution from a server to remote/mobile nodes by a hierarchy of session work objects into which events can be assigned |
US5920546A (en) * | 1997-02-28 | 1999-07-06 | Excel Switching Corporation | Method and apparatus for conferencing in an expandable telecommunications system |
US6014760A (en) * | 1997-09-22 | 2000-01-11 | Hewlett-Packard Company | Scheduling method and apparatus for a distributed automated testing system |
US6021425A (en) * | 1992-04-03 | 2000-02-01 | International Business Machines Corporation | System and method for optimizing dispatch latency of tasks in a data processing system |
US6058424A (en) * | 1997-11-17 | 2000-05-02 | International Business Machines Corporation | System and method for transferring a session from one application server to another without losing existing resources |
US6105067A (en) * | 1998-06-05 | 2000-08-15 | International Business Machines Corp. | Connection pool management for backend servers using common interface |
US6134313A (en) * | 1998-10-23 | 2000-10-17 | Toshiba America Information Systems, Inc. | Software architecture for a computer telephony system |
US6141677A (en) * | 1995-10-13 | 2000-10-31 | Apple Computer, Inc. | Method and system for assigning threads to active sessions |
US6167537A (en) * | 1997-09-22 | 2000-12-26 | Hewlett-Packard Company | Communications protocol for an automated testing system |
US6182110B1 (en) * | 1996-06-28 | 2001-01-30 | Sun Microsystems, Inc. | Network tasks scheduling |
US6182120B1 (en) * | 1997-09-30 | 2001-01-30 | International Business Machines Corporation | Method and system for scheduling queued messages based on queue delay and queue priority |
US6208661B1 (en) * | 1998-01-07 | 2001-03-27 | International Business Machines Corporation | Variable resolution scheduler for virtual channel communication devices |
US6249836B1 (en) * | 1996-12-30 | 2001-06-19 | Intel Corporation | Method and apparatus for providing remote processing of a task over a network |
US6260077B1 (en) * | 1997-10-24 | 2001-07-10 | Sun Microsystems, Inc. | Method, apparatus and program product for interfacing a multi-threaded, client-based API to a single-threaded, server-based API |
US6388687B1 (en) * | 1999-04-28 | 2002-05-14 | General Electric Company | Operator-interactive display menu showing status of image transfer to remotely located devices |
US6477569B1 (en) * | 1998-11-20 | 2002-11-05 | Eugene Sayan | Method and apparatus for computer network management |
US6487577B1 (en) * | 1998-06-29 | 2002-11-26 | Intel Corporation | Distributed compiling |
US6490273B1 (en) * | 1998-08-05 | 2002-12-03 | Sprint Communications Company L.P. | Asynchronous transfer mode architecture migration |
US6496851B1 (en) * | 1999-08-04 | 2002-12-17 | America Online, Inc. | Managing negotiations between users of a computer network by automatically engaging in proposed activity using parameters of counterproposal of other user |
US6560626B1 (en) * | 1998-04-02 | 2003-05-06 | Microsoft Corporation | Thread interruption with minimal resource usage using an asynchronous procedure call |
US6567840B1 (en) * | 1999-05-14 | 2003-05-20 | Honeywell Inc. | Task scheduling and message passing |
US6671713B2 (en) * | 1994-12-12 | 2003-12-30 | Charles J. Northrup | Execution of dynamically configured application service in access method-independent exchange |
US6779182B1 (en) * | 1996-05-06 | 2004-08-17 | Sun Microsystems, Inc. | Real time thread dispatcher for multiprocessor applications |
US6782410B1 (en) * | 2000-08-28 | 2004-08-24 | Ncr Corporation | Method for managing user and server applications in a multiprocessor computer system |
US6801943B1 (en) * | 1999-04-30 | 2004-10-05 | Honeywell International Inc. | Network scheduler for real time applications |
US6829764B1 (en) * | 1997-06-23 | 2004-12-07 | International Business Machines Corporation | System and method for maximizing usage of computer resources in scheduling of application tasks |
US6915509B1 (en) * | 2000-06-28 | 2005-07-05 | Microsoft Corporation | Method and system for debugging a program |
US7020878B1 (en) * | 1998-08-28 | 2006-03-28 | Oracle International Corporation | System for allocating resource using the weight that represents a limitation on number of allowance active sessions associated with each resource consumer group |
US7418506B1 (en) * | 1999-11-12 | 2008-08-26 | International Business Machines Corporation | Apparatus for connection management and the method therefor |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5925096A (en) * | 1994-12-30 | 1999-07-20 | Intel Corporation | Method and apparatus for localized preemption in an otherwise synchronous, non-preemptive computing environment |
US6032193A (en) * | 1997-03-20 | 2000-02-29 | Niobrara Research And Development Corporation | Computer system having virtual circuit address altered by local computer to switch to different physical data link to increase data transmission bandwidth |
US6411982B2 (en) * | 1998-05-28 | 2002-06-25 | Hewlett-Packard Company | Thread based governor for time scheduled process execution |
US6289371B1 (en) * | 1998-09-30 | 2001-09-11 | Hewlett-Packard Company | Network scan server support method using a web browser |
US6502121B1 (en) * | 1999-01-27 | 2002-12-31 | J. D. Edwards World Source Company | System and method for processing a recurrent operation |
- 1999
  - 1999-11-12: US US09/438,436 patent/US7418506B1/en, not_active, Expired - Fee Related
- 2008
  - 2008-07-10: US US12/170,616 patent/US20090013209A1/en, not_active, Abandoned
Also Published As
Publication number | Publication date |
---|---|
US7418506B1 (en) | 2008-08-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20090013209A1 (en) | Apparatus for connection management and the method therefor | |
US6839752B1 (en) | Group data sharing during membership change in clustered computer system | |
US5870604A (en) | Job execution processor changing method and system, for load distribution among processors | |
US6393455B1 (en) | Workload management method to enhance shared resource access in a multisystem environment | |
US7606867B1 (en) | Ordered application message delivery using multiple processors in a network element | |
JP6963168B2 (en) | Information processing device, memory control method and memory control program | |
US6816860B2 (en) | Database load distribution processing method and recording medium storing a database load distribution processing program | |
US11734073B2 (en) | Systems and methods for automatically scaling compute resources based on demand | |
US20090113050A1 (en) | Method and system for automated session resource clean-up in a distributed client-server environment | |
US20070033205A1 (en) | Method or apparatus for selecting a cluster in a group of nodes | |
CN109582335B (en) | Method, device and equipment for on-line upgrading of non-interrupt storage cluster nodes | |
US20060294239A1 (en) | Method and system for controlling computer in system | |
US7085831B2 (en) | Intelligent system control agent for managing jobs on a network by managing a plurality of queues on a client | |
CN101533417A (en) | A method and system for realizing ETL scheduling | |
US20010018710A1 (en) | System and method for improved automation of a computer network | |
US7996507B2 (en) | Intelligent system control agent for managing jobs on a network by managing a plurality of queues on a client | |
JP3860966B2 (en) | Delivery and queuing of certified messages in multipoint publish / subscribe communication | |
US20030028640A1 (en) | Peer-to-peer distributed mechanism | |
JP4607999B2 (en) | How to handle lock-related inconsistencies | |
US6912586B1 (en) | Apparatus for journaling during software deployment and method therefor | |
JP2005309838A (en) | Information management system and information management method, and information management sub-system therefor | |
US8180846B1 (en) | Method and apparatus for obtaining agent status in a network management application | |
US6990608B2 (en) | Method for handling node failures and reloads in a fault tolerant clustered database supporting transaction registration and fault-in logic | |
US6314462B1 (en) | Sub-entry point interface architecture for change management in a computer network | |
CN108243205A (en) | A kind of method, equipment and system for being used to control cloud platform resource allocation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE |