US20020083063A1 - Software and data processing system with priority queue dispatching - Google Patents

Software and data processing system with priority queue dispatching

Info

Publication number
US20020083063A1
Authority
US
United States
Prior art keywords
task
dispatching priority
lock
temporary
priority
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US09/748,404
Inventor
David Egolf
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bull HN Information Systems Inc
Original Assignee
Bull HN Information Systems Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bull HN Information Systems Inc filed Critical Bull HN Information Systems Inc
Priority to US09/748,404
Assigned to BULL HN INFORMATION SYSTEMS INC. Assignors: EGOLF, DAVID A.
Priority to EP01992090A (EP1346278A4)
Priority to PCT/US2001/048127 (WO2002052370A2)
Publication of US20020083063A1
Legal status: Abandoned

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 - Task transfer initiation or dispatching
    • G06F9/4843 - Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 - Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F9/52 - Program synchronisation; Mutual exclusion, e.g. by means of semaphores


Abstract

A dispatcher in a multiprogramming or multitasking operating system in a data processing system selects the next task to be executed by an available processor. Access to shared resources is controlled by locks and queues, where tasks are queued when they find the shared resource locked, and dequeued one by one as the lock is unlocked. When a lock is unlocked, the first task in a FIFO queue is dispatched with a temporary priority at least as high as any in the queue. This first task must retain this temporary urgency until it releases the resource or until its urgency is further increased due to the addition of a higher priority task to the resource queue or a dependent resource queue. This prevents starvation of higher priority tasks waiting in the FIFO queue.

Description

    FIELD OF THE INVENTION
  • The present invention generally relates to data processing system operating system software, and more specifically to a dispatcher in such operating system software. [0001]
  • BACKGROUND OF THE INVENTION
  • A data processing system comprises one or more computer processors plus their peripheral and support devices. Operation of a data processing system is controlled by an operating system. In a multiprogramming or multitasking operating system, work units, such as tasks, activities, jobs, and threads (hereinafter “tasks”), are scheduled and/or dispatched for execution by a portion of the operating system termed the dispatcher or scheduler (hereinafter, the “dispatcher”). [0002]
  • Access to shared resources in a data processing system is typically controlled by locks and queues. One task locks a lock controlling a shared resource, then accesses the shared resource. Upon completion of its access to that shared resource, that task unlocks the lock. Tasks arriving at the lock while it is locked typically enter a queue. When the task that has locked the lock unlocks it, and one or more tasks are in the queue for the lock, one (or more) of the tasks in the queue is activated and dequeued so that it can take its turn locking the resource. [0003]
  • The specifics of how this dequeueing is done vary by operating system and even within operating systems, depending on what type of resource is being serialized. In one case, in a multiprocessor system, instead of entering a queue, processors keep trying the lock until they finally find it unlocked and can lock it themselves. If failing to lock the lock does not cause an interrupt, these are termed “spin locks”. They are typically restricted to critical operating system functions of very short duration. In the case where an interrupt occurs when the task fails to lock a lock, the task may be placed in a queue, or may just be redispatched later. One problem with the latter is that unless few tasks are competing for a resource, the redispatching can be costly and inefficient. [0004]
  • When tasks try to lock a lock and fail and are placed in a queue, they can be redispatched in a number of orders. For example, the highest priority task may be the task dispatched when the lock is unlocked. However, this often leads to starvation of lower priority tasks. Alternatively, the tasks can be dispatched in a First In-First Out (FIFO) order. This prevents starvation of lower priority tasks. [0005]
  • However, this queuing methodology can cause severe problems in certain situations. For example, if the first task in the queue has a sufficiently low priority, then it may never be selected for a dispatch. This is not really a problem unless there are higher priority tasks behind it in the FIFO queue. In that case, these higher priority tasks in the queue may be starved. This is termed a “Priority Inversion” problem. Upon investigation, it turned out that this situation was the cause of the recent failure of one of NASA's Mars missions. [0006]
  • One solution found in many Unix systems is to activate all of the tasks in a resource queue. The system dispatcher will then select the task with the highest priority which will ultimately end up locking the lock and acquiring the shared resource. The other tasks will either acquire the lock by the time they are dispatched, or will find the lock locked and will put themselves back to sleep. [0007]
  • While this effectively eliminates the “Priority Inversion” problem, it does so at a high cost in terms of resources utilized in solving the problem as each task attempting to lock the lock over the shared resource must be dispatched, just to be requeued if it finds the lock again locked. This technique also violates the desired FIFO ordering of the queue, resulting in potential starvation of low priority tasks. [0008]
  • It would thus be advantageous to find a solution to the above-mentioned Priority Inversion problem that avoids both the overhead and the low priority task starvation found in most Unix solutions. [0009]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The features and advantages of the present invention will be more clearly understood from the following detailed description taken in conjunction with the accompanying FIGURES where like numerals refer to like and corresponding parts and in which: [0010]
  • FIG. 1 is a block diagram illustrating a General Purpose Computer 20 in a data processing system; [0011]
  • FIG. 2 is a block diagram illustrating an example of queue of tasks awaiting ownership of a shared resource controlled by a lock; [0012]
  • FIG. 3 is a block diagram illustrating assignment of a temporary dispatching priority to queue members, in accordance with a preferred embodiment of the present invention; [0013]
  • FIG. 4 is a flowchart illustrating operation of a task attempting to lock a lock, in accordance with a preferred embodiment of the present invention; [0014]
  • FIG. 5 is a flowchart illustrating operation of a task unlocking a lock, in accordance with a preferred embodiment of the present invention; and [0015]
  • FIG. 6 is a flow chart illustrating partial operation of a dispatcher, in accordance with the present invention. [0016]
  • DETAILED DESCRIPTION
  • A dispatcher in a multiprogramming or multitasking operating system in a data processing system selects the next task to be executed by an available processor. Access to shared resources is controlled by locks and queues, where tasks are queued when they find the shared resource locked, and dequeued one by one as the lock is unlocked. When a lock is unlocked, the first task in a FIFO queue is dispatched with a temporary priority at least as high as any in the queue. This first task must retain this temporary urgency until it releases the resource or until its urgency is further increased due to the addition of a higher priority task to the resource queue or a dependent resource queue. This prevents starvation of higher priority tasks waiting in the FIFO queue. [0017]
  • FIG. 1 is a block diagram illustrating a General Purpose Computer 20 in a data processing system. The General Purpose Computer 20 has a Computer Processor 22 and Memory 24, connected by a Bus 26. Memory 24 is a relatively high speed machine readable medium and includes Volatile Memories such as DRAM and SRAM, and Non-Volatile Memories such as ROM, FLASH, EPROM, and EEPROM. Also connected to the Bus are Secondary Storage 30, External Storage 32, output devices such as a monitor 34, input devices such as a keyboard 36 (with mouse 37), and printers 38. Secondary Storage 30 includes machine-readable media such as hard disk drives (or DASD). External Storage 32 includes machine-readable media such as floppy disks, removable hard drives, magnetic tape, CD-ROM, and even other computers, possibly connected via a communications line 28. The distinction drawn here between Secondary Storage 30 and External Storage 32 is primarily for convenience in describing the invention. As such, it should be appreciated that there is substantial functional overlap between these elements. Computer software such as data base management software, operating systems, and user programs can be stored in a Computer Software Storage Medium, such as Memory 24, Secondary Storage 30, and External Storage 32. Executable versions of computer software 33 can be read from a Non-Volatile Storage Medium such as External Storage 32, Secondary Storage 30, or Non-Volatile Memory, and loaded for execution directly into Volatile Memory, executed directly out of Non-Volatile Memory, or stored on the Secondary Storage 30 prior to loading into Volatile Memory for execution. [0018]
  • Operation of a data processing system is typically controlled by an operating system. Operating systems may be single user or multi-user. As they become more sophisticated, they tend to be multi-tasking and/or multi-programming, simultaneously supporting multiple execution units, such as tasks, activities, jobs, and threads (hereinafter “tasks”). [0019]
  • Tasks are scheduled and dispatched for execution on a processor by a portion of the operating system termed here the “dispatcher” (also called in some systems the “scheduler”). Tasks typically execute until interrupted, either by a timer, or by some other interrupt, or they voluntarily give up control of the processor to the operating system, at which time another task is dispatched. Typically, the selection of which task to dispatch is determined on a priority basis, with each task having a priority. Higher priority tasks are typically dispatched before lower priority tasks. These priorities may be fixed, or may vary through time. The complexity of such a dispatching priority scheme varies among operating systems, ranging from fairly simple, to extremely complex. One example of a dynamic dispatching priority scheme is found in the OS/2200 operating system from Unisys where I/O bound tasks are given increasing priority. Another example is found in the GCOS 8 operating system from Bull where tasks are grouped by class, and dispatching priority for a class is adjusted dynamically to meet specified class performance goals and quotas. [0020]
  • Another function of a modern operating system is to provide to programs (including the operating system itself) the ability to serialize access to resources shared among a plurality of tasks. In higher performance operating systems, this is often done through the use of locks and queues, with tasks entering a queue when they find a lock locked. In the case of a FIFO queue, the first task in the queue is activated for dispatch by the dispatcher when the lock is unlocked by another task. When dispatched, this newly dispatched task will then typically lock the lock, access the shared resource, then unlock the lock, resulting in activation of the next task in the queue. [0021]
  • Here we define a “critical” section of code or execution to exist for a task between the time that it has attempted to lock a lock protecting a shared resource, and when it ultimately unlocks that lock. Note that usage of shared resources, and thus use of the corresponding locks, can be embedded. Thus, a task may lock one lock controlling access to one shared resource, then attempt to lock a lock for another shared resource. This sort of embedded locking must be done carefully in order to prevent a deadly embrace (deadlock) between two or more tasks. Herein, the term “critical” section shall mean any period during which a task has a lock locked, or is in a queue for any lock. [0022]
  • FIG. 2 is a block diagram illustrating an example of a queue of tasks awaiting ownership of a shared resource controlled by a lock. The example queue is a doubly linked FIFO queue with a queue head 60 pointing at the first task in the queue 61 and the last task in the queue 64. The example queue consists of four tasks 61, 62, 63, 64 chained in a First In/First Out (FIFO) order. The first task 61 has a dispatch priority of “5”. The second task 62 has a dispatch priority of “7”. The third task 63 has a dispatch priority of “3”. The fourth task 64 has a dispatch priority of “4”. In the present invention, when the lock corresponding to the queue is unlocked, the first task 61 in the queue is activated for dispatch. When dispatched, it is dispatched with a temporary dispatching priority of at least “7”, which is the maximum of (5, 7, 3, and 4). When that first task 61 unlocks the lock, and if no more tasks have entered the queue by that time, finding the lock locked, the second task 62 will also be dispatched with a temporary dispatching priority of at least 7. However, when the third task 63 is dispatched, assuming that no other tasks have entered the queue, it is dispatched with a temporary dispatching priority of at least 4, as is the fourth task 64. [0023]
  • It should be noted that a doubly linked FIFO queue is shown. Each task has a forward link to the next task in the queue, as well as a backward link to the previous task in the queue. One reason for a doubly linked FIFO queue will be seen in FIG. 4. However, FIFO queues are often only linked in a forward direction. Such a queue structure is also within this invention. [0024]
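  • The following sketch illustrates the FIG. 2 behavior described above. It is not taken from the patent; the Task class, the temporary_priority() helper, and the task names are assumptions used only to show how the head of the FIFO lock queue would be activated at a temporary dispatching priority at least as high as that of any waiter still in the queue.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    priority: int              # standard dispatching priority

def temporary_priority(queue):
    # Preferred embodiment rule: the activated head task runs at a temporary
    # dispatching priority at least as high as that of any task still queued.
    return max(task.priority for task in queue)

# The FIG. 2 example: four waiters queued in FIFO order with priorities 5, 7, 3, 4.
lock_queue = deque([Task("task 61", 5), Task("task 62", 7),
                    Task("task 63", 3), Task("task 64", 4)])

while lock_queue:
    head = lock_queue[0]
    print(head.name, "activated with temporary priority", temporary_priority(lock_queue))
    lock_queue.popleft()       # head locks, uses, and releases the resource
```

Run as written, this prints temporary priorities of 7, 7, 4, and 4 for tasks 61 through 64, matching the example in the paragraph above.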
  • In the preferred and first alternate embodiments of the present invention, the temporary dispatching priority is determined to be at least as high as that of any task in the queue. In the first alternate embodiment, a highest priority value is maintained for the queue. Then, whenever a task is enqueued on the queue, its priority is compared to the highest priority value for that queue, and if higher, the task's priority replaces the previous highest priority value for the queue. Then, whenever a task releases the resource by unlocking the lock, the remainder of the queue is searched for the highest task priority, and the highest priority value for the queue is adjusted accordingly. [0025]
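  • As a rough sketch of the first alternate embodiment's bookkeeping (the LockQueue class and its method names are assumptions, not terminology from the patent), the queue can carry a cached highest-priority value that is raised whenever a task is enqueued and recomputed from the remaining waiters whenever the resource is released:

```python
from collections import deque

class LockQueue:
    """FIFO queue of waiting tasks plus a cached highest standard priority
    (first alternate embodiment sketch); tasks are (name, priority) tuples."""
    def __init__(self):
        self.waiters = deque()
        self.highest = None          # highest standard priority seen in the queue

    def enqueue(self, name, priority):
        self.waiters.append((name, priority))
        if self.highest is None or priority > self.highest:
            self.highest = priority  # raise the cached value if the newcomer is higher

    def release(self):
        """Called when the lock owner unlocks: activate the head waiter with a
        temporary priority equal to the cached highest value, then rescan the
        remainder of the queue for the new highest priority."""
        name, _priority = self.waiters.popleft()
        temporary = self.highest
        self.highest = max((p for _, p in self.waiters), default=None)
        return name, temporary
```

Feeding the FIG. 2 priorities (5, 7, 3, 4) through enqueue() and release() again yields temporary priorities of 7, 7, 4, and 4.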
  • In a second alternate embodiment, the temporary dispatch priority is set to a fixed value. In a first implementation of this embodiment, the temporary dispatch priority is set to a value that is the highest possible in the data processing system. In a second implementation of this embodiment, it is set to a specified value for the data processing system. This value may be set by the operating system vendor or when the operating system is configured, or may be dynamically modified, for example by system operator command. This may be appropriate if the operating system can guarantee that tasks with this priority will receive a timely dispatch. In a third alternate embodiment, each shared resource queue, or group of shared resource queues, has a specified temporary dispatch priority value. This specified temporary dispatch priority value for the queue or group of queues may again be set by the operating system vendor, may be specified at system configuration time, or may be dynamically modified. [0026]
  • One advantage of the preferred and first alternate embodiments, where the temporary dispatch priority value is dynamically set based on the priority of the tasks in the shared resource queue, is that the additional priority given tasks being dispatched after the unlock of a lock is minimized, thereby minimizing the impact of this invention on the remainder of the data processing system. One advantage of the second and third alternative embodiments where the temporary dispatch priority is set to a specified value is that it minimizes the amount of dispatcher code that needs to be executed to implement this invention. [0027]
  • FIG. 3 is a block diagram illustrating assignment of a temporary dispatching priority to queue members, in accordance with a preferred embodiment of the present invention. An example FIFO queue has a queue head 70 followed by eight tasks: T1 71 with a priority of “5”; T2 72 with a priority of “10”; T3 73 with a priority of “12”; T4 74 with a priority of “6”; T5 75 with a priority of “5”; T6 76 with a priority of “4”; T7 77 with a priority of “6”; and T8 78 with a priority of “4”. Along a horizontal axis, parallel to the queue, is a bar graph 68 showing the temporary dispatch priority of the tasks in the queue. Tasks T1 71, T2 72, and T3 73 will dispatch with a temporary dispatch priority of at least “12”. After task T3 73 releases the resource, the highest priority in the queue is now “6”. Thus, tasks T4 74, T5 75, T6 76, and T7 77 are dispatched with a temporary dispatch priority of at least “6”, which is the standard dispatch priority of task T7 77. Finally, the only task remaining in the queue is task T8 78, with a standard dispatch priority of “4”, which is also the temporary dispatch priority for this task. [0028]
  • If a ninth task, T9 (not shown), enters at the end of the FIFO queue with a standard dispatching priority of “7” before task T1 71 is dispatched, the temporary dispatching priority for tasks T4 74, T5 75, T6 76, T7 77, and T8 78 will accordingly be set to “7”. The temporary dispatching priority for tasks T1 71, T2 72, and T3 73 remains “12” as this is larger than the standard dispatching priority of task T9 of “7”. [0029]
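  • Under this rule, each waiter's temporary dispatching priority is simply the highest standard priority found at or behind its position in the queue, so the FIG. 3 numbers can be reproduced with a single tail-to-head scan. The function below is a hypothetical illustration, not code from the patent:

```python
def temporary_priorities(priorities):
    """Tail-to-head scan: each task's temporary dispatching priority is the
    highest standard priority at or behind its position in the FIFO queue."""
    out, running_max = [], 0
    for p in reversed(priorities):
        running_max = max(running_max, p)
        out.append(running_max)
    return list(reversed(out))

fig3 = [5, 10, 12, 6, 5, 4, 6, 4]          # standard priorities of T1..T8
print(temporary_priorities(fig3))          # [12, 12, 12, 6, 6, 6, 6, 4]

# T9 with standard priority 7 arrives at the tail before T1 is dispatched:
# T1..T3 stay at 12, while T4..T8 (and T9 itself) rise to 7.
print(temporary_priorities(fig3 + [7]))    # [12, 12, 12, 7, 7, 7, 7, 7, 7]
```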
  • In the preferred embodiment, a pointer is maintained at the queue head to the task that has locked the corresponding lock. This pointer is used to update the temporary dispatching priority of the task that has the lock locked to correspond to the greatest dispatching priority of any task in the queue. Thus, whenever a task enters a queue with a dispatching priority higher than some tasks in the queue, the temporary dispatching priority of each of the tasks ahead of that task in the queue, plus that of the task with the lock locked, is compared to the dispatching priority of the newly entered task, and the temporary dispatching priority of these other tasks is updated to this new dispatching priority if it is higher. [0030]
  • In some situations, locks and queues may be embedded within the use of other locks. This is a common occurrence in many multiprocessing operating systems. In such a situation, in the preferred embodiment, when updating the temporary dispatching priority for tasks when a task enters a first queue after finding a lock locked, the task having locked the corresponding lock is checked to see if it is queued in a second queue. If so, all of the tasks ahead of it in that second queue, including the task having the corresponding lock locked, have their temporary dispatching priority updated accordingly. This is done recursively through a third, fourth, etc. queues, if necessary. [0031]
  • In a first implementation of the preferred embodiment, each task has a push down stack for identification of queues for which it is either attempting to lock, or has already locked. Whenever the push down stack is empty, the standard task priority is utilized for dispatching. Whenever the push down stack is non-empty, the task is in a critical section, and the temporary dispatching priority is utilized whenever the task is dispatched. In a second implementation, a counter is incremented for each task whenever it attempts to lock a lock protecting a shared resource, and is decremented whenever it unlocks such a lock. Then, whenever the counter is zero, the standard task priority is utilized for dispatching that task. Whenever the counter is greater than zero, the task is in a critical section, and the temporary dispatching priority is utilized whenever the task is dispatched. In either implementation, whenever a task enters a critical section, its temporary dispatching priority is set to its standard dispatching priority. [0032]
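  • A minimal sketch of the second implementation follows; the class and attribute names are assumed for illustration. A per-task lock-level counter decides whether the dispatcher should see the standard or the temporary priority, and the temporary priority is reset to the standard priority on entry to the outermost critical section:

```python
class Task:
    def __init__(self, name, standard_priority):
        self.name = name
        self.standard_priority = standard_priority
        self.temporary_priority = standard_priority
        self.lock_level = 0                 # outstanding lock attempts/holds

    def attempt_lock(self):
        if self.lock_level == 0:            # entering the outermost critical section
            self.temporary_priority = self.standard_priority
        self.lock_level += 1

    def release_lock(self):
        self.lock_level -= 1

    def dispatch_priority(self):
        """Priority the dispatcher uses: the temporary priority while the task
        is in a critical section, the standard priority otherwise."""
        return self.temporary_priority if self.lock_level > 0 else self.standard_priority
```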
  • FIG. 4 is a flowchart illustrating operation of a task attempting to lock a lock, in accordance with a preferred embodiment of the present invention. Upon attempting to lock a lock protecting a shared resource, step 100, the lock level for this task is modified, step 102. As noted above, in one embodiment, this is done by use of a push down stack. In another embodiment, this is done by incrementing a lock level index for that task. If this is the first level of locking, the temporary dispatching priority for the task is set to the current or standard dispatch priority for that task. Whenever a task is dispatched with this lock level index greater than zero (if incremented) or non-null (if pushed), the task is dispatched with its dispatch priority at least as high as its temporary dispatching priority instead of with its current or standard dispatch priority. An attempt is then made to lock the lock, step 104. If successful, the task exits the locking logic, step 109. [0033]
  • Otherwise, the task is placed at the end of the queue for the lock (not shown). The current temporary dispatching priority is then set to the temporary dispatching priority of the task just queued, step 106, and an outer loop is entered. The queue is scanned in a reverse direction, forming an inner loop, moving from the latest task in the queue, to the first task in the queue. A check is made whether there is a previous task in the queue, step 108. If there is a previous task in the queue, step 108, its temporary dispatching priority is compared to the current temporary dispatching priority, and if lower, is set to the current temporary dispatching priority, step 110. This inner loop then repeats until no more tasks remain in the queue, step 108. [0034]
  • Note here that a reverse processing order for the queue is utilized. While a reverse processing order is conceptually preferable to a forward processing order, sometimes lock queues are only linked in a forward direction. In such a case, a forward processing of the queue may be utilized, stopping the processing in the inner loop when the task is encountered from which the current temporary dispatching priority was obtained. [0035]
  • One additional feature is that in the case where the queue is searched from tail to front setting the temporary priority, the search through the preceding tasks in the queue can be terminated as soon as a task is encountered whose priority is at least as high as that of the task being inserted. [0036]
  • When the queue is exhausted, step 108, the inner loop is exited, and a test is made whether there is currently an owner of the lock, and if that owner is available, step 112. If the owner of the lock (i.e. the task that locked the lock) cannot be ascertained, step 112, the lock processing is complete, step 119. [0037]
  • Otherwise, if the owner of the lock can be ascertained, step 112, that task's temporary dispatching priority is compared to the current temporary dispatching priority and is updated if lower, step 114. Thus, the temporary dispatching priority for each task in the queue ahead of the task providing the current temporary dispatching priority is set to the maximum of its own temporary dispatching priority and the current temporary dispatching priority. [0038]
  • Then, the lock owner is tested to see if it is currently waiting in another queue, step 116. If it is not currently in a queue, step 116, the lock processing is complete, step 119. Otherwise, the current temporary dispatching priority is set to the temporary dispatching priority of the lock owner, and the outer loop is reentered, starting at step 108, processing the tasks ahead of that task in the second (or subsequent) queue and updating their temporary dispatching priority to the current temporary dispatching priority, if lower, step 110. The temporary dispatching priority of the lock owner for that second (and subsequent) queue is also compared to the current temporary dispatching priority, and updated if necessary, step 114. This outer loop repeats until no more embedded locks are found. [0039]
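  • Pulling the FIG. 4 flow together, a hypothetical sketch of the enqueue-and-propagate logic might look like the following (the Task and Lock structures, their field names, and the function name are all assumptions; step numbers in the comments refer to FIG. 4). The failed locker is queued, its temporary priority is pushed backward through the waiters ahead of it, then onto the lock owner, and then recursively into any queue that owner is itself waiting in:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Task:
    name: str
    standard_priority: int
    temporary_priority: int = 0
    waiting_on: Optional["Lock"] = None          # lock queue this task currently waits in

@dataclass
class Lock:
    owner: Optional[Task] = None
    queue: List[Task] = field(default_factory=list)   # FIFO of waiters

def enqueue_and_propagate(task: Task, lock: Lock) -> None:
    # Entering the (possibly nested) critical section: start the temporary
    # priority at the task's standard priority if it is not already higher.
    task.temporary_priority = max(task.temporary_priority, task.standard_priority)
    lock.queue.append(task)                      # place the task at the end of the queue
    task.waiting_on = lock
    current = task.temporary_priority            # step 106
    subject = task                               # task whose predecessors get raised
    while lock is not None:                      # outer loop over embedded locks
        ahead = lock.queue[: lock.queue.index(subject)]
        for waiter in reversed(ahead):           # inner loop, steps 108-110
            if waiter.temporary_priority < current:
                waiter.temporary_priority = current
        owner = lock.owner                       # step 112: who holds this lock?
        if owner is None:
            return                               # step 119: no owner to follow
        if owner.temporary_priority < current:   # step 114
            owner.temporary_priority = current
        current = owner.temporary_priority
        subject, lock = owner, owner.waiting_on  # step 116: is the owner queued elsewhere?
```

The inner loop could also stop as soon as it meets a waiter whose temporary priority is already at least as high as the current value, which is the tail-to-front early-termination feature noted above.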
  • FIG. 5 is a flowchart illustrating operation of a task unlocking a lock, in accordance with a preferred embodiment of the present invention. The task starts the unlocking of a lock, step 120, by first unlocking the lock, step 122. The priority of the queue is reduced to the temporary priority of the first waiting process. The first task in the queue is then activated for dispatch, step 124. The lock level of the process is modified, step 126, and the unlocking operation is then complete, step 129. When the lock level of the process is modified, the priority stack in the process is popped. If the process had been participating in another resource lock, then the temporary priority of the process is restored to the current priority of that queue. Note that when locks are nested, the temporary priority of a process can be increased due to a higher priority task waiting behind it in the inner lock. The lock level stack mechanism allows the task to revert to the appropriate priority level as soon as it releases the inner lock. [0040]
  • The lock level stack mechanism requires a task level push down stack that keeps track of the resource queue controlling each held lock. It also requires that each queue maintain its own value of the current queue priority in a static variable. In an alternative embodiment, the temporary dispatch priority of the first waiting process can be used instead of a static queue variable. [0041]
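  • A corresponding unlock sketch, again with assumed names and structures, pairs the lock-level stack with a per-queue current-priority variable: unlocking reduces the queue's priority to that of the first waiter, activates that waiter, pops the unlocking task's stack, and restores the task's temporary priority from the next outer queue if one exists. It assumes locks are released in last-in, first-out order.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Lock:
    owner: Optional["Task"] = None
    queue: List["Task"] = field(default_factory=list)     # FIFO of waiters
    current_priority: int = 0                              # static queue priority value

@dataclass
class Task:
    name: str
    standard_priority: int
    temporary_priority: int = 0
    lock_stack: List[Lock] = field(default_factory=list)  # locks attempted/held, innermost last

def unlock(task: Task, lock: Lock) -> Optional[Task]:
    """Sketch of FIG. 5 (steps 120-129) under the lock-level stack mechanism."""
    lock.owner = None                                      # step 122: unlock the lock
    activated = None
    if lock.queue:
        # reduce the queue's priority to that of the first waiting process,
        # then activate that waiter for dispatch
        lock.current_priority = lock.queue[0].temporary_priority
        activated = lock.queue.pop(0)                      # step 124
    task.lock_stack.pop()                                  # step 126: modify the lock level
    if task.lock_stack:
        # still participating in an outer resource lock: restore the temporary
        # priority to the current priority of that outer queue
        task.temporary_priority = task.lock_stack[-1].current_priority
    return activated                                       # task to hand to the dispatcher
```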
  • Note that other orderings of these steps are within the scope of this invention and may be required to implement this invention in different computer architectures. [0042]
  • In the present invention, the first task in a FIFO queue is activated for dispatch with a potentially higher temporary priority after the corresponding lock is unlocked. This provides a mechanism for preventing processor time starvation of higher priority tasks later in the shared resource queue. In a preferred embodiment, the higher temporary dispatching priority is at least as high as the highest priority of any task in the shared resource queue. [0043]
  • FIG. 6 is a flow chart illustrating partial operation of a dispatcher, in accordance with the present invention. Upon entering the dispatcher, step 130, a loop is entered to search through all tasks eligible for dispatch. While there are more tasks to check, step 132, the next task to be checked is tested to see if it is in a critical section, step 136. As noted above, a task is in a critical section when it has either one or more locks locked, or is in a queue awaiting the chance to lock a lock. If the task is determined to be in a critical section, step 136, the temporary task priority is utilized as its dispatching priority, step 140. Otherwise, the standard task priority is utilized as its dispatching priority, step 142. The task priority determined in steps 140 or 142 is then compared to the current highest priority. If this new task priority is higher than the previous highest task priority, step 144, this task is set to be the task dispatched, step 146. Regardless, the loop is then repeated, checking for another task that is eligible for being dispatched, step 132. When there are no more tasks eligible for dispatch, the task with the highest priority, as determined in steps 140, 142, 144, and 146, is dispatched, step 148. The dispatcher then exits, step 149. Note that a typical dispatcher performs other functions. This FIG. is illustrative only and includes only those steps that are related to this invention. [0044]
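  • The selection logic of FIG. 6 reduces to a single pass over the eligible tasks, using the temporary priority for tasks inside a critical section and the standard priority otherwise. The sketch below is illustrative only; the function name and the in_critical_section() accessor are assumptions:

```python
def select_next_task(eligible_tasks):
    """Sketch of FIG. 6 (steps 130-149): choose the eligible task with the
    highest effective dispatching priority. Tasks are assumed to expose
    in_critical_section(), temporary_priority, and standard_priority."""
    best, best_priority = None, None
    for task in eligible_tasks:                    # step 132: more tasks to check?
        if task.in_critical_section():             # step 136
            priority = task.temporary_priority     # step 140
        else:
            priority = task.standard_priority      # step 142
        if best_priority is None or priority > best_priority:   # step 144
            best, best_priority = task, priority   # step 146
    return best                                    # step 148: task to dispatch
```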
  • While the dispatcher in this FIG. is described in terms of searching through a list of tasks for the next task to dispatch, in the preferred embodiment, as in most modern operating systems, an actual search of the queues of tasks ready for dispatch is extremely costly in terms of processor cycles. Instead, the tasks eligible for dispatch are usually kept in a queue ordered by priority. Whenever a task ready for dispatch has its temporary priority increased as a result of the enqueuing of another task in the dispatch queue, that task is removed from the appropriate dispatch queue, and is then reinserted at the proper place for that priority. [0045]
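  • A small sketch of such a priority-ordered dispatch queue follows (the class and method names are assumptions, and a real dispatcher would more likely keep per-priority ready lists): raising a ready task's temporary priority removes it and reinserts it at the position its new priority earns, so the dispatcher can always take the first entry without scanning.

```python
import bisect
import itertools

class ReadyQueue:
    """Ready-to-dispatch tasks kept ordered by effective dispatching priority,
    highest first. Illustrative sketch only."""
    _seq = itertools.count()                       # tie-breaker for equal priorities

    def __init__(self):
        self._entries = []                         # (-priority, seq, task), ascending

    def insert(self, task, priority):
        bisect.insort(self._entries, (-priority, next(self._seq), task))

    def remove(self, task):
        self._entries = [e for e in self._entries if e[2] is not task]

    def reprioritize(self, task, new_priority):
        """When another waiter raises this task's temporary priority, pull it
        out of the dispatch queue and reinsert it at the proper place."""
        self.remove(task)
        self.insert(task, new_priority)

    def next_task(self):
        return self._entries[0][2] if self._entries else None
```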
  • One subtlety needs further elaboration here. In the alternative embodiments, the temporary dispatching priority is set for a task when it is activated for dispatch when it is the first task in a FIFO queue and the task that had locked the corresponding lock unlocks it. This priority is then utilized when the task is ultimately dispatched. In the preferred embodiment, the actual dispatch of any task within a critical section (i.e. within the outermost locking of locks) is done at a priority at least as high as, if not higher than, the temporary dispatching priority of that task being dispatched. The difference here is that in the preferred embodiment, the temporary dispatching priority may change between the time that a task is activated for dispatch, and the time that it is actually dispatched. Also note that a task may be dispatched multiple times within the critical section, and that the temporary dispatching priority for that task is dynamically adjusted as other tasks attempt to lock locks protecting shared resources. The resulting difference is that the alternative embodiments significantly reduce, but may not totally eliminate, the priority inversion problem, whereas the preferred embodiment can be guaranteed to eliminate that problem. However, this surety is garnered at a higher cost in both processor resources and in programming complexity. [0046]
  • The present invention prevents higher priority tasks in a shared resource FIFO queue from being starved of processor resources by lower priority tasks, by activating tasks after unlock of the lock corresponding to the queue with a temporary dispatch priority at least as high as, if not higher than, that of any task currently remaining in the queue. Implementation of this invention therefore solves performance problems introduced by sharing a FIFO shared resource queue among tasks with different dispatching priorities, and prevents the Priority Inversion type of system failure that resulted in the loss of the Mars lander. [0047]
  • Those skilled in the art will recognize that modifications and variations can be made without departing from the spirit of the invention. Therefore, it is intended that this invention encompass all such variations and modifications as fall within the scope of the appended claims. [0048]
  • Claim elements and steps herein have been numbered and/or lettered solely as an aid in readability and understanding. As such, the numbering and/or lettering in itself is not intended to and should not be taken to indicate the ordering of elements and/or steps in the claims. [0049]

Claims (21)

What is claimed is:
1. A method of priority queue dispatching in a data processing system, wherein:
a critical section of code for a task is when the task has at least one of a plurality of locks locked; and
said method comprises:
A) dispatching a first task for execution at a temporary dispatching priority higher than a standard dispatching priority for that first task whenever the first task is in a critical section of code for that first task, wherein the critical section of code for that first task is a result of that first task having locked a first lock without having unlocked the first lock; and
B) dispatching the first task for execution at the standard dispatching priority for that first task whenever the first task is not in the critical section of code for that first task.
2. The method in claim 1 wherein:
the method further comprises:
C) queuing a second task in a first FIFO queue when the second task attempts to lock the first lock and fails, wherein:
the FIFO queue comprises a set of tasks waiting to lock the first lock; and
step (C) comprises:
1) placing the second task at an end of the first FIFO queue;
2) comparing the temporary dispatching priority for a third task that is ahead of the second task in the FIFO queue to the temporary dispatching priority of the second task; and
3) setting the temporary dispatching priority of the third task to the temporary dispatching priority of the second task when the temporary dispatching priority of the second task is determined in substep (2) to be greater than the temporary dispatching priority of the third task.
3. The method in claim 2 wherein:
step (C) further comprises:
4) setting the temporary dispatching priority for the second task to the standard dispatching priority for the second task before substeps (2) and (3).
4. The method in claim 1 wherein:
the method further comprises:
C) queuing a second task in a first FIFO queue when the second task attempts to lock the first lock and fails, wherein:
the FIFO queue comprises a set of tasks waiting to lock the first lock; and
step (C) comprises:
1) placing the second task at an end of the first FIFO queue;
2) comparing the temporary dispatching priority for the first task to the temporary dispatching priority of the second task; and
3) setting the temporary dispatching priority of the first task to the temporary dispatching priority of the second task when the temporary dispatching priority of the second task is determined in substep (2) to be greater than the temporary dispatching priority of the first task.
5. The method in claim 4 wherein step (C) further comprises:
4) testing whether the first task is in a second FIFO queue awaiting a chance to lock a second lock; and
5) upgrading the temporary dispatching priority of a third task that has a second lock locked when the first task is determined in substep (4) to be in the second FIFO queue awaiting a chance to lock the second lock, wherein substep (5) comprises:
a) comparing the temporary dispatching priority for the third task to the temporary dispatching priority of the second task; and
b) setting the temporary dispatching priority of the third task to the temporary dispatching priority of the second task when the temporary dispatching priority of the second task is determined to be greater than the temporary dispatching priority of the third task.
6. The method in claim 5 wherein:
the method further comprises:
D) comparing a highest temporary dispatching priority of any task in the first FIFO queue to the temporary dispatching priority of the first task after the first task unlocks the second lock; and
E) setting the temporary dispatching priority of the first task to the highest temporary dispatching priority of any task in the first FIFO queue when the temporary dispatching priority of the first task is determined in step (D) to be greater than the highest temporary dispatching priority of any task in the first FIFO queue.
7. The method in claim 4 wherein:
step (C) further comprises:
4) comparing the temporary dispatching priority for a third task that is ahead of the second task in the first FIFO queue to the temporary dispatching priority of the second task; and
5) setting the temporary dispatching priority of the third task to the temporary dispatching priority of the second task when the temporary dispatching priority of the second task is determined in substep (4) to be greater than the temporary dispatching priority of the third task.
8. The method in claim 1 wherein:
the temporary dispatching priority for the first task is determined dynamically and adjusted dynamically depending on the temporary dispatching priority for other tasks behind the first task in a first FIFO queue associated with the first lock.
9. The method in claim 1 wherein:
the temporary dispatching priority for the first task is a system wide value.
10. The method in claim 1 wherein:
the temporary dispatching priority for the first task is a value associated with the first lock when the first lock is a last lock that the first task has attempted to lock.
11. Software stored in a Computer Software Storage Medium for priority queue dispatching in a data processing system, wherein:
a critical section of code for a task is when the task has at least one of a plurality of locks locked; and
said software comprises:
A) dispatching a first task for execution at a temporary dispatching priority higher than a standard dispatching priority for that first task whenever the first task is in a critical section of code for that first task, wherein the critical section of code for that first task is a result of that first task having locked a first lock without having unlocked the first lock; and
B) dispatching the first task for execution at the standard dispatching priority for that first task whenever the first task is not in the critical section of code for that first task.
12. The software in claim 11 wherein:
the software further comprises:
C) queuing a second task in a first FIFO queue when the second task attempts to lock the first lock and fails, wherein:
the FIFO queue comprises a set of tasks waiting to lock the first lock; and
set (C) comprises:
1) placing the second task at an end of the first FIFO queue;
2) comparing the temporary dispatching priority for a third task that is ahead of the second task in the FIFO queue to the temporary dispatching priority of the second task; and
3) setting the temporary dispatching priority of the third task to the temporary dispatching priority of the second task when the temporary dispatching priority of the second task is determined in subset (2) to be greater than the temporary dispatching priority of the third task.
13. The software in claim 12 wherein:
set (C) further comprises:
4) setting the temporary dispatching priority for the second task to the standard dispatching priority for the second task before subsets (2) and (3).
14. The software in claim 11 wherein:
the software further comprises:
C) queuing a second task in a first FIFO queue when the second task attempts to lock the first lock and fails, wherein:
the FIFO queue comprises a set of tasks waiting to lock the first lock; and
set (C) comprises:
1) placing the second task at an end of the first FIFO queue;
2) comparing the temporary dispatching priority for the first task to the temporary dispatching priority of the second task; and
3) setting the temporary dispatching priority of the first task to the temporary dispatching priority of the second task when the temporary dispatching priority of the second task is determined in subset (2) to be greater than the temporary dispatching priority of the first task.
15. The software in claim 14 wherein set (C) further comprises:
4) testing whether the first task is in a second FIFO queue awaiting a chance to lock a second lock; and
5) upgrading the temporary dispatching priority of a third task that has a second lock locked when the first task is determined in subset (4) to be in the second FIFO queue awaiting a chance to lock the second lock, wherein subset (5) comprises:
a) comparing the temporary dispatching priority for the third task to the temporary dispatching priority of the second task; and
b) setting the temporary dispatching priority of the third task to the temporary dispatching priority of the second task when the temporary dispatching priority of the second task is determined to be greater than the temporary dispatching priority of the third task.
16. The software in claim 15 wherein:
the software further comprises:
D) comparing a highest temporary dispatching priority of any task in the first FIFO queue to the temporary dispatching priority of the first task after the first task unlocks the second lock; and
E) setting the temporary dispatching priority of the first task to the highest temporary dispatching priority of any task in the first FIFO queue when the temporary dispatching priority of the first task is determined in set (D) to be greater than the highest temporary dispatching priority of any task in the first FIFO queue.
17. The software in claim 14 wherein:
set (C) further comprises:
4) comparing the temporary dispatching priority for a third task that is ahead of the second task in the first FIFO queue to the temporary dispatching priority of the second task; and
5) setting the temporary dispatching priority of the third task to the temporary dispatching priority of the second task when the temporary dispatching priority of the second task is determined in subset (4) to be greater than the temporary dispatching priority of the third task.
18. The software in claim 11 wherein:
the temporary dispatching priority for the first task is determined dynamically and adjusted dynamically depending on the temporary dispatching priority for other tasks behind the first task in a first FIFO queue associated with the first lock.
19. The software in claim 11 wherein:
the temporary dispatching priority for the first task is a system wide value.
20. The software in claim 11 wherein:
the temporary dispatching priority for the first task is a value associated with the first lock when the first lock is a last lock that the first task has attempted to lock.
21. A computer readable Non-Volatile Storage Medium encoded with software for priority queue dispatching in a data processing system, wherein:
a critical section of code for a task is when the task has at least one of a plurality of locks locked; and
said software comprises:
A) dispatching a first task for execution at a temporary dispatching priority higher than a standard dispatching priority for that first task whenever the first task is in a critical section of code for that first task, wherein the critical section of code for that first task is a result of that first task having locked a first lock without having unlocked the first lock; and
B) dispatching the first task for execution at the standard dispatching priority for that first task whenever the first task is not in the critical section of code for that first task.
US09/748,404 2000-12-26 2000-12-26 Software and data processing system with priority queue dispatching Abandoned US20020083063A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US09/748,404 US20020083063A1 (en) 2000-12-26 2000-12-26 Software and data processing system with priority queue dispatching
EP01992090A EP1346278A4 (en) 2000-12-26 2001-12-07 Software and data processing system with priority queue dispatching
PCT/US2001/048127 WO2002052370A2 (en) 2000-12-26 2001-12-07 Software and data processing system with priority queue dispatching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US09/748,404 US20020083063A1 (en) 2000-12-26 2000-12-26 Software and data processing system with priority queue dispatching

Publications (1)

Publication Number Publication Date
US20020083063A1 true US20020083063A1 (en) 2002-06-27

Family

ID=25009305

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/748,404 Abandoned US20020083063A1 (en) 2000-12-26 2000-12-26 Software and data processing system with priority queue dispatching

Country Status (3)

Country Link
US (1) US20020083063A1 (en)
EP (1) EP1346278A4 (en)
WO (1) WO2002052370A2 (en)

Cited By (61)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040139142A1 (en) * 2002-12-31 2004-07-15 International Business Machines Corporation Method and apparatus for managing resource contention
US20040136720A1 (en) * 2003-01-15 2004-07-15 Mahowald Peter H. Task prioritization in firmware controlled optical transceiver
US20050125406A1 (en) * 2003-12-03 2005-06-09 Miroslav Cina Database access with multilevel lock
US20070039000A1 (en) * 2005-08-10 2007-02-15 Hewlett-Packard Development Company, L.P. Lock order determination method and system
US20070136725A1 (en) * 2005-12-12 2007-06-14 International Business Machines Corporation System and method for optimized preemption and reservation of software locks
US20070294446A1 (en) * 2006-06-15 2007-12-20 Sony Corporation Information processing apparatus, method of same, and program for same
US20080069115A1 (en) * 2006-09-16 2008-03-20 Mips Technologies, Inc. Bifurcated transaction selector supporting dynamic priorities in multi-port switch
US20080069130A1 (en) * 2006-09-16 2008-03-20 Mips Technologies, Inc. Transaction selector employing transaction queue group priorities in multi-port switch
EP1942413A2 (en) 2007-01-05 2008-07-09 Samsung Electronics Co., Ltd. Multi-Tasking Method According to Simple Priority Inheritance Scheme and Embedded System Therefor
US7454579B1 (en) * 2005-12-15 2008-11-18 Emc Corporation Managing access to shared resources
US20100037086A1 (en) * 2006-09-25 2010-02-11 Koninklijke Philips Electronics N.V. Robust critical section design in multithreaded applications
US20120304186A1 (en) * 2011-05-26 2012-11-29 International Business Machines Corporation Scheduling Mapreduce Jobs in the Presence of Priority Classes
US20130080652A1 (en) * 2011-07-26 2013-03-28 International Business Machines Corporation Dynamic runtime choosing of processing communication methods
US20130212591A1 (en) * 2006-03-15 2013-08-15 Mihai-Daniel Fecioru Task scheduling method and apparatus
US8578380B1 (en) * 2003-12-17 2013-11-05 Vmware, Inc. Program concurrency control using condition variables
US8612649B2 (en) 2010-12-17 2013-12-17 At&T Intellectual Property I, L.P. Validation of priority queue processing
US8612648B1 (en) * 2010-07-19 2013-12-17 Xilinx, Inc. Method and apparatus for implementing quality of service in a data bus interface
CN103473126A (en) * 2013-09-09 2013-12-25 北京思特奇信息技术股份有限公司 Multiple-level task processing method
US8954713B2 (en) 2011-07-26 2015-02-10 International Business Machines Corporation Using predictive determinism within a streaming environment
US8990452B2 (en) 2011-07-26 2015-03-24 International Business Machines Corporation Dynamic reduction of stream backpressure
US9135057B2 (en) 2012-04-26 2015-09-15 International Business Machines Corporation Operator graph changes in response to dynamic connections in stream computing applications
US20160077870A1 (en) * 2014-09-16 2016-03-17 Freescale Semiconductor, Inc. Starvation control in a data processing system
US20160098298A1 (en) * 2009-04-24 2016-04-07 Pegasystems Inc. Methods and apparatus for integrated work management
US9317574B1 (en) 2012-06-11 2016-04-19 Dell Software Inc. System and method for managing and identifying subject matter experts
US9349016B1 (en) 2014-06-06 2016-05-24 Dell Software Inc. System and method for user-context-based data loss prevention
US9390240B1 (en) 2012-06-11 2016-07-12 Dell Software Inc. System and method for querying data
US9405553B2 (en) 2012-01-30 2016-08-02 International Business Machines Corporation Processing element management in a streaming data system
US9501744B1 (en) 2012-06-11 2016-11-22 Dell Software Inc. System and method for classifying data
TWI560609B (en) * 2015-10-14 2016-12-01 Realtek Semiconductor Corp Data output dispatching device and method
US9563782B1 (en) 2015-04-10 2017-02-07 Dell Software Inc. Systems and methods of secure self-service access to content
US9569626B1 (en) 2015-04-10 2017-02-14 Dell Software Inc. Systems and methods of reporting content-exposure events
US9578060B1 (en) 2012-06-11 2017-02-21 Dell Software Inc. System and method for data loss prevention across heterogeneous communications platforms
US9641555B1 (en) 2015-04-10 2017-05-02 Dell Software Inc. Systems and methods of tracking content-exposure events
US9756099B2 (en) 2012-11-13 2017-09-05 International Business Machines Corporation Streams optional execution paths depending upon data rates
US9842220B1 (en) 2015-04-10 2017-12-12 Dell Software Inc. Systems and methods of secure self-service access to content
US9842218B1 (en) 2015-04-10 2017-12-12 Dell Software Inc. Systems and methods of secure self-service access to content
US9990506B1 (en) 2015-03-30 2018-06-05 Quest Software Inc. Systems and methods of securing network-accessible peripheral devices
US20180293100A1 (en) * 2017-04-05 2018-10-11 Cavium, Inc. Managing lock and unlock operations using traffic prioritization
US10142391B1 (en) 2016-03-25 2018-11-27 Quest Software Inc. Systems and methods of diagnosing down-layer performance problems via multi-stream performance patternization
US10157358B1 (en) 2015-10-05 2018-12-18 Quest Software Inc. Systems and methods for multi-stream performance patternization and interval-based prediction
US10218588B1 (en) 2015-10-05 2019-02-26 Quest Software Inc. Systems and methods for multi-stream performance patternization and optimization of virtual meetings
US10326748B1 (en) 2015-02-25 2019-06-18 Quest Software Inc. Systems and methods for event-based authentication
US10331500B2 (en) * 2017-04-05 2019-06-25 Cavium, Llc Managing fairness for lock and unlock operations using operation prioritization
US10409800B2 (en) * 2015-08-03 2019-09-10 Sap Se Priority queue for exclusive locks
US10417613B1 (en) 2015-03-17 2019-09-17 Quest Software Inc. Systems and methods of patternizing logged user-initiated events for scheduling functions
US10469396B2 (en) 2014-10-10 2019-11-05 Pegasystems, Inc. Event processing with enhanced throughput
US10467200B1 (en) 2009-03-12 2019-11-05 Pegasystems, Inc. Techniques for dynamic data processing
US10489220B2 (en) * 2017-01-26 2019-11-26 Microsoft Technology Licensing, Llc Priority based scheduling
US10536352B1 (en) 2015-08-05 2020-01-14 Quest Software Inc. Systems and methods for tuning cross-platform data collection
US10572236B2 (en) 2011-12-30 2020-02-25 Pegasystems, Inc. System and method for updating or modifying an application without manual coding
US10628221B1 (en) * 2015-09-30 2020-04-21 EMC IP Holding Company LLC Method and system for deadline inheritance for resource synchronization
US10698599B2 (en) 2016-06-03 2020-06-30 Pegasystems, Inc. Connecting graphical shapes using gestures
US10698647B2 (en) 2016-07-11 2020-06-30 Pegasystems Inc. Selective sharing for collaborative application usage
US10838569B2 (en) 2006-03-30 2020-11-17 Pegasystems Inc. Method and apparatus for user interface non-conformance detection and correction
CN112328386A (en) * 2021-01-05 2021-02-05 北京国科环宇科技股份有限公司 Operating system process scheduling method, device, medium and electronic equipment
US11048488B2 (en) 2018-08-14 2021-06-29 Pegasystems, Inc. Software code optimizer and method
US20210342202A1 (en) * 2018-10-18 2021-11-04 Oracle International Corporation Critical Section Speedup Using Help-Enabled Locks
CN115292025A (en) * 2022-09-30 2022-11-04 神州数码融信云技术服务有限公司 Task scheduling method and device, computer equipment and computer readable storage medium
WO2022236816A1 (en) * 2021-05-14 2022-11-17 华为技术有限公司 Task allocation method and apparatus
US11567945B1 (en) 2020-08-27 2023-01-31 Pegasystems Inc. Customized digital content generation systems and methods
US11954518B2 (en) * 2019-12-20 2024-04-09 Nvidia Corporation User-defined metered priority queues

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6021425A (en) * 1992-04-03 2000-02-01 International Business Machines Corporation System and method for optimizing dispatch latency of tasks in a data processing system
US6473819B1 (en) * 1999-12-17 2002-10-29 International Business Machines Corporation Scalable interruptible queue locks for shared-memory multiprocessor
US6560627B1 (en) * 1999-01-28 2003-05-06 Cisco Technology, Inc. Mutual exclusion at the record level with priority inheritance for embedded systems using one semaphore
US6560628B1 (en) * 1998-04-27 2003-05-06 Sony Corporation Apparatus, method, and recording medium for scheduling execution using time slot data

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5012409A (en) * 1988-03-10 1991-04-30 Fletcher Mitchell S Operating system for a multi-tasking operating environment
JPH02300939A (en) * 1989-05-16 1990-12-13 Toshiba Corp Semaphore operation system
US5333319A (en) * 1992-03-02 1994-07-26 International Business Machines Corporation Virtual storage data processor with enhanced dispatching priority allocation of CPU resources

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6021425A (en) * 1992-04-03 2000-02-01 International Business Machines Corporation System and method for optimizing dispatch latency of tasks in a data processing system
US6560628B1 (en) * 1998-04-27 2003-05-06 Sony Corporation Apparatus, method, and recording medium for scheduling execution using time slot data
US6560627B1 (en) * 1999-01-28 2003-05-06 Cisco Technology, Inc. Mutual exclusion at the record level with priority inheritance for embedded systems using one semaphore
US6473819B1 (en) * 1999-12-17 2002-10-29 International Business Machines Corporation Scalable interruptible queue locks for shared-memory multiprocessor

Cited By (97)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040139142A1 (en) * 2002-12-31 2004-07-15 International Business Machines Corporation Method and apparatus for managing resource contention
US20040136720A1 (en) * 2003-01-15 2004-07-15 Mahowald Peter H. Task prioritization in firmware controlled optical transceiver
GB2398945A (en) * 2003-01-15 2004-09-01 Agilent Technologies Inc Optical transceiver controlled by priority ordered task code modules
GB2398945B (en) * 2003-01-15 2005-11-30 Agilent Technologies Inc Transceiver and method of operating an optical transceiver
US7574438B2 (en) * 2003-12-03 2009-08-11 Sap Aktiengesellschaft Database access with multilevel lock
US20050125406A1 (en) * 2003-12-03 2005-06-09 Miroslav Cina Database access with multilevel lock
US8578380B1 (en) * 2003-12-17 2013-11-05 Vmware, Inc. Program concurrency control using condition variables
US20070039000A1 (en) * 2005-08-10 2007-02-15 Hewlett-Packard Development Company, L.P. Lock order determination method and system
US20070136725A1 (en) * 2005-12-12 2007-06-14 International Business Machines Corporation System and method for optimized preemption and reservation of software locks
US8261279B2 (en) 2005-12-12 2012-09-04 International Business Machines Corporation Optimized preemption and reservation of software locks for woken threads
US20080163217A1 (en) * 2005-12-12 2008-07-03 Jos Manuel Accapadi Optimized Preemption and Reservation of Software Locks
US7454579B1 (en) * 2005-12-15 2008-11-18 Emc Corporation Managing access to shared resources
US20130212591A1 (en) * 2006-03-15 2013-08-15 Mihai-Daniel Fecioru Task scheduling method and apparatus
US9372729B2 (en) * 2006-03-15 2016-06-21 Freescale Semiconductor, Inc. Task scheduling method and apparatus
US10838569B2 (en) 2006-03-30 2020-11-17 Pegasystems Inc. Method and apparatus for user interface non-conformance detection and correction
US20070294446A1 (en) * 2006-06-15 2007-12-20 Sony Corporation Information processing apparatus, method of same, and program for same
US8065458B2 (en) * 2006-06-15 2011-11-22 Sony Corporation Arbitration apparatus, method, and computer readable medium with dynamically adjustable priority scheme
US20080069115A1 (en) * 2006-09-16 2008-03-20 Mips Technologies, Inc. Bifurcated transaction selector supporting dynamic priorities in multi-port switch
US7961745B2 (en) * 2006-09-16 2011-06-14 Mips Technologies, Inc. Bifurcated transaction selector supporting dynamic priorities in multi-port switch
US7990989B2 (en) 2006-09-16 2011-08-02 Mips Technologies, Inc. Transaction selector employing transaction queue group priorities in multi-port switch
US20080069130A1 (en) * 2006-09-16 2008-03-20 Mips Technologies, Inc. Transaction selector employing transaction queue group priorities in multi-port switch
US20100037086A1 (en) * 2006-09-25 2010-02-11 Koninklijke Philips Electronics N.V. Robust critical section design in multithreaded applications
EP1942413A3 (en) * 2007-01-05 2010-01-27 Samsung Electronics Co., Ltd. Multi-Tasking Method According to Simple Priority Inheritance Scheme and Embedded System Therefor
CN101216785A (en) * 2007-01-05 2008-07-09 三星电子株式会社 Multi-tasking method according to simple priority inheritance scheme and embedded system therefor
EP1942413A2 (en) 2007-01-05 2008-07-09 Samsung Electronics Co., Ltd. Multi-Tasking Method According to Simple Priority Inheritance Scheme and Embedded System Therefor
US8612982B2 (en) 2007-01-05 2013-12-17 Samsung Electronics Co., Ltd. Multi-tasking method according to simple priority inheritance scheme and embedded system therefor
US20080168454A1 (en) * 2007-01-05 2008-07-10 Samsung Electronics Co., Ltd. Multi-tasking method according to simple priority inheritance scheme and embedded system therefor
US10467200B1 (en) 2009-03-12 2019-11-05 Pegasystems, Inc. Techniques for dynamic data processing
US20160098298A1 (en) * 2009-04-24 2016-04-07 Pegasystems Inc. Methods and apparatus for integrated work management
US8612648B1 (en) * 2010-07-19 2013-12-17 Xilinx, Inc. Method and apparatus for implementing quality of service in a data bus interface
US8612649B2 (en) 2010-12-17 2013-12-17 At&T Intellectual Property I, L.P. Validation of priority queue processing
US20130031558A1 (en) * 2011-05-26 2013-01-31 International Business Machines Corporation Scheduling Mapreduce Jobs in the Presence of Priority Classes
US8869159B2 (en) * 2011-05-26 2014-10-21 International Business Machines Corporation Scheduling MapReduce jobs in the presence of priority classes
US20120304186A1 (en) * 2011-05-26 2012-11-29 International Business Machines Corporation Scheduling Mapreduce Jobs in the Presence of Priority Classes
US9148495B2 (en) 2011-07-26 2015-09-29 International Business Machines Corporation Dynamic runtime choosing of processing communication methods
US20130080652A1 (en) * 2011-07-26 2013-03-28 International Business Machines Corporation Dynamic runtime choosing of processing communication methods
US8954713B2 (en) 2011-07-26 2015-02-10 International Business Machines Corporation Using predictive determinism within a streaming environment
US8959313B2 (en) 2011-07-26 2015-02-17 International Business Machines Corporation Using predictive determinism within a streaming environment
US9148496B2 (en) * 2011-07-26 2015-09-29 International Business Machines Corporation Dynamic runtime choosing of processing communication methods
US9389911B2 (en) 2011-07-26 2016-07-12 International Business Machines Corporation Dynamic reduction of stream backpressure
US8990452B2 (en) 2011-07-26 2015-03-24 International Business Machines Corporation Dynamic reduction of stream backpressure
US9588812B2 (en) 2011-07-26 2017-03-07 International Business Machines Corporation Dynamic reduction of stream backpressure
US10324756B2 (en) 2011-07-26 2019-06-18 International Business Machines Corporation Dynamic reduction of stream backpressure
US10572236B2 (en) 2011-12-30 2020-02-25 Pegasystems, Inc. System and method for updating or modifying an application without manual coding
US10296386B2 (en) 2012-01-30 2019-05-21 International Business Machines Corporation Processing element management in a streaming data system
US9405553B2 (en) 2012-01-30 2016-08-02 International Business Machines Corporation Processing element management in a streaming data system
US9870262B2 (en) 2012-01-30 2018-01-16 International Business Machines Corporation Processing element management in a streaming data system
US9535707B2 (en) 2012-01-30 2017-01-03 International Business Machines Corporation Processing element management in a streaming data system
US9135057B2 (en) 2012-04-26 2015-09-15 International Business Machines Corporation Operator graph changes in response to dynamic connections in stream computing applications
US9146775B2 (en) 2012-04-26 2015-09-29 International Business Machines Corporation Operator graph changes in response to dynamic connections in stream computing applications
US9501744B1 (en) 2012-06-11 2016-11-22 Dell Software Inc. System and method for classifying data
US9578060B1 (en) 2012-06-11 2017-02-21 Dell Software Inc. System and method for data loss prevention across heterogeneous communications platforms
US9317574B1 (en) 2012-06-11 2016-04-19 Dell Software Inc. System and method for managing and identifying subject matter experts
US10146954B1 (en) 2012-06-11 2018-12-04 Quest Software Inc. System and method for data aggregation and analysis
US9779260B1 (en) 2012-06-11 2017-10-03 Dell Software Inc. Aggregation and classification of secure data
US9390240B1 (en) 2012-06-11 2016-07-12 Dell Software Inc. System and method for querying data
US9756099B2 (en) 2012-11-13 2017-09-05 International Business Machines Corporation Streams optional execution paths depending upon data rates
US9930081B2 (en) 2012-11-13 2018-03-27 International Business Machines Corporation Streams optional execution paths depending upon data rates
CN103473126A (en) * 2013-09-09 2013-12-25 北京思特奇信息技术股份有限公司 Multiple-level task processing method
US9349016B1 (en) 2014-06-06 2016-05-24 Dell Software Inc. System and method for user-context-based data loss prevention
US9639396B2 (en) * 2014-09-16 2017-05-02 Nxp Usa, Inc. Starvation control in a data processing system
US20160077870A1 (en) * 2014-09-16 2016-03-17 Freescale Semiconductor, Inc. Starvation control in a data processing system
US10469396B2 (en) 2014-10-10 2019-11-05 Pegasystems, Inc. Event processing with enhanced throughput
US11057313B2 (en) 2014-10-10 2021-07-06 Pegasystems Inc. Event processing with enhanced throughput
US10326748B1 (en) 2015-02-25 2019-06-18 Quest Software Inc. Systems and methods for event-based authentication
US10417613B1 (en) 2015-03-17 2019-09-17 Quest Software Inc. Systems and methods of patternizing logged user-initiated events for scheduling functions
US9990506B1 (en) 2015-03-30 2018-06-05 Quest Software Inc. Systems and methods of securing network-accessible peripheral devices
US9842220B1 (en) 2015-04-10 2017-12-12 Dell Software Inc. Systems and methods of secure self-service access to content
US9842218B1 (en) 2015-04-10 2017-12-12 Dell Software Inc. Systems and methods of secure self-service access to content
US9563782B1 (en) 2015-04-10 2017-02-07 Dell Software Inc. Systems and methods of secure self-service access to content
US10140466B1 (en) 2015-04-10 2018-11-27 Quest Software Inc. Systems and methods of secure self-service access to content
US9641555B1 (en) 2015-04-10 2017-05-02 Dell Software Inc. Systems and methods of tracking content-exposure events
US9569626B1 (en) 2015-04-10 2017-02-14 Dell Software Inc. Systems and methods of reporting content-exposure events
US10409800B2 (en) * 2015-08-03 2019-09-10 Sap Se Priority queue for exclusive locks
US10536352B1 (en) 2015-08-05 2020-01-14 Quest Software Inc. Systems and methods for tuning cross-platform data collection
US11372682B2 (en) 2015-09-30 2022-06-28 EMC IP Holding Company LLC Method and system for deadline inheritance for resource synchronization
US10628221B1 (en) * 2015-09-30 2020-04-21 EMC IP Holding Company LLC Method and system for deadline inheritance for resource synchronization
US10218588B1 (en) 2015-10-05 2019-02-26 Quest Software Inc. Systems and methods for multi-stream performance patternization and optimization of virtual meetings
US10157358B1 (en) 2015-10-05 2018-12-18 Quest Software Inc. Systems and methods for multi-stream performance patternization and interval-based prediction
TWI560609B (en) * 2015-10-14 2016-12-01 Realtek Semiconductor Corp Data output dispatching device and method
US10142391B1 (en) 2016-03-25 2018-11-27 Quest Software Inc. Systems and methods of diagnosing down-layer performance problems via multi-stream performance patternization
US10698599B2 (en) 2016-06-03 2020-06-30 Pegasystems, Inc. Connecting graphical shapes using gestures
US10698647B2 (en) 2016-07-11 2020-06-30 Pegasystems Inc. Selective sharing for collaborative application usage
US10489220B2 (en) * 2017-01-26 2019-11-26 Microsoft Technology Licensing, Llc Priority based scheduling
US20180293100A1 (en) * 2017-04-05 2018-10-11 Cavium, Inc. Managing lock and unlock operations using traffic prioritization
US10445096B2 (en) * 2017-04-05 2019-10-15 Cavium, Llc Managing lock and unlock operations using traffic prioritization
US10331500B2 (en) * 2017-04-05 2019-06-25 Cavium, Llc Managing fairness for lock and unlock operations using operation prioritization
US10248420B2 (en) 2017-04-05 2019-04-02 Cavium, Llc Managing lock and unlock operations using active spinning
US10599430B2 (en) 2017-04-05 2020-03-24 Cavium, Llc Managing lock and unlock operations using operation prediction
US11048488B2 (en) 2018-08-14 2021-06-29 Pegasystems, Inc. Software code optimizer and method
US20210342202A1 (en) * 2018-10-18 2021-11-04 Oracle International Corporation Critical Section Speedup Using Help-Enabled Locks
US11861416B2 (en) * 2018-10-18 2024-01-02 Oracle International Corporation Critical section speedup using help-enabled locks
US11954518B2 (en) * 2019-12-20 2024-04-09 Nvidia Corporation User-defined metered priority queues
US11567945B1 (en) 2020-08-27 2023-01-31 Pegasystems Inc. Customized digital content generation systems and methods
CN112328386A (en) * 2021-01-05 2021-02-05 北京国科环宇科技股份有限公司 Operating system process scheduling method, device, medium and electronic equipment
WO2022236816A1 (en) * 2021-05-14 2022-11-17 华为技术有限公司 Task allocation method and apparatus
CN115292025A (en) * 2022-09-30 2022-11-04 神州数码融信云技术服务有限公司 Task scheduling method and device, computer equipment and computer readable storage medium

Also Published As

Publication number Publication date
EP1346278A4 (en) 2008-01-02
EP1346278A2 (en) 2003-09-24
WO2002052370A2 (en) 2002-07-04
WO2002052370A3 (en) 2003-01-23

Similar Documents

Publication Publication Date Title
US20020083063A1 (en) Software and data processing system with priority queue dispatching
US6449614B1 (en) Interface system and method for asynchronously updating a share resource with locking facility
US7975271B2 (en) System and method for dynamically determining a portion of a resource for which a thread is to obtain a lock
US6560627B1 (en) Mutual exclusion at the record level with priority inheritance for embedded systems using one semaphore
US7797704B2 (en) System and method for performing work by one of plural threads using a lockable resource
EP0145889B1 (en) Non-spinning task locking using compare and swap
US5442763A (en) System and method for preventing deadlock in multiprocessor multiple resource instructions
US4807111A (en) Dynamic queueing method
US8250047B2 (en) Hybrid multi-threaded access to data structures using hazard pointers for reads and locks for updates
US8145817B2 (en) Reader/writer lock with reduced cache contention
US6934950B1 (en) Thread dispatcher for multi-threaded communication library
US6247025B1 (en) Locking and unlocking mechanism for controlling concurrent access to objects
US8055860B2 (en) Read-copy-update (RCU) operations with reduced memory barrier usage
US7818306B2 (en) Read-copy-update (RCU) operations with reduced memory barrier usage
US6668291B1 (en) Non-blocking concurrent queues with direct node access by threads
US20060242644A1 (en) Architecture for a read/write thread lock
JP2514299B2 (en) Serialization method of interrupt handling for process level programming
KR100976280B1 (en) Multi processor and multi thread safe message queue with hardware assistance
US6845504B2 (en) Method and system for managing lock contention in a computer system
JPH0318935A (en) Serialization system for access to data list
US20070067770A1 (en) System and method for reduced overhead in multithreaded programs
US6691304B1 (en) Monitor conversion in a multi-threaded computer system
US6094663A (en) Method and apparatus for implementing atomic queues
US6976260B1 (en) Method and apparatus for serializing a message queue in a multiprocessing environment
US20060048162A1 (en) Method for implementing a multiprocessor message queue without use of mutex gate objects

Legal Events

Date Code Title Description
AS Assignment

Owner name: BULL HN INFORMATIONA SYSTEMS INC., MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:EGOLF, DAVID A.;REEL/FRAME:011409/0463

Effective date: 20001224

STCB Information on status: application discontinuation

Free format text: EXPRESSLY ABANDONED -- DURING EXAMINATION