US5524247A - System for scheduling programming units to a resource based on status variables indicating a lock or lock-wait state thereof - Google Patents

Info

Publication number
US5524247A
Authority
US
United States
Prior art keywords: lock, variable, status, shared resource, thread
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US08/011,142
Inventor
Satoshi Mizuno
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Corp
Original Assignee
Toshiba Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toshiba Corp
Assigned to KABUSHIKI KAISHA TOSHIBA. ASSIGNMENT OF ASSIGNORS INTEREST. Assignors: NAGASAWA, ATSUHI
Assigned to KABUSHIKI KAISHA TOSHIBA. CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNOR IN A DOCUMENT PREVIOUSLY RECORDED ON REEL 6413 FRAME 0260. Assignors: MIZUNO, SATOSHI
Application granted
Publication of US5524247A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues


Abstract

A computer system comprising a CPU and a scheduler. The CPU sets a predetermined value in the status variable corresponding to a thread when the thread starts waiting for a resource which it shares with other threads. The scheduler refers to the status variable, selects, with priority, a thread other than the thread waiting for the shared resource, and allocates the CPU to the thread thus selected.

Description

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a scheduler for use in a computer system and also to a scheduling method, and more particularly to a scheduler and a scheduling method in a computer system, in which mutual exclusion of threads or processes is achieved by the use of a shared variable.
2. Description of the Related Art
In a conventional computer system, scheduling is effected such that the CPU time is allocated to processes or users as uniformly as possible. When the conventional scheduling method is applied to parallel programming, a representative example of which is thread programming, the programming cannot be accomplished with high efficiency.
This problem will be described in detail with reference to FIG. 12, which represents the memory space of a process controlled by a typical conventional operating system (e.g., the UNIX operating system). As is shown in FIG. 12, processes 1 and 2 each have a text space, a data space, and a stack space. Neither process can refer to any memory space controlled by the other process. Communications between the processes 1 and 2, and the exclusion control thereof, are effected by using system calls. Such system calls impose a large overhead in the conventional operating system. Hence, it is difficult to prepare a program that makes a plurality of processors cooperate to perform one processing with high efficiency.
What has been devised to solve this problem is thread programming. FIG. 13 illustrates the memory space for thread programming. A plurality of threads can share a text space, a data space, and a stack space. The threads can, therefore, achieve mutual communications and exclusion control, without using system calls, by the use of variables shared in their memory spaces. In view of this, thread programming is suitable for a multi-processor system. The threads which share memory spaces are collectively called a "task."
A method has been proposed in which a plurality of processes share a memory space as is illustrated in FIG. 14. A shared memory is a special data space which a plurality of processes have in common, and is utilized to accomplish communications between the processes or synchronization of processes.
In thread programming of ordinary type, a variable shared by a plurality of threads is used to achieve exclusion control of threads. Such a shared variable is called a "lock variable." The lock variable is used such that when it is, for example, "1," one of the threads is using a shared resource exclusively, and no other thread can use the shared resource to perform processing.
FIG. 15 schematically shows the thread 3 of task 1 which is exclusively using a shared resource corresponding to a lock variable S (held in lock state). FIG. 15 also shows the other threads 1, 2 and 4 of the task 1 which are waiting for the release of the shared resource (held in lock-wait state).
FIG. 16 is a flow chart explaining how exclusion control (or synchronization control) is carried out, using a lock variable S.
In step S1, a thread reads the lock variable S corresponding to the shared resource in order to perform processing by using the shared resource. In step S2, it is determined whether the value of the lock variable S is "0" or not. If Yes, the flow goes to step S3, in which the lock variable S is rewritten to "1." Then, the thread performs processing under exclusive control of the shared resource. In step S4, upon completion of the processing, the lock variable is rewritten to "0," allowing any other thread to use the shared resource in order to perform processing.
If No in step S2, that is, if the lock variable S is found to have the value of "1," steps S1 and S2 are repeated until the lock variable S acquires the value of "0". The thread remains in the so-called "busy-wait state" or "spin-loop state" until the variable S becomes "0".
Steps S1 to S3, i.e., the retrieval and updating of the lock variable S, need to be executed indivisibly, as a test & set instruction is. Hence, the processor instruction executed to acquire the shared resource is basically a single machine instruction.
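By way of illustration, the busy-wait acquisition of FIG. 16 can be sketched in C as follows. This is a minimal sketch in which a GCC-style atomic builtin stands in for the indivisible test & set instruction; the function and type names are illustrative only and do not appear in the figures.

    /* Minimal sketch of the conventional busy-wait lock of FIG. 16 (not the
     * FIG. 7 listing).  A GCC-style atomic builtin stands in for the
     * indivisible test & set instruction; the names are illustrative. */
    typedef volatile long lock_var_t;

    static void lock_acquire(lock_var_t *S)
    {
        /* Steps S1-S3: read and update S indivisibly; spin while S is "1". */
        while (__sync_lock_test_and_set(S, 1L) != 0L)
            ;                               /* busy-wait ("spin-loop") state */
    }

    static void lock_release(lock_var_t *S)
    {
        /* Step S4: rewrite S to "0" so other threads may use the resource. */
        __sync_lock_release(S);
    }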
The use of the lock variable makes it possible to accomplish mutual exclusion of threads, both easily and reliably. In this method, however, the processing efficiency may degrade in some cases where a plurality of threads undergo frequent mutual exclusion.
In an operating system of ordinary type, for example, a thread is made to stop processing when a predetermined time (i.e., time quantum) elapses or when the processing becomes no longer possible due to I/O waiting or the like, and scheduling is then performed.
Let us assume that the operating system selects the next thread to execute, in the condition illustrated in FIG. 15. Let us also assume that threads 1 to 4 of task 1, thread X of task 2, and threads Y and Z of task 3 have equivalent priority and that they are each executable. In this case, a scheduler can select any one of the threads 1 to 4, but the processing efficiency will greatly differ in accordance with the method of selecting the next thread to execute.
If the system is a single-processor system (having only one CPU) and thread 1, 2, or 4 is selected, the CPU will undergo spin-looping since these threads remain in the lock-wait state, thus reducing processing efficiency. If the thread 3 is selected, however, the CPU can perform processing immediately, and the other threads 1, 2 and 4 can start processing the moment the thread 3 finishes processing. In this case, the processing efficiency is high. The CPU is efficiently used when the threads X, Y and Z of the tasks 2 and 3 are executed, as well.
In summary, in the single-processor system, (1) the selecting of a thread in the lock-wait state (i.e., a thread waiting for the release of the shared resource) wastes the CPU time, and (2) the selection of a thread in the lock state (i.e., a thread locking the shared resource) or a thread which need not be set in the lock state can make an effective use of the CPU time. A conventional scheduler cannot determine whether or not a thread is in the lock-wait state. Hence, in some cases, the scheduler may select a thread remaining in the lock-wait state.
FIG. 17 is a diagram representing how the CPU of the single-processor system operates in the case where it takes each thread about 1.5 times the unit time quantum T to complete the execution that must be done exclusively. In FIG. 17, the thread 3 locks the shared resource between time t0 and time t1. In this instance, the CPU is sequentially assigned to the threads 1, 2, and 4, each in the lock-wait state, during the period between times t1 and t4. Accordingly, the CPU time is wasted in spin-looping. When the thread 3 releases the shared resource at a time between t4 and t5, the thread 1 can secure the shared resource for itself during the executing time T elapsing from t5. In the case shown in FIG. 17, the activity ratio of the CPU is 0.5 or less, and the processing efficiency is low.
In a multi-processor system, as well, the same problem will arise. FIG. 18 is a diagram explaining the case where threads are sequentially executed with a low efficiency in a multi-processor system which has four CPUs. In this instance, the threads 1, 2, and 4, each remaining in the lock-wait state, keep on spin-looping until the thread 3 releases the shared resource, and the activity ratio of each CPU is likewise 0.5 or less. Obviously, the processing efficiency is low.
The cases explained with reference to FIGS. 17 and 18, respectively, are two of the worst cases. However, phenomena similar to these actually occur in the conventional operating system which employs round robin scheduling.
The scheduling problem explained above arises also when the processes shown in FIG. 14 use the shared variable to achieve mutual exclusion.
SUMMARY OF THE INVENTION
The present invention was made for the purpose of solving the problems described above. An object of the invention is to provide a computer and a data-processing method, in which a CPU can be used with an increased efficiency.
Another object of the invention is to provide a computer or a data-processing method, using a scheduler or a scheduling method in which it is determined whether a thread or a process is in lock-wait state or not, and which can perform scheduling with the highest possible efficiency.
A computer according to the invention comprises:
means for determining whether each of a plurality of program units to execute is in a lock-wait state; and
scheduling means for allocating, with priority, a CPU to the program unit in any state other than the lock-wait state.
A data-processing method according to the present invention comprises:
a scheduling step of selecting at least one of a plurality of program units forming a program; and
a program-executing step of executing said at least one of the program units which has been selected in the scheduling step,
wherein the program-executing step includes a step of setting predetermined data when the selected program unit starts waiting for a resource shared by the program unit, and the scheduling step includes a step of selecting, with priority, any program unit other than the program unit which is waiting for the shared resource, in accordance with the predetermined data.
In the computer and the data-processing method of this invention, any program unit not in the lock-wait state is selected with priority. The time spent in spin looping is short, thus increasing the use efficiency of the processor incorporated in the computer or utilized in the method.
Additional objects and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objects and advantages of the invention may be realized and obtained by means of the instrumentalities and combinations particularly pointed out in the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate presently preferred embodiments of the invention, and together with the general description given above and the detailed description of the preferred embodiments given below, serve to explain the principles of the invention.
FIG. 1 is a diagram explaining scheduling according to an embodiment of the present invention;
FIG. 2 is a diagram representing the operating system and task incorporated in a computer system according to the embodiment of the invention;
FIG. 3 is a block diagram showing the hardware of the computer system;
FIG. 4 is a flow chart explaining how to initialize status variables and register them in a kernel in the computer system;
FIG. 5 is a table showing the correspondence between the threads controlled by a scheduler and status variables;
FIG. 6 is a flow chart explaining how threads lock a shared resource;
FIG. 7 is a C-language version of the flow chart shown in FIG. 6;
FIG. 8 is a flow chart explaining how scheduling is performed in the embodiment of the invention;
FIG. 9 is a diagram showing how the CPU incorporated in a single-processor system operates to accomplish the scheduling illustrated in FIG. 8;
FIG. 10 is a diagram showing how the CPUs incorporated in a multi-processor system operates to accomplish the scheduling illustrated in FIG. 8;
FIG. 11 is a block diagram showing an operating system according to a second embodiment of the present invention;
FIG. 12 is a schematic representation of the memory space of an ordinary process;
FIG. 13 is a schematic representation of the memory space of a ordinary thread;
FIG. 14 is a schematic representation of the memory space of a process having a shared memory;
FIG. 15 is a block diagram explaining the operation of a conventional scheduling system;
FIG. 16 is a flow chart explaining the mutual exclusion in the conventional system;
FIG. 17 is a diagram indicating how the CPU incorporated in a single-processor system operates to perform a conventional scheduling method; and
FIG. 18 is a diagram illustrating how the CPUs incorporated in a multi-processor system operates to perform the conventional scheduling method.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
An embodiment of the invention will be described, with reference to the accompanying drawings.
First, the principle of scheduling according to the embodiment will be explained, with reference to FIG. 1.
To perform scheduling, a scheduler 100 existing in the kernel 10 of an operating system refers to lock variables 106, 107, and 108, thereby to determine whether the threads A to E, to which CPUs are to be allocated, are in the lock-wait state or not. The scheduler 100 selects, with priority, the threads which are in the lock state or which need not be set in the lock state, and allocates the CPUs to these threads. FIG. 1 illustrates the case where the threads A, D, and E are in the lock state, and the threads B and C are in the lock-wait state. In the next executing time T, the scheduler 100 therefore selects the threads A, D, and E and allocates the CPUs to the threads thus selected. The CPUs waste no time in spin-looping, accomplishing highly efficient data processing.
FIG. 2 shows an operating system and a task which the scheduler 100 employs to refer to lock variables. The configuration of FIG. 2 includes status variables and status variable pointers.
A "status variable" is a variable existing in a task space and assigned to one thread. It is statically declared in the thread space and assumes the address of the corresponding lock variable when the associated thread enters into lock-wait state. The status variable takes the value of "0" when the thread is not in lock-wait state. Hence, it is assumed that the address of the lock variable is of a value other than "0."
A "status variable pointer" is a pointer existing in the space of the kernel 10 and assigned to one thread. Each thread sets the address of its own status variable in the status variable pointer. In this embodiment, a specific system call "a status-- variable-- declare ()" (later described) is provided to set the address of the status variable in the status variable pointer.
As FIG. 2 shows, the threads A and B exist in a task 21. The thread A locks a lock variable 106. (Namely, the thread A locks the shared resource associated with the lock variable 106.) The thread B remains in the lock-wait state, waiting for the lock variable 106 to be released by the thread A. (In other words, the thread B is waiting for the shared resource associated with the lock variable 106.) There are no threads which lock the lock variables 109 or 110. Thus, "1" is set in the lock variable 106, and "0" is maintained at the lock variables 109 and 110. The value "0" is set in the status variable 201 for the thread A, and the address L1 of the lock variable 106 is set in the status variable 202 of the thread B.
Existing in the kernel 10 are: the scheduler 100, a status variable pointer 301 for a thread 101, and a status variable pointer 302 for a thread 102. The address of the status variable 201 is set in the status variable pointer 301, and the address of the status variable 202 is set in the status variable pointer 302.
The hardware of a computer system which realizes the configuration of FIG. 2 is illustrated in FIG. 3. As is seen in FIG. 3, the CPUs are connected to a main memory by a bus. The main memory has an OS area and a user area. The OS area is used for storing the operating system (OS). The user area has regions, each for storing the threads and lock variables pertaining to one task, and also regions, each for storing the status variables pertaining to one thread.
Declaration and initialization of the status variables, and the registering of the status variable pointers will be explained, with reference to the C-language flow chart of FIG. 4. Here it is assumed that 10 threads are available and that 10 status variables exist.
First, in step S11, the status variables 0 to 9 are declared in the form of an array by "long int status[10];". Next, in step S12, each status variable status[i] is initialized to "0." The status variable for the thread 0 corresponds to "status[0]," the status variable for the thread 1 corresponds to "status[1]," and so forth; thus, the status variable for the thread 9 corresponds to "status[9]." In step S13, the addresses of the status variables are sent to the kernel 10 by means of the system call "status_variable_declare()," and are set in the status variable pointers.
As a result, the table shown in FIG. 5, which shows the correspondence between the ten threads 0 to 9 and the addresses "&status[0]" to "&status[9]" of their status variables, is formed in the kernel 10. This table functions as the status variable pointers.
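By way of illustration, the procedure of FIG. 4 might be written in C roughly as follows. The prototype shown for status_variable_declare() is an assumption; the description states only that the system call passes the addresses of the status variables to the kernel 10.

    /* Sketch of the FIG. 4 procedure.  The prototype of the system call
     * status_variable_declare() is assumed for illustration. */
    #define NUM_THREADS 10

    long int status[NUM_THREADS];   /* step S11: one status variable per thread */

    extern int status_variable_declare(int thread_id, long int *addr);  /* assumed */

    static void register_status_variables(void)
    {
        int i;
        for (i = 0; i < NUM_THREADS; i++) {
            status[i] = 0;                            /* step S12: not in lock-wait state    */
            status_variable_declare(i, &status[i]);   /* step S13: register in the kernel 10 */
        }
    }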
How each of the threads shown in FIG. 2 locks a shared resource will now be explained, with reference to the flow chart of FIG. 6. FIG. 7 is a C-language program for realizing the flow chart shown in FIG. 6, in which the value OK (=0) indicates that the lock variable 106 at the address L1 is not locked, and "tas()" is a function used to achieve "test & set." When the result obtained by executing this function is "OK," it means that the lock variable designated by an argument has been acquired.
As can be understood from the flow chart of FIG. 6, any thread requiring the shared resource determines, in step S21, whether or not the shared resource is locked by any other thread. If Yes, that is, if the lock variable is "1," the flow jumps to step S25, in which the address of the lock variable is set in the corresponding status variable. Then, the flow returns to step S21, whereby a spin-looping is initiated.
If No in step S21, that is, if the lock variable is "0," the flow goes to step S22, in which the value "0" is set in the status variable of the thread. Next, in step S23, the thread tries to obtain the shared resource by virtue of a test & set instruction or the like. In step S24, it is determined whether or not the shared resource has been successfully obtained. If Yes in step S24, the thread exclusively performs processing, using the shared resource it has just acquired.
If No in step S24, that is, if it is determined that the shared resource has not been obtained, the flow goes to step S25. In step S25, the address of the lock variable checked is set in the corresponding status variable. Thereafter, the flow returns to step S21, and a spin-looping is thereby initiated.
More specifically, let us assume that, in the configuration of FIG. 2, the thread 101 checks the lock variable 106 in order to acquire a shared resource, and finds that the variable 106 has the value of "0." In this case, the flow advances from step S21 to step S22. In step S22, the value "0" is set in the status variable 201. Then, in step S23, the thread 101 tries to obtain the shared resource by virtue of a test & set instruction and obtains it. This fact is confirmed in step S24. Thereafter, the thread 101 is executed, exclusively using the shared resource.
Assuming that the thread 102 needs to acquire the shared resource in this condition, the flow jumps from step S21 to step S25 since the lock variable 106 has the value of "1." In step S25, the address "&L1" of the lock variable 106 is set in the status variable 202. The flow then returns to step S21, and a spin-looping is initiated. As a result, the condition indicated in FIG. 2 is established.
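A C rendering of the FIG. 6 flow, in the spirit of the FIG. 7 listing, might look like the sketch below. The value OK (=0) and the function tas() are those named above, but the exact signature of tas() and the remaining names are assumptions.

    /* Sketch of the FIG. 6 lock-acquisition flow (cf. FIG. 7).  tas() is the
     * test & set function named above; OK (=0) means the lock variable was
     * acquired.  The signature of tas() is an assumption. */
    #define OK 0

    extern int tas(volatile long *lock_var);           /* assumed prototype */

    static void lock_with_status(volatile long *lock_var, long *my_status)
    {
        for (;;) {
            if (*lock_var != 0) {                /* step S21: already locked?         */
                *my_status = (long)lock_var;     /* step S25: record the lock address */
                continue;                        /* spin-loop back to step S21        */
            }
            *my_status = 0;                      /* step S22: not in lock-wait state  */
            if (tas(lock_var) == OK)             /* steps S23-S24: try to acquire     */
                return;                          /* acquired; use the shared resource */
            *my_status = (long)lock_var;         /* step S25: acquisition failed      */
        }
    }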
The scheduling procedure will now be described, with reference to the flow chart of FIG. 8.
First, in step S31, the scheduler 100 selects the thread having the highest priority. For instance, the scheduler 100 selects one of the threads set in a queue based on their priorities.
Next, in step S32, the scheduler 100 determines whether or not the status-variable pointer for the thread selected is registered in the space of the kernel 10. If No, the flow jumps to step S38. In step S38, the CPU is assigned to the thread selected in step S31. This is because, unless its status-variable pointer is registered in the kernel 10, the thread does not utilize a lock variable and thus cannot be in the lock-wait state.
If Yes in step S32, that is, if the status-variable pointer for the thread is registered in the kernel 10, the flow goes to step S33 so that it may ultimately be determined whether or not the CPU should be assigned to the thread. This is because the thread may possibly be in lock-wait state.
In step S33, the scheduler 100 refers to the status-variable pointer corresponding to the thread selected in step S31, obtains the address of the status variable corresponding to the thread, and reads the value of the status variable. If necessary, the scheduler 100 checks the authenticity of the address, converts a logical address to a physical address, transfers pages of data to the main memory from a secondary memory, or performs a similar operation.
In step S34, the scheduler 100 determines whether the status variable read in step S33 is "0" or not. If Yes, the flow goes to step S38. In step S38, the scheduler 100 assigns the CPU to the thread selected in step S31, and executes this thread. This is because the thread is not in lock-wait state if the status variable is "0."
If No in step S34, that is, if the status variable is not "0," indicating that the thread was in the lock-wait state during the preceding executing time, the flow goes to step S35. In step S35, the scheduler 100 directly refers to the lock variable. In step S36, the scheduler 100 determines, in accordance with the address set in the status variable, whether or not the shared resource is locked by any other thread.
If No in step S36, that is, if the shared resource has been released (lock variable=0), the flow goes to step S38, in which the scheduler 100 assigns the CPU to the thread selected in step S31 and executes this thread. If Yes in step S36, that is, if the shared resource is locked (lock variable=1), the flow goes to step S37, in which the scheduler 100 selects another thread.
Then, the flow returns to step S32. Steps S33 to S37 are repeated until the thread to be executed is identified.
As stated above, the scheduler directly refers to the lock variable for which the thread is waiting. If the lock has been released, the scheduler assigns the CPU to the thread. Therefore, the possibility of spin-looping decreases, and the CPU time is not idly consumed by spin-looping.
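In outline, the decision procedure of FIG. 8 might be coded as in the following sketch. The thread structure and the next_candidate() and dispatch() helpers are hypothetical; only the use of the status-variable pointer and the indirection to the lock variable follow the flow chart.

    /* Sketch of the FIG. 8 scheduling decision.  The thread structure and the
     * next_candidate()/dispatch() helpers are hypothetical. */
    #include <stddef.h>

    struct thread {
        long *status_ptr;   /* status-variable pointer; NULL if none is registered */
        /* ... priority, run-queue links, etc. (omitted) ... */
    };

    extern struct thread *next_candidate(void);  /* hypothetical: yields threads in priority order */
    extern void dispatch(struct thread *t);      /* hypothetical: allocates the CPU (step S38)     */

    static void schedule(void)
    {
        for (;;) {
            struct thread *t = next_candidate();   /* step S31 (or S37 on retry)          */
            if (t->status_ptr == NULL) {           /* step S32: no pointer registered     */
                dispatch(t);
                return;
            }
            long s = *t->status_ptr;               /* step S33: read the status variable  */
            if (s == 0) {                          /* step S34: not in lock-wait state    */
                dispatch(t);
                return;
            }
            if (*(volatile long *)s == 0) {        /* steps S35-S36: lock released?       */
                dispatch(t);
                return;
            }
            /* shared resource still locked: step S37, try another thread */
        }
    }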
FIG. 9 shows how the CPU incorporated in a single-processor system operates to accomplish the scheduling explained with reference to the flow chart of FIG. 8. FIG. 10 illustrates how the four CPUs incorporated in a multi-processor system operate to achieve the same scheduling. In both cases shown in FIGS. 9 and 10, thread 3 is in lock state, threads 1, 2, and 4 are in lock-wait state, and threads X, Y, and Z are in neither state.
In the single-processor system of FIG. 9, the thread 3 remains in lock state during the first executing time T (t0 to t1), and the CPU is allocated to the thread 3 during the second executing time T (t1 to t2). When the thread 3 is released from the lock state during the second executing time T, the CPU is immediately assigned to the thread 1 which has been in lock-wait state.
In the multi-processor system of FIG. 10, the thread 3, which is in the lock state, and the threads X, Y and Z, which are in neither the lock state nor the lock-wait state, are executed prior to the other threads 1, 2, and 4, which remain in the lock-wait state.
Since the CPU or CPUs are assigned to a thread or threads, which are not in lock-wait state, the spin-looping time can be reduced in both the single-processor system (FIG. 9) and the multi-processor system (FIG. 10).
The scheduling method according to the present invention can be applied also to a system of the type shown in FIG. 14 in which processes share a single memory and utilize the shared variable stored in the shared memory to achieve mutual exclusion. FIG. 11 shows a system of this type, wherein processes A and B use the shared variable stored in a shared memory to accomplish mutual exclusion.
As is shown in FIG. 11, the processes A and B have status variables A and B, respectively. The addresses of the status variables A and B are registered in two status-variable pointers incorporated in the kernel, respectively. While one of the processes remains in the lock state, exclusively using a shared resource, the lock variable corresponding to the shared resource has the value of "1," and the status variable of that process has the value of "0." Meanwhile, the address of the lock variable is set in the status variable of the other process, which is held in the lock-wait state. The status variables are declared and initialized, and their addresses are registered in the status-variable pointers, in the same procedure as is illustrated in FIG. 4. Further, each process acquires the shared resource, essentially in the same manner as is illustrated in FIG. 6. Also, the scheduling procedure is identical to the procedure of FIG. 8, provided that the word "THREAD" used in FIG. 8 is read as "PROCESS."
In the embodiments described above, a CPU is not allocated to any thread or process which is in the lock-wait state. There is the possibility, however, that a thread or process which locks the shared resource and is being executed will release it in a relatively short time. Hence, a CPU may nonetheless be assigned to another thread or process remaining in the lock-wait state. This modification works effectively in a multi-processor system which has a large number of processors.
Additional advantages and modifications will readily occur to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details, representative devices, and illustrated examples shown and described herein. Accordingly, various modifications may be made without departing from the spirit or scope of the general inventive concept as defined by the appended claims and their equivalents.

Claims (6)

What is claimed is:
1. A computer system comprising:
a plurality of program units;
a shared resource which can be locked by any one of said program units;
a processor unit including at least one processor means for executing one of said program units;
control means for holding a status variable indicating that a corresponding one of said program units is in a lock-wait state when said program unit uses said shared resource, and for holding a lock variable indicating whether said shared resource is locked, said status variable holding an address of said lock variable when said corresponding program unit is in said lock-wait state; and
scheduling means including:
selecting means for selecting one of said program units as a candidate for allocation of said processor unit;
a status-variable pointer for holding an address of said status variable held by said control means;
means for allocating said processor unit to said program unit selected by said selecting means when said scheduling means does not have a status-variable pointer corresponding to said selected program unit;
referring means for referring to said status-variable pointer to obtain said address of said status variable corresponding to said selected program unit when said scheduling means has a status-variable pointer corresponding to said selected program unit;
first referring means for referring to said status variable in accordance with said address obtained by said referring means;
means for allocating said processor unit to said selected program unit when said status variable referred to by said first referring means indicates that said selected program unit is not in a lock-wait state;
second referring means for referring to said lock variable corresponding to said address held in said status variable referred to by said first referring means when said selected program unit is in a lock-wait state; and
means for allocating said processor unit to said selected program unit when said lock variable referred to by said second referring means indicates that said shared resource is unlocked.
2. The computer system according to claim 1, wherein said program units comprise one of threads and processes.
3. The computer system according to claim 1, wherein said processor unit includes a plurality of processor means, and said scheduling means allocates said processor means to said program units to execute said program units in parallel.
4. The computer system according to claim 1, wherein each of said program units locks said shared resource when said lock variable indicates that said shared resource is unlocked, said program unit remaining in said lock-wait state when said lock variable indicates that said shared resource is locked.
5. The computer system according to claim 1, wherein said control means includes means for changing said lock variable to a value indicating that said shared resource is locked and for changing said status variable to a value indicating that said program unit is not in lock-wait state when said program unit locks said shared resource.
6. A scheduling method for allocating a processor unit to a program unit in a computer system comprising a plurality of program units and a shared resource which can be locked by any one of the program units, said method comprising the steps of:
holding a status variable indicating that one of the program units is in a lock-wait state when using the shared resource, and for holding a lock variable indicating that the shared resource is locked when being used;
setting an address of the lock variable to the status variable of any program unit that is in a lock-wait state;
selecting one of the program units as a candidate for allocating the processor unit;
holding an address of the status variable in a scheduler;
allocating the processor unit to the selected program unit when the scheduler does not hold the address of the status variable corresponding to the selected program unit;
obtaining the address of the status variable which corresponds to the selected program unit when the scheduler holds the address of the status variable corresponding to the selected program unit;
referring to the status variable in accordance with the address obtained;
allocating the processor unit to the selected program unit when the status variable referred to indicates that the selected program unit is not in a lock-wait state;
referring to the lock variable corresponding to the address held in the status variable referred to; and
allocating the processor unit to the selected program unit when the lock variable referred to indicates that the shared resource is unlocked.
US08/011,142 1992-01-30 1993-01-29 System for scheduling programming units to a resource based on status variables indicating a lock or lock-wait state thereof Expired - Lifetime US5524247A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP4-015120 1992-01-30
JP4015120A JP2866241B2 (en) 1992-01-30 1992-01-30 Computer system and scheduling method

Publications (1)

Publication Number Publication Date
US5524247A true US5524247A (en) 1996-06-04

Family

ID=11879967

Family Applications (1)

Application Number Title Priority Date Filing Date
US08/011,142 Expired - Lifetime US5524247A (en) 1992-01-30 1993-01-29 System for scheduling programming units to a resource based on status variables indicating a lock or lock-wait state thereof

Country Status (2)

Country Link
US (1) US5524247A (en)
JP (1) JP2866241B2 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4857325B2 (en) * 2008-10-31 2012-01-18 パナソニック株式会社 Task switching device, method and program
JP5990139B2 (en) * 2013-08-01 2016-09-07 日本電信電話株式会社 Execution control device and execution control method
JP6443125B2 (en) * 2015-02-25 2018-12-26 富士通株式会社 Compiler program, computer program, and compiler apparatus
JP7014010B2 (en) * 2018-03-30 2022-02-01 日本電気株式会社 Job execution management system and job execution management method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5422738A (en) * 1977-07-21 1979-02-20 Nec Corp Lock control device
JPH02204838A (en) * 1989-02-03 1990-08-14 Hitachi Ltd Task priority managing system
JPH031244A (en) * 1989-05-29 1991-01-07 Nec Corp Task dispatch system
JPH0365732A (en) * 1989-08-03 1991-03-20 Matsushita Electric Ind Co Ltd Resource managing method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4989133A (en) * 1984-11-30 1991-01-29 Inmos Limited System for executing, scheduling, and selectively linking time dependent processes based upon scheduling time thereof
US4802164A (en) * 1986-01-10 1989-01-31 Hitachi Ltd. Method and apparatus for testing a multi-processor system
US5274809A (en) * 1988-05-26 1993-12-28 Hitachi, Ltd. Task execution control method for a multiprocessor system with enhanced post/wait procedure
US5377352A (en) * 1988-05-27 1994-12-27 Hitachi, Ltd. Method of scheduling tasks with priority to interrupted task locking shared resource
US5179702A (en) * 1989-12-29 1993-01-12 Supercomputer Systems Limited Partnership System and method for controlling a highly parallel multiprocessor using an anarchy based scheduler for parallel execution thread scheduling

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Maurice J. Bach: "UNIX Kernel Design", 1986, ISBN 4-320-02551-2 C3041.

Cited By (105)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5590326A (en) * 1993-09-13 1996-12-31 Kabushiki Kaisha Toshiba Shared data management scheme using shared data locks for multi-threading
US5715395A (en) * 1994-09-12 1998-02-03 International Business Machines Corporation Method and apparatus for reducing network resource location traffic in a network
US5815727A (en) * 1994-12-20 1998-09-29 Nec Corporation Parallel processor for executing plural thread program in parallel using virtual thread numbers
US5822588A (en) * 1995-06-09 1998-10-13 Sun Microsystems, Inc. System and method for checking the use of synchronization locks in a multi-threaded target program
US5796954A (en) * 1995-10-13 1998-08-18 Apple Computer, Inc. Method and system for maximizing the use of threads in a file server for processing network requests
US6192388B1 (en) * 1996-06-20 2001-02-20 Avid Technology, Inc. Detecting available computers to participate in computationally complex distributed processing problem
US6289410B1 (en) * 1996-07-18 2001-09-11 Electronic Data Systems Corporation Method and system for maintaining consistency of shared objects based upon instance variable locking
US5887166A (en) * 1996-12-16 1999-03-23 International Business Machines Corporation Method and system for constructing a program including a navigation instruction
US6223204B1 (en) * 1996-12-18 2001-04-24 Sun Microsystems, Inc. User level adaptive thread blocking
US6073159A (en) * 1996-12-31 2000-06-06 Compaq Computer Corporation Thread properties attribute vector based thread selection in multithreading processor
US6418460B1 (en) * 1997-02-18 2002-07-09 Silicon Graphics, Inc. System and method for finding preempted threads in a multi-threaded application
US6026427A (en) * 1997-11-21 2000-02-15 Nishihara; Kazunori Condition variable to synchronize high level communication between processing threads
US6947987B2 (en) * 1998-05-29 2005-09-20 Ncr Corporation Method and apparatus for allocating network resources and changing the allocation based on dynamic workload changes
US6622155B1 (en) 1998-11-24 2003-09-16 Sun Microsystems, Inc. Distributed monitor concurrency control
US6295611B1 (en) 1998-12-14 2001-09-25 Sun Microsystems, Inc. Method and system for software recovery
US6430703B1 (en) 1998-12-14 2002-08-06 Sun Microsystems, Inc. Method and system for software recovery
US7752621B2 (en) 1999-04-05 2010-07-06 International Business Machines Corporation System, method and program for implementing priority inheritance in an operating system
US6874144B1 (en) 1999-04-05 2005-03-29 International Business Machines Corporation System, method, and program for implementing priority inheritance in an operating system
US20050060710A1 (en) * 1999-04-05 2005-03-17 International Business Machines Corporation System, method and program for implementing priority inheritance in an operating system
US20060155805A1 (en) * 1999-09-01 2006-07-13 Netkingcall, Co., Ltd. Scalable server architecture based on asymmetric 3-way TCP
US7483967B2 (en) 1999-09-01 2009-01-27 Ximeta Technology, Inc. Scalable server architecture based on asymmetric 3-way TCP
US6493741B1 (en) 1999-10-01 2002-12-10 Compaq Information Technologies Group, L.P. Method and apparatus to quiesce a portion of a simultaneous multithreaded central processing unit
US6675192B2 (en) 1999-10-01 2004-01-06 Hewlett-Packard Development Company, L.P. Temporary halting of thread execution until monitoring of armed events to memory location identified in working registers
US20040073905A1 (en) * 1999-10-01 2004-04-15 Emer Joel S. Method and apparatus to quiesce a portion of a simultaneous multithreaded central processing unit
US6810422B1 (en) 2000-01-14 2004-10-26 Lockheed Martin Tactical Defense Systems System and method for probabilistic quality of communication service determination
US7451296B2 (en) 2000-01-21 2008-11-11 Intel Corporation Method and apparatus for pausing execution in a processor or the like
US6671795B1 (en) * 2000-01-21 2003-12-30 Intel Corporation Method and apparatus for pausing execution in a processor or the like
US20040117604A1 (en) * 2000-01-21 2004-06-17 Marr Deborah T. Method and apparatus for pausing execution in a processor or the like
US7870225B2 (en) 2000-10-13 2011-01-11 Zhe Khi Pak Disk system adapted to be directly attached to network
US20060010287A1 (en) * 2000-10-13 2006-01-12 Han-Gyoo Kim Disk system adapted to be directly attached
US7792923B2 (en) 2000-10-13 2010-09-07 Zhe Khi Pak Disk system adapted to be directly attached to network
US7849153B2 (en) 2000-10-13 2010-12-07 Zhe Khi Pak Disk system adapted to be directly attached
US20020069245A1 (en) * 2000-10-13 2002-06-06 Han-Gyoo Kim Disk system adapted to be directly attached to network
WO2002071218A2 (en) * 2001-03-05 2002-09-12 Koninklijke Philips Electronics N.V. Method of and system for withdrawing budget from a blocking task
US20020124043A1 (en) * 2001-03-05 2002-09-05 Otero Perez Clara Maria Method of and system for withdrawing budget from a blocking task
WO2002071218A3 (en) * 2001-03-05 2003-10-30 Koninkl Philips Electronics Nv Method of and system for withdrawing budget from a blocking task
US20030014569A1 (en) * 2001-07-16 2003-01-16 Han-Gyoo Kim Scheme for dynamically connecting I/O devices through network
US7783761B2 (en) 2001-07-16 2010-08-24 Zhe Khi Pak Scheme for dynamically connecting I/O devices through network
US20050149682A1 (en) * 2001-10-09 2005-07-07 Han-Gyoo Kim Virtual multiple removable media jukebox
US20030126186A1 (en) * 2001-12-31 2003-07-03 Dion Rodgers Method and apparatus for suspending execution of a thread until a specified memory access occurs
US20080034190A1 (en) * 2001-12-31 2008-02-07 Dion Rodgers Method and apparatus for suspending execution of a thread until a specified memory access occurs
US7363474B2 (en) 2001-12-31 2008-04-22 Intel Corporation Method and apparatus for suspending execution of a thread until a specified memory access occurs
US8719827B2 (en) 2002-01-09 2014-05-06 Panasonic Corporation Processor and program execution method capable of efficient program execution
US7921281B2 (en) 2002-01-09 2011-04-05 Panasonic Corporation Processor and program execution method capable of efficient program execution
US8006076B2 (en) 2002-01-09 2011-08-23 Panasonic Corporation Processor and program execution method capable of efficient program execution
US20080215858A1 (en) * 2002-01-09 2008-09-04 Kazuya Furukawa Processor and program execution method capable of efficient program execution
US20080209192A1 (en) * 2002-01-09 2008-08-28 Kazuya Furukawa Processor and program execution method capable of efficient program execution
US20080209162A1 (en) * 2002-01-09 2008-08-28 Kazuya Furukawa Processor and program execution method capable of efficient program execution
US9823946B2 (en) 2002-01-09 2017-11-21 Socionext Inc. Processor and program execution method capable of efficient program execution
US7930520B2 (en) 2002-01-09 2011-04-19 Panasonic Corporation Processor and program execution method capable of efficient program execution
CN1311333C (en) * 2002-11-12 2007-04-18 英特尔公司 Method and device for serial exclusive body
US7950016B2 (en) 2003-03-13 2011-05-24 Panasonic Corporation Apparatus for switching the task to be completed in a processor by switching to the task assigned time slot
US8276156B2 (en) 2003-03-13 2012-09-25 Panasonic Corporation Task switching based on assigned time slot
US7735087B2 (en) 2003-03-13 2010-06-08 Panasonic Corporation Task switching apparatus, method and program
US20040181791A1 (en) * 2003-03-13 2004-09-16 Kunihiko Hayashi Task switching apparatus, method and program
US20040216112A1 (en) * 2003-04-23 2004-10-28 International Business Machines Corporation System and method for thread prioritization during lock processing
US7278141B2 (en) 2003-04-23 2007-10-02 International Business Machines Corporation System and method for adding priority change value corresponding with a lock to a thread during lock processing
US8276134B2 (en) 2003-09-26 2012-09-25 International Business Machines Corporation Transforming locks in software loops
US7404183B2 (en) 2003-09-26 2008-07-22 International Business Machines Corporation Transforming locks in software loops
US20090043971A1 (en) * 2003-09-26 2009-02-12 Ximeta Technology, Inc. Data integrity for data storage devices shared by multiple hosts via a network
US20080250396A1 (en) * 2003-09-26 2008-10-09 International Business Machines Corporation Transforming Locks in Software Loops
US20050081185A1 (en) * 2003-09-26 2005-04-14 International Business Machines Corporation Transforming locks in software loops
US7457880B1 (en) * 2003-09-26 2008-11-25 Ximeta Technology, Inc. System using a single host to receive and redirect all file access commands for shared data storage device from other hosts on a network
US7664836B2 (en) 2004-02-17 2010-02-16 Zhe Khi Pak Device and method for booting an operation system for a computer from a passive directly attached network device
US20050193189A1 (en) * 2004-02-17 2005-09-01 Han-Gyoo Kim Device and method for booting an operating system for a computer from a passive directly attached network device
US20050193017A1 (en) * 2004-02-19 2005-09-01 Han-Gyoo Kim Portable multimedia player/recorder that accesses data contents from and writes to networked device
US20060069884A1 (en) * 2004-02-27 2006-03-30 Han-Gyoo Kim Universal network to device bridge chip that enables network directly attached device
US20080104600A1 (en) * 2004-04-02 2008-05-01 Symbian Software Limited Operating System for a Computing Device
US8161481B2 (en) 2004-04-02 2012-04-17 Nokia Corporation Operating system providing a mutual exclusion mechanism
US7496918B1 (en) * 2004-06-01 2009-02-24 Sun Microsystems, Inc. System and methods for deadlock detection
US8046760B2 (en) * 2004-07-09 2011-10-25 Hewlett-Packard Development Company, L.P. Lock contention pinpointing
US20060010444A1 (en) * 2004-07-09 2006-01-12 Seidman David I Lock contention pinpointing
US7746900B2 (en) 2004-07-22 2010-06-29 Zhe Khi Pak Low-level communication layers and device employing same
US20060045130A1 (en) * 2004-07-22 2006-03-02 Han-Gyoo Kim Low-level communication layers and device employing same
US20070008988A1 (en) * 2004-08-23 2007-01-11 Han-Gyoo Kim Enhanced network direct attached storage controller
US20060067356A1 (en) * 2004-08-23 2006-03-30 Han-Gyoo Kim Method and apparatus for network direct attached storage
US7860943B2 (en) 2004-08-23 2010-12-28 Zhe Khi Pak Enhanced network direct attached storage controller
US20060075404A1 (en) * 2004-10-06 2006-04-06 Daniela Rosu Method and system for scheduling user-level I/O threads
US20080263554A1 (en) * 2004-10-06 2008-10-23 International Business Machines Corporation Method and System for Scheduling User-Level I/O Threads
US7849257B1 (en) 2005-01-06 2010-12-07 Zhe Khi Pak Method and apparatus for storing and retrieving data
US20070011687A1 (en) * 2005-07-08 2007-01-11 Microsoft Corporation Inter-process message passing
US7823158B2 (en) * 2005-08-18 2010-10-26 International Business Machines Corporation Adaptive scheduling and management of work processing in a target context in resource contention
US20070044104A1 (en) * 2005-08-18 2007-02-22 International Business Machines Corporation Adaptive scheduling and management of work processing in a target context in resource contention
US20070288895A1 (en) * 2006-06-08 2007-12-13 Sun Microsystems, Inc. Configuration tool with multi-level priority semantic
US7831960B2 (en) * 2006-06-08 2010-11-09 Oracle America, Inc. Configuration tool with multi-level priority semantic
US20080288926A1 (en) * 2006-06-09 2008-11-20 International Business Machines Corporation Computer Implemented Method and System for Accurate, Efficient and Adaptive Calling Context Profiling
US8122438B2 (en) 2006-06-09 2012-02-21 International Business Machines Corporation Computer implemented method and system for accurate, efficient and adaptive calling context profiling
US7818722B2 (en) * 2006-06-09 2010-10-19 International Business Machines Corporation Computer implemented method and system for accurate, efficient and adaptive calling context profiling
US20070288908A1 (en) * 2006-06-09 2007-12-13 International Business Machines Corporation Computer implemented method and system for accurate, efficient and adaptive calling context profiling
US20090063881A1 (en) * 2007-08-31 2009-03-05 Mips Technologies, Inc. Low-overhead/power-saving processor synchronization mechanism, and applications thereof
US20090172686A1 (en) * 2007-12-28 2009-07-02 Chen Chih-Ho Method for managing thread group of process
CN102209955A (en) * 2008-11-07 2011-10-05 松下电器产业株式会社 Resource exclusion control method and exclusion control system in multiprocessors and technology associated with the same
US20110202930A1 (en) * 2008-11-07 2011-08-18 Panasonic Corporation Resource exclusion control method and exclusion control system in multiprocessors and technology associated with the same
US20110246694A1 (en) * 2008-12-12 2011-10-06 Panasonic Corporation Multi-processor system and lock arbitration method thereof
US10423451B2 (en) 2009-10-26 2019-09-24 Microsoft Technology Licensing, Llc Opportunistically scheduling and adjusting time slices
US9086922B2 (en) 2009-10-26 2015-07-21 Microsoft Technology Licensing, Llc Opportunistically scheduling and adjusting time slices
US20110099551A1 (en) * 2009-10-26 2011-04-28 Microsoft Corporation Opportunistically Scheduling and Adjusting Time Slices
EP2850555A4 (en) * 2012-05-16 2016-01-13 Nokia Technologies Oy Method in a processor, an apparatus and a computer program product
US9443095B2 (en) 2012-05-16 2016-09-13 Nokia Corporation Method in a processor, an apparatus and a computer program product
WO2013171362A1 (en) * 2012-05-16 2013-11-21 Nokia Corporation Method in a processor, an apparatus and a computer program product
US20150149737A1 (en) * 2013-11-22 2015-05-28 Yahoo! Inc. Method or system for access to shared resource
US10203995B2 (en) * 2013-11-22 2019-02-12 Excalibur Ip, Llc Method or system for access to shared resource
US11449326B2 (en) 2017-08-09 2022-09-20 Servicenow, Inc. Systems and methods for recomputing services
US20190050254A1 (en) * 2017-08-09 2019-02-14 Servicenow, Inc. Systems and methods for recomputing services
US10705875B2 (en) * 2017-08-09 2020-07-07 Servicenow, Inc. Systems and methods for recomputing services

Also Published As

Publication number Publication date
JP2866241B2 (en) 1999-03-08
JPH05204675A (en) 1993-08-13

Similar Documents

Publication Publication Date Title
US5524247A (en) System for scheduling programming units to a resource based on status variables indicating a lock or lock-wait state thereof
US4847754A (en) Extended atomic operations
US5404521A (en) Opportunistic task threading in a shared-memory, multi-processor computer system
US4914570A (en) Process distribution and sharing system for multiple processor computer system
US7506339B2 (en) High performance synchronization of accesses by threads to shared resources
US6173442B1 (en) Busy-wait-free synchronization
US6622155B1 (en) Distributed monitor concurrency control
US4779194A (en) Event allocation mechanism for a large data processing system
US5485626A (en) Architectural enhancements for parallel computer systems utilizing encapsulation of queuing allowing small grain processing
US5893157A (en) Blocking symbol control in a computer system to serialize accessing a data resource by simultaneous processor requests
US4631674A (en) Active wait
US5448732A (en) Multiprocessor system and process synchronization method therefor
US5666523A (en) Method and system for distributing asynchronous input from a system input queue to reduce context switches
US5291581A (en) Apparatus and method for synchronization of access to main memory signal groups in a multiprocessor data processing system
US6662364B1 (en) System and method for reducing synchronization overhead in multithreaded code
US6148325A (en) Method and system for protecting shared code and data in a multitasking operating system
US6581089B1 (en) Parallel processing apparatus and method of the same
EP0362903B1 (en) A special purpose processor for off-loading many operating system functions in a large data processing system
Takada et al. A novel approach to multiprogrammed multiprocessor synchronization for real-time kernels
CA2252238A1 (en) Method and apparatus for sharing a time quantum
Michael et al. Relative performance of preemption-safe locking and non-blocking synchronization on multiprogrammed shared memory multiprocessors
US6701429B1 (en) System and method of start-up in efficient way for multi-processor systems based on returned identification information read from pre-determined memory location
EP0697653A1 (en) Processor system and method for controlling the same
JP2804478B2 (en) Task control system and online transaction system
EP0297895B1 (en) Apparatus and method using lockout for synchronization of access to main memory signal groups in a multiprocessor data processing system

Legal Events

Date Code Title Description
AS Assignment

Owner name: KABUSHIKI KAISHA TOSHIBA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST.;ASSIGNOR:NAGASAWA, ATSUHI;REEL/FRAME:006413/0260

Effective date: 19930121

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12