CA2334393A1 - Controller for multiple instruction thread processors - Google Patents

Controller for multiple instruction thread processors

Info

Publication number
CA2334393A1
CA2334393A1 (application CA002334393A)
Authority
CA
Canada
Prior art keywords
thread
execution
control
threads
fifo
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002334393A
Other languages
French (fr)
Inventor
Gordon Taylor Davis
Marco C. Heddes
Ross Boyd Leavens
Fabrice Jean Verplanken
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Publication of CA2334393A1 publication Critical patent/CA2334393A1/en
Abandoned legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/16Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F9/3802Instruction prefetching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F9/3836Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution
    • G06F9/3851Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution from multiple instruction streams, e.g. multistreaming

Abstract

A prefetch buffer is used in connection with a plurality of independent thread processes in such a manner as to avoid an immediate stall when execution is given to an idle thread. A mechanism is established to control the switching from one thread to another within a processor in order to achieve more efficient utilization of processor resources. This mechanism will grant temporary control to an alternate execution thread when a short latency event is encountered, and will grant full control to an alternate execution thread when a long latency event is encountered.

This thread control mechanism comprises a priority FIFO, which is configured such that its outputs control execution priority for two or more execution threads within a processor, based on the length of time each execution thread has been resident within the FIFO. The FIFO is loaded with an execution thread number each time a new task (a networking packet requiring classification and routing within a network) is dispatched for processing, where the execution thread number loaded into the FIFO corresponds to the thread number which is assigned to process the task. When a particular execution thread completes processing of a particular task, and enqueues the results for subsequent handling, the priority FIFO is further controlled to remove the corresponding execution thread number from the FIFO. When an active execution thread encounters a long latency event, the corresponding thread number within the FIFO is removed from a high priority position in the FIFO, and placed into the lowest priority position of the FIFO.

This thread control mechanism also comprises a Thread Control State Machine for each execution thread supported by the processor. The Thread Control State Machine comprises four states. An Init state is used while an execution thread is waiting for a task to process. Once a task is enqueued for processing, a Ready state is used to request execution cycles. Once access to the processor is granted, an Execute state is used to support actual processor execution. Requests for additional processor cycles are made from both the Ready state and the Execute state. The state machine is returned to the Init state once processing has been completed for the assigned task. A Wait state is used to suspend requests for execution cycles while the execution thread is stalled due to either a long-latency event or a short-latency event.

This thread control mechanism further comprises an arbiter which uses thread numbers from the priority FIFO to determine which execution thread should be granted access to processor resources. The arbiter processes requests for execution control from each execution thread, and selects one execution thread to be granted access to processor resources for each processor execution cycle by matching thread numbers from requesting execution threads with corresponding thread numbers in the priority FIFO.

Description

CONTROLLER FOR MULTIPLE INSTRUCTION THREAD PROCESSORS
CROSS-REFERENCE TO RELATED PATENT APPLICATIONS
The present application relates to application Serial No. , Docket RAL9-2000-US1, entitled "NETWORK PROCESSOR WITH MULTIPLE INSTRUCTION THREADS", filed: and assigned to the assignee of the present application.
FIELD OF THE INVENTION
This invention relates to computer systems in general, and in particular to computer systems in which the computer executes multiple threads of instruction so as to minimize the impact of latency in accessing data, especially data formatted in tree structures.
BACKGROUND OF THE INVENTION
Network processors are designed for efficient implementation of switching and routing functions. The critical performance measurement for network processors is the number of machine cycles required to process a typical packet or data frame. This processing is typically broken down into two major parts: the instructions executed by the network processor CPU (central processing unit), and the access of routing and control tables, which are typically stored in a memory source shared among several network processor CPUs. CPU instruction execution is typically stalled during access to the routing tables, adding significantly to the number of machine cycles required to process a packet. In fact, the time to complete an access to one of these tree structures may be 2 or 3 times longer than the time required by the CPU to set up for the access and process the resulting data. The data for these routing and control tables is typically formatted in a tree structure which requires a specialized coprocessor or tree-search engine (TSE) to efficiently access the desired table entry. Other coprocessors, set up to work with data in local data storage, may also stall the CPU, but for shorter durations.
The related art reveals a number of previously patented implementation systems using multiple threads:

US Patent #5,357,617 (Davis, et al.) - This patent deals with switching from one execution thread to another with zero overhead. More specifically, the CPU continuously switches between multiple instruction threads in a time-division multiplexed allocation of CPU resources. In other words, the multiple instruction threads are controlled via a static interleaving mechanism.
US Patent #5,404,469 - This patent extends the concept of time-division multiplexed allocation of CPU resources to a processor with a VLIW (very long instruction word) architecture.
US Patent #5,694,604 - This patent describes a typical software multiprocessing approach in which a selected instruction thread is allocated a specified amount of time to execute, after which its context is saved, and a previous context for the next instruction thread is restored. In this type of system, each thread typically executes for an extended period of time, since there is significant cost (in machine cycles) to save and restore machine context when switching from one thread to another.
US Patent #5,812,811 - This patent refers to running multiple instruction threads in parallel which are part of the same program, in order to accelerate completion of the program. It also deals with speculative execution of paths which may or may not be required to complete the execution of the program.
US Patent #5,933,627 - This patent describes switching to an alternate thread when the CPU is stalled because required data is not found in local cache. The system requires the CPU to explicitly control which thread would gain control of the CPU. This patent also describes multiple threads as pieces of the same program, rather than independent processes.
US Patent #5,694,603 - This patent is another description of a typical software multiprocessing approach which includes preemptive switching from one thread to another.

SUMMARY OF THE INVENTION
It is an object of the current invention to control the switching from one thread to another within a Processor (such as a Network Processor) in order to achieve more efficient utilization of processor resources.
Another object of the current invention is to grant temporary control to an alternate execution thread when a short latency event is encountered, and to grant full control to an alternate execution thread when a long latency event is encountered.
The invention comprises a priority FIFO, which is configured such that its outputs control execution priority for two or more execution threads within a processor, based on the length of time each execution thread has been resident within the FIFO. The FIFO is loaded with an execution thread number each time a new task (such as a networking packet requiring classification and routing within a network) is dispatched for processing, where the execution thread number loaded into the FIFO corresponds to the thread number which is assigned to process the task.
When a particular execution thread completes processing of a particular task, and enqueues the results for subsequent handling, the priority FIFO is further controlled to remove the corresponding execution thread number from the FIFO. When an active execution thread encounters a long latency event, the corresponding thread number within the FIFO is removed from a high priority position in the FIFO, and placed into the lowest priority position of the FIFO.
The invention also comprises a Thread Control State Machine for each execution thread supported by the processor. The Thread Control State Machine further comprises four states. An Init (Initial) state is used while an execution thread is waiting for a task to process. Once a task is enqueued for processing, a Ready state is used to request execution cycles.
Once access to the processor is granted, an Execute state is used to support actual processor execution. Requests for additional processor cycles are made from both the Ready state and the Execute state. The state machine is returned to the Init state once processing has been completed for the assigned task. A Wait state is used to suspend requests for execution cycles while the execution thread is stalled due to either a long-latency event or a short-latency event.

The current invention further comprises an arbiter which uses thread numbers from the priority FIFO to determine which execution thread should be granted access to processor resources. The arbiter further processes requests for execution control from each execution thread, and selects one execution thread to be granted access to processor resources for each processor execution cycle by matching thread numbers from requesting execution threads with corresponding thread numbers in the priority FIFO. The logical function of the arbiter is further defined by the following Boolean expression:
G_n = R_n · [(P_A = n) + R_PA' · (P_B = n) + R_PA' · R_PB' · (P_C = n) + ...]
Where:
G_n is a grant to a given thread n;
R_n is a request from a given thread n;
R_PA' and R_PB' denote the complements of the requests from threads P_A and P_B (that thread is not requesting);
P_A, P_B and P_C represent threads ranked by alphabetical subscript according to priority;
n is a subscript identifying a thread by its bit or binary number.
The invention also involves the use of a prefetch buffer in connection with a plurality of independent thread processes in such a manner as to avoid an immediate stall when execution is granted to an idle thread. This involves determining whether the buffer is being utilized by an active execution thread. During periods that the buffer is not being used by the active execution thread, the buffer is enabled to prefetch instructions for an idle execution thread.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 illustrates a network processor architecture with two coprocessors;
Figure 2 illustrates one embodiment of the current invention;
Figure 3 is a thread execution control diagram; and
Figure 4 shows waveforms for two execution threads and a single CPU.

DETAILED DESCRIPTION OF THE INVENTION
The current invention is distinct from the related art in that the invention specifically relates to independent processes in each of the instruction execution threads (each of which relates to a different packet being processed), and the invention specifically deals with latency in accessing data.
Each of the execution threads is an independent process executing a sequence of instructions as the threads are allowed to gain access to the processor hardware. An additional aspect of the current invention is that the tree search coprocessor is pipelined to enable multiple execution threads to each have access simultaneously but at different phases (overlapping) in the tree search pipeline.
Preferably, the invention employs multiple instruction execution threads with zero overhead to switch execution from one thread to the next. The threads are queued to provide rapid distribution of access to shared memory. Queueing of the threads serves to get the thread of highest priority to its long latency event as quickly as possible.
Another aspect of the current invention relates to multiple instruction prefetch buffers, one for each execution thread. These prefetch buffers enable prefetch of instructions for idle execution threads during intervals where instruction bandwidth is not being fully utilized by active execution threads. This helps to ensure that when control is switched to a new execution thread, the instruction prefetch buffer for that thread will be full, thus avoiding the possibility of the new thread stalling immediately due to lack of available instructions to execute. Accordingly, access priority to instruction memory is controlled so that the currently executing thread receives top priority, while the execution thread positioned to take control if the current thread stalls is given second priority.
Likewise, the execution thread at the bottom of the execution queue is given last priority in instruction fetch access.
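To make the fetch-priority rule concrete, here is a minimal C sketch (the names fetch_pick and QDEPTH, the queue depth, and the array-based priority ranking are illustrative assumptions, not taken from the patent): it scans threads in execution-queue order, active thread first, and gives the next instruction-memory slot to the highest-ranked thread whose prefetch queue still has room.

#define NTHREADS 2
#define QDEPTH   8                /* eight-instruction prefetch queue per thread */

typedef struct {
    int fill;                     /* instructions currently buffered */
} PrefetchQueue;

/* prio[0] is the active thread, prio[1] the thread positioned to take
 * over if it stalls, and so on down the execution queue. Returns the
 * thread whose queue gets the next instruction-memory access, or -1
 * if every queue is already full. */
int fetch_pick(const PrefetchQueue q[NTHREADS], const int prio[NTHREADS])
{
    for (int rank = 0; rank < NTHREADS; rank++) {
        int t = prio[rank];
        if (q[t].fill < QDEPTH)   /* queue has room: worth fetching */
            return t;             /* highest-ranked such thread wins */
    }
    return -1;                    /* no fetch issued this cycle */
}

Because idle threads are served only when higher-ranked queues are full, their buffers fill during otherwise unused fetch cycles, which is exactly what lets a newly activated thread start without an immediate stall.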
An additional aspect of the current invention is a thread control state machine which determines the current active execution thread and which grants full control to the next thread when execution of the active thread stalls due to a long latency event (e.g., a tree search), or temporary control to the next thread when execution stalls due to a short latency event (e.g., coprocessor action in local data storage, or instruction fetch latency). If temporary control is granted to an alternate thread, then control is returned to the original thread as soon as it is unblocked. In contrast, if full control is granted to an alternate thread, that alternate thread will remain in control until it becomes blocked.
This avoids wasting cycles for short latency events, but also allows the primary execution thread to reach the long latency event sooner. Otherwise, multiple execution threads might reach a long latency event at about the same time which would reduce the benefit of overlapping one thread's CPU execution with an alternate thread's tree search.
Figure 1 shows a typical network processor configuration comprising a single thread central processing unit (CPU) 10 and a plurality of general purpose registers 12 implemented in a single register array in two-way communication with the CPU. Instructions are transmitted between an instruction memory 16 and a single prefetch queue 18 coupled to the CPU. A first coprocessor 20 communicates with the CPU 10 and accesses data contained in remote storage 22.
This remote storage can share data with a plurality of other processors (not shown) through the coprocessor 20.
Local data storage 26 is used exclusively by a second coprocessor 24 and is not shared with the other processors. In the case of multiple threads, all of the threads have access to the local data storage.
Turning now to Figure 2, where the same numbers are used to refer to the identical components as in Figure 1, there is shown a CPU 110 configured with multiple execution threads.
Instructions are transmitted between an instruction memory 16 and prefetch queues 118 coupled to the CPU 110. One prefetch queue is used for each independent execution thread.
A plurality of general purpose registers 112 are implemented in a single register array serving the CPU. The array has one address bit that is subject to control by a thread execution control (TEC) 30, which determines which part of the register array is used by a thread. The remaining address bit or bits are controlled by the CPU. In a preferred embodiment, the local storage 126 is segmented so that each thread has its own logical private space in the local storage. For example, two threads would each have ½ of the space, and four threads would each have ¼ of the local storage space. The TEC 30 also determines which segment of the local data storage 126 is to be used for a particular thread.
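The single-array scheme can be illustrated with a short C sketch of address formation. The bit width and helper name are assumptions for a two-thread configuration, where the TEC drives the top address bit and the CPU drives the rest.

#include <stdint.h>

#define REG_BITS 4    /* assumed: 16 general purpose registers per thread */

/* tec_thread: thread-select bit supplied by the thread execution control;
 * cpu_reg:    register index decoded from the instruction being executed. */
static inline uint32_t regfile_addr(uint32_t tec_thread, uint32_t cpu_reg)
{
    return (tec_thread << REG_BITS) | (cpu_reg & ((1u << REG_BITS) - 1u));
}

Segmented local data storage works the same way: the TEC-controlled bit (or bits, for more than two threads) selects which half or quarter of the array holds a thread's private working area.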
Data can be exchanged directly between the local data storage 126 and the CPU 110. The local data storage is fully addressable by the CPU, with working areas identified by an index register within the general purpose register array. A first coprocessor 120 is pipelined between the CPU 110 and the shared remote storage 22. A second coprocessor 24 accesses the local data storage 126 and communicates with the CPU 110.
Referring again to Figure 2, the CPU, even though it supports multiple threads, is not substantially different from the single-threaded CPU of Figure 1. The key difference required to support multiple threads is found in the functioning of the thread execution control (TEC) 30.
Control logic within the TEC constantly monitors the current execution thread, and if the current thread stalls, the control logic switches control to an alternate execution thread. In addition, the control logic identifies the nature of the event which causes an active execution thread to stall and transfers either temporary or full control based on the length of the event.
Figure 3 shows the thread execution control (TEC) 30 including FIFO 52, Arbiter 46 and a plurality of Thread Controls #0 through #N. Each of the Thread Controls includes a thread control state machine 38. Controls different from the state machine 38 may be used without deviating from the teachings of the present invention.
The thread execution control operates in the following manner. When the computer is first powered up, each thread is in the initialize state 40. When a packet 42 is dispatched to a processor, the corresponding thread is moved to the ready state 44 at which time it starts requesting cycles for execution.
The arbiter 46 is the device that grants the execution cycle to the thread. If the cycle is granted, the thread moves from the ready state 44 to the execute state 48. In the execute state, the thread continues to make requests until execution stalls due to a latency event, or until the packet being processed is enqueued, implying that the code work on that packet is done. If cycles are no longer granted, another thread is in control; that is the only reason the arbiter 46 would not grant a cycle to the thread control state machine 38. In either of these two states (ready or execute), the thread will continuously request new execution cycles, pausing only for latency events, until the end of the packet processing is reached and the next packet 42 is queued for dispatch. The system then goes back to the initialize state and waits for the next packet 42.

The wait state 50 deals with either a long or a short latency event. Regardless of which event occurs, the processor stalls and the active thread defaults to the wait state. The thread then stops requesting execution cycles until the latency event is completed.
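The four-state behavior described above can be summarized in a small C transition function. The state names come from the patent; the event names and the assumption that a thread leaves the wait state for the ready state are mine.

typedef enum { INIT, READY, EXECUTE, WAIT } ThreadState;

typedef enum {
    EV_DISPATCH,    /* a packet is dispatched to this thread       */
    EV_GRANT,       /* the arbiter grants an execution cycle       */
    EV_STALL,       /* a short- or long-latency event begins       */
    EV_UNBLOCK,     /* the latency event completes                 */
    EV_ENQUEUE      /* processing done, results enqueued           */
} ThreadEvent;

ThreadState thread_step(ThreadState s, ThreadEvent ev)
{
    switch (s) {
    case INIT:    return (ev == EV_DISPATCH) ? READY   : INIT;
    case READY:   return (ev == EV_GRANT)    ? EXECUTE : READY;
    case EXECUTE:
        if (ev == EV_STALL)   return WAIT;    /* stop requesting cycles */
        if (ev == EV_ENQUEUE) return INIT;    /* task complete          */
        return EXECUTE;
    case WAIT:    return (ev == EV_UNBLOCK) ? READY : WAIT;
    }
    return INIT;
}

/* Execution cycles are requested only in READY and EXECUTE. */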
The same dispatch action that moves a thread from the initialize state 40 to the ready state 44 enters the thread number into the FIFO 52, so that the thread to which the first packet is dispatched will become the highest priority thread P_A. Subsequent dispatch actions supply additional thread numbers into the FIFO. The thread number in the highest priority position of the FIFO will stay in that position until it encounters a long latency event, whereupon the thread is rotated back to the beginning of the FIFO and goes from the highest priority P_A to the lowest priority thread P_X. A short latency event will not cause the thread to lose its priority in the FIFO.
If the thread is done with the processing of the packet 42, the packet is enqueued for transmission to an output port, the thread control state machine transitions from the execute state to the initialize state, and the thread number is removed from the FIFO 52.
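A C sketch of the three FIFO operations just described follows (load on dispatch, unload on completion, rotate to the tail on a long-latency event). The array representation and function names are illustrative, not the patent's hardware design.

#define MAXTHREADS 4

typedef struct {
    int q[MAXTHREADS];   /* q[0] is the highest-priority thread P_A */
    int n;               /* thread numbers currently resident       */
} PrioFifo;

void fifo_dispatch(PrioFifo *f, int thread)     /* task dispatched */
{
    f->q[f->n++] = thread;       /* newest entry = lowest priority */
}

static void fifo_pull(PrioFifo *f, int thread)  /* drop one entry  */
{
    int j = 0;
    for (int i = 0; i < f->n; i++)
        if (f->q[i] != thread)
            f->q[j++] = f->q[i];
    f->n = j;
}

void fifo_complete(PrioFifo *f, int thread)     /* results enqueued */
{
    fifo_pull(f, thread);
}

void fifo_long_latency(PrioFifo *f, int thread) /* long-latency stall */
{
    fifo_pull(f, thread);
    f->q[f->n++] = thread;       /* back of the queue: lowest priority */
}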
New packets are dispatched from a high-level controller (not shown). This controller, outside of the processor, chooses a thread and a processor to handle each packet. That decision provides an input command to the FIFO 52. It also provides an input to the state machine 38 instructing the machine to move from the initialize state to the ready state. Along with that command from the external controller, the thread number to which the packet is to be dispatched must also be delivered from the controller to the FIFO. As an example, when using 4 threads, a two-bit binary code (00, 01, 10, or 11) identifies the thread that is going to handle the packet being dispatched. If the system uses two threads, these are identified by a single-bit binary code (0 or 1).
From the FIFO there are multiple outputs to the arbiter 46, one for each thread, if all of the threads are active. Two such outputs are shown: 60 for the highest priority thread P_A, and 62 for the lowest priority thread P_X. In the case of two threads, P_X = P_B, and there are two outputs. For four threads, P_X = P_D, resulting in 4 outputs. Most likely the system would handle threads in multiples of two. However, it is possible for three or some other number to be used.
As previously mentioned, going to four threads produces some gain in performance while requiring additional hardware and the expenses associated therewith. Four threads would make sense with different design parameters. However, the preferred embodiment of the present invention utilizes two threads. There are a number of factors that go into the decision as to whether to use two or four threads. One factor is the size of local storage; the smaller the storage, the more logical it is to use four threads or even more. The length of the latency event relative to the length of the code execution path is a factor as well.
Granting execution cycles to a specific thread by the thread execution control is based on the logical function of the arbiter based on the Boolean expression:
G_n = R_n · [(P_A = n) + R_PA' · (P_B = n) + R_PA' · R_PB' · (P_C = n) + ...]
This equation is a generalized expression of how the arbiter decides whether or not to activate the grant signal (G) given that it has a request (R) coming in from the state machine 38. In the formula, G_n ranges over G_0, G_1, etc., up to as many threads as there are. The priority ranking of a thread is represented by P. The equation reduces to two terms for two threads, and is extended to four terms for four threads.
There are multiple terms contributing to a grant; consider the request R_0 and grant G_0. Looking at R_0, it must be active before the system will consider issuing grant G_0. Then the system looks at multiple ways to decide to issue that grant, assuming the request is active.
If the thread is the highest priority, there is no need to look at what any of the other threads are doing. The arbiter immediately signals a grant to the thread, allowing it to execute. Otherwise, the system takes the highest priority thread number P_A and examines the corresponding request R_PA, which is the request with the highest priority. If the request having the highest priority is not active, it looks at the request (R_PB) having the second highest priority and matches it with the thread (P_B) in which the system is interested. This thread number is represented by one bit (for 2 threads) or two bits (for 4 threads).
The equation stops at two terms if there are two threads or at four terms for four threads.
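Rendered in C against the PrioFifo sketch above, the grant equation is simply a priority scan: grant the first active requester in FIFO order, each later term being gated by the complemented requests of all higher-priority threads. This is an illustrative model under those assumptions, not the patent's logic design.

/* request[t] is nonzero when thread t's state machine asserts R_t.
 * Returns the thread number granted this cycle, or -1 for no grant. */
int arbiter_grant(const PrioFifo *f, const int request[MAXTHREADS])
{
    for (int rank = 0; rank < f->n; rank++) {
        int t = f->q[rank];       /* rank 0 is P_A, rank 1 is P_B, ... */
        if (request[t])
            return t;             /* G_t = 1: all higher-priority
                                     requests R_PA, ... were inactive */
    }
    return -1;
}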

Turning now to Figure 4, there are shown two timing diagrams 70, 72 for two tree search threads, generally showing the overlap of the tree searches and CPU execution on the two thread waveforms. When a waveform is low, the CPU is executing; when it is high, the CPU is waiting for a tree search. Comparing the waveforms of the timing diagrams for the two threads, it is noted that they are never low at the same time. They both share the same CPU, and it is intuitive that they cannot both be executing CPU cycles at the same time. On the other hand, because of pipelining of the tree search engine, they can be in various overlapping stages of tree searches at the same time.
There are basically two types of events which might cause execution to stall, those which cause a short interruption and those which cause an extended interruption of the current program flow. A short interruption may be caused by a branch instruction which requires the instruction prefetch queue to be refilled because of a change in the program flow.
Alternately, the program may stall while waiting for a coprocessor to perform a task relating to data in the processor's local memory. An example of this would be a checksum coprocessor calculating a new checksum on a modified header field. An event is considered a short interruption if the latency is less than 25 processor cycles. Long latency events introduce a latency of more than 25, and typically in excess of 50 to 100, processor cycles. These have a much more significant impact on overall performance.
There are numerous alternative means for determining a long or a short latency event. The length of latency can be under the control of the programmer whereupon the hardware or its configuration is not a factor in the determination. On the other hand, a threshold register could be set with a 25 cycle threshold, and the hardware would determine how many cycles an operation was going to require and make an automatic decision based on that determination.
A coprocessor instruction is one type of instruction that the processor executes. Some of the bits in the field identify which coprocessor is intended. One bit defines the particular instruction as a long or a short latency event. Thus, it is possible that a programmer can define two identical accesses to control memory, one defined as a long latency event and the other as a short latency event. The thread execution control function is designed to minimize the impact of these long latency events. Accordingly, a long latency event will cause full control to switch to an alternate execution thread, while a short latency event will cause only a temporary switch to an alternate thread.
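A sketch of the latency-bit decode might look like the following C fragment; the field layout (which bits select the coprocessor, where the latency bit sits) is an assumption, since the patent states only that such bits exist.

#include <stdint.h>
#include <stdbool.h>

#define COPROC_MASK   0x0Fu        /* assumed: low bits select the coprocessor */
#define LONG_LAT_BIT  (1u << 4)    /* assumed position of the latency bit      */

bool is_long_latency(uint32_t coproc_instr)
{
    return (coproc_instr & LONG_LAT_BIT) != 0;
}

unsigned coproc_select(uint32_t coproc_instr)
{
    return coproc_instr & COPROC_MASK;
}

On a stall, the thread execution control reads this bit to choose between a full switch (long latency) and a temporary one (short latency), which is how the same control-memory access can be tagged either way by the programmer.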
Even though the multi-thread CPU is substantially the same as a single threaded CPU, a number of the peripheral functions are replicated for each execution thread. General purpose registers and local data storage are both replicated for each instruction thread, as illustrated in Figure 2. This allows a complete context switch with zero overhead (in terms of processor clock cycles).
In the preferred embodiment, the multiple sets of general purpose registers are actually implemented in a single larger register array, with one (or more if the number of threads exceeds 2) address bit being controlled by the Thread execution control logic and the remaining address bits being controlled by the CPU according to instructions being executed.
Alternately, two register arrays could be addressed simultaneously by the CPU, and the Thread execution control logic can control an array select or multiplexer circuit to determine which array output would be delivered to the CPU. Each execution thread may be given a completely independent working area in Local data storage by using a single larger memory array, with one (or more if the number of threads exceeds 2) address bit being controlled by the Thread execution control logic and the remaining address bits being controlled by the CPU according to instructions being executed. Alternately, the Local data storage can be fully addressable by the CPU, with individual working areas identified by an index register within the general purpose register array.
This has the advantage of enabling some shared memory for common data such as tables, but would require all accesses to private space to be done with indexed address modes which might limit the flexibility of available instructions.
Although there is a common path to instruction memory, each instruction thread is associated with a different instruction pointer and instruction prefetch queue, each of which may contain multiple instruction words staged for future execution. In the preferred embodiment, there are two execution threads, each of which has an eight-instruction prefetch queue. The active execution thread is given first priority for fetching instructions. In the preferred embodiment, multiple network processors are implemented on the same chip and share a common instruction storage. Accordingly, if multiple processors request access to the instruction memory at the same time, the instruction fetch requests for active threads will always be given precedence over those for idle threads, even if the request from an idle thread comes in earlier.
Note that while working registers and local storage are replicated for each instruction thread, all threads share a common CPU (including its coprocessors) and path to instruction memory. The peak bandwidth requirement for instruction fetching does not increase, but the effective utilization of the available bandwidth for instruction fetching is increased significantly with multiple execution threads.
The typical processing required in the network processing system results in tree search access which may take two or three times the number of machine cycles as that required to set up the search and process the results. This has two significant implications. First, the CPU execution for each of two threads can easily be overlapped with the tree search cycles for the opposite thread. In fact, with just two threads, there will still be a significant number of CPU cycles for which both threads are stalled, suggesting that three or four threads would further improve the utilization of the CPU.
While doubling from one to two threads essentially doubles the CPU utilization, doubling the number of threads again to four may not quite double the efficiency of CPU utilization to 4x, at least within the framework of the preferred embodiment of the present invention. This is because with four threads, the tree search latency is not long enough to ensure the other three threads will run. The preferred embodiment is limited to two threads because the additional cost of additional threads (larger local data storage and general purpose register arrays) is significantly more than the cost saved by not replicating the CPU. Thus, it makes sense if doubling the threads results in a corresponding doubling of processing power, but when doubling the number of threads results in something less than doubling (i.e. 1.5x) of processing power, then adding additional independent CPUs tends to be preferable. The decision of how many threads is preferable is within the capability of a person having the requisite skills in the art, and depends on the relative difference between CPU clock cycles and tree-search clock cycles for the processing system of interest, as well as the cost of implementing the core CPU vs. the cost of replicating the general purpose registers and local data storage.
The second implication of the distribution of machine cycles between CPU execution and tree-searches is that if interleaving is implemented with a requirement for one tree search to complete before the next one can be started, then the overlapping of two instruction threads will not be as efficient. Each packet process will in fact be stretched out due to numerous instances where a tree search is started by the CPU but the tree search is stalled waiting for the tree search from the other thread to complete. To avoid this penalty, the tree search coprocessor is modified to include several pipelined phases. Thus, a tree search from one thread does not need to wait until the other thread's tree search is complete, but only until the other thread's tree search progresses to the second phase of its pipeline. In reality, by the time a second thread has executed the instructions to set up a tree search, a previous tree search from the other thread will in all likelihood be already beyond that first pipeline phase, thus resulting in a complete avoidance of stalls in the tree search process. This of course leads to additional motivation for the temporary thread switching on short latency events which was described previously, in order to avoid having tree searches from two different threads contending for the same pipeline phase.
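The pipelining argument can be modeled in a few lines of C: a thread may issue a tree search as soon as the first pipeline phase is free, rather than waiting for the other thread's search to drain completely. The phase count and names are arbitrary illustrations, not taken from the patent.

#define TS_PHASES 4                 /* assumed number of pipeline phases */

typedef struct {
    int owner[TS_PHASES];           /* thread in each phase, -1 = empty */
} TreeSearchPipe;

/* A new search needs only phase 0; searches from the other thread may
 * still occupy phases 1..TS_PHASES-1 without blocking it. */
int ts_can_issue(const TreeSearchPipe *p)
{
    return p->owner[0] == -1;
}

void ts_advance(TreeSearchPipe *p)  /* one pipeline step per cycle */
{
    for (int i = TS_PHASES - 1; i > 0; i--)
        p->owner[i] = p->owner[i - 1];
    p->owner[0] = -1;               /* phase 0 frees up each cycle */
}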
An alternate approach is to replicate more single threaded CPUs. The disadvantage of that approach is that it costs more to achieve the same level of performance. It also increases the peak bandwidth requirements on various busses (e.g., to instruction memory or shared remote storage).
Multiple threads result in the same average bandwidth, but half the peak bandwidth (in the case of two threads) which can have significant secondary effects on performance due to contention for these shared resources.
The invention has been described in connection with its use on a network processor and a tree search structure. However, it should be noted that the invention is also useful with other processor systems and for retrieving data from sources other than tree search engines.
For instance, the thread execution control can be used to access other coprocessors.

While the invention has been described in combination with embodiments thereof, it is evident that many alternatives, modifications, and variations will be apparent to those skilled in the art in light of the foregoing teachings. Accordingly, the invention is intended to embrace all such alternatives, modifications and variations as fall within the spirit and scope of the appended claims.

Claims (33)

1. The use of multiple threads in association with a processor and accessible data, including the steps of:
a) providing multiple instruction execution threads as independent processes in a sequential time frame;
b) queueing the multiple execution threads to have overlapping access to the accessible data;
c) executing a first thread in the queue; and d) transferring control of the execution to the next thread in the queue upon the occurrence of an event that causes execution of the first thread to stall.
2. The use of the multiple threads according to claim 1 wherein the control of the execution is temporarily transferred to the next thread when execution stalls due to a short latency event, and the control is returned to the original thread when the event is completed.
3. The use of multiple threads according to claim 2 wherein a processor instruction is encoded to select a short latency event.
4. The use of the multiple threads according to claim 1 wherein full control of the execution is transferred to the next thread when execution of the first thread stalls due to a long latency event.
5. The use of multiple threads according to claim 4 wherein a processor instruction is encoded to select a long latency event.
6. The use of multiple threads according to claim 1 including queueing the threads to provide rapid distribution of access to shared memory.
7. The use of the multiple threads according to claim 1 further including the step of providing a separate instruction pre-fetch buffer for each execution thread, and collecting instructions in a prefetch buffer for its execution thread when the thread is idle and when the instruction bandwidth is not being fully utilized.
8. The use of the multiple threads according to claim 1 wherein the threads are used with zero overhead to switch execution from one thread to the next.
9. The use of the multiple threads according to claim 8 wherein each thread is given access to general purpose registers and local data storage to enable switching with zero overhead.
10. A processing system that uses multiple threads to access data, including:
a) a CPU configured with multiple instruction execution threads as independent processes in a sequential time frame;
b) a thread execution control for 1) queueing the multiple execution threads to have overlapping access to the accessible data;
2) executing a first thread in the queue; and 3) transferring control of the execution to the next thread in the queue upon the occurrence of an event that causes execution of the first thread to stall.
11. A processing system utilizing multiple threads according to claim 10 wherein the thread execution control includes control logic for temporarily transferring the control to the next thread when execution stalls due to a short latency event, and for returning control to the original thread when the latency event is completed.
12. The processing system according to claim 10 wherein a processor instruction is encoded to select a short latency event.
13. The processing system according to claim 10 wherein the control transfer means includes the means for transferring full control of the execution to the next thread when execution of the first thread stalls due to a long latency event.
14. The processing system according to claim 13 wherein a processor instruction is encoded to select a long latency event.
15. The processing system according to claim 10 further including a separate instruction pre-fetch buffer for each execution thread, and means for collecting instructions in a prefetch buffer for an idle execution thread when the instruction bandwidth is not being fully utilized.
16. The processing system according to claim 10 wherein the processor is a network processor.
17. The system according to claim 10 wherein the processor uses zero overhead to switch execution from one thread to the next.
18. The system according to claim 17 wherein each thread is given access to an array of general purpose registers and local data storage to enable switching with zero overhead.
19. The system according to claim 18 wherein the general purpose registers and the local data storage are made available to the processor by providing one address bit under the control of the thread execution control logic and by providing the remaining address bits under the control of the processor.
20. The system according to claim 18 wherein the processor is capable of simultaneously addressing multiple register arrays, and the thread execution control logic includes a selector to select which array will be delivered to the processor for a given thread.
21. The system according to claim 18 wherein the local data storage is fully addressable by the processor, an index register is contained within the register array, and the thread execution control has no address control over the local data storage or the register arrays.
22. A thread execution control including, a thread control state machine for granting control of execution from a first thread to a second thread when a latency event causes execution of the first thread to stall, wherein the transfer is for temporary control if the latency event causes a short latency stall, and the transfer is for full control if the latency event causes a long latency stall.
23. The thread execution control of claim 22 further including means for returning control to the first thread when the short latency event is complete.
24. The thread execution control according to claim 22, including means for providing that control is retained by the second thread after full control has been transferred to the second thread, until the second thread incurs a latency event.
25. A method for execution of multiple independent threads in a processor comprising:
a) using a priority FIFO to grant priority to one of a plurality of threads;
b) using an arbiter to control the execution of the prioritized threads, and c) using a thread control state machine for shifting execution control between threads upon the occurrence of latency events.
26. The method according to claim 25 wherein thread priority is granted by the FIFO by:
a) loading a thread number into FIFO when a task is dispatched to the processor;
b) unloading a thread number from the FIFO when a task has been completed;
c) circulating a thread number from highest priority to lowest priority in the FIFO when a long latency event occurs, and d) using the thread outlets of the FIFO to determine priority depending on the length of time a thread has been in FIFO.
27. The method according to claim 25 wherein controlling the priority of execution of multiple independent threads is based on the logical function of the arbiter based on the Boolean expression:
G_n = R_n · [(P_A = n) + R_PA' · (P_B = n) + R_PA' · R_PB' · (P_C = n) + ...]
where: G_n is a grant; R_n is a request from a given thread;
P_A, P_B and P_C represent threads ranked by alphabetical subscript according to priority; and
n is a subscript identifying a thread by the bit or binary number;
comprising the steps of:
a) determining whether a request R is active or inactive;
b) determining the priority of the threads P;
c) matching the request R with the corresponding thread P; and d) granting a request for execution if the request is active and if the corresponding thread P has the highest priority.
28. The method according to claim 25 of using a thread control state machine comprising:
a) dispatching a task to a thread;
b) moving the thread from an initialize state to a ready state;
c) requesting execution cycles for the task;
d) moving the thread to the execute state upon grant by the arbiter of an execution cycle;
e) continuing to request execution cycles while the task is queued in the execute state; and f) returning the thread to the initialize state if there is no latency event, or sending the thread to the wait state upon occurrence of a latency event.
29. The use of prefetch buffers in connection with a plurality of independent instruction threads comprising the steps of:
a) associating each thread with a prefetch buffer;
b) determining whether a buffer associated with an execution thread is full;
c) determining whether the thread associated with the buffer is active; and d) during periods that the buffer is not being used by an active execution thread, enabling the buffer to prefetch instructions for the execution thread.
30. A thread execution controller comprising:
a) a priority FIFO;
b) a plurality of thread control state machines, one for each thread in a set of multiple threads; and c) an arbiter for determining a thread execution priority among the multiple threads operatively coupled to the FIFO and the plurality of thread control state machines.
31. The thread execution controller according to claim 30 wherein the FIFO includes:
a) means for loading a thread number into FIFO when a task is dispatched to the processor;
b) means for unloading a thread number from the FIFO when a task has been completed;
c) thread number transfer from highest priority to lowest priority in the FIFO
when a long latency event occurs, and d) the thread outlets of the FIFO used to determine priority depending on the length of time a thread has been in FIFO.
32. The thread execution controller according to claim 30 wherein the arbiter controls the priority of execution of multiple independent threads based on the Boolean expression:
G_n = R_n · [(P_A = n) + R_PA' · (P_B = n) + R_PA' · R_PB' · (P_C = n) + ...]
where: G_n is a grant; R_n is a request from a given thread;
P_A, P_B and P_C represent threads ranked by alphabetical subscript according to priority; and
n is a subscript identifying a thread by the bit or binary number;
comprising:
a) determining whether a request R is active or inactive;
b) determining the priority of the threads;
c) matching the request R with the corresponding thread P; and d) granting a request for execution if the request is active and if the corresponding thread P has the highest priority.
33. The thread execution controller according to claim 30 wherein the thread control state machine comprises control logic to:
a) dispatch a task to a thread;
b) move the thread from an initialize state to a ready state;
c) request execution cycles for the task;
d) move the thread to the execute state upon grant by the arbiter of an execution cycle;
e) continue to request execution cycles while the task is queued in the execute state; and f) return the thread to the initialize state if there is no latency event, or send the thread to the wait state upon occurrence of a latency event.
CA002334393A 2000-04-04 2001-02-02 Controller for multiple instruction thread processors Abandoned CA2334393A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/542,206 US6931641B1 (en) 2000-04-04 2000-04-04 Controller for multiple instruction thread processors
US09/542,206 2000-04-04

Publications (1)

Publication Number Publication Date
CA2334393A1 true CA2334393A1 (en) 2001-10-04

Family

ID=24162787

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002334393A Abandoned CA2334393A1 (en) 2000-04-04 2001-02-02 Controller for multiple instruction thread processors

Country Status (5)

Country Link
US (2) US6931641B1 (en)
JP (1) JP2001350638A (en)
KR (1) KR100368350B1 (en)
CA (1) CA2334393A1 (en)
DE (1) DE10110504B4 (en)

Families Citing this family (86)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6983350B1 (en) 1999-08-31 2006-01-03 Intel Corporation SDRAM controller for parallel processor architecture
US6532509B1 (en) 1999-12-22 2003-03-11 Intel Corporation Arbitrating command requests in a parallel multi-threaded processing system
US6694380B1 (en) 1999-12-27 2004-02-17 Intel Corporation Mapping requests from a processing unit that uses memory-mapped input-output space
US6661794B1 (en) 1999-12-29 2003-12-09 Intel Corporation Method and apparatus for gigabit packet assignment for multithreaded packet processing
US7480706B1 (en) * 1999-12-30 2009-01-20 Intel Corporation Multi-threaded round-robin receive for fast network port
US7093109B1 (en) * 2000-04-04 2006-08-15 International Business Machines Corporation Network processor which makes thread execution control decisions based on latency event lengths
US7237022B1 (en) * 2000-06-29 2007-06-26 Microsoft Corporation Suspension and reinstatement of reference handles
US8762581B2 (en) * 2000-12-22 2014-06-24 Avaya Inc. Multi-thread packet processor
KR100440577B1 (en) * 2001-12-28 2004-07-21 한국전자통신연구원 A Priority based Method Invocation and Execution using Pre-defined Priority on MSMP System
US7275247B2 (en) * 2002-09-19 2007-09-25 International Business Machines Corporation Method and apparatus for handling threads in a data processing system
US7924828B2 (en) 2002-10-08 2011-04-12 Netlogic Microsystems, Inc. Advanced processor with mechanism for fast packet queuing operations
US9088474B2 (en) 2002-10-08 2015-07-21 Broadcom Corporation Advanced processor with interfacing messaging network to a CPU
US7334086B2 (en) 2002-10-08 2008-02-19 Rmi Corporation Advanced processor with system on a chip interconnect technology
US7627721B2 (en) 2002-10-08 2009-12-01 Rmi Corporation Advanced processor with cache coherency
US8037224B2 (en) 2002-10-08 2011-10-11 Netlogic Microsystems, Inc. Delegating network processor operations to star topology serial bus interfaces
US8478811B2 (en) 2002-10-08 2013-07-02 Netlogic Microsystems, Inc. Advanced processor with credit based scheme for optimal packet flow in a multi-processor system on a chip
US7961723B2 (en) 2002-10-08 2011-06-14 Netlogic Microsystems, Inc. Advanced processor with mechanism for enforcing ordering between information sent on two independent networks
US7461215B2 (en) * 2002-10-08 2008-12-02 Rmi Corporation Advanced processor with implementation of memory ordering on a ring based data movement network
US8176298B2 (en) * 2002-10-08 2012-05-08 Netlogic Microsystems, Inc. Multi-core multi-threaded processing systems with instruction reordering in an in-order pipeline
US20050044324A1 (en) * 2002-10-08 2005-02-24 Abbas Rashid Advanced processor with mechanism for maximizing resource usage in an in-order pipeline with multiple threads
US8015567B2 (en) 2002-10-08 2011-09-06 Netlogic Microsystems, Inc. Advanced processor with mechanism for packet distribution at high line rate
US7346757B2 (en) * 2002-10-08 2008-03-18 Rmi Corporation Advanced processor translation lookaside buffer management in a multithreaded system
US7984268B2 (en) 2002-10-08 2011-07-19 Netlogic Microsystems, Inc. Advanced processor scheduling in a multithreaded system
US7062606B2 (en) * 2002-11-01 2006-06-13 Infineon Technologies Ag Multi-threaded embedded processor using deterministic instruction memory to guarantee execution of pre-selected threads during blocking events
US7237242B2 (en) * 2002-12-31 2007-06-26 International Business Machines Corporation Dynamic thread pool tuning techniques
US7496915B2 (en) 2003-04-24 2009-02-24 International Business Machines Corporation Dynamic switching of multithreaded processor between single threaded and simultaneous multithreaded modes
US7500239B2 (en) * 2003-05-23 2009-03-03 Intel Corporation Packet processing system
US20050055594A1 (en) * 2003-09-05 2005-03-10 Doering Andreas C. Method and device for synchronizing a processor and a coprocessor
US7472390B2 (en) * 2003-10-01 2008-12-30 Intel Corporation Method and apparatus to enable execution of a thread in a multi-threaded computer system
US8140829B2 (en) * 2003-11-20 2012-03-20 International Business Machines Corporation Multithreaded processor and method for switching threads by swapping instructions between buffers while pausing execution
US20060212874A1 (en) * 2003-12-12 2006-09-21 Johnson Erik J Inserting instructions
US7555753B2 (en) * 2004-02-26 2009-06-30 International Business Machines Corporation Measuring processor use in a hardware multithreading processor environment
US7890734B2 (en) 2004-06-30 2011-02-15 Open Computing Trust I & II Mechanism for selecting instructions for execution in a multithreaded processor
US8042116B2 (en) * 2004-09-17 2011-10-18 Panasonic Corporation Task switching based on the execution control information held in register groups
DE102004059972B4 (en) * 2004-12-13 2010-07-01 Infineon Technologies Ag Thread scheduling method, and thread list scheduler device
US7664936B2 (en) 2005-02-04 2010-02-16 Mips Technologies, Inc. Prioritizing thread selection partly based on stall likelihood providing status information of instruction operand register usage at pipeline stages
US7657891B2 (en) 2005-02-04 2010-02-02 Mips Technologies, Inc. Multithreading microprocessor with optimized thread scheduler for increasing pipeline utilization efficiency
US7853777B2 (en) 2005-02-04 2010-12-14 Mips Technologies, Inc. Instruction/skid buffers in a multithreading microprocessor that store dispatched instructions to avoid re-fetching flushed instructions
US7681014B2 (en) 2005-02-04 2010-03-16 Mips Technologies, Inc. Multithreading instruction scheduler employing thread group priorities
US7657883B2 (en) 2005-02-04 2010-02-02 Mips Technologies, Inc. Instruction dispatch scheduler employing round-robin apparatus supporting multiple thread priorities for use in multithreading microprocessor
US7490230B2 (en) 2005-02-04 2009-02-10 Mips Technologies, Inc. Fetch director employing barrel-incrementer-based round-robin apparatus for use in multithreading microprocessor
US7631130B2 (en) 2005-02-04 2009-12-08 Mips Technologies, Inc Barrel-incrementer-based round-robin apparatus and instruction dispatch scheduler employing same for use in multithreading microprocessor
US7613904B2 (en) 2005-02-04 2009-11-03 Mips Technologies, Inc. Interfacing external thread prioritizing policy enforcing logic with customer modifiable register to processor internal scheduler
US7506140B2 (en) 2005-02-04 2009-03-17 Mips Technologies, Inc. Return data selector employing barrel-incrementer-based round-robin apparatus
US7266674B2 (en) * 2005-02-24 2007-09-04 Microsoft Corporation Programmable delayed dispatch in a multi-threaded pipeline
US7743233B2 (en) 2005-04-05 2010-06-22 Intel Corporation Sequencer address management
JP2007026095A (en) * 2005-07-15 2007-02-01 Matsushita Electric Ind Co Ltd Parallel arithmetic operation device
US9003421B2 (en) * 2005-11-28 2015-04-07 Intel Corporation Acceleration threads on idle OS-visible thread execution units
US20070150895A1 (en) * 2005-12-06 2007-06-28 Kurland Aaron S Methods and apparatus for multi-core processing with dedicated thread management
KR100731983B1 (en) * 2005-12-29 2007-06-25 전자부품연구원 Hardwired scheduler for low power wireless device processor and method of scheduling using the same
US7870307B2 (en) * 2006-01-30 2011-01-11 Sony Computer Entertainment Inc. DMA and graphics interface emulation
US7961745B2 (en) * 2006-09-16 2011-06-14 Mips Technologies, Inc. Bifurcated transaction selector supporting dynamic priorities in multi-port switch
US7773621B2 (en) * 2006-09-16 2010-08-10 Mips Technologies, Inc. Transaction selector employing round-robin apparatus supporting dynamic priorities in multi-port switch
US7990989B2 (en) * 2006-09-16 2011-08-02 Mips Technologies, Inc. Transaction selector employing transaction queue group priorities in multi-port switch
US7760748B2 (en) * 2006-09-16 2010-07-20 Mips Technologies, Inc. Transaction selector employing barrel-incrementer-based round-robin apparatus supporting dynamic priorities in multi-port switch
GB2447907B (en) * 2007-03-26 2009-02-18 Imagination Tech Ltd Processing long-latency instructions in a pipelined processor
US9588810B2 (en) * 2007-08-08 2017-03-07 Microsoft Technology Licensing, Llc Parallelism-aware memory request scheduling in shared memory controllers
JP5128972B2 (en) * 2008-01-25 2013-01-23 学校法人日本大学 Security processing equipment
US9596324B2 (en) 2008-02-08 2017-03-14 Broadcom Corporation System and method for parsing and allocating a plurality of packets to processor core threads
TW201007557A (en) * 2008-08-06 2010-02-16 Inventec Corp Method for reading/writing data in a multithread system
US9207943B2 (en) * 2009-03-17 2015-12-08 Qualcomm Incorporated Real time multithreaded scheduler and scheduling method
US8719223B2 (en) * 2010-05-06 2014-05-06 Go Daddy Operating Company, LLC Cloud storage solution for reading and writing files
US20110276784A1 (en) * 2010-05-10 2011-11-10 Telefonaktiebolaget L M Ericsson (Publ) Hierarchical multithreaded processing
US20120198458A1 (en) * 2010-12-16 2012-08-02 Advanced Micro Devices, Inc. Methods and Systems for Synchronous Operation of a Processing Device
US9354926B2 (en) * 2011-03-22 2016-05-31 International Business Machines Corporation Processor management via thread status
US9268542B1 (en) * 2011-04-28 2016-02-23 Google Inc. Cache contention management on a multicore processor based on the degree of contention exceeding a threshold
US9141391B2 (en) 2011-05-26 2015-09-22 Freescale Semiconductor, Inc. Data processing system with latency tolerance execution
US8656367B1 (en) * 2011-07-11 2014-02-18 Wal-Mart Stores, Inc. Profiling stored procedures
CN102281095A (en) * 2011-07-28 2011-12-14 Aerospace Dongfanghong Satellite Co., Ltd. Task return method
US9110656B2 (en) 2011-08-16 2015-08-18 Freescale Semiconductor, Inc. Systems and methods for handling instructions of in-order and out-of-order execution queues
CN104011703B (en) 2011-12-22 2017-04-12 Intel Corporation Instruction processing method for an instruction that specifies an application thread performance state, and related method
US9135014B2 (en) 2012-02-15 2015-09-15 Freescale Semiconductor, Inc. Data processing system with latency tolerance execution
EP2831721B1 (en) 2012-03-30 2020-08-26 Intel Corporation Context switching mechanism for a processing core having a general purpose CPU core and a tightly coupled accelerator
US8904068B2 (en) * 2012-05-09 2014-12-02 Nvidia Corporation Virtual memory structure for coprocessors having memory allocation limitations
US9087202B2 (en) * 2013-05-10 2015-07-21 Intel Corporation Entry/exit architecture for protected device modules
KR102377726B1 (en) * 2015-04-17 2022-03-24 Electronics and Telecommunications Research Institute Apparatus and method for controlling reproduction of a file in a distributed file system
US10061619B2 (en) 2015-05-29 2018-08-28 Red Hat, Inc. Thread pool management
JP6503958B2 (en) * 2015-07-23 2019-04-24 Fujitsu Ltd Display information control program, method and apparatus
US10185564B2 (en) * 2016-04-28 2019-01-22 Oracle International Corporation Method for managing software threads dependent on condition variables
CN108337295B (en) * 2018-01-12 2022-09-23 Qingdao Haier Intelligent Home Appliance Technology Co., Ltd. Internet of things communication method, server and system
JP7157542B2 (en) * 2018-03-30 2022-10-20 Denso Corp Prefetch controller
US11119972B2 (en) * 2018-05-07 2021-09-14 Micron Technology, Inc. Multi-threaded, self-scheduling processor
US10649922B2 (en) * 2018-08-06 2020-05-12 Apple Inc. Systems and methods for scheduling different types of memory requests with varying data sizes
KR102142498B1 (en) * 2018-10-05 2020-08-10 Sungkyunkwan University Research &amp; Business Foundation GPU memory controller for GPU prefetching through static analysis, and control method
US11210104B1 (en) * 2020-09-11 2021-12-28 Apple Inc. Coprocessor context priority
CN115421931B (en) * 2022-11-07 2023-03-28 Shenzhen Mingyuan Cloud Technology Co., Ltd. Business thread control method and device, electronic equipment and readable storage medium

Family Cites Families (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5353418A (en) 1989-05-26 1994-10-04 Massachusetts Institute Of Technology System storing thread descriptor identifying one of plural threads of computation in storage only when all data for operating on thread is ready and independently of resultant imperative processing of thread
FR2678121B1 (en) * 1991-06-18 1994-04-29 Matra Communication Device for inserting digital packets in a transmission channel
US5430850A (en) 1991-07-22 1995-07-04 Massachusetts Institute Of Technology Data processing system with synchronization coprocessor for multiple threads
US5630128A (en) 1991-08-09 1997-05-13 International Business Machines Corporation Controlled scheduling of program threads in a multitasking operating system
US5357617A (en) 1991-11-22 1994-10-18 International Business Machines Corporation Method and apparatus for substantially concurrent multiple instruction thread processing by a single pipeline processor
US5483641A (en) 1991-12-17 1996-01-09 Dell Usa, L.P. System for scheduling readahead operations if new request is within a proximity of N last read requests wherein N is dependent on independent activities
US5404469A (en) 1992-02-25 1995-04-04 Industrial Technology Research Institute Multi-threaded microprocessor architecture utilizing static interleaving
US5428769A (en) * 1992-03-31 1995-06-27 The Dow Chemical Company Process control interface system having triply redundant remote field units
JPH0659906A (en) * 1992-08-10 1994-03-04 Hitachi Ltd Method for controlling parallel execution
US5485626A (en) * 1992-11-03 1996-01-16 International Business Machines Corporation Architectural enhancements for parallel computer systems utilizing encapsulation of queuing allowing small grain processing
US5608720A (en) * 1993-03-09 1997-03-04 Hubbell Incorporated Control system and operations system interface for a network element in an access system
WO1994027216A1 (en) * 1993-05-14 1994-11-24 Massachusetts Institute Of Technology Multiprocessor coupling system with integrated compile and run time scheduling for parallelism
JP3547482B2 (en) 1994-04-15 2004-07-28 Hitachi Ltd Information processing equipment
US5812811A (en) 1995-02-03 1998-09-22 International Business Machines Corporation Executing speculative parallel instructions threads with forking and inter-thread communication
US6237074B1 (en) * 1995-05-26 2001-05-22 National Semiconductor Corp. Tagged prefetch and instruction decoder for variable length instruction set and method of operation
JPH096633A (en) * 1995-06-07 1997-01-10 Internatl Business Mach Corp &lt;Ibm&gt; Method and system for operation of high-performance multiple logical routes in a data-processing system
GB2311882B (en) 1996-04-04 2000-08-09 Videologic Ltd A data processing management system
US5944816A (en) 1996-05-17 1999-08-31 Advanced Micro Devices, Inc. Microprocessor configured to execute multiple threads including interrupt service routines
US5933627A (en) * 1996-07-01 1999-08-03 Sun Microsystems Thread switch on blocked load or store using instruction thread field
JP2970553B2 (en) 1996-08-30 1999-11-02 NEC Corp Multi-thread execution method
US5887166A (en) 1996-12-16 1999-03-23 International Business Machines Corporation Method and system for constructing a program including a navigation instruction
US6088788A (en) * 1996-12-27 2000-07-11 International Business Machines Corporation Background completion of instruction and associated fetch request in a multithread processor
US5907702A (en) 1997-03-28 1999-05-25 International Business Machines Corporation Method and apparatus for decreasing thread switch latency in a multithread processor
US6212544B1 (en) * 1997-10-23 2001-04-03 International Business Machines Corporation Altering thread priorities in a multithreaded processor
US5987492A (en) 1997-10-31 1999-11-16 Sun Microsystems, Inc. Method and apparatus for processor sharing
US6161166A (en) * 1997-11-10 2000-12-12 International Business Machines Corporation Instruction cache for multithreaded processor
US6240509B1 (en) 1997-12-16 2001-05-29 Intel Corporation Out-of-pipeline trace buffer for holding instructions that may be re-executed following misspeculation
US6504621B1 (en) * 1998-01-28 2003-01-07 Xerox Corporation System for managing resource deficient jobs in a multifunctional printing system
US6330584B1 (en) 1998-04-03 2001-12-11 Mmc Networks, Inc. Systems and methods for multi-tasking, resource sharing and execution of computer instructions
US6507862B1 (en) * 1999-05-11 2003-01-14 Sun Microsystems, Inc. Switching method in a multi-threaded processor
US6661794B1 (en) * 1999-12-29 2003-12-09 Intel Corporation Method and apparatus for gigabit packet assignment for multithreaded packet processing

Also Published As

Publication number Publication date
US8006244B2 (en) 2011-08-23
US20050022196A1 (en) 2005-01-27
US6931641B1 (en) 2005-08-16
KR100368350B1 (en) 2003-01-24
DE10110504B4 (en) 2006-11-23
KR20010094951A (en) 2001-11-03
JP2001350638A (en) 2001-12-21
DE10110504A1 (en) 2001-10-18

Similar Documents

Publication Publication Date Title
US8006244B2 (en) Controller for multiple instruction thread processors
US7093109B1 (en) Network processor which makes thread execution control decisions based on latency event lengths
US8516280B2 (en) Parallel processing computer systems with reduced power consumption and methods for providing the same
JP3604091B2 (en) Multitasking data processing system
US5867735A (en) Method for storing prioritized memory or I/O transactions in queues having one priority level less without changing the priority when space available in the corresponding queues exceed
US5185868A (en) Apparatus having hierarchically arranged decoders concurrently decoding instructions and shifting instructions not ready for execution to vacant decoders higher in the hierarchy
US5574939A (en) Multiprocessor coupling system with integrated compile and run time scheduling for parallelism
US5251306A (en) Apparatus for controlling execution of a program in a computing device
US6829697B1 (en) Multiple logical interfaces to a shared coprocessor resource
US5832304A (en) Memory queue with adjustable priority and conflict detection
US5812799A (en) Non-blocking load buffer and a multiple-priority memory system for real-time multiprocessing
US5987601A (en) Zero overhead computer interrupts with task switching
US6944850B2 (en) Hop method for stepping parallel hardware threads
US20040172631A1 (en) Concurrent-multitasking processor
US8635621B2 (en) Method and apparatus to implement software to hardware thread priority
US8595747B2 (en) Efficient task scheduling by assigning fixed registers to scheduler
US20020103990A1 (en) Programmed load precession machine
WO2003038602A2 (en) Method and apparatus for the data-driven synchronous parallel processing of digital data
US20080320240A1 (en) Method and arrangements for memory access
KR100618248B1 (en) Supporting multiple outstanding requests to multiple targets in a pipelined memory system
US7725659B2 (en) Alignment of cache fetch return data relative to a thread
JPH10301779A (en) Method for fetching and issuing dual word or plural instruction and device therefor
WO2002046887A2 (en) Concurrent-multitasking processor
CA2464506A1 (en) Method and apparatus for the data-driven synchronous parallel processing of digital data
CZ20001437A3 (en) Method and apparatus for selecting thread switch events in a multithreaded processor

Legal Events

Date Code Title Description
EEER Examination request
FZDE Discontinued