US20030200424A1 - Master-slave latch circuit for multithreaded processing - Google Patents

Master-slave latch circuit for multithreaded processing

Info

Publication number
US20030200424A1
US20030200424A1 (application US10/459,646)
Authority
US
United States
Prior art keywords
master
elements
multithreaded processor
thread
outputs
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/459,646
Inventor
Anthony Aipperspach
Merwin Alferness
Gregory Uhlmann
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US10/459,646
Publication of US20030200424A1
Legal status: Abandoned

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30 - Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30098 - Register arrangements
    • G06F9/30141 - Implementation provisions of register files, e.g. ports
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30 - Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30098 - Register arrangements
    • G06F9/30105 - Register structure
    • G06F9/30116 - Shadow registers, e.g. coupled registers, not forming part of the register space
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30 - Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38 - Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F9/3836 - Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution
    • G06F9/3851 - Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution from multiple instruction streams, e.g. multistreaming

Definitions

  • the present invention relates to digital data processing systems, and in particular to high-speed latches used in register memory of digital computing devices.
  • a modern computer system typically comprises a central processing unit (CPU) and supporting hardware necessary to store, retrieve and transfer information, such as communications buses and memory. It also includes hardware necessary to communicate with the outside world, such as input/output controllers or storage controllers, and devices attached thereto such as keyboards, monitors, tape drives, disk drives, communication lines coupled to a network, etc.
  • the CPU is the heart of the system. It executes the instructions which comprise a computer program and directs the operation of the other system components.
  • the overall speed of a computer system may be crudely measured as the number of operations performed per unit of time.
  • the simplest of all possible improvements to system speed is to increase the clock speeds of the various components, and particularly the clock speed of the processor(s). E.g., if everything runs twice as fast but otherwise works in exactly the same manner, the system will perform a given task in half the time.
  • Early computer processors, which were constructed from many discrete components, were susceptible to significant speed improvements by shrinking component size, reducing component number, and eventually, packaging the entire processor as an integrated circuit on a single chip. The reduced size made it possible to increase clock speed of the processor, and accordingly increase system speed.
  • Pipelines will stall under certain circumstances.
  • An instruction that is dependent upon the results of a previously dispatched instruction that has not yet completed may cause the pipeline to stall.
  • instructions dependent on a load/store instruction in which the necessary data is not in the cache (i.e., a cache miss) cannot be executed until the data becomes available in the cache.
  • maintaining the requisite data in the cache necessary for continued execution, and sustaining a high hit ratio (i.e., the number of requests for data compared to the number of times the data was readily available in the cache), is not trivial.
  • a cache miss can cause the pipelines to stall for several cycles, and the total amount of memory latency will be severe if the data is not available most of the time.
  • memory devices used for main memory are becoming faster, the speed gap between such memory chips and high-end processors is becoming increasingly larger. Accordingly, a significant amount of execution time in current high-end processor designs is spent waiting for resolution of cache misses.
  • multithreading as defined in the computer architecture community is not the same as the software use of the term. In the case of software, “multithreading” refers to one task being subdivided into multiple related threads. In the hardware definition, the threads being concurrently maintained in a processor are merely arbitrary sequences of instructions, which don't necessarily have any relationship with one another. Therefore the term “hardware multithreading” is often used to distinguish the two uses of the term. As used herein, “multithreading” will refer to hardware multithreading.
  • There are two basic forms of multithreading.
  • in the more traditional form, sometimes called “fine-grained multithreading”, the processor executes N threads concurrently by interleaving execution on a regular basis, such as interleaving cycle-by-cycle. This creates a gap in time between the execution of each instruction within a single thread, which removes the need for the processor to wait for certain short term latency events, such as refilling an instruction pipeline.
  • in the second form of multithreading, sometimes called “coarse-grained multithreading”, multiple instructions in a single thread are sequentially executed until the processor encounters some longer term latency event, such as a cache miss, which triggers a switch to another thread.
  • multithreading comes with a price.
  • multithreading involves replicating the processor registers for each thread in order to maintain the state of multiple threads.
  • general purpose registers, floating point registers, condition registers, floating point status and control register, count register, link register, exception register, save/restore registers, and special purpose registers.
  • special buffers, such as a segment lookaside buffer, can be replicated or each entry can be tagged with the thread number (or alternatively, be flushed on every thread switch).
  • Another object of this invention is to provide an improved master-slave latch circuit for supporting hardware multithreading operation of a digital data computing device.
  • Another object of this invention is to reduce the size and complexity of latch circuitry for supporting hardware multithreading operation of a digital data computing device.
  • a master-slave latch circuit stores information for multiple threads.
  • the basic cell contains multiple master elements, each corresponding to a respective thread, selection logic coupled to the master elements for selecting a single one of the master outputs, and a single slave element coupled to the selector logic.
  • the circuit supports operation in a scan mode for testing purposes.
  • in scan mode, cells are paired.
  • One cell of each pair contains one or more elements which normally function as master elements, but which may also function as slave elements during scan mode operation. These dual function elements are coupled to master elements of the other cell of the pair.
  • the number of master elements in the pair of cells equals the number of slave elements, even though the number of master elements exceeds the number of slave elements during normal operation. This permits data to be successively scanned through all elements of the circuit, ensuring thorough testing.
  • elements function as in scan mode during a HOLD mode of operation, and a feedback loop controlled by a HOLD signal is added to each pair of master/slave elements.
  • the feedback loop drives the master element with the value of the slave.
  • FIG. 1 shows the major hardware components of a computer system for utilizing the multithreaded master-slave latch circuit according to the preferred embodiment of the present invention.
  • FIG. 2 is a conceptual high-level view of a typical register which records state information in a multithreaded processor.
  • FIGS. 3A and 3B (herein collectively referred to as FIG. 3) show a typical prior art master-slave latch circuit.
  • FIGS. 4A-1, 4A-2 and 4A-3 (herein collectively referred to as FIG. 4A) show a prior art implementation of a master-slave latch circuit in a multithreaded register environment, and functional waveforms associated therewith.
  • FIG. 4B shows the prior art master-slave latch circuit of FIG. 4A at a higher level of abstraction.
  • FIG. 5 represents at a high level of abstraction a master-slave latch circuit, in accordance with an embodiment of the present invention.
  • FIGS. 6A-1, 6A-2, 6A-3 and 6A-4 (herein collectively referred to as FIG. 6A) show a detailed implementation and functional waveforms of one embodiment of the circuit of FIG. 5.
  • FIGS. 6B-1, 6B-2 and 6B-3 (herein collectively referred to as FIG. 6B), FIGS. 6C-1, 6C-2 and 6C-3 (herein collectively referred to as FIG. 6C), and FIGS. 6D-1, 6D-2 and 6D-3 (herein collectively referred to as FIG. 6D) illustrate detailed implementations of various alternative embodiments of the circuit of FIG. 5.
  • FIGS. 7A-1, 7A-2 and 7A-3 (herein collectively referred to as FIG. 7A) and FIGS. 7B-1, 7B-2 and 7B-3 (herein collectively referred to as FIG. 7B) illustrate the application of input control signals, including scan capability, to the multithreaded latch circuits of FIG. 6A and FIG. 6C, respectively, in accordance with certain embodiments of the present invention.
  • FIG. 8 illustrates at a high level of abstraction a pair of latch circuits having a scan path in which some K0 cells operate as K1 cells, in accordance with certain embodiments of the present invention.
  • FIGS. 9A-1, 9A-2, 9A-3, 9A-4, 9A-5 and 9A-6 (herein collectively referred to as FIG. 9A) illustrate a detailed implementation of one embodiment of the circuit of FIG. 8.
  • FIGS. 9B-1, 9B-2 and 9B-3 (herein collectively referred to as FIG. 9B) illustrate functional waveforms of the circuit of FIG. 9A.
  • FIGS. 10A-1, 10A-2, 10A-3, 10A-4, 10A-5, 10A-6, 10A-7 and 10A-8 (herein collectively referred to as FIG. 10A) and FIGS. 10B-1, 10B-2, 10B-3, 10B-4, 10B-5, 10B-6, 10B-7 and 10B-8 (herein collectively referred to as FIG. 10B) illustrate detailed implementations of various alternative embodiments of the circuit of FIG. 8.
  • FIG. 11 illustrates at a high level of abstraction a pair of latch circuits having scan and feedback paths, in accordance with certain embodiments of the present invention.
  • FIG. 12 illustrates a detailed implementation of one embodiment of the circuit of FIG. 11.
  • The major hardware components of a computer system 100 for utilizing the multithreaded master-slave latch circuit according to the preferred embodiment of the present invention are shown in FIG. 1.
  • Central processing units (CPUs) 101 A and 101 B support hardware multithreaded operation in performing basic machine processing function on instructions and data from main memory 102 .
  • Each processor contains respective internal level one instruction caches 106 A, 106 B (L1 I-cache), and level one data caches 107 A, 107 B (L1 D-cache).
  • Each L1 I-cache 106 A, 106 B stores instructions for execution by its CPU.
  • Each L1 D-cache stores data (other than instructions) to be processed by its CPU.
  • Each CPU 101 A, 101 B is coupled to a respective level two cache (L2-cache) 108 A, 108 B, which can be used to hold both instructions and data.
  • Memory bus 109 transfers data among CPUs and memory.
  • CPUs and memory also communicate via memory bus 109 and bus interface 105 with system I/O bus 110 .
  • IOPs (I/O processing units)
  • Various I/O processing units (IOPs) 111-115 attach to system I/O bus 110 and support communication with a variety of storage and I/O devices, such as direct access storage devices (DASD), tape drives, workstations, printers, and remote communication lines for communicating with remote devices or other computer systems.
  • DASD (direct access storage devices)
  • CPU, L1 I-cache, L1 D-cache, and L2 cache are herein designated generically by reference numbers 101 , 106 , 107 and 108 , respectively.
  • While various buses are shown in FIG. 1, it should be understood that these are intended to represent various communications paths at a conceptual level, and that the actual physical configuration of buses may vary, and in fact may be considerably more complex.
  • FIG. 1 is intended as but one example of a system configuration, and the actual number, type and configuration of components in a computer system may vary. In particular, the present invention could be employed in systems having a single multithreaded CPU, or in systems having multiple multithreaded CPUs.
  • Each CPU 101 is capable of maintaining the state of multiple threads.
  • CPU 101 will typically include a plurality of general purpose registers for storing data, and various special-purpose registers for storing conditions, intermediate results, instructions, and other information which collectively determines the state of the processor. This information is replicated for each thread supported by CPU 101 .
  • FIG. 2 is a conceptual high-level view of a typical register which records state information in a multithreaded processor.
  • four threads are supported, it being understood that the number of supported threads may vary.
  • a separate register portion 202 - 205 is required for each thread.
  • Each register portion 202 - 205 typically contains multiple bits, although a single-bit register is possible. At any given instant, only one thread state is needed by the processor. This is typically, although not always, the active thread.
  • a thread select input 207 to multiplexer logic 206 selects the contents of one of the register portions 202 - 205 for output on output lines 210 .
  • FIG. 2 represents a multithreading register configuration at a conceptual level.
  • a register is typically implemented as a set of individual bit storage circuits. While FIG. 2 shows each register portion 202 - 205 corresponding to a thread as a separate entity for ease of understanding, in fact the individual bit circuits of the registers would typically be physically interleaved on a chip, i.e., bit 0 of register portion 202 would be physically adjacent bit 0 of register portions 203 , 204 and 205 ; the respective bit 1 circuits would all be physically adjacent each other, etc.
  • FIG. 3 shows a typical master-slave latch circuit 301 , also known as a K0-K1 latch, as is well known in the art, and functional waveforms associated therewith.
  • Master-slave latch circuit 301 contains a master (K0) storage element 302, comprising K0 latch inverter 303 and K0 feedback inverter 304, and a slave (K1) storage element 305, comprising K1 latch inverter 306 and K1 feedback inverter 307.
  • Master storage element 302 is set by K0 clocked inverter 310 , which drives an output when the processor clock (CLK) is low.
  • Slave storage element 305 is set by K1 clocked inverter 311 , which drives an output when the processor clock is high.
  • processor clocks may have additional phases to avoid timing overlaps which could corrupt data, but for purposes of understanding the present application, one may simply assume that clocked inverters on different clock phases, e.g., inverters 310 and 311 , are never active simultaneously).
  • a pair of gated inverters 320, 321 coupled to the input of clocked inverter 310 are controlled by a HOLD signal; these determine whether the master element will be set with new data from the Data_Input line, or will be refreshed with existing data in the latch from latch node K1.
  • There are many different variations of the master-slave latch circuit. However, in general they function in a similar manner, i.e., data is clocked into a master element, and then a slave element, on different phases of a clock, thus preventing input data from going directly through the latch to the output on the same clock cycle. This characteristic enables the existing data in the latch to be read on the same cycle that new data is written to the latch.
  • the present invention is not necessarily limited to the type of master-slave latch circuit shown in FIG. 3, and could be applied to different types of master-slave latch circuits.
  • FIG. 4A shows the detailed circuit and functional waveforms
  • FIG. 4B shows the same circuit at a higher level of abstraction.
  • the multithreaded latch circuit contains two K0-K1 latch circuits similar to that of FIG. 3, one for each thread. I.e., circuit elements 401 - 404 constitute a K0-K1 latch circuit for thread 0 , while circuit elements 405 - 408 constitute a K0-K1 latch circuit for thread 1 .
  • the circuit may be conceptualized as a pair of write ports 401, 405, each of which is physically implemented as a pair of gated inverters 421-424 which provide input to a clocked inverter 425, 426, the clocked inverter serving as a driver for the K0 storage elements 402, 406.
  • the gated inverters 421-424 (equivalents of gated inverters 320 and 321 in FIG. 3) are gated by signals designated CNTRL0 and CNTRL1, where CNTRL0 = (¬HOLD AND Write_Select_T0) and CNTRL1 = (¬HOLD AND Write_Select_T1).
  • Each K0 storage element 402 , 406 is physically implemented as a pair of inverters driving each other's input, as shown.
  • the K0 storage elements provide input to respective K1 drivers 403 , 407 , which are implemented as clocked inverters. These provide input to the K1 storage cells 404 , 408 , implemented as shown.
  • Each of the two latch circuits serves as input to thread select logic 409, which in this embodiment is a pair of transmission gates controlled by a thread select signal.
  • the read port 410 for the latch is represented as a single inverter driver coupled to the output of the thread select logic.
  • the number of transistors required in a multithreaded master-slave latch as illustrated by FIG. 4A is reduced by placing the thread select logic between the K0 storage elements and the K1 driver. As a result, only one K1 driver, and only one K1 storage element, are required, regardless of the number of threads supported.
  • FIG. 5 represents at a high level of abstraction a master-slave latch circuit in accordance with an embodiment of the present invention.
  • the circuit of FIG. 5 supports two threads, it being understood that the number of threads may vary.
  • the improved master-slave latch circuit contains a pair of K0 write ports 501 , 505 , which drive a respective pair of K0 storage elements 502 , 506 .
  • a thread select circuit 520 is coupled to the outputs of each K0 storage element 502 , 506 .
  • Thread select circuit 520 selects one and only one of the storage elements 502 , 506 to provide input to a common K1 driver 503 , which drives a common K1 storage element 504 .
  • K1 Storage element 504 is coupled to read port 510 , which drives the output.
  • the improved circuit of FIG. 5 reduces the number of storage elements, and thus the complexity of the latch circuit. Furthermore, the concept is easily extendable to latch circuits supporting a larger number of threads for even greater savings of circuit elements.
  • FIG. 6A shows a detailed implementation and functional waveforms of one embodiment of the circuit of FIG. 5.
  • CNTRL0 and CNTRL1 control gated inverters 601, 602, and are derived as explained above with respect to FIG. 4A. Because there is only one K1 storage element, it is not possible to take feedback from the K1 element as gated input to the K0 stage. Therefore, the K0 write ports 501, 505 must be designed to hold state when no input is present. This is accomplished by replacing clocked inverters 425, 426 of FIG. 4 with clocked transmission gates 603, 604.
  • Since transmission gates 603, 604 do not invert the input signal, the polarity of the K0 elements 502, 506 is effectively reversed. Therefore the latch nodes of K0 elements 502, 506 are used to provide input to selection logic 520, which is implemented as a pair of transmission gates. The output of the gates is input to common K1 driver 503, which is a clocked inverter. This in turn drives common K1 storage element 504, and ultimately read port 510, which is shown as a single inverter coupled to the feedback node of K1 storage element 504.
  • FIGS. 6B, 6C and 6 D illustrate detailed implementations of various alternative embodiments of the circuit of FIG. 5.
  • in the circuit of FIG. 6B, the gated inverters 601, 602 of FIG. 6A are replaced by a pair of transmission gates 611, 612, controlled by the same control signals. This has the effect of eliminating four more transistors from the latch (each gated inverter requiring four transistors as opposed to two for a transmission gate), but will further increase the set-up time.
  • in the circuit of FIG. 6C, gated inverter 601 and clocked transmission gate 603 have been replaced by a single gated inverter 621 (and similarly, inverter 602 and gate 604 have been replaced by gated inverter 622), where the control signals to inverters 621 and 622 include the clock, i.e., the CLK is combined with the HOLD and Write_Select signals to produce CLK_CNTRL0 and CLK_CNTRL1.
  • in the circuit of FIG. 6D, gated inverter 601 and clocked transmission gate 603 of FIG. 6A have been replaced by a single transmission gate 631 (and similarly, inverter 602 and gate 604 have been replaced by transmission gate 632), where the control signals to gates 631 and 632 are the same as to inverters 621 and 622, respectively. While circuits 6C and 6D show fewer components than circuit 6A, whether there actually are fewer components is problematic, because a more complex control signal must be generated for controlling inverters 621, 622 or gates 631, 632.
  • FIG. 7A illustrates the application of input control signals, including scan capability, to the multithreaded latch circuit of FIG. 6A.
  • the data input line is received from the output of a multiplexer 701 , controlled by multiplexer control circuit 702 .
  • when the Scan signal is inactive, multiplexer control 702 directs multiplexer 701 to pass Data_in signal 710 through to the Data_Input 712 of the latch circuit.
  • when Scan signal 721 is active, multiplexer control 702 causes multiplexer 701 to pass Scan_In signal 711 through to the Data_Input 712 of the latch.
  • FIG. 7A also represents the generation of control signals for gated inverters 601, 602 from inverter 731, NAND gates 732, 733, and inverters 734, 735.
  • FIG. 7B illustrates the application of input control signals, including scan capability, to the multithreaded latch circuit of FIG. 6C.
  • This circuit uses the same multiplexer 701 and multiplexer control 702 as the circuit of FIG. 7A. However, it will be recalled that the circuit of FIG. 6C uses a gated inverter in which the control signal is the CLK combined with the HOLD and Write_Select signals to produce CLK_CNTRL0 and CLK_CNTRL1.
  • Inverters 741 , 744 and 745 and 3-input NAND gates 742 , 743 generate the control signals.
  • An alternative approach is to modify the basic multi-threaded latch circuit as described above, so that the scan path is fundamentally different from the logic path in functional mode, and in particular, so that at least some of the K0 cells function as K1 cells when in scan mode.
  • FIG. 8 illustrates such a circuit configuration at a high level.
  • four threads are supported, it being understood that a different number of threads could be supported.
  • two latch circuits 800 , 820 are paired, the latch circuits storing a pair of bits (designated Bit 1 (circuit 800 ) and bit 2 (circuit 820 )).
  • these circuits are illustrated in normal functional mode. I.e., in circuit 800 four K0 cells 801 - 804 provide input to a common K1 cell 805 , and similarly in circuit 820 , K0 cells 821 - 824 provide input to common K1 cell 825 .
  • select logic, write ports, etc. are omitted from this high-level diagram for clarity of illustration.
  • these circuits are illustrated in scan mode, in which cells 822, 823 and 824, which function as K0 cells in normal functional mode, operate as K1 cells.
  • the number of K0 cells in the pair of latches 800 , 820 equals the number of K1 cells, when the latches are being operated in scan mode. This makes it possible to construct a scan path through each cell once and only once. The scan path is shown on the left side of FIG. 8.
  • the scan path goes from K0 cell 801 to K1 cell 805 to K0 cell 802 to cell 823 (acting as a K1 cell for scan mode only) to K0 cell 804 to cell 822 (acting as a K1 cell for scan mode only) to K0 cell 803 to cell 824 (acting as a K1 cell for scan mode only) to K0 cell 821 to K1 cell 825. A behavioral sketch of this scan ordering, and of the HOLD/scan mode priority described further below, appears at the end of this list.
  • FIG. 9A illustrates one implementation of the scannable paired circuit configuration of FIG. 8.
  • a pair of gated inverters 901 , 902 drive K0 memory element 801 .
  • Inverter 901 receives scan data, while inverter 902 receives data from functional logic.
  • the scan control is input to the gates of both inverters, so that inverter 901 is gated on only when scan control is active, while inverter 902 is gated on only if scan control is inactive.
  • the output of K0 cell 801 feeds gated inverter 903 and transmission gate 904 , which is part of the selection logic.
  • Inverter 903 is gated on only if scan control is active.
  • the four transmission gates which make up the selection logic feed clocked inverter 905, which is also shut off if scan control is active.
  • two separate logic paths are provided from K0 cell 801 to K1 cell 805 , one (through inverter 903 ) being active only if scan control is active, while the other (through gate 904 and inverter 905 ) being active only if scan control is inactive.
  • the path followed when scan control is active is completely independent of write select and read select signals.
  • the scan path in turn traverses clocked inverter 906 , element 802 , clocked inverter 907 , element 823 , clocked inverter 908 , element 804 , clocked inverter 909 , element 822 , clocked inverter 910 , element 803 , clocked inverter 911 , element 824 , clocked inverter 912 , element 821 , clocked inverter 913 , and element 825 .
  • FIG. 9B is a timing diagram of the circuit of FIG. 9A.
  • FIGS. 10A and 10B represent alternative embodiments of the circuit configuration of FIG. 8.
  • the clock is logically ANDed with other control lines for control of various gated inverters, while in FIG. 10A the clock is separate.
  • in the circuit of FIG. 10B, the circuit of FIG. 10A is further modified by combining the two inputs to each of elements 801-804 and 821 before the clocked transmission gates, thus eliminating five transmission gates (one for each element). Note that it is not possible to combine the inputs for elements 822-824, because these inputs (scan and normal functional mode) require different clocks.
  • the prior art circuit of FIG. 4A contains a feedback loop from the K1 element to the K0 element, gated by the HOLD signal.
  • the K0 element holds its value because all the drivers are shut off in the presence of the HOLD signal, obviating the need for a feedback loop.
  • a HOLD signal when active, should take precedence over all other control signals. I.e., if HOLD is active, the circuit should hold all values, regardless of the state of other control signals. If the HOLD is inactive, and Scan control is active, the circuit operates in scan mode. If neither HOLD nor Scan is active, the circuit operates in normal functional mode.
  • FIG. 11 illustrates this concept at a high level.
  • FIG. 11 is the circuit of FIG. 8, in which the graphical arrangement of the memory elements has been changed for clarity of illustration, and feedback has been added.
  • the normal operating mode has been superimposed upon the scan mode.
  • the scan mode connections are represented as double lines, to distinguish them from normal operating mode.
  • in HOLD mode, the K0 and K1 elements take the same clock phase as they do in Scan mode, so that there is an equal number of K0 and K1 elements.
  • a respective feedback line 1101 - 1105 is provided to each K0 element 801 , 802 , 804 , 803 , 821 in HOLD mode, the feedback lines receiving input from respective K1 elements 805 , 823 , 822 , 824 , 825 , as shown.
  • FIG. 12 illustrates one embodiment of the high-level circuit of FIG. 11.
  • FIG. 12 is essentially the circuit of FIG. 10B, to which feedback paths 1101 - 1105 have been added.
  • each feedback path 1101-1105 is gated by the HOLD signal at a gated inverter. If HOLD is active, the value of the K1 element 805, 823, 822, 824, 825 will be fed back to the corresponding K0 element 801, 802, 804, 803, 821; no other conditions need be satisfied.
  • a multithreaded latch circuit may be constructed with or without scan path support, and a scan path may be different from that shown in the examples herein.
  • a HOLD feedback loop may exist independently or in conjunction with scan logic.
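  • The following is a minimal, hypothetical Python sketch of two of the ideas listed above: the order in which data is shifted through the paired cells of FIG. 8 during scan mode, and the precedence of the HOLD and Scan controls. The cell numbers follow the figure; the function names and the dictionary-based state representation are illustrative assumptions and are not taken from the patent.

```python
# Scan order through the paired latch circuits of FIG. 8 (K0 and K1 roles
# alternate; cells 822, 823 and 824 act as K1 elements only while scanning).
SCAN_PATH = [801, 805, 802, 823, 804, 822, 803, 824, 821, 825]


def shift_scan_chain(state: dict, scan_in: int) -> int:
    """Shift one bit through the chain; return the bit scanned out of cell 825."""
    scan_out = state[SCAN_PATH[-1]]
    # Walk the chain backwards so each cell picks up its predecessor's old value.
    for i in range(len(SCAN_PATH) - 1, 0, -1):
        state[SCAN_PATH[i]] = state[SCAN_PATH[i - 1]]
    state[SCAN_PATH[0]] = scan_in
    return scan_out


def select_mode(hold: bool, scan: bool) -> str:
    """HOLD takes precedence over every other control, then Scan, then normal mode."""
    if hold:
        return "HOLD"
    if scan:
        return "SCAN"
    return "FUNCTIONAL"


# Example: scan a test pattern through all ten elements of the pair.
state = {cell: 0 for cell in SCAN_PATH}
for bit in (1, 0, 1, 1, 0, 1, 0, 0, 1, 1):
    shift_scan_chain(state, bit)
```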

Abstract

A master-slave latch circuit for a multithreaded processor stores information for multiple threads. The basic cell contains multiple master elements, each corresponding to a respective thread, selection logic coupled to the master elements for selecting a single one of the master outputs, and a single slave element coupled to the selector logic. Preferably, the circuit supports operation in multiple modes, including a scan mode for testing purposes.

Description

    CROSS REFERENCE TO RELATED APPLICATION
  • This is a continuation of pending U.S. patent application Ser. No. 09/439,581, filed Nov. 12, 1999, entitled “MASTER-SLAVE LATCH CIRCUIT FOR MULTITHREADED PROCESSING”, which is herein incorporated by reference.[0001]
  • FIELD OF THE INVENTION
  • The present invention relates to digital data processing systems, and in particular to high-speed latches used in register memory of digital computing devices. [0002]
  • BACKGROUND OF THE INVENTION
  • A modern computer system typically comprises a central processing unit (CPU) and supporting hardware necessary to store, retrieve and transfer information, such as communications buses and memory. It also includes hardware necessary to communicate with the outside world, such as input/output controllers or storage controllers, and devices attached thereto such as keyboards, monitors, tape drives, disk drives, communication lines coupled to a network, etc. The CPU is the heart of the system. It executes the instructions which comprise a computer program and directs the operation of the other system components. [0003]
  • From the standpoint of the computer's hardware, most systems operate in fundamentally the same manner. Processors are capable of performing a limited set of very simple operations, such as arithmetic, logical comparisons, and movement of data from one location to another. But each operation is performed very quickly. Programs which direct a computer to perform massive numbers of these simple operations give the illusion that the computer is doing something sophisticated. What is perceived by the user as a new or improved capability of a computer system is made possible by performing essentially the same set of very simple operations, but doing it much faster. Therefore continuing improvements to computer systems require that these systems be made ever faster. [0004]
  • The overall speed of a computer system (also called the throughput) may be crudely measured as the number of operations performed per unit of time. Conceptually, the simplest of all possible improvements to system speed is to increase the clock speeds of the various components, and particularly the clock speed of the processor(s). E.g., if everything runs twice as fast but otherwise works in exactly the same manner, the system will perform a given task in half the time. Early computer processors, which were constructed from many discrete components, were susceptible to significant speed improvements by shrinking component size, reducing component number, and eventually, packaging the entire processor as an integrated circuit on a single chip. The reduced size made it possible to increase clock speed of the processor, and accordingly increase system speed. [0005]
  • Despite the enormous improvement in speed obtained from integrated circuitry, the demand for ever faster computer systems has continued. Hardware designers have been able to obtain still further improvements in speed by greater integration (i.e., increasing the number of circuits packed onto a single chip), by further reducing the size of circuits, and by various other techniques. However, designers can see that physical size reductions can not continue indefinitely, and there are limits to their ability to continue to increase clock speeds of processors. Attention has therefore been directed to other approaches for further improvements in overall speed of the computer system. [0006]
  • Without changing the clock speed, it is possible to improve system throughput by using multiple processors. The modest cost of individual processors packaged on integrated circuit chips has made this approach practical. However, one does not simply double a system's throughput by going from one processor to two. The introduction of multiple processors to a system creates numerous architectural problems. For example, the multiple processors will typically share the same main memory (although each processor may have its own cache). It is therefore necessary to devise mechanisms that avoid memory access conflicts, and assure that extra copies of data in caches are tracked in a coherent fashion. Furthermore, each processor puts additional demands on the other components of the system such as storage, I/O, memory, and particularly, the communications buses that connect various components. As more processors are introduced, there is greater likelihood that processors will spend significant time waiting for some resource being used by another processor. [0007]
  • Without delving into further architectural complications of multiple processor systems, it can still be observed that there are many reasons to improve the speed of the individual CPU, whether a system uses multiple CPUs or a single CPU. If the CPU clock speed is given, it is possible to further increase the work done by the individual CPU, i.e., the number of operations executed per unit time, by increasing the average number of operations executed per clock cycle. [0008]
  • In order to boost CPU speed, it is common in high performance processor designs to employ instruction pipelining, as well as one or more levels of cache memory. Pipeline instruction execution allows subsequent instructions to begin execution before previously issued instructions have finished. Cache memories store frequently used and other data nearer the processor and allow instruction execution to continue, in most cases, without waiting the full access time of a main memory access. [0009]
  • Pipelines will stall under certain circumstances. An instruction that is dependent upon the results of a previously dispatched instruction that has not yet completed may cause the pipeline to stall. For instance, instructions dependent on a load/store instruction in which the necessary data is not in the cache, i.e., a cache miss, cannot be executed until the data becomes available in the cache. Maintaining the requisite data in the cache necessary for continued execution and sustaining a high hit ratio (i.e., the number of requests for data compared to the number of times the data was readily available in the cache), is not trivial, especially for computations involving large data structures. A cache miss can cause the pipelines to stall for several cycles, and the total amount of memory latency will be severe if the data is not available most of the time. Although memory devices used for main memory are becoming faster, the speed gap between such memory chips and high-end processors is becoming increasingly larger. Accordingly, a significant amount of execution time in current high-end processor designs is spent waiting for resolution of cache misses. [0010]
  • Reducing the amount of time that the processor is idle waiting for certain events, such as re-filling a pipeline or retrieving data from memory, will increase the average number of operations per clock cycle. One architectural innovation directed to this problem is called “hardware multithreading” or simply “multithreading”. This technique involves concurrently maintaining the state of multiple executable sequences of instructions, called threads, within a single CPU. As a result, it is relatively simple and fast to switch threads. [0011]
  • The term “multithreading” as defined in the computer architecture community is not the same as the software use of the term. In the case of software, “multithreading” refers to one task being subdivided into multiple related threads. In the hardware definition, the threads being concurrently maintained in a processor are merely arbitrary sequences of instructions, which don't necessarily have any relationship with one another. Therefore the term “hardware multithreading” is often used to distinguish the two uses of the term. As used herein, “multithreading” will refer to hardware multithreading. [0012]
  • There are two basic forms of multithreading. In the more traditional form, sometimes called “fine-grained multithreading”, the processor executes N threads concurrently by interleaving execution on a regular basis, such as interleaving cycle-by-cycle. This creates a gap in time between the execution of each instruction within a single thread, which removes the need for the processor to wait for certain short term latency events, such as refilling an instruction pipeline. In the second form of multithreading, sometimes called “coarse-grained multithreading”, multiple instructions in a single thread are sequentially executed until the processor encounters some longer term latency event, such as a cache miss, which triggers a switch to another thread. [0013]
  • Like any innovation, multithreading comes with a price. Typically, multithreading involves replicating the processor registers for each thread in order to maintain the state of multiple threads. For instance, for a processor implementing the architecture sold under the trade name PowerPC™ to perform multithreading, it will generally be necessary to replicate the following registers for each thread: general purpose registers, floating point registers, condition registers, floating point status and control register, count register, link register, exception register, save/restore registers, and special purpose registers. Additionally, the special buffers, such as a segment lookaside buffer, can be replicated or each entry can be tagged with the thread number (or alternatively, be flushed on every thread switch). Some branch prediction mechanisms, e.g., the correlation register and the return stack, should also be replicated. [0014]
  • The replication of so many registers consumes a significant amount of chip area. Since chip area is typically in great demand, the hardware designer must face difficult choices. One can reduce cache sizes, reduce the number of general purpose registers available to each thread, or make other significant concessions, but none of these choices is desirable. There is a need for an improved method of dealing with the proliferation of registers which accompanies multithreading. [0015]
  • SUMMARY OF THE INVENTION
  • It is therefore an object of the present invention to provide an improved multithreaded processor. [0016]
  • Another object of this invention is to provide an improved master-slave latch circuit for supporting hardware multithreading operation of a digital data computing device. [0017]
  • Another object of this invention is to reduce the size and complexity of latch circuitry for supporting hardware multithreading operation of a digital data computing device. [0018]
  • In a digital processor supporting hardware multithreading, a master-slave latch circuit stores information for multiple threads. The basic cell contains multiple master elements, each corresponding to a respective thread, selection logic coupled to the master elements for selecting a single one of the master outputs, and a single slave element coupled to the selector logic. [0019]
  • In the preferred embodiment, the circuit supports operation in a scan mode for testing purposes. In scan mode, cells are paired. One cell of each pair contains one or more elements which normally function as master elements, but which may also function as slave elements during scan mode operation. These dual function elements are coupled to master elements of the other cell of the pair. When operating in scan mode using this arrangement, the number of master elements in the pair of cells equals the number of slave elements, even though the number of master elements exceeds the number of slave elements during normal operation. This permits data to be successively scanned through all elements of the circuit, ensuring thorough testing. [0020]
  • In an alternative embodiment, elements function as in scan mode during a HOLD mode of operation, and a feedback loop controlled by a HOLD signal is added to each pair of master/slave elements. The feedback loop drives the master element with the value of the slave. [0021]
  • The details of the present invention, both as to its structure and operation, can best be understood in reference to the accompanying drawings, in which like reference numerals refer to like parts, and in which:[0022]
  • BRIEF DESCRIPTION OF THE DRAWING
  • FIG. 1 shows the major hardware components of a computer system for utilizing the multithreaded master-slave latch circuit according to the preferred embodiment of the present invention. [0023]
  • FIG. 2 is a conceptual high-level view of a typical register which records state information in a multithreaded processor. [0024]
  • FIGS. 3A and 3B (herein collectively referred to as FIG. 3) show a typical prior art master-slave latch circuit. [0025]
  • FIGS. 4A-1, 4A-2 and 4A-3 (herein collectively referred to as FIG. 4A) show a prior art implementation of a master-slave latch circuit in a multithreaded register environment, and functional waveforms associated therewith. [0026]
  • FIG. 4B shows the prior art master-slave latch circuit of FIG. 4A at a higher level of abstraction. [0027]
  • FIG. 5 represents at a high level of abstraction a master-slave latch circuit, in accordance with an embodiment of the present invention. [0028]
  • FIGS. 6A-1, 6A-2, 6A-3 and 6A-4 (herein collectively referred to as FIG. 6A) show a detailed implementation and functional waveforms of one embodiment of the circuit of FIG. 5. [0029]
  • FIGS. 6B-1, 6B-2 and 6B-3 (herein collectively referred to as FIG. 6B), FIGS. 6C-1, 6C-2 and 6C-3 (herein collectively referred to as FIG. 6C), and FIGS. 6D-1, 6D-2 and 6D-3 (herein collectively referred to as FIG. 6D) illustrate detailed implementations of various alternative embodiments of the circuit of FIG. 5. [0030]
  • FIGS. 7A-1, 7A-2 and 7A-3 (herein collectively referred to as FIG. 7A) and FIGS. 7B-1, 7B-2 and 7B-3 (herein collectively referred to as FIG. 7B) illustrate the application of input control signals, including scan capability, to the multithreaded latch circuits of FIG. 6A and FIG. 6C, respectively, in accordance with certain embodiments of the present invention. [0031]
  • FIGS. 8A and 8B (herein collectively referred to as FIG. 8) illustrate at a high level of abstraction a pair of latch circuits having a scan path in which some K0 cells operate as K1 cells, in accordance with certain embodiments of the present invention. [0032]
  • FIGS. 9A-1, 9A-2, 9A-3, 9A-4, 9A-5 and 9A-6 (herein collectively referred to as FIG. 9A) illustrate a detailed implementation of one embodiment of the circuit of FIG. 8. [0033]
  • FIGS. 9B-1, 9B-2 and 9B-3 (herein collectively referred to as FIG. 9B) illustrate functional waveforms of the circuit of FIG. 9A. [0034]
  • FIGS. 10A-1, 10A-2, 10A-3, 10A-4, 10A-5, 10A-6, 10A-7 and 10A-8 (herein collectively referred to as FIG. 10A), and FIGS. 10B-1, 10B-2, 10B-3, 10B-4, 10B-5, 10B-6, 10B-7 and 10B-8 (herein collectively referred to as FIG. 10B) illustrate detailed implementations of various alternative embodiments of the circuit of FIG. 8. [0035]
  • FIG. 11 illustrates at a high level of abstraction a pair of latch circuits having scan and feedback paths, in accordance with certain embodiments of the present invention. [0036]
  • FIGS. 12A, 12B, 12C, 12D, 12E, 12F, 12G, 12H and 12I (herein collectively referred to as FIG. 12) illustrate a detailed implementation of one embodiment of the circuit of FIG. 11. [0037]
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
  • The major hardware components of a computer system 100 for utilizing the multithreaded master-slave latch circuit according to the preferred embodiment of the present invention are shown in FIG. 1. Central processing units (CPUs) 101A and 101B support hardware multithreaded operation in performing basic machine processing functions on instructions and data from main memory 102. Each processor contains respective internal level one instruction caches 106A, 106B (L1 I-cache), and level one data caches 107A, 107B (L1 D-cache). Each L1 I-cache 106A, 106B stores instructions for execution by its CPU. Each L1 D-cache stores data (other than instructions) to be processed by its CPU. Each CPU 101A, 101B is coupled to a respective level two cache (L2-cache) 108A, 108B, which can be used to hold both instructions and data. Memory bus 109 transfers data among CPUs and memory. CPUs and memory also communicate via memory bus 109 and bus interface 105 with system I/O bus 110. Various I/O processing units (IOPs) 111-115 attach to system I/O bus 110 and support communication with a variety of storage and I/O devices, such as direct access storage devices (DASD), tape drives, workstations, printers, and remote communication lines for communicating with remote devices or other computer systems. For simplicity, CPU, L1 I-cache, L1 D-cache, and L2 cache are herein designated generically by reference numbers 101, 106, 107 and 108, respectively. While various buses are shown in FIG. 1, it should be understood that these are intended to represent various communications paths at a conceptual level, and that the actual physical configuration of buses may vary, and in fact may be considerably more complex. It should further be understood that FIG. 1 is intended as but one example of a system configuration, and that the actual number, type and configuration of components in a computer system may vary. In particular, the present invention could be employed in systems having a single multithreaded CPU, or in systems having multiple multithreaded CPUs. [0038]
  • Each CPU 101 is capable of maintaining the state of multiple threads. CPU 101 will typically include a plurality of general purpose registers for storing data, and various special-purpose registers for storing conditions, intermediate results, instructions, and other information which collectively determines the state of the processor. This information is replicated for each thread supported by CPU 101. [0039]
  • Additional background information concerning multithreaded processor design is contained in the following commonly assigned U.S. patents and copending U.S. patent applications, herein incorporated by reference: U.S. Pat. No. 6,161,166 to Doing, et al.; U.S. Pat. No. 6,263,404 to Borkenhagen, et al.; U.S. Pat. No. 6,021,481 to Eickemeyer, et al; U.S. Pat. No. 6,212,544 to Borkenhagen, et al.; Ser. No. 08/958,716, filed Oct. 23, 1997, entitled Method and Apparatus for Selecting Thread Switch Events in a Multithreaded Processor (Assignee's docket no. RO997-104); U.S. Pat. No. 6,567,839 to Borkenhagen, et al.; U.S. Pat. No. 6,105,051 to Borkenhagen, et al.; U.S. Pat. No. 6,076,57 to Borkenhagen, et al.; U.S. Pat. No. 6,088,788 to Borkenhagen, et al. While the multithreaded processor design described in the above applications is a coarse-grained multithreading implementation, it should be understood that the present invention is applicable to either coarse-grained or fine-grained multithreading. [0040]
  • FIG. 2 is a conceptual high-level view of a typical register which records state information in a multithreaded processor. In the example of FIG. 2, four threads are supported, it being understood that the number of supported threads may vary. Where hardware multithreading is used, it is necessary to maintain the register state of each thread supported by the hardware. Therefore, a separate register portion 202-205 is required for each thread. Each register portion 202-205 typically contains multiple bits, although a single-bit register is possible. At any given instant, only one thread state is needed by the processor. This is typically, although not always, the active thread. I.e., in fine-grained multithreading, where processor cycles are allocated to threads on a round-robin basis, only the register information corresponding to the thread for the current machine cycle is typically needed. In coarse-grained multithreading, where the active thread changes from time to time upon a cache miss, interrupt or similar event, only the register information corresponding to the currently active thread is typically needed. In either case, a thread select input 207 to multiplexer logic 206 selects the contents of one of the register portions 202-205 for output on output lines 210. [0041]
  • It will be understood that FIG. 2 represents a multithreading register configuration at a conceptual level. A register is typically implemented as a set of individual bit storage circuits. While FIG. 2 shows each register portion 202-205 corresponding to a thread as a separate entity for ease of understanding, in fact the individual bit circuits of the registers would typically be physically interleaved on a chip, i.e., bit 0 of register portion 202 would be physically adjacent bit 0 of register portions 203, 204 and 205; the respective bit 1 circuits would all be physically adjacent each other, etc. [0042]
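  • The register organization of FIG. 2 can be summarized by a short behavioral model. The following Python sketch is only illustrative: the class and method names are assumptions, and it models the register at the word level rather than at the interleaved bit-circuit level described above.

```python
# Illustrative register-level model of FIG. 2 (names are assumptions, not from
# the patent): one register portion per thread, with a thread-select input
# acting as multiplexer 206 to choose which portion drives the output lines.

class MultithreadedRegister:
    def __init__(self, num_threads: int = 4, width_bits: int = 32):
        self.portions = [0] * num_threads        # register portions 202-205
        self.mask = (1 << width_bits) - 1

    def write(self, thread: int, value: int) -> None:
        self.portions[thread] = value & self.mask

    def read(self, thread_select: int) -> int:
        # Only one thread's state is needed at any instant, typically the
        # active thread's; thread_select plays the role of input 207.
        return self.portions[thread_select]


reg = MultithreadedRegister()
reg.write(thread=2, value=0x1234)
assert reg.read(thread_select=2) == 0x1234
```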
  • For certain types of state information, it is desirable to store the bits in master-slave latches. In particular, information which may need to be read and written to on the same processor clock cycle is often stored in such a circuit. [0043]
  • FIG. 3 shows a typical master-slave latch circuit 301, also known as a K0-K1 latch, as is well known in the art, and functional waveforms associated therewith. Master-slave latch circuit 301 contains a master (K0) storage element 302, comprising K0 latch inverter 303 and K0 feedback inverter 304, and a slave (K1) storage element 305, comprising K1 latch inverter 306 and K1 feedback inverter 307. Master storage element 302 is set by K0 clocked inverter 310, which drives an output when the processor clock (CLK) is low. Slave storage element 305 is set by K1 clocked inverter 311, which drives an output when the processor clock is high. (In fact, processor clocks may have additional phases to avoid timing overlaps which could corrupt data, but for purposes of understanding the present application, one may simply assume that clocked inverters on different clock phases, e.g., inverters 310 and 311, are never active simultaneously). A pair of gated inverters 320, 321 coupled to the input of clocked inverter 310 are controlled by a HOLD signal; these determine whether the master element will be set with new data from the Data_Input line, or will be refreshed with existing data in the latch from latch node K1. [0044]
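  • The behavior of the K0-K1 latch just described can be sketched at a purely functional level. The Python model below is an assumption-laden simplification: it ignores signal inversions and treats the two clock phases as separate method calls, but it captures why the latch can be read and written on the same cycle.

```python
# Behavioral sketch of the K0-K1 (master-slave) latch of FIG. 3.
# Signal polarities and electrical details are deliberately ignored.

class MasterSlaveLatch:
    def __init__(self):
        self.k0 = 0  # master (K0) storage element 302
        self.k1 = 0  # slave (K1) storage element 305

    def clock_low_phase(self, data_input: int, hold: bool) -> None:
        # Gated inverters 320/321: with HOLD active the master is refreshed
        # from latch node K1; otherwise it takes new data from Data_Input.
        self.k0 = self.k1 if hold else data_input

    def clock_high_phase(self) -> None:
        # K1 clocked inverter 311 transfers the master value to the slave.
        self.k1 = self.k0

    @property
    def output(self) -> int:
        return self.k1


latch = MasterSlaveLatch()
latch.clock_low_phase(data_input=1, hold=False)  # write new data into K0
previous_value = latch.output                    # old data still readable at K1
latch.clock_high_phase()                         # new data now appears at K1
```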
  • It will be appreciated by those skilled in the art that there are many different variations of master-slave latch circuit. However, in general they function in a similar manner, i.e., data is clocked into a master element, and then a slave element, on different phases of a clock, thus preventing input data from going directly through the latch to the output on the same clock cycle. This characteristic enables the existing data in the latch to be read on the same cycle that new data is written to the latch. The present invention is not necessarily limited to the type of master-slave latch circuit shown in FIG. 3, and could be applied to different types of master-slave latch circuits. [0045]
  • FIGS. 4A and 4B show a straightforward implementation of the master-slave latch circuit 301 for a multithreaded register environment, and functional waveforms associated therewith. FIG. 4A shows the detailed circuit and functional waveforms, while FIG. 4B shows the same circuit at a higher level of abstraction. In the example of FIG. 4, two threads are supported, it being understood that this circuit could be generalized for a larger number of threads. The multithreaded latch circuit contains two K0-K1 latch circuits similar to that of FIG. 3, one for each thread. I.e., circuit elements 401-404 constitute a K0-K1 latch circuit for thread 0, while circuit elements 405-408 constitute a K0-K1 latch circuit for thread 1. As shown in the high-level view of FIG. 4B, the circuit may be conceptualized as a pair of write ports 401, 405, each of which is physically implemented as a pair of gated inverters 421-424 which provide input to a clocked inverter 425, 426, the clocked inverter serving as a driver for the K0 storage elements 402, 406. Because a common data input line feeds both latch circuits, the data on this line will generally be intended for the Thread 0 latch or the Thread 1 latch, but not both. Therefore the gated inverters 421-424 (equivalents of gated inverters 320 and 321 in FIG. 3) are gated by signals designated CNTRL0 and CNTRL1, where CNTRL0 = (¬HOLD AND Write_Select_T0), and CNTRL1 = (¬HOLD AND Write_Select_T1). Each K0 storage element 402, 406 is physically implemented as a pair of inverters driving each other's input, as shown. The K0 storage elements provide input to respective K1 drivers 403, 407, which are implemented as clocked inverters. These provide input to the K1 storage cells 404, 408, implemented as shown. Each of the two latch circuits serves as input to thread select logic 409, which in this embodiment is a pair of transmission gates controlled by a thread select signal. The read port 410 for the latch is represented as a single inverter driver coupled to the output of the thread select logic. [0046]
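  • For comparison with the improved circuit introduced next, the prior-art arrangement of FIGS. 4A and 4B can be modeled behaviorally as shown below. The Python class is a hypothetical sketch (two threads assumed, inversions ignored); note that every thread carries its own K1 driver and K1 storage element, and thread selection happens only after the slave stage.

```python
# Behavioral sketch of the prior-art multithreaded latch of FIGS. 4A/4B.

class PriorArtMultithreadedLatch:
    def __init__(self, num_threads: int = 2):
        self.k0 = [0] * num_threads  # K0 storage elements 402, 406
        self.k1 = [0] * num_threads  # K1 storage elements 404, 408 (one per thread)

    def clock_low_phase(self, data_input: int, hold: bool, write_select) -> None:
        for t in range(len(self.k0)):
            # CNTRLt = (not HOLD) and Write_Select_Tt gates in new data;
            # otherwise the master is refreshed from its own slave (K1 feedback).
            if (not hold) and write_select[t]:
                self.k0[t] = data_input
            else:
                self.k0[t] = self.k1[t]

    def clock_high_phase(self) -> None:
        for t in range(len(self.k0)):
            self.k1[t] = self.k0[t]  # each thread needs its own K1 driver

    def read(self, thread_select: int) -> int:
        return self.k1[thread_select]  # thread select logic 409 sits at the output
```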
  • In accordance with the present invention, the number of transistors required in a multithreaded master-slave latch as illustrated by FIG. 4A is reduced by placing the thread select logic between the K0 storage elements and the K1 driver. As a result, only one K1 driver, and only one K1 storage element, are required, regardless of the number of threads supported. [0047]
  • FIG. 5 represents at a high level of abstraction a master-slave latch circuit in accordance with an embodiment of the present invention. For simplicity of illustration, the circuit of FIG. 5 supports two threads, it being understood that the number of threads may vary. [0048]
  • The improved master-slave latch circuit contains a pair of K0 write [0049] ports 501, 505, which drive a respective pair of K0 storage elements 502, 506. A thread select circuit 520 is coupled to the outputs of each K0 storage element 502, 506. Thread select circuit 520 selects one and only one of the storage elements 502, 506 to provide input to a common K1 driver 503, which drives a common K1 storage element 504. K1 storage element 504 is coupled to read port 510, which drives the output.
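  • The same style of behavioral sketch, again hedged and with illustrative names, shows how the arrangement of FIG. 5 differs: the thread select now sits between the per-thread K0 elements and a single shared K1 element, so only the K0 array grows with the number of threads.

    class MultithreadLatchImproved:
        """Behavioral sketch of FIG. 5: per-thread K0 elements, one common K1 (illustrative)."""

        def __init__(self, num_threads=2):
            self.k0 = [0] * num_threads  # K0 storage elements 502, 506, ...
            self.k1 = 0                  # common K1 storage element 504

        def clock_low(self, data_input, hold, write_select):
            # With no per-thread K1 node to feed back from, an unselected or held
            # write port simply leaves its K0 element undisturbed (see FIG. 6A).
            if not hold:
                self.k0[write_select] = data_input

        def clock_high(self, thread_select):
            # Thread select circuit 520 picks one K0 output for the common K1 driver 503.
            self.k1 = self.k0[thread_select]

        def read(self):
            return self.k1  # read port 510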
  • It can readily be seen that the improved circuit of FIG. 5 reduces the number of storage elements, and thus the complexity of the latch circuit. Furthermore, the concept is easily extendable to latch circuits supporting a larger number of threads for even greater savings of circuit elements. [0050]
  • FIG. 6A shows a detailed implementation and functional waveforms of one embodiment of the circuit of FIG. 5. In order to support a [0051] select circuit 520 between the K0 and K1 stages, several modifications are made to the circuit of FIG. 4. CNTRL0 and CNTRL1 control gated inverters 601, 602, and are derived as explained above with respect to FIG. 4A. Because there is only one K1 storage element, it is not possible to take feedback from the K1 element as gated input to the K0 stage. Therefore, the K0 write ports 501, 505 must be designed to hold state when no input is present. This is accomplished by replacing clocked inverters 425, 426 of FIG. 4 with clocked transmission gates 603, 604. Since transmission gates 603, 604 do not invert the input signal, the polarity of the K0 elements 502, 506 is effectively reversed. Therefore the latch nodes of K0 elements 502, 506 are used to provide input to selection logic 520, which is implemented as a pair of transmission gates. The output of the gates is input to common K1 driver 503, which is a clocked inverter. This in turn drives common K1 storage element 504, and ultimately read port 510, which is shown as a single inverter coupled to the feedback node of K1 storage element 504.
  • The replacement of clocked [0052] inverters 425, 426 of FIG. 4 with transmission gates 603, 604 of FIG. 6A may result in a small increase in set-up time for the improved latch circuit of FIG. 6A vis-a-vis that of FIG. 4. On the other hand, the removal of a logic level from the output path (i.e., removal of select logic 409) should improve read performance. From a performance standpoint, the circuit of FIG. 6A is therefore roughly equivalent to, if not slightly better than, the prior art circuit. At the same time, a very substantial savings in transistors is achieved.
  • FIGS. 6B, 6C and [0053] 6D illustrate detailed implementations of various alternative embodiments of the circuit of FIG. 5. In the circuit of FIG. 6B, the gated inverters 601, 602 of FIG. 6A are replaced by a pair of transmission gates 611, 612, controlled by the same control signals. This has the effect of eliminating four more transistors from the latch (each gated inverter requiring four transistors, as opposed to two for a transmission gate), but will further increase the set-up time. In the circuit of FIG. 6C, gated inverter 601 and clocked transmission gate 603 have been replaced by a single gated inverter 621 (and similarly, inverter 602 and gate 604 have been replaced by gated inverter 622), where the control signals to inverters 621 and 622 include the clock, i.e.:
  • CLK_CNTRL0=CLK AND CNTRL0=CLK AND ¬HOLD AND Write_Select_T0 [0054]
  • CLK_CNTRL1=CLK AND CNTRL1=CLK AND ¬HOLD AND Write_Select_T1. [0055]
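  • To make the equivalence of the control schemes concrete, the short Python check below (purely illustrative; the function names are not from the figures) verifies that a K0 element accepts new data under exactly the same condition whether the clock gates a separate transmission gate, as in FIG. 6A, or is folded into a single control signal, as in FIGS. 6C and 6D. The circuits differ in where the clock term is applied, not in when a write occurs.

    from itertools import product

    def write_enable_fig_6a(clk, hold, write_select_t):
        # FIG. 6A style: the gated inverter uses CNTRLn; the clocked transmission gate adds CLK.
        cntrl = (not hold) and write_select_t
        return cntrl and clk

    def write_enable_fig_6c(clk, hold, write_select_t):
        # FIG. 6C/6D style: the clock is folded into a single control, CLK_CNTRLn.
        return clk and (not hold) and write_select_t

    # The two write-enable conditions agree for every input combination.
    assert all(
        write_enable_fig_6a(c, h, w) == write_enable_fig_6c(c, h, w)
        for c, h, w in product([False, True], repeat=3)
    )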
  • In the circuit of FIG. 6D, [0056] gated inverter 601 and clocked transmission gate 603 of FIG. 6A have been replaced by a single transmission gate 631 (and similarly, inverter 602 and gate 604 have been replaced by transmission gate 632), where the control signals to gates 631 and 632 are the same as those to inverters 621 and 622, respectively. While the circuits of FIGS. 6C and 6D show fewer components than the circuit of FIG. 6A, whether they actually require fewer devices overall is debatable, because a more complex control signal must be generated for controlling inverters 621, 622 or gates 631, 632.
  • In the design of processors and other complex logic, it is frequently desirable to implement scannable logic designs for testing purposes. In a scannable design, a global scan signal (usually imposed on an input pin of the processor chip) forces the processor into a scan mode of operation. In scan mode, pre-determined data patterns are sequentially clocked through the processor's registers to verify that the logic is working properly. Scanned data must pass through each register to be tested individually, and therefore has its own data path. [0057]
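  • Conceptually the scan path behaves as one long shift register. The hedged Python sketch below (not tied to any particular figure; the function and variable names are illustrative) shows the idea: a predetermined pattern is shifted in one bit per clock, and after enough further clocks the same pattern emerges at the scan output if every cell in the chain stores and passes data correctly.

    def scan_shift(chain, scan_in_bits):
        """Shift bits through a list of latch values, one bit per clock (illustrative)."""
        scan_out = []
        for bit in scan_in_bits:
            scan_out.append(chain[-1])  # the last cell drives Scan_Out this cycle
            chain[1:] = chain[:-1]      # each cell takes its predecessor's value
            chain[0] = bit              # the first cell takes Scan_In
        return scan_out

    pattern = [1, 0, 1, 1, 0]
    chain = [0] * 5                               # five cells in the scan path
    scan_shift(chain, pattern)                    # clock the pattern in
    assert scan_shift(chain, [0] * 5) == pattern  # it reappears, unchanged, at Scan_Out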
  • FIG. 7A illustrates the application of input control signals, including scan capability, to the multithreaded latch circuit of FIG. 6A. As shown in FIG. 7A, the basic circuit of FIG. 6A is unchanged. The data input line is received from the output of a [0058] multiplexer 701, controlled by multiplexer control circuit 702. In normal operating mode, multiplexer control 702 directs multiplexer 701 to pass Data_In signal 710 through to the Data_Input 712 of the latch circuit. When Scan signal 721 is active, multiplexer control 702 causes multiplexer 701 to pass Scan_In signal 711 through to the Data_Input 712 of the latch. There may be one Data_In signal or, more typically, multiple Data_In signals selected by the multiplexer and multiplexer control, as shown. Thus, Scan_In signal 711 is typically only one of many possible inputs to the latch. When Scan signal 721 is active, it overrides all other control lines so that Scan_In is forced into the latch. FIG. 7A also represents the generation of control signals for gated inverters 601, 602 from inverter 731, NAND gates 732, 733, and inverters 734, 735.
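  • The input selection just described reduces to a small priority multiplexer; a hedged Python sketch follows (names illustrative). When Scan is active it overrides the other controls and forces Scan_In onto the latch's Data_Input; otherwise the multiplexer control chooses among the ordinary Data_In sources.

    def data_input_mux(scan, scan_in, data_in_signals, data_select):
        """Behavioral model of multiplexer 701 / multiplexer control 702 (illustrative)."""
        if scan:
            return scan_in                   # Scan overrides all other control lines
        return data_in_signals[data_select]  # normal functional mode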
  • FIG. 7B illustrates the application of input control signals, including scan capability, to the multithreaded latch circuit of FIG. 6C. This circuit uses the [0059] same multiplexer 701 and multiplexer control 702 as the circuit of FIG. 7A. However, it will be recalled that the circuit of FIG. 6C uses a gated inverter in which the control signal is the CLK combined with the HOLD and Write_Select signals to produce CLK_CNTRL0 and CLK_CNTRL1. Inverters 741, 744 and 745 and 3-input NAND gates 742, 743 generate these control signals.
  • While it is possible to implement scan capability as shown in FIGS. 7A and 7B, whether a particular K0 element is used depends upon the state of the Write_Select lines. The Write_Select lines are not normally controlled by the scan signals, but instead by their own complex logic, which may depend on many state variables. This makes 100% testing of the register cells problematic. It would be possible to modify the Write_Select logic so that, when scan is active, the normal logic is overridden with scanned-in values, but this adds additional complexity to the circuit and to the testing procedure. Furthermore, it makes the scan test unnecessarily redundant with respect to the K1 logic cells, because every value scanned into every one of the K0 cells will have to be scanned through the common K1 cell. All of these problems are aggravated as support for additional threads (and, consequently, additional K0 cells sharing the same K1 cell) is added to the processor. [0060]
  • An alternative approach is to modify the basic multi-threaded latch circuit as described above, so that the scan path is fundamentally different from the logic path in functional mode, and in particular, so that at least some of the K0 cells function as K1 cells when in scan mode. [0061]
  • FIG. 8 illustrates such a circuit configuration at a high level. In the circuit of FIG. 8, four threads are supported, it being understood that a different number of threads could be supported. Preferably, two [0062] latch circuits 800, 820 are paired, the latch circuits storing a pair of bits (designated Bit 1 (circuit 800) and Bit 2 (circuit 820)). On the right, these circuits are illustrated in normal functional mode. I.e., in circuit 800 four K0 cells 801-804 provide input to a common K1 cell 805, and similarly in circuit 820, K0 cells 821-824 provide input to common K1 cell 825. It is understood that select logic, write ports, etc., are omitted from this high-level diagram for clarity of illustration. On the left, these circuits are illustrated in scan mode. In scan mode, cells 822, 823 and 824, which function as K0 cells in normal functional mode, operate as K1 cells. By changing the clock phase of certain cells, the number of K0 cells in the pair of latches 800, 820 equals the number of K1 cells when the latches are being operated in scan mode. This makes it possible to construct a scan path through each cell once and only once. The scan path is shown on the left side of FIG. 8; i.e., the scan path goes from K0 cell 801 to K1 cell 805, to K0 cell 802, to cell 823 (acting as a K1 cell in scan mode only), to K0 cell 804, to cell 822 (acting as a K1 cell in scan mode only), to K0 cell 803, to cell 824 (acting as a K1 cell in scan mode only), to K0 cell 821, and finally to K1 cell 825.
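  • The scan ordering of FIG. 8 can be written down directly as data. In the hedged Python sketch below the cell numbers are taken from the figure; the variable names and the even/odd position convention are illustrative. It simply records that the path alternates K0 and K1 positions, with cells 822, 823 and 824 occupying K1 positions even though they are K0 cells in functional mode, which is what equalizes the K0 and K1 counts and lets the chain visit every cell exactly once.

    # Scan path of FIG. 8: alternating K0 / K1 positions, ten cells in total.
    SCAN_PATH = [801, 805, 802, 823, 804, 822, 803, 824, 821, 825]

    k0_positions = SCAN_PATH[0::2]  # 801, 802, 804, 803, 821 -- sample on one clock phase
    k1_positions = SCAN_PATH[1::2]  # 805, 823, 822, 824, 825 -- sample on the other phase

    # Cells 822, 823 and 824 are K0 cells in functional mode but sit in K1 slots here.
    assert len(k0_positions) == len(k1_positions) == 5
    assert {822, 823, 824} <= set(k1_positions)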
  • FIG. 9A illustrates one implementation of the scannable paired circuit configuration of FIG. 8. As shown in FIG. 9A, a pair of [0063] gated inverters 901, 902 drive K0 memory element 801. Inverter 901 receives scan data, while inverter 902 receives data from functional logic. The scan control is input to the gates of both inverters, so that inverter 901 is gated on only when scan control is active, while inverter 902 is gated on only if scan control is inactive. The output of K0 cell 801 feeds gated inverter 903 and transmission gate 904, which is part of the selection logic. Inverter 903 is gated on only if scan control is active. The four transmission gates which make up the selection logic feed clocked inverter 905, which is also shut off if scan control is active. Thus two separate logic paths are provided from K0 cell 801 to K1 cell 805: one (through inverter 903) is active only if scan control is active, while the other (through gate 904 and inverter 905) is active only if scan control is inactive. Thus, the path followed when scan control is active is completely independent of the write select and read select signals. From K1 cell 805, the scan path in turn traverses clocked inverter 906, element 802, clocked inverter 907, element 823, clocked inverter 908, element 804, clocked inverter 909, element 822, clocked inverter 910, element 803, clocked inverter 911, element 824, clocked inverter 912, element 821, clocked inverter 913, and element 825. It will be observed that inverters 907, 909 and 911 are driven on the high phase of the clock while in scan mode, but that the corresponding inverters 921, 922, 923 which drive the same elements 822, 823, 824 in normal functional mode are driven on the low clock phase. Thus, elements 822, 823, 824 act as K0 elements in normal functional mode, and as K1 elements in scan mode. FIG. 9B is a timing diagram of the circuit of FIG. 9A.
  • FIGS. 10A and 10B represent alternative embodiments of the circuit configuration of FIG. 8. In comparing the circuit of FIG. 10A to that of FIG. 9A, it will be observed that in FIG. 9A, the clock is logically ANDed with other control lines for control of various gated inverters, while in FIG. 10A the clock is separate. In the circuit of FIG. 10B, the circuit of FIG. 10A is further modified by combining the two inputs to each of elements [0064] 801-804 and 821 before the clocked transmission gates, thus eliminating five transmission gates (one for each element). Note that it is not possible to combine the inputs for elements 822-824, because these inputs (scan and normal functional mode) require different clocks.
  • As observed, the prior art circuit of FIG. 4A contains a feedback loop from the K1 element to the K0 element, gated by the HOLD signal. In the various embodiments of the present invention discussed above, the K0 element holds its value because all the drivers are shut off in the presence of the HOLD signal, obviating the need for a feedback loop. However, there may be applications in which it is desirable to drive the K0 elements in the presence of a HOLD signal, rather than simply shut off all input drivers. There may, for example, be subtle timing concerns with the various input controls. [0065]
  • A HOLD signal, when active, should take precedence over all other control signals. I.e., if HOLD is active, the circuit should hold all values, regardless of the state of other control signals. If the HOLD is inactive, and Scan control is active, the circuit operates in scan mode. If neither HOLD nor Scan is active, the circuit operates in normal functional mode. [0066]
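  • Stated as a hedged sketch (Python, illustrative names), the precedence just described is a simple priority decision:

    def latch_mode(hold, scan):
        """Mode precedence: HOLD over Scan, Scan over normal functional operation."""
        if hold:
            return "HOLD"        # hold all values, regardless of other control signals
        if scan:
            return "SCAN"        # operate in scan mode
        return "FUNCTIONAL"      # normal functional mode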
  • If one considers only a single latch circuit in functional mode, it would appear impossible to provide a feedback loop for each K0 element, because there is only one K1 element, which is shared. However, as discussed above and shown in the configuration of FIG. 8, and various embodiments thereof, it is possible to pair two latch circuits, and to alter the clock phase for some of the elements, so that the number of K0 and K1 elements is equal. If the number of K0 and K1 elements is equal, it is also possible to provide feedback from each K1 element to a corresponding K0 element, thus positively driving the K0 element in the presence of a HOLD signal. [0067]
  • FIG. 11 illustrates this concept at a high level. FIG. 11 is the circuit of FIG. 8, in which the graphical arrangement of the memory elements has been changed for clarity of illustration, and feedback has been added. In FIG. 11, the normal operating mode has been superimposed upon the scan mode. For clarity, the scan mode connections are represented as double lines, to distinguish them from normal operating mode. In HOLD mode, the K0 and K1 elements take the same clock phase as they do in Scan mode, so that there is an equal number of K0 and K1 elements. A respective feedback line [0068] 1101-1105 is provided to each K0 element 801, 802, 804, 803, 821 in HOLD mode, the feedback lines receiving input from respective K1 elements 805, 823, 822, 824, 825, as shown.
  • FIG. 12 illustrates one embodiment of the high-level circuit of FIG. 11. FIG. 12 is essentially the circuit of FIG. 10B, to which feedback paths [0069] 1101-1105 have been added. As shown, each feedback path 1101-1105 is gated by the HOLD signal at a gated inverter. If HOLD is active, the value of the K1 element 805, 823, 822, 824, 825 will be fed back to the corresponding K0 element 801, 802, 804, 803, 821; no other conditions need be satisfied.
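  • The feedback pairing of FIGS. 11 and 12 can likewise be captured behaviorally. In the hedged Python sketch below the K1-to-K0 pairing is taken from the figures; the dictionary and function names are illustrative. When HOLD is active, each K1 element positively drives its paired K0 element, with no other condition required.

    # HOLD-mode feedback paths 1101-1105: K1 element -> paired K0 element (per FIGS. 11 and 12).
    HOLD_FEEDBACK = {805: 801, 823: 802, 822: 804, 824: 803, 825: 821}

    def apply_hold(state, hold):
        """state maps cell numbers to stored bits; refresh each K0 cell from its K1 (illustrative)."""
        if hold:
            for k1_cell, k0_cell in HOLD_FEEDBACK.items():
                state[k0_cell] = state[k1_cell]  # gated by the HOLD signal only
        return state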
  • Various circuit embodiments have been shown in the figures, but it will be understood that there are a large number of possible permutations within the spirit and scope of the present invention. For example, various combinations of transmission gates, gated drivers, or other logic may be used to control and drive the input signals to the latch, of which those illustrated are only a sample. A multithreaded latch circuit may be constructed with or without scan path support, and a scan path may be different from that shown in the examples herein. A HOLD feedback loop may exist independently or in conjunction with scan logic. [0070]
  • Although a specific embodiment of the invention has been disclosed along with certain alternatives, it will be recognized by those skilled in the art that additional variations in form and detail may be made within the scope of the following claims:[0071]

Claims (18)

What is claimed is:
1. A master-slave latch circuit for use in a multithreaded processor, comprising:
a plurality of master elements, each master element storing a state corresponding to a respective thread supported by said multithreaded processor;
selection logic coupled to the outputs of said master elements, said selection logic selecting a single one of said outputs of said master elements in response to a thread designation input; and
a common slave element coupled to said selection logic, said common slave element receiving as input and storing the selected single one of said outputs of said master elements.
2. The master-slave latch circuit for use in a multithreaded processor of claim 1, wherein each said master element stores a state corresponding to a respective thread at times determined by a first phase of a common clock signal, and wherein said common slave element stores said selected single one of said outputs of said master elements at times determined by a second phase of said common clock signal, said second phase being different from said first phase.
3. The master-slave latch circuit for use in a multithreaded processor of claim 1, wherein said thread designation input designates the currently active thread of said multithreaded processor.
4. The master-slave latch circuit for use in a multithreaded processor of claim 1, wherein said plurality of master elements function as master elements during a first mode of operation, and wherein at least one of said master elements functions alternatively as a slave element during a second mode of operation.
5. The master-slave latch circuit for use in a multithreaded processor of claim 1, wherein each said master element and said common slave element comprise a respective pair of inverters, the output of a first inverter of each pair driving the input of a second inverter of each pair, and the output of the second inverter driving the input of the first inverter.
6. A multithreaded processor supporting concurrent processing of a plurality of threads, comprising:
at least one register for storing data for each of said plurality of threads, said register comprising a plurality of master-slave latch circuits, each master-slave latch circuit comprising:
(a) a plurality of master elements, each master element storing a state corresponding to a respective thread supported by said multithreaded processor;
(b) selection logic coupled to the outputs of said master elements, said selection logic selecting a single one of said outputs of said master elements in response to a thread designation input; and
(c) a common slave element coupled to said selection logic, said common slave element receiving as input and storing the selected single one of said outputs of said master elements.
7. The multithreaded processor of claim 6, wherein each said master element stores a state corresponding to a respective thread at times determined by a first phase of a common clock signal, and wherein said common slave element stores said selected single one of said outputs of said master elements at times determined by a second phase of said common clock signal, said second phase being different from said first phase.
8. The multithreaded processor of claim 6, wherein said thread designation input designates the currently active thread of said multithreaded processor.
9. The multithreaded processor of claim 6, wherein said processor supports first and second modes of operation, and wherein said plurality of master elements function as master elements during said first mode of operation, and wherein at least one of said master elements functions alternatively as a slave element during said second mode of operation.
10. The multithreaded processor of claim 6, wherein each said master element and said common slave element comprise a respective pair of inverters, the output of a first inverter of each pair driving the input of a second inverter of each pair, and the output of the second inverter driving the input of the first inverter.
11. The multithreaded processor of claim 6, wherein said multithreaded processor supports fine-grained multithreading.
12. The multithreaded processor of claim 6, wherein said multithreaded processor supports coarse-grained multithreading.
13. A computer system for supporting hardware multithreading, comprising:
a memory for storing instructions and data for a plurality of threads;
at least one multithreaded processor communicating with said memory and supporting concurrent processing of a plurality of threads, said processor having at least one register for storing data for each of said plurality of threads, said register comprising a plurality of master-slave latch circuits, each master-slave latch circuit comprising:
(a) a plurality of master elements, each master element storing a state corresponding to a respective thread supported by said multithreaded processor;
(b) selection logic coupled to the outputs of said master elements, said selection logic selecting a single one of said outputs of said master elements in response to a thread designation input; and
(c) a common slave element coupled to said selection logic, said common slave element receiving as input and storing the selected single one of said outputs of said master elements.
14. The computer system of claim 13, wherein each said master element stores a state corresponding to a respective thread at times determined by a first phase of a common clock signal, and wherein said common slave element stores said selected single one of said outputs of said master elements at times determined by a second phase of said common clock signal, said second phase being different from said first phase.
15. The computer system of claim 13, wherein said thread designation input designates the currently active thread of said multithreaded processor.
16. The computer system of claim 13, wherein said multithreaded processor supports coarse-grained multithreading.
17. A multi-stage latch circuit for use in a multithreaded processor, comprising:
a first stage having a plurality of first stage memory elements, each first stage memory element corresponding to a respective thread supported by said multithreaded processor, each said first stage memory element storing a state corresponding to a respective input at times determined by a first phase of a clock signal;
selection logic coupled to the outputs of said first stage memory elements, said selection logic selecting a single one of said outputs of said first stage memory elements in response to a thread designation input; and
a common second stage memory element coupled to said selection logic and receiving as input the selected single one of said outputs of said first stage memory elements, said common second stage memory element storing a state corresponding to said selected single one of said outputs of said first stage memory elements at times determined by a second phase of said clock signal, said second phase being different from said first phase.
18. The multi-stage latch circuit of claim 17, wherein said thread designation input designates the currently active thread of said multithreaded processor.
US10/459,646 1999-11-12 2003-06-10 Master-slave latch circuit for multithreaded processing Abandoned US20030200424A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/459,646 US20030200424A1 (en) 1999-11-12 2003-06-10 Master-slave latch circuit for multithreaded processing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/439,581 US6629236B1 (en) 1999-11-12 1999-11-12 Master-slave latch circuit for multithreaded processing
US10/459,646 US20030200424A1 (en) 1999-11-12 2003-06-10 Master-slave latch circuit for multithreaded processing

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/439,581 Continuation US6629236B1 (en) 1999-11-12 1999-11-12 Master-slave latch circuit for multithreaded processing

Publications (1)

Publication Number Publication Date
US20030200424A1 true US20030200424A1 (en) 2003-10-23

Family

ID=28454960

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/439,581 Expired - Lifetime US6629236B1 (en) 1999-11-12 1999-11-12 Master-slave latch circuit for multithreaded processing
US10/459,646 Abandoned US20030200424A1 (en) 1999-11-12 2003-06-10 Master-slave latch circuit for multithreaded processing

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/439,581 Expired - Lifetime US6629236B1 (en) 1999-11-12 1999-11-12 Master-slave latch circuit for multithreaded processing

Country Status (1)

Country Link
US (2) US6629236B1 (en)

Families Citing this family (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6983350B1 (en) 1999-08-31 2006-01-03 Intel Corporation SDRAM controller for parallel processor architecture
US6629236B1 (en) * 1999-11-12 2003-09-30 International Business Machines Corporation Master-slave latch circuit for multithreaded processing
TW477949B (en) * 1999-12-20 2002-03-01 Winbond Electronics Corp Data processing system
US6532509B1 (en) 1999-12-22 2003-03-11 Intel Corporation Arbitrating command requests in a parallel multi-threaded processing system
US6694380B1 (en) 1999-12-27 2004-02-17 Intel Corporation Mapping requests from a processing unit that uses memory-mapped input-output space
US6661794B1 (en) 1999-12-29 2003-12-09 Intel Corporation Method and apparatus for gigabit packet assignment for multithreaded packet processing
US7480706B1 (en) * 1999-12-30 2009-01-20 Intel Corporation Multi-threaded round-robin receive for fast network port
US6952824B1 (en) * 1999-12-30 2005-10-04 Intel Corporation Multi-threaded sequenced receive for fast network port stream of packets
US6857062B2 (en) * 2001-11-30 2005-02-15 Intel Corporation Broadcast state renaming in a microprocessor
US6981083B2 (en) * 2002-12-05 2005-12-27 International Business Machines Corporation Processor virtualization mechanism via an enhanced restoration of hard architected states
US6850093B1 (en) * 2003-07-28 2005-02-01 Hewlett-Packard Development Company, L.P. Circuit and method for improving noise tolerance in multi-threaded memory circuits
US7653904B2 (en) * 2003-09-26 2010-01-26 Intel Corporation System for forming a critical update loop to continuously reload active thread state from a register storing thread state until another active thread is detected
US7941642B1 (en) 2004-06-30 2011-05-10 Oracle America, Inc. Method for selecting between divide instructions associated with respective threads in a multi-threaded processor
US7676655B2 (en) * 2004-06-30 2010-03-09 Sun Microsystems, Inc. Single bit control of threads in a multithreaded multicore processor
US8095778B1 (en) 2004-06-30 2012-01-10 Open Computing Trust I & II Method and system for sharing functional units of a multithreaded processor
US7216216B1 (en) 2004-06-30 2007-05-08 Sun Microsystems, Inc. Register window management using first pipeline to change current window and second pipeline to read operand from old window and write operand to new window
US7434000B1 (en) 2004-06-30 2008-10-07 Sun Microsystems, Inc. Handling duplicate cache misses in a multithreaded/multi-core processor
US7178005B1 (en) 2004-06-30 2007-02-13 Sun Microsystems, Inc. Efficient implementation of timers in a multithreaded processor
US8225034B1 (en) 2004-06-30 2012-07-17 Oracle America, Inc. Hybrid instruction buffer
US7426630B1 (en) 2004-06-30 2008-09-16 Sun Microsystems, Inc. Arbitration of window swap operations
US7330988B2 (en) * 2004-06-30 2008-02-12 Sun Microsystems, Inc. Method and apparatus for power throttling in a multi-thread processor
US7437538B1 (en) 2004-06-30 2008-10-14 Sun Microsystems, Inc. Apparatus and method for reducing execution latency of floating point operations having special case operands
US7861063B1 (en) 2004-06-30 2010-12-28 Oracle America, Inc. Delay slot handling in a processor
US7185178B1 (en) 2004-06-30 2007-02-27 Sun Microsystems, Inc. Fetch speculation in a multithreaded processor
US7373489B1 (en) 2004-06-30 2008-05-13 Sun Microsystems, Inc. Apparatus and method for floating-point exception prediction and recovery
US7747771B1 (en) 2004-06-30 2010-06-29 Oracle America, Inc. Register access protocol in a multihreaded multi-core processor
US7353364B1 (en) 2004-06-30 2008-04-01 Sun Microsystems, Inc. Apparatus and method for sharing a functional unit execution resource among a plurality of functional units
US7370243B1 (en) 2004-06-30 2008-05-06 Sun Microsystems, Inc. Precise error handling in a fine grain multithreaded multicore processor
US7890734B2 (en) * 2004-06-30 2011-02-15 Open Computing Trust I & II Mechanism for selecting instructions for execution in a multithreaded processor
US7774393B1 (en) 2004-06-30 2010-08-10 Oracle America, Inc. Apparatus and method for integer to floating-point format conversion
US7383403B1 (en) 2004-06-30 2008-06-03 Sun Microsystems, Inc. Concurrent bypass to instruction buffers in a fine grain multithreaded processor
US7523330B2 (en) * 2004-06-30 2009-04-21 Sun Microsystems, Inc. Thread-based clock enabling in a multi-threaded processor
US7702887B1 (en) 2004-06-30 2010-04-20 Sun Microsystems, Inc. Performance instrumentation in a fine grain multithreaded multicore processor
US7343474B1 (en) 2004-06-30 2008-03-11 Sun Microsystems, Inc. Minimal address state in a fine grain multithreaded processor
US7533248B1 (en) 2004-06-30 2009-05-12 Sun Microsystems, Inc. Multithreaded processor including a functional unit shared between multiple requestors and arbitration therefor
US7401206B2 (en) * 2004-06-30 2008-07-15 Sun Microsystems, Inc. Apparatus and method for fine-grained multithreading in a multipipelined processor core
US7478225B1 (en) 2004-06-30 2009-01-13 Sun Microsystems, Inc. Apparatus and method to support pipelining of differing-latency instructions in a multithreaded processor
KR100568545B1 (en) * 2004-10-05 2006-04-07 삼성전자주식회사 Signal driving circuit
US8037250B1 (en) 2004-12-09 2011-10-11 Oracle America, Inc. Arbitrating cache misses in a multithreaded/multi-core processor
US7475224B2 (en) * 2007-01-03 2009-01-06 International Business Machines Corporation Register map unit supporting mapping of multiple register specifier classes
US9250899B2 (en) 2007-06-13 2016-02-02 International Business Machines Corporation Method and apparatus for spatial register partitioning with a multi-bit cell register file
US8812824B2 (en) 2007-06-13 2014-08-19 International Business Machines Corporation Method and apparatus for employing multi-bit register file cells and SMT thread groups
US20100115494A1 (en) * 2008-11-03 2010-05-06 Gorton Jr Richard C System for dynamic program profiling
US8024719B2 (en) 2008-11-03 2011-09-20 Advanced Micro Devices, Inc. Bounded hash table sorting in a dynamic program profiling system
US8478948B2 (en) * 2008-12-04 2013-07-02 Oracle America, Inc. Method and system for efficient tracing and profiling of memory accesses during program execution
US8723548B2 (en) * 2012-03-06 2014-05-13 Broadcom Corporation Hysteresis-based latch design for improved soft error rate with low area/performance overhead
US10706101B2 (en) 2016-04-14 2020-07-07 Advanced Micro Devices, Inc. Bucketized hash tables with remap entries

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4276488A (en) * 1978-11-13 1981-06-30 Hughes Aircraft Company Multi-master single-slave ECL flip-flop
US5345588A (en) * 1989-09-08 1994-09-06 Digital Equipment Corporation Thread private memory storage of multi-thread digital data processors using access descriptors for uniquely identifying copies of data created on an as-needed basis
US5353418A (en) * 1989-05-26 1994-10-04 Massachusetts Institute Of Technology System storing thread descriptor identifying one of plural threads of computation in storage only when all data for operating on thread is ready and independently of resultant imperative processing of thread
US5421014A (en) * 1990-07-13 1995-05-30 I-Tech Corporation Method for controlling multi-thread operations issued by an initiator-type device to one or more target-type peripheral devices
US5499349A (en) * 1989-05-26 1996-03-12 Massachusetts Institute Of Technology Pipelined processor with fork, join, and start instructions using tokens to indicate the next instruction for each of multiple threads of execution
US5778243A (en) * 1996-07-03 1998-07-07 International Business Machines Corporation Multi-threaded cell for a memory
US5799188A (en) * 1995-12-15 1998-08-25 International Business Machines Corporation System and method for managing variable weight thread contexts in a multithreaded computer system
US5809554A (en) * 1994-11-18 1998-09-15 International Business Machines Corp. User control of multiple memory heaps
US5907702A (en) * 1997-03-28 1999-05-25 International Business Machines Corporation Method and apparatus for decreasing thread switch latency in a multithread processor
US6341347B1 (en) * 1999-05-11 2002-01-22 Sun Microsystems, Inc. Thread switch logic in a multiple-thread processor
US6629236B1 (en) * 1999-11-12 2003-09-30 International Business Machines Corporation Master-slave latch circuit for multithreaded processing

Also Published As

Publication number Publication date
US6629236B1 (en) 2003-09-30

Similar Documents

Publication Publication Date Title
US6629236B1 (en) Master-slave latch circuit for multithreaded processing
JP3562552B2 (en) Multi-threaded cell for memory
KR100647526B1 (en) Zero overhead computer interrupts with task switching
US7124318B2 (en) Multiple parallel pipeline processor having self-repairing capability
US6212544B1 (en) Altering thread priorities in a multithreaded processor
US5568380A (en) Shadow register file for instruction rollback
US5301340A (en) IC chips including ALUs and identical register files whereby a number of ALUs directly and concurrently write results to every register file per cycle
US6567839B1 (en) Thread switch control in a multithreaded processor system
US6061710A (en) Multithreaded processor incorporating a thread latch register for interrupt service new pending threads
US6697935B1 (en) Method and apparatus for selecting thread switch events in a multithreaded processor
US6105051A (en) Apparatus and method to guarantee forward progress in execution of threads in a multithreaded processor
US5339268A (en) Content addressable memory cell and content addressable memory circuit for implementing a least recently used algorithm
US5644780A (en) Multiple port high speed register file with interleaved write ports for use with very long instruction word (vlin) and n-way superscaler processors
US7117389B2 (en) Multiple processor core device having shareable functional units for self-repairing capability
US7743237B2 (en) Register file bit and method for fast context switch
US20060294344A1 (en) Computer processor pipeline with shadow registers for context switching, and method
US7143271B2 (en) Automatic register backup/restore system and method
US7120915B1 (en) Thread switch circuit design and signal encoding for vertical threading
EP0680051B1 (en) Testable memory array
JPH06214871A (en) Dual-port electronic data storage system and electronic data storage system as well as simultaneous access method
Morton et al. ECSTAC: A fast asynchronous microprocessor

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION