US20080010393A1 - Adaptive thread id cache mechanism for autonomic performance tuning - Google Patents

Adaptive thread id cache mechanism for autonomic performance tuning

Info

Publication number
US20080010393A1
Authority
US
United States
Prior art keywords
indicator
cache
thread
index
selecting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/775,325
Inventor
David Luick
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp
Priority to US11/775,325
Publication of US20080010393A1
Legal status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F 9/38 Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F 9/3836 Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution
    • G06F 9/3851 Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution from multiple instruction streams, e.g. multistreaming
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0806 Multiuser, multiprocessor or multiprocessing cache systems
    • G06F 12/0842 Multiuser, multiprocessor or multiprocessing cache systems for multiprocessing or multitasking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0844 Multiple simultaneous or quasi-simultaneous cache accessing
    • G06F 12/0846 Cache with multiple tag or data arrays being simultaneously accessible
    • G06F 12/0848 Partitioned cache, e.g. separate instruction and operand caches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F 9/38 Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F 9/3824 Operand accessing

Abstract

An apparatus and method for inhibiting data cache thrashing in a multi-threading execution mode through simulating a higher level of associativity in a data cache. The apparatus temporarily splits the data cache into multiple regions, and each region is selected according to a thread ID indicator in an instruction register. The data cache is split when the apparatus is in the multi-threading execution mode, as indicated by an enable cache split bit.

Description

    CROSS-REFERENCE TO RELATED APPLICATION(S)
  • This application is a continuation of, and claims the benefit of, U.S. patent application Ser. No. 10/670,717, filed Sep. 25, 2003, the entirety of which is hereby incorporated herein by reference. In the parent case, Ser. No. 10/670,717, claims 1-2, 4-6 and 10 were rejected under 35 U.S.C. 102(b) as anticipated by US Patent Application Publication No. 2006/0195683 A1 filed by Kissell. Applicant disagrees on the grounds that the present invention was invented prior to the invention of the Kissell reference, as evidenced by the Affidavit pursuant to 37 C.F.R., section 1.131, filed with respect to U.S. patent application Ser. No. 10/670,717. For this reason, applicant believes he is entitled to the broader claims now presented. Applicant rescinds any disclaimer in the parent application that may have resulted from the amendment to the claims of the parent application that led to the allowance of the claims therein, and requests that the examiner reconsider the claims now presented in view of the Kissell reference cited in the parent application.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention generally relates to computing devices; more specifically, it relates to a processor architecture having an improved caching system.
  • 2. Description of the Related Art
  • Memory access is essential in any computer system and substantially affects its performance. Many advances have been made to improve memory access; among them is the use of cache memory, which keeps the data most likely to be accessed next in fast memory closely coupled to, and typically located on, the main processor.
  • Cache memory improves a computer's performance when desired data are found in the cache, but the cache cannot hold all the data needed by a particular application. When a cache miss occurs, i.e., needed data are not found in the cache memory, the data must be brought in from another, slower memory, and existing data must be evicted from the cache to make room.
  • Cache misses increase especially when a computer executes in a simultaneous multi-threading mode. In a multi-threading mode, multiple applications access memory simultaneously, and a cache miss by one application may thrash the cache for a second application by evicting data the second application needs, thus causing a cache miss for the second application.
  • As the size of cache memory increases, each cache memory access yields more than one set of data. For example, in a 32 KB cache memory, each access retrieves two pieces of data. After the two pieces of data are retrieved from the cache memory, additional steps must be taken to select one of them for the application's use, adding delay to the data access. This delay grows worse as the number of pieces of data retrieved simultaneously increases.
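  • To make the arithmetic above concrete, the following is a minimal C sketch with assumed parameters the patent does not specify (64-byte lines, 2-way set associativity); it only illustrates why a single access produces two candidates that must then be selected between.

    #include <stdio.h>

    /* Illustrative geometry only; the patent gives the 32 KB size but
     * not the line size or associativity, which are assumed here. */
    int main(void) {
        const int cache_bytes = 32 * 1024;
        const int line_bytes  = 64;   /* assumed */
        const int ways        = 2;    /* assumed: 2-way set associative */

        int lines = cache_bytes / line_bytes;  /* 512 lines total */
        int sets  = lines / ways;              /* 256 sets */

        /* Each access reads one line per way; the "additional step" in
         * the text is choosing among these candidates after the read. */
        printf("%d lines in %d sets; %d candidates per access\n",
               lines, sets, ways);
        return 0;
    }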
  • SUMMARY OF THE INVENTION
  • The invention introduces a way to inhibit data cache thrashing during a multi-threading execution mode through simulating a higher level of associativity in a data cache. An apparatus according to the invention includes at least one instruction register having a thread ID indicator, an address generator having a cache index indicator and a plurality of cache index bits, a cache memory, and a selector for selecting between the thread ID indicator and the cache index indicator. The selector outputs an upper index indicator. When the thread ID indicator is selected by the selector, the thread ID indicator is output to the upper index indicator, and the upper index indicator is concatenated with the plurality of cache index bits to form an address for retrieving an entry from the cache memory.
  • In another aspect, the invention is a method for inhibiting data cache thrashing in a multi-threading execution mode through simulating a higher level of associativity in a data cache. The method includes the steps of loading at least one instruction register having a thread ID indicator, generating an effective address having a cache index indicator and a plurality of cache index bits, selecting an upper index indictor between the thread ID indicator and the cache index indicator, forming an address by concatenating the upper index indicator with the plurality of cache index bits, and retrieving an entry from the cache memory indicated by the address.
  • Other objects, advantages, and features of the present invention will become apparent after review of the Brief Description of the Drawings, the Detailed Description of the Invention, and the Claims set forth hereinafter.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIGS. 1A-B illustrate a prior art architecture for a cache access.
  • FIG. 2 illustrates architecture for a cache access according to the invention.
  • FIG. 3 illustrates an alternative architecture for a cache access.
  • DETAILED DESCRIPTION OF THE INVENTION
  • In this description, like numerals refer to like elements throughout the several views. The invention temporarily divides a cache to inhibit thrashing and to simulate the performance of higher level associativity. The cache is divided using thread ID bits without requiring extra select time.
  • Generally, a level-one (L1) cache cannot implement high set associativity to support a simultaneous multi-threading execution mode without significant costs in area, power, and access latency, as well as significant redesign resources and schedule. FIG. 1A illustrates the problem with an L1 cache. An address generator 105 generates an effective address 106 using information from two registers, RA 102 and RB 104. The effective address 106 is fed to a cache memory 107 having two sets of data, 108 and 110. The effective address is also connected to two comparators 114 and 116 for use in a 2-way late selecting unit 118, which selects which data to output to a cache data bus.
  • The late selecting unit becomes more complex when more data are needed from a single cache memory access. FIG. 1B illustrates an example in which eight pieces of data 122 are retrieved from the cache memory, requiring eight comparators and an 8-way late selecting unit 120.
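  • The growth in late-select hardware can be sketched in C. This is not the patent's circuit, just an assumed software model of FIGS. 1A-B: each of the N ways reads out a candidate in parallel, and one tag comparator per way gates the winner onto the data bus, so doubling the ways doubles the comparators and widens the final mux.

    #include <stdint.h>
    #include <stdio.h>

    #define WAYS 8   /* FIG. 1B case; FIG. 1A is the WAYS == 2 case */

    struct way_entry {
        uint32_t tag;    /* address tag stored with the line */
        uint32_t data;   /* the cached data itself */
    };

    /* Late select: all WAYS candidates arrive at once; one comparator
     * per way (114, 116, ... in the figures) matches the address tag,
     * and a WAYS-to-1 mux drives the winning data onto the cache data
     * bus. Returns nonzero on a hit. */
    static int late_select(const struct way_entry set[WAYS],
                           uint32_t addr_tag, uint32_t *data_out) {
        for (int way = 0; way < WAYS; way++) {   /* WAYS comparators */
            if (set[way].tag == addr_tag) {
                *data_out = set[way].data;       /* final mux */
                return 1;
            }
        }
        return 0;                                /* cache miss */
    }

    int main(void) {
        struct way_entry set[WAYS] = { { 0x12, 0xAAAA }, { 0x34, 0xBBBB } };
        uint32_t data;
        if (late_select(set, 0x34, &data))
            printf("hit: data=%04X among %d candidates\n",
                   (unsigned)data, WAYS);
        return 0;
    }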
  • FIG. 2 illustrates an architecture 200 according to the invention. A thread ID indicator 222 is added to an instruction register 220. Although only one instruction register 220 is shown, there can be more than one instruction register when the system is in the multi-thread execution mode. The thread ID indicator can be one or more bits, depending on how the cache memory is used during the multi-thread execution mode. This thread ID indicator 222 and bit 0 of an effective address from the address generator 205 are connected to a selector 224. Bit 0 of the effective address, shown as element 208, is also known as the cache index indicator. The remaining bits of the effective address are the cache index bits 210. The selector 224 is controlled by a bit, the enable cache split indicator 234, from a machine state register (MSR) 232, and the selector 224 selectively allows either the thread ID indicator or cache index bit 0 to be connected to its output. This output (the upper index indicator) is concatenated with the cache index bits 210 from the effective address to form an index into the data cache 207.
  • If the system is in a multi-thread execution mode and the operating system, or hardware, is aware of an application that may cause thrashing, the enable cache split indicator 234 is set, which in turn directs the selector 224 to connect the thread ID indicator 222 to the selector's output. An application may cause thrashing if it involves technical streaming, i.e., a loop operation involving heavy computation, and this may be indicated by a streaming bit set by the operating system. The thread ID indicator 222 divides the cache 207 into two halves, an upper half 228 and a lower half 230. The index formed by the thread ID indicator 222 and the rest of the effective address 210 retrieves a set of data from either the upper half 228 or the lower half 230. The 2-way late selecting unit 218 then selects data from either cache set 0 or cache set 1 to be output onto the cache data bus.
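  • A minimal C sketch of the selector and index formation just described. Field widths and names are assumptions (the patent does not fix the index width), and the cache index indicator, the patent's "bit 0" in IBM bit numbering, is modeled as the most significant bit of the normal index field; the demo shows that with the split enabled, the same effective address lands in a different cache half on each thread, so one thread's misses cannot evict the other thread's lines.

    #include <stdint.h>
    #include <stdio.h>

    #define LOWER_INDEX_BITS 8   /* assumed width of the cache index bits 210 */

    /* Selector 224: with the enable cache split indicator 234 set, the
     * upper index indicator is the thread ID 222; otherwise it is the
     * cache index indicator 208. Either way it is concatenated with
     * the cache index bits 210 to index the data cache 207. */
    static uint32_t form_index(uint32_t effective_addr, uint32_t thread_id,
                               int enable_cache_split) {
        uint32_t index_bits = effective_addr & ((1u << LOWER_INDEX_BITS) - 1);
        uint32_t index_indicator = (effective_addr >> LOWER_INDEX_BITS) & 1;

        uint32_t upper_index = enable_cache_split ? (thread_id & 1)
                                                  : index_indicator;
        return (upper_index << LOWER_INDEX_BITS) | index_bits;
    }

    int main(void) {
        uint32_t ea = 0x1A5;   /* the same effective address on both threads */
        printf("split off: t0=%03X t1=%03X\n",
               (unsigned)form_index(ea, 0, 0), (unsigned)form_index(ea, 1, 0));
        printf("split on : t0=%03X t1=%03X\n",
               (unsigned)form_index(ea, 0, 1), (unsigned)form_index(ea, 1, 1));
        return 0;
    }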
  • The enable cache split bit 234 is set by the operating system when the cache 207 is divided to support the multi-thread execution mode. By setting the enable cache split bit 234 and using the thread ID indicator 222, the 2-way late selecting unit 218 can be kept simple, with minimal delay.
  • The embodiment shown in FIG. 2 minimizes cache thrashing, in which a cache miss from a first application causes data needed by a second application to be discarded. By setting the enable cache split bit, dividing the cache into different regions, and associating these regions with different applications, thrashing is minimized without incurring additional delay in the late selecting unit.
  • FIG. 3 illustrates an alternative embodiment 300 for dynamically splitting the cache. A system may be in a cache thrashing situation if there is a substantial number of cache misses. For a system with two applications running, two cache miss counters 318 and 320 can be set up, one for each application. If application 1 has a cache miss, counter 320 is incremented; if application 0 has a cache miss, counter 318 is incremented. The cache miss counters are compared with a reference counter 322. Each cache miss counter 318, 320 is reset when a new application is started.
  • Instructions for each application are loaded into one instruction register, and for a system that supports two simultaneous applications, two instruction registers 302, 304 are used. The two instruction registers 302, 304 are identical. The instruction register 304 has a stream ID indicator 308 to indicate that the application is in stream mode, i.e., in a computational loop. The stream ID indicator 308 is set by hardware or by the operating system. The instruction register 304 also has a valid bit 310 that is normally set to indicate that the instruction in the instruction buffer is valid. The valid bit 310 is cleared by hardware if there is a branch condition or a cache miss.
  • If either instruction register has the stream ID indicator 308 set and the valid bit 310 cleared, either cache miss counter exceeds the reference counter 322, and the enable cache split bit 342 is also set, then the 2-way late selecting unit 336 selects the thread ID indicator 334. The thread ID indicator 334 is taken from the instruction register currently accessing the cache memory.
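  • The FIG. 3 trigger can be summarized as a predicate over the state just described. This is an interpretive sketch: the grouping of conditions and the threshold comparison follow the text above, while the types, names, and example values are assumptions.

    #include <stdint.h>
    #include <stdio.h>

    struct thread_state {
        int      stream_id;  /* stream ID indicator 308: set when looping */
        int      valid;      /* valid bit 310 */
        uint32_t misses;     /* cache miss counter 318 or 320 */
    };

    /* Dynamic split decision: choose the thread ID as the upper index
     * when (a) either thread is streaming with its valid bit cleared,
     * (b) either miss counter exceeds the reference counter 322, and
     * (c) the enable cache split bit 342 is set. */
    static int select_thread_id(const struct thread_state t[2],
                                uint32_t reference_count,
                                int enable_cache_split) {
        int streaming = (t[0].stream_id && !t[0].valid) ||
                        (t[1].stream_id && !t[1].valid);
        int thrashing = t[0].misses > reference_count ||
                        t[1].misses > reference_count;
        return streaming && thrashing && enable_cache_split;
    }

    int main(void) {
        /* thread 0 is looping, invalidated, and missing heavily */
        struct thread_state t[2] = { { 1, 0, 900 }, { 0, 1, 12 } };
        printf("split by thread ID: %d\n", select_thread_id(t, 512, 1));
        return 0;
    }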
  • Although the invention is described in scenarios supporting one or two threads, the invention can be easily implemented to support more threads without departing from the spirit of the invention.
  • In the context of the invention, the method may be implemented, for example, by operating portion(s) of a computing device to execute a sequence of machine-readable instructions. The instructions can reside in various types of signal-bearing or data storage media. The media may comprise, for example, RAM, registers, or other memory components of the processor.
  • While the invention has been particularly shown and described with reference to a preferred embodiment thereof, it will be understood by those skilled in the art that various changes in form and detail may be made without departing from the spirit and scope of the present invention as set forth in the following claims. Furthermore, although elements of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.

Claims (10)

1. An apparatus for inhibiting data cache thrashing in a multi-threading execution mode through simulating a higher level of associativity in a data cache, comprising:
at least one instruction register, the at least one instruction register having a thread ID indicator;
an address generator having a cache index indicator and a plurality of cache index bits;
a cache memory; and
a selector for selecting between the thread ID indicator and the cache index indicator, the selector outputting an upper index indicator,
wherein when the thread ID indicator is selected by the selector, the thread ID indicator is output to the upper index indicator, and the upper index indicator is concatenated with the plurality of cache index bits to form an address for retrieving an entry from the cache memory.
2. The apparatus of claim 1, further comprising a machine state register, the machine state register having an enable cache split indicator that, at least, controls the selector.
3. The apparatus of claim 1, further comprising at least one cache miss counter, the at least one cache miss counter counting cache misses, wherein the at least one cache miss counter controls the selector.
4. The apparatus of claim 1, wherein each thread ID indicator further comprises a plurality of bits.
5. The apparatus of claim 1, wherein each thread ID indicator further comprises a single bit.
6. A method for inhibiting data cache thrashing in a multi-threading execution mode through simulating a higher level of associativity in a data cache, comprising the steps of:
loading at least one instruction register, the at least one instruction register having a thread ID indicator;
generating an effective address having a cache index indicator and a plurality of cache index bits;
selecting an upper index indicator between the thread ID indicator and the cache index indicator;
forming an address by concatenating the upper index indicator with the plurality of cache index bits; and
retrieving an entry from the cache memory indicated by the address.
7. The method of claim 6, wherein the step of selecting an upper index indicator further comprises the steps of:
checking an enable cache indicator in a machine state register;
if the enable cache indicator is set, selecting the thread ID bit; and
if the enable cache indicator is not set, selecting the cache index indicator.
8. The method of claim 6, wherein the step of selecting an upper index indicator further comprises the steps of:
counting cache misses;
if the number of cache misses exceeds a predefined limit, selecting the thread ID bit; and
if the number of cache misses is less than the predefined limit, selecting the cache index indicator.
9. The method of claim 6, wherein each instruction register further comprises a stream ID indicator, wherein the step of selecting an upper index indicator further comprises the steps of:
checking a valid indicator and the stream ID indicator of the at least one instruction register; and
if at least one instruction register has both the valid indicator and the stream ID indicator set, selecting the thread ID bit.
10. An apparatus for inhibiting data cache thrashing in a multi-threading execution mode through simulating a higher level of associativity in a data cache, comprising:
a first means for storing instructions having a thread ID indicator;
a second means for generating addresses having a cache index indicator and a plurality of cache index bits;
a third means for storing data; and
a fourth means for selecting between the thread ID indicator and the cache index indicator, the fourth means having an upper index indicator,
wherein when the thread ID indicator is set and selected by the fourth means, the thread ID indicator is connected to the upper index indicator, and the upper index indicator is concatenated with the plurality of cache index bits to form an address to retrieve an entry from the third means.
US11/775,325 2003-09-25 2007-07-10 Adaptive thread id cache mechanism for autonomic performance tuning Abandoned US20080010393A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/775,325 US20080010393A1 (en) 2003-09-25 2007-07-10 Adaptive thread id cache mechanism for autonomic performance tuning

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/670,717 US7302524B2 (en) 2003-09-25 2003-09-25 Adaptive thread ID cache mechanism for autonomic performance tuning
US11/775,325 US20080010393A1 (en) 2003-09-25 2007-07-10 Adaptive thread id cache mechanism for autonomic performance tuning

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/670,717 Continuation US7302524B2 (en) 2003-09-25 2003-09-25 Adaptive thread ID cache mechanism for autonomic performance tuning

Publications (1)

Publication Number Publication Date
US20080010393A1 true US20080010393A1 (en) 2008-01-10

Family

ID=34375990

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/670,717 Expired - Fee Related US7302524B2 (en) 2003-09-25 2003-09-25 Adaptive thread ID cache mechanism for autonomic performance tuning
US11/775,325 Abandoned US20080010393A1 (en) 2003-09-25 2007-07-10 Adaptive thread id cache mechanism for autonomic performance tuning

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/670,717 Expired - Fee Related US7302524B2 (en) 2003-09-25 2003-09-25 Adaptive thread ID cache mechanism for autonomic performance tuning

Country Status (1)

Country Link
US (2) US7302524B2 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7747820B2 (en) * 2007-06-15 2010-06-29 Microsoft Corporation Managing working set use of a cache via page coloring
US7996656B2 (en) * 2007-09-25 2011-08-09 Intel Corporation Attaching and virtualizing reconfigurable logic units to a processor
US9547593B2 (en) * 2011-02-28 2017-01-17 Nxp Usa, Inc. Systems and methods for reconfiguring cache memory
CN104881258A (en) * 2015-06-10 2015-09-02 北京金山安全软件有限公司 Buffer concurrent access method and device
US20170083441A1 (en) * 2015-09-23 2017-03-23 Qualcomm Incorporated Region-based cache management

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6883171B1 (en) * 1999-06-02 2005-04-19 Microsoft Corporation Dynamic address windowing on a PCI bus
US6874056B2 (en) * 2001-10-09 2005-03-29 Agere Systems Inc. Method and apparatus for reducing cache thrashing
US20040154012A1 (en) * 2003-01-31 2004-08-05 Hong Wang Safe store for speculative helper threads
US20060195683A1 (en) * 2003-08-28 2006-08-31 Mips Technologies, Inc. Symmetric multiprocessor operating system for execution on non-independent lightweight thread contexts
US20070043935A2 (en) * 2003-08-28 2007-02-22 Mips Technologies, Inc. Symmetric multiprocessor operating system for execution on non-independent lightweight thread contexts

Also Published As

Publication number Publication date
US20050071535A1 (en) 2005-03-31
US7302524B2 (en) 2007-11-27

Similar Documents

Publication Publication Date Title
US7949855B1 (en) Scheduler in multi-threaded processor prioritizing instructions passing qualification rule
JP3718319B2 (en) Hardware mechanism for optimizing instruction and data prefetching
US5649144A (en) Apparatus, systems and methods for improving data cache hit rates
US6629208B2 (en) Cache system for concurrent processes
US6401192B1 (en) Apparatus for software initiated prefetch and method therefor
EP0496439A2 (en) Computer system with multi-buffer data cache
JPH0371354A (en) Method and apparatus for processing memory access request
JPS63150731A (en) Computer system and execution thereof
US6493791B1 (en) Prioritized content addressable memory
US5214765A (en) Method and apparatus for executing floating point instructions utilizing complimentary floating point pipeline and multi-level caches
US4914582A (en) Cache tag lookaside
US7237067B2 (en) Managing a multi-way associative cache
US20080010393A1 (en) Adaptive thread id cache mechanism for autonomic performance tuning
US5257360A (en) Re-configurable block length cache
JP2009528612A (en) Data processing system and data and / or instruction prefetch method
US6799264B2 (en) Memory accelerator for ARM processor pre-fetching multiple instructions from cyclically sequential memory partitions
US7461211B2 (en) System, apparatus and method for generating nonsequential predictions to access a memory
JPH0695972A (en) Digital computer system
US7712101B2 (en) Method and apparatus for dynamic allocation of resources to executing threads in a multi-threaded processor
US9946654B2 (en) High-bandwidth prefetcher for high-bandwidth memory
JPH07210460A (en) Move-in control method for buffer storage
Ross Optimizing read convoys in main-memory query processing
GB2235554A (en) Computer system architecture
US6922767B2 (en) System for allowing only a partial value prediction field/cache size
US20040181786A1 (en) Method to partition large code across multiple e-caches

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION