US20020069325A1 - Caching method using cache data stored in dynamic ram embedded in logic chip and cache tag stored in static ram external to logic chip - Google Patents


Info

Publication number
US20020069325A1
US20020069325A1
Authority
US
United States
Prior art keywords
cache
processor
shared bus
embedded
bus
Prior art date
Legal status
Granted
Application number
US09/344,660
Other versions
US6449690B1 (en)
Inventor
Fong Pong
Gopalakrishnan Janakiraman
Current Assignee
Hewlett Packard Development Co LP
Original Assignee
Hewlett Packard Co
Priority date
Filing date
Publication date
Application filed by Hewlett Packard Co filed Critical Hewlett Packard Co
Priority to US09/344,660
Assigned to HEWLETT-PACKARD COMPANY. Assignors: JANAKIRAMAN, GOPALAKRISHNAN; PONG, FONG
Publication of US20020069325A1
Application granted
Publication of US6449690B1
Assigned to HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P. Assignors: HEWLETT-PACKARD COMPANY
Anticipated expiration
Legal status: Expired - Fee Related


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 - Addressing or allocation; Relocation
    • G06F 12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0893 - Caches characterised by their organisation or structure
    • G06F 12/0806 - Multiuser, multiprocessor or multiprocessing cache systems
    • G06F 12/084 - Multiuser, multiprocessor or multiprocessing cache systems with a shared cache
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

A caching method for using cache data stored in dynamic RAM embedded in a logic chip and cache tags stored in static RAM external to the logic chip. In general, there are at least two cache applications where this method can be employed. First, there are caches integral to a processor and interfaced to a processor pipeline. Second, there are caches external to a processor and interfaced with a shared bus.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of Invention [0001]
  • The present invention relates generally to the field of computer system memory and pertains more particularly to a caching method using cache data stored in dynamic RAM embedded in a logic chip and cache tag stored in static RAM external to the logic chip. [0002]
  • 2. Discussion of the Prior Art [0003]
  • Modern computer systems often comprise multiple forms and locations of memory. The memory subsystem is typically organized hierarchically, for example from cache memory of various levels at the top, to main memory, and finally to hard disk memory. A processor in search of data or instructions looks first in the cache memory, which is closest to the processor. If the information is not found there, then the request is passed next to the main memory and finally to the hard disk. The relative sizes and performance of the memory units are conditioned primarily by economic considerations: generally, the higher a memory unit is in the hierarchy, the higher its performance and the higher its cost. For reference purposes, the memory subsystem will be divided into “caches” and “memory.” The term memory will cover every form of memory other than caches. Information that is frequently accessed is stored in caches, and information that is less frequently accessed is stored in memory. Caches allow higher system performance because information can typically be accessed from the cache faster than from the memory. Relatively speaking, this is especially true when the memory is in the form of a hard disk. [0004]
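The lookup order described in this paragraph can be sketched as a toy model. All names and latency figures below are illustrative assumptions for exposition, not values from the patent:

```python
# Toy model of the hierarchical lookup described above: a request is
# satisfied by the highest (fastest) level that holds the address.
# Latency figures (in cycles) are purely illustrative.

LATENCY = {"cache": 1, "main_memory": 100, "disk": 10_000}

def lookup(address, cache, main_memory, disk):
    """Return (value, cost) by probing the cache, then main memory, then disk."""
    if address in cache:
        return cache[address], LATENCY["cache"]
    if address in main_memory:
        return main_memory[address], LATENCY["main_memory"]
    return disk[address], LATENCY["disk"]

cache = {0x10: "hot"}
main_memory = {0x10: "hot", 0x20: "warm"}
disk = {0x10: "hot", 0x20: "warm", 0x30: "cold"}

assert lookup(0x10, cache, main_memory, disk) == ("hot", 1)       # found in cache
assert lookup(0x30, cache, main_memory, disk) == ("cold", 10_000) # falls through to disk
```

The cost gap between the levels is what makes the placement of frequently accessed information in the cache worthwhile.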
  • A cache consists of a cache data portion and a cache tag portion. The cache data portion contains the information that is currently stored in the cache. The cache tag portion contains the addresses of the locations where the information is stored. Generally, the cache data will be larger than the cache tags. The cache data and the cache tags will not necessarily be stored together depending on the design. When a specific piece of information is requested, one or more of the cache tags are searched for the address of the requested information. Which cache tags are searched will depend on the cache design. If the address of the requested information is present in the cache tags, then the information will be available from that address in the cache data. If the address is not present, then the information may be available from memory. [0005]
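The tag/data split described above can be sketched for the simple case of a direct-mapped cache, where exactly one tag is searched per request. The class, field widths, and sizes below are illustrative assumptions, not structures from the patent:

```python
# Sketch of the tag/data split for a direct-mapped cache: the tag portion
# holds address tags, the data portion holds the cached information.
# NUM_LINES and the index/tag derivation are illustrative.

NUM_LINES = 8

class Cache:
    def __init__(self):
        self.tags = [None] * NUM_LINES   # cache tag portion
        self.data = [None] * NUM_LINES   # cache data portion

    def lookup(self, address):
        index = address % NUM_LINES      # which tag to search (direct-mapped)
        tag = address // NUM_LINES
        if self.tags[index] == tag:      # address present in the cache tags
            return self.data[index]      # information available from cache data
        return None                      # miss: information may be in memory

    def fill(self, address, value):
        index = address % NUM_LINES
        self.tags[index] = address // NUM_LINES
        self.data[index] = value

c = Cache()
c.fill(42, "line-42")
assert c.lookup(42) == "line-42"   # hit: stored tag matches
assert c.lookup(43) is None        # miss: no matching tag
```

In a set-associative design several tags would be searched per request; which tags are searched depends on the cache design, exactly as the paragraph notes.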
  • In general, there are two cache applications that will be considered. First, there are caches integral to a processor and interfaced to a processor pipeline. Second, there are caches external to a processor and interfaced with a shared bus. Caches must be designed in such a way that their latency meets the timing requirements of the requesting components such as the processor pipeline or the shared bus. For example, consider the design of the shared bus. A cache or other agent on the bus that requires a specific piece of information will issue the address of the information on the bus. This is known as the address phase. Subsequently, all caches or other agents attached to the bus must indicate whether the information at the issued address is located there. This is known as the snoop phase. Typically, the bus design specifies that the cache must supply its snoop response within a fixed time interval after the address has been issued on the bus. If the cache is not designed to satisfy this timing requirement, it will lead to sub-optimal usage of the bus thus lowering system performance. [0006]
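The address and snoop phases above can be modeled minimally. The agent names, the snoop window, and the latency figures are illustrative assumptions; the point is only that a tag lookup slower than the fixed window degrades bus usage:

```python
# Toy model of the address and snoop phases on a shared bus. Every agent
# must report within a fixed window whether it holds the issued address;
# a tag lookup slower than the window stalls the bus. Numbers illustrative.

SNOOP_WINDOW = 3  # cycles allowed between address phase and snoop response

def snoop_phase(agents, address):
    """Collect snoop responses; flag a stall if any agent responds late."""
    stalled = False
    hits = []
    for name, tags, tag_latency in agents:
        if tag_latency > SNOOP_WINDOW:
            stalled = True               # sub-optimal usage of the bus
        hits.append((name, address in tags))
    return hits, stalled

agents = [
    ("cpu0_cache", {0x100, 0x200}, 2),   # fast tag store: meets the window
    ("cpu1_cache", {0x300}, 2),
]
hits, stalled = snoop_phase(agents, 0x200)
assert ("cpu0_cache", True) in hits
assert ("cpu1_cache", False) in hits
assert not stalled
```

This is the timing constraint that motivates keeping the cache tags in fast SRAM in the embodiments that follow.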
  • Examples of prior art systems will now be discussed in greater detail. Turning first to FIGS. 1-3, block diagrams of a processor 10 having an integral cache 12 that is interfaced to a processor pipeline 14 are shown. The processor 10 further consists of a register file 16, an address buffer 18, and a data buffer 20. The various elements are connected together by unidirectional and bidirectional conductors as shown. When the cache 12 of FIG. 1 is integral to the processor 10, conventionally both the cache tags and the cache data are stored in fast static random access memory (SRAM) technology. In general, such an implementation is shown as cache 12 in FIG. 2. Sometimes, insufficient cache is provided integral to the processor, so a supplemental cache is provided external to the processor. Such an implementation is shown as caches 12a and 12b in FIG. 3. Among the drawbacks of implementations of caches exclusively in SRAM are that, relatively speaking, SRAM is expensive, is less dense, and uses more power than dynamic random access memory (DRAM) technology. [0007]
  • With reference to FIGS. 4-6, block diagrams of a cache 12 external to a processor 10 and interfaced with a shared bus 22 are shown. Also interfaced with the shared bus 22 is a memory 24. The cache 12 and the memory 24 are interfaced with the shared bus 22 through a bus interface 26 as shown. When the cache 12 of FIG. 4 is external to the processor 10, conventionally the cache tags are stored in a SRAM cache and the cache data is stored in a DRAM cache. In one implementation, both the SRAM cache 12a containing cache tags and the DRAM cache 12b containing cache data are external to the bus interface 26 as shown in FIG. 5. In another implementation, only the DRAM cache 12b containing cache data is external to the bus interface 26 while the SRAM cache 12a containing cache tags is integral to the bus interface as shown in FIG. 6. Among the drawbacks to these implementations is that the latency of accessing the cache data is long, since it is stored in slower DRAM external to the logic chip. This may force a delay in transferring data to the shared bus, thus degrading system performance. Further, when the cache tags are implemented in SRAM embedded on the logic chip, the size of the cache is limited by the higher cost, the lower density, and the greater power consumption of SRAM. [0008]
  • A definite need exists for a system having an ability to meet the latency timing requirements of the requesting components of the system. In particular, a need exists for a system which is capable of accessing cache memory in a timely manner. Ideally, such a system would have a lower cost and a higher capacity than conventional systems. With a system of this type, system performance can be enhanced. A primary purpose of the present invention is to solve this need and provide further, related advantages. [0009]
  • SUMMARY OF THE INVENTION
  • A caching method is disclosed for using cache data stored in dynamic RAM embedded in a logic chip and cache tags stored in static RAM external to the logic chip. In general, there are at least two cache applications where this method can be employed. First, there are caches integral to a processor and interfaced to a processor pipeline. Second, there are caches external to a processor and interfaced with a shared bus. [0010]
  • BRIEF DESCRIPTION OF THE DRAWING
  • The above and other objects and advantages of the present invention will be more readily appreciated from the following detailed description when read in conjunction with the accompanying drawing, wherein: [0011]
  • FIG. 1 is a block diagram of a processor having an integral cache that is interfaced to a processor pipeline according to the prior art; [0012]
  • FIG. 2 is a prior art block diagram of a processor having an integral SRAM cache that is interfaced to a processor pipeline; [0013]
  • FIG. 3 is a prior art block diagram of a processor having an integral SRAM cache and an external supplemental SRAM cache both of which are interfaced to a processor pipeline; [0014]
  • FIG. 4 is a prior art block diagram of a cache external to a processor and interfaced with a shared bus; [0015]
  • FIG. 5 is a prior art block diagram of a SRAM cache containing cache tags and a DRAM cache containing cache data both of which are external to a processor and interfaced with a shared bus; [0016]
  • FIG. 6 is a prior art block diagram of a DRAM cache containing cache data and a SRAM cache containing cache tags which is integral to a bus interface both of which are external to a processor and interfaced with a shared bus; [0017]
  • FIG. 7 is a block diagram of a logic chip having embedded logic and embedded DRAM cache containing cache data according to one embodiment of the present invention; [0018]
  • FIG. 8 is a block diagram of a processor having an embedded DRAM cache containing cache data that is interfaced to a processor pipeline according to another embodiment of the present invention; [0019]
  • FIG. 9 is a block diagram of a SRAM cache containing cache tags and an embedded DRAM cache containing cache data which is integral to a bus interface both of which are external to a processor and interfaced with a shared bus according to a further embodiment of the present invention; and [0020]
  • FIG. 10 is a block diagram of a pair of SRAM caches containing cache tags and a pair of embedded DRAM caches containing cache data each of which is integral to one of a pair of bus interfaces both pairs of which are external to a processor and interfaced with a shared sub-bus according to still another embodiment of the present invention.[0021]
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • Turning now to FIG. 7, a block diagram of a logic chip 30 having embedded logic 32 and embedded DRAM cache 34 containing cache data according to one embodiment of the present invention is shown. The embedded logic 32 can be any of a wide variety of logic that is well known to one of ordinary skill in the art. For example, the embedded logic 32 may be a floating point unit or a bus interface. The logic chip 30 is connected to an external SRAM cache 36 containing cache tags. In general, there are at least two cache applications where this method can be employed. First, there are caches integral to a processor and interfaced to a processor pipeline. Second, there are caches external to a processor and interfaced with a shared bus. For example, in a shared bus design, the external SRAM cache 36 can be accessed within the minimum time delay specified between the address and snoop phases of the shared bus. Concurrent with the tag access, the cache data can also be accessed from the embedded DRAM cache 34 on the logic chip 30. The latency of accessing the embedded DRAM cache 34 is substantially lower than accessing the external DRAM cache 12b as in FIGS. 5 and 6 above. Among the advantages of the method of the present invention are that the embedded DRAM cache results in faster data access and lower pin-count than an external DRAM cache. Further, by storing the cache tags in external SRAM, the method of the present invention allows a cache with a larger capacity than a cache implemented with an integral SRAM, as DRAM is cheaper, is more dense, and consumes less power. [0022]
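The latency benefit of the concurrent access described above can be sketched numerically. The latency figures below are illustrative assumptions, as is the sequential tag-then-data model used for the prior-art comparison; the patent itself only states that the external-DRAM access of FIGS. 5 and 6 is substantially slower:

```python
# Sketch of the concurrent access of FIG. 7: the tag lookup in external
# SRAM and the data read from embedded DRAM are started together, so a
# hit costs roughly max(tag, data) latency rather than their sum.
# All latency values (cycles) are illustrative.

SRAM_TAG_LATENCY = 2        # external SRAM tag access (meets snoop window)
EDRAM_DATA_LATENCY = 4      # embedded DRAM data access
EXT_DRAM_DATA_LATENCY = 10  # external DRAM data access, as in FIGS. 5 and 6

def hit_latency_concurrent():
    # tag and data accesses overlap on the logic chip
    return max(SRAM_TAG_LATENCY, EDRAM_DATA_LATENCY)

def hit_latency_prior_art():
    # assumed model: tag access followed by a slower external DRAM access
    return SRAM_TAG_LATENCY + EXT_DRAM_DATA_LATENCY

assert hit_latency_concurrent() == 4
assert hit_latency_prior_art() == 12
assert hit_latency_concurrent() < hit_latency_prior_art()
```

Under these assumed numbers, the embedded-DRAM arrangement hides the data access entirely behind the tag check on a hit.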
  • With reference to FIG. 8, a block diagram of a processor 10 having an embedded DRAM cache 34 containing cache data that is interfaced to a processor pipeline 14 according to one embodiment of the present invention is shown. As above with respect to FIGS. 1-3, the processor 10 further consists of a register file 16, an address buffer 18, and a data buffer 20. The processor 10 is connected to an external SRAM cache 36 containing cache tags. Such an implementation is able to meet the stringent timing requirements of the processor. [0023]
  • FIGS. 9 and 10 are block diagrams of caches external to a processor and interfaced with a shared bus. The implementation shown in FIG. 9 is for a single shared bus while the implementation shown in FIG. 10 is for a hierarchical shared bus. FIG. 9 shows a block diagram of a SRAM cache 36 containing cache tags and an embedded DRAM cache 34 containing cache data which is integral to a bus interface 26, both of which are external to a processor 10 and interfaced with a shared bus 22 according to one embodiment of the present invention. FIG. 10 is a block diagram of a pair of SRAM caches 36 containing cache tags and a pair of embedded DRAM caches 34 containing cache data, each of which is integral to one of a pair of bus interfaces 26, both pairs of which are external to a processor 10 and interfaced with a shared sub-bus 38 according to another embodiment of the present invention. As above with respect to FIGS. 4-6, also interfaced with the shared bus 22 is a memory 24. Both such implementations support faster access to cache data than conventional approaches while continuing to meet the requirements of the shared bus. [0024]
  • While the invention has been illustrated and described by means of specific embodiments, it is to be understood that numerous changes and modifications may be made therein without departing from the spirit and scope of the invention as defined in the appended claims and equivalents thereof. [0025]

Claims (15)

What is claimed is:
1. A method of caching memory for a device comprising a logic chip having embedded logic and embedded DRAM and an external SRAM connected to the logic chip, the method comprising the steps of:
storing at least a portion of the cache data in the embedded DRAM; and
storing at least a portion of the cache tags in the external SRAM.
2. A cache memory comprising:
a logic chip having embedded logic and embedded DRAM;
an external SRAM connected to the logic chip;
means for storing at least a portion of the cache data in the embedded DRAM; and
means for storing at least a portion of the cache tags in the external SRAM.
3. A cache memory comprising:
a logic chip having embedded logic and embedded DRAM wherein at least a portion of the cache data is stored; and
an external SRAM connected to the logic chip wherein at least a portion of the cache tags are stored.
4. A computer system comprising:
a processor having embedded logic; and
a cache memory comprising:
a DRAM embedded in the processor wherein at least a portion of the cache data is stored; and
an external SRAM connected to the processor wherein at least a portion of the cache tags are stored.
5. The computer system according to claim 4, wherein the processor further comprises:
an address buffer connected to the embedded DRAM;
a data buffer connected to the embedded DRAM;
a register file connected to the data buffer; and
a pipeline connected to the address buffer, the data buffer, and the register file.
6. A shared bus computer system comprising:
at least one shared bus;
at least one processor connected to the at least one shared bus;
a bus interface having embedded logic connected to the at least one shared bus; and
a cache memory comprising:
a DRAM embedded in the bus interface wherein at least a portion of the cache data is stored; and
an external SRAM connected to the bus interface wherein at least a portion of the cache tags are stored.
7. The shared bus computer system according to claim 6, further comprising a memory connected to the bus interface.
8. The shared bus computer system according to claim 6, further comprising a second processor connected to the at least one shared bus.
9. The shared bus computer system according to claim 6, further comprising
a memory connected to the bus interface; and
a second processor connected to the at least one shared bus.
10. The shared bus computer system according to claim 6, further comprising:
a second shared bus connected to the bus interface;
a second bus interface connected to the second shared bus; and
a memory connected to the second bus interface.
11. The shared bus computer system according to claim 10, further comprising a second processor connected to the at least one shared bus.
12. The shared bus computer system according to claim 10, further comprising:
a third bus interface having embedded logic connected to the second shared bus;
a second cache memory comprising:
a second DRAM embedded in the third bus interface wherein at least a portion of the second cache data is stored; and
a second external SRAM connected to the third bus interface wherein at least a portion of the second cache tags are stored;
a third shared bus connected to the third bus interface; and
a second processor connected to the third shared bus.
13. The shared bus computer system according to claim 12, further comprising a third processor connected to the at least one shared bus.
14. The shared bus computer system according to claim 12, further comprising a third processor connected to the third shared bus.
15. The shared bus computer system according to claim 12, further comprising:
a third processor connected to the at least one shared bus; and
a fourth processor connected to the third shared bus.
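The claimed arrangement splits a cache between two memory technologies: cache-line data sits in DRAM embedded in the bus-interface logic chip, while the cache tags sit in an SRAM external to that chip, so the narrow tag compare uses the fast SRAM and the wide data access stays on-chip. A minimal behavioral sketch of that split is below; it assumes a direct-mapped cache, and all class names, sizes, and parameters are illustrative, not taken from the patent text.

```python
# Behavioral sketch of the split cache described in claim 6: tags in a small
# fast store (modeling the external SRAM), line data in a larger on-chip store
# (modeling the DRAM embedded in the bus-interface logic). Direct-mapped for
# simplicity; real designs would add associativity, write policy, and refresh.

LINE_SIZE = 64    # bytes per cache line (illustrative)
NUM_LINES = 1024  # number of direct-mapped lines (illustrative)

class SplitCache:
    def __init__(self, backing_memory):
        self.memory = backing_memory          # dict: line address -> bytes
        self.tag_sram = [None] * NUM_LINES    # tags only (external SRAM)
        self.data_edram = [None] * NUM_LINES  # line data (embedded DRAM)

    def _index_and_tag(self, addr):
        line_addr = addr // LINE_SIZE
        return line_addr % NUM_LINES, line_addr // NUM_LINES

    def read(self, addr):
        index, tag = self._index_and_tag(addr)
        if self.tag_sram[index] == tag:
            # Hit: only the tag compare touched the SRAM; the wide line
            # read is served from the embedded DRAM.
            line = self.data_edram[index]
        else:
            # Miss: fill the line from backing memory, then install the
            # tag in SRAM and the data in embedded DRAM.
            line_addr = addr // LINE_SIZE
            line = self.memory.get(line_addr, bytes(LINE_SIZE))
            self.tag_sram[index] = tag
            self.data_edram[index] = line
        return line[addr % LINE_SIZE]

mem = {0: bytes(range(64))}
cache = SplitCache(mem)
first = cache.read(10)   # miss: fills from backing memory
second = cache.read(10)  # hit: served from the embedded-DRAM copy
```

One motivation for this split, consistent with the title, is that the tag array is small enough to fit economically in fast SRAM, while the much larger data array benefits from the density of embedded DRAM.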
US09/344,660 1999-06-25 1999-06-25 Caching method using cache data stored in dynamic RAM embedded in logic chip and cache tag stored in static RAM external to logic chip Expired - Fee Related US6449690B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/344,660 US6449690B1 (en) 1999-06-25 1999-06-25 Caching method using cache data stored in dynamic RAM embedded in logic chip and cache tag stored in static RAM external to logic chip


Publications (2)

Publication Number Publication Date
US20020069325A1 (en) 2002-06-06
US6449690B1 US6449690B1 (en) 2002-09-10

Family

ID=23351449

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/344,660 Expired - Fee Related US6449690B1 (en) 1999-06-25 1999-06-25 Caching method using cache data stored in dynamic RAM embedded in logic chip and cache tag stored in static RAM external to logic chip

Country Status (1)

Country Link
US (1) US6449690B1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4818820B2 (en) * 2006-06-07 2011-11-16 Renesas Electronics Corporation Bus system, bus slave and bus control method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5067078A (en) * 1989-04-17 1991-11-19 Motorola, Inc. Cache which provides status information
DE69324508T2 (en) * 1992-01-22 1999-12-23 Enhanced Memory Systems Inc DRAM with integrated registers
US5687131A (en) * 1996-03-22 1997-11-11 Sharp Microelectronics Technology, Inc. Multi-mode cache structure
US6026478A (en) * 1997-08-01 2000-02-15 Micron Technology, Inc. Split embedded DRAM processor
US6151664A (en) * 1999-06-09 2000-11-21 International Business Machines Corporation Programmable SRAM and DRAM cache interface with preset access priorities

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7155561B2 (en) 2000-08-17 2006-12-26 Micron Technology, Inc. Method and system for using dynamic random access memory as cache memory
US6862654B1 (en) 2000-08-17 2005-03-01 Micron Technology, Inc. Method and system for using dynamic random access memory as cache memory
US7917692B2 (en) 2000-08-17 2011-03-29 Round Rock Research, Llc Method and system for using dynamic random access memory as cache memory
US20080177943A1 (en) * 2000-08-17 2008-07-24 Micron Technology, Inc. Method and system for using dynamic random access memory as cache memory
US20070055818A1 (en) * 2000-08-17 2007-03-08 Micron Technology, Inc. Method and system for using dynamic random access memory as cache memory
US6948027B2 (en) 2000-08-17 2005-09-20 Micron Technology, Inc. Method and system for using dynamic random access memory as cache memory
US20060015679A1 (en) * 2000-08-17 2006-01-19 Brent Keeth Method and system for using dynamic random access memory as cache memory
US7350018B2 (en) 2000-08-17 2008-03-25 Micron Technology, Inc. Method and system for using dynamic random access memory as cache memory
US6779076B1 (en) * 2000-10-05 2004-08-17 Micron Technology, Inc. Method and system for using dynamic random access memory as cache memory
US6965536B2 (en) 2000-10-05 2005-11-15 Micron Technology, Inc. Method and system for using dynamic random access memory as cache memory
US20050007848A1 (en) * 2000-10-05 2005-01-13 Shirley Brian M. Method and system for using dynamic random access memory as cache memory
WO2004061858A1 (en) * 2002-12-31 2004-07-22 Intel Corporation A refresh port for a dynamic memory
US7617356B2 (en) 2002-12-31 2009-11-10 Intel Corporation Refresh port for a dynamic memory
US20040128433A1 (en) * 2002-12-31 2004-07-01 Bains Kuljit S. Refresh port for a dynamic memory
CN100377118C (en) * 2006-03-16 2008-03-26 Zhejiang University Built-in file system realization based on SRAM
WO2012135431A2 (en) * 2011-04-01 2012-10-04 Intel Corporation Mechanisms and techniques for providing cache tags in dynamic random access memory
WO2012135431A3 (en) * 2011-04-01 2012-12-27 Intel Corporation Mechanisms and techniques for providing cache tags in dynamic random access memory

Also Published As

Publication number Publication date
US6449690B1 (en) 2002-09-10

Similar Documents

Publication Publication Date Title
US11636038B2 (en) Method and apparatus for controlling cache line storage in cache memory
EP1196850B1 (en) Techniques for improving memory access in a virtual memory system
US5802554A (en) Method and system for reducing memory access latency by providing fine grain direct access to flash memory concurrent with a block transfer therefrom
US7269708B2 (en) Memory controller for non-homogenous memory system
US7669011B2 (en) Method and apparatus for detecting and tracking private pages in a shared memory multiprocessor
US20060004963A1 (en) Apparatus and method for partitioning a shared cache of a chip multi-processor
US6782453B2 (en) Storing data in memory
JP3629519B2 (en) Programmable SRAM and DRAM cache interface
US20080229026A1 (en) System and method for concurrently checking availability of data in extending memories
US20050144390A1 (en) Protocol for maintaining cache coherency in a CMP
US6449690B1 (en) Caching method using cache data stored in dynamic RAM embedded in logic chip and cache tag stored in static RAM external to logic chip
US8250300B2 (en) Cache memory system and method with improved mapping flexibility
US6542969B1 (en) Memory controller and a cache for accessing a main memory, and a system and a method for controlling the main memory
US6654854B1 (en) Caching method using cache tag and cache data stored in dynamic RAM embedded in logic chip
US20090024798A1 (en) Storing Data
US8117393B2 (en) Selectively performing lookups for cache lines
US20130151766A1 (en) Convergence of memory and storage input/output in digital systems
JPH11296432A (en) Information processor and memory management system
JPH10105466A (en) Cache memory control method for disk device
JP2006107021A (en) Memory controller
WO2000045270A1 (en) Techniques for improving memory access in a virtual memory system

Legal Events

Date Code Title Description
AS Assignment

Owner name: HEWLETT-PACKARD COMPANY, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PONG, FONG;JANAKIRAMAN, GOPALAKRISHNAN;REEL/FRAME:010169/0745

Effective date: 19990623

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD COMPANY;REEL/FRAME:026945/0699

Effective date: 20030131

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20140910