WO2009021176A2 - Urgency and time window manipulation to accommodate unpredictable memory operations - Google Patents

Urgency and time window manipulation to accommodate unpredictable memory operations

Info

Publication number
WO2009021176A2
WO2009021176A2 · PCT/US2008/072609
Authority
WO
WIPO (PCT)
Prior art keywords
storage device
data
data integrity
operations
errors
Prior art date
Application number
PCT/US2008/072609
Other languages
French (fr)
Other versions
WO2009021176A9 (en)
WO2009021176A3 (en)
Inventor
James J. Tringali
Sergey A. Gorobets
Shai Traister
Yosief Ataklti
Original Assignee
Sandisk Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 11/864,793 (US8046524B2)
Application filed by Sandisk Corporation
Publication of WO2009021176A2
Publication of WO2009021176A3
Publication of WO2009021176A9

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14Handling requests for interconnection or transfer
    • G06F13/16Handling requests for interconnection or transfer for access to memory bus
    • G06F13/1605Handling requests for interconnection or transfer for access to memory bus based on arbitration
    • G06F13/161Handling requests for interconnection or transfer for access to memory bus based on arbitration with latency improvement
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The variable latency associated with flash memory due to background data integrity operations is managed in order to allow the flash memory to be used in isochronous systems. A system processor is notified regularly of the nature and urgency of requests for time to ensure data integrity. Minimal interruptions of system processing are achieved and operation is ensured in the event of a power interruption.

Description

URGENCY AND TIME WINDOW MANIPULATION TO ACCOMMODATE UNPREDICTABLE MEMORY OPERATIONS
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application is related to and claims the benefit of U.S. Patent application No. 60/954,694 filed August 8, 2007, 11/864,740 filed September 28, 2007, and 11/864,793 filed September 28, 2007, all of which are hereby incorporated by reference in the entirety.
BACKGROUND
[0002] Prior systems based on isochronous processors have typically relied on read only memory (ROM) for rapid access to code that must be executed quickly by the processor. ROM has long been a preferred code storage device for executable applications where uninterrupted and predictable access is required. However, ROM cannot be easily reprogrammed or programmed at the last minute. A ROM is typically masked out long in advance for a specific code/application, and once masked and subsequently manufactured it cannot be changed in most scenarios. This results in large inventories of product that may or may not be well received in the marketplace. For consumer devices, where inventory must be produced before demand can be accurately gauged, this may result in unsold inventory.
[0003] While use of flash memory allows for different programs to be loaded on the same hardware quickly and easily, use of certain types of flash memory is problematic in isochronous systems due to the operations that the memory performs to ensure data reliability. The background operations performed to ensure data integrity result in unpredictable latency times when reading data from the flash memory. This is especially true for NAND flash memory.
[0004] In an isochronous system that incorporates NAND flash memory the unpredictable latency times of the flash memory are problematic. This is particularly true for read operations.
[0005] NAND memory typically includes memory management operations to accommodate the physical limitations of the NAND memory cells. These operations may be taking place when a read command is received, and thus the called-for data may not be immediately returned.
SUMMARY
[0006] Various aspects and embodiments allow for a flash memory storage device with variable latency in responding to data storage commands or requests from a host device to be used in demanding environments where a ROM might otherwise be used to provide a program. For example, mechanisms within the flash memory controller allow the memory controller to accommodate both the physical limitations of the flash memory and the needs of a host processor to quickly and regularly access the memory.
[0007] This, for example, allows the flash memory storage device to be used not only in read intensive environments but also in isochronous systems where flow control cannot be introduced. For example, embodiments of the present invention may be used in systems where there is not a wait, busy, or ready signal to assert or de-assert on the bus.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] FIGS. 1A and 1B are block diagrams of a non-volatile memory and a host system, respectively, that operate together.
[0009] FIG. 2 is an illustration of an isochronous system read operation.
[0010] FIG. 3 is a scan and update state diagram.
[0011] FIGS. 4A and 4B illustrate a first embodiment of command and data structure and flow for normal and wait flow respectively.
[0012] FIGS. 5A and 5B illustrate a second embodiment of command and data structure and flow for normal and wait flow respectively.
[0013] FIGS. 6A and 6B illustrate a third embodiment of command and data structure and flow for normal and wait flow respectively.
DESCRIPTION OF EXEMPLARY EMBODIMENTS
Memory Architectures and Their Operation
[0014] Referring initially to Figure 1A, a flash memory includes a memory cell array and a controller. In the example shown, two integrated circuit devices (chips) 11 and 13 include an array 15 of memory cells and various logic circuits 17. The logic circuits 17 interface with a controller 19 on a separate chip through data, command and status circuits, and also provide addressing, data transfer and sensing, and other support to the array 13. The number of memory array chips can be from one to many, depending upon the storage capacity provided. The controller and part or all of the array can alternatively be combined onto a single integrated circuit chip, but this is currently not an economical alternative.
[0015] A typical controller 19 includes a microprocessor 21, a read-only memory (ROM) 23 primarily to store firmware and a buffer memory (RAM) 25 primarily for the temporary storage of user data either being written to or read from the memory chips 11 and 13. Circuits 27 interface with the memory array chip(s) and circuits 29 interface with a host through connections 31. The integrity of data is in this example determined by calculating an ECC with circuits 33 dedicated to calculating the code. As user data is being transferred from the host to the flash memory array for storage, the circuit calculates an ECC from the data and the code is stored in the memory. When that user data is later read from the memory, it is again passed through the circuit 33, which calculates the ECC by the same algorithm and compares that code with the one calculated and stored with the data. If they compare, the integrity of the data is confirmed. If they differ, depending upon the specific ECC algorithm utilized, those bits in error, up to a number supported by the algorithm, can be identified and corrected. Typically, an ECC algorithm is used that can correct up to 8 bits in a 512 byte sector. This number of correctable bits is predicted to increase over time.
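The write and read paths of this ECC flow can be pictured with a short sketch. The following C fragment is illustrative only: a trivial XOR checksum stands in for the dedicated ECC circuits 33, and the structure and function names are assumptions rather than anything taken from the disclosure.

```c
/* Toy illustration of the data-integrity flow in [0015]: a code is computed and
 * stored when data is written, then recomputed and compared on read. A real
 * controller uses an ECC able to locate and correct bit errors (e.g. 8 bits per
 * 512-byte sector); the XOR checksum here only shows the flow, not the algorithm. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define SECTOR_BYTES 512

typedef struct {
    uint8_t data[SECTOR_BYTES];
    uint8_t code;                 /* stored alongside the data, as circuits 33 would */
} stored_sector_t;

static uint8_t toy_code(const uint8_t *data)
{
    uint8_t c = 0;
    for (int i = 0; i < SECTOR_BYTES; i++)
        c ^= data[i];             /* placeholder for the real ECC computation */
    return c;
}

void sector_write(stored_sector_t *s, const uint8_t *user_data)
{
    memcpy(s->data, user_data, SECTOR_BYTES);
    s->code = toy_code(s->data);  /* code is computed on the way in and stored */
}

bool sector_read(const stored_sector_t *s, uint8_t *out)
{
    memcpy(out, s->data, SECTOR_BYTES);
    /* On the way out the code is recomputed with the same algorithm and
     * compared; a mismatch means bits are in error and, with a real ECC,
     * could be located and corrected up to the algorithm's limit. */
    return toy_code(s->data) == s->code;
}
```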
[0016] The connections 31 of the memory of Figure 1A mate with connections 31' of a host system, an example of which is given in Figure 1B. Data is transferred between the host and the memory of Figure 1A through interface circuits 35. A typical host also includes a microprocessor 37, a ROM 39 for storing firmware code and RAM 41. Other circuits and subsystems 43 often include a high capacity magnetic data storage disk drive, interface circuits for a keyboard, a monitor and the like, depending upon the particular host system. Some examples of such hosts include desktop computers, laptop computers, handheld computers, palmtop computers, personal digital assistants (PDAs), MP3 and other audio players, digital cameras, video cameras, electronic game machines, wireless and wired telephony devices, answering machines, voice recorders, network routers and others.
[0017] The memory of Figure 1A may be implemented as a small enclosed card, cartridge, or drive containing the controller and all its memory array circuit devices in a form that is removably connectable with the host of Figure 1B. That is, mating connections 31 and 31' allow a card to be disconnected and moved to another host, or replaced by connecting another card to the host. Alternatively, the memory array devices may be enclosed in a separate card that is electrically and mechanically connectable with a card containing the controller and connections 31. As a further alternative, the memory of Figure 1A may be embedded within the host of Figure 1B, wherein the connections 31 and 31' are permanently made. In this case, the memory is usually contained within an enclosure of the host along with other components.
[0018] In NAND memory, media errors are introduced through device cell stresses induced by read and write operations. The useful life of a NAND memory can be maximized by monitoring the increasing error levels and moving the data to a physical location which has experienced lower access activity. In systems utilizing NAND memory today, this media repair activity is performed as a background operation. These operations happen during bus idle times or during read or write requests extended by hardware flow control techniques.
[0019] As part of those background operations, blocks are occasionally copied or updated to other locations when the physical reliability of a particular block cannot be depended upon. For example, if the error rate of a block appears as if it will shortly be unreadable, even with multiple read cycles and thresholds, a block may be updated. For example, if the number of errors is not correctable with ECC, an update would be necessary.
[0020] Also in NAND, read operations cause disturbs, one of the aforementioned cell stresses. Certain systems incorporating NAND memory may be very read intensive, thus increasing the importance of the data correction and scrub techniques. For example, a video game system reads the memory very often to update the image being displayed during the game. While the background display may only be written once, it is read very frequently. Thus the effect of disturbs will be particularly noteworthy in such a case and must be mitigated.
Error Detection and Data Integrity in NAND memory
[0021] There are two individual measures of data quality that can be used as thresholds to determine if corrective action should be taken: 1) the detection of data errors through use of ECC, and 2) even though few or no data errors are detected, a shift in the charge storage levels can be detected before they cause data read errors.
[0022] The purpose of a scrub operation is to detect disturbed storage elements before the number of bits in error and the level of shifted cells exceed any recovery schemes available on the memory system. To this end, it is generally desirable to detect disturb as early as possible and before much of the guard band for a given voltage threshold level has been lost to disturb.
[0023] Flash memories usually store data at discrete states, or ranges of charge storage levels, each of which is separated from other states by some guard band. There is generally a nominal sensing level of discrimination between each state above which a storage element is deemed to be in one state, and below which it is deemed to be in another state. As a given storage element is disturbed, the level to which it has been programmed or erased may begin to shift. If the level of the storage element approaches the sensing level of discrimination, or crosses over it, it produces data in a state different from that to which it was programmed or erased. The error will generally manifest itself as one or more bits in error in the data, and will generally be detected through the use of ECC covering the data field.
[0024] Margining or biasing the read conditions such that the sensing level of discrimination is shifted more toward one state or another will cause disturbed storage elements to be sensed in the wrong state even if the amount of shift would not cause an error under nominal read conditions. This allows the system to detect shift before it approaches the point at which it would cause errors during normal memory system operation. [0025] If disturb mechanisms are known to affect data storage levels in a specific way, it is possible to target detection of those specific disturb mechanisms by margining read conditions toward the expected level shifts. While the ideal situation would be to target the expected disturb mechanisms with a single read operation under a single set of margin conditions, this may not usually be possible. It may be necessary to perform multiple read operations under different conditions. For example, it is possible that different disturb mechanisms present in a memory cause storage elements to become either more programmed or more erased. Storage elements both above and below a discrimination level may shift toward it, in which case it may be necessary to check first for a shift in the storage levels toward a discrimination level from one state, and then from the other.
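A sketch of such a two-direction margin check, under the assumption of a device hook that can perform a read with a biased sensing level and report the resulting bit-error count, might look like the following; the callback type and enum names are illustrative.

```c
/* Sketch of the two-direction margin check suggested in [0025]. Because a
 * single margin setting may not expose every disturb mechanism, the page is
 * read once biased toward each neighbouring state. */
#include <stdbool.h>

typedef enum { MARGIN_TOWARD_PROGRAMMED, MARGIN_TOWARD_ERASED } read_margin_t;

/* Assumed callback: perform a margined read of one page and return the number
 * of bits the ECC reports in error under that margin. */
typedef int (*margined_read_fn)(unsigned block, unsigned page, read_margin_t margin);

bool page_margin_is_suspect(margined_read_fn margined_read,
                            unsigned block, unsigned page,
                            int allowed_bit_errors)
{
    /* Check the shift toward the discrimination level from one state... */
    if (margined_read(block, page, MARGIN_TOWARD_PROGRAMMED) > allowed_bit_errors)
        return true;
    /* ...and then from the other. */
    return margined_read(block, page, MARGIN_TOWARD_ERASED) > allowed_bit_errors;
}
```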
[0026] As discussed above, the scrub read conditions may be margined in order to target certain expected disturb mechanisms, or to simply check for sufficient margin in the stored levels. Whether the data was read under nominal or margined conditions, the decision whether or not to take corrective action may be based on the number of bits in error detected during the scrub read operation. For example, if the number of bits in error is below the ECC correction capabilities of the system, the system may decide to defer the corrective action, or to ignore the error altogether.
[0027] In addition to using the number of bits in error as a threshold to initiating corrective action, the system may make the decision to correct based on other factors such as the pattern of bits in error. For example, the ECC correction capabilities may be sensitive to bit error pattern, or bit error patterns may be indicative of a particular known disturb mechanism in the nonvolatile memory. There may be other reasons for basing the threshold on bit error patterns. The bit error pattern is generally revealed during the ECC correction operation.
[0028] It may be desirable for performance purposes to defer a scrub corrective action even if it has been determined that corrective action is required. The reasons for doing so may include real-time considerations. For example, a host may require a certain data transfer rate, and dedicating resources to scrub corrective action at certain times might impact the ability of the memory system to meet the guaranteed data rate. For such a purpose, the memory system may queue the scrub corrective action operation parameters for later processing, at a time when performing the scrub corrective action would not impact performance to the host. The scrub corrective action operations may be deferred until sometime later in the host command processing, sometime after the command processing, or until a later host command. The main point is that the scrub operation parameters would be stored and processed at a later time that is most convenient to the host.
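The deferral described above amounts to queuing the operation parameters and draining them later. A minimal ring-buffer sketch, with assumed field names and an assumed queue depth of eight, is shown below.

```c
/* Minimal sketch of deferring scrub corrective actions as in [0028]: only the
 * operation parameters are queued when the host needs the bus, and the actual
 * corrective work runs later. The depth and field names are assumptions. */
#include <stdbool.h>
#include <stdint.h>

#define SCRUB_QUEUE_SLOTS 8

typedef struct {
    uint32_t block;          /* block needing corrective action          */
    uint8_t  page;           /* page whose scrub read tripped the check  */
    uint8_t  bits_in_error;  /* recorded for later prioritisation        */
} scrub_action_t;

typedef struct {
    scrub_action_t slot[SCRUB_QUEUE_SLOTS];
    unsigned head, count;
} scrub_queue_t;

bool scrub_defer(scrub_queue_t *q, scrub_action_t a)
{
    if (q->count == SCRUB_QUEUE_SLOTS)
        return false;                      /* queue full: caller must escalate */
    q->slot[(q->head + q->count) % SCRUB_QUEUE_SLOTS] = a;
    q->count++;
    return true;
}

bool scrub_take(scrub_queue_t *q, scrub_action_t *out)
{
    if (q->count == 0)
        return false;                      /* nothing pending */
    *out = q->slot[q->head];
    q->head = (q->head + 1) % SCRUB_QUEUE_SLOTS;
    q->count--;
    return true;
}
```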
Additional vulnerable block/page criteria
[0029] In response to data being read from less than all of a group of memory cells by a host or otherwise, such as fewer than all the pages of a block, a scrub read is performed on a small proportion of the memory cells in the block, such as one or a small number of sectors, and the quality of the scrub read data is checked by use of the ECCs stored with the sectors of data. The scrub read most commonly, but not always, reads data stored in one or more pages that were not read in response to the command. If there are an excessive number of errors in the scrub read data, then the entire block is refreshed. A refresh operation involves reading all the data from the block, correcting the errors in the data by use of the ECCs, and then rewriting the corrected data into another block that has been erased. This process is desirably performed often enough to avoid the stored data being disturbed to the extent that they are no longer correctable by use of the ECCs, but not so often that performance of the memory system is excessively degraded. By limiting the scrub read to a small amount of the storage capacity of a block, such as just one or a few sectors or one or two pages, the overhead added to the memory operation by the scrub process is minimized. The scrub read and any resulting refresh are preferably performed in the background, when the memory system is not otherwise responding to commands to read or write data therein.
[0030] The scrub read preferably reads data stored in a page or pages of the block that are more vulnerable to having their data disturbed by the particular partial block command read than other pages of the block. It is preferred to identify a single most vulnerable sector or page, whenever that is possible, and then scrub read the data from it. Either way, a worst-case picture of the quality of the data in the block is obtained with only a small amount of data needed to be scrub read. The impact on the performance of the memory system by such scrub reads is therefore minimized. [0031] Objective criteria may be established to identify the portion of the group or block of memory cells, such as a page, that is more vulnerable to being disturbed by the command read than other portions of the group. At least some of the criteria are dependent upon the structure of the memory array. For example, in a NAND array, it is recognized that the pages formed by word lines at either end of the strings of series connected memory cells are more susceptible to disturbs from programming in other pages of the block than are the remaining pages in between. This is because the memory cells at the ends of the strings behave differently than those located away from the ends. If data in one or both of these pages has not been read in response to the command, it is likely that the data in the unread one of these pages has been disturbed to an extent that is greater than in other unread pages. A scrub read is then performed on the unread one or both of these more vulnerable pages.
[0032] Another of the criteria for selecting the more vulnerable page(s) may be established to be dependent upon which pages of the block have been read in response to the command and in what order. For instance, in the above example, even if one or both of the extreme pages of the block has been read in response to the command, one of these pages is desirably scrub read if it was read early in the execution of the command and therefore subject to thereafter being disturbed by the subsequent reading of other pages of the block. In such a case, the ECC check performed as part of the normal command read may no longer represent the quality of the data in that page because of potential disturbs that could have resulted from reading subsequent pages. If one or both of these extreme pages are read in response to the command at or toward the end of the commanded data read process, however, the ECC bit error checking that occurs as part of a normal data read provides information of the quality of the data in those page(s) so that another scrub read of the same page(s) need not take place.
[0033] A further possible one of the criteria for identifying a more vulnerable page is to identify a page that has not been read in response to the command but which is physically located adjacent a page that was so read. Disturbs are more likely to occur on this page than other pages in the block, with the possible exception of the two pages at the extreme ends of NAND memory strings. This will depend upon the specific structure of the memory cell array. [0034] Yet another of the established criteria can be the relative patterns of data stored in the pages of the block. For example, in the NAND memory array, disturbs of the charge levels of memory cells in states near or at their lowest stored charge levels are more likely than those with charge levels near or at their highest stored charge levels. This is because potentially disturbing voltages experienced by a memory cell with the lowest charge level are higher than those of a memory cell with the highest charge level. Therefore, a page with data represented by predominantly low charge levels stored in its memory cells will be more vulnerable to disturbs than one with data represented primarily by higher stored charge levels. This is therefore another factor that may be used to select a more vulnerable page as a candidate for a scrub read.
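Taken together, the criteria of [0031]-[0034] can be treated as a scoring rule for choosing the single page to scrub read. The following sketch applies them with arbitrary illustrative weights; the structure fields and weights are assumptions, not values specified above.

```c
/* Sketch of applying the vulnerable-page criteria to pick one page for the
 * scrub read. An actual selection would depend on the array structure. */
#include <stdbool.h>

#define PAGES_PER_BLOCK 64   /* assumed block size */

typedef struct {
    bool read_by_command;      /* page was read while servicing the host command */
    bool read_early;           /* ...and early enough to be disturbed afterwards  */
    bool adjacent_to_read;     /* physically next to a page that was read         */
    bool mostly_low_charge;    /* data pattern dominated by low stored levels     */
} page_info_t;

int pick_scrub_page(const page_info_t page[PAGES_PER_BLOCK])
{
    int best = -1, best_score = -1;
    for (int p = 0; p < PAGES_PER_BLOCK; p++) {
        /* A page read late in the command already has fresh ECC information,
         * so it does not need another scrub read. */
        if (page[p].read_by_command && !page[p].read_early)
            continue;
        int score = 0;
        if (p == 0 || p == PAGES_PER_BLOCK - 1) score += 3;  /* end of NAND string */
        if (page[p].adjacent_to_read)           score += 2;
        if (page[p].read_early)                 score += 2;
        if (page[p].mostly_low_charge)          score += 1;
        if (score > best_score) { best_score = score; best = p; }
    }
    return best;   /* index of the most vulnerable page, or -1 if none qualifies */
}
```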
[0035] For further information on measuring errors and maintaining the integrity of data stored in flash memory, please refer to: U.S. Patent No. 7,012,835, entitled "Flash memory data correction and scrub techniques," to Gonzalez et al.; and U.S. Patent Application No. 11/692,829, entitled "Flash Memory With Data Refresh Triggered By Controlled Scrub Data Reads," to Jason Lin, which are hereby incorporated by reference in their entirety.
[0036] The ECC threshold for triggering a corrective action may be anywhere in the range of ECC correction capabilities, but is preferably around 75% of the capability. For example, if the ECC is capable of correcting 12 bits, corrective action may be triggered when around 8 bits in error are detected.
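Expressed as code, this trigger is simply a comparison against a fraction of the correction capability. The 3/4 fraction below follows the stated preference, while the example of about 8 bits for a 12-bit ECC shows the exact figure is tunable; the function name is illustrative.

```c
/* Sketch of the corrective-action trigger in [0036]: act when the error count
 * reaches roughly 75% of the ECC correction capability. */
#include <stdbool.h>

bool needs_corrective_action(int bits_in_error, int ecc_correctable_bits)
{
    int threshold = (ecc_correctable_bits * 3) / 4;   /* about 75% of capability */
    return bits_in_error >= threshold;
}
```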
Ensuring Data Integrity in an Isochronous System with NAND Flash Storage
[0037] The use of the NAND flash memory device within an exemplary embodiment of an isochronous system will now be described. Other embodiments are of course contemplated and the present application should not be limited to the embodiments described.
[0038] Handling NAND flash media refresh operations efficiently is important in a low latency operating environment, especially an Isochronous system environment.
The memory controller 19 incorporates an isochronous system ("IS") interface (the "ISI") in addition to the other interfaces such as those for a Secure Digital ("SD") card, Memory Stick, Compact Flash card, USB flash drive, or the like. Thus, one controller can be used to create any of the aforementioned devices.
[0039] The ISI monitors media error statistics and takes action whenever needed to scrub or update the media, as described earlier. Since the IS environment does not allow flow control at the transaction level, a new approach has been implemented for media repair.
[0040] The problem is that in systems lacking signal level flow control constructs and requiring high system availability there is no way for the media control processor to test for and correct induced media errors.
[0041] In prior solutions that utilized a ROM to store a program, latency was not an issue due to the nature of ROM. For example, for a gaming system, it is important that the game itself be instantly accessible, so that the game play is fluid and responsive. To accomplish this, prior games were stored on ROM. However, each ROM was therefore masked out and otherwise manufactured specifically for each game title. The up-front costs are therefore unnecessarily high in relation to the return if a game is not successful. In such a case a large inventory of game cartridges/cards and/or ROMs may be manufactured without subsequent demand. The present invention, however, allows for different programs, including games, to be stored in an identical or even the same NAND chip. The systems and methods described herein can be used to overcome the latencies of the NAND so that it may be used in isochronous systems such as game machines. Of course, it should be understood that, although a game system has been mentioned for illustrative purposes, use of NAND will work for any isochronous system where quick and/or constant response time is desirable.
Isochronous system ("IS") Bus Operation
[0042] One version of the IS read operation is shown in FIG. 2. The time from when the last byte of the command is issued until the first byte of read data is returned is 230 microseconds.
[0043] The system processor read operation/structure shown in FIG. 2 has been modified in order to better manage process delays of the NAND memory. [0044] For integration purposes, in one embodiment the system processor command structure is augmented by wrapping the mode-dependent RD_PAGE commands with a following READ_ST1 (read status one register) command within the same cycle or period. This can be seen in FIG. 4A. Each cycle 404A-X contains both a data operation and a status operation. In addition to the command input 202 and data output 204 of FIG. 2, represented as 406 and 408 respectively in FIG. 4A, there is a status operation 410 within each cycle or period 404A-X. Two bits have been defined in the STATUS1 register to relay the need for media error processing (flash memory data integrity operations). These bits can be seen in Table 1.
D7: Reserved | D6: CacheRDY | D5: READY | D4: Protect | D3: RefReqUrg | D2: RefReq | D1: Error | D0: Error
Table 1: STATUS1 Register Layout
[0045] The RefReq bit (D2) will be set as a request for the host to initiate a refresh operation. This request is not considered urgent. The host will honor this request in as timely a manner as possible without sacrificing user interactivity. The second bit, RefReqUrg (D3), is an urgent request for a refresh operation. The host must find a way to honor this request as quickly as possible without regard for effects on the user experience.
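Assuming the register is read back as a single byte, the two flags can be tested with ordinary bit masks; only the D2 and D3 positions are given by Table 1, so the rest of this fragment is illustrative.

```c
/* Bit masks for the refresh-request flags of Table 1 ([0044]-[0045]). */
#include <stdbool.h>
#include <stdint.h>

#define STATUS1_REFREQ      (1u << 2)   /* D2: non-urgent refresh request */
#define STATUS1_REFREQ_URG  (1u << 3)   /* D3: urgent refresh request     */

bool refresh_requested(uint8_t status1)
{
    return (status1 & (STATUS1_REFREQ | STATUS1_REFREQ_URG)) != 0;
}

bool refresh_urgent(uint8_t status1)
{
    return (status1 & STATUS1_REFREQ_URG) != 0;
}
```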
[0046] From the memory controller firmware perspective these requests can be implemented by using the presence of scrub or update requests. Whenever the queue holding block scrub requests goes non-empty, a request for a slot of time to go off line is made to the host device. This request is made via the RefReq and RefReqUrg bits in the STATUS1 register. Urgent requests are either important yet time efficient safety events, such as copy block index saves, or critical time consuming but low frequency events, such as the queuing of more than 4 block copies.
[0047] The queuing of 4 block copies is preferably used to denote an urgent request in the case where the queue holds eight total entries. In other words, when the queue is 50% full the request will be an urgent request. The range in the ratio of entries to available slots of the queue used to indicate an urgent request may be anywhere between ten and ninety percent. The ratio selected will affect the performance of the overall system and may be tailored to each application. A lower ratio will require the host to respond more quickly and may result in higher data reliability or integrity while a higher ratio will allow for better system response because the host will be able to allocate more time to running the processor application (e.g. game).
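One way to picture this rule is as a small classification function over the queue fill level; the names and the percentage parameter below are illustrative assumptions, with 50% corresponding to the four-of-eight example.

```c
/* Sketch of the fill-ratio rule in [0046]-[0047]: post a non-urgent request as
 * soon as the scrub/copy queue is non-empty, and escalate to urgent once the
 * queue reaches a configurable fraction of its capacity. */
typedef enum { REQ_NONE, REQ_REFRESH, REQ_REFRESH_URGENT } refresh_req_t;

refresh_req_t classify_refresh_request(unsigned queued, unsigned capacity,
                                       unsigned urgent_percent /* 10..90 */)
{
    if (queued == 0)
        return REQ_NONE;                           /* nothing pending        */
    if (queued * 100u >= capacity * urgent_percent)
        return REQ_REFRESH_URGENT;                 /* e.g. 4 of 8 at 50%     */
    return REQ_REFRESH;                            /* timely, but not urgent */
}
```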
[0048] A state diagram of media error processing interacting with the host's refresh status bits is shown in FIG. 3. The All Clear state 302 represents the initial media error free state. Whenever a scan request enters the scrub queue the (front end) memory controller firmware will need to post RefReq status (state 306) to STATUS1. Once the host responds with a RFS_BLK (refresh block) command the memory controller (front end) firmware will initiate a media scan operation (state 314). If, during the time between posting RefReq and receipt of the RFS_BLK command, the block update queue goes non-empty, an update operation rather than a scan operation will be executed.
[0049] However, if a copy request enters the queue an urgent request to save the index of the block to be copied is posted, as represented by state 310. Once this request is satisfied a follow-on non-urgent request to satisfy the copy operation will be posted, as represented by state 318.
[0050] The index save serves to ensure guaranteed error processing across power cycles. Since the index save operation is short, performing the save at the highest priority will not negatively impact the gaming experience. There is a rare but possible situation where the number of copy requests in the queue starts to accumulate. Whenever there are more than 4 copies in the queue the system should treat this as a critical event. If this queue overflow is detected, the request will be urgent and the time hit will be at a maximum, as represented by state 326.
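The flow of FIG. 3 described in the preceding paragraphs can be approximated by a small state-transition function. The encoding below is a rough illustration only: the state names follow the text, but the summary structure and the simplified transitions are assumptions.

```c
/* Rough encoding of the FIG. 3 flow described in [0048]-[0050]. */
#include <stdbool.h>

typedef enum {
    ST_ALL_CLEAR,        /* 302: no media errors outstanding               */
    ST_POST_REFREQ,      /* 306: non-urgent RefReq posted to STATUS1       */
    ST_SAVE_INDEX_URG,   /* 310: urgent request to save the copy index     */
    ST_SCAN,             /* 314: media scan after host issues RFS_BLK      */
    ST_COPY_PENDING,     /* 318: follow-on non-urgent copy request         */
    ST_QUEUE_OVERFLOW    /* 326: more than 4 copies queued, critical       */
} media_state_t;

typedef struct {
    bool     scan_request_pending;
    bool     copy_request_pending;
    unsigned copies_queued;
    bool     host_issued_rfs_blk;
} media_status_t;

media_state_t next_state(media_state_t s, const media_status_t *m)
{
    if (m->copies_queued > 4)
        return ST_QUEUE_OVERFLOW;            /* critical event, maximum time hit */
    switch (s) {
    case ST_ALL_CLEAR:
        if (m->copy_request_pending) return ST_SAVE_INDEX_URG;
        if (m->scan_request_pending) return ST_POST_REFREQ;
        return ST_ALL_CLEAR;
    case ST_POST_REFREQ:
        return m->host_issued_rfs_blk ? ST_SCAN : ST_POST_REFREQ;
    case ST_SAVE_INDEX_URG:
        return ST_COPY_PENDING;              /* index saved, copy can wait */
    case ST_SCAN:
    case ST_COPY_PENDING:
    case ST_QUEUE_OVERFLOW:
    default:
        return ST_ALL_CLEAR;                 /* work done, back to idle */
    }
}
```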
[0051] The base assumption is that the host will be timely in its response to any refresh request, urgent or not. In one embodiment, the maximum latency between refresh status requests and the issue of a RFS_BLK command will be less than 6400 read operations. This is 1/10 the number of reads expected to complete successfully before an update event would be required. In some environments the maximum latency allowed may be a controlling criterion. For example, a gaming system may specify a maximum latency period of 230 microseconds. [0052] As mentioned above, a scan and/or copy queue is used to keep track of what blocks or other units of memory need to be scanned and/or copied. The queue may be stored in RAM or alternatively in the NAND flash memory itself. For more information on this, please refer to U.S. Patent Application No. 11/726,648 entitled "Methods For Storing Memory Operations In A Queue" filed 3/21/2007, which is hereby incorporated by reference in its entirety.
[0053] FIG. 4A illustrates a system processor command structure incorporating the read status (READ_ST1, discussed above) command 410 in each cycle/period 404A-X in order to read the STATUS1 register of Table 1 above. The command/address 406 is followed by the data 408 and the read STATUS1 command 410. During normal flow, the register will report that no action is needed.
[0054] However, as seen in FIG. 4B, when the status register reports that urgent or non-urgent processing is needed with bits D2 or D3, the host processor will wait some number of periods/cycles before sending another command. This will allow the memory to perform needed data integrity operations.
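From the host's side, the FIG. 4A/4B behavior reduces to inspecting the STATUS1 byte returned by the trailing READ_ST1 and deciding how long to stay off the bus. The wait lengths in this sketch are arbitrary illustrative values, not figures taken from the disclosure.

```c
/* Host-side sketch of FIG. 4A/4B ([0053]-[0054]): the bit positions follow
 * Table 1, and the returned count is how many idle periods the host inserts
 * before its next command so the memory can do data integrity work. */
#include <stdint.h>

#define STATUS1_REFREQ      (1u << 2)   /* D2 */
#define STATUS1_REFREQ_URG  (1u << 3)   /* D3 */

unsigned idle_cycles_after(uint8_t status1)
{
    if (status1 & STATUS1_REFREQ_URG)
        return 8;    /* urgent: yield time immediately (illustrative count)     */
    if (status1 & STATUS1_REFREQ)
        return 2;    /* non-urgent: yield a little time when convenient         */
    return 0;        /* normal flow: the next command may follow in the next cycle */
}
```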
[0055] FIGS. 5A and 5B illustrate another embodiment of a system processor command structure. In this embodiment the mechanism for reporting the need to attend to the flash memory utilizes a data token 510 within each period/cycle 504A-X following the command/address 506 and data 508. As seen in FIG. 5B, when the data token 510 reports that urgent or non-urgent processing is needed, the host processor will wait some number of periods/cycles before sending another command. The data token would comprise additional bits of information beyond the data 508 associated with the command (e.g. the data sent in response to a read). This will allow the memory to perform needed data integrity operations. The data token would contain some or all of the information contained in the STATUS1 register directly, as opposed to the mechanism in FIGS. 4A and 4B, which utilized status bits to indicate that the processor should read the register. For example, a multiplexer could be utilized/flipped to send the information from the register after data 508. In the embodiments of FIGS. 5A and 5B, rather than include an extra operation within each cycle, as is the case in FIGS. 4A and 4B, extra information (e.g. a few bytes) is appended to data 508 in the form of data token 510. This additional data provided within one cycle, e.g. 504A, informs the system processor about the next cycle, e.g. 504B.
[0056] FIGS. 6A and 6B illustrate another embodiment of a system processor command structure. In this embodiment the mechanism for reporting the need to attend to the flash memory utilizes side band bits 610 within each period/cycle 604A-X that are transmitted at the same time as command/address 606 and data 608, rather than after them. As seen in FIG. 6B, when the side band bits 610 report that urgent or non-urgent processing is needed, the host processor will wait some number of periods/cycles before sending another command. The side band bits 610 contain some or all of the information contained in the STATUS1 register directly, or alternatively may direct the processor to read the STATUS1 register.
[0057] All of the embodiments illustrated in FIGS. 4-6 can be used to signal the need for extra time for data integrity operations in systems where a wait, busy, or ready signal is not available, and thus can be thought of as alternatives to flow control in isochronous systems lacking flow control.
Block Refresh Execution
[0058] A block refresh cycle is initiated whenever the host issues a RFS_BLK command. Once a RFS_BLK command is issued, only RD_STATUS1 commands are allowed until the refresh operation completes. Completion of refresh is indicated when both RefReq and RefReqUrg bits are set to zero.
[0059] In order to minimize the impact on (back end) BE memory controller firmware media fixup and host operating delays a minimum poll time of 50 ms has been specified for the NAND flash memory devices used in the IS systems. It is not a requirement that the firmware always finishes the scrub or update operations in 50 ms time but it is desirable that the system be able to respond to RD_STATUS1 commands potentially every 50 ms until the operations do complete. The 50 ms requirement is independent of Low/Normal speed functionality.
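A host-side sketch of this refresh handshake, assuming hypothetical bus and timer hooks (declared here but not defined), might poll roughly every 50 ms as follows; the timeout handling is an illustrative addition.

```c
/* Sketch of the handshake in [0058]-[0059]: issue RFS_BLK, then poll with
 * RD_STATUS1 only, until both RefReq and RefReqUrg read back as zero. */
#include <stdbool.h>
#include <stdint.h>

#define STATUS1_REFREQ      (1u << 2)
#define STATUS1_REFREQ_URG  (1u << 3)
#define REFRESH_POLL_MS     50

/* Hypothetical platform hooks, not real APIs. */
void    bus_send_rfs_blk(void);
uint8_t bus_rd_status1(void);
void    sleep_ms(unsigned ms);

bool host_refresh_block(unsigned max_polls)
{
    bus_send_rfs_blk();                                /* start the block refresh cycle   */
    for (unsigned i = 0; i < max_polls; i++) {
        sleep_ms(REFRESH_POLL_MS);                     /* firmware may need several polls */
        uint8_t s = bus_rd_status1();
        if ((s & (STATUS1_REFREQ | STATUS1_REFREQ_URG)) == 0)
            return true;                               /* refresh complete                */
    }
    return false;                                      /* still busy after allotted polls */
}
```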
[0060] It is the responsibility of the host (system processor) firmware to generate RFS_BLK commands in as timely a manner as possible. In a gaming system this will be the game console processor's responsibility.

Power Failure
[0061] There is no early warning for power down in many electronic devices that may incorporate flash memory.
[0062] The embodiments described above ensure that if the power to the NAND memory device is interrupted, the information stored in the memory will be available when the power is restored. For example, a block that was in the process of being updated and needs to be copied to another location will be taken care of upon power restoration because it will be contained in the copy queue. This is true even given the demanding read requirements and timing limitations of an isochronous system, such as that of a time sensitive application like video games and the like.
[0063] This is accomplished in part by utilizing command and data structures that notify the host of the need for urgent or non urgent processing of operations needed to assure data integrity. This notification requires only minimal processing time. The system processor then allocates time to update and service the queue on an as needed basis. Time will be allocated in a timely fashion, before the reliability of the data blocks falls past an undesirable threshold. An urgent command is used to update the queue. A non urgent command is used to scan or to copy a block. Once an entry is in the queue and the queue is updated, the status will no longer be urgent.
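As one possible reading of this flow, the controller-side handling of the granted windows could resemble the sketch below; the queue structure, the split of work between the urgent and non urgent grants, and the function names are assumptions for illustration only.

```c
/* Hedged sketch of paragraph [0063]: an urgent window is used to record the
 * pending copy in a persistent queue (so it survives power interruption);
 * later non urgent windows are used to scan or copy the block itself. */
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t block;     /* physical block needing attention */
    bool     valid;     /* an operation is pending */
    bool     queued;    /* its queue entry has been written */
} pending_op_t;

static pending_op_t pending;

extern void copy_queue_append(uint32_t block);    /* hypothetical, persistent record */
extern void copy_block_portion(uint32_t block);   /* hypothetical, partial block copy */

/* Called by the controller when the host grants a time window. */
void service_data_integrity(bool urgent_window)
{
    if (urgent_window && pending.valid && !pending.queued) {
        copy_queue_append(pending.block);   /* queue update: survives power loss */
        pending.queued = true;              /* alert level drops to non urgent */
    } else if (!urgent_window && pending.valid && pending.queued) {
        copy_block_portion(pending.block);  /* scan/copy work done in portions */
    }
}
```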

Claims

What is claimed is:

1. A non volatile storage device, comprising:
a flash memory array; and
a memory controller, the memory controller configured to perform data integrity operations of data in the flash memory array from time to time, the data integrity operations occurring at unpredictable intervals resulting in a variable latency in servicing requests for data storage operations from a host device, the memory controller further configured to interact with the host device to determine that the host device can tolerate a delay without loss of operability.
2. The non volatile storage device of claim 1, wherein the memory controller is further configured to cause the host to delay an operation if it has been determined that the delay can be tolerated.
3. The non volatile storage device of claim 2, wherein the memory controller is further configured to perform a data integrity operation during a delay.
4. The non volatile storage device of claim 3, wherein the memory controller is configured to issue two or more levels of requests to the host device.
5. The non volatile storage device of claim 4, wherein one of the two or more levels signifies an urgent request, and another of the two or more levels signifies a non-urgent request.
6. A non volatile storage device, comprising:
a flash memory array; and
a memory controller, the memory controller configured to perform data integrity operations of data in the flash memory array to maintain the integrity of the data in the flash memory array, the data integrity operations occurring at unpredictable intervals resulting in a variable latency in servicing requests for data storage operations from a host device, the memory controller further configured to manage the execution of data integrity operations so that a maximum allowed latency limit is not violated.
7. The non volatile storage device of claim 6, wherein the memory controller is further configured to monitor when the execution of a first data integrity operation triggers the execution of a subsequent data integrity operation, to ensure that the maximum allowed latency limit is not violated due to the triggering of the subsequent data integrity operation and the time required for carrying out the first and subsequent data integrity operations.
8. A system comprising:
a host device comprising a microprocessor; and
a flash memory storage device comprising a flash memory array and a memory controller, the host device operating in an isochronous manner, the memory controller configured to perform data integrity operations of data in the flash memory array to maintain the integrity of the data in the flash memory array, the data integrity operations occurring at variable intervals resulting in a variable latency in servicing read and write requests from the microprocessor of the flash memory storage device, the flash memory controller comprising a mechanism to service isochronous requests of the host and to notify the host of the need to allocate time for performing data integrity operations.
9. The system of claim 8, wherein the mechanism to service isochronous requests comprises one or more indicators provided at the end of each command cycle.
10. The system of claim 9, wherein the indicators cause a register to be read by the host, the register values signalling whether an urgent or non-urgent request for servicing the memory storage device is present.
11. The system of claim 9, wherein the mechanism to service isochronous requests is in the form of a data token.
12. The system of claim 9, wherein the mechanism to service isochronous requests comprises side band bits provided during the time interval in which a command and data are provided to the flash memory storage device.
13. The system of claim 8, wherein the mechanism comprises a read operation.
14. In a system comprising a host device and flash memory, a method for providing notice of a need for time to attend to needs of the flash memory, the method comprising:
providing a fixed period, occurring at regular intervals based upon a system clock, in which to specify and perform a read or write operation; and
providing, within the fixed period, a sub period in which to perform an additional operation, said additional operation supplying notice to the host of the need for time to attend to the needs of the flash memory.
15. A method for operating an isochronous system incorporating a NAND memory storage device and a system processor, the method comprising:
performing a read operation of the NAND memory;
performing a write operation of the NAND memory;
gathering an indicia of errors in the NAND memory;
assessing the indicia of errors;
providing one or more alerts of a first level to the system processor that a data integrity operation should be performed within the NAND memory;
changing the level of alerts to be provided to the system processor to a second level; and
providing one or more alerts of the second level to the system processor that a data integrity operation should be performed within the NAND memory.
16. The method of claim 15, further comprising receiving a grant of time at the NAND memory from the system processor and performing one or more data integrity operations within the NAND memory within the granted time.
17. The method of claim 16, wherein the grant of time received by the NAND flash memory device from the system processor varies depending upon the level of the alert provided to the system processor.
18. The method of claim 17, wherein when the second level of alert is provided to the system processor, the time granted by the system processor is sufficient to perform a portion of a block copy operation.
19. The method of claim 17, wherein when the first level of alert is provided to the system processor, the time granted by the system processor is sufficient to update a control record of a critical operation so that the operation is not lost in case of power interruption.
20. The method of claim 15, further comprising performing a portion of a block copy operation during time granted by the system processor in response to an alert of the second level.
21. The method of claim 15, further comprising performing an erase operation, and gathering a portion of the indicia of errors in the NAND memory as part of the erase operation.
22. The method of claim 15, wherein gathering the indicia of errors is done as part of performing a read operation.
23. The method of claim 15, wherein gathering the indicia of errors is done as part of performing a write operation.
24. The method of claim 15, wherein gathering the indicia of errors comprises performing an ECC operation and counting the number of bits in error during the operation.
25. The method of claim 24, further comprising comparing the number of errors found to a threshold of correctable errors.
26. The method of claim 25, further comprising determining that a block copy operation is needed if the number of errors found exceeds the threshold of correctable errors.
27. The method of claim 15, wherein gathering the indicia of errors comprises measuring the number and pattern of reads and writes.
28. The method of claim 15, wherein gathering the indicia of errors comprises measuring the number and pattern of erase cycles.
29. The method of claim 15, wherein assessing the indicia of errors comprises determining that a copy operation is needed.
30. The method of claim 15, wherein assessing the indicia of errors comprises determining that a scan operation is needed.
31. The method of claim 15, wherein assessing the indicia of errors comprises determining that there is at least one entry in a queue of copy operations.
32. The method of claim 15, wherein assessing the indicia of errors comprises determining that there is at least one entry in a queue of scan operations.
33. A method for operating an isochronous system, the method comprising:
providing a NAND memory storage device comprising a memory controller and a NAND memory array;
gathering an indicia of errors in the NAND memory storage device during a memory storage operation;
providing during each command cycle either:
a) an indication of a first type of request from the NAND memory storage device to the system processor indicative of a first time window needed by the NAND memory storage device to ensure data integrity, or
b) an indication of a second type of request from the NAND memory storage device to the system processor indicative of a second time window needed by the NAND memory storage device to ensure data integrity, or
c) an indication that no time is needed by the NAND memory storage device to ensure data integrity.
34. A method for operating an isochronous system, the method comprising:
providing a NAND memory storage device comprising a memory controller and a NAND memory array;
providing during each command cycle either:
a) a non-urgent request for servicing housekeeping operations from the NAND memory storage device to a system processor, or
b) an urgent request for servicing housekeeping operations from the NAND memory storage device to the system processor; and
executing one or more housekeeping operations associated with the non urgent requests between host commands without delaying the system processor.
PCT/US2008/072609 2007-08-08 2008-08-08 Urgency and time window manipulation to accommodate unpredictable memory operations WO2009021176A2 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US95469407P 2007-08-08 2007-08-08
US60/954,694 2007-08-08
US11/864,793 2007-09-28
US11/864,793 US8046524B2 (en) 2007-08-08 2007-09-28 Managing processing delays in an isochronous system
US11/864,740 US8099632B2 (en) 2007-08-08 2007-09-28 Urgency and time window manipulation to accommodate unpredictable memory operations
US11/864,740 2007-09-28

Publications (3)

Publication Number Publication Date
WO2009021176A2 true WO2009021176A2 (en) 2009-02-12
WO2009021176A3 WO2009021176A3 (en) 2009-04-16
WO2009021176A9 WO2009021176A9 (en) 2009-05-28

Family

ID=40053180

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2008/072609 WO2009021176A2 (en) 2007-08-08 2008-08-08 Urgency and time window manipulation to accommodate unpredictable memory operations

Country Status (1)

Country Link
WO (1) WO2009021176A2 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6148357A (en) * 1998-06-17 2000-11-14 Advanced Micro Devices, Inc. Integrated CPU and memory controller utilizing a communication link having isochronous and asynchronous priority modes
US20050073884A1 (en) * 2003-10-03 2005-04-07 Gonzalez Carlos J. Flash memory data correction and scrub techniques
US20060101210A1 (en) * 2004-10-15 2006-05-11 Lance Dover Register-based memory command architecture
US20060161728A1 (en) * 2005-01-20 2006-07-20 Bennett Alan D Scheduling of housekeeping operations in flash memory systems

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8046524B2 (en) 2007-08-08 2011-10-25 Sandisk Technologies Inc. Managing processing delays in an isochronous system
US8099632B2 (en) 2007-08-08 2012-01-17 Sandisk Technologies Inc. Urgency and time window manipulation to accommodate unpredictable memory operations
US20120311408A1 (en) * 2011-06-03 2012-12-06 Sony Corporation Nonvolatile memory, memory controller, nonvolatile memory accessing method, and program
US8862963B2 (en) * 2011-06-03 2014-10-14 Sony Corporation Nonvolatile memory, memory controller, nonvolatile memory accessing method, and program

Also Published As

Publication number Publication date
WO2009021176A9 (en) 2009-05-28
WO2009021176A3 (en) 2009-04-16

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 08797475; Country of ref document: EP; Kind code of ref document: A2)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 08797475; Country of ref document: EP; Kind code of ref document: A2)