CA2495180A1 - Multi-protocol storage appliance that provides integrated support for file and block access protocols - Google Patents
Multi-protocol storage appliance that provides integrated support for file and block access protocols
- Publication number
- CA2495180A1
- Authority
- CA
- Canada
- Prior art keywords
- storage
- protocol
- file
- appliance
- access
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H04L63/10—Network architectures or network communication protocols for network security for controlling access to devices or network resources
- G06F16/10—File systems; File servers
- G06F3/0607—Improving or facilitating administration, e.g. storage management, by facilitating the process of upgrading existing storage systems, e.g. for improving compatibility between host and storage device
- G06F3/0661—Format or protocol conversion arrangements
- G06F3/0664—Virtualisation aspects at device level, e.g. emulation of a storage device or system
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
Abstract
A multi-protocol storage appliance serves file and block protocol access to information stored on storage devices in an integrated manner for both network attached storage (NAS) and storage area network (SAN) deployments. A storage operating system of the appliance implements a file system (320) that cooperates with novel virtualization modules to provide a virtualization system (300) that "virtualizes" the storage space provided by the devices. The file system provides volume management capabilities for use in block-based access to the information stored on the devices. The virtualization system (300) allows the file system to logically organize the information as named file (324), directory (326) and virtual disk storage objects (322, 328) to thereby provide an integrated NAS and SAN appliance approach to storage by enabling file-based access to the files and directories while further enabling block-based access to the virtual disks.
Description
MULTI-PROTOCOL STORAGE APPLIANCE THAT
PROVIDES INTEGRATED SUPPORT FOR FILE AND
BLOCK ACCESS PROTOCOLS
FIELD OF THE INVENTION
The present invention relates to storage systems and, in particular, to a multi-protocol storage appliance that supports file and block access protocols.
BACKGROUND OF THE INVENTION
A storage system is a computer that provides storage service relating to the organization of information on writable persistent storage devices, such as memories, tapes or disks. The storage system is commonly deployed within a storage area network (SAN) or a network attached storage (NAS) environment. When used within a NAS environment, the storage system may be embodied as a file server including an operating system that implements a file system to logically organize the information as a hierarchical structure of directories and files on, e.g., the disks. Each "on-disk" file may be implemented as a set of data structures, e.g., disk blocks, configured to store information, such as the actual data for the file. A directory, on the other hand, may be implemented as a specially formatted file in which information about other files and directories is stored.
The file server, or filer, may be further configured to operate according to a client/server model of information delivery to thereby allow many client systems (clients) to access shared resources, such as files, stored on the filer. Sharing of files is a hallmark of a NAS system, which is enabled because of its semantic level of access to files and file systems. Storage of information on a NAS system is typically deployed over a computer network comprising a geographically distributed collection of interconnected communication links, such as Ethernet, that allow clients to remotely access the information (files) on the filer. The clients typically communicate with the filer by exchanging discrete frames or packets of data according to pre-defined protocols, such as the Transmission Control Protocol/Internet Protocol (TCP/IP).
In the client/server model, the client may comprise an application executing on a computer that "connects" to the filer over a computer network, such as a point-to-point link, shared local area network, wide area network or virtual private network implemented over a public network, such as the Internet. NAS systems generally utilize file-based access protocols; therefore, each client may request the services of the filer by issuing file system protocol messages (in the form of packets) to the file system over the network identifying one or more files to be accessed without regard to specific locations, e.g., blocks, in which the data are stored on disk. By supporting a plurality of file system protocols, such as the conventional Common Internet File System (CIFS), the Network File System (NFS) and the Direct Access File System (DAFS) protocols, the utility of the filer may be enhanced for networking clients.
A SAN is a high-speed network that enables establishment of direct connections between a storage system and its storage devices. The SAN may thus be viewed as an extension to a storage bus and, as such, an operating system of the storage system enables access to stored information using block-based access protocols over the "extended bus". In this context, the extended bus is typically embodied as Fibre Channel (FC) or Ethernet media adapted to operate with block access protocols, such as Small Computer Systems Interface (SCSI) protocol encapsulation over FC or TCP/IP/Ethernet.
A SAN arrangement or deployment allows decoupling of storage from the storage system, such as an application server, and some level of information storage sharing at the application server level. There are, however, environments wherein a SAN is dedicated to a single server. In some SAN deployments, the information is organized in the form of databases, while in others a file-based organization is employed. Where the information is organized as files, the client requesting the information maintains file mappings and manages file semantics, while its requests (and server responses) address the information in terms of block addressing on disk using, e.g., a logical unit number (lun).
Previous approaches generally address the SAN and NAS environments using two separate solutions. For those approaches that provide a single solution for both environments, the NAS capabilities are typically "disposed" over the SAN storage system platform using, e.g., a "sidecar" device attached to the SAN platform.
However, even these prior systems typically divide storage into distinct SAN and NAS storage domains. That is, the storage spaces for the SAN and NAS domains do not coexist and are physically partitioned by a configuration process implemented by, e.g., a user (system administrator).
An example of such a prior system is the Symmetrix® system platform available from EMC® Corporation. Broadly stated, individual disks of the SAN storage system (Symmetrix system) are allocated to a NAS sidecar device (e.g., a Celerra™ device) that, in turn, exports those disks to NAS clients via, e.g., the NFS and CIFS protocols. A system administrator makes decisions as to the number of disks and the locations of "slices" (extents) of those disks that are aggregated to construct "user-defined volumes" and, thereafter, how those volumes are used. The term "volume" as conventionally used in a SAN environment implies a storage entity that is constructed by specifying physical disks and extents within those disks via operations that combine those extents/disks into a user-defined volume storage entity. Notably, the SAN-based disks and NAS-based disks comprising the user-defined volumes are physically partitioned within the system platform.
Typically, the system administrator renders its decisions through a complex user interface oriented towards users that are knowledgeable about the underlying physical aspects of the system. That is, the user interface revolves primarily around physical disk structures and management that a system administrator must manipulate in order to present a view of the SAN platform on behalf of a client. For example, the user interface may prompt the administrator to specify the physical disks, along with the sizes of extents within those disks, needed to construct the user-defined volume. In addition, the interface prompts the administrator for the physical locations of those extents and disks, as well as the manner in which they are "glued together" (organized) and made visible (exported) to a SAN client as a user-defined volume corresponding to a disk or lun. Once the physical disks and their extents are selected to construct a volume, only those disks/extents comprise that volume. The system administrator must also specify the form of reliability, e.g., a Redundant Array of Independent (or Inexpensive) Disks (RAID) protection level and/or mirroring, for that constructed volume. RAID groups are then overlaid on top of those selected disks/extents.
In sum, the prior system approach requires a system administrator to finely configure the physical layout of the disks and their organization to create a user-defined volume that is exported as a single lun to a SAN client. All of the administration associated with this prior approach is grounded on a physical disk basis. For the system administrator to increase the size of the user-defined volume, disks are added and RAID calculations are re-computed to include redundant information associated with data stored on the disks constituting the volume. Clearly, this is a complex and costly approach. The present invention is directed to providing a simple and efficient integrated solution to SAN and NAS storage environments.
SUMMARY OF THE INVENTION
The present invention relates to a multi-protocol storage appliance that serves file and block protocol access to information stored on storage devices in an integrated manner for both network attached storage (NAS) and storage area network (SAN) deployments. A storage operating system of the appliance implements a file system that cooperates with novel virtualization modules to provide a virtualization system that "virtualizes" the storage space provided by the devices. Notably, the file system provides volume management capabilities for use in block-based access to the information stored on the devices. The virtualization system allows the file system to logically organize the information as named file, directory and virtual disk (vdisk) storage objects to thereby provide an integrated NAS and SAN appliance approach to storage by enabling file-based access to the files and directories, while further enabling block-based access to the vdisks.
In the illustrative embodiment, the virtualization modules are embodied, e.g., as a vdisk module and a Small Computer Systems Interface (SCSI) target module.
The vdisk module provides a data path from the block-based SCSI target module to blocks managed by the file system. The vdisk module also interacts with the file system to enable access by administrative interfaces, such as a streamlined user interface (UI), in response to a system administrator issuing commands to the multi-protocol storage appliance. In addition, the vdisk module manages SAN deployments by, among other things, implementing a comprehensive set of vdisk commands issued through the UI by a system administrator. These vdisk commands are converted to primitive file system operations that interact with the file system and the SCSI target module to implement the vdisks.
The SCSI target module, in turn, initiates emulation of a disk or logical unit number (lun) by providing a mapping procedure that translates logical block access to luns specified in access requests into virtual block access to vdisks and, for responses to the requests, vdisks into luns. The SCSI target module thus provides a translation layer of the virtualization system between a SAN block (lun) space and a file system space, where luns are represented as vdisks. By "disposing" SAN virtualization over the file system, the multi-protocol storage appliance reverses the approaches taken by prior systems to thereby provide a single unified storage platform for essentially all storage access protocols.
Advantageously, the integrated multi-protocol storage appliance provides access controls and, if appropriate, sharing of files and vdisks for all protocols, while preserving data integrity. The storage appliance further provides embedded/integrated virtualization capabilities that obviate the need for a user to apportion storage resources when creating NAS and SAN storage objects. These capabilities include a virtualized storage space that allows the SAN and NAS objects to coexist with respect to global space management within the appliance. Moreover, the integrated storage appliance provides simultaneous support for block access protocols to the same vdisk, as well as a heterogeneous SAN environment with support for clustering.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and further advantages of the invention may be better understood by referring to the following description in conjunction with the accompanying drawings in which like reference numerals indicate identical or functionally similar elements:
Fig. 1 is a schematic block diagram of a multi-protocol storage appliance configured to operate in storage area network (SAN) and network attached storage (NAS) environments in accordance with the present invention;
Fig. 2 is a schematic block diagram of a storage operating system of the multi-protocol storage appliance that may be advantageously used with the present invention;
Fig. 3 is a schematic block diagram of a virtualization system that is implemented by a file system interacting with virtualization modules according to the present invention; and
Fig. 4 is a flowchart illustrating the sequence of steps involved when accessing information stored on the multi-protocol storage appliance over a SAN network.
DETAILED DESCRIPTION OF AN ILLUSTRATIVE EMBODIMENT
The present invention is directed to a multi-protocol storage appliance that serves both file and block protocol access to information stored on storage devices in an integrated manner. In this context, the integrated multi-protocol appliance denotes a computer having features such as simplicity of storage service management and ease of storage reconfiguration, including reusable storage space, for users (system administrators) and clients of network attached storage (NAS) and storage area network (SAN) deployments. The storage appliance may provide NAS services through a file system, while the same appliance provides SAN services through SAN virtualization, including logical unit number (lun) emulation.
Fig. 1 is a schematic block diagram of the multi-protocol storage appliance configured to provide storage service relating to the organization of information on storage devices, such as disks 130. The storage appliance 100 is illustratively embodied as a storage system comprising a processor 122, a memory 124, a plurality of network adapters 125, 126 and a storage adapter 128 interconnected by a system bus 123.
The multi-protocol storage appliance 100 also includes a storage operating system 200 that provides a virtualization system (and, in particular, a file system) to logically organize the information as a hierarchical structure of named directory, file and virtual disk (vdisk) storage objects on the disks 130.
Whereas clients of a NAS-based network environment have a storage viewpoint of files, the clients of a SAN-based network environment have a storage viewpoint of blocks or disks. To that end, the multi-protocol storage appliance 100 presents (exports) disks to SAN clients through the creation of luns or vdisk objects. A vdisk object (hereinafter "vdisk") is a special file type that is implemented by the virtualization system and translated into an emulated disk as viewed by the SAN clients. The multi-protocol storage appliance thereafter makes these emulated disks accessible to the SAN clients through controlled exports, as described further herein.
In the illustrative embodiment, the memory 124 comprises storage locations that are addressable by the processor and adapters for storing software program code and data structures associated with the present invention. The processor and adapters may, in turn, comprise processing elements and/or logic circuitry configured to execute the software code and manipulate the data structures. The storage operating system 200, portions of which are typically resident in memory and executed by the processing elements, functionally organizes the storage appliance by, inter alia, invoking storage operations in support of the storage service implemented by the appliance. It will be apparent to those skilled in the art that other processing and memory means, including various computer readable media, may be used for storing and executing program instructions pertaining to the inventive system and method described herein.
The network adapter 125 couples the storage appliance to a plurality of clients 160a,b over point-to-point links, wide area networks, virtual private networks implemented over a public network (Internet) or a shared local area network, hereinafter referred to as an illustrative Ethernet network 165. Therefore, the network adapter 125 may comprise a network interface card (NIC) having the mechanical, electrical and signaling circuitry needed to connect the appliance to a network switch, such as a conventional Ethernet switch 170. For this NAS-based network environment, the clients are configured to access information stored on the multi-protocol appliance as files.
The clients 160 communicate with the storage appliance over network 165 by exchanging discrete frames or packets of data according to pre-defined protocols, such as the Transmission Control Protocol/Internet Protocol (TCP/IP).
The clients 160 may be general-purpose computers configured to execute applications over a variety of operating systems, including the UNIX® and Microsoft® Windows™ operating systems. Client systems generally utilize file-based access protocols when accessing information (in the form of files and directories) over a NAS-based network. Therefore, each client 160 may request the services of the storage appliance 100 by issuing file access protocol messages (in the form of packets) to the appliance over the network 165. For example, a client 160a running the Windows operating system may communicate with the storage appliance 100 using the Common Internet File System (CIFS) protocol over TCP/IP. On the other hand, a client 160b running the UNIX operating system may communicate with the multi-protocol appliance using either the Network File System (NFS) protocol over TCP/IP or the Direct Access File System (DAFS) protocol over a virtual interface (VI) transport in accordance with a remote DMA (RDMA) protocol over TCP/IP. It will be apparent to those skilled in the art that other clients running other types of operating systems may also communicate with the integrated multi-protocol storage appliance using other file access protocols.
The storage network "target" adapter 126 also couples the multi-protocol storage appliance 100 to clients 160 that may be further configured to access the stored information as blocks or disks. For this SAN-based network environment, the storage appliance is coupled to an illustrative Fibre Channel (FC) network 185. FC is a networking standard describing a suite of protocols and media that is primarily found in SAN deployments. The network target adapter 126 may comprise a FC host bus adapter (HBA) having the mechanical, electrical and signaling circuitry needed to connect the appliance 100 to a SAN network switch, such as a conventional FC switch 180.
In addition to providing FC access, the FC HBA may offload Fibre Channel network processing operations for the storage appliance.
The clients 160 generally utilize block-based access protocols, such as the Small Computer Systems Interface (SCSI) protocol, when accessing information (in the form of blocks, disks or vdisks) over a SAN-based network. SCSI is a peripheral input/output (I/O) interface with a standard, device independent protocol that allows different peripheral devices, such as disks 130, to attach to the storage appliance 100. In SCSI terminology, clients 160 operating in a SAN environment are initiators that initiate requests and commands for data. The multi-protocol storage appliance is thus a target configured to respond to the requests issued by the initiators in accordance with a request/response protocol. The initiators and targets have endpoint addresses that, in accordance with the FC protocol, comprise worldwide names (WWN). A WWN is a unique identifier, e.g., a node name or a port name, consisting of an 8-byte number.
The multi-protocol storage appliance 100 supports various SCSI-based protocols used in SAN deployments, including SCSI encapsulated over TCP (iSCSI) and SCSI encapsulated over FC (FCP). The initiators (hereinafter clients 160) may thus request the services of the target (hereinafter storage appliance 100) by issuing iSCSI and FCP messages over the network 165, 185 to access information stored on the disks.
It will be apparent to those skilled in the art that the clients may also request the services of the integrated multi-protocol storage appliance using other block access protocols. By supporting a plurality of block access protocols, the multi-protocol storage appliance provides a unified and coherent access solution to vdisks/luns in a heterogeneous SAN environment.
The storage adapter 128 cooperates with the storage operating system 200 executing on the storage appliance to access information requested by the clients. The information may be stored on the disks 130 or other similar media adapted to store information. The storage adapter includes I/O interface circuitry that couples to the disks over an I/O interconnect arrangement, such as a conventional high-performance, FC serial link topology. The information is retrieved by the storage adapter and, if necessary, processed by the processor 122 (or the adapter 128 itself) prior to being forwarded over the system bus 123 to the network adapters 125, 126, where the information is formatted into packets or messages and returned to the clients.
Storage of information on the appliance 100 is preferably implemented as one or more storage volumes (e.g., VOL1-2 150) that comprise a cluster of physical storage disks 130, defining an overall logical arrangement of disk space. The disks within a volume are typically organized as one or more groups of Redundant Array of Independent (or Inexpensive) Disks (RAID). RAID implementations enhance the reliability/integrity of data storage through the writing of data "stripes" across a given number of physical disks in the RAID group, and the appropriate storing of redundant information with respect to the striped data. The redundant information enables recovery of data lost when a storage device fails. It will be apparent to those skilled in the art that other redundancy techniques, such as mirroring, may be used in accordance with the present invention.
Specifically, each volume 150 is constructed from an array of physical disks 130 that are organized as RAID groups 140, 142, and 144. The physical disks of each RAID group include those disks configured to store striped data (D) and those configured to store parity (P) for the data, in accordance with an illustrative RAID 4 level configuration. It should be noted that other RAID level configurations (e.g., RAID 5) are also contemplated for use with the teachings described herein. In the illustrative embodiment, a minimum of one parity disk and one data disk may be employed. However, a typical implementation may include three data and one parity disk per RAID group and at least one RAID group per volume.
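The dedicated-parity arrangement described above can be illustrated in a few lines. This is a minimal sketch of RAID 4 style parity arithmetic, not the appliance's actual RAID implementation: the parity block is the byte-wise XOR of the (equal-sized) data blocks in a stripe, so any single failed block can be rebuilt from the survivors plus parity.

```python
# Minimal sketch of RAID 4 parity arithmetic (illustrative only).
# All blocks in a stripe are assumed to be the same size.
from functools import reduce

def parity(blocks: list[bytes]) -> bytes:
    """XOR all data blocks in a stripe to produce the parity block."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

def rebuild(surviving: list[bytes], parity_block: bytes) -> bytes:
    """Recover a single failed block from the survivors plus parity."""
    return parity(surviving + [parity_block])

stripe = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]  # three data disks (D)
p = parity(stripe)                                # dedicated parity disk (P)
assert rebuild([stripe[0], stripe[2]], p) == stripe[1]
```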
To facilitate access to the disks 130, the storage operating system 200 implements a write-anywhere file system of a novel virtualization system that "virtualizes" the storage space provided by disks 130. The file system logically organizes the information as a hierarchical structure of named directory and file objects (hereinafter "directories" and "files") on the disks. Each "on-disk" file may be implemented as a set of disk blocks configured to store information, such as data, whereas the directory may be implemented as a specially formatted file in which names and links to other files and directories are stored. The virtualization system allows the file system to further logically organize information as a hierarchical structure of named vdisks on the disks, thereby providing an integrated NAS and SAN appliance approach to storage by enabling file-based (NAS) access to the named files and directories, while further enabling block-based (SAN) access to the named vdisks on a file-based storage platform. The file system simplifies the complexity of management of the underlying physical storage in SAN deployments.
As noted, a vdisk is a special file type in a volume that derives from a plain (regular) file, but that has associated export controls and operation restrictions that support emulation of a disk. Unlike a file that can be created by a client using, e.g., the NFS or CIFS protocol, a vdisk is created on the multi-protocol storage appliance via, e.g., a user interface (UI) as a special typed file (object). Illustratively, the vdisk is a multi-inode object comprising a special file inode that holds data and at least one associated stream inode that holds attributes, including security information. The special file inode functions as a main container for storing data, such as application data, associated with the emulated disk. The stream inode stores attributes that allow luns and exports to persist over, e.g., reboot operations, while also enabling management of the vdisk as a single disk object in relation to SAN clients. An example of a vdisk and its associated inodes that may be advantageously used with the present invention is described in co-pending and commonly assigned U.S. Patent Application Serial No. (112056-0069) titled Storage Virtualization by Layering Vdisks on a File System, which application is hereby incorporated by reference as though fully set forth herein.
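To make the multi-inode layout concrete, the following sketch models the described on-disk shape of a vdisk: a data-bearing special file inode plus an attribute stream inode. The field and class names are illustrative assumptions, not the actual WAFL structures.

```python
# Hedged sketch of the vdisk's described shape: a data-bearing file inode
# plus an attribute stream inode. Names are illustrative, not WAFL internals.
from dataclasses import dataclass, field

@dataclass
class StreamInode:
    """Holds persistent vdisk attributes (size, export state, security)."""
    attributes: dict = field(default_factory=dict)

@dataclass
class VdiskInode:
    """Special file inode acting as the main container for application data."""
    path: str
    size: int
    data_blocks: list = field(default_factory=list)
    attr_stream: StreamInode = field(default_factory=StreamInode)

vdisk = VdiskInode(path="/vol/vol0/lun0", size=10 * 2**30)
vdisk.attr_stream.attributes["exported"] = True  # persists across reboots via the stream
```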
In the illustrative embodiment, the storage operating system is preferably the NetApp® Data ONTAP™ operating system available from Network Appliance, Inc., Sunnyvale, California that implements a Write Anywhere File Layout (WAFL™) file system. However, it is expressly contemplated that any appropriate storage operating system, including a write in-place file system, may be enhanced for use in accordance with the inventive principles described herein. As such, where the term "WAFL" is employed, it should be taken broadly to refer to any storage operating system that is otherwise adaptable to the teachings of this invention.
As used herein, the term "storage operating system" generally refers to the computer-executable code operable on a computer that manages data access and may, in the case of a multi-protocol storage appliance, implement data access semantics, such as the Data ONTAP storage operating system, which is implemented as a microkernel. The storage operating system can also be implemented as an application program operating over a general-purpose operating system, such as UNIX® or Windows NT®, or as a general-purpose operating system with configurable functionality, which is configured for storage applications as described herein.
In addition, it will be understood to those skilled in the art that the inventive system and method described herein may apply to any type of special-purpose (e.g., storage serving appliance) or general-purpose computer, including a standalone computer or portion thereof, embodied as or including a storage system. Moreover, the teachings of this invention can be adapted to a variety of storage system architectures including, but not limited to, a network-attached storage environment, a storage area network and a disk assembly directly attached to a client or host computer. The term "storage system" should therefore be taken broadly to include such arrangements in addition to any subsystems configured to perform a storage function and associated with other equipment or systems.
Fig. 2 is a schematic block diagram of the storage operating system 200 that may be advantageously used with the present invention. The storage operating system comprises a series of software layers organized to form an integrated network protocol stack or, more generally, a multi-protocol engine that provides data paths for clients to access information stored on the multi-protocol storage appliance using block and file access protocols. The protocol stack includes a media access layer 210 of network drivers (e.g., gigabit Ethernet drivers) that interfaces to network protocol layers, such as the IP layer 212 and its supporting transport mechanisms, the TCP layer 214 and the User Datagram Protocol (UDP) layer 216. A file system protocol layer provides multi-protocol file access and, to that end, includes support for the DAFS protocol 218, the NFS protocol 220, the CIFS protocol 222 and the Hypertext Transfer Protocol (HTTP) protocol 224. A VI layer 226 implements the VI architecture to provide direct access transport (DAT) capabilities, such as RDMA, as required by the DAFS protocol 218.
An iSCSI driver layer 228 provides block protocol access over the TCP/IP network protocol layers, while a FC driver layer 230 operates with the FC HBA 126 to receive and transmit block access requests and responses to and from the integrated storage appliance. The FC and iSCSI drivers provide FC-specific and iSCSI-specific access control to the luns (vdisks) and, thus, manage exports of vdisks to either iSCSI or FCP or, alternatively, to both iSCSI and FCP when accessing a single vdisk on the multi-protocol storage appliance. In addition, the storage operating system includes a disk storage layer 240 that implements a disk storage protocol, such as a RAID protocol, and a disk driver layer 250 that implements a disk access protocol such as, e.g., a SCSI protocol.
Bridging the disk software layers with the integrated network protocol stack layers is a virtualization system 300 according to the present invention. Fig. 3 is a schematic block diagram of the virtualization system 300 that is implemented by a file system 320 cooperating with virtualization modules illustratively embodied as, e.g., vdisk module 330 and SCSI target module 310. It should be noted that the vdisk module 330, file system 320 and SCSI target module 310 can be implemented in software, hardware, firmware, or a combination thereof. The vdisk module 330 is layered on (and interacts with) the file system 320 to provide a data path from the block-based SCSI target module to blocks managed by the file system. The vdisk module also enables access by administrative interfaces, such as a streamlined user interface (UI 350), in response to a system administrator issuing commands to the multi-protocol storage appliance 100. In essence, the vdisk module 330 manages SAN deployments by, among other things, implementing a comprehensive set of vdisk (lun) commands issued through the UI 350 by a system administrator. These vdisk commands are converted to primitive file system operations ("primitives") that interact with the file system 320 and the SCSI target module 310 to implement the vdisks.
The SCSI target module 310, in turn, initiates emulation of a disk or lun by providing a mapping procedure that translates logical block access to luns specified in access requests into virtual block access to the special vdisk file types and, for responses to the requests, vdisks into luns. The SCSI target module is illustratively disposed between the FC and iSCSI drivers 228, 230 and the file system 320 to thereby provide a translation layer of the virtualization system 300 between the SAN block (lun) space and the file system space, where luns are represented as vdisks. By "disposing" SAN virtualization over the file system 320, the multi-protocol storage appliance reverses the approaches taken by prior systems to thereby provide a single unified storage platform for essentially all storage access protocols.
According to the invention, the file system provides capabilities for use in file-based access to information stored on the storage devices, such as disks. In addition, the file system provides volume management capabilities for use in block-based access to the stored information. That is, in addition to providing file system semantics (such as differentiation of storage into discrete objects and naming of those storage objects), the file system 320 provides functions normally associated with a volume manager. As described herein, these functions include (i) aggregation of the disks, (ii) aggregation of storage bandwidth of the disks, and (iii) reliability guarantees, such as mirroring and/or parity (RAID), to thereby present one or more storage objects layered on the file system. A feature of the multi-protocol storage appliance is the simplicity of use associated with these volume management capabilities, particularly when used in SAN deployments.
The file system 320 illustratively implements the WAFL file system having an on-disk format representation that is block-based using, e.g., 4 kilobyte (kB) blocks and using inodes to describe the files. The WAFL file system uses files to store meta-data describing the layout of its file system; these meta-data files include, among others, an inode file. A file handle, i.e., an identifier that includes an inode number, is used to retrieve an inode from disk. A description of the structure of the file system, including the inode file, is provided in U.S. Patent No. 5,819,292, titled Method for Maintaining Consistent States of a File System and for Creating User Accessible Read Only Copies of a File System by David Hitz et al., issued October 6, 1998, which patent is hereby incorporated by reference as though fully set forth herein.
Broadly stated, all inodes of the file system are organized into the inode file. A file system (FS) info block specifies the layout of information in the file system and includes an inode of a file that includes all other inodes of the file system. Each volume has an FS info block that is preferably stored at a fixed location within, e.g., a RAID group of the file system. The inode of the root FS info block may directly reference (point to) blocks of the inode file or may reference indirect blocks of the inode file that, in turn, reference direct blocks of the inode file. Within each direct block of the inode file are embedded inodes, each of which may reference indirect blocks that, in turn, reference data blocks of a file or vdisk.
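The block tree just described, in which blocks may reference data directly or through indirect blocks, can be sketched as a simple recursive traversal. The structures below are illustrative stand-ins, not the on-disk WAFL format.

```python
# Minimal sketch of the described block tree: indirect blocks reference
# further blocks, leaves hold data. Structures are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Block:
    data: bytes = b""
    children: list["Block"] = field(default_factory=list)  # indirect references

def leaf_blocks(block: Block) -> list[bytes]:
    """Collect data blocks, descending through indirect blocks."""
    if not block.children:
        return [block.data]
    out = []
    for child in block.children:
        out.extend(leaf_blocks(child))
    return out

indirect = Block(children=[Block(data=b"chunk-0"), Block(data=b"chunk-1")])
root = Block(children=[indirect, Block(data=b"chunk-2")])
assert leaf_blocks(root) == [b"chunk-0", b"chunk-1", b"chunk-2"]
```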
According to an aspect of the invention, the file system implements access operations to vdisks 322, as well as to files 324 and directories (dir 326) that coexist with respect to global space management of units of storage, such as volumes 150 and/or qtrees 328. A qtree 328 is a special directory that has the properties of a logical sub-volume within the namespace of a physical volume. Each file system storage object (file, directory or vdisk) is illustratively associated with one qtree, and quotas, security properties and other items can be assigned on a per-qtree basis. The vdisks and files/directories may be layered on top of qtrees 328 that, in turn, are layered on top of volumes 150 as abstracted by the file system "virtualization" layer 320.
Note that the vdisk storage objects in the file system 320 are associated with SAN deployments of the multi-protocol storage appliance, whereas the file and directory storage objects are associated with NAS deployments of the appliance. The files and directories are generally not accessible via the FC or SCSI block access protocols; however, a file can be converted to a vdisk and then accessed by either the SAN or NAS protocol. The vdisks are accessible as luns from the SAN (FC and SCSI) protocols and as files by the NAS (NFS and CIFS) protocols.
In another aspect of the invention, the virtualization system 300 provides a virtualized storage space that allows SAN and NAS storage objects to coexist with respect to global space management by the file system 320. To that end, the virtualization system 300 exploits the characteristics of the file system, including its inherent ability to aggregate disks and abstract them into a single pool of storage. For example, the system 300 leverages the volume management capability of the file system 320 to organize a collection of disks 130 into one or more volumes 150 representing a pool of global storage space. The pool of global storage is then made available for both SAN and NAS deployments through the creation of vdisks 322 and files 324, respectively. In addition to sharing the same global storage space, the vdisks and files share the same pool of available storage from which to draw on when expanding the SAN and/or NAS deployments. Unlike prior systems, there is no physical partitioning of disks within the global storage space of the multi-protocol storage appliance.
The multi-protocol storage appliance substantially simplifies management of the global storage space by allowing a user to manage both NAS and SAN storage objects using the single pool of storage resources. In particular, free block space is managed from a global free pool on a fine-grained block basis for both SAN and NAS deployments. If those storage objects were managed discretely (separately), the user would be required to keep a certain amount of "spare" disks on hand for each type of object to respond to changes in, e.g., business objectives. The overhead required to maintain that discrete approach is greater than if those objects could be managed out of a single pool of resources with only a single group of spared disks available for expansion as business dictates. Blocks released individually by vdisk operations are immediately reusable by NAS objects (and vice versa). The details of such management are transparent to the administrator. This represents a "total cost of ownership" advantage of the integrated multi-protocol storage appliance.
The virtualization system 300 further provides reliability guarantees for those SAN and NAS storage objects coexisting in the global storage space of the multi-protocol appliance 100. In particular, the reliability guarantee in the face of disk failures, provided through techniques such as RAID or mirroring performed at a physical block level in conventional SAN systems, is an inherited feature from the file system 320 of the appliance 100. This simplifies administration by allowing an administrator to make global decisions on the underlying redundant physical storage that apply equally to vdisks and NAS objects in the file system.
As noted, the file system 320 organizes information as file, directory and vdisk objects within volumes 150 of disks 130. Underlying each volume 150 is a collection of RAID groups 140-144 that provide protection and reliability against disk failures within the volume. The information serviced by the multi-protocol storage appliance is protected according to an illustrative RAID 4 configuration. This level of protection may be extended to include, e.g., synchronous mirroring on the appliance platform. A vdisk 322 created on a volume that is protected by RAID 4 "inherits" the added protection of synchronous mirroring if that latter protection is specified for the volume 150. In this case, the synchronous mirroring protection is not a property of the vdisk but rather a property of the underlying volume and the reliability guarantees of the file system 320. This "inheritance" feature of the multi-protocol storage appliance simplifies management of a vdisk because a system administrator does not have to deal with reliability issues.
In addition, the virtualization system 300 aggregates bandwidth of the disks 130 without requiring user knowledge of the physical construction of those disks. The file system 320 is configured to write (store) data on the disks as long, continuous stripes across those disks in accordance with input/output (I/O) storage operations that aggregate the bandwidth of all the disks of a volume for stored data. When information is stored or retrieved from the vdisks, the I/O operations are not directed to disks specified by a user. Rather, those operations are transparent to the user because the file system "stripes" that data across all the disks of the volume in a reliable manner according to its write anywhere layout policy. As a result of virtualization of block storage, I/O bandwidth to a vdisk can be the maximum bandwidth of the underlying physical disks of the file system, regardless of the size of the vdisk (unlike typical physical implementations of luns in conventional block access products).
Moreover, the virtualization system leverages file system placement, management and block allocation policies to make the vdisks function correctly within the multi-protocol storage appliance. The vdisk block placement policies are a function of the underlying virtualizing file system and there are no permanent physical bindings of file system blocks to SCSI logical block addresses in the face of modifications. The vdisks may be transparently reorganized to perhaps alter data access pattern behaviour.
For both SAN and NAS deployments, the block allocation policies are independent of physical properties of the disks (e.g., geometries, sizes, cylinders, sector size). The file system provides file-based management of the files 324 and directories 326 and, in accordance with the invention, vdisks 322 residing within the volumes 150. When a disk is added to the array attached to the multi-protocol storage appliance, that disk is integrated into an existing volume to increase the entire volume space, which space may be used for any purpose, e.g., more vdisks or more files.
Management of the integrated multi-protocol storage appliance 100 is simplified through the use of the UI 350 and the vdisk command set available to the system administrator. The UI 350 illustratively comprises both a command line interface (CLI 352) and a graphical user interface (GUI 354) used to implement the vdisk command set to, among other things, create a vdisk, increase/decrease the size of a vdisk and/or destroy a vdisk. The storage space for the destroyed vdisk may then be reused for, e.g., a NAS-based file in accordance with the virtualized storage space feature of the appliance 100. A vdisk may increase ("grow") or decrease ("shrink") under user control while preserving block and NAS multi-protocol access to its application data.
The UI 350 simplifies management of the multi-protocol SAN/NAS storage appliance by, e.g., obviating the need for a system administrator to explicitly configure and specify the disks to be used when creating a vdisk. For instance, to create a vdisk, the system administrator need merely issue a vdisk ("lun create") command through, e.g., the CLI 352 or GUI 354. The vdisk command specifies creation of a vdisk (lun), along with the desired size of the vdisk and a path descriptor (pathname) to that vdisk. In response, the file system 320 cooperates with the vdisk module 330 to "virtualize" the storage space provided by the underlying disks and create a vdisk as specified by the create command. Specifically, the vdisk module 330 processes the vdisk command to "call" primitive operations ("primitives") in the file system 320 that implement high-level notions of vdisks (luns). For example, the "lun create" command is translated into a series of file system primitives that create a vdisk with correct information and size, as well as at the correct location. These file system primitives include operations to create a file inode (create file), create a stream inode (create stream), and store information in the stream inode (stream write).
The result of the lun create command is the creation of a vdisk 322 having the specified size and that is RAID protected without having to explicitly specify such protection. Storage of information on disks of the multi-protocol storage appliance is not typed; only "raw" bits are stored on the disks. The file system organizes those bits into vdisks and RAID groups across all of the disks within a volume. Thus, the created vdisk 322 does not have to be explicitly configured because the virtualization system 300 creates a vdisk in a manner that is transparent to the user. The created vdisk inherits high-performance characteristics, such as reliability and storage bandwidth, of the underlying volume created by the file system.
The CLI 352 and/or GUI 354 also interact with the vdisk module 330 to introduce attributes and persistent lun map bindings that assign numbers to the created vdisk. These lun map bindings are thereafter used to export vdisks as certain SCSI identifiers (IDs) to the clients. In particular, the created vdisk can be exported via a lun mapping technique to enable a SAN client to "view" (access) a disk. Vdisks (luns) generally require strict controlled access in a SAN environment; sharing of luns in a SAN environment typically occurs only in limited circumstances, such as clustered file systems, clustered operating systems and multi-pathing configurations. A system administrator of the multi-protocol storage appliance determines which vdisks (luns) can be exported to a SAN client. Once a vdisk is exported as a lun, the client may access the vdisk over the SAN network utilizing a block access protocol, such as FCP and iSCSI.
SAN clients typically identify and address disks by logical numbers or luns. However, an "ease of management" feature of the multi-protocol storage appliance is that system administrators can manage vdisks and their addressing by logical names. To that end, the vdisk module 330 of the multi-protocol storage appliance maps logical names to vdisks. For example, when creating a vdisk, the system administrator "right size" allocates the vdisk and assigns it a name that is generally meaningful to its intended application (e.g., /vol/vol0/database to hold a database). The administrative interface provides name-based management of luns/vdisks (as well as files) exported from the storage appliance on the clients, thereby providing a uniform and unified naming scheme for block-based (as well as file-based) storage.
The multi-protocol storage appliance manages export control of vdisks by logical names through the use of initiator groups (igroups). An igroup is a logical named entity that is assigned to one or more addresses associated with one or more initiators (depending upon whether a clustered environment is configured). An "igroup create" command essentially "binds" (associates) those addresses, which may comprise WWN addresses or iSCSI IDs, to a logical name or igroup. A "lun map" command is then used to export one or more vdisks to the igroup, i.e., make the vdisk(s) "visible" to the igroup. In this sense, the "lun map" command is equivalent to an NFS export or a CIFS share. The WWN addresses or iSCSI IDs thus identify the clients that are allowed to access those vdisks specified by the lun map command. Thereafter, the logical name is used with all operations internal to the storage operating system. This logical naming abstraction is pervasive throughout the entire vdisk command set, including interactions between a user and the multi-protocol storage appliance. In particular, the igroup naming convention is used for all subsequent export operations and listings of luns that are exported for various SAN clients.
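The export-control flow can be modeled in a few lines: "igroup create" binds initiator addresses (WWNs or iSCSI IDs) to a logical name, and "lun map" makes one or more vdisks visible to that igroup. The data structures and the example addresses below are hypothetical.

```python
# Illustrative model of igroup-based export control, not the real commands.
igroups: dict[str, set[str]] = {}
lun_maps: dict[str, set[str]] = {}  # igroup name -> exported vdisk paths

def igroup_create(name: str, addresses: set[str]) -> None:
    """Bind initiator addresses (WWNs or iSCSI IDs) to a logical name."""
    igroups[name] = addresses

def lun_map(igroup: str, vdisk_path: str) -> None:
    """Make a vdisk 'visible' to an igroup (akin to an NFS export)."""
    lun_maps.setdefault(igroup, set()).add(vdisk_path)

def may_access(initiator_addr: str, vdisk_path: str) -> bool:
    """An initiator sees a lun only if its address is in a mapped igroup."""
    return any(initiator_addr in igroups[g] and vdisk_path in paths
               for g, paths in lun_maps.items())

igroup_create("db-cluster", {"10:00:00:00:c9:2b:ff:01",
                             "iqn.1992-08.com.example:host1"})
lun_map("db-cluster", "/vol/vol0/lun1")
assert may_access("iqn.1992-08.com.example:host1", "/vol/vol0/lun1")
assert not may_access("10:00:00:00:c9:2b:ff:99", "/vol/vol0/lun1")
```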
Fig. 4 is a schematic flow chart illustrating the sequence of steps involved when accessing information stored on the multi-protocol storage appliance over a SAN network. Here, a client communicates with the storage appliance 100 using a block access protocol over a network coupled to the appliance. If the client is client 160a running the Windows operating system, the block access protocol is illustratively the FCP protocol used over the network 185. On the other hand, if the client is client 160b running the UNIX operating system, the block access protocol is illustratively the iSCSI protocol used over network 165. The sequence starts at Step 400 and proceeds to Step 402 where the client generates a request to access information residing on the multi-protocol storage appliance and, in Step 404, the request is forwarded as a conventional FCP or iSCSI block access request over the network 185, 165.
At Step 406, the request is received at network adapter 126, 125 of the storage appliance 100, where it is processed by the integrated network protocol stack and passed to the virtualization system 300 at Step 408. Specifically, if the request is a FCP request, it is processed as, e.g., a 4k block request to access (i.e., read/write) data by the FC driver 230. If the request is an iSCSI protocol request, it is received at the media access layer (the Intel gigabit Ethernet) and passed through the TCP/IP network protocol layers to the virtualization system.
Command and control operations, including addressing information, associated with the SCSI protocol are generally directed to disks or luns; however, the file system 320 does not recognize luns. As a result, the SCSI target module 310 of the virtualization system initiates emulation of a lun in order to respond to the SCSI commands contained in the request (Step 410). To that end, the SCSI target module has a set of application programming interfaces (APIs 360) that are based on the SCSI protocol and that enable a consistent interface to both the iSCSI and FCP drivers 228, 230. The SCSI target module further implements a mapping/translation procedure that essentially translates a lun into a vdisk. At Step 412, the SCSI target module maps the addressing information, e.g., FC routing information, of the request to the internal structure of the file system.
The file system 320 is illustratively a message-based system; as such, the SCSI target module 310 transposes the SCSI request into a message representing an operation directed to the file system. For example, the message generated by the SCSI target module may include a type of operation (e.g., read, write) along with a pathname (e.g., a path descriptor) and a filename (e.g., a special filename) of the vdisk object represented in the file system. The SCSI target module 310 passes the message into the file system layer 320 as, e.g., a function call 365, where the operation is performed.
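A sketch of this translation step, under assumed structures: the SCSI target module resolves the lun named in the request to the vdisk's location in the file system (Step 412) and emits a message carrying the operation type, pathname and filename for the file system layer. The mapping table and message shape are illustrative assumptions.

```python
# Hedged sketch of the lun-to-vdisk translation and message transposition.
lun_table = {0: ("/vol/vol0", "lun1")}  # lun id -> (path descriptor, special filename)

def transpose_scsi_request(lun: int, op: str, offset: int, length: int) -> dict:
    """Turn a block access request into a file-system-directed message."""
    pathname, filename = lun_table[lun]
    return {"op": op, "pathname": pathname, "filename": filename,
            "offset": offset, "length": length}

# The message is then passed into the file system layer as, e.g., a function call.
msg = transpose_scsi_request(lun=0, op="read", offset=4096, length=4096)
assert msg["filename"] == "lun1"
```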
In response to receiving the message, the file system 320 maps the pathname to inode structures to obtain the file handle corresponding to the vdisk 322. Armed with a file handle, the storage operating system 200 can convert that handle to a disk block and, thus, retrieve the block (inode) from disk. Broadly stated, the file handle is an internal representation of the data structure, i.e., a representation of the inode data structure that is used internally within the file system. The file handle generally consists of a plurality of components including a file ID (inode number), a snapshot ID, a generation ID and a flag. The file system utilizes the file handle to retrieve the special file inode and at least one associated stream inode that comprise the vdisk within the file system structure implemented on the disks 130.
In Step 414, the file system generates operations to load (retrieve) the requested data from disk 130 if it is not resident "in core", i.e., in the memory 124. If the information is not in memory, the file system 320 indexes into the inode file using the inode number to access an appropriate entry and retrieve a logical volume block number (VBN). The file system then passes the logical VBN to the disk storage (RAID) layer 240, which maps that logical number to a disk block number and sends the latter to an appropriate driver (e.g., SCSI) of the disk driver layer 250. The disk driver accesses the disk block number from disk 130 and loads the requested data block(s) in memory 124. In Step 416, the requested data is processed by the virtualization system 300. For example, the data may be processed in connection with a read or write operation directed to a vdisk or in connection with a query command for the vdisk.
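The load path of Step 414 can be sketched with toy tables standing in for the real layers: the file system indexes the inode file by inode number to obtain logical VBNs, the RAID layer maps each VBN to a disk block number, and the disk driver fetches the block.

```python
# Toy stand-ins for the described layers (illustrative only).
inode_file = {4711: [100, 101]}            # inode number -> logical VBNs
vbn_to_dbn = {100: (2, 55), 101: (3, 17)}  # VBN -> (disk id, disk block number)
disks = {2: {55: b"block-A"}, 3: {17: b"block-B"}}

def read_vdisk_blocks(inode_number: int) -> list[bytes]:
    """File system -> RAID layer -> disk driver, as in Step 414."""
    data = []
    for vbn in inode_file[inode_number]:   # file system lookup
        disk_id, dbn = vbn_to_dbn[vbn]     # RAID layer mapping
        data.append(disks[disk_id][dbn])   # disk driver access
    return data

assert read_vdisk_blocks(4711) == [b"block-A", b"block-B"]
```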
The SCSI target module 310 of the virtualization system 300 emulates support for the conventional SCSI protocol by providing meaningful "simulated" information about a requested vdisk. Such information is either calculated by the SCSI target module or stored persistently in, e.g., the attributes stream inode of the vdisk. At Step 418, the SCSI target module 310 loads the requested block-based information (as translated from file-based information provided by the file system 320) into a block access (SCSI) protocol message. For example, the SCSI target module 310 may load information, such as the size of a vdisk, into a SCSI protocol message in response to a SCSI query command request. Upon completion of the request, the storage appliance (and operating system) returns a reply (e.g., as a SCSI "capacity" response message) to the client over the network (Step 420). The sequence then ends at Step 422.
It should be noted that the software "path" through the storage operating system io layers described above needed to perform data storage access for the client request re-ceived at the mufti-protocol storage appliance may alternatively be implemented in hardware. That is, in an alternate embodiment of the invention, a storage access re-quest data path through the operating system layers (including the virtualization system 300) may be implemented as logic circuitry embodied within a field programmable gate is array (FPGA) or an application specific integrated circuit (ASIC). This type of hard-ware implementation increases the performance of the storage service provided by ap-pliance 100 in response to a file access or block access request issued by a client 160.
Moreover, in another alternate embodiment of the invention, the processing elements of network and storage adapters 125-128 may be configured to offload some or all of the ao packet processing and storage access operations, respectively, from processor 122 to thereby increase the performance of the storage service provided by the mufti-protocol storage appliance. It is expressly contemplated that the various processes, architectures and procedures described herein can be implemented in hardware, firmware or soft-ware.
as Advantageously, the integrated mufti-protocol storage appliance provides access controls and, if appropriate, sharing of files and vdisks for all protocols, while preserv-ing data integrity. The storage appliance further provides embedded/integrated virtu-alization capabilities that obviate the need for a user to apportion storage resources when creating NAS and SAN storage objects. These capabilities include a virtualized 3o storage space that allows SAN and NAS storage objects to coexist with respect to global space management within the appliance. Moreover, the integrated storage appli-ante provides simultaneous support for block access protocols (iSCSI and FCP) to the same vdisk, as well as a heterogeneous SAN environment with support for clustering.
In sum, the multi-protocol storage appliance provides a single unified storage platform for all storage access protocols.
The foregoing description has been directed to specific embodiments. of this in-vention. It will be apparent, however, that other variations and modifications may be made to the described embodiments, with the attainment of some or all of their advan-tages. For example, it is expressly contemplated that the teachings of this invention can be implemented as software, including a computer-readable medium having program io instructions executing on a computer, hardware, firmware, or a combination thereof.
Accordingly this description is to be taken only by way of example and not to other-wise limit the scope of the invention. It is thus the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.
is What is claimed is:
MULTI-PROTOCOL STORAGE APPLIANCE THAT PROVIDES INTEGRATED SUPPORT FOR FILE AND BLOCK ACCESS PROTOCOLS
FIELD OF THE INVENTION
The present invention relates to storage systems and, in particular, to a multi-protocol storage appliance that supports file and block access protocols.
BACKGROUND OF THE INVENTION
A storage system is a computer that provides storage service relating to the organization of information on writable persistent storage devices, such as memories, tapes or disks. The storage system is commonly deployed within a storage area network (SAN) or a network attached storage (NAS) environment. When used within a NAS environment, the storage system may be embodied as a file server including an operating system that implements a file system to logically organize the information as a hierarchical structure of directories and files on, e.g., the disks. Each "on-disk" file may be implemented as a set of data structures, e.g., disk blocks, configured to store information, such as the actual data for the file. A directory, on the other hand, may be implemented as a specially formatted file in which information about other files and directories is stored.
The file server, or filer, may be further configured to operate according to a client/server model of information delivery to thereby allow many client systems (clients) to access shared resources, such as files, stored on the filer. Sharing of files is a hallmark of a NAS system, which is enabled because of its semantic level of access to files and file systems. Storage of information on a NAS system is typically deployed over a computer network comprising a geographically distributed collection of interconnected communication links, such as Ethernet, that allow clients to remotely access the information (files) on the filer. The clients typically communicate with the filer by exchanging discrete frames or packets of data according to pre-defined protocols, such as the Transmission Control Protocol/Internet Protocol (TCP/IP).
In the client/server model, the client may comprise an application executing on a computer that "connects" to the filer over a computer network, such as a point-to-point link, shared local area network, wide area network or virtual private network implemented over a public network, such as the Internet. NAS systems generally utilize file-based access protocols; therefore, each client may request the services of the filer by issuing file system protocol messages (in the form of packets) to the file system over the network identifying one or more files to be accessed without regard to specific locations, e.g., blocks, in which the data are stored on disk. By supporting a plurality of file system protocols, such as the conventional Common Internet File System (CIFS), the Network File System (NFS) and the Direct Access File System (DAFS) protocols, the utility of the filer may be enhanced for networking clients.
A SAN is a high-speed network that enables establishment of direct connections between a storage system and its storage devices. The SAN may thus be viewed as an extension to a storage bus and, as such, an operating system of the storage system enables access to stored information using block-based access protocols over the "extended bus". In this context, the extended bus is typically embodied as Fibre Channel (FC) or Ethernet media adapted to operate with block access protocols, such as Small Computer Systems Interface (SCSI) protocol encapsulation over FC or TCP/IP/Ethernet.
A SAN arrangement or deployment allows decoupling of storage from the storage system, such as an application server, and some level of information storage sharing at the application server level. There are, however, environments wherein a SAN is dedicated to a single server. In some SAN deployments, the information is organized in the form of databases, while in others a file-based organization is employed. Where the information is organized as files, the client requesting the information maintains file mappings and manages file semantics, while its requests (and server responses) address the information in terms of block addressing on disk using, e.g., a logical unit number (lun).
Previous approaches generally address the SAN and NAS environments using two separate solutions. For those approaches that provide a single solution for both environments, the NAS capabilities are typically "disposed" over the SAN storage system platform using, e.g., a "sidecar" device attached to the SAN platform.
However, even these prior systems typically divide storage into distinct SAN and NAS storage domains. That is, the storage spaces for the SAN and NAS domains do not coexist and are physically partitioned by a configuration process implemented by, e.g., a user (system administrator).
An example of such a prior system is the Symmetrix® system platform available from EMC® Corporation. Broadly stated, individual disks of the SAN storage system (Symmetrix system) are allocated to a NAS sidecar device (e.g., Celerra™ device) that, in turn, exports those disks to NAS clients via, e.g., the NFS and CIFS protocols. A system administrator makes decisions as to the number of disks and the locations of "slices" (extents) of those disks that are aggregated to construct "user-defined volumes" and, thereafter, how those volumes are used. The term "volume" as conventionally used in a SAN environment implies a storage entity that is constructed by specifying physical disks and extents within those disks via operations that combine those extents/disks into a user-defined volume storage entity. Notably, the SAN-based disks and NAS-based disks comprising the user-defined volumes are physically partitioned within the system platform.
Typically, the system administrator renders its decisions through a complex user interface oriented towards users that are knowledgeable about the underlying physical aspects of the system. That is, the user interface revolves primarily around physical disk structures and management that a system administrator must manipulate in order to present a view of the SAN platform on behalf of a client. For example, the user interface may prompt the administrator to specify the physical disks, along with the sizes of extents within those disks, needed to construct the user-defined volume. In addition, the interface prompts the administrator for the physical locations of those extents and disks, as well as the manner in which they are "glued together" (organized) and made visible (exported) to a SAN client as a user-defined volume corresponding to a disk or lun. Once the physical disks and their extents are selected to construct a volume, only those disks/extents comprise that volume. The system administrator must also specify the form of reliability, e.g., a Redundant Array of Independent (or Inexpensive) Disks (RAID) protection level and/or mirroring, for that constructed volume. RAID groups are then overlaid on top of those selected disks/extents.
In sum, the prior system approach requires a system administrator to finely configure the physical layout of the disks and their organization to create a user-defined volume that is exported as a single lun to a SAN client. All of the administration associated with this prior approach is grounded on a physical disk basis. For the system administrator to increase the size of the user-defined volume, disks are added and RAID calculations are re-computed to include redundant information associated with data stored on the disks constituting the volume. Clearly, this is a complex and costly approach. The present invention is directed to providing a simple and efficient integrated solution to SAN and NAS storage environments.
SUMMARY OF THE INVENTION
The present invention relates to a multi-protocol storage appliance that serves file and block protocol access to information stored on storage devices in an integrated manner for both network attached storage (NAS) and storage area network (SAN) deployments. A storage operating system of the appliance implements a file system that cooperates with novel virtualization modules to provide a virtualization system that "virtualizes" the storage space provided by the devices. Notably, the file system provides volume management capabilities for use in block-based access to the information stored on the devices. The virtualization system allows the file system to logically organize the information as named file, directory and virtual disk (vdisk) storage objects to thereby provide an integrated NAS and SAN appliance approach to storage by enabling file-based access to the files and directories, while further enabling block-based access to the vdisks.
In the illustrative embodiment, the virtualization modules are embodied, e.g., as a vdisk module and a Small Computer Systems Interface (SCSI) target module.
The vdisk module provides a data path from the block-based SCSI target module to blocks managed by the file system. The vdisk module also interacts with the file system to enable access by administrative interfaces, such as a streamlined user interface (UI), in response to a system administrator issuing commands to the multi-protocol storage appliance. In addition, the vdisk module manages SAN deployments by, among other things, implementing a comprehensive set of vdisk commands issued through the UI by a system administrator. These vdisk commands are converted to primitive file system operations that interact with the file system and the SCSI target module to implement the vdisks.
The SCSI target module, in turn, initiates emulation of a disk or logical unit number (lun) by providing a mapping procedure that translates logical block access to luns specified in access requests into virtual block access to vdisks and, for responses to the requests, vdisks into luns. The SCSI target module thus provides a translation layer of the virtualization system between a SAN block (lun) space and a file system space, where luns are represented as vdisks. By "disposing" SAN virtualization over the file system, the multi-protocol storage appliance reverses the approaches taken by prior systems to thereby provide a single unified storage platform for essentially all storage access protocols.
Advantageously, the integrated multi-protocol storage appliance provides access controls and, if appropriate, sharing of files and vdisks for all protocols, while preserving data integrity. The storage appliance further provides embedded/integrated virtualization capabilities that obviate the need for a user to apportion storage resources when creating NAS and SAN storage objects. These capabilities include a virtualized storage space that allows the SAN and NAS objects to coexist with respect to global space management within the appliance. Moreover, the integrated storage appliance provides simultaneous support for block access protocols to the same vdisk, as well as a heterogeneous SAN environment with support for clustering.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and further advantages of the invention may be better understood by referring to the following description in conjunction with the accompanying drawings in which like reference numerals indicate identical or functionally similar elements:
Fig. 1 is a schematic block diagram of a multi-protocol storage appliance configured to operate in storage area network (SAN) and network attached storage (NAS) environments in accordance with the present invention;
Fig. 2 is a schematic block diagram of a storage operating system of the multi-protocol storage appliance that may be advantageously used with the present invention;
Fig. 3 is a schematic block diagram of a virtualization system that is implemented by a file system interacting with virtualization modules according to the present invention; and
Fig. 4 is a flowchart illustrating the sequence of steps involved when accessing information stored on the multi-protocol storage appliance over a SAN network.
DETAILED DESCRIPTION OF AN ILLUSTRATIVE EMBODIMENT
The present invention is directed to a multi-protocol storage appliance that serves both file and block protocol access to information stored on storage devices in an integrated manner. In this context, the integrated multi-protocol appliance denotes a computer having features such as simplicity of storage service management and ease of storage reconfiguration, including reusable storage space, for users (system administrators) and clients of network attached storage (NAS) and storage area network (SAN) deployments. The storage appliance may provide NAS services through a file system, while the same appliance provides SAN services through SAN virtualization, including logical unit number (lun) emulation.
Fig. 1 is a schematic block diagram of the multi-protocol storage appliance configured to provide storage service relating to the organization of information on storage devices, such as disks 130. The storage appliance 100 is illustratively embodied as a storage system comprising a processor 122, a memory 124, a plurality of network adapters 125, 126 and a storage adapter 128 interconnected by a system bus 123.
The multi-protocol storage appliance 100 also includes a storage operating system 200 that provides a virtualization system (and, in particular, a file system) to logically organize the information as a hierarchical structure of named directory, file and virtual disk (vdisk) storage objects on the disks 130.
Whereas clients of a NAS-based network environment have a storage viewpoint of files, the clients of a SAN-based network environment have a storage viewpoint of blocks or disks. To that end, the multi-protocol storage appliance 100 presents (exports) disks to SAN clients through the creation of luns or vdisk objects. A vdisk object (hereinafter "vdisk") is a special file type that is implemented by the virtualization system and translated into an emulated disk as viewed by the SAN clients. The multi-protocol storage appliance thereafter makes these emulated disks accessible to the SAN clients through controlled exports, as described further herein.
In the illustrative embodiment, the memory 124 comprises storage locations that are addressable by the processor and adapters for storing software program code and data structures associated with the present invention. The processor and adapters may, in turn, comprise processing elements and/or logic circuitry configured to execute the software code and manipulate the data structures. The storage operating system 200, portions of which are typically resident in memory and executed by the processing elements, functionally organizes the storage appliance by, inter alia, invoking storage operations in support of the storage service implemented by the appliance. It will be apparent to those skilled in the art that other processing and memory means, including various computer readable media, may be used for storing and executing program instructions pertaining to the inventive system and method described herein.
The network adapter 125 couples the storage appliance to a plurality of clients 160a,b over point-to-point links, wide area networks, virtual private networks implemented over a public network (Internet) or a shared local area network, hereinafter referred to as an illustrative Ethernet network 165. Therefore, the network adapter 125 may comprise a network interface card (NIC) having the mechanical, electrical and signaling circuitry needed to connect the appliance to a network switch, such as a conventional Ethernet switch 170. For this NAS-based network environment, the clients are configured to access information stored on the multi-protocol appliance as files. The clients 160 communicate with the storage appliance over network 165 by exchanging discrete frames or packets of data according to pre-defined protocols, such as the Transmission Control Protocol/Internet Protocol (TCP/IP).
The clients 160 may be general-purpose computers configured to execute applications over a variety of operating systems, including the UNIX® and Microsoft® Windows™ operating systems. Client systems generally utilize file-based access protocols when accessing information (in the form of files and directories) over a NAS-based network. Therefore, each client 160 may request the services of the storage appliance 100 by issuing file access protocol messages (in the form of packets) to the appliance over the network 165. For example, a client 160a running the Windows operating system may communicate with the storage appliance 100 using the Common Internet File System (CIFS) protocol over TCP/IP. On the other hand, a client 160b running the UNIX operating system may communicate with the multi-protocol appliance using either the Network File System (NFS) protocol over TCP/IP or the Direct Access File System (DAFS) protocol over a virtual interface (VI) transport in accordance with a remote DMA (RDMA) protocol over TCP/IP. It will be apparent to those skilled in the art that other clients running other types of operating systems may also communicate with the integrated multi-protocol storage appliance using other file access protocols.
The storage network "target" adapter 126 also couples the multi-protocol storage appliance 100 to clients 160 that may be further configured to access the stored information as blocks or disks. For this SAN-based network environment, the storage appliance is coupled to an illustrative Fibre Channel (FC) network 185. FC is a networking standard describing a suite of protocols and media that is primarily found in SAN deployments. The network target adapter 126 may comprise a FC host bus adapter (HBA) having the mechanical, electrical and signaling circuitry needed to connect the appliance 100 to a SAN network switch, such as a conventional FC switch 180. In addition to providing FC access, the FC HBA may offload Fibre Channel network processing operations for the storage appliance.
The clients 160 generally utilize block-based access protocols, such as the Small Computer Systems Interface (SCSI) protocol, when accessing information (in the form of blocks, disks or vdisks) over a SAN-based network. SCSI is a peripheral input/output (I/O) interface with a standard, device independent protocol that allows different peripheral devices, such as disks 130, to attach to the storage appliance 100. In SCSI terminology, clients 160 operating in a SAN environment are initiators that initiate requests and commands for data. The multi-protocol storage appliance is thus a target configured to respond to the requests issued by the initiators in accordance with a request/response protocol. The initiators and targets have endpoint addresses that, in accordance with the FC protocol, comprise worldwide names (WWN). A WWN is a unique identifier, e.g., a node name or a port name, consisting of an 8-byte number.
The multi-protocol storage appliance 100 supports various SCSI-based protocols used in SAN deployments, including SCSI encapsulated over TCP (iSCSI) and SCSI encapsulated over FC (FCP). The initiators (hereinafter clients 160) may thus request the services of the target (hereinafter storage appliance 100) by issuing iSCSI and FCP messages over the network 165, 185 to access information stored on the disks. It will be apparent to those skilled in the art that the clients may also request the services of the integrated multi-protocol storage appliance using other block access protocols. By supporting a plurality of block access protocols, the multi-protocol storage appliance provides a unified and coherent access solution to vdisks/luns in a heterogeneous SAN environment.
The storage adapter 128 cooperates with the storage operating system 200 executing on the storage appliance to access information requested by the clients. The information may be stored on the disks 130 or other similar media adapted to store information. The storage adapter includes I/O interface circuitry that couples to the disks over an I/O interconnect arrangement, such as a conventional high-performance, FC serial link topology. The information is retrieved by the storage adapter and, if necessary, processed by the processor 122 (or the adapter 128 itself) prior to being forwarded over the system bus 123 to the network adapters 125, 126, where the information is formatted into packets or messages and returned to the clients.
Storage of information on the appliance 100 is preferably implemented as one or more storage volumes (e.g., VOL1-2 150) that comprise a cluster of physical storage disks 130, defining an overall logical arrangement of disk space. The disks within a volume are typically organized as one or more groups of Redundant Array of Independent (or Inexpensive) Disks (RAID). RAID implementations enhance the reliability/integrity of data storage through the writing of data "stripes" across a given number of physical disks in the RAID group, and the appropriate storing of redundant information with respect to the striped data. The redundant information enables recovery of data lost when a storage device fails. It will be apparent to those skilled in the art that other redundancy techniques, such as mirroring, may be used in accordance with the present invention.
Specifically, each volume 150 is constructed from an array of physical disks 130 that are organized as RAID groups 140, 142, and 144. The physical disks of each RAID group include those disks configured to store striped data (D) and those configured to store parity (P) for the data, in accordance with an illustrative RAID 4 level configuration. It should be noted that other RAID level configurations (e.g., RAID 5) are also contemplated for use with the teachings described herein. In the illustrative embodiment, a minimum of one parity disk and one data disk may be employed. However, a typical implementation may include three data disks and one parity disk per RAID group and at least one RAID group per volume.
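To make the parity mechanism concrete, the following minimal Python sketch (not part of the patent disclosure; the block size and group width are illustrative assumptions) shows how a RAID 4 style parity block is computed as the bytewise XOR of a stripe's data blocks, and how any single lost block is rebuilt from the survivors plus parity.

    # Minimal sketch of RAID 4 style parity, assuming bytewise XOR parity;
    # block size and group width are illustrative, not from the patent.
    from functools import reduce

    BLOCK_SIZE = 4096  # e.g., one 4 kB file system block

    def xor(a: bytes, b: bytes) -> bytes:
        """Bytewise XOR of two equally sized blocks."""
        return bytes(x ^ y for x, y in zip(a, b))

    def parity_block(data_blocks: list[bytes]) -> bytes:
        """Parity (P) for one stripe is the XOR of its data (D) blocks."""
        return reduce(xor, data_blocks, bytes(BLOCK_SIZE))

    def reconstruct_lost_block(surviving: list[bytes], parity: bytes) -> bytes:
        """Any single lost block equals the XOR of the survivors and parity."""
        return reduce(xor, surviving, parity)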
To facilitate access to the disks 130, the storage operating system 200 implements a write-anywhere file system of a novel virtualization system that "virtualizes" the storage space provided by disks 130. The file system logically organizes the information as a hierarchical structure of named directory and file objects (hereinafter "directories" and "files") on the disks. Each "on-disk" file may be implemented as a set of disk blocks configured to store information, such as data, whereas the directory may be implemented as a specially formatted file in which names and links to other files and directories are stored. The virtualization system allows the file system to further logically organize information as a hierarchical structure of named vdisks on the disks, thereby providing an integrated NAS and SAN appliance approach to storage by enabling file-based (NAS) access to the named files and directories, while further enabling block-based (SAN) access to the named vdisks on a file-based storage platform. The file system simplifies the complexity of management of the underlying physical storage in SAN deployments.
As noted, a vdisk is a special file type in a volume that derives from a plain (regular) file, but that has associated export controls and operation restrictions that support emulation of a disk. Unlike a file that can be created by a client using, e.g., the NFS or CIFS protocol, a vdisk is created on the multi-protocol storage appliance via, e.g., a user interface (UI) as a special typed file (object). Illustratively, the vdisk is a multi-inode object comprising a special file inode that holds data and at least one associated stream inode that holds attributes, including security information. The special file inode functions as a main container for storing data, such as application data, associated with the emulated disk. The stream inode stores attributes that allow luns and exports to persist over, e.g., reboot operations, while also enabling management of the vdisk as a single disk object in relation to SAN clients. An example of a vdisk and its associated inodes that may be advantageously used with the present invention is described in co-pending and commonly assigned U.S. Patent Application Serial No. (112056-0069) titled Storage Virtualization by Layering Vdisks on a File System, which application is hereby incorporated by reference as though fully set forth herein.
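As a hedged illustration of the multi-inode layout just described, the sketch below models a vdisk as a special file inode plus an attributes stream inode; the field names are hypothetical and chosen only to mirror the text.

    # Hypothetical rendering of the vdisk's shape described above: a special
    # file inode holding application data plus a stream inode holding
    # attributes (so luns and exports persist across reboots).
    from dataclasses import dataclass, field

    @dataclass
    class StreamInode:
        # e.g., {"size": 1 << 30, "lun_id": 0, "security": "..."}
        attributes: dict = field(default_factory=dict)

    @dataclass
    class VdiskObject:
        inode_number: int
        data_blocks: list[bytes] = field(default_factory=list)  # main data container
        attr_stream: StreamInode = field(default_factory=StreamInode)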
In the illustrative embodiment, the storage operating system is preferably the NetApp® Data ONTAP™ operating system available from Network Appliance, Inc., Sunnyvale, California, that implements a Write Anywhere File Layout (WAFL™) file system. However, it is expressly contemplated that any appropriate storage operating system, including a write in-place file system, may be enhanced for use in accordance with the inventive principles described herein. As such, where the term "WAFL" is employed, it should be taken broadly to refer to any storage operating system that is otherwise adaptable to the teachings of this invention.
As used herein, the term "storage operating system" generally refers to the computer-executable code operable on a computer that manages data access and may, in the case of a multi-protocol storage appliance, implement data access semantics, such as the Data ONTAP storage operating system, which is implemented as a microkernel. The storage operating system can also be implemented as an application program operating over a general-purpose operating system, such as UNIX® or Windows NT®, or as a general-purpose operating system with configurable functionality, which is configured for storage applications as described herein.
In addition, it will be understood to those skilled in the art that the inventive system and method described herein may apply to any type of special-purpose (e.g., storage serving appliance) or general-purpose computer, including a standalone computer or portion thereof, embodied as or including a storage system. Moreover, the teachings of this invention can be adapted to a variety of storage system architectures including, but not limited to, a network-attached storage environment, a storage area network and a disk assembly directly attached to a client or host computer. The term "storage system" should therefore be taken broadly to include such arrangements in addition to any subsystems configured to perform a storage function and associated with other equipment or systems.
Fig. 2 is a schematic block diagram of the storage operating system 200 that may be advantageously used with the present invention. The storage operating system comprises a series of software layers organized to form an integrated network protocol stack or, more generally, a multi-protocol engine that provides data paths for clients to access information stored on the multi-protocol storage appliance using block and file access protocols. The protocol stack includes a media access layer 210 of network drivers (e.g., gigabit Ethernet drivers) that interfaces to network protocol layers, such as the IP layer 212 and its supporting transport mechanisms, the TCP layer 214 and the User Datagram Protocol (UDP) layer 216. A file system protocol layer provides multi-protocol file access and, to that end, includes support for the DAFS protocol 218, the NFS protocol 220, the CIFS protocol 222 and the Hypertext Transfer Protocol (HTTP) protocol 224. A VI layer 226 implements the VI architecture to provide direct access transport (DAT) capabilities, such as RDMA, as required by the DAFS protocol 218.

An iSCSI driver layer 228 provides block protocol access over the TCP/IP network protocol layers, while a FC driver layer 230 operates with the FC HBA 126 to receive and transmit block access requests and responses to and from the integrated storage appliance. The FC and iSCSI drivers provide FC-specific and iSCSI-specific access control to the luns (vdisks) and, thus, manage exports of vdisks to either iSCSI or FCP or, alternatively, to both iSCSI and FCP when accessing a single vdisk on the multi-protocol storage appliance. In addition, the storage operating system includes a disk storage layer 240 that implements a disk storage protocol, such as a RAID protocol, and a disk driver layer 250 that implements a disk access protocol such as, e.g., a SCSI protocol.
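A toy dispatch of the dual data paths just described might look like the following sketch; the handler names are assumptions, not identifiers from the patent.

    # Toy model of the multi-protocol engine: file protocols go to the file
    # system directly, while block protocols are steered to the SCSI target
    # module of the virtualization system. Handler names are illustrative.
    FILE_PROTOCOLS = {"NFS", "CIFS", "DAFS", "HTTP"}
    BLOCK_PROTOCOLS = {"iSCSI", "FCP"}

    def route_request(protocol: str) -> str:
        if protocol in FILE_PROTOCOLS:
            return "file_system"          # file-based (NAS) data path
        if protocol in BLOCK_PROTOCOLS:
            return "scsi_target_module"   # block-based (SAN) data path
        raise ValueError(f"unsupported access protocol: {protocol}")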
Bridging the disk software layers with the integrated network protocol stack layers is a virtualization system 300 according to the present invention. Fig. 3 is a schematic block diagram of the virtualization system 300 that is implemented by a file system 320 cooperating with virtualization modules illustratively embodied as, e.g., vdisk module 330 and SCSI target module 310. It should be noted that the vdisk module 330, file system 320 and SCSI target module 310 can be implemented in software, hardware, firmware, or a combination thereof. The vdisk module 330 is layered on (and interacts with) the file system 320 to provide a data path from the block-based SCSI target module to blocks managed by the file system. The vdisk module also enables access by administrative interfaces, such as a streamlined user interface (UI 350), in response to a system administrator issuing commands to the multi-protocol storage appliance 100. In essence, the vdisk module 330 manages SAN deployments by, among other things, implementing a comprehensive set of vdisk (lun) commands issued through the UI 350 by a system administrator. These vdisk commands are converted to primitive file system operations ("primitives") that interact with the file system 320 and the SCSI target module 310 to implement the vdisks.

The SCSI target module 310, in turn, initiates emulation of a disk or lun by providing a mapping procedure that translates logical block access to luns specified in access requests into virtual block access to the special vdisk file types and, for responses to the requests, vdisks into luns. The SCSI target module is illustratively disposed between the FC and iSCSI drivers 228, 230 and the file system 320 to thereby provide a translation layer of the virtualization system 300 between the SAN block (lun) space and the file system space, where luns are represented as vdisks. By "disposing" SAN virtualization over the file system 320, the multi-protocol storage appliance reverses the approaches taken by prior systems to thereby provide a single unified storage platform for essentially all storage access protocols.
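The translation layer can be pictured with the short sketch below; the per-lun table and the flat offset arithmetic are assumptions made for illustration only.

    # Sketch of the SCSI target module's mapping procedure: logical block
    # access to a lun is translated into virtual block access to the vdisk
    # (a special file) that emulates it. The offset math is an assumption.
    BLOCK = 4096

    class ScsiTargetModule:
        def __init__(self) -> None:
            self.lun_to_vdisk: dict[int, str] = {}  # lun id -> vdisk pathname

        def map_lun(self, lun: int, vdisk_path: str) -> None:
            self.lun_to_vdisk[lun] = vdisk_path

        def translate(self, lun: int, lba: int, nblocks: int) -> tuple[str, int, int]:
            """Return (vdisk path, byte offset, byte length) for a lun access."""
            return self.lun_to_vdisk[lun], lba * BLOCK, nblocks * BLOCK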
According to the invention, the file system provides capabilities for use in file-based access to information stored on the storage devices, such as disks. In addition, the file system provides volume management capabilities for use in block-based access to the stored information. That is, in addition to providing file system semantics (such as differentiation of storage into discrete objects and naming of those storage objects), the file system 320 provides functions normally associated with a volume manager. As described herein, these functions include (i) aggregation of the disks, (ii) aggregation of storage bandwidth of the disks, and (iii) reliability guarantees, such as mirroring and/or parity (RAID), to thereby present one or more storage objects layered on the file system. A feature of the multi-protocol storage appliance is the simplicity of use associated with these volume management capabilities, particularly when used in SAN deployments.
The file system 320 illustratively implements the WAFL file system having an on-disk format representation that is block-based using, e.g., 4 kilobyte (kB) blocks and using inodes to describe the files. The WAFL file system uses files to store meta-data describing the layout of its file system; these meta-data files include, among others, an inode file. A file handle, i.e., an identifier that includes an inode number, is used to retrieve an inode from disk. A description of the structure of the file system, including the inode file, is provided in U.S. Patent No. 5,819,292, titled Method for Maintaining Consistent States of a File System and for Creating User Accessible Read Only Copies of a File System, by David Hitz et al., issued October 6, 1998, which patent is hereby incorporated by reference as though fully set forth herein.
Broadly stated, all inodes of the file system are organized into the inode file. A file system (FS) info block specifies the layout of information in the file system and includes an inode of a file that includes all other inodes of the file system. Each volume has an FS info block that is preferably stored at a fixed location within, e.g., a RAID group of the file system. The inode of the root FS info block may directly reference (point to) blocks of the inode file or may reference indirect blocks of the inode file that, in turn, reference direct blocks of the inode file. Within each direct block of the inode file are embedded inodes, each of which may reference indirect blocks that, in turn, reference data blocks of a file or vdisk.
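Stripping away the indirection levels, the lookup chain reads roughly as in the sketch below; the in-memory stand-ins are hypothetical.

    # Toy lookup chain: the FS info block leads to the inode file, and any
    # inode (file or vdisk) is found by indexing the inode file by inode
    # number. Indirect blocks are elided; structures are stand-ins.
    class FSInfoBlock:
        def __init__(self, inode_file: dict[int, object]) -> None:
            self.inode_file = inode_file  # inode number -> inode

    def lookup_inode(fs_info: FSInfoBlock, inode_number: int) -> object:
        """Index into the inode file to fetch the inode for a file or vdisk."""
        return fs_info.inode_file[inode_number]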
According to an aspect of the invention, the file system implements access operations to vdisks 322, as well as to files 324 and directories (dir 326) that coexist with respect to global space management of units of storage, such as volumes 150 and/or qtrees 328. A qtree 328 is a special directory that has the properties of a logical sub-volume within the namespace of a physical volume. Each file system storage object (file, directory or vdisk) is illustratively associated with one qtree, and quotas, security properties and other items can be assigned on a per-qtree basis. The vdisks and files/directories may be layered on top of qtrees 328 that, in turn, are layered on top of volumes 150 as abstracted by the file system "virtualization" layer 320.

Note that the vdisk storage objects in the file system 320 are associated with SAN deployments of the multi-protocol storage appliance, whereas the file and directory storage objects are associated with NAS deployments of the appliance. The files and directories are generally not accessible via the FC or SCSI block access protocols; however, a file can be converted to a vdisk and then accessed by either the SAN or NAS protocol. The vdisks are accessible as luns from the SAN (FC and SCSI) protocols and as files by the NAS (NFS and CIFS) protocols.
In another aspect of the invention, the virtualization system 300 provides a virtualized storage space that allows SAN and NAS storage objects to coexist with respect to global space management by the file system 320. To that end, the virtualization system 300 exploits the characteristics of the file system, including its inherent ability to aggregate disks and abstract them into a single pool of storage. For example, the system 300 leverages the volume management capability of the file system 320 to organize a collection of disks 130 into one or more volumes 150 representing a pool of global storage space. The pool of global storage is then made available for both SAN and NAS deployments through the creation of vdisks 322 and files 324, respectively. In addition to sharing the same global storage space, the vdisks and files share the same pool of available storage from which to draw on when expanding the SAN and/or NAS deployments. Unlike prior systems, there is no physical partitioning of disks within the global storage space of the multi-protocol storage appliance.
The multi-protocol storage appliance substantially simplifies management of the global storage space by allowing a user to manage both NAS and SAN storage objects using the single pool of storage resources. In particular, free block space is managed from a global free pool on a fine-grained block basis for both SAN and NAS deployments. If those storage objects were managed discretely (separately), the user would be required to keep a certain amount of "spare" disks on hand for each type of object to respond to changes in, e.g., business objectives. The overhead required to maintain that discrete approach is greater than if those objects could be managed out of a single pool of resources with only a single group of spared disks available for expansion as business dictates. Blocks released individually by vdisk operations are immediately reusable by NAS objects (and vice versa). The details of such management are transparent to the administrator. This represents a "total cost of ownership" advantage of the integrated multi-protocol storage appliance.
The virtualization system 300 further provides reliability guarantees for those SAN and NAS storage objects coexisting in the global storage space of the multi-protocol appliance 100. In particular, reliability guarantees in the face of disk failures, provided in conventional SAN systems through techniques such as RAID or mirroring performed at a physical block level, are an inherited feature from the file system 320 of the appliance 100. This simplifies administration by allowing an administrator to make global decisions on the underlying redundant physical storage that apply equally to vdisks and NAS objects in the file system.
As noted, the file system 320 organizes information as file, directory and vdisk objects within volumes 150 of disks 130. Underlying each volume 150 is a collection of RAID groups 140-144 that provide protection and reliability against disk failure(s) within the volume. The information serviced by the multi-protocol storage appliance is protected according to an illustrative RAID 4 configuration. This level of protection may be extended to include, e.g., synchronous mirroring on the appliance platform. A vdisk 322 created on a volume that is protected by RAID 4 "inherits" the added protection of synchronous mirroring if that latter protection is specified for the volume 150. In this case, the synchronous mirroring protection is not a property of the vdisk but rather a property of the underlying volume and the reliability guarantees of the file system 320. This "inheritance" feature of the multi-protocol storage appliance simplifies management of a vdisk because a system administrator does not have to deal with reliability issues.
In addition, the virtualization system 300 aggregates bandwidth of the disks 130 without requiring user knowledge of the physical construction of those disks. The file system 320 is configured to write (store) data on the disks as long, continuous stripes across those disks in accordance with input/output (I/O) storage operations that aggregate the bandwidth of all the disks of a volume for stored data. When information is stored or retrieved from the vdisks, the I/O operations are not directed to disks specified by a user. Rather, those operations are transparent to the user because the file system "stripes" that data across all the disks of the volume in a reliable manner according to its write anywhere layout policy. As a result of virtualization of block storage, I/O bandwidth to a vdisk can be the maximum bandwidth of the underlying physical disks of the file system, regardless of the size of the vdisk (unlike typical physical implementations of luns in conventional block access products).

Moreover, the virtualization system leverages file system placement, management and block allocation policies to make the vdisks function correctly within the multi-protocol storage appliance. The vdisk block placement policies are a function of the underlying virtualizing file system and there are no permanent physical bindings of file system blocks to SCSI logical block addresses in the face of modifications. The vdisks may be transparently reorganized to perhaps alter data access pattern behavior.
For both SAN and NAS deployments, the block allocation policies are independent of physical properties of the disks (e.g., geometries, sizes, cylinders, sector size). The file system provides file-based management of the files 324 and directories 326 and, in accordance with the invention, vdisks 322 residing within the volumes 150. When a disk is added to the array attached to the multi-protocol storage appliance, that disk is integrated into an existing volume to increase the entire volume space, which space may be used for any purpose, e.g., more vdisks or more files.
Management of the integrated multi-protocol storage appliance 100 is simplified through the use of the UI 350 and the vdisk command set available to the system administrator. The UI 350 illustratively comprises both a command line interface (CLI 352) and a graphical user interface (GUI 354) used to implement the vdisk command set to, among other things, create a vdisk, increase/decrease the size of a vdisk and/or destroy a vdisk. The storage space for the destroyed vdisk may then be reused for, e.g., a NAS-based file in accordance with the virtualized storage space feature of the appliance 100. A vdisk may increase ("grow") or decrease ("shrink") under user control while preserving block and NAS multi-protocol access to its application data.
The UI 350 simplifies management of the multi-protocol SAN/NAS storage appliance by, e.g., obviating the need for a system administrator to explicitly configure and specify the disks to be used when creating a vdisk. For instance, to create a vdisk, the system administrator need merely issue a vdisk ("lun create") command through, e.g., the CLI 352 or GUI 354. The vdisk command specifies creation of a vdisk (lun), along with the desired size of the vdisk and a path descriptor (pathname) to that vdisk. In response, the file system 320 cooperates with the vdisk module 330 to "virtualize" the storage space provided by the underlying disks and create a vdisk as specified by the create command. Specifically, the vdisk module 330 processes the vdisk command to "call" primitive operations ("primitives") in the file system 320 that implement high-level notions of vdisks (luns). For example, the "lun create" command is translated into a series of file system primitives that create a vdisk with the correct information and size, as well as at the correct location. These file system primitives include operations to create a file inode (create file), create a stream inode (create stream), and store information in the stream inode (stream write).

The result of the lun create command is the creation of a vdisk 322 having the specified size and that is RAID protected without having to explicitly specify such protection. Storage of information on disks of the multi-protocol storage appliance is not typed; only "raw" bits are stored on the disks. The file system organizes those bits into vdisks and RAID groups across all of the disks within a volume. Thus, the created vdisk 322 does not have to be explicitly configured because the virtualization system 300 creates a vdisk in a manner that is transparent to the user. The created vdisk inherits high-performance characteristics, such as reliability and storage bandwidth, of the underlying volume created by the file system.
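The command-to-primitive translation might be sketched as below; the stub file system and the primitive signatures are hypothetical, introduced only to trace the create file / create stream / stream write sequence the text names.

    # Illustrative translation of "lun create" into the three file system
    # primitives named above. The stub file system and its signatures are
    # assumptions; only the sequence of primitives comes from the text.
    class ToyFS:
        def __init__(self) -> None:
            self.objects: dict[str, dict] = {}

        def create_file(self, path: str) -> str:
            self.objects[path] = {"data": b"", "streams": {}}
            return path

        def create_stream(self, inode: str, name: str) -> tuple[str, str]:
            self.objects[inode]["streams"][name] = {}
            return (inode, name)

        def stream_write(self, stream: tuple[str, str], attrs: dict) -> None:
            inode, name = stream
            self.objects[inode]["streams"][name].update(attrs)

    def lun_create(fs: ToyFS, path: str, size: int) -> None:
        """'lun create <path> <size>': build the vdisk's inodes and attributes."""
        inode = fs.create_file(path)                    # special file inode
        stream = fs.create_stream(inode, "attributes")  # attributes stream inode
        fs.stream_write(stream, {"size": size, "type": "vdisk"})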
The CLI 352 and/or GUI 354 also interact with the vdisk module 330 to introduce attributes and persistent lun map bindings that assign numbers to the created vdisk. These lun map bindings are thereafter used to export vdisks as certain SCSI identifiers (IDs) to the clients. In particular, the created vdisk can be exported via a lun mapping technique to enable a SAN client to "view" (access) a disk. Vdisks (luns) generally require strictly controlled access in a SAN environment; sharing of luns in a SAN environment typically occurs only in limited circumstances, such as clustered file systems, clustered operating systems and multi-pathing configurations. A system administrator of the multi-protocol storage appliance determines which vdisks (luns) can be exported to a SAN client. Once a vdisk is exported as a lun, the client may access the vdisk over the SAN network utilizing a block access protocol, such as FCP or iSCSI.
SAN clients typically identify and address disks by logical numbers or luns. However, an "ease of management" feature of the multi-protocol storage appliance is that system administrators can manage vdisks and their addressing by logical names. To that end, the vdisk module 330 of the multi-protocol storage appliance maps logical names to vdisks. For example, when creating a vdisk, the system administrator "right size" allocates the vdisk and assigns it a name that is generally meaningful to its intended application (e.g., /vol/vol0/database to hold a database). The administrative interface provides name-based management of luns/vdisks (as well as files) exported from the storage appliance on the clients, thereby providing a uniform and unified naming scheme for block-based (as well as file-based) storage.
The multi-protocol storage appliance manages export control of vdisks by logical names through the use of initiator groups (igroups). An igroup is a logical named entity that is assigned to one or more addresses associated with one or more initiators (depending upon whether a clustered environment is configured). An "igroup create" command essentially "binds" (associates) those addresses, which may comprise WWN addresses or iSCSI IDs, to a logical name or igroup. A "lun map" command is then used to export one or more vdisks to the igroup, i.e., make the vdisk(s) "visible" to the igroup. In this sense, the "lun map" command is equivalent to an NFS export or a CIFS share. The WWN addresses or iSCSI IDs thus identify the clients that are allowed to access those vdisks specified by the lun map command. Thereafter, the logical name is used with all operations internal to the storage operating system. This logical naming abstraction is pervasive throughout the entire vdisk command set, including interactions between a user and the multi-protocol storage appliance. In particular, the igroup naming convention is used for all subsequent export operations and listings of luns that are exported for various SAN clients.
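The export-control commands reduce to two small tables, sketched below under assumed data shapes; only the binding and mapping behavior comes from the text.

    # Minimal model of export control: "igroup create" binds initiator
    # addresses (WWNs or iSCSI IDs) to a logical name, and "lun map"
    # exports vdisks to an igroup. Data shapes are assumptions.
    igroups: dict[str, set[str]] = {}     # igroup name -> initiator addresses
    lun_maps: dict[str, set[str]] = {}    # vdisk path  -> igroups it is mapped to

    def igroup_create(name: str, addresses: list[str]) -> None:
        igroups[name] = set(addresses)

    def lun_map(vdisk_path: str, igroup: str) -> None:
        lun_maps.setdefault(vdisk_path, set()).add(igroup)

    def may_access(initiator: str, vdisk_path: str) -> bool:
        """An initiator sees a lun only through an igroup it belongs to."""
        return any(initiator in igroups.get(g, set())
                   for g in lun_maps.get(vdisk_path, set()))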
Fig. 4 is a schematic flow chart illustrating the sequence of steps involved when accessing information stored on the multi-protocol storage appliance over a SAN network. Here, a client communicates with the storage appliance 100 using a block access protocol over a network coupled to the appliance. If the client is client 160a running the Windows operating system, the block access protocol is illustratively the FCP protocol used over the network 185. On the other hand, if the client is client 160b running the UNIX operating system, the block access protocol is illustratively the iSCSI protocol used over network 165. The sequence starts at Step 400 and proceeds to Step 402 where the client generates a request to access information residing on the multi-protocol storage appliance and, in Step 404, the request is forwarded as a conventional FCP or iSCSI block access request over the network 185, 165.

At Step 406, the request is received at network adapter 126, 125 of the storage appliance 100, where it is processed by the integrated network protocol stack and passed to the virtualization system 300 at Step 408. Specifically, if the request is a FCP request, it is processed as, e.g., a 4k block request to access (i.e., read/write) data by the FC driver 230. If the request is an iSCSI protocol request, it is received at the media access layer (the Intel gigabit Ethernet) and passed through the TCP/IP network protocol layers to the virtualization system.
Command and control operations, including addressing information, associated with the SCSI protocol are generally directed to disks or luns; however, the file system 320 does not recognize luns. As a result, the SCSI target module 310 of the virtualization system initiates emulation of a lun in order to respond to the SCSI commands contained in the request (Step 410). To that end, the SCSI target module has a set of application programming interfaces (APIs 360) that are based on the SCSI protocol and that enable a consistent interface to both the iSCSI and FCP drivers 228, 230. The SCSI target module further implements a mapping/translation procedure that essentially translates a lun into a vdisk. At Step 412, the SCSI target module maps the addressing information, e.g., FC routing information, of the request to the internal structure of the file system.
The file system 320 is illustratively a message-based system; as such, the SCSI target module 310 transposes the SCSI request into a message representing an operation directed to the file system. For example, the message generated by the SCSI target module may include a type of operation (e.g., read, write) along with a pathname (e.g., a path descriptor) and a filename (e.g., a special filename) of the vdisk object represented in the file system. The SCSI target module 310 passes the message into the file system layer 320 as, e.g., a function call 365, where the operation is performed.
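The transposition step could be rendered as the following sketch, with illustrative field names that merely mirror the message contents listed above.

    # Hedged sketch of the transposition: the SCSI request becomes a file
    # system message carrying the operation type, the path descriptor and
    # the special filename of the vdisk. Field names are illustrative.
    from dataclasses import dataclass

    @dataclass
    class FsMessage:
        operation: str  # e.g., "read" or "write"
        pathname: str   # path descriptor of the vdisk
        filename: str   # special filename of the vdisk object

    def transpose(scsi_op: str, pathname: str, filename: str) -> FsMessage:
        """Wrap a SCSI request as a message handed to the file system layer."""
        return FsMessage(scsi_op, pathname, filename)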
In response to receiving the message, the file system 320 maps the pathname to inode structures to obtain the file handle corresponding to the vdisk 322. Armed with a file handle, the storage operating system 200 can convert that handle to a disk block and, thus, retrieve the block (inode) from disk. Broadly stated, the file handle is an internal representation of the data structure, i.e., a representation of the inode data structure that is used internally within the file system. The file handle generally consists of a plurality of components including a file ID (inode number), a snapshot ID, a generation ID and a flag. The file system utilizes the file handle to retrieve the special file inode and at least one associated stream inode that comprise the vdisk within the file system structure implemented on the disks 130.
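The four components just listed map naturally onto a small record; how the fields are packed on disk is not specified in the text, so this is a schematic rendering only.

    # The file handle's components as enumerated above; packing and field
    # widths are unspecified here, so this is a schematic record only.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class FileHandle:
        file_id: int        # inode number
        snapshot_id: int
        generation_id: int
        flags: int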
In Step 414, the file system generates operations to load (retrieve) the requested data from disk 130 if it is not resident "in core", i.e., in the memory 124. If the information is not in memory, the file system 320 indexes into the inode file using the inode number to access an appropriate entry and retrieve a logical volume block number (VBN). The file system then passes the logical VBN to the disk storage (RAID) layer 240, which maps that logical number to a disk block number and sends the latter to an appropriate driver (e.g., SCSI) of the disk driver layer 250. The disk driver accesses the disk block number from disk 130 and loads the requested data block(s) in memory 124. In Step 416, the requested data is processed by the virtualization system 300. For example, the data may be processed in connection with a read or write operation directed to a vdisk or in connection with a query command for the vdisk.
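The VBN-to-disk-block step can be approximated with placeholder striping arithmetic; the real mapping is a property of the RAID layer and is not disclosed here.

    # Toy retrieval path: the RAID layer maps a logical volume block number
    # (VBN) to a (disk, disk block number) pair; the disk driver then reads
    # that block. Round-robin striping here is a placeholder assumption.
    DATA_DISKS = 3  # e.g., three data disks plus one parity disk per group

    def vbn_to_dbn(vbn: int) -> tuple[int, int]:
        """Map a logical VBN to (disk index, block number on that disk)."""
        return vbn % DATA_DISKS, vbn // DATA_DISKS

    def read_block(disks: list[list[bytes]], vbn: int) -> bytes:
        disk, dbn = vbn_to_dbn(vbn)
        return disks[disk][dbn]  # the disk driver accesses the DBN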
The SCSI target module 310 of the virtualization system 300 emulates support for the conventional SCSI protocol by providing meaningful "simulated" information about a requested vdisk. Such information is either calculated by the SCSI target module or stored persistently in, e.g., the attributes stream inode of the vdisk. At Step 418, the SCSI target module 310 loads the requested block-based information (as translated from file-based information provided by the file system 320) into a block access (SCSI) protocol message. For example, the SCSI target module 310 may load information, such as the size of a vdisk, into a SCSI protocol message in response to a SCSI query command request. Upon completion of the request, the storage appliance (and operating system) returns a reply (e.g., as a SCSI "capacity" response message) to the client over the network (Step 420). The sequence then ends at Step 422.
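For the capacity example, a reply could be assembled as below. The 8-byte layout mirrors the standard SCSI READ CAPACITY(10) response; sourcing the fields from the vdisk's persistent size attribute is the point the text makes, while the 512-byte block size is an illustrative assumption.

    # Illustrative SCSI "capacity" reply: report the lun's last logical
    # block address and block length, derived from the vdisk's persistent
    # size attribute. Layout follows READ CAPACITY(10); the 512-byte
    # block size is an assumption.
    import struct

    LUN_BLOCK_SIZE = 512

    def capacity_response(vdisk_size_bytes: int) -> bytes:
        """Pack (last LBA, block length) as two big-endian 32-bit fields."""
        last_lba = vdisk_size_bytes // LUN_BLOCK_SIZE - 1
        return struct.pack(">II", last_lba, LUN_BLOCK_SIZE)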
It should be noted that the software "path" through the storage operating system layers described above needed to perform data storage access for the client request received at the multi-protocol storage appliance may alternatively be implemented in hardware. That is, in an alternate embodiment of the invention, a storage access request data path through the operating system layers (including the virtualization system 300) may be implemented as logic circuitry embodied within a field programmable gate array (FPGA) or an application specific integrated circuit (ASIC). This type of hardware implementation increases the performance of the storage service provided by appliance 100 in response to a file access or block access request issued by a client 160. Moreover, in another alternate embodiment of the invention, the processing elements of network and storage adapters 125-128 may be configured to offload some or all of the packet processing and storage access operations, respectively, from processor 122 to thereby increase the performance of the storage service provided by the multi-protocol storage appliance. It is expressly contemplated that the various processes, architectures and procedures described herein can be implemented in hardware, firmware or software.
Advantageously, the integrated multi-protocol storage appliance provides access controls and, if appropriate, sharing of files and vdisks for all protocols, while preserving data integrity. The storage appliance further provides embedded/integrated virtualization capabilities that obviate the need for a user to apportion storage resources when creating NAS and SAN storage objects. These capabilities include a virtualized storage space that allows SAN and NAS storage objects to coexist with respect to global space management within the appliance. Moreover, the integrated storage appliance provides simultaneous support for block access protocols (iSCSI and FCP) to the same vdisk, as well as a heterogeneous SAN environment with support for clustering.
In sum, the multi-protocol storage appliance provides a single unified storage platform for all storage access protocols.
The foregoing description has been directed to specific embodiments of this invention. It will be apparent, however, that other variations and modifications may be made to the described embodiments, with the attainment of some or all of their advantages. For example, it is expressly contemplated that the teachings of this invention can be implemented as software, including a computer-readable medium having program instructions executing on a computer, hardware, firmware, or a combination thereof.
Accordingly, this description is to be taken only by way of example and not to otherwise limit the scope of the invention. It is thus the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.
What is claimed is:
Claims (27)
1. A multi-protocol storage appliance adapted to serve file and block protocol access to information stored on storage devices in an integrated manner for both network attached storage (NAS) and storage area network (SAN) deployments, the appliance comprising:
a storage operating system adapted to implement a file system cooperating with virtualization modules to virtualize a storage space provided by the storage devices.
2. The multi-protocol storage appliance of Claim 1 wherein the file system logically organizes the information as files, directories and virtual disks (vdisks) to thereby provide an integrated NAS and SAN appliance approach to storage by enabling file-based access to the files and directories, while further enabling block-based access to the vdisks.
3. The multi-protocol storage appliance of Claim 1 wherein the virtualization modules comprise a vdisk module and a Small Computer Systems Interface (SCSI) target module.
4. The multi-protocol storage appliance of Claim 3 wherein the vdisk module is layered on the file system to enable access by administrative interfaces in response to a system administrator issuing commands to the multi-protocol storage appliance.
5. The multi-protocol storage appliance of Claim 4 wherein the administrative inter-faces include a user interface (UI).
6. The multi-protocol storage appliance of Claim 5 wherein the vdisk module manages the SAN deployments by implementing a set of vdisk commands issued through the UI.
7. The multi-protocol storage appliance of Claim 6 wherein the vdisk commands are converted to primitive file system operations that interact with the file system and the SCSI target module to implement the vdisks.
8. The multi-protocol storage appliance of Claim 7 wherein the SCSI target module initiates emulation of a disk or logical unit number (lun) by providing a mapping procedure that translates the lun into a vdisk.
9. The multi-protocol storage appliance of Claim 8 wherein the SCSI target module provides a translation layer between a SAN block space and a file system space.
10. The multi-protocol storage appliance of Claim 1 wherein the virtualized storage space allows SAN and NAS storage objects to coexist with respect to global space management by the file system.
11. The multi-protocol storage appliance of Claim 10 wherein the file system cooperates with the virtualization modules to provide a virtualization system that provides reliability guarantees for the SAN and NAS storage objects coexisting in the virtualized storage space.
12. The multi-protocol storage appliance of Claim 1 wherein the file system provides volume management capabilities for use in block-based access to the information stored on the storage devices.
13. The multi-protocol storage appliance of Claim 12 wherein the storage devices are disks.
14. The multi-protocol storage appliance of Claim 1 wherein the file system provides (i) file system semantics, such as naming of storage objects, and (ii) functions associated with a volume manager.
15. The multi-protocol storage appliance of Claim 14 wherein the functions associated with the volume manager comprise at least one of:
aggregation of the storage devices;
aggregation of storage bandwidth of the devices; and
reliability guarantees, such as mirroring or redundant array of independent disks (RAID).
16. A method for providing storage service relating to organization of information stored on storage devices coupled to a multi-protocol storage appliance, the method comprising the steps of:
virtualizing a storage space provided by the storage devices using a file system in cooperating relation with virtualization modules of a storage operating system executing on the multi-protocol storage appliance;
logically organizing the information as file, directory and virtual disk (vdisk) objects in the virtualized storage space to thereby provide an integrated network attached storage (NAS) and storage area network (SAN) appliance approach to storage that allows the objects to coexist with respect to global space management by the file system in the virtualized storage space; and
accessing the logically organized objects stored on the storage devices using block and file access protocols over data paths provided by an integrated network protocol stack of the multi-protocol storage appliance.
17. The method of Claim 16 further comprising the step of providing reliability guarantees for the file, directory and vdisk objects coexisting in the virtualized storage space.
18. A storage operating system of a multi-protocol storage appliance configured to provide storage service relating to organization of information stored on storage devices coupled to the appliance, the storage operating system comprising:
an integrated network protocol stack that provides data paths for clients to access the information stored on the multi-protocol storage appliance using block and file access protocols; and
a file system cooperating with virtualization modules to virtualize a storage space provided by the storage devices.
19. The storage operating system of Claim 18 wherein the file system logically organizes the information as files, directories and virtual disks (vdisks) to thereby provide an integrated network attached storage (NAS) and storage area network (SAN) appliance approach to storage using file-based and block-based access protocols.
20. The storage operating system of Claim 19 wherein the block-based access protocols include Small Computer Systems Interface (SCSI) based protocols, such as SCSI encapsulated over a Transport Control Protocol (iSCSI) and SCSI encapsulated over Fibre Channel (FCP).
21. The storage operating system of Claim 20 wherein the integrated network protocol stack comprises:
network protocol layers;
a file system protocol layer that interfaces to the network protocol layers and provides file-based protocol access to the files and directories organized by the file system; and
an iSCSI driver disposed over the network protocol layers to provide block-based protocol access to the vdisks organized by the file system.
22. The storage operating system of Claim 21 wherein the integrated network protocol stack further comprises a virtual interface layer that provides direct access transport capabilities for a file access protocol of the file system protocol layer.
23. The storage operating system of Claim 21 wherein the integrated network protocol stack further comprises a Fibre Channel (FC) driver adapted to receive and transmit block access requests to access the vdisks organized by the file system.
24. The storage operating system of Claim 23 wherein the FC and iSCSI drivers provide FC-specific and iSCSI-specific access control to the vdisks and further manage exports of vdisks to iSCSI and FCP when accessing the vdisks on the multi-protocol storage appliance.
25. A method for serving file and block protocol access to information stored on storage devices of a multi-protocol storage appliance in an integrated manner for both network attached storage (NAS) and storage area network (SAN) deployments, the method comprising the steps of:
providing NAS services using (i) a network adapter connecting the appliance to a first network and (ii) file system capabilities that allow the appliance to respond to file-based requests issued by a NAS client to access the stored information as files; and
providing SAN services using (i) a network target adapter coupling the appliance to a second network and (ii) volume management capabilities that allow the appliance to respond to block-based requests issued by a SAN client to access the stored information as virtual disks (vdisks).
26. The method of Claim 25 further comprising the steps of:
providing name-based management of the files and vdisks stored on the multi-protocol storage appliance to thereby provide a uniform naming scheme for file-based and block-based storage; and
providing a hierarchical structure of named files and vdisks stored on the storage devices.
27. A method for serving file and block protocol access to storage objects stored on storage devices of a multi-protocol storage appliance in an integrated manner for both network attached storage (NAS) and storage area network (SAN) deployments, the method comprising the steps of:
organizing the storage devices into one or more volumes representing a global storage space;
allowing SAN and NAS storage objects to coexist in the global storage space;
receiving block-based and file-based requests to access the SAN and NAS storage objects at a multi-protocol engine of the storage appliance; and
responding to block-based and file-based requests to access and return the SAN and NAS storage objects.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/215,917 | 2002-08-09 | ||
US10/215,917 US7873700B2 (en) | 2002-08-09 | 2002-08-09 | Multi-protocol storage appliance that provides integrated support for file and block access protocols |
PCT/US2003/023597 WO2004015521A2 (en) | 2002-08-09 | 2003-07-28 | Multi-protocol storage appliance that provides integrated support for file and block access protocols |
Publications (2)
Publication Number | Publication Date |
---|---|
CA2495180A1 true CA2495180A1 (en) | 2004-02-19 |
CA2495180C CA2495180C (en) | 2013-04-30 |
Family
ID=31494968
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA2495180A Expired - Fee Related CA2495180C (en) | 2002-08-09 | 2003-07-28 | Multi-protocol storage appliance that provides integrated support for file and block access protocols |
Country Status (10)
Country | Link |
---|---|
US (1) | US7873700B2 (en) |
EP (1) | EP1543399A4 (en) |
JP (1) | JP4440098B2 (en) |
CN (1) | CN100357916C (en) |
AU (1) | AU2003254238B2 (en) |
CA (1) | CA2495180C (en) |
HK (1) | HK1082976A1 (en) |
IL (1) | IL166786A (en) |
RU (1) | RU2302034C9 (en) |
WO (1) | WO2004015521A2 (en) |
Families Citing this family (347)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7424529B2 (en) * | 1999-12-10 | 2008-09-09 | International Business Machines Corporation | System using host bus adapter connection tables and server tables to generate connection topology of servers and controllers |
US6868417B2 (en) * | 2000-12-18 | 2005-03-15 | Spinnaker Networks, Inc. | Mechanism for handling file level and block level remote file accesses using the same server |
US8402346B2 (en) * | 2001-12-28 | 2013-03-19 | Netapp, Inc. | N-way parity technique for enabling recovery from up to N storage device failures |
US7640484B2 (en) | 2001-12-28 | 2009-12-29 | Netapp, Inc. | Triple parity technique for enabling efficient recovery from triple failures in a storage array |
US7613984B2 (en) * | 2001-12-28 | 2009-11-03 | Netapp, Inc. | System and method for symmetric triple parity for failing storage devices |
US7313557B1 (en) | 2002-03-15 | 2007-12-25 | Network Appliance, Inc. | Multi-protocol lock manager |
US7043485B2 (en) * | 2002-03-19 | 2006-05-09 | Network Appliance, Inc. | System and method for storage of snapshot metadata in a remote file |
US7010553B2 (en) | 2002-03-19 | 2006-03-07 | Network Appliance, Inc. | System and method for redirecting access to a remote mirrored snapshot |
US6993539B2 (en) | 2002-03-19 | 2006-01-31 | Network Appliance, Inc. | System and method for determining changes in two snapshots and for transmitting changes to destination snapshot |
US7873700B2 (en) | 2002-08-09 | 2011-01-18 | Netapp, Inc. | Multi-protocol storage appliance that provides integrated support for file and block access protocols |
US7107385B2 (en) * | 2002-08-09 | 2006-09-12 | Network Appliance, Inc. | Storage virtualization by layering virtual disk objects on a file system |
US7711539B1 (en) * | 2002-08-12 | 2010-05-04 | Netapp, Inc. | System and method for emulating SCSI reservations using network file access protocols |
US8631162B2 (en) * | 2002-08-30 | 2014-01-14 | Broadcom Corporation | System and method for network interfacing in a multiple network environment |
US7340486B1 (en) * | 2002-10-10 | 2008-03-04 | Network Appliance, Inc. | System and method for file system snapshot of a virtual logical disk |
US7171452B1 (en) | 2002-10-31 | 2007-01-30 | Network Appliance, Inc. | System and method for monitoring cluster partner boot status over a cluster interconnect |
US7069307B1 (en) | 2002-12-20 | 2006-06-27 | Network Appliance, Inc. | System and method for inband management of a virtual disk |
JP4567293B2 (en) * | 2003-01-21 | 2010-10-20 | 株式会社日立製作所 | file server |
US7809693B2 (en) * | 2003-02-10 | 2010-10-05 | Netapp, Inc. | System and method for restoring data on demand for instant volume restoration |
US7769722B1 (en) | 2006-12-08 | 2010-08-03 | Emc Corporation | Replication and restoration of multiple data storage object types in a data network |
US7360072B1 (en) * | 2003-03-28 | 2008-04-15 | Cisco Technology, Inc. | iSCSI system OS boot configuration modification |
US7457982B2 (en) | 2003-04-11 | 2008-11-25 | Network Appliance, Inc. | Writable virtual disk of read-only snapshot file objects |
US7383378B1 (en) * | 2003-04-11 | 2008-06-03 | Network Appliance, Inc. | System and method for supporting file and block access to storage object on a storage appliance |
US7293152B1 (en) * | 2003-04-23 | 2007-11-06 | Network Appliance, Inc. | Consistent logical naming of initiator groups |
US7330862B1 (en) | 2003-04-25 | 2008-02-12 | Network Appliance, Inc. | Zero copy write datapath |
US7181439B1 (en) * | 2003-04-25 | 2007-02-20 | Network Appliance, Inc. | System and method for transparently accessing a virtual disk using a file-based protocol |
US7716323B2 (en) * | 2003-07-18 | 2010-05-11 | Netapp, Inc. | System and method for reliable peer communication in a clustered storage system |
US7593996B2 (en) * | 2003-07-18 | 2009-09-22 | Netapp, Inc. | System and method for establishing a peer connection using reliable RDMA primitives |
US7239989B2 (en) * | 2003-07-18 | 2007-07-03 | Oracle International Corporation | Within-distance query pruning in an R-tree index |
US7055014B1 (en) * | 2003-08-11 | 2006-05-30 | Network Applicance, Inc. | User interface system for a multi-protocol storage appliance |
US7953819B2 (en) * | 2003-08-22 | 2011-05-31 | Emc Corporation | Multi-protocol sharable virtual storage objects |
US7647451B1 (en) | 2003-11-24 | 2010-01-12 | Netapp, Inc. | Data placement technique for striping data containers across volumes of a storage system cluster |
US7698289B2 (en) * | 2003-12-02 | 2010-04-13 | Netapp, Inc. | Storage system architecture for striping data container content across volumes of a cluster |
US7478101B1 (en) | 2003-12-23 | 2009-01-13 | Networks Appliance, Inc. | System-independent data format in a mirrored storage system environment and method for using the same |
US7921110B1 (en) | 2003-12-23 | 2011-04-05 | Netapp, Inc. | System and method for comparing data sets |
US7340639B1 (en) | 2004-01-08 | 2008-03-04 | Network Appliance, Inc. | System and method for proxying data access commands in a clustered storage system |
JP4477365B2 (en) * | 2004-01-29 | 2010-06-09 | 株式会社日立製作所 | Storage device having a plurality of interfaces and control method of the storage device |
US7293195B1 (en) | 2004-01-29 | 2007-11-06 | Network Appliance, Inc. | System and method for coordinated bringup of a storage appliance in a cluster configuration |
US7949792B2 (en) | 2004-02-27 | 2011-05-24 | Cisco Technology, Inc. | Encoding a TCP offload engine within FCP |
US7966293B1 (en) | 2004-03-09 | 2011-06-21 | Netapp, Inc. | System and method for indexing a backup using persistent consistency point images |
US20050210028A1 (en) * | 2004-03-18 | 2005-09-22 | Shoji Kodama | Data write protection in a storage area network and network attached storage mixed environment |
US8230085B2 (en) * | 2004-04-12 | 2012-07-24 | Netapp, Inc. | System and method for supporting block-based protocols on a virtual storage appliance executing within a physical storage appliance |
US7328144B1 (en) | 2004-04-28 | 2008-02-05 | Network Appliance, Inc. | System and method for simulating a software protocol stack using an emulated protocol over an emulated network |
US8621029B1 (en) * | 2004-04-28 | 2013-12-31 | Netapp, Inc. | System and method for providing remote direct memory access over a transport medium that does not natively support remote direct memory access operations |
US8996455B2 (en) * | 2004-04-30 | 2015-03-31 | Netapp, Inc. | System and method for configuring a storage network utilizing a multi-protocol storage appliance |
US7409494B2 (en) | 2004-04-30 | 2008-08-05 | Network Appliance, Inc. | Extension of write anywhere file system layout |
US7430571B2 (en) * | 2004-04-30 | 2008-09-30 | Network Appliance, Inc. | Extension of write anywhere file layout write allocation |
US7409511B2 (en) * | 2004-04-30 | 2008-08-05 | Network Appliance, Inc. | Cloning technique for efficiently creating a copy of a volume in a storage system |
JP2005321913A (en) * | 2004-05-07 | 2005-11-17 | Hitachi Ltd | Computer system with file sharing device, and transfer method of file sharing device |
US7761284B2 (en) * | 2004-08-30 | 2010-07-20 | Overland Storage, Inc. | Tape emulating disk based storage system and method with automatically resized emulated tape capacity |
US7260678B1 (en) | 2004-10-13 | 2007-08-21 | Network Appliance, Inc. | System and method for determining disk ownership model |
US7730277B1 (en) | 2004-10-25 | 2010-06-01 | Netapp, Inc. | System and method for using pvbn placeholders in a flexible volume of a storage system |
US7769975B2 (en) * | 2004-11-15 | 2010-08-03 | International Business Machines Corporation | Method for configuring volumes in a storage system |
US7523286B2 (en) * | 2004-11-19 | 2009-04-21 | Network Appliance, Inc. | System and method for real-time balancing of user workload across multiple storage systems with shared back end storage |
US7844444B1 (en) * | 2004-11-23 | 2010-11-30 | Sanblaze Technology, Inc. | Fibre channel disk emulator system and method |
US7506111B1 (en) * | 2004-12-20 | 2009-03-17 | Network Appliance, Inc. | System and method for determining a number of overwitten blocks between data containers |
US7409495B1 (en) * | 2004-12-22 | 2008-08-05 | Symantec Operating Corporation | Method and apparatus for providing a temporal storage appliance with block virtualization in storage networks |
FR2880444B1 (en) * | 2005-01-06 | 2007-03-09 | Gemplus Sa | DATA STORAGE DEVICE |
US8180855B2 (en) * | 2005-01-27 | 2012-05-15 | Netapp, Inc. | Coordinated shared storage architecture |
US8019842B1 (en) | 2005-01-27 | 2011-09-13 | Netapp, Inc. | System and method for distributing enclosure services data to coordinate shared storage |
US7574464B2 (en) * | 2005-02-14 | 2009-08-11 | Netapp, Inc. | System and method for enabling a storage system to support multiple volume formats simultaneously |
US8291160B2 (en) * | 2005-02-17 | 2012-10-16 | Overland Storage, Inc. | Tape library emulation with automatic configuration and data retention |
US20060195425A1 (en) * | 2005-02-28 | 2006-08-31 | Microsoft Corporation | Composable query building API and query language |
US7747836B2 (en) * | 2005-03-08 | 2010-06-29 | Netapp, Inc. | Integrated storage virtualization and switch system |
US7757056B1 (en) | 2005-03-16 | 2010-07-13 | Netapp, Inc. | System and method for efficiently calculating storage required to split a clone volume |
JP4574408B2 (en) * | 2005-03-24 | 2010-11-04 | 株式会社日立製作所 | Storage system control technology |
US7689609B2 (en) * | 2005-04-25 | 2010-03-30 | Netapp, Inc. | Architecture for supporting sparse volumes |
US8055702B2 (en) * | 2005-04-25 | 2011-11-08 | Netapp, Inc. | System and method for caching network file systems |
US8073899B2 (en) * | 2005-04-29 | 2011-12-06 | Netapp, Inc. | System and method for proxying data access commands in a storage system cluster |
US7904649B2 (en) | 2005-04-29 | 2011-03-08 | Netapp, Inc. | System and method for restriping data across a plurality of volumes |
US7617370B2 (en) * | 2005-04-29 | 2009-11-10 | Netapp, Inc. | Data allocation within a storage system architecture |
US7698501B1 (en) | 2005-04-29 | 2010-04-13 | Netapp, Inc. | System and method for utilizing sparse data containers in a striped volume set |
US7962689B1 (en) | 2005-04-29 | 2011-06-14 | Netapp, Inc. | System and method for performing transactional processing in a striped volume set |
US7743210B1 (en) | 2005-04-29 | 2010-06-22 | Netapp, Inc. | System and method for implementing atomic cross-stripe write operations in a striped volume set |
US7698334B2 (en) * | 2005-04-29 | 2010-04-13 | Netapp, Inc. | System and method for multi-tiered meta-data caching and distribution in a clustered computer environment |
US20060253658A1 (en) * | 2005-05-04 | 2006-11-09 | International Business Machines Corporation | Provisioning or de-provisioning shared or reusable storage volumes |
US20060271579A1 (en) * | 2005-05-10 | 2006-11-30 | Arun Batish | Storage usage analysis |
US20060265358A1 (en) * | 2005-05-17 | 2006-11-23 | Junichi Hara | Method and apparatus for providing information to search engines |
US7739318B2 (en) | 2005-06-20 | 2010-06-15 | Netapp, Inc. | System and method for maintaining mappings from data containers to their parent directories |
US20070022314A1 (en) * | 2005-07-22 | 2007-01-25 | Pranoop Erasani | Architecture and method for configuring a simplified cluster over a network with fencing and quorum |
US7653682B2 (en) * | 2005-07-22 | 2010-01-26 | Netapp, Inc. | Client failure fencing mechanism for fencing network file system data in a host-cluster environment |
US7516285B1 (en) | 2005-07-22 | 2009-04-07 | Network Appliance, Inc. | Server side API for fencing cluster hosts via export access rights |
US8484213B2 (en) * | 2005-08-31 | 2013-07-09 | International Business Machines Corporation | Heterogenous high availability cluster manager |
US7650366B1 (en) | 2005-09-09 | 2010-01-19 | Netapp, Inc. | System and method for generating a crash consistent persistent consistency point image set |
US9990133B2 (en) * | 2005-09-12 | 2018-06-05 | Oracle America, Inc. | Storage library client interface system and method |
US7707193B2 (en) * | 2005-09-22 | 2010-04-27 | Netapp, Inc. | System and method for verifying and restoring the consistency of inode to pathname mappings in a filesystem |
US20070088917A1 (en) * | 2005-10-14 | 2007-04-19 | Ranaweera Samantha L | System and method for creating and maintaining a logical serial attached SCSI communication channel among a plurality of storage systems |
US7467276B1 (en) | 2005-10-25 | 2008-12-16 | Network Appliance, Inc. | System and method for automatic root volume creation |
EP1949214B1 (en) | 2005-10-28 | 2012-12-19 | Network Appliance, Inc. | System and method for optimizing multi-pathing support in a distributed storage system environment |
US8549252B2 (en) * | 2005-12-13 | 2013-10-01 | Emc Corporation | File based volumes and file systems |
US7693864B1 (en) | 2006-01-03 | 2010-04-06 | Netapp, Inc. | System and method for quickly determining changed metadata using persistent consistency point image differencing |
US7734603B1 (en) | 2006-01-26 | 2010-06-08 | Netapp, Inc. | Content addressable storage array element |
US8560503B1 (en) | 2006-01-26 | 2013-10-15 | Netapp, Inc. | Content addressable storage system |
CN100423491C (en) | 2006-03-08 | 2008-10-01 | 杭州华三通信技术有限公司 | Virtual network storing system and network storing equipment thereof |
US7734951B1 (en) | 2006-03-20 | 2010-06-08 | Netapp, Inc. | System and method for data protection management in a logical namespace of a storage system environment |
US8285817B1 (en) | 2006-03-20 | 2012-10-09 | Netapp, Inc. | Migration engine for use in a logical namespace of a storage system environment |
US7590660B1 (en) | 2006-03-21 | 2009-09-15 | Network Appliance, Inc. | Method and system for efficient database cloning |
US7926049B1 (en) | 2006-03-23 | 2011-04-12 | Netapp, Inc. | System and method for determining differences between software configurations |
US7565519B1 (en) | 2006-03-23 | 2009-07-21 | Netapp, Inc. | System and method for automatically upgrading/reverting configurations across a plurality of product release lines |
US8260831B2 (en) * | 2006-03-31 | 2012-09-04 | Netapp, Inc. | System and method for implementing a flexible storage manager with threshold control |
US8090908B1 (en) | 2006-04-26 | 2012-01-03 | Netapp, Inc. | Single nodename cluster system for fibre channel |
US8165221B2 (en) | 2006-04-28 | 2012-04-24 | Netapp, Inc. | System and method for sampling based elimination of duplicate data |
US7769723B2 (en) * | 2006-04-28 | 2010-08-03 | Netapp, Inc. | System and method for providing continuous data protection |
US7464238B1 (en) | 2006-04-28 | 2008-12-09 | Network Appliance, Inc. | System and method for verifying the consistency of mirrored data sets |
US9026495B1 (en) | 2006-05-26 | 2015-05-05 | Netapp, Inc. | System and method for creating and accessing a host-accessible storage entity |
US20070288535A1 (en) * | 2006-06-13 | 2007-12-13 | Hitachi, Ltd. | Long-term data archiving system and method |
US8185751B2 (en) * | 2006-06-27 | 2012-05-22 | Emc Corporation | Achieving strong cryptographic correlation between higher level semantic units and lower level components in a secure data storage system |
US7921077B2 (en) * | 2006-06-29 | 2011-04-05 | Netapp, Inc. | System and method for managing data deduplication of storage systems utilizing persistent consistency point images |
US8412682B2 (en) * | 2006-06-29 | 2013-04-02 | Netapp, Inc. | System and method for retrieving and using block fingerprints for data deduplication |
US8010509B1 (en) | 2006-06-30 | 2011-08-30 | Netapp, Inc. | System and method for verifying and correcting the consistency of mirrored data sets |
US7747584B1 (en) | 2006-08-22 | 2010-06-29 | Netapp, Inc. | System and method for enabling de-duplication in a storage system architecture |
US7526619B1 (en) * | 2006-09-05 | 2009-04-28 | Nvidia Corporation | Method for providing emulated flexible magnetic storage medium using network storage services |
US7971234B1 (en) | 2006-09-15 | 2011-06-28 | Netapp, Inc. | Method and apparatus for offline cryptographic key establishment |
US8245050B1 (en) | 2006-09-29 | 2012-08-14 | Netapp, Inc. | System and method for initial key establishment using a split knowledge protocol |
US7739546B1 (en) | 2006-10-20 | 2010-06-15 | Netapp, Inc. | System and method for storing and retrieving file system log information in a clustered computer system |
US7720889B1 (en) | 2006-10-31 | 2010-05-18 | Netapp, Inc. | System and method for nearly in-band search indexing |
US8996487B1 (en) | 2006-10-31 | 2015-03-31 | Netapp, Inc. | System and method for improving the relevance of search results using data container access patterns |
US7933921B2 (en) | 2006-11-29 | 2011-04-26 | Netapp, Inc. | Referent-controlled location resolution of resources in a federated distributed system |
US7711683B1 (en) | 2006-11-30 | 2010-05-04 | Netapp, Inc. | Method and system for maintaining disk location via homeness |
US7546302B1 (en) * | 2006-11-30 | 2009-06-09 | Netapp, Inc. | Method and system for improved resource giveback |
US7613947B1 (en) | 2006-11-30 | 2009-11-03 | Netapp, Inc. | System and method for storage takeover |
US8706833B1 (en) | 2006-12-08 | 2014-04-22 | Emc Corporation | Data storage server having common replication architecture for multiple storage object types |
US8489811B1 (en) | 2006-12-29 | 2013-07-16 | Netapp, Inc. | System and method for addressing data containers using data set identifiers |
US8301673B2 (en) * | 2006-12-29 | 2012-10-30 | Netapp, Inc. | System and method for performing distributed consistency verification of a clustered file system |
US20080177907A1 (en) * | 2007-01-23 | 2008-07-24 | Paul Boerger | Method and system of a peripheral port of a server system |
US7853750B2 (en) * | 2007-01-30 | 2010-12-14 | Netapp, Inc. | Method and an apparatus to store data patterns |
US7865663B1 (en) * | 2007-02-16 | 2011-01-04 | Vmware, Inc. | SCSI protocol emulation for virtual storage device stored on NAS device |
US8868495B2 (en) * | 2007-02-21 | 2014-10-21 | Netapp, Inc. | System and method for indexing user data on storage systems |
US7870356B1 (en) | 2007-02-22 | 2011-01-11 | Emc Corporation | Creation of snapshot copies using a sparse file for keeping a record of changed blocks |
US8312046B1 (en) | 2007-02-28 | 2012-11-13 | Netapp, Inc. | System and method for enabling a data container to appear in a plurality of locations in a super-namespace |
US8024518B1 (en) | 2007-03-02 | 2011-09-20 | Netapp, Inc. | Optimizing reads for verification of a mirrored file system |
US8219821B2 (en) | 2007-03-27 | 2012-07-10 | Netapp, Inc. | System and method for signature based data container recognition |
US7653612B1 (en) | 2007-03-28 | 2010-01-26 | Emc Corporation | Data protection services offload using shallow files |
US8312214B1 (en) | 2007-03-28 | 2012-11-13 | Netapp, Inc. | System and method for pausing disk drives in an aggregate |
US7734947B1 (en) | 2007-04-17 | 2010-06-08 | Netapp, Inc. | System and method for virtual interface failover within a cluster |
US9134921B1 (en) | 2007-04-23 | 2015-09-15 | Netapp, Inc. | Uniquely naming storage devices in a global storage environment |
US20080270480A1 (en) * | 2007-04-26 | 2008-10-30 | Hanes David H | Method and system of deleting files from a remote server |
US8611542B1 (en) | 2007-04-26 | 2013-12-17 | Netapp, Inc. | Peer to peer key synchronization |
US8219749B2 (en) * | 2007-04-27 | 2012-07-10 | Netapp, Inc. | System and method for efficient updates of sequential block storage |
US7882304B2 (en) * | 2007-04-27 | 2011-02-01 | Netapp, Inc. | System and method for efficient updates of sequential block storage |
US7840837B2 (en) * | 2007-04-27 | 2010-11-23 | Netapp, Inc. | System and method for protecting memory during system initialization |
US7827350B1 (en) | 2007-04-27 | 2010-11-02 | Netapp, Inc. | Method and system for promoting a snapshot in a distributed file system |
US8824686B1 (en) | 2007-04-27 | 2014-09-02 | Netapp, Inc. | Cluster key synchronization |
US7958385B1 (en) | 2007-04-30 | 2011-06-07 | Netapp, Inc. | System and method for verification and enforcement of virtual interface failover within a cluster |
US8005993B2 (en) | 2007-04-30 | 2011-08-23 | Hewlett-Packard Development Company, L.P. | System and method of a storage expansion unit for a network attached storage device |
US9110920B1 (en) | 2007-05-03 | 2015-08-18 | Emc Corporation | CIFS access to NFS files and directories by translating NFS file handles into pseudo-pathnames |
US7836331B1 (en) | 2007-05-15 | 2010-11-16 | Netapp, Inc. | System and method for protecting the contents of memory during error conditions |
US8762345B2 (en) * | 2007-05-31 | 2014-06-24 | Netapp, Inc. | System and method for accelerating anchor point detection |
US7797489B1 (en) | 2007-06-01 | 2010-09-14 | Netapp, Inc. | System and method for providing space availability notification in a distributed striped volume set |
US8037524B1 (en) | 2007-06-19 | 2011-10-11 | Netapp, Inc. | System and method for differentiated cross-licensing for services across heterogeneous systems using transient keys |
US7801993B2 (en) * | 2007-07-19 | 2010-09-21 | Hitachi, Ltd. | Method and apparatus for storage-service-provider-aware storage system |
US8209365B2 (en) * | 2007-07-23 | 2012-06-26 | Hewlett-Packard Development Company, L.P. | Technique for virtualizing storage using stateless servers |
US8301791B2 (en) * | 2007-07-26 | 2012-10-30 | Netapp, Inc. | System and method for non-disruptive check of a mirror |
EP2176991B1 (en) * | 2007-08-16 | 2015-10-07 | Fisher Controls International LLC | Network scanning and management in a device type manager of type device |
EP2028603B1 (en) * | 2007-08-20 | 2011-07-13 | NTT DoCoMo, Inc. | External storage medium adapter |
US20090055556A1 (en) * | 2007-08-20 | 2009-02-26 | Ntt Docomo, Inc. | External storage medium adapter |
US8346952B2 (en) * | 2007-08-21 | 2013-01-01 | Netapp, Inc. | De-centralization of group administration authority within a network storage architecture |
US8196182B2 (en) | 2007-08-24 | 2012-06-05 | Netapp, Inc. | Distributed management of crypto module white lists |
US8793226B1 (en) | 2007-08-28 | 2014-07-29 | Netapp, Inc. | System and method for estimating duplicate data |
US9774445B1 (en) | 2007-09-04 | 2017-09-26 | Netapp, Inc. | Host based rekeying |
US7756832B1 (en) | 2007-09-21 | 2010-07-13 | Netapp, Inc. | Apparatus and method for providing upgrade compatibility |
US8903772B1 (en) | 2007-10-25 | 2014-12-02 | Emc Corporation | Direct or indirect mapping policy for data blocks of a file in a file system |
US7983423B1 (en) | 2007-10-29 | 2011-07-19 | Netapp, Inc. | Re-keying based on pre-generated keys |
US7996636B1 (en) | 2007-11-06 | 2011-08-09 | Netapp, Inc. | Uniquely identifying block context signatures in a storage volume hierarchy |
US7904690B2 (en) * | 2007-12-14 | 2011-03-08 | Netapp, Inc. | Policy based storage appliance virtualization |
US7984259B1 (en) | 2007-12-17 | 2011-07-19 | Netapp, Inc. | Reducing load imbalance in a storage system |
US7890504B2 (en) * | 2007-12-19 | 2011-02-15 | Netapp, Inc. | Using the LUN type for storage allocation |
US9507784B2 (en) | 2007-12-21 | 2016-11-29 | Netapp, Inc. | Selective extraction of information from a mirrored image file |
US7904466B1 (en) | 2007-12-21 | 2011-03-08 | Netapp, Inc. | Presenting differences in a file system |
US9128946B2 (en) * | 2007-12-31 | 2015-09-08 | Mastercard International Incorporated | Systems and methods for platform-independent data file transfers |
US8380674B1 (en) | 2008-01-09 | 2013-02-19 | Netapp, Inc. | System and method for migrating lun data between data containers |
US7996607B1 (en) | 2008-01-28 | 2011-08-09 | Netapp, Inc. | Distributing lookup operations in a striped storage system |
US8793117B1 (en) * | 2008-04-16 | 2014-07-29 | Scalable Network Technologies, Inc. | System and method for virtualization of networking system software via emulation |
US8725986B1 (en) | 2008-04-18 | 2014-05-13 | Netapp, Inc. | System and method for volume block number to disk block number mapping |
CN101566927B (en) * | 2008-04-23 | 2010-10-27 | 杭州华三通信技术有限公司 | Memory system, memory controller and data caching method |
US8219564B1 (en) | 2008-04-29 | 2012-07-10 | Netapp, Inc. | Two-dimensional indexes for quick multiple attribute search in a catalog system |
US8200638B1 (en) | 2008-04-30 | 2012-06-12 | Netapp, Inc. | Individual file restore from block-level incremental backups by using client-server backup protocol |
US8046333B1 (en) | 2008-04-30 | 2011-10-25 | Netapp, Inc. | Incremental dump with a metadata container walk using inode-to-parent mapping information |
US8654762B2 (en) * | 2008-05-21 | 2014-02-18 | Telefonaktiebolaget Lm Ericsson (Publ) | Resource pooling in a blade cluster switching center server |
US8745336B2 (en) * | 2008-05-29 | 2014-06-03 | Vmware, Inc. | Offloading storage operations to storage hardware |
US8250043B2 (en) * | 2008-08-19 | 2012-08-21 | Netapp, Inc. | System and method for compression of partially ordered data sets |
US8285687B2 (en) | 2008-08-27 | 2012-10-09 | Netapp, Inc. | System and method for file system level compression using compression group descriptors |
US8307177B2 (en) | 2008-09-05 | 2012-11-06 | Commvault Systems, Inc. | Systems and methods for management of virtualization data |
US8073674B2 (en) * | 2008-09-23 | 2011-12-06 | Oracle America, Inc. | SCSI device emulation in user space facilitating storage virtualization |
US8327186B2 (en) * | 2009-03-10 | 2012-12-04 | Netapp, Inc. | Takeover of a failed node of a cluster storage system on a per aggregate basis |
US8688798B1 (en) | 2009-04-03 | 2014-04-01 | Netapp, Inc. | System and method for a shared write address protocol over a remote direct memory access connection |
US8266136B1 (en) | 2009-04-13 | 2012-09-11 | Netapp, Inc. | Mechanism for performing fast directory lookup in a server system |
US8069366B1 (en) * | 2009-04-29 | 2011-11-29 | Netapp, Inc. | Global write-log device for managing write logs of nodes of a cluster storage system |
US8117388B2 (en) * | 2009-04-30 | 2012-02-14 | Netapp, Inc. | Data distribution through capacity leveling in a striped file system |
GB2470895A (en) * | 2009-06-08 | 2010-12-15 | Mark Klarzynski | Virtualisation of block level storage by compartmentalising data into virtual block files to establish a virtual block file system |
US8463989B2 (en) * | 2009-06-16 | 2013-06-11 | Hitachi, Ltd. | Storage device and method utilizing both block I/O and file I/O access |
US8504529B1 (en) | 2009-06-19 | 2013-08-06 | Netapp, Inc. | System and method for restoring data to a storage device based on a backup image |
US8510806B2 (en) * | 2009-10-22 | 2013-08-13 | Sap Ag | System and method of controlling access to information in a virtual computing environment |
US8527719B2 (en) * | 2009-10-26 | 2013-09-03 | Matthew H. Klapman | Concurrent access to a memory pool shared between a block access device and a graph access device |
US20110121108A1 (en) * | 2009-11-24 | 2011-05-26 | Stephan Rodewald | Plasma polymerization nozzle |
US9015333B2 (en) * | 2009-12-18 | 2015-04-21 | Cisco Technology, Inc. | Apparatus and methods for handling network file operations over a fibre channel network |
US8281105B2 (en) * | 2010-01-20 | 2012-10-02 | Hitachi, Ltd. | I/O conversion method and apparatus for storage system |
JP5244831B2 (en) * | 2010-01-25 | 2013-07-24 | 株式会社日立製作所 | Computer system and integrated storage management method |
US8086638B1 (en) | 2010-03-31 | 2011-12-27 | Emc Corporation | File handle banking to provide non-disruptive migration of files |
WO2011145148A1 (en) * | 2010-05-20 | 2011-11-24 | Hitachi Software Engineering Co., Ltd. | Computer system and storage capacity extension method |
US11449394B2 (en) | 2010-06-04 | 2022-09-20 | Commvault Systems, Inc. | Failover systems and methods for performing backup operations, including heterogeneous indexing and load balancing of backup and indexing resources |
CN101986651B (en) * | 2010-08-26 | 2013-01-30 | 上海网众信息技术有限公司 | Remote storage method, remote storage system and client |
CN101938523A (en) * | 2010-09-16 | 2011-01-05 | 华中科技大学 | Convergence method of iSCSI (Internet Small Computer System Interface) and FCP (Fiber Channel Protocol) and application thereof to disaster recovery |
US8620870B2 (en) | 2010-09-30 | 2013-12-31 | Commvault Systems, Inc. | Efficient data management improvements, such as docking limited-feature data management modules to a full-featured data management system |
US8495331B2 (en) | 2010-12-22 | 2013-07-23 | Hitachi, Ltd. | Storage apparatus and storage management method for storing entries in management tables |
CN102223409B (en) * | 2011-06-13 | 2013-08-21 | 浪潮(北京)电子信息产业有限公司 | Network storage resource application system and method |
US9461881B2 (en) | 2011-09-30 | 2016-10-04 | Commvault Systems, Inc. | Migration of existing computing systems to cloud computing sites or virtual machines |
US9116633B2 (en) | 2011-09-30 | 2015-08-25 | Commvault Systems, Inc. | Information management of virtual machines having mapped storage devices |
US8959389B2 (en) | 2011-11-23 | 2015-02-17 | International Business Machines Corporation | Use of a virtual drive as a hot spare for a raid group |
US10019159B2 (en) | 2012-03-14 | 2018-07-10 | Open Invention Network Llc | Systems, methods and devices for management of virtual memory systems |
US9639297B2 (en) * | 2012-03-30 | 2017-05-02 | Commvault Systems, Inc | Shared network-available storage that permits concurrent data access |
US9063938B2 (en) | 2012-03-30 | 2015-06-23 | Commvault Systems, Inc. | Search filtered file system using secondary storage, including multi-dimensional indexing and searching of archived files |
US8924443B2 (en) * | 2012-10-05 | 2014-12-30 | Gary Robin Maze | Document management systems and methods |
US20140181038A1 (en) | 2012-12-21 | 2014-06-26 | Commvault Systems, Inc. | Systems and methods to categorize unprotected virtual machines |
US9223597B2 (en) | 2012-12-21 | 2015-12-29 | Commvault Systems, Inc. | Archiving virtual machines in a data storage system |
CN103067488A (en) * | 2012-12-25 | 2013-04-24 | 中国科学院深圳先进技术研究院 | Implement method of unified storage |
US9378035B2 (en) | 2012-12-28 | 2016-06-28 | Commvault Systems, Inc. | Systems and methods for repurposing virtual machines |
US9020977B1 (en) * | 2012-12-31 | 2015-04-28 | Emc Corporation | Managing multiprotocol directories |
US20140196039A1 (en) | 2013-01-08 | 2014-07-10 | Commvault Systems, Inc. | Virtual machine categorization system and method |
US9015123B1 (en) | 2013-01-16 | 2015-04-21 | Netapp, Inc. | Methods and systems for identifying changed data in an expandable storage volume |
JP5877552B1 (en) | 2013-01-28 | 2016-03-08 | 株式会社日立製作所 | Storage system and resource allocation method |
US9286007B1 (en) | 2013-03-14 | 2016-03-15 | Emc Corporation | Unified datapath architecture |
US10447524B1 (en) | 2013-03-14 | 2019-10-15 | EMC IP Holding Company LLC | Unified datapath processing with virtualized storage processors |
US9424117B1 (en) * | 2013-03-15 | 2016-08-23 | Emc Corporation | Virtual storage processor failover |
US9507787B1 (en) | 2013-03-15 | 2016-11-29 | EMC IP Holding Company LLC | Providing mobility to virtual storage processors |
US9280555B1 (en) | 2013-03-29 | 2016-03-08 | Emc Corporation | Unified data protection for block and file objects |
US9122697B1 (en) * | 2013-03-29 | 2015-09-01 | Emc Corporation | Unified data services for block and file objects |
CN103257941B (en) * | 2013-04-17 | 2015-09-23 | 浪潮(北京)电子信息产业有限公司 | Multi-protocol storage controller and system |
US9400792B1 (en) | 2013-06-27 | 2016-07-26 | Emc Corporation | File system inline fine grained tiering |
US9535630B1 (en) * | 2013-06-27 | 2017-01-03 | EMC IP Holding Company LLC | Leveraging array operations at virtualized storage processor level |
US9430492B1 (en) | 2013-06-28 | 2016-08-30 | Emc Corporation | Efficient scavenging of data and metadata file system blocks |
US9355121B1 (en) | 2013-06-28 | 2016-05-31 | Emc Corporation | Segregating data and metadata in a file system |
US20150074536A1 (en) | 2013-09-12 | 2015-03-12 | Commvault Systems, Inc. | File manager integration with virtualization in an information management system, including user control and storage management of virtual machines |
US9378261B1 (en) | 2013-09-30 | 2016-06-28 | Emc Corporation | Unified synchronous replication for block and file objects |
US9330155B1 (en) * | 2013-09-30 | 2016-05-03 | Emc Corporation | Unified management of sync and async replication for block and file objects |
US9305009B1 (en) | 2013-09-30 | 2016-04-05 | Emc Corporation | Synchronous replication of virtualized storage processors |
US9378219B1 (en) | 2013-09-30 | 2016-06-28 | Emc Corporation | Metro-cluster based on synchronous replication of virtualized storage processors |
CN103617130A (en) * | 2013-11-15 | 2014-03-05 | 浪潮(北京)电子信息产业有限公司 | Multiple-protocol-supportive storage virtualization system |
EP3074873A4 (en) * | 2013-11-26 | 2017-08-16 | Intel Corporation | Method and apparatus for storing data |
CN103607465A (en) * | 2013-11-27 | 2014-02-26 | 浪潮电子信息产业股份有限公司 | Fusion link storage system |
US9880777B1 (en) | 2013-12-23 | 2018-01-30 | EMC IP Holding Company LLC | Embedded synchronous replication for block and file objects |
CN103685566A (en) * | 2013-12-25 | 2014-03-26 | 天津火星科技有限公司 | Mobile-terminal-oriented cloud storage achievement method |
US10915468B2 (en) * | 2013-12-26 | 2021-02-09 | Intel Corporation | Sharing memory and I/O services between nodes |
US9430480B1 (en) | 2013-12-31 | 2016-08-30 | Emc Corporation | Active-active metro-cluster scale-out for unified data path architecture |
US9069783B1 (en) | 2013-12-31 | 2015-06-30 | Emc Corporation | Active-active scale-out for unified data path architecture |
US9842026B2 (en) | 2013-12-31 | 2017-12-12 | Netapp, Inc. | Snapshot-protected consistency checking file systems |
US9811427B2 (en) | 2014-04-02 | 2017-11-07 | Commvault Systems, Inc. | Information management by a media agent in the absence of communications with a storage manager |
JP2015225603A (en) * | 2014-05-29 | 2015-12-14 | 富士通株式会社 | Storage control device, storage control method, and storage control program |
US9916312B1 (en) | 2014-06-30 | 2018-03-13 | EMC IP Holding Company LLC | Coordination of file system creation to ensure more deterministic performance characteristics |
US9690803B1 (en) | 2014-06-30 | 2017-06-27 | EMC IP Holding Company LLC | Auxiliary files in a container file system |
US10853311B1 (en) * | 2014-07-03 | 2020-12-01 | Pure Storage, Inc. | Administration through files in a storage system |
US20160019317A1 (en) | 2014-07-16 | 2016-01-21 | Commvault Systems, Inc. | Volume or virtual machine level backup and generating placeholders for virtual machine files |
US9880928B1 (en) | 2014-09-26 | 2018-01-30 | EMC IP Holding Company LLC | Storing compressed and uncompressed data in blocks having different allocation unit sizes |
US9158811B1 (en) | 2014-10-09 | 2015-10-13 | Splunk, Inc. | Incident review interface |
US11501238B2 (en) | 2014-10-09 | 2022-11-15 | Splunk Inc. | Per-entity breakdown of key performance indicators |
US11755559B1 (en) | 2014-10-09 | 2023-09-12 | Splunk Inc. | Automatic entity control in a machine data driven service monitoring system |
US11275775B2 (en) | 2014-10-09 | 2022-03-15 | Splunk Inc. | Performing search queries for key performance indicators using an optimized common information model |
US10536353B2 (en) | 2014-10-09 | 2020-01-14 | Splunk Inc. | Control interface for dynamic substitution of service monitoring dashboard source data |
US10447555B2 (en) | 2014-10-09 | 2019-10-15 | Splunk Inc. | Aggregate key performance indicator spanning multiple services |
US10474680B2 (en) | 2014-10-09 | 2019-11-12 | Splunk Inc. | Automatic entity definitions |
US9210056B1 (en) | 2014-10-09 | 2015-12-08 | Splunk Inc. | Service monitoring interface |
US11087263B2 (en) | 2014-10-09 | 2021-08-10 | Splunk Inc. | System monitoring with key performance indicators from shared base search of machine data |
US9760240B2 (en) | 2014-10-09 | 2017-09-12 | Splunk Inc. | Graphical user interface for static and adaptive thresholds |
US11296955B1 (en) | 2014-10-09 | 2022-04-05 | Splunk Inc. | Aggregate key performance indicator spanning multiple services and based on a priority value |
US9146962B1 (en) | 2014-10-09 | 2015-09-29 | Splunk, Inc. | Identifying events using informational fields |
US10193775B2 (en) | 2014-10-09 | 2019-01-29 | Splunk Inc. | Automatic event group action interface |
US10417225B2 (en) | 2015-09-18 | 2019-09-17 | Splunk Inc. | Entity detail monitoring console |
US11671312B2 (en) | 2014-10-09 | 2023-06-06 | Splunk Inc. | Service detail monitoring console |
US9146954B1 (en) | 2014-10-09 | 2015-09-29 | Splunk, Inc. | Creating entity definition from a search result set |
US10505825B1 (en) | 2014-10-09 | 2019-12-10 | Splunk Inc. | Automatic creation of related event groups for IT service monitoring |
US10209956B2 (en) | 2014-10-09 | 2019-02-19 | Splunk Inc. | Automatic event group actions |
US10592093B2 (en) | 2014-10-09 | 2020-03-17 | Splunk Inc. | Anomaly detection |
US10305758B1 (en) | 2014-10-09 | 2019-05-28 | Splunk Inc. | Service monitoring interface reflecting by-service mode |
US9491059B2 (en) | 2014-10-09 | 2016-11-08 | Splunk Inc. | Topology navigator for IT services |
US10235638B2 (en) | 2014-10-09 | 2019-03-19 | Splunk Inc. | Adaptive key performance indicator thresholds |
US9864797B2 (en) | 2014-10-09 | 2018-01-09 | Splunk Inc. | Defining a new search based on displayed graph lanes |
US11200130B2 (en) | 2015-09-18 | 2021-12-14 | Splunk Inc. | Automatic entity control in a machine data driven service monitoring system |
US11455590B2 (en) | 2014-10-09 | 2022-09-27 | Splunk Inc. | Service monitoring adaptation for maintenance downtime |
US9130832B1 (en) | 2014-10-09 | 2015-09-08 | Splunk, Inc. | Creating entity definition from a file |
US10417108B2 (en) | 2015-09-18 | 2019-09-17 | Splunk Inc. | Portable control modules in a machine data driven service monitoring system |
US9208463B1 (en) | 2014-10-09 | 2015-12-08 | Splunk Inc. | Thresholds for key performance indicators derived from machine data |
US10776209B2 (en) | 2014-11-10 | 2020-09-15 | Commvault Systems, Inc. | Cross-platform virtual machine backup and replication |
US9983936B2 (en) | 2014-11-20 | 2018-05-29 | Commvault Systems, Inc. | Virtual machine change block tracking |
US20160217175A1 (en) * | 2015-01-23 | 2016-07-28 | Netapp, Inc. | Techniques for asynchronous snapshot invalidation |
US10198155B2 (en) | 2015-01-31 | 2019-02-05 | Splunk Inc. | Interface for automated service discovery in I.T. environments |
US9967351B2 (en) | 2015-01-31 | 2018-05-08 | Splunk Inc. | Automated service discovery in I.T. environments |
US10037251B1 (en) | 2015-03-31 | 2018-07-31 | EMC IP Holding Company LLC | File system rollback to previous point in time |
US9563514B2 (en) | 2015-06-19 | 2017-02-07 | Commvault Systems, Inc. | Assignment of proxies for virtual-machine secondary copy operations including streaming backup jobs |
US10084873B2 (en) | 2015-06-19 | 2018-09-25 | Commvault Systems, Inc. | Assignment of data agent proxies for executing virtual-machine secondary copy operations including streaming backup jobs |
US10705909B2 (en) * | 2015-06-25 | 2020-07-07 | International Business Machines Corporation | File level defined de-clustered redundant array of independent storage devices solution |
US10942815B2 (en) * | 2015-07-09 | 2021-03-09 | Hitachi, Ltd. | Storage control system managing file-level and block-level storage services, and methods for controlling such storage control system |
US10523766B2 (en) * | 2015-08-27 | 2019-12-31 | Infinidat Ltd | Resolving path state conflicts in internet small computer system interfaces |
CN106997274B (en) * | 2016-01-25 | 2021-04-30 | 中兴通讯股份有限公司 | Architecture and method for realizing storage space management |
US11461258B2 (en) | 2016-09-14 | 2022-10-04 | Samsung Electronics Co., Ltd. | Self-configuring baseboard management controller (BMC) |
US20190109720A1 (en) | 2016-07-26 | 2019-04-11 | Samsung Electronics Co., Ltd. | Modular system (switch boards and mid-plane) for supporting 50g or 100g ethernet speeds of fpga+ssd |
US10210123B2 (en) * | 2016-07-26 | 2019-02-19 | Samsung Electronics Co., Ltd. | System and method for supporting multi-path and/or multi-mode NMVe over fabrics devices |
US10942960B2 (en) | 2016-09-26 | 2021-03-09 | Splunk Inc. | Automatic triage model execution in machine data driven monitoring automation apparatus with visualization |
US10942946B2 (en) | 2016-09-26 | 2021-03-09 | Splunk, Inc. | Automatic triage model execution in machine data driven monitoring automation apparatus |
US10747630B2 (en) | 2016-09-30 | 2020-08-18 | Commvault Systems, Inc. | Heartbeat monitoring of virtual machines for initiating failover operations in a data storage management system, including operations by a master monitor node |
US10853320B1 (en) | 2016-09-30 | 2020-12-01 | EMC IP Holding Company LLC | Scavenging directories for free space |
US10162528B2 (en) | 2016-10-25 | 2018-12-25 | Commvault Systems, Inc. | Targeted snapshot based on virtual machine location |
US10628196B2 (en) * | 2016-11-12 | 2020-04-21 | Vmware, Inc. | Distributed iSCSI target for distributed hyper-converged storage |
RU2646312C1 (en) * | 2016-11-14 | 2018-03-02 | Общество с ограниченной ответственностью "ИБС Экспертиза" | Integrated hardware and software system |
US10678758B2 (en) | 2016-11-21 | 2020-06-09 | Commvault Systems, Inc. | Cross-platform virtual machine data and memory backup and replication |
US10949308B2 (en) | 2017-03-15 | 2021-03-16 | Commvault Systems, Inc. | Application aware backup of virtual machines |
US20180276085A1 (en) | 2017-03-24 | 2018-09-27 | Commvault Systems, Inc. | Virtual machine recovery point generation |
US10387073B2 (en) | 2017-03-29 | 2019-08-20 | Commvault Systems, Inc. | External dynamic virtual machine synchronization |
US10853195B2 (en) | 2017-03-31 | 2020-12-01 | Commvault Systems, Inc. | Granular restoration of virtual machine application data |
US10289325B1 (en) | 2017-07-31 | 2019-05-14 | EMC IP Holding Company LLC | Managing multiple tenants in NAS (network attached storage) clusters |
US10831718B1 (en) | 2017-07-31 | 2020-11-10 | EMC IP Holding Company LLC | Managing data using network attached storage (NAS) cluster |
US10789017B1 (en) | 2017-07-31 | 2020-09-29 | EMC IP Holding Company LLC | File system provisioning and management with reduced storage communication |
US10983964B1 (en) | 2017-07-31 | 2021-04-20 | EMC IP Holding Company LLC | Managing file system tailored for cluster deployment |
US11042512B1 (en) | 2017-08-02 | 2021-06-22 | EMC IP Holding Company LLC | Enabling granular snapshots and provisioning in NAS (network attached storage) clusters |
RU178459U1 (en) * | 2017-09-08 | 2018-04-04 | Limited Liability Company "БУЛАТ" | Data storage device |
US11093518B1 (en) | 2017-09-23 | 2021-08-17 | Splunk Inc. | Information technology networked entity monitoring with dynamic metric and threshold selection |
US11106442B1 (en) | 2017-09-23 | 2021-08-31 | Splunk Inc. | Information technology networked entity monitoring with metric selection prior to deployment |
US11159397B2 (en) | 2017-09-25 | 2021-10-26 | Splunk Inc. | Lower-tier application deployment for higher-tier system data monitoring |
CN107656704A (en) * | 2017-09-28 | 2018-02-02 | Zhengzhou Yunhai Information Technology Co., Ltd. | Multi-protocol data sharing storage method, device, equipment and computer-readable storage medium |
US10740192B2 (en) | 2018-01-31 | 2020-08-11 | EMC IP Holding Company LLC | Restoring NAS servers from the cloud |
US11042448B2 (en) | 2018-01-31 | 2021-06-22 | EMC IP Holding Company LLC | Archiving NAS servers to the cloud |
US10848545B2 (en) | 2018-01-31 | 2020-11-24 | EMC IP Holding Company LLC | Managing cloud storage of block-based and file-based data |
US10764180B1 (en) * | 2018-02-20 | 2020-09-01 | Toshiba Memory Corporation | System and method for storing data using software defined networks |
US10877928B2 (en) | 2018-03-07 | 2020-12-29 | Commvault Systems, Inc. | Using utilities injected into cloud-based virtual machines for speeding up virtual machine backup operations |
RU182176U1 (en) * | 2018-04-18 | 2018-08-06 | Limited Liability Company "БУЛАТ" | Data storage device |
RU184681U1 (en) * | 2018-04-18 | 2018-11-02 | Limited Liability Company "БУЛАТ" | Data storage device |
CN110879760B (en) * | 2018-09-05 | 2022-09-02 | Beijing Jingsha Software Technology Co., Ltd. | Unified storage system and method, and electronic device |
US11172052B2 (en) | 2018-09-13 | 2021-11-09 | International Business Machines Corporation | Merging storage protocols |
CN109117099A (en) * | 2018-10-23 | 2019-01-01 | Xi'an Mobeike Semiconductor Technology Co., Ltd. | SAN fabric cabinet management system and data manipulation method |
US11604712B2 (en) | 2018-11-16 | 2023-03-14 | VMware, Inc. | Active-active architecture for distributed iSCSI target in hyper-converged storage |
US11200124B2 (en) | 2018-12-06 | 2021-12-14 | Commvault Systems, Inc. | Assigning backup resources based on failover of partnered data storage servers in a data storage management system |
US10768971B2 (en) | 2019-01-30 | 2020-09-08 | Commvault Systems, Inc. | Cross-hypervisor live mount of backed up virtual machine data |
US10970257B2 (en) | 2019-01-31 | 2021-04-06 | EMC IP Holding Company LLC | Replicating file systems via cloud storage |
JP7288085B2 (en) * | 2019-05-17 | 2023-06-06 | Hitachi Vantara LLC | Apparatus, system and method for managing an object-based file system |
RU194502U1 (en) * | 2019-06-26 | 2019-12-12 | Limited Liability Company "БУЛАТ" | Data storage device |
US11281541B2 (en) | 2020-01-15 | 2022-03-22 | EMC IP Holding Company LLC | Dynamic snapshot backup in multi-cloud environment |
US11507409B2 (en) | 2020-01-22 | 2022-11-22 | VMware, Inc. | Object-based load balancing approaches in distributed storage system |
US11500667B2 (en) | 2020-01-22 | 2022-11-15 | VMware, Inc. | Object-based approaches to support Internet Small Computer System Interface (iSCSI) services in distributed storage system |
US11467753B2 (en) | 2020-02-14 | 2022-10-11 | Commvault Systems, Inc. | On-demand restore of virtual machine data |
CN111399771B (en) * | 2020-02-28 | 2023-01-10 | Suzhou Inspur Intelligent Technology Co., Ltd. | Protocol configuration method, device and equipment for an MCS storage system |
US11442768B2 (en) | 2020-03-12 | 2022-09-13 | Commvault Systems, Inc. | Cross-hypervisor live recovery of virtual machines |
US11099956B1 (en) | 2020-03-26 | 2021-08-24 | Commvault Systems, Inc. | Snapshot-based disaster recovery orchestration of virtual machine failover and failback operations |
US11748143B2 (en) | 2020-05-15 | 2023-09-05 | Commvault Systems, Inc. | Live mount of virtual machines in a public cloud computing environment |
US11023134B1 (en) * | 2020-05-22 | 2021-06-01 | EMC IP Holding Company LLC | Addition of data services to an operating system running a native multi-path input-output architecture |
CN112379826A (en) * | 2020-10-22 | 2021-02-19 | Zhongke Rebei (Beijing) Cloud Computing Technology Co., Ltd. | Application method of storage integration technology |
US11656951B2 (en) | 2020-10-28 | 2023-05-23 | Commvault Systems, Inc. | Data loss vulnerability detection |
US11676072B1 (en) | 2021-01-29 | 2023-06-13 | Splunk Inc. | Interface for incorporating user feedback into training of clustering model |
US11947501B2 (en) * | 2021-10-21 | 2024-04-02 | Dell Products L.P. | Two-hierarchy file system |
CN116126812B (en) * | 2023-02-27 | 2024-02-23 | Kaiyuan Shuzhi Engineering Consulting Group Co., Ltd. | Method and system for storing and integrating engineering industry files |
Family Cites Families (97)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4156907A (en) | 1977-03-02 | 1979-05-29 | Burroughs Corporation | Data communications subsystem |
US4399503A (en) | 1978-06-30 | 1983-08-16 | Bunker Ramo Corporation | Dynamic disk buffer control unit |
US4598357A (en) | 1980-11-14 | 1986-07-01 | Sperry Corporation | Cache/disk subsystem with file number for recovery of cached data |
US4837675A (en) | 1981-10-05 | 1989-06-06 | Digital Equipment Corporation | Secondary storage facility employing serial communications between drive and controller
US4570217A (en) | 1982-03-29 | 1986-02-11 | Allen Bruce S | Man machine interface |
JPS60142418A (en) | 1983-12-28 | 1985-07-27 | Hitachi Ltd | Input/output error recovery system |
US4896259A (en) | 1984-09-07 | 1990-01-23 | International Business Machines Corporation | Apparatus for storing modifying data prior to selectively storing data to be modified into a register |
JPS61141056A (en) | 1984-12-14 | 1986-06-28 | International Business Machines Corporation | Intermittent error detection for volatile memory
US5202979A (en) | 1985-05-08 | 1993-04-13 | Thinking Machines Corporation | Storage system using multiple independently mechanically-driven storage units |
US4805090A (en) | 1985-09-27 | 1989-02-14 | Unisys Corporation | Peripheral-controller for multiple disk drive modules having different protocols and operating conditions |
US4916608A (en) | 1986-05-30 | 1990-04-10 | International Business Machines Corporation | Provision of virtual storage resources to an operating system control program |
US4761785B1 (en) | 1986-06-12 | 1996-03-12 | IBM | Parity spreading to enhance storage access
USRE34100E (en) | 1987-01-12 | 1992-10-13 | Seagate Technology, Inc. | Data error correction system |
US4843541A (en) | 1987-07-29 | 1989-06-27 | International Business Machines Corporation | Logical resource partitioning of a data processing system |
US5129088A (en) | 1987-11-30 | 1992-07-07 | International Business Machines Corporation | Data processing method to create virtual disks from non-contiguous groups of logically contiguous addressable blocks of direct access storage device |
US4899342A (en) | 1988-02-01 | 1990-02-06 | Thinking Machines Corporation | Method and apparatus for operating multi-unit array of memories |
US4864497A (en) | 1988-04-13 | 1989-09-05 | Digital Equipment Corporation | Method of integrating software application programs using an attributive data model database |
US4993030A (en) | 1988-04-22 | 1991-02-12 | Amdahl Corporation | File system for a plurality of storage classes |
US4989206A (en) | 1988-06-28 | 1991-01-29 | Storage Technology Corporation | Disk drive memory |
US5163131A (en) | 1989-09-08 | 1992-11-10 | Auspex Systems, Inc. | Parallel I/O network file server architecture
EP0490980B1 (en) | 1989-09-08 | 1999-05-06 | Auspex Systems, Inc. | Multiple facility operating system architecture |
US5124987A (en) | 1990-04-16 | 1992-06-23 | Storage Technology Corporation | Logical track write scheduling system for a parallel disk drive array data storage subsystem |
US5155835A (en) | 1990-11-19 | 1992-10-13 | Storage Technology Corporation | Multilevel, hierarchical, dynamically mapped data storage subsystem |
US5278979A (en) | 1990-12-20 | 1994-01-11 | International Business Machines Corp. | Version management system using pointers shared by a plurality of versions for indicating active lines of a version |
US5426747A (en) | 1991-03-22 | 1995-06-20 | Object Design, Inc. | Method and apparatus for virtual memory mapping and transaction management in an object-oriented database system |
US5511177A (en) | 1991-11-21 | 1996-04-23 | Hitachi, Ltd. | File data multiplexing method and data processing system |
US5581724A (en) | 1992-10-19 | 1996-12-03 | Storage Technology Corporation | Dynamically mapped data storage subsystem having multiple open destage cylinders and method of managing that subsystem |
EP0681721B1 (en) | 1993-02-01 | 2005-03-23 | Sun Microsystems, Inc. | Archiving file system for data servers in a distributed network environment |
ATE409907T1 (en) | 1993-06-03 | 2008-10-15 | Network Appliance Inc | METHOD AND DEVICE FOR DESCRIBING ANY AREAS OF A FILE SYSTEM |
DE69431186T2 (en) | 1993-06-03 | 2003-05-08 | Network Appliance Inc | Method and file system for assigning file blocks to storage space in a RAID disk system |
US5963962A (en) | 1995-05-31 | 1999-10-05 | Network Appliance, Inc. | Write anywhere file-system layout |
US6138126A (en) | 1995-05-31 | 2000-10-24 | Network Appliance, Inc. | Method for allocating files in a file system integrated with a raid disk sub-system |
US5566331A (en) * | 1994-01-24 | 1996-10-15 | University Corporation For Atmospheric Research | Mass storage system for file-systems |
DE19513308A1 (en) | 1994-10-04 | 1996-04-11 | Hewlett Packard Co | Virtual node file system for computer data system |
US5907672A (en) | 1995-10-04 | 1999-05-25 | Stac, Inc. | System for backing up computer disk volumes with error remapping of flawed memory addresses |
US5859930A (en) * | 1995-12-06 | 1999-01-12 | FPR Corporation | Fast pattern recognizer utilizing dispersive delay line
US5996047A (en) | 1996-07-01 | 1999-11-30 | Sun Microsystems, Inc. | Method and apparatus for caching file control information corresponding to a second file block in a first file block |
US5828876A (en) | 1996-07-31 | 1998-10-27 | NCR Corporation | File system for a clustered processing system
US5944789A (en) * | 1996-08-14 | 1999-08-31 | EMC Corporation | Network file server maintaining local caches of file directory information in data mover computers
US6148377A (en) | 1996-11-22 | 2000-11-14 | Mangosoft Corporation | Shared memory computer networks |
US6178173B1 (en) * | 1996-12-30 | 2001-01-23 | Paradyne Corporation | System and method for communicating pre-connect information in a digital communication system |
US5897661A (en) | 1997-02-25 | 1999-04-27 | International Business Machines Corporation | Logical volume manager and method having enhanced update capability with dynamic allocation of storage and minimal storage of metadata information |
US5946685A (en) | 1997-06-27 | 1999-08-31 | Sun Microsystems, Inc. | Global mount mechanism used in maintaining a global name space utilizing a distributed locking mechanism |
US5987477A (en) | 1997-07-11 | 1999-11-16 | International Business Machines Corporation | Parallel file system and method for parallel write sharing |
US6807581B1 (en) | 2000-09-29 | 2004-10-19 | Alacritech, Inc. | Intelligent network storage interface system |
US5941972A (en) | 1997-12-31 | 1999-08-24 | Crossroads Systems, Inc. | Storage router and method for providing virtual local storage |
US5996024A (en) | 1998-01-14 | 1999-11-30 | EMC Corporation | Method and apparatus for a SCSI applications server which extracts SCSI commands and data from message and encapsulates SCSI responses to provide transparent operation
US6185655B1 (en) | 1998-01-22 | 2001-02-06 | Bull, S.A. | Computer system with distributed data storing |
US6493811B1 (en) * | 1998-01-26 | 2002-12-10 | Computer Associates Think, Inc. | Intelligent controller accessed through addressable virtual space
US6173374B1 (en) | 1998-02-11 | 2001-01-09 | LSI Logic Corporation | System and method for peer-to-peer accelerated I/O shipping between host bus adapters in clustered computer network
US6173293B1 (en) | 1998-03-13 | 2001-01-09 | Digital Equipment Corporation | Scalable distributed file system |
US6697846B1 (en) | 1998-03-20 | 2004-02-24 | Dataplow, Inc. | Shared file system |
US6397242B1 (en) * | 1998-05-15 | 2002-05-28 | Vmware, Inc. | Virtualization system including a virtual machine monitor for a computer with a segmented architecture |
US6496847B1 (en) * | 1998-05-15 | 2002-12-17 | Vmware, Inc. | System and method for virtualizing computer systems |
US6438642B1 (en) * | 1999-05-18 | 2002-08-20 | Kom Networks Inc. | File-based virtual storage file system, method and computer program product for automated file management on multiple file system storage devices |
US6457021B1 (en) | 1998-08-18 | 2002-09-24 | Microsoft Corporation | In-memory database system |
US6324581B1 (en) | 1999-03-03 | 2001-11-27 | EMC Corporation | File server system using file system storage, data movers, and an exchange of meta data among data movers for file locking and direct access to shared file systems
IE20000203A1 (en) * | 1999-03-25 | 2001-02-21 | ConvergeNet Technologies, Inc. | Storage domain management system
US6275898B1 (en) | 1999-05-13 | 2001-08-14 | LSI Logic Corporation | Methods and structure for RAID level migration within a logical unit
US20020049883A1 (en) | 1999-11-29 | 2002-04-25 | Eric Schneider | System and method for restoring a computer system after a failure |
US6526478B1 (en) | 2000-02-02 | 2003-02-25 | LSI Logic Corporation | RAID LUN creation using proportional disk mapping
US6834326B1 (en) | 2000-02-04 | 2004-12-21 | 3Com Corporation | RAID method and device with network protocol between controller and storage devices |
US20010044879A1 (en) * | 2000-02-18 | 2001-11-22 | Moulton Gregory Hagan | System and method for distributed management of data storage |
US6701449B1 (en) | 2000-04-20 | 2004-03-02 | Ciprico, Inc. | Method and apparatus for monitoring and analyzing network appliance status information |
US6745207B2 (en) * | 2000-06-02 | 2004-06-01 | Hewlett-Packard Development Company, L.P. | System and method for managing virtual storage |
US6618798B1 (en) | 2000-07-11 | 2003-09-09 | International Business Machines Corporation | Method, system, program, and data structures for mapping logical units to a storage space comprising at least one array of storage units
US6636879B1 (en) | 2000-08-18 | 2003-10-21 | Network Appliance, Inc. | Space allocation in a write anywhere file system |
US6977927B1 (en) | 2000-09-18 | 2005-12-20 | Hewlett-Packard Development Company, L.P. | Method and system of allocating storage resources in a storage area network |
US7089293B2 (en) * | 2000-11-02 | 2006-08-08 | Sun Microsystems, Inc. | Switching system method for discovering and accessing SCSI devices in response to query |
US6671773B2 (en) | 2000-12-07 | 2003-12-30 | Spinnaker Networks, LLC | Method and system for responding to file system requests
US6868417B2 (en) | 2000-12-18 | 2005-03-15 | Spinnaker Networks, Inc. | Mechanism for handling file level and block level remote file accesses using the same server |
US7165096B2 (en) * | 2000-12-22 | 2007-01-16 | Data Plow, Inc. | Storage area network file system |
EP1368736A2 (en) * | 2001-01-11 | 2003-12-10 | Z-Force Communications, Inc. | File switch and switched file system |
US6606690B2 (en) * | 2001-02-20 | 2003-08-12 | Hewlett-Packard Development Company, L.P. | System and method for accessing a storage area network as network attached storage |
US20040233910A1 (en) * | 2001-02-23 | 2004-11-25 | Wen-Shyen Chen | Storage area network using a data communication protocol |
US6779063B2 (en) * | 2001-04-09 | 2004-08-17 | Hitachi, Ltd. | Direct access storage system having plural interfaces which permit receipt of block and file I/O requests |
US20020161982A1 (en) * | 2001-04-30 | 2002-10-31 | Erik Riedel | System and method for implementing a storage area network system protocol |
JP4632574B2 (en) * | 2001-05-25 | 2011-02-16 | Hitachi, Ltd. | Storage device, file data backup method, and file data copy method
CN1147793C (en) * | 2001-05-30 | 2004-04-28 | Shenzhen Netac Technology Co., Ltd. | Semiconductor memory device
US7685261B1 (en) * | 2001-06-29 | 2010-03-23 | Symantec Operating Corporation | Extensible architecture for the centralized discovery and management of heterogeneous SAN components |
JP4156817B2 (en) * | 2001-07-27 | 2008-09-24 | Hitachi, Ltd. | Storage system
JP4217273B2 (en) * | 2001-09-17 | 2009-01-28 | Hitachi, Ltd. | Storage system
US7127633B1 (en) * | 2001-11-15 | 2006-10-24 | Xiotech Corporation | System and method to failover storage area network targets from one interface to another |
US6978283B1 (en) | 2001-12-21 | 2005-12-20 | Network Appliance, Inc. | File system defragmentation technique via write allocation |
JP4146653B2 (en) * | 2002-02-28 | 2008-09-10 | Hitachi, Ltd. | Storage device
US7039663B1 (en) | 2002-04-19 | 2006-05-02 | Network Appliance, Inc. | System and method for checkpointing and restarting an asynchronous transfer of data between a source and destination snapshot |
JP2003316713A (en) * | 2002-04-26 | 2003-11-07 | Hitachi Ltd | Storage device system |
US6757778B1 (en) * | 2002-05-07 | 2004-06-29 | Veritas Operating Corporation | Storage management system |
US7328260B1 (en) * | 2002-06-04 | 2008-02-05 | Symantec Operating Corporation | Mapping discovered devices to SAN-manageable objects using configurable rules |
US7194538B1 (en) * | 2002-06-04 | 2007-03-20 | Veritas Operating Corporation | Storage area network (SAN) management system for discovering SAN components using a SAN management server |
US7844833B2 (en) * | 2002-06-24 | 2010-11-30 | Microsoft Corporation | Method and system for user protected media pool |
US7873700B2 (en) | 2002-08-09 | 2011-01-18 | Netapp, Inc. | Multi-protocol storage appliance that provides integrated support for file and block access protocols |
US7107385B2 (en) * | 2002-08-09 | 2006-09-12 | Network Appliance, Inc. | Storage virtualization by layering virtual disk objects on a file system |
US20040139167A1 (en) | 2002-12-06 | 2004-07-15 | Andiamo Systems Inc., A Delaware Corporation | Apparatus and method for a scalable network attached storage system
US7590807B2 (en) | 2003-11-03 | 2009-09-15 | Netapp, Inc. | System and method for record retention date in a write once read many storage system |
US7409494B2 (en) | 2004-04-30 | 2008-08-05 | Network Appliance, Inc. | Extension of write anywhere file system layout |
US20070088702A1 (en) | 2005-10-03 | 2007-04-19 | Fridella Stephen A | Intelligent network client for multi-protocol namespace redirection |
2002
- 2002-08-09 US US10/215,917 patent/US7873700B2/en active Active
2003
- 2003-07-28 CA CA2495180A patent/CA2495180C/en not_active Expired - Fee Related
- 2003-07-28 EP EP03784832A patent/EP1543399A4/en not_active Ceased
- 2003-07-28 RU RU2005103588/09A patent/RU2302034C9/en not_active IP Right Cessation
- 2003-07-28 CN CNB038238225A patent/CN100357916C/en not_active Expired - Lifetime
- 2003-07-28 WO PCT/US2003/023597 patent/WO2004015521A2/en active Application Filing
- 2003-07-28 JP JP2004527664A patent/JP4440098B2/en not_active Expired - Lifetime
- 2003-07-28 AU AU2003254238A patent/AU2003254238B2/en not_active Ceased
2005
- 2005-02-09 IL IL166786A patent/IL166786A/en not_active IP Right Cessation
2006
- 2006-03-06 HK HK06102881A patent/HK1082976A1/en not_active IP Right Cessation
Also Published As
Publication number | Publication date |
---|---|
WO2004015521A3 (en) | 2004-07-01 |
RU2302034C9 (en) | 2007-09-27 |
JP4440098B2 (en) | 2010-03-24 |
AU2003254238A1 (en) | 2004-02-25 |
EP1543399A2 (en) | 2005-06-22 |
CN1688982A (en) | 2005-10-26 |
WO2004015521A2 (en) | 2004-02-19 |
CA2495180C (en) | 2013-04-30 |
US7873700B2 (en) | 2011-01-18 |
CN100357916C (en) | 2007-12-26 |
IL166786A0 (en) | 2006-01-15 |
RU2302034C2 (en) | 2007-06-27 |
HK1082976A1 (en) | 2006-06-23 |
IL166786A (en) | 2010-12-30 |
US20040030668A1 (en) | 2004-02-12 |
RU2005103588A (en) | 2005-10-10 |
JP2005535961A (en) | 2005-11-24 |
AU2003254238B2 (en) | 2008-03-20 |
EP1543399A4 (en) | 2007-08-22 |
Similar Documents
Publication | Title |
---|---|
CA2495180C (en) | Multi-protocol storage appliance that provides integrated support for file and block access protocols |
EP1543424B1 (en) | Storage virtualization by layering virtual disk objects on a file system |
US7739250B1 (en) | System and method for managing file data during consistency points |
US7603532B2 (en) | System and method for reclaiming unused space from a thinly provisioned data container |
EP1763734B1 (en) | System and method for supporting block-based protocols on a virtual storage appliance executing within a physical storage appliance |
US7904482B2 (en) | System and method for transparently accessing a virtual disk using a file-based protocol |
US7849274B2 (en) | System and method for zero copy block protocol write operations |
US7055014B1 (en) | User interface system for a multi-protocol storage appliance |
US7437530B1 (en) | System and method for mapping file block numbers to logical block addresses |
EP1859603B1 (en) | Integrated storage virtualization and switch system |
US7383378B1 (en) | System and method for supporting file and block access to storage object on a storage appliance |
US7069307B1 (en) | System and method for inband management of a virtual disk |
US20050246345A1 (en) | System and method for configuring a storage network utilizing a multi-protocol storage appliance |
US7523201B2 (en) | System and method for optimized LUN masking |
US7293152B1 (en) | Consistent logical naming of initiator groups |
US7783611B1 (en) | System and method for managing file metadata during consistency points |
Legal Events
Code | Title | Description |
---|---|---|
EEER | Examination request | |
MKLA | Lapsed | Effective date: 20160728 |