US20060123062A1 - Virtual file system - Google Patents

Virtual file system

Info

Publication number
US20060123062A1
US20060123062A1 (application US 11/338,496)
Authority
US
United States
Prior art keywords
file
virtual
recited
slave
client
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/338,496
Inventor
Jared Bobbitt
Stephan Doll
Marc Friedman
Patrick Lau
Joseph Mullally
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
EMC Corp
Original Assignee
EMC Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by EMC Corp
Priority to US11/338,496
Publication of US20060123062A1
Legal status: Abandoned

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10: File systems; File servers
    • G06F16/18: File system types
    • G06F16/188: Virtual file systems
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F2003/0697: Digital input from, or digital output to, record carriers; device management, e.g. handlers, drivers, I/O schedulers
    • G06F3/0601: Interfaces specially adapted for storage systems

Definitions

  • the present invention generally relates to network file systems and schemes, and more particularly, to a network file system that appears to its clients to be a single file system, while locating its files and directories on multiple server computers.
  • File system growth management is a large and growing problem for the data centers of eBusinesses and corporate intranets. Depending on the source, data storage is estimated to be growing between 60% and 200% per year and accelerating. According to the Strategic Research Corp., the two major storage-related problems in the data center are managing disk space and running out of disk space. Information Technology (IT) administrators are also struggling with placing their key, most important data onto the best storage resource in their environment. With the explosion in growth, IT users are looking for alternative solutions that simplify growth management.
  • IT Information Technology
  • SAN architectures do allow for considerable scalability, flexibility and performance, but at a very high cost.
  • a lower-cost storage solution is available: file servers.
  • NAS Network attached storage
  • GPFS General Purpose File Servers
  • each NAS device or GPFS sitting on a LAN has limited throughput. It is an island unto itself. IT departments have to statically partition their storage among these islands, which has a number of unpalatable consequences.
  • the present invention comprises a virtual file system and method that addresses many of the foregoing limitations found in the prior art.
  • the system architecture enables file systems to be virtualized.
  • the system provides one or more virtual file system volumes each of which appears to be a normal file system while in reality the files in each virtual file system may be stored on many file systems on a plurality of file servers.
  • File systems manifest themselves to users as a hierarchy of directories (also known as folders) containing files.
  • the virtual file system also manifests itself the same way. Unique to the invention is the independence of the name and position of a file in this hierarchy, from its location on one of the plurality of file servers.
  • This virtualization functionality is facilitated through the use of a software layer that, for each virtual volume and file pathname, intercepts file system requests and maps the virtual pathname to the actual server and pathname under which the file is stored.
  • This scheme is implemented on a set of computers called the virtual file system cluster. They are a cluster in the following senses: they are attached via a local area network; they share key configuration data; they communicate with each other to provide users the same virtual file system interface; and they are configured, monitored, and managed via a unified management console application.
  • the invention operates through the use of two software components on each machine in the virtual file system cluster: 1) an “agent” software module that maintains the global configuration state of the system, and 2) the file system interception layer itself.
  • the agent is implemented as a user-level process while the interception layer is implemented as a kernel-loadable module.
  • the virtual file system enables users to create virtual file hierarchies that are mapped behind the scene to one or more logical volumes on one or more servers.
  • the actual hierarchy of directories and files comprising the portion of a single logical volume devoted to a particular virtual file system is called a gtree.
  • Each virtual file system has two kinds of gtrees: a single master gtree and one or more slave gtrees.
  • the master gtree functions as a centralized name service for the entire virtual file system, containing the directory names, attributes, and contents, and the file names.
  • the slave gtree serves as a storage server, containing the file attributes and contents.
  • for each file, the master gtree contains a file pointer that contains the file's unique identifier and the identifier of the slave on which the file's contents and attributes are located.
  • each directory contains a special file with a reserved name that contains a unique identifier for that directory.
  • FIG. 1 is a schematic diagram depicting a conventional NFS file system that is hosted by a single file server;
  • FIG. 2 is a schematic diagram illustrating the primary software components of the conventional NFS file system of FIG. 1 ;
  • FIG. 3 is a schematic diagram illustrating the primary software components of the virtual file system of the present invention when implemented using conventional file servers that host underlying NFS file systems;
  • FIG. 4 is a schematic diagram illustrating further details of the virtual filter driver used on the client side of the system of FIG. 3 ;
  • FIG. 5 is a schematic diagram illustrating further details of the GFS module that is implemented as a file system interception layer on the server side of the system to augment conventional file system behavior;
  • FIG. 6 is a schematic diagram that illustrates how data is stored in the master gtree (master directory structure) in accordance with the invention
  • FIG. 7 is a schematic diagram illustrating how data is stored on the slave gtrees of the present invention.
  • FIG. 8 is a schematic diagram illustrating the primary operations and interfaces provided by a client agent that runs on clients to facilitate operation of the virtual file system of the present invention
  • FIG. 9 is a flowchart illustrating the logic used by the present invention when accessing a data file
  • FIG. 10 is a schematic diagram illustrating an exemplary implementation of the present invention wherein a data file is migrated between two file servers;
  • FIG. 11 is a flowchart illustrating the logic used by the present invention when migrating a data file
  • FIGS. 12 A-D are schematic diagrams illustrating the state of the master gtree and slave gtrees during the data file migration process.
  • FIG. 13 is a schematic diagram of an exemplary computer system that may be implemented in the present invention.
  • a logical volume is a set of block storage resources managed as a unit by a single host computer, with the following usual characteristics: persistence of files when the computer turns on and off, a fixed storage size and replication strategy (e.g., RAID-5, mirrored, or none), and an associated set of one or more partitions on one or more physical storage media.
  • a local file system is an organizational structure for storage of computer files managed by a single host computer with the following usual characteristics: persistence of files when the computer turns on and off, fixed storage size, and a unified hierarchy of directories (a.k.a. folders).
  • a file system provides access to files and directories indexed by their pathname, the sequence of enclosing directories (folders), followed by the simple name of the file.
  • a network file sharing protocol is a standardized communication interface for computer servers to share local file systems to other computers connected by a network. Network file sharing protocols allow multiple computers to use a single file system, as if the file system was local to each computer, regardless of where it actually resides.
  • a server or file server is a computer system that is used to host a file system.
  • a vnode is a data structure that contains information about a file in a UNIX file system under the SUN SOLARIS™ operating system.
  • a virtual file system (a.k.a. virtual volume) is a file system, except that rather than being managed by a single host computer, it consists of a set of file systems or logical volumes on one or more host computers.
  • the virtual file system functions as a single local file system to each client, its files are in fact partitioned among multiple underlying file systems.
  • client applications and users are not aware that the file system is virtual, nor are they aware of the various locations of the files.
  • the term virtualized refers to the invention's ability to enable the creation of one or more virtual file systems comprising files and directories stored on one or more logical volumes on one or more file servers, wherein applications and users who access those files are not aware of their location.
  • the present invention enables file systems to be easily scaled through the use of a virtualized file system architecture and method.
  • the terms “Gossamer,” “Gossamer virtualized file system” and “Gossamer file system,” are used synonymously throughout to refer to exemplary virtual file system implementations in accordance with the invention.
  • FIG. 1 shows a conventional network file system (NFS) protocol file system 10 that enables local applications 12 running on various clients 11 , including personal computers 14 and 16 and a UNIX workstation 18 , to access files (i.e., store, retrieve, update, delete) that are stored on a file server 20 via a network 22 .
  • NFS network file system
  • network 22 will comprise a LAN (local area network) or WAN (wide area network).
  • File server 20 includes a disk sub-system 23 comprising a plurality of storage devices 24 (e.g., hard disks), each of which may be partitioned into one or more logical partitions 26 .
  • Operating system software may be present that organizes one or more devices into a single addressable storage space known as a logical volume.
  • the volume management software allows the addressable units to include multiple devices or partitions, with or without mirroring, RAID, remote replication, snapshot, hot-swap, or dynamic expansion functionality. All of this is invisible to, and unrelated to, the file-system software using the logical volume, to which this invention pertains.
  • volume will be used herein to refer to an addressable unit of disk, with the understanding that if volume management software is installed, then the volume is a logical volume, otherwise, it is merely a storage device or partition thereof.
  • storage devices 24 may be accessed via one or more device controller cards 28 connected to the file server's motherboard.
  • the functionality provided by such a device controller card may be built into the motherboard.
  • device controller cards 28 will comprise a single or multi-channel SCSI (small computer system interface) card that enables access to 14 storage devices per channel.
  • SCSI small computer system interface
  • storage devices 24 may be housed within file server 20 's chassis, or may be disposed in an external cabinet linked to the file server via one or more communications channels.
  • many modern file servers, such as file server 20, provide “hot swap” drives, which enable network managers (and others) to easily remove, add and/or replace drives without having to shut down the server in which the drives are installed.
  • a volume managed by that file server 20 appears to be a local file system 30 .
  • a local file system consists of a volume and a single hierarchical tree structure of directories with files at the leaves. The base of the tree is called the root directory 32 .
  • the file server exports part or all of the local file system for remote access. The “export” is the point in the hierarchy that appears to remote users to be the root. If the export is the local file system root, then the entire local file system is exported.
  • a block-level schematic diagram corresponding to conventional NFS file system 10 is shown in FIG. 2 .
  • local applications 12 that are running in the user mode level of the client's operating system are provided access to the file system via a client-side NFS client 34 running at the kernel mode level of the client's OS, and an NFS Daemon 36 , running on file server 20 at the kernel mode level of the server's OS.
  • NFS daemon 36 provides an abstracted interface between a local native file system 38 running on file server 20 and NFS clients running on various client machines 11 , whereby a single set of NFS commands can be used for any type of file system supported by variants of UNIX.
  • the NFS daemon and the server operating systems provide a uniform NFS interface to the NFS clients using the various local file system types, whether they be ext2, UFS, VxFS, or any file system supported by NFS.
  • local applications 40 running at the user mode level of the server may access the local native file system 38 for management purposes, such as backup and restore utilities.
  • the conventional scheme suffers from a static mapping of file systems to file servers. This has several unpalatable consequences. Planning is difficult. Only one thing is certain about a static division of resources—you can't get it right ahead of time. Some islands of storage will be overtaxed long before the rest. Hot spots and above-trend growth will eventually bring existing resources to their knees, making some key data unavailable, and resulting in semi-unplanned, labor-intensive reconfiguration. The only way to delay reconfiguration is by throwing more hardware at the problem early, resulting in excess costs and unused resources. Under conventional file system operations, the only way to cure an overtaxed server is to take the file system offline, add a new server, and reconfigure the file system to divide the data up between servers. After such a reconfiguration is completed, applications will typically need to be changed to access the data in their new locations. This can be easy or impossible, depending on how well the applications are architected for such changes.
  • Gossamer virtual file system takes the next step beyond the sharing of a single file system by freeing the file system from the bounds of its host.
  • Gossamer comprises a software subsystem that manages virtualized file systems—that is, file systems without a single host.
  • Each virtualized file system (called a Gossamer virtual volume, or GVV) has all of the usual characteristics of a file system except that rather than having a fixed server, it is hosted on a dynamic set of (generally) smaller server computers.
  • a client computer, through a Gossamer client-side component, can access the entire virtualized file system using conventional network file sharing protocols, such as NFS.
  • Gossamer can aggregate together all the storage capacity provided to it by the server computers, such that the total capacity of the virtual file system comprises the totality of the storage capacity of the underlying servers, which may be easily scaled by adding additional servers.
  • Gossamer enhances the client-server file-sharing model with location-independent access, which provides a significant advantage over the prior art.
  • a conventional NFS shared file system has three aspects: an exported directory hierarchy (i.e., the exports), physical disk space, and a server.
  • a GVV abstracts the file system from the server.
  • a GVV has an exported directory hierarchy and physical disk space, and a Gossamer GVV name, rather than a server name. Clients access GVVs as though they are NFS file systems, and Gossamer takes care of locating and accessing files on the proper servers.
  • a Gossamer file system is hosted on one or more server computers (i.e., file servers).
  • Each file server hosts one or more volumes, referred to herein as bricks.
  • An instance of a local file system software module (such as VxFS, UFS, or ext2) manages the layout of a brick. This module will be referred to herein as the “underlying file system,” since it is provided by another vendor and is accessed by this invention through a standard interface.
  • gtrees the building blocks of a GVV—are created as separate hierarchies on the underlying file system.
  • a GVV is a collection of gtrees working in concert. Two GVVs may not share any gtrees. Users of files see only the GVVs, which work like first-class file systems, except that they appear to each client computer to be local file systems.
  • Gossamer's functionality is achieved through the interaction of two separate software components: a server-side file system component called GFS, and a client-side virtualization component, called GVFD (Gossamer Virtualizing Filter Driver). Both of these components are implemented as filters that intercept, translate, and reroute file system traffic.
  • GFS server-side file system component
  • GVFD Gossamer Virtualizing Filter Driver
  • SOLARIS™ operating system a variant of the UNIX operating system manufactured by the Sun corporation
  • the interface it intercepts is known as the vfs/vnode interface.
  • GFS comprises a server-side layer of abstraction that manages reference counting and migration.
  • An instance of the GFS component runs on the server that hosts the corresponding underlying file system managed by that GFS instance.
  • the GFS instance is implemented as a local file system that exports the vfs/vnode interface for the SOLARIS™ operating system.
  • Each instance of GFS maintains data structures that are used to manage a single gtree.
  • GVFD runs on any client machine (also called a storage client) accessing a Gossamer file server (i.e., a computer on which at least one of the underlying file systems is hosted).
  • An instance of GVFD manages access to a single GVV from a single client. It maps virtual file names to physical locations for those files, and routes messages to appropriate servers, where they are ultimately received by GFS instances.
  • An optional GVFD module may also run on the server if there are any applications (such as backup) on the server that need to access the virtual file system.
  • an overview of the Gossamer virtual file system is shown in FIG. 3 , wherein solid-lined boxes correspond to components of Gossamer, while boxes with dashed outlines correspond to conventional applications and OS components.
  • a GVFD module 42 runs at the kernel mode level on client 11 , and accesses external file servers, such as file server 20 , via an NFS client instance 34 A.
  • a Gossamer client agent 43 runs at the user mode level on client 11 , and accesses configuration information 45 via an NFS client instance 34 B.
  • a GFS module 44 runs at the kernel mode layer on file server 20 , which includes a virtual file system driver, a migration engine, and a replication engine.
  • the server side also includes an administration agent 46 , running at the user mode level on file server 20 , which is used in conjunction with a Gossamer Administration service 48 running on an external machine or the server to enable administrators to manage various virtual file system functions, including migration policies and schedules, file system configurations, and replication.
  • Gossamer Administration Agent 46 reads and writes to configuration information 45 via an NFS client instance 47 .
  • the GVFD module includes a file system API, a GVFD translation unit, a master directory lookup, and performs master directory/slave translation, all collectively identified by a block 50 .
  • GVFD module 42 functions as a filter that intercepts NFS file access requests, and translates those requests so they are sent to an appropriate server.
  • GFS module 44 provides several server-side functions that are collectively identified by a block 52 , including a file system interface, file system pass-through, object locking, reference counting, driver communication, adding and removing GVVs, adding and removing gtrees, migration job start, stop, and cancel, migration Job status, and replication.
  • the Gossamer file system driver is loaded into the OS as a kernel loadable module (KLM).
  • KLM kernel loadable module
  • GFS module 44 also handles data file migration and maintenance of configuration information 45.
  • Migration of data files is enabled through the use of a migration engine 54 that accesses data files that may be stored locally or stored on a remote file server N via an NFS client instance 55 .
  • Configuration information 45 includes configuration data that identifies what physical server(s) the various gtrees for a given GVV are hosted on, what physical devices the master and slave gtrees are stored on, the exports each server provides, and the roles played by the various components in a Gossamer virtual file system.
  • Configuration information also may include schedule data (i.e., data pertaining to when migrations are to be performed or considered, when backups are to occur, when the background consistency checker may run, etc.), status files pertaining to operations in progress, such as migration and backup operations, and log files.
  • the configuration information may be stored on one of the servers used to store the master gtrees and/or the slave gtrees, including file server 20 , or may be stored on a separate server that is not used to store file system data files that are part of a GVV.
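  • As a rough sketch only (the patent does not specify a concrete format), the configuration information just described can be pictured as a small set of tables mapping each GVV to its master and slave gtrees and each gtree to the server and export hosting it; the field names, server names, and master gtree GUID below are hypothetical, while the slave GUIDs match the simplified values used in the examples that follow:

      # Hypothetical layout of the shared configuration information (45).
      # Field names, server names, and the master gtree GUID are illustrative only.
      CONFIG = {
          "gvvs": {
              "gvv1": {
                  "master_gtree": "1001",            # gtree GUID of the master (made up)
                  "slave_gtrees": ["2259", "3215"],  # slave gtree GUIDs from the examples
              },
          },
          "gtrees": {
              # gtree GUID -> the server and export (underlying volume) hosting it
              "1001": {"server": "fs-a", "export": "/export/gtree-1001"},
              "2259": {"server": "fs-b", "export": "/export/gtree-2259"},
              "3215": {"server": "fs-c", "export": "/export/gtree-3215"},
          },
          # schedule data, status files, and logs are kept alongside the maps above
          "schedules": {"migration": "weekly", "backup": "daily"},
      }

      def server_for_gtree(gtree_guid: str) -> str:
          """Return the file server hosting a given gtree."""
          return CONFIG["gtrees"][gtree_guid]["server"]
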
  • Gossamer uses local constructs called gtrees, which contain data corresponding to individual file systems to encode a single location-independent file system. This is accomplished by splitting the file system data into two parts, and storing the corresponding data into two separate types of gtrees. Metadata corresponding to a virtual directory and pathname hierarchy is stored on the master gtree, which functions as a name service. In one embodiment, the master name service for a GVV uses multiple gtrees as replicas. The file system data (i.e., data files and directories) is partitioned among multiple slave gtrees, which function as storage servers for file data. In one embodiment, each data file is stored on a single slave gtree.
  • redundant copies of a data file may be stored in multiple locations by the underlying file system, volume management software, or disk controller (e.g., a mirrored drive, RAID scheme, etc.); however, from the perspective of the Gossamer file system, the data file is manifest as a single local file.
  • the directories and their contents and attributes are stored in the master gtree.
  • Files and their contents and attributes are stored on the slave gtrees.
  • the master and slave gtrees are connected by file pointers, which are objects on the master gtree that map from the file's virtual pathname to a globally unique identifier (GUID) for the file and the gtree that hosts it.
  • GUID globally unique identifier
  • the master gtree is hosted on a single volume, while the slave gtrees are hosted on one or more volumes that may or may not include the volume the master gtree is hosted on.
  • Gossamer file systems may comprise millions of directories and files, and may reside on a single file server, multiple file servers on the same LAN, as well as a combination of local and remote file servers (servers on a WAN).
  • the exemplary virtual file system is exported such that it appears to have a virtual directory and file hierarchy structure 60 , also referred to herein as the user-view tree 60 .
  • the user-view tree corresponds to a “virtual” directory and file hierarchy because users name objects in the hierarchy using a GVV and virtual pathname that is entirely location-independent. Translations between virtual pathname and actual server-pathname combinations are handled, in part, through data stored on the GVV master, as depicted by GVV master directory structure 62 .
  • the GVV master directory structure logically divides its data into three spaces, each having a separate subdirectory name stored under a common root.
  • These spaces include a Gossamer namespace 64 stored in a “/Namespace” subdirectory, a temporary migrating space 68 stored in a “/migrating” subdirectory, and a garbage space 70 stored in a “/Garbage” subdirectory.
  • Gossamer namespace 64 parallels the virtual directory hierarchy, wherein the files contained (logically) in the virtual directories are replaced by file pointers having the same names as the original files. For example, in user view tree 60 , there are two files under the “/usr/joe” subdirectory: “index.html” and “data.dat.” Accordingly, respective file pointers 72 and 74 to these files having the same name and located in the same subdirectory path (“/usr/joe”) relative to the /Namespace directory are stored in Gossamer namespace 64 .
  • Each of the file pointers comprises a very small file containing two pieces of information: a file GUID (guid) corresponding to the file itself, and a GUID slave location identifier (loc) that identifies the gtree the file is located on.
  • the gtree and file GUID are sufficient to retrieve the file's attributes and contents.
  • file pointer 74 corresponding to the “data.dat” file has a file GUID of 4267, and a slave location identifier of 3215.
  • the file and directory GUIDs are 128-bit identifiers generated by modern computers to be globally unique.
  • the slave location identifiers are also 128-bit GUIDs.
  • the values for the GUIDs discussed above and shown in the Figures herein are simplified to be four-digit base-ten numbers for clarity.
  • GUIDs are encoded using a reversible mapping into alphanumerical strings.
  • the four-bit encoding is a 32-byte lower-case hexadecimal string.
  • Another encoding is the following six-bit encoding, which results in a 22-byte string representation in Latin. Each character represents six contiguous bits of the GUID, so 22 characters represents 132 bits, the last 4 of which are always zero.
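  • A small sketch of the two encodings just described; the hexadecimal form follows directly from the text, but the 64-character alphabet for the six-bit form is not specified, so the alphabet below is an assumption:

      import uuid

      def guid_hex(guid: bytes) -> str:
          # Four-bit encoding: 32 lower-case hexadecimal characters (4 bits per character).
          return guid.hex()

      # Assumed 64-character alphabet of Latin letters, digits, and two filler characters.
      SIX_BIT_ALPHABET = (
          "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_"
      )

      def guid_six_bit(guid: bytes) -> str:
          # Six-bit encoding: 22 characters x 6 bits = 132 bits; the last 4 bits are zero.
          bits = int.from_bytes(guid, "big") << 4            # append 4 zero bits
          return "".join(
              SIX_BIT_ALPHABET[(bits >> shift) & 0x3F]       # 6 contiguous bits per character
              for shift in range(126, -1, -6)
          )

      g = uuid.uuid4().bytes            # a 128-bit GUID
      print(guid_hex(g))                # 32 characters
      print(guid_six_bit(g))            # 22 characters
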
  • an exemplary storage scheme corresponding to the present example is illustrated in FIG. 7 .
  • “index.html” file pointer 72 contains a slave location identifier of 2259 and a file GUID of 9991.
  • This slave location is located in a gtree 78 shown in FIG. 7 , and contains a file named “9991” corresponding to the “index.html” data file in user view 60 .
  • data file “data.dat” is stored on a gtree 80 in a slave location having a slave location identifier of 3215 in a file named “4267.”
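  • Pulling the example together, a minimal sketch of how a virtual pathname resolves through the master namespace to a GUID-named file on a slave gtree; the dictionaries stand in for the real gtrees and NFS traffic, and the server names are hypothetical:

      # Master gtree namespace: virtual pathname -> file pointer {guid, loc},
      # using the simplified four-digit GUIDs from the example above.
      NAMESPACE = {
          "/usr/joe/index.html": {"guid": "9991", "loc": "2259"},
          "/usr/joe/data.dat":   {"guid": "4267", "loc": "3215"},
      }

      # Which server hosts each slave gtree (hypothetical names; normally taken
      # from the shared configuration information).
      SLAVE_SERVERS = {"2259": "fs-b", "3215": "fs-c"}

      def resolve(virtual_path: str):
          """Map a virtual pathname to (slave server, GUID-named file on that slave)."""
          pointer = NAMESPACE[virtual_path]        # the small file-pointer object
          server = SLAVE_SERVERS[pointer["loc"]]   # slave gtree GUID -> hosting server
          return server, pointer["guid"]           # the file is stored under its GUID name

      print(resolve("/usr/joe/data.dat"))          # ('fs-c', '4267')
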
  • Each slave gtree is exported by GFS 44 as a flat storage space, e.g.,:
  • the underlying storage is hierarchical, to support fast lookup using underlying file system implementations that store directories as linked lists.
  • the hierarchy is hidden by GFS 44 by cleverly translating all lookup, create, and delete calls into sequences of lookups down the hierarchy followed by the desired lookup, create, or delete call itself.
  • a four-digit portion of the name will provide 2^16 directories under the root directory, and room for 2^16 objects in each of these directories. If it is desired to provide access to more than 4,000,000,000 files, then another level should be placed in the hierarchy.
  • the string should be generated from the portion of the GUID bits that are changing most rapidly.
  • GUID bits 17-32 are the fastest-changing bits. Accordingly, bits 17-32 should be used to generate the string on these computers.
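  • A sketch of the kind of translation GFS performs for the flat storage space: the GUID-named object is placed in (and looked up through) a subdirectory named by four hex digits, giving 2^16 directories under the root; exactly which digits are used is an implementation choice, so the offset below is an assumption:

      def slave_path(guid_hex32: str, fast_bits_start: int = 16) -> str:
          """Map a flat GUID name onto a two-level path on the underlying file system.

          guid_hex32      -- the 32-character hexadecimal GUID name
          fast_bits_start -- assumed bit offset of the fastest-changing 16 bits
          """
          start = fast_bits_start // 4               # 4 bits per hexadecimal digit
          subdir = guid_hex32[start:start + 4]       # 4 hex digits -> 2^16 directories
          return "/%s/%s" % (subdir, guid_hex32)

      # A lookup of GUID "00ab3f2c..." becomes a lookup of "/3f2c/" followed by a
      # lookup of the object itself; GFS hides this extra level from its callers.
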
  • Gossamer client agent 43 functions as a UNIX agent that performs polling for configuration changes, mounts gtrees (i.e., mounts the underlying file system corresponding to the gtree), and provides an interface for a centralized administration module to communicate with the GVFD module.
  • Agent module 82 communicates with GVFD 42 via a driver communication module 84 , which provides a driver communication interface, and enables GVVs and gtrees to be added and removed. Agent module 82 is also enabled to access configuration information 45 .
  • the GVFD, master GFS, and slave GFS cooperate to implement each operation of the SOLARIS™ vnode/vfs interface in a way that provides a single user view of the entire virtual file system.
  • the invention relies on the cooperation of all these software components for normal operation.
  • the lookup command is responsible for locating files, to which other requests are then routed.
  • the create operation also is critical, since it selects a slave for a file to be located on.
  • the slave for storing a new file may be selected using various criteria, including storage space and load-balancing considerations.
  • new files are stored on the slave with the largest free disk space.
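  • Under that policy, selecting a slave for a new file reduces to picking the gtree with the most free space; a minimal sketch (how free space is reported is left abstract):

      def pick_slave(slave_free_bytes: dict) -> str:
          """Choose the slave gtree with the largest free disk space.

          slave_free_bytes maps each slave gtree GUID to its currently free space,
          e.g. as reported by the GFS instance managing that gtree.
          """
          return max(slave_free_bytes, key=slave_free_bytes.get)

      # Example with the simplified GUIDs used earlier:
      print(pick_slave({"2259": 120_000_000, "3215": 430_000_000}))   # '3215'
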
  • Vnode/vfs requests that are used by one embodiment of the invention are detailed in TABLE 1 below.
  • TABLE 1 describes the handling that occurs when a GVFD instance receives the request.
  • the column headed “Where is the Logic” indicates whether the GVFD does the operation alone (client) or forwards the operation to the appropriate GFS instance (master or slave or both).
  • the column headed “Modifies Metadata” indicates whether the operation changes any data on the master GFS.
  • the default treatment of an operation is “pass-through,” in which case the request is forwarded to the correct file server and then from the file server to the correct underlying file system and object. In pass-through operations the interception layers are passive.
  • Only fragments of TABLE 1 survive here. The recoverable entries indicate that getattr and getsecattr requests are pass-through operations that do not modify metadata and are routed to the master for directories and to the slave for files; that link is handled by the master, modifies metadata, and is pass-through; and that lookup is handled by the master and then the slave, getting the file pointer and then the file. The attributes returned are those of the file, and the file pointer may be cached on the client.
  • the Gossamer file system enables clients to access any data file within the aggregated storage space of a GVV through cooperation between the client-side GVFD and server-side GFS's corresponding to both the master and the slave volumes.
  • access functions include functions that are typically used to manipulate or obtain data from a data file, including ACCESS, READ, WRITE, GETATTR, SETATTR, etc. These access functions are always preceded by a LOOKUP function, which determines the physical location of the data file (file server and physical pathname) based on its virtual pathname.
  • the LOOKUP function is performed by blocks 90-95 below.
  • the process for accessing a data file begins in a block 90 in which a local application (e.g., local application 12 ) running on a client 11 requests access to the file using the file's virtual pathname.
  • the request is passed from the user mode level of the OS to the OS kernel, where it is intercepted in a block 91 by GVFD 42 running on client 11 .
  • GVFD 42 looks up the identity of the file server hosting the master gtree (the master file server) using its local copy of configuration information 45 , and then passes the virtual pathname of the file and a client identifier to the master file server in a block 93 .
  • the virtual pathname is sent as a file I/O request via NFS client instance 34 A to the master file server, wherein it is received by an NFS Daemon 36 .
  • in a conventional system, NFS Daemon 36 would pass the request to local native file system 38.
  • GFS module 44 on the master file server intercepts the file access request in a block 94 , navigates the master gtree until it locates the pointer file corresponding to the virtual pathname, whereupon it returns the pointer file to GVFD 42 on the client.
  • GVFD 42 parses the pointer file in a block 95 to identify the data file's identifier and the file server hosting the slave volume (the slave file server) in which the data file is stored.
  • the pointer file includes two GUIDs, wherein the first GUID is used to identify the slave volume and the second GUID (the data file's identifier) is used to determine the physical pathname under which the data file is stored in the slave volume.
  • the slave file server can be determined by lookup using the local copy of configuration 105 maintained on client 11 . This completes the LOOKUP function.
  • GVFD 42 sends a file access request including the data file's identifier to the slave file server in a block 96 , whereupon the file access request is intercepted by a GFS module 44 running on the slave. Slave GFS module 44 routes the request to the local native file system 38 corresponding to the slave volume in a block 97 . The file access process is completed in a block 98 in which the local native file system performs the file access request and returns the results to the GFS module, which then returns the results to the client.
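  • A compressed sketch of the client-side flow in blocks 90-98, with the NFS round trips reduced to plain function calls; master_lookup and slave_access stand in for the traffic handled by GVFD and the GFS instances and are assumptions, not names from the patent:

      def access_file(virtual_path, op, config, master_lookup, slave_access):
          """LOOKUP followed by the requested access, as driven by the client-side GVFD."""
          master_server = config["master_server"]                  # local configuration copy
          pointer = master_lookup(master_server, virtual_path)     # blocks 93-94: pointer file
          slave_guid, file_guid = pointer["loc"], pointer["guid"]  # block 95: parse pointer
          slave_server = config["gtrees"][slave_guid]["server"]    # slave gtree -> its server
          return slave_access(slave_server, file_guid, op)         # blocks 96-98: perform I/O
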
  • a generally similar process is used for other types of file and directory accesses in which file system objects are changed, such as when a file or directory is added, deleted, renamed, or moved.
  • the appropriate change is made on the master directory tree. For example, a user may request to add a new data file f into a particular directory d.
  • a new pointer file by the name f is added to the directory d in the master gtree, and the new data file is stored on an appropriate slave volume hosted by one of the file system's file servers.
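  • A sketch of that create path: the only change on the master is the new pointer file, while the data file itself lands on whichever slave is selected (helper and variable names are illustrative):

      def create_file(namespace, slaves, directory, name, new_guid, pick_slave):
          """Create a new file `name` in virtual directory `directory`.

          namespace -- master gtree namespace: virtual path -> pointer {guid, loc}
          slaves    -- slave gtree GUID -> dict of GUID-named files on that gtree
          """
          loc = pick_slave(slaves)                # choose a slave gtree for the data
          slaves[loc][new_guid] = b""             # new (empty) data file, named by its GUID
          namespace[directory.rstrip("/") + "/" + name] = {"guid": new_guid, "loc": loc}
          return new_guid, loc
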
  • an exemplary Gossamer file system 100 is illustrated in FIG. 10 .
  • the system includes four file servers 20 A, 20 B, 20 C, and 20 D.
  • Each of the file servers supports a single file system from which a respective gtree is generated (i.e., gtrees A, B, C and D).
  • Each file system is stored on a plurality of storage devices 24 , each of which may host a single export 25 or multiple exports 26 .
  • the master gtree and slave gtrees are stored on volumes of the various servers 102 , 24 , 25 , 26 , 108 .
  • a significant advantage provided by the invention is the ability to easily scale the virtual file system dynamically without having to change the location or name of any files or directories in the virtual directory and file hierarchy (e.g., user view 60 ).
  • the file system may be scaled by adding another server to a GVV without taking the system offline, and in a manner that is transparent to users and client applications.
  • Data migration enables files to be migrated (i.e., moved) between physical storage devices, including devices hosted by separate servers, in a transparent manner. For example, suppose that Gossamer file system 100 initially included file servers 20 A, 20 C, and 20 D, all of whose underlying file system capacities are becoming full. In order to provide additional storage capacity, the system administrator decides to add file server 20 B. In most instances, the first step upon adding a new file server to a Gossamer file system will be to load-balance the system by migrating data files from one or more existing file servers to the new file server.
  • the GFS module will attempt to migrate a set of files by copying them and deleting them. Write access to a file during migration causes that file's migration to abort. However, any file access after the migration will cause the GVFD client to access the file in its new migrated location.
  • migrating a data file proceeds as follows. First, in a block 110 , the vnode of the pointer (named PointerFilePath) on the master gtree 102 is opened.
  • in a block 112 , the vnode of the file (which is local, and whose name is determined by the GUID) is opened.
  • the GUID name is 4267.
  • in a decision block 114 , a determination is made as to whether the migration module is the only process with the vnode open. Effectively, this decision block determines if any client applications presently have access to the data file that is to be migrated. This determination can be made by examining the reference count for the vnode associated with <GUIDname>. If the reference count is greater than one, the migration of the file is stopped, as indicated by a return block 116 .
  • a hardlink is created on the master in migrating space 68 (i.e., under the “/Migrating” directory) having an entry of “/<GUIDsrc>/<GUIDname>” that links to PointerFilePath.
  • the GUID for the destination gtree (GUIDdest) is appended to the pointer file for the data file, such that the pointer file comprises {<GUIDname>, <GUIDsrc>, <GUIDdest>}.
  • this pointer file now comprises {4267, 2259, 3215}.
  • the file is then copied from its local location to the destination file server in a block 124 , as shown in FIG. 12B , whereupon the local file is deleted in a block 126 , as shown in FIG. 12C .
  • checks are made to ensure that the file has been successfully copied to the destination prior to deleting the local copy.
  • cleanup operations are performed to complete the migration process. This comprises updating the pointer file in a block 128 , deleting the hardlink on the master in a block 130 , unlocking and releasing the <GUIDname> vnode in a block 132 , and releasing the PointerFilePath vnode in a block 134 .
  • the results of these cleanup operations are depicted in FIGS. 12C and 12D .
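  • A condensed sketch of the migration steps in blocks 110-134, with vnode handling reduced to a reference-count check and the gtrees modeled as dictionaries; all helper names are illustrative:

      def migrate(master, src, dest, pointer_path, guid, src_loc, dest_loc, refcount):
          """Migrate one file from a source slave gtree to a destination slave gtree."""
          if refcount(guid) > 1:                   # another process has the file's vnode open
              return False                         # block 116: stop this file's migration
          # Record the in-progress migration in the master's migrating space (the hardlink
          # to the pointer file), so a restarted slave GFS can clean up after a crash.
          master["migrating"]["/%s/%s" % (src_loc, guid)] = pointer_path
          master["namespace"][pointer_path] = (guid, src_loc, dest_loc)   # append GUIDdest
          dest[guid] = src[guid]                   # copy the file to the destination gtree
          del src[guid]                            # then delete the local (source) copy
          # Cleanup: final pointer and removal of the migrating-space entry
          # (vnode unlock/release is not modeled here).
          master["namespace"][pointer_path] = (guid, dest_loc)
          del master["migrating"]["/%s/%s" % (src_loc, guid)]
          return True
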
  • Some embodiments employ a further mechanism to isolate use and migration of a given file, preventing them from occurring simultaneously.
  • any open file request causes that file on the slave to acquire a shared lock.
  • Subsequent close operations release that shared lock.
  • Migrations do not attempt to migrate files that are locked by any client. This prevents migration from occurring on any file currently open.
  • the above procedure is crash-proof by design. At any point, there is enough information in the migrating space and the file pointer to quickly clean up any currently in-progress migration operations when a slave GFS is restarted.
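  • One cleanup pass consistent with that design, run when a slave GFS restarts, is to roll back any migration that never completed; whether an implementation rolls back or rolls forward is not stated in the text, so the policy below is an assumption:

      def recover_after_restart(master, slaves):
          """Clean up in-progress migrations recorded in the master's migrating space."""
          for entry, pointer_path in list(master["migrating"].items()):
              pointer = master["namespace"][pointer_path]
              if len(pointer) == 3:                       # migration never completed
                  guid, src_loc, dest_loc = pointer
                  slaves[dest_loc].pop(guid, None)        # discard any partial copy
                  master["namespace"][pointer_path] = (guid, src_loc)   # restore the pointer
              del master["migrating"][entry]              # drop the migrating-space entry
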
  • migrations are managed via Gossamer administration utility 48 . This utility enables migrations to be initiated through manual intervention, or enables system administrators to create migration policies that automatically invoke migration operations when a predetermined set of criteria is determined to occur. For example, a Gossamer system administrator can analyze file system statistics (e.g., percentage of space used, number of files, file accesses, etc.) or merely await broad recommendations from the system. She can enable the system to choose candidates for migration automatically, or select files manually. In addition, she can schedule migrations for one-time, daily or weekly execution through the use of the migration schedule management tool.
  • a non-Gossamer file system must undergo some conversion to be used by Gossamer, and vice versa.
  • an import/export tool is provided to perform the conversion.
  • the import tool constructs a new master gtree, or connects to an existing gtree.
  • the import tool then inserts the directory hierarchy of the file system being converted into that master gtree. This involves copying all directories and their attributes and contents. Meanwhile, it assigns all files a GUID and rearranges them in the file hierarchy, so that after conversion the file system will be configured as a slave.
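  • A sketch of that conversion: walk the existing tree, reproduce the directory hierarchy in the master namespace, and rehome each file under a freshly assigned GUID so the volume can serve as a slave; uuid4 as the GUID source and the in-memory dictionaries are assumptions, and attribute copying is not modeled:

      import os
      import uuid

      def import_tree(src_root, namespace, slave_files):
          """Convert an existing file hierarchy into a master namespace plus one slave gtree.

          namespace   -- master gtree namespace being built: virtual path -> pointer
          slave_files -- the slave gtree being built: GUID name -> file contents
          """
          slave_loc = uuid.uuid4().hex                     # GUID identifying the new slave gtree
          for dirpath, dirnames, filenames in os.walk(src_root):
              for name in filenames:
                  full = os.path.join(dirpath, name)
                  virtual = "/" + os.path.relpath(full, src_root)   # keep the old hierarchy
                  guid = uuid.uuid4().hex                  # assign the file a GUID
                  with open(full, "rb") as f:
                      slave_files[guid] = f.read()         # rearranged under its GUID name
                  namespace[virtual] = {"guid": guid, "loc": slave_loc}
          return slave_loc
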
  • Backup and restore of a GVV can be a single operation for a small GVV, in which case a regular GVFD module is run on the master that the backup and restore procedures access. However, most likely each gtree needs to be backed up separately to keep the backup window small. In this instance, a special-purpose GVFD instance is run on each server to manage backup of that server's gtrees. This GVFD provides the backup tool (whether it is provided by SUN or a third party) a partial view of the file system, containing all the directories and the files hosted on the server being backed up.
  • a Single-gtree backup requires a modified GVFD running on each server, as depicted by a GVFD module 136 in FIG. 3 .
  • GVFD module 136 is enabled to access configuration information 45 via an NFS-client instance 138 .
  • some file systems provide replication functionality wherein various file system data, such as the data corresponding to the master gtree, are replicated by the file system itself.
  • the underlying replication of the file system data does not alter the operation of the Gossamer file system, and in fact is transparent to Gossamer.
  • an explicitly replicated master gtree means that replication of the master gtree is controlled by Gossamer.
  • Functionality for replicating the master gtree is provided by a replication engine module that is part of the GFS instance that provides access to the master volume.
  • Gossamer administration agent 46 may include replication management functions. In general, the master gtree will be replicated on a file server that is not the same file server that hosts the original master gtree.
  • a generally conventional computer server 200 is illustrated, which is suitable for use in connection with practicing the present invention, and may be used for the file servers in a Gossamer virtual file system.
  • Examples of computer systems that may be suitable for these purposes include stand-alone and enterprise-class servers operating UNIX-based and LINUX-based operating systems.
  • Computer server 200 includes a chassis 202 in which is mounted a motherboard (not shown) populated with appropriate integrated circuits, including one or more processors 204 and memory (e.g., DIMMs or SIMMS) 206 , as is generally well known to those of ordinary skill in the art.
  • a monitor 208 is included for displaying graphics and text generated by software programs and program modules that are run by the computer server.
  • a mouse 210 (or other pointing device) may be connected to a serial port (or to a bus port or USB port) on the rear of chassis 202 , and signals from mouse 210 are conveyed to the motherboard to control a cursor on the display and to select text, menu options, and graphic components displayed on monitor 208 by software programs and modules executing on the computer.
  • Computer server 200 also includes a network interface card (NIC) 214 , or equivalent circuitry built into the motherboard to enable the server to send and receive data via a network 216 .
  • NIC network interface card
  • File system storage corresponding to the invention may be implemented via a plurality of hard disks 218 that are stored internally within chassis 202 , and/or via a plurality of hard disks that are stored in an external disk array 220 that may be accessed via a SCSI card 222 or equivalent SCSI circuitry built into the motherboard.
  • disk array 220 may be accessed using a Fibre Channel link using an appropriate Fibre Channel interface card (not shown) or built-in circuitry.
  • Computer server 200 generally may include a compact disk-read only memory (CD-ROM) drive 224 into which a CD-ROM disk may be inserted so that executable files and data on the disk can be read for transfer into memory 206 and/or into storage on hard disk 218 .
  • a floppy drive 226 may be provided for such purposes.
  • Other mass memory storage devices such as an optical recorded medium or DVD drive may also be included.
  • the machine instructions comprising the software program that causes processor(s) 204 to implement the functions of the present invention that have been discussed above will typically be distributed on floppy disks 228 or CD-ROMs 230 (or other memory media) and stored in one or more hard disks 218 until loaded into memory 206 for execution by processor(s) 204 .
  • the machine instructions may be loaded via network 216 .

Abstract

A virtual file system and method. The system architecture enables a plurality of underlying file systems running on various file servers to be “virtualized” into one or more “virtual volumes” that appear as a local file system to clients that access the virtual volumes. The system also enables the storage spaces of the underlying file systems to be aggregated into a single virtual storage space, which can be dynamically scaled by adding or removing file servers without taking any of the file systems offline and in a manner transparent to the clients. This functionality is enabled through a software “virtualization” filter on the client that intercepts file system requests and a virtual file system driver on each file server. The system also provides for load balancing file accesses by distributing files across the various file servers in the system, through migration of data files between servers.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention generally relates to network file systems and schemes, and more particularly, to a network file system that appears to its clients to be a single file system, while locating its files and directories on multiple server computers.
  • 2. Background Information
  • File system growth management is a large and growing problem for the data centers of eBusinesses and corporate intranets. Depending on the source, data storage is estimated to be growing between 60% and 200% per year and accelerating. According to the Strategic Research Corp., the two major storage-related problems in the data center are managing disk space and running out of disk space. Information Technology (IT) administrators are also struggling with placing their key, most important data onto the best storage resource in their environment. With the explosion in growth, IT users are looking for alternative solutions that simplify growth management.
  • The most complex issue involving the growth of storage is the inability to manage storage environments efficiently and with qualified IT professionals. eBusinesses today are facing an influx of new storage technologies (e.g., network-attached storage (NAS) and storage area network (SAN)), which increase their storage capacity, speed, and availability, but in ways that have made storage architectures more complex. When new technologies are deployed, IT professionals must quickly ramp up and learn these new technologies, and with the current lack of skilled IT talent (˜600,000 unfilled IT positions today), falling behind is easy. In fact, the cost of managing high-performance storage environments is estimated to be far greater than the cost of purchase—by three to ten times. The required ongoing investments in both hardware/software and people into these storage architectures will continue to rise.
  • In many enterprises, data is distributed among various “islands of storage,” which are cut off from each other by their means of attachment, physical location, management policy, or software incompatibility. These islands require applications to select and name the specific hard-wired server hosting the desired files. Typically, when applications outgrow their islands, IT administrators must bring down the applications, add new storage devices, partition and move some of the data, and reprogram the applications to make them aware of the new division of resources.
  • At the high end, customers can opt for SAN solutions, which are extremely expensive to purchase and maintain, and require a commitment to proprietary hardware. SAN architectures do allow for considerable scalability, flexibility and performance, but at a very high cost. A lower-cost storage solution is available: file servers. Network attached storage (NAS) devices and General Purpose File Servers (GPFS) provide interoperable, incremental, and somewhat scalable storage. However, each NAS device or GPFS sitting on a LAN has limited throughput. It is an island unto itself. IT departments have to statically partition their storage among these islands, which has a number of unpalatable consequences.
  • SUMMARY OF THE INVENTION
  • The present invention comprises a virtual file system and method that addresses many of the foregoing limitations found in the prior art. The system architecture enables file systems to be virtualized. The system provides one or more virtual file system volumes each of which appears to be a normal file system while in reality the files in each virtual file system may be stored on many file systems on a plurality of file servers. File systems manifest themselves to users as a hierarchy of directories (also known as folders) containing files. The virtual file system also manifests itself the same way. Unique to the invention is the independence of the name and position of a file in this hierarchy, from its location on one of the plurality of file servers. This virtualization functionality is facilitated through the use of a software layer that, for each virtual volume and file pathname, intercepts file system requests and maps the virtual pathname to the actual server and pathname under which the file is stored. This scheme is implemented on a set of computers called the virtual file system cluster. They are a cluster in the following senses: they are attached via a local area network; they share key configuration data; they communicate with each other to provide users the same virtual file system interface; and they are configured, monitored, and managed via a unified management console application. The invention operates through the use of two software components on each machine in the virtual file system cluster: 1) an “agent” software module that maintains the global configuration state of the system, and 2) the file system interception layer itself. For UNIX variant and LINUX variant clients, the agent is implemented as a user-level process while the interception layer is implemented as a kernel-loadable module.
  • In one embodiment, the virtual file system enables users to create virtual file hierarchies that are mapped behind the scene to one or more logical volumes on one or more servers. The actual hierarchy of directories and files comprising the portion of a single logical volume devoted to a particular virtual file system is called a gtree. Each virtual file system has two kinds of gtrees: a single master gtree and one or more slave gtrees. The master gtree functions as a centralized name service for the entire virtual file system, containing the directory names, attributes, and contents, and the file names. The slave gtree serves as a storage server, containing the file attributes and contents. For each file, the master gtree contains a file pointer that contains the file's unique identifier and the identifier of the slave on which the file's contents and attributes are located. In addition, each directory contains a special file with a reserved name that contains a unique identifier for that directory.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The foregoing aspects and many of the attendant advantages of this invention will become more readily appreciated as the same becomes better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein:
  • FIG. 1 is a schematic diagram depicting a conventional NFS file system that is hosted by a single file server;
  • FIG. 2 is a schematic diagram illustrating the primary software components of the conventional NFS file system of FIG. 1;
  • FIG. 3 is a schematic diagram illustrating the primary software components of the virtual file system of the present invention when implemented using conventional file servers that host underlying NFS file systems;
  • FIG. 4 is a schematic diagram illustrating further details of the virtual filter driver used on the client side of the system of FIG. 3;
  • FIG. 5 is a schematic diagram illustrating further details of the GFS module that is implemented as a file system interception layer on the server side of the system to augment conventional file system behavior;
  • FIG. 6 is a schematic diagram that illustrates how data is stored in the master gtree (master directory structure) in accordance with the invention;
  • FIG. 7 is a schematic diagram illustrating how data is stored on the slave gtrees of the present invention;
  • FIG. 8 is a schematic diagram illustrating the primary operations and interfaces provided by a client agent that runs on clients to facilitate operation of the virtual file system of the present invention;
  • FIG. 9 is a flowchart illustrating the logic used by the present invention when accessing a data file;
  • FIG. 10 is a schematic diagram illustrating an exemplary implementation of the present invention wherein a data file is migrated between two file servers;
  • FIG. 11 is a flowchart illustrating the logic used by the present invention when migrating a data file;
  • FIGS. 12A-D are schematic diagrams illustrating the state of the master gtree and slave gtrees during the data file migration process; and
  • FIG. 13 is a schematic diagram of an exemplary computer system that may be implemented in the present invention.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • In the following description, numerous specific details are set forth to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, etc. In other instances, well-known structures or operations are not shown or described in detail to avoid obscuring aspects of various embodiments of the invention.
  • Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
  • Definitions
  • Several standard terms used in the following description of the invention will be defined. A logical volume is a set of block storage resources managed as a unit by a single host computer, with the following usual characteristics: persistence of files when the computer turns on and off, a fixed storage size and replication strategy (e.g., RAID-5, mirrored, or none), and an associated set of one or more partitions on one or more physical storage media. A local file system is an organizational structure for storage of computer files managed by a single host computer with the following usual characteristics: persistence of files when the computer turns on and off, fixed storage size, and a unified hierarchy of directories (a.k.a. folders). A file system provides access to files and directories indexed by their pathname, the sequence of enclosing directories (folders), followed by the simple name of the file. A network file sharing protocol is a standardized communication interface for computer servers to share local file systems to other computers connected by a network. Network file sharing protocols allow multiple computers to use a single file system, as if the file system was local to each computer, regardless of where it actually resides. A server or file server is a computer system that is used to host a file system. A vnode is a data structure that contains information about a file in a UNIX file system under the SUN SOLARIS™ operating system.
  • A virtual file system (a.k.a. virtual volume) is a file system, except that rather than being managed by a single host computer, it consists of a set of file systems or logical volumes on one or more host computers. Although the virtual file system functions as a single local file system to each client, its files are in fact partitioned among multiple underlying file systems. Moreover, client applications and users are not aware that the file system is virtual, nor are they aware of the various locations of the files. The term virtualized refers to the invention's ability to enable the creation of one or more virtual file systems comprising files and directories stored on one or more logical volumes on one or more file servers, wherein applications and users who access those files are not aware of their location.
  • The present invention enables file systems to be easily scaled through the use of a virtualized file system architecture and method. In the following description, the terms “Gossamer,” “Gossamer virtualized file system” and “Gossamer file system,” are used synonymously throughout to refer to exemplary virtual file system implementations in accordance with the invention.
  • A Conventional Approach
  • FIG. 1 shows a conventional network file system (NFS) protocol file system 10 that enables local applications 12 running on various clients 11, including personal computers 14 and 16 and a UNIX workstation 18, to access files (i.e., store, retrieve, update, delete) that are stored on a file server 20 via a network 22. In typical environments, network 22 will comprise a LAN (local area network) or WAN (wide area network).
  • File server 20 includes a disk sub-system 23 comprising a plurality of storage devices 24 (e.g., hard disks), each of which may be partitioned into one or more logical partitions 26. Volume management software may be present that organizes one or more devices or partitions into a single addressable storage space known as a logical volume. The volume management software allows the addressable units to include multiple devices or partitions, with or without mirroring, RAID, remote replication, snapshot, hot-swap, or dynamic expansion functionality. All of this is invisible to, and unrelated to, the file-system software using the logical volume, to which this invention pertains. The term "volume" will be used herein to refer to an addressable unit of disk, with the understanding that if volume management software is installed, then the volume is a logical volume; otherwise, it is merely a storage device or partition thereof. Generally, storage devices 24 may be accessed via one or more device controller cards 28 connected to the file server's motherboard. Optionally, the functionality provided by such a device controller card may be built into the motherboard. In many implementations, device controller cards 28 will comprise a single or multi-channel SCSI (small computer system interface) card that enables access to 14 storage devices per channel.
  • In typical implementations, storage devices 24 may be housed within file server 20's chassis, or may be disposed in an external cabinet linked to the file server via one or more communications channels. For example, many modern file servers provide "hot swap" drives, which enable network managers (and others) to easily remove, add, and/or replace drives without having to shut down the server in which the drives are installed.
  • From the viewpoint of local applications 12 running on clients 11 sharing files from the file server 20, a volume managed by that file server 20 appears to be a local file system 30. On a network with a UNIX file server using the NFS file sharing protocol, a local file system consists of a volume and a single hierarchical tree structure of directories with files at the leaves. The base of the tree is called the root directory 32. The file server exports part or all of the local file system for remote access. The "export" is the point in the hierarchy that appears to remote users to be the root. If the export is the local file system root, then the entire local file system is exported. A block-level schematic diagram corresponding to conventional NFS file system 10 is shown in FIG. 2. As illustrated, local applications 12 that are running at the user mode level of the client's operating system (OS) are provided access to the file system via a client-side NFS client 34 running at the kernel mode level of the client's OS, and an NFS Daemon 36 running on file server 20 at the kernel mode level of the server's OS. NFS daemon 36 provides an abstracted interface between a local native file system 38 running on file server 20 and NFS clients running on various client machines 11, whereby a single set of NFS commands can be used for any type of file system supported by variants of UNIX. The NFS daemon and the server operating systems provide a uniform NFS interface to the NFS clients using the various local file system types, whether they be ext2, UFS, VxFS, or any file system supported by NFS.
  • In addition to supporting client requests, there will generally be one or more local applications 40 running at the user mode level of the server accessing the local native file system 38 for management purposes, such as backup and restore utilities.
  • The conventional scheme suffers from a static mapping of file systems to file servers. This has several unpalatable consequences. Planning is difficult. Only one thing is certain about a static division of resources: you can't get it right ahead of time. Some islands of storage will be overtaxed long before the rest. Hot spots and above-trend growth will eventually bring existing resources to their knees, making some key data unavailable, and resulting in semi-unplanned, labor-intensive reconfiguration. The only way to delay reconfiguration is by throwing more hardware at the problem early, resulting in excess costs and unused resources. Under conventional file system operations, the only way to cure an overtaxed server is to take the file system offline, add a new server, and reconfigure the file system to divide the data up between servers. After such a reconfiguration is completed, applications will typically need to be changed to access the data in their new locations. This can be easy or impossible, depending on how well the applications are architected for such changes.
  • Gossamer Virtual File System Architecture
  • The Gossamer virtual file system takes the next step beyond the sharing of a single file system by freeing the file system from the bounds of its host. Gossamer comprises a software subsystem that manages virtualized file systems—that is, file systems without a single host. Each virtualized file system (called a Gossamer virtual volume, or GVV) has all of the usual characteristics of a file system except that rather than having a fixed server, it is hosted on a dynamic set of (generally) smaller server computers. A client computer, through a Gossamer client-side component, can access the entire virtualized file system using conventional network file sharing protocols, such as NFS. Furthermore, Gossamer can aggregate together all the storage capacity provided to it by the server computers, such that the total capacity of the virtual file system comprises the totality of the storage capacity of the underlying servers, which may be easily scaled by adding additional servers.
  • Gossamer enhances the client-server file-sharing model with location-independent access, which provides a significant advantage over the prior art. A conventional NFS shared file system has three aspects: an exported directory hierarchy (i.e., the exports), physical disk space, and a server. In contrast, a GVV abstracts the file system from the server. A GVV has an exported directory hierarchy and physical disk space, and a Gossamer GVV name, rather than a server name. Clients access GVVs as though they are NFS file systems, and Gossamer takes care of locating and accessing files on the proper servers.
  • A Gossamer file system is hosted on one or more server computers (i.e., file servers). Each file server hosts one or more volumes, referred to herein as bricks. An instance of a local file system software module (such as VxFS, UFS, or ext2) manages the layout of a brick. This module will be referred to herein as the “underlying file system,” since it is provided by another vendor and is accessed by this invention through a standard interface. On each volume, one or more gtrees—the building blocks of a GVV—are created as separate hierarchies on the underlying file system. A GVV is a collection of gtrees working in concert. Two GVVs may not share any gtrees. Users of files see only the GVVs, which work like first-class file systems, except that they appear to each client computer to be local file systems.
  • Gossamer's functionality is achieved through the interaction of two separate software components: a server-side file system component called GFS, and a client-side virtualization component, called GVFD (Gossamer Virtualizing Filter Driver). Both of these components are implemented as filters that intercept, translate, and reroute file system traffic. In the SOLARIS™ operating system (a variant of the UNIX operating system developed by Sun Microsystems), the interface these components intercept is known as the vfs/vnode interface.
  • GFS comprises a server-side layer of abstraction that manages reference counting and migration. An instance of the GFS component runs on the server that hosts the corresponding underlying file system managed by that GFS instance. The GFS instance is implemented as a local file system that exports the vfs/vnode interface for the SOLARIS™ operating system. Each instance of GFS maintains data structures that are used to manage a single gtree.
  • GVFD runs on any client machine (also called a storage client) accessing a Gossamer file server (i.e., a computer on which at least one of the underlying file systems is hosted). An instance of GVFD manages access to a single GVV from a single client. It maps virtual file names to physical locations for those files, and routes messages to appropriate servers, where they are ultimately received by GFS instances. An optional GVFD module may also run on the server if there are any applications (such as backup) on the server that need to access the virtual file system.
  • An overview of the Gossamer virtual file system is shown in FIG. 3, wherein solid-lined boxes correspond to components of Gossamer, while boxes with dashed outlines correspond to conventional applications and OS components. On the client side, a GVFD module 42 runs at the kernel mode level on client 11, and accesses external file servers, such as file server 20, via an NFS client instance 34A. In addition, a Gossamer client agent 43 runs at the user mode level on client 11, and accesses configuration information 45 via an NFS client instance 34B. On the server side, a GFS module 44 runs at the kernel mode level on file server 20, which includes a virtual file system driver, a migration engine, and a replication engine. The server side also includes an administration agent 46, running at the user mode level on file server 20, which is used in conjunction with a Gossamer Administration service 48 running on an external machine or the server to enable administrators to manage various virtual file system functions, including migration policies and schedules, file system configurations, and replication. Gossamer Administration Agent 46 reads and writes to configuration information 45 via an NFS client instance 47.
  • Further details of GVFD module 42 are shown in FIG. 4. The GVFD module includes a file system API, a GVFD translation unit, and a master directory lookup, and performs master directory/slave translation, all collectively identified by a block 50. GVFD module 42 functions as a filter that intercepts NFS file access requests and translates those requests so they are sent to an appropriate server.
  • As shown in FIG. 5, GFS module 44 provides several server-side functions that are collectively identified by a block 52, including a file system interface, file system pass-through, object locking, reference counting, driver communication, adding and removing GVVs, adding and removing gtrees, migration job start, stop, and cancel, migration job status, and replication. In the SOLARIS™ operating system, the Gossamer file system driver is loaded into the OS as a kernel loadable module (KLM).
  • Included in the functions performed by GFS module 44 are data file migration and maintaining configuration information 45. Migration of data files is enabled through the use of a migration engine 54 that accesses data files that may be stored locally or stored on a remote file server N via an NFS client instance 55.
  • Configuration information 45 includes configuration data that identifies what physical server(s) the various gtrees for a given GVV are hosted on, what physical devices the master and slave gtrees are stored on, the exports each server provides, and the roles played by the various components in a Gossamer virtual file system. Configuration information also may include schedule data (i.e., data pertaining to when migrations are to be performed or considered, when backups are to occur, when the background consistency checker may run, etc.), status files pertaining to operations in progress, such as migration and backup operations, and log files. The configuration information may be stored on one of the servers used to store the master gtrees and/or the slave gtrees, including file server 20, or may be stored on a separate server that is not used to store file system data files that are part of a GVV.
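  • By way of illustration only, the configuration information described above might be represented on a client as a simple mapping from gtree identifiers to the servers and exports that host them. The following Python sketch is an assumption about one possible shape of that data (the names GVV_CONFIG and server_for_slave, the server names, the exports, and the schedule strings are all hypothetical); the patent does not prescribe a particular format.
    # Hypothetical per-client view of configuration information 45 for one GVV.
    # The structure and all names/values here are illustrative assumptions.
    GVV_CONFIG = {
        "gvv_name": "engineering",                                    # GVV name seen by clients
        "master_gtree": {"server": "fs-a", "export": "/export/master"},
        "slave_gtrees": {
            "2259": {"server": "fs-a", "export": "/export/slave1"},   # slave location id -> host
            "3215": {"server": "fs-b", "export": "/export/slave2"},
        },
        "schedules": {"migration": "daily 02:00", "backup": "weekly Sun 01:00"},
    }

    def server_for_slave(loc: str) -> str:
        """Resolve a slave location identifier to the file server hosting that gtree."""
        return GVV_CONFIG["slave_gtrees"][loc]["server"]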
  • Gossamer uses local constructs called gtrees, each residing on an individual underlying file system, to encode a single location-independent file system. This is accomplished by splitting the file system data into two parts, and storing the corresponding data into two separate types of gtrees. Metadata corresponding to a virtual directory and pathname hierarchy is stored on the master gtree, which functions as a name service. In one embodiment, the master name service for a GVV uses multiple gtrees as replicas. The file system data (i.e., data files and directories) is partitioned among multiple slave gtrees, which function as storage servers for file data. In one embodiment, each data file is stored on a single slave gtree. It is noted that in this embodiment redundant copies of a data file may be stored in multiple locations by the underlying file system, volume management software, or disk controller (e.g., a mirrored drive, RAID scheme, etc.); however, from the perspective of the Gossamer file system, the data file is manifest as a single local file.
  • The directories and their contents and attributes are stored in the master gtree. Files and their contents and attributes are stored on the slave gtrees. The master and slave gtrees are connected by file pointers, which are objects on the master gtree that map from the file's virtual pathname to a globally unique identifier (GUID) for the file and the gtree that hosts it. In general, the master gtree is hosted on a single volume, while the slave gtrees are hosted on one or more volumes that may or may not include the volume the master gtree is hosted on.
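  • The relationship between the virtual namespace and the file pointers can be summarized with a small sketch. The following Python fragment is illustrative only (the FilePointer class, the dictionary, and the lookup function are hypothetical stand-ins, not the on-disk representation used by the master gtree), using example GUID values like those shown in FIGS. 6 and 7:
    # Illustrative sketch only: a file pointer maps a virtual pathname to a file
    # GUID and the slave gtree (location) holding the file's contents/attributes.
    from dataclasses import dataclass

    @dataclass
    class FilePointer:
        guid: str   # globally unique identifier of the file itself
        loc: str    # identifier of the slave gtree hosting the file

    # The /Namespace subtree of the master gtree parallels the virtual hierarchy;
    # each file is replaced by a same-named pointer file.
    master_namespace = {
        "/usr/joe/index.html": FilePointer(guid="9991", loc="2259"),
        "/usr/joe/data.dat":   FilePointer(guid="4267", loc="3215"),
    }

    def lookup(virtual_path: str) -> FilePointer:
        """LOOKUP: return the file pointer for a virtual pathname."""
        return master_namespace[virtual_path]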
  • In the following description and Figures, various data corresponding to an exemplary Gossamer file system are presented. It will be understood that actual implementations of Gossamer file systems may comprise millions of directories and files, and may reside on a single file server, multiple file servers on the same LAN, as well as a combination of local and remote file servers (servers on a WAN).
  • As shown in FIG. 6, from a user's viewpoint, such as that of client 11, the exemplary virtual file system is exported such that it appears to have a virtual directory and file hierarchy structure 60, also referred to herein as the user-view tree 60. Notably, the user-view tree corresponds to a "virtual" directory and file hierarchy because users name objects in the hierarchy using a GVV and virtual pathname that is entirely location-independent. Translations between virtual pathname and actual server-pathname combinations are handled, in part, through data stored on the GVV master, as depicted by GVV master directory structure 62. The GVV master directory structure logically divides its data into three spaces, each having a separate subdirectory name stored under a common root. These spaces include a Gossamer namespace 64 stored in a "/Namespace" subdirectory, a temporary migrating space 68 stored in a "/migrating" subdirectory, and a garbage space 70 stored in a "/Garbage" subdirectory.
  • The directory structure stored in Gossamer namespace 64 parallels the virtual directory hierarchy, wherein the files contained (logically) in the virtual directories are replaced by file pointers having the same names as the original files. For example, in user view tree 60, there are two files under the “/usr/joe” subdirectory: “index.html” and “data.dat.” Accordingly, respective file pointers 72 and 74 to these files having the same name and located in the same subdirectory path (“/usr/joe”) relative to the /Namespace directory are stored in Gossamer namespace 64.
  • Each of the file pointers comprises a very small file containing two pieces of information: a file GUID (guid) corresponding to the file itself, and a GUID slave location identifier (loc) that identifies the gtree the file is located on. The gtree and file GUID are sufficient to retrieve the file's attributes and contents. For example, file pointer 74 corresponding to the "data.dat" file has a file GUID of 4267, and a slave location identifier of 3215.
  • In one embodiment, the file and directory GUIDs are 128-bit identifiers generated by modern computers to be globally unique. The slave location identifiers are also 128-bit GUIDs. The values for the GUIDs discussed above and shown in the Figures herein are simplified to be four-digit base-ten numbers for clarity.
  • File systems do not directly support the use of binary names. Rather, the files and directories hosted on a file system use alphanumerical names. Accordingly, in one embodiment, the GUIDs are encoded using a reversible mapping into alphanumerical strings. For example, the following encodings are appropriate for file systems supporting a Latin character set. One is a four-bit encoding, in which a GUID is represented as a 32-byte lower case hexadecimal string. Another is a six-bit encoding, which results in a 22-byte string representation in Latin; each character represents six contiguous bits of the GUID, so 22 characters represent 132 bits, the last 4 of which are always zero. (An illustrative encoding sketch follows the symbol table below.)
  • There are 64 possibilities for a sequence of six bits, each of which matches a Latin symbol:
    ‘A’ = 0
    ‘B’ = 1
    . . .
    ‘Z’ = 25
    ‘a’ = 26
    ‘b’ = 27
    . . .
    ‘z’ = 51
    ‘0’ = 52
    ‘1’ = 53
    . . .
    ‘9’ = 61
    ‘_’ = 62
    ‘-’ = 63
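  • The six-bit mapping above can be expressed compactly in code. The Python sketch below is one possible reading of the encoding (the function names and the choice of most-significant-bits-first ordering are assumptions); it pads the 128-bit GUID with four zero bits so that 22 characters of six bits each cover 132 bits, and it is reversible:
    # Illustrative six-bit GUID encoding, using the symbol table given above.
    # The bit ordering (most significant bits first) is an assumption.
    import uuid

    ALPHABET = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ"      # 'A'..'Z' = 0..25
                "abcdefghijklmnopqrstuvwxyz"      # 'a'..'z' = 26..51
                "0123456789"                      # '0'..'9' = 52..61
                "_-")                             # '_' = 62, '-' = 63

    def encode_guid(guid: uuid.UUID) -> str:
        """128-bit GUID -> 22-character string, six bits per character."""
        bits = guid.int << 4                       # pad to 132 bits; last 4 are zero
        return "".join(ALPHABET[(bits >> s) & 0x3F] for s in range(126, -1, -6))

    def decode_guid(name: str) -> uuid.UUID:
        """Reverse mapping: 22-character string -> original GUID."""
        bits = 0
        for ch in name:
            bits = (bits << 6) | ALPHABET.index(ch)
        return uuid.UUID(int=bits >> 4)            # drop the 4 trailing zero bits

    # The four-bit encoding is simply the 32-character lowercase hex form (guid.hex).
    g = uuid.uuid4()
    assert decode_guid(encode_guid(g)) == g and len(g.hex) == 32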
  • As discussed above, the data files themselves are stored on slave gtrees that are separate from the master gtree. An exemplary storage scheme corresponding to the present example is illustrated in FIG. 7. As depicted in Gossamer Namespace 64 of FIG. 6 and discussed above, "index.html" file pointer 72 contains a slave location identifier of 2259 and a file GUID of 9991. This slave location identifier corresponds to a gtree 78 shown in FIG. 7, which contains a file named "9991" corresponding to the "index.html" data file in user view 60. Similarly, data file "data.dat" is stored on a gtree 80, in a slave location having a slave location identifier of 3215, in a file named "4267."
  • Each slave gtree is exported by GFS 44 as a flat storage space, e.g.:
      • /Guid1
      • /Guid2
      • /Guid3
  • In one embodiment, the underlying storage is hierarchical, to support fast lookup using underlying file system implementations that store directories as linked lists. The hierarchy is hidden by GFS 44 by cleverly translating all lookup, create, and delete calls into sequences of lookups down the hierarchy followed by the desired lookup, create, or delete call itself.
  • Rules of thumb in the industry suggest that performance problems are noticeable with more than about 100,000 objects in a single directory on most local file systems for UNIX-style operating systems. To target a limit of 2^16 (about 65,000) objects maximum in any directory, a two-level hierarchy provides the ability to support over 4,000,000,000 objects on a single partition. Instead of putting all the slave directories under the root directory, the slave directories are put in a /<string> directory, where string comprises a predetermined substring of the name of the GUID. For example, when a six-bit encoding is used, a three-character portion of the GUID name provides 2^18 unique combinations for string. Under this embodiment, there will be on average about 2^14 objects in each directory when the total number of objects reaches 4,000,000,000. In a four-bit encoding, a four-character portion of the name provides 2^16 directories under the root directory, and room for 2^16 objects in each of these directories. If it is desired to provide access to more than 4,000,000,000 files, then another level should be placed in the hierarchy.
  • Preferably, string should be generated from the portion of GUID bits that are changing most rapidly. For example, with a GUID generator commonly used by modern computer systems, bits 17-32 are the fastest-changing bits. Accordingly, bits 17-32 should be used to generate string on these computers.
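  • As a concrete illustration of the two-level layout, the following sketch computes a file's physical path inside its slave gtree using the four-bit (hexadecimal) encoding; the choice of hex digits 5-8 as the hash directory (they cover bits 17-32 of the GUID) and the function name are assumptions made for this example only:
    # Illustrative two-level slave layout: /<string>/<name>, where <name> is the
    # 32-character hex (four-bit) encoding of the GUID and <string> is a
    # four-character substring taken from the fastest-changing bits.
    import uuid

    def slave_path(guid: uuid.UUID) -> str:
        name = guid.hex          # 32-byte lowercase hexadecimal encoding of the GUID
        string = name[4:8]       # hex digits 5-8 span bits 17-32 (assumed hash directory)
        return f"/{string}/{name}"   # 2^16 hash directories, ~2^16 files each at 2^32 files

    # Example:
    print(slave_path(uuid.uuid4()))   # e.g. /9f3a/1c2d9f3a... (illustrative output)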
  • Further details of Gossamer client agent 43 are shown in FIG. 8. As indicated by an agent module 82, Gossamer client agent 43 functions as a UNIX agent that performs polling for configuration changes, mounts gtrees (i.e., mounts the underlying file system corresponding to the gtree), and provides an interface for a centralized administration module to communicate with the GVFD module. Agent module 82 communicates with GVFD 42 via a driver communication module 84, which provides a driver communication interface, and enables GVVs and gtrees to be added and removed. Agent module 82 is also enabled to access configuration information 45.
  • In embodiments implemented using the SUN SOLARIS™ operating system, the GVFD, master GFS, and slave GFS cooperate to implement each operation of the SOLARIS™ vnode/vfs interface in a way that provides a single user view of the entire virtual file system. The invention relies on the cooperation of all these software components for normal operation. In particular, the lookup command is responsible for locating files, to which other requests are then routed. The create operation also is critical, since it selects a slave for a file to be located on. In general, the slave for storing a new file may be selected using various criteria, including storage space and load-balancing considerations. In one embodiment, new files are stored on the slave with the largest free disk space.
  • Vnode/vfs requests that are used by one embodiment of the invention are detailed in TABLE 1 below. For each vnode/vfs request, TABLE 1 describes the handling that occurs when a GVFD instance receives the request. The column headed "Where is the Logic" indicates whether the GVFD does the operation alone (client) or forwards the operation to the appropriate GFS instance (master or slave or both). The column headed "Modifies Metadata" indicates whether the operation changes any data on the master GFS. The default treatment of an operation is "pass-through," in which case the request is forwarded to the correct file server and then from the file server to the correct underlying file system and object. In pass-through operations the interception layers are passive. One operation (create) generates two client messages and results in modifications to both the slave and the master data. Two operations (remove and rename) involve master-slave communication and result in modifications to both the master and the slave data. (An illustrative sketch of the create flow follows the table.)
    TABLE 1
    VFS/Vnode Request                      | Where is the Logic                    | Modifies Metadata | Notes
    access dir                             | master                                | No                | pass-through
    access file                            | slave                                 | No                | pass-through
    close                                  | master + slave                        | No                | Since metadata writes are always followed by fsync, close need not force an extra fsync
    create                                 | slave then master                     | Yes               | Three steps: create empty file on slave, create file pointer on master, fill file pointer with identity and location of empty file
    fsync                                  | slave                                 | Yes               | pass-through
    getattr, getsecattr dir                | master                                | No                | pass-through
    getattr, getsecattr file               | slave                                 | No                | pass-through
    link                                   | master                                | Yes               | pass-through
    lookup                                 | master then slave                     | No                | Gets file pointer then file. Attributes are those of the file. File pointer may be cached on client
    map file                               | slave                                 | No                | pass-through
    mkdir                                  | master                                | Yes               | pass-through
    open                                   | client                                | No                | Only affects NFS client module on client
    poll dir                               | master                                | No                | pass-through
    poll file                              | slave                                 | No                | pass-through
    read, write, putpage, getpage,         | slave                                 | No                | pass-through
    pageio, readlink                       |                                       |                   |
    readdir                                | master                                | No                | pass-through
    remove                                 | master (which sends message to slave) | Yes               | Two steps: master removes file pointer, then forwards remove to slave
    rename                                 | master                                | Yes               | May result in remove of target, which works like 'remove' above
    rmdir                                  | master                                | Yes               | Inverse of mkdir
    rwlock, rwunlock, shrlock dir          | master                                | Yes               | pass-through
    rwlock, rwunlock, setfl, shrlock file  | slave                                 | No                | pass-through
    seek                                   | slave                                 | No                | pass-through
    setattr, setsecattr dir                | master                                | Yes               | pass-through
    setattr, setsecattr file               | slave                                 | Yes               | pass-through
    space                                  | master + slave                        | No                | pass-through to both
    symlink                                | master                                | Yes               | pass-through
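  • For illustration, the three-step create flow in TABLE 1, together with the free-space slave-selection policy mentioned above, might be sketched as follows. The send_to_slave and send_to_master callables, the message dictionaries, and the free-space table are hypothetical stand-ins for the GVFD-to-GFS communication, not the actual protocol.
    # Illustrative sketch of the create operation (slave then master). All helper
    # callables and message formats are assumptions, not the real GVFD/GFS protocol.
    import uuid

    def create_file(virtual_path, slave_free_space, send_to_slave, send_to_master):
        # One embodiment picks the slave gtree with the most free space.
        loc = max(slave_free_space, key=slave_free_space.get)
        guid = uuid.uuid4().hex
        # Step 1: create an empty file, named by its GUID, on the chosen slave.
        send_to_slave(loc, {"op": "create", "guid": guid})
        # Steps 2-3: create the file pointer on the master and fill it with the
        # identity (guid) and location (loc) of the empty file.
        send_to_master({"op": "create_pointer", "path": virtual_path,
                        "guid": guid, "loc": loc})
        return guid, loc

    # Example with dummy transports:
    create_file("/usr/joe/new.txt", {"2259": 10_000, "3215": 50_000},
                lambda loc, msg: None, lambda msg: None)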
  • The Gossamer file system enables clients to access any data file within the aggregated storage space of a GVV through cooperation between the client-side GVFD and the server-side GFS's corresponding to both the master and the slave volumes. In general, access functions include functions that are typically used to manipulate or obtain data from a data file, including ACCESS, READ, WRITE, GETATTR, SETATTR, etc. These access functions are always preceded by a LOOKUP function, which determines the physical location of the data file (file server and physical pathname) based on its virtual pathname. The LOOKUP function is performed by blocks 90-95 below.
  • With reference to the flowchart of FIG. 9 and FIG. 3, the process for accessing a data file begins in a block 90 in which a local application (e.g., local application 12) running on a client 11 requests access to the file using the file's virtual pathname. As with a conventional file access request, the request is passed from the user mode level of the OS to the OS kernel, where it is intercepted in a block 91 by GVFD 42 running on client 11. In a block 92, GVFD 42 looks up the identity of the file server hosting the master gtree (the master file server) using its local copy of configuration information 45, and then passes the virtual pathname of the file and a client identifier to the master file server in a block 93. The virtual pathname is sent as a file I/O request via NFS client instance 34A to the master file server, wherein it is received by an NFS Daemon 36. As shown in FIG. 2 and discussed above, NFS Daemon 36 would normally pass the request to local native file system 38. However, under the invention's virtual file system scheme, GFS module 44 on the master file server intercepts the file access request in a block 94, navigates the master gtree until it locates the pointer file corresponding to the virtual pathname, whereupon it returns the pointer file to GVFD 42 on the client. In response to receiving the pointer file, GVFD 42 parses the pointer file in a block 95 to identify the data file's identifier and the file server hosting the slave volume (the slave file server) in which the data file is stored. As discussed above, the pointer file includes two GUIDs, wherein the first GUID is used to identify the slave volume and the second GUID (the data file's identifier) is used to determine the physical pathname under which the data file is stored in the slave volume. Once the slave volume is known, the slave file server can be determined by lookup using the local copy of configuration information 45 maintained on client 11. This completes the LOOKUP function.
  • After the file server and data file identifier are known, the data file can be accessed. GVFD 42 sends a file access request including the data file's identifier to the slave file server in a block 96, whereupon the file access request is intercepted by a GFS module 44 running on the slave. Slave GFS module 44 routes the request to the local native file system 38 corresponding to the slave volume in a block 97. The file access process is completed in a block 98 in which the local native file system performs the file access request and returns the results to the GFS module, which then returns the results to the client.
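  • The sequence of blocks 90-98 can be summarized in a short client-side sketch. The configuration dictionary and the master_lookup/slave_access callables below are hypothetical placeholders for the NFS messages exchanged by GVFD 42 and the GFS modules; the block numbers in the comments refer to FIG. 9.
    # Client-side sketch of LOOKUP followed by a file access (FIG. 9). The
    # callables and the config layout are illustrative assumptions only.
    def access_file(virtual_path, config, master_lookup, slave_access, request):
        master_server = config["master_server"]                  # block 92: local config lookup
        pointer = master_lookup(master_server, virtual_path)     # blocks 93-94: ask master GFS
        guid, loc = pointer["guid"], pointer["loc"]              # block 95: parse pointer file
        slave_server = config["slave_servers"][loc]              # block 95: resolve slave host
        return slave_access(slave_server, guid, request)         # blocks 96-98: slave GFS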
  • A generally similar process is used for other types of file and directory accesses in which file system objects are changed, such as when a file or directory is added, deleted, renamed, or moved. The appropriate change is made on the master directory tree. For example, a user may request to add a new data file f into a particular directory d. In response to the file system access request, a new pointer file by the name f is added to the directory d in the master gtree, and the new data file is stored on an appropriate slave volume hosted by one of the file system's file servers.
  • An exemplary Gossamer file system 100 is illustrated in FIG. 10. The system includes four file servers 20A, 20B, 20C, and 20D. Each of the file servers supports a single file system from which a respective gtree is generated (i.e., gtrees A, B, C and D). Each file system is stored on a plurality of storage devices 24, each of which may host a single export 25 or multiple exports 26. The master gtree and slave gtrees are stored on volumes of the various servers 102, 24, 25, 26, 108.
  • Dynamic Scaling and Migration
  • A significant advantage provided by the invention is the ability to easily scale the virtual file system dynamically without having to change the location or name of any files or directories in the virtual directory and file hierarchy (e.g., user view 60). The file system may be scaled by adding another server to a GVV without taking the system offline, and in a manner that is transparent to users and client applications.
  • A key function that enables the foregoing system scaling is called data “migration.” Data migration enables files to be migrated (i.e., moved) between physical storage devices, including devices hosted by separate servers, in a transparent manner. For example, suppose that Gossamer file system 100 initially included file servers 20A, 20C, and 20D, all of whose underlying file system capacities are becoming full. In order to provide additional storage capacity, the system administrator decides to add file server 20B. In most instances, the first step upon adding a new file server to a Gossamer file system will be to load-balance the system by migrating data files from one or more existing file servers to the new file server.
  • When a file is migrated, client applications are undisturbed. The GFS module will attempt to migrate a set of files by copying them and then deleting the originals. Write access to a file during migration causes that file's migration to abort. However, any file access after the migration will cause the GVFD client to access the file in its new migrated location.
  • Suppose it is desired to migrate the file "data.dat" in the /usr/joe directory from file server 20A to file server 20B; for illustrative purposes, the file is initially stored in a local export 103 hosted by a storage device 104 on file server 20A and ends up being migrated to a destination export 106 hosted on a storage device 108 on file server 20B. The migration of a file is managed by the GFS module on the source server. With reference to the flowchart of FIG. 11 and FIGS. 12A-D, migrating a data file proceeds as follows. First, in a block 110, the vnode of the pointer (named PointerFilePath) on the master gtree 102 is opened. In a block 112, the vnode of the file (which is local, and whose name is determined by the GUID) is opened. In this example, the GUID name is 4267. Next, in a decision block 114, a determination is made as to whether the migration module is the only process with the vnode open. Effectively, this decision block determines whether any client applications presently have access to the data file that is to be migrated. This determination can be made by examining the reference count for the vnode associated with <GUIDname>. If the reference count is greater than one, the migration of the file is stopped, as indicated by a return block 116.
  • If the reference count is 1, the logic proceeds to a block 118 in which the vnode is locked (preventing subsequent access requests from other users from being granted), and the migration operation is allowed to proceed. In a block 120, a hardlink is created on the master in migrating space 68 (i.e., under the "/Migration" directory) having an entry of "/<GUIDsrc>/<GUIDname>→PointerFilePath". The GUIDsrc is the slave location identifier that is used to determine the gtree the file is originally stored in; in this case GUIDsrc=2259, which corresponds to gtree A, and the hardlink entry is /2259/4267, as shown in FIG. 12A.
  • In a block 122, the GUID for the destination gtree (GUIDdest) is appended to the pointer file for the data file, such that the pointer file comprises <GUIDname><GUIDsrc><GUIDdest>. As shown in FIG. 12A, this pointer file now comprises {4267, 2259, 3215}. The file is then copied from its local location to the destination file server in a block 124, as shown in FIG. 12B, whereupon the local file is deleted in a block 126, as shown in FIG. 12C. During the foregoing operations, checks are made to ensure that the file has been successfully copied to the destination prior to deleting the local copy.
  • Once the file has been successfully moved, cleanup operations are performed to complete the migration process. This comprises updating the pointer file in a block 128, deleting the hardlink on the master in a block 130, unlocking and releasing the <GUIDname> vnode in a block 132, and releasing the PointerFilePath vnode in a block 134. The results of these cleanup operations are depicted in FIGS. 12C and 12D.
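  • The migration steps of FIG. 11 (blocks 110-134) can be condensed into the following sketch. The master, src_slave, and dst_slave objects and their methods are hypothetical abstractions of the vnode, hardlink, and copy operations described above; only the ordering and bookkeeping are taken from the text.
    # Illustrative ordering of the migration steps; all objects/methods are
    # assumed abstractions, not the actual GFS kernel interfaces.
    def migrate_file(pointer_path, guid, master, src_slave, dst_slave):
        ptr_vnode = master.open(pointer_path)                      # block 110
        file_vnode = src_slave.open(guid)                          # block 112
        if file_vnode.ref_count > 1:                               # block 114: file in use?
            return False                                           # block 116: abort migration
        file_vnode.lock()                                          # block 118
        master.hardlink(f"/{src_slave.loc}/{guid}", pointer_path)  # block 120: migrating-space entry
        master.append_to_pointer(pointer_path, dst_slave.loc)      # block 122: add GUIDdest
        dst_slave.copy_from(src_slave, guid)                       # block 124: copy to destination
        src_slave.delete(guid)                                     # block 126: delete local copy
        master.update_pointer(pointer_path, guid, dst_slave.loc)   # block 128: final pointer
        master.remove_hardlink(f"/{src_slave.loc}/{guid}")         # block 130
        file_vnode.unlock_and_release()                            # block 132
        master.release(ptr_vnode)                                  # block 134
        return True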
  • Some embodiments employ a further mechanism to isolate use and migration of a given file, preventing them from occurring simultaneously. In such embodiments, any open file request causes that file on the slave to acquire a shared lock. Subsequent close operations release that shared lock. Migrations do not attempt to migrate files that are locked by any client. This prevents migration from occurring on any file currently open.
  • The above procedure is crash-proof by design. At any point, there is enough information in the migrating space and the file pointer to quickly clean up any currently in-progress migration operations when a slave GFS is restarted.
  • Generally, migration operations are handled through use of Gossamer administration utility 48. This utility enables migrations to be initiated through manual intervention, or enables systems administrators to create migration policies that automatically invoke migration operations when a predetermined set of criteria is met. For example, a Gossamer system administrator can analyze file system statistics (e.g., percentage of space used, number of files, file accesses, etc.) or merely await broad recommendations from the system. She can enable the system to choose candidates for migration automatically, or select files manually. In addition, she can schedule migrations for one-time, daily, or weekly execution through the use of the migration schedule management tool.
  • Importing a Conventional File System
  • A non-Gossamer file system must undergo some conversion to be used by Gossamer, and vice versa. Accordingly, an import/export tool is provided to perform the conversion. The import tool constructs a new master gtree, or connects to an existing gtree. The import tool then inserts the directory hierarchy of the file system being converted into that master gtree. This involves copying all directories and their attributes and contents. Meanwhile, it assigns all files a GUID and rearranges them in the file hierarchy, so that after conversion the file system will be configured as a slave.
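  • A minimal sketch of the import conversion is given below, under the assumption of a JSON-encoded pointer file and the hex-based two-level slave layout discussed earlier; the paths, the slave_loc argument, and the pointer format are illustrative, not the tool's actual behavior.
    # Illustrative import: recreate the directory hierarchy under the master's
    # namespace, assign each file a GUID, place its contents in the slave layout,
    # and write a same-named pointer file. All formats/paths are assumptions.
    import json, os, shutil, uuid

    def import_tree(src_root, master_ns_root, slave_root, slave_loc):
        for dirpath, _dirnames, filenames in os.walk(src_root):
            rel = os.path.relpath(dirpath, src_root)
            os.makedirs(os.path.join(master_ns_root, rel), exist_ok=True)    # copy directory structure
            for fname in filenames:
                guid = uuid.uuid4().hex
                hashdir = os.path.join(slave_root, guid[4:8])                # two-level slave layout
                os.makedirs(hashdir, exist_ok=True)
                shutil.copy2(os.path.join(dirpath, fname),
                             os.path.join(hashdir, guid))                    # file contents -> slave
                with open(os.path.join(master_ns_root, rel, fname), "w") as fp:
                    json.dump({"guid": guid, "loc": slave_loc}, fp)          # pointer file on master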
  • Backup and Restore
  • Backup and restore of a GVV can be a single operation for a small GVV, in which case a regular GVFD module is run on the master and the backup and restore procedures access the GVV through it. However, most likely each gtree needs to be backed up separately to keep the backup window small. In this instance, a special-purpose GVFD instance is run on each server to manage backup of that server's gtrees. This GVFD provides the backup tool (whether it is provided by SUN or a third party) with a partial view of the file system, containing all the directories and the files hosted on the server being backed up.
  • A single-gtree backup requires a modified GVFD running on each server, as depicted by a GVFD module 136 in FIG. 3. In this configuration, GVFD module 136 is enabled to access configuration information 45 via an NFS client instance 138.
  • Master Gtree Replication
  • In some implementations, it may be desired to explicitly replicate the master gtree. It is noted that some file systems provide replication functionality wherein various file system data, such as the data corresponding to the master gtree, are replicated by the file system itself. In these types of implementations, the underlying replication of the file system data does not alter the operation of the Gossamer file system, and in fact is transparent to Gossamer. In contrast, an explicitly replicated master gtree means that replication of the master gtree is controlled by Gossamer. Functionality for replicating the master gtree is provided by a replication engine module that is part of the GFS instance that provides access to the master volume. In addition, Gossamer administration agent 46 may include replication management functions. In general, the master gtree will be replicated on a file server that is not the same file server that hosts the original master gtree.
  • Exemplary File Server Computer System
  • With reference to FIG. 13, a generally conventional computer server 200 is illustrated, which is suitable for use in connection with practicing the present invention, and may be used for the file servers in a Gossamer virtual file system. Examples of computer systems that may be suitable for these purposes include stand-alone and enterprise-class servers operating UNIX-based and LINUX-based operating systems.
  • Computer server 200 includes a chassis 202 in which is mounted a motherboard (not shown) populated with appropriate integrated circuits, including one or more processors 204 and memory (e.g., DIMMs or SIMMS) 206, as is generally well known to those of ordinary skill in the art. A monitor 208 is included for displaying graphics and text generated by software programs and program modules that are run by the computer server. A mouse 210 (or other pointing device) may be connected to a serial port (or to a bus port or USB port) on the rear of chassis 202, and signals from mouse 210 are conveyed to the motherboard to control a cursor on the display and to select text, menu options, and graphic components displayed on monitor 208 by software programs and modules executing on the computer. In addition, a keyboard 212 is coupled to the motherboard for user entry of text and commands that affect the running of software programs executing on the computer. Computer server 200 also includes a network interface card (NIC) 214, or equivalent circuitry built into the motherboard to enable the server to send and receive data via a network 216.
  • File system storage corresponding to the invention may be implemented via a plurality of hard disks 218 that are stored internally within chassis 202, and/or via a plurality of hard disks that are stored in an external disk array 220 that may be accessed via a SCSI card 222 or equivalent SCSI circuitry built into the motherboard. Optionally, disk array 220 may be accessed using a Fibre Channel link using an appropriate Fibre Channel interface card (not shown) or built-in circuitry.
  • Computer server 200 generally may include a compact disk-read only memory (CD-ROM) drive 224 into which a CD-ROM disk may be inserted so that executable files and data on the disk can be read for transfer into memory 206 and/or into storage on hard disk 218. Similarly, a floppy drive 226 may be provided for such purposes. Other mass memory storage devices such as an optical recorded medium or DVD drive may also be included. The machine instructions comprising the software program that causes processor(s) 204 to implement the functions of the present invention that have been discussed above will typically be distributed on floppy disks 228 or CD-ROMs 230 (or other memory media) and stored in one or more hard disks 218 until loaded into memory 206 for execution by processor(s) 204. Optionally, the machine instructions may be loaded via network 216.
  • Although the present invention has been described in connection with a preferred form of practicing it and modifications thereto, those of ordinary skill in the art will understand that many other modifications can be made to the invention within the scope of the claims that follow. Accordingly, it is not intended that the scope of the invention in any way be limited by the above description, but instead be determined entirely by reference to the claims that follow.

Claims (21)

1.-30. (canceled)
31. A method of virtualizing a plurality of file systems, comprising:
aggregating the file systems into a single virtual storage volume;
creating in a master logical volume a virtual directory and file hierarchy including a virtual pathname for each file stored in the virtual storage volume;
for each file in the virtual directory and file hierarchy, associating with the file a metadata and a pointer to the file contents stored in one or more slave logical volumes; and
maintaining on each client that accesses the virtual storage volume a copy of configuration information that identifies a file system used to host the master logical volume and one or more file systems used to host the slave logical volumes.
32. A method as recited in claim 31, wherein the one or more file systems used to host the slave logical volumes includes the file system used to host the master logical volume.
33. A method as recited in claim 31, wherein the virtual storage volume appears to the each client as at least a portion of a local file system of the client.
34. A method as recited in claim 31, further comprising providing a virtualization layer that enables the each client to access a file stored in the virtual storage volume by using a reference to the virtual pathname of the file.
35. A method as recited in claim 31, wherein the each client does not need to know the file systems and pathnames under which the file is actually stored.
36. A method as recited in claim 31, wherein the file systems include at least two file systems that comprise different file system types.
37. A method as recited in claim 31, further comprising dynamically scaling the virtual storage volume by adding a new file system to the virtual storage volume without taking the virtual storage volume offline.
38. A method as recited in claim 31, further comprising migrating one or more files initially stored on one of the file systems to another one of the file systems, wherein the files are migrated without taking any of the file systems offline and in a manner that is transparent to one or more clients accessing the virtual storage volume.
39. A method as recited in claim 31, wherein the pointer to the file includes a first GUID (global unique identifier) that identifies at least one of the slave logical volumes on which the file is stored and a second GUID that is used to identify a storage location within the one of the slave logical volumes.
40. A method as recited in claim 31, wherein at least two of the file systems are hosted on a single server.
41. A method as recited in claim 31, wherein each of the file systems are hosted on different servers.
42. A method as recited in claim 31, wherein configuration information that maps the master logical volume and the slave logical volumes to the file systems is maintained on one or more servers associated with the file systems.
43. A method as recited in claim 31, wherein the each client includes an agent that updates the local copy of the configuration information when the configuration information is modified.
44. A system for virtualizing a plurality of file systems, comprising:
a communication interface used to communicate with the file systems; and
a processor configured to aggregate the file systems into a single virtual storage volume, create in a master logical volume a virtual directory and file hierarchy including a virtual pathname for each file stored in the virtual storage volume, and for each file in the virtual directory and file hierarchy, associate with the file a metadata and a pointer to the file contents stored in one or more slave logical volumes;
wherein a copy of configuration information that identifies a file system used to host the master logical volume and one or more file systems used to host the slave logical volumes is maintained on each client that accesses the virtual storage volume.
45. A system as recited in claim 44, wherein the one or more file systems used to host the slave logical volumes includes the file system used to host the master logical volume.
46. A system as recited in claim 44, wherein a virtualization layer enables the each client to access a file stored in the virtual storage volume by using a reference to the virtual pathname of the file.
47. A system as recited in claim 44, wherein the virtual storage volume can be dynamically scaled by adding a new file system to the virtual storage volume without taking the virtual storage volume offline.
48. A system as recited in claim 44, wherein configuration information that maps the master logical volume and the slave logical volumes to the file systems is maintained on one or more servers associated with the file systems.
49. A system as recited in claim 44, wherein the each client includes an agent that updates the local copy of the configuration information when the configuration information is modified.
50. A computer program product for virtualizing a plurality of file systems, the computer program product being embodied in a computer readable medium and comprising computer instructions for:
aggregating the file systems into a single virtual storage volume;
creating in a master logical volume a virtual directory and file hierarchy including a virtual pathname for each file stored in the virtual storage volume;
for each file in the virtual directory and file hierarchy, associating with the file a metadata and a pointer to the file contents stored in one or more slave logical volumes; and
maintaining on each client that accesses the virtual storage volume a copy of configuration information that identifies a file system used to host the master logical volume and one or more file systems used to host the slave logical volumes.
US11/338,496 2001-12-19 2006-01-23 Virtual file system Abandoned US20060123062A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/338,496 US20060123062A1 (en) 2001-12-19 2006-01-23 Virtual file system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/025,005 US7024427B2 (en) 2001-12-19 2001-12-19 Virtual file system
US11/338,496 US20060123062A1 (en) 2001-12-19 2006-01-23 Virtual file system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/025,005 Continuation US7024427B2 (en) 2001-12-19 2001-12-19 Virtual file system

Publications (1)

Publication Number Publication Date
US20060123062A1 true US20060123062A1 (en) 2006-06-08

Family

ID=21823520

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/025,005 Expired - Lifetime US7024427B2 (en) 2001-12-19 2001-12-19 Virtual file system
US11/338,496 Abandoned US20060123062A1 (en) 2001-12-19 2006-01-23 Virtual file system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/025,005 Expired - Lifetime US7024427B2 (en) 2001-12-19 2001-12-19 Virtual file system

Country Status (1)

Country Link
US (2) US7024427B2 (en)

Cited By (150)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070192561A1 (en) * 2006-02-13 2007-08-16 Ai Satoyama virtual storage system and control method thereof
US20070214183A1 (en) * 2006-03-08 2007-09-13 Omneon Video Networks Methods for dynamic partitioning of a redundant data fabric
US20070214384A1 (en) * 2006-03-07 2007-09-13 Manabu Kitamura Method for backing up data in a clustered file system
US20070255699A1 (en) * 2006-04-28 2007-11-01 Microsoft Corporation Bypass of the namespace hierarchy to open files
US20080126434A1 (en) * 2006-08-03 2008-05-29 Mustafa Uysal Protocol virtualization for a network file system
US20080162582A1 (en) * 2007-01-03 2008-07-03 International Business Machines Corporation Method, computer program product, and system for coordinating access to locally and remotely exported file systems
US20090049153A1 (en) * 2007-08-14 2009-02-19 International Business Machines Corporation Methods, computer program products, and apparatuses for providing remote client access to exported file systems
US20090144300A1 (en) * 2007-08-29 2009-06-04 Chatley Scott P Coupling a user file name with a physical data file stored in a storage delivery network
US20090198704A1 (en) * 2008-01-25 2009-08-06 Klavs Landberg Method for automated network file and directory virtualization
US20090204650A1 (en) * 2007-11-15 2009-08-13 Attune Systems, Inc. File Deduplication using Copy-on-Write Storage Tiers
US20090204705A1 (en) * 2007-11-12 2009-08-13 Attune Systems, Inc. On Demand File Virtualization for Server Configuration Management with Limited Interruption
US20090234856A1 (en) * 2001-01-11 2009-09-17 F5 Networks, Inc. Aggregated opportunistic lock and aggregated implicit lock management for locking aggregated files in a switched file system
US20090259665A1 (en) * 2008-04-09 2009-10-15 John Howe Directed placement of data in a redundant data storage system
US7606868B1 (en) * 2006-03-30 2009-10-20 Wmware, Inc. Universal file access architecture for a heterogeneous computing environment
US7698351B1 (en) 2006-04-28 2010-04-13 Netapp, Inc. GUI architecture for namespace and storage management
US20100169488A1 (en) * 2008-12-31 2010-07-01 Sap Ag System and method of consolidated central user administrative provisioning
US20100257218A1 (en) * 2009-04-03 2010-10-07 Konstantin Iliev Vassilev Merging multiple heterogeneous file systems into a single virtual unified file system
US20110087696A1 (en) * 2005-01-20 2011-04-14 F5 Networks, Inc. Scalable system for partitioning and accessing metadata over multiple servers
US8117244B2 (en) 2007-11-12 2012-02-14 F5 Networks, Inc. Non-disruptive file migration
US8151360B1 (en) 2006-03-20 2012-04-03 Netapp, Inc. System and method for administering security in a logical namespace of a storage system environment
USRE43346E1 (en) 2001-01-11 2012-05-01 F5 Networks, Inc. Transaction aggregation in a switched file system
US8180747B2 (en) 2007-11-12 2012-05-15 F5 Networks, Inc. Load sharing cluster file systems
US8195769B2 (en) 2001-01-11 2012-06-05 F5 Networks, Inc. Rule based aggregation of files and transactions in a switched file system
US8195760B2 (en) 2001-01-11 2012-06-05 F5 Networks, Inc. File aggregation in a switched file system
US8204860B1 (en) 2010-02-09 2012-06-19 F5 Networks, Inc. Methods and systems for snapshot reconstitution
US8239354B2 (en) 2005-03-03 2012-08-07 F5 Networks, Inc. System and method for managing small-size files in an aggregated file system
US8265919B1 (en) 2010-08-13 2012-09-11 Google Inc. Emulating a peripheral mass storage device with a portable device
US8285749B2 (en) * 2010-03-05 2012-10-09 Hitachi, Ltd. Computer system and recording medium
US8285817B1 (en) * 2006-03-20 2012-10-09 Netapp, Inc. Migration engine for use in a logical namespace of a storage system environment
US8352785B1 (en) 2007-12-13 2013-01-08 F5 Networks, Inc. Methods for generating a unified virtual snapshot and systems thereof
US8397059B1 (en) 2005-02-04 2013-03-12 F5 Networks, Inc. Methods and apparatus for implementing authentication
US8396895B2 (en) 2001-01-11 2013-03-12 F5 Networks, Inc. Directory aggregation for files distributed over a plurality of servers in a switched file system
US8396836B1 (en) 2011-06-30 2013-03-12 F5 Networks, Inc. System for mitigating file virtualization storage import latency
US8417681B1 (en) 2001-01-11 2013-04-09 F5 Networks, Inc. Aggregated lock management for locking aggregated files in a switched file system
US8417746B1 (en) 2006-04-03 2013-04-09 F5 Networks, Inc. File system management with enhanced searchability
US20130110903A1 (en) * 2011-10-27 2013-05-02 Microsoft Corporation File fetch from a remote client device
US8463850B1 (en) 2011-10-26 2013-06-11 F5 Networks, Inc. System and method of algorithmically generating a server side transaction identifier
GB2498626A (en) * 2011-12-13 2013-07-24 Ibm Optimising the storage allocation in a virtual desktop environment
US8515902B2 (en) 2011-10-14 2013-08-20 Box, Inc. Automatic and semi-automatic tagging features of work items in a shared workspace for metadata tracking in a cloud-based content management system with selective or optional user contribution
US8549518B1 (en) * 2011-08-10 2013-10-01 Nutanix, Inc. Method and system for implementing a maintenanece service for managing I/O and storage for virtualization environment
US8549582B1 (en) 2008-07-11 2013-10-01 F5 Networks, Inc. Methods for handling a multi-protocol content name and systems thereof
US8548953B2 (en) 2007-11-12 2013-10-01 F5 Networks, Inc. File deduplication using storage tiers
GB2501182A (en) * 2012-04-11 2013-10-16 Box Inc Cloud service enabled to handle a set of files depicted to a user as a single file
US8583619B2 (en) 2007-12-05 2013-11-12 Box, Inc. Methods and systems for open source collaboration in an application service provider environment
US8601473B1 (en) 2011-08-10 2013-12-03 Nutanix, Inc. Architecture for managing I/O and storage for a virtualization environment
US8635247B1 (en) 2006-04-28 2014-01-21 Netapp, Inc. Namespace and storage management application infrastructure for use in management of resources in a storage system environment
US8682916B2 (en) 2007-05-25 2014-03-25 F5 Networks, Inc. Remote file virtualization in a switched file system
US8719445B2 (en) 2012-07-03 2014-05-06 Box, Inc. System and method for load balancing multiple file transfer protocol (FTP) servers to service FTP connections for a cloud-based service
US20140137252A1 (en) * 2011-06-27 2014-05-15 Beijing Qihoo Technology Company Limited Method and system for unlocking and deleting file and folder
US8738673B2 (en) 2010-09-03 2014-05-27 International Business Machines Corporation Index partition maintenance over monotonically addressed document sequences
US8745267B2 (en) 2012-08-19 2014-06-03 Box, Inc. Enhancement of upload and/or download performance based on client and/or server feedback information
US8782174B1 (en) 2011-03-31 2014-07-15 Emc Corporation Uploading and downloading unsecured files via a virtual machine environment
US8850130B1 (en) 2011-08-10 2014-09-30 Nutanix, Inc. Metadata for managing I/O and storage for a virtualization environment
US8863124B1 (en) 2011-08-10 2014-10-14 Nutanix, Inc. Architecture for managing I/O and storage for a virtualization environment
US8868574B2 (en) 2012-07-30 2014-10-21 Box, Inc. System and method for advanced search and filtering mechanisms for enterprise administrators in a cloud-based environment
US8892679B1 (en) 2013-09-13 2014-11-18 Box, Inc. Mobile device, methods and user interfaces thereof in a mobile device platform featuring multifunctional access and engagement in a collaborative environment provided by a cloud-based platform
US8914900B2 (en) 2012-05-23 2014-12-16 Box, Inc. Methods, architectures and security mechanisms for a third-party application to access content in a cloud-based platform
US8990307B2 (en) 2011-11-16 2015-03-24 Box, Inc. Resource effective incremental updating of a remote client with events which occurred via a cloud-enabled platform
US9009106B1 (en) 2011-08-10 2015-04-14 Nutanix, Inc. Method and system for implementing writable snapshots in a virtualized storage environment
US9015601B2 (en) 2011-06-21 2015-04-21 Box, Inc. Batch uploading of content to a web-based collaboration environment
US9020912B1 (en) 2012-02-20 2015-04-28 F5 Networks, Inc. Methods for accessing data in a compressed file system and devices thereof
US9019123B2 (en) 2011-12-22 2015-04-28 Box, Inc. Health check services for web-based collaboration environments
US9027108B2 (en) 2012-05-23 2015-05-05 Box, Inc. Systems and methods for secure file portability between mobile applications on a mobile device
US9054919B2 (en) 2012-04-05 2015-06-09 Box, Inc. Device pinning capability for enterprise cloud service and storage accounts
US9063912B2 (en) 2011-06-22 2015-06-23 Box, Inc. Multimedia content preview rendering in a cloud content management system
US9098474B2 (en) 2011-10-26 2015-08-04 Box, Inc. Preview pre-generation based on heuristics and algorithmic prediction/assessment of predicted user behavior for enhancement of user experience
US9118697B1 (en) 2006-03-20 2015-08-25 Netapp, Inc. System and method for integrating namespace management and storage management in a storage system environment
US9117087B2 (en) 2012-09-06 2015-08-25 Box, Inc. System and method for creating a secure channel for inter-application communication based on intents
US9135462B2 (en) 2012-08-29 2015-09-15 Box, Inc. Upload and download streaming encryption to/from a cloud-based platform
US9195636B2 (en) 2012-03-07 2015-11-24 Box, Inc. Universal file type preview for mobile devices
US9197718B2 (en) 2011-09-23 2015-11-24 Box, Inc. Central management and control of user-contributed content in a web-based collaboration environment and management console thereof
US9195519B2 (en) 2012-09-06 2015-11-24 Box, Inc. Disabling the self-referential appearance of a mobile application in an intent via a background registration
US9195500B1 (en) 2010-02-09 2015-11-24 F5 Networks, Inc. Methods for seamless storage importing and devices thereof
US9213684B2 (en) 2013-09-13 2015-12-15 Box, Inc. System and method for rendering document in web browser or mobile device regardless of third-party plug-in software
US9237170B2 (en) 2012-07-19 2016-01-12 Box, Inc. Data loss prevention (DLP) methods and architectures by a cloud service
US9286298B1 (en) 2010-10-14 2016-03-15 F5 Networks, Inc. Methods for enhancing management of backup data sets and devices thereof
US9292833B2 (en) 2012-09-14 2016-03-22 Box, Inc. Batching notifications of activities that occur in a web-based collaboration environment
US9311071B2 (en) 2012-09-06 2016-04-12 Box, Inc. Force upgrade of a mobile application via a server side configuration file
US9369520B2 (en) 2012-08-19 2016-06-14 Box, Inc. Enhancement of upload and/or download performance based on client and/or server feedback information
US9396245B2 (en) 2013-01-02 2016-07-19 Box, Inc. Race condition handling in a system which incrementally updates clients with events that occurred in a cloud-based collaboration platform
US9413587B2 (en) 2012-05-02 2016-08-09 Box, Inc. System and method for a third-party application to access content within a cloud-based platform
US9483473B2 (en) 2013-09-13 2016-11-01 Box, Inc. High availability architecture for a cloud-based concurrent-access collaboration platform
US9495364B2 (en) 2012-10-04 2016-11-15 Box, Inc. Enhanced quick search features, low-barrier commenting/interactive features in a collaboration platform
US9507795B2 (en) 2013-01-11 2016-11-29 Box, Inc. Functionalities, features, and user interface of a synchronization client to a cloud-based environment
US9519501B1 (en) 2012-09-30 2016-12-13 F5 Networks, Inc. Hardware assisted flow acceleration and L2 SMAC management in a heterogeneous distributed multi-tenant virtualized clustered system
US9519886B2 (en) 2013-09-13 2016-12-13 Box, Inc. Simultaneous editing/accessing of content by collaborator invitation through a web-based or mobile application to a cloud-based collaboration platform
US9535924B2 (en) 2013-07-30 2017-01-03 Box, Inc. Scalability improvement in a system which incrementally updates clients with events that occurred in a cloud-based collaboration platform
US9535909B2 (en) 2013-09-13 2017-01-03 Box, Inc. Configurable event-based automation architecture for cloud-based collaboration platforms
US9553758B2 (en) 2012-09-18 2017-01-24 Box, Inc. Sandboxing individual applications to specific user folders in a cloud-based service
US9554418B1 (en) 2013-02-28 2017-01-24 F5 Networks, Inc. Device for topology hiding of a visited network
US9558202B2 (en) 2012-08-27 2017-01-31 Box, Inc. Server side techniques for reducing database workload in implementing selective subfolder synchronization in a cloud-based environment
US9575981B2 (en) 2012-04-11 2017-02-21 Box, Inc. Cloud service enabled to handle a set of files depicted to a user as a single file in a native operating system
US9602514B2 (en) 2014-06-16 2017-03-21 Box, Inc. Enterprise mobility management and verification of a managed application by a content provider
US9628268B2 (en) 2012-10-17 2017-04-18 Box, Inc. Remote key management in a cloud-based environment
US9633037B2 (en) 2013-06-13 2017-04-25 Box, Inc. Systems and methods for synchronization event building and/or collapsing by a synchronization component of a cloud-based platform
US9652741B2 (en) 2011-07-08 2017-05-16 Box, Inc. Desktop application for access and interaction with workspaces in a cloud-based content management system and synchronization mechanisms thereof
US9652265B1 (en) 2011-08-10 2017-05-16 Nutanix, Inc. Architecture for managing I/O and storage for a virtualization environment with multiple hypervisor types
US9665349B2 (en) 2012-10-05 2017-05-30 Box, Inc. System and method for generating embeddable widgets which enable access to a cloud-based collaboration platform
US9691051B2 (en) 2012-05-21 2017-06-27 Box, Inc. Security enhancement through application access control
US9705967B2 (en) 2012-10-04 2017-07-11 Box, Inc. Corporate user discovery and identification of recommended collaborators in a cloud platform
US9712510B2 (en) 2012-07-06 2017-07-18 Box, Inc. Systems and methods for securely submitting comments among users via external messaging applications in a cloud-based platform
US9747287B1 (en) 2011-08-10 2017-08-29 Nutanix, Inc. Method and system for managing metadata for a virtualization environment
US9756022B2 (en) 2014-08-29 2017-09-05 Box, Inc. Enhanced remote key management for an enterprise in a cloud-based environment
US9772866B1 (en) 2012-07-17 2017-09-26 Nutanix, Inc. Architecture for implementing a virtualization environment and appliance
US9773051B2 (en) 2011-11-29 2017-09-26 Box, Inc. Mobile platform file and folder selection functionalities for offline access and synchronization
US9785518B2 (en) 2013-09-04 2017-10-10 Hytrust, Inc. Multi-threaded transaction log for primary and restore/intelligence
US9792320B2 (en) 2012-07-06 2017-10-17 Box, Inc. System and method for performing shard migration to support functions of a cloud-based service
US9794256B2 (en) 2012-07-30 2017-10-17 Box, Inc. System and method for advanced control tools for administrators in a cloud-based service
US9805050B2 (en) 2013-06-21 2017-10-31 Box, Inc. Maintaining and updating file system shadows on a local device by a synchronization client of a cloud-based platform
US9894119B2 (en) 2014-08-29 2018-02-13 Box, Inc. Configurable metadata-based automation and content classification architecture for cloud-based collaboration platforms
US9904435B2 (en) 2012-01-06 2018-02-27 Box, Inc. System and method for actionable event generation for task delegation and management via a discussion forum in a web-based collaboration environment
US9953036B2 (en) 2013-01-09 2018-04-24 Box, Inc. File system monitoring in a system which incrementally updates clients with events that occurred in a cloud-based collaboration platform
US9959420B2 (en) 2012-10-02 2018-05-01 Box, Inc. System and method for enhanced security and management mechanisms for enterprise administrators in a cloud-based environment
US9965745B2 (en) 2012-02-24 2018-05-08 Box, Inc. System and method for promoting enterprise adoption of a web-based collaboration environment
US9978040B2 (en) 2011-07-08 2018-05-22 Box, Inc. Collaboration sessions in a workspace on a cloud-based content management system
US10038731B2 (en) 2014-08-29 2018-07-31 Box, Inc. Managing flow-based interactions with cloud-based shared content
USRE47019E1 (en) 2010-07-14 2018-08-28 F5 Networks, Inc. Methods for DNSSEC proxying and deployment amelioration and systems thereof
US10078555B1 (en) * 2015-04-14 2018-09-18 EMC IP Holding Company LLC Synthetic full backups for incremental file backups
US10110656B2 (en) 2013-06-25 2018-10-23 Box, Inc. Systems and methods for providing shell communication in a cloud-based platform
US10182013B1 (en) 2014-12-01 2019-01-15 F5 Networks, Inc. Methods for managing progressive image delivery and devices thereof
US10200256B2 (en) 2012-09-17 2019-02-05 Box, Inc. System and method of a manipulative handle in an interactive mobile user interface
US10229134B2 (en) 2013-06-25 2019-03-12 Box, Inc. Systems and methods for managing upgrades, migration of user data and improving performance of a cloud-based platform
US10235383B2 (en) 2012-12-19 2019-03-19 Box, Inc. Method and apparatus for synchronization of items with read-only permissions in a cloud-based environment
US10321167B1 (en) 2016-01-21 2019-06-11 GrayMeta, Inc. Method and system for determining media file identifiers and likelihood of media file relationships
US10375155B1 (en) 2013-02-19 2019-08-06 F5 Networks, Inc. System and method for achieving hardware acceleration for asymmetric flow connections
US10404698B1 (en) 2016-01-15 2019-09-03 F5 Networks, Inc. Methods for adaptive organization of web application access points in webtops and devices thereof
US10412198B1 (en) 2016-10-27 2019-09-10 F5 Networks, Inc. Methods for improved transmission control protocol (TCP) performance visibility and devices thereof
US10452667B2 (en) 2012-07-06 2019-10-22 Box, Inc. Identification of people as search results from key-word based searches of content in a cloud-based environment
US10459891B2 (en) 2015-09-30 2019-10-29 Western Digital Technologies, Inc. Replicating data across data storage devices of a logical volume
US10467103B1 (en) 2016-03-25 2019-11-05 Nutanix, Inc. Efficient change block training
US10509527B2 (en) 2013-09-13 2019-12-17 Box, Inc. Systems and methods for configuring event-based automation in cloud-based collaboration platforms
US10530854B2 (en) 2014-05-30 2020-01-07 Box, Inc. Synchronization of permissioned content in cloud-based environments
US10554426B2 (en) 2011-01-20 2020-02-04 Box, Inc. Real time notification of activities that occur in a web-based collaboration environment
US10567492B1 (en) 2017-05-11 2020-02-18 F5 Networks, Inc. Methods for load balancing in a federated identity environment and devices thereof
US10574442B2 (en) 2014-08-29 2020-02-25 Box, Inc. Enhanced remote key management for an enterprise in a cloud-based environment
US10599671B2 (en) 2013-01-17 2020-03-24 Box, Inc. Conflict resolution, retry condition management, and handling of problem files for the synchronization client to a cloud-based platform
US10721269B1 (en) 2009-11-06 2020-07-21 F5 Networks, Inc. Methods and system for returning requests with javascript for clients before passing a request to a server
US10719492B1 (en) 2016-12-07 2020-07-21 GrayMeta, Inc. Automatic reconciliation and consolidation of disparate repositories
US10725968B2 (en) 2013-05-10 2020-07-28 Box, Inc. Top down delete or unsynchronization on delete of and depiction of item synchronization with a synchronization client to a cloud-based platform
US10797888B1 (en) 2016-01-20 2020-10-06 F5 Networks, Inc. Methods for secured SCEP enrollment for client devices and devices thereof
US10833943B1 (en) 2018-03-01 2020-11-10 F5 Networks, Inc. Methods for service chaining and devices thereof
US10834065B1 (en) 2015-03-31 2020-11-10 F5 Networks, Inc. Methods for SSL protected NTLM re-authentication and devices thereof
US10846074B2 (en) 2013-05-10 2020-11-24 Box, Inc. Identification and handling of items to be ignored for synchronization with a cloud-based platform by a synchronization client
US10866931B2 (en) 2013-10-22 2020-12-15 Box, Inc. Desktop application for accessing a cloud collaboration platform
US11169706B2 (en) * 2016-05-26 2021-11-09 Nutanix, Inc. Rebalancing storage I/O workloads by storage controller selection and redirection
US11210610B2 (en) 2011-10-26 2021-12-28 Box, Inc. Enhanced multimedia content preview rendering in a cloud content management system
US11223689B1 (en) 2018-01-05 2022-01-11 F5 Networks, Inc. Methods for multipath transmission control protocol (MPTCP) based session migration and devices thereof
US11232481B2 (en) 2012-01-30 2022-01-25 Box, Inc. Extended applications of multimedia content previews in the cloud-based content management system
US11838851B1 (en) 2014-07-15 2023-12-05 F5, Inc. Methods for managing L7 traffic classification and devices thereof
US11895138B1 (en) 2015-02-02 2024-02-06 F5, Inc. Methods for improving web scanner accuracy and devices thereof

Families Citing this family (262)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9361243B2 (en) 1998-07-31 2016-06-07 Kom Networks Inc. Method and system for providing restricted access to a storage medium
US7392234B2 (en) * 1999-05-18 2008-06-24 Kom, Inc. Method and system for electronic file lifecycle management
US7370071B2 (en) * 2000-03-17 2008-05-06 Microsoft Corporation Method for serving third party software applications from servers to client computers
US8099758B2 (en) * 1999-05-12 2012-01-17 Microsoft Corporation Policy based composite file system and method
US6895591B1 (en) * 1999-10-18 2005-05-17 Unisys Corporation Virtual file system and method
US7584228B1 (en) * 2001-07-18 2009-09-01 Swsoft Holdings, Ltd. System and method for duplication of virtual private server files
US6917343B2 (en) * 2001-09-19 2005-07-12 Titan Aerospace Electronics Division Broadband antennas over electronically reconfigurable artificial magnetic conductor surfaces
US7921288B1 (en) 2001-12-12 2011-04-05 Hildebrand Hal S System and method for providing different levels of key security for controlling access to secured items
US7478418B2 (en) * 2001-12-12 2009-01-13 Guardian Data Storage, Llc Guaranteed delivery of changes to security policies in a distributed system
US7260555B2 (en) 2001-12-12 2007-08-21 Guardian Data Storage, Llc Method and architecture for providing pervasive security to digital assets
US7783765B2 (en) * 2001-12-12 2010-08-24 Hildebrand Hal S System and method for providing distributed access control to secured documents
US10360545B2 (en) 2001-12-12 2019-07-23 Guardian Data Storage, Llc Method and apparatus for accessing secured electronic data off-line
US10033700B2 (en) 2001-12-12 2018-07-24 Intellectual Ventures I Llc Dynamic evaluation of access rights
US7930756B1 (en) 2001-12-12 2011-04-19 Crocker Steven Toye Multi-level cryptographic transformations for securing digital assets
US7681034B1 (en) 2001-12-12 2010-03-16 Chang-Ping Lee Method and apparatus for securing electronic data
USRE41546E1 (en) 2001-12-12 2010-08-17 Klimenty Vainstein Method and system for managing security tiers
US7921450B1 (en) 2001-12-12 2011-04-05 Klimenty Vainstein Security system using indirect key generation from access rules and methods therefor
US8006280B1 (en) 2001-12-12 2011-08-23 Hildebrand Hal S Security system for generating keys from access rules in a decentralized manner and methods therefor
US7565683B1 (en) 2001-12-12 2009-07-21 Weiqing Huang Method and system for implementing changes to security policies in a distributed security system
US7380120B1 (en) 2001-12-12 2008-05-27 Guardian Data Storage, Llc Secured data format for access control
US8065713B1 (en) 2001-12-12 2011-11-22 Klimenty Vainstein System and method for providing multi-location access management to secured items
US7178033B1 (en) 2001-12-12 2007-02-13 Pss Systems, Inc. Method and apparatus for securing digital assets
US7562232B2 (en) * 2001-12-12 2009-07-14 Patrick Zuili System and method for providing manageability to security information for secured items
US7921284B1 (en) 2001-12-12 2011-04-05 Gary Mark Kinghorn Method and system for protecting electronic data in enterprise environment
US7950066B1 (en) 2001-12-21 2011-05-24 Guardian Data Storage, Llc Method and system for restricting use of a clipboard application
US7360034B1 (en) * 2001-12-28 2008-04-15 Network Appliance, Inc. Architecture for creating and maintaining virtual filers on a filer
US8176334B2 (en) 2002-09-30 2012-05-08 Guardian Data Storage, Llc Document security system that permits external users to gain access to secured files
US7194519B1 (en) * 2002-03-15 2007-03-20 Network Appliance, Inc. System and method for administering a filer having a plurality of virtual filers
US8613102B2 (en) 2004-03-30 2013-12-17 Intellectual Ventures I Llc Method and system for providing document retention using cryptography
US20030229689A1 (en) * 2002-06-06 2003-12-11 Microsoft Corporation Method and system for managing stored data on a computer network
US7003527B1 (en) * 2002-06-27 2006-02-21 Emc Corporation Methods and apparatus for managing devices within storage area networks
JP4240930B2 (en) * 2002-07-15 2009-03-18 株式会社日立製作所 Method and apparatus for unifying temporary transmission of multiple network storages
JP2004054721A (en) * 2002-07-23 2004-02-19 Hitachi Ltd Network storage virtualization method
CA2398043C (en) * 2002-08-27 2004-10-05 Kevin W. Jameson Collection view expander
US20040044692A1 (en) * 2002-08-27 2004-03-04 Jameson Kevin Wade Collection storage system
US7146389B2 (en) * 2002-08-30 2006-12-05 Hitachi, Ltd. Method for rebalancing free disk space among network storages virtualized into a single file system view
US7512810B1 (en) 2002-09-11 2009-03-31 Guardian Data Storage Llc Method and system for protecting encrypted files transmitted over a network
JP2004110367A (en) * 2002-09-18 2004-04-08 Hitachi Ltd Storage system control method, storage control device, and storage system
US7836310B1 (en) 2002-11-01 2010-11-16 Yevgeniy Gutnik Security system that uses indirect password-based encryption
US7263593B2 (en) * 2002-11-25 2007-08-28 Hitachi, Ltd. Virtualization controller and data transfer control method
WO2004055675A1 (en) * 2002-12-18 2004-07-01 Fujitsu Limited File management apparatus, file management program, file management method, and file system
US7689715B1 (en) * 2002-12-20 2010-03-30 Symantec Operating Corporation Method and system for implementing a global name space service
US7890990B1 (en) 2002-12-20 2011-02-15 Klimenty Vainstein Security system with staging capabilities
US20040139141A1 (en) * 2002-12-31 2004-07-15 Lessard Michael R. Integration of virtual data within a host operating environment
US7167872B2 (en) * 2003-01-08 2007-01-23 Harris Corporation Efficient file interface and method for providing access to files using a JTRS SCA core framework
US7877511B1 (en) 2003-01-13 2011-01-25 F5 Networks, Inc. Method and apparatus for adaptive services networking
JP2004220450A (en) * 2003-01-16 2004-08-05 Hitachi Ltd Storage device, its introduction method and its introduction program
JP4237515B2 (en) * 2003-02-07 2009-03-11 株式会社日立グローバルストレージテクノロジーズ Network storage virtualization method and network storage system
US7865536B1 (en) * 2003-02-14 2011-01-04 Google Inc. Garbage collecting systems and methods
US20040230679A1 (en) * 2003-02-28 2004-11-18 Bales Christopher E. Systems and methods for portal and web server administration
JP4320195B2 (en) 2003-03-19 2009-08-26 株式会社日立製作所 File storage service system, file management apparatus, file management method, ID designation type NAS server, and file reading method
US7409644B2 (en) * 2003-05-16 2008-08-05 Microsoft Corporation File system shell
US7421438B2 (en) 2004-04-29 2008-09-02 Microsoft Corporation Metadata editing control
US7188316B2 (en) * 2003-03-24 2007-03-06 Microsoft Corporation System and method for viewing and editing multi-value properties
US7712034B2 (en) * 2003-03-24 2010-05-04 Microsoft Corporation System and method for shell browser
US7627552B2 (en) * 2003-03-27 2009-12-01 Microsoft Corporation System and method for filtering and organizing items based on common elements
US7245819B1 (en) * 2003-03-24 2007-07-17 Microsoft Corporation Cross-file DVR record padding playback
US7747660B1 (en) * 2003-03-24 2010-06-29 Symantec Operating Corporation Method and system of providing access to a virtual storage device
US7240292B2 (en) * 2003-04-17 2007-07-03 Microsoft Corporation Virtual address bar user interface control
US7823077B2 (en) 2003-03-24 2010-10-26 Microsoft Corporation System and method for user modification of metadata in a shell browser
US7234114B2 (en) * 2003-03-24 2007-06-19 Microsoft Corporation Extensible object previewer in a shell browser
US7769794B2 (en) 2003-03-24 2010-08-03 Microsoft Corporation User interface for a file system shell
US7827561B2 (en) 2003-03-26 2010-11-02 Microsoft Corporation System and method for public consumption of communication events between arbitrary processes
US7890960B2 (en) 2003-03-26 2011-02-15 Microsoft Corporation Extensible user context system for delivery of notifications
US7536386B2 (en) * 2003-03-27 2009-05-19 Microsoft Corporation System and method for sharing items in a computer system
US7650575B2 (en) 2003-03-27 2010-01-19 Microsoft Corporation Rich drag drop user interface
US7925682B2 (en) 2003-03-27 2011-04-12 Microsoft Corporation System and method utilizing virtual folders
US7219206B1 (en) * 2003-04-11 2007-05-15 Sun Microsystems, Inc. File system virtual memory descriptor generation interface system and method
US20050005018A1 (en) * 2003-05-02 2005-01-06 Anindya Datta Method and apparatus for performing application virtualization
US8707034B1 (en) 2003-05-30 2014-04-22 Intellectual Ventures I Llc Method and system for using remote headers to secure electronic files
JP2005018193A (en) 2003-06-24 2005-01-20 Hitachi Ltd Interface command control method for disk device, and computer system
US7325097B1 (en) * 2003-06-26 2008-01-29 Emc Corporation Method and apparatus for distributing a logical volume of storage for shared access by multiple host computers
US7340739B2 (en) * 2003-06-27 2008-03-04 International Business Machines Corporation Automatic configuration of a server
US7730543B1 (en) 2003-06-30 2010-06-01 Satyajit Nath Method and system for enabling users of a group shared across multiple file security systems to access secured files
US7996361B1 (en) * 2003-06-30 2011-08-09 Symantec Operating Corporation Method and system of providing replica files within a fileset
TW200511029A (en) * 2003-07-24 2005-03-16 Matsushita Electric Ind Co Ltd File management method and data processing device
US20050027938A1 (en) * 2003-07-29 2005-02-03 Xiotech Corporation Method, apparatus and program storage device for dynamically resizing mirrored virtual disks in a RAID storage system
US20050108486A1 (en) * 2003-08-05 2005-05-19 Miklos Sandorfi Emulated storage system supporting instant volume restore
US7146476B2 (en) * 2003-08-05 2006-12-05 Sepaton, Inc. Emulated storage system
US8938595B2 (en) * 2003-08-05 2015-01-20 Sepaton, Inc. Emulated storage system
US20050193235A1 (en) * 2003-08-05 2005-09-01 Miklos Sandorfi Emulated storage system
US8386272B2 (en) * 2003-08-06 2013-02-26 International Business Machines Corporation Autonomic assistance for policy generation
US20050044523A1 (en) * 2003-08-20 2005-02-24 International Business Machines Corporation Method and system for compiling Java code with referenced classes in a workspace environment
US8776050B2 (en) * 2003-08-20 2014-07-08 Oracle International Corporation Distributed virtual machine monitor for managing multiple virtual resources across multiple physical nodes
US20050044301A1 (en) * 2003-08-20 2005-02-24 Vasilevsky Alexander David Method and apparatus for providing virtual computing services
JP4437650B2 (en) * 2003-08-25 2010-03-24 株式会社日立製作所 Storage system
JP4349871B2 (en) * 2003-09-09 2009-10-21 株式会社日立製作所 File sharing apparatus and data migration method between file sharing apparatuses
JP4386694B2 (en) * 2003-09-16 2009-12-16 株式会社日立製作所 Storage system and storage control device
CN1286018C (en) * 2003-09-29 2006-11-22 摩托罗拉公司 Method for locating files in logical file systems
JP4307202B2 (en) * 2003-09-29 2009-08-05 株式会社日立製作所 Storage system and storage control device
US7703140B2 (en) 2003-09-30 2010-04-20 Guardian Data Storage, Llc Method and system for securing digital assets using process-driven security policies
US8127366B2 (en) 2003-09-30 2012-02-28 Guardian Data Storage, Llc Method and apparatus for transitioning between states of security policies used to secure electronic documents
US20050188174A1 (en) * 2003-10-12 2005-08-25 Microsoft Corporation Extensible creation and editing of collections of objects
JP4257783B2 (en) * 2003-10-23 2009-04-22 株式会社日立製作所 Logically partitionable storage device and storage device system
US8024335B2 (en) 2004-05-03 2011-09-20 Microsoft Corporation System and method for dynamically generating a selectable search extension
US7181463B2 (en) 2003-10-24 2007-02-20 Microsoft Corporation System and method for managing data using static lists
US7243089B2 (en) * 2003-11-25 2007-07-10 International Business Machines Corporation System, method, and service for federating and optionally migrating a local file system into a distributed file system while preserving local access to existing data
JP2005202893A (en) * 2004-01-19 2005-07-28 Hitachi Ltd Storage device controller, storage system, recording medium recording program, information processor, and method for controlling storage system
US7814131B1 (en) * 2004-02-02 2010-10-12 Network Appliance, Inc. Aliasing of exported paths in a storage system
US7627617B2 (en) * 2004-02-11 2009-12-01 Storage Technology Corporation Clustered hierarchical file services
JP2005228170A (en) 2004-02-16 2005-08-25 Hitachi Ltd Storage device system
US7133988B2 (en) 2004-02-25 2006-11-07 Hitachi, Ltd. Method and apparatus for managing direct I/O to storage systems in virtualization
US7844646B1 (en) * 2004-03-12 2010-11-30 Netapp, Inc. Method and apparatus for representing file system metadata within a database for efficient queries
US7630994B1 (en) 2004-03-12 2009-12-08 Netapp, Inc. On the fly summarization of file walk data
US7293039B1 (en) 2004-03-12 2007-11-06 Network Appliance, Inc. Storage resource management across multiple paths
US7539702B2 (en) * 2004-03-12 2009-05-26 Netapp, Inc. Pre-summarization and analysis of results generated by an agent
JP2005267008A (en) * 2004-03-17 2005-09-29 Hitachi Ltd Method and system for storage management
US7480789B1 (en) * 2004-03-29 2009-01-20 Xilinx, Inc. Virtual file system interface to configuration data of a PLD
US7657846B2 (en) * 2004-04-23 2010-02-02 Microsoft Corporation System and method for displaying stack icons
US7694236B2 (en) * 2004-04-23 2010-04-06 Microsoft Corporation Stack icons representing multiple objects
US20050240878A1 (en) * 2004-04-26 2005-10-27 Microsoft Corporation System and method for scaling icons
US7992103B2 (en) * 2004-04-26 2011-08-02 Microsoft Corporation Scaling icons for representing files
US8707209B2 (en) 2004-04-29 2014-04-22 Microsoft Corporation Save preview representation of files being created
US7430571B2 (en) * 2004-04-30 2008-09-30 Network Appliance, Inc. Extension of write anywhere file layout write allocation
US8108430B2 (en) 2004-04-30 2012-01-31 Microsoft Corporation Carousel control for metadata navigation and assignment
US7409494B2 (en) * 2004-04-30 2008-08-05 Network Appliance, Inc. Extension of write anywhere file system layout
US7392261B2 (en) * 2004-05-20 2008-06-24 International Business Machines Corporation Method, system, and program for maintaining a namespace of filesets accessible to clients over a network
US20060010301A1 (en) * 2004-07-06 2006-01-12 Hitachi, Ltd. Method and apparatus for file guard and file shredding
US7131027B2 (en) 2004-07-09 2006-10-31 Hitachi, Ltd. Method and apparatus for disk array based I/O routing and multi-layered external storage linkage
US7206790B2 (en) * 2004-07-13 2007-04-17 Hitachi, Ltd. Data management system
US7707427B1 (en) 2004-07-19 2010-04-27 Michael Frederick Kenrich Multi-level file digests
US7765243B2 (en) * 2004-07-26 2010-07-27 Sandisk Il Ltd. Unified local-remote logical volume
US7515589B2 (en) * 2004-08-27 2009-04-07 International Business Machines Corporation Method and apparatus for providing network virtualization
JP4646574B2 (en) * 2004-08-30 2011-03-09 株式会社日立製作所 Data processing system
US7991783B2 (en) * 2004-10-05 2011-08-02 International Business Machines Corporation Apparatus, system, and method for supporting storage functions using an embedded database management system
US20060075199A1 (en) * 2004-10-06 2006-04-06 Mahesh Kallahalla Method of providing storage to virtual computer cluster within shared computing environment
US7620984B2 (en) * 2004-10-06 2009-11-17 Hewlett-Packard Development Company, L.P. Method of managing computer system
US8095928B2 (en) * 2004-10-06 2012-01-10 Hewlett-Packard Development Company, L.P. Method of forming virtual computer cluster within shared computing environment
US7664796B2 (en) * 2004-10-13 2010-02-16 Microsoft Corporation Electronic labeling for offline management of storage devices
US7581036B2 (en) * 2004-10-13 2009-08-25 Microsoft Corporation Offline caching of control transactions for storage devices
US7730277B1 (en) * 2004-10-25 2010-06-01 Netapp, Inc. System and method for using pvbn placeholders in a flexible volume of a storage system
JP2006127028A (en) 2004-10-27 2006-05-18 Hitachi Ltd Memory system and storage controller
US9165003B1 (en) * 2004-11-29 2015-10-20 Netapp, Inc. Technique for permitting multiple virtual file systems having the same identifier to be served by a single storage system
JP4341072B2 (en) * 2004-12-16 2009-10-07 日本電気株式会社 Data arrangement management method, system, apparatus and program
US20060161752A1 (en) * 2005-01-18 2006-07-20 Burkey Todd R Method, apparatus and program storage device for providing adaptive, attribute driven, closed-loop storage management configuration and control
US7424497B1 (en) * 2005-01-27 2008-09-09 Network Appliance, Inc. Technique for accelerating the creation of a point in time representation of a virtual file system
US7941602B2 (en) * 2005-02-10 2011-05-10 Xiotech Corporation Method, apparatus and program storage device for providing geographically isolated failover using instant RAID swapping in mirrored virtual disks
US20060218360A1 (en) * 2005-03-22 2006-09-28 Burkey Todd R Method, apparatus and program storage device for providing an optimized read methodology for synchronously mirrored virtual disk pairs
US8490015B2 (en) * 2005-04-15 2013-07-16 Microsoft Corporation Task dialog and programming interface for same
US20060236244A1 (en) * 2005-04-15 2006-10-19 Microsoft Corporation Command links
US20060236253A1 (en) * 2005-04-15 2006-10-19 Microsoft Corporation Dialog user interfaces for related tasks and programming interface for same
US7647359B1 (en) * 2005-04-20 2010-01-12 Novell, Inc. Techniques for file system translation
US8195646B2 (en) 2005-04-22 2012-06-05 Microsoft Corporation Systems, methods, and user interfaces for storing, searching, navigating, and retrieving electronic information
US8522154B2 (en) * 2005-04-22 2013-08-27 Microsoft Corporation Scenario specialization of file browser
US20080172445A1 (en) * 2005-07-09 2008-07-17 Netbarrage Method and System For Increasing Popularity of Content Items Shared Over Peer-to-Peer Networks
US7665028B2 (en) * 2005-07-13 2010-02-16 Microsoft Corporation Rich drag drop user interface
US8447781B2 (en) * 2005-07-29 2013-05-21 International Business Machines Corporation Content-based file system security
EP1934794B1 (en) * 2005-09-15 2017-08-02 CA, Inc. Apparatus, method and system for rapid delivery of distributed applications
US8938594B2 (en) * 2005-11-04 2015-01-20 Oracle America, Inc. Method and system for metadata-based resilvering
US20070162510A1 (en) * 2005-12-30 2007-07-12 Microsoft Corporation Delayed file virtualization
JP2007206931A (en) * 2006-02-01 2007-08-16 Hitachi Ltd Storage system, data processing method and storage device
US20070220029A1 (en) * 2006-03-17 2007-09-20 Novell, Inc. System and method for hierarchical storage management using shadow volumes
JP4492569B2 (en) * 2006-03-20 2010-06-30 日本電気株式会社 File operation control device, file operation control system, file operation control method, and file operation control program
US7899780B1 (en) * 2006-03-30 2011-03-01 Emc Corporation Methods and apparatus for structured partitioning of management information
JP4912026B2 (en) * 2006-04-27 2012-04-04 キヤノン株式会社 Information processing apparatus and information processing method
US20080022120A1 (en) * 2006-06-05 2008-01-24 Michael Factor System, Method and Computer Program Product for Secure Access Control to a Storage Device
US7783686B2 (en) * 2006-06-16 2010-08-24 Microsoft Corporation Application program interface to manage media files
US7769779B2 (en) * 2006-11-02 2010-08-03 Microsoft Corporation Reverse name mappings in restricted namespace environments
US9946791B1 (en) * 2006-11-21 2018-04-17 Google Llc Making modified content available
US20080154986A1 (en) * 2006-12-22 2008-06-26 Storage Technology Corporation System and Method for Compression of Data Objects in a Data Storage System
US20080168224A1 (en) * 2007-01-09 2008-07-10 Ibm Corporation Data protection via software configuration of multiple disk drives
JP4919851B2 (en) 2007-03-23 2012-04-18 株式会社日立製作所 Intermediate device for file level virtualization
US7730260B2 (en) * 2007-04-20 2010-06-01 International Business Machines Corporation Delete recycling of holographic data storage
JP2008269300A (en) * 2007-04-20 2008-11-06 Hitachi Ltd Computer system, intermediate node and log management method
US7660948B2 (en) * 2007-04-20 2010-02-09 International Business Machines Corporation Arranging and destaging data to holographic storage
US7689769B2 (en) * 2007-04-20 2010-03-30 International Business Machines Corporation Arranging and destaging data to holographic storage
US7925749B1 (en) * 2007-04-24 2011-04-12 Netapp, Inc. System and method for transparent data replication over migrating virtual servers
US9110920B1 (en) * 2007-05-03 2015-08-18 Emc Corporation CIFS access to NFS files and directories by translating NFS file handles into pseudo-pathnames
US8819344B1 (en) 2007-08-09 2014-08-26 Emc Corporation Shared storage access load balancing for a large number of hosts
US7970943B2 (en) * 2007-08-14 2011-06-28 Oracle International Corporation Providing interoperability in software identifier standards
US8271911B1 (en) 2007-09-13 2012-09-18 Xilinx, Inc. Programmable hardware event reporting
US8006111B1 (en) 2007-09-21 2011-08-23 Emc Corporation Intelligent file system based power management for shared storage that migrates groups of files based on inactivity threshold
US9569443B1 (en) * 2007-09-28 2017-02-14 Symantec Corporation Method and apparatus for providing access to data in unsupported file systems and storage containers
US8903772B1 (en) 2007-10-25 2014-12-02 Emc Corporation Direct or indirect mapping policy for data blocks of a file in a file system
US20090112919A1 (en) * 2007-10-26 2009-04-30 Qlayer Nv Method and system to model and create a virtual private datacenter
WO2009064720A2 (en) * 2007-11-12 2009-05-22 Attune Systems, Inc. Load sharing, file migration, network configuration, and file deduplication using file virtualization
US20090210647A1 (en) 2008-02-15 2009-08-20 Madhusudanan Kandasamy Method for dynamically resizing file systems
US8140807B2 (en) * 2008-02-15 2012-03-20 International Business Machines Corporation System and computer program product for dynamically resizing file systems
US7890916B1 (en) 2008-03-25 2011-02-15 Xilinx, Inc. Debugging using a virtual file system interface
US7971013B2 (en) * 2008-04-30 2011-06-28 Xiotech Corporation Compensating for write speed differences between mirroring storage devices by striping
US8577845B2 (en) * 2008-06-13 2013-11-05 Symantec Operating Corporation Remote, granular restore from full virtual machine backup
US20100011176A1 (en) * 2008-07-11 2010-01-14 Burkey Todd R Performance of binary bulk IO operations on virtual disks by interleaving
US20100011371A1 (en) * 2008-07-11 2010-01-14 Burkey Todd R Performance of unary bulk IO operations on virtual disks by interleaving
US20100070544A1 (en) * 2008-09-12 2010-03-18 Microsoft Corporation Virtual block-level storage over a file system
US9213721B1 (en) 2009-01-05 2015-12-15 Emc Corporation File server system having tiered storage including solid-state drive primary storage and magnetic disk drive secondary storage
US9178935B2 (en) * 2009-03-05 2015-11-03 Paypal, Inc. Distributed steam processing
US8271557B1 (en) 2009-04-20 2012-09-18 Xilinx, Inc. Configuration of a large-scale reconfigurable computing arrangement using a virtual file system interface
US8032498B1 (en) 2009-06-29 2011-10-04 Emc Corporation Delegated reference count base file versioning
US9959131B2 (en) * 2009-08-03 2018-05-01 Quantum Corporation Systems and methods for providing a file system viewing of a storage environment
US8473531B2 (en) * 2009-09-03 2013-06-25 Quantum Corporation Presenting a file system for a file containing items
US8533650B2 (en) * 2009-09-17 2013-09-10 Cadence Design Systems, Inc. Annotation management for hierarchical designs of integrated circuits
US20110072036A1 (en) * 2009-09-23 2011-03-24 Microsoft Corporation Page-based content storage system
US8321648B2 (en) 2009-10-26 2012-11-27 Netapp, Inc. Use of similarity hash to route data for improved deduplication in a storage server cluster
US8417705B2 (en) 2009-10-30 2013-04-09 International Business Machines Corporation Graphically displaying a file system
GB0921669D0 (en) * 2009-12-10 2010-01-27 Chesterdeal Ltd Accessing stored electronic resources
US8495028B2 (en) * 2010-01-25 2013-07-23 Sepaton, Inc. System and method for data driven de-duplication
US9495119B1 (en) 2010-07-08 2016-11-15 EMC IP Holding Company LLC Static load balancing for file systems in a multipath I/O environment
US8527558B2 (en) 2010-09-15 2013-09-03 Sepaton, Inc. Distributed garbage collection
MY177055A (en) * 2010-12-02 2020-09-03 Mimos Berhad System architecture with cluster file for virtualization hosting environment
WO2012094330A1 (en) 2011-01-03 2012-07-12 Planetary Data LLC Community internet drive
US9122639B2 (en) 2011-01-25 2015-09-01 Sepaton, Inc. Detection and deduplication of backup sets exhibiting poor locality
US9721033B2 (en) 2011-02-28 2017-08-01 Micro Focus Software Inc. Social networking content management
US8423585B2 (en) * 2011-03-14 2013-04-16 Amazon Technologies, Inc. Variants of files in a file system
US8943019B1 (en) * 2011-04-13 2015-01-27 Symantec Corporation Lookup optimization during online file system migration
US8789146B2 (en) * 2011-04-14 2014-07-22 Yubico Inc. Dual interface device for access control and a method therefor
US8769531B2 (en) * 2011-05-25 2014-07-01 International Business Machines Corporation Optimizing the configuration of virtual machine instances in a networked computing environment
CN102289513A (en) * 2011-09-05 2011-12-21 盛乐信息技术(上海)有限公司 Method and system for obtaining internal files of virtual machine
CN102394935A (en) * 2011-11-10 2012-03-28 方正国际软件有限公司 Wireless shared storage system and wireless shared storage method thereof
US9171178B1 (en) * 2012-05-14 2015-10-27 Symantec Corporation Systems and methods for optimizing security controls for virtual data centers
US9229657B1 (en) 2012-11-01 2016-01-05 Quantcast Corporation Redistributing data in a distributed storage system based on attributes of the data
JP5701846B2 (en) * 2012-11-28 2015-04-15 京セラドキュメントソリューションズ株式会社 Image forming apparatus
US9811529B1 (en) * 2013-02-06 2017-11-07 Quantcast Corporation Automatically redistributing data of multiple file systems in a distributed storage system
US9792295B1 (en) * 2013-02-06 2017-10-17 Quantcast Corporation Distributing data of multiple logically independent file systems in distributed storage systems including physically partitioned disks
US9766832B2 (en) 2013-03-15 2017-09-19 Hitachi Data Systems Corporation Systems and methods of locating redundant data using patterns of matching fingerprints
US9256611B2 (en) 2013-06-06 2016-02-09 Sepaton, Inc. System and method for multi-scale navigation of data
US10013217B1 (en) * 2013-06-28 2018-07-03 EMC IP Holding Company LLC Upper deck file system shrink for directly and thinly provisioned lower deck file system in which upper deck file system is stored in a volume file within lower deck file system where both upper deck file system and lower deck file system resides in storage processor memory
US9432457B2 (en) * 2013-08-30 2016-08-30 Citrix Systems, Inc. Redirecting local storage to cloud storage
US9678973B2 (en) 2013-10-15 2017-06-13 Hitachi Data Systems Corporation Multi-node hybrid deduplication
CN104571935A (en) * 2013-10-18 2015-04-29 宇宙互联有限公司 Global scheduling system and method
WO2015151113A1 (en) * 2014-04-02 2015-10-08 Hewlett-Packard Development Company, L.P. Direct access to network file system exported share
US10542049B2 (en) 2014-05-09 2020-01-21 Nutanix, Inc. Mechanism for providing external access to a secured networked virtualization environment
EP2975537A1 (en) * 2014-07-17 2016-01-20 Tiger Technology AD Merging multiple heterogeneous file systems into a single unified file system
CN104408091B (en) * 2014-11-11 2019-03-01 清华大学 Data storage method and system for a distributed file system
US20170004131A1 (en) * 2015-07-01 2017-01-05 Weka.IO LTD Virtual File System Supporting Multi-Tiered Storage
CN105511810A (en) * 2015-12-07 2016-04-20 中国建设银行股份有限公司 Control method and device of virtualization resource pool
US9733834B1 (en) 2016-01-28 2017-08-15 Weka.IO Ltd. Congestion mitigation in a distributed storage system
US10133516B2 (en) 2016-01-28 2018-11-20 Weka.IO Ltd. Quality of service management in a distributed storage system
US11669320B2 (en) 2016-02-12 2023-06-06 Nutanix, Inc. Self-healing virtualized file server
GB201604070D0 (en) 2016-03-09 2016-04-20 Ibm On-premise and off-premise communication
US10423581B1 (en) * 2016-03-30 2019-09-24 EMC IP Holding Company LLC Data storage system employing file space reclaim without data movement
US11218418B2 (en) 2016-05-20 2022-01-04 Nutanix, Inc. Scalable leadership election in a multi-processing computing environment
US10594770B2 (en) * 2016-11-01 2020-03-17 International Business Machines Corporation On-premises and off-premises communication
US10824455B2 (en) * 2016-12-02 2020-11-03 Nutanix, Inc. Virtualized server systems and methods including load balancing for virtualized file servers
US11562034B2 (en) 2016-12-02 2023-01-24 Nutanix, Inc. Transparent referrals for distributed file servers
US10728090B2 (en) 2016-12-02 2020-07-28 Nutanix, Inc. Configuring network segmentation for a virtualization environment
US11568073B2 (en) 2016-12-02 2023-01-31 Nutanix, Inc. Handling permissions for virtualized file servers
US11294777B2 (en) 2016-12-05 2022-04-05 Nutanix, Inc. Disaster recovery for distributed file servers, including metadata fixers
US11288239B2 (en) 2016-12-06 2022-03-29 Nutanix, Inc. Cloning virtualized file servers
US11281484B2 (en) 2016-12-06 2022-03-22 Nutanix, Inc. Virtualized server systems and methods including scaling of file system virtual machines
US10585860B2 (en) 2017-01-03 2020-03-10 International Business Machines Corporation Global namespace for a hierarchical set of file systems
US10579598B2 (en) * 2017-01-03 2020-03-03 International Business Machines Corporation Global namespace for a hierarchical set of file systems
US10579587B2 (en) * 2017-01-03 2020-03-03 International Business Machines Corporation Space management for a hierarchical set of file systems
US10657102B2 (en) 2017-01-03 2020-05-19 International Business Machines Corporation Storage space management in union mounted file systems
US10592479B2 (en) * 2017-01-03 2020-03-17 International Business Machines Corporation Space management for a hierarchical set of file systems
US10649955B2 (en) 2017-01-03 2020-05-12 International Business Machines Corporation Providing unique inodes across multiple file system namespaces
US10936405B2 (en) 2017-11-13 2021-03-02 Weka.IO Ltd. Efficient networking for a distributed storage system
US11216210B2 (en) 2017-11-13 2022-01-04 Weka.IO Ltd. Flash registry with on-disk hashing
US11782875B2 (en) 2017-11-13 2023-10-10 Weka.IO Ltd. Directory structure for a distributed storage system
US11301433B2 (en) 2017-11-13 2022-04-12 Weka.IO Ltd. Metadata journal in a distributed storage system
US11385980B2 (en) 2017-11-13 2022-07-12 Weka.IO Ltd. Methods and systems for rapid failure recovery for a distributed storage system
US11262912B2 (en) 2017-11-13 2022-03-01 Weka.IO Ltd. File operations in a distributed storage system
US11061622B2 (en) 2017-11-13 2021-07-13 Weka.IO Ltd. Tiering data strategy for a distributed storage system
US11561860B2 (en) 2017-11-13 2023-01-24 Weka.IO Ltd. Methods and systems for power failure resistance for a distributed storage system
US11086826B2 (en) 2018-04-30 2021-08-10 Nutanix, Inc. Virtualized server systems and methods including domain joining techniques
US10977218B1 (en) * 2018-05-18 2021-04-13 Amazon Technologies, Inc. Distributed application development
US11194680B2 (en) 2018-07-20 2021-12-07 Nutanix, Inc. Two node clusters recovery on a failure
US11770447B2 (en) 2018-10-31 2023-09-26 Nutanix, Inc. Managing high-availability file servers
US11204890B2 (en) 2018-12-27 2021-12-21 EMC IP Holding Company LLC System and method for archiving data in a decentralized data protection system
US11687471B2 (en) * 2020-03-27 2023-06-27 Sk Hynix Nand Product Solutions Corp. Solid state drive with external software execution to effect internal solid-state drive operations
US11768809B2 (en) 2020-05-08 2023-09-26 Nutanix, Inc. Managing incremental snapshots for fast leader node bring-up
US20220237191A1 (en) * 2021-01-25 2022-07-28 Salesforce.Com, Inc. System and method for supporting very large data sets in databases
CN114282214B (en) * 2021-12-17 2022-10-21 北京天融信网络安全技术有限公司 Virus checking and killing method and device and electronic equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1010076A1 (en) * 1996-11-27 2000-06-21 1Vision Software, L.L.C. File directory and file navigation system
US6061692A (en) * 1997-11-04 2000-05-09 Microsoft Corporation System and method for administering a meta database as an integral component of an information server
US6351773B1 (en) * 1998-12-21 2002-02-26 3Com Corporation Methods for restricting access of network devices to subscription services in a data-over-cable system
US6678700B1 (en) * 2000-04-27 2004-01-13 General Atomics System of and method for transparent management of data objects in containers across distributed heterogenous resources
US6745207B2 (en) * 2000-06-02 2004-06-01 Hewlett-Packard Development Company, L.P. System and method for managing virtual storage

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5566339A (en) * 1992-10-23 1996-10-15 Fox Network Systems, Inc. System and method for monitoring computer environment and operation
US5991753A (en) * 1993-06-16 1999-11-23 Lachman Technology, Inc. Method and system for computer file management, including file migration, special handling, and associating extended attributes with files
US6085262A (en) * 1994-04-25 2000-07-04 Sony Corporation Hierarchical data storage processing apparatus for partitioning resource across the storage hierarchy
US5778384A (en) * 1995-12-22 1998-07-07 Sun Microsystems, Inc. System and method for automounting and accessing remote file systems in Microsoft Windows in a networking environment
US7475199B1 (en) * 2000-10-19 2009-01-06 Emc Corporation Scalable network file system
US20030188109A1 (en) * 2002-03-28 2003-10-02 Yasuo Yamasaki Information processing system
US20050114595A1 (en) * 2003-11-26 2005-05-26 Veritas Operating Corporation System and method for emulating operating system metadata to provide cross-platform access to storage volumes

Cited By (218)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090234856A1 (en) * 2001-01-11 2009-09-17 F5 Networks, Inc. Aggregated opportunistic lock and aggregated implicit lock management for locking aggregated files in a switched file system
USRE43346E1 (en) 2001-01-11 2012-05-01 F5 Networks, Inc. Transaction aggregation in a switched file system
US8195769B2 (en) 2001-01-11 2012-06-05 F5 Networks, Inc. Rule based aggregation of files and transactions in a switched file system
US8195760B2 (en) 2001-01-11 2012-06-05 F5 Networks, Inc. File aggregation in a switched file system
US8005953B2 (en) 2001-01-11 2011-08-23 F5 Networks, Inc. Aggregated opportunistic lock and aggregated implicit lock management for locking aggregated files in a switched file system
US8417681B1 (en) 2001-01-11 2013-04-09 F5 Networks, Inc. Aggregated lock management for locking aggregated files in a switched file system
US8396895B2 (en) 2001-01-11 2013-03-12 F5 Networks, Inc. Directory aggregation for files distributed over a plurality of servers in a switched file system
US20110087696A1 (en) * 2005-01-20 2011-04-14 F5 Networks, Inc. Scalable system for partitioning and accessing metadata over multiple servers
US8433735B2 (en) 2005-01-20 2013-04-30 F5 Networks, Inc. Scalable system for partitioning and accessing metadata over multiple servers
US8397059B1 (en) 2005-02-04 2013-03-12 F5 Networks, Inc. Methods and apparatus for implementing authentication
US8239354B2 (en) 2005-03-03 2012-08-07 F5 Networks, Inc. System and method for managing small-size files in an aggregated file system
US7711908B2 (en) * 2006-02-13 2010-05-04 Hitachi, Ltd. Virtual storage system for virtualizing a plurality of storage systems logically into a single storage resource provided to a host computer
US8595436B2 (en) 2006-02-13 2013-11-26 Hitachi, Ltd. Virtual storage system and control method thereof
US20070192561A1 (en) * 2006-02-13 2007-08-16 Ai Satoyama Virtual storage system and control method thereof
US8161239B2 (en) 2006-02-13 2012-04-17 Hitachi, Ltd. Optimized computer system providing functions of a virtual storage system
US20070214384A1 (en) * 2006-03-07 2007-09-13 Manabu Kitamura Method for backing up data in a clustered file system
US20070214183A1 (en) * 2006-03-08 2007-09-13 Omneon Video Networks Methods for dynamic partitioning of a redundant data fabric
US8285817B1 (en) * 2006-03-20 2012-10-09 Netapp, Inc. Migration engine for use in a logical namespace of a storage system environment
US9118697B1 (en) 2006-03-20 2015-08-25 Netapp, Inc. System and method for integrating namespace management and storage management in a storage system environment
US8151360B1 (en) 2006-03-20 2012-04-03 Netapp, Inc. System and method for administering security in a logical namespace of a storage system environment
US7606868B1 (en) * 2006-03-30 2009-10-20 VMware, Inc. Universal file access architecture for a heterogeneous computing environment
US8417746B1 (en) 2006-04-03 2013-04-09 F5 Networks, Inc. File system management with enhanced searchability
US7698351B1 (en) 2006-04-28 2010-04-13 Netapp, Inc. GUI architecture for namespace and storage management
US8635247B1 (en) 2006-04-28 2014-01-21 Netapp, Inc. Namespace and storage management application infrastructure for use in management of resources in a storage system environment
US20070255699A1 (en) * 2006-04-28 2007-11-01 Microsoft Corporation Bypass of the namespace hierarchy to open files
US7925681B2 (en) * 2006-04-28 2011-04-12 Microsoft Corporation Bypass of the namespace hierarchy to open files
US8065346B1 (en) 2006-04-28 2011-11-22 Netapp, Inc. Graphical user interface architecture for namespace and storage management
US9270741B2 (en) 2006-04-28 2016-02-23 Netapp, Inc. Namespace and storage management application infrastructure for use in management of resources in a storage system environment
US20080126434A1 (en) * 2006-08-03 2008-05-29 Mustafa Uysal Protocol virtualization for a network file system
US8990270B2 (en) * 2006-08-03 2015-03-24 Hewlett-Packard Development Company, L. P. Protocol virtualization for a network file system
US7996421B2 (en) 2007-01-03 2011-08-09 International Business Machines Corporation Method, computer program product, and system for coordinating access to locally and remotely exported file systems
US20080162582A1 (en) * 2007-01-03 2008-07-03 International Business Machines Corporation Method, computer program product, and system for coordinating access to locally and remotely exported file systems
US8682916B2 (en) 2007-05-25 2014-03-25 F5 Networks, Inc. Remote file virtualization in a switched file system
US20090049153A1 (en) * 2007-08-14 2009-02-19 International Business Machines Corporation Methods, computer program products, and apparatuses for providing remote client access to exported file systems
US7958200B2 (en) * 2007-08-14 2011-06-07 International Business Machines Corporation Methods, computer program products, and apparatuses for providing remote client access to exported file systems
US10523747B2 (en) 2007-08-29 2019-12-31 Oracle International Corporation Method and system for selecting a storage node based on a distance from a requesting device
US20120191673A1 (en) * 2007-08-29 2012-07-26 Nirvanix, Inc. Coupling a user file name with a physical data file stored in a storage delivery network
US10924536B2 (en) 2007-08-29 2021-02-16 Oracle International Corporation Method and system for selecting a storage node based on a distance from a requesting device
US10193967B2 (en) 2007-08-29 2019-01-29 Oracle International Corporation Redirecting devices requesting access to files
US20090144300A1 (en) * 2007-08-29 2009-06-04 Chatley Scott P Coupling a user file name with a physical data file stored in a storage delivery network
US8117244B2 (en) 2007-11-12 2012-02-14 F5 Networks, Inc. Non-disruptive file migration
US8180747B2 (en) 2007-11-12 2012-05-15 F5 Networks, Inc. Load sharing cluster file systems
US20090204705A1 (en) * 2007-11-12 2009-08-13 Attune Systems, Inc. On Demand File Virtualization for Server Configuration Management with Limited Interruption
US8548953B2 (en) 2007-11-12 2013-10-01 F5 Networks, Inc. File deduplication using storage tiers
US20090204650A1 (en) * 2007-11-15 2009-08-13 Attune Systems, Inc. File Deduplication using Copy-on-Write Storage Tiers
US8583619B2 (en) 2007-12-05 2013-11-12 Box, Inc. Methods and systems for open source collaboration in an application service provider environment
US9519526B2 (en) 2007-12-05 2016-12-13 Box, Inc. File management system and collaboration service and integration capabilities with third party applications
US8352785B1 (en) 2007-12-13 2013-01-08 F5 Networks, Inc. Methods for generating a unified virtual snapshot and systems thereof
US20090198704A1 (en) * 2008-01-25 2009-08-06 Klavs Landberg Method for automated network file and directory virtualization
US8103628B2 (en) * 2008-04-09 2012-01-24 Harmonic Inc. Directed placement of data in a redundant data storage system
US8504571B2 (en) 2008-04-09 2013-08-06 Harmonic Inc. Directed placement of data in a redundant data storage system
US20090259665A1 (en) * 2008-04-09 2009-10-15 John Howe Directed placement of data in a redundant data storage system
US8549582B1 (en) 2008-07-11 2013-10-01 F5 Networks, Inc. Methods for handling a multi-protocol content name and systems thereof
US20100169488A1 (en) * 2008-12-31 2010-07-01 Sap Ag System and method of consolidated central user administrative provisioning
US9704134B2 (en) 2008-12-31 2017-07-11 Sap Se System and method of consolidated central user administrative provisioning
US8788666B2 (en) * 2008-12-31 2014-07-22 Sap Ag System and method of consolidated central user administrative provisioning
US20100257218A1 (en) * 2009-04-03 2010-10-07 Konstantin Iliev Vassilev Merging multiple heterogeneous file systems into a single virtual unified file system
US11108815B1 (en) 2009-11-06 2021-08-31 F5 Networks, Inc. Methods and system for returning requests with javascript for clients before passing a request to a server
US10721269B1 (en) 2009-11-06 2020-07-21 F5 Networks, Inc. Methods and system for returning requests with javascript for clients before passing a request to a server
US8392372B2 (en) 2010-02-09 2013-03-05 F5 Networks, Inc. Methods and systems for snapshot reconstitution
US9195500B1 (en) 2010-02-09 2015-11-24 F5 Networks, Inc. Methods for seamless storage importing and devices thereof
US8204860B1 (en) 2010-02-09 2012-06-19 F5 Networks, Inc. Methods and systems for snapshot reconstitution
US8285749B2 (en) * 2010-03-05 2012-10-09 Hitachi, Ltd. Computer system and recording medium
USRE47019E1 (en) 2010-07-14 2018-08-28 F5 Networks, Inc. Methods for DNSSEC proxying and deployment amelioration and systems thereof
US8468007B1 (en) 2010-08-13 2013-06-18 Google Inc. Emulating a peripheral mass storage device with a portable device
US8265919B1 (en) 2010-08-13 2012-09-11 Google Inc. Emulating a peripheral mass storage device with a portable device
US8738673B2 (en) 2010-09-03 2014-05-27 International Business Machines Corporation Index partition maintenance over monotonically addressed document sequences
US9286298B1 (en) 2010-10-14 2016-03-15 F5 Networks, Inc. Methods for enhancing management of backup data sets and devices thereof
US10554426B2 (en) 2011-01-20 2020-02-04 Box, Inc. Real time notification of activities that occur in a web-based collaboration environment
US8782174B1 (en) 2011-03-31 2014-07-15 Emc Corporation Uploading and downloading unsecured files via a virtual machine environment
US9015601B2 (en) 2011-06-21 2015-04-21 Box, Inc. Batch uploading of content to a web-based collaboration environment
US9063912B2 (en) 2011-06-22 2015-06-23 Box, Inc. Multimedia content preview rendering in a cloud content management system
US9152792B2 (en) * 2011-06-27 2015-10-06 Beijing Qihoo Technology Company Limited Method and system for unlocking and deleting file and folder
US20140137252A1 (en) * 2011-06-27 2014-05-15 Beijing Qihoo Technology Company Limited Method and system for unlocking and deleting file and folder
US10061926B2 (en) 2011-06-27 2018-08-28 Beijing Qihoo Technology Company Limited Method and system for unlocking and deleting file and folder
US8396836B1 (en) 2011-06-30 2013-03-12 F5 Networks, Inc. System for mitigating file virtualization storage import latency
US9652741B2 (en) 2011-07-08 2017-05-16 Box, Inc. Desktop application for access and interaction with workspaces in a cloud-based content management system and synchronization mechanisms thereof
US9978040B2 (en) 2011-07-08 2018-05-22 Box, Inc. Collaboration sessions in a workspace on a cloud-based content management system
US8549518B1 (en) * 2011-08-10 2013-10-01 Nutanix, Inc. Method and system for implementing a maintenance service for managing I/O and storage for virtualization environment
US9256456B1 (en) 2011-08-10 2016-02-09 Nutanix, Inc. Architecture for managing I/O and storage for a virtualization environment
US8997097B1 (en) 2011-08-10 2015-03-31 Nutanix, Inc. System for implementing a virtual disk in a virtualization environment
US9652265B1 (en) 2011-08-10 2017-05-16 Nutanix, Inc. Architecture for managing I/O and storage for a virtualization environment with multiple hypervisor types
US9747287B1 (en) 2011-08-10 2017-08-29 Nutanix, Inc. Method and system for managing metadata for a virtualization environment
US8601473B1 (en) 2011-08-10 2013-12-03 Nutanix, Inc. Architecture for managing I/O and storage for a virtualization environment
US11301274B2 (en) 2011-08-10 2022-04-12 Nutanix, Inc. Architecture for managing I/O and storage for a virtualization environment
US9389887B1 (en) 2011-08-10 2016-07-12 Nutanix, Inc. Method and system for managing de-duplication of data in a virtualization environment
US9052936B1 (en) 2011-08-10 2015-06-09 Nutanix, Inc. Method and system for communicating to a storage controller in a virtualization environment
US9354912B1 (en) 2011-08-10 2016-05-31 Nutanix, Inc. Method and system for implementing a maintenance service for managing I/O and storage for a virtualization environment
US10359952B1 (en) 2011-08-10 2019-07-23 Nutanix, Inc. Method and system for implementing writable snapshots in a virtualized storage environment
US9009106B1 (en) 2011-08-10 2015-04-14 Nutanix, Inc. Method and system for implementing writable snapshots in a virtualized storage environment
US9575784B1 (en) 2011-08-10 2017-02-21 Nutanix, Inc. Method and system for handling storage in response to migration of a virtual machine in a virtualization environment
US9619257B1 (en) 2011-08-10 2017-04-11 Nutanix, Inc. System and method for implementing storage for a virtualization environment
US11853780B2 (en) 2011-08-10 2023-12-26 Nutanix, Inc. Architecture for managing I/O and storage for a virtualization environment
US8863124B1 (en) 2011-08-10 2014-10-14 Nutanix, Inc. Architecture for managing I/O and storage for a virtualization environment
US9256475B1 (en) 2011-08-10 2016-02-09 Nutanix, Inc. Method and system for handling ownership transfer in a virtualization environment
US11314421B2 (en) 2011-08-10 2022-04-26 Nutanix, Inc. Method and system for implementing writable snapshots in a virtualized storage environment
US9256374B1 (en) 2011-08-10 2016-02-09 Nutanix, Inc. Metadata for managing I/O and storage for a virtualization environment
US8850130B1 (en) 2011-08-10 2014-09-30 Nutanix, Inc. Metadata for managing I/O and storage for a virtualization environment
US9197718B2 (en) 2011-09-23 2015-11-24 Box, Inc. Central management and control of user-contributed content in a web-based collaboration environment and management console thereof
US8990151B2 (en) 2011-10-14 2015-03-24 Box, Inc. Automatic and semi-automatic tagging features of work items in a shared workspace for metadata tracking in a cloud-based content management system with selective or optional user contribution
US8515902B2 (en) 2011-10-14 2013-08-20 Box, Inc. Automatic and semi-automatic tagging features of work items in a shared workspace for metadata tracking in a cloud-based content management system with selective or optional user contribution
US9098474B2 (en) 2011-10-26 2015-08-04 Box, Inc. Preview pre-generation based on heuristics and algorithmic prediction/assessment of predicted user behavior for enhancement of user experience
US8463850B1 (en) 2011-10-26 2013-06-11 F5 Networks, Inc. System and method of algorithmically generating a server side transaction identifier
US11210610B2 (en) 2011-10-26 2021-12-28 Box, Inc. Enhanced multimedia content preview rendering in a cloud content management system
US8965958B2 (en) * 2011-10-27 2015-02-24 Microsoft Corporation File fetch from a remote client device
US20130110903A1 (en) * 2011-10-27 2013-05-02 Microsoft Corporation File fetch from a remote client device
US8990307B2 (en) 2011-11-16 2015-03-24 Box, Inc. Resource effective incremental updating of a remote client with events which occurred via a cloud-enabled platform
US9015248B2 (en) 2011-11-16 2015-04-21 Box, Inc. Managing updates at clients used by a user to access a cloud-based collaboration service
US11853320B2 (en) 2011-11-29 2023-12-26 Box, Inc. Mobile platform file and folder selection functionalities for offline access and synchronization
US10909141B2 (en) 2011-11-29 2021-02-02 Box, Inc. Mobile platform file and folder selection functionalities for offline access and synchronization
US9773051B2 (en) 2011-11-29 2017-09-26 Box, Inc. Mobile platform file and folder selection functionalities for offline access and synchronization
US11537630B2 (en) 2011-11-29 2022-12-27 Box, Inc. Mobile platform file and folder selection functionalities for offline access and synchronization
GB2498626A (en) * 2011-12-13 2013-07-24 Ibm Optimising the storage allocation in a virtual desktop environment
US9235589B2 (en) 2011-12-13 2016-01-12 International Business Machines Corporation Optimizing storage allocation in a virtual desktop environment
GB2498626B (en) * 2011-12-13 2015-02-18 Ibm Optimizing the storage allocation in a virtual desktop environment
US9019123B2 (en) 2011-12-22 2015-04-28 Box, Inc. Health check services for web-based collaboration environments
US9904435B2 (en) 2012-01-06 2018-02-27 Box, Inc. System and method for actionable event generation for task delegation and management via a discussion forum in a web-based collaboration environment
US11232481B2 (en) 2012-01-30 2022-01-25 Box, Inc. Extended applications of multimedia content previews in the cloud-based content management system
USRE48725E1 (en) 2012-02-20 2021-09-07 F5 Networks, Inc. Methods for accessing data in a compressed file system and devices thereof
US9020912B1 (en) 2012-02-20 2015-04-28 F5 Networks, Inc. Methods for accessing data in a compressed file system and devices thereof
US9965745B2 (en) 2012-02-24 2018-05-08 Box, Inc. System and method for promoting enterprise adoption of a web-based collaboration environment
US10713624B2 (en) 2012-02-24 2020-07-14 Box, Inc. System and method for promoting enterprise adoption of a web-based collaboration environment
US9195636B2 (en) 2012-03-07 2015-11-24 Box, Inc. Universal file type preview for mobile devices
US9054919B2 (en) 2012-04-05 2015-06-09 Box, Inc. Device pinning capability for enterprise cloud service and storage accounts
GB2501182B (en) * 2012-04-11 2014-02-26 Box Inc Cloud service enabled to handle a set of files depicted to a user as a single file in a native operating system
GB2501182A (en) * 2012-04-11 2013-10-16 Box Inc Cloud service enabled to handle a set of files depicted to a user as a single file
US9575981B2 (en) 2012-04-11 2017-02-21 Box, Inc. Cloud service enabled to handle a set of files depicted to a user as a single file in a native operating system
US9413587B2 (en) 2012-05-02 2016-08-09 Box, Inc. System and method for a third-party application to access content within a cloud-based platform
US9691051B2 (en) 2012-05-21 2017-06-27 Box, Inc. Security enhancement through application access control
US8914900B2 (en) 2012-05-23 2014-12-16 Box, Inc. Methods, architectures and security mechanisms for a third-party application to access content in a cloud-based platform
US9552444B2 (en) 2012-05-23 2017-01-24 Box, Inc. Identification verification mechanisms for a third-party application to access content in a cloud-based platform
US9027108B2 (en) 2012-05-23 2015-05-05 Box, Inc. Systems and methods for secure file portability between mobile applications on a mobile device
US9280613B2 (en) 2012-05-23 2016-03-08 Box, Inc. Metadata enabled third-party application access of content at a cloud-based platform via a native client to the cloud-based platform
US9021099B2 (en) 2012-07-03 2015-04-28 Box, Inc. Load balancing secure FTP connections among multiple FTP servers
US8719445B2 (en) 2012-07-03 2014-05-06 Box, Inc. System and method for load balancing multiple file transfer protocol (FTP) servers to service FTP connections for a cloud-based service
US9792320B2 (en) 2012-07-06 2017-10-17 Box, Inc. System and method for performing shard migration to support functions of a cloud-based service
US10452667B2 (en) 2012-07-06 2019-10-22 Box, Inc. Identification of people as search results from key-word based searches of content in a cloud-based environment
US9712510B2 (en) 2012-07-06 2017-07-18 Box, Inc. Systems and methods for securely submitting comments among users via external messaging applications in a cloud-based platform
US9772866B1 (en) 2012-07-17 2017-09-26 Nutanix, Inc. Architecture for implementing a virtualization environment and appliance
US10747570B2 (en) 2012-07-17 2020-08-18 Nutanix, Inc. Architecture for implementing a virtualization environment and appliance
US11314543B2 (en) 2012-07-17 2022-04-26 Nutanix, Inc. Architecture for implementing a virtualization environment and appliance
US10684879B2 (en) 2012-07-17 2020-06-16 Nutanix, Inc. Architecture for implementing a virtualization environment and appliance
US9237170B2 (en) 2012-07-19 2016-01-12 Box, Inc. Data loss prevention (DLP) methods and architectures by a cloud service
US9473532B2 (en) 2012-07-19 2016-10-18 Box, Inc. Data loss prevention (DLP) methods by a cloud service including third party integration architectures
US8868574B2 (en) 2012-07-30 2014-10-21 Box, Inc. System and method for advanced search and filtering mechanisms for enterprise administrators in a cloud-based environment
US9794256B2 (en) 2012-07-30 2017-10-17 Box, Inc. System and method for advanced control tools for administrators in a cloud-based service
US9369520B2 (en) 2012-08-19 2016-06-14 Box, Inc. Enhancement of upload and/or download performance based on client and/or server feedback information
US9729675B2 (en) 2012-08-19 2017-08-08 Box, Inc. Enhancement of upload and/or download performance based on client and/or server feedback information
US8745267B2 (en) 2012-08-19 2014-06-03 Box, Inc. Enhancement of upload and/or download performance based on client and/or server feedback information
US9558202B2 (en) 2012-08-27 2017-01-31 Box, Inc. Server side techniques for reducing database workload in implementing selective subfolder synchronization in a cloud-based environment
US9450926B2 (en) 2012-08-29 2016-09-20 Box, Inc. Upload and download streaming encryption to/from a cloud-based platform
US9135462B2 (en) 2012-08-29 2015-09-15 Box, Inc. Upload and download streaming encryption to/from a cloud-based platform
US9311071B2 (en) 2012-09-06 2016-04-12 Box, Inc. Force upgrade of a mobile application via a server side configuration file
US9195519B2 (en) 2012-09-06 2015-11-24 Box, Inc. Disabling the self-referential appearance of a mobile application in an intent via a background registration
US9117087B2 (en) 2012-09-06 2015-08-25 Box, Inc. System and method for creating a secure channel for inter-application communication based on intents
US9292833B2 (en) 2012-09-14 2016-03-22 Box, Inc. Batching notifications of activities that occur in a web-based collaboration environment
US10200256B2 (en) 2012-09-17 2019-02-05 Box, Inc. System and method of a manipulative handle in an interactive mobile user interface
US9553758B2 (en) 2012-09-18 2017-01-24 Box, Inc. Sandboxing individual applications to specific user folders in a cloud-based service
US9519501B1 (en) 2012-09-30 2016-12-13 F5 Networks, Inc. Hardware assisted flow acceleration and L2 SMAC management in a heterogeneous distributed multi-tenant virtualized clustered system
US9959420B2 (en) 2012-10-02 2018-05-01 Box, Inc. System and method for enhanced security and management mechanisms for enterprise administrators in a cloud-based environment
US9495364B2 (en) 2012-10-04 2016-11-15 Box, Inc. Enhanced quick search features, low-barrier commenting/interactive features in a collaboration platform
US9705967B2 (en) 2012-10-04 2017-07-11 Box, Inc. Corporate user discovery and identification of recommended collaborators in a cloud platform
US9665349B2 (en) 2012-10-05 2017-05-30 Box, Inc. System and method for generating embeddable widgets which enable access to a cloud-based collaboration platform
US9628268B2 (en) 2012-10-17 2017-04-18 Box, Inc. Remote key management in a cloud-based environment
US10235383B2 (en) 2012-12-19 2019-03-19 Box, Inc. Method and apparatus for synchronization of items with read-only permissions in a cloud-based environment
US9396245B2 (en) 2013-01-02 2016-07-19 Box, Inc. Race condition handling in a system which incrementally updates clients with events that occurred in a cloud-based collaboration platform
US9953036B2 (en) 2013-01-09 2018-04-24 Box, Inc. File system monitoring in a system which incrementally updates clients with events that occurred in a cloud-based collaboration platform
US9507795B2 (en) 2013-01-11 2016-11-29 Box, Inc. Functionalities, features, and user interface of a synchronization client to a cloud-based environment
US10599671B2 (en) 2013-01-17 2020-03-24 Box, Inc. Conflict resolution, retry condition management, and handling of problem files for the synchronization client to a cloud-based platform
US10375155B1 (en) 2013-02-19 2019-08-06 F5 Networks, Inc. System and method for achieving hardware acceleration for asymmetric flow connections
US9554418B1 (en) 2013-02-28 2017-01-24 F5 Networks, Inc. Device for topology hiding of a visited network
US10846074B2 (en) 2013-05-10 2020-11-24 Box, Inc. Identification and handling of items to be ignored for synchronization with a cloud-based platform by a synchronization client
US10725968B2 (en) 2013-05-10 2020-07-28 Box, Inc. Top down delete or unsynchronization on delete of and depiction of item synchronization with a synchronization client to a cloud-based platform
US10877937B2 (en) 2013-06-13 2020-12-29 Box, Inc. Systems and methods for synchronization event building and/or collapsing by a synchronization component of a cloud-based platform
US9633037B2 (en) 2013-06-13 2017-04-25 Box, Inc. Systems and methods for synchronization event building and/or collapsing by a synchronization component of a cloud-based platform
US9805050B2 (en) 2013-06-21 2017-10-31 Box, Inc. Maintaining and updating file system shadows on a local device by a synchronization client of a cloud-based platform
US11531648B2 (en) 2013-06-21 2022-12-20 Box, Inc. Maintaining and updating file system shadows on a local device by a synchronization client of a cloud-based platform
US10229134B2 (en) 2013-06-25 2019-03-12 Box, Inc. Systems and methods for managing upgrades, migration of user data and improving performance of a cloud-based platform
US10110656B2 (en) 2013-06-25 2018-10-23 Box, Inc. Systems and methods for providing shell communication in a cloud-based platform
US9535924B2 (en) 2013-07-30 2017-01-03 Box, Inc. Scalability improvement in a system which incrementally updates clients with events that occurred in a cloud-based collaboration platform
US9785518B2 (en) 2013-09-04 2017-10-10 Hytrust, Inc. Multi-threaded transaction log for primary and restore/intelligence
US11435865B2 (en) 2013-09-13 2022-09-06 Box, Inc. System and methods for configuring event-based automation in cloud-based collaboration platforms
US9519886B2 (en) 2013-09-13 2016-12-13 Box, Inc. Simultaneous editing/accessing of content by collaborator invitation through a web-based or mobile application to a cloud-based collaboration platform
US9704137B2 (en) 2013-09-13 2017-07-11 Box, Inc. Simultaneous editing/accessing of content by collaborator invitation through a web-based or mobile application to a cloud-based collaboration platform
US10509527B2 (en) 2013-09-13 2019-12-17 Box, Inc. Systems and methods for configuring event-based automation in cloud-based collaboration platforms
US11822759B2 (en) 2013-09-13 2023-11-21 Box, Inc. System and methods for configuring event-based automation in cloud-based collaboration platforms
US10044773B2 (en) 2013-09-13 2018-08-07 Box, Inc. System and method of a multi-functional managing user interface for accessing a cloud-based platform via mobile devices
US9483473B2 (en) 2013-09-13 2016-11-01 Box, Inc. High availability architecture for a cloud-based concurrent-access collaboration platform
US8892679B1 (en) 2013-09-13 2014-11-18 Box, Inc. Mobile device, methods and user interfaces thereof in a mobile device platform featuring multifunctional access and engagement in a collaborative environment provided by a cloud-based platform
US9213684B2 (en) 2013-09-13 2015-12-15 Box, Inc. System and method for rendering document in web browser or mobile device regardless of third-party plug-in software
US9535909B2 (en) 2013-09-13 2017-01-03 Box, Inc. Configurable event-based automation architecture for cloud-based collaboration platforms
US10866931B2 (en) 2013-10-22 2020-12-15 Box, Inc. Desktop application for accessing a cloud collaboration platform
US10530854B2 (en) 2014-05-30 2020-01-07 Box, Inc. Synchronization of permissioned content in cloud-based environments
US9602514B2 (en) 2014-06-16 2017-03-21 Box, Inc. Enterprise mobility management and verification of a managed application by a content provider
US11838851B1 (en) 2014-07-15 2023-12-05 F5, Inc. Methods for managing L7 traffic classification and devices thereof
US10708321B2 (en) 2014-08-29 2020-07-07 Box, Inc. Configurable metadata-based automation and content classification architecture for cloud-based collaboration platforms
US10574442B2 (en) 2014-08-29 2020-02-25 Box, Inc. Enhanced remote key management for an enterprise in a cloud-based environment
US11876845B2 (en) 2014-08-29 2024-01-16 Box, Inc. Configurable metadata-based automation and content classification architecture for cloud-based collaboration platforms
US9894119B2 (en) 2014-08-29 2018-02-13 Box, Inc. Configurable metadata-based automation and content classification architecture for cloud-based collaboration platforms
US10038731B2 (en) 2014-08-29 2018-07-31 Box, Inc. Managing flow-based interactions with cloud-based shared content
US10708323B2 (en) 2014-08-29 2020-07-07 Box, Inc. Managing flow-based interactions with cloud-based shared content
US11146600B2 (en) 2014-08-29 2021-10-12 Box, Inc. Configurable metadata-based automation and content classification architecture for cloud-based collaboration platforms
US9756022B2 (en) 2014-08-29 2017-09-05 Box, Inc. Enhanced remote key management for an enterprise in a cloud-based environment
US10182013B1 (en) 2014-12-01 2019-01-15 F5 Networks, Inc. Methods for managing progressive image delivery and devices thereof
US11895138B1 (en) 2015-02-02 2024-02-06 F5, Inc. Methods for improving web scanner accuracy and devices thereof
US10834065B1 (en) 2015-03-31 2020-11-10 F5 Networks, Inc. Methods for SSL protected NTLM re-authentication and devices thereof
US10078555B1 (en) * 2015-04-14 2018-09-18 EMC IP Holding Company LLC Synthetic full backups for incremental file backups
US10459891B2 (en) 2015-09-30 2019-10-29 Western Digital Technologies, Inc. Replicating data across data storage devices of a logical volume
US10404698B1 (en) 2016-01-15 2019-09-03 F5 Networks, Inc. Methods for adaptive organization of web application access points in webtops and devices thereof
US10797888B1 (en) 2016-01-20 2020-10-06 F5 Networks, Inc. Methods for secured SCEP enrollment for client devices and devices thereof
US10321167B1 (en) 2016-01-21 2019-06-11 GrayMeta, Inc. Method and system for determining media file identifiers and likelihood of media file relationships
US10467103B1 (en) 2016-03-25 2019-11-05 Nutanix, Inc. Efficient change block training
US11169706B2 (en) * 2016-05-26 2021-11-09 Nutanix, Inc. Rebalancing storage I/O workloads by storage controller selection and redirection
US10412198B1 (en) 2016-10-27 2019-09-10 F5 Networks, Inc. Methods for improved transmission control protocol (TCP) performance visibility and devices thereof
US10719492B1 (en) 2016-12-07 2020-07-21 GrayMeta, Inc. Automatic reconciliation and consolidation of disparate repositories
US10567492B1 (en) 2017-05-11 2020-02-18 F5 Networks, Inc. Methods for load balancing in a federated identity environment and devices thereof
US11223689B1 (en) 2018-01-05 2022-01-11 F5 Networks, Inc. Methods for multipath transmission control protocol (MPTCP) based session migration and devices thereof
US10833943B1 (en) 2018-03-01 2020-11-10 F5 Networks, Inc. Methods for service chaining and devices thereof

Also Published As

Publication number Publication date
US7024427B2 (en) 2006-04-04
US20030115218A1 (en) 2003-06-19

Similar Documents

Publication Publication Date Title
US7024427B2 (en) Virtual file system
US10430392B2 (en) Computer file system with path lookup tables
US7743111B2 (en) Shared file system
US7165096B2 (en) Storage area network file system
US8650168B2 (en) Methods of processing files in a multiple quality of service system
US7653699B1 (en) System and method for partitioning a file system for enhanced availability and scalability
US7457982B2 (en) Writable virtual disk of read-only snapshot file objects
JP6009097B2 (en) Separation of content and metadata in a distributed object storage ecosystem
US7464116B2 (en) Method and apparatus for cloning filesystems across computing systems
US7395389B2 (en) Extending non-volatile storage at a computer system
US7475077B2 (en) System and method for emulating a virtual boundary of a file system for data management at a fileset granularity
US7836017B1 (en) File replication in a distributed segmented file system
US7424497B1 (en) Technique for accelerating the creation of a point in time representation of a virtual file system
US20050071560A1 (en) Autonomic block-level hierarchical storage management for storage networks
US8938425B1 (en) Managing logical views of storage
US20070192375A1 (en) Method and computer system for updating data when reference load is balanced by mirroring
US10740039B2 (en) Supporting file system clones in any ordered key-value store
US11263252B2 (en) Supporting file system clones in any ordered key-value store using inode back pointers
US9727588B1 (en) Applying XAM processes
US10387384B1 (en) Method and system for semantic metadata compression in a two-tier storage system using copy-on-write
US10628391B1 (en) Method and system for reducing metadata overhead in a two-tier storage architecture
JP2004252957A (en) Method and device for file replication in distributed file system
Dell
GB2439752A (en) Copy on write data storage

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION