US20030041097A1 - Distributed transactional network storage system - Google Patents

Distributed transactional network storage system

Info

Publication number
US20030041097A1
Authority
US
United States
Prior art keywords
file
data file
local
data
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/193,830
Inventor
Alexander Tormasov
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Parallels IP Holdings GmbH
Original Assignee
Alexander Tormasov
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alexander Tormasov
Priority to US10/193,830 (US20030041097A1)
Priority to US10/293,196 (US7886016B1)
Publication of US20030041097A1
Assigned to SWSOFT HOLDINGS LTD. Assignment of assignors interest (see document for details). Assignors: TORMASOV, ALEXANDER
Assigned to SWSOFT HOLDINGS LTD. Confirmatory assignment. Assignors: SWSOFT HOLDINGS, INC.
Assigned to PARALLELS HOLDINGS, LTD. Assignment of assignors interest (see document for details). Assignors: SWSOFT HOLDINGS, LTD.
Assigned to Parallels IP Holdings GmbH Assignment of assignors interest (see document for details). Assignors: PARALLELS HOLDINGS, LTD.
Legal status: Abandoned (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 - File systems; File servers
    • G06F16/18 - File system types
    • G06F16/1865 - Transactional file systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 - File systems; File servers
    • G06F16/18 - File system types
    • G06F16/182 - Distributed file systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 - Error detection; Error correction; Monitoring
    • G06F11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14 - Error detection or correction of the data by redundancy in operation
    • G06F11/1402 - Saving, restoring, recovering or retrying
    • G06F11/1415 - Saving, restoring, recovering or retrying at system level
    • G06F11/1435 - Saving, restoring, recovering or retrying at system level using file system or storage system metadata

Definitions

  • FIG. 7 illustrates the data file search procedure used to locate the unique data file identifier “C” by its logical name, following the data file path “/aaa/bbb/c”.
  • First, the root directory file 200 must be located.
  • Within the root directory file, the record corresponding to the “aaa” file 230 is identified and confirmed to be a pointer to a directory file 250.
  • The same procedure is repeated for the “bbb” file 240.
  • Finally, the “C” file is located via pointer 250.
  • The directory represents a set of records corresponding to data files. At a minimum, each record contains a logical file name and the unique identifier corresponding to it.
  • All data files, including directory files, are viewed by the system as equal and possess cluster-wide unique file identifiers used for assembly. Any network server requesting access to a directory as described above may be considered a client computer of this directory service.
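
A minimal sketch of this pathname traverse procedure (FIG. 7) is given below. The in-memory directory table, the record fields, and the function name are assumptions made for illustration, not the patented implementation; the sketch only shows how a logical path such as /aaa/bbb/c is translated, directory file by directory file, into a unique data file identifier.

```python
# Illustrative sketch of the pathname traverse procedure shown in FIG. 7.
# A directory file is modelled as a set of records, each holding a logical
# name and the unique identifier of the object the name points to.

ROOT_ID = "root"  # assumed well-known identifier of the root directory file

# Assumed in-memory stand-in for already assembled directory files:
# unique identifier -> {logical name: (unique identifier, is_directory)}
DIRECTORIES = {
    "root":   {"aaa": ("id-aaa", True)},
    "id-aaa": {"bbb": ("id-bbb", True)},
    "id-bbb": {"c":   ("id-C", False)},
}

def resolve(path: str) -> str:
    """Translate a logical pathname into a unique data file identifier."""
    current = ROOT_ID                                   # start at the root directory file
    parts = [p for p in path.split("/") if p]
    for i, name in enumerate(parts):
        record = DIRECTORIES[current].get(name)
        if record is None:
            raise FileNotFoundError(path)
        current, is_dir = record
        if i < len(parts) - 1 and not is_dir:
            raise NotADirectoryError(name)              # intermediate names must be directory files
    return current

assert resolve("/aaa/bbb/c") == "id-C"
```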
  • The unique data file identifier is generated at the moment a data file is created.
  • The uniqueness of the data file identifier is guaranteed by the totally local generation algorithms and does not require confirmation against any central registry.
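
The patent does not spell out how these identifiers are produced. The sketch below shows two common ways to obtain cluster-wide uniqueness from purely local information, a server-specific prefix combined with a local counter, or a random 128-bit value; both generators and their names are hypothetical.

```python
import itertools
import uuid

# Hypothetical strictly local identifier generators.  The patent only
# requires that uniqueness follows from local information, without a
# central registry; both schemes below satisfy that in different ways.

def make_local_id_factory(server_id: str):
    """Identifiers of the form '<server>-<counter>': unique cluster-wide as
    long as server ids are unique and the counter never repeats locally."""
    counter = itertools.count(1)
    return lambda: f"{server_id}-{next(counter)}"

new_id = make_local_id_factory("server-17")
print(new_id(), new_id())     # server-17-1 server-17-2

# An alternative purely local choice: a random 128-bit identifier,
# unique with overwhelming probability and with no coordination at all.
print(uuid.uuid4().hex)
```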
  • The local client computer is connected to a network server as described above and sends out a request for a data file operation.
  • For a file change recording operation (file write), the local client computer creates a low-level data file containing the change records, then disassembles the low-level data file into data pieces and sends the data pieces to a network server.
  • The network server sends the data file pieces to all of the network servers in its group.
  • The neighboring network servers send the data file pieces further, until all of the data pieces are placed with a network server (with at least one data piece at each network server).
  • A unique identifier is generated in order to identify the disassembled data file pieces in the future. A sketch of this write path follows.
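
The sketch treats the already disassembled pieces as opaque blobs and models only the placement step: the receiving server keeps one piece and forwards the rest along its neighbour list until every server holds a piece. The ring topology and the "keep one, forward the rest" rule are assumptions for illustration, not the patented placement algorithm.

```python
# Sketch of the write path: the client hands the redundant pieces of a
# low-level transactional file to one server, which keeps a piece and
# forwards the rest along its neighbour list until every piece is placed.

class Server:
    def __init__(self, name: str):
        self.name = name
        self.neighbours = []      # small dynamic neighbour list, not the whole cluster
        self.store = {}           # unique identifier -> list of pieces held here

    def put(self, file_id: str, pieces: list):
        """Keep one piece locally and forward the remainder to a neighbour."""
        self.store.setdefault(file_id, []).append(pieces[0])
        if pieces[1:]:
            self.neighbours[0].put(file_id, pieces[1:])

# a small cluster wired as a ring of neighbours
cluster = [Server(f"s{i}") for i in range(5)]
for i, srv in enumerate(cluster):
    srv.neighbours = [cluster[(i + 1) % len(cluster)]]

# client side: one transaction's low-level file, already disassembled into
# five redundant pieces (opaque blobs here) under one unique identifier
cluster[0].put("tx-0001", [f"piece-{i}".encode() for i in range(5)])

print([(s.name, len(s.store["tx-0001"])) for s in cluster])
# -> every server ends up holding one piece of the transactional file
```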
  • To read a stored data file, the local client computer connects to any network server and sends a request containing the full file name with its access pathname.
  • The network server translates the data file name into a unique identifier using directory information and determines which data file pieces are needed to reassemble the low-level data files required to restore the original data file.
  • The network server first checks the local availability of the data file pieces, and requests the data file pieces from other network servers if the number of available pieces is insufficient.
  • The network server then collects the data pieces required for file assembly and sends them to the requesting local client computer. Only then may the client computer assemble the original data file, as sketched below.
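
The piece-collection step can be sketched as follows, with server stores modelled as plain dictionaries; the function name and the value of k (the number of pieces sufficient for assembly) are illustrative assumptions.

```python
# Sketch of the read path: the contacted server gathers at least the k
# pieces needed for assembly, first from its own store and then from the
# other servers, and hands them to the requesting client, which performs
# the assembly itself.

def collect_pieces(entry_store: dict, other_stores: list, file_id: str, k: int):
    """Return at least k pieces of `file_id`, or raise if too few are reachable."""
    gathered = list(entry_store.get(file_id, []))      # check local availability first
    for store in other_stores:                         # then request from other servers
        if len(gathered) >= k:
            break
        gathered.extend(store.get(file_id, []))
    if len(gathered) < k:
        raise IOError(f"only {len(gathered)} of {k} required pieces are reachable")
    return gathered[:k]

# four reachable servers, three of which hold one redundant piece of file "id-C"
stores = [{"id-C": [b"piece-1"]}, {}, {"id-C": [b"piece-2"]}, {"id-C": [b"piece-3"]}]
pieces = collect_pieces(stores[0], stores[1:], "id-C", k=3)
print(len(pieces))   # 3 -- enough for the client to run the assembly procedure
```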
  • The fault tolerance level (network server accessibility in the face of disconnection or network access failure) is determined by the degree of redundancy built into the network data storage system. Data file pieces are created with a predetermined redundancy and placed at different network servers, so inaccessibility of some network servers does not affect data file assembly or accessibility to the local client computer as long as the number of accessible network servers remains greater than a pre-defined threshold. For example, if a file is disassembled into n pieces of which any k suffice for assembly, up to n - k servers may be unreachable without any loss of access. The redundancy volume is determined at the moment the data file is stored and depends on the prospective stability of the data file storage.
  • The disclosed system and method for data storage is convenient for working with unmodified data files.
  • The algorithm for storing a data file as data pieces, however, is highly dependent on the contents of the data file: even slight changes to the data file may require changes to all of the data file pieces, and regenerating every piece is expensive and inefficient.
  • Each change to the contents of a data file is therefore represented as a set of triplets: the offset from the beginning of the data file, the data length, and the data itself.
  • Each change to the data file or its metadata is arranged in the form of a separate record.
  • The physical data file is thus stored in the form of a series of records.
  • Each record is regarded as a low-level unmodified data file.
  • A unique transaction identifier, introduced in addition to the unique data file identifier, distinguishes the records and serves as a timing mark that establishes the “before-after” relationship between the identifiers according to the time of their creation.
  • Reconstructing the state of the data file at a particular moment in time requires the availability of all of the transactions related to that data file whose time of creation is less than or equal to the requested time.
  • As shown in FIG. 6, the data file is stored in its initial form 180, and the set of transactional changes 190 a, 190 b and 190 c is stored separately as records rather than being merged into it.
  • In this way, each state of the data file remains available at any point in time. The sketch below illustrates this record layout and the replay of transactions up to a requested time.
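
The sketch uses assumed field names and a plain numeric timestamp as the timing mark; the (offset, length, data) triplet comes directly from the description above, while the replay function is only one straightforward way to realize the "all transactions created at or before the requested time" rule.

```python
from dataclasses import dataclass
from typing import List, Tuple

# One change to a data file is the triplet (offset, length, data); one
# transaction bundles the changes recorded at a given time and carries a
# transaction identifier that also serves as a timing mark.

@dataclass
class Transaction:
    tx_id: str                               # unique transaction identifier
    created: float                           # timing mark ("before-after" ordering)
    changes: List[Tuple[int, int, bytes]]    # (offset, length, data) triplets

def state_at(initial: bytes, transactions: List[Transaction], when: float) -> bytes:
    """Rebuild the file state at time `when`: start from the stored initial
    form and apply every transaction created at or before that time."""
    image = bytearray(initial)
    for tx in sorted(transactions, key=lambda t: t.created):
        if tx.created > when:
            break
        for offset, length, data in tx.changes:
            if offset + length > len(image):             # grow the file if the change extends it
                image.extend(b"\x00" * (offset + length - len(image)))
            image[offset:offset + length] = data[:length]
    return bytes(image)

log = [
    Transaction("t1", 1.0, [(0, 5, b"HELLO")]),
    Transaction("t2", 2.0, [(6, 5, b"WORLD")]),
]
print(state_at(b"xxxxx yyyyy", log, when=1.5))   # b'HELLO yyyyy'
print(state_at(b"xxxxx yyyyy", log, when=3.0))   # b'HELLO WORLD'
```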
  • Client software for such storage consists of two parts: one part for the computer's local file system and the other for communication with the distributed data network.
  • The software running on the local client computer records information to a local data file and saves the data about the recording, including the time it was recorded. When a transaction ends, the software running on the local client computer generates a transaction identifier and a separate low-level data file that stores all of the changes to the data file and forms a transactional record.
  • One transaction can contain data for different files.
  • The transactional files are passed to the network part of the local client software and are recorded by disassembling each file into data pieces which are placed at the network servers.
  • The software running on the local client computer hooks any attempt by the local programs and services of the local operating system to read the stored data file and sends a request to a network server to locate this data file. If the data file exists and has a unique data file identifier as determined by the directory service, the software on the local client computer requests the storage file or files and obtains the list of the file transactions for the relevant period of time. The software then receives the data file pieces associated with these transactions and collects the low-level transactional data files in order to assemble the original data file contents.
  • The programs and services of the local operating system where the client software is installed continue working with the assembled data file in the local file system as if the data file had always existed there.
  • In this way, the software running on the local client computer adds network functionality, data integrity, and accessibility to the local data file system.
  • M. Satyanarayanan, Coda: A Highly Available File System for a Distributed Workstation Environment, Proceedings of the Second IEEE Workshop on Workstation Operating Systems, September 1989, Pacific Grove, Calif.

Abstract

The present invention provides a highly scalable system for fault tolerant distributed data file storage over a set of functionally equal network servers linked through a local network with network servers and client computers. Data files are represented as a set of transactional records; each record is disassembled into redundant, functionally identical data pieces, with reassembly of the original file dependent only upon the number of data file pieces and not on the presence or absence of any particular data file piece. Local algorithms generate unique data file identifiers upon file creation and disassembly. Changes to the data file storage system are ranked by creation time and stored as separate records with unique transaction identifiers in addition to unique data file identifiers. A transactional data file record is stored by disassembling the transactional file into pieces placed at the network servers. Low-level transactional files are collected to reassemble the data file contents.

Description

    REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of U.S. Provisional Application for Patent No. 60/304,655 titled “Distributed Transactional Network Storage of High Scalability Meant for Storage of Information in a Local Network with Common Namespace, Guaranteed Access Level and No Dedicated Server” filed on Jul. 11, 2001 for priority under 35 U.S.C. §119 (e), is related thereto, and the subject matter thereof is incorporated herein by reference in its entirety. [0001]
  • FIELD
  • The system and method of the present invention relates to a distributed transactional network data storage system. More particularly, this invention relates to highly scalable distributed transactional network storage of information or data in a computer network with a common namespace, guaranteed access level and no dedicated server. [0002]
  • BACKGROUND
  • The problem of network data file storage began when computers were first linked together. Traditionally, one solution to the problem of storing data has been to allocate services to a network computer or file server [See Distributed Operating Systems by Andrew S. Tanenbaum; 1994 Prentice Hall; ISBN:0132199084]. Software, installed at other network client computers, permitted access to various network servers by copying the files of the network servers locally or by emulating access to files on network servers from a virtual local disk. FIG. 1 illustrates one prior art method for shared access to a file at a file server 10 as developed for personal DOS-based IBM compatible computers. Client software for DOS-based IBM compatible computers, if properly connected to the local network 20 and the corresponding file server 10, permitted viewing of the network drive. Software running on client computers 30 made files located at a remote file server 10 appear to be local. Thus, the allocation of services to a network computer or file server requires a dedicated file server and the client-server access model in order to access network files. [See CHARLES CROWLEY, OPERATING SYSTEMS: A DESIGN-ORIENTED APPROACH (Irwin, 1997) ISBN 0256151512]. [0003]
  • This allocation of services to a network computer or file server has several disadvantages. In the case of shared access, several clients may view the same data file locally at the client computer. Users of client computers may be unaware of the shared access to a data file and start writing pseudo-local files which are stored to the same location. The result is file distortion. Multiple failures are bound to occur. Because pseudo-local files are physically located at the same network server, the pseudo-local files are entirely dependent on that network server. This means that any hardware, software or network failure at that network server makes file access impossible. Even properly functioning network servers may cause such a problem while rebooting their operating system. Any scheduled reboot of an operating system inevitably blocks data file access and service. [0004]
  • Clustering is one solution to the problem of file distortion or inability to gain access to data files. Digital Equipment Company (DEC) developed and implemented a well-known hardware and software concept in the field of clustering. Specifically, clustering is the creation of a special disk array linked up to several computer processor units. [See Roy G. Davis, VAXcluster Principles (Digital Press) ISBN 1555581129]. When a special disk array is linked up to several computer processor units, special task hardware, not a normal computer, provides shared access and guarantees absolute interchangeability of all participating computers. Being less complex, clustering hardware provides higher reliability in comparison to a standalone computer. However, a clustering configuration requires the installation of corresponding software on all of the operating systems of the linked client computers. This method provides flexible independent client computer services, but failure of the clustering hardware again causes loss of service. [0005]
  • Several similar network servers, interacting with client computers, may provide identical service and data access to every client computer. Data replication at every network server together with identical service, independent of the location of the client computer and service center, may be regarded as the easiest solution to this problem. However, some inconveniences, such as complex data synchronizing processes, remain. [0006]
  • Another solution to the problem of file distortion or the inability to gain access to files is the creation of customized distributed data storage. Service distribution implies that all service processes of the operating system are performed at the network nodes (servers) instead of at a local computer. Such service distribution reduces response time and improves provider-to-client channel capacity. Simultaneously, this distribution solves the problem of limited single network server processor power, because, for example, a service request can be processed by a larger number of computers. All of the incoming requests are handled by a larger number of network servers. Thus, thanks to request distribution, network server overloading is decreased even when requests cannot be processed in parallel on a single cluster node. Customized distributed data storage enhances the service fault-tolerance level. Specifically, when a network server fails or the network is inaccessible, a client computer may switch over to a similar network server and receive the same service. The symmetry of the network servers in the computer network determines service availability. [0007]
  • Such customized distributed data storage service requires distributed data storage to enforce symmetry of services provided for client computers. There is a need for the development of special-purpose distribution and storage algorithms to yield optimum distributed data storage with respect to both data content and resource requirements. Such algorithms would maintain consistent network server content at the different network servers in a computer network to provide service symmetry to client computers. [0008]
  • Currently available methods and algorithms for distributed data storage are complex. The data duplication or mirroring approach is frequently used, in which the server at every network node possesses a complete copy of all stored data files. Mirroring systems of FTP servers have been arranged in such a manner, as discussed in the following references (See U.S. Pat. No. 5,835,911, Nakagawa; U.S. Pat. No. 5,434,994, Shaheen; U.S. Pat. No. 5,155,847, Kirouac; U.S. Pat. No. 5,742,792, Yanai). [0009]
  • Regular network data systems, such as NFS (Network File System) [See BRIAN PAWLOWSKI, NFS VERSION 3 DESIGN AND IMPLEMENTATION (USENIX Summer 1994)] for UNIX (developed by Sun Microsystems), usually include a pre-defined network server and client computers that access that particular network server to obtain a necessary data file. Such network data file systems are generally used with a minimum number of network servers (See U.S. Pat. No. 5,513,314, Kandasamy, et al.). [0010]
  • Network distributed file systems are arranged in a more complicated manner. Such network distributed file systems generally permit users to work with the distributed file system as a whole (not with just a selected server as in the NFS case) in a shared uniform namespace, regardless of whether a specific file server is accessible. Namespace is a collection of unique names, where a name is an arbitrary identifier, usually an integer or a character string. Usually the term “name” is applied to such objects as files, directories, devices, computers, etc. [0011]
  • Another approach to creating a distributed data file storage access model is a hierarchical system of file naming combined with local data caching on the client computer. Transarc Corporation's (now IBM Transarc Labs) AFS [See RICHARD CAMPBELL et al. MANAGING AFS: THE ANDREW FILE SYSTEM (Prentice Hall 1997) ISBN 0138027293] and Coda [See P. J. Braam, The Coda Distributed File System (#74, Linux Journal #50 June 1998); M. SATYANARAYANAN, CODA: A HIGHLY AVAILABLE FILE SYSTEM FOR A DISTRIBUTED WORKSTATION ENVIRONMENT (Proceedings of the Second IEEE Workshop on Workstation Operating Systems September 1989)] systems are examples of such distributed data file storage systems. For optimal data access, these distributed data file storage systems intensively cache data at the local file system of a client computer and fully utilize this cache to reduce the number and size of requests to the system file server. [0012]
  • AFS transmits all of the data file requests to the system file server (even files within the cache of a local data file system) but permits access to the data file requests only after it is determined that the data files were not altered after the copying process was finished. In case of file server disconnection, AFS usually does not allow data file access. Coda, in contrast, assumes that such data files tend to stay intact, and permits working on these data files without complete recovery of the file server connection. The fault tolerance level under this approach is higher than with the regular use of pre-defined network servers, which requires being permanently online. However, such an approach permits several client computers to concurrently access the same data file, with the potential for errors. [0013]
  • Both the AFS and the Coda approaches cache entire data files and possess multiple file copies with various modifications. The possession of multiple file copies with various modifications complicates the efficiency of file system support for data coherence. Moreover, access to data files outside the cache is possible only after those data files have been fully loaded to the cache. Thus, in the model when different data is stored at different servers, data accessibility levels can be susceptible to failure in case of a server disconnection. [0014]
  • The namespace of these AFS and Coda file systems is hierarchical; that is, it stems from a shared point, i.e., the root of a data file system. Nevertheless, every AFS/DFS/Coda name corresponds to a specific file server. Loss of a specific file server results in loss of access to certain data files. When this occurs, data files get split apart. A special function is used to search the namespace, recognize the server, and access the data files. Thus, potential file interchangeability exists, for example, by directly substituting another file for a data file which is not found. But, even if properly organized, such a system does not offer any improvement in fault tolerance level. [0015]
  • Distributed access to data files may also be achieved by a distributed storage of network data blocks, rather than distributed storage of entire data files. In this approach, the file system is built over such a set of network data blocks. The server software emulates a powerful virtual low-level disk which is accessible by software running on the client's computer. A regular data file system is built up over the storage of network data blocks as if it was working with a local disk. If there is a need to synchronize records in the same network data blocks, e.g., when two independent client computers request write access to the directory, special locking algorithms would be required. Such a distributed data storage system would be rather expensive with respect to both scalability and efficiency. [0016]
  • Another method of data storage distribution, RAID Level 5 [See GREGORY F. PFISTER, IN SEARCH OF CLUSTERS (Prentice Hall 1998) ISBN 0138997098], allows data acquisition even if a server or disks containing data are not accessible. RAID Level 5 is extensively used to deliver higher fault-tolerance efficiency of data files stored on disk. Using a similar algorithm, the Serverless File System [See TOM ANDERSON et al., SERVERLESS NETWORK FILE SYSTEMS (15th Symposium on Operating Systems Principles, ACM Transactions on Computer Systems 1995)] was developed at UC-Berkeley. The Serverless File System uses a group of network servers rather than a single dedicated server. The Serverless File System is based on distributed storage of data blocks, wherein a RAID algorithm can successfully restore every data block (provided at most one server stops at a time). According to the Serverless File System, the file system asymmetrically divides supporting data blocks between different network servers and possesses two different states: a normal state when all the network servers are accessible, and a failure state when a special recovery procedure is required for an unavailable network server. The system does not allow use of network servers with unequal efficiency and connection quality, since data accessibility depends on access to all of the network servers. [0017]
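
As background, the parity idea behind the RAID Level 5 scheme referenced above can be shown with a tiny single-stripe sketch: the parity block is the bytewise XOR of the data blocks, so any single missing block can be recomputed from the survivors. This is a simplification for illustration, not a full RAID implementation, and the block contents are made up.

```python
from functools import reduce

# Simplified illustration of the RAID Level 5 idea: a parity block is the
# XOR of the data blocks in a stripe, so any single missing block (data or
# parity) can be recomputed from the others.

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

stripe = [b"AAAA", b"BBBB", b"CCCC"]           # data blocks of one stripe
parity = xor_blocks(stripe)                    # stored on a rotating server/disk

# suppose the server holding block 1 becomes inaccessible:
recovered = xor_blocks([stripe[0], stripe[2], parity])
assert recovered == b"BBBB"
```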
  • All file system developers inevitably come across the problem of dynamic file content changes. It is well known that almost all data storage files eventually require some content changes. Various methods of changing data file content have been proposed to solve this problem. The most common method of providing for content changes in data storage files includes changing the file content at the file location, i.e. in the file system. Most of the old MSDOS and UNIX operating systems are arranged in such a manner. Changing the data file content at the location of the file has certain disadvantages, since any errors made during file recording can influence the content of the data file. For instance, if the computer stops working while a data file is being recorded, the file will be irreparably damaged or irretrievably lost. Thus, it is preferable to have an operating system with unmodifiable files of a fixed size and location. [0018]
  • To solve the data file modification problem, some systems support different versions of the same file. The VAX VMS file system [See KIRBY MCCOY, VMS FILE SYSTEM INTERNALS (Digital Press 1990) ISBN 1555580564] records every data file modification as a whole data file under a new name, while keeping the previous version of that data file accessible. Then every data file modification, or version, is sent to the data file directory. The data file versions share the same data file name, but differ in data file numbering, temporally ranked during the process of data file modification. FIG. 4 illustrates prior art data file storage 90 in the form of versions 100 ranked by time. The new version 110 goes in full to data storage 120 after the file has been edited 130. Of course, this method of data storage yields numerous, virtually redundant, data file copies. Moreover, this data file modification method is very inefficient in that the operating system first reads the final file modification and then saves it to a new location, thus requiring disk space and disk I/O bandwidth nearly equal to the size of a doubled file. [0019]
  • Recording all changes to a data file in a special journal is another potential solution to the problem of data file system development. As later discussed, this technique was developed for databases to assure data safety and accessibility to data files in case of system failure. In this approach, changes to a data file are recorded in a special standard form usually called a log. From that log, records are gradually put into the current data file. FIG. 3 illustrates the process by which discrete changes 80 a, 80 b and 80 c to the original data file are entered in the log, and then step-by-step copied to file 60. Such a transactional method reveals either all the changes to a data file or none of them, with no intermediate positions. The log contains a detailed indivisible stream of structured changes to every file. Data file systems based on this method are characterized by fast failure recovery. Changes to the data file system are highly coherent, and it is not necessary to check all available data to assure data file system consistency. This method, however, does not permit recording variances, as contrasted with an undo/redo log recording database technique. [0020]
  • What is needed is a fault tolerant data storage system which will optimize distributed data storage with respect to both data content and resource requirements. The same content should be available at different servers in order to provide client computer symmetry and promote data synchronization. [0021]
  • SUMMARY
  • The present invention provides a system and method for fault tolerant distributed data file storage over a highly scalable set of functionally equal network servers which will optimize distributed data storage with respect to both data content and resource requirements. Specifically, the same data content is available when accessing different network servers to provide client computer symmetry. The network servers are linked through a client-server model via a local computer network, wherein each network server supports some set of network services and is ranked according to available capacity and accessibility. [0022]
  • The highly scalable distributed transactional network data storage system of the present invention functions at the data file level, with a data file being the information unit for both the network server and the user of the data storage file. According to the present invention, a special file disassembly/assembly procedure is introduced. Data file disassembly assures data availability, with a data file being disassembled into redundant, functionally identical data file pieces. Data file reassembly is dependent only upon the number of data file pieces and not on the presence or absence of any data file piece in particular. A set of data file pieces is stored at a set of separate network servers. Initial data redundancy and functional equality of data file pieces assures that data file reassembly is independent of access to any particular network server. The highly scalable distributed transactional network storage system of the present invention utilizes strictly local algorithms which control network server selection for connection to local client computers, selecting the network server which is least loaded and most accessible. [0023]
  • Organization of the data file storage system is based upon two file classes: regular data files and directory files containing directory and other possible data necessary for translation of a data file pathway. The regular data files utilize common namespace which is accessible via typical data file pathname. The directory file is used for translation of file requests originated from local client computers from logical data file names to internal unique data file identifiers. The totally local algorithms generate unique data file identifiers upon data file creation and disassembly. [0024]
  • The fault tolerance level is determined by the degree of redundancy which is built into the running system. The predetermined data file piece redundancy volume is based upon prospective data file stability. In the present invention, changes to every data file are stored as separate records with unique transaction identifiers in addition to the unique data file identifiers. [0025]
  • The implementing software, constructed and arranged to run on client computers to enable such data file storage, consists of two subsystems: one subsystem for the computer's local data file system, and the other subsystem for the distributed data network. Changes to a data file are recorded to a local data storage file, together with the time each change was recorded. The software running on a local client computer generates a transaction identifier and a separate low-level data file to store all of the data file changes and make a transactional record. The transactional record is recorded by disassembling the low-level data file into pieces which are stored at the network servers by the network part of the local client software. [0026]
  • The software on the local client computer records any attempt from the local operating system processes to read the data file from network storage and sends a request to any network servers to locate this file. If this data file exists and has a unique data file identifier as determined by the directory service, the software on the local client computer requests the storage file data and obtains the list of the data file transactions for a period of time. Then the software running on the local client computer receives the piece of the data file associated with these transactions and collects the low-level transactional files in order to assemble the original contents of the data file. The local operating system where the software running on the local client computer is installed continues working with the assembled file as if the file had always existed there.[0027]
  • BRIEF DESCRIPTION OF THE DRAWING FIGURES
  • A better understanding of the distributed transactional network storage system and method of the present invention may be had by reference to the drawing figures wherein: [0028]
  • FIG. 1 is a schematic illustration of a prior art method to provide split access to a data file located at a network file server; [0029]
  • FIG. 2 is a schematic illustration of data file disassembly into redundant pieces and assembly of the original from a certain number of data file pieces; [0030]
  • FIG. 3 is a schematic illustration of the prior art storage method of step-by-step data file changes in the log and their further recording into the original data file; [0031]
  • FIG. 4 is a schematic illustration of prior art data file storage in the form of versions ranked by order of creation where the new version (6) goes in full to the storage after the data file has been edited, while some old version (2) can be purged out of storage; [0032]
  • FIG. 5 is a schematic illustration of a service system with no dedicated computer; [0033]
  • FIG. 6 is a schematic illustration of data file storage in its initial form including a set of transactional changes; and [0034]
  • FIG. 7 is a schematic illustration of a file search procedure to locate a unique identifier by its logical name (pathname traverse procedure).[0035]
  • DESCRIPTION OF THE EMBODIMENTS
  • The present invention relates to a highly scalable distributed transactional network storage system and method, which is intended for storage of information or data in a local network with a common namespace, guaranteed access level, and no dedicated network servers. [0036]
  • Local network as used herein means a regular local computer network installed at an office or at a data center. Such a regular local computer network usually consists of several standard network servers that are completely interchangeable with respect to service functioning. Access to network servers is based on a regular client-server model, i.e., the software installed on a local client computer provides access to the data storage files through connection to one of the network servers. All the network servers are equal in rights as far as data file request processing is concerned, i.e., to obtain information, the local client computer may link to any network server, selecting the one which is least loaded and most accessible. The set of network servers connected via the local network is called a cluster. [0037]
  • The highly scalable distributed transactional network storage system of the present invention functions at a data file level, i.e., a data file represents the information unit for both the network server and the user of the stored data. The data availability level is guaranteed by the data file disassembly/assembly procedure. A data file destined for storage is first disassembled into pieces in such a way that it can later be re-assembled from these data file pieces. Technically, this procedure is not just a splitting of a data file from one piece into several pieces. Each data piece is formed as a result of a complex generation procedure. The only requirement for these data file pieces is that there should be some assembling procedure which takes some of the generated data file pieces and then restores the original file as a whole. Assembly of a usable data file may require fewer than all of the data file pieces available. To correctly assemble the source file, the data file pieces must be functionally identical, such that proper assembly of a usable data file only depends on the number of data file pieces and not on any data file piece in particular. As shown in FIG. 2, a data file 40 is disassembled into redundant data file pieces 50 a through 50 n, and then properly assembled 55 from the combination of a certain number of data file pieces 50 a through 50 n. [0038]
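
The property stated above, that reassembly depends only on how many pieces survive and not on which ones, is what an (n, k) erasure code provides. The patent does not prescribe a particular disassembly procedure; the sketch below uses polynomial (Reed-Solomon-style) coding over GF(257) purely as one possible concrete instance, with every name chosen for illustration.

```python
# A minimal sketch of the disassembly/assembly property: a data file is
# turned into n functionally identical pieces, and ANY k of them restore
# the original.

P = 257  # prime field, large enough to hold any byte value 0..255

def _eval_at(points: dict, t: int) -> int:
    """Evaluate at x = t the unique degree-(k-1) polynomial over GF(P)
    passing through `points` (a dict x -> y), using the Lagrange form."""
    total = 0
    for xi, yi in points.items():
        num = den = 1
        for xj in points:
            if xj != xi:
                num = num * (t - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P   # den^-1 via Fermat
    return total

def disassemble(data: bytes, n: int, k: int):
    """Split `data` into n redundant pieces; any k of them rebuild it."""
    padded = data + b"\x00" * (-len(data) % k)
    pieces = {x: [] for x in range(1, n + 1)}                 # piece id -> symbols
    for i in range(0, len(padded), k):
        chunk = padded[i:i + k]
        pts = {j + 1: chunk[j] for j in range(k)}             # data bytes sit at x = 1..k
        for x in pieces:
            pieces[x].append(_eval_at(pts, x))
    return len(data), pieces

def assemble(length: int, k: int, subset: dict) -> bytes:
    """Rebuild the original bytes from any k surviving pieces."""
    chosen = dict(list(subset.items())[:k])
    out = bytearray()
    for column in zip(*chosen.values()):                      # one symbol per piece, per chunk
        pts = dict(zip(chosen.keys(), column))
        out.extend(_eval_at(pts, t) for t in range(1, k + 1))
    return bytes(out[:length])

original = b"highly scalable distributed transactional storage"
length, pieces = disassemble(original, n=5, k=3)
survivors = {x: pieces[x] for x in (2, 4, 5)}                 # two servers are unreachable
assert assemble(length, 3, survivors) == original
```

A production system would rely on a vetted erasure-coding library and larger symbols, but the behaviour the description relies on is visible here: any three of the five generated pieces restore the file.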
  • In the storage process, each stored data file piece is sent to a separate network server by the distributing network server. Under these conditions, switching off some of the servers does not impact data accessibility, provided that the remaining network servers hold a sufficient number of data file pieces. Initial data redundancy assures successful assembly, and the equivalence of the data file pieces makes assembly independent of access to any particular network server. [0039]
  • The scalability and fault tolerance of such a data storage system are determined by multiple factors, particularly the algorithms applied on all of the network servers. All of the algorithms functioning in such an interconnected network server design must be of a local nature, i.e., the system does not maintain a complete list of all of its network servers. The same is true of data file naming: there is no central location, such as a catalogue, for verifying name uniqueness. This approach promotes growth and self-organization of the system, since the addition or deletion of network servers influences only the neighboring network servers, not the whole system. Thus, each network server contains and supports a dynamic list of its neighboring network servers, which is smaller than the total number of network servers and is able to evolve over time. [0040]
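The sketch below illustrates one way such a bounded, dynamic neighbor list might look. The eviction policy and the bound of 16 neighbors are assumptions made for illustration only.

```python
from typing import Dict


class NeighborList:
    """A bounded, dynamic view of nearby servers; no global membership table."""

    def __init__(self, self_id: str, max_neighbors: int = 16):
        self.self_id = self_id
        self.max_neighbors = max_neighbors
        self.neighbors: Dict[str, float] = {}     # server id -> last-seen timestamp

    def observe(self, server_id: str, now: float) -> None:
        """Record contact with another server, evicting the stalest entry if full."""
        if server_id == self.self_id:
            return
        self.neighbors[server_id] = now
        if len(self.neighbors) > self.max_neighbors:
            stalest = min(self.neighbors, key=self.neighbors.get)
            del self.neighbors[stalest]

    def drop(self, server_id: str) -> None:
        """Forget a neighbor that has become unreachable."""
        self.neighbors.pop(server_id, None)
```

Because each server tracks only a small neighborhood, adding or removing a server changes only the lists of its neighbors, not any global structure.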
  • To access the data storage system, the local client computer connects to any network server. The network servers all function identically, making data file access independent of any particular network server. FIG. 5 illustrates a service system 300 with no single dedicated network server computer, whereby the local client computer 310 is able to connect to any network server 320 to obtain any data file. The connection algorithm reads the loading information for the network servers 320 and selects the least loaded network server 320 for the connection. [0041]
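A minimal sketch of the connection choice follows. The load probe is a hypothetical call returning a load figure, with None standing for an unreachable server; the names are illustrative.

```python
from typing import Callable, Iterable, Optional


def pick_server(servers: Iterable[str],
                probe_load: Callable[[str], Optional[float]]) -> str:
    """Return the reachable network server reporting the lowest load."""
    loads = {}
    for host in servers:
        load = probe_load(host)        # hypothetical probe; None means unreachable
        if load is not None:
            loads[host] = load
    if not loads:
        raise ConnectionError("no network server is currently reachable")
    return min(loads, key=loads.get)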
  • In order to organize data file storage over the network server system, all of the data files are divided into two classes: regular data files, and directory files containing directory entries and any other data necessary for translating a file pathname. [0042]
  • For regular data files, a namespace which is common to all of the network servers is introduced. The client computer may access a data file by specifying its name and its path from the root directory. The path depends neither on the location of the local client computer nor on the network server to which the local client computer is connected. [0043]
  • Namespace is a collection of unique names, where a name is an arbitrary identifier, usually an integer or a character string. [See CHARLES CROWLEY, OPERATING SYSTEMS: A DESIGN-ORIENTED APPROACH (Irwin, 1997) ISBN 0256151512]. Usually the term “name” is applied to such objects as data files, directories, devices, computers, etc. More information about typical distributed data file system name space and related problems can be found in the references that follow [See R. KUMAR, OSF's DISTRIBUTED COMPUTING ENVIRONMENT (Aixpert, IBM Corporation, Fall 1991); G. LEBOVITZ, AN OVERVIEW OF THE OSF DCE DISTRIBUTED FILE SYSTEM (Aixpert, IBM February 1992); The Distributed File System (DFS) for AIX/6000 (IBM May 1994) Doc. No. GG24-4255-00; W. ROSENBERRY, et al. UNDERSTANDING DCE (O'Reilly & Associates, Inc. September 1992)]. [0044]
  • Using directory file information, it is possible to determine how to assemble the data files requested by a local client computer. Directory file information is used to translate a local client computer's request for a logical file name into the internal identifier used to acquire the data file contents. This procedure is applied at every subdirectory level. [0045]
  • FIG. 7 illustrates the data file search procedure used to locate a unique data file identifier “C” by its logical name, according to the data file path “/aaa/bbb/c”. First, the root directory file 200 must be located. Then the record corresponding to the aaa file 230 is identified and confirmed to be a pointer to a directory file 250. The same procedure takes place for the “bbb” file 240. After both steps have been completed, the “C” file is located via pointer 250. [0046]
  • Thus, the directory represents a set of records corresponding to data files. At a minimum, each record contains a logical file name and a unique identifier corresponding to it. [0047]
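The traverse procedure of FIG. 7 can be sketched as follows, assuming that each assembled directory file yields a mapping from logical names to (identifier, is-directory) pairs; read_directory is a hypothetical helper that locates and assembles a directory file by its identifier.

```python
from typing import Callable, Dict, Tuple

DirEntry = Tuple[str, bool]            # (unique file identifier, is_directory)


def resolve(path: str, root_id: str,
            read_directory: Callable[[str], Dict[str, DirEntry]]) -> str:
    """Translate a full pathname such as '/aaa/bbb/c' into a unique file identifier."""
    current_id = root_id
    parts = [p for p in path.split("/") if p]
    for depth, name in enumerate(parts):
        entries = read_directory(current_id)      # assemble the current directory file
        if name not in entries:
            raise FileNotFoundError(path)
        current_id, is_dir = entries[name]
        if depth < len(parts) - 1 and not is_dir:
            raise NotADirectoryError("/".join(parts[:depth + 1]))
    return current_id
```

Each hop repeats the same two steps shown in FIG. 7: assemble the current directory file, then follow the record matching the next path component.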
  • All data files, including directory files, are viewed by the system as equal and possess file identifiers that are unique across the cluster and are used for assembly. Any network server requesting access to a directory as described above may be considered a client computer of this directory service. [0048]
  • The unique data file identifier is generated at the moment a data file is created. The uniqueness of the data file identifier is achieved using entirely local algorithms and does not require confirmation by any other party. [0049]
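Such coordination-free identifiers can be produced, for example, from a random 128-bit value; the helper name below is an illustrative assumption.

```python
import uuid


def new_file_identifier() -> str:
    """Generate a cluster-unique file identifier using only local state."""
    return uuid.uuid4().hex            # probabilistically unique; no catalogue lookup
```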
  • To start working with the disclosed transactional network storage system, the local client computer connects to a network server as described above and sends a request for a data file operation. Consider the file change recording operation (file write). First, the local client computer creates a low-level data file containing the change record, then disassembles the low-level data file into data pieces and sends the data pieces to a network server. That network server sends the data file pieces to the other network servers in its group, and the neighboring network servers forward the data file pieces further, until all of the data pieces have been placed on network servers (with at least one data piece at each network server). During disassembly of a data file, a unique identifier is generated in order to identify the disassembled data file pieces in the future. [0050]
  • To read a data file, the local client computer connects to any network server and sends a request containing the full file name with its access pathname. The network server translates the data file name into a unique identifier using directory information and retrieves information about which data file pieces are sufficient to reassemble all of the low-level data files required to assemble the original data file. The network server first checks the availability of the data file pieces locally, and requests data file pieces from the other network servers if the number of data pieces is insufficient. The network server collects the data pieces required for file assembly and sends them to the requesting local client computer. Only then may the client computer assemble the original data file. [0051]
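Putting the write and read paths together, the following self-contained sketch uses an in-memory stand-in for the cluster. The Server class, the round-robin piece placement, the "any k pieces" threshold, and the use of a random identifier are illustrative assumptions; the actual disassembly and assembly of pieces is treated as a black box here.

```python
import uuid
from typing import Dict, List


class Server:
    """In-memory stand-in for one network server holding data file pieces."""

    def __init__(self, name: str):
        self.name = name
        self.pieces: Dict[str, Dict[int, bytes]] = {}   # file id -> {piece index: piece}

    def store_piece(self, file_id: str, index: int, piece: bytes) -> None:
        self.pieces.setdefault(file_id, {})[index] = piece

    def collect_pieces(self, file_id: str, needed: int,
                       neighbors: List["Server"]) -> Dict[int, bytes]:
        """Gather pieces locally first, then from neighbors, until enough are found."""
        found = dict(self.pieces.get(file_id, {}))
        for neighbor in neighbors:
            if len(found) >= needed:
                break
            found.update(neighbor.pieces.get(file_id, {}))
        if len(found) < needed:
            raise IOError(f"only {len(found)} of {needed} required pieces reachable")
        return found


def write_pieces(pieces: Dict[int, bytes], servers: List[Server]) -> str:
    """Place each generated piece on a server; returns the locally generated file id."""
    file_id = uuid.uuid4().hex
    for index, piece in pieces.items():
        servers[index % len(servers)].store_piece(file_id, index, piece)
    return file_id


cluster = [Server(f"server-{i}") for i in range(4)]
file_id = write_pieces({0: b"p0", 1: b"p1", 2: b"p2", 3: b"parity"}, cluster)
collected = cluster[0].collect_pieces(file_id, needed=3, neighbors=cluster[1:])
assert len(collected) >= 3             # enough pieces for the client to assemble
```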
  • The fault tolerance level (accessibility of data despite network server disconnection or network access failure) is determined by the degree of redundancy built into the network data storage system. Data file pieces are created with a predetermined redundancy and placed at different network servers, so the inaccessibility of some network servers does not affect data file assembly or accessibility to the local client computer, as long as the number of accessible network servers exceeds some pre-defined number. For example, if a data file is stored as ten pieces of which any six suffice for assembly, up to four of the network servers holding its pieces may become inaccessible without affecting access to the file. The redundancy volume is determined at the moment the data file is stored and depends on the prospective stability of the data file storage. [0052]
  • The disclosed system and method for data storage is convenient for working with unmodified data files. The algorithm for storing a data file as data pieces depends strongly on the contents of the data file: even a slight change to the data file may require regenerating all of the data file pieces, which is expensive and inefficient. [0053]
  • The problem of having to regenerate all data file pieces can be solved by a data file storage system which orders data file changes in time, with possible overlapping. Each change to the contents of a data file is represented as a set of triplets: the offset from the beginning of the data file, the data length, and the data itself. Each change to the data file or its metadata is arranged in the form of a separate record, so the physical data file is stored as a series of records, each of which is regarded as a low-level unmodified data file. A unique transaction identifier, introduced in addition to the unique data file identifier, distinguishes the records and serves as a timing mark establishing the “before-after” relationship between the identifiers and their times of creation. Determining the state of the data file at a particular moment in time requires the availability of all of the transactions related to that data file whose time of creation is less than or equal to the requested time. As shown in FIG. 6, the data file is stored in its initial form 180, with the set of transactional changes 190 a, 190 b, 190 c kept as separate records rather than written into it. Each state of the data file is thus available at any point in time. [0054]
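A minimal sketch of this record model follows, assuming Python dataclasses for the triplets and a numeric timestamp as the timing mark; the class and function names are illustrative.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Change:
    """One (offset, length, data) triplet; the length is implied by the data."""
    offset: int
    data: bytes


@dataclass(order=True)
class Transaction:
    """A set of changes stamped with a timing mark for before/after ordering."""
    timestamp: float
    changes: List[Change] = field(compare=False, default_factory=list)


def state_at(initial: bytes, transactions: List[Transaction], t: float) -> bytes:
    """Rebuild the file state at time t by replaying transactions created at or before t."""
    contents = bytearray(initial)
    for txn in sorted(tx for tx in transactions if tx.timestamp <= t):
        for change in txn.changes:
            end = change.offset + len(change.data)
            if end > len(contents):                 # a change may extend the file
                contents.extend(b"\0" * (end - len(contents)))
            contents[change.offset:end] = change.data
    return bytes(contents)
```

Because the initial form and every transaction record are stored as unmodified low-level files, any past state of the file can be recovered simply by choosing a different value of t.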
  • Client software for such storage consists of two parts: one that works with the computer's local file system and one that handles the distributed data network communications. [0055]
  • The software running on the local client computer records information to a local data file and saves data about the recording, including the time it was recorded. When a transaction ends, the software running on the local client computer generates a transaction identifier and a separate low-level data file that stores all of the changes to the data file, forming a transactional record. One transaction can contain data for different files. The transactional files are sent to the network part of the client software and are recorded by disassembling each file into data pieces, which are placed at the network servers. [0056]
  • The software running on the local client computer hooks any attempt by the local programs and services of the local operating system to read the stored data file and sends a request to the network server to locate this data file. If this data file exists and has a unique data file identifier as determined by the directory service, the software running on the local client computer requests the storage file or files and obtains the list of the file's transactions for a period of time. The software running on the local client computer then receives the data file pieces associated with these transactions and collects the low-level transactional data files in order to assemble the original data file contents. The programs and services of the local operating system where the client software is installed continue working with the assembled data file in the local file system as if the data file had always existed there. Thus, the software running on the local client computer provides additional network functionality, data integrity, and accessibility on top of the local data file system. [0057]
  • While the present system has been disclosed according to its preferred and alternate embodiments, those of ordinary skill in the art will understand that other embodiments have been enabled by the foregoing disclosure. Such other embodiments shall be included within the scope and meaning of the appended claims. [0058]
Bibliography
  • 1. KUMAR, R., OSF's DISTRIBUTED COMPUTING ENVIRONMENT, Aixpert, IBM Corporation, at 22-29 (Fall 1991). [0059]
  • 2. LEBOVITZ, G., AN OVERVIEW OF THE OSF DCE DISTRIBUTED FILE SYSTEM, Aixpert, IBM, at 55-64 (February 1992). [0060]
  • 3. THE DISTRIBUTED FILE SYSTEM (DFS) FOR AIX/6000, IBM, Doc. No. GG24-4255-00, at 1-15 (May 1994). [0061]
  • 4. ROSENBERRY, W., ET AL., UNDERSTANDING DCE, at 6-100 (O'Reilly & Associates, Inc. September 1992). [0062]
  • 5. CROWLEY, CHARLES, OPERATING SYSTEMS: A DESIGN-ORIENTED APPROACH (Irwin, 1997) ISBN 0-256-15151-2. [0063]
  • 6. BACH, MAURICE J. et al., DESIGN OF THE UNIX OPERATING SYSTEM (Prentice Hall 1st ed. Feb. 27, 1987); ISBN: 0132017997. [0064]
  • 7. THE DESIGN AND IMPLEMENTATION OF THE 4.4BSD OPERATING SYSTEM, (UNIX AND OPEN SYSTEMS SERIES) (Marshall Kirk McKusick, et al. eds., Addison-Wesley Pub Co.) ISBN: 0201549794. [0065]
  • 8. BEHELER, ANN, DATA COMMUNICATIONS AND NETWORKING FUNDAMENTALS USING NOVELL NETWARE (4.11) (Prentice Hall 1998) ISBN 0135920078. [0066]
  • 9. DAVIS, ROY G., VAXCLUSTER PRINCIPLES (Digital Press) ISBN 1-55558-112-9. [0067]
  • 10. U.S. Pat. No. 5,835,911 Nov. 10, 1998 Nakagawa, et al. 707/203 [0068]
  • 11. U.S. Pat. No. 5,434,994 Jul. 18, 1995 Shaheen, et al. 709/223 [0069]
  • 12. U.S. Pat. No. 5,155,847 Oct. 13, 1992 Kirouac, et al. 709/221 [0070]
  • 13. U.S. Pat. No. 5,742,792 Apr. 21, 1998 Yanai, et al. 711/162 [0071]
  • 14. PAWLOWSKI, BRIAN, NFS VERSION 3 DESIGN AND IMPLEMENTATION (USENIX Summer 1994). [0072]
  • 15. U.S. Pat. No. 5,513,314 Apr. 30, 1996 Kandasamy, et al. 714/6 [0073]
  • 16. PFISTER, GREGORY F., IN SEARCH OF CLUSTERS (2d ed., Prentice Hall 1998) ISBN 0-13-899709-8. [0074]
  • 17. CAMPBELL, RICHARD & CAMPBELL, ANDREW, MANAGING AFS: THE ANDREW FILE SYSTEM (Prentice Hall 1997) ISBN 0138027293. [0075]
  • 18. BRAAM, P. J., THE CODA DISTRIBUTED FILE SYSTEM (#74), Linux Journal, No. 50 (June 1998). [0076]
  • 19. SATYANARAYANAN, M., CODA: A HIGHLY AVAILABLE FILE SYSTEM FOR A DISTRIBUTED WORKSTATION ENVIRONMENT (#13), Proceedings of the Second IEEE Workshop on Workstation Operating Systems, Pacific Grove, Calif. (September 1989). [0077]
  • 20. ANDERSON, TOM, DAHLIN, MICHAEL, NEEFE, JEANNA, PATTERSON, DAVID, ROSELLI, DREW & WANG, RANDY, SERVERLESS NETWORK FILE SYSTEMS, 15th Symposium on Operating Systems Principles, ACM Transactions on Computer Systems (1995). [0078]
  • 21. MCCOY, KIRBY, VMS FILE SYSTEM INTERNALS (Digital Press 1990) ISBN 1-55558-056-4. [0079]

Claims (15)

What is claimed is:
1. A system of distributed file storage, comprising:
a local computer network with network servers and client computers;
a plurality of functionally equal network servers linked together by said local network with said network servers and said client computers and ranked according to available capacity and accessibility;
said plurality of functionally equal network servers being organized into a plurality of groups, where each said server can participate in several different groups and said servers inside each said group are considered neighbors;
a pre-defined set of network services supported by each of said plurality of functionally equal network servers;
a plurality of client computers utilizing said local network with network servers and client computers;
software constructed and arranged to run on a local client computer to enable distributed data file storage;
software constructed and arranged to run on a network server computer to enable distributed data file storage;
a common file namespace in the form of a tree with a shared root;
directory files and common data files within said common file namespace;
wherein accessibility to stored data does not depend upon dedicated access to any particular member of said plurality of functionally equal network servers, but rather depends only upon access to a pre-defined quantity of network servers from said plurality of functionally equal network servers.
2. The system of claim 1, wherein said software constructed and arranged to run on a local client computer for distributed file storage traces changes to said local data file system, puts said changes into the form of transaction records, and verifies said local data file content to be consistent with the content of the files stored in said distributed data file storage in case of local data file open-and-create requests.
3. The system of claim 1, wherein said software is constructed and arranged to run on a local client computer to be used for said distributed file storage communications with said plurality of functionally equal network servers, records transactions made into said distributed data file storage, and reads and assembles data files into the local file system of a client computer.
4. A method of providing distributed file storage, comprising the steps of:
utilizing a local network with network servers and client computers;
establishing a plurality of functionally equal network servers;
linking said plurality of functionally equal network servers together by said local network with network servers and client computers;
ranking said plurality of functionally equal network servers according to available capacity and accessibility;
supporting a pre-defined set of network services at each of said plurality of functionally equal network servers;
establishing a plurality of client computers utilizing said local network with network servers and client computers;
utilizing software constructed and arranged to run on a local client computer for distributed data file storage;
utilizing software constructed and arranged to run on a network server computer for distributed data file storage;
establishing a common file namespace in the form of a tree with a shared root; and
establishing directory files and common files within said common file namespace;
whereby accessibility to said directory files and said common files does not require dedicated access to any particular member of said plurality of functionally equal network servers, but rather depends only upon access to a pre-defined quantity from said plurality of functionally equal network servers.
5. The method of claim 4, further including the step of:
providing client file access through the member of said plurality of functionally equal network servers ranked highest with respect to available capacity and accessibility, where the client may be on a client computer or may initiate the request from a network server computer.
6. The method of claim 5, wherein the step of providing client file access further includes the steps of:
requesting a data file by its full name in said common file namespace from any network server;
requesting information about the availability of a plurality of file transaction record pieces necessary for file data assembling;
collecting a plurality of data file pieces for each transaction record;
checking the presence of said data file pieces in the local cache;
checking the presence of said data file pieces in neighboring servers;
sending said data file pieces from a neighboring server to a requesting network server;
sending said data file pieces from said requesting network server to at least one client computer;
assembling said data file pieces into a requested transactions file on a client computer; and
assembling said transactions into a requested file.
7. The method of claim 4, wherein the step of establishing directory files and common files within a common file namespace, further includes the steps of:
assigning a unique data file identifier to each of said directory files and said common files, wherein said unique data file identifier is unique across said local network with network servers and client computers and independent of any particular member of said plurality of functionally equal network servers; and
translating the full pathname of said common files or directory files within said common file namespace into said unique file identifiers by a traverse procedure that follows said directory file data along a logical pathname.
8. The method of claim 4, wherein the step of establishing directory files and common files within a common file namespace, further includes the step of:
creating a representation of the file in the form of transaction records;
creating a procedure to disassemble said transaction records based upon a predetermined fault tolerance level; and
disassembling each of said transaction records according to said procedure.
9. The method of claim 8, wherein the step of creating a procedure to disassemble each of said transaction records based upon a predetermined fault tolerance level further includes the steps of:
determining the size of each of said transaction records;
determining the required degree of said file redundancy; and
determining the minimum functional number of said servers based upon said predetermined fault tolerance levels.
10. The method of claim 8, wherein the step of disassembling each of said transaction records according to said procedure further includes the steps of:
disassembling each of said transaction records into a plurality of data file pieces;
distributing said plurality of data file pieces to a member of said plurality of functionally equal network servers; and
further distributing said plurality of data file pieces to neighboring members of said plurality of functionally equal network servers;
whereby said predetermined fault tolerance level is achieved and each of said files remains accessible.
11. The method of claim 4, wherein the step of utilizing client local software for distributed data file storage further includes the steps of:
tracing changes made to said local file system by local operating system processes and daemons;
putting said changes into the form of transaction records;
disassembling transaction records into data pieces; and
if the requested file does not exist in said local file system when said local file access is requested from local operating system processes and daemons, performing a search procedure using client network software to determine whether the data file is stored in the distributed file storage;
if said search procedure finds said requested file and downloads said requested file into local storage, readdressing said local data file access request to said downloaded file and continuing work in normal mode;
if said search procedure does not find said requested file, finishing said local file access request with the appropriate local operating system code denoting that the file has not been found;
if said requested file exists in said local file system when said local file access is requested from local operating system processes and daemons, performing a check of configuration conditions; after a positive response to said check, performing the same search procedure using client network software to determine whether said requested file is stored in the distributed file storage and overlaying the local file with the data stored in distributed storage; and after a negative response to said check, readdressing said local file access request to said found file and continuing work in normal mode.
12. The method of claim 4, wherein the step of utilizing client network software for distributed file storage further includes the steps of:
communicating with the network servers;
recording said disassembled transaction pieces created by said software running on said local client computer into the distributed file storage; and
searching for pieces of said transaction records of said data file inside distributed storage;
collecting said pieces of said transaction records of said data file; and
assembling said transaction records and said files and putting them into the local file system of said client computer.
13. The method of claim 4, wherein the step of utilizing client network software for distributed file storage further includes the steps of:
requesting a fault tolerance level;
sending transactional file records to the client network software;
disassembling said transactional file records into transactional data file pieces;
distributing said transactional data file pieces to a network server;
further distributing said data file pieces to neighboring servers;
whereby said requested fault tolerance level is achieved.
14. The method of claim 13, wherein said step of requesting a fault tolerance level further includes the steps of:
determining the required degree of redundancy of said transactional file record pieces;
determining the required number of said transactional file record pieces;
determining the minimum number of functional servers;
whereby all data files in storage remain accessible.
15. A method of organizing distributed file storage, comprising the steps of:
storing a data file as a series of transactions whereby each of said transactions is an incremental change which is integral and indivisible;
ordering said transactions by time;
storing each of said transactions logically as a special separate file with a unique transaction identifier;
collecting all the transactions for a certain time period;
reassembling said data file from said plurality of transactions.
US10/193,830 2001-07-11 2002-07-11 Distributed transactional network storage system Abandoned US20030041097A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US10/193,830 US20030041097A1 (en) 2001-07-11 2002-07-11 Distributed transactional network storage system
US10/293,196 US7886016B1 (en) 2001-07-11 2002-11-13 Distributed transactional network storage system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US30465501P 2001-07-11 2001-07-11
US10/193,830 US20030041097A1 (en) 2001-07-11 2002-07-11 Distributed transactional network storage system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US10/293,196 Continuation US7886016B1 (en) 2001-07-11 2002-11-13 Distributed transactional network storage system

Publications (1)

Publication Number Publication Date
US20030041097A1 true US20030041097A1 (en) 2003-02-27

Family

ID=26889395

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/193,830 Abandoned US20030041097A1 (en) 2001-07-11 2002-07-11 Distributed transactional network storage system
US10/293,196 Expired - Fee Related US7886016B1 (en) 2001-07-11 2002-11-13 Distributed transactional network storage system

Family Applications After (1)

Application Number Title Priority Date Filing Date
US10/293,196 Expired - Fee Related US7886016B1 (en) 2001-07-11 2002-11-13 Distributed transactional network storage system

Country Status (1)

Country Link
US (2) US20030041097A1 (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060253502A1 (en) * 2005-05-06 2006-11-09 Microsoft Corporation Maintenance of link level consistency between database and file system
US20070118559A1 (en) * 2005-11-18 2007-05-24 Microsoft Corporation File system filters and transactions
US20070165865A1 (en) * 2003-05-16 2007-07-19 Jarmo Talvitie Method and system for encryption and storage of information
US20100106934A1 (en) * 2008-10-24 2010-04-29 Microsoft Corporation Partition management in a partitioned, scalable, and available structured storage
WO2010048027A3 (en) * 2008-10-24 2010-06-17 Microsoft Corporation Atomic multiple modification of data in a distributed storage system
US20100235878A1 (en) * 2009-03-13 2010-09-16 Creative Technology Ltd. Method and system for file distribution
US20120176638A1 (en) * 2004-09-07 2012-07-12 Canon Kabushiki Kaisha Information processing device capable of outputing print data to print data device, and control method thereof
US20120226714A1 (en) * 2011-03-02 2012-09-06 Cleversafe, Inc. Selecting a directory of a dispersed storage network
US20140250073A1 (en) * 2013-03-01 2014-09-04 Datadirect Networks, Inc. Asynchronous namespace maintenance
US9268834B2 (en) 2012-12-13 2016-02-23 Microsoft Technology Licensing, Llc Distributed SQL query processing using key-value storage system
US20160371353A1 (en) * 2013-06-28 2016-12-22 Qatar Foundation A method and system for processing data
US20170195333A1 (en) * 2012-10-05 2017-07-06 Gary Robin Maze Document management systems and methods
US10635997B1 (en) * 2012-06-15 2020-04-28 Amazon Technologies, Inc. Finite life instances
CN112532700A (en) * 2020-11-17 2021-03-19 华帝股份有限公司 Data transmission method and related equipment
CN114741693A (en) * 2021-11-18 2022-07-12 北京珞安科技有限责任公司 Operation and maintenance system and method for safety protection
CN115292420A (en) * 2022-10-10 2022-11-04 天津南大通用数据技术股份有限公司 Method and device for rapidly loading data in distributed database
CN116149575A (en) * 2023-04-20 2023-05-23 北京大学 Server-oriented non-perception computing disk redundant array writing method and system

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6526448B1 (en) * 1998-12-22 2003-02-25 At&T Corp. Pseudo proxy server providing instant overflow capacity to computer networks
US10812590B2 (en) 2017-11-17 2020-10-20 Bank Of America Corporation System for generating distributed cloud data storage on disparate devices
US10866963B2 (en) 2017-12-28 2020-12-15 Dropbox, Inc. File system authentication
JP6949801B2 (en) * 2018-10-17 2021-10-13 株式会社日立製作所 Storage system and data placement method in the storage system
CN109548060B (en) * 2018-12-29 2022-05-13 广州敬信药草园信息科技有限公司 Processing method for abnormal disconnection of recorded broadcast network

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5555404A (en) * 1992-03-17 1996-09-10 Telenor As Continuously available database server having multiple groups of nodes with minimum intersecting sets of database fragment replicas
US6662307B1 (en) * 1993-06-14 2003-12-09 Unisys Corporation Disk recovery/reconstruction
JP3817339B2 (en) * 1997-06-26 2006-09-06 株式会社日立製作所 File input / output control method
WO1999023571A1 (en) * 1997-11-03 1999-05-14 Inca Technology, Inc. Automatically configuring network-name-services
US6119005A (en) * 1998-05-27 2000-09-12 Lucent Technologies Inc. System for automated determination of handoff neighbor list for cellular communication systems
US6308284B1 (en) * 1998-08-28 2001-10-23 Emc Corporation Method and apparatus for maintaining data coherency
US6560611B1 (en) * 1998-10-13 2003-05-06 Netarx, Inc. Method, apparatus, and article of manufacture for a network monitoring system
US6901457B1 (en) * 1998-11-04 2005-05-31 Sandisk Corporation Multiple mode communications system
JP2000261482A (en) * 1999-03-08 2000-09-22 Sony Corp Address setting method, client device, server and client server system
US6446218B1 (en) * 1999-06-30 2002-09-03 B-Hub, Inc. Techniques for maintaining fault tolerance for software programs in a clustered computer system
US6760763B2 (en) * 1999-08-27 2004-07-06 International Business Machines Corporation Server site restructuring
US6366907B1 (en) * 1999-12-15 2002-04-02 Napster, Inc. Real-time search engine
US6606643B1 (en) * 2000-01-04 2003-08-12 International Business Machines Corporation Method of automatically selecting a mirror server for web-based client-host interaction
US6952737B1 (en) * 2000-03-03 2005-10-04 Intel Corporation Method and apparatus for accessing remote storage in a distributed storage cluster architecture
US6789076B1 (en) * 2000-05-11 2004-09-07 International Business Machines Corp. System, method and program for augmenting information retrieval in a client/server network using client-side searching
KR100390853B1 (en) * 2000-06-07 2003-07-10 차상균 A Logging Method and System for Highly Parallel Recovery Operation in Main-Memory Transaction Processing Systems
US6988124B2 (en) * 2001-06-06 2006-01-17 Microsoft Corporation Locating potentially identical objects across multiple computers based on stochastic partitioning of workload

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070165865A1 (en) * 2003-05-16 2007-07-19 Jarmo Talvitie Method and system for encryption and storage of information
US9424501B2 (en) 2004-09-07 2016-08-23 Canon Kabushiki Kaisha Information processing device capable of outputting print data to print device, and control method thereof
US8760698B2 (en) * 2004-09-07 2014-06-24 Canon Kabushiki Kaisha Information processing device capable of outputting print data to print data device, and control method thereof
US20120176638A1 (en) * 2004-09-07 2012-07-12 Canon Kabushiki Kaisha Information processing device capable of outputing print data to print data device, and control method thereof
US8145686B2 (en) * 2005-05-06 2012-03-27 Microsoft Corporation Maintenance of link level consistency between database and file system
US20060253502A1 (en) * 2005-05-06 2006-11-09 Microsoft Corporation Maintenance of link level consistency between database and file system
US20070118559A1 (en) * 2005-11-18 2007-05-24 Microsoft Corporation File system filters and transactions
US20110113021A1 (en) * 2005-11-18 2011-05-12 Microsoft Corporation File system filters and transactions
US8078639B2 (en) 2005-11-18 2011-12-13 Microsoft Corporation File system filters and transactions
WO2010048027A3 (en) * 2008-10-24 2010-06-17 Microsoft Corporation Atomic multiple modification of data in a distributed storage system
CN102197365A (en) * 2008-10-24 2011-09-21 微软公司 Atomic multiple modification of data in a distributed storage system
US8255373B2 (en) 2008-10-24 2012-08-28 Microsoft Corporation Atomic multiple modification of data in a distributed storage system
US9996572B2 (en) 2008-10-24 2018-06-12 Microsoft Technology Licensing, Llc Partition management in a partitioned, scalable, and available structured storage
US20100106934A1 (en) * 2008-10-24 2010-04-29 Microsoft Corporation Partition management in a partitioned, scalable, and available structured storage
US20100235878A1 (en) * 2009-03-13 2010-09-16 Creative Technology Ltd. Method and system for file distribution
US20120226714A1 (en) * 2011-03-02 2012-09-06 Cleversafe, Inc. Selecting a directory of a dispersed storage network
US9658911B2 (en) * 2011-03-02 2017-05-23 International Business Machines Corporation Selecting a directory of a dispersed storage network
US10635997B1 (en) * 2012-06-15 2020-04-28 Amazon Technologies, Inc. Finite life instances
US20170195333A1 (en) * 2012-10-05 2017-07-06 Gary Robin Maze Document management systems and methods
US10536459B2 (en) * 2012-10-05 2020-01-14 Kptools, Inc. Document management systems and methods
US9626404B2 (en) 2012-12-13 2017-04-18 Microsoft Technology Licensing, Llc Distributed SQL query processing using key-value storage system
US9268834B2 (en) 2012-12-13 2016-02-23 Microsoft Technology Licensing, Llc Distributed SQL query processing using key-value storage system
US9792344B2 (en) 2013-03-01 2017-10-17 Datadirect Networks, Inc. Asynchronous namespace maintenance
US9020893B2 (en) * 2013-03-01 2015-04-28 Datadirect Networks, Inc. Asynchronous namespace maintenance
US20140250073A1 (en) * 2013-03-01 2014-09-04 Datadirect Networks, Inc. Asynchronous namespace maintenance
US20160371353A1 (en) * 2013-06-28 2016-12-22 Qatar Foundation A method and system for processing data
CN112532700A (en) * 2020-11-17 2021-03-19 华帝股份有限公司 Data transmission method and related equipment
CN114741693A (en) * 2021-11-18 2022-07-12 北京珞安科技有限责任公司 Operation and maintenance system and method for safety protection
CN115292420A (en) * 2022-10-10 2022-11-04 天津南大通用数据技术股份有限公司 Method and device for rapidly loading data in distributed database
CN116149575A (en) * 2023-04-20 2023-05-23 北京大学 Server-oriented non-perception computing disk redundant array writing method and system

Also Published As

Publication number Publication date
US7886016B1 (en) 2011-02-08

Similar Documents

Publication Publication Date Title
US20030041097A1 (en) Distributed transactional network storage system
US8122284B2 (en) N+1 failover and resynchronization of data storage appliances
US7209973B2 (en) Distributed network data storage system and method
EP1782289B1 (en) Metadata management for fixed content distributed data storage
US8935211B2 (en) Metadata management for fixed content distributed data storage
Pu et al. Regeneration of replicated objects: A technique and its Eden implementation
US6658589B1 (en) System and method for backup a parallel server data storage system
US9785691B2 (en) Method and apparatus for sequencing transactions globally in a distributed database cluster
US7392425B1 (en) Mirror split brain avoidance
US8856091B2 (en) Method and apparatus for sequencing transactions globally in distributed database cluster
US8255364B2 (en) System for emulating a virtual boundary of a file system for data management at a fileset granularity
US7778984B2 (en) System and method for a distributed object store
JP3864244B2 (en) System for transferring related data objects in a distributed data storage environment
CN101460930B (en) Maintenance of link level consistency between database and file system
US7778970B1 (en) Method and system for managing independent object evolution
US20040148306A1 (en) Hash file system and method for use in a commonality factoring system
US7054887B2 (en) Method and system for object replication in a content management system
JP2005502096A (en) File switch and exchange file system
WO2012039988A2 (en) System and method for managing integrity in a distributed database
WO2013147782A1 (en) Cluster-wide unique id for object access control lists
US20120323869A1 (en) File State Subset Satellites to Provide Block-Based Version Control
US20050097105A1 (en) Distributed database for one search key
AU2011265370B2 (en) Metadata management for fixed content distributed data storage
Ault et al. Oracle9i RAC: Oracle real application clusters configuration and internals
CN112380182A (en) Distributed storage method based on disk management mode

Legal Events

Date Code Title Description
AS Assignment

Owner name: SWSOFT HOLDINGS LTD., VIRGINIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TORMASOV, ALEXANDER;REEL/FRAME:014433/0705

Effective date: 20030720

AS Assignment

Owner name: SWSOFT HOLDINGS LTD., VIRGINIA

Free format text: CONFIRMATORY ASSIGNMENT;ASSIGNOR:SWSOFT HOLDINGS, INC.;REEL/FRAME:014433/0820

Effective date: 20030720

STCB Information on status: application discontinuation

Free format text: EXPRESSLY ABANDONED -- DURING EXAMINATION

AS Assignment

Owner name: PARALLELS HOLDINGS, LTD., WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SWSOFT HOLDINGS, LTD.;REEL/FRAME:027467/0345

Effective date: 20111230

AS Assignment

Owner name: PARALLELS IP HOLDINGS GMBH, SWITZERLAND

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:PARALLELS HOLDINGS, LTD.;REEL/FRAME:027595/0187

Effective date: 20120125