WO2012074354A1 - System architecture with cluster file for virtualization hosting environment - Google Patents

System architecture with cluster file for virtualization hosting environment

Info

Publication number
WO2012074354A1
Authority
WO
WIPO (PCT)
Prior art keywords
node
nodes
virtualization
cluster
server
Prior art date
Application number
PCT/MY2011/000092
Other languages
French (fr)
Inventor
Jing Yuan Luke
Original Assignee
Mimos Berhad
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mimos Berhad filed Critical Mimos Berhad
Publication of WO2012074354A1 publication Critical patent/WO2012074354A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources

Abstract

A system architecture with cluster file for virtualization hosting environment comprising: a Virtualization Hosting Environment (Grid 4.1/IaaS 1.x) (20) communicating with shared Cluster File System Environment (16), a High Bandwidth connecting the Metadata Server (17), cluster of I/O nodes (14), Virtualization Hosts (19) and Virtualization Host with I/O Nodes (12), a Metadata Server (17) communicating with an I/O Node (14) component, a cluster of I/O Nodes (14) communicating with a Client Component and the Metadata Server (17), a Virtualization Host communicating with the I/O Nodes (12), at least two private networks connecting Metadata Servers (17) to the system and connecting cluster of I/O Nodes (14) and an Inter Private Network connecting the private networks.

Description

SYSTEM ARCHITECTURE WITH CLUSTER FILE FOR VIRTUALIZATION
HOSTING ENVIRONMENT
TECHNICAL FIELD
The present invention relates to cloud computing and more particularly to a system architecture with a cluster file system for a virtualization hosting environment.
BACKGROUND ART
Storage virtualization for hosting has received considerable attention in the industry. It offers the ability to isolate a host from changes in the physical placement of storage, resulting in a substantial reduction in support effort and end-user impact. Traditionally, a Storage Virtualization Layer (SVL) refers to a level of abstraction, implemented in software, that servers use to divide available physical storage into virtual disks or volumes. Virtual volumes are used by the Operating System (OS) as if they were physical disks; in fact, it is generally impossible for an operating system to perceive them as anything other than real disks. The Storage Virtualization Layer redirects or maps I/O requests made against virtual disks to blocks in real storage. This redirection means that changes in the physical location of storage blocks (to service access patterns, performance requirements, growth requirements or failure recovery) can be accommodated by a simple update of the virtual-to-real mappings. A virtual volume can be created, expanded, deleted, moved and selectively presented independently of the storage subsystems on which it resides. Furthermore, a virtual volume may include storage space in different storage subsystems, each with different characteristics. Virtualization architectures play a key role in solving centralization problems, enabling important functions such as storage sharing, data sharing, performance optimization, storage on demand and data protection.
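To make the mapping idea concrete, a minimal sketch follows. It is an illustration only; the PhysicalDevice interface and all names in it are assumptions rather than anything defined by this document, and Python is used purely for exposition.

    # A minimal sketch of a Storage Virtualization Layer's virtual-to-real
    # block map. All class and method names here are illustrative assumptions.

    class PhysicalDevice:
        def __init__(self, blocks):
            self.blocks = blocks  # physical block number -> bytes

        def read_block(self, pblock):
            return self.blocks[pblock]

    class StorageVirtualizationLayer:
        def __init__(self):
            # (volume id, virtual block) -> (physical device, physical block)
            self.mapping = {}

        def map_block(self, volume_id, vblock, device, pblock):
            self.mapping[(volume_id, vblock)] = (device, pblock)

        def read(self, volume_id, vblock):
            # Redirect an I/O request against a virtual disk to real storage.
            device, pblock = self.mapping[(volume_id, vblock)]
            return device.read_block(pblock)

        def relocate(self, volume_id, vblock, new_device, new_pblock):
            # Physically moving a block only requires updating the mapping;
            # the OS keeps addressing the same virtual block.
            self.mapping[(volume_id, vblock)] = (new_device, new_pblock)

That relocation touches only the map, not the consumer of the volume, is exactly why physical placement changes do not disturb the host.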
For the current infrastructure supporting the Grid 4.0 (hereafter referred to as IaaS)/cloud computing project, ensuring that all virtualization hosts (servers that host the virtual machines) or cloud nodes share the same storage becomes a bottleneck in realizing the migration feature of the IaaS. Migration is a key feature for the high availability and auto-scaling features of the IaaS. All virtual servers' images are stored in a large shared storage, and all hosts must be able to see this large shared storage as if it were part of their local storage. This large shared storage should have some form of redundancy, for example using replication techniques or a mechanism similar to RAID (redundant array of inexpensive disks). The large shared storage should be scalable but simple to implement, without the complexity of SAN (Storage Area Network) or NAS (Network Attached Storage) technologies. If necessary, data (virtual servers' images, etc.) stored in this large shared storage can be encrypted or decrypted. As observed, the local storage/disks installed in hosts are getting larger but are sometimes underutilized. While many hosts with large individual storage in the IaaS infrastructure collectively provide a huge storage capacity (if each host has 500 GB and there are 100 hosts available, theoretically there will be a total of 50 TB), these virtualization hosts cannot use any of this additional storage that is remote to them. Several patents attempt to address these problems, for example US2009307702 (A1). That patent disclosed a system and method for discovering and protecting allocated resources in a shared virtualized I/O device, wherein a system processor allocates the plurality of available hardware resources to one or more functions and populates each entry of the resource discovery table for each function; the processing units execute one or more processes. In another example, US2006070069 (A1) presented a system and method for sharing resources between real-time and virtualizing operating systems. A computer uses effective address mapping of support processors' local memory to share resources between separate operating systems. When threads are created for either operating system, the thread's corresponding processor memory is mapped into an effective address space. In doing so, the processor's local memory is accessible by the thread, regardless of whether the processor is running, or whether the processor is executing a different thread from a different operating system. However, these inventions did not disclose utilization of a cluster file system, particularly for an infrastructure supporting Grid 4.0/IaaS.
Therefore, there exists a need for a system architecture with a cluster file system for a virtualization hosting environment that is able to ensure the migration feature of the Grid 4.0 (IaaS) with high availability and auto-scaling features.
The present invention introduces a solution to the problem statements mentioned earlier. The present invention provides a storage environment that serves as centralized storage for a virtualization hosting environment, utilizing additional storage available at each virtualization host as well as dedicated hosts, to provide a distributed yet shared and centralized storage system hosting all the virtualization images; a cluster of I/O nodes providing a single, unified view of a shared high-performance storage pool; and a system hook on the virtualization host to mount the storage pool. Furthermore, the data can be striped and stored in this storage using a mathematical formula similar to those in RAID controllers; a Metadata Server keeps track of the location of the striped data as well as the checksums; a high-availability Metadata Server setup provides lookup capability to the storage pool; and a High Bandwidth Interconnect system connects all virtualization hosts to the shared clustered storage system.
DISCLOSURE OF THE INVENTION
The present invention aims to provide cloud computing and more particularly a system architecture with a cluster file system for a virtualization hosting environment. In a preferred embodiment of the present invention, a system architecture with a cluster file system for a virtualization hosting environment comprises a Virtualization Hosting Environment (Grid 4.1/IaaS 1.x) communicating with a shared Cluster File System Environment; a High Bandwidth connecting the Metadata Server, the cluster of I/O nodes, the Virtualization Hosts and the Virtualization Hosts with I/O Nodes; a Metadata Server communicating with an I/O Node component; a cluster of I/O Nodes communicating with a Client Component and the Metadata Server; a Virtualization Host communicating with the I/O Nodes; at least two private networks connecting Metadata Servers to the system and connecting the cluster of I/O Nodes; and an Inter Private Network connecting the private networks. In another preferred embodiment of the present invention, the High Bandwidth is a High Bandwidth Storage Interconnect.
In another preferred embodiment of the present invention, the Metadata Server further includes a Metadata Server Component to communicate with the I/O Node component, maintaining and updating the location of data stored in the shared distributed file system, replicating information among Metadata Servers for redundancy, monitoring health signals from both Metadata Servers and I/O Nodes, and generating, maintaining and updating the necessary keys and checksums if the I/O Node Component requires such information for encryption and decryption.
In another preferred embodiment of the present invention, the cluster of I/O Nodes further includes an I/O Node component that serves to communicate with the Client Component, the Metadata Server Component and other I/O Node Components.
In another preferred embodiment of the present invention, the Virtualization Host further includes a Client Component that communicates with the I/O Node Component, a system hook to mount the shared distributed file system natively as a local file system, and an interface to users for data or file related processes.
In another preferred embodiment of the present invention, the private networks comprise Heartbeat Network 1 and Heartbeat Network 2.
In another preferred embodiment of the present invention, said inter private network comprises an Inter Heartbeat Network. In another preferred embodiment of the present invention, a method of determining the setup process based on the role of a server or node is provided. In another preferred embodiment of the present invention, if a server is determined to be a Metadata Server, the method includes the steps of installing the Metadata Server components, creating a Cluster Configuration File, connecting the server to the High Bandwidth Storage Interconnect and Heartbeat Network 1, connecting Heartbeat Network 1 to Heartbeat Network 2 via the Inter Heartbeat Network and starting the Metadata Server Service.
In another preferred embodiment of the present invention, if the server is determined to be an I/O Node, the steps include installing the I/O Node Components, creating a Cluster Configuration File, connecting the server to the High Bandwidth Storage Interconnect and Heartbeat Network 1, connecting Heartbeat Network 1 to Heartbeat Network 2 via the Inter Heartbeat Network and starting the I/O Node Service.
In another preferred embodiment of the present invention, if the server is determined to be both an I/O Node and a Virtualization Host, the steps include installing the I/O Node and Client Components, creating a Cluster Configuration File, connecting the I/O Nodes and Virtualization Hosts to the High Bandwidth Storage Interconnect and Heartbeat Network 2, starting the I/O Node and Client Services and mounting the Cluster File System.
In another preferred embodiment of the present invention, if the server is determined to be a Virtualization Host, the steps include installing the Client Components, creating a Cluster Configuration File, connecting the Virtualization Hosts to the High Bandwidth Storage Interconnect, starting the Client Service and mounting the Cluster File System.
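The four role-dependent flows above (and in FIG. 2) can be pictured with a small dispatch sketch. The Server class and its method names below are hypothetical placeholders used only to show the decision logic, not components defined by the invention.

    # Hedged sketch of the role-based setup flow; Server and its methods are
    # placeholders that stand in for the actual installation steps.

    class Server:
        def install(self, *components): print("install:", ", ".join(components))
        def create_cluster_config(self): print("create: Cluster Configuration File")
        def connect(self, *networks): print("connect:", ", ".join(networks))
        def start(self, *services): print("start:", ", ".join(services))
        def mount_cluster_fs(self): print("mount: Cluster File System")

    def setup(server, role):
        if role == "metadata_server":
            server.install("Metadata Server Components")
            server.create_cluster_config()
            server.connect("High Bandwidth Storage Interconnect", "Heartbeat Network 1")
            server.connect("Heartbeat Network 1 to Heartbeat Network 2 via Inter Heartbeat Network")
            server.start("Metadata Server Service")
        elif role == "io_node":
            server.install("I/O Node Components")
            server.create_cluster_config()
            server.connect("High Bandwidth Storage Interconnect", "Heartbeat Network 1")
            server.connect("Heartbeat Network 1 to Heartbeat Network 2 via Inter Heartbeat Network")
            server.start("I/O Node Service")
        elif role == "io_node_and_virtualization_host":
            server.install("I/O Node Components", "Client Components")
            server.create_cluster_config()
            server.connect("High Bandwidth Storage Interconnect", "Heartbeat Network 2")
            server.start("I/O Node Service", "Client Service")
            server.mount_cluster_fs()
        elif role == "virtualization_host":
            server.install("Client Components")
            server.create_cluster_config()
            server.connect("High Bandwidth Storage Interconnect")
            server.start("Client Service")
            server.mount_cluster_fs()
        else:
            raise ValueError("unknown role: " + role)

    setup(Server(), "virtualization_host")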
The present invention consists of features and a combination of parts hereinafter fully described and illustrated in the accompanying drawings, it being understood that various changes in the details may be made without departing from the scope of the invention or sacrificing any of the advantages of the present invention.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
To further clarify various aspects of some embodiments of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail through the accompanying drawings in which:
FIG. 1 illustrates the system architecture with cluster file system for virtualization hosting environment (100).
FIG. 2 illustrates the system setup process flow (200).
DETAILED DESCRIPTION OF THE ACCOMPANYING DRAWINGS
According to FIG. 1, the figure illustrates the system architecture with a cluster file system for a virtualization hosting environment (100). The system architecture (100) comprises a convergence between the shared cluster file system environment (16) and the virtualization hosting environment (Grid 4.1/IaaS 1.x) (20), which operates through the internet (10). A distributed yet shared and centralized storage system hosts all the virtualized images in the architecture (100). A cluster of I/O nodes (14) provides a single, unified view of a shared high-performance storage pool, and a system hook on the virtualization host (19) mounts the storage pool in the cluster of I/O nodes (14) as if it were a native file system to the host (19). Data can be striped and stored in this storage pool using a mathematical formula similar to those used in RAID (redundant array of independent disks) controllers. A Metadata Server (17) keeps track of the location of the striped data as well as the checksums. In an embodiment of the invention, a high-availability Metadata Server (17) setup provides lookup capability to the storage pool. In another embodiment of the invention, a High Bandwidth Interconnect (18) system connects all the virtualization hosts (19) to the shared clustered storage system. The virtualization hosts are connected to a plurality of virtual servers (11) in the virtualization hosting environment (20).
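One common "RAID-like formula" is XOR parity across a stripe, as in RAID 4/5. The document does not specify its formula beyond the RAID analogy, so the sketch below should be read as one plausible instance, not the invention's actual scheme.

    # XOR parity across one stripe of equal-length chunks (RAID 4/5 style);
    # an assumed example of a formula "similar to those in RAID controllers".
    from functools import reduce

    def xor_parity(chunks):
        """Parity chunk for a stripe; any one lost chunk is recoverable."""
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

    def rebuild(surviving_chunks, parity):
        """Recover the single missing chunk from survivors plus parity."""
        return xor_parity(surviving_chunks + [parity])

    chunks = [b"\x01\x02", b"\x0f\x00", b"\x10\x20"]  # chunks on three I/O nodes
    parity = xor_parity(chunks)                        # stored on a fourth node
    assert rebuild([chunks[0], chunks[2]], parity) == chunks[1]

In such a scheme, the Metadata Server's role is to remember which node holds which chunk and which holds the parity, along with the checksums.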
A Metadata Server (17) Component communicates with the I/O Node (14) Component via the High Bandwidth Storage Interconnect (18). It maintains and updates information on the location of data stored in the shared distributed file system, and replicates that information among the Metadata Servers (17) for redundancy. The Metadata Server (17) Component also monitors health signals from both the Metadata Servers (17) and the I/O Nodes (14) via Heartbeat Network 1 (15) and the Inter Heartbeat Network (21). It further generates, maintains and updates the necessary keys and checksums if the I/O Node (14) Component requires such information for encryption and decryption. The system architecture (100) further comprises a cluster of I/O Nodes (14) containing an I/O Node (14) Component that communicates with the Client Component (not shown), the Metadata Server (17) Component and other I/O Node (14) Components via the High Bandwidth Storage Interconnect (18). Upon receiving a request from a user, the Client Component (not shown) decides whether encryption is required and, if so, gets a key from the Metadata Server (17) to encrypt the data and create a checksum. It then stripes or replicates the incoming data to itself and other I/O Nodes (14) and updates the Metadata Server (17) with the location of the striped or replicated data together with the checksum. On reads, the cluster of I/O Nodes (14) gets the key and checksum from the Metadata Server (17) and decrypts if decryption is required. It locates the required data, retrieves the striped or replicated data, reconstructs it and sends it back to the user via the Client Component (not shown). The I/O Nodes (14) also monitor health signals from other I/O Nodes (14) via Heartbeat Network 2 (13).
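The write and read paths just described can be sketched end to end. Everything below — the in-memory MetadataServer and IONode classes, the XOR stand-in for a real cipher, round-robin striping and SHA-256 checksums — is an assumed toy model of those interactions, not an interface the document defines.

    import hashlib
    from itertools import cycle

    def xor_cipher(data, key):
        # Toy symmetric stand-in for a real cipher (same call decrypts).
        return bytes(b ^ k for b, k in zip(data, cycle(key)))

    class MetadataServer:
        """In-memory stand-in for the Metadata Server (17)."""
        def __init__(self):
            self.keys, self.files = {}, {}
        def get_key(self, path):
            return self.keys.setdefault(path, b"demo-key")
        def register(self, path, records, encrypted):
            self.files[path] = (records, encrypted)   # locations + checksums
        def lookup(self, path):
            return self.files[path]

    class IONode:
        """In-memory stand-in for an I/O Node (14)."""
        def __init__(self, node_id):
            self.node_id, self.blocks = node_id, {}
        def store(self, path, i, block):
            self.blocks[(path, i)] = block
        def fetch(self, path, i):
            return self.blocks[(path, i)]

    def client_write(path, data, ms, io_nodes, chunk=4096, encrypt=False):
        if encrypt:
            data = xor_cipher(data, ms.get_key(path))  # key from Metadata Server
        records = []
        for i in range(-(-len(data) // chunk)):        # ceiling division
            block = data[i * chunk:(i + 1) * chunk]
            node = io_nodes[i % len(io_nodes)]         # stripe round-robin
            node.store(path, i, block)
            records.append((i, node.node_id, hashlib.sha256(block).hexdigest()))
        ms.register(path, records, encrypted=encrypt)

    def client_read(path, ms, io_nodes):
        records, encrypted = ms.lookup(path)
        data = b""
        for i, node_id, checksum in records:
            block = io_nodes[node_id].fetch(path, i)
            assert hashlib.sha256(block).hexdigest() == checksum  # integrity
            data += block
        return xor_cipher(data, ms.get_key(path)) if encrypted else data

    ms, nodes = MetadataServer(), [IONode(i) for i in range(3)]
    client_write("vm1.img", b"A" * 9000, ms, nodes, encrypt=True)
    assert client_read("vm1.img", ms, nodes) == b"A" * 9000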
A virtualization host (19) contains a Client Component (not shown) that communicates with the I/O Nodes (14). Virtualization Hosts and I/O Nodes (12) communicate via the High Bandwidth Storage Interconnect (18), and a System Hook (not shown) is provided to mount the shared distributed file system natively as a local file system. It also provides an interface to the users for data or file related processes.
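In practice such a System Hook would be a kernel driver or FUSE-style binding; the sketch below shows only the delegation idea — local-looking file operations forwarded to the Client Component — and every name in it is assumed.

    # Delegation idea behind the System Hook: operations on the mounted path
    # are translated into Client Component requests. A real implementation
    # would sit behind a kernel or FUSE interface.

    class SystemHook:
        def __init__(self, client, mount_point="/mnt/clusterfs"):
            self.client = client                  # Client Component on this host
            self.mount_point = mount_point

        def _relative(self, path):
            return path[len(self.mount_point):].lstrip("/")

        def write(self, path, data):
            # Looks like a local write; actually striped across the I/O Nodes.
            self.client.write(self._relative(path), data)

        def read(self, path):
            return self.client.read(self._relative(path))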
The main component in this architecture is the cluster of I/O nodes (14). Each I/O node (14) dedicates a portion of its storage to the storage pool formed together with the other I/O nodes (14). When running an I/O Node (14) Component, data can be striped and stored in the pool as if in a regular RAID (Redundant Array of Independent Disks) storage system. Alternatively, data can be replicated instead of striped. Information on how the data are striped or replicated, and their locations, is stored in the Metadata Server(s) (17). If required, the I/O Node (14) Component provides additional data encryption and decryption services, where the checksums and keys are stored in the Metadata Server(s) (17). All the I/O Nodes (14) are connected through the High Bandwidth Storage Interconnect (18) to the Virtualization Hosts (19) and Metadata Servers (17). In addition, a heartbeat network (13) connects these I/O Nodes (14) together to determine the health of the I/O Nodes (14).
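For the replication alternative, some rule must choose k distinct I/O nodes per block; the round-robin rule below is an assumed example of such a placement, not a rule stated in this document.

    def replica_nodes(block_id, n_nodes, k=2):
        """Assumed placement rule: k consecutive nodes, starting from
        block_id mod n_nodes, hold the replicas of a block."""
        if k > n_nodes:
            raise ValueError("cannot place more replicas than nodes")
        first = block_id % n_nodes
        return [(first + r) % n_nodes for r in range(k)]

    # Block 7 in a 5-node pool with 2 replicas lands on nodes 2 and 3.
    assert replica_nodes(7, n_nodes=5, k=2) == [2, 3]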
Once a Cluster File System Environment (16) is prepared, the Virtualization Hosting Environment (20) can proceed with the setup. Each virtualization host (19) (i.e. a physical server) is then connected to the High Bandwidth Storage Interconnect (18), which links the Virtualization Hosts (19) to the I/O Nodes (14) and the Metadata Servers (17). Through the use of a Client Component, the virtualization hosts (19) see and/or mount the distributed shared storage as if it were a single native storage in the system. In addition, a virtualization host (19) can also act as an I/O Node (14). When a data/file is written to or read from the shared clustered file system (16), the Client Component (not shown) sends a request to the I/O Node (14) nearest to it for further processing.
A Metadata Server (17) in the Cluster File System Environment (16) then maintains and tracks how the files created by these hosts are striped or replicated and stored, as well as their respective locations. Multiple Metadata Servers (17) can be set up to provide resiliency and high availability by constantly replicating metadata over a different heartbeat network, which is also used to monitor each other's health. The Metadata Servers (17) may also secure the Cluster File System (16) by providing and storing checksums and keys. An Inter Heartbeat Network (21) connects the heartbeat network (15) of the Metadata Servers (17) to the I/O Nodes (14) such that the Metadata Servers (17) know the health of the I/O Nodes (14).
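The health monitoring over the heartbeat networks can be pictured as a simple timeout scheme. The sketch below assumes each peer periodically reports in and is declared unhealthy after a silence longer than the timeout; this is a common heartbeat design rather than anything the document specifies.

    import time

    class HeartbeatMonitor:
        """Timeout-based peer health tracking, as a Metadata Server might
        apply to peer Metadata Servers (via Heartbeat Network 1) and to
        I/O Nodes (via the Inter Heartbeat Network)."""

        def __init__(self, timeout_s=5.0):   # timeout value is an assumption
            self.timeout_s = timeout_s
            self.last_seen = {}              # node id -> monotonic timestamp

        def beat(self, node_id):
            self.last_seen[node_id] = time.monotonic()

        def healthy(self, node_id):
            seen = self.last_seen.get(node_id)
            return seen is not None and time.monotonic() - seen < self.timeout_s

    monitor = HeartbeatMonitor()
    monitor.beat("io-node-3")
    assert monitor.healthy("io-node-3")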
Now referring to FIG. 2, the process commences (110) with determining the role of a server (111). If the server is determined to be a metadata server (112), the metadata server components are installed (113). The process then proceeds with creating a cluster configuration file (114) and connecting the server to the High Bandwidth Storage Interconnect and Heartbeat Network 1 (115). Thereafter, Heartbeat Network 1 is connected to Heartbeat Network 2 via the Inter Heartbeat Network (116) and the Metadata Server service is started (117).
If the server is determined to be an I/O Node (118), the system installs the I/O Node components (119) and creates a cluster configuration file (120). The system also connects the I/O Node to the High Bandwidth Interconnect and Heartbeat Network 2 (121) and starts the I/O Node service (122). If the server is determined to be both an I/O Node and a Virtualization Host (123), the system installs the I/O Node and Client Components (124) and creates a Cluster Configuration File (125). Thereafter, the system connects the I/O and Virtualization Nodes to the High Bandwidth Storage Interconnect and Heartbeat Network 2 (126). The I/O Node and Client Services are started (127) and the Cluster File System is mounted (128).
If the server is determined to be a virtualization host (129), the system installs the Client Component (130) and creates a Cluster Configuration File (131). The Virtualization Host is then connected to the High Bandwidth Storage Interconnect (133), the Client Service is started, and the Cluster File System is mounted (134). In the foregoing specification, specific embodiments of the present invention have been described. However, one of ordinary skill in the art will appreciate that various modifications and changes can be made without departing from the scope of the present invention as set forth in the various embodiments discussed above and the claims that follow. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present invention. The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements as described herein.

Claims

CLAIMS:
1. A system architecture with cluster file for virtualization hosting environment comprising:
a Virtualization Hosting Environment (Grid 4.1/IaaS 1.x) (20) communicating with shared Cluster File System Environment (16);
a High Bandwidth connecting the Metadata Server (17), cluster of I/O nodes (14), Virtualization Hosts (19) and Virtualization Host with I/O Nodes (12);
a Metadata Server (17) communicating with an I/O Node (14) component;
a cluster of I/O Nodes (14) communicating with a Client Component and the Metadata Server (17);
a Virtualization Host communicating with the I/O Nodes (12);
at least two private networks connecting Metadata Servers (17) to the system and connecting cluster of I/O Nodes (14); and
an Inter Private Network connecting the private networks.
2. The system according to claim 1 wherein the High Bandwidth is a High Bandwidth Storage Interconnect (18).
3. The system according to claim 1 wherein the Metadata Server (17) further includes a Metadata Server (17) Component to communicate with the I/O Node (14) component, maintaining and updating on data location being stored in the shared distributed file system, replicating information among Metadata Servers (17) for redundancy, monitoring health signals from both Metadata Servers (17) and I/O Nodes (14) and generating, maintaining and updating necessary keys and checksum if I/O Node (14) Component requires such information for encryption and decryption.
4. The system according to claim 1 wherein the cluster of I/O Nodes (14) further includes I/O Node (14) component that serves to communicate with Client Component,
Metadata Server (17) Component and other I/O Node (14) Component.
5. The system according to claim 1 wherein the Virtualization Host (19) further includes Client Component that communicate with I/O Node (14) Component, system hook to mount shared distributed file system as a local file system natively and providing interface to users for data or file related processes.
6. The system according to claim 1 wherein a plurality of Nodes acts as both I/O Nodes (14) and Virtualization Hosts (12) running both I/O Node (14) Components and Client Components.
7. The system according to claim 1 wherein the private networks comprises of Heartbeat Network 1 (15) and Heartbeat Network 2 (13).
8. The system according to claim 1 wherein inter private network comprises of Inter Heartbeat Network (21).
9. The system according to claim 1 further includes the method of determining setup process based on role of a server or node.
10. The method according to claim 9 wherein if a server is determined as Metadata Server (17), the steps include:
installing the Metadata Server (17) components;
creating Cluster Configuration File;
connecting the server to High Bandwidth Storage Interconnect (18) and Heartbeat Network 1 (15); connecting Heartbeat Network 1 (15) to Heartbeat Network 2 (13) via Inter Heartbeat Network (21); and
starting Metadata Server (17) Service.
11. The method according to claim 9 wherein, if the server is determined as I/O Node, the steps include:
installing I/O Node (14) Components;
creating Cluster Configuration File;
connecting the server to the High Bandwidth Storage Interconnect (18) and Heartbeat Network 1 (15);
connecting Heartbeat Network 1 (15) to Heartbeat Network 2 (13) via Inter Heartbeat
Network (21); and
starting I/O Node (14) Service.
12. The method according to claim 9 wherein, if the server is determined as both I/O Node and Virtualization Host (12), the steps include:
installing I/O Node (14) and Client Components;
creating Cluster Configuration File;
connecting I/O Nodes (14) and Virtualization Hosts (19) to the High Bandwidth Storage Interconnect (18) and Heartbeat Network 2 (13); starting I/O Node (14) and Client Service; and
mounting Cluster File System.
13. The method according to claim 9 wherein, if the server is determined as a Virtualization Host (19), the steps include
installing Client Components;
creating Cluster Configuration File;
connecting Virtualization Hosts (19) to High Bandwidth Storage Interconnect (18);
starting Client Service; and
mounting Cluster File System.
PCT/MY2011/000092 2010-12-02 2011-06-14 System architecture with cluster file for virtualization hosting environment WO2012074354A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
MYPI2010005759A MY177055A (en) 2010-12-02 2010-12-02 System architecture with cluster file for virtualization hosting environment
MYPI2010005759 2010-12-02

Publications (1)

Publication Number Publication Date
WO2012074354A1 true WO2012074354A1 (en) 2012-06-07

Family

ID=46172115

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/MY2011/000092 WO2012074354A1 (en) 2010-12-02 2011-06-14 System architecture with cluster file for virtualization hosting environment

Country Status (2)

Country Link
MY (1) MY177055A (en)
WO (1) WO2012074354A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003010678A1 (en) * 2001-07-23 2003-02-06 Network Appliance, Inc. High-availability cluster virtual server system
US7024427B2 (en) * 2001-12-19 2006-04-04 Emc Corporation Virtual file system
US7739541B1 (en) * 2003-07-25 2010-06-15 Symantec Operating Corporation System and method for resolving cluster partitions in out-of-band storage virtualization environments
US20050031980A1 (en) * 2003-08-07 2005-02-10 Ryohta Inoue Toner, method for manufacturing the toner, developer including the toner, toner container containing the toner, and image forming method, image forming apparatus and process cartridge using the toner
US20050120160A1 (en) * 2003-08-20 2005-06-02 Jerry Plouffe System and method for managing virtual servers

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111427861A (en) * 2020-02-28 2020-07-17 云知声智能科技股份有限公司 Distributed file system configuration method and device
CN111427861B (en) * 2020-02-28 2023-05-05 云知声智能科技股份有限公司 Distributed file system configuration method and device

Also Published As

Publication number Publication date
MY177055A (en) 2020-09-03

Similar Documents

Publication Publication Date Title
US11570249B2 (en) Redundant storage gateways
US11914736B2 (en) Encryption for a distributed filesystem
US20210336844A1 (en) Remote storage gateway management using gateway-initiated connections
US10536520B2 (en) Shadowing storage gateway
US9916321B2 (en) Methods and apparatus for controlling snapshot exports
EP2799973B1 (en) A method for layered storage of enterprise data
US9710294B2 (en) Methods and apparatus for providing hypervisor level data services for server virtualization
US9225697B2 (en) Storage gateway activation process
US10089009B2 (en) Method for layered storage of enterprise data
US11636223B2 (en) Data encryption for directly connected host
US11693581B2 (en) Authenticated stateless mount string for a distributed file system
US20230315695A1 (en) Byte-addressable journal hosted using block storage device
WO2012074354A1 (en) System architecture with cluster file for virtualization hosting environment
US11258877B2 (en) Methods for managing workloads in a storage system and devices thereof
US10768834B2 (en) Methods for managing group objects with different service level objectives for an application and devices thereof
US20230409215A1 (en) Graph-based storage management
US11831762B1 (en) Pre-generating secure channel credentials
US11294570B2 (en) Data compression for having one direct connection between host and port of storage system via internal fabric interface
US20200334115A1 (en) Methods for cache rewarming in a failover domain and devices thereof
WO2023244948A1 (en) Graph-based storage management

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11844129

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11844129

Country of ref document: EP

Kind code of ref document: A1